Discussion (11 Comments)
As for chess: although an LLM knows the rules, it is not expected to have been trained on many optimal chess games. Is it fair, then, to gauge its skill at chess, especially without showing it generated images of its candidate moves? Even if representational and training limitations were addressed, we know that LLMs are architecturally crippled in that they have no neural memory beyond their context. Imagine a next-gen LLM that, presented with a chess puzzle, would first update its internal weights for optimal play via a simulation of a billion games, and only then return to the puzzle you gave it. Even with the current architecture, it could equivalently create a fork of itself for the same purpose, a new trained model in effect, but the impatient human's desire for an immediate answer gets in the way.
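The "fork, self-train, then answer" loop the comment imagines can be sketched in miniature. Everything here is a hypothetical stand-in: `Model` is a toy whose "weights" are a single skill score, and `simulate_game`/`fine_tune` are placeholders, not any real training API.

```python
import copy
import random

class Model:
    """Toy stand-in for an LLM whose 'weights' are just a skill score."""
    def __init__(self, skill=0.0):
        self.skill = skill

    def answer(self, puzzle):
        return f"candidate move for {puzzle!r} at skill {self.skill:.3f}"

def simulate_game(model):
    # Pretend one self-play game yields a tiny skill gain.
    return random.uniform(0.0, 1e-4)

def fine_tune(model, n_games):
    # "Update internal weights" from n_games of simulated self-play.
    for _ in range(n_games):
        model.skill += simulate_game(model)
    return model

def answer_with_self_training(base_model, puzzle, n_games=10_000):
    # Fork the base model so its original weights stay frozen,
    # train the fork on simulated games, then answer the puzzle.
    fork = fine_tune(copy.deepcopy(base_model), n_games)
    return fork.answer(puzzle), fork

base = Model()
reply, fork = answer_with_self_training(base, "mate in two")
```

The point of the deep copy is the comment's "fork of itself": the base model's weights are untouched while the fork does all the simulated training, which is the only part current architectures could even approximate.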
Well, it's read every book ever written on chess, so you would expect it to be at least halfway decent.
If anything, I see greater verticality: specialized software with LLMs at its core, but with a lot of tooling around them to really get the most out of them.
> This was solved by GPT-5.4 Pro (prompted by Price)
See the discussion here: https://www.erdosproblems.com/forum/thread/1196
Why do these distinctions matter?
Is it an LLM, or symbolic, or a combo, or a dozen technologies stitched together? Who cares. It is all automation. It is all artificial.
In the context of evolving LLMs, this is the crucial distinction.