

Discussion (12 Comments)

edo_cat · 17 minutes ago
Ah epicycles. The history is always very misunderstood, as well as what people thought of them.

> Right now we seem stuck with Ptolemaic astronomy, scholastically adding epicycles upon epicycles, without making the leap to hit the inverse-square law.

This is a great analogy, but it just isn't what happened at all. There is no evidence medieval astronomers added epicycles. Copernicus added epicycles to his heliocentric model, and this was one reason his model was criticised: it was too complicated!

It’s still a good analogy, but in reality each planet required a hand-tuned equant, deferent, epicycle, and sometimes an epicyclet.

Also surely the great logical leap was Kepler’s elliptical orbits which broke free of the perfect circle constraint?
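The deferent-plus-epicycle construction being discussed is just two summed circular motions. A minimal sketch with illustrative radii and angular speeds (not historical parameters):

```python
import math

def epicycle_position(t, R=10.0, omega_def=1.0, r=2.0, omega_epi=8.0):
    """Planet position from a deferent circle of radius R plus one
    epicycle of radius r riding on it (angles in radians)."""
    x = R * math.cos(omega_def * t) + r * math.cos(omega_epi * t)
    y = R * math.sin(omega_def * t) + r * math.sin(omega_epi * t)
    return x, y

# At t=0 both circles are at angle zero, so the planet sits at R + r.
print(epicycle_position(0.0))  # (12.0, 0.0)
```

Adding more epicycles means adding more cosine/sine terms, which is why the scheme can fit almost any observed path: it is essentially a Fourier series in disguise.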

> Reason may be employed in two ways to establish a point: firstly, for the purpose of furnishing sufficient proof of some principle [...]. Reason is employed in another way, not as furnishing a sufficient proof of a principle, but as confirming an already established principle, by showing the congruity of its results, as in astronomy the theory of eccentrics and epicycles is considered as established, because thereby the sensible appearances of the heavenly movements can be explained; not, however, as if this proof were sufficient, forasmuch as some other theory might explain them.

Thomas Aquinas (dumbass Scholastic)

throwaway210426 · about 3 hours ago
Needs a “[November 2025]” title. It is already outdated
suddenlybananas · about 3 hours ago
Why?
throwaway210426 · about 3 hours ago
It was silly at the time but even sillier now (e.g. see the other comment on Erdos 1196).
OutOfHere · about 2 hours ago
The force-equation example is disturbing, but it's easy to prevent by disallowing random decimal numbers in the formula; such numbers also suggest over-fitting to the data. It is immediately obvious that they make the equation inelegant and therefore likely wrong. If you're going to use symbolic construction, be careful about which formulations you allow, and apply an appropriate penalty for complexity.
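The complexity-penalty idea can be sketched as a scoring rule over candidate formulas. This is a hypothetical illustration (the token scheme, penalty values, and `score` function are my own, not from the article): fit error plus a parsimony term that charges extra for oddly specific decimal constants.

```python
def complexity(expr_tokens, decimal_penalty=5):
    """Count tokens, charging extra for arbitrary decimal constants
    (a proxy for the 'random decimal numbers' objected to above)."""
    cost = 0
    for tok in expr_tokens:
        try:
            val = float(tok)
            # Integer-valued constants are cheap; decimals like 0.417
            # suggest over-fitting and cost more.
            cost += 1 if val == int(val) else decimal_penalty
        except ValueError:
            cost += 1  # operators and variables
    return cost

def score(mse, expr_tokens, lam=0.1):
    """Lower is better: data fit plus a parsimony penalty."""
    return mse + lam * complexity(expr_tokens)

# F = m*a beats an over-fitted variant even if the latter fits slightly better.
clean = ["m", "*", "a"]
fitted = ["1.03", "*", "m", "*", "a", "+", "0.417"]
print(score(0.02, clean), score(0.015, fitted))
```

Symbolic-regression tools use the same trick: the search minimizes a combined objective, so an elegant formula with a marginally worse fit still wins over a constant-laden one.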

As for chess, although an LLM knows the rules of chess, it is not expected to have been trained on many optimal chess games. As such, is it fair to gauge its skill at chess, especially without showing it generated images of its candidate moves? Even if representational and training limitations were addressed, we know that LLMs are architecturally crippled: they have no neural memory beyond their context. Imagine a next-gen LLM that, if presented with a chess puzzle, would first update its internal weights for playing optimal chess via a simulation of a billion games, and then return to address the puzzle you gave it. Even with the current architecture, it could equivalently create a fork of itself for the same purpose, in effect a newly trained model, but the rushing human's desire for an immediate answer gets in the way.

suddenlybananas · about 2 hours ago
>As for chess, although an LLM knows the rules of chess, it is not expected to have been trained on many optimal chess games

Well, it's read every book ever written on chess, so you would expect it to be at least halfway decent.

ogogmad · about 3 hours ago
The recent news of multiple solutions to Erdos problem 1196, produced by LLMs without any human help, makes any suggestion that LLMs have hit a wall in reasoning seem less credible. To give you an idea, problem 1196 had been worked on by different experts for years. Now suddenly, LLMs have come along and solved the problem in a multitude of ways. Perhaps LLMs will eventually stall, but this paradigm still has some juice left to squeeze.
DarkNova6 · about 3 hours ago
But are we talking pure LLMs, or existing AI solvers augmented with LLMs? Because while the latter is impressive, it doesn't say much outside of this specific domain.

If anything, I see greater verticality: specialized software with LLMs at its core, but with much supporting technology around them to really make the most of them.

ogogmad · about 1 hour ago
The announcement says:

> This was solved by GPT-5.4 Pro (prompted by Price)

See the discussion here: https://www.erdosproblems.com/forum/thread/1196

FrustratedMonky · about 2 hours ago
"are we talking pure LLMs, or existing AI solvers augmented with LLM"

Why do these distinctions matter?

Is it an LLM, or symbolic, or a combo, or a dozen technologies stitched together? Who cares. It is all automation. It is all artificial.

DarkNova6 · about 2 hours ago
True, it's an achievement either way. But if an "out of the box" LLM can solve difficult math problems, it is an achievement by the LLM vendor. Otherwise it is an achievement by the people doing the vertical integration.

In the context of evolving LLMs, this is the crucial distinction.

suddenlybananas · about 2 hours ago
The distinctions matter since computational proofs have been around for decades.