Discussion (212 Comments) on Hacker News
https://chatgpt.com/share/69dd1c83-b164-8385-bf2e-8533e9baba...
-----------------------------
Yes. In fact the proposed bound is true, and the constant 1 is sharp.
Let w(a) = 1/(a log a).
I will prove that, uniformly for every primitive A ⊂ [x, ∞), ∑_{a∈A} w(a) ≤ 1 + O(1/log x), which is stronger than the requested 1 + o(1).
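For readability, the weight and the claimed uniform bound from the two lines above can be typeset as follows (assuming the standard convention that a primitive set contains no element dividing another):

```latex
% Weight function and the claimed uniform bound
w(a) = \frac{1}{a \log a}, \qquad
\sum_{a \in A} w(a) \le 1 + O\!\left(\frac{1}{\log x}\right)
\quad \text{for every primitive } A \subset [x, \infty).
```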
https://chatgpt.com/share/69ed8e24-15e8-83ea-96ac-784801e4a6...
I'd guess / hope the Pro one has the full context window.
He had a habit of seeking out and documenting mathematical problems people were working on.
The problems range in difficulty from "easy homework for a current undergrad in math" to "you're getting a Fields Medal if you can figure this out".
There's nothing that really connects the problems other than the fact that one of the smartest people of the last 100 years didn't immediately know the answer when someone posed it to him.
One of the things people have been doing with LLMs is to see if they can come up with proofs for these problems as a sort of benchmark.
Each time there's a new model release a few more get solved.
I'm no expert, but based on the commentary from mathematicians, this Erdős proof is a unique milestone because the problem received previous attention from multiple professional mathematicians, and the proof was surprising, elegant, and revealed some new connections.
The previous ChatGPT Erdős proofs have been qualitatively less impressive, more akin to literature search or solving easier problems that have been neglected.
Reading the prompt[1], one wonders if stoking the model to be unconventional is part of the success: "this ... may require non-trivial, creative and novel elements"
[1] https://chatgpt.com/share/69dd1c83-b164-8385-bf2e-8533e9baba...
I've long suspected that a lot of these models' real capabilities are still locked behind certain prompts, despite the big labs spending tons of effort on making default responses to simple prompts better. Even really dumb shit like "Answer this: ..." vs "Question: ..." vs "... you'll be judged by <competitor>", which should have zero impact in an ideal world, can significantly impact benchmark results. The problem is that you can waste a ton of time finding the right prompt using these "dumb" approaches, when the model actually just required some very specific context that was obvious to you but not to it, as in many day-to-day situations. My go-to method is still to have the model ask me questions as the very first step for any of these problems. They kind of tried that with deep research since the early o-series, but it still needs improvement.
You can say this problem needed a low amount of total creativity, but saying it's void of all creativity seems wrong.
That's a great point. It's in line with research being carried on the backs of graduate students, whose work is to hyperfocus on areas.
Yeah, you should look into the Langlands project sometime
[1] e.g. https://www.sciencenewstoday.org/left-brain-vs-right-brain-t...
I remember one of my professors, a coauthor of Erdős, boasting to us after a quiz about how proud he was that he had been able to assign an Erdős problem that went unsolved for a while as just a quiz problem for his undergrads.
So this is proof of the models actually getting stronger (previous generations of LLMs were unable to solve this one).
No, it's not.
While I don't dispute that new models may perform better at certain tasks, the fact that someone was able to use them to solve a novel problem is not proof of this.
LLM output is nondeterministic. Given the same prompt, the same LLM will generate different output, especially when it involves a large number of output tokens, as in this case. One of those attempts might produce a correct output, but this is not certain, and it is difficult, if not impossible, for a human who is not an expert in the domain to determine, as shown in this thread.
This is how I feel when I read any mathematics paper.
[1] https://github.com/teorth/erdosproblems
When a model gives a really good answer, does that just mean it’s seen the problem before? When it gives a crappy answer, is that not simply indicating the problem is novel?
I think LLMs can help in limited cases like this by just coming up with a different way of approaching a problem. It doesn’t have to be right, it just needs to give someone an alternative and maybe that will shake things up to get a solution.
That said, I have no idea what the practical value of this Erdős problem is, or whether it demonstrates that LLMs are not junk. My general impression is that it's like asking me in 1928 whether we should spend millions of dollars of research money on number theory. The answer is no, and get out of my office.
By looking at the website, it seems this problem was never discussed by humans. The last comments were about GPT discovering it. I was expecting to see older comments on a 60-year-old problem.
Am I missing something?
Great discovery though; there might be other problems in the same situation that are worth a try for a "gpt check".
[0]: https://www.erdosproblems.com/forum/thread/blog:1
If models are able to pull and join information that already existed in pieces but humankind never discovered by itself, doesn’t this count towards progress anyways?
If the reason it was able to output the proof is that it happened to be included in an in-house university report written in Georgian, then that would make it less useful for research than if it's new entirely.
"An amateur just solved a 60-year-old math problem—by asking AI"
A more honest title would be:
"An AI just solved a 60-year-old math problem—after being asked by an amateur"
(Imagine the headline claimed instead that a professor just solved a math problem by asking a grad student.)
Some people think that multiplying numbers, remembering a large number of facts, and being good at calculations is intelligence.
Most intelligent people do not think that.
Eventually, we will arrive at the same conclusion for what LLMs are doing now.
Proposing and proving something like Gödel's theorem definitely requires intelligence.
Solving an already proposed problem is just crunching through a large search space.
Doing formalized mathematics is as intelligent as multiplying numbers together.
The only reason why it's so hard now is that the standard notation is the equivalent of Roman numerals.
When you start using a sane metalanguage, and not just augmented English, to do proofs, you gain the same increase in capabilities as going from word equations to algebra.
Hah. It reminds me of this great quote, from the '80s:
> There is a related “Theorem” about progress in AI: once some mental function is programmed, people soon cease to consider it as an essential ingredient of “real thinking”. The ineluctable core of intelligence is always in that next thing which hasn’t yet been programmed. This “Theorem” was first proposed to me by Larry Tesler, so I call it Tesler’s Theorem: “AI is whatever hasn’t been done yet.”
We are seeing this right now in the comments. 50 years later, people are still doing this! Oh, this was solved, but it was trivial, of course this isn't real intelligence.
Most people would consider someone who can calculate 56863*2446 instantly in their head to be intelligent. Does that mean pocket calculators are intelligent? The result is the same.
> then they are the one meant to be doing the defining, and to tell us how it can be tested for. If they can't, then there's no reason to pay attention to any of it.
That is the equivalent of responding to criticism with “can you do better?”. One does not need to be a chef (or even know how to cook) to know when food tastes foul. Similarly, one does not need to have a tight definition of “life” to say a dog is alive but a rock isn’t. Definitions evolve all the time when new information arises, and some (like “art”) we haven’t been able to pin down despite centuries of thinking about it.
ChatGPT equalizes intelligence. And that is an attack on their identity. It also exposes their ACTUAL intelligence which is to say most of HN is not too smart.
Yes, I love living in communism too. Imagine if you had to pay money for it or something. The wealthiest people would get unrestricted access to intelligence while the poor none. And the people in the middle would eventually find themselves unable to function without a product they can no longer afford. Chilling, huh? Good thing humans are known for sharing in the benefits of technological progress equally. /s
With real general intelligence you'd expect it to solve problems above a certain difficulty with a good clip
I don't doubt that there are many very real and meaningful limitations of these systems that deserve to be called out. But "text generation" isn't doing that work.
(To be clear: I'm not agreeing or disagreeing. I sometimes feel the same too. I'm just curious how others reconcile these.)
If/when these things solve our hardest problems, that's going to lead to some very uncomfortable conversations and realizations.
Hindsight is 20/20.
Of course LLMs are still absolutely useless at actual maths computation, but I think this is one area where AI can excel --- the ability to combine many sources of knowledge and synthesise, may sometimes yield very useful results.
Also reminds me of the old saying, "a broken clock is right twice a day."
We may have collectively filled libraries full of books, and created yottabytes of digital data, but in the end to create something novel somebody has to read and understand all of this stuff. Obviously this is not possible. Read one book per day from birth to death and you still only get to consume like 80*365=29200 books in the best case, from the millions upon millions of books that have been written.
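The back-of-the-envelope count above can be checked in a couple of lines of Python (same one-book-per-day assumption, ignoring leap days):

```python
# Rough upper bound on books one person can read in a lifetime,
# at one book per day over an 80-year lifespan (365-day years).
years = 80
days_per_year = 365
lifetime_books = years * days_per_year
print(lifetime_books)  # 29200
```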
So these "few tricks" are the accumulation of a lifetime of mathematical training, the culmination of the slice of knowledge that the respective mathematician immersed themselves into. To discover new math and become famous you need both the talent and skill to apply your knowledge in novel ways, but also be lucky that you picked a field of math that has novel things with interesting applications to discover plus you picked up the right tools and right mental model that allows you to discover these things.
This does not go for math only, but also for pretty much all other non-trivial fields. There is a reason why history repeats.
And it's actually a compelling argument why AI is still a big deal even though it's at its core a parrot. It's a parrot yes, but compared to a human, it actually was able to ingest the entirety of human knowledge.
Even this, though, is not useful, to us.
It remains true that a life without struggle and achievement is not really worth living...
So, it is nice that there is something that could possibly ingest the whole of human knowledge, but that is still not useful, to us.
People are still making a hullabaloo about "using AI" in companies, and there was some nonsense about there will be only two types of companies, AI ones and defunct ones, but in truth, there will simply be no companies...
Anyways I'm sure I will get down voted by the sightless lemmings on here...
The combinatorial nature of trying things randomly means that it would take millennia or longer for light-speed monkeys typing at a keyboard, or GPUs, to solve such a problem without direction.
By now, people should stop dismissing RL-trained reasoning LLMs as stupid, aimless text predictors or combiners. They wouldn’t say the same thing about high-achieving, but non-creative, college students who can only solve hard conventional problems.
Yes, current LLMs likely still lack some major aspects of intelligence. They probably wouldn’t be able to come up with general relativity on their own with only training data up to 1905.
Neither did the vast majority of physicists back then.
Indeed, and so do current humans! And just like LLMs, humans are bad at keeping this fact in view.
On a more serious note, we're going to have a hard time until we can psychologically decouple the concepts of intelligence and consciousness. Like, an existentially hard time.
and homo sapiens, glancing at the clock when it happens to be right, may conjure an entire zodiac to explain it.
I've been using LLMs for much the same purpose: solving problems within my field of expertise where the limiting factor is not intelligence per se, but the ability to connect the right dots from among a vast corpus of knowledge that I would never realistically be able to imbibe and remember over the course of a lifetime.
Once the dots are connected, I can verify the solutions and/or extend them in creative ways with comparatively little effort.
It really is incredible what otherwise intractable problems have become solvable as a result.
I don’t know what this claim is supposed to mean.
If it isn’t supposed to have a precise technical meaning, why is it using the word “interpolate”?
A broken clock can be broken in ways which result in it never being correct.
They are not great at playing chess either, a task that is computational as well as analytic.
Further evidence against your claim, if you don't want to take me up on that: I handed problems off to GPT-5 to check my own answers. None of the dumb mistakes I make or missed opportunities for simplification are in the book, and, again: it's flawless at pointing out those problems, despite being primed with a prompt suggesting I'm pretty sure I have the right answers.
I found and fixed bugs I wrote into the formulas and spreadsheets, and the LLMs were not my sole reference, but once the LLM mentioned the names of concepts and functions, I used Wikipedia for the general gist of things, and I appreciated the LLMs' relevant explanations that connected these disciplines together.
I did this on March 14, 2026
That's one way to waste a ton of tuition money to just have a clanker do your learning for you.
Unless you're teaching it, in which case I hope your salary is cut by whatever percentage your clanker reduces your workload.
80 hours! 80 hours of just trying shit!
There, fixed that for you.
Then my second question is how much VC money did all those tokens cost.
(Of course, those problems are on another plane than this one.)
It's so expensive!
How is he even posing the question and having even a vague idea of what the proof means or how to understand it?
Seems like standard 23 year old behavior. You're spending $100-$200/mo on the pro subscription, and want to get your money's worth. So you burn some tokens on this legendarily hard math problem sometimes. You've seen enough wrong answers to know that this one looks interesting and pass it on to a friend that actually knows math, who is at a place where experts can recognize it as correct.
Seems like a classic example of a non-expert human labeling ML output.
"He sent it to his occasional collaborator Kevin Barreto, a second-year undergraduate in mathematics at the University of Cambridge."
So basically two undergrads/graduates in math; "advanced" is subjective at that point.