
⚡ Community Insights

Discussion Sentiment

65% Positive

Analyzed from 1978 words in the discussion.

Trending Topics

#answer #human #intelligence #writing #knows #model #training #data #knowledge #already

Discussion (69 Comments) · Read Original on HackerNews

michaelbuckbee•about 2 hours ago
This feels like a restating of the idea that for any given endeavor AI raises the floor of quality but doesn't push the ceiling.
kreelman•about 3 hours ago
Just wondering... What is Intelligence?
mettamage•about 3 hours ago
The title has a typo; the actual article's title is "The Social Edge of Intelligence".
ForHackernews•about 3 hours ago
I've corrected the typo now, but I almost let it stand as a testament to my humanity.
SecretDreams•about 3 hours ago
If you gotta ask, you can't afford it.

~ intelligence

quinndupont•about 2 hours ago
The rise of AI writing has been matched only by superficial articles composed of idea salad that evinces no deep theoretical or historical understanding. Crappy writing always has existed and always will; AI doesn’t change that, it just makes awful writing grammatical.
bitmasher9•about 2 hours ago
It has generally reduced the signal-to-noise ratio of writing, but the signal-to-noise ratio has been absolutely terrible for such a long time we’ve all adapted to better signal detection.
yetihehe•about 1 hour ago
Because it made noise look more like signal. Essentially, that is what AI is: a noise2signal generator, where you specify what the signal should look like and it shapes the noise to your specification. If you specify "make it look like good writing", it will look like good writing, but it will still be noise. It won't be good writing.
Lerc•about 3 hours ago
There is a fundamental assumption made about the ability of AI here that I believe is wrong.

It assumes that the outputs are lacking because of a limit of ability.

I think there is a strong case to make that many of their limitations come from them doing what we have told them to do. Hallucinations are the standout example of this. If you train it to give answers to questions, it will answer questions, but it might have to make up the answer to do so. This isn't a matter of not knowing that it doesn't know; it is doing the task given to it regardless of whether it knows or not.

Suppose you were given the task of writing the script for a TV show with the criterion that it not offend anyone whatsoever: make it as likeable as you can without anyone disliking it at all. The options for what you can do are reduced to something that is okay-ish but rather bland.

That's what AI is giving us. OK but rather bland. It's giving it to us because that's what we've told it we want.

andsoitis•about 3 hours ago
> I think there is a strong case to make that many of their limitations come from them doing what we have told them to do. Hallucinations are the standout example of this. If you train it to give answers to questions, it will answer questions, but it might have to make up the answer to do so. This isn't a matter of not knowing that it doesn't know; it is doing the task given to it regardless of whether it knows or not.

Are you asserting that an LLM could be NOT trained to answer when it knows it doesn’t know the answer, or, if that’s not possible, be trained to NOT answer when it knows it doesn’t know the answer?

If so, I would believe your thinking, but for some reason I have not yet seen a single LLM that behaves with that kind of self-knowledge.

Lerc•about 2 hours ago
It should be trained to answer when it knows the answer, and to state that it does not know the answer when it does not. They might already have a very good understanding of not knowing internally, but are just not trained to express that.

This is not a problem in the ability of the system, it is a problem of how to construct training for such a task.

The difficulty is in providing training examples where it answers that it does not know only when it genuinely does not know: examples where it says it doesn't know when it does not contain that knowledge, but provides an answer when it does know the answer.

To create such an example, you need to know in advance what the model knows and what the model does not know. You can't just have a database of facts that it knows, because you also need to count things that it can readily infer.

Any model that can reliably give the sum of any two 10-digit integers should be able to answer such questions, yet you can't list every pair of numbers the model knows how to add. And that is just the tiniest subset of the task, because you would have to determine every inferrable fact, not just integer sums. Adding to the problem, training on questions like this can itself add to the model's knowledge base, either from the question itself or by letting it inductively figure out the answer from the combination of the question and the fact that it was not expected to know the answer.

A completely different training system would have to be implemented. There is research on categorising patterns of activations that can determine a form of 'mental state' of a model. A dynamic training approach, where the answer that the model is expected-to-give/rewarded-for-giving is partially dependent on the model's own state, could be achieved through this mechanism.
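
A rough sketch of what that state-dependent labelling might look like, assuming a hypothetical knows() probe over the model's internal activations (the probe, the function names, and the whole setup are illustrative assumptions, not an existing training API):

    # Toy sketch: pick the training target based on a probe of what the
    # model already "knows", so "I don't know" is only rewarded when the
    # knowledge is genuinely absent. All names here are hypothetical.
    from typing import Callable

    IDK = "I don't know."

    def build_target(question: str,
                     reference_answer: str,
                     knows: Callable[[str], bool]) -> str:
        """Return the answer the model should be rewarded for giving."""
        if knows(question):
            return reference_answer   # reward answering when the knowledge is present
        return IDK                    # reward abstaining when it is absent

    # Stand-in for a probe over internal activations (e.g. a linear classifier
    # trained to detect a "confident recall" activation pattern).
    known_facts = {"capital of France": "Paris"}

    def toy_probe(question: str) -> bool:
        return question in known_facts

    if __name__ == "__main__":
        for q, ref in [("capital of France", "Paris"),
                       ("capital of Atlantis", "???")]:
            print(q, "->", build_target(q, ref, toy_probe))

The labelling logic itself is trivial; the hard part is making the knows() probe trustworthy, which is exactly the open research question.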

bluefirebrand•about 1 hour ago
> It should be trained to answer when it knows the answer, and to state that it does not know the answer when it does not

Do LLMs even have any kind of internal model of what they know or don't know? My understanding is that they don't.

lobofta•about 2 hours ago
Of course it's possible.

I don't say this because I know how, but because I see no reason why we will be unable to crack that problem. If our brains can do it, AI will one day too.

caditinpiscinam•about 3 hours ago
Generative AI is the average of all human knowledge
jdw64•about 3 hours ago
Human intelligence is fundamentally motivated by fear and desire, whereas AI operates on an entirely different paradigm. AI lacks human embodiment, and it lacks the political landscapes born out of complex social relationships. Can we truly equate AI's 'intelligence' with what humans call intelligence? Should we even be calling its functionality 'intelligence' at all?

The author argues that overreliance on AI will degrade the overall intelligence of human society, creating a negative feedback loop where future models train on increasingly degraded human data. I agree with this perspective to some extent. However, to definitively claim that human intelligence will only decline is overly simplistic. Rather, we might be about to witness a different facet—or the flip side—of what we have traditionally defined as intelligence.

Socrates once argued that the invention of writing would degrade the essence of human thought and memory. It is true that our capacity for raw memorization declined, but the act of recording enabled knowledge to be transmitted across generations. Couldn't LLMs represent a similar evolutionary trajectory?

It is undeniably true that LLMs atrophy certain cognitive muscles. However, I believe they catalyze development in other areas. In modern society, human discovery and knowledge are effectively monopolized by specific cliques. Without access to prestigious Western journals or incumbent tech giants, the barrier to entry is immense. The open-source community is no exception. For non-native English speakers, breaking into the open-source culture to access shared knowledge is notoriously difficult. But now, by spending a few dollars on an LLM, I can access the collective knowledge of that open-source ecosystem, translated seamlessly into my native language.

There is an old adage in the Korean Windows community: 'Linux is open, but it is not free.' And it’s true. To use Linux, you had to memorize arcane commands, and due to the lack of proper Korean documentation, the learning curve was vastly steeper than Windows. That very learning curve acted as a gatekeeping wall. LLMs explicitly dismantle that wall.

But this dismantling is a two-way street, and it exposes a fatal flaw in the author’s reliance on Shumailov’s 'Model Collapse' theory. The author claims AI compresses the tails of the data distribution, erasing minority viewpoints. What this ignores is that LLMs act as a conduit for cognitive diversity from the non-Western periphery. When a developer in South Korea or Brazil uses an LLM to translate their culturally embedded logic and problem-solving approaches into fluent English, they are injecting entirely new cognitive patterns into the global corpus. This does not compress the tails of the distribution; it actively thickens and extends them by capturing the 'social mind' of populations previously locked out of the internet's primary, English-dominated datasets.

Furthermore, LLMs function as a tool to re-evaluate things we've historically taken for granted—especially in areas that are too complexly intertwined, socio-politically loaded, or vast for the human mind to fully map. Take DeepMind's AlphaDev discovering a faster sorting algorithm as an example; it was a breakthrough achieved precisely because it reasoned from an alien, non-human perspective.

Human learning is fundamentally bottlenecked by environment and bias. Anyone who has interacted with academia knows it is riddled with pervasive prejudices and systemic inefficiencies. In South Korea, for instance, there is an entrenched bias that only researchers with US pedigrees are legitimate, and only papers in specific Western journals matter. This prejudice has prematurely killed countless promising research initiatives. It makes you wonder if the metrics we have long held up as 'superior' or 'correct' are actually deeply flawed. Modern society is too complex for the 'lone genius' model; paradigm shifts now require the intertwined research of multiple collectives. Yet, during this process, political interests often cause dominant groups to gatekeep and exclude others, completely regardless of scientific efficiency. In this context, an AI that lacks our inherent socio-political biases and optimizes purely based on probabilities can actually drive true breakthroughs.

Given all this, the absolute claim that AI unconditionally degrades human intelligence feels flawed. I seriously question whether the 'total sum' of human intelligence is actually experiencing a meaningful decline. Before making such claims, we desperately need to define what 'intelligence' actually means in this new context. The fatal flaw in current AI discourse is the complete lack of nuance—there is no middle ground. Everything is framed as a binary: either purely utopian or purely apocalyptic.

Speaking from personal experience, my cognitive muscle for writing raw code has atrophied because of AI. However, as a non-native English speaker, I used to struggle immensely with naming conventions. Now, my variable naming and overall architectural design capabilities have vastly improved. Conversely, I acutely feel my skills in manual memory layout management and granular code implementation degrading. The trade-off point will be wildly different for every individual.

Whenever I read doom-saying articles like the author's, I can't shake the feeling that they are simply projecting their own subjective anxieties and trying to pass them off as a universal conclusion.

joaovnunes•about 1 hour ago
Great thoughts.

The decline brought by writing was not only in memorization. If in previous ages to understand something meant to study it deeply, eventually the definition may shift to having asked ChatGPT about it and skimmed the response. This is discussed thoroughly in Technopoly, which I'm still reading.

You should also consider that besides its effects on highly technical and scholarly people, AI adoption will also affect the majority of average workers, who may be more vulnerable to atrophy than others.

Not only that, but eventually AI will be native, and people's perspectives and usage will not be shaped by previous generations' habits. If people hardly bother to write their own emails, comments, or essays, then how will the AI-native generation approach that?

Although you make very solid points, I've been leaning toward thinking that AI's effect on society will be shaped by the average user, not by users such as yourself and the colleagues you observe, in which case the doom-saying starts to make better sense.

DeathArrow•about 2 hours ago
>In 2024, Ilia Shumailov and colleagues published a paper in Nature with a straight-talking title: AI models collapse when trained on recursively generated data.

Of course, the models are not intelligent. Their generated output reflects the statistical average. And in averaging more and more, you lose a lot of information.
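
A toy illustration of that averaging-away of information, in the spirit of the model-collapse result (not the paper's actual setup; just a one-dimensional Gaussian recursively refit to its own finite samples):

    # Toy model-collapse demo: repeatedly fit a Gaussian to samples drawn from
    # the previous generation's fit. With finite samples, the fitted spread
    # drifts downward and the distribution's tails disappear.
    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma = 0.0, 1.0                      # the "real" data distribution
    n_samples, generations = 50, 2000

    for g in range(generations):
        data = rng.normal(mu, sigma, n_samples)   # "train" on the previous model's output
        mu, sigma = data.mean(), data.std()       # refit the "model" to its own samples
        if g % 500 == 0:
            print(f"generation {g:4d}: fitted sigma = {sigma:.4f}")
    print(f"generation {generations:4d}: fitted sigma = {sigma:.6f}")

The mean only wanders a little, but the fitted spread drifts toward zero over the generations, which is the "losing the tails" effect the article leans on.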

intended•about 3 hours ago
Hey, the more we think about our information economy/environment as a commons, the better.

I fully expect our future to involve PhD factories where doctorate holders label AI output for the most competitive rates possible.

The majority of us will have to contend with an information environment that is polluted and overrun.

I’ll argue that the internet pre-social-media was the “healthiest” in terms of our digital commons.

bsenftner•about 3 hours ago
I'll say it again: because we place no material focus on pragmatic, disagreement-structuring, effective communication (people are not taught how to discuss disagreement), not only is our current AI being massively misunderstood, but the human population does not have the discrete language skills to even use AI without massive hallucination issues. They are nominally in control, but they do not have the nuanced understanding of language to, well, understand.

The reason being: when people are taught how to disagree effectively, all these counterfactual concepts that AI loses become manifest; they are logically necessary. But if people are not taught how to explore the landscape of ideas, they become "fascists for the common" and literally create the hellscape civilization we are all trapped within.

geremiiah•about 3 hours ago
We are already on the cusp of fully automated reasoning, and once we have fully automated reasoning, OpenAI and Anthropic can just dedicate part of their compute towards generating new, high-quality output, which will then be fed as training data during pretraining of subsequent models.
Nasrudith•about 3 hours ago
I don't believe that to be possible in general, because we've already had millennia of philosophers attempting to make discoveries through sheer reasoning, and, with the (in the grand scheme of things small) exception of formal logic, they failed to do so. Which leads me to a principle: no matter how smart you are, you still need the real world as a reference.

Once again, LLMs will have to be bound to a source of entropy or feedback of some sort as a limit. Sure, you might be able to throw terawatts of cycles at, say, music production, but without examples of what people already like, or test audiences, you cannot answer the question of whether it is any good.

geremiiah•about 2 hours ago
Well, yes, that's why the rest of science was invented, no? I did not mean to imply that AI would restrict itself to philosophical thinking and formal logic.
energy123•about 2 hours ago
It's proven to be possible in narrow areas like Go. There is no entropy or feedback or whatever. It just keeps getting better.
qsera•about 3 hours ago
That is like saying we can get unlimited data compression by feeding the output of a data-compression program back into its own input.
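
The compression half of that analogy is easy to check with zlib (a rough illustration of the diminishing returns, nothing more):

    # Recompressing already-compressed data stops paying off almost immediately.
    import zlib

    data = ("the quick brown fox jumps over the lazy dog " * 200).encode()
    for i in range(4):
        print(f"pass {i}: {len(data)} bytes")
        data = zlib.compress(data, 9)

After the first pass the output is essentially incompressible noise, and further passes gain nothing (and eventually add overhead).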
geremiiah•about 2 hours ago
No, it's not like saying that at all.
qsera•about 1 hour ago
Yes, that is exactly like it.

Generating new training data from existing data will only generate patterns that already exist in the training data. It might help LLMs capture those patterns if they have not already, but it can never generate novel patterns.

For example, imagine some kind of neural network architecture that can do OCR. You might be able to generate variations of the letters it already knows using some technique and use them to better train recognition of the already-known letters.

But it would never be able to generate letters that it does not know.
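
A rough sketch of that augmentation idea with toy numpy "glyphs" (everything here is hypothetical and only illustrates the labelling point): jittered variants of known letters can be generated endlessly, but every generated example still carries a label the system already has.

    # Augmenting known glyphs: shifts and noise create new variations of
    # existing letters, but every output is still labelled with an
    # already-known class; no new letter class can appear.
    import numpy as np

    rng = np.random.default_rng(1)

    # Tiny 5x5 "glyphs" for two known letters (toy stand-ins for real bitmaps).
    known_glyphs = {
        "I": np.array([[0, 0, 1, 0, 0]] * 5, dtype=float),
        "L": np.array([[1, 0, 0, 0, 0]] * 4 + [[1, 1, 1, 1, 1]], dtype=float),
    }

    def augment(glyph):
        shifted = np.roll(glyph, rng.integers(-1, 2), axis=1)  # small horizontal shift
        return np.clip(shifted + rng.normal(0, 0.1, glyph.shape), 0, 1)

    # Generated training pairs can only ever carry labels from known_glyphs.
    synthetic = [(label, augment(g)) for label, g in known_glyphs.items() for _ in range(3)]
    print([label for label, _ in synthetic])   # no new letter classes appear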