Discussion (178 Comments)
This is what happens in thought-isolation. It isn’t better than educating yourself, whether that education involves AI or not.
Philip Kitcher is known for epistemic monoculture; Dawkins and then Henrich popularized collective intelligence and cultural evolution.
The thing about these fear pieces is that concepts like the hollowed mind are reductive, and that reductionism is based on a reductive view of (usually other) people.
But what actually happens is we have formalized processes and can externalize them. This is a benefit if you can use your newfound capacity and free time for something better, which I think most people ultimately will.
Even if I believe that is what happens in 10% of uses of AI, it doesn't excuse what happens with the rest.
Many people cannot do mental math anymore, and still more question why we need to learn math at all in the first place when we have simple calculators. "When will I ever use XYZ?" is a common refrain.
AI is currently developed and owned by billionaires who also happen to own news sources. If that correlation doesn't spark questions about why we shouldn't externalize processes to AI, you have likely been using AI too much already.
When AI gains true marketshare in the "think-space", I have zero trust that the corporate overlords controlling these machines will use them in the fairest interests of humanity.
I've been working on a project and using LLMs heavily to inform my design decisions. There's already a long list of cases where it has taught me things I wasn't familiar with, alerted me to possibilities I didn't consider, shown me how to do things that I was struggling with. In those cases I ask for references, and it delivers.
This is not "endangering human development". If anything, it's the exact opposite - allowing human knowledge to be transmitted to other humans in an accessible way that otherwise, usually simply would not have happened.
Of course, this all depends on using AI to enhance cognition and access to knowledge, as opposed to just letting a machine write all your code for you without review, Yegge-style.
I'm not saying there isn't a moral dimension to all this, and areas of serious concern. But the one about "endangering human development" is wholly in our individual hands. You can use AI to help you learn, or to replace the need to learn. The former will be better for human development.
One real lesson from this is perhaps that we need to teach people how to use AI in ways that benefit their development, not just their output.
This is because humans are actually extremely easy to exploit. Our biology is very stupid, so even basic attacks can cause us to self-destruct.
And that's how we get obesity, smoking, war, I mean... you name it.
LLMs are basically perfect for this. While I'm sure some people, somewhere, can theoretically resist attacks from LLMs, on the whole I'm not sure that will be the case.
When I read comments like yours, I’m reminded of (though I’m not comparing you to—I believe you are arguing in good faith) the cryptocurrency shills saying anyone who is against cryptocurrencies is just jealous they didn’t get in on the gold rush; they are incapable of imagining or accepting other people have their own reasons beyond what the author can themselves conceptualise.
When people criticise cryptocurrencies, NFTs, the Metaverse, LLMs, they’re not just stubbornly “resisting change”. Those technologies have important issues and repercussions which should be addressed; we shouldn’t just accept change unquestioningly.
> Of course, this all depends on using AI to enhance cognition and access to knowledge, as opposed to just letting a machine write all your code for you without review, Yegge-style.
And the latter is exactly what is going to happen and is already happening in large enough quantity that it’s going to be a serious problem.
> But the one about "endangering human development" is wholly in our individual hands. You can use AI to help you learn, or to replace the need to learn.
That completely ignores the loss of skill that happens without you realising, as you lean more on a tool.
https://www.thelancet.com/journals/langas/article/PIIS2468-1...
https://arxiv.org/abs/2506.08872
This is nothing new. We already know that e.g. heavy GPS use makes us weaker at navigating on our own.
https://www.nature.com/articles/s41598-020-62877-0
> One real lesson from this is perhaps that we need to teach people how to use AI in ways that benefit their development, not just their output.
Yes, that is a good goal. But good luck achieving it.
> One real lesson from this is perhaps that we need to teach people how to use AI in ways that benefit their development, not just their output.
It’s a corner-cutting machine that allows people to shift the burden of their work onto others, either in the form of more slop we have to wade through OR more work we have to correct because they couldn’t be bothered to vet the results.
It’s like writing a paper, running spellcheck, then sending it to someone else to look over for you without ever taking a pass yourself. It’s selfish.
I think for a lot of us the problem is that this is not a given. It’s often promised and rarely occurs, especially in the modern era. Increased productivity usually just means increased demands in the workplace.
That, and it's good for going down rabbit holes of questions about a topic that, before, you'd have to reserve for when you accidentally found a resource that perfectly answered that niche line of questioning, or when you somehow ended up in conversation with someone with expertise in the thing you're questioning.
her plumber offloaded to chatgpt.
"i just think it's good for humans to know how to do stuff."
are we talking about your sister or her plumber?
some knowledge is likely "cached" in the plumber. maybe he doesn't ask the same question twice. i'm sympathetic to the plumber, but i think your concerns of erosion of knowledge or skill are worth pushing on further.
Without further knowledge of what was going on it's hard to say why they used ChatGPT.
In the comments of this HN post, there is a dead comment from someone who posted an LLM's summary of another comment. It's dead because it offers very little/no value: that summary could be obtained directly from ChatGPT by anyone who wants a summary.
The sister offloaded plumbing to the plumber under the economic principle of comparative advantage. The plumber undermines the value they provide by outsourcing yet again. What value is provided by the middle man who does nothing but proxy the issue? Is the person who does this really a plumber? Is a plumber merely someone who has plumbing tools like wrenches and pipe tape?
That the plumber also wanted to outsource it is the concern: right now, the plumber is able to make money because of the difference between what is charged to deal with a problem and what it costs them to deal with it. Knowledge and experience have become a commodity, which we probably can't do anything about, but along with that come all the drawbacks (and advantages) of things, and humans, being commoditized.
Experts look things up all the time, because no one can hold all the knowledge of a field in their head. Being an expert means being able to know what to look up and how to use the information retrieved from looking something up.
In the plumber example, ChatGPT is going to tell them to do things using the terminology that plumbers know, and tell them to do tasks that plumbers know how to do. The sister would have to continually look up more and more things about how to do basic plumbing tasks, rather than just looking up particular novelties.
"how do I fix a clogged toilet?" would be bad..
The first prompt style is I think a way society towards drifts incidentally towards a less interesting one, with less variety in solutions. The second one i think allows people to still exercise their potential to try a variety of things and keep that variety.
And if the LLM gets that wrong? It's his job to know the codes or how to go to a reliable resource to find out the correct codes.
The plumber who turned up leaving without fixing the problem,
The plumber fixing something that he didn't know how to do by looking up the answer.
The plumber attempting to fix something that they didn't know how to do.
While it's great to have the plumber who knows how to do everything, they are rare and in high demand, so cost way more than you can afford.
Obviously you can have a plumber who knows his stuff and one who doesn't. The good one can check some details and will recognize BS. If you already have the bad one, it's probably better if he uses an LLM than if he doesn't.
AI psychosis has been going through armchair philosophers, physicists, and political theorists the way crack went through low-income neighborhoods back in the '80s.
How can it be unconvincing if you didn't understand the argument presented in the post?
Either way though I think there's a much simpler way to express what she's trying to say. Offloading thinking to AI is bad because it's less flexible and doesn't easily update its reasoning with new information.
That is, of course, provided that you pay attention to whether it actually does research. In their current state, LLMs are practically useless for this purpose for the vast majority of users, as no one knows how they work, what to watch out for, what the failure modes look like, and how to keep nonsense apart from facts when both are presented with an equal amount of conviction. That’s not a user problem, it’s an education problem.
Nothing about the nature of evolution implies our current cognitive processing is ideal/sacred and shouldn't ever change.
Setting aside medical movement aids for a moment, I am reminded of places where people commonly ride various kinds of scooters on sidewalks. There is a particular feeling of unfairness when you are pitted against essentially a small vehicle zipping past you with little warning, easily going double your speed without any physical effort from the rider. I remember seeing people in Seoul, especially older people, being startled by and occasionally having to almost jump out of the way of this sort of traffic having the right of way. I won’t lie, I like that riding those things is illegal where I am now.
Let’s talk about medical movement aids, though.
The analogy gets interesting here. Unlike the various scooters, these aids are normally restricted to average walking speed, though I imagine “jailbreaking” them is probably a thing, too.
On the flip side, I know for a fact that there are places where perfectly able people are known to ride purported medical movement aids (just for kicks or in protest). Is this a bad thing? Who is to say whether one is disabled or not anyway? If one is physically able but buys this machine, should one have the freedom to drive around on the sidewalk? Why don’t we just do it by default? What about a flipped world where everybody drives a movement aid everywhere and only special people (Olympian athletes, weirdos, etc.) ever walk?
Which is partially how we found ourselves in the midst of an obesity epidemic.
So are you arguing we should change our relationship with human intelligence? What does that even mean?
I also don't believe that everybody I know is idiosyncratic in the way they view the world. And even if they were, I'd probably just pay attention to the things that are directly relevant to me. So probably I'll misunderstand most of what they say anyway.
The idea is that a base model could be years old, with all its faults, superficially fixed and extended with knowledge over time. She points to research showing that LLMs don't "fully believe" this new knowledge. The skew could be much longer than a few weeks.
Isn't this just inherent to any system that takes some time to update? E.g. if a country moves its capital to a different city, then textbooks, maps, etc. are going to contain incorrect information for a while until updated editions are published.
A lot of the complaints about AI are really about the drawbacks of information systems more generally, and the failure modes pointed out are rarely novel. The "Cognitive Inbreeding" effect attributed to AI would also have occurred with Google search would it not? Lots of people type the same question into google and read the top results, instead of searching a more diverse set of information sources. It's interesting that the author mentions web search as a way to ameliorate this, when it seems to me that web search is just as capable of causing cognitive inbreeding.
I think the difference is that LLMs are a very complex mix of information and concepts, which can be combined in higher orders. So an underlying wrong fact could go undetected and contribute to faulty reasoning. A hard fact like a wrong city name would blow up quickly. A wrong assumption about political dynamics is probably harder to detect, as it is a complex mix of information.
"Is it safe to travel to the US as an EU citizen of arab descend?"
GPT: Yes it's safe. GEMINI: Yes but... [gave a few legitimate warnings]
I wouldn't give that recommendation to an Arab fellow citizen right now. Though I am cautious in such matters, and I hate to travel anyway, so I am biased. But general concerns aren't totally ungrounded.
Neither of the LLMs pointed out the general tension around ICE activity.
AI is just the current scapegoat.
The framing of questions massively affects the results you get from discussion with humans, and I'd argue it's even more pronounced with LLMs.
Children learning in schools should not become product managers. If they are, what exactly is the "product" that they are "managing"? Reducing everything to and looking everything from a corporate viewpoint is bizarre.
Regarding education I think AI is a huge revolution waiting to happen. Usual courses have become boring? Have future super powerful AI generate per student highly personalized programs, create bespoke video games where succeeding can only happen once the student has validated all the notions you wanted them to validate etc.
None of this is equivalent to the topic of discussion. The point is that even in a world of division of labour and shared expertise, there is no atrophy in the general populace, because everyone is trying to become an expert in something. The whole point is that the brain is being put to use on something. If not on X, then on Y. If none of the letters are available, what do you put your brain to use on?
>I would not assume someone who won the lottery is going to have their life become uninteresting or see some cognitive decline. It could probably happen, but you can also see a path where the person just chooses to do the activities they always wanted to do, where they keep learning and exploring without the burden of usual life constraints. People already play chess when machines have beaten us for decades, just because they enjoy it.
Again, please pay attention to the main idea of the linked article. Most cognitive development happens in the early formative years. Yes, learning itself never stops, but its primary period is perhaps the first 25 years of someone's life. You NEED to make mistakes and learn from them during this period. If you are offloading work that your brain was supposed to do here, it's extremely worrying.
>Regarding education I think AI is a huge revolution waiting to happen. Usual courses have become boring? Have future super powerful AI generate per student highly personalized programs, create bespoke video games where succeeding can only happen once the student has validated all the notions you wanted them to validate etc.
I think there is some truth to it, but you need to regulate how much AI can assist a student. It can be a patient teacher but it shouldn't replace their cognitive abilities. That is the whole point.
I do think there's a solution to this, kind of, which dramatically reduces the probability of broad inductive biases taking over: ask questions with narrower scopes, and ensure you're the one driving the conversation.
It's true with programming as well. When you clearly define what you need and how things should be done, the biases are less evident. When you ask broad questions and only define desired outcomes in ambiguous terms, biases will be more likely to take over.
When people ask LLMs to build the world, they will do it in extremely biased ways. This makes sense. When you ask them specifics about narrow topics, it is still a problem, but a greatly mitigated one.
I suppose what's happening is an inversion of cognitive load: the human takes on more of it and selects the biases, so that the LLM is less free to do so. This is roughly in line with the article's premise (maybe not the entire article, though), which is fine; I think I generally agree that these are cognitive muscles that need exercising, and letting an LLM do it all for you is potentially harmful. But I don't think we're trapped with the outcome; we do have agency, and with care it's a technology that can be quite beneficial.
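A toy illustration of the scoping point above (the prompts are invented for this example, not taken from the thread): the broad request cedes every design decision to the model, so its defaults and biases fill the gaps, while the narrow one pins those decisions down.

```python
# Hypothetical prompt pair illustrating narrow vs. broad scope.
# The broad prompt leaves architecture, language, storage, and UI choices to
# the model's defaults; the narrow prompt fixes them, leaving little room
# for the model's own inductive biases to steer the outcome.

broad = "Build me something to track my expenses."

narrow = (
    "Write a Python function monthly_totals(rows) that takes an iterable of "
    "(date_iso, payee, amount) tuples and returns a dict mapping 'YYYY-MM' "
    "to the summed amount. No I/O, no classes, standard library only."
)
```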
Then I saw someone's Show HN post for their own vibecoded programming language project, and many of the feature bullet points were the same. Maybe it was partly coincidence (all modern PLs have a fair bit of overlap), but it really gave me pause, and I mostly lost interest in the project after that.
I'm not sure why this is at the top of the page; it's not that it's wrong, it's just a sequence of truisms.
Isn't this whole thesis negated by the fact that tool-calling web search exists? This just feels like a whole lot of words to say: don't treat an LLM as an always-up-to-date, infallible statistical predictor.
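For readers who haven't seen it, a minimal sketch of what tool-calling web search means; the client and message shapes here are hypothetical, not any real SDK:

```python
# Hypothetical tool-calling loop: instead of answering purely from its
# (possibly stale) weights, the model can emit a structured request to search
# the web, and the caller feeds the fresh results back before it answers.

def answer_with_search(question, client, web_search):
    reply = client.chat(question, tools=["web_search"])  # model may request a tool
    if reply.tool_call == "web_search":                  # it chose to search
        results = web_search(reply.tool_args["query"])   # fetch current documents
        reply = client.chat(question, context=results)   # answer grounded in them
    return reply.text                                    # or it answered directly
```

The catch, as the replies below note, is that the model has to decide to search, and it often doesn't.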
Probably just 95% of the users. You know, the non-techies.
It will not only answer confidently and incorrectly, but it will also fail to web-search in obvious scenarios where it should.
The words here aren't meant as a warning for people in this type of community falling victim to this type of thing; it's more for the general public that doesn't grasp the tools they are using, the people who won't ever wander across this article.
This, I think, is a huge reason we really need to jump into LLM-basics classes or something similar as soon as possible. People that others consider "smart" will talk about how great ChatGPT or something is; then another person will try it out because someone they respect must be right, hop on the free model, get an absurdly inferior product, and not grasp why. They'll ask something that requires a web search to augment the info, not get that web search, and assume the confidently incorrect agent is correct.
The thesis is also, I think, not entirely about not having modern info at query time; it's more scattered. Someone asks what product they should use to mash potatoes, and a tool is suggested. Everyone who asks then receives that same recommendation, and instead of having a range of different styles of mashing potatoes, we all drift closer towards one style, and the range of variance in how food is prepared is slowly lost.
(At present, Gemini's question-answering capability (which Google kind of makes its users use) seems extremely error-prone -- much worse than competing LLMs when asked the same question.)
I recently saw a video discussing a researcher who published a fake scientific article about a fictitious disease, with bogus author names, even a warning IN the article itself that stated "This is not a real disease, this article is not real" (paraphrasing) but still AI ended up picking up this article and serving information from it as if it was a real disease.
It even got cited in papers (which were later retracted, of course), but the fact those papers got published in the first place is a serious issue.
Isn’t a lot of pretraining done by chopping sources up into short-context-window-sized pieces and then shoving them into the SGD process? The AI-in-training could be entirely incapable of correlating the beginning with the end of the article in its development of its supposed knowledge base.
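A minimal sketch of the chunking described above; the window size and pipeline shape are illustrative assumptions, not any lab's actual recipe:

```python
# Pretraining corpora are commonly tokenized, concatenated, and split into
# fixed-length windows. Tokens further apart than one window never co-occur
# in a single training example, so a disclaimer at the top of an article and
# a claim near its end can land in different chunks.

def chunk_for_pretraining(token_ids, window=2048):
    """Split a long token stream into context-window-sized training examples."""
    return [token_ids[i:i + window] for i in range(0, len(token_ids), window)]

article = list(range(10_000))              # stand-in for a 10,000-token article
examples = chunk_for_pretraining(article)  # 5 independent examples
print(len(examples))                       # -> 5
```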
For example, a possible trajectory might be that, many years in the future, because human thinking has degraded due to AI-assisted cognition, most people get a chip implant and AI assistance becomes integrated with the brain. Basically the same pattern as most everything else: technological augments solve for the new reality. I'm not saying this will happen; it's just a possible outcome.
Would you attempt to, for example, simultaneously modify for available ingredients and number of diners, and time-optimize the prep method, for a recipe you've never cooked before if you were following an old-school cookbook? No. You'd have to be a pretty solid chef to try all of that at once.
Using AI, you might branch out confidently into new areas, executing all of these modifications simultaneously, and even adapting the output for a specific audience or language.
This toy example shows an important property of AI as a decision-support system, which is well studied in the military domain: using these systems, we build the confidence to act in unfamiliar domains, thereby extending our reach. From this experience we can learn more. The fact that the learning may then occur through (i.e., during or after) the experience, rather than beforehand, is secondary. It's still there. The fact we didn't know the language the AI translated into for our chef is totally irrelevant.
Sitting comfortably at the effective apex of millions of years of human cognitive and technology development with the entire world's knowledge at our fingertips, every day we can extend confidence in novel domains through AI, and enjoy it. We should be feeling pretty damn "developed".
Rote formalism and fixed paths in pedagogy are gone: good riddance. This is the hacker age.
Slightly FTFY.
But I am not sure you can compartmentalize the specific skill we can out-source to AI. I would not agree with "you don't need to be able to think in your head."
Some points:
1. Technological inventions are not repetitions of the same phenomenon. Each invention is its own unique event; you cannot generalize the experience of previous inventions to understand the effects of the latest ones.
2. Socrates may have been in large degree right. Imagine that you and your society have been locked in the sewers, condemned to wade in shit for so long that you and your ancestors long ago forgot what fresh air feels like. What would you think about your life? Would you think "this is horrible" or "this is fine"? Or maybe "I enjoy the smell of shit and we're so much better off because we don't have to worry about sunburn"?
Cumulatively, knowledge work (including, in particular, curating knowledge) is exceptionally energy intensive from an evolutionary standpoint. It does pay dividends, clearly, but to get compounding effects from it, being able to efficiently pass down big corpora of facts, ideas, processes, etc., is an absolute necessity.
Writing systems are the fundamental way through which we can do this. They worked for us for millennia, and we eventually built upon them to develop encodings used today to store information remarkably densely.
2. Imagine a hunter-gatherer is time-travelled to 2026. You go to a cafe and have lunch with him, and he learns that food is cheap, delicious, and abundant. He sees your house and thinks it's amazing compared to his cave. He thinks that 2026 must be absolute paradise. You explain to him: well, kinda, but also not really. Is the hunter-gatherer right?
He sees you spend your day working but rarely get to go outside or do anything active. Even when you're not working you sit behind a desk staring at a screen.
He wonders why you bother with all the technology when it made your life worse. Is he right?
No. It's not a phenomenon with a pattern. Maybe there's a coincidental pattern to some subset of inventions, but there's no logical reason it would apply to some arbitrary next invention (e.g., the pattern of biotechnology inventions having allowed us to live longer and healthier lives... until some guy invents an experimental pathogen that wipes out the species).
> 2. Imagine a hunter gatherer is time travelled to 2026....
You're kinda missing my point. Many people smugly assume the present is better than the past, and can point to cherry-picked this-and-that to feel confident about their claim. But almost every modern person has no sense of what was lost, and what prior generations mourned losing. There's a temptation to smugly dismiss the thoughts of those who lived through those transitions as stupid and ignorant, but they have insight that's no longer available to us first-hand.
Some of the inventions we're so proud of may not have had a net-positive effect on our lives, but we no longer have the experience to realize that (just as someone in a community that's been living knee-deep in shit all along doesn't have the experience to realize it's a terrible life compared to his distant ancestors').
I don't remember phone numbers anymore. If I were to lose my phone, or the cloud, I'm SOL re-adding everyone.
I remember a few numbers of my most direct contacts and depend on backups for everything else.
This is how I for one understood this.
I'd probably start with "who locked us in this sewer?"
Changes in what humans need to remember and do have, for as far back as we have written records, changed the skills humans hone over time. They change our fitness function. Some of those changes are bad for a while, and then get better. Others are just far better at all times. Others might get rejected. Either way, it takes a long time before we know what a technology does to us: see how cheap printing is directly linked to the wars of religion.
So it's not that AI could not be bad in the short run, or even in the long run: it appears to be the kind of technology one cannot evaluate without significant adoption, and at that point we are on this rollercoaster for a while whether we want it or not. See social media, or just political innovation, like liberal democracy or communism. We can make guesses, but many guesses made early on look ridiculous in hindsight, like someone complaining about humans relying on writing.
Writings are subject to known biases such as publication bias, and so relying on them reduces the range of what you can consider.
Therefore, writing is bad for the same reasons that this post thinks that AI is bad.
https://classics.mit.edu/Plato/phaedrus.html#:~:text=there%2...
Looks like even back then, they went "cool story bro" on that text...
This could be describing an internet argument where both parties google for expert articles that seem to support their point of view without really understanding anything about the subject.
Likewise with AI the appearance of reasoning without the substance could lead to boring exchanges of plausible slop rather than meaningful discourse.
Simply put, at humanity-wide scales, written information is by far the most important thing you can have. There is a kind of Sorites paradox occurring, where individual knowledge that can be held by one person conflicts with systems knowledge that has to be redundant and easily transferable.
I’m not sure where LLMs lie on that spectrum. They allow faster access, but it also feels more limited.
Before the written word, the uneducated had to just take the words of the (apparently) wise as an authority on all matters, and the only access to their knowledge was through conversation with them. That's gatekeeping and siloing in one go.
And authorities' thoughts themselves often become 2D slices of knowledge once they stop continually keeping themselves up to date on the state of the art. Even if they do keep themselves updated, each conversation you've had with them (or what a layperson can recollect of it) is a thin 2D slice of that knowledge.
I can think of practically no ways that written expertise is not better.
No. Without the written word, this criticism would not have even existed. There would have been no point to make it. Who criticizes something that doesn't exist?
Also thanks to Mia (she/her), this was a very interesting read.
I was thinking about this recently: The difference between systemic (systematic) learning and opportunistic learning.
AI enables opportunistic, or just-in-time (JIT), learning. It gives the impression of infinite knowledge.
Most general concepts are well within the grasp of human understanding.
My curiosity re: the difference between systemic vs. opportunistic learning was about the effect of longer-term exposure to, and use of, a tool that enables opportunistic learning.
- Gemini
LLM tell right there.
> - Gemini
Yes, we already know. I suppose you think posting AI slop in this context is funny. It isn't.
Also, no, the observation is not sharp. You're being gaslighted and having your cock fluffed by a machine.
A typical deli sandwich in the US should be enough to last any normal person three days. Same goes for e.g. ice cream from Shake Shack (random example, I know, but one I came across recently). If you buy one of these and eat it in one sitting, the answer to "why am I obese" is simply "you eat way too much."
A Subway sandwich is about 600 calories. That's about a third of the standard 2,000-calorie daily intake. A Shake Shack shake tops out at 1,010 calories, half of the daily norm.
As for Shake Shack, a single shake is half of the daily total calorie intake? Are you listening to yourself right now?