
Discussion (241 Comments)
ML promises to be profoundly weird - https://news.ycombinator.com/item?id=47689648 - April 2026 (602 comments)
The Future of Everything Is Lies, I Guess: Part 3 – Culture - https://news.ycombinator.com/item?id=47703528 - April 2026 (106 comments)
The future of everything is lies, I guess – Part 5: Annoyances - https://news.ycombinator.com/item?id=47730981 - April 2026 (169 comments)
The Future of Everything Is Lies, I Guess: Safety - https://news.ycombinator.com/item?id=47754379 - April 2026 (180 comments)
The future of everything is lies, I guess: Work - https://news.ycombinator.com/item?id=47766550 - April 2026 (217 comments)
The Future of Everything Is Lies, I Guess: New Jobs - https://news.ycombinator.com/item?id=47778758 - April 2026 (178 comments)
(The first title was different because of https://news.ycombinator.com/item?id=47695064, but as you can see, I gave up.)
The comparison to the adoption of automobiles is apt, and something I've thought about before as well. Just because a technology can be useful doesn't mean it will have positive effects on society.
That said, I'm more open to using LLMs in constrained scenarios, in cases where they're an appropriate tool for the job and the downsides can be reasonably mitigated. The equivalent position in 1920 would not be telling individuals "don't ever drive a car," but rather extrapolating critically about the negative social and environmental effects (many of which were predictable) and preventing the worst outcomes via policy.
But this requires understanding the actual limits and possibilities of the technology. In my opinion, it's important for technologists who actually see the downsides to stay aware and involved, and even be experts and leaders in the field. I want to be in a position to say "no" to the worst excesses of AI, from a position of credible authority.
You say that in a way that makes it sound like automobiles have no positive effect. I don't agree: they have some negative effects, but overall they are a vast net positive for everyone.
The upsides of automobiles generally all exist outside of the 'personal automobile', i.e. logistics. These upsides and downsides don't need to coexist. We could reap the benefits without needing to suffer for it, but here we are.
Yes, you could say that, though I'm not sure who would actually say that seriously.
We can argue about whether this is a good trade off, but the claim that cars make everyone's life better is straightforwardly false.
The only way you receive food (except from your backyard inner-city garden?) is through people DRIVING. The way you receive packages is by DRIVING. The city infrastructure you enjoy is maintained by skilled laborers and tradespeople DRIVING.
I'm not sure what the alternative would be. Maybe everyone lives in giant 10 million+ population cities that are all connected to each other by rail (and rail connects all airports, harbors, etc.) and then you have to show up at rail station to get your groceries or whatever else?
The problem is we are numb to it. 40,000+ people are killed in car accidents every year in the USA alone. Wars are started over oil and accepted by the people so they can keep paying less at the pump. Microplastics enter the environment each day, along with particulates from brakes and exhaust. Speaking of exhaust: global warming. Even going electric just shifts the problems, since we need to dig up lithium, the new oil. We still have to drill for oil for plastics, and for metal refining, recycling, and fabrication.
All they saw was that trips taking a day could now be done in an hour and produced no manure, and that meant suddenly you could reasonably go to many more places. What's not to like? A model T was cheap, and you didn't even need to worry about insurance or having a driver's license. Surely nobody would drive so carelessly as to crash.
*well, not technically nobody, but nobody important.
What's really interesting is that you can find newspaper columns from the 1920s recognizing what we now call induced demand: even by then it was clear that adding road capacity simply inspired more people to drive.
Today we have a much better understanding of the world, so we have the means to think down the line of what the negative effects of LLMs and course correct if needed.
I don't see anything positive about being forced to participate in this car-ownership game where 99% of North American cities are designed around car ownership, and if you don't own a car you're screwed. I don't WANT to own a car, I don't want to direct countless thousands of dollars to a car note, car maintenance, gas, etc. I want the freedom to exist without needing to own an absurdly expensive vehicle to get myself around. There's nothing freeing or positive about that unless all you've ever known and all you can imagine is a world in which cities are designed around cars and not people.
The automobile was a revolutionary tool, but I think it has been overprescribed as a solution for the problem of transportation.
The grips of capitalism and consumerism have allowed for automobiles to become a requirement for living nearly everywhere in America except for the densest of areas.
I love cars, I enjoy working on them, driving them, the way they look, the way they sound and feel. They do offer a freedom that is unparalleled, and offer many benefits to those who truly need those guarantees.
Ultimately, to me they are a symbol of toxic individualism. I would be happy if we could move on from them as a society.
We (or lobbyists) resist having carbon costs included in the prices we pay at the pump.
This tech is 100% aligned with the goals of the 0.001% that own and control it, and almost all of the negatives cited by Kyle and likeminded (such as myself) are in fact positives for them in context of massive population reduction to eliminate "useless eaters" and technological societal control over the "NPCs" of the world that remain since they will likely be programmed by their peered AI that will do the thinking for them.
So what to do entirely depends on whether you feel we are responsible to the future generations or not. If the answer is no, then what to do is scoped to the personal concerns. If yes, we need a revolution and it needs to be global.
It can't. It can't even deal with emails without randomly deleting your email folder [1]. Saying that it can make decisions and replace humans is akin to saying that a random number generator can make decisions and replace people.
It's just an automation tool, and just like every automation tool before it, it will create more jobs than it destroys. All the CEOs' talk about labor replacement is a fuss, a pile of lies to justify layoffs and a worsening financial situation.
[1] https://www.pcmag.com/news/meta-security-researchers-opencla...
The combination of these two things could lead to a situation where there is a massive, startup-dominated market for engineers who can take projects from 0.5 to 1, as well as for consulting companies or services that help founders to do the same.
Another pair of hopes is that a) the LLM systems plateau at a level where any use on complex or important projects requires expert knowledge and prompting, and b) that because of this, the hype of using them to replace engineers dies down. This would hopefully lead to a situation where they are treated like any other tool in our toolbox. Then, just like no one forces me to use emacs or vim (despite the fact that they unambiguously help me to be at least 2x more productive), no one will force me to use LLMs just for the sake of it.
Focusing on option 2 and software development, teams and companies will only downsize if the demand for software doesn't increase. Make the same amount of stuff you do now, but with fewer people.
What I think will happen is that enough companies will choose to do things that they couldn’t afford or weren’t possible without AI (and new companies will be created to do the same) to offset the ones that choose to cut costs and actually increase the amount of people making software.
I am pretty sure these are well known economic ideas but I don’t know the specific terminology for it.
If AI is smart enough to replace the 99.999% it's also smart enough to replace the 0.001%.
Energy. The key is controlling their access to energy.
But that doesn't really matter when we talk about "replacement" because these people don't "do" they simply "own".
They're not concerned about being outpaced at some skill they perform in exchange for money...they just need the productive output of their capital invested in servers/models/etc to go up.
This only works if people with "secure" livelihoods not just participate, but drive the effort. Getting paid six figures or more in a layoff-proof position? Cool, you need to be the first person walking out the door on May 1st (or whenever this happens), and the first person at the bank counter requesting your max withdrawal.
As for bank runs, no one cares. The big banks no longer need retail customer deposits as a source of capital for fractional reserve lending. Modern bank funding mechanisms are more sophisticated than that.
In which the FDIC took unprecedented action, drawing down the DIF to backstop depositors beyond the insured $250k and offering a credit facility to other banks, in order to prevent "contagion" - a panic, a bank run - which was presumed to be likely after the 3rd largest bank collapse in US history. A bank almost no one outside of California had heard of before it died.
Bank runs are serious business, and far from being something "no one cares" about, even just talking about them makes banks nervous, because they can happen to even "healthy" banks. The big banks have been undercapitalized for more than a decade, and even a moderate run on a regional institution threatens the entire system. Which is why it should be done, or at least signaled as incoming; it's good leverage.
The implicit, "I'll stay here, where I'm nice and secure," is delusion. People care about your outcomes even if you don't care about ours. Take the invitation to organize with others to secure your own future, to show just how much you're needed before your employer decides that you're not (however erroneously).

Collective humanity needs to think this matter through and take global action. This is the only way, I fear, short of natural calamities (acts of God) that unplug humanity from advanced tech for a few generations again.
What? I don’t know anybody who has a layoff-proof position.
> The people who brought us this operating system would have to provide templates and wizards, giving us a few default lives that we could use as starting places for designing our own. Chances are that these default lives would actually look pretty damn good to most people, good enough, anyway, that they'd be reluctant to tear them open and mess around with them for fear of making them worse. So after a few releases the software would begin to look even simpler: you would boot it up and it would present you with a dialog box with a single large button in the middle labeled: LIVE. Once you had clicked that button, your life would begin. If anything got out of whack, or failed to meet your expectations, you could complain about it to Microsoft's Customer Support Department. If you got a flack on the line, he or she would tell you that your life was actually fine, that there was not a thing wrong with it, and in any event it would be a lot better after the next upgrade was rolled out. But if you persisted, and identified yourself as Advanced, you might get through to an actual engineer.
> What would the engineer say, after you had explained your problem, and enumerated all of the dissatisfactions in your life? He would probably tell you that life is a very hard and complicated thing; that no interface can change that; that anyone who believes otherwise is a sucker; and that if you don't like having choices made for you, you should start making your own.
Somehow we talked AI in some depth, and the VC at one point said (about AI): “I don’t know what our kids are going to do for work. I don’t know what jobs there will be to do.”
That same VC invests in AI companies and by what I heard about her, has done phenomenally well.
I think about that exchange all the time. Worried about your own kids but acting against their interests. It unsettled me, and Kyle’s excellent articles brought that back to a boiling point in my mind.
Edit: are->our
Ridiculous. You're not acting against their interests by amassing wealth from a technology that will happen with or without you.
This is the problem in a nutshell - people are happy to do things they know are harmful for personal profit.
As someone that loves cleaning up code, I'm actually asking the vibe coders in the team (designer, PM and SEO guy) to just give me small PRs and then I clean up instead of reviewing. I know they will just put the text back in code anyway, so it's less work for me to refactor it.
With a caveat: if they give me >1000 lines or too many features in the same PR, I ask them to reduce the scope, sometimes to start from scratch.
And I also started doing this with another engineer: no review cycle, we just clean up each other's code and merge.
I'm honestly surprised at how much I prefer this to the traditional structure of code reviews.
Additionally, I don't have to follow Jira tickets with lengthy SEO specs or "please change this according to Figma". They just make the changes themselves and we go on with our lives.
If my competitors are filling their flour with sawdust, guess I got to just do the same?
At the moment I'm more looking at menial work for one of the local universities. Money is money, and my needs are small; the work is honest, I still should have a decade or so of physical labor left in me, and it carries the perk of free tuition for the degree I never had time for. I would have the time and energy to write, perhaps, even! And, however badly the people in charge are running things lately, the world will always need someone good at cleaning a toilet. (And I am already pretty good at cleaning a toilet!)
Imagine starting university now... I can't imagine having learned what I did at engineering school if it weren't for all the time lost on projects, on errors. And I can't really believe I would have had the mental strength not to use LLMs on course projects (or side projects) when I had deadlines and exams coming, yet also wanted to be with friends and enjoy those years of my life.
I think that would democratize some of the power. Then again, I haven't been super impressed with humanity lately and wonder if that sort of democratization of power would actually be a good thing. Over the last few years, I've come to realize that a lot of people want to watch the world burn, way more than I had imagined. It is much easier to destroy than to build. If we make it easier for people to build agents, is that a net positive overall?
If we make it easier for people to drive and have cars, isn't that a net positive? If we make it easier for X, isn't that better? No, not necessarily, that's the entire point of this series of essays. Friction is good in some cases! You can't learn without friction. You can't have sex without friction.
and while I know they can do the nitty gritty ui work fine, I feel like I can work just as fast, or faster, on UI without them than I can with them. with them it's a lot of "no, not that, you changed too much/too little/the wrong thing", but without them I just execute because it's a domain I'm familiar with.
So my general idea of them is that they are "90% machines". Great at doing all of the "heavy lifting" bullshit of initial setup or large structural refactoring (that doesn't actually change functionality, just prepares for it) that I never want to do anyway, but not necessary and often unhelpful for filling in that last 10% of the project just the way I want it.
of course, since any good PM knows that 90% of the code written only means 50% of the project finished (at best), it still feels like a hollow win. So I often consider the situation in the same way as that last paragraph. Am I letting the ease of the initial setup degrade my ability to set up projects without these tools? does it matter, since project setup and refactoring are one-and-done, project-specific, configuration-specific quagmires where the less thought about fiddly perfect text-matching, the better? can I use these things and still be able to use them well (direct them on architecture/structure) if I keep using them and lose grounded concepts of what the underlying work is? good questions, as far as I'm concerned.
If we haven't already, we will soon lose the ability to judge whether AI is helping humans (an overwhelming majority of them, not a handful), considering how we are steaming ahead on this path!
And that should be the core. There is a new, emergent technology: should we throw everything away and embrace it, or are there structural reasons why it should come with big warning labels? Avoiding such tools because they do their work too well may be a global-system view, but decision makers optimize locally for their own budget, productivity, or profit. If the tools are instead perceived as risky because they are not perfect, that is another thing entirely.
Having the "call your representatives" link be to your website as well isn't particularly helpful... I already can't get to it
"What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking-there's the real danger" - Frank Herbert, God Emperor of Dune
I always preferred this take:
“Civilization advances by extending the number of important operations which we can perform without thinking of them.” ― Alfred North Whitehead
It's both opposite and complementary to your Frank Herbert quote.
> “There is a cult of ignorance in the United States, and there has always been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'” ― Isaac Asimov
The easier society makes it to be unaware of the complexity of everything around us, the easier it becomes to assume everything is actually as simple as their surface-level understanding.
That said, there is no obvious reason to posit that the intergalactic feudal system, CHOAM, or the empire, came to be because of the butlerian jihad. The concrete side effects of the jihad were in fact hyper specialization of cognitive faculties in humans: mentats, guild navigators, and soldiers all possess super human specialized abilities.
On one hand I intuitively think this is correct, on the other hand these very concerns about technology have been around since the invention of... writing.
Here is an excerpt of Socrates speaking on the written word, as recorded in Plato's dialogue Phaedrus - "For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom"
(Do you not realize how crazy the entire premise here is? Imagine someone in 1975 saying that ARPANET has been up for years so everything there is to know about networking technology has probably been found already.)
It's the first step on the road to hell.
Do LLMs lie? Of course not, they are just programs. Do they make mistakes or get the facts wrong? Of course they do, but not more often than a human does. So what is the point of the article? Why is my future particularly bad now because of LLMs?
of course, non Americans never comment on American policies
Damaging machinery was made a capital offense and they had dozens of executions, hundreds of deportations.
At every stage, the steady progress of civilization is fragile and in danger of being suffocated. Its opponents cloak themselves in moral righteousness, calling themselves luddites, the green party, or AI safety rationalists. It's all the same corrosive thing underneath.
Source of this claim?
The solution is obviously some form of socialism but a lot of tech people are blinkered libertarians who refuse to put two and two together.
To take the car analogy: it matters how we use the car.
The car in itself can be used to save time and energy that would otherwise be used to walk to places. That extra time and energy can be used well, or poorly.
- It can be squandered by having a longer commute that defeats the point
- Alternatively, it can be wasted by sitting on a couch consuming Netflix or TikTok
- Alternatively, it can be used productively, by playing team sports with friends, or chasing your kids through the park, or building a chicken coop in your back yard
It’s all about wise usage. Yes it can be used as a way to destroy your own body and waste your time and attention, but also it can be used as a tool to deploy your resources better, for example in physical activities that are fun and social rather than required drudgery.
I think it’s the same for LLMs. Managers and executives have always delegated the engineering work, and even researching and writing reports. It matters whether we find places to continue to challenge and deploy our cognition, or completely settle back, delegate everything to the LLM and scroll TikTok while it works.
Yes, individuals have choices. But in a collective, dynamics occur and those dynamics can't usually be overcome by individuals.
Social media could be used differently, but the way it exists IRL is determined by the nature of the medium, the economic structure, and other things outside of individuals' control.
But the majority have always chosen the path of least resistance. This is not new! Socrates’ famous exhortation is “the unexamined life is not worth living”. People were living mindlessly on autopilot before TikTok.
I think if you want to give a call to action, as this piece does, the right call to action is “think carefully about how you can make a good use of your time and energy, now that the default path has changed.” I know it’s not as simple or emotionally powerful as “go down kicking and screaming, stick it to the man”, but as a rule of thumb, the less fiercely emotional path is usually the right one.