Discussion (88 Comments). Read Original on HackerNews
In the past there was an implicit contract for white-collar employment, based on the concept of earned experience through a period of manufacturing-type work. You enter your profession by performing uninteresting, low-paying manufacturing tasks (such as writing boilerplate code or performing low-level quality assurance) while you gain domain expertise and the perspective necessary to perform high-value work at a higher level.
LLMs are now exceptionally good at consuming 20% of an employee's entry-level responsibilities.
What I see happening in the enterprise is that management is using AI to justify pulling the ladder up and closing the door behind them. When a senior engineer's or senior analyst's productivity has increased by 30% due to using LLMs, the executive's response is typically not "great, we have more time to work on bigger projects," but instead "great, we can freeze junior hiring for two years."
The entry-level positions in the labor force are being automated, sharply reducing access to those roles for the Gen Z workforce. At the same time, most senior-level positions are not available to Gen Z workers, as they lack the skills and experience required to qualify for them.
Stagnation in the adoption of artificial intelligence (AI) technology is the direct result of having no entry- or junior-level employees working under senior staff, which creates a bottleneck for seniors. Employees generating raw output with AI have to check the results for accuracy before integrating them into work systems and processes, and there are no entry-level employees left to assist the senior workers.
Gen Z workers do not dislike the tool (AI); they dislike how the tool is currently being implemented and used. Right now, the implementation of AI is driven by cutting labor costs rather than by training and developing Gen Z's human capital for future use.
Sue me, I have that right.
I still haven't found a single person willing to go to the movies, and watch an AI movie. If it wasn't made by a person, there is no 'personal'-ity to it. It's just bland.
Eventually things will slow and slide back to thoughtful first, crapload second.
The last 27 Marvel movies might as well have been written by AI; plenty of people have been to see those.
I hope AI follows the same path and diminishes. Still available, but only where it makes sense.
AI generated films are almost certainly going to have at best mediocre acting/writing/direction and will almost certainly just recycle ideas. I hope AI films flop so hard that studios end up shunning them.
I feel like a lot of the stuff my nieces listen to is AI music. It's like a hodgepodge of popular songs with little rhyme or reason. Very 'sloppy', but if they like it....
It's hard for me to confirm if they really are AI or not. But I'm willing to bet that (random Roblox game they're interested in today) == heavily AI made. Maybe there's some real human effort here or there but I have heavy suspicions.
Didn't we all start as kids listening to music so formulaic that it could just as well be AI-generated? A subset of people iteratively refines their music tastes, starts listening to everything from bebop to obscure Canadian hardcore bands, and learns to recognize quality in music.
When I started in tech, at the dawn of the internet, it was an exciting field full of hope and the promise to empower and enrich the lives of people. Tech now is largely the opposite.
Enshittification is making things progressively worse. Tech companies are creating systems and tools riddled with dark patterns to ensure you no longer own anything and are under constant surveillance, while populations at large are manipulated through the magic of propaganda and illusory truth. Even the productivity gains are perversely used not to give people more time through fewer work days and hours, but to give them more work. People are losing their connection to others and to the world around them.
Everyone tends to focus on Orwell’s 1984, but I find Fahrenheit 451 to be the more prescient book. I used to be annoyed by the book people’s choice to leave society and wait for it to collapse so they could help rebuild. In my mind, they should have been mounting a resistance. Fair to say I understand the book people’s perspective so much more now.
Some of them are quite happy! Others are miserable; many are abused. It's a high-control group that raises its children to believe the outside world is a terrible, scary place and that the group is the only safe place to be.
Many people are happy in cults, or they couldn't function. But that doesn't mean that the cults are, overall, a positive thing.
And they were all right.
Interesting results regardless, when they compare the shift from 2025 to 2026.
I love the cognitive dissonance.
Even in the best case scenario where the generated wealth will be distributed, and somehow we will be able to keep them in check (unlikely), what would be the point of life in a world where machines can best us at everything?
And all the benefits that brings. Not just in raw economic terms, but in quality of (family, community, recreational, commercial, ecological, medical) life.
Kind of hard to imagine it will suck if another order-of-magnitude leap along that long line happens.
A bit of a tangential anecdote from my dad, who is a retired biologist. He was one of the first in the department to use a computer in the 1970s, and he wrote some programs to do tedious calculations that previously had to be done by hand and took days of human labor. Even a 1970s computer could finish the calculations with his programs in a few minutes.
His boss, an older tenured professor, could not believe that 'these damn computers' could possibly be right. Doing the same calculations in a few minutes? Impossible. So for a few weeks (or months, I forget), he redid all the computer's calculations by hand to prove that the computer must be wrong.
One day he comes to my dad and says "can you show me how to use one of these computers?"
The world is changing quickly. Our most coveted defining traits - our minds - are under attack. This is a technology that seeks to replicate your thought processes and critical thinking and then to execute it at machine speeds.
If you think this is like the industrial revolution, you're actually right. We're still replacing animals with machines. But now we are the animals.
Anything other than a serious discussion about UBI or a post-labour economy is a joke. This is technology that aims to displace most of us.
It won't be Marvin saying, "Oh god I'm so depressed, what's the point?" We'll just start killing each other in massive numbers cause, well, if you can't create anything and there isn't enough for everyone, what else is there to do but fight over what there is
It's being deliberately gatekept from us by the wealthy, and by those who believe that no one should be allowed to have anything they haven't "earned".
The tragic thing is, to the extent that you're right, people will probably mostly kill other people who have nothing, rather than turning their anger and violence where it truly deserves to go: the rich bastards who want to own everything and prevent the rest of us from having anything.
Everyone in America is now fed and most children grow up spending a ton of time with both parents. This is because of automation greatly raising productivity and bringing costs down throughout the 20th century.
It's easy to think things are terrible, but they are actually insanely good. Just 100 years ago life was horrible for basically everyone by today's standards, now it's not.
AI will continue the trend, raise productivity and bring costs down. Now it's for white collar output, instead of manufacturing and agriculture.
The labor force disruption will be painful, as it always is, especially in a country without a strong social safety net, but things will be better on the other side because we just made a ton of work more efficient and can produce more with less.
We shouldn't throw the baby out with the bath water just because it affects us this time...
But that is not going to happen. We would need tangible, meaningful productivity improvements for it to be possible. LLMs are moderately useful, but companies adopting them see no real productivity increase (they do see a cost increase, however).
In many ways it is a bullshit technology, being marketed way beyond its capabilities.
The main social problem with automation in general has been that less intelligent people are left behind, as only boring physical tasks remain for them to do, and people with the prospect of an office job don't generally want to go back to work that destroys their bodies.
At some point, frontier AI will only be worthwhile to use for super highly intelligent and motivated AI researchers, which is a tiny part of the population.
May I also add that this isn't just (or at all) about intelligence.
I'm lucky enough to be at a company where I have a large budget in terms of what I can spend in tokens. This gives me an enormous advantage over someone who is just as intelligent as me and who has the same experience as me minus the interaction I have with LLMs.
In this case the crucial difference is not intelligence; it's that I found myself in the right place to be able to move up, whereas a lot of people who are otherwise like me didn't get that opportunity, through no fault of their own.
People tend to attribute their successes to their own merit and their failures to happenstance, but if we're honest with ourselves the real world has a lot of randomness in it.
I guess cynicism is trendy.
It's not an anomalous sense of cynicism; hundreds of thousands of people are looking at their options and feeling hopeless. I'm glad I am not in that camp. The reason I'm not is that I was born sooner than they were. I don't blame them at all; it's looking a lot like the generation after them is cannon fodder if things trend the way they are now.
I would tell them this is the problem to fix. Taking your anger out on AI is the most shortsighted thing. When faced with a powerful new capability, disavowing the capability instead of enabling society to leverage it is absurd.
AI is fundamentally the automation of labor, and we can all see the incredible fruits we all reap from similar past leaps in capability.
Structure your society for a post-labor world. Don't halt the progress that has dramatically improved the human condition. To do so is a disservice to the species and all future humans - concretely, your own loved ones and especially your children.
You clearly accept this as Progress, but isn't the core debate here that it doesn't improve life for humans?
UBI also won't fix things. The post-AI world that US tech CEOs want us to imagine is not a utopia. The US manufactures almost nothing on the world scale. Our biggest contributions to the world economy were things like farm goods (which are in peril), fuel (which most countries are trying to phase out for environmental and recent geopolitical reasons), and software, which will be commoditized through AI. Anything the US can manufacture, China can do better, cheaper, and faster. Manufacturing hasn't been in our culture for decades, our infrastructure is shoddy, and it will be shoddier once data centers spin up and more wealth is concentrated among people who do not pay any taxes.
Gen Z and those coming after have no chance at a sustainable life if the billionaires get what they are asking for. Also, in a capitalist society, asking them to sacrifice their lives for the good of others is hilarious, especially if there is no foreseeable good to come after.
Of course no one sees it as a collective achievement when the announcements are aimed at either scaring people about how even the team behind them is worried about releasing it or for CEOs to replace workers.
Artemis II, at least in the States, was an example of people genuinely feeling collective achievement. There is absolutely no reason this AI moment couldn't be that. Instead, though, the companies involved have explicitly chosen fear and capital as their marketing tools. We should be seeing this as an incredible time, but those involved do not want us to and plan to keep the spoils for themselves, so we shouldn't.
> But instead we're seeing them explicitly marketed as tools for capital centralization.
And labor automation, which is the single most valuable thing any technology can do. But if your answer is "kill the technology" instead of "structure society to live with it," of course you will experience pain.
It is a completely coherent position to like most technological progress, but at the same time be critical of some uses of ML/AI.
You are just making straw men here by suggesting that people that are critical of AI are critical of all technology.
Well, yes, but if humans need to stay in the loop (as with most previous automations of labor), it is also moving the means of production into the hands of a small number of tech companies. In 2010 or 2020, anyone with a laptop could create a startup. It might be the case that in 2030 you can only do so if the major frontier model providers allow it and don't make it so expensive that it's only usable by entrenched players.
I am not fundamentally against AI, on the contrary, but I think the models should be in the hands of the wider population (i.e. open weight models), so that everyone has the means of production and can benefit from the automation. Also, it would only be fair, since the models are trained on the collective output of humanity. Of course, there are several barriers currently. There are pretty good open models, but running the near-frontier versions requires a lot of capital in the form of GPUs.
"AI" is an achievement alright (so was designing a nuclear bomb), but if it is allowed to further gut the middle class, lowering wages, and hence spending (and tax receipts, to extent that matters any more) then it will only hasten the spiraling of the US economy down the toilet.
I wish Gen Z channeled their anger into making distributed AI instead of turning their backs on the problem or doing protests that will get nowhere since Boomers are still the biggest voting block.
small + local + distributed
Where is the Gen Z hacker movement? The very few into AI are all sellouts wishing they could join a big lab.
Many computer users run a modified version of the GNU system every day, without realizing it. Through a peculiar turn of events, the version of GNU which is widely used today is often called Linux, and many of its users are not aware that it is basically the GNU system, developed by the GNU Project.
There really is a Linux, and these people are using it, but it is just a part of the system they use. Linux is the kernel: the program in the system that allocates the machine's resources to the other programs that you run. The kernel is an essential part of an operating system, but useless by itself; it can only function in the context of a complete operating system. Linux is normally used in combination with the GNU operating system: the whole system is basically GNU with Linux added, or GNU/Linux. All the so-called Linux distributions are really distributions of GNU/Linux!
Software was really hard pre-2010. You actually had to study it because there was no AI, no Stack Overflow, no npm, etc. You had to learn how to write code the hard way, typically from people who already knew how or from textbooks, and, more importantly, learn how to solve real problems, often applying maths (i.e., you couldn't import a library to find the shortest path in a graph).
Similarly, video editing, graphic design, 3D modelling, and music production were some other fields that were really hard. Again, there were no YouTube tutorials or AI, and even the software itself was limited compared to what we have today. You had to spend years learning the craft, which meant the skill difference between those who had put years into their thing and those who had not was enormous.
I miss that world so much... I liked not being good at things and finding people who had what seemed like inhuman talent at things. I had a friend who was insanely good at graphic design and the stuff they'd send me would blow me away. The level of detail and precision didn't even seem possible to me. But now I can generate something almost just as good with AI.
Other examples would be how the output of people who spent years practising music is now indistinguishable from that of someone with AI, or how people who spent years learning Blender are producing models indistinguishable from someone with a Meshy subscription.
There's just no reason to dedicate yourself to anything anymore and even if you did you're probably not going to get a job anyway.
I am a hardcore AI doomer, but assuming the doom scenario isn't on the table and we simply see a concentration in wealth and mass white-collar job losses, I know I'd probably be fine or maybe even benefit from that because I grew up in a time where it was hard but very much possible to acquire a talent and use it to build wealth. Gen Z on the other hand stand no chance.
Today's job market feels corrupt, a product of pure luck. You either get extremely lucky and somehow land a good job, or you know someone who can get you through the door. In the last year I've interviewed some insanely talented people from the best universities, and we decided not to employ them because we just don't need to. It's honestly hard for me to comprehend being that motivated and working that hard only to struggle to find an entry-level job at the relatively mediocre company I work for...
We need to question whether more productivity is always good. It seems to me the way productivity is distributed is essential. If it's largely just corporations benefitting from the productivity gains, then we're creating a world that's not suitable for humans: a world in which productivity, and therefore wealth, concentrates among fewer and fewer people, while the average person struggles to find ways to demonstrate their employability. If AI is creating a world that is much richer by some metrics but much poorer by most of the metrics the average person cares about, then is it even a technology worth having? Why would Gen Z consent to the world we're building and not seek to overthrow (rightly, imo) those who created it? Technology is supposed to make our lives better, not make them harder and financially suppress us.
AI psychosis is real and the billionaires who own the AI chatbots know this.
For AI researchers, it is an understanding of what "intelligence" is and the emergence of an autonomous system that surpasses all human capabilities and learns over time.
For most AI labs like OpenAI and their investors, it used to mean an intelligent system that surpasses all human capabilities at economically useful work; then it meant $100B of profits; now it is an IPO.
For Big Tech, it is digital employees and AI data-centers to "streamline" operations.
To everyone else, it is mass job displacement and unemployment.
For GenZ, it is "permanent underclass".
So it depends on who you are talking to and varies. Therefore "AGI" at this point is meaningless.
Sell NVIDIA!!!
31% seems remarkably high. Here we seem to be running up against the limitations of statistics. It is hard to interpret whether this is a scared-and-angry sort of angry or if there is something AI-related happening that is making them angry. I might have been lucky in my experiences, but generally if people get angry there is a reason other than "things are changing".
Most people who aren't in AI see, plain as day, how everything AI touches is turning into the digital equivalent of flimsy IKEA furniture. The main selling point of AI so far is that it makes things cheaper to produce while still looking good at a glance.
"The thing I used to like costs the same or more but is now cheaper quality and worse and they think I'm dumb enough not to notice" really isn't a selling point, but pretty much the universal western post-2008 experience, and nothing quite embodies this transformation like AI.
But yeah, you also have all the AI CEOs chewing the scenery like Jeremy Irons in the DnD movie which really hasn't done the image of AI any favors either.
There are at least some redeeming features of AI, but I think it's become this scapegoat for a lot of things that it touches that are also larger unsolved problems with the economy, and it's even used that way, e.g. to motivate layoffs that would otherwise signal to investors that a company isn't doing as well as they'd like you to think.
I really love this comparison. Everyone bitches about Ikea, but at the end of the day unless you're rich as fuck then "buying new furniture" means either Ikea or some other shop that adopted exactly the same business model, because we all know that the price/quality ratio is unbeatable. Ikea furniture can easily outlive you as long as you pick the correct product for your use case. "I put my fat ass on a dining table that's explicitly marketed for light distributed load and it broke in half, boo-hoo Ikea bad" like no shit, if you need a table you can stand on then choose one with extra support beams, Ikea has these too. "But if you disassemble and reassemble Ikea it falls apart" okay cool but the cost of transporting old furniture to your new house is often higher than just buying new furniture anyway. Not to mention that the chances that your old furniture will match your new house are pretty much zero.
This translates to engineers not being able to grasp the concept of "good enough", where the end user doesn't care about quality improvements beyond a certain threshold. Cue the audiophiles remaining perplexed to this day as to why nobody uses 24-bit FLAC.
That's my personal impression of the anger. It's not so much Luddite anger; it's like Clippy anger and millennial anti-Boomer anger mixed together.
It's like a twist on the Turing test, where some humans can't tell the difference between a human and a computer, but others can, and they tend to be younger on average. The Turing test ironically ends up telling you more about the person taking the test.
Silicon Valley’s leaders have been one-upping themselves on messaging to the public that they’re building a doomsday device. And then, bewilderingly to the outside, all of us who see through that bullshit appear to merrily go along with the apparent suicide pact.
Most Gen Z, it appears, can also see through the bullshit. But about a third of them taking the message sincerely seems par for the course, and as you said, I wouldn’t assume it’s just aversion to change.
What I can't decide, for Anthropic, OpenAI, and xAI, is if the part which is BS is that they don't take the doom risk seriously at all*, or if the BS is that despite taking it seriously they think they are best placed to actually solve the doom. Or both.
Meta at least it is obvious they don't even understand the potential of AI, neither for good nor ill.
Google and Microsoft seem to be treating it as normal software, with normal risks. If they have doom opinions, they are drowned out by all the other news going on right now.
* xAI obviously doesn't care about reputational risk, porn, trolling, propaganda, but this isn't the same question as doom.
Where did you get this notion? Did you hallucinate it?
Thirty-one percent being smaller than half.