
Discussion (125 Comments) | Read Original on HackerNews
There are real use cases for this technology! But the idea that the generation of superficially plausible text is "the next Industrial Revolution" comes out of the same mindset that has turned a neat technology into a banal hellscape for consumers and employees. We desperately need some leadership in companies or institutions that can place this technology in its proper context, and leverage it without getting manic about it.
I proposed a while back that we should have the HN admins strip all integer counts for a week server-side, to see if the site quality improved or worsened during that time. The mods suggested I ask HN, so I did. HN loathed the idea, for every possible reason except this one: removing all those integers would be like quitting gambling cold turkey after years of pulling the vote lever every day. I'm not much less vulnerable to this than everyone else, but I still want to see it happen someday. I remain reasonably confident that our social media site's quality would skyrocket after a couple of days of our posts and comments being disinfected of make-integer-go-up jackpots.
There's the classic "I wish facebook had a dislike button" or the equivalent for twitter.
But in the thread-based forum context, removing the downvote has interesting effects. For one, it stops people who down-vote-brigade to lower visibility. It also stops the "I don't like that guy" engagement and works on a more positive "I appreciated this comment" mode.
It's not one-size-fits-all, but I've seen positive effects on more marginalized forums.
So much of social media nowadays is just low quality clips of TV shows/movies with an AI-generated song over them. Or the same Minecraft parkour map as an AI voice recites an r/AmITheAsshole post. Or AI-generated funny videos. The quality of the content doesn't matter at all.
Anyone I've talked to about how it was all just AI just responds with something akin to "I don't care if it's AI, it's funny! Let people enjoy things!"
We've been past merely plausible text since GPT-2, and it's undeniable that the technology is making waves right now and having an impact.
Just as you can't judge the impact of the Industrial Revolution by the first steam engines, you can't dismiss the impact this technology is having right now.
There was recently an article shared around here that an LLM diagnosed ER patients more accurately than doctors.
Looking beyond LLMs, there's image analysis to detect cancer and other diseases.
Like in coding, AI can and should be a useful tool for the human who decides and is ultimately responsible.
Like don't we want people running these companies to be honest to the public rather than misdirection?
[1] https://www.platformer.news/sam-altman-ai-backlash/
Ironically, this makes even less sense.
If (ostensibly) the goal of developing LLMs was so we can all create more while working less, but he also assures us there will be just as much work in the future, then what was the point of this tech in the first place?
What about any of these folks’ biographies hints that they’re capable of being honest?
The college-age students I interact with hate AI content from other people, but they love using AI for their own work.
They'll pump AI generated memes and AI altered images all day long. Then they'll use ChatGPT to do their homework and write their resume, then look for an AI tool that will spam apply to jobs for them. Then when they get the job they plan to use ChatGPT to level the playing field with more experienced, older peers.
That's not even getting into the AI entrepreneurs who think they're going to use AI to start a company or find a winning strategy to trade memecoins or bet on PolyMarket so they don't have to get a job at all.
I think the next generation is all-in on AI for their own use. They see it as their advantage over the boomers occupying all the good jobs. They think ChatGPT is their cheat code for getting into these companies and taking those jobs.
That's the only statement that's true. Admitting to AI use is unfashionable in the Western world at this time.
But how much would you like to bet that 90% of those students who were booing also used AI to do their homework for them quite often? So your takeaway would be "the AI stole their education"? No, they were dishonest, and the AI helped them cheat themselves out of learning.
Technology doesn't make anything banal or a hellscape, or fire people. Technology is a lever.
If humans use AI to produce worse output because they are too lazy to bother reviewing and iterating on it, that is a human problem. If humans are going to use AI to help them exploit other humans more efficiently, that is also caused by the human rather than the technology.
Also, the ChatGPT moment for humanoid robots is coming this year or next. It will become very obvious that AI use in these robots is not just superficially plausible text.
This is like saying a smoker can't criticize the tobacco industry. It's entirely possible to recognize that AI in school is a huge problem while (hypothetically, in this case) still using it. Indeed, if enough of your peers are using it and you do not, you are effectively being punished for being virtuous. It's a lot like being the one cyclist in the Tour de France who isn't doping.
Similarly, if your peers aren't able to keep a conversation going in a seminar because they had AI do their reading and assignments for them, then you, as a student, are having your education stolen from you in a very real way. Education is something that happens in community. When enough of your community is using AI, your education will suffer.
I will die on this hill: AI _properly_ integrated into education will be a huge improvement for students because it will enable each student to have personalized instruction and tutoring.
It seems the word "AI" inherently refers to slop now, which I find kind of tragic. The people drowning the world in AI slop were sloppers before. I can't imagine they cared more about quality before AI than they do now. They've just been given a tool that multiplies their slop.
I understand why we blame the tool. Yet I wish we'd blame the sloppers. I truly believe that AI can help people create and build wonderful things that are an expression of their own creativity and thinking. And I'm sure it is happening. It's just not as visible as all the slop.
I'm going to say up front that I'm not as familiar with this period of history as I should be, but -- would it be totally unfair to say the same of the "Industrial Revolution"?
I'm not gonna say they're equivalent by any means, but my understanding is the "Industrial Revolution" was hellish for many people. Maybe the mistake is the framing that "the revolution" or "the next big thing" is always a good thing?
They are good things. If you were an adult, male aristocrat, yes, your untouched meadows and streams got tainted. If you were a woman you stopped dying in childbirth. If you think of infants as people, they stopped massively dying.
The Industrial Revolution was good. But it also required erecting the modern administrative state to manage. People had to soberly measure the problems, weigh the benefits and risks, and then invent new institutions and ways of thinking to accommodate the new world.
That happened in the Second Industrial Revolution. The First Industrial Revolution was much less comfortable for both workers (who were given much worse working conditions) and the aristocracy (whose landholdings were much less valuable) - it was the middle class who benefited.
> The Industrial Revolution was good.
The outcomes of the Industrial Revolutions were good. The experience of living through those revolutions was mixed.
Maybe AI enables great inventions in a decade, but for now the only appeal is that multinational corporations get to fire workers and everything's filled with slop. Of course they're not happy.
I doubt it. AI seems fundamentally useful. If the guys at the top can’t get their shit together with messaging and strategy, and it increasingly looks like they can’t, they’ll be replaced before an entire generation is potentially rendered permanently uncompetitive. (And to be clear, there is no rush to adopt.)
> We desperately need some leadership in companies or institutions that can place this technology in its proper context
We need the public debate to stop being set by Altman, Musk et al. We need our generation’s Dickens, Tolstoys, Sinclairs and Whitmans.
What are the ways potential futures with AI, on the spectrum from the familiar sci-fi AGI to more-subtle forms, could work? What are the novel ways it might not? How does capitalism need to evolve? Electoral democracy? Labour organization? If I think to the last few years of television and movies, Westworld is the only one to have contributed anything original to the discourse since Isaac Asimov’s era of science fiction.
I think society will completely reshape itself over the next decades, likely with UBI and other forms of social support, and the ones who don't want to partake in the whole "AI orchestration" thing will just not have any opportunity, imo. Sad, but this is the way I see it. I truly believe it because myself and ALL the people I know have pseudo-replaced their work with solely orchestrating AI, including very complex jobs. Lately, because some of my friends asked me, I've also built "agents" that replaced their work entirely, and their employers don't even know about it (customer management, remote), which proves those jobs shouldn't even exist, as they are ALREADY replaceable. All Zoom meetings are immediately recorded, agents run a basic adversarial loop with all the common models, then proceed with doing tasks and so on; that lasts about 30 minutes and the whole week of work is done. All chats are sent directly to a triage agent as well, then the whole RAG thing, and so on.
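The pipeline this commenter describes (recorded meetings feeding a triage agent that routes work to handlers) can be sketched in miniature. Everything below is an illustrative assumption, not anyone's real system: the names are hypothetical, and simple keyword rules stand in for the LLM call a real triage agent would make.

```python
# A toy sketch of a transcript-triage loop: each meeting transcript is
# classified into a queue, routine items are handled automatically, and
# anything unclear is escalated to a human. Keyword matching is a stand-in
# for the model call; all identifiers here are hypothetical.

from dataclasses import dataclass

@dataclass
class Transcript:
    meeting_id: str
    text: str

def triage(transcript: Transcript) -> str:
    """Route a transcript to a queue. A production system would ask an
    LLM for this label; keyword rules substitute for the model here."""
    text = transcript.text.lower()
    if "refund" in text or "billing" in text:
        return "billing"
    if "bug" in text or "error" in text:
        return "support"
    return "human-review"  # anything unclear still goes to a person

def handle(transcript: Transcript) -> str:
    """Dispatch one transcript based on its triage label."""
    queue = triage(transcript)
    if queue == "human-review":
        return f"{transcript.meeting_id}: escalated to a human"
    return f"{transcript.meeting_id}: auto-handled in {queue} queue"

if __name__ == "__main__":
    calls = [
        Transcript("m1", "Customer asked about a refund on the last invoice"),
        Transcript("m2", "Strategy discussion, no clear action items"),
    ]
    for call in calls:
        print(handle(call))
```

The design point the comment gestures at is the escalation path: the automation only "replaces" the job to the extent that the triage step can confidently label the work, and everything else still falls back to a person.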
My work went from managing/developing 1 repo to 70 repos at once, answering questions like a bot 10 hours a day, evening to morning, with 8 monitors in front of my face. And I'm realistic: I know at some point I can literally replace my own self with an AI to answer for me as well; it's just a matter of time.
We need to rethink everything and the whole AI hate from the youth will not change anything about it.
I have multiple friends running pretty large businesses with 30 or more staff, and right now they are literally at the point of arguing about why they shouldn't fire most of them. It's fuckin sad, but it's the reality.
We don't talk about human intelligence in terms of "use cases". I think we need to be realistic about what AI will be in our lives; most people already can't do without it, and this will no doubt expand further.
That being said we already have relative superabundance and we're more miserable than ever, so it's not clear that more of it will cheer us up.
The current distribution of abundance is close to evil; America is reducing entitlements and support, not expanding them. Rampant waste. No reason to think any of this will change.
That sounds great, but how are LLMs supposed to achieve this? You can't just say "AI will make a utopia". You have to present a vision for how it will get us there.
I'm tired of hearing about how AI will solve all the world's problems. I want to see actual progress towards achieving these goals. And for the most part that hasn't manifested. Most people would consider AI to have had a net negative impact on their lives.
To be fair, this isn’t the commencement speaker’s job.
You can't have it both ways: either LLMs are an amazing, revolutionary technology that can replace many human jobs in unprecedented ways, or it's going to be a mild transition that really only helps people.
The assembly line was explicitly about replacing skilled with relatively unskilled labor.
I think what they are saying is "that something can replace a job does not inherently imply the next step is poverty". From that perspective, you can absolutely have it both (and many other combinations of) ways.
What actually happened in each case was that employment went up for a good long while, as the efficiency boost to the sectors touched made investment far more viable. Eventually successive rounds of automation did reduce employment in each of weaving and mining, but it wasn’t an overnight catastrophe as initially advertised or feared.
Programmers (and other workers, but this is a tech-centric forum) need to start accepting that programming was a necessary evil of the before times. We didn't have the theories. We didn't have the manufacturing techniques.
Before hardware was powerful enough to run models on a laptop we needed all that hand crafted custom state management to avoid immediate resource exhaustion. Or to hide the deficiencies of the chips of the day.
For all the appeals to tech workers to lean into a high-tech life, programming as humans did it in the before times seems pretty outdated. Bring back rotary phones too, I guess.
If we don't have jobs we are free to:
Take up arms against an exploitative political and owner class minority.
Make sure grandma and the kids are ok. Everyone has enough to eat?
Free the sweatshop kids we exploit, without giving them a choice between "the mines" and college, from obligations to our own meat suits
???? What else?
A whole lot of job culture was just busy work to satisfy the beliefs of those who are generationally churning out of life. Bye grandpa; thanks for zero assurances but tons of obligations; you won't be missed!
Elon and such are not an immutable constant of the universe. Few more years and he'll be Mitch McConnelling out on TV. Especially with all the drug abuse.
Everyone under 50 needs to prepare for the future not LARP the past.
Shows you don't need to have red skin and horns to delight in the suffering of starving people.
College graduates being that myopic and failing at such basic logic. One can only wonder about the quality of the education they've gotten and how it will help them in the modern technological world. Though being that hypocritical, maybe they would do very well.
>University of Central Florida’s College of Arts and Humanities and Nicholson School of Communication and Media
yep, clearly not Stanford.
Yes you can. They use AI and also despise it because it will turn the world into one big caste system. Ones with access to compute, and ones without.
College graduates in a rich, food- and energy-exporting democracy at the centre of the AI build-out will be on the receiving end of this transfer.
The places where there should be panic are the Middle East, Russia, and South Asia.
Avoiding a repeat of that while also increasing productivity would be good.
The Luddites were all for saving labor, but not if enshittified products and slavery to unreliable machines were the price.
Sounds pretty familiar to me.
Well, yeah.
Or, alternatively, that we need the humanities today in a fundamental, possibly existential, way. If AI is another Industrial Revolution, rise to be our Sinclairs, Dickens and Tolstoys.
Hmm, how would we measure and confirm this hypothesis?
Anyone can pick up a pencil and practice for hours a day! You can look out a window for inspiration! There is no "gatekeeping" of art, only people upset that it doesn't come as easily to them as B2B SaaS, mistaking real effort and introspection for "gatekeeping".
The AI art people were so happy to rub it in artists face, that finally, without effort or appreciation, they no longer had to pay the skilled person for an image.
https://www.youtube.com/watch?v=zwYkHS8jvSE
"Passion--let's go!" Lady read the room.
Somehow I have a feeling that the reaction would have been totally different if it had been the EECS graduates.
Fear and rejection in certain professions is real and maybe even understandable.
I imagine that 25 years ago, someone telling music graduates "streaming is the future of music distribution" would have received the same reaction.
However, there was a feeling that "the job" is radically changing right now.
The More Young People Use AI, the More They Hate It
https://news.ycombinator.com/item?id=47963163
Study found that young adults have grown less hopeful and more angry about AI
https://news.ycombinator.com/item?id=47704443