Discussion (297 Comments)
According to FRED/Indeed[1], software job openings have been roughly flat for 2-3 years, and they've actually been slightly increasing again. What data source are you looking at?
[1] https://fred.stlouisfed.org/series/IHLIDXUSTPSOFTDEVE
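If you want to sanity-check the series yourself, FRED serves every series as plain CSV. A minimal Python sketch (it assumes the public fredgraph.csv endpoint keeps its current shape):

```python
# Minimal sketch: pull the Indeed software-development postings index
# from FRED and print the most recent observations. Assumes FRED's
# public fredgraph.csv endpoint keeps its current two-column CSV shape.
import csv
import io
import urllib.request

SERIES = "IHLIDXUSTPSOFTDEVE"  # Indeed postings index, software development
url = f"https://fred.stlouisfed.org/graph/fredgraph.csv?id={SERIES}"

with urllib.request.urlopen(url) as resp:
    rows = list(csv.reader(io.TextIOWrapper(resp, encoding="utf-8")))

# rows[0] is the header; print the last ~10 observations
for date, value in rows[-10:]:
    print(date, value)
```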
I also don't see why everyone would dismiss the statements of large company CEOs about why they are making hiring/firing decisions, regardless of what some statistics say.
None of this contradicts OP's claim, because at least anecdotally, juniors/interns are getting disproportionately squeezed by AI. Why hire an intern to write random scripts/tests for you when Claude Code does the same thing? Overall job postings could therefore be flat or slightly rising, but only because everyone is rushing to hire senior/principal staff to wrangle all the AI agents, offsetting the junior losses.
They do the dirty, repetitive work, learn the systems inside out, take note of the flaws, and fix them if they are motivated and the system/process allows.
Thinking of them as replaceable, worthless gears is allowing your organization to rot from the inside. I can't believe people can't see it.
In other words the gamble of hiring expensive juniors with shiny degrees is greater to them than the gamble of not having competent seniors a few years down the line. And that risk may be overblown - people are still hiring some juniors, it's not like it has stopped entirely - so future seniors will likely just be worth more than they are currently. To some, that may be worth the risk, especially if you believe AI will continue to get stronger.
I am not saying I agree with this decision-making; I'm more pointing out the thought process. We have had to have similar discussions where I am, but we are still hiring juniors, FYI. That's basically all we're hiring right now, actually, because the market for strong juniors is very good right now.
This is how companies see all of us, though, at all IC levels.
There is an issue with execs pushing it though. You have people at the top of the company with little to no idea how people work attempting to micromanage tool usage. It is as if you had a group of execs determining what IDEs people could use.
No-one is getting fired because of AI. The start of this year is the start of companies beginning to use AI. The reason layoffs are happening is because of the massive overhiring after Covid.
How long after COVID are we going to be able to keep using this excuse? This is starting to feel like the politician blaming his predecessor even though he's been in office for years. In the year 2033, Company X lays off another 10,000, just as it did each year since 2023, again blaming massive over-hiring during COVID, ten years ago.
I am with you, but if you look at what happened after COVID, it is a big line going waaaaay up. COVID was a significant event and there is no way around it, no? The OP's comment is invalid because we are below pre-COVID levels (by miles), but COVID should be taken into account (everyone seems to use it to further some agenda by looking at just one particular aspect of what happened post-COVID).
Using claude and friends takes all the fun out of the job, so I'm not surprised engineers are not enthusiastic. It's cool for 1 month then you realize we went from solving problems and implementing algos and optimizing slow code and fixing security issues and other fun stuff, to writing prompts all day long.
At the end of the day, I don't plan to use this at daily capacity, but with all the resources poured into this, it's still underwhelming.
My company uses SharePoint, and the AI can digest all of the documents I have access to there, plus OneDrive, Teams, Outlook, etc. across my tenant. Most of the time, it's pretty useless.
There must be some reason for these two disparate experiences. It's the same product offering. I couldn't tell you.
I asked for a concept in Tango music, with a long prompt explaining what I'm looking for. It brought me back a single Spanish YouTube video explaining it perfectly, alongside a slightly wrong summary, but the video was spot on, and I got what I needed.
Then I asked for something else about a musical instrument, again with a very detailed prompt, and it gave me a very confident answer suggesting that mine is broken and needs to be serviced. After an e-mail to the maker of said instrument, giving the same model number (and providing a serial) and asking the same question, I got a reply saying that it's supposed to do that and it's perfectly fine. It turned out that Gemini hallucinated pretty wildly.
For programming I don't use AI at all. I have a habit of reading library references and writing code directly by RTFM'ing the official docs of what I'm working with. It provides more depth, and I do nail the correct usage in less time.
It’s full out mania. As someone raised in and who escaped a cult, I am having to use every tool in my very large toolbox to stay sane while I wait for this to pass and die down or make my move towards a place that still cares whether their product works.
We’re in what I would call the “dark ages” of tech. There will be a new renaissance led by those who used this as an opportunity to build skills and tools that are genuinely useful and ingenious.
If you keep a long-term horizon this is the perfect opportunity to work on a solo project in stealth mode. Or build professional connections with others who see things the way you do.
When people talk about one's salary depending on not understanding something, they are talking about exactly you. “This’ll all wash over and we’ll be back to the good old days that I’m used to” has never happened. Ever.
I don't know why people are comparing the Day-1 of one technology with the Day-1000 of another. Yes, AI is useless in many fields - NOW. But in a couple of years you won't be able to imagine doing any work without it.
Like the kids used to ask - 'How did they build Google without Google?'
Now their kids will ask - 'How did they build ChatGPT without ChatGPT?'
I expect I’ll be using LLMs now and in the future, but the public is far more right about the companies and the people running them than the tech “insiders” here.
Wouldn't it be one or the other?
On the other hand there probably also is a general correction in the market after the COVID hiring spree.
https://www.reddit.com/r/askscience/comments/1975oj/whats_ha...
The reality is most of them are so divorced from reality that they think they are infallible and AI will pick up the slack because they want it to be true.
The lack of results is felt by those using it to assist with their work daily.
AI could be a huge short-term benefit and justify layoffs now, so long as you (the exec doing the laying off) don't have to worry about the long term.
AI could have middling net benefit, but be a great excuse to justify layoffs now. In this scenario, the people laid off and those that remain bear the cost (one, losing their jobs; those that remain, burning out with the extra workload) etc etc, many scenarios to consider...
Hilariously, it's the exact same playbook as the big third-world-country-outsourcing hype from a few years ago.
I over floored several rooms in my house (UK, '20s build) with plywood before laying insulation, heating mats and laminate floor boards for the final finish. I don't have a staple gun so I screwed the boards down at roughly 600mm c/c across the floorboards and 300mm along them.
What the blazes has that got to do with LLMs?
Well, I used a nearly inappropriate method for a job and blasted through it nearly as fast as the best method! If I had used a manual screwdriver I would have been at it nearly forever and ended up with a very limp wrist. I do own an old-school ratchet screwdriver, and that would have sped things up but still been slow. I did use yellow passivated screws with sharp threads and a notch to initiate biting into the wood - rather more expensive than a staple or a nail.
So I burned through my tokens (screws instead of nails/staples) faster than if I had used a pneumatic nail/staple gun.
Anyway. LLMs are tools. They can be good tools in the right hands or rip your fingers off in the wrong hands.
At the moment, I think that an LLM needs skilled hands too. Have a casual chat - that's fine, but for work ... be aware.
I recently dumped a wikimedia-formatted table (our knowledge base is a wiki) into an LLM (on prem) and asked it to sort the list on the first column. It lost a few rows for some reason. No problem - I know how my tools work, but it was a bit odd!
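For comparison, the deterministic version of that task is a few lines of Python. A sketch only: it assumes the common single-line MediaWiki row style (`| a || b` cells with `|-` separators), not the full wikitext grammar:

```python
# Sketch: deterministically sort the body rows of a simple MediaWiki
# table by their first cell. Handles only the single-line row style
# ("| a || b" separated by "|-"), not full wikitext.
def sort_wikitable(text: str) -> str:
    lines = text.strip().splitlines()
    head, rows, current = [], [], []
    for line in lines[1:-1]:          # skip the "{|" and "|}" delimiters
        if line.startswith("|-"):
            if current:
                rows.append(current)
            current = []
        elif line.startswith("!"):
            head.append(line)          # header line
        else:
            current.append(line)
    if current:
        rows.append(current)
    # first cell of "| a || b" sits between the leading "|" and "||"
    rows.sort(key=lambda r: r[0].lstrip("|").split("||")[0].strip())
    body = "\n".join("|-\n" + "\n".join(r) for r in rows)
    return "\n".join([lines[0], *head, body, lines[-1]])

table = """{| class="wikitable"
! Name !! Qty
|-
| beta || 2
|-
| alpha || 1
|}"""
print(sort_wikitable(table))  # rows come back sorted, none lost
```

Unlike the LLM, this either sorts every row or throws; it never quietly drops a few.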
But the first part of your comment is basically saying "AI insiders think the tech is super awesome and powerful, while other engineers think it doesn't stand up to the hype." Well, if the AI is indeed not as good a tech as its boosters are saying, this would be great news for everyone scared about job losses and widening inequality - AI would turn out to be a nothing burger.
I'm not going to pass judgement either way; we'll see how it all shakes out.
I just know for me, personally, I love computers and making them do what I want and in the AI era I am somehow using them even more and doing even more.
Absolutely everyone raves about this but other than a few basic computer related tasks I’ve not seen compelling use cases that justify the billions being lit on fire trying to pursue it.
My cynical take is the crypto bros needed something to do with their useless GPUs after the crash and found the perfect answer in LLMs.
I just got back from a SAIRS conference at UCLA and talked directly with some of the presenters and engineers at Google.
You won't be 'underwhelmed' long.
People with low confidence will be super excited for AI because it solves problems they weren’t even thinking about.
Executives that don’t write code are super excited about AI because hopefully it means they can continue to hire low-confidence people, who are plentiful and cost less.
I am sitting on the sidelines watching in disbelief. I don’t use AI and don’t plan to. I used to write JavaScript for a living and still get JavaScript job alerts from a lot of job boards. The compensation for JavaScript work is starting to shoot through the roof as employers move away from garbage like React and Angular. Recent job postings are becoming fewer and are more reliant upon people with tons of experience who can actually program. Clearly AI is not replacing positions for higher talent with greater than 8-12 years of experience.
As for your promise of a great leap at some vague point in the future, that's such a widely-mocked AI industry trope at this point that it's a little embarrassing you went there.
This has been your constant mantra for 3 years now and is part of the reason people are underwhelmed.
Improving such use cases is mostly an artisanal endeavor: sometimes a few-shot prompt improves things, sometimes it improves things at the expense of kind of overfitting, sometimes structured reasoning works, sometimes it doesn't, or sometimes it works and then the latency and token count explode, etc etc....
And yet a lot of teams don't see this problem because they don't care much about evaluations, and will only find these issues in production a few months after deployment.
Are those who care about evaluation luddites?
"Yes, it sucks now, but believe me it won't be for long" spiel has been hyped for several years now.
Oh, don't get me wrong, these tools are amazing. But just yesterday a very small refactoring resulted in 480 fully duplicated lines in a 5000-line codebase (on top of extremely bad DB access patterns) despite all the best shamanic rituals this world has to offer [1].
So yeah, senior engineers especially use these tools daily, and keep being completely honest about their issues and shortcomings. Unlike the hype and scam artists.
[1] Oh, sorry. I meant to say skills, context engineering and management, memory, prompt engineering.
But here we are.
Over time and usage the limitations of a thing become apparent.
And even staying within the comfort of AI enthusiasm: Google wasn't exactly leading in this race. If you have this much confidence in what those presenters and engineers at Google told you, you now have some opportunities to make a lot of money.
Much to the opposite, I think healthy skepticism is a sign of maturity. The overeager embracing of hype cycles is extremely cringe.
> I just got back from a SAIRS conference at UCLA and talked directly with some of the presenters and engineers at Google.
Cringe, as I was saying.
Conferences are just mutual fart smelling, swagger, and expensing trips on company money. I am not against it, but treating your participation in some conference as a sign of the future is very silly.
Every conference I have participated in overhyped whatever the current bullshit was.
But it's also been anti-Javascript, anti-cloud, anti-social-media, anti-crypto, anti-React, and so on.
I would therefore not in a million years expect it to be pro-LLM, and this is so obvious to me that I'm a bit suspicious of your motives for acting confused about it, as if it was ever any different.
It's like a programmer being surprised that a worker in $random_job wants to keep doing their job, and not learn how to be a programmer instead.
Before LLMs I only worked at one place that "only hired seniors and above", and now it's the most commonplace thing in the world.
Nobody owes me anything, and I already have the skills I need. But where will the juniors come from that these companies are going to need in a few years? We don't need extremist stances in either camp; we need balance.
Seems a bit like asking where the bread will come from, if no-one is forced to bake it.
AI works fine to get a vibe coded BS version of the app. No doubt there. But eventually, especially once scale hits your app, it will devolve into an unholy mess of low performance and (extremely) high cost if you do not have a bunch of senior talent able and willing to clean up after the AI mess.
Unfortunately, our capitalist economy only rewards the metrics you mentioned... but by the time the house of cards collapses, either from financial issues stemming from the above or because the tech debt explodes, it's too late to turn the ship around.
A lot of what non-believers say matches "enthusiasm on the ground is lacking as results rarely live up to the extremely rosy promises". They would then say they need 2 weeks to work on a specific project, the good old way, maybe with some light AI use along the way.
But then I'm like "hmm actually let me try this real quick" and I prompt Claude for 3 minutes, and 30 minutes later it has one-shotted the whole "two weeks project". It then gets reviewed and merged by the "non-believers". This happens repeatedly.
So overall, I think the lack of enthusiasm is largely a skill issue. Not having the skill is fine, but not being willing to learn the skill is the real issue.
I see things changing, as "non-believers" eventually start to realize that they need to evolve or be toast. But it's slower than I imagined.
Meanwhile our average PR ballooned to ~2000 LOC -- generated with Claude, reviewed with Copilot, but colleagues also review it with Claude because it gives valid nitpicks that bump up your GitHub stats, while missing glaring functional, architectural, and overengineering issues.
No way this doesn't blow up down the road with the massive bloat we're creating while getting high on the "good progress" we're making.
Yes, your 3-minute prompt got merged. So was my friend's (ex-programmer, now manager) non-AI-generated PR that a technical TL got stuck on for 2 weeks. Different perspective? Survivor bias? High authority?
In a sane engineering culture, actual customer-visible impact is what is measured, and AI is just a tool to improve that metric, but to improve it massively.
There are still code-quality issues and prompting issues for long-running tasks; some things are just faster and more deterministic with normal code generators or just find-and-replace, etc.
People are annoyed at the force-feeding of LLMs/AI into everything even when it's not needed.
Some things can be one-shotted and some things can't, and that is fine and perfectly normal, but execs don't like that because it's not the new hotness.
True but my point is that people vastly underestimate what is one-shottable.
In my experience, 80% of the time an average "non-believer" SW engineer with 7 years of experience says something is not one-shottable, I, with my 15 years of experience, think it is in fact one-shottable. And 20% of the time, I verify that by one-shotting it in my free time.
This is a genuine question btw, I see plenty of instances of this in my own org.
1. I am also on the receiving end of this. My boss often codes and vibecodes, and no one feels like they have to merge their stuff. We only merge it if it meets the high quality standard we have. And there is no drama for blocking a PR in our culture. 2. I am fairly deep in the trenches myself and I know when my PRs are high quality and when they are not. And that does not correlate with use of AI in my experience.
On one-shotting a 3-minute prompt in 30 minutes, though: software is a living organism, and early gains can (and often do) result in later pains. I do not use this type of argument as it relates to AI, because the follow-up, once the organism spreads its wings into production, seldom makes its way to HN (if this 30-minute one-shot results in a huge security breach, I doubt you would be back here with a follow-up; you will quietly handle it…)
How impressed someone gets by that will depend on the recipient.
I had the exact same experience with, for example, rolling out fully virtualized infrastructure (VMware ESXi) when that was a new concept.
The resistance was just incredible!
"That's not secure!" was the most common push-back, despite all evidence being that VM-level isolation combined with VLANs was much better isolation than huge consolidated servers running dozens of apps.
"It's slower!" was another common complaint, pointing at the 20% overheads that were the norm at the time (before CPU hardware offload features such as nested page tables). Sure, sure, in benchmarks, but in practice putting a small VM on a big host meant that it inherited the fast network and fibre adapters and hence could burst far above the performance you'd get from a low end "pizza box" with a pair of mechanical drives in a RAID10.
I see the same kind of naive, uninformed push-back against AI. And that's from people that are at least aware of it. I regularly talk to developers that have never even heard of tools like Codex, Gemini CLI, or whatever! This just hasn't percolated through the wider industry to the level that it has in Silicon Valley.
Speaking of security, the scenarios are oddly similar. Sure, prompt injection is a thing, but modern LLMs are vastly "more secure" in a certain sense than traditional solutions.
Consider Data Loss Prevention (DLP) policy engines. Most use nothing more than simple regular expression patterns looking for things like credit card numbers, social security numbers, etc... Similarly, there are policy engines that look for swearwords, internal project code names being sent to third-parties, etc...
All of those are trivially bypassed even by accident! Simply screenshot a spreadsheet and attach the PNG. Swear at the customer in a language other than English. Put spaces in between the characters in each s w e a r word. Whatever.
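Here's roughly what the regex approach amounts to (a toy sketch; the patterns are simplified and the "project aurora" codename is made up, but real DLP engines fail the same way):

```python
# Toy DLP filter in the regex style described above. Patterns are
# simplified and "project aurora" is a hypothetical codename.
import re

PATTERNS = [
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),      # credit-card-ish digit runs
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # US SSN format
    re.compile(r"\bproject\s+aurora\b", re.I),  # internal codename
]

def flagged(text: str) -> bool:
    return any(p.search(text) for p in PATTERNS)

print(flagged("SSN: 123-45-6789"))            # True: caught
print(flagged("SSN: 123 45 6789"))            # False: spaces defeat it
print(flagged("p r o j e c t  a u r o r a"))  # False: spaced-out letters
# And a screenshot of the spreadsheet is just a PNG; the regex never sees it.
```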
None of those tricks work against a modern AI. Even if you very carefully phrase a hurtful statement while avoiding the banned word list, the AI will know that's hurtful and flag it. Even if you use an obscure language. Even if you embed it into a meme picture. It doesn't matter, it'll flag it!
This is a true step change in capability.
It'll take a while for people to be dragged into the future, kicking and screaming the whole way there.
I think this next generation is going to come up fundamentally believing that AI is generally a bad thing, and it's going to surprise older people.
I think you might be really underestimating how poorly today's adults think of AI. Whenever I see a blog post that starts with an obvious AI hero image, when it has the "It's not X, it's Y" framing, when it has anything that smells like AI, I immediately discount what that person is saying as I assume they are unable to think for themselves.
I think it's only a matter of time before we see some more serious, organized opposition to AI (and perhaps even the internet and other technologies) by these young people.
When they aren't consumed by TikTok?
Gotta love that - the teenage AI scold.
Then when the salaries got good, everyone pretended to have always been a nerd and really into everything nerd. With the result that they kicked all the nerds out.
But men didn't kick them out, technology did. Von Neumann famously forbade the ENIAC from ever being used for assembly when you had a perfectly cheap secretary pool to do the assembly by hand.
Low creativity repetitive work requiring great attention to detail is what the early female programmers did and what was automated first.
If we ever get deterministic AI the same will happen up the chain. I'm not holding my breath for the current generation of models, or the upcoming ones I've seen in papers.
It feels a little disrespectful. It feels a little pointless (why am I bothering talking to you if I can get the same result from the AI). I have no idea whether you've given the problem any actual thought, or if you're just copy-pasting an answer. I have no idea if you actually believe what you're telling me (or if you've even read it or understand it).
[X] Tweets and instagram comments presented as "what society is thinking"
[X] Ties Luigi Mangione and the California warehouse fire to Gen Z discontent (about AI?).
[X] Statistics being used to support the title with little to no regard for continuity: "those respondents who said that AI makes them “nervous” grew from 50% to 52% during the same period" => the percentage was 52% in 2023, 50% in 2024, and 52% in 2025, which seems mostly flat to me, with the real jump being 2022-2023, from 39%.
I'm more on the skeptical side than the evangelist, but I can see how large parts of such things could theoretically be shifted away from humans: planning someone's agenda, preparing relevant documents, arranging and coordinating things, translations (speech or text), narration, grammar checking.... AI is a whole lot of hot air when considering the "second 80%" of the work involved in any of these tasks, but that's still a lot of jobs that may make little sense to start studying for these years, until you have some idea how the field will develop or whether there's a giant surplus of, say, French-native Spanish language experts. At least for those for whom a given study is not a real passion and who might as well choose something else.
If it's fundamentals of ML, I'm surprised to hear that.
If it's "how to use ChatGPT for creative writing" then I'm not surprised. Why would someone take a class from a teacher who has had only just as much experience with these tools as their students have?
https://web.archive.org/web/20260316042004/https://cs336.sta...
Students don't enroll in a class for various reasons, but most likely because it's useless (or at least people perceive it as useless). At top universities, even notoriously challenging courses have a decent class size.
Even if you don't believe the hype and know that AI is just statistics, there is nothing to be positive about. I can't blame anyone for dismissing it. Maybe it's even the best thing that can happen; big tech won't take a sane route without civic supervision and calibration.
is key
Maybe I'd be a bit more optimistic if someone could explain a realistic economic scenario for how we're going to transition into our utopian abundant future without a depression or a revolution.
The incentives are, how you say, aligned.
The deeper issue I see is the psychological crisis for a species who believes it doesn't deserve to live if it isn't performing economically valuable activity, entering a world where it is unprofitable for it to be employed. (If I were the AI, I'd come up with some kind of fake jobs to keep the humans sane.)
Gen Z would likely have a very different opinion if their basic living necessities were available to them.
One aspect almost certainly has to be data centers being run as utilities. That forces transparency, resists monopolization and gives public commissions a say in e.g. expansion.
We need to let the AI as a service businesses fail.
No one is questioning the underlying model mathematics; they are questioning deceptive & reckless stewards.
There needs to be a concerted focus on real value for end users and less "yeah the terminator will take your job and raise your kids in your absence"
It is also clear AI will bring even worse poverty levels and skew the wealth disparity even further.
The latter isn't the fault of AI itself; it's the fault of the humans who will control it.
Meaningful advances in medical diagnosis are not coming from chatbot companies. Some are coming from machine learning methods. Perhaps measuring public sentiment about such a vagary is not a very productive way to quantify anything.
That said, I continue to be frustrated with people using the abstract concept of a new technology as a substitute for the institutions that use that technology to exert power in the world and what they do with that power, which, as many in the comments already point out, is what the vast majority of people are actually mad about, and rightly so.
I think agentic harnesses add a lot to LLMs, even if many are just simple loops. They are a separate thing from LLMs, are they not?
I get the feeling that even if we stopped shipping new models today, new far more useful products would be getting shipped for years, just with harness improvements. Or, am I way off base here?
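You're not off base. To make "simple loops" concrete, here is the skeleton of a typical harness; this is only a sketch, where `llm()` and the tool/reply format are stand-ins rather than any particular vendor's API:

```python
# Skeletal agent harness: ask the model, run whichever tool it requests,
# feed the result back, and repeat until it produces an answer. The llm()
# function and the reply format are stand-ins, not a specific vendor API.
import json
import subprocess

def llm(messages: list[dict]) -> dict:
    """Stand-in for a chat-completion call. Expected to return either
    {"tool": name, "args": {...}} or {"answer": text}."""
    raise NotImplementedError("wire up your model provider here")

TOOLS = {
    "shell": lambda args: subprocess.run(
        args["cmd"], shell=True, capture_output=True, text=True
    ).stdout,
    "read_file": lambda args: open(args["path"]).read(),
}

def run_agent(task: str, max_steps: int = 20) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = llm(messages)
        if "answer" in reply:
            return reply["answer"]
        result = TOOLS[reply["tool"]](reply["args"])
        messages.append({"role": "tool", "content": json.dumps(result)})
    return "step budget exhausted"
```

Everything marketed as a harness improvement lives around that loop: better tools, retries, sandboxing, context pruning. None of it requires a new model, which is why I suspect products could keep improving for years even if the models froze.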
Imagine choosing to be an expert in something that you think is a coin flip away from making the world worse.
It looks like:
1. They take billions in investment
2. They spend trillions
3. They and their investors profit in the quadrillions from all the "labor saving"
4. ???
5. Everyone's needs are met.
I was at a panel last week. The most pro-AI person was an account executive from a big fintech company.
EVERYONE else - a data scientist that works in AI, regulatory compliance, cybersec, and marketing, took the position of "hey this is great and will change things, but let's pump the brakes... a lot."
The AI companies are only capturing like 5% of the value produced with this tech right now.
But we have been sold to use these constantly falsified AI summaries as the go-to source of "truth" by all levels of society. We're trading truth for an illusion of short-term gains. This will not have good consequences.
An alternative possibility is that inequality is about to explode between those who profit from AI/robotic labor and those displaced by it.
Your Darios and Sams know exactly what they are doing. They know it's going to cause a lot of job displacement, even if the technology isn't perfect. They are trying to get the C-suite elite hyped up about it, and the hyperscalers are along for the ride as well. There's so much money to be made.
They could not care less about what joe schmoe on the street thinks about it.
The kids are alright.
As a Gen Xer myself (1973) I disagree.
The widest margin of Trump voters by generation was Gen X.
Gen X has largely morphed into the boomers they used to despise.
The fundamental alignment issue is aligning the companies themselves with society, not the models with the companies. Widespread unemployment is not aligned with society, but it is aligned with Anthropic and OpenAI if it makes them rich.
Therefore the only “harms” the companies will take seriously are those which also harm the company. For example reputational harms from enabling scams aren’t allowed.
Perhaps all of this isn’t fair, since companies actively subverted safety research for profitability. But then I would go back to my earlier point of over-indexing on unintended behaviors and under-indexing on intended ones.
Take log review, for example. Whether it’s admin or security, LLMs are incredible at reading awfully formatted logs and even using those to pull meaning from other logs as well. They turn an hour-long log review into a 10-minute one.
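As a sketch of that workflow (the chunk size is arbitrary and `summarize()` is a placeholder for whatever hosted or on-prem model you actually use):

```python
# Sketch of the log-review workflow described above: chunk an ugly log,
# ask the model for anomalies per chunk, then merge the notes.
# summarize() is a placeholder for your LLM endpoint of choice.
def summarize(prompt: str) -> str:
    raise NotImplementedError("call your model provider here")

def review_log(path: str, chunk_lines: int = 400) -> str:
    with open(path, errors="replace") as f:
        lines = f.readlines()
    findings = []
    for i in range(0, len(lines), chunk_lines):
        chunk = "".join(lines[i:i + chunk_lines])
        findings.append(summarize(
            "You are reviewing server logs. List auth failures, errors, "
            "and anything anomalous, quoting the relevant lines:\n" + chunk
        ))
    # Second pass: merge the per-chunk notes into one ranked report.
    return summarize(
        "Merge these log-review notes, dedupe, and rank by severity:\n"
        + "\n---\n".join(findings)
    )
```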
A silicon savior to finally free capital from the dependence on labor with all its pesky demands like sick leave or a living wage.
You can see this in the literal deification going on in VC circles. AGI is the capitalist version of the Second Coming, God coming down to earth to redeem them by finally solving the contradictions in their world view.
Unfortunately for them and fortunately for the rest of us, it's not all they hope it to be.
> Note: “AI experts” refer to individuals whose work or research relates to AI. The AI experts surveyed are those who were authors or presenters at an AI-related conference in 2023 or 2024 and live in the U.S. Expert views are only representative of those who responded.
It’s always only ever about how the new model is faster, better, smarter. Or how the tech will be bringing ruin to the job market and someone should probably do something about that some time soon. Zero efforts to create any sort of educational content - how it even works, how to vet its output, how to have an eye for confabulation, how to use it as thinking enhancement rather than replacement, to keep in mind that it’s trained to please and will literally generate anything to cause users to click the thumbs up button. Nope, it’s just “ModelGPClaude can make mistakes! Better be careful!”
And then everyone’s surprised when an utterly improvident handling of 4o kicks off the biggest concentrated wave of AI psychosis seen yet. Because, surprise! When you give people a model that’s trained to anthropomorphize itself, people who have no idea about any of this tech and have no access to education about any of it might believe it’s more than it is! Boy, who’d’ve thunk; isn’t the world complex?!
This was a symptom of this exact same disease. I have far less worry about the tech and far more worry about how the disconnected venture capital caste is inflicting it upon us.
It's new, people fear it. Sometimes justified, usually not.
People greatly feared the car because of the number of horse-related jobs it would displace.
President Benjamin Harrison and First Lady Caroline Harrison feared electricity so much they refused to operate light switches to avoid being shocked. They had staff turn lights on/off for them.
Looking back at these we might laugh.
We're largely in the same boat now.
It's possible AI will destroy us all, but judging from history, irrational reactions to something new aren't exactly unprecedented.
If only it were this obvious when the polluted air isn't your home but the entire planet, killing not your grandma but taking a few healthy years of life from everyone simultaneously. Maybe people would feel like we need to reverse priorities rather than going full steam ahead on newly created energy demand and seeing about cleaning it up later.
Meanwhile not every invention is. Electricity and internet are electricity and internet, and very few inventions come even close to that. Meanwhile LLMs have had arguably a net negative effect on the world at large.
It's hard not to see that they have a point. If AI is so great and going to save so much money - how about starting by paying some of that forward? Suddenly when you ask the billionaires or AI tech elite to share any of the wealth they are so confident they will generate, everyone backs away fast and starts to behave like it is all a speculative venture. So which one is it?
No wonder they're all trying to get on benefits. Fuck Maggie Thatcher for selling off the council houses.
In 2022 the world was open arms, welcoming AI advancements.
However, since 2022, OpenAI and all of its original founding researchers had their dramatic falling-out and began screaming crazy-person things in public like "the end is coming."
Why did they insist on force-launching ChatGPT? Google at the time refused to launch their own LLM-based chat (it was their own research that gave birth to LLMs) because they knew all of the negative outcomes, and the unreliability of it all made for just a poor product experience.
Instead of launching quietly like DALL-E and keeping it fun and experimental, nope, they threw it up online and moved full-steam ahead.
"THE END IS COMING," Sam Altman said. "AI WILL TAKE YOUR JOBS WITHIN 5 YEARS," Dario said. "AGI IS ALMOST HERE," Elon Musk said.
The disconnect is because these specific men, making those specific bold crazy person claims, with zealous cult following employees (including many of us here in this forum), kept marching ahead. Not only that, no one asked the rest of the world if they even wanted this technology EVERYWHERE.
This technology could have been so cool if it were given the breathing room to find use cases. Natural-language programming has been tried for half a century, and it has finally arrived.
Yet it's so tainted by all the crazy-person speak and doomsday messaging, and it's been thrown out there in such a haphazard way that it has burned so many bridges, that this technology is truly toxic. The fact that Gen Alpha and Gen Z now have to waste brain power speculating whether something is AI-generated is such a waste, but here we are. Welcome to the shit storm that was entirely made by those men.
“Is that some nonsense ChatGPT told you?” has turned into an almost cynical mocking response to someone commenting about an issue.
The hype seems to have run its course. I’m a fan and use it constantly, but it’s also clear there are serious storm clouds and headwinds on the horizon.
I just wish my wife were more serious about camping and learning survival skills. I think shit is going to hit the fan in the next 5-10 years, but she thinks that’s crazy. Oh well, maybe I am crazy.
Oh the second one is happening right now.
Haven't we learned anything from The Walking Dead?
If you work at the edge of context you know what I mean. Even within context, if the system was truly intelligent, the way that Euclid was intelligent, why do I need /superpowers and 50 cycles to get a certain implementation right?
Why is the AI not one-shotting obscure but simple business logic cases with optimal code? Whoops pattern never seen before! There is no thought to it, zero. The LLM is just shotgunning token prediction and context management until something sticks. The amount of complexity you get out of language is certainly fascinating and surprising at times but it's not intelligence - maybe part of it?
Sell it as skills or whatever, but all you do every day is fancy ways of context management to guardrail the token predictor algorithm into predicting the tokens that you want.
There’s also a lack of willingness to ‘bring along’ the public. It’s just “make the god thing; ask for permission later”.
1. Overhyped. Try writing a blog post that doesn't sound like it. Everyone is sick of reading it now.
2. Affecting the wrong people. It used to be that the rich got richer and the poor got poorer, but now a lot of the middle class will get poorer.
3. Severely damages the work hard way out. Competition will become brutal if there's almost no barrier to entry. This will drive down profit, affect hiring and will become a conveyor belt of people trying to win the business lottery. This will make moats even more essential.
4. The obvious theft of creative works which destroys dreams and livelihoods.
No wonder the younger generation are against it. Those of us in the middle are still just hoping at least we can get through somehow. At least we have hope.
Of course, if people are fired and only capital owners / AI experts get to earn anything then this is wrong and a revolution is obviously needed and unavoidable.
But for me, the best outcome would be if it was AI that did all the jobs so people could focus on doing what they want, not that we'd go back to the pre-AI era.
Initially however we need to balance between full wealth redistribution and keeping the incentive to develop AI further.
Of course by AI I mean really useful AI, the real part, not the marketing part.
So even universal income won't solve everything, not that it's ever likely.
> The United States reported the lowest trust in its own government to regulate AI responsibly of any country surveyed, at 31%.
It seems US citizens are really against the current administration, and are just using the fact that AI investment is intrinsically connected to it to voice their opposition.
> Country-level expectations follow similar patterns to the earlier sentiment trends. Nigeria, Japan, Mexico, the United Arab Emirates, South Korea, and India all expected AI to create more jobs than it eliminates, with shares above 60%. The United States and Canada sat at the opposite end, where 67% and 68% of respondents expected AI to eliminate jobs and disrupt industries.
Globally, the disconnect is not growing. It's really just a U.S. problem (spilling over to neighbouring Canada too).
So, no Luddites in sight, again. It's just public perception of a polemic topic being leveraged for ideological reasons, sinking AI in the US only.
It also seems like people on all sides of the AI debate have been fanning those flames thinking it will work in the short term... and it won't. Big tech played that game in many countries in the early 2010s and it didn't end well.
If all is well, then it's all good: no need to blame anyone, campaigns get funded, etc. If one major crisis occurs, though, the country self-immolates by design.
The New York Times is allowed to spend money like anyone else praising or slagging politicians, but that’s the First Amendment, not funding candidates.
> The TL;DR is that Google engineering appears to have the same AI adoption footprint as John Deere, the tractor company. Most of the industry has the same internal adoption curve: 20% agentic power users, 20% outright refusers, 60% still using Cursor or equivalent chat tool. It turns out Google has this curve too... [0]
Ummmm... Steve. You think Google might be able to figure out a super huge awesome new thing from 1 out of 5 of their employees? Or, given this is a consistent curve across the industry (even at Google)... maybe AI is only about a fifth as cool and helpful as you and the enthusiasts think it is?
[0] https://simonwillison.net/2026/Apr/13/steve-yegge/#atom-ever...
It doesn’t matter if you think it’s all going to work out and AI will bring an unprecedented era of abundance. That is not the current state.
The current state is: Nearly all productivity growth since 1980 has gone to shareholders, not workers: https://www.epi.org/productivity-pay-gap/
Now what do you think happens when we dramatically expand productivity with AI? Well, we’re already seeing unprecedented layoffs in tech. And it’s easy to draw the conclusion that unless something structural changes all of the productivity gains from AI will go to investors not workers. Leaving said workers without access to healthcare or housing.
And of course let’s not forget that the tech elite in question supported Trump in the last election - someone who has done everything in his power to reduce healthcare access among the low income / unemployed population. This isn’t fucking rocket science guys.
You are in a massive bubble my colleague, and I hope you have held some small doubts in your mind so when it pops you will have something to hold onto.
We are ever so close to nearing the point where 90% of our AI usage can go through providers of open models, who all compete with each other to drive down prices and prevent rug pulls, leaving Dario and Sam holding empty bags.