ZH version is available. Content is displayed in original English for accuracy.
Advertisement
Advertisement
⚡ Community Insights
Discussion Sentiment
63% Positive
Analyzed from 7917 words in the discussion.
Trending Topics
#companies#don#more#dangerous#power#world#things#fear#saying#models

Discussion (191 Comments)Read Original on HackerNews
AI is getting strong enough that if people give some general direction as well as access to production systems of any kind, things can go badly. It is not true that all implementations of agentic AI requires human intervention for all action.
The risks are similar: No prompts/data that go in can reliably be kept secret; A sufficiently-motivated stranger can have it send back completely arbitrary results; Some of those results may trigger very bad things depending on how you use or even just display them on your own end.
P.S. This conceptual shortcut doesn't quite capture the dangers of poison data, which could sabotage all instances even when they happen to be hosted by honorable strangers.
I don't see any stabilizing influences on the horizon, given how much cash is sloshing around in the economy looking for a place to land. Things are going to get weird, stupid, and chaotic, not necessarily in that order.
I’m not sure what the problem is there
If I was a Ph.D. student today, I'd probably do a thesis on cheap verifiers for LLM agents. Since LLM agents are not reliable and therefore not very useful without it, that is a trillion dollar problem.
Once a developer groks that concept, the agents stop being scary and the potential is large.
put this exact value inside this exact register at the right concurrent time and all the tedious exactness that C required
into now:
"pretty please can you not do that and fix the bug somewhere a different way"
With Lee Zeldin heading the EPA is anyone sure we won't?
Bit flips in memory are super common. Even CPUs sometimes output the wrong answer for calculations because of random chance. Network errors are common, at scale you'll see data corruption across a LAN often enough that you'll quickly implement application level retries because somehow the network level stuff still lets errors through.
Some memory chips are slightly out of timing spec. This manifests itself as random crashes, maybe one every few weeks. You need really damn good telemetry to even figure out what is going on.
Compilers do indeed have bugs. Native developers working in old hairy code bases will confirm, often with stories of weeks spent debugging what the hell was going on before someone figured out the compiler was outputting incorrect code.
It is just that the randomness has been so rare, or the effects so minor, that it has all been, mostly, an inconvenience. It worries people working in aviation or medical equipment, but otherwise people accept the need for an occasional reboot or they don't worry about a few pixels in a rendered frame being the wrong color.
LLMs are uncertainty amplifiers. Accept a lot of randomness and in return you get a tool that was pure sci-fi bullshit 10 years ago. Hell when reading science fiction now days I am literally going "well we have that now, and that, oh yeah we got that working, and I think I just saw a paper on that last week."
Have you ever seen Claude Code launch a subagent? You've used it, right? You've seen it launch a subagent to do work? You understand that that is, in fact, Claude Code running itself, right?
They're tool calls. Claude Code provides a tool that lets the model say effectively:
The current frontier models are all capable of "prompting themselves" in this way, but it's really just a parlor trick to help avoid burning more tokens in the top context window.It's a really useful parlor trick, but I don't think it tells us anything profound.
The OP says AI requires human interaction to work. This simply isn't true. You know yourself that as agents get more reliable you can delegate more to them, including having them launch more subagents, thereby getting more work done, with fewer and fewer humans. The unlock is the Task tool, but the power comes from the smarter and smarter models actually being able to delegate hierarchical tasks well!
If that is software running itself, then an if statement that spawns a process conditionally is running itself.
AI in the hands of an expert operator is an exoskeleton. AI left alone is a stooge.
Nobody has built an all-AI operator capable of self-direction and choices superior to a human expert. When that happens, you'd better have your debts paid and bunker stocked.
We haven't seen any signs of this yet. I'm totally open to the idea of that happening in the short term (within 5 years), but I'm pessimistic it'll happen so quickly. It seems as though there are major missing pieces of the puzzle.
For now, AI is an exoskeleton. If you don't know how to pilot it, or if you turn the autopilot on and leave it alone, you're creating a mess.
This is still an AI maximalist perspective. One expert with AI tools can outperform multiple experts without AI assistance. It's just got a much longer time horizon on us being wholly replaced.
If you lead with this, people will stop questioning why their sprint velocity hasn't increased 10 fold. Managers start asking leads, instead of hiring more devs can we add Agent.md to our repos?
The Apocalypse sells. They are afraid that you'll find out that AI is just another useful tool. That's the real threat, not to humanity, but to their hype.
Edit: i made a video about this recently: https://youtu.be/nB0Vz-fh8EI
* We need to completely deregulate these US companies so China doesn't win and take us over
* We need to heavily regulate anybody who is not following the rules that make us the de-facto winner
* This is so powerful it will take all the jobs (and therefore if you lead a company that isn't using AI, you will soon be obsolete)
* If you don't use AI, you will not be able to function in a future job
* We need to lineup an excuse to call our friends in government and turn off the open source spigot when the time is right
They have chosen fear as a motivator, and it is clearly working very well. It's easier to use fear now, while it's new and then flip the narrative once people are more familiar with it than to go the other direction. Companies are not just telling a story to hype their product, but why they alone are the ones that should be entrusted to build it.
And specifically about the point on China, several people in power in China have also expressed the need to regulate AI and put international structures of governance in place to make sure it will benefit mankind:
https://nowinners.ai/#s5-china
Peter Thiel literally gave a lecture on the Antichrist* saying basically that regulation is satanic https://www.nytimes.com/2026/03/17/world/europe/peter-thiel-...
* He's the best person in the world for this lecture - the only one that can claim first-person knowledge on the subject!
How about building a multipolar world where different parts of the world (US/China/India/EU/Africa,..) get to build sovereign tech and have their own winners?
Think of a standard for classifying and regulating the self-hosting of open-source models similar to how an FFL works. You can do it, but you must have all your paperwork lined up, with background checks, a valid business license, and if you forget to dot an "i" or cross a "t" the Cyber version of the ATF shows up and shoots your fucking dog.
This thread and article have made me realize that a lot of different incentives exist to talk up the apocalypse.
It even neutralizes the Eliezers and their apocalypse mongering.
On the other hand, it seems like Dario is himself a bit more of a true believer.
Additionally Dario has just been really accurate with his predictions so far. For instance in early 2025 he predicted that nearly 100% of code would be written with AI in 2026.
> nearly 100% of code would be written with AI in 2026
I feel like this is kind of a meaningless metric. Or at least, it's very difficult to measure. There's a spectrum of "let AI write the code" from "don't ever even look at the code produced" to "carefully review all the output and have AI iterate on it".
Also, it seems possible as time goes on people will _stop_ using AI to write code as much, or at least shift more to the right side of that spectrum, as we start to discover all kinds of problems caused by AI-authored code with little to no human oversight.
If this is some kind of twisted marketing, it's unprecedented in history. Oil companies don't brag about climate change. Tobacco companies don't talk about giving people cancer. If AI companies wanted to talk about how powerful their AI will be, they could easily brag about ending cancer, curing aging, or solving climate change. They're doing a bit of that, but also warning it might get out of control and kill us all. They're getting legislators riled up about things like limiting data centers.
People saying this aren't just company CEOs. It's researchers who've been studying AI alignment for decades, writing peer reviewed papers and doing experiments. It's people like Geoffrey Hinton, who basically invented deep learning and quit his high-paying job at Google so he could talk freely about how dangerous this is.
This idea that it's a marketing stunt is a giant pile of cope, because people don't want to believe that humanity could possibly be this stupid.
It's hard to see as anything but a button anyone with enough money can press and suddenly replace the people that annoy them (first digitally then likely, into flesh).
HN is the only place I have heard it seriously suggested that anything like this is happening or likely to happen. We certainly get a lot of cheerleading here, my guess is that in the trenches the fraction is way lower.
It makes more sense if one breaks that "everyone" into subgroups. A good first-pass split would be "investors" versus "everyone else."
From their perspective: Rich Investor Alice rushing over with bags of money because of FOMO >>> Random Person Bob suffers anxiety reading the news.
One can hone it a bit more by thinking about how it helps them gain access to politicians, media that's always willing to spread their quotes, and even just getting CEO Carol's name out there.
It seems more reasonable to me to think that they know it's bullshit and it's just marketing. Not necessarily marketing to end users as much as investors. It's very hard to take "AGI in 3 years" seriously.
Why are they still building it? Because each team thinks that THEY are the ones who can prevent it from destroying humanity, but they have to get to AGI first, before the other teams make an AI that does destroy humanity.
But also, if AGI doesn't destroy humanity, it would be the most powerful weapon in the world, and they want to be the ones in control of it. Keeping the focus on Armageddon distracts from the real and severe problems that arise if a single person, or even a small group, controls an AGI.
Also, the idea that AI leadership seized on and amplified these concerns purely for marketing purposes isn't plausible. If you're attempting to market a new product to a mass audience, talking about how dangerous and potentially world-ending it is is the most insane strategy you could choose. Any advantage in terms of getting people's attention is going to be totally outweighed by the huge negative associations you are creating in the minds of people who you want to use your product, and the likelihood of bringing unwanted scrutiny and regulation to your nascent industry.
(Can you imagine the entire railroad industry saying, "Our new trains are so fast, if they crash everybody on board will die! And all the people in the surrounding area will die! It'll be a catastrophe!" They would not do this. The rational strategy is to underplay the risks and attempt to reassure people. Even more so if you think genuinely believe the risks are being overstated.)
Occam's razor suggests that when the AI industry warned about AI risk they believed what they were saying. They had a new, rapidly advancing technology, and absent practical experience of its dangers they referred to pre-existing discussions on the topic, and concluded it was potentially very risky. And so they talked about them in order to prepare the ground in case they turned out to be true. If you warn about AI causing mass unemployment, and then it actually does so, perhaps you can shift the blame to the governments who didn't pay attention and implement social policies to mitigate the effects.
I don't think the AI industry deserve too much of our sympathy, but there is a definite "damned if you do, damned if you don't aspect" to AI safety. If they underplay it, they will get accused of ignoring the risks, and if they talk about it, they get accused scaremongering if the worst doesn't happen.
except that isn't the segment of the market they're targeting. They're trying to FOMO businesses into paying them, and the businesses play along in part because they (the businesses) don't care about morals nearly as much as the potential profit (sure, a train that kills everyone on board is bad for the people on board, but just think about how efficient shipping will be) and in part because they're scared that by not doing so they'll end up on the business end of how dangerous these new models supposedly are
We live in an age where influential companies with notable figureheads are seen as evil incarnate and influential companies without notable figureheads as, well, you know, the same old same old greedy companies. It just so happens that the most influential AI companies have notable figureheads, so almost everybody fucking hates them and thinks they're up to no good (whatever they do). Truth is that for most of those companies, taking away the influence of their hated CEO and doing away with their ramblings will change absolutely nothing about how that company operates.
In fact it has been AI people who have been leading discussions around AI ethics and the dangers of AI since 1955. This is not new and it is consistent.
The new thing is that the average person is now entering into the debate around AI; And like pretty much everything else in the public sphere doing it with entirely no context.
I always love when some total novice encounters a problem in a well studied field as though they’re the first one to encounter it. There’s nothing more narcissistic than some person thinking they are unique in their position with absolutely no demonstration of having done their homework on whether or not this is an established topic in an established field.
That’s where I place 99.9999% of people who are opening their mouth on this topic.
Most of the builders don’t care about this mess and are continuing to work like usual.
So they don't consider it an existential threat, unlike what the CEOs of companies raising hundreds of billions are saying.
It’s an existential threat if it has existential consequences; if it doesn’t then it isn’t
Can’t know till you build it
There is contention among vulnerability researchers about the impact of Mythos! But it's not "are frontier models going to shake up vulnerability research and let loose a deluge of critical vulnerabilities" --- software security people overwhelmingly believe that to be true. Rather, it's whether Mythos is truly a step change from 4.7 and 5.5.
For vulnerability researchers, the big "news" wasn't Mythos, but rather Carlini's talk from Unprompted, where he got on stage and showed his dumb-seeming "find me zero days" prompt, which actually worked.
The big question for vulnerability people now isn't "AI or no AI"; it's "running directly off the model, or building fun and interesting harnesses".
Later
I spoke with someone who has been professionally acquainted with Khlaaf. Khlaaf is a serious researcher, but not a software security researcher; it's not their field. I think what's happening here is that the BBC doesn't know the difference between AI safety prognosis and software security prognosis, or who to talk to for each topic.
The Anthropic report that describes the bugs they have found with Mythos in various open-source projects admits that a prompt like "find me zero days" does not work with Mythos.
To find bugs, they have run Mythos a large number of times on each file of the scanned project, with different prompts.
They have started with a more generic prompt intended to discover whether there are chances to find bugs in that file, in order to decide whether it is worthwhile to run Mythos many times on that file. Then they have used more and more specific prompts, to identify various classes of bugs. Eventually, when it was reasonably certain that a bug exists, Mythos was run one more time, with a prompt requesting the confirmation that the identified bug exists (and the creation of an exploit or patch).
Because what you say about Carlini is in obvious contradiction with the technical report about Mythos of Anthropic, I assume that is was just pure BS or some demo run on a fake program with artificial bugs. Or else the so-called prompt was not an LLM prompt, but just the name of a command for a bug-finding harness, which runs the LLM in a loop, with various suitable prompts, as described by Anthropic.
Are we just talking past each other? Like: yes, you have to run 4.6 and 4.7 "multiple times" to find stuff. Carlini does it once per file in the repro, with a prompt that looks like:
That's the process I'm talking about.PS
I want to say real quick, I generally associate your username with clueful takes about stuff; like, you're an actual practitioner in this space, right? I'm surprised to see this particular take, which at my first read is... like, just directly counterfactual? I must be misunderstanding something here.
There has been a large majority on HN who have dismissed AGI and model capabilities at every turn since OpenAI was founded a decade ago. The problem is the universe where models are going to be super powerful is unprecedented, revolutionary, and probably scary, so therefore it is easier to digest it as untrue. "they won't be powerful". "LLM's couldn't have possibly done the vulnerability expose that I could never have." And every time capabilities are leveling up, there is a refusal to accept basic facts on the ground.
Am I not allowed to be concerned about _both_?
I do not believe that Sam Altman and other AI company execs believe that the singularity is imminent. If they did, they wouldn't behave so recklessly. Even if they don't care about the rest of humanity, there's too much risk to themselves if they actually believe what they're saying.
But I think it's correct to be worried about a potential future AI apocalypse. Personally I doubt that LLMs will scale to full sentience, but I believe we'll get there eventually. And whether it's in 2 years or 200 years I'm worried about it. Plenty of smart people who aren't working for AI companies (and thus have no motive to use it as hype or distraction) hold this belief and it really doesn't seem that crazy.
But yeah, obviously let's focus primarily on the real harms AI is causing in our society right now.
I don't believe Zuckerberg believes in either the promise or the danger, his presentations are far too mundane. The leaked memos suggest he may simply not care about dangers, which is worse.
Altman at least seems to think an LLM can be used as an effective tool for harm and is doing more than the bare minimum to avoid AI analogies of all the accidents and disasters from the industrial age which led to us having health and safety laws, building codes, and consumer product safety laws.
Musk clearly thinks laws only exist for him to wield against others. Tries to keep active tools which cause widespread revulsion as if a freedom of speech argument is enough.
Amodei seems to actually care even when it hurts Anthropic, as evidenced by saying "no" to the US government. It could be kayfabe, Trump is famous for it after all, but as yet I have no active reason to dismiss Amodei as merely that.
People seem unable to make up their mind if AI is very dangerous or is it not. I think what the AI companies and this author agree on, is that this technology is potentially extremely dangerous. AI impacts labor markets, the environment, warfare, mental health, etc... It's harder now to find things which it will not impact.
So if we agree that AI is potentially dangerous, it makes the title question moot: Both AI companies and this author want people to be aware of the dangers that AI poses to society. The real question is what do we do about it?
The nuance here is that AI can be incredible positive as well. It's like the invention of fire, you can use it for good or bad, and there will be many unintended consequences along the way.
We could legislate and ban AI tech. People have proposed this seriously, yet this feels completely unrealistic. If the US bans AI research, then this research will move elsewhere. I think it is like trying to ban fire because it's dangerous: some groups will learn to work with fire and they will get an extreme advantage over those groups that don't. (or they will destroy themselves in the process).
So maybe instead of demonizing the AI companies, we have a nuanced debate about this tech and propose solutions that our best for our society?
This is a propaganda tactic. For decades, tobacco companies claimed that there was no evidence that smoking was bad for one's health. Then, only after losing dozens of lawsuits did the propaganda switch to "but everyone knew for 100+ years that smoking was lethal".
One can read about it by reading Trust Us, We're Experts, or Toxic Sludge Is Good For You, or the other books written by the authors.
https://en.wikipedia.org/wiki/Trust_Us,_We%27re_Experts
https://www.prwatch.org/tsigfy.html
What I meant by
> People seem unable to make up their mind if AI is very dangerous or is it not.
Is that the article says 2 contradictory things:
1. AI companies are misleading us when they say their tech is dangerous and people should be afraid.
2. AI is currently very dangerous and people should be afraid.
Anecdotally, people on the internet (including HN), seem unable to agree on whether AI is real or overblown "hype".
Pretty much everyone agrees that what passes for AI these days is very dangerous. People only differ in which ways they think it is (or will be) dangerous and which dangers they are most worried about.
Some are worried about the environmental harms. Some are worried that AI will do a very shitty job of doing very important things, but that companies will use it anyway because it saves them money and we'll suffer for it. Some are worried that AI will take their jobs regardless of how well that AI performs. Some are worried that AI will make their jobs suck. You've also got people who think that our glorified chatbots are going to gain consciousness and become literal gods who will take over the planet and usher in the Robot Wars.
Some of those dangers are clearly more immediate and realistic than others. We should probably be focused on those right now. We can start by limiting the environmental harms they're causing and making companies responsible for the costs and impacts they have on our environment. Maybe make it illegal for power companies to raise the price of power for individuals just because some company wants to build a bunch of power hungry data centers. Let those companies fully bear the costs instead.
We can make sure that anyone using AI for any reason cannot use AI as a defense for the harms their use of AI causes. If a company uses AI to make hiring decisions and the result is discrimination, an actual human at that company gets held legally accountable for that. If AI hallucinates a sale price, the company must honor that price. If AI misidentifies a suspect and an innocent person ends up behind bars a human gets held accountable.
We can ban the use of AI for things like autonomous weapons. Things that are too important to trust to unreliable AI.
We could even do more extreme things like improve our social safety nets so that if people are put out of work they don't become homeless, or invest more in the creation of AI individuals can host locally so we aren't forced to hand so much power to a few huge companies, or even force companies to release their models or their training data (which they mostly stole anyway) so that power doesn't consolidate into a small number of companies or individuals. We have lots of options, it just comes down to what we want and how much we can get our elected officials to represent our interests over the interests of the companies who are stuffing their pockets with cash.
These are not mutually exclusive.
Calling out the demonic behavior of trying to coerce people into using your product out of fear is not an indictment of the underlying technology itself.
> trying to coerce people into using your product out of fear
is nonsense.
Everyone agrees that there are legitimate reasons to be fearful of this technology, this is not a fabrication, but we need to figure out how to proceed in a safe and constructive way.
What "coercion" is occurring here? Either you find the technology valuable and you want to pay for it, or you find it not useful (or worse harmful), and you do not want to pay for it.
Maybe another way of putting it, what do you think the frontier AI companies should do in this situation? It seems that being straightforward with the dangers is correct thing to do, and probably being overly cautious is prudent. You could go further and argue they should slow down or stop development, but that is something that the govt should impose, we should not expect or trust the companies to do this themselves. Ironically, in the Anthropic / Pentagon case, we have Anthropic trying to pump the brakes and put up guardrails while the govt wants to go full-steam ahead with autonomous warfare.
The other issue with slowing down / pausing development is it requires an unheard of level of agreement, even with companies in China, or else it will probably not be effective. You could argue this is not even possible at this point.
Lee Vinsel's criti-hype article nailed this 5 years ago, before we even had the chatbot economy we do now: https://sts-news.medium.com/youre-doing-it-wrong-notes-on-cr...
the writers and the editors know exactly what they're doing - spreading FUD and creating controversy out of thin air. some of it is done for-profit, some for-agenda, and all of it with malicious intent.
Altman wasn't even at OpenAI at that point, so why would that be marketing?
Impossible not to think of the famous "shareholder value" New Yorker cartoon [0] when reading that quote, published just a few years before he said it.
[0] https://www.newyorker.com/cartoon/a16995
If there were tobacco companies warning everyone who would listen in the 1950’s that cigarettes cause cancer, it would be, like, points for honesty, but why don’t you stop selling them then?
The difference being that there are a lot of good uses for AI chat and it doesn’t directly harm most people.
It seems like the customers who would misuse AI are getting left out of the discussion? It’s as if arms dealers were being solely blamed for war, or if arms dealers were expected to stop wars.
The difference being that a single, general purpose product that can do such a wide variety of things isn’t really comparable to making weapons that are only good for one thing.
Maybe it’s as if car manufacturers in the early 20th century were predicting highways, traffic, and pollution.
Or imagine if early dot-com companies were predicting the various dangers of social networks?
There was a when little panic about the fear of bigoted computers at that point.
But... it got a lot of earned advertising and they also sort of did a "pre-burn." They saturating the space with "bigoted Ai concern" for a while, and now I don't ever see it come up.
There's a "get ahead of the inevitable" thing going on. Also, obviously, prospectus hype.
Besides all that, these are geeks and they're excited. This is what an excited geek looks like.
I guess since training them does take cash that raises the bar for what people will do as a prank or on principle.
The main problem is obtaining a big enough training data set. Now, unless you are someone like Google or Microsoft, it has become much harder to scrap data from the Internet than by the time when OpenAI and Anthropic got most of their data.
- Spam
- Deep Fakes
- Porn
- Buggy Software
- Economic Bubbles
- Degradation in people’s abilities and learned dependence on ChatGPT for basic functions.
- Job loss through enshittification ala AI interviews and Telemarketers
- Climate Change, noise pollution etc.
- Mass Surveillance
It’s much more an Idiocracy AI than Terminator.
Then I saw how effective it was at raising money.
Then I realized how effective the fear was at fundraising...
So fear-mongering seems to be just a tool how to get attention and more customers.
Hey ma, I use very dangerous tool now. I am OG.
Glad people are finally catching on.
> Here's one theory.
But the author never gets back to this! It's the main observation the theory has to account for; why don't we see other companies speak this way, if it's such an effective strategy for deflecting non-apocalyptic concerns?
They do. Every company who promised us that their shitty cell phone app or website was going to change the world and revolutionize and disrupt industry/society was guilty of the same thing. They just usually focused their ridiculous levels of hype on the positives. The goal was the same. "Our technology is going to change the world so investors had better give us cash or else they will be left behind" is still the message.
I think this is just an advancement of what we saw with self-driving cars and how companies were pushing narratives around how every trucker will be out of work (this still hasn't happened) or how no individuals would own a car again while deflecting from things like how badly their cars performed in snow/rain or in anything other than very carefully controlled and mapped out conditions.
The answer to the burger analogy is that it's the wrong analogy. McDonald's is selling you the burger. AI companies are essentially selling you the grill.
The hype works so well because it plays on people's ego and desire for power. They think I have the power to end the world with this technology but I won't because I'm a good person.
This technology interacts socially, so even if it can't jailbreak itself on a technology-level (which feels like a tough guarantee to make at this point) it can simply ask someone to do a bad thing and there is some chance they'll do it. The same way a human leader does.
The first kids who have only faint memories of a time before chatbots will be entering the military in 6-7 years. You have to assume they are acting as best friends, therapists, or even surrogate parents for a substantial number of kids right now.
We are going to need years to figure out what to do about this technology. I think some impetus to get that process started is a good idea.
(I am not saying I approve of all the stuff they are being used for or all the statements of its management.)
"We are too dangerous to commoditize" pitches better than "we are mostly typical of the internet's median answer", those are kind of the same statement.
AI industry insiders (including "safety" groups like ControlAI) talk about the dangers only in terms of its power: "Scheming", job loss, breaking containment, the New Cold War with China.
Critics outside the industry talk in terms of its lack of power: Inaccuracy, erroneous translation of user intent, failure to deliver on its promises and investment, environmental cost from the former, and ultimately the danger of people in power (e.g. law enforcement, military officials) treating its output as valid and unbiased, or simply laundering their wishes through it.
I think the last one should be first on the list: regular people are afraid AI will negatively affect their economic security (i.e. knowledge and service workers will get the rust-belt factory worker treatment).
And the potential of giving knowledge and service workers the rust-belt factory worker treatment is exactly what makes Wall Street excited about AI and has the AI company leaders salivating about the profit they can make.
Warfare, policing, bio-engineered viruses are theoretical and far down the list.
AI shaping warfare Vs. Using AI to justify outrageous warfare
would you like me to list the applicable sections of the Geneva convention?
The broader public is just now barely beginning to understand because all they have to do is ask a chatbot. AI does not enable new capabilities, but it does aggregate an idea into a rough sketch and do it quickly on-demand.
None of this really means it will play out that way. The devil is in the details. What it does mean is much more nuanced attention on the politics and money because that's where the power always was.
Obviously, they still overhype and oversell this end of humanity stuff, but this argument regurgitated ad-nauseam is not THAT great of an example when you think about it.
And I am saying this as a person who actually likes this tech.
There’s about $1 trillion that needs to be paid off.
Steam machines are even dumber, but I'm quite sure that industrial revolution is a real thing.