
Discussion (40 Comments)

ej88•about 2 hours ago
my main qualm with Ed is that his analysis of the financials is decent, but he absolutely refuses to admit that the technology is useful (especially in the hands of competent users), and that all the labs are extremely compute-starved due to overwhelming demand.
bigbadfeline•31 minutes ago
> my main qualm with Ed is that his analysis of the financials is decent, but he absolutely refuses to admit that the technology is useful

Yeah, I find that sort of critic causes more harm than good. The economic case for closed-source AI isn't there: in a macroeconomic sense, accounting for all costs, it's more expensive than the value it provides. There's data to back that up, so focus on the economics.

On the other hand, hallucinating about what AI can or cannot do is useless; only research can provide the answer.

llbbdd•about 2 hours ago
I used to enjoy his writing a lot pre-AI, around the time he was spending a lot of words on Musk, crypto, etc. More because it was an entertaining form of hate-reading about those topics than really informative, per se. Then he started doing this schtick with AI, and I felt like I got hit hard with Gell-Mann amnesia: he so blatantly makes claims that anybody with a free ChatGPT account can dismiss handily, and it calls everything else he says into serious question.
kritiko•about 2 hours ago
Ed Zitron is annoying.

But saying “I wish the argument was being made better” while using him as the basis for your article is more annoying to me! Just make the argument then.

But publications like The Argument need to take shots to get views, I guess.

1attice•about 2 hours ago
I don't quite understand this -- the article is not written by a Zitron supporter or ally. I think it's a best practice in several fields to interact with the texts of other thinkers and kick the tires.

Perhaps a slower, more nuanced scroll would serve you. (In all respect.)

danaw•31 minutes ago
ed can be verbose and he can exaggerate, but it's funny to claim that he doesn't come with receipts when his last two articles exhaustively go over the many signs of financial deception and other pricing issues that signal manipulation

this whole article was "i wish he made arguments the way i like"... ok then go do that yourself? it's word policing at its most annoying

Legend2440•about 2 hours ago
Other professional critics like Gary Marcus and Emily Bender are the same way. It doesn't matter what neural networks do, they will always be a dead end that should be abandoned.
semiquaver•about 2 hours ago
He’s hitched his wagon to a thesis and views everything through that lens come hell or high water.

  > he does not consider, even to disagree with it, the possibility that the industry is paying for Anthropic’s product for non-psychosis reasons, such as finding it useful)
This is my main problem with Zitron. He is so obviously the epitome of motivated reasoning. He seems constitutionally incapable of admitting the possibility that companies derive usefulness and productivity from LLMs. For anyone capable of doing on the ground reporting this would be trivially obvious (at least when it comes to coding). So he ends up just cheerleading on the “AI bad” side whether the cheers make any sense or not.

  > “Nobody wants to talk about the fact that AI isn’t actually doing very much,” he complained, before going on to complain about people saying that agents are able to do tasks independently with oversight. “What tasks, exactly? Who knows!” he wrote.
  >
  > Ed, thousands of people know and it is your journalistic responsibility to be one of them!
He’s intentionally incurious and doesn’t understand the idea of a general-purpose technology. This would be like looking at the rise of programming and computers in the 80s and 90s and asking “what are computer programs doing? I don’t see any concrete benefits right now, must be a scam”
JPLeRouzic•about 1 hour ago
> This would be like looking at the rise of programming and computers in the 80s and 90s and asking “what are computer programs doing? I don’t see any concrete benefits right now, must be a scam”

There were many people around me that said that in the 80s.

jospeh554•about 3 hours ago
Saying something is a "bubble" doesn't mean it'll go away entirely when it pops...

Which seems to be a lot of this article

SpicyLemonZest•about 3 hours ago
It doesn't necessarily mean that, but this specific person Ed Zitron believes and argues extensively that it's going to go away when it pops.
simianwords•about 2 hours ago
This sort of egregious motte-and-bailey keeps popping up.

Bailey: AI is useless, unsustainable, and fraudulent, and the bubble will pop any time now.

Motte: Oh, actually AI is a bubble, but it will end up like the internet.

Why bother with useless arguments like this?

tim333•about 1 hour ago
I dunno if he's lost the plot so much as he keeps repeating the "AI is rubbish, the investment is a bubble, it'll all crash" plot at a rate of 10,000 words a month, year after year.
cmorp•about 2 hours ago
I view the AI bubble more as huge investments made now with the goal of profiting later, set against the likelihood of an open-source model (probably several models) running on affordable hardware in any home, which would make the bet, and all the datacenters, the real flop.
sweetheart•about 3 hours ago
I'm pleasantly surprised to see this! Last year a few people I know in person, and a podcast I enjoy, talked about or to Ed Zitron and I felt like I was going crazy because so, so much of what he argued was either woefully outdated, or just a fallacy. It's also annoying because it'd be such an interesting topic to explore rigorously and without motive. As mentioned in the article, those analyses _can_ be found. But man, Ed Zitron just seems loud and silly.
MarkusQ•about 3 hours ago
> But time passes and situations evolve. Ed Zitron, though, clearly does not.

> Over the last two years, he has called the top repeatedly: The AI bubble was definitely about to burst here, and here, and here, and here, and here, and here. His conclusion hasn’t changed, but his arguments have.

> The 2024 and 2025 articles make, basically, the business case against AI: that companies aren’t really using it, it isn’t adding value, and AI investors are betting that will change before they run out of cash. In 2026, the focus is much more on alleging widespread, Enron- or FTX-tier outright fraud.

> This is basically an admission that he can’t make the case in terms of the economics anymore. And in deciding how seriously to take his case in 2026, I think it’s valuable to read it in parallel with his case from 2024 and 2025.

Say what? This is exactly the progression that you'd expect if there was, in fact, outright fraud going on.

* Someone claims to be able to do <impossible thing>

* Critics call them on it

* Rather than folding, the hype machine grows and they start claiming to be doing the thing

* The critics start accusing them of fraud

Also, I note, it's a cute trick to start off claiming "time passes and situations evolve. Ed Zitron, though, clearly does not" and then in the next paragraph object that "his conclusion hasn’t changed, but his arguments have".

I don't have a pony in this race and don't know who Ed Zitron is, but this article makes me suspect he's correct. Acting as if going from "they are wrong" to "they are wrong and lying" is "losing the plot" is anti-convincing.

[edit]

The ending is much stronger:

> I don’t actually think we need less skepticism in AI world. These companies are, indeed, run by people who are not very trustworthy, who often contradict each other or oversell their products.

> And the things they say they’re trying to do are outrageous; people have every right to object to it. Skepticism is more than warranted.

> But we desperately need better skepticism.

In that spirit, I would like to offer this observation. The one substantive difference the author highlights is the claim that generative AI is now offering value that renders the claims that it's all fraud questionable. I would argue that the value it offers is effectively plagiarism-as-a-service, and, just as with the infinite energy machines that secretly harvest power from the wiring of the building, compatible with the notion of fraud.

rafterydj•about 1 hour ago
Not sure why you're being downvoted, your comment seems interesting to me.
ForHackernews•about 3 hours ago
I'm not familiar with Ed Zitron but failing to call the top of a bubble doesn't mean you're wrong about it being a bubble. People who were calling out the housing bubble in the 2000s were "wrong" right up until they were right. e.g. from 2006 https://www.nytimes.com/2006/01/02/opinion/no-bubble-trouble...

My own feeling is that it is a bubble: AI models are the new virtual machines. They will become commodified and low-margin hosting providers will dominate the market. Investors in OpenAI/Anthropic will lose their shirts.

bwfan123•about 3 hours ago
> AI models are the new virtual machines

DeepSeek v4 flash is priced at 1/10 that of OpenAI/Anthropic. I can see a race to the bottom, or perhaps an Android vs. iPhone split, where the premium market is served by OpenAI/Anthropic and there is a long tail of commodity vendors.

zurfer•about 2 hours ago
It's priced at 1/10, but DeepSeek is probably not profitable, and it's slow.

Even more interesting is the question of whether we would have a DeepSeek model without the US frontier models.

And then there's the value of the advantage that the frontier models have. It's definitely 100x more valuable to find zero-days 3 months earlier. Probably not in every domain, but in enough domains, having the smartest model is valuable.

simianwords•about 2 hours ago
False. DeepSeek and the other providers who host DeepSeek have no incentive to subsidize it. They also price it similarly, so that is its true value.
ForHackernews•about 3 hours ago
iPhone is a consumer brand, and to some extent a fashion/status signalling choice. The market pressures in the B2B space are quite different, I expect lots of cheap good-enough models (Deepseek and others) will end up powering customer service chatbots and the like.

Who will pay 500x the price for a 1% better model? Quants and traders?

bwfan123•about 2 hours ago
Much of the agentic intelligence is at the client; the LLM backend is largely swappable. For instance, claude-code paired with any model performs well enough for many use cases. In fact, the real breakthrough is how an agent paired with an unreliable LLM can perform well. Given this dynamic, I see LLM tokens as the electrons, or electricity, and agents as the toasters and appliances using those electrons. If you extend this analogy, value will bubble up into the appliances, each of which would have its own consumer preferences. A token is a token no matter who produces it, just as an electron is, but I like my KitchenAid toaster. What's your preference?
verdverm•about 3 hours ago
> They will become commodified and low-margin hosting providers will dominate the market.

I have my doubts about this. We have not seen a viable YouTube alternative because the underlying costs of handling video content are significant and YT has custom hardware and sophisticated software. When we look to the broader cloud market, hyperscalers dominate. We are likely seeing similar when it comes to Google's TPU and access to Nvidia's best offerings.

That being said, I did just pick up a DGX Spark and it runs qwen-3.6 sufficiently well to be a viable interactive coding assistant. Certainly more than enough for unattended agents.

danaw•36 minutes ago
arguing that the reason youtube is succeeding is its video hosting costs is hilariously misguided.

the content and creators are the only competitive advantage they have. there are MANY video hosting platforms out there, but they just don't have the content to attract large audiences like youtube does. youtube has a strong early-mover advantage

jononor•about 3 hours ago
YouTube's biggest moat over the last 10 years is probably more that all the viewers and creators are already there. Any competitor has a huge disadvantage: creators are not interested in a place without viewers, and viewers are not interested in a place without creators/content.
verdverm•about 3 hours ago
yeah, network effect is real, and you cannot get viewers without competitive video delivery, so perhaps the moat is more like having an ocean on both sides
ForHackernews•about 3 hours ago
Maybe, but if custom hardware and economies of scale are the determining factor, that favors Google (and Amazon/Microsoft), not OpenAI or Anthropic.
verdverm•about 3 hours ago
Definitely. Google Vertex AI serves up other companies' models better than they can themselves. The TPU is the bee's knees. I really hope Google makes its own take on the DGX Spark
ej88•about 2 hours ago
There's no viable YT alternative because of network effects, not the video hosting

Those same network effects don't exist (yet) for models

Ekaros•about 2 hours ago
Discoverability of other content, and ad money. And then a critical mass of viewers, leading to sponsorships and other exploitative models of monetizing outside Google.

Ads might be a questionable model for a lot of use cases. And the network model only works for promotion but does not lock users in, because content is only available in one place.

verdverm•27 minutes ago
There is a bit of an apples vs. oranges comparison here.

It is unlikely that models will have a network effect, because (1) there is less of a two-sided marketplace and (2) people are already forming brand preferences. We also see significant convergence among the agent harnesses.

I'm currently building out an internal agentic orchestration platform for business and development, and a requirement is to support multiple models and tools so people have some degree of choice.

alex43578•about 3 hours ago
There has to be some time-based discount factor to calling a bubble.
simianwords•about 2 hours ago
- If costs are going up: oh no AI is not sustainable! Bubble bursting!

- If AI costs are going down: oh no it’s getting commoditised, OpenAI bankrupt anytime soon TM

- If companies have moat and get bigger: oh no companies are getting powerful! It’s bad and we must oppose them because they can rug pull anytime and enshittify!!

What situation is something that you would be okay with? Because people seem to have a problem with any outcome.

ForHackernews•about 2 hours ago
Huh? What makes you think I'm objecting to anything?

I'm actually pretty happy if we have a competitive market for AI that maximizes consumer surplus. For a while there it looked like AI might remain in the hands of two or three corporate giants.

(I won't be buying the OpenAI IPO, that's all)

apercu•about 3 hours ago
Article kind of lost me at "It can no longer argue that costs aren’t falling; they are."
simianwords•about 2 hours ago
They absolutely are. She has even linked a source for that. It's almost indisputable that prices (per capability) are going down. I think you should read it more deeply to understand the argument.