
Discussion (40 Comments). Read the original on HackerNews.
Yeah, I find that sort of criticism causes more harm than good. The economic case for closed-source AI isn't there: in a macroeconomic sense, and accounting for all costs, it's more expensive than the value it provides. There's data to back that up, so focus on the economics.
On the other hand, hallucinating about what AI can or cannot do is useless; only research can provide the answer.
But saying “I wish the argument was being made better” while using him as the basis for your article is more annoying to me! Just make the argument then.
But publications like The Argument need to take shots to get views, I guess.
Perhaps a slower, more nuanced scroll would serve you. (In all respect.)
this whole article was "i wish he made arguments the way i like"... ok then go do that yourself? it's word policing at its most annoying
There were many people around me that said that in the 80s.
Which seems to be a lot of this article
Motte: AI is useless and unsustainable and fraud and the bubble will pop anytime
Bailey: Ohh ackchually AI is a bubble but it will end up like the internet
Why bother with useless arguments like this?
> Over the last two years, he has called the top repeatedly: The AI bubble was definitely about to burst here, and here, and here, and here, and here, and here. His conclusion hasn’t changed, but his arguments have.
> The 2024 and 2025 articles make, basically, the business case against AI: that companies aren’t really using it, it isn’t adding value, and AI investors are betting that will change before they run out of cash. In 2026, the focus is much more on alleging widespread, Enron- or FTX-tier outright fraud.
> This is basically an admission that he can’t make the case in terms of the economics anymore. And in deciding how seriously to take his case in 2026, I think it’s valuable to read it in parallel with his case from 2024 and 2025.
Say what? This is exactly the progression that you'd expect if there was, in fact, outright fraud going on.
* Someone claims to be able to do <impossible thing>
* Critics call them on it
* Rather than folding, the hype machine grows and they start claiming to be doing the thing
* The critics start accusing them of fraud
Also, I note, it's a cute trick to start off by claiming "time passes and situations evolve. Ed Zitron, though, clearly does not" and then in the next paragraph object that "his conclusion hasn’t changed, but his arguments have".
I don't have a pony in this race and don't know who Ed Zitron is, but this article makes me suspect he's correct. Acting as if going from "they are wrong" to "they are wrong and lying" is "losing the plot" is anti-convincing.
[edit]
The ending is much stronger:
> I don’t actually think we need less skepticism in AI world. These companies are, indeed, run by people who are not very trustworthy, who often contradict each other or oversell their products.
> And the things they say they’re trying to do are outrageous; people have every right to object to it. Skepticism is more than warranted.
> But we desperately need better skepticism.
In that spirit, I would like to offer this observation. The one substantive difference the author highlights is the claim that generative AI is now offering value that renders the claims that it's all fraud questionable. I would argue that the value it offers is effectively plagiarism-as-a-service, and, just as with the infinite energy machines that secretly harvest power from the wiring of the building, compatible with the notion of fraud.
My own feeling is that it is a bubble: AI models are the new virtual machines. They will become commodified and low-margin hosting providers will dominate the market. Investors in OpenAI/Anthropic will lose their shirts.
Deepseek v4 flash is priced at 1/10 that of openai/anthropic. I can see a race to the bottom - or perhaps an android vs iphone split - where the premium market is served by openai/anthropic and there is a long tail of commodity vendors.
Even more interesting is the question if we would have a deepseek model without the US frontier models.
And then what's the value of the advantage that the frontier models have? It's definitely 100x more valuable to find zero-days 3 months earlier. Probably not in every domain, but in enough domains, having the smartest model is valuable.
Who will pay 500x the price for a 1% better model? Quants and traders?
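The "500x the price for a 1% better model" question is really a break-even calculation: the premium pays off only when the extra value of a small quality lift exceeds the extra spend, which depends on the stakes of the task. A minimal sketch, with entirely hypothetical numbers chosen only to mirror the comment:

```python
def premium_worth_it(task_value: float, quality_lift: float,
                     base_cost: float, price_multiple: float) -> bool:
    """Pay the premium only if the extra value from a small quality
    lift exceeds the extra inference spend. All inputs are hypothetical."""
    extra_value = task_value * quality_lift        # e.g. a 1% better outcome
    extra_cost = base_cost * (price_multiple - 1)  # e.g. 500x the base price
    return extra_value > extra_cost

# A trading desk: a 1% edge on a $10M decision dwarfs a 500x markup
# on a $100 inference bill.
assert premium_worth_it(10_000_000, 0.01, 100, 500)
# A routine coding task: the same 1% lift is worth a few dollars, so
# the commodity model wins.
assert not premium_worth_it(300, 0.01, 0.10, 500)
```

This is why "quants and traders" is a plausible answer: the premium is rational exactly where per-decision stakes are enormous relative to inference cost.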
I have my doubts about this. We have not seen a viable YouTube alternative because the underlying costs of handling video content are significant and YT has custom hardware and sophisticated software. When we look to the broader cloud market, hyperscalers dominate. We are likely seeing similar when it comes to Google's TPU and access to Nvidia's best offerings.
That being said, I did just pick up a DGX Spark and it runs qwen-3.6 sufficiently well to be a viable interactive coding assistant. Certainly more than enough for unattended agents.
the content and creators are the only competitive advantage they have. there are MANY video hosting platforms out there, but they just don't have the content to attract large audiences like youtube does. they have a strong early-mover advantage
Those same network effects don't exist (yet) on models
Ads might be a questionable model for a lot of use cases. And the network model only works for promotion; it does not lock users in, because the content is only available in one place.
It is unlikely that models will have a network effect because (1) there is less of a two-sided marketplace and (2) people are already forming brand preferences. We also see significant convergence among the agent harnesses.
I'm currently building out an internal agentic orchestration platform for business and development, and one requirement is to support multiple models and tools so people have some amount of choice.
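The multi-model requirement above usually comes down to one abstraction: wrap every backend behind a common completion interface and dispatch by name. A minimal sketch, assuming each provider can be adapted to a plain prompt-to-text function; the names here (ModelSpec, ModelRegistry) are hypothetical, not the commenter's actual design:

```python
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class ModelSpec:
    """One registered backend: a name, its provider, and a completion fn."""
    name: str
    provider: str
    complete: Callable[[str], str]  # prompt -> completion text


class ModelRegistry:
    """Dispatches completion calls to whichever backend the caller names."""

    def __init__(self) -> None:
        self._models: Dict[str, ModelSpec] = {}

    def register(self, spec: ModelSpec) -> None:
        self._models[spec.name] = spec

    def complete(self, model_name: str, prompt: str) -> str:
        return self._models[model_name].complete(prompt)


# Stub lambdas stand in for real API clients (vendor names are made up):
registry = ModelRegistry()
registry.register(ModelSpec("premium", "vendor-a", lambda p: f"[a] {p}"))
registry.register(ModelSpec("budget", "vendor-b", lambda p: f"[b] {p}"))
assert registry.complete("budget", "hello") == "[b] hello"
```

Keeping the interface this thin is what makes the "commodity vendor" swap cheap: callers name a model, and the registry hides which vendor serves it.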
- If AI costs are going down: oh no it’s getting commoditised, OpenAI bankrupt anytime soon TM
- If companies have moat and get bigger: oh no companies are getting powerful! It’s bad and we must oppose them because they can rug pull anytime and enshittify!!
What situation would you actually be okay with? Because people seem to have a problem with any outcome.
I'm actually pretty happy if we have a competitive market for AI that maximizes consumer surplus. For a while there it looked like AI might remain in the hands of two or three corporate giants.
(I won't be buying the OpenAI IPO, that's all)