
Discussion (119 comments) · Read original on HackerNews
What if there are no other killer apps for Enterprise? Only CC will produce the level of token churn that could drive huge profits for model providers.
The Enterprise market is not as substantial as the rapid success of CC makes it seem.
This is not an AI thing, this is a stocks thing, which I've been complaining about incessantly.
If a given domain, like AI, has competition, that means you have to sell things at cost + margin, and rush ahead or be crushed by competitors. That will definitely make you good money, but it won't make you a king.
This is not the kind of money people involved with these kinds of companies are looking for.
AI right now looks like a competition with many horses in the race, all more or less building the same product.
It will be hard to squeeze and enshittify this market when people can just jump to another vendor; so if the current market structure were to prevail, investors would leave.
Thus competition has to go.
Altman knows this, and tried to position OpenAI as the obvious winner in this competition, but I guess in the process he managed to alienate people, so now he's not doing so well.
But who knows what the future will bring?
Like, that feels like it's also a huge amount of token churn ("sure, I can search every xls file on your machine to find the 2023 invoice from that company"), and it's very early in its adoption curve.
Most people are still using AI as a webpage chatbot to ask questions and copy-paste between, but an "openclaw"-like assistant that can access your files and email (and opens you up to wild security attacks) seems like a really big killer app.
Cowork to me also seems like it'll take longer to reach the broader market since the models are less good at "use the mouse and keyboard to do this repetitive task" than "write code", but I see it as having killer-app potential with lots of token churn.
I think it's more likely that the companies that employ large numbers of people to perform manual push-the-button-then-the-other-button workflows will replace the tools that need button-pushing with other sorts of automation.
And outside of work I wouldn't spend any money on something to save myself the ten minutes of logging in to pay my credit cards or check my bank statements once a month or so. I have no real need for an always-running assistant and even the things that it seems most useful for today (beating unassisted humans to the punch for limited-quantity things) are only something it could help with as long as only a very few people have access.
Tools like Claude are best at answering things when the user understands the question.
It’s telling how scarce vision is.
I’ve been using these types of functions for a while for some specific use cases, and it’s super useful for this. E.g., go into my budgeting app and explain to me why a certain discrepancy between forecast and actual occurred, which would otherwise cost me a huge amount of time.
I’ve also been using Cowriter AI, which actively learns from what you’re doing by taking screenshots of your screen every few seconds.
These types of utilities are just starting, they’re underexplored, and will definitely burn lots of tokens (while creating value).
Sure, that's happening too, but to a lesser degree than I thought. CC with a number of "enterprise integrations" (really: corporate MCPs) is a pretty hefty force-multiplier for operations teams. "Go summarise and dissect this weird client request for me. Documentation is spread across at least $THESE_ENTERPRISE_DATA_SILOES." Saves a bunch of time pinging the different people across continents who happen to know intimate details. That was not entirely unexpected.
It's the technically minded but not necessarily otherwise technical people who keep surprising me in weird and wacky ways. People are building themselves and their immediate peers disposable dashboards. Who needs a service to pull data for a real-time display when CC can collect the necessary information and construct a local, static HTML file with all the info neatly in one place? I'm sure there will be a hangover because the compute cost for doing these in JIT fashion will surely feel like death by a thousand cuts at some point, but the ability to really quickly validate whether certain types of data aggregations are useful is proving to be ... a positive development.
I disagree about the ease of maintaining the software, though. You still need the skills to really understand what the code is doing, and with the original "why" possibly lost in the adrenaline haze, the maintenance effort floor has shifted.
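The disposable-dashboard pattern described above takes surprisingly little code. Here's a minimal sketch of the kind of local, static HTML report a CC session might spit out; the metric names and output path are entirely hypothetical, just for illustration:

```python
from datetime import datetime, timezone
from pathlib import Path

def render_dashboard(metrics: dict[str, float], out: Path) -> Path:
    """Render a throwaway static HTML dashboard from a metrics dict."""
    rows = "\n".join(
        f"<tr><td>{name}</td><td>{value:,.2f}</td></tr>"
        for name, value in metrics.items()
    )
    html = f"""<!doctype html>
<html><head><title>Dashboard</title></head>
<body>
<h1>Snapshot {datetime.now(timezone.utc):%Y-%m-%d %H:%M} UTC</h1>
<table border="1">{rows}</table>
</body></html>"""
    out.write_text(html, encoding="utf-8")
    return out

# Hypothetical numbers standing in for whatever the agent aggregated.
path = render_dashboard(
    {"open_tickets": 42, "mean_response_hours": 3.5},
    Path("dashboard.html"),
)
```

No server, no deployment: open the file in a browser, decide whether the aggregation is useful, and delete it.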
I'm in the film and engineering spaces, and I can honestly say the same about image and video models.
There is so much fun in all of these tools, and the productivity gains are insane.
I shoot film, but I never would have been able to do anything like this before:
https://www.youtube.com/watch?v=HDdsKJl92H4
https://www.youtube.com/watch?v=oqoCWdOwr2U
Today, I saw AI OR DIE with this banger:
https://www.youtube.com/watch?v=CNbmoVdirxw
Gossip Goblin is doing incredible work as usual. Dude is a savant and would have killed it in Hollywood if he'd had a chance before:
https://www.youtube.com/watch?v=-Rzl7nUdEs4
Corridor Crew is leaning in and building new tools:
https://www.youtube.com/watch?v=Y3Dfw969itU
There's just so much incredible stuff being made by really brilliant people that never would have had the chance before. And these tools are literally brand spanking new. We're just getting started.
Are they actually driving any profit? I mean actual profit: not "tokens" or users, and not "profit" that ignores inference costs (likewise training, R&D, etc.). I'm not arguing against how useful or popular it is, just asking about the basic total spent minus total earned.
Too busy trying to make TikTok for preteens with $4-per-generation videos that lost their novelty the minute IP was off the table. They didn't even identify that the professional video market was the correct place to invest, as Kling and ByteDance did.
Chasing consumer killed their ascendency.
Sam is a ruthless leader and knows how to build an empire, but he's also a distracted one who chases too many flights of fancy. Without a golden goose like Zuckerberg's, every mistake is a knife wound.
It's pretty embarrassing how they have blown the lead. Instead of finding a pathway toward selling tokens in volume (software production), they spread themselves thin and tried to hype up research, Sora, a web browser... blah blah.
Again - they get what they deserve.
It's a falling knife. Don't try to catch it on the way down. That valuation might be justified in another 10 years.
Hard to imagine when they don't have any moat.
Sure they have ... I don't know how many users, but it's not like a social network. Instagram was valued at $10B very, VERY fast, not because of its tech or employees but mostly, IMHO, because of the number of locked-in users ... because of OTHER users.
Here, if one wants to move from OpenAI to Anthropic, they can and they do. You might have difficulty exporting history, context, etc., but you can manage it.
Even basic email has more lock-in than any of the model providers. They arguably did have some moat a few years ago, but now there's no differentiator that would justify such a valuation.
They are no Meta/Google/Microsoft/Oracle not because of their size or technology but only because their customers can swap providers.
https://finance.yahoo.com/news/new-rule-could-fast-track-spa...
Google has the benefit of being insanely profitable though.
AGI is not gonna come from these companies.
Many teams remain anchored on equating AI with chat experiences, while a growing share of enterprise value is emerging from leasing compute clusters to run agentic workloads in containerized environments.
OpenAI has built a cloud-first architecture that supports this model. The desktop experience and applications are sexy, but enterprise usage will likely skew heavily toward asynchronous, background processing.
Look at previous killer apps: they came out quickly and were raking in money very quickly. The Apple II went on sale on June 10th, 1977. VisiCalc went on sale October 17th, 1979, so 860 days separate the two. Apple IPO'd in 1980 with a 21% operating margin! Netscape Navigator 1.0 released December 15th, 1994; Amazon.com made its first sale July 16th, 1995, 214 days later. AMZN IPO'd May 15th, 1997, 883 days after Netscape 1.0 was released to the public (they had raised less than $10 million to that point, and chose not to show a profit because they kept re-investing everything into expanding the business).
We are already 1232 days since ChatGPT 1.0, so we're about 50% farther along than either of those killer apps, and no one has figured out as good a business model for Generative AI as either of those companies did.
To use the other great technology transformation of the past 50 years, cell phones: I have a bit of trouble picking the right comparison for ChatGPT 1.0. Working backwards, the span from ChatGPT 1.0 opening to the public until today is about the same as the span from the launch of the Motorola Razr to the iPhone 3G (the first one with an App Store, the real killer app), to give you an idea of how fast mobile technology moved.
Do note that the Razr and the iPhone, like VisiCalc, the Apple II, and Netscape 1.0, were hugely profitable for their companies, in a way that no one has demonstrated with Generative AI. Amazon is a bit of a special case: they were not raising money, just re-investing the cash the business threw off into expansion rather than booking it as profit. I don't believe any AI company is generating cash flow the way Amazon was in 1997, and the other companies mentioned here were GAAP-profitable.
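The launch-to-killer-app gaps quoted above are easy to check with simple date arithmetic (the comment appears to count the start day inclusively, so exclusive subtraction comes out one day lower for each gap):

```python
from datetime import date

# Day counts between the product launches cited in the comment.
gaps = {
    "Apple II -> VisiCalc": (date(1979, 10, 17) - date(1977, 6, 10)).days,
    "Netscape 1.0 -> first Amazon sale": (date(1995, 7, 16) - date(1994, 12, 15)).days,
    "Netscape 1.0 -> AMZN IPO": (date(1997, 5, 15) - date(1994, 12, 15)).days,
}
for label, days in gaps.items():
    print(f"{label}: {days} days")
```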
All the other stuff is nice… but you will continue to be money losing and eventually die.
Now you can’t come out and say this because there’s a whole bunch of investments that depend on hype - think about the robotics nonsense.
We're only using 1% of what these models will ultimately do when they're running 24/7 as utilities serving new economic models.
There just isn't enough compute right now to realize the larger monetization strategies.
And that's the weird one, all of the other examples I provided were booking real profits by this point in their technology cycle.
As you note, Netscape and Amazon IPOed fairly quickly.
Google took 6 years (1998 to 2004)
Facebook took 8 years (2004 to 2012)
Alibaba Group took 15 years (1999 to 2014)
Claude Code is at $30B annual recurring revenue, and it launched in Feb 2025; OpenAI is at $25B (although they measure partner revenue differently). By comparison, the iPhone made $630M in revenue in the 12 months after it launched.
The ironic part about this is that GPT models are by far the worst models to chat with.
I think I'd rather talk to a wall than GPT-5.4. It's so unpleasant. I feel bad for anyone whose only experience with AI is ChatGPT. Every reply seems to:
* generate a lot of text
* answer at least a few of what it thinks might be follow up questions
* restate its original answer a few times
* suggest a follow up “if you want, I can turn that into…”
It feels very tedious and noisy.
Especially the “If you want” at the end of every single reply.
Probably won't happen. But not definitely.
Maybe they think OpenAI is doing something right?
1. I honestly don't think that AI is all that useful for anything other than suppressing labor costs and I don't expect that to change in the short to medium term;
2. I really don't think Anthropic or OpenAI can ever satisfy their stratospheric valuations. I foresee no possible cash flow arriving quickly enough to make that happen;
3. Hardware costs will devalue the trillions invested in AI data centers. By 2030 the GPUs will probably be at least 3x as good. Bear in mind, it's just over 4 years between the 3090 and 5090 and that's 3x TFLOPS; and
4. China or other actors will make sure that proprietary LLMs won't be dominant. DeepSeek was a shot across the bow. China in particular won't want a US tech company to dominate this space. The increasing RAM in relatively cheap local computers will make this more and more viable.
Bonus prediction: I think China will be making their own homegrown NVidia equivalent GPUs on homegrown EUV by 2030.
The Sora sunsetting marked a big shift towards enterprise focus and meeting Anthropic on the enterprise battlefield, but almost all engineers I work with or know are using Claude exclusively at this point.
Anyone seeing differently?
The warning signs are already starting to show, though: projects are being stalled or left unfilled, with the blame put on delays from China, etc. But the funding is still present, and construction of the next building keeps going even as the last one sits vacant and offline. The sky-high purchases of property from connected individuals by site developers continue, even as pushback mounts and many places pass anti-datacenter ordinances.
There has been a stream of HN posts (I've noticed this mainly in the past few weeks) implying some people prefer ChatGPT/Codex to Claude.
Anecdotally, Claude on the $20/month plan can only run 1-3 queries per 4 hours before rate limiting, often stopping in the middle of a query. ChatGPT/Codex doesn't have this problem.
HOWEVER, it has a flaw that makes some people prefer Codex: out of the box, it's lazy: https://x.com/i/status/2044126543287300248
However, once you learn how to deal with the laziness (which can be dealt with some CLAUDE.md instructions and context docs), Claude shows a better taste for coding. It replicates patterns from the repo, writes more readable/maintainable code, follows instructions, captures implicit information.
GPT/Codex is not a bad model/agent, but it lacks something. It's amazing for code reviews, but it writes code with zero regard for your existing codebase or SOLID/DRY principles. It just likes to output code (a lot of it) that works for the task you gave it right now, with zero regard for maintenance later. It also over-uses defensive programming in a way that quickly makes the codebase unreadable for dynamic languages.
Claude is not perfect; I still have to steer it sometimes to prevent overengineering or duplicate code, but a lot less than when I try Codex (and the built-in /simplify does half of the work for me).
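The anti-laziness instructions mentioned above might look something like this in a project's CLAUDE.md (the file name is the real Claude Code convention; the rules and the src/ path are purely illustrative, not the commenter's actual setup):

```markdown
# Project instructions

- Finish the whole task before stopping; do not leave TODO stubs or partial implementations.
- Follow the existing patterns in this repo (see src/ for naming and module layout) instead of inventing new ones.
- Prefer editing existing files over creating parallel implementations.
- If a requirement is ambiguous, state the assumption you made and proceed.
```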
Utterly not my experience. I use Opus near-daily for long research sessions (not all agent-based). Are you throwing 100k input tokens into every query?
I use Claude Max 20x at work and I rarely hit 10% session utilization, which implies even using Claude to write code all day only uses 2x the Pro token limit.
Are you just telling it to try again when you get a response you don't like?
We have Claude Teams at work and I don't think I've had issues there.
I've also noted that 90% of technical users I encounter are on Claude, or mostly-Claude via Cursor (switching models here and there).
It was a pretty straightforward transition from mostly using Claude Code to now exclusively Codex.
genuine question, what do you think these words mean?
If people want to meme OpenAI into a trillion dollar market cap, I guess let them?
This is exactly the dynamic I've been worried about.
If you go to OpenAI's site to learn what they're all about, they're pretty clear about it: "ensure that artificial general intelligence benefits all of humanity", "Join us in shaping the future of technology". They think and I agree that ChatGPT is great, but the future of humanity does not depend on precisely how successful this one consumer chatbot is, and so it is not the company's focus. Anyone who understands OpenAI at even a basic level would recognize this, it's neither new nor subtle.
I'm not sure how to avoid the conclusion that OpenAI investors do not understand OpenAI and are just revenue growth junkies.
The 2023 board fight illustrated exactly this conflict in real time: the board tried to exercise mission-aligned oversight and was effectively overruled by capital. The new governance structure gave investors more influence, not less.
"We take the mission seriously" and "we need to justify an $852B valuation" can coexist for a while, but not forever. The investors may be revenue-focused, but they were invited in under terms that make their expectations structurally legitimate — which is what makes this more than just a perception problem.
Thus far, based on their actions, a reasonable read would be that they believe "humanity" would be better off with fewer people. Whoever you think OpenAI is or was, you'd have to be willfully ignorant of the actions of those who run it to believe it and Sam now.
What's comical is that Steve Jobs preached the notion of focus decades ago.
Why can't people follow simple advice from someone who already acquired the scar tissue? It's literally madness.
Sam should've been fired and stayed fired. He's great at raising money, but at running the firm? An absolute basket case of a CEO in that regard.
Anthropic is also overvalued. Their revenue is not even recurring; it's now "Annualised Revenue" driven by token spend.
These two companies are just vehicles for a pump-and-dump scheme. OpenAI is already offloading shares with "acquisitions" that do not make any sense, because investors already think they are about to IPO and are not worth the price.
Also, one more thing… and it is called DeepSeek.