⚡ Community Insights
Discussion sentiment: 78% positive (analyzed from 5397 words in the discussion).
Trending topics: #claude #code #pro #anthropic #more #subscription #plan #users #still #test
Discussion (168 Comments)
This does not explain the changes to documentation.
> When we do land on something, if it affects existing subscribers you'll get plenty of notice before anything changes. You'll hear it from us, not a screenshot on X or Reddit.
If you don't want things like this spreading through screenshots of X and Reddit, don't run "tests" like this in the first place!
(Also "if it affects existing subscribers" is a cop-out, I need to know the pricing of Claude Code for NEW subscribers if I'm going to adopt it at a company with a growing team, or recommend it to other people, write tutorials etc.)
A/B testing people without their informed consent is immoral, unethical, and should be illegal.
I can't trust Anthropic to manage their products in a way that supports my workflow.
I've been trying to make the case all year that if we're going to let employees do shit with AI, let's try Claude. In the past 2-3 weeks all that goodwill has basically evaporated.
Local inference needs to take off ASAP because all of these entities actually suck, and I wouldn't trust a single SLA with Anthropic. They are not acting like a serious company right now; this is a joke.
his title should be changed to Head of Corporate Bullshitting
They're hitting the physical limits of energy production and chip supply for inference capacity. There's literally nothing that can be done but reduce usage to spread it around for now.
And with no free trial period on top of that, nobody is going to want to pay $100+ just to check it out. I can't imagine the conversion rate of that test being positive.
I, and everyone else I have asked, see this new updated sales UI; sounds like more than 2%.
This is concerning, though. If I lose my current usage allotment at this price point, I will likely switch to Codex.
Once they get people hooked, deskilled, and paying, the money ratchet only tightens.
And the companies KNOW that they're replacing engineers, or trying to. So each engineer replaced frees up X salary a year, which they can now make back in SaaS LLM tokens.
Based on how much money Zitron has reported that these companies are losing on every subscription, this feels more like they're just trying to survive. In other words "ohshittification."
I had a bit of an epiphany the other day thinking about these VC companies offering products to the public at unsustainable prices. It's classic anticompetitive behavior.
You imagine anticompetitive behavior coming from a monopoly, because it can afford to burn money to drive competition out before bringing prices back to profitable levels, but the whole VC burn is the same thing. People talk about it a lot without really saying it explicitly when they talk about moats. The only moat Anthropic and OpenAI have is money, and they utilize it by offering products below cost.
The two companies are just trying to outlast the other one until they are the only one left.
So it's not really enshittification so much as it is that you were previously getting the deal of a lifetime.
Plenty of Pro subscribers never touch claude-code.
Realistically the future of all this is that open models become good enough that LLM as a service becomes a commodity with a race to the bottom in terms of cost. Given where we are today I can easily see open weight models in 2-3 years making Anthropic and OpenAI irrelevant for everyday development work (I justify this like so: if my coding agent is 10x smarter than I am, how would I understand if it did all the right things? I want someone of roughly my intelligence for coding. I can see use cases for like independent pharma work or some such where supergenius level intelligence is justified, but for coding ability for mere mortals to reason about the code is probably more important).
After all, we may be just a data source and not their intended demographic all along.
Makes me curious about the internal thinking. One theory being they are in a capacity crisis and knocking Pro users off Claude Code is an emergency brake getting pulled. But an opposite theory is it's a revenue move and they think they have the lock in to pull it off. Especially if they are building up to IPO.
Interestingly, the Team subscription, which is still $20/month/seat, includes Claude Code. But you need a minimum of 5 seats. So it could be a way to force people off individual plans and into enterprise plans, where things possibly scale better for them, especially IPO-wise. When one user in a company wants it, they probably go buy 5 seats.
My assumption is that people can very easily saturate Pro with Claude Code, so even though the quotas are (more than proportionally) lower, utilization of those quotas is high enough that Pro is less profitable.
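That assumption can be made concrete with toy numbers (every figure below is made up for illustration; these are not Anthropic's actual prices, quotas, or costs):

```python
# Hypothetical back-of-envelope: why a cheaper plan can lose more money
# per subscriber if its quota is saturated. All numbers are invented.

COST_PER_MTOK = 5.0  # assumed blended inference cost, $ per million tokens

def monthly_margin(price, quota_mtok, utilization):
    """Revenue minus inference cost for one subscriber."""
    return price - quota_mtok * utilization * COST_PER_MTOK

# "Max"-like plan: big quota, but chat-style usage leaves most of it idle.
max_margin = monthly_margin(price=100, quota_mtok=50, utilization=0.2)  # -> 50.0
# "Pro"-like plan: one fifth the quota, but an agent loop saturates it.
pro_margin = monthly_margin(price=20, quota_mtok=10, utilization=1.0)   # -> -30.0

print(max_margin, pro_margin)
```

Under these placeholder inputs, the saturated cheap plan loses money while the under-utilized expensive plan doesn't, which is the shape of the argument above.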
I dunno, I'm no business genius, but I think we're starting to see these companies try to find ways to make money instead of losing it.
Claude web is actually pretty good for dealing with random projects outside of code. I have a Home Assistant MCP server [1] behind a Cloudflare tunnel exposed to it that makes maintaining automations a lot easier.
[1] https://github.com/homeassistant-ai/ha-mcp
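For anyone curious what that exposure step looks like, a minimal sketch using the real `cloudflared` CLI (the local port, tunnel name, and hostname below are placeholders, not values from the linked project):

```shell
# Quick, unauthenticated test tunnel (Cloudflare assigns an ephemeral URL).
# Assumes the MCP server listens locally on port 8123 (placeholder).
cloudflared tunnel --url http://localhost:8123

# For a persistent hostname, use a named tunnel instead
# (tunnel name "ha-mcp" and hostname "ha.example.com" are examples):
cloudflared tunnel login
cloudflared tunnel create ha-mcp
cloudflared tunnel route dns ha-mcp ha.example.com
cloudflared tunnel run ha-mcp
```

The quick tunnel is fine for experimenting; the named-tunnel flow is what you'd want for a connector URL that survives restarts.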
It's funny that OpenAI, who in my eyes went for the general public rather than devs initially, seems to be semi-pivoting and catching all the fallout from Anthropic's recent behavior.
It is a massive bummer. Up until a few weeks ago, I had been pulling hard for Anthropic for quite some time; now I just don't care and hope something dope emerges quickly so I won't ever have to consider either of them.
On the one hand, the people there are supposedly among the smartest on the planet. On the other hand, they consistently forget that they're dealing with LOYAL humans, and these humans prefer respectful communication beforehand instead of being messed with every other day.
My hope for reasonable behavior would be to not handle it this way: decrease limits and increase prices if you can't handle it, and be _honest_ about it.
Are they just looking for a way to rationalize another hostile act? And already have expectations like:
- "minus 10% in pro signups" -> oh, let's drop those coders who won't pay anyway
- "minus X% in pro signups and plus X% in max" -> awesome, PAY UP!
While these tools stand to enable the democratization of productive capability in software engineering and other tasks (creating a renaissance for solopreneurs, let's say), what seems more likely to actually happen is that entrenched capital will become the only player with real access to this "knowledge as a utility" (was it Altman who called it that?).
We already see this playing out in two fronts: 1) the gradual reduction of services and 2) the DRAM market, where local-first tools (i.e., potential disruptors of the emerging "knowledge monopoly" created by the big AI firms) are being stifled by supply shortages. How many promising small-to-medium-sized competitors are being snuffed out of existence (or never starting) due to the insanity of the DRAM/storage/CPU (soon) markets?
The currently-subsidized access that we have to the big Opus-like models will, in parallel, be gradually taken away until only the big players can afford it. And in the end what we will have is hyper-productive skeleton crews at a few consolidated firms performing (or selling expensive access to) basically all of the knowledge labor for society, with very little potential for disruption due to the hardware and "knowledge" scarcity engineered (in part, maybe) by this monopoly.
Not necessarily a closely held belief – just a hunch – which is why I want to see what parts of the picture I might be missing.
It's easy to see this becoming a permanent position; the latest models and smarts are reserved for establishment members only, the riff-raff get the cast-offs. So the establishment is preserved and the status quo protected.
[0] I'm putting scare/irony quotes around this, but if the reporting is accurate, there is something to this; we built the internet on string and duct tape, it's not hard to see how a very smart AI could cut it to ribbons.
The real profitability is selling tokens to enterprise, and enterprise demand is growing so fast that they are short on the total amount of tokens they can generate per minute, and are prioritising rationally - enterprise gets a better experience - instead of optimizing for their lowest paying (and most loss leading) customers.
We are in a hardware crunch right now, but that won't last forever, and eventually (likely 2028) we will get experiences like we got in January from prosumer accounts again.
“You asked, and we listened: Introducing Max Plus, our biggest plan yet, designed for those…” blah blah
Opus 4.6 is giving 2, maybe 3 questions before blowing through the Pro 5-hour limit as well. We are forced to use Sonnet, which makes the same mistakes over and over, and then to start trying other companies. To make matters worse, as we try to survive between credit resets it reuses old code, re-introducing issues we had already fixed on our own and with other models.
Anthropic in just a few days has gotten me to try GLM 5.1, the new Kimi, and back to OpenAI. OpenAI also seems to introduce new bugs without being carefully micromanaged. The advantage Claude has is that the models are more careful and can refactor code instead of leading to bloat as they go. But the throttling happening now is breaking things and making the entire subscription unusable. I really hope they fix it soon.
One interesting variable is that I'm located in Vietnam while my coworkers are located in Norway and Europe.
To work around this issue I used Claude for coding with a Copilot subscription which was much cheaper and had virtually no rate limiting.
Copilot gives you some set amount of credits each month, but you can also pay as you go if you run out of credit which is much better than the 5 hour window crap claude code would give me.
The only Opus model available on Copilot right now, for some reason, is 4.7, and it costs 7.5x tokens, while everything else is 1x, 0.33x, or free.
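Those multipliers make the budgeting easy to sketch (the 300-request monthly allowance below is an assumed example figure, not a quoted Copilot number):

```python
# How far a monthly premium-request allowance stretches under
# Copilot-style model multipliers. The allowance is a made-up example.
ALLOWANCE = 300  # premium requests per month (hypothetical)

MULTIPLIERS = {"opus-4.7": 7.5, "base-model": 1.0, "small-model": 0.33}

def requests_available(model):
    """Whole requests possible before the allowance is exhausted."""
    return int(ALLOWANCE / MULTIPLIERS[model])

print(requests_available("opus-4.7"))    # 40
print(requests_available("base-model"))  # 300
```

At 7.5x, the same allowance buys roughly an eighth as many requests, which is why people route only the hard tasks to the expensive model.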
But I switched to using GPT 5.4 medium for a month or so which I find very reasonable.
I got the $20 GPT tier, and now I just use Claude to craft MD plan docs instead, then hand them off to GPT 5.4, and it has been working great. It can do about 4x as much work or so, based on my feelings (not accurate). If I have just small, simple stuff to do I might still fire those off with Sonnet, and that seems plenty viable, but as soon as it's an Opus-tier task I swap to this workflow.
A little annoying, as I'm now managing both a .claude/ and an .opencode/ folder, but I just have the .opencode/ stuff reference the .claude/ stuff, so it's a little less bleh.
I've been keeping within my usage because I've been in a bit of a funk, but when I was slightly more worried I'd juggle whether Claude or GPT would handle writing some initial tests, since it did seem imbalanced otherwise. It seems like GPT just resets weekly usage throughout the week anyway, so it's probably no big deal.
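A minimal version of that cross-referencing is just a symlink (the file names below are illustrative; opencode's actual config layout may differ):

```shell
# Keep one source of truth in .claude/ and point .opencode/ at it.
mkdir -p .claude .opencode
echo "project instructions live here" > .claude/CLAUDE.md
ln -s ../.claude/CLAUDE.md .opencode/AGENTS.md

# Both tools now read the same content:
cat .opencode/AGENTS.md
```

Editing `.claude/CLAUDE.md` then updates what both tools see, with no duplicated prose to keep in sync.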
Glad I’m not the only one!
I've been limited so often this week I've set up half a dozen token compression tools in my workflow and had to do a crash course in token optimization.
Of course, it seems to only slightly delay the inevitable and doesn’t really solve the problem.
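None of that tooling is magic; the core move is budget-aware truncation, roughly like this sketch (the chars/4 estimate is a crude heuristic, not a real tokenizer, and the message list is invented):

```python
# Naive token budgeting: estimate tokens with a chars/4 heuristic and
# drop the oldest context first until the estimate fits the budget.
def estimate_tokens(text):
    return len(text) // 4  # rough heuristic, not a real tokenizer

def trim_context(messages, budget):
    """Keep the newest messages whose combined estimate fits the budget."""
    kept, total = [], 0
    for msg in reversed(messages):  # walk newest-first
        cost = estimate_tokens(msg)
        if total + cost > budget:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))

history = ["old setup notes " * 50, "recent error log " * 10, "current question?"]
print(trim_context(history, budget=60))  # drops the oldest, oversized entry
```

Which is also why it only delays the inevitable: a long agent session regenerates context faster than any trimming policy can shed it.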
There is a lot of political capital to be earned by appearing to be "tough" on AI companies.
At this rate I fully anticipate being able to run a comparable stack on a 128GB Mac Studio using quants of newer-generation distilled OSS models in a year or two. Being able to ramble to a computer for an hour about features and technical philosophy then have it build a nearly-working app for $50 is an exciting feeling. There's still a long tail of productionization and fixing what the model didn't adhere to but it's still incredible.
(Head of Growth @AnthropicAI)
> When we launched Max a year ago, it didn't include Claude Code, Cowork didn't exist, and agents that run for hours weren't a thing. Max was designed for heavy chat usage, that's it.
Is there a wager that this is 100% foreshadowing Claude Code will be removed from the $100-200/month Max plans soon and go to something like API-only? Or only available on like a new $500-1,000/month plan? Restrict the $100-200/month ones to Claude.ai (website) or Claude desktop app only?
Either way, doesn't seem good to say it's a small test and then start justifying it in this direction.
Additionally I run a constant hacking contest between GPT and Claude. It’s a toy project and it simulates an attack/defense of a small corporate network.
Claude used to win pretty handily. Suddenly it’s started to lose 90% of the time. I thought GPT had gotten better but no, looking at the logs it seems that Claude is slower and more prone to running in circles. This is still the case when switching to Opus 4.7.
I don’t know what that means but it’s undoubtedly worse.
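The nice thing about a toy harness is that regressions like this become measurable. A sketch of the tally over the match logs (this record schema is invented for illustration, not the actual project's format):

```python
# Tally win rate and average match duration from simple result records.
# Each record: (winner, seconds) -- a made-up log schema for illustration.
def summarize(records, model):
    wins = sum(1 for winner, _ in records if winner == model)
    avg_secs = sum(secs for _, secs in records) / len(records)
    return wins / len(records), avg_secs

matches = [("claude", 120), ("gpt", 95), ("gpt", 88), ("gpt", 90)]
win_rate, avg_secs = summarize(matches, "claude")
print(win_rate, avg_secs)  # 0.25 98.25
```

Run the same tally over old and new logs and a drop from ~90% to ~10% wins is hard to argue with, whatever the cause.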
From what I can tell, Opus 4.7 is more resource-intensive than Opus 4.6, which is more resource-intensive than Opus 4.5.
If Anthropic keeps getting worse, try Amazon Kiro and other companies that run Claude on their own hardware.
It might be expensive and a worse experience compared to Claude Code, but at least the model itself is the "original flavor."
These days, it's hard to ask for much.
I could be connecting unrelated dots here, but it sure as hell seems quite coincidental to me.
So I pay for Codex instead.
Why not with email?
Even the downtime would've been fine (as GitHub shows). Instead they're pissing it all away by letting employees make random announcements on random platforms.
That $20/month is not profitable? That Anthropic thinks that people are willing to pay 400% markup without batting an eye? That Anthropic is desperately trying to clean up their burn rate? Why should we trust a company that can screw up basic PR this hard?
Would it really be that hard for them to just make all of the changes and then do a redeploy rather than doing them incrementally? It's not like they're just editing the raw HTML sitting on the server manually, right? Actually, don't answer that, I'm not sure I even want to know the answer.
3 hours later…
I settled for the rough AMD equivalent. It's not perfect, but it can still handle most of the work. Now if only extra RAM would come down in price… I find I need about 5 GB more than I have.
https://bsky.app/profile/mattgreenrocks.bsky.social/post/3mk...
Another example, I recently saw two people over on Twitter posting LLM responses at each other in a bitter argument about Vercel's security breach. They made no attempt to pretend they'd formulated the ripostes themselves, it was just screenshotting one-sided conversations... What's the point? They could've saved themselves the trouble by spawning two LLMs, naming them "John Doe" and "Fred Doe", then telling them to argue and post the name of the winner.
Disclaimer: I don't use Twitter, Bluesky, Mastodon, etc., so maybe it's not that deep.
That is the only way to avoid being held captive by Anthropic / Meta / Google.
I realize this duplicates a lot of sentiment already in this thread but anyone here with pull at Anthropic please understand it will undo a lot of the goodwill that made Claude so successful in the first place.
Now, though, I don't dare spend tokens on basic note-taking with Sonnet because I'm hitting the limit over a couple million tokens on the 20x plan, so they've really tightened the purse strings since November.
I remember when they first added Claude Code to Pro — it was limited to Max initially — and my first thought was that it seemed kind of stupid, because at one fifth of my current limit, I would be hitting walls all the time...
But I’ve mostly been using it for gitops infrastructure in my homelab. I wonder if the token usage is lighter than if I were developing an application.
> For clarity, we're running a small test on ~2% of new prosumer signups. Existing Pro and Max subscribers aren't affected.
https://x.com/TheAmolAvasare/status/2046724659039932830
April: "The fact that we're doing X isn't news because we're only starting to do X"
August: "The fact that we've fully rolled out X isn't news because we started X in April"
> Engagement per subscriber is way up. We've made small adjustments along the way (weekly caps, tighter limits at peak), but usage has changed a lot and our current plans weren't built for this.
https://xcancel.com/TheAmolAvasare/status/204672528250217304...
The Anthropic website has become inconsistent. Some places say Claude Code is included in the Pro plan, other pages don't.
The million token context + reduced caching period + new models using more tokens made this a probably unpopular but perhaps unavoidable development.
There's a hard problem here balancing costs and experience. I'm afraid despite the bad experience for people that this is necessary and $20/month was just too big a loss to sustain.
Is there any marginal cost associated with a new subscriber?
I have always heard inference is cheap and the cost was in training, so I assumed any subscriber was making them money, just not enough to cover their insane fixed costs.
But I am just guessing.
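For what it's worth, the marginal-cost question is easy to parameterize even though nobody outside Anthropic knows the real inputs (every number below is a placeholder, not a reported figure):

```python
# Marginal monthly cost of one subscriber = tokens served x cost per token.
# Both inputs below are placeholders; real values are not public.
def marginal_cost(mtok_per_month, dollars_per_mtok):
    return mtok_per_month * dollars_per_mtok

# A chat-only user vs. an agent loop, at an assumed $5/Mtok serving cost:
print(marginal_cost(2, 5.0))   # light chat use -> $10, under a $20 sub
print(marginal_cost(40, 5.0))  # heavy agent use -> $200, dwarfing it
```

So "inference is cheap" can be true per token while still sinking a flat-rate plan once agents multiply per-user token volume by an order of magnitude.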
Maybe this is coming next
"We've determined that claude code is too dangerous to your code base to release, so we are withdrawing it"