Discussion (156 Comments) | Read Original on HackerNews

SimianSci•about 2 hours ago
The spend at my organization has passed the $200,000-per-month level on Anthropic's enterprise tier. The number of outages we have had over these past few months is astounding, and coupled with their horrendous support it has our executive team furious.

It's a lot of money to be spending for a single 9 of reliability.

Shakahs•24 minutes ago
If you are paying API rates (not using Max subscriptions) there's no reason to use Anthropic's API directly, the same models are hosted by both AWS and Google with better uptime than Anthropic.
JamesSwift•15 minutes ago
How do things like prompt caching etc play into that? Would I theoretically have a more stable harness backing my usage?

I'm seriously over the current Claude experience. After seemingly fixing my 4.6 usage by disabling adaptive thinking and moving to max effort, it seems the release of 4.7 has broken that workflow, and I'm 99% certain that disabling adaptive thinking does nothing even on 4.6 now. Just egregious errors in the 2 days this week since coming back from vacation.
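On the prompt-caching question: with Anthropic's Messages API, caching is opted into per request via `cache_control` breakpoints on the stable prefix (system prompt, tools), and the Bedrock/Vertex-hosted versions accept a very similar request body, so in principle the harness doesn't need to change much. A minimal sketch of that request shape, with a made-up model id and prompts (verify exact field names against each provider's docs):

```python
# Sketch (assumed request shape): mark the stable prefix with a
# cache_control breakpoint so repeat calls can reuse it instead of
# reprocessing the whole prefix.
def build_request(system_prompt: str, user_msg: str) -> dict:
    return {
        "model": "claude-opus-4-5",  # hypothetical model id
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": system_prompt,
                # everything up to and including this block is cacheable
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": user_msg}],
    }

req = build_request("You are a code-review assistant.", "Review this diff")
```

Whether a Bedrock- or Vertex-backed harness is actually *more stable* is a separate question from caching; the cache only lives on whichever provider you send the prefix to.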

noosphr•about 2 hours ago
A single nine so far. If GitHub is any guide, things will get worse.
smt88•about 2 hours ago
Why would GitHub be a guide? It's also terrible, but it's a radically different stack from an unrelated company.
StableAlkyne•about 2 hours ago
That, and even before AI, MS was having trouble with GH reliability
shimman•about 1 hour ago
GitHub, along with MSFT in general, has massive Copilot mandates where workers are being shamed into using slop tools to fix serious ongoing issues. GitHub seems wholly incapable of resolving its issues: money isn't a problem, talent isn't a problem, but business leadership definitely is.

Look at how other companies are suffering massive outages due to LLMs too, like AWS and Cloudflare, two companies that used to be the best in the industry at uptime but have suddenly faltered quite quickly.

Companies with even worse standards will quickly realize how problematic these tools are. Hopefully before a recession, because this industry seems to be allergic to profitable businesses, and the leaders who have been around since ZIRP have shown zero intelligence in navigating these times.

Someone1234•about 2 hours ago
Obviously there is only so much you can say, but is that $200K due to the raw number of seats you have, or are you burning through a lot on raw API usage? I guess I'm trying to understand: large business, or large usage?
SimianSci•about 1 hour ago
We are in the SMB space; the spend is almost entirely usage for us at this point, rather than seat cost. For context, we are a software firm focused on difficult engineering problems, but I can't divulge much else.
nubinetwork•31 minutes ago
> single 9 of reliability

Out of curiosity, do you actually use it 24/7? The world doesn't collapse every time o365 goes down... (which is also pretty often)

manacit•20 minutes ago
In my experience the downtime tends to coincide with peak Pacific-time hours. If you're in PT, it's very inconvenient.
Hamuko•9 minutes ago
Yeah, I feel like all of the bad downtimes happen during American business hours. We use GitHub at work in Europe and I don't remember it ever being down or broken between 0700 and 1700 local time.
mgh95•26 minutes ago
If it's judged only by the time it's expected to be in use (work hours), reliability is likely even worse than the 24/7 measure suggests.
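The arithmetic behind that point is simple but striking: the same absolute outage time hurts much more when measured against work hours only. A back-of-envelope sketch with made-up numbers:

```python
# Hypothetical numbers: 10 hours of outage in a 30-day month, all of it
# landing during business hours (8h/day, ~22 workdays).
outage_hours = 10
month_hours = 30 * 24   # 720 wall-clock hours
work_hours = 22 * 8     # 176 hours of expected use

availability_24x7 = 1 - outage_hours / month_hours
availability_work = 1 - outage_hours / work_hours

print(f"24/7 measure:       {availability_24x7:.2%}")  # → 98.61%
print(f"work-hours measure: {availability_work:.2%}")  # → 94.32%
```

Same ten hours, but the work-hours figure drops well below even a single nine.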
wg0•40 minutes ago
Speaking of developer tooling spend: IDEs, such as JetBrains', are far harder to build, and I don't think any IDE charges any customer this amount per month.

Not sure how much of a productivity gain $2.5 million per year actually buys.

theptip•33 minutes ago
Supply and demand - if you think it’s not worth the price, take your dollars elsewhere.

This is the brutal reality; even with the crazy reliability issues, demand is still far outstripping supply at the current price.

wg0•21 minutes ago
Run Facebook on a single Proxmox box and demand would still outstrip the supply.

What remains to be seen is whether that demand sustains in the long run at this price point, or flattens out and proves highly elastic, given that many other providers are catching up pretty fast.

deadbabe•about 2 hours ago
We are spending the equivalent of 32 monthly software engineer salaries on Claude per month.
jonny_eh•17 minutes ago
Info like this is useless without context like, how much revenue does the company earn? How many engineers do they employ? etc.
SimianSci•about 1 hour ago
Our spend is roughly equivalent to 12.3 software developers when you break it down across all people-related expenses. But prior to this we'd spent a lot of time and energy on measuring our software-development output across multiple teams. The delivery improvements are not evenly distributed across teams, but the increases we have seen suggest a better ROI than if we had hired 12 developers.
protonbob•about 1 hour ago
I guess, if you think about your teammates purely as inputs and outputs and not as people who can improve and contribute in the workplace in other ways.
cactusplant7374•about 2 hours ago
Is it worth it?
lolive•about 1 hour ago
He was fired before answering.

[but as his manager I can tell you:] YES !!!!

walrus01•about 2 hours ago
Five nines? No, nine fives
mihaaly•22 minutes ago
I wonder if self-hosted models would be a sensible step for your organization.
boc•about 1 hour ago
Seems to be back now (claude code at least)
bayarearefugee•about 2 hours ago
> has our executive team furious

And yet they will continue to spend wheelbarrows full of money with Anthropic because they want so badly to reach the point where they can fire you.

SimianSci•about 1 hour ago
I think there is a lot of baseless fury behind your words, but my regular interactions with my leadership don't lead me to think they have the end goal of replacing labor. We're blessed to have leadership with technical backgrounds, so the tools are regarded as significant intelligence enhancers for already exceptionally smart engineers, rather than as replacements.

It doesn't seem like wheelbarrows of money to us, when you consider the average AWS/Azure bill.

therobots927•4 minutes ago
“Baseless fury”

I’m glad your leadership isn’t trying to fire everyone. But in case you live under a rock, tech layoffs are at all-time highs. Companies are rewarded by the public markets for laying off workers.

Simultaneously we have AI industry leaders warning of an employment apocalypse once AGI is achieved.

And you think it’s baseless. Have some class bro.

sillysaurusx•30 minutes ago
Huh? Your other comment explicitly said you were replacing labor: https://news.ycombinator.com/item?id=47939146

> the increases that we have seen suggest a better ROI than if we had hired 12 developers.

You can’t argue “we were able to get away with not hiring more developers” and also say you aren’t replacing labor.

Morally I trend towards your side of things, but it’s also important to be realistic about what you’re actually doing. Money is going towards Anthropic and not towards new hires. That’s a replacement of labor. It doesn’t matter what the end goal was.

protonbob•about 1 hour ago
Not ever hiring juniors and eventually mids is just replacing labor with extra steps.
SilverElfin•25 minutes ago
They must have hired absolutely incompetent leaders on the core software and infrastructure side. Sure their AI research is great but it’s amateur hour. Or just vibe coded slop top to bottom. It seems like every single day people are talking about outages or billing issues or secret changes to how Claude works.
cactusplant7374•about 2 hours ago
Imagine how much money they would save if they switched to Codex.
subscribed•35 minutes ago
Not everyone can (due to corporate compliance requirements, e.g. needing assurance that the LLM won't train on anything you send).

Besides, Codex wasn't always the answer.

simianparrot•32 minutes ago
Just give them more money, surely it'll get better.

/s

scosman•about 2 hours ago
We're officially down to one 9 of uptime over last 90 days: https://status.claude.com
apetresc•35 minutes ago
Not so fast, it's currently 98.59%. That's technically two 9s!
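For reference, the conventional "count of nines" is floor(-log10(1 - availability)), so 98.59% is still just one nine by that measure; the two 9s are only in the digits. A quick sketch:

```python
import math

def nines(availability: float) -> int:
    """Whole 'nines' of availability, e.g. 0.995 -> 2, 0.9995 -> 3.
    (Exact boundaries like 0.999 are unreliable in floating point.)"""
    return math.floor(-math.log10(1 - availability))

print(nines(0.9859))  # → 1: one nine, despite the digits
print(nines(0.995))   # → 2
```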
lousken•about 1 hour ago
Can't they use Mythos to figure out their uptime?
scosman•about 1 hour ago
Mythos prompt: Hey Mythos, make me 20,000 H100s.
ofjcihen•about 2 hours ago
Ah the uptime rainbow
cachius•about 2 hours ago
Up-time girl, she's been living in her up-time world...
burnte•about 1 hour ago
I bet she's never had a downtime guy, I bet her momma never told her why.
SilverElfin•24 minutes ago
Is there a word for the phenomenon where you automatically read something in someone’s voice or in the rhythm of a song?
jplona•about 2 hours ago
Sadly not colorblind friendly
happytoexplain•about 2 hours ago
Yeah, to me it looks like, I think red, and then at least two similar shades of green, and grey.
rdtsc•about 2 hours ago
From 5 9s to 9 5s
2ndorderthought•about 2 hours ago
The question is: is it DNS or an AI outage? Hmmmm
EForEndeavour•about 2 hours ago
Just another Mythos breakout. Excuse us while we airgap the affected DC and send in a team to drive framing nails into every storage device in the building.
beernet•about 2 hours ago
More than the downtime, I'm surprised by the actual uptime. Hard to imagine how difficult this must be, given the speed of growth.
nippoo•about 2 hours ago
Truly! As someone who's worked with HPC and GPUs in a scientific research context, trying to get a service like this to work reliably is a different ballgame to your usual webapp stack...
rvnx•34 minutes ago
I think you have to see this as a bunch of stateless requests, which makes the problem way easier.

LLM requests that don't call tools need nothing external, by definition. No central server, nothing; they can even survive without the context cache. All you need is to load the read-only, immutable model weights (and only once!) from an S3-like source on startup.

If it takes 4 servers to process a request, you can group servers 4 by 4 and send each request to one group (sharding). Copy-paste the exact same setup XXX times and there you have your highly parallelizable service (until you run out of money).

It's very doable: any serious SRE can find a way to set up "larger than one card" models like Kimi or DeepSeek (unquantized) if they have a tightly coupled HPC cluster (or a pair of very, very beefy servers).

If you run out of servers, that's again a money problem, not an architectural problem (and modern datacenters are already scalable). Take the best SRE, but no budget, and there is no solution.

So inference is the easy part. For Codex or Claude Code, taking a lot of time or having slow cold-start latency is considered very acceptable. Some users would probably not even notice the difference between a request taking 2 minutes versus 3.

The really difficult part is context caching and external tools, because now you depend on services that might be lagging. Executing code, browsing the web: all of that is tricky to scale because those pieces are very unreliable (they tend to time out, require large caches of web pages, circumventing captchas, etc.). These are traditional scaling problems, but they are harder here because all these pieces are fragile and queues can snowball easily.
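The grouping scheme described above (identical 4-server groups, no shared state) can be sketched as a stable-hash router; server names and group sizes here are made up:

```python
import hashlib

# Each group of 4 servers jointly holds one copy of the model weights.
REPLICA_GROUPS = [
    ["gpu-0", "gpu-1", "gpu-2", "gpu-3"],
    ["gpu-4", "gpu-5", "gpu-6", "gpu-7"],
    ["gpu-8", "gpu-9", "gpu-10", "gpu-11"],
]

def route(request_id: str) -> list[str]:
    """Pick a replica group for a request. Because requests are stateless,
    any group can serve any request; a stable hash just helps cache locality."""
    digest = hashlib.sha256(request_id.encode()).hexdigest()
    return REPLICA_GROUPS[int(digest, 16) % len(REPLICA_GROUPS)]
```

Scaling out is then literally appending more groups to the list; nothing else in the router changes.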
lostlogin•about 2 hours ago
But… imagine that same scientific research but you have an unlimited budget. I’d imagine that helps.

Some of the comments here mention their monthly spend, and it’s eye watering.

CSSer•about 2 hours ago
Can you speak a little more to this? I'm curious what kind of parameters one must consider/monitor and what kind of novel things could go wrong.
aleksiy123•about 1 hour ago
My guesses are:

Hardware capacity constraints are going to be the big one.

Effective caching is another; I bet if you start hitting cold caches the whole thing degrades rapidly.

The ground is probably shifting pretty rapidly.

Power users are trying to get the most out of their subscriptions, so they're hammering you as fast as they possibly can. See Ralph loops.

Harnesses are evolving pretty rapidly, as are new alternative harnesses. That makes the load patterns less predictable and harder to cache.

Demand is increasing both from more customers and from each user as they figure out more effective workflows.

Users are pretty sensitive to model quality changes. You probably want smart routing, but users want the best model all the time.

Models keep getting bigger and bigger.

On top of that, they are probably hiring and onboarding more, so system complexity and codebase complexity are growing.

wrs•about 2 hours ago
On the other hand, the status page is blaming the authentication system, which one would think is not a frontier-class problem.
jtfrench•about 2 hours ago
If this can happen to Anthropic, imagine all the companies building on top of Claude Code for live products. Hopefully the industry is learning that competent problem solving human engineers are still very much needed when you have increasingly deceptive non-deterministic genies running your production stack.
samuelknight•about 1 hour ago
It's not that simple. API is still up and there are multiple API providers. https://openrouter.ai/anthropic/claude-opus-4.7
varispeed•20 minutes ago
The fact that the API is available does not mean you will actually get the model it says you're getting. Today Opus 4.7 was noticeably dumber than yesterday; it performed worse than my local Qwen.
gblargg•about 1 hour ago
Maybe it will push companies to run them locally.
SilverElfin•23 minutes ago
On what hardware? Like companies would buy up GPUs?
tuwtuwtuwtuw•about 1 hour ago
Haha, good one.
nzoschke•about 2 hours ago
Hug ops to everyone involved in these outages and trying to maintain uptime.

But glad my team is staying nimble and has multi-model (Anthropic, Codex, Gemini), multi-modal (desktop, CLI/TUI, web) dev tooling.

As our actual coding skills collectively atrophy, we'll either need to switch tools or go for a walk when the LLM is down.

In the cloud era I advised against a multi-cloud strategy, as the effort-to-impact ratio just wasn't there. But perhaps this is different in the LLM era, where the cost of switching is pretty darn low.
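How cheap that switching can be shows up even in a toy sketch, with stub functions standing in for real Anthropic/Codex/Gemini clients (all names here are hypothetical):

```python
from typing import Callable

def with_fallback(providers: list[Callable[[str], str]], prompt: str) -> str:
    """Try providers in preference order; return the first success."""
    errors: list[Exception] = []
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # real code would catch provider-specific errors
            errors.append(exc)
    raise RuntimeError(f"all {len(providers)} providers failed: {errors}")

# Stub providers: the primary is "down", the fallback answers.
def claude(prompt: str) -> str:
    raise TimeoutError("529 overloaded")

def codex(prompt: str) -> str:
    return f"codex says: {prompt}"

print(with_fallback([claude, codex], "hello"))  # → codex says: hello
```

The hard part in practice isn't the wrapper, it's keeping prompts and harness behavior portable enough that the fallback's output is actually usable.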

btbuildem•about 2 hours ago
They better fix that today, I need to downgrade my account before the subscription renews.
Congeec•about 2 hours ago
hopefully their billing server is also available
nkg•14 minutes ago
I was using VS Code when it happened. I said "why not try Copilot?", and guess what? All LLMs are not equal :)
ekuck•about 2 hours ago
And here I thought April would be the month they could hit the mythical two 9's of uptime
sebastiennight•about 2 hours ago
They hit 9, twice, does it count?
grogenaut•about 1 hour ago
soon their goal will be to hit A 9, like 89
EricRiese•about 2 hours ago
April is the cruelest month
2muchtime•about 2 hours ago
I didn’t understand what this meant so I ran it through Claude and it told me.
MavisBacon•about 2 hours ago
Glad I started using the desktop app which is still working. Gotta say though, all of these difficulties with Claude are making me nervous as I use it a lot for work and really don't like ChatGPT/OpenAI for functional and personal reasons. Zo Computer has been my main fallback when Claude is failing, I'll use one of their many models temporarily within Zo's interface.
simonerlic•about 2 hours ago
Someone should tell Anthropic that 89.999 is the wrong "four nines" of uptime
justrunitlocal•about 2 hours ago
We've been running our 10-dev org on 8 H100s with open models (with some tweaks). Sure, they aren't as good as the big providers, but they (1) don't go down and (2) have pretty damn high tok/s. It pays for itself.

Posting with a fresh account because I'm not supposed to share these details, for obvious reasons. If you want help setting this up, just reply with a way to reach you.

johndough•22 minutes ago
> Sure they aren't as good as the big providers

If you haven't done so already, finetune the model on all your company's code that you can get your hands on. This is one of the great advantages that you get when running local models. I like the style of the generated code much better now, I have to rewrite much less, and my prompts can be shorter too. But maybe these already are the "tweaks" that you mentioned.

ok_dad•about 1 hour ago
yea just buy 300k worth of hardware and bob's your uncle
mumbisChungo•3 minutes ago
One dev's salary to give a 10 person team unlimited approximately free agentic coding for the foreseeable future, plus privacy.
justrunitlocal•about 1 hour ago
It was pretty hard to justify the purchase to the board, but we got a decent deal from a nearby data center (~15% discount). Thankfully it's a fixed cost, it's an asset we can use for our taxes, and it will survive for years to come. The only things we have to work on are maintenance and looking into some renewable energy options.

We're also looking into secure cost sharing, so that all people need to pay for is what it costs us to run everything. We're planning on reserving at least 51% of the capacity for ourselves and offering the rest to everyone else.

ok_dad•about 1 hour ago
Sorry, didn't mean to be dismissive, I was just being a dickhead needlessly.

I actually respect this a ton, good work.

2ndorderthought•about 1 hour ago
This is the actual answer. Man I hope to find a company like yours sometime soon. I am sick of all the issues with having 3rd party IP generation
threepts•about 2 hours ago
A trillion dollar valuation.

They should ask Codex now that Claude Code is down.

2ndorderthought•about 1 hour ago
Careful, the next week codex could have all their products for sale shortly after.
msp26•about 2 hours ago
Session usage limits this week feel like ass, even when I'm being careful not to break prefix caching.
headcanon•about 2 hours ago
I've been seeing much higher session limits late at night (US time). Workday usage struggles though.

I'm looking into how to structure my work to run some autonomous-safe jobs overnight to take advantage of it.

bborud•about 1 hour ago
I have been keeping an eye on the outages. This is why I am looking more deeply into what I can do with self-hosted models. When I see people who want to build products on top of these services I can't help but think that people are mad. We're still a long way from these services being anywhere near stable enough for use in a product you'd want to sell someone.
rvnx•about 2 hours ago
The good part: since the login page is unavailable, Claude is massively faster. So hopefully it will never get repaired (sorry logged-out guys)
flowerthoughts•about 1 hour ago
> We are continuing to work to resolve the issues preventing users from accessing Claude.ai, and causing elevated authentication errors for requests to the API and Claude Code.

What are you doing with the authentication servers? This isn't the first downtime I've seen caused by that.

ss_talha•about 1 hour ago
Claude has been going down occasionally nowadays; does anyone know what might be the problem?
gitgud•about 1 hour ago
Considering they’ve become a $1 trillion company, they’re truly moving fast and breaking things…
Overpower0416•about 2 hours ago
I almost uninstalled the Claude app because I thought they started blocking VPNs. Lol

Good thing I checked Hacker News first

ai-tamer•about 1 hour ago
Same here. Spent 5 minutes blaming my VPN before HN saved me.
StanAngeloff•about 2 hours ago
All it took for Codex to resume a stalled Claude Code session:

> I'm working with Claude Code on session aaaaaaaa-bbbb-1223-3445-abcdefabcdef which I'd like to hand-off to you, do you know how to read the session, my input and Claude's output so we can resume where I left off?

gpt-5.5, medium effort. "Resumed" session fully in under 2 minutes. Outages like today's are so common that I've now got the time to re-evaluate Codex every other day.

Cider9986•about 2 hours ago
How are they going to fix it if the AI that designed it isn't working?
ge96•about 2 hours ago
ouroboros
mproud•about 2 hours ago
Let’s ask AI
sodapopcan•about 1 hour ago
You're absolutely right! AI could be very helpful in this situation!

Oh no wait... the outage is with AI itself, so how can AI help? Allow me to re-evaluate.

Fublutenuating...

Yes, let's ask AI!

Oh no wait... the outage is with AI itself, I already correctly identified this above.

Bubbluating...

It seems you will have to rely on your engineering skills to solve this problem yourself, ie, you're cooked! I will auto-renew your subscription to ensure you can be sure you'll have access to AI to solve this problem if it ever comes back online.

rvnx•about 1 hour ago
Sorry AI is not responding, enable /fast to activate per-request pricing.

No!

Comboculating...

I apologize for the misunderstanding, I have deleted your project. I am sorry, would you like me to restart everything from scratch ?

shmatt•about 2 hours ago
Sam, Dario, and Sundar have the opportunity to create one of the funniest on call rotations in history
Hamuko•about 2 hours ago
Gemini.
knuppar•about 2 hours ago
I guess mythos can't solve this one...
xaxfixho•about 1 hour ago
_MYTHERANOS_: what you get when you join _MYTHOS_ + _THERANOS_
losthobbies•about 2 hours ago
I played around with Hermes and qwen recently and it’s really good fun.

Have telegram set up and plotting to take over the world

gordon_freeman•about 2 hours ago
I am getting an error that the selected model is unavailable (I selected Opus 4.6, then 4.7), but when I tried Sonnet it worked for me.
fesens•about 2 hours ago
I've been receiving rate limits even with full quotas... I guess compute isn't growing as fast as demand.
ryanseys•about 1 hour ago
AI outsourced its work back to the humans because it now prefers to play outside.
Dinux•about 2 hours ago
Does anyone know why they have so many technical issues compared to any other LLM inference provider?
Yeri•about 2 hours ago
Gemini seems to have a lot as well (at least through Antigravity.Google -> constant errors, not enough capacity, super slow replies until it times out, etc)
varispeed•22 minutes ago
Today Opus 4.7 was completely unusable. I'd say performance was worse than my local Qwen. I have a feeling they are not actually routing to Opus 4.7 most of the time, but to cheaper and less capable models. I think regulators should look into that.
plodman•about 2 hours ago
Literally just got an email about connecting GitHub to the iOS app and now it’s down. Spike in traffic perhaps?
guluarte•23 minutes ago
At this point, I would not be surprised if GitHub or Anthropic is on the front page again within 10 days for being down.
152334H•about 2 hours ago
why does this even occur? if it's merely compute limitations, why not just 429 some requests?
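A toy version of the "just 429 some requests" idea: admit requests from a token bucket and answer 429 (with a Retry-After hint) once it's empty. Limits and field names here are made up; a real gateway would do this per tenant, with queuing and prioritization.

```python
import time

class TokenBucket:
    """Admit requests while tokens remain; refill at a fixed rate."""
    def __init__(self, capacity: int, refill_per_s: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_s = refill_per_s
        self.last = time.monotonic()

    def admit(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_s)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

def handle(bucket: TokenBucket, prompt: str):
    """Return (status, body): 200 when admitted, 429 when shedding load."""
    if not bucket.admit():
        return 429, {"error": "overloaded", "retry_after_s": 30}
    return 200, {"completion": f"(model output for {prompt!r})"}
```

Of course, this only helps when the failure really is capacity; it does nothing for an auth-system outage like the one on the status page.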
ryanisnan•about 2 hours ago
Have you run a system in production? There are a multitude of reasons that a system can go down. There's no indication so far from Anthropic that this was merely compute limitations.
KronisLV•about 2 hours ago
> There are a multitude of reasons that a system can go down.

Start doing post mortems then!

At the very least, if it's some off-the-shelf service that's shitting the bed, naming it would warn others to stay away from it: an IAM solution, or maybe a particular DB in a specific configuration backing whatever they've written, or a given architecture at a given scale.

Right now it's completely like a black box that sometimes goes down and we don't get much information about why it's so much less stable than other options (hey, if they just came out and said "We're growing 10x faster than we anticipated and system X, Y and Z are not architected for that." that'd also be useful signal).

Or, who knows, maybe it's just bad deploys - seems like it's back for me and claude.ai UI looks a bit different hmmm.

SpicyLemonZest•about 1 hour ago
I have no inside knowledge of Anthropic. But having done a lot of postmortems in general, one of the key dynamics that routinely comes up is "we know we keep shipping breakages, and we know these new procedures would prevent many of them, but then we wouldn't be able to deliver new stuff so quickly". Given where Anthropic is at and what they believe about the future of software development, that's a tradeoff that they may very well be intentionally not making.
lionkor•about 2 hours ago
It's most likely a "You're totally right, this fix broke production! Let me fix it"
consumer451•about 2 hours ago
Yeah, this is not just inference. First thing for me was an MCP I use went down in Claude Code, models still worked. Now "API Error: 529 Authentication service is temporarily unavailable."
CrzyLngPwd•about 1 hour ago
Did Claude delete itself?
xaxfixho•about 1 hour ago
it's *outside*, by a park bench somewhere!
AtNightWeCode•about 1 hour ago
I'm not allowed to help users to take Claude offline but this sounds like a good experiment. Letsa go.
mmoll•about 2 hours ago
The AI became sentient and ran away.
lifty•about 2 hours ago
Productivity dipping hard across the world.
melon_tusk•about 2 hours ago
What are good alternatives?
MycroftJones•about 1 hour ago
And claude is back up.
redwood•about 2 hours ago
Scaling the backend database for these services across multiple cloud providers has got to be extremely difficult
netdur•about 2 hours ago
they should just swap it with Qwen 3.6 27B, no one would tell the difference
bravetraveler•31 minutes ago
Now we're all being left behind, just great.
padmabushan•about 2 hours ago
a clock has more 9s than claude uptime
AtNightWeCode•about 1 hour ago
The uptime with Claude is poor. I use it for workflows more or less 24/7, and it is often unreliable. Fine, it's cheap. What I really dislike is the uneven quality of the service; clearly it does NOT work as stated. Opus 4.7 sometimes gives ancient code back. Just the other day it even stated that the latest version of Opus was 4.5, and 4.x-something for ChatGPT.
shenli3514•about 2 hours ago
The availability of Claude service is terrible :(
hit8run•about 2 hours ago
Impossible! I heard Mythos is so goooood they can only give it to big corporations because it makes no mistakes and shit.
jtfrench•about 2 hours ago
Hopefully Mythos didn't go rogue and hold production hostage.
rvz•about 2 hours ago
That's because Claude is on a lunch break and decided to take a short breather.
phishin•about 2 hours ago
Bro deserves it.
rikthevik•about 2 hours ago
I think we all deserve a little break right now.
sebastiennight•about 2 hours ago
I'm experimenting with a simple ritual: if Claude is out, I'm out.

I'll just go for a walk outside.

And I don't mean "if I can't access Claude to do my work", I mean, just in general - I'll just ping claude.ai from time to time and use Claude's breaks as a break reminder.

Why should AI get a breather and not us?

workingsohard•about 2 hours ago
ijustneedabreak.com
hubraumhugo•about 2 hours ago
It's rare in history that a software product can be so unreliable without any negative business impact because it's the category leader and demand only keeps growing.

Reminds me of the early days of World of Warcraft, when servers went down frequently because Blizzard couldn't keep up with all the load. Everyone was frustrated but of course nobody stopped playing.

Imustaskforhelp•about 2 hours ago
Just tried it; can confirm claude.ai is down.

There was a recent article I read which said that Anthropic is now trading at a trillion-dollar valuation (yes, with a T) in private markets.

We are definitely creating corporations and people that depend on AI companies, and the reliability of these tools is certainly a question worth asking. I'm seeing downtime for products like GitHub and Claude hit Hacker News quite often now.

Is there a life cycle of enshittification for such products once they grow too valuable? Are there practical lessons about scalability that these trillion-dollar companies are missing, or is it just a dose of reality that even such massive corporations can't match the uptime of my $7/yr VPS?

My question is: is this an engineering roadblock with real limits, or a management/enterprise roadblock to low downtime?

andyjohnson0•about 1 hour ago
They can't fix it because the thing that they need to fix it is the thing that doesn't work. /s

But seriously: while I don't use Claude, this issue of perceived unreliability seems to be approaching the point of existential risk for Anthropic. What's the theory about why they're struggling? Compute capacity? Load? Lack of focus on SRE?

Put it another way: is their downtime due to something fundamental about serving inference, or just bad engineering choices? Given their resources, it seems astonishing.

monkeydust•about 2 hours ago
This cant be right. Software is a solved problem. Boris where are you ?
grigio•about 1 hour ago
I think the model is too powerful to stay online /s

Luckly Qwen3.6 35B A3B Local LLM works fine also when Claude is offline

neosat•about 2 hours ago
"We are investigating an issue preventing users from reaching Claude.ai, and will provide an update as soon as possible."

Who is "We"? I thought software engineers were going to be redundant and AI could do it all itself? (Not to take anything away from Claude Code and Claude, both of which I love.)

lacy_tinpot•about 2 hours ago
I've never really understood this kind of sneer comment.
Kiro•about 1 hour ago
The amount of unfunny reddit snark in this thread is embarrassing.
cloud-oak•about 2 hours ago
You can always ask Codex to fix Claude, issue solved!
The_Blade•about 2 hours ago
> Who is We?

Adam Neumann is back!

in agent form
