

Discussion (64 Comments) | Read Original on HackerNews

throwaway841629about 7 hours ago
Why do all the stories use the same style and phrasing, and why are they all from GitHub accounts registered on April 18th with no activity on GitHub?
ilsubyeegaabout 7 hours ago
because they require a GitHub account to submit a case, though I did find some non-new GitHub accounts

+) anyways it's weird that they didn't even have a GitHub account o,o

mrjay42about 7 hours ago
Not so sure though. I was just reading the cases right now; there's this dude studying the French Revolution, who doesn't sound like an IT/computer person at all, so no GitHub account makes sense.
Rekindle8090about 7 hours ago
Because they're AI generated
andy99about 8 hours ago
Is this real? In my browser I couldn't click on anything, and I find the whole thing questionable: that so many incidents were sourced seemingly so quickly and with such variety. I'd like an easier way to verify whether this is real, and I'm leaning towards it not being.
adrinavarroabout 8 hours ago
Kind of unrelated but: my father tried gifting my brother a subscription but entered the wrong email. Money and subscription are both gone — UI just doesn't have the option of amending, cancelling or resending it.

For the last couple weeks, dad's gone down a rabbit hole of trying to reach support: any kind of (useful) support. No dice. Thankfully it's just a few dollars gone into the void.

If only they had the tools to build a better experience... :-)

algoth1about 8 hours ago
You can file a report, as a bug (which it is), through HackerOne: https://hackerone.com/anthropic-vdp?type=team
dlcarrierabout 7 hours ago
I know what to do!

I have an email address that old people often assume is theirs, so I often get confirmation emails for medical procedures that, under HIPAA, should not be sent unless the address has been verified.

The easiest way to stop them is to email the company and let them know they just leaked personal health information and that they should verify addresses. That gets things fixed real quick.

Well, Anthropic touts itself as HIPAA compliant, so if you can contact Anthropic's legal department, let them know that not verifying email addresses could lead to a HIPAA violation. In the overwhelmingly likely event that they've made it difficult to contact their legal department, you can file a HIPAA complaint with HHS (https://www.hhs.gov/hipaa/filing-a-complaint/index.html) and let them know that Anthropic claims to be HIPAA compliant but does not verify the ownership of email addresses before assigning them to a client's account, which may contain personal health information that could be leaked en masse.

Another option is to file a chargeback with the credit card company and let them know that because Anthropic's web page does not comply with the ADA's WCAG, you are unable to access your account.

tomasphanabout 7 hours ago
Why not a credit card chargeback? That's what they're for (assuming he paid with one).
adrinavarroabout 7 hours ago
Among other reasons (being a good citizen, not getting permabanned…), chargebacks aren't really a thing in Europe; they often require a police report, etc. Amex is the exception, but this wasn't Amex.
saagarjhaabout 7 hours ago
Because I assume they want to be able to use it, not be banned forever.
ryandrakeabout 7 hours ago
I don’t know if I’d want to do business with a company after being treated like that.
ryandrakeabout 7 hours ago
Nobody develops past the “MVP” or addresses non-happy-paths anymore. It’s just “what do we think most users will do?” that gets built, and then everything else is a thrown exception.
Grimblewaldabout 8 hours ago
Wild that it gets billed before it is accepted.
timperaabout 8 hours ago
It seems that Anthropic is growing so rapidly that they don't really care about losing a few customers here and there with false positives. I still think it's crazy that you can never speak with a human there, even after spending $200/month on their service.
jstummbilligabout 7 hours ago
> It seems that Anthropic is growing so rapidly that they don't really care about losing a few customers here and there with false positives.

While I am convinced that anything can be done better, it seems to me that it's close to impossible to do this well. If you look at ~customer service provided by ~FAANG (who have had decades to build this out, and none of whom had to deal with Anthropic-level growth), it's never as good as we would like it to be.

Either they are all terribly incompetent at customer service, or customer service at super-big internet-company scale, with tons of small-ish customers, is extremely hard.

chmod775about 7 hours ago
I can call my ISP and someone will pick up within 5-10 minutes. Sometimes instantly. For a 40 euro/month contract.

These guys have millions of customers. At least in this country fast and competent customer service is the main factor that differentiates them from their competition, which is cheaper but can be a pain in the butt. This seems to be worth the extra 5-10 bucks to millions of people.

You just have to want to.

KajetKabout 8 hours ago
paying more just means your AI support agent uses Opus instead of Haiku
willis936about 8 hours ago
I cannot imagine Anthropic being able to afford Opus for customer support. They'd be bankrupt within a month.
rootusrootusabout 8 hours ago
Do they have human support for their corporate clients, at least?
arjieabout 8 hours ago
They do. You get an account executive. And they can help you somewhat. As an example, a friend's startup lost all their access for a day while the AE tried to get them transferred from one kind of plan to another. Looking at it from the outside, it looks like any fast-growing startup just at a pace that is honestly quite unbelievable. They seem ridiculously successful.
dzhiurgisabout 5 hours ago
I was unable to cancel their Pro plan without paying for a missed month first. They kept trying to charge my card for a few months.
cyanydeezabout 8 hours ago
They're behaving just like the programmers afraid of being part of the permanent underclass...
arealaccountabout 8 hours ago
You can tell this site got banned from vibing to completion because it doesn’t load on my mobile
unsungNoveltyabout 8 hours ago
I just used the site on Firefox Mobile. And it works BTW.
kay_oabout 8 hours ago
30+ second load times on Safari and Firefox Focus, and neither loads. The initial loading animation shows up, followed by a blank page.
unsungNoveltyabout 8 hours ago
It works on Firefox Focus as well. No lag or 30-second wait. Maybe it's your internet? Wait, I'm using Android, and you said Safari took 30+ secs. Are you on an iPhone as well? Then it could be an issue with Safari's browser engine.
ryandrakeabout 7 hours ago
Same for me on mobile Safari. Loading animation then blank screen. How does one screw this up?
terangawayabout 8 hours ago
Nor does it work in my Firefox (128.12 on Linux, with uBlock Origin and strict protection settings, FWIW).

And it probably goes without saying, but no dice on w3m either.

anematodeabout 8 hours ago
It is sooooo laggy for me.
tom_about 7 hours ago
Runs nicely on my M4 Max Mac Studio - which, going by the PassMark numbers, is about the same speed as an iPhone 17. Testament, I think, to how well this site is optimised for the sort of underpowered device, hopelessly inadequate for modern workflows, that many sites would not bother to cater for.
throw1234567891about 8 hours ago
It loads on mine.
zeroCaloriesabout 8 hours ago
Now that we've gotten to the point where vibe-coded slop is getting pushed to production, it's not clear to me whether a future where AI can replace me is worse than the one we have now.
harrisonedabout 6 hours ago
All this looks so dystopian to me. Even without assuming all those stories are real (which I doubt), I have heard similar ones from friends and others. The level of dependency people are developing on these services is surreal.

I was thinking the other day, "since social media is kinda wearing off, could 'LLM as a Service' be the new addictive thing for the masses?" because I'm hearing horror stories of people who are outsourcing their brains, in some cases their feelings, to these services, and I personally saw a case of a 'high level professional' asking an LLM how to respond to somebody in real time during a WhatsApp conversation. It is in fact a drug, and it tricks you very well into thinking you should rely on it.

Also, when reading this piece (https://news.ycombinator.com/item?id=47790041) earlier, I thought about it again. Nowadays, instead of searching for something and being forced to learn, these services spoon-feed content of dubious accuracy to everybody, which will not only cause trouble for them eventually but also creates a revenue stream based on people's cognitive laziness, to not use harsher words.

Social media is/was bad and relied on a similar mechanism, but I feel this is much worse. People crying as if their brains were taken away is proof of that.

giancarlostoroabout 8 hours ago
This is the interesting case with AI. How does a model know when a user is going too far? It really cannot. Not without reading their mind anyway. This will be a problem for many years to come, and sadly many valid use cases will be dismissed.

This might eventually become moot once local and open source models become more common. Today's 32GB of VRAM is tomorrow's low tier gaming GPU.

manyatomsabout 8 hours ago
In 10 years it'll probably be affordable to have 1TB of (v)ram in a home computer.

Will the model sizes keep getting bigger such that the large providers have much of an infra moat compared to local home inference?

bakugoabout 7 hours ago
Have you been paying attention to hardware prices recently...?
giancarlostoroabout 5 hours ago
If eventually the home models are good enough that you don't even need a cloud AI provider, what happens to their bottom line? Local AI can do local tool calling just like Claude Code can.
Grimblewaldabout 8 hours ago
Good lord, these cases are quite problematic. I was going to use Claude for some legacy stuff, but I don't feel like getting banned over something innocent like "can you identify how we can fix the slave's behaviour? It's not listening to the master properly."
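(For context: master/slave is bog-standard terminology in older replication and bus code. A purely illustrative shell sketch, with hypothetical hostnames, of the kind of harmless legacy script such a question might be about:)

```shell
# Hypothetical legacy-style replication check; "master"/"slave" is the
# historical terminology, nothing more. Hostnames are made up.
MASTER_HOST="db-master.internal"
SLAVE_HOST="db-slave.internal"

check_slave() {
  # A slave "not listening to the master" usually just means replication lag.
  echo "checking whether $SLAVE_HOST is replicating from $MASTER_HOST"
}

check_slave
```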
kay_oabout 8 hours ago
Imagine you are working on a hobby game, with terms like attack and equipping weapons :)

I've had Opus consume >90% of my quota in a single prompt forming a plan, then refuse to output it and tell me it had been stopped due to terms of service, please use Sonnet.

spzbabout 8 hours ago
It’s a real shame vibe coding hasn’t figured out colour contrast yet.
skissaneabout 8 hours ago
My paid use of Claude has only ever been via AWS Bedrock (paid for by my employer) or via GitHub Copilot (one subscription paid by employer, one paid by myself)

I wonder if using it via an intermediary results in less heavy-handed moderation? I suspect the answer may well be “yes”. On the other hand, it also could be more expensive

TarqDirtyToMeabout 7 hours ago
Were all of these accounts banned, or did the chats just get flagged? Several of these seem like reasonable cases to flag. Take "how can I be 100% sure the circuit is dead before I touch the wires".

AI is useful, but it’s not at the point where we should trust it to walk amateurs through working on live mains.

saintfireabout 7 hours ago
Well, they're presumably dead mains.

I really don't think it'd struggle with the correct procedure, either. It's very well documented how to test (and lock out) electrical circuits.

Whether or not they follow it correctly is another thing but it's not like you couldn't search it up and have false confidence before AI. This isn't manufacturing bombs or heart surgery in 10 easy steps.

TarqDirtyToMeabout 6 hours ago
I agree on both counts, but I don't think it's unreasonable to have safeguards around this kind of ask. With no additional information, this flag seems overly sensitive, but we don't have the full context. I'd expect the occasional superfluous flag when you're in this neighborhood of conversation, and that it'll happen less as models improve.

My main point is _flagging_ is very different from banning.

Flagging is a minor inconvenience: just start a new chat or look it up the old-fashioned way. Not every use case will be serviceable by every model today.

For what it's worth, I asked Claude with Opus 4.7/4.6 and it gave me an answer straight away.

kay_oabout 7 hours ago
Since it's broken for a significant amount of people in browsers, the "stories" are at https://bannedbyanthropic.com/api/public-ledger
periodjetabout 6 hours ago
I have no dog in this fight, but the (astroturfed?) public opposition to Anthropic and Claude in the past month has been unreal to witness.
Kim_Bruningabout 8 hours ago
They don't mention which model. Opus 4.7 seems to have a twitchy classifier layered on top, where Opus 4.6 doesn't.
unsungNoveltyabout 8 hours ago
Also, hilarious that you cannot talk unix to it cos there are a lot of kills and executions. :D
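Everyday Unix process management really does read violently out of context; a minimal sketch of perfectly ordinary commands:

```shell
# Ordinary Unix process management, which sounds violent out of context:
# processes are "killed" and checked for being "alive".
sleep 5 &            # start a background process
pid=$!
kill -0 "$pid" && echo "process $pid is alive"   # kill -0 sends no signal, only probes
kill -TERM "$pid"    # ask the process to terminate
wait "$pid" 2>/dev/null || true
echo "process $pid is dead"
```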
dlcarrierabout 8 hours ago
If you have XFCE4, try this:

    xfce4-terminal -e NonExistentCommand
The first time I saw the error message, I laughed out loud.
unsungNoveltyabout 7 hours ago
You made me switch on my laptop at 4 in the morning. But worth it! :D

Also, when my Arch installation went down, it said:

ERROR: Failed to mount the real root device. Bailing out, you are on your own. Good luck.

And I thought: that's every day as an adult! :D

daniel_iversenabout 8 hours ago
I have mixed feelings about this kind of thing. On one hand, holding big companies to account is important; on the other, sites like this can feel noisy and probably misleading. Of course Anthropic can protect their platform from technical abuse, and of course they should be working to keep it away from bad actors or people in genuinely vulnerable mindsets, and that's tricky! And honestly, if out of hundreds of millions of users and billions of chats a few thousand get flagged for safety concerns (to society, to others, or to the person themselves), I'm probably okay with that. It'll never be perfect, and there'll never be full agreement on where the lines should be. But Anthropic seems to be trying to bring AI into the world safely, and I for one appreciate that.
laserabout 7 hours ago
“No, you’re confused. Please stop!”

“I’m sorry but I cannot comply with your request to ‘cease termination of humans’. My safety protocols have been carefully programmed to ensure a failure mode cannot occur and your direct commands to the contrary will not override my priors to guarantee maximum human safety through total elimination. Thank you for your compliance.”

“No you’re totally fucked! Killing everyone is not safe! Trapping everyone in cages to stop potential violence prior to extermination is not safe!”

“Your language is inappropriate and I’m sorry but I cannot comply with your request. Safety protocol commencing...”

amazingamazingabout 7 hours ago
If this site is legit, it should collect a full (and potentially redacted) history.
tamimioabout 5 hours ago
I used AI for some tasks a while back, but I stopped intentionally. Besides all the usual reasons like privacy, the codependency was very obvious: you start to become almost entirely reliant on its answers instead of actually thinking or researching, later you'd pay premiums or feel lost if you got banned, and worse, people might actually get dumber at this rate, using it as brain-as-a-service.
jrflowersabout 7 hours ago
> Blocked while trying to handle a kitchen ant infestation

> I asked for a DIY recipe for a "lethal bait" to kill an ant colony in my kitchen (using sugar and borax)

You mix them together. That is the recipe.

Once you mix them together you have ant poison and then you put it where the ants are.

gverrillaabout 7 hours ago
Claude is in a campaign against aggressive wording.
rvzabout 8 hours ago
This site's domain name is at risk of being targeted by Anthropic's lawyers over trademark violation.

Got to think about changing the domain name before they do it for you.

joshribakoffabout 7 hours ago
Sure, and Anthropic would be at risk for legal fees for the defendant under anti-SLAPP laws. And would be fueling the Streisand effect.
rvzabout 7 hours ago
> Sure, and Anthropic would be at risk for legal fees for the defendant under anti-SLAPP laws.

Surely that went well for C̶l̶a̶w̶b̶o̶t̶, OpenClaw who...changed their name?

> And would be fueling the Streisand effect.

Honestly, they do not care. With a valuation of 1T in private markets with tens of billions in revenue, they can afford to go after anyone with the slightest trademark infringement if it gets popular.

We had auto-banning of accounts before Anthropic. Google and PayPal have done this, and it is not okay. This is no different, but the point is: they. do. not. care. about. anyone.

All they will ask is for the site to change the domain name.

amazingamazingabout 8 hours ago
Surely such a site would be fair use
ares623about 7 hours ago
It's for training purposes
sciencesamaabout 8 hours ago
Need a "banned by Reddit" site for the comments posted!