Discussion (89 comments). Read the original on HackerNews.
Persona also might send your data to 17 different subprocessors (16 if you exclude Anthropic itself).
You reminded me of this submission from two months ago: I verified my LinkedIn identity. Here's what I handed over (https://news.ycombinator.com/item?id=47098245)
I legally couldn't verify because Persona doesn't detect the Aadhaar card, and their support (on Twitter, email, whatever) was incredibly bad, so much so that it felt like copy-paste. I still haven't gotten the card. I have written about my experience too:
https://smileplease.mataroa.blog/blog/linkedin/ : (Title of this is) Linkedin's "final decision", restricting my account, making me feel unheard, Persona being Persona & the time I asked Linkedin support what 351/13 is to prove if they are human or not.
And the answer to that question is:
"Hell no! We used the cheapest, shadiest company we could find for that. They'll process and sell all your data. Thank you for continuing to be a valued Anthropic customer!".
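As an aside, the humanity check from that blog title is trivial to verify, which was presumably the point of asking it:

```python
# The "are you human?" question from the blog post: what is 351/13?
# divmod gives the quotient and remainder in one call.
quotient, remainder = divmod(351, 13)
print(quotient, remainder)  # 27 0
```

A copy-paste support bot has no reason to ever answer it, which is what makes it a decent tell.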
* preventing North Korean, Chinese, Russian, Iranian, etc. actors from accessing the service. They absolutely use workarounds to access AI; e.g., I bet there are companies acting as proxies between Anthropic and those countries.
I imagine there will be quite some false positives while identifying those.
[0] https://www.tradingview.com/news/cointelegraph:6192f38e3094b...
[1] https://youtube.com/watch?v=QebpXFM1ha0
So now he's a Codex user. OpenAI and Google both have a minimum age of 13.
EDIT: I should note that Anthropic refunded him for the whole month that was underway, despite him nearing the end of it. So good on them.
Can you expand on this? Your teenage son makes more money than you do professionally, by vibe coding video games?
Anthropic made the best models by hiring non-technical folks like philosophers to build the best training sets and evaluations. Now, it seems like their philosophers are telling people how they can and can't use their model.
I sense an opportunity for free tokens.
Ideas for prompts that reliably trigger the age check?
It is frequently said that programming directly is obsolete, and the skill you must have now is knowing how to operate agentic AIs.
Yet you aren't allowed to do this until you're 18.
So, developing software is now 18+ only?
Local models are chasing the online frontier models pretty hard.
So worst case, that's the fallback (FWIW, YMMV)
edit: Qwen-3.5 MoE (and other local MoE models like it)
Who says this?
This is genuine advice I’ve seen from high-profile business types. We’re fucked in the sense that our children will be made to be attention whores online.
- Repeated violations of our Usage Policy
- Account creation from an unsupported location
- Terms of Service violations
- Under-18 usage
For the user, sure. But for companies and governments? I'm pretty sure Persona is quite trustworthy.
The future has arrived, in which programming a computer in any meaningful way requires total identification and permission.
What a tragedy that the amazing capabilities of LLM assisted programming come with such disgusting and reprehensible requirements and impositions.
So they can ban you from some minor infringement of their usage policies and you'll never be allowed to program again.
"Mr Anderson, it has come to our attention that you have been programming computers under an assumed identity. As you are aware this is a felony under the computer fraud and hacking act and you will be sentenced to four years in jail and may never use a computer again.". Yes laugh it up.
Why would you do this? If you can't write it yourself, you're just sabotaging your effort once the hallucinations are revealed. Secondly, a whistleblower is going to use a corporate LLM provider? Even without ID checks, that's an extremely uncompensated risk.
In other words: they want to create a private web and a system for sniffing after people. Today the EU also introduced an app for age verification. They also constantly say how this is ... voluntary.
Well, I guess we all know the direction. Let's have a look at this in a few years, because there may be a few ... suspicions.
With regards to Claude, the question is: WHY exactly do they want to sniff user data?
I may consider showing my ID to a company I already have a business relationship with, given demonstrable legal obligations, contractual necessities, legitimate interests, etc. (e.g., the standard GDPR list of lawful bases).
I do have an existing business relationship with Anthropic, so I might under some circumstances decide to show them my id. I don't have a business relationship with Persona though.
I understand the instinct: they want to insulate themselves from holding PII. Not the worst idea. I'm not happy with it being a third party though. Especially the third party in question.
Put this way: I sort of already trust Anthropic with some of my PII. And that's ... maybe not ok actually. But it's a single failure surface.
But that's definitely not the same thing as trusting Anthropic, AND Persona AND All Persona's partners AND their Partners ad infinitum.
And let's say Persona is actually ok; who knows, they might be? But it's still an extra surface; and if they share again, that's another extra surface again.
It's fairly common sense blast radius minimization. This is part of the actual theory behind GDPR.
"We already seem to accidentally be leaking some data through channel A" , doesn't mean it's a good idea to open channels B-Z as well. It means you might want to tighten down that channel A.
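The blast-radius point can be made concrete with a toy model. Assuming, purely for illustration, that each party holding your ID has some independent per-year probability of being breached, the chance that at least one of them leaks grows with every extra party you share with:

```python
def breach_probability(per_party_p: float, n_parties: int) -> float:
    """Probability that at least one of n independent parties is breached,
    where each party has breach probability per_party_p.
    Illustrative model only; real-world risks are neither equal nor independent."""
    return 1 - (1 - per_party_p) ** n_parties

# Hypothetical 5% per-party risk: one holder vs. a chain of subprocessors.
for n in (1, 2, 5):
    print(n, round(breach_probability(0.05, n), 4))
# 1 0.05
# 2 0.0975
# 5 0.2262
```

The numbers are made up, but the shape of the curve is the argument: each additional data holder can only add surface, never remove it, which is why minimizing the number of holders is baked into data-protection thinking.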
Logged into Claude. Cancelled my max sub. That was that. Now on to migration.
> Your ID and selfie are collected and held by Persona, not on Anthropic's systems. Anthropic can access verification records through Persona's platform when needed—for example, to review an appeal—but we don't copy or store those images ourselves.
It's unacceptable that this data is persisted at all, let alone that it's persisted by Persona.
> Persona is contractually limited in how they can use your data: only to provide and support verification and to improve their ability to prevent fraud. They're bound to protect it with industry-standard security controls and delete it in line with the retention limits we've set and applicable law.
It's good to hear that they're criminals. That means nothing for me though. Nothing.
> Why did my account get banned after verification?
This is bad. Why do they wait to ban until after they have your personal info? Venmo did the same thing to me: They didn't tell me I was banned until they had my ID. Absolutely despicable practice.
---
Anthropic is one of my favorite AI companies because they get LLMs more right than anyone else I've seen. But unfortunately this also means they can be swindled by social manipulation in lieu of technical excellence; the same type of brain results in both, I've seen it.
Persona is a band of sociopaths, and it shows: they're worming their way into everything despite the well-documented conspiracy. They're doing it out in the open with zero consequences.
If someone is doing something deeply unethical with Claude, let's say they're using a clade of Claudes to launch cyberattacks, then doesn't Anthropic have fine grained telemetry, payment history, API usage / prompting / requests, and other details necessary to investigate? What does a government photo ID provide Anthropic that these data points don't?
At this point, people usually ask "what if they use stolen credit cards?" or are "state backed?" then well... if they're state backed / using stolen credit cards, then they're also capable of using stolen IDs or state-sponsored "legitimate" IDs.
It doesn't make much of a difference to organized crime / state backed assets. Or, Anthropic. But it makes A HUGE difference for entrepreneurs, founders, and just plain old consumers who use the service.
It's an asymmetric risk.
It's one thing for your credit card to leak: you can get a new one. It's harder for lower-tier / dumber criminals to socially engineer their way into your personal information for impersonation / ID theft with just a credit card number. But it becomes a lot easier with scans of your ID.
Unless you're connected with an org of interest or have b/millions in crypto, most better-organized groups / state actors won't usually (no guarantees) steal your identity. Identity theft is very much an SME operation in cybercrime.
So when Persona inevitably gets compromised and everyone's personal IDs inevitably get leaked, the threat posed to entrepreneurs, founders, and consumers is higher than the inverse.
I don't understand why Anthropic would expose themselves to the liability; when arguably they have all the tools baked right in.
I don't use their tool for writing. Perhaps it's ego, but I think I'm a better writer. But I shared the above text and asked Claude Opus 4.6 on Max thinking, "What would you say about the argument that the Anthropic has the best tool for threat prevention baked right in?"
(I did.)

> I don't understand why Anthropic would expose themselves to the liability; when arguably they have all the tools baked right in.
What liability? When has a company ever faced any significant penalty for irresponsibly handling people's private data?
You can have a CC / Visa / MasterCard when you are under 18 years old, but you need to be 18 or older for Claude. That would be one reason why a CC check does not work.
Or maybe they suspect you opened a second account after your first got banned for whatever reason. Like you said it's easy to get a new card.
Debit? Sure, some banks will issue them to 11-12 year olds. Credit? Apparently not.