
Discussion (53 Comments)
With what competent staff?
Anthropic ran a weeks-long roadshow on how powerful Mythos is. They pointed to the danger, their controls, the capabilities, and practically begged the world to be scared of it.
Simultaneously, the current US regime realized there was a way to demand fealty from the AI labs. If they're so dangerous, don't we need to see them first? That will cost you, obviously. Standard extortion from the government, at this moment in time.
The labs get their marketing; the White House gets its pseudo-bribe. I hope nobody involved is confused about how we ended up here.
Are you claiming there will be a fee?
Universities: https://www.npr.org/2026/01/29/nx-s1-5559293/trump-settlemen...
Companies: https://news.bloomberglaw.com/esg/extortionary-intel-stake-s...
Law firms: https://www.lawfaremedia.org/article/the-law-firms--deals-wi...
Media: https://www.nytimes.com/2025/07/02/business/media/paramount-...
Why would AI companies be any different?
> Are you claiming there will be a fee?
I'd be more concerned with "your model can't be too woke" regulatory scenarios.
Honestly that's exactly where my mind went. We already see the current administration trying to censor free speech (e.g. Jimmy Kimmel, blocking/restricting press access to the White House unless you are pro-Trump).
I'm afraid of the potential to move in the direction of what we see in China, where queries to LLMs referencing things like Tiananmen Square are censored (at best).
Q: Does the government have the expertise, integrity, and credibility to regulate AI models? A: Color me sceptical.