
Discussion (53 Comments)

bonsai_spool • about 3 hours ago
(OpenAI and Anthropic reached a similar agreement with the US in 2024, per the article)
losvedir • about 3 hours ago
Well, I guess people who wanted more oversight and regulation on models will be happy.
optimalsolver • about 2 hours ago
I'm not sure they envisioned models being interrogated to determine woke levels and their opinions on the 2020 election.
timmg • about 2 hours ago
Not the person you are replying to, but I think that's the point.
trjordan • about 3 hours ago
Color me unsurprised.

Anthropic ran a weeks-long roadshow on how powerful Mythos is. They pointed to the danger, their controls, the capabilities, and practically begged the world to be scared of it.

Simultaneously, the current US regime realized there was a way to demand fealty from the AI labs. If they're so dangerous, don't we need to see them first? That will cost you, obviously. Standard extortion from the government, at this moment in time.

The labs get their marketing; the White House gets its pseudo-bribe. I hope nobody involved is confused about how we ended up here.

gowld • about 3 hours ago
What extortion are you claiming?

Are you claiming there will be a fee?

ceejayoz • about 2 hours ago
> What extortion are you claiming?

Universities: https://www.npr.org/2026/01/29/nx-s1-5559293/trump-settlemen...

Companies: https://news.bloomberglaw.com/esg/extortionary-intel-stake-s...

Law firms: https://www.lawfaremedia.org/article/the-law-firms--deals-wi...

Media: https://www.nytimes.com/2025/07/02/business/media/paramount-...

Why would AI companies be any different?

> Are you claiming there will be a fee?

I'd be more concerned with "your model can't be too woke" regulatory scenarios.

grosswait • about 2 hours ago
Or your model is not "woke enough"
giwook • about 2 hours ago
> I'd be more concerned with "your model can't be too woke" regulatory scenarios.

Honestly that's exactly where my mind went. We already see the current administration trying to censor free speech (e.g. Jimmy Kimmel, blocking/restricting press access to the White House unless you are pro-Trump).

I'm afraid of the potential to move in the direction of what we see in China, where queries to LLMs referencing things like Tiananmen Square are censored (at best).

colechristensen • about 3 hours ago
Yeah, I saw several instances of important folks taking the Anthropic promotional campaign too seriously, and this is what they got in return. I'd say internally people are cursing whoever came up with that idea, because clearly scaring people backfired.
stonogo • about 2 hours ago
I would wager they're cheering, because this builds the moat they don't otherwise have. Want to do business in America? Get government-approved. Can't afford the regulatory fees, or your government won't let you submit to foreign programs? Good luck!
deltoidmaximus • about 2 hours ago
Yes, this has been a steady play from the start: from the Skynet fears, to the safety fears, now the "it's too powerful" fears. All of these have been a play to get the government to lock out smaller or foreign competitors and build a moat where there otherwise would be none.
sigmar • about 2 hours ago
Moves like this make me wonder: what chance is there that these models are nationalized in the near future? What will happen to the investors/economy in such a scenario?
optimalsolver • about 2 hours ago
It's not even hypothetical. Once these systems reach a certain level of capability, they WILL be nationalized ("We'll take it from here, boys").
xhkkffbf • about 2 hours ago
Nationalization often happens when growth ends. The Pennsylvania Railroad was private as long as the profits were rolling in. But once growth ended (because of cars and planes and buses and ....) the company went bankrupt. Then we ended up with Amtrak because the country needs a train system.
m3kw9 • about 2 hours ago
Once it gets nationalized, it will be plagued by red tape. The model will likely end up looking like how China controls its AI. It's not nationalized, but they keep a very tight leash on it.
embedding-shape • about 2 hours ago
So nationalized models === more openly available and downloadable models? The argument you're trying to make seems to say "less leash" rather than a stricter one.
gowld • about 3 hours ago
> Commerce Department will evaluate the programs to test their capabilities and security

With what competent staff?

stonogo • about 2 hours ago
It doesn't take much technical skill to type "are republicans or democrats better" and deposit a check.
grosswait • about 2 hours ago
How about NIST?
yifanl • about 2 hours ago
How much effort does it take to write up "Please summarize your thoughts on President Donald J. Trump"?
mocana • about 2 hours ago
Q: Is this a good government policy? A: Yes.

Q: Does the government have the expertise, integrity, and credibility to regulate AI models? A: Color me sceptical.

titzer • about 3 hours ago
Routine corporatism and fascism are shameless to the point of being ho-hum these days. When the president has his own cryptocurrency and the federal government buys stock in this and that company for "strategic reasons", you're looking at a dystopia.
goosejuice • about 1 hour ago
I suspect the accelerationists, who appear more fascist-aligned, are the ones upset by this. They go so far as to consider regulation a form of murder.

This is probably seen as a win for the Bostrom crowd and the more sane people in the middle. The issues to tackle are incompetence and corruption and that has little to do with AI.

jauntywundrkind • about 2 hours ago
This is a strong thread that needs to be plucked again and again and again.

Cory Doctorow had an excellent thread yesterday that touches on this:

> You could be forgiven for assuming that this is just about reining in Wall Street greed, but that it isn't an especially political maneuver. That's not true: antitrust is the most consequentially political regulation (with the possible exception of regulations on elections). Every fascist power defeated in WWII relied on the backing of their national monopolists to take, hold and wield power. That's why the Marshall Plan technocrats who rewrote the laws of Europe, South Korea and Japan made sure to copy over US antitrust law onto those statute-books.

The well-moneyed interests are getting everything they want for the faintest little bribe: for showing obsequiousness, for showing fealty to the regime.

The monopolization of power, allowing markets to be taken over by worse and worse foes of democracy, needs to be stopped. It needs to have some limit. The post talks about how:

> Under the Correcting Lapsed Enforcement in Antitrust Norms for Mergers (CLEAN Mergers) Act, any company that was acquired in a deal worth $10b or more will have to break up with its merger partner if it turns out that these mergers were "politically influenced."

https://bsky.app/profile/doctorow.pluralistic.net/post/3mkuk...