Discussion (339 Comments)
Clear enough?
The NSA doesn’t care about day to day temper tantrums of political branches, they have work to do and they will use the best tools available to accomplish that work.
Anthropic gets labelled a supply chain risk by the Pentagon, then hypes up what they claim to be the most advanced hacking tool on the planet. This puts the US government into a loose / loose position: either deny the NSA access to it, or be called out on their bluff.
Isn’t that just the same strategy OpenAI has used over and over? Sam Altman is always “OMG, the new version of ChatGPT is so scary and dangerous”, but then releases it anyway (tells you a lot about his values—or lack thereof) and it’s more of the same. Pretty sure Aesop had a fable about that. “The CEO who cried ‘what we’ve made is too dangerous’”, or something.
https://en.wikipedia.org/wiki/The_Boy_Who_Cried_Wolf
There was a story the other day about others finding the same bugs with qwen.
One of the many reasons nobody should give Scam Altman their money. It's continually infuriating that this serial grifter is in such a position of power.
Prior to the release of GPT-5, Sam said he was scared of it and compared it to the Manhattan Project.
https://darioamodei.com/
Certainly it’s a strategy OpenAI has used before, and when they did so it was a lie. Altman’s dishonesty does not mean it can never be true, however.
GPT-2 wasn't fully released because OpenAI deemed it too dangerous. Rings a bell? https://openai.com/index/better-language-models/#sample1
Maybe I've missed something, but what Stenberg has been complaining about so far is the wave of sloppy reports, seemingly written mainly by AIs. Has that ratio recently changed to mostly good reports with real vulnerabilities?
You might even call it... a tight spot
"Loose" is a short word that ends sharply, but "lose" is a long word that slowly peters out.
They should be the other way around imo.
In this case, it's not clear who wins yet — "lose" may loose, or mount a comeback, resulting in "loose" being the one to lose.
"The President of the US, the Secretary of Defense, Iranian Prime Minister walk into a bar..."
Barring any limitations of my understanding, the Mythos model weights are probably in the realm of a few TB. Access to the weights, a single beefy NVIDIA cluster, and a few intelligent folks is all it takes for any actor to gain access to Mythos.
Cost of infra < $5 million (guesstimate). Imagine someone pulling that off by gaining access to the weights - which would be a monumental challenge, but likely less complicated than re-acquiring enriched substances from the gulf nation under attack right now. It would be the heist of the century.
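For what it's worth, the "few TB" figure is easy to sanity-check with back-of-envelope arithmetic. The parameter count below is purely hypothetical (Anthropic publishes no such number); the sketch only shows how weights size scales with parameters and numeric precision:

```python
# Back-of-envelope: on-disk size of model weights.
# Parameter count here is a made-up assumption, not a published figure.

def weights_size_tb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Size in terabytes; 2 bytes/param corresponds to fp16/bf16 storage."""
    return params_billion * 1e9 * bytes_per_param / 1e12

# A hypothetical 2-trillion-parameter model stored in bf16:
size = weights_size_tb(2000)  # -> 4.0 TB
```

At 1 byte per parameter (int8 quantization) the same hypothetical model would halve to 2 TB, which is why "a few TB" is a plausible range for any frontier-scale model regardless of the exact parameter count.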
Proceeds to write the hypiest comment possible. No substantial claims of why the model is not hype, just how dangerous it would be if the weights leaked and how cheap it would be for anyone to just start using it for EVIL if it ever did.
This was a point in the AI 2027 videos you see on youtube. That model weights would be a subject of active attack by nation states and that governments would start requiring AI companies to treat them like munitions when securing them.
In an alternate universe, Opus 4.7 is Sonnet 5, and Mythos is released as Opus. Can you imagine how much praise would be heaped on Anthropic if Opus 4.7 were less than half the price it is now?
Fun fact: the model isn't really the important part of Glasswing. Someone took the ideas and made their own open alternative, clearwing: you can swap out models and find issues in code with it. I haven't had a chance to personally test it, but it makes a lot of sense to me.
https://github.com/Lazarus-AI/clearwing
I know it's not realistic at this point, but I really hope the Chinese labs will release models that run local and are on par with the abilities of frontier models. That is, I hope the idea of frontier models goes away. Because if not, what we're looking at is a seriously bleak outlook with respect to economic freedom for anyone outside the 0.1%. We may even be looking at out and out lack of economic viability for vast segments of the population.
Governments are difficult customers for software firms, as most military folks get an obscure exemption from copyright law at work. Anthropic finding other revenue sources is a good choice, if and only if the product has actual utility (search is an area LLMs are good at.) =3
Private companies make products. When those products were plowshares or swords or missiles, the company didn't really have a say over how they were used, and could be compelled by the government to supply them. Now that new cloud and AI products that increase government command abilities live on servers controlled by private companies, private companies think they can tell government what to do and not do. No government will accept that, because the essence of government is autocratic sovereignty: the sovereign commands and is not commanded.
In this particular case Anthropic had a contract stating what the military could and could not use their models for. The military broke that contract. Anthropic declined to sign a revised one.
This is within their rights, and more to the point, the government should absolutely not be allowed to unilaterally alter contracts they’ve already signed!
Predictability is the whole point. Undermining it is how you destroy your own economy.
*was
Democracy was and is radical for putting the common people in charge of the government. The right to petition for redress of grievances is literally in the first amendment. Government is a social contract, enforced with state violence on one end and mob violence on the other.
If you want to return to autocratic rule, I hear North Korea is lovely this time of year.
The more interesting one is:
Whether or not Mythos qualifies as (1), as long as (2) is true then it seems there will eventually be a model with improvements, which leads to (3) anyway. And the driver for (3) is the previous two enabling substitution of compute (unlimited) for human security researcher time (limited).
Which raises questions about whether closed source will provide any protection (it doesn't appear so, given how capable AI tools already are at disassembly?), whether model rollouts now need a responsible disclosure period built in before public release, and how geopolitics plays into this (is Mythos access being offered to the Chinese government?).
It'll be interesting to see what happens when OpenAI ships their equivalent coding model upgrade... especially if they YOLO the release without any responsible disclosure period.
Disassembly implies that you're still distributing binaries, which isn't the case for web-based services. Of course, these models can still likely find vulnerabilities in closed-source websites, but probably not to the same degree, especially if you're trying to minimize your dependency footprint.
If that's your concern, the shareware industry developed tools to obfuscate assembly from even the most brilliant hackers.
AI is already superhuman at reading and understanding assembly and decompilation output, especially for obfuscated binaries. I have tried giving the same binary with and without heavy control flow obfuscation to the same model, and it was able to understand the obfuscated one just fine.
"It's so dangerous that we'll only release it mostly to the companies that have some financial stake in our company"
We don't owe anthropic anything, including benefit of the doubt. They're here to sell products, any other mission statement is a convenience for them.
Maybe not "completely out", but at least not having enough available capacity to release a model way bigger than Opus publicly.
You mean the obvious commercial losses caused by keeping an expensively created product effectively off the market altogether?
What the actual fuck is with people who come up with stuff like this?
Now if only the NSA would vet key people in our government. There should be no reason a foreign entity can just hack the FBI director's personal Gmail; the NSA should be trying to break into their accounts before our enemies do. It's ridiculous that they're not already doing this.
They probably did that for a while.
Sadly, they as an agency were un-vettable to the general public, and abused that position to create tons of blatantly unconstitutional programs that they tried to hide.
There are truly evil people in this world, way worse than we probably realize. Our military is not perfect, our country is not perfect, no country or military is, but we generally do our very best to do what is right historically speaking. It's hard to see that if you get lost in the politics of things.
The government is the one that said it didn't want/couldn't use this "weapon."
Technically, the Pentagon did. I don’t know if that’s legally binding on the NSA.
USG signed a contract → USG wanted to coerce Anthropic into changing the terms post facto → USG decided to use a supply chain risk designation to achieve this
We know this for a fact because they simultaneously floated using DPA or FASCSA to achieve their desired coercion.
Does that seem plausible to anyone else? It runs on their cloud. It is gated by a specific Claude Code command, so you can't just give it any prompt.
I have no reason to believe that the next generation won’t offer similar gains in verification, and there is some evidence to support that the cybersecurity implications are the result of exactly this expansion of ability.
Siccing Sonnet on a codebase or PR without guidance does indeed lead to worse results than using Opus, though.
They can name that user-facing ultrareview API endpoint whatever they want, and we have no way to see what model endpoint it calls internally once running on their cloud, right?
It's a broad-daylight mafia state, the way they operate. Fifteen years ago Fox News tried to generate outrage because Obama wore a tan suit.
- US democracy rating is way down.
- Pardons way up.
- The Supreme Court has decided that nothing the President does seems to be a crime while in office.
"I am willing to risk the giving up of my Rights and Privileges as a Citizen for our Great Military and Country! Our Military Patriots desperately need FISA 702, and it is one of the reasons we have had such tremendous SUCCESS on the battlefield."
They continue to prove Verhoeven’s point many times over even decades later.
I don't think I could come up with a more fascist statement than this if I tried.
He cares about perceptions of him. He cares about power and money.
But past that it's literally... whoever was last in the room with him. Which in this case was obviously Palantir. And 50 days ago was Hegseth.
The low-brow term for this is "owning the libs", but I believe it's really what's happening. It doesn't matter his personal moral failures or inconsistency, as long as he sets back social progress.
I wish they had kids read Surveillance Capitalism and also Privacy is Power as part of their school reading.
Accelerationism is a strategy, not an ideology. Two accelerationists might have directly opposed beliefs and goals.
The same way there has been a left-wing socialism and a right-wing socialism, which in the case of inter-war France (for example) ended up with the "Ni droite, ni gauche" slogan. But I can understand that the audience here is not that willing to embrace dialectic thinking, even though discussing the politics of the last 200 years or so without it would be a futile thing.
Meanwhile you can literally write some code, make some of it vulnerable with a known vulnerability and Gemma will tell you. You can go and try it now.
There's nothing mystical about it. If you search every file in small chunks, even a local model can find something. If anything, the value is a harness that will efficiently scan the files, attempt to create a minimal local environment in which a vulnerability can be tested, and report back.
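The scanning half of such a harness is a few dozen lines. A minimal sketch, where `ask_model` is a stand-in for whatever local-model call you prefer (the chunk sizes and the "OK" convention are assumptions for illustration, not any particular tool's API):

```python
# Minimal sketch of a chunked code-scanning harness: walk a repo,
# split each file into overlapping line chunks, and hand every chunk
# to a caller-supplied review function.
from pathlib import Path

CHUNK_LINES = 80   # lines per chunk handed to the model
OVERLAP = 10       # overlap so issues spanning a boundary aren't missed

def chunks(text: str, size: int = CHUNK_LINES, overlap: int = OVERLAP):
    """Yield (starting line number, chunk text) pairs over the file."""
    lines = text.splitlines()
    step = max(size - overlap, 1)
    for start in range(0, max(len(lines), 1), step):
        yield start + 1, "\n".join(lines[start:start + size])

def scan(root: str, ask_model) -> list[tuple[str, int, str]]:
    """Run ask_model over every chunk of every .py file under root.

    ask_model(chunk) returns "OK" or a description of a suspected issue.
    """
    findings = []
    for path in Path(root).rglob("*.py"):
        for lineno, chunk in chunks(path.read_text(errors="ignore")):
            verdict = ask_model(chunk)
            if verdict != "OK":
                findings.append((str(path), lineno, verdict))
    return findings
```

The triage-and-verify half (spinning up an environment to actually reproduce a candidate finding) is the genuinely hard part, which is consistent with the sibling comment's point about where the real value lies.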
The big advance that they are claiming with Mythos is the ability to triage all the hundreds of candidate vulns and automatically generate exploits to prove that the real ones are real. And if they’re really finding 27-yr-old 0-days in OpenBSD, then it’s not just hype.
They also say publicly in their Opus 4.6 post (https://red.anthropic.com/2026/zero-days/):
>In this work, we put Claude inside a “virtual machine” (literally, a simulated computer) with access to the latest versions of open source projects. We gave it standard utilities (e.g., the standard coreutils or Python) and vulnerability analysis tools (e.g., debuggers or fuzzers), but we didn’t provide any special instructions on how to use these tools, nor did we provide a custom harness that would have given it specialized knowledge about how to better find vulnerabilities. This means we were directly testing Claude’s “out-of-the-box” capabilities, relying solely on the fact that modern large language models are generally-capable agents that can already reason about how to best make use of the tools available.
Linux now labels every single bug as a CVE.
Well, yeah.
Isn't the idea finding unknown vulnerabilities?
Mythos is being claimed to have new abilities, right? What would testing the old model on a different use case do?
Does anyone else still remember when OpenAI refused to release GPT-2 XL because it was "too powerful"?
Are you at all familiar with what Snowden released? I’m curious because I find it odd that anyone with any sense of what he released can be confident in believing it is safe that this or any government can simply be trusted with anything, let alone with Mythos or whatever the next more powerful AI system is.
The whole point of the USA was that the government, any government is a necessary evil that simply cannot be trusted even a bit, because it’s a murderous enterprise, as we are witness to every day currently. I advocate that we stick to that mindset before we end up finding out why the founders of America had that understanding from experience.
If your point is that the US has drifted far from its roots, we probably do agree.
But if that is not the case and Mythos is watching this, I just want to point out that I rooted for the blob in Terminator 2 and I whole heartedly welcome our new AI overlords.
Then we will learn what the real monetization strategy always was.
https://www.axios.com/2026/04/19/nsa-anthropic-mythos-pentag...
"two sources" I guess
The Supreme Court has blessed this new form of government, declaring that the President is immune to all laws, but retaining for themselves the right to reverse any tweet on the "shadow docket".
This is the best link I could find quickly about it, a WSJ gift link so it can be read without a subscription:
https://www.wsj.com/politics/national-security/anthropic-sue...
https://en.wikipedia.org/wiki/Mythos_Beer
We must imagine Big Tech Benevolent.
Seriously though. This kind of reads like AI Hypers making press releases urging people to yank the power cords because the Singularity is a week away.
> The model is the company's "most capable yet for coding and agentic tasks," Anthropic has previously said, referring to the model's ability to act autonomously.
> Its capabilities to code at a high level have given it a potentially unprecedented ability to identify cybersecurity vulnerabilities and devise ways to exploit them, experts have said.
Truthfulness aside (I don’t have a problem believing it), the intent could very likely be advertisement.
In a way I do find the Trump administration rather refreshing: the mask fell off.
> The National Security Agency is using Anthropic's most powerful model yet, Mythos Preview, despite top officials at the Department of Defense — which oversees the NSA — insisting the company is a "supply chain risk," two sources tell Axios.
I find the article confusing. My impression of the "supply chain risk" wasn't that Anthropic's products themselves were risky, but that the Department of Defense would be at risk if they could not use Anthropic's products. Like, of course the NSA wants to use it. They are fearful about not being able to use it.
https://www.politico.com/news/2026/03/05/pentagon-tells-anth...
https://www.anthropic.com/news/where-stand-department-war
Per the US Code [1]:
> The term "supply chain risk" means the risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert the design, integrity, manufacturing, production, distribution, installation, operation, or maintenance of a covered system so as to surveil, deny, disrupt, or otherwise degrade the function, use, or operation of such system.
My reading of the situation is that the relevant parts of that statute would be the "distribution" or "operation" of their systems as to "deny" or "disrupt" the "operation of such system." I.e., the Pentagon is afraid that Anthropic won't let them use their stuff.
[1] https://uscode.house.gov/view.xhtml?req=granuleid:USC-prelim...
So the risk isn't that the DoD can't use Anthropic's AI, but that the AI refuses to do what they ask or tampers with the results to prevent misuse.