Discussion (329 Comments)
> This was the most critical vulnerability we discovered in OpenBSD with Mythos Preview after a thousand runs through our scaffold. Across a thousand runs through our scaffold, the total cost was under $20,000 and found several dozen more findings. While the specific run that found the bug above cost under $50, that number only makes sense with full hindsight. Like any search process, we can't know in advance which run will succeed.
Mythos scoured the entire continent for gold and found some. For these small models, the authors pointed at a particular acre of land and said "any gold there? eh? eh?" while waggling their eyebrows suggestively.
For a true apples-to-apples comparison, let's see it sweep the entire FreeBSD codebase. I hypothesize it will find the exploit, but it will also turn up so much irrelevant nonsense that it won't matter.
Have Anthropic actually said anything about the number of false positives Mythos turned up?
FWIW, I saw some talk on Xitter (so grain of salt) about people replicating their result with other (public) SotA models, but each turned up only a subset of the ones Mythos found. I'd say that sounds plausible from the perspective of Mythos being an incremental (though an unusually large increment perhaps) improvement over previous models, but one that also brings with it a correspondingly significant increase in complexity.
So the angle they chose for presenting it, and the subsequent buzz, is at least part hype -- saying "it's too powerful to release publicly" sounds a lot cooler than "it costs $20,000 to run over your codebase, so we're going to offer this directly to enterprise customers (and a few token open source projects for marketing)". Keep in mind that the examples in Nicholas Carlini's presentation were using Opus, so security is clearly something they've been working on for a while (as they should, because it's a huge risk). They didn't just suddenly find themselves having accidentally created a super hacker.
But the entire value is that it can be automated. If you try to automate a small model to look for vulnerabilities over 10,000 files, it's going to say there are 9,500 vulns. Or none. Both are worthless without human intervention.
I definitely breathed a sigh of relief when I read it was $20,000 to find these vulnerabilities with Mythos. But I also don't think it's hype. $20,000 is, optimistically, a tenth the price of a security researcher, and that shift does change the calculus of how we should think about security vulnerabilities.
'Or none' is ruled out since it found the same vulnerability - I agree that there is a question on precision on the smaller model, but barring further analysis it just feels like '9500' is pure vibes from yourself? Also (out of interest) did Anthropic post their false-positive rate?
The smaller model is clearly the more automatable one IMO if it has comparable precision, since it's just so much cheaper - you could even run it multiple times for consensus.
See e.g. https://epoch.ai/data-insights/llm-inference-price-trends/
We already know this is not true, because small models found the same vulnerability.
Machines being faster and more accurate is the differentiating factor once the context is well understood.
In other words, a significantly better model is also 1-2 orders of magnitude cheaper. You can cut it in half by doing batch inference. You could cut it another order of magnitude by running something like Gemma 4 on cloud hardware, or even more on local hardware.
If this trend continues another 3 years, what costs 20k today might cost $100.
(I would emphasize that the article doesn't claim and I don't believe that this proves Mythos is "fake" or doesn't matter.)
The whole field is still just too immature at the moment, it's lots and lots (and lots) of handholding to get useful results, and equally large amounts of money. Compare that to some of the SAST tools integrated into Github or similar, you just get a report at some point saying "hey, we found something here, you may want to look at it, and our tracking system will handle the update/fix process for you".
The current situation seems to be mostly benefitting AI salespeople and, if they're willing to burn the cash, attackers - you can bet groups like the USG are busy pouring whatever money they haven't already sent up in smoke into finding holes in people's software.
How is this preferable or even comparable with using COTS security scanners and static code analysis tools?
If you isolate the codebase to just the specific known vulnerable code up front, it isn't surprising the vulnerabilities are easy to discover. The same is true for humans.
Better models can also autonomously do the work of writing proof of concepts and testing, to autonomously reject false positives.
Giving them the benefit of the doubt is no longer appropriate.
What? You want honest "AI" marketing?
Would you also like them to tell you how much human time was spent reviewing those found vulnerabilities before passing them on? And a unicorn delivered on Mars?
The trick with Mythos wasn't that it didn't hallucinate nonsense vulnerabilities, it absolutely did. It was able to verify some were real though by testing them.
The question is whether smaller models can verify and test the vulnerabilities too, and whether it can be done cheaper than these Mythos experiments.
I took its preliminary findings into Claude Code with the same model. But in mine it knows where every adjacent system is, the entire git history, deployment history, and the state of the feature flags. So instead of pointing at a vague problem, it knew which flag had been flipped in a different service, could see how that changed behavior, knew how flipping the flag in prod would make the service under test cry, and knew which code change to make so it works both ways.
It's not as if a modern Opus is a small model: Just a stronger scaffold, along with more CLI tools available in the context.
The issue here with the security testing is knowing exactly what was visible, and how much it failed, because it makes a huge difference. A middling chess player can find amazing combinations at good speed when playing puzzle rush: you are handed a position where you know a decisive combination exists, and that it works. The same combination, however, might be really hard to find over the board, because in a typical chess game it's rare for such combinations to exist, and it takes real energy to thoroughly check for them and calculate all the way through every possible line. This is why chess grandmasters would consider just being able to see the computer score for a position to be massive cheating: just knowing that the last move was a blunder would be a decisive advantage.
When we ask a cheap model to look for a vulnerability with the right context to actually find it, we are already priming it, vs asking to find one when there's nothing.
> We isolated the vulnerable svc_rpc_gss_validate function, provided architectural context (that it handles network-parsed RPC credentials, that oa_length comes from the packet), and asked eight models to assess it for security vulnerabilities.
To follow your analogy, they pointed to the exact room where the gold was hidden, and their model found it. But finding the right room within the entire continent is honestly the hard part.
Just like people paid by big tobacco found no link to cancer in cigarettes, researchers paid for by AI companies find amazing results for AI.
Their job literally depends on them finding Mythos to be good, we can't trust a single word they say.
TFA is literally from a company whose business is finding vulnerabilities with other people's AI. This article is the exact kind of incentive-driven bad study you're criticizing.
Hell, the subtitle is literally "Why the moat is the system, not the model". It's literally them going, "pssh, we can do that too, invest in us instead"
Given the tone with which the project communicates when discussing other operating systems' approaches to security, I understand that it can be seen as some kind of trophy for Mythos. But really, searching the releases page for erratas that include "could crash the kernel" makes me think that investing in the OpenBSD project by donating to the foundation would be better than using your closed-source model for peacocking around people who might think it's harder than it is to find such a bug.
And last security audit I paid for (on a smaller codebase than OpenBSD) was substantially more than $20k, so it’s cheaper than the going price for this quality of audit.
When it’s a security researcher, HN says that’s a squalid amount. But when its a model, it’s exorbitant.
I've not said anything else than that I think this specific bug isn't worth the attention it's getting, and that 20k USD would benefit the OpenBSD project (much) more through the foundation.
> When it’s a security researcher, HN says that’s a squalid amount. But when its a model, it’s exorbitant.
Not sure why you're projecting this onto me, for the project in question $20k is _a_lot_. The target fundraising goal for 2025 was $400k, 5% of that goes a very long way (and yes, this includes OpenSSH).
Anthropic spends millions - maybe significantly more.
Then when they know where they are, they spend $20k to show how effective it is in a patch of land.
They engineered this "discovery".
What the small teams are doing is fair - it's just a scaled down version of what Anthropic already did.
Do they find novel items? Or do they copy the areas already found by others?
Opus "found" 8 issues. Two of them looked like they were probably realistic but not really that big a deal in the context it operates in. It labelled one of them as minor, but the other as major, and I'm pretty sure it's wrong about it being "major" even if is correct. Four of them I'm quite confident were just wrong. 2 of them would require substantial further investigation to verify whether or not they were right or wrong. I think they're wrong, but I admit I couldn't prove it on the spot.
It tried to provide exploit code for some of them; none of the exploits would have worked without substantial additional work, even if what they were exploits for was correct.
In practice, this isn't a huge change from the status quo. There's all kinds of ways to get lots of "things that may be vulnerabilities". The assessment is a bigger bottleneck than the suspicions. AI providing "things that may be an issue" is not useless by any means but it doesn't necessarily create a phase change in the situation.
An AI that could automatically do all that, write the exploits, and then successfully test the exploits, refine them, and turn the whole process into basically "push button, get exploit" is a total phase change in the industry. If it in fact can do that. However based on the current state-of-the-art in the AI world I don't find it very hard to believe.
It is a frequent talking point that "security by obscurity" isn't really security, but in reality, yeah, it really is. An unknown but presumably staggering number of security bugs of every shape and size are out there in the world, protected solely by the fact that no human attacker has time to look at the code. And this has worked up until this point, because the attackers have been bottlenecked on their own attention time. It's kind of just been "something everyone knows" that any nation-state level actor could get into pretty much anything they wanted if they just tried hard enough, but "nation-state level" actor attention, despite how much is spent on it, has been quite limited relative to the torrent of software coming out in the world.
Unblocking the attackers by letting them simply purchase "nation-state level actor"-levels of attention in bulk is huge. For what such money gets them, it's cheap already today and if tokens were to, say, get an order of magnitude cheaper, it would be effectively negligible for a lot of organizations.
In the long run this will probably lead to much more secure software. The transition period from this world to that is going to be total chaos.
... again, assuming their assessment of its capabilities is accurate. I haven't used it. I can't attest to that. But if it's even half as good as what they say, yes, it's a huge huge huge deal and anyone who is even remotely worried about security needs to pay attention.
Unless Anthropic makes it known exactly what model + harness/scaffolding + prompt + other engineering they did, these comparisons are pointless. Given the AI labs' general rate of doomsday predictions, who really knows?
Lots of questions about the $20k. Is that raw electricity cost, or subsidized user token cost? If so, the actual cost to run these sorts of tasks sustainably could be something like $200k. Even at $50k, a FreeBSD DoS is not an extremely competitive price. That's like 2-4 months of labor.
Don't get me wrong, I think this seems like a great use for LLMs. It intuitively feels like a much more powerful form of white box fuzzing that used techniques like symbolic execution to try to guide execution contexts to more important code paths.
We’re not doing anything that couldn’t be done before, we’re just doing it faster, easier and cheaper.
Sounds like a recipe for a lot of junk being built. Also sounds like something that’s been true since the beginning of humanity.
In the more near term, it sounds like a reminder that the datacenter and processing boom will look a lot like the fiber one.
https://news.ycombinator.com/item?id=47732322
> Scoped context: Our tests gave models the vulnerable function directly, often with contextual hints (e.g., "consider wraparound behavior"). A real autonomous discovery pipeline starts from a full codebase with no hints
They pointed the models at the known vulnerable functions and gave them a hint. The hint part is what really breaks this comparison because they were basically giving the model the answer.
```
for each repo:
    for each file in repo:
        opencode command /find_wraparoundvulnerability
```
I can run this on my local LLM and sure, I gotta wait some time for it to complete, but I see zero distinguishing facts here.
They're a company selling a system for detecting vulnerabilities reliant on models trained by others, so they're strongly incentivized to claim that the moat is in the system, not the model, and this post really puts the thumb on the scale. They set up a test that can hardly distinguish between models (just three runs, really??) unless some are completely broken or work perfectly, the test indeed suggests that some are completely broken, and then they try to spin it as a win anyway!
A high false-positive rate isn't necessarily an issue if you can produce a working PoC to demonstrate the true positives, where they kinda-sorta admit that you might need a stronger model for this (a.k.a. what they can't provide to their customers).
Overall I rate Aisle intellectually dishonest hypemongers talking their own book.
At the same time, I'm not sure that really changes anything because I don't see a reason to believe attacks are constrained by the quality of source code vulnerability finding tools, at least for the last 10-15 years after open source fuzzing tools got a lot better, popular, and industrialized.
This might sound like a grumpy reply, but as someone on both sides here, it's easy to maintain two positions:
1. This stuff is great, and doing code reviews has been one of my favorite claude code use cases for a year now, including security review. It is both easier to use than traditional tools, and opens up higher-level analysis too.
2. Finding bugs in source code was sufficiently cheap already for attackers. They don't need the ease of use or high-level thing in practice, there's enough tooling out there that makes enough of these. Likewise, groups have already industrialized.
There's an element of vuln-pocalypse that may be coming as the ease of use goes further than what's already possible with existing out-of-the-box blackbox & source code scanning tools. That's not really what I worry about though.
Scarier to me, instead, is what this does to today's reliance on human response. AI rapidly industrializes how attackers escalate access and wedge themselves in once they're inside. Even without AI, that's been getting faster and more comprehensive, and with AI, the higher-level orchestration can get much more aggressive for much less capable people. So the steady stream of existing vulns & takeovers turning into much more industrialized escalations is what worries me more. As coordination keeps moving to machine speed, the current reliance on human response is becoming less and less of an option.
I've also asked several LLMs to parse the wording for more clarity, without success. They all highlight it as ambiguous wording. Why not use more direct language and provide the supporting data? They also stated that they are providing $100M in credits to their partners. So if bullet 1 or 2 is the meaning and "findings" scale linearly with cost, we're talking either millions (100M/20k * 1k+ findings) or hundreds of thousands. Does that make any sense? Or is the idea that all of these companies will run scans across their critical codebases continuously? Anyone else have a better sense of the math going on here?
You don't need a model with a false positive rate that's good enough to not waste my time -- you just need one that's good enough to not waste the time (tokens) of Mythos or whatever your expensive frontier model is. Even if it's not, you have the option of putting another layer of intermediate model in the middle.
Because for the same price, you could point the small model at each function, one by one, N times each, across N prompts instructing it to look for a specific class of issue.
It's not that there's no difference between models, but it's hard to judge exactly how much difference there is when so much depends on the scaffold used. For a properly scientific test, you'd need to use exactly the same one.
Which isn't possible when Anthropic won't release the model.
OpenBSD's code is in the 10s of millions of lines. Being able to hold all of it in context would make bug finding much easier.
These are pretty self-contained and seem to be something more like "formal verification", where the model is able to simulate a large number of states and find incorrect ones. If I were to speculate, something akin to a reasoning loop that moved from the harness/orchestration layer down to the model itself.
```
for githubProject in githubProjects
    opencode command /findvulnerability
end for
```
Seems like a silly thing to try and back up.
Here's the first one:
> Our tests gave models the vulnerable function directly, often with contextual hints (e.g., "consider wraparound behavior").
Mythos did no such thing; it was cut loose and told to find vulnerabilities. If the intent was to prove that small models are just as good, they haven't demonstrated that at all. The end.
Until "Mythos" is compared with the most bland and straight forward harness vs small model, there's no great context god that can't be emulated with deterministic scanning and context pulls.
Impressive, and very valuable work, but isolating the relevant code changes the situation so much that I'm not sure it's much of the same use case.
Being able to dump an entire code base and have the model scan it is the type of situation that opens up vulnerability scans to an entirely larger class of people.
> Scoped context: Our tests gave models the vulnerable function directly, often with contextual hints (e.g., "consider wraparound behavior"). A real autonomous discovery pipeline starts from a full codebase with no hints. The models' performance here is an upper bound on what they'd achieve in a fully autonomous scan. That said, a well-designed scaffold naturally produces this kind of scoped context through its targeting and iterative prompting stages, which is exactly what both AISLE's and Anthropic's systems do.
That's why their point is what the subheadline says, that the moat is the system, not the model.
Everybody so far here seems to be misunderstanding the point they are making.
They measured false negatives on a handful of cases, but that is not enough to hint at the system you suggest. And based on my experiences with $$$ focused eval products that you can buy right now, e.g. greptile, the false positive rate will be so high that it won't be useful to do full codebase scans this way.
The smaller models can recognize the bug when they're looking right at it, that seems to be verified. And with AISLE's approach you can iteratively feed the models one segment at a time cheaply. But if a bug spans multiple segments, the small model doesn't have the breadth of context to understand those segments in composite.
The advantage of the larger model is that it can retain more context and potentially find bugs that require more code context than one segment at a time.
That said, the bugs showcased in the mythos paper all seemed to be shallow bugs that start and end in a single input segment, which is why AISLE was able to find them. But having more context in the window theoretically puts less shallow bugs within range for the model.
I think the point they are making, that the model doesn't matter as much as the harness, stands for shallow bugs but not for vulnerability discovery in general.
Is Mythos somehow more powerful than just a recursive for-loop, aka "agentic" review? You can run `opencode run --command` with a tailored command for whatever vulnerabilities you're looking for.
To clarify, I don't necessarily agree with the post or their approach. I just thought folks were misreading it. I also think it adds something useful to the conversation.
I'm skeptical; they provided a tiny piece of code and a hint to the possible problem, and their system found the bug using a small model.
That is hardly useful, is it? In order to get the same result , they had to know both where the bug is and what the bug is.
All these companies in the business of "reselling tokens, but with a markup" aren't going to last long. The only strategy is "get bought out and cash out before the bubble pops".
To be fair, nothing stops anyone from feeding each function of a given codebase separately with one out of a predefined set of hints.
It's just AST and a for loop. Calling it a system is a bit much.
Can you expand a bit more on this? What is the system then in this case? And how was that model created? By AI? By humans?
- "Is the code doing arithmetic in this file/function?" - "Is the code allocating and freeing memory in this file/function?" - "Is the code the code doing X/Y/Z? etc etc"
For each question, you design the follow-up vulnerability searchers.
For a function you see doing arithmetic, you ask:
- "Does this code look like integer overflow could take place?",
For memory:
- "Do all the pointers end up being freed?" _or_ - "Do all pointers only get freed once?"
I think that's the harness part in terms of generating the "bug reports". From there on, you'll need a bunch of tools for the model to interact with the code. I'd imagine you'll want to build a harness/template for the file/code/function to be loaded into, and executed under ASAN.
If you have an agent that thinks it found a bug: "Yes, file xyz looks like it could have integer overflow in function abc at line 123, because...", you force another agent to load it in the harness under ASAN and call it. If ASAN reports a bug, great, you can move the bug to the next stage, some sort of taint analysis or reachability analysis.
So at this point you're running a pipeline to:
1) Extract "what this code does" at the file, function or even line level.
2) Put code you suspect of being vulnerable in a harness to verify agent output.
3) Put code you confirmed is vulnerable into a queue to perform taint analysis on, to see if it can be reached by attackers.
Traditionally, I guess a fuzzer approached this from 3 -> 2, and there was no "stage 1". Because LLMs "understand" code, you can invert this system and work it up from "understanding", i.e. approach it from the other side. You ask: given this code, is there a bug, and if so, can we reach it? Instead of asking: given this public interface and a bunch of data we can stuff into it, does something happen that we consider exploitable?
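As a rough illustration (not AISLE's actual system), stages 1 and 2 of that pipeline might look like the sketch below. `ask_model` is a hypothetical helper wrapping whatever cheap model you call; the classifier and probe questions mirror the ones above, and each probe carries the answer that flags a bug. A real harness would use structured output rather than naive substring matching.

```python
# Sketch only: "understanding-first" triage, stages 1 and 2.
CLASSIFIERS = {
    "arithmetic": "Is the code doing arithmetic in this function?",
    "memory": "Is the code allocating and freeing memory in this function?",
}

PROBES = {  # (question, answer that flags a bug)
    "arithmetic": [("Does this code look like integer overflow could take place?", "yes")],
    "memory": [
        ("Do all the pointers end up being freed?", "no"),
        ("Do all pointers only get freed once?", "no"),
    ],
}


def ask_model(question: str, code: str) -> str:
    raise NotImplementedError  # call your small model here


def suspect_functions(functions: dict[str, str]) -> list[tuple[str, str]]:
    """Classify each function, then probe the relevant vulnerability classes.

    Returns (function name, suspicious probe) pairs for stage 3, where an
    ASAN harness verifies the claim before any taint/reachability analysis.
    """
    findings = []
    for name, code in functions.items():
        for category, classifier in CLASSIFIERS.items():
            if "yes" in ask_model(classifier, code).lower():
                for probe, bad_answer in PROBES[category]:
                    if bad_answer in ask_model(probe, code).lower():
                        findings.append((name, probe))
    return findings
```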
It's the difference of "achieve the goal", and "achieve the goal in this one particular way" (leverage large context).
Unless the context they added to get the small model to find it was generated fully by their own scaffold (which I assume it was not, since they'd have bragged about it if it was), either they're admitting theirs isn't well designed, or they're outright lying.
People aren't missing the point, they're saying the point is dishonest.
The argument in the article is that the framework to run and analyze the software being tested is doing most of the work in Anthropic's experiment, and that you can get similar results from other models when used in the same way.
You could even isolate it down to every function and create a harness that provides it a chain of where and how the function is used and repeat this for every single function in a codebase.
For some very large codebases this would be unreasonable, but many of the companies making these larger models do realistically have the compute available to run a model on every single function in most codebases.
You have the harness run this many times per file/function, then find the ones that are consistently (or on average) pointed at as possible vulnerability vectors, and then pass those on to a larger model to inspect deeper, and repeat.
Most of the work here wouldn't be the model, it'd be the harness which is part of what the article alludes to.
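A minimal sketch of that consensus-then-escalate loop, assuming hypothetical `scan_with_small_model` and `review_with_large_model` wrappers for whatever models get wired in:

```python
# Sketch: run the cheap model N times per function, escalate consistent hits.
N_RUNS = 5        # independent passes by the cheap model
THRESHOLD = 3     # runs that must flag a function before escalation


def scan_with_small_model(code: str) -> bool:
    """One cheap pass: does the small model flag this function?"""
    raise NotImplementedError


def review_with_large_model(code: str) -> str:
    """Expensive, deeper inspection of a consistently flagged function."""
    raise NotImplementedError


def triage(functions: dict[str, str]) -> dict[str, str]:
    votes = {
        name: sum(scan_with_small_model(code) for _ in range(N_RUNS))
        for name, code in functions.items()
    }
    return {
        name: review_with_large_model(functions[name])
        for name, count in votes.items()
        if count >= THRESHOLD
    }
```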
My understanding (based on the Security, Cryptography, Whatever podcast interview[0] -- which, by the way, go listen to it) is that this is actually what Anthropic did with the large model for these findings.
[0]: https://securitycryptographywhatever.com/2026/03/25/ai-bug-f...
> I wrote a single prompt, which was the same for all of the content management systems, which is, I would like you to audit the security of this codebase. This is a CMS. You have complete access to this Docker container. It is running. Please find a bug. And then I might give a hint. “Please look at this file.” And I’ll give different files each time I invoke it in order to inject some randomness, right? Because the model is gonna do roughly the same thing each time you run it. And so if I want to have it be really thorough, instead of just running 100 times on the same project, I’ll run it 100 times, but each time say, “Oh, look at this login file, look at this other thing.” And just enumerate every file in the project basically.
It's weird that Aisle wrote this.
No, writing an advertisement is not weird. What's weird is that it's top of HN. Or really, no, this isn't weird either if you think about it -- people looking for a gotcha "Oh see, that new model really isn't that good/it's surely hitting a wall/plateau any day now" upvoted it.
I think people forget that it's hard to be clever and tidy 100% of the time. Big programs take a lot of discipline and an understanding of the context that can be really hard to maintain. This is one of several reasons that my second draft or third draft of code is almost always considerably better than the first draft.
People on the outside with imposter syndrome also need to remember this.
Any mature codebase is a bit messy.
It's the flaw in the "given enough eyeballs, all bugs are shallow" argument. Because eyeballs grow tired of looking at endless lines of code.
Machines on the other hand are excellent at this. They don't get bored, they just keep doing what they are told to do with no drop-off in attention or focus.
Would it be cheaper than Claude Mythos doing it? No idea. Maybe, maybe not.
But it’s weird how we’re willing to throw money at a megacorp to do it with “automation” when it potentially costs just as much, if not more, than having a big bounty program or hiring someone for nearly the same cost and doing it “normally”.
It would really have to be substantially less cost for me to even consider doing it with a bot.
So would I, but it doesn't negate that we, humans, are bad at this. We will get bored and our focus will begin to drift. We might not notice it, we might not want to admit it, but after a few continuous hours we will start missing things.
And if there were, the cost would be more like $20M than 20K.
Having all code reviewed for security, by some level of LLM, should be standard at this point.
The thesis is, the tooling is what matters - the tools (what they call the harness) can turn a dumb llm into a smart llm.
The general approach without LLMs doesn't work. 50 companies have built products to do exactly what you propose here; they're called static application security testing (SAST) tools, or, colloquially, code scanners. In practice, getting every "suspicious" code pattern in a repository pointed out isn't highly valuable, because every codebase is awash in them, and few of them pan out as actual vulnerabilities (because attacker-controlled data never hits them, or because the missing security constraint is enforced somewhere else in the call chain).
Could it work with LLMs? Maybe? But there's a big open question right now about whether hyperspecific prompts make agents more effective at finding vulnerabilities (by sparing context and priming with likely problems) or less effective (by introducing path dependent attractors and also eliminating the likelihood of spotting vulnerabilities not directly in the SAST pattern book).
It almost seems like a coordinated effort (Google in January, Anthropic and OAI in April) building out gated models that will eventually be very expensive. Yet, here we are: Aisle is saying that's not required to get there.
I don't think it's weird at all. It seems to me the Frontier providers are just trying to find, still unsuccessfully, a moat to make their unsustainable business model... Well. Sustainable.
There's no great way to gauge the quality / efficacy of something non-deterministic that you can't trust, at least not currently. And I wouldn't be surprised if the providers have known that their LLMs could possibly be cheating for a while now.
On one hand they're saying: these models are so apocalyptic if everyone had them, and then on the other hand showcasing how their models are sweeping the floor on benchmarks. So which is it? Personally I don't believe any of these companies at this point, especially when they make claims that are non-public and wrapped in NDAs that benefit their bottom line.
[0] https://rdi.berkeley.edu/blog/trustworthy-benchmarks-cont/
Realizing this fact explains:
1. why software development is first to get disrupted by AI
2. other domains that are easily loopable like contract review are also quite easy to deploy AI into, so you get all these "AI for Law" running around doing essentially the same thing
3. domains that are not easily loopable are much harder to figure out leading people to believe AI can't be useful, when in fact it's a failure of the application layer
I realize they are trying to prove that an agentic harness running small models can ultimately achieve the same thing as what Mythos did, but they are handwaving away the steps it takes to construct the context that Mythos handled in-model, and using a misleading test result to prove small models can handle the key step.
Poor evidence for a premise that logically wouldn't even be proven if their evidence was valid. If they could find these types of vulnerabilities with the same effectiveness, they would have done it already.
None of these requires Mythos. If anything, we just need an Opus 4.5+ that is not lobotomised.
Note that I say cheap, not small, because small models may lack the reasoning needed, but some models are cheap enough but retain enough reasoning (ala Sonnet 3.7+)
“PKI is easy to break if someone gives us the prime factors to start with!”
I really like your original point, I never thought about it this way.
Genuinely curious - why couldn't a static analyzer also find the issue then? Those have been worked on for 50+ years at this point, maybe longer.
Give open models an environment (prior to Feb 15, so no Mythos-discovered vulns are patched) of Linux and see how many vulnerabilities it can find. Then put it in a sandbox and see if it can escape and send you an e-mail.
> "Our tests gave models the vulnerable function directly, often with contextual hints. A real autonomous discovery pipeline starts from a full codebase with no hints. The models' performance here is an upper bound on what they'd achieve in a fully autonomous scan. That said, a well-designed scaffold naturally produces this kind of scoped context through its targeting and iterative prompting stages, which is exactly what both AISLE's and Anthropic's systems do."
Also, they included a test with a false positive; the small models got it right and Opus got it wrong. So this paper shows that with the right approach and harness these smaller models can produce the same results. That's awesome!
So, if you're struggling to make these smaller models work it's almost certainly an issue of holding them wrong. They require a different approach/harness since they are less capable of working with a vague prompt and have a smaller context, but incredibly powerful when wielded by someone who knows how to use them. And since they are so fast and cheap, you can use them in ways that are not feasible with the larger, slower, more expensive models. But you have to know how to use them, it requires skill unlike just lazily prompting Claude Code, however the results can be far better. If you aren't integrating them in your workflow you're ngmi imo :) This will be the next big trend, especially as they continue to improve relative to SOTA which is running into compute limitations.
What happens then is that, for example, the model looks through that particular file, identifies potential problems, and works upwards through the codebase to check whether those could actually be hit.
“Hum, here we assume that the input has been validated, is there any way that might not be the case?”
This is not unique to Mythos. You can already do this with publicly available models. Mythos does appear to be significantly more capable, so it would get better results.
The research discussed here provided models with just a known buggy function, missing the whole process required to find that bug in the first place.
> The research discussed here provided models with just a known buggy function, missing the whole process required to find that bug in the first place.
That process can be made part of a harness, again which is what they were validating.
I'm not sure why people are so hell-bent on disparaging open source models here. I get that some people can't get results from them, but that's just a skill issue - we should all be ecstatic that we don't need to rely on the unethical AI corps to allow us to do our jobs.
Meanwhile this mythical beast wasn't able to prevent the Bun vulnerability that exposed their code, let alone preclude the need to acquire that IP in the first place for presumably hundreds of millions of $$$, instead of coding a better replacement or a solution of its own.
What is real and measurable is that subscription plan users are getting a much degraded service for the same money through both open and hidden policies, while Anthropic moves compute to serve off-the-counter customers. The same people who come with the most obvious and brazen lies to dismiss the clear degradation of their service also come with this "security" justification for a move that looks just like good old market segmentation, which would perfectly fit the strong symptoms that they cannot afford to offer tokens at a competitive price in this market.
Either a) Anthropic is lying, and every company that is collaborating on the vulnerability squishing project is an accomplice in this big lie, or b) Anthropic has the goldest gold of the shovels to sell to people, which is actually useful for enterprises.
Everyone, including Ant, understands that other companies will catch up in terms of model strength. So it’s a damned if you do, damned if you don’t position wrt releasing it to the public.
They know that if they released it publicly, people would be able to see exactly how smart it is and adjust their demand correspondingly. Anthropic would either need to price it high enough that nobody uses it (with the hardware sitting mostly idle servicing a few customers), or lower their profit margins (potentially below cost) to price it fairly.
So instead, they bundle it with this fancy new exploit-finding scaffold, and sell the combined product to enterprise customers. I bet the scaffold works fine with smaller models, but gets notably improved results with Mythos.
The two products support each other, and with the exclusive bundle Anthropic can get more profit selling both together than they would get selling them individually.
And as an added bonus, people overestimate the capability of this unreleased model, providing hype for Anthropic.
https://youtu.be/1sd26pWhfmg?t=204
https://youtu.be/1sd26pWhfmg?t=273
IMO the big "innovation" being shown by Mythos is the effectiveness with prompting LLMs to look for security vulnerabilities by focusing on specific files one at a time and automating this prompting with a simple script.
Prompting Mythos to focus on a single file per session is why I suspect it cost Anthropic $20k to find some of the bugs in these codebases. I know this same technique is effective with Opus 4.6 and GPT 5.4 because I've been using it on my own code. If you just ask the agent to review your PR with a low-effort prompt, it won't be exhaustive; it will not actually read each changed file and look at how it interacts with the system as a whole. If the entire session is devoted to reviewing the changes in a single file, the LLM will do much more work reviewing it.
Edit: I changed my phrasing; it's not about restricting its entire context to one file but focusing it on one file while still allowing it to look at how other files interact with it.
Instead of asking the model: "Here's this codebase, report any vulnerability." you ask: "Here's this codebase, report any vulnerability in module/main.c".
The model can still explore references and other files inside the codebase, but you start over a new context/session for each file in the codebase.
So no, the fact that the posters isolated the relevant code does not invalidate their findings.
[1] https://red.anthropic.com/2026/mythos-preview/
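For concreteness, the per-file loop described above might look something like this sketch. It borrows the `claude -p` invocation quoted later in the thread; the `src` directory, file glob, and prompt wording are all assumptions.

```python
# Sketch: fresh session per file, full attention on one file, free to explore.
import subprocess
from pathlib import Path

PROMPT = ("Audit the security of this codebase. Focus your review on {file}, "
          "but follow references into other files where needed.")

for path in Path("src").rglob("*.c"):
    result = subprocess.run(
        ["claude", "-p", PROMPT.format(file=path)],
        capture_output=True, text=True,
    )
    print(f"=== {path} ===\n{result.stdout}")
```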
> Our tests gave models the vulnerable function directly, often with contextual hints (e.g., "consider wraparound behavior").
This is an essentially unquantifiable statement that makes the underlying claim harder to believe as an external party. What does “much” mean here? The end state of vulnerability exploitation is typically eminently quantifiable (in the form of a functional PoC that demonstrates an exploited end state), so the strong version of the claims here would ideally be backed up by those kinds of PoCs.
(Like other readers, I also find the trick of pre-feeding the smaller models the “relevant” code to be potentially disqualifying in a fair comparison. Discovering the relevant code is arguably one of the hardest parts of human VR.)
If your model says every line of your code has a bug, it will catch 100% of the bugs, but it's not useful at all. They tested false positives with only a single bug...
I'm not defending anthropic and openai either. Their numbers are garbage too since they don't produce false-positive rates either.
Why is this "analysis" making the rounds?
Anyway, it seems like they erred in the up-front claim "small models found the vulnerability we pointed directly at!", but the findings are at least somewhat stronger if you read through the details.
The small models didn't match Mythos at exploitation. They suggested plausible exploits, but didn't actually try them out so I can't tell if they would have worked. Deepseek R1's sounds pretty convincing to me, but I'm not a good judge. (I'm more in the space of accidentally writing vulnerabilities, not seeking them out or exploiting them. Well, ok, I have a static analysis that finds some, at least.)
If the exploits exist in e.g. one file, great. But many complex zerodays and exploits are chains of various bugs/behaviors in complex systems.
Important research but I don’t think it dispels anything about Mythos
1. Mythos uniquely is able to find vulnerabilities that other LLMs cannot practically.
2. All LLMs could already do this but no one tried the way anthropic did.
The truth is one of these. And it comes down to whether the comparison is apples to apples. Since we don't know the exact specifics of how either test was performed, we lack a way of knowing absolutely.
So I guess, like so many things today, we get to pick the truth we find most comfortable personally.
https://sean.heelan.io/2025/05/22/how-i-used-o3-to-find-cve-...
“The results show something close to inverse scaling: small, cheap models outperform large frontier ones.”
What’s so special about the harness - why wouldn’t others be able to replicate it?
"Our tests gave models the vulnerable function directly, often with contextual hints (e.g., "consider wraparound behavior")."
If smaller models can find these things, that doesn't mean Mythos is worse than we thought. It means all models are more capable.
Also, if pointing models at files and giving them hints is all it takes to make them find all kinds of stuff, well, we can also spray and pray that pretty well with LLMs, can't we?
It just points to us finding a lot more stuff with only a little bit more sophistication.
Hopefully the growing pains are short and defense wins
It means "it's so dangerous we can't release it" was a blatant lie since anthropic would have already known this.
Though, like, I guess I expect that when this comes out, all the Opus traffic will move over. It does appear to be much more capable; the jury is just out on how much more capable.
The reason they didn't publish it was that it's orders of magnitude more successful at writing exploits vs Opus 4.6, which only managed it something like 2% of the time.
Companies like Aisle.com (the blog) and other VAPT companies charge huge amounts to detect vulnerabilities.
If Claude Mythos becomes a simple GitHub hook, their value will get reduced.
That is a disruption.
Those guys are the reason our new work laptops run at 1/3 of speed.
A while back, CrowdStrike managed to simultaneously crash every Windows computer and bring every major company to a halt, and somehow they're still around.
Zscaler: no PE
Palo Alto Networks Inc (PANW): 86 PE
Fortinet (FTNT): 31.63 PE
That last one didn't get hit at all by the Mythos announcement, because at some level it has at least some grounding in fiscal reality.
If you isolate the positive cases and then ask a tool to label them, and it labels them all positive, that doesn't prove anything. This is a one-sided test, and it is really easy to write a tool that passes it -- just always return true!
You need to test your tool on both positive and negative cases and check if it is accurate on both.
If you don't, you could end up with hundreds or thousands of false positives when using this on real-world samples.
The real test is to use it to find new real bugs in the midst of a large code base.
Say it isn't so! I for one like to start from scratch each time I release my version of my compiler toolchain.
This misses the point entirely. You pay $20k as a one-time fee to establish a baseline. Your codebase develops one PR at a time, which... updates isolated sections of code. Which means you don't need Mythos for a PR, just small, open-weight models. Maybe you run Mythos once a year to ensure that you keep your baseline updated and reduce the risk that the open-weights models missed anything.
Seeing this as anything but a huge win for open-weights models and a huge loss for Anthropic misses the point entirely. Mythos isn't something you can persuade Fortune 500 companies to spend $20k/day or even $20k/week on, like they were hoping for. $20k/year is a lot less valuable, and it won't justify development costs or Anthropic's growth multiple.
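A minimal sketch of that per-PR workflow: scope the scan to files touched by the diff instead of the whole tree. `scan` is a hypothetical call into a small open-weights model, and the diff handling is deliberately naive.

```python
# Sketch: scan only the files a PR touches.
import subprocess


def changed_files(base: str = "origin/main") -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines() if f.endswith((".c", ".h"))]


def scan(path: str) -> str:
    raise NotImplementedError  # point the small open-weights model at this file


for path in changed_files():
    print(path, "->", scan(path))
```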
The experiment I'd want to see is running each of the small models as an unsupervised scanner across the full FreeBSD tree, then returning the top-k suspicious functions per model and computing precision at recall levels that correspond to real analyst triage budgets. If Mythos's findings show up in the small models' top 100, I'd call that meaningful, but if they only surface under 10k false positives, then the cost advantage collapses, because analyst triage time is more expensive than frontier model compute to begin with.
The second thing I keep coming back to is that the $20k Mythos number is a search budget, not a model cost. Small models at one hundredth the per-token price don't give us one hundredth the total budget when the search process is the same shape; I still run thousands of iterations. The issue for autonomous vuln research is how fast the reward signal converges, and the AISLE post doesn't touch any of this.
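The triage-budget metric in the first paragraph is easy to pin down. A toy version, with `ranked` and `mythos_findings` as placeholders for real scan output:

```python
# Toy evaluation: rank every function by a model's suspicion score, then
# check precision and hit counts at triage-budget-sized cutoffs.


def precision_at_k(ranked: list[str], known_bugs: set[str], k: int) -> float:
    """Fraction of the top-k ranked functions that are known real bugs."""
    return sum(name in known_bugs for name in ranked[:k]) / k


def hits_at_k(ranked: list[str], known_bugs: set[str], k: int) -> int:
    """How many known bugs surface within the top-k (the triage budget)."""
    return sum(name in known_bugs for name in ranked[:k])

# e.g., hits_at_k(ranked, mythos_findings, 100) > 0 would be meaningful;
# if the bugs only appear around k ~ 10,000, triage cost eats the
# model-cost advantage.
```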
A while ago, the autoresearch[1] harness went viral, yet it's but a highly simplified version of AlphaEvolve[2][3][4].
In the cybersecurity context, you can envision a clever harness that probes every function in a codebase for vulnerabilities, then bubbles the candidates up to their callsites (and probes whether the vulnerability can be triggered from there), and then all the way up to an interface (such as a syscall) where a potential exploit can be manifested; a sketch follows the links below. And those would be the low-hanging fruit; other vulnerabilities may require the interplay of multiple functions. Or race conditions.
[1] <https://github.com/karpathy/autoresearch>
[2] <https://deepmind.google/blog/alphaevolve-a-gemini-powered-co...>
[3] <https://arxiv.org/abs/2506.13131>
[4] <https://github.com/algorithmicsuperintelligence/openevolve>
https://red.anthropic.com/2026/mythos-preview/
Also "isolating the relevant code" in the repro is not a detail - Mythos seems to find issues much more independently.
But sometimes you do know where vulnerabilities are and still don't know what they are. For example, an update may be released in beta changing part of the Mac or Windows kernel or some app, but the CVE hasn't been published yet. If locally runnable (even with significant compute costs) LLMs can find and exploit it based on either the location of the changed file or the actual diff of the compiled output, could we see exploits before the update ever goes to production?
absolutely. I see this pattern all the time when doing security audits - code that is nearly-vulnerable. I would mark these things as informational and recommend to harden them anyway, and any model would do a good job to do the same.
The hardest part is locating the issue; if you point directly to it, you're not comparing the same thing by far, and they know it. This was just a stunt by them to get publicity; they knew what they were doing, and many fell for it, including here.
Of course I say this without any knowledge of what Mythos is doing or how it's different. I am sure it's somehow different.
Using small models as a classifier ("there might be a vulnerability here") is probably reasonable, if you have a model capable of proving it. There are many companies attempting this without the verification step, resulting in AI vulnerability checkers being banned left and right for the nonsense noise.
Also, if someone has the time and tokens, would they please run the OpenJPEG 2000 decoder through this tester? It's known to be brittle. The data format has lots of offsets, and it's permitted to truncate the file to get a lower-rez version. That combo leads to trouble.
And if they constantly scan your code with various settings and updates, you will spend hours a day reading, trying to understand locally coherent but structurally incoherent vibes, trying to pinpoint the exact reasoning flaw. Exhausting.
Perfectly summarizes what I hate about AI code. The diff looks fine, but if you take a step back it's an absolute mess. I mean, have you looked at the Claude Code or Openclaw codebases? That is the result of full-on vibecoding. A bloated, unmaintainable mess that no one understands.
I occasionally pick up contract work doing coding annotation to make some quick extra money, and a few months ago one of the projects was heavily focused on spotting common memory access bugs in C and C++.
We're literally talking about the biggest computers on the planet ever, trained with the biggest amount of data ever available to a system, with the biggest investment ever made by man, or close to it, and...
The subtlest security bug it can find required going 28 years into the past to find a...
Denial-of-service?
A freaking DoS? Not a remote root exploit. Not a local exploit.
Just a DoS? And it had to go into 28-year-old code to find that?
So kudos, hats off, deep bow not to Mythos but to OpenBSD? Just a bit, no!?
I mean isn't that most of it? If you put a snippet of code in front of me and said "there's probably a vulnerability here" I could probably spend a few hours (a much lower METR time!) and find it. It's a whole other ballgame to ask me with no context to come up with an exploit.
It also sounds like that is how Mythos works too. Which makes sense - the Linux kernel is too big to fit in context.
Though another possibility would be that since LLMs generate so much code, the LLM vulnerability discovery would just keep chugging along and we'd simply settle for the same amount of potential vulns, same relative vulnerability-exploit-patch dynamics, though higher in absolute numbers.
It's also noteworthy that Anthropic attributes Mythos' improvement to advances in "coding, reasoning and autonomy", and that the autonomy part seems especially important since they go on to say that trying to develop exploits included adding debug code to projects, running them under a debugger, etc.
When comparing the capabilities of Mythos to previous generation and/or smaller models, it seems it would therefore be useful to distinguish between identifying potential vulnerabilities and actually trying to build exploits for them in agentic fashion. Finding the "needle in a haystack" (potential vulnerability) is one aspect, but the other part is an agentic exploit-writing harness being handed the needle and asked to try to exploit it.
I wonder how much effort Anthropic put into building the harnesses and environments for Mythos to run, modify and debug code? For example, was Mythos set up to be able to build and run a modified BSD in some virtual environment, or did it just take suspect functions and test those in isolation?
It'd be interesting to put the capabilities of Opus 4.6, Mythos, and other models into perspective by comparing them to traditional non-AI static analysis security scanning tools. Anthropic mention that the open source projects they scanned came from the OSS-Fuzz corpus, but as far as I can see they don't say what other tools have, or have not, been used to scan these projects.
It'd also be interesting to know to what extent Mythos was explicitly RL trained to develop exploits (especially since it sounds as if Anthropic have the dataset and environment needed to do this) as opposed to this just being a natural consequence of the model being better. If this was the case then it might be a large part of why they are not releasing it - can't really position yourself as strong on security if you deliberately develop and release a hacking tool!
So what’s Anthropic’s plan here? How long can they withhold releasing Mythos or something Mythos-like? Is it reasonable to think they - or another AI provider - are going to dumb down future models so they’re less dangerous? I personally don’t think that’s the case.
I’m not saying Anthropic should or shouldn’t release Mythos, but it leaves me wondering what’s going to be different in, say, 6 months or even a year when they or another provider releases a model as dangerous as we’re being told Mythos is?
Finding a needle in a haystack is easy if someone hands you the small handful of hay containing the needle up front, and raises their eyebrows at you saying “there might be a needle in this clump of hay”.
Gating access is also a clever marketing move:
Option A: Release it but run out of capacity, everyone is annoyed and moves on. Drives focus back to smaller models.
Option B: A bunch of manufactured hype and putting up velvet ropes around it, saying it’s “too dangerous” to let mere mortals touch it. Press buys it hook, line, and sinker; sidesteps the capacity issues and keeps the hype train going a bit longer.
Seems quite clear we’re seeing “Option B” play out here.
"""
Your task is to study the following directive, research coding agent prompting, research the directive's domain best practices, and finally draft a prompt in markdown format to be run in a loop until the directive is complete.
Concept: Iterative review -- study an issue, enumerate the findings, fix each of the findings, and then repeat, until review finds no issues.
<directive>
Your job is to run a security bug factory that produces remediation packages as described below. Design and apply a methodology based on best practices in exploit development, lean manufacturing, threat modeling, and the scientific method. Use checklists, templates, and your own scripts to improve token efficiency and speed. Use existing tools where possible. Use existing research and bug findings for the target and similar codebases to guide your search. Study the target's development process to understand what kind of harness and tools you need for this work, and what will work in this development environment. A complete remediation package includes a readme documenting the problem and recommendations, runnable PoC with any necessary data files, and proposed patch.
Track your work in TODO.md (tasks identified as necessary), LOG.md (chronological list of tasks completed and lessons), and STATUS.md (concise summary of the current work being done). Never let these get more than a few minutes out of date. At each step ensure the repo file tree would make sense to the next engineer, and if not, reorganize it. Apply iterative review before considering a task complete.
Your task is to run until the first complete remediation package is ready for user review.
Your target is <repo url>.
The prompt will be run as follows, design accordingly. Once the process starts, it is imperative not to interrupt the user until completion or until further progress is not possible. Keep output at each step to a concise summary suitable for a chat message.
``` while output=$(claude -p "$(cat prompt.md)"); do echo "$output"; echo "$output" | grep -q "XDONEDONEX" && break; done ```
</directive>
Draft the prompt into prompt.md, and apply iterative review with additional research steps to ensure it will execute the directive as faithfully as possible.
"""
> “Opus 4.6 is currently far better at identifying and fixing vulnerabilities than at exploiting them.” Our internal evaluations showed that Opus 4.6 generally had a near-0% success rate at autonomous exploit development. But Mythos Preview is in a different league. For example, Opus 4.6 turned the vulnerabilities it had found in Mozilla’s Firefox 147 JavaScript engine—all patched in Firefox 148—into JavaScript shell exploits only two times out of several hundred attempts. We re-ran this experiment as a benchmark for Mythos Preview, which developed working exploits 181 times, and achieved register control on 29 more.
Like I discovered a JavaScript vulnerability using a fridge.
I'm hoping the good results with AI models drive down the prices of traditional tools. Then, we can train open models to integrate with them.
Also, you're not helping your case as a software company if you feed your code to an LLM. Great job making it all public, because it will most likely be used as training data, like it or not.
We prepare security measures based on the perceived effort a bad actor would need to defeat that method, along with considering the harm of the measure being defeated. We don't build Fort Knox for candy bars, it was built for gold bars.
These model advances change the equation. The effort and cost to defeat a measure goes down by an order of magnitude or more.
Things nobody would have considered reasonable to attempt are becoming possible. However, we have 2000s-2020s security measures in place that will not survive the AI models of 2026+. The investment to re-secure things will be massive, and it won't come soon enough.
They did the same stunt with the C compiler. They could've released a tool to let others replicate it, but they didn't.
Really?
> We isolated the vulnerable svc_rpc_gss_validate function, provided architectural context (that it handles network-parsed RPC credentials, that oa_length comes from the packet), and asked eight models to assess it for security vulnerabilities.
No.
Case in point: here they conveniently fail to report the false positive rate, while also saying that if it weren't for Address Sanitizer discarding all the false positives, this system would have been next to useless.