
Discussion (60 Comments)
I guess this is the crux of the debate. All the claims are comparing models that are available freely with a model that is available only to limited customers (Mythos). The problem here is with the phrase "better model". Better how? Is it trained specifically on cybersecurity? Is it simply a large model with a higher token/thinking budget? Is it a better harness/scaffold? Is it simply a better prompt?
I don't doubt that some models are stronger than others (a Gemini Pro or a Claude Opus has more parameters and a larger context window, and was probably trained for longer and on more data than its smaller counterpart, Flash or Sonnet respectively).
Unless we know the exact experimental setup (which in this case is impossible because Mythos is completely closed off and not even accessible via API), all of this is hand wavy. Anthropic is definitely not going to reveal their setup because whether or not there is any secret sauce, there is more value to letting people's imaginations fly and the marketing machine work. Anthropic must be jumping with joy at all the free publicity they are getting.
I guess we'll never learn.
He also transfers the logic of their claims to the actual real world. You can say that model cards are marketing garbage, but then you have to prove that experienced programmers are not significantly better at security.
Out of curiosity, are you one of the people who has access to the model? If so, could you write about your experimental setup in more detail?
This does not match my experience.
Rumors say it has 10 trillion parameters vs. 1 trillion.
It's restricted because it's genuinely good at finding vulnerabilities, and employees felt that it's not a good idea to give this capability to everyone without letting defenders front-run.
That's it. That's all there is to it. It is not some grand marketing play.
I'm not that old, but I've been here long enough to remember when GPT-3 was considered too dangerous to release. Now you have models 10x as good and 1/10th the size that run on 8 GB of VRAM.
It's a possibility, but it doesn't eliminate the possibility that it's hype. If these claims were indeed serious, they would submit it for independent analysis somewhere.
This isn't some crazy process. Defense contractors are required to submit their systems (secret sauce and all) for operational test and evaluation before they're fielded.
They have. Forty different companies have all committed resources to patching their systems based on vulnerabilities found by Mythos. One of them, Google, is a frontier AI lab that pointedly did not say that their own models have found similar vulnerabilities.
> Defense contractors are required to submit their systems (secret sauce and all) for operational test and evaluation before they're fielded.
Does this look something like having 40 separate companies look at the outputs of the system, deciding that it’s real and they should do something about it, and committing resources to it?
At some point, “cynicism” is another word for “lalala can’t hear you”.
We don't yet know if Mythos was a level shift in the capability/cost frontier, or a continued extension of the same logarithmic capability/cost curve.
AI companies routinely claim that something is too dangerous to release (I think GPT-2 was the first case) for marketing reasons. There are at least 10 documented high profile cases.
They keep it secret because they now sell to the MIC with China and North Korea bullshit stories as well as to companies who are invested in the AI hype themselves.
And with GPT-2 the worry was mass emails that were much better, more detailed and personal, social media campaigns, etc.
How many bots are deployed on X today, influencing democracy around the globe?
It's fair to say it had an impact, and LLMs still do.
The platonic ideal of how to dismiss any argument by anyone about anything.
Unless you are an employee at Anthropic (and shouldn't be talking about any of this at all), there's no way to know what the model's capabilities are.
In conclusion - Having a lot of tokens helps! Having a better model also helps. Having both helps a lot. Having very intelligent humans + a lot of tokens + the best frontier models will help the most (emphasis on the intelligent humans).
Adding the words “by Claude” to it doesn’t materially change it. One could also pay a few humans to do the same thing. People have done that for decades.
A good security expert earns how much per year? And that person works 8/5.
Now you can just throw money at it.
The CIA and co. surely pay more than $20k for a zero day (that's what the Anthropic red team stated as the cost of a complex exploit).
If someone builds a framework around this, you can literally copy and paste it, throw money at it, and scale it. That is not possible with a human.
It takes humans a very long time to learn how to code/find bugs. You just can't take any human and have them do it in a reasonable amount of time with a reasonable amount of money.
Claude is effectively automation: once you have the hardware, you can run as many copies of the model as you want. Factories can build hardware far faster than new people can be trained.
It's weird to see a denial of the industrial revolution on HN.
I’m not denying that LLMs can be used to improve security research, or suggesting that their use is wrong or anything like that.
Humans have used software to research security for a long time. AI driven SAST is clearly going to help improve productivity.
Humans have burned stuff for a very long time; it's when we started burning coal at industrial scale that the global environmental impacts started stacking up to the point of considerable damage.
- What if at a certain level of capability you're essentially bug-free? I'm somewhat skeptical that this could be the case in a strong sense, because even if you formally prove certain properties, security often crucially depends on the threat model (e.g. side-channel attacks, constant-time code, etc.), but maybe it becomes less of a problem in practice? (See the sketch after this list.)
- What if past a certain capability threshold weaker models can substitute for stronger ones if you're willing to burn tokens? To give an example with coding, GPT-3 couldn't code at all, so I'd rather have X tokens with, say, GPT 5.4 than 100X tokens with GPT-3. But would I rather have X tokens with GPT 5.4 or 100X tokens with GPT 5.2? That's a bit murkier, and I could see that you could have some kind of indifference curve.
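On the side-channel point in the first bullet: a minimal, purely illustrative sketch (the function names are made up) of how two checks can be functionally identical, and even provably correct, yet differ under a timing threat model:

    import hmac

    def check_token_naive(supplied: str, secret: str) -> bool:
        # Functionally correct, but == can short-circuit on the first
        # mismatching character, so runtime leaks how long the matching
        # prefix is -- a classic timing side channel.
        return supplied == secret

    def check_token_constant_time(supplied: str, secret: str) -> bool:
        # Same input/output behaviour, but the comparison takes time
        # independent of where the mismatch occurs.
        return hmac.compare_digest(supplied.encode(), secret.encode())

Both functions satisfy the same functional specification; only the threat model separates them, which is why a formal proof of the spec alone doesn't settle the security question.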
I would say that most software is going to have few easily exploitable bugs. The presence of such bugs will immediately cost more than having them discovered and fixed would.
Other bugs, those that do not lead to easy pwning of a system, circumventing billing, etc, may linger as much as they currently do.
Also, I find myself thinking more and more that the ability to pay for tokens is becoming crucial. And it's unfair. If you don't have money, you don't have access. Somehow, a worsening of class conflicts. If you know what I mean.
If you spend months shipping slop because “models will get better and tomorrow’s models can fix today’s slop”, what happens when they not only do not get better but actually get worse, and you are left with a bunch of slop you don’t understand and problem-solving muscles that have gotten weak?
The defender also not only has to discover issues but get fixes deployed. Installing patches takes time, and once a patch is available, the attacker can use it to reverse engineer the exploit and attack unpatched systems. This is happening in a matter of hours these days, and AI can accelerate it.
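For the patch-to-exploit point, a minimal source-level sketch of the idea (file names are hypothetical; real 1-day work is usually done on binaries with dedicated diffing tools): the released patch tells you where to look.

    import difflib

    def changed_hunks(old_path: str, new_path: str) -> list[str]:
        # Unified diff between the pre-patch and post-patch source.
        # The changed hunks are the natural starting points for
        # reconstructing what the patch actually fixed.
        with open(old_path) as f_old, open(new_path) as f_new:
            old_lines = f_old.readlines()
            new_lines = f_new.readlines()
        return list(difflib.unified_diff(
            old_lines, new_lines, fromfile=old_path, tofile=new_path))

    # Hypothetical usage: compare the vulnerable and patched versions.
    # for line in changed_hunks("parser_v1.c", "parser_v2.c"):
    #     print(line, end="")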
It is also entirely possible that the defender will never create patches or users will never deploy patches to systems because it is not economically viable. Things like cheap IoT sensors can have vulnerabilities that don't get addressed because there is no profit in spending the tokens to find and fix flaws. Even if they were fixed, users might not know about patches or care to take the time to deploy them because they don't see it worth their time.
Yes, there are many major systems that do have the resources to do reviews and fix problems and deploy patches. But there is an enormous installed base of code that is going to be vulnerable for a long time.
With LLMs even the halting problem is just the question of paying for pro subscription!
Interestingly enough, I was thinking of writing an article about how cybersecurity (both access models and operational assumptions) can be modeled as a proof (NOT proof of work) system. By that I mean there is an abstract model with a set of assumptions (policies, identities, invariants, configurations and implementation constraints) from which authorization decisions are derived.
A model is secure if no unauthorized action is derivable.
A system is correct if its implementation conforms to the model's assumptions.
A security model can be analyzed operationally by how likely its assumptions are to hold in practice.
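As a toy sketch of that framing (the names and policy are invented for illustration): the "model" is a set of assumptions from which authorization decisions are derived, and "secure" means no unauthorized action is derivable from them.

    # Hypothetical assumptions: role grants plus role memberships.
    grants = {("admin", "delete_db"), ("admin", "read_logs"), ("dev", "read_logs")}
    memberships = {"alice": {"admin"}, "bob": {"dev"}}

    def derivable(user: str, action: str) -> bool:
        # An authorization decision derived from the assumptions above.
        return any((role, action) in grants
                   for role in memberships.get(user, set()))

    def model_is_secure(unauthorized: set) -> bool:
        # Secure in the sense above: no unauthorized action is derivable.
        return not any(derivable(user, action) for user, action in unauthorized)

    # e.g. the policy intends that bob can never delete the database:
    print(model_is_secure({("bob", "delete_db")}))  # True under these assumptions

Correctness is then whether the implementation actually conforms to these assumptions, and the operational question is how likely the assumptions (the grants and memberships) are to hold in practice.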
If anyone has access to the mythical Mythos, we'll see the contact with reality.
It's not proof of work, but proof of financial capacity.
The big companies are turning the access to high-quality token generators (through their service) into means of production. We're all going direct to Utopia, we're all going direct the other way.
This continuous rush is not healthy. npm updates, replies to articles that barely made HN 12 hours ago, anything like that. It's not healthy.
Slow down.
It's not just PoW at inference. It's PoW of inference + training.
So the bigger models hallucinate better, casually hitting more real problems?