Discussion (181 Comments)
I wouldn't be surprised if NVIDIA picked up this talking point to sell more GPUs.
(Fan of your writing, btw.)
You might well be right; it is not an area I know much about or work in. But I'm a fan of reliable sources for claims. It is far too easy to make general statements on the internet that appear authoritative.
Unfortunately, they fit straight lines to graphs with the y-axis running from 0 to 100% and the x-axis being time, which is not great. They should fit a logistic curve instead.
Seems much like those secretly tobacco industry funded reports about tobacco being safe and such.
You can do a lot better efficiency-wise if you control the source end-to-end though - you already group logically related changes into PRs, so you can save on scanning by asking the LLM to only look over the files you've changed. If you're touching security-relevant code, you can ask it for more per-file effort than the attacker might put into their own scanning. You can even do the big bulk scans an attacker might on a fixed schedule - each attacker has to run their own scan while you only need to run your one scan to find everything they would have. There's a massive cost asymmetry between the "hardening" phase for the defender and the "discovering exploits" phase for the attacker.
Exploitability also isn't binary: even if the attacker is better-resourced than you, they need to find a whole chain of exploits in your system, while you only need to break the weakest link in that chain.
If you boil security down to just a contest of who can burn more tokens, defenders get efficiency advantages only the best-resourced attackers can overcome. On net, public access to mythos-tier models will make software more secure.
[0] https://securitycryptographywhatever.com/2026/03/25/ai-bug-f...
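The per-file effort allocation described above could be sketched roughly as follows (a minimal illustration; the path heuristics and token budgets are made up, and the file list stands in for `git diff --name-only` output):

```python
# Sketch: allocate per-file LLM review effort for a PR, spending more
# tokens on security-relevant paths. Patterns and budgets are illustrative.

SECURITY_HINTS = ("auth", "crypto", "session", "token", "acl", "login")

def review_budget(changed_files, base_tokens=2_000, security_tokens=20_000):
    """Map each changed file to a token budget for LLM review."""
    budgets = {}
    for path in changed_files:
        lowered = path.lower()
        sensitive = any(hint in lowered for hint in SECURITY_HINTS)
        budgets[path] = security_tokens if sensitive else base_tokens
    return budgets

# Example: files as produced by `git diff --name-only main...HEAD`
changed = ["src/ui/button.tsx", "src/auth/session.py", "docs/README.md"]
print(review_budget(changed))
```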
As it is, we're stuck with "yeah it seems this works well for bootstrapping a Next.js UI"...
There are several simultaneous moving targets: the different models available at any point in time, model complexity/capability, the model’s price per token, the number of tokens the model uses for a given query, context-size capabilities and prices, and even the evolution of the codebase. You can’t calculate comparative ROIs of model A today versus model B next year unless these are far more predictable than they currently are.
Chinese AI vendors specifically pointed out that even a few gens ago there was maybe 5-15% more capability to squeeze out via training, but that the cost for this is extremely prohibitive and only US vendors have the capex to have enough compute for both inference and that level of training.
I'd take their word over someone who has a vested interest in pushing Anthropic's latest and greatest.
The real improvements are going to be in tooling and harnessing.
I think the important thing is to avoid over-optimizing your scaffold, not to avoid building one altogether.
I think there is work to be done on scaffolding the models better. This exponential right now reminds me of the exponential in CPU speeds up until, let's say, 2000, where you had game developers building really impressive games on the current generation of hardware by writing really detailed, intricate x86 instruction sequences for exactly what, say, a 486 could do, knowing full well that in two years the Pentium would do it much faster and they wouldn't have needed to. But you need to do it now, because you want to sell your game today; you can't just wait and have everyone be able to do this later. So I do think there is definitely value in squeezing out all of the last little juice you can from the current model.
Everything you can do today will eventually be obsoleted by some future technology, but if you need better results today, you actually have to do the work. If you just drop everything and wait for the singularity, you're just going to unnecessarily cap your potential in the meantime.
Here we go again.
http://www.incompleteideas.net/IncIdeas/BitterLesson.html
Taken to an extreme, the end result is a dark forest. I don't like what that means for entrepreneurship generally.
It does mean that the hoped-for 10x productivity increase from engineers using LLMs is eroded by the increased need for extra time for security.
This take is not theoretical. I am working on this effort currently.
Sorry, how does that work?
This seems wrong, however, as it ignores the arrow of time. The full source code has been scanned and fixed for everything LLMs can find before it hits production; anyone exfiltrating your codebase can only use their models to find holes in what is exposed via production, and only holes that your models somehow failed to find.
I don't think there is any reason to suppose non-nation-state actors will have better models available to them, so it is not a dark forest. Nation states will probably limit their attacks to specific targets, so most companies that secure their codebase with LLMs built for the purpose will be in a significantly more secure position than they are today, and I would think the golden age of criminal hacking is drawing to a close. This assumes companies smart enough to do it, however.
Furthermore, the worry about nation-state attackers still assumes that they will have better models, and I am not sure that is likely either.
Assuming your code is inaccessible isn't good for security. Security reviews are done assuming the source code is available; if you don't provide the source, you'll never score high in the review.
This really is not the case.
You have freedom of methodology.
You can also ask it to enumerate various risks and find proof of existence for each of them.
Certainly our LLM audits are not just a prompt per file - so I have a hard time believing that best in class tools would do this.
And using an LLM to audit your code isn't necessarily a case of turning it into perfect code, it's to keep ahead of the other side also using an LLM. You don't need to outrun the bear, just the other hikers.
Well, you need to harden everything, the attacker only needs to find one or at most a handful of exploits.
Yeah, but it's not like the attacker knows where to look without checking everything, is it?
If you harden and fix 90% of vulns, the attacker may give up when their attempts reach 80% of vulns.
It's the same as it has ever been; you don't need to outrun the bear, you only need to outrun the other runners.
Worse, "attackers no longer break in, they log in", so the supply chain attacks harvesting credentials have been frightening
What accounts are these?
I've seen some people use this, but I cannot imagine anyone thinks this is the best approach.
For example I've had success telling LLMs to scan from application entry points and trace execution, and that seems an extremely obvious thing to do. I can't imagine others in the field don't have much better approaches.
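A toy version of that entry-point tracing, using Python's `ast` module to build a crude call graph (single file, direct calls by name only; the sample source is invented for illustration):

```python
# Sketch: enumerate functions reachable from a named entry point, the kind
# of "start at the entry points and trace execution" structure one might
# feed an LLM auditor. Deliberately simplified.
import ast

SOURCE = '''
def handle_request(req):
    user = authenticate(req)
    return render(user)

def authenticate(req):
    return check_token(req)

def unused_helper():
    pass
'''

def direct_calls(tree):
    """Map each function definition to the names it calls directly."""
    calls = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            names = {c.func.id for c in ast.walk(node)
                     if isinstance(c, ast.Call) and isinstance(c.func, ast.Name)}
            calls[node.name] = names
    return calls

def reachable(calls, entry):
    """Transitively collect function names reachable from `entry`."""
    seen, stack = set(), [entry]
    while stack:
        fn = stack.pop()
        if fn in seen:
            continue
        seen.add(fn)
        stack.extend(calls.get(fn, ()))
    return seen

calls = direct_calls(ast.parse(SOURCE))
print(sorted(reachable(calls, "handle_request")))
```

Dead code like `unused_helper` never shows up in the reachable set, which is exactly the prioritization the trace buys you.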
You cannot get away with „well, no one is going to spend time writing a custom exploit to get us” or „just be faster than the slowest one running away from the bear”.
Primitive? I'd say simple and thorough.
Of course LLMs see a lot more source-assembly pairs than even skilled reverse engineers, so this makes sense. Any area where you can get unlimited training data is one we expect to see top-tier performance from LLMs.
(also, hi Thomas!)
Ha
>for free.
Haha, it is more complicated in reality
Prediction 1. We're going to have cheap "write Photoshop and AutoCAD in Rust as a new program / FOSS" soon. No desktop software will be safe. Everything will be cloned.
Prediction 2. We'll have a million Linux and Chrome and other FOSS variants with completely new codebases.
Prediction 3. People will trivially clone games, change their assets. Modding will have a renaissance like never before.
Prediction 4. To push back, everything will move to thin clients.
Obvious possibilities include:
* More use of software patents, since these apply to underlying ideas, rather than specific implementations.
* Stronger DMCA-like laws which prohibit breaking technical provisions designed to prevent reverse engineering.
Similarly, if the people predicting that humans are going to be required to take ultimate responsibility for the behaviour of software are correct, then it clearly won't be possible for that to be any random human. Instead you'll need legally recognised credentials to be allowed to ship software, similar to the way that doctors or engineers work today.
Of course these specific predictions might be wrong. I think it's fair to say that nobody really knows what might have changed in a year, or where the technical capabilities will end up. But I see a lot of discussions and opinions that assume zero feedback from the broader social context in which the tech exists, which seems like they're likely missing a big part of the picture.
It seems like that is perhaps not the case anymore with the Mythos model?
In other terms, I feel the argument from TFA generally checks out, just on a different level than "more GPU wins". It's one up: "More money wins". That's based on the premise that more capable models will be more expensive, and using more of it will increase the likelihood of finding an exploit, as well as the total cost. What these model providers pay for GPUs vs R&D, or what their profit margin is, I'd consider less central.
But then again, AI didn't change this, if you have more money you can find more exploits: Whether a model looks for them or a human.
Of course it's trivially NOT true that you can defend against all exploits by making your system sufficiently compact and clean, but you can certainly have a big impact on the exploitable surface area.
I think it's a bit bizarre that it's implicitly assumed all codebases are broken enough that, if you attack them hard enough, you'll endlessly find more issues.
Another analogy here is to fuzzing. A fuzzer can walk through all sorts of states of a program, but when it hits a password, it can't really push past that because it needs to search a space that is impossibly huge.
It's all well and good to try to exploit a program, but (as an example) if that program _robustly and very simply_ (the hard part!) says... that it only accepts messages from the network that are signed before it does ANYTHING else, you're going to have a hard time getting it to accept unsigned messages.
Admittedly, a lot of today's surfaces and software were built in a world where you could get away with a lot more laziness compared to this. But I could imagine, for example, a state of the world in which we're much more intentional about what we accept and even bring _into_ our threat environment. Similarly to the shift from network to endpoint security. There are for sure, uh, million systems right now with a threat model wildly larger than it needs to be.
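The "accepts only signed messages before it does ANYTHING else" pattern can be sketched as a MAC check that runs before a single byte of the payload is parsed (the key and wire format here are invented for illustration; a real system would use provisioned keys and a proper protocol):

```python
# Sketch: reject any message whose MAC does not verify, before any parsing.
import hmac
import hashlib

KEY = b"demo-shared-secret"  # in reality: provisioned out of band

def seal(payload: bytes) -> bytes:
    """Prefix the payload with its HMAC-SHA256 tag."""
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()
    return tag + payload

def open_message(wire: bytes) -> bytes:
    """Verify first; only a verified payload reaches the application."""
    tag, payload = wire[:32], wire[32:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):  # constant-time compare
        raise ValueError("unsigned or tampered message rejected")
    return payload

msg = seal(b'{"action": "ping"}')
print(open_message(msg))
```

The fuzzer analogy above applies directly: without the key, an attacker searching for a valid tag is searching a 2^256 space, so the "robust and very simple" gate holds.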
- a very large codebase
- a codebase which is not modularized into cohesive parts
- niche languages or frameworks
- overly 'clever' code
For example from this article:
> Karpathy: Classical software engineering would have you believe that dependencies are good (we’re building pyramids from bricks), but imo this has to be re-evaluated, and it’s why I’ve been so growingly averse to them, preferring to use LLMs to “yoink” functionality when it’s simple enough and possible.
Anyone who's heard of "leftpad" or is a Go programmer ("A little copying is better than a little dependency" is literally a "Go Proverb") knows this.
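The proverb in miniature: when the entire "dependency" is a few lines, copying it in (with a test) beats importing it. A hypothetical left-pad, for instance:

```python
# The whole of "leftpad", yoinked instead of depended upon.
def left_pad(s: str, width: int, fill: str = " ") -> str:
    """Pad `s` on the left with `fill` until it is at least `width` long."""
    if len(fill) != 1:
        raise ValueError("fill must be a single character")
    return s if len(s) >= width else fill * (width - len(s)) + s

print(left_pad("5", 3, "0"))  # "005"
```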
Another recent set of posts to HN had a company close-sourcing their code for security, but "security through obscurity" has been a well-understood fallacy in open source circles for decades.
I dunno about that quoted bit; "Defense in depth" (Or defense via depth) is a good thing, and obscurity is just one of those layers.
"Security through obscurity" is indeed wrong if the obscurity is a large component of the security, but it helps if it is just another layer of defense in the stack.
IOW, harden your system as if it were completely transparent, and only then make it opaque.
The times, as they say, are a-changin’.
Open software is not inherently more secure than closed software, and never has been.
Its relative security value was always derived from circumstantial factors, one of the most important of which was the combination of incentive and ability and willingness of others in the community to spend their time and attention finding and fixing bugs and potential exploits.
Now, that’s been the case for so long that we all implicitly take it for granted, and conclude that open software is generally more secure than closed, and that security through obscurity falls short in comparison.
But this may very well fundamentally change when the cost of navigating the search space of potential exploits, for both the attacker and the defender, is dramatically reduced along the axes of time and attention, and increased along the axis of monetary investment.
It then becomes a game of which side is more willing to pool monetary resources into OSS security analysis – the attackers or the defenders – and I wouldn’t feel comfortable betting on the defenders in that case.
Maybe we could start with the prompts for the code generation models used by developers.
> You don’t get points for being clever
Not sure about this framing, this can easily lead to the wrong conclusions. There is an arms race, yes, and defenders are going to need to spend a lot of GPU hours as a result. But it seems self-evident that the fundamentals of cybersecurity still matter a lot, and you still win by being clever. For the foreseeable future, security posture is still going to be a reflection of human systems. Human systems that are under enormous stress, but are still fundamentally human. You win by getting your security culture in order to produce (and continually reproduce) the most resilient defense that masters both the craft and the human element, not just by abandoning human systems in favor of brute forcing security problems away as your only strategy.
Indeed, domains that are truly security critical will acquire this organizational discipline (what's required is the same type of discipline that the nuclear industry acquires after a meltdown, or that the aviation industry acquires after plane crashes), but it will be a bumpy ride.
This article from exactly 1 year ago is almost prophetic to exactly what's going on right now and the subtle ways in which people are most likely to misunderstand the situation: https://knightcolumbia.org/content/ai-as-normal-technology
I, for the NFL front offices, created a script that exposed an API to fully automate Ticketmaster through the front end, so that the NFL could post tickets on all secondary markets and dynamically price them: if rain was expected on a Sunday, they could charge less. Ticketmaster was slow to develop an API. For legal reasons, Ticketmaster couldn't give us permission without developing the API first, but they told me they would do their best to stop me.
They switched over to PerimeterX which took me 3 days to get past.
Last week someone posted an article here about ChatGPT using Cloudflare Turnstile. [0] First, the article made some mistakes about how it works. Second, I used the [AI company product] and the Chrome DevTools Protocol (CDP) to completely rewrite all the scripts, intercepting them before they were evaluated (the same way I figured out PerimeterX in 3 days), and then recursively took control of all the fingerprinting so that it controls the profile. Then it created an API proxy to expose ChatGPT for free. It required some coaching about the technique, but it did most of the work in 3 hours.
These companies are spending 10s of millions of dollars on these products and considering what OpenAI is boasting about security, they are worthless.
[0] https://news.ycombinator.com/item?id=47566865
> Worryingly, none of the models given a 100M budget showed signs of diminishing returns. “Models continue making progress with increased token budgets across the token budgets tested,” AISI notes.
So, the author infers a durable direct correlation between token spend and attack success. Thus you will need to spend more tokens than your attackers to find your vulnerabilities first.
However it is worth noting that this study was of a 32-step network intrusion, which only one model (Mythos) even was able to complete at all. That’s an incredibly complex task. Is the same true for pointing Mythos at a relatively simple single code library? My intuition is that there is probably a point of diminishing returns, which is closer for simpler tasks.
In this world, popular open source projects will probably see higher aggregate token spend by both defenders and attackers. And thus they might approach the point of diminishing returns faster. If there is one.
I wouldn't use those as excuses to dismiss AI though. Even if this model doesn't break your defences, give it 3 months and see where the next model lands.
For instance, if failing any step locks you out, your probability of success is p^N, which means it’s functionally impossible with enough layers.
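The arithmetic is easy to check:

```python
# Odds of completing an N-step chain when every step must succeed
# with probability p and any failure locks you out: p**N.
def chain_success(p: float, n: int) -> float:
    return p ** n

print(f"{chain_success(0.9, 32):.4f}")  # even 90%-per-step fades over 32 steps
print(f"{chain_success(0.5, 32):.2e}")  # coin-flip steps: ~2.33e-10
```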
It is not that anyone would design a system this way; you'd never deliberately design in a loophole, no matter how many steps it takes to reach it. It is just a benchmark.
It's nuts. If the timing were slightly different, none of this "Cybersecurity" would even be a thing. We'd just have capabilities based, secure general purpose computation.
As soon as there are multiple programs with full authority on your data, "cybersecurity" happens. And internet/web is that to the power of 100.
Because we have tools and techniques that can guarantee the absence of certain behavior in a bounded state space using formal methods (even unbounded at times)
Sure, it's hard to formally verify everything but if you are dealing with something extremely critical why not design it in a way that you can formally verify it?
But yeah, the easy button is to keep throwing more tokens at it till your money runs out.
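A toy of the bounded-state-space idea: exhaustively explore every reachable state of a tiny invented lock protocol and check that the bad state never appears. Real model checkers and provers scale this up enormously; this is only a sketch under made-up transition rules:

```python
# Bounded exhaustive check: the "door open without authentication" state
# must be unreachable under these (invented) transition rules.

# State: (door_open, authenticated). Bad state: door open while not authed.
def step(state, action):
    door_open, authed = state
    if action == "login":
        return (door_open, True)
    if action == "logout":
        return (False, False)      # logging out also closes the door
    if action == "open" and authed:
        return (True, authed)      # door opens only when authenticated
    return state                   # unauthorized "open" is a no-op

def reachable_states(start, actions, depth):
    """All states reachable from `start` within `depth` steps."""
    seen = {start}
    frontier = {start}
    for _ in range(depth):
        frontier = {step(s, a) for s in frontier for a in actions} - seen
        seen |= frontier
    return seen

states = reachable_states((False, False), ["login", "logout", "open"], depth=6)
assert all(not (door and not authed) for door, authed in states)
print(sorted(states))
```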
In fact, security programs built on the idea that you can find and patch every security hole in your codebase were basically busted long before LLMs.
Yeah, it sucks. But you're getting paid, among other things, to put up with some amount of corporate suckiness.
Developers usually need elevated privileges, executing unverified arbitrary code is literally their job. Their machines are not trustworthy, and yet, they often have access to the entire company internal network. So you get a situation where they have both too much privilege (access to resources beyond the scope of their work) and too little (some dev tools being unavailable).
I tend to encourage Firefox over Chromium-flavoured browsers because FF (for me) is the absolute last to dive in with fads and will boneheadedly argue against useful stuff until the cows come home ... Web Serial springs to mind (which should finally be rocking up real soon now).
Oh and they are not sponsored by Google errm ... 8)
I'm old enough to remember having to use telnet to access the www (when it finally rocked up and looked rather like Gopher and WAIS) (via a X.25 PAD) and I have seen the word "unsupported" bandied around way too often since to basically mean "walled garden".
I think that when you end up using the term "unsupported browser" you have lost any possible argument based on reason or common decency.
There is at least a possibility that a codebase can be secured by a (practically) finite number of tokens, until there are no more holes in it, for reasonable amounts of money.
This also reminds me of what I wrote here: https://jerf.org/iri/post/2026/what_value_code_in_ai_era/ There's still value in code tested by the real world, and in an era of "free code" that may become even more true than it is now, rather than the initially-intuitive less valuable. There is no amount of testing you can do that will be equivalent to being in the real world, AI-empowered attackers and all.
I disagree.
The defender must be right every single time. The attacker only has to get lucky and thanks to scale they can do that every day all day in most large organizations.
To use your example, if the odds of the guard being asleep and the vault being unlocked are both 1% we have a 0.0001 chance on any given day. Phew, we're safe...
Except that Google says there are 68,632 bank branch locations in the US alone. That means it will happen roughly 7 times on any given day someplace in America!
Now apply that to the scale of the internet. The attackers can rattle the locks in every single bank in an afternoon for almost zero cost.
The poorly defended ones have something close to 100% odds of being breached, and the well-defended ones have low odds on any given day, but over a long enough timeline it becomes inevitable.
To again use your bank example: if we only have one bank but keep those odds, the event will happen 7 times over about 191 years. Or to restate that number, it is likely to happen at least once every 27 years. You'll have about 25% odds of it happening in any 7-year span.
For any individual target, it becomes unlikely, but also still inevitable.
From an attacker's perspective, this means the game is rigged in their favor. They have many billions of potential targets, and the cost of an attack is close to zero.
From a defender's perspective, it means realizing that even with defense in depth the breach is still going to happen eventually, and that the bigger the company is, the more likely it is.
Cyber is about mitigating risk, not eliminating it.
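The branch-scale arithmetic above checks out:

```python
# Per-day breach odds: guard asleep (1%) AND vault unlocked (1%).
p_day = 0.01 * 0.01
branches = 68_632

# Expected events per day across all US branches.
print(round(p_day * branches, 1))

# For a single branch: chance of at least one event within a 7-year span.
days = 7 * 365
p_7yr = 1 - (1 - p_day) ** days
print(round(p_7yr, 2))
```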
Until the attacker has initial access.
Then the attacker needs to be right every single time.
The time is a cost, but at scale any individual target is a pretty minor investment since it's 90%+ automated. Also, these aren't folks that are otherwise highly employable. The opportunity cost to them is also usually very low.
The last attacker I got into a conversation with was interesting. Turns out, he was a 16 year old from Atlanta GA using a toolkit as an affiliate. He claimed he made ~100k/year and used the money on cars and girls. I felt like he was inflating that number to brag. His alternative probably would have been McDonalds, and as a minor if he got caught it would've been probation most likely. I told him to come to the blue team, we pay better.
Put more simply: to keep your system secure, you need to be fixing vulnerabilities faster than they're being discovered. The token count is irrelevant.
Moreover: this shift is happening because the automated work is outpacing humans for the same outcome. If you could get the same results by hand, they'd count! A sev:crit is a sev:crit is a sev:crit.
1) The number of vulnerabilities surfaced (and fixed?) in a given software is roughly proportional to the amount of attention paid to it.
2) Attention can now be paid in tokens by burning huge amounts of compute (bonus: most commonly on GPUs, just like crypto!)
3) Whoever finds a vulnerability has a valuable asset, though the value differs based on the criticality of the vulnerability itself, and whether you're the attacker or the defender.
More tokens -> more vulns is not a guarantee of course, it's a stochastic process... but so is PoW!
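The stochastic model in point 2 is just a geometric process. Under the (illustrative) assumption that each unit of review independently surfaces a given vuln with probability p, n units surface it with probability 1 - (1 - p)^n:

```python
# Diminishing but nonzero returns from piling on more review "attention".
def p_found(p_per_unit: float, units: int) -> float:
    return 1 - (1 - p_per_unit) ** units

for units in (10, 100, 1000):
    print(units, round(p_found(0.001, units), 3))
```

Like proof-of-work, any single unit is unlikely to hit, but spend enough and success becomes near certain.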
Are these totally previously unknown security holes or are they still generally within the umbrella of our understanding of cybersecurity itself?
If it's the latter, why can't we systematically find and fix them ourselves?
Would it? I’m old school but I’ve never trusted these massive dependency chains.
That’s a nit.
We’re going to have to write more secure software, not just spend more.
Your wall should be made of a small number of bricks you bet your life on.
All the rest goes inside.
Security was always about having more money/resources. Using more tokens is just another measure for the same.
Some previous post, which I cannot verify myself, stated that Mythos is not as powerful as it seems, as the same bugs could be found using much smaller/simpler models, and that the method is the key part.
That's a really big "if". Particularly since so many companies don't even know all of the OSS they are using, and they often use OSS to offload the cost of maintaining it themselves.
My hope is when the dust settles, we see more OSS SAST tools that are much better at detecting vulnerabilities. And even better if they can recommend fixes. OSS developers don't care about a 20 point chained attack across a company network, they just want to secure their one app. And if that app is hardened, perhaps that's the one link of the chain the attackers can't get past.
Companies that market to the EU are going to need to find out real fast.
I want to believe formal methods can help, not because one doesn't have to think about it, but because the time freed from writing code can be spent on thinking on systems, architecture and proofs.
1. A proof mindset is really hard to learn.
2. Writing theorem definitions can be hard, but writing a proof can be even harder. So, if you could write just the definitions, and let an LLM handle all the tactics and steps, you could use more advanced techniques than just a SAT solver.
So I guess LLMs only marginally help with (1), but they could potentially be a big help for (2), especially with more tedious steps. It would also allow one to use first order logic, and not just propositional logic (or dependant types if you're into that).
Imo, cybersecurity looks like formally verified systems now.
You can't spend more tokens to find vulnerabilities if there are no vulnerabilities.
But part of me has been wondering for a while now whether proofs of correctness is the way out of the NVIDIA infinite money glitch. IDK if we're there yet but it's pretty much the only option I can imagine.
The only process that scared me was windowgrid. It kept finding a way back when I killed all the "start with boot" locations I know. Run, runonce, start up apps, etc. Surely it's not in autoexec.bat :)
https://news.ycombinator.com/item?id=47788473
(It's true that formalization can still have bugs in the definition of "secure" and doesn't work for everything, which means defenders will still probably have to allocate some of their token budget to red teaming.)
You can only do this if you have a very clear sense of what your code should be doing. In most codebases I've ever worked with, frankly, no one has any idea.
Red teaming as an approach always has value, but one important characteristic it has is that you can apply red teaming without demanding any changes at all to your code standards, or engineering culture (and maybe even your development processes).
Most companies are working with a horrific sprawl of code, much of it legacy with little ownership. Red teaming, like buying tools and pushing for high coverage, is an attractive strategy to business leaders because it doesn't require them to tackle the hardest problems (development priorities, expertise, institutional knowledge, talent, retention) that factor into application security.
Formal verification is unfortunately hard in the ways that companies who want to think of security as a simple resource allocation problem most likely can't really manage.
I would love to work on projects/with teams that see formal verification as part of their overall correctness and security strategy. And maybe doing things right can be cheaper in the long run, including in terms of token burn. But I'm not sure this strategy will be applicable all that generally; some teams will never get there.
Really depends how consistently the LLMs are putting new novel vulnerabilities back in your production code for the other LLMs to discover.
I predict the software ecosystem will split in two: internal software behind a firewall will become ever cheaper, but anything external-facing will become exponentially more expensive due to hacking concerns.
In the case of crooks (rather than spooks) that often means your security has to be as good as your peers, because crooks will spend their time going with the best gain/effort ratio.
If there is only one bear, you just need to run faster than your friends. If there's a pack of them, you need to start training much harder!
After how many years of "shifting left" and understanding the importance of having security involved in the dev and planning process, the recommendation is now to vibe code with human intuition, review, and then spend a million tokens to "harden"?
I understand that isn't the point of the article and the article does make sense in its other parts. But that last paragraph leaves me scratching my head wondering if the author understands infosec at all?
Better to write good, high-quality, properly architected and tested software in the first place of course.
Edited for typo.
For example, developers should no longer run dev environments on the same machine where they access passwords, messages, and emails — no external package installation on that box at all.
SaaS Password Managers — assume your vault will be stolen from whichever provider is hosting it.
YubiKeys will be more important than ever to airgap root auth credentials.
That would have started a P2 and woken up a senior IR responder anywhere that I’ve worked. Are you sure you’re running a realistic defender environment?
And yet... Wireguard was written by one guy while OpenVPN is written by a big team. One code base is orders of magnitude bigger than the other. Which should I bet LLMs will find more cybersecurity problems with? My vote is on OpenVPN despite it being the less clever and "more money thrown at" solution.
So yes, I do think you get points for being clever, assuming you are competent. If you are clever enough to build a solution that's much smaller/simpler than your competition, you can also get away with spending less on cybersecurity audits (be they LLM tokens or not).
When things are tagged "cybersecurity", compliance/budget/manager/dashboard/education/certification are the usual response...
I don't think it would be an appropriate response for code quality issues, and it would likely escape the hands of the very people who can fix code quality issues, ie. developers.
But I don't really get the hype, we can fix all the vulnerabilities in the world but people are still going to pick up parking-lot-USBs and enter their credentials into phishing sites.
The benchmark might be a good apples-to-apples comparison but it is not showing capability in an absolute sense.
I think we are already here. I wrote something about this, if you are interested: https://go.cbk.ai/security-agents-need-a-thinner-harness
https://imgflip.com/memetemplate/Always-Has-Been
Of course those are attracted to new tools and AI shill institutes like AISI (yes, the UK government is shilling for AI, it understands a proper grift that benefits the elites).
Security "research" is perfect for talkers and people who produce powerpoint graphs that sell their latest tools.
You still can sit down and write secure software, while the "researchers" focus on the same three soft targets (sudo, curl, ffmpeg) over and over again and get $100,000 in tokens and salaries for a bug in a protocol from the 1990s that no one uses. Imagine if this went to the authors instead.
But no, government money MUST go to the talkers and powerpointists. Always.
nothing is better or worse, basically as it's always been.
if you think otherwise, stop ignoring the past.
you are addicted to dopamine. think carefully and take good care of yourself
1) massive companies spending millions of tokens to write+secure their software
2) in the shadows, "elite" software contractors writing bespoke software to fulfill needs for those who can't afford the millions, or fix cracks in (1)
(Oh wait, I think this is what is happening now, anyway, minus the millions of tokens)
What's new?
It was always about spending more money on something.
Team has no capacity? Because the company doesn't invest in the team, doesn't expand it, doesn't focus on it.
We don't have enough experts? Because the company doesn't invest in the team, doesn't raise the salary bar to get new experts, it's not attractive to experts in other companies.
It was always about "spending tokens more than competitors", in every area of IT.
I already see this happening: companies are moving toward AI-generated code (or forking projects into closed source), keeping their code private, AI written pipelines taking care of supply chain security, auditing and developing it primarily with AI.
At that point, for some companies, there's no real need for a community of "experts" anymore.
If we take this at face value, it's not that different than how a great deal of executive teams believe cybersecurity has worked up to today. "If we spend more on our engineering and infosec teams, we are less likely to get compromised".
The only big difference I can see is timescale. If LLMs can find vulnerabilities and exploit them this easily (and I do take that with a grain of salt, because benchmarks are benchmarks), then you may lose your ass in minutes instead of after one dedicated cyber-explorer's monster energy fueled, 7-week traversal of your infrastructure.
I am still far more concerned about social engineering than LLMs finding and exploiting secret back doors in most software.
These mass-produced tokens are just cheaper...
If the attacker and defender are using the same AI model, then (up to some inflection point) whoever spends more finds the most vulnerabilities.
In your embarrassingly reductive binary vulnerability state worldview? Have.
Not saying security will never be dominated by AI like it happened with chess, with maps, with Go, with language. But just braindead money to security pipeline? Skeptical.