Discussion (102 comments), original thread on HackerNews
The catalyst is the shift towards software transparency: both the radically increased adoption of open source and source-available software, and the radically improved capabilities of reversing and decompilation tools. It has been over a decade since any ordinary off-the-shelf closed-source software was meaningfully obscured from serious adversaries.
This has been playing out in slow motion ever since BinDiff: you can't patch software without disclosing vulnerabilities. We've been operating in a state of denial about this, because there was some domain expertise involved in becoming a practitioner for whom patches were transparently vulnerability disclosures. But AIs have vaporized the pretense.
It is now the case that any time something gets merged into mainline Linux, several different organizations are feeding the diffs through LLM prompts aggressively evaluating whether they fix a vulnerability and generating exploit guidance. That will be the case for most major open source projects (nginx, OpenSSL, Postgres, &c) sooner rather than later.
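The diff-triage loop described above can be sketched roughly as follows. This is a toy illustration, not anyone's actual pipeline: the keyword heuristic stands in for the LLM classification step, and the repo path and rev range are placeholders.

```python
import subprocess

# Vocabulary that often shows up in security fixes; a crude stand-in for
# the LLM prompt that would actually evaluate each diff.
SUSPICIOUS = ("overflow", "out-of-bounds", "use-after-free",
              "cve-", "bounds check", "sanitize")

def likely_security_fix(commit_message: str, diff_text: str) -> bool:
    """Flag a commit if its message or diff mentions security-fix
    vocabulary. A real pipeline would send the full diff to an LLM
    instead of keyword matching."""
    haystack = (commit_message + diff_text).lower()
    return any(keyword in haystack for keyword in SUSPICIOUS)

def triage_repo(repo_path: str, rev_range: str = "HEAD~50..HEAD"):
    """Yield commit hashes in rev_range that look like security fixes."""
    revs = subprocess.run(
        ["git", "-C", repo_path, "rev-list", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    for rev in revs:
        show = subprocess.run(
            ["git", "-C", repo_path, "show", rev],
            capture_output=True, text=True, check=True,
        ).stdout
        if likely_security_fix("", show):
            yield rev
```

The point is how cheap the loop is: `git rev-list` plus `git show` plus one classification call per commit, run continuously against any public tree.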
The norms of coordinated disclosure are not calibrated for this environment. They really haven't been for the last decade.
I'm weirdly comfortable with this, because I think coordinated disclosure norms have always been blinkered, based on the unquestioned premise that delaying disclosure for the operational convenience of system administrators is a good thing. There are reasons to question that premise! The delay also keeps information out of the hands of system operators who have options other than applying patches.
Probably goes without saying, but the last line of defense is not deploying your software publicly at all, and instead relying on server-client architectures to do anything. Maybe this will become more common as vulnerabilities get easier to detect and exploit. Of course, it's not always feasible.
It has been annoying seeing my (proguard obfuscated) game client binaries decompiled and published on github many times over the last 11 years. Only the undeployed server code has remained private.
Interestingly I didn't have a problem with adversaries reverse engineering my network protocols until I was updating them less frequently than weekly. LLM assisted adversaries could probably keep up with that now too.
How easy do you think it would be for an LLM to build a decent emulator of the server in question, just by observing what you send and what you get back as a response?
Current coordinated disclosure practices depend on patching and disclosure being separate, but the gap between them seems to be asymptotically approaching zero.
The actual policy responses to it, I couldn't say! I've always believed, even when there was a meaningful gap between patching and disclosing, that coordinated disclosure norms were a bad default.
That’s why Microsoft has been obfuscating its binary builds for at least the last two decades so that even the two builds from the same source would produce very different blobs.
What would the best solution be? And where do you believe the industry is headed (which may very well be something other than the best solution) ?
I can’t think about anything other than improving operations, but given the state of the industry, this seems like a pipe dream.
Day -X: Engineer at Alibaba finds the vuln and tells Apache. Patch is pushed to git while a new release is coordinated.
Day -X + 1: A black hat sees the commits fixing the bug. Attacks start happening.
Day 0: Memes start circulating in Minecraft communities of people crashing servers. Some logs are shared on Twitter, especially in China, of people getting pwned.
Day 0 + ~4 hours: My friend DMs me a meme on Twitter. I look up to find the CVE. Doesn't exist. My friend and I reproduce the exploit and write up a blog post about it. (We name it Log4Shell to differentiate it from a different, older log4j RCE vuln)
Day ~1: Media starts picking it up. Apache is forced to release patches faster in response. CVE is actually published to properly allow security scanners to identify it.
Today: AI makes this happen faster and more consistently. Patches probably should be kept private until a coordinated disclosure happens post-testing and CVE being published?
Hard to say what the right move is, but this is gonna be happening a lot over the next 1-3 years. Lots of companies are going to be getting cooked until AI helps us patch faster than attackers can exploit these fresh 0-days.
people were already diffing kernel commits and figuring out which ones were security fixes long before llms. if a patch lands publicly, the race has basically already started.
also not sure shorter embargoes really help. the orgs that can patch in hours are already fine. everyone else still takes days or weeks.
if anything, cheaper exploit generation probably makes coordinated disclosure more important, not less.
With skill, and usually not consistently and systematically. With AI, anyone can do this to any software.
> not sure shorter embargoes really help
Why 90 days versus 2 years? The author is arguing the factors that set that balance have shifted, given the frequency of simultaneous discovery. The embargo window isn’t an actual window, just an illusion, if the exploit is going to be found by several people outside the embargo anyway.
> cheaper exploit generation probably makes coordinated disclosure more important
I agree. But it also makes it less viable. If script kiddies can find and exploit zero days, the capacity to co-ordinate breaks down.
There was always a guild ethic that drove white-hat culture. If the guild is broken, the ethic has nothing to stand on.
How do you know? If the people who like to crow about vulnerabilities aren't doing it, it doesn't mean that the people who are actually in a position to exploit them systematically and effectively aren't doing it.
Those embargoes have always been dangerous, because they create a false sense of security. But, as you point out...
> With AI, anyone can do this to any software.
Yep. Even if it hadn't been true before, it's clear that now you just have to assume that everybody relevant will immediately recognize the security impact of any patch that gets published. That includes both bugs fixed and bugs introduced.
... and as the AI gets better, you're going to have to assume that you don't even have to publish a patch. Or source code. Within way less time than it's going to take people to admit it and adjust, any vulnerability in any software available for inspection is going to be instant public knowledge. Or at least public among anybody who matters.
Shouldn't this naturally lead to a state where all (new) code is vulnerability-free? If AI vulnerability detection friction becomes low enough it'll become common/forced practice to pre-scan code.
We know because we can see the effect on the average rate of vulnerability discovery and exploitation, and it's definitely going up very fast. Until recently, vulnerabilities were relatively hard to find, and finding them was done by a very restricted group of people worldwide, which made them quite valuable. Not any more.
I would like to see actual evidence of this, not.. vibes
I mean, this reeks of "Anyone is a Principal developer now" when the truth is there is still work to do.
I haven't seen this kind of thing and I get the impression, despite all the hype, that this will be a frequent phenomenon now thanks to LLMs.
So it's not surprising Dirtyfrag was disclosed by a fix in the Linux kernel. [2]
[1] https://www.zdnet.com/article/torvalds-criticises-the-securi...
[2] https://afflicted.sh/blog/posts/copy-fail-2.html
https://news.ycombinator.com/item?id=47921829
Many of today's workloads are containerized and run on roughly ephemeral nodes that can be swapped out easily; K8s version upgrades more or less force this. We tend to run more and more off-the-shelf hardware and worry less about individual node failures now.
In-memory updates are also not magic, and can be limited, since they require data structure semantics to stay essentially unchanged, and they can create their own class of issues/bugs, including security ones.
While I'm sure there are still use cases that dictate this type of update, the need is a lot smaller than 15 years ago, so I doubt the patent expiry will do much to the ecosystem.
The US is at war. Much of the world is at war at the cyber attack level right now. The US, the EU, most of the Middle East, Israel, Russia... Major services have been attacked and have gone down for days at a time - Ubuntu, Github, Let's Encrypt, Stryker. Entire hospital systems have had to partially shut down.
Now, in the middle of this, AI has made attacks much faster to generate. Faster than the defensive side can respond. Zero-day attacks used to be rare. Now they're normal.
It's going to get worse before it gets better. Maybe much worse.
How is it going to get better?
One of the big challenges with cybersecurity is that attackers only need to find one exploit, while defenders need to stop everything. When you have a large surface area and limited resources, it's much easier to be the side that only has to succeed once. AI eliminates the limited resources problem.
...so if we assume a halting oracle?
In time, most of the bugs AI can find will be fixed, and things will calm down. Some bugs will be left, but they will be too complex to find and weaponise (or only rarely).
In short, attackers have the advantage for a brief time now, but ultimately defenders will win. I guess this "fight" might be over before the end of the year.
Right now all that stuff is optional, so most companies don't do it, which makes more security holes and it takes longer to patch.
However, half of the vulnerabilities are logic errors in terms of what I would call RBAC enforcement, incorrect access permissions, and so on. Rust won't help at all with any of these.
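To make the point concrete, here is a toy illustration of that kind of access-control (IDOR-style) logic error. All names and data here are hypothetical; the point is that the bug is a missing ownership check, which no memory-safe language or type system catches for you.

```python
# Hypothetical document store: maps document id -> owner and contents.
DOCUMENTS = {
    1: {"owner": "alice", "body": "alice's tax return"},
    2: {"owner": "bob", "body": "bob's medical records"},
}

def get_document_vulnerable(user: str, doc_id: int) -> str:
    # BUG: any authenticated user can read any document by guessing its id.
    # This compiles and runs fine; the flaw is purely in the logic.
    return DOCUMENTS[doc_id]["body"]

def get_document_fixed(user: str, doc_id: int) -> str:
    doc = DOCUMENTS[doc_id]
    # The fix is a one-line authorization check, invisible to the compiler.
    if doc["owner"] != user:
        raise PermissionError("not your document")
    return doc["body"]
```

Rust (or any memory-safe language) would happily accept the vulnerable version; only domain knowledge about who may read what tells you the check is missing.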
Security researchers should report their findings to a committee that includes some big companies (IBM and Oracle seem like trustworthy choices here, but ideally we should find a way to get Microsoft included). Those companies would apply the security patches and distribute binary builds of Linux to their customers. Users fortunate enough to have a business relationship with those companies would be protected immediately. The source would still be published after 90 days for educational purposes and for anyone who doesn't appreciate the security benefits of this approach.
"But even if you could convince people to collaborate like this for the greater good, the GPL makes it legally impossible", you say. Ah, but the GPL only says you have to make the source available for a minimal monetary cost, it doesn't impose a time limit. Traditionally, responding to source code requests with a snail-mailed CD is good enough. No judge in the US is going to rule that a short administrative delay in sending out those CDs - in the name of everyone's security, after all, and 90 days is nothing to the judicial system - violates a nebulous licensing agreement from a different era.
Soon, there will be no such thing as a safe way to disclose a vulnerability in an open source project. Centralized SaaS will have a major security advantage here.
Edit: Because an RCE in an open-source dependency means you are just as vulnerable when the security patch lands? I don't see the controversy.
https://blog.mozilla.org/en/privacy-security/ai-security-zer...
Why?
> Additionally, having AI evaluate each commit as it passes is increasingly cheap and effective
This is the key. With AI, the “people won't notice, with so many changes going past” assumption fails.
I agree it is not much additional evidence! If someone wanted to try running the same test on a series of N commits from that list including this one I'd be very curious to see the answer!
for those (like me) who hadn't seen it before.
90d seems long too though.
Think ultimately the big AI houses will need to help the core internet infra guys. Running the latest and greatest AI over stuff like nginx and friends makes sense for us all collectively, I think.
We should be able to turn around a bug report to a patched product ready for QA testing in 1 hour. Standardize/open source it, have the whole software supply chain use it (ex. Linux kernel -> distros -> products that use distros -> users). With AI there's no reason we can't do this, we're just slow.
Imo we are going to have to rely on more layers of security. Systems that are designed to be secure even in the presence of individual vulnerabilities. This has already been happening for a while on mobile platforms and game consoles. Even physical hardware designed to keep particular secrets /keys even from the kernel.
I've been dealing with a bunch of AI-generated (or at least -assisted) vulnerability reports lately. In many cases the reports include proposed patches to fix the issues.
It's been..... interesting. In many cases, the analysis provided in the report has been accurate and helpful. In some cases, the proposed patches have also been good, and we've accepted them with minimal or no changes.
In other cases, despite finding a valid issue, and even providing a good analysis of the problem, the AI tool's suggested patch has been, quite simply, wrong.
Careful review from somebody who really _understands_ the code -- and the wider context in which it is operating -- is still absolutely necessary. That's not always going to happen in an hour.
This is an important facet of the problem space: security risks turning into an arms race for who wants to spend more tokens.
That's when firewalls were widely deployed to provide some layer of protection.
So you can ask yourself, what is the (possibly metaphorical) firewall in the software you depend on?
Is there any way you can decrease attack surface, separate out the most important data in extra-secure (and thus less accessible) systems?
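In that firewall spirit, a minimal default-deny sketch with nftables; the exposed port and the drop policy are assumptions about the workload, not a recommendation for any particular stack:

```shell
# Hypothetical default-deny inbound policy: expose only the one port
# the service actually needs, drop everything else.
nft add table inet filter
nft add chain inet filter input \
    '{ type filter hook input priority 0; policy drop; }'
nft add rule inet filter input ct state established,related accept
nft add rule inet filter input iif lo accept
nft add rule inet filter input tcp dport 443 accept
```

Even when the application behind port 443 has an unpatched bug, every other service on the host is simply unreachable, which is exactly the attack-surface reduction being asked about.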
There will be much wailing and gnashing of teeth around this, because a lot of tech types really resent having to update constantly, but I don't think people will have a choice. If you have a complicated stack where major or even minor version updates are a huge hassle, I'd start working now to try and clear out the cruft and grease those wheels.
Debian continuously issues security updates for stable versions, ingestable with automatic updates. “Stable” doesn’t mean that vulnerabilities aren’t getting fixed.
The argument that could be made is that keeping up with getting vulnerabilities fixed might become such a high workload that fewer releases can be maintained in parallel, and therefore the lifetime and/or overlap of maintained releases would have to be reduced. But the argument for abandoning stable releases altogether doesn’t seem cogent.
It goes both ways: Stable code that only receives security updates becomes less vulnerable over time, as the likelihood of new vulnerabilities being introduced is comparatively low. From that point of view, stable software actually has a leg up over continuous (“eternal beta” in the worst case) functional updates.
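For reference, the automatic security-update setup mentioned above is standard Debian tooling; this is a minimal sketch (the exact origins filtering lives in `50unattended-upgrades` and may vary by release):

```shell
# Install Debian's unattended-upgrades and enable the daily apt cycle.
apt-get install -y unattended-upgrades

# /etc/apt/apt.conf.d/20auto-upgrades
cat >/etc/apt/apt.conf.d/20auto-upgrades <<'EOF'
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
EOF
```

By default this pulls only from the security suite, which is what makes "stable plus automatic security updates" a low-risk combination.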
I hate software that forces you to take new features as a condition of obtaining bug and security fixes. We need to keep old "stable" builds around for longer and maintain them better. I know, I know, it is really upsetting to developers to have to backport things to old versions--they wish that all they had to work on was the current branch. But that just causes guys like me to never upgrade because the downside of upgrading (new features) is worse than the upside (security fixes).
It may actually be the opposite.
Debian's steady and professional approach to shipping security patches with very little to no functional difference actually enables us to consider and work on automated, autonomous weekly (or faster) patching of the entire fleet. And once that's in place and trusted, emergency rollouts become very possible and easy.
We have other projects that "move fast and break things" and ship whatever they want in whatever versions they want and those will require constant attention to ship any update for a security topic. These projects require constant human attention to work through their shenanigans to keep them up to date.
> CVE-2026-32105 xrdp
which I see has a fix in sid but not in bookworm
I kind of get your point, but they responded pretty quickly here.
We've just kept building more complex things with more exposure with no recognition that the day of reckoning was coming. And now we are in an untenable situation. With governments spending billions on AI with the big providers it's likely they've found many of these already.
Not so daunting for me having come of age when compiling a kernel specific to a hardware platform was essential.
Custom software that does not fit the usual patterns is not foolproof, but it won't be an obvious target.
Monocultures with all their eggs in one basket are even less secure than truly diverse ecosystems though.
Turns out it did.
Thus we should probably start treating our thinking model of computing as a Dark Forest, not a friendly community. That mitigates these risks to some degree.
The 3rd one is what I practice when giving companies time to fix their issue. Note, I haven't reported anything to FOSS projects, but to several companies I found exploits in. I give them 5 days. If they don't respond at all in the first day, I deduct 1 day - apparently they're either incompetent or don't care. After the 5 days have passed, I make it public. So far they've all fixed the issue on the 3rd or 4th day.
If I were to report something to a FOSS project, I'd give them a bit more, say 8-9 days. Enough time for everyone to wake up, review the vuln, patch it and ship it. Enough time for all the downstream projects to also ship the patch.
90 days is ridiculous, especially for companies. If I report something on Friday 23:30 and they reply Monday 15:00 - what were they doing during the weekend? Did they forget their software is used 24/7? I had one company complain quite a bit, threatening to sue. When they realized there was no one to sue (me being anonymous with my report), they fixed it in less than a day.
Bottom line - if you're a company offering a product or service, you should have a security team 24/7.
If you're a FOSS project - either alert your users to stop using your software or disable the service yourself, if you can.
If it's an extremely important life-or-death service you can't shut off - then fix it quickly. What are you doing with life-or-death stuff when you can't react quickly enough?
Fuck the 90 days standard - it's what companies want us to do because it's easier on them. If security hasn't been your top priority, you have a few days to make it your top priority.
With AI, that makes even more sense now. Bugs won't be able to stay hidden for months. Especially bugs I've reported like IDORs or SQL injections - things everyone tries first.
(and I love Linux, but getting an "Oh noes!" from Anubis at kernel.org because I don't have cookies enabled (I do??) really makes me not want to report anything to the Linux kernel in particular. If I ever did find something, I'd just immediately post it as a HN comment or something like that)
It depends on the kind of vulnerability, but sometimes in order to fix a problem, you need to do an enormous amount of software engineering. Which needs to be done to a very high standard, because the expectation is that people will push security patches more or less immediately to production.
Of course, this only works if no one else is likely to discover the vulnerability in the meantime!
Why do we need to put up with excuses? If a company has lots of complicated code that would need enormous amount of time to fix, it's on them. They decided to release this code into the wild.
If I publish the vuln publicly, the users would have the option to stop using the software/service until it's patched. If a customer is using a service without caring about security, it's on them. I want to protect the customers who would monitor the news for such vulns and protect themselves.
Not just for vulnerabilities: having nice agents/skills/etc.md definitions would encourage new devs to contribute, instead of an overworked maintainer repeating the same thing for the nth time.