Discussion (94 Comments)
Which illustrates pretty well something that's lost when relying heavily on LLMs to do work for you: exploration.
I find that doing vulnerability research using AI really hinders my creativity. When your workflow consists of asking questions and getting answers immediately, you don't get to see what's nearby. It's like a genie - you get exactly what you asked for and nothing more.
The researcher who discovered Copy Fail relied heavily on AI after noticing something fishy. If he had had to manually wade through lots of code by himself, he would have had many more chances to spot these twin bugs.
At the same time, I'm pretty sure that with slightly less directed prompting, a frontier LLM would have found these bugs for him too.
It's a very unusual case of negative synergy, where working together hurt performance.
The wrong thing got fixed for copy.fail, because people jumped to blame AF_ALG.
[ed.: yes it's the same authencesn issue. https://github.com/V4bel/dirtyfrag/blob/892d9a31d391b7f0fccb... it doesn't say authencesn in the code, only in a comment, but nonetheless, same issue.]
[ed.2: the RxRPC issue is separate, this is about the ESP one]
The RxRPC one is definitely a different root cause (although caused by a very similar mistake).
For the ESP one it's a bit harder to tell. I don't think the wrong thing was fixed, just that there was a very similar bug in almost the same spot. Could be wrong about that though.
It's absolutely the same issue in authencesn/ESP. There's another one in RxRPC that is AIUI completely unrelated.
Is there a counterfactual where you would say it explored well enough, besides both vulnerabilities being published as one?
I bet that with a slightly looser prompt/harness, the LLM could have found these twin bugs too.
Yet at the same time, I also think that if the human researcher had manually scanned the code, he'd have noticed these bugs too.
FWIW I do think LLMs are great tools for finding vulnerabilities in general. Just that they were visibly not optimally applied in this case.
I think LLMs are great for vulnerability discovery, but you shouldn't skimp on the legwork of understanding what you actually just found.
> This finding was AI-assisted, but began with an insight from Theori researcher Taeyang Lee, who was studying how the Linux crypto subsystem interacts with page-cache-backed data.
https://xint.io/blog/copy-fail-linux-distributions
That's why it's very important to just step out and use the saved time to go for a walk, to a park, sit on a bench, listen to birds, close your eyes and zoom out.
The state we are in is actually brilliant.
If there's a root cronjob that runs a world readable binary, you could modify it in the page cache and exploit it that way.
Modifying the page cache is a really strong primitive with countless ways to exploit it.
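To make the cronjob scenario concrete, here's a rough sketch of how an attacker (or defender) might hunt for such targets. The `scan_crontabs` helper and the exact field handling are my own illustration, not from the thread or the writeup; it assumes the standard system crontab format where field 6 is the user and field 7 the command.

```shell
# Hypothetical illustration: list commands that system cron entries run as
# root, then flag the world-readable ones. Any user can fault such a binary
# into the page cache, and root will later execute whatever the cache holds.
scan_crontabs() {
    for f in "$@"; do
        [ -r "$f" ] || continue
        # System crontab format: field 6 is the user, field 7 the command.
        awk '!/^[[:space:]]*#/ && NF >= 7 && $6 == "root" { print $7 }' "$f"
    done | sort -u
}

scan_crontabs /etc/crontab /etc/cron.d/* | while read -r cmd; do
    # -perm -004: the other-read bit is set, i.e. the file is world-readable.
    [ -f "$cmd" ] && find "$cmd" -prune -perm -004 -print
done
```

This deliberately ignores user crontabs, `@reboot` syntax, and commands with arguments; it's only meant to show why "world-readable binary run by root" plus a page-cache write primitive is game over.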
link: https://github.com/V4bel/dirtyfrag
detailed writeup: https://github.com/V4bel/dirtyfrag/blob/master/assets/write-...
importantly:
"Copy Fail was the motivation for starting this research. In particular, xfrm-ESP Page-Cache Write in the Dirty Frag vulnerability chain shares the same sink as Copy Fail. However, it is triggered regardless of whether the algif_aead module is available. In other words, even on systems where the publicly known Copy Fail mitigation (algif_aead blacklist) is applied, your Linux is still vulnerable to Dirty Frag."
mitigation (i have not tested or verified!):
"Because the responsible disclosure schedule and the embargo have been broken, no patch exists for any distribution. Use the following command to remove the modules in which the vulnerabilities occur."
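The command itself didn't make it into this thread, so here is a hedged sketch of the usual way to unload and blacklist kernel modules; the module list (esp4, esp6, rxrpc, algif_aead) is taken from the surrounding discussion, and this is my assumption of a typical mitigation, not the researchers' exact command.

```shell
# Hedged sketch -- NOT the exact command from the writeup. Generate a
# modprobe blacklist for the modules named in the discussion, so they
# cannot be loaded (or auto-loaded) again.
mods="esp4 esp6 rxrpc algif_aead"
conf=$(for m in $mods; do
    # "blacklist" stops alias-based auto-loading; "install ... /bin/false"
    # also defeats explicit modprobe requests.
    printf 'blacklist %s\ninstall %s /bin/false\n' "$m" "$m"
done)
printf '%s\n' "$conf"
# As root, one would then do roughly:
#   modprobe -r $mods
#   printf '%s\n' "$conf" > /etc/modprobe.d/dirtyfrag-mitigation.conf
# Note: if these are built into the kernel rather than as modules
# (see the Ubuntu complaint below), they cannot be removed this way.
```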
Conversation around the mitigation suggests you need a reboot, or to run this after the above on already-exploited machines.

And if a machine is already exploited, it's too late to do just that. You need to rebuild the whole disk image, because anything on it could be compromised.
this is more targeted at the people who run the PoC to see if their machine is vulnerable.
just transcribing some relevant stuff from https://github.com/V4bel/dirtyfrag/issues/1 so that people visiting this thread don't need to poke around a bunch of different places.
authencesn didn't get fixed. Now we've got the results of that: it turns out you can reach the same (I believe) out-of-bounds write through plain network sockets.
I wish I thought of that, but I didn't.
[ed.: I'm referring to the through-ESP issue. The RxRPC one is AIUI completely unrelated.]
This feels like the practice of Linux distros back in 1999 to ship with dozens of network services by default. Except it's not 1999 anymore.
We could also wonder why XZ was linked to SSH... But only on systemd-enabled distros (which is a lot of them).
Just... Why?
And then make sure to appeal to incompetence instead of malice, and say nonsense like "Sure, it only factually affects systemd distros, but this is totally not related to systemd". All I saw, though, was a systemd backdoor (sorry, exploit).
Now regarding copy.fail, which just happened: not all maintainers are irresponsible. And some have, rightfully, bragged that the security measures they preemptively took in their distros made them not vulnerable.
But yup, I agree it's madness. Just why. And Ubuntu is a really bad offender: it's as if they did a "yes | .." pipe to configure, building every single module directly into the kernel.
"We take security seriously, look we've got the IPsec backdoor (sorry, exploit) modules directly in the kernel". "There's 'sec' in 'IPsec', so we're backdoored (sorry, secure)".
- esp4 (kernel config "CONFIG_INET_ESP")
- esp6 (kernel config "CONFIG_INET6_ESP")
- rxrpc (kernel config "CONFIG_AF_RXRPC")
Is this correct?
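A quick way to see where you stand on a given box (my own sketch, not from the thread) is to check whether the implicated modules are currently loaded; remember that if a distro built them into the kernel rather than as modules, they won't show up here at all.

```shell
# Check which of the implicated modules are currently loaded.
# Falls back to "not loaded" if /proc/modules is unavailable.
for m in esp4 esp6 rxrpc; do
    if grep -q "^$m " /proc/modules 2>/dev/null; then
        echo "$m: loaded"
    else
        echo "$m: not loaded"
    fi
done
```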
In fact, given the official public APIs, Google could replace the Linux kernel with a BSD and userspace wouldn't notice, other than on rooted devices and for the OEMs themselves baking their Android distros.
Some folks, like the termux rebels, occasionally find out there is a sheriff in town.
> As documented in the Android N behavioral changes, to protect Android users and apps from unforeseen crashes, Android N will restrict which libraries your C/C++ code can link against at runtime. As a result, if your app uses any private symbols from platform libraries, you will need to update it to either use the public NDK APIs or to include its own copy of those libraries. Some libraries are public: the NDK exposes libandroid, libc, libcamera2ndk, libdl, libGLES, libjnigraphics, liblog, libm, libmediandk, libOpenMAXAL, libOpenSLES, libstdc++, libvulkan, and libz as part of the NDK API. Other libraries are private, and Android N only allows access to them for platform HALs, system daemons, and the like. If you aren’t sure whether your app uses private libraries, you can immediately check it for warnings on the N Developer Preview.
https://android-developers.googleblog.com/2016/06/improving-...
These are the stable APIs:
https://developer.android.com/ndk/guides/stable_apis
2026-04-29: Submitted detailed information about the rxrpc vulnerability and a weaponized exploit that achieves root privileges on Ubuntu to security@kernel.org.
2026-04-29: Submitted the patch for the rxrpc vulnerability to the netdev mailing list. Information about this issue was published publicly.
2026-05-07: Submitted detailed information about the vulnerability and the exploit to the linux-distros mailing list. The embargo was set to 5 days, with an agreement that if a third party publishes the exploit on the internet during the embargo period, the Dirty Frag exploit would be published publicly.
2026-05-07: Detailed information and the exploit for the esp vulnerability were published publicly by an unrelated third party, breaking the embargo.
2026-05-07: After obtaining agreement from distribution maintainers to fully disclose Dirty Frag, the entire Dirty Frag document was published.
But this is very similar to Copy Fail, and I'm assuming there was an expectation that others might discover this soon as well. Hence the urgency.
At least that's my charitable interpretation.
However, it can be used to modify files that are passed into the container (e.g. `docker run -v`), or files that are shared with other containers (e.g. other Docker containers sharing the same layers). kube-proxy with Kubernetes happens to share a trusted binary with containers by default, which is how it can be exploited: https://github.com/Percivalll/Copy-Fail-CVE-2026-31431-Kuber...
AI is neat because it's higher signal but yeah no, we're not getting anywhere close to "safe linux", AI or not.
An LPE only allows an attacker who can already execute code on the system to become root. So, bad, yes, but it doesn't mean you are immediately pwned.
you think the reporters and the distribution maintainers colluded to... get 5 minutes of attention?
that would be exceptionally stupid of the distribution maintainers and destroy all trust.
2. BSDs don't have the same optimizations that Linux has. BSDs generally try to pursue correctness.
That being said, there were just a bunch of vulnerabilities in FreeBSD.
macOS has had its own Dirty COW-style attack, and I know there are for sure more memory ones, just based on the way the XNU kernel works.
So no, Linux isn't really worse per se.
- more people are using it (assuming macOS is in its own bucket, perhaps)
- bigger surface area (esp. NetBSD has, in my limited understanding, just less stuff that can go boom)
- more churn, i.e. more new stuff that can be buggy, released more often
Of course, because of that, more eyes are on Linux, so I'm not sure where that security tradeoff is.
Not criticizing whoever found the bug, of course.
That said, running every process in its own micro VM is looking more attractive by the minute.
But yes, micro VMs are a great idea!