
Discussion (30 Comments). Read Original on HackerNews.
However, copy.fail can be used in many other ways that are not contained by containers or the above settings. For example, it can modify /etc/ssl/certs to prepare for MitM attacks. If you have multiple containers based on the same image, then one compromised CA set affects the others.
1. I would hope the default seccomp policy blocks AF_ALG in these containers. I bet it doesn’t. Oh well.
2. The write-to-RO-page-cache primitive STILL WORKED! It’s just that the particular exploit used had no meaningful effect in the already-root-in-a-container context. If you think you are safe, you’re probably wrong. All you need to make a new exploit is an fd representing something that you aren’t supposed to be able to write. This likely includes CoW things where you are supposed to be able to write after CoW but you aren’t supposed to be able to write to the source.
So:
- Are you using these containers with a common image, or even a common layer in an image, to isolate dangerous workloads from each other? Oops: they can modify the image layers and corrupt each other. There goes any sort of cross-tenant isolation.
- What if you get an fd backed by the zero page and write to it? This can’t result in anything that the administrator would approve of.
- What if you ro-bind-mount something in? It’s not ro any more.
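To make the broken invariant concrete: under normal kernel behavior, a read-only fd (such as one obtained through an ro bind mount) rejects writes outright; the primitive described above is what lets an attacker bypass this. A minimal sketch in Python, using a temp file as a stand-in for a bind-mounted path:

```python
import os, tempfile

# A temp file stands in for something ro-bind-mounted into a container.
fd_rw, path = tempfile.mkstemp()
os.write(fd_rw, b"trusted contents")
os.close(fd_rw)

# Open read-only: the fd the container is handed.
fd = os.open(path, os.O_RDONLY)
try:
    os.write(fd, b"poison")                    # kernel refuses: fd not open for writing
except OSError as e:
    print("write rejected, errno", e.errno)    # EBADF on Linux
finally:
    os.close(fd)
    os.remove(path)
```

The exploit's point is precisely that a page-cache write path sidesteps this fd-level permission check entirely.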
I see a lot of projects blocking those sockets in containers as a response to this exploit, but it seems rather strange to me. We're disabling a cryptographic performance feature entirely because there was a security bug in it that one time? It's a rather weird default. We don't mass-disable kernel modules every time someone discovers an EoP bug, do we? Did we blacklist OpenSSL's binaries after Heartbleed?
I suppose it makes sense as a default on vulnerable kernels (though people running vulnerable kernels should put effort into patching rather than workarounds in my opinion), but these defaults are going to be around ten years from now when copy.fail is a distant memory.
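For reference, blocking AF_ALG via seccomp is a small profile change. A sketch of the relevant syscall entry in OCI/Docker seccomp-profile JSON (AF_ALG is address family 38 on Linux; errno 97 is EAFNOSUPPORT; treat field names as an assumption against the current runtime-spec format):

```
{
  "names": ["socket"],
  "action": "SCMP_ACT_ERRNO",
  "errnoRet": 97,
  "args": [
    { "index": 0, "value": 38, "op": "SCMP_CMP_EQ" }
  ]
}
```

Returning EAFNOSUPPORT rather than EPERM lets well-behaved callers fall back to software crypto as if the kernel simply lacked the feature.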
Oh, and this [2] just happened.
[1] https://github.com/containers/oci-seccomp-bpf-hook/pull/209
[2] https://github.com/moby/moby/pull/52501
Using this to justify their migration to micro-VMs is very strange to me, though. Sure, for this CVE it would have been better, but a future attack could just as well hit a component shared across VMs but not containers. Are people really choosing technology based on the CVE of the week?
It is easy for security scanners to scan a Linux system, but will they inspect your containers, and snaps, and flatpaks, and VMs? It is easy for DevOps to ssh into your Linux server, but can they also get logged in to each container, and do useful things? Your patches and all dependencies are up-to-date on your server, but those containers are still dragging around legacy dependencies, by design. Is your backup system aware of containers and capable of creating backup images or files, that are suitable for restoring back to service?
Couldn't you then simply re-run the exploit again as unprivileged podman user and gain root on the host?
If we weren't already looking into moving away from containers entirely to ephemeral microVMs, one area I'd invest in would be replicating what CargoWall does for GitHub Actions in GitLab CI. At that point, even if an attacker gained access to a container and modified a binary with specific instructions (like reading env vars and sending them to an external server), it wouldn't be able to exfiltrate credentials or fetch malware remotely at all, because DNS queries are intercepted by eBPF and sent to a CoreDNS proxy.
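The enforcement point described here is eBPF plus a CoreDNS proxy, but the policy decision itself is simple: allowlisted names resolve, everything else gets NXDOMAIN. A toy sketch of just that check (the eBPF redirection and DNS plumbing are out of scope; the allowlist names are hypothetical):

```python
# Toy egress DNS policy: allow only names under an explicit allowlist,
# answer everything else as if it did not exist (NXDOMAIN).
ALLOWED_SUFFIXES = ("registry.gitlab.example.com", "pypi.org")  # hypothetical

def dns_verdict(qname: str) -> str:
    """Return 'FORWARD' for allowlisted names, 'NXDOMAIN' otherwise."""
    name = qname.rstrip(".").lower()
    for suffix in ALLOWED_SUFFIXES:
        if name == suffix or name.endswith("." + suffix):
            return "FORWARD"
    return "NXDOMAIN"

print(dns_verdict("pypi.org."))             # FORWARD
print(dns_verdict("files.pypi.org"))        # FORWARD
print(dns_verdict("attacker-c2.example."))  # NXDOMAIN
```

Matching on name suffixes rather than resolved IPs is what makes the next commenter's question (why not use IP addresses directly?) the natural follow-up.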
I still think rootless containers add more than a one-liner of complexity to an attack. At that point the scenario also involves understanding additional details about the underlying host: as you correctly pointed out, which container images (and thus shared image layers) are present, and whether those images use setuid binaries that specific CI jobs explicitly call during the build. (It's unusual to see anyone running a setuid binary in a CI pipeline anyway, since that generally results in a permission denied under normal conditions.)
Wouldn’t the exploit then just use ip addresses directly?
@isityettime, the vulnerability happens not because file contents are modified on disk (think of a base image shared across multiple CI builds), but because a binary in one base image shares the same inode (and thus, as an optimization, the same page cache pages in memory) as the same binary in another container. Container B will then execute a poisoned binary, and that's where the "container escape" happens.
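The inode-sharing point can be illustrated without containers: two directory entries for the same inode are one object in the kernel, so they share one set of page cache pages, and poisoning those pages through either name poisons both. A sketch in Python using a hard link as the simplest same-inode case (paths are hypothetical stand-ins for shared image layers):

```python
import os, tempfile

base = tempfile.mkdtemp()
a = os.path.join(base, "layerA", "bin")   # stand-ins for the "same binary"
b = os.path.join(base, "layerB", "bin")   # seen from two containers
os.makedirs(os.path.dirname(a))
os.makedirs(os.path.dirname(b))

with open(a, "wb") as f:
    f.write(b"\x7fELF...")                # pretend binary contents

os.link(a, b)                             # second name, same inode

# Same inode number => one in-kernel object => one shared page cache copy.
print(os.stat(a).st_ino == os.stat(b).st_ino)  # True
```

Overlay-style layer sharing gets there differently (both containers mount the same lower-layer file), but the consequence is the same: one set of cached pages backing many containers.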
But… does this escape the container? If not (the author seems to indicate it does not), then it shouldn't matter whether you are in Docker or rootless Podman, since the end result is always the same: you have elevated to root within the container. If the rest of the container filesystem isolation does its job, the outcome is identical. Though I guess another chained exploit to escape the container would be worse in Docker? Do I have that right?