I got tired of libvirt XML and Vagrant's Ruby/reload dance for single-host VM stacks, so I built a compose-style runtime directly on QEMU/KVM.
What's there: GPU passthrough as a first-class primitive (VFIO, OVMF, per-instance EFI vars), healthchecks that gate depends_on over SSH, socket-multicast L2 between VMs with no root and no bridge config, cloud-init wired through the YAML, Dockerfile support for provisioning.
What it's not: Kubernetes. No clustering, no live migration, no control plane. Single host. Prototype, but I'm running it on real hardware. Curious what breaks for people.
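For readers who haven't seen a compose-style VM manifest, here is a hypothetical sketch of how the features above might fit together in YAML. All key names are illustrative guesses, not the actual holos schema:

```yaml
# Hypothetical manifest -- key names are illustrative, not holos's real schema
vms:
  db:
    image: ubuntu-24.04
    cloud_init:                  # cloud-init wired through the YAML
      users:
        - name: admin
    healthcheck:
      test: "pg_isready -q"      # runs over SSH; gates dependents below
      interval: 5s
      retries: 10
  app:
    build: ./app                 # Dockerfile-style provisioning
    depends_on:
      db: { condition: healthy }
    devices:
      - gpu: "0000:01:00.0"      # VFIO passthrough by PCI address
networks:
  lan:
    driver: socket-multicast     # user-mode L2, no root, no bridge
```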

Discussion (18 Comments)
Originally if you wanted e.g. a SoundBlaster16 device you could use -device sb16. Then it changed to -soundhw sb16. And now it's -audio driver=none,model=sb16; this has been happening with several different classes of options over the years and I haven't found any good documentation of all the differences in one place. If anyone knows, I'd appreciate it.
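For what it's worth, the rough timeline as I understand it (version numbers approximate, so treat this as a sketch rather than authoritative documentation):

```shell
# Plain device syntax (works for some sound models)
qemu-system-x86_64 -device sb16 ...

# -soundhw shorthand (deprecated, removed around QEMU 7.1)
qemu-system-x86_64 -soundhw sb16 ...

# Current style: declare an -audiodev backend, then bind a device to it
qemu-system-x86_64 -audiodev pa,id=snd0 -device sb16,audiodev=snd0 ...

# -audio is a newer shorthand that combines backend and model in one option
qemu-system-x86_64 -audio driver=pa,model=sb16 ...
```

The general pattern across these option classes seems to be QEMU splitting host backends (`-audiodev`) from guest devices (`-device`), with the old combined shorthands removed once the split landed.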
I built something similar recently on top of Incus via Pulumi. I also wanted to avoid libvirt's mountain of XML, and Incus is essentially a lightweight and friendlier interface to QEMU, with some nice QoL features. I'm quite happy with it, though the manifest format is not as fleshed out as what you have here.
What's nice about Pulumi is that I can use the Incus Terraform provider from a number of languages saner than HCL. I went with Python, since I also wanted to expose a unified approach to provisioning, which Pyinfra handles well. This allows me to keep the manifest simple, while having the flexibility to expose any underlying resource. I think it's a solid approach, though I still want to polish it a bit before making a public release.
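To make the Pulumi/Incus approach concrete, here is a rough sketch of what the parent describes, assuming a Python SDK bridged from the Incus Terraform provider. The `pulumi_incus` package name and resource arguments are illustrative guesses modeled on the Terraform provider's `incus_instance` resource, not a published API:

```python
# Sketch only: assumes a bridged Incus provider exposing an Instance
# resource mirroring Terraform's incus_instance.
import pulumi
import pulumi_incus as incus  # hypothetical package name

db = incus.Instance(
    "db",
    name="db",
    image="images:ubuntu/24.04",
    type="virtual-machine",   # Incus drives QEMU for VM instances
)

# Provisioning is then handed off to pyinfra against the instance's
# address, which keeps the manifest itself small.
pulumi.export("db_name", db.name)
```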
So it would be nice if holos could replicate that docker/incus CLI functionality, like say "holos run -d --name db ubuntu:noble bash -c blah".