I stumbled upon this interesting platform and thought I’d share.
Incus provides support for system containers and virtual machines.
When running a system container, Incus simulates a virtual version of a full operating system. To do this, it uses the functionality provided by the kernel running on the host system.
When running a virtual machine, Incus uses the hardware of the host system, but the kernel is provided by the virtual machine. Therefore, virtual machines can be used to run, for example, a different operating system.
You can learn more about the differences between application containers, system containers and virtual machines in our documentation.
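To make the container/VM distinction concrete, here is a rough sketch of how the two are launched with the Incus CLI (instance and image names are just illustrative; this assumes a working Incus installation):

```shell
# Launch a system container (shares the host's kernel)
incus launch images:debian/12 mycontainer

# Launch a virtual machine (boots its own kernel) -- same command, plus --vm
incus launch images:debian/12 myvm --vm

# Both show up side by side in the instance list
incus list
```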
Containers aside, why would you want to use Incus to run VMs, when you’ve already got KVM/libvirt? Are there any performance/resource utilization/other advantages to using it?
My best guess would be to have a single point of management for both LXCs and VMs.
LXD/Incus provides a management and automation layer that really makes things work smoothly. With Incus you can create clusters, download, manage and create OS images, run backups and restores, bootstrap things with cloud-init, move containers and VMs between servers (even live sometimes) and those are just a few things you can do with it and not with pure KVM/libvirt.
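A quick sketch of what a few of those features look like in practice (instance, member and file names are made up for the example):

```shell
# Snapshot an instance, then export it as a backup tarball
incus snapshot create c1 before-upgrade
incus export c1 /tmp/c1-backup.tar.gz

# Move an instance to another member of the cluster
incus move c1 --target server2

# Bootstrap a fresh instance with cloud-init user data at launch time
incus launch images:debian/12/cloud c2 \
    --config cloud-init.user-data="$(cat user-data.yaml)"
```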
Another big advantage is the fact that it provides a unified experience to deal with both containers and VMs, no need to learn two different tools / APIs as the same commands and options will be used to manage both. Even profiles defining storage, network resources and other policies can be shared and applied across both containers and VMs.
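For example, a single profile with resource limits can be defined once and applied to containers and VMs with the exact same commands (profile and instance names are made up):

```shell
# Create a profile with CPU and memory limits
incus profile create small
incus profile set small limits.cpu=2 limits.memory=2GiB

# Apply it to a container and to a VM alike
incus launch images:debian/12 c1 --profile default --profile small
incus launch images:debian/12 v1 --vm --profile default --profile small
```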
Incus isn’t about replacing existing virtualization technologies such as QEMU, KVM and libvirt, it is about augmenting them so they become easier to manage at scale and overall more efficient. It plays in the same space as, say, Proxmox, and I can guarantee you that most people running Proxmox today will eventually move to Incus and never look back. It works way better: truly open source, fewer bugs, no BS licenses and way less overhead.
Interesting, I didn’t know you could create clusters with it! That looks promising then. I was planning to install Proxmox for my homelab but didn’t like that it was a whole distro, which shipped with an ancient kernel…
> I was planning to install Proxmox for my homelab but didn’t like that it was a whole distro, which shipped with an ancient kernel…
My issue with Proxmox isn’t that it ships with an old kernel, it’s the state of that kernel: it is so mangled and twisted that they shouldn’t even be calling it a Linux kernel. Also, their management daemons and other internal shenanigans will delay your boot and can crash your systems under certain circumstances.
For LXD you have a couple of options:
- Debian 12 with LXD/LXC provided from their repositories;
- Debian 12 with LXD/LXC provided from snap;
- Ubuntu with snap.
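The install steps for those options look roughly like this (a sketch; on Debian 12, snapd itself has to be installed first):

```shell
# Option 1: Debian 12 with LXD from the Debian repositories (5.0.x LTS, no WebUI)
sudo apt install lxd
sudo lxd init

# Options 2 and 3: Debian 12 or Ubuntu with LXD from snap (LXD-UI from 5.14 on)
sudo apt install snapd   # Debian only; Ubuntu ships snapd already
sudo snap install lxd
sudo lxd init
```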
In the first case you’ll get a very clean system with a very stable LXD 5.0.2 LTS; it works really well, however it doesn’t provide a WebUI. If you go with one of the snap options you’ll get the LXD-UI starting with version 5.14.
Personally I had been running LXD from snap since Debian 10, and moved to the LXD from the Debian repositories under Debian 12 because I don’t care about the WebUI and I do care about having clean systems… but I can see how some people, particularly those coming from Proxmox, would like the UI.
Side note: it should be possible to run the WebUI without snap and use it to control an LXD 5.0.2 LTS cluster, but as I don’t need it I never spent time on it. :)