Since the project’s creation about 14 years ago, libvirt has grown enormously. In that time there has been a lot of code refactoring, but these have always been fairly evolutionary changes; there has been little revolutionary change to the overall system architecture or to the core technical decisions made early on. This blog post is one of a series examining recent technical decisions that can be considered more revolutionary for libvirt. This was the topic of a talk given at KVM Forum in Lyon in October 2019.
Monolithic daemon
Anyone who has used libvirt should be familiar with the libvirtd daemon, which runs most of the virtualization and secondary drivers that libvirt distributes. Only a few libvirt drivers are stateless and run purely in the library. Internally libvirt has always tried to maintain a fairly modular architecture, with each hypervisor driver separated from the other drivers. There are also secondary drivers providing storage, network and firewall functionality which are notionally separate from all the virtualization drivers. Over time the separation has broken down, with hypervisor drivers directly invoking internal methods from the secondary drivers, but last year there was a major effort to reverse this and regain full separation between every driver.
There are various problems with having a monolithic daemon like libvirtd. From a security POV, it is hard to provide any meaningful protections to libvirtd. The range of functionality it exposes provides an access level that is more or less equivalent to having a root shell. So although libvirtd runs with a "virtd_t" SELinux context, this should be considered little better than running "unconfined_t". As well as providing direct local access to the APIs, the libvirtd daemon also has the job of exposing remote access over TCP, most commonly needed when doing live migration. Exposing the drivers directly over TCP is somewhat undesirable given the size of the attack surface they have.
The biggest problems users have seen are around the reliability of the daemon. A bug in any single driver in libvirt can impact the functionality of all the other drivers. As an example, if something goes wrong in the libvirt storage management APIs, this can harm management of any QEMU VMs. Such problems range from crashes of the daemon due to memory corruption, to more subtle issues like main event loop starvation caused by long running file handle event callbacks, or accidental resource cleanup such as closing a file descriptor belonging to another thread.
Libvirt drivers are shipped as loadable modules, and an installation of libvirt does not have to include all drivers. Thus a minimal installation of libvirt is a lot smaller than users typically imagine. The existence of the monolithic libvirtd daemon, however, and the fact that many apps pull in broader RPM dependencies than they truly need, results in a perception that libvirt is bloated / heavyweight.
Modular daemons
With all this in mind, libvirt has started a move to a new modular daemon model. In this new world, each driver in libvirt (both hypervisor drivers & secondary drivers) will be serviced by its own dedicated daemon. So there will be a "virtqemud", "virtxend", "virtstoraged", "virtnwfilterd", etc. Each of these daemons will only support access via a dedicated local UNIX domain socket, /run/libvirt/$DAEMONNAME, eg /run/libvirt/virtqemud. The libvirt client library will be able to connect to either the old monolithic daemon socket path /run/libvirt/libvirt-sock, or the new per-daemon sockets. The hypervisor daemons will be able to open connections to the secondary daemons when required by the requested functionality, eg to configure a firewall for a QEMU guest NIC.
Remote off-host access to libvirt functionality will be handled via a new virtproxyd daemon which listens for TCP connections and forwards API calls over a local UNIX socket to whichever modular daemon needs to service them. This proxy daemon will also be responsible for handling the monolithic daemon UNIX domain socket path that old libvirt clients will be expecting to use.
Overall, from an application developer POV, the change to modular daemons will be transparent at the API level. The main impact will be on deployment tools like Puppet / Ansible seeking to automate deployment of libvirt, which will need to be aware of these new daemons and their config files. The resulting architecture should be more reliable in operation and enable development of more restrictive security policies.
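As a rough illustration of that transparency (a generic sketch, not code from any particular application), typical client code using the public libvirt API looks the same whether the connection ends up being served by the monolithic libvirtd or routed to the new virtqemud socket:

    #include <stdio.h>
    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        /* The URI is unchanged; the client library picks the right socket. */
        virConnectPtr conn = virConnectOpenReadOnly("qemu:///system");
        if (!conn) {
            fprintf(stderr, "failed to connect to the QEMU driver\n");
            return 1;
        }

        char *host = virConnectGetHostname(conn);
        printf("connected to host: %s\n", host ? host : "unknown");
        free(host);

        virConnectClose(conn);
        return 0;
    }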
Both the existing libvirtd and the new modular daemons have been configured to make use of systemd socket activation and auto-shutdown after a timeout, so the daemons should only be launched when they actually need to do some work. Several daemons will still need to start up at boot to activate various resources (create the libvirt virbr0 bridge device, or auto-start VMs), but should stop quickly once this is done.
Migration timeframe
At the time of writing the modular daemons exist in libvirt releases and are built and installed by default. The libvirt client library, however, still defaults to connecting to the monolithic libvirtd UNIX socket. To the best of my knowledge, all distros with systemd use presets which favour the monolithic daemon too. IOW, thus far, nothing has changed from most users' POV. In the near future, however, we intend to flip the switch in the build system such that the libvirt client library favours connections to the modular daemons, and encourage distros to change their systemd presets to match.
The libvirtd daemon will remain around, but deprecated, for some period of time before it is finally deleted entirely. When this deletion will happen is still TBD, but it is not less than 1 year away, and possibly as much as 2 years. The decision will be made based on how easily & quickly applications adapt to the new modular daemon world.
Future benefits
The modular daemon model opens up a number of interesting possibilities for addressing long standing problems with libvirt. For example, the QEMU driver in libvirt can operate in “system mode” where it is running as root and can expose all features of QEMU. There is also the “session mode” where it runs as an unprivileged user but with features dramatically reduced. For example, no firewall integration, drastically reduced network connectivity options, no PCI device assignment and so on. With the modular daemon model, a new hybrid approach is possible. A “session mode” QEMU driver can be enhanced to know how to talk to a “system mode” host device driver to do PCI device assignment (with suitable authentication prompts of course), likewise for network connectivity. This will make the unprivileged “session mode” QEMU driver a much more compelling choice for applications such as virt-manager or GNOME Boxes which prefer to run fully unprivileged.
Historical usage
In common with many projects which have been around for a similar time frame, libvirt has accumulated a variety of different programming languages and document formats for a wide range of tasks.
The main library is written in C, but around that there is the autotools build system, which adds shell, make, autoconf, automake, libtool, m4, and other utilities like sed, awk, etc. Then there are many helper scripts used for code generation or testing which are variously written in shell, Perl or Python. For documentation, there are man pages written in POD, web docs written in HTML5 with an XSL templating system, some docs written in XML which generate HTML, and some docs generated from source code comments. Finally there are domain specific languages such as XDR for the RPC system.
There are a couple of issues with the situation libvirt finds itself in. The large number of different languages and formats places a broad knowledge burden on new contributors to the project. Some of the current choices are now fairly obscure & declining in popularity, and thus not well known by potential project contributors. For example, Markdown and reStructuredText (RST) are more commonly known than Perl's POD format, and developers are more likely to be competent in Python than in Perl. Some of the languages libvirt uses are simply too hard to deal with; for example, it is a struggle to find anyone who can explain m4 or enjoys using it when writing configure scripts for autoconf.
Ultimately these issues combine to have a couple of negative effects on the project. The relative obscurity of the tooling drives away potential new contributors, and its poor suitability for the tasks it is applied to reduces the efficiency of existing contributors.
Intended future usage
With the above problems in mind, there is a desire to consolidate and update the programming languages and document formats that libvirt uses. The goals are to reduce the knowledge burden on contributors and simplify the development experience to make it easier to get the important work done. Roughly, the intention is to get to a place where libvirt uses C for the core library code, Python for all helper scripts, RST for all documentation, and Meson for the build system. This should eliminate the use of shell, shell utilities like sed/awk, Perl, POD, XSL, HTML5, m4, make, autoconf and automake. A decision about which static website builder to use to replace XSL hasn't been made yet, but Sphinx and Pelican are examples of the kind of tools being considered. Getting to this desired end point will take a prolonged effort, but setting a clear direction now will assist contributors in understanding where to best spend their time.
To kickstart this consolidation, at the end of last year almost all of the Perl build scripts were converted to Python. This was a manual line-by-line conversion, so the structure of the scripts didn't really change much, with just minimal language syntax changes in most cases. There are still a handful of Perl scripts remaining, which are some of the most complicated ones. These really need a rewrite, rather than a plain syntax conversion, so will take more time to bring over to Python. A few shell scripts have also been converted to Python, with more to follow. After that all the POD man pages were fed through an automated conversion pipeline to create RST formatted man pages. In doing this, the man pages are now also published online as part of the libvirt website. Some of the existing HTML content has also been converted into RST.
The next really big step is converting the build system from autotools into Meson. This will eliminate some of the most complex, buggy and least understood parts of libvirt. It will be refreshing to have a build system which only requires knowledge of a single high level domain specific language, with Python for extensions, instead of autotools which requires knowledge of at least 6 different languages and the interactions between them.
Automatic memory cleanup
Libvirt has always aimed to be portable across a wide set of operating system platforms, and this included portability to different compiler toolchains. In the early days of the project GCC was the most common target, but users did occasionally use the Solaris and Microsoft native compilers. Fast forward to today and the legacy UNIX platforms are much less relevant. Officially libvirt only targets Linux, FreeBSD, macOS and Windows as supported platforms, and all of these have GCC or Clang or both available. These compilers are available on any platform that we're likely to add in the future too. Conceivably people might still want to use Microsoft compilers, but their feature set is so poor compared to GCC/Clang that we long ago discounted them as a toolchain to support.
Thus, in the early part of last year, libvirt made the explicit decision to support only GCC and Clang henceforth. This in turn freed the project to take full advantage of extensions to the C language offered by these compilers.
The extension which motivated this decision was the cleanup attribute. This allows a variable declaration to have a function associated with it that will be automatically invoked when the variable goes out of scope. The most obvious use for these cleanup functions is to release heap memory associated with pointers, and this is exactly what libvirt wanted to do. This is not the only use case though; they are also convenient for other tasks such as closing file descriptors, decrementing reference counts, unlocking mutexes, and so on.
The native C syntax for using this feature is fairly ugly. The cleanup function is passed a pointer to the variable going out of scope, so a small wrapper is needed rather than calling free() directly:

    static void cleanup_free(char **ptr) { free(*ptr); }
    __attribute__((__cleanup__(cleanup_free))) char *foo = NULL;

but this can be made more attractive via macros. For example, GLib provides several pretty macros to simplify life: g_autofree, g_autoptr and g_auto.
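To illustrate how these macros read in practice (a generic sketch, not libvirt code; the file path used here is just an example), each one attaches the appropriate cleanup function based on the declared type:

    #include <string.h>
    #include <glib.h>

    void demo(const char *name)
    {
        g_autofree char *path = g_strdup_printf("/run/libvirt/%s", name);
        g_autoptr(GError) err = NULL;                     /* g_error_free() runs on scope exit */
        g_auto(GStrv) parts = g_strsplit(path, "/", -1);  /* g_strfreev() runs on scope exit */
        g_autofree char *contents = NULL;

        if (!g_file_get_contents(path, &contents, NULL, &err)) {
            g_printerr("read failed: %s\n", err->message);
            return;  /* path, err, parts and contents are all released automatically */
        }

        g_print("%s has %zu bytes in %u path components\n",
                path, strlen(contents), g_strv_length(parts));
    }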
Thus the old libvirt coding pattern of
    void dosomething(char *param) {
        char *foo;
        ...some code...
        foo = g_strdup_printf("Some string %s", param);
        if (something() < 0)
            goto cleanup;
        ... some more code...
     cleanup:
        free(foo);
    }
can be replaced by something like

    void dosomething(char *param) {
        g_autofree char *foo = NULL;
        ...some code...
        foo = g_strdup_printf("Some string %s", param);
        if (something() < 0)
            return;
        ... some more code...
    }
There are still some "gotchas" to be aware of. Care must be taken to ensure any variable declared with automatic cleanup is always initialized, otherwise the cleanup function will touch uninitialized stack data. If a pointer stored in an automatic cleanup variable needs to be returned to the caller of the method, the local variable must be NULLed out first. Fortunately GLib provides a convenient helper, g_steal_pointer, for exactly this purpose.
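A minimal sketch of that pattern (generic example code, not taken from libvirt):

    #include <string.h>
    #include <glib.h>

    /* Returns a newly allocated label the caller must free, or NULL on failure. */
    char *build_label(const char *name)
    {
        g_autofree char *label = g_strdup_printf("guest-%s", name);

        if (strlen(label) > 64)
            return NULL;                 /* 'label' is freed automatically here */

        /* Transfer ownership to the caller: the local is NULLed out so the
         * automatic cleanup does not free the pointer being returned. */
        return g_steal_pointer(&label);
    }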
As described in the section on OOM handling later in this post, many goto jumps were eliminated by aborting on OOM instead of trying to gracefully clean up & report it. The remaining goto jumps were primarily for freeing memory, closing file descriptors, and releasing mutexes, most of which can be eliminated with these cleanup functions.
The result is that the libvirt code can be dramatically simplified, which reduces the maintenance burden on libvirt contributors, allowing more time to be spent on coding features which matter to users. As an added benefit, in converting code over to use automatic cleanup functions we've fixed a number of memory leaks not previously detected, which reinforces the value of using this C extension.
Incidentally, after this was introduced in libvirt last year, I suggested that QEMU also adopt the use of automatic cleanup functions, since it has also mandated either GCC or Clang as the only supported compilers, and this was accepted.
Portability and API abstractions
Libvirt traditionally targeted the POSIX standard API, but there are a number of difficulties with this. Much of POSIX is optional, so it cannot be assumed to exist on every platform. Plenty of platforms are non-compliant with the spec, or have different behaviour in scenarios where the spec allowed for multiple interpretations. To deal with this libvirt used the GNULIB project, a copylib that attempts to fix POSIX non-compliance issues. It is very effective at this, but portability is only one of the problems with using POSIX APIs directly. It is a very low level API, so simple tasks like listening on a TCP socket require many complex API calls. Other APIs have poor designs by modern standards which make it easy for developers to introduce bugs. The malloc APIs are a particular case in point. As a result libvirt has created many higher level abstractions around the POSIX APIs. In other modern programming languages, though, such higher level abstractions are already a standard offering, allowing developers to focus on solving their application's domain specific problems. Libvirt maintainers, by contrast, have spent a lot of time developing abstractions unrelated to virtualization such as object / class systems, DBus client APIs, hash tables / bitmaps, sockets / RPC systems, and much more. This is not a good use of limited resources in the long term.
Adoption of GLib
These problems are common to many applications / libraries that are written in C, and thus there are a number of libraries that attempt to provide a higher level "standard library". The GLib library is one such effort from the GNOME project developers that has long been appealing. Some of libvirt's internal APIs are inspired by those present in GLib, and it has been used by QEMU for a long time too. What prevented libvirt from using GLib in the past was the desire to catch, report and handle OOM errors. With the switch to aborting on OOM (covered in the final section of this post), the only blocker to the use of GLib was eliminated.
The decision was thus made for libvirt to adopt the GLib library in the latter part of 2019. From the POV of application developers nothing will change in libvirt. The usage of GLib is purely internal, and so doesn't leak into the public API exposed from libvirt.so, which remains compatible with what came before. In the case of QEMU/KVM hosts at least, there is also no change in what must be installed on hosts, since GLib has already been a dependency of QEMU for many years. This will ultimately be a net win, as using GLib will eliminate other code in libvirt, reducing the installation footprint in aggregate between libvirt and QEMU.
With a large codebase such as libvirt's, adopting GLib is not as quick as flicking a switch. Some key pieces of libvirt functionality have been ported to use GLib APIs completely, while in other cases the work is going to be an incremental ongoing effort over a long time. This offers plenty of opportunities for new contributors to jump in and make useful changes which are fairly easily understood & straightforward to implement.
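To give a flavour of the kind of conversion involved (a generic sketch under assumed names, not actual libvirt code), a hand-rolled hash table abstraction can simply give way to GLib's GHashTable:

    #include <glib.h>

    int main(void)
    {
        /* Keys and values are owned by the table; the destroy notifiers free
         * them automatically when entries are removed or replaced, or when
         * the table itself is released. */
        GHashTable *domains = g_hash_table_new_full(g_str_hash, g_str_equal,
                                                    g_free, g_free);

        g_hash_table_insert(domains, g_strdup("guest1"), g_strdup("running"));
        g_hash_table_insert(domains, g_strdup("guest2"), g_strdup("shut off"));

        const char *state = g_hash_table_lookup(domains, "guest1");
        g_print("guest1 is %s\n", state);

        g_hash_table_unref(domains);
        return 0;
    }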
Removal of GNULIB
One of the anticipated benefits of using GLib was that it would obsolete a lot of the portability work that GNULIB does. The GNULIB project is strongly entangled with autotools as a build system, so it is a blocker to the adoption of a different build system in libvirt. There has thus been an ongoing effort to eliminate GNULIB modules from libvirt code. In many cases, GLib does indeed provide a direct replacement for the functionality needed. One of the surprises, though, is that a very large portion of GNULIB was completely redundant given libvirt's stated set of OS platform build targets. There is no need to consider portability to a wide variety of old buggy UNIX variants (Solaris, HPUX, AIX, and so on) for libvirt. After a final big push over the last few weeks, a patch series completing the removal of GNULIB from libvirt has been posted and will merge in the 6.1.0 release.
The work has been tested across all the platforms covered by libvirt CI, which means RHEL-7, 8, Fedora 30, 31, rawhide, Ubuntu 16.04, 18.04, Debian 9, 10, sid, FreeBSD 11, 12, macOS 10.14 with XCode 10.3 and XCode 11.3, and MinGW64. There are certainly libvirt users on platforms not covered by CI. Those using other modern Linux distros should not see problems if using glibc, as the combination of RHEL, Debian & Ubuntu testing should expose any latent bugs. The more likely places to see regressions will be if people are using libvirt on other *BSDs, or older Linux distros. Usage of alternative C library implementations on Linux is also an unknown, since there is no CI coverage for this. Support for older Linux distros is explicitly not a goal for libvirt and the project will willingly break old platforms. Support for other modern non-Linux OS platforms, however, is potentially interesting. What is stopping such platforms being considered explicitly by libvirt is the lack of any contributors willing to help provide a CI setup and deal with fixing portability problems. IOW, libvirt is willing to entertain the idea of supporting additional modern OS platforms if contributors want to work with the project to make it happen. The same applies to Linux distros using a non-glibc implementation.
Detecting and reporting OOM
Libvirt has always taken the view that ANY error from a function / system call must be propagated back to the caller. The out of memory condition (ENOMEM / OOM) is just one of many errors that might be seen when calling APIs, and thus libvirt attempted to report this in the normal manner. OOM is not like most other errors though.
The first challenge with OOM is that checking for a NULL return from malloc() is error prone because the return value overloads the error indicator with the normal returned pointer. To address this, the libvirt coding style banned direct use of malloc() and created a wrapper API that returned the allocated pointer in an output parameter, leaving the return value solely as the error indicator, leading to a code pattern like:
    char *varname;
    if (VIR_ALLOC(varname) < 0) {
        ....handle OOM...
    }
This enabled use of the compiler's 'warn_unused_result' function attribute to get compile time validation that allocation errors were checked. Checking for OOM is only half the problem; handling OOM is the much more difficult issue. Libvirt uses a 'goto error' design pattern for error cleanup code paths. A surprisingly large number of these goto jumps only exist to handle OOM cleanup. Testing these code paths is non-trivial, if not impossible, in the real world. Libvirt integrated a way to force OOM on arbitrary allocations in its unit test suite. This was very successful at finding crashes and memory leaks in OOM handling code paths, but it only validates code that actually has unit test coverage. The number of bugs found in code that was tested for OOM gives very low confidence that other, non-tested code would correctly handle OOM. The OOM testing is also incredibly slow to execute, since it needs to repeatedly re-run the unit tests, failing a different malloc() each time, so the time required grows roughly quadratically as the number of allocations increases.
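For illustration only, a hypothetical wrapper in the same spirit as the pattern above (not the real VIR_ALLOC implementation) might look like this:

    #include <stdlib.h>

    /* The pointer is purely an output parameter and the return value is purely
     * an error indicator; warn_unused_result lets the compiler complain if a
     * caller forgets to check it. */
    __attribute__((warn_unused_result))
    static inline int alloc_bytes(void **ptrptr, size_t size)
    {
        *ptrptr = calloc(1, size);
        return *ptrptr ? 0 : -1;
    }

    #define ALLOC_ONE(ptr) alloc_bytes((void **)&(ptr), sizeof(*(ptr)))

A caller would then use it just like the pattern above, eg if (ALLOC_ONE(varname) < 0).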
Assuming the OOM condition is detected and a jump to the error handling path is taken, there is now the problem of getting the error report back to the user. Many of the libvirt drivers run inside the libvirtd daemon, with an RPC system used to send results back to the client application. Reporting the error via RPC messages is quite likely to need memory allocation which may well fail in an OOM scenario.
Is OOM reporting useful?
The paragraphs above describe why reporting OOM scenarios is impractical, verging on impossible, in the real world. Assuming it were possible to report it reliably though, would it actually benefit any application using libvirt?
Linux systems generally default to having memory overcommit enabled, and when they run out of memory, the OOM killer will reap some unlucky process. IOW, on Linux, it is very rare for an application to ever see OOM reported from an allocation attempt. Libvirt is, however, ported to non-Linux platforms which may manage memory differently and thus genuinely report OOM from malloc() calls. Those non-Linux users will be taking code paths that are never tested by the majority of libvirt users or developers. This gives low confidence for success.
Although libvirt provides a C library API as its core deliverable, few applications are written in C; most consume libvirt via a language binding, with Perl and Go believed to be the most commonly used. Handling OOM in non-C languages is even less practical/common than in C. Many libvirt applications are also already using libraries (GTK, GLib) that will abort on OOM. Overall there is little sign that any libvirt client application attempts to handle OOM in its own code, let alone cares whether libvirt can report it.
One important application process using the libvirt API, though, is the libvirtd daemon itself. In the very early days, if libvirtd stopped it would take down all running QEMU VMs, but this limitation was fixed over 10 years ago. To enable software upgrades on hosts with running VMs, libvirtd needs to be able to restart itself. As a result libvirtd maintains a record of important state on disk, enabling it to carry on where it left off when starting up. Recovering from OOM by aborting and allowing libvirtd to be restarted by systemd would align with a code path that already needs to be well tested and supported for software upgrades.
Give up on OOM handling
With all the above in mind, the decision shouldn't be a surprise. Libvirt has decided to stop attempting to handle ENOMEM from malloc() and related APIs and will instead immediately abort. The libvirtd daemon will automatically restart and carry on where it left off. The result is that the libvirt code can be dramatically simplified by removing many goto jumps and cleanup code blocks, which reduces the maintenance burden on libvirt contributors, allowing more time to be spent on coding features which matter to users.
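As a closing sketch (generic example code, not taken from libvirt), the allocation pattern shown earlier collapses to plain GLib calls once an allocation failure simply aborts the process:

    #include <glib.h>

    typedef struct {
        int id;
        char *name;
    } Example;

    Example *example_new(const char *name)
    {
        Example *obj = g_new0(Example, 1);   /* aborts on OOM, never returns NULL */
        obj->name = g_strdup(name);          /* likewise */
        obj->id = 1;
        return obj;                          /* no OOM check, no cleanup label needed */
    }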