A lot has been going on in the libvirt universe recently as it continues on its path to world domination. A couple of days ago Daniel Hokka Zakrisson (that’s the 4th Daniel involved in libvirt!) surprised us all by announcing the start of a Linux-VServer driver for libvirt. This is the second container-based virtualization driver, following on from the previous OpenVZ driver work. On the KVM front we now have support for save & restore thanks to Jim Paris, and the Xen & KVM drivers can also do CDROM media changes, which will make Windows guest installs much more friendly. A bunch of work is taking place around NUMA to allow guests to be intelligently placed to take advantage of the capabilities of large NUMA boxes.
I’ve been working on integrating SASL support into the remote management driver. This will augment our existing SSL/TLS + x509 security model to provide fun stuff like Kerberos integration (single sign on!) and plain old username/password auth (with data encryption thrown in too, though). This will tie in very nicely with the FreeIPA project, which is providing a pre-integrated Kerberos + LDAP solution for Linux users to compete with Active Directory. I installed FreeIPA in a Fedora 7 guest a few weeks back and can say it is looking very impressive indeed.
There have been lengthy discussions & arguments about how to represent & manage storage from within libvirt, which is a key requirement for being able to provision guest OSes from a remote host. Once this all gets fleshed out we’ll be able to manage plain old files, QCow/VMDK files, LVM volumes, iSCSI volumes, disks/partitions and even Fibre Channel with NPIV from within virt-manager and other libvirt based apps. This will improve the admin experience significantly.
Rich Jones has put together all sorts of fun apps on top of libvirt in recent months. virt-top is a command line tool to provide a ‘top’-like display of CPU, disk & network activity in all guest machines on a host. Virt-P2V is a Live CD based on Fedora which allows you to take an existing physical machine and turn its disk images into a virtual guest running under Xen / KVM full virtualization. Nagios-Virt is a plugin for the Nagios monitoring system which gives status on virtual machines. There are various other interesting admin tools along these lines in the works, so watch this space….
For Fedora 8 we have quite an ambitious set of goals to improve the security of the virtualization management stack. With the test2 date fast approaching things are starting to fall into place, although as ever it’s taken longer than expected. We really should have expected this, since it requires getting code accepted into 3 upstream projects (Xen, QEMU, KVM), releasing several brand new pieces of software (GTK-VNC and Virt Viewer), and updating many others (virt-manager & virt-install).
A couple of weeks ago DV released an update of libvirt which includes support for secure remote management, either tunnelled over SSH, or directly connecting with TLS/SSL and x509 certificate authentication. This was the culmination of many months’ work by Rich Jones, with review & feedback from the rest of the libvirt team. Oh, and it also supports IPv6 out of the box – the only open source virtualization management software to support IPv6 for remote management.
Yesterday I submitted another iteration of my patches to add password authentication and the VeNCrypt extension to QEMU’s VNC server. The latter allows VNC to be encrypted with SSL/TLS and authenticated with x509 certificates.
Today I submitted changes to Xen to remove the horrible VNC server implementation based on LibVNCServer. For those who don’t know, LibVNCServer is a hack which turns the vncserver code into a shared library for embedding in applications which need VNC server support. Unfortunately the code is utterly unintelligible, and has been retrofitted with multi-thread support which is completely and utterly broken. We’ve made countless fixes to the thread synchronization to address deadlocks & crashes and still have no confidence that it is truly working correctly. So I’ll be glad to see the back of LibVNCServer.
Staying on the VNC theme, we announced the first official release of GTK-VNC. This is a GTK widget which provides a VNC client viewer. It provides a core library written in C, using coroutines to allow it to be completely asynchronous while remaining single threaded. A wrapper library using PyGTK provides access to the widget functionality from Python. Two example programs illustrate use of the widget by re-implementing the traditional ‘vncviewer’ in a few tens of lines of code. The client is fully IPv6 aware, and as well as the traditional VNC authentication protocol, it implements the VeNCrypt extension to provide secure TLS/SSL encrypted communications, optionally using x509 certificates to authenticate.
Finally, we also announced the first release of Virtual Machine Viewer, a lightweight, minimal UI for interacting with the graphical console of virtual machines. It is intended as a replacement for vncviewer; since it integrates with libvirt there is no need to tell it the VNC display address – just give it the guest name, ID or UUID and it’ll figure out the rest.
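For example, something along these lines (the guest name here is just an example, and like the other libvirt tools it should accept a connection URI if the guest lives on another host):

$ virt-viewer rhel5fv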
There’s still plenty of work to be done before Fedora 8 is released, but it’s starting to come together nicely. The forthcoming Fedora 8 release will again be leading the pack when it comes to open source virtualization management.
Last week, after many months of development & testing, we finally did a new release of libvirt which includes secure remote management. Previously, usage of libvirt was restricted to apps running on the machine being managed. When you are managing a large data center of machines, requiring that an admin ssh into a machine to manage its virtual machines is clearly sub-optimal. XenD has had the ability to talk to its HTTP service remotely for quite a while, but this used cleartext HTTP and had zero authentication until Xen 3.1.0. We could have worked on improving XenD, but it was more compelling to work on a solution that would apply to all virtualization platforms. Thus we designed & implemented a small daemon for libvirt to expose the API to remote machines. The communications can be run over a native SSL/TLS encrypted transport, or indirectly over an SSH tunnel. The former will offer a number of benefits in the long term – not least of which is the ability to delegate management permissions per-VM and thus avoid the need to give virtual machine administrators root access.
So how do you make use of this new remote management? Well, installing libvirt 0.3.0 is the first step. Out of the box, the SSL/TLS transport is not enabled since it requires x509 certificates to be created. There are docs about certificate creation/setup so I won’t repeat them here – don’t be put off by any past experience setting up SSL with Apache – it’s really not as complicated as it seems. The GNU TLS certtool is also a much more user-friendly tool than the horrific openssl command line. Once the daemon is running, the only thing that changes is the URI you use to connect to libvirt. This is best illustrated by a couple of examples:
- Connecting to Xen locally:
$ ssh root@pumpkin.virt.boston.redhat.com
# virsh --connect xen:/// list --all
Id Name State
----------------------------------
0 Domain-0 running
6 rhel5fv blocked
- hello shut off
- rhel4x86_64 shut off
- rhel5pv shut off
- Connecting to Xen remotely using SSL/TLS:
$ virsh --connect xen://pumpkin.virt.boston.redhat.com/ list --all
Id Name State
----------------------------------
0 Domain-0 running
6 rhel5fv blocked
- hello shut off
- rhel4x86_64 shut off
- rhel5pv shut off
- Connecting to Xen remotely using SSH:
$ virsh --connect xen+ssh://root@pumpkin.virt.boston.redhat.com/ list --all
Id Name State
----------------------------------
0 Domain-0 running
6 rhel5fv blocked
- hello shut off
- rhel4x86_64 shut off
- rhel5pv shut off
- Connecting to QEMU/KVM locally:
$ ssh root@celery.virt.boston.redhat.com
# virsh --connect qemu:///system list --all
Id Name State
----------------------------------
1 kvm running
- demo shut off
- eek shut off
- fc6qemu shut off
- rhel4 shut off
- wizz shut off
- Connecting to QEMU/KVM remotely using SSL/TLS:
$ virsh --connect qemu://celery.virt.boston.redhat.com/system list --all
Id Name State
----------------------------------
1 kvm running
- demo shut off
- eek shut off
- fc6qemu shut off
- rhel4 shut off
- wizz shut off
- Connecting to QEMU/KVM remotely using SSH:
$ virsh --connect qemu+ssh://root@celery.virt.boston.redhat.com/system list --all
Id Name State
----------------------------------
1 kvm running
- demo shut off
- eek shut off
- fc6qemu shut off
- rhel4 shut off
- wizz shut off
Notice how the only thing that changes is the URI – the information returned is identical no matter how you connect to libvirt. So if you have an application using libvirt, all you need to do is adapt your connect URIs to support remote access. BTW, a quick tip – if you get tired of typing the --connect argument you can set the VIRSH_DEFAULT_CONNECT_URI environment variable instead.
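For example, re-using the KVM host from the listings above (assuming a bourne-style shell):

$ export VIRSH_DEFAULT_CONNECT_URI=qemu://celery.virt.boston.redhat.com/system
$ virsh list --all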
What about virt-install and virt-manager, you might ask? Well, they are slightly more complicated. During the creation of new virtual machines, both of them need to create files on the local disk (to act as virtual disks for the guest), possibly download kernel+initrd images for booting the installer, and enumerate network devices to set up networking. So while virt-manager can run remotely now, it will be restricted to monitoring existing VMs and basic lifecycle management – it won’t be possible to provision entirely new VMs remotely. Yet. Now that the basic remote management is working, we’re looking at APIs to satisfy the storage management needs of virt-manager. For device enumeration we can add APIs which ask HAL questions and pass the info back to the client over our secure channel. Finally, kernel+initrd downloading can be avoided by PXE booting the guests.
There’s lots more to talk about, such as securing the VNC console with SSL/TLS, but I’ve not got time for that in this blog posting. Suffice to say, we’re well on our way to our Fedora 8 goals for secure remote management. Fedora 8 will be the best platform for virtualization management by miles.
For the last couple of years all the hype around open source virtualization has been about Xen. Unfortunately, after several years Xen is still not upstream in the mainline kernel, the main codebase being a huge out-of-tree patch persistently stuck on obsoleted kernel versions. The Xen paravirt_ops implementation is showing promise, but it’s a long way off being a full solution since it doesn’t provide Dom0 or ia64/ppc/x86_64 yet. Then out of nowhere, 6 months ago, a new contender arrived in the form of KVM, almost immediately finding itself merged upstream. Now you can’t claim to be offering state of the art virtualization without including both KVM and Xen. We had to have KVM in Fedora 7. With excellent foresight, when working to integrate Xen in Fedora Core 5, Daniel Veillard had the idea to create a technology-independent management library for virtualization platforms. This is libvirt. The core idea was to provide a stable library API for application developers to target, insulating them from implementation-specific APIs in the changing base virtualization platform. This API would be LGPL to allow both the community and 3rd party vendors to build applications.
On top of libvirt a suite of applications is evolving – virt-manager, virt-install and cobbler/koan, to name the three most popular thus far. Fast forward to Fedora 7, where we were looking to introduce support for KVM into the distribution. We could simply have taken the approach of throwing in the kernel modules & QEMU code for KVM, sending out a fancy press release & letting users figure out the gory details. With libvirt though, we were in a position to do more…much, much more. So the precious few months between the end of our RHEL-5 GA work and the Fedora 7 release were spent hacking on libvirt to add in support for managing QEMU and KVM virtual machines.
The results speak for themselves. In Fedora 7 you can fire up tools like virt-manager & virt-install and manage KVM virtual machines in exactly the same way you managed Xen virtual machines. You are free from being locked into Xen. You avoid having to learn new command sets. You are able to pick the best open source virtualization technology to accomplish the task at hand. This is Fedora innovation & vision at its best! No other distribution even comes close…yet… None of this technology has any serious distribution-specific logic & we want everyone to reap the rewards. Thus we actively work with developers from other distributions, in particular OpenSolaris and Debian, to help them integrate (& thus take advantage of) libvirt and the applications built upon it.
So if you’re using Fedora 7, here are some of the things to look out for….
- Choosing which hypervisor to use:
Both KVM and Xen require root privileges since they need to talk to various kernel devices & drivers. Not everyone has root on their machine. With KVM you can allow unprivileged users the ability to create VMs by changing the ownership on /dev/kvm. The plain unaccelerated QEMU can be used as a general purpose emulator too, without any elevated privileges. libvirt uses ‘hypervisor URIs’ to choose between backend implementations:
xen (or no explicit URI): this connects to Xen. root has full access, non-root has read only access.
qemu:///system : this connects to the main QEMU/KVM backend driver. root has full access, non-root has read only access.
qemu:///session : this connects to the per-user private QEMU backend driver. Full access irrespective of user, since there is one instance per user account.
All libvirt tools accept a --connect argument to choose the hypervisor URI to use. virt-manager & virt-install will try to pick either xen or qemu:///system depending on which kernel you’re running, but you can always pass an explicit argument. So to use virsh as a non-root user with the per-user QEMU instance, run virsh --connect qemu:///session (see the short example below).
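As a concrete illustration (the username ‘fred’ and the exact permissions are only examples; a dedicated group or udev rule would do equally well), you might grant a non-root user access to KVM acceleration, and then let them manage their own private QEMU/KVM instance. As root:

# chown root:fred /dev/kvm
# chmod 0660 /dev/kvm

Then, as fred:

$ virsh --connect qemu:///session list --all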
- Using virsh instead of xm:
You may be familiar with Xen’s xm command line tool. The capabilities of this valuable administrative tool are also in virsh, the benefit being that it works with Xen and KVM/QEMU, as well as any future platforms libvirt is ported to. To use it with Xen, just run virsh with no arguments. To use it with KVM/QEMU, add in --connect qemu:///system. The virsh help command shows the (huge!) list of commands at your disposal (a few common ones are shown below). Oh, and the VIRSH_DEFAULT_CONNECT_URI environment variable can be used if you don’t want to specify --connect every time.
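For example, here are rough equivalents of some everyday xm commands (the guest name is just taken from the listings elsewhere in this post):

$ virsh list --all           (like ‘xm list’)
$ virsh start rhel5pv        (like ‘xm create rhel5pv’, for an already defined guest)
$ virsh shutdown rhel5pv     (like ‘xm shutdown rhel5pv’)
$ virsh console rhel5pv      (like ‘xm console rhel5pv’)
$ virsh dominfo rhel5pv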
- Changing the config of an existing guest:
Every virtualization technology has a different configuration file format (or even none at all, relying on command line args in QEMU’s case). As expected, libvirt provides a consistent way to change the configuration of a guest virtual machine. Every guest has an XML document describing its setup. You can see this configuration by using the virsh dumpxml [guest] command. Save this XML to a file, edit it in vi and then use it to update the master config by running virsh define [xmlfile]. There’s a page describing the various XML elements. Afraid of looking at XML? Well, virsh provides commands for changing simple things like memory & CPU count, while virt-manager also allows disk and network devices to be added/removed. A short example follows below.
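A quick sketch of the edit cycle (again re-using a guest name from the listings above; setmem & setvcpus are the commands I mean for changing memory & CPU count, and if I recall correctly setmem takes kilobytes, so check virsh help for the exact units):

$ virsh dumpxml rhel5pv > rhel5pv.xml
  …edit rhel5pv.xml to taste…
$ virsh define rhel5pv.xml
$ virsh setmem rhel5pv 524288
$ virsh setvcpus rhel5pv 2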
- Connecting virtual machines to the network:
libvirt allows for many ways to connect a virtual machine to the real network, but two of them stand out as the common cases. First, the traditional Xen approach is to set up a bridge device in the host OS with the physical ethernet device enslaved, along with zero or more guest interfaces. Xen ships with a shell script which munges networking on startup to put eth0 into a bridge, though it is preferable to just use the standard initscripts to define the bridge & device. In libvirt we call this a “shared physical device” setup – your physical device is, in effect, shared between the guest and the host. Bridging works well if you are on a permanently wired LAN. It fails miserably if you have a dynamic network setup, as typically found on laptops. So the second approach is to have a bridge device not connected to any physical device, attach guests to this bridge, and then NAT their traffic out to the real physical device. In libvirt we call this a “virtual network” since it’s not seen outside of a single host; all LAN traffic appears to originate from that single host. David Lutterkort describes the setup in a little more detail
We try to make the choice easy in virt-install – if there is a bridge device with a physical interface enslaved we set up a ‘shared physical device’ style config, otherwise we fall back to a ‘virtual network’ config. You can be explicit when choosing between the two using --network bridge:eth0 (connect to the bridge called eth0) or --network network:default (connect to the virtual network called ‘default’). Virt-manager has an extra screen in the VM creation wizard to let you choose the networking setup. A sketch of the command line usage follows below.
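For instance, a guest creation might look something like this (the guest name, disk path, size and install URL are all illustrative, and the option names are as I remember them, so check virt-install --help for the authoritative list):

$ virt-install --name demo --ram 512 \
      --file /var/lib/xen/images/demo.img --file-size 4 \
      --network bridge:eth0 --vnc \
      --location http://download.example.org/fedora/7/Fedora/i386/os/

Swap --network bridge:eth0 for --network network:default to use the NAT based virtual network instead.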
I can only just scratch the surface of this in a blog posting, and of course the work on libvirt is far from complete. A major focus for Fedora 8 will be providing secure remote management. In parallel there’s ongoing work to expand coverage of virtual hardware configuration to audio, USB, and more advanced serial consoles. We’d also welcome contributions from anyone interested in adding support for OpenVZ, User Mode Linux, VMware and VirtualBox to libvirt…
One of the most frequently requested features for both libvirt and virt-manager is the ability to manage virtual machines remotely. When considering this problem, one immediately thinks of the security implications – most of the communications being used for local management have minimal security. For example, with Xen, XenD offers access over TCP with either XML-RPC or a custom SEXPR protocol. In both cases though, the communication channel is unencrypted and the authentication is minimal. Even though the new XML-RPC protocol does offer authentication against PAM, this is totally unsuitable for use off-host because the passwords would be transmitted in clear text :-( The core management APIs are not the only area using a cleartext communication channel – the migration of virtual machines between hosts is also done in the clear – not great if the OS running in the VM has any sensitive data in memory. The virtual console capability is based on the VNC remote framebuffer protocol – again, everything is transmitted in cleartext.
When raising the question of secure remote management, a frequently suggested solution is to just tunnel everything over SSH. While this would obviously address the issue of wire level encryption just fine, it is less desirable when considering the issues of authentication, access control, and key management. To manage virtual machines using SSH tunnelling one would have to create user accounts on each Dom0 for anyone who can manage VMs. Integrity / security of the Dom0 hosts is critical in a virtualized environment because any single host is running many VMs, so giving people shell access with SSH is not at all desirable. It is possible to restrict a user’s access by only allowing authentication with public keys, and using magic in the authorized_keys file to whitelist a set of commands that can be run (roughly as sketched below).
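To illustrate the kind of ‘magic’ I mean (the forced command, options and key here are of course just placeholders), an authorized_keys entry can be locked down along these lines:

command="/usr/bin/virsh list --all",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAAB3Nza… admin@workstation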
That said, in a data center with 1000’s of hosts, one certainly does not want to be maintaining 10’s of 1000’s of authorized_keys files – just think of the work involved in revoking access for an admin who resigns / is fired. Authentication via SSH is also at a rather coarse level – the user can either log into the host, or not – there’s no facility for saying ‘user A can manage virtual machines B and C, but not virtual machine D or E’. So there would need to be a second level of access control on top of that provided by SSH. Thus, as well as managing 10’s of 1000’s of authorized_keys files, there’s the additional hassle of managing users in the management protocol itself. While on the subject of SSH, when logging into a machine for the first time, how many people actually verify that the host key is correct? Not many, I suspect. I know I usually just type ‘yes’ every time.
So IMHO, using SSH as the foundation for secure remote management of virtual machines in a large data center is not really practical. The answer to these problems is to utilize TLS (the successor to SSL). Contrary to popular belief TLS is not just used for web servers / browsers / HTTPS – it is a general purpose protocol which can be layered onto pretty much any network communication protocol with surprisingly little effort. So, how does it deal with the issues identified above? On the question of host key checking, the key point is that TLS uses X509 certificates, and with that comes the concept of a Certificate Authority. A priori, one only has to trust the certificate authority (CA). Thus, when connecting to a server for the first time, one validates the certificate presented by checking that it was signed by the trusted CA, and by comparing the server hostname against the ‘common name’ embedded in the signed certificate. Obviously, if one is communicating directly using the virtualization management server’s native protocol there is no need to give any shell access to the Dom0 host. Each client connecting to the server would have their own ‘client certificate’ which is signed by the certificate authority. Upon accepting a client connection, the server validates the CA signature on the client’s certificate, and also checks the ‘certificate revocation list’ (CRL) published by the CA. Assuming the CRL is pushed to each Dom0 host on a periodic basis, there is only one place the admin needs to go to revoke a user’s access to all hosts – the CA admin site. Fine-grained access control per VM can be keyed off a whitelist of client certificates, either based on the ‘common name’ or ‘fingerprint’ fields in the cert.
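To make the CA model a little more concrete, certificate setup with the GNU TLS certtool goes roughly like this (a sketch from memory; the file names and template contents are illustrative, so check the certtool documentation for the details):

$ certtool --generate-privkey > ca-key.pem
$ certtool --generate-self-signed --load-privkey ca-key.pem \
      --template ca.info --outfile ca-cert.pem

$ certtool --generate-privkey > server-key.pem
$ certtool --generate-certificate --load-privkey server-key.pem \
      --load-ca-certificate ca-cert.pem --load-ca-privkey ca-key.pem \
      --template server.info --outfile server-cert.pem

Here ca.info would contain something like ‘cn = My Org’, ‘ca’ and ‘cert_signing_key’, while server.info names the host (‘cn = pumpkin.virt.boston.redhat.com’) plus ‘tls_www_server’, ‘encryption_key’ and ‘signing_key’. Client certificates are produced in the same way, with ‘tls_www_client’ in the template.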
Thus my long term vision for remote management is to TLS enable all the network protocols involved. As a first step Rich Jones is working on a TLS enabled daemon for libvirt which will allow apps to securely manage any virtualization driver supported by libvirt.
This is the core first step in getting virt-manager operating remotely. The obvious next step in this process is to enable TLS in the VNC protocol. The main open source VNC server, RealVNC, does not offer any standard TLS support – they restrict this stuff to the commercial version. Over the years several people have offered patches to add TLS to VNC, but they’ve never been accepted into the RealVNC codebase, which has unfortunately led to a fork – VeNCrypt. Thus for QEMU and the VNC daemon used by the Xen paravirt framebuffer, I’d anticipate implementing the VNC protocol extensions defined by VeNCrypt to add TLS support. I’m currently hacking on a GTK VNC viewer widget to replace the current VNC viewer in virt-manager, with the express purpose of supporting TLS. The final stage of the plan will involve adapting the Xen and QEMU migration protocols to be layered over a TLS connection. There are many interesting questions / challenges to be worked out along the way, and certainly a hell of a lot of coding to be done. The payoff at the end will be well worth it though.
As I mentioned earlier, Rich Jones is working on the libvirt management protocol, and I’m attacking the VNC client side of things. That still leaves a lot of hacking to be done – the server side VNC TLS impl in QEMU and the Xen framebuffer daemon are both sizeable chunks of work, and I’m not aware of anyone tackling the migration stuff either. So there’s plenty of coding work for people who are interested in this….