This is a friendly alert about the major plans we have for Xen kernels in the Fedora 9 timeframe… Also syndicated on the fedora-xen mailing list.
Since we first added Xen in Fedora Core 4, our kernels have been based on a forward-port of XenSource’s upstream Xen kernels to newer LKML releases. For a long time we ported their 2.6.16 tree to 2.6.18. Now we do ports of their 2.6.18 tree to 2.6.21/22/23, etc. At the same time, upstream Linux gained Xen support for i386 DomU, will shortly gain x86_64 DomU, and is generally getting ever more virtualization capabilities.
As everyone knows, we have tended to lag behind Fedora’s state-of-the-art
bare metal kernels by several releases due to the effort required to port
Xen to newer LKML releases. Despite our best efforts, this lag has been
getting worse, not better.
We have decided that this situation is unacceptable for Fedora 9. We simply cannot spend more time forward porting Xen kernels. Either Xen has to be dropped entirely, or we need a different strategy for dealing with the kernels. Since people seem to use Xen, we have decided not to drop it :-)
So the plan is to re-focus 100% of all Xen kernel efforts onto paravirt_ops.
LKML already has i386 pv_ops + Xen DomU. We intend to build on this to
add:
- x86_64 pv_ops
- x86_64 Xen DomU on pv_ops
- i386 & x86_64 Xen Dom0 on pv_ops
- memory balloon
- paravirt framebuffer
- save/restore
All of this will be based on the same LKML release as the Fedora bare metal kernels. If all goes to plan it may even be in the base kernel RPM, instead of kernel-xen, but that’s a minor concern compared to the actual coding.
Getting all this done for Fedora 9 is seriously ambitious, but it is the only
long term sustainable option, other than dropping Xen entirely.
What this means though, is that Fedora 9 Xen will certainly go through periods of instability and be even buggier than normal. F9 may well end up lacking features compared to Xen in Fedora 8 & earlier (e.g. no PCI device passthrough, or CPU hotplug). On the plus side though, we will be 100% back in sync with bare metal kernel versions & hopefully even have a lot of this stuff merged in LKML to make ongoing maintenance sustainable. Short term pain; long term gain!
I don’t have an ETA for when any of these kernel changes will appear in rawhide – some time before the F9 feature freeze date is the best guesstimate. We will alert people when the time comes. There is an F9 feature page with some amount of info about the plan.
In terms of Fedora 6/7/8 maintenance… The kernel-xen in these existing releases already lags behind the bare metal kernel version by 2-3 releases. We do not intend to continue trying to rebase the kernel-xen in existing Fedora releases. They will essentially be in important-bug-fix-only mode. This is necessary to enable maximum resources to be focused on the critical Fedora 9 Xen work.
This broadcast was on behalf of some very busy Fedora Xen kernel developers :-)
For the short story, read the announcement. For the long story, read on…
The libvirt project provides a hypervisor agnostic API for managing virtual machines (and their associated resources like their network & storage). The libvirt API has 3 core characteristics: simplicity – minimal code is required to get useful work done; standardization – the same API can be used across any virtualization backend; stability – the public API and XML descriptions are guaranteed stable across releases. In the short time it has been around, libvirt has proved impressively popular, finding its way into all the main Linux distributions, as well as Open Solaris. There are drivers in libvirt for Xen, QEMU, KVM, OpenVZ and soon Linux-VServer. If someone contributes VMWare, UML, and VirtualBox support we’ll basically have complete coverage for all common Linux virtualization platforms in a single open source API, usable by open & closed source apps alike (libvirt is LGPL licensed explicitly to enable use by closed source apps).
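To give a feel for the simplicity point, here is a minimal sketch (error handling trimmed down; passing NULL asks libvirt to pick the default hypervisor) which lists the running guests via the C API:

  #include <stdio.h>
  #include <stdlib.h>
  #include <libvirt/libvirt.h>

  int main(void) {
      /* NULL = connect to the default hypervisor (Xen, QEMU, ...) */
      virConnectPtr conn = virConnectOpenReadOnly(NULL);
      if (conn == NULL) {
          fprintf(stderr, "unable to connect to the hypervisor\n");
          return EXIT_FAILURE;
      }

      int ids[64];
      int n = virConnectListDomains(conn, ids, 64);
      for (int i = 0; i < n; i++) {
          virDomainPtr dom = virDomainLookupByID(conn, ids[i]);
          if (dom != NULL) {
              printf("guest: %s\n", virDomainGetName(dom));
              virDomainFree(dom);
          }
      }

      virConnectClose(conn);
      return EXIT_SUCCESS;
  }

The exact same code works whether the host is running Xen, QEMU or KVM – only the connection URI differs.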
In the enterprise world, people like to form committees to come up with comprehensive standards to cover everything you can think of, and then go off and write multiple implementations of these standards ;-) For virtualization, the ‘standard’ is based around the DMTF / CIM specification. Pretty much since the start of the libvirt project, people have asked why we didn’t just implement the CIM APIs directly. We always wanted the core of our public APIs to be simpler to use though. At the same time, we recognise that some people really want to use a CIM API for their virtualization management tools. Thus we’ve always supported the idea of writing a CIM provider on top of libvirt. Not only would this mean only a single CIM provider implementation is needed for all of the virt platforms supported in libvirt, but it gives good interoperability between tools using libvirt vs CIM.
Over the past year & a half (or more), the Xen project has developed a CIM provider using the Xen-API as the underlying implementation. Indeed at one time this actually used libvirt instead of Xen-API, but back when the switch was made to Xen-API, KVM wasn’t around so the benefits of a hypervisor agnostic libvirt API were more hypothetical than tangible. KVM does eventually appear on the scene & sure enough, the topic of developing CIM providers for KVM soon comes up. KVM though doesn’t have any form of existing management API to deal with – a KVM guest is ‘managed’ by running a QEMU process, and killing it when you’re done. QEMU provides a mini admin shell for controlling various aspects of its behaviour. Turning this into a formal API is hard work – libvirt has already done it once and you don’t want to have to duplicate that :-)
At the recent KVM Forum, IBM announced that they were going to develop a CIM provider based on libvirt, and release it to the open source community. Fast forward a few months and I’m happy to announce their plans have come to fruition. Furthermore, this is not just a 3rd party add-on: the CIM provider is to be a core part of the libvirt project & ecosystem. We’re collaborating on ideas, API requirements, hosting for our infrastructure (websites, SCM repos, mailing lists, IRC channels, etc) and above all working towards the common goal of providing state-of-the-art open source management APIs, capable of working across any virtualization platform.
In his recent blog posting on redirected direct rendering, Kristian happened to mention Clutter, a toolkit which allows you to do 3d graphics without first needing to acquire a PhD in OpenGL. I made a mental note to give it a try sometime.
On Saturday I spent a few hours doing the major upgrade from Fedora 6 to Fedora 8 on my laptop & desktop (skipping F7). I did it the crazy way, just changing my YUM config files & letting it upgrade everything. I can’t say it was a painless process, but I do finally have two working boxes on F8 now. I also took the opportunity to switch my IBM T60p laptop over to the Avivo driver instead of the VESA driver, which worked without a hitch.
Back on topic. After the upgrade to F8, I poked at the repos and found that (nearly) all the Clutter related bits are packaged & available to Fedora users already. There is just an annoying BuildRequires missing in the python bindings spec file, which meant the RPM was missing the GTK & GST APIs for Clutter. A quick rebuild fixed that issue. You may remember I’ve been working on a GTK widget for displaying VNC sessions. One evil thought led to another even more evil thought, and thus I ended up writing a VNC viewer program which displays multiple VNC sessions on a spinning cube. To make it even more evil, I decided not to restrict it to a cube, and instead generalized it to an arbitrary number of sides, as sketched below.
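The generalization itself is just trigonometry. As a sketch of the idea (my own illustration, not the actual viewer code): each face gets rotated about the Y axis by its share of 360 degrees, then pushed out to the apothem of the N-sided prism:

  #include <math.h>
  #include <stdio.h>

  int main(void) {
      int nsides = 6;        /* six VNC sessions => hexagonal "cube" */
      double width = 500.0;  /* each session is 500x500 pixels */

      /* Distance from the prism's axis to the centre of each face */
      double radius = (width / 2.0) / tan(M_PI / nsides);

      for (int i = 0; i < nsides; i++) {
          double angle = 360.0 * i / nsides;
          printf("face %d: rotate %.1f deg about Y, offset z by %.1f\n",
                 i, angle, radius);
      }
      return 0;
  }

A plain cube then simply falls out as the nsides = 4 case, and rotating the whole group of face actors together spins the prism.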
The results are spectacular, though a static screenshot doesn’t really do it justice…
Ctrl+Alt+PageUp/PageDown lets you rotate to the previous/next session respectively. The mouse & keyboard events are fully plumbed in so you can actually interact with each session. In this example I just started 6 VNC server instances running TWM at 500×500 pixels, but they could have been real GNOME desktops too. The only issue is that I’ve not yet figured out how to do correct depth sorting of the desktops. You don’t notice this problem in the screenshot, since I’ve just got them all rendering at 50% opacity ;-) It is impressively slow & impressively fast at the same time. Slow because I’m using Avivo which doesn’t have any real 3d rendering support yet; fast because I’m amazed it is actually working at all :-)
Now to hook this all up into the next version of virt-manager….. just kidding ;-P
A lot has been going on in the libvirt universe recently as it continues on its path to world domination. A couple of days ago Daniel Hokka Zakrisson (that’s the 4th Daniel involved in libvirt!) surprised us all by announcing the start of a Linux-VServer driver for libvirt. This is the second container based virtualization driver, following on from the previous OpenVZ driver work. On the KVM front we now have support for save & restore thanks to Jim Paris, and the Xen & KVM drivers can also do CDROM media changes, which will make Windows guest installs much more friendly. A bunch of work is taking place around NUMA to allow guests to be intelligently placed to take advantage of the capabilities of large NUMA boxes.
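As an aside for the curious, save & restore is exposed to apps through the same pair of calls regardless of hypervisor. A rough sketch, with the guest name & file path invented for the example, and error checking omitted:

  #include <libvirt/libvirt.h>

  int main(void) {
      virConnectPtr conn = virConnectOpen(NULL);

      /* Snapshot the guest's memory image to disk & stop it... */
      virDomainPtr dom = virDomainLookupByName(conn, "demo-guest");
      virDomainSave(dom, "/var/lib/libvirt/demo-guest.save");
      virDomainFree(dom);

      /* ...and later bring it back exactly where it left off */
      virDomainRestore(conn, "/var/lib/libvirt/demo-guest.save");

      virConnectClose(conn);
      return 0;
  }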
I’ve been working on integrating SASL support into the remote management driver. This will augment our existing SSL/TLS + x509 security model, to provide fun stuff like Kerberos integration (single sign on!) and plain old username/password auth (with data encryption thrown in too). This will tie in very nicely with the FreeIPA project, which is providing a pre-integrated Kerberos + LDAP solution for Linux users to compete with Active Directory. I installed FreeIPA in a Fedora 7 guest a few weeks back and can say it is looking very impressive indeed.
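Since this work is still in progress the details may yet shift, but roughly speaking apps will just open their connection with an auth callback and let libvirt negotiate SASL underneath (the hostname here is made up):

  #include <stdio.h>
  #include <libvirt/libvirt.h>

  int main(void) {
      /* The xen+tcp:// scheme selects the remote driver; the default
       * auth callback prompts for username/password, or an existing
       * Kerberos ticket is used silently for single sign on */
      virConnectPtr conn = virConnectOpenAuth("xen+tcp://myhost.example.com/",
                                              virConnectAuthPtrDefault, 0);
      if (conn == NULL) {
          fprintf(stderr, "failed to authenticate to remote host\n");
          return 1;
      }
      printf("connected to %s\n", virConnectGetURI(conn));
      virConnectClose(conn);
      return 0;
  }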
There have been lengthy discussions & arguments about how to represent & manage storage from within libvirt, which is a key requirement for being able to provision guest OSes from a remote host. Once this all gets fleshed out we’ll be able to manage plain old files, QCow/VMDK files, LVM volumes, iSCSI volumes, disks/partitions and even Fibre Channel with NPIV from within virt-manager and other libvirt based apps. This will improve the admin experience significantly.
Rich Jones has put together all sorts of fun apps on top of libvirt in recent months. virt-top is a command line tool to provide a ‘top’ like display of CPU, disk & network activity in all guest machines on a host. Virt-P2V is a Live CD based on Fedora which allows you to take an existing physical machine and turn its disk images into a virtual guest running under Xen / KVM fullvirt. Nagios-Virt is a plugin for the Nagios monitoring system which gives status on virtual machines. There are various other interesting admin tools along these lines in the works, so watch this space…