Things have been progressing well with the GTK-VNC widget. The 0.3.0 release a few weeks back fixed a couple of co-routine race conditions, fixed portability to Solaris and added compatibility for UltraVNC brokenness – it claims support for RFB version 3.4, which doesn’t technically exist. 0.3.1 was a brown paper bag release a day later, due to the ‘make dist’ process going wrong with 0.3.0; say no more. Today Anthony released version 0.3.2, which adds a GThread based co-routine implementation to provide portability to platforms lacking ucontext support (yes, I’m looking at you Windows/cygwin). It also adds support for the RRE server encoding; although not commonly used, it’s in the spec so it’s worth supporting.
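For anyone wondering what ucontext has to do with co-routines: a co-routine implementation only needs a way to capture one execution context, switch to another, and later switch back, which is exactly what the ucontext API offers. The snippet below is a minimal illustrative sketch of that idea in C – it is not gtk-vnc’s actual code. On platforms without ucontext, a GThread based implementation can achieve the same effect, presumably by parking each co-routine on its own thread and handing control back and forth so only one ever runs at a time.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, co_ctx;
static char co_stack[64 * 1024];        /* private stack for the co-routine */

/* The co-routine: does some work, yields to the caller, then finishes */
static void coroutine_fn(void)
{
    printf("co-routine: started, yielding\n");
    swapcontext(&co_ctx, &main_ctx);    /* save our context, resume main */
    printf("co-routine: resumed, returning\n");
}                                       /* returning jumps to uc_link */

int main(void)
{
    getcontext(&co_ctx);
    co_ctx.uc_stack.ss_sp = co_stack;
    co_ctx.uc_stack.ss_size = sizeof(co_stack);
    co_ctx.uc_link = &main_ctx;         /* context to resume when fn returns */
    makecontext(&co_ctx, coroutine_fn, 0);

    swapcontext(&main_ctx, &co_ctx);    /* run the co-routine until it yields */
    printf("main: co-routine yielded, resuming it\n");
    swapcontext(&main_ctx, &co_ctx);    /* run it to completion */
    printf("main: co-routine finished\n");
    return 0;
}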
For the next releases we’ll have support for the Tight encoding as used by TightVNC – a more advanced encoding, in some cases using JPEG as its compression method, which is interesting. We’re also in communication with the VMWare team to see if we can write code to support their RFB extensions, for which they recently got official extension numbers assigned. We decided to apply the ‘release early, release often’ mentality in earnest, and thus we’re aiming for regular monthly releases for the foreseeable future. Meanwhile John is continuing to develop Vinagre, a long overdue modern VNC client taking full advantage of the GNOME infrastructure & desktop integration points.
If you are a Perl developer you may have come across CPANTS, which analyses and ranks all distributions and authors on CPAN based on their Kwalitee score. To quote…
What is a good module? That's hard to say.
What is good code? That's also hard to say.
"Quality" is not a well-defined term in computing ... and especially not Perl.
One man's Thing of Beauty is another man's Evil Hack
Since we can't define quality, how do we write a program to assure it?
Kwalitee: It looks like quality, it sounds like quality, but it’s not quite quality.
I was rather disappointed to discover my own Kwalitee scores were poor, so I have been spending some time improving matters. The key is to check the Kwalitee score of a new release before uploading it to CPAN. Conveniently, the very code used to generate the rankings is available to download and run offline, in the form of the Module-CPANTS-Analyse distribution. Inconveniently, this wasn’t in Fedora…until today. I finally got all the dependent modules through review and built for rawhide, with F-8 to follow shortly… So if you want to test the Kwalitee of your Perl modules before release, just run:
# yum install perl-Module-CPANTS-Analyse
$ cpants_lint.pl /path/to/my/module.tar.gz
It already helped me get Test-AutoBuild perfect…
$ cpants_lint.pl Test-AutoBuild-1.2.2.tar.gz
Checked dist Test-AutoBuild-1.2.2.tar.gz
Kwalitee rating 112.50% (27/24)
Congratulations for building a 'perfect' distribution!
112.50%, you might wonder? Yes, indeed. Only 24 of the checks are considered mandatory at this time. The other 3 are optional, but good practice nonetheless. To get to 100% you need to pass all the mandatory checks. If you manage this, then passing any optional checks will give you an additional boost.
I finally got around to doing some more work on Test-AutoBuild – a build and test automation framework for upstream developers. It checks sources out of SCM repos (CVS, Subversion, SVK, GNU Arch, Mercurial, Perforce) and runs any build and test processes. It detects any RPMs generated during the build and publishes them in a YUM repo. It also publishes HTML status pages showing build logs, the list of generated packages, any artifacts generated (e.g. code test coverage reports, API documentation) and changelogs from the SCM repo. It is a similar system to CruiseControl, but is more powerful since it directly understands the idea of module dependencies, and so can intelligently manage chained builds of multiple dependent modules.

We use this in the ET group for testing our virtualization stack. Our nightly builder builds libvirt and gtk-vnc first, then builds virt-viewer and virt-install against these builds, and finally builds virt-manager against all of them. So any change in libvirt gets validated to make sure it doesn’t break the apps using libvirt. Since autobuild understands the dependencies, it can make intelligent build caching decisions: e.g. if there were new changes in the libvirt SCM repo, but none in the virt-manager repo, it will still rebuild virt-manager as a regression test.
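To make that caching rule concrete, here is a tiny self-contained C sketch of the decision autobuild effectively makes each cycle. The module names mirror our nightly builder, but the code is purely illustrative – it is not taken from Test-AutoBuild and the dependency lists are abbreviated.

#include <stdio.h>
#include <string.h>

/* A module is rebuilt if its own repo changed, or if any module it
 * depends on was rebuilt this cycle (the regression-test case).
 * Modules are assumed to be listed in dependency order. */
struct module {
    const char *name;
    const char *deps[4];    /* NULL terminated list of dependencies */
    int scm_changed;        /* did its own SCM repo change? */
    int rebuilt;
};

static void plan_cycle(struct module *mods, int count)
{
    for (int i = 0; i < count; i++) {
        mods[i].rebuilt = mods[i].scm_changed;
        for (int d = 0; mods[i].deps[d] != NULL; d++)
            for (int j = 0; j < i; j++)
                if (strcmp(mods[j].name, mods[i].deps[d]) == 0 && mods[j].rebuilt)
                    mods[i].rebuilt = 1;
        printf("%-14s %s\n", mods[i].name,
               mods[i].rebuilt ? "rebuild" : "use cached build");
    }
}

int main(void)
{
    struct module mods[] = {
        { "libvirt",      { NULL },                       1, 0 },
        { "gtk-vnc",      { NULL },                       0, 0 },
        { "virt-manager", { "libvirt", "gtk-vnc", NULL }, 0, 0 },
    };
    plan_cycle(mods, 3);    /* only libvirt changed, virt-manager still rebuilds */
    return 0;
}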
This new release, version 1.2.1, was all about making the SCM checkout process more reliable. Previously, if a module could not be checked out (e.g. due to a server being down, or a config file typo) the entire build cycle would be aborted. With the new release, the troublesome module is simply skipped and the SCM logs are published for the admin to diagnose – other modules in the build cycle continue to be built.
This is a friendly alert of the major plans we have for Xen kernels in the Fedora 9 timeframe… Also syndicated on the fedora-xen mailing list.
Since we first added Xen in Fedora Core 4, our kernels have been based on a forward-port of XenSource’s upstream Xen kernels to newer LKML releases. For a long time we ported their 2.6.16 tree to 2.6.18. Now we do ports of their 2.6.18 tree to 2.6.21/22/23, etc. At the same time, upstream Linux gained Xen support for i386 DomU, will shortly gain x86_64 DomU, and is generally getting ever more virtualization capabilities.
As everyone knows, we have tended to lag behind Fedora’s state-of-the-art bare metal kernels by several releases due to the effort required to port Xen to newer LKML releases. Despite our best efforts, this lag has been getting worse, not better.
We have taken the decision that this situation is unacceptable for Fedora 9. We simply cannot spend more time forward porting Xen kernels. Either Xen has to be dropped entirely, or we need a different strategy for dealing with the kernels. Since people seem to use Xen, we have decided not to drop it :-)
So the plan is to re-focus 100% of all Xen kernel efforts onto paravirt_ops. LKML already has i386 pv_ops + Xen DomU. We intend to build on this to add:
- x86_64 pv_ops
- x86_64 Xen DomU on pv_ops
- i386 & x86_64 Xen Dom0 on pv_ops
- memory balloon
- paravirt framebuffer
- save/restore
All of this will be based on the same LKML release as the Fedora bare metal kernel. If all goes to plan it may even be in the base kernel RPM, instead of kernel-xen, but that’s a minor concern compared to the actual coding.
Getting all this done for Fedora 9 is seriously ambitious, but it is the only long term sustainable option, other than dropping Xen entirely.
What this means, though, is that Fedora 9 Xen will certainly be going through periods of instability and will be even buggier than normal. F9 may well end up lacking features compared to Xen in Fedora 8 & earlier (e.g. no PCI device passthrough, or CPU hotplug). On the plus side, though, we will be 100% back in sync with bare metal kernel versions & hopefully even have a lot of this stuff merged in LKML to make ongoing maintenance sustainable. Short term pain; long term gain!
I do not have an ETA for when any of these kernel changes will appear in rawhide – some time before the F9 feature freeze date is the best guesstimate. We will alert people when the time comes. There is an F9 feature page with some info about the plan.
In terms of Fedora 6/7/8 maintenance… the kernel-xen in these existing releases already lags behind the bare metal kernel version by 2-3 releases. We do not intend to continue trying to rebase the kernel-xen in existing Fedora releases; it will essentially be in important-bug-fix-only mode. This is necessary to enable maximum resources to be focused on the critical Fedora 9 Xen work.
This broadcast was on behalf of some very busy Fedora Xen kernel developers :-)
For the short story, read the announcement. For the long story, read on…
The libvirt project provides a hypervisor agnostic API for managing virtual machines (and their associated resources like their network & storage). The libvirt API has 3 core characteristics: simplicity – minimal code required to get useful work done; standard – the same API can be used across any virtualization backend; stable – a guaranteed stable public API and XML descriptions across releases. In the short time it has been around, libvirt has proved impressively popular, finding its way into all the main Linux distributions, as well as OpenSolaris. There are drivers in libvirt for Xen, QEMU, KVM, OpenVZ and soon Linux-VServer. If someone contributes VMWare, UML, and VirtualBox support we’ll basically have complete coverage of all common Linux virtualization platforms in a single open source API, usable by open & closed source apps alike (libvirt is LGPL licensed explicitly to enable use by closed source apps).
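To give a feel for the ‘simplicity’ claim, here is roughly what ‘minimal code required to get useful work done’ looks like against the libvirt C API – a sketch that connects to the default hypervisor and prints a line per running guest (most error handling omitted for brevity):

/* build with: gcc list-guests.c $(pkg-config --cflags --libs libvirt) */
#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    /* NULL means "the default hypervisor URI for this host" */
    virConnectPtr conn = virConnectOpenReadOnly(NULL);
    if (conn == NULL) {
        fprintf(stderr, "unable to connect to the hypervisor\n");
        return 1;
    }

    int ids[64];
    int n = virConnectListDomains(conn, ids, 64);   /* IDs of running guests */

    for (int i = 0; i < n; i++) {
        virDomainPtr dom = virDomainLookupByID(conn, ids[i]);
        virDomainInfo info;
        if (dom != NULL && virDomainGetInfo(dom, &info) == 0)
            printf("%-20s %lu KB, %d vcpus\n",
                   virDomainGetName(dom), info.memory, info.nrVirtCpu);
        if (dom != NULL)
            virDomainFree(dom);
    }

    virConnectClose(conn);
    return 0;
}

The same program works unchanged whether the host is running Xen, QEMU or KVM, which is the whole point of keeping the API hypervisor agnostic.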
In the enterprise world, people like to form committees to come up with comprehensive standards to cover everything you can think of, and then go off and write multiple implementations of these standards ;-) For virtualization, the ‘standard’ is based around the DMTF CIM specification. Pretty much since the start of the libvirt project, people have asked why we didn’t just implement the CIM APIs directly. We always wanted the core of our public API to be simpler to use, though. At the same time, we recognise that some people really want to use a CIM API for their virtualization management tools. Thus we’ve always supported the idea of writing a CIM provider on top of libvirt. Not only does this mean a single CIM provider implementation covers all of the virt platforms supported by libvirt, it also gives good interoperability between tools using libvirt and tools using CIM.
Over the past year & a half (or more), the Xen project has developed a CIM provider using the Xen-API as the underlying implementation. Indeed at one time this actually used libvirt instead of Xen-API, but back when the switch was made to Xen-API, KVM wasn’t around, so the benefits of a hypervisor agnostic libvirt API were more hypothetical than tangible. KVM did eventually appear on the scene and, sure enough, the topic of developing CIM providers for KVM soon came up. KVM, though, doesn’t have any form of existing management API to deal with – a KVM guest is ‘managed’ by running a QEMU process, and killing it when you’re done. QEMU provides a mini admin shell for controlling various aspects of its behaviour, but turning that into a formal API is hard work – libvirt has already done it once and you don’t want to have to duplicate that effort :-)
At the recent KVM Forum, IBM announced that they were going to develop a CIM provider based on libvirt, and release it to the open source community. Fast forward a few months and I’m happy to announce that their plans have come to fruition. Furthermore, this is not just a 3rd party add-on; the CIM provider is to be a core part of the libvirt project & ecosystem. We’re collaborating on ideas, API requirements, hosting for our infrastructure (websites, SCM repos, mailing lists, IRC channels, etc) and above all working towards the common goal of providing state-of-the-art open source management APIs, capable of working across any virtualization platform.