Prototype for a Fedora virtual machine appliance builder
For the oVirt project the end product distributed to users consists of a LiveCD image to serve as the ‘managed node’ for hosting guests, and a virtual machine appliance to serve as the ‘admin node’ for the web UI. The excellent Fedora LiveCD creator tools obviously already deal with the first use case. For the second, though, we don’t currently have a solution. The way we build the admin node appliance today is to boot a virtual machine, run anaconda with a kickstart, and then grab the resulting installed disk image. While this works, it involves a number of error-prone steps. Appliance images are not inherently different from LiveCDs – instead of an ext3 filesystem inside an ISO using syslinux, we want a number of filesystems inside a partitioned disk using grub. The overall OS installation method is the same in both use cases.
After a day’s hacking I’ve managed to re-factor the internals of the LiveCD creator, and add a new installation class able to create virtual machine appliances. As its input it takes a kickstart file, and the names and sizes for one or more output files (which will act as the disks). It reads the ‘part’ entries from the kickstart file and uses parted to create suitable partitions across the disks. It then uses kpartx to map the partitions and mounts them all in the chroot. The regular LiveCD installation process then takes place. Once complete, it writes a grub config and installs the bootloader into the MBR. The result is one or more files representing the appliance’s virtual disks which can be directly booted in KVM / Xen / VMware.
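To give a feel for what happens under the hood, the disk preparation is roughly equivalent to the following sequence run by hand (the file name, size, partition layout and loop device here are illustrative assumptions on my part, not the tool’s literal commands):

   (illustrative values throughout – size, file and device names are examples)
# dd if=/dev/zero of=ovirt-wui-os.raw bs=1M count=0 seek=5000
# parted --script ovirt-wui-os.raw mklabel msdos
# parted --script ovirt-wui-os.raw mkpart primary ext3 1 5000
# losetup /dev/loop0 ovirt-wui-os.raw
# kpartx -a /dev/loop0
# mkfs.ext3 /dev/mapper/loop0p1
# mount /dev/mapper/loop0p1 /mnt/appliance
   ...the regular LiveCD creator install runs against the mounted tree...
# grub --device-map=/dev/null --batch <<EOF
device (hd0) /dev/loop0
root (hd0,0)
setup (hd0)
quit
EOF
# umount /mnt/appliance
# kpartx -d /dev/loop0
# losetup -d /dev/loop0

The kpartx mapping is the key trick: it makes each partition inside the plain file appear as a block device which can be formatted and mounted like any other.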
The virt-image tool defines a simple XML format which can be used to describe a virtual appliance. It specifies things like minimum recommended RAM and VCPUs, the disks associated with the appliance, and the hypervisor requirements for booting it (eg Xen paravirt vs bare metal / fullvirt). Given one of these XML files, the virt-image tool can use libvirt to directly deploy a virtual machine without requiring any further user input. So an obvious extra feature for the virtual appliance creator is to output a virt-image XML description. With a demo kickstart file for the oVirt admin node, I end up with 2 disks:
-rwxr-xr-x 1 root root 5242880001 2008-02-17 14:48 ovirt-wui-os.raw
-rwxr-xr-x 1 root root 1048576001 2008-02-17 14:48 ovirt-wui-data.raw
And an associated XML file:
<image>
  <name>ovirt-wui</name>
  <domain>
    <boot type='hvm'>
      <guest>
        <arch>x86_64</arch>
      </guest>
      <os>
        <loader dev='hd'/>
      </os>
      <drive disk='ovirt-wui-os.raw' target='hda'/>
      <drive disk='ovirt-wui-data.raw' target='hdb'/>
    </boot>
    <devices>
      <vcpu>1</vcpu>
      <memory>262144</memory>
      <interface/>
      <graphics/>
    </devices>
  </domain>
  <storage>
    <disk file='ovirt-wui-os.raw' use='system' format='qcow2'/>
    <disk file='ovirt-wui-data.raw' use='system' format='qcow2'/>
  </storage>
</image>
To deploy the appliance under KVM I run
# virt-image --connect qemu:///system ovirt-wui.xml
# virsh --connect qemu:///system list
 Id Name                 State
----------------------------------
  1 ovirt-wui            running
Now raw disk images are really quite large – in this example I have a 5 GB and a 1 GB image. The LiveCD creator saves space by using resize2fs to shrink the ext3 filesystem, but this won’t help disk images, since the partitions are a fixed size regardless of the filesystem size. So to allow smaller images, the appliance creator is able to call out to qemu-img to convert the raw files into qcow2 (QEMU/KVM) or vmdk (VMware) disk images, both of which are grow-on-demand formats. The qcow2 image can even be compressed. With the qcow2 format the disks for the oVirt WUI reduce to 600 KB and 1.9 GB.
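For example, the conversions can be done with qemu-img along these lines (the exact options the tool passes and the output file names are assumptions, but -O selects the output format and -c requests compression for qcow2):

# qemu-img convert -c -O qcow2 ovirt-wui-os.raw ovirt-wui-os.qcow2
# qemu-img convert -c -O qcow2 ovirt-wui-data.raw ovirt-wui-data.qcow2
# qemu-img convert -O vmdk ovirt-wui-os.raw ovirt-wui-os.vmdk

Since both formats only allocate blocks that have actually been written, a mostly empty data partition shrinks to almost nothing.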
The LiveCD tools have already seen immense popularity in the Fedora community. Once I polish this new code up to production quality, it is my hope that we’ll see similar uptake by people interested in creating and distributing appliances. The great thing about basing the appliance creator on the LiveCD codebase and using kickstart files for both is that you can easily switch between doing regular anaconda installs, creating LiveCDs and creating appliances at will, with a single kickstart file.
Daniel,
On a Virtual tangent – I have spent two days unsuccessfully trying to get KVM on F8 working with a Public Bridge network.
I cannot understand why a Public Bridge option is not a default in KVM. I would have thought that most users would want a LAN accessible VM?
I have setup about twenty VMs using VMware Server, but I am finding KVM incredibly frustrating.
Since the requirement for bridged networking would be a common one for virtual machine appliances, I am wondering how you have solved the networking?
The choice of bridged vs NAT based networking for virtual machines is a deployment detail, not relevant when creating virtual appliances.
KVM supports both NAT and bridging, but out of the box NAT is the default option. Bridging requires that you have configured your ethernet device in a special way ahead of time – we can’t assume that a user has done this out of the box.
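As a rough sketch of the pre-configuration needed on Fedora (device names br0 / eth0 are just an example, and the details are in the posting linked below), you create a bridge device and enslave the physical NIC to it:

# cat > /etc/sysconfig/network-scripts/ifcfg-br0 <<EOF
# example bridge config – adjust to your network
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
EOF
# cat >> /etc/sysconfig/network-scripts/ifcfg-eth0 <<EOF
BRIDGE=br0
EOF
# service network restart

The guest’s libvirt configuration can then use an interface of type ‘bridge’ pointing at br0, instead of the default NAT network.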
For more information about networking choices, consult this blog posting.