In his recent blog post on redirected direct rendering, Kristian happened to mention Clutter, a toolkit which allows you to do 3D graphics without first needing to acquire a PhD in OpenGL. I made a mental note to give it a try sometime.
On Saturday I spent a few hours doing the major upgrade from Fedora 6 to Fedora 8 on my laptop & desktop (skipping F7). I did it the crazy way, just changing my YUM config files & letting it upgrade everything. I can’t say it was a painless process, but I do finally have two working boxes on F8 now. I also took the opportunity to switch my IBM T60p laptop over to the Avivo driver instead of the VESA driver, which worked without a hitch.
Back on topic. After the upgrade to F8, I poked at the repos and found that (nearly) all the Clutter related bits are packaged & available to Fedora users already. There is just an annoying BuildRequires missing in the python bindings spec file, which meant the RPM was missing the GTK & GST APIs for Clutter. A quick rebuild fixed that issue. You may remember I’ve been working on a GTK widget for displaying VNC sessions. One evil thought led to another even more evil thought, and thus I ended up writing a VNC viewer program which displays multiple VNC sessions on a spinning cube. To make it even more evil, I decided not to restrict it to a cube, and instead generalized it to an arbitrary number of sides.
The results are spectacular, though a static screenshot doesn’t really do it justice…
Ctrl+Alt+PageUp/PageDown lets you rotate to the previous/next session respectively. The mouse & keyboard events are fully plumbed in so you can actually interact with each session. In this example I just started 6 VNC server instances running TWM at 500×500 pixels, but they could have been real GNOME desktops too. The only issue is that I’ve not yet figured out how to do correct depth sorting of each desktop. You don’t notice this problem in the screenshot, since I’ve just got them all rendering at 50% opacity ;-) It is impressively slow & impressively fast at the same time. Slow because I’m using Avivo which doesn’t have any real 3D rendering support yet; fast because I’m amazed it is actually working at all :-)
Now to hook this all up into the next version of virt-manager….. just kidding ;-P
A lot has been going on in the libvirt universe recently as it continues on its path to world domination. A couple of days ago Daniel Hokka Zakrisson (that’s the 4th Daniel involved in libvirt!) surprised us all by announcing the start of a Linux-VServer driver for libvirt. This is the second container based virtualization driver, following on from the previous OpenVZ driver work. On the KVM front we now have support for save & restore thanks to Jim Paris, and the Xen & KVM drivers can also do CDROM media changes, which will make Windows guest installs much more friendly. A bunch of work is taking place around NUMA to allow guests to be intelligently placed to take advantage of the capabilities of large NUMA boxes.
I’ve been working on integrating SASL support into the remote management driver. This will augment our existing SSL/TLS + x509 security model, to provide fun stuff like Kerberos integration (single sign on!) and plain old username/password auth (with data encryption thrown in too, though). This will tie in very nicely with the FreeIPA project, which is providing a pre-integrated Kerberos + LDAP solution for Linux users to compete with ActiveDirectory. I installed FreeIPA in a Fedora 7 guest a few weeks back and can say it is looking very impressive indeed.
There have been lengthy discussions & arguments about how to represent & manage storage from within libvirt, which is a key requirement for being able to provision guest OSes from a remote host. Once this all gets fleshed out we’ll be able to manage plain old files, QCow/VMDK files, LVM volumes, iSCSI volumes, disks/partitions and even Fibre Channel with NPIV from within virt-manager and other libvirt based apps. This will improve the admin experience significantly.
Rich Jones has put together all sorts of fun apps on top of libvirt in recent months. virt-top is a command line tool providing a ‘top’-like display of CPU, disk & network activity in all guest machines on a host. Virt-P2V is a Live CD based on Fedora which allows you to take an existing physical machine and turn its disk images into a virtual guest running under Xen / KVM fullvirt. Nagios-Virt is a plugin for the Nagios monitoring system which gives status on virtual machines. There are various other interesting admin tools along these lines in the works, so watch this space….
I’ve got many boxes for testing purposes and while I can often run graphical apps over SSH, every so often I really do need to run the app within a full GNOME session. For example, the incredible new PolicyKit app in Fedora 8 enables desktop applications to authenticate to gain extra privileges. PolicyKit uses ConsoleKit for its session tracking & the ConsoleKit sessions are created by GDM when you initially log in. Thus to test an application using PolicyKit you really do need to log in via GDM and run a full GNOME session, not merely an X tunnel over SSH.
Now of course the critical times when I need to do this testing are when I’m not physically anywhere near the machine I need to test on. And invariably I’ve not left a login session active, nor even GNOME’s ‘remote desktop’ access enabled. Traditionally I’ve just created a suitable VNC server startup file containing
$ cat $HOME/.vnc/xstartup
#!/bin/sh
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
vncconfig -iconic &
# Give GNOME its own session D-Bus instance, then hand over to gnome-session
unset DBUS_SESSION_BUS_ADDRESS
eval `dbus-launch --sh-syntax --exit-with-session`
exec gnome-session
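With that xstartup in place, kicking off a session is just a matter of running the vncserver wrapper from the vnc-server package (the display number and geometry below are only examples):
# Start a VNC server on display :1 with a 1024x768 desktop
vncserver :1 -geometry 1024x768 -depth 24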
This gets me a full GNOME login session. Unfortunately there’s no ConsoleKit session associated with this & thus no possibility of using PolicyKit. GNOME itself though does come with VINO which can export your regular X session using the VNC protocol. If only I were logged into X on the machine’s console & running VINO. Argh.
After much poking around I finally figured out a solution. First off, SSH to the box in question as your regular desktop user. Now we can use gconftool-2 to enable VINO. We need to enable it, enable authentication, set a password, turn off incoming connection prompts and possibly set an explicit port (if you have something else on the regular port 5900 – e.g. a Xen guest).
# Disable local confirmation dialog for incoming connections
gconftool-2 --type bool --set /desktop/gnome/remote_access/prompt_enabled false
# Change VNC port to :9 instead of :0
gconftool-2 --type bool --set /desktop/gnome/remote_access/use_alternative_port true
gconftool-2 --type int --set /desktop/gnome/remote_access/alternative_port 5909
# Enable password auth
gconftool-2 --type list --list-type string --set /desktop/gnome/remote_access/authentication_methods '[vnc]'
# Note the -n flag: without it echo appends a newline, which would get encoded into the password
PW=`echo -n 'mypassword' | base64`
gconftool-2 --type string --set /desktop/gnome/remote_access/vnc_password $PW
# Enable the VINO server
gconftool-2 --type bool --set /desktop/gnome/remote_access/enabled true
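Before logging out of the SSH session it is worth sanity checking that all the keys actually took effect:
# Dump the whole remote_access tree to confirm the settings stuck
gconftool-2 -R /desktop/gnome/remote_access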
So that has the VINO server configured to run when I’m logged in, but as I mentioned already – I’m typically not logged in on the console when I need to be. For this challenge GDM comes to the rescue. It is possible to change its config file to specify that a particular user will be automatically logged in the moment GDM starts. To do this, edit /etc/gdm/custom.conf and add
[daemon]
AutomaticLogin=yourusername
AutomaticLoginEnable=true
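GDM only reads this at startup, so it needs restarting to take effect. One brute-force way to do that remotely (assuming the box normally sits at runlevel 5 and nothing important is running on the console display) is to bounce the runlevel:
# Drop to runlevel 3 and back to 5, forcing GDM to restart and auto-login
telinit 3 && sleep 5 && telinit 5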
A quick restart of GDM later, and I’m automatically logged into the remote box with a full GNOME session, including all the necessary ConsoleKit magic. I can now connect with VNC and properly test virt-manager / PolicyKit integration. Yay.
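From the desktop end that is just a regular VNC client connection to the alternative port configured earlier, i.e. display :9 (the host name below is obviously a stand-in):
# Connect to the VINO-exported console session on port 5909
vncviewer remotebox.example.com:9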
Any machine running Fedora Core 6 or later has IPv6 networking support enabled out of the box. Most people will never notice and/or care since they’re only ever connected to IPv4 networks. A few months back though I decided it was time to give IPv6 a try for real….
I’ve got two servers on the Internet running as User Mode Linux guests, one running Debian, the other Fedora Core 6, and then a home network provided by a LinkSys router running OpenWRT White Russian. My goal was to provide full IPv6 connectivity to all of them.
Home Router
I tackled the problem of the home router first. The OpenWRT wiki has an IPv6 Howto describing various setups. I decided to get a tunnel from the fine folks at SixXS. My Verizon DSL only provides a dynamic IPv4 address, and regular IPv6-over-IPv4 tunnels require the server end to know the IPv4 address of your local endpoint. Obviously this is a bit of a problem with a dynamic IPv4 endpoint. SixXS though have a funky way around this in the form of their AICCU daemon, which sets up a heartbeat from your local endpoint to their server. Thus should your IPv4 address ever change, it can (securely, with SSL) inform the server of your changed configuration. So I registered with SixXS, requested an IPv6 tunnel and a short while later they approved me. The service is open to anyone who wants IPv6 connectivity – the approval process is mainly to help avoid abuse & frivolous requests. I was fortunate in that OCCAID are providing an IPv6 tunnel server just a few miles away in Boston – there are other tunnel servers dotted around, but mostly concentrated in America or Europe at this time.
With my IPv6 address allocated and the OpenWRT guide handy, my router was soon up & running with IPv6 connectivity – I could ping sites over IPv6, e.g.
# ping6 www.kame.net
PING www.kame.net (2001:200:0:8002:203:47ff:fea5:3085): 56 data bytes
64 bytes from 2001:200:0:8002:203:47ff:fea5:3085: icmp6_seq=0 ttl=50 time=513.2 ms
64 bytes from 2001:200:0:8002:203:47ff:fea5:3085: icmp6_seq=1 ttl=50 time=512.5 ms
64 bytes from 2001:200:0:8002:203:47ff:fea5:3085: icmp6_seq=2 ttl=50 time=519.5 ms
OpenWRT only ships with an IPv4 firewall as standard, so I quickly added ip6tables rules to deny all incoming traffic to the router. Even though port-scanning the entire IPv6 address space is not practical, only a tiny portion of it is actually in use, and nearly all tunnels end up using addresses ending in :1 and :2, so a firewall is a must no matter what.
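For the record, a minimal sketch of the sort of rules I mean (this assumes the ip6tables binary is present in your OpenWRT build; the state match may not be available on older kernels, in which case drop that rule):
# Default-deny all inbound and forwarded IPv6 traffic
ip6tables -P INPUT DROP
ip6tables -P FORWARD DROP
# Loopback is harmless, and ICMPv6 is required for neighbour discovery & path MTU
ip6tables -A INPUT -i lo -j ACCEPT
ip6tables -A INPUT -p ipv6-icmp -j ACCEPT
# Allow replies to connections the router itself initiated
ip6tables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT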
Home Network
To ensure you are serious about making use of their services, SixXS operate a credit system for admin requests. You start off with enough credits to request an IPv6 tunnel, but not enough to request an IPv6 subnet. To gain credits you have to prove you can keep the tunnel operational 24 hours a day for 7 days in a row – you then start gaining credits for each day’s uptime. So I had a slight pause before I could move on to setting up the home network.
Fortunately the LinkSys router is very reliable, and so after a week I had enough uptime and thus enough credits to request an IPv6 subnet. In the brave new world of 128 bit addressing there’s no shortage of addresses, so to simplify routing, whenever someone needs a block of addresses they’ll typically be allocated an entire /48. That’s right, a /48 – you’ll be given more global IPv6 addresses for your personal use than there are total IPv4 addresses in existence. Another interesting difference is that IPv6 subnets are not technically ‘sold’ – they are merely ‘loaned’ to end users. The upshot is that there’s no issue of having to pay your stinkin’ DSL/Cable ISP $$$ per month for one or two extra addresses.
Having got the subnet allocated, the first step is to configure an IP address on the LAN interface of the LinkSys box. With OpenWRT this just required editing /etc/init.d/S40network to add “ip -6 addr add 2001:XXXX:XXXX:XXXX::1/64 dev br0” (where 2001:XXXX:XXXX:XXXX is my subnet’s prefix). When the various IPv6 protocols were specced out, a big deal was made of the fact that there would be no NAT anywhere, and that client configuration would be completely automatic & able to dynamically reconfigure itself on the fly. The key to this is what they call a ‘router advertisement daemon’. On Linux this is the ‘radvd’ program. If you only have a single outgoing net connection and a single local network, then configuring it is incredibly easy. Simply edit the /etc/radvd.conf file and fill in the IPv6 address prefix for your subnet as allocated by SixXS, then start the daemon.
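For reference, a minimal radvd.conf along these lines looks something like the following (br0 being the LAN bridge on the OpenWRT box, and 2001:XXXX:XXXX:XXXX standing in for the prefix SixXS allocated):
interface br0
{
    AdvSendAdvert on;
    prefix 2001:XXXX:XXXX:XXXX::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};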
Remember I just mentioned network configuration would be automatic – well look at any Fedora box plugged into your local network at this point. You’ll see they all just got globally routable IPv6 addresses assigned to their active network interfaces. Pop up a web browser and visit Kame and you’ll see an animated dancing turtle logo! IPv4 users only see a static image…
Bytemark Server
One of my web servers is running Debian in a User Mode Linux instance at Bytemark in the UK. The good news is that Bytemark have already taken care of getting IPv6 connectivity into their network, so there’s no need to use a tunnel on any server hosted by them. Simply ask their helpdesk to allocate you an IPv6 address from their pool, and add it to your primary ethernet interface. Again, don’t forget to set up ip6tables firewall rules before doing this.
For Debian, configuring eth0 was a mere matter of editing /etc/network/interfaces and adding
iface eth0 inet6 static
address 2001:XXXX:XXXX:XXXX::2
netmask 64
up ip route add 2000::/3 via 2001:XXXX:XXXX:XXXX::1
Again, with ‘2001:XXXX:XXXX:XXXX’ being the prefix they allocated to your server.
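If you’d rather not bounce the interface on a remote box just to pick up the new stanza, the same configuration can be applied live with iproute2 and then verified (a sketch – substitute your own prefix):
# Add the address & IPv6 default route immediately, matching the interfaces file
ip -6 addr add 2001:XXXX:XXXX:XXXX::2/64 dev eth0
ip -6 route add 2000::/3 via 2001:XXXX:XXXX:XXXX::1
# Check something is reachable over IPv6
ping6 -c 3 www.kame.net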
Since SSH listens for IPv6 connections by default, with the interface address configured I could now SSH from my laptop at home to my server using IPv6. Type ‘who’ and you’ll see a big long IPv6 address against your username if it’s working correctly.
Linode Server
My other web server is hosted by Linode. Unfortunately they don’t provide direct IPv6 connectivity so I had to use a tunnel. Since I do have a permanent static IPv4 address though I could use a regular IPv6-over-IPv4 tunnel rather than the dynamic heartbeat one I used at home with SixXS. For the sake of redundancy I decided to get my tunnel from a different provider, this time choosing Hurricane. When registering with them you provide a little contact info and the IPv4 address of your server. A short while later they’ll typically approve the request & activate their end of the tunnel. It is then a matter of configuring your end. This machine was running Fedora Core 6, so creating a tunnel requires adding a file /etc/sysconfig/network-scripts/ifcfg-sit1 containing something like
DEVICE=sit1
BOOTPROTO=none
ONBOOT=yes
IPV6INIT=yes
IPV6TUNNELIPV4=YY.YY.YY.YY
IPV6ADDR=2001:XXXX:XXXX:XXXX::2/64
Where YY.YY.YY.YY is the IPv4 address of Hurricane’s tunnel server, and 2001:XXXX:XXXX:XXXX is the IPv6 address prefix they allocated for my server. A quick ifup later and this server too has IPv6 connectivity.
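Spelled out, that last step plus a sanity check looks something like this (if the default route doesn’t end up pointing at the tunnel automatically, setting IPV6_DEFAULTDEV=sit1 in /etc/sysconfig/network is the usual fix):
# Bring the tunnel interface up using the ifcfg-sit1 file above
ifup sit1
# Confirm the address landed on the tunnel device and that the wider IPv6 world answers
ip -6 addr show dev sit1
ping6 -c 3 www.kame.net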
The summary
This was all spread out over a couple of weeks, but by the end of it I had both servers and my entire home network operational with fully routable, global IPv6 connectivity. I have three different types of IPv6 connectivity – direct (from Bytemark), a static tunnel (from Hurricane), and a dynamic tunnel (from SixXS – they offer static tunnels too). If you have a static IPv4 address there’s a fourth way to get connected, called 6to4, which maps your IPv4 address into the IPv6 space and uses anycast routing. With so many ways to get IPv6 connectivity it doesn’t matter if your crappy DSL/Cable ISP doesn’t offer IPv6 – simply take them out of the equation.
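To illustrate the 6to4 mapping: your IPv4 address is simply written out in hex beneath the 2002::/16 prefix. A quick way to compute it for a (hypothetical) address like 192.0.2.1:
# 192.0.2.1 -> 0xc0 0x00 0x02 0x01 -> 2002:c000:0201::/48
printf '2002:%02x%02x:%02x%02x::/48\n' 192 0 2 1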
One of the great things about being rid of NAT is that I can directly SSH into any machine at home from outside my network – no need for VPNs, or special reverse proxy rules through the NAT gateway. IPv6 addresses are crazily long, so the one final thing I did was to set up DNS entries for all my boxes, including a DNS zone for my home network. Remember how all clients on the home network auto-configure themselves? This is done based on the network prefix and their MAC address, so they’ll always auto-configure themselves to the same IPv6 address. That makes it easy to give them permanent DNS mappings, without needing to manually administer a DHCP server.
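The DNS side is nothing exotic either – just AAAA records in an ordinary zone file. A fragment along these lines (hypothetical host names, the 2001:db8::/32 documentation prefix standing in for my real one, and EUI-64 style interface IDs derived from each machine’s MAC):
; Fragment of the home network zone
desktop   IN   AAAA   2001:db8:1234:1:20d:60ff:fe12:3456
laptop    IN   AAAA   2001:db8:1234:1:216:d3ff:feaa:bbcc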
For Fedora 8 we have quite an ambitious set of goals to improve the security of the virtualization management stack. With the test2 date fast approaching things are starting to fall into place, although as ever it’s taken longer than expected. I should really have expected this, since it requires getting code accepted into 3 upstream projects (Xen, QEMU, KVM), releasing several brand new pieces of software (GTK-VNC and Virt Viewer), and updating many others (Virt Manager & virt-install).
A couple of weeks ago DV released an update of libvirt which includes support for secure remote management, either tunnelled over SSH, or directly connecting with TLS/SSL and x509 certificate authentication. This was the culmination of many months’ work by Rich Jones, with review & feedback from the rest of the libvirt team. Oh, and it also supports IPv6 out of the box – the only open source virtualization management software to support IPv6 for remote management.
Yesterday I submitted another iteration of my patches to add password authentication and the VeNCrypt extension to QEMU’s VNC server. The latter allows VNC to be encrypted with SSL/TLS and authenticated with x509 certificates.
Today I submitted changes to Xen to remove the horrible VNC server implementation based on LibVNCServer. For those who don’t know, LibVNCServer is a nasty hack which turns the vncserver code into a shared library for embedding in applications which need VNC server support. Unfortunately the code is utterly unintelligible, and has been retrofitted with multi-thread support which is completely and utterly broken. We’ve made countless fixes to the thread synchronization to address deadlocks & crashes and still have no confidence that it is truly working correctly. So I’ll be glad to see the back of LibVNCServer.
Staying on the VNC theme, we announced the first official release of GTK-VNC. This is a GTK widget which provides a VNC client viewer. It provides a core library written in C, using coroutines to allow it to be completely asynchronous while remaining single threaded. A wrapper library using PyGTK provides access to the widget functionality from Python. Two example programs illustrate use of the widget by re-implementing the traditional ‘vncviewer’ in a few tens of lines of code. The client is fully IPv6 aware and, as well as the traditional VNC authentication protocol, implements the VeNCrypt extension to provide secure TLS/SSL encrypted communications, optionally using x509 certificates to authenticate.
Finally, we also announced the first release of Virtual Machine Viewer, a lightweight, minimal UI for interacting with the graphical console of virtual machines. It is intended as a replacement for vncviewer; since it integrates with libvirt there is no need to tell it the VNC display address – just give it the guest name, ID or UUID and it’ll figure out the rest.
There’s still plenty of work to be done before Fedora 8 is released, but it’s starting to come together nicely. The forthcoming Fedora 8 release will again be leading the pack when it comes to open source virtualization management.