Earlier today I was asked how one would go about debugging early startup of KVM under GDB, when launched by libvirtd. It was not possible to simply attach to KVM after it had been launched by libvirtd, since that was too late. In addition running the same KVM command outside libvirt did not exhibit the problem that was being investigated.
Fortunately, with a little cleverness, it is actually possible to debug a KVM guest launched by libvirtd, right from the start. The key is to combine a couple of breakpoints with use of follow-fork-mode. When libvirtd starts up a KVM guest, it runs QEMU a couple of times in order to detect which command line arguments are supported. This means the follow-fork-mode setting cannot be changed too early, otherwise GDB will end up following the wrong process.
I happen to know that there is only one place in the libvirt code which calls virCommandSetPreExecHook, and that is immediately before launching the real QEMU process. A nice thing about GDB is that when following forked/exec’d children, it will apply any existing breakpoints in the child, even if it is a new binary. So a breakpoint set on ‘main’, while still in libvirtd, will happily catch ‘main’ in the QEMU process. The only remaining problem is that if QEMU does not set up and activate the monitor quickly enough, libvirtd will try to kill it off again. Fortunately GDB lets you ignore SIGTERM, and even SIGKILL :-)
The start of the trick is this:
# pgrep libvirtd
12345
# gdb
(gdb) attach 12345
(gdb) break virCommandSetPreExecHook
(gdb) cont
Now in a separate shell
# virsh start $GUESTNAME
Back in the GDB shell the breakpoint should have triggered, allowing the trick to be finished:
(gdb) break main
(gdb) handle SIGKILL nopass noprint nostop
Signal Stop Print Pass to program Description
SIGKILL No No No Killed
(gdb) handle SIGTERM nopass noprint nostop
Signal Stop Print Pass to program Description
SIGTERM No No No Terminated
(gdb) set follow-fork-mode child
(gdb) cont
process 3020 is executing new program: /usr/bin/qemu-kvm
[Thread debugging using libthread_db enabled]
[Switching to Thread 0x7f2a4064c700 (LWP 3020)]
Breakpoint 2, main (argc=38, argv=0x7fff71f85af8, envp=0x7fff71f85c30)
at /usr/src/debug/qemu-kvm-0.14.0/vl.c:1968
1968 {
(gdb)
Bingo, you can now debug QEMU startup at your leisure.
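The whole interactive sequence above can be captured in a GDB command file so it is repeatable; a sketch (the /tmp path and the use of a breakpoint `commands` block are my own choices, not from the original session):

```shell
# Write the debugging recipe to a GDB command file. When breakpoint 1
# (virCommandSetPreExecHook) fires, the attached 'commands' block sets
# everything up to follow the fork/exec into QEMU's main().
cat > /tmp/qemu-debug.gdb <<'EOF'
break virCommandSetPreExecHook
commands
  break main
  handle SIGKILL nopass noprint nostop
  handle SIGTERM nopass noprint nostop
  set follow-fork-mode child
  continue
end
continue
EOF
# Attach to the running libvirtd, then run "virsh start $GUESTNAME" elsewhere:
# gdb -p $(pgrep libvirtd) -x /tmp/qemu-debug.gdb
```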
Yesterday I talked about setting up Sheepdog with KVM, so today it is time to discuss use of Ceph and RBD with KVM.
Host Cluster Setup, the easy way
Fedora has included Ceph for a couple of releases, but since my hosts are on Fedora 14/15, I grabbed the latest ceph 0.31 SRPMs from Fedora 16 and rebuilt those to get something reasonably up to date. In the end I have the following packages installed, though to be honest I don’t really need anything except the base ‘ceph’ RPM:
# rpm -qa | grep ceph | sort
ceph-0.31-4.fc17.x86_64
ceph-debuginfo-0.31-4.fc17.x86_64
ceph-devel-0.31-4.fc17.x86_64
ceph-fuse-0.31-4.fc17.x86_64
ceph-gcephtool-0.31-4.fc17.x86_64
ceph-obsync-0.31-4.fc17.x86_64
ceph-radosgw-0.31-4.fc17.x86_64
Installing the software is the easy bit; configuring the cluster is where the fun begins. I had three hosts available for testing, all of which are virtualization hosts. Ceph has at least 3 daemons it needs to run, which should all be replicated across several hosts for redundancy. There’s no requirement to use the same hosts for each daemon, but for simplicity I decided to run every Ceph daemon on every virtualization host.
My hosts are called lettuce, avocado and mustard. Following the Ceph wiki instructions, I settled on a configuration file that looks like this:
[global]
auth supported = cephx
keyring = /etc/ceph/keyring.admin
[mds]
keyring = /etc/ceph/keyring.$name
[mds.lettuce]
host = lettuce
[mds.avocado]
host = avocado
[mds.mustard]
host = mustard
[osd]
osd data = /srv/ceph/osd$id
osd journal = /srv/ceph/osd$id/journal
osd journal size = 512
osd class dir = /usr/lib64/rados-classes
keyring = /etc/ceph/keyring.$name
[osd.0]
host = lettuce
[osd.1]
host = avocado
[osd.2]
host = mustard
[mon]
mon data = /srv/ceph/mon$id
[mon.0]
host = lettuce
mon addr = 192.168.1.1:6789
[mon.1]
host = avocado
mon addr = 192.168.1.2:6789
[mon.2]
host = mustard
mon addr = 192.168.1.3:6789
The osd class dir bit should not actually be required, but the OSD code looks in the wrong place (/usr/lib instead of /usr/lib64) on x86_64 arches.
With the configuration file written, it is time to actually initialize the cluster filesystem / object store. This is the really fun bit. The Ceph wiki has a very basic page which talks about the mkcephfs tool, along with a scary warning about how it’ll ‘rm -rf’ all the data on the filesystem it is initializing. It turns out it doesn’t mean your entire host filesystem; AFAICT it only blows away the contents of the directories configured for ‘osd data‘ and ‘mon data‘, in my case both under /srv/ceph.
The recommended way is to let mkcephfs ssh into each of your hosts and run all the configuration tasks automatically. Having tried the non-recommended way and failed several times before finally getting it right, I can recommend following the recommended way :-P There are some caveats not mentioned in the wiki page though:
- The configuration file above must be copied to /etc/ceph/ceph.conf on every node before attempting to run mkcephfs.
- The configuration file on the host where you run mkcephfs must be in /etc/ceph/ceph.conf, or it will get rather confused about where it is on the other nodes.
- The mkcephfs command must be run as root, since it doesn’t specify ‘-l root’ to ssh, leading to an inability to set up the nodes.
- The directories /srv/ceph/osd$id must be pre-created, since it is unable to do that itself, despite being able to create the /srv/ceph/mon$id directories.
- The Fedora RPMs have also forgotten to create /etc/ceph.
With that in mind, I ran the following commands from my laptop, as root:
# n=0
# for host in lettuce avocado mustard ; \
do \
ssh root@$host mkdir -p /etc/ceph /srv/ceph/mon$n /srv/ceph/osd$n; \
n=$(expr $n + 1); \
scp /etc/ceph/ceph.conf root@$host:/etc/ceph/ceph.conf; \
done
# mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/keyring.bin
On the host where you ran mkcephfs there should now be a file /etc/ceph/keyring.admin. This will be needed for mounting filesystems. I copied it across to all my virtualization hosts:
# for host in lettuce avocado mustard ; \
do \
scp /etc/ceph/keyring.admin root@$host:/etc/ceph/keyring.admin; \
done
Host Cluster Usage
Assuming the setup phase all went to plan, the cluster can now be started. A word of warning though: Ceph really wants your clocks VERY well synchronized. If your NTP server is a long way away, the synchronization might not be good enough to stop Ceph complaining. You really want an NTP server on your local LAN for hosts to sync against. Sort this out before trying to start the cluster.
# for host in lettuce avocado mustard ; \
do \
ssh root@$host service ceph start; \
done
The ceph tool can show the status of everything. The ‘mon’, ‘osd’ and ‘mds’ lines in the status ought to show all 3 hosts present & correct:
# ceph -s
2011-10-12 14:49:39.085764 pg v235: 594 pgs: 594 active+clean; 24 KB data, 94212 MB used, 92036 MB / 191 GB avail
2011-10-12 14:49:39.086585 mds e6: 1/1/1 up {0=lettuce=up:active}, 2 up:standby
2011-10-12 14:49:39.086622 osd e5: 3 osds: 3 up, 3 in
2011-10-12 14:49:39.086908 log 2011-10-12 14:38:50.263058 osd1 192.168.1.1:6801/8637 197 : [INF] 2.1p1 scrub ok
2011-10-12 14:49:39.086977 mon e1: 3 mons at {0=192.168.1.1:6789/0,1=192.168.1.2:6789/0,2=192.168.1.3:6789/0}
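When scripting against the cluster it can be handy to check that status mechanically; a sketch that pulls the up/in counts out of an ‘osd’ status line like the one above (the sed patterns are mine and assume this exact line format):

```shell
# Extract the "N up" and "N in" counts from a ceph -s osd status line.
status='2011-10-12 14:49:39.086622 osd e5: 3 osds: 3 up, 3 in'
up=$(echo "$status" | sed 's/.* \([0-9]*\) up,.*/\1/')
in_count=$(echo "$status" | sed 's/.* \([0-9]*\) in$/\1/')
echo "osds up: $up, in: $in_count"
# On a live cluster the line would come from something like: ceph -s | grep ' osd '
```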
The cluster configuration I chose has authentication enabled, so actually mounting the Ceph filesystem requires a secret key. This key is stored in the /etc/ceph/keyring.admin file that was created earlier. To view the keyring contents, the cauthtool program must be used:
# cauthtool -l /etc/ceph/keyring.admin
[client.admin]
key = AQDLk5VOeHkHLxAAfGjcaUsOXOhJr7hZCNjXSQ==
auid = 18446744073709551615
The base64 key there is passed to the mount command, repeated on every host needing the filesystem present:
# mount -t ceph 192.168.1.1:6789:/ /mnt/ -o name=admin,secret=AQDLk5VOeHkHLxAAfGjcaUsOXOhJr7hZCNjXSQ==
error adding secret to kernel, key name client.admin: No such device
For some reason, that error message is always printed on my Fedora hosts; despite that, the mount has actually succeeded:
# grep /mnt /proc/mounts
192.168.1.1:6789:/ /mnt ceph rw,relatime,name=admin,secret= 0 0
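Passing the secret on the command line also leaves it in shell history. The key can instead be extracted from the keyring into a file and passed with the secretfile option that the ceph mount helper supports; a sketch (the extract_key helper and the admin.secret path are my own naming):

```shell
# Pull the base64 key out of a ceph keyring file.
# Keyring lines look like:  key = AQDLk5VO...==
extract_key() {
    awk '$1 == "key" { print $3 }' "$1"
}
# extract_key /etc/ceph/keyring.admin > /etc/ceph/admin.secret
# chmod 600 /etc/ceph/admin.secret
# mount -t ceph 192.168.1.1:6789:/ /mnt/ -o name=admin,secretfile=/etc/ceph/admin.secret
```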
Congratulations, /mnt is now a distributed filesystem. If you create a file on one host, it should appear on the other hosts & vice-versa.
RBD Volume setup
A shared filesystem is very nice, and can be used to hold regular virtual disk images in a variety of formats (raw, qcow2, etc). What I really wanted to try was the RBD virtual block device functionality in QEMU. Ceph includes a tool called rbd for manipulating those. The syntax of this tool is pretty self-explanatory:
# rbd create --size 100 demo
# rbd ls
demo
# rbd info demo
rbd image 'demo':
size 102400 KB in 25 objects
order 22 (4096 KB objects)
block_name_prefix: rb.0.0
parent: (pool -1)
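As a sanity check on that output: ‘order 22’ means 2^22 byte (4096 KB) objects, so a 102400 KB image needs 102400/4096 = 25 objects, matching what rbd reports:

```shell
# 102400 KB image divided into order-22 (2^22 byte = 4096 KB) objects.
size_kb=102400
object_kb=$(( (1 << 22) >> 10 ))              # 2^22 bytes expressed in KB
objects=$(( (size_kb + object_kb - 1) / object_kb ))  # round up
echo "$objects objects of $object_kb KB"
```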
Alternatively, RBD volume creation can be done using qemu-img … at least once the Fedora QEMU package is fixed to enable RBD support.
# qemu-img create -f rbd rbd:rbd/demo 100M
Formatting 'rbd:rbd/demo', fmt=rbd size=104857600 cluster_size=0
# qemu-img info rbd:rbd/demo
image: rbd:rbd/demo
file format: raw
virtual size: 100M (104857600 bytes)
disk size: unavailable
KVM guest setup
The syntax for configuring an RBD block device in libvirt is very similar to that used for Sheepdog. In Sheepdog, every virtualization node is also a storage node, so no hostname is required. Not so for RBD: here it is necessary to specify one or more host names for the RBD servers.
<disk type='network' device='disk'>
<driver name='qemu' type='raw'/>
<source protocol='rbd' name='demo/wibble'>
<host name='lettuce.example.org' port='6789'/>
<host name='mustard.example.org' port='6789'/>
<host name='avocado.example.org' port='6789'/>
</source>
<target dev='vdb' bus='virtio'/>
</disk>
More observant people might be wondering how QEMU gets permission to connect to the RBD server, given that the configuration earlier enabled authentication. This is thanks to the magic of the /etc/ceph/keyring.admin file, which must exist on any virtualization server. Patches are currently being discussed which will allow authentication credentials to be set via libvirt, avoiding the need to store the credentials on the virtualization hosts permanently.
The virt-tools.org website, launched last year, provides tutorials, videos, documentation, online help and roadmaps relevant to libvirt, libguestfs, gtk-vnc, spice, other related libraries, and tools or applications like virt-manager & virt-install. The site goal is to inform & assist end users, system administrators & application developers who wish to learn about the capabilities of the virt tools stack. The focus of most content is the state-of-the-art, Linux-native KVM hypervisor, but writing about using other hypervisors with virt tools is also welcome.
Back in June I finally got around to setting up a blog planet to aggregate the RSS feeds of various people working in libvirt, libguestfs, etc. While I announced this to various mailing lists, it appears I forgot to blog about it. Whoops. So this post is just a quick alert that if you’re interested in libvirt, libguestfs, virt-manager, etc and don’t want to follow a high traffic site like the Fedora planet, then this is the blog feed aggregator for you:
There were recently patches posted to libvir-list to improve the Ceph support in the KVM driver. While trying to review them it quickly became clear I did not have enough knowledge of Ceph to approve the code. So I decided it was time to setup some clustered storage devices to test libvirt with. I decided to try out Ceph, GlusterFS and Sheepdog, and by virtue of Sheepdog compiling the fastest, that is the first one I have tried and thus responsible for this blog post.
Host setup
If you have Fedora 16, sheepdog can be installed directly using yum:
# yum install sheepdog
Sheepdog relies on corosync to maintain cluster membership, so the first step is to configure that. Corosync ships with an example configuration file, but since I’ve not used it before, I chose to just use the example configuration recommended by the Sheepdog website. So on the 2 hosts I wanted to participate in the cluster I created:
# cat > /etc/cluster/cluster.conf <<EOF
compatibility: whitetank
totem {
version: 2
secauth: off
threads: 0
interface {
ringnumber: 0
bindnetaddr: -YOUR IP HERE-
mcastaddr: 226.94.1.1
mcastport: 5405
}
}
logging {
fileline: off
to_stderr: no
to_logfile: yes
to_syslog: yes
logfile: /var/log/cluster/corosync.log
debug: off
timestamp: on
logger_subsys {
subsys: AMF
debug: off
}
}
amf {
mode: disabled
}
EOF
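Since bindnetaddr is the only per-host value, the file can be stamped out from a template rather than edited by hand on each node; a sketch (the template filename is my own invention; the ‘-YOUR IP HERE-’ placeholder is the one used above):

```shell
# Substitute the bindnetaddr placeholder in a corosync config template.
gen_corosync_conf() {
    # usage: gen_corosync_conf <template> <bind-ip> <output>
    sed "s/-YOUR IP HERE-/$2/" "$1" > "$3"
}
# e.g. on each node:
# gen_corosync_conf corosync.conf.in 192.168.1.2 /etc/cluster/cluster.conf
```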
Obviously remembering to change the ‘bindnetaddr‘ parameter. One thing to be aware of is that this configuration allows any host in the same subnet to join the cluster; no authentication or encryption is required. I believe corosync has some support for encryption keys, but I have not explored this. If you don’t trust the network, this should definitely be examined. Then it is simply a matter of starting corosync and sheepdog on each node:
# service corosync start
# service sheepdog start
If all went to plan, it should be possible to see all hosts in the sheepdog cluster, from any node:
# collie node list
Idx - Host:Port Number of vnodes
------------------------------------------------
0 - 192.168.1.2:7000 64
* 1 - 192.168.1.3:7000 64
The final step in initializing the nodes is to create a storage cluster across the nodes. This command only needs to be run on one of the nodes
# collie cluster format --copies=2
# collie cluster info
running
Ctime Epoch Nodes
2011-10-11 10:50:01 1 [192.168.1.2:7000, 192.168.1.3:7000]
Volume setup
libvirt has a storage management API for creating/managing volumes, but there is not currently a driver for Sheepdog. So for the time being, volumes need to be created manually using the qemu-img command. All that is required is a volume name and a size. So on any of the nodes:
$ qemu-img create sheepdog:demo 1G
The more observant people might notice that this command can be run by any user on the host, no authentication required. Even if the host is locked down to not allow unprivileged user logins, this still means that any compromised QEMU instance can access all the sheepdog storage. Not cool. Some form of authentication is clearly needed before this can be used for production.
With the default Fedora configuration of sheepdog, all the disk volumes end up being stored under /var/lib/sheepdog
, so make sure that directory has plenty of free space.
Guest setup
Once a volume has been created, setting up a guest to use it, is just a matter of using a special XML configuration block for the guest disk.
<disk type='network' device='disk'>
<driver name='qemu' type='raw'/>
<source protocol='sheepdog' name='demo'/>
<target dev='vdb' bus='virtio'/>
</disk>
Notice how although this is a network block device, there is no need to provide a hostname of the storage server. Every virtualization host is a member of the storage cluster, and vice-versa, so the storage is “local” as far as QEMU is concerned. Inside the guest there is nothing special to worry about: a regular virtio block device appears, in this case /dev/vdb. As data is written to the block device in the guest, the data should end up in /var/lib/sheepdog on all nodes in the cluster.
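The stanza can also be hot-plugged into a running guest rather than edited into its XML; a sketch that emits the stanza for a given volume and attaches it with virsh attach-device (the helper name is mine):

```shell
# Emit a sheepdog <disk> stanza for a given volume name and target device.
sheepdog_disk_xml() {
    # usage: sheepdog_disk_xml <volume> <target-dev>
    cat <<EOF
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='sheepdog' name='$1'/>
  <target dev='$2' bus='virtio'/>
</disk>
EOF
}
# sheepdog_disk_xml demo vdb > disk.xml
# virsh attach-device $GUESTNAME disk.xml
```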
One final caveat to mention, is that live migration of guests between hosts is not currently supported with Sheepdog.
Edit: Live migration *is* supported with sheepdog 0.2.0 and later.
I was recently asked to outline some of the risks of virtualization wrt networking, in particular, how guests running on the same network could attack each other’s network traffic. The examples in this blog post will consider a scenario with three guests running on the same host, connected to the libvirt default virtual network (backed by the virbr0 bridge device). As is traditional, the two guests trying to communicate shall be called alice and bob, while the attacker/eavesdropper shall be eve. Provision three guests with those names, and make sure their network configuration looks like this:
<interface type='network'> (for the VM 'alice')
<mac address='52:54:00:00:00:11'/>
<source network='default'/>
<target dev='vnic-alice'/>
<model type='virtio'/>
</interface>
<interface type='network'> (for the VM 'bob')
<mac address='52:54:00:00:00:22'/>
<source network='default'/>
<target dev='vnic-bob'/>
<model type='virtio'/>
</interface>
<interface type='network'> (for the VM 'eve')
<mac address='52:54:00:00:00:33'/>
<source network='default'/>
<target dev='vnic-eve'/>
<model type='virtio'/>
</interface>
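The three stanzas differ only in the guest name and MAC, so they can be generated from a small table; a sketch:

```shell
# Generate the per-guest <interface> stanzas from name/mac pairs.
for pair in alice=52:54:00:00:00:11 bob=52:54:00:00:00:22 eve=52:54:00:00:00:33; do
    name=${pair%%=*}   # part before the '='
    mac=${pair#*=}     # part after the '='
    cat <<EOF
<interface type='network'> (for the VM '$name')
  <mac address='$mac'/>
  <source network='default'/>
  <target dev='vnic-$name'/>
  <model type='virtio'/>
</interface>
EOF
done
```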
If the guest interfaces are to be configured using DHCP, it is desirable to have predictable IP addresses for alice, bob & eve. This can be achieved by altering the default network configuration:
# virsh net-destroy default
# virsh net-edit default
In the editor change the IP configuration to look like
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254' />
<host mac='52:54:00:00:00:11' name='alice' ip='192.168.122.11' />
<host mac='52:54:00:00:00:22' name='bob' ip='192.168.122.22' />
<host mac='52:54:00:00:00:33' name='eve' ip='192.168.122.33' />
</dhcp>
</ip>
With all these changes made, start the network and the guests
# virsh net-start default
# virsh start alice
# virsh start bob
# virsh start eve
After starting these three guests, the host sees the following bridge configuration
# brctl show
bridge name bridge id STP enabled interfaces
virbr0 8000.fe5200000033 yes vnic-alice
vnic-bob
vnic-eve
For the sake of testing, the “very important” communication between alice and bob will be a repeating ICMP ping. So log in to alice (via the console, not the network) and leave the following command running forever:
# ping bob
PING bob.test.berrange.com (192.168.122.22) 56(84) bytes of data.
64 bytes from bob.test.berrange.com (192.168.122.22): icmp_req=1 ttl=64 time=0.790 ms
64 bytes from bob.test.berrange.com (192.168.122.22): icmp_req=2 ttl=64 time=0.933 ms
64 bytes from bob.test.berrange.com (192.168.122.22): icmp_req=3 ttl=64 time=0.854 ms
...
Attacking VMs on a hub
The first thought might be for eve to just run ‘tcpdump‘ (again via the console shell, not a network shell):
# tcpdump icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
...nothing captured...
Fortunately Linux bridge devices act as switches by default, so eve won’t see any traffic flowing between alice and bob. For the sake of completeness though, I should point out that it is possible to make a Linux bridge act as a hub instead of a switch. This can be done as follows:
# brctl setfd virbr0 0
# brctl setageing virbr0 0
Switching back to the tcpdump session in eve should now show traffic between alice and bob being captured:
10:38:15.644181 IP alice.test.berrange.com > bob.test.berrange.com: ICMP echo request, id 8053, seq 29, length 64
10:38:15.644620 IP bob.test.berrange.com > alice.test.berrange.com: ICMP echo reply, id 8053, seq 29, length 64
10:38:16.645523 IP alice.test.berrange.com > bob.test.berrange.com: ICMP echo request, id 8053, seq 30, length 64
10:38:16.645886 IP bob.test.berrange.com > alice.test.berrange.com: ICMP echo reply, id 8053, seq 30, length 64
Attacking VMs on a switch using MAC spoofing
Putting the bridge into ‘hub mode’ was cheating though, so reverse that setting on the host:
# brctl setageing virbr0 300
Since the switch is clever enough to only send traffic out of the port where it has seen the corresponding MAC address, perhaps eve can impersonate bob by spoofing his MAC address. MAC spoofing is quite straightforward; in the console for eve run:
# ifdown eth0
# ifconfig eth0 hw ether 52:54:00:00:00:22
# ifconfig eth0 up
# ifconfig eth0 192.168.122.33/24
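On more recent distributions the same steps are done with iproute2 rather than ifconfig; a sketch that just prints the equivalent command sequence (printed, not executed, since running it needs root and a live guest interface):

```shell
# Print iproute2 equivalents of the ifconfig-based MAC/IP spoofing steps.
spoof_cmds() {
    # usage: spoof_cmds <dev> <mac> <ip/prefix>
    echo "ip link set dev $1 down"
    echo "ip link set dev $1 address $2"
    echo "ip link set dev $1 up"
    echo "ip addr add $3 dev $1"
}
spoof_cmds eth0 52:54:00:00:00:22 192.168.122.33/24
```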
Now that the interface is up with eve‘s IP address but bob‘s MAC address, the final step is to poison the host switch’s MAC address/port mapping. A couple of ping packets sent to an invented IP address (so alice/bob don’t see any direct traffic from eve) suffice to do this:
# ping -c 5 192.168.122.44
To see whether eve is now receiving bob‘s traffic, launch tcpdump again in eve‘s console:
# tcpdump icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
11:02:41.981567 IP alice.test.berrange.com > bob.test.berrange.com: ICMP echo request, id 8053, seq 1493, length 64
11:02:42.981624 IP alice.test.berrange.com > bob.test.berrange.com: ICMP echo request, id 8053, seq 1494, length 64
11:02:43.981785 IP alice.test.berrange.com > bob.test.berrange.com: ICMP echo request, id 8053, seq 1495, length 64
...
The original ‘ping’ session back in alice‘s console should have stopped receiving any replies from bob, since all his traffic is being redirected to eve. Occasionally bob‘s OS might send out some packet of its own accord which re-populates the host bridge’s MAC address/port mapping, causing the ping to start again. eve can trivially re-poison the mapping at any time by sending out further packets of her own.
Attacking VMs on a switch using MAC and IP spoofing
The problem with only using MAC spoofing is that traffic from alice to bob goes into a black hole – the ping packet loss quickly shows alice that something is wrong. To try and address this, eve could also spoof bob‘s IP address, by running:
# ifconfig eth0 192.168.122.22/24
The tcpdump session in eve should now show replies being sent back out, in response to alice‘s ping requests:
# tcpdump icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
11:10:55.797471 IP alice.test.berrange.com > bob.test.berrange.com: ICMP echo request, id 8053, seq 1986, length 64
11:10:55.797521 IP bob.test.berrange.com > alice.test.berrange.com: ICMP echo reply, id 8053, seq 1986, length 64
11:10:56.798914 IP alice.test.berrange.com > bob.test.berrange.com: ICMP echo request, id 8053, seq 1987, length 64
11:10:56.799031 IP bob.test.berrange.com > alice.test.berrange.com: ICMP echo reply, id 8053, seq 1987, length 64
alice‘s ping session will now be receiving replies just as she expects, except that, unbeknownst to her, the replies are actually being sent by eve, not bob.
Protecting VMs against MAC/IP spoofing
So eve can impersonate a ping response from bob, big deal? What about some real application level protocols like SSH or HTTPS which have security built in? These are no doubt harder to attack, but by no means impossible, particularly if you are willing to bet/rely on human/organizational weakness. For MITM attacks like this, the SSH host key fingerprint is critical. How many people actually go to the trouble of checking that the SSH host key matches what it is supposed to be, when first connecting to a new host? I’d wager very few. Rather more users will question the alert from SSH when a previously known host key changes, but I’d still put money on a non-trivial number ignoring the warning. For HTTPS, the key to avoiding MITM attacks is the x509 certificate authority system. Everyone knows that this is absolutely flawless, without any compromised/rogue CAs ;-P
What can we do about these risks for virtual machines running on the same host? libvirt provides a reasonably advanced firewall capability in both its KVM and LXC drivers. This capability is built upon the standard Linux ebtables, iptables and ip6tables infrastructure, and enables rules to be set per guest TAP device. The example firewall filters present out of the box provide a so-called “clean traffic” ruleset. Amongst other things, these filters prevent MAC and IP address spoofing by virtual machines. Enabling this requires a very simple change to the guest domain network interface configuration.
Shut down alice, bob and eve, and then alter their XML configuration (using virsh edit) so that each one now contains the following:
<interface type='network'> (for the VM 'alice')
<mac address='52:54:00:00:00:11'/>
<source network='default'/>
<target dev='vnic-alice'/>
<model type='virtio'/>
<filterref filter='clean-traffic'/>
</interface>
<interface type='network'> (for the VM 'bob')
<mac address='52:54:00:00:00:22'/>
<source network='default'/>
<target dev='vnic-bob'/>
<model type='virtio'/>
<filterref filter='clean-traffic'/>
</interface>
<interface type='network'> (for the VM 'eve')
<mac address='52:54:00:00:00:33'/>
<source network='default'/>
<target dev='vnic-eve'/>
<model type='virtio'/>
<filterref filter='clean-traffic'/>
</interface>
Start the guests again and now try to repeat the previous MAC and IP spoofing attacks from eve. If all is working as intended, it should be impossible for eve to capture any traffic between alice and bob, or to disrupt it in any way.
The clean-traffic filter rules are written to require two configuration parameters: the whitelisted MAC address and the whitelisted IP address. The MAC address is inserted by libvirt automatically, based on the declared MAC in the XML configuration. For the IP address, libvirt will sniff the DHCPOFFER responses from the DHCP server running on the host to learn the assigned IP address. There is a fairly obvious attack here, whereby someone just runs a rogue DHCP server. It is possible to alter the design of the filter rules so that any rogue DHCP servers are blocked; however, there is one additional problem. Upon migration of guests, the new host needs to learn the IP address, but guests don’t re-run DHCP upon migration because it is supposed to be totally seamless. Thus in most cases, when using filters, the host admin will want to explicitly specify the guest’s IP address in the XML:
<filterref filter='clean-traffic'>
<parameter name='IP' value='192.168.122.33'/>
</filterref>
There is quite a lot more that can be done using libvirt’s guest network filtering capabilities. One idea would be to block outbound SMTP traffic, to prevent compromised guests being turned into spambots. In fact, almost anything an administrator might wish to do inside the guest using iptables could be done in the host using libvirt’s network filtering, to provide additional protection against guest OS compromise.
This will be left as an exercise for the reader…
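As a starting point for that exercise, the outbound-SMTP idea might look something like the following nwfilter definition (a sketch based on the nwfilter rule format; the filter name ‘no-smtp’ is made up, and a guest would reference it via a filterref just as with clean-traffic):

```xml
<filter name='no-smtp' chain='root'>
  <!-- Drop any outbound TCP traffic to port 25 (SMTP) -->
  <rule action='drop' direction='out' priority='500'>
    <tcp dstportstart='25'/>
  </rule>
</filter>
```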