The virt-tools.org website, launched last year, provides tutorials, videos, documentation, online help and roadmaps relevant to libvirt, libguestfs, gtk-vnc, spice, other related libraries, and tools or applications like virt-manager & virt-install. The site's goal is to inform & assist end users, system administrators & application developers who wish to learn about the capabilities of the virt tools stack. The focus of most content is the state-of-the-art, Linux-native KVM hypervisor, but writing about using other hypervisors with the virt tools is also welcome.
Back in June I finally got around to setting up a blog planet to aggregate the RSS feeds of various people working in libvirt, libguestfs, etc. While I announced this to various mailing lists, it appears I forgot to blog about it. Whoops. So this post is just a quick alert that if you’re interested in libvirt, libguestfs, virt-manager, etc and don’t want to follow a high traffic site like the Fedora planet, then this is the blog feed aggregator for you:
There were recently patches posted to libvir-list to improve the Ceph support in the KVM driver. While trying to review them it quickly became clear I did not have enough knowledge of Ceph to approve the code. So I decided it was time to setup some clustered storage devices to test libvirt with. I decided to try out Ceph, GlusterFS and Sheepdog, and by virtue of Sheepdog compiling the fastest, that is the first one I have tried and thus responsible for this blog post.
Host setup
If you have Fedora 16, sheepdog can be installed directly using yum:
# yum install sheepdog
Sheepdog relies on corosync to maintain cluster membership, so the first step is to configure that. Corosync ships with an example configuration file, but since I’ve not used it before, I chose to just use the example configuration recommended by the Sheepdog website. So on the 2 hosts I wanted to participate in the cluster I created:
# cat > /etc/corosync/corosync.conf <<EOF
compatibility: whitetank
totem {
    version: 2
    secauth: off
    threads: 0
    interface {
        ringnumber: 0
        bindnetaddr: -YOUR IP HERE-
        mcastaddr: 226.94.1.1
        mcastport: 5405
    }
}
logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    to_syslog: yes
    logfile: /var/log/cluster/corosync.log
    debug: off
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}
amf {
    mode: disabled
}
EOF
Obviously remember to change the ‘bindnetaddr’ parameter. One thing to be aware of is that this configuration allows any host in the same subnet to join the cluster; no authentication or encryption is required. I believe corosync has some support for encryption keys, but I have not explored this. If you don’t trust the network, this should definitely be examined. Then it is simply a matter of starting corosync and sheepdog on each node:
# service corosync start
# service sheepdog start
If all went to plan, it should be possible to see all hosts in the sheepdog cluster, from any node:
# collie node list
   Idx - Host:Port            Number of vnodes
------------------------------------------------
   0 - 192.168.1.2:7000             64
 * 1 - 192.168.1.3:7000             64
The final step in initializing the nodes is to create a storage cluster across them. This command only needs to be run on one of the nodes:
# collie cluster format --copies=2
# collie cluster info
running
Ctime Epoch Nodes
2011-10-11 10:50:01 1 [192.168.1.2:7000, 192.168.1.3:7000]
Volume setup
libvirt has a storage management API for creating/managing volumes, but there is not currently a driver for sheepdog. So for the time being, volumes need to be created manually using the qemu-img command. All that is required is a volume name and a size. So on any of the nodes:
$ qemu-img create sheepdog:demo 1G
The more observant people might notice that this command can be run by any user on the host, no authentication required. Even if the host is locked down to not allow unprivileged user logins, this still means that any compromised QEMU instance can access all the sheepdog storage. Not cool. Some form of authentication is clearly needed before this can be used for production.
With the default Fedora configuration of sheepdog, all the disk volumes end up being stored under /var/lib/sheepdog, so make sure that directory has plenty of free space.
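A quick way to sanity check the result is to list the volumes the cluster knows about; assuming your sheepdog release provides the ‘vdi list’ subcommand of collie, run this on any node:
$ collie vdi list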
Guest setup
Once a volume has been created, setting up a guest to use it is just a matter of using a special XML configuration block for the guest disk.
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='sheepdog' name='demo'/>
  <target dev='vdb' bus='virtio'/>
</disk>
Notice how, although this is a network block device, there is no need to provide a hostname of the storage server. Every virtualization host is a member of the storage cluster, and vice-versa, so the storage is “local” as far as QEMU is concerned. Inside the guest there is nothing special to worry about: a regular virtio block device appears, in this case /dev/vdb. As data is written to the block device in the guest, the data should end up in /var/lib/sheepdog on all nodes in the cluster.
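If the guest is already running, the same snippet can also be hot-plugged rather than editing the persistent config; a quick sketch, assuming the XML above is saved as sheepdog-disk.xml and the guest is named demo (both names are just examples):
# virsh attach-device demo sheepdog-disk.xml
The device can be removed again later with ‘virsh detach-device’ and the same XML file.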
One final caveat to mention is that live migration of guests between hosts is not currently supported with Sheepdog.
Edit: Live migration *is* supported with sheepdog 0.2.0 and later.
In “fantasy island” the libvirt and KVM/LXC code is absolutely perfect and always does exactly what you want it to do. Back in the real world, however, there may be annoying bugs in libvirt, KVM/LXC, the kernel and countless other parts of the OS that conspire to cause you great pain and suffering. This blog post contains a very quick introduction to debugging/troubleshooting libvirt problems, particularly focusing on the KVM and LXC drivers.
libvirt logging capabilities
The libvirt code is full of logging statements which can be instrumental in understanding where a problem might lie.
Configuring libvirtd logging
Current releases of libvirt will log problems occurring in libvirtd at level WARNING/ERROR to a dedicated log file /var/log/libvirt/libvirtd.log, while older releases would send them to syslog, typically ending up in /var/log/messages. The libvirtd configuration file has two parameters that can be used to increase the amount of logging information printed.
log_filters="...filter string..."
log_outputs="...destination config..."
The logging documentation describes these in some detail. If you just want to quickly get started though, it suffices to understand that filter strings are simply doing substring matches against libvirt source filenames. So to enable all debug information from ‘src/util/event.c’ (the libvirt event loop) you would set
log_filters="1:event"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
If you wanted to enable logging for everything in ‘src/util’, except for ‘src/util/event.c’ you would set
log_filters="3:event 1:util"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
Configuring libvirt client logging
On the client side of libvirt there is no configuration file to put log settings in, so instead there are a couple of environment variables. These take exactly the same type of strings as the libvirtd configuration file:
LIBVIRT_LOG_FILTERS="...filter string..."
LIBVIRT_LOG_OUTPUTS="...destination config..."
export LIBVIRT_LOG_FILTERS LIBVIRT_LOG_OUTPUTS
One thing to be aware of is that with the KVM and LXC drivers in libvirt, very little code is ever run on the libvirt client. The only interesting pieces are the RPC code, event loop and main API entrypoints. To enable debugging of the RPC code you might use
LIBVIRT_LOG_FILTERS="1:rpc" LIBVIRT_LOG_OUTPUTS="1:stderr" virsh list
Useful log filter settings for KVM and LXC
The following are some useful filter values for logging with the KVM and LXC drivers (a combined libvirtd.conf example follows the list):
- All libvirt public APIs invoked: 1:libvirt
- All external commands run by libvirt: 1:command
- Cgroups management: 1:cgroup
- All QEMU driver code: 1:qemu
- QEMU text monitor commands: 1:qemu_monitor_text
- QEMU JSON/QMP monitor commands: 1:qemu_monitor_json
- All LXC driver code: 1:lxc
- All lock management code: 1:locking
- All security manager code: 1:security
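Putting some of these together, a libvirtd.conf snippet that captures the QEMU monitor traffic and external commands, while leaving everything else at the default warning verbosity, might look like this (illustrative only; adjust the filter list to the problem at hand):
# /etc/libvirt/libvirtd.conf
log_level = 3
log_filters="1:qemu_monitor_json 1:qemu_monitor_text 1:command"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"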
QEMU driver logfiles
Every QEMU process run by libvirt has a dedicated log file /var/log/libvirt/qemu/$VMNAME.log which captures any data that QEMU writes to stderr/stdout. It also contains timestamps written by libvirtd whenever the QEMU process is started and exits. Finally, prior to starting a guest, libvirt will write out the full set of environment variables and command line arguments it intends to launch QEMU with.
If you are running libvirtd with elevated log settings, there is also the possibility that some of the logging output will end up in the per-VM logfile, instead of the location set by the log_outputs configuration parameter. This is because a little bit of libvirt code will run in the child process between the time it is forked and QEMU is exec()d.
LXC driver logfiles
Every LXC container run by libvirt has a dedicated log file /var/log/libvirt/lxc/$VMNAME.log which captures any data the container processes write to stderr/stdout. As with QEMU it will also contain the command line args libvirt uses, though these are much less interesting in the LXC case. The LXC logfile is mostly useful for debugging the initial container bootstrap process.
Troubleshooting SELinux / sVirt
On a RHEL or Fedora host, the out of the box configuration will run all guests under confined SELinux contexts. One common problem that may affect developers running libvirtd straight from the source tree is that libvirtd itself will run under the wrong context, which in turn prevents guests from running correctly. This can be addressed in two ways: first, by manually labelling the libvirtd binary after each rebuild
chcon system_u:object_r:virtd_exec_t:s0 $SRCTREE/daemon/.libs/lt-libvirtd
Or by specifying a label when executing libvirtd
runcon system_u:system_r:virtd_t:s0 $SRCTREE/daemon/libvirtd
Another problem might be with libvirt not correctly labelling some device needed by the QEMU process. The best way to see what’s going on here is to enable libvirtd logging with a filter of “1:security_selinux”, which will print out a message for every single file path that libvirtd labels. Then look at the log to see that everything expected is present:
14:36:57.223: 14351: debug : SELinuxGenSecurityLabel:284 : model=selinux label=system_u:system_r:svirt_t:s0:c669,c903 imagelabel=system_u:object_r:svirt_image_t:s0:c669,c903 baselabel=(null)
14:36:57.350: 14351: info : SELinuxSetFilecon:402 : Setting SELinux context on '/var/lib/libvirt/images/f16x86_64.img' to 'system_u:object_r:svirt_image_t:s0:c669,c903'
14:36:57.350: 14351: info : SELinuxSetFilecon:402 : Setting SELinux context on '/home/berrange/boot.iso' to 'system_u:object_r:virt_content_t:s0'
14:36:57.551: 14351: debug : SELinuxSetSecurityDaemonSocketLabel:1129 : Setting VM f16x86_64 socket context unconfined_u:unconfined_r:unconfined_t:s0:c669,c903
If a guest is failing to start, then there are two ways to double check if it really is SELinux related. SELinux can be put into permissive mode on the virtualization host
setenforce 0
Or the sVirt driver can be disabled in libvirt entirely
# vi /etc/libvirt/qemu.conf
...set security_driver = "none" ...
# service libvirtd restart
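It can also be worth checking the audit log for AVC denials while reproducing the problem; a quick example, assuming auditd is running on the host:
# ausearch -m avc -ts recent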
Troubleshooting cgroups
When libvirt runs guests on modern Linux systems, cgroups will be used to control aspects of the guests’ execution. If any cgroups are mounted on the host when libvirtd starts up, it will create a basic hierarchy
$MOUNT_POINT
 |
 +- libvirt
     |
     +- qemu
     +- lxc
When starting a KVM or LXC guest, further directories will be created, one per guest, so that after a while the tree will look like
$MOUNT_POINT
 |
 +- libvirt
     |
     +- qemu
     |   |
     |   +- VMNAME1
     |   +- VMNAME2
     |   +- VMNAME3
     |   +- ...
     |
     +- lxc
         |
         +- VMNAME1
         +- VMNAME2
         +- VMNAME3
         +- ...
Assuming the host administrator has not changed the policy in the top level cgroups, there should be no functional change to operation of the guests with this default setup. There are possible exceptions though if you are trying something unusual. For example, the ‘devices’ cgroups controller will be used to set up a whitelist of block / character devices that QEMU is allowed to access. So if you have modified QEMU to access a funky new device, libvirt will likely block this via the cgroups device ACL. Due to various kernel bugs, some of the cgroups controllers have also had a detrimental performance impact on both QEMU guests and the host OS as a whole.
libvirt will never try to mount any cgroups itself, so the quickest way to stop libvirt using cgroups is to stop the host OS from mounting them. This is not always desirable though, so there is a configuration parameter in /etc/libvirt/qemu.conf which can be used to restrict what cgroups libvirt will use.
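For illustration, a qemu.conf snippet along these lines restricts libvirt to a subset of controllers (the parameter name, cgroup_controllers, is as found in the stock qemu.conf; check the comments in your installed copy for the full list of supported names):
# /etc/libvirt/qemu.conf
cgroup_controllers = [ "cpu", "cpuacct", "memory" ]
Omitting "devices" from the list, for example, means libvirt will not apply the block/character device whitelist described above.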
Running from the GIT source tree
Sometimes when troubleshooting a particularly hard problem it might be desirable to build libvirt from the latest GIT source and run that. When doing this it is a good idea not to overwrite your distro provided installation with a GIT build, but instead run libvirt directly from the source tree. The first thing to be careful of is that the custom build uses the right installation prefix (i.e. /etc, /usr, /var and not /usr/local). To simplify this libvirt provides an ‘autogen.sh’ script to run all the right autotools commands and set the correct prefixes. So to build libvirt from GIT, in a way that is compatible with a typical distro build, use:
./autogen.sh --system --enable-compile-warnings=error
make
Hint: use make -j 4 (or larger) to significantly speed up the build on multi-core systems
To run libvirtd from the source tree, as root, stop the existing daemon and invoke the libtool wrapper script
# service libvirtd stop
# ./daemon/libvirtd
Or to run with SELinux contexts
# service libvirtd stop
# runcon system_u:system_r:virtd_t:s0-s0:c0.c1023 ./daemon/libvirtd
virsh can easily be run from the source tree in the same way
# ./tools/virsh ....normal args...
Running python programs against a non-installed libvirt gets a little harder, but that can be overcome too
$ export PYTHONPATH=$SOURCETREE/python:$SOURCETREE/python/.libs
$ export LD_LIBRARY_PATH=$SOURCETREE/src/.libs
$ python virt-manager --no-fork
When running the LXC driver, it is necessary to make a change to the guest XML to point it to a different emulator. Run ‘virsh edit $GUEST’ and change
/usr/libexec/libvirt_lxc
to
$SOURCETREE/src/libvirt_lxc
(expand $SOURCETREE to be the actual path of the GIT checkout – libvirt won’t interpret env vars in the XML)
I was recently asked to outline some of the risks of virtualization wrt networking, in particular how guests running on the same network could attack each other’s network traffic. The examples in this blog post will consider a scenario with three guests running on the same host, connected to the libvirt default virtual network (backed by the virbr0 bridge device). As is traditional, the two guests trying to communicate shall be called alice and bob, while the attacker/eavesdropper shall be eve. Provision three guests with those names, and make sure their network configuration looks like this:
<interface type='network'>     (for the VM 'alice')
  <mac address='52:54:00:00:00:11'/>
  <source network='default'/>
  <target dev='vnic-alice'/>
  <model type='virtio'/>
</interface>

<interface type='network'>     (for the VM 'bob')
  <mac address='52:54:00:00:00:22'/>
  <source network='default'/>
  <target dev='vnic-bob'/>
  <model type='virtio'/>
</interface>

<interface type='network'>     (for the VM 'eve')
  <mac address='52:54:00:00:00:33'/>
  <source network='default'/>
  <target dev='vnic-eve'/>
  <model type='virtio'/>
</interface>
If the guest interfaces are to be configured using DHCP, it is desirable to have predictable IP addresses for alice, bob & eve. This can be achieved by altering the default network configuration:
# virsh net-destroy default
# virsh net-edit default
In the editor change the IP configuration to look like
<ip address='192.168.122.1' netmask='255.255.255.0'>
  <dhcp>
    <range start='192.168.122.2' end='192.168.122.254' />
    <host mac='52:54:00:00:00:11' name='alice' ip='192.168.122.11' />
    <host mac='52:54:00:00:00:22' name='bob' ip='192.168.122.22' />
    <host mac='52:54:00:00:00:33' name='eve' ip='192.168.122.33' />
  </dhcp>
</ip>
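Before starting the network again it is worth double checking that the edit was saved as intended:
# virsh net-dumpxml default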
With all these changes made, start the network and the guests
# virsh net-start default
# virsh start alice
# virsh start bob
# virsh start eve
After starting these three guests, the host sees the following bridge configuration
# brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.fe5200000033       yes             vnic-alice
                                                        vnic-bob
                                                        vnic-eve
For the sake of testing, the “very important” communication between alice and bob will be a repeating ICMP ping. So login to alice (via the console, not the network) and leave the following command running forever:
# ping bob
PING bob.test.berrange.com (192.168.122.22) 56(84) bytes of data.
64 bytes from bob.test.berrange.com (192.168.122.22): icmp_req=1 ttl=64 time=0.790 ms
64 bytes from bob.test.berrange.com (192.168.122.22): icmp_req=2 ttl=64 time=0.933 ms
64 bytes from bob.test.berrange.com (192.168.122.22): icmp_req=3 ttl=64 time=0.854 ms
...
Attacking VMs on a hub
The first thought might be for eve to just run ‘tcpdump’ (again via the console shell, not a network shell):
# tcpdump icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
...nothing captured...
Fortunately Linux bridge devices act as switches by default, so eve won’t see any traffic flowing between alice and bob. For the sake of completeness though, I should point out that it is possible to make a Linux bridge act as a hub instead of a switch. This can be done as follows:
# brctl setfd virbr0 0
# brctl setageing virbr0 0
Switching back to the tcpdump session in eve should now show traffic between alice and bob being captured:
10:38:15.644181 IP alice.test.berrange.com > bob.test.berrange.com: ICMP echo request, id 8053, seq 29, length 64
10:38:15.644620 IP bob.test.berrange.com > alice.test.berrange.com: ICMP echo reply, id 8053, seq 29, length 64
10:38:16.645523 IP alice.test.berrange.com > bob.test.berrange.com: ICMP echo request, id 8053, seq 30, length 64
10:38:16.645886 IP bob.test.berrange.com > alice.test.berrange.com: ICMP echo reply, id 8053, seq 30, length 64
Attacking VMs on a switch using MAC spoofing
Putting the bridge into ‘hub mode’ was cheating though, so reverse that setting on the host
# brctl setageing virbr0 300
Since the switch is clever enough to only send traffic out of the port where it has seen the corresponding MAC address, perhaps eve can impersonate bob by spoofing his MAC address. MAC spoofing is quite straightforward; in the console for eve run:
# ifdown eth0
# ifconfig eth0 hw ether 52:54:00:00:00:22
# ifconfig eth0 up
# ifconfig eth0 192.168.122.33/24
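The same spoofing can be done with the iproute2 tools instead of ifconfig, if you prefer; a rough equivalent (these are the standard ip commands, not taken from the original setup):
# ip link set dev eth0 down
# ip link set dev eth0 address 52:54:00:00:00:22
# ip link set dev eth0 up
# ip addr add 192.168.122.33/24 dev eth0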
Now that the interface is up with eve’s IP address, but bob’s MAC address, the final step is to just poison the host switch’s MAC address/port mapping. A couple of ping packets sent to an invented IP address (so alice/bob don’t see any direct traffic from eve) suffice to do this:
# ping -c 5 192.168.122.44
To see whether eve is now receiving bob’s traffic, launch tcpdump again in eve’s console:
# tcpdump icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
11:02:41.981567 IP alice.test.berrange.com > bob.test.berrange.com: ICMP echo request, id 8053, seq 1493, length 64
11:02:42.981624 IP alice.test.berrange.com > bob.test.berrange.com: ICMP echo request, id 8053, seq 1494, length 64
11:02:43.981785 IP alice.test.berrange.com > bob.test.berrange.com: ICMP echo request, id 8053, seq 1495, length 64
...
The original ‘ping’ session, back in alice’s console, should have stopped receiving any replies from bob since all his traffic is being redirected to eve. Occasionally bob’s OS might send out some packet of its own accord which re-populates the host bridge’s MAC address/port mapping, causing the ping to start again. eve can trivially re-poison the mapping at any time by sending out further packets of her own.
Attacking VMs on a switch using MAC and IP spoofing
The problem with only using MAC spoofing is that traffic from alice to bob goes into a black hole; the ping packet loss quickly shows alice that something is wrong. To try and address this, eve could also try spoofing bob’s IP address by running:
# ifconfig eth0 192.168.122.22/24
The tcpdump session in eve should now show replies being sent back out in response to alice’s ping requests:
# tcpdump icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
11:10:55.797471 IP alice.test.berrange.com > bob.test.berrange.com: ICMP echo request, id 8053, seq 1986, length 64
11:10:55.797521 IP bob.test.berrange.com > alice.test.berrange.com: ICMP echo reply, id 8053, seq 1986, length 64
11:10:56.798914 IP alice.test.berrange.com > bob.test.berrange.com: ICMP echo request, id 8053, seq 1987, length 64
11:10:56.799031 IP bob.test.berrange.com > alice.test.berrange.com: ICMP echo reply, id 8053, seq 1987, length 64
alice’s ping session will now be receiving replies just as she expects, except that, unbeknown to her, the replies are actually being sent by eve, not bob.
Protecting VMs against MAC/IP spoofing
So eve can impersonate a ping response from bob, big deal? What about some real application level protocols like SSH or HTTPS which have security built in? These are no doubt harder to attack, but by no means impossible, particularly if you are willing to bet/rely on human/organizational weakness. For MITM attacks like this, the SSH host key fingerprint is critical. How many people actually go to the trouble of checking that the SSH host key matches what it is supposed to be when first connecting to a new host? I’d wager very few. Rather more users will question the alert from SSH when a previously known host key changes, but I’d still put money on a non-trivial number ignoring the warning. For HTTPS, the key to avoiding MITM attacks is the x509 certificate authority system. Everyone knows that this is absolutely flawless, without any compromised/rogue CAs ;-P
What can we do about these risks for virtual machines running on the same host? libvirt provides a reasonably advanced firewall capability in both its KVM and LXC drivers. This capability is built upon the standard Linux ebtables, iptables and ip6tables infrastructure and enables rules to be set per guest TAP device. The example firewall filters that are present out of the box provide a so-called “clean traffic” ruleset. Amongst other things, these filters prevent MAC and IP address spoofing by virtual machines. Enabling this requires a very simple change to the guest domain network interface configuration.
Shut down alice, bob and eve and then alter their XML configuration (using virsh edit) so that each one now contains the following:
<interface type='network'>     (for the VM 'alice')
  <mac address='52:54:00:00:00:11'/>
  <source network='default'/>
  <target dev='vnic-alice'/>
  <model type='virtio'/>
  <filterref filter='clean-traffic'/>
</interface>

<interface type='network'>     (for the VM 'bob')
  <mac address='52:54:00:00:00:22'/>
  <source network='default'/>
  <target dev='vnic-bob'/>
  <model type='virtio'/>
  <filterref filter='clean-traffic'/>
</interface>

<interface type='network'>     (for the VM 'eve')
  <mac address='52:54:00:00:00:33'/>
  <source network='default'/>
  <target dev='vnic-eve'/>
  <model type='virtio'/>
  <filterref filter='clean-traffic'/>
</interface>
Start the guests again and now try to repeat the previous MAC and IP spoofing attacks from eve. If all is working as intended, it should be impossible for eve to capture any traffic between alice and bob, or to disrupt it in any way.
The clean-traffic filter rules are written to require two configuration parameters: the whitelisted MAC address and the whitelisted IP address. The MAC address is inserted by libvirt automatically based on the declared MAC in the XML configuration. For the IP address, libvirt will sniff the DHCPOFFER responses from the DHCP server running on the host to learn the assigned IP address. There is a fairly obvious attack with this, whereby someone just runs a rogue DHCP server. It is possible to alter the design of the filter rules so that any rogue DHCP servers are blocked; however, there is one additional problem. Upon migration of guests, the new host needs to learn the IP address, but guests don’t re-run DHCP upon migration because it is supposed to be totally seamless. Thus in most cases, when using filters, the host admin will want to explicitly specify the guest’s IP address in the XML:
<filterref filter='clean-traffic'>
  <parameter name='IP' value='192.168.122.33'/>
</filterref>
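To see exactly what the clean-traffic filter (and the helper filters it references) contains on your host, virsh can list and dump the filter definitions:
# virsh nwfilter-list
# virsh nwfilter-dumpxml clean-traffic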
There is quite a lot more that can be done using libvirt’s guest network filtering capabilities. One idea would be to block outbound SMTP traffic to prevent compromised guests being turned into spambots. In fact almost anything that an administrator might wish to do inside the guest using iptables could be done in the host using libvirt’s network filtering, to provide additional protection against guest OS compromise.
This will be left as an exercise for the reader…
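As a starting point for that exercise, a minimal sketch of a custom filter that drops outbound SMTP might look something like this (untested and simplified; the filter name is made up and the rule syntax should be checked against the nwfilter documentation):
<filter name='no-smtp' chain='root'>
  <rule action='drop' direction='out' priority='500'>
    <tcp dstportstart='25' dstportend='25'/>
  </rule>
</filter>
It would be loaded with ‘virsh nwfilter-define no-smtp.xml’ and then referenced from the guest interface with a filterref, just like clean-traffic.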
sVirt has been available in the libvirt KVM driver for a few years now, both for SELinux and more recently for AppArmor. When using it with SELinux there has been a choice of two different configurations:
- Dynamic configuration
- libvirt takes the default base label (“system_u:system_r:svirt_t:s0”), generates a unique MCS label for the guest (“c123,c465”) and combines them to form the complete security label for the virtual machine process. libvirt takes the same MCS label and combines it with the default image base label (“system_u:object_r:svirt_image_t:s0”) to form the image label. libvirt will then automatically apply the image label to all host OS files that the VM is required to access. These can be disk images, disk devices, PCI devices (we label the corresponding sysfs files), USB devices (we label the /dev/bus/usb files), kernel/initrd files, and a few more things. When the VM shuts down again, we reverse the labelling. This mode was originally intended for general usage where the management application is not aware of the existence of sVirt.
- Static configuration
- The guest XML provides the full security label, including the MCS part. libvirt simply assigns this security label to the virtual machine process without trying to alter/interpret it any further. libvirt does not change the labels of any files on disk. The administrator/application using libvirt is expected to have done all the resource file labelling ahead of time. This mode was originally intended for locked down MLS environments, where even libvirtd itself is not trusted to perform relabelling.
These two configurations have worked well enough for the two use cases they were designed to satisfy. As sVirt has become an accepted part of the libvirt/KVM ecosystem, application developers have started wanting to do more advanced things which are currently harder than they should be. In particular some applications want to have full control over the security label generation (eg to ensure cluster-wide unique labels, instead of per-host uniqueness), but still want libvirt to take care of resource relabelling. This is sort of a hybrid between our static & dynamic configuration. Other applications would like to be able to choose a different base label (“system_u:system_r:svirt_custom_t:s0”) but still have libvirt assign the MCS suffix and perform relabelling. This is another variant on dynamic labelling. To satisfy these use cases we have extended the syntax for sVirt labelling in recent libvirt. The “seclabel” element gained a ‘relabel’ attribute to control whether resource relabelling is attempted. A new “baselabel” element was introduced to override the default base security label in dynamic mode. So there are now 4 possible styles of configuration:
- Dynamic configuration (the default out of the box usage)
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c192,c392</label> (output only element)
    <imagelabel>system_u:object_r:svirt_image_t:s0:c192,c392</imagelabel> (output only element)
  </seclabel>
- Dynamic configuration, with base label
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <baselabel>system_u:system_r:svirt_custom_t:s0</baselabel>
    <label>system_u:system_r:svirt_custom_t:s0:c192,c392</label> (output only element)
    <imagelabel>system_u:object_r:svirt_image_t:s0:c192,c392</imagelabel> (output only element)
  </seclabel>
- Static configuration, no resource labelling (primarily for MLS/strictly controlled environments)
  <seclabel type='static' model='selinux' relabel='no'>
    <label>system_u:system_r:svirt_custom_t:s0:c192,c392</label>
  </seclabel>
- Static configuration, with dynamic resource labelling
  <seclabel type='static' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_custom_t:s0:c192,c392</label>
  </seclabel>
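Whichever style is chosen, it is easy to verify on the host what was actually applied; a couple of illustrative checks (the domain name and image path here are just examples):
# ps -eZ | grep qemu
# ls -Z /var/lib/libvirt/images/
# virsh dumpxml f16x86_64 | grep -A2 seclabel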