An oddity delaying kernel shutdown

Posted: September 22nd, 2011 | Filed under: Fedora, libvirt, Virt Tools | 2 Comments »

A couple of years ago Dan Walsh introduced the SELinux sandbox, a way to confine what resources an application can access using SELinux and the Linux filesystem namespace functionality. Meanwhile we developed sVirt in libvirt to confine QEMU virtual machines, and QEMU itself has gained support for passing host filesystems straight through to the guest operating system, using a VirtIO based transport for the 9p filesystem. This got me thinking about whether it was now practical to create a sandbox based on QEMU, or rather KVM, by booting a guest with a root filesystem pointing to the host’s root filesystem (readonly of course), combined with a couple of overlays for /tmp and /home, all protected by sVirt.

One prominent factor in the practicality is how much time the KVM and kernel startup sequences add to the overall execution time of the command being sandboxed. From Richard Jones’ work on libguestfs I know that it is possible to boot to a functioning application inside KVM in < 5 seconds. The approach I take with 9pfs has a slight advantage over libguestfs because it does not incur the initial (one-time only per kernel version) delay for building a virtual appliance based on the host filesystem, since we’re able to access the host filesystem directly from the guest. The fine details will have to wait for a future blog post, but suffice to say, a stock Fedora kernel can be made to boot to the point of exec()ing the ‘init’ binary in the ramdisk in ~0.6 seconds, and the custom ‘init’ binary I use for mounting the 9p filesystems takes another ~0.2 seconds, giving a total boot time of 0.8 seconds.
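
To make this concrete, below is a minimal sketch of what such a custom ‘init’ binary can look like: mount the host root via virtio-9p readonly, add a couple of tmpfs overlays, then hand control to the command being sandboxed. The mount tag ‘hostroot’, the paths and the choice of /bin/sh are illustrative assumptions, not the exact values my init uses.

/*
 * Hypothetical sketch of a minimal sandbox 'init'. The mount tag
 * "hostroot" and the target paths are illustrative placeholders.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/mount.h>

int main(void)
{
    /* The host root filesystem, exported readonly by QEMU over
     * virtio-9p under the mount tag "hostroot" */
    if (mount("hostroot", "/sysroot", "9p", MS_RDONLY,
              "trans=virtio") < 0) {
        perror("mount 9p root");
        return 1;
    }

    /* Writable overlays so the sandboxed command has somewhere
     * to scribble */
    if (mount("tmpfs", "/sysroot/tmp", "tmpfs", 0, "mode=1777") < 0 ||
        mount("tmpfs", "/sysroot/home", "tmpfs", 0, NULL) < 0) {
        perror("mount overlays");
        return 1;
    }

    /* Switch into the new root and run the sandboxed command */
    if (chroot("/sysroot") < 0 || chdir("/") < 0) {
        perror("chroot");
        return 1;
    }
    execl("/bin/sh", "sh", (char *)NULL);
    perror("exec");
    return 1;
}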

Boot-up time, however, is only one side of the story. For some application sandboxing scenarios, the shutdown time might be just as important as the startup time. I naively thought that the kernel shutdown time would be immeasurably short. It turns out I was wrong, big time. Timestamps on the printk messages showed that the shutdown time was in fact longer than the bootup time! The telling messages were:

[    1.486287] md: stopping all md devices.
[    2.492737] ACPI: Preparing to enter system sleep state S5
[    2.493129] Disabling non-boot CPUs ...
[    2.493129] Power down.
[    2.493129] acpi_power_off called

which point the finger at the MD driver. I was sceptical that the MD driver could be to blame, since my virtual machine does not have any block devices at all, let alone MD devices. To be sure though, I took a look at the MD driver code to see just what it does during kernel shutdown. To my surprise, the answer was blindingly obvious:

static int md_notify_reboot(struct notifier_block *this, unsigned long code, void *x)
{
  struct list_head *tmp;
  mddev_t *mddev;

  if ((code == SYS_DOWN) || (code == SYS_HALT) || (code == SYS_POWER_OFF)) {

    printk(KERN_INFO "md: stopping all md devices.\n");

    for_each_mddev(mddev, tmp)
      if (mddev_trylock(mddev)) {
          /* Force a switch to readonly even array
           * appears to still be in use.  Hence
           * the '100'.
           */
          md_set_readonly(mddev, 100);
          mddev_unlock(mddev);
      }

    /*
     * certain more exotic SCSI devices are known to be
     * volatile wrt too early system reboots. While the
     * right place to handle this issue is the given
     * driver, we do want to have a safe RAID driver ...
     */
    mdelay(1000*1);
  }
  return NOTIFY_DONE;
}

In other words, regardless of whether you actually have any MD devices, it imposes a fixed 1 second delay on your shutdown sequence :-(
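
The obvious fix, sketched below, is to only pay the safety delay when at least one array was actually switched to readonly. This is a sketch of the idea, not necessarily the exact patch that went upstream:

static int md_notify_reboot(struct notifier_block *this, unsigned long code, void *x)
{
  struct list_head *tmp;
  mddev_t *mddev;
  int need_delay = 0;

  if ((code == SYS_DOWN) || (code == SYS_HALT) || (code == SYS_POWER_OFF)) {

    printk(KERN_INFO "md: stopping all md devices.\n");

    for_each_mddev(mddev, tmp)
      if (mddev_trylock(mddev)) {
          md_set_readonly(mddev, 100);
          mddev_unlock(mddev);
          need_delay = 1;
      }

    /* Only the exotic SCSI devices behind real arrays need
     * the settling time */
    if (need_delay)
      mdelay(1000*1);
  }
  return NOTIFY_DONE;
}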

With this kernel bug fixed, the total time my KVM sandbox spends running the kernel is reduced by more than 50%, from 1.9s to 0.9s. The biggest delay is now down to SeaBIOS & QEMU, which together take 2s to get from the start of QEMU’s main() to finally jumping into the kernel entry point.

Taking screenshots on colour managed displays with GIMP

Posted: December 12th, 2009 | Filed under: Fedora, Photography | 2 Comments »

Last week I mentioned how I had started running F12 with a colour managed desktop wherever the applications made it possible. Today I had need to capture some screenshots of a new (& as yet unannounced) application I’m working on. There are two ways I normally capture screenshots. The first is to just press the ‘PrintScreen’ button, let GNOME save the PNG image and then crop it in GIMP or something like that. The second way is to use GIMP’s own screen capture function (File -> Create -> Screenshot), useful if you want a capture of a specific window instead of the whole desktop.

Today I acquired the screenshot using GIMP, since I already had it open. And the colours in the screenshot looked like complete garbage. It shouldn’t be hard to understand what went wrong here. The default ICC profile for new images created in GIMP is the sRGB colourspace. In image windows, GIMP applies a transformation to the image colours, going from the sRGB profile to the monitor’s calibrated profile. Except that, since this image was created from a screenshot of a colour managed display, the colours had already been transformed according to the monitor’s profile. GIMP is in essence applying a duplicate conversion. It is no wonder the result looks awful.

Having realized that a duplicate conversion was taking place, the solution was easy: tell GIMP that the image is in the monitor’s colourspace, rather than the default sRGB. This is done using the menu Image -> Mode -> Assign Color Profile. With the ‘Assign colour profile’ operation, you are not changing the pixel values in the source image, merely telling GIMP how to interpret them. Since it now knows the image is already in the monitor’s colourspace, the transformation becomes a no-op, and the image displays in sensible colours again.

It is possible to leave it at that, save the image and do whatever you were going to do with it. This is sub-optimal if you intend to distribute the image to other people, though. The sRGB colourspace is intended as a generic colourspace which has reasonable display characteristics even on monitors which are not calibrated / colour managed. If uploading to the web, most people viewing the image are not going to have colour managed displays. Thus, if you want the image to look reasonable for them, it is wise to now convert it to the sRGB colourspace. This is done using the menu Image -> Mode -> Convert to Color Profile. In contrast to the ‘Assign’ operation, the ‘Convert’ operation does change the actual pixel values in the source image. Depending on the overlap between the monitor’s colourspace and the sRGB colourspace, and the rendering intent chosen, this will be a slightly lossy process. The image colours won’t display in quite the same way as before, but they will display better on other people’s monitors.
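
For the curious, the difference between the two operations is easy to show in code. Here is a rough sketch, using the lcms2 API, of what the ‘Convert’ step amounts to; ‘monitor.icc’ is a placeholder for your real monitor profile:

/*
 * Rough sketch (lcms2 API) of 'Convert to Color Profile': rewrite
 * pixel values from the monitor's colourspace into sRGB. 'Assign'
 * would skip the transform entirely, merely changing how the
 * existing values are interpreted. "monitor.icc" is a placeholder.
 */
#include <lcms2.h>

void convert_monitor_to_srgb(unsigned char *rgb, int npixels)
{
    cmsHPROFILE monitor = cmsOpenProfileFromFile("monitor.icc", "r");
    cmsHPROFILE srgb = cmsCreate_sRGBProfile();
    cmsHTRANSFORM xform = cmsCreateTransform(monitor, TYPE_RGB_8,
                                             srgb, TYPE_RGB_8,
                                             INTENT_PERCEPTUAL, 0);

    /* The slightly lossy step: the pixel values actually change here */
    cmsDoTransform(xform, rgb, rgb, npixels);

    cmsDeleteTransform(xform);
    cmsCloseProfile(monitor);
    cmsCloseProfile(srgb);
}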

In summary, if you are taking screenshots of a colour management aware application on a colour managed display, you need to first assign the monitor profile to the captured image, and then convert it to the sRGB profile. Oh and remember that, depending on the source of the data, this assign+convert step may also be required when pasting image data from the clipboard.

Colour management in firefox on Fedora 12

Posted: December 6th, 2009 | Filed under: Fedora, Photography | 1 Comment »

It has been a long time coming, but the Linux desktop is finally getting to the point where colour management is widely available in applications. At a low level, ArgyllCMS provides support for many colour calibration devices and lcms provides a nice library for applying colour profile transformations to images. At a high level, the graphics/photo tools DigiKam, GIMP, UFRaw, InkScape, Phatch and XSane are all able to do colour management. Most are even following the X colour management spec to automatically obtain the current monitor profile. In the last few weeks Richard Hughes has filled in another missing piece, writing gnome-color-manager to provide a UI for driving ArgyllCMS and setting up monitor profiles upon login.

It is great to be able to do photo/graphics work on a fully colour managed Linux desktop….and then you upload the photos to Flickr and they go back to looking awful. After a little googling though, it turns out all is not lost. Firefox does in fact contain some colour management support, hidden away in its truly awful about:config page. If you go to that page and filter on ‘gfx’, you’ll find a couple of settings with ‘color_management’ in their name:

gfx.color_management.display_profile
gfx.color_management.mode
gfx.color_management.rendering_intent

The first, display_profile, takes the full path to an ICC profile for your monitor, while mode controls where colour management is applied. A value of ‘2’ will make firefox only apply profiles to images explicitly tagged with a profile. A value of ‘1’ will make firefox apply profiles to CSS and images, assuming an sRGB profile if the image is not tagged. rendering_intent takes values 0, 1, 2, 3 corresponding to ‘perceptual’, ‘relative colourimetric’, ‘saturation’ and ‘absolute colourimetric’ respectively. I configured my firefox for mode=1, set a profile and restarted. Browsing to Flickr showed an immediate improvement, with my images actually appearing in the correct colours, matching those I see during editing in GIMP/UFRaw/etc. There’s a little more info about these settings at the mozilla developer notes on ICC.
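
As an aside, the same settings can be placed in a user.js file in the firefox profile directory, which is handy if you want to apply them to several machines. The profile path and the path to the monitor profile below are, of course, placeholders:

// ~/.mozilla/firefox/<profile>/user.js -- all paths are placeholders
user_pref("gfx.color_management.mode", 1);
user_pref("gfx.color_management.display_profile", "/home/me/monitor.icc");
user_pref("gfx.color_management.rendering_intent", 0);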

While it is nice to have colour management in firefox, its implementation is rather sub-optimal, since it requires the user to manually configure the display ICC profile path. Each display profile is only valid for the monitor against which it was created, so the moment I switch my laptop from its built-in LCD to an external LCD, all the colours in firefox will go to hell. If firefox followed the X ICC profile spec, it would be able to automatically apply the correct profile for each monitor. Hopefully someone will be motivated to fix this soon, since the spec is rather easy to comply with, needing only a quick look at a particular named property on the root window.
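
To illustrate just how easy, this sketch fetches the profile data from the _ICC_PROFILE property on the root window, which is where the spec says colour management tools should publish it (error handling is mostly elided):

/*
 * Sketch of fetching the monitor profile per the X colour management
 * spec: the raw ICC data lives in the _ICC_PROFILE property on the
 * root window.
 */
#include <X11/Xlib.h>
#include <X11/Xatom.h>
#include <limits.h>
#include <stdio.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    Atom prop, type;
    int format;
    unsigned long nitems, remain;
    unsigned char *data = NULL;

    if (!dpy)
        return 1;

    prop = XInternAtom(dpy, "_ICC_PROFILE", True);
    if (prop != None &&
        XGetWindowProperty(dpy, DefaultRootWindow(dpy), prop,
                           0, LONG_MAX, False, XA_CARDINAL,
                           &type, &format, &nitems, &remain,
                           &data) == Success && data) {
        /* 'data' now holds the raw ICC profile for this display */
        printf("Monitor ICC profile is %lu bytes\n", nitems);
        XFree(data);
    }
    XCloseDisplay(dpy);
    return 0;
}

Building that needs nothing more than -lX11, which rather reinforces the point that firefox could be doing this automatically.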

kernel-xen is dead. Long live kernel + paravirt_ops

Posted: July 25th, 2008 | Filed under: Fedora | 4 Comments »

In Fedora 9 we discontinued our long-standing forward-port of Xen’s 2.6.18 kernel tree, switched to a generic LKML tree (which already had i386 Xen DomU pv_ops), and added a set of patches to support x86_64 Xen DomU pv_ops. While it lacked functionality compared to the previous Xen kernels, and was certainly less stable for a while, overall this was a great success in terms of maintainability. It was still a separate kernel RPM though…

Jeremy Fitzhardinge has meanwhile continued to improve the stability & functionality of the Xen i386 pv_ops tree in upstream LKML, and has also taken the hacky Fedora x86_64 pv_ops patches, cleaned them up substantially & worked them into a form that was acceptable upstream. A couple of days ago Ingo sent Jeremy’s work on to Linus, who promptly merged it for 2.6.27.

Fedora 10 Rawhide is of course tracking 2.6.27, so yesterday Mark McLoughlin turned on Xen pv_ops in the main kernel RPM and killed off ‘kernel-xen’.

So for Fedora 10 we’ll have one kernel RPM to rule them all. By the magic of pv_ops, it auto-detects whether it’s running on bare metal, Xen, VMware (VMI) or KVM at boot, and optimizes itself for each platform!
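
For anyone wondering what that magic amounts to, pv_ops is essentially a set of tables of function pointers which the kernel fills in at boot for whichever platform it detects, so every sensitive operation dispatches through the table. A toy user-space illustration of the idea (all names here are invented; the real tables live in arch/x86 of the kernel):

/*
 * Toy illustration of the paravirt_ops idea: one binary, with a table
 * of operations selected at boot for the detected platform. All names
 * are invented for illustration.
 */
#include <stdio.h>

struct pv_ops {
    const char *platform;
    void (*halt)(void);
};

static void native_halt(void) { puts("executing hlt instruction"); }
static void xen_halt(void)    { puts("issuing sched_op hypercall"); }

static struct pv_ops pv = { "bare metal", native_halt };

/* At boot the kernel probes its environment and rewrites the table */
static void detect_platform(int running_on_xen)
{
    if (running_on_xen) {
        pv.platform = "Xen";
        pv.halt = xen_halt;
    }
}

int main(void)
{
    detect_platform(1);
    printf("Detected %s: ", pv.platform);
    pv.halt();   /* every caller dispatches through the table */
    return 0;
}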

There’s only one small wrinkle, and it isn’t really Xen’s fault. Anaconda install images on 32-bit Fedora use an i586 non-PAE kernel, while 32-bit Xen is i686 and PAE only, so we still need a separate initrd and vmlinuz for installation – but at least it’s derived from the general purpose ‘kernel-PAE’ binary, instead of ‘kernel-xen’. Of course 64-bit doesn’t have this complication. Someone just needs to fix 32-bit Linux so it can auto-switch between non-PAE and PAE at runtime. It was always said to be impossible to unify UP & SMP kernels…until someone actually did it. Now we just need someone to do the impossible for PAE and all will be right with the world :-)

It’s taken a long time & a lot of work by many, many people to get Xen’s DomU kernel bits merged upstream, so congratulations to all involved on getting another architecture merged, enabling us to finally take full advantage of paravirt_ops in Fedora’s Xen kernels.

New CPANTS test criteria for Fedora suitability

Posted: April 14th, 2008 | Filed under: Fedora | No Comments »

CPANTS is an automated testing system for Perl modules hosted on CPAN, checking them against various Perl ‘Kwalitee’ guidelines. There was recently a hackathon to add more guidelines to the test rig, and interestingly a couple of these focus on distribution integration points:

  • fits_fedora_license – this validates the module license against the allowable Fedora license list. In particular it seeks to fail modules which are Artistic v1 only – modules need to be dual licensed Artistic/GPL to be suitable for Fedora (see the illustrative META.yml fragment after this list). This is a great example of how Fedora’s rigour around licensing is raising awareness across the open source community & helping to identify and solve problems before they get to the distro.
  • easily_repackageable_by_fedora – this is a compound metric which validates that the fits_fedora_license and no_generated_files metrics were satisfied by the module.
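
For reference, what typically satisfies the licensing check is the module declaring the standard Perl terms in its META.yml, since the special value ‘perl’ means “the same terms as Perl itself”, i.e. dual Artistic/GPL. An illustrative fragment, with a made-up module name:

# Illustrative META.yml fragment; 'perl' is CPAN shorthand for
# "same terms as Perl itself", i.e. dual Artistic/GPL
name:     Some-Module
version:  0.01
license:  perl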

Unfortunately with all these new metrics I’ve got a whole bunch more failures to address in my CPAN modules.