Using GObject Introspection + Gjs to provide a JavaScript plugin engine

Posted: January 10th, 2010 | Filed under: Entangle | 8 Comments »

In writing the Capa photo capture application, one of the things I wanted to support was some form of plugin engine to allow 3rd parties to easily extend its functionality. The core application code itself is designed to have a formal separation of backend and frontend logic. The backend is focused on providing the core object model & operations, typically wrapping external libraries like HAL, libgphoto, lcms in GObject classes, with no use of GTK allowed here. The primary frontend builds on this backend to produce a GTK based user interface. There is also an intention to build another frontend providing a GIMP plugin.

Back to the question of plugins for the main frontend. If the goal is to allow people to easily write extensions, a plugin engine based on writing C code is not really very desirable. Firefox uses JavaScript for its plugin engine and this has been hugely successful in lowering the bar for contributors. Wouldn’t it be nice if any GTK application could provide a JavaScript plugin engine ? Yes, indeed and thanks to the recent development of GObject introspection this is incredibly easy.

GObject introspection provides a means to query the GObject type system and discover all classes, interfaces, methods, properties, signals, all data types associated with their parameters and any calling conventions. This is an incredibly powerful capability with far reaching implications, the most important being that you will never again have to write a language binding for any GObject based library. There is enough metadata available in the GObject introspection system to provide language bindings in a 100% automated fashion. Notice I said “provide”, rather than “generate” because if targeting a dynamic language (Perl, Python, JavaScript) it won’t even be necessary to auto-generate code ahead of time – everything can and will happen at runtime based on the introspection data. Say goodbye to hand written language bindings. Say goodbye to Swig. Say goodbye to any other home grown code generators.
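
To make the "runtime binding" idea concrete, here is a toy model in JavaScript of the principle: a generic dispatcher walks a method table and exposes it as an object, with no per-method glue code. This is purely illustrative and has nothing to do with the real Gjs/typelib implementation.

```javascript
// Toy model of metadata-driven binding (illustrative only, not the real
// typelib format): the "introspection data" describes the methods, and a
// generic factory exposes them at runtime without generated glue code.
const typelib = {
    className: "Camera",
    methods: {
        capture: (self) => "captured on " + self.model,
    },
};

function bindClass(meta) {
    return function (model) {
        const self = { model };
        for (const [name, fn] of Object.entries(meta.methods)) {
            self[name] = () => fn(self);
        }
        return self;
    };
}

const Camera = bindClass(typelib);
const cam = Camera("Nikon D90");
// cam.capture() → "captured on Nikon D90"
```

A real introspection-based binding does the same thing one level down: the typelib supplies the metadata and libffi performs the actual C calls.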

Adding support for introspection

That’s the sales pitch, how about the reality ? The Capa code is based on GObject and was thus ready & willing to be introspected. The first step in adding introspection support is to add some m4 magic to the configure.ac to look for the introspection tools & library. This is simple boilerplate code that will be identical for every application using GObject + autoconf:

GOBJECT_INTROSPECTION_REQUIRED=0.6.2
AC_SUBST(GOBJECT_INTROSPECTION_REQUIRED)

AC_ARG_ENABLE([introspection],
        AS_HELP_STRING([--enable-introspection], [enable GObject introspection]),
        [], [enable_introspection=check])

if test "x$enable_introspection" != "xno" ; then
        PKG_CHECK_MODULES([GOBJECT_INTROSPECTION],
                          [gobject-introspection-1.0 >= $GOBJECT_INTROSPECTION_REQUIRED],
                          [enable_introspection=yes],
                          [
                             if test "x$enable_introspection" = "xcheck"; then
                               enable_introspection=no
                             else
                               AC_MSG_ERROR([gobject-introspection is not available])
                             fi
                          ])
        if test "x$enable_introspection" = "xyes" ; then
          AC_DEFINE([WITH_GOBJECT_INTROSPECTION], [1], [enable GObject introspection support])
          AC_SUBST(GOBJECT_INTROSPECTION_CFLAGS)
          AC_SUBST(GOBJECT_INTROSPECTION_LIBS)
          AC_SUBST([G_IR_SCANNER], [$($PKG_CONFIG --variable=g_ir_scanner gobject-introspection-1.0)])
          AC_SUBST([G_IR_COMPILER], [$($PKG_CONFIG --variable=g_ir_compiler gobject-introspection-1.0)])
        fi
fi
AM_CONDITIONAL([WITH_GOBJECT_INTROSPECTION], [test "x$enable_introspection" = "xyes"])

The next step is to add Makefile.am rules to extract the introspection data. This is a two step process: the first step runs g-ir-scanner across all the source code and the actual compiled binary / library to generate a .gir file, an XML representation of the introspection data. The second step runs g-ir-compiler to turn the XML data into a machine usable binary format so it can be efficiently accessed. When running g-ir-scanner on a binary, as opposed to a library, it is necessary for that binary to support an extra command line flag called --introspect-dump. I added this code to the main.c source file to support that:

#if WITH_GOBJECT_INTROSPECTION
    static gchar *introspect = NULL;
#endif

    static const GOptionEntry entries[] = {
        ...snip other options...
#if WITH_GOBJECT_INTROSPECTION
        { "introspect-dump", 'i', 0, G_OPTION_ARG_STRING, &introspect, "Dump introspection data", NULL },
#endif
        { NULL, 0, 0, 0, NULL, NULL, NULL },
    };

    ...parse command line args...

#if WITH_GOBJECT_INTROSPECTION
    if (introspect) {
        g_irepository_dump(introspect, NULL);
        return 0;
    }
#endif

Back to the Makefile.am rules. g-ir-scanner has quite a few arguments you need to set. The --include args provide the names of introspection metadata files for any libraries depended on. The -I args provide the CPP include paths to the application’s header files. The --pkg args provide the names of any pkg-config files that the code builds against. There are a few others too which I won’t cover – they’re all in the man page. The upshot is that the Makefile.am gained these rules:

if WITH_GOBJECT_INTROSPECTION
Capa-0.1.gir: capa $(G_IR_SCANNER) Makefile.am
        $(G_IR_SCANNER) -v \
                --namespace Capa \
                --nsversion 0.1 \
                --include GObject-2.0 \
                --include Gtk-2.0 \
                --include GPhoto-2.0 \
                --program=$(builddir)/capa \
                --add-include-path=$(srcdir) \
                --add-include-path=$(builddir) \
                --output $@ \
                -I$(srcdir)/backend \
                -I$(srcdir)/frontend \
                --verbose \
                --pkg=glib-2.0 \
                --pkg=gthread-2.0 \
                --pkg=gdk-pixbuf-2.0 \
                --pkg=gobject-2.0 \
                --pkg=gtk+-2.0 \
                --pkg=libgphoto2 \
                --pkg=libglade-2.0 \
                --pkg=hal \
                --pkg=dbus-glib-1 \
                $(libcapa_backend_la_SOURCES:%=$(srcdir)/%) \
                $(libcapa_frontend_la_SOURCES:%=$(srcdir)/%) \
                $(capa_SOURCES:%=$(srcdir)/%)

girdir = $(datadir)/gir-1.0
gir_DATA = Capa-0.1.gir

typelibsdir = $(libdir)/girepository-1.0
typelibs_DATA = Capa-0.1.typelib

%.typelib: %.gir
        $(G_IR_COMPILER) \
                --includedir=$(srcdir) \
                --includedir=$(builddir) \
                -o $@ $<

CLEANFILES += Capa-0.1.gir $(typelibs_DATA)

endif # WITH_GOBJECT_INTROSPECTION

After making those changes & rebuilding, it is wise to check the .gir file, since g-ir-scanner doesn't always get everything correct. It may be necessary to provide annotations in the source files to help it out. For example, it got object ownership wrong on some getters, requiring annotations on the return values such as:

/**
 * capa_app_get_plugin_manager: Retrieve the plugin manager
 *
 * Returns: (transfer none): the plugin manager
 */
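
Conversely, a getter that returns a new reference which the caller must release would use the opposite annotation (a hypothetical example, not an actual Capa function):

```c
/**
 * capa_app_dup_preferences: Retrieve a copy of the preferences object
 *
 * Returns: (transfer full): a new reference; release with g_object_unref
 */
```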

The final step was to add rules to the RPM specfile, which are fairly self-explanatory:

%define with_introspection 0

%if 0%{?fedora} >= 12
%define with_introspection 1
%endif
%if 0%{?rhel} >= 6
%define with_introspection 1
%endif

%if %{with_introspection}
BuildRequires: gobject-introspection-devel
BuildRequires: gir-repository-devel
%endif


%prep
....
%if %{with_introspection}
%define introspection_arg --enable-introspection
%else
%define introspection_arg --disable-introspection
%endif

%configure %{introspection_arg}

%files
....
%if %{with_introspection}
%{_datadir}/gir-1.0/Capa-0.1.gir
%{_libdir}/girepository-1.0/Capa-0.1.typelib
%endif

That is all. The entire API is now accessible from Perl, JavaScript, Python without ever having written a line of code for those languages. It is also possible to generate a .jar file to make it accessible from Java.

Adding support for a JavaScript plugin engine

Since the API is now accessible from JavaScript, adding a JavaScript plugin engine ought to be easy at this point. There are in fact two competing JavaScript engines supporting GObject introspection, Gjs and Seed. Seed looks more advanced, documented & polished, but Gjs is what's currently in Fedora, so I used that. Again the first step was checking for it in configure.ac:

AC_ARG_WITH([javascript],
      AS_HELP_STRING([--with-javascript],[enable JavaScript plugins]),
      [], [with_javascript=check])

if test "x$with_javascript" != "xno" ; then
  if test "x$enable_introspection" = "xno" ; then
    if test "x$with_javascript" = "xyes"; then
      AC_MSG_ERROR([gobject-introspection is required for javascript plugins])
    fi
  fi

  PKG_CHECK_MODULES(GJS, gjs-1.0 >= $GJS_REQUIRED)
  AC_SUBST(GJS_CFLAGS)
  AC_SUBST(GJS_LIBS)

  PKG_CHECK_MODULES(GJS_GI, gjs-gi-1.0 >= $GJS_REQUIRED)
  AC_SUBST(GJS_GI_CFLAGS)
  AC_SUBST(GJS_GI_LIBS)

  with_javascript=yes
  AC_DEFINE([WITH_JAVASCRIPT], [1], [enable JavaScript plugins])
fi
AM_CONDITIONAL([WITH_JAVASCRIPT], [test "x$with_javascript" = "xyes"])

I won't go into any details on the way Capa scans for plugins (it uses $HOME/.local/share/capa/plugins/<plugin-name>/main.js), merely illustrate how to execute a plugin once it has been located. The important object in the Gjs API is GjsContext, providing the execution context for the JavaScript code. It is possible to have multiple contexts, so each plugin is independent and potentially able to be sandboxed. The JavaScript file to be invoked is main.js in the plugin's base directory. The first step is to set up the context's search path to point to the plugin base directory:

void runplugin(const gchar *plugindir) {
    const gchar *searchpath[2];
    GjsContext *context;

    searchpath[0] = plugindir;
    searchpath[1] = NULL;

    context = gjs_context_new_with_search_path((gchar **)searchpath);

The context is now ready to execute some JavaScript code. The Capa plugin system expects the main.js file to contain a method called activate. To start the plugin, we can thus simply evaluate const Main = imports.main; Main.activate();

   const gchar *script = "const Main = imports.main; Main.activate();";
   int status;

   gjs_context_eval(context,
                    script,
                    -1,
                    "main.js",
                    &status,
                    NULL);

   if (status != 0) {
     fprintf(stderr, "Loading plugin failed\n");
   }
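
On the plugin side, all that main.js needs to supply is an activate method. A minimal sketch (the body here is hypothetical; a real plugin would use the introspected application API):

```javascript
// main.js — minimal sketch of a plugin (hypothetical body).
// A real plugin would reach the application API via the introspected
// module, e.g. "const Capa = imports.gi.Capa;" under the Gjs runtime.
function activate() {
    // hook into the application here: add menu items, connect signals, etc.
    return "plugin started";
}
```

With this file in the plugin's base directory, the gjs_context_eval call above finds it through the context search path as imports.main.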

Presto, you now have a JavaScript plugin running, having written no JavaScript at any point in the process. There is one slight issue in this though - how does the plugin get access to the application instance ? One way would be to provide a static method in your API to get hold of the application's main object, but I really wanted to pass the object into the plugin's activate method. This is where I hit Gjs's limitations - there appears to be no official API to set any global variable except for ARGV. After much poking around in the Gjs code though, I discovered an exported method which wasn't in the header files:

JSContext* gjs_context_get_context(GjsContext *js_context);

I decided to (temporarily) abuse that until a better way could be found. I have an object instance of the CapaApp class which I wanted to pass into the activate method. The first step was to set this in the global namespace of the script being evaluated. Gjs comes with an API for converting a GObject instance into the JSObject instance which the runtime needs. Thus I wrote a simple helper:

static void set_global(GjsContext *context,
                       const char *name,
                       GObject *value)
{
    JSContext *jscontext;
    JSObject *jsglobal;
    JSObject *jsvalue;

    jscontext = gjs_context_get_context(context);
    jsglobal = JS_GetGlobalObject(jscontext);
    JS_EnterLocalRootScope(jscontext);
    jsvalue = gjs_object_from_g_object(jscontext, value);
    JS_DefineProperty(jscontext, jsglobal,
                      name, OBJECT_TO_JSVAL(jsvalue),
                      NULL, NULL,
                      JSPROP_READONLY | JSPROP_PERMANENT);
    JS_LeaveLocalRootScope(jscontext);
}

There was one little surprise in this though. The gjs_object_from_g_object method will only succeed if the current Gjs context has the introspection data for that object loaded. So it was necessary to import my application's introspection data by eval'ing const Capa = imports.gi.Capa. That done, it was now possible to pass variables into the plugin. The complete revised plugin loading code looks like:

void runplugin(CapaApp *application, const gchar *plugindir) {
    const gchar *script = "const Main = imports.main; Main.activate(app);";
    const gchar *searchpath[2];
    GjsContext *context;
    int status;

    searchpath[0] = plugindir;
    searchpath[1] = NULL;

    context = gjs_context_new_with_search_path((gchar **)searchpath);

    gjs_context_eval(context,
                     "const Capa = imports.gi.Capa",
                     -1,
                     "dummy.js",
                     &status,
                     NULL);

    set_global(context, "app", G_OBJECT(application));

    gjs_context_eval(context,
                     script,
                     -1,
                     "main.js",
                     &status,
                     NULL);

    if (status != 0) {
      fprintf(stderr, "Loading plugin failed\n");
    }
}

This code is slightly simplified, omitting error handling, for the purposes of this blog post, but the real thing is not much harder. Looking at the code again, there is really very little (if anything) about it which is specific to my application. It would be quite easy to pull out the code which finds & loads plugins into a library (e.g. "libgplugin"). This would make it possible for any existing GTK application to be retrofitted with support for plugins, simply by generating introspection data for its internal APIs and then instantiating a "PluginManager" object instance.

In summary, GObject Introspection is an incredibly compelling addition to GLib. With a mere handful of additions to configure.ac and Makefile.am, it completely solves the "language bindings" problem for you. I'd go as far as to say that this is the single most compelling reason to write any new C libraries using GLib/GObject. Furthermore, if there are existing C libraries not using GObject, then providing a GObject wrapper for them should be a top priority. Don't ever write or auto-generate a language binding again. Writing GTK applications either entirely in JavaScript, or in a mix of C + JavaScript plugins, is also a really nice development, avoiding the issue of "clashing runtime environments" seen when using Python + GTK. The Gjs/Seed/GObject developers deserve warm praise for these great enhancements.

Following gphoto SVN development with GIT

Posted: January 10th, 2010 | Filed under: Entangle | No Comments »

Since I started developing the Capa photo capture application, I’ve been following development of gphoto much more closely. Unfortunately gphoto is using subversion for source control. There are many things wrong with subversion in comparison to modern SCM systems like Mercurial or GIT. In this particular case though, the main problem is speed, or lack thereof. gphoto uses sourceforge as its hosting service and sf.net subversion servers are slower than you can possibly imagine. As an example, run ‘svn log’ to browse changes and you’ll be waiting 30 seconds for it to even start to give you an answer. Then run ‘svn diff’ to look at the contents of a change and you’ll be waiting another 30 seconds or more. Totally unacceptable. Once you’ve used a distributed SCM system like Mercurial or GIT, you cease to have tolerance for any operations which take longer than 2-3 seconds.

Fortunately, GIT has the ability to check out directly from an SVN repository. The gphoto SVN repository actually contains many separate sub-projects and I didn’t want to import them all into my local GIT repository. This meant I couldn’t make use of the branch / tag tracking support directly and had to do things the long way. The good news is that the long way has already been blogged about and it isn’t hard.

There were two projects I was interested in getting, libgphoto (the main library) & gphoto (the command line frontend), and I wanted each to end up in its own GIT repository. For both, I wanted the trunk and the 2.4.x branch. Starting with gphoto, since it has much less history, the first step was to clone the trunk:

# git svn clone https://gphoto.svn.sourceforge.net/svnroot/gphoto/trunk/gphoto2 gphoto2

This takes a fairly long time because it pulls down every single SVN changeset in the repository. Once that’s complete though, the .git/config contains

[svn-remote "svn"]
        url = https://gphoto.svn.sourceforge.net/svnroot/gphoto/trunk/gphoto2
        fetch = :refs/remotes/git-svn

And the local ‘master’ branch is connected to the ‘git-svn’ remote.

$ git branch -a
* master
  remotes/git-svn

Anytime further changes are made in the SVN repository, those can be pulled down to the local GIT repository using git svn fetch svn. At this point it is possible to add in the branches. Simply edit the .git/config file and add another ‘svn-remote’ entry, this time pointing at the branch path.

[svn-remote "svn24"]
        url = https://gphoto.svn.sourceforge.net/svnroot/gphoto/branches/libgphoto2-2_4/gphoto2
        fetch = :refs/remotes/git-svn-2.4

And then pull down all the changes for that branch, and create a local branch for this

# git svn fetch svn24
# git checkout -b v2.4 git-svn-2.4

This leaves a local branch ‘v2.4’ and a remote branch ‘git-svn-2.4’

$ git branch -a
  master
* v2.4
  remotes/git-svn
  remotes/git-svn-2.4

That takes care of the gphoto2 frontend command line app codebase. It is then a simple matter to repeat the same thing, substituting libgphoto2 into the SVN paths, to check out the library codebase, though this takes a little longer because it has much more history. This little upfront pain to clone the SVN repo to GIT will be paid back many hundreds of times over, thanks to the speed that GIT brings to SCM operations.

The moral of the story is simple: Don’t ever choose subversion. If you have the choice, use GIT. If you don’t have the choice, then mirror SVN to GIT anyway.

Edit: One thing I forgot to mention is that after setting up all branches, run a git gc on the repo. This will dramatically reduce the disk usage & speed up GIT operations further

$ du -h -c -s .
45M .
45M total
$ git gc
Counting objects: 3695, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (3663/3663), done.
Writing objects: 100% (3695/3695), done.
Total 3695 (delta 3081), reused 0 (delta 0)
$ du -h -c -s .
5.0M .
5.0M total

Going from 45 MB to 5 MB is quite impressive !

Capa: a desktop application for photo capture via a digital camera

Posted: December 21st, 2009 | Filed under: Entangle | 6 Comments »

A couple of months ago I attended an LPMG event on the subject of off-camera flash. The talk was quite interactive with the presenters using a wide array of camera and flash equipment on stage to demonstrate the techniques they were covering. The cameras were connected to a laptop, in turn connected to a projector, allowing the audience to view the photos on the big screen as soon as they were captured.

I’m not entirely certain, but I believe the presenters were using Nikon Camera Control Pro on the laptop for control of the digital SLR. Watching it, I couldn’t help wondering if it was possible to do remote camera control & capture on a Linux laptop using only open source software. After a short investigation I discovered that, in addition to its image file download/management capabilities, gphoto allows for triggering the shutter and changing settings on digital cameras. Applications providing graphical frontends to gphoto, though, only appeared to expose its image file download/management capabilities. Thus I decided to write a graphical frontend myself. I’m calling it Capa.

Before continuing with more wordy stuff, here is what everyone really wants: a screenshot.

The goal of the Capa application is to provide a simple and efficient interface for tethered shooting and controlling camera settings. It will leave all file management tasks to other applications, such as the gphoto plugins for GVFS or FUSE. The two main libraries on which the application is built are gphoto and GTK. The source code is licensed under the terms of the GPLv3+.

The code is at a very early stage of development with no formal releases yet, but it is at the point where it might be useful to people beyond myself, hence this blog posting. At this point it is capable of either triggering the camera shutter directly, or monitoring for events to detect photos shot on the camera. In both cases, as soon as a new image is detected it is immediately downloaded & displayed; the images are not left on the memory card. The current session defines a directory on the host computer where all images are saved, defaulting to a directory under $HOME/Pictures (or wherever your XDG preferences point). The UI for changing tunable settings is rather crude. If you’re lucky it may work, but don’t count on it yet :-)

The interface is fully colour management aware. It is capable of automatically detecting the monitor’s current configured ICC profile in accordance with the X11 ICC profile specification. All images displayed by the application, whether full size or as thumbnails, will have the necessary colour profile transformation applied. GNOME Colour Manager is an excellent new app for configuring your display profiles in the manner necessary for them to work with Capa. Integration with HAL allows immediate automatic detection of any newly plugged-in USB cameras, and similar support for UDev is planned.

In the very near future it is intended that GObject introspection and GJS will be used to support a JavaScript plugin engine. The codebase has a strict separation between its object model and UI model specifically designed to facilitate plugins. This will allow end user customization & scripting of the UI to best suit their needs. For example, timer triggered shooting, motion detection and many other neat ideas could be provided via plugins.

For a little more information visit the Capa website.

Routed subnets without NAT for libvirt managed virtual machines in Fedora

Posted: December 13th, 2009 | Filed under: libvirt, Virt Tools | 6 Comments »

There are a huge number of ways of configuring networking for virtual machines when running libvirt. The two most common options, and our default recommendations for people, are either NAT (also known as “virtual networking”) or bridging (also known as “shared physical device”). The NAT option has the advantage that it can be made to work out of the box for pretty much all machines, even those with only wifi or dialup networking, but it only allows outbound access from VMs, with no incoming connections from outside the host. The bridging option has the advantage that machines on the wider LAN can access guests on the host, but it does not work with wifi.

This post is going to quickly describe a 3rd way of providing network connectivity to VMs, which we call the ‘routed’ option. In terms of its implementation in libvirt, it is actually just a variant on the NAT option, but without the NAT. For the purposes of this discussion I am going to describe my home network setup, which is entirely wireless.

WLAN router
This is a LinkSys WRT54GL wireless router which of course runs Linux in the form of OpenWRT Kamikaze. It provides a DHCP service on the wireless LAN for the subnet 192.168.254.0/24
Mini server
This is a Mac Mini running Fedora 12, primarily acting as a server for running SqueezeCenter. While it has an ethernet port, its location in the house means wifi access is the only option.
Random laptops
This is mostly the IBM Thinkpad on which I do my day-to-day work. Again, it only ever connects over wifi.

The requirement is to run a number of virtual machines on the mini server, and be able to have unrestricted access to them from the laptop. Since the mini server is wireless, bridging is out of the question. Similarly, since I need unrestricted access to the VMs, the NAT option is also not viable. Hence this post about setting up routed networking.

Configuring the virtualization host

The libvirt virtual networking service normally gives you a ‘virbr0’ device configured to do NAT. The XML format, however, allows you to specify that any virtual network be set up without NAT. It is perfectly acceptable to have many virtual networks on the same host, some using NAT, some not. Thus we leave the default network alone, and simply define a new one using a subnet of 192.168.200.0/24 on the mini server.

# cat > vms.xml <<EOF
<network>
  <name>vms</name>
  <forward mode='route'/>
  <bridge name='virbr1' />
  <ip address='192.168.200.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.200.2' end='192.168.200.254' />
    </dhcp>
  </ip>
</network>
EOF

# virsh net-define vms.xml
Network vms defined from vms.xml

With the configuration for the network defined, it can be started, and indeed set to startup upon system boot

# virsh net-start vms
Network vms started

# virsh net-autostart vms
Network vms marked as autostarted

If you look at the iptables output you will see that libvirt has defined a series of iptables rules in the FORWARD chain to allow traffic to pass from the virtual network to the LAN & vice-versa. You must, however, ensure that the sysctl net.ipv4.ip_forward is enabled, otherwise the kernel won’t even attempt forwarding!

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
 2856  203K ACCEPT     udp  --  virbr1 *       0.0.0.0/0            0.0.0.0/0           udp dpt:53
    0     0 ACCEPT     tcp  --  virbr1 *       0.0.0.0/0            0.0.0.0/0           tcp dpt:53
    2   656 ACCEPT     udp  --  virbr1 *       0.0.0.0/0            0.0.0.0/0           udp dpt:67
    0     0 ACCEPT     tcp  --  virbr1 *       0.0.0.0/0            0.0.0.0/0           tcp dpt:67


Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
 453K  672M ACCEPT     all  --  *      virbr1  0.0.0.0/0            192.168.200.0/24
 245K   13M ACCEPT     all  --  virbr1 *       192.168.200.0/24     0.0.0.0/0
    0     0 ACCEPT     all  --  virbr1 virbr1  0.0.0.0/0            0.0.0.0/0
    0     0 REJECT     all  --  *      virbr1  0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable
    0     0 REJECT     all  --  virbr1 *       0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable

The 4 rules in the INPUT chain allow DHCP/DNS requests to the dnsmasq instance running on virbr1. The first rule in the FORWARD chain allows traffic from the host’s WLAN to pass to the VM subnet only. The second rule allows traffic from VMs to the WLAN. The third rule allows traffic between VMs. The final two rules block everything else, mostly to protect against IP spoofing.

That really is all that is needed on the virtualization host to setup a routed network. The next step takes place on the LAN router.

Configuring the LAN/WLAN router

The virtualization host is now all set to forward traffic from 192.168.200.0/24 to & from the WLAN 192.168.254.0/24. This on its own though is not sufficient, because no other hosts on the WLAN know where the 192.168.200.0/24 subnet is ! It is thus necessary to configure a static route on the LAN/WLAN router, in this case my OpenWRT box.

To be able to route to the new subnet, the virtualization host needs to have a static IP address, even if it is being configured via DHCP. I fixed my virt host to use the IP 192.168.254.223, so enabling routing to the subnet containing the VMs merely requires adding one static route. Using the ‘ip’ command this could be done with:

# ip route add 192.168.200.0/24 via 192.168.254.223

OpenWRT Kamikaze of course comes with a config file that lets you do that in a way that is persistent across reboots.

# cat >> /etc/config/network <<EOF
config 'route' 'minivms'
        option 'interface' 'lan'
        option 'target' '192.168.200.0'
        option 'netmask' '255.255.255.0'
        option 'gateway' '192.168.254.223'
EOF

# /etc/init.d/network restart

Depending on the precise way the network interfaces on the router are configured, and the current iptables setup it might be necessary to add a rule to the FORWARD chain. In my case I had to allow the 192.168.200.0 subnet to be forwarded over the ‘br-lan’ device, by adding to /etc/firewall.user

# iptables -I FORWARD 1 -i br-lan -o br-lan --dest 192.168.200.0/24 -j ACCEPT

With that in place my laptop can now ping guests running on the mini server’s virtual network, and vice-versa. No NAT or bridging involved, and all playing nicely with wifi.

Configuring the guests

With the router and virtual host both configured the stage is set to provision virtual machines. If using virt-manager, then in the last step of the “New VM wizard”, expand the ‘Advanced options’ panel and simply select the ‘vms’ network from the drop down list of choices. The guest will then be connected to the newly defined virtual network, instead of the default NAT based one.
If provisioning using virt-install on the command line, then use the argument ‘--network network:vms’ to tell it to use the new virtual network.

Other ways of providing routed networking

While the above setup only requires three configuration steps (1. define the network on the virt host, 2. add a static route on the WLAN router, 3. add iptables rules), the obvious pain point is that you might not have the ability to add static routes on the WLAN router. If that is the case, then you could instead provide the static routes on all the client machines on the WLAN (i.e. add the route to the laptop itself). This is sub-optimal too, for rather obvious scalability reasons.

What we really want is to be able to provide routed networking without having to define a new IP subnet. In other words, we need to figure out how to make libvirt’s virtual networking capability support Proxy ARP, either for individual IPs or for whole subnets. Patches on a postcard please… :-)

Taking screenshots on colour managed displays with GIMP

Posted: December 12th, 2009 | Filed under: Fedora, Photography | 2 Comments »

Last week I mentioned how I had started running F12 with a colour managed desktop wherever applications made it possible. Today I had need to capture some screenshots of a new (& as yet unannounced) application I’m working on. There are two ways I normally capture screenshots. The first is to just press the ‘PrintScreen’ button, let GNOME save the PNG image and then crop it in GIMP or something like that. The second way is to use GIMP’s own screen capture function (File -> Create -> Screenshot), useful if you want a capture of a specific window instead of the whole desktop.

Today I acquired the screenshot using GIMP since I already had it open. And the colours in the screenshot looked like complete garbage. It shouldn’t be hard to understand what went wrong here. The default ICC profile for new images created in GIMP is the sRGB colourspace. In image windows, GIMP applies a transformation to the image colours, going from the sRGB profile to the monitor’s calibrated profile. Except that since this image was created from a screenshot of a colour managed display, the colours have already been transformed according to the monitor’s profile. GIMP is in essence applying a duplicate conversion. It is no wonder the result looks awful.
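
The effect is easy to demonstrate with a toy model (plain JavaScript, not real ICC math): treat the monitor profile as a simple gamma encoding, and watch what happens when it is applied twice.

```javascript
// Toy model of the double-transform problem: pretend the "monitor
// profile" is a plain gamma encoding of a linear pixel value.
function toMonitor(v, gamma = 2.2) {
    return Math.pow(v, 1 / gamma);
}

const srgbValue = 0.5;                 // pixel value in the image's colourspace
const onScreen = toMonitor(srgbValue); // correct: one transform for display
const doubled = toMonitor(onScreen);   // naive re-display of a screenshot

// onScreen ≈ 0.73, doubled ≈ 0.87 — mid-tones get progressively washed out
```

Assigning the monitor profile to the screenshot tells the editor that the pixels are already in the on-screen form, so no second transform gets applied.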

Having realized that a duplicate conversion was taking place, the solution is easy. Tell GIMP that the image is in the monitor’s colourspace, rather than the default sRGB. This is done using the menu Image -> Mode -> Assign Color Profile. With the ‘Assign colour profile’ operation, you are not changing the pixel values in the source image, merely telling GIMP how to interpret them. Since it now knows the image is already in the monitor’s colourspace, the transformation becomes a no-op, and the image displays in sensible colours again.

It is possible to leave it at that, save the image and do whatever you were going to do with it. This is sub-optimal if you intend to distribute the image to other people. The sRGB colourspace is intended as a generic colourspace which has reasonable display characteristics even on monitors which are not calibrated / colour managed. If uploading to the web, most people viewing the image are not going to have colour managed displays. Thus, if you want the image to look reasonable for them, it is wise to now convert it to the sRGB colourspace. This is done using the menu Image -> Mode -> Convert to Color Profile. In contrast to the ‘Assign’ operation, the ‘Convert’ operation does change the actual pixel values in the source image. Depending on the overlap between the monitor’s colourspace and the sRGB colourspace, and the rendering intent chosen, this will be a slightly lossy process. The image colours won’t display in quite the same way as before, but the image will display better on other people’s monitors.

In summary, if you are taking screenshots of a colour management aware application on a colour managed display, you need to first assign the monitor profile to the captured image, and then convert it to the sRGB profile. Oh and remember that, depending on the source of the data, this assign+convert step may also be required when pasting image data from the clipboard.