Announce: gerrymander 1.5 “some beans and some beans is four!” – a client API and command line tool for gerrit

Posted: February 22nd, 2016 | Filed under: Coding Tips, Fedora, OpenStack, Virt Tools

I’m pleased to announce the availability of a new release of gerrymander, version 1.5. Gerrymander provides a Python command line tool and APIs for querying information from the gerrit review system, as used in OpenStack and many other projects. You can get it from PyPI

# pip install gerrymander

Or straight from GitHub

# git clone git://github.com/berrange/gerrymander.git

If you’re the impatient type, then go to the README file which provides a quick start guide to using the tool.

This release contains a mixture of bug fixes and new features

  • Honour the ‘files’ parameter in the ‘todo-noones’ command
  • Only match filenames against current patchset
  • Handle pagination with gerrit >= 2.9
  • Avoid looping forever if sort key is missing in results
  • Don’t call encode() on integer types
  • Auto-detect gerrit server from git remote
  • Don’t include your own changes in todo lists
  • Fix type casting of cache lifetime values in config file
  • Optionally show hierarchical relationship between changes via the new ‘--deps’ option (see the example below)

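For a flavour of usage, here is a hypothetical invocation (it assumes a gerrit server is configured as described in the README, and that the new ‘--deps’ flag applies to the todo commands):

$ gerrymander todo-noones --deps

In principle this would list open changes that nobody has signed up to review, showing the hierarchical relationship between dependent changes.
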
Thanks to everyone who contributed to this release, whether by reporting bugs, requesting features or submitting patches.

Announce: Entangle “Top” release 0.7.1 – an app for tethered camera control & capture

Posted: February 21st, 2016 | Filed under: Entangle, Fedora, Photography

I am pleased to announce that a new release, 0.7.1, of Entangle is available for download from the usual location:

  http://entangle-photo.org/download/

This is mostly a bug fix release, but there was a little feature work on the film strip viewer widget. It has been rewritten to dynamically scale thumbnails according to the available space, and it now caches thumbnails at 256px instead of 128px.

  • Fix linking problem with strict linkers
  • Misc spelling fixes to online help docs
  • Replace use of GSimpleAsyncResult with GTask
  • Specify versions when importing from python plugins
  • Remove use of deprecated GTK APIs
  • Render image stats overlay partially transparent
  • Fix error reporting when saving settings
  • Flush events after capture to avoid accidentally restarting preview
  • Make Nikon fine focus stepping finer
  • Ensure images are sorted by last modified date
  • Switch from 128 px to 256 px thumbnail sizes to benefit larger high dpi screens
  • Rewrite film strip browser to dynamically resize icons to fit available space
  • Draw symbolic icons in film strip if image is not yet loaded
  • Refresh translations from Zanata

Ceph single node deployment on Fedora 23

Posted: December 21st, 2015 | Filed under: Coding Tips, Fedora, OpenStack, Virt Tools

A little while back Cole documented a minimal ceph deployment on Fedora. Unfortunately, since then the ‘mkcephfs’ command has been dropped in favour of the ‘ceph-deploy’ tool. There are various other blog posts talking about ceph-deploy, but none of them had quite the right set of commands to get a working single node deployment – the status would always end up in “HEALTH_WARN”, which is pretty much an error state for ceph. After much trial & error I finally figured out the steps that work on Fedora 23.

Even though we’re doing a single node deployment, the ‘ceph-deploy’ tool expects to be able to ssh into the local host as root, without password prompts. So before starting, make sure to install ssh keys and edit /etc/ssh/sshd_config to set PermitRootLogin to yes. Everything that follows should also be run as root.
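
A minimal sketch of that setup might look like the following (the sed assumes the stock sshd_config layout, and ssh-keygen will prompt for a key location and passphrase):

# sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
# systemctl restart sshd.service
# ssh-keygen
# ssh-copy-id root@`hostname -f`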

First, we need the ‘ceph-deploy’ tool installed

# dnf install ceph-deploy

ceph-deploy will create some config files in the local directory, so it is best to create a directory to hold them and run it from there

# mkdir ceph-deploy
# cd ceph-deploy

Make sure that the hostname of the local machine is resolvable, both fully qualified and unqualified. If it is not, add an entry to /etc/hosts to make it resolve.
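
For illustration (substitute your own address and hostname; the ‘t530wlan’ name matches the example output later in this post):

# echo "192.168.1.66 t530wlan.mydomain t530wlan" >> /etc/hosts

With name resolution in place, the first step simply creates the basic config file for ceph-deploy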

# export CEPH_HOST=`hostname -f`
# ceph-deploy new $CEPH_HOST

Since this will be a single node deployment, there are two critical additions that must be made to the ceph.conf that was just created in the current directory

# echo "osd crush chooseleaf type = 0" >> ceph.conf
# echo "osd pool default size = 1" >> ceph.conf

Without these two settings, the storage will never achieve a healthy status.

Now tell ceph-deploy to actually install the main ceph software. By default it will try to activate YUM repos hosted on ceph.com, but Fedora has everything needed, so the ‘--no-adjust-repos‘ argument tells it not to add custom repos

# ceph-deploy install --no-adjust-repos $CEPH_HOST

With the software installed, the monitor service can be created and started

# ceph-deploy mon create-initial

Ceph can use storage on a block device, but for single node test deployments it is far easier to just point it to a local directory

# mkdir -p /srv/ceph/osd
# ceph-deploy osd prepare $CEPH_HOST:/srv/ceph/osd
# ceph-deploy osd activate $CEPH_HOST:/srv/ceph/osd

Assuming that completed without error, check that the cluster status shows HEALTH_OK

# ceph status
    cluster 7e7be62d-4c83-4b59-8c11-6b57301e8cb4
     health HEALTH_OK
     monmap e1: 1 mons at {t530wlan=192.168.1.66:6789/0}
            election epoch 2, quorum 0 t530wlan
     osdmap e5: 1 osds: 1 up, 1 in
      pgmap v15: 64 pgs, 1 pools, 0 bytes data, 0 objects
            246 GB used, 181 GB / 450 GB avail
                  64 active+clean

If it displays “HEALTH_WARN”, don’t make the mistake of thinking that is merely a warning – chances are it is a fatal error that will prevent anything from working. If you did get errors, purge all traces of ceph before trying again

# ceph-deploy purgedata $CEPH_HOST
# ceph-deploy purge $CEPH_HOST
# ceph-deploy forgetkeys
# rm -rf /srv/ceph/osd

Once everything is working, it should be possible to use the ‘rbd’ command on the local node to set up volumes suitable for use with QEMU/KVM.
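
For instance, creating and inspecting a 10 GB volume (illustrative name, using the default ‘rbd’ pool) looks something like:

# rbd create vol1 --size 10240
# rbd info vol1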

Setting up a local caching proxy for Fedora YUM repositories

Posted: December 9th, 2015 | Filed under: Coding Tips, Fedora, OpenStack, Virt Tools

For my day-to-day development work I currently have four separate physical servers, one old x86_64 server for file storage, two new x86_64 servers and one new aarch64 server. Even with a fast fibre internet connection, downloading the never-ending stream of Fedora RPM updates takes non-negligible time. I also have cause to install distro chroots on a reasonably frequent basis for testing various things related to containers & virtualization, which involves yet more RPM downloads. So I decided it was time to investigate the setup of a local caching proxy for Fedora YUM repositories. I could have figured this out myself, but I fortunately knew that Matthew Booth had already set up exactly the kind of system I wanted, and he shared the necessary config steps that are outlined below.

The general idea is that we will reconfigure the YUM repository location on each machine needing updates to point to a local apache server, instead of the Fedora mirror manager metalink locations. This apache server will be setup using mod_proxy to rewrite requests to point to the offsite upstream download location, but will also be told to use a local squid server to access the remote site, thereby caching the downloads.

Apache setup

Apache needs to be installed, if not already present:

# dnf install httpd

A new drop-in config file addition for apache is created with two mod_proxy directives. The ProxyPass directive tells apache that any requests for http://<our-ip>/fedora/* should be translated into requests to the remote site http://dl.fedoraproject.org/pub/fedora/linux/*. The ProxyRemote directive tells apache that it should not make direct connections to the remote site, but instead use the local proxy server running on port 3128. IOW, requests that would go to dl.fedoraproject.org will instead get sent to the local squid server.

# cat > /etc/httpd/conf.d/yumcache.conf <<EOF
ProxyPass /fedora/ http://dl.fedoraproject.org/pub/fedora/linux/
ProxyPass /fedora-secondary/ http://dl.fedoraproject.org/pub/fedora-secondary/
ProxyRemote * http://localhost:3128/
EOF

The ‘fedora-secondary’ ProxyPass is just there for my aarch64 machine – not required if you are x86_64 only

The out of the box SELinux configuration prevents apache from making network requests, so it is necessary to persistently toggle an SELinux boolean flag before starting apache

# setsebool -P httpd_can_network_relay=1

With that done, we can start apache and set it to run on future boots too

# systemctl start httpd.service
# systemctl enable httpd.service

Squid setup

Squid needs to be installed, if not already present:

# dnf install squid

The out of the box configuration for squid needs a few small tweaks to optimize it for YUM repo mirroring. The default cache replacement policy purges the least recently used objects from the cache. This is not ideal for YUM repositories – if a YUM update needs 100 RPMs downloading and only 95 of them fit in the cache, by the time the last package is downloaded we’ll be pushing the first package out of the cache again, which means the next machine will get a cache miss. The LFUDA policy keeps popular objects in the cache regardless of size, optimizing the byte hit rate at the expense of the object hit rate. Some RPMs can be really rather large, so the default maximum object size of 4 MB is totally inadequate; increasing it to 8 GB is probably overkill, but ensures we always attempt to cache any RPM regardless of its size. The cache_dir directive configures a 16000 MB on-disk cache using the threaded ‘aufs’ storage module for greater concurrency, with a matching per-object size cap. The last two directives are critical, telling squid not to cache the repomd.xml files whose contents change frequently – without this you’ll often see YUM trying to fetch outdated repo data files which no longer exist

# cat >> /etc/squid/squid.conf <<EOF
cache_replacement_policy heap LFUDA
maximum_object_size 8192 MB
cache_dir aufs /var/spool/squid 16000 16 256 max-size=8589934592
acl repomd url_regex /repomd\.xml$
cache deny repomd
EOF

With that configured, squid can be started and set to run on future boots

# systemctl start squid.service
# systemctl enable squid.service
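
At this point the full path through the cache can be sanity checked from the cache machine itself, e.g. by fetching a directory listing through apache (which in turn goes via squid); this assumes curl is installed:

# curl -sI http://localhost/fedora/releases/ | head -1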

Firewall setup

If a firewall is present on the cache machine, it is necessary to allow remote access to apache. This can be enabled with a simple firewall-cmd instruction

# firewall-cmd --add-service=http --permanent
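
Note that ‘--permanent’ on its own only updates the saved configuration; to open the port in the running firewall as well, either repeat the command without ‘--permanent’ or simply reload:

# firewall-cmd --reload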

Client setup

With the cache server setup out of the way, all that remains is to update the Fedora YUM config files on each client machine to point to the local server. There is a convenient tool called ‘fedrepos’ which can do this, avoiding the need to open an editor and change the files manually.

# dnf install fedrepos
# fedrepos baseurl http://yumcache.mydomain/fedora --no-metalink

NB on the aarch64 machine, we need to point to fedora-secondary instead

# fedrepos baseurl http://yumcache.mydomain/fedora-secondary --no-metalink

Replace ‘yumcache.mydomain’ with the hostname or IP address of the server running the apache+squid cache of course. If the cache is working as expected you should see YUM achieve 100 MB/s download speed when it gets a cache hit.
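
A quick way to confirm the caching is working is to watch squid’s access log on the cache machine; repeated downloads should show up as TCP_HIT or TCP_MEM_HIT entries (log location assumes the stock Fedora squid package):

# grep HIT /var/log/squid/access.log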

Announce: libvirt-sandbox “Dashti Margo” 0.6.0 release – an application sandbox toolkit

Posted: July 1st, 2015 | Filed under: Fedora, libvirt, Security, Virt Tools

I am pleased to announce that a new public release of libvirt-sandbox, version 0.6.0, is now available from:

http://sandbox.libvirt.org/download/

The packages are GPG signed with

  Key fingerprint: DAF3 A6FD B26B 6291 2D0E  8E3F BE86 EBB4 1510 4FDF (4096R)

The libvirt-sandbox package provides an API layer on top of libvirt-gobject which facilitates the creation of application sandboxes using virtualization technology. An application sandbox is a virtual machine or container that runs a single application binary, directly from the host OS filesystem. In other words there is no separate guest operating system install to build or manage.

At this point in time libvirt-sandbox can create sandboxes using either LXC or KVM, and should in theory be extendable to any libvirt driver.
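
For a flavour of basic usage, running a confined shell straight from the host filesystem looks something like this, with the connection URI selecting the virtualization driver (LXC as root, or a KVM session as an unprivileged user):

# virt-sandbox -c lxc:/// /bin/sh
$ virt-sandbox -c qemu:///session /bin/sh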

This release contains a mixture of new features and bugfixes.

The first major feature is the ability to provide block devices to sandboxes. Most of the time sandboxes only want/need filesystems, but there are some use cases where block devices are useful. For example, some applications (like databases) can directly use raw block devices for storage. Another is where a tool wishes to format filesystems and have this done inside the container. The complexity in exposing block devices is giving the sandboxed tools a predictable path for accessing the device which does not change across hypervisors. To solve this, instead of allowing users of virt-sandbox to specify a block device name, they provide an opaque tag name. The block device is then made available at the path /dev/disk/by-tag/TAGNAME, which symlinks back to whatever hypervisor-specific disk name was used.

The second major feature is the ability to provide a custom root filesystem for the sandbox. The original intent of the sandbox tool was to provide an easy way to confine and execute applications that are installed on the host filesystem, so by default the host / filesystem is mapped to the sandbox / filesystem read-only. There are some use cases, however, where the user may wish to have a completely different root filesystem. For example, they may wish to execute applications from some separate disk image. So virt-sandbox now allows the user to map in a different root filesystem for the sandbox.

Both of these features were developed as part of a Google Summer of Code 2015 project which aims to enhance libvirt-sandbox so that it is capable of executing images distributed by the Docker container image repository service. The motivation for this goes back to the original reason for creating the libvirt-sandbox project: to provide a hypervisor agnostic framework for sandboxing applications, as a higher level above the libvirt API. Once this work is complete it’ll be possible to launch Docker images via libvirt QEMU, KVM or LXC, with no need for the Docker toolchain itself.

The detailed list of changes in this release is:

  • API/ABI incompatible change, soname increased
  • Prevent use of virt-sandbox-service as non-root upfront
  • Fix misc memory leaks
  • Block SIGHUP from the dhclient binary to prevent accidental death if the controlling terminal is closed & reopened
  • Add support for re-creating libvirt XML from sandbox config to facilitate upgrades
  • Switch to standard gobject introspection autoconf macros
  • Add ability to set filters on network interfaces
  • Search /usr/lib instead of /lib for systemd unit files, as the former is the canonical location even when / and /usr are merged
  • Only set SELinux labels on hosts that support SELinux
  • Explicitly link to selinux, instead of relying on indirect linkage
  • Update compiler warning flags
  • Fix misc docs comments
  • Don’t assume use of SELinux in virt-sandbox-service
  • Fix path checks for SUSE in virt-sandbox-service
  • Add support for AppArmor profiles
  • Mount /var after other FS to ensure host image is available
  • Ensure state/config dirs can be accessed when QEMU is running non-root for qemu:///system
  • Fix mounting of host images in QEMU sandboxes
  • Mount images as ext4 instead of ext3
  • Allow use of non-raw disk images as filesystem mounts
  • Check if required static libs are available at configure time to prevent silent fallback to shared linking
  • Require libvirt-glib >= 0.2.1
  • Add support for loading lzma and gzip compressed kmods
  • Check for supported libvirt URIs when starting guests to ensure a clear error message upfront
  • Add LIBVIRT_SANDBOX_INIT_DEBUG env variable to allow debugging of kernel boot messages and sandbox init process setup
  • Add support for exposing block devices to sandboxes with a predictable name under /dev/disk/by-tag/TAGNAME
  • Use devtmpfs instead of tmpfs for auto-populating /dev in QEMU sandboxes
  • Allow setup of sandbox with custom root filesystem instead of inheriting from host’s root.
  • Allow execution of apps from a non-matched ld-linux.so / libc.so, e.g. executing F19 binaries on an F22 host
  • Use passthrough mode for all QEMU filesystems