Announce: Entangle “Higgs Boson” release 0.4.0 – An app for tethered camera control & capture

I am pleased to announce a new release 0.4.0 of Entangle is available for download from the usual location:

   http://entangle-photo.org/download/

This release introduces a number of new features requested by users:

  • Better use of GtkApplication class
  • Add support for multiple windows & cameras
  • Add ability to sync capture/preview across windows
  • Add preference to control whether cameras autoconnect at startup
  • Add ability to apply aspect ratio masks to images
  • Add key bindings for common actions (see README)
  • Rewrite session browser widget to not use GtkIconView
  • Add popup menu to the session browser to allow opening and deleting of captured images
  • Fix memory leak during preview
  • Fix memory leak in session browser widget
  • Add ability to DPMS-blank screen during capture for consistent lighting environment
  • Add ability to render a focus point during preview
  • Add ability to render grid lines during preview
  • Merge “New session” and “Open session” toolbar buttons into one “Select session” drop down / menu
  • Add custom icons for toolbar capture/preview buttons

As before we still need help getting the UI translated into as many languages as possible, so if you are able to help out, please join the Fedora translation team:

     https://fedora.transifex.net/projects/p/entangle/

Thanks to everyone who helped contribute to this release & troubleshooting of the previous releases.

Thoughts on improving OpenStack GIT commit practice/history

The following document presented for discussion is based on my experience over the past few months getting involved with OpenStack Nova through learning the codebase, examining its history, writing code and participating in reviews. I want to stress that I don’t intend this as a criticism of any individuals. My desire is that this be taken as constructive feedback to help the project as a whole over the long term, since I believe OpenStack can benefit from stricter practices commonly followed in other mainstream opensource projects using GIT.

Enjoy the rather long doc that follows….

This document is also present on the OpenStack developer list

GIT Commit Good Practice

The following document is based on experience doing code development, bug troubleshooting and code review across a number of projects using GIT, including libvirt, QEMU and OpenStack Nova. Examination of other open source projects such as the Kernel, CoreUtils, GNULIB and more suggested they all follow a fairly common practice. It is motivated by a desire to improve the quality of the Nova GIT history. Quality is a hard term to define in computing; one man’s “Thing of Beauty” is another man’s “Evil Hack”. We can, however, come up with some general guidelines for what to do, or conversely what not to do, when publishing GIT commits for merge with a project, in this case, OpenStack.

This topic can be split into two areas of concern:

  1. The structural split of the code changes
  2. The information provided in the commit message

Executive Summary

The points and examples that will be raised in this document ought to clearly demonstrate the value in splitting up changes into a sequence of individual commits, and the importance of writing good commit messages to go along with them. If these guidelines were widely applied, it would result in a significant improvement in the quality of the OpenStack GIT history. Both a carrot & stick will be required to effect change. This document intends to be the carrot by alerting people to the benefits, while anyone doing Gerrit code review can act as the stick ;-P

In other words, when reviewing a change in Gerrit, do not simply look at the correctness of the code. Review the commit message itself and request improvements to its content. Look out for commits which are mixing multiple logical changes and require the submitter to split them into separate commits. Ensure whitespace changes are not mixed in with functional changes. Ensure no-op code refactoring is done separately from functional changes. And so on.

It might be mentioned that Gerrit’s handling of patch series is not entirely perfect. This is not a valid reason to avoid creating patch series. The tools being used should be subservient to developers’ needs, and since they are open source they can be fixed / improved. Software source code is “read mostly, write occasionally” and thus the most important criterion is to improve the long term maintainability by the large pool of developers in the community, and not to sacrifice too much for the sake of a single author who may never touch the code again.

And now the long detailed guidelines & examples of good & bad practice

Structural split of changes

The cardinal rule for creating good commits is to ensure there is only one “logical change” per commit. There are many reasons why this is an important rule:

  • The smaller the amount of code being changed, the quicker & easier it is to review & identify potential flaws.
  • If a change is found to be flawed later, it may be necessary to revert the broken commit. This is much easier to do if there are not other unrelated code changes entangled with the original commit.
  • When troubleshooting problems using GIT’s bisect capability, small well defined changes will aid in isolating exactly where the code problem was introduced (see the bisect sketch after this list).
  • When browsing history using GIT annotate/blame, small well defined changes also aid in isolating exactly where & why a piece of code came from.
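
As an illustration of the bisect point, here is a minimal sketch of a bisect session (the tag name is just a placeholder for a known good revision):

    $ git bisect start
    $ git bisect bad                # the current HEAD is known broken
    $ git bisect good 2012.1        # mark the last known good tag/commit
    # git checks out a commit half way between; build & test it, then mark it
    $ git bisect good               # or: git bisect bad
    # ...repeat until git names the first bad commit...
    $ git bisect reset              # return to the original branch

The fewer unrelated changes each commit contains, the more precisely the final “first bad commit” answer identifies the actual problem.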

Things to avoid when creating commits

With that in mind, there are some commonly encountered examples of bad things to avoid:

  • Mixing whitespace changes with functional code changes. The whitespace changes will obscure the important functional changes, making it harder for a reviewer to correctly determine whether the change is correct.
    Solution: Create 2 commits, one with the whitespace changes, one with the functional changes. Typically the whitespace change would be done first, but that need not be a hard rule (see the staging sketch after this list).
  • Mixing two unrelated functional changes. Again the reviewer will find it harder to identify flaws if two unrelated changes are mixed together. If it becomes necessary to later revert a broken commit the two unrelated changes will need to be untangled, with further risk of bug creation.
  • Sending large new features in a single giant commit. It may well be the case that the code for a new feature is only useful when all of it is present. This does not, however, imply that the entire feature should be provided in a single commit. New features often entail refactoring existing code. It is highly desirable that any refactoring is done in commits which are separate from those implementing the new feature. This helps reviewers and test suites validate that the refactoring has no unintentional functional changes.
    Even the newly written code can often be split up into multiple pieces that can be independently reviewed. For example, changes which add new internal APIs/classes can be in self-contained commits. Again this leads to easier code review. It also allows other developers to cherry-pick small parts of the work, if the entire new feature is not immediately ready for merge.
    Addition of new public APIs or RPC interfaces should be done in commits separate from the actual internal implementation. This will encourage the author & reviewers to think about the generic API/RPC design, and not simply pick a design that is easier for their currently chosen internal implementation.
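
As a concrete illustration of the whitespace/functional split, here is one possible workflow using “git add -p” to stage the two kinds of change separately from a mixed working tree (the file name is hypothetical):

    $ git add -p nova/virt/libvirt/driver.py
    # at the interactive prompts, pick only the whitespace hunks, then:
    $ git commit -m "Fix whitespace in libvirt driver"
    $ git add -p nova/virt/libvirt/driver.py
    # stage the remaining functional hunks
    $ git commit    # write a full message describing the functional change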

The basic rule to follow is

If a code change can be split into a sequence of patches/commits, then it should be split. Less is not more. More is more.

Examples of bad practice

Now for some illustrations from Nova history. NB, although I’m quoting commit hashes for reference, I am removing all author names, since I don’t want to blame/pick on any one person. Almost everybody, including me, is guilty of violating these good practice rules at times. In addition the people who reviewed & approved these commits are just as guilty as the person who wrote/submitted them ;-P

Example 1:
commit ae878fc8b9761d099a4145617e4a48cbeb390623
Author: [removed]
Date: Fri Jun 1 01:44:02 2012 +0000

Refactor libvirt create calls

* minimizes duplicated code for create
* makes wait_for_destroy happen on shutdown instead of undefine
* allows for destruction of an instance while leaving the domain
* uses reset for hard reboot instead of create/destroy
* makes resume_host_state use new methods instead of hard_reboot
* makes rescue/unrescue not use hard reboot to recreate domain

Change-Id: I2072f93ad6c889d534b04009671147af653048e7

There are at least two independent changes made in this commit.

  • The switch to use the new “reset” API for the “hard_reboot” method
  • The adjustment to internal driver methods to not use “hard_reboot”

What is the problem with this? First, there is no compelling reason why these changes needed to be made at the same time. A first commit could have included the changes to stop calling “hard_reboot” in various places. A second commit could have re-written the “hard_reboot” implementation.

Because the switch to using the libvirt ‘reset’ method was buried in the large code refactoring, reviewers missed the fact that this was introducing a dependency on a newer libvirt API version. This commit was identified as the culprit reasonably quickly, but a trivial revert is not possible, due to the wide variety of unrelated changes included.
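
For comparison, had the ‘reset’ switch been isolated in its own small commit, backing it out would have been a single clean operation (the hash here is a placeholder for that hypothetical commit):

    # record a new commit that undoes just the offending change
    $ git revert --no-edit <hash-of-the-reset-only-commit>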

Example 2:
commit e0540dfed1c1276106105aea8d5765356961ef3d
Author: [removed]
Date: Wed May 16 15:17:53 2012 +0400

blueprint lvm-disk-images

Add ability to use LVM volumes for VM disks.

Implements LVM disks support for libvirt driver.

VM disks will be stored on LVM volumes in volume group
specified by `libvirt_images_volume_group` option.
Another option `libvirt_local_images_type` specify which storage
type will be used. Supported values are `raw`, `lvm`, `qcow2`,
`default`. If `libvirt_local_images_type` = `default`, usual
logic with `use_cow_images` flag is used.
Boolean option `libvirt_sparse_logical_volumes` controls which type
of logical volumes will be created (sparsed with virtualsize or
usual logical volumes with full space allocation). Default value
for this option is `False`.

Commit introduce three classes: `Raw`, `Qcow2` and `Lvm`. They contain
image creation logic, that was stored in
`LibvirtConnection._cache_image` and `libvirt_info` methods,
that produce right `LibvirtGuestConfigDisk` configurations for
libvirt. `Backend` class choose which image type to use.

Change-Id: I0d01cb7d2fd67de2565b8d45d34f7846ad4112c2


This is introducing one major new feature, so on the surface it seems reasonable to use a single commit, but looking at the patch, it clearly has entangled a significant amount of code refactoring with the new LVM feature code. This makes it hard to identify likely regressions in support for QCow2/Raw images. This should have been split into at least four separate commits:

  1. Replace the ‘use_cow_images’ config FLAG with the new FLAG ‘libvirt_local_images_type’, with back-compat code for support of legacy ‘use_cow_images’ FLAG
  2. Creation of internal “Image” class and subclasses for Raw & QCow2 image type impls.
  3. Refactor libvirt driver to replace raw/qcow2 image management code, with calls to the new “Image” class APIs
  4. Introduce the new “LVM” Image class implementation

Examples of good practice

Example 1:
commit 3114a97ba188895daff4a3d337b2c73855d4632d
Author: [removed]
Date: Mon Jun 11 17:16:10 2012 +0100

Update default policies for KVM guest PIT & RTC timers

commit 573ada525b8a7384398a8d7d5f094f343555df56
Author: [removed]
Date: Tue May 1 17:09:32 2012 +0100

Add support for configuring libvirt VM clock and timers

Together these two changes provide support for configuring the KVM guest timers. The introduction of the new APIs for creating libvirt XML configuration has been clearly separated from the change to the KVM guest creation policy, which uses the new APIs.

Example 2:
commit 62bea64940cf629829e2945255cc34903f310115
Author: [removed]
Date: Fri Jun 1 14:49:42 2012 -0400

Add a comment to rpc.queue_get_for().

Change-Id: Ifa7d648e9b33ad2416236dc6966527c257baaf88

commit cf2b87347cd801112f89552a78efabb92a63bac6
Author: [removed]
Date: Wed May 30 14:57:03 2012 -0400

Add shared_storage_test methods to compute rpcapi.
...snip...
Add get_instance_disk_info to the compute rpcapi.
...snip...
Add remove_volume_connection to the compute rpcapi.
...snip...
Add compare_cpu to the compute rpcapi.
...snip...
Add get_console_topic() to the compute rpcapi.
...snip...
Add refresh_provider_fw_rules() to compute rpcapi.
...many more commits...

This sequence of commits refactored the entire RPC API layer inside nova to allow pluggable messaging implementations. With such a major change in a core piece of functionality, splitting up the work into a large sequence of commits was key to enabling meaningful code review, and to tracking / identifying possible regressions at each step of the process.

Information in commit messages

As important as the content of the change is the content of the commit message describing it. When writing a commit message there are some important things to remember:

  1. Do not assume the reviewer understands what the original problem was. After a number of back & forth comments on a bug report, it is often as clear as mud what the root cause of the problem is. The commit message should have a clear statement as to what the original problem is. The bug is merely interesting historical background on /how/ the problem was identified. It should be possible to review a proposed patch for correctness without needing to read the bug ticket.
  2. Do not assume the reviewer has access to external web services/sites. In 6 months’ time when someone is on a train/plane/coach/beach/pub troubleshooting a problem & browsing GIT history, there is no guarantee they will have access to the online bug tracker, or online blueprint documents. The great step forward with distributed SCM is that you no longer need to be “online” to have access to all information about the code repository. The commit message should be totally self-contained, to maintain that benefit.
  3. Do not assume the code is self-evident/self-documenting. What is self-evident to one person might be clear as mud to another. Always document what the original problem was and how it is being fixed, for any change except the most obvious typos, or whitespace only commits.
  4. Describe /why/ a change is being made. A common mistake is to just document how the code has been written, without describing /why/ the developer chose to do it that way. By all means describe the overall code structure, particularly for large changes, but more importantly describe the intent/motivation behind the changes.
  5. Read the commit message to see if it hints at improved code structure. Often when writing the message for a large commit, it becomes obvious that the commit should in fact have been split into 2 or more parts. Don’t be afraid to go back and rebase the change to split it up into separate commits (see the rebase sketch after this list).
  6. Ensure sufficient information to decide whether to review. When Gerrit sends out email alerts for new patch submissions there is minimal information included, principally the commit message and the list of files changed. Given the high volume of patches, it is not reasonable to expect all reviewers to examine the patches in detail. The commit message must thus contain sufficient information to alert the potential reviewers to the fact that this is a patch they need to look at.
  7. The first commit line is the most important. In GIT commits the first line of the commit message has special significance. It is used as the email subject line, in git annotate messages, gitk viewer annotations, merge commit messages and many other places where space is at a premium. As well as summarising the change itself, it should take care to detail what part of the code is affected. eg if it affects the libvirt driver, mention ‘libvirt’ somewhere in the first line.
  8. Describe any limitations of the current code. If the code being changed still has future scope for improvements, or any known limitations, then mention these in the commit message. This demonstrates to the reviewer that the broader picture has been considered, and what tradeoffs have been made in terms of short term goals vs long term wishes.
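
As a sketch for point 5, here is one way to split an already-made commit using an interactive rebase (assuming the commit to split is the most recent one):

    $ git rebase -i HEAD~1
    # in the editor, change 'pick' to 'edit' for the commit to split
    $ git reset HEAD^        # rewind the branch, keeping the changes in the tree
    $ git add -p             # stage the first logical change
    $ git commit
    $ git add -p             # stage the second logical change
    $ git commit
    $ git rebase --continue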

The basic rule to follow is

The commit message must contain all the information required to fully understand & review the patch for correctness. Less is not more. More is more.

Including external references

The commit message is primarily targeted towards human interpretation, but there is always some metadata provided for machine use. In the case of OpenStack this includes at least the ‘Change-Id’, but also optional “bug” ID references and “blueprint” name references. Although GIT records the author & committer of a patch, it is common practice across many open source projects to include a “Signed-off-by” tag. Though OpenStack does not mandate its use, the latter is still useful to include if a patch is a combination of work by many different developers, since GIT only records a single author. All machine-targeted metadata, however, is of secondary consequence to humans and thus it should all be grouped together at the end of the commit message. For example:

Switch libvirt get_cpu_info method over to use config APIs

The get_cpu_info method in the libvirt driver currently uses
XPath queries to extract information from the capabilities
XML document. Switch this over to use the new config class
LibvirtConfigCaps. Also provide a test case to validate
the data being returned

Fixes: bug #1003373
Implements: blueprint libvirt-xml-cpu-model
Change-Id: I4946a16d27f712ae2adf8441ce78e6c0bb0bb657
Signed-off-by: Daniel P. Berrange <berrange@xxxxxxxxxx>


As well as the ‘Signed-off-by’ tag, there are various other ad-hoc tags that can be used to credit other people involved in a patch who aren’t the author.

  • ‘Reviewed-by: …some name.. <…email…>’
    Although Gerrit tracks formal review by project members, some patches have been reviewed by people outside the community prior to submission
  • ‘Suggested-by: …some name.. <…email…>’
    If a person other than the patch author suggested the code enhancement / influenced the design
  • ‘Reported-by: …some name.. <…email…>’
    If a person reported the bug / problem being fixed but did not otherwise file a launchpad bug report.

…invent other tags as relevant to credit other contributions

Some examples of bad practice

Now for some illustrations from Nova history, again with authors’ names removed since no one person is to blame for these.

Example 1:
commit 468e64d019f51d364afb30b0eed2ad09483e0b98
Author: [removed]
Date: Mon Jun 18 16:07:37 2012 -0400

Fix missing import in compute/utils.py

Fixes bug 1014829

Problem: this does not mention what imports were missing and why they were needed. This info was actually in the bug tracker, and should have been copied into the commit message, so it provides a self-contained description. eg:

Add missing import of 'exception' in compute/utils.py

nova/compute/utils.py makes a reference to exception.NotFound, however exception has not been imported.

Example 2:
commit 2020fba6731634319a0d541168fbf45138825357
Author: [removed]
Date: Fri Jun 15 11:12:45 2012 -0600

Present correct ec2id format for volumes and snaps

Fixes bug 1013765

* Add template argument to ec2utils.id_to_ec2_id() calls

Change-Id: I5e574f8e60d091ef8862ad814e2c8ab993daa366

Problem: this does not mention what the current (broken) format is, nor what the new fixed format is. Again this info was available in the bug tracker and should have been included in the commit message. Furthermore, this bug was fixing a regression caused by an earlier change, but there is no mention of what the earlier change was. eg

Present correct ec2id format for volumes and snaps

During the volume uuid migration, done by changeset XXXXXXX,
the ec2 id formats for volumes and snapshots were dropped and
replaced by the default instance format (i-xxxxx). These need
to be changed back to vol-xxx and snap-xxxx.

Adds a template argument to ec2utils.id_to_ec2_id() calls

Fixes bug 1013765

Example 3:
commit f28731c1941e57b776b519783b0337e52e1484ab
Author: [removed]
Date: Wed Jun 13 10:11:04 2012 -0400

Add libvirt min version check.

Fixes LP Bug #1012689.

Change-Id: I91c0b7c41804b2b25026cbe672b9210c305dc29b

Problem: This commit message is merely documenting what was done, and not why it was done. It should have mentioned which earlier changeset introduced the new minimum libvirt version. It should also have mentioned what the behaviour is when the check fails. eg

Add libvirt version check, min 0.9.7

The commit XXXXXXXX introduced use of the 'reset' API
which is only available in libvirt 0.9.7 or newer. Add a check
performed at startup of the compute server against the libvirt
connection version. If the version check fails the compute
service will shutdown.

Fixes LP Bug #1012689.

Change-Id: I91c0b7c41804b2b25026cbe672b9210c305dc29b

Examples of good practice

Example 1:
commit 3114a97ba188895daff4a3d337b2c73855d4632d
Author: [removed]
Date: Mon Jun 11 17:16:10 2012 +0100

Update default policies for KVM guest PIT & RTC timers

The default policies for the KVM guest PIT and RTC timers
are not very good at maintaining reliable time in guest
operating systems. In particular Windows 7 guests will
often crash with the default KVM timer policies, and old
Linux guests will have very bad time drift

Set the PIT such that missed ticks are injected at the
normal rate, ie they are delayed

Set the RTC such that missed ticks are injected at a
higher rate to "catch up"

This corresponds to the following libvirt XML

<clock offset='utc'>
<timer name='pit' tickpolicy='delay'/>
<timer name='rtc' tickpolicy='catchup'/>
</clock>

And the following KVM options

-no-kvm-pit-reinjection
-rtc base=utc,driftfix=slew

This should provide a default configuration that works
acceptably for most OS types. In the future this will
likely need to be made configurable per-guest OS type.

Fixes LP bug #1011848
Change-Id: Iafb0e2192b5f3c05b6395ffdfa14f86a98ce3d1f

Some things to note about this example commit message

  • It describes what the original problem is (bad KVM defaults)
  • It describes the functional change being made (the new PIT/RTC policies)
  • It describes what the result of the change is (the new XML/QEMU args)
  • It describes scope for future improvement (the possible per-OS type config)

Example 2:
commit 31336b35b4604f70150d0073d77dbf63b9bf7598
Author: [removed]
Date: Wed Jun 6 22:45:25 2012 -0400

Add CPU arch filter scheduler support

In a mixed environment of running different CPU architecutres,
one would not want to run an ARM instance on a X86_64 host and
vice versa.

This scheduler filter option will prevent instances running
on a host that it is not intended for.

The libvirt driver queries the guest capabilities of the
host and stores the guest arches in the permitted_instances_types
list in the cpu_info dict of the host.

The Xen equivalent will be done later in another commit.

The arch filter will compare the instance arch against
the permitted_instances_types of a host
and filter out invalid hosts.

Also adds ARM as a valid arch to the filter.

The ArchFilter is not turned on by default.

Change-Id: I17bd103f00c25d6006a421252c9c8dcfd2d2c49b

Some things to note about this example commit message

  • It describes what the problem scenario is (mixed arch deployments)
  • It describes the intent of the fix (make the scheduler filter on arch)
  • It describes the rough architecture of the fix (how libvirt returns arch)
  • It notes the limitations of the fix (work needed on Xen)

Digikam/KDE in Fedora 16 switching UI theme on its own ?

Posted: April 3rd, 2012 | Filed under: Fedora, Photography

I use the wonderful Digikam application for managing my photos on Fedora 16. I don’t mind that I’m running a KDE based application under GNOME shell, since it themes itself to match & its featureset easily beats any other open source desktop photo management application. Normally the UI theme when running Digikam under GNOME looks like this:

Normal Digikam Theme under GNOME

And then a strange thing happened last week. When I launched Digikam on my laptop the UI style suddenly changed to this:

Notice in particular the different tree view expander icons and the different scrollbars. More interestingly though, the overall UI felt more responsive when interacting with Digikam. The next time I launched Digikam, it was back to the “normal” GNOME compatible theme. Weird. And now just last night the same behaviour occurred on my other laptop – Digikam launched in a different theme, but upon restart went back to the original theme. WTF?

I tried playing with the ‘Themes’ menu in Digikam itself and all I can change is the colour scheme, not the widget styling. Trying to change the KDE application theme in KDE Control Center had precisely zero effect on the Digikam theme.

Can anyone explain this behaviour? Is there some trick to controlling KDE application themes when running under GNOME on Fedora 16? Most importantly, how can I get back this alternative Digikam theme? I really liked how responsive it felt to interact with, compared to the standard GNOME-like theme.


Announce: Release of Entangle v0.3.2 – An app for tethered camera control & capture

I am pleased to announce a new release 0.3.2 of Entangle is available for download from the usual location:

   http://entangle-photo.org/download/

This release has focused almost exclusively on i18n, integrating with the Fedora Transifex team for translations:

  • Major code style cleanup
  • Mark all translatable strings in code & UI files
  • Register with Transifex for translations via Fedora team
  • Pull in translations (full: German, Polish, Ukrainian, Japanese; partial: Spanish, Chinese).
  • Add m4 macros for compiler warnings, missing from previous release dist.

There are a great many languages with no coverage here, so if you are able to help out, please join the Fedora translation team:

     https://fedora.transifex.net/projects/p/entangle/

Native IPv6 connectivity for home broadband line

Posted: March 22nd, 2012 | Filed under: Fedora

Many years ago now I set up IPv6 across all my machines, both at home and public servers. Back then (2007), only 1 of my 3 ISPs (Bytemark) was offering any IPv6 support at all, so for my Linode server I used a static tunnel from Hurricane Electric, and for home connectivity a dynamic tunnel from SixXS. Now, 5 years on, the situation has improved somewhat. Linode offer IPv6 as standard with any virtual machine hosted on their network. I get my home DSL connectivity from UKFSN, who resell Enta.net services, and sometime last year I learnt that they are providing IPv6 service to their customers.

In my home network, I used a LinkSys modem for the ADSL PPP login. A separate WRT54GL running OpenWRT provides the LAN/WLAN subnet, and routes traffic to the subnet used by the LinkSys modem. While OpenWRT supports IPv6 very well, my LinkSys modem has zero support. So over the past 5 years, the LinkSys has done the IPv4 PPP login, while the aiccu tunnel daemon on my OpenWRT machine does the IPv6 tunnel login. This was never ideal, but functionally it works fine. With native IPv6 connectivity though, the PPP client is responsible for both IPv4 and IPv6 connectivity. So I faced the problem of how to enable this, given that the LinkSys ADSL modem has zero IPv6 support.

The answer to this conundrum is to move the responsibility for the PPP login off the ADSL modem entirely, by putting it into “Bridged” mode. In such a setup, the modem is solely responsible for negotiating the DSL link on the line. It then forwards all traffic from the DSL link to its LAN port, using the PPPoE (PPP-over-Ethernet) protocol. The OpenWRT box now runs the PPP daemon to establish the IP layer connectivity to the DSL ISP. This sounds complicated, but it is all surprisingly easy to configure.

  • On the LinkSys router, find the DSL setup options and change the mode from “PPPoA” to “Bridged”. The login name/password details are now irrelevant here (and indeed grayed out on my router admin page)
  • On the OpenWRT router, edit /etc/config/network and add a PPPoE config section, taking care to add the ‘ipv6=1’ option. Contrary to the instructions from my ISP, I didn’t need to configure any IPv6 address/subnet on the ppp0 interface; it is automatically handled via link-local addresses.
    config 'interface' 'wan'
    	option 'ifname' 'eth0.1'
    	option 'proto' 'pppoe'
    	option 'username' 'NNNNNNN@adsllogin.co.uk'
    	option 'password' 'XXXXXXX'
    	option 'defaultroute' '1'
    	option 'peerdns' '1'
    	option 'ipv6' '1'
  • Restart networking on the OpenWRT box (/etc/init.d/network restart). If all went to plan, the OpenWRT box now has a login to the DSL ISP with both IPv4 and IPv6 connectivity
    ppp0      Link encap:Point-to-Point Protocol  
              inet addr:XX.YY.ZZ.AA  P-t-P:BB.CC.DD.EE  Mask:255.255.255.255
              inet6 addr: fe80::XXXX:YYYY::ZZZZ/10 Scope:Link
              UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1492  Metric:1
              RX packets:169629 errors:0 dropped:0 overruns:0 frame:0
              TX packets:120721 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:3 
              RX bytes:228671985 (218.0 MiB)  TX bytes:11603336 (11.0 MiB)
  • Provide IPv6 connectivity to the LAN using RADVD. With OpenWRT this is trivially achieved by editing /etc/config/radvd. UKFSN/Enta provided me with a /56 subnet for local LAN use. I just allocated the first /64 of this to my LAN for now. The rest I will use for creating various subnets between virtual machines I test with
    config prefix
    	option interface	'lan'
    	option prefix           '2001:XXXX:YYYY:ZZZZ::/64'
    	option AdvOnLink	1
    	option AdvAutonomous	1
    	option AdvRouterAddr	0
    	option ignore		0
  • Don’t forget to ensure that a firewall is up for the IPv6 link – there’s no NAT to “protect” you, so you want to set up a “deny all” rule for incoming connectivity on the “ppp0” device (a minimal sketch follows this list).
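
As an illustration, here is a minimal sketch of such a default-deny setup using ip6tables (the rules shown are a starting point under my assumptions about the setup, not a complete firewall):

    # let reply traffic & ICMPv6 (needed for neighbour discovery / path MTU) in,
    # then drop all other inbound connections arriving on the ppp0 device
    ip6tables -A INPUT   -i ppp0 -m state --state ESTABLISHED,RELATED -j ACCEPT
    ip6tables -A INPUT   -i ppp0 -p ipv6-icmp -j ACCEPT
    ip6tables -A INPUT   -i ppp0 -j DROP
    ip6tables -A FORWARD -i ppp0 -m state --state ESTABLISHED,RELATED -j ACCEPT
    ip6tables -A FORWARD -i ppp0 -p ipv6-icmp -j ACCEPT
    ip6tables -A FORWARD -i ppp0 -j DROP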

The upshot is that, 5 years on from my initial setup, I now have native IPv6 connectivity everywhere. No more IPv6-in-IPv4 tunnels required. I’ve not compared the download speeds of my native IPv6 connection against the SixXS IPv6 tunnel I used previously, but I can say that the ping times have improved. Previously IPv6 pings were about 10ms slower than IPv4 pings. Now the ping times are identical, which is nice :-)