In my previous apartment in London I didn’t have the space to unbox & use my Hi-Fi, so for 2 years I’ve been listening to music pretty much exclusively through my laptop. While I unquestionably missed the audio quality of my Roksan CD player & Amp, I’ve gotten accustomed to the convenience of not having to physically switch CDs every 60 minutes. A quick move across the pond later, I once again have the space to use my Hi-Fi (yay for cheap Cambridge rents – well compared to London at least!), but it has reminded me just how tedious CD changing is.
And then I remembered the fine folk at Slim Devices, who make a nifty little device to connect the music library on your computer to your Hi-Fi. The latest generation 3 SqueezeBox looks even smarter than their previous models, and with their explicit support for Linux it was just the thing I needed. I ordered one of the wifi-enabled versions, which arrived just in time for the weekend, and even after only 2 days of use I can highly recommend it to anyone looking for a similar device.
Installation was a breeze. The server side software comes in an RPM which installs & starts with the only configuration option being to tell it the directory containing your music. Configuration of the device itself amounted to no more than selecting the correct wifi network, entering the WEP key (WPA is supported too), and telling it the IP address of the host running the server software. So from unpacking to playing music was no more than 5 minutes of work (well, plus another 5 minutes trying to convince NetworkManager to connect to my wireless network after I had enabled WEP). Along with the de facto standard MP3 support, it also handles OGG, WAV, FLAC and Apple Lossless – with the lossless formats, audio quality is superb. The best thing about it, though, in comparison to similar devices is the UI provided by the LCD display and remote control. Beyond the initial network configuration, this lets you browse & search your local music archive by artist, album, etc, select from countless Internet radio stations, and interact with other non-music items such as RSS news feeds, weather reports, and other plugins… Oh yeah, plugins – the server software is completely open source, well documented and easily extendable – 3rd party plugins for the device are actively encouraged, and there are some seriously useful add-ons available.
The only downside of all this is that it’s highlighted the shortcomings in the audio quality of my mp3 collection. Even with a high quality VBR encoding there is still noticeable loss of definition at both ends of the spectrum. So I’ve decided to re-rip all my CDs into the lossless FLAC format – averaging out at about 400 MB per CD, a 200 GB hard drive will comfortably store 500 CDs, and if I ever need to switch formats again I can encode straight from the FLACs, skipping the ripping step entirely.
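The re-ripping itself needs nothing fancy – a rough sketch of the kind of throwaway Perl script I have in mind, assuming the cdparanoia and flac command line tools are installed (track tagging is left as an exercise):

    #!/usr/bin/perl
    # Rip an audio CD to per-track WAV files with cdparanoia,
    # then encode each track to FLAC at maximum compression.
    use strict;
    use warnings;

    # Rip every track to trackNN.cdda.wav in the current directory
    system("cdparanoia", "-B") == 0
        or die "cdparanoia failed: $?";

    # Encode each WAV to FLAC, removing the WAV once encoded
    foreach my $wav (glob "track*.cdda.wav") {
        system("flac", "--best", "--delete-input-file", $wav) == 0
            or die "flac failed on $wav: $?";
    }

Switching formats later is then just a matter of decoding with flac -d and piping into whatever encoder is flavour of the month – no CD drive required.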
DV has been doing some great (and much needed) work to provide a stable API for managing Xen by building the libvirt library. There are already APIs in C and Python, but when it comes to system administration I’m really a Perl guy, so I knocked up a set of XS bindings (in the Sys::Virt namespace) to allow libvirt to be called from Perl scripts. With DV’s recent 0.1.0 release of libvirt I thought it timely to upload the Perl bindings to CPAN, so there is now a Sys-Virt 0.1.0 module available. There are basic API docs, along with a couple of practical examples.
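To give a flavour of the bindings, here is a minimal sketch of listing the running domains on a Xen host. It follows the Sys::Virt documentation, though the exact constructor arguments may well shift between these early releases:

    #!/usr/bin/perl
    # List the active domains known to the local Xen hypervisor
    # via the Sys::Virt bindings to libvirt.
    use strict;
    use warnings;
    use Sys::Virt;

    # Connect to the default (local Xen) hypervisor; constructor
    # arguments may vary between early releases of the bindings
    my $vmm = Sys::Virt->new();

    # Print the id and name of every running domain
    foreach my $dom ($vmm->list_domains) {
        printf "%-5d %s\n", $dom->get_id, $dom->get_name;
    }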
On an unrelated note, it’s been great to be able to buy DVDs in Boston/the US at a fraction of the price I’d have paid in London. So I was rather surprised to come across a DVD that actually costs more here – Abbas Kiarostami’s film 10 comes in at 26.99 USD, vs 8.97 GBP (equiv ~15 USD). Never thought I’d be shipping DVDs from the UK to the US! Now I’ve just got to get around to ordering one of the nice MultiSystem (PAL+NTSC) & MultiRegion Pioneer DVD players so I can watch DVDs on a real TV instead of my laptop…
I’m really rather puzzled by why people consider having non-English blogs on Fedora Planet so troublesome. I’m monitoring perhaps 20 big news / blog aggregator sites through the LiFeRea RSS reader, which makes managing large volumes of news output frankly trivial. Out of the easily hundreds of articles that get published every day, I can quickly dismiss 90% based on the title alone, for 10% I might read the short summary content, and 1% I’ll read in full. Dismissing a story I’m not interested in takes one key press to move to the next new article.
So for someone not interested in reading them, the inclusion of a handful of foreign language blogs is line-noise, with an unmeasurable impact on the amount of time it takes to process the daily news feeds. Conversely, for people who do want to read them there is a clear benefit from having all Fedora blogs on one aggregator.
If there is anything at all highlighted by the introduction of these new feeds, it’s just how far technology regressed when the world ditched NNTP in favour of the web. The capabilities for organizing, filtering & generally managing news feeds over RSS pale in comparison to the old NNTP news readers – no, I’m not talking about the lame Netscape news reader – I’m referring to tools like trn or tin – the mutt of news readers! The focus should be on producing better RSS readers…
I for one welcome our French Fedora bloggers, even though I (shamefully) can’t understand what they’re saying :-)
For a couple of years now I’ve been using a Nikon Coolscan V for scanning my 35mm negatives & slides into digital format. While it is not supported by the open source Sane software, the cross-platform VueScan software does a fantastic job of driving it, even including support for the ICE (infrared) cleaning. VueScan is the first piece of commercial software I’ve bought in years, and I can say it is well worth the $80 for their professional edition, which provides lifetime upgrades. If you’ve got a scanner which isn’t supported by Sane, then it’s worth checking out.
This was all well and good, but I’ve got an ever-increasing number of medium format negatives & slides taken with my Zero Image 2000 pinhole camera which won’t fit in the Nikon. So I went looking for a medium format scanner and finally settled on the Epson Perfection 4490, which was well rated by a number of photographic magazines, and a snip at only $200 – a small fraction of the price I paid for the dedicated Nikon film scanner back in London.
Getting it working under Linux was a little bit of a roller-coaster ride. I plugged it in, added its USB vendor & product IDs to the hotplug usermaps, re-plugged it, and fired up VueScan. “No scanners found”, damn. Checked the device permissions, fine. Tried Sane instead, “No scanners found”. Odd, because various web postings claimed it worked with both Sane & VueScan. Upgraded to the latest version of VueScan, still nothing. I was mildly worried that I had a dud unit by now. I re-checked the release notes for VueScan, whereupon I discovered the small print – it’s only supported if you have the Epson drivers installed, because it needs firmware loaded into it. Fortunately it turns out that Epson are a (reasonably) Linux friendly company, providing a Linux version of their scanner software for a large number of distros. The firmware itself is closed source & proprietary, but they do provide both a Linux version of their scanner software (IScan) and a SANE backend under the GPL (plus an exception to allow them to deal with the firmware loading library). With this all installed, I now have a choice of 3 programs to do scanning with: Sane, IScan, and VueScan.
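For reference, the hotplug step is just one extra line in the libsane usermap under /etc/hotplug/usb, naming the scanner’s USB IDs – the Epson vendor ID is 0x04b8, but check the product ID for your own unit with lsusb rather than trusting my memory of it:

    # script      match_flags idVendor idProduct (remaining fields are wildcards)
    libusbscanner 0x0003      0x04b8   0x0119    0x0000 0x0000 0x00 0x00 0x00 0x00 0x00 0x00 0x00000000

A quick scanimage -L afterwards confirms whether the SANE backend can actually see the device.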
The only remaining problem is that if I try to scan at the full 4800 DPI my laptop (with 768 MB of RAM) goes into a swap death spiral, because the combination of the raw RGB scan, the infrared scan and the post-processing requires on the order of 1 GB of memory for a single medium format slide. So I’m stuck at 2400 DPI for a while, until I talk myself into shelling out for a new desktop with 4 GB of RAM. That said, this is more than adequate for now – the image below is a scan of one of my first pinhole images, from a year & a half ago in London, scaled down from 5000×5000 pixels.
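The back-of-the-envelope numbers bear this out, assuming a 6×6cm frame scanned at 16 bits per channel: 6cm is roughly 2.36 inches, so 4800 DPI gives about 11,300 pixels per side, or around 128 megapixels; at 2 bytes for each of the three RGB channels that is roughly 770 MB, the infrared channel adds another ~260 MB, and that is before the post-processing allocates any working buffers of its own.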
One negative done, 195 to go…I may be some time…
For all my development projects I try to make use of code coverage tools to ensure the test suites are reasonably comprehensive; for example, with Test-AutoBuild I use the excellent Devel::Cover module. The nightly build runs the test suite and publishes a code coverage report giving a breakdown of test coverage for API documentation, functions, statements, and even conditional expressions. The colour coding of coverage makes it possible to quickly identify modules which are lacking coverage, and, given knowledge of which modules contain the most complexity, limited resources for writing tests can be directed to the areas of code which will have the biggest impact in raising application quality.
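If you’ve not tried it, collecting coverage for a typical Perl module build is only a couple of commands, roughly as the Devel::Cover documentation describes (Test-AutoBuild just drives the same thing from its nightly build rather than by hand):

    cover -delete
    HARNESS_PERL_SWITCHES=-MDevel::Cover make test
    cover

The first clears out any stale coverage database, the second runs the normal test suite with coverage instrumentation enabled, and the final cover invocation generates the report.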
When using code coverage, however, one must be careful not to fall into the trap of writing tests simply to increase the coverage numbers. There are many aspects of the code which just aren’t worthwhile testing – for example, areas so simple that the time involved writing tests is not offset by a meaningful rise in code quality. More important, though, is that there is a limit to what source code coverage analysis can tell you about real world test coverage. It is perfectly feasible to have 100% coverage over a region of code and still have serious bugs. The root of the problem is that the system being tested is not operating in isolation. No matter how controlled your test environment is, there are always external variables which can affect your code.
I encountered just such an example last weekend. A few months back I added a comprehensive set of tests for validating the checkout of code modules from Perforce, Subversion, and Mercurial. The code coverage report said: 100% covered. Great, I thought, I can finally forget about this bit of code for a while. And then we passed the Daylight Saving Time shift and all the tests started failing. It turned out that the modules were not correctly handling timezone information when parsing dates while DST was in effect. There is no easy way to test for this other than to run the same test suite over & over under at least 4 different timezones – UTC (GMT), BST (GMT+1), EST (GMT-5), EDT (EST+1 / GMT-4). Just setting $TZ isn’t really enough – to automate reliably I would really need to run the builds on four different geographically dispersed servers (or perhaps 4 Xen instances each running in a different timezone).
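As a stop-gap, and bearing in mind the caveat above that $TZ only gets you so far, a crude way to at least exercise the date parsing under several offsets looks something like this (the test script name is made up for illustration):

    #!/usr/bin/perl
    # Crude sketch: re-run a date parsing test under several timezones
    # by overriding TZ for each child process; a real setup would run
    # the whole suite per zone, ideally on separate (virtual) machines.
    use strict;
    use warnings;

    foreach my $tz (qw(UTC Europe/London America/New_York)) {
        local $ENV{TZ} = $tz;
        system($^X, "t/110-checkout-dates.t") == 0
            or warn "date parsing tests failed under TZ=$tz\n";
    }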
A second example: testing that no module has hardcoded the path separator is simply impossible within a single run of the test suite. Running the tests on UNIX may give a pass, and 100% coverage, but this merely tells me that no module has used ‘\’ or ‘:’ as a path separator. To validate that no module has used ‘/’ as a path separator the only option is to re-run the test suite on Windows. Fortunately virtualization can come to the rescue again here, in the form of QEMU, which allows emulation of an x86 CPU.
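The portable style itself is easy enough – the hard part is proving that nobody slipped and hardcoded a separator somewhere. A tiny sketch using the File::Spec module from core Perl (the path components here are just illustrative):

    #!/usr/bin/perl
    # Build paths portably instead of hardcoding '/' or '\'
    use strict;
    use warnings;
    use File::Spec;

    # catfile joins with the separator appropriate to the local OS
    my $log = File::Spec->catfile("build-root", "logs", "checkout.log");
    print "$log\n";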
Going back to the example of checking out code from an SCM server, another problem in Test-AutoBuild (which I must address soon) is ensuring that the different failure conditions in talking to the SCM server are handled. Some of the things which can go wrong include: an incorrect host name, a network outage breaking the connection mid-operation, an incorrect path for the module to check out, or the local SCM client tools not being installed at all. 100% test coverage of the code for checking out a module can’t tell you that there is a large chunk of error handling code missing altogether.
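The sort of test I have in mind looks roughly like the sketch below – the class name and constructor arguments are purely hypothetical, not the real Test-AutoBuild API – the point is simply that a checkout against an unreachable host should fail cleanly with a meaningful error:

    #!/usr/bin/perl
    # Sketch: verify that a checkout from an unreachable SCM host
    # fails with a useful error rather than hanging or dying silently.
    # (My::SCM::Checkout and its methods are hypothetical.)
    use strict;
    use warnings;
    use Test::More tests => 1;

    my $repo = My::SCM::Checkout->new(
        uri => "svn://no-such-host.example.com/repos");

    eval { $repo->checkout("trunk/mymodule", "/tmp/scratch") };
    like($@, qr/no-such-host/, "checkout reports the unreachable host");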
In summary, no matter how comprehensive your test suite is, there is always room for improvement. Think about what code is not there – error handling code. Think about what external systems you interact with & the failure scenarios that can occur. Think about what environmental assumptions you might have made – OS path separators. Think about what environmental changes can occur – time zones. So while code coverage is an incredibly valuable tool for identifying what areas of *existing* code are not covered, only use it to help prioritise ongoing development of a test suite, not as an end goal. There really is no substitute for running the tests under as many different environments as you can lay your hands on. And not having access to a large server farm is no longer an excuse – virtualization (take your pick of Xen, QEMU, UML, and VMware) will allow a single server to simulate dozens of different environments. The only limit to testing is your imagination…