For a long time the libvirtd daemon was single threaded, using a poll() loop to handle and dispatch all I/O requests in a single thread. This was reasonable when each operation was guaranteed to be fast, but as libvirt has grown in scope, so has the complexity and execution time of many requests. Ultimately we had no choice but to make all the internal drivers thread-safe, and the libvirtd daemon itself multi-threaded. Each libvirt virConnectPtr connection can now be safely used from multiple threads at once. The libvirtd daemon will also process RPC calls from multiple clients concurrently, and can even parallelize requests from a single client. Achieving this obviously required significant use of pthread mutexes to lock data structures that can be accessed concurrently.
To try and keep the code within the realm of human understanding, we came up with a fixed two-level locking model. At the first level, each libvirt driver has a lock protecting its global shared state. At the second level, individual managed objects each have a lock (eg, each instance of virDomainObj, virNetworkObj or virStoragePoolObj has a lock). The rules for locking can be stated pretty concisely:
- The driver lock must be held when accessing any driver shared state
- The managed object lock must be held when accessing any managed object state
- The driver lock must be held before obtaining a lock on a managed object
- The driver and managed object locks may be released in any order
- The locks may not be recursively obtained
With these rules there are three normal locking sequences for each public API entry point implemented by a driver. The first, simplest, pattern is for an API that only accesses driver state:
lock(driver)
....
....do something with 'driver'.....
....
unlock(driver)
The next pattern is for an API which has to work with a managed object. These can usually release the driver lock once the managed object has been locked:
lock(driver)
lock(object)
unlock(driver)
....
.... do something with 'object' ....
....
unlock(object)
The final pattern is for an API which has to work with both the driver and managed object state. In this case both locks must be held for the duration of the function:
lock(driver)
lock(object)
....
.... do something with 'object' and 'driver'....
....
unlock(object)
unlock(driver)
For the 0.6.0 release I updated several hundred methods in libvirt to follow these three locking patterns. Inevitably there were mistakes along the way, and those reviewing the patches did indeed find some. End users found some more when we released this. And there is a continual risk of causing accidental regressions. One approach to validating the locking rules is functional testing, but libvirt is a very complicated and expansive codebase, making it incredibly hard to exercise all possible codepaths in a functional test.
About a year ago, Richard Jones (of OCaml Tutorial fame) did a proof of concept using CIL to analyse libvirt. His examples loaded all the libvirt code, generated control flow graphs and reported on exported functions. CIL is an enormously powerful toolset, able to parse even very complicated C programs and perform complex static analysis checks. It sounded like just the tool for checking libvirt's locking rules, as well as being a good excuse to learn OCaml.
Functional languages really force you to think about the concepts you wish to express & check up front, before hacking code. This is possibly what discourages a lot of people from using functional languages ;-) So after trying and failing to hack something up quickly, I stopped and considered what I needed to represent. The libvirt internal code has a pretty consistent pattern across different drivers and managed objects, so we can generalize some concepts:
- Driver data types (eg, struct qemu_driver, xenUnifiedPrivatePtr)
- Managed object data types (eg, virDomainObjPtr, virNetworkObjPtr)
- A set of methods which lock drivers (eg, qemuDriverLock, xenUnifiedLock)
- A set of methods which unlock drivers (eg qemuDriverUnlock, xenUnifiedUnlock)
- A set of methods which obtain a locked managed object (eg, virDomainObjFindByName, virDomainObjAssignDef, virNetworkObjectFindByUUID)
- A set of methods which unlock managed objects (virDomainObjUnlock)
- A set of public API entry points for each driver (the function pointers in each virDriver struct instance)
I won't go into fine details about CIL, because it is quite a complex beast. Suffice to say, it loads C code, parses it, and builds up an in-memory data structure representing the entire codebase. To do this it needs the intermediate files from after pre-processing, eg, the '.i' files generated by GCC when the '-save-temps' flag is given. Once it has loaded the code into memory you can start to extract and process the interesting information.
The first step in libvirt analysis is to find all the public API entry points for each driver. To do this, we search over all the global variables looking for instances of the driver structs (virDriver, virNetworkDriver, etc), and then process their initializers to get all the function pointers. At this point we have a list containing all the functions we wish to check.
The next step is to iterate over each function and run a data flow analysis. CIL provides two useful modules for data flow analysis. One of them runs analysis from the entry point, forwards, to each return point. The other runs analysis from each return point, backwards, to the entry point. I can't really say which module is "best", but for libvirt locking analysis it felt like a forward data flow analysis would work pretty well. To perform analysis, you provide an implementation of the Cil.DataFlow.ForwardsTransfer interface, which has a couple of methods, of which the interesting ones are doStmt and doInstr. The first is invoked once for each C code block, the second once for each C statement. When invoked they are passed some initial state. The interface implementation looks at the code block or statement in question, decides what state has changed, and returns the new state. For lock analysis, I decided to store the following state about variables:
- The set of driver variables that are currently locked
- The set of driver variables that are currently unlocked
- The set of object variables that are currently locked
- The set of object variables that are currently unlocked
Thus, the first part of the implementation of the doInstr method was simply looking for function calls which lock or unlock a managed object or driver.
From this core state, some additional state can be derived at each C statement, recording locking mistakes. The mistakes currently identified are:
- Statements using an unlocked driver variable
- Statements using an unlocked object variable
- Statements locking an object variable while not holding a locked driver variable
- Statements locking a driver variable while holding a locked object variable
- Statements causing deadlock by fetching a locked object while an object is already locked
The lock checker runs a forwards code flow analysis, constructing this state for each interesting function we identified earlier. When complete, it simply looks at the final state accumulated for each function. For each of the 5 types of mistake, it prints out the function name, the line number and the C code causing the mistake. In addition to those 5 mistakes, it also checks the final list of locked variables, and reports on any functions which forget to unlock a managed object, or driver, in any of their exit points.
In summary, we have been able to automatically detect and report on 7 critical locking mistakes across the entire driver codebase without ever needing to run the code. We also have a data flow analysis framework that can be further extended to detect other interesting locking problems. For example, it would be desirable to check whether one public API function calls into another, since this would cause a deadlock with non-recursive mutexes.
The final reports generated by the lock checking tool are designed to make it easy for the person reading them to figure out which locking rule has been violated. An example report from the current libvirt CVS codebase looks like this:
================================================================
Function: umlDomainStart
----------------------------------------------------------------
- Total exit points with locked vars: 0
- Total blocks with lock ordering mistakes: 1
- Driver used while unlocked on #line 1564 "uml_driver.c"
ret = umlStartVMDaemon(dom->conn, driver___0, vm);
================================================================
================================================================
Function: umlDomainGetAutostart
----------------------------------------------------------------
- Total exit points with locked vars: 1
- At exit on #line 1675
return (ret);
^^^^^^^^^
variables still locked are
| struct uml_driver * driver___0
- Total blocks with lock ordering mistakes: 0
================================================================
================================================================
Function: umlDomainSetAutostart
----------------------------------------------------------------
- Total exit points with locked vars: 0
- Total blocks with lock ordering mistakes: 4
- Driver used while unlocked on #line 1704
configFile = virDomainConfigFile(dom->conn,
(char const *)driver___0->configDir,
(char const *)(vm->def)->name);
- Driver used while unlocked on #line 1706
autostartLink = virDomainConfigFile(dom->conn,
(char const *)driver___0->autostartDir,
(char const *)(vm->def)->name);
- Driver used while unlocked on #line 1712
err = virFileMakePath((char const *)driver___0->autostartDir);
- Driver used while unlocked on #line 1713
virReportSystemErrorFull(dom->conn, 21, err, "uml_driver.c",
"umlDomainSetAutostart", 1715U,
(char const *)tmp___1, driver___0->autostartDir);
================================================================
================================================================
Function: testOpen
----------------------------------------------------------------
- Total exit points with locked vars: 0
- Total blocks with lock ordering mistakes: 3
- Driver used while unlocked on #line 663 "test.c"
tmp___20 = virAlloc((void *)(& privconn->domainEventCallbacks),
sizeof(*(privconn->domainEventCallbacks)));
- Driver used while unlocked on #line 663
privconn->domainEventQueue = virDomainEventQueueNew();
- Driver used while unlocked on #line 670
privconn->domainEventTimer = virEventAddTimeout(-1, & testDomainEventFlush,
(void *)privconn,
(void (*)(void *opaque ))((void *)0));
================================================================
================================================================
Function: qemudClose
----------------------------------------------------------------
- Total exit points with locked vars: 0
- Total blocks with lock ordering mistakes: 1
- Driver used while unlocked on #line 1815 "qemu_driver.c"
virDomainEventCallbackListRemoveConn(conn, driver___0->domainEventCallbacks);
================================================================
================================================================
Function: qemudDomainSuspend
----------------------------------------------------------------
- Total exit points with locked vars: 0
- Total blocks with lock ordering mistakes: 1
- Driver used while unlocked on #line 2259
tmp___4 = qemudSaveDomainStatus(dom->conn, driver___0, vm);
================================================================
================================================================
Function: qemudDomainResume
----------------------------------------------------------------
- Total exit points with locked vars: 0
- Total blocks with lock ordering mistakes: 1
- Driver used while unlocked on #line 2312
tmp___4 = qemudSaveDomainStatus(dom->conn, driver___0, vm);
================================================================
================================================================
Function: qemudDomainGetSecurityLabel
----------------------------------------------------------------
- Total exit points with locked vars: 0
- Total blocks with lock ordering mistakes: 1
- Driver used while unlocked on #line 3189
tmp___2 = (*((driver___0->securityDriver)->domainGetSecurityLabel))(dom->conn,
vm, seclabel);
================================================================
================================================================
Function: qemudDomainStart
----------------------------------------------------------------
- Total exit points with locked vars: 0
- Total blocks with lock ordering mistakes: 1
- Driver used while unlocked on #line 3630
ret = qemudStartVMDaemon(dom->conn, driver___0, vm, (char const *)((void *)0),
-1);
================================================================
================================================================
Function: qemudDomainAttachDevice
----------------------------------------------------------------
- Total exit points with locked vars: 0
- Total blocks with lock ordering mistakes: 1
- Driver used while unlocked on #line 4192
tmp___8 = qemudSaveDomainStatus(dom->conn, driver___0, vm);
================================================================
================================================================
Function: qemudDomainDetachDevice
----------------------------------------------------------------
- Total exit points with locked vars: 0
- Total blocks with lock ordering mistakes: 1
- Driver used while unlocked on #line 4316
tmp___3 = qemudSaveDomainStatus(dom->conn, driver___0, vm);
================================================================
================================================================
Function: qemudDomainSetAutostart
----------------------------------------------------------------
- Total exit points with locked vars: 0
- Total blocks with lock ordering mistakes: 4
- Driver used while unlocked on #line 4381
configFile = virDomainConfigFile(dom->conn,
(char const *)driver___0->configDir,
(char const *)(vm->def)->name);
- Driver used while unlocked on #line 4383
autostartLink = virDomainConfigFile(dom->conn,
(char const *)driver___0->autostartDir,
(char const *)(vm->def)->name);
- Driver used while unlocked on #line 4389
err = virFileMakePath((char const *)driver___0->autostartDir);
- Driver used while unlocked on #line 4390
virReportSystemErrorFull(dom->conn, 10, err, "qemu_driver.c",
"qemudDomainSetAutostart", 4392U,
(char const *)tmp___1, driver___0->autostartDir);
================================================================
================================================================
Function: qemudDomainMigratePrepare2
----------------------------------------------------------------
- Total exit points with locked vars: 0
- Total blocks with lock ordering mistakes: 1
- Object fetched while locked objects exist #line 4990
vm = virDomainAssignDef(dconn, & driver___0->domains, def);
================================================================
================================================================
Function: storagePoolRefresh
----------------------------------------------------------------
- Total exit points with locked vars: 0
- Total blocks with lock ordering mistakes: 1
- Driver used while unlocked on #line 827 "storage_driver.c"
virStoragePoolObjRemove(& driver___0->pools, pool);
================================================================
================================================================
Function: storagePoolSetAutostart
----------------------------------------------------------------
- Total exit points with locked vars: 0
- Total blocks with lock ordering mistakes: 2
- Driver used while unlocked on #line 962
err = virFileMakePath((char const *)driver___0->autostartDir);
- Driver used while unlocked on #line 963
virReportSystemErrorFull(obj->conn, 18, err, "storage_driver.c",
"storagePoolSetAutostart", 965U,
(char const *)tmp___1, driver___0->autostartDir);
================================================================
================================================================
Function: storageVolumeCreateXML
----------------------------------------------------------------
- Total exit points with locked vars: 0
- Total blocks with lock ordering mistakes: 1
- Object locked while driver is unlocked on #line 1277
virStoragePoolObjLock(pool);
================================================================
================================================================
Function: networkStart
----------------------------------------------------------------
- Total exit points with locked vars: 0
- Total blocks with lock ordering mistakes: 1
- Driver used while unlocked on #line 1228 "network_driver.c"
ret = networkStartNetworkDaemon(net->conn, driver___0, network);
================================================================
================================================================
Function: networkDestroy
----------------------------------------------------------------
- Total exit points with locked vars: 0
- Total blocks with lock ordering mistakes: 1
- Driver used while unlocked on #line 1251
ret = networkShutdownNetworkDaemon(net->conn, driver___0, network);
================================================================
================================================================
Function: networkSetAutostart
----------------------------------------------------------------
- Total exit points with locked vars: 0
- Total blocks with lock ordering mistakes: 4
- Driver used while unlocked on #line 1363
configFile = virNetworkConfigFile(net->conn,
(char const *)driver___0->networkConfigDir,
(char const *)(network->def)->name);
- Driver used while unlocked on #line 1365
autostartLink = virNetworkConfigFile(net->conn,
(char const *)driver___0->networkAutostartDir,
(char const *)(network->def)->name);
- Driver used while unlocked on #line 1371
err = virFileMakePath((char const *)driver___0->networkAutostartDir);
- Driver used while unlocked on #line 1372
virReportSystemErrorFull(net->conn, 19, *tmp___1, "network_driver.c",
"networkSetAutostart", 1374U, (char const *)tmp___0,
driver___0->networkAutostartDir);
================================================================
In summary, CIL is an incredibly powerful tool, and it is well worth learning OCaml in order to use the CIL framework for doing static analysis of C code. The number of bugs it has allowed us to identify and fix more than compensates for the effort of developing this test harness.
The libvirt project uses CVS as its primary SCM system. Our project code submission process requires that people submit patches for review & approval on the mailing list before committing them to CVS. This is great for catching stupid bugs and working through various design options, but CVS really isn’t conducive to this kind of workflow. It is not unusual to have a number of outstanding patches going through review, and you want to keep them all tracked separately during development. In other words you want a queue of pending patches.
There are many ways to implement such a workflow. A long time ago Andrew Morton had a set of scripts for managing patches, which evolved into Quilt. In these modern times, many projects (including the kernel of course) use GIT, and its ability to rebase branches, to achieve this workflow. We even maintain a GIT mirror of libvirt which could be used for managing outstanding patches. While certainly a very capable tool, GIT has a somewhat steep learning curve, and even once learnt it violates the principle of least surprise on a daily basis, with seemingly simple operations working in the most bizarre way. So with libvirt I've been using a Mercurial mirror, and its 'mq' extension, for patch management. This isn't the only way of skinning the cat – you could also use branches, or the new 'rebase' command inspired by GIT – but my mind works in terms of patch queues, so I chose 'mq'. For the benefit of anyone else with an interest in Mercurial, here's a quick overview of my workflow.
The first task is to get a reliable process to synchronize the CVS repository to Mercurial. For this purpose I chose to use Tailor, though there are plenty of other tools which do the job fine too. For a repeatable conversion, you need to create a recipe describing what you want. I have the following script saved as $HOME/work/libvirt-hg.tailor, with execute permission enabled:
$ sudo yum install tailor
$ cd $HOME/work
$ cat > libvirt-hg.tailor <<EOF
#!/usr/bin/env tailor
"""
[DEFAULT]
verbose = True
[project]
target = hg:target
start-revision = INITIAL
root-directory = /home/berrange/work
state-file = libvirt-hg.state
source = cvs:source
subdir = libvirt-hg
[cvs:source]
module = libvirt
repository = :pserver:anoncvs@libvirt.org:2401/data/cvs
[hg:target]
"""
EOF
$ chmod +x libvirt-hg.tailor
$ ./libvirt-hg.tailor
When invoking this script for the first time it'll take a seriously long time, checking out every revision of every file in CVS, figuring out the changesets, and populating a mercurial repository with them. When complete it will have left a directory libvirt-hg containing the mercurial repository mirroring the CVS repo, and a file libvirt-hg.state recording how much work it has done. On a daily basis (or more/less often as desired), re-running libvirt-hg.tailor will make it "catch up" with any new changes in CVS, updating the libvirt-hg repository with the new changesets.
While you could work directly in the libvirt-hg repository, my preference is to work in a clone of it, leaving libvirt-hg as a pristine mirror of the CVS repository. So I create a repository called 'libvirt-work' for my day-to-day patch queue. If I want to try out someone else's work – for example, Ben Guthro's libvirt events patchset – I create a patch queue just for that, by again cloning the pristine libvirt-hg repository.
$ cd $HOME/work
$ hg clone libvirt-hg libvirt-work
For some reason Mercurial extensions aren’t enabled by default, so if you’ve not used the ‘mq’ extension before, create a $HOME/.hgrc containing:
$ cat $HOME/.hgrc
[ui]
username="Daniel P. Berrange <berrange@redhat.com>"
[diff]
git = 1
showfunc = 1
[extensions]
hgext.hgk=
hgext.mq=
hgext.extdiff =
This does a number of things. The ‘showfunc’ setting makes it include function names in context diffs. The ‘hgk’ extension is a graphical repository browser, ‘mq’ is the patch queue extension, and ‘extdiff’ allows use of more interesting diff options. With that in place I can create a queue for my new bit of work in libvirt
$ cd $HOME/work
$ cd libvirt-work
$ hg qinit
The first patch I want to work on is refactoring mac address handling, so I’ll add a patch to track this work
$ hg qnew mac-address-refactor
Now off editing files as normal – 'hg add' / 'hg remove' to create / delete files as needed, 'hg status' or 'hg diff' to see what's changed, etc. The only difference comes when the work is complete – instead of using 'hg commit' to record a permanent changeset, I want to update the patch queue. This is done with the 'hg qrefresh' command, and the 'hg qdiff' command will show the contents of the patch file:
$ hg qrefresh
$ hg qdiff | diffstat
capabilities.c | 16 +++++++++++++++-
capabilities.h | 11 +++++++++++
domain_conf.c | 34 +++++++---------------------------
domain_conf.h | 8 ++------
lxc_conf.c | 3 +++
lxc_driver.c | 4 +++-
openvz_conf.c | 2 +-
qemu_conf.c | 3 +++
qemu_driver.c | 6 ++++--
util.c | 24 +++++++++++++++++++++++-
util.h | 12 +++++++++++-
xen_internal.c | 3 +++
xend_internal.c | 8 ++++++--
xm_internal.c | 34 +++++++++++-----------------------
14 files changed, 103 insertions(+), 65 deletions(-)
This bit of work was a pre-cursor to the real thing I wanted to work on, the OpenVZ networking code. With this refactoring out of the way, I want to add support for the ‘get version’ operation in OpenVZ driver, so I can start a new patch
$ hg qnew openvz-version
Patches stack up in a series, each building on the last
$ hg qseries
mac-address-refactor
openvz-version
Fast forward another hour, and I’ve got the version operation implemented, and ready for the final patch, enhancing the OpenVZ networking support.
$ hg qnew openvz-network
$ hg qseries
mac-address-refactor
openvz-version
openvz-network
In working through this 3rd patch though, I realize there was a problem in the first one, so I save my current work, and then ‘pop’ patches off the queue until I get back to the first.
$ hg qrefresh
$ hg qtop
openvz-network
$ hg qpop
Now at: openvz-version
$ hg qpop
Now at: mac-address-refactor
Once I've fixed the mistakes in this patch, I can then 'push' patches back onto the queue. If you're lucky Mercurial can resolve / merge changes, but as with any rebase, sometimes there are conflicts which require fixing up by hand. If this occurs, '.rej' reject files are saved which must be manually addressed, and then the updated patch saved with 'hg qrefresh' before continuing.
$ hg qpush
applying openvz-version
Now at: openvz-version
$ hg qpush
applying openvz-network
Now at: openvz-network
When a piece of work is done, the raw patch files all live in $REPO/.hg/patches/ and can be submitted for review to the mailing list.
If a bit of work goes on for a long time, it is often necessary to re-sync with the latest upstream CVS repository. The first step in this rebase is to get Tailor to pull down all the latest changes into the local libvirt-hg repository:
$ cd $HOME/work
$ ./libvirt-hg.tailor
Then, in the working repository to be rebased, first 'pop' off all local patches. Then pull in the new changesets from libvirt-hg, before re-applying the patches, fixing up any merge conflicts which arise:
$ hg qpop --all
Patch queue now empty
$ hg pull
pulling from /home/berrange/work/libvirt-hg
searching for changes
adding changesets
adding manifests
adding file changes
added 15 changesets with 67 changes to 48 files
(run 'hg update' to get a working copy)
$ hg update
48 files updated, 0 files merged, 0 files removed, 0 files unresolved
$ hg qpush
applying mac-address-refactor
Now at: mac-address-refactor
$ hg qpush
applying openvz-version
Now at: openvz-version
$ hg qpush
applying openvz-network
Now at: openvz-network
This time I was lucky and didn’t have any merge conflicts.
While I remember, another reason for keeping the libvirt-hg repository pristine is that I typically work on different features across different machines. On one machine I may be doing OpenVZ work, while another has a patch queue of KVM work, and yet another may be Xen work. Instead of running Tailor on every machine, I can have just one master mercurial mirror, and push that where needed. Each local machine then has its own independent libvirt-work cloned repository with the relevant patch queue.
There is so much more I could describe here – I've not even mentioned most of the commands available in the 'mq' extension – 'hg help' will show them all. For further reading, turn to the Mercurial mq wiki page.
The libvirt library provides a core C API upon which language specific bindings are also built. Being written in C of course means we have to do our own memory management / allocation. Historically we've primarily used the standard malloc/realloc/calloc/free APIs that everyone knows and loves/hates, although we do have an internal variable length string API (more on that in a future post). Of course there are many other higher level memory management APIs (obstacks/memory pools, garbage collectors, etc) you can write C code with too, but in common with the majority of other apps/code we happen to be using the general purpose malloc/free pattern everywhere. I've recently been interested in improving our code quality, reliability and testing, and found myself thinking about whether there was a way to make our use of these APIs less error prone, and verifiably correct at build time.
If you consider the standard C library malloc/realloc/calloc/free APIs, they in fact positively encourage application coding errors. Off the top of my head there are at least 7 common problems, probably more….
- malloc() – pass incorrect number of bytes
- malloc() – fail to check the return value
- malloc() – forget to fully initialize returned memory
- free() – double free by not setting pointer to NULL
- realloc() – fail to check the return value
- realloc() – fail to re-assign the pointer with new address
- realloc() – leaking memory upon failure to resize
A great many libraries create wrapper functions around malloc/free/realloc, but they generally only attempt to address one or two of these points. As an example, consider GLib, which has a wrapper around malloc() attempting to address point 2. It does this by calling abort() on failure to allocate, but then re-introduces the risk by also adding a wrapper which doesn't abort():
gpointer g_malloc (gulong n_bytes) G_GNUC_MALLOC;
gpointer g_try_malloc (gulong n_bytes) G_GNUC_MALLOC;
It also wraps realloc() for the same reason, and adds an annotation to make the compiler warn if you don’t use the return value
gpointer g_realloc (gpointer mem,
gulong n_bytes) G_GNUC_WARN_UNUSED_RESULT;
gpointer g_try_realloc (gpointer mem,
gulong n_bytes) G_GNUC_WARN_UNUSED_RESULT;
This at least addresses point 6, ensuring that you update your variable with the new pointer, but can’t protect against failure to check for NULL. And the free() wrapper doesn’t address the double-free issue at all.
You can debate which of “checking for NULL” vs “calling abort()” is the better approach – Havoc has some compelling points – but that’s not the point of this blog posting. I was interested in whether I could create wrappers for libvirt which kept the choice in the hands of the caller, while still protecting from the common risks.
1. Passing an incorrect number of bytes
For any given pointer type, the compiler knows how many bytes need to be allocated for a single instance of it – either sizeof(datatype) or sizeof(*mptr). Both options are a little tedious to write out, eg:
mytype *foo;
foo = malloc(sizeof(*foo));
foo = malloc(sizeof(mytype));
Since C has no data type representing a data type, you cannot simply pass a data type to a function as you might like to:
mytype *foo = malloc(mytype);
You do, however, have access to the preprocessor and so you can achieve the same effect via a trivial macro.
#define MALLOC(ptr) malloc(sizeof(*(ptr)))
or
#define MALLOC(mytype) malloc(sizeof(mytype))
2. Failing to check the return value
GCC has a number of ways to annotate APIs, one of which is __attribute__((__warn_unused_result__)). This causes the compiler to emit a warning if the return value is not used / checked. Add -Werror and you get guaranteed compile time verification (the rest of your code was 100% warning free, wasn't it – it ought to be :-). This doesn't actually help with the NULL check for malloc, because you always do something with the return value – assign it to a variable. The core problem here is that the API encodes the error condition as a special case value in the returned "data". The obvious answer is to separate the two cases, keeping the return value solely as a bi-state success or fail flag.
The data can be passed back via an output parameter. This might look like:
myptr *foo;
mymalloc(&foo, sizeof(*foo));
And the compiler would be able to warn that you failed to check for failure. It looks rather ugly to write this way, but consider that a preprocessor macro is already being used for the previous issue, so we can merely extend its use:
#define MALLOC(ptr) mymalloc(&(ptr), sizeof(*(ptr)))
myptr *foo;
if (MALLOC(foo) < 0)
... do error handling...
3. Forgetting to fully initialize the returned memory
The memory allocated by malloc() is not initialized to zero; for that a separate calloc() function is provided. This leaves open the possibility of an app mistakenly using the wrong variant. Or consider an app using malloc() and explicitly initializing all struct members: some time later a new member is added, and now the developer needs to find all malloc() calls for that struct and explicitly initialize the new member. It would be safer to always have malloc zero all memory – even though it has an obvious performance impact, the net win in safety is probably worth it for most applications. Dealing with this in realloc() is much harder though, because you don't know how many extra bytes (if any) you need to initialize without knowing the original size.
-
Since free() takes the pointer to be freed directly, it cannot reset the caller's original pointer variable back to NULL. So the application is at risk of mistakenly calling free() a second time, leading to corruption or a crash (free() will happily ignore a NULL pointer though). If free() were to accept a pointer to a pointer instead, it could automatically NULL-ify it. Again this has extra computational cost from the often redundant assignment to NULL, but it is negligible in the context of the work done to actually free the memory region, and easily offset by the safety benefits.
-
As with point 2, this is caused by the fact that the error condition is a special case of the returned data value.
-
This is a fun one which may escape detection for ages, because most of the time the returned pointer address is the same as the original address – realloc() doesn't often need to relocate the data to another address. Given that solving the earlier points required changing the return data to be an output parameter, we've already essentially solved this issue too: pass in a pointer to the pointer to be reallocated and the function can update the address directly.
-
What to do on failure of a realloc() is an interesting question. The chances are that most applications will immediately free() the block and go into cleanup code – indeed I don't recall coming across code that would ever continue its work if realloc() failed. Thus one could simply define that if realloc() fails it automatically frees the original data too.
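A sketch of those semantics, using the same pointer-to-pointer convention (the name myrealloc is illustrative): on failure the original block is freed and the caller's pointer is cleared, so there is no leak and no dangling pointer:

```c
#include <stdlib.h>

/* Illustrative sketch: realloc with "free the original on failure"
 * semantics, in the pointer-to-pointer style described above. */
static int myrealloc(void *ptrptr, size_t bytes)
{
    void *tmp = realloc(*(void **)ptrptr, bytes);
    if (tmp == NULL && bytes != 0) {
        /* Most callers would only free and bail out anyway, so do it
         * for them and NULL-ify their pointer. */
        free(*(void **)ptrptr);
        *(void **)ptrptr = NULL;
        return -1;
    }
    *(void **)ptrptr = tmp;
    return 0;
}
```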
Having considered all this, it is possible to define a set of criteria for the design of a low level memory allocation API that is considerably safer than the standard C one, while still retaining nearly all its flexibility and avoiding the imposition of policy such as calling abort() on failure:
- Use return values only for a success/fail error condition flag
- Annotate all the return values with __warn_unused_result__
- Pass a pointer to the pointer into all functions
- Define macros around all functions to automatically calculate datatype sizes
So the primary application usage would be via a set of macros:
VIR_ALLOC(ptr)
VIR_ALLOC_N(ptr, count)
VIR_REALLOC_N(ptr, count)
VIR_FREE(ptr)
These call through to the underlying APIs:
int virAlloc(void *ptrptr, size_t bytes)
int virAllocN(void *ptrptr, size_t bytes, size_t count)
int virReallocN(void *ptrptr, size_t bytes, size_t count)
int virFree(void *ptrptr)
The only annoying thing here is that although we're passing a pointer to a pointer into all these functions, the first param is still just 'void *' and not 'void **'. This works because 'void *' is defined to hold any type of pointer, and using 'void **' would cause the compiler to complain bitterly about strict aliasing violations. Internally the implementations of these functions can still safely cast to 'void **' when dereferencing the pointer.
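As an illustration, here is a simplified sketch of how such implementations can accept 'void *' yet treat it as 'void **' internally. The "Sketch" names are mine; the real libvirt implementations differ in detail (error reporting, overflow checking):

```c
#include <stdlib.h>

/* Simplified approximation of the alloc API described in the text;
 * not the actual libvirt source. */
int virAllocSketch(void *ptrptr, size_t bytes)
{
    /* 'ptrptr' really points at the caller's pointer variable */
    *(void **)ptrptr = calloc(1, bytes);   /* zero-filled allocation */
    return *(void **)ptrptr == NULL ? -1 : 0;
}

int virFreeSketch(void *ptrptr)
{
    free(*(void **)ptrptr);
    *(void **)ptrptr = NULL;   /* guards against double-free */
    return 0;
}
```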
All three alloc/realloc functions carry the __warn_unused_result__ annotation, so the caller is forced to check the return value for failure, validated at compile time with a warning (or a fatal compile error with -Werror).
And finally to wire up the macros to the APIs:
#define VIR_ALLOC(ptr) virAlloc(&(ptr), sizeof(*(ptr)))
#define VIR_ALLOC_N(ptr, count) virAllocN(&(ptr), sizeof(*(ptr)), (count))
#define VIR_REALLOC_N(ptr, count) virReallocN(&(ptr), sizeof(*(ptr)), (count))
#define VIR_FREE(ptr) virFree(&(ptr))
If this is all sounding fairly abstract, an illustration of usage should clear things up. These snippets are taken from libvirt, showing before and after
- Allocating a new variable:
Before
virDomain *domain;
if ((domain = malloc(sizeof(*domain))) == NULL) {
__virRaiseError(VIR_ERROR_NO_MEMORY);
return NULL;
}
After
virDomain *domain;
if (VIR_ALLOC(domain) < 0) {
__virRaiseError(VIR_ERROR_NO_MEMORY);
return NULL;
}
- Allocating an array of domains
Before
virDomain *domains;
int ndomains = 10;
if ((domains = malloc(sizeof(*domains) * ndomains)) == NULL) {
__virRaiseError(VIR_ERROR_NO_MEMORY);
return NULL;
}
After
virDomain *domains;
int ndomains = 10;
if (VIR_ALLOC_N(domains, ndomains) < 0) {
__virRaiseError(VIR_ERROR_NO_MEMORY);
return NULL;
}
- Allocating an array of domain pointers
Before
virDomain **domains;
int ndomains = 10;
if ((domains = malloc(sizeof(*domains) * ndomains)) == NULL) {
__virRaiseError(VIR_ERROR_NO_MEMORY);
return NULL;
}
After
virDomain **domains;
int ndomains = 10;
if (VIR_ALLOC_N(domains, ndomains) < 0) {
__virRaiseError(VIR_ERROR_NO_MEMORY);
return NULL;
}
- Re-allocate the array of domains to be longer
Before
virDomain **tmp;
ndomains = 20;
if ((tmp = realloc(domains, sizeof(*domains) * ndomains)) == NULL) {
free(domains);
__virRaiseError(VIR_ERROR_NO_MEMORY);
return NULL;
}
domains = tmp;
After
ndomains = 20;
if (VIR_REALLOC_N(domains, ndomains) < 0) {
__virRaiseError(VIR_ERROR_NO_MEMORY);
return NULL;
}
- Free the domain
Before
free(domain);
domain = NULL;
After
VIR_FREE(domain);
As these short examples show, the number of lines of code hasn't changed much, but their clarity has, particularly for the realloc() usage, and of course there is now compile time verification of usage. The main problem remaining is the classic memory leak: forgetting to call free() at all. If you want to use a low level memory allocation API this problem is essentially unavoidable; fixing it really requires a completely different type of API (eg, obstacks/pools) or a garbage collector. And there's always valgrind to identify leaks, which works very nicely, particularly if you have extensive test suite coverage.
The original mailing list thread from which this post is derived is on the libvirt mailing list.
If you are a Perl developer you may have come across CPANTS which analyses and ranks all distributions and authors on CPAN based on their Kwalitee score. To quote…
What is a good module? That's hard to say.
What is good code? That's also hard to say.
"Quality" is not a well-defined term in computing ... and especially not Perl.
One man's Thing of Beauty is another's man's Evil Hack
Since we can't define quality, how do we write a program to assure it?
Kwalitee: It looks like quality, it sounds like quality, but it’s not quite quality.
I was rather disappointed to discover my own Kwalitee scores were rather poor, so I have been spending time improving matters. The key to this is to check the Kwalitee score of a new release before uploading it to CPAN. Conveniently, the very code used to generate the rankings is available to download and run offline, in the form of the Module-CPANTS-Analyse distribution. Inconveniently this wasn’t in Fedora…until today. I finally got all the dependent modules through review and built for rawhide, with F-8 to follow shortly… So now if you want to test the Kwalitee of your Perl modules before release just run:
# yum install perl-Module-CPANTS-Analyse
# cpants_lint.pl /path/to/my/module.tar.gz
It already helped me get Test-AutoBuild perfect…
$ cpants_lint.pl Test-AutoBuild-1.2.2.tar.gz
Checked dist Test-AutoBuild-1.2.2.tar.gz
Kwalitee rating 112.50% (27/24)
Congratulations for building a 'perfect' distribution!
112.50% you might wonder? Yes, indeed. Only 24 of the checks are considered mandatory at this time; the other 3 are optional, but good practice nonetheless. To get to 100% you need to pass all the mandatory checks. If you manage this, then passing any optional checks will give you an additional boost.
For all my development projects I try to make use of code coverage tools to ensure the test suites are reasonably comprehensive, for example, with Test-AutoBuild I use the excellent Devel-Cover module. The nightly build runs the test suite and publishes a code coverage report giving a breakdown of test coverage for API documentation, functions, statements, and even conditional expressions. The colour coding of coverage makes it possible to quickly identify modules which are lacking coverage and, given knowledge about which modules contain most complexity, limited resources for writing tests can be directed to areas of the code which will have the biggest impact in raising application quality.
When using code coverage, however, one must be careful not to fall into the trap of writing tests simply to increase coverage. There are many aspects of the code which just aren’t worthwhile testing, for example, areas so simple that the time spent writing tests is not offset by a meaningful rise in code quality. More important though is that there is a limit to what source code coverage analysis can tell you about real world test coverage. It is perfectly feasible to have 100% coverage over a region of code and still have serious bugs. The root of the problem is that the system being tested does not operate in isolation: no matter how controlled your test environment is, there are always external variables which can affect your code.
I encountered just such an example last weekend. A few months back I added a comprehensive set of tests for validating the checkout of code modules from Perforce, Subversion, and Mercurial. The code coverage report said: 100% covered. Great, I thought, I can finally forget about this bit of code for a while. And then we passed the Daylight Savings Time shift and all the tests started failing. It turned out that the modules were not correctly handling timezone information when parsing dates while DST was in effect. There is no easy way to test for this other than to run the same test suite over and over under at least four different timezones, for example UTC, BST (GMT+1), EST (GMT-5) and EDT (GMT-4). Just setting $TZ isn’t really enough; to automate reliably I would really need to run the builds on four geographically dispersed servers (or perhaps four Xen instances each running in a different timezone).
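That said, looping the suite over several $TZ values is a cheap first pass, even if weaker than truly separate hosts. A sketch, assuming a Linux host with zoneinfo data installed; run_suite is a stand-in for whatever runs your real tests (e.g. perl Build test):

```shell
# Stand-in for the real test runner; replace with e.g. "perl Build test".
run_suite() {
    date +%Z
}

# Re-run under several timezones, reporting which zone a failure occurred in.
for tz in UTC Europe/London America/New_York; do
    if ! TZ=$tz run_suite; then
        echo "test suite failed under TZ=$tz" >&2
    fi
done
```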
A second example: that no module has hardcoded the path separator is simply impossible to verify within a single run of a test suite. Running the tests on UNIX may give a pass, and 100% coverage, but this merely tells me that no module has used ‘\’ or ‘:’ as a path separator. To validate that no module has used ‘/’ as a path separator the only option is to re-run the test suite on Windows. Fortunately virtualization can come to the rescue again, this time in the form of QEMU, which allows emulation of an x86 CPU.
Going back to the example of checking out code from an SCM server, another problem in Test-AutoBuild (which I must address soon) is ensuring that the different failure conditions in talking to the SCM server are handled. Some of the things which can go wrong include: an incorrect host name, a network outage breaking a connection mid-operation, an incorrect path for the module to check out, or a missing installation of the local SCM client tools. 100% test coverage of the code for checking out a module can’t tell you that there is a large chunk of error handling code missing altogether.
In summary, no matter how comprehensive your test suite is, there is always room for improvement. Think about what code is not there: error handling code. Think about what external systems you interact with and the failure scenarios that can occur. Think about what environmental assumptions you might have made, such as OS path separators. Think about what environmental changes can occur, such as time zones. While code coverage is an incredibly valuable tool for identifying what areas of *existing* code are not covered, only use it to help prioritise ongoing development of a test suite, not as an end goal. There really is no substitute for running the tests under as many different environments as you can lay your hands on. And not having access to a large server farm is no longer an excuse: virtualization (take your pick of Xen, QEMU, UML, and VMWare) will allow a single server to simulate dozens of different environments. The only limit to testing is your imagination…