

The Side Effect Benefit of 100% Statement Coverage

I've noticed a side-effect benefit of requiring 100% code statement coverage within the open source projects I manage.

I don't like reading sanctimonious treatises on testing either. This is only partly one of those.

First of all, I hereby resolve to buy Ned Batchelder a case of his favorite for all his excellent work on coverage. This tool measures statement coverage while some program runs. nose has helpful bindings to this tool, which Chris Perkins pointed me at one day, so I probably owe the nose guys and Chris at least one beer apiece too.
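To make "measures statement coverage while some program runs" concrete, here is a toy sketch of the underlying idea using Python's trace hook. The `clamp` function is a made-up example, and this is nothing like Ned's actual implementation (which handles multiple files, reporting, exclusions, and much more); it just shows what "which statements ran" means:

```python
import sys

def traced_lines(func, *args):
    """Toy statement tracer: record the line numbers executed inside func."""
    executed = set()
    def tracer(frame, event, arg):
        # Only record 'line' events that occur in func's own code object.
        if frame.f_code is func.__code__ and event == "line":
            executed.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def clamp(x, lo, hi):
    if x < lo:
        x = lo
    return x

# The happy-path call never enters the if body, so one statement is missed;
# the second call executes it, covering all three statements between them.
only_happy = traced_lines(clamp, 5, 0, 10)
with_low = traced_lines(clamp, -1, 0, 10)
```

A real coverage tool does essentially this across every file in your package, then reports the set of lines that never showed up.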

Because statement coverage is so easy to measure using Ned's coverage tool, and because it integrates so well with unittest via nose, I have resolved that the open source code I release will all have 100% statement coverage when its tests are run.

As far as I'm concerned, 100% statement coverage is the "least you can do" to make sure the code is of a particular quality; it doesn't mean the code does what it's supposed to, but it does mean that the author has probably grokked most of the code, and hopefully, by extension, the problem domain, in order to figure out how to test all of its statements.
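A tiny illustration of why covering the last few statements forces you to grok the code: a happy-path test alone can leave error-handling statements unexecuted. A minimal sketch, with a hypothetical `parse_port` function:

```python
def parse_port(value):
    """Parse a TCP port number, rejecting out-of-range values."""
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError("port out of range: %d" % port)
    return port

# A happy-path test alone leaves the raise statement unexecuted; getting
# to 100% statement coverage forces you to test the failure mode too,
# which means understanding what inputs are actually invalid.
assert parse_port("8080") == 8080
try:
    parse_port("70000")
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError")
```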

I promised that this wouldn't be entirely cheerleading for testing or a chiding wag of the finger pointed at people who do not test. You can decide for yourself the direct quality benefits of 100% statement coverage via tests; I can only say it works for me. Let's instead consider a big side effect benefit.

With a 100% test coverage invariant, you can reject poor-quality patches with less subjectivity.

A common sort of patch received by open source project maintainers is the "paper towel roll" patch. It's a patch that was coded while its author looked through a paper towel roll at some very specific bit of code in your larger system. The patch is wrong, but its author does not have enough context to know why: it patches some subsystem in a way that doesn't make any sense when the entirety of the system is considered. It works, but applying it as-is would be disastrous on some level (documentation requirements, conceptual integrity, code cleanliness, additional unwanted software dependencies, etc.).

Paper-towel-roll patches are tricky to deal with as an open source maintainer. Do you throw it away? Do you add the feature implied by the patch in "the right way"? How do you deal with the original submitter? How pissed off is he going to be if you reject it out-of-hand without trying to help him implement it in the right way? Do you even want the feature? Even if you're +0 or +1 on the feature, do you have enough time to deal with doing it properly?

One handy attribute of paper-towel-roll patches is this: they usually come without any tests. We can turn this obvious deficiency into at least one immediate advantage if we require 100% statement coverage.

Without a 100% coverage invariant, your only recourse to reject a poor paper-towel-roll patch will be to say "hey, this patch is pretty terrible", which is obviously subjective and can lead to pointless, time-consuming arguments. It may also require that you explain in great detail the system from end-to-end to the submitter, just so he or she will understand why it's a bad patch. This takes a long time, and the payoff rate is low, because often the submitter is a one-timer: he won't be back to submit patch #2.

With the 100% coverage invariant, you can instead use the less provocative request "this patch doesn't maintain the 100% statement test coverage invariant, could you fix this?" The requirement is unambiguous: either 100% of the statements in the package are executed when the tests run or they're not. You can't argue with a percentage. We've taken the subjectivity out of the initial contact with the patch submitter.

Insisting that a patch maintains the invariant is usually enough to scare off extremely casual paper-towel-roll patch submitters who've submitted something insane which in reality you just don't really want to even think about. This ability to reject a patch without wasting any time is the 99% benefit. Nobody gets their ego bruised, no time is wasted, and your code is still of high quality.

But the submitter might have enough tenacity to write the tests after you ask him to. This will have one of two outcomes:

  • the act of needing to write the tests will give the patch submitter enough context to do it correctly, so the second time around you will receive a patch that is both correct and has tests.
  • the patch submitter may come back with an equally awful solution that just happens to have 100% statement coverage.

This is risky too: he's gone through all the hoop-jumping work and you still have to have "the talk" with him. I haven't really figured out how to deal with this. On the positive side, though, it does mean you've identified somebody who has the tenacity to do something right, and has a high likelihood of submitting more patches in the future. This is extremely good.

Created by chrism
Last modified 2009-10-21 11:56 AM

*Statement* coverage

Hi Chris,

What exactly do you mean by "statement coverage"? I mean, what falls outside of "statement coverage"? I'm busy bringing the coverage of one of my tools waaaay up and I've got a few at 100%, so I'm interested :-)


Second outcome is a good place to be

I think the second outcome is where you want to be, if you have to deal with this sort of thing (which obviously, you do). The likelihood that the re-submission (with test(s)) is crap is lower than it was with the first submission, and the likelihood the submitter is willing to fix mistakes after the second submission is high. At this point, you may face having to give a "-1" for other reasons (e.g. you just don't like the new feature / bug fix / whatever) but you'll have a much better time with it when it is *just* about the feature/bugfix/whatever, not the crappy code. (Also, I think "Hey! This patch needs a test! Go away, and don't come back until you have one!" is a perfectly reasonable thing to say to a developer nowadays, so crack that whip! ;-)

Funny (if it wasn't sad)

This is a really funny description of a situation I have tried to deal with many times. Unfortunately I never had the "test coverage defense" and I have to assume I upset many people by rejecting submissions the "subjective way".


Considered figleaf?

I'm not saying it's better because I haven't tried it. Have you?

I attended Ned's talk about coverage at PyCon 2009 (great talk, Ned!), and he did stress the importance of not always expecting 100% due to the complexity of some code. You might get to 100% only by reading your test code with common sense.


I haven't tried figleaf, no.

I think I'll have to disagree with Ned about not being able to reasonably achieve 100% statement coverage for "frameworky" components that have a well-understood design and architecture (like a web framework). You can always get there, it just takes work. OTOH, for application code that changes very quickly (maybe every day, as new experiments are tried, or as architecture-y refactoring goes on while you're trying to figure out the problem domain), I'd agree: immediate 100% coverage may have diminishing returns.


Statement coverage just means that all the statements in all the Python files that compose some package (or set of packages) are executed. Effectively, 100% coverage here means "all lines are executed". There is also branch coverage and other more exotic forms of coverage testing. Ned's PyCon presentation, where he discusses some of these, is online; it's worth a listen.
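A small made-up example of the difference: the function below reaches 100% statement coverage from a single test, yet one branch is never exercised.

```python
def describe(n):
    label = "number"
    if n < 0:
        label = "negative " + label
    return label

# A single test with a negative argument executes every statement --
# 100% statement coverage -- yet the n >= 0 path, where the if body is
# skipped, is never tested. Branch coverage would flag that gap;
# statement coverage cannot see it.
assert describe(-1) == "negative number"
```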