November 30th, 2012

Have You Tested Your Tests?

by Thomas Bradford

"Which came first, the chicken or the egg?" is the classic causality dilemma where your head will explode if you think about it for too long. It has an analog in software development that we often don’t think about: "Have you tested your tests?" What about the tests of your tests, have you tested those? It seems silly, but it’s really something that one needs to think about when approaching the maintenance of a test suite and its associated framework or scaffolding.

Maybe most developers are doing it right. They keep their test scaffolding so minimalistic that there’s no overwhelming reason to do exhaustive tests of the tests themselves. But when a scaffolding becomes more complex than the code it’s testing, you have to ask yourself some questions – "Do we write tests to make sure these tests are testing correctly?" or "Have we let this situation get completely out of control?"

Code becomes complex both rapidly and exponentially, and all but the most trivial of programs can benefit from test coverage. What do we mean by trivial?

function assertTrue(value) {
    if ( !value ) {
        throw new Error("Value is not true");
    }
}

This test is an example of a trivial function. Does it require test coverage? No, because there is no variability in its state. You pass in a value, it evaluates it and possibly throws an error. But what if you added variability?

var requiredCondition = true;

function assertRequiredCondition(value) {
    if ( value != requiredCondition ) {
        throw new Error("Value not equal to " + requiredCondition);
    }
}

You’ve just increased the complexity of this test by 100% simply by making it dependent on external state. Now you have to be sure that your test setup pre-populates requiredCondition with the correct value. That probably doesn’t require a test of its own – but what if requiredCondition depends on the result of a database query?
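A minimal sketch of that scenario (the fetchRequiredCondition helper and the fakeDb object are hypothetical, not from any real library): once the expected value comes from a query, the scaffolding must seed or stub that query before every assertion, and that seed data becomes a second place for the test itself to be wrong.

```javascript
// Hypothetical: requiredCondition now comes from a database query,
// so the scaffolding must seed a database (here, a stub object)
// before any assertion can run.
function fetchRequiredCondition(db) {
    // In real code this would be an actual query; here the "database"
    // is just an object handed in by the test scaffolding.
    return db.settings.requiredCondition;
}

function assertRequiredCondition(db, value) {
    var requiredCondition = fetchRequiredCondition(db);
    if ( value != requiredCondition ) {
        throw new Error("Value not equal to " + requiredCondition);
    }
}

// The scaffolding now has to guarantee this seed data is correct –
// get it wrong and every test that relies on it lies to you.
var fakeDb = { settings: { requiredCondition: true } };
assertRequiredCondition(fakeDb, true);   // passes
```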

It only gets worse…

So where does it end? If you continue down this path to its worst possible conclusion, you end up with a scaffolding that is more complex than the code it’s testing, and when that happens, unless that scaffolding also has an exhaustive test suite of its own, you can no longer rely on the validity of your tests. So don’t do that!

We accept that a test scaffolding requires some complexity to perform its tasks, including database and network access. But it’s important to make that scaffolding as minimalistic as possible – doing only what is absolutely necessary and nothing more – so that you can focus on what you’re actually testing, your product, instead of spending an inordinate amount of time fiddling with the scaffolding when things go awry.

A Few Tips

Trust your test framework

If you’re using a testing library, trust that its authors knew what they were doing, and don’t alter its behavior in ways that might break the world on future updates. You don’t need to write your own assert() functions if someone has already done it for you; live with what’s provided if it meets at least 80% of your requirements. Also, don’t monkey-patch it! The reasons for this should be obvious.

Avoid tests that do too much

Sometimes many smaller tests are preferable to single monster tests. Include only the steps necessary to test a single behavior, and once it’s tested, back out and run another test. Don’t continue within the same test to evaluate other behaviors, even if they’re related, because that may not always be the case and it only increases the variability and complexity of the individual test.
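To make that concrete, here is a sketch using a hypothetical minimal test() helper (not a real framework): instead of one monster test walking through push, pop, and everything else in sequence, each behavior gets its own test with its own fresh state.

```javascript
// Hypothetical minimal test() helper, just to illustrate structure.
function test(name, fn) {
    try {
        fn();
        console.log("PASS " + name);
    } catch (e) {
        console.log("FAIL " + name + ": " + e.message);
    }
}

function assertEqual(actual, expected) {
    if (actual !== expected) {
        throw new Error(actual + " !== " + expected);
    }
}

// One behavior per test, each starting from its own fresh state.
test("push adds an element", function () {
    var stack = [];
    stack.push(1);
    assertEqual(stack.length, 1);
});

test("pop returns the last element", function () {
    var stack = [1, 2];
    assertEqual(stack.pop(), 2);
});
```

If the two behaviors ever diverge, only the test for the broken one fails, which tells you far more than a single combined test ever could.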

And most importantly…

Ensure isolation

setUp and tearDown imply that you’re building something and destroying it afterward. If your test scaffolding tries to be clever by caching data for faster access, then not only are you increasing complexity, but you can be assured that you’ll eventually miss something and be testing invalid state on subsequent tests.

How do you know when you’ve broken isolation? It’s when you’re confused as to why a test runs fine on its own, but fails when run as part of the larger suite.

Yes, maybe caching makes your tests execute faster, but you’re not trying to create fast tests; you’re trying to create correct tests. Slower performance during the testing phase is an acceptable tradeoff if your ultimate goal is a quality product.
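The setUp/tearDown discipline described above can be sketched as follows (the fixture and test names are hypothetical): rebuild everything from scratch for every test, so one test’s mutations can never leak into the next.

```javascript
// Sketch of per-test isolation with setUp/tearDown (names follow the
// xUnit convention; the fixture itself is hypothetical).
var fixture;

function setUp() {
    // Rebuild the fixture from scratch for every test – no caching,
    // so no stale state can survive between tests.
    fixture = { users: ["alice"] };
}

function tearDown() {
    fixture = null;
}

function testAddUser() {
    fixture.users.push("bob");
    if (fixture.users.length !== 2) throw new Error("expected 2 users");
}

function testInitialState() {
    // Would fail if testAddUser's "bob" had been cached between tests.
    if (fixture.users.length !== 1) throw new Error("expected fresh fixture");
}

[testAddUser, testInitialState].forEach(function (t) {
    setUp();
    t();
    tearDown();
});
```

Run the tests in the opposite order and they still pass – the hallmark of proper isolation.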

Anything else?

Any other tips? Please feel free to share them in the Comments section.



Posted in Development