Test Fluency and Comprehension

Project Hub

Introduction

I want to collect some experiences and thoughts I’ve had while developing code with TDD. This is more about the big aha moments than the nitty-gritty of frameworks, mocks, and so on. The inspiration came during my Etappe project, using the IntelliJ Karma test runner. When I ran a test suite with it, it displayed a nicely indented synopsis of the tests that were run: the ones that passed, the ones that failed, and the ones that were skipped.

This inspired me to start thinking about test structure, and a talk by Kevlin Henney that I watched recently confirmed my thinking nicely:

Structure and Interpretation of Test Cases

Test code maps to structured output report

As an example, here’s a small spec, written in JavaScript and run with Karma. Note that the expectations are omitted to emphasize the structure.

describe('domain agency', () => {
    describe('create', () => {
        describe('validation error', () => {
            it('when no name', () => { ... });
            it('when no api', () => { ... });
        });
        describe('value', () => {
            it('is type Agency', () => { ... });
            it('has name', () => { ... });
            it('has api', () => { ... });
        });
    });
    describe('get all', () => {
        it('agencies', () => { ... });
    });
});

When we run this skeleton, the report output would be:

  domain agency
    create
      validation error
        ✓ when no name
        ✓ when no api
      value
        ✓ is type Agency
        ✓ has name
        ✓ has api
    get all
      ✓ agencies

Note how comprehensible the report is.

Good organization of the nested specs makes it easy to see what’s been tested. When adding a new spec, it’s easy to find the group it belongs to, or the place to insert a new group. Organization is a bit of an art, and it’s rarely perfect, but even a little of it is better than none.

In a way this is TDD applied to TDD. One starts by visualizing what the output of a test run should look like, then writes initial skeleton code that produces a nicely formatted test report. The skeleton has no assertions; it follows the TDD approach of starting out by faking the results.
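To make the mechanics concrete, the indented report is essentially a depth-first walk of the describe tree. Here is a toy, framework-free sketch of a describe/it runner in plain Node — not Karma’s or Jasmine’s actual implementation, just an illustration of how nesting becomes indentation:

```javascript
// Toy describe/it runner: `describe` tracks nesting depth and
// `it` runs the spec, printing ✓ or ✗ at the current depth.
// Illustrative only; real frameworks do much more.
let depth = 0;
const lines = [];

function describe(name, body) {
    lines.push('  '.repeat(depth) + name);
    depth += 1;
    body();       // run nested describes/its one level deeper
    depth -= 1;
}

function it(name, spec) {
    let mark = '✓';
    try { spec(); } catch (e) { mark = '✗'; }
    lines.push('  '.repeat(depth) + mark + ' ' + name);
}

describe('domain agency', () => {
    describe('get all', () => {
        it('agencies', () => { /* no assertions yet: passes */ });
    });
});

console.log(lines.join('\n'));
// prints:
// domain agency
//   get all
//     ✓ agencies
```

Note that the empty spec body passes, which is exactly the “fake the results” starting point: the structured report exists before any real assertions do.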

Comprehension

A comprehensible test will produce a comprehensible test report.

Fluency

How can the test report be made comprehensible?

The tree structure can be flattened by reading out from the root to the leaf. For example, in the above test report, one could read:

domain agency : create : value : is type Agency

This can be restated as “[the] domain ‘agency’ create[s] [a] value [that] is type ‘Agency’.”
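This root-to-leaf reading can be mechanized: it is simply a join of the describe names down to each leaf `it`. A small illustrative sketch — the tree shape here is hypothetical, not a real framework API:

```javascript
// Flatten a nested spec tree into root-to-leaf sentences,
// joining the describe/it names with ' : ' as in the report.
function flatten(node, path = []) {
    const here = [...path, node.name];
    if (!node.children) return [here.join(' : ')];  // leaf: an `it`
    return node.children.flatMap(child => flatten(child, here));
}

// A fragment of the example spec tree from above.
const tree = {
    name: 'domain agency',
    children: [
        { name: 'create', children: [
            { name: 'value', children: [
                { name: 'is type Agency' },
            ]},
        ]},
    ],
};

console.log(flatten(tree));
// [ 'domain agency : create : value : is type Agency' ]
```

Reading these flattened paths aloud is a quick fluency check: if a path does not form a sensible sentence about the domain, the grouping probably needs work.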

Spec report example

The Etappe Karma report has about 200 specs arranged in a comprehensible nested test report. From a developer’s point of view it is effectively a unit-test suite, but note that the report reads like a functional spec and uses the language of the domain model.

Compare this with typical unit-test reports, which seldom read this clearly.

Conclusion

We can write comprehensible tests by acquiring fluency in the structure of a test report. When we can get the test report to tell a story about the domain, we have reached fluency. We can use meta-TDD to write comprehensible tests by expressing the expectation before designing the implementation.

Example code

There’s a rather extensive example in etappe, a transit trip planner. It demonstrates using Karma for JavaScript unit testing in the browser and Protractor for integration testing. The latter makes use of Mountebank, a lovely Node.js package for mocking web API services.

A current work in progress is mcu-gif, a TDD approach to GIF decoding for MCUs. I found the dusty but interesting ccspec framework, which applies the describe/it paradigm to C/C++ testing. This has allowed me to carry the lessons from the JavaScript Etappe project over to a C/C++ project.