SWC2013TDD » History » Version 8

Chris Cannam, 2013-02-05 08:45 PM


Test-driven development outline

We assume that the "intro to Python" section has at least introduced how to run a Python program and compare its output against an external source of "correct" results, and that the NumPy/audiofile section has shown how to read an entire (mono) audio file into a NumPy array.

Motivation

We'll refer first back to the "intro to Python" example, with the text file of dates and observations.

Date,Species,Count
2012.04.28,marlin,2
2012.04.28,turtle,1
2012.04.28,shark,3
# I think it was a Marlin... luis
2012.04.27,marlin,4

We have our program that prints out the number of marlin.

$ python count-marlin.py
2
$
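The marlin-counting program itself might look something like the following sketch. The function name count_species, the file name fish.csv, and the exact counting rule (count data rows whose Species field is "marlin", skipping the header and # comment lines) are assumptions for illustration, not the lesson's actual code; here the sample data is also written out so the sketch runs on its own.

```python
# Sample data from the lesson, written to a file so this sketch is
# self-contained.
SAMPLE = """\
Date,Species,Count
2012.04.28,marlin,2
2012.04.28,turtle,1
2012.04.28,shark,3
# I think it was a Marlin... luis
2012.04.27,marlin,4
"""

with open("fish.csv", "w") as f:
    f.write(SAMPLE)

def count_species(filename, species):
    """Count the data rows whose Species column equals `species`,
    skipping the header line, blank lines, and # comment lines."""
    count = 0
    with open(filename) as f:
        next(f)  # skip the "Date,Species,Count" header
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            if line.split(",")[1] == species:
                count += 1
    return count

print(count_species("fish.csv", "marlin"))  # prints 2
```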

We can check this against some human-generated output, or against the result of "grep" or similar if the program is simple enough, to see whether it produces the right result. But what if we change the program to add a new feature? Will we remember to check all the old behaviour as well and make sure we haven't broken it? What if the program as a whole is so complex and subtle that we don't actually know what its output should be?

We need to do two things:

  1. automate the tests, and
  2. make sure we test the individual components that the program is made up of (so we can be confident of its behaviour even when we don't know what the program as a whole should produce).

Automating a test

Start with a simple program that uses assert: it calls the fish counter for a known file and checks the output.
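Such a program might look like the following sketch. In the lesson the counting code would be imported from the marlin-counting program; a stand-in counter is inlined here so the sketch runs on its own, and all names (count_species, test-data.csv) are assumptions.

```python
def count_species(filename, species):
    # Stand-in for the function the marlin-counting program would
    # provide; counts data rows whose Species field matches.
    with open(filename) as f:
        next(f)  # skip the header line
        return sum(1 for line in f
                   if line.strip() and not line.startswith("#")
                   and line.split(",")[1] == species)

# Write a small file with known contents...
with open("test-data.csv", "w") as f:
    f.write("Date,Species,Count\n"
            "2012.04.28,marlin,2\n"
            "2012.04.28,turtle,1\n"
            "2012.04.27,marlin,4\n")

# ...and assert that the counter gives the answers we expect.
# If an assertion fails, Python raises AssertionError and stops.
assert count_species("test-data.csv", "marlin") == 2
assert count_species("test-data.csv", "turtle") == 1
print("all tests passed")
```

Running this by hand replaces the manual eyeballing of output: the checks either all pass silently or fail loudly.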

We're starting to automate things. We can make it more convenient by using nosetests, which runs all the functions whose names begin with test_ that it finds in files whose names begin with test_, in the current directory and its subdirectories (searched recursively).

Split this out thus, and run it using nosetests.
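Split out this way, the checks might live in a file such as test_count.py (a hypothetical name). In the lesson the counter would be imported from the marlin-counting program (e.g. "from count_marlin import count_species"); a stand-in is inlined here so the sketch runs on its own.

```python
# test_count.py: nosetests discovers files and functions whose names
# begin with test_, runs each test function, and reports any failures.

def count_species(filename, species):
    # Stand-in for the imported marlin-counting function.
    with open(filename) as f:
        next(f)  # skip the header line
        return sum(1 for line in f
                   if line.strip() and not line.startswith("#")
                   and line.split(",")[1] == species)

def write_test_data():
    # Helper: write a small file with known contents.
    with open("test-data.csv", "w") as f:
        f.write("Date,Species,Count\n"
                "2012.04.28,marlin,2\n"
                "2012.04.27,marlin,4\n"
                "2012.04.28,turtle,1\n")

def test_marlin_count():
    write_test_data()
    assert count_species("test-data.csv", "marlin") == 2

def test_turtle_count():
    write_test_data()
    assert count_species("test-data.csv", "turtle") == 1
```

Running "nosetests" in the directory containing this file runs both test functions and prints a summary of passes and failures.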

Testing units, and test-driven development