h1. Test-driven development outline
We assume that the "intro to Python" section has at least introduced how to run a Python program and compare its output against an external source of "correct" results, and that the NumPy/audiofile section has shown how to read an entire (mono) audio file into a NumPy array.
h2. Motivation
We'll refer first back to the "intro to Python" example, with the text file of dates and observations.
<pre>
Date,Species,Count
2012.04.28,marlin,2
2012.04.28,turtle,1
2012.04.28,shark,3
# I think it was a Marlin... luis
2012.04.27,marlin,4
</pre>
We have our program that prints out the number of marlin.
<pre>
$ python count-marlin.py
2
$
</pre>
We can check this against some human-generated output, or against the result of @grep@ or similar if the program is simple enough, to see whether it produces the right result. But what if we change the program to add a new feature -- will we remember to re-check all the old behaviour as well, to make sure we haven't broken it? And what if the program as a whole is so complex and subtle that we don't actually know what its output should be?
We need to do two things:
# automate the tests, and
# make sure we test the individual components that the program is made up of (so we can be confident of its behaviour even when we don't know what the program as a whole should produce)
h2. Automating a test
Start with a simple program that uses @assert@: it calls the fish counter for a known file and checks the output.
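A minimal sketch (assuming the counting logic from @count-marlin.py@ has been moved into an importable module @count_marlin.py@ with a function @count_species(filename, species)@ -- both names, and the data file name @fish.csv@, are invented here for illustration):
<pre>
from count_marlin import count_species

# count_species and fish.csv are hypothetical names; the known
# file should yield the same answer the program printed above
assert count_species("fish.csv", "marlin") == 2
print "ok"
</pre>
If the assertion fails, Python raises an @AssertionError@ and the script stops there; if it passes, we see "ok".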
We're starting to automate things. We can make it more convenient by using @nosetests@, which runs every function it finds named @test_@-something in every file named @test_@-something, searching the current directory and its subdirectories recursively.
Split the checks out into such a file, and run them using @nosetests@.
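For example (again using the hypothetical @count_species@ and @fish.csv@ from above), a file called @test_count.py@ might contain:
<pre>
from count_marlin import count_species

def test_marlin():
    # found and run automatically by nosetests, because both the
    # file name and the function name begin with "test_"
    assert count_species("fish.csv", "marlin") == 2

def test_turtle():
    assert count_species("fish.csv", "turtle") == 1
</pre>
Running @nosetests@ in that directory then discovers and runs both functions, reporting any failures.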
h2. Testing units, and test-driven development
So suppose we have a program that loads data from an audio file, like
<pre>
import scikits.audiolab as al

# Open the file and read all of its frames into a NumPy array
sfile = al.Sndfile("testfiles/beatbox.wav")
count = sfile.nframes
samples = sfile.read_frames(count)
</pre>
and then does something with @samples@.
Now, for a lot of methods -- particularly spectral domain ones -- the first thing we want to do is chop up @samples@ into frames of a fixed length (1024 is a popular number), either overlapping or non-overlapping. (Draw diagram on whiteboard)
In this case the file has 253929 samples (it's mono, so @nframes@ counts individual samples). At 1024 samples per frame, how many frames do we expect this file to divide into, if the frames are not overlapping?
253929/1024 comes out (integer division) as 247. But 247 * 1024 = 252928, which leaves 1001 samples over: should those form a 248th, partial frame? Is 247 the right answer? I've no idea!
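This is where writing the test first helps: we can pin down what we mean, using inputs small enough to reason about by hand, before the framing code even exists. A sketch, assuming a not-yet-written module @frame.py@ providing @frames(samples, size)@ (both names invented here), and choosing -- a design decision the test makes explicit -- to keep a final partial frame:
<pre>
import numpy as np
from frame import frames   # hypothetical: this module doesn't exist yet

def test_exact_fit():
    # 8 samples at 4 per frame: exactly 2 non-overlapping frames
    assert len(frames(np.arange(8), 4)) == 2

def test_leftover():
    # 10 samples at 4 per frame: we choose to keep the final
    # partial frame, so we expect 3 frames rather than 2
    assert len(frames(np.arange(10), 4)) == 3
</pre>
Once these pass, applying the same function to our 253929 samples tells us whether the answer we meant was 247 or 248.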