h1. Wiki

h2. Requirements

Third day of a three-day Software Carpentry bootcamp. Hours are approximately 0900-1630.

Should consist of working through introductory examples of audio processing techniques in Python, with small-group exercises for self-guided learning (helpers should be available in the room). Topics of interest should include reading audio data from files, synthesising signals and playing them, saving results back to audio files, simple filters, simple analysis techniques, and interactive plotting.
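
As a rough sketch of the kind of opening example this might involve (assuming NumPy and SciPy are available; the filename and tone parameters here are arbitrary):

<pre><code class="python">
# Sketch: synthesise one second of a 440 Hz sine tone and save it
# as a WAV file (assumes NumPy and SciPy are installed)
import numpy as np
from scipy.io import wavfile

rate = 44100                              # sample rate in Hz
t = np.arange(rate) / float(rate)         # one second of sample times
tone = 0.5 * np.sin(2 * np.pi * 440 * t)  # 440 Hz sine, half amplitude

# Scale to 16-bit integers, the most widely supported WAV sample format
wavfile.write('tone.wav', rate, (tone * 32767).astype(np.int16))

# Reading it back returns the sample rate and the sample data
rate_in, data = wavfile.read('tone.wav')
</code></pre>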

The material will need to follow on from the subjects covered in the first two days of the bootcamp, such as unit testing and test-driven development, writing readable code, use of version control (we'll be using Mercurial in the bootcamp, but the principles should mostly be general ones), and selection and use of the most appropriate existing Python modules.

h3. Software Requirements

* Current Software Requirements page
* Software installation help wiki page
* [[SWCAudioDaySoftware|Audio Day Software]]

h2. Related Links

* Bootcamp announcement: http://soundsoftware.ac.uk/york2012-bootcamp
* Software Carpentry site: http://software-carpentry.org
* Becky's audio presentation: http://software-carpentry.org/4_0/media/audio/

h2. Open questions

Can we come up with some good examples of simple audio processing problems that readily admit test-driven solutions? I think there will be a lot of interest in the question of how to apply automated testing and unit tests to audio research software.

Any examples we use will need to be simple enough to be worked through by people who have not necessarily worked on that specific aspect of audio programming before, but relevant enough that they give a clue about where to begin when working on their own problems subsequently.

* "Here":https://code.soundsoftware.ac.uk/projects/cepstral-pitchtracker/repository/entry/test/TestPeakInterpolator.cpp is an example with some basic unit tests for a peak interpolation method (in C++ with Boost) -- I think we can do better, but with what?
* Other things we talked about: testing FFTs using 4 points and known inputs/outputs (e.g. a pure cosine), and how to unit test a low-pass filter using expected outputs -- see the sketch after this list
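
A minimal sketch of both of those ideas, using NumPy's FFT as a stand-in for the implementation under test and a two-point moving average as the simplest possible low-pass filter; the expected outputs are worked out by hand:

<pre><code class="python">
# Sketch: small, fast unit tests with hand-calculated expected outputs
import unittest
import numpy as np

class TestAudioBasics(unittest.TestCase):

    def test_fft_of_pure_cosine(self):
        # cos(2*pi*n/4) for n = 0..3 is [1, 0, -1, 0]; its 4-point DFT
        # has the value 2 in bins 1 and 3 and zero elsewhere
        x = np.cos(2 * np.pi * np.arange(4) / 4.0)
        np.testing.assert_allclose(np.fft.fft(x), [0, 2, 0, 2], atol=1e-12)

    def test_lowpass_attenuates_nyquist(self):
        # A two-point moving average passes DC unchanged and cancels
        # an alternating (Nyquist-frequency) signal completely
        lowpass = lambda x: np.convolve(x, [0.5, 0.5], mode='valid')
        np.testing.assert_allclose(lowpass(np.ones(8)), np.ones(7))
        np.testing.assert_allclose(lowpass(np.array([1.0, -1.0] * 4)),
                                   np.zeros(7), atol=1e-12)

if __name__ == '__main__':
    unittest.main()
</code></pre>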

h3. Things worth bearing in mind

* Validation vs. verification. Validation is what the evaluation section of a typical paper does -- tries to establish how well the method / model reflects reality. Verification is trying to establish whether the code implements the method or model at all. Unit testing is a code verification tool.
* That means that, in a sense, we aren't really testing the method: we're testing the implementation and the API. There is no expectation of testing with "real-world" data. Tests should be small (and take next to no time to run).
* We have to accept some limitations because of this, and make sure we test at the right level. For example, we can't meaningfully test an FFT _library_ using only 4-point FFTs, because the library may select a totally different implementation depending on the FFT size. But we _can_ do some meaningful testing of a single implementation that way, or of a wrapper library (e.g. Python module calling out to FFTW).
* A standard for whether our testing is good enough: can I swap in a totally different implementation later, and know that it still works? In some cases this may mean we don't want to test too strictly for exact values: for example, the peak interpolator above could legitimately return different results depending on which interpolation method is used (see the sketch after this list).
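
To illustrate that last point, here is a sketch of an implementation-agnostic test. The @interpolate_peak@ function is a hypothetical stand-in (parabolic interpolation, for concreteness); rather than pinning exact output values, the test checks properties that any reasonable interpolation method should satisfy, so a different implementation could be swapped in without breaking it:

<pre><code class="python">
# Sketch: implementation-agnostic tests for a peak interpolator.
# interpolate_peak is a hypothetical stand-in, using parabolic
# interpolation through a discrete peak and its two neighbours.
import numpy as np

def interpolate_peak(values, peak_index):
    a, b, c = values[peak_index - 1 : peak_index + 2]
    offset = 0.5 * (a - c) / (a - 2 * b + c)
    return peak_index + offset, b - 0.25 * (a - c) * offset

def test_peak_properties():
    values = np.array([0.0, 0.6, 1.0, 0.9, 0.2])
    position, height = interpolate_peak(values, 2)
    # The refined peak should lie within one sample of the discrete
    # peak, and should be at least as high as the sampled maximum:
    # true of any sensible interpolation method, not just this one
    assert 1.0 < position < 3.0
    assert height >= values[2]
</code></pre>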

h3. Misc meeting notes etc

* [[31Aug2012|31st August 2012]]