Requirements

Third day of a three-day Software Carpentry bootcamp. Hours are approximately 0900-1630.

The day should consist of working through introductory examples of audio processing techniques in Python, with small-group exercises for self-guided learning (helpers should be available in the room). Topics of interest include reading audio data from files, synthesising signals and playing them, saving results back to audio files, simple filters, simple analysis techniques, and interactive plotting.
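
For flavour, here is a minimal sketch of the synthesise/save/read round trip, assuming NumPy and SciPy (scipy.io.wavfile) end up among the modules we choose; the file name and tone parameters are just illustrative:

    import numpy as np
    from scipy.io import wavfile

    rate = 44100                                 # sample rate in Hz
    t = np.arange(rate) / rate                   # one second of sample times
    tone = 0.5 * np.sin(2 * np.pi * 440.0 * t)   # 440 Hz sine, half amplitude

    # Scale to 16-bit integers before writing, the most portable WAV format
    wavfile.write("tone.wav", rate, (tone * 32767).astype(np.int16))

    # Reading it back returns the sample rate and the sample array
    rate_in, data = wavfile.read("tone.wav")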

The material will need to follow on from the subjects covered in the first two days of the bootcamp: unit testing and test-driven development, writing readable code, use of version control (we'll be using Mercurial in the bootcamp, but the principles should mostly be general ones), selection and use of the most appropriate existing Python modules, and so on.

Software Requirements

Related Links

Open questions

Can we come up with some good examples of simple audio processing problems that readily admit test-driven solutions? I think there will be a lot of interest in the question of how to apply automated testing and unit tests to audio research software.

Any examples we use will need to be simple enough to be worked through by people who have not necessarily worked on that specific aspect of audio programming before, but relevant enough that they give a clue about where to begin when working on their own problems subsequently.

  • Here is an example with some basic unit tests for a peak interpolation method (in C++ with Boost) -- I think we can do better, but with what?
  • Other things we talked about: testing FFTs using 4 points and known inputs/outputs (e.g. a pure cosine); how to unit test e.g. a low-pass filter using expected outputs. Both ideas are sketched after this list.
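
A minimal sketch of those two ideas, assuming NumPy is available; np.fft.fft stands in for whichever FFT implementation or wrapper we end up testing, and lowpass() is a hypothetical two-point moving average standing in for a real filter under test:

    import unittest
    import numpy as np

    def lowpass(x):
        # Hypothetical example filter: two-point (circular) moving average.
        # Passes DC unchanged and removes the Nyquist frequency entirely.
        return 0.5 * (x + np.roll(x, 1))

    class TestSmallKnownValues(unittest.TestCase):
        def test_fft_pure_cosine(self):
            # One cycle of a cosine over 4 samples is [1, 0, -1, 0];
            # its DFT has all energy in bins 1 and 3, with value 2 each.
            x = np.cos(2 * np.pi * np.arange(4) / 4)
            np.testing.assert_allclose(np.fft.fft(x), [0, 2, 0, 2], atol=1e-12)

        def test_lowpass_passes_dc(self):
            # A constant signal should come through unchanged.
            np.testing.assert_allclose(lowpass(np.ones(8)), np.ones(8))

        def test_lowpass_removes_nyquist(self):
            # The alternating signal [1, -1, 1, -1, ...] is the Nyquist
            # frequency; the averager should cancel it to zero.
            x = np.array([1.0, -1.0] * 4)
            np.testing.assert_allclose(lowpass(x), np.zeros(8), atol=1e-12)

    if __name__ == "__main__":
        unittest.main()

Because the inputs are tiny and the expected outputs can be worked out by hand, tests like these run in next to no time, which matters for the points below.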

Things worth bearing in mind

  • Validation vs. verification. Validation is what the evaluation section of a typical paper does -- it tries to establish how well the method or model reflects reality. Verification tries to establish whether the code implements the method or model at all. Unit testing is a code verification tool.
  • That means that, in a sense, we aren't really testing the method: we're testing the implementation and the API. There is no expectation that we test using "real-world" data, and tests should be small (and take next to no time to run).
  • We have to accept some limitations because of this, and make sure we test at the right level. For example, we can't meaningfully test an FFT library using only 4-point FFTs, because the library may select a totally different implementation depending on the FFT size. But we can do some meaningful testing of a single implementation that way, or of a wrapper library (e.g. a Python module calling out to FFTW).
  • A standard for whether our testing is good enough: can I swap in a totally different implementation later, and know that it still works? In some cases this may mean we don't want to test too strictly for exact values: for example, the peak interpolator above could legitimately return different results depending on which interpolation method is used. (A sketch of such an implementation-agnostic test follows this list.)
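
To illustrate that last point, here is a sketch of tests that deliberately avoid pinning down exact values. interpolate_peak() is a hypothetical parabolic interpolator standing in for the code under test; the properties tested should hold for any reasonable interpolation method, so the implementation could be swapped out without rewriting the tests:

    import unittest

    def interpolate_peak(values, i):
        # Hypothetical stand-in for the code under test: refine the peak
        # at integer bin i by fitting a parabola through its neighbours,
        # returning a fractional bin index.
        a, b, c = values[i - 1], values[i], values[i + 1]
        return i + 0.5 * (a - c) / (a - 2 * b + c)

    class TestPeakInterpolation(unittest.TestCase):
        def test_symmetric_peak_is_not_moved(self):
            # Equal neighbours: any sensible method returns the bin itself.
            self.assertAlmostEqual(interpolate_peak([0.0, 1.0, 0.0], 1), 1.0)

        def test_refined_peak_stays_near_bin(self):
            # Don't pin down the exact value -- methods legitimately differ.
            # Just require the result to lie between the peak bin and its
            # larger neighbour.
            result = interpolate_peak([0.2, 1.0, 0.8], 1)
            self.assertGreater(result, 1.0)
            self.assertLess(result, 2.0)

    if __name__ == "__main__":
        unittest.main()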

Misc meeting notes etc