Tutorial Sketches - Explanations » History » Version 8
Astrid Bin, 2015-07-17 03:56 PM
Tutorial Sketches - Explanations¶
Find the tutorial sketches in Repository > Projects, or click this link: https://code.soundsoftware.ac.uk/projects/beaglert/repository/show/projects
Below is a list of the sample sketches, and what they do.
Basic sketch structure¶
A collection of BeagleRT files is called a "project".
The structure of a BeagleRT project¶
If you open a project folder in the above repository, you'll see that each BeagleRT project contains two files: main.cpp and render.cpp (some projects have additional files, but every project has at least these two). You don't really have to worry about main.cpp; it contains helper functions and handles command line arguments. Most of the work is done in render.cpp.
The structure of a render.cpp file¶
A render.cpp file has three functions: setup(), render() and cleanup().
setup() is a function that runs at the beginning when the project starts.
render() is a function that is called repeatedly and continuously by the audio engine, at the highest priority.
cleanup() is a function that is called when the program stops, to finish up any processes that might still be running.
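Putting those three functions together, a bare-bones render.cpp has roughly this shape. (In a real project you would #include the BeagleRT header and use the real BeagleRTContext; here a minimal stand-in struct with just a few of its fields is defined so the sketch compiles on its own, so treat the exact signatures as an illustration rather than the definitive API.)

```cpp
#include <cstddef>

// Minimal stand-in for BeagleRTContext. Assumption: the real struct
// has many more fields -- see the documentation linked below.
struct BeagleRTContext {
    const float *audioIn;       // interleaved input samples
    float *audioOut;            // interleaved output samples
    unsigned int audioFrames;   // frames in this block
    unsigned int audioChannels; // channels per frame
};

// setup(): runs once, before audio starts. Return false to abort.
bool setup(BeagleRTContext *context, void *userData)
{
    return true;
}

// render(): called over and over by the audio engine,
// once per block of audioFrames frames.
void render(BeagleRTContext *context, void *userData)
{
    // For illustration, pass the input straight through to the output.
    for(unsigned int n = 0; n < context->audioFrames * context->audioChannels; n++)
        context->audioOut[n] = context->audioIn[n];
}

// cleanup(): runs once when the program stops, to release
// any resources that setup() acquired.
void cleanup(BeagleRTContext *context, void *userData)
{
}
```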
Here we will briefly explain each function and the structure of the render.cpp document
Before any functions¶
At the top of the file, include any libraries you might need.
Additionally, declare any global variables. In these tutorial sketches, all global variables are preceded by a g so we always know which variables are global - gSampleData, for instance. It's not mandatory but is a really good way of keeping track of what's global and what isn't.
Sometimes it's necessary to access a variable from another file, such as main.cpp. In this case, precede this variable with the keyword extern.
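The two conventions above look like this in practice (gFrequency follows the naming used in the sketches; the extern'd name is left commented out here, and is only an illustration of the pattern, not a variable the tutorials are guaranteed to define):

```cpp
// Global variables are declared outside any function, and in these
// tutorials carry a g prefix so it is obvious they are global.
float gFrequency = 440.0f;

// To use a variable that is *defined* in another file (say main.cpp),
// declare it here with the extern keyword, e.g.:
// extern int gSomeFlag;

// Any function in this file can now read or write gFrequency.
float doubledFrequency() { return 2.0f * gFrequency; }
```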
setup(), render() and cleanup() each take the same arguments. These arguments are pointers to data structures. The main one you'll use is context, which is a pointer to a data structure containing lots of information you need.
Take a look at what's in the data structure here: https://code.soundsoftware.ac.uk/projects/beaglert/embedded/structBeagleRTContext.html
You can access any of those bits of information contained in the data structure like this: context->[item in struct]
For example, context->audioChannels returns the number of audio channels. context->audioIn[n] would give you the current input sample (assuming that your input is mono; if it's not, you will have to account for multiple channels).
Note that audioIn, audioOut, analogIn, analogOut and digital are all buffers.
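For example, handling more than one channel means indexing into those buffers per frame and per channel. The sketch below mixes all input channels down to mono and writes the result to every output channel, using a minimal mock of the context struct; the interleaved indexing (frame n, channel ch at index n * audioChannels + ch) is an assumption here, so check it against the linked BeagleRTContext documentation.

```cpp
#include <cstddef>

// Minimal mock of the context; the real BeagleRTContext has more fields.
struct MockContext {
    const float *audioIn;
    float *audioOut;
    unsigned int audioFrames;
    unsigned int audioChannels;
};

// Mix every input channel of each frame down to one value,
// then write that value to every output channel of the frame.
void mixToMono(MockContext *context)
{
    for(unsigned int n = 0; n < context->audioFrames; n++) {
        float sum = 0.0f;
        for(unsigned int ch = 0; ch < context->audioChannels; ch++)
            sum += context->audioIn[n * context->audioChannels + ch];
        float mono = sum / context->audioChannels;
        for(unsigned int ch = 0; ch < context->audioChannels; ch++)
            context->audioOut[n * context->audioChannels + ch] = mono;
    }
}
```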
This sketch performs an FFT (Fast Fourier Transform) on incoming audio. It uses the NE10 library, included at the top of the file (line 11).
Read the documentation on the NE10 library here: http://projectne10.github.io/Ne10/doc/annotated.html
The variables timeDomainIn, timeDomainOut and frequencyDomain are variables of the struct ne10_fft_cpx_float32_t (http://projectne10.github.io/Ne10/doc/structne10__fft__cpx__float32__t.html). These are declared at the top of the file (line 21), and memory is allocated for them in setup() (line 41).
In render(), a for loop performs the FFT on each sample, and the resulting output is placed on each channel.
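NE10's actual types and call signatures are in the documentation linked above. As a library-free sketch of what the transform computes, here is a naive DFT of a small real-valued signal. Note this is O(N^2) and purely illustrative; NE10's FFT does the same job far faster, which is why the sketch uses it in real time.

```cpp
#include <cmath>
#include <complex>
#include <vector>

// Naive discrete Fourier transform:
//   X[k] = sum over n of x[n] * exp(-2*pi*i*k*n/N)
// Illustrative only: real-time code should use an FFT library like NE10.
std::vector<std::complex<float>> dft(const std::vector<float> &x)
{
    const std::size_t N = x.size();
    std::vector<std::complex<float>> X(N);
    for(std::size_t k = 0; k < N; k++) {
        std::complex<float> sum(0.0f, 0.0f);
        for(std::size_t n = 0; n < N; n++) {
            float angle = -2.0f * float(M_PI) * k * n / N;
            sum += x[n] * std::complex<float>(std::cos(angle), std::sin(angle));
        }
        X[k] = sum;
    }
    return X;
}
```

A constant (DC) input signal should put all of its energy in bin 0 and none in the other bins, which is a quick sanity check on any FFT code.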
This sketch produces a sine wave.
The frequency of the sine wave is set by a global variable, gFrequency (line 12).
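The core of such an oscillator is the standard phase-accumulation recipe sketched below. The variable names follow the tutorial convention, but this is a generic illustration of the technique, not necessarily line-for-line what the sketch does.

```cpp
#include <cmath>

float gFrequency = 440.0f; // frequency in Hz, global as in the sketch
float gPhase = 0.0f;       // current phase in radians

// Produce one sample of a sine wave, then advance the phase by
// one sample period and wrap it to stay within 0..2*pi.
float nextSineSample(float sampleRate)
{
    float out = sinf(gPhase);
    gPhase += 2.0f * float(M_PI) * gFrequency / sampleRate;
    if(gPhase > 2.0f * float(M_PI))
        gPhase -= 2.0f * float(M_PI);
    return out;
}
```

Calling nextSineSample() once per output frame inside render()'s loop yields a continuous tone; wrapping the phase (rather than letting it grow forever) keeps the float precision stable over long runs.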
This sketch produces a sine wave, the frequency and amplitude of which are affected by data received on the analog pins. Before looping through each audio frame, we declare a value for the frequency and amplitude of our sine wave (line 55); we adjust these values by taking in data from analog sensors (for example, a potentiometer).
The important thing to notice is that audio is sampled twice as often as analog data. The audio sampling rate is 44.1kHz (44100 frames per second) and the analog sampling rate is 22.05kHz (22050 frames per second). On line 62 you might notice that we are processing the analog data and updating frequency and amplitude only on every second audio sample, since the analog sampling rate is half that of the audio.
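The half-rate pattern can be sketched as follows: one analog frame exists for every two audio frames, so the analog buffer is read only on even audio frames, and the most recent value is reused for the odd ones. The function below is a simplified stand-alone illustration of that indexing, not the sketch's actual code; the interleaved analog indexing is an assumption to check against the BeagleRTContext documentation.

```cpp
#include <cstddef>

// For every two audio frames there is one analog frame, so the analog
// buffer is read only when n is even, at analog frame index n / 2.
// Returns how many times the analog data was actually read.
int processBlock(const float *analogIn, unsigned int analogChannels,
                 float *audioOut, unsigned int audioFrames,
                 float amplitude)
{
    int analogReads = 0;
    float control = 0.0f;
    for(unsigned int n = 0; n < audioFrames; n++) {
        if(!(n % 2)) {
            // Read analog channel 0 of analog frame n / 2.
            control = analogIn[(n / 2) * analogChannels + 0];
            analogReads++;
        }
        // Odd audio frames reuse the analog value read on the even frame.
        audioOut[n] = amplitude * control;
    }
    return analogReads;
}
```

With a block of 4 audio frames there are only 2 analog frames, so the analog input is read exactly twice; each analog value drives two consecutive audio samples.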