syncopation-dataset: diff SMC2015latex/section/framework.tex @ 71:9a60ca4ae0fb
updating models and latex files. added results.csv
author:    christopherh <christopher.harte@eecs.qmul.ac.uk>
date:      Mon, 11 May 2015 23:36:25 +0100
parents:   cf0305dc0ba0
children:
--- a/SMC2015latex/section/framework.tex	Mon Apr 27 20:32:10 2015 +0100
+++ b/SMC2015latex/section/framework.tex	Mon May 11 23:36:25 2015 +0100
@@ -3,8 +3,8 @@
 \begin{figure}[t]
 \centering
-\includegraphics[width=0.95\columnwidth]{images/framework.pdf}
-\caption{Module hierarchy in the synpy toolkit: the top-level module provides a simple interface for the user to test different syncopation models. Musical constructs such as bars, velocity and note sequences, notes and time-signatures are defined in the `music objects' module; support for common procedures such as sequence concatenation and subdivision is provided in `basic functions'. Models and file reading components can be chosen as required by the user.\label{fig:framework}}
+\includegraphics[width=0.9\columnwidth]{images/framework.pdf}
+\caption{Module hierarchy in the SynPy toolkit: the top-level module provides a simple interface for the user to test different syncopation models. Musical constructs such as bars, velocity and note sequences, notes and time-signatures are defined in the `music objects' module; support for common procedures such as sequence concatenation and subdivision is provided in `basic functions'. Models and file reading components can be chosen as required by the user.\label{fig:framework}}
 \end{figure}
 
 The architecture of the toolkit is shown in Figure~\ref{fig:framework}. Syncopation values can be calculated for each bar in a given source of rhythm data along with selected statistics over all bars; the user specifies which model to use and supplies any special parameters that are required. Sources of rhythm data can be a bar object or a list of bars (detailed below in Section~\ref{sec:musicobjects}) or, alternatively, the name of a file containing music data. Where a model is unable to calculate a value for a given rhythm pattern, a `None' value is recorded for that bar and the indices of unmeasured bars are reported in the output. If no user parameters are specified, the default parameters specified in the literature for each model are used. Output can optionally be saved directly to XML or JSON files. An example of usage in the Python interpreter is shown in Figure~\ref{ta:example}.
 
@@ -49,13 +49,20 @@
 V{1,0,0,0.5,0,0,1,0,0,0,0.5,0,0.5,0,0,0}
 \end{minted}
 }
-\caption{Example rhythm annotation \code{.rhy} file containing two bars of the Son Clave rhythm. The first is expressed as a note sequence with resolution of four ticks per quarternote; the second is the same rhythm expressed as a velocity sequence (see Section~\ref{sec:background}).}
+\caption{Example rhythm annotation file \code{clave.rhy} containing two bars of the Son Clave rhythm as discussed in Section~\ref{sec:background}. The first bar is expressed as a note sequence with a resolution of four ticks per quarter-note; the second is the same rhythm expressed as a velocity sequence.}
 \label{ta:clave}
 \end{figure}
 
-Our \code{.rhy} annotation format is a light text syntax for descibing rhtyhm patterns directly in terms of note and velocity sequences (see Figure~\ref{ta:clave}). The full syntax specification is given in Backus Naur Form on the toolkit page \cite{Song14URL}.
+Our \code{.rhy} annotation format is a lightweight text syntax for describing rhythm patterns directly in terms of note and velocity sequences (see Figure~\ref{ta:clave}). The full syntax specification is given in Backus-Naur Form in the toolkit repository \cite{Song14URL}.
 
 The MIDI file reader can open type 0 and type 1 standard MIDI files and select a given track to read rhythm from.
 Notes with zero delta time between them (i.e. chords) are treated as the same event for the purposes of creating note sequences from the MIDI stream. Time-signature and tempo events encoded in the MIDI stream are assumed to describe those parameters of the recorded music correctly, so it is recommended that the user use correctly annotated and quantised MIDI files.
+\begin{figure*}[t]
+\centering
+\includegraphics[width=0.85\textwidth]{images/allmodels.pdf}
+\caption{Syncopation predictions of the seven models in the toolkit for the syncopation dataset from~\cite{Song15thesis}. The range of prediction values across all rhythm patterns is given for each model. Within each rhythm category, the rhythm patterns are arranged by tatum rate (i.e. quarter-note rate then eighth-note rate) and then in alphabetical order (the dataset naming convention uses letters a-l to represent short rhythm components that make up longer patterns). Gaps in model output occur where a particular model is unable to process the specific rhythm category, i.e. LHL, PRS, TMC and SG cannot process polyrhythms and KTH can only measure rhythms in duple meters.}
+\label{fig:modelpredictions}
+\end{figure*}
+
 \subsection{Plugin architecture}
 The system architecture has been designed to allow new models to be added easily. Models have a common interface, exposing a single function that will return the syncopation value for a bar of music. Optional parameters may be supplied as a Python dictionary if the user wishes to specify settings different from those given in the literature for a specific model.
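
The revised framework text in the hunks above refers to an interpreter usage example (the paper's Figure ta:example) that is not included in this diff. As a rough sketch of the kind of top-level call that paragraph describes, something like the following could be written; the module, function and argument names here (synpy, calculate_syncopation, LHL, outfile) are illustrative assumptions, not the toolkit's documented API.

    # Illustrative sketch only: module, function and argument names
    # (synpy, calculate_syncopation, LHL, outfile) are assumptions,
    # not the toolkit's documented interface.
    import synpy
    from synpy import LHL   # assumed model module (Longuet-Higgins & Lee)

    # Per-bar syncopation values plus summary statistics for a .rhy file;
    # no parameters dictionary is passed, so the model's published
    # defaults would apply. Results are also written out as JSON.
    results = synpy.calculate_syncopation(LHL, "clave.rhy", outfile="results.json")
    print(results)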
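
The velocity-sequence line V{...} visible in the second hunk encodes the clave pattern with non-zero entries marking onsets and their relative strengths at each tick. The toolkit's own .rhy reader follows the full grammar published with the repository; purely to illustrate how such a line maps onto onsets, a minimal parse could look like this:

    def parse_velocity_line(line):
        """Turn a velocity-sequence line such as 'V{1,0,0,0.5,...}' into
        (tick index, velocity) pairs for the non-zero entries (the onsets)."""
        body = line.strip()[2:-1]                     # drop leading 'V{' and trailing '}'
        velocities = [float(v) for v in body.split(",")]
        return [(i, v) for i, v in enumerate(velocities) if v > 0]

    # The Son Clave bar from the example file: onsets on ticks 0, 3, 6, 10 and 12.
    print(parse_velocity_line("V{1,0,0,0.5,0,0,1,0,0,0,0.5,0,0.5,0,0,0}"))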
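
The MIDI paragraph states that notes with zero delta time between them (chords) are folded into a single event when building note sequences. The toolkit's own reader is not shown in this diff; the sketch below reproduces that chord-merging idea using the third-party mido library, as an assumption-labelled illustration rather than the actual implementation.

    import mido   # third-party MIDI library, used here only for illustration

    def note_onset_ticks(path, track_index=0):
        """Collect note-onset ticks from one track of a type 0 or type 1 file,
        merging notes with zero delta time (chords) into a single onset."""
        track = mido.MidiFile(path).tracks[track_index]
        onsets, tick = [], 0
        for msg in track:
            tick += msg.time                          # delta time in ticks
            if msg.type == "note_on" and msg.velocity > 0:
                if not onsets or onsets[-1] != tick:  # zero-delta chord notes share an onset
                    onsets.append(tick)
        return onsets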
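
Finally, the plugin-architecture paragraph describes a common model interface: a single function returning the syncopation value for one bar, with an optional Python dictionary of parameters overriding the defaults from the literature. A hypothetical plugin module fitting that description might be shaped as follows; the function name, bar accessor and parameter names are assumptions, not the toolkit's actual interface.

    # Hypothetical model plugin; all names below are illustrative only.
    DEFAULT_PARAMETERS = {"weight_scale": 1.0}    # placeholder defaults, not from the literature

    def get_syncopation(bar, parameters=None):
        """Return a syncopation value for one bar, or None if this model
        cannot measure the rhythm (e.g. a polyrhythm or non-duple meter)."""
        params = dict(DEFAULT_PARAMETERS)
        if parameters is not None:
            params.update(parameters)             # user-supplied dict overrides defaults

        onsets = bar.get_note_sequence()          # assumed accessor on the bar object
        if not onsets:
            return None                           # unmeasured bar, reported by the caller

        # A real model would score the onset pattern against metrical weights
        # here; this placeholder just counts onsets scaled by a parameter.
        return len(onsets) * params["weight_scale"]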