changeset 67:bb6b9a612d02

update
author christopherh <christopher.harte@eecs.qmul.ac.uk>
date Mon, 27 Apr 2015 18:15:53 +0100
parents e33186a69ad2
children 5803b73310af
files SMC2015latex/section/background.tex SMC2015latex/section/dataset.tex SMC2015latex/section/framework.tex SMC2015latex/section/introduction.tex
diffstat 4 files changed, 19 insertions(+), 22 deletions(-)
--- a/SMC2015latex/section/background.tex	Mon Apr 27 17:09:03 2015 +0100
+++ b/SMC2015latex/section/background.tex	Mon Apr 27 18:15:53 2015 +0100
@@ -1,10 +1,7 @@
 \section{Background}
 \label{sec:background}
 
-\subsection{Rhythm representation}
-\label{sec:background:rhythm}
-In this section, we introduce some key concepts to assist readers in understanding the mechanisms of 
-each syncopation model. Please refer to \cite{Song15thesis} for the detailed explanation all relevant rhythmic concepts in music theory and their mathematical notations.
+In this section, to introduce the theory behind the toolkit, we briefly present key aspects of the mathematical framework described in \cite{Song15thesis} and then give an overview of each syncopation model. %Please refer to for a more detailed treatment of all the related concepts and their mathematical notation.
 
 \begin{figure}[t]
 \centering
@@ -13,11 +10,12 @@
 \label{fig:general}
 \end{figure}
 
+% \subsection{Rhythm representation}
+% \label{sec:background:rhythm}
 
-
-\subsubsection{Time-span}
+\subsection{Time-span}
 \label{sec:background:rhythm:timespan}
-The term \emph{time-span} has been defined as the period between two points in time, including all time points in between \cite{Lerdahl_Jackendoff83GTTM}. To represent a given rhythm, we must specify the time-span within which it occurs by defining a reference time origin $\timeorigin$ and end time $\timeend$, the total duration $\timespan$ of which is $\timespan = \timeend-\timeorigin$ (Figure~\ref{fig:general}.
+The term \emph{time-span} has been defined as the period between two points in time, including all time points in between \cite{Lerdahl_Jackendoff83GTTM}. To represent a given rhythm, we must specify the time-span within which it occurs by defining a reference time origin $\timeorigin$ and end time $\timeend$, the total duration $\timespan$ of which is $\timespan = \timeend-\timeorigin$ (Figure~\ref{fig:general}).
 
 The basic time unit is the \emph{tick} rather than the second; we therefore set the parameter Ticks Per Quarter-note (TPQ) to describe the time-span of a given length of rhythm. The minimum TPQ is determined by the rhythm pattern such that all the events can be represented. As demonstrated in Figure~\ref{fig:clave}, the \emph{Son} clave rhythm pattern could be represented at both 8 and 4 ticks per quarter-note, but the minimum representable resolution would be 4.
 
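A minimal sketch of the minimum-TPQ rule described above (our own Python illustration, not code from the toolkit): writing each onset as a fraction of a quarter-note, the lowest resolution that keeps every onset on an integer tick is the least common multiple of the denominators.

    from fractions import Fraction
    from math import lcm

    def min_tpq(onsets_in_quarter_notes):
        # Lowest Ticks Per Quarter-note at which every onset lands on an
        # integer tick: the LCM of the onset denominators.
        return lcm(*(Fraction(o).denominator for o in onsets_in_quarter_notes))

    # Son clave onsets in quarter-note units: 0, 3/4, 3/2, 5/2, 3
    print(min_tpq([Fraction(0), Fraction(3, 4), Fraction(3, 2),
                   Fraction(5, 2), Fraction(3)]))  # -> 4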
@@ -30,7 +28,7 @@
 
 
 
-\subsubsection{Note and rhythm}
+\subsection{Notes and sequences}
 \label{sec:background:rhythm:note}
 A single \emph{note} event $\note$ occurring in this time-span may be described by the tuple $(\starttime, \durationtime, \velocity)$ as shown in Figure~\ref{fig:general}, where $\starttime$ represents the start or \emph{onset} time relative to $\timeorigin$, $\durationtime$ represents the note duration in the same units and $\velocity > 0$ represents the note \emph{velocity} (i.e. the dynamic: how loud or accented the event is relative to others).
 
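To make the note tuple concrete, here is an illustrative Python encoding (the Note type is our own shorthand, not a toolkit class) of one 4/4 bar of the son clave at 4 ticks per quarter-note:

    from collections import namedtuple

    # (start, duration, velocity): start and duration in ticks from the
    # time origin; velocity > 0 encodes the dynamic.
    Note = namedtuple("Note", ["start", "duration", "velocity"])

    # One 4/4 bar (16 ticks at TPQ = 4); velocities chosen to match the
    # .rhy example later in this changeset.
    son_clave = [Note(0, 3, 1.0), Note(3, 3, 0.5), Note(6, 4, 1.0),
                 Note(10, 2, 0.5), Note(12, 4, 0.5)]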
@@ -63,7 +61,7 @@
 \end{equation}
 which differs from the original note sequence in Equation~\ref{eq:note_sequence}.
 
-\subsubsection{Metrical structure and time-signature}
+\subsection{Metrical structure and time-signature}
 \label{sec:background:rhythm:meter}
 
 \begin{figure}[t]
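The velocity sequence referred to above can be derived from a note sequence by writing each note's velocity at its onset tick and zero elsewhere; a self-contained sketch (our own helper, and note that in this simple mapping the note durations are not carried over, which is one way the result differs from the note sequence):

    def to_velocity_sequence(notes, timespan_ticks):
        # One value per tick: a note's velocity at its onset tick, zero
        # elsewhere; durations are dropped in this simple mapping.
        velocities = [0.0] * timespan_ticks
        for start, duration, velocity in notes:
            velocities[start] = velocity
        return velocities

    son_clave = [(0, 3, 1.0), (3, 3, 0.5), (6, 4, 1.0), (10, 2, 0.5), (12, 4, 0.5)]
    print(to_velocity_sequence(son_clave, 16))
    # -> [1.0, 0.0, 0.0, 0.5, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.5, 0.0, 0.0, 0.0]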
@@ -81,7 +79,7 @@
 
 \subsection{Syncopation models}
 \label{sec:background:models}
-In this section we give a brief review of each implemented syncopation model, including their general hypothesis and mechanism.   To compare the capabilities of each model, we give an overview of the musical features each captures in Table~\ref{ta:capabilites}. For a detailed review of these models see \cite{Song15thesis}.
+In this section we give a brief review of each implemented syncopation model, including its general hypothesis and mechanism. To compare the capabilities of the models, we give an overview of the musical features each captures in Table~\ref{ta:capabilities}. For a detailed review of these models see \cite{Song15thesis}.
 
 \subsubsection{Longuet-Higgins and Lee 1984 (\lhl)}
 \label{sec:background:models:lhl}
@@ -141,7 +139,7 @@
 \end{tabular}
 }
 \caption{Musical properties captured by the different syncopation models. All models use note onsets, but only two use note duration rather than inter-onset intervals. Only SG takes dynamics (i.e. variation in note velocity) into account. All models handle monorhythms but the four models based on hierarchical decomposition of rhythm patterns are unable to handle polyrhythmic patterns. All models can process both duple and triple meters with the exception of KTH that can only process duple.}
-\label{ta:capabilites}
+\label{ta:capabilities}
 \end{table}
 
 
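The capability summary in the table caption can also be stated as data; the encoding below is our own and mirrors only what the caption and the dataset figure state (only SG uses dynamics; LHL, PRS, TMC and SG cannot handle polyrhythms; KTH is restricted to duple meter):

    # True/False per feature, taken from the table caption and the dataset
    # figure; note duration is omitted because the caption does not name
    # the two duration-aware models.
    CAPABILITIES = {
        "LHL":  {"dynamics": False, "polyrhythm": False, "triple_meter": True},
        "PRS":  {"dynamics": False, "polyrhythm": False, "triple_meter": True},
        "TMC":  {"dynamics": False, "polyrhythm": False, "triple_meter": True},
        "SG":   {"dynamics": True,  "polyrhythm": False, "triple_meter": True},
        "KTH":  {"dynamics": False, "polyrhythm": True,  "triple_meter": False},
        "TOB":  {"dynamics": False, "polyrhythm": True,  "triple_meter": True},
        "WNBD": {"dynamics": False, "polyrhythm": True,  "triple_meter": True},
    }

    def can_measure(model, is_polyrhythm, is_triple_meter):
        # Pre-check whether a model applies to a given rhythm category.
        caps = CAPABILITIES[model]
        return ((caps["polyrhythm"] or not is_polyrhythm) and
                (caps["triple_meter"] or not is_triple_meter))

    print(can_measure("KTH", is_polyrhythm=True, is_triple_meter=False))  # -> True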
--- a/SMC2015latex/section/dataset.tex	Mon Apr 27 17:09:03 2015 +0100
+++ b/SMC2015latex/section/dataset.tex	Mon Apr 27 18:15:53 2015 +0100
@@ -1,13 +1,13 @@
 \section{Syncopation Dataset}
 \label{sec:data}
 
-The major outcome of the SynPy toolkit is to provide prediction of the level of syncopation of a certain rhythm pattern, or none if not applicable. As a demonstration, we apply all seven syncopation models on the rhythms in the syncopation perceptual dataset in~\cite{Song15thesis, Song13}. This dataset includes 27 monorhythms in 4/4 meter, 36 monorhythms in 6/8 and 48 polyrhythms in 4/4, altogether 111 rhythm-stimuli. 
+The major outcome of the SynPy toolkit is to provide a prediction of the level of syncopation of any rhythm pattern that can be measured by a given model. As a demonstration, we apply all seven syncopation models to the rhythm patterns used as stimuli for the syncopation perceptual dataset from~\cite{Song15thesis, Song13}. This dataset includes 27 monorhythms in 4/4 meter, 36 monorhythms in 6/8 and 48 polyrhythms in 4/4, altogether forming a set of 111 rhythm patterns. 
 
 \begin{figure*}[t]
 \centering
 \includegraphics[width=0.85\textwidth]{images/allmodels.pdf}
-\caption{Syncopation predictions of seven models for the syncopation dataset. The range of syncopation predictions for all the rhythm patterns are given for each model. Within each rhythm category, the rhythm patterns are arranged by the tatum-rate (i.e. quarter-note rate, eighth-note rate) then in alphabetic order. For example, in 4/4 monorhythms group the rhythm patterns are from ab, ac, ad, ba, ... to dd, then abab, ... ,dddd. LHL, PRS, TMC, SG can be only applicable to monorhythms, KTH can be only measure rhythms in duple meter.}
+\caption{Syncopation predictions of the seven models in the toolkit for the syncopation dataset from~\cite{Song15thesis}. The range of prediction values across all rhythm patterns is given for each model. Within each rhythm category, the rhythm patterns are arranged by tatum-rate (i.e. quarter-note rate then eighth-note rate) and then in alphabetical order (the dataset naming convention uses letters a-l to represent short rhythm components that make up longer patterns). Gaps in model output occur where a particular model is unable to process the specific rhythm category, i.e. LHL, PRS, TMC and SG cannot process polyrhythms and KTH can only measure rhythms in duple meters.}
 \label{fig:modelpredictions}
 \end{figure*}
 
-Figure~\ref{fig:modelpredictions} plots the syncopation predictions of individual model for each rhythm. It shows that each model has different ranges of prediction and scope of capabilities rhythm categories (refer to Table~\ref{ta:capabilities}). 
+Figure~\ref{fig:modelpredictions} plots the syncopation predictions of each individual model for each rhythm. It presents the different ranges of prediction values for each model and shows their capabilities in terms of rhythm categories (refer to Table~\ref{ta:capabilities}). 
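Given per-pattern predictions with None marking rhythms a model cannot measure (see the framework section), each model's range as plotted in the figure could be summarised with a helper like this (an illustrative sketch, not toolkit code; the predictions are assumed to be already computed):

    def prediction_range(predictions):
        # Ignore None entries, which mark patterns a model cannot measure.
        measured = [p for p in predictions if p is not None]
        return (min(measured), max(measured)) if measured else None

    print(prediction_range([2.0, None, 0.0, 3.5, None]))  # -> (0.0, 3.5)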
--- a/SMC2015latex/section/framework.tex	Mon Apr 27 17:09:03 2015 +0100
+++ b/SMC2015latex/section/framework.tex	Mon Apr 27 18:15:53 2015 +0100
@@ -1,4 +1,5 @@
 \section{Framework}
+\label{sec:framework}
 
 \begin{figure}[t]
 \centering
@@ -6,7 +7,7 @@
 \caption{Module hierarchy in the SynPy toolkit: the top-level module provides a simple interface for the user to test different syncopation models. Musical constructs such as bars, velocity and note sequences, notes and time-signatures are defined in the `music objects' module; support for common procedures such as sequence concatenation and subdivision is provided in `basic functions'. Models and file reading components can be chosen as required by the user.\label{fig:framework}}
 \end{figure}
 
-The architecture of the toolkit is shown in Figure~\ref{fig:framework}. Syncopation values can be calculated for each bar in a given source of rhythm data along with selected statistics over all bars; the user specifies which model to use and supplies any special parameters that are required. Sources of rhythm data can be a bar object or a list of bars (detailed below in section~\ref{sec:musicobjects}) or, alternatively, the name of a file containing music data. Where a model is unable to calculate a value for a given rhythm pattern, a `None' value is recorded for that bar and the indices of unmeasured bars reported in the output.  If no user parameters are specified, the default parameters specified in the literature for each model are used. Output can optionally be saved directly to XML or JSON files. An example of usage in the Python interpreter is shown in Figure~\ref{ta:example}.
+The architecture of the toolkit is shown in Figure~\ref{fig:framework}. Syncopation values can be calculated for each bar in a given source of rhythm data along with selected statistics over all bars; the user specifies which model to use and supplies any special parameters that are required. Sources of rhythm data can be a bar object or a list of bars (detailed below in Section~\ref{sec:musicobjects}) or, alternatively, the name of a file containing music data. Where a model is unable to calculate a value for a given rhythm pattern, a `None' value is recorded for that bar and the indices of unmeasured bars are reported in the output. If no user parameters are specified, the default parameters specified in the literature for each model are used. Output can optionally be saved directly to XML or JSON files. An example of usage in the Python interpreter is shown in Figure~\ref{ta:example}.
 
 \begin{figure}
 \footnotesize{
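The per-bar behaviour described above (one value per bar, None where a model cannot measure, unmeasured indices reported) might be sketched as follows; model_fn is a placeholder for whichever model entry point is used, not the toolkit's actual interface:

    def syncopation_by_bar(model_fn, bars, **params):
        # Collect one value per bar; record the indices of bars the model
        # could not measure, as the toolkit output is described to do.
        values, unmeasured = [], []
        for index, bar in enumerate(bars):
            value = model_fn(bar, **params)  # assumed: returns None if inapplicable
            values.append(value)
            if value is None:
                unmeasured.append(index)
        return {"values": values, "unmeasured_bars": unmeasured}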
@@ -48,7 +49,7 @@
 V{1,0,0,0.5,0,0,1,0,0,0,0.5,0,0.5,0,0,0}
 \end{minted}
 }
-\caption{Example rhythm annotation \code{.rhy} file containing two bars of the Son Clave rhythm. The first is expressed as a note sequence with resolution of four ticks per quarternote; the second is the same rhythm expressed as a velocity sequence (see section~\ref{sec:background}).}
+\caption{Example rhythm annotation \code{.rhy} file containing two bars of the Son Clave rhythm. The first is expressed as a note sequence with a resolution of four ticks per quarter-note; the second is the same rhythm expressed as a velocity sequence (see Section~\ref{sec:background}).}
 \label{ta:clave} 
 \end{figure}
 Our \code{.rhy} annotation format is a lightweight text syntax for describing rhythm patterns directly in terms of note and velocity sequences (see Figure~\ref{ta:clave}). The full syntax specification is given in Backus-Naur Form on the toolkit page \cite{Song14URL}.
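As an illustration of how light the syntax is, a velocity-sequence line like the V{...} line in the figure above parses in a few lines of Python (a toy reader for that one form only, not an implementation of the full BNF grammar):

    def parse_velocity_line(line):
        # Reads a velocity-sequence line of the form "V{1,0,0,0.5,...}".
        body = line.strip()
        assert body.startswith("V{") and body.endswith("}")
        return [float(v) for v in body[2:-1].split(",")]

    print(parse_velocity_line("V{1,0,0,0.5,0,0,1,0,0,0,0.5,0,0.5,0,0,0}"))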
--- a/SMC2015latex/section/introduction.tex	Mon Apr 27 17:09:03 2015 +0100
+++ b/SMC2015latex/section/introduction.tex	Mon Apr 27 18:15:53 2015 +0100
@@ -1,13 +1,11 @@
 \section{Introduction}
 \label{sec:introduction}
 
-Syncopation is a fundamental feature of rhythm in music and a crucial aspect of musical character in many styles and cultures. Having comprehensive models to capture syncopation perception allows us to better understand the broader aspects of music perception. Over the last thirty years, several modelling approaches for syncopation have been developed and heavily used in studies in multiple disciplines~\cite{Fitch_Rosenfeld07, Smith_Honing07, Keller_Schubert11, Madison13, Witek14}. To date, formal investigations on the links between syncopation and  music perception subjects such as meter induction, emotion and groove, have largely relied on quantitative measures of syncopation [cites?]. However, until now there has not been a comprehensive reference implementation of the different algorithms available to facilitate quantifying syncopation.
+Syncopation is a fundamental feature of rhythm in music and a crucial aspect of musical character in many styles and cultures. Having comprehensive models to capture syncopation perception allows us to better understand the broader aspects of music perception. Over the last thirty years, several modelling approaches for syncopation have been developed and heavily used in studies in multiple disciplines~\cite{LHL84,Pressing97,Toussaint02Metrical,Sioros11,Keith91,Toussaint05Offbeatness,Gomez05,Keller_Schubert11}. To date, formal investigations of the links between syncopation and music perception topics such as meter induction~\cite{Povel_Essens85, Fitch_Rosenfeld07}, emotion~\cite{Keller_Schubert11}, groove~\cite{Madison13, Witek14} and neurophysiological responses~\cite{Winkler09, Vuust11} have largely relied on quantitative measures of syncopation. However, until now there has not been a comprehensive reference implementation of the different algorithms available to facilitate quantifying syncopation.
 
-In~\cite{Song15thesis}, Song provides a consolidated mathematical framework and in-depth review of seven widely used syncopation models including: Longuet-Higgins and Lee's model (LHL)~\cite{LHL84}, Pressing's model (PRS)~\cite{Pressing97,Pressing93}, Toussaint's Metric Complexity model (TMC)~\cite{Toussaint02Metrical}, Sioros and Guedes's model (SG)~\cite{Sioros11,Sioros12}, Keith's model (KTH)~\cite{Keith91}, Toussaint's off-beatness measure (TOB)~\cite{Toussaint05Offbeatness} and G\'omez et al.'s Weighted Note-to-Beat Distance (WNBD)~\cite{Gomez05}. 
+In~\cite{Song15thesis}, Song provides a consolidated mathematical framework and in-depth review of seven widely used syncopation models: Longuet-Higgins and Lee~\cite{LHL84}, Pressing~\cite{Pressing97,Pressing93}, Toussaint's Metric Complexity~\cite{Toussaint02Metrical}, Sioros and Guedes~\cite{Sioros11,Sioros12}, Keith~\cite{Keith91}, Toussaint's off-beatness measure~\cite{Toussaint05Offbeatness} and G\'omez et al.'s Weighted Note-to-Beat Distance~\cite{Gomez05}. 
 Based on this mathematical framework, the SynPy toolkit provides implementations of these syncopation models in the Python programming language. 
 
-novel features - time sig, tempo, real music file input, polyrhythm
+The toolkit not only provides the first open-source implementation of these models in a unified framework but also allows convenient data input from standard MIDI files and text-based rhythm annotations. Multiple bars of music can be processed, reporting syncopation values bar by bar as well as various descriptive statistics across a whole piece. The toolkit also offers easy output to XML and JSON files and accepts arbitrary rhythm patterns as well as time-signature and tempo changes. In addition, it defines a common interface for syncopation models, providing a simple plugin architecture for future extensibility.
 
-XXXXX Key features XXXXX. For ease of input, the SynPy toolkit is able to process standard MIDI files or text annotations of rhythm patterns in a simple, intuitive syntax. Multiple bars of music can be processed, reporting syncopation values bar by bar as well as various descriptive statistics across a whole piece. The toolkit defines a common interface for syncopation models, providing a simple plugin architecture for future extensibility.  
-
-In section~\ref{sec:background} we introduce mathematical representations of a few key rhythmic concepts that form the basis of the toolkit then briefly review seven syncopation models that have been implemented. In section~\ref{sec:framework} we outline the functional requirements and  architecture of SynPy, describing input sources, options and usage.
+In Section~\ref{sec:background} we introduce mathematical representations of a few key rhythmic concepts that form the basis of the toolkit, then briefly review the seven syncopation models that have been implemented. In Section~\ref{sec:framework} we outline the functional requirements and architecture of SynPy, describing input sources, options and usage.