changeset 71:9a60ca4ae0fb
updating models and latex files. added results.csv
author      christopherh <christopher.harte@eecs.qmul.ac.uk>
date        Mon, 11 May 2015 23:36:25 +0100
parents     c9615e237705
children    ef891481231e
files       SMC2015latex/csong.bib SMC2015latex/section/background.tex SMC2015latex/section/conclusion.tex SMC2015latex/section/dataset.tex SMC2015latex/section/framework.tex SMC2015latex/section/introduction.tex SMC2015latex/syncopation_toolkit.tex Syncopation models/results.csv Syncopation models/synpy/SG.py Syncopation models/synpy/TMC.py Syncopation models/synpy/basic_functions.py Syncopation models/synpy/music_objects.py
diffstat    12 files changed, 308 insertions(+), 164 deletions(-)
--- a/SMC2015latex/csong.bib Mon Apr 27 20:32:10 2015 +0100 +++ b/SMC2015latex/csong.bib Mon May 11 23:36:25 2015 +0100 @@ -949,7 +949,7 @@ @misc {Song14URL, author = {Chunyang Song and Christopher Harte and Marcus Pearce}, - title = {C4{DM} Syncopation Dataset and Toolkit}, + title = {SynPy Toolkit and Syncopation Perceptual Dataset}, year = {2014}, howpublished = {{{https://code.soundsoftware.ac.uk/projects/syncopation-dataset}}} }
--- a/SMC2015latex/section/background.tex Mon Apr 27 20:32:10 2015 +0100 +++ b/SMC2015latex/section/background.tex Mon May 11 23:36:25 2015 +0100 @@ -3,6 +3,17 @@ In this section, to introduce the theory behind the toolkit, we briefly present key aspects of its underlying mathematical framework (described in detail in \cite{Song15thesis}) and then give a short overview of each of the implemented syncopation models. %Please refer to for a more detailed treatment of all the related concepts and their mathematical notation. + + +% \subsection{Rhythm representation} +% \label{sec:background:rhythm} + +\subsection{Time-span} +\label{sec:background:rhythm:timespan} +The term \emph{time-span} has been defined as the period between two points in time, including all time points in between \cite{Lerdahl_Jackendoff83GTTM}. To represent a given rhythm, we must specify the time-span within which it occurs by defining a reference time origin $\timeorigin$ and end time $\timeend$, the total duration $\timespan$ of which is $\timespan = \timeend-\timeorigin$ (Figure~\ref{fig:general}). + +For the SynPy toolkit, we use \emph{ticks} as the basic time unit, as opposed to seconds (in keeping with the representation used for standard MIDI files), where the tick rate is given in \emph{Ticks Per Quarter-note} (TPQ). The TPQ rate that is chosen is arbitrary so long as the start time and duration of all notes in a rhythm pattern can be represented as integer values. As Figure~\ref{fig:clave} demonstrates, the \emph{Son} clave rhythm pattern can be correctly represented at both 8 and 4 TPQ but not at 2 TPQ because the pattern contains a note that starts on the fourth $16^{\textrm{th}}$-note position of the bar. + \begin{figure} \centering \includegraphics[width=\columnwidth]{images/general3.pdf} @@ -10,56 +21,46 @@ \label{fig:general} \end{figure} -% \subsection{Rhythm representation} -% \label{sec:background:rhythm} -\subsection{Time-span} -\label{sec:background:rhythm:timespan} -The term \emph{time-span} has been defined as the period between two points in time, including all time points in between \cite{Lerdahl_Jackendoff83GTTM}. To represent a given rhythm, we must specify the time-span within which it occurs by defining a reference time origin $\timeorigin$ and end time $\timeend$, the total duration $\timespan$ of which is $\timespan = \timeend-\timeorigin$ (Figure~\ref{fig:general}). -For the SynPy toolkit, the basic time unit is \emph{ticks-per-quarternote} (TPQ) as opposed to seconds describe the time-span of a length of rhythm. The minimum TPQ is determined by the rhythm-pattern so that all the events can be represented. As demonstrated in Figure~\ref{fig:clave}, the \emph{Son} clave rhythm pattern could be represented both at 8 and 4 ticks per quarter-note but the minimum representable resolution would be 4. -\begin{figure}[t] -\centering -\includegraphics[width=0.85\columnwidth]{images/clave_tpq.pdf} -\caption{The representation of \emph{Son} clave rhythm in different settings of Ticks Per Quarter-note (TPQ). 
Each quarter-note is represented by 8 and 4 ticks in (a) and (b) respectively, thus all the sounded notes are captured (highlighted by the blue circles); however in (c) where TQP is 2, the second note cannot be represented by this resolution.} -\label{fig:clave} -\end{figure} +\subsection{Note and velocity sequences} +\label{sec:background:rhythm:note} +A single \emph{note} event $\note$ occurring in a time-span may be described by the tuple $(\starttime, \durationtime, \velocity)$ as shown in Figure~\ref{fig:general}, where $\starttime$ represents start or \emph{onset} time relative to $\timeorigin$, $\durationtime$ represents note duration in the same units and $\velocity$ represents the note \emph{velocity} (i.e. the dynamic; how loud or accented the event is relative to others), where $\velocity > 0$. - - -\subsection{Note and sequences} -\label{sec:background:rhythm:note} -A single, \emph{note} event $\note$ occurring in this time-span may be described by the tuple $(\starttime, \durationtime, \velocity)$ as shown in Figure~\ref{fig:general}, where $\starttime$ represents start or \emph{onset} time relative to $\timeorigin$, $\durationtime$ represents note duration in the same units and $\velocity$ represents the note \emph{velocity} (i.e. the dynamic; how loud or accented the event is relative to others), where $\velocity > 0$. - -This allows us to represent an arbitrary rhythm as a note sequence $\sequence$, ordered in time +This allows us to represent an arbitrary rhythm as a \emph{note sequence} $\sequence$, ordered in time \begin{equation} \label{eq:def_sequence} \sequence = \langle\note_0, \note_1, \cdots, \note_{\sequencelength-1}\rangle \end{equation} -Suppose TQP is set as 4, an example note sequence for the clave rhythm in Figure~\ref{fig:clave} can be: +If TPQ is set to 4, an example note sequence representing the clave rhythm in Figure~\ref{fig:clave} could be: \begin{equation} \label{eq:note_sequence} -\sequence = \langle (0,3,2),(3,1,1),(6,2,2),(10,2,1),(12,4,1) \rangle +\sequence = \langle {(0,3,2),(3,1,1),(6,2,2),(10,2,1),(12,4,1)} \rangle, +\end{equation} +the higher velocity values of the first and third notes showing that these notes are accented in this example. + +An alternative representation of a rhythm is the \emph{velocity sequence}. This is a sequence of values representing equally spaced points in a time-span; each value corresponding to the normalised velocity of a note onset if present or zero otherwise. The velocity sequence for the note sequence in Equation~\ref{eq:note_sequence} can therefore be represented as +\begin{equation} +\label{eq:velocity_sequence} +\spanvector = \langle 1,0,0,0.5,0,0,1,0,0,0,0.5,0,0.5,0,0,0 \rangle. \end{equation} -The higher $velocity$ values of the first and third notes reflect that these notes are accented. +It should be noted that the conversion from note sequence to velocity sequence is not reversible, because the note duration information is lost in the conversion. As a result, when converting from a velocity sequence back to a note sequence, an assumption must be made that note durations are equal to the inter-onset intervals. Converting the velocity sequence in Equation~\ref{eq:velocity_sequence} back to a note sequence would therefore yield +\begin{equation} +\label{eq:new_note_sequence} +\sequence' = \langle (0,3,2),(3,3,1),(6,4,2),(10,2,1),(12,4,1) \rangle, +\end{equation} +which has different durations for the second and third notes compared to the original sequence in Equation~\ref{eq:note_sequence}. 
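To illustrate this lossy round trip in code, here is a minimal sketch (hypothetical helper names, not the toolkit's own API) that converts the Son clave note sequence above into its velocity sequence and back, assuming velocities are normalised by the largest value in the bar (2 in this example):

# Hypothetical sketch, not the SynPy API: note sequence <-> velocity sequence.
def notes_to_velocities(notes, span_ticks):
    v = [0.0] * span_ticks
    vmax = float(max(velocity for (_, _, velocity) in notes))
    for start, duration, velocity in notes:
        v[start] = velocity / vmax  # normalised onset velocity; duration is dropped
    return v

def velocities_to_notes(velocities, vmax=2):
    onsets = [i for i, v in enumerate(velocities) if v > 0]
    notes = []
    for k, start in enumerate(onsets):
        # durations have to be guessed as inter-onset intervals
        end = onsets[k + 1] if k + 1 < len(onsets) else len(velocities)
        notes.append((start, end - start, int(velocities[start] * vmax)))
    return notes

son_clave = [(0, 3, 2), (3, 1, 1), (6, 2, 2), (10, 2, 1), (12, 4, 1)]
v = notes_to_velocities(son_clave, 16)   # [1.0, 0.0, 0.0, 0.5, ...] as above
print(velocities_to_notes(v))            # durations of notes two and three now differ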
-An alternative representation of a rhythm is the \emph{velocity sequence}. This is a sequence of values representing equally spaced points in time; the values corresponding to the normalised velocity of a note onset if one is present at that time or zero otherwise. +\begin{figure} +\centering +\includegraphics[width=0.85\columnwidth]{images/clave_tpq.pdf} +\caption{Representation of the \emph{Son} clave rhythm at different Ticks Per Quarter-note (TPQ) resolutions. In (a) and (b) there is a tick for each note of the rhythm pattern thus all the sounded notes are captured (highlighted by the blue circles). However, in (c) where TPQ is 2, the second note of the pattern cannot be represented; the minimum resolution in this case is 4 TPQ.} +\label{fig:clave} +\end{figure} -The velocity sequence for the above clave rhythm can be derived as -\begin{equation} -\label{eq:velocity_sequence} -\spanvector = \langle 1,0,0,0.5,0,0,1,0,0,0,0.5,0,0.5,0,0,0 \rangle -\end{equation} - -It should be noted that the conversion between note sequence and velocity sequence is not commutative, because the note duration information is lost when converting from note sequence to velocity sequence. For example, the resulting note sequence converted from Equation~\ref{eq:velocity_sequence} would be -\begin{equation} -\label{eq:note_sequence} -\sequence' = \langle (0,3,2),(3,3,1),(6,4,2),(10,2,1),(12,4,1) \rangle -\end{equation} -, which is different from the original note sequence in Equation~\ref{eq:note_sequence}. \subsection{Metrical structure and time-signature} \label{sec:background:rhythm:meter} @@ -67,62 +68,48 @@ \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/meter_hierarchy7.pdf} -\caption{Metrical hierarchies for different time-signatures.(a) A simple-duple hierarchy dividing the bar into two groups of two (as with a 4/4 time-signature). (b) A compound-duple hierarchy dividing a bar into two beats, each of which is subdivided by three (e.g. 6/8 time-signature). Reading the weights from left to right in any level $\metriclevel$ gives the elements in sequence $\metricvector_\metriclevel$} +\caption{Metrical hierarchies for different time-signatures: (a) A simple-duple hierarchy dividing the bar into two groups of two (as with a 4/4 time-signature); (b) A compound-duple hierarchy dividing a bar into two beats, each of which is subdivided by three (e.g. 6/8 time-signature). %Reading the weights from left to right in any level $\metriclevel$ gives the elements in sequence $\metricvector_\metriclevel$ +} \label{fig:meter-hierarchy} \end{figure} -Isochronous-meter is formed with a multi-level hierarchical metrical structure~\cite{Lerdahl_Jackendoff83GTTM, London04Meter}. As shown in Figure~\ref{fig:meter-hierarchy}, under a certain metrical hierarchy, a bar is divided by a subdivision factor $\subdivision$ at each metrical level with index $\metriclevel$ where $\metriclevel \in [0, \levelmax]$. The list of subdivision factors is referred as a \emph{subdivision sequence}. +Isochronous-meter is formed with a multi-level hierarchical metrical structure~\cite{Lerdahl_Jackendoff83GTTM, London04Meter}. The metrical hierarchy may be described with a \emph{subdivision sequence} $\langle \subdivision_0, \subdivision_1, ... ,\subdivision_{\levelmax}\rangle$ such that in each metrical level $\metriclevel$, the value $\subdivision_\metriclevel$ specifies how nodes in the level above (i.e. $\metriclevel-1$) should be split to produce the current level (see Figure~\ref{fig:meter-hierarchy}). 
Any time-signature can be described by specifying a subdivision sequence and the metrical level that represents the beat. -Events at different metrical positions vary in perceptual salience or \emph{metrical weight}~\cite{Palmer_Krumhansl90}. These weights may be represented as a \emph{weight sequence} $\metricweightset = \langle \metricweight_0, \metricweight_1, ... \metricweight_{\levelmax}\rangle$. The prevailing hypothesis for the assignment of weights in the metrical hierarchy is that a time point that exists in both the current metrical level and the level above is said to have a \emph{strong} weight compared gto time points that are not also present in the level above~\cite{Lerdahl_Jackendoff83GTTM}. The choice of values for the weights in $\metricweightset$ can vary between different models but the assignment of weights to nodes is common to all as in ~\cite{Lerdahl_Jackendoff83GTTM}. +Events at different metrical positions vary in perceptual salience or \emph{metrical weight}~\cite{Palmer_Krumhansl90}. These weights may be represented as a \emph{weight sequence} $\metricweightset = \langle \metricweight_0, \metricweight_1, ... \metricweight_{\levelmax}\rangle$. The prevailing hypothesis for the assignment of weights in the metrical hierarchy is that a time point that exists in both the current metrical level and the level above is said to have a \emph{strong} weight compared to time points that are not also present in the level above~\cite{Lerdahl_Jackendoff83GTTM}. The hierarchy of weights and subdivisions forms a key component in the calculation of many syncopation models. The choice of values for the weights in $\metricweightset$ can vary between different models but the assignment of weights to nodes at a given level in the hierarchy, as described in~\cite{Lerdahl_Jackendoff83GTTM}, is common to all. \subsection{Syncopation models} \label{sec:background:models} -In this section we give a brief review of each implemented syncopation model, including their general hypothesis and mechanism. To compare the capabilities of each model, we give an overview of the musical features each captures in Table~\ref{ta:capabilities}. For a detailed review of these models see \cite{Song15thesis}. + +In this section we briefly review each implemented syncopation model, discussing their general hypothesis and giving a flavour of their mechanism. It is not possible to go into the full details of each implementation here but a thorough review of the models is given in chapter 3 of \cite{Song15thesis}. To help compare the capabilities of different models, we also give an overview of the musical features each one captures in Table~\ref{ta:capabilities}. \subsubsection{Longuet-Higgins and Lee 1984 (\lhl)} \label{sec:background:models:lhl} -Longuet-Higgins and Lee's model \cite{LHL84} decomposes rhythm patterns into a tree structure as described in Section~\ref{sec:background:rhythm:meter} with metrical weights $\metricweight_\metriclevel = -\metriclevel$ for all $\metricweight_\metriclevel \in \metricweightset$ i.e. $\metricweightset = \langle 0,-1,-2, ... \rangle$. +Longuet-Higgins and Lee's model \cite{LHL84} decomposes rhythm patterns into a tree structure as described in Section~\ref{sec:background:rhythm:meter} assigning metrical weights $\metricweight_\metriclevel = -\metriclevel$ %for all $\metricweight_\metriclevel \in \metricweightset$ +i.e. $\metricweightset = \langle 0,-1,-2, ... \rangle$. 
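To make the weight assignment concrete, the following small sketch (hypothetical; it mirrors the idea behind the toolkit's get_H in basic_functions.py rather than reproducing its code) expands the LHL weights over a 4/4 bar, assuming the simple-duple subdivision sequence <1,2,2,2,2>:

def weights_per_tick(weight_seq, subdivision_seq, levels):
    H = [weight_seq[0]]  # level 0: a single node spanning the whole bar
    for level in range(1, levels + 1):
        expanded = []
        for w in H:
            expanded.append(w)  # a surviving node keeps its stronger weight
            # nodes created by this subdivision take the current level's weight
            expanded.extend([weight_seq[level]] * (subdivision_seq[level] - 1))
        H = expanded
    return H

print(weights_per_tick([0, -1, -2, -3, -4], [1, 2, 2, 2, 2], 4))
# [0, -4, -3, -4, -2, -4, -3, -4, -1, -4, -3, -4, -2, -4, -3, -4]

The downbeat keeps weight 0, the half-bar position gets $-1$ and the quarter-note positions $-2$, matching the simple-duple hierarchy of Figure~\ref{fig:meter-hierarchy}(a).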
The hypothesis of this model is that a syncopation occurs when a rest ($\RestNode$) in one metrical position follows a note ($\NoteNode$) in a weaker position. Where such a note-rest pair occurs, the difference in their metrical weights is taken as a local syncopation score. Summing the local scores produces the syncopation prediction for the whole rhythm sequence. \subsubsection{Pressing 1997 (\pressing)} \label{sec:background:models:prs} -Pressing's cognitive complexity model~\cite{Pressing97,Pressing93} specifies six prototype binary sequences and ranks them in terms of \emph{cognitive cost}. For example, the lowest cost is the \emph{null} prototype that contains either a rest or a single note whereas the \emph{filled} prototype that has a note in every position of the sequence e.g. +Pressing's cognitive complexity model~\cite{Pressing97,Pressing93} specifies six prototype velocity sequences and ranks them in terms of \emph{cognitive cost}. For example, the lowest cost is the \emph{null} prototype for rhythms that contain either a single rest or note; a higher cost is given to the \emph{filled} prototype that has a note in every position of the sequence e.g. $ \langle 1,1,1,1 \rangle \nonumber -$ -which, in turn, has a lower cost than the \emph{syncopated} prototype that has a 0 in the first (i.e.\ strongest) metrical position e.g. +$. +The highest cost is given to the \emph{syncopated} prototype that has a rest in the first (i.e.\ strongest) metrical position e.g. $ \langle 0,1,1,1 \rangle \nonumber $. -The model analyses the cost for the whole rhythm-pattern and its sub-sequences at each metrical level determined by $\subdivision_\metriclevel$. The final output is a weighted sum of the costs by the number of sub-sequences in each level. +The model analyses the cost for the whole rhythm-pattern and for each of its sub-sequences at every metrical level determined by the subdivision factor. The final output is a sum of the costs per level weighted by the number of sub-sequences in each. \subsubsection{Toussaint 2002 `Metric Complexity' (\metrical)} \label{sec:background:models:tmc} -Toussaint's \emph{metric complexity} measure \cite{Toussaint02Metrical} defines the metrical weights as $\metricweight_\metriclevel = \metriclevel_{\textrm{max}} - \metriclevel +1$, thus stronger metrical position is associated with higher weight and the weakest position will be $\metricweight_{\metriclevel_{\textrm{max}}}=1$. The hypothesis of the model is that the level of syncopation is the difference between the metrical simplicity of the rhythm (i.e. the sum of the metrical weights for each note) and the maximum possible metrical simplicity (i.e. the sum of metrical weights for a rhythm containing the same number of notes but placed at strongest possible metrical positions). - -\subsubsection{Sioros and Guedes 2011 (\sioros)} -\label{sec:background:models:sg} -Sioros and Guedes~\cite{Sioros11,Sioros12} has three main hypotheses: First, accenting of notes affects perceived syncopation and should be included in the model (the only model in this study to do so). Second, humans try to minimise the syncopation of a particular note relative to its neighbours in each level of the metrical hierarchy. Third, syncopations at the beat level are more salient than those that occur in higher or lower metrical levels so the outcome should be scaled to reflect this~\cite{Sioros13}. 
- -\subsubsection{Keith 1991 (\keith)} -\label{sec:background:models:kth} - - -\subsubsection{Toussaint 2005 `Off-Beatness' (\offbeat)} -\label{sec:background:models:tob} - - -\subsubsection{G\'omez 2005 `Weighted Note-to-Beat Distance' (WNBD)} -\label{sec:background:models:wnbd} - +Toussaint's metric complexity measure \cite{Toussaint02Metrical} defines the metrical weights as $\metricweight_\metriclevel = \metriclevel_{\textrm{max}} - \metriclevel +1$, thus stronger metrical positions are associated with higher weights and the weakest position will be $\metricweight_{\metriclevel_{\textrm{max}}}=1$. The hypothesis of the model is that the level of syncopation is the difference between the metrical simplicity of the given rhythm (i.e. the sum of the metrical weights for each note) and the maximum possible metrical simplicity for a rhythm containing the same number of notes. \begin{table} \renewcommand{\arraystretch}{1.2} \centering - {\footnotesize \begin{tabular}{lccccccc} %\hline @@ -142,6 +129,31 @@ \label{ta:capabilities} \end{table} +\subsubsection{Sioros and Guedes 2011 (\sioros)} +\label{sec:background:models:sg} +Sioros and Guedes~\cite{Sioros11,Sioros12} also use the metrical hierarchy to determine syncopation. The main hypotheses are that humans try to minimise the syncopation of a particular note relative to its neighbours in each level of the metrical hierarchy, and that syncopations at the beat level are more salient than those that occur in higher or lower metrical levels. + +The metrical weights for this model are $\metricweight_\metriclevel = \metriclevel$ i.e. $\metricweightset = \langle 0, 1, 2, ... \rangle$. %for all $\metricweight_\metriclevel \in \metricweightset$. +The syncopation for a note is a function of its velocity, its position in the hierarchy and the weights of the previous and next notes in the rhythm sequence. + +\subsubsection{Keith 1991 (\keith)} +\label{sec:background:models:kth} +Keith's model \cite{Keith91} defines two types of syncopated events: a \emph{hesitation}, where a note ends off the beat (carrying a weight of 1), and an \emph{anticipation}, where a note begins off the beat (with a weight of 2). Where a note exhibits both a hesitation and an anticipation, a \emph{syncopation} is said to occur and the respective weights are summed to give a total of 3. The start and end times are considered off-beat if they are not divisible by the nearest power of two less than the note duration. + +\subsubsection{Toussaint 2005 `Off-Beatness' (\offbeat)} + +\label{sec:background:models:tob} + +The off-beatness measure~\cite{Toussaint05Offbeatness} is a geometric model that treats the time-span of a rhythm sequence as a ${\timespansequence}$-unit cycle. The hypothesis, as applied to syncopation, is that syncopated events are those that occur in `off-beat' positions in the cycle; the definition of \emph{off-beatness} in this case being any position that does not fall on a regular subdivision of the cycle length ${\timespansequence}$, thus the model is unable to measure cycles where ${\timespansequence}$ is 1 or prime. + +\subsubsection{G\'omez 2005 `Weighted Note-to-Beat Distance' (WNBD)} + +\label{sec:background:models:wnbd} + +The WNBD model of G\'omez et al.~\cite{Gomez05} defines note events that start in between beats in the notated meter to be `off-beat', thus leading to syncopation. The syncopation value for a note is inversely related to its distance from the nearest beat and is assigned more weight if the note crosses over the following beat. 
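A rough sketch of the WNBD idea just described (hypothetical and simplified; the published model distinguishes further cases by where the note ends, which are omitted here):

def wnbd_note(onset, duration, beat_length, bar_length):
    beats = range(0, bar_length + 1, beat_length)
    # distance from the onset to the nearest beat, in beat units
    d = min(abs(onset - b) for b in beats) / float(beat_length)
    if d == 0:
        return 0.0  # notes starting on a beat are not syncopated
    next_beat = min(b for b in beats if b > onset)
    crossing = onset + duration > next_beat
    return (2.0 if crossing else 1.0) / d  # inverse distance; extra weight if crossing

# e.g. at 4 TPQ in 4/4 (beats every 4 ticks), the second Son clave note (3,1,1):
print(wnbd_note(3, 1, 4, 16))  # 4.0: one tick before a beat, not crossing it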
+ + + %All the models use temporal features (i.e. onset time point and/or note duration) in the modelling. The SG model also process dynamic information of musical events (i.e. note velocity). We use the term \emph{monorhythm} to refer to any rhythm-pattern that is not polyrhythmic. All the models can measure syncopation of monorhythms, but only KTH, TOB and WNBD models can deal with polyrhythms. Finally, all the models can deal with rhythms (notated) in duple meter, but all models except KTH can cope with rhythms in a triple meter.
--- a/SMC2015latex/section/conclusion.tex Mon Apr 27 20:32:10 2015 +0100 +++ b/SMC2015latex/section/conclusion.tex Mon May 11 23:36:25 2015 +0100 @@ -1,5 +1,5 @@ \section{Conclusion} \label{sec:conclusion} -In this paper we have described SynPy, an open-source Python toolkit for syncopation calculation. We introduced the relevant rhythmical concepts behind the the toolkit: note, bar, note and velocity sequence, as well as parameters like TPQ, the subdivision and metrical weights sequence. We also briefly reviewed the hypothesis and mechanism of the seven syncopation models implemented in the toolkit. We then outlined the architecture of the toolkit and demonstrated the easiness of using the software by showing examples of command lines and input rhythm annotation. Finally we presented the syncopation predictions for the dataset in~\cite{•} calculated by our toolkit, providing a overall visualisation on the prediction ranges and capabilities of individual model. +In this paper we have described SynPy, an open-source Python toolkit for syncopation calculation. We have introduced the theoretical concepts underpinning the toolkit and briefly reviewed the hypothesis and mechanism of the seven implemented models. The architecture of the toolkit has been introduced in Section~\ref{sec:framework} and an example of command line usage shown demonstrating ease of use. We have presented the syncopation predictions calculated by SynPy for the dataset from~\cite{Song15thesis}, providing an overall visualisation of the prediction ranges and capabilities of each individual model. -The SynPy toolkit possesses several merits, including allowing input from multiple different sources of music data including standard MIDI files, and provides implementions seven common syncopation models found in the literature. It will be a valuable tool for any researchers who study syncopation models, enabling a level of comparison and testing for new models that was hitherto unavailable. The plugin architecture of the toolkit allows new models to be added easily in the future and open-source hosting in a repository on the soundsoftware.ac.uk servers ensures long term sustainability of the project code. \ No newline at end of file +The SynPy toolkit possesses a number of merits, including the ability to process arbitrary rhythm patterns, convenient input from different sources of music data including standard MIDI files and text annotations, and output to XML and JSON files for further data analysis. It will be a valuable tool for many researchers in the computational music analysis community. It will be particularly useful to those who study syncopation models because it enables a level of comparison and testing for new models that was hitherto unavailable. The plugin architecture of the toolkit allows new models to be added easily in the future and open-source hosting in a repository on the soundsoftware.ac.uk servers ensures long term sustainability of the project. \ No newline at end of file
--- a/SMC2015latex/section/dataset.tex Mon Apr 27 20:32:10 2015 +0100 +++ b/SMC2015latex/section/dataset.tex Mon May 11 23:36:25 2015 +0100 @@ -1,13 +1,8 @@ +% big figure has moved to the framework section so it appears on the penultimate page in the PDF. + \section{Syncopation Dataset} \label{sec:data} -The major outcome of the SynPy toolkit is to provide prediction of the level of syncopation of a any rhythm pattern that can be measured by a given model. As a demonstration, we apply all seven syncopation models on the rhythms patterns used as stimuli for the syncopation perceptual dataset from~\cite{Song15thesis, Song13}. This dataset includes 27 mono-rhythms in 4/4 meter, 36 monorhythms in 6/8 and 48 poly-rhythms in 4/4; altogether forming a set of 111 rhythm patterns. - -\begin{figure*}[t] -\centering -\includegraphics[width=0.85\textwidth]{images/allmodels.pdf} -\caption{Syncopation predictions of the seven models in the toolkit for the syncopation dataset from~\cite{Song15thesis}. The range of prediction values across all rhythm patterns is given for each model. Within each rhythm category, the rhythm patterns are arranged by tatum-rate (i.e. quarter-note rate then eighth-note rate) then in alphabetical order (the data set naming convention uses letters a-l to represent short rhythm components that make up longer patterns). Gaps in model output occur where a particular model is unable to process the specific rhtyhm category i.e. LHL, PRS, TMC, SG cannot process polyrhythms and KTH can only measure rhythms in duple meters.} -\label{fig:modelpredictions} -\end{figure*} +The major outcome of the SynPy toolkit is to provide predictions of the level of syncopation of any rhythm pattern that can be measured by a given model. As a demonstration, we apply all seven syncopation models to the rhythm patterns used as stimuli for the syncopation perceptual dataset from~\cite{Song15thesis, Song13}. This dataset includes 27 monorhythms in 4/4 meter, 36 monorhythms in 6/8 and 48 polyrhythms in 4/4; altogether forming a set of 111 rhythm patterns. Figure~\ref{fig:modelpredictions} plots the syncopation predictions of each individual model for each rhythm. It presents the different ranges of prediction values for each model and shows their capabilities in terms of rhythm categories (refer to Table~\ref{ta:capabilities}).
--- a/SMC2015latex/section/framework.tex Mon Apr 27 20:32:10 2015 +0100 +++ b/SMC2015latex/section/framework.tex Mon May 11 23:36:25 2015 +0100 @@ -3,8 +3,8 @@ \begin{figure}[t] \centering -\includegraphics[width=0.95\columnwidth]{images/framework.pdf} -\caption{Module hierarchy in the synpy toolkit: the top-level module provides a simple interface for the user to test different syncopation models. Musical constructs such as bars, velocity and note sequences, notes and time-signatures are defined in the `music objects' module; support for common procedures such as sequence concatenation and subdivision is provided in `basic functions'. Models and file reading components can be chosen as required by the user.\label{fig:framework}} +\includegraphics[width=0.9\columnwidth]{images/framework.pdf} +\caption{Module hierarchy in the SynPy toolkit: the top-level module provides a simple interface for the user to test different syncopation models. Musical constructs such as bars, velocity and note sequences, notes and time-signatures are defined in the `music objects' module; support for common procedures such as sequence concatenation and subdivision is provided in `basic functions'. Models and file reading components can be chosen as required by the user.\label{fig:framework}} \end{figure} The architecture of the toolkit is shown in Figure~\ref{fig:framework}. Syncopation values can be calculated for each bar in a given source of rhythm data along with selected statistics over all bars; the user specifies which model to use and supplies any special parameters that are required. Sources of rhythm data can be a bar object or a list of bars (detailed below in Section~\ref{sec:musicobjects}) or, alternatively, the name of a file containing music data. Where a model is unable to calculate a value for a given rhythm pattern, a `None' value is recorded for that bar and the indices of unmeasured bars reported in the output. If no user parameters are specified, the default parameters specified in the literature for each model are used. Output can optionally be saved directly to XML or JSON files. An example of usage in the Python interpreter is shown in Figure~\ref{ta:example}. @@ -49,13 +49,20 @@ V{1,0,0,0.5,0,0,1,0,0,0,0.5,0,0.5,0,0,0} \end{minted} } -\caption{Example rhythm annotation \code{.rhy} file containing two bars of the Son Clave rhythm. The first is expressed as a note sequence with resolution of four ticks per quarternote; the second is the same rhythm expressed as a velocity sequence (see Section~\ref{sec:background}).} +\caption{Example rhythm annotation file \code{clave.rhy} containing two bars of the Son Clave rhythm as discussed in Section~\ref{sec:background}. The first bar is expressed as a note sequence with a resolution of four ticks per quarter-note; the second is the same rhythm expressed as a velocity sequence.} \label{ta:clave} \end{figure} -Our \code{.rhy} annotation format is a light text syntax for descibing rhtyhm patterns directly in terms of note and velocity sequences (see Figure~\ref{ta:clave}). The full syntax specification is given in Backus Naur Form on the toolkit page \cite{Song14URL}. +Our \code{.rhy} annotation format is a lightweight text syntax for describing rhythm patterns directly in terms of note and velocity sequences (see Figure~\ref{ta:clave}). The full syntax specification is given in Backus Naur Form on the toolkit repository \cite{Song14URL}. The MIDI file reader can open type 0 and type 1 standard MIDI files and select a given track to read rhythm from. 
Notes with zero delta time between them (i.e. chords) are treated as the same event for the purposes of creating note sequences from the MIDI stream. Time-signature and tempo events encoded in the MIDI stream are assumed to correctly describe those parameters of the recorded music, so it is recommended that the user uses correctly annotated and quantised MIDI files. +\begin{figure*}[t] +\centering +\includegraphics[width=0.85\textwidth]{images/allmodels.pdf} +\caption{Syncopation predictions of the seven models in the toolkit for the syncopation dataset from~\cite{Song15thesis}. The range of prediction values across all rhythm patterns is given for each model. Within each rhythm category, the rhythm patterns are arranged by tatum-rate (i.e. quarter-note rate then eighth-note rate) then in alphabetical order (the data set naming convention uses letters a-l to represent short rhythm components that make up longer patterns). Gaps in model output occur where a particular model is unable to process the specific rhythm category, i.e. LHL, PRS, TMC, SG cannot process polyrhythms and KTH can only measure rhythms in duple meters.} +\label{fig:modelpredictions} +\end{figure*} + \subsection{Plugin architecture} The system architecture has been designed to allow new models to be added easily. Models have a common interface, exposing a single function that will return the syncopation value for a bar of music. Optional parameters may be supplied as a Python dictionary if the user wishes to specify settings different from those given in the literature for a specific model.
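As an illustration of that interface, a new model plugin might be sketched as follows (a hypothetical toy model, not one of the seven implemented; the get_syncopation(bar, parameters) signature and bar.get_velocity_sequence() mirror the existing modules such as SG.py):

# toy_model.py -- hypothetical plugin sketch following the common model interface.
def get_syncopation(bar, parameters=None):
    if parameters is None:
        parameters = {}
    scale = parameters.get('scale', 1.0)  # example of an optional parameter
    velocities = bar.get_velocity_sequence()
    # toy measure: total normalised velocity on odd (metrically weaker) tick positions
    return scale * sum(v for i, v in enumerate(velocities) if i % 2 == 1)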
--- a/SMC2015latex/section/introduction.tex Mon Apr 27 20:32:10 2015 +0100 +++ b/SMC2015latex/section/introduction.tex Mon May 11 23:36:25 2015 +0100 @@ -1,11 +1,11 @@ \section{Introduction} \label{sec:introduction} -Syncopation is a fundamental feature of rhythm in music and a crucial aspect of musical character in many styles and cultures. Having comprehensive models to capture syncopation perception allows us to better understand the broader aspects of music perception. Over the last thirty years, several modelling approaches for syncopation have been developed and heavily used in studies in multiple disciplines~\cite{LHL84,Pressing97,Toussaint02Metrical,Sioros11,Keith91,Toussaint05Offbeatness,Gomez05,Keller_Schubert11}. To date, formal investigations on the links between syncopation and music perception subjects such as meter induction~\cite{Povel_Essens85, Fitch_Rosenfeld07}, emotion~\cite{Keller_Schubert11}, groove~\cite{Madison13, Witek14} and neurophysiological responses~\cite{Winkler09, Vuust11}, have largely relied on quantitative measures of syncopation. However, until now there has not been a comprehensive reference implementation of the different algorithms available to facilitate quantifying syncopation. +Syncopation is a fundamental feature of rhythm in music and a crucial aspect of musical character in many styles and cultures. Having comprehensive models to capture syncopation perception allows us to better understand the broader aspects of music perception. Over the last thirty years, several modelling approaches for syncopation have been developed and widely used in studies in multiple disciplines~\cite{LHL84,Pressing97,Toussaint02Metrical,Sioros11,Keith91,Toussaint05Offbeatness,Gomez05,Keller_Schubert11}. To date, formal investigations on the links between syncopation and music perception subjects such as meter induction~\cite{Povel_Essens85, Fitch_Rosenfeld07}, emotion~\cite{Keller_Schubert11}, groove~\cite{Madison13, Witek14} and neurophysiological responses~\cite{Winkler09, Vuust11}, have largely relied on quantitative measures of syncopation. However, until now there has not been a comprehensive reference implementation of the different algorithms available to facilitate quantifying syncopation. -In~\cite{Song15thesis}, Song provides a consolidated mathematical framework and in-depth review of seven widely used syncopation models: Longuet-Higgins and Lee~\cite{LHL84}, Pressing~\cite{Pressing97,Pressing93}, Toussaint's Metric Complexity~\cite{Toussaint02Metrical}, Sioros and Guedes \cite{Sioros11,Sioros12}, Keith~\cite{Keith91}, Toussaint's off-beatness measure~\cite{Toussaint05Offbeatness} and G\'omez et al.'s Weighted Note-to-Beat Distance~\cite{Gomez05}. With the exception of Sioros and Guedes' model, code for which was open-sourced as part of the Kinetic project~\ref{Sioros11URL}, reference code for the models has not previously been publically available. -Based on this mathematical framework, the SynPy toolkit provides implementations of these syncopation models in the Python programming language. +In~\cite{Song15thesis}, Song provides a consolidated mathematical framework and in-depth review of seven widely used syncopation models: Longuet-Higgins and Lee~\cite{LHL84}, Pressing~\cite{Pressing97,Pressing93}, Toussaint's Metric Complexity~\cite{Toussaint02Metrical}, Sioros and Guedes \cite{Sioros11,Sioros12}, Keith~\cite{Keith91}, Toussaint's off-beatness measure~\cite{Toussaint05Offbeatness} and G\'omez et al.'s Weighted Note-to-Beat Distance~\cite{Gomez05}. 
With the exception of Sioros and Guedes' model, code for which was open-sourced as part of the Kinetic project~\cite{Sioros11URL}, reference code for the models has not previously been publicly available. +Based on this mathematical framework, the SynPy toolkit (available from the repository at~\cite{Song14URL}) provides implementations of these syncopation models in the Python programming language. -The toolkit not only provides the first open-source implementation of these models in a unified framework but also allows convenient data input from standard MIDI files and text-based rhythm annotations. Multiple bars of music can be processed, reporting syncopation values bar by bar as well as various descriptive statistics across a whole piece. Strengths of the toolkit also include easy output to XML and JSON files plus the ability to accept arbitrary rhythm patterns as well as time-signature and tempo changes. In addition, the toolkit defines a common interface for syncopation models, providing a simple plugin architecture for future extensibility. +The toolkit not only provides the first open-source implementation of these models in a unified framework but also allows convenient data input from standard MIDI files and text-based rhythm annotations. Multiple bars of music can be processed, reporting syncopation values bar by bar as well as descriptive statistics across a whole piece. Strengths of the toolkit also include easy output to XML and JSON files plus the ability to accept arbitrary rhythm patterns as well as time-signature and tempo changes. In addition, the toolkit defines a common interface for syncopation models, providing a simple plugin architecture for future extensibility. -In Section~\ref{sec:background} we introduce mathematical representations of a few key rhythmic concepts that form the basis of the toolkit then briefly review seven syncopation models that have been implemented. In Section~\ref{sec:framework} we outline the functional requirements and architecture of SynPy, describing input sources, options and usage. +In Section~\ref{sec:background} we introduce mathematical representations of a few key rhythmic concepts that form the basis of the toolkit, then briefly review the seven syncopation models that have been implemented. In Section~\ref{sec:framework} we outline the architecture of SynPy, describing input sources, options and usage.
--- a/SMC2015latex/syncopation_toolkit.tex Mon Apr 27 20:32:10 2015 +0100 +++ b/SMC2015latex/syncopation_toolkit.tex Mon May 11 23:36:25 2015 +0100 @@ -137,9 +137,9 @@ % \begin{abstract} -In this paper we present SynPy, an open-source software toolkit for quantifying syncopation. The toolkit provides implementations for seven widely known syncopation models using a simple plugin architecture for easy extensibility. Synpy can process multiple bars of music containing arbitrary rhythm patterns and also accepts time-signature and tempo changes. The tools are easy to use allowing input from various sources including text annotations and standard MIDI files. +In this paper we present SynPy, an open-source software toolkit for quantifying syncopation. It is flexible yet easy to use, providing the first comprehensive set of implementations for seven widely known syncopation models using a simple plugin architecture for extensibility. SynPy is able to process multiple bars of music containing arbitrary rhythm patterns and can accept time-signature and tempo changes within a piece. The toolkit can take input from various sources including text annotations and standard MIDI files. Results can also be output to XML and JSON file formats. -This toolkit will be a valuable tool for researchers studying syncopation modelling. It enables quantitative comparison of existing models and also provides a convenient platform for developement and testing of new models. +This toolkit will be valuable to the computational music analysis community, meeting the needs of a broad range of studies where a quantitative measure of syncopation is required. It facilitates a new degree of comparison for existing syncopation models and also provides a convenient platform for the development and testing of new models. \end{abstract} %
--- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/Syncopation models/results.csv Mon May 11 23:36:25 2015 +0100 @@ -0,0 +1,112 @@ +Rhythm, LHL, PRS, TMC, SG, TOB, KTH, WNBD +ab.rhy,3,7.5,2,0.520833333333,1,3,0 +abab.rhy,5,12.5,5,1.27604166667,2,6,4.0 +ac.rhy,2,5.0,1,0.3125,0,3,0 +ad.rhy,2,5.5,2,0.451388888889,1,2,0 +adad.rhy,4,10.5,5,1.20659722222,2,4,8.0 +af.rhy,None,None,None,None,1,3,3.0 +ag.rhy,None,None,None,None,0,3,3.0 +ah.rhy,None,None,None,None,1,6,9.0 +aj.rhy,None,None,None,None,1,3,3.0 +ak.rhy,None,None,None,None,0,3,3.0 +al.rhy,None,None,None,None,1,6,9.0 +ba.rhy,3,7.5,2,0.510416666667,1,3,0 +baba.rhy,6,12.5,5,1.23958333333,2,6,4.0 +bb.rhy,3,10.0,3,1.03125,2,6,0 +bbbb.rhy,6,15.0,7,2.515625,4,12,16.0 +bc.rhy,2,6.5,2,0.423611111111,1,1,0 +bcbc.rhy,3,11.5,5,1.15277777778,2,2,4.0 +bd.rhy,2,7.0,2,0.5625,2,3,0 +bdbd.rhy,4,12.0,5,1.421875,4,6,12.0 +bf.rhy,None,None,None,None,0,5,6.0 +bg.rhy,None,None,None,None,0,4,3.0 +bh.rhy,None,None,None,None,0,6,12.0 +bj.rhy,None,None,None,None,0,3,6.0 +bk.rhy,None,None,None,None,0,4,3.0 +bl.rhy,None,None,None,None,0,6,12.0 +ca.rhy,-1,0,0,0.0,0,0,0 +cb.rhy,1,4.5,1,0.208333333333,1,1,0 +cbcb.rhy,1,7.5,2,0.486111111111,2,2,4.0 +cc.rhy,-1,1.0,0,0.0,0,0,0 +cd.rhy,-1,3.5,0,0.0,1,0,0 +cdcd.rhy,-1,6.5,0,0.0,2,0,4.0 +cf.rhy,None,None,None,None,1,3,3.0 +cg.rhy,None,None,None,None,0,3,6.0 +ch.rhy,None,None,None,None,1,6,9.0 +cj.rhy,None,None,None,None,1,3,3.0 +ck.rhy,None,None,None,None,0,3,6.0 +cl.rhy,None,None,None,None,1,6,9.0 +da.rhy,1,2.5,1,0.260416666667,1,2,0 +dada.rhy,2,4.5,2,0.607638888889,2,4,8.0 +db.rhy,1,5.0,1,0.46875,2,3,0 +dbdb.rhy,2,8.0,2,1.09375,4,6,12.0 +dc.rhy,-1,2.5,0,0.0,1,0,0 +dcdc.rhy,-1,4.5,0,0.0,2,0,4.0 +dd.rhy,-1,2.0,0,0.0,2,0,0 +dddd.rhy,-1,3.0,0,0.0,4,0,8.0 +df.rhy,None,None,None,None,0,5,3.0 +dg.rhy,None,None,None,None,0,3,6.0 +dh.rhy,None,None,None,None,0,6,9.0 +dj.rhy,None,None,None,None,0,3,3.0 +dk.rhy,None,None,None,None,0,3,6.0 +dl.rhy,None,None,None,None,0,6,9.0 +fa.rhy,None,None,None,None,0,3,3.0 +fb.rhy,None,None,None,None,0,5,6.0 +fc.rhy,None,None,None,None,0,3,3.0 +fd.rhy,None,None,None,None,0,5,3.0 +ff.rhy,3,10.0,3,0.90625,1,None,12.0 +fg.rhy,3,5.0,None,None,0,None,12.0 +fh.rhy,3,10.0,3,0.947916666667,1,None,15.0 +fj.rhy,2,7.5,2,0.819444444444,1,None,9.0 +fk.rhy,2,7.5,2,0.375,0,None,9.0 +fl.rhy,2,7.0,2,0.479166666667,1,None,12.0 +ga.rhy,None,None,None,None,1,3,3.0 +gb.rhy,None,None,None,None,0,4,3.0 +gc.rhy,None,None,None,None,1,3,6.0 +gd.rhy,None,None,None,None,0,3,6.0 +gf.rhy,3,10.0,3,0.9375,2,None,12.0 +gg.rhy,3,10.0,3,0.875,1,None,12.0 +gh.rhy,3,10.0,3,0.979166666667,2,None,15.0 +gj.rhy,2,8.5,2,1.25,2,None,9.0 +gk.rhy,2,8.5,2,0.805555555556,1,None,9.0 +gl.rhy,2,8.0,2,0.909722222222,2,None,12.0 +ha.rhy,None,None,None,None,1,6,9.0 +hb.rhy,None,None,None,None,0,6,12.0 +hc.rhy,None,None,None,None,1,6,9.0 +hd.rhy,None,None,None,None,0,6,9.0 +hf.rhy,3,10.0,3,0.989583333333,2,None,15.0 +hg.rhy,3,10.0,3,0.927083333333,1,None,15.0 +hh.rhy,3,10.0,3,1.03125,2,None,18.0 +hj.rhy,2,7.5,2,0.902777777778,2,None,12.0 +hk.rhy,2,7.5,2,0.458333333333,1,None,12.0 +hl.rhy,2,7.0,2,0.5625,2,None,15.0 +ja.rhy,None,None,None,None,0,3,3.0 +jb.rhy,None,None,None,None,0,3,6.0 +jc.rhy,None,None,None,None,0,3,3.0 +jd.rhy,None,None,None,None,0,3,3.0 +jf.rhy,1,5.5,1,0.59375,1,None,9.0 +jg.rhy,1,1.0,None,None,0,None,9.0 +jh.rhy,1,5.5,1,0.635416666667,1,None,12.0 +jj.rhy,0,6.0,0,0.333333333333,1,None,6.0 +jk.rhy,0,4.5,0,0.375,0,None,6.0 +jl.rhy,0,5.0,0,0.166666666667,1,None,9.0 +ka.rhy,None,None,None,None,1,3,3.0 +kb.rhy,None,None,None,None,0,4,3.0 
+kc.rhy,None,None,None,None,1,3,6.0 +kd.rhy,None,None,None,None,0,3,6.0 +kf.rhy,1,5.5,1,0.375,2,None,9.0 +kg.rhy,1,5.5,1,0.625,1,None,9.0 +kh.rhy,1,5.5,1,0.416666666667,2,None,12.0 +kj.rhy,0,5.5,0,0.375,2,None,6.0 +kk.rhy,0,4.0,0,0.416666666667,1,None,6.0 +kl.rhy,0,4.5,0,0.208333333333,2,None,9.0 +la.rhy,None,None,None,None,1,6,9.0 +lb.rhy,None,None,None,None,0,6,12.0 +lc.rhy,None,None,None,None,1,6,9.0 +ld.rhy,None,None,None,None,0,6,9.0 +lf.rhy,1,5.0,1,0.427083333333,2,None,12.0 +lg.rhy,1,5.0,1,0.677083333333,1,None,12.0 +lh.rhy,1,5.0,1,0.46875,2,None,15.0 +lj.rhy,0,5.0,0,0.166666666667,2,None,9.0 +lk.rhy,0,3.5,0,0.208333333333,1,None,9.0 +ll.rhy,-1,2.0,0,0.0,2,None,12.0
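For working with this new results file, a short sketch using only the Python standard library (the path is assumed relative to the 'Syncopation models' directory; 'None' entries mark rhythms a given model cannot measure, as discussed in the dataset section):

import csv

with open('results.csv') as f:  # i.e. 'Syncopation models/results.csv'
    rows = list(csv.DictReader(f, skipinitialspace=True))

for model in ('LHL', 'PRS', 'TMC', 'SG', 'TOB', 'KTH', 'WNBD'):
    values = [float(row[model]) for row in rows if row[model] != 'None']
    print('%s: %d rhythms measured, range %g to %g'
          % (model, len(values), min(values), max(values)))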
--- a/Syncopation models/synpy/SG.py Mon Apr 27 20:32:10 2015 +0100 +++ b/Syncopation models/synpy/SG.py Mon May 11 23:36:25 2015 +0100 @@ -4,7 +4,7 @@ ''' -from basic_functions import get_H, velocity_sequence_to_min_timespan, get_rhythm_category, upsample_velocity_sequence +from basic_functions import get_H, velocity_sequence_to_min_timespan, get_rhythm_category, upsample_velocity_sequence, find_rhythm_Lmax from parameter_setter import are_parameters_valid def get_syncopation(bar, parameters = None): @@ -14,11 +14,13 @@ if get_rhythm_category(velocitySequence, subdivisionSequence) == 'poly': print 'Warning: SG model detects polyrhythms so returning None.' + elif bar.is_empty(): + print 'Warning: SG model detects empty bar so returning None.' else: - #velocitySequence = velocity_sequence_to_min_timespan(velocitySequence) # converting to the minimum time-span format + velocitySequence = velocity_sequence_to_min_timespan(velocitySequence) # converting to the minimum time-span format # set the defaults - Lmax = 5 + Lmax = 10 weightSequence = range(Lmax+1) # i.e. [0, 1, ..., Lmax] if parameters!= None: if 'Lmax' in parameters: @@ -29,60 +31,65 @@ if not are_parameters_valid(Lmax, weightSequence, subdivisionSequence): print 'Error: the given parameters are not valid.' else: - # generate the metrical weights of level Lmax, and upsample(stretch) the velocity sequence to match the length of H - H = get_H(weightSequence,subdivisionSequence, Lmax) - - velocitySequence = upsample_velocity_sequence(velocitySequence, len(H)) + Lmax = find_rhythm_Lmax(velocitySequence, Lmax, weightSequence, subdivisionSequence) + if Lmax != None: + # generate the metrical weights of level Lmax, and upsample(stretch) the velocity sequence to match the length of H + H = get_H(weightSequence,subdivisionSequence, Lmax) + #print len(velocitySequence) + #velocitySequence = upsample_velocity_sequence(velocitySequence, len(H)) + #print len(velocitySequence) + + # The ave_dif_neighbours function calculates the (weighted) average of the difference between the note at a certain index and its neighbours in a certain metrical level + def ave_dif_neighbours(index, level): - # The ave_dif_neighbours function calculates the (weighted) average of the difference between the note at a certain index and its neighbours in a certain metrical level - def ave_dif_neighbours(index, level): + averages = [] + parameterGarma = 0.8 + + # The findPre function is to calculate the index of the previous neighbour at a certain metrical level. + def find_pre(index, level): + preIndex = (index - 1)%len(H) # using % is to restrict the index varies within range(0, len(H)) + while(H[preIndex] > level): + preIndex = (preIndex - 1)%len(H) + #print 'preIndex', preIndex + return preIndex - averages = [] - parameterGarma = 0.8 - - # The findPre function is to calculate the index of the previous neighbour at a certain metrical level. - def find_pre(index, level): - preIndex = (index - 1)%len(H) # using % is to restrict the index varies within range(0, len(H)) - while(H[preIndex] > level): - preIndex = (preIndex - 1)%len(H) - #print 'preIndex', preIndex - return preIndex + # The findPost function is to calculate the index of the next neighbour at a certain metrical level. 
+ def find_post(index, level): + postIndex = (index + 1)%len(H) + while(H[postIndex] > level): + postIndex = (postIndex + 1)%len(H) + #print 'postIndex', postIndex + return postIndex + + # The dif function is to calculate a difference level factor between two notes (at note position index1 and index 2) in velocity sequence + def dif(index1,index2): + parameterBeta = 0.5 + dif_v = velocitySequence[index1]-velocitySequence[index2] + dif_h = abs(H[index1]-H[index2]) + diffactor = (parameterBeta*dif_h/4+1-parameterBeta) + if diffactor>1: + return dif_v + else: + return dif_v*diffactor - # The findPost function is to calculate the index of the next neighbour at a certain metrical level. - def find_post(index, level): - postIndex = (index + 1)%len(H) - while(H[postIndex] > level): - postIndex = (postIndex + 1)%len(H) - #print 'postIndex', postIndex - return postIndex - - # The dif function is to calculate a difference level factor between two notes (at note position index1 and index 2) in velocity sequence - def dif(index1,index2): - parameterBeta = 0.5 - dif_v = velocitySequence[index1]-velocitySequence[index2] - dif_h = abs(H[index1]-H[index2]) - dif = dif_v*(parameterBeta*dif_h/4+1-parameterBeta) - #print 'dif', dif - return dif - # From the highest to the lowest metrical levels where the current note resides, calculate the difference between the note and its neighbours at that level - for l in range(level, max(H)+1): - ave = (parameterGarma*dif(index,find_pre(index,l))+dif(index,find_post(index,l)) )/(1+parameterGarma) - averages.append(ave) - #print 'averages', averages - return averages + # From the highest to the lowest metrical levels where the current note resides, calculate the difference between the note and its neighbours at that level + for l in range(level, max(H)+1): + ave = (parameterGarma*dif(index,find_pre(index,l))+dif(index,find_post(index,l)) )/(1+parameterGarma) + averages.append(ave) + return averages - # if the upsampling was successfully done - if velocitySequence != None: - syncopation = 0 - # Calculate the syncopation value for each note - for index in range(len(velocitySequence)): - if velocitySequence[index] != 0: # Onset detected - h = H[index] - # Syncopation potential according to its metrical level, which is equal to the metrical weight - potential = 1 - pow(0.5,h) - level = h # Metrical weight is equal to its metrical level - syncopation += min(ave_dif_neighbours(index, level))*potential - else: - print 'Try giving a bigger Lmax so that the rhythm sequence can be measured by the matching metrical weights sequence (H).' + # if the upsampling was successfully done + if velocitySequence != None: + syncopation = 0 + # Calculate the syncopation value for each note + for index in range(len(velocitySequence)): + if velocitySequence[index] != 0: # Onset detected + h = H[index] + # Syncopation potential according to its metrical level, which is equal to the metrical weight + potential = 1 - pow(0.5,h) + level = h # Metrical weight is equal to its metrical level + syncopation += min(ave_dif_neighbours(index, level))*potential + else: + print 'Try giving a bigger Lmax so that the rhythm sequence can be measured by the matching metrical weights sequence (H).' return syncopation
--- a/Syncopation models/synpy/TMC.py Mon Apr 27 20:32:10 2015 +0100 +++ b/Syncopation models/synpy/TMC.py Mon May 11 23:36:25 2015 +0100 @@ -4,7 +4,7 @@ ''' -from basic_functions import get_H, ceiling, velocity_sequence_to_min_timespan, get_rhythm_category +from basic_functions import get_H, ceiling, velocity_sequence_to_min_timespan, get_rhythm_category, find_rhythm_Lmax from parameter_setter import are_parameters_valid # The get_metricity function calculates the metricity for a binary sequence with given sequence of metrical weights in a certain metrical level. @@ -22,29 +22,7 @@ maxMetricity = maxMetricity+H[i] return maxMetricity -# find the metrical level L that contains the same number of metrical positions as the length of the binary sequence -# if the given Lmax is not big enough to analyse the given sequence, request a bigger Lmax -def find_L(rhythmSequence, Lmax, weightSequence, subdivisionSequence): - L = Lmax - # initially assuming the Lmax is not big enough - needBiggerLmax = True - - # from the lowest metrical level (Lmax) to the highest, find the matching metrical level that - # has the same length as the length of binary sequence - while L >= 0: - if len(get_H(weightSequence,subdivisionSequence, L)) == len(rhythmSequence): - needBiggerLmax = False - break - else: - L = L - 1 - - # if need a bigger Lmax, print error message and return None; otherwise return the matching metrical level L - if needBiggerLmax: - print 'Error: needs a bigger L_max (i.e. the lowest metrical level) to match the given rhythm sequence.' - L = None - - return L # The get_syncopation function calculates the syncopation value of the given sequence for TMC model. #def get_syncopation(seq, subdivision_seq, weight_seq, L_max, rhythm_category): @@ -71,7 +49,7 @@ print 'Error: the given parameters are not valid.' else: binarySequence = velocity_sequence_to_min_timespan(binarySequence) # converting to the minimum time-span format - L = find_L(binarySequence, Lmax, weightSequence, subdivisionSequence) + L = find_rhythm_Lmax(binarySequence, Lmax, weightSequence, subdivisionSequence) if L != None: #? generate the metrical weights of the lowest level, #? using the last matching_level number of elements in the weightSequence, to make sure the last element is 1
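The idea behind get_metricity and the maximum-metricity calculation above is compact enough to sketch standalone (hypothetical code, not the module's own; the weight vector assumes a simple-duple hierarchy with Lmax = 4):

def tmc_syncopation(binary_sequence, H):
    # metricity: sum of the metrical weights at positions carrying a note
    metricity = sum(h for onset, h in zip(binary_sequence, H) if onset)
    # maximum metricity: the same number of notes on the strongest positions
    note_count = sum(1 for onset in binary_sequence if onset)
    max_metricity = sum(sorted(H, reverse=True)[:note_count])
    return max_metricity - metricity

H = [5, 1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1]   # weights w_l = Lmax - l + 1
clave = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0]
print(tmc_syncopation(clave, H))  # 17 - 13 = 4 under these illustrative weights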
--- a/Syncopation models/synpy/basic_functions.py Mon Apr 27 20:32:10 2015 +0100 +++ b/Syncopation models/synpy/basic_functions.py Mon May 11 23:36:25 2015 +0100 @@ -185,6 +185,31 @@ def string_to_sequence(inputString,typeFunction=float): return map(typeFunction, inputString.split(',')) +# find the metrical level L that contains the same number of metrical positions as the length of the binary sequence +# if the given Lmax is not big enough to analyse the given sequence, request a bigger Lmax +def find_rhythm_Lmax(rhythmSequence, Lmax, weightSequence, subdivisionSequence): + L = Lmax + + # initially assuming the Lmax is not big enough + needBiggerLmax = True + + # from the lowest metrical level (Lmax) to the highest, find the matching metrical level that + # has the same length as the length of binary sequence + while L >= 0: + if len(get_H(weightSequence,subdivisionSequence, L)) == len(rhythmSequence): + needBiggerLmax = False + break + else: + L = L - 1 + + # if need a bigger Lmax, print error message and return None; otherwise return the matching metrical level L + if needBiggerLmax: + print 'Error: needs a bigger L_max (i.e. the lowest metrical level) to match the given rhythm sequence.' + L = None + + return L + + # # The get_subdivision_seq function returns the subdivision sequence of several common time-signatures defined by GTTM, # # or ask for the top three level of subdivision_seq manually set by the user. # def get_subdivision_seq(timesig, L_max):
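A usage sketch for the relocated helper (illustrative values; it assumes a simple-duple subdivision sequence in which level L spans the product of the first L+1 subdivision factors, so level 4 has 16 positions):

from basic_functions import find_rhythm_Lmax

weights = [0, 1, 2, 3, 4, 5]         # one weight per metrical level
subdivisions = [1, 2, 2, 2, 2, 2]    # bar -> halves -> quarters -> ...
clave = [1, 0, 0, 0.5, 0, 0, 1, 0, 0, 0, 0.5, 0, 0.5, 0, 0, 0]

# Searching from the lowest level (Lmax = 5) towards the highest, the helper
# returns the level whose weight vector H matches the sequence length (16),
# or None if no level matches -- expected to be 4 under the assumptions above.
print(find_rhythm_Lmax(clave, 5, weights, subdivisions))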
--- a/Syncopation models/synpy/music_objects.py Mon Apr 27 20:32:10 2015 +0100 +++ b/Syncopation models/synpy/music_objects.py Mon May 11 23:36:25 2015 +0100 @@ -177,12 +177,20 @@ self.velocitySequence = rhythmSequence self.noteSequence = None - self.tpq = ticksPerQuarter - self.qpm = qpmTempo if isinstance(timeSignature, basestring): self.timeSignature = TimeSignature(timeSignature) else: self.timeSignature = timeSignature + + if ticksPerQuarter==None: + self.tpq = len(self.get_velocity_sequence())*self.timeSignature.get_denominator()/(4*self.timeSignature.get_numerator()) + else: + self.tpq = ticksPerQuarter + + self.qpm = qpmTempo + + + self.nextBar = nextBar self.prevBar = prevBar
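As a worked example of the TPQ inference added here (hypothetical values chosen to match the clave example): a bar in n/d time spans 4n/d quarter-notes, so the tick rate follows directly from the velocity sequence length.

length = 16                      # points in the velocity sequence
numerator, denominator = 4, 4    # a 4/4 time-signature
tpq = length * denominator // (4 * numerator)  # // for portability; the Python 2 code above uses /
print(tpq)  # 4 ticks per quarter-note, as in the Son clave example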