changeset 66:194ddc4bd6b0

Doc updates for 2015
author Chris Cannam
date Wed, 12 Aug 2015 19:34:12 +0100
parents 029159daf3f1
children 170b90ac1105
files audio_tempo_estimation/qm-tempotracker/README.txt multiple_f0_estimation/silvet-live/README.txt multiple_f0_estimation/silvet/README.txt structural_segmentation/qm-segmenter/README.txt structural_segmentation/segmentino/README.txt vamp-plugins_abstract/Makefile vamp-plugins_abstract/qmvamp-mirex2015.tex
diffstat 7 files changed, 374 insertions(+), 9 deletions(-)
--- a/audio_tempo_estimation/qm-tempotracker/README.txt	Wed Aug 12 19:15:29 2015 +0100
+++ b/audio_tempo_estimation/qm-tempotracker/README.txt	Wed Aug 12 19:34:12 2015 +0100
@@ -1,11 +1,13 @@
-# MIREX 2013
+
+# MIREX 2015 submission
 #
 # Audio Tempo Estimation
 #
-# Luis Figueira, luis.figueira@eecs.qmul.ac.uk
+# Chris Cannam, c.cannam@qmul.ac.uk
 # Centre for Digital Music
 # Queen Mary, University of London
 
+
 # Architecture
 
 - Linux	64-bit
--- a/multiple_f0_estimation/silvet-live/README.txt	Wed Aug 12 19:15:29 2015 +0100
+++ b/multiple_f0_estimation/silvet-live/README.txt	Wed Aug 12 19:34:12 2015 +0100
@@ -2,7 +2,11 @@
 # MIREX 2015 submission
 #
 # Multiple Fundamental Frequency Estimation & Tracking
+#
 # Chris Cannam, c.cannam@qmul.ac.uk
+# Centre for Digital Music
+# Queen Mary, University of London
+
 
   This submission uses the Silvet note estimation Vamp plugin in "live
   mode", running in Sonic Annotator.
--- a/multiple_f0_estimation/silvet/README.txt	Wed Aug 12 19:15:29 2015 +0100
+++ b/multiple_f0_estimation/silvet/README.txt	Wed Aug 12 19:34:12 2015 +0100
@@ -2,7 +2,10 @@
 # MIREX 2015 submission
 #
 # Multiple Fundamental Frequency Estimation & Tracking
+#
 # Chris Cannam, c.cannam@qmul.ac.uk
+# Centre for Digital Music
+# Queen Mary, University of London
 
   This submission uses the Silvet note estimation Vamp plugin running
   in Sonic Annotator.
--- a/structural_segmentation/qm-segmenter/README.txt	Wed Aug 12 19:15:29 2015 +0100
+++ b/structural_segmentation/qm-segmenter/README.txt	Wed Aug 12 19:34:12 2015 +0100
@@ -1,8 +1,14 @@
-# MIREX 2013 submission
+
+# MIREX 2015 submission
 #
 # Structural Segmentation
-# QM Segmenter by Mark Levy
-# Prepared by Chris Cannam, chris.cannam@eecs.qmul.ac.uk
+#
+# QM Segmenter, by Mark Levy
+#
+# Chris Cannam, c.cannam@qmul.ac.uk
+# Centre for Digital Music
+# Queen Mary, University of London
+
 
 # Architecture
 
--- a/structural_segmentation/segmentino/README.txt	Wed Aug 12 19:15:29 2015 +0100
+++ b/structural_segmentation/segmentino/README.txt	Wed Aug 12 19:34:12 2015 +0100
@@ -1,8 +1,14 @@
-# MIREX 2013 submission
+
+# MIREX 2015 submission
 #
 # Structural Segmentation
-# Segmentino by Matthias Mauch
-# Prepared by Chris Cannam, chris.cannam@eecs.qmul.ac.uk
+#
+# Segmentino, by Matthias Mauch
+#
+# Chris Cannam, c.cannam@qmul.ac.uk
+# Centre for Digital Music
+# Queen Mary, University of London
+
 
 # Architecture
 
--- a/vamp-plugins_abstract/Makefile	Wed Aug 12 19:15:29 2015 +0100
+++ b/vamp-plugins_abstract/Makefile	Wed Aug 12 19:34:12 2015 +0100
@@ -1,4 +1,4 @@
-all: qmvamp-mirex2013.pdf qmvamp-mirex2014.pdf
+all: qmvamp-mirex2013.pdf qmvamp-mirex2014.pdf qmvamp-mirex2015.pdf
 
 qmvamp-mirex2013.pdf: qmvamp-mirex2013.tex qmvamp-mirex2013.bib
 	( echo q | xelatex qmvamp-mirex2013 ) && bibtex qmvamp-mirex2013 && xelatex qmvamp-mirex2013 && xelatex qmvamp-mirex2013
@@ -6,6 +6,10 @@
 qmvamp-mirex2014.pdf: qmvamp-mirex2014.tex qmvamp-mirex2014.bib
 	( echo q | xelatex qmvamp-mirex2014 ) && bibtex qmvamp-mirex2014 && xelatex qmvamp-mirex2014 && xelatex qmvamp-mirex2014
 
+qmvamp-mirex2015.pdf: qmvamp-mirex2015.tex qmvamp-mirex2014.bib
+	( echo q | xelatex qmvamp-mirex2015 ) && bibtex qmvamp-mirex2015 && xelatex qmvamp-mirex2015 && xelatex qmvamp-mirex2015
+
 clean:
 	rm -f qmvamp-mirex2013.bbl qmvamp-mirex2013.aux qmvamp-mirex2013.blg qmvamp-mirex2013.log 
 	rm -f qmvamp-mirex2014.bbl qmvamp-mirex2014.aux qmvamp-mirex2014.blg qmvamp-mirex2014.log 
+	rm -f qmvamp-mirex2015.bbl qmvamp-mirex2015.aux qmvamp-mirex2015.blg qmvamp-mirex2015.log 
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/vamp-plugins_abstract/qmvamp-mirex2015.tex	Wed Aug 12 19:34:12 2015 +0100
@@ -0,0 +1,340 @@
+% -----------------------------------------------
+% Template for MIREX 2010
+% (based on ISMIR 2010 template)
+% -----------------------------------------------
+
+\documentclass{article}
+\usepackage{mirex2010,amsmath,cite}
+\usepackage{graphicx}
+
+% Title.
+% ------
+\title{MIREX 2015:\\Vamp Plugins from the Centre for Digital Music}
+
+% Single address
+% To use with only one author or several with the same address
+% ---------------
+\oneauthor
+{Chris Cannam, Emmanouil Benetos, Matthias Mauch, Matthew E. P. Davies,}
+{Simon Dixon, Christian Landone, Katy Noland, and Dan Stowell}
+{Queen Mary, University of London \\ {\em chris.cannam@eecs.qmul.ac.uk}}
+
+\begin{document}
+%
+\maketitle
+%
+\begin{abstract}
+
+In this submission we offer for evaluation several audio feature
+extraction plugins in Vamp format.
+
+Some of these plugins are efficient implementations of relatively
+recent work, while others were developed several years ago and are no
+longer state-of-the-art. The methods implemented in this set of
+plugins are described in the literature and are referenced throughout
+this paper. All of the plugins are written in C++ and have been
+published under open source licences, in most cases the GPL.
+
+A number of these plugins were also submitted to the 2013 and 2014
+editions of MIREX: most of those are unchanged here and may offer a
+useful baseline for comparison across years. One plugin, the one
+submitted for the Multiple Fundamental Frequency Estimation and
+Tracking task, has been updated for this year's submission.
+
+\end{abstract}
+%
+\section{Introduction}\label{sec:introduction}
+
+The Vamp plugin format\footnote{http://vamp-plugins.org/} was
+developed at the Centre for Digital Music (C4DM) at Queen Mary,
+University of London, during 2005--2006 in response to a desire to
+publish work in a form that would be immediately useful to people
+outside this research field. The Vamp plugin format was published with
+an open source SDK, alongside the Sonic
+Visualiser~\cite{sonicvisualise2010} audio analysis application which
+provided a useful host for Vamp plugins.
+
+In subsequent years the Vamp format has become a moderately popular
+means of distributing methods from the Centre and other research
+groups. Several dozen Vamp plugins are now available from groups such
+as the Music Technology Group at UPF in Barcelona, the Sound and Music
+Computing group at INESC in Porto, the BBC, and others, as well as
+from the Centre for Digital Music.
+
+The plugins submitted for this evaluation are provided as a set of
+dynamic library files. Those with names starting ``QM'' are all
+provided in a single library file, the QM Vamp Plugins set, made
+available in binary form for Windows, OS X, and Linux from the Centre
+for Digital Music's download
+page.\footnote{http://vamp-plugins.org/plugin-doc/qm-vamp-plugins.html} These
+plugins come from a number of authors who are credited in this
+abstract and in the plugins' accompanying documentation.
+
+In addition to the QM Vamp Plugins set, this submission contains a
+number of separate plugins: the Chordino and Segmentino plugins from
+Matthias Mauch; the BeatRoot Vamp Plugin from Simon Dixon; OnsetsDS
+from Dan Stowell; and the Silvet note transcription plugin from
+Emmanouil Benetos and Chris Cannam.
+
+The plugins are all provided as 64-bit Linux shared objects depending
+on GNU libc 2.15 or newer and GNU libstdc++ 3.4.15 or newer. Sonic
+Annotator v1.0 is also
+required\footnote{http://code.soundsoftware.ac.uk/projects/sonic-annotator/}
+in order to run the task scripts.
+
+For an overview of this submission across all of the tasks and plugins
+it covers, please see the relevant repository at the SoundSoftware
+site.\footnote{http://code.soundsoftware.ac.uk/projects/mirex2013/}
+
+\section{Submissions by MIREX Task}
+
+\subsection{Audio Beat Tracking}
+
+\subsubsection{QM Tempo and Beat Tracker}
+\label{tempo_and_beat_tracker}
+
+The QM Tempo and Beat Tracker\cite{matthew2007a} Vamp plugin analyses
+a single channel of audio and estimates the positions of metrical
+beats within the music.
+
+This plugin uses the complex-domain onset detection method from~\cite{chris2003a} with a hybrid of the two-state beat tracking model
+proposed in~\cite{matthew2007a} and a dynamic programming method based
+on~\cite{ellis2007}. 
+
+To identify the tempo, the onset detection function is partitioned
+into 6-second frames with a 1.5-second increment. The autocorrelation
+function of each 6-second onset detection function is found and this
+is then passed through a perceptually weighted comb
+filterbank\cite{matthew2007a}. The successive comb filterbank output
+signals are grouped together into a matrix of observations of
+periodicity through time. The best path of periodicity through these
+observations is found using the Viterbi algorithm, where the
+transition matrix is defined as a diagonal Gaussian.
+
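+As a rough illustration of the periodicity analysis (a minimal Python
+sketch, not the plugin's own code), the strength of each candidate
+beat period can be read off the autocorrelation of one
+detection-function frame by summing it at multiples of each lag; a
+simple $1/\sqrt{\mathrm{lag}}$ weighting stands in here for the
+perceptually weighted comb filterbank:
+
+\begin{verbatim}
+import numpy as np
+
+def comb_strengths(acf, min_lag=8, max_lag=128, n_elements=4):
+    # acf: autocorrelation of one 6-second onset detection
+    # function frame (numpy array); returns one periodicity
+    # strength per candidate lag
+    strengths = np.zeros(max_lag)
+    for lag in range(min_lag, max_lag):
+        taps = [m * lag for m in range(1, n_elements + 1)
+                if m * lag < len(acf)]
+        if taps:
+            strengths[lag] = acf[taps].sum() / np.sqrt(lag)
+    return strengths
+\end{verbatim}
+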
+Given the estimates of periodicity, the beat locations are recovered
+by applying the dynamic programming algorithm\cite{ellis2007}. This
+process involves the calculation of a recursive cumulative score
+function and backtrace signal. The cumulative score indicates the
+likelihood of a beat existing at each sample of the onset detection
+function input, and the backtrace gives the location of the best
+previous beat given this point in time. Once the cumulative score and
+backtrace have been calculated for the whole input signal, the best
+path through beat locations is found by recursively sampling the
+backtrace signal from the end of the input signal back to the
+beginning.
+
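+As a minimal sketch of the beat-placement stage (a toy version of the
+cumulative-score and backtrace idea of~\cite{ellis2007}, assuming a
+single fixed beat period rather than the plugin's tracked tempo), the
+dynamic programming step might look like this:
+
+\begin{verbatim}
+import numpy as np
+
+def dp_beats(odf, period, tightness=100.0):
+    # odf: onset detection function (numpy array);
+    # period: beat period in detection-function samples (int)
+    n = len(odf)
+    cumscore = np.zeros(n)
+    backlink = np.full(n, -1, dtype=int)
+    gaps = np.arange(-2 * period, -(period // 2))
+    penalty = -tightness * np.log(-gaps / period) ** 2
+    for t in range(n):
+        prev = t + gaps
+        ok = prev >= 0
+        if not ok.any():
+            cumscore[t] = odf[t]
+            continue
+        scores = penalty[ok] + cumscore[prev[ok]]
+        best = int(np.argmax(scores))
+        cumscore[t] = odf[t] + scores[best]
+        backlink[t] = prev[ok][best]
+    # backtrace from the best-scoring frame in the final period
+    beats = [n - period + int(np.argmax(cumscore[n - period:]))]
+    while backlink[beats[-1]] >= 0:
+        beats.append(int(backlink[beats[-1]]))
+    return beats[::-1]
+\end{verbatim}
+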
+The QM Tempo and Beat Tracker plugin was written by Matthew
+Davies and Christian Landone.
+
+\subsubsection{BeatRoot}
+
+The BeatRoot Vamp plugin\footnote{http://code.soundsoftware.ac.uk/projects/beatroot-vamp/} is an open source Vamp plugin library that
+implements the BeatRoot beat-tracking method of Simon
+Dixon\cite{simon2001a}. The BeatRoot algorithm has been submitted to
+MIREX evaluation in earlier years\cite{simon2006a}; this plugin
+consists of the most recent BeatRoot code release, converted from Java
+to C++ and modified for plugin format.
+
+The BeatRoot plugin was written by Simon Dixon and Chris Cannam.
+
+\subsection{Audio Key Detection}
+
+\subsubsection{QM Key Detector}
+
+The QM Key Detector Vamp plugin continuously estimates the key of the
+music by comparing the degree to which a block-by-block chromagram
+correlates to stored key profiles for each major and minor key.
+
+This plugin uses the correlation method described in~\cite{krumhansl1990} and~\cite{gomez2006}, but with different tone
+profiles. The key profiles used in this implementation are drawn from
+analysis of Book I of the Well-Tempered Clavier by J S Bach, recorded
+at A=440 equal temperament, as described in~\cite{noland2007signal}.
+
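+The correlation step itself is straightforward; the following sketch
+(using the classic Krumhansl-Kessler profile values purely for
+illustration, not the Bach-derived profiles the plugin actually uses)
+picks the key whose rotated profile correlates best with a 12-bin
+chroma vector:
+
+\begin{verbatim}
+import numpy as np
+
+MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
+                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
+MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
+                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
+
+def estimate_key(chroma):
+    # chroma: 12-bin chroma vector for one block of audio
+    best = (None, None, -2.0)
+    for profile, mode in ((MAJOR, 'major'), (MINOR, 'minor')):
+        for tonic in range(12):
+            r = np.corrcoef(chroma, np.roll(profile, tonic))[0, 1]
+            if r > best[2]:
+                best = (tonic, mode, r)
+    return best[0], best[1]   # pitch class and mode
+\end{verbatim}
+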
+The QM Key Detector plugin was written by Katy Noland and
+Christian Landone.
+
+\subsection{Audio Chord Estimation}
+
+\subsubsection{Chordino}
+
+The Chordino plugin\footnote{http://isophonics.net/nnls-chroma} was developed following Mauch's 2010 work on chord
+extraction, submitted to MIREX in that
+year\cite{mauch:md1:2010}. While that submission used a C++ chroma
+front-end feeding a MATLAB dynamic Bayesian network for chord
+extraction\cite{matthias2010a}, Chordino is an entirely C++
+implementation that was developed specifically to be made freely
+available as an open-source plugin for general use.
+
+The method for the Chordino plugin has two parts:
+
+{\bf NNLS Chroma} --- NNLS Chroma analyses a single channel of audio
+using frame-wise spectral input from the Vamp host. The spectrum is
+transformed to a log-frequency spectrum (constant-Q) with three bins
+per semitone. On this representation, two processing steps are
+performed: tuning, after which each centre bin (i.e.\ bins 2, 5, 8, \dots)
+corresponds to a semitone, even if the tuning of the piece deviates
+from 440 Hz standard pitch; and running standardisation: subtraction
+of the running mean, division by the running standard deviation. This
+has a spectral whitening effect.
+
+The processed log-frequency spectrum is then used as an input for NNLS
+approximate transcription using a dictionary of harmonic notes with
+geometrically decaying harmonic magnitudes. The output of the NNLS
+approximate transcription is semitone-spaced. To get the chroma, this
+semitone spectrum is multiplied (element-wise) with the desired
+profile (chroma or bass chroma) and then mapped to 12 bins.
+
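+As a schematic illustration of the transcription and folding steps
+(simplified to a one-bin-per-semitone log-frequency spectrum, unlike
+the plugin's three bins per semitone, and with made-up parameter
+values), the NNLS stage might be sketched as follows:
+
+\begin{verbatim}
+import numpy as np
+from scipy.optimize import nnls
+
+def note_dictionary(n_semis=84, n_harmonics=20, decay=0.8):
+    # semitone-spaced note templates with geometrically
+    # decaying harmonic magnitudes
+    D = np.zeros((n_semis, n_semis))
+    for note in range(n_semis):
+        for h in range(1, n_harmonics + 1):
+            b = note + int(round(12 * np.log2(h)))
+            if b < n_semis:
+                D[b, note] += decay ** (h - 1)
+    return D
+
+def chroma_from_logspectrum(logspec, D):
+    # NNLS approximate transcription, then fold the
+    # semitone-spaced activations down to 12 chroma bins
+    activations, _ = nnls(D, logspec)
+    chroma = np.zeros(12)
+    for note, a in enumerate(activations):
+        chroma[note % 12] += a
+    return chroma
+\end{verbatim}
+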
+{\bf Chord transcription} --- A fixed dictionary of chord profiles is
+used to calculate frame-wise chord similarities. A standard
+HMM/Viterbi approach is used to smooth these to provide a chord
+transcription.
+
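+The smoothing step is a standard Viterbi decode; a minimal sketch
+with a uniform self-transition model (standing in for the plugin's
+actual chord transition model) is:
+
+\begin{verbatim}
+import numpy as np
+
+def viterbi_smooth(similarity, self_prob=0.9):
+    # similarity: (n_frames, n_chords) frame-wise chord
+    # similarities; returns one chord index per frame
+    n_frames, n_chords = similarity.shape
+    log_emit = np.log(similarity + 1e-9)
+    log_trans = np.full((n_chords, n_chords),
+                        np.log((1 - self_prob) / (n_chords - 1)))
+    np.fill_diagonal(log_trans, np.log(self_prob))
+    score = np.zeros((n_frames, n_chords))
+    back = np.zeros((n_frames, n_chords), dtype=int)
+    score[0] = log_emit[0]
+    for t in range(1, n_frames):
+        cand = score[t - 1][:, None] + log_trans
+        back[t] = np.argmax(cand, axis=0)
+        score[t] = log_emit[t] + np.max(cand, axis=0)
+    path = [int(np.argmax(score[-1]))]
+    for t in range(n_frames - 1, 0, -1):
+        path.append(int(back[t, path[-1]]))
+    return path[::-1]
+\end{verbatim}
+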
+Chordino was written by Matthias Mauch.
+
+\subsection{Audio Onset Detection}
+
+\subsubsection{QM Note Onset Detector}
+
+The QM Note Onset Detector Vamp plugin estimates the onset times of
+notes within the music. It calculates an onset likelihood function for
+each spectral frame, and picks peaks in a smoothed version of this
+function.
+
+Several onset detection functions are available in this plugin; this
+submission uses the complex-domain method described
+in~\cite{chris2003a}.
+
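+As an illustration of the general approach (a much-simplified sketch,
+not the plugin's implementation), a complex-domain detection function
+compares each spectral frame with a prediction extrapolated from the
+two previous frames, and onsets are then picked as peaks of a
+smoothed version of that function:
+
+\begin{verbatim}
+import numpy as np
+
+def complex_domain_odf(stft):
+    # stft: complex STFT matrix, shape (n_frames, n_bins)
+    mags, phases = np.abs(stft), np.angle(stft)
+    odf = np.zeros(len(stft))
+    for t in range(2, len(stft)):
+        predicted = mags[t - 1] * np.exp(
+            1j * (2 * phases[t - 1] - phases[t - 2]))
+        odf[t] = np.abs(stft[t] - predicted).sum()
+    return odf
+
+def pick_peaks(odf, window=5, threshold=0.1):
+    # local maxima of a moving-average-smoothed function
+    sm = np.convolve(odf, np.ones(window) / window, mode='same')
+    return [t for t in range(1, len(sm) - 1)
+            if sm[t] > sm[t - 1] and sm[t] >= sm[t + 1]
+            and sm[t] > threshold]
+\end{verbatim}
+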
+The QM Note Onset Detector plugin was written by Chris Duxbury, Juan
+Pablo Bello and Christian Landone.
+
+\subsubsection{OnsetsDS}
+
+OnsetsDS\footnote{http://code.soundsoftware.ac.uk/projects/vamp-onsetsds-plugin/} is an onset detector plugin wrapping Dan Stowell's OnsetsDS
+library\footnote{http://onsetsds.sourceforge.net/}, described
+in~\cite{dan2007a}.
+
+OnsetsDS was designed to provide an FFT-based onset detection that
+works very efficiently in real-time, with a fast reaction time. It is
+not tailored for non-real-time use or for any particular type of
+signal.
+
+The OnsetsDS plugin was written by Dan Stowell and Chris Cannam.
+
+\subsection{Multiple Fundamental Frequency Estimation and Tracking}
+
+\subsubsection{Silvet}
+
+Silvet (for Shift-Invariant Latent Variable
+Transcription)\footnote{http://code.soundsoftware.ac.uk/projects/silvet/}
+is a Vamp plugin for automatic music transcription, using a method
+based on that of~\cite{emmanouil2012a}. It produces a note
+transcription as output, and we have included a script to transform
+this into a framewise output, in order to make it available for
+framewise evaluation as well as note-tracking evaluation.
+
+Silvet uses a probabilistic latent-variable estimation method to
+decompose a Constant-Q time-frequency matrix into note activations
+using a set of spectral templates learned from recordings of solo
+instruments. The method is thought to perform quite well for clear
+recordings that contain only instruments with a good correspondence to
+the known templates. Silvet does not contain any vocal templates, or
+templates for typical rock or electronic instruments.
+
+The method implemented in Silvet is very similar to that submitted to
+MIREX in 2012 as the BD1, BD2 and BD3 submissions in the Multiple F0
+Tracking task of that year~\cite{emmanouil2012b}. In common with that
+submission, and unlike the paper cited at~\cite{emmanouil2012a},
+Silvet uses a simple thresholding method instead of an HMM for note
+identification. However, Silvet follows~\cite{emmanouil2012a}
+rather than~\cite{emmanouil2012b} in including a 5-bin-per-semitone
+pitch shifting parameter.
+
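+The note output stage can be illustrated, in much-simplified form, by
+thresholding a pitch-activation matrix into note events; the
+threshold, minimum duration, and hop size below are placeholder
+values, not the plugin's:
+
+\begin{verbatim}
+def activations_to_notes(activation, threshold=0.5,
+                         min_frames=3, hop_s=0.046):
+    # activation: (n_frames, n_pitches) array of non-negative
+    # note activation strengths; returns a list of
+    # (onset time, duration, pitch index) tuples
+    notes = []
+    n_frames, n_pitches = activation.shape
+    for p in range(n_pitches):
+        active = activation[:, p] > threshold
+        start = None
+        for t in range(n_frames + 1):
+            on = t < n_frames and active[t]
+            if on and start is None:
+                start = t
+            elif not on and start is not None:
+                if t - start >= min_frames:
+                    notes.append((start * hop_s,
+                                  (t - start) * hop_s, p))
+                start = None
+    return notes
+\end{verbatim}
+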
+The Silvet plugin was written by Chris Cannam and Emmanouil Benetos.
+
+\subsubsection{Silvet Live}
+
+The Silvet Live submission uses the Silvet plugin in its
+recently-added ``Live'' mode. This has somewhat lower latency than the
+default mode, and is much faster to run. This is mainly a result of
+using a reduced 12-bin chromagram and corresponding instrument
+templates, making this conceptually a very simple method. Results are
+expected to be substantially poorer than those for the default Silvet
+parameters.
+
+The Silvet plugin was written by Chris Cannam and Emmanouil Benetos.
+
+\subsection{Structural Segmentation}
+
+\subsubsection{QM Segmenter}
+
+The QM Segmenter Vamp plugin divides a single channel of music up into
+structurally consistent segments.
+
+The method, described in~\cite{mark2008a}, relies upon timbral or
+pitch similarity to obtain the high-level song structure. This is
+based on the assumption that the distributions of timbre features are
+similar over corresponding structural elements of the music.
+
+The input feature is a frequency-domain representation of the audio
+signal, in this case using a Constant-Q transform for the underlying
+features (though the plugin supports other timbral and pitch
+features). The extracted features are normalised in accordance with
+the MPEG-7 standard (NASE descriptor), and the value of this envelope
+is stored for each processing block of audio. This is followed by the
+extraction of 20 principal components per block using PCA, yielding a
+sequence of 21-dimensional feature vectors where the last element in
+each vector corresponds to the energy envelope.
+
+A 40-state Hidden Markov Model is then trained on the whole sequence
+of features, with each state corresponding to a specific timbre
+type. This partitions the timbre-space of a given track into 40
+possible types. After training and decoding the HMM, the song is
+assigned a sequence of timbre-features according to specific
+timbre-type distributions for each possible structural segment.
+
+The segmentation itself is computed by clustering timbre-type
+histograms. A series of histograms is created over a sliding window,
+and these are grouped into M clusters by an adapted soft k-means
+algorithm. Reference histograms, iteratively updated during
+clustering, describe the timbre distribution for each segment. The
+segmentation arises from the final cluster assignments.
+
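+A minimal sketch of the histogram clustering stage, using plain
+k-means in place of the plugin's adapted soft k-means and made-up
+window and cluster counts, might look like this:
+
+\begin{verbatim}
+import numpy as np
+
+def state_histograms(states, n_states=40, window=15):
+    # states: integer HMM state (timbre type) per block
+    hists = []
+    for start in range(len(states) - window + 1):
+        h = np.bincount(states[start:start + window],
+                        minlength=n_states).astype(float)
+        hists.append(h / h.sum())
+    return np.array(hists)
+
+def cluster_histograms(hists, n_clusters=6, n_iter=50, seed=0):
+    rng = np.random.default_rng(seed)
+    centres = hists[rng.choice(len(hists), n_clusters,
+                               replace=False)]
+    for _ in range(n_iter):
+        d = np.linalg.norm(hists[:, None, :] - centres[None, :, :],
+                           axis=2)
+        labels = np.argmin(d, axis=1)
+        for k in range(n_clusters):
+            if np.any(labels == k):
+                centres[k] = hists[labels == k].mean(axis=0)
+    return labels
+\end{verbatim}
+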
+The QM Segmenter plugin was written by Mark Levy.
+
+\subsubsection{Segmentino}
+
+The Segmentino plugin is a C++ implementation of a segmentation method
+described in Matthias Mauch's paper on using musical structure to
+enhance chord transcription\cite{matthias2009a} and expanded on in
+Mauch's PhD thesis\cite{matthiasphd}.
+
+A beat-quantised chroma representation is used to calculate pair-wise
+similarities between beats (really: beat ``shingles'', i.e.\ multi-beat
+vectors). Based on this first similarity calculation, an exhaustive
+comparison of all possible segments of reasonable length in beats is
+executed, and segments are added to form segment families if they are
+sufficiently similar to another ``family member''. Once many families
+have been accumulated, they are rated, and the one with the highest
+score is used as the first segmentation group to be annotated. This
+last step is repeated until no more families fit the remaining
+``holes'' in the song, i.e.\ the regions that have not already been
+assigned to a segment.
+
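+A much-reduced sketch of the similarity computation (cosine
+similarity over concatenated beat-chroma windows, with an arbitrary
+shingle length) is:
+
+\begin{verbatim}
+import numpy as np
+
+def shingle_similarity(beat_chroma, shingle_len=4):
+    # beat_chroma: (n_beats, 12) beat-quantised chroma;
+    # returns pairwise cosine similarity between multi-beat
+    # "shingles"
+    n = len(beat_chroma) - shingle_len + 1
+    shingles = np.array([beat_chroma[i:i + shingle_len].ravel()
+                         for i in range(n)])
+    norms = np.linalg.norm(shingles, axis=1, keepdims=True) + 1e-9
+    unit = shingles / norms
+    return unit @ unit.T
+\end{verbatim}
+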
+This method was developed for ``classic rock'' music, and therefore
+assumes a few characteristics that are not necessarily found in other
+music: repetition of harmonic sequences in the music that coincide
+with structural segments in a song; a steady beat; segments of a
+certain length; and corresponding segments that have the same length
+in beats.
+
+The Segmentino plugin was written by Matthias Mauch and Massimiliano
+Zanoni.
+
+\subsection{Audio Tempo Estimation}
+
+For this task we submit the same plugin as that used in the Audio Beat
+Tracking task in section~\ref{tempo_and_beat_tracker}.
+
+\bibliography{qmvamp-mirex2014}
+
+\end{document}