Overview

Scripts to assess the invariance of rhythmic and melodic descriptors as described in [1]. The code extracts rhythmic and melodic audio features for a dataset of synthesised rhythms and melodies, and assesses the invariance of these features to transformations of timbre, recording quality, tempo, and pitch via classification and retrieval experiments.
You can download the dataset of synthesised rhythms and melodies from [2].
If you use this software or dataset for research, please cite [1].

For any questions, please contact m.x.panteli{at}gmail.com.
This code is licensed under the terms of the MIT License.
Copyright (c) 2016 Maria Panteli.

Usage:
1) extract_features.py: Extracts the scale transform rhythmic descriptor and the pitch bihistogram melodic descriptor for the audio recordings located in audio/rhythms and audio/melodies. Requires the dataset to be downloaded to the 'audio' directory.
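For orientation, the sketch below illustrates the general idea behind a scale-transform rhythm descriptor (onset strength envelope, autocorrelation, then an FFT over a log-resampled lag axis). It is not the implementation in extract_features.py; the function name and parameter choices are illustrative, and it assumes numpy and librosa are installed.

    # Illustrative sketch only, not the repository's code.
    import numpy as np
    import librosa

    def scale_transform_descriptor(audio_path, n_coeff=40):
        # Load audio and compute an onset strength envelope (rhythmic content).
        y, sr = librosa.load(audio_path, sr=22050)
        oss = librosa.onset.onset_strength(y=y, sr=sr)
        # Autocorrelation discards the phase (starting point) of the pattern.
        ac = librosa.autocorrelate(oss)
        ac = ac / (ac[0] + 1e-8)
        # Resample the lag axis logarithmically: a tempo change then becomes a
        # shift in log-lag, and the FFT magnitude is invariant to that shift.
        lags = np.arange(1, len(ac))
        log_lags = np.logspace(np.log10(lags[0]), np.log10(lags[-1]), num=len(lags))
        ac_log = np.interp(log_lags, lags, ac[1:])
        return np.abs(np.fft.fft(ac_log))[:n_coeff]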

2) evaluate.py: Assesses the performance of each descriptor with respect to the different transformations and transformation values via classification and retrieval experiments. The classification experiment runs a 5-fold cross-validation on the dataset of 30 rhythm/melody classes with 100 instances each. The retrieval experiment queries one instance from each rhythm/melody class and assesses the recall rate in the top 99 positions.
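As an illustration of the retrieval measure described above (not the code in evaluate.py), a minimal sketch assuming the descriptors are stored row-wise in a numpy array with one class label per row:

    # Illustrative sketch only, not the repository's code.
    import numpy as np
    from scipy.spatial.distance import cdist

    def recall_at_k(features, labels, k=99, seed=0):
        """Mean per-class recall: one random query per class, top-k retrieval."""
        rng = np.random.default_rng(seed)
        labels = np.asarray(labels)
        recalls = []
        for cls in np.unique(labels):
            class_idx = np.where(labels == cls)[0]
            query = rng.choice(class_idx)                  # one query per class
            dists = cdist(features[query:query + 1], features)[0]
            dists[query] = np.inf                          # never retrieve the query itself
            top_k = np.argsort(dists)[:k]
            hits = np.sum(labels[top_k] == cls)            # same-class items in the top k
            recalls.append(hits / float(len(class_idx) - 1))
        return float(np.mean(recalls))

The classification side could be sketched analogously with scikit-learn, e.g. cross_val_score(classifier, features, labels, cv=5) for the 5-fold cross-validation.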

3) results.py: Prints the results of the classification and retrieval experiments as LaTeX tables and assesses the effect of music style and of monophonic versus polyphonic character via box plots and paired t-tests.
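The reporting steps can be sketched as follows with placeholder numbers (pandas for the LaTeX table, matplotlib for the box plot, scipy for the paired t-test); this is illustrative only and not results.py itself.

    # Illustrative sketch only, not the repository's code; all numbers are placeholders.
    import pandas as pd
    import matplotlib.pyplot as plt
    from scipy import stats

    # Hypothetical per-class rates under two conditions.
    df = pd.DataFrame({
        "monophonic": [0.91, 0.87, 0.94, 0.89],
        "polyphonic": [0.84, 0.82, 0.90, 0.85],
    })

    # Summary statistics exported as a LaTeX table.
    print(df.describe().to_latex(float_format="%.2f"))

    # Box plot comparing the two conditions.
    df.boxplot()
    plt.savefig("boxplot.png")

    # Paired t-test on the per-class values of the two conditions.
    t_stat, p_value = stats.ttest_rel(df["monophonic"], df["polyphonic"])
    print("t = %.2f, p = %.3f" % (t_stat, p_value))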

[1] M. Panteli and S. Dixon. On the Evaluation of Rhythmic and Melodic Descriptors for Music Similarity. In Proceedings of the 17th International Society for Music Information Retrieval Conference, pages 468-474, 2016.

[2] Rhythms - https://archive.org/details/panteli_maria_rhythm_dataset
Melodies - https://archive.org/details/panteli_maria_melody_dataset