This code performs automatic music transcription using an efficient version of the Multi-Source Shift-Invariant Probabilistic Latent Component Analysis model. Optional GPU processing improves speed considerably (the system runs faster than real time). The output is a non-binary piano-roll representation; after post-processing (e.g. thresholding), it can be used to derive a MIDI-like representation of the transcription. The present demo includes 3 sets of piano templates computed from isolated note samples in the MAPS database.
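As a rough illustration of the thresholding step mentioned above, the sketch below converts a non-binary piano roll into MIDI-like note events. The function name, threshold value, frame hop, and pitch offset are all illustrative assumptions, not part of the toolbox itself:

```python
import numpy as np

def pianoroll_to_notes(activations, threshold=0.5, frame_hop=0.01, lowest_pitch=21):
    """Threshold a non-binary piano roll (pitches x frames) and return
    note events as (midi_pitch, onset_sec, offset_sec) tuples.

    All parameter names and default values here are hypothetical; the
    actual toolbox output format and post-processing may differ."""
    binary = activations >= threshold
    notes = []
    for p in range(binary.shape[0]):
        row = binary[p]
        # Pad with False on both sides so every active run has a rising
        # and a falling edge, then locate those edges.
        padded = np.concatenate(([False], row, [False]))
        edges = np.flatnonzero(padded[1:] != padded[:-1])
        # Edges come in (onset, offset) pairs, in frame indices.
        for onset, offset in zip(edges[::2], edges[1::2]):
            notes.append((lowest_pitch + p,
                          onset * frame_hop,
                          offset * frame_hop))
    return notes
```

For example, an activation matrix with one pitch active above threshold from frame 2 to frame 4 yields a single note event spanning those frames; a subsequent step could write such events to a MIDI file.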

Related publications

E. Benetos, S. Cherla, and T. Weyde, “An efficient shift-invariant model for polyphonic music transcription,” in 6th International Workshop on Machine Learning and Music, Prague, Czech Republic, 2013.
E. Benetos and T. Weyde, “Multiple-F0 estimation and note tracking for MIREX 2013 using an efficient latent variable model,” 2013.