musicweb-iswc2016: changeset 42:3b0d3a5c9278
fixed some citations
author   | gyorgyf
date     | Sun, 01 May 2016 03:14:45 +0100
parents  | 967b0369ba07
children | 4f7cf0afbad1
files    | musicweb.bib musicweb.tex
diffstat | 2 files changed, 21 insertions(+), 2 deletions(-)
--- a/musicweb.bib	Sun May 01 02:57:01 2016 +0100
+++ b/musicweb.bib	Sun May 01 03:14:45 2016 +0100
@@ -2,13 +2,32 @@
 %% http://bibdesk.sourceforge.net/
-%% Created for George Fazekas at 2016-05-01 02:56:50 +0100
+%% Created for George Fazekas at 2016-05-01 03:12:50 +0100
 %% Saved with string encoding Unicode (UTF-8)
 
+@inproceedings{logan2000mel,
+  Author = {Logan, Beth},
+  Booktitle = {Proc. Int. Symp. of Music Information Retrieval (ISMIR)},
+  Date-Added = {2016-05-01 02:10:41 +0000},
+  Date-Modified = {2016-05-01 02:12:50 +0000},
+  Title = {{Mel Frequency Cepstral Coefficients for Music Modeling}},
+  Year = {2000}}
+
+@article{Schubert:06,
+  Author = {Schubert, Emery and Wolfe, Joe},
+  Date-Added = {2016-05-01 02:02:38 +0000},
+  Date-Modified = {2016-05-01 02:04:18 +0000},
+  Journal = {Acta Acustica united with Acustica},
+  Number = {5},
+  Pages = {820-825},
+  Title = {{Does Timbral Brightness Scale with Frequency and Spectral Centroid?}},
+  Volume = {92},
+  Year = {2006}}
+
 @inproceedings{hershey:07,
   Author = {Hershey, J. R. and Olsen, P. A.},
   Booktitle = {Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
--- a/musicweb.tex	Sun May 01 02:57:01 2016 +0100
+++ b/musicweb.tex	Sun May 01 03:14:45 2016 +0100
@@ -299,7 +299,7 @@
 
 \subsection{Content-based linking}\label{sec:mir}
 
-Content-based Music Information Retrieval (MIR) \cite{casey08} facilitates applications that rely on perceptual, statistical, semantic or musical features derived from audio using digital signal processing and machine learning methods. These features may include statistical aggregates computed from time-frequency representations extracted over short time windows. For instance, spectral centroid is said to correlate with the perceived brightness of a sound [Schubert et.al., 2006], therefore it may be used in the characterisation in timbral similarity between music pieces. More complex representations include features that are extracted using a perceptually motivated algorithm. Mel-Frequency Cepstral Coefficients (MFCCs) for instance are often used in speech recognition as well as in estimating music similarity. Higher-level musical features include keys, chords, tempo, rhythm, as well as semantic features like genre or mood, with specific algorithms to extract this information from audio.
+Content-based Music Information Retrieval (MIR) \cite{casey08} facilitates applications that rely on perceptual, statistical, semantic or musical features derived from audio using digital signal processing and machine learning methods. These features may include statistical aggregates computed from time-frequency representations extracted over short time windows. For instance, spectral centroid is said to correlate with the perceived brightness of a sound \cite{Schubert:06}, therefore it may be used in the characterisation in timbral similarity between music pieces. More complex representations include features that are extracted using a perceptually motivated algorithm. Mel-Frequency Cepstral Coefficients (MFCCs) for instance are often used in speech recognition as well as in estimating music similarity \cite{logan2000mel}. Higher-level musical features include keys, chords, tempo, rhythm, as well as semantic features like genre or mood, with specific algorithms to extract this information from audio.
 
 % Content-based features are increasingly used in music recommendation systems to overcome issues such as infrequent access of lesser known pieces in large music catalogues (the ``long tail'' problem) or the difficulty of recommending new pieces without user ratings in systems that employ collaborative filtering (``cold start'' problem) \cite{Celma2010}.
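
Aside, not part of the changeset: the revised paragraph describes deriving spectral centroid (a correlate of perceived brightness) and MFCCs from audio as inputs to similarity measures. A minimal sketch of that kind of feature extraction, assuming the Python library librosa and a hypothetical input file track.wav:

# Minimal illustration only; assumes librosa is installed and "track.wav" exists.
import librosa

# Load the audio samples at their native sample rate.
y, sr = librosa.load("track.wav", sr=None)

# Spectral centroid per analysis frame: the magnitude-weighted mean frequency,
# often taken as a correlate of perceived brightness.
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)

# MFCCs: perceptually motivated (mel-scaled) cepstral coefficients, used in
# speech recognition and in estimating music similarity.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Statistical aggregates over frames (e.g. means) are typical inputs to
# timbral similarity measures between pieces.
print(centroid.mean(), mfcc.mean(axis=1))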