changeset 49:388a1d46f00f

after merge future work
author mariano
date Sun, 01 May 2016 05:43:48 +0100
parents f94a152a553a (current diff) 20598a0f7dcd (diff)
children b573e6a3f08c
files musicweb.tex
diffstat 1 files changed, 2 insertions(+), 2 deletions(-)
--- a/musicweb.tex	Sun May 01 05:43:01 2016 +0100
+++ b/musicweb.tex	Sun May 01 05:43:48 2016 +0100
@@ -298,9 +298,9 @@
 High-level stylistic descriptors are not easily estimated from audio but they can correlate with lower level features such as the average tempo of a track, the frequency of note onsets, the most commonly occurring keys or chords or the overall spectral envelope that characterises dominant voices or instrumentation. To exploit different types of similarity, we model each artist using three main categories of audio descriptors: rhythmic, harmonic and timbral. We compute the joint distribution of several low-level features in each category over a large collection of tracks from each artist. We then link artists exhibiting similar distributions of these features.
 
 % for XXXX artists with a mean track count of YYY
-We obtain audio features form the AcousticBrainz\footnote{https://acousticbrainz.org/} Web service which provides audio descriptors in each category of interest. Tracks are indexed by MusicBrainz identifiers enabling unambiguous linking to artists and other relevant metadata. For each artist in our database, we retrieve features for a large collection of their tracks in the above categories, including beats-per-minute and onset rate (rhythmic), chord histograms (harmonic) and MFCC (timbral) features.
+We obtain audio features from the AcousticBrainz\footnote{https://acousticbrainz.org/} Web service, which provides descriptors in each category of interest. Tracks are indexed by MusicBrainz identifiers, enabling unambiguous linking to artists and other relevant metadata. For each artist in our database, we retrieve features for a large collection of their tracks in the above categories, including beats-per-minute and onset rate (rhythmic), chord histograms (harmonic) and MFCC (timbral) features.
 
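A minimal sketch of this retrieval step, assuming the AcousticBrainz low-level endpoint (/api/v1/<mbid>/low-level) and the Python requests library; the specific response fields picked out below (bpm, onset_rate, chords_histogram, MFCC means) follow the AcousticBrainz/Essentia document layout and are illustrative assumptions rather than values prescribed by the text:

    import requests

    ACOUSTICBRAINZ_API = "https://acousticbrainz.org/api/v1"

    def fetch_low_level(mbid):
        """Fetch the low-level descriptor document for one MusicBrainz recording ID."""
        resp = requests.get("{}/{}/low-level".format(ACOUSTICBRAINZ_API, mbid), timeout=10)
        resp.raise_for_status()
        return resp.json()

    def track_features(mbid):
        """Pick out rhythmic, harmonic and timbral descriptors for one track."""
        doc = fetch_low_level(mbid)
        return {
            "bpm": doc["rhythm"]["bpm"],                           # rhythmic
            "onset_rate": doc["rhythm"]["onset_rate"],             # rhythmic
            "chords_histogram": doc["tonal"]["chords_histogram"],  # harmonic
            "mfcc_mean": doc["lowlevel"]["mfcc"]["mean"],          # timbral
        }
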
-For each artist, we fit a Gaussian Mixture Model (GMM) with full covariances on each set of aggregated features in each category across several tracks and compute the distances $D_{cat}$ for the selected category using Eq.\ref{eq:dist}
+For each artist, we fit a Gaussian Mixture Model (GMM) with full covariance matrices to the aggregated features of each category across several tracks, and compute pair-wise distances $D_{cat}$ within the selected category using Eq.~\ref{eq:dist}
 %
 \begin{equation}\label{eq:dist}
 D_{cat} = d_{skl}(artist\_model_{cat}(i), artist\_model_{cat}(j)),
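A minimal sketch of the per-artist model and distance computation, assuming scikit-learn for the full-covariance GMM; because the symmetrised KL divergence $d_{skl}$ between two GMMs has no closed form, it is approximated here by Monte Carlo sampling, and the component count and sample size are illustrative choices only:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_artist_model(features, n_components=8):
        """Fit a full-covariance GMM to one artist's aggregated features in one
        category (rows = observations pooled across tracks, columns = descriptors)."""
        return GaussianMixture(n_components=n_components, covariance_type="full").fit(features)

    def kl_mc(p, q, n_samples=5000):
        """Monte Carlo estimate of KL(p || q) between two fitted GMMs."""
        samples, _ = p.sample(n_samples)
        return np.mean(p.score_samples(samples) - q.score_samples(samples))

    def d_skl(model_i, model_j):
        """Symmetrised KL divergence, used as the distance D_cat between two artist models."""
        return kl_mc(model_i, model_j) + kl_mc(model_j, model_i)
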