% Source: MIRtoolbox 1.3.2, MIRToolboxDemos/demo8classification.m
% (hg rev 0:e9a9cd732c1e, author wolffd, 10 Feb 2015)
function demo8classification
% To get familiar with different approaches to classification using
% MIRtoolbox, and to assess their performance.

% Part 1. The aim of this experiment is to categorize a set of very short
% musical excerpts according to their genres, through supervised learning.

% 1.3. Move to the directory containing the training set.
try
    cd train_set
catch
    error('Please change current directory to ''MIRtoolboxDemos'' directory')
end

% Load all the files of the folder into one audio structure (called, for
% instance, train), and associate with each file a label defined by the
% first two letters of its file name.
train = miraudio('Folder','Label',1:2);
cd ..

% In the same way, move to the directory containing the testing set, and
% load all its files together with their labels:
cd test_set
test = miraudio('Folder','Label',1:2);
cd ..

% 1.4. Compute the mel-frequency cepstral coefficients (MFCCs) for each
% audio file of both sets:
mfcc_train = mirmfcc(train);
mfcc_test = mirmfcc(test);

% 1.5. Estimate the label (i.e., genre) of each file from the testing set,
% based on a prior learning using the training set. Use for this purpose
% the mirclassify function.
help mirclassify

% Let's first try a classification based on the MFCCs, using the
% minimum-distance strategy:
mirclassify(test,mfcc_test,train,mfcc_train)

% The results indicate the outcomes and the total correct classification
% rate (CCR).
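% The CCR is simply the proportion of test excerpts whose predicted label
% matches their true label. As a purely hypothetical illustration (the
% labels below are made up, not taken from the demo data):
example_true = {'cl','cl','ja','ja','ro'};
example_pred = {'cl','ja','ja','ja','ro'};
example_ccr = mean(strcmp(example_true,example_pred))  % 4 out of 5 correct, i.e. 0.8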

% 1.6. Let's try a k-nearest-neighbour strategy, for instance with k = 5:
mirclassify(test,mfcc_test,train,mfcc_train,5)

% 1.7. Use a Gaussian mixture model with one Gaussian per class:
mirclassify(test,mfcc_test,train,mfcc_train,'GMM',1)

% Try also with three Gaussians per class:
mirclassify(test,mfcc_test,train,mfcc_train,'GMM',3)

% As this strategy is stochastic, the results vary from one trial to the
% next:
mirclassify(test,mfcc_test,train,mfcc_train,'GMM',1)
mirclassify(test,mfcc_test,train,mfcc_train,'GMM',1)
mirclassify(test,mfcc_test,train,mfcc_train,'GMM',3)
mirclassify(test,mfcc_test,train,mfcc_train,'GMM',3)
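
% Rather than repeating the call by hand, the trials can be looped
% (a hypothetical sketch; mirclassify displays its CCR at each call):
for trial = 1:4
    mirclassify(test,mfcc_test,train,mfcc_train,'GMM',3)
end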

% 1.8. Carry out the classification using other features, such as the
% spectral centroid:
spectrum_train = mirspectrum(train);
spectrum_test = mirspectrum(test);
centroid_train = mircentroid(spectrum_train);
centroid_test = mircentroid(spectrum_test);
mirclassify(test,centroid_test,train,centroid_train,'GMM',1)
mirclassify(test,centroid_test,train,centroid_train,'GMM',1)
mirclassify(test,centroid_test,train,centroid_train,'GMM',3)
mirclassify(test,centroid_test,train,centroid_train,'GMM',3)

% Try also the spectral entropy and the spectral irregularity:
entropy_train = mirentropy(spectrum_train);
entropy_test = mirentropy(spectrum_test);
mirclassify(test,entropy_test,train,entropy_train,'GMM',1)
mirclassify(test,entropy_test,train,entropy_train,'GMM',1)
mirclassify(test,entropy_test,train,entropy_train,'GMM',3)
mirclassify(test,entropy_test,train,entropy_train,'GMM',3)

irregularity_train = mirregularity(spectrum_train,'Contrast',.1);
irregularity_test = mirregularity(spectrum_test,'Contrast',.1);
mirclassify(test,irregularity_test,train,irregularity_train,'GMM',1)
mirclassify(test,irregularity_test,train,irregularity_train,'GMM',1)
mirclassify(test,irregularity_test,train,irregularity_train,'GMM',3)
mirclassify(test,irregularity_test,train,irregularity_train,'GMM',3)

% Try a classification based on a set of features, such as:
mirclassify(test,{entropy_test,centroid_test},...
    train,{entropy_train,centroid_train},'GMM',1)
mirclassify(test,{entropy_test,centroid_test},...
    train,{entropy_train,centroid_train},'GMM',1)
mirclassify(test,{entropy_test,centroid_test},...
    train,{entropy_train,centroid_train},'GMM',3)
mirclassify(test,{entropy_test,centroid_test},...
    train,{entropy_train,centroid_train},'GMM',3)

% 1.9. By varying the features used for classification, the strategies and
% their parameters, try to find an optimal strategy that gives the best
% correct classification rate.
bright_train = mirbrightness(spectrum_train);
bright_test = mirbrightness(spectrum_test);
rolloff_train = mirrolloff(spectrum_train);
rolloff_test = mirrolloff(spectrum_test);
spread_train = mirspread(spectrum_train);
spread_test = mirspread(spectrum_test);
mirclassify(test,{bright_test,rolloff_test,spread_test},...
    train,{bright_train,rolloff_train,spread_train},'GMM',3)
skew_train = mirskewness(spectrum_train);
skew_test = mirskewness(spectrum_test);
kurtosis_train = mirkurtosis(spectrum_train);
kurtosis_test = mirkurtosis(spectrum_test);
flat_train = mirflatness(spectrum_train);
flat_test = mirflatness(spectrum_test);
mirclassify(test,{skew_test,kurtosis_test,flat_test},...
    train,{skew_train,kurtosis_train,flat_train},'GMM',3)
for i = 1:3
    mirclassify(test,{mfcc_test,centroid_test,skew_test,kurtosis_test,...
        flat_test,entropy_test,irregularity_test,...
        bright_test,rolloff_test,spread_test},...
        train,{mfcc_train,centroid_train,skew_train,kurtosis_train,...
        flat_train,entropy_train,irregularity_train,...
        bright_train,rolloff_train,spread_train},'GMM',3)
end

% You can also try to change the sizes of the training and testing sets
% (for instance, simply by swapping them):
for i = 1:3
    mirclassify(train,{mfcc_train,centroid_train,skew_train,kurtosis_train,...
        flat_train,entropy_train,irregularity_train,...
        bright_train,rolloff_train,spread_train},...
        test,{mfcc_test,centroid_test,skew_test,kurtosis_test,...
        flat_test,entropy_test,irregularity_test,...
        bright_test,rolloff_test,spread_test},'GMM',3)
end

%%
% Part 2. In this second experiment, we will try to cluster the segments
% of an audio file according to their mutual similarity.

% 2.1. To simplify the computation, downsample the audio file to 11025 Hz.
a = miraudio('czardas','Sampling',11025);

% 2.2. Decompose the file into successive frames of 2 seconds with a hop
% of 10% of the frame length (i.e., 90% overlap between successive
% frames).
f = mirframe(a,2,.1);

% 2.3. Segment the file based on the novelty of the key strengths.
n = mirnovelty(mirkeystrength(f),'KernelSize',5)
p = mirpeaks(n)
s = mirsegment(a,p)

% 2.4. Compute the key strengths of each segment.
ks = mirkeystrength(s)

% 2.5. Cluster the segments according to their key strengths.
help mircluster
mircluster(s,ks)

% The k-means algorithm used in the clustering is stochastic, and its
% results may vary from one run to the next. By default, the algorithm is
% run 5 times and the best result is selected. Try the analysis with a
% higher number of runs:
mircluster(s,ks,'Runs',10)
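
% The restart logic behind the 'Runs' option can be sketched in plain
% MATLAB (a hypothetical toy example, independent of MIRtoolbox): run
% k-means several times from random initialisations, and keep the
% partition with the lowest total within-cluster squared distance.
% (Minimal sketch: no empty-cluster handling.)
X = [randn(10,2); 4+randn(10,2)];            % toy data with two clusters
k = 2;
best_cost = Inf;
for run = 1:5
    C = X(randperm(size(X,1),k),:);          % random initial centroids
    for it = 1:20
        D = zeros(size(X,1),k);              % squared distances to centroids
        for j = 1:k
            D(:,j) = sum(bsxfun(@minus,X,C(j,:)).^2,2);
        end
        [dmin,idx] = min(D,[],2);            % assign each point to nearest centroid
        for j = 1:k
            C(j,:) = mean(X(idx==j,:),1);    % update centroids
        end
    end
    cost = sum(dmin);                        % total within-cluster cost of this run
    if cost < best_cost
        best_cost = cost;
        best_idx = idx;                      % keep the best partition so far
    end
end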