annotate toolboxes/MIRtoolbox1.3.2/MIRToolboxDemos/demo8classification.m @ 0:cc4b1211e677 tip

initial commit to HG from Changeset: 646 (e263d8a21543) added further path and more save "camirversion.m"
author Daniel Wolff
date Fri, 19 Aug 2016 13:07:06 +0200
function demo8classification
% To get familiar with different approaches to classification using
% MIRtoolbox, and to assess their performance.

% Part 1. The aim of this experiment is to categorize a set of very short
% musical excerpts according to their genres, through supervised learning.

% 1.3. Set the current directory to the training set folder.
try
    cd train_set
catch
    error('Please change current directory to ''MIRtoolboxDemos'' directory')
end

% Load all the files of the folder into one audio structure (called, for
% instance, train), and associate with each file a label defined by the
% first two letters of its file name.
train = miraudio('Folder','Label',1:2);
cd ..

% In the same way, set the current directory to the testing set folder, and
% load all the files together with their labels:
cd test_set
test = miraudio('Folder','Label',1:2);
cd ..

% 1.4. Compute the mel-frequency cepstral coefficients (MFCCs) for each
% audio file of both sets:
mfcc_train = mirmfcc(train);
mfcc_test = mirmfcc(test);

% 1.5. Estimate the label (i.e., genre) of each file from the testing set,
% based on prior learning on the training set. Use the mirclassify function
% for this purpose.
help mirclassify

% Let's first try a classification based on MFCCs, using, for instance,
% the minimum distance strategy:
mirclassify(test,mfcc_test,train,mfcc_train)
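The minimum distance strategy can be sketched outside MIRtoolbox: each class is summarized by the mean of its training feature vectors, and a test vector takes the label of the nearest class mean. This is a pure-Python illustration of the idea, not the toolbox's implementation; the feature vectors and the genre labels 'cl'/'ja' are made up.

```python
# Minimal sketch of minimum-distance classification (not MIRtoolbox code):
# summarize each class by its mean feature vector, then assign a test
# vector to the class whose mean is closest in Euclidean distance.
from math import dist  # Euclidean distance (Python >= 3.8)

def class_means(features, labels):
    """Average the feature vectors belonging to each class."""
    sums, counts = {}, {}
    for x, y in zip(features, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def min_distance_classify(x, means):
    """Return the label whose class mean is nearest to x."""
    return min(means, key=lambda y: dist(x, means[y]))

# Hypothetical 2-D features for two genres:
train_feat = [[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.2, 4.8]]
train_lab  = ['cl', 'cl', 'ja', 'ja']
means = class_means(train_feat, train_lab)
print(min_distance_classify([1.1, 0.9], means))  # 'cl'
print(min_distance_classify([4.9, 5.1], means))  # 'ja'
```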

% The result indicates the outcomes and the total correct classification
% rate (CCR).

% 1.6. Let's try a k-nearest-neighbour strategy, for instance with k = 5:
mirclassify(test,mfcc_test,train,mfcc_train,5)
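The k-nearest-neighbour idea itself is simple enough to sketch in a few lines of pure Python (again, an illustration of the principle, not MIRtoolbox's code, with made-up features and labels): a test vector takes the majority label among its k closest training vectors.

```python
# Minimal sketch of k-NN classification (not MIRtoolbox code): rank the
# training vectors by distance to x and take a majority vote among the
# k nearest ones.
from math import dist
from collections import Counter

def knn_classify(x, train_feat, train_lab, k=5):
    """Majority label among the k training vectors nearest to x."""
    ranked = sorted(zip(train_feat, train_lab), key=lambda p: dist(x, p[0]))
    votes = Counter(lab for _, lab in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D features for two genres:
feat = [[0, 0], [0, 1], [1, 0], [9, 9], [9, 8], [8, 9]]
labs = ['cl', 'cl', 'cl', 'ja', 'ja', 'ja']
print(knn_classify([1, 1], feat, labs, k=3))  # 'cl'
```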

% 1.7. Use Gaussian mixture modelling with one Gaussian per class:
mirclassify(test,mfcc_test,train,mfcc_train,'GMM',1)

% Try also with three Gaussians per class:
mirclassify(test,mfcc_test,train,mfcc_train,'GMM',3)

% As this strategy is stochastic, the results may vary from one trial to
% the next:
mirclassify(test,mfcc_test,train,mfcc_train,'GMM',1)
mirclassify(test,mfcc_test,train,mfcc_train,'GMM',1)
mirclassify(test,mfcc_test,train,mfcc_train,'GMM',3)
mirclassify(test,mfcc_test,train,mfcc_train,'GMM',3)
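With a single Gaussian per class, the fit has a closed form (per-class mean and variance) and classification picks the class with the highest likelihood; with several components per class, a GMM needs EM with random initialisation, which is why the results above vary between trials. Below is a pure-Python sketch of the one-Gaussian, diagonal-covariance case, not MIRtoolbox's implementation, with made-up features and labels.

```python
# Minimal sketch of the 'GMM',1 case (not MIRtoolbox code): model each
# class by the per-dimension mean and variance of its training vectors,
# then assign a test vector to the class with the highest
# diagonal-covariance Gaussian log-likelihood.
from math import log, pi

def fit_gaussians(features, labels):
    """Per-class, per-dimension mean and variance (floored for stability)."""
    groups = {}
    for x, y in zip(features, labels):
        groups.setdefault(y, []).append(x)
    models = {}
    for y, xs in groups.items():
        n, d = len(xs), len(xs[0])
        mu = [sum(x[i] for x in xs) / n for i in range(d)]
        var = [max(sum((x[i] - mu[i]) ** 2 for x in xs) / n, 1e-9)
               for i in range(d)]
        models[y] = (mu, var)
    return models

def log_likelihood(x, mu, var):
    return sum(-0.5 * (log(2 * pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mu, var))

def gaussian_classify(x, models):
    return max(models, key=lambda y: log_likelihood(x, *models[y]))

# Hypothetical 2-D features for two genres:
feat = [[0.0, 0.1], [0.2, -0.1], [3.0, 3.1], [2.8, 2.9]]
labs = ['cl', 'cl', 'ja', 'ja']
models = fit_gaussians(feat, labs)
print(gaussian_classify([0.1, 0.0], models))  # 'cl'
```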

% 1.8. Carry out the classification using other features, such as the
% spectral centroid:
spectrum_train = mirspectrum(train);
spectrum_test = mirspectrum(test);
centroid_train = mircentroid(spectrum_train);
centroid_test = mircentroid(spectrum_test);
mirclassify(test,centroid_test,train,centroid_train,'GMM',1)
mirclassify(test,centroid_test,train,centroid_train,'GMM',1)
mirclassify(test,centroid_test,train,centroid_train,'GMM',3)
mirclassify(test,centroid_test,train,centroid_train,'GMM',3)
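The spectral centroid is conceptually the amplitude-weighted mean of the spectrum's frequencies, a rough correlate of perceived brightness. A tiny sketch of the idea (not MIRtoolbox's implementation; the frequency/magnitude values are made up and assumed non-zero):

```python
# Sketch of the spectral centroid idea (not MIRtoolbox code): the centroid
# is the magnitude-weighted mean frequency of the spectrum.
def spectral_centroid(freqs, mags):
    """Weighted mean of freqs with mags as (positive) weights."""
    return sum(f * m for f, m in zip(freqs, mags)) / sum(mags)

# A symmetric toy spectrum centred on 200 Hz:
print(spectral_centroid([100, 200, 300], [1.0, 2.0, 1.0]))  # 200.0
```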

% Try also the spectral entropy and the spectral irregularity:
entropy_train = mirentropy(spectrum_train);
entropy_test = mirentropy(spectrum_test);
mirclassify(test,entropy_test,train,entropy_train,'GMM',1)
mirclassify(test,entropy_test,train,entropy_train,'GMM',1)
mirclassify(test,entropy_test,train,entropy_train,'GMM',3)
mirclassify(test,entropy_test,train,entropy_train,'GMM',3)

irregularity_train = mirregularity(spectrum_train,'Contrast',.1);
irregularity_test = mirregularity(spectrum_test,'Contrast',.1);
mirclassify(test,irregularity_test,train,irregularity_train,'GMM',1)
mirclassify(test,irregularity_test,train,irregularity_train,'GMM',1)
mirclassify(test,irregularity_test,train,irregularity_train,'GMM',3)
mirclassify(test,irregularity_test,train,irregularity_train,'GMM',3)

% Try a classification based on a set of several features, such as:
mirclassify(test,{entropy_test,centroid_test},...
    train,{entropy_train,centroid_train},'GMM',1)
mirclassify(test,{entropy_test,centroid_test},...
    train,{entropy_train,centroid_train},'GMM',1)
mirclassify(test,{entropy_test,centroid_test},...
    train,{entropy_train,centroid_train},'GMM',3)
mirclassify(test,{entropy_test,centroid_test},...
    train,{entropy_train,centroid_train},'GMM',3)

% 1.9. By varying the features used for classification, the strategies and
% their parameters, try to find an optimal strategy that gives the best
% correct classification rate.
bright_train = mirbrightness(spectrum_train);
bright_test = mirbrightness(spectrum_test);
rolloff_train = mirrolloff(spectrum_train);
rolloff_test = mirrolloff(spectrum_test);
spread_train = mirspread(spectrum_train);
spread_test = mirspread(spectrum_test);
mirclassify(test,{bright_test,rolloff_test,spread_test},...
    train,{bright_train,rolloff_train,spread_train},'GMM',3)
skew_train = mirskewness(spectrum_train);
skew_test = mirskewness(spectrum_test);
kurtosis_train = mirkurtosis(spectrum_train);
kurtosis_test = mirkurtosis(spectrum_test);
flat_train = mirflatness(spectrum_train);
flat_test = mirflatness(spectrum_test);
mirclassify(test,{skew_test,kurtosis_test,flat_test},...
    train,{skew_train,kurtosis_train,flat_train},'GMM',3)
for i = 1:3
    mirclassify(test,{mfcc_test,centroid_test,skew_test,kurtosis_test,...
        flat_test,entropy_test,irregularity_test,...
        bright_test,rolloff_test,spread_test},...
        train,{mfcc_train,centroid_train,skew_train,kurtosis_train,...
        flat_train,entropy_train,irregularity_train,...
        bright_train,rolloff_train,spread_train},'GMM',3)
end

% You can also try to change the sizes of the training and testing sets
% (for instance, by simply swapping them):
for i = 1:3
    mirclassify(train,{mfcc_train,centroid_train,skew_train,kurtosis_train,...
        flat_train,entropy_train,irregularity_train,...
        bright_train,rolloff_train,spread_train},...
        test,{mfcc_test,centroid_test,skew_test,kurtosis_test,...
        flat_test,entropy_test,irregularity_test,...
        bright_test,rolloff_test,spread_test},'GMM',3)
end

%%
% Part 2. In this second experiment, we will try to cluster the segments of
% an audio file according to their mutual similarity.

% 2.1. To simplify the computation, downsample the audio file to 11025 Hz.
a = miraudio('czardas','Sampling',11025);

% 2.2. Decompose the file into successive frames of 2 seconds with a hop
% factor of .1, i.e., 90% overlap between successive frames.
f = mirframe(a,2,.1);

% 2.3. Segment the file based on the novelty of the key strengths.
n = mirnovelty(mirkeystrength(f),'KernelSize',5)
p = mirpeaks(n)
s = mirsegment(a,p)
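The novelty curve behind this segmentation step is commonly computed by sliding a checkerboard kernel along the diagonal of the frames' self-similarity matrix: the value is high when two internally similar blocks meet, and peaks in the curve become segment borders. The following is a pure-Python sketch of that general idea under simplifying assumptions (a toy similarity measure, a small square kernel), not the exact algorithm used by mirnovelty; the frame features are made up.

```python
# Sketch of the novelty-curve idea (not MIRtoolbox's exact algorithm):
# correlate a checkerboard kernel with the self-similarity matrix along
# its diagonal; transitions between homogeneous blocks score high.
from math import dist

def similarity_matrix(frames):
    """Toy similarity: 1 / (1 + Euclidean distance) between frame features."""
    return [[1.0 / (1.0 + dist(a, b)) for b in frames] for a in frames]

def novelty(frames, half=2):
    S = similarity_matrix(frames)
    n = len(frames)
    curve = []
    for t in range(n):
        v = 0.0
        for i in range(-half, half):
            for j in range(-half, half):
                if 0 <= t + i < n and 0 <= t + j < n:
                    # checkerboard: same-side quadrants count +, cross-side -
                    sign = 1 if (i < 0) == (j < 0) else -1
                    v += sign * S[t + i][t + j]
        curve.append(v)
    return curve

# Two homogeneous blocks of frames -> the novelty peak marks the boundary:
frames = [[0.0]] * 4 + [[5.0]] * 4
c = novelty(frames)
print(c.index(max(c)))  # 4
```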

% 2.4. Compute the key strengths of each segment.
ks = mirkeystrength(s)

% 2.5. Cluster the segments according to their key strengths.
help mircluster
mircluster(s,ks)

% The k-means algorithm used in the clustering is stochastic, so its
% results may vary from run to run. By default, the algorithm is run 5
% times and the best result is selected. Try the analysis with a higher
% number of runs:
mircluster(s,ks,'Runs',10)
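The restart mechanism behind the 'Runs' option can be sketched in pure Python: each k-means run starts from random centroids, alternates between assigning points to the nearest centroid and moving each centroid to the mean of its points, and the run with the lowest within-cluster cost is kept. This is a generic illustration of the algorithm, not mircluster's implementation, and the data points are made up.

```python
# Minimal sketch of k-means with restarts (not MIRtoolbox code): keeping
# the best of several random initialisations compensates for the
# stochastic starting centroids.
import random
from math import dist

def kmeans(points, k, iters=50):
    centroids = random.sample(points, k)  # random initialisation
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda j: dist(p, centroids[j]))].append(p)
        # update step: move each centroid to the mean of its cluster
        centroids = [
            [sum(c) / len(cl) for c in zip(*cl)] if cl else centroids[j]
            for j, cl in enumerate(clusters)
        ]
    # total within-cluster distance, used to rank the restarts
    cost = sum(min(dist(p, c) for c in centroids) for p in points)
    return centroids, cost

points = [[0, 0], [0, 1], [1, 0], [9, 9], [9, 8], [8, 9]]
best = min((kmeans(points, 2) for _ in range(5)), key=lambda r: r[1])
print(sorted(best[0]))  # two centroids, one per compact group of points
```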