function demo8classification
% To get familiar with different approaches to classification using
% MIRtoolbox, and to assess their performances.

% Part 1. The aim of this experiment is to categorize a set of very short
% musical excerpts according to their genres, through supervised learning.

% 1.3. Select the training set as current directory.
try
    cd train_set
catch
    error('Please change current directory to ''MIRtoolboxDemos'' directory')
end

% Load all the files of the folder into one audio structure (called, for
% instance, train), and associate with each file a label defined by the
% first two letters of the respective file name.
train = miraudio('Folder','Label',1:2);
cd ..

% In the same way, select the testing set as current directory, and load
% all the files including their labels:
cd test_set
test = miraudio('Folder','Label',1:2);
cd ..

% 1.4. Compute the mel-frequency cepstral coefficients for each audio file
% of both sets:
mfcc_train = mirmfcc(train);
mfcc_test = mirmfcc(test);

% 1.5. Estimate the label (i.e., genre) of each file from the testing set,
% based on prior learning using the training set. Use for this purpose the
% mirclassify function.
help mirclassify

% Let's first try a classification based on MFCCs, for instance using the
% minimum-distance strategy:
mirclassify(test,mfcc_test,train,mfcc_train)

% The results indicate the outcomes and the total correct classification
% rate (CCR).

% 1.6. Let's try a k-nearest-neighbour strategy.
% For instance, for k = 5:
mirclassify(test,mfcc_test,train,mfcc_train,5)

% 1.7. Use Gaussian mixture modelling with one Gaussian per class:
mirclassify(test,mfcc_test,train,mfcc_train,'GMM',1)

% Try also with three Gaussians per class:
mirclassify(test,mfcc_test,train,mfcc_train,'GMM',3)

% As this strategy is stochastic, the results vary at every trial:
mirclassify(test,mfcc_test,train,mfcc_train,'GMM',1)
mirclassify(test,mfcc_test,train,mfcc_train,'GMM',1)
mirclassify(test,mfcc_test,train,mfcc_train,'GMM',3)
mirclassify(test,mfcc_test,train,mfcc_train,'GMM',3)

% 1.8. Carry out the classification using other features, such as the
% spectral centroid:
spectrum_train = mirspectrum(train);
spectrum_test = mirspectrum(test);
centroid_train = mircentroid(spectrum_train);
centroid_test = mircentroid(spectrum_test);
mirclassify(test,centroid_test,train,centroid_train,'GMM',1)
mirclassify(test,centroid_test,train,centroid_train,'GMM',1)
mirclassify(test,centroid_test,train,centroid_train,'GMM',3)
mirclassify(test,centroid_test,train,centroid_train,'GMM',3)

% Try also spectral entropy and spectral irregularity.
entropy_train = mirentropy(spectrum_train);
entropy_test = mirentropy(spectrum_test);
mirclassify(test,entropy_test,train,entropy_train,'GMM',1)
mirclassify(test,entropy_test,train,entropy_train,'GMM',1)
mirclassify(test,entropy_test,train,entropy_train,'GMM',3)
mirclassify(test,entropy_test,train,entropy_train,'GMM',3)

irregularity_train = mirregularity(spectrum_train,'Contrast',.1);
irregularity_test = mirregularity(spectrum_test,'Contrast',.1);
mirclassify(test,irregularity_test,train,irregularity_train,'GMM',1)
mirclassify(test,irregularity_test,train,irregularity_train,'GMM',1)
mirclassify(test,irregularity_test,train,irregularity_train,'GMM',3)
mirclassify(test,irregularity_test,train,irregularity_train,'GMM',3)

% Try classification based on a set of features, such as:
mirclassify(test,{entropy_test,centroid_test},...
    train,{entropy_train,centroid_train},'GMM',1)
mirclassify(test,{entropy_test,centroid_test},...
    train,{entropy_train,centroid_train},'GMM',1)
mirclassify(test,{entropy_test,centroid_test},...
    train,{entropy_train,centroid_train},'GMM',3)
mirclassify(test,{entropy_test,centroid_test},...
    train,{entropy_train,centroid_train},'GMM',3)

% 1.9. By varying the features used for classification, the strategies and
% their parameters, try to find an optimal strategy that gives the best
% correct classification rate.
bright_train = mirbrightness(spectrum_train);
bright_test = mirbrightness(spectrum_test);
rolloff_train = mirrolloff(spectrum_train);
rolloff_test = mirrolloff(spectrum_test);
spread_train = mirspread(spectrum_train);
spread_test = mirspread(spectrum_test);
mirclassify(test,{bright_test,rolloff_test,spread_test},...
    train,{bright_train,rolloff_train,spread_train},'GMM',3)
skew_train = mirskewness(spectrum_train);
skew_test = mirskewness(spectrum_test);
kurtosis_train = mirkurtosis(spectrum_train);
kurtosis_test = mirkurtosis(spectrum_test);
flat_train = mirflatness(spectrum_train);
flat_test = mirflatness(spectrum_test);
mirclassify(test,{skew_test,kurtosis_test,flat_test},...
    train,{skew_train,kurtosis_train,flat_train},'GMM',3)
for i = 1:3
    mirclassify(test,{mfcc_test,centroid_test,skew_test,kurtosis_test,...
        flat_test,entropy_test,irregularity_test,...
        bright_test,rolloff_test,spread_test},...
        train,{mfcc_train,centroid_train,skew_train,kurtosis_train,...
        flat_train,entropy_train,irregularity_train,...
        bright_train,rolloff_train,spread_train},'GMM',3)
end

% You can also try to change the sizes of the training and testing sets
% (by simply swapping them, for instance).
for i = 1:3
    mirclassify(train,{mfcc_train,centroid_train,skew_train,kurtosis_train,...
        flat_train,entropy_train,irregularity_train,...
        bright_train,rolloff_train,spread_train},...
        test,{mfcc_test,centroid_test,skew_test,kurtosis_test,...
        flat_test,entropy_test,irregularity_test,...
        bright_test,rolloff_test,spread_test},'GMM',3)
end

%%
% Part 2.
% In this second experiment, we will try to cluster the segments of an
% audio file according to their mutual similarity.

% 2.1. To simplify the computation, downsample the audio file to 11025 Hz.
a = miraudio('czardas','Sampling',11025);

% 2.2. Decompose the file into successive frames of 2 seconds with a hop
% factor of 10% of the frame length (i.e., strongly overlapping frames).
f = mirframe(a,2,.1);

% 2.3. Segment the file based on the novelty of the key strengths.
n = mirnovelty(mirkeystrength(f),'KernelSize',5)
p = mirpeaks(n)
s = mirsegment(a,p)

% 2.4. Compute the key strengths of each segment.
ks = mirkeystrength(s)

% 2.5. Cluster the segments according to their key strengths.
help mircluster
mircluster(s,ks)

% The k-means algorithm used in the clustering is stochastic, and its
% results may vary at each run. By default, the algorithm is run 5 times
% and the best result is selected. Try the analysis with a higher number of
% runs:
mircluster(s,ks,'Runs',10)
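
%% Appendix (illustration only). The minimum-distance strategy used by
% default in Part 1 can be sketched on toy data in plain MATLAB: each class
% is represented by the mean of its training feature vectors, and a test
% vector is assigned to the class whose mean is nearest. This is a minimal
% sketch of the general idea, not the MIRtoolbox implementation; all
% variable names below are our own and do not belong to the MIRtoolbox API.
Xtrain = [0 0; 0 1; 5 5; 5 6];   % toy training features (one row per file)
ytrain = [1 1 2 2];              % class label of each training row
Xtest  = [1 0; 4 5];             % toy test features
classes = unique(ytrain);
centres = zeros(numel(classes),size(Xtrain,2));
for c = 1:numel(classes)         % mean feature vector of each class
    centres(c,:) = mean(Xtrain(ytrain==classes(c),:),1);
end
for t = 1:size(Xtest,1)          % assign each test vector to the class
    d = sum((centres - Xtest(t,:)).^2,2);   % with the nearest mean
    [~,best] = min(d);
    fprintf('test %d -> class %d\n',t,classes(best));
end
% Prints: test 1 -> class 1, test 2 -> class 2.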