annotate readme.rtf @ 0:e5724c21af7b

Upload the files that I used for ISMIR 2012 emotion recognition
author Yading Song <yadng.song@eecs.qmul.ac.uk>
date Thu, 20 Sep 2012 13:27:02 +0100
parents
children 2fca2ff3bf81
rev   line source
yadng@0 1 {\rtf1\ansi\ansicpg1252\cocoartf1138\cocoasubrtf470
yadng@0 2 {\fonttbl\f0\fswiss\fcharset0 Helvetica;}
yadng@0 3 {\colortbl;\red255\green255\blue255;}
yadng@0 4 \paperw11900\paperh16840\margl1440\margr1440\vieww10800\viewh8400\viewkind0
yadng@0 5 \pard\tx566\tx1133\tx1700\tx2267\tx2834\tx3401\tx3968\tx4535\tx5102\tx5669\tx6236\tx6803\pardirnatural
yadng@0 6
yadng@0 7 \f0\fs24 \cf0 \
yadng@0 8 This is the dataset I used for the ISMIR 2012 paper "Evaluation of Musical Features for Emotion Classification"\
yadng@0 9 \
yadng@0 10 It contains three parts:\
yadng@0 11 1. The top tags returned by last.fm (four emotion classes: happy, sad, angry, and relax)\
yadng@0 12 2. A list of songs labelled with the tags retrieved in part 1\
yadng@0 13 3. The song titles that we fetched and used in this paper (due to copyright, we did not upload the audio preview files)\
yadng@0 14 \
yadng@0 15 Queen Mary University of London\
yadng@0 16 Centre for Digital Music\
yadng@0 17 Yading Song\
yadng@0 18 yading.song@eecs.qmul.ac.uk}
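
As a rough illustration of how the labelled song list (part 2) might be consumed, the Python sketch below counts songs per emotion class. The file name song_list.csv and the column layout (artist, title, emotion) are assumptions for illustration only; they are not specified by the dataset.

# Minimal sketch: count songs per emotion class in the labelled song list.
# Assumed (not specified by the dataset): a CSV file named "song_list.csv"
# with columns "artist", "title", and "emotion", where "emotion" is one of
# happy, sad, angry, relax.
import csv
from collections import Counter

def count_by_emotion(path="song_list.csv"):
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row["emotion"].strip().lower()] += 1
    return counts

if __name__ == "__main__":
    for emotion, n in sorted(count_by_emotion().items()):
        print(f"{emotion}: {n} songs")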