This is the emotion dataset used for the ISMIR 2012 paper "Evaluation of Musical Features for Emotion Classification".

It contains four parts:
1. The top tags returned by Last.fm (four emotion classes: happy, sad, angry, and relax); see the sketch after this list.
2. A list of songs labelled with the tags retrieved in part 1.
3. The fetched song titles that we used in this paper (due to copyright, we did not upload the preview files).
4. The audio files, fetched from Last.fm and 7Digital. Due to copyright restrictions, this dataset is for research purposes only.

Feel free to contact me if you have any questions.

Queen Mary University of London
Centre for Digital Music
Yading Song
yading.song@eecs.qmul.ac.uk