changeset 820:7b0ce3a9ddc1

Merge from branch "WAC2016"
author Nicholas Jillings <n.g.r.jillings@se14.qmul.ac.uk>
date Mon, 23 Nov 2015 09:13:12 +0000
parents bf38bba18886
children e266584705fc
files .hgignore README.txt analyse.html ape.js core.js docs/Instructions/ListeningTestInstructions.bib docs/Instructions/ListeningTestInstructions.pdf docs/Instructions/ListeningTestInstructions.tex docs/Instructions/img/cmd.png docs/Instructions/img/python.png docs/Instructions/img/pythonServer.png docs/Instructions/img/test.png docs/Instructions/img/warning.png docs/WAC2016/WAC2016.bib docs/WAC2016/WAC2016.pdf docs/WAC2016/WAC2016.tex docs/WAC2016/cc.png docs/WAC2016/img/boxplot.png docs/WAC2016/img/interface.png docs/WAC2016/img/test_create.png docs/WAC2016/img/test_create_2.png docs/WAC2016/img/timeline.pdf docs/WAC2016/sig-alternate.cls docs/WAC2016/waccopyright.sty pythonServer.py save.php scripts/comment_parser.py scripts/evaluation_stats.py scripts/generate_report.py scripts/score_parser.py scripts/score_plot.py scripts/timeline_view.py scripts/timeline_view_movement.py
diffstat 33 files changed, 4803 insertions(+), 189 deletions(-) [+]
line wrap: on
line diff
--- a/.hgignore	Tue Oct 13 10:20:04 2015 +0100
+++ b/.hgignore	Mon Nov 23 09:13:12 2015 +0000
@@ -30,4 +30,16 @@
 saves/*.csv
 saves/*/*.csv
 saves/*/*.png
-saves/*/*.xml
\ No newline at end of file
+saves/*/*.xml
+saves/ratings/*.pdf
+saves/timelines_movement/*.pdf
+saves
+re:^docs/WAC2016/\._WAC2016\.bib$
+re:^docs/WAC2016/\._WAC2016\.pdf$
+re:^docs/WAC2016/\._WAC2016\.tex$
+*.out
+*.fdb_latexmk
+*.toc
+subject
+resample
+test-data
\ No newline at end of file
--- a/README.txt	Tue Oct 13 10:20:04 2015 +0100
+++ b/README.txt	Mon Nov 23 09:13:12 2015 +0000
@@ -35,6 +35,7 @@
 
 
 QUICK START
+
 Using the example project: 
 1. Make sure your system sample rate corresponds with the sample rate of the audio files, if the input XML file enforces the given sample rate. 
 2. Run pythonServer.py (make sure you have Python installed). 
@@ -44,6 +45,7 @@
 
 
 LEGACY
+
 The APE interface and most of the functionality of the interface is inspired by the APE toolbox for MATLAB [1]. See https://code.soundsoftware.ac.uk/projects/ape for the source code and corresponding paper. 
 
 
@@ -81,10 +83,16 @@
 In Firefox, go to Tools>Web Developer>Web Console, or hit Cmd + Alt + K. 
 
 
+REMOTE TESTS
+
+As the test is browser-based, it can be run remotely from a web server without modification. To allow for remote storage of the output XML files (as opposed to saving them locally on the subject’s machine, which is the default if no ‘save’ path is specified or found), a PHP script on the server needs to accept them. An example of such a script will be included in a future version. 
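For illustration only (this is not the save.php or pythonServer.py shipped with the tool), a receiving endpoint essentially has to store the POSTed XML and reply with the small XML response that core.js now checks for: a root node with state="OK" and a <file> child carrying the stored file name and its size in bytes. A minimal Python 2.7 sketch of such a handler follows; the port number and the test-N.xml naming scheme are assumptions for this example.

# Illustrative sketch only -- not the save.php or pythonServer.py shipped with the
# tool. A minimal Python 2.7 handler that accepts a POSTed result XML, stores it as
# saves/test-N.xml and replies with the <response state="OK"><file bytes="...">
# document that core.js checks for. The port number (8000) is an assumption.
import os
import BaseHTTPServer

SAVE_DIR = 'saves'

class SaveHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.getheader('content-length', 0))
        xml_data = self.rfile.read(length)
        n = 0
        while os.path.exists(os.path.join(SAVE_DIR, 'test-%d.xml' % n)):
            n += 1                                    # next free test-N.xml name
        filename = 'test-%d.xml' % n
        with open(os.path.join(SAVE_DIR, filename), 'w') as f:
            f.write(xml_data)
        self.send_response(200)
        self.send_header('Content-type', 'text/xml')
        self.end_headers()
        self.wfile.write('<response state="OK"><file bytes="%d">%s</file></response>'
                         % (len(xml_data), filename))

if __name__ == '__main__':
    BaseHTTPServer.HTTPServer(('', 8000), SaveHandler).serve_forever()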
+
+
 SCRIPTS
 
-The tool comes with a few handy Python scripts for easy extraction of ratings or comments, and visualisation of ratings and timelines. See below for a quick guide on how to use them. All scripts written for Python 2.7. Visualisation requires the free matplotlib toolbox (http://matplotlib.org), numpy and scipy. 
-By default, the scripts can be run from the ‘scripts’ folder, with the result files in the ‘saves’ folder (the default location where result XMLs are stored). 
+The tool comes with a few handy Python (2.7) scripts for easy extraction of ratings or comments, and visualisation of ratings and timelines. See below for a quick guide on how to use them. Visualisation requires the free matplotlib toolbox (http://matplotlib.org), numpy and scipy. 
+By default, the scripts can be run from the ‘scripts’ folder, with the result files in the ‘saves’ folder (the default location where result XMLs are stored). Each script takes the XML file folder as an argument, along with other arguments in some cases.
+Note: avoid spaces in file and folder names (they may work on some systems, but cause problems on others). 
 
 	comment_parser.py
 		Extracts comments from the output XML files corresponding with the different subjects found in ‘saves/’. It creates a folder per ‘audioholder’/page it finds, and stores a CSV file with comments for every ‘audioelement’/fragment within these respective ‘audioholders’/pages. In this CSV file, every line corresponds with a subject/output XML file. Depending on the settings, the first column containing the name of the corresponding XML file can be omitted (for anonymisation). 
@@ -93,6 +101,9 @@
 	evaluation_stats.py
 		Shows a few statistics of the tests in the ‘saves/‘ folder so far, mainly for checking for errors: the number of files present, the audioholder IDs that were tested (and how many of each separate ID), the duration of each page, the duration of each complete test, the average duration per page, and the average duration as a function of the page number. 
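As a rough illustration (this is not the bundled evaluation_stats.py), the per-page duration can be read from the ‘testTime’ metricResult stored under each audioholder in the result XML. The sketch below assumes the default layout: run from the ‘scripts’ folder with the result files in ‘../saves/’.

# Illustrative sketch only -- not the bundled evaluation_stats.py. Prints the
# per-page duration from the 'testTime' metricResult found under each audioholder.
# Assumes the default layout: run from 'scripts/' with result XMLs in '../saves/'.
import glob
import os
import xml.etree.ElementTree as ET

for xml_path in sorted(glob.glob('../saves/*.xml')):
    root = ET.parse(xml_path).getroot()              # <browserevaluationresult>
    for audioholder in root.iter('audioholder'):
        for metric in audioholder.iter('metricResult'):
            if metric.get('id') == 'testTime':
                print '%s, page %s: %.1f s' % (os.path.basename(xml_path),
                                               audioholder.get('id'),
                                               float(metric.text))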
 
+	generate_report.py
+		Similar to ‘evaluation_stats.py’, but generates a PDF report based on the output files in the ‘saves/‘ folder - or any folder specified as a command line argument. Writes a LaTeX document, then uses pdflatex to convert it to a PDF. 
+
 	score_parser.py
 		Extracts rating values from the XML to CSV - necessary for running visualisation of ratings. Creates the folder ‘saves/ratings/‘ if not yet created, to which it writes a separate file for every ‘audioholder’/page in any of the output XMLs it finds in ‘saves/‘. Within each file, rows represent different subjects (output XML file names) and columns represent different ‘audioelements’/fragments. 
 
@@ -102,6 +113,9 @@
 		Requires the free matplotlib library. 
 		At this point, more than one subject is needed for this script to work. 
 
+	timeline_view_movement.py
+		Creates a timeline for every subject, for every ‘audioholder’/page, corresponding with any of the output XML files found in ‘/saves’. It shows the marker movements of the different fragments, along with when each fragment was played (red regions). Automatically takes fragment names, rating axis title, rating axis labels, and audioholder name from the XML file (if available). 
+
 	timeline_view.py
 		Creates a timeline for every subject, for every ‘audioholder’/page, corresponding with any of the output XML files found in ‘/saves’. It shows when and for how long the subject listened to each of the fragments. 
 
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/analyse.html	Mon Nov 23 09:13:12 2015 +0000
@@ -0,0 +1,776 @@
+<!DOCTYPE html>
+<html lang="en">
+	<head>
+		<meta charset="utf-8">
+
+		<!-- Always force latest IE rendering engine (even in intranet) & Chrome Frame
+		Remove this if you use the .htaccess -->
+		<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
+
+		<title>Analysis</title>
+		<meta name="description" content="Show results from subjective evaluation">
+		<meta name="author" content="Brecht De Man">
+		
+		<script type="text/javascript" src="https://www.google.com/jsapi"></script>
+		<script type="text/javascript">
+			// To aid 'one-page set-up' all scripts and CSS must be included directly in this file!
+			
+			//google.load("visualization", "1", {packages:["corechart"]});
+			
+			/*************
+			*	SETUP   *
+			*************/
+			// folder where to find the XML files
+			xmlFileFolder = "analysis_test";
+			// array of XML files
+			var xmlFiles = ['McG-A-2013-09.xml', 'McG-A-2014-03.xml', 'McG-A-2014-12.xml', 'McG-B-2013-09.xml', 
+			'McG-B-2014-03.xml', 'McG-B-2014-12.xml', 'McG-C-2013-09.xml', 'McG-C-2014-03.xml', 'McG-C-2014-12.xml', 
+			'McG-D-2013-09.xml', 'McG-D-2014-03.xml', 'McG-D-2014-12.xml', 'McG-E-2013-09.xml', 'McG-E-2014-03.xml', 
+			'McG-E-2014-12.xml', 'McG-F-2013-09.xml', 'McG-F-2014-03.xml', 'McG-F-2014-12.xml', 'McG-G-2014-03.xml', 
+			'McG-G-2014-12.xml', 'McG-H-2013-09.xml', 'McG-H-2014-03.xml', 'McG-H-2014-12.xml', 'McG-I-2013-09.xml', 
+			'McG-I-2014-03.xml', 'McG-J-2013-09.xml', 'McG-J-2014-03.xml', 'McG-K-2013-09.xml', 'McG-K-2014-03.xml', 
+			'McG-L-2013-09.xml', 'McG-L-2014-03.xml', 'McG-M-2013-09.xml', 'McG-M-2014-03.xml', 'McG-N-2013-09.xml', 
+			'McG-N-2014-03.xml', 'McG-O-2013-09.xml', 'McG-O-2014-03.xml', 'McG-P-2013-09.xml', 'McG-P-2014-03.xml', 
+			'McG-pro1-2013-09.xml', 'McG-pro1-2014-03.xml', 'McG-pro1-2014-12.xml', 'McG-pro2-2013-09.xml', 
+			'McG-pro2-2014-03.xml', 'McG-pro2-2014-12.xml', 'McG-Q-2014-12.xml', 'McG-R-2014-12.xml', 
+			'McG-S-2014-12.xml', 'McG-subA-2013-09.xml', 'McG-subA-2014-03.xml', 'McG-subB-2014-03.xml', 
+			'McG-subB-2014-12.xml', 'McG-subC-2013-09.xml', 'McG-subC-2014-03.xml', 'McG-subC-2014-12.xml', 
+			'McG-subD-2013-09.xml', 'McG-subD-2014-12.xml', 'McG-subE-2014-12.xml', 'McG-subG-2014-12.xml', 
+			'McG-subH-2013-09.xml', 'McG-T-2014-12.xml', 'McG-U-2014-12.xml', 'McG-V-2014-12.xml', 
+			'McG-W-2014-12.xml', 'McG-X-2014-12.xml', 'MG1-2013-09.xml', 'MG2-2013-09.xml', 'MG3-2013-09.xml', 
+			'MG4-2013-09.xml', 'MG5-2013-09.xml', 'MG6-2013-09.xml', 'MG7-2013-09.xml', 'MG8-2013-09.xml', 
+			'MG9-2013-09.xml', 'QM-1-1.xml', 'QM-1-2.xml', 'QM-10-1.xml', 'QM-11-1.xml', 'QM-11-2.xml', 'QM-12-1.xml', 'QM-12-2.xml', 
+			'QM-13-1.xml', 'QM-14-1.xml', 'QM-15-1.xml', 'QM-16-1.xml', 'QM-17-1.xml', 'QM-18-1.xml', 'QM-18-2.xml', 
+			'QM-18-3.xml', 'QM-19-1.xml', 'QM-2-1.xml', 'QM-2-2.xml', 'QM-2-3.xml', 'QM-20-1.xml', 'QM-20-2.xml', 
+			'QM-20-3.xml', 'QM-21-1.xml', 'QM-21-2.xml', 'QM-3-1.xml', 'QM-3-2.xml', 'QM-3-3.xml', 'QM-4-1.xml', 'QM-5-1.xml', 
+			'QM-5-2.xml', 'QM-6-1.xml', 'QM-6-2.xml', 'QM-7-1.xml', 'QM-7-2.xml', 'QM-8-1.xml', 'QM-9-1.xml',
+			'PXL-L1.xml','PXL-L2.xml','PXL-L3.xml','PXL-L4.xml','PXL-L5.xml','PXL-S1.xml','PXL-S2.xml','PXL-S3.xml',
+			'PXL-S4.xml','PXL-S5.xml','PXL-S6.xml','PXL-S7.xml','PXL-pro.xml','DU-A1.xml','DU-A2.xml','DU-B1.xml',
+			'DU-B2.xml','DU-C1.xml','DU-C2.xml','DU-D1.xml','DU-D2.xml','DU-E1.xml','DU-F1.xml','DU-F2.xml','DU-G1.xml',
+			'DU-G2.xml','DU-H1.xml','DU-H2.xml','DU-I2.xml','DU-J2.xml','DU-K1.xml','DU-K2.xml','DU-L1.xml','DU-L2.xml',
+			'DU-M1.xml','DU-M2.xml','DU-N1.xml','DU-O1.xml','DU-O2.xml','DU-P1.xml','DU-P2.xml','DU-Q1.xml','DU-Q2.xml',
+			'DU-R1.xml','DU-R2.xml','DU-S1.xml','DU-S2.xml','DU-T1.xml','DU-T2.xml','DU-U1.xml','DU-U2.xml','DU-U3.xml'];
+			//['QM-1-1.xml','QM-2-1.xml','QM-2-2.xml','QM-2-3.xml','QM-3-1.xml','QM-3-2.xml','QM-4-1.xml','QM-5-1.xml','QM-5-2.xml','QM-6-1.xml','QM-6-2.xml','QM-7-1.xml','QM-7-2.xml','QM-8-1.xml','QM-9-1.xml','QM-10-1.xml','QM-11-1.xml','QM-12-1.xml','QM-12-2.xml','QM-13-1.xml','QM-14-1.xml','QM-15-1.xml','QM-16-1.xml','QM-17-1.xml','QM-18-1.xml','QM-18-2.xml','QM-18-3.xml','QM-19-1.xml','QM-20-1.xml','QM-20-2.xml','QM-20-3.xml','QM-21-1.xml','QM-21-2.xml'];
+			//['McG-A-2014-03.xml','McG-B-2014-03.xml','McG-C-2014-03.xml','McG-D-2014-03.xml','McG-E-2014-03.xml','McG-F-2014-03.xml','McG-G-2014-03.xml','McG-H-2014-03.xml'];
+							
+			//TODO: make retrieval of file names automatic / drag files on here
+			
+			/****************
+			*	VARIABLES  *
+			****************/
+			
+			// Counters
+			// How many files, audioholders, audioelementes and statements annotated (don't count current one)
+			var numberOfFiles = -1;
+			var numberOfaudioholders = -1;
+			var numberOfaudioelementes = -1;
+			var numberOfStatements = -1;
+			var numberOfSkippedComments = 0;
+			
+			// Object arrays
+			var fileNameArray = [];
+			var subjectArray = [];
+			var audioholderArray = [];
+			var audioelementArray = [];
+			
+			// End of (file, audioholder, audioelement) flags
+			var newFile = true;
+			var newAudioHolder = true;
+			var newAudioElement = true;
+			
+			var fileCounter = 0;		// file index
+			var audioholderCounter=0;	// audioholder index (current XML file)
+			var audioelementCounter=0;	// audioelement index (current audioholder)
+			var statementNumber=0; 		// total number of statements
+			
+			var root;					// root of XML file
+			var commentInFull = '';		// full comment
+			
+			var playAudio = true;		// whether corresponding audio should be played back
+			
+			// Measuring time
+			var lastTimeMeasured = -1;			// time of the last measurement (ms)
+			var durationLastAnnotation = -1;	// duration of last annotation
+			var timeArray = [];					// measured annotation durations
+			var MIN_TIME = 1.0;					// minimum time counted as significant
+			var measurementPaused = false;		// whether time measurement is paused
+			var timeInBuffer = 0;				// time accumulated while paused
+			
+			var topLevel;
+			window.onload = function() {
+				// Initialise page
+				topLevel = document.getElementById('topLevelBody');
+				var setup = document.createElement('div');
+				setup.id = 'setupTagDiv';
+				loadAllFiles();
+				printSurveyData() 
+				//makePlots();
+				// measure time at this point: 
+				lastTimeMeasured = new Date().getTime(); // in milliseconds
+			};
+			
+			// Assert function
+			function assert(condition, message) {
+				if (!condition) {
+					message = message || "Assertion failed";
+					if (typeof Error !== "undefined") {
+						throw new Error(message);
+					}
+					throw message; // Fallback
+				}
+			}
+
+			function median(values) { // TODO: replace code by '50th percentile' - should be the same?
+				values.sort( function(a,b) {return a - b;} );
+				var half = Math.floor(values.length/2);
+				if (values.length % 2)
+					return values[half];
+				else
+					return (values[half-1] + values[half]) / 2.0;
+			}
+
+			function percentile(values, n) {
+				values.sort( function(a,b) {return a - b;} );
+				// get ordinal rank
+				var rank = Math.min(Math.floor(values.length*n/100), values.length-1);
+				return values[rank];
+			}
+			
+			/***********************
+			*	TIME MEASUREMENT  *
+			************************/
+			
+			// measure time since last time this function was called
+			function timeSinceLastCall() {
+				// current time
+				var currentTime = new Date().getTime();
+				// calculate time difference
+				var timeDifference = currentTime - lastTimeMeasured + timeInBuffer;
+				// clear buffer (for pausing)
+				timeInBuffer = 0;
+				// remember last measured time
+				lastTimeMeasured = currentTime; 
+				return timeDifference;
+			}
+			
+			// pause time measurement
+			function pauseTimeMeasurement() {
+				// UN-PAUSE
+				if (measurementPaused) { // already paused
+					// button shows 'pause' again
+					document.getElementById('pauseButton').innerHTML = 'Pause';
+					// toggle state
+					measurementPaused = false;
+					// resume time measurement
+					lastTimeMeasured = new Date().getTime(); // reset time, discard time while paused
+				} else { // PAUSE
+					// button shows 'resume'
+					document.getElementById('pauseButton').innerHTML = 'Resume';
+					// toggle state
+					measurementPaused = true;
+					// pause time measurement
+					timeInBuffer = timeSinceLastCall();
+				}
+			}
+			
+			// show elapsed time on interface
+			function showTimeElapsedInSeconds() {
+				// if paused: un-pause
+				if (measurementPaused) {
+					pauseTimeMeasurement();
+				}
+			
+				// time of last annotation
+				var lastAnnotationTime = timeSinceLastCall()/1000;
+				document.getElementById('timeDisplay').innerHTML = lastAnnotationTime.toFixed(2); 
+				// average time over last ... annotations
+				var avgAnnotationTime;
+				var numberOfElementsToAverage = 
+						document.getElementById('numberOfTimeAverages').value; 
+				if (isPositiveInteger(numberOfElementsToAverage)) {
+					avgAnnotationTime = 
+						calculateAverageTime(lastAnnotationTime, 
+											 Number(numberOfElementsToAverage));
+				} else {
+					// change text field content to 'ALL'
+					document.getElementById('numberOfTimeAverages').value = 'ALL'; 
+					avgAnnotationTime = calculateAverageTime(lastAnnotationTime, -1);
+				}
+				document.getElementById('timeAverageDisplay').innerHTML = avgAnnotationTime.toFixed(2);
+			}
+			
+			// auxiliary function: is string a positive integer?
+			// http://stackoverflow.com/questions/10834796/...
+			// validate-that-a-string-is-a-positive-integer
+			function isPositiveInteger(str) {
+				var n = ~~Number(str);
+				return String(n) === str && n >= 0;
+			}
+			
+			// calculate average time
+			function calculateAverageTime(newTimeMeasurementInSeconds,numberOfPoints) {
+				// append last measurement time to time array, if significant
+				if (newTimeMeasurementInSeconds > MIN_TIME) {
+					timeArray.push(newTimeMeasurementInSeconds); 
+				}
+				// average over last N elements of this array
+				if (numberOfPoints < 0 || numberOfPoints>=timeArray.length) { // calculate average over all
+					var sum = 0;
+					for (var i = 0; i < timeArray.length; i++) { 
+						sum += timeArray[i];
+					}
+					averageOfTimes = sum/timeArray.length;
+				} else { // calculate average over specified number of times measured last
+					var sum = 0;
+					for (var i = timeArray.length-numberOfPoints; i < timeArray.length; i++) { 
+						sum += timeArray[i];
+					}
+					averageOfTimes = sum/numberOfPoints;
+				}
+				return averageOfTimes;
+			}
+			
+			
+			/********************************
+			*	   PLAYBACK OF AUDIO	   *
+			********************************/
+			
+			//PLAYaudioelement
+			// Keep track of whether audio should be played
+			function playFlagChanged(){
+				playAudio = playFlag.checked; // global variable
+				
+				if (!playAudio){ // if audio needs to stop
+					audio.pause(); // stop audio - if anything is playing
+					currently_playing_audioholder = ''; // back to empty string so playaudioelement knows nothing's playing
+					currently_playing_audioelement = '';
+				}
+			}
+			
+			// audioholder that's currently playing
+			var currently_playing_audioholder = ''; // at first: empty string
+			var currently_playing_audioelement  = '';
+			var audio;
+			
+			// Play audioelement of audioholder if available, from start or from same position
+			function playaudioelement(audioholderName, audioelementerName){
+				if (playAudio) { // if enabled
+					// get corresponding file from folder
+					var file_location = 'audio/'+audioholderName + '/' + audioelementerName + '.mp3'; // fixed path and file name format
+					
+					// if not available, show error/warning message
+					//TODO ...
+				
+					// if nothing playing yet, start playing
+					if (currently_playing_audioholder == ''){ // signal that nothing is playing
+						//playSound(audioBuffer);
+						audio = new Audio(file_location);
+						audio.loop = true; // loop when end is reached
+						audio.play();
+						currently_playing_audioholder = audioholderName;
+						currently_playing_audioelement  = audioelementerName;
+					} else if (currently_playing_audioholder != audioholderName) {
+					// if different audioholder playing, stop that and start playing
+						audio.pause(); // stop audio
+						audio = new Audio(file_location); // load new file
+						audio.loop = true; // loop when end is reached
+						audio.play(); // play audio from the start
+						currently_playing_audioholder = audioholderName;
+						currently_playing_audioelement  = audioelementerName;
+					} else if (currently_playing_audioelement != audioelementerName) {
+					// if same audioholder playing, start playing from where it left off
+						skipTime = audio.currentTime; // time to skip to
+						audio.pause(); // stop audio
+						audio = new Audio(file_location);
+						audio.addEventListener('loadedmetadata', function() {
+							this.currentTime = skipTime;
+							console.log('Loaded '+audioholderName+'-'+audioelementerName+', playing from '+skipTime);
+						}, false); // skip to same time when audio is loaded! 
+						audio.loop = true; // loop when end is reached
+						audio.play(); // play from that time
+						audio.currentTime = skipTime;
+						currently_playing_audioholder = audioholderName;
+						currently_playing_audioelement  = audioelementerName;
+					} 
+					// if same audioelement playing: keep on playing (i.e. do nothing)
+				}
+			}
+			
+			/********************
+			*	READING FILES  *
+			********************/
+			
+			// Read necessary data from XML file
+			function readXML(xmlFileName){
+				if (window.XMLHttpRequest)
+				  {// code for IE7+, Firefox, Chrome, Opera, Safari
+				  xmlhttp=new XMLHttpRequest();
+				  }
+				else
+				  {// code for IE6, IE5
+				  xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
+				  }
+				xmlhttp.open("GET",xmlFileName,false);
+				xmlhttp.send();
+				return xmlhttp.responseXML; 
+			}
+			
+			// go over all files and compute relevant statistics
+			function loadAllFiles() {
+				// retrieve information from XMLs
+				
+				for (fileIndex = 0; fileIndex < xmlFiles.length; fileIndex++) {
+					xmlFileName = xmlFileFolder+"/"+xmlFiles[fileIndex];
+					xml = readXML(xmlFileName); 
+					if (xml != null) { // if file exists
+						// append file name to array of file names
+						fileNameArray.push(xmlFiles[fileIndex]);
+						
+						// get root of XML file
+						root = xml.getElementsByTagName('browserevaluationresult')[0];
+						
+						// get subject ID, add to array if not already there
+						pretest = root.getElementsByTagName('pretest')[0];
+						subjectID = pretest.getElementsByTagName('comment')[0];
+						if (subjectID){
+							if (subjectID.getAttribute('id')!='sessionId') { // warning in console when not available
+								console.log(xmlFiles[fileIndex]+': no SessionID available');
+							}
+							if (subjectArray.indexOf(subjectID.textContent) == -1) { // if not already in array
+								subjectArray.push(subjectID.textContent); // append to array
+							}
+						}
+						
+						// go over all audioholders, add to array if not already there
+						audioholderNodes = root.getElementsByTagName('audioholder');
+						// go over audioholderNodes and append audioholder name when not present yet
+						for (audioholderIndex = 0; audioholderIndex < audioholderNodes.length; audioholderIndex++) { 
+							audioholderName = audioholderNodes[audioholderIndex].getAttribute('id');
+							if (audioholderArray.indexOf(audioholderName) == -1) { // if not already in array
+								audioholderArray.push(audioholderName); // append to array
+							}
+							// within each audioholder, go over all audioelement IDs, add to array if not already there
+							audioelementNodes = audioholderNodes[audioholderIndex].getElementsByTagName('audioelement');
+							for (audioelementIndex = 0; audioelementIndex < audioelementNodes.length; audioelementIndex++) { 
+								audioelementName = audioelementNodes[audioelementIndex].getAttribute('id');
+								if (audioelementArray.indexOf(audioelementName) == -1) { // if not already in array
+									audioelementArray.push(audioelementName); // append to array
+								}
+							}
+						}
+						// count occurrences of each audioholder
+						// ...
+					}
+					else {
+						console.log('XML file '+xmlFileName+' not found.');
+					}
+				}
+
+				// sort alphabetically
+				fileNameArray.sort();
+				subjectArray.sort();
+				audioholderArray.sort();
+				audioelementArray.sort();
+				
+				// display all information in HTML
+				// show XML file folder
+				document.getElementById('xmlFileFolder_span').innerHTML = "\""+xmlFileFolder+"/\"";
+				// show number of files
+				document.getElementById('numberOfFiles_span').innerHTML = fileNameArray.length;
+				// show list of subject names
+				document.getElementById('subjectArray_span').innerHTML = subjectArray.toString();
+				// show list of audioholders
+				document.getElementById('audioholderArray_span').innerHTML = audioholderArray.toString();
+				// show list of audioelementes
+				document.getElementById('audioelementArray_span').innerHTML = audioelementArray.toString();
+			}
+
+			function printSurveyData() { 
+				// print some fields from the survey for different people
+
+				// go over all XML files
+				for (fileIndex = 0; fileIndex < xmlFiles.length; fileIndex++) {
+					xmlFileName = xmlFileFolder+"/"+xmlFiles[fileIndex];
+					xml = readXML(xmlFileName); 
+					// make a div
+					var div = document.createElement('div');
+					document.body.appendChild(div);
+					div.id = 'div_survey_'+xmlFileName;
+					div.style.width = '1100px'; 
+					//div.style.height = '350px'; 
+
+					// title for that div (subject id)
+					document.getElementById('div_survey_'+xmlFileName).innerHTML = '<h2>'+xmlFileName+'</h2>';
+
+					// which songs did they do
+					if (xml != null) { // if file exists
+						// get root of XML file
+						root = xml.getElementsByTagName('browserevaluationresult')[0];
+						// go over all audioholders
+						// document.getElementById('div_survey_'+xmlFileName).innerHTML += '<strong>Audioholders: </strong>';
+						// audioholderNodes = root.getElementsByTagName('audioholder');
+						// for (audioholderIndex = 0; audioholderIndex < audioholderNodes.length-1; audioholderIndex++) {
+						// 	document.getElementById('div_survey_'+xmlFileName).innerHTML += audioholderNodes[audioholderIndex].getAttribute('id')+', ';
+						// }
+						// document.getElementById('div_survey_'+xmlFileName).innerHTML += audioholderNodes[audioholderNodes.length-1].getAttribute('id');
+
+						// survey responses (each if available)
+						// get posttest node for total test
+						childNodes = root.childNodes;
+						posttestnode = null;
+						for (idx = 0; idx < childNodes.length; idx++){
+							if (childNodes[childNodes.length-idx-1].tagName == 'posttest') {
+								posttestnode = childNodes[childNodes.length-idx-1];
+								break;
+							}
+						}
+
+						// mix experience
+						if (posttestnode) {
+							posttestcomments = posttestnode.getElementsByTagName('comment');
+							for (idx=0; idx < posttestcomments.length; idx++){
+								commentsToPrint = ['generalExperience', 'interfaceExperience'];
+								idAttribute = posttestcomments[idx].getAttribute('id');
+								if (commentsToPrint.indexOf(idAttribute) >= 0) { // if exists? 
+									document.getElementById('div_survey_'+xmlFileName).innerHTML += '<br><strong>'+idAttribute+': </strong>'+posttestcomments[idx].textContent;
+								}
+							}
+						}
+					}
+				}
+			}
+
+			function makePlots() { //TODO: split into different functions
+				// TEMPORARY
+				makeTimeline(xmlFileFolder+"/"+xmlFiles[7]);
+
+				// create value array
+				var ratings = [];  // 3D matrix of ratings (audioholder, audioelement, subject)
+				for (audioholderIndex = 0; audioholderIndex < audioholderArray.length; audioholderIndex++) { 
+					ratings.push([]);
+					for (audioelementIndex = 0; audioelementIndex < audioelementArray.length; audioelementIndex++) { 
+						ratings[audioholderIndex].push([]);
+					}
+				}
+
+				// go over all XML files
+				for (fileIndex = 0; fileIndex < xmlFiles.length; fileIndex++) {
+					xmlFileName = xmlFileFolder+"/"+xmlFiles[fileIndex];
+					xml = readXML(xmlFileName); 
+					if (xml != null) { // if file exists
+						// get root of XML file
+						root = xml.getElementsByTagName('browserevaluationresult')[0];
+						// go over all audioholders
+						audioholderNodes = root.getElementsByTagName('audioholder');
+						for (audioholderIndex = 0; audioholderIndex < audioholderNodes.length; audioholderIndex++) { 
+							audioholderName = audioholderNodes[audioholderIndex].getAttribute('id');
+							audioelementNodes = audioholderNodes[audioholderIndex].getElementsByTagName('audioelement');
+							// go over all audioelements
+							for (audioelementIndex = 0; audioelementIndex < audioelementNodes.length; audioelementIndex++) { 
+								audioelementName = audioelementNodes[audioelementIndex].getAttribute('id');
+								// get value
+								var value = audioelementNodes[audioelementIndex].getElementsByTagName("value")[0].textContent;
+								if (value) { // if not empty, null, undefined...
+									ratingValue = parseFloat(value);
+									// add to matrix at proper position
+									aHidx = audioholderArray.indexOf(audioholderName);
+									aEidx = audioelementArray.indexOf(audioelementName);
+									ratings[aHidx][aEidx].push(ratingValue);
+								}
+							}
+						}
+
+						// go over all audioholders
+
+						// go over all audioelements within audioholder, see if present in idMatrix, add if not
+						// add corresponding rating to 'ratings', at position corresponding with position in idMatrix
+					}
+				}
+
+				for (audioholderIndex = 0; audioholderIndex < audioholderArray.length; audioholderIndex++) {
+					audioholderName = audioholderArray[audioholderIndex]; // for this song
+					tickArray = []
+
+					raw_data = [['SubjectID', 'Rating']];
+					audioElIdx = 0;
+					for (audioelementIndex = 0; audioelementIndex<ratings[audioholderIndex].length; audioelementIndex++){
+						if (ratings[audioholderIndex][audioelementIndex].length>0) {
+							audioElIdx++; // increase if not empty
+							// make tick label
+							tickArray.push({v:audioElIdx, f: audioelementArray[audioelementIndex]});
+						}
+						for (subject = 0; subject<ratings[audioholderIndex][audioelementIndex].length; subject++){
+							// add subject-value pair for each subject
+							raw_data.push([audioElIdx, ratings[audioholderIndex][audioelementIndex][subject]]); 
+						}
+					}
+
+					// create plot (one per song)
+					var data = google.visualization.arrayToDataTable(raw_data);
+
+					var options = {
+						title: audioholderName,
+						hAxis: {title: 'audioelement ID', minValue: 0, maxValue: audioElIdx+1,
+								ticks: tickArray},
+						vAxis: {title: 'Rating', minValue: 0, maxValue: 1},
+						seriesType: 'scatter',
+						legend: 'none'
+					};
+					var div = document.createElement('div');
+					document.body.appendChild(div);
+					div.id = 'div_'+audioholderName;
+					div.style.width = '1100px'; 
+					div.style.height = '350px'; 
+					var chart = new google.visualization.ComboChart(document.getElementById('div_'+audioholderName));
+					chart.draw(data, options);
+
+					// box plots
+					var div = document.createElement('div');
+					document.body.appendChild(div);
+					div.id = 'div_box_'+audioholderName;
+					div.style.width = '1100px'; 
+					div.style.height = '350px'; 
+					// Get median, percentiles, maximum and minimum; outliers.
+					pctl25 = [];
+					pctl75 = [];
+					med = [];
+					min = [];
+					max = [];
+					outlierArray = [];
+					max_n_outliers = 0; // maximum number of outliers for one audioelement
+					for (audioelementIndex = 0; audioelementIndex<ratings[audioholderIndex].length; audioelementIndex++){
+						med.push(median(ratings[audioholderIndex][audioelementIndex])); // median
+						pctl25.push(percentile(ratings[audioholderIndex][audioelementIndex], 25)); // 25th percentile
+						pctl75.push(percentile(ratings[audioholderIndex][audioelementIndex], 75)); // 75th percentile
+						IQR = pctl75[pctl75.length-1]-pctl25[pctl25.length-1];
+						// outliers: range of values which is above pctl75+1.5*IQR or below pctl25-1.5*IQR
+						outliers = [];
+						rest = [];
+						for (idx = 0; idx<ratings[audioholderIndex][audioelementIndex].length; idx++){
+							if (ratings[audioholderIndex][audioelementIndex][idx] > pctl75[pctl75.length-1]+1.5*IQR ||
+								ratings[audioholderIndex][audioelementIndex][idx] < pctl25[pctl25.length-1]-1.5*IQR){
+								outliers.push(ratings[audioholderIndex][audioelementIndex][idx]);
+							}
+							else {
+								rest.push(ratings[audioholderIndex][audioelementIndex][idx]);
+							}
+						}
+						outlierArray.push(outliers);
+						max_n_outliers = Math.max(max_n_outliers, outliers.length); // update max number
+						// max: maximum value which is not outlier
+						max.push(Math.max.apply(null, rest));
+						// min: minimum value which is not outlier
+						min.push(Math.min.apply(null, rest));
+					}
+
+					// Build data array
+					boxplot_data = [['ID', 'Span', '', '', '', 'Median']];
+					for (idx = 0; idx < max_n_outliers; idx++) {
+						boxplot_data[0].push('Outlier');
+					}
+					for (audioelementIndex = 0; audioelementIndex<ratings[audioholderIndex].length; audioelementIndex++){
+						if (ratings[audioholderIndex][audioelementIndex].length>0) { // if rating array not empty for this audioelement
+							data_array = [
+									audioelementArray[audioelementIndex], // name
+									min[audioelementIndex], // minimum
+									pctl75[audioelementIndex],
+									pctl25[audioelementIndex],
+									max[audioelementIndex], // maximum
+									med[audioelementIndex]
+									];
+							for (idx = 0; idx < max_n_outliers; idx++) {
+								if (idx<outlierArray[audioelementIndex].length){
+									data_array.push(outlierArray[audioelementIndex][idx]);
+								}
+								else {
+									data_array.push(null);
+								}
+							}
+							boxplot_data.push(data_array);
+						}
+					}
+
+					// Create and populate the data table.
+					var data = google.visualization.arrayToDataTable(boxplot_data);
+					  // Create and draw the visualization.
+					var ac = new google.visualization.ComboChart(document.getElementById('div_box_'+audioholderName));
+					ac.draw(data, {
+						title : audioholderName,
+						//width: 600,
+						//height: 400,
+						vAxis: {title: "Rating"},
+						hAxis: {title: "audioelement ID"},
+						seriesType: "line",
+						pointSize: 5, 
+						lineWidth: 0,
+						colors: ['black'], 
+						series: { 0: {type: "candlesticks", color: 'blue'}, // box plot shape
+								  1: {type: "line", pointSize: 10, lineWidth: 0, color: 'red' } }, // median
+						legend: 'none'
+					});
+				}
+			}
+
+			function makeTimeline(xmlFileName){ // WIP
+				// Based on the XML file name, take time data and plot playback and marker movements
+
+				// read XML file and check if exists
+				xml = readXML(xmlFileName); 
+				if (!xml) { // if file does not exist
+					console.log('XML file '+xmlFileName+' does not exist.');
+					return; // do nothing; exit function
+				}
+				// get root of XML file
+				root = xml.getElementsByTagName('browserevaluationresult')[0];
+
+				audioholder_time = 0; 
+				previous_audioholder_time = 0; // time spent before current audioholder
+				time_offset = 0; // test starts at zero
+
+				// go over all audioholders
+				audioholderNodes = root.getElementsByTagName('audioholder');
+				for (audioholderIndex = 0; audioholderIndex < audioholderNodes.length; audioholderIndex++) { 
+					audioholderName = audioholderNodes[audioholderIndex].getAttribute('id');
+					if (!audioholderName) {
+						console.log('audioholder name is empty; go to next one. ('+xmlFileName+')');
+						continue; // skip to the next audioholder
+					}
+
+					// subtract total audioholder length from subsequent audioholder event times
+					audioholder_children = audioholderNodes[audioholderIndex].childNodes; 
+					foundIt = false;
+					console.log(audioholder_children[2].getElementsByTagName("metricResult")) // not working! 
+					for (idx = 0; idx<audioholder_children.length; idx++) { // go over children
+
+						if (audioholder_children[idx].getElementsByTagName('metricResult').length) {
+							console.log(audioholder_children[idx].getElementsByTagName('metricResult')[0]);
+							if (audioholder_children[idx].getElementsByTagName('metricResult')[0].getAttribute('id') == "testTime"){
+								audioholder_time = parseFloat(audioholder_children[idx].getElementsByTagName('metricResult')[0].textContent);
+								console.log(audioholder_time); 
+								foundIt = true;
+							}
+						}
+					}
+					if (!foundIt) {
+						console.log("Skipping audioholder without total time specified from "+xmlFileName+"."); // always hitting this
+						break;
+					}
+
+					audioelementNodes = audioholderNodes[audioholderIndex].getElementsByTagName('audioelement');
+					
+					// make div
+
+					// draw chart
+
+					// legend with audioelement names	
+				}
+			}
+			
+		</script>
+
+
+
+		<style>
+			div {
+				padding: 2px;
+				margin-top: 2px;
+				margin-bottom: 2px;
+			}
+			div.head{
+				margin-left: 10px;
+				border: black;
+				border-width: 2px;
+				border-style: solid;
+			}
+			div.attrib{
+				margin-left:25px;
+				border: black;
+				border-width: 2px;
+				border-style: dashed;
+				margin-bottom: 10px;
+			}
+			div#headerMatter{
+				background-color: #FFFFCC;
+			}
+			div#currentStatement{
+				font-size:3.0em;
+				font-weight: bold;
+				
+			}
+			div#debugDisplay {
+				color: #CCCCCC;
+				font-size:0.3em;
+			}
+			span#scoreDisplay {
+				font-weight: bold;
+			}
+			div#wrapper {
+				width: 780px;
+				border: 1px solid black;
+				overflow: hidden; /* add this to contain floated children */
+			}
+			div#instrumentSection {
+				width: 250px;
+				border: 1px solid red;
+				display: inline-block;
+			}
+			div#featureSection {
+				width: 250px;
+				border: 1px solid green;
+				display: inline-block;
+			}
+			div#valenceSection {
+				width: 250px;
+				border: 1px solid blue;
+				display: inline-block;
+			}
+			button#previousComment{
+				width: 120px;
+				height: 150px;
+				font-size:1.5em;
+			}
+			button#nextComment{
+				width: 666px;
+				height: 150px;
+				font-size:1.5em;
+			}
+			ul
+			{
+				list-style-type: none; /* no bullet points */
+				margin-left: -20px; /* less indent */
+				margin-top: 0px;
+  				margin-bottom: 5px;
+			}
+		</style>
+		
+	</head>
+
+	<body>
+		<h1>Subjective evaluation results</h1>
+		
+		<div id="debugDisplay">
+		XML file folder: <span id="xmlFileFolder_span"></span>
+		</div>
+
+		<div id="headerMatter">
+			<div>
+				<strong>Result XML files:</strong> <span id="numberOfFiles_span"></span>
+			</div>
+			<div>
+				<strong>Audioholders in dataset:</strong> <span id="audioholderArray_span"></span>
+			</div>
+			<div>
+				<strong>Subjects in dataset:</strong> <span id="subjectArray_span"></span>
+			</div>
+			<div>
+				<strong>Audioelements in dataset:</strong> <span id="audioelementArray_span"></span>
+			</div>
+			<br>
+		</div>
+		<br>
+
+		<!-- Show time elapsed 
+		The last annotation took <strong><span id="timeDisplay">(N/A)</span></strong> seconds.
+		<br>-->
+		
+	</body>
+</html>
--- a/ape.js	Tue Oct 13 10:20:04 2015 +0100
+++ b/ape.js	Mon Nov 23 09:13:12 2015 +0000
@@ -588,7 +588,7 @@
 	    {
 	        if (audioEngineContext.timer.testStarted == false)
 	        {
-	            alert('You have not started the test! Please press start to begin the test!');
+	            alert('You have not started the test! Please click a fragment to begin the test!');
 	            return;
 	        }
 	    }
--- a/core.js	Tue Oct 13 10:20:04 2015 +0100
+++ b/core.js	Mon Nov 23 09:13:12 2015 +0000
@@ -190,6 +190,12 @@
 				textArea.rows = "10";
 				break;
 			}
+			document.onkeydown = function(event){
+				event = event || window.event; // use the event argument where window.event is not available
+				if (event.keyCode == 13){ // when you hit enter
+					event.preventDefault(); // don't make newline
+					popup.proceedClicked(); // go to the next window (or start the test or submit)
+				}
+			}
 			this.popupResponse.appendChild(textArea);
 			textArea.focus();
 		} else if (node.type == 'checkbox') {
@@ -668,9 +674,22 @@
 			if (xmlhttp.status != 200 && xmlhttp.readyState == 4) {
 				createProjectSave(null);
 			} else {
-				popup.showPopup();
-				popup.popupContent.innerHTML = null;
-				popup.popupContent.textContent = "Thank you for performing this listening test";
+				if (xmlhttp.responseXML == null)
+				{
+					return createProjectSave(null);
+				}
+				var response = xmlhttp.responseXML.childNodes[0];
+				if (response.getAttribute('state') == "OK")
+				{
+					var file = response.getElementsByTagName('file')[0];
+					console.log('Save OK: Filename '+file.textContent+','+file.getAttribute('bytes')+'B');
+					popup.showPopup();
+					popup.popupContent.innerHTML = null;
+					popup.popupContent.textContent = "Thank you!";
+				} else {
+					var message = response.getElementsByTagName('message')[0];
+					errorSessionDump(message.textContent);
+				}
 			}
 		};
 		xmlhttp.send(file);
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/docs/Instructions/ListeningTestInstructions.bib	Mon Nov 23 09:13:12 2015 +0000
@@ -0,0 +1,31 @@
+%% This BibTeX bibliography file was created using BibDesk.
+%% http://bibdesk.sourceforge.net/
+
+%% Created for Brecht De Man at 2015-09-30 17:44:12 +0200 
+
+
+%% Saved with string encoding Unicode (UTF-8) 
+
+
+
+@conference{ape,
+	Author = {De Man, Brecht and Joshua D. Reiss},
+	Booktitle = {136th Convention of the Audio Engineering Society},
+	Date-Added = {2015-09-29 17:07:16 +0000},
+	Date-Modified = {2015-09-29 17:07:20 +0000},
+	Keywords = {perceptual evaluation},
+	Month = {April},
+	Read = {1},
+	Title = {{APE}: {A}udio {P}erceptual {E}valuation toolbox for {MATLAB}},
+	Year = {2014},
+	Bdsk-File-1 = {YnBsaXN0MDDUAQIDBAUGJCVYJHZlcnNpb25YJG9iamVjdHNZJGFyY2hpdmVyVCR0b3ASAAGGoKgHCBMUFRYaIVUkbnVsbNMJCgsMDxJXTlMua2V5c1pOUy5vYmplY3RzViRjbGFzc6INDoACgAOiEBGABIAFgAdccmVsYXRpdmVQYXRoWWFsaWFzRGF0YV8QOi4uLy4uLy4uLy4uL0dvb2dsZSBEcml2ZS9Xcml0aW5ncy9fcHVibGljYXRpb25zL2FlczEzNi5wZGbSFwsYGVdOUy5kYXRhTxEBsgAAAAABsgACAAAMTWFjaW50b3NoIEhEAAAAAAAAAAAAAAAAAAAA0Fxdh0grAAAACl8UCmFlczEzNi5wZGYAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAKaS7PXHsUAAAAAAAAAAAABAAEAAAJIAAAAAAAAAAAAAAAAAAAAA1fcHVibGljYXRpb25zAAAQAAgAANBcQWcAAAARAAgAAM9cbQQAAAABABQACl8UAApeugAKXQIACUReAAKT1QACAE1NYWNpbnRvc2ggSEQ6VXNlcnM6AEJyZWNodDoAR29vZ2xlIERyaXZlOgBXcml0aW5nczoAX3B1YmxpY2F0aW9uczoAYWVzMTM2LnBkZgAADgAWAAoAYQBlAHMAMQAzADYALgBwAGQAZgAPABoADABNAGEAYwBpAG4AdABvAHMAaAAgAEgARAASADtVc2Vycy9CcmVjaHQvR29vZ2xlIERyaXZlL1dyaXRpbmdzL19wdWJsaWNhdGlvbnMvYWVzMTM2LnBkZgAAEwABLwAAFQACAA3//wAAgAbSGxwdHlokY2xhc3NuYW1lWCRjbGFzc2VzXU5TTXV0YWJsZURhdGGjHR8gVk5TRGF0YVhOU09iamVjdNIbHCIjXE5TRGljdGlvbmFyeaIiIF8QD05TS2V5ZWRBcmNoaXZlctEmJ1Ryb290gAEACAARABoAIwAtADIANwBAAEYATQBVAGAAZwBqAGwAbgBxAHMAdQB3AIQAjgDLANAA2AKOApAClQKgAqkCtwK7AsICywLQAt0C4ALyAvUC+gAAAAAAAAIBAAAAAAAAACgAAAAAAAAAAAAAAAAAAAL8}}
+
+@conference{waet,
+	Author = {Nicholas Jillings and David Moffat and De Man, Brecht and Joshua D. Reiss},
+	Booktitle = {12th Sound and Music Computing Conference},
+	Date-Added = {2015-09-22 16:48:27 +0000},
+	Date-Modified = {2015-09-22 16:48:33 +0000},
+	Month = {July},
+	Read = {1},
+	Title = {Web {A}udio {E}valuation {T}ool: {A} browser-based listening test environment},
+	Year = {2015}}
Binary file docs/Instructions/ListeningTestInstructions.pdf has changed
--- a/docs/Instructions/ListeningTestInstructions.tex	Tue Oct 13 10:20:04 2015 +0100
+++ b/docs/Instructions/ListeningTestInstructions.tex	Mon Nov 23 09:13:12 2015 +0000
@@ -10,6 +10,7 @@
 \usepackage{amssymb}
 \usepackage{cite}
 \usepackage{hyperref}				% Hyperlinks
+\usepackage[nottoc,numbib]{tocbibind}	% 'References' in TOC
 
 \graphicspath{{img/}}					% Relative path where the images are stored. 
 
@@ -20,50 +21,57 @@
 \begin{document}
 \maketitle
 
-These instructions are about use of the Web Audio Evaluation Tool \cite{deman2015c} with the APE interface \cite{deman2014b} on Windows and Mac OS X platforms. 
+These instructions are about use of the Web Audio Evaluation Tool \cite{waet} with the APE interface \cite{ape} on Windows and Mac OS X platforms. 
 % TO DO: Linux
 
 \tableofcontents
 
-
+\clearpage
 
 \section{Installation and set up}
-	Download the folder and unzip in a location of your choice. 
+	Download the folder (\url{https://code.soundsoftware.ac.uk/hg/webaudioevaluationtool/archive/tip.zip}) and unzip in a location of your choice. 
 	
 	\subsection{Contents}
 		The folder should contain the following elements: \\
 		
 		\textbf{Main folder:} 
-		\begin{itemize}
-                    	\item \texttt{ape.css, core.css, graphics.css}, structure.css: style files (edit to change appearance)
-                    	\item \texttt{ape.js}: JavaScript file for APE-style interface \cite{deman2014b}
-                    	\item \texttt{core.js}: JavaScript file with core functionality
-                    	\item \texttt{index.html}: webpage where interface should appear
-                    	\item \texttt{jquery-2.1.4.js}: jQuery JavaScript Library
-                    	\item \texttt{pythonServer.py}: webserver for running tests locally
-                    	\item \texttt{pythonServer-legacy.py}: webserver with limited functionality (no automatic storing of output XML files)\\
-		\end{itemize}
-                 \textbf{Documentation (/docs/)}
-                 \begin{itemize}
-                    	\item Project Specification Document (\LaTeX/PDF)
-                    	\item Results Specification Document (\LaTeX/PDF)
-                    	\item SMC15: PDF and \LaTeX source of corresponding SMC2015 publication \cite{deman2015c}\\
-		\end{itemize}
-                 \textbf{Example project (/example\_eval/)}
-                    	\begin{itemize}
-                    	\item An example of what the set up XML should look like, with example audio files 0.wav-10.wav which are short recordings at 44.1kHz, 16bit of a woman saying the corresponding number (useful for testing randomisation and general familiarisation with the interface).\\ \end{itemize}
-                  \textbf{Output files (/saves/)}
-                    	\begin{itemize}
-                    	\item The output XML files of tests will be stored here by default by the \texttt{pythonServer.py} script.\\ \end{itemize}
-                  \textbf{Auxiliary scripts (/scripts/)}
-                    	\begin{itemize}
-                    	\item Helpful Python scripts for extraction and visualisation of data.\\ \end{itemize}
-                  \textbf{Test creation tool (/test\_create/)}
-                    	\begin{itemize}
-                    	\item Webpage for easily setting up your own test without having to delve into the XML.\\ \end{itemize}
+			\begin{itemize}
+	            	\item \texttt{ape.css, core.css, graphics.css, structure.css}: style files (edit to change appearance)
+	            	\item \texttt{ape.js}: JavaScript file for APE-style interface \cite{ape}
+	            	\item \texttt{CITING.txt, LICENSE.txt, README.txt}: text files with, respectively, the citation which we ask you to include in any work where this tool or any portion thereof is used or modified; the license under which the software is shared; and a general readme file.
+	            	\item \texttt{core.js}: JavaScript file with core functionality
+	            	\item \texttt{index.html}: webpage where interface should appear
+	            	\item \texttt{jquery-2.1.4.js}: jQuery JavaScript Library
+	            	\item \texttt{pythonServer.py}: webserver for running tests locally
+	            	\item \texttt{pythonServer-legacy.py}: webserver with limited functionality (no automatic storing of output XML files)\\
+			\end{itemize}
+	     \textbf{Documentation (./docs/)}
+	         \begin{itemize}
+	         		\item Instructions: PDF and \LaTeX source of these instructions
+	            	\item Project Specification Document (\LaTeX/PDF)
+	            	\item Results Specification Document (\LaTeX/PDF)
+	            	\item SMC15: PDF and \LaTeX source of corresponding SMC2015 publication \cite{waet}
+	            	\item WAC2016: PDF and \LaTeX source of corresponding WAC2016 publication\\
+			\end{itemize}
+         \textbf{Example project (./example\_eval/)}
+            	\begin{itemize}
+            		\item An example of what the set up XML should look like, with example audio files 0.wav-10.wav which are short recordings at 44.1kHz, 16bit of a woman saying the corresponding number (useful for testing randomisation and general familiarisation with the interface).\\ 
+            	\end{itemize}
+          \textbf{Output files (./saves/)}
+            	\begin{itemize}
+            		\item The output XML files of tests will be stored here by default by the \texttt{pythonServer.py} script.\\ 
+            	\end{itemize}
+          \textbf{Auxiliary scripts (./scripts/)}
+            	\begin{itemize}
+            		\item Helpful Python scripts for extraction and visualisation of data.\\ 
+            	\end{itemize}
+          \textbf{Test creation tool (./test\_create/)}
+            	\begin{itemize}
+            		\item Webpage for easily setting up your own test without having to delve into the XML.\\ 
+            	\end{itemize}
                     	
 	\subsection{Browser}
-		As Microsoft Internet Explorer doesn't support the Web Audio API \footnote{\url{http://caniuse.com/\#feat=audio-api}}, you will need another browser like Google Chrome, Safari or Firefox (all three are tested and confirmed to work). 
+		As Microsoft Internet Explorer doesn't support the Web Audio API\footnote{\url{http://caniuse.com/\#feat=audio-api}}, you will need another browser like Google Chrome, Safari or Firefox (all three are tested and confirmed to work). 
 		
 		The tool is platform-independent and works in any browser that supports the Web Audio API. It does not require any specific, proprietary software. However, in case the tool is hosted locally (i.e. you are not hosting it on an actual webserver) you will need Python, which is a free programming language - see the next paragraph. 
 	
@@ -72,71 +80,76 @@
 		
 		On Mac OS X, Python comes preinstalled. 
 
+\clearpage
 
-\section{Listening test}
+\section{Listening test: Local}
 	\subsection{Start local webserver}
 		If the test is hosted locally, you will need to run the local webserver provided with this tool. 
 		
+		\subsubsection{Mac OS X}
+			Open the Terminal (find it in \textbf{Applications/Terminal} or via Spotlight), and go to the folder you downloaded. To do this, type \texttt{cd [folder]}, where \texttt{[folder]} is the folder where to find the \texttt{pythonServer.py} script you downloaded. For instance, if the location is \texttt{/Users/John/Documents/test/}, then type
+			
+				\texttt{cd /Users/John/Documents/test/}
+				
+			Then hit enter and run the Python script by typing
+
+				\texttt{python pythonServer.py}
+
+			and hit enter again. See also Figure \ref{fig:terminal}.
+			
+			\begin{figure}[htbp]
+	                \begin{center}
+	                \includegraphics[width=.75\textwidth]{pythonServer.png}
+	                \caption{Mac OS X: The Terminal window after going to the right folder (\texttt{cd [folder\_path]}) and running \texttt{pythonServer.py}.}
+	                \label{fig:terminal}
+	                \end{center}
+	                \end{figure}
+
+	        Alternatively, you can simply type \texttt{python} (followed by a space) and drag the file into the Terminal window from Finder. % DOESN'T WORK YET
+			
+			You can leave this running throughout the different experiments (i.e. leave the Terminal open). 
+
 		\subsubsection{Windows}
 		
-		Simply double click the Python script \texttt{pythonServer.py} in the folder you downloaded. 
+			Simply double click the Python script \texttt{pythonServer.py} in the folder you downloaded. 
+			
+			You may see a warning like the one in Figure \ref{fig:warning}. Click `Allow access'. 
+			
+			\begin{figure}[htbp]
+	                \begin{center}
+	                \includegraphics[width=.6\textwidth]{warning.png}
+	                \caption{Windows: Potential warning message when executing \texttt{pythonServer.py}.}
+	                \label{fig:warning}
+	                \end{center}
+	                \end{figure}
+	                
+	                The process should now start in the Command Prompt that opens - see Figure \ref{fig:python}. 
+	                
+	                \begin{figure}[htbp]
+	                \begin{center}
+	                \includegraphics[width=.75\textwidth]{python.png}
+	                \caption{Windows: The Command Prompt after running \texttt{pythonServer.py} and opening the corresponding website.}
+	                \label{fig:python}
+	                \end{center}
+	                \end{figure}
+	                
+	                You can leave this running throughout the different experiments (i.e. leave the Command Prompt open). 
 		
-		You may see a warning like the one in Figure \ref{fig:warning}. Click `Allow access'. 
 		
-		\begin{figure}[htbp]
-                \begin{center}
-                \includegraphics[width=.6\textwidth]{warning.png}
-                \caption{Windows: Potential warning message when executing \texttt{pythonServer.py}.}
-                \label{fig:warning}
-                \end{center}
-                \end{figure}
-                
-                The process should now start, in the Command prompt that opens - see Figure \ref{fig:python}. 
-                
-                \begin{figure}[htbp]
-                \begin{center}
-                \includegraphics[width=.75\textwidth]{python.png}
-                \caption{Windows: The Command Prompt after running \texttt{pythonServer.py} and opening the corresponding website.}
-                \label{fig:python}
-                \end{center}
-                \end{figure}
-                
-                You can leave this running throughout the different experiments (i.e. leave the Command Prompt open). 
-                
+\clearpage
+	\subsection{Sample rate}
+		Depending on how the experiment is set up, audio is resampled automatically (the Web Audio default) or the sample rate is enforced. In the latter case, you will need to make sure that the sample rate of the system is equal to the sample rate of these audio files. For this reason, all audio files in the experiment will have to have the same sample rate. 
+
+		Always make sure that all other digital equipment in the playback chain (clock, audio interface, digital-to-analog converter, ...) is set to this same sample rate.
 		
 		\subsubsection{Mac OS X}
-		Open the Terminal (find it in \textbf{Applications/Terminal} or via Spotlight), and go to the folder you downloaded. To do this, type \texttt{cd [folder]}, where \texttt{[folder]} is the folder where to find the \texttt{pythonServer.py} script you downloaded. For instance, if the location is \texttt{/Users/John/Documents/test/}, then type
+			To change the sample rate in Mac OS X, go to \textbf{Applications/Utilities/Audio MIDI Setup} or find this application with Spotlight. Then select the output of the audio interface you are using and change the `Format' to the appropriate number. Also make sure the bit depth and channel count are as desired. 
+			If you are using an external audio interface, you may have to go to the preference pane of that device to change the sample rate. 
 		
-			\texttt{cd /Users/John/Documents/test/}
-			
-		Then hit enter and run the Python script by typing
-
-			\texttt{python pythonServer.py}
-
-		and hit enter again. See also Figure \ref{fig:terminal}.
-		
-		\begin{figure}[htbp]
-                \begin{center}
-                \includegraphics[width=.75\textwidth]{pythonServer.png}
-                \caption{Mac OS X: Potential warning message when executing \texttt{pythonServer.py}.}
-                \label{fig:terminal}
-                \end{center}
-                \end{figure}
-		
-		Alternatively, you can simply type \texttt{python} (follwed by a space) and drag the file into the Terminal window from Finder. % DOESN'T WORK YET
-		
-		You can leave this running throughout the different experiments (i.e. leave the Terminal open). 
-		
-		
-	\subsection{Sample rate}
-		Depending on how the experiment is set up, audio is resampled automatically (the Web Audio default) or the sample rate is enforced. In the latter case, you will need to make sure that the sample rate of the system is equal to the sample rate of these audio files. For this reason, all audio files in the experiment will have to have the same sample rate. 
-		
-		To change the sample rate in Mac OS X, go to \textbf{Applications/Utilities/Audio MIDI Setup} or find this application with Spotlight. Then select the output of the audio interface you are using and change the `Format' to the appropriate number. Also make sure the bit depth and channel count are as desired. 
-		If you are using an external audio interface, you may have to go to the preference pane of that device to change the sample rate. 
-		
-		To change the sample rate in Windows, right-click on the speaker icon in the lower-right corner of your desktop and choose `Playback devices'. Right-click the appropriate playback device and click `Properties'. Click the `Advanced' tab and verify or change the sample rate under `Default Format'.    % NEEDS CONFIRMATION
-		
-		Always make sure that all other digital equipment in the playback chain (clock, audio interface, digital-to-analog converter, ...) is set to this same sample rate. 
+		\subsubsection{Windows}
+			To change the sample rate in Windows, right-click on the speaker icon in the lower-right corner of your desktop and choose `Playback devices'. Right-click the appropriate playback device and click `Properties'. Click the `Advanced' tab and verify or change the sample rate under `Default Format'.    % NEEDS CONFIRMATION
+			If you are using an external audio interface, you may have to go to the preference pane of that device to change the sample rate. 
+		 
 				
 		
 	\subsection{Setting up a participant}
@@ -178,7 +191,7 @@
                         \end{center}
                         \end{figure}
                         
-                        If at any point in the test the participant reports weird behaviour or an error of some kind, or the test needs to be interrupted, please notify the experimenter and/or refer to Section \ref{sec:troubleshooting}. 
+            If at any point in the test the participant reports weird behaviour or an error of some kind, or the test needs to be interrupted, please notify the experimenter and/or refer to Section \ref{sec:troubleshooting}. 
 			
 			When the test is over (the subject should see a message to that effect, and click `Submit' one last time), the output XML file containing all collected data should have appeared in `saves/'. The names of these files are `test-0.xml', `test-1.xml', etc., in ascending order. The Terminal or Command prompt running the local web server will display the following file name. If such a file did not appear, please again refer to Section \ref{sec:troubleshooting}. 
 			
@@ -190,7 +203,7 @@
 		\subsubsection{Survey}
 			The tool allows for embedded questions before and after each page, and before and after the whole test. If these do \underline{not} include survey questions (about the participant's background, demographic information, and so on) make sure to ask the participant to complete the survey immediately after the test. Above anything else, this decreases the likelihood that the survey goes forgotten and the experimenters do not receive the data in time. 
 	
-	
+\clearpage
 	\subsection{Troubleshooting} \label{sec:troubleshooting}
 		Thanks to feedback from using the interface in experiments by the authors and others, many bugs have been caught and fatal crashes due to the interface (provided it is set up properly by the user) seem to be a thing of the past. 
 		However, if things do go wrong or the test needs to be interrupted for whatever reason, not all data is lost. In a normal scenario, the test needs to be completed until the end (the final `Submit'), at which point the output XML is stored in the \texttt{saves/} folder. If this stage is not reached, open the JavaScript Console (see below for how to find it) and type 
@@ -198,23 +211,31 @@
 		\texttt{createProjectSave()}
 
 		and hit enter. This will open a pop-up window with a hyperlink that reads `Save File'; click it and an XML file with results until that point should be stored in your download folder. 
+		
 		Alternatively, a lot of data can be read from the same console, in which the tool prints a lot of debug information. Specifically:
-            	\begin{itemize}
+        	\begin{itemize}
             	\item the randomisation of pages and fragments are logged;
             	\item any time a slider is played, its ID and the time stamp (in seconds since the start of the test) are displayed;
             	\item any time a slider is dragged and dropped, the location where it is dropped including the time stamp are shown; 
             	\item any comments and pre- or post-test questions and their answers are logged as well. 
-            	\end{itemize}
+        	\end{itemize}
 
 		You can select all this and save it into a text file, so that none of this data is lost. You may choose to do this even when a test was successful, as an extra precaution. 
 
 		\subsubsection{Opening the JavaScript Console}
-                        \begin{itemize}
-                        \item In Google Chrome, the JavaScript Console can be found in \textbf{View$>$Developer$>$JavaScript Console}, or via the keyboard shortcut Cmd + Alt + J (Mac OS X). 
-                        \item In Safari, the JavaScript Console can be found in \textbf{Develop$>$Show Error Console}, or via the keyboard shortcut Cmd + Alt + C (Mac OS X). Note that for the Developer menu to be visible, you have to go to Preferences (Cmd + ,) and enable `Show Develop menu in menu bar' in the `Advanced' tab. 
-                        \item In Firefox, go to \textbf{Tools$>$Web Developer$>$Web Console}, or hit Cmd + Alt + K. 
-                        \end{itemize}
+            \begin{itemize}
+                \item In Google Chrome, the JavaScript Console can be found in \textbf{View$>$Developer$>$JavaScript Console}, or via the keyboard shortcut Cmd + Alt + J (Mac OS X). 
+                \item In Safari, the JavaScript Console can be found in \textbf{Develop$>$Show Error Console}, or via the keyboard shortcut Cmd + Alt + C (Mac OS X). Note that for the Developer menu to be visible, you have to go to Preferences (Cmd + ,) and enable `Show Develop menu in menu bar' in the `Advanced' tab. 
+                \item In Firefox, go to \textbf{Tools$>$Web Developer$>$Web Console}, or hit Cmd + Alt + K. 
+            \end{itemize}
 
-% TO DO: add bibliography here
+\clearpage
+\section{Listening test: remote}
+
+	(TBA)
+
+\clearpage
+\bibliographystyle{ieeetr}
+\bibliography{ListeningTestInstructions}{}
 
 \end{document}  
\ No newline at end of file
Binary file docs/Instructions/img/cmd.png has changed
Binary file docs/Instructions/img/python.png has changed
Binary file docs/Instructions/img/pythonServer.png has changed
Binary file docs/Instructions/img/test.png has changed
Binary file docs/Instructions/img/warning.png has changed
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/docs/WAC2016/WAC2016.bib	Mon Nov 23 09:13:12 2015 +0000
@@ -0,0 +1,254 @@
+%% This BibTeX bibliography file was created using BibDesk.
+%% http://bibdesk.sourceforge.net/
+
+%% Created for Brecht De Man at 2015-10-12 17:58:50 +0100 
+
+
+%% Saved with string encoding Unicode (UTF-8) 
+
+
+
+@inproceedings{mushram,
+	Author = {Emmanuel Vincent and Maria G. Jafari and Mark D. Plumbley},
+	Booktitle = {UK ICA Research Network Workshop},
+	Date-Added = {2015-10-12 16:58:35 +0000},
+	Date-Modified = {2015-10-12 16:58:35 +0000},
+	Keywords = {perceptual evaluation},
+	Title = {Preliminary guidelines for subjective evaluation of audio source separation algorithms},
+	Year = {2006}}
+
+@conference{scale,
+	Author = {Arnau Vazquez Giner},
+	Booktitle = {AIA/DAGA Conference on Acoustics, Merano (Italy)},
+	Date-Added = {2015-10-12 16:55:54 +0000},
+	Date-Modified = {2015-10-12 16:55:54 +0000},
+	Keywords = {perceptual evaluation},
+	Title = {Scale - A Software Tool for Listening Experiments},
+	Year = {2013}}
+
+@conference{whisper,
+	Author = {Simon Ciba and Andr{\'e} Wlodarski and Hans-Joachim Maempel},
+	Booktitle = {126th Convention of the AES},
+	Date-Added = {2015-10-12 16:55:54 +0000},
+	Date-Modified = {2015-10-12 16:55:54 +0000},
+	Keywords = {perceptual evaluation},
+	Month = {May 7-10},
+	Title = {WhisPER -- {A} new tool for performing listening tests},
+	Year = {2009}}
+
+@book{bech,
+	Annote = {p 115: GLS
+	- desired sample population
+	- normal hearing acuity (C4DM-wide test?)
+	- sensitive to audio quality characteristics
+	- ability to repeatedly rate stimuli consistently
+	- available for performing listening tests
+- web basd questionnaire
+- pure tone audiometry (?)
+- screening experiments => able to find pairs?
+
+p 125
+no audiometric measure can discriminate between naive and experienced listener
+listeners will have different strategies for evaluation: care should be exercised when averaging across listeners
+previous listening skills = important
+
+p 126
+ability direct influence on statistical resolution of test
+matching test (at the same time familiarisation): 80% at least
+
+p 167
+intra-subject reliability},
+	Author = {Bech, S. and Zacharov, N.},
+	Date-Added = {2015-09-29 19:47:28 +0000},
+	Date-Modified = {2015-09-29 19:47:28 +0000},
+	Isbn = {9780470869246},
+	Keywords = {psychophysics,perception; listening tests; perceptual evaluation},
+	Publisher = {John Wiley \& Sons},
+	Read = {1},
+	Title = {Perceptual Audio Evaluation - Theory, Method and Application},
+	Url = {http://books.google.co.uk/books?id=1WGPJai1gX8C},
+	Year = {2007}}
+
+@conference{schoeffler2015mushra,
+	Author = {Schoeffler, Michael and St{\"o}ter, Fabian-Robert and Edler, Bernd and Herre, J{\"u}rgen},
+	Booktitle = {1st Web Audio Conference},
+	Date-Added = {2015-09-29 18:35:27 +0000},
+	Date-Modified = {2015-09-29 18:37:59 +0000},
+	Title = {Towards the Next Generation of Web-based Experiments: {A} Case Study Assessing Basic Audio Quality Following the {ITU-R} Recommendation {BS}. 1534 ({MUSHRA})},
+	Year = {2015}}
+
+@conference{ape,
+	Author = {De Man, Brecht and Joshua D. Reiss},
+	Booktitle = {136th Convention of the AES},
+	Date-Added = {2015-09-29 17:07:16 +0000},
+	Date-Modified = {2015-09-29 17:07:20 +0000},
+	Keywords = {perceptual evaluation},
+	Month = {April},
+	Read = {1},
+	Title = {{APE}: {A}udio {P}erceptual {E}valuation toolbox for {MATLAB}},
+	Year = {2014}}
+
+@inproceedings{beaqlejs,
+	Author = {Kraft, Sebastian and Z{\"o}lzer, Udo},
+	Booktitle = {Linux Audio Conference, Karlsruhe, DE},
+	Date-Added = {2015-09-29 16:23:37 +0000},
+	Date-Modified = {2015-09-29 16:23:37 +0000},
+	Keywords = {perceptual evaluation},
+	Title = {{BeaqleJS}: {HTML5} and {JavaScript} based framework for the subjective evaluation of audio quality},
+	Year = {2014}}
+
+@article{lipshitz1981great,
+	Author = {Lipshitz, Stanley P and Vanderkooy, John},
+	Date-Added = {2015-09-23 09:09:51 +0000},
+	Date-Modified = {2015-09-23 09:09:51 +0000},
+	Journal = {Journal of the AES},
+	Number = {7/8},
+	Pages = {482--491},
+	Publisher = {Audio Engineering Society},
+	Title = {The great debate: Subjective evaluation},
+	Volume = {29},
+	Year = {1981}}
+
+@article{clark1982high,
+	Author = {Clark, David},
+	Date-Added = {2015-09-23 09:07:19 +0000},
+	Date-Modified = {2015-09-23 09:07:19 +0000},
+	Journal = {Journal of the AES},
+	Number = {5},
+	Pages = {330--338},
+	Publisher = {Audio Engineering Society},
+	Title = {High-resolution subjective testing using a double-blind comparator},
+	Volume = {30},
+	Year = {1982}}
+
+@book{carroll1969individual,
+	Author = {Carroll, J Douglas},
+	Date-Added = {2015-09-23 09:01:03 +0000},
+	Date-Modified = {2015-09-23 09:01:03 +0000},
+	Publisher = {Bell Telephone Labs.},
+	Title = {Individual differences and multidimensional scaling},
+	Year = {1969}}
+
+@article{pascoe1983evaluation,
+	Author = {Pascoe, Gregory C and Attkisson, C Clifford},
+	Date-Added = {2015-09-23 08:59:38 +0000},
+	Date-Modified = {2015-09-23 08:59:38 +0000},
+	Journal = {Evaluation and program planning},
+	Number = {3},
+	Pages = {335--347},
+	Publisher = {Elsevier},
+	Title = {The evaluation ranking scale: a new methodology for assessing satisfaction},
+	Volume = {6},
+	Year = {1983}}
+
+@book{david1963method,
+	Author = {David, Herbert Aron},
+	Date-Added = {2015-09-23 08:58:19 +0000},
+	Date-Modified = {2015-09-23 08:58:19 +0000},
+	Publisher = {DTIC Document},
+	Title = {The method of paired comparisons},
+	Volume = {12},
+	Year = {1963}}
+
+@inproceedings{zacharov1999round,
+	Author = {Zacharov, Nick and Huopaniemi, Jyri and H{\"a}m{\"a}l{\"a}inen, Matti},
+	Booktitle = {AES Conference: 16th International Conference: Spatial Sound Reproduction},
+	Date-Added = {2015-09-23 08:53:31 +0000},
+	Date-Modified = {2015-09-23 08:53:31 +0000},
+	Organization = {Audio Engineering Society},
+	Title = {Round robin subjective evaluation of virtual home theatre sound systems at the AES 16th international conference},
+	Year = {1999}}
+
+@article{likert1932technique,
+	Author = {Likert, Rensis},
+	Date-Added = {2015-09-23 08:49:36 +0000},
+	Date-Modified = {2015-09-23 08:49:36 +0000},
+	Journal = {Archives of psychology},
+	Title = {A technique for the measurement of attitudes.},
+	Year = {1932}}
+
+@book{nunnally1967psychometric,
+	Author = {Nunnally, Jum C and Bernstein, Ira H and Berge, Jos MF ten},
+	Date-Added = {2015-09-23 08:43:17 +0000},
+	Date-Modified = {2015-09-23 08:43:17 +0000},
+	Publisher = {McGraw-Hill New York},
+	Title = {Psychometric theory},
+	Volume = {226},
+	Year = {1967}}
+
+@article{recommendation19971116,
+	Author = {{ITURBS Recommendation}},
+	Date-Added = {2015-09-23 08:36:37 +0000},
+	Date-Modified = {2015-09-23 08:36:37 +0000},
+	Journal = {International Telecommunication Union, Geneva},
+	Title = {1116-1: Methods for the subjective assessment of small impairments in audio systems including multichannel sound systems},
+	Year = {1997}}
+
+@article{recommendation20031534,
+	Author = {{ITURBS Recommendation}},
+	Date-Added = {2015-09-23 08:34:26 +0000},
+	Date-Modified = {2015-09-23 08:34:41 +0000},
+	Journal = {International Telecommunication Union},
+	Title = {BS. 1534-1: Method for the subjective assessment of intermediate quality levels of coding systems},
+	Year = {2003}}
+
+@article{recommendation2001bs,
+	Author = {{ITUR Recommendation}},
+	Date-Added = {2015-09-23 08:33:38 +0000},
+	Date-Modified = {2015-09-23 08:33:38 +0000},
+	Journal = {International Telecommunications Union, Geneva},
+	Title = {BS. 1534-1. Method for the Subjective Assessment of Intermediate Sound Quality (MUSHRA)},
+	Year = {2001}}
+
+@article{rec1997bs,
+	Author = {{ITUR Recommendation}},
+	Date-Added = {2015-09-23 08:32:42 +0000},
+	Date-Modified = {2015-09-23 08:32:42 +0000},
+	Journal = {International Telecommunications Union},
+	Title = {BS. 562-3,`Subjective Assessment of Sound Quality'},
+	Year = {1997}}
+
+@article{peryam1952advanced,
+	Author = {Peryam, David R and Girardot, Norman F},
+	Date-Added = {2015-09-23 08:31:32 +0000},
+	Date-Modified = {2015-09-23 08:31:32 +0000},
+	Journal = {Food Engineering},
+	Number = {7},
+	Pages = {58--61},
+	Title = {Advanced taste-test method},
+	Volume = {24},
+	Year = {1952}}
+
+@article{rec1996p,
+	Author = {{ITUT Recommendation}},
+	Date-Added = {2015-09-23 08:30:24 +0000},
+	Date-Modified = {2015-09-23 08:30:24 +0000},
+	Journal = {International Telecommunication Union, Geneva},
+	Title = {P. 800: Methods for subjective determination of transmission quality},
+	Year = {1996}}
+
+@inproceedings{hultigen,
+	Author = {Gribben, Christopher and Lee, Hyunkook},
+	Booktitle = {AES Convention 138},
+	Date-Added = {2015-09-23 08:11:17 +0000},
+	Date-Modified = {2015-09-29 16:23:17 +0000},
+	Organization = {Audio Engineering Society},
+	Title = {Toward the Development of a Universal Listening Test Interface Generator in Max},
+	Year = {2015}}
+
+@conference{waet,
+	Author = {Nicholas Jillings and David Moffat and De Man, Brecht and Joshua D. Reiss},
+	Booktitle = {12th Sound and Music Computing Conference},
+	Date-Added = {2015-09-22 16:48:27 +0000},
+	Date-Modified = {2015-09-22 16:48:33 +0000},
+	Month = {July},
+	Read = {1},
+	Title = {Web {A}udio {E}valuation {T}ool: {A} browser-based listening test environment},
+	Year = {2015}}
Binary file docs/WAC2016/WAC2016.pdf has changed
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/docs/WAC2016/WAC2016.tex	Mon Nov 23 09:13:12 2015 +0000
@@ -0,0 +1,428 @@
+\documentclass{sig-alternate}
+\usepackage{hyperref}	% make links (like references, links to Sections, ...) clickable
+\usepackage{enumitem}	% tighten itemize etc by appending '[noitemsep,nolistsep]'
+\usepackage{cleveref}
+
+\graphicspath{{img/}} % put the images in this folder
+
+\begin{document}
+
+% Copyright
+\setcopyright{waclicense}
+
+\newcommand*\rot{\rotatebox{90}}
+
+
+%% DOI
+%\doi{10.475/123_4}
+%
+%% ISBN
+%\isbn{123-4567-24-567/08/06}
+%
+%%Conference
+%\conferenceinfo{PLDI '13}{June 16--19, 2013, Seattle, WA, USA}
+%
+%\acmPrice{\$15.00}
+
+%
+% --- Author Metadata here ---
+\conferenceinfo{Web Audio Conference WAC-2016,}{April 4--6, 2016, Atlanta, USA}
+\CopyrightYear{2016} % Allows default copyright year (20XX) to be over-ridden - IF NEED BE.
+%\crdata{0-12345-67-8/90/01}  % Allows default copyright data (0-89791-88-6/97/05) to be over-ridden - IF NEED BE.
+% --- End of Author Metadata ---
+
+\title{Web Audio Evaluation Tool: A framework for subjective assessment of audio}
+%\subtitle{[Extended Abstract]
+%\titlenote{A full version of this paper is available as
+%\textit{Author's Guide to Preparing ACM SIG Proceedings Using
+%\LaTeX$2_\epsilon$\ and BibTeX} at
+%\texttt{www.acm.org/eaddress.htm}}}
+%
+% You need the command \numberofauthors to handle the 'placement
+% and alignment' of the authors beneath the title.
+%
+% For aesthetic reasons, we recommend 'three authors at a time'
+% i.e. three 'name/affiliation blocks' be placed beneath the title.
+%
+% NOTE: You are NOT restricted in how many 'rows' of
+% "name/affiliations" may appear. We just ask that you restrict
+% the number of 'columns' to three.
+%
+% Because of the available 'opening page real-estate'
+% we ask you to refrain from putting more than six authors
+% (two rows with three columns) beneath the article title.
+% More than six makes the first-page appear very cluttered indeed.
+%
+% Use the \alignauthor commands to handle the names
+% and affiliations for an 'aesthetic maximum' of six authors.
+% Add names, affiliations, addresses for
+% the seventh etc. author(s) as the argument for the
+% \additionalauthors command.
+% These 'additional authors' will be output/set for you
+% without further effort on your part as the last section in
+% the body of your article BEFORE References or any Appendices.
+
+% FIVE authors instead of four, to leave space between first two authors.
+\numberofauthors{5} %  in this sample file, there are a *total*
+% of EIGHT authors. SIX appear on the 'first-page' (for formatting
+% reasons) and the remaining two appear in the \additionalauthors section.
+%
+\author{
+% You can go ahead and credit any number of authors here,
+% e.g. one 'row of three' or two rows (consisting of one row of three
+% and a second row of one, two or three).
+%
+% The command \alignauthor (no curly braces needed) should
+% precede each author name, affiliation/snail-mail address and
+% e-mail address. Additionally, tag each line of
+% affiliation/address with \affaddr, and tag the
+% e-mail address with \email.
+%
+% 1st. author
+\alignauthor Nicholas Jillings\\
+       \email{n.g.r.jillings@se14.qmul.ac.uk}
+ % dummy author for nicer spacing
+ \alignauthor 
+% 2nd. author
+\alignauthor Brecht De Man\\
+       \email{b.deman@qmul.ac.uk}
+\and  % use '\and' if you need 'another row' of author names
+% 3rd. author
+\alignauthor David Moffat\\
+       \email{d.j.moffat@qmul.ac.uk}
+% 4th. author
+\alignauthor Joshua D. Reiss\\
+\email{joshua.reiss@qmul.ac.uk}
+\and % new line for address
+       \affaddr{Centre for Digital Music, School of Electronic Engineering and Computer Science}\\
+       \affaddr{Queen Mary University of London}\\
+       \affaddr{Mile End Road,}
+       \affaddr{London E1 4NS}\\
+       \affaddr{United Kingdom}\\
+}
+%Centre for Digital Music, School of Electronic Engineering and Computer Science, Queen Mary University of London
+%% 5th. author
+%\alignauthor Sean Fogarty\\
+%       \affaddr{NASA Ames Research Center}\\
+%       \affaddr{Moffett Field}\\
+%       \email{fogartys@amesres.org}
+%% 6th. author
+%\alignauthor Charles Palmer\\
+%       \affaddr{Palmer Research Laboratories}\\
+%       \affaddr{8600 Datapoint Drive}\\
+%       \email{cpalmer@prl.com}
+%}
+% There's nothing stopping you putting the seventh, eighth, etc.
+% author on the opening page (as the 'third row') but we ask,
+% for aesthetic reasons that you place these 'additional authors'
+% in the \additional authors block, viz.
+%\additionalauthors{Additional authors: John Smith (The Th{\o}rv{\"a}ld Group,
+%email: {\texttt{jsmith@affiliation.org}}) and Julius P.~Kumquat
+%(The Kumquat Consortium, email: {\texttt{jpkumquat@consortium.net}}).}
+\date{1 October 2015}
+% Just remember to make sure that the TOTAL number of authors
+% is the number that will appear on the first page PLUS the
+% number that will appear in the \additionalauthors section.
+
+\maketitle
+\begin{abstract}
+
+Perceptual listening tests are commonplace in audio research and a vital form of evaluation. Many tools exist to run such tests; however, many support only a single test type and most require proprietary software. Using the Web Audio API, the Web Audio Evaluation Tool (WAET) addresses these concerns with a single toolbox which can be configured to run many different test types through a web browser, without the need for proprietary software or programming knowledge. In this paper, the role of the Web Audio API in providing WAET's key functionality is shown. The paper also highlights less common features available to web-based tools, such as an easy remote testing environment and in-browser analytics.
+
+\end{abstract}
+
+
+\section{Introduction}
+
+	% Listening tests/perceptual audio evaluation: what are they, why are they important
+	% As opposed to limited scope of WAC15 paper: also musical features, realism of sound effects / sound synthesis, performance of source separation and other algorithms... 
+	Perceptual evaluation of audio, in the form of listening tests, is a powerful way to assess anything from audio codec quality, to the realism of sound synthesis, to the performance of source separation or automated music production algorithms, among other auditory evaluations.
+	In less technical areas, the framework of a listening test can be used to measure emotional response to music or test cognitive abilities. 
+	% maybe some references? If there's space.
+
+	% check out http://link.springer.com/article/10.1007/s10055-015-0270-8 - only paper that cited WAC15 paper
+
+	% Why difficult? Challenges? What constitutes a good interface?
+	% Technical, interfaces, user friendliness, reliability 
+	Several applications for performing perceptual listening tests currently exist; a review of existing listening test frameworks is presented in~\Cref{tab:toolboxes}. Note that many rely on proprietary, third-party software such as MATLAB and MAX, making them less attractive to many users. With the exception of the JavaScript-based toolboxes, remote deployment (web-based test hosting and result collection) is not possible. 
+	
+	HULTI-GEN~\cite{hultigen} is one example of a toolbox that presents the user with a large number of different test interfaces and allows each of them to be customised, without requiring knowledge of any programming language. The Web Audio Evaluation Tool (WAET), presented here, stands out as it does not require proprietary software or a specific platform. It also provides a wide range of interface and test types in one user-friendly environment. Furthermore, any test based on the default test types can be configured in the browser as well. Note that the design of an effective listening test poses many further challenges unrelated to interface design, which are beyond the scope of this paper \cite{bech}. 
+
+	% Why in the browser? 
+	The Web Audio API provides important features including sample-level manipulation of audio streams \cite{schoeffler2015mushra} and synchronous, flexible playback. Being in the browser allows leveraging the flexible, object-oriented JavaScript language and native support for web documents such as the Extensible Markup Language (XML), which is used for the configuration and test result files. Using the web also reduces deployment requirements to a basic web server; extra functionality, such as result collection and automatic processing, can be added using PHP. As recruiting participants can be very time-consuming, and as some tests need a large number of participants, browser-based tests enable participants in multiple locations to take part \cite{schoeffler2015mushra}.
+
+	Both BeaqleJS \cite{beaqlejs} and mushraJS\footnote{https://github.com/akaroice/mushraJS} also operate in the browser. However, BeaqleJS does not make use of the Web Audio API and therefore lacks arbitrary manipulation of audio stream samples, and neither offers an adequately wide choice of test designs to be useful to many researchers. %requires programming knowledge?... 
+	
+	% only browser-based? 
+	\begin{table*}[ht]
+	 \caption{Existing listening test platforms and their features}
+	 \small
+	 \begin{center}
+	    	\begin{tabular}{|*{9}{l|}}
+	    		\hline
+	    		\textbf{Toolbox}    & \rot{\textbf{APE}}   & \rot{\textbf{BeaqleJS}}    &\rot{\textbf{HULTI-GEN}} & \rot{\textbf{mushraJS}}    & \rot{\textbf{MUSHRAM}}    & \rot{\textbf{Scale}}    & \rot{\textbf{WhisPER}}    & \rot{\textbf{WAET}} \\ \hline
+	    		 \textbf{Reference}    & \cite{ape}    & \cite{beaqlejs}    & \cite{hultigen}    &    & \cite{mushram} & \cite{scale}    & \cite{whisper}    & \cite{waet} \\ \hline
+	    		 \textbf{Language} & MATLAB    & JS    & MAX    & JS    & MATLAB    &    MATLAB    & MATLAB    & JS \\ \hline
+	    		 \textbf{Remote} &    & (\checkmark)    &     & \checkmark    &    &    &    & \checkmark \\ \hline \hline
+	    		 MUSHRA (ITU-R BS. 1534) & & \checkmark & \checkmark & \checkmark & \checkmark & & & \checkmark \\ \hline
+	    		 APE & \checkmark & & & & & & & \checkmark \\ \hline
+	    		 Rank Scale & & & \checkmark & & & & & \checkmark \\ \hline
+	    		 Likert Scale & & & \checkmark & & & & \checkmark & \checkmark \\ \hline
+	    		 ABC/HR (ITU-R BS. 1116)   & & & \checkmark & & & & & \checkmark \\ \hline
+	    		 -50 to 50 Bipolar with ref. & & & \checkmark & & & & & \checkmark \\ \hline
+	    		 Absolute Category Rating Scale & & & \checkmark & & & & & \checkmark \\ \hline
+	    		 Degradation Category Rating Scale & & & \checkmark & & & & & \checkmark \\ \hline
+	    		 Comparison Category Rating Scale & & & \checkmark & & & & \checkmark & \checkmark \\ \hline
+	    		 9 Point Hedonic Category Rating Scale & & & \checkmark & & & & \checkmark & \checkmark \\ \hline
+	    		 ITU-R 5 Continuous Impairment Scale & & & \checkmark & & & & & \checkmark \\ \hline
+	    		 Pairwise / AB Test & & & \checkmark & & & & & \checkmark \\ \hline
+	    		 Multi-attribute ratings & & & \checkmark & & & & & \checkmark \\ \hline
+	    		 ABX Test & & \checkmark & \checkmark & & & & & \checkmark \\ \hline
+	    		 Adaptive psychophysical methods & & & & & & & \checkmark & \\ \hline 
+	    		 Repertory Grid Technique & & & & & & & \checkmark & \\ \hline
+	    		 Semantic Differential  & & & & & & \checkmark & \checkmark &\checkmark \\ \hline
+	    		 n-Alternative Forced Choice & & & & & & \checkmark & & \\ \hline
+	    	\end{tabular}
+	 \end{center}
+	 \label{tab:toolboxes}
+	 \end{table*}
+        % 
+        %Selling points: remote tests, visualisaton, create your own test in the browser, many interfaces, few/no dependencies, flexibility
+
+        %[Talking about what we do in the various sections of this paper. Referring to \cite{waet}. ]
+    To meet the need for a cross-platform, versatile and easy-to-use listening test tool, we previously developed the Web Audio Evaluation Tool \cite{waet}, which at the time of its inception was capable of running a listening test in the browser from an XML configuration file, with one particular interface, and of storing the results as an XML file. This has now expanded into a tool with which a wide range of listening test types can easily be constructed and set up remotely, without any need to manually alter code or configuration files, and which allows visualisation of the collected results in the browser. In this paper, we discuss these different aspects and explore possible future improvements.
+
+    \begin{figure}[tb]
+    	\centering
+    	\includegraphics[width=.5\textwidth]{interface.png}
+    	\caption{A simple example of a multi-stimulus, single attribute, single rating scale test with a reference and comment fields.}
+    	\label{fig:interface}
+    \end{figure}
+
+\begin{comment}
+        % MEETING 8 OCTOBER
+        \subsection{Meeting 8 October}
+        \begin{itemize}
+        	\item Do we manipulate audio?\\
+	        	\begin{itemize}
+	        		\item Add loudness equalisation? (test\_create.html) Tag with gains. 
+	        		\item Add volume slider? 
+	        		\item Cross-fade (in interface node): default 0, number of seconds
+	        		\item Also: we use the playback buffer to present metrics of which portion is listened to
+	        	\end{itemize}
+	        \item Logging system information: whichever are possible (justify others)
+	        \item Input streams as audioelements
+	       	\item Capture microphone to estimate loudness (especially Macbook)
+	       	\item Test page (in-built oscillators): left-right calibration, ramp up test tone until you hear it; optional compensating EQ (future work implementing own filters) --> Highlight issues! 
+	       	\item Record IP address (PHP function, grab and append to XML file)
+	       	\item Expand anchor/reference options
+	       	\item AB / ABX
+        \end{itemize}
+
+        \subsubsection{Issues}
+        \begin{itemize}
+        	\item Filters not consistent (Nick to test across browsers)
+        	\item Playback audiobuffers need to be destroyed and rebuilt each time
+        	\item Can't get channel data, hardware input/output... 
+        \end{itemize}
+\end{comment}
+	
+\section{Architecture}  % title? 'back end'? % NICK
+\label{sec:architecture}
+%A slightly technical overview of the system. Talk about XML, JavaScript, Web Audio API, HTML5. 
+
+    Although WAET uses a sparse subset of the Web Audio API functionality, its performance comes directly from it. Listening tests can yield large amounts of information beyond the perceptual relationship between the audio fragments. With WAET it is possible to track which parts of the audio fragments were listened to and when, at what point in the audio stream the participant switched to a different fragment, and how a fragment's rating was adjusted over time within a session, to name a few. Not only does this allow evaluation of a wealth of perceptual aspects, but it also helps detect unreliable participants whose results are potentially not representative.
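+
+    As a minimal sketch (not the WAET implementation itself; the object and function names are hypothetical), such session metrics can be gathered as time-stamped events, relative to the start of the test, and later serialised into the result file:
+{\small
+\begin{verbatim}
+// illustrative only: names are hypothetical
+var testStart = performance.now(); // reference time
+var metrics = [];                  // collected events
+
+function logEvent(fragmentId, type, detail) {
+  var t = (performance.now() - testStart) / 1000;
+  metrics.push({id: fragmentId, type: type,
+                detail: detail, time: t});
+}
+
+// e.g. logEvent('elem-3', 'listened', {start: 1.2, stop: 4.7});
+//      logEvent('elem-3', 'moved', {rating: 0.62});
+\end{verbatim}}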
+    
+    One of the key initial design parameters for WAET was to make the tool as open as possible to non-programmers; to this end, all of the user-modifiable options are included in a single XML document. This specification document can be designed either by manually writing the XML (or modifying an existing document or template) or by using the included test creators. These standalone HTML pages do not require any server or internet connection and help build the specification document. The first (test\_create.html) is for simple tests and guides the user step by step through a clutter-free, drag-and-drop interface. The advanced version is for more complex tests. Both support automatic verification to ensure the XML file is valid, and will highlight areas which are either incorrect and would cause an error, or options which are blank and should be removed.
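+
+    As an illustration of how such a specification document can be consumed in the browser (a sketch only, using the `audioholder'/`audioelement' terminology of this paper; the file name is an assumption), the XML can be fetched and walked with the standard DOM parser:
+{\small
+\begin{verbatim}
+// sketch: load a specification XML and list its
+// pages and fragments (file name is assumed)
+var xhr = new XMLHttpRequest();
+xhr.open('GET', 'test-specification.xml', true);
+xhr.onload = function () {
+  var spec = new DOMParser()
+      .parseFromString(xhr.responseText, 'text/xml');
+  var pages = spec.getElementsByTagName('audioholder');
+  for (var i = 0; i < pages.length; i++) {
+    var frags = pages[i]
+        .getElementsByTagName('audioelement');
+    console.log(pages[i].getAttribute('id'),
+                frags.length + ' fragments');
+  }
+};
+xhr.send();
+\end{verbatim}}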
+    
+    The basic test creator utilises the Web Audio API to perform quick playback checks and also allows for loudness normalisation techniques inspired by \cite{ape}. These are calculated offline by accessing the raw audio samples exposed from the buffer, and applied to the audio element as a gain attribute; the tool therefore performs loudness normalisation without editing any audio files. Equally, the gain attribute can be modified in either editor, using an HTML5 slider or a number box respectively.
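+
+    A minimal sketch of this idea follows; it uses a simple RMS measure, whereas the actual normalisation in the tool may differ, and the function name is ours:
+{\small
+\begin{verbatim}
+// sketch: offline level estimate from the decoded
+// AudioBuffer, returned as a linear gain value
+function rmsGain(buffer, targetRms) {
+  var data = buffer.getChannelData(0); // raw samples
+  var sum = 0;
+  for (var i = 0; i < data.length; i++) {
+    sum += data[i] * data[i];
+  }
+  var rms = Math.sqrt(sum / data.length);
+  return targetRms / rms; // applied as gain attribute
+}
+\end{verbatim}}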
+    \begin{comment}
+    \begin{figure}[h!]
+	\centering
+	\includegraphics[width=.45\textwidth]{test_create_2.png}
+	\caption{Screen-shot of test creator tool using drag and drop to create specification document}
+	\label{fig:test_create}
+	\end{figure}
+	\end{comment}
+    
+    %Describe and/or visualise audioholder-audioelement-... structure. 
+    The specification document contains the URLs of the audio fragments for each test page. These fragments are downloaded asynchronously during the test and decoded offline by the Web Audio offline decoder. The resulting buffers are each assigned to a custom Audio Object node which tracks the fragment buffer, the playback \textit{bufferSourceNode}, other specification attributes including its unique test ID, the interface object(s) associated with the fragment, and any metric or data collection objects. The Audio Objects are controlled by an over-arching custom Audio Context node (not to be confused with the Web Audio context). This parent JS node allows for session-wide control of the Audio Objects, including starting and stopping playback of specific nodes.
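+
+    The asynchronous download and decoding step can be sketched as follows (illustrative only; the Audio Object here is reduced to a plain object holding the decoded buffer):
+{\small
+\begin{verbatim}
+// sketch: fetch and decode one fragment
+var audioContext = new AudioContext();
+function loadFragment(url, audioObject) {
+  var request = new XMLHttpRequest();
+  request.open('GET', url, true);
+  request.responseType = 'arraybuffer';
+  request.onload = function () {
+    audioContext.decodeAudioData(request.response,
+      function (buffer) {
+        audioObject.buffer = buffer; // kept for playback
+      });
+  };
+  request.send();
+}
+\end{verbatim}}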
+    
+    The only issue with this model is that the \textit{bufferSourceNode} in the Web Audio API is implemented in the standard as a `use once' object: once it has been played, the node must be discarded, as it cannot be instructed to play again. Therefore, on each play request a new \textit{bufferSourceNode} must be created and linked with the stored buffer. This is odd behaviour for such a simple object, and the only alternative would be the HTML5 audio element, which cannot be started synchronously at a given time and is therefore not suited.
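+
+    In practice this amounts to the following pattern (a sketch, with a simplified Audio Object holding the decoded buffer and a per-fragment gain node; audioContext as in the previous sketch):
+{\small
+\begin{verbatim}
+// sketch: a new AudioBufferSourceNode is created on
+// every play request and wired to the stored buffer
+function playFragment(audioObject, when) {
+  var source = audioContext.createBufferSource();
+  source.buffer = audioObject.buffer;   // reuse buffer
+  source.connect(audioObject.gainNode); // per-fragment gain
+  source.start(when);
+  audioObject.source = source; // handle to stop it later
+}
+\end{verbatim}}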
+    
+    In the test, each buffer source node is connected to a gain node which operates at the level determined by the specification document. It is therefore possible to perform a `Method of Adjustment' test where an interface directly manipulates these gain nodes. The same gain nodes are used for cross-fading between samples when operating in synchronous playback; cross-fading can either be fade-out/fade-in or a true cross-fade. There is also an optional `Master Volume' slider which can be shown on the test GUI. This slider modifies a gain node placed just before the destination node, and its movements can be monitored and tracked, providing extra validation. As it is not indicative of the final volume leaving the loudspeakers, its use should only be considered in a controlled lab environment.
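+
+    A fade-out/fade-in switch between two fragments can be sketched with the standard gain automation methods (again illustrative rather than the tool's own code):
+{\small
+\begin{verbatim}
+// sketch: linear cross-fade over `fade' seconds
+function crossFade(fromObject, toObject, fade) {
+  var now = audioContext.currentTime;
+  fromObject.gainNode.gain.setValueAtTime(1, now);
+  fromObject.gainNode.gain
+      .linearRampToValueAtTime(0, now + fade);
+  toObject.gainNode.gain.setValueAtTime(0, now);
+  toObject.gainNode.gain
+      .linearRampToValueAtTime(1, now + fade);
+}
+\end{verbatim}}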
+    
+    %Which type of files?  WAV, anything else? Perhaps not exhaustive list, but say something along the lines of 'whatever browser supports'. Compatability?
+    The media file formats supported depend on the browser's support for the initial decoding, and are the same as those supported by the HTML5 audio element. The most widely supported format is wave (.WAV), which is accepted by every browser supporting the Web Audio API; the toolbox itself will work in any browser which supports the Web Audio API.
+    
+    All the collected session data is returned in an XML document structured similarly to the configuration document, where test pages contain the audio elements with their trace collection, results, comments and any other interface-specific data points.
+    
+\section{Remote tests} % with previous? 
+	\label{sec:remote}
+
+	If the experimenter is willing to trade some degree of control for a higher number of participants, the test can be hosted on a public web server so that participants can take part remotely. This way, a link can be shared widely in the hope of attracting a large number of subjects, while listening conditions and subject reliability may be less than ideal. However, a sound system calibration page and a wide range of metrics logged during the test mitigate these problems. In some experiments, it may even be preferred that the subject has a `real life', familiar listening set-up, for instance when perceived quality differences on everyday sound systems are investigated. 
+	Furthermore, a fully browser-based test, where the collection of the results is automatic, is more efficient and technically reliable even when the test still takes place under lab conditions.
+
+	The following features allow easy and effective remote testing: 
+	\begin{description}[noitemsep,nolistsep]
+		\item[PHP script to collect result XML files] and store them on a central server (a minimal client-side sketch follows this list). 
+		\item[Randomly pick a specified number of pages] to ensure an equal and randomised spread of the different pages (`audioHolders') across participants. 
+		\item[Calibration of the sound system (and participant)] by a perceptual pre-test to gather information about the frequency response and speaker configuration; this can be supplemented with a survey.
+		% In theory calibration could be applied anywhere??
+		% \item Functionality to participate multiple times
+		% 	\begin{itemize}[noitemsep,nolistsep]
+		% 		\item Possible to log in with unique ID (no password)
+		% 		\item Pick `new user' (generates new, unique ID) or `already participated' (need already available ID)
+		% 		\item Store XML on server with IDs plus which audioholders have already been listened to
+		% 		\item Don't show `post-test' survey after first time
+		% 		\item Pick `new' audioholders if available
+		% 		\item Copy survey information first time to new XMLs
+		% 	\end{itemize}
+		\item[Intermediate saves] for tests which were interrupted or unfinished.
+		\item[Collect IP address information] for geographic location, through PHP function which grabs address and appends to XML file. 
+		\item[Collect Browser and Display information] to the extent it is available and reliable. 
+	\end{description}
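+
+	A minimal client-side sketch of the result upload is given below; the endpoint name and the use of a plain POST body are assumptions rather than a description of the actual script interface:
+{\small
+\begin{verbatim}
+// sketch: POST the result XML to a server-side
+// collection script (endpoint name assumed)
+function submitResults(xmlString) {
+  var xhr = new XMLHttpRequest();
+  xhr.open('POST', 'save.php', true);
+  xhr.setRequestHeader('Content-Type', 'text/xml');
+  xhr.onload = function () {
+    console.log('server replied ' + xhr.status);
+  };
+  xhr.send(xmlString);
+}
+\end{verbatim}}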
+
+	
+\section{Interfaces} % title? 'Front end'? % Dave
+\label{sec:interfaces}
+
+The purpose of this listening test framework is to allow any user the maximum flexibility to design a listening test for their exact application with minimum effort. To this end, a large range of `standard' listening test interfaces have been implemented, including:
+	\begin{itemize}[noitemsep,nolistsep]
+		\item MUSHRA (ITU-R BS. 1534)~\cite{recommendation20031534}
+		\begin{comment}
+		\begin{itemize}[noitemsep,nolistsep]
+			\item Multiple stimuli are presented and rated on a continuous scale, which includes a reference, hidden reference and hidden anchors.
+		\end{itemize}
+		\end{comment}
+		\item Rank Scale~\cite{pascoe1983evaluation}: stimuli are ranked on a single horizontal scale, ordered by preference.
+		\item Likert scale~\cite{likert1932technique}: each stimulus has a five-point scale with the values Strongly Agree, Agree, Neutral, Disagree and Strongly Disagree.
+		\item ABC/HR (ITU-R BS. 1116)~\cite{recommendation19971116} (Mean Opinion Score: MOS): each stimulus has a continuous scale (5-1), labelled Imperceptible, Perceptible but not annoying, Slightly annoying, Annoying, Very annoying.
+		\item -50 to 50 Bipolar with Ref: each stimulus has a continuous scale from -50 to 50, with a default value of 0 in the middle, and a reference.
+		\item Absolute Category Rating (ACR) Scale~\cite{rec1996p}: as the Likert scale, but with the labels Bad, Poor, Fair, Good, Excellent.
+		\item Degradation Category Rating (DCR) Scale~\cite{rec1996p}: as ABC/HR \& Likert, but with the labels (5) Inaudible, (4) Audible but not annoying, (3) Slightly annoying, (2) Annoying, (1) Very annoying.
+		\item Comparison Category Rating (CCR) Scale~\cite{rec1996p}: as ACR \& DCR, but with a seven-point scale: Much Better, Better, Slightly Better, About the Same, Slightly Worse, Worse, Much Worse. A reference is also provided.
+		\item 9 Point Hedonic Category Rating Scale~\cite{peryam1952advanced}: each stimulus has a nine-point scale with the values Like Extremely, Like Very Much, Like Moderately, Like Slightly, Neither Like nor Dislike, Dislike Slightly, Dislike Moderately, Dislike Very Much, Dislike Extremely. A reference is also provided.
+		\item ITU-R 5 Point Continuous Impairment Scale~\cite{rec1997bs}: Same as ABC/HR but with a reference.
+		\item Pairwise Comparison (Better/Worse)~\cite{david1963method}: every stimulus is rated as being either better or worse than the reference.
+		\item APE style \cite{ape}: multiple stimuli as points on a 2D plane for inter-sample rating (e.g. valence vs. arousal).
+		\item AB Test~\cite{lipshitz1981great}: two stimuli are presented at a time, and the participant selects the preferred one.
+		\item ABX Test~\cite{clark1982high}: two stimuli are presented along with a reference, and the participant selects a preferred stimulus, often the one closest to the reference.
+	\end{itemize}
+	
+	It is possible to include any number of references, anchors, hidden references and hidden anchors in all of these listening test formats.
+	
+	Because the core code and the interface modules are separated by design, a third-party interface can be built with minimal effort. The repository includes documentation on which core functions must be called and on the specific functions the core expects an interface to provide. The core includes an `Interface' object with object prototypes for the on-page comment boxes (including those with radio or checkbox responses), the start and stop buttons, and the playhead/transport bars.
+	
+%%%%	\begin{itemize}[noitemsep,nolistsep]
+%%%%		\item (APE style) \cite{ape}
+%%%%		\item Multi attribute ratings
+%%%%		\item MUSHRA (ITU-R BS. 1534)~\cite{recommendation20031534}
+%%%%		\item Interval Scale~\cite{zacharov1999round}
+%%%%		\item Rank Scale~\cite{pascoe1983evaluation}
+%%%%		
+%%%%		\item 2D Plane rating - e.g. Valence vs. Arousal~\cite{carroll1969individual}
+%%%%		\item Likert scale~\cite{likert1932technique}
+%%%%		
+%%%%		\item {\bf All the following are the interfaces available in HULTI-GEN~\cite{hultigen} }
+%%%%		\item ABC/HR (ITU-R BS. 1116)~\cite{recommendation19971116}
+%%%%		\begin{itemize}
+%%%%			\item Continuous Scale (5-1) Imperceptible, Perceptible but not annoying, slightly annoying, annoying, very annoying. (default Inaudible?)
+%%%%		\end{itemize}
+%%%%		\item -50 to 50 Bipolar with Ref
+%%%%		\begin{itemize}
+%%%%			\item Scale -50 to 50 on Mushra with default values as 0 in middle and a comparison ``Reference'' to compare to 0 value
+%%%%		\end{itemize}
+%%%%		\item Absolute Category Rating (ACR) Scale~\cite{rec1996p}
+%%%%		\begin{itemize}
+%%%%			\item 5 point Scale - Bad, Poor, Fair, Good, Excellent (Default fair?)
+%%%%		\end{itemize}
+%%%%		\item Degredation Category Rating (DCR) Scale~\cite{rec1996p}
+%%%%		\begin{itemize}
+%%%%			\item 5 point Scale - Inaudible, Audible but not annoying, slightly annoying, annoying, very annoying. (default Inaudible?) - {\it Basically just quantised ABC/HR?}
+%%%%		\end{itemize}
+%%%%		\item Comparison Category Rating (CCR) Scale~\cite{rec1996p}
+%%%%		\begin{itemize}
+%%%%			\item 7 point scale: Much Better, Better, Slightly Better, About the same, slightly worse, worse, much worse - Default about the same with reference to compare to
+%%%%		\end{itemize}
+%%%%		\item 9 Point Hedonic Category Rating Scale~\cite{peryam1952advanced}
+%%%%		\begin{itemize}
+%%%%			\item 9 point scale: Like Extremely, Like Very Much, Like Moderate, Like Slightly, Neither Like nor Dislike, dislike Extremely, dislike Very Much, dislike Moderate, dislike Slightly  - Default Neither Like nor Dislike with reference to compare to
+%%%%		\end{itemize}
+%%%%		\item ITU-R 5 Point Continuous Impairment Scale~\cite{rec1997bs}
+%%%%		\begin{itemize}
+%%%%			\item 5 point Scale (5-1) Imperceptible, Perceptible but not annoying, slightly annoying, annoying, very annoying. (default Inaudible?)- {\it Basically just quantised ABC/HR, or Different named DCR}
+%%%%		\end{itemize}
+%%%%		\item Pairwise Comparison (Better/Worse)~\cite{david1963method}
+%%%%		\begin{itemize}
+%%%%			\item 2 point Scale - Better or Worse - (not sure how to default this - they default everything to better, which is an interesting choice)
+%%%%		\end{itemize}
+%%%%	\end{itemize}
+	
+	% Build your own test
+
+\begin{comment}
+{	\bf A screenshot would be nice. 
+
+	Established tests (see below) included as `presets' in the build-your-own-test page. }
+\end{comment}
+
+\section{Analysis and diagnostics}
+\label{sec:analysis}
+	% don't mention Python scripts
+	There are several benefits to providing basic analysis tools in the browser: they allow diagnosing problems with the interface or with the test subject; they may be sufficient for many researchers' purposes; and test subjects may enjoy seeing an overview of their own results, or of the results collected thus far, at the end of their test. 
+	\begin{figure}[htb]
+		\centering
+		\includegraphics[width=.5\textwidth]{boxplot.png}
+		%\caption{This timeline of a single subject's listening test shows playback of fragments (red segments) and marker movements on the rating axis in function of time. }
+		\caption{Box and whisker plot showing the aggregated numerical ratings of six stimuli by a group of subjects.}
+		\label{fig:timeline}
+	\end{figure}
+	For this reason, we include a proof-of-concept web page with:
+	\begin{itemize}[noitemsep,nolistsep]
+		\item All audioholder IDs, file names, subject IDs, audio element IDs, ... in the collected XMLs so far (\texttt{saves/*.xml})
+		\item Selection of subjects and/or test samples to zoom in on a subset of the data %Check/uncheck each of the above for analysis (e.g. zoom in on a certain song, or exclude a subset of subjects)
+		\item Embedded audio to hear corresponding test samples % (follow path in XML setup file, which is also embedded in the XML result file)
+		\item Scatter plot, confidence plot and box plot of rating values (see Figure~\ref{fig:timeline}; a sketch of the quartile computation follows this list)
+		\item Timeline for a specific subject %(see Figure \ref{fig:timeline})%, perhaps re-playing the experiment in X times realtime. (If actual realtime, you could replay the audio...)
+		\item Distribution plots of any radio button and number questions in pre- and post-test survey %(drop-down menu with `pretest', `posttest', ...; then drop-down menu with question `IDs' like `gender', `age', ...; make pie chart/histogram of these values over selected range of XMLs)
+		\item All `comments' on a specific audioelement
+		\item A `download' function for a CSV of ratings, survey responses and comments% various things (values, survey responses, comments) people might want to use for analysis, e.g. when XML scares them
+		%\item Validation of setup XMLs (easily spot `errors', like duplicate IDs or URLs, missing/dangling tags, ...)
+	\end{itemize}
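+
+	As an indication of how little is needed for such in-browser plots, the quartiles behind the box plot can be computed directly from the gathered ratings (a sketch; the exact statistics used by the analysis page may differ):
+{\small
+\begin{verbatim}
+// sketch: median and quartiles of an array of ratings
+function quartiles(values) {
+  var v = values.slice().sort(function (a, b) {
+    return a - b;
+  });
+  function median(a) {
+    var m = Math.floor(a.length / 2);
+    return a.length % 2 ? a[m] : (a[m - 1] + a[m]) / 2;
+  }
+  var half = Math.floor(v.length / 2);
+  return {
+    q1: median(v.slice(0, half)),
+    q2: median(v),
+    q3: median(v.slice(v.length % 2 ? half + 1 : half))
+  };
+}
+\end{verbatim}}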
+
+
+	%A subset of the above would already be nice for this paper. 
+\section{Concluding remarks and future work}
+\label{sec:conclusion}
+
+	We have developed a browser-based tool for the design and deployment of listening tests, requiring essentially no programming experience or third-party software. Following the predictions and guidelines in \cite{schoeffler2015mushra}, it supports remote testing, cross-fading between audio streams and the collection of information about the participant's system, among other features. 
+
+	Whereas many other types of interfaces do exist, we felt that supporting, for example, a range of `method of adjustment' tests would be beyond the scope of a tool that aims to be versatile without claiming to support every custom experiment one might want to set up. Rather, it supports any non-adaptive listening test up to multi-stimulus, multi-attribute evaluation, including references, anchors, text boxes, radio buttons and/or checkboxes, with arbitrary placement of the various UI elements. 
+	
+	The code and documentation can be pulled or downloaded from our online repository available at \url{code.soundsoftware.ac.uk/projects/webaudioevaluationtool}.
+		% remote
+		% language support (not explicitly stated)
+		% crossfades
+		% choosing speakers/sound device from within browser? --- NOT POSSIBLE, can only determine channel output counts and its up to the hardware to determine
+		% collect information about software and sound system
+		% buttons, scales, ... UI elements
+		% must be able to load uncompressed PCM
+
+%
+% The following two commands are all you need in the
+% initial runs of your .tex file to
+% produce the bibliography for the citations in your paper.
+\bibliographystyle{ieeetr}
+\small
+\bibliography{WAC2016}  % sigproc.bib is the name of the Bibliography in this case
+% You must have a proper ".bib" file
+%  and remember to run:
+% latex bibtex latex latex
+% to resolve all references
+%
+% ACM needs 'a single self-contained file'!
+%
+\end{document}
Binary file docs/WAC2016/cc.png has changed
Binary file docs/WAC2016/img/boxplot.png has changed
Binary file docs/WAC2016/img/interface.png has changed
Binary file docs/WAC2016/img/test_create.png has changed
Binary file docs/WAC2016/img/test_create_2.png has changed
Binary file docs/WAC2016/img/timeline.pdf has changed
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/docs/WAC2016/sig-alternate.cls	Mon Nov 23 09:13:12 2015 +0000
@@ -0,0 +1,1741 @@
+% SIG-ALTERNATE.CLS - VERSION 2.8
+% "COMPATIBLE" WITH THE "ACM_PROC_ARTICLE-SP.CLS" V3.2SP
+% Gerald Murray - May 23rd 2012
+% Boris Veytsman - April 23 2013
+% Boris Veytsman - May 12 2013
+% Boris Veytsman - June 09 2013
+% Boris Veytsman - August 12 2013
+%
+% ---- Start of 'updates'  ----
+% Added new permission/copyright statement - BV
+% Changed $10 fee to $15 -- May 2012  --  Gerry
+% Changed $5 fee to $10 --  April 2009 -- Gerry
+% April 22nd. 2009 - Fixed 'Natbib' incompatibility problem - Gerry
+% April 22nd. 2009 - Fixed 'Babel' incompatibility problem - Gerry
+% April 22nd. 2009 - Inserted various bug-fixes and improvements - Gerry
+%
+% To produce Type 1 fonts in the document plus allow for 'normal LaTeX accenting' in the critical areas;
+% title, author block, section-heads, confname, etc. etc. 
+% i.e. the whole purpose of this version update is to NOT resort to 'inelegant accent patches'.
+% After much research, three extra .sty packages were added to the the tail (ae, aecompl, aeguill) to solve,
+% in particular, the accenting problem(s). We _could_ ask authors (via instructions/sample file) to 'include' these in
+% the source .tex file - in the preamble - but if everything is already provided ('behind the scenes' - embedded IN the .cls)
+% then this is less work for authors and also makes everything appear 'vanilla'.
+% NOTE: all 'patchwork accenting" has been commented out (here) and is no longer 'used' in the sample .tex file (either).
+% Gerry June 2007
+%
+% Patch for accenting in conference name/location. Gerry May 3rd. 2007
+% Rule widths changed to .5, author count (>6) fixed, roll-back for Type 3 problem. Gerry March 20th. 2007
+% Changes made to 'modernize' the fontnames but esp. for MikTeX users V2.4/2.5 - Nov. 30th. 2006
+% Updated the \email definition to allow for its use inside of 'shared affiliations' - Nov. 30th. 2006
+% Fixed the 'section number depth value' - Nov. 30th. 2006
+%
+% Footnotes inside table cells using \minipage (Oct. 2002)
+% Georgia fixed bug in sub-sub-section numbering in paragraphs (July 29th. 2002)
+% JS/GM fix to vertical spacing before Proofs (July 30th. 2002)
+%
+% Made the Permission Statement / Conference Info / Copyright Info
+% 'user definable' in the source .tex file OR automatic if
+% not specified.
+%
+% Allowance made to switch default fonts between those systems using
+% normal/modern font names and those using 'Type 1' or 'Truetype' fonts.
+% See LINE NUMBER 255 for details.
+% Also provided for enumerated/annotated Corollaries 'surrounded' by
+% enumerated Theorems (line 848).
+% Gerry November 11th. 1999
+%
+% ---- End of 'updates' ----
+%
+\def\fileversion{v2.9}          % for ACM's tracking purposes
+\def\filedate{August 12, 2013}    % Gerry Murray's tracking data
+\def\docdate {\filedate}
+\usepackage{epsfig}
+\usepackage{amssymb}
+\usepackage{amsmath}
+\usepackage{amsfonts}
+% Need this for accents in Arial/Helvetica
+%\usepackage[T1]{fontenc}  % Gerry March 12, 2007 - causes Type 3 problems (body text)
+%\usepackage{textcomp}
+%
+% SIG-ALTERNATE DOCUMENT STYLE
+% G.K.M. Tobin August-October 1999
+%    adapted from ARTICLE document style by Ken Traub, Olin Shivers
+%    also using elements of esub2acm.cls
+% HEAVILY MODIFIED, SUBSEQUENTLY, BY GERRY MURRAY 2000
+% ARTICLE DOCUMENT STYLE -- Released 16 March 1988
+%    for LaTeX version 2.09
+% Copyright (C) 1988 by Leslie Lamport
+%
+%
+%%% sig-alternate.cls is an 'ALTERNATE' document style for producing
+%%% two-column camera-ready pages for ACM conferences.
+%%% THIS FILE DOES NOT STRICTLY ADHERE TO THE SIGS (BOARD-ENDORSED)
+%%% PROCEEDINGS STYLE. It has been designed to produce a 'tighter'
+%%% paper in response to concerns over page budgets.
+%%% The main features of this style are:
+%%%
+%%% 1)  Two columns.
+%%% 2)  Side and top margins of 4.5pc, bottom margin of 6pc, column gutter of
+%%%     2pc, hence columns are 20pc wide and 55.5pc tall.  (6pc = 1in, approx)
+%%% 3)  First page has title information, and an extra 6pc of space at the
+%%%     bottom of the first column for the ACM copyright notice.
+%%% 4)  Text is 9pt on 10pt baselines; titles (except main) are 9pt bold.
+%%%
+%%%
+%%% There are a few restrictions you must observe:
+%%%
+%%% 1)  You cannot change the font size; ACM wants you to use 9pt.
+%%% 3)  You must start your paper with the \maketitle command.  Prior to the
+%%%     \maketitle you must have \title and \author commands.  If you have a
+%%%     \date command it will be ignored; no date appears on the paper, since
+%%%     the proceedings will have a date on the front cover.
+%%% 4)  Marginal paragraphs, tables of contents, lists of figures and tables,
+%%%     and page headings are all forbidden.
+%%% 5)  The `figure' environment will produce a figure one column wide; if you
+%%%     want one that is two columns wide, use `figure*'.
+%%%
+%
+%%% Copyright Space:
+%%% This style automatically reserves 1" blank space at the bottom of page 1/
+%%% column 1.  This space can optionally be filled with some text using the
+%%% \toappear{...} command.  If used, this command must be BEFORE the \maketitle
+%%% command.  If this command is defined AND [preprint] is on, then the
+%%% space is filled with the {...} text (at the bottom); otherwise, it is
+%%% blank.  If you use \toappearbox{...} instead of \toappear{...} then a
+%%% box will be drawn around the text (if [preprint] is on).
+%%%
+%%% A typical usage looks like this:
+%%%     \toappear{To appear in the Ninth AES Conference on Medievil Lithuanian
+%%%               Embalming Technique, June 1991, Alfaretta, Georgia.}
+%%% This will be included in the preprint, and left out of the conference
+%%% version.
+%%%
+%%% WARNING:
+%%% Some dvi-ps converters heuristically allow chars to drift from their
+%%% true positions a few pixels. This may be noticeable with the 9pt sans-serif
+%%% bold font used for section headers.
+%%% You may turn this hackery off via the -e option:
+%%%     dvips -e 0 foo.dvi >foo.ps
+%%%
+\typeout{Document Class 'sig-alternate' <9th June '13>.  Modified by
+  G.K.M. Tobin/Gerry Murray/Boris Veytsman}
+\typeout{Based in part upon document Style `acmconf' <22 May 89>. Hacked 4/91 by}
+\typeout{shivers@cs.cmu.edu, 4/93 by theobald@cs.mcgill.ca}
+\typeout{Excerpts were taken from (Journal Style) 'esub2acm.cls'.}
+\typeout{****** Bugs/comments/suggestions/technicalities to Gerry Murray -- murray@hq.acm.org ******}
+\typeout{Questions on the style, SIGS policies, etc. to Adrienne Griscti griscti@acm.org}
+
+
+\let\@concepts\@empty
+% Support for CCSXML file
+\RequirePackage{comment}
+\excludecomment{CCSXML}
+
+% New concepts scheme
+%
+% The first argument is the significance, the
+% second is the concept(s)
+%
+\newcommand\ccsdesc[2][100]{%
+  \ccsdesc@parse#1~#2~}
+%
+% The parser of the expression Significance~General~Specific
+%
+\def\ccsdesc@parse#1~#2~#3~{%
+  \expandafter\ifx\csname CCS@#2\endcsname\relax
+    \expandafter\gdef\csname CCS@#2\endcsname{\textbullet\textbf{#2} $\to$ }%
+  \g@addto@macro{\@concepts}{\csname CCS@#2\endcsname}\fi
+  \expandafter\g@addto@macro\expandafter{\csname CCS@#2\endcsname}{%
+    \ifnum#1>499\textbf{#3; }\else
+    \ifnum#1>299\textit{#3; }\else
+    #3; \fi\fi}}
+
+\newcommand\printccsdesc{%
+  \ifx\@concepts\@empty\else
+  \if@twocolumn
+    \section*{CCS Concepts}
+    \@concepts
+    \else \small
+    \quotation{\@concepts}%
+    \fi
+    \fi}
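+
+% For example, a paper using this class might write something like the
+% following (the concept strings here are illustrative placeholders only;
+% the optional significance value follows the thresholds parsed above:
+% >499 bold, 300--499 italic, otherwise plain):
+%
+%   \ccsdesc[500]{Information systems~Music retrieval}
+%   \ccsdesc[300]{Human-centered computing~Web-based interaction}
+%   \printccsdesc  % typesets the collected entries as the "CCS Concepts" section
+%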
+
+
+
+
+\def\doi#1{\def\@doi{#1}}
+\doi{http://dx.doi.org/10.1145/0000000.0000000}
+
+\oddsidemargin 4.5pc
+\evensidemargin 4.5pc
+\advance\oddsidemargin by -1in  % Correct for LaTeX gratuitousness
+\advance\evensidemargin by -1in % Correct for LaTeX gratuitousness
+\marginparwidth 0pt             % Margin pars are not allowed.
+\marginparsep 11pt              % Horizontal space between outer margin and
+                                % marginal note
+
+                                % Top of page:
+\topmargin 4.5pc                % Nominal distance from top of page to top of
+                                % box containing running head.
+\advance\topmargin by -1in      % Correct for LaTeX gratuitousness
+\headheight 0pt                 % Height of box containing running head.
+\headsep 0pt                    % Space between running head and text.
+                                % Bottom of page:
+\footskip 30pt                  % Distance from baseline of box containing foot
+                                % to baseline of last line of text.
+\@ifundefined{footheight}{\newdimen\footheight}{}% this is for LaTeX2e
+\footheight 12pt                % Height of box containing running foot.
+
+%% Must redefine the top margin so there's room for headers and
+%% page numbers if you are using the preprint option. Footers
+%% are OK as is. Olin.
+\advance\topmargin by -37pt     % Leave 37pt above text for headers
+\headheight 12pt                % Height of box containing running head.
+\headsep 25pt                   % Space between running head and text.
+
+\textheight 666pt       % 9 1/4 column height
+\textwidth 42pc         % Width of text line.
+                        % For two-column mode:
+\columnsep 2pc          %    Space between columns
+\columnseprule 0pt      %    Width of rule between columns.
+\hfuzz 1pt              % Allow some variation in column width, otherwise it's
+                        % too hard to typeset in narrow columns.
+
+\footnotesep 5.6pt      % Height of strut placed at the beginning of every
+                        % footnote = height of normal \footnotesize strut,
+                        % so no extra space between footnotes.
+
+\skip\footins 8.1pt plus 4pt minus 2pt  % Space between last line of text and
+                                        % top of first footnote.
+\floatsep 11pt plus 2pt minus 2pt       % Space between adjacent floats moved
+                                        % to top or bottom of text page.
+\textfloatsep 18pt plus 2pt minus 4pt   % Space between main text and floats
+                                        % at top or bottom of page.
+\intextsep 11pt plus 2pt minus 2pt      % Space between in-text figures and
+                                        % text.
+\@ifundefined{@maxsep}{\newdimen\@maxsep}{}% this is for LaTeX2e
+\@maxsep 18pt                           % The maximum of \floatsep,
+                                        % \textfloatsep and \intextsep (minus
+                                        % the stretch and shrink).
+\dblfloatsep 11pt plus 2pt minus 2pt    % Same as \floatsep for double-column
+                                        % figures in two-column mode.
+\dbltextfloatsep 18pt plus 2pt minus 4pt% \textfloatsep for double-column
+                                        % floats.
+\@ifundefined{@dblmaxsep}{\newdimen\@dblmaxsep}{}% this is for LaTeX2e
+\@dblmaxsep 18pt                        % The maximum of \dblfloatsep and
+                                        % \dbltexfloatsep.
+\@fptop 0pt plus 1fil    % Stretch at top of float page/column. (Must be
+                         % 0pt plus ...)
+\@fpsep 8pt plus 2fil    % Space between floats on float page/column.
+\@fpbot 0pt plus 1fil    % Stretch at bottom of float page/column. (Must be
+                         % 0pt plus ... )
+\@dblfptop 0pt plus 1fil % Stretch at top of float page. (Must be 0pt plus ...)
+\@dblfpsep 8pt plus 2fil % Space between floats on float page.
+\@dblfpbot 0pt plus 1fil % Stretch at bottom of float page. (Must be
+                         % 0pt plus ... )
+\marginparpush 5pt       % Minimum vertical separation between two marginal
+                         % notes.
+
+\parskip 0pt plus 1pt            % Extra vertical space between paragraphs.
+\parindent 9pt  % GM July 2000 / was 0pt - width of paragraph indentation.
+\partopsep 2pt plus 1pt minus 1pt% Extra vertical space, in addition to
+                                 % \parskip and \topsep, added when user
+                                 % leaves blank line before environment.
+
+\@lowpenalty   51       % Produced by \nopagebreak[1] or \nolinebreak[1]
+\@medpenalty  151       % Produced by \nopagebreak[2] or \nolinebreak[2]
+\@highpenalty 301       % Produced by \nopagebreak[3] or \nolinebreak[3]
+
+\@beginparpenalty -\@lowpenalty % Before a list or paragraph environment.
+\@endparpenalty   -\@lowpenalty % After a list or paragraph environment.
+\@itempenalty     -\@lowpenalty % Between list items.
+
+%\@namedef{ds@10pt}{\@latexerr{The `10pt' option is not allowed in the `acmconf'
+\@namedef{ds@10pt}{\ClassError{The `10pt' option is not allowed in the `acmconf'	% January 2008
+  document style.}\@eha}
+%\@namedef{ds@11pt}{\@latexerr{The `11pt' option is not allowed in the `acmconf'
+\@namedef{ds@11pt}{\ClassError{The `11pt' option is not allowed in the `acmconf'	% January 2008
+  document style.}\@eha}
+%\@namedef{ds@12pt}{\@latexerr{The `12pt' option is not allowed in the `acmconf'
+\@namedef{ds@12pt}{\ClassError{The `12pt' option is not allowed in the `acmconf'	% January 2008
+  document style.}\@eha}
+
+\@options
+
+\lineskip 2pt           % \lineskip is 1pt for all font sizes.
+\normallineskip 2pt
+\def\baselinestretch{1}
+
+\abovedisplayskip 9pt plus2pt minus4.5pt%
+\belowdisplayskip \abovedisplayskip
+\abovedisplayshortskip  \z@ plus3pt%
+\belowdisplayshortskip  5.4pt plus3pt minus3pt%
+\let\@listi\@listI     % Setting of \@listi added 9 Jun 87
+
+\def\small{\@setsize\small{9pt}\viiipt\@viiipt
+\abovedisplayskip 7.6pt plus 3pt minus 4pt%
+\belowdisplayskip \abovedisplayskip
+\abovedisplayshortskip \z@ plus2pt%
+\belowdisplayshortskip 3.6pt plus2pt minus 2pt
+\def\@listi{\leftmargin\leftmargini %% Added 22 Dec 87
+\topsep 4pt plus 2pt minus 2pt\parsep 2pt plus 1pt minus 1pt
+\itemsep \parsep}}
+
+\def\footnotesize{\@setsize\footnotesize{9pt}\ixpt\@ixpt
+\abovedisplayskip 6.4pt plus 2pt minus 4pt%
+\belowdisplayskip \abovedisplayskip
+\abovedisplayshortskip \z@ plus 1pt%
+\belowdisplayshortskip 2.7pt plus 1pt minus 2pt
+\def\@listi{\leftmargin\leftmargini %% Added 22 Dec 87
+\topsep 3pt plus 1pt minus 1pt\parsep 2pt plus 1pt minus 1pt
+\itemsep \parsep}}
+
+\newcount\aucount
+\newcount\originalaucount
+\newdimen\auwidth
+\auwidth=\textwidth
+\newdimen\auskip
+\newcount\auskipcount
+\newdimen\auskip
+\global\auskip=1pc
+\newdimen\allauboxes
+\allauboxes=\auwidth
+\newtoks\addauthors
+\newcount\addauflag
+\global\addauflag=0 %Haven't shown additional authors yet
+
+\newtoks\subtitletext
+\gdef\subtitle#1{\subtitletext={#1}}
+
+\gdef\additionalauthors#1{\addauthors={#1}}
+
+\gdef\numberofauthors#1{\global\aucount=#1
+\ifnum\aucount>3\global\originalaucount=\aucount \global\aucount=3\fi %g}  % 3 OK - Gerry March 2007
+\global\auskipcount=\aucount\global\advance\auskipcount by 1
+\global\multiply\auskipcount by 2
+\global\multiply\auskip by \auskipcount
+\global\advance\auwidth by -\auskip
+\global\divide\auwidth by \aucount}
+
+% \and was modified to count the number of authors.  GKMT 12 Aug 1999
+\def\alignauthor{%                  % \begin{tabular}
+\end{tabular}%
+  \begin{tabular}[t]{p{\auwidth}}\centering}%
+
+%  *** NOTE *** NOTE *** NOTE *** NOTE ***
+%  If you have 'font problems' then you may need
+%  to change these, e.g. 'arialb' instead of "arialbd".
+%  Gerry Murray 11/11/1999
+%  *** OR ** comment out block A and activate block B or vice versa.
+% **********************************************
+%
+%  -- Start of block A -- (Type 1 or Truetype fonts)
+%\newfont{\secfnt}{timesbd at 12pt} % was timenrb originally - now is timesbd
+%\newfont{\secit}{timesbi at 12pt}   %13 Jan 00 gkmt
+%\newfont{\subsecfnt}{timesi at 11pt} % was timenrri originally - now is timesi
+%\newfont{\subsecit}{timesbi at 11pt} % 13 Jan 00 gkmt -- was times changed to timesbi gm 2/4/2000
+%                         % because "normal" is italic, "italic" is Roman
+%\newfont{\ttlfnt}{arialbd at 18pt} % was arialb originally - now is arialbd
+%\newfont{\ttlit}{arialbi at 18pt}    % 13 Jan 00 gkmt
+%\newfont{\subttlfnt}{arial at 14pt} % was arialr originally - now is arial
+%\newfont{\subttlit}{ariali at 14pt} % 13 Jan 00 gkmt
+%\newfont{\subttlbf}{arialbd at 14pt}  % 13 Jan 00 gkmt
+%\newfont{\aufnt}{arial at 12pt} % was arialr originally - now is arial
+%\newfont{\auit}{ariali at 12pt} % 13 Jan 00 gkmt
+%\newfont{\affaddr}{arial at 10pt} % was arialr originally - now is arial
+%\newfont{\affaddrit}{ariali at 10pt} %13 Jan 00 gkmt
+%\newfont{\eaddfnt}{arial at 12pt} % was arialr originally - now is arial
+%\newfont{\ixpt}{times at 9pt} % was timenrr originally - now is times
+%\newfont{\confname}{timesi at 8pt} % was timenrri - now is timesi
+%\newfont{\crnotice}{times at 8pt} % was timenrr originally - now is times
+%\newfont{\ninept}{times at 9pt} % was timenrr originally - now is times
+
+% *********************************************
+%  -- End of block A --
+%
+%
+% -- Start of block B -- UPDATED FONT NAMES
+% *********************************************
+% Gerry Murray 11/30/2006
+% *********************************************
+\newfont{\secfnt}{ptmb8t at 12pt}
+\newfont{\secit}{ptmbi8t at 12pt}    %13 Jan 00 gkmt
+\newfont{\subsecfnt}{ptmri8t at 11pt}
+\newfont{\subsecit}{ptmbi8t at 11pt}  % 
+\newfont{\ttlfnt}{phvb8t at 18pt}
+\newfont{\ttlit}{phvbo8t at 18pt}    % GM 2/4/2000
+\newfont{\subttlfnt}{phvr8t at 14pt}
+\newfont{\subttlit}{phvro8t at 14pt} % GM 2/4/2000
+\newfont{\subttlbf}{phvb8t at 14pt}  % 13 Jan 00 gkmt
+\newfont{\aufnt}{phvr8t at 12pt}
+\newfont{\auit}{phvro8t at 12pt}     % GM 2/4/2000
+\newfont{\affaddr}{phvr8t at 10pt}
+\newfont{\affaddrit}{phvro8t at 10pt} % GM 2/4/2000
+\newfont{\eaddfnt}{phvr8t at 12pt}
+\newfont{\ixpt}{ptmr8t at 9pt}
+\newfont{\confname}{ptmri8t at 8pt}
+\newfont{\crnotice}{ptmr8t at 8pt}
+\newfont{\ninept}{ptmr8t at 9pt}
+% +++++++++++++++++++++++++++++++++++++++++++++
+% -- End of block B --
+
+%\def\email#1{{{\eaddfnt{\vskip 4pt#1}}}}
+% If we have an email, inside a "shared affiliation" then we need the following instead
+\def\email#1{{{\eaddfnt{\par #1}}}}       % revised  - GM - 11/30/2006
+
+\def\addauthorsection{\ifnum\originalaucount>6  % was 3 - Gerry March 2007
+    \section{Additional Authors}\the\addauthors
+  \fi}
+
+\newcount\savesection
+\newcount\sectioncntr
+\global\sectioncntr=1
+
+\setcounter{secnumdepth}{3}
+
+\def\appendix{\par
+\section*{APPENDIX}
+\setcounter{section}{0}
+ \setcounter{subsection}{0}
+ \def\thesection{\Alph{section}} }
+
+\leftmargini 22.5pt
+\leftmarginii 19.8pt    % > \labelsep + width of '(m)'
+\leftmarginiii 16.8pt   % > \labelsep + width of 'vii.'
+\leftmarginiv 15.3pt    % > \labelsep + width of 'M.'
+\leftmarginv 9pt
+\leftmarginvi 9pt
+
+\leftmargin\leftmargini
+\labelsep 4.5pt
+\labelwidth\leftmargini\advance\labelwidth-\labelsep
+
+\def\@listI{\leftmargin\leftmargini \parsep 3.6pt plus 2pt minus 1pt%
+\topsep 7.2pt plus 2pt minus 4pt%
+\itemsep 3.6pt plus 2pt minus 1pt}
+
+\let\@listi\@listI
+\@listi
+
+\def\@listii{\leftmargin\leftmarginii
+   \labelwidth\leftmarginii\advance\labelwidth-\labelsep
+   \topsep 3.6pt plus 2pt minus 1pt
+   \parsep 1.8pt plus 0.9pt minus 0.9pt
+   \itemsep \parsep}
+
+\def\@listiii{\leftmargin\leftmarginiii
+    \labelwidth\leftmarginiii\advance\labelwidth-\labelsep
+    \topsep 1.8pt plus 0.9pt minus 0.9pt
+    \parsep \z@ \partopsep 1pt plus 0pt minus 1pt
+    \itemsep \topsep}
+
+\def\@listiv{\leftmargin\leftmarginiv
+     \labelwidth\leftmarginiv\advance\labelwidth-\labelsep}
+
+\def\@listv{\leftmargin\leftmarginv
+     \labelwidth\leftmarginv\advance\labelwidth-\labelsep}
+
+\def\@listvi{\leftmargin\leftmarginvi
+     \labelwidth\leftmarginvi\advance\labelwidth-\labelsep}
+
+\def\labelenumi{\theenumi.}
+\def\theenumi{\arabic{enumi}}
+
+\def\labelenumii{(\theenumii)}
+\def\theenumii{\alph{enumii}}
+\def\p@enumii{\theenumi}
+
+\def\labelenumiii{\theenumiii.}
+\def\theenumiii{\roman{enumiii}}
+\def\p@enumiii{\theenumi(\theenumii)}
+
+\def\labelenumiv{\theenumiv.}
+\def\theenumiv{\Alph{enumiv}}
+\def\p@enumiv{\p@enumiii\theenumiii}
+
+\def\labelitemi{$\bullet$}
+\def\labelitemii{\bf --}
+\def\labelitemiii{$\ast$}
+\def\labelitemiv{$\cdot$}
+
+\def\verse{\let\\=\@centercr
+  \list{}{\itemsep\z@ \itemindent -1.5em\listparindent \itemindent
+          \rightmargin\leftmargin\advance\leftmargin 1.5em}\item[]}
+\let\endverse\endlist
+
+\def\quotation{\list{}{\listparindent 1.5em
+    \itemindent\listparindent
+    \rightmargin\leftmargin \parsep 0pt plus 1pt}\item[]}
+\let\endquotation=\endlist
+
+\def\quote{\list{}{\rightmargin\leftmargin}\item[]}
+\let\endquote=\endlist
+
+\def\descriptionlabel#1{\hspace\labelsep \bf #1}
+\def\description{\list{}{\labelwidth\z@ \itemindent-\leftmargin
+       \let\makelabel\descriptionlabel}}
+
+\let\enddescription\endlist
+
+\def\theequation{\arabic{equation}}
+
+\arraycolsep 4.5pt   % Half the space between columns in an array environment.
+\tabcolsep 5.4pt    % Half the space between columns in a tabular environment.
+\arrayrulewidth .5pt % Width of rules in array and tabular environment. % (was .4) updated Gerry March 20 2007
+\doublerulesep 1.8pt % Space between adjacent rules in array or tabular env.
+
+\tabbingsep \labelsep   % Space used by the \' command.  (See LaTeX manual.)
+
+\skip\@mpfootins =\skip\footins
+
+\fboxsep =2.7pt      % Space left between box and text by \fbox and \framebox.
+\fboxrule =.5pt      % Width of rules in box made by \fbox and \framebox. % (was .4) updated Gerry March 20 2007
+
+\def\thepart{\Roman{part}} % Roman numeral part numbers.
+\def\thesection       {\arabic{section}}
+\def\thesubsection    {\thesection.\arabic{subsection}}
+%\def\thesubsubsection {\thesubsection.\arabic{subsubsection}} % GM 7/30/2002
+%\def\theparagraph     {\thesubsubsection.\arabic{paragraph}}  % GM 7/30/2002
+\def\thesubparagraph  {\theparagraph.\arabic{subparagraph}}
+
+\def\@pnumwidth{1.55em}
+\def\@tocrmarg {2.55em}
+\def\@dotsep{4.5}
+\setcounter{tocdepth}{3}
+
+%\def\tableofcontents{\@latexerr{\tableofcontents: Tables of contents are not
+%  allowed in the `acmconf' document style.}\@eha}
+
+\def\tableofcontents{\ClassError{%
+    \string\tableofcontents\space is not allowed in the `acmconf' document	% January 2008
+    style}\@eha}
+
+\def\l@part#1#2{\addpenalty{\@secpenalty}
+   \addvspace{2.25em plus 1pt}  % space above part line
+   \begingroup
+   \@tempdima 3em       % width of box holding part number, used by
+     \parindent \z@ \rightskip \@pnumwidth      %% \numberline
+     \parfillskip -\@pnumwidth
+     {\large \bf        % set line in \large boldface
+     \leavevmode        % TeX command to enter horizontal mode.
+     #1\hfil \hbox to\@pnumwidth{\hss #2}}\par
+     \nobreak           % Never break after part entry
+   \endgroup}
+
+\def\l@section#1#2{\addpenalty{\@secpenalty} % good place for page break
+   \addvspace{1.0em plus 1pt}   % space above toc entry
+   \@tempdima 1.5em             % width of box holding section number
+   \begingroup
+    \parindent  \z@ \rightskip \@pnumwidth
+     \parfillskip -\@pnumwidth
+     \bf                        % Boldface.
+     \leavevmode                % TeX command to enter horizontal mode.
+      \advance\leftskip\@tempdima %% added 5 Feb 88 to conform to
+      \hskip -\leftskip           %% 25 Jan 88 change to \numberline
+     #1\nobreak\hfil \nobreak\hbox to\@pnumwidth{\hss #2}\par
+   \endgroup}
+
+
+\def\l@subsection{\@dottedtocline{2}{1.5em}{2.3em}}
+\def\l@subsubsection{\@dottedtocline{3}{3.8em}{3.2em}}
+\def\l@paragraph{\@dottedtocline{4}{7.0em}{4.1em}}
+\def\l@subparagraph{\@dottedtocline{5}{10em}{5em}}
+
+%\def\listoffigures{\@latexerr{\listoffigures: Lists of figures are not
+%  allowed in the `acmconf' document style.}\@eha}
+
+\def\listoffigures{\ClassError{%
+    \string\listoffigures\space is not allowed in the `acmconf' document	% January 2008
+    style}\@eha}
+
+\def\l@figure{\@dottedtocline{1}{1.5em}{2.3em}}
+
+%\def\listoftables{\@latexerr{\listoftables: Lists of tables are not
+%  allowed in the `acmconf' document style.}\@eha}
+%\let\l@table\l@figure
+
+\def\listoftables{\ClassError{%
+    \string\listoftables\space is not allowed in the `acmconf' document		% January 2008
+    style}\@eha}
+ \let\l@table\l@figure
+
+\def\footnoterule{\kern-3\p@
+  \hrule width .5\columnwidth   % (was .4) updated Gerry March 20 2007
+  \kern 2.6\p@}                 % The \hrule has default height of .4pt % (was .4) updated Gerry March 20 2007
+% ------
+\long\def\@makefntext#1{\noindent 
+%\hbox to .5em{\hss$^{\@thefnmark}$}#1}   % original
+\hbox to .5em{\hss\textsuperscript{\@thefnmark}}#1}  % C. Clifton / GM Oct. 2nd. 2002
+% -------
+
+\long\def\@maketntext#1{\noindent
+#1}
+
+\long\def\@maketitlenotetext#1#2{\noindent
+            \hbox to 1.8em{\hss$^{#1}$}#2}
+
+\setcounter{topnumber}{2}
+\def\topfraction{.7}
+\setcounter{bottomnumber}{1}
+\def\bottomfraction{.3}
+\setcounter{totalnumber}{3}
+\def\textfraction{.2}
+\def\floatpagefraction{.5}
+\setcounter{dbltopnumber}{2}
+\def\dbltopfraction{.7}
+\def\dblfloatpagefraction{.5}
+
+%
+\long\def\@makecaption#1#2{
+   \vskip \baselineskip
+   \setbox\@tempboxa\hbox{\textbf{#1: #2}}
+   \ifdim \wd\@tempboxa >\hsize % IF longer than one line:
+       \textbf{#1: #2}\par               %   THEN set as ordinary paragraph.
+     \else                      %   ELSE  center.
+       \hbox to\hsize{\hfil\box\@tempboxa\hfil}\par
+   \fi}
+
+%
+
+\long\def\@makecaption#1#2{
+   \vskip 10pt
+   \setbox\@tempboxa\hbox{\textbf{#1: #2}}
+   \ifdim \wd\@tempboxa >\hsize % IF longer than one line:
+       \textbf{#1: #2}\par                %   THEN set as ordinary paragraph.
+     \else                      %   ELSE  center.
+       \hbox to\hsize{\hfil\box\@tempboxa\hfil}
+   \fi}
+
+\@ifundefined{figure}{\newcounter {figure}} % this is for LaTeX2e
+
+\def\fps@figure{tbp}
+\def\ftype@figure{1}
+\def\ext@figure{lof}
+\def\fnum@figure{Figure \thefigure}
+\def\figure{\@float{figure}}
+%\let\endfigure\end@float
+\def\endfigure{\end@float} 		% Gerry January 2008
+\@namedef{figure*}{\@dblfloat{figure}}
+\@namedef{endfigure*}{\end@dblfloat}
+
+\@ifundefined{table}{\newcounter {table}} % this is for LaTeX2e
+
+\def\fps@table{tbp}
+\def\ftype@table{2}
+\def\ext@table{lot}
+\def\fnum@table{Table \thetable}
+\def\table{\@float{table}}
+%\let\endtable\end@float
+\def\endtable{\end@float}		% Gerry January 2008
+\@namedef{table*}{\@dblfloat{table}}
+\@namedef{endtable*}{\end@dblfloat}
+
+\newtoks\titleboxnotes
+\newcount\titleboxnoteflag
+
+\def\maketitle{\par
+ \begingroup
+   \def\thefootnote{\fnsymbol{footnote}}
+   \def\@makefnmark{\hbox
+       to 0pt{$^{\@thefnmark}$\hss}}
+     \twocolumn[\@maketitle]
+\@thanks
+ \endgroup
+ \setcounter{footnote}{0}
+ \let\maketitle\relax
+ \let\@maketitle\relax
+ \gdef\@thanks{}\gdef\@author{}\gdef\@title{}\gdef\@subtitle{}\let\thanks\relax
+ \@copyrightspace}
+
+%% CHANGES ON NEXT LINES
+\newif\if@ll % to record which version of LaTeX is in use
+
+\expandafter\ifx\csname LaTeXe\endcsname\relax % LaTeX2.09 is used
+\else% LaTeX2e is used, so set ll to true
+\global\@lltrue
+\fi
+
+\if@ll
+  \NeedsTeXFormat{LaTeX2e}
+  \ProvidesClass{sig-alternate} [2013/05/12 v2.7 based on acmproc.cls V1.3 <Nov. 30 '99>]
+  \RequirePackage{latexsym}% QUERY: are these two really needed?
+  \let\dooptions\ProcessOptions
+\else
+  \let\dooptions\@options
+\fi
+%% END CHANGES
+
+\def\@height{height}
+\def\@width{width}
+\def\@minus{minus}
+\def\@plus{plus}
+\def\hb@xt@{\hbox to}
+\newif\if@faircopy
+\@faircopyfalse
+\def\ds@faircopy{\@faircopytrue}
+
+\def\ds@preprint{\@faircopyfalse}
+
+\@twosidetrue
+\@mparswitchtrue
+\def\ds@draft{\overfullrule 5\p@}
+%% CHANGE ON NEXT LINE
+\dooptions
+
+\lineskip \p@
+\normallineskip \p@
+\def\baselinestretch{1}
+\def\@ptsize{0} %needed for amssymbols.sty
+
+%% CHANGES ON NEXT LINES
+\if@ll% allow use of old-style font change commands in LaTeX2e
+\@maxdepth\maxdepth
+%
+\DeclareOldFontCommand{\rm}{\ninept\rmfamily}{\mathrm}
+\DeclareOldFontCommand{\sf}{\normalfont\sffamily}{\mathsf}
+\DeclareOldFontCommand{\tt}{\normalfont\ttfamily}{\mathtt}
+\DeclareOldFontCommand{\bf}{\normalfont\bfseries}{\mathbf}
+\DeclareOldFontCommand{\it}{\normalfont\itshape}{\mathit}
+\DeclareOldFontCommand{\sl}{\normalfont\slshape}{\@nomath\sl}
+\DeclareOldFontCommand{\sc}{\normalfont\scshape}{\@nomath\sc}
+\DeclareRobustCommand*{\cal}{\@fontswitch{\relax}{\mathcal}}
+\DeclareRobustCommand*{\mit}{\@fontswitch{\relax}{\mathnormal}}
+\fi
+%
+\if@ll
+ \renewcommand{\rmdefault}{cmr}  % was 'ttm'
+% Note! I have also found 'mvr' to work ESPECIALLY well.
+% Gerry - October 1999
+% You may need to change your LV1times.fd file so that sc is
+% mapped to cmcsc - -for smallcaps -- that is if you decide
+% to change {cmr} to {times} above. (Not recommended)
+  \renewcommand{\@ptsize}{}
+  \renewcommand{\normalsize}{%
+    \@setfontsize\normalsize\@ixpt{10.5\p@}%\ninept%
+    \abovedisplayskip 6\p@ \@plus2\p@ \@minus\p@
+    \belowdisplayskip \abovedisplayskip
+    \abovedisplayshortskip 6\p@ \@minus 3\p@
+    \belowdisplayshortskip 6\p@ \@minus 3\p@
+    \let\@listi\@listI
+  }
+\else
+  \def\@normalsize{%changed next to 9 from 10
+    \@setsize\normalsize{9\p@}\ixpt\@ixpt
+   \abovedisplayskip 6\p@ \@plus2\p@ \@minus\p@
+    \belowdisplayskip \abovedisplayskip
+    \abovedisplayshortskip 6\p@ \@minus 3\p@
+    \belowdisplayshortskip 6\p@ \@minus 3\p@
+    \let\@listi\@listI
+  }%
+\fi
+\if@ll
+  \newcommand\scriptsize{\@setfontsize\scriptsize\@viipt{8\p@}}
+  \newcommand\tiny{\@setfontsize\tiny\@vpt{6\p@}}
+  \newcommand\large{\@setfontsize\large\@xiipt{14\p@}}
+  \newcommand\Large{\@setfontsize\Large\@xivpt{18\p@}}
+  \newcommand\LARGE{\@setfontsize\LARGE\@xviipt{20\p@}}
+  \newcommand\huge{\@setfontsize\huge\@xxpt{25\p@}}
+  \newcommand\Huge{\@setfontsize\Huge\@xxvpt{30\p@}}
+\else
+  \def\scriptsize{\@setsize\scriptsize{8\p@}\viipt\@viipt}
+  \def\tiny{\@setsize\tiny{6\p@}\vpt\@vpt}
+  \def\large{\@setsize\large{14\p@}\xiipt\@xiipt}
+  \def\Large{\@setsize\Large{18\p@}\xivpt\@xivpt}
+  \def\LARGE{\@setsize\LARGE{20\p@}\xviipt\@xviipt}
+  \def\huge{\@setsize\huge{25\p@}\xxpt\@xxpt}
+  \def\Huge{\@setsize\Huge{30\p@}\xxvpt\@xxvpt}
+\fi
+\normalsize
+
+% make aubox hsize/number of authors up to 3, less gutter
+% then showbox gutter showbox gutter showbox -- GKMT Aug 99
+\newbox\@acmtitlebox
+\def\@maketitle{\newpage
+ \null
+ \setbox\@acmtitlebox\vbox{%
+\baselineskip 20pt
+\vskip 2em                   % Vertical space above title.
+   \begin{center}
+    {\ttlfnt \@title\par}       % Title set in 18pt Helvetica (Arial) bold size.
+    \vskip 1.5em                % Vertical space after title.
+%This should be the subtitle.
+{\subttlfnt \the\subtitletext\par}\vskip 1.25em%\fi
+    {\baselineskip 16pt\aufnt   % each author set in \12 pt Arial, in a
+     \lineskip .5em             % tabular environment
+     \begin{tabular}[t]{c}\@author
+     \end{tabular}\par}
+    \vskip 1.5em               % Vertical space after author.
+   \end{center}}
+ \dimen0=\ht\@acmtitlebox
+ \advance\dimen0 by -12.75pc\relax % Increased space for title box -- KBT
+ \unvbox\@acmtitlebox
+ \ifdim\dimen0<0.0pt\relax\vskip-\dimen0\fi}
+
+
+\newcount\titlenotecount
+\global\titlenotecount=0
+\newtoks\tntoks
+\newtoks\tntokstwo
+\newtoks\tntoksthree
+\newtoks\tntoksfour
+\newtoks\tntoksfive
+
+\def\abstract{
+\ifnum\titlenotecount>0 % was =1
+    \insert\footins{%
+    \reset@font\footnotesize
+        \interlinepenalty\interfootnotelinepenalty
+        \splittopskip\footnotesep
+        \splitmaxdepth \dp\strutbox \floatingpenalty \@MM
+        \hsize\columnwidth \@parboxrestore
+        \protected@edef\@currentlabel{%
+        }%
+        \color@begingroup
+\ifnum\titlenotecount=1
+      \@maketntext{%
+         \raisebox{4pt}{$\ast$}\rule\z@\footnotesep\ignorespaces\the\tntoks\@finalstrut\strutbox}%
+\fi
+\ifnum\titlenotecount=2
+      \@maketntext{%
+      \raisebox{4pt}{$\ast$}\rule\z@\footnotesep\ignorespaces\the\tntoks\par\@finalstrut\strutbox}%
+\@maketntext{%
+         \raisebox{4pt}{$\dagger$}\rule\z@\footnotesep\ignorespaces\the\tntokstwo\@finalstrut\strutbox}%
+\fi
+\ifnum\titlenotecount=3
+      \@maketntext{%
+         \raisebox{4pt}{$\ast$}\rule\z@\footnotesep\ignorespaces\the\tntoks\par\@finalstrut\strutbox}%
+\@maketntext{%
+         \raisebox{4pt}{$\dagger$}\rule\z@\footnotesep\ignorespaces\the\tntokstwo\par\@finalstrut\strutbox}%
+\@maketntext{%
+         \raisebox{4pt}{$\ddagger$}\rule\z@\footnotesep\ignorespaces\the\tntoksthree\@finalstrut\strutbox}%
+\fi
+\ifnum\titlenotecount=4
+      \@maketntext{%
+         \raisebox{4pt}{$\ast$}\rule\z@\footnotesep\ignorespaces\the\tntoks\par\@finalstrut\strutbox}%
+\@maketntext{%
+         \raisebox{4pt}{$\dagger$}\rule\z@\footnotesep\ignorespaces\the\tntokstwo\par\@finalstrut\strutbox}%
+\@maketntext{%
+         \raisebox{4pt}{$\ddagger$}\rule\z@\footnotesep\ignorespaces\the\tntoksthree\par\@finalstrut\strutbox}%
+\@maketntext{%
+         \raisebox{4pt}{$\S$}\rule\z@\footnotesep\ignorespaces\the\tntoksfour\@finalstrut\strutbox}%
+\fi
+\ifnum\titlenotecount=5
+      \@maketntext{%
+         \raisebox{4pt}{$\ast$}\rule\z@\footnotesep\ignorespaces\the\tntoks\par\@finalstrut\strutbox}%
+\@maketntext{%
+         \raisebox{4pt}{$\dagger$}\rule\z@\footnotesep\ignorespaces\the\tntokstwo\par\@finalstrut\strutbox}%
+\@maketntext{%
+         \raisebox{4pt}{$\ddagger$}\rule\z@\footnotesep\ignorespaces\the\tntoksthree\par\@finalstrut\strutbox}%
+\@maketntext{%
+         \raisebox{4pt}{$\S$}\rule\z@\footnotesep\ignorespaces\the\tntoksfour\par\@finalstrut\strutbox}%
+\@maketntext{%
+         \raisebox{4pt}{$\P$}\rule\z@\footnotesep\ignorespaces\the\tntoksfive\@finalstrut\strutbox}%
+\fi
+   \color@endgroup} %g}
+\fi
+\setcounter{footnote}{0}
+\section*{ABSTRACT}\normalsize%\ninept
+}
+
+\def\endabstract{\if@twocolumn\else\endquotation\fi}
+
+\def\keywords{\if@twocolumn
+\section*{Keywords}
+\else \small
+\quotation
+\fi}
+
+\def\terms#1{%
+%\if@twocolumn
+%\section*{General Terms}
+%\else \small
+%\quotation
+%\fi
+}
+
+% -- Classification needs to be a bit smart due to optionals - Gerry/Georgia November 2nd. 1999
+\newcount\catcount
+\global\catcount=1
+
+\def\category#1#2#3{%
+\ifnum\catcount=1
+\section*{Categories and Subject Descriptors}
+\advance\catcount by 1\else{\unskip; }\fi
+    \@ifnextchar [{\@category{#1}{#2}{#3}}{\@category{#1}{#2}{#3}[]}%
+}
+
+
+\def\@category#1#2#3[#4]{%
+    \begingroup
+        \let\and\relax
+            #1 [\textbf{#2}]%
+            \if!#4!%
+                \if!#3!\else : #3\fi
+            \else
+                :\space
+                \if!#3!\else #3\kern\z@---\hskip\z@\fi
+                \textit{#4}%
+            \fi
+    \endgroup
+}
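+%
+% For example, a paper using this class might declare something like the
+% following (the classification strings here are illustrative placeholders
+% only); the optional bracketed argument is typeset in italics after the
+% dash, as parsed above:
+%
+%   \category{H.5.5}{Information Interfaces and Presentation}{Sound and Music Computing}
+%   \category{H.5.2}{Information Interfaces and Presentation}{User Interfaces}[Graphical user interfaces]
+%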
+%
+
+
+
+
+
+
+%%% This section (written by KBT) handles the 1" box in the lower left
+%%% corner of the left column of the first page by creating a picture,
+%%% and inserting the predefined string at the bottom (with a negative
+%%% displacement to offset the space allocated for a non-existent
+%%% caption).
+%%%
+\newtoks\copyrightnotice
+\def\ftype@copyrightbox{8}
+\def\@copyrightspace{
+\@float{copyrightbox}[b]
+\begin{center}
+\setlength{\unitlength}{1pc}
+\begin{picture}(20,6) %Space for copyright notice
+\put(0,-0.95){\crnotice{\@toappear}}
+\end{picture}
+\end{center}
+\end@float}
+
+\def\@toappear{} % Default setting blank - commands below change this.
+\long\def\toappear#1{\def\@toappear{\parbox[b]{20pc}{\baselineskip 9pt#1}}}
+\def\toappearbox#1{\def\@toappear{\raisebox{5pt}{\framebox[20pc]{\parbox[b]{19pc}{#1}}}}}
+
+\newtoks\conf
+\newtoks\confinfo
+\def\conferenceinfo#1#2{\global\conf={#1}\global\confinfo{#2}}
+
+
+%\def\marginpar{\@latexerr{The \marginpar command is not allowed in the
+%  `acmconf' document style.}\@eha}
+
+\def\marginpar{\ClassError{%
+    \string\marginpar\space is not allowed in the `acmconf' document		% January 2008
+    style}\@eha}
+
+\mark{{}{}}     % Initializes TeX's marks
+
+\def\today{\ifcase\month\or
+  January\or February\or March\or April\or May\or June\or
+  July\or August\or September\or October\or November\or December\fi
+  \space\number\day, \number\year}
+
+\def\@begintheorem#1#2{%
+    \parskip 0pt % GM July 2000 (for tighter spacing)
+    \trivlist
+    \item[%
+        \hskip 10\p@
+        \hskip \labelsep
+        {{\sc #1}\hskip 5\p@\relax#2.}%
+    ]
+    \it
+}
+\def\@opargbegintheorem#1#2#3{%
+    \parskip 0pt % GM July 2000 (for tighter spacing)
+    \trivlist
+    \item[%
+        \hskip 10\p@
+        \hskip \labelsep
+        {\sc #1\ #2\             % This mod by Gerry to enumerate corollaries
+   \setbox\@tempboxa\hbox{(#3)}  % and bracket the 'corollary title'
+        \ifdim \wd\@tempboxa>\z@ % and retain the correct numbering of e.g. theorems
+            \hskip 5\p@\relax    % if they occur 'around' said corollaries.
+            \box\@tempboxa       % Gerry - Nov. 1999.
+        \fi.}%
+    ]
+    \it
+}
+\newif\if@qeded
+\global\@qededfalse
+
+% -- original
+%\def\proof{%
+%  \vspace{-\parskip} % GM July 2000 (for tighter spacing)
+%    \global\@qededfalse
+%    \@ifnextchar[{\@xproof}{\@proof}%
+%}
+% -- end of original
+
+% (JSS) Fix for vertical spacing bug - Gerry Murray July 30th. 2002
+\def\proof{%
+\vspace{-\lastskip}\vspace{-\parsep}\penalty-51%
+\global\@qededfalse
+\@ifnextchar[{\@xproof}{\@proof}%
+}
+
+\def\endproof{%
+    \if@qeded\else\qed\fi
+    \endtrivlist
+}
+\def\@proof{%
+    \trivlist
+    \item[%
+        \hskip 10\p@
+        \hskip \labelsep
+        {\sc Proof.}%
+    ]
+    \ignorespaces
+}
+\def\@xproof[#1]{%
+    \trivlist
+    \item[\hskip 10\p@\hskip \labelsep{\sc Proof #1.}]%
+    \ignorespaces
+}
+\def\qed{%
+    \unskip
+    \kern 10\p@
+    \begingroup
+        \unitlength\p@
+        \linethickness{.4\p@}%
+        \framebox(6,6){}%
+    \endgroup
+    \global\@qededtrue
+}
+
+\def\newdef#1#2{%
+    \expandafter\@ifdefinable\csname #1\endcsname
+        {\@definecounter{#1}%
+         \expandafter\xdef\csname the#1\endcsname{\@thmcounter{#1}}%
+         \global\@namedef{#1}{\@defthm{#1}{#2}}%
+         \global\@namedef{end#1}{\@endtheorem}%
+    }%
+}
+\def\@defthm#1#2{%
+    \refstepcounter{#1}%
+    \@ifnextchar[{\@ydefthm{#1}{#2}}{\@xdefthm{#1}{#2}}%
+}
+\def\@xdefthm#1#2{%
+    \@begindef{#2}{\csname the#1\endcsname}%
+    \ignorespaces
+}
+\def\@ydefthm#1#2[#3]{%
+    \trivlist
+    \item[%
+        \hskip 10\p@
+        \hskip \labelsep
+        {\it #2%
+%         \savebox\@tempboxa{#3}%
+         \saveb@x\@tempboxa{#3}%		% January 2008
+         \ifdim \wd\@tempboxa>\z@
+            \ \box\@tempboxa
+         \fi.%
+        }]%
+    \ignorespaces
+}
+\def\@begindef#1#2{%
+    \trivlist
+    \item[%
+        \hskip 10\p@
+        \hskip \labelsep
+        {\it #1\ \rm #2.}%
+    ]%
+}
+\def\theequation{\arabic{equation}}
+
+\newcounter{part}
+\newcounter{section}
+\newcounter{subsection}[section]
+\newcounter{subsubsection}[subsection]
+\newcounter{paragraph}[subsubsection]
+\def\thepart{\Roman{part}}
+\def\thesection{\arabic{section}}
+\def\thesubsection{\thesection.\arabic{subsection}}
+\def\thesubsubsection{\thesubsection.\arabic{subsubsection}} %removed \subsecfnt 29 July 2002 gkmt
+\def\theparagraph{\thesubsubsection.\arabic{paragraph}} %removed \subsecfnt 29 July 2002 gkmt
+\newif\if@uchead
+\@ucheadfalse
+
+%% CHANGES: NEW NOTE
+%% NOTE: OK to use old-style font commands below, since they were
+%% suitably redefined for LaTeX2e
+%% END CHANGES
+\setcounter{secnumdepth}{3}
+\def\part{%
+    \@startsection{part}{9}{\z@}{-10\p@ \@plus -4\p@ \@minus -2\p@}
+        {4\p@}{\normalsize\@ucheadtrue}%
+}
+\def\section{%
+    \@startsection{section}{1}{\z@}{-10\p@ \@plus -4\p@ \@minus -2\p@}% GM
+    {4\p@}{\baselineskip 14pt\secfnt\@ucheadtrue}%
+}
+
+\def\subsection{%
+    \@startsection{subsection}{2}{\z@}{-8\p@ \@plus -2\p@ \@minus -\p@}
+    {4\p@}{\secfnt}%
+}
+\def\subsubsection{%
+    \@startsection{subsubsection}{3}{\z@}{-8\p@ \@plus -2\p@ \@minus -\p@}%
+    {4\p@}{\subsecfnt}%
+}
+%\def\paragraph{%
+%    \vskip 12pt\@startsection{paragraph}{3}{\z@}{6\p@ \@plus \p@}% original
+%    {-5\p@}{\subsecfnt}%
+%}
+%  If one wants sections, subsections and subsubsections numbered,
+%  but not paragraphs, one usually sets secnumepth to 3.
+%  For that, the "depth" of paragraphs must be given correctly
+%  in the definition (``4'' instead of ``3'' as second argument
+%  of @startsection):
+\def\paragraph{%
+    \vskip 12pt\@startsection{paragraph}{4}{\z@}{6\p@ \@plus \p@}%    % GM and Wolfgang May - 11/30/06
+    {-5\p@}{\subsecfnt}%
+}
+\let\@period=.
+\def\@startsection#1#2#3#4#5#6{%
+        \if@noskipsec  %gkmt, 11 aug 99
+        \global\let\@period\@empty
+        \leavevmode
+        \global\let\@period.%
+    \fi
+      \par %
+    \@tempskipa #4\relax
+    \@afterindenttrue
+    \ifdim \@tempskipa <\z@
+        \@tempskipa -\@tempskipa
+        \@afterindentfalse
+    \fi
+    \if@nobreak
+    \everypar{}%
+    \else
+        \addpenalty\@secpenalty
+        \addvspace\@tempskipa
+    \fi
+\parskip=0pt % GM July 2000 (non numbered) section heads
+    \@ifstar
+        {\@ssect{#3}{#4}{#5}{#6}}
+        {\@dblarg{\@sect{#1}{#2}{#3}{#4}{#5}{#6}}}%
+}
+\def\@sect#1#2#3#4#5#6[#7]#8{%
+    \ifnum #2>\c@secnumdepth
+        \let\@svsec\@empty
+    \else
+        \refstepcounter{#1}%
+        \edef\@svsec{%
+            \begingroup
+                %\ifnum#2>2 \noexpand\rm \fi % changed to next 29 July 2002 gkmt
+            \ifnum#2>2 \noexpand#6 \fi
+                \csname the#1\endcsname
+            \endgroup
+            \ifnum #2=1\relax .\fi
+            \hskip 1em
+        }%
+    \fi
+    \@tempskipa #5\relax
+    \ifdim \@tempskipa>\z@
+        \begingroup
+            #6\relax
+            \@hangfrom{\hskip #3\relax\@svsec}%
+            \begingroup
+                \interlinepenalty \@M
+                \if@uchead
+                    \uppercase{#8}%
+                \else
+                    #8%
+                \fi
+                \par
+            \endgroup
+        \endgroup
+        \csname #1mark\endcsname{#7}%
+        \vskip -12pt  %gkmt, 11 aug 99 and GM July 2000 (was -14) - numbered section head spacing
+\addcontentsline{toc}{#1}{%
+            \ifnum #2>\c@secnumdepth \else
+                \protect\numberline{\csname the#1\endcsname}%
+            \fi
+            #7%
+        }%
+    \else
+        \def\@svsechd{%
+            #6%
+            \hskip #3\relax
+            \@svsec
+            \if@uchead
+                \uppercase{#8}%
+            \else
+                #8%
+            \fi
+            \csname #1mark\endcsname{#7}%
+            \addcontentsline{toc}{#1}{%
+                \ifnum #2>\c@secnumdepth \else
+                    \protect\numberline{\csname the#1\endcsname}%
+                \fi
+                #7%
+            }%
+        }%
+    \fi
+    \@xsect{#5}\hskip 1pt
+    \par
+}
+\def\@xsect#1{%
+    \@tempskipa #1\relax
+    \ifdim \@tempskipa>\z@
+        \par
+        \nobreak
+        \vskip \@tempskipa
+        \@afterheading
+    \else
+        \global\@nobreakfalse
+        \global\@noskipsectrue
+        \everypar{%
+            \if@noskipsec
+                \global\@noskipsecfalse
+                \clubpenalty\@M
+                \hskip -\parindent
+                \begingroup
+                    \@svsechd
+                    \@period
+                \endgroup
+                \unskip
+                \@tempskipa #1\relax
+                \hskip -\@tempskipa
+            \else
+                \clubpenalty \@clubpenalty
+                \everypar{}%
+            \fi
+        }%
+    \fi
+    \ignorespaces
+}
+\def\@trivlist{%
+    \@topsepadd\topsep
+    \if@noskipsec
+        \global\let\@period\@empty
+        \leavevmode
+        \global\let\@period.%
+    \fi
+    \ifvmode
+        \advance\@topsepadd\partopsep
+    \else
+        \unskip
+        \par
+    \fi
+    \if@inlabel
+        \@noparitemtrue
+        \@noparlisttrue
+    \else
+        \@noparlistfalse
+        \@topsep\@topsepadd
+    \fi
+    \advance\@topsep \parskip
+    \leftskip\z@skip
+    \rightskip\@rightskip
+    \parfillskip\@flushglue
+    \@setpar{\if@newlist\else{\@@par}\fi}
+    \global\@newlisttrue
+    \@outerparskip\parskip
+}
+
+%%% Actually, 'abbrev' works just fine as the default
+%%% Bibliography style.
+
+\typeout{Using 'Abbrev' bibliography style}
+\newcommand\bibyear[2]{%
+    \unskip\quad\ignorespaces#1\unskip
+    \if#2..\quad \else \quad#2 \fi
+}
+\newcommand{\bibemph}[1]{{\em#1}}
+\newcommand{\bibemphic}[1]{{\em#1\/}}
+\newcommand{\bibsc}[1]{{\sc#1}}
+\def\@normalcite{%
+    \def\@cite##1##2{[##1\if@tempswa , ##2\fi]}%
+}
+\def\@citeNB{%
+    \def\@cite##1##2{##1\if@tempswa , ##2\fi}%
+}
+\def\@citeRB{%
+    \def\@cite##1##2{##1\if@tempswa , ##2\fi]}%
+}
+\def\start@cite#1#2{%
+    \edef\citeauthoryear##1##2##3{%
+        ###1%
+        \ifnum#2=\z@ \else\ ###2\fi
+    }%
+    \ifnum#1=\thr@@
+        \let\@@cite\@citeyear
+    \else
+        \let\@@cite\@citenormal
+    \fi
+    \@ifstar{\@citeNB\@@cite}{\@normalcite\@@cite}%
+}
+%\def\cite{\start@cite23}
+\DeclareRobustCommand\cite{\start@cite23}		% January 2008
+\def\citeNP{\cite*}					% No Parentheses e.g. 5
+%\def\citeA{\start@cite10}
+\DeclareRobustCommand\citeA{\start@cite10}		% January 2008
+\def\citeANP{\citeA*}
+%\def\shortcite{\start@cite23}				
+\DeclareRobustCommand\shortcite{\start@cite23}		% January 2008
+\def\shortciteNP{\shortcite*}
+%\def\shortciteA{\start@cite20}
+\DeclareRobustCommand\shortciteA{\start@cite20}		% January 2008
+\def\shortciteANP{\shortciteA*}
+%\def\citeyear{\start@cite30}
+\DeclareRobustCommand\citeyear{\start@cite30}		% January 2008
+\def\citeyearNP{\citeyear*}
+%\def\citeN{%
+\DeclareRobustCommand\citeN{%				% January 2008
+    \@citeRB
+    \def\citeauthoryear##1##2##3{##1\ [##3%
+        \def\reserved@a{##1}%
+        \def\citeauthoryear####1####2####3{%
+            \def\reserved@b{####1}%
+            \ifx\reserved@a\reserved@b
+                ####3%
+            \else
+                \errmessage{Package acmart Error: author mismatch
+                         in \string\citeN^^J^^J%
+                    See the acmart package documentation for explanation}%
+            \fi
+        }%
+    }%
+    \@ifstar\@citeyear\@citeyear
+}
+%\def\shortciteN{%
+\DeclareRobustCommand\shortciteN{%			% January 2008
+    \@citeRB
+    \def\citeauthoryear##1##2##3{##2\ [##3%
+        \def\reserved@a{##2}%
+        \def\citeauthoryear####1####2####3{%
+            \def\reserved@b{####2}%
+            \ifx\reserved@a\reserved@b
+                ####3%
+            \else
+                \errmessage{Package acmart Error: author mismatch
+                         in \string\shortciteN^^J^^J%
+                    See the acmart package documentation for explanation}%
+            \fi
+        }%
+    }%
+    \@ifstar\@citeyear\@citeyear  % GM July 2000
+}
+
+\def\@citenormal{%
+    \@ifnextchar [{\@tempswatrue\@citex;}%
+% original                 {\@tempswafalse\@citex,[]}% was ; Gerry 2/24/00
+{\@tempswafalse\@citex[]}%  	% GERRY FIX FOR BABEL 3/20/2009
+}
+
+\def\@citeyear{%
+    \@ifnextchar [{\@tempswatrue\@citex,}%
+% original                  {\@tempswafalse\@citex,[]}%
+{\@tempswafalse\@citex[]}%	%  GERRY FIX FOR BABEL 3/20/2009
+}
+
+\def\@citex#1[#2]#3{%
+    \let\@citea\@empty
+    \@cite{%
+        \@for\@citeb:=#3\do{%
+            \@citea
+% original            \def\@citea{#1 }%
+            \def\@citea{#1, }% 	% GERRY FIX FOR BABEL 3/20/2009 -- SO THAT YOU GET [1, 2] IN THE BODY TEXT
+            \edef\@citeb{\expandafter\@iden\@citeb}%
+            \if@filesw
+                \immediate\write\@auxout{\string\citation{\@citeb}}%
+            \fi
+            \@ifundefined{b@\@citeb}{%
+                {\bf ?}%
+                \@warning{%
+                    Citation `\@citeb' on page \thepage\space undefined%
+                }%
+            }%
+            {\csname b@\@citeb\endcsname}%
+        }%
+    }{#2}%
+}
+%\let\@biblabel\@gobble   % Dec. 2008 - Gerry
+% ----
+\def\@biblabelnum#1{[#1]} % Gerry's solution #1 - for Natbib -- April 2009
+\let\@biblabel=\@biblabelnum  % Gerry's solution #1 - for Natbib -- April 2009
+\def\newblock{\relax} % Gerry Dec. 2008
+% ---
+\newdimen\bibindent
+\setcounter{enumi}{1}
+\bibindent=0em
+\def\thebibliography#1{% 
+\ifnum\addauflag=0\addauthorsection\global\addauflag=1\fi
+     \section[References]{%    <=== OPTIONAL ARGUMENT ADDED HERE
+        {References} % was uppercased but this affects pdf bookmarks (SP/GM October 2004)
+          {\vskip -9pt plus 1pt} % GM Nov. 2006 / GM July 2000 (for somewhat tighter spacing) 
+         \@mkboth{{\refname}}{{\refname}}%
+     }%
+     \list{[\arabic{enumi}]}{%
+         \settowidth\labelwidth{[#1]}%
+         \leftmargin\labelwidth
+         \advance\leftmargin\labelsep
+         \advance\leftmargin\bibindent
+         \parsep=0pt\itemsep=1pt % GM July 2000
+         \itemindent -\bibindent
+         \listparindent \itemindent
+         \usecounter{enumi}
+     }%
+     \let\newblock\@empty
+     \raggedright % GM July 2000
+     \sloppy
+     \sfcode`\.=1000\relax
+}
+
+
+\gdef\balancecolumns
+{\vfill\eject
+\global\@colht=\textheight
+\global\ht\@cclv=\textheight
+}
+
+\newcount\colcntr
+\global\colcntr=0
+%\newbox\savebox
+\newbox\saveb@x				% January 2008
+
+\gdef \@makecol {%
+\global\advance\colcntr by 1
+\ifnum\colcntr>2 \global\colcntr=1\fi
+   \ifvoid\footins
+     \setbox\@outputbox \box\@cclv
+   \else
+     \setbox\@outputbox \vbox{%
+\boxmaxdepth \@maxdepth
+       \@tempdima\dp\@cclv
+       \unvbox \@cclv
+       \vskip-\@tempdima
+       \vskip \skip\footins
+       \color@begingroup
+         \normalcolor
+         \footnoterule
+         \unvbox \footins
+       \color@endgroup
+       }%
+   \fi
+   \xdef\@freelist{\@freelist\@midlist}%
+   \global \let \@midlist \@empty
+   \@combinefloats
+   \ifvbox\@kludgeins
+     \@makespecialcolbox
+   \else
+     \setbox\@outputbox \vbox to\@colht {%
+\@texttop
+       \dimen@ \dp\@outputbox
+       \unvbox \@outputbox
+   \vskip -\dimen@
+       \@textbottom
+       }%
+   \fi
+   \global \maxdepth \@maxdepth
+}
+\def\titlenote{\@ifnextchar[\@xtitlenote{\stepcounter\@mpfn
+\global\advance\titlenotecount by 1
+\ifnum\titlenotecount=1
+    \raisebox{9pt}{$\ast$}
+\fi
+\ifnum\titlenotecount=2
+    \raisebox{9pt}{$\dagger$}
+\fi
+\ifnum\titlenotecount=3
+    \raisebox{9pt}{$\ddagger$}
+\fi
+\ifnum\titlenotecount=4
+\raisebox{9pt}{$\S$}
+\fi
+\ifnum\titlenotecount=5
+\raisebox{9pt}{$\P$}
+\fi
+         \@titlenotetext
+}}
+
+\long\def\@titlenotetext#1{\insert\footins{%
+\ifnum\titlenotecount=1\global\tntoks={#1}\fi
+\ifnum\titlenotecount=2\global\tntokstwo={#1}\fi
+\ifnum\titlenotecount=3\global\tntoksthree={#1}\fi
+\ifnum\titlenotecount=4\global\tntoksfour={#1}\fi
+\ifnum\titlenotecount=5\global\tntoksfive={#1}\fi
+    \reset@font\footnotesize
+    \interlinepenalty\interfootnotelinepenalty
+    \splittopskip\footnotesep
+    \splitmaxdepth \dp\strutbox \floatingpenalty \@MM
+    \hsize\columnwidth \@parboxrestore
+    \protected@edef\@currentlabel{%
+    }%
+    \color@begingroup
+   \color@endgroup}}
+
+%%%%%%%%%%%%%%%%%%%%%%%%%
+\ps@plain
+\baselineskip=11pt
+\let\thepage\relax % For NO page numbers - GM Nov. 30th. 1999 and July 2000
+\def\setpagenumber#1{\global\setcounter{page}{#1}}
+%\pagenumbering{arabic}  % Arabic page numbers GM July 2000
+\twocolumn             % Double column.
+\flushbottom           % Even bottom -- alas, does not balance columns at end of document
+\pagestyle{plain}
+
+% Need Copyright Year and Copyright Data to be user definable (in .tex file).
+% Gerry Nov. 30th. 1999
+\newtoks\copyrtyr
+\newtoks\acmcopyr
+\newtoks\boilerplate
+\global\acmcopyr={X-XXXXX-XX-X/XX/XX}  % Default - 5/11/2001 *** Gerry
+\global\copyrtyr={\the\year}                % Default - 3/3/2003 *** Gerry
+\def\acmPrice#1{\gdef\@acmPrice{#1}}
+\acmPrice{} %article price  % Changed to 15 - June 2012 - Gerry
+
+
+\def\CopyrightYear#1{\global\copyrtyr{#1}}
+\def\crdata#1{\global\acmcopyr{#1}}
+\def\permission#1{\global\boilerplate{#1}}
+
+% ISBN
+%
+\def\isbn#1{\global\acmcopyr={#1}}
+\isbn{978-1-4503-2138-9}
+
+\RequirePackage{url}
+\urlstyle{rm}
+\def\doi#1{\def\@doi{#1}}
+\doi{10.1145/1235}
+\def\printdoi#1{\url{#1}}
+
+
+
+% Copyright
+\RequirePackage{waccopyright}
+\setcopyright{none}
+
+%
+\global\boilerplate={\@copyrightpermission}
+
+
+
+\newtoks\copyrightetc
+\global\copyrightetc{%
+{\noindent\confname\ \the\conf\ \the\confinfo}\par\smallskip
+  \if@printcopyright
+    \copyright\ \the\copyrtyr\ \@copyrightowner
+  \fi
+  \if@acmowned ISBN \else\ifnum\acm@copyrightmode=2 ISBN \else %\par\smallskip ~ 
+\fi\fi
+% \the\acmcopyr
+\ifx\@acmPrice\@empty.\else\dots\@acmPrice\fi\par%\smallskip
+%{DOI: \small\expandafter\printdoi\expandafter{\@doi}%
+} 
+\toappear{\fontsize{7pt}{8pt}\fontfamily{ptm}\selectfont
+  \the\boilerplate\par\smallskip
+ \the\copyrightetc}
+%\DeclareFixedFont{\altcrnotice}{OT1}{tmr}{m}{n}{8}  % << patch needed for accenting e.g. Montreal - Gerry, May 2007
+%\DeclareFixedFont{\altconfname}{OT1}{tmr}{m}{it}{8}  % << patch needed for accenting in italicized confname - Gerry, May 2007
+%
+%{\altconfname{{\the\conf}}} {\altcrnotice\the\confinfo\par} \the\copyrightetc.}  % << Gerry, May 2007
+%
+% The following section (i.e. 3 .sty inclusions) was added in May 2007 so as to fix the problems that many
+% authors were having with accents. Sometimes accents would occur, but the letter-character would be of a different
+% font. Conversely the letter-character font would be correct but, e.g. a 'bar' would appear superimposed on the
+% character instead of, say, an unlaut/diaresis. Sometimes the letter-character would NOT appear at all.
+% Using [T1]{fontenc} outright was not an option as this caused 99% of the authors to 'produce' a Type-3 (bitmapped)
+% PDF file - useless for production. 
+%
+% For proper (font) accenting we NEED these packages to be part of the .cls file i.e. 'ae', 'aecompl' and 'aeguil' 
+% ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+%% This is file `ae.sty' 
+\def\fileversion{1.3}
+\def\filedate{2001/02/12}
+\NeedsTeXFormat{LaTeX2e}
+%\ProvidesPackage{ae}[\filedate\space\fileversion\space  % GM
+% Almost European Computer Modern]                       % GM - keeping the log file clean(er)
+\newif\if@ae@slides \@ae@slidesfalse
+\DeclareOption{slides}{\@ae@slidestrue}
+\ProcessOptions
+\fontfamily{aer}
+\RequirePackage[T1]{fontenc}
+\if@ae@slides
+    \renewcommand{\sfdefault}{laess}
+    \renewcommand{\rmdefault}{laess} % no roman
+    \renewcommand{\ttdefault}{laett}
+\else
+    \renewcommand{\sfdefault}{aess}
+    \renewcommand{\rmdefault}{aer}
+    \renewcommand{\ttdefault}{aett}
+\fi
+\DeclareRobustCommand{\ttfamily}{\fontencoding{T1}\fontfamily{lmtt}\selectfont}
+\endinput
+%% 
+%% End of file `ae.sty'.
+%
+%
+\def\fileversion{0.9}
+\def\filedate{1998/07/23}
+\NeedsTeXFormat{LaTeX2e}
+%\ProvidesPackage{aecompl}[\filedate\space\fileversion\space   % GM
+%T1 Complements for AE fonts (D. Roegel)]                      % GM -- keeping the log file clean(er)
+ 
+\def\@ae@compl#1{{\fontencoding{T1}\fontfamily{cmr}\selectfont\symbol{#1}}}
+\def\guillemotleft{\@ae@compl{19}}
+\def\guillemotright{\@ae@compl{20}}
+\def\guilsinglleft{\@ae@compl{14}}
+\def\guilsinglright{\@ae@compl{15}}
+\def\TH{\@ae@compl{222}}
+\def\NG{\@ae@compl{141}}
+\def\ng{\@ae@compl{173}}
+\def\th{\@ae@compl{254}}
+\def\DJ{\@ae@compl{208}}
+\def\dj{\@ae@compl{158}}
+\def\DH{\@ae@compl{208}}
+\def\dh{\@ae@compl{240}}
+\def\@perthousandzero{\@ae@compl{24}}
+\def\textperthousand{\%\@perthousandzero}
+\def\textpertenthousand{\%\@perthousandzero\@perthousandzero}
+\endinput
+%
+%
+%% This is file `aeguill.sty' 
+% This file gives french guillemets (and not guillemots!)
+% built with the Polish CMR fonts (default), WNCYR fonts, the LASY fonts 
+% or with the EC fonts. 
+% This is useful in conjunction with the ae package
+% (this package loads the ae package in case it has not been loaded)
+%  and with or without the french(le) package.
+%
+% In order to get the guillemets, it is necessary to either type
+% \guillemotleft and \guillemotright, or to use an 8 bit encoding
+% (such as ISO-Latin1) which selects these two commands, 
+% or, if you use the french package (but not the frenchle package), 
+% to type << or >>.
+%
+% By default, you get the Polish CMR guillemets; if this package is loaded
+% with the `cm' option, you get the LASY guillemets; with `ec,' you
+% get the EC guillemets, and with `cyr,' you get the cyrillic guillemets.
+%
+% In verbatim mode, you always get the EC/TT guillemets.
+%
+% The default option is interesting in conjunction with PDF,
+% because there is a Type 1 version of the Polish CMR fonts
+% and these guillemets are very close in shape to the EC guillemets.
+% There are no free Type 1 versions of the EC fonts.
+%
+% Support for Polish CMR guillemets was kindly provided by 
+% Rolf Niepraschk <niepraschk@ptb.de> in version 0.99 (2000/05/22).
+% Bernd Raichle provided extensive simplifications to the code
+% for version 1.00.
+%
+% This package is released under the LPPL.
+%
+% Changes:
+%   Date        version
+%   2001/04/12  1.01    the frenchle and french package are now distinguished.
+%
+\def\fileversion{1.01}
+\def\filedate{2001/04/12}
+\NeedsTeXFormat{LaTeX2e}
+%\ProvidesPackage{aeguill}[2001/04/12 1.01 %    % GM
+%AE fonts with french guillemets (D. Roegel)]   % GM - keeping the log file clean(er)
+%\RequirePackage{ae}  % GM May 2007 - already embedded here
+
+\newcommand{\@ae@switch}[4]{#4}
+\DeclareOption{ec}{\renewcommand\@ae@switch[4]{#1}}
+\DeclareOption{cm}{\renewcommand\@ae@switch[4]{#2}}
+\DeclareOption{cyr}{\renewcommand\@ae@switch[4]{#3}}
+\DeclareOption{pl}{\renewcommand\@ae@switch[4]{#4}}
+
+
+%
+% Load necessary packages
+%
+\@ae@switch{% ec
+  % do nothing
+}{% cm
+  \RequirePackage{latexsym}%  GM - May 2007 - already 'mentioned as required' up above
+}{% cyr
+  \RequirePackage[OT2,T1]{fontenc}%
+}{% pl
+  \RequirePackage[OT4,T1]{fontenc}%
+}
+
+% The following command will be compared to \frenchname,
+% as defined in french.sty and frenchle.sty.
+\def\aeguillfrenchdefault{french}%
+
+\let\guill@verbatim@font\verbatim@font
+\def\verbatim@font{\guill@verbatim@font\ecguills{cmtt}%
+                   \let\guillemotleft\@oguills\let\guillemotright\@fguills}
+
+\begingroup \catcode`\<=13 \catcode`\>=13
+\def\x{\endgroup
+ \def\ae@lfguill{<<}%
+ \def\ae@rfguill{>>}%
+}\x
+
+\newcommand{\ecguills}[1]{%
+  \def\selectguillfont{\fontencoding{T1}\fontfamily{#1}\selectfont}%
+  \def\@oguills{{\selectguillfont\symbol{19}}}%
+  \def\@fguills{{\selectguillfont\symbol{20}}}%
+  } 
+
+\newcommand{\aeguills}{%
+  \ae@guills
+  % We redefine \guillemotleft and \guillemotright
+  % in order to catch them when they are used 
+  % with \DeclareInputText (in latin1.def for instance)
+  % We use \auxWARNINGi as a safe indicator that french.sty is used.
+  \gdef\guillemotleft{\ifx\auxWARNINGi\undefined
+                         \@oguills % neither french.sty nor frenchle.sty
+                      \else
+                         \ifx\aeguillfrenchdefault\frenchname
+                           \ae@lfguill  % french.sty
+                         \else
+                           \@oguills    % frenchle.sty
+                         \fi
+                      \fi}%
+  \gdef\guillemotright{\ifx\auxWARNINGi\undefined
+                         \@fguills % neither french.sty nor frenchle.sty
+                       \else
+                         \ifx\aeguillfrenchdefault\frenchname
+                           \ae@rfguill  % french.sty
+                         \else
+                           \@fguills    % frenchle.sty
+                         \fi
+                       \fi}%
+  }
+
+%
+% Depending on the class option
+% define the internal command \ae@guills
+\@ae@switch{% ec
+  \newcommand{\ae@guills}{%
+    \ecguills{cmr}}%
+}{% cm
+  \newcommand{\ae@guills}{%
+    \def\selectguillfont{\fontencoding{U}\fontfamily{lasy}%
+            \fontseries{m}\fontshape{n}\selectfont}%
+    \def\@oguills{\leavevmode\nobreak
+                \hbox{\selectguillfont (\kern-.20em(\kern.20em}\nobreak}%
+    \def\@fguills{\leavevmode\nobreak
+                \hbox{\selectguillfont \kern.20em)\kern-.2em)}%
+                \ifdim\fontdimen\@ne\font>\z@\/\fi}}%
+}{% cyr
+  \newcommand{\ae@guills}{%
+    \def\selectguillfont{\fontencoding{OT2}\fontfamily{wncyr}\selectfont}%
+    \def\@oguills{{\selectguillfont\symbol{60}}}%
+    \def\@fguills{{\selectguillfont\symbol{62}}}}
+}{% pl
+  \newcommand{\ae@guills}{%
+    \def\selectguillfont{\fontencoding{OT4}\fontfamily{cmr}\selectfont}%
+    \def\@oguills{{\selectguillfont\symbol{174}}}%
+    \def\@fguills{{\selectguillfont\symbol{175}}}}
+}
+
+
+\AtBeginDocument{%
+  \ifx\GOfrench\undefined
+    \aeguills
+  \else
+    \let\aeguill@GOfrench\GOfrench
+    \gdef\GOfrench{\aeguill@GOfrench \aeguills}%
+  \fi
+  }
+
+\endinput
+%
+
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/docs/WAC2016/waccopyright.sty	Mon Nov 23 09:13:12 2015 +0000
@@ -0,0 +1,228 @@
+%%
+%% This is file `acmcopyright.sty',
+%% generated with the docstrip utility.
+%%
+%% The original source files were:
+%%
+%% acmcopyright.dtx  (with options: `style')
+%% 
+%% IMPORTANT NOTICE:
+%% 
+%% For the copyright see the source file.
+%% 
+%% Any modified versions of this file must be renamed
+%% with new filenames distinct from acmcopyright.sty.
+%% 
+%% For distribution of the original source see the terms
+%% for copying and modification in the file acmcopyright.dtx.
+%% 
+%% This generated file may be distributed as long as the
+%% original source files, as listed above, are part of the
+%% same distribution. (The sources need not necessarily be
+%% in the same archive or directory.)
+%% \CharacterTable
+%%  {Upper-case    \A\B\C\D\E\F\G\H\I\J\K\L\M\N\O\P\Q\R\S\T\U\V\W\X\Y\Z
+%%   Lower-case    \a\b\c\d\e\f\g\h\i\j\k\l\m\n\o\p\q\r\s\t\u\v\w\x\y\z
+%%   Digits        \0\1\2\3\4\5\6\7\8\9
+%%   Exclamation   \!     Double quote  \"     Hash (number) \#
+%%   Dollar        \$     Percent       \%     Ampersand     \&
+%%   Acute accent  \'     Left paren    \(     Right paren   \)
+%%   Asterisk      \*     Plus          \+     Comma         \,
+%%   Minus         \-     Point         \.     Solidus       \/
+%%   Colon         \:     Semicolon     \;     Less than     \<
+%%   Equals        \=     Greater than  \>     Question mark \?
+%%   Commercial at \@     Left bracket  \[     Backslash     \\
+%%   Right bracket \]     Circumflex    \^     Underscore    \_
+%%   Grave accent  \`     Left brace    \{     Vertical bar  \|
+%%   Right brace   \}     Tilde         \~}
+\NeedsTeXFormat{LaTeX2e}
+\ProvidesPackage{waccopyright}
+[2014/06/29 v1.2 Copyright statements for ACM classes]
+\newif\if@printcopyright
+\@printcopyrighttrue
+\newif\if@printpermission
+\@printpermissiontrue
+\newif\if@acmowned
+\@acmownedtrue
+\RequirePackage{xkeyval}
+\define@choicekey*{ACM@}{acmcopyrightmode}[%
+  \acm@copyrightinput\acm@copyrightmode]{none,acmcopyright,acmlicensed,%
+  rightsretained,usgov,usgovmixed,cagov,cagovmixed,%
+  licensedusgovmixed,licensedcagovmixed,othergov,licensedothergov,waclicense}{%
+  \@printpermissiontrue
+  \@printcopyrighttrue
+  \@acmownedtrue
+  \ifnum\acm@copyrightmode=0\relax % none
+   \@printpermissionfalse
+   \@printcopyrightfalse
+   \@acmownedfalse
+  \fi
+  \ifnum\acm@copyrightmode=2\relax % acmlicensed
+   \@acmownedfalse
+  \fi
+  \ifnum\acm@copyrightmode=3\relax % rightsretained
+   \@acmownedfalse
+  \fi
+  \ifnum\acm@copyrightmode=4\relax % usgov
+   \@printpermissiontrue
+   \@printcopyrightfalse
+   \@acmownedfalse
+  \fi
+  \ifnum\acm@copyrightmode=6\relax % cagov
+   \@acmownedfalse
+  \fi
+  \ifnum\acm@copyrightmode=8\relax % licensedusgovmixed
+   \@acmownedfalse
+  \fi
+  \ifnum\acm@copyrightmode=9\relax % licensedcagovmixed
+   \@acmownedfalse
+  \fi
+  \ifnum\acm@copyrightmode=10\relax % othergov
+   \@acmownedtrue
+  \fi
+  \ifnum\acm@copyrightmode=11\relax % licensedothergov
+   \@acmownedfalse
+   \@printcopyrightfalse
+  \fi
+  \ifnum\acm@copyrightmode=12\relax % waclicense
+   \@acmownedfalse
+  \fi}
+\def\setcopyright#1{\setkeys{ACM@}{acmcopyrightmode=#1}}
+\setcopyright{acmcopyright}
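+% Usage sketch (editorial note, not part of the generated package): a paper
+% built with this class can override the default mode in its preamble, e.g.
+%   \setcopyright{waclicense}
+% which prints the Creative Commons notice defined in the 'waclicense' case below.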
+\def\@copyrightowner{%
+  \ifcase\acm@copyrightmode\relax % none
+  \or % acmcopyright
+  ACM.
+  \or % acmlicensed
+  Copyright held by the owner/author(s). Publication rights licensed to
+  ACM.
+  \or % rightsretained
+  Copyright held by the owner/author(s).
+  \or % usgov
+  \or % usgovmixed
+  ACM.
+  \or % cagov
+  Crown in Right of Canada.
+  \or %cagovmixed
+  ACM.
+  \or %licensedusgovmixed
+  Copyright held by the owner/author(s). Publication rights licensed to
+  ACM.
+  \or %licensedcagovmixed
+  Copyright held by the owner/author(s). Publication rights licensed to
+  ACM.
+  \or % othergov
+  ACM.
+  \or % licensedothergov
+  \or % waclicense
+  Copyright held by the owner/author(s).
+  \fi}
+\def\@copyrightpermission{%
+  \ifcase\acm@copyrightmode\relax % none
+  \or % acmcopyright
+   Permission to make digital or hard copies of all or part of this
+   work for personal or classroom use is granted without fee provided
+   that copies are not made or distributed for profit or commercial
+   advantage and that copies bear this notice and the full citation on
+   the first page. Copyrights for components of this work owned by
+   others than ACM must be honored. Abstracting with credit is
+   permitted. To copy otherwise, or republish, to post on servers or to
+   redistribute to lists, requires prior specific permission
+   and\hspace*{.5pt}/or  a fee. Request permissions from
+   permissions@acm.org.
+  \or % acmlicensed
+   Permission to make digital or hard copies of all or part of this
+   work for personal or classroom use is granted without fee provided
+   that copies are not made or distributed for profit or commercial
+   advantage and that copies bear this notice and the full citation on
+   the first page. Copyrights for components of this work owned by
+   others than the author(s) must be honored. Abstracting with credit
+   is permitted.  To copy otherwise, or republish, to post on servers
+   or to  redistribute to lists, requires prior specific permission
+   and\hspace*{.5pt}/or  a fee. Request permissions from
+   permissions@acm.org.
+  \or % rightsretained
+   Permission to make digital or hard copies of part or all of this work
+   for personal or classroom use is granted without fee provided that
+   copies are not made or distributed for profit or commercial advantage
+   and that copies bear this notice and the full citation on the first
+   page. Copyrights for third-party components of this work must be
+   honored. For all other uses, contact the
+   owner\hspace*{.5pt}/author(s).
+  \or % usgov
+   This paper is authored by an employee(s) of the United States
+   Government and is in the public domain. Non-exclusive copying or
+   redistribution is allowed, provided that the article citation is
+   given and the authors and agency are clearly identified as its
+   source.
+  \or % usgovmixed
+   ACM acknowledges that this contribution was authored or co-authored
+   by an employee, or contractor of the national government. As such,
+   the Government retains a nonexclusive, royalty-free right to
+   publish or reproduce this article, or to allow others to do so, for
+   Government purposes only. Permission to make digital or hard copies
+   for personal or classroom use is granted. Copies must bear this
+   notice and the full citation on the first page. Copyrights for
+   components of this work owned by others than ACM must be
+   honored. To copy otherwise, distribute, republish, or post,
+   requires prior specific permission and\hspace*{.5pt}/or a
+   fee. Request permissions from permissions@acm.org.
+  \or % cagov
+   This article was authored by employees of the Government of Canada.
+   As such, the Canadian government retains all interest in the
+   copyright to this work and grants to ACM a nonexclusive,
+   royalty-free right to publish or reproduce this article, or to allow
+   others to do so, provided that clear attribution is given both to
+   the authors and the Canadian government agency employing them.
+   Permission to make digital or hard copies for personal or classroom
+   use is granted. Copies must bear this notice and the full citation
+   on the first page.  Copyrights for components of this work owned by
+   others than the Canadian Government must be honored. To copy
+   otherwise, distribute, republish, or post, requires prior specific
+   permission and\hspace*{.5pt}/or a fee. Request permissions from
+   permissions@acm.org.
+  \or % cagovmixed
+   ACM acknowledges that this contribution was co-authored by an
+   affiliate of the national government of Canada. As such, the Crown
+   in Right of Canada retains an equal interest in the copyright.
+   Reprints must include clear attribution to ACM and the author's
+   government agency affiliation.  Permission to make digital or hard
+   copies for personal or classroom use is granted.  Copies must bear
+   this notice and the full citation on the first page. Copyrights for
+   components of this work owned by others than ACM must be honored.
+   To copy otherwise, distribute, republish, or post, requires prior
+   specific permission and\hspace*{.5pt}/or a fee. Request permissions
+   from permissions@acm.org.
+  \or % licensedusgovmixed
+   Publication rights licensed to ACM. ACM acknowledges that this
+   contribution was authored or co-authored by an employee, contractor
+   or affiliate of the United States government. As such, the
+   Government retains a nonexclusive, royalty-free right to publish or
+   reproduce this article, or to allow others to do so, for Government
+   purposes only.
+  \or % licensedcagovmixed
+   Publication rights licensed to ACM. ACM acknowledges that this
+   contribution was authored or co-authored by an employee, contractor
+   or affiliate of the national government of Canada. As such, the
+   Government retains a nonexclusive, royalty-free right to publish or
+   reproduce this article, or to allow others to do so, for Government
+   purposes only.
+  \or % othergov
+   ACM acknowledges that this contribution was authored or co-authored
+   by an employee, contractor or affiliate of a national government. As
+   such, the Government retains a nonexclusive, royalty-free right to
+   publish or reproduce this article, or to allow others to do so, for
+   Government purposes only.
+  \or % licensedothergov
+   Publication rights licensed to ACM. ACM acknowledges that this
+   contribution was authored or co-authored by an employee, contractor
+   or affiliate of a national government. As such, the Government
+   retains a nonexclusive, royalty-free right to publish or reproduce
+   this article, or to allow others to do so, for Government purposes
+   only.
+  \or % waclicense
+   \includegraphics[scale=.39]{cc}\\ Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Attribution: owner/author(s).
+  \fi}
+\endinput
+%%
+%% End of file `acmcopyright.sty'.
--- a/pythonServer.py	Tue Oct 13 10:20:04 2015 +0100
+++ b/pythonServer.py	Mon Nov 23 09:13:12 2015 +0000
@@ -70,7 +70,7 @@
 	self.send_response(200)
 	self.send_header("Content-type", "text/xml")
 	self.end_headers()
-	self.wfile.write('<response><state>OK</state><file>saves/'+curFileName+'</file></response>')
+	self.wfile.write('<response state="OK"><message>OK</message><file>"saves/'+curFileName+'"</file></response>')
 
 class MyHandler(BaseHTTPServer.BaseHTTPRequestHandler):
 	def do_HEAD(s):
--- a/save.php	Tue Oct 13 10:20:04 2015 +0100
+++ b/save.php	Mon Nov 23 09:13:12 2015 +0000
@@ -1,9 +1,28 @@
 <?php
-	head('Access-Control-Allow-Origin: *');
+	header('Access-Control-Allow-Origin: *');
+	header("Content-type: text/xml");
 	$postText = file_get_contents('php://input');
 	$datetime = date('ymdHis');
 	$xmlfile = "save".$datetime.".xml";
 	$fileHandle = fopen("saves/".$xmlfile, 'w');
-	fwrite($fileHandle, $postText);
+	if ($fileHandle === FALSE)
+	{
+		// Filehandle failed
+		$xml = '<response state="error"><message>Could not open file</message></response>';
+		echo $xml;
+		return;
+	}
+	$wbytes = fwrite($fileHandle, $postText);
+	if ($wbytes === FALSE) // strict comparison: writing 0 bytes is not a failure
+	{
+		// FileWrite failed
+		$xml = '<response state="error"><message>Could not write file "saves/'.$xmlfile.'"</message></response>';
+		echo $xml;
+		return;
+	}
 	fclose($fileHandle);
+	
+	// Return XML confirmation data
+	$xml = '<response state="OK"><message>OK</message><file bytes="'.$wbytes.'">"saves/'.$xmlfile.'"</file></response>';
+	echo $xml;
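+	// For reference (editorial sketch; timestamp and byte count are illustrative), a successful
+	// request returns XML of the form:
+	// <response state="OK"><message>OK</message><file bytes="1234">"saves/save151123091312.xml"</file></response>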
 ?>
\ No newline at end of file
--- a/scripts/comment_parser.py	Tue Oct 13 10:20:04 2015 +0100
+++ b/scripts/comment_parser.py	Mon Nov 23 09:13:12 2015 +0000
@@ -4,9 +4,35 @@
 import xml.etree.ElementTree as ET
 import os
 import csv
+import sys
 
-# XML results files location (modify as needed):
-folder_name = "../saves"    # Looks in 'saves/' folder from 'scripts/' folder
+# COMMAND LINE ARGUMENTS
+
+assert len(sys.argv)<3, "comment_parser takes at most 1 command line argument\n"+\
+                        "Use: python comment_parser.py [XML_files_location]"
+
+# XML results files location
+if len(sys.argv) == 1:
+    folder_name = "../saves"    # Looks in 'saves/' folder from 'scripts/' folder
+    print "Use: python  comment_parser.py [XML_files_location]"
+    print "Using default path: " + folder_name
+elif len(sys.argv) == 2:
+    folder_name = sys.argv[1]   # First command line argument is folder
+
+# check if folder_name exists
+if not os.path.exists(folder_name):
+    # the folder is not there
+    print "Folder '"+folder_name+"' does not exist."
+    sys.exit() # terminate script execution
+elif not os.access(folder_name, os.W_OK):
+    # the folder exists but write privileges are not given
+    print "No write privileges in folder '"+folder_name+"'."
+
+
+# CODE
+
+# remember which files have been opened this time
+file_history = []
 
 # get every XML file in folder
 for file in os.listdir(folder_name): 
@@ -33,23 +59,28 @@
                     
                     csv_name = folder_name +'/' + page_name+'/'+page_name+'-comments-'+audio_id+'.csv'
 
-                    # append (!) to file [page_name]/[page_name]-comments-[id].csv
-                    with open(csv_name, 'a') as csvfile:
-                        writer = csv.writer(csvfile, 
-                                            delimiter=',', 
-                                            dialect="excel",
-                                            quoting=csv.QUOTE_ALL)
-                        commentstr = audioelement.find("./comment/response").text
+                    # If the file hasn't been written to yet during this run, open in write mode to start it afresh
+                    if csv_name not in file_history:
+                        csvfile = open(csv_name, 'w')
+                        file_history.append(csv_name) # remember this file has been written to this time around
+                    else: 
+                        # append (!) to file [page_name]/[page_name]-comments-[id].csv
+                        csvfile = open(csv_name, 'a')
+                    writer = csv.writer(csvfile, 
+                                        delimiter=',', 
+                                        dialect="excel",
+                                        quoting=csv.QUOTE_ALL)
+                    commentstr = audioelement.find("./comment/response").text
                         
-                        if commentstr is None:
-                           commentstr = '';
-                            
-                        # anonymous comments:
-                        #writer.writerow([commentstr.encode("utf-8")]) 
-                        # comments with (file) name:
-                        writer.writerow([file[:-4]] + [commentstr.encode("utf-8")]) 
+                    if commentstr is None:
+                        commentstr = ''
+                        
+                    # anonymous comments:
+                    #writer.writerow([commentstr.encode("utf-8")]) 
+                    # comments with (file) name:
+                    writer.writerow([file[:-4]] + [commentstr.encode("utf-8")]) 
 
-                        #TODO Replace 'new line' in comment with something else?
+                    #TODO Replace 'new line' in comment with something else?
                         
 # PRO TIP: Change from csv to txt by running this in bash: 
 # $ cd folder_where_csvs_are/
--- a/scripts/evaluation_stats.py	Tue Oct 13 10:20:04 2015 +0100
+++ b/scripts/evaluation_stats.py	Mon Nov 23 09:13:12 2015 +0000
@@ -4,9 +4,19 @@
 import xml.etree.ElementTree as ET
 import os       # for getting files from directory
 import operator # for sorting data with multiple keys
+import sys      # for accessing command line arguments
 
-# XML results files location (modify as needed):
-folder_name = "../saves"    # Looks in 'saves/' folder from 'scripts/' folder
+# Command line arguments
+assert len(sys.argv)<3, "evaluation_stats takes at most 1 command line argument\n"+\
+                        "Use: python evaluation_stats.py [results_folder]"
+
+# XML results files location
+if len(sys.argv) == 1:
+    folder_name = "../saves"    # Looks in 'saves/' folder from 'scripts/' folder
+    print "Use: python evaluation_stats.py [results_folder]"
+    print "Using default path: " + folder_name
+elif len(sys.argv) == 2:
+    folder_name = sys.argv[1]   # First command line argument is folder
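+
+# Example invocation (editorial sketch; the path is illustrative):
+#   python evaluation_stats.py ../saves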
 
 # Turn number of seconds (int) to '[minutes] min [seconds] s' (string)
 def seconds2timestr(time_in_seconds):
@@ -19,6 +29,8 @@
 number_of_pages      = 0
 number_of_fragments  = 0
 total_empty_comments = 0
+total_not_played     = 0
+total_not_moved      = 0
 time_per_page_accum  = 0
 
 # arrays initialisation
@@ -52,6 +64,14 @@
             
             number_of_comments = 0 # for this page
             number_of_missing_comments = 0 # for this page
+            not_played = 0 # for this page
+            not_moved = 0 # for this page
+            
+            # 'testTime' keeps total duration: subtract time so far for duration of this audioholder
+            duration = float(audioholder.find("./metric/metricresult[@id='testTime']").text) - total_duration
+            
+            # total duration of test
+            total_duration += duration
             
             # number of audio elements
             audioelements = audioholder.findall("./audioelement") # get audioelements
@@ -60,24 +80,39 @@
             # number of comments (interesting if comments not mandatory)
             for audioelement in audioelements:
                 response = audioelement.find("./comment/response")
+                was_played = audioelement.find("./metric/metricresult/[@name='elementFlagListenedTo']")
+                was_moved = audioelement.find("./metric/metricresult/[@name='elementFlagMoved']")
                 if response.text is not None and len(response.text) > 1: 
                     number_of_comments += 1
                 else: 
                     number_of_missing_comments += 1
-                    
+                if was_played is not None and was_played.text == 'false': 
+                    not_played += 1
+                if was_moved is not None and was_moved.text == 'false': 
+                    not_moved += 1
+            
+            # update global counters
             total_empty_comments += number_of_missing_comments
-            
-            # 'testTime' keeps total duration: subtract time so far for duration of this audioholder
-            duration = float(audioholder.find("./metric/metricresult[@id='testTime']").text) - total_duration
-            
-            # total duration of test
-            total_duration += duration
+            total_not_played += not_played
+            total_not_moved += not_moved
             
             # print audioholder id and duration
             print "    " + page_name + ": " + seconds2timestr(duration) + ", "\
                   + str(number_of_comments)+"/"\
                   +str(number_of_comments+number_of_missing_comments)+" comments"
             
+            # number of audio elements not played
+            if not_played > 1:
+                print 'ATTENTION: '+str(not_played)+' fragments were not listened to!'
+            if not_played == 1: 
+                print 'ATTENTION: one fragment was not listened to!'
+            
+            # number of audio element markers not moved
+            if not_moved > 1:
+                print 'ATTENTION: '+str(not_moved)+' markers were not moved!'
+            if not_moved == 1: 
+                print 'ATTENTION: one marker was not moved!'
+            
             # keep track of duration in function of page index
             if len(duration_order)>page_number:
                 duration_order[page_number].append(duration)
@@ -124,8 +159,15 @@
 print "Number of XML files: " + str(number_of_XML_files)
 print "Number of pages: " + str(number_of_pages)
 print "Number of fragments: " + str(number_of_fragments)
-print "Number of empty comments: " + str(total_empty_comments)
+print "Number of empty comments: " + str(total_empty_comments) +\
+      " (" + str(round(100.0*total_empty_comments/number_of_fragments,2)) + "%)"
+print "Number of unplayed fragments: " + str(total_not_played) +\
+      " (" + str(round(100.0*total_not_played/number_of_fragments,2)) + "%)"
+print "Number of unmoved markers: " + str(total_not_moved) +\
+      " (" + str(round(100.0*total_not_moved/number_of_fragments,2)) + "%)"
 print "Average time per page: " + seconds2timestr(time_per_page_accum/number_of_pages)
+
+# Pages and number of times tested
 page_count_strings = list(str(x) for x in page_count)
 count_list = page_names + page_count_strings
 count_list[::2] = page_names
@@ -133,8 +175,9 @@
 print "Pages tested: " + str(count_list)
 
 # Average duration for first, second, ... page
+print "Average duration per page:"
 for page_number in range(len(duration_order)): 
-    print "Average duration page " + str(page_number+1) + ": " +\
+    print "        page " + str(page_number+1) + ": " +\
         seconds2timestr(sum(duration_order[page_number])/len(duration_order[page_number])) +\
             " ("+str(len(duration_order[page_number]))+" subjects)"
 
@@ -153,8 +196,9 @@
 combined_list = sorted(zip(*combined_list), key=operator.itemgetter(1, 2)) # sort
 
 # Show average duration for all songs
+print "Average duration per audioholder:"
 for page_index in range(len(page_names)):
-    print "Average duration audioholder " + combined_list[page_index][0] + ": " \
+    print "        "+combined_list[page_index][0] + ": " \
           + seconds2timestr(combined_list[page_index][1]) \
           + " (" + str(combined_list[page_index][3]) + " subjects, " \
           + str(combined_list[page_index][2]) + " fragments)"
@@ -168,3 +212,5 @@
 # show 'count' per page (in order)
 
 # clear up page_index <> page_count <> page_number confusion
+
+# LaTeX -> PDF print out
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/scripts/generate_report.py	Mon Nov 23 09:13:12 2015 +0000
@@ -0,0 +1,525 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+import xml.etree.ElementTree as ET
+import os       # for getting files from directory
+import operator # for sorting data with multiple keys
+import sys      # for accessing command line arguments
+import subprocess # for calling pdflatex
+import shlex # for calling pdflatex
+import matplotlib.pyplot as plt # plots
+import numpy as np # numbers
+
+# Command line arguments
+assert len(sys.argv)<4, "generate_report takes at most 2 command line arguments\n"+\
+                        "Use: python generate_report.py [results_folder] [no_render | -nr]"
+
+render_figures = True
+
+# XML results files location
+if len(sys.argv) == 1:
+    folder_name = "../saves"    # Looks in 'saves/' folder from 'scripts/' folder
+    print "Use: python generate_report.py [results_folder] [no_render | -nr]"
+    print "Using default path: " + folder_name
+elif len(sys.argv) == 2:
+    folder_name = sys.argv[1]   # First command line argument is folder
+elif len(sys.argv) == 3:
+    folder_name = sys.argv[1]   # First command line argument is folder
+    assert sys.argv[2] in ('no_render','-nr'), "Second argument not recognised. \n" +\
+           "Use: python generate_report.py [results_folder] [no_render | -nr]"
+    # Second command line argument is [no_render | -nr]
+    render_figures = False
+
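+# Example invocations (editorial sketch; paths are illustrative):
+#   python generate_report.py ../saves        # regenerate figures, then compile Report.pdf
+#   python generate_report.py ../saves -nr    # skip figure rendering, reuse existing plots
+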
+def isNaN(num):
+    return num != num
+
+# Turn number of seconds (int) to '[minutes] min [seconds] s' (string)
+def seconds2timestr(time_in_seconds):
+    if time_in_seconds is not None and not isNaN(time_in_seconds): 
+        time_in_minutes = int(time_in_seconds/60)
+        remaining_seconds = int(time_in_seconds%60)
+        return str(time_in_minutes) + " min " + str(remaining_seconds) + " s"
+    else:
+        return 'N/A'
+
+# stats initialisation
+number_of_XML_files  = 0
+number_of_pages      = 0
+number_of_fragments  = 0
+total_empty_comments = 0
+total_not_played     = 0
+total_not_moved      = 0
+time_per_page_accum  = 0
+
+# arrays initialisation
+page_names       = []
+real_page_names  = [] # regardless of differing numbers of fragments
+subject_count    = [] # subjects per audioholder name
+page_count       = []
+duration_page    = []      # duration of experiment in function of page content
+duration_order   = []      # duration of experiment in function of page number
+fragments_per_page = []    # number of fragments for corresponding page
+
+# survey stats
+gender = []
+age    = []
+
+# get username if available
+for name in ('LOGNAME', 'USER', 'LNAME', 'USERNAME'):
+    user = os.environ.get(name)
+    if user:
+        break
+    else:
+        user = ''
+
+
+# begin LaTeX document
+header = r'''\documentclass[11pt, oneside]{article} 
+          \usepackage{geometry}
+          \geometry{a4paper}
+          \usepackage[parfill]{parskip} % empty line instead of indent
+          \usepackage{graphicx}    % figures
+          \usepackage{hyperref}
+          \usepackage{tikz}           % pie charts
+          \title{Report}
+          \author{'''+\
+          user+\
+          r'''}
+          \graphicspath{{'''+\
+          folder_name+\
+          r'''/}}
+          %\setcounter{section}{-1} % Summary section 0 so number of sections equals number of files
+          \begin{document}
+          \maketitle
+          This is an automatically generated report using the `generate\_report.py' Python script 
+          included with the Web Audio Evaluation Tool \cite{WAET} distribution which can be found 
+          at \texttt{code.soundsoftware.ac.uk/projects/webaudioevaluationtool}.
+          \tableofcontents
+          
+          '''
+          
+footer = '\n\t\t'+r'''\begin{thebibliography}{9}
+         \bibitem{WAET} % reference to accompanying publication
+        Nicholas Jillings, Brecht De Man, David Moffat and Joshua D. Reiss, 
+        ``Web Audio Evaluation Tool: A browser-based listening test environment,'' 
+        presented at the 12th Sound and Music Computing Conference, July 2015.
+        \end{thebibliography}
+        \end{document}'''
+
+body = ''
+
+# generate images for later use
+if render_figures:
+    subprocess.call("python timeline_view_movement.py "+folder_name, shell=True)
+    subprocess.call("python score_parser.py "+folder_name, shell=True)
+    subprocess.call("python score_plot.py "+folder_name, shell=True)
+
+# get every XML file in folder
+files_list = os.listdir(folder_name)
+for file in files_list: # iterate over all files in files_list
+    if file.endswith(".xml"): # check if XML file
+        number_of_XML_files += 1
+        tree = ET.parse(folder_name + '/' + file)
+        root = tree.getroot()
+        
+        # PRINT name as section
+        body+= '\n\section{'+file[:-4].capitalize()+'}\n' # make section header from name without extension
+        
+        # reset for new subject
+        total_duration = 0
+        page_number = 0
+        
+        individual_table = '\n' # table with stats for this individual test file
+        timeline_plots = '' # plots of timeline (movements and plays)
+        
+        # DEMO survey stats
+        # get gender
+        this_subjects_gender = root.find("./posttest/radio/[@id='gender']")
+        if this_subjects_gender is not None:
+            gender.append(this_subjects_gender.get("name"))
+        else:
+            gender.append('UNAVAILABLE')
+        # get age
+        this_subjects_age = root.find("./posttest/number/[@id='age']")
+        if this_subjects_age is not None:
+            age.append(this_subjects_age.text)
+        #TODO add plot of age
+                
+        # get list of all page names
+        for audioholder in root.findall("./audioholder"):   # iterate over pages
+            page_name = audioholder.get('id')               # get page name
+            
+            if page_name is None: # ignore 'empty' audio_holders
+                print "WARNING: " + file + " contains empty audio holder. (evaluation_stats.py)"
+                continue # move on to next audioholder
+            
+            number_of_comments = 0 # for this page
+            number_of_missing_comments = 0 # for this page
+            not_played = [] # for this page
+            not_moved = [] # for this page
+            
+            if audioholder.find("./metric/metricresult[@id='testTime']") is not None: # check if time is included
+                # 'testTime' keeps total duration: subtract time so far for duration of this audioholder
+                duration = float(audioholder.find("./metric/metricresult[@id='testTime']").text) - total_duration
+            
+                # total duration of test
+                total_duration += duration
+            else: 
+                duration = float('nan')
+                total_duration = float('nan')
+            
+            # number of audio elements
+            audioelements = audioholder.findall("./audioelement") # get audioelements
+            number_of_fragments += len(audioelements) # add length of this list to total
+            
+            # number of comments (interesting if comments not mandatory)
+            for audioelement in audioelements:
+                response = audioelement.find("./comment/response")
+                was_played = audioelement.find("./metric/metricresult/[@name='elementFlagListenedTo']")
+                was_moved = audioelement.find("./metric/metricresult/[@name='elementFlagMoved']")
+                if response.text is not None and len(response.text) > 1: 
+                    number_of_comments += 1
+                else: 
+                    number_of_missing_comments += 1
+                if was_played is not None and was_played.text == 'false': 
+                    not_played.append(audioelement.get('id'))
+                if was_moved is not None and was_moved.text == 'false': 
+                    not_moved.append(audioelement.get('id'))
+            
+            # update global counters
+            total_empty_comments += number_of_missing_comments
+            total_not_played += len(not_played)
+            total_not_moved += len(not_moved)
+            
+            # PRINT alerts when elements not played or markers not moved
+            # number of audio elements not played
+            if len(not_played) > 1:
+                body += '\t\t\\emph{\\textbf{ATTENTION: '+str(len(not_played))+\
+                        ' fragments were not listened to in '+page_name+'! }}'+\
+                        ', '.join(not_played)+'\\\\ \n'
+            if len(not_played) == 1: 
+                body += '\t\t\\emph{\\textbf{ATTENTION: one fragment was not listened to in '+page_name+'! }}'+\
+                        not_played[0]+'\\\\ \n'
+            
+            # number of audio element markers not moved
+            if len(not_moved) > 1:
+                body += '\t\t\\emph{\\textbf{ATTENTION: '+str(len(not_moved))+\
+                        ' markers were not moved in '+page_name+'! }}'+\
+                        ', '.join(not_moved)+'\\\\ \n'
+            if len(not_moved) == 1: 
+                body += '\t\t\\emph{\\textbf{ATTENTION: one marker was not moved in '+page_name+'! }}'+\
+                        not_moved[0]+'\\\\ \n'
+            
+            # PRINT song-specific statistic
+            individual_table += '\t\t'+page_name+'&'+\
+                                str(number_of_comments) + '/' +\
+                                str(number_of_comments+number_of_missing_comments)+'&'+\
+                                seconds2timestr(duration)+'\\\\\n'
+            
+            # get timeline for this audioholder
+            img_path = 'timelines_movement/'+file[:-4]+'-'+page_name+'.pdf'
+            
+            # check if available
+            if os.path.isfile(folder_name+'/'+img_path):
+                # SHOW timeline image
+                timeline_plots += '\\includegraphics[width=\\textwidth]{'+\
+                         folder_name+'/'+img_path+'}\n\t\t'
+            
+            # keep track of duration in function of page index
+            if len(duration_order)>page_number:
+                duration_order[page_number].append(duration)
+            else:
+                duration_order.append([duration])
+            
+            # keep list of audioholder ids and count how many times each audioholder id
+            # was tested, how long it took, and how many fragments there were 
+            # (if number of fragments is different, store as different audioholder id)
+            if page_name in page_names: 
+                page_index = page_names.index(page_name) # get index
+                # check if number of audioelements the same
+                if len(audioelements) == fragments_per_page[page_index]: 
+                    page_count[page_index] += 1
+                    duration_page[page_index].append(duration)
+                else: # make new entry
+                    alt_page_name = page_name+"("+str(len(audioelements))+")"
+                    if alt_page_name in page_names: # if already there
+                        alt_page_index = page_names.index(alt_page_name) # get index
+                        page_count[alt_page_index] += 1
+                        duration_page[alt_page_index].append(duration)
+                    else: 
+                        page_names.append(alt_page_name)
+                        page_count.append(1)
+                        duration_page.append([duration])
+                        fragments_per_page.append(len(audioelements))
+            else: 
+                page_names.append(page_name)
+                page_count.append(1)
+                duration_page.append([duration])
+                fragments_per_page.append(len(audioelements))
+            
+            # number of subjects per audioholder regardless of differing numbers of 
+            # fragments (for inclusion in box plots)
+            if page_name in real_page_names:
+                page_index = real_page_names.index(page_name) # get index
+                subject_count[page_index] += 1
+            else: 
+                real_page_names.append(page_name)
+                subject_count.append(1)
+            
+            # bookkeeping
+            page_number += 1 # increase page count for this specific test
+            number_of_pages += 1 # increase total number of pages
+            time_per_page_accum += duration # total duration (for average time spent per page)
+
+        # PRINT table with statistics about this test
+        body += '\t\t'+r'''\begin{tabular}{|p{3.5cm}|c|p{2.5cm}|}
+                 \hline
+                 \textbf{Song name} & \textbf{Comments} & \textbf{Duration} \\ \hline '''+\
+                 individual_table+'\t\t'+\
+                 r'''\hline
+                  \textbf{TOTAL} & & \textbf{'''+\
+                  seconds2timestr(total_duration)+\
+                 r'''}\\
+                  \hline 
+                  \end{tabular}
+                  
+                  '''
+        # PRINT timeline plots
+        body += timeline_plots
+
+# join to footer
+footer = body + footer
+
+# empty body again
+body = ''
+
+# PRINT summary of everything (at start) 
+#       unnumbered so that number of sections equals number of files
+body += '\section*{Summary}\n\t\t\\addcontentsline{toc}{section}{Summary}\n'
+
+# PRINT table with statistics
+body += '\t\t\\begin{tabular}{ll}\n\t\t\t'
+body += r'Number of XML files: &' + str(number_of_XML_files) + r'\\'+'\n\t\t\t'
+body += r'Number of pages: &' + str(number_of_pages) + r'\\'+'\n\t\t\t'
+body += r'Number of fragments: &' + str(number_of_fragments) + r'\\'+'\n\t\t\t'
+body += r'Number of empty comments: &' + str(total_empty_comments) +\
+      " (" + str(round(100.0*total_empty_comments/number_of_fragments,2)) + r"\%)\\"+'\n\t\t\t'
+body += r'Number of unplayed fragments: &' + str(total_not_played) +\
+      " (" + str(round(100.0*total_not_played/number_of_fragments,2)) + r"\%)\\"+'\n\t\t\t'
+body += r'Number of unmoved markers: &' + str(total_not_moved) +\
+      " (" + str(round(100.0*total_not_moved/number_of_fragments,2)) + r"\%)\\"+'\n\t\t\t'
+body += r'Average time per page: &' + seconds2timestr(time_per_page_accum/number_of_pages) + r"\\"+'\n\t\t'
+body += '\\end{tabular} \\vspace{1.5cm} \\\\ \n'
+
+# Average duration for first, second, ... page
+body += "\t\t\\vspace{.5cm} \n\n\t\tAverage duration per page (see also Figure \\ref{fig:avgtimeperpage}): \\\\ \n\t\t"
+body += r'''\begin{tabular}{lll}
+                    \textbf{Page} & \textbf{Duration} & \textbf{\# subjects}\\'''
+tpp_averages = [] # store average time per page
+for page_number in range(len(duration_order)): 
+    body += '\n\t\t\t'+str(page_number+1) + "&" +\
+        seconds2timestr(sum(duration_order[page_number])/len(duration_order[page_number])) +\
+            "&"+str(len(duration_order[page_number]))+r"\\"
+    tpp_averages.append(sum(duration_order[page_number])/len(duration_order[page_number]))
+            
+body += '\n\t\t\\end{tabular} \\vspace{1.5cm} \\\\ \n\n\t\t'
+
+# SHOW bar plot of average time per page
+plt.bar(range(1,len(duration_order)+1), np.array(tpp_averages)/60)
+plt.xlabel('Page order')
+plt.xlim(.8, len(duration_order)+1)
+plt.xticks(np.arange(1,len(duration_order)+1)+.4, range(1,len(duration_order)+1))
+plt.ylabel('Average time [minutes]')
+plt.savefig(folder_name+"/time_per_page.pdf", bbox_inches='tight')
+plt.close()
+#TODO add error bars
+
+
+# Sort pages by number of audioelements, then by duration
+
+# average duration and number of subjects per page
+average_duration_page = []
+number_of_subjects_page = []
+for line in duration_page:
+    number_of_subjects_page.append(len(line))
+    average_duration_page.append(sum(line)/len(line))
+
+# combine and sort in function of number of audioelements and duration
+combined_list = [page_names, average_duration_page, fragments_per_page, number_of_subjects_page]
+combined_list = sorted(zip(*combined_list), key=operator.itemgetter(1, 2)) # sort
+
+# Show average duration for all songs
+body += r'''\vspace{.5cm}
+                Average duration per audioholder (see also Figure \ref{fig:avgtimeperaudioholder}): \\
+                \begin{tabular}{llll}
+                        \textbf{Audioholder} & \textbf{Duration} & \textbf{\# subjects} & \textbf{\# fragments} \\'''
+audioholder_names_ordered = []
+average_duration_audioholder_ordered = []
+number_of_subjects = []
+for page_index in range(len(page_names)):
+    audioholder_names_ordered.append(combined_list[page_index][0])
+    average_duration_audioholder_ordered.append(combined_list[page_index][1])
+    number_of_subjects.append(combined_list[page_index][3])
+    body +=  '\n\t\t\t'+combined_list[page_index][0] + "&" +\
+             seconds2timestr(combined_list[page_index][1]) + "&" +\
+             str(combined_list[page_index][3]) + "&" +\
+             str(combined_list[page_index][2]) + r"\\"
+body += '\n\t\t\\end{tabular}\n'
+
+# SHOW bar plot of average time per page
+plt.bar(range(1,len(audioholder_names_ordered)+1), np.array(average_duration_audioholder_ordered)/60)
+plt.xlabel('Audioholder')
+plt.xlim(.8, len(audioholder_names_ordered)+1)
+plt.xticks(np.arange(1,len(audioholder_names_ordered)+1)+.4, audioholder_names_ordered, rotation=90)
+plt.ylabel('Average time [minutes]')
+plt.savefig(folder_name+"/time_per_audioholder.pdf", bbox_inches='tight')
+plt.close()
+
+# SHOW bar plot of average time per page
+plt.bar(range(1,len(audioholder_names_ordered)+1), number_of_subjects)
+plt.xlabel('Audioholder')
+plt.xlim(.8, len(audioholder_names_ordered)+1)
+plt.xticks(np.arange(1,len(audioholder_names_ordered)+1)+.4, audioholder_names_ordered, rotation=90)
+plt.ylabel('Number of subjects')
+ax = plt.gca()
+ylims = ax.get_ylim()
+yint = np.arange(int(np.floor(ylims[0])), int(np.ceil(ylims[1]))+1)
+plt.yticks(yint)
+plt.savefig(folder_name+"/subjects_per_audioholder.pdf", bbox_inches='tight')
+plt.close()
+
+# SHOW both figures
+body += r'''
+         \begin{figure}[htbp]
+         \begin{center}
+         \includegraphics[width=.65\textwidth]{'''+\
+         folder_name+"/time_per_page.pdf"+\
+        r'''}
+        \caption{Average time spent per page.}
+        \label{fig:avgtimeperpage}
+         \end{center}
+         \end{figure}
+         
+         '''
+body += r'''\begin{figure}[htbp]
+         \begin{center}
+         \includegraphics[width=.65\textwidth]{'''+\
+         folder_name+"/time_per_audioholder.pdf"+\
+        r'''}
+        \caption{Average time spent per audioholder.}
+        \label{fig:avgtimeperaudioholder}
+         \end{center}
+         \end{figure}
+         
+         '''
+body += r'''\begin{figure}[htbp]
+         \begin{center}
+         \includegraphics[width=.65\textwidth]{'''+\
+         folder_name+"/subjects_per_audioholder.pdf"+\
+        r'''}
+        \caption{Number of subjects per audioholder.}
+        \label{fig:subjectsperaudioholder}
+         \end{center}
+         \end{figure}
+         
+         '''
+#TODO add error bars
+#TODO layout of figures
+
+# SHOW boxplot per audioholder
+#TODO order in decreasing order of participants
+for audioholder_name in page_names: # get each name
+    # plot boxplot if exists (not so for the 'alt' names)
+    if os.path.isfile(folder_name+'/ratings/'+audioholder_name+'-ratings-box.pdf'):
+        body += r'''\begin{figure}[htbp]
+             \begin{center}
+             \includegraphics[width=.65\textwidth]{'''+\
+             folder_name+"/ratings/"+audioholder_name+'-ratings-box.pdf'+\
+            r'''}
+            \caption{Box plot of ratings for audioholder '''+\
+            audioholder_name+' ('+str(subject_count[real_page_names.index(audioholder_name)])+\
+            ''' participants).}
+            \label{fig:boxplot'''+audioholder_name.replace(" ", "")+'''}
+             \end{center}
+             \end{figure}
+             
+             '''
+
+# DEMO pie chart of gender distribution among subjects
+genders = ['male', 'female', 'other', 'preferNotToSay', 'UNAVAILABLE']
+# TODO: get the above automatically
+gender_distribution = ''
+for item in genders:
+    number = gender.count(item)
+    if number>0:
+        gender_distribution += str("{:.2f}".format((100.0*number)/len(gender)))+\
+                               '/'+item.capitalize()+' ('+str(number)+'),\n'
+
+body += r'''
+        % Pie chart of gender distribution
+        \def\angle{0}
+        \def\radius{3}
+        \def\cyclelist{{"orange","blue","red","green"}}
+        \newcount\cyclecount \cyclecount=-1
+        \newcount\ind \ind=-1
+        \begin{figure}[htbp]
+        \begin{center}\begin{tikzpicture}[nodes = {font=\sffamily}]
+        \foreach \percent/\name in {'''+\
+        gender_distribution+\
+        r'''} {\ifx\percent\empty\else               % If \percent is empty, do nothing
+        \global\advance\cyclecount by 1     % Advance cyclecount
+        \global\advance\ind by 1            % Advance list index
+        \ifnum6<\cyclecount                 % If cyclecount is larger than list
+          \global\cyclecount=0              %   reset cyclecount and
+          \global\ind=0                     %   reset list index
+        \fi
+        \pgfmathparse{\cyclelist[\the\ind]} % Get color from cycle list
+        \edef\color{\pgfmathresult}         %   and store as \color
+        % Draw angle and set labels
+        \draw[fill={\color!50},draw={\color}] (0,0) -- (\angle:\radius)
+          arc (\angle:\angle+\percent*3.6:\radius) -- cycle;
+        \node at (\angle+0.5*\percent*3.6:0.7*\radius) {\percent\,\%};
+        \node[pin=\angle+0.5*\percent*3.6:\name]
+          at (\angle+0.5*\percent*3.6:\radius) {};
+        \pgfmathparse{\angle+\percent*3.6}  % Advance angle
+        \xdef\angle{\pgfmathresult}         %   and store in \angle
+        \fi
+        };
+        \end{tikzpicture}
+        \caption{Representation of gender across subjects}
+        \label{default}
+        \end{center}
+        \end{figure}
+        
+        '''
+# problem: some people entered twice? 
+
+#TODO
+# time per page in function of number of fragments (plot)
+# time per participant in function of number of pages
+# plot total time for each participant
+# show 'count' per page (in order)
+
+# clear up page_index <> page_count <> page_number confusion
+
+
+texfile = header+body+footer # add bits together
+
+# write TeX file
+with open(folder_name + '/' + 'Report.tex','w') as f:
+    f.write(texfile)
+proc=subprocess.Popen(shlex.split('pdflatex -output-directory='+folder_name+' '+ folder_name + '/Report.tex'))
+proc.communicate()
+# run again
+proc=subprocess.Popen(shlex.split('pdflatex -output-directory='+folder_name+' '+ folder_name + '/Report.tex'))
+proc.communicate()
+
+# remove auxiliary LaTeX files (best effort)
+try:
+    os.remove(folder_name + '/' + 'Report.aux')
+    os.remove(folder_name + '/' + 'Report.log')
+    os.remove(folder_name + '/' + 'Report.out')
+    os.remove(folder_name + '/' + 'Report.toc')
+except OSError:
+    pass
+    
\ No newline at end of file
--- a/scripts/score_parser.py	Tue Oct 13 10:20:04 2015 +0100
+++ b/scripts/score_parser.py	Mon Nov 23 09:13:12 2015 +0000
@@ -2,19 +2,42 @@
 
 import xml.etree.ElementTree as ET
 import os
+import sys
 import csv
 
-#TODO Remove DEBUG statements
+# COMMAND LINE ARGUMENTS
 
-# XML results files location (modify as needed):
-folder_name = "../saves"    # Looks in 'saves/' folder from 'scripts/' folder
+assert len(sys.argv)<3, "score_parser takes at most 1 command line argument\n"+\
+                        "Use: python score_parser.py [results_folder]"
+
+# XML results files location
+if len(sys.argv) == 1:
+    folder_name = "../saves"    # Looks in 'saves/' folder from 'scripts/' folder
+    print "Use: python score_parser.py [rating_folder_location]"
+    print "Using default path: " + folder_name
+elif len(sys.argv) == 2:
+    folder_name = sys.argv[1]   # First command line argument is folder
+
+# check if folder_name exists
+if not os.path.exists(folder_name):
+    # the folder is not there
+    print "Folder '"+folder_name+"' does not exist."
+    sys.exit() # terminate script execution
+elif not os.access(folder_name, os.W_OK):
+    # the folder exists but write privileges are not given
+    print "No write privileges in folder '"+folder_name+"'."
+
+    
+# CODE
+
+# remember which files have been opened this time
+file_history = []
 
 # get every XML file in folder
 for file in os.listdir(folder_name):
     if file.endswith(".xml"):
         tree = ET.parse(folder_name + '/' + file)
         root = tree.getroot()
-        #print "DEBUG Reading " + file + "..."
 
         # get subject ID from XML file
         subject_id = file[:-4] # file name (without extension) as subject ID
@@ -29,7 +52,7 @@
 
             file_name = folder_name+'/ratings/'+page_name+'-ratings.csv' # score file name
 
-            # create folder 'ratings if not yet created
+            # create folder 'ratings' if not yet created
             if not os.path.exists(folder_name + '/ratings'):
                 os.makedirs(folder_name + '/ratings')
 
@@ -45,38 +68,38 @@
             for audioelement in audiolist: # iterate over all audioelements
                 fragmentnamelist.append(audioelement.get('id')) # add to list
 
-
             # if file exists, get header and add 'new' fragments
             if os.path.isfile(file_name):
-                #print "DEBUG file " + file_name + " already exists - reading header"
                 with open(file_name, 'r') as readfile:
                     filereader = csv.reader(readfile, delimiter=',')
                     headerrow = filereader.next()
 
+                # If file hasn't been opened yet this time, remove all rows except header
+                if file_name not in file_history:
+                    with open(file_name, 'w') as writefile:
+                        filewriter = csv.writer(writefile, delimiter=',')
+                        headerrow = sorted(headerrow)
+                        filewriter.writerow(headerrow)
+                    file_history.append(file_name)
+
                 # Which of the fragmentes are in fragmentnamelist but not in headerrow?
                 newfragments = list(set(fragmentnamelist)-set(headerrow))
                 newfragments = sorted(newfragments) # new fragments in alphabetical order
                 # If not empty, read file and rewrite adding extra columns
                 if newfragments: # if not empty
-                    #print "DEBUG New fragments found: " + str(newfragments)
                     with open('temp.csv', 'w') as writefile:
                         filewriter = csv.writer(writefile, delimiter=',')
                         filewriter.writerow(headerrow + newfragments) # write new header
-                        #print "        "+str(headerrow + newfragments) # DEBUG
                         with open(file_name, 'r') as readfile:
                             filereader = csv.reader(readfile, delimiter=',')
                             filereader.next() # skip header
                             for row in filereader: # rewrite row plus empty cells for every new fragment name
-                                #print "            Old row: " + str(row) # DEBUG
                                 filewriter.writerow(row + ['']*len(newfragments))
-                                #print "            New row: " + str(row + ['']*len(newfragments)) # DEBUG
                     os.rename('temp.csv', file_name) # replace old file with temp file
                     headerrow = headerrow + newfragments
-                    #print "DEBUG New header row: " + str(headerrow)
 
             # if not, create file and make header
             else:
-                #print ["DEBUG file " + file_name + " doesn't exist yet - making new one"]
                 headerrow = sorted(fragmentnamelist) # sort alphabetically
                 headerrow.insert(0,'')
                 fragmentnamelist = fragmentnamelist[1:] #HACKY FIX inserting in firstrow also affects fragmentnamelist
@@ -104,4 +127,3 @@
             # write row: [subject ID, rating fragment ID 1, ..., rating fragment ID M]
             if any(ratingrow[1:]): # append to file if row non-empty (except subject name)
                 filewriter.writerow(ratingrow)
-
--- a/scripts/score_plot.py	Tue Oct 13 10:20:04 2015 +0100
+++ b/scripts/score_plot.py	Mon Nov 23 09:13:12 2015 +0000
@@ -8,20 +8,103 @@
 import scipy as sp
 import scipy.stats
 
-# CONFIGURATION
+# COMMAND LINE ARGUMENTS
 
-# Which type(s) of plot do you want? 
-enable_boxplot    = True      # show box plot
+#TODO: Merge, implement this functionality
+#TODO: Control by CLI arguments (plot types, save and/or show, ...) 
+
+assert len(sys.argv)<7, "score_plot takes at most 6 command line arguments\n"+\
+                        "Use: python score_plot.py [rating_folder] [plot_type] [options]\n"+\
+                        "Type 'python score_plot.py -h' for more options"
+
+# initialise plot types (false by default) and options
+enable_boxplot    = False     # show box plot
 enable_confidence = False     # show confidence interval
 confidence        = 0.90      # confidence value (for confidence interval plot)
 enable_individual = False     # show all individual ratings
-show_individual   = []        # show specific individuals
+show_individual   = []        # show specific individuals (empty: show all individuals found)
 show_legend       = False     # show names of individuals
-#TODO: Merge, implement this functionality
-#TODO: Control by CLI arguments (plot types, save and/or show, ...) 
 
-# Enter folder where rating CSV files are (generated with score_parser.py or same format).
-rating_folder = '../saves/ratings/' # folder with rating csv files
+# DEFAULT: Looks in 'saves/ratings/' folder from 'scripts/' folder
+rating_folder = "../saves/ratings/" 
+
+# XML results files location
+if len(sys.argv) == 1: # no extra arguments
+    enable_boxplot    = True # show box plot
+    print "Use: python score_plot.py [rating folder] [plot_type] [-l/-legend]"
+    print "Type 'python score_plot.py -h' for help."
+    print "Using default path: " + rating_folder + " with boxplot."
+else:
+    for arg in sys.argv[1:]: # go over all arguments (skip the script name itself)
+        if arg == '-h':
+            # show help
+            #TODO: replace with contents of helpfile score_plot.info (or similar)
+            print "Use: python score_plot.py [rating_folder] [plot_type] [-l] [confidence]"
+            print "   rating_folder:"
+            print "            folder where output of 'score_parser' can be found, and"
+            print "            where plots will be stored."
+            print "            By default, '../saves/ratings/' is used."
+            print ""
+            print "PLOT TYPES"
+            print " Can be used in combination."
+            print "    box | boxplot | -b"
+            print "            Enables the boxplot" 
+            print "    conf | confidence | -c"
+            print "            Enables the confidence interval plot" 
+            print "    ind | individual | -i"
+            print "            Enables plot of individual ratings" 
+            print ""
+            print "PLOT OPTIONS"
+            print "    leg | legend | -l"
+            print "            For individual plot: show legend with individual file names"
+            print "    numeric value between 0 and 1, e.g. 0.95"
+            print "            For confidence interval plot: confidence value"
+            assert False, ""# stop immediately after showing help #TODO cleaner way
+            
+        # PLOT TYPES
+        elif arg == 'box' or arg == 'boxplot' or arg == '-b':
+            enable_boxplot    = True     # show box plot
+        elif arg == 'conf' or arg == 'confidence' or arg == '-c':
+            enable_confidence = True     # show confidence interval
+            #TODO add confidence value input
+        elif arg == 'ind' or arg == 'individual' or arg == '-i':
+            enable_individual = True     # show all individual ratings
+            
+        # PLOT OPTIONS
+        elif arg == 'leg' or arg == 'legend' or arg == '-l':
+            if not enable_individual: 
+                print "WARNING: The 'legend' option is only relevant to plots of "+\
+                      "individual ratings"
+            show_legend = True     # show legend with individual file names
+        elif arg.replace('.', '', 1).isdigit(): # numeric value, e.g. '0.95'
+            if not enable_confidence: 
+                print "WARNING: The numeric confidence value is only relevant when "+\
+                      "confidence plot is enabled"
+            if float(arg)>0 and float(arg)<1:
+                confidence = float(arg)
+            else: 
+                print "WARNING: The confidence value needs to be between 0 and 1"
+        
+        # FOLDER NAME
+        else:
+            # assume it's the folder name
+            rating_folder = arg
+
+# at least one plot type should be selected: box plot by default
+if not enable_boxplot and not enable_confidence and not enable_individual:
+    enable_boxplot = True
+
+# check if rating_folder exists
+if not os.path.exists(rating_folder):
+    # the folder is not there
+    print "Folder '"+rating_folder+"' does not exist."
+    sys.exit() # terminate script execution
+elif not os.access(os.path.dirname(rating_folder), os.W_OK):
+    # the folder exists but write privileges are not given
+    print "No write privileges in folder '"+rating_folder+"'."
+
+
+# CONFIGURATION
 
 # Font settings
 font = {'weight' : 'bold',
@@ -131,7 +214,7 @@
         plt.title(page_name)
         plt.xlabel('Fragment')
         plt.xlim(0, len(headerrow)+1) # only show relevant region, leave space left & right)
-        plt.xticks(range(1, len(headerrow)+1), headerrow) # show fragment names
+        plt.xticks(range(1, len(headerrow)+1), headerrow, rotation=90) # show fragment names
         plt.ylabel('Rating')
         plt.ylim(0,1)
         
@@ -146,5 +229,5 @@
         plot_type = ("-box" if enable_boxplot else "") + \
                     ("-conf" if enable_confidence else "") + \
                     ("-ind" if enable_individual else "")
-        plt.savefig(rating_folder+page_name+plot_type+".png")
+        plt.savefig(rating_folder+page_name+plot_type+".pdf", bbox_inches='tight')
         plt.close()
--- a/scripts/timeline_view.py	Tue Oct 13 10:20:04 2015 +0100
+++ b/scripts/timeline_view.py	Mon Nov 23 09:13:12 2015 +0000
@@ -1,14 +1,36 @@
 #!/usr/bin/python
 
 import xml.etree.ElementTree as ET
-import os
-import matplotlib.pyplot as plt
+import os # list files in directory
+import sys # command line arguments
+import matplotlib.pyplot as plt # plots
+import matplotlib.patches as patches # rectangles
+
+# COMMAND LINE ARGUMENTS
+
+assert len(sys.argv)<3, "timeline_view takes at most 1 command line argument\n"+\
+                        "Use: python timeline_view.py [XML_files_location]"
+
+# XML results files location
+if len(sys.argv) == 1:
+    folder_name = "../saves"    # Looks in 'saves/' folder from 'scripts/' folder
+    print "Use: python timeline_view.py [XML_files_location]"
+    print "Using default path: " + folder_name
+elif len(sys.argv) == 2:
+    folder_name = sys.argv[1]   # First command line argument is folder
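+    # e.g. python timeline_view.py ../saves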
+
+# check if folder_name exists
+if not os.path.exists(folder_name):
+    # the folder is not there
+    print "Folder '"+folder_name+"' does not exist."
+    sys.exit() # terminate script execution
+elif not os.access(os.path.dirname(folder_name), os.W_OK):
+    # the folder exists but write privileges are not given
+    print "No write privileges in folder '"+folder_name+"'."
+
 
 # CONFIGURATION 
 
-# XML results files location (modify as needed):
-folder_name = "../saves"    # Looks in 'saves/' folder from 'scripts/' folder
-
 # Folder where to store timelines
 timeline_folder = folder_name + '/timelines/'    # Stores in 'saves/timelines/'
 
@@ -20,6 +42,16 @@
 # Colormap for to cycle through
 colormap = ['b', 'r', 'g', 'c', 'm', 'y', 'k']
 
+# if enabled, x-axis shows time per audioholder, not total test time
+show_audioholder_time = True
+
+# bar height (<1 to avoid overlapping)
+bar_height = 0.6
+
+# figure size
+fig_width = 25
+fig_height = 5
+
 
 # CODE
 
@@ -34,11 +66,14 @@
         root = tree.getroot()
         subject_id = file[:-4] # drop '.xml'
         
+        time_offset = 0 # test starts at zero
+        
         # ONE TIMELINE PER PAGE - make new plot per page
 
         # get list of all page names
         for audioholder in root.findall("./audioholder"):   # iterate over pages
             page_name = audioholder.get('id')               # get page name
+            plot_empty = True                               # check if any data is plotted
             
             if page_name is None: # ignore 'empty' audio_holders
                 break
@@ -56,9 +91,13 @@
             increment = 0 # increased for every new audioelement
             audioelements_names = [] # store names of audioelements
             
+            # get axes handle
+            fig = plt.figure(figsize=(fig_width, fig_height))
+            ax  = fig.add_subplot(111) #, aspect='equal'
+            
             # for page [page_name], print comments related to fragment [id]
             for tuple in data:
-            	audioelement = tuple[1]
+                audioelement = tuple[1]
                 if audioelement is not None: # Check it exists
                     audio_id = str(audioelement.get('id'))
                     audioelements_names.append(audio_id)
@@ -66,41 +105,55 @@
                     # for this audioelement, loop over all listen events
                     listen_events = audioelement.findall("./metric/metricresult/[@name='elementListenTracker']/event")
                     for event in listen_events:
+                        # mark this plot as not empty
+                        plot_empty = False
+                    
                         # get testtime: start and stop
-                        start_time = event.find('testtime').get('start')
-                        stop_time  = event.find('testtime').get('stop')
+                        start_time = float(event.find('testtime').get('start'))-time_offset
+                        stop_time  = float(event.find('testtime').get('stop'))-time_offset
                         # event lines:
-                        plt.plot([start_time, start_time], # x-values
+                        ax.plot([start_time, start_time], # x-values
                             [0, N_audioelements+1], # y-values
                             color='k'
                             )
-                        plt.plot([stop_time, stop_time], # x-values
+                        ax.plot([stop_time, stop_time], # x-values
                             [0, N_audioelements+1], # y-values
                             color='k'
                             )
                         # plot time: 
-                        plt.plot([start_time, stop_time], # x-values
-                            [N_audioelements-increment, N_audioelements-increment], # y-values
-                            color=colormap[increment%len(colormap)],
-                            linewidth=6
+                        ax.add_patch(
+                            patches.Rectangle(
+                                (start_time, N_audioelements-increment-bar_height/2), # (x, y)
+                                stop_time - start_time, # width
+                                bar_height, # height
+                                color=colormap[increment%len(colormap)] # colour
                             )
+                        )
                         
-                increment+=1
-                                           
+                increment+=1 # to next audioelement
+                
+            # subtract total audioholder length from subsequent audioholder event times
+            audioholder_time = audioholder.find("./metric/metricresult/[@id='testTime']")
+            if audioholder_time is not None and show_audioholder_time: 
+                time_offset = float(audioholder_time.text)
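+                # (testTime is read as the elapsed test time at the end of this
+                # audioholder, so it becomes the offset for the next page's events)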
+            
+            if not plot_empty:
+                # set plot parameters
+                plt.title('Timeline ' + file + ": "+page_name)
+                plt.xlabel('Time [seconds]')
+                plt.ylabel('Fragment')
+                plt.ylim(0, N_audioelements+1)
+            
+                #y-ticks: fragment IDs, top to bottom
+                plt.yticks(range(N_audioelements, 0, -1), audioelements_names) # show fragment names
+
+
+                #plt.show() # uncomment to show plot; comment when just saving
+                #exit()
+            
+                plt.savefig(timeline_folder+subject_id+"-"+page_name+".pdf", bbox_inches='tight')
+                plt.close()
+            
             #TODO: if 'nonsensical' or unknown: dashed line until next event
             #TODO: Vertical lines for fragment looping point
-            
-            plt.title('Timeline ' + file) #TODO add song too
-            plt.xlabel('Time [seconds]')
-            plt.ylabel('Fragment')
-            plt.ylim(0, N_audioelements+1)
-            
-            #y-ticks: fragment IDs, top to bottom
-            plt.yticks(range(N_audioelements, 0, -1), audioelements_names) # show fragment names
-
-
-            #plt.show() # uncomment to show plot; comment when just saving
-            #exit()
-            
-            plt.savefig(timeline_folder+subject_id+"-"+page_name+".png")
-            plt.close()
\ No newline at end of file
+            
\ No newline at end of file
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/scripts/timeline_view_movement.py	Mon Nov 23 09:13:12 2015 +0000
@@ -0,0 +1,311 @@
+#!/usr/bin/python
+
+import xml.etree.ElementTree as ET
+import os # list files in directory
+import sys # command line arguments
+import matplotlib.pyplot as plt # plots
+import matplotlib.patches as patches # rectangles
+
+
+# COMMAND LINE ARGUMENTS
+
+assert len(sys.argv)<3, "timeline_view_movement takes at most 1 command line argument\n"+\
+                        "Use: python timeline_view_movement.py [XML_files_location]"
+
+# XML results files location
+if len(sys.argv) == 1:
+    folder_name = "../saves"    # Looks in 'saves/' folder from 'scripts/' folder
+    print "Use: python timeline_view_movement.py [XML_files_location]"
+    print "Using default path: " + folder_name
+elif len(sys.argv) == 2:
+    folder_name = sys.argv[1]   # First command line argument is folder
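+    # e.g. python timeline_view_movement.py ../saves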
+
+# check if folder_name exists
+if not os.path.exists(folder_name):
+    # the folder is not there
+    print "Folder '"+folder_name+"' does not exist."
+    sys.exit() # terminate script execution
+elif not os.access(os.path.dirname(folder_name), os.W_OK):
+    # the folder exists but write privileges are not given
+    print "No write privileges in folder '"+folder_name+"'."
+
+
+# CONFIGURATION 
+
+# Folder where to store timelines
+timeline_folder = folder_name + '/timelines_movement/'    # Stores in 'saves/timelines_movement/' by default
+
+# Font settings
+font = {'weight' : 'bold',
+        'size'   : 16}
+plt.rc('font', **font)
+
+# Colormap to cycle through
+colormap = ['b', 'g', 'c', 'm', 'y', 'k']
+
+# figure size
+fig_width = 25
+fig_height = 10
+
+
+# CODE
+
+# create timeline_folder if not yet created
+if not os.path.exists(timeline_folder):
+    os.makedirs(timeline_folder)
+
+# get every XML file in folder
+for file in os.listdir(folder_name):
+    if file.endswith(".xml"):
+        tree = ET.parse(folder_name + '/' + file)
+        root = tree.getroot()
+        subject_id = file[:-4] # drop '.xml'
+        
+        previous_audioholder_time = 0 # time spent before current audioholder
+        time_offset = 0 # test starts at zero
+        
+        # ONE TIMELINE PER PAGE - make new plot per page
+
+        # get list of all page names
+        for audioholder in root.findall("./audioholder"):   # iterate over pages
+            page_name = audioholder.get('id')               # get page name
+            plot_empty = True                               # check if any data is plotted
+            
+            if page_name is None: # ignore 'empty' audio_holders
+                print "Skipping empty audioholder name from "+subject_id+"."
+                break
+                
+            # subtract total audioholder length from subsequent audioholder event times
+            audioholder_time_temp = audioholder.find("./metric/metricresult/[@id='testTime']")
+            if audioholder_time_temp is not None: 
+                audioholder_time = float(audioholder_time_temp.text)
+            else: 
+                print "Skipping audioholder without total time specified from "+subject_id+"."
+                break
+
+            # get audioelements
+            audioelements = audioholder.findall("./audioelement")
+            
+            # sort alphabetically
+            data = []
+            for elem in audioelements: # from http://effbot.org/zone/element-sort.htm
+                key = elem.get("id")
+                data.append((key, elem))
+            data.sort()
+            
+            N_audioelements = len(audioelements) # number of audio elements for this page
+            increment = 0 # increased for every new audioelement
+            
+            # get axes handle
+            fig = plt.figure(figsize=(fig_width, fig_height))
+            ax  = fig.add_subplot(111)
+            
+            # for page [page_name], plot the rating movement of each fragment [id]
+            for tuple in data:
+                audioelement = tuple[1]
+                if audioelement is not None: # Check it exists
+                    audio_id = str(audioelement.get('id'))
+                    
+                    # break if no initial position or move events registered
+                    initial_position_temp = audioelement.find("./metric/metricresult/[@name='elementInitialPosition']")
+                    if initial_position_temp is None:
+                        print "Skipping "+page_name+" from "+subject_id+": does not have initial positions specified."
+                        break
+                    
+                    # get move events, initial and eventual position
+                    initial_position = float(initial_position_temp.text)
+                    move_events = audioelement.findall("./metric/metricresult/[@name='elementTrackerFull']/timepos")
+                    final_position = float(audioelement.find("./value").text)
+                    
+                    # get listen events
+                    start_times_global = []
+                    stop_times_global  = []
+                    listen_events = audioelement.findall("./metric/metricresult/[@name='elementListenTracker']/event")
+                    for event in listen_events:
+                        # get testtime: start and stop
+                        start_times_global.append(float(event.find('testtime').get('start'))-time_offset)
+                        stop_times_global.append(float(event.find('testtime').get('stop'))-time_offset)
+                    
+                    # display fragment name at start
+                    plt.text(0,initial_position+0.02,audio_id,color=colormap[increment%len(colormap)]) #,rotation=45
+                    
+                    # previous position and time
+                    previous_position = initial_position
+                    previous_time = 0
+                    
+                    # assume not playing at start
+                    currently_playing = False # keep track of whether fragment is playing during move event
+                                        
+                    # draw all segments except final one
+                    for event in move_events: 
+                        # mark this plot as not empty
+                        plot_empty = False
+                    
+                        # get time and final position of move event
+                        new_time = float(event.find("./time").text)-time_offset
+                        new_position = float(event.find("./position").text)
+                        
+                        # get play/stop events since last move until current move event
+                        stop_times = []
+                        start_times = []
+                        # is there a play and/or stop event between previous_time and new_time?
+                        for time in start_times_global:
+                            if time>previous_time and time<new_time:
+                                start_times.append(time)
+                        for time in stop_times_global:
+                            if time>previous_time and time<new_time:
+                                stop_times.append(time)
+                        # if no play/stop events since the last move event, currently_playing keeps its previous value
+                        
+                        segment_start = previous_time # first segment starts at previous move event
+                        
+                        # draw segments (horizontal line)
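+                        # walk through the remaining play/stop events in chronological order:
+                        # the stretch before a 'play' event is drawn in the fragment colour
+                        # (not playing), the stretch before a 'stop' event in red (playing)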
+                        while len(start_times)+len(stop_times)>0: # while still play/stop events left
+                            if len(stop_times)<1: # upcoming event is 'play'
+                                # draw non-playing segment from segment_start to 'play'
+                                currently_playing = False
+                                segment_stop = start_times.pop(0) # remove and return first item
+                            elif len(start_times)<1: # upcoming event is 'stop'
+                                # draw playing segment (red) from segment_start to 'stop'
+                                currently_playing = True
+                                segment_stop = stop_times.pop(0) # remove and return first item
+                            elif start_times[0]<stop_times[0]: # upcoming event is 'play'
+                                # draw non-playing segment from segment_start to 'play'
+                                currently_playing = False
+                                segment_stop = start_times.pop(0) # remove and return first item
+                            else: # stop_times[0]<start_times[0]: upcoming event is 'stop'
+                                # draw playing segment (red) from segment_start to 'stop'
+                                currently_playing = True
+                                segment_stop = stop_times.pop(0) # remove and return first item
+                                
+                            # draw segment
+                            plt.plot([segment_start, segment_stop], # x-values
+                                [previous_position, previous_position], # y-values
+                                color='r' if currently_playing else colormap[increment%len(colormap)],
+                                linewidth=3
+                            )
+                            segment_start = segment_stop # move on to next segment
+                            currently_playing = not currently_playing # toggle to draw final segment correctly
+                        
+                        # draw final segment (horizontal line) from last 'segment_start' to current move event time
+                        plt.plot([segment_start, new_time], # x-values
+                            [previous_position, previous_position], # y-values
+                            # color depends on playing during move event or not:
+                            color='r' if currently_playing else colormap[increment%len(colormap)], 
+                            linewidth=3
+                        )
+                        
+                        # vertical line from previous to current position
+                        plt.plot([new_time, new_time], # x-values
+                            [previous_position, new_position], # y-values
+                            # color depends on playing during move event or not:
+                            color='r' if currently_playing else colormap[increment%len(colormap)], 
+                            linewidth=3
+                        )
+                        
+                        # update previous_position value
+                        previous_position = new_position
+                        previous_time     = new_time
+                    
+                    
+                    
+                    # draw final horizontal segment (or only segment if audioelement not moved)
+                    # horizontal line from previous time to end of audioholder
+                    
+                    # get play/stop events since the last move event until the end of the audioholder
+                    stop_times = []
+                    start_times = []
+                    # is there a play and/or stop event between previous_time and the end of the audioholder?
+                    for time in start_times_global:
+                        if time>previous_time and time<audioholder_time-time_offset:
+                            start_times.append(time)
+                    for time in stop_times_global:
+                        if time>previous_time and time<audioholder_time-time_offset:
+                            stop_times.append(time)
+                    # if no play/stop events in this interval, currently_playing keeps its previous value
+                    
+                    segment_start = previous_time # first segment starts at previous move event
+                    
+                    # draw segments (horizontal line)
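+                    # same event-walking logic as above, now up to the end of the audioholder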
+                    while len(start_times)+len(stop_times)>0: # while still play/stop events left
+                        # mark this plot as not empty
+                        plot_empty = False
+                        if len(stop_times)<1: # upcoming event is 'play'
+                            # draw non-playing segment from segment_start to 'play'
+                            currently_playing = False
+                            segment_stop = start_times.pop(0) # remove and return first item
+                        elif len(start_times)<1: # upcoming event is 'stop'
+                            # draw playing segment (red) from segment_start to 'stop'
+                            currently_playing = True
+                            segment_stop = stop_times.pop(0) # remove and return first item
+                        elif start_times[0]<stop_times[0]: # upcoming event is 'play'
+                            # draw non-playing segment from segment_start to 'play'
+                            currently_playing = False
+                            segment_stop = start_times.pop(0) # remove and return first item
+                        else: # stop_times[0]<start_times[0]: upcoming event is 'stop'
+                            # draw playing segment (red) from segment_start to 'stop'
+                            currently_playing = True
+                            segment_stop = stop_times.pop(0) # remove and return first item
+                            
+                        # draw segment
+                        plt.plot([segment_start, segment_stop], # x-values
+                            [previous_position, previous_position], # y-values
+                            color='r' if currently_playing else colormap[increment%len(colormap)],
+                            linewidth=3
+                        )
+                        segment_start = segment_stop # move on to next segment
+                        currently_playing = not currently_playing # toggle to draw final segment correctly
+                    
+                    # draw final segment (horizontal line) from last 'segment_start' to the end of the audioholder
+                    plt.plot([segment_start, audioholder_time-time_offset], # x-values
+                        [previous_position, previous_position], # y-values
+                        # color depends on playing during move event or not:
+                        color='r' if currently_playing else colormap[increment%len(colormap)], 
+                        linewidth=3
+                    )
+                    
+                    
+                    # display fragment name at end
+                    plt.text(audioholder_time-time_offset,previous_position,\
+                             audio_id,color=colormap[increment%len(colormap)]) #,rotation=45
+                        
+                increment+=1 # to next audioelement
+            
+            last_audioholder_duration = audioholder_time-time_offset
+            time_offset = audioholder_time
+            
+            if not plot_empty: # if plot is not empty, show or store
+                # set plot parameters
+                plt.title('Timeline ' + file + ": "+page_name)
+                plt.xlabel('Time [seconds]')
+                plt.xlim(0, last_audioholder_duration)
+                plt.ylabel('Rating') # default
+                plt.ylim(0, 1) # rating between 0 and 1
+            
+                #y-ticks: labels on rating axis
+                label_positions = []
+                label_text = []
+                scale_tags = root.findall("./BrowserEvalProjectDocument/audioHolder/interface/scale")
+                scale_title = root.find("./BrowserEvalProjectDocument/audioHolder/interface/title")
+                for tag in scale_tags:
+                    label_positions.append(float(tag.get('position'))/100) # on a scale from 0 to 100
+                    label_text.append(tag.text)
+                if len(label_positions) > 0: # if any labels available
+                    plt.yticks(label_positions, label_text) # show rating axis labels
+                # set label Y-axis
+                if scale_title is not None: 
+                    plt.ylabel(scale_title.text)
+            
+                #plt.show() # uncomment to show plot; comment when just saving
+                #exit()
+            
+                plt.savefig(timeline_folder+subject_id+"-"+page_name+".pdf", bbox_inches='tight')
+                plt.close()
+            
\ No newline at end of file