# HG changeset patch
# User Brecht De Man
# Date 1450403154 0
# Node ID 1c5894cdcb9c012e8ad8fa9758d35f2b7b0b31d0
# Parent 930354145f6c4dc1e3d4f588595ccb1d25eff0ca
Instructions update (WIP); removed basic instructions from README.

diff -r 930354145f6c -r 1c5894cdcb9c README.txt
--- a/README.txt	Thu Dec 17 11:11:57 2015 +0000
+++ b/README.txt	Fri Dec 18 01:45:54 2015 +0000
@@ -1,127 +1,11 @@
 WEB AUDIO EVALUATION TOOL
 
-This is not (yet) a fully fledged manual.
-
 
 AUTHORS
 
 Nicholas Jillings
 Brecht De Man
 David Moffat
 Joshua D. Reiss (supervisor)
 
+INSTRUCTIONS FOR USE
 
-PACKAGE CONTENTS
-
-- main folder (/)
-	- ape.css, core.css, graphics.css, structure.css: style files (edit to change appearance)
-	- ape.js: JavaScript file for APE-style interface [1]
-	- core.js: JavaScript file with core functionality
-	- index.html: webpage where interface should appear
-	- jquery-2.1.4.js: jQuery JavaScript Library
-	- pythonServer.py: webserver for running tests locally
-	- pythonServer-legacy.py: webserver with limited functionality (no storing of output XML files)
-- Documentation (/docs/)
-	- Project Specification Document (LaTeX/PDF)
-	- Results Specification Document (LaTeX/PDF)
-	- SMC15: PDF and LaTeX source of corresponding SMC2015 publication
-- Example project (/example_eval/)
-	An example of what the set up XML should look like, with example audio files 0.wav-10.wav which are short recordings at 44.1kHz, 16bit of a woman saying the corresponding number (useful for testing randomisation and general familiarisation with the interface).
-- Output files (/saves/)
-	The output XML files of tests will be stored here by default by the pythonServer.py script.
-- Auxiliary scripts (/scripts/)
-	Helpful Python scripts for extraction and visualisation of data.
-- Test creation tool (/test_create/)
-	Webpage for easily setting up a test without having to delve into the XML.
-
-
-QUICK START
-
-Using the example project:
-1. Make sure your system sample rate corresponds with the sample rate of the audio files, if the input XML file enforces the given sample rate.
-2. Run pythonServer.py (make sure you have Python installed).
-3. Open a browser (anything but Internet Explorer).
-4. Go to ‘localhost:8000’.
-5. The test should open; complete it and look at the output XML file in /saves/.
-
-
-LEGACY
-
-The APE interface and most of the functionality of the interface is inspired by the APE toolbox for MATLAB [1]. See https://code.soundsoftware.ac.uk/projects/ape for the source code and corresponding paper.
-
-
-CITING
-
-We request that you acknowledge the authors and cite our work [2], see CITING.txt.
-
-
-LICENSE
-
-See LICENSE.txt. This code is shared under the GNU General Public License v3.0 (http://choosealicense.com/licenses/gpl-3.0/). Generally speaking, this is a copyleft license that requires anyone who distributes our code or a derivative work to make the source available under the same terms.
-
-
-FEATURE REQUESTS AND BUG REPORTS
-
-We continually develop this tool to fix issues and implement features useful to us or our user base. See https://code.soundsoftware.ac.uk/projects/webaudioevaluationtool/issues for a list of feature requests and bug reports, and their status.
-
-Please contact the authors if you experience any bugs, if you would like additional functionality, if you have questions about using the interface or if you would like to give any feedback (even positive!) about the interface. We look forward to learning how the tool has (not) been useful to you.
-
-
-TROUBLESHOOTING
-
-Thanks to feedback from using the interface in experiments by the authors and others, many bugs have been caught and fatal crashes due to the interface (provided it is set up properly by the user) seem to be a thing of the past.
-However, if things do go wrong or the test needs to be interrupted for whatever reason, all data is not lost. In a normal scenario, the test needs to be completed until the end (the final ‘Submit’), at which point the output XML is stored in ‘saves/‘. If this stage is not reached, open the JavaScript Console (see below for how to find it) and type ‘createProjectSave()’ (without the quotes) and hit enter. This will open a pop-up window with a hyperlink that reads ‘Save File’; click it and an XML file with results until that point should be stored in your download folder.
-Alternatively, a lot of data can be read from the same console, in which the tool prints a lot of debug information. Specifically:
-	- the randomisation of pages and fragments are logged;
-	- any time a slider is played, its ID and the time stamp (in seconds since the start of the test) are displayed;
-	- any time a slider is dragged and dropped, the location where it is dropped including the time stamp are shown;
-	- any comments and pre- or post-test questions and their answers are logged as well.
-
-You can select all this and save into a text file, so that none of this data is lost. You may to choose to do this even when a test was successful as an extra precaution.
-
-In Google Chrome, the JavaScript Console can be found in View>Developer>JavaScript Console, or via the keyboard shortcut Cmd + Alt + J (Mac OS X).
-In Safari, the JavaScript Console can be found in Develop>Show Error Console, or via the keyboard shortcut Cmd + Alt + C (Mac OS X). Note that for the Developer menu to be visible, you have to go to Preferences (Cmd + ,) and enable ‘Show Develop menu in menu bar’ in the ‘Advanced’ tab.
-In Firefox, go to Tools>Web Developer>Web Console, or hit Cmd + Alt + K.
-
-
-REMOTE TESTS
-
-As the test is browser-based, it can be run remotely from a web server without modification. To allow for remote storage of the output XML files (as opposed to saving them locally on the subject’s machine, which is the default if no ‘save’ path is specified or found), a PHP script on the server needs to accept the output XML files. An example of such script will be included in a future version.
-
-
-SCRIPTS
-
-The tool comes with a few handy Python (2.7) scripts for easy extraction of ratings or comments, and visualisation of ratings and timelines. See below for a quick guide on how to use them. All scripts written for Python 2.7. Visualisation requires the free matplotlib toolbox (http://matplotlib.org), numpy and scipy.
-By default, the scripts can be run from the ‘scripts’ folder, with the result files in the ‘saves’ folder (the default location where result XMLs are stored). Each script takes the XML file folder as an argument, along with other arguments in some cases.
-Note: to avoid all kinds of problems, please avoid using spaces in file and folder names (this may work on some systems, but others don’t like it).
-	comment_parser.py
-		Extracts comments from the output XML files corresponding with the different subjects found in ‘saves/’. It creates a folder per ‘audioholder’/page it finds, and stores a CSV file with comments for every ‘audioelement’/fragment within these respective ‘audioholders’/pages. In this CSV file, every line corresponds with a subject/output XML file.
-		Depending on the settings, the first column containing the name of the corresponding XML file can be omitted (for anonymisation).
-		Beware of Excel: sometimes the UTF-8 is not properly imported, leading to problems with special characters in the comments (particularly cumbersome for foreign languages).
-	evaluation_stats.py
-		Shows a few statistics of tests in the ‘saves/‘ folder so far, mainly for checking for errors. Shows the number of files that are there, the audioholder IDs that were tested (and how many of each separate ID), the duration of each page, the duration of each complete test, the average duration per page, and the average duration in function of the page number.
-	generate_report.py
-		Similar to ‘evaluation_stats.py’, but generates a PDF report based on the output files in the ‘saves/‘ folder - or any folder specified as command line argument. Uses pdflatex to write a LaTeX document, then convert to a PDF.
-	score_parser.py
-		Extracts rating values from the XML to CSV - necessary for running visualisation of ratings. Creates the folder ‘saves/ratings/‘ if not yet created, to which it writes a separate file for every ‘audioholder’/page in any of the output XMLs it finds in ‘saves/‘. Within each file, rows represent different subjects (output XML file names) and columns represent different ‘audioelements’/fragments.
-	score_plot.py
-		Plots the ratings as stored in the CSVs created by score_parser.py
-		Depending on the settings, it displays and/or saves (in ‘saves/ratings/’) a boxplot, confidence interval plot, scatter plot, or a combination of the aforementioned.
-		Requires the free matplotlib library.
-		At this point, more than one subjects are needed for this script to work.
-	timeline_view_movement.py
-		Creates a timeline for every subject, for every ‘audioholder’/page, corresponding with any of the output XML files found in ‘/saves’. It shows the marker movements of the different fragments, along with when each fragment was played (red regions). Automatically takes fragment names, rating axis title, rating axis labels, and audioholder name from the XML file (if available).
-	timeline_view.py
-		Creates a timeline for every subject, for every ‘audioholder’/page, corresponding with any of the output XML files found in ‘/saves’. It shows when and for how long the subject listened to each of the fragments.
-
-
-
-REFERENCES
-[1] B. De Man and Joshua D. Reiss, “APE: Audio Perceptual Evaluation toolbox for MATLAB,” 136th Convention of the Audio Engineering Society, 2014.
-
-[2] Nicholas Jillings, Brecht De Man, David Moffat and Joshua D. Reiss, “Web Audio Evaluation Tool: A Browser-Based Listening Test Environment,” 12th Sound and Music Computing Conference, July 2015.
+Please refer to ‘docs/Instructions/Instructions.pdf’
\ No newline at end of file
diff -r 930354145f6c -r 1c5894cdcb9c docs/Instructions/Instructions.pdf
Binary file docs/Instructions/Instructions.pdf has changed
diff -r 930354145f6c -r 1c5894cdcb9c docs/Instructions/Instructions.tex
--- a/docs/Instructions/Instructions.tex	Thu Dec 17 11:11:57 2015 +0000
+++ b/docs/Instructions/Instructions.tex	Fri Dec 18 01:45:54 2015 +0000
@@ -7,6 +7,11 @@
 % TeX will automatically convert eps --> pdf in pdflatex
 \usepackage{listings} % Source code
+\usepackage{xcolor} % colour (source code for instance)
+\definecolor{grey}{rgb}{0.1,0.1,0.1}
+\definecolor{darkblue}{rgb}{0.0,0.0,0.6}
+\definecolor{cyan}{rgb}{0.0,0.6,0.6}
+
 \usepackage{amssymb}
 \usepackage{cite}
 \usepackage{hyperref} % Hyperlinks
@@ -21,7 +26,12 @@
 \begin{document}
 \maketitle
 
-These instructions are about use of the Web Audio Evaluation Tool \cite{waet} on Windows and Mac OS X platforms.
+These instructions are about use of the Web Audio Evaluation Tool on Windows and Mac OS X platforms.
+
+We request that you acknowledge the authors and cite our work when using it \cite{waet}; see also CITING.txt.
+
+The tool is available in its entirety, including source code, at \url{https://code.soundsoftware.ac.uk/projects/webaudioevaluationtool/}, under the GNU General Public License v3.0 (\url{http://choosealicense.com/licenses/gpl-3.0/}); see also LICENSE.txt.
+
 % TO DO: Linux
 
 \tableofcontents
@@ -29,7 +39,7 @@
 
 \clearpage
 \section{Installation}
 
-	Download the folder (\url{https://code.soundsoftware.ac.uk/hg/webaudioevaluationtool/archive/tip.zip}) and unzip in a location of your choice.
+	Download the folder (\url{https://code.soundsoftware.ac.uk/hg/webaudioevaluationtool/archive/tip.zip}) and unzip it in a location of your choice, or pull the source code from \url{https://code.soundsoftware.ac.uk/hg/webaudioevaluationtool} (Mercurial).
 
 \subsection{Contents}
 The folder should contain the following elements: \\
@@ -39,22 +49,24 @@
 \item \texttt{analyse.html}: analysis and diagnostics of a set of result XML files
 \item \texttt{ape.css, core.css, graphics.css, mushra.css, structure.css}: style files (edit to change appearance)
 \item \texttt{ape.js}: JavaScript file for APE-style interface \cite{ape}
-\item \texttt{mushra.js}: JavaScript file for MUSHRA-style interface \cite{mushra}
-\item \texttt{CITING.txt, LICENSE.txt, README.txt}: text files with, respectively, the citation which we ask to include in any work where this tool or any portion thereof is used, modified or otherwise; the license under which the software is shared; and a general readme file.
+\item \texttt{CITING.txt, LICENSE.txt, README.txt}: text files with, respectively, the citation which we ask you to include in any work where this tool or any portion thereof is used or modified; the license under which the software is shared; and a general readme file referring to these instructions.
 \item \texttt{core.js}: JavaScript file with core functionality
 \item \texttt{index.html}: webpage where interface should appear (includes link to test configuration XML)
 \item \texttt{jquery-2.1.4.js}: jQuery JavaScript Library
+\item \texttt{loudness.js}: Allows for automatic calculation of the loudness of Web Audio API Buffer objects, returning gain values to correct for a target loudness or match loudness between multiple objects
+\item \texttt{mushra.js}: JavaScript file for MUSHRA-style interface \cite{mushra}
 \item \texttt{pythonServer.py}: webserver for running tests locally
 \item \texttt{pythonServer-legacy.py}: webserver with limited functionality (no automatic storing of output XML files)
 \item \texttt{save.php}: PHP script to store result XML files to web server\\
 \end{itemize}
 \textbf{Documentation (./docs/)}
 \begin{itemize}
+	\item \href{http://c4dm.eecs.qmul.ac.uk/dmrn/events/dmrnp10/#posters}{DMRN+10}: PDF and \LaTeX\ source of poster for 10\textsuperscript{th} Digital Music Research Network One-Day workshop (``soft launch'')
 	\item Instructions: PDF and \LaTeX\ source of these instructions
 	\item Project Specification Document (\LaTeX/PDF)
 	\item Results Specification Document (\LaTeX/PDF)
-	\item SMC15: PDF and \LaTeX source of corresponding SMC2015 publication \cite{waet}
-	\item WAC2016: PDF and \LaTeX source of corresponding WAC2016 publication\\
+	\item SMC15: PDF and \LaTeX\ source of 12th Sound and Music Computing Conference paper \cite{waet}
+	\item WAC2016: PDF and \LaTeX\ source of 2nd Web Audio Conference paper\\
 \end{itemize}
 \textbf{Example project (./example\_eval/)}
 \begin{itemize}
@@ -73,13 +85,15 @@
 	\item Webpage for easily setting up your own test without having to delve into the XML.\\
 \end{itemize}
 
-	\subsection{Browser}
+	\subsection{Compatibility}
 	As Microsoft Internet Explorer doesn't support the Web Audio API\footnote{\url{http://caniuse.com/\#feat=audio-api}}, you will need another browser like Google Chrome, Safari or Firefox (all three are tested and confirmed to work).
+	
+	Firefox does not currently support bit depths other than 8 or 16 bit for PCM wave files. In the future, this will throw a warning message to tell the user that their content is being quantised automatically. %Nick? Right? To be removed if and when actually implemented
 	
 	The tool is platform-independent and works in any browser that supports the Web Audio API. It does not require any specific, proprietary software. However, in case the tool is hosted locally (i.e. you are not hosting it on an actual webserver) you will need Python (2.7), which is a free programming language - see the next paragraph.
+\clearpage
 
-\clearpage
 \section{Test setup}
 
@@ -197,9 +211,263 @@
 	\subsection{Remote test}
 	Put all files on a web server which supports PHP. This allows the `save.php' script to store the XML result files in the `saves/' folder. If the web server is not able to store the XML file there at the end of the test, it will present the XML file locally to the user, as a `Save file' link.
-	
+	
+	Make sure the \texttt{projectReturn} attribute of the \texttt{setup} node is set to the \texttt{save.php} script.
+	
+	Then, just go to the URL of the corresponding HTML file, e.g. \texttt{http://server.com/path/to/WAET/index.html?url=test/my-test.xml}. If storing on the server doesn't work at submission (e.g. if the \texttt{projectReturn} attribute isn't properly set), the result XML file will be presented to the subject on the client side, as a `Save file' link.
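+	
+	For example, a minimal sketch of such a \texttt{setup} node. The \texttt{interface} value and the child content are placeholders; see the Project Specification Document and the example project for the complete format:
+	
+	\begin{lstlisting}
+	<setup interface="APE" projectReturn="save.php">
+		<!-- placeholder: surveys, pages and further configuration as in the example project -->
+	</setup>
+	\end{lstlisting}
+	
+	With this in place, the result XML is posted to \texttt{save.php}, which stores it in the `saves/' folder on the server.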
+	
+	\clearpage
+	
+\section{Interfaces}
+	
+	The Web Audio Evaluation Tool comes with a number of interface styles, each of which can be customised extensively, either by configuring them differently using the many optional features, or by modifying the JavaScript files.
+	
+	To set the interface style for the whole test, %Nick? change when this is not the case anymore, i.e. when the interface can be set per page
+	add \texttt{interface="APE"} to the \texttt{setup} node, where \texttt{"APE"} is one of the interface names below.
+	
+	\subsection{APE}
+	The APE interface is based on \cite{ape} and consists of one or more axes, each corresponding with an attribute to be rated, on which markers are placed. As such, it is a multiple stimulus interface where (for each dimension or attribute) all elements are rated on a single axis so that they can be maximally compared against each other, as opposed to rated individually or against a single reference.
+	It also contains an optional text box for each element, to allow for clarification by the subject, tagging, and so on.
+	
+	\subsection{MUSHRA}
+	This is a straightforward implementation of \cite{mushra}, especially common for the rating of audio quality, for instance in the evaluation of audio codecs.
+	
+\clearpage
+
+\section{Features}
+	
+	This section goes over the different features implemented in the Web Audio Evaluation Tool, how to use them, and what to know about them.
+	
+	Unless otherwise specified, \emph{each} feature described here is optional, i.e. it can be enabled or disabled and adjusted to some extent.
+	
+	As the example project showcases (nearly) all of these features, please refer to its configuration XML document for a demonstration of how to enable and adjust them.
+	
+	\subsection{Surveys}
+	\subsubsection{Pre- and post-page surveys}
+	
+	\subsubsection{Pre- and post-test surveys}
+	
+	\subsubsection{Survey elements}
+	All survey elements (which `pop up' in the centre of the browser) have an \texttt{id} attribute, for retrieval of the responses in post-processing of the results, and a \texttt{mandatory} attribute which, if set to ``true'', requires the subject to respond before they can continue.
+	
+	\begin{description}
+	\item[statement] Simply shows text to the subject until `Next' or `Start' is clicked.
+	\item[question] Expects a text answer (in a text box). Has the \texttt{boxsize} argument: set to ``large'' or ``huge'' for a bigger box size.
+	\item[number] Expects a numerical value. The attribute \texttt{min="0"} specifies the minimum value - in this case the answer must be strictly positive before the subject can continue.
+	\item[radio] Radio buttons.
+	\item[checkbox] Checkboxes.\\
+	\end{description}
+	
+	\textbf{Example usage} (the \texttt{id} values and radio options below are illustrative):\\
+	
+	\lstset{
+		basicstyle=\ttfamily,
+		columns=fullflexible,
+		showstringspaces=false,
+		commentstyle=\color{grey}\upshape
+	}
+	
+	\lstdefinelanguage{XML}
+	{
+		morestring=[b]",
+		morestring=[s]{>}{<},
+		morecomment=[s]{<!--}{-->},
+		stringstyle=\color{black} \bfseries,
+		identifierstyle=\color{darkblue} \bfseries,
+		keywordstyle=\color{cyan} \bfseries,
+		morekeywords={xmlns,version,type},
+		breaklines=true% list your attributes here
+	}
+	\scriptsize
+	\lstset{language=XML}
+	
+	\begin{lstlisting}
+	<!-- ids and option names are placeholders -->
+	<pretest>
+		<question id="location" mandatory="true">Please enter your location. (example mandatory text question)</question>
+		<number id="age" min="0">Please enter your age (example non-mandatory number question)</number>
+		<radio id="rating">
+			Please rate this interface (example radio button question)
+			<option name="bad"/>
+			<option name="neutral"/>
+			<option name="good"/>
+		</radio>
+	</pretest>
+	<posttest>
+		<statement>Thank you for taking this listening test. Please click 'Submit' and your results will appear in the 'saves/' folder.</statement>
+	</posttest>
+	\end{lstlisting}
+	
+	
+	\subsection{Randomisation}
+	
+	\subsubsection{Randomisation of configuration XML files}
+	% how to
+	% explain how this is implemented in the pythonServer
+	%Nick? already implemented in the PHP?
+	
+	
+	\subsubsection{Randomisation of page order}
+	
+	
+	\subsubsection{Randomisation of axis order}
+	
+	\subsubsection{Randomisation of fragment order}
+	
+	
+	\subsubsection{Randomisation of initial slider position}
+	
+	% /subsubsection{Randomisation of survey question order}
+	% should be an attribute of the individual 'pretest' and 'posttest' elements
+	% uncomment once we have it
+	
+	\subsection{Looping}
+	Loops the fragments.
+	% how to enable?
+	If the fragments are not of equal length initially, they are padded with zeros to equal length, so that looping does not cause the fragments to go out of sync relative to each other.
+	
+	Note that in looped mode, fragments cannot be played until all fragments are loaded, as the engine needs to assess whether all fragments are of equal length (and pad them if not) before looping can start. %Nick? Is this accurate?
+	
+	\subsection{Sample rate}
+	If you require the test to be conducted at a certain sample rate (i.e. you do not tolerate resampling of the elements to correspond with the system's sample rate), add \texttt{sampleRate="96000"} - where ``96000'' can be any supported sample rate - so that a warning message alerts the subject when the system's sample rate differs from this enforced sample rate. This of course means that within one test, all enforced sample rates must be equal, as it is impossible to change the system's sample rate during the test (even if you were to change it manually, the browser would have to be restarted for the change to take effect).
+	
+	\subsection{Scrubber bar}
+	The scrubber bar, or transport bar (the visualisation of the playhead, showing the elapsed time and the portion of the file played so far), is at this point purely visual: it cannot be used to adjust the playhead position.
+	
+	Make visible by adding \texttt{