BrechtDeMan@765 1 \documentclass[11pt, oneside]{article} % use "amsart" instead of "article" for AMSLaTeX format
BrechtDeMan@765 2 \usepackage{geometry} % See geometry.pdf to learn the layout options. There are lots.
BrechtDeMan@765 3 \geometry{letterpaper} % ... or a4paper or a5paper or ...
BrechtDeMan@765 4 %\geometry{landscape} % Activate for rotated page geometry
BrechtDeMan@765 5 \usepackage[parfill]{parskip} % Activate to begin paragraphs with an empty line rather than an indent
\usepackage{graphicx} % Use pdf, png, jpg, or eps with pdflatex; use eps in DVI mode
BrechtDeMan@765 7 % TeX will automatically convert eps --> pdf in pdflatex
BrechtDeMan@765 8
BrechtDeMan@765 9 \usepackage{listings} % Source code
BrechtDeMan@798 10 \usepackage{xcolor} % colour (source code for instance)
BrechtDeMan@798 11 \definecolor{grey}{rgb}{0.1,0.1,0.1}
BrechtDeMan@798 12 \definecolor{darkblue}{rgb}{0.0,0.0,0.6}
BrechtDeMan@798 13 \definecolor{cyan}{rgb}{0.0,0.6,0.6}
BrechtDeMan@798 14
BrechtDeMan@765 15 \usepackage{amssymb}
BrechtDeMan@765 16 \usepackage{cite}
BrechtDeMan@765 17 \usepackage{hyperref} % Hyperlinks
BrechtDeMan@765 18 \usepackage[nottoc,numbib]{tocbibind} % 'References' in TOC
BrechtDeMan@765 19
BrechtDeMan@765 20 \graphicspath{{img/}} % Relative path where the images are stored.
BrechtDeMan@765 21
BrechtDeMan@765 22 \title{Instructions for \\ Web Audio Evaluation Tool}
BrechtDeMan@765 23 \author{Nicholas Jillings, Brecht De Man and David Moffat}
BrechtDeMan@765 24 \date{7 December 2015} % Activate to display a given date or no date
BrechtDeMan@765 25
BrechtDeMan@765 26 \begin{document}
BrechtDeMan@765 27 \maketitle
BrechtDeMan@765 28
These instructions cover the use of the Web Audio Evaluation Tool on Windows and Mac OS X platforms.
BrechtDeMan@798 30
We request that you acknowledge the authors and cite our work \cite{waet} when using this tool; see also CITING.txt.
BrechtDeMan@798 32
The tool is available in its entirety, including source code, at \url{https://code.soundsoftware.ac.uk/projects/webaudioevaluationtool/}, under the GNU General Public License v3.0 (\url{http://choosealicense.com/licenses/gpl-3.0/}); see also LICENSE.txt.
BrechtDeMan@798 34
BrechtDeMan@753 35 % TO DO: Linux (Android, iOS)
BrechtDeMan@765 36
BrechtDeMan@765 37 \tableofcontents
BrechtDeMan@765 38
BrechtDeMan@765 39 \clearpage
BrechtDeMan@765 40
BrechtDeMan@765 41 \section{Installation}
BrechtDeMan@798 42 Download the folder (\url{https://code.soundsoftware.ac.uk/hg/webaudioevaluationtool/archive/tip.zip}) and unzip in a location of your choice, or pull the source code from \url{https://code.soundsoftware.ac.uk/hg/webaudioevaluationtool} (Mercurial).
BrechtDeMan@765 43
BrechtDeMan@765 44 \subsection{Contents}
BrechtDeMan@765 45 The folder should contain the following elements: \\
BrechtDeMan@765 46
BrechtDeMan@765 47 \textbf{Main folder:}
BrechtDeMan@765 48 \begin{itemize}
BrechtDeMan@765 49 \item \texttt{analyse.html}: analysis and diagnostics of a set of result XML files
n@1163 50 \item \texttt{core.css, graphics.css, structure.css}: core style files (edit to change appearance)
\item \texttt{CITING.txt, LICENSE.txt, README.txt}: text files with, respectively, the citation which we ask you to include in any work where this tool or any portion thereof is used or modified; the license under which the software is shared; and a general readme file referring to these instructions.
BrechtDeMan@765 52 \item \texttt{core.js}: JavaScript file with core functionality
BrechtDeMan@765 53 \item \texttt{index.html}: webpage where interface should appear (includes link to test configuration XML)
BrechtDeMan@765 54 \item \texttt{jquery-2.1.4.js}: jQuery JavaScript Library
\item \texttt{loudness.js}: allows for automatic calculation of the loudness of Web Audio API Buffer objects, and returns gain values to correct for a target loudness or to match loudness between multiple objects
BrechtDeMan@765 56 \item \texttt{pythonServer.py}: webserver for running tests locally
BrechtDeMan@765 57 \item \texttt{pythonServer-legacy.py}: webserver with limited functionality (no automatic storing of output XML files)
BrechtDeMan@765 58 \item \texttt{save.php}: PHP script to store result XML files to web server\\
BrechtDeMan@765 59 \end{itemize}
BrechtDeMan@765 60 \textbf{Documentation (./docs/)}
BrechtDeMan@765 61 \begin{itemize}
BrechtDeMan@798 62 \item \href{http://c4dm.eecs.qmul.ac.uk/dmrn/events/dmrnp10/#posters}{DMRN+10}: PDF and \LaTeX source of poster for 10\textsuperscript{th} Digital Music Research Network One-Day workshop (``soft launch'')
BrechtDeMan@765 63 \item Instructions: PDF and \LaTeX source of these instructions
BrechtDeMan@765 64 \item Project Specification Document (\LaTeX/PDF)
BrechtDeMan@765 65 \item Results Specification Document (\LaTeX/PDF)
\item SMC15: PDF and \LaTeX source of 12\textsuperscript{th} Sound and Music Computing Conference paper \cite{waet}
\item WAC2016: PDF and \LaTeX source of 2\textsuperscript{nd} Web Audio Conference paper\\
BrechtDeMan@765 68 \end{itemize}
BrechtDeMan@765 69 \textbf{Example project (./example\_eval/)}
BrechtDeMan@765 70 \begin{itemize}
\item An example of what the set-up XML should look like, with example audio files 0.wav--10.wav, which are short recordings at 44.1 kHz, 16 bit of a woman saying the corresponding number (useful for testing randomisation and general familiarisation with the interface).\\
BrechtDeMan@765 72 \end{itemize}
\textbf{Interface files (./interfaces/)}
n@1163 74 \begin{itemize}
n@1163 75 \item Each interface class has a JavaScript file and an optional CSS style file. These are loaded as needed.
n@1163 76 \end{itemize}
n@1163 77
BrechtDeMan@765 78 \textbf{Output files (./saves/)}
BrechtDeMan@765 79 \begin{itemize}
BrechtDeMan@765 80 \item The output XML files of tests will be stored here by default by the \texttt{pythonServer.py} script.\\
BrechtDeMan@765 81 \end{itemize}
BrechtDeMan@765 82 \textbf{Auxiliary scripts (./scripts/)}
BrechtDeMan@765 83 \begin{itemize}
BrechtDeMan@765 84 \item Helpful Python scripts for extraction and visualisation of data.\\
BrechtDeMan@765 85 \end{itemize}
BrechtDeMan@765 86 \textbf{Test creation tool (./test\_create/)}
BrechtDeMan@765 87 \begin{itemize}
BrechtDeMan@765 88 \item Webpage for easily setting up your own test without having to delve into the XML.\\
BrechtDeMan@765 89 \end{itemize}
BrechtDeMan@765 90
BrechtDeMan@798 91 \subsection{Compatibility}
BrechtDeMan@765 92 As Microsoft Internet Explorer doesn't support the Web Audio API\footnote{\url{http://caniuse.com/\#feat=audio-api}}, you will need another browser like Google Chrome, Safari or Firefox (all three are tested and confirmed to work).
BrechtDeMan@798 93
Firefox does not currently support bit depths other than 8 or 16 bit for PCM wave files. In the future, this will throw a warning message to tell the user that their content is being quantised automatically. %Nick? Right? To be removed if and when actually implemented
BrechtDeMan@765 95
The tool is platform-independent and works in any browser that supports the Web Audio API. It does not require any specific, proprietary software. However, if the tool is hosted locally (i.e. you are not hosting it on an actual webserver), you will need Python (2.7), a free programming language (see the `Local test' section below).
BrechtDeMan@765 97
BrechtDeMan@798 98 \clearpage
BrechtDeMan@765 99
BrechtDeMan@765 100
BrechtDeMan@765 101 \section{Test setup}
BrechtDeMan@765 102
BrechtDeMan@765 103 \subsection{Sample rate}
BrechtDeMan@765 104 Depending on how the experiment is set up, audio is resampled automatically (the Web Audio default) or the sample rate is enforced. In the latter case, you will need to make sure that the sample rate of the system is equal to the sample rate of these audio files. For this reason, all audio files in the experiment will have to have the same sample rate.
BrechtDeMan@765 105
BrechtDeMan@765 106 Always make sure that all other digital equipment in the playback chain (clock, audio interface, digital-to-analog converter, ...) is set to this same sample rate.
BrechtDeMan@765 107
BrechtDeMan@765 108 Note that upon changing the sampling rate, the browser will have to be restarted for the change to take effect.
BrechtDeMan@765 109
BrechtDeMan@765 110 \subsubsection{Mac OS X}
BrechtDeMan@765 111 To change the sample rate in Mac OS X, go to \textbf{Applications/Utilities/Audio MIDI Setup} or find this application with Spotlight (see Figure \ref{fig:audiomidisetup}). Then select the output of the audio interface you are using and change the `Format' to the appropriate number. Also make sure the bit depth and channel count are as desired.
BrechtDeMan@765 112 If you are using an external audio interface, you may have to go to the preference pane of that device to change the sample rate.
BrechtDeMan@765 113
BrechtDeMan@765 114 Also make sure left and right channel gains are equal, as some applications alter this without changing it back, leading to a predominantly louder left or right channel. See Figure \ref{fig:audiomidisetup} for an example where the channel gains are different.
BrechtDeMan@765 115
BrechtDeMan@765 116 \begin{figure}[tb]
BrechtDeMan@765 117 \centering
BrechtDeMan@765 118 \includegraphics[width=.65\textwidth]{img/audiomidisetup.png}
BrechtDeMan@765 119 \caption{The Audio MIDI Setup window in Mac OS X}
BrechtDeMan@765 120 \label{fig:audiomidisetup}
BrechtDeMan@765 121 \end{figure}
BrechtDeMan@765 122
BrechtDeMan@765 123 \subsubsection{Windows}
BrechtDeMan@765 124 To change the sample rate in Windows, right-click on the speaker icon in the lower-right corner of your desktop and choose `Playback devices'. Right-click the appropriate playback device and click `Properties'. Click the `Advanced' tab and verify or change the sample rate under `Default Format'. % NEEDS CONFIRMATION
BrechtDeMan@765 125 If you are using an external audio interface, you may have to go to the preference pane of that device to change the sample rate.
BrechtDeMan@765 126
BrechtDeMan@765 127 \subsection{Local test}
BrechtDeMan@765 128 If the test is hosted locally, you will need to run the local webserver provided with this tool.
BrechtDeMan@765 129
nicholas@809 130 \subsubsection{Mac OS X \& Linux}
BrechtDeMan@765 131
nicholas@809 132 On Mac OS X, Python comes preinstalled, as with most Unix/Linux distributions.
BrechtDeMan@765 133
Open the Terminal (find it in \textbf{Applications/Utilities/Terminal} or via Spotlight), and go to the folder you downloaded. To do this, type \texttt{cd [folder]}, where \texttt{[folder]} is the folder containing the \texttt{pythonServer.py} script you downloaded. For instance, if the location is \texttt{/Users/John/Documents/test/}, then type
BrechtDeMan@765 135
BrechtDeMan@765 136 \texttt{cd /Users/John/Documents/test/}
BrechtDeMan@765 137
BrechtDeMan@765 138 Then hit enter and run the Python script by typing
BrechtDeMan@765 139
BrechtDeMan@765 140 \texttt{python pythonServer.py}
BrechtDeMan@765 141
BrechtDeMan@765 142 and hit enter again. See also Figure \ref{fig:terminal}.
BrechtDeMan@765 143
BrechtDeMan@765 144 \begin{figure}[htbp]
BrechtDeMan@765 145 \begin{center}
BrechtDeMan@765 146 \includegraphics[width=.75\textwidth]{pythonServer.png}
BrechtDeMan@765 147 \caption{Mac OS X: The Terminal window after going to the right folder (\texttt{cd [folder\_path]}) and running \texttt{pythonServer.py}.}
BrechtDeMan@765 148 \label{fig:terminal}
BrechtDeMan@765 149 \end{center}
BrechtDeMan@765 150 \end{figure}
BrechtDeMan@765 151
Alternatively, you can simply type \texttt{python} (followed by a space) and drag the file into the Terminal window from Finder. % DOESN'T WORK YET
BrechtDeMan@765 153
You can leave this running throughout the different experiments (i.e. leave the Terminal open). Once running, the Terminal will report the URL to type into your browser to initiate the test; usually this is \texttt{http://localhost:8000/}.
BrechtDeMan@765 155
BrechtDeMan@765 156 To start the test, open the browser and type
BrechtDeMan@765 157
BrechtDeMan@765 158 \texttt{localhost:8000}
BrechtDeMan@765 159
BrechtDeMan@765 160 and hit enter. The test should start (see Figure \ref{fig:test}).
BrechtDeMan@765 161
To quit the server, either close the Terminal window or press Ctrl+C on your keyboard to shut the server down.
BrechtDeMan@765 163
BrechtDeMan@765 164 \subsubsection{Windows}
BrechtDeMan@765 165
On Windows, Python 2.7 is not generally preinstalled and therefore has to be downloaded\footnote{\url{https://www.python.org/downloads/windows/}} and installed in order to run scripts such as the local webserver, which is necessary if the tool is hosted locally.
BrechtDeMan@765 167
BrechtDeMan@765 168 Simply double click the Python script \texttt{pythonServer.py} in the folder you downloaded.
BrechtDeMan@765 169
BrechtDeMan@765 170 You may see a warning like the one in Figure \ref{fig:warning}. Click `Allow access'.
BrechtDeMan@765 171
BrechtDeMan@765 172 \begin{figure}[htbp]
BrechtDeMan@765 173 \begin{center}
BrechtDeMan@765 174 \includegraphics[width=.6\textwidth]{warning.png}
BrechtDeMan@765 175 \caption{Windows: Potential warning message when executing \texttt{pythonServer.py}.}
BrechtDeMan@765 176 \label{fig:warning}
BrechtDeMan@765 177 \end{center}
BrechtDeMan@765 178 \end{figure}
BrechtDeMan@765 179
The process should now start in the Command Prompt window that opens; see Figure \ref{fig:python}.
BrechtDeMan@765 181
BrechtDeMan@765 182 \begin{figure}[htbp]
BrechtDeMan@765 183 \begin{center}
BrechtDeMan@765 184 \includegraphics[width=.75\textwidth]{python.png}
BrechtDeMan@765 185 \caption{Windows: The Command Prompt after running \texttt{pythonServer.py} and opening the corresponding website.}
BrechtDeMan@765 186 \label{fig:python}
BrechtDeMan@765 187 \end{center}
BrechtDeMan@765 188 \end{figure}
BrechtDeMan@765 189
BrechtDeMan@765 190 You can leave this running throughout the different experiments (i.e. leave the Command Prompt open).
BrechtDeMan@765 191
BrechtDeMan@765 192 To start the test, open the browser and type
BrechtDeMan@765 193
BrechtDeMan@765 194 \texttt{localhost:8000}
BrechtDeMan@765 195
BrechtDeMan@765 196 and hit enter. The test should start (see Figure \ref{fig:test}).
BrechtDeMan@765 197
BrechtDeMan@765 198 \begin{figure}[htb]
BrechtDeMan@765 199 \begin{center}
BrechtDeMan@765 200 \includegraphics[width=.8\textwidth]{test.png}
BrechtDeMan@765 201 \caption{The start of the test in Google Chrome on Windows 7.}
BrechtDeMan@765 202 \label{fig:test}
BrechtDeMan@765 203 \end{center}
BrechtDeMan@765 204 \end{figure}
BrechtDeMan@765 205
BrechtDeMan@765 206 If at any point in the test the participant reports weird behaviour or an error of some kind, or the test needs to be interrupted, please notify the experimenter and/or refer to Section \ref{sec:troubleshooting}.
BrechtDeMan@765 207
When the test is over (the subject should see a message to that effect, and click `Submit' one last time), the output XML file containing all collected data should have appeared in `saves/'. The names of these files are `test-0.xml', `test-1.xml', etc., in ascending order; the Terminal or Command Prompt running the local webserver displays the corresponding file name. If such a file did not appear, please again refer to Section \ref{sec:troubleshooting}.
BrechtDeMan@765 209
BrechtDeMan@765 210 It is advised that you back up these results as often as possible, as a loss of this data means that the time and effort spent by the subject(s) has been in vain. Save the results to an external or network drive, and/or send them to the experimenter regularly.
BrechtDeMan@765 211
BrechtDeMan@765 212 To start the test again for a new participant, you do not need to close the browser or shut down the Terminal or Command Prompt. Simply refresh the page or go to \texttt{localhost:8000} again.
BrechtDeMan@765 213
BrechtDeMan@765 214
BrechtDeMan@765 215 \subsection{Remote test}
Put all files on a web server which supports PHP. This allows the `save.php' script to store the XML result files in the `saves/' folder.
BrechtDeMan@798 217
BrechtDeMan@798 218 Make sure the \texttt{projectReturn} attribute of the \texttt{setup} node is set to the \texttt{save.php} script.
BrechtDeMan@798 219
BrechtDeMan@798 220 Then, just go to the URL of the corresponding HTML file, e.g. \texttt{http://server.com/path/to/WAET/index.html?url=test/my-test.xml}. If storing on the server doesn't work at submission (e.g. if the \texttt{projectReturn} attribute isn't properly set), the result XML file will be presented to the subject on the client side, as a `Save file' link.
BrechtDeMan@798 221
n@1163 222 \subsection{Load a test / Multiple test documents}
By default the index page will load a demo page of tests. To automatically load a test document, you need to append its location to the URL. If your URL is normally \texttt{http://localhost:8000/index.html}, you would append the following: \texttt{?url=/path/to/your/test.xml}. Replace the fields with your actual path; the path is relative to the running directory, so if you have a test called \texttt{project.xml} in the directory \texttt{example\_eval}, you would append \texttt{?url=/example\_eval/project.xml}.
BrechtDeMan@798 224
BrechtDeMan@765 225 \clearpage
BrechtDeMan@798 226
BrechtDeMan@798 227 \section{Interfaces}
BrechtDeMan@798 228
BrechtDeMan@798 229 The Web Audio Evaluation Tool comes with a number of interface styles, each of which can be customised extensively, either by configuring them differently using the many optional features, or by modifying the JavaScript files.
BrechtDeMan@798 230
To set the interface style for the whole test, set the \texttt{interface} attribute of the \texttt{setup} node, for instance \texttt{interface="APE"}, where \texttt{"APE"} is one of the interface names below.
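
For instance, a minimal sketch of a \texttt{setup} node selecting the APE interface could look as follows (the \texttt{projectReturn} value and the elided child nodes are placeholders, described in the Project XML section below):

\begin{lstlisting}
<setup interface="APE" projectReturn="save.php">
    <!-- survey, metric and interface child nodes go here -->
</setup>
\end{lstlisting}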
BrechtDeMan@798 232
BrechtDeMan@798 233 \subsection{APE}
The APE interface is based on \cite{ape}, and consists of one or more axes, each corresponding to an attribute to be rated, on which markers are placed. As such, it is a multiple stimulus interface where (for each dimension or attribute) all elements are on one axis so that they can be maximally compared against each other, as opposed to rated individually or with regard to a single reference.
BrechtDeMan@798 235 It also contains an optional text box for each element, to allow for clarification by the subject, tagging, and so on.
BrechtDeMan@798 236
BrechtDeMan@798 237 \subsection{MUSHRA}
n@1163 238 This is a straightforward implementation of \cite{mushra}, especially common for the rating of audio quality, for instance for the evaluation of audio codecs. This can also operate any vertical slider style test and does not necessarily have to match the MUSHRA specification.
n@1163 239
n@1163 240 \subsection{AB}
n@1163 241 Performs a pairwise comparison, but supports ABX and n-way comparison (in the example we demonstrate it performing a 7-way comparison).
n@1163 242
BrechtDeMan@1195 243 \subsection{Discrete/Likert}
Each audio element is given a discrete set of values based on the number of slider options specified. For instance, Likert specifies 5 values and therefore each audio element must be rated as one of those 5 values.
n@1163 245
n@1163 246 \subsection{ACR/CCR/DCR/horizontal}
n@1163 247 Creates the same interfaces as MUSHRA except the sliders are horizontal, not vertical.
BrechtDeMan@798 248
BrechtDeMan@765 249
BrechtDeMan@798 250 \clearpage
BrechtDeMan@798 251
n@1163 252 \section{Project XML}
n@1163 253
Each test is defined by its project XML file; examples of these can be found in the ./example\_eval/ directory.
n@1163 255
n@1163 256 In the XML there are several nodes which must be defined:
n@1163 257 \begin{itemize}
n@1163 258 \item \texttt{<waet>}: The root node.
n@1163 259 \item \texttt{<setup>}: The first child node, defines whole-test parameters
n@1163 260 \item \texttt{<page>}: Specifies a test page, attached \emph{after} the \texttt{<setup>}.
n@1163 261 \item \texttt{<audioelement>}: Specifies an audio element.
n@1163 262 \end{itemize}
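
A minimal project skeleton illustrating this structure is sketched below; the ids, file names and attribute values are placeholders, and the contents of the \texttt{<survey>}, \texttt{<metric>} and \texttt{<interface>} nodes are elided (they are described in the following subsections).

\begin{lstlisting}
<?xml version="1.0" encoding="utf-8"?>
<waet xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:noNamespaceSchemaLocation="test-schema.xsd">
    <setup interface="APE" projectReturn="save.php">
        <survey location="pre"><!-- pre-test survey entries --></survey>
        <metric><!-- enabled metrics --></metric>
        <interface><!-- global interface options --></interface>
    </setup>
    <page id="page-1" hostURL="example_eval/">
        <interface><!-- axis title, checks, scales --></interface>
        <audioelement id="elem-1" url="0.wav"/>
    </page>
</waet>
\end{lstlisting}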
n@1163 263
The test uses XML validation, so the ordering of nodes is important to pass this validation. Some nodes also have specific attributes which must be set, and which may have to follow a certain format. This is done so that error checking can be performed both quickly and succinctly, with easy-to-find errors, before loading and running a test session.
n@1163 265
n@1163 266 Before identifying any features, this part will walk you through the available nodes, their function and their attributes.
n@1163 267
n@1163 268 \subsection{Root}
The root node is \texttt{<waet>}; it must have the following attributes:
n@1163 270
n@1163 271 \texttt{xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"}
n@1163 272
n@1163 273 \texttt{xsi:noNamespaceSchemaLocation="test-schema.xsd"}.
n@1163 274
n@1163 275 This will ensure it is checked against the XML schema for validation.
n@1163 276
n@1163 277 \subsection{Set up}
The first child node, \texttt{<setup>}, specifies any one-time and global parameters. It takes the following attributes:
n@1163 279 \begin{itemize}
n@1163 280 \item \texttt{interface}: String, mandatory, specifies the interface to load
\item \texttt{projectReturn}: URL, mandatory, specifies the return point. Can be a third-party server or the local server. Set to null to disable automatic saving. Specifying ``save.php'' will trigger the return if either the PHP or Python server is used. On error, it will always fall back to presenting the save link on the page.
n@1163 282 \item \texttt{randomiseOrder}: Boolean, optional, if true it will randomise the order of the test pages. Default is false.
\item \texttt{testPages}: non-negative integer, optional. Specifies the number of test pages to actually test with. Combined with \texttt{randomiseOrder} set to true, this gives each participant a random subset of test pages from the given pool of \texttt{<page>} nodes. Specifying 0 disables this option; default is 0.
n@1163 284 \item \texttt{loudness}: non-positive integer, optional. Set the default LUFS target value. See \ref{sec:loudness} for more.
n@1163 285 \item \texttt{sampleRate}: positive integer, optional. If set, the sample rate reported by the Web Audio API must match this number. See \ref{sec:samplerate}.
n@1163 286 \end{itemize}
n@1163 287
The \texttt{<setup>} node takes the following child nodes; note that these must appear in this order:
n@1163 289 \begin{itemize}
\item \texttt{<survey>}: Min of 0, max of 2 occurrences. See \ref{sec:survey}
n@1163 291 \item \texttt{<metric>}: Must appear only once.
n@1163 292 \item \texttt{<interface>}: Must appear only once.
n@1163 293 \end{itemize}
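
Putting this together, a \texttt{<setup>} node could be sketched as follows (attribute values are illustrative and child node contents are elided):

\begin{lstlisting}
<setup interface="MUSHRA" projectReturn="save.php"
       randomiseOrder="true" testPages="2"
       loudness="-23" sampleRate="44100">
    <survey location="pre"><!-- shown before the first test page --></survey>
    <survey location="post"><!-- shown after the last test page --></survey>
    <metric><!-- enabled metrics, see the Metrics section --></metric>
    <interface><!-- global checks and interface options --></interface>
</setup>
\end{lstlisting}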
n@1163 294
n@1163 295 \subsection{Page}
n@1165 296 \label{sec:page}
These are the only other first-level child nodes; each specifies a test page and takes the following attributes:
n@1163 298 \begin{itemize}
\item \texttt{id}: ID, mandatory. A string which must be unique across the entire XML. It is used to identify the page on test completion, as pages are returned in the results in the order they appeared, not the order specified.
\item \texttt{hostURL}: URL, mandatory. Used in conjunction with the \texttt{<audioelement>} url to specify where the audio files are located. For instance, if all your files are in the directory \texttt{./test/} you can set this attribute to ``/test/'' and the \texttt{<audioelement>} url attribute only needs the file name. Set to ``'' if no hostURL prefix is desired.
n@1163 301 \item \texttt{randomiseOrder}: Boolean, optional. If true the audio fragments are presented randomly rather than the order specified. See \ref{sec:randomisation}. Default is false.
n@1163 302 \item \texttt{repeatCount}: non-negative integer, optional. Specify the number of times to repeat the test page (re-present). Each presentation will appear as an individual page in the results. Default is 0.
n@1163 303 \item \texttt{loop}: Boolean, optional. If true, the audio elements will loop synchronously with each other. See \ref{sec:looping}. Default is false.
n@1163 304 \item \texttt{showElementComments}: Boolean, optional. If true then there will be a comment box on the test page for each audio element presented, see \ref{sec:commentboxes}.
n@1163 305 \item \texttt{loudness}: non-positive integer, optional. Set the LUFS target value for this page. Supersedes the \texttt{<setup>} loudness attribute for this page. See \ref{sec:loudness} for more.
n@1163 306 \end{itemize}
n@1163 307
The \texttt{<page>} node takes the following child nodes; note that these must appear in this order:
n@1163 309 \begin{itemize}
\item \texttt{<title>}: Appears once or not at all. The text content of this node specifies the title of the test page, for instance \texttt{<title>John Doe's Test</title>}
\item \texttt{<commentboxprefix>}: Appears once or not at all. The text content specifies the prefix of the comment boxes, see \ref{sec:commentboxes}.
n@1163 312 \item \texttt{<interface>}: Must appear only once.
n@1163 313 \item \texttt{<audioelement>}: Minimum of one. Specifies an audio element, see \ref{sec:audioelement}.
\item \texttt{<commentquestion>}: Min of 0, max unlimited occurrences. See \ref{sec:commentboxes}.
\item \texttt{<survey>}: Min of 0, max of 2 occurrences. See \ref{sec:survey}
n@1163 316 \end{itemize}
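
For example, a test page with two fragments could be sketched as follows (ids, file names and attribute values are illustrative):

\begin{lstlisting}
<page id="page-1" hostURL="example_eval/" randomiseOrder="true"
      loop="true" showElementComments="true">
    <title>Example test page</title>
    <commentboxprefix>Comment on fragment</commentboxprefix>
    <interface><!-- axis title, checks and scales --></interface>
    <audioelement id="frag-A" url="0.wav"/>
    <audioelement id="frag-B" url="1.wav"/>
    <survey location="post"><!-- shown after this page is submitted --></survey>
</page>
\end{lstlisting}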
n@1163 317
n@1163 318 \subsection{Survey}
n@1163 319 \label{sec:survey}
These specify any survey items to be presented. There can be a maximum of two of these per \texttt{<setup>} or \texttt{<page>} node. They have one attribute, \texttt{location}, which must be set to one of the following: before, pre, after or post; here before is equivalent to pre, and after to post. This specifies whether the survey appears before or after the node it is associated with. When a child of \texttt{<setup>}, pre/before is shown before the first test page and after/post after completing the last test page. When a child of \texttt{<page>}, pre/before is shown before the test page commences and after/post once the test page has been submitted.
n@1163 321
The survey node takes as its only children \texttt{<surveyentry>} nodes, of which there can be any number.
n@1163 323
n@1163 324 \subsubsection{Survey Entry}
n@1163 325 These nodes have the following attributes, which vary depending on the survey type wanted:
n@1163 326 \begin{itemize}
n@1163 327 \item \texttt{id}: ID, mandatory. Must be unique across the entire XML, used to identify the response in the results.
n@1163 328 \item \texttt{type}: String, mandatory. Must be one of the following: statement, question, checkbox, radio or number. This defines the type to show.
\item \texttt{mandatory}: Boolean, optional. Defines whether the entry must have a response. Does not apply to statements. Default is false.
n@1163 330 \item \texttt{min}: Number, optional. Only applies when \texttt{type="number"}, the minimum valid response.
n@1163 331 \item \texttt{max}: Number, optional. Only applies when \texttt{type="number"}, the maximum valid response.
n@1163 332 \item \texttt{boxsize}: String, optional. Only applies when \texttt{type="question"} and must be one of the following: normal (default), small, large or huge.
n@1163 333 \end{itemize}
n@1163 334
n@1163 335 The nodes have the following children, which vary depending on the survey type wanted.
n@1163 336 \begin{itemize}
n@1163 337 \item \texttt{<statement>}: Must appear only once. Its text content specifies the text to appear as the statement or question for the user to respond to.
n@1163 338 \item \texttt{<option>}: Only valid if the parent node has the attribute \texttt{type} set to checkbox or radio. Has attribute \texttt{name} to identify the selected option in the results. The text content is the text to show next to the radio/checkbox.
n@1163 339 \end{itemize}
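
As an illustration, a pre-test survey with a statement, a mandatory number entry and a radio question could be sketched as follows (ids, names and wording are placeholders):

\begin{lstlisting}
<survey location="pre">
    <surveyentry id="intro" type="statement">
        <statement>Welcome to the listening test.</statement>
    </surveyentry>
    <surveyentry id="age" type="number" mandatory="true" min="18" max="100">
        <statement>Please enter your age.</statement>
    </surveyentry>
    <surveyentry id="experience" type="radio" mandatory="true">
        <statement>How experienced a listener are you?</statement>
        <option name="novice">Novice</option>
        <option name="trained">Trained</option>
        <option name="expert">Expert</option>
    </surveyentry>
</survey>
\end{lstlisting}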
n@1163 340
n@1163 341 \subsection{Interface}
This node specifies any interface-specific options and test parameters. It has an optional \texttt{name} attribute used to set the axis name (where applicable), such as for the multi-axis APE interface. Specifying multiple interface nodes in a \texttt{<page>} node will trigger multiple axes where applicable; otherwise only the \emph{first node} will be used and the rest ignored.
n@1163 343
The node has the following children; note that they must appear in the following order:
n@1163 345 \begin{itemize}
\item \texttt{title}: Min 0, max 1 occurrence. The text content specifies the name of the axis as shown to the user.
\item \texttt{interfaceoption}: Min 0, max unbounded. Specifies the interface options. See \ref{sec:interfaceoption}.
\item \texttt{scales}: Min 0, max 1 occurrence. Contains \texttt{<scalelabel>} nodes which define the displayed scales. See \ref{sec:scales}.
n@1163 349 \end{itemize}
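
A sketch of an \texttt{<interface>} node for a single rating axis is shown below; note that the \texttt{position} attribute on \texttt{<scalelabel>} and the label texts are illustrative assumptions, not taken from the schema.

\begin{lstlisting}
<interface name="quality">
    <title>Audio quality</title>
    <interfaceoption type="check" name="fragmentPlayed"/>
    <interfaceoption type="show" name="playhead"/>
    <scales>
        <scalelabel position="0">Bad</scalelabel>
        <scalelabel position="100">Excellent</scalelabel>
    </scales>
</interface>
\end{lstlisting}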
n@1163 350
n@1163 351 \subsection{Audio Element}
n@1163 352 \label{sec:audioelement}
These appear as children of the \texttt{page} node. Each specifies an individual interface fragment to display. Multiple fragments can reference the same file (allowing for repetition with different parameters or blind doubles). The node has the following attributes:
n@1163 354 \begin{itemize}
n@1163 355 \item \texttt{id}: ID, mandatory. Must be unique across the test page. Used to identify the specific fragment in the results.
n@1163 356 \item \texttt{url}: URL, mandatory. Used with the parent \texttt{page} nodes' \texttt{hostURL} attribute to get the full url of the audio file to load.
n@1163 357 \item \texttt{gain}: Float, optional. Specify the gain in decibels to apply to the node after loudness normalisation. Default is 0.
\item \texttt{type}: String, optional. Must be one of the following: normal (default when not specified), anchor, reference or outside-reference. Normal, anchor and reference are presented as normal; outside-reference presents the node as a separate interface option.
n@1163 359 \item \texttt{marker}: Integer between 0 and 100, optional. Only used when \texttt{type="anchor"|"reference"}. See \ref{sec:referencesandanchors}.
n@1163 360 \end{itemize}
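
For instance (file names and ids are illustrative; note the two fragments referencing the same file to create a blind double):

\begin{lstlisting}
<audioelement id="mix-A" url="mix-A.wav" gain="-3"/>
<audioelement id="mix-B" url="mix-B.wav"/>
<audioelement id="mix-B-double" url="mix-B.wav"/>
<audioelement id="outside-ref" url="original.wav" type="outside-reference"/>
\end{lstlisting}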
n@1163 361
n@1163 362
BrechtDeMan@798 363 \section{Features}
BrechtDeMan@798 364
BrechtDeMan@810 365 This section covers the different features implemented in the Web Audio Evaluation Tool, how to use them, and what to know about them.
BrechtDeMan@798 366
BrechtDeMan@798 367 Unless otherwise specified, \emph{each} feature described here is optional, i.e. it can be enabled or disabled and adjusted to some extent.
BrechtDeMan@798 368
n@1163 369 As the example project showcases (nearly) all of these features, please refer to its configuration XML document for a demonstration of how to enable and adjust them.
BrechtDeMan@798 370
\subsection{Interface options}
\label{sec:interfaceoption}
The interface node has interface option children which are used to specify modifications to the test environment. These are divided into two categories: check and show. Check options specify conditions which must be met before a page can be completed, such as checking that all fragments have been played or that all fragments have a comment. Show options display an optional on-page element or control, such as the playhead or master volume.
n@1163 373
Check items have the attribute ``type'' set to ``check''. The following list gives the string to assign to the ``name'' attribute, along with a description of each check.
n@1163 375 \begin{itemize}
n@1163 376 \item \texttt{fragmentPlayed}: Checks that all fragments have been at least partially played
n@1163 377 \item \texttt{fragmentFullPlayback}: Checks that all fragments have been fully played. \emph{NOTE:} This will always clear if the page is looping as it is not possible to know every sample has been played.
n@1163 378 \item \texttt{fragmentMoved}: Checks that all fragments have been moved. This is interface dependent, for instance on AB this will always clear as there is no movement.
\item \texttt{fragmentComments}: Checks that all fragments have a comment. Will clear if there are no on-page comments, but with a console warning.
BrechtDeMan@1195 380 \item \texttt{scalerange}: Has two extra attributes ``min'' and ``max''. Checks that at least one element is below the min value and one element is above the max value.
n@1163 381 \end{itemize}
BrechtDeMan@1195 382 % QUANTISATION OF THE SCALE: to be implemented?
BrechtDeMan@810 383
Show items have the attribute ``type'' set to ``show''. The following list gives the string to assign to the ``name'' attribute, along with a description.
n@1163 385 \begin{itemize}
\item \texttt{playhead}: Shows a playhead to the end user, indicating the current playback position in the file.
n@1163 387 \item \texttt{page-count}: Shows the current test page number and the total number of test pages.
n@1163 388 \item \texttt{volume}: Shows a master volume control to the user to manipulate the output gain of the page. This is tracked.
n@1163 389 \end{itemize}
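
Combining the two categories, the \texttt{<interface>} node could for example contain the following (a sketch; pick the checks and shows relevant to your test):

\begin{lstlisting}
<interface>
    <interfaceoption type="check" name="fragmentPlayed"/>
    <interfaceoption type="check" name="scalerange" min="25" max="75"/>
    <interfaceoption type="show" name="page-count"/>
    <interfaceoption type="show" name="volume"/>
</interface>
\end{lstlisting}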
BrechtDeMan@810 390
BrechtDeMan@810 391 \subsubsection{Multiple scales}
BrechtDeMan@810 392 In the case of multiple rating scales, e.g. when the stimuli are to be rated in terms of attributes `timbre' and `spatial impression', multiple interface nodes will have to be added, each specifying the title and annotations.
BrechtDeMan@810 393
BrechtDeMan@810 394 This is where the \texttt{interface}'s \texttt{name} attribute is particularly important: use this to retrieve the rating values, comments and metrics associated with the specified interface.
BrechtDeMan@810 395 If none is given, you can still use the automatically given \texttt{interface-id}, which is the interface number starting with 0 and corresponding to the order in which the rating scales appear.
BrechtDeMan@810 396
BrechtDeMan@798 397 \subsection{Randomisation}
n@1163 398 \label{sec:randomisation}
BrechtDeMan@810 399 [WORK IN PROGRESS]
BrechtDeMan@798 400
BrechtDeMan@798 401 \subsubsection{Randomisation of configuration XML files}
The Python server has a special function to automatically cycle through a list of test configurations. Instead of directly requesting an XML file, setting the url item in the browser URL to \texttt{pseudo.xml} will cycle through a list of XMLs. These XMLs must be in the local directory \texttt{./pseudo/}.
BrechtDeMan@798 403 % how to
BrechtDeMan@798 404 % explain how this is implemented in the pythonServer
nicholas@809 405 %Nick? already implemented in the PHP?
nicholas@809 406 % Needs to be implemented in PHP and automated better, will complete soon
BrechtDeMan@798 407
BrechtDeMan@798 408
\subsubsection{Randomisation of page order}
The page order randomisation is set by the \texttt{<setup>} node attribute \texttt{randomiseOrder}, for example \texttt{<setup ... randomiseOrder="true">...</setup>} will randomise the test page order. When not set, the default is \textbf{not} to randomise the test page order.
BrechtDeMan@798 411
BrechtDeMan@798 412 \subsubsection{Randomisation of axis order}
BrechtDeMan@798 413
BrechtDeMan@798 414 \subsubsection{Randomisation of fragment order}
The audio fragment randomisation is set by the \texttt{<page>} node attribute \texttt{randomiseOrder}, for example \texttt{<page ... randomiseOrder="true">...</page>} will randomise the order in which the fragments are presented on that page. When not set, the default is \textbf{not} to randomise the fragment order.
BrechtDeMan@798 416
BrechtDeMan@798 417 \subsubsection{Randomisation of initial slider position}
By default slider values are randomised on start. The MUSHRA interface supports setting the initial values of all sliders through the \texttt{<page>} attribute \texttt{initial-position}. This takes an integer between 0 and 100 to signify the slider position.
BrechtDeMan@798 419 % /subsubsection{Randomisation of survey question order}
BrechtDeMan@798 420 % should be an attribute of the individual 'pretest' and 'posttest' elements
BrechtDeMan@798 421 % uncomment once we have it
BrechtDeMan@798 422
BrechtDeMan@798 423 \subsection{Looping}
n@1163 424 \label{sec:looping}
n@1163 425 Looping enables the fragments to loop until stopped by the user. Looping is synchronous so all fragments start at the same time on each loop.
BrechtDeMan@1195 426 Individual test pages can have their playback looped by the \texttt{<page>} attribute \texttt{loop} with a value of ``true'' or ``false''.
BrechtDeMan@798 427 If the fragments are not of equal length initially, they are padded with zeros so that they are equal length, to enable looping without the fragments going out of sync relative to each other.
BrechtDeMan@798 428
n@1163 429 Note that fragments cannot be played until all page fragments are loaded when in looped mode, as the engine needs to know the length of each fragment to calculate the padding.
BrechtDeMan@798 430
BrechtDeMan@798 431 \subsection{Sample rate}
n@1163 432 \label{sec:samplerate}
If you require the test to be conducted at a certain sample rate (i.e. you do not tolerate resampling of the elements to match the system's sample rate), add \texttt{sampleRate="96000"} to the \texttt{setup} node, where ``96000'' can be any supported sample rate (in Hz). A warning message is then shown alerting the subject if their system's sample rate differs from this enforced sample rate. This is checked immediately after parsing, and the page stops loading any other elements if the check fails.
BrechtDeMan@798 434
BrechtDeMan@798 435 \subsection{Metrics}
nicholas@809 436 The \texttt{Metric} node, which contains the metrics to be tracked during the complete test, is a child of the \texttt{setup} node, and it could look as follows.
BrechtDeMan@798 437
BrechtDeMan@798 438 \begin{lstlisting}
BrechtDeMan@798 439 <Metric>
BrechtDeMan@798 440 <metricEnable>testTimer</metricEnable>
BrechtDeMan@798 441 <metricEnable>elementTimer</metricEnable>
BrechtDeMan@798 442 <metricEnable>elementInitialPosition</metricEnable>
BrechtDeMan@798 443 <metricEnable>elementTracker</metricEnable>
BrechtDeMan@798 444 <metricEnable>elementFlagListenedTo</metricEnable>
BrechtDeMan@798 445 <metricEnable>elementFlagMoved</metricEnable>
BrechtDeMan@798 446 <metricEnable>elementListenTracker</metricEnable>
BrechtDeMan@798 447 </Metric>
BrechtDeMan@798 448 \end{lstlisting}
BrechtDeMan@798 449
n@1165 450 When in doubt, err on the inclusive side, as one never knows which information is needed in the future. Most of these metrics are necessary for post-processing scripts such as timeline\_view\_movement.py. % Brecht: should perhaps list somewhere what metrics are required for which analysis scripts.
BrechtDeMan@798 451
BrechtDeMan@798 452 \subsubsection{Time test duration}
BrechtDeMan@798 453 \texttt{testTimer}\\
One per test page. Presents the total test time from the first playback on the test page to the submission of the test page (excluding the time spent on the pre-/post-test surveys). This is presented in the results as \texttt{<metricresult id="testTime"> 8.60299319727892 </metricresult>}. The time is in seconds.
BrechtDeMan@798 455
BrechtDeMan@798 456 \subsubsection{Time fragment playback}
BrechtDeMan@798 457 \texttt{elementTimer}\\
One per audio fragment per test page. This totals up the entire time the audio fragment has been listened to in this test, and is presented as \texttt{<metricresult name="enableElementTimer"> 1.0042630385487428 </metricresult>}. The time is in seconds.
BrechtDeMan@798 459
BrechtDeMan@798 460 \subsubsection{Initial positions}
BrechtDeMan@798 461 \texttt{elementInitialPosition}\\
nicholas@809 462 One per audio fragment per test page. Tracks the initial position of the sliders, especially relevant when these are randomised. Example result \texttt{<metricresult name="elementInitialPosition"> 0.8395522388059702 </metricresult>}.
BrechtDeMan@798 463
BrechtDeMan@798 464 \subsubsection{Track movements}
\texttt{elementTracker}\\
One per audio fragment per test page. Tracks the movement of each interface object. Each movement event records the time at which it occurred and the new value.
nicholas@809 467 \subsubsection{Which fragments listened to}
nicholas@809 468 \texttt{elementFlagListenedTo}\\
nicholas@809 469 One per audio fragment per test page. Boolean response, set to true if listened to.
nicholas@809 470 \subsubsection{Which fragments moved}
nicholas@809 471 \texttt{elementFlagMoved}\\
One per audio fragment per test page. Binary check of whether the marker corresponding to a particular fragment was moved at all throughout the experiment.
BrechtDeMan@798 473
\subsubsection{Track playback events}
nicholas@809 475 \texttt{elementListenTracker}\\
One per audio fragment per test page. Tracks the playback events of each audio element, pairing the time in the test when playback started and when it stopped; it also gives the buffer time positions.
BrechtDeMan@798 477
BrechtDeMan@798 478 \subsection{References and anchors}
n@1163 479 \label{sec:referencesandanchors}
The audio elements, \texttt{<audioelement>}, have the attribute \texttt{type}, which defaults to normal. Setting it to one of the following values has the effects described below.
nicholas@809 481 \subsubsection{Outside Reference}
Set type to `outside-reference'. This will place the object in a separate playback element clearly labelled as an outside reference. It is exempt from any movement checks but will still be included in any listening checks.
BrechtDeMan@798 483 \subsubsection{Hidden reference}
Set type to `reference'. The element will still be randomised as normal (if selected) and presented to the user. However, the element will have the `reference' type in the results so it can be found quickly. The reference can be required to be rated at or above a certain value before the test page can be completed, by setting the attribute `marker' to an integer between 0 and 100 representing the position it must be equal to or above.
BrechtDeMan@798 485 \subsubsection{Hidden anchor}
Set type to `anchor'. The element will still be randomised as normal (if selected) and presented to the user. However, the element will have the `anchor' type in the results so it can be found quickly. The anchor can be required to be rated at or below a certain value before the test page can be completed, by setting the attribute `marker' to an integer between 0 and 100 representing the position it must be equal to or below.
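
For example, a hidden reference required to be rated at or above 80, a hidden anchor required to be rated at or below 20, and an outside reference could be specified as follows (file names and ids are illustrative):

\begin{lstlisting}
<audioelement id="hidden-ref" url="reference.wav" type="reference" marker="80"/>
<audioelement id="hidden-anchor" url="anchor.wav" type="anchor" marker="20"/>
<audioelement id="outside-ref" url="reference.wav" type="outside-reference"/>
\end{lstlisting}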
BrechtDeMan@798 487
BrechtDeMan@798 488 \subsection{Checks}
BrechtDeMan@798 489 \label{sec:checks}
BrechtDeMan@798 490
BrechtDeMan@798 491 %blabla
BrechtDeMan@798 492 These checks are enabled in the \texttt{interface} node, which is a child of the \texttt{setup} node.
BrechtDeMan@798 493 \subsubsection{Playback checks}
BrechtDeMan@798 494 % what it does/is
Enforce playing each sample at least once, for at least a little bit (this check is satisfied even if you only play a tiny portion of the file), by alerting the user to which samples have not been played upon clicking `Submit'. When enabled, one cannot proceed to the next page, answer a survey question, or finish the test, before playing each sample at least once.
BrechtDeMan@798 496 % how to enable/disable
BrechtDeMan@798 497
BrechtDeMan@798 498 Alternatively, one can check whether the \emph{entire} fragment was listened to at least once.
BrechtDeMan@798 499 % how to enable
BrechtDeMan@798 500
Add \texttt{<check name="fragmentPlayed"/>} to the \texttt{interface} node (or use \texttt{name="fragmentFullPlayback"} for the full-playback variant).
BrechtDeMan@798 502
BrechtDeMan@798 503
BrechtDeMan@798 504 \subsubsection{Movement check}
Enforce moving each sample's marker at least once, even if only slightly, by alerting the user to which samples have not been moved upon clicking `Submit'. When enabled, one cannot proceed to the next page, answer a survey question, or finish the test, before moving each marker at least once.
BrechtDeMan@798 506 If there are several axes, the warning will specify which samples have to be moved on which axis.
BrechtDeMan@798 507
BrechtDeMan@798 508 Add \texttt{<check name="fragmentMoved"/>} to the \texttt{interface} node.
BrechtDeMan@798 509
BrechtDeMan@798 510 \subsubsection{Comment check}
BrechtDeMan@798 511 % How to enable/disable?
BrechtDeMan@798 512
BrechtDeMan@798 513 Enforce commenting, by alerting the user to which samples have not been commented on upon clicking `Submit'. When enabled, one cannot proceed to the next page, answer a survey question, or finish the test, before putting at least one character in each comment box.
BrechtDeMan@798 514
BrechtDeMan@798 515 Note that this does not apply to any extra (text, radio button, checkbox) elements, unless these have the `mandatory' option enabled. %Nick? is this extra 'mandatory' option implemented?
BrechtDeMan@798 516
BrechtDeMan@798 517 Add \texttt{<check name="fragmentComments"/>} to the \texttt{interface} node.
BrechtDeMan@798 518
BrechtDeMan@798 519 %ADD: how to add a custom comment box
BrechtDeMan@798 520
BrechtDeMan@798 521 \subsubsection{Scale use check}
It is possible to enforce a certain usage of the scale, meaning that at least one slider needs to be below and/or above a certain percentage of the scale.
BrechtDeMan@798 523
BrechtDeMan@798 524 Add \texttt{<check name="scalerange" min="25" max="75"/>} to the \texttt{interface} node.
BrechtDeMan@798 525
BrechtDeMan@798 526 \subsubsection{Note on the use of multiple rating axes}
BrechtDeMan@798 527 I.e. what if more than one axis? How to specify which axis the checks relate to? %Nick? to add?
BrechtDeMan@798 528
BrechtDeMan@798 529 \subsection{Platform information}
BrechtDeMan@798 530 % what does it do, what does it look like
BrechtDeMan@798 531 % limitations?
For troubleshooting and usage statistics purposes, information about the browser and the operating system is logged in the results XML file. This is especially useful in the case of remote tests, when it is not certain which operating system and/or browser were used. Note that this information is not always available and/or accurate, e.g. when the subject has taken steps to be more anonymous, so it should be treated as a guide only.
BrechtDeMan@810 533
BrechtDeMan@810 534 Example:
BrechtDeMan@810 535 \begin{lstlisting}
BrechtDeMan@810 536 <navigator>
BrechtDeMan@810 537 <platform>MacIntel</platform>
BrechtDeMan@810 538 <vendor>Google Inc.</vendor>
n@1163 539 <uagent>Mozilla/5.0 ... </uagent>
n@1165 540 <screen innerHeight="1900px" innerWidth="1920px"/>
BrechtDeMan@810 541 </navigator>
BrechtDeMan@810 542 \end{lstlisting}
BrechtDeMan@798 543
BrechtDeMan@798 544 \subsection{Gain}
It is possible to set the gain (in decibels) applied to the different audio elements, as an attribute of the \texttt{audioelement} nodes in the configuration XML file:
BrechtDeMan@798 546
\texttt{<audioelement url="sample-01.wav" gain="-6" id="sample01quieter" />}\\
Please note that there are no checks to detect whether the value was accidentally entered as a linear gain. This gain is applied \emph{after} any loudness normalisation.
BrechtDeMan@798 549
BrechtDeMan@798 550 \subsection{Loudness}
n@1163 551 \label{sec:loudness}
BrechtDeMan@798 552 % automatic loudness equalisation
BrechtDeMan@798 553 % guide to loudness.js
Each audio fragment has its loudness calculated on loading. The tool uses the EBU R 128 recommendation, following the ITU-R BS.1770-4 loudness calculations, to return the integrated loudness in LUFS. The attribute \texttt{loudness} sets the loudness target for the scope it is applied in: applying it in the \texttt{<setup>} node sets the loudness for all test pages, applying it in the \texttt{<page>} node sets the loudness for that page, and applying it in the \texttt{<audioelement>} node sets the loudness for that fragment. The scope is resolved locally, so if there is a loudness attribute on both the \texttt{<page>} and \texttt{<setup>} nodes, that test page will take the value associated with the \texttt{<page>}. The loudness attribute is specified in LUFS.
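
The following sketch illustrates the three scopes (values and file names are illustrative; mandatory child nodes are elided for brevity): the first page inherits the -23 LUFS target from the \texttt{<setup>} node, the second page overrides it with -26 LUFS, and one of its fragments overrides that again with -20 LUFS.

\begin{lstlisting}
<setup interface="APE" projectReturn="save.php" loudness="-23">
    <!-- metric and interface nodes elided -->
</setup>
<page id="page-1" hostURL="media/">
    <audioelement id="a" url="a.wav"/>    <!-- normalised to -23 LUFS -->
</page>
<page id="page-2" hostURL="media/" loudness="-26">
    <audioelement id="b" url="b.wav"/>                  <!-- -26 LUFS -->
    <audioelement id="c" url="c.wav" loudness="-20"/>   <!-- -20 LUFS -->
</page>
\end{lstlisting}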
n@1165 555
n@1165 556 \subsection{Comment Boxes}
n@1165 557 \label{sec:commentboxes}
There are two types of comment boxes which can be presented: those linked to the audio fragments on the page, and those which pose a general question. The audio fragment boxes are shown by setting the \texttt{showElementComments} attribute of the page in question to true. This will then show a comment box below the main interface for every fragment on the page. There is some customisation around the text that accompanies the box: by default the text will read ``Comment on fragment'' followed by the fragment identifier (the number / letter shown by the interface). This `prefix' can be modified using the page node \texttt{<commentboxprefix>}, see \ref{sec:page} for where to place this node in the document. The comment box prefix node takes no attributes, and the text contained by the node represents the prefix. For instance, if we have a node \texttt{<commentboxprefix>Describe fragment</commentboxprefix>}, then the interface will show ``Describe fragment'' followed by the identifier.
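
A sketch of a page using element comment boxes with a custom prefix (other attributes and child nodes are elided):

\begin{lstlisting}
<page id="page-1" hostURL="media/" showElementComments="true">
    <commentboxprefix>Describe fragment</commentboxprefix>
    <!-- interface and audioelement nodes as usual -->
</page>
\end{lstlisting}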
n@1165 559
n@1165 560 The second type of comment box is slightly more complex because it can handle different types of response data. These are called comment questions because they are located in the comment section of the test but pose a specific question.
BrechtDeMan@798 561
BrechtDeMan@798 562 \clearpage
BrechtDeMan@798 563
BrechtDeMan@765 564
BrechtDeMan@765 565 \section{Using the test create tool}
We provide a test creation tool, available in the directory test\_create. This tool is a self-contained web page, so double-clicking it will launch the page in your system's default browser.
BrechtDeMan@765 567
BrechtDeMan@765 568 The test creation tool can help you build a simple test very quickly. By simply selecting your interface and clicking check-boxes you can build a test in minutes.
BrechtDeMan@765 569
BrechtDeMan@765 570 Include audio by dragging and dropping the stimuli you wish to include.
BrechtDeMan@765 571
BrechtDeMan@765 572 The tool examines your XML before exporting to ensure you do not export an invalid XML structure which would crash the test.
BrechtDeMan@765 573
\section{Building your own interface}

This guide will help you construct your own interface on top of the WAET (Web Audio Evaluation Tool) engine. The WAET engine resides in the \texttt{core.js} file, which contains prototype objects to handle most of the test creation, operation and data collection. The interface simply has to link into this at the correct points.
BrechtDeMan@765 577
BrechtDeMan@765 578 \subsection{Nodes to familiarise}
Core.js handles several very important nodes which you should become familiar with. The first is the Audio Engine, initialised and stored in the variable `AudioEngineContext'. This handles the playback of the web audio nodes as well as storing the `AudioObjects'. The `AudioObjects' are custom nodes which hold the audio fragments for playback. These nodes also have links to two interface objects: the comment box (if enabled) and the interface element providing the ranking. On creation of an `AudioObject' the interface link will be nulled; it is up to the interface to link these correctly.
BrechtDeMan@765 580
The specification document will be decoded and parsed into an object called `specification'. This holds all of the specification's various nodes. The test pages and any pre/post test objects are processed by a test state which will proceed through the test when called to by the interface. Any checks (such as playback or movement checks) are to be completed by the interface before instructing the test state to proceed. The test state will call the interface on each page load with the page specification node.
BrechtDeMan@765 582
BrechtDeMan@765 583 \subsection{Modifying \texttt{core.js}}
Whilst there is very little code actually needed, you do need to instruct core.js to load your interface file when it is called for from a specification node. There is a function called `loadProjectSpecCallback' which handles the decoding of the specification and setting any external items (such as metric collection). At the very end of this function there is an if statement; add your interface string to this list to link to your source file. There are examples in there for both the APE and MUSHRA tests. Note: any updates to core.js in future work will most likely overwrite your changes to this file, so remember to check that your interface is still listed there after any update that touches core.js.
BrechtDeMan@765 585 Any further files can be loaded here as well, such as css styling files. jQuery is already included.
BrechtDeMan@765 586
BrechtDeMan@765 587 \subsection{Building the Interface}
BrechtDeMan@765 588 Your interface file will get loaded automatically when the `interface' attribute of the setup node matches the string in the `loadProjectSpecCallback' function. The following functions must be defined in your interface file.
BrechtDeMan@765 589 \begin{itemize}
BrechtDeMan@765 590 \item \texttt{loadInterface} - Called once when the document is parsed. This creates any necessary bindings, such as to the metric collection classes and any check commands. Here you can also start the structure for your test such as placing in any common nodes (such as the title and empty divs to drop content into later).
BrechtDeMan@765 591 \item \texttt{loadTest(audioHolderObject)} - Called for each page load. The audioHolderObject contains a specification node holding effectively one of the audioHolder nodes.
\item \texttt{resizeWindow(event)} - Handle for any window resizing. Simply scale your interface accordingly. This function must be present, but can be an empty function.
BrechtDeMan@765 593 \end{itemize}
BrechtDeMan@765 594
BrechtDeMan@765 595 \subsubsection{loadInterface}
This function is called once the document has been parsed, since some browsers may parse files asynchronously. The simplest method is to put `loadInterface()' at the top of your interface file, so that the function is called as soon as the JavaScript engine is ready.
BrechtDeMan@765 597
BrechtDeMan@765 598 By default the HTML file has an element with id ``topLevelBody'' where you can build your interface. Make sure you blank the contents of that object. This function is the perfect time to build any fixed items, such as the page title, session titles, interface buttons (Start, Stop, Submit) and any holding and structure elements for later on.
BrechtDeMan@765 599
At the end of the function, insert these two function calls: \texttt{testState.initialise()} and \texttt{testState.advanceState()}. This will actually begin the test sequence, including the pre-test options (if any are included in the specification document).
BrechtDeMan@765 601
BrechtDeMan@765 602 \subsubsection{loadTest(audioHolderObject)}
This function is called on each new test page. It is this function's job to clear out the previous test and set up the new page. Use the function \texttt{audioEngineContext.newTestPage();} to instruct the audio engine to prepare for a new page; \texttt{audioEngineContext.audioObjects = [];} will delete any audioObjects; \texttt{interfaceContext.deleteCommentBoxes();} will delete any comment boxes; and \texttt{interfaceContext.deleteCommentQuestions();} will delete any extra comment boxes specified by commentQuestion nodes.
BrechtDeMan@765 604
This function also needs to instruct the audio engine to build each fragment. Passing each audio element of the audioHolderObject to \texttt{audioEngineContext.newTrack(element)} builds the track and returns a reference to the constructed audioObject. Decoding of the audio happens asynchronously.
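
Putting this together with the clean-up calls above, a \texttt{loadTest} implementation could be sketched roughly as follows (the property name \texttt{audioElements} and the constructor \texttt{MyInterfaceObject} are assumptions; compare with the bundled APE and MUSHRA interface files for the actual patterns):

\begin{lstlisting}
// Hypothetical sketch of loadTest(); 'audioElements' and 'MyInterfaceObject'
// are assumed names, not part of the documented API.
function loadTest(audioHolderObject) {
    // Reset the audio engine and clear the previous page
    audioEngineContext.newTestPage();
    audioEngineContext.audioObjects = [];
    interfaceContext.deleteCommentBoxes();
    interfaceContext.deleteCommentQuestions();

    // Build one audioObject per fragment; decoding happens asynchronously
    for (var i = 0; i < audioHolderObject.audioElements.length; i++) {
        var element = audioHolderObject.audioElements[i];
        var audioObject = audioEngineContext.newTrack(element);
        // Link your interface widget to the audioObject (see below)
        audioObject.interfaceDOM = new MyInterfaceObject(audioObject);
    }
}
\end{lstlisting}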
BrechtDeMan@765 606
You also need to link \texttt{audioObject.interfaceDOM} with your interface object for that audioObject. The interfaceDOM object must provide a few standard methods. Firstly, it must start disabled and become enabled once the audioObject has decoded the audio (function call: \texttt{enable()}). Secondly, it must have a function \texttt{exportXMLDOM()}, which returns the XML node for your interface; by default this should be a \texttt{value} node with \texttt{textContent} equal to the normalised value. You can return something different, but our analysis scripts may not work if you do (as it would breach our results specification). Finally, it must also have a method \texttt{getValue}, which returns the normalised value.
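
For illustration, a bare-bones interface object satisfying these requirements (the \texttt{MyInterfaceObject} constructor used in the sketch above) could look roughly like this; only \texttt{enable()}, \texttt{exportXMLDOM()} and \texttt{getValue()} are required, and everything else is an assumption:

\begin{lstlisting}
// Hypothetical sketch of an interfaceDOM object; only enable(), exportXMLDOM()
// and getValue() are required -- the remaining fields are illustrative.
function MyInterfaceObject(audioObject) {
    this.parent = audioObject;
    this.value = 0;        // normalised rating in [0, 1]
    this.enabled = false;  // must start disabled

    this.enable = function () {
        // Called once the audioObject has finished decoding its audio
        this.enabled = true;
    };

    this.exportXMLDOM = function () {
        // Return a <value> node whose textContent is the normalised value
        var node = document.createElement("value");
        node.textContent = this.value;
        return node;
    };

    this.getValue = function () {
        return this.value;
    };
}
\end{lstlisting}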
BrechtDeMan@765 608
It is also the job of the interfaceDOM object to call any metric collection functions necessary; however, some calls may be better placed elsewhere (for example, the APE interface uses drag and drop, so the metric functions are best called from the dragEnd function, which runs when an interface object is dropped). Metrics based on listening are handled by the audioObject; the interfaceDOM object must manage any movement metrics. For a list of valid metrics and their behaviours, see the project specification document included in the repository's \texttt{docs} folder. The same goes for any checks required when pressing the Submit button, or any other method of advancing the test state.
BrechtDeMan@765 610
BrechtDeMan@798 611 \clearpage
BrechtDeMan@798 612 \section{Analysis and diagnostics}
BrechtDeMan@798 613 \subsection{In the browser}
See `analyse.html' in the main folder: it gives an immediate visualisation of (by default) all results in the `saves/' folder.
BrechtDeMan@798 615
BrechtDeMan@798 616 \subsection{Python scripts}
BrechtDeMan@798 617 The package includes Python (2.7) scripts (in `scripts/') to extract ratings and comments, generate visualisations of ratings and timelines, and produce a fully fledged report.
BrechtDeMan@798 618
Visualisation requires the free matplotlib toolbox (\url{http://matplotlib.org}), numpy and scipy.
BrechtDeMan@798 620 By default, the scripts can be run from the `scripts' folder, with the result files in the `saves' folder (the default location where result XMLs are stored). Each script takes the XML file folder as an argument, along with other arguments in some cases.
Note: to avoid problems, please avoid using spaces in file and folder names (spaces may work on some systems, but others don't like them).
BrechtDeMan@798 622
BrechtDeMan@798 623 \subsubsection{comment\_parser.py}
BrechtDeMan@798 624 Extracts comments from the output XML files corresponding with the different subjects found in `saves/'. It creates a folder per `audioholder'/page it finds, and stores a CSV file with comments for every `audioelement'/fragment within these respective `audioholders'/pages. In this CSV file, every line corresponds with a subject/output XML file. Depending on the settings, the first column containing the name of the corresponding XML file can be omitted (for anonymisation).
Beware of Excel: sometimes the UTF-8 encoding is not imported properly, leading to problems with special characters in the comments (particularly cumbersome for foreign languages).
BrechtDeMan@798 626
BrechtDeMan@798 627 \subsubsection{evaluation\_stats.py}
Shows a few statistics of the tests in the `saves/' folder so far, mainly for checking for errors. It shows the number of result files present, the audioholder IDs that were tested (and how many of each separate ID), the duration of each page, the duration of each complete test, the average duration per page, and the average duration as a function of the page number.
BrechtDeMan@798 629
BrechtDeMan@798 630 \subsubsection{generate\_report.py}
Similar to `evaluation\_stats.py', but generates a PDF report based on the output files in the `saves/' folder - or any folder specified as a command line argument. Uses pdflatex to write a LaTeX document and then convert it to a PDF.
BrechtDeMan@798 632
BrechtDeMan@798 633 \subsubsection{score\_parser.py}
BrechtDeMan@798 634 Extracts rating values from the XML to CSV - necessary for running visualisation of ratings. Creates the folder `saves/ratings/' if not yet created, to which it writes a separate file for every `audioholder'/page in any of the output XMLs it finds in `saves/'. Within each file, rows represent different subjects (output XML file names) and columns represent different `audioelements'/fragments.
BrechtDeMan@798 635
BrechtDeMan@798 636 \subsubsection{score\_plot.py}
Plots the ratings as stored in the CSV files created by `score\_parser.py'.
BrechtDeMan@798 638 Depending on the settings, it displays and/or saves (in `saves/ratings/') a boxplot, confidence interval plot, scatter plot, or a combination of the aforementioned.
BrechtDeMan@798 639 Requires the free matplotlib library.
At this point, more than one subject is needed for this script to work.
BrechtDeMan@798 641
BrechtDeMan@798 642 \subsubsection{timeline\_view\_movement.py}
BrechtDeMan@798 643 Creates a timeline for every subject, for every `audioholder'/page, corresponding with any of the output XML files found in `saves/'. It shows the marker movements of the different fragments, along with when each fragment was played (red regions). Automatically takes fragment names, rating axis title, rating axis labels, and audioholder name from the XML file (if available).
BrechtDeMan@798 644
BrechtDeMan@798 645 \subsubsection{timeline\_view.py} % should be omitted or absorbed by the above soon
BrechtDeMan@798 646 Creates a timeline for every subject, for every `audioholder'/page, corresponding with any of the output XML files found in `saves/'. It shows when and for how long the subject listened to each of the fragments.
BrechtDeMan@798 647
BrechtDeMan@765 648
BrechtDeMan@765 649
BrechtDeMan@765 650 \clearpage
BrechtDeMan@765 651 \section{Troubleshooting} \label{sec:troubleshooting}
BrechtDeMan@798 652 \subsection{Reporting bugs and requesting features}
Thanks to feedback from the use of the interface in experiments by the authors and others, many bugs have been caught, and fatal crashes caused by the interface appear to be a thing of the past.
BrechtDeMan@765 654
BrechtDeMan@798 655 We continually develop this tool to fix issues and implement features useful to us or our user base. See \url{https://code.soundsoftware.ac.uk/projects/webaudioevaluationtool/issues} for a list of feature requests and bug reports, and their status.
BrechtDeMan@765 656
BrechtDeMan@798 657 Please contact the authors if you experience any bugs, if you would like additional functionality, if you spot any errors or gaps in the documentation, if you have questions about using the interface, or if you would like to give any feedback (even positive!) about the interface. We look forward to learning how the tool has (not) been useful to you.
BrechtDeMan@765 658
BrechtDeMan@765 659
BrechtDeMan@798 660 \subsection{First aid}
Meanwhile, if things do go wrong or the test needs to be interrupted for whatever reason, not all data is lost. In a normal scenario, the test needs to be completed until the end (the final `Submit'), at which point the output XML is stored in the \texttt{saves/} folder. If this stage is not reached, open the JavaScript Console (see below for how to find it) and type
BrechtDeMan@765 662
BrechtDeMan@798 663 \texttt{createProjectSave()}
BrechtDeMan@765 664
BrechtDeMan@798 665 to present the result XML file on the client side, or
BrechtDeMan@765 666
BrechtDeMan@798 667 \texttt{createProjectSave(specification.projectReturn)}
BrechtDeMan@765 668
to try to store it to the specified location, e.g. the `saves/' folder on the web server or the local machine (on failure, the result XML is presented directly in the web browser instead),

and hit Enter. This opens a pop-up window with a hyperlink that reads `Save File'; click it and an XML file with the results up to that point will be stored in your download folder.
BrechtDeMan@798 672
Alternatively, much of this data can be read from the same console, to which the tool prints extensive debug information. Specifically:
BrechtDeMan@798 674 \begin{itemize}
\item the randomisation of pages and fragments is logged;
BrechtDeMan@798 676 \item any time a slider is played, its ID and the time stamp (in seconds since the start of the test) are displayed;
\item any time a slider is dragged and dropped, the location where it is dropped is shown, along with the time stamp;
BrechtDeMan@798 678 \item any comments and pre- or post-test questions and their answers are logged as well.
BrechtDeMan@798 679 \end{itemize}
BrechtDeMan@765 680
You can select all of this and save it into a text file, so that none of this data is lost. You may choose to do this even when a test was successful, as an extra precaution.
BrechtDeMan@765 682
BrechtDeMan@798 683 If you encounter any issue which you believe to be caused by any aspect of the tool, and/or which the documentation does not mention, please do let us know!
BrechtDeMan@765 684
BrechtDeMan@798 685 \subsubsection*{Opening the JavaScript Console}
BrechtDeMan@798 686 \begin{itemize}
BrechtDeMan@798 687 \item In Google Chrome, the JavaScript Console can be found in \textbf{View$>$Developer$>$JavaScript Console}, or via the keyboard shortcut Cmd + Alt + J (Mac OS X).
BrechtDeMan@798 688 \item In Safari, the JavaScript Console can be found in \textbf{Develop$>$Show Error Console}, or via the keyboard shortcut Cmd + Alt + C (Mac OS X). Note that for the Developer menu to be visible, you have to go to Preferences (Cmd + ,) and enable `Show Develop menu in menu bar' in the `Advanced' tab. \textbf{Note that as long as the Developer menu is not visible, nothing is logged to the console, i.e. you will only be able to see diagnostic information from when you switched on the Developer tools onwards.}
BrechtDeMan@798 689 \item In Firefox, go to \textbf{Tools$>$Web Developer$>$Web Console}, or hit Cmd + Alt + K.
BrechtDeMan@798 690 \end{itemize}
BrechtDeMan@765 691
BrechtDeMan@798 692 \subsection{Known issues and limitations}
BrechtDeMan@798 693 \label{sec:knownissues}
BrechtDeMan@798 694
The following is a non-exhaustive list of problems and limitations you may experience when using this tool, due to features not yet supported by us, by the Web Audio API, and/or by (some) browsers.
BrechtDeMan@798 696
BrechtDeMan@798 697 \begin{itemize}
BrechtDeMan@798 698 \item Issue \href{https://code.soundsoftware.ac.uk/issues/1463}{\textbf{\#1463}}: \textbf{Firefox} only supports 8 bit and 16 bit WAV files. Pending automatic requantisation (which deteriorates the audio signal's dynamic range to some extent), WAV format stimuli need to adhere to these limitations in order for the test to be compatible with Firefox.
\item Issues \href{https://code.soundsoftware.ac.uk/issues/1474}{\textbf{\#1474}} and \href{https://code.soundsoftware.ac.uk/issues/1462}{\textbf{\#1462}}: On occasion, audio does not work - or only a continuous `beep' can be heard - notably in \textbf{Safari}. Refreshing, quitting the browser, and even enabling Developer tools in Safari's Preferences pane (`Advanced' tab: ``Show `Develop' menu in menu bar'') have helped resolve this. If no (high quality) audio can be heard, make sure your entire playback system's settings are correct.
BrechtDeMan@798 700 \end{itemize}
BrechtDeMan@765 701
BrechtDeMan@765 702 \clearpage
BrechtDeMan@765 703 \bibliographystyle{ieeetr}
BrechtDeMan@765 704 \bibliography{Instructions}{}
BrechtDeMan@765 705
BrechtDeMan@765 706
BrechtDeMan@765 707 \clearpage
BrechtDeMan@765 708 \appendix
BrechtDeMan@765 709
BrechtDeMan@798 710 \section{Legacy}
BrechtDeMan@798 711 The APE interface and most of the functionality of the first WAET editions are inspired by the APE toolbox for MATLAB \cite{ape}. See \url{https://code.soundsoftware.ac.uk/projects/ape} for the source code and \url{http://brechtdeman.com/publications/aes136.pdf} for the corresponding paper.
BrechtDeMan@798 712
BrechtDeMan@798 713 \clearpage
BrechtDeMan@798 714
BrechtDeMan@765 715 \section{Listening test instructions example}
BrechtDeMan@765 716
Before each test, show the instructions below (or similar) and make sure they remain available to the subject throughout the test. Make sure to ask whether the participant has any questions upon seeing and/or reading the instructions.
BrechtDeMan@765 718
BrechtDeMan@765 719 \begin{itemize}
BrechtDeMan@765 720 \item You will be asked for your name (``John Smith'') and location (room identifier).
BrechtDeMan@765 721 \item An interface will appear, where you are asked to
BrechtDeMan@765 722 \begin{itemize}
BrechtDeMan@765 723 \item click green markers to play the different mixes;
BrechtDeMan@765 724 \item drag the markers on a scale to reflect your preference for the mixes;
BrechtDeMan@765 725 \item comment on these mixes, using text boxes with corresponding numbers (in your \textbf{native language});
BrechtDeMan@765 726 \item optionally comment on all mixes together, or on the song, in `General comments'.
BrechtDeMan@765 727 \end{itemize}
\item You are asked for your personal, honest opinion. Feel free to use the full range of the scale to convey your opinion of the various mixes. Don't be afraid to be harsh and direct.
BrechtDeMan@765 729 \item The markers appear at random positions at first (which means some markers may hide behind others).
BrechtDeMan@765 730 \item The interface can take a few seconds to start playback, but switching between mixes should be instantaneous.
BrechtDeMan@765 731 \item This is a research experiment, so please forgive us if things go wrong. Let us know immediately and we will fix it or restart the test.
\item When the test is finished (after all songs have been evaluated), just call the experimenter; do NOT close the window.
BrechtDeMan@765 733 \item After the test, please fill out our survey about your background, experience and feedback on the test.
BrechtDeMan@765 734 \item By participating, you consent to us using all collected data for research. Unless asked explicitly, all data will be anonymised when shared.
BrechtDeMan@765 735 \end{itemize}
BrechtDeMan@765 736
BrechtDeMan@765 737 \clearpage
BrechtDeMan@765 738
BrechtDeMan@753 739 \section{Terminology} % just to keep track of what exactly we call things. Don't use terms that are too different, to avoid confusion.
BrechtDeMan@753 740 As a guide to better understand the Instructions, and to expand them later, here is a list of terms that may be unclear or ambiguous unless properly defined.
BrechtDeMan@798 741 \begin{description}
BrechtDeMan@798 742 \item[Subject] The word we use for a participant, user, ... of the test, i.e. not the experimenter who designs the test but the person who evaluates the audio under test as part of an experiment (or the preparation of one).
\item[User] The person who uses the tool to configure, run and analyse the test - i.e. the experimenter, most likely a researcher.
BrechtDeMan@798 744 \item[Page] A screen in a test; corresponds with an \texttt{audioholder}
BrechtDeMan@810 745 \item[Fragment] An element, stimulus or sample in a test; corresponds with an \texttt{audioelement}
BrechtDeMan@798 746 \item[Test] A complete test which can consist of several pages; corresponds with an entire configuration XML file
BrechtDeMan@798 747 \item[Configuration XML file] The XML file containing the necessary information on interface, samples, survey questions, configurations, ... which the JavaScript modules read to produce the desired test.
BrechtDeMan@798 748 \item[Results XML file] The output of a successful test, including ratings, comments, survey responses, timing information, and the complete configuration XML file with which the test was generated in the first place.
BrechtDeMan@798 749 \end{description}
BrechtDeMan@798 750
BrechtDeMan@798 751 \clearpage
BrechtDeMan@798 752
BrechtDeMan@798 753 \setcounter{secnumdepth}{0} % don't number this last bit
BrechtDeMan@798 754 \section{Contact details} % maybe add web pages, Twitter accounts, whatever you like
BrechtDeMan@765 755 \label{sec:contact}
BrechtDeMan@765 756
BrechtDeMan@765 757 \begin{itemize}
BrechtDeMan@765 758 \item Nicholas Jillings: \texttt{nicholas.jillings@mail.bcu.ac.uk}
BrechtDeMan@765 759 \item Brecht De Man: \texttt{b.deman@qmul.ac.uk}
BrechtDeMan@765 760 \item David Moffat: \texttt{d.j.moffat@qmul.ac.uk}
BrechtDeMan@765 761 \end{itemize}
BrechtDeMan@765 762
BrechtDeMan@765 763 \end{document}