annotate docs/SMC15/smc2015template.tex @ 1533:317faa29ab11

Feature Feature #1270: Outside reference object added. Some small bugs and GUI to solve.
author Nicholas Jillings <nickjillings@users.noreply.github.com>
date Thu, 23 Jul 2015 09:37:27 +0100
parents 766bff1a8f73
children 8ab5f8969856
rev   line source
nickjillings@1528 1 % -----------------------------------------------
nickjillings@1528 2 % Template for SMC 2012
nickjillings@1528 3 % adapted from the template for SMC 2011, which was adapted from that of SMC 2010
nickjillings@1528 4 % -----------------------------------------------
nickjillings@1528 5
nickjillings@1528 6 \documentclass{article}
nickjillings@1528 7 \usepackage{smc2015}
nickjillings@1528 8 \usepackage{times}
nickjillings@1528 9 \usepackage{ifpdf}
nickjillings@1528 10 \usepackage[english]{babel}
nickjillings@1528 11 \usepackage{cite}
nickjillings@1528 12 \usepackage{enumitem}
nickjillings@1528 13 \usepackage{listings}
nickjillings@1528 14 \setitemize{noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt}
nickjillings@1528 15
nickjillings@1528 16
nickjillings@1528 17
nickjillings@1528 18 \usepackage{color}
nickjillings@1528 19 \definecolor{grey}{rgb}{0.1,0.1,0.1}
nickjillings@1528 20 \definecolor{darkblue}{rgb}{0.0,0.0,0.6}
nickjillings@1528 21 \definecolor{cyan}{rgb}{0.0,0.6,0.6}
nickjillings@1528 22
nickjillings@1528 23
nickjillings@1528 24 \hyphenation{Java-script}
nickjillings@1528 25 \hyphenation{OPA-QUE}
nickjillings@1528 26
nickjillings@1528 27 %%%%%%%%%%%%%%%%%%%%%%%% Some useful packages %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
nickjillings@1528 28 %%%%%%%%%%%%%%%%%%%%%%%% See related documentation %%%%%%%%%%%%%%%%%%%%%%%%%%
nickjillings@1528 29 %\usepackage{amsmath} % popular packages from Am. Math. Soc. Please use the
nickjillings@1528 30 %\usepackage{amssymb} % related math environments (split, subequation, cases,
nickjillings@1528 31 %\usepackage{amsfonts}% multline, etc.)
nickjillings@1528 32 %\usepackage{bm} % Bold Math package, defines the command \bf{}
nickjillings@1528 33 %\usepackage{paralist}% extended list environments
nickjillings@1528 34 %%subfig.sty is the modern replacement for subfigure.sty. However, subfig.sty
nickjillings@1528 35 %%requires and automatically loads caption.sty which overrides class handling
nickjillings@1528 36 %%of captions. To prevent this problem, preload caption.sty with caption=false
nickjillings@1528 37 %\usepackage[caption=false]{caption}
nickjillings@1528 38 %\usepackage[font=footnotesize]{subfig}
nickjillings@1528 39
nickjillings@1528 40
nickjillings@1528 41 %user defined variables
nickjillings@1528 42 \def\papertitle{WEB AUDIO EVALUATION TOOL: A BROWSER-BASED LISTENING TEST ENVIRONMENT} %?
nickjillings@1528 43 \def\firstauthor{Nicholas Jillings}
nickjillings@1528 44 \def\secondauthor{Brecht De Man}
nickjillings@1528 45 \def\thirdauthor{David Moffat}
nickjillings@1528 46 \def\fourthauthor{Joshua D. Reiss}
nickjillings@1528 47
nickjillings@1528 48 % adds the automatic
nickjillings@1528 49 % Saves a lot of output space in PDF... after conversion with the distiller
nickjillings@1528 50 % Delete if you cannot get PS fonts working on your system.
nickjillings@1528 51
nickjillings@1528 52 % pdf-tex settings: detect automatically if run by latex or pdflatex
nickjillings@1528 53 \newif\ifpdf
nickjillings@1528 54 \ifx\pdfoutput\relax
nickjillings@1528 55 \else
nickjillings@1528 56 \ifcase\pdfoutput
nickjillings@1528 57 \pdffalse
nickjillings@1528 58 \else
nickjillings@1528 59 \pdftrue
nickjillings@1528 60 \fi
nickjillings@1528 61
nickjillings@1528 62 \ifpdf % compiling with pdflatex
nickjillings@1528 63 \usepackage[pdftex,
nickjillings@1528 64 pdftitle={\papertitle},
nickjillings@1528 65 pdfauthor={\firstauthor, \secondauthor, \thirdauthor},
nickjillings@1528 66 bookmarksnumbered, % use section numbers with bookmarks
nickjillings@1528 67 pdfstartview=XYZ % start with zoom=100% instead of full screen;
nickjillings@1528 68 % especially useful if working with a big screen :-)
nickjillings@1528 69 ]{hyperref}
nickjillings@1528 70 %\pdfcompresslevel=9
nickjillings@1528 71
nickjillings@1528 72 \usepackage[pdftex]{graphicx}
nickjillings@1528 73 % declare the path(s) where your graphic files are and their extensions so
nickjillings@1528 74 %you won't have to specify these with every instance of \includegraphics
nickjillings@1528 75 \graphicspath{{./figures/}}
nickjillings@1528 76 \DeclareGraphicsExtensions{.pdf,.jpeg,.png}
nickjillings@1528 77
nickjillings@1528 78 \usepackage[figure,table]{hypcap}
nickjillings@1528 79
nickjillings@1528 80 \else % compiling with latex
nickjillings@1528 81 \usepackage[dvips,
nickjillings@1528 82 bookmarksnumbered, % use section numbers with bookmarks
nickjillings@1528 83 pdfstartview=XYZ % start with zoom=100% instead of full screen
nickjillings@1528 84 ]{hyperref} % hyperrefs are active in the pdf file after conversion
nickjillings@1528 85
nickjillings@1528 86 \usepackage[dvips]{epsfig,graphicx}
nickjillings@1528 87 % declare the path(s) where your graphic files are and their extensions so
nickjillings@1528 88 %you won't have to specify these with every instance of \includegraphics
nickjillings@1528 89 \graphicspath{{./figures/}}
nickjillings@1528 90 \DeclareGraphicsExtensions{.eps}
nickjillings@1528 91
nickjillings@1528 92 \usepackage[figure,table]{hypcap}
nickjillings@1528 93 \fi
nickjillings@1528 94
nickjillings@1528 95 %set up the hyperref package - make the links black without a surrounding frame
nickjillings@1528 96 \hypersetup{
nickjillings@1528 97 colorlinks,%
nickjillings@1528 98 citecolor=black,%
nickjillings@1528 99 filecolor=black,%
nickjillings@1528 100 linkcolor=black,%
nickjillings@1528 101 urlcolor=black
nickjillings@1528 102 }
nickjillings@1528 103
nickjillings@1528 104
nickjillings@1528 105 % Title.
nickjillings@1528 106 % ------
nickjillings@1528 107 \title{\papertitle}
nickjillings@1528 108
nickjillings@1528 109 % Authors
nickjillings@1528 110 % Please note that submissions are NOT anonymous, therefore
nickjillings@1528 111 % authors' names have to be VISIBLE in your manuscript.
nickjillings@1528 112 %
nickjillings@1528 113 % Single address
nickjillings@1528 114 % To use with only one author or several with the same address
nickjillings@1528 115 % ---------------
nickjillings@1528 116 %\oneauthor
nickjillings@1528 117 % {\firstauthor} {Affiliation1 \\ %
nickjillings@1528 118 % {\tt \href{mailto:author1@smcnetwork.org}{author1@smcnetwork.org}}}
nickjillings@1528 119
nickjillings@1528 120 %Two addresses
nickjillings@1528 121 %--------------
nickjillings@1528 122 % \twoauthors
nickjillings@1528 123 % {\firstauthor} {Affiliation1 \\ %
nickjillings@1528 124 % {\tt \href{mailto:author1@smcnetwork.org}{author1@smcnetwork.org}}}
nickjillings@1528 125 % {\secondauthor} {Affiliation2 \\ %
nickjillings@1528 126 % {\tt \href{mailto:author2@smcnetwork.org}{author2@smcnetwork.org}}}
nickjillings@1528 127
nickjillings@1528 128
nickjillings@1528 129
nickjillings@1528 130 % FIX!!!
nickjillings@1528 131 \fourauthors
nickjillings@1528 132 {\firstauthor} {%Affiliation1 \\
nickjillings@1528 133 {\tt \href{mailto:n.g.r.jillings@se14.qmul.ac.uk}{n.g.r.jillings@se14.qmul.ac.uk, }}}
nickjillings@1528 134 {\secondauthor} {%Affiliation2\\ %
nickjillings@1528 135 {\tt \href{mailto:b.deman@qmul.ac.uk}{\{b.deman,}}}
nickjillings@1528 136 {\thirdauthor} {%Affiliation3\\ %
nickjillings@1528 137 {\tt \href{mailto:d.j.moffat@qmul.ac.uk}{d.j.moffat, }}}
nickjillings@1528 138 {\fourthauthor} {%Affiliation4\\ %
nickjillings@1528 139 {\tt \href{mailto:joshua.reiss@qmul.ac.uk}{joshua.reiss\}@qmul.ac.uk}}}
nickjillings@1528 140
nickjillings@1528 141 % ***************************************** the document starts here ***************
nickjillings@1528 142 \begin{document}
nickjillings@1528 143 %
nickjillings@1528 144 \capstartfalse
nickjillings@1528 145 \maketitle
nickjillings@1528 146 \capstarttrue
nickjillings@1528 147 %
nickjillings@1528 148 \begin{abstract}
nickjillings@1528 149 Perceptual evaluation tests where subjects assess certain qualities of different audio fragments are an integral part of audio and music research. These require specialised software, usually custom-made, to collect large amounts of data using meticulously designed interfaces with carefully formulated questions, and play back audio with rapid switching between different samples.
nickjillings@1528 150 New functionality in HTML5, such as the Web Audio API, allows for increasingly powerful media applications in a platform-independent environment. The advantage of a web application is easy deployment on any platform, without requiring any other application, enabling multiple tests to be conducted easily across locations. In this paper we propose a tool supporting a wide variety of easily configurable, multi-stimulus perceptual audio evaluation tests over the web, with multiple test interfaces, pre- and post-test surveys, custom configuration, collection of test metrics and other features. Test design and setup do not require a programming background, and results are gathered automatically using web-friendly formats for easy storage of results on a server.
nickjillings@1528 151 % Currently at 150, don't think anything more needs to be done here??
nickjillings@1528 152 %Place your abstract at the top left column on the first page.
nickjillings@1528 153 %Please write about 150-200 words that specifically highlight the purpose of your work,
nickjillings@1528 154 %its context, and provide a brief synopsis of your results.
nickjillings@1528 155 %Avoid equations in this part.\\
nickjillings@1528 156
nickjillings@1528 157 \end{abstract}
nickjillings@1528 158
nickjillings@1528 159 % TOTAL PAPER: Minimum 4 pages, 6 preferred, max. 8 (6 for demos/posters)\\
nickjillings@1528 160
nickjillings@1528 161 \section{Introduction}\label{sec:introduction}
nickjillings@1528 162
nickjillings@1528 163 %NICK: examples of what kind of audio applications HTML5 has made possible, with references to publications (or website)\\
nickjillings@1528 164
nickjillings@1528 165 Perceptual evaluation of audio plays an important role in a wide range of research on audio quality \cite{schoeffler2013impact,repp}, sound synthesis \cite{de2013real,durr2015implementation}, audio effect design \cite{deman2014a}, source separation \cite{mushram,uhlereiss}, music and emotion analysis \cite{song2013a,eerola2009prediction}, and many others \cite{friberg2011comparison}. % codec design?
nickjillings@1528 166
nickjillings@1528 167 %This work is based in part on the APE audio perceptual evaluation interface for MATLAB \cite{deman2014b}. An important drawback of this toolbox is the need to have MATLAB to create a test and even to run (barring the use of an executable generated by MATLAB), and limited compatibility with both earlier and newer versions of MATLAB, which makes it hard to maintain. On the other hand, a web application generally has the advantage of running in most browsers on most applications.
nickjillings@1528 168
nickjillings@1528 169 % IMPORTANT
nickjillings@1528 170 %[TO ADD: other interfaces for perceptual evaluation of audio, browser-based or not!] \\
nickjillings@1528 171 %BROWSER-BASED: \cite{song2013b,song2013a,beaqlejs} \\
nickjillings@1528 172 %MATLAB: \cite{whisper,mushram,scale}
nickjillings@1528 173 % to add: OPAQUE, Rumsey's repertory grid technique
nickjillings@1528 174
nickjillings@1528 175
nickjillings@1528 176 \begin{table}[htdp]
nickjillings@1528 177 \caption{Available audio perceptual evaluation tools}
nickjillings@1528 178 \begin{center}
nickjillings@1528 179 \begin{tabular}{|*{3}{l|}}
nickjillings@1528 180 % order?
nickjillings@1528 181 \hline
nickjillings@1528 182 \textbf{Name} & \textbf{Language} & \textbf{Ref.}\\
nickjillings@1528 183 \hline
nickjillings@1528 184 APE & MATLAB & \cite{deman2014b} \\
nickjillings@1528 185 BeaqleJS & HTML5/JS & \cite{beaqlejs}\\ % ABX, mushra
nickjillings@1528 186 %C4DM\footnote{http://isophonics.org/test - collection of listening tests developed by Gy\"{o}rgy Fazekas and Thomas Wilmering at Centre for Digital Music.} & JS & \cite{song2013a,song2013b}\\
nickjillings@1528 187 HULTI-GEN & Max & \cite{hulti-gen}\\
nickjillings@1528 188 MUSHRAM & MATLAB & \cite{mushram}\\ % type: mushra
nickjillings@1528 189 Scale & MATLAB & \cite{scale} \\
nickjillings@1528 190 WhisPER & MATLAB & \cite{whisper}\\
nickjillings@1528 191 \hline
nickjillings@1528 192 \end{tabular}
nickjillings@1528 193 \end{center}
nickjillings@1528 194 \label{tab:interfaces}
nickjillings@1528 195 \end{table}%
nickjillings@1528 196
nickjillings@1528 197 Various listening test design tools are already available; see Table \ref{tab:interfaces}. A few other listening test tools, such as OPAQUE \cite{opaque} and GuineaPig \cite{guineapig}, have been described but are not available to the public at the time of writing.
nickjillings@1528 198
nickjillings@1528 199 Many are MATLAB-based, which makes it easy to process and visualise the data produced by the listening tests, but requires MATLAB to be installed to run the test or, in the case of an executable created with MATLAB, at least to create it.
nickjillings@1528 200 Furthermore, compatibility is usually limited across different versions of MATLAB.
nickjillings@1528 201 Similarly, Max requires little or no programming background, but it is proprietary software as well, which is especially undesirable when tests need to be deployed at different sites.
nickjillings@1528 202 More recently, BeaqleJS \cite{beaqlejs} makes use of the HTML5 audio capabilities and comes with a number of predefined, established test interfaces such as ABX and MUSHRA \cite{mushra}. BeaqleJS provides a number of similar features, including saving of test data to a web server. The main difference is that BeaqleJS is configured by writing a JavaScript file holding a JavaScript object with the test settings, whereas our system uses the XML document standard, which allows configuration outside of a web-centric editor. The results are also presented in XML, again allowing third-party editors and programs to access them easily. Finally, the presented system does not require web access to run, as it can instead be deployed with a Python server script. This is particularly useful in studios where machines may not, by design, be connected to the web, or for use in locations where web access is limited.
nickjillings@1528 203
nickjillings@1528 204 A browser-based perceptual evaluation tool for audio has a number of advantages. First of all, it does not require any software other than a browser, meaning deployment is very easy and cheap. As such, it can also run on a variety of devices and platforms. The test can be hosted on a central server with subjects all over the world, who can simply go to a webpage. This means that multiple participants can take the test simultaneously, potentially in their usual listening environment if this is beneficial for the test. Naturally, the constraints on the listening environment and other variables still need to be controlled if they are important to the experiment. Depending on the requirements, a survey or a variety of tests preceding the experiment could establish whether remote participants and their environments are adequate for the experiment at hand.
nickjillings@1528 205
nickjillings@1528 206 The Web Audio API is a high-level JavaScript Application Programming Interface (API) designed for real-time processing of audio inside the browser through various processing nodes\footnote{http://webaudio.github.io/web-audio-api/}. Various web sites have used the Web Audio API for creative purposes, such as drum machines and score creation tools\footnote{http://webaudio.github.io/demo-list/};
nickjillings@1528 207 others on the same list demonstrate real-time processing of captured audio, such as room reverberation tools and a phase vocoder applied to the system microphone input. The BBC Radiophonic Workshop demonstrates effects used on famous TV shows such as Doctor Who, simulated inside the browser\footnote{http://webaudio.prototyping.bbc.co.uk/}.
nickjillings@1528 208 Another example is the BBC R\&D personalised compressor which applies a dynamic range compressor on a radio station that dynamically adjusts the compressor settings to match the listener's environment \cite{mason2015compression}.
nickjillings@1528 209
nickjillings@1528 210
nickjillings@1528 211
nickjillings@1528 212 % [How is this one different from all these?] improve
nickjillings@1528 213
nickjillings@1528 214 % FLEXIBLE (reference (not) appropriate)
nickjillings@1528 215 In contrast with the tools listed above, we aim to provide an environment in which a variety of multi-stimulus tests can be designed, with a wide range of configurability, while keeping setup and collecting results as straightforward as possible. For instance, the option to provide free-text comment fields allows for tests with individual vocabulary methods, as opposed to only allowing quantitative scales associated with a fixed set of descriptors.
nickjillings@1528 216 % EASE OF USE: no need to go in the code
nickjillings@1528 217 To make the tool accessible to a wide range of researchers, we aim to offer maximum functionality even to those with little or no programming background. With the tool we present, a listening test can be set up without reading or adjusting any code, provided no new types of interfaces need to be created.
nickjillings@1528 218
nickjillings@1528 219 % ENVIRONMENT %In this paper, we provide a listening test back end that allows for easy set up of a wide variety of listening tests, highly flexible yet very simple and not requiring any programming skills.
nickjillings@1528 220 Specifically, we present a browser-based perceptual evaluation tool from which any kind of multiple-stimulus audio evaluation test, where subjects need to rank, rate, select, or comment on different audio samples, can be built.
nickjillings@1528 221 We also include an example of the multiple-stimulus user interface of the APE tool \cite{deman2014b}, which presents the subject with a number of axes on which markers, corresponding to audio samples, can be moved to reflect any subjective quality, as well as corresponding comment boxes.
nickjillings@1528 222 However, other graphical user interfaces can be put on top of the engine that we provide with minimal or no modifications. Examples of this are the MUSHRA test \cite{mushra}, single or multiple stimulus evaluation with a two-dimensional interface (such as valence and arousal dimensions), or simple annotation (using free-form text, check boxes, radio buttons or drop-down menus) of one or more audio samples at a time.
nickjillings@1528 223 In some cases, such as method of adjustment, where the audio is processed by the user, or AB test, where the interface does not show all audio samples to be evaluated at once \cite{bech}, the back end of the tool needs to be modified as well.
nickjillings@1528 224
nickjillings@1528 225 In the following sections, we describe the included interface in more detail, discuss the implementation, and cover considerations that were made in the design process of this tool.
nickjillings@1528 226
nickjillings@1528 227 %\section{Requirements}\label{sec:requirements}
nickjillings@1528 228 %???
nickjillings@1528 229 %
nickjillings@1528 230 %\begin{itemize}
nickjillings@1528 231 %\item
nickjillings@1528 232 %\end{itemize}
nickjillings@1528 233 \section{Interface}\label{sec:interface}
nickjillings@1528 234
nickjillings@1528 235 At this point, we have implemented the interface of the MATLAB-based APE (Audio Perceptual Evaluation) toolbox \cite{deman2014b}. This shows one marker for each simultaneously evaluated audio fragment on one or more horizontal axes; the markers can be moved to rate or rank the respective fragments in terms of any subjective property. It also shows a comment box for every marker, and any number of additional text boxes for extra comments.
nickjillings@1528 236 The reason for such an interface, where all stimuli are presented on a single rating axis (or multiple axes if multiple subjective qualities need to be evaluated), is that it urges the subject to consider the rating and/or ranking of the stimuli relative to one another, as opposed to comparing each individual stimulus to a given reference, as is the case with e.g. a MUSHRA test \cite{mushra}. As such, it is ideal for any type of test where the goal is to carefully compare samples against each other, like perceptual evaluation of different mixes of music recordings \cite{deman2015a} or sound synthesis models \cite{durr2015implementation}, as opposed to comparing results of source separation algorithms \cite{mushram} or audio with lower data rate \cite{mushra} to a high quality reference signal.
nickjillings@1528 237
nickjillings@1528 238 The markers on the slider at the top of the page are positioned randomly, to minimise the bias that may be introduced when the initial positions are near the beginning, end or middle of the slider. Another approach is to place the markers outside of the slider bar at first and have the subject drag them in, but the authors believe this doesn't encourage careful consideration and comparison of the different fragments as the implicit goal of the test becomes to audition and drag each fragment in just once, rather than to compare all fragments rigorously.
nickjillings@1528 239
nickjillings@1528 240 See Figure \ref{fig:interface} for an example of the interface. %? change if a new interface is shown
nickjillings@1528 241
nickjillings@1528 242 %Most of these functions are specific to the APE interface design, for instance the AB test will need a different structure for the audio engine and loading of files, since multiple instances of the same file are required. % more generally these pertain to any typeof multi-stimulus test - not quite useful for AB tests, method of adjustment, ABX, and so on.
nickjillings@1528 243 %There are some areas of the design where certain design choices had to be made such as with the markers.
nickjillings@1528 244
nickjillings@1528 245 %For instance, the option to provide free-text comment fields allows for tests with individual vocabulary methods, as opposed to only allowing quantitative scales associated to a fixed set of descriptors.
nickjillings@1528 246
nickjillings@1528 247 \begin{figure*}[ht]
nickjillings@1528 248 \centering
nickjillings@1528 249 \includegraphics[width=.95\textwidth]{interface.png}
nickjillings@1528 250 \caption{Example interface, with one axis, seven fragments, and text, radio button and check box style comments.}
nickjillings@1528 251 \label{fig:interface}
nickjillings@1528 252 \end{figure*}
nickjillings@1528 253
nickjillings@1528 254
nickjillings@1528 255 \section{Architecture}\label{sec:architecture} % or implementation?
nickjillings@1528 256
nickjillings@1528 257 The tool performs all processing on the client side, utilising the new HTML5 Web Audio API, which is supported by most major web browsers. The API allows for constructing audio processing elements and connecting them together to produce a high quality, real time signal processing chain for manipulating audio streams. The API supports multichannel processing and has an accurate playback timer for precise, scheduled playback control. The API is controlled through the browser JavaScript engine and is therefore highly configurable. All processing is performed in a low latency thread separate from the main JavaScript thread, so there is no blocking due to real time processing.
nickjillings@1528 258
nickjillings@1528 259 The web tool itself is split into several files to operate:
nickjillings@1528 260 \begin{itemize}
nickjillings@1528 261 \item \texttt{index.html}: The main index file to load the scripts, this is the file the browser must request to load.
nickjillings@1528 262 \item \texttt{core.js}: Contains global functions and object prototypes to define the audio playback engine, the audio objects and the loading of media files.
nickjillings@1528 263 \item \texttt{ape.js}: Parses setup files to create the interface as instructed, following the same style chain as the MATLAB APE Tool \cite{deman2014b}.
nickjillings@1528 264 \end{itemize}
nickjillings@1528 265
nickjillings@1528 266 The HTML file loads the \texttt{core.js} file along with a few other ancillary files (such as the jQuery JavaScript extensions\footnote{http://jquery.com/}), at which point the browser's JavaScript engine begins to execute the on-page instructions, which give the URL of the test setup XML document (outlined in Section \ref{sec:setupresultsformats}). \texttt{core.js} parses this document and executes the functions in \texttt{ape.js} to build the web page. The reason for separating these two files is to allow further interface designs (such as MUSHRA \cite{mushra} or 2D rating \cite{bech}) to be used, which would still require the same underlying core functions outlined in \texttt{core.js}.
nickjillings@1528 267
nickjillings@1528 268 The \texttt{ape.js} file has several main functions, the most important of which are documented here. \textit{loadInterface(xmlDoc)} is called to decode the supplied project document with respect to the specified interface and to define any global structures (such as the slider interface). It also identifies the number of pages in the test and randomises the order, if specified to do so. This is the only mandatory function in any of the interface files, as it is called by \texttt{core.js} when the document is ready. \texttt{core.js} cannot `see' any interface-specific functions and therefore cannot assume any are available; \textit{loadInterface(xmlDoc)} is therefore essential to set up the entire test environment. Because the interface files are loaded by \texttt{core.js}, and because the functions in \texttt{core.js} are global, the interface files can `see' the \texttt{core.js} file and can therefore not only interact with it, but also modify it.
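
As an illustration, a minimal sketch of this mandatory entry point is shown below; the element and helper names are illustrative and not taken verbatim from \texttt{ape.js}.

\scriptsize
\begin{lstlisting}[language=Java]
// Sketch of the mandatory entry point of an interface file,
// called by core.js once the setup XML document is ready.
function loadInterface(xmlDoc) {
    // Collect the test pages defined in the setup document
    var pages = xmlDoc.getElementsByTagName('audioHolder');
    var order = [];
    for (var i = 0; i < pages.length; i++) { order.push(i); }
    // Randomise the page order if the setup document requests it
    var setup = xmlDoc.getElementsByTagName('setup')[0];
    if (setup.getAttribute('randomiseOrder') === 'true') {
        order.sort(function() { return Math.random() - 0.5; });
    }
    // Build the global interface structures (sliders, comment boxes)
    // and load the first page; loadTest is defined elsewhere
    // in the interface file.
    loadTest(order[0]);
}
\end{lstlisting}
\normalsize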
nickjillings@1528 269
nickjillings@1528 270 Each test page is loaded using \textit{loadTest(id)}, which performs two major tasks: first, to populate the interface with the slider elements and comment boxes; and second, to instruct the \textit{audioEngine} to load the audio fragments and construct the backend audio graph. \textit{loadTest(id)} also instructs the audio engine in \texttt{core.js} to create the \textit{audioObject}s.
nickjillings@1528 271 These are custom audio nodes, one representing each audio element specified in each page.
nickjillings@1528 272 They consist of a \textit{bufferSourceNode} (a node which holds a buffer of audio samples for playback) and a \textit{gainNode}, both of which are Web Audio API nodes. Various functions are applied, depending on which metrics are enabled, to record the interaction with the audio element. These nodes are then connected to the \textit{audioEngine} (itself a custom web audio node), which contains a \textit{gainNode} (to which the various \textit{audioObject}s connect) for summation before passing the output to the \textit{destinationNode}, a permanent node of the Web Audio API created as the master output. Here, the browser then passes the audio information to the system. % Does this now make sense?
nickjillings@1528 273 % audio object/audioObject/Audio Object: -- should always be audioObject if talking about the JavaScript object, otherwise should say audio element or audio fragment.
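
A simplified sketch of this graph construction, using standard Web Audio API calls, is shown below; the wrapper function and variable names are illustrative rather than the actual \texttt{core.js} implementation.

\scriptsize
\begin{lstlisting}[language=Java]
// Sketch: each audio fragment gets a bufferSourceNode and a gainNode,
// all summed in a master gainNode before the destinationNode.
var audioContext = new AudioContext();
var masterGain = audioContext.createGain();    // summation point
masterGain.connect(audioContext.destination);  // master output

function createAudioObject(decodedBuffer) {
    var source = audioContext.createBufferSource(); // holds the samples
    source.buffer = decodedBuffer;
    var gain = audioContext.createGain();           // per-fragment gain
    source.connect(gain);
    gain.connect(masterGain);
    return {source: source, gain: gain};
}
\end{lstlisting}
\normalsize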
nickjillings@1528 274
nickjillings@1528 275 When an \textit{audioObject} is created, it is given the URL of the audio sample to load. This is downloaded into the browser asynchronously using the \textit{XMLHttpRequest} object, which downloads any file into the JavaScript environment for further processing. This is particularly useful for the Web Audio API because it supports downloading of files in their binary form for decoding. Once downloaded, the file is decoded using the Web Audio API offline decoder. This uses the browser's available decoding schemes to decode the audio files into raw float32 arrays, which are in turn passed to the relevant \textit{audioObject} for playback.
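
A sketch of this asynchronous fetch-and-decode step follows; it assumes an \textit{AudioContext} instance named \texttt{audioContext}, and the callback name is illustrative.

\scriptsize
\begin{lstlisting}[language=Java]
// Sketch: download an audio file as binary data and decode it with
// the Web Audio API offline decoder. Assumes an AudioContext
// instance called audioContext already exists.
function loadFragment(url, onDecoded) {
    var request = new XMLHttpRequest();
    request.open('GET', url, true);
    request.responseType = 'arraybuffer';  // fetch raw binary data
    request.onload = function() {
        // decodeAudioData turns the encoded file into float32 arrays
        audioContext.decodeAudioData(request.response, function(buffer) {
            onDecoded(buffer);  // hand the buffer to the audioObject
        });
    };
    request.send();
}
\end{lstlisting}
\normalsize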
nickjillings@1528 276
nickjillings@1528 277 Once each page of the test is completed, identified by pressing the Submit button, the \textit{pageXMLSave(testId)} function is called to store all of the collected data until all pages of the test are completed. After the final test and any post-test questions are completed, the \textit{interfaceXMLSave()} function is called. This function generates the final XML file for submission, as outlined in Section \ref{sec:setupresultsformats}.
nickjillings@1528 278
nickjillings@1528 279 \vspace{-1em}
nickjillings@1528 280
nickjillings@1528 281 \section{Support and limitations}\label{sec:support}
nickjillings@1528 282
nickjillings@1528 283 Different browsers support different sets of audio file formats, and no single format is supported consistently across all of them. Currently the Web Audio API is best supported in Chrome, Firefox, Opera and Safari. All of these support the uncompressed WAV format. Although not a compact, web-friendly format, most connections provide high enough bandwidth that this should not be a problem. Ogg Vorbis is another well supported format across the four major desktop browsers, as is MP3 (although Firefox may not support all MP3 types\footnote{https://developer.mozilla.org/en-US/docs/Web/HTML/\\Supported\_media\_formats}). %https://developer.mozilla.org/en-US/docs/Web/HTML/Supported_media_formats
nickjillings@1528 284 One issue with the Web Audio API is that the sample rate is assigned by the system sound device rather than requested, and there is no ability to request a different one. % Does this make sense? The problem is across all audio files.
nickjillings@1528 285 As the sampling rate and the effect of resampling may be critical for some listening tests, the default operation when an audio file is loaded with a different sample rate to that of the system is to convert the sample rate. To provide a check for this, the desired sample rate can be supplied in the setup XML and checked against the system sample rate. If the sample rates do not match, a browser alert window is shown asking for the sample rate to be adjusted accordingly.
nickjillings@1528 286 This happens before any loading or decoding of audio files so the browser will only be instructed to fetch files if the system sample rate meets the requirements, avoiding multiple requests for large files until they are actually needed.
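
A sketch of such a check, run before any audio is fetched, is shown below; the parameter name and alert text are illustrative rather than prescribed by the setup document format.

\scriptsize
\begin{lstlisting}[language=Java]
// Sketch: compare the system sample rate (fixed by the Web Audio API)
// with the rate requested in the setup XML before fetching any audio.
var audioContext = new AudioContext();

function checkSampleRate(requestedRate) {
    if (requestedRate && audioContext.sampleRate !== requestedRate) {
        // Ask the subject to adjust the system sample rate and reload
        window.alert('Please set your system sample rate to ' +
                     requestedRate + ' Hz and restart the test.');
        return false;  // do not fetch or decode any audio files
    }
    return true;
}
\end{lstlisting}
\normalsize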
nickjillings@1528 287
nickjillings@1528 288 %During playback, the playback nodes loop indefinitely until playback is stopped. The gain nodes in the \textit{audioObject}s enable dynamic muting of nodes. When a bar in the sliding ranking is clicked, the audio engine mutes all \textit{audioObject}s and un-mutes the clicked one. Therefore, if the audio samples are perfectly aligned up and of the same sample length, they will remain perfectly aligned with each other.
nickjillings@1528 289 % Don't think this is relevant anymore
nickjillings@1528 290
nickjillings@1528 291
nickjillings@1528 292 \section{Input and result files}\label{sec:setupresultsformats}
nickjillings@1528 293
nickjillings@1528 294 The setup and result files both use the common XML document format to outline the various parameters. The setup file determines the interface to use, the location of audio files, the number of pages and other parameters to define the testing environment. Having one document to modify allows for quick manipulation in a `human readable' form to create new tests, or adjust current ones, without needing to edit multiple web files. Furthermore, we also provide a simple web page to enter all these settings without needing to manipulate the raw XML. An example of such an XML document is presented below. % I mean the .js and .html files, though not sure if any better.
nickjillings@1528 295
nickjillings@1528 296
nickjillings@1528 297
nickjillings@1528 298
nickjillings@1528 299 \lstset{
nickjillings@1528 300 basicstyle=\ttfamily,
nickjillings@1528 301 columns=fullflexible,
nickjillings@1528 302 showstringspaces=false,
nickjillings@1528 303 commentstyle=\color{grey}\upshape
nickjillings@1528 304 }
nickjillings@1528 305
nickjillings@1528 306 \lstdefinelanguage{XML}
nickjillings@1528 307 {
nickjillings@1528 308 morestring=[b]",
nickjillings@1528 309 morestring=[s]{>}{<},
nickjillings@1528 310 morecomment=[s]{<?}{?>},
nickjillings@1528 311 stringstyle=\color{black} \bfseries,
nickjillings@1528 312 identifierstyle=\color{darkblue} \bfseries,
nickjillings@1528 313 keywordstyle=\color{cyan} \bfseries,
nickjillings@1528 314 morekeywords={xmlns,version,type},
nickjillings@1528 315 breaklines=true% list your attributes here
nickjillings@1528 316 }
nickjillings@1528 317 \scriptsize
nickjillings@1528 318 \lstset{language=XML}
nickjillings@1528 319
nickjillings@1528 320 \begin{lstlisting}
nickjillings@1528 321 <?xml version="1.0" encoding="utf-8"?>
nickjillings@1528 322 <BrowserEvalProjectDocument>
nickjillings@1528 323 <setup interface="APE" projectReturn="/save" randomiseOrder='false' collectMetrics='true'>
nickjillings@1528 324 <PreTest>
nickjillings@1528 325 <question id="location" mandatory="true">Please enter your location.</question>
nickjillings@1528 326 <number id="age" min="0">Please enter your age</number>
nickjillings@1528 327 </PreTest>
nickjillings@1528 328 <PostTest>
nickjillings@1528 329 <statement>Thank you for taking this listening test!</statement>
nickjillings@1528 330 </PostTest>
nickjillings@1528 331 <Metric>
nickjillings@1528 332 <metricEnable>testTimer</metricEnable>
nickjillings@1528 333 <metricEnable>elementTimer</metricEnable>
nickjillings@1528 334 <metricEnable>elementInitialPosition</metricEnable>
nickjillings@1528 335 <metricEnable>elementTracker</metricEnable>
nickjillings@1528 336 <metricEnable>elementFlagListenedTo</metricEnable>
nickjillings@1528 337 <metricEnable>elementFlagMoved</metricEnable>
nickjillings@1528 338 </Metric>
nickjillings@1528 339 <interface>
nickjillings@1528 340 <anchor>20</anchor>
nickjillings@1528 341 <reference>80</reference>
nickjillings@1528 342 </interface>
nickjillings@1528 343 </setup>
nickjillings@1528 344 <audioHolder id="test-0" hostURL="example_eval/" randomiseOrder='true'>
nickjillings@1528 345 <interface>
nickjillings@1528 346 <title>Example Test Question</title>
nickjillings@1528 347 <scale position="0">Min</scale>
nickjillings@1528 348 <scale position="100">Max</scale>
nickjillings@1528 349 <commentBoxPrefix>Comment on fragment</commentBoxPrefix>
nickjillings@1528 350 </interface>
nickjillings@1528 351 <audioElements url="1.wav" id="elem1"/>
nickjillings@1528 352 <audioElements url="2.wav" id="elem2"/>
nickjillings@1528 353 <audioElements url="3.wav" id="elem3"/>
nickjillings@1528 354 <CommentQuestion id="generalExperience" type="text">General Comments</CommentQuestion>
nickjillings@1528 355 <PreTest/>
nickjillings@1528 356 <PostTest>
nickjillings@1528 357 <question id="songGenre" mandatory="true">Please enter the genre of the song.</question>
nickjillings@1528 358 </PostTest>
nickjillings@1528 359 </audioHolder>
nickjillings@1528 360 </BrowserEvalProjectDocument>
nickjillings@1528 361
nickjillings@1528 362 \end{lstlisting}
nickjillings@1528 363
nickjillings@1528 364 \normalsize
nickjillings@1528 365 \vspace{-1em}
nickjillings@1528 366
nickjillings@1528 367 \subsection{Setup and configurability}
nickjillings@1528 368
nickjillings@1528 369 The setup document has a number of defined nodes and a structure which is documented with the source code. For example, there is a section for general setup options where any pre-test and post-test questions and statements can be defined. Pre- and post-test dialogue boxes allow for comments or questions to be presented before or after the test, to convey listening test instructions, and to gather information about the subject, listening environment, and overall experience of the test. In the example setup document above, a question box with the id `location' is added, which is set to be mandatory to answer. The question is in the PreTest node, meaning it will appear before any testing begins. When the result for the entire test is shown, the response will appear in the PreTest node with the id `location', allowing it to be found easily, provided the id values are meaningful.
nickjillings@1528 370
nickjillings@1528 371 We try to cater to a diverse audience with this toolbox, while ensuring it is simple, elegant and straightforward. To that end, we currently include the following options that can be easily switched on and off, by setting the value in the input XML file.
nickjillings@1528 372
nickjillings@1528 373 \begin{itemize}[leftmargin=*]%Should have used a description list for this.
nickjillings@1528 374 \item \textbf{Snap to corresponding position}: When enabled and a fragment is playing, the playhead skips to the same position in the next fragment that is clicked. Otherwise, each fragment is played from the start.
nickjillings@1528 375 \item \textbf{Loop fragments}: Repeat current fragment when end is reached, until the `Stop' or `Submit' button is clicked.
nickjillings@1528 376 \item \textbf{Comments}: Displays a separate comment box for each fragment in the page.
nickjillings@1528 377 \item \textbf{General comment}: Creates additional comment boxes alongside the fragment comment boxes, with a custom question and various input formats such as checkbox or radio.
nickjillings@1528 378 \item \textbf{Resampling}: When this is enabled, fragments are resampled to match the sample rate of the subject's system (a default feature of the Web Audio API). When it is not, an error is shown if the system does not match the requested sample rate.
nickjillings@1528 379 \item \textbf{Randomise page order}: Randomises the order in which different `pages' are presented. % are we calling this 'pages'?
nickjillings@1528 380 \item \textbf{Randomise fragment order}: Randomises the order and numbering of the markers and comment boxes corresponding to the fragments. Fragments remain tied to their given ID so that referencing is still possible (such as `this is much brighter than fragment 4').
nickjillings@1528 381 \item \textbf{Require (full) playback}: Require that each fragment has been played at least once, partly or fully.
nickjillings@1528 382 \item \textbf{Require moving}: Require that each marker is moved (dragged) at least once.
nickjillings@1528 383 \item \textbf{Require comments}: Require the subject to write a comment for each fragment.
nickjillings@1528 384 \item \textbf{Repeat test}: Number of times each page in the test should be repeated (none by default), to allow familiarisation with the content and experiment, and to investigate the consistency of the user and variability due to familiarity. The repetitions are all gathered before shuffling the order, so that repeated pages are not back-to-back where possible.
nickjillings@1528 385 \item \textbf{Returning to previous pages}: Indicates whether it is possible to go back to a previous `page' in the test.
nickjillings@1528 386 \item \textbf{Lowest rating below [value]}: To enforce a certain use of the rating scale, it can be required to rate at least one sample below a specified value.
nickjillings@1528 387 \item \textbf{Highest rating above [value]}: To enforce a certain use of the rating scale, it can be required to rate at least one sample above a specified value.
nickjillings@1528 388 \item \textbf{Reference}: Allows for a separate sample (outside of the axis) to be the `reference', which the subject can play back during the test to help with the task at hand \cite{mushra}.
nickjillings@1528 389 \item \textbf{Hidden reference/anchor}: Whether or not an explicit `reference' is provided, the `hidden reference' should be rated above a certain value \cite{mushra} -- this can be enforced.
nickjillings@1528 390 Similarly, a `hidden anchor' should be rated lower than a certain value \cite{mushra}.
nickjillings@1528 391 \item \textbf{Show scrub bar}: Display a playhead on a scrub bar to show the position in the current fragment.
nickjillings@1528 392 %\item \textbf{Drag playhead}: If scrub bar is visible, allow dragging to move back or forward in a fragment.
nickjillings@1528 393 \end{itemize}
nickjillings@1528 394
nickjillings@1528 395 When one of these options is not included in the setup file, it assumes a default value. As a result, the input file can be kept very compact if default values suffice for the test.
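
For instance, an option could be read from the setup document with a fallback default along the following lines; the helper and attribute names are purely illustrative.

\scriptsize
\begin{lstlisting}[language=Java]
// Sketch: read an optional boolean attribute from the setup XML,
// falling back to a default when the attribute is absent.
// setupNode is assumed to be the parsed setup element;
// the attribute name 'loopFragments' is illustrative.
function readOption(node, name, defaultValue) {
    var value = node.getAttribute(name);
    return (value === null) ? defaultValue : (value === 'true');
}
// e.g. looping is off unless the setup document enables it
var loopFragments = readOption(setupNode, 'loopFragments', false);
\end{lstlisting}
\normalsize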
nickjillings@1528 396
nickjillings@1528 397 % loop, snap to corresponding position, comments, 'general' comment, require same sampling rate, different types of randomisation
nickjillings@1528 398
nickjillings@1528 399 \subsection{Results}
nickjillings@1528 400
nickjillings@1528 401 The results file is dynamically generated by the interface upon clicking the `Submit' button. This also executes checks, depending on the setup file, to ensure that all fragments have been played back, rated and commented on. The returned XML output contains a node per fragment, holding both the corresponding marker's position and any comments written in the associated comment box. The rating returned is normalised to a value between 0 and 1, so that it does not depend on the pixel dimensions of different browser windows. The results also contain information collected by any defined pre/post questions. An excerpt of an output file is presented below, detailing the data collected for a single audioElement.
nickjillings@1528 402
nickjillings@1528 403 \scriptsize
nickjillings@1528 404 \lstset{language=XML}
nickjillings@1528 405
nickjillings@1528 406 \begin{lstlisting}
nickjillings@1528 407 <browserevaluationresult>
nickjillings@1528 408 <datetime>
nickjillings@1528 409 <date year="2015" month="5" day="28">2015/5/28</date>
nickjillings@1528 410 <time hour="13" minute="19" secs="17">13:19:17</time>
nickjillings@1528 411 </datetime>
nickjillings@1528 412 <pretest>
nickjillings@1528 413 <comment id="location">Control Room</comment>
nickjillings@1528 414 </pretest>
nickjillings@1528 415 <audioholder>
nickjillings@1528 416 <pretest></pretest>
nickjillings@1528 417 <posttest>
nickjillings@1528 418 <comment id="songGenre">Pop</comment>
nickjillings@1528 419 </posttest>
nickjillings@1528 420 <metric>
nickjillings@1528 421 <metricresult id="testTime">813.32</metricresult>
nickjillings@1528 422 </metric>
nickjillings@1528 423 <audioelement id="elem1">
nickjillings@1528 424 <comment>
nickjillings@1528 425 <question>Comment on fragment 1</question>
nickjillings@1528 426 <response>Good, but vocals too quiet.</response>
nickjillings@1528 427 </comment>
nickjillings@1528 428 <value>0.639010989010989</value>
nickjillings@1528 429 <metric>
nickjillings@1528 430 <metricresult id="elementTimer">111.05</metricresult>
nickjillings@1528 431 <metricresult id="elementTrackerFull">
nickjillings@1528 432 <timepos id="0">
nickjillings@1528 433 <time>61.60</time>
nickjillings@1528 434 <position>0.6390</position>
nickjillings@1528 435 </timepos>
nickjillings@1528 436 </metricresult>
nickjillings@1528 437 <metricresult id="elementInitialPosition">0.6571</metricresult>
nickjillings@1528 438 <metricresult id="elementFlagListenedTo">true</metricresult>
nickjillings@1528 439 </metric>
nickjillings@1528 440 </audioelement>
nickjillings@1528 441 </audioholder>
nickjillings@1528 442 </browserevaluationresult>
nickjillings@1528 443
nickjillings@1528 444 \end{lstlisting}
nickjillings@1528 445
nickjillings@1528 446 \normalsize
nickjillings@1528 447 \vspace{-.5em}
nickjillings@1528 448 Each page of testing is returned with the results of the entire page included in the structure. One \texttt{audioelement} node is created per audio fragment per page, along with its ID. This includes several child nodes, including the rating between 0 and 1, the comment, and any other collected metrics, such as how long the element was listened to, the initial position, and boolean flags showing whether the element was listened to, moved and commented on. Furthermore, each user action (manipulation of any interface element, such as playback or moving a marker) can be logged along with the corresponding time code.
nickjillings@1528 449 We also store session data such as the time the test took place and the duration of the test.
nickjillings@1528 450 We provide the option to store the results locally, and/or to have them sent to a server.
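
As an illustration, such a per-fragment movement log could be collected as sketched below; the object name is illustrative, and the field names follow the \texttt{elementTrackerFull} excerpt above.

\scriptsize
\begin{lstlisting}[language=Java]
// Sketch: log every marker movement for one fragment, mirroring the
// time/position pairs of the elementTrackerFull metric result.
function MovementLog() {
    this.events = [];
}
MovementLog.prototype.record = function(timeInTest, normalisedPosition) {
    // time code of the action and marker position between 0 and 1
    this.events.push({time: timeInTest, position: normalisedPosition});
};
\end{lstlisting}
\normalsize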
nickjillings@1528 451
nickjillings@1528 452 %Here is an example of the set up XML and the results XML: % perhaps best to refer to each XML after each section (set up <> results)
nickjillings@1528 453 % Should we include an Example of the input and output XML structure?? --> Sure.
nickjillings@1528 454
nickjillings@1528 455 %An example of the returned \textit{audioElement} node in the results XML file is as follows.
nickjillings@1528 456 %
nickjillings@1528 457 %\texttt{<audioelement id="8"> \\
nickjillings@1528 458 %<comment> \\
nickjillings@1528 459 %<question>Comment on track 0</question> \\
nickjillings@1528 460 %<response> The drums were punchy </response> \\
nickjillings@1528 461 %</comment> \\
nickjillings@1528 462 %<value> 0.25169491525423726 </value> \\
nickjillings@1528 463 %<metric> \\
nickjillings@1528 464 %<metricresult id="elementTimer"> \\ 2.3278004535147385< /metricresult> \\
nickjillings@1528 465 %<metricresult id="elementTrackerFull"> \\
nickjillings@1528 466 %<timepos id="0"> \\
nickjillings@1528 467 %<time>1.7937414965986385</time> \\
nickjillings@1528 468 %<position>0.41694915254237286</position> \\
nickjillings@1528 469 %</timepos> \\
nickjillings@1528 470 %<timepos id="1"> \\
nickjillings@1528 471 %<time>2.6993197278911563</time> \\
nickjillings@1528 472 %<position>0.45847457627118643</position> \\
nickjillings@1528 473 %</timepos> \\</metricresult> \\
nickjillings@1528 474 %<metricresult id="elementInitialPosition"> 0.47796610169491527 </metricresult> \\
nickjillings@1528 475 %<metricresult id="elementFlagListenedTo"> true< /metricresult> \\
nickjillings@1528 476 %<metricresult id="elementFlagMoved"> true </metricresult> \\
nickjillings@1528 477 %</metric> \\
nickjillings@1528 478 %</audioelement>}
nickjillings@1528 479
nickjillings@1528 480 % BRECHT: scripts
nickjillings@1528 481
nickjillings@1528 482 \begin{figure}[htpb]
nickjillings@1528 483 \centering
nickjillings@1528 484 \includegraphics[width=.45\textwidth]{boxplot.png}
nickjillings@1528 485 \caption{An example boxplot showing ratings by different subjects on fragments labeled `A' through `G'. }
nickjillings@1528 486 \label{fig:boxplot}
nickjillings@1528 487 \end{figure}
nickjillings@1528 488
nickjillings@1528 489 Python scripts are included to easily store ratings and comments in a CSV file, and to display graphs of numerical ratings (see Figure \ref{fig:boxplot}) or visualise the test's timeline.
nickjillings@1528 490 Visualisation of plots requires the free matplotlib library\footnote{http://matplotlib.org}.
nickjillings@1528 491
nickjillings@1528 492
nickjillings@1528 493 \section{Conclusions and future work}\label{sec:conclusions}
nickjillings@1528 494
nickjillings@1528 495 In this paper we have presented an approach to creating a browser-based listening test environment that can be used for a variety of types of perceptual evaluation of audio.
nickjillings@1528 496 Specifically, we discussed the use of the toolbox in the context of assessment of preference for different production practices, with identical source material.
nickjillings@1528 497 The purpose of this paper is to outline the design of this tool, to describe our implementation using basic HTML5 functionality, and to discuss design challenges and limitations of our approach. This tool differentiates itself from other perceptual audio tools by using web technologies to enable multiple participants to perform the test without the need for proprietary software such as MATLAB. The tool also allows for any interface to be built using HTML5 elements to create a variety of dynamic, multiple-stimulus listening test interfaces. It enables quick setup of simple tests, with the ability to manage complex tests through a single file. Finally, it uses the XML document format to store the results, allowing for processing and analysis of results in various third-party software such as MATLAB or Python.
nickjillings@1528 498
nickjillings@1528 499 % future work
nickjillings@1528 500 Further work may include the development of other common test designs, such as MUSHRA \cite{mushra}, 2D valence and arousal/activity \cite{eerola2009prediction}, and others. We will add functionality to assist with setting up large-scale tests with remote subjects, so this becomes straightforward and intuitive.
nickjillings@1528 501 In addition, we will keep on improving and expanding the tool, and highly welcome feedback and contributions from the community.
nickjillings@1528 502
nickjillings@1528 503 The source code of this tool can be found on \\ \texttt{code.soundsoftware.ac.uk/projects/}\\ \texttt{webaudioevaluationtool}.
nickjillings@1528 504
nickjillings@1528 505
nickjillings@1528 506 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
nickjillings@1528 507 %bibliography here
nickjillings@1528 508 \bibliography{smc2015template}
nickjillings@1528 509
nickjillings@1528 510 \end{document}