annotate docs/WAC2016/WAC2016.tex @ 715:50c651d27330

Paper: Added interface screenshot and box plot example
author Brecht De Man <BrechtDeMan@users.noreply.github.com>
date Thu, 15 Oct 2015 20:10:00 +0100
\documentclass{sig-alternate}
\usepackage{hyperref} % make links (like references, links to Sections, ...) clickable
\usepackage{enumitem} % tighten itemize etc.\ by appending '[noitemsep,nolistsep]'
\usepackage{cleveref}
\usepackage{comment} % needed for the \begin{comment} ... \end{comment} block below

\graphicspath{{img/}} % put the images in this folder

\begin{document}

% Copyright
\setcopyright{waclicense}


%% DOI
%\doi{10.475/123_4}
%
%% ISBN
%\isbn{123-4567-24-567/08/06}
%
%%Conference
%\conferenceinfo{PLDI '13}{June 16--19, 2013, Seattle, WA, USA}
%
%\acmPrice{\$15.00}

%
% --- Author Metadata here ---
\conferenceinfo{Web Audio Conference WAC-2016,}{April 4--6, 2016, Atlanta, USA}
\CopyrightYear{2016} % Allows default copyright year (20XX) to be over-ridden - IF NEED BE.
%\crdata{0-12345-67-8/90/01} % Allows default copyright data (0-89791-88-6/97/05) to be over-ridden - IF NEED BE.
% --- End of Author Metadata ---
\title{Web Audio Evaluation Tool: A framework for subjective assessment of audio}
%\subtitle{[Extended Abstract]
%\titlenote{A full version of this paper is available as
%\textit{Author's Guide to Preparing ACM SIG Proceedings Using
%\LaTeX$2_\epsilon$\ and BibTeX} at
%\texttt{www.acm.org/eaddress.htm}}}
%
% You need the command \numberofauthors to handle the 'placement
% and alignment' of the authors beneath the title.
%
% For aesthetic reasons, we recommend 'three authors at a time'
% i.e. three 'name/affiliation blocks' be placed beneath the title.
%
% NOTE: You are NOT restricted in how many 'rows' of
% "name/affiliations" may appear. We just ask that you restrict
% the number of 'columns' to three.
%
% Use the \alignauthor commands to handle the names
% and affiliations for an 'aesthetic maximum' of six authors.

% FIVE author blocks instead of four, to leave space between the first two authors.
\numberofauthors{5} % four actual authors plus one dummy block for spacing
%
\author{
% The command \alignauthor (no curly braces needed) should
% precede each author name, affiliation/snail-mail address and
% e-mail address. Additionally, tag each line of
% affiliation/address with \affaddr, and tag the
% e-mail address with \email.
%
% 1st. author
\alignauthor Nicholas Jillings\\
\email{n.g.r.jillings@se14.qmul.ac.uk}
% dummy author for nicer spacing
\alignauthor
% 2nd. author
\alignauthor Brecht De Man\\
\email{b.deman@qmul.ac.uk}
\and % use '\and' if you need 'another row' of author names
% 3rd. author
\alignauthor David Moffat\\
\email{d.j.moffat@qmul.ac.uk}
% 4th. author
\alignauthor Joshua D. Reiss\\
\email{joshua.reiss@qmul.ac.uk}
\and % new line for address
\affaddr{Centre for Digital Music, School of Electronic Engineering and Computer Science}\\
\affaddr{Queen Mary University of London}\\
\affaddr{Mile End Road,}
\affaddr{London E1 4NS}\\
\affaddr{United Kingdom}\\
}
%Centre for Digital Music, School of Electronic Engineering and Computer Science, Queen Mary University of London
%% 5th. author
%\alignauthor Sean Fogarty\\
% \affaddr{NASA Ames Research Center}\\
% \affaddr{Moffett Field}\\
% \email{fogartys@amesres.org}
%% 6th. author
%\alignauthor Charles Palmer\\
% \affaddr{Palmer Research Laboratories}\\
% \affaddr{8600 Datapoint Drive}\\
% \email{cpalmer@prl.com}
%}
% There's nothing stopping you putting the seventh, eighth, etc.
% author on the opening page (as the 'third row') but we ask,
% for aesthetic reasons that you place these 'additional authors'
% in the \additionalauthors block, viz.
%\additionalauthors{Additional authors: John Smith (The Th{\o}rv{\"a}ld Group,
%email: {\texttt{jsmith@affiliation.org}}) and Julius P.~Kumquat
%(The Kumquat Consortium, email: {\texttt{jpkumquat@consortium.net}}).}
\date{1 October 2015}
% Just remember to make sure that the TOTAL number of authors
% is the number that will appear on the first page PLUS the
% number that will appear in the \additionalauthors section.

\maketitle
\begin{abstract}
Here comes the abstract.
\end{abstract}


\section{Introduction}

% Listening tests/perceptual audio evaluation: what are they, why are they important
% As opposed to limited scope of WAC15 paper: also musical features, realism of sound effects / sound synthesis, performance of source separation and other algorithms...
Perceptual evaluation of audio, in the form of listening tests, is a powerful way to assess anything from audio codec quality, to the realism of sound synthesis, to the performance of source separation and automated music production algorithms.
In less technical areas, the framework of a listening test can be used to measure emotional response to music or to test cognitive abilities.
% maybe some references? If there's space.

% check out http://link.springer.com/article/10.1007/s10055-015-0270-8 - only paper that cited WAC15 paper

% Why difficult? Challenges? What constitutes a good interface?
% Technical, interfaces, user friendliness, reliability
Several applications for performing perceptual listening tests currently exist; a review of these frameworks is presented in \Cref{tab:toolboxes}. HULTI-GEN~\cite{hultigen} is a single toolbox that presents the user with a large number of different test interfaces and allows each of them to be customised. The Web Audio Evaluation Tool (WAET) stands out in that it does not require proprietary software or a specific platform, and it provides a wide range of interface and test types in one user-friendly environment. Furthermore, it does not require any programming experience, as any test based on the default test types can be configured in the browser as well. Note that the design of an effective listening test poses many further challenges unrelated to interface design, which are beyond the scope of this paper \cite{bech}.

% Why in the browser?
The Web Audio API offers important features for perceptual tests, including sample-level manipulation of audio streams \cite{schoeffler2015mushra} and synchronous, flexible playback. Running in the browser allows leveraging the flexible, object-oriented JavaScript language and native support for web documents such as the Extensible Markup Language (XML), which is used for the configuration and test result files. Using the web also reduces deployment requirements to a basic web server, with advanced functionality such as result collection and automatic processing provided by PHP. As recruiting participants can be very time-consuming, and as some tests require a large number of participants, browser-based tests \cite{schoeffler2015mushra} enable participants in multiple locations to take part. However, to our knowledge, no existing tool combines remote deployment with a broad choice of test designs.

Both BeaqleJS \cite{beaqlejs} and mushraJS\footnote{https://github.com/akaroice/mushraJS} also operate in the browser. However, BeaqleJS does not make use of the Web Audio API and therefore lacks arbitrary manipulation of audio stream samples, and neither offers a sufficiently wide choice of test designs to be useful to many researchers. %requires programming knowledge?...

% only browser-based?
\begin{table*}[ht]
\caption{Existing listening test platforms and their features}
\begin{center}
\begin{tabular}{|*{6}{l|}}
\hline
\textbf{Name} & \textbf{Ref.} & \textbf{Language} & \textbf{Interfaces} & \textbf{Remote} & \textbf{All UI} \\
\hline
APE & \cite{ape} & MATLAB & multi-stimulus, 1 axis per attribute & & \\
BeaqleJS & \cite{beaqlejs} & JavaScript & ABX, MUSHRA & (not natively supported) & \\
HULTI-GEN & \cite{hultigen} & Max & See \Cref{tab:toolbox_interfaces} & & \checkmark \\
mushraJS & & JavaScript & MUSHRA & \checkmark & \\
MUSHRAM & \cite{mushram} & MATLAB & MUSHRA & & \\
Scale & \cite{scale} & MATLAB & See \Cref{tab:toolbox_interfaces} & & \\
WhisPER & \cite{whisper} & MATLAB & See \Cref{tab:toolbox_interfaces} & & \checkmark \\
\textbf{WAET} & \cite{waet} & JavaScript & \textbf{All of the above} & \checkmark & \checkmark \\
\hline
\end{tabular}
\end{center}
\label{tab:toolboxes}
\end{table*}%

\begin{table*}[ht]
\caption{Listening test interfaces and the toolboxes that support them}
\begin{center}
\begin{tabular}{|*{5}{l|}}
\hline
\textbf{Interface} & \textbf{HULTI-GEN} & \textbf{Scale} & \textbf{WhisPER} & \textbf{WAET} \\
\hline
MUSHRA (ITU-R BS. 1534) & \checkmark & & & \checkmark \\
Rank scale & \checkmark & & & \checkmark \\
Likert scale & \checkmark & & \checkmark & \checkmark \\
ABC/HR (ITU-R BS. 1116) & \checkmark & & & \checkmark \\
-50 to 50 Bipolar with Ref & \checkmark & & & \checkmark \\
Absolute Category Rating (ACR) Scale & \checkmark & & & \checkmark \\
Degradation Category Rating (DCR) Scale & \checkmark & & & \checkmark \\
Comparison Category Rating (CCR) Scale & \checkmark & & \checkmark & \checkmark \\
9 Point Hedonic Category Rating Scale & \checkmark & & \checkmark & \checkmark \\
ITU-R 5 Point Continuous Impairment Scale & \checkmark & & & \checkmark \\
Pairwise Comparison / AB test & \checkmark & & & \checkmark \\
Multi-attribute ratings & \checkmark & & & \checkmark \\
ABX Test & \checkmark & & & \checkmark \\
Adaptive psychophysical methods & & & \checkmark & \\
Repertory Grid Technique (RGT) & & & \checkmark & \\
Semantic differential & & \checkmark & \checkmark & \\
n-Alternative Forced Choice & & \checkmark & & \\
\hline
\end{tabular}
\end{center}
\label{tab:toolbox_interfaces}
\end{table*}%

%
%Selling points: remote tests, visualisation, create your own test in the browser, many interfaces, few/no dependencies, flexibility

%[Talking about what we do in the various sections of this paper. Referring to \cite{waet}. ]
To meet the need for a cross-platform, versatile and easy-to-use listening test tool, we previously developed the Web Audio Evaluation Tool \cite{waet}, which at its inception could run a listening test in the browser from an XML configuration file and store the results as another XML file, with one particular interface. We have since expanded this into a tool with which a wide range of listening test types can easily be constructed and deployed remotely, without any need to manually alter code or configuration files, and which allows visualisation of the collected results in the browser. In this paper, we discuss these different aspects and explore possible future improvements. Specifically, in \Cref{sec:architecture} we cover the general implementation, with a focus on the Web Audio API, followed by a discussion of the requirements for successful remote tests in \Cref{sec:remote}. \Cref{sec:interfaces} describes the various interfaces the tool supports, as well as how this is kept manageable. Finally, in \Cref{sec:analysis} we provide an overview of the analysis capabilities in the browser, before summarising our findings and listing future research directions in \Cref{sec:conclusion}.

\begin{figure}[tb]
\centering
\includegraphics[width=.5\textwidth]{interface.png}
\caption{A simple example of a multi-stimulus, single-attribute, single-rating-scale test with a reference and comment fields.}
\label{fig:interface}
\end{figure}

\begin{comment}
% MEETING 8 OCTOBER
\subsection{Meeting 8 October}
\begin{itemize}
\item Do we manipulate audio?\\
\begin{itemize}
\item Add loudness equalisation? (test\_create.html) Tag with gains.
\item Add volume slider?
\item Cross-fade (in interface node): default 0, number of seconds
\item Also: we use the playback buffer to present metrics of which portion is listened to
\end{itemize}
\item Logging system information: whichever are possible (justify others)
\item Input streams as audioelements
\item Capture microphone to estimate loudness (especially Macbook)
\item Test page (in-built oscillators): left-right calibration, ramp up test tone until you hear it; optional compensating EQ (future work implementing own filters) --> Highlight issues!
\item Record IP address (PHP function, grab and append to XML file)
\item Expand anchor/reference options
\item AB / ABX
\end{itemize}

\subsubsection{Issues}
\begin{itemize}
\item Filters not consistent (Nick to test across browsers)
\item Playback audiobuffers need to be destroyed and rebuilt each time
\item Can't get channel data, hardware input/output...
\end{itemize}
\end{comment}

\section{Architecture} % title? 'back end'? % NICK
\label{sec:architecture}
%A slightly technical overview of the system. Talk about XML, JavaScript, Web Audio API, HTML5.

Although WAET uses only a small subset of the Web Audio API's functionality, its capabilities derive directly from it. Listening tests can yield far more information than the perceptual ratings of the audio fragments alone. With WAET it is possible to record which parts of the audio fragments were listened to and when, at what point in the audio stream the participant switched to a different fragment, and how a fragment's rating was adjusted over time within a session, to name a few. Not only does this allow evaluation of a wealth of perceptual aspects, but it also helps detect unreliable participants whose results may not be representative.

One of the key initial design parameters for WAET was to make the tool as accessible as possible to non-programmers, and to this end all user-modifiable options are contained in a single XML document. This document, called the specification document, can be created either by writing the XML manually (or modifying an existing document or template) or by using one of the included test creators. These are standalone HTML pages, requiring no server or internet connection, which help build the test specification document. The first (test\_create.html) is aimed at simpler tests and guides the user step by step; it supports adding media through drag and drop and has a clutter-free interface. The advanced version is for more complex tests where raw XML manipulation is not wanted but the same freedom is required (whilst keeping a safety net). Both tools perform automatic verification to ensure the XML file is valid, highlighting entries which are incorrect and would cause an error, as well as options which should be removed because they are blank.
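
A specification document along the lines described above might look as follows; this is a purely illustrative sketch, and the element and attribute names are hypothetical rather than the tool's actual schema.

```xml
<!-- Hypothetical sketch of a specification document; element and
     attribute names are illustrative and may differ from the real schema. -->
<waet>
  <setup interface="APE"/>
  <audioHolder id="page-1" hostURL="media/">
    <audioElements url="fragment-A.wav" gain="0.89"/>
    <audioElements url="fragment-B.wav" gain="1.00"/>
  </audioHolder>
</waet>
```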

The basic test creator uses the Web Audio API to perform quick playback checks and also supports loudness normalisation, inspired by \cite{ape}. The loudness values are calculated offline by accessing the raw audio samples exposed by the buffer, and are then applied to the audio element as a gain attribute. This allows the test to perform loudness normalisation without needing to edit any audio files. The gain can also be modified in either editor using an HTML5 slider or number box.
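
The gain computation can be sketched as follows. This is an illustrative reimplementation, not the tool's actual code, and the RMS measure and target level are assumptions.

```javascript
// Illustrative sketch (not WAET's actual code): derive a gain value that
// normalises a fragment's RMS level to a chosen target in dBFS.
function rmsLevel(samples) {
  let sum = 0;
  for (let i = 0; i < samples.length; i++) {
    sum += samples[i] * samples[i];
  }
  return Math.sqrt(sum / samples.length); // linear RMS
}

function normalisationGain(samples, targetDbfs) {
  const targetLinear = Math.pow(10, targetDbfs / 20);
  return targetLinear / rmsLevel(samples); // value for the gain attribute
}

// In the browser, the samples would come from
// audioBuffer.getChannelData(0) after decoding.
```

The resulting gain is stored in the specification document rather than baked into the audio files, which is what lets the normalisation happen without editing any media.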

%Describe and/or visualise audioholder-audioelement-... structure.
The specification document contains the URLs of the audio fragments for each test page. These fragments are downloaded asynchronously during the test and decoded offline by the Web Audio offline decoder. The resulting buffers are assigned to custom Audio Object nodes, each of which tracks the fragment buffer, the playback bufferSourceNode, the XML information including its unique test ID, the interface object(s) associated with the fragment, and any metric or data collection objects. The Audio Objects are controlled by an over-arching custom audio context node (not to be confused with the Web Audio AudioContext). This parent JavaScript node allows session-wide control of the Audio Objects, including starting and stopping playback of specific nodes.
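
The structure described above can be sketched as plain JavaScript objects; all identifiers here are hypothetical, chosen for illustration rather than taken from the tool's source.

```javascript
// Illustrative sketch of the described structure (names are hypothetical,
// not WAET's actual identifiers).
class AudioObject {
  constructor(id, buffer) {
    this.id = id;            // unique test ID from the specification XML
    this.buffer = buffer;    // decoded AudioBuffer for this fragment
    this.sourceNode = null;  // current bufferSourceNode, rebuilt per play
    this.interfaces = [];    // interface object(s) rating this fragment
    this.metrics = [];       // metric / data collection objects
  }
}

// The over-arching "custom audio context" controlling all Audio Objects.
class AudioEngine {
  constructor() { this.objects = []; }
  add(obj) { this.objects.push(obj); return obj; }
  find(id) { return this.objects.find(o => o.id === id); }
}
```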

One complication with this model is that the Web Audio API's AudioBufferSourceNode is specified as a `use once' object: once it has been played, it must be discarded, as it cannot be instructed to play again. Therefore, on each start request a new bufferSourceNode must be created and linked with the stored buffer. This is odd behaviour for such a simple object, but the only alternative is the HTML5 audio element, which cannot be started synchronously at a given time and is therefore not suited.
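
The work-around can be sketched as a small helper; the function and property names are illustrative, and the context is passed in explicitly so the same logic can be exercised outside a browser.

```javascript
// Sketch of the work-around described above: because an
// AudioBufferSourceNode is single-use, a fresh node is built for every
// start request and wired to the stored, reusable AudioBuffer.
function playFragment(context, audioObject, when = 0) {
  const source = context.createBufferSource(); // new node every time
  source.buffer = audioObject.buffer;          // reuse the decoded buffer
  source.connect(context.destination);
  source.start(when);                          // sample-accurate start
  audioObject.sourceNode = source;             // keep a handle to stop it later
  return source;
}
```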

In the test, each buffer node is connected to a gain node operating at the level determined by the specification document. This makes it possible to perform a `Method of Adjustment' test, where an interface directly manipulates these gain nodes. There is also an optional `Master Volume' slider which can be shown on the test GUI; it modifies a gain node placed before the destination node, and its movements can be tracked as well, providing extra validation data. As this slider is not indicative of the final volume exiting the speakers, its use should only be considered under laboratory conditions, where proper behaviour can be ensured. Finally, the gain nodes allow cross-fading between samples during synchronous playback: either fade-out followed by fade-in, or a true cross-fade.
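
A true cross-fade can be sketched with equal-power gain curves; this is a common technique for perceptually smooth transitions, shown here as an assumption rather than the tool's documented behaviour.

```javascript
// Sketch of a true (equal-power) cross-fade between two fragments, as
// opposed to fade-out-then-fade-in. Given a position t in [0, 1], return
// the gains for the outgoing and incoming fragments such that the summed
// power stays constant throughout the transition.
function crossFadeGains(t) {
  return {
    out: Math.cos(t * Math.PI / 2), // outgoing fragment's gain node value
    in: Math.sin(t * Math.PI / 2)   // incoming fragment's gain node value
  };
}
// In the browser these values would be scheduled on the two GainNodes,
// e.g. via gainNode.gain.linearRampToValueAtTime(...).
```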

%Which type of files? WAV, anything else? Perhaps not exhaustive list, but say something along the lines of 'whatever browser supports'. Compatability?
The supported media files depend on the browser's decoding support, and are the same as those supported by the HTML5 audio element. The most widely supported format is wave (.WAV), which is accepted by every browser supporting the Web Audio API. The toolbox itself works in any browser which supports the Web Audio API.

All the collected session data is returned in an XML document structured similarly to the configuration document, where test pages contain the audio elements with their trace collection, results, comments and any other interface-specific data points.
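
A result document mirroring that structure might look as follows; again, the element and attribute names are hypothetical and serve only to illustrate the kind of data described.

```xml
<!-- Hypothetical sketch of a result document; element and attribute
     names are illustrative and may differ from the actual output. -->
<waetresult>
  <page id="page-1">
    <audioelement id="fragment-A">
      <value>0.73</value>                  <!-- final rating -->
      <comment>Slightly dull.</comment>
      <movement time="12.4" value="0.60"/> <!-- rating trace over time -->
      <listen start="0.0" stop="4.2"/>     <!-- portion auditioned -->
    </audioelement>
  </page>
</waetresult>
```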

\section{Remote tests} % with previous?
\label{sec:remote}

If the experimenter is willing to trade some degree of control for a higher number of participants, the test can be hosted on a public web server so that participants can take part remotely. A link can then be shared widely in the hope of attracting a large number of subjects, though listening conditions and subject reliability may be less than ideal. However, a sound system calibration page and a wide range of metrics logged during the test mitigate these problems. In some experiments it may even be preferable that the subject uses a familiar, `real life' listening set-up, for instance when perceived quality differences on everyday sound systems are investigated.
Furthermore, a fully browser-based test, where the collection of the results is automatic, is more efficient and technically reliable even when the test takes place under lab conditions.

The following features allow easy and effective remote testing:
\begin{description}[noitemsep,nolistsep]
\item[PHP script to collect result XML files] and store them on a central server.
\item[Randomly pick a specified number of pages] to ensure an equal and randomised spread of the different pages (`audioHolders') across participants.
\item[Calibration of the sound system (and participant)] by a perceptual pre-test to gather information about the frequency response and speaker configuration -- this can be supplemented with a survey.
% In theory calibration could be applied anywhere??
% \item Functionality to participate multiple times
% \begin{itemize}[noitemsep,nolistsep]
% \item Possible to log in with unique ID (no password)
% \item Pick `new user' (generates new, unique ID) or `already participated' (need already available ID)
% \item Store XML on server with IDs plus which audioholders have already been listened to
% \item Don't show `post-test' survey after first time
% \item Pick `new' audioholders if available
% \item Copy survey information first time to new XMLs
% \end{itemize}
\item[Intermediate saves] for tests which were interrupted or left unfinished.
\item[Collect IP address information] for geographic location, through a PHP function which grabs the address and appends it to the XML file.
\item[Collect browser and display information] to the extent it is available and reliable.
\end{description}
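
The browser and display information in the last item can be gathered client-side along these lines; the function is an illustrative sketch, with the environment objects injected so the logic also runs outside a browser.

```javascript
// Sketch of client-side session-information gathering (illustrative, not
// WAET's actual code). The navigator/screen objects are passed in rather
// than read from globals, so the logic is testable outside a browser.
function sessionInfo(nav, scr) {
  return {
    userAgent: nav.userAgent,             // browser identification string
    platform: nav.platform,               // operating system hint
    screen: scr.width + 'x' + scr.height  // display resolution
  };
}
// In the browser: sessionInfo(window.navigator, window.screen),
// serialised into the result XML before upload.
```

Note that such self-reported values are only as reliable as the browser makes them, which is why the feature list hedges with "to the extent it is available and reliable".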


\section{Interfaces} % title? 'Front end'? % Dave
\label{sec:interfaces}

The purpose of this listening test framework is to give any user maximum flexibility to design a listening test for their exact application with minimum effort. To this end, a large range of `standard' listening test interfaces have been implemented, including:
\begin{itemize}[noitemsep,nolistsep]
\item MUSHRA (ITU-R BS. 1534)~\cite{recommendation20031534}
\begin{itemize}[noitemsep,nolistsep]
\item Multiple stimuli are presented and rated on a continuous scale, with a reference, hidden reference and hidden anchors.
\end{itemize}
\item Rank Scale~\cite{pascoe1983evaluation}
\begin{itemize}[noitemsep,nolistsep]
\item Stimuli are ranked on a single horizontal scale, in order of preference.
\end{itemize}
\item Likert scale~\cite{likert1932technique}
\begin{itemize}[noitemsep,nolistsep]
\item Each stimulus has a five-point scale with values: Strongly Agree, Agree, Neutral, Disagree and Strongly Disagree.
\end{itemize}
\item ABC/HR (ITU-R BS. 1116)~\cite{recommendation19971116} (Mean Opinion Score: MOS)
\begin{itemize}[noitemsep,nolistsep]
\item Each stimulus has a continuous scale (5--1), labelled Imperceptible, Perceptible but not annoying, Slightly annoying, Annoying, Very annoying.
\end{itemize}
\item -50 to 50 Bipolar with Reference
\begin{itemize}[noitemsep,nolistsep]
\item Each stimulus has a continuous scale from -50 to 50, defaulting to 0 in the middle, for comparison. A reference is also provided.
\end{itemize}
\item Absolute Category Rating (ACR) Scale~\cite{rec1996p}
\begin{itemize}[noitemsep,nolistsep]
\item Each stimulus has a five-point scale with values: Bad, Poor, Fair, Good, Excellent.
\end{itemize}
\item Degradation Category Rating (DCR) Scale~\cite{rec1996p}
\begin{itemize}[noitemsep,nolistsep]
\item Each stimulus has a five-point scale with values: (5) Inaudible, (4) Audible but not annoying, (3) Slightly annoying, (2) Annoying, (1) Very annoying.
\end{itemize}
\item Comparison Category Rating (CCR) Scale~\cite{rec1996p}
\begin{itemize}[noitemsep,nolistsep]
\item Each stimulus has a seven-point scale with values: Much Better, Better, Slightly Better, About the Same, Slightly Worse, Worse, Much Worse. A reference is also provided.
\end{itemize}
\item 9 Point Hedonic Category Rating Scale~\cite{peryam1952advanced}
\begin{itemize}[noitemsep,nolistsep]
\item Each stimulus has a nine-point scale with values: Like Extremely, Like Very Much, Like Moderately, Like Slightly, Neither Like nor Dislike, Dislike Slightly, Dislike Moderately, Dislike Very Much, Dislike Extremely. A reference is also provided.
\end{itemize}
\item ITU-R 5 Point Continuous Impairment Scale~\cite{rec1997bs}
\begin{itemize}[noitemsep,nolistsep]
\item Each stimulus has a five-point scale with values: (5) Imperceptible, (4) Perceptible but not annoying, (3) Slightly annoying, (2) Annoying, (1) Very annoying. A reference is also provided.
\end{itemize}
\item Pairwise Comparison (Better/Worse)~\cite{david1963method}
\begin{itemize}[noitemsep,nolistsep]
\item A reference is provided and every stimulus is rated as being either better or worse than the reference.
\end{itemize}
\item APE style \cite{ape}
\begin{itemize}[noitemsep,nolistsep]
\item Multiple stimuli on a single horizontal slider for inter-sample rating.
\end{itemize}
\item Multi-attribute ratings
\begin{itemize}[noitemsep,nolistsep]
\item Multiple stimuli as points on a 2D plane for inter-sample rating (e.g.\ valence--arousal).
\end{itemize}
\item AB Test~\cite{lipshitz1981great}
\begin{itemize}[noitemsep,nolistsep]
\item Two stimuli are presented at a time and the participant has to select the preferred one.
\end{itemize}
\item ABX Test~\cite{clark1982high}
\begin{itemize}[noitemsep,nolistsep]
\item Two stimuli are presented along with a reference, and the participant has to identify which stimulus matches the reference.
\end{itemize}
\end{itemize}
BrechtDeMan@715 360
BrechtDeMan@715 361 It is possible to include any number of references, anchors, hidden references and hidden anchors into all of these listening test formats.
BrechtDeMan@715 362
Because the core code and the interface modules are deliberately separated, a third-party interface can be built with minimal effort. The repository includes documentation on which functions must be called and which functions the core expects the interface to provide. To this end, an `Interface' object includes object prototypes for creating the on-page comment boxes (including those with radio button or checkbox responses), start and stop buttons with function handlers pre-attached, and the playhead/transport bars.

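As a sketch of this separation, a custom interface module essentially amounts to supplying the hooks the core engine calls at fixed points in the test life cycle. The hook names below are hypothetical, chosen purely for illustration; the repository documentation lists the actual function names the core expects.

```javascript
// Minimal sketch of a third-party interface module.
// NOTE: loadInterface/startCallback/stopCallback are illustrative names,
// not the actual API; consult the repository documentation for the real
// functions the core expects an interface to provide.
const myInterface = {
  // Called once by the core to build the page for the current test.
  // A real module would create sliders, comment boxes and start/stop
  // buttons here, reusing the core's `Interface' object prototypes.
  loadInterface(testPage) {
    return { widgets: testPage.stimuli.length, ready: true };
  },
  // Called by the core when playback of a fragment starts or stops.
  startCallback(id) { return `playing ${id}`; },
  stopCallback(id) { return `stopped ${id}`; },
};

// The core engine then drives the interface through these hooks:
const page = { stimuli: ['A', 'B', 'C'] };
const state = myInterface.loadInterface(page);
console.log(state.widgets); // 3
```

The point of this structure is that the core handles audio playback, session state and result collection, while the interface module is only responsible for presentation and user input.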
\section{Analysis and diagnostics}
\label{sec:analysis}
There are several benefits to providing basic analysis tools in the browser: they allow diagnosing problems with the interface or with the test subject; they may be sufficient for many researchers' purposes; and test subjects may enjoy seeing an overview of their own results, and/or of the results collected thus far, at the end of their test.
\begin{figure}[htb]
\centering
\includegraphics[width=.5\textwidth]{boxplot.png}
\caption{Box and whisker plot showing the aggregated numerical ratings of six stimuli by a group of subjects.}
\label{fig:boxplot}
\end{figure}
For this reason, we include a proof-of-concept web page with:
\begin{itemize}[noitemsep,nolistsep]
\item All audioholder IDs, file names, subject IDs, audio element IDs, and so on, from the result XML files collected so far (\texttt{saves/*.xml})
\item Selection of subjects and/or test samples to zoom in on a subset of the data
\item Embedded audio to hear the corresponding test samples
\item Scatter plot, confidence plot and box plot of rating values (see Figure~\ref{fig:boxplot})
\item A timeline for a specific subject
\item Distribution plots of any radio button and number questions in the pre- and post-test surveys
\item All `comments' on a specific audio element
\item A `download' function providing a CSV file of ratings, survey responses and comments, for those who prefer not to work with the XML directly
\end{itemize}
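The box plots summarise each stimulus's ratings by their five-number summary. As a sketch only (this is not the tool's actual plotting code), the statistics behind one box, with whiskers drawn at the extremes (one common convention), can be computed as:

```javascript
// Sketch of the five-number summary behind a box-and-whisker plot of
// rating values. Illustrative only; not the analysis page's actual code.

// Linear-interpolated quantile of a sorted array (the "type 7" method
// used by many statistics packages).
function quantile(sorted, q) {
  const pos = (sorted.length - 1) * q;
  const lo = Math.floor(pos);
  const hi = Math.ceil(pos);
  return sorted[lo] + (sorted[hi] - sorted[lo]) * (pos - lo);
}

// Five-number summary: min, lower quartile, median, upper quartile, max.
function boxStats(ratings) {
  const s = [...ratings].sort((a, b) => a - b);
  return {
    min: s[0],
    q1: quantile(s, 0.25),
    median: quantile(s, 0.5),
    q3: quantile(s, 0.75),
    max: s[s.length - 1],
  };
}

// Example: one stimulus rated by seven subjects on a 0--100 scale.
const stats = boxStats([20, 35, 40, 50, 55, 60, 90]);
console.log(stats.median); // 50
```

The box then spans \texttt{q1} to \texttt{q3} with a line at \texttt{median}, and the whiskers extend to \texttt{min} and \texttt{max}.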
\section{Concluding remarks and future work}
\label{sec:conclusion}

The code and documentation can be pulled or downloaded from our online repository at \url{code.soundsoftware.ac.uk/projects/webaudioevaluationtool}.

The authors of \cite{schoeffler2015mushra} give a `checklist' for the subjective evaluation of audio systems. The Web Audio Evaluation Tool meets most of its requirements, including remote testing, crossfading between audio streams, collecting information about the browser and sound system, providing UI elements such as buttons and scales, and loading various audio formats including uncompressed PCM (WAV).
One requirement of \cite{schoeffler2015mushra} that is not currently met is the `method of adjustment', where participants adjust properties of the stimuli themselves: many parameters could conceivably be made adjustable, and only some, such as playback level, could be added easily. Similarly, tests using live input signals, such as the participant's own voice, are not supported; both would require modification of the code we provide. Finally, selecting a specific audio output device from within the browser is not possible, as the browser can only determine channel output counts, with routing left to the user's system.

%
% The following two commands are all you need in the
% initial runs of your .tex file to
% produce the bibliography for the citations in your paper.
\bibliographystyle{abbrv}
\bibliography{WAC2016}
% You must have a proper ".bib" file
% and remember to run:
% latex bibtex latex latex
% to resolve all references
%
% ACM needs 'a single self-contained file'!
%
\end{document}