comparison docs/SMC15/smc2015template.tex @ 1729:1e48d1d5fe7d

Paper: Introduction + Interface
author Brecht De Man <b.deman@qmul.ac.uk>
date Mon, 27 Apr 2015 19:38:27 +0100
parents 55ab392db5fd
children 66a32db3d83a
\usepackage{smc2015}
\usepackage{times}
\usepackage{ifpdf}
\usepackage[english]{babel}
\usepackage{cite}
\usepackage{enumitem}
\setitemize{noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt}

\hyphenation{Java-script}

%%%%%%%%%%%%%%%%%%%%%%%% Some useful packages %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%% See related documentation %%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Introduction}\label{sec:introduction}

%NICK: examples of what kind of audio applications HTML5 has made possible, with references to publications (or website)\\

Perceptual evaluation of audio plays an important role in a wide range of research on audio quality \cite{schoeffler2013impact,repp}, sound synthesis \cite{de2013real,durr2015implementation}, audio effect design \cite{deman2014a}, source separation \cite{mushram,uhlereiss}, music and emotion analysis \cite{song2013b,song2013a}, and many others \cite{friberg2011comparison}. % codec design?

%This work is based in part on the APE audio perceptual evaluation interface for MATLAB \cite{deman2014b}. An important drawback of this toolbox is the need to have MATLAB to create a test and even to run it (barring the use of an executable generated by MATLAB), as well as limited compatibility with both earlier and newer versions of MATLAB, which makes it hard to maintain. On the other hand, a web application generally has the advantage of running in most browsers on most platforms.

% IMPORTANT
%[TO ADD: other interfaces for perceptual evaluation of audio, browser-based or not!] \\
%BROWSER-BASED: \cite{song2013b,song2013a,beaqlejs} \\
%MATLAB: \cite{whisper,mushram,scale}
% to add: OPAQUE, Rumsey's repertory grid technique

\begin{table}[htdp]
\caption{Available audio perceptual evaluation tools}
\begin{center}
\begin{tabular}{|*{3}{l|}}
% order?
\hline
\textbf{Name} & \textbf{Language} & \textbf{Ref.}\\
\end{tabular}
\end{center}
\label{tab:interfaces}
\end{table}%

Various listening test design tools are already available; see Table \ref{tab:interfaces}. A few other listening test tools, such as OPAQUE \cite{opaque} and GuineaPig \cite{guineapig}, have been described but are not available to the public at the time of writing.

Many are MATLAB-based, which is useful for easily processing and visualising the data produced by the listening tests, but MATLAB needs to be installed to run a test or, in the case of an executable created with MATLAB, at least to create one.
Furthermore, compatibility is usually limited across different versions of MATLAB.
Similarly, Max requires little or no programming background, but it too is proprietary software, which is especially undesirable when tests need to be deployed at different sites.
More recently, BeaqleJS \cite{beaqlejs} makes use of the HTML5 audio capabilities and comes with a number of predefined, established test interfaces such as ABX and MUSHRA \cite{mushra}. %
A browser-based perceptual evaluation tool for audio has a number of advantages. First of all, it requires no software other than a browser, so deployment is very easy and cheap, and it can run on a wide variety of devices and platforms. The test can be hosted on a central server, and subjects anywhere in the world can simply navigate to a web page. This means that multiple participants can take the test simultaneously, potentially in their usual listening environment if this is beneficial for the test. Naturally, the constraints on the listening environment and other variables still need to be controlled if they are important to the experiment. Depending on the requirements, a survey or a variety of tests preceding the experiment could establish whether remote participants and their environments are adequate for the experiment at hand.

The Web Audio API is a high-level JavaScript Application Programming Interface (API) designed for real-time processing of audio inside the browser through various processing nodes\footnote{http://webaudio.github.io/web-audio-api/}. Various websites have used the Web Audio API for creative purposes, such as drum machines and score creation tools\footnote{http://webaudio.github.io/demo-list/}; others on that list demonstrate real-time processing of captured audio, such as room reverberation tools and a phase vocoder operating on the signal from the system microphone. The BBC Radiophonic Workshop demonstrates effects used on famous TV shows such as Doctor Who, simulated inside the browser\footnote{http://webaudio.prototyping.bbc.co.uk/}.
Another example is the BBC R\&D personalised compressor, which applies a dynamic range compressor to a radio stream and dynamically adjusts the compressor settings to match the listener's environment \cite{mason2015compression}.

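To give an idea of the node-based model, the following is a minimal sketch of how a test page might play back an audio fragment through the Web Audio API; the function names, fragment URL and gain value are hypothetical, while the API calls (\texttt{AudioContext}, \texttt{decodeAudioData}, \texttt{GainNode}) are standard Web Audio, assuming a browser with promise-based \texttt{decodeAudioData} support.

```javascript
// Convert a level in decibels to the linear gain factor a GainNode expects.
function dbToGain(db) {
  return Math.pow(10, db / 20);
}

// Hypothetical sketch: fetch one fragment, decode it, and play it through
// a gain node. Feature-detect AudioContext so the code degrades gracefully
// outside the browser.
async function playFragment(url, gainDb = 0) {
  if (typeof AudioContext === 'undefined') return null; // not in a browser
  const ctx = new AudioContext();
  const response = await fetch(url);
  const buffer = await ctx.decodeAudioData(await response.arrayBuffer());
  const source = ctx.createBufferSource(); // node holding the fragment
  source.buffer = buffer;
  const gain = ctx.createGain();           // node scaling the level
  gain.gain.value = dbToGain(gainDb);
  source.connect(gain);                    // source -> gain -> speakers
  gain.connect(ctx.destination);
  source.start();
  return source;
}
```

Each fragment in a multi-stimulus test can be given its own source and gain node, all mixed into the same destination.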

% [How is this one different from all these?] improve

% FLEXIBLE (reference (not) appropriate)
In contrast with the tools listed above, we aim to provide an environment in which a variety of multi-stimulus tests can be designed, with a wide range of configurability, while keeping test setup and collection of results as straightforward as possible. For instance, the option to provide free-text comment fields allows for tests with individual vocabulary methods, as opposed to only allowing quantitative scales associated with a fixed set of descriptors.

% EASE OF USE: no need to go in the code
To make the tool accessible to a wide range of researchers, we aim to offer maximum functionality even to those with little or no programming background. The tool we present can set up a listening test without reading or adjusting any code, provided no new types of interfaces need to be created.

% ENVIRONMENT %In this paper, we provide a listening test back end that allows for easy set up of a wide variety of listening tests, highly flexible yet very simple and not requiring any programming skills.
Specifically, we present a browser-based perceptual evaluation tool from which any multiple stimulus evaluation interface, where subjects rank, rate, select, or comment on different audio samples, can be built.
We also include an example of the multiple stimulus user interface included with the APE tool \cite{deman2014b}, which presents the subject with a number of axes on which markers, corresponding to audio samples, can be moved to reflect any subjective quality, along with corresponding comment boxes.
However, other graphical user interfaces can be put on top of the engine that we provide with minimal or no modifications. Examples of this are the MUSHRA test \cite{mushra}, single or multiple stimulus evaluation with a two-dimensional interface (such as valence and arousal dimensions), or simple annotation (using free-form text, check boxes, radio buttons or drop-down menus) of one or more audio samples at a time.
In some cases, such as the method of adjustment, where the audio is processed by the user, or an AB test, where the interface does not show all audio samples to be evaluated at once \cite{bech}, the back end of the tool needs to be modified as well.

In the following sections, we describe the included interface in more detail, discuss the implementation, and cover considerations that were made in the design process of this tool.

%\section{Requirements}\label{sec:requirements}
%???
%\end{itemize}

\section{Interface}\label{sec:interface}

At this point, we have implemented the interface of the MATLAB-based APE (Audio Perceptual Evaluation) toolbox \cite{deman2014b}. This shows one marker for each simultaneously evaluated audio fragment on one or more horizontal axes; the markers can be moved to rate or rank the respective fragments in terms of any subjective property. It also provides a comment box for every marker, and any number of text boxes for extra comments.
The reason for such an interface, where all stimuli are presented on a single rating axis (or multiple axes if multiple subjective qualities need to be evaluated), is that it urges the subject to consider the rating and/or ranking of the stimuli relative to one another, as opposed to comparing each individual stimulus to a given reference, as is the case with e.g. a MUSHRA test \cite{mushra}. As such, it is ideal for any type of test where the goal is to carefully compare samples against each other, like perceptual evaluation of different mixes of music recordings \cite{deman2015a} or sound synthesis models \cite{durr2015implementation}, as opposed to comparing results of source separation algorithms \cite{mushram} or audio with lower data rate \cite{mushra} to a high quality reference signal.
See Figure \ref{fig:interface} for an example of the interface, with eleven fragments and one axis. %? change if a new interface is shown

%For instance, the option to provide free-text comment fields allows for tests with individual vocabulary methods, as opposed to only allowing quantitative scales associated to a fixed set of descriptors.

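The mapping from a marker's position on such an axis to a stored rating can be sketched as follows; this is an illustrative helper, with hypothetical names, rather than code from the tool itself:

```javascript
// Hypothetical sketch: convert a marker's horizontal pixel position on a
// rating axis into a normalised score in [0, 1]. Positions dragged beyond
// the axis are clamped to its ends.
function markerToRating(pixelX, axisLeft, axisWidth) {
  const clamped = Math.min(Math.max(pixelX, axisLeft), axisLeft + axisWidth);
  return (clamped - axisLeft) / axisWidth;
}
```

Ratings in this normalised form are independent of screen size and axis layout, which simplifies comparing results across subjects and devices.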
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=1.0\textwidth]{interface2.png}
\caption{Example of interface, with 1 axis, 6 fragments and 1 extra comment in Chrome browser}

% future work
Further work may include the development of other common test designs, such as MUSHRA \cite{mushra}, 2D valence and arousal rating, and others. We will add functionality to assist with setting up large-scale tests with remote subjects, so that this becomes straightforward and intuitive.
In addition, we will keep improving and expanding the tool, and we highly welcome feedback and contributions from the community.

The source code of this tool can be found at \\ \texttt{code.soundsoftware.ac.uk/projects/}\\ \texttt{webaudioevaluationtool}.

388 385
389 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 386 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
390 %bibliography here 387 %bibliography here
391 \bibliography{smc2015template} 388 \bibliography{smc2015template}