% docs/Instructions/Instructions.tex
% Instructions update (WIP); removed basic instructions from README.
% Author: Brecht De Man <b.deman@qmul.ac.uk>
% Date: Fri, 18 Dec 2015 01:45:54 +0000
\usepackage[parfill]{parskip} % Activate to begin paragraphs with an empty line rather than an indent
\usepackage{graphicx} % Use pdf, png, jpg, or eps with pdflatex; use eps in DVI mode
% TeX will automatically convert eps --> pdf in pdflatex

\usepackage{listings} % Source code
\usepackage{xcolor} % Colour (e.g. for source code listings)
\definecolor{grey}{rgb}{0.1,0.1,0.1}
\definecolor{darkblue}{rgb}{0.0,0.0,0.6}
\definecolor{cyan}{rgb}{0.0,0.6,0.6}

\usepackage{amssymb}
\usepackage{cite}
\usepackage{hyperref} % Hyperlinks
\usepackage[nottoc,numbib]{tocbibind} % 'References' in TOC

\date{7 December 2015} % Activate to display a given date or no date

\begin{document}
\maketitle

These instructions cover the use of the Web Audio Evaluation Tool on Windows and Mac OS X platforms.

We request that you acknowledge the authors and cite our work \cite{waet} when using the tool; see also CITING.txt.

The tool is available in its entirety, including source code, at \url{https://code.soundsoftware.ac.uk/projects/webaudioevaluationtool/}, under the GNU General Public License v3.0 (\url{http://choosealicense.com/licenses/gpl-3.0/}); see also LICENSE.txt.

% TO DO: Linux

\tableofcontents

\clearpage

\section{Installation}
Download the folder (\url{https://code.soundsoftware.ac.uk/hg/webaudioevaluationtool/archive/tip.zip}) and unzip it in a location of your choice, or pull the source code from \url{https://code.soundsoftware.ac.uk/hg/webaudioevaluationtool} (Mercurial).

\subsection{Contents}
The folder should contain the following elements: \\

\textbf{Main folder:}
\begin{itemize}
\item \texttt{analyse.html}: analysis and diagnostics of a set of result XML files
\item \texttt{ape.css, core.css, graphics.css, mushra.css, structure.css}: style files (edit to change appearance)
\item \texttt{ape.js}: JavaScript file for the APE-style interface \cite{ape}
\item \texttt{CITING.txt, LICENSE.txt, README.txt}: text files with, respectively, the citation which we ask you to include in any work where this tool or any portion thereof is used or modified; the license under which the software is shared; and a general readme file referring to these instructions.
\item \texttt{core.js}: JavaScript file with core functionality
\item \texttt{index.html}: webpage where the interface appears (includes link to the test configuration XML)
\item \texttt{jquery-2.1.4.js}: jQuery JavaScript library
\item \texttt{loudness.js}: automatic calculation of the loudness of Web Audio API Buffer objects, returning gain values to correct for a target loudness or to match loudness between multiple objects
\item \texttt{mushra.js}: JavaScript file for the MUSHRA-style interface \cite{mushra}
\item \texttt{pythonServer.py}: web server for running tests locally
\item \texttt{pythonServer-legacy.py}: web server with limited functionality (no automatic storing of output XML files)
\item \texttt{save.php}: PHP script to store result XML files on the web server\\
\end{itemize}
\textbf{Documentation (./docs/)}
\begin{itemize}
\item \href{http://c4dm.eecs.qmul.ac.uk/dmrn/events/dmrnp10/#posters}{DMRN+10}: PDF and \LaTeX\ source of the poster for the 10\textsuperscript{th} Digital Music Research Network One-Day Workshop (``soft launch'')
\item Instructions: PDF and \LaTeX\ source of these instructions
\item Project Specification Document (\LaTeX/PDF)
\item Results Specification Document (\LaTeX/PDF)
\item SMC15: PDF and \LaTeX\ source of the 12\textsuperscript{th} Sound and Music Computing Conference paper \cite{waet}
\item WAC2016: PDF and \LaTeX\ source of the 2\textsuperscript{nd} Web Audio Conference paper\\
\end{itemize}
\textbf{Example project (./example\_eval/)}
\begin{itemize}
\item An example of what the setup XML should look like, with example audio files 0.wav--10.wav: short recordings (44.1~kHz, 16~bit) of a woman saying the corresponding number (useful for testing randomisation and general familiarisation with the interface).\\
\end{itemize}
\textbf{Test creation tool (./test\_create/)}
\begin{itemize}
\item Webpage for easily setting up your own test without having to delve into the XML.\\
\end{itemize}

\subsection{Compatibility}
As Microsoft Internet Explorer doesn't support the Web Audio API\footnote{\url{http://caniuse.com/\#feat=audio-api}}, you will need another browser such as Google Chrome, Safari or Firefox (all three are tested and confirmed to work).

Firefox does not currently support bit depths other than 8 or 16 bit for PCM wave files. In the future, a warning message will tell the user that their content is being quantised automatically. %Nick? Right? To be removed if and when actually implemented

The tool is platform-independent and works in any browser that supports the Web Audio API. It does not require any specific, proprietary software. However, if the tool is hosted locally (i.e. you are not hosting it on an actual web server), you will need Python (2.7), a free programming language - see the next paragraph.

\clearpage

\section{Test setup}

\subsection{Sample rate}
Depending on how the experiment is set up, audio is resampled automatically (the Web Audio default) or the sample rate is enforced. In the latter case, you will need to make sure that the sample rate of the system is equal to the sample rate of these audio files. For this reason, all audio files in the experiment will have to have the same sample rate.

To start the test again for a new participant, you do not need to close the browser or shut down the Terminal or Command Prompt. Simply refresh the page or go to \texttt{localhost:8000} again.


\subsection{Remote test}
Put all files on a web server which supports PHP. This allows the `save.php' script to store the XML result files in the `saves/' folder. If the web server is not able to store the XML file there at the end of the test, it will present the XML file locally to the user, as a `Save file' link.

Make sure the \texttt{projectReturn} attribute of the \texttt{setup} node is set to the \texttt{save.php} script.

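For instance, assuming the default layout where \texttt{save.php} sits in the same folder as \texttt{index.html}, the opening of the \texttt{setup} node might look as follows (the \texttt{interface} attribute and the child nodes are illustrative; other attributes are omitted):

\begin{lstlisting}
<setup interface="APE" projectReturn="save.php">
  <!-- surveys, metrics, checks, ... -->
</setup>
\end{lstlisting}
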
Then, just go to the URL of the corresponding HTML file, e.g. \texttt{http://server.com/path/to/WAET/index.html?url=test/my-test.xml}. If storing on the server doesn't work at submission (e.g. if the \texttt{projectReturn} attribute isn't properly set), the result XML file will be presented to the subject on the client side, as a `Save file' link.


\clearpage

\section{Interfaces}

The Web Audio Evaluation Tool comes with a number of interface styles, each of which can be customised extensively, either by configuring them differently using the many optional features, or by modifying the JavaScript files.

To set the interface style for the whole test, %Nick? change when this is not the case anymore, i.e. when the interface can be set per page
add \texttt{interface="APE"} to the \texttt{setup} node, where \texttt{"APE"} is one of the interface names below.

\subsection{APE}
The APE interface is based on \cite{ape}, and consists of one or more axes, each corresponding with an attribute to be rated, on which markers are placed. As such, it is a multiple stimulus interface where (for each dimension or attribute) all elements are placed on a single axis so that they can be maximally compared against each other, as opposed to being rated individually or against a single reference.
It also contains an optional text box for each element, to allow for clarification by the subject, tagging, and so on.

\subsection{MUSHRA}
This is a straightforward implementation of \cite{mushra}, especially common for the rating of audio quality, for instance in the evaluation of audio codecs.


\clearpage

\section{Features}

This section goes over the different features implemented in the Web Audio Evaluation Tool, how to use them, and what to know about them.

Unless otherwise specified, \emph{each} feature described here is optional, i.e. it can be enabled or disabled and adjusted to some extent.

As the example project showcases (nearly) all of these features, please refer to its configuration XML document for a demonstration of how to enable and adjust them.

\subsection{Surveys}
\subsubsection{Pre- and post-page surveys}

\subsubsection{Pre- and post-test surveys}

\subsubsection{Survey elements}
All survey elements (which `pop up' in the centre of the browser) have an \texttt{id} attribute, for retrieval of the responses in post-processing of the results, and a \texttt{mandatory} attribute, which if set to ``true'' requires the subjects to respond before they can continue.

\begin{description}
\item[statement] Simply shows text to the subject until `Next' or `Start' is clicked.
\item[question] Expects a text answer (in a text box). Has the \texttt{boxsize} argument: set to ``large'' or ``huge'' for a bigger box size.
\item[number] Expects a numerical value. The attribute \texttt{min="0"} specifies the minimum value - in this case the answer must be zero or greater before the subject can continue.
\item[radio] Radio buttons.
\item[checkbox] Checkboxes.\\
\end{description}
\textbf{Example usage:}\\

\lstset{
  basicstyle=\ttfamily,
  columns=fullflexible,
  showstringspaces=false,
  commentstyle=\color{grey}\upshape
}

\lstdefinelanguage{XML}
{
  morestring=[b]",
  morestring=[s]{>}{<},
  morecomment=[s]{<?}{?>},
  stringstyle=\color{black}\bfseries,
  identifierstyle=\color{darkblue}\bfseries,
  keywordstyle=\color{cyan}\bfseries,
  morekeywords={xmlns,version,type},% list your attributes here
  breaklines=true
}
\scriptsize
\lstset{language=XML}

\begin{lstlisting}
<PostTest>
  <question id="location" mandatory="true" boxsize="large">Please enter your location. (example mandatory text question)</question>
  <number id="age" min="0">Please enter your age (example non-mandatory number question)</number>
  <radio id="rating">
    <statement>Please rate this interface (example radio button question)</statement>
    <option name="bad">Bad</option>
    <option name="poor">Poor</option>
    <option name="good">Good</option>
    <option name="great">Great</option>
  </radio>
  <statement>Thank you for taking this listening test. Please click 'Submit' and your results will appear in the 'saves/' folder.</statement>
</PostTest>
\end{lstlisting}


\subsection{Randomisation}

\subsubsection{Randomisation of configuration XML files}
% how to
% explain how this is implemented in the pythonServer
%Nick? already implemented in the PHP?


\subsubsection{Randomisation of page order}


\subsubsection{Randomisation of axis order}

\subsubsection{Randomisation of fragment order}


\subsubsection{Randomisation of initial slider position}

% \subsubsection{Randomisation of survey question order}
% should be an attribute of the individual 'pretest' and 'posttest' elements
% uncomment once we have it
\subsection{Looping}
Loops the fragments.
% how to enable?
If the fragments are not of equal length initially, they are padded with zeros so that they all have the same length, to enable looping without the fragments going out of sync relative to each other.

Note that in looped mode, fragments cannot be played until all fragments are loaded, as the engine needs to assess whether all fragments are of equal length (and pad them if necessary). %Nick? Is this accurate?

\subsection{Sample rate}
If you require the test to be conducted at a certain sample rate (i.e. you do not tolerate resampling of the elements to match the system's sample rate), add \texttt{sampleRate="96000"} - where ``96000'' can be any supported sample rate - so that a warning message alerts the subject when the system's sample rate differs from this enforced sample rate. This of course means that within one test, all sample rates must be equal, as it is impossible to change the system's sample rate during the test (even if you were to change it manually, the browser would have to be restarted for it to take effect).
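As a sketch, enforcing 96~kHz for the whole test could look as follows (the \texttt{interface} attribute is an example; other attributes and child nodes are omitted):

\begin{lstlisting}
<setup interface="APE" sampleRate="96000">
  <!-- ... -->
</setup>
\end{lstlisting}
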

\subsection{Scrubber bar}
The scrubber bar, or transport bar - the visualisation of the playhead, with an indication of time, showing the portion of the file played so far - is at this point purely visual, and not a controller to adjust the playhead position.

Make it visible by adding \texttt{<option name='playhead'/>} to the \texttt{interface} node (see Section \ref{sec:checks}: Checks).

\subsection{Metrics}
Enable the collection of metrics by adding \texttt{collectMetrics="true"} to the \texttt{setup} node.

The \texttt{Metric} node, which contains the metrics to be tracked during the complete test, is a child of the \texttt{setup} node, and it could look as follows.

\begin{lstlisting}
<Metric>
  <metricEnable>testTimer</metricEnable>
  <metricEnable>elementTimer</metricEnable>
  <metricEnable>elementInitialPosition</metricEnable>
  <metricEnable>elementTracker</metricEnable>
  <metricEnable>elementFlagListenedTo</metricEnable>
  <metricEnable>elementFlagMoved</metricEnable>
  <metricEnable>elementListenTracker</metricEnable>
</Metric>
\end{lstlisting}

When in doubt, err on the inclusive side, as one never knows which information will be needed in the future. Most of these metrics are necessary for post-processing scripts such as timeline\_view\_movement.py.

\subsubsection{Time test duration}
\texttt{testTimer}\\

\subsubsection{Time fragment playback}
\texttt{elementTimer}\\
This keeps track of when each fragment is played back and stopped again, \emph{and} which part of the fragment has been played back at that time.
% example output?

\subsubsection{Initial positions}
\texttt{elementInitialPosition}\\
Tracks the initial position of the sliders, which is especially relevant when these are randomised, so that their influence on the ratings can be assessed.

\subsubsection{Track movements}

\subsubsection{Which fragments listened to}

\subsubsection{Which fragments moved}
Binary check of whether or not the marker corresponding with a particular fragment was moved at all throughout the experiment.

\subsubsection{elementListenTracker} %Nick? No idea what this does, if it's not what I wrote under 'Time fragment playback'

\subsection{References and anchors}
\subsubsection{Reference}
%...
\subsubsection{Hidden reference}
%...
\subsubsection{Hidden anchor}
%...

\subsection{Checks}
\label{sec:checks}

These checks are enabled in the \texttt{interface} node, which is a child of the \texttt{setup} node.
\subsubsection{Playback checks}
% what it does/is
Enforce playing each sample at least once, for at least a little bit (i.e. this check is satisfied even if you only play a tiny portion of the file), by alerting the user to which samples have not been played upon clicking `Submit'. When enabled, one cannot proceed to the next page, answer a survey question, or finish the test before playing each sample at least once.
% how to enable/disable

Alternatively, one can check whether the \emph{entire} fragment was listened to at least once.
% how to enable

Add \texttt{<check name="fragmentPlayed"/>} to the \texttt{interface} node.


\subsubsection{Movement check}
Enforce moving each sample's marker at least once, by alerting the user to which markers have not been moved upon clicking `Submit'. When enabled, one cannot proceed to the next page, answer a survey question, or finish the test before moving each marker at least once.
If there are several axes, the warning will specify which samples have to be moved on which axis.

Add \texttt{<check name="fragmentMoved"/>} to the \texttt{interface} node.

\subsubsection{Comment check}
% How to enable/disable?

Enforce commenting, by alerting the user to which samples have not been commented on upon clicking `Submit'. When enabled, one cannot proceed to the next page, answer a survey question, or finish the test before putting at least one character in each comment box.

Note that this does not apply to any extra (text, radio button, checkbox) elements, unless these have the `mandatory' option enabled. %Nick? is this extra 'mandatory' option implemented?

Add \texttt{<check name="fragmentComments"/>} to the \texttt{interface} node.

%ADD: how to add a custom comment box

\subsubsection{Scale use check}
It is possible to enforce a certain usage of the scale, meaning that at least one slider needs to be below and/or above a certain percentage of the scale.

Add \texttt{<check name="scalerange" min="25" max="75"/>} to the \texttt{interface} node.
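To illustrate, an \texttt{interface} node combining several of the checks described in this section (the particular selection and the range values are just examples) might look like:

\begin{lstlisting}
<interface>
  <check name="fragmentPlayed"/>
  <check name="fragmentMoved"/>
  <check name="fragmentComments"/>
  <check name="scalerange" min="25" max="75"/>
</interface>
\end{lstlisting}
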

\subsubsection{Note on the use of multiple rating axes}
I.e. what if there is more than one axis? How do you specify which axis the checks relate to? %Nick? to add?

\subsection{Layout options}
\texttt{title}, \texttt{scale}, \texttt{position}, \texttt{commentBoxPrefix}

\subsection{Multiple sliders}
(APE example)

\begin{lstlisting}
<interface name="preference">
  <title>Preference</title>
  <scale position="0">Min</scale>
  <scale position="100">Max</scale>
  <scale position="50">Middle</scale>
  <commentBoxPrefix>Comment on fragment</commentBoxPrefix>
</interface>
<interface name="depth">
  <title>Depth</title>
  <scale position="0">Low</scale>
  <scale position="100">High</scale>
  <scale position="50">Middle</scale>
  <commentBoxPrefix>Comment on fragment</commentBoxPrefix>
</interface>
\end{lstlisting}
where the \texttt{interface} nodes are children of the \texttt{audioholder} node.

\subsection{Platform information}
% what does it do, what does it look like
% limitations?

\subsection{Show progress}
Add \texttt{<option name="page-count"/>} to the \texttt{interface} node (see Section \ref{sec:checks}: Checks) to add the current page number and the total number of pages to the interface.

\subsection{Gain}
It is possible to set the gain (in decibels) applied to the different audio elements, as an attribute of the \texttt{audioElements} nodes in the configuration XML file:

\texttt{<audioElements url="sample-01.wav" gain="-6" id="sample01quieter" />}

\subsection{Loudness}
% automatic loudness equalisation
% guide to loudness.js
Set the loudness for a complete test by adding \texttt{loudness="-23"} to the \texttt{setup} node in the configuration XML file, where -23 is an example loudness in LUFS.
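For example, a sketch of a \texttt{setup} node normalising the whole test to $-23$~LUFS (the \texttt{interface} attribute is an example; other attributes and child nodes are omitted):

\begin{lstlisting}
<setup interface="APE" loudness="-23">
  <!-- ... -->
</setup>
\end{lstlisting}
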

\clearpage


\section{Using the test create tool}
We provide a test creation tool, available in the directory test\_create. This tool is a self-contained web page, so double-clicking it will launch the page in your system's default browser.

The test creation tool can help you build a simple test very quickly. By simply selecting your interface and clicking check-boxes you can build a test in minutes.

You also need to link audioObject.interfaceDOM with your interface object for that audioObject. The interfaceDOM object has a few default methods. Firstly, it must start disabled and become enabled once the audioObject has decoded the audio (function call: enable()). Next, it must have a function exportXMLDOM(), which returns the XML node for your interface; the default is for it to return a value node with textContent equal to the normalised value. You can perform other functions, but our scripts may not work if something different is specified (as it would breach our results specifications). Finally, it must also have a method getValue, which returns the normalised value.

It is also the job of the interfaceDOM to call any necessary metric collection functions; however, some functions may be better placed outside it (for example, the APE interface uses drag and drop, so the best way was to call the metric functions from the dragEnd function, which is called when the interface object is dropped). Metrics based upon listening are handled by the audioObject. The interfaceDOM object must manage any movement metrics. For a list of valid metrics and their behaviours, see the project specification document included in the repository's docs location. The same goes for any checks required when pressing the submit button, or any other method to advance the test state.

\clearpage
\section{Analysis and diagnostics}
\subsection{In the browser}
See `analyse.html' in the main folder: immediate visualisation of (by default) all results in the `saves/' folder.

\subsection{Python scripts}
The package includes Python (2.7) scripts (in `scripts/') to extract ratings and comments, generate visualisations of ratings and timelines, and produce a fully fledged report.

Visualisation requires the free matplotlib library (\url{http://matplotlib.org}), as well as numpy and scipy.
By default, the scripts can be run from the `scripts/' folder, with the result files in the `saves/' folder (the default location where result XMLs are stored). Each script takes the XML file folder as an argument, along with other arguments in some cases.
Note: to avoid all kinds of problems, please avoid using spaces in file and folder names (this may work on some systems, but others don't like it).

\subsubsection{comment\_parser.py}
Extracts comments from the output XML files corresponding with the different subjects found in `saves/'. It creates a folder per `audioholder'/page it finds, and stores a CSV file with comments for every `audioelement'/fragment within these respective `audioholders'/pages. In this CSV file, every line corresponds with a subject/output XML file. Depending on the settings, the first column, containing the name of the corresponding XML file, can be omitted (for anonymisation).
Beware of Excel: sometimes the UTF-8 encoding is not properly imported, leading to problems with special characters in the comments (particularly cumbersome for foreign languages).

\subsubsection{evaluation\_stats.py}
Shows a few statistics of the tests in the `saves/' folder so far, mainly for checking for errors. Shows the number of files that are there, the audioholder IDs that were tested (and how many of each separate ID), the duration of each page, the duration of each complete test, the average duration per page, and the average duration as a function of the page number.

\subsubsection{generate\_report.py}
Similar to `evaluation\_stats.py', but generates a PDF report based on the output files in the `saves/' folder - or any folder specified as a command line argument. Uses pdflatex to write a \LaTeX\ document, then converts it to a PDF.

\subsubsection{score\_parser.py}
Extracts rating values from the XML to CSV - necessary for running visualisation of ratings. Creates the folder `saves/ratings/' if it does not exist yet, and writes a separate file for every `audioholder'/page in any of the output XMLs it finds in `saves/'. Within each file, rows represent different subjects (output XML file names) and columns represent different `audioelements'/fragments.

\subsubsection{score\_plot.py}
Plots the ratings as stored in the CSVs created by score\_parser.py.
Depending on the settings, it displays and/or saves (in `saves/ratings/') a box plot, confidence interval plot, scatter plot, or a combination of the aforementioned.
Requires the free matplotlib library.
At this point, more than one subject is needed for this script to work.

\subsubsection{timeline\_view\_movement.py}
Creates a timeline for every subject, for every `audioholder'/page, corresponding with any of the output XML files found in `saves/'. It shows the marker movements of the different fragments, along with when each fragment was played (red regions). Automatically takes fragment names, rating axis title, rating axis labels, and audioholder name from the XML file (if available).

\subsubsection{timeline\_view.py} % should be omitted or absorbed by the above soon
Creates a timeline for every subject, for every `audioholder'/page, corresponding with any of the output XML files found in `saves/'. It shows when and for how long the subject listened to each of the fragments.



\clearpage
\section{Troubleshooting} \label{sec:troubleshooting}
\subsection{Reporting bugs and requesting features}
Thanks to feedback from the authors and others who have used the interface in experiments, many bugs have been caught, and fatal crashes due to the interface seem to be a thing of the past entirely.

We continually develop this tool to fix issues and implement features useful to us or our user base. See \url{https://code.soundsoftware.ac.uk/projects/webaudioevaluationtool/issues} for a list of feature requests and bug reports, and their status.

Please contact the authors if you experience any bugs, would like additional functionality, spot any errors or gaps in the documentation, have questions about using the interface, or would like to give any feedback (even positive!) about the interface. We look forward to learning how the tool has (or has not) been useful to you.


\subsection{First aid}
Meanwhile, if things do go wrong or the test needs to be interrupted for whatever reason, not all data is lost. In a normal scenario, the test needs to be completed until the end (the final `Submit'), at which point the output XML is stored in the \texttt{saves/} folder. If this stage is not reached, open the JavaScript Console (see below for how to find it) and type

\texttt{createProjectSave()}

to present the result XML file on the client side, or

\texttt{createProjectSave(specification.projectReturn)}

to try to store it to the specified location, e.g. the `saves/' folder on the web server or the local machine (on failure, the result XML should be presented directly in the web browser instead),

and hit enter. This will open a pop-up window with a hyperlink that reads `Save File'; click it and an XML file with the results up until that point should be stored in your download folder.

Alternatively, a lot of data can be read from the same console, to which the tool prints a lot of debug information. Specifically:
\begin{itemize}
\item the randomisation of pages and fragments is logged;
\item any time a slider is played, its ID and the time stamp (in seconds since the start of the test) are displayed;
\item any time a slider is dragged and dropped, the location where it is dropped and the corresponding time stamp are shown;
\item any comments and pre- or post-test questions and their answers are logged as well.
\end{itemize}

You can select all of this and save it into a text file, so that none of this data is lost. You may choose to do this even when a test was successful, as an extra precaution.

If you encounter any issue which you believe to be caused by any aspect of the tool, and/or which the documentation does not mention, please do let us know!

\subsubsection*{Opening the JavaScript Console}
\begin{itemize}
\item In Google Chrome, the JavaScript Console can be found in \textbf{View$>$Developer$>$JavaScript Console}, or via the keyboard shortcut Cmd + Alt + J (Mac OS X).
\item In Safari, the JavaScript Console can be found in \textbf{Develop$>$Show Error Console}, or via the keyboard shortcut Cmd + Alt + C (Mac OS X). Note that for the Developer menu to be visible, you have to go to Preferences (Cmd + ,) and enable `Show Develop menu in menu bar' in the `Advanced' tab. \textbf{Note that as long as the Developer menu is not visible, nothing is logged to the console, i.e. you will only be able to see diagnostic information from when you switched on the Developer tools onwards.}
\item In Firefox, go to \textbf{Tools$>$Web Developer$>$Web Console}, or hit Cmd + Alt + K.
\end{itemize}

\subsection{Known issues and limitations}
\label{sec:knownissues}

The following is a non-exhaustive list of problems and limitations you may experience when using this tool, due to features that are not yet supported by us, by the Web Audio API, and/or by (some) browsers.

\begin{itemize}
\item Issue \href{https://code.soundsoftware.ac.uk/issues/1463}{\textbf{\#1463}}: \textbf{Firefox} only supports 8 bit and 16 bit WAV files. Pending automatic requantisation (which deteriorates the audio signal's dynamic range to some extent), WAV format stimuli need to adhere to these limitations in order for the test to be compatible with Firefox.
\item Issues \href{https://code.soundsoftware.ac.uk/issues/1474}{\textbf{\#1474}} and \href{https://code.soundsoftware.ac.uk/issues/1462}{\textbf{\#1462}}: On occasion, audio is not working - or only a continuous `beep' can be heard - notably in \textbf{Safari}. Refreshing, quitting the browser, and even enabling Developer tools in Safari's Preferences pane (`Advanced' tab: ``Show `Develop' menu in menu bar'') have helped resolve this. If no (high quality) audio can be heard, make sure your entire playback system's settings are correct.
\end{itemize}

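To screen stimuli for the Firefox bit depth limitation before deploying a test, a small check along these lines can help. This is a sketch using Python's standard \texttt{wave} module; the file path in the usage comment is hypothetical.

\begin{lstlisting}[language=Python]
import wave

def firefox_compatible(path):
    # Firefox only decodes 8 bit and 16 bit WAV files (issue #1463).
    with wave.open(path, 'rb') as w:
        bits = 8 * w.getsampwidth()  # bytes per sample -> bits
    return bits in (8, 16)

# Hypothetical usage:
# firefox_compatible('stimuli/fragment1.wav')
\end{lstlisting}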
\clearpage
\bibliographystyle{ieeetr}
\bibliography{Instructions}{}


\clearpage
\appendix

\section{Legacy}
The APE interface and most of the functionality of the first WAET editions are inspired by the APE toolbox for MATLAB \cite{ape}. See \url{https://code.soundsoftware.ac.uk/projects/ape} for the source code and \url{http://brechtdeman.com/publications/aes136.pdf} for the corresponding paper.

\clearpage

\section{Listening test instructions example}

Before each test, show the instructions below (or similar) and make sure they remain available to the subject throughout the test. Make sure to ask whether the participant has any questions upon seeing and/or reading the instructions.

\item By participating, you consent to us using all collected data for research. Unless asked explicitly, all data will be anonymised when shared.
\end{itemize}

\clearpage

\section{Vocabulary} % just to keep track of what exactly we call things. Don't use terms that are too different, to avoid confusion.
\begin{description}
\item[Subject] The word we use for a participant, user, ... of the test, i.e. not the experimenter who designs the test but the person who evaluates the audio under test as part of an experiment (or the preparation of one).
\item[Page] A screen in a test; corresponds with an \texttt{audioholder}.
\item[Fragment] An element or sample in a test; corresponds with an \texttt{audioelement}.
\item[Test] A complete test, which can consist of several pages; corresponds with an entire configuration XML file.
\item[Configuration XML file] The XML file containing the necessary information on interface, samples, survey questions, configurations, ... which the JavaScript modules read to produce the desired test.
\item[Results XML file] The output of a successful test, including ratings, comments, survey responses, timing information, and the complete configuration XML file with which the test was generated in the first place.
\end{description}
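
To illustrate how this vocabulary maps onto a results XML file, the sketch below lists the fragments found on each page. Only the \texttt{audioholder} and \texttt{audioelement} element names come from this document; the \texttt{id} attribute is an assumption, so adapt it to match your actual output files.

\begin{lstlisting}[language=Python]
import xml.etree.ElementTree as ET

def fragments_per_page(path):
    # One entry per `audioholder'/page, listing its
    # `audioelements'/fragments. The 'id' attribute is assumed.
    tree = ET.parse(path)
    return {page.get('id'): [el.get('id')
                             for el in page.iter('audioelement')]
            for page in tree.iter('audioholder')}
\end{lstlisting}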

\clearpage

\setcounter{secnumdepth}{0} % don't number this last bit
\section{Contact details} % maybe add web pages, Twitter accounts, whatever you like
\label{sec:contact}

\begin{itemize}
\item Nicholas Jillings: \texttt{nicholas.jillings@mail.bcu.ac.uk}
\item Brecht De Man: \texttt{b.deman@qmul.ac.uk}