changeset 735:ac51271a1a77
Updates to all sections.
author    Nicholas Jillings <nicholas.jillings@eecs.qmul.ac.uk>
date      Wed, 14 Oct 2015 20:15:31 +0100
parents   33ff8ddc1b1b
children  33d7a1faa50b
files     docs/WAC2016/WAC2016.pdf docs/WAC2016/WAC2016.tex
diffstat  2 files changed, 22 insertions(+), 30 deletions(-)
--- a/docs/WAC2016/WAC2016.tex	Wed Oct 14 16:02:45 2015 +0100
+++ b/docs/WAC2016/WAC2016.tex	Wed Oct 14 20:15:31 2015 +0100
@@ -153,12 +153,12 @@
 \hline
 APE & \cite{ape} & MATLAB & multi-stimulus, 1 axis per attribute & & \\
 BeaqleJS & \cite{beaqlejs} & JavaScript & ABX, MUSHRA & (not natively supported) & \\
-HULTI-GEN & \cite{hultigen} & MAX & & & \checkmark \\
+HULTI-GEN & \cite{hultigen} & MAX & See Table \ref{tab:toolbox_interfaces} & & \checkmark \\
 mushraJS & \footnote{https://github.com/akaroice/mushraJS} & JavaScript & MUSHRA & \checkmark & \\
 MUSHRAM & \cite{mushram} & MATLAB & MUSHRA & & \\
-Scale & \cite{scale} & MATLAB & & & \\
-WhisPER & \cite{whisper} & MATLAB & & & \checkmark \\
-\textbf{WAET} & \cite{waet} & JavaScript & \textbf{all of the above} & \checkmark & \checkmark \\
+Scale & \cite{scale} & MATLAB & See Table \ref{tab:toolbox_interfaces} & & \\
+WhisPER & \cite{whisper} & MATLAB & See Table \ref{tab:toolbox_interfaces} & & \checkmark \\
+\textbf{WAET} & \cite{waet} & JavaScript & \textbf{all of the above, see Table \ref{tab:toolbox_interfaces}} & \checkmark & \checkmark \\
 \hline
 \end{tabular}
 \end{center}
@@ -189,7 +189,7 @@
 ABX Test & \checkmark & & & \checkmark \\
 ``Adaptive psychophysical methods'' & & & \checkmark & \\
 Repertory Grid Technique (RGT) & & & \checkmark & \\
-(Semantic differential) & & & (\checkmark) & \\ % same as a few of the above
+(Semantic differential) & & \checkmark & (\checkmark) & \\ % same as a few of the above
 \hline
 \end{tabular}
 \end{center}
@@ -199,7 +199,9 @@
 %
 %Selling points: remote tests, visualisation, create your own test in the browser, many interfaces, few/no dependencies, flexibility
-[Talking about what we do in the various sections of this paper. Referring to \cite{waet}. ]
+%[Talking about what we do in the various sections of this paper. Referring to \cite{waet}. ]
+This paper is divided into five sections. The Architecture section introduces the toolbox by expanding on \cite{waet}: how the Web Audio Evaluation Tool uses the Web Audio API, and how its modules relate for configuration, operation and collection. The Remote Tests section briefly highlights the performance of the server-side implementation, which enables powerful remote testing and seamless deployment to many locations. The Interfaces section outlines the interfaces currently supported by the toolbox, with a brief description of each. The Analysis and Diagnostics section presents the online analysis tools available for processing the gathered data, before we conclude the paper and highlight future work.
+
 \begin{comment}
 % MEETING 8 OCTOBER
 \subsection{Meeting 8 October}
@@ -229,35 +231,24 @@
 \end{comment}
 \section{Architecture} % title? 'back end'? % NICK
+%A slightly technical overview of the system. Talk about XML, JavaScript, Web Audio API, HTML5.
+WAET utilises the Web Audio API for audio playback, using only a sparse subset of its functionality; nevertheless, the performance of WAET comes directly from the Web Audio API. Listening tests can convey large amounts of information beyond the perceptual relationship between the audio fragments. Specifically, WAET can record which parts of the audio fragments were listened to and when, at what point in the audio stream the participant switched to a different fragment, and what new rating they gave a fragment.
+Therefore it is possible not only to evaluate the perceptual research question, but also to assess whether the participant performed the test well, and hence whether their results are representative or should be discarded as an outlier. One of the key initial design parameters for WAET is to make the tool as open as possible to non-programmers; to this end, all of the user-modifiable options are included in a single XML document. This document is loaded automatically by the web page, and the JavaScript code parses it and loads any extra resources required to create the test.
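As an illustration of this loading step, the sketch below fetches and walks the specification document in the browser. The file name project.xml is an assumption made for the example; the audioholder and audioelement node names follow the structure referred to elsewhere in this paper.

    // Sketch of the automatic specification loading described above.
    // "project.xml" is an assumed file name; "audioholder" (a test page)
    // and "audioelement" (a fragment) follow the node names used in the text.
    var request = new XMLHttpRequest();
    request.open('GET', 'project.xml', true);
    request.onload = function () {
        var spec = request.responseXML; // the parsed XML specification
        var pages = spec.getElementsByTagName('audioholder');
        for (var i = 0; i < pages.length; i++) {
            var fragments = pages[i].getElementsByTagName('audioelement');
            // ...download each fragment URL and build the test page around it
        }
    };
    request.send();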
+%Describe and/or visualise audioholder-audioelement-... structure.
+The specification document also contains the URLs of the audio fragments for each test page. These fragments are downloaded asynchronously and decoded by the Web Audio offline decoder. The resulting buffers are assigned to custom Audio Object nodes, each of which tracks the fragment buffer, the playback bufferSourceNode, the XML information including its unique test ID, the interface object(s) associated with the fragment, and any metric or data collection objects. The Audio Objects are controlled by an over-arching custom Audio Context node (not to be confused with the Web Audio context); this parent JS node allows session-wide control of the Audio Objects, including starting and stopping playback of specific nodes. The only issue with this model is the bufferSourceNode in the Web Audio API, which is implemented as a `use once' object: once it has been played, it must be discarded, as it cannot be instructed to play its buffer again. Therefore, on each start request a new bufferSourceNode must be created and linked with the stored buffer. This is odd behaviour for such a simple object; the only alternative, the HTML5 audio element, cannot be started synchronously at a given time and is therefore not suited.
+%Which type of files? WAV, anything else? Perhaps not exhaustive list, but say something along the lines of 'whatever browser supports'. Compatibility?
+The media file formats supported depend on the browser-level support for the initial decoding, and are the same as the browser support for the HTML5 audio element. The most widely supported format is wave (.WAV), which is accepted by every browser supporting the Web Audio API. The next best supported audio-only formats are MP3 and AAC (in MP4), which are supported by all major browsers; Firefox relies on OS decoders, so its support is predicated on OS support.
+All the collected session data is returned in an XML document structured similarly to the configuration document, where test pages contain the audio elements with their trace collection, results, comments and any other interface-specific data points.
-
-A slightly technical overview of the system. Talk about XML, JavaScript, Web Audio API, HTML5.
-Describe and/or visualise audioholder-audioelement-... structure.
-
-% see also SMC12 - less detail here
-
-Which type of files? % WAV, anything else? Perhaps not exhaustive list, but say something along the lines of 'whatever browser supports'
-
-Streaming audio? % probably not, unless it's easy
-
-Compatibility? % not IE, everything else fine?
-
-
-
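The `use once' behaviour described above reduces to the following generic Web Audio API pattern. This is a minimal sketch, not WAET's actual Audio Object implementation; it assumes a single fragment played straight to the context destination.

    // Sketch: the decoded AudioBuffer is kept, but a fresh
    // AudioBufferSourceNode must be created on every start request.
    var context = new AudioContext();

    function makeAudioObject(arrayBuffer) {
        var obj = { buffer: null, source: null };
        context.decodeAudioData(arrayBuffer, function (decoded) {
            obj.buffer = decoded;                      // reusable decoded audio
        });
        obj.start = function (when) {
            obj.source = context.createBufferSource(); // use-once node
            obj.source.buffer = obj.buffer;
            obj.source.connect(context.destination);
            obj.source.start(when);                    // sample-accurate start
        };
        obj.stop = function () {
            if (obj.source) { obj.source.stop(); obj.source = null; }
        };
        return obj;
    }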
 \section{Remote tests} % with previous?
-If the experimenter is willing to trade some degree of control for a higher number of participants, the test can be hosted on a web server so that subjects can take part remotely. This way, a link can be shared widely in the hope of attracting a large amount of subjects, while listening conditions and subject reliability may be less ideal. However, a sound system calibration page and a wide range of metrics logged during the test mitigate these problems. Note also that in some experiments, it may be preferred that the subject has a `real life', familiar listening set-up, for instance when perceived quality differences on everyday sound systems are investigated.
+If the experimenter is willing to trade some degree of control for a higher number of participants, the test can be hosted on a web server so that participants can take part remotely. This way, a link can be shared widely in the hope of attracting a large number of subjects, while listening conditions and subject reliability may be less ideal. However, a sound system calibration page and a wide range of metrics logged during the test mitigate these problems. Note also that in some experiments, it may be preferred that the subject has a `real life', familiar listening set-up, for instance when perceived quality differences on everyday sound systems are investigated. Furthermore, a fully browser-based test, where the collection of the results is automatic, is more efficient and technically reliable even when the test still takes place under lab conditions.
 
 The following features allow easy and effective remote testing:
@@ -406,25 +397,26 @@
 \section{Analysis and diagnostics}
 % don't mention Python scripts
-It would be great to have easy-to-use analysis tools to visualise the collected data and even do science with it. Even better would be to have all this in the browser. Complete perfection would be achieved if and when only limited setup, installation time, and expertise are required for the average non-CS researcher to use this.
+It would be great to have easy-to-use analysis tools to visualise the collected data and even do science with it. Even better would be to have all this in the browser. Complete perfection would be achieved if and when only limited setup, installation time, and expertise are required for the average non-CS researcher to use this. Tools such as \cite{scale} include analysis features inside their packages as well.
+One advantage of web-based tests is the ability to process data as it becomes available, using server-side programming. Since entire test sessions are uploaded, the results can be parsed immediately and the aggregate results updated, meaning the researcher simply needs to browse to the web page to view the current test results in a friendly interface, rather than downloading the XML files.
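Both the automatic collection of results and the immediate server-side processing described above rest on a single HTTP round trip. Below is a minimal sketch of the client-side half; the save.php endpoint name is an assumption for the example, not WAET's actual API.

    // Sketch: return the finished session XML to the server.
    // "save.php" is an assumed endpoint name.
    function submitResults(resultDocument) {
        var xhr = new XMLHttpRequest();
        xhr.open('POST', 'save.php', true);
        xhr.setRequestHeader('Content-Type', 'text/xml');
        xhr.send(new XMLSerializer().serializeToString(resultDocument));
    }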
-The following could be nice:
+The following functionality is available:
 \begin{itemize}[noitemsep,nolistsep]
 \item Web page showing all audioholder IDs, file names, subject IDs, audio element IDs, ... in the collected XMLs so far (\texttt{saves/*.xml})
 \item Check/uncheck each of the above for analysis (e.g. zoom in on a certain song, or exclude a subset of subjects)
 \item Click a mix to hear it (follow path in XML setup file, which is also embedded in the XML result file)
 \item Box plot, confidence plot, scatter plot of values (for a given audioholder)
-\item Timeline for a specific subject (see Python scripts), perhaps re-playing the experiment in X times realtime. (If actual realtime, you could replay the audio...)
-\item Distribution plots of any radio button and number questions (drop-down menu with `pretest', `posttest', ...; then drop-down menu with question `IDs' like `gender', `age', ...; make pie chart/histogram of these values over selected range of XMLs)
-\item All `comments' on a specific audioelement
-\item A `download' button for a nice CSV of various things (values, survey responses, comments) people might want to use for analysis, e.g. when XML scares them
-\item Validation of setup XMLs (easily spot `errors', like duplicate IDs or URLs, missing/dangling tags, ...)
+\item Timeline for a specific subject / song %(see Python scripts), perhaps re-playing the experiment in X times realtime. (If actual realtime, you could replay the audio...) ---> A LOT of work, not sure I can guarantee this one
+\item Distribution plots of any radio button and number questions %(drop-down menu with `pretest', `posttest', ...; then drop-down menu with question `IDs' like `gender', `age', ...; make pie chart/histogram of these values over selected range of XMLs)
+\item All `comments' on a specific audioelement and export to CSV / XML
+\item A `download' button for a nice CSV of various things (values, survey responses, comments) %people might want to use for analysis, e.g. when XML scares them
+%\item Validation of setup XMLs (easily spot `errors', like duplicate IDs or URLs, missing/dangling tags, ...) --> Took this out as a feature, as test_create will already do this, as will the test console.
 \end{itemize}
 
-A subset of the above would already be nice for this paper.
+%A subset of the above would already be nice for this paper.
 
-Some pictures here please.
+[Some pictures here please.]
 
 \section{Concluding remarks and future work}
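For the box and confidence plots in the feature list above, the underlying per-fragment statistic can be as simple as the sketch below, assuming ratings holds the values gathered for one audioelement across the saved result XMLs.

    // Sketch: mean and 95% confidence interval for one fragment's ratings.
    // Uses a normal approximation; small samples would need a t value.
    function meanAndConfidence95(ratings) {
        var n = ratings.length;
        var mean = ratings.reduce(function (a, b) { return a + b; }, 0) / n;
        var variance = ratings.reduce(function (a, b) {
            return a + (b - mean) * (b - mean);
        }, 0) / (n - 1);
        var halfWidth = 1.96 * Math.sqrt(variance / n);
        return { mean: mean, low: mean - halfWidth, high: mean + halfWidth };
    }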