\documentclass[11pt, oneside]{article} % use "amsart" instead of "article" for AMSLaTeX format
\usepackage{geometry} % See geometry.pdf to learn the layout options. There are lots.
\geometry{letterpaper} % ... or a4paper or a5paper or ...
%\geometry{landscape} % Activate for rotated page geometry
\usepackage[parfill]{parskip} % Activate to begin paragraphs with an empty line rather than an indent
\usepackage{graphicx} % Use pdf, png, jpg, or eps with pdflatex; use eps in DVI mode
% TeX will automatically convert eps --> pdf in pdflatex

\usepackage{listings} % Source code
\usepackage{xcolor} % colour (source code for instance)
\definecolor{grey}{rgb}{0.1,0.1,0.1}
\definecolor{darkblue}{rgb}{0.0,0.0,0.6}
\definecolor{cyan}{rgb}{0.0,0.6,0.6}

\usepackage{amssymb}
\usepackage{cite}
\usepackage{hyperref} % Hyperlinks
\usepackage[nottoc,numbib]{tocbibind} % 'References' in TOC
\usepackage{url}

\graphicspath{{img/}} % Relative path where the images are stored.

\title{Instructions for \\ Web Audio Evaluation Tool}
\author{Nicholas Jillings, Brecht De Man and David Moffat}
%\date{7 December 2015} % Activate to display a given date or no date

\begin{document}
\maketitle

These instructions cover the use of the Web Audio Evaluation Tool on Windows and Mac OS X platforms.

We request that you acknowledge the authors and cite our work \cite{waet} when using it; see also CITING.txt.

The tool is available for academic use in its entirety, including source code, on \url{https://github.com/BrechtDeMan/WebAudioEvaluationTool}, under the GNU General Public License v3.0 (\url{http://choosealicense.com/licenses/gpl-3.0/}); see also LICENSE.txt.

The SoundSoftware project page, including a Mercurial repository, is \url{https://code.soundsoftware.ac.uk/projects/webaudioevaluationtool/}.

\textbf{The most current version of these instructions can be found on \url{https://github.com/BrechtDeMan/WebAudioEvaluationTool/wiki}.}


\tableofcontents

\clearpage

\section{Installation}
\label{sec:installation}
Download the archive (\url{https://code.soundsoftware.ac.uk/hg/webaudioevaluationtool/archive/tip.zip}) and unzip it in a location of your choice, or pull the source code from \url{https://github.com/BrechtDeMan/WebAudioEvaluationTool.git} (git) or \url{https://code.soundsoftware.ac.uk/hg/webaudioevaluationtool} (Mercurial).

\subsection{Contents}
The folder should contain the following elements: \\

\textbf{Main folder:}
\begin{itemize}
\item \texttt{CITING.txt, LICENSE.txt, README.md}: text files with, respectively, the citation which we ask you to include in any work where this tool or any portion thereof is used or modified; the license under which the software is shared; and a general readme file.
\item \texttt{demo.html}: Several demonstrations of listening tests, using examples from the example\_eval folder
\item \texttt{index.html}: webpage where the interface appears (append a link to the configuration XML, e.g. \texttt{index.html?url=config.xml})
\item \texttt{pythonServer.py}: webserver for running tests locally
\item \texttt{pythonServer-legacy.py}: webserver with limited functionality (no automatic storing of output XML files)\\
\end{itemize}
\textbf{Analysis of results (\texttt{./analysis/})}
\begin{itemize}
\item \texttt{analyse.html}: analysis and diagnostics of a set of result XML files (legacy)
\item \texttt{analysis.css}: analysis page style file
\item \texttt{analysis.js}: analysis functions
\item \texttt{index.html}: web page where analysis of stored results can be performed
\end{itemize}
\textbf{CSS files (\texttt{./css/})}
\begin{itemize}
\item \texttt{core.css}: core style file (edit to change appearance)
\end{itemize}
\textbf{Documentation (\texttt{./docs/})}
\begin{itemize}
\item AESPosterComp: PDF and \LaTeX\ source of the poster for the Audio Engineering Society UK Sustaining Members event at Solid State Logic, Begbroke
\item \href{http://c4dm.eecs.qmul.ac.uk/dmrn/events/dmrnp10/#posters}{DMRN+10}: PDF and \LaTeX\ source of the poster for the 10\textsuperscript{th} Digital Music Research Network One-Day Workshop (``soft launch'')
\item Instructions: PDF and \LaTeX\ source of these instructions
\item Project Specification Document (\LaTeX/PDF)
\item Results Specification Document (\LaTeX/PDF)
\item SMC15: PDF and \LaTeX\ source of the 12th Sound and Music Computing Conference paper \cite{waet}
\item WAC2016: PDF and \LaTeX\ source of the 2nd Web Audio Conference paper \cite{waetwac}
\item WAC2016Poster: PDF and \LaTeX\ source of the 2nd Web Audio Conference poster\\
\end{itemize}
\textbf{Interface files (\texttt{./interfaces/})}
\begin{itemize}
\item Each interface class has a JavaScript file and an optional CSS style file. These are loaded as needed.
\end{itemize}
\textbf{JavaScript code (\texttt{./js/})}
\begin{itemize}
\item \texttt{core.js}: JavaScript file with core functionality
\item \texttt{jquery-2.1.4.js}: jQuery JavaScript Library
\item \texttt{loudness.js}: allows for automatic calculation of the loudness of Web Audio API Buffer objects, returning gain values to correct for a target loudness or to match loudness between multiple objects
\item \texttt{specification.js}: decodes the configuration XML to a JavaScript object
\item \texttt{WAVE.js}: decodes WAVE files and performs byte-level manipulation
\item \texttt{xmllint.js}: XML validation
\end{itemize}
\textbf{Media files (\texttt{./media/})}
\begin{itemize}
\item \texttt{example}: contains example audio files 0.wav--10.wav, which are short 44.1~kHz, 16~bit recordings of a woman saying the corresponding number (useful for testing randomisation and general familiarisation with the interface).
\end{itemize}
\textbf{PHP scripts (\texttt{./php/})}
\begin{itemize}
\item \texttt{keygen.php}: generates a unique file name for saved results
\item \texttt{pseudo.php}: allows for pseudo-random selection from a range of configuration XML files
\item \texttt{save.php}: PHP script to store result XML files on the web server
\item PHP analysis scripts % ELABORATE
\end{itemize}
\textbf{Python scripts (\texttt{./python/})}
\begin{itemize}
\item Helpful Python and PHP scripts for extraction and visualisation of data.\\
\end{itemize}
\textbf{Output files (\texttt{./saves/})}
\begin{itemize}
\item The output XML files of tests will be stored here by default by the \texttt{pythonServer.py} script.\\
\end{itemize}
\textbf{Test creation tool (\texttt{./test\_create/})}
\begin{itemize}
\item Webpage for easily setting up your own test without having to delve into the XML.\\
\end{itemize}
\textbf{Tests (\texttt{./tests/})}
\begin{itemize}
\item This is where you can store your configuration XML files.
\item Contains a folder with examples.\\ % ELABORATE
\end{itemize}
\textbf{XML specifications (\texttt{./xml/})}
\begin{itemize}
\item \texttt{scaledefinitions.xml}: marker text and positions for various scales
\item \texttt{test-schema.xsd}: definition of the configuration and result XML file structure\\ % ELABORATE
\end{itemize}

% \textbf{Example project (\texttt{./example\_eval/})}
% \begin{itemize}
% \item An example of what the set up XML should look like,
% \end{itemize}

\subsection{Compatibility}
As Microsoft Internet Explorer doesn't support the Web Audio API\footnote{\url{http://caniuse.com/\#feat=audio-api}}, you will need another browser such as Google Chrome, Safari or Firefox (all three are tested and confirmed to work).

%Firefox does not currently support other bit depths than 8 or 16 bit for PCM wave files. In the future, this will throw a warning message to tell the user that their content is being quantised automatically. %Nick? Right? To be removed if and when actually implemented
% REPLY: Brecht, implemented our own in WAVE.js. Firefox have said they will support all bit-depth in the future.

The tool is platform-independent and works in any browser that supports the Web Audio API. It does not require any specific, proprietary software. However, if the tool is hosted locally (i.e. you are not hosting it on an actual web server), you will need Python (2.7 or 3.x), which is a free programming language; see the `Local test' subsection below.

\clearpage

\section{Quick start}
This document aims to provide an overview of all features and how to use them. However, if you are just trying out this tool, or you need to put together a test very quickly, or you simply don't want to read through all the details first, this section gives you the bare necessities for putting together a simple listening test.

\begin{itemize} % WIP
\item Download the tool (see Section~\ref{sec:installation}).
\item Copy the tool to a PHP-enabled web server if you have access to one.
\item Go to \path{test_create.html} and configure your test.
\item Save your test file in the folder \path{./tests/}.
\item Your test will be live at \path{[web server address]/index.html?url=tests/[testname].xml}. If you are not using a web server, you can simulate one locally by running
\path{python/pythonServer.py} (requires Python), after which you can access the test at \\ % hack
\path{http://localhost:8000/index.html?url=tests/[testname].xml}
\end{itemize}

\clearpage

\section{Test setup} % TO DO: Linux (Android, iOS)

\subsection{Sample rate}
Depending on how the experiment is set up, audio is resampled automatically (the Web Audio default) or the sample rate is enforced. In the latter case, you will need to make sure that the sample rate of the system is equal to the sample rate of the audio files. For this reason, all audio files in the experiment will have to have the same sample rate.

Always make sure that all other digital equipment in the playback chain (clock, audio interface, digital-to-analog converter, ...) is set to this same sample rate.

Note that upon changing the sample rate, the browser will have to be restarted for the change to take effect.

\subsubsection{Mac OS X}
To change the sample rate in Mac OS X, go to \textbf{Applications/Utilities/Audio MIDI Setup} or find this application with Spotlight (see Figure \ref{fig:audiomidisetup}). Then select the output of the audio interface you are using and change the `Format' to the appropriate number. Also make sure the bit depth and channel count are as desired.
If you are using an external audio interface, you may have to go to the preference pane of that device to change the sample rate.

Also make sure left and right channel gains are equal, as some applications alter this without changing it back, leading to a predominantly louder left or right channel. See Figure \ref{fig:audiomidisetup} for an example where the channel gains are different.

\begin{figure}[tb]
\centering
\includegraphics[width=.65\textwidth]{img/audiomidisetup.png}
\caption{The Audio MIDI Setup window in Mac OS X}
\label{fig:audiomidisetup}
\end{figure}

\subsubsection{Windows}
To change the sample rate in Windows, right-click on the speaker icon in the lower-right corner of your desktop and choose `Playback devices'. Right-click the appropriate playback device and click `Properties'. Click the `Advanced' tab and verify or change the sample rate under `Default Format'. % NEEDS CONFIRMATION
If you are using an external audio interface, you may have to go to the preference pane of that device to change the sample rate.

\subsection{Local test}
If the test is hosted locally, you will need to run the local webserver provided with this tool.

\subsubsection{Mac OS X \& Linux}

On Mac OS X, Python comes preinstalled, as with most Unix/Linux distributions.

Open the Terminal (find it in \textbf{Applications/Terminal} or via Spotlight), and go to the folder you downloaded. To do this, type \texttt{cd [folder]}, where \texttt{[folder]} is the folder containing the \texttt{pythonServer.py} script you downloaded.
For instance, if the location is \texttt{/Users/John/Documents/WebAudioEvaluationToolbox/}, then type

\texttt{cd /Users/John/Documents/WebAudioEvaluationToolbox/}

Then hit enter and run the Python script by typing

\texttt{python python/pythonServer.py}

and hit enter again. See also Figure \ref{fig:terminal}.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=.75\textwidth]{pythonServer.png}
\caption{Mac OS X: The Terminal window after going to the right folder (\texttt{cd [folder\_path]}) and running \texttt{pythonServer.py}.}
\label{fig:terminal}
\end{center}
\end{figure}

Alternatively, you can simply type \texttt{python} (followed by a space) and drag the file into the Terminal window from Finder. % DOESN'T WORK YET

You can leave this running throughout the different experiments (i.e. leave the Terminal open). Once running, the Terminal will report the URL to type into your browser to initiate the test, usually \url{http://localhost:8000/}.
On OS X 10.10 or newer, you may get a dialogue asking whether Python can accept incoming connections; click yes.

To start the test, open the browser and type

\texttt{localhost:8000}

and hit enter. The test should start (see Figure \ref{fig:test}).

To quit the server, either close the Terminal window or press Ctrl+C to forcibly shut down the server.

\subsubsection{Windows}

On Windows, Python is not generally preinstalled and therefore has to be downloaded\footnote{\url{https://www.python.org/downloads/windows/}} and installed in order to run scripts such as the local webserver, which is necessary if the tool is hosted locally.

Once installed, simply double-click the Python script \texttt{pythonServer.py} in the folder you downloaded.

You may see a warning like the one in Figure \ref{fig:warning}. Click `Allow access'.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=.6\textwidth]{warning.png}
\caption{Windows: Potential warning message when executing \texttt{pythonServer.py}.}
\label{fig:warning}
\end{center}
\end{figure}

The process should now start in the Command Prompt that opens; see Figure \ref{fig:python}.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=.75\textwidth]{python.png}
\caption{Windows: The Command Prompt after running \texttt{pythonServer.py} and opening the corresponding website.}
\label{fig:python}
\end{center}
\end{figure}

You can leave this running throughout the different experiments (i.e. leave the Command Prompt open).

To start the test, open the browser and type

\texttt{localhost:8000}

and hit enter. The test should start (see Figure \ref{fig:test}).
\begin{figure}[htb]
\begin{center}
\includegraphics[width=.8\textwidth]{test.png}
\caption{The start of the test in Google Chrome on Windows 7.}
\label{fig:test}
\end{center}
\end{figure}

If at any point in the test the participant reports unexpected behaviour or an error of some kind, or the test needs to be interrupted, please notify the experimenter and/or refer to Section~\ref{sec:troubleshooting}.

When the test is over (the subject should see a message to that effect), the output XML file containing all collected data should have appeared in `saves/'. The names of these files are `test-0.xml', `test-1.xml', etc., in ascending order. The Terminal or Command Prompt running the local web server will display the name of each saved file. If such a file did not appear, please again refer to Section~\ref{sec:troubleshooting}. % Is this still the case?

It is advised that you back up these results as often as possible, as a loss of this data means that the time and effort spent by the subject(s) has been in vain. Save the results to an external or network drive, and/or send them to the experimenter regularly.

To start the test again for a new participant, you do not need to close the browser or shut down the Terminal or Command Prompt. Simply refresh the page or go to \texttt{localhost:8000} again; a new session will be created.


\subsection{Remote test}
Put all files on a web server which supports PHP. This allows the `save.php' script to store the XML result files in the `saves/' folder.

Ensure that the \path{saves/} directory has public read-write access. On most Linux servers this can be achieved using the command \texttt{sudo chmod 777 ./saves}.

Make sure the \texttt{projectReturn} attribute of the \texttt{setup} node is set to the \texttt{save.php} script.

Then, just go to the URL of the corresponding HTML file, e.g. \url{http://server.com/path/to/WAET/index.html?url=test/my-test.xml}. If storing on the server doesn't work at submission (e.g. if the \texttt{projectReturn} attribute isn't properly set or PHP does not have the correct permissions), the result XML file will be presented to the subject on the client side, as a `Save file' link.

\subsection{Load a test / Multiple test documents}
By default the \texttt{index.html} page will load an empty page. To automatically load a test document, you need to append its location to the URL. If your URL is normally \url{http://localhost:8000/index.html}, you would append the following: \url{?url=/path/to/your/test.xml}. Replace the fields with your actual path; the path is local to the running directory, so if you have your test \texttt{project.xml} in the directory \texttt{example\_eval}, you would append \url{?url=/example_eval/project.xml}.

\clearpage

\section{Interfaces}

The Web Audio Evaluation Tool comes with a number of interface styles, each of which can be customised extensively, either by configuring them differently using the many optional features, or by modifying the JavaScript files.

To set the interface style for the whole test, set the \texttt{interface} attribute of the \texttt{setup} node, e.g. \texttt{interface="APE"}, where \texttt{APE} is one of the interface names below.
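For example, the \texttt{setup} node of a project XML could then start as follows (a minimal sketch; the attribute values are placeholders, and the full list of attributes and child nodes is described in the Project XML section):

\begin{lstlisting}[language=XML]
<setup interface="APE" projectReturn="save.php" randomiseOrder="false">
    <!-- setup child nodes go here -->
</setup>
\end{lstlisting}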
\subsection{Templates}
This section describes the different templates available in the Interfaces folder (\texttt{./interfaces}).

\begin{description}
\item[Blank] Use this template to start building your own, custom interface (JavaScript and CSS).

\item[AB] Performs a pairwise comparison, but supports n-way comparison (in the example we demonstrate it performing a 7-way comparison).

\item[ABX] Like AB, but with an unknown sample X which has to be identified as being either A or B.

\item[APE] The APE interface is based on \cite{ape}, and consists of one or more axes, each corresponding to an attribute to be rated, on which markers are placed. As such, it is a multiple stimulus interface where (for each dimension or attribute) all elements are on one axis so that they can be maximally compared against each other, as opposed to rated individually or with regard to a single reference.
It also contains an optional text box for each element, to allow for clarification by the subject, tagging, and so on.

\item[Discrete] Each audio element is given a discrete set of values based on the number of slider options specified. For instance, Likert specifies 5 values and therefore each audio element must be given one of those 5 values.

\item[Horizontal sliders] Creates the same interface as MUSHRA, except that the sliders are horizontal rather than vertical.

\item[MUSHRA] This is a straightforward implementation of \cite{mushra}, especially common for the rating of audio quality, for instance for the evaluation of audio codecs. This can also operate any vertical slider style test and does not necessarily have to match the MUSHRA specification.
\end{description}


\subsection{Examples}
Below are a number of established interface types, all of which are supported using the templates from the previous section; the list is taken from \cite{waetwac}. % Confirm?

% TODO: add labels like (\textbf{\texttt{horizontal-sliders}}) to show which type of interface can be created using which template

\begin{itemize}
\item AB Test / Pairwise comparison~\cite{lipshitz1981great,david1963method}: Two stimuli are presented simultaneously and the participant selects a preferred stimulus.
\item ABC/HR (ITU-R BS. 1116)~\cite{recommendation19971116} (Mean Opinion Score: MOS): each stimulus has a continuous scale (5--1), labelled as Imperceptible, Perceptible but not annoying, Slightly annoying, Annoying, Very annoying.
\item -50 to 50 Bipolar with Ref: each stimulus has a continuous scale from -50 to 50 with a default value of 0 in the middle, and a reference.
\item Absolute Category Rating (ACR) Scale~\cite{rec1996p}: Likert but labels are Bad, Poor, Fair, Good, Excellent.
\item ABX Test~\cite{clark1982high}: Two stimuli are presented along with an unknown stimulus X, which the participant has to identify as one of the two.
\item APE~\cite{ape}: Multiple stimuli on one or more axes for inter-sample rating.
%\item APE style 2D \cite{ape}: Multiple stimuli on a 2D plane for inter-sample rating (e.g. Valence Arousal). % TO BE IMPLEMENTED
\item Comparison Category Rating (CCR) Scale~\cite{rec1996p}: ACR \& DCR but 7 point scale, with reference: Much better, Better, Slightly better, About the same, Slightly worse, Worse, Much worse.
\item Degradation Category Rating (DCR) Scale~\cite{rec1996p}: ABC \& Likert but labels are (5) Inaudible, (4) Audible but not annoying, (3) Slightly annoying, (2) Annoying, (1) Very annoying.
\item ITU-R 5 Point Continuous Impairment Scale~\cite{rec1997bs}: Same as ABC/HR but with a reference.
\item Likert scale~\cite{likert1932technique}: each stimulus has a five point scale with values: Strongly agree, Agree, Neutral, Disagree and Strongly disagree.
\item MUSHRA (ITU-R BS. 1534)~\cite{recommendation20031534}: Multiple stimuli are presented and rated on a continuous scale, which includes a reference, hidden reference and hidden anchors.
\item Pairwise Comparison (Better/Worse)~\cite{david1963method}: every stimulus is rated as being either better or worse than the reference.
\item Rank Scale~\cite{pascoe1983evaluation}: stimuli are ranked on a single horizontal scale, ordered by preference.
\item 9 Point Hedonic Category Rating Scale~\cite{peryam1952advanced}: each stimulus has a nine point scale with values: Like extremely, Like very much, Like moderately, Like slightly, Neither like nor dislike, Dislike extremely, Dislike very much, Dislike moderately, Dislike slightly. There is also a provided reference.
\end{itemize}


\subsection{Building your own interface}

\subsubsection{Nodes to familiarise}
\texttt{core.js} handles several very important nodes which you should become familiar with. The first is the Audio Engine, initialised and stored in the variable `AudioEngineContext'. This handles the playback of the web audio nodes as well as storing the `AudioObjects'. The `AudioObjects' are custom nodes which hold the audio fragments for playback. These nodes also have a link to two interface objects: the comment box, if enabled, and the interface providing the ranking. On creation of an `AudioObject' the interface link will be nulled; it is up to the interface to link these correctly.

The specification document will be decoded and parsed into an object called `specification'. This will hold all of the specification's various nodes. The test pages and any pre/post test objects are processed by a test state which will proceed through the test when called to by the interface. Any checks (such as playback or movement checks) are to be completed by the interface before instructing the test state to proceed. The test state will call the interface on each page load with the page specification node.

\subsubsection{Modifying \texttt{core.js}}
Whilst very little code is actually needed, you do need to instruct \texttt{core.js} to load your interface file when it is called for by a specification node. There is a function called `loadProjectSpecCallback' which handles the decoding of the specification and the setting of any external items (such as metric collection). At the very end of this function there is an if statement; add your interface string to this list to link to the source file. There is an example in there for both the APE and MUSHRA tests already included. Note: any updates to \texttt{core.js} in future work will most likely overwrite your changes to this file, so remember to check that your interface is still listed after any update that interferes with \texttt{core.js}.
Any further files can be loaded here as well, such as CSS styling files. jQuery is already included.
\subsubsection{Building the Interface}
Your interface file will get loaded automatically when the `interface' attribute of the setup node matches the string in the `loadProjectSpecCallback' function. The following functions must be defined in your interface file. A template file is provided in \path{interfaces/blank.js}.
\begin{itemize}
\item \texttt{loadInterface} - Called once when the document is parsed. This creates any necessary bindings, such as to the metric collection classes and any check commands. Here you can also start the structure for your test, such as placing in any common nodes (such as the title and empty divs to drop content into later).
\item \texttt{loadTest(audioHolderObject)} - Called for each page load. The audioHolderObject contains a specification node holding effectively one of the audioHolder nodes.
\item \texttt{resizeWindow(event)} - Handle for any window resizing. Simply scale your interface accordingly. This function must be present, but can be an empty function.
\end{itemize}

\textbf{loadInterface}\\
This function is called once the document has been parsed, since some browsers may parse files asynchronously. The best method is simply to put `loadInterface()' at the top of your interface file, so that the function is called as soon as the JavaScript engine is ready.

By default the HTML file has an element with id ``topLevelBody'' where you can build your interface. Make sure you blank the contents of that object. This function is the perfect time to build any fixed items, such as the page title, session titles, interface buttons (Start, Stop, Submit) and any holding and structural elements for later on.

At the end of the function, insert these two function calls: \texttt{testState.initialise()} and \texttt{testState.advanceState()}. This will actually begin the test sequence, including the pre-test options (if any are included in the specification document).

\textbf{loadTest(audioHolderObject)}\\
This function is called on each new test page. It is this function's job to clear out the previous test and set up the new page. Use the function \texttt{audioEngineContext.newTestPage()} to instruct the audio engine to prepare for a new page. \texttt{audioEngineContext.audioObjects = []} will delete any audioObjects, \texttt{interfaceContext.deleteCommentBoxes()} will delete any comment boxes, and \texttt{interfaceContext.deleteCommentQuestions()} will delete any extra comment boxes specified by commentQuestion nodes.

This function will need to instruct the audio engine to build each fragment. Simply passing each element from the audioHolderObject to the constructor, \texttt{audioEngineContext.newTrack(element)} (where \texttt{element} is the audioHolderObject audio element), will build the track and return a reference to the constructed audioObject. Decoding of the audio will happen asynchronously.

You also need to link audioObject.interfaceDOM with your interface object for that audioObject. The interfaceDOM object has a few default methods. Firstly, it must start disabled and become enabled once the audioObject has decoded the audio (function call: \texttt{enable()}). Next, it must have a function \texttt{exportXMLDOM()}, which returns the XML node for your interface; the default is for it to return a value node, with textContent equal to the normalised value.
You can perform other functions, but our scripts may not work if something different is specified (as it will breach our results specifications). Finally, it must also have a method \texttt{getValue}, which returns the normalised value.

It is also the job of the interfaceDOM to call any necessary metric collection functions; however, some calls may be better placed elsewhere (for example, the APE interface uses drag and drop, so the best approach was to call the metric functions from the dragEnd function, which is called when the interface object is dropped). Metrics based upon listening are handled by the audioObject. The interfaceDOM object must manage any movement metrics. For a list of valid metrics and their behaviours, see the project specification document included in the \texttt{docs} folder of the repository. The same goes for any checks required when pressing the submit button, or any other method of advancing the test state.


\clearpage

\section{Project XML}

Each test is defined by its project XML file; examples can be found in the \texttt{./tests/examples/} directory.

In the XML there are several nodes which must be defined:
\begin{itemize}
\item \texttt{<waet>}: The root node.
\item \texttt{<setup>}: The first child node; defines whole-test parameters.
\item \texttt{<page>}: Specifies a test page; attached \emph{after} the \texttt{<setup>} node.
\item \texttt{<audioelement>}: Specifies an audio element.
\end{itemize}

The test uses XML validation, so the ordering of nodes is important to pass this validation. Some nodes also have specific attributes which must be set and which may have to follow a certain format. This is done so error checking can be performed to catch easy-to-find errors before loading and running a test session. If your project XML fails this validation, all the errors will be listed.

Before identifying any features, this part will walk you through the available nodes, their function and their attributes.

\subsection{Root}
The root node is \texttt{<waet>}; it must have the following attributes:

\texttt{xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"}

\texttt{xsi:noNamespaceSchemaLocation="test-schema.xsd"}.

This will ensure it is checked against the XML schema for validation.
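Putting this together, the overall shape of a project document is sketched below (IDs, file names and attribute values are placeholders, and most optional attributes and child nodes described in the following subsections are omitted, so this skeleton is for illustration rather than a complete, validating test):

\begin{lstlisting}[language=XML]
<?xml version="1.0" encoding="utf-8"?>
<waet xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:noNamespaceSchemaLocation="test-schema.xsd">
    <setup interface="APE" projectReturn="save.php">
        <!-- setup child nodes (see below) -->
    </setup>
    <page id="page-1" hostURL="media/example/">
        <!-- page child nodes (see below) -->
        <audioelement id="elem-1" url="0.wav"/>
        <audioelement id="elem-2" url="1.wav"/>
    </page>
</waet>
\end{lstlisting}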
\subsection{Set up}
The first child node, \texttt{<setup>}, specifies any one-time and global parameters. It takes the following attributes:
\begin{itemize}
\item \texttt{interface}: String, mandatory. Specifies the interface to load.
\item \texttt{projectReturn}: URL, mandatory. Specifies the return point; can be a 3rd party server or the local server. Set to null to disable automatic saving. Specifying ``save.php'' will trigger the return if either the PHP or Python server is used. On error, it will always default to presenting the save link on the page.
\item \texttt{returnURL}: Upon successful completion and submission of the test, this URL will be opened. This can be a presentation of the results thus far, some type of reward, or a page with links to other tests.
\item \texttt{randomiseOrder}: Boolean, optional. If true, the order of the test pages is randomised. Default is false.
\item \texttt{poolSize}: non-negative integer, optional. Specifies the number of test pages to actually test with. Combined with \texttt{randomiseOrder} being true, this gives each participant a random set of test pages from the given pool of \texttt{<page>} nodes. Specifying 0 disables this option; default is 0.
\item \texttt{loudness}: non-positive integer, optional. Sets the default LUFS target value. See Section~\ref{sec:loudness} for more.
\item \texttt{sampleRate}: positive integer, optional. If set, the sample rate reported by the Web Audio API must match this number. See Section~\ref{sec:samplerate}.
\item \texttt{calibration}: boolean, optional. If true, a simple hearing test is presented to the user to gather the system frequency response (DAC, listening device and subject hearing). Default is false.
\item \texttt{crossFade}: decimal greater than or equal to 0.0, optional. Defines the cross-fade between fragments, in seconds, applied when a fragment is clicked. Default is 0.0s.
\item \texttt{preSilence}: decimal greater than or equal to 0.0, optional. Adds a portion of silence, in seconds, to the beginning of the buffer of all elements in the test. Default is 0.0s.
\item \texttt{postSilence}: decimal greater than or equal to 0.0, optional. Adds a portion of silence, in seconds, to the end of the buffer of all elements in the test. Default is 0.0s.
\end{itemize}

The \texttt{<setup>} node takes the following child nodes; note these must appear in this order:
\begin{itemize}
\item \texttt{<survey>}: Min of 0, max of 2 occurrences. See Section~\ref{sec:survey}.
\item \texttt{}: Must appear only once.
\item \texttt{}: Must appear only once.
\end{itemize}
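As an illustration, a \texttt{<setup>} node using several of the optional attributes above might look like the following sketch (the values are arbitrary and the child nodes are omitted):

\begin{lstlisting}[language=XML]
<setup interface="MUSHRA" projectReturn="save.php"
       returnURL="http://example.com/thanks.html"
       randomiseOrder="true" poolSize="2"
       loudness="-23" sampleRate="48000"
       crossFade="0.1" preSilence="0.5" postSilence="0.5">
    <!-- survey and other child nodes, in the order listed above -->
</setup>
\end{lstlisting}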
\subsection{Page}
\label{sec:page}
These are the only other first-level child nodes; they specify the test pages. Each takes the following attributes:
\begin{itemize}
\item \texttt{id}: ID, mandatory. A string which must be unique across the entire XML. It is used to identify the page on test completion, as pages are returned in the results in the order they appeared, not the order specified.
\item \texttt{hostURL}: URL, mandatory. Used in conjunction with the \texttt{<audioelement>} url to specify where the audio files are located. For instance, if all your files are in the directory \texttt{./test/} you can set this attribute to ``/test/'' and the \texttt{<audioelement>} url attribute only needs the file name. Set to ``'' if no hostURL prefix is desired.
\item \texttt{randomiseOrder}: Boolean, optional. If true, the audio fragments are presented randomly rather than in the order specified. See Section~\ref{sec:randomisation}. Default is false.
\item \texttt{repeatCount}: non-negative integer, optional. Specifies the number of times to repeat the test page (re-present). Each presentation will appear as an individual page in the results. Default is 0.
\item \texttt{loop}: Boolean, optional. If true, the audio elements will loop synchronously with each other. See Section~\ref{sec:looping}. Default is false.
\item \texttt{loudness}: non-positive integer, optional. Sets the LUFS target value for this page, superseding the \texttt{<setup>} loudness attribute for this page. See Section~\ref{sec:loudness} for more.
\item \texttt{label}: enumeration, optional. Sets the label style to one of the following:
\begin{itemize}
\item \texttt{default}: The default given by the interface (used if undefined)
\item \texttt{none}: Show no labels
\item \texttt{number}: Show natural numbers starting at index 1
\item \texttt{letter}: Show letters starting at `a'
\item \texttt{capital}: Show letters starting at `A'
\end{itemize}
\item \texttt{poolSize}: non-negative integer, optional. Determines the number of \texttt{<audioelement>} nodes to take from those defined. For instance, if \texttt{poolSize=3} and there are 4 audio elements, only 3 will actually be loaded and presented to the user.
\item \texttt{alwaysInclude}: boolean, optional. If the \texttt{<setup>} node has \texttt{poolSize} set, you can enforce the page to always be selected by setting \texttt{alwaysInclude} to true. Default is false.
\item \texttt{preSilence}: decimal greater than or equal to 0.0, optional. Adds a portion of silence, in seconds, to the beginning of the buffer of all elements in the page. Supersedes any value set in \texttt{<setup>}. Default is 0.0s.
\item \texttt{postSilence}: decimal greater than or equal to 0.0, optional. Adds a portion of silence, in seconds, to the end of the buffer of all elements in the page. Supersedes any value set in \texttt{<setup>}. Default is 0.0s.
\end{itemize}

The \texttt{<page>} node takes the following child nodes; note these must appear in this order:
\begin{itemize}
\item \texttt{<title>}: Appears once or not at all. The text content of this node specifies the title of the test page, for instance \texttt{<title>John Doe's Test</title>}.
\item \texttt{}: Must appear only once.
\item \texttt{<audioelement>}: Minimum of one. Specifies an audio element, see Section~\ref{sec:audioelement}.
\item \texttt{<commentquestion>}: Min of 0, max unlimited occurrences. See Section~\ref{sec:commentboxes}.
\item \texttt{<survey>}: Min of 0, max of 2 occurrences. See Section~\ref{sec:survey}.
\end{itemize}

\subsection{Survey}
\label{sec:survey}
These specify any survey items to be presented. There must be a maximum of two of these per \texttt{<setup>} and \texttt{<page>} node. They have one attribute, \texttt{location}, which must be set to one of the following: before, pre, after or post (here before is equivalent to pre, and after to post). This specifies whether the survey appears before or after the node it is associated with. When a child of \texttt{<setup>}, pre/before will be shown before the first test page and after/post after completing the last test page. When a child of \texttt{<page>}, pre/before is shown before that test page commences and after/post once the page has been submitted.

The survey node takes as its only children the \texttt{<surveyentry>} nodes, of which there can be any number.
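For example, a test page with a post-page survey question might be sketched as follows (IDs, file names and question text are placeholders, the node order follows the list above, and any further required child nodes are omitted; the \texttt{<surveyentry>} attributes and children are described below):

\begin{lstlisting}[language=XML]
<page id="page-1" hostURL="media/example/" randomiseOrder="true" label="letter">
    <title>Example test page</title>
    <!-- further required page child nodes omitted -->
    <audioelement id="elem-1" url="0.wav"/>
    <audioelement id="elem-2" url="1.wav"/>
    <survey location="post">
        <surveyentry id="comments" type="question" mandatory="false">
            <statement>Any further comments on this page?</statement>
        </surveyentry>
    </survey>
</page>
\end{lstlisting}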
\subsubsection{Survey Entry}
These nodes have the following attributes, which vary depending on the survey type wanted:
\begin{itemize}
\item \texttt{id}: ID, mandatory. Must be unique across the entire XML; used to identify the response in the results.
\item \texttt{type}: String, mandatory. Must be one of the following: statement, question, checkbox, radio or number. This defines the type of entry to show.
\item \texttt{mandatory}: Boolean, optional. Defines whether the entry must have a response or not. Does not apply to statements. Default is false.
\item \texttt{min}: Number, optional. Only applies when \texttt{type="number"}; the minimum valid response.
\item \texttt{max}: Number, optional. Only applies when \texttt{type="number"}; the maximum valid response.
\item \texttt{boxsize}: String, optional. Only applies when \texttt{type="question"} and must be one of the following: normal (default), small, large or huge.
\end{itemize}

The nodes have the following children, which vary depending on the survey type wanted.
\begin{itemize}
\item \texttt{<statement>}: Must appear only once. Its text content specifies the text to appear as the statement or question for the user to respond to.
\item \texttt{