comparison docs/SMC15/smc2015template.tex @ 1672:b572136b9ac1
Remove some redundancies from paper
author: Brecht De Man <b.deman@qmul.ac.uk>
date: Tue, 21 Apr 2015 16:50:10 +0100
parents: 876616d83f56
children: 657d63ab4458
1671:876616d83f56 | 1672:b572136b9ac1
7 \usepackage{smc2015} | 7 \usepackage{smc2015} |
8 \usepackage{times} | 8 \usepackage{times} |
9 \usepackage{ifpdf} | 9 \usepackage{ifpdf} |
10 \usepackage[english]{babel} | 10 \usepackage[english]{babel} |
11 \usepackage{cite} | 11 \usepackage{cite} |
12 | |
13 \hyphenation{Java-script} | |
12 | 14 |
13 %%%%%%%%%%%%%%%%%%%%%%%% Some useful packages %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% | 15 %%%%%%%%%%%%%%%%%%%%%%%% Some useful packages %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% |
14 %%%%%%%%%%%%%%%%%%%%%%%% See related documentation %%%%%%%%%%%%%%%%%%%%%%%%%% | 16 %%%%%%%%%%%%%%%%%%%%%%%% See related documentation %%%%%%%%%%%%%%%%%%%%%%%%%% |
15 %\usepackage{amsmath} % popular packages from Am. Math. Soc. Please use the | 17 %\usepackage{amsmath} % popular packages from Am. Math. Soc. Please use the |
16 %\usepackage{amssymb} % related math environments (split, subequation, cases, | 18 %\usepackage{amssymb} % related math environments (split, subequation, cases, |
144 \end{abstract} | 146 \end{abstract} |
145 % | 147 % |
146 | 148 |
147 \section{Introduction}\label{sec:introduction} | 149 \section{Introduction}\label{sec:introduction} |
148 | 150 |
149 tiny mock change | |
150 | |
151 TOTAL PAPER: Minimum 4 pages, 6 preferred, max. 8 (6 for demos/posters)\\ | 151 TOTAL PAPER: Minimum 4 pages, 6 preferred, max. 8 (6 for demos/posters)\\ |
152 | 152 |
153 NICK: examples of what kind of audio applications HTML5 has made possible, with references to publications (or website)\\ | 153 NICK: examples of what kind of audio applications HTML5 has made possible, with references to publications (or website)\\ |
154 | 154 |
155 background (types of research where this type of perceptual evaluation of audio is relevant)\\ | 155 background (types of research where this type of perceptual evaluation of audio is relevant)\\ |
226 During playback, the playback nodes loop indefinitely until playback is stopped. The gain nodes in the \textit{audioObject}s enable dynamic muting of nodes. When a bar in the sliding ranking is clicked, the audio engine mutes all \textit{audioObject}s and un-mutes the clicked one. Therefore, if the audio samples are perfectly aligned and of the same sample length, they will remain perfectly aligned with each other. | 226 During playback, the playback nodes loop indefinitely until playback is stopped. The gain nodes in the \textit{audioObject}s enable dynamic muting of nodes. When a bar in the sliding ranking is clicked, the audio engine mutes all \textit{audioObject}s and un-mutes the clicked one. Therefore, if the audio samples are perfectly aligned and of the same sample length, they will remain perfectly aligned with each other. |
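The mute-all-then-unmute-one behaviour described above can be sketched in a few lines. `selectFragment` and the shape of `audioObjects` are hypothetical names for illustration, not the tool's actual API; in a real implementation each entry would wrap a Web Audio GainNode, whose `gain` AudioParam is set here.

```javascript
// Sketch of the exclusive-solo behaviour: mute every fragment, then un-mute
// the one whose bar was clicked. Only the gains change; all source nodes keep
// looping, so perfectly aligned samples stay aligned with each other.
// `audioObjects` is an assumed structure; each entry exposes a
// `gain.gain.value` field, mimicking a Web Audio GainNode.
function selectFragment(audioObjects, clickedIndex) {
  audioObjects.forEach((obj, i) => {
    obj.gain.gain.value = (i === clickedIndex) ? 1 : 0;
  });
}
```

Because switching happens purely in the gain stage, no source node is restarted, which is what preserves sample alignment between fragments.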
227 | 227 |
228 | 228 |
229 \subsection{Setup and results formats}\label{sec:setupresultsformats} | 229 \subsection{Setup and results formats}\label{sec:setupresultsformats} |
230 | 230 |
231 [somewhere: check all fragments are played] | 231 Setup and the results both use the common XML document format to outline the various parameters. The setup file contains all the information needed to initialise a test session. Several nodes can be defined to outline the audio samples to use, the questions to be asked, and any pre- or post-test questions or instructions. Having one document to modify allows for quick manipulation in a `human readable' form to create new tests, or adjust current ones, without needing to edit the web files themselves. |
232 | 232 |
233 Setup and the results both use the common XML document format to outline the various parameters. The setup file contains all the information needed to initialise a test session. Several Nodes % capital letter? | 233 The results file is dynamically generated by the interface upon clicking the `Submit' button. This also executes checks, depending on the setup file, to ensure that all tracks have been played back, rated and commented on. The XML output returned contains a node per audioObject, with both the corresponding marker's position and any comments written in the associated comment box. The rating returned is normalised to a value between 0 and 1, making it independent of the pixel dimensions of different browser windows. |
234 can be defined to outline the audio samples to use, questions to be asked and any pre- or post-test questions or instructions. Having one document to modify allows for quick manipulation in a `human readable' form to create new tests, or adjust current ones, without needing to edit which web files. % 'which web files'? | |
235 | |
236 The results file is dynamically generated by the interface upon clicking the `Submit' button. This also executes checks, depending on the setup file, to ensure that all tracks have been played back, rated and commented on. The XML output returned contains a node per audioObject and contains both the corresponding marker's position and any comments written in the associated comment box. The rating returned is normalised to be a value between 0 and 100, normalising the pixel representation of different browser windows. | |
237 | 234 |
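The 0-1 normalisation of ratings mentioned above amounts to dividing the marker's pixel position by the width of the rating axis; `normaliseRating` is an assumed name for illustration, not the tool's actual function.

```javascript
// Convert a marker's pixel position on the rating axis to the 0-1 value
// stored in the results file, so ratings are comparable across browser
// windows of different sizes. Clamped defensively to [0, 1].
function normaliseRating(pixelPosition, axisWidthPixels) {
  return Math.min(1, Math.max(0, pixelPosition / axisWidthPixels));
}
```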
238 Pre- and post-test dialog boxes allow for comments or questions to be presented before or after the test, to convey listening test instructions, and gather information about the subject, listening environment, and overall experience of the test. These are automatically generated from the setup XML and allow nearly any form of question and comment to be included in a window on its own. Questions are stored and presented in the response section labelled `pretest' and `posttest', along with the question ID and its response, and can be made mandatory. | 235 Pre- and post-test dialog boxes allow for comments or questions to be presented before or after the test, to convey listening test instructions, and gather information about the subject, listening environment, and overall experience of the test. These are automatically generated from the setup XML and allow nearly any form of question and comment to be included in a window on its own. Questions are stored and presented in the response section labelled `pretest' and `posttest', along with the question ID and its response, and can be made mandatory. |
239 Further options in the setup file are: | 236 Further options in the setup file are: |
240 | 237 |
241 \begin{itemize} | 238 \begin{itemize} |
248 \item \textbf{Randomise fragment order}: Randomises the order and numbering of the markers and comment boxes corresponding with the fragments. This permutation is stored as well, to be able to interpret references to the numbers in the comments (such as `this is much [brighter] than 4'). | 245 \item \textbf{Randomise fragment order}: Randomises the order and numbering of the markers and comment boxes corresponding with the fragments. This permutation is stored as well, to be able to interpret references to the numbers in the comments (such as `this is much [brighter] than 4'). |
249 \item \textbf{Require playback}: Require that each fragment has been played at least once, if not in full. | 246 \item \textbf{Require playback}: Require that each fragment has been played at least once, if not in full. |
250 \item \textbf{Require full playback}: If `Require playback' is active, require that each fragment has been played in full. | 247 \item \textbf{Require full playback}: If `Require playback' is active, require that each fragment has been played in full. |
251 \item \textbf{Require moving}: Require that each marker is moved (dragged) at least once. | 248 \item \textbf{Require moving}: Require that each marker is moved (dragged) at least once. |
252 \item \textbf{Require comments}: This option requires the subject to enter a comment for each track. | 249 \item \textbf{Require comments}: This option requires the subject to enter a comment for each track. |
250 \item \textbf{Repeat test}: Number of times test should be repeated (none by default), to allow familiarisation with the content and experiment, and to investigate consistency of user and variability due to familiarity. | |
253 % explanation on how this is implemented? | 251 % explanation on how this is implemented? |
254 \end{itemize} | 252 \end{itemize} |
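The `Require playback', `Require moving' and `Require comments' options above imply a validation pass when `Submit' is clicked. A minimal sketch, with assumed field and option names (the actual tool's internals may differ), could look like:

```javascript
// Sketch of submit-time checks: each fragment record tracks whether it was
// played, its marker moved, and a comment written; `options` mirrors the
// setup flags listed above. Returns a list of human-readable problems;
// an empty list means the results may be submitted.
function validateSubmission(fragments, options) {
  const errors = [];
  fragments.forEach((f, i) => {
    if (options.requirePlayback && !f.played)
      errors.push(`fragment ${i} has not been played`);
    if (options.requireMoving && !f.moved)
      errors.push(`marker of fragment ${i} has not been moved`);
    if (options.requireComments && !f.comment)
      errors.push(`fragment ${i} has no comment`);
  });
  return errors;
}
```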
255 | 253 |
256 When one of these options is not included in the setup file, it assumes a default value. | 254 When one of these options is not included in the setup file, it assumes a default value. |
257 | 255 |
259 | 257 |
260 | 258 |
261 | 259 |
262 The results will also contain information collected by any defined pre/post questions. These are referenced against the setup XML by using the same ID, and the question itself is printed as well, so readable responses can be obtained. Future development will include session data, such as the browser the tool was used in, how long the test took and any other metrics. Currently, the results files are downloaded on the user's side of the browser as a .xml file to be returned manually. However, the end goal is to allow the XML files to be submitted over the web to a receiving server that stores them, allowing for automated collection. | 260 The results will also contain information collected by any defined pre/post questions. These are referenced against the setup XML by using the same ID, and the question itself is printed as well, so readable responses can be obtained. Future development will include session data, such as the browser the tool was used in, how long the test took and any other metrics. Currently, the results files are downloaded on the user's side of the browser as a .xml file to be returned manually. However, the end goal is to allow the XML files to be submitted over the web to a receiving server that stores them, allowing for automated collection. |
263 | 261 |
264 Furthermore, each user action (manipulation of any interface element, such as playback, moving a marker, or typing a comment) is logged along with the corresponding time code and stored or sent along with the results. % right? | 262 Furthermore, each user action (manipulation of any interface element, such as playback or moving a marker) is logged along with the corresponding time code and stored or sent along with the results. % right? |
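Per-action logging with time codes can be sketched as follows; `makeLogger` and the event record shape are illustrative assumptions rather than the tool's actual implementation. The clock is injectable so timing logic stays testable.

```javascript
// Sketch of a per-session action log: every interface event is stored with
// a time code (in seconds) relative to the start of the session.
// `now` defaults to wall-clock milliseconds but can be replaced for testing.
function makeLogger(now = () => Date.now()) {
  const t0 = now();
  const events = [];
  return {
    log(type, detail) {
      events.push({ type, detail, time: (now() - t0) / 1000 });
    },
    events,
  };
}
```

The resulting `events` array could then be serialised into the results XML alongside the ratings and comments.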
265 | 263 |
266 %Here is an example of the setup XML and the results XML: % perhaps best to refer to each XML after each section (setup <> results) | 264 %Here is an example of the setup XML and the results XML: % perhaps best to refer to each XML after each section (setup <> results) |
267 % Should we include an Example of the input and output XML structure?? --> Sure. | 265 % Should we include an Example of the input and output XML structure?? --> Sure. |
268 | 266 |
269 ADD XML STRUCTURE EXAMPLE | 267 ADD XML STRUCTURE EXAMPLE |
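The placeholder above calls for an XML structure example. A minimal hypothetical sketch of a setup file is given below; all element and attribute names are illustrative assumptions based on the features described in this section, not the tool's actual schema.

```xml
<!-- Hypothetical setup sketch; names are illustrative only -->
<setup randomiseFragmentOrder="true" requirePlayback="true"
       requireMoving="true" requireComments="true">
  <pretest>
    <question id="environment" mandatory="true">
      Describe your listening environment and playback system.
    </question>
  </pretest>
  <audioElements>
    <audioElement id="A" url="fragments/mix1.wav"/>
    <audioElement id="B" url="fragments/mix2.wav"/>
  </audioElements>
  <posttest>
    <question id="experience">Any comments on the test itself?</question>
  </posttest>
</setup>
```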
270 | |
271 \section{Applications}\label{sec:applications} %? | |
272 discussion of use of this toolbox (possibly based on a quick mock test using my research data, to be repeated with a large number of participants and more data later)\\ | |
273 | |
274 \subsection{Listening environment standardisation} | |
275 | |
276 In order to reduce the impact of a non-standardised listening environment and unobservable participants, a series of standard pre-test questions is put to every participant. First, every participant is asked to carry out the test, wherever possible, with a pair of quality headphones. |
277 | |
278 % I think the following should be different for every type of test, so I think it looks better to say any type of question (with text box, or radio buttons, or dropdown menu?) is possible to add. | |
279 %\begin{itemize} | |
280 %\item Name (text box) | |
281 %%\item I am happy for name to be used in an academic publication (check box) % never really necessary, as far as I'm concerned | |
282 %\item First language (text box) | |
283 %\item Location: country, city (text box) | |
284 %\item Playback system (ratio box: headphone or speaker) | |
285 %\item Make and Model of Playback System (text box) | |
286 %\item Listening environment (text box) | |
287 %%\item Please assess how good you believe your hearing to be, where 1 is deaf, 10 is professional critical listener (Dropdown box 1-10 ) % not sure | |
288 %\end{itemize} | |
289 | |
290 | |
291 A series of considerations has also been made towards ensuring a standardised listening environment: |
292 \begin{itemize} | |
293 \item Begin with standardised listening test - to confirm listening experience | |
294 \item Perform loudness equalisation on all tracks | |
295 \\** OR THIS SHOULD BE DONE BEFORE THE EXPERIMENT | |
296 \item Randomise order of tests | |
297 \item Randomise order of tracks in each test | |
298 \item Repeat each experiment a number of times | |
299 \\** TO REMOVE THE FAMILIARISATION WITH EXPERIMENT VARIABLE | |
300 \\** TO ENSURE CONSISTENCY OF USER | |
301 \item Track all user interactions with system | |
302 \end{itemize} | |
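Loudness equalisation, listed above, could in its simplest form match RMS levels across fragments before the test starts. This sketch uses plain sample arrays and a naive RMS measure under assumed names; a real deployment would more likely use a perceptual loudness model (e.g. ITU-R BS.1770) and apply the gains offline before the experiment.

```javascript
// Naive RMS-based loudness equalisation sketch: compute each fragment's RMS
// level, then return per-fragment gains that bring every fragment down to
// the level of the quietest one (attenuation only, to avoid clipping).
function rmsGains(buffers) {
  const rms = b => Math.sqrt(b.reduce((s, x) => s + x * x, 0) / b.length);
  const levels = buffers.map(rms);
  const target = Math.min(...levels);
  return levels.map(l => target / l);
}
```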
303 | |
304 | |
305 | |
306 [Regarding randomisation: keep the randomisation 'vector' so you can keep track of what subjects are referring to in comment fields] | |
307 | 268 |
308 | 269 |
309 \section{Conclusions and future work}\label{sec:conclusions} | 270 \section{Conclusions and future work}\label{sec:conclusions} |
310 | 271 |
311 In this paper we have presented an approach to creating a browser-based listening test environment that can be used for a variety of types of perceptual evaluation of audio. | 272 In this paper we have presented an approach to creating a browser-based listening test environment that can be used for a variety of types of perceptual evaluation of audio. |
323 %\item ABX test | 284 %\item ABX test |
324 %\item Method of adjustment tests | 285 %\item Method of adjustment tests |
325 %\end{itemize} | 286 %\end{itemize} |
326 | 287 |
327 | 288 |
328 The source code of this tool can be found on \url{code.soundsoftware.ac.uk/projects/webaudioevaluationtool}. % FIX | 289 The source code of this tool can be found on \url{code.soundsoftware.ac.uk/projects/webaudioevaluationtool}. The repository includes an issue tracker, where bug reports and feature requests can inform further development. |
329 | 290 |
330 | 291 |
331 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% | 292 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% |
332 %bibliography here | 293 %bibliography here |
333 \bibliography{smc2015template} | 294 \bibliography{smc2015template} |