h2. During The Research

During the course of a piece of research, data management is largely risk mitigation - it makes your research more robust and allows you to continue if something goes wrong.

The two main areas to consider are:
* [[backing up]] research data - in case you lose, or corrupt, the main copy of your data (a minimal backup sketch follows this list);
* [[documenting data]] - in case you need to return to it later.

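To illustrate the backing-up point, the sketch below snapshots a working data directory to a second location. It is a minimal illustration, not a recommended tool: all paths are hypothetical, and a physically separate disk or networked store is preferable to the same machine.

<pre><code class="python">
"""Minimal backup sketch: copy the working data directory to a
timestamped snapshot on a separate volume. Paths are hypothetical."""
import shutil
from datetime import datetime
from pathlib import Path

DATA_DIR = Path("~/research/myproject/data").expanduser()  # working copy
BACKUP_ROOT = Path("/mnt/backup/myproject")                # separate volume

def snapshot() -> Path:
    """Copy DATA_DIR to a new timestamped directory under BACKUP_ROOT."""
    dest = BACKUP_ROOT / datetime.now().strftime("data-%Y%m%d-%H%M%S")
    shutil.copytree(DATA_DIR, dest)
    return dest

if __name__ == "__main__":
    print(f"Backed up to {snapshot()}")
</code></pre>

Timestamped snapshots trade disk space for the ability to roll back to an earlier state; version control or incremental backups achieve the same more efficiently.
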
In addition to its immediate benefits during research, applying good data management practices makes it easier to manage your research data at the end of the project.

We have identified three basic types of research project, two quantitative (one based on new data, one based on a new algorithm) and one qualitative, and consider the data management techniques appropriate to these workflows. More complex research projects may require a combination of these techniques.

h3. Quantitative research - New Data

For this use case, the research workflow involves:
* creating a new dataset
* testing the outputs of existing algorithms on the dataset
* publishing the results

Creating the new dataset may involve:
* selecting or creating the underlying (audio) data (the actual audio may be included in the dataset, or the dataset may only reference it - e.g. for [[Copyright|copyright]] reasons)
* creating ground-truth annotations suited to the audio and the type of algorithm (e.g. chord sequences for chord estimation, onset times for onset detection) - a sketch of one such annotation file follows this list

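As an illustration of the ground-truth point, the sketch below writes onset annotations for one audio file as a small CSV. The one-time-per-line layout, the header and the file name are illustrative assumptions, not a required format.

<pre><code class="python">
"""Sketch: write ground-truth onset annotations for one audio file.
Layout, header and file name are illustrative, not a required format."""
import csv

onsets = [0.512, 1.043, 1.598, 2.130]  # onset times in seconds (example values)

with open("track01.onsets.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["onset_time_s"])  # header documents the units
    for t in onsets:
        writer.writerow([f"{t:.3f}"])
</code></pre>

Whatever layout is chosen, recording the units and conventions (here, seconds from the start of the file) in the dataset documentation is what makes the annotations reusable.
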
The content of the dataset will need to be [[documenting data|documented]].

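As one way of recording that documentation, the sketch below writes a minimal machine-readable description alongside the dataset. The field names and values are hypothetical, not a prescribed metadata standard.

<pre><code class="python">
"""Sketch: write a minimal machine-readable description of a dataset.
Field names and values are hypothetical, not a prescribed standard."""
import json

metadata = {
    "name": "Example Chord Dataset",
    "version": "1.0",
    "description": "Chord-sequence annotations for 50 pop recordings.",
    "audio": "referenced by catalogue number, not redistributed",
    "annotation_format": "one 'start_time end_time chord' line per chord",
    "licence": "to be decided before publication",
}

with open("dataset-metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
</code></pre>
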
Data involved includes:
* [[Managing Software As Data|software]] for the algorithms
* the new dataset
* identification of existing datasets against which results will be compared
* results of applying the algorithms to the dataset
* documentation of the testing methodology - including algorithm parameters

Note that *if* existing algorithms have published results using the same existing datasets and methodology, then the published results should be directly comparable with the results for the new dataset. In this case, most of the methodology is already documented and only the details specific to the new dataset need recording separately.

If the testing is scripted, the code used is sufficient documentation while the research is in progress - human-readable documentation is only required at publication. A sketch of such a script follows.

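The sketch below shows the idea: the script fixes the algorithm parameters in one place and stores them next to the results, so the script plus its output records the methodology. @run_algorithm@ is a hypothetical stand-in for the analysis code under test.

<pre><code class="python">
"""Sketch: a scripted test run that records its own methodology.
`run_algorithm` is a hypothetical stand-in for the code under test."""
import json
from datetime import datetime

def run_algorithm(audio_path, window_size, threshold):
    """Placeholder for the real analysis algorithm."""
    return []

params = {"window_size": 1024, "threshold": 0.3}  # the methodology, in one place
tracks = ["data/track01.wav", "data/track02.wav"]

results = {path: run_algorithm(path, **params) for path in tracks}

with open("results.json", "w") as f:
    json.dump({"run_at": datetime.now().isoformat(),
               "parameters": params,
               "results": results}, f, indent=2)
</code></pre>
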
h3. Quantitative research - New Algorithm

bq. A common use-case in C4DM research is to run a newly-developed analysis algorithm on a set of audio examples and evaluate the algorithm by comparing its output with that of a human annotator. Results are then compared with published results using the same input data to determine whether the newly proposed approach makes any improvement on the state of the art.

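As a deliberately simplified illustration of that evaluation step, the sketch below scores estimated onset times against a human annotator's reference onsets. The 50 ms tolerance and the example values are illustrative assumptions, not a prescribed C4DM methodology.

<pre><code class="python">
"""Sketch: score estimated onsets against human-annotated reference
onsets. Tolerance and example values are illustrative assumptions."""

def f_measure(estimated, reference, tolerance=0.05):
    """Match each reference onset to at most one estimate within
    `tolerance` seconds; return (precision, recall, F-measure)."""
    matched = 0
    unused = sorted(estimated)
    for ref in sorted(reference):
        for i, est in enumerate(unused):
            if abs(est - ref) <= tolerance:
                matched += 1
                del unused[i]  # each estimate may be matched only once
                break
    precision = matched / len(estimated) if estimated else 0.0
    recall = matched / len(reference) if reference else 0.0
    if precision + recall == 0.0:
        return precision, recall, 0.0
    return precision, recall, 2 * precision * recall / (precision + recall)

# Example: algorithm output vs. the annotator's ground truth (seconds)
print(f_measure([0.51, 1.02, 2.20], [0.50, 1.04, 1.60, 2.13]))
</code></pre>
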
Data involved includes:
* [[Managing Software As Data|software]] for the algorithm
* an annotated dataset against which the algorithm can be tested
* results of applying the new algorithm and competing algorithms to the dataset
* documentation of the testing methodology

Note that *if* other algorithms have published results using the same dataset and methodology, then the published results should be directly comparable with the results for the new algorithm. In this case, most of the methodology is already documented and only the details specific to the new algorithm (e.g. parameters) need recording separately.

As above, if the testing is scripted, the code used is sufficient documentation while the research is in progress - human-readable documentation is only required at publication.

h3. Qualitative research

An example would be using interviews with performers to evaluate a new instrument design.

The workflow is:
* gather data for the experiment (e.g. through interviews)
* analyse the data
* [[Publishing Research Data|publish the data]]

Data involved may include:
* the interface design
* captured audio from performances
* recorded interviews with performers (as audio or video)
* interview transcripts

The research may also involve:
* demographic details of participants
* identifiable participants ([[Data Protection]]) - one mitigation is sketched after this list
* release forms for people taking part

and *is likely* to involve:
* ethics-board approval

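Where participants are identifiable, one common data-protection measure is to pseudonymise them and keep the linking key separate from the working data. The sketch below illustrates the idea; the names, file paths and key handling are hypothetical, and any real scheme should follow your institution's data-protection guidance.

<pre><code class="python">
"""Sketch: pseudonymise participants and keep the linking key apart
from the working data. Names and paths are hypothetical."""
import csv

participants = ["Alice Example", "Bob Example"]  # identifiable names

# Assign opaque codes; transcripts, demographics and analysis files
# should then refer only to P01, P02, ...
key = {name: f"P{i + 1:02d}" for i, name in enumerate(participants)}

# Store the key on its own, access-controlled, away from the transcripts.
with open("participant-key.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "pseudonym"])
    writer.writerows(key.items())
</code></pre>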