h2. During The Research

During the course of a piece of research, data management is largely risk mitigation - it makes your research more robust and allows you to continue if something goes wrong.

The two main areas to consider are:
* [[backing up]] research data - in case you lose, or corrupt, the main copy of your data (a minimal sketch follows this list);
* [[documenting data]] - in case you need to return to it later.
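
As an illustration of the first point, even a dated copy of the data directory is better than nothing. A minimal Python sketch, assuming hypothetical source and backup paths (for large or frequently-changing datasets a tool such as rsync, or a version-control system, is preferable):

<pre><code class="python">
import shutil
from datetime import date

# Hypothetical locations - substitute your own data and backup paths.
SOURCE = "/home/me/research/data"
BACKUP = "/mnt/backup/data-" + date.today().isoformat()

# Copy the whole data directory into a dated backup directory.
shutil.copytree(SOURCE, BACKUP)
print("Backed up " + SOURCE + " to " + BACKUP)
</code></pre>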

In addition to the immediate benefits during research, applying good data management practices makes it easier to manage your research data at the end of the project.

We have identified three basic types of research projects, two quantitative (one based on new data, one based on a new algorithm) and one qualitative, and consider the data management techniques appropriate to those workflows. More complex research projects might require a combination of these techniques.

h3. Quantitative research - New Data

For this use case, the research workflow involves:
* creating a new dataset
* testing outputs of existing algorithms on the dataset
* publication of results

Creating the new dataset might involve:
* selection or creation of the underlying (audio) data (the actual audio might be included in the dataset, or the dataset might only reference it - e.g. for [[Copyright|copyright]] reasons)
* creation of ground-truth annotations suited to the audio and the type of algorithm (e.g. chord sequences for chord estimation, onset times for onset detection) - a small example follows this list
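
Ground-truth annotations are often kept in simple plain-text formats. For instance, an onset annotation file might hold one onset time, in seconds, per line; a sketch of reading such a file (the filename is hypothetical):

<pre><code class="python">
# Read a plain-text onset annotation file: one onset time (seconds) per line.
# "track01.onsets" is a hypothetical filename.
with open("track01.onsets") as f:
    onsets = [float(line) for line in f if line.strip()]

print("%d annotated onsets, first at %.3fs" % (len(onsets), onsets[0]))
</code></pre>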

Although the research is producing a single new dataset, the full set of research data involved includes:
* [[Managing Software As Data|software]] for the algorithms
* the new dataset
* identification of existing datasets against which results will be compared
* results of applying the algorithms to the dataset
* documentation of the testing methodology - e.g. method and algorithm parameters, including any default parameter values (a sketch of recording these follows this list).
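
One lightweight way to capture the methodology is to save the full parameter set - defaults included - alongside the results it produced, e.g. as JSON. A sketch, with hypothetical algorithm and parameter names:

<pre><code class="python">
import json

# Hypothetical parameters, including values left at their defaults.
params = {
    "algorithm": "onset_detector",
    "version": "1.2",
    "window_size": 1024,  # default
    "hop_size": 512,      # default
    "threshold": 0.3,     # tuned for this experiment
}

# Store the parameters next to the results they produced.
with open("run-params.json", "w") as f:
    json.dump(params, f, indent=2)
</code></pre>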

All of these should be [[documenting data|documented]] and [[backing up|backed up]].

Note that *if* existing algorithms have published results using the same existing datasets and methodology, then results should be directly comparable between the published results and the results for the new dataset. In this case, most of the methodology is already documented and only details specific to the new dataset need to be recorded separately.

If the testing is scripted, then the code used would be sufficient documentation during the research, with readable documentation only required at publication; a minimal sketch of such a script follows.
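
The script records what was run, on which files, and with which parameters, so most of the methodology is captured as a side-effect. A sketch, in which @run_algorithm@ is a hypothetical stand-in for the algorithm under test and the filenames are invented:

<pre><code class="python">
import json

def run_algorithm(audio_file, threshold=0.3):
    """Hypothetical stand-in for the algorithm under test."""
    return {"onsets_detected": 42}

audio_files = ["track01.wav", "track02.wav"]  # the evaluation dataset
params = {"threshold": 0.3}

# Run the algorithm over the dataset, keeping results per file.
results = dict((f, run_algorithm(f, **params)) for f in audio_files)

# The log ties the dataset, the parameters and the results together.
with open("run-log.json", "w") as f:
    json.dump({"params": params, "results": results}, f, indent=2)
</code></pre>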

h3. Quantitative research - New Algorithm

bq. A common use-case in C4DM research is to run a newly-developed analysis algorithm on a set of audio examples and evaluate the algorithm by comparing its output with that of a human annotator. Results are then compared with published results using the same input data to determine whether the newly proposed approach makes any improvement on the state of the art.

Data involved includes:
* [[Managing Software As Data|software]] for the algorithm
* an annotated dataset against which the algorithm can be tested
* results of applying the new algorithm and competing algorithms to the dataset
* documentation of the testing methodology

Note that *if* other algorithms have published results using the same dataset and methodology, then results should be directly comparable between the published results and the results for the new algorithm. In this case, most of the methodology is already documented and only details specific to the new algorithm (e.g. parameters) need to be recorded separately.

As with the new-data case, if the testing is scripted then the code used would be sufficient documentation during the research, with readable documentation only required at publication.

h3. Qualitative research

An example would be using interviews with performers to evaluate a new instrument design.

The workflow is:
* Gather data for the experiment (e.g. through interviews)
* Analyse data
* [[Publishing Research Data|Publish data]]

Data involved might include:
* The interface design
* Captured audio from performances
* Recorded interviews with performers (possibly audio or video)
* Interview transcripts

Survey participants and interviewees retain [[Copyright|copyright]] over their contributions unless it is specifically assigned to you. To have the freedom to publish the content, a suitable rights waiver, transfer of copyright, clearance form or licence agreement should be signed - or at least agreed on tape. The people (or organisation) recording the event will likewise hold copyright in their materials (e.g. video, photos, sound recordings) unless it is assigned, waived or licensed. Most of this can be dealt with fairly informally for most research, but if you want to publish the data then a more formal agreement is sensible. Rather than transferring copyright, an agreement to publish the (possibly edited) materials under a particular licence might be appropriate.

Creators of materials (e.g. interviewees) always retain moral rights over their words: the right to be named as the author of their content, and the right to object to derogatory treatment of their material. Note that this means that in order to publish anonymised interviews, you should have an agreement that allows this.

If people are named in interviews (even if they're not the interviewee) then the [[Data Protection]] Act might be relevant.

The research might also involve:
* Demographic details of participants
* Identifiable participants ([[Data Protection]])
* Release forms for people taking part

and *is likely* to involve:
* [[Ethical Concerns|ethics-board approval]]