During The Research » History » Version 3
Version 2 (Steve Welburn, 2012-08-22 03:42 PM) → Version 3/16 (Steve Welburn, 2012-09-26 08:32 AM)
h2. During The Research
During the course of a piece of research, data management is largely risk mitigation - it makes your research more robust and allows you to continue if something goes wrong.
The two main areas to consider are:
* [[backing up]] research data - in case you lose, or corrupt, the main copy of your data;
* [[documenting data]] - in case you need to return to it later.
In addition to the immediate benefits during research, applying good research data management practices makes it easier to manage your research data at the end of your research project.
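As a minimal illustration of the backing-up point above, a dated copy of a working data directory can be scripted so that a fresh backup never overwrites the last good one (the paths and function name here are hypothetical, not part of any recommended tooling):

```python
import shutil
from datetime import date
from pathlib import Path

def backup_dataset(src: Path, backup_root: Path) -> Path:
    """Copy src into a dated subdirectory of backup_root.

    A separate directory per day means that corrupting the working
    copy never silently destroys the most recent backup as well.
    """
    dest = backup_root / f"{src.name}-{date.today().isoformat()}"
    shutil.copytree(src, dest, dirs_exist_ok=True)
    return dest
```

This is only a sketch; in practice an off-machine copy (departmental server, institutional storage) matters more than the exact script used.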
We have identified three basic types of research projects, two quantitative (one based on new data, one based on a new algorithm) and one qualitative, and consider the data management techniques appropriate to those workflows. More complex research projects may require a combination of these techniques.
h3. Quantitative research - New Data
For this use case, the research workflow involves:
* creating a new dataset
* testing outputs of existing algorithms on the dataset
* publication of results
The new dataset may include:
* Selection or creation of underlying (audio) data (the actual audio may be in the dataset or the dataset may reference material - e.g. for [[Copyright|copyright]] reasons)
* creation of ground-truth annotations for the audio and the type of algorithm (e.g. chord sequences for chord estimation, onset times for onset detection)
The content of the dataset will need to be [[documenting data|documented]].
Data involved includes:
* [[Managing Software As Data|software]] for the algorithms
* the new dataset
* identification of existing datasets against which results will be compared
* results of applying the algorithms to the dataset
* documentation of the testing methodology - including algorithm parameters
Note that *if* existing algorithms have published results using the same existing datasets and methodology, then results should be directly comparable between the published results and the results for the new dataset. In this case, most of the methodology is already documented and only details specific to the new dataset need separately recording.
If the testing is scripted, then the code used is itself sufficient documentation during the research; human-readable documentation of the methodology is only required at publication.
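The point about scripted testing can be sketched as follows. The algorithm, dataset, and file names are hypothetical stand-ins; what matters is that the script records the exact parameters alongside the results, so the script plus its output file together document the methodology:

```python
import json

def run_experiment(algorithm, dataset, params, results_path):
    """Apply `algorithm` to every item in `dataset` and write the
    results together with the exact parameters used, so the run is
    reproducible and self-documenting."""
    results = {item_id: algorithm(item, **params)
               for item_id, item in dataset.items()}
    record = {"params": params, "results": results}
    with open(results_path, "w") as f:
        json.dump(record, f, indent=2)
    return record
```

Keeping the parameters in the same file as the results avoids the common failure mode of results that cannot be traced back to the settings that produced them.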
h3. Quantitative research - New Algorithm
bq. A common use-case in C4DM research is to run a newly-developed analysis algorithm on a set of audio examples and evaluate the algorithm by comparing its output with that of a human annotator. Results are then compared with published results using the same input data to determine whether the newly proposed approach makes any improvement on the state of the art.
Data involved includes:
* [[Managing Software As Data|software]] for the algorithm
* An annotated dataset against which the algorithm can be tested
* Results of applying the new algorithm and competing algorithms to the dataset
* Documentation of the testing methodology
Note that *if* other algorithms have published results using the same dataset and methodology, then results should be directly comparable between the published results and the results for the new algorithm. In this case, most of the methodology is already documented and only details specific to the new algorithm (e.g. parameters) need separately recording.
Again, if the testing is scripted, then the code used is itself sufficient documentation during the research; human-readable documentation is only required at publication.
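As an illustration of the evaluation step described in the quote above, comparing an algorithm's output against human annotations often reduces to matching events within a tolerance window. The sketch below scores estimated onset times against ground-truth onsets; the 50 ms tolerance is an arbitrary choice for the example, not a prescribed standard:

```python
def onset_f_measure(estimated, reference, tolerance=0.05):
    """Greedily match estimated onset times (in seconds) one-to-one
    to reference annotations within +/- tolerance, and return
    (precision, recall, f_measure)."""
    matched = 0
    unused = sorted(reference)
    for t in sorted(estimated):
        for i, r in enumerate(unused):
            if abs(t - r) <= tolerance:
                matched += 1
                del unused[i]  # each reference onset matches at most once
                break
    precision = matched / len(estimated) if estimated else 0.0
    recall = matched / len(reference) if reference else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f
```

Recording the tolerance used, along with any algorithm parameters, is exactly the methodology documentation referred to above.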
h3. Qualitative research
An example would be using interviews with performers to evaluate a new instrument design.
The workflow is:
* Gather data for the experiment (e.g. through interviews)
* Analyse data
* Publish data
Data involved may include:
* the interface design
* Captured audio from performances
* Recorded interviews with performers (possibly audio or video)
* Interview transcripts
The research may also involve:
* Demographic details of participants
* Identifiable participants (see [[Data Protection]])
* Release forms for people taking part
and *will* involve:
* ethics-board approval
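Where participants are identifiable, transcripts typically need pseudonymising before analysis results are shared. A minimal sketch (the name-to-pseudonym mapping is purely illustrative, and would be kept in a separate, access-controlled file rather than alongside the shared transcript):

```python
import re

def pseudonymise(transcript: str, name_map: dict) -> str:
    """Replace each real participant name with its pseudonym,
    matching whole words only so that substrings of other words
    are left untouched."""
    for real, alias in name_map.items():
        transcript = re.sub(rf"\b{re.escape(real)}\b", alias, transcript)
    return transcript
```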