

Sound Data Management Training

We consider three stages of a research project, and the appropriate research data management considerations for each of those stages: before the research (planning), during the research, and at the end of the research.

In addition, we consider the responsibilities of a Principal Investigator regarding data management.

Background material is also available on "why manage research data ?", and there is an alternate view of the content based on individual research data management skills.

Before The Research - Planning Research Data Management

A data management plan is an opportunity to think about the resources that will be required during the lifetime of the research project and to make sure that any necessary resources will be available for the project. In addition, it is likely that some form of data management plan will be required as part of a grant proposal.

The main questions the plan will cover are:
  • What type of storage do you require ?
    Do you need a lot of local disk space to store copies of standard datasets ? Will you be creating data which should be deposited in a long-term archive, or published online ? How will you back up your data ?
  • How much storage do you require ?
    Does it fit within the standard allocation for backed-up storage ?
  • How long will you require the storage for ?
    Is data being archived or published ? Does your funder require data publication ?
  • How will this storage be provided ?
    Appropriate answers will relate to:
Additional questions may include:
  • What is the appropriate license under which to publish data ?
  • Are there any ethical concerns relating to data management e.g. identifiable participants ?
  • Does your research data management plan comply with relevant legislation ?
    e.g. Data Protection, Intellectual Property and Freedom of Information

A minimal data management plan for a project using standard C4DM/QMUL facilities could say:

During the project, data will be created locally on researchers' machines and will be backed up to the QMUL network. Software will be managed through the code.soundsoftware.ac.uk site, which provides a Mercurial version control system and issue tracking. At the end of the project, software will be published through soundsoftware and data will be published on the C4DM Research Data Repository.

For larger proposals, a more complete plan may be required. The Digital Curation Centre have an online tool (DMP Online) for creating data management plans which asks (many) questions related to RCUK principles and builds a long-form plan to match research council requirements.

It is important to review the data management plan during the project as it is likely that actual requirements will differ from initial estimates. Reviewing the data management plan against actual data use will allow you to assess whether additional resources are required before resourcing becomes a critical issue.

In order to create an appropriate data management plan, it is necessary to consider data management requirements during and after the project.

The Digital Curation Centre (DCC) provide DMP Online, a tool for creating data management plans. The tool can provide a data management questionnaire based on institutional and funder templates and produce a data management plan from the responses. Documents are available describing how to use DMP Online.

During The Research

During the course of a piece of research, data management is largely risk mitigation - it makes your research more robust and allows you to continue if something goes wrong.

The two main areas to consider are:
  • backing up research data - in case you lose, or corrupt, the main copy of your data;
  • documenting data - in case you need to return to it later.

In addition to the immediate benefits during research, applying good research data management practices makes it easier to manage your research data at the end of your research project.

We have identified three basic types of research projects, two quantitative (one based on new data, one based on a new algorithm) and one qualitative, and consider the data management techniques appropriate to those workflows. More complex research projects might require a combination of these techniques.

Quantitative research - New Data

For this use case, the research workflow involves:
  • creating a new dataset
  • testing outputs of existing algorithms on the dataset
  • publication of results
The new dataset might include:
  • selection or creation of underlying (audio) data (the actual audio might be included in the dataset, or the dataset might only reference external material, e.g. for copyright reasons)
  • creation of ground-truth annotations for the audio and the type of algorithm (e.g. chord sequences for chord estimation, onset times for onset detection)
Although the research is producing a single new dataset, the full set of research data involved includes:
  • software for the algorithms
  • the new dataset
  • identification of existing datasets against which results will be compared
  • results of applying the algorithms to the dataset
  • documentation of the testing methodology - e.g. method and algorithm parameters (including any default parameter values).

All of these should be documented and backed up.

Note that if existing algorithms have published results using the same existing datasets and methodology, then results should be directly comparable between the published results and the results for the new dataset. In this case, most of the methodology is already documented and only details specific to the new dataset need to be recorded separately.

If the testing is scripted, then the code used would be sufficient documentation during the research - readable documentation only being required at publication.

Quantitative research - New Algorithm

A common use-case in C4DM research is to run a newly-developed analysis algorithm on a set of audio examples and evaluate the algorithm by comparing its output with that of a human annotator. Results are then compared with published results using the same input data to determine whether the newly proposed approach makes any improvement on the state of the art.

Data involved includes:
  • software for the algorithm
  • an annotated dataset against which the algorithm can be tested
  • results of applying the new algorithm and competing algorithms to the dataset
  • documentation of the testing methodology

Note that if other algorithms have published results using the same dataset and methodology, then results should be directly comparable between the published results and the results for the new algorithm. In this case, most of the methodology is already documented and only details specific to the new algorithm (e.g. parameters) need to be recorded separately.

Also, if the testing is scripted, then the code used would be sufficient documentation during the research - readable documentation only being required at publication.

Qualitative research

An example would be using interviews with performers to evaluate a new instrument design.

The workflow is:
  • Gather data for the experiment (e.g. through interviews)
  • Analyse data
  • Publish data
Data involved might include:
  • the interface design
  • Captured audio from performances
  • Recorded interviews with performers (possibly audio or video)
  • Interview transcripts

Survey participants and interviewees retain copyright over their contributions unless those rights are specifically assigned to you! In order to have the freedom to publish the content, a suitable rights waiver / transfer of copyright / clearance form / licence agreement should be signed, or the agreement recorded on tape. Also, the people (or organisation) recording the event will hold copyright on their materials (e.g. video / photos / sound recordings) unless those rights are assigned, waived or licensed. Most of this can be dealt with fairly informally for most research, but if you want to publish data then a more formal agreement is sensible. Rather than transferring copyright, an agreement to publish the (possibly edited) materials under a particular licence might be appropriate.

Creators of materials (e.g. interviewees) always retain moral rights to their words: they have the right to be named as the author of their content; and they maintain the right to object to derogatory treatment of their material. Note that this means that in order to publish anonymised interviews, you should have an agreement that allows this.

If people are named in interviews (even if they're not the interviewee) then the Data Protection Act might be relevant.

The research might also involve:
  • Demographic details of participants
  • Identifiable participants (Data Protection)
  • Release forms for people taking part
and is likely to involve:

At The End Of The Research

Whether you have finished a research project or simply completed an identifiable unit of research (e.g. published a paper based on your research), you should look at archiving and publishing your research data.

Publication of the results of your research will require:
  • Summarising the results
  • Publishing a relevant sub-set of research data / summarised data to support your paper
  • Publishing the paper

Note that the EPSRC data management principles require sources of data to be referenced.

Research Management

The data management concerns of a PI will largely revolve around planning and appraisal of data management for research projects: to make sure that they conform with institutional policy and funder requirements; and to ensure that the data management needs of the research project are met.

A data management plan (e.g. for use in a grant proposal) will show that you have considered:
  • the costs of preserving your data;
  • funder requirements for data preservation and publication;
  • institutional data management policy;
  • and ethical issues surrounding data management (e.g. data relating to human participants).
Specific areas to examine may include:

After the project is completed, an appraisal of how the data was managed should be carried out as part of the project's "lessons learned".

Data management training should provide an overview of all the above, and keep PIs informed of any changes in the above that affect data management requirements.

Data Management Skills

Archiving research data
Backing up
Documenting data
Managing software as data
Licensing research data
Publishing research data

Data Management Background

Research Council requirements
Relevant legislation

Data Management Motivation

Why manage research data ?

Available Resources

Resources available for C4DM researchers

Backing up

Why back up your data ?

How to back up data

The core principle is that backup copies of data should regularly be stored in a different location to the main copy.

Suitable locations for backups are:
  • A firesafe, preferably in a different building
  • A network copy
    • A network drive e.g. provided by the institution
    • Internet storage (in the cloud)
    • A data repository - this could be a public thematic / institutional repository for publishing completed research datasets, or an internal repository for archiving datasets during research
  • A portable device / portable media which you keep somewhere other than under your desk / with your laptop.

Backing up to external devices means that you need physical access to the device; network drives and "internal" backups are usually more readily available, e.g. you can back up every time you're in the office / lab or at home.

The best backup is the one you actually do. How often you need to back up depends very much on how much new data you've generated and how difficult it would be to recreate. Primary data (e.g. digital audio recordings of interviews) should be backed up as soon as possible, as it may be very time-consuming to recreate. If an algorithm runs for days generating data files, you may want to set it up to create backup copies as it proceeds rather than backing up only at the end of the processing. If you've changed some source code and can regenerate the data in an afternoon, you may not need to back up the data - but the source code should be safely stored in a version control system somewhere. If you feel too busy to back up your data, it may be a hint that you should make sure there's a copy somewhere safe!
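
As an illustration of backing up a long-running job as it proceeds, here is a minimal Python sketch; the directory paths and the per-run output files are hypothetical, and it assumes a backup location (e.g. a mounted network drive) is already available.

    import shutil
    from pathlib import Path

    RESULTS = Path("results")               # hypothetical local output directory
    BACKUP = Path("/mnt/backup/results")    # hypothetical mounted backup location

    RESULTS.mkdir(exist_ok=True)
    BACKUP.mkdir(parents=True, exist_ok=True)

    def save_and_back_up(name, text):
        """Write a result file locally, then immediately copy it to the backup location."""
        local = RESULTS / name
        local.write_text(text)
        shutil.copy2(local, BACKUP / name)  # copy2 preserves timestamps

    for run in range(100):
        result = f"run {run}: ..."          # placeholder for the real computation
        save_and_back_up(f"run_{run:03d}.txt", result)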

Remember that if you delete your local copy of the data then the primary copy will be the original backup... is that copy backed up anywhere ? If a network drive is used, it may be backed up to tape - but this should be checked with your IT provider.

Details of resources available for C4DM researchers are available here.

Can't I just put it in the cloud ?

You can, but the service agreement with the provider may give them a lot of rights... review the service agreement and decide whether you are happy with it!

Looking at service agreements in November 2012, we found that Google's terms let them use your data in any way which will improve their services - including publishing your data and creating derivative works. This is partly a side-effect of Google switching to a single set of terms for all their services. For Microsoft SkyDrive, the Windows Live services agreement is pretty similar.

Apple's iCloud is better as they restrict publication rights to data which you want to make public / share. Dropbox is relatively good - probably because they just provide storage and aren't mining it to use in all their other services!

Even so, there are issues. Data stored in the cloud is still stored somewhere... you just don't have control over where that location is. Your data may be stored in a country which gives its government the right to access data. Also, the firm that stores your data may still be required to comply with the laws of its home country even when the data is stored elsewhere. It is, however, unlikely that digital audio research data will be sensitive enough for this to be an issue.

A Forbes article on Can European Firms Legally Use US Clouds To Store Data stated that:

Both Amazon Web Services and Microsoft have recently acknowledged that they would comply with U.S. government requests to release data stored in their European clouds, even though those clouds are located outside of direct U.S. jurisdiction and would conflict with European laws.

If you are worried about what rights a service provider may have to your data in their cloud, then consider encrypting it - e.g. using an encrypted .dmg file on a Mac, or using TrueCrypt for a cross-platform solution. These create an encrypted "disc" in a file which you can mount and treat like a real disc - but all the content is encrypted. Note that changing data on an encrypted disc may change the entire contents of the disc image, requiring the whole disc to be re-synced to the cloud storage. Alternatively, BoxCryptor or encFS (also available for Windows) will encrypt individual files separately, allowing synchronisation to operate more effectively.

SpiderOak provide "zero knowledge" privacy in which all data is encrypted locally before being submitted to the cloud, and SpiderOak do not have a copy of your decryption key - i.e. they can't actually examine your data.

See JISC/DCC document "Curation In The Cloud" - http://tinyurl.com/8nogtmv

Surely there must be a quicker way...

Figuring out which files to copy can be very tedious, and usually leads to just backing up large chunks of data together. However, utilities can be used to copy just those files that have been updated - or even just update the parts of files that have changed.

The main command-line utility for this on UNIX-like systems (Mac OS X, Linux) is rsync. From the rsync man page:

Rsync is a fast and extraordinarily versatile file copying tool. It can copy locally, to/from another host over any remote shell, or to/from a remote rsync daemon. It offers a large number of options that control every aspect of its behavior and permit very flexible specification of the set of files to be copied. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination. Rsync is widely used for backups and mirroring and as an improved copy command for everyday use.

Rsync finds files that need to be transferred using a "quick check" algorithm (by default) that looks for files that have changed in size or in last-modified time. Any changes in the other preserved attributes (as requested by options) are made on the destination file directly when the quick check indicates that the file's data does not need to be updated.

An rsync tool is also available for Windows, and DeltaCopy provides a GUI over rsync.
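
As an illustration (the paths are hypothetical), a backup of a data directory to a remote machine could be scripted from Python by calling rsync:

    import subprocess

    SRC = "/home/me/research-data/"        # hypothetical local data directory (trailing slash copies its contents)
    DEST = "backup-server:research-data/"  # hypothetical remote backup location reachable over SSH

    # -a recurses and preserves timestamps/permissions; -v lists the files transferred.
    subprocess.run(["rsync", "-av", SRC, DEST], check=True)

Because rsync only transfers files that have changed, a script like this can be run frequently (e.g. from a scheduled job) without copying the whole dataset each time.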

In addition, there are modern continuous backup programs (e.g. Apple's "Time Machine") which will synchronise data to a backup device and allow you to revert to any point in time. However, these solutions may not be appropriate if your data is large.

Version control systems for source code are optimised for storing plain text content and are not an appropriate way to store data unless the data is text (e.g. CSV files).

Archiving research data

For archival purposes data needs to be stored in a location which provides facilities for long-term preservation of data. As well as standard data management concerns (e.g. backup, documentation) the media and the file formats will need to be appropriate for long-term use.

Whereas work-in-progress data is expected to change regularly during the research process, archived data will change rarely, if at all. Archived data can therefore be stored on write-once media (e.g. CD-R).

In addition, it is not necessary to archive all intermediate results - when archived data is reused, taking a few days to regenerate results is reasonable. However, all necessary documentation, software and data should be archived to allow results to be recreated. Existing archived datasets do not need to be archived again. However, if the archiving system supports deduplication then storing multiple copies of the same content will require minimal additional storage.

Once archived, the archive copy should not be modified directly and data access should only be required to create a new work-in-progress copy of the data to work from. Access to archived data will therefore be sporadic. Hence, it is possible to store archived data "off-line" only to be accessed when required.

It is important that archiving data is performed in an appropriate manner to allow future use of the data. This will require the use of appropriate formats for the data and storage on suitable media.

If the original content is not in an open format, then providing copies in multiple formats may be appropriate - e.g. an original Microsoft Word document, a PDF version to show how the document should look and the plain-text content so the document can be recreated.

Within C4DM, there are currently few resources available to support this. The best available option is the research group network folder as this is backed up to tape.

Archiving Data

BBC Domesday Project

1986 Project to do a modern-day Domesday book (early crowd-sourcing)
  • Used “BBC Master” computers with data on laserdisc
  • Collected 147,819 pages of text and 23,225 photos
  • Media expiring and obsolete technology put the data at risk!
Domesday Reloaded (2011) was created to allow long-term access to the data. Lessons learned:
  • Don't use obscure formats!
  • Don't use obscure media!
  • Don't rely on technology being available!
  • Do keep original source material!

Google images for BBC Domesday

Media

Archive copies of data may be held on the same types of media as used during research. Additionally, Write-Once media (e.g. CD-R, DVD+/-R, BDR) may be appropriate.

Removable drives (e.g. USB flash drives, firewire HDD) may be used, but there is a risk of hardware failure with these devices - they are not "just" data storage.

Removable media (e.g. CD-R, tapes) do not have the risk of hardware failure but the media themselves may be damaged or become unusable - the estimated lifetime of an optical disc is 2-100 years. Whether a specific disc will last 2 years or 100 is not something that can easily be judged - although buying high quality media rather than cheap packs of 100 discs may help.

As with all technology, there is a risk of obsolescence:
  • devices to read removable media may no longer be commonplace (e.g. floppy disc drives, ZIP drives)
  • formats used for removable media may no longer be supported (e.g. various formats for DVD-RAM discs)
  • interfaces used for removable drives may no longer be commonplace (e.g. parallel or SCSI ports, PATA/IDE disc drives)

All media decay / become obsolete over time. It is therefore necessary to refresh the media by copying the data to new media at intervals. Doing this regularly reduces the risk of discovering that your archived data is inaccessible.

If data is stored on a RAID (Redundant Array of Independent Disks), then it is possible to replace an individual disk in the array and rebuild its content, thus refreshing the media.

Archived data is still at risk of data loss, and should be backed up somewhere else!

Archiving data is best supported through provision of a data archiving service (e.g. through a library). The burden of maintaining archival standards of storage for the media is then taken on by the service provider. This may appear to the user as a network drive, or as an archive system to which data packages may be submitted. Such a system may be part of a data management system which also supports publication of data.

File Formats

File formats also become obsolete. Although the original data should be archived, it is also recommended that copies of the data are stored in more accessible formats - e.g. storing PDF output from LaTeX source, TIFF versions of images, or FLAC copies of audio files. The more specific the source format, the stronger the requirement for readable copies. Closed formats (e.g. Microsoft Word documents) are particularly vulnerable to obsolescence - e.g. if you switch from MS Word to Open Office, even if a document can be opened you may find that the formatting no longer works without purchasing MS Office.

  • LaTeX source - will all the required packages be available if you want to rebuild the document ?
  • Images - will the format be available ? is it a closed format (e.g. GIF) ?

If data is stored in lossy formats (e.g. MP3) then future decoders for that format may not produce precisely the same output (audio) as the decoder used in the initial experiments. A copy of the data should always include a lossless version of the data (e.g. PCM or FLAC for audio). Preferably, research should take place on lossless data extracted from the lossy files.

In the future, current audio formats may become obsolete; we therefore recommend that, when archiving audio files, copies of the data are stored in an open lossless format as well as in the original format. We would currently recommend using FLAC to compress audio files - FLAC files use less space than the raw data and allow metadata tags to be included (e.g. artist and track name). If the use of compressed files is not appropriate, we would recommend uncompressed PCM audio in WAV format.
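
As a minimal sketch of creating such a copy (assuming the Python soundfile library, which is not mentioned above, and a hypothetical input file; any audio conversion tool would do equally well):

    import soundfile as sf

    # Hypothetical file names; soundfile infers the output format from the extension.
    audio, sample_rate = sf.read("recording.wav")   # decode the original PCM audio
    sf.write("recording.flac", audio, sample_rate)  # write a lossless FLAC copy alongside the original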

Summary

Archiving data requires:
  • refreshing the media at suitable intervals by moving data onto new media
  • creating copies of the data in new formats to allow their use (e.g. converting data in closed formats to open formats, updating data to new versions of file formats).

Documenting data

What should you document ?

You should document the data so that people can understand it - what units the data is in, how the data was created, why the data was created and possible uses for the data.

As well as summary documentation for the entire dataset, individual data files should have their own documentation.

How to document data

  • Use a suitable directory structure. Documentation can then give a summary of all the files within a folder.
  • Use meaningful filenames
    • The more meaningful the better
    • However, they should be succinct
    • It may be necessary to refer to an explanation of the filenames to identify their content
    • Files may be moved from their original directory structure so filenames should be sufficient to identify a particular file
  • If documentation is required to understand file contents, copy the documentation when copying the files
  • Use standard file formats where possible - and preferably open formats so that files can be reused
  • Create README files with textual explanations of file content
  • Use the capabilities of file formats for self-documentation
    • If you have text files of data, consider including comment lines for explanations
    • Fill in author, title, date and comments for file formats that support them (e.g. PDF, Word .doc etc.)
    • Consider including <!-- --> comments in XML data
  • If data is created algorithmically / by code
    • Consider automatically writing out textual descriptions when the data is created
    • Document the values of all the parameters used to create the data (see the sketch after this list)
    • Remember to document the actual values of parameters for which default values were accepted - the default values might change with different versions of the code
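
As a sketch of the last two points, the code that creates the data can write out every parameter value, defaults included, alongside its results. The analysis function, parameter names and output file name below are hypothetical and purely illustrative.

    import json
    from datetime import datetime, timezone

    def run_analysis(audio_path, window_size=1024, hop_size=512, threshold=0.3):
        """Hypothetical analysis; the point is that every parameter value is recorded, defaults included."""
        params = {
            "audio_path": audio_path,
            "window_size": window_size,
            "hop_size": hop_size,
            "threshold": threshold,
            "created": datetime.now(timezone.utc).isoformat(),
        }
        results = []  # ... the real computation would go here ...
        with open("results.json", "w") as f:
            json.dump({"parameters": params, "results": results}, f, indent=2)

    run_analysis("example.wav")  # default parameter values are captured explicitly in the output file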

Managing Software As Data

For existing software used in research, the appropriate citation, version and source should be documented. This may need to include versions of any libraries required by the software as changes to the libraries might affect the outputs.

For new software, as for data, the main management issues are backing up, documenting and (at the end of the project) publishing the code.

However, whereas data changes slowly / infrequently, software is subject to ongoing changes during a project. Source code for software usually consists of text files and should therefore be stored in a suitable version control system (e.g. Mercurial, Subversion, git). Binary releases of software may also be created as downloads for a project.

Additionally, software documentation has broader requirements - including both documentation to make the code maintainable (e.g. comments in the code, documenting APIs, Javadoc style documentation) and user documentation to explain how to install and use the software.

The Sound Software project provides software project management facilities for digital music and audio research - including Mercurial version control, downloads, documentation, issue lists and wikis - through its code repository (code.soundsoftware.ac.uk).

Other possible repositories for source code include:

The Sound Software project has information on choosing a version control system and provides a cross-platform, easy-to-use, graphical client for use with Mercurial.

Publishing research data

Research data publication allows your data to be reused by other researchers e.g. to validate your research or to carry out follow-on research. To that end, a suitable data publication host will allow your data to be discovered (e.g. by publishing metadata) and will be publicly accessible (i.e. on the internet).

Research data can be published on the internet through:
  • project web sites
  • research group web-sites
  • generic web archives (e.g. archive.org)
  • research data sites (e.g. figshare)
  • more general open access research hosts (e.g. f1000 Research)
  • thematic repositories dedicated to a specific discipline / subject area - sadly there is no sign of an appropriate repository for digital music and audio research
  • institutional repositories dedicated to research from a specific organisation (e.g. QMUL have a repository through which Green open access copies of papers by QM research staff can be published).
  • supplementary materials attached to journal articles

An appropriate license should be granted to allow other researchers to use your research data.

Within the Centre for Digital Music, we now have a research data repository for publishing research data outputs from the group. Publishing data through the C4DM repository gives a single point for publishing C4DM data on the internet without relying on (possibly ephemeral) project-specific web-sites. Other repositories that may be of interest to researchers are listed here.

If the web-site through which the data is published is also to be the long-term archive for your data, then you should check that it meets the criteria for an archival storage system. Note that although data will be written to the host irregularly, published data is expected to be accessed more frequently than archived data, making offline storage unsuitable.

If an external publisher is used for your research data, you should check the Terms and Conditions e.g. to see whether copyright on the data is transferred to the publisher and to check for how long they will publish your data.

If data is published through a publisher or repository, then it may also be held on institutional storage as long as the publisher's license is followed, which might e.g. require that there is a link back to the publisher from the institutional repository. Publishing under a Creative Commons license makes this easy.

If data is available in multiple places, different versions of the data might arise (e.g. changes between dates uploaded, data corruption). You should therefore make it easy to identify which specific version of the data is correct by publishing a digital fingerprint (e.g. an MD5 hash). MD5 fingerprints can be generated in Windows using MD5summer, in Linux with the GNU md5sum utility, and on Mac OS X using md5 or openssl.
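
The same fingerprint can also be computed from a script; a minimal Python sketch (the dataset file name is hypothetical):

    import hashlib

    def md5_fingerprint(path, chunk_size=1 << 20):
        """Compute the MD5 hex digest of a file without loading it all into memory."""
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    print(md5_fingerprint("dataset-v1.zip"))  # hypothetical dataset archive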

Persistent IDs for data

In order to ensure ongoing access to your data, you should look to acquire a persistent ID for your dataset. However, persistence is a continuum, with some IDs more persistent than others. DOIs and handles are designed to be persistent in the long term, allowing a unique identifier to be redirected to the current location of your dataset - if the dataset moves, the DOI/handle can be pointed at the new location. Repositories and research data sites may provide DOIs for data submitted to them. Institutional URLs may be persistent if the institution makes a policy decision to make them so. Other URLs may change when web-sites are revamped, making the published URL for your data return a "404 Not Found" message.

Persistent IDs are useful for referencing datasets, and are particularly handy if they are short. Long or ugly DOIs can be shortened using the ShortDOI service.

And more repositories

Repositories

The Digital Curation Centre have a (very short) list of repositories.

Repositories using DSpace can be registered on the DSpace web-site, for inclusion in the "Who's using DSpace?" list.

Within the University of London, the School of Advanced Study has a repository of humanities-related items.

University of the Arts London have an online repository

EDINA provides a national data centre:

EDINA is a UK national academic data centre, designated by JISC on behalf of UK funding bodies to support the activity of universities, colleges and research institutes in the UK, by delivering access to a range of online data services through a UK academic infrastructure, as well as supporting knowledge exchange and ICT capacity building, nationally and internationally.

Services hosted at EDINA include:

Pre-print versions of articles (e-prints) can be published through http://arxiv.org/ and the related Computing Research Repository

Other repositories that may be of interest include:

NB: This list has been accumulated from various sites including:

Training the Trainers

I-Tech Training Toolkit

Performance Juxtaposition web-site:

ADDIE

http://www.learning-theories.com/addie-model.html

Kirkpatrick

Bloom

http://www.nwlink.com/~donclark/hrd/bloom.html

Cognitive, Affective and Psychomotor learning

Why do Data Management ?

Evidence Promoting Good Data Management

Data Reuse

Do you reuse other people's data ? Can they reuse yours ?

Researcher Development Framework

SCONUL Information Literacy 7 Pillars Diagrams

Licensing

Whose data is it anyway ?

QMUL HR Contract Terms and Conditions:

16. Patents & Copyright
a) Any discovery, design, computer software program or other work or invention which might reasonably be exploitable (‘Invention’) which is discovered, invented or created by the Employee (either alone or with any other person) either directly or indirectly in the course of their normal duties or in the course of duties specifically assigned to him in the course of his employment shall promptly be disclosed in writing to the College. All intellectual property rights in such Invention shall be the absolute property of the College and the College shall have the right to apply for, prosecute and obtain patent or other similar protection in its own name. Intellectual property rights include all patent rights, copyright and rights in respect of confidential information and know-how. The ownership of copyright in research papers, review articles and books will normally be waived by the College in favour of the author unless subject to any conditions placed on the works by the funder.

The important bit being...

Any ... work ... which might reasonably be exploitable ... which is ... created by the Employee ... in the course of duties ... in the course of his employment ... shall be the absolute property of the College

In the research contract, there is another clause:

The Employee will be expected to publish the results of his/her research work, subject to the conditions of any contract providing funding for the research

Therefore if funding bodies make funding contingent on publishing data as part of the results of research, then data publication will be allowed.

Research policies at QMUL Academic Registry and Council Secretariat

Creative Commons: http://wiki.creativecommons.org/Data CC Licenses / CC0

Science Commons: http://sciencecommons.org/projects/publishing/open-access-data-protocol/

Restrictions based on data ownership

Restrictions based on data parentage - use of e.g. CC-SA data

Article on CC-BY and data

Where possible, CC0 with a request for citations is preferred (Why does Dryad use CC0)

If data is based on copyright works it may be appropriate to restrict the license to allow only research / non-commercial use (e.g. this would prevent chord annotations being published commercially).

Practical Steps Towards Data Management

Even if you don't have a readily available data repository and your data can't be published, there are still steps you can take to manage your data.

File formats - use open formats where possible to future-proof files.

File naming - give files meaningful names.

Metadata - include a plain-text README file describing the contents of the files.

License - include a plain-text LICENSE file describing the license for the dataset.

Check that a copy of your data will be backed up - e.g. check that the network drive you store your data on is actually backed up.

If you're really bothered about recovering your data make sure it's backed up off-site!

This could be (i) in the cloud (e.g. Dropbox); (ii) on a USB drive (hard disk or flash); or (iii) a specific network location (e.g. a NAS box at home).

Repositories

The appropriate repository will partly depend upon the data.

It could be... C4DM RDR, Dryad, Flickr, figshare, Archive.org...

However, if you want data to be reused in a citable manner, remember to package the license and the required citation with the data. That way, however the data reaches the final user, the only excuse for not being able to cite it is that someone has deliberately removed the information...
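
A minimal sketch of such packaging (the file names are hypothetical; any archiving tool would do):

    import zipfile

    # Package the data together with its documentation, licence and requested citation,
    # so that they travel with the data wherever it is copied.
    with zipfile.ZipFile("dataset-v1.zip", "w", zipfile.ZIP_DEFLATED) as archive:
        archive.write("annotations.csv")  # the data itself
        archive.write("README.txt")       # description of the files
        archive.write("LICENSE.txt")      # e.g. CC0 with a request for citation
        archive.write("CITATION.txt")     # the reference users are asked to cite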

Open Source Learning Tools

Xerte

Media to use in Training

Disk Drives Break

DataCent collection of disk drive failure sounds

Laptops Break / Get Broken

Legislation

JISC Web2 Rights
JISC Legal

There are three main areas of law affecting data management: copyright (and the related database rights), data protection and freedom of information.

In addition, for data stored in the cloud, the USA PATRIOT Act may be relevant.

Copyright

Copyright grants the copyright holder rights relating to the use of the copyright material, in addition certain moral rights are granted to the creator of the materials. Copyright is automatically granted when new creative material is produced - i.e. the material must be more than a simple collection of other data. Copyright is a separate item of property to the original work and the sale of the original work does not automatically pass copyright on to the new owner of that work (e.g. selling a score or painting does not automatically transfer the copyright). The particular rights and the duration of the copyright period are affected by the type of material.

For audio and digital music research, rights of particular interest relate to:
  • musical compositions and audio recordings - a CD can be covered by three separate copyrights, one for the design of the packaging, one for the sound recording on the CD and one for the musical composition recorded
  • typographical arrangements - these cover not only papers (which are also covered as literary works) but also the layout of spreadsheets and design of databases.

Pay The Piper has a very good post explaining music copyright, which includes:

If you compose a completely original piece of music then it is your own property - you own the copyright, in other words.

Arranging existing music is fraught with difficulties. To put it very simply (and this is indeed a gross simplification) until the composer has been dead for seventy years his music is copyright and you may not make a written arrangement of it without permission.

Lots more in the post though, so it's worth reading if you want to know more about music copyright!

It is important to note that copyright does not cover the ideas expressed within a work, only the particular form that that work has been captured in. The data within a spreadsheet is not copyright, only the particular layout of that data.

We note that simple anthologies - e.g. a collection of "complete works" or works created during a certain period - do not get copyright on the content, although the typographical layout may be copyright.

Fair dealing / fair use regulations allow specific uses of copies from original copyright materials (NB: not copies of copies!) without breaching copyright. However, fair use does not apply to sound recordings, films and broadcasts. There are JISC Guidelines for Fair Dealing in an Electronic Environment and specific clauses in the legislation on use in education in training or for personal study.

The legislation:

Moral Rights

The author of a work always retains two moral rights regarding the content:
  • The right to be identified as the author
  • The right to object to derogatory use of the material.

Database Rights

In the UK, if a "substantial investment" is made in "obtaining, verifying or presenting" the contents of a database then the database will be protected by database rights. The owner of those rights will be the person that "takes the initiative" in the creation of the database - that "person" being the employer if the database is made by an employee in the course of his work. Database rights are infringed by extraction or re-utilisation of a substantial part of the database.

Fair dealing rules exist for database rights - users of databases are allowed to extract data for non-commercial use in research and teaching (with acknowledgment of the source).

Database rights last for 15 years from the creation/publication of the database and may be renewed if the database changes substantially.

More information at:
The act itself is at:

More Information

UK university materials regarding copyright and intellectual property:
Further sources of information:

Some articles of interest from outside the UK

Australian IP law blog posts re. media and copyright.
US articles from Public Domain Sherpa's Tutorial on Copyright and the Public Domain:
  • What makes a derivative work
    derivative must use enough of the prior work that the average person would conclude that it had been based on or adapted from the prior work
  • Compilations
    compilations are (c) if they show minimal creativity (e.g. not just all works by someone or by date)
  • Copyright Renewal
    Many works did not have copyright renewed and therefore went out of copyright and into the public domain in the US - an estimated 15% of works had copyright renewed. Renewals will appear in the online US copyright database for works from 1950-1963.

CHM Super Sound (a South Pacific record company) state that:

A melodic phrase of a song is in copyright. The lyrics are in copyright. Chord progressions in a music composition however, are not copyright material.

University of Washington Copyright Connection

WIPO Understanding Copyright and Related Rights

Berne Convention for the Protection of Literary and Artistic Works

Chord Progressions and Copyright:

Data Protection

Data protection protects the rights of individuals over their personal information. In particular, The Data Protection Act covers the processing of data relating to identifiable living individuals. The core of the Data Protection Act is a set of data protection principles. These state that personal data shall be processed fairly and lawfully and shall not be processed unless the subject gave their consent except under specific conditions (for sensitive personal data such as marital status, ethnic origin or health information there are further restrictions). Fair and lawful processing requires that the data was not obtained by deception and is kept confidential and that the data subject was given information about who will process the data and for what purpose. In addition, personal data should be:
  • obtained only for specified purposes, and should not be used for anything else;
  • adequate, relevant and not excessive in relation to the purposes (i.e. only the data that is required);
  • accurate and, where necessary, kept up to date;
  • kept no longer than is necessary for the purposes;
  • processed in accordance with the rights of the data subjects under the Act;
  • protected from:
    • unauthorised or unlawful processing
    • and accidental loss, destruction or damage
  • shall not be transferred outside the European Economic Area without similar protection being provided.

In general, data subjects have a right to access to data held about them. The onus to provide this data is on QMUL as the data controller, and, as such, QMUL should be able to find any personal data relating to identifiable living individuals which is held within the college.

However, there is a specific exemption, for research which is not targeted at particular individuals and will not cause distress or damage to a data subject, which allows data to be processed for other purposes and held indefinitely. Data subjects also have no immediate right of access for personal data where the data is processed for research purposes and the results do not identify the data subjects.

JISC state:

Data controllers are required by the Act to process personal data only where they have a clear purpose for doing so, and then only as necessitated by that purpose. A data controller’s purpose for any personal data processing operation should thus be clearly set out in advance of the processing, and should be readily demonstrable to data subjects.

They also note:
  • that the majority of the Data Protection principles do apply to research data;
  • that there should be a review to ensure compliance with Data Protection requirements;
  • that a mechanism should be in place for subjects to object to the processing if they believe it would cause them damage or distress;
  • and that particular care must still be taken when processing involves sensitive data.

As data protection applies to identifiable living individuals, it is generally best practice to anonymise any data relating to individuals as soon as possible, discarding any information that allows individuals to be identified. In order to comply with the Data Protection Act, a suitable consent form should be provided allowing the use of data relating to identifiable living individuals in research. Alternatively, such consent may be recorded in interviews. Within QMUL, research which involves human participants and data relating to them should be approved by the college Research Ethics Committee - the fast-track ethics review should be sufficient for most C4DM research.

Further information:
The Act:

Freedom Of Information

The Freedom Of Information Act (FoI) gives people the right to request data held by public bodies. It does not matter where the data originated, only who holds it. Copyright relating to information supplied under FoI requests remains unchanged - and provides you with protection from other people (mis)using your data.

The Freedom of Information Act states that research data:
  • can be held indefinitely;
  • is not subject to FoI requests unless individuals are identified in published research;
  • can be used for other research uses;
  • and may be exempt from FoI requests on grounds of (imminent) future publication or commercial interest.

Note that this means that if a researcher from another institution publishes research identifying individuals, and you use their data, then people will have the right to request the data from QMUL.

Additionally, if data will be published through the college's normal publication scheme, then there is no onus on the college to provide the data under FoI requests - publishing data removes any additional requirements for FoI.

Further information:
The Act:

USA PATRIOT Act

The 2001 USA PATRIOT Act provides the US government with the right to search/seize data held by any US company or its subsidiaries. It does not matter where the data is physically stored, if it is held by a US company (Microsoft, Apple, Google, DropBox, Amazon...) then the US government can seize the data. However, in order to do so it is necessary for the US government to obtain a court order for the purpose of an anti-terrorism investigation - they can't just idly decide to grab your data.

Note that these rights are not terribly different to the rights of other countries to access data (see Hogan Lovells' white paper).

Further information:
The Act:
  • 2001 "Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism" (USA PATRIOT) Act (Link).


Research Council Requirements

Research councils are requiring data management plans as part of grant proposals and their policies also stipulate that research data created through their funding should be published for other researchers to use.

The DCC provides an overview of funders' data policies and individual pages for each funder's policy. The London School of Hygiene and Tropical Medicine (LSHTM) have also published a report on funder requirements for data preservation and publication.

The AHRC and EPSRC policies are most relevant to work at C4DM.

Arts and Humanities Research Council (AHRC)

From AHRC Funding Guide (PDF downloadable from AHRC web-site)

Deposit of resources or datasets
Grant Holders in all areas must make any significant electronic resources or datasets created as a result of research funded by the Council available in an accessible and appropriate depository for at least three years after the end of their grant. The choice of repository should be appropriate to the nature of the project and accessible to the targeted audiences for the material produced.
If you are a Grant Holder in the area of archaeology and decide to deposit with The Archaeology Data Service (ADS), then you should consult them at or before the start of the proposed research to discuss and agree the form and extent of electronic materials to be deposited with the ADS. If the deposit occurs after 31 March 2013, then there will be a charge for this deposit.

Self Archiving
The AHRC requires that funded researchers:
• ensure deposit of a copy of any resultant articles published in journals or conference proceedings in appropriate repository
• wherever possible, ensure deposit of the bibliographical metadata relating to such articles, including a link to the publisher’s website, at or around the time of publication.
Full implementation of these requirements must be undertaken such that current copyright and licensing policies, for example, embargo periods and provisions limiting the use of deposited content to non-commercial purposes, are respected by authors.

The DCC provides a summary of AHRC policy.

Engineering and Physical Sciences Research Council (EPSRC)

The EPSRC data management principles state that:
  • research data should be made freely available with as few restrictions as possible
  • data with long term value should remain accessible and usable for future research
  • metadata should be made available to enable other researchers to understand the potential for further research and re-use of the data
  • data management policies and plans should exist for all data – and be adhered to!
  • published results should always include information on how to access the supporting data
  • all users of research data should acknowledge the sources of their data

The DCC provides a summary of EPSRC policy.

MUSHRA

(Wikipedia)

ITU-R Recommendation BS.1534-1

Frameworks for creating MUSHRA tests:
  • MUSHRAM - Matlab interface for MUSHRA audio tests
  • MUSHRA patcher for Max/MSP
  • mushraJS - an HTML5 and JavaScript based framework for creating MUSHRA listening tests

Additional Notes

Data ownership issues - who owns your research data ?

Mapping to Vitae RDF

Specifics on: using the C4DM RDR and where data can safely be stored at QM

Paul Lamere's The Tools We Use

Bibliographic data