Sound Data Management Training

Managing research data is basic good practice. It ensures your research data is available to complete the project, reducing project risk, and preserves your research for future use after the project is complete, increasing its impact. Good research data management also ensures that you comply with funder and institutional requirements, and that you consider the ethical and legal implications of your research data.

There are many counter-examples showing that poor research data management can result in lost research. There are also success stories where good research data management has allowed research to continue after a disaster.

We consider three stages of a research project, and the appropriate research data management considerations for each of those stages. The stages are:
  • before the research - planning research data management
  • during the research
  • at the end of the research

In addition, we consider the responsibilities of a Principal Investigator regarding data management.

There is also an alternate view of the content based on individual research data management skills and a summary of data management resources available to C4DM researchers.

These online materials are an output of the JISC-funded Sound Data Management Training (SoDaMaT) project.

Before The Research - Planning Research Data Management

A data management plan is an opportunity to think about the resources that will be required during the lifetime of the research project and to make sure that any necessary resources will be available for the project. In addition, it is likely that some form of data management plan will be required as part of a grant proposal.

The main questions the plan will cover are:
  • What type of storage do you require?
    Do you need a lot of local disk space to store copies of standard datasets? Will you be creating data which should be deposited in a long-term archive, or published online? How will you back up your data?
  • How much storage do you require?
    Does it fit within the standard allocation for backed-up storage?
  • How long will you require the storage for?
    Is data being archived or published? Does your funder require data publication?
  • How will this storage be provided?

Additional questions may include:
  • What is the appropriate license under which to publish data?
  • Are there any ethical concerns relating to data management, e.g. identifiable participants?
  • Does your research data management plan comply with relevant legislation?
    e.g. Data Protection, Intellectual Property and Freedom of Information

A minimal data management plan for a project using standard C4DM/QMUL facilities could say:

During the project, data will be created locally on researchers' machines and will be backed up to the QMUL network. Software will be managed through the SoundSoftware site, which provides a Mercurial version control system and issue tracking. At the end of the project, software will be published through SoundSoftware and data will be published on the C4DM Research Data Repository.

For larger proposals, a more complete plan may be required. The Digital Curation Centre have an online tool (DMP Online) for creating data management plans which asks (many) questions related to RCUK principles and builds a long-form plan to match research council requirements.

It is important to review the data management plan during the project as it is likely that actual requirements will differ from initial estimates. Reviewing the data management plan against actual data use will allow you to assess whether additional resources are required before resourcing becomes a critical issue.

In order to create an appropriate data management plan, it is necessary to consider data management requirements during and after the project.

The Digital Curation Centre (DCC) provide DMP Online, a tool for creating data management plans. The tool can provide a data management questionnaire based on institutional and funder templates and produce a data management plan from the responses. Documents are available describing how to use DMP Online.

During The Research

During the course of a piece of research, data management is largely risk mitigation - it makes your research more robust and allows you to continue if something goes wrong.

The two main areas to consider are:
  • backing up research data - in case you lose, or corrupt, the main copy of your data;
  • documenting data - in case you need to return to it later.

In addition to the immediate benefits during research, applying good research data management practices makes it easier to manage your research data at the end of your research project.

We have identified three basic types of research projects, two quantitative (one based on new data, one based on a new algorithm) and one qualitative, and consider the data management techniques appropriate to those workflows. More complex research projects might require a combination of these techniques.

Quantitative research - New Data

For this use case, the research workflow involves:
  • creating a new dataset
  • testing outputs of existing algorithms on the dataset
  • publication of results
The new dataset might include:
  • selection or creation of underlying (audio) data (the actual audio might be in the dataset or the dataset might reference material - e.g. for copyright reasons)
  • creation of ground-truth annotations for the audio and the type of algorithm (e.g. chord sequences for chord estimation, onset times for onset detection)
Although the research is producing a single new dataset, the full set of research data involved includes:
  • software for the algorithms
  • the new dataset
  • identification of existing datasets against which results will be compared
  • results of applying the algorithms to the dataset
  • documentation of the testing methodology - e.g. method and algorithm parameters (including any default parameter values).

All of these should be documented and backed up.

Note that if existing algorithms have published results using the same existing datasets and methodology, then results should be directly comparable between the published results and the results for the new dataset. In this case, most of the methodology is already documented and only details specific to the new dataset need to be recorded separately.

If the testing is scripted, then the code used would be sufficient documentation during the research - readable documentation only being required at publication.
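As an illustration of how a test script can itself document the methodology, here is a minimal evaluation sketch for the onset-detection example. The tolerance, the matching rule and the data are illustrative assumptions, not a C4DM standard:

```python
def evaluate_onsets(detected, ground_truth, tolerance=0.05):
    """Count a detected onset as correct if it falls within `tolerance`
    seconds of an as-yet-unmatched ground-truth onset, then report
    precision, recall and F-measure."""
    unmatched = list(ground_truth)
    hits = 0
    for t in detected:
        match = next((g for g in unmatched if abs(g - t) <= tolerance), None)
        if match is not None:
            unmatched.remove(match)
            hits += 1
    precision = hits / len(detected) if detected else 0.0
    recall = hits / len(ground_truth) if ground_truth else 0.0
    f = 2 * precision * recall / (precision + recall) if hits else 0.0
    return precision, recall, f

# The script documents the methodology: the tolerance and the matching
# rule are recorded right there in the code.
print(evaluate_onsets([0.10, 0.52, 1.01], [0.11, 0.50, 1.50]))
```

Because the evaluation parameters appear explicitly in the script, archiving the script alongside the results preserves the methodology.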

Quantitative research - New Algorithm

A common use-case in C4DM research is to run a newly-developed analysis algorithm on a set of audio examples and evaluate the algorithm by comparing its output with that of a human annotator. Results are then compared with published results using the same input data to determine whether the newly proposed approach makes any improvement on the state of the art.

Data involved includes:
  • software for the algorithm
  • an annotated dataset against which the algorithm can be tested
  • results of applying the new algorithm and competing algorithms to the dataset
  • documentation of the testing methodology

Note that if other algorithms have published results using the same dataset and methodology, then results should be directly comparable between the published results and the results for the new algorithm. In this case, most of the methodology is already documented and only details specific to the new algorithm (e.g. parameters) need to be recorded separately.

Also, if the testing is scripted, then the code used would be sufficient documentation during the research - readable documentation only being required at publication.

Qualitative research

An example would be using interviews with performers to evaluate a new instrument design.

The workflow is:
  • Gather data for the experiment (e.g. through interviews)
  • Analyse data
  • Publish data
Data involved might include:
  • the interface design
  • captured audio from performances
  • recorded interviews with performers (audio or video)
  • interview transcripts

Survey participants and interviewees retain copyright over their contributions unless it is specifically assigned to you! To have the freedom to publish the content, a suitable rights waiver, transfer of copyright, clearance form or license agreement should be signed (or recorded as a verbal agreement). Similarly, the people (or organisation) recording the event will hold copyright on their materials (e.g. video, photos, sound recordings) unless it is assigned, waived or licensed. Most of this can be dealt with fairly informally for most research, but if you want to publish data then a more formal agreement is sensible. Rather than transferring copyright, an agreement to publish the (possibly edited) materials under a particular license might be appropriate.

Creators of materials (e.g. interviewees) always retain moral rights to their words: they have the right to be named as the author of their content; and they maintain the right to object to derogatory treatment of their material. Note that this means that in order to publish anonymised interviews, you should have an agreement that allows this.

If people are named in interviews (even if they're not the interviewee) then the Data Protection Act might be relevant.

The research might also involve:
  • demographic details of participants
  • identifiable participants (Data Protection)
  • release forms for people taking part

At The End Of The Research

Whether you have finished a research project or simply completed an identifiable unit of research (e.g. published a paper based on your research), you should consider archiving your research data and publishing your results.

Publication of the results of your research will require:
  • Summarising the results
  • Publishing a relevant sub-set of research data / summarised data to support your paper
  • Publishing the paper

Note that the EPSRC data management principles require sources of data to be referenced.

Research Management

The data management concerns of a PI will largely revolve around planning and appraisal of data management for research projects: to make sure that they conform with institutional policy and funder requirements; and to ensure that the data management needs of the research project are met.

A data management plan (e.g. for use in a grant proposal) will show that you have considered:
  • the costs of preserving your data;
  • funder requirements for data preservation and publication;
  • institutional data management policy;
  • and ethical issues surrounding data management (e.g. data relating to human participants).

After the project is completed, an appraisal of how the data was managed should be carried out as part of the project's "lessons learned".

Data management training should provide an overview of all the above, and keep PIs informed of any changes in the above that affect data management requirements.

Data Management Skills

Archiving research data
Backing up
Documenting data
Managing software as data
Licensing research data
Publishing research data

Data Management Background

Research Council requirements
Relevant legislation

Data Management Motivation

Why manage research data?

Available Resources

Resources available for C4DM researchers

Backing up

Why back up your data?

How to back up data

The core principle is that backup copies of data should regularly be stored in a different location to the main copy.

Suitable locations for backups are:
  • A firesafe, preferably in a different building
  • A network copy
    • A network drive e.g. provided by the institution
    • Internet storage (in the cloud)
    • A data repository - this could be a public thematic / institutional repository for publishing completed research datasets, or an internal repository for archiving datasets during research
  • A portable device / portable media which you keep somewhere other than under your desk / with your laptop.

Backing up to external devices means that you need access to the device... network drives and "internal" backups are usually more available - e.g. you can back up every time you're in the office / lab or at home.

The best backup is the one you actually do. How often you need to back up depends very much on how much new data you've generated and how difficult it would be to recreate. Primary data (e.g. digital audio recordings of interviews) should be backed up as soon as possible, as it may be very time-consuming or impossible to recreate. If an algorithm runs for days generating data files, you may want to set it up to create backup copies as it proceeds, rather than backing up only at the end of the processing. If you've changed some source code and can regenerate the data in an afternoon, you may not need to back up the data - but the source code should be safely stored in a version control system somewhere. If you feel too busy to back up your data, it may be a hint that you should make sure there's a copy somewhere safe!
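For the long-running-job case above, a sketch of copying each result file to a backup location as soon as it is produced. The directory names and the analysis step are illustrative placeholders:

```python
import shutil
from pathlib import Path

RESULTS = Path("results")
BACKUP = Path("backup")  # ideally a network drive, not the same disk
RESULTS.mkdir(exist_ok=True)
BACKUP.mkdir(exist_ok=True)

def analyse(item):
    # Placeholder for a long-running analysis step.
    return "result for %s\n" % item

for item in ["track01", "track02", "track03"]:
    out = RESULTS / ("%s.txt" % item)
    out.write_text(analyse(item))
    # Copy each result file as soon as it exists, so a crash part-way
    # through doesn't lose days of output.
    shutil.copy2(out, BACKUP / out.name)
```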

Remember that if you delete your local copy of the data then the primary copy will be the original backup... is that copy backed up anywhere? If a network drive is used, it may be backed up to tape - but this should be checked with your IT provider.

Details of resources available for C4DM researchers are available here.

Can't I just put it in the cloud?

You can, but the service agreement with the provider may give them a lot of rights... review the service agreement and decide whether you are happy with it!

Looking at service agreements in November 2012, we found that Google's terms let them use your data in any way which will improve their services - including publishing your data and creating derivative works. This is partly a side-effect of Google switching to a single set of terms for all their services. For Microsoft SkyDrive, the Windows Live services agreement is pretty similar.

Apple's iCloud is better as they restrict publication rights to data which you want to make public / share. Dropbox is relatively good - probably because they just provide storage and aren't mining it to use in all their other services!

Even so, there are issues. Data stored in the cloud is still stored somewhere... you just don't have control over where that location is. Your data may be stored in a country whose government has the right to access data held there. Also, the firm that stores your data may still be required to comply with the laws of its home country even when the data is stored elsewhere. It is, however, unlikely that digital audio research data will be sensitive enough for this to be an issue.

A Forbes article, "Can European Firms Legally Use US Clouds To Store Data?", stated that:

Both Amazon Web Services and Microsoft have recently acknowledged that they would comply with U.S. government requests to release data stored in their European clouds, even though those clouds are located outside of direct U.S. jurisdiction and would conflict with European laws.

If you are worried about what rights a service provider may have over your data in their cloud, then consider encrypting it - e.g. using an encrypted .dmg file on a Mac, or using TrueCrypt for a cross-platform solution. These create an encrypted "disc" in a file which you can mount and treat like a real disc - but all the content is encrypted. Note that changing data on an encrypted disc may change the entire contents of the disc file, requiring the whole disc to be resynced to the cloud storage. Alternatively, BoxCryptor or encFS (also available for Windows) will encrypt individual files separately, allowing synchronisation to operate more effectively.

SpiderOak provide "zero knowledge" privacy in which all data is encrypted locally before being submitted to the cloud, and SpiderOak do not have a copy of your decryption key - i.e. they can't actually examine your data.

See the JISC/DCC document "Curation In The Cloud".

Surely there must be a quicker way...

Figuring out which files to copy can be very tedious, and usually leads to just backing up large chunks of data together. However, utilities can be used to copy just those files that have been updated - or even just update the parts of files that have changed.

The main command-line utility for this on UNIX-like systems (Mac OS X, Linux) is rsync. From the rsync man page:

Rsync is a fast and extraordinarily versatile file copying tool. It can copy locally, to/from another host over any remote shell, or to/from a remote rsync daemon. It offers a large number of options that control every aspect of its behavior and permit very flexible specification of the set of files to be copied. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination. Rsync is widely used for backups and mirroring and as an improved copy command for everyday use.

Rsync finds files that need to be transferred using a "quick check" algorithm (by default) that looks for files that have changed in size or in last-modified time. Any changes in the other preserved attributes (as requested by options) are made on the destination file directly when the quick check indicates that the file's data does not need to be updated.
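The quick-check idea can be sketched in a few lines of Python, copying only those files whose size or modification time differs between source and backup. This is a simplified illustration of the principle, not a replacement for rsync:

```python
import os
import shutil

def quick_sync(src_dir, dst_dir):
    """Copy files from src_dir to dst_dir, skipping any destination file
    whose size and modification time already match the source - the same
    "quick check" that rsync performs by default."""
    copied = []
    for root, _dirs, files in os.walk(src_dir):
        rel = os.path.relpath(root, src_dir)
        target = os.path.join(dst_dir, rel)
        os.makedirs(target, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target, name)
            s_stat = os.stat(s)
            if (not os.path.exists(d)
                    or os.stat(d).st_size != s_stat.st_size
                    or int(os.stat(d).st_mtime) != int(s_stat.st_mtime)):
                shutil.copy2(s, d)  # copy2 preserves the timestamp
                copied.append(d)
    return copied
```

Because copy2 preserves timestamps, a second run over an unchanged tree copies nothing.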

For Windows, there is an rsync port, and DeltaCopy provides a GUI over rsync.

In addition, there are modern continuous backup programs (e.g. Apple's "Time Machine") which will synchronise data to a backup device and allow you to revert to any point in time. However, these solutions may not be appropriate if your data is large.

Version control systems for source code are optimised for storing plain text content and are not an appropriate way to store data unless the data is text (e.g. CSV files).

Archiving research data

For archival purposes data needs to be stored in a location which provides facilities for long-term preservation of data. As well as standard data management concerns (e.g. backup, documentation) the media and the file formats will need to be appropriate for long-term use.

Whereas work-in-progress data is expected to change regularly during the research process, archived data will change rarely, if at all. Archived data can therefore be stored on write-once media (e.g. CD-R).

In addition, it is not necessary to archive all intermediate results - if archived data is reused, taking a few days to regenerate results is reasonable. However, all necessary documentation, software and data should be archived to allow results to be recreated. Existing archived datasets will not need to be archived again; however, if the archiving system supports deduplication, then storing multiple copies of the same content will require minimal additional storage.

Once archived, the archive copy should not be modified directly and data access should only be required to create a new work-in-progress copy of the data to work from. Access to archived data will therefore be sporadic. Hence, it is possible to store archived data "off-line" only to be accessed when required.

It is important that archiving data is performed in an appropriate manner to allow future use of the data. This will require the use of appropriate formats for the data and storage on suitable media.

If the original content is not in an open format, then providing copies in multiple formats may be appropriate - e.g. an original Microsoft Word document, a PDF version to show how the document should look and the plain-text content so the document can be recreated.

Within C4DM, there are currently few resources available to support this. The best available option is the research group network folder as this is backed up to tape.

Archiving Data

BBC Domesday Project

A 1986 project to create a modern-day Domesday Book (early crowd-sourcing):
  • used "BBC Master" computers with data on laserdisc
  • collected 147,819 pages of text and 23,225 photos
  • expiring media and obsolete technology put the data at risk!

Domesday Reloaded (2011) recovered the data to allow long-term access. The lessons:
  • Don't use obscure formats!
  • Don't use obscure media!
  • Don't rely on technology being available!
  • Do keep original source material!



Archive copies of data may be held on the same types of media as used during research. Additionally, Write-Once media (e.g. CD-R, DVD+/-R, BDR) may be appropriate.

Removable drives (e.g. USB flash drives, firewire HDD) may be used, but there is a risk of hardware failure with these devices - they are not "just" data storage.

Removable media (e.g. CD-R, tapes) do not have the risk of hardware failure but the media themselves may be damaged or become unusable - the estimated lifetime of an optical disc is 2-100 years. Whether a specific disc will last 2 years or 100 is not something that can easily be judged - although buying high quality media rather than cheap packs of 100 discs may help.

As with all technology, there is a risk of obsolescence:
  • devices to read removable media may no longer be commonplace (e.g. floppy disc drives, ZIP drives)
  • formats used for removable media may no longer be supported (e.g. various formats for DVD-RAM discs)
  • interfaces used for removable drives may no longer be commonplace (e.g. parallel or SCSI ports, PATA/IDE disc drives)

All media decay / become obsolete over time. It is therefore necessary to refresh the media by copying the data to new media at intervals. Doing this regularly reduces the risk of discovering that your archived data is inaccessible.

If data is stored on a RAID (Redundant Array of Independent Disks), then it is possible to replace an individual disk in the array and rebuild its content, thus refreshing the media.

Archived data is still at risk of data loss, and should be backed up somewhere else!

Archiving data is best supported through provision of a data archiving service (e.g. through a library). The burden of maintaining archival standards of storage for the media is then taken on by the service provider. This may appear to the user as a network drive, or as an archive system to which data packages may be submitted. Such a system may be part of a data management system which also supports publication of data.

File Formats

File formats also become obsolete. Although the original data should be archived, it is also recommended that copies of the data are stored in more accessible formats - e.g. storing PDF output from LaTeX source, TIFF versions of images, or FLAC copies of audio files. The more specialised the source format, the stronger the requirement for readable copies. Closed formats (e.g. Microsoft Word documents) are particularly vulnerable to obsolescence - e.g. if you switch from MS Word to Open Office, even if the document can be opened you may find that the formatting no longer works without purchasing MS Office.

  • LaTeX source - will all the required packages be available if you want to rebuild the document?
  • Images - will the format be available? Is it a closed format (e.g. GIF)?

If data is stored in lossy formats (e.g. MP3) then future decoders for that format may not produce precisely the same output (audio) as the decoder used in the initial experiments. A copy of the data should always include a lossless version of the data (e.g. PCM or FLAC for audio). Preferably, research should take place on lossless data extracted from the lossy files.

Current audio formats may become obsolete in the future; we therefore recommend that when archiving audio files, copies of the data are stored in an open lossless format as well as in the original format. We currently recommend using FLAC to compress audio files - FLAC files use less space than the raw data and allow metadata tags to be included (e.g. artist and track name). If the use of compressed files is not appropriate, we recommend uncompressed PCM audio in WAV format.
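As a sketch of the WAV recommendation, uncompressed PCM can be written with nothing but the Python standard library. The sine tone here stands in for real audio samples, which in practice would come from your decoded lossless source:

```python
import math
import struct
import wave

SAMPLE_RATE = 44100

# Synthesise one second of a 440 Hz tone as 16-bit mono samples.
samples = [int(32767 * math.sin(2 * math.pi * 440 * n / SAMPLE_RATE))
           for n in range(SAMPLE_RATE)]

# Write the samples as uncompressed PCM in a WAV container.
with wave.open("archive_copy.wav", "wb") as w:
    w.setnchannels(1)           # mono
    w.setsampwidth(2)           # 16-bit samples
    w.setframerate(SAMPLE_RATE)
    w.writeframes(struct.pack("<%dh" % len(samples), *samples))

# Sanity-check the archive copy by reading its header back.
with wave.open("archive_copy.wav", "rb") as w:
    assert w.getframerate() == SAMPLE_RATE
    assert w.getnframes() == len(samples)
```

FLAC encoding itself needs an external codec, but a lossless WAV copy like this is a reasonable fallback.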


Archiving data requires:
  • refreshing the media at suitable intervals by moving data onto new media
  • creating copies of the data in new formats to allow their use (e.g. converting data in closed formats to open formats, updating data to new versions of file formats).

Documenting data

What should you document?

You should document the data so that people can understand it - what units the data is in, how the data was created, why the data was created and possible uses for the data.

As well as summary documentation for the entire dataset, individual data files should have their own documentation.

How to document data

  • Use a suitable directory structure. Documentation can then give a summary of all the files within a folder.
  • Use meaningful filenames
    • The more meaningful the better
    • However, they should be succinct
    • It may be necessary to refer to an explanation of the filenames to identify their content
    • Files may be moved from their original directory structure so filenames should be sufficient to identify a particular file
  • If documentation is required to understand file contents, copy the documentation when copying the files
  • Use standard file formats where possible - and preferably open formats so that files can be reused
  • Create README files with textual explanations of file content
  • Use the capabilities of file formats for self-documentation
    • If you have text files of data, consider including comment lines for explanations
    • Fill in author, title, date and comments for file formats that support them (e.g. PDF, Word .doc etc.)
    • Consider including <!-- --> comments in XML data
  • If data is created algorithmically / by code
    • Consider automatically writing out textual descriptions when the data is created
    • Document the values of all the parameters used to create the data
    • Remember to document the actual values of parameters for which default values were accepted - the default values might change with different versions of the code
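As a sketch of the last two points, a script that creates data can write a small sidecar file recording how each output was produced. The filenames, parameters and fields here are illustrative:

```python
import datetime
import json
import sys

def write_sidecar(data_path, params, script="unknown"):
    """Write a JSON sidecar file documenting how a data file was created.

    Recording the actual value of every parameter - including ones left
    at their defaults - keeps the data interpretable even if defaults
    change in later versions of the code.
    """
    doc = {
        "data_file": data_path,
        "created": datetime.datetime.now().isoformat(),
        "created_by": script,
        "python_version": sys.version.split()[0],
        "parameters": params,
    }
    with open(data_path + ".json", "w") as f:
        json.dump(doc, f, indent=2)

# Hypothetical example: document an onset-detection run.
write_sidecar("onsets.csv", {"hop_size": 512, "threshold": 0.3},
              script="run_onsets.py")
```

The sidecar travels with the data file, so the documentation survives copying and archiving.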

Managing Software As Data

For existing software used in research, the appropriate citation, version and source should be documented. This may need to include versions of any libraries required by the software as changes to the libraries might affect the outputs.

For new software, as for data, the main management issues are backing up and documentation.

However, whereas data changes slowly / infrequently, software is subject to ongoing changes during a project. Source code for software usually consists of text files and should therefore be stored in a suitable version control system (e.g. Mercurial, Subversion, git). Binary releases of software may also be created as downloads for a project.

Additionally, software documentation has broader requirements - including both documentation to make the code maintainable (e.g. comments in the code, documenting APIs, Javadoc style documentation) and user documentation to explain how to install and use the software.

The Sound Software project provides software project management facilities for digital music and audio research, including Mercurial version control, downloads, documentation, issue lists and wikis, through its code repository.

Other public repositories for source code are also available.

The Sound Software project has information on choosing a version control system and provides a cross-platform, easy-to-use, graphical client for use with Mercurial.

Publishing research data

Research data publication allows your data to be reused by other researchers e.g. to validate your research or to carry out follow-on research. To that end, a suitable data publication host will allow your data to be discovered (e.g. by publishing metadata) and will be publicly accessible (i.e. on the internet).

Research data can be published on the internet through:
  • project web sites
  • research group web-sites
  • generic web archives
  • research data sites (e.g. figshare)
  • more general open access research hosts (e.g. f1000 Research)
  • thematic repositories dedicated to a specific discipline / subject area - sadly there is no sign of an appropriate repository for digital music and audio research
  • institutional repositories dedicated to research from a specific organisation (e.g. QMUL have a repository through which Green open access copies of papers by QM research staff can be published).
  • supplementary materials attached to journal articles

An appropriate license should be granted to allow other researchers to use your research data.

Within the Centre for Digital Music, we now have a research data repository for publishing research data outputs from the group. Publishing data through the C4DM repository gives a single point for publishing C4DM data on the internet without relying on (possibly ephemeral) project-specific web-sites. Other repositories that may be of interest to researchers are listed here.

If the web-site through which the data is published is also to be the long-term archive for your data, then you should check that the site meets the criteria for an archival storage system. Note that although data will be written to the host irregularly, published data is expected to be accessed more frequently than archived data, making offline storage unsuitable.

If an external publisher is used for your research data, you should check the Terms and Conditions e.g. to see whether copyright on the data is transferred to the publisher and to check for how long they will publish your data.

If data is published through a publisher or repository, then it may also be held on institutional storage as long as the publisher's license is followed, which might e.g. require that there is a link back to the publisher from the institutional repository. Publishing under a Creative Commons license makes this easy.

If data is available in multiple places, different versions of the data might arise (e.g. changes between dates uploaded, data corruption). You should therefore make it easy to identify which specific version of the data is correct by publishing a digital fingerprint (e.g. an MD5 hash). MD5 fingerprints can be generated on Windows using MD5summer, on Linux with the GNU md5sum utility and on Mac OS X using md5 or openssl.
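A fingerprint can also be computed portably in a few lines of Python; this sketch reads the file in chunks so large datasets need not fit in memory (the sample filename and contents are illustrative):

```python
import hashlib

def fingerprint(path, chunk_size=1 << 20):
    """Return the MD5 hex digest of a file, read in 1 MiB chunks."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
    return md5.hexdigest()

# Illustrative use: fingerprint a small sample data file.
with open("dataset.csv", "wb") as f:
    f.write(b"onset_time,confidence\n0.48,0.9\n")

print(fingerprint("dataset.csv"))  # publish this alongside the data
```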

Persistent IDs for data

In order to ensure ongoing access to your data, you should look to acquire a persistent ID for your dataset. However, persistence is a continuum, with some IDs more persistent than others. DOIs and handles are designed to be persistent in the long term, allowing a unique identifier to be redirected to the current location of your dataset - if the dataset moves, the DOI/handle can be pointed at the new location. Repositories and research data sites may provide DOIs for data submitted to them. Institutional URLs may be persistent if the institution makes a policy decision to make them so. Other URLs may change when web-sites are revamped, making the published URL for your data return a "404 Not Found" message.

Persistent IDs are useful for referencing datasets, and are particularly handy if they are short. Long or ugly DOIs can be shortened using the ShortDOI service.

And more repositories


The Digital Curation Centre have a (very short) list of repositories.

Repositories using DSpace can be registered on the DSpace web-site, for inclusion in the "Who's using DSpace?" list.

Within the University of London, the School of Advanced Study has a repository of humanities-related items.

University of the Arts London have an online repository.

EDINA provides a national data centre:

EDINA is a UK national academic data centre, designated by JISC on behalf of UK funding bodies to support the activity of universities, colleges and research institutes in the UK, by delivering access to a range of online data services through a UK academic infrastructure, as well as supporting knowledge exchange and ICT capacity building, nationally and internationally.

Services hosted at EDINA include:

Pre-press e-Prints of articles can be published through and the related Computing Research Repository

Other repositories that may be of interest include:

NB: This list has been accumulated from various sites including:

Training the Trainers

I-Tech Training Toolkit

Performance Juxtaposition web-site:




Cognitive, Affective and Psychomotor learning

Why do Data Management?

Evidence Promoting Good Data Management

Data Reuse

Do you reuse other people's data? Can they reuse yours?

Researcher Development Framework

SCONUL Information Literacy 7 Pillars Diagrams


Whose data is it anyway?

QMUL HR Contract Terms and Conditions:

16. Patents & Copyright
a) Any discovery, design, computer software program or other work or invention which might reasonably be exploitable (‘Invention’) which is discovered, invented or created by the Employee (either alone or with any other person) either directly or indirectly in the course of their normal duties or in the course of duties specifically assigned to him in the course of his employment shall promptly be disclosed in writing to the College. All intellectual property rights in such Invention shall be the absolute property of the College and the College shall have the right to apply for, prosecute and obtain patent or other similar protection in its own name. Intellectual property rights include all patent rights, copyright and rights in respect of confidential information and know-how. The ownership of copyright in research papers, review articles and books will normally be waived by the College in favour of the author unless subject to any conditions placed on the works by the funder.

The important bit being...

Any ... work ... which might reasonably be exploitable ... which is ... created by the Employee ... in the course of duties ... in the course of his employment ... shall be the absolute property of the College

In the research contract, there is another clause:

The Employee will be expected to publish the results of his/her research work, subject to the conditions of any contract providing funding for the research

Therefore if funding bodies make funding contingent on publishing data as part of the results of research, then data publication will be allowed.

Research policies at QMUL Academic Registry and Council Secretariat

Creative Commons: CC Licenses / CC0

Science Commons:

Restrictions based on data ownership

Restrictions based on data parentage - use of e.g. CC-SA data

Article on CC-BY and data

Where possible, CC0 with a request for citations is preferred (Why does Dryad use CC0)

If data is based on copyright works it may be appropriate to restrict the license to allow only research / non-commercial use (e.g. this would prevent chord annotations being published commercially).

Practical Steps Towards Data Management

Even if you don't have a readily available data repository and your data can't be published, there are still steps you can take to manage it.

File formats - use open formats where possible to future-proof files.

File naming - give files meaningful names.

Metadata - include a plain-text README file describing the contents of the files.

License - include a plain-text LICENSE file describing the license for the dataset.
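As a sketch of what the README and LICENSE steps might look like in practice (the dataset name, file contents and the choice of CC0 here are all invented for illustration), a small Python script can create the two files alongside the data:

```python
from pathlib import Path

# Hypothetical dataset folder; the name and contents are illustrative only.
dataset = Path("my-dataset")
dataset.mkdir(exist_ok=True)

# A plain-text README describing what the files are.
(dataset / "README").write_text(
    "My Dataset v1.0 (2013-01-15)\n"
    "Contents: audio/     - 100 WAV files, 44.1 kHz mono\n"
    "          labels.csv - one annotation row per file\n"
    "Contact: researcher@example.org\n"
)

# A plain-text LICENSE stating the terms and the requested citation.
(dataset / "LICENSE").write_text(
    "This dataset is released under CC0. If you use it, please cite:\n"
    "  A. Researcher, 'My Dataset', 2013.\n"
)
```

Because both files are plain text, they remain readable whatever happens to the software used to create the dataset itself.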

Check that a copy of your data will be backed up - e.g. check that the network drive you store your data on is actually backed up.

If you're really bothered about recovering your data, make sure it's backed up off-site!

This could be (i) in the cloud (e.g. DropBox); (ii) on a USB drive (hard/flash); (iii) in a specific network location (e.g. a NAS box at home).
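A minimal backup along these lines can even be scripted. The sketch below (the folder names are placeholders, not a recommended layout) copies a data folder to a date-stamped destination, which could be a mounted USB drive or network share:

```python
import shutil
from datetime import date
from pathlib import Path

def backup(source, backup_root):
    """Copy `source` to a date-stamped folder under `backup_root`,
    e.g. a mounted USB drive or a network share."""
    source = Path(source)
    dest = Path(backup_root) / f"{source.name}-{date.today():%Y%m%d}"
    shutil.copytree(source, dest)
    return dest

# Illustrative usage with placeholder paths and contents.
Path("experiment-data").mkdir(exist_ok=True)
(Path("experiment-data") / "results.txt").write_text("run 1: ok\n")
dest = backup("experiment-data", "backups")
print("Backed up to", dest)
```

Note that `shutil.copytree` refuses to overwrite an existing destination, so running this twice on the same day will fail rather than silently replacing an earlier backup; a real script would want to handle that case deliberately.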


The appropriate repository will partly depend upon the data.

It could be... C4DM RDR, Dryad, Flickr, figshare, Archive.org...

However, if you want data to be reused in a citable manner, remember to package the license and the required citation with the data. That way, however the data reaches the final user, the only excuse for not citing it is that someone has deliberately removed the information...

Open Source Learning Tools


Media to use in Training

Disk Drives Break

DataCent collection of disk drive failure sounds

Laptops Break / Get Broken

Evidence Promoting Good Data Management


General list of destroyed libraries on Wikipedia!


There are some 3 million shipwrecks scattered across the ocean floor, UNESCO has estimated, and most of them are still waiting to be found. One of those ships, which sank off the French coast in 1843, carried a treasure trove of science — most of the papers and research equipment of Jeanne Villepreux-Power, who was one of the leading cephalopod researchers of her time.


L'Aquila earthquake, Italy

A major casualty of the last week’s earthquake in Italy could be valuable research work done by a UK-based charity over the last two years.

Leukaemia Busters, Southampton, has been developing pioneering drugs in a clinic in the quake-hit city of L'Aquila.

Dr David Flavell, from the charity, said it was likely specially engineered leukaemia cells used to produce anti-bodies had been lost.

Two years of life-saving research into the treatment of a killer disease feared lost forever by a Hampshire charity has incredibly survived the Italian earthquake disaster.

Leukaemia Busters were delighted to discover that laboratories where scientists had spent the past two-and-a-half years working to develop pioneering drugs to fight leukaemia remain standing.

The unbelievable news came after rescue workers allowed Professor Rodolfo Ippoliti into the devastated city of L’Aquila and see for himself the destruction caused by the 6.3 magnitude quake.

Tohoku earthquake, Japan 2011

We have heard that research facilities and equipment at many universities and research institutions in the Tohoku and Kanto regions were damaged as a result of this disaster, and many scientists and students have been forced to stop their research because their valuable research samples or data have been lost. All of the staff and the researchers at NIH are deeply distressed by the devastation that has struck Japan.


  • Southampton University Mountbatten building
  • U. of York Chemistry - 1980
  • U. of York History - 1992
  • U. of York, fire in student room - 1993
  • University of Glasgow

Professor Sir Graeme Davies said that a substantial amount of research had been lost in the fire.

U. of York chemistry building
Strathclyde university engineering department
  • Further disruption for Strathclyde teaching students 12 September 2012 (The Journal)
    bq. The disruption began on 7 February when 150 students had to be evacuated as a fire started in the Roche Lab in the university's chemical engineering department, forcing the university to relocate lectures across the campus including the Royal College, and Students' Association building on John Street.

U. of Nottingham - GlaxoSmithKline Carbon Neutral Laboratory for Sustainable Chemistry

The state-of-the art building, which had been partly funded by a £12m grant from GlaxoSmithKline (GSK), was still under construction and due to be completed by next year. It was to be "the world's first carbon neutral lab", the university said, and would have housed work aimed at "fundamentally changing how we do chemistry in a more sustainable way".


Hurricane Katrina
Hurricane Sandy

When Hurricane Sandy struck New York, it washed away years of scientific research from the New York University School of Medicine, including genetically modified mice, enzymes, antibodies and DNA strands.

Flooding and blackouts caused by super storm Sandy have had a devastating impact on scores of scientists in the Big Apple, with one research center losing thousands of lab mice as well as precious reagents—a situation that could set some researchers back years.

Although New York University (NYU) was clearly the research facility hardest hit by this week’s storm, others were also affected. Leslie Vosshall, who studies the olfactory system of mosquitoes at Rockefeller University, located about 35 blocks further up river from NYU, shut down a computer server in the basement on Sunday, but fears it could have been damaged from flooding. She has had to wait for the university to pump out the water, before she can check on it. “We do have some of the data backed up elsewhere, but it would set us back significantly.”

Sandy and Allison...

In 2001, a tropical storm called Allison flooded Houston with several feet of rain and pushed 10 million gallons of water into the medical-school basements at the University of Texas. The disaster drowned at least 4,000 rats and mice, along with 78 monkeys, 35 dogs, and 300 rabbits. (More than half the animals on campus had been living underground.) Nearby, at the Baylor College of Medicine, basement flooding killed 30,000 mice.

Tropical storm Allison

Soaked hard drives and drowned lab animals may delay new medical discoveries by months or years, but hope survives as research facilities dry out.

Tropical Storm Allison's flood caused the following losses at Baylor College of Medicine and the University of Texas-Houston Medical School:
  • One calf
  • Thirty-five dogs
  • Seventy-eight monkeys
  • Several hundred rabbits
  • More than 30,000 transgenic mice and rats
  • A state-of-the-art MRI machine worth $2 million
  • Ten years' worth of data on spinal cord injuries
  • A 20-year collection of 60,000 breast tumor samples

As well as destroying research animals, the floodwater has swamped computers. It has also caused power failures, knocking out the refrigerators and freezers used to store samples for research. Back-up cell cultures used for research into cancer at the Baylor College of Medicine will have died, say local officials.


HONOLULU — Heavy rain sent water as much as 8 feet deep rushing through the University of Hawaii's main research library Saturday, destroying irreplaceable documents and books, toppling doors and walls and forcing a few students to break a window to escape.

Lyttle's genetic research on the Drosophila goes back 35 years and some of it is irretrievably lost, he said.

McBride and much of the library staff worked all day Sunday to try to save some of the 90,000 photographs stored in the basement along with rare government documents and Hawaiian maps.

The flood also destroyed computers, books, magazines and equipment.

  • Classes canceled at UH on Wednesday 2 November 2004
    bq. ...But researchers at the University of Hawaii, which was hard hit, say the flash flood caused untold losses of research damage in computers damaged by flood waters.
  • Also risk during research for physical data... Cereal research programs set back a season from summer flooding 12 August 2014 (Manitoba Co-Operator)
    bq. ...All three programs have been set back a season due to data lost after their plots were inundated by the rising Assiniboine at July’s beginning ... they are starting to talk about what to do to mitigate the risk of this happening again...

Tales Of Lost Data

Recovery of Overwritten Hard Disk Data

5 October 2005 Linux Forums -

Hi, a friend of mine just overwrote two months of her PhD thesis with an older version. I know recovery of overwritten data is possible, but wonder if I'd need special hardware to do it. Does anyone know something about this ?

Thank You.

Stolen laptop had PhD research

19 March 2008 Surrey Leader -

Thirty-five minutes spent in Langley’s Willowbrook Shopping Centre cost a Surrey woman much more than she had anticipated.

Langley RCMP say that while she was shopping from 1-1:35 p.m. last Monday, someone broke into her vehicle and stole a number of items, including a Mac iBook laptop containing the research she had compiled as she worked towards her PhD.

“All that information was on that computer and she has no back-up file,” said Langley RCMP spokesman Cpl. Brenda Marshall.

Google images of Langley Willowbrook

Happiness is the return of a stolen computer, with data intact

27 May 2010 The Press, NZ -

Never has a man been so happy to see a computer full of data spreadsheets.

Claudio De Sassi's world fell apart when a car containing almost three years work towards his PhD was stolen two weeks ago. De Sassi, a Canterbury University academic, could not hide his joy yesterday as police reunited him with his stolen laptop and backpack.

Thugs steal Christmas, doctoral dreams

22 December 2010 KRQE -

A tiny television sits where a big screen used to, and a Christmas tree stands with little underneath it...

Even worse than the gifts, the crooks stole a MacBook Pro laptop and a LaCie hard drive.

The hard drive had … her dissertation and nearly seven years of research for her doctoral degree she was set to finish in a few weeks. Osuna had everything backed up on a separate hard drive in a safe, but burglars made off with that too.

"All I could think about is that all that time is gone, all that effort, everything is gone," Osuna said.

Stolen hard drive contained almost completed PhD thesis

11 October 2012 Wanneroo Times -

A HOCKING mother has pleaded for the people who broke into her home on Saturday to return a portable hard drive containing her almost completed PhD thesis.

...“They stole two laptops, one of which has my thesis on it, as well as my portable hard drive, which had my back-up on it, as well as a TV and my husband’s mobile phone,” she said.

... The ECU Joondalup postgraduate student said the portable hard drive was worth less than $30, but was priceless to her.

Laptop Stolen From OSU Doctoral Student

NBC4i January 06 2011 -

...her car was broken into and her chrome Mac book pro was stolen. She has a back-up for all but the last six months of research, but the most important part of the research had happened recently.

Lost Thesis Poster

Recovery > Current PhD Students, PhD Life. 29 September 2011 -

I've 'lost' my thesis

Yes, I 'lost' my thesis today, at around 12:42pm (thesis RIP), microsoft word couldn't cope with the size of the document and my file got corrupted. I'd removed a small chunk of it and did some formatting to decrease its size yesterday but that obviously didn't stop it happening. After a few hours trying to recover it, I gave in and called for help. I then found out that, even if I'd managed to recover it, it probably wouldn't be the whole document, there could be parts missing, formatting gone awol, etc. No sweat though, I regularly back up my work so it's just today's work that's been lost, well morning and lunch really as I spent the afternoon attempting to salvage it ;-) bit stressful but hey ho, not the end of the world. So for those of you who don't back your work up, start doing it now! And regularly! I can't possibly imagine what would have happened to me if I'd really lost everything weeks before submission...

Saving the data!

AG Daws blog Back It Up 1 August 2011 -

I was busy in the lab one day writing my Honours thesis when the fire alarm went off. I assumed it was a drill. I kept on writing. That is, until the fire warden found me. He said the lab next door was on fire and told me to get the hell outside with everybody else.

I stared at him, then at the ageing Apple Macintosh computer with all of my precious words painstakingly hammered into place with two fingers. (This was before I could touch-type.) Then I looked at the jars of extremely flammable fixative and solvents and God-only-knows-what-else lining the shelves. (This was also before occupational health and safety was given much credence.)

I can tell you one thing—Word’s auto-save feature didn’t give me much comfort on that day. I fought off the fire warden long enough to unplug the computer from the wall and disentangle it from various peripherals. Then I carried the damned thing downstairs in my arms.

That was when I started backing up my work religiously.

Thesis Writing: Backing Up

Making Bones blog, 4 September 2012 -

I used to transfer my files between computers on an external hard drive. This meant I had all my files on both my work and home computer and the external hard drive. This worked until instead of working on the actual computer and then transferring files between computers I decided it was easier to just keep the most recent copy on the external. Soon I was only using the external and my computer files were a few months out of date. Then, one day, the external got knocked off a table and broke when it hit the floor. The files had to be restored by a technology company for $1600. This, obviously, was not what was meant by “backing up”.

...I do have one special backup method for my thesis write-up. A USB necklace. If the internet dies, my hard-drive gets smashed by a bulldozer, and both of my computers go up in flames, I’ll still have my thesis around my neck.

Laptop stolen through a window

It’s like half of my brain has been removed, 4 June 2014 -

She said: “It’s like half of my brain has been removed. It’s got five years’ work on it including my teaching notes, which are quite precious. It will definitely hamper my teaching, as I will have to go back and rewrite lectures.

“It also contains notes on my students, on research and human rights, and the book I’ve been working on since 2011, which is subtitled ‘refugee writing’, about refugees and literature.

“I was looking to get the book published next year, but it will probably be 2016 now. I will have to do a lot of the work again.

“While a lot of the work has been saved elsewhere, a lot of it hasn’t, so there’s a lot of archive work that I will have lost.

“The lesson is to always back up your work and not to leave things in your home near windows.”

Mistakes happens everywhere...

Toy Story II Blu-Ray extras

Someone running "rm *" on the movie... and the backups having been failing... and the movie being recovered from a copy that someone working at home had taken with them.

When a size command was run on the Toy Story 2 directory, it was only 10% of the size it should have been. 90% of the movie had been deleted by the stray command.

YouTube copy of video:

Top 10 Data Disasters from Kroll OnTrack data recovery



  • 10. Rinse cycle
  • 9. Don't drink and work!
  • 8. Lost in the desert
  • 7. Erase all traces
  • 6. Slippery hands
  • 5. Lost in transit
  • 4. Disgruntled employee
  • 3. Careful driver
  • 2. Sweeping illness
  • 1. Don't ignore blinking RED lights








Armed Robber Stole Laptops At Lark Café Last Night

Ditmas Park Corner, November 14, 2014

A writers group meeting at Lark Café (1007 Church Avenue) was robbed at gunpoint around 8:50pm Thursday night, when neighbors told us a man walked in with a gun and stole laptops from the nine-person group. We confirmed the robbery with Lark this morning, and we are relieved to hear that no one was physically injured.

Replace my stolen MacBook Pro

GoFundMe, September 16, 2014

Right after starting graduate school, my apartment was violently broken into and much of my belongings were taken. Many things were recovered, but one thing that wasn't was my laptop. My laptop had all of my undergraduate work, honors thesis work, and all of my debate work on it. It was my right hand when it came to my college career.

Wits student returns stolen textbooks to UJ student

Wits Vuvuzela, June 3, 2014

Gideon Chatanga lost three years of his doctoral thesis and personal belongings in a robbery two weeks ago but thanks to Witsie Emery Kalema, he now has some of his textbooks back.

Student reunited with her stolen hard drive

Otago Daily Times, Fri, 26 Dec 2014

...The return of the hard drive was made on payment of the $300 reward...

Losing Portable Devices

Rising Trend in Lost USB Flash Drives

All USB, 2 March 2011 -

At more than 500 laundromats and dry cleaners in the UK, 17,000 USB flash drives were left behind between December 2010 and January 2011. According to the study’s researchers at Credant Technologies, that’s a 400 percent increase in lost devices compared to the year before.

London’s Businesses "Facing Daily Data Loss Risks", Says EMC

Tech Week Europe

Research from EMC and Mozy found that over a third (34 percent) of workers admitted to losing a work device with data stored on it over the past 12 months, with laptops, smartphones, USB drives and hard drives the main victims.

Additional research by EMC found that 45 percent of organisations aren’t able to recover all their data following a data loss incident, with the average business facing an annual financial loss of $585,892.

However, see also: Time to stop the 'Fake' research

The Lost Laptop Problem

  • 2010 Ponemon Institute report for Intel re. US laptops
    • On average, 2.3% of laptops assigned to employees are lost each year
    • In education & research that rises to 3.7%, with 10.8% of laptops being lost before the end of their useful life
      • ~3 years i.e. within 1 PhD of allocation!
    • 75% lost outside the workplace
  • Very similar results from 2011 European report!

Intel 2010, The Billion Dollar Lost Laptop Problem -

Intel 2011, The Billion Euro Laptop Problem -


Laptop Reliability

  • 2011 PC World Laptop Reliability Survey from 63,000 readers:
    • 22.6% had significant problems during the product's lifetime
    • Of which...
      • 19% had OS problems (~1 in 25 of all laptops)
      • 18% had HDD problems (~1 in 25 of all laptops)
      • 10% had PSU problems (~1 in 50 of all laptops)

PC World 2011 -

Hard Disk Failures

  • Failure Trends In A Large Disk Drive Population
    • Usenix conference on File and Storage Technologies 2007 (FAST '07)
    • Eduardo Pinheiro, Wolf-Dietrich Weber & Luiz André Barroso, Google Inc.
  • Data collected from over 100,000 disk drives at Google
  • As part of repairs procedures:
    • ~13% of disk drives replaced over 3 years
    • ~20% of disk drives replaced over 4 years


More info

Cloud Failures

In short, work done on one aspect of Dedoose led to the failure of another, cascading to pull down all of Dedoose. The timing was particularly bad because it occurred in the midst of a full database encryption and backup. This backup process, in turn, corrupted our entire storage system.

The backup file of data through April 11th has been pieced back together, however it remains encrypted and corrupted. We are running a variety of tools on the file to restore things to a state where we can merge the data back into the live database.

At this point, we are very happy to report that we have recovered data entered to Dedoose through March 30th. We are still working on the details of how these data will be safely merged into the master database.

The data that have been viewable on our staging environment represent those that have been recovered for work added to Dedoose between March 2nd and March 30th. These data will be merged back into the live database beginning tonight at 8pm PST. It is necessary to shut down Dedoose services during this procedure, which should last approximately 4 hours.

Terms of use in the cloud

Google Terms Of Service

20 April 2015 Google Terms Of Service

When you upload, submit, store, send or receive content to or through
our Services, you give Google (and those we work with) a worldwide
license to use, host, store, reproduce, modify, create derivative works
(such as those resulting from translations, adaptations or other changes
we make so that your content works better with our Services), communicate,
publish, publicly perform, publicly display and distribute such content.
The rights you grant in this license are for the limited purpose of operating,
promoting, and improving our Services, and to develop new ones. This license
continues even if you stop using our Services (for example, for a business
listing you have added to Google Maps). Some Services may offer you ways to
access and remove content that has been provided to that Service. Also, in
some of our Services, there are terms or settings that narrow the scope of
our use of the content submitted in those Services. Make sure you have the
necessary rights to grant us this license for any content that you submit
to our Services. 

In short, you retain IP over the content, but grant Google and those they work with the rights to use your content to develop and promote Google services.

These conditions have been present since 1 March 2012.

Microsoft Services Agreement

19 October 2012 Microsoft services agreement :

When you upload your content to the services, you agree that it may
be used, modified, adapted, saved, reproduced, distributed, and
displayed to the extent necessary to protect you and to provide, protect
and improve Microsoft products and services. For example, we may
occasionally use automated means to isolate information from email,
chats, or photos in order to help detect and protect against spam and
malware, or to improve the services with new features that makes them
easier to use. When processing your content, Microsoft takes steps to
help preserve your privacy.

20 April 2015 Microsoft services agreement

3.1. Who owns my Content that I put on the Services? You do. Some Services
enable you to communicate with others and share or store various types of
files, such as photos, documents, music and video. The contents of your
communications and your files are your “Content” and, except for material
that we license to you that may be incorporated into your own Content (such
as clip art), we don't claim ownership of the Content you provide on the
Services. Your Content remains your Content, and you're responsible for it.

3.2. Who can access my Content? You have initial control over who may access
your Content. However, if you share Content in public areas of the Services,
through features that permit public sharing of Content, or in shared areas
available to others you’ve chosen, you agree that anyone you've shared Content
with may, for free, use, save, reproduce, distribute, display, and transmit
that Content in connection with their use of the Services and other Microsoft,
or its licensees’, products, and services. If you don't want others to have
that ability, don't use the Services to share your Content. You represent and
warrant that for the duration of this Agreement you have (and will have) all
the rights necessary for the Content you upload or share on the Services and
that the use of the Content, as contemplated in this section 3.2, won't violate
any law.

3.3. What does Microsoft do with my Content? When you transmit or upload Content
to the Services, you're giving Microsoft the worldwide right, without charge, to
use Content as necessary: to provide the Services to you, to protect you, and to
improve Microsoft products and services. Microsoft uses and protects your Content
as outlined in the Windows Services Privacy Statement, Bing Privacy Statement,
MSN Privacy Statement, and Office Services Privacy Statement
(collectively the “Privacy Statements”). 

In short, once you share Content you give the people you shared it with the right to use, reproduce and distribute it, free of charge, in connection with the Services.

DropBox Terms Of Service


24 April 2014 DropBox Terms of Service

When you use our Services, you provide us with things like your files, content,
email messages, contacts and so on ("Your Stuff"). Your Stuff is yours. These 
Terms don't give us any rights to Your Stuff except for the limited rights that
enable us to offer the Services.

We need your permission to do things like hosting Your Stuff, backing it up, and
sharing it when you ask us to. Our Services also provide you with features like
photo thumbnails, document previews, email organization, easy sorting, editing,
sharing and searching. These and other features may require our systems to access, 
store and scan Your Stuff. You give us permission to do those things, and this 
permission extends to trusted third parties we work with.


Our Services let you share Your Stuff with others, so please think carefully about
what you share. 

Archiving Data

BBC Domesday Project

1986 Project to do a modern-day Domesday book (early crowd-sourcing)
  • Used “BBC Master” computers with data on laserdisc
  • Collected 147,819 pages of text and 23,225 photos
  • Media expiring and obsolete technology put the data at risk!
Domesday Reloaded (2011) was created to allow long-term access to the data. Lessons learned:
  • Don't use obscure formats!
  • Don't use obscure media!
  • Don't rely on technology being available!
  • Do keep original source material!

Google images for BBC Domesday

Sharing Data

Piwowar, Heather A., Roger S. Day, and Douglas B. Fridsma. Sharing detailed research data is associated with increased citation rate.
PLoS One 2.3 (2007): e308.

More To Read

Albers, S. Editorial: Well Documented Articles Achieve More Impact
BuR Business Research Journal, Vol. 2, No.2, May 2009

Anderson, Richard G., et al. The role of data/code archives in the future of economic research.
Journal of Economic Methodology 15.1 (2008): 99-119.

Borgman, Christine L. "The conundrum of sharing research data."
Journal of the American Society for Information Science and Technology 63.6 (2012): 1059-1078.

Campbell, Eric G., et al. "Data withholding in academic genetics."
JAMA: the journal of the American Medical Association 287.4 (2002): 473-480.

Evanschitzky, Heiner, et al. Replication research's disturbing trend.
Journal of Business Research 60.4 (2007): 411-415.

Fischer, Beth A., and Michael J. Zigmond. "The essential nature of sharing in science."
Science and engineering ethics 16.4 (2010): 783-799.

Freckleton, R.P., P. Hulme, P. Giller and G. Kerby. 2005. The changing face of applied ecology.
J. Appl. Ecol. 42:1–3.

Gleditsch, N.P., C. Metelits and H. Strand. 2003. Posting your data: Will you be scooped or will you be famous?.
Int. Stud. Perspect. 4:89–97.

Lancaster, Larry, and Alan Rowe. Measuring Real World Data Availability.
Proceedings of the LISA 2001 15th Systems Administration Conference. 2001.

McCullough, Bruce D., Kerry Anne McGeary, and Teresa D. Harrison. Lessons from the JMCB Archive.
Journal of Money, Credit, and Banking 38.4 (2006): 1093-1107.

Piwowar, Heather A., and Wendy W. Chapman. "Public sharing of research datasets: a pilot study of associations."
Journal of informetrics 4.2 (2010): 148-156.

Piwowar, Heather A., et al. "Towards a data sharing culture: recommendations for leadership from academic health centers."
PLoS medicine 5.9 (2008): e183.

Schroeder, Bianca, and Garth A. Gibson. Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you.
Proceedings of the 5th USENIX Conference on File and Storage Technologies (FAST). 2007.

Vandewalle, Patrick, Jelena Kovacevic, and Martin Vetterli. "Reproducible research in signal processing."
Signal Processing Magazine, IEEE 26.3 (2009): 37-47.

Whitlock, Michael C. "Data archiving in ecology and evolution: best practices."
Trends in ecology & evolution 26.2 (2011): 61-65.

Whitlock, Michael C., et al. "Data archiving."
The American Naturalist 175.2 (2010): 145-146.

Wicherts, Jelte M., Marjan Bakker, and Dylan Molenaar. "Willingness to share research data is related to the strength of the evidence and the quality of reporting of statistical results."
PloS one 6.11 (2011): e26828.

Thatcher, 70 (1807): 167-168
Science 16 August 1929: Vol. 70 no. 1807 pp. 167-168
DOI: 10.1126/science.70.1807.167

Research Data in the Digital Age
Daniel Kleppner and Phillip A. Sharp
Science 24 July 2009: Vol. 325 no. 5939 p. 368
DOI: 10.1126/science.1178927

Sharing Research Data Urged
Science 16 August 1985: Vol. 229 no. 4714 p. 632
DOI: 10.1126/science.229.4714.632


JISC Web2 Rights
JISC Legal

There are three main areas of law affecting data management:

In addition, for data stored in the cloud, the USA PATRIOT Act may be relevant.


Copyright grants the copyright holder rights relating to the use of the copyright material; in addition, certain moral rights are granted to the creator of the materials. Copyright arises automatically when new creative material is produced - i.e. the material must be more than a simple collection of other data. Copyright is a separate item of property from the original work, and the sale of the original work does not automatically pass copyright on to the new owner of that work (e.g. selling a score or painting does not automatically transfer the copyright). The particular rights and the duration of the copyright period depend on the type of material.

For audio and digital music research, rights of particular interest relate to:
  • musical compositions and audio recordings - a CD can be covered by three separate copyrights, one for the design of the packaging, one for the sound recording on the CD and one for the musical composition recorded
  • typographical arrangements - these cover not only papers (which are also covered as literary works) but also the layout of spreadsheets and design of databases.

Pay The Piper has a very good post explaining music copyright, which includes:

If you compose a completely original piece of music then it is your own property - you own the copyright, in other words.

Arranging existing music is fraught with difficulties. To put it very simply (and this is indeed a gross simplification) until the composer has been dead for seventy years his music is copyright and you may not make a written arrangement of it without permission.

Lots more in the post though, so it's worth reading if you want to know more about music copyright!

It is important to note that copyright does not cover the ideas expressed within a work, only the particular form in which that work has been captured. The data within a spreadsheet is not covered by copyright; only the particular layout of that data is.

We note that simple anthologies - e.g. a collection of "complete works" or works created during a certain period - do not get copyright on the content, although the typographical layout may be copyright.

Fair dealing / fair use regulations allow specific uses of copies of original copyright materials (NB: not copies of copies!) without breaching copyright. However, fair dealing does not apply to sound recordings, films and broadcasts. There are JISC Guidelines for Fair Dealing in an Electronic Environment, and specific clauses in the legislation cover use in education and training or for personal study.

The legislation:

Moral Rights

The author of a work always retains two moral rights regarding the content:
  • The right to be identified as the author
  • The right to object to derogatory use of the material.

Database Rights

In the UK, if a "substantial investment" is made in "obtaining, verifying or presenting" the contents of a database then the database will be protected by database rights. The owner of those rights will be the person who "takes the initiative" in the creation of the database - that "person" being the employer if the database is made by an employee in the course of their work. Database rights are infringed by extraction or re-utilisation of a substantial part of the database.

Fair dealing rules exist for database rights - users of databases are allowed to extract data for non-commercial use in research and teaching (with acknowledgment of the source).

Database rights last for 15 years from the creation/publication of the database and may be renewed if the database changes substantially.

More information at:

The act itself is at:

More Information

UK university materials regarding copyright and intellectual property:

Further sources of information:

Some articles of interest from outside the UK

Australian IP law blog posts re. media and copyright.

US articles from the Public Domain Sherpa Tutorial on Copyright and the Public Domain include:
  • What makes a derivative work
    a derivative work must use enough of the prior work that the average person would conclude it had been based on or adapted from the prior work
  • Compilations
    compilations are copyrighted if they show minimal creativity (e.g. not simply all works by one person, or works ordered by date)
  • Copyright Renewal
    Many works did not have their copyright renewed and therefore fell out of copyright and into the public domain in the US - an estimated 15% of works had their copyright renewed. Renewals for works from 1950-1963 appear in the online US copyright database.

CHM Super Sound (a South Pacific record company) states that:

A melodic phrase of a song is in copyright. The lyrics are in copyright. Chord progressions in a music composition however, are not copyright material.

University of Washington Copyright Connection

WIPO Understanding Copyright and Related Rights

Berne Convention for the Protection of Literary and Artistic Works

Chord Progressions and Copyright:

Data Protection

Data protection protects the rights of individuals over their personal information. In particular, the Data Protection Act covers the processing of data relating to identifiable living individuals. The core of the Act is a set of data protection principles. These state that personal data shall be processed fairly and lawfully, and shall not be processed unless the subject gave their consent, except under specific conditions (for sensitive personal data such as marital status, ethnic origin or health information there are further restrictions).

Fair and lawful processing requires that the data was not obtained by deception, that it is kept confidential, and that the data subject was told who will process the data and for what purpose. In addition, personal data should be:
  • obtained only for specified purposes, and should not be used for anything else;
  • adequate, relevant and not excessive in relation to the purposes (i.e. only the data that is required);
  • accurate and, where necessary, kept up to date;
  • kept no longer than is necessary for the purposes;
  • processed in accordance with the rights of the data subjects under the Act;
  • protected from:
    • unauthorised or unlawful processing;
    • loss, destruction or damage;
  • not transferred outside the European Economic Area unless similar protection is provided.

In general, data subjects have a right of access to data held about them. The onus to provide this data falls on QMUL as the data controller, so QMUL must be able to find any personal data relating to identifiable living individuals held within the college.

However, there is a specific exemption for research which is not targeted at particular individuals and will not cause distress or damage to a data subject; this allows data to be processed for other purposes and held indefinitely. Data subjects also have no immediate right of access to personal data where the data is processed for research purposes and the results do not identify the data subjects.

JISC state:

Data controllers are required by the Act to process personal data only where they have a clear purpose for doing so, and then only as necessitated by that purpose. A data controller’s purpose for any personal data processing operation should thus be clearly set out in advance of the processing, and should be readily demonstrable to data subjects.

They also note:
  • that the majority of the Data Protection principles do apply to research data;
  • that there should be a review to ensure compliance with Data Protection requirements;
  • that a mechanism should be in place for subjects to object to the processing if they believe it would cause them damage or distress;
  • and that particular care must still be taken when processing involves sensitive data.

As data protection applies to identifiable living individuals, it is generally best practice to anonymise any data relating to individuals as soon as possible, discarding any information that allows individuals to be identified. In order to comply with the Data Protection Act, a suitable consent form should be provided allowing the use of data relating to identifiable living individuals in research. Alternatively, such consent may be recorded in interviews. Within QMUL, research which involves human participants and data relating to them should be approved by the college Research Ethics Committee - the fast-track ethics review should be sufficient for most C4DM research.
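As an illustration of the anonymisation step described above, the sketch below shows one common approach: stripping direct identifiers and replacing the participant ID with a salted pseudonym. This is a minimal, hypothetical example (the field names and salt handling are assumptions, not a prescribed procedure) - note that pseudonymised data can still count as personal data under the Act if re-identification is possible, and indirect identifiers must be considered too.

```python
import hashlib

# Hypothetical project salt; in practice keep it secret and separate from
# the data, or discard it once linkage back to participants is not needed.
SALT = b"project-specific-secret"

# Illustrative fields that directly identify a participant.
DIRECT_IDENTIFIERS = {"name", "email", "address"}

def pseudonymise(record):
    """Return a copy of a participant record with direct identifiers
    removed and a salted hash standing in for the participant ID."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    cleaned["participant"] = hashlib.sha256(
        SALT + record["participant"].encode()
    ).hexdigest()[:12]
    return cleaned

record = {"participant": "P001", "name": "A. Person",
          "email": "a@example.org", "rating": 4}
print(pseudonymise(record))  # identifiers removed, stable pseudonym remains
```

The salted hash keeps records from the same participant linkable across the dataset without storing who they are; dropping the salt afterwards makes the link one-way.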

Further information: The Act:

Freedom Of Information

The Freedom Of Information Act (FoI) gives people the right to request data held by public bodies. It does not matter where the data originated, only who holds it. Copyright relating to information supplied under FoI requests remains unchanged - and provides you with protection from other people (mis)using your data.

The Freedom of Information Act states that research data:
  • can be held indefinitely;
  • is not subject to FoI requests unless individuals are identified in published research;
  • can be used for other research uses;
  • and may be exempt from FoI requests on grounds of (imminent) future publication or commercial interest.

Note that this means that if a researcher from another institution publishes research identifying individuals, and you use their data, then those individuals will have the right to request the data from QMUL.

Additionally, if data will be published through the college's normal publication scheme, then there is no onus on the college to provide the data in response to FoI requests - publishing data removes the additional FoI obligations.

Further information: The Act:


USA PATRIOT Act

The 2001 USA PATRIOT Act gives the US government the right to search and seize data held by any US company or its subsidiaries. It does not matter where the data is physically stored: if it is held by a US company (Microsoft, Apple, Google, DropBox, Amazon...) then the US government can seize it. However, in order to do so the US government must obtain a court order for the purpose of an anti-terrorism investigation - they can't just idly decide to grab your data.

Note that these rights are not terribly different to the rights of other countries to access data (see Hogan Lovells' white paper).

Further information: The Act:
  • 2001 "Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism" (USA PATRIOT) Act (Link).


Research Council Requirements

Research councils require data management plans as part of grant proposals, and their policies also stipulate that research data created through their funding should be published for other researchers to use.

The DCC provides an overview of funders' data policies and individual pages for each funder's policy. The London School of Hygiene and Tropical Medicine (LSHTM) have also published a report on funder requirements for data preservation and publication.

The AHRC and EPSRC policies are most relevant to work at C4DM.

Arts and Humanities Research Council (AHRC)

From AHRC Funding Guide (PDF downloadable from AHRC web-site)

Deposit of resources or datasets
Grant Holders in all areas must make any significant electronic resources or datasets created as a result of research funded by the Council available in an accessible and appropriate depository for at least three years after the end of their grant. The choice of repository should be appropriate to the nature of the project and accessible to the targeted audiences for the material produced.
If you are a Grant Holder in the area of archaeology and decide to deposit with The Archaeology Data Service (ADS), then you should consult them at or before the start of the proposed research to discuss and agree the form and extent of electronic materials to be deposited with the ADS. If the deposit occurs after 31 March 2013, then there will be a charge for this deposit.

Self Archiving
The AHRC requires that funded researchers:
• ensure deposit of a copy of any resultant articles published in journals or conference proceedings in an appropriate repository
• wherever possible, ensure deposit of the bibliographical metadata relating to such articles, including a link to the publisher’s website, at or around the time of publication.
Full implementation of these requirements must be undertaken such that current copyright and licensing policies, for example, embargo periods and provisions limiting the use of deposited content to non-commercial purposes, are respected by authors.

The DCC provides a summary of AHRC policy.

Engineering and Physical Sciences Research Council (EPSRC)

The EPSRC data management principles state that:
  • research data should be made freely available with as few restrictions as possible
  • data with long term value should remain accessible and usable for future research
  • metadata should be made available to enable other researchers to understand the potential for further research and re-use of the data
  • data management policies and plans should exist for all data – and be adhered to!
  • published results should always include information on how to access the supporting data
  • all users of research data should acknowledge the sources of their data

The DCC provides a summary of EPSRC policy.