Backing up

Why back up your data?

How to back up data

The core principle is that backup copies of data should regularly be stored in a different location to the main copy.

Suitable locations for backups are:
  • A firesafe, preferably in a different building
  • A network copy
    • A network drive e.g. provided by the institution
    • Internet storage (in the cloud)
    • A data repository - this could be a public thematic / institutional repository for publishing completed research datasets, or an internal repository for archiving datasets during research
  • A portable device / portable media which you keep somewhere other than under your desk / with your laptop.

Backing up to an external device means that you need physical access to that device... network drives and "internal" backups are usually more readily available, e.g. you can back up every time you're in the office / lab or at home.

The best backup is the one you do. How often you need to back up depends very much on how much new data you've generated and how difficult it would be to recreate. Primary data (e.g. digital audio recordings of interviews) should be backed up as soon as possible, as it may be very time-consuming to redo. If an algorithm runs for days generating data files, you may want to set it up to create backup copies as it proceeds, rather than requiring a backup at the end of the processing. If you've changed some source code and can regenerate the data in an afternoon, you may not need to back up the data - but the source code should be safely stored in a version control system somewhere. If you feel too busy to back up your data, that may be a hint that you should make sure there's a copy somewhere safe!
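
As a rough sketch of the "back up as it proceeds" idea for long-running jobs, the Python example below copies each output file to a backup location as soon as it has been written. The paths and the process_chunk function are placeholders for illustration, not part of any specific C4DM setup.

    import shutil
    from pathlib import Path

    # Placeholder locations - substitute your own working and backup directories
    # (the backup directory might be a mounted network drive, for example).
    WORK_DIR = Path("results")
    BACKUP_DIR = Path("backup/results")

    def process_chunk(index):
        """Stand-in for one step of a long-running computation."""
        return ("data for chunk %d\n" % index).encode()

    def run_with_backups(num_chunks):
        WORK_DIR.mkdir(parents=True, exist_ok=True)
        BACKUP_DIR.mkdir(parents=True, exist_ok=True)
        for i in range(num_chunks):
            out_file = WORK_DIR / ("chunk_%04d.dat" % i)
            out_file.write_bytes(process_chunk(i))
            # Copy each finished file straight away, so a crash part-way
            # through the run loses at most the chunk currently being computed.
            shutil.copy2(out_file, BACKUP_DIR / out_file.name)

    if __name__ == "__main__":
        run_with_backups(10)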

Remember that if you delete your local copy of the data, the copy that was the backup becomes the primary copy... is that copy itself backed up anywhere? If a network drive is used, it may be backed up to tape - but check this with your IT provider.

Details of the resources available to C4DM researchers are given here.

Can't I just put it in the cloud?

You can, but the service agreement with the provider may give them a lot of rights over your data... review the agreement and decide whether you are happy with it!

Looking at service agreements in November 2012, we found that Google's terms let them use your data in any way which will improve their services - including publishing your data and creating derivative works. This is partly a side-effect of Google switching to a single set of terms for all their services. For Microsoft SkyDrive, the Windows Live services agreement is pretty similar.

Apple's iCloud is better as they restrict publication rights to data which you want to make public / share. Dropbox is relatively good - probably because they just provide storage and aren't mining it to use in all their other services!

Even so, there are issues. Data stored in the cloud is still stored somewhere... you just don't have control over where that location is. Your data may be held in a country which gives its government the right to access it. Also, the firm that stores your data may still be required to comply with the laws of its home country even when the data is stored elsewhere. It is, however, unlikely that digital audio research data will be sensitive enough for this to be an issue.

A Forbes article, "Can European Firms Legally Use US Clouds To Store Data", stated that:

Both Amazon Web Services and Microsoft have recently acknowledged that they would comply with U.S. government requests to release data stored in their European clouds, even though those clouds are located outside of direct U.S. jurisdiction and would conflict with European laws.

If you are worried about what rights a service provider may have over your data in their cloud, then consider encrypting it - e.g. using an encrypted .dmg file on a Mac, or using TrueCrypt for a cross-platform solution. These create an encrypted "disc" inside a file which you can mount and treat like a real disc - but all of the content is encrypted. Note that changing data on an encrypted disc may change the entire contents of the disc file, requiring the whole disc to be resynchronised to the cloud storage. Alternatively, BoxCryptor or encFs (also available for Windows) encrypt individual files separately, allowing synchronisation to operate more effectively.
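
As a minimal sketch of the per-file approach, the Python example below uses the cryptography package (an assumption for illustration, rather than one of the tools named above) to encrypt a file before it is placed in a folder that gets synchronised to the cloud; the file names and paths are placeholders.

    from pathlib import Path
    from cryptography.fernet import Fernet  # pip install cryptography

    def load_or_create_key(key_path):
        """Keep the key somewhere safe that is NOT synchronised to the cloud."""
        key_path = Path(key_path)
        if not key_path.exists():
            key_path.write_bytes(Fernet.generate_key())
        return key_path.read_bytes()

    def encrypt_to_sync_folder(plain_path, synced_path, fernet):
        """Write only the ciphertext into the cloud-synchronised folder."""
        synced_path = Path(synced_path)
        synced_path.parent.mkdir(parents=True, exist_ok=True)
        synced_path.write_bytes(fernet.encrypt(Path(plain_path).read_bytes()))

    if __name__ == "__main__":
        fernet = Fernet(load_or_create_key("backup.key"))
        # Placeholder local file and cloud-synchronised destination.
        Path("notes.txt").write_text("example research notes\n")
        encrypt_to_sync_folder("notes.txt", "CloudSync/notes.txt.enc", fernet)
        # To restore:
        # fernet.decrypt(Path("CloudSync/notes.txt.enc").read_bytes())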

SpiderOak provide "zero knowledge" privacy, in which all data is encrypted locally before being uploaded to the cloud, and they do not hold a copy of your decryption key - i.e. they can't actually examine your data.

See the JISC/DCC document "Curation In The Cloud" - http://tinyurl.com/8nogtmv

Surely there must be a quicker way...

Figuring out which files to copy can be very tedious, and usually leads to just backing up large chunks of data together. However, utilities can be used to copy just those files that have been updated - or even just update the parts of files that have changed.

The main command-line utility for this on UNIX-like systems (Mac OS X, Linux) is rsync. From the rsync man page:

Rsync is a fast and extraordinarily versatile file copying tool. It can copy locally, to/from another host over any remote shell, or to/from a remote rsync daemon. It offers a large number of options that control every aspect of its behavior and permit very flexible specification of the set of files to be copied. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination. Rsync is widely used for backups and mirroring and as an improved copy command for everyday use.

Rsync finds files that need to be transferred using a "quick check" algorithm (by default) that looks for files that have changed in size or in last-modified time. Any changes in the other preserved attributes (as requested by options) are made on the destination file directly when the quick check indicates that the file's data does not need to be updated.
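
As a hedged illustration of a typical invocation, the Python sketch below simply calls rsync with archive mode, verbose output and --delete; the source and destination paths are placeholders and would need to be replaced with your own locations.

    import subprocess

    # Placeholder paths - the trailing slash on the source means
    # "the contents of this directory" rather than the directory itself.
    SOURCE = "/home/alice/research-data/"
    DEST = "/mnt/backup/research-data/"

    # -a  archive mode: recurse into directories and preserve times,
    #     permissions, symlinks and so on
    # -v  verbose: list the files as they are transferred
    # --delete  remove files from the backup that no longer exist at the source
    subprocess.run(["rsync", "-av", "--delete", SOURCE, DEST], check=True)

The same options can, of course, be given directly on the command line, or the call could be scheduled to run regularly.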

For Windows, rsync is also available, and DeltaCopy provides a GUI over it.

In addition, there are modern continuous backup programs (e.g. Apple's "Time Machine") which will synchronise data to a backup device and allow you to restore files from earlier points in time. However, these solutions may not be appropriate if your data is very large.

Version control systems for source code are optimised for storing plain-text content, and are not an appropriate way to store data unless that data is itself text (e.g. CSV files).