A few weeks ago someone emailed me to say that he would like to replicate one of my experiments (the validation study of BLEU reported in Reiter and Belz 2009). Great! I think replication of important research is essential to science, and I’m happy that someone thinks this work counts as important research that is worth replicating! But then this person asked if I could provide details of this experiment (eg, exact instructions given to experimental subjects), in order to assist in replication.
A very reasonable request. Unfortunately, my collaborator (Anja Belz) and I ran these experiments in 2007, ie 12 years ago, and we had not kept proper archival records of our experimental design, materials, and results. I discovered that some key information was in a backup zip file which unfortunately was password-protected with a password I could no longer remember. Anja discovered that some key information was on an old computer which she had stored in her house. Anyway, we eventually managed to gather almost all of what was needed for replication. But this was luck, not design.
I had a similar experience a few years ago when a medical researcher wanted some detailed information about our STOP (smoking) project, as part of a meta-analysis. In this case my main medical colleagues had either retired or stopped working as researchers, so I was on my own. I eventually found a document written in the late 90s which seemed to have the requested information, so I sent it to the researcher, and fortunately it did indeed have the information she needed. Again, this was luck rather than design!
So if we as researchers think replication and meta-analysis are important and should be encouraged (which is certainly my view!), then we need to keep good archival records of our experiments. In other words, we should be able to provide detailed information about our experiments 10-20 years after we ran them, even if we have since moved to a new IT environment and/or our colleagues are no longer accessible.
Plan ahead; don't leave it to luck, as I did in the above cases!
What to Archive
You should archive everything needed to repeat an experiment and/or include it in a meta-analysis:
- Material (eg, the data and texts shown to experimental subjects, and how these were chosen or created)
- Subjects (how many, how recruited, demographics)
- Procedure (exact instructions given to subjects, exact screens shown to subjects)
- Results (data produced by experiments)
- Analysis (statistical tests and techniques used to analyse the results, results of this analysis)
- Supporting material (eg, ethics applications)
The above list could be expanded, but hopefully it will suffice for many contexts.
One question is what happens if some of the materials, resulting data, and/or analyses are sensitive and cannot be published, because of either privacy concerns or commercial confidentiality. In an ideal world, we would fully anonymise materials, results, and analyses when we conduct our experiments, but this can be a lot of work, and sometimes it is impractical. For example, Babytalk Nurse was evaluated by having nurses use it while looking after real babies in hospital, which means that the material (what the nurses saw) is confidential. Also, we promised the hospital that we would delete the data sets and texts after the experiment was finished and written up. So in this case archiving the materials would have been very difficult (it would have required thorough anonymisation and permission from the hospital), and we did not keep this information.
I guess the main thing I can say is that you need to do the best you can, subject to the confidentiality and privacy constraints you are working under.
How and Where to Archive
In CS, we are not used to archiving material for a period of decades. For many years, my policy was to take regular backups. Unfortunately, most of my backups from 10+ years ago are inaccessible, either because they are password-protected (and I can't remember the password) or because they are on media (such as zip drives) which I can no longer read. Another potential issue is file formats. For example, most of my statistical analyses 10-20 years ago were done in SPSS. SPSS is still live, but I can imagine a future where everyone uses R, and hence SPSS data files are hard to read.
Similarly, we don't want to store archives just on a local hard disk, since the disk may die or simply be lost 10 years in the future. We can archive in the cloud, but again, some current providers of cloud-based archiving services may no longer exist in 20 years' time.
So one lesson is: don't password-protect, since you won't remember the password in 20 years' time. Instead, sort out what can be shared at the time you do the experiment, and keep this in an unprotected file. Another lesson is that you need to be aware of changes in media, file formats, cloud providers, etc (most of which happen slowly, with plenty of notice), and act to migrate and preserve your archived experiment designs and data.
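One low-tech way to support this kind of migration is to keep a plain-text checksum manifest alongside the archive, so that after copying the files to new media or a new provider you can verify that nothing was corrupted or lost. Here is a minimal sketch in Python using only the standard library (the function name and manifest layout are my own illustration, not anything from the post):

```python
import hashlib
from pathlib import Path

def write_manifest(archive_dir, manifest_name="MANIFEST.txt"):
    """Record a SHA-256 checksum for every file under archive_dir in a
    plain-text manifest, so a future copy of the archive (new disk, new
    cloud provider) can be checked against the original contents."""
    archive = Path(archive_dir)
    lines = []
    for path in sorted(archive.rglob("*")):
        # Skip directories and the manifest itself
        if path.is_file() and path.name != manifest_name:
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            lines.append(f"{digest}  {path.relative_to(archive)}")
    (archive / manifest_name).write_text("\n".join(lines) + "\n")
    return lines
```

Because the manifest is unencrypted plain text, it will still be readable in 20 years, and standard tools (eg, `sha256sum -c` on Linux) can verify the same checksums without this script.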
Archiving experimental material to support replication and meta-analyses is not rocket science; it mainly requires being aware of the issues and taking appropriate actions. So not very “exciting”, but it is important for scientific progress!