Our approach in our research group (Score 1)
This problem occurs even within the same group: people often struggle to repeat the simulations from our own papers, even ones as recent as one year ago. The problems typically stem from people leaving (PhDs finishing, grants expiring, people moving to other jobs), changes in the simulation tools, and so on.
In our Computer Architecture research group we use Mercurial to version the simulator code, so we know when each change was applied. For each simulation, we store both the configuration file used to generate it (which also records the Mercurial revision of the code being used) and the simulation results, or at least only the interesting results. Most simulators offer several verbosity levels, and in most cases most of the output is useless, so we typically store only the interesting data (such as latency and throughput), because otherwise we would run out of disk space.
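To make the idea concrete, here is a minimal sketch of how such a setup could work. This is not our actual tooling; the file names, metric names, and helper functions are illustrative assumptions. It records the Mercurial revision alongside a filtered subset of the results:

```python
#!/usr/bin/env python3
"""Sketch: tag simulation results with the Mercurial revision.

Hypothetical helper, not the group's actual scripts; paths and
metric names are made up for illustration.
"""
import json
import subprocess
import time

def hg_revision(repo="."):
    # `hg id -i` prints the working-directory revision; a trailing '+'
    # means there are uncommitted changes, which would make the run
    # unreproducible from the repository alone.
    rev = subprocess.check_output(["hg", "id", "-i"], cwd=repo, text=True).strip()
    if rev.endswith("+"):
        raise RuntimeError("uncommitted changes; commit before simulating")
    return rev

def record_run(config_path, results, out_path="results.json"):
    # Keep only the metrics we care about (e.g. latency, throughput)
    # instead of the full verbose simulator output, to save disk space.
    interesting = {k: results[k] for k in ("latency", "throughput") if k in results}
    metadata = {
        "hg_revision": hg_revision(),
        "config_file": config_path,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "metrics": interesting,
    }
    with open(out_path, "w") as f:
        json.dump(metadata, f, indent=2)

if __name__ == "__main__":
    # Example with made-up numbers; a real run would parse these
    # from the simulator's output.
    record_run("configs/mesh_4x4.cfg",
               {"latency": 12.3, "throughput": 0.87, "debug_trace": "..."})
```

Refusing to run against a dirty working copy is the important design choice: a stored revision hash is only useful if it actually identifies the code that produced the results.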
Even with this setup, we often have trouble replicating the exact results of our own previous papers, for example because of poor documentation (typical in research, since homebrew simulation tools are not maintained to the standard one would expect from commercial code), changes that introduce subtle effects, code that is lost when someone leaves, or simply large files deleted to save disk space (for example, simulation checkpoints or network traces, which are typically very large).
However, you typically do not need to look back and replicate old results, so keeping all the data is wasted effort. I completely understand that research data gets lost, but I think it is largely unavoidable.