Scientific programming with Monte Carlo methods requires reproducibility from an initial seed so that an analysis can be reconstructed. A good example is benchmarking a code for changes in compiler options. If the code is widely distributed, a large table of random numbers is much harder to ship than a generator like the Mersenne Twister. There would also be problems with the acceptability of the results if a developer distributed the code with a specific input and a specific set of random numbers; the temptation to cherry-pick results would be too high. For security purposes, where a one-time-pad approach is ideal, a truly random number is fine.
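To make the reproducibility point concrete, here is a minimal sketch (my own illustration, not from the original post) of a seeded Monte Carlo estimate of pi. Python's `random.Random` is a Mersenne Twister, so two runs with the same seed produce bit-for-bit identical results, which is exactly what you need when comparing builds under different compiler options:

```python
import random

def monte_carlo_pi(seed, n=100_000):
    # Seeded generator: same seed -> same stream -> same estimate,
    # so a benchmark run can be reproduced exactly on another machine.
    rng = random.Random(seed)  # Mersenne Twister under the hood
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n

# Two runs with the same seed are identical; shipping the one seed
# replaces shipping the entire stream of random numbers.
assert monte_carlo_pi(42) == monte_carlo_pi(42)
```

The seed is the only thing that needs to be recorded or distributed with the analysis; anyone with the same generator can regenerate the full stream.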
I don't particularly buy the author's approach, though, because semiconductor physics is full of things that seem random at first but turn out to be entirely predictable once a suitable model is found. Sun Microsystems found this out years ago when they tried to base a random number generator on the rate of soft failures from memory chips. They were using boron as a dopant; boron-10 has a high probability of absorbing a neutron and then emitting an alpha particle (a helium nucleus), causing a hardware error. They claimed they had a perfect random number generator until they noticed that the randomness depended on the location of the chips: Denver sees more cosmic radiation than Miami, and the failure counts were actually Poisson distributed (as are most things nuclear). The method was thus vulnerable to an attack based on the mean number of failures, which could be determined simply by knowing the physical location of the device.
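The Poisson point can be sketched in a few lines. This is my own illustration with made-up rates, not measured data: soft-error counts follow a Poisson distribution whose mean scales with the local neutron flux, so an attacker who knows the device's location can model the statistics of the "random" source. The sampler below is Knuth's classic algorithm, fine for small means:

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's algorithm: count uniform draws until their running
    # product drops below exp(-lam). Adequate for small lam.
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

# Illustrative (invented) soft-error rates per chip per day;
# higher altitude -> more cosmic-ray neutrons -> higher mean.
rates = {"Denver": 2.0, "Miami": 0.5}
rng = random.Random(1)
for city, lam in rates.items():
    counts = [poisson_sample(lam, rng) for _ in range(10_000)]
    mean = sum(counts) / len(counts)
    # The observed mean converges to lam, so knowing the location
    # (hence the flux) pins down the distribution of the bit source.
    print(city, round(mean, 2))
```

Once the mean is known, the source is no longer an opaque random oracle; its output statistics are fully characterized, which is exactly the vulnerability described above.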