
Comment NIST scientist explains (Score 1) 139

I am a physicist who worked on this project at NIST, so I am sorry to be late to this conversation. A lot of the comments here express doubt or uncertainty about what is new or different in our quantum random number generator compared to other sources such as thermal-electronic noise, lava lamps, and random.org. This is a great question, because the news article linked at the top of the thread does not explain it well. Maybe I can help.

The key idea is that our randomness is "device independent", meaning that the justification for the unpredictability of its output does not rely on characterization of the devices. Instead it is based on the observable data and a few other surprisingly weak assumptions.

One mode of operation for our random number source is to transform public randomness into private randomness. At the center of our experiment is a "Bell Test", also known as a "test of local realism". During the Bell Test, each member of a pair of entangled photons is sent to a measurement station. At each of the two stations, a choice is made of which measurement to perform on the incoming photon. We assume that those choices are independent of all other aspects of the experiment and are unpredictable by any adversary. They could be provided by a public random source, such as the NIST Randomness Beacon. The two measurement events are space-like separated, so the measurement choice at one station cannot be communicated to the other station (unless it can travel faster than light, which we assume is impossible).

We then do a statistical analysis of the choices and the photon detection events. The statistical analysis proves that the photon detections could not have been generated by "hidden variables"; instead, the detections are genuinely unpredictable and random. It is important to understand that the statistical analysis is done using only the record of choices and detections. To justify the claim that the measurement stations cannot communicate, we also need to know the distance separating them and the times of the measurements. The record of photon detections is now our private random string.
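To give a feel for what "statistical analysis of the choices and detections" means, here is a minimal sketch of the simplest such test, the CHSH form of the Bell inequality. This is an illustration, not our actual analysis (the real experiment uses a more involved statistical treatment, and the measurement settings below are the textbook ideal angles, not ours): any local-hidden-variable model must give |S| <= 2, while quantum correlations can reach 2*sqrt(2). The record is simulated here; in the experiment it would be the log of settings and detections.

```python
import math
import random

random.seed(0)

# Hypothetical measurement angles realizing the ideal CHSH strategy
# (illustrative only; the actual experimental settings differ).
A = [0.0, math.pi / 2]               # station 1's two setting choices
B = [math.pi / 4, 3 * math.pi / 4]   # station 2's two setting choices

def correlated_pair(a, b):
    """Sample outcomes (+1/-1) whose product follows the ideal quantum
    correlation E = cos(a - b) for a maximally entangled photon pair."""
    same = random.random() < (1 + math.cos(a - b)) / 2
    out1 = random.choice([+1, -1])
    return out1, out1 if same else -out1

# Build a record of (choice x, choice y, outcome a, outcome b) -- the
# analogue of the choices-and-detections log the analysis works from.
record = []
for _ in range(20000):
    x, y = random.randrange(2), random.randrange(2)
    a_out, b_out = correlated_pair(A[x], B[y])
    record.append((x, y, a_out, b_out))

def E(x, y):
    """Estimate the correlator E(x, y) from the record alone."""
    prods = [a * b for (xx, yy, a, b) in record if xx == x and yy == y]
    return sum(prods) / len(prods)

# CHSH combination: local hidden variables require |S| <= 2.
S = E(0, 0) - E(0, 1) + E(1, 0) + E(1, 1)
print(f"CHSH value S = {S:.3f}")
```

With these settings the sampled S comes out near 2*sqrt(2) ~ 2.83, well above the local-realist bound of 2, which is what certifies that the outcomes could not have come from any pre-arranged look-up table.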

No detailed knowledge of the photon source, detectors, or other devices is needed. In fact these devices might have been built by an adversary who wants to predict or learn our private randomness. We assume that the adversary has no advance knowledge of the public random source used for the measurement choices. We also assume that once the devices are in our laboratory, the adversary cannot communicate with them and maintains no quantum entanglement with them. Lastly, we assume that the classical computers used to process data are reliable and secure. Although we use quantum physics to create the entangled photons, the proof of randomness does not assume that quantum physics is true. The data analysis itself proves that no classical source (such as an adversary's look-up table secretly implanted in our devices) could have produced the observed data.

The next generation of this experiment will be able to perform private randomness expansion, in which a short private string is used to make measurement choices, and a longer private random string is generated by the Bell Test. We are also working to provide security even if the experimental devices maintain quantum entanglement with an adversary once they are secured in our laboratory.

I am happy to answer other questions about this work, if anyone is interested.
