Totally Random One Time Pads 265
liliafan writes "Scientists in Japan have come up with a way of harnessing a truly random data source for generating one time encryption pads: Quasars. One time encryption pads are widely accepted as being the most secure form of encryption, but this new technology from the National Institute of Information and Communications Technology makes the pads even more secure."
Dupe (Score:5, Informative)
Re:almost there (Score:3, Informative)
Re:So what? (Score:5, Informative)
The name of the quasar and time to start monitoring are the cryptographic keys. That doesn't sound like a lot of bits in the keyspace.
A common use for OTPs - Numbers Stations (Score:3, Informative)
To communicate with an asset overseas, an intelligence agency must often use methods of communication that cannot be easily traced (duh). Passing a message along via e-mail, phone, or a one-to-one meeting can easily be tracked, creating lots of problems for everyone in the loop.
Therefore, many intelligence agencies did (and still do) use OTPs and "Numbers Stations" - shortwave radio stations that blast out a seemingly senseless series of numbers at regular intervals and frequencies. This method gets messages and instructions to your assets without betraying who the recipient of the message is.
The beauty is that the asset only needs a cheap, readily available shortwave radio and an OTP, which can be concealed in virtually anything (some were created that could even be affixed to the back of stamps; others were hidden in toothpaste tubes, etc.). The agent then responds by some seemingly innocuous method: a "wrong number" code, a mark on a wall near where an intelligence officer drives, etc.
The problem, of course, rests in getting OTPs to the asset and ensuring they aren't compromised. But, assuming they are passed and handled securely, there's no problem at all.
More information on Wikipedia [wikipedia.org]
Re:So what? (Score:5, Informative)
Also, the keyspace is larger than you think... the article mentions that quasars have a very broad frequency spectrum. So, #quasars (that are visible to both) X monitoring-time-choices X monitoring-frequency-choices may result in a large-ish keyspace (or, at the very least, means that it may be physically extremely expensive to try to decrypt a message against all possible keys).
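An order-of-magnitude version of that argument can be sketched in a few lines. Every number below is an illustrative assumption of mine, not a figure from the article:

```python
# Back-of-the-envelope keyspace estimate for a quasar-based pad.
# All counts here are made-up placeholders, not from the article.
num_quasars = 200_000          # catalogued quasars visible to both parties
start_times = 365 * 24 * 3600  # one-second start-time resolution over a year
freq_choices = 10_000          # distinct frequency/bandwidth selections

keyspace = num_quasars * start_times * freq_choices
print(f"keyspace ~ {keyspace:.2e} (~{keyspace.bit_length()} bits)")
```

Even with these generous numbers the keyspace is only around 56 bits -- "large-ish", as the parent says, and the real barrier to an exhaustive search is the recording hardware, not the CPU time.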
Keyspace (Score:3, Informative)
The fundamental problem is that the data is not fully random -- it is mostly deterministic given the key of which quasar, which frequency and bandwidth, and what time. So an outside person could recover the plaintext by obtaining the observable behavior and trying all keys, or if the outside person could somehow obtain the key.
This is a very similar situation to a "good" pseudorandom number generator. You can transmit the seed for the pseudorandom number generator and generate a one-time pad from it. I guess the difference is that quasar behavior is not observable after the fact, but if it is feasible for the data to be logged then they reduce to similar solutions: generate all the pads within the keyspace, xor each with the ciphertext, and watch for the entropy to drop or for known plaintext to appear.
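That attack can be sketched in a few lines, assuming (purely for illustration) that the "pad" is really a seeded PRNG stream -- the seed, message, and toy keyspace below are all made up:

```python
import random

# Hypothetical toy setup: the pad is a PRNG stream, and the "key" (which
# quasar, what time, what band) is reduced to a small integer seed.
def pad_from_seed(seed, n):
    rng = random.Random(seed)  # stand-in for "replaying" the quasar signal
    return bytes(rng.randrange(256) for _ in range(n))

secret_seed = 1234
plaintext = b"ATTACK AT DAWN"
ciphertext = bytes(p ^ k for p, k in
                   zip(plaintext, pad_from_seed(secret_seed, len(plaintext))))

# Exhaust the (tiny) keyspace and watch for known plaintext to appear.
for seed in range(10_000):
    candidate = bytes(c ^ k for c, k in
                      zip(ciphertext, pad_from_seed(seed, len(ciphertext))))
    if b"ATTACK" in candidate:
        print(seed, candidate)
        break
```

The same loop works with an entropy test in place of the known-plaintext check; either way, the security is exactly the size of the keyspace, as the parent says.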
Re:One Time Pads... (Score:1, Informative)
Before then, they used multiple-use rags. And smart women are starting to go [gladrags.com] back [lunapads.com] to them [bigstep.com].
Spiffy, but not news (Score:5, Informative)
Nothing to see here, folks; move along.
most what? (Score:3, Informative)
"One time encryption pads are widely accepted as being the most secure form of encryption..."
Only for very limited definitions of secure. You have to produce the pads. You have to distribute the pads. You have to synchronize the pads. You have to dispose of the pads. All these steps are tedious and error-prone, and a chink in any of them destroys your supposed "perfect" security.
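For reference, the cipher all those steps are protecting is nothing but an XOR with the pad -- a minimal sketch:

```python
import secrets

# Minimal one-time-pad sketch: XOR with a random pad of equal length.
# The pad must be truly random, kept secret, and never reused.
def otp_xor(data: bytes, pad: bytes) -> bytes:
    assert len(pad) >= len(data), "pad must be at least as long as the message"
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"meet at the safe house"
pad = secrets.token_bytes(len(message))

ciphertext = otp_xor(message, pad)
assert otp_xor(ciphertext, pad) == message  # the same operation decrypts
```

The math is trivially perfect; everything hard about OTPs is in the logistics listed above.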
Now if you said "OTPs are the most algorithmically secure ciphers under ideal conditions", then I'd buy it. Otherwise, there's a reason only well-funded governments use these things. Ask the Soviets how well it worked for them.
Re:So what? (Score:2, Informative)
Let's say the communicators choose the least secure option and publish the exact time they will start recording the one time pad from the quasar. And assume the attacker can only monitor 1e-9 percent of the quasars at once. Then they have a fairly good chance that the attacker never recorded the right signal.
Now if they just keep recording from that quasar for the entire session, the cracker could try lots of different sources over time and see which one matches. But encryption often uses cipher-block chaining, where the unencrypted data from earlier in the session is used to encrypt the next block in addition to the shared secret. If they did this, the attacker would have no hope of breaking the encryption unless he got lucky on the first transmission.
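A toy version of that plaintext-feedback idea -- this hash-chaining construction is my own illustration, not anything from the article:

```python
import hashlib

# Each block's keystream depends on the shared secret AND all earlier
# plaintext, so an attacker who missed the first transmission cannot
# rejoin the stream later. Illustrative only, not a vetted cipher.
def chained_encrypt(blocks, secret):
    state, out = secret, []
    for block in blocks:
        keystream = hashlib.sha256(state).digest()[:len(block)]
        out.append(bytes(b ^ k for b, k in zip(block, keystream)))
        state = hashlib.sha256(state + block).digest()  # fold plaintext in
    return out

def chained_decrypt(cblocks, secret):
    state, out = secret, []
    for cblock in cblocks:
        keystream = hashlib.sha256(state).digest()[:len(cblock)]
        block = bytes(c ^ k for c, k in zip(cblock, keystream))
        out.append(block)
        state = hashlib.sha256(state + block).digest()  # same feedback
    return out

msg = [b"first block", b"second block"]
ct = chained_encrypt(msg, b"shared secret")
assert chained_decrypt(ct, b"shared secret") == msg
```

Note that decryption updates its state with the recovered plaintext, so both sides stay in sync -- and anyone who joins mid-stream stays out of sync forever.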
#6 ... (Score:3, Informative)
Re:That's not randomness at all (Score:5, Informative)
There isn't any particularly better definition of randomness than "unpredictability".
That's true not just as a rule of thumb, but in a more formal sense as well. The word "random" is pretty hard to come up with a mathematically formal definition for, and "pretty hard" may mean "impossible" depending on your definition of "definition" (more on that later). To make things simple, let's just talk about sequences of ones and zeros. Take for example the sequence 01101110010111011110001001101010111100110111101111 ...

Definitions of randomness from statistics and probability just require a potentially random sequence to have all possible subsequences of a given length appear with the same frequency. That is, 0 appears exactly as often as 1; 00 appears exactly as often as 01, 10, and 11; 000 as often as 001, 010, 011, 100, 101, 110, and 111; and so on. The sequence I gave above passes those tests with flying colors.

But it's not random at all. I'll put some spaces in it, and you'll see the pattern: 0 1 10 11 100 101 110 111 1000 1001 1010 1011 1100 1101 1110 1111... It's simply counting in binary. The longer you extend the sequence, the better it does in statistical randomness tests--the first few dozen bits have a pretty strong bias for 1 over 0, but that ends up as noise in the long run.
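You can check the single-bit statistics of that counting sequence in a couple of lines (the 2**16 cutoff below is an arbitrary choice of mine):

```python
# Concatenate the binary representations 0, 1, 10, 11, 100, ... and
# measure the fraction of ones. The sequence is completely predictable,
# yet its bit frequencies drift toward 1/2 as it grows (slowly, since
# every number's leading bit is a 1).
bits = "".join(format(i, "b") for i in range(2**16))
ones = bits.count("1") / len(bits)
print(f"{len(bits)} bits, fraction of ones = {ones:.4f}")
```

At this cutoff the fraction is still about 0.53; the leading-1 bias dies out only logarithmically, so the sequence needs to get very long before the statistics look clean.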
The relatively young field of information theory introduces the concept of "algorithmic randomness." The randomness of a sequence of bits is defined to be the length of the shortest Universal Turing Machine program which outputs that sequence. In pseudocode, our example sequence is output by the program:
let i = 0
while (true) do
    output i
    let i = i + 1
end while
That's a comically short program to generate an arbitrarily long sequence. So the example fails tests for algorithmic randomness miserably. The fun part is that the problem of finding the shortest UTM program to generate a given sequence is provably intractable. Thanks to the Halting Problem [wolfram.com], you can't always tell if a given UTM program will halt or loop infinitely. All you could ever know is whether or not the program has output the desired sequence yet--if it's still running, it may do so eventually and then halt, it may output something else and then halt, or it may keep running forever. So algorithmic randomness plugs the holes in statistical randomness by trading an unreliably solvable problem for a reliably unsolvable one. You can't ever be sure a sequence is random, but you can sometimes be sure it isn't.
I got off on a bit of a tangent there about information theory, but my point is that algorithmic randomness captures what we mean by "random" much better than statistical randomness. And algorithmic randomness is just a mathematically formal way of saying "unpredictable."
Re:cracking this would be useful (Score:3, Informative)