Or at least they could have chucked the cold war and gone into the scriptwriting business.
I have heard of some that try to use some sort of seemingly random, naturally occurring event. However, even these can be modeled over time.
A good post, but I'm not sure you understand hardware-based random number generation. At least one way to do it is to have a small amount of radioactive material. Although it decays predictably in the long term (half-life), it is random in the short term. By measuring the radioactive decay, truly random numbers can be obtained.
The decay may be random, but the implementation may not be. I have heard of two issues with actual radioactive random number generators.
1.) The Geiger tube (or solid-state chip) used for detecting the decays will have imperfections (for example, a dead time, so that it will miss a decay occurring too soon after another one), and these can introduce non-randomness into the output.
2.) The early ones were simple accumulators (count for an interval delta-T, and if you get > Y decays, that is a 1, otherwise a zero), and that can be hacked if you can control the radioactive environment at the detector. I believe that to prevent that, nowadays the algorithm is something like "count for an interval, and if you get an even number of decays, output a 1," but that might have radioactive hacks as well. (I don't know of any, but I don't have a large staff trying to break this, either.)
The entropy per character of human languages is so low that it doesn't take much non-randomness before you can get into deep trouble.
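For what it's worth, here is a minimal Python sketch of the two counting schemes described in point 2 above, with a simulated Poisson source standing in for a real detector (the rate, interval, and threshold are arbitrary made-up values):

```python
import random

def decays_in_interval(rate_hz: float, interval_s: float) -> int:
    """Simulate a Poisson decay count by summing exponential inter-arrival times."""
    t, n = 0.0, 0
    while True:
        t += random.expovariate(rate_hz)  # time until the next simulated decay
        if t > interval_s:
            return n
        n += 1

def threshold_bit(rate_hz=100.0, interval_s=0.1, y=10) -> int:
    """Old accumulator scheme: more than Y decays in the interval -> 1, else 0."""
    return 1 if decays_in_interval(rate_hz, interval_s) > y else 0

def parity_bit(rate_hz=100.0, interval_s=0.1) -> int:
    """Parity scheme: an even number of decays in the interval -> 1, else 0."""
    return 1 if decays_in_interval(rate_hz, interval_s) % 2 == 0 else 0

print([parity_bit() for _ in range(16)])
```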
I don't think that is correct. If you reuse the key, you reveal the key. Worse, it can be easily caught by simply cross-correlating every message pair. (No key reuse = no correlations; key reuse = lots of correlations.) It can be almost trivial to decode an OTP message pair with the same key.
Of course, compressing before you encrypt is fine, as long as you don't reuse the key.
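To make the key-cancellation point concrete, here is a small Python sketch (the messages and key are invented, and real traffic analysis takes more work, but the cancellation really is this simple):

```python
import os

def otp_xor(data: bytes, key: bytes) -> bytes:
    """XOR-based one time pad: encryption and decryption are the same operation."""
    return bytes(d ^ k for d, k in zip(data, key))

p1 = b"attack at dawn on the eastern front"
p2 = b"retreat quietly and regroup at base"
key = os.urandom(len(p1))   # pad as long as the messages

c1 = otp_xor(p1, key)
c2 = otp_xor(p2, key)       # key reuse: the fatal mistake

# The key cancels out: XORing the two ciphertexts yields the XOR of the two
# plaintexts, with no key material left in the way.
assert otp_xor(c1, c2) == otp_xor(p1, p2)
```

From that XOR of plaintexts, ordinary crib-dragging with guessed words recovers both messages, which is the "lots of correlations" in practice.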
For a perfect OTP, you are correct.
For an imperfect OTP, or one imperfectly used, it could make a difference.
I consider these sorts of immediate reactions to be the worst kind of political deceit. (The Patriot Act was another, similar case.) It would be one thing if some commission examined the circumstances and came out in six months or so with a considered argument as to why this or that measure might have made a difference. That, at least, could be debated. But no, instead it is "here are these pre-canned ideas that have been shot down before, but now you need to adopt them immediately just because."
I would suggest sending the proposers to the Tower, but I understand that is passé nowadays.
There were plenty of cases of Germans attacking the Third Reich; most obviously, there were several attempts by Germans to assassinate Hitler. That didn't make WWII a civil war, just an international war with some people within the country opposed to it.
For sure the Third Reich would have called it terrorism.
The Germans in World War II routinely referred to the resistance in the various occupied countries as terrorists.
I am not a cryptographer, but I *think* it would not harm the strength of the encryption if you compress then encrypt.
In theory, it should actually make it stronger, by removing redundancy. In practice, I bet it would mean that you could then predict the first few bytes of each message sent (i.e., some sort of header info, followed maybe by something guessable if you know the language being used), and it can be a bad idea to begin each message with something predictable.
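To put a number on the "predictable first few bytes" worry, a quick Python check with gzip (just one example compressor) shows that every compressed message starts with the same fixed header bytes:

```python
import gzip

for msg in (b"attack at dawn", b"completely unrelated text", b"yet another message"):
    print(gzip.compress(msg)[:3].hex())

# Prints "1f8b08" three times: gzip's magic number and method byte form a
# fixed, guessable prefix on every plaintext handed to the encryptor.
```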
Creating true randomness is a tricky proposition, and I don't see why it's safe to believe that "shining a light through a diffusive glass plate" will generate true randomness.
They claim it passes statistical analysis tests for true randomness.
That is meaningless (there is no test for true randomness, just tests of whether or not various forms of non-randomness are present), and if they truly believe that passing various tests for randomness is sufficient then there may be no hope for them.
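A tiny Python illustration of that point: the NIST frequency (monobit) test, for example, is passed perfectly by a completely predictable sequence, so passing it proves nothing about true randomness:

```python
from math import erfc, sqrt

def monobit_p_value(bits) -> float:
    """NIST frequency (monobit) test: checks only that 0s and 1s are balanced."""
    s = sum(1 if b else -1 for b in bits)
    return erfc(abs(s) / sqrt(2 * len(bits)))

alternating = [i % 2 for i in range(10_000)]   # 0101... utterly predictable
print(monobit_p_value(alternating))            # 1.0, a "perfect" pass
```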
Hell, just transmitting large blocks of 100% mathematically random data is a red flag. "One-time pad in use! Something very interesting going on here!"
I have heard that certain locations send megabits per second of random data continuously, at all times, just so that certain other locations can't tell when encrypted traffic is being sent. Certainly that technique is being used (at a lower bit rate) by the various "numbers stations" out there.
Who would have thought that the f... article addresses this devilishly ingenious workaround?
"And even if Eve steals the glass, they estimate that it would take her at least 24 hours to extract any relevant information about its structure.
This extraction can only be done by passing light through the glass at a rate that is limited by the amount of heat this creates (since any heating changes the microstructure of the material). And the time this takes should give the owners enough time to realise what has happened and take the necessary mitigating actions."
Right. Note that this implies that this technique should only be used for messages that have an effective lifetime of 1 day.
"Attack at dawn" - yes
"Attack on Sunday" - not so much
Of course, if it's possible to make a copy of a plate, it's no better than trying to securely send thumb drives.
The simple fact that there are two serves as an existence proof of the possibility of making a copy.
All top secret information should flow through one time pad systems.
Look at it this way: what does disk space cost these days? Imagine getting a 30-gigabyte one time pad file on its own little SSD drive. How much data could be passed back and forth as theoretically unbreakable encryption? At the very least 30 gigabytes of data. In practice, probably at least an order of magnitude beyond that.
No, at most 30 gigabytes. The next byte you send will start to reveal previous traffic.
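A toy sketch of the accounting, for anyone who wants it spelled out: every plaintext byte consumes exactly one pad byte, so a 30-gigabyte pad covers at most 30 gigabytes of traffic before the only options are to stop or to start reusing key material:

```python
import os

class OneTimePad:
    """Toy pad that refuses to encrypt once its key material is exhausted."""

    def __init__(self, pad: bytes):
        self.pad = pad
        self.offset = 0

    def encrypt(self, plaintext: bytes) -> bytes:
        if self.offset + len(plaintext) > len(self.pad):
            raise RuntimeError("pad exhausted: encrypting more would mean key reuse")
        chunk = self.pad[self.offset:self.offset + len(plaintext)]
        self.offset += len(plaintext)
        return bytes(p ^ k for p, k in zip(plaintext, chunk))

pad = OneTimePad(os.urandom(64))   # stand-in for the 30 GB drive
pad.encrypt(b"first message")      # fine: 13 of 64 pad bytes consumed
# pad.encrypt(b"x" * 60)           # would raise: only 51 pad bytes remain
```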
Three things are required for a one time pad - that the key be shared, random and non-repeated. A one time pad is very much breakable if the key is not both random and non-repeated, and the biggest problem with its use can be the sharing of the keys.
The Soviet "Verona" traffic was decoded because they reused pads (keys), rendering the message decryption straightforward, and also revealing the keys. The revealed keys were found to have some further weaknesses, as they were made manually (apparently by secretaries told to type randomly on their typewriters). These weaknesses included an avoidance of repeated characters, a tendency to alternate hands (a character on the left side of the keyboard would be likely to be followed by one on the right), and (IIRC) a preference for character pairs and triplets that didn't require too much stretching of the hands. (On the top line of a QWERTY keyboard, this means that, say, an initial "q" would be unlikely to be followed by another "q", that it would be likely to be followed by a letter in the "u - p" range, and that the third character would be more likely to be a q, w or e than an r, t or y.)
Now, officially, that amount of manual non-randomness wasn't enough to break further Soviet one time pad encryptions, but I suspect that they were. I have also heard rumors that later use of random keys generated by electronic circuits had problems, as the physical limitations of the electronic circuitry imposed a low-pass filtering that made these keys, again, not totally random. Note that true randomness is what is needed here: common digital pseudorandom techniques, such as hashing with SHA-1, may help to obscure weaknesses, but they will not make a non-random key random.
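As a tiny illustration of that last point, hashing a weak key only disguises the weakness; the number of possible keys (the actual entropy) stays the same. A Python sketch with a deliberately tiny seed space:

```python
import hashlib

# A "key" derived from only 1,000 possible seeds has at most 1,000 possible
# values, however random the SHA-1 output looks byte by byte.
hashed_keys = {hashlib.sha1(str(seed).encode()).hexdigest() for seed in range(1000)}
print(len(hashed_keys))   # 1000: hashing obscured the pattern but added no entropy
```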
In this case, I would worry very much about
- whether the physical technique produces a truly random key and
- how to satisfy myself that today's random key is totally independent of every previous key. If this is, say, dependent on where the laser is pointing to in the glass, how far apart does each pointing need to be to make sure that the results are independent, and can I securely verify that today's direction is sufficiently different from every previous time and
- as the technique is passing an initial sequence of bits through the randomizer glass, how random does the initial sequence need to be? What weaknesses are imposed by non-randomness in that initial sequence?
I could easily see this technique being secure in theory but massively broken in practice by some weakness in how the glass is made or handled or in the initial keys.
Note, by the way, that the two parties must physically get together to generate the key, so in a sense this is really a secure key storage device. Once they use up their stored keys, they have to meet again to be able to send more messages, which of course is the real problem with one time keys (and why, for example, the Soviets reused some of the Venona keys).
And, finally, this technique might make a cool way of doing truly secure hashing.
That is indeed how the WWII "scrambler" phones worked, but that was not viewed as nearly as secure as a one time pad (required for all messages dealing with Enigma decrypts) and the Germans did decode at least some scrambler phone communications.
The cryptographic trouble is that the inherent correlations of the human voice are still present, just overlaid by noise, and you can use that knowledge to extract the signal (the voice) from the noise. It did prevent idle eavesdropping, which I think was more the point.
That's a typo (well, 2), but it is interesting that they picked these two, which shows that they understand basically nothing about the philosophy and work of either figure.
Here is a hint: Mother Teresa did not treat the dying, only comforted them, and Gandhi believed in rejecting technology and returning to a simpler era. So, the simplest answer for both is: nothing.
But what if it isn't quantum and we've built an entire computer and don't know how it really works? At that point you may as well throw up your hands and yell magic.
Sounds like being a parent.