Oh, I found a draft. It's still just a "bitstream overview", but it's something.
That kind of implies that there are specs available for VP9 that I could go and implement independently of the original implementation. However, at the WebM website, I can only find a "bitstream guide" for VP8.
Since you made that claim, I am sure you will be able to point me at the specs or at least a "bitstream guide" for VP9?
A quick note on your argument that with regular encryption you know when you have found the right key because regularities will appear: you can easily circumvent this* by encrypting the data multiple times with different keys and possibly with different algorithms.
That's how you know the security is solid.
Or that there is nothing there that can be broken...
Imagine, for example, that I write in Chinese and you had no reference for Chinese. You do not know how to read it and you don't know of any similar languages to aid in your decoding of it.
First something general about using human languages for the purpose of security.
1) If you invent a language, your vocabulary and grammar will basically be the key. Admittedly, it will probably be a pretty big key, but partial knowledge of the key will likely allow partial deciphering of the messages in that language.
2) Inventing languages for the purpose of security is not a well studied field. If you do it ad-hoc, you are unlikely to get a great result.
3) In general, when using human languages, similar messages will look similar. As information rarely exists in a vacuum (the Voynich manuscript pretty much does, yes), this may allow an attacker to make inferences about your messages.
To use your example of Chinese, consider the following scenario. You are using Chinese to communicate with a friend. In this world, Chinese is a language known only to you and your friend. I am a well-funded adversary, capable of observing your general daily behaviour in a way that only requires communication metadata, one guy watching your house and another watching your friend's.
On day 1:
You send your friend a message: æ'äçç¦ä½æ"ääç"èã
- You clean the windows.
- You go buy things from the grocery store.
- You give your friend a phone call.
- You take a walk.
- Your friend goes to buy things from the supermarket.
- Your friend receives a phone call from you.
On day 2:
You send your friend a message: æ'äfçäåååZçoeæ'æoeåäã
- You take a walk.
- You chat with a neighbour.
- You call your mother on the phone.
- You go visit somebody.
- Your friend receives a phone call from somebody.
- Your friend goes to eat at a restaurant.
On day 3:
You send your friend a message: æ'æoeåç¦ä½æ"ääç"èã
- You buy things from the grocery store.
- You take a bath.
- You take an evening walk.
- Your friend receives a phone call from the person you visited the day before.
- Your friend goes swimming.
By now, I will be able to make some guesses about the meaning of various parts of the Chinese language. If you communicate more, I will be able to refine those guesses. I suggest you mull it over a bit and see if you can come up with a few hypotheses yourself. And yes, it's proper Chinese.
You're ignoring my point, which is that the cryptography is theoretically breakable, whereas the things I'm talking about are not.
I am not ignoring that. I merely hold the opinion that the difference in practical security is negligible. To reiterate the pros and cons of using OTPs instead of regular, modern cryptography:
+ Properly used, OTPs can be perfectly secure.
o This means that if you compare the security of an OTP with that of a conservatively designed cipher using 128-bit keys, you have reduced the probability that your cryptography will be broken by a bit less than 0.00000000000000000000000000000000000001%.
o Using an OTP does not defend against attacks that circumvent the cryptography instead of attacking it, which is basically all of them.
- You need a secure random number generator that is not backdoored by the NSA and that can quickly generate sufficient amounts of random data.
- You need to securely exchange OTPs with everybody you want to communicate with. This is hard, if you want to maintain the perfect security provided by the OTP:
* You cannot send it unencrypted over the internet. If you send it encrypted, it will be no better than using said encryption.
* You cannot entrust it to the postal service, because it may be intercepted.
* If you don't know your communication partner in person, even meeting up may not be secure, as you might meet a similar-looking adversary with a fake ID card.
* Either of you might get "mugged" on the way to the meeting.
* You need to repeat this whenever you run out of OTP.
o Admittedly, exchanging keys for regular encryption faces similar problems, but there are some established techniques for verifying identities in a probabilistic way (e.g. the web of trust with PGP). If you are doing realtime communication with somebody already, it is probably sufficiently safe to quickly tell them a fingerprint or public key.
- You need to securely store your OTPs. For this you need heavy physical security. If you just encrypt them for security, they won't really be more secure than the encryption you used.
- You give up useful techniques such as Diffie-Hellman key exchange for a slight increase in security.
- Unlike with regular encryption, you cannot transmit basically arbitrary amounts of data after a single key exchange.**
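To make the mechanics (though not the logistics) of an OTP concrete, here is a minimal sketch in Python. It uses the standard `secrets` module as the random source; the helper name `otp_encrypt` is my own, purely for illustration:

```python
import secrets

def otp_encrypt(data: bytes, pad: bytes) -> bytes:
    # The pad must be truly random, used only once, and at least as long as
    # the message -- those conditions, not the XOR itself, give the security.
    assert len(pad) >= len(data)
    return bytes(b ^ k for b, k in zip(data, pad))

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))
ciphertext = otp_encrypt(message, pad)
# Decryption is the same XOR with the same pad.
assert otp_encrypt(ciphertext, pad) == message
```

Note that every practical difficulty in the list above lives outside this snippet: generating, exchanging, storing and destroying `pad` is the entire problem.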
And if those people are out of reach or unknown then the message will remain secure... and here is the point "forever".
Do not skip over that last word please like you did last time.
My point is that we don't need it to last forever. It's enough if it lasts until the universe loses itself in entropy. Hell, it's probably enough if it lasts 500 years. Unless I become immortal, I suppose I would be fine with my secrets from now being revealed in 500 years. Might be nice for the historians then. It is with this in mind that I argue about the sufficiency of non-OTP cryptography.
For any (un)reasonable time span, 256-bit keys are more than enough. Heck, if we go with 512-bit keys, an attacker that turns every fundamental particle in the universe into a computer working on breaking your key will still take longer to break it than the universe has existed so far.
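A back-of-the-envelope check of that claim. The figures here are my own generous assumptions in the attacker's favour: roughly 10^80 particles, each testing one key per Planck time:

```python
# Rough sanity check: brute-forcing a 512-bit key with the whole universe.
particles = 10**80          # assumed count of fundamental particles
planck_time = 5.39e-44      # seconds; one key test per Planck time each
keyspace = 2**512           # number of possible 512-bit keys

checks_per_second = particles / planck_time
seconds_needed = keyspace / checks_per_second

age_of_universe = 4.35e17   # seconds (~13.8 billion years)
assert seconds_needed > age_of_universe
```

Even with these absurdly optimistic assumptions, the required time exceeds the current age of the universe by many orders of magnitude.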
You can use OTPs to achieve "perfect" security. Yes. It will be impossible to tell which of the 2^(messagelen) messages is the correct one if you do not have access to the OTP. However, the practical application of OTPs is difficult and the gain in security is unlikely to affect anything in the real world.
As to your point about getting things decrypted by torturing the people that encrypted it... well yes, that always works but the point is to make that a literal requirement.
It basically is a requirement already. That's the reason the UK has laws that allow them to throw you into prison for not giving them your encryption keys. That's the reason the US government went and demanded that Lavabit hand over their encryption keys so they could get at Snowden's emails.
* In that case, your sequence and number of encryptions will become part of the key. At some point, structure will always appear. At the very least in the brain of the receiver.
** There are usually limits for the number of blocks or bytes a single key can be used with a cipher, but you can always negotiate a new one securely through your existing, secure channel.
Somehow I expected that the Voynich manuscript would come up. It's not a very good argument, though. We don't know if the contents of that thing are even supposed to make sense. You can't decrypt something that never contained meaning in the first place.
Well. You could encrypt something and then map that back into a grammar and speakable words, but that's cheating.
That is why I threw out book codes or one time pad codes as an example. They're unbreakable without the pad. As in NEVER.
You also ignored all the issues those have and which I mentioned.
Today's symmetric ciphers commonly have key lengths of 128 or 256 bits, and usually there aren't even purely theoretical attacks that are significantly faster than brute force. A cipher with very conservative safety margins, such as ChaCha20, used with a 256-bit key, is pretty much unbreakable without the key too.
For comparison, the estimated total number of fundamental particles in the observable universe is somewhere in the range of 2^265 to 2^282. Maybe you would be satisfied with 384-bit keys? "2^100 times more states to check than there are particles in the universe" has a nice ring to it.
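The quip checks out against the figures above, using the upper particle estimate:

```python
# 384-bit keyspace versus the upper estimate of particles in the universe.
particles_upper = 2**282
key_states = 2**384
ratio = key_states // particles_upper
assert ratio == 2**102      # comfortably more than the claimed 2^100
```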
but that's just because that's the weakest link. You don't waste your time with the tough stuff if you can find something softer.
No, that's because modern cryptography is so strong that trying to break it is pretty much futile. If the cryptography was a viable target, it would be targeted, because then you could break all the implementations at once.
Added to what I said above, I think systems that are secure must be simple. Very simple. As in no more than a couple of pages of code. Why? Because complicated code is code that can't be debugged. Keep it simple and you can make the code perfect. Total confidence that there is zero error. As in 1+1=2 perfect.
Simplicity is good, I agree. However, many actual cryptographic algorithms are rather compact. AES is just a few pages of code. So is ChaCha20.
As I said before. The cryptography is not the problem. Usually it's not even the code for the cryptography. Granted, there have been some cases of sidechannel attacks, but those can be (mostly) avoided with proper care.
A perfect OTP implementation won't help you if your application is leaking random memory blocks (including your OTP) to an attacker heartbleed-style.
The sort of thing you'd trust to keep out the literal devil.
The devil applies "simmer in a pot full of literal liquid hell fire" cryptanalysis, I believe. Apart from that, 256 bits of security should be enough against that guy.
I'm not trying to be offensive here, but I assume that you do not know too much about modern cryptography. Correctly applied, it is secure. Really secure. Successful attacks target the system that uses the cryptography, not the cryptography itself. Random number generators are a nice target. Systems running vulnerable software are nice targets. Targeting modern cryptography itself is usually a futile endeavour.
Languages can be learned. It may take a bunch of linguists a couple of years to get somewhere if the language is odd enough, but it is doable.
Proprietary, undocumented protocols and file formats exist. People who reverse engineer them and write their own compatible implementations also exist. I have done that kind of thing a few times myself.
Proprietary protocols and the like are basically what is commonly referred to as security through obscurity. This is considered a bad thing.
On the other hand, we have modern cryptography. Properly implemented, that stuff is incredibly secure. Even if you have a bunch of linguists or mathematicians, they won't break it in a few years or even a few hundred years, likely. The situation is not really comparable to WW2 era ciphers at all. We have left those far behind.
Sure, it is possible that we will have some incredible mathematical breakthrough with regard to integer factorization, but it doesn't seem likely to happen suddenly. Usually, the state of the art advances at a slower pace. Even quantum computers will still leave us with symmetric cryptography and (somewhat more unwieldy and less studied) asymmetric cryptography intact.
One time pads are commonly cited as the holy grail, but they miss the point and are difficult to employ, even today. Cryptographic systems are not broken by attacking the cryptography. They are broken by circumventing it. To use a one time pad, you first have to generate it. For that you need a true RNG, or it will be no better than a regular stream cipher. If your randomness is bad, you will be vulnerable and there exist interesting attacks that could subvert commonly deployed sources of entropy, such as Intel's RdRand instruction.
The second problem is exchanging the one time pad securely. How are you going to do that? Snail mail? It could get intercepted. Besides, if you have a way to securely share a secret with the person you are communicating with, you could just share a 256-bit ChaCha20 or AES key and be done with it. The practical gain in security a one time pad would provide over those would be negligible.
It would only be breached if the enemy got access to the machine used to send the messages. And nothing is going to survive that.
There is a nice property good, modern cryptographic systems provide, which is called "perfect forward secrecy". It guarantees that communications that took place before an attacker gained access to the secret keys of a peer will still remain secure after the fact. I suppose you could achieve something similar by securely zeroing out the used parts of your one time pad, but then you get into the messy affair of how to securely delete data.
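As a sketch of that zeroing idea (the `PadConsumer` name and structure are my own, not from any real library): a pad reader that destroys key material as it consumes it, so a later compromise cannot decrypt earlier traffic. The caveat in the comment is the whole point of the paragraph above:

```python
class PadConsumer:
    """Hypothetical one-time-pad reader that zeroes used key material.

    Caveat: overwriting a Python bytearray does not guarantee the bytes
    are gone from swap, old disk sectors, or copies the runtime made --
    securely deleting data is exactly the messy part.
    """

    def __init__(self, pad: bytearray):
        self.pad = pad
        self.pos = 0

    def take(self, n: int) -> bytes:
        # Hand out the next n pad bytes, then destroy our copy of them.
        chunk = bytes(self.pad[self.pos:self.pos + n])
        for i in range(self.pos, self.pos + n):
            self.pad[i] = 0
        self.pos += n
        return chunk
```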
An interesting observation the book makes when discussing the DES encryption algorithm is that all the talk of the NSA placing backdoors in it is essentially false. To date, no known flaws have been found in DES; after it has been around for over 30 years, the only attack against it is an exhaustive key search. In this type of attack, an adversary has to try each of the possible 72 quadrillion keys (2^56 permutations, as the key is 56 bits long) until the right key is discovered.
This is an odd thing to say. It almost sounds like an attempt at whitewashing the current Dual EC DRBG business by debunking a not commonly made claim about another cryptographic algorithm with a vaguely similar name.
It is widely known and accepted that the NSA strengthened DES against differential cryptanalysis, while also ensuring that the key length is short. They both strengthened and weakened it in different ways. There are also attacks against DES which are, in theory, faster than brute force.
Giving the number of tries necessary to brute-force a 56-bit key is also kind of odd, since that is a key size that can actually be broken these days without too much effort. What's the point in trying to wow the audience with big numbers in that case? Seems misleading to me. Granted, that may have been just the reviewer and not part of the actual book.
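For the record, the "72 quadrillion" figure quoted from the review is simply the size of the 56-bit keyspace:

```python
# The DES keyspace mentioned in the quote above.
keys = 2**56
assert keys == 72_057_594_037_927_936   # ~72 quadrillion
```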
It's the same project.
That's actually mencoder's fault. It has problems muxing to basically anything but AVI. If you use ffmpeg directly, you can make MP4 files just fine. For mencoder, it's unlikely that the situation will change, as it is basically no longer maintained.
Nintendo is doomed if it continues to price its games in the traditional sense.
Nintendo has been doomed for a long while, you know.
So why are game publishers going 3DS for the sake of 3D? That just feels so gimmicky to me. Especially when they can access other systems (like all of them) that do not use 3D technology if they make a non-3D version of the same game or just go non-3D natively. I really do not understand.
There's a very simple reason: It's a completely different system.
The 3DS has much more RAM (128MB vs 16MB, that's more than the Wii), a much better CPU and a much better GPU with actual fancy shaders and stuff. The old DSi can't even hope to compare. It's much more than a "DS with a 3D screen".
You could say that the DS/DSi can be compared to the N64, while the 3DS is more like a Gamecube. Just take a look at some Resident Evil: Revelations footage. There's no way you could do that on a DS.
x264 has a variety of settings that allow you to tweak the quality/speed ratio. It also has (and is getting more) ARM assembly optimizations, which should be useful on a number of phones. It's a really well-optimized piece of software across a number of platforms.
540p is less vertical resolution than PAL, which is 576p.
x264 does not use the GPU; the program simply does not support it. I know that ATI once produced an Avivo H.264 encoder, but that one was of highly questionable quality. Of course, these days they might have made a better one. Have you tried comparing the speed and resulting video to x264 on the veryfast or ultrafast presets?
This might not matter too much. Using the GPU to assist in video encoding might be less of a good idea than many people think. Many complex procedures during encoding are not all that suited to parallelization. Take entropy coding, for example. You probably have the best chance of doing anything useful with motion estimation, but even that is quite hard. A bunch of people have worked on adding GPU acceleration to x264 as part of their theses. There wasn't any real success: most of them failed to make it actually useful, since cache considerations and the like prevented them from using nicer algorithms than exhaustive search.
As for existing encoders, like Badaboom, they mostly aren't all that fast or good. You can probably beat them with x264 on fast settings and still get similar or even better quality.
To understand a program you must become both the machine and the program.