Point taken. Comment addressed at: http://apple.slashdot.org/comments.pl?sid=1989624&cid=35162734
From the article: "This decryption is possible, since on current (3) iOS devices the required cryptographic key does not depend on the user's secret passcode"
That is what I take issue with, since it is not 100% accurate. For the device they tested (4.2.1, with file system encryption on), the quote should be: "This decryption is possible IN MOST SITUATIONS, since on current (3) iOS devices the required cryptographic key does not depend on the user's secret passcode".
However, you can set flags on files and keychain entries that DO make the user's passcode required.
I feel I should clarify. The article summary is a bit misleading; the paper itself is not, exactly.
In the version of iOS they tested, you have the option of encrypting your keychain entries using the mechanism I describe (which means they would come up as "protected"). And, as the PDF mentions, they could not extract the device key (forcing a local brute-force attack if a passcode is set for the device). If the protection level is set to encrypt a keychain entry with the device passcode, it can't be recovered through any flaw in the encryption (that we know about).
So the article is basically saying, "Gee, we can access things that aren't flagged to be protected with the device passcode". Which is, well, what any reasonable observer expected, since that is exactly how it was described over a year ago. It is good to see a working implementation.
Apple's real flaw here is that they did not force this encryption for *everything*. Instead they rely on developers to pass in certain options when storing keychain entries (and/or when writing files to disk). Without these options the data is, sadly, recoverable. Out of the box, Apple only enables this protection for the Mail app, which does not set the best example. That said, the researchers are basically making a very technical commentary on design decisions by Apple, and I think this point gets lost in all the scaremongering. It would have been much clearer (but would not have gotten as much PR) to simply say that straight away.
That is why the user's passcode is so critical. When you unlock the device, a key is derived from the passcode once (using PBKDF2) and then the passcode itself is discarded. The derived key is held in memory to decrypt the class keys. When the device locks, the class keys are (for sure) re-encrypted and the derived key is forgotten as well.
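The derive-once-then-forget flow described above can be sketched with the stdlib. This is a toy model, not Apple's implementation: the salt, iteration count, and function names here are illustrative assumptions, and iOS entangles a hardware device key that this sketch omits.

```python
import hashlib

def derive_passcode_key(passcode: str, device_salt: bytes,
                        iterations: int = 10_000) -> bytes:
    """Derive a key from the passcode with PBKDF2. The passcode string is
    never stored; only this derived key is held in RAM while unlocked."""
    return hashlib.pbkdf2_hmac("sha1", passcode.encode(), device_salt, iterations)

# On unlock: derive once, then discard the passcode itself.
salt = b"\x00" * 16                 # stand-in for per-device salt material
key = derive_passcode_key("1234", salt)

# On lock: the derived key is wiped from memory; re-deriving it
# requires entering the passcode again.
assert key == derive_passcode_key("1234", salt)   # deterministic
assert key != derive_passcode_key("1235", salt)   # wrong passcode, wrong key
```

The point of the design is in the last two asserts: nothing recoverable from storage substitutes for the passcode, because only the derived key (transient, in RAM) can unlock the class keys.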
On iOS 4 and later with a modern device (3GS or better, iPad included), this article is blatantly incorrect.
"The attack works because the cryptographic key on current iOS devices is based on material available within the device and is independent of the passcode, the researchers said." Not true. In iOS 4 they use a variant of PBKDF2 to generate an encryption key, which is used along with the device key alluded to in this article to decrypt "class keys". The class keys are then used to access data at the various protection levels (Never, After First Unlock, Only When Unlocked); each of those levels has a separate key. Those keys are required to decrypt the individual key on each file: each file has an encryption key stored in its metadata (which means you do have to reformat your file system, and set a reasonable passcode, to get this protection).
Because of the PBKDF2 variant, brute forcing a reasonable passcode is infeasible. Because of the device key, you have to run the attempts ON the device itself and are limited to Apple's hardware for the forcing.
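Some back-of-the-envelope arithmetic shows why the on-device requirement matters. The ~80 ms per attempt below is an assumed figure for one PBKDF2-hardened guess on the device's own hardware; the real timing varies by model, but the shape of the result doesn't.

```python
MS_PER_TRY = 80  # assumed cost of one on-device PBKDF2 passcode attempt

def worst_case_hours(keyspace: int) -> float:
    """Hours to exhaust a passcode keyspace at MS_PER_TRY per guess."""
    return keyspace * MS_PER_TRY / 1000 / 3600

four_digit = worst_case_hours(10 ** 4)   # simple 4-digit PIN
six_alnum  = worst_case_hours(36 ** 6)   # 6 chars, lowercase + digits

print(f"4-digit PIN: {four_digit:.2f} hours")      # minutes-scale
print(f"6-char code: {six_alnum / 24:.0f} days")   # multi-year scale
```

A short numeric PIN still falls quickly even on-device; the hardening only becomes decisive with a longer passcode, which is why "set a reasonable passcode" keeps coming up.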
All of this is possible because Apple has an AES-256 hardware chip that blazes through crypto for that algorithm.
Remote wipe uses yet another key (the file system key). Each file encryption key requires both a class key and the file system key to be decrypted; lose either one and the file system is history. So remote wipe is accommodated in newer versions of iOS by simply forgetting the file system key.
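A toy model of that two-key dependency, with HMAC standing in for the AES key wrapping the real hardware does (all names here are illustrative, not Apple's):

```python
import hashlib
import hmac
import os

def unwrap_file_key(file_id: bytes, class_key: bytes, fs_key: bytes) -> bytes:
    """A per-file key is only recoverable with BOTH the class key and the
    file-system key (HMAC stands in for real AES key unwrapping)."""
    return hmac.new(class_key + fs_key, file_id, hashlib.sha256).digest()

class_key = os.urandom(32)
fs_key    = os.urandom(32)          # the forgettable file-system key

file_key = unwrap_file_key(b"file-42", class_key, fs_key)

# Remote wipe: forget fs_key. The same file key can never be re-derived,
# so every file on the device becomes unreadable in one cheap step.
after_wipe = unwrap_file_key(b"file-42", class_key, os.urandom(32))
assert after_wipe != file_key
```

The design win is that "wipe" costs one key erasure rather than overwriting gigabytes of flash.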
In short, this article is not an accurate portrayal of current/latest devices. Though I am not sure how many people have the newer hardware, have iOS 4, AND have reformatted their file system to accommodate the required metadata.
If most sites were using bcrypt with a decent work factor, or another similar algorithm, you would probably never crack more than a tiny, tiny fraction of a password database. We know how to prevent this; it is best summarized in PBKDF2-type algorithms, bcrypt, and friends. Use them. This stuff works.
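A minimal sketch of what "use it" looks like with only the stdlib (bcrypt or scrypt via a dedicated library would be equally good; the iteration count here is an illustrative work factor, to be tuned so one hash takes on the order of 100 ms on your hardware):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor; tune for ~100 ms per hash

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest). A fresh random salt per password defeats
    precomputed rainbow tables."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("correct horse")
assert verify("correct horse", salt, digest)
assert not verify("Tr0ub4dor&3", salt, digest)
```

The work factor is the whole trick: an attacker who steals the database pays the same 100 ms per guess per password, which turns a weekend cracking run into decades.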
Inquiring minds need to know.
Only, this is exactly how you WOULD do it if you were to use a botnet component in an information warfare strategy. I direct you to the excellent work of Charlie Miller, who worked for the NSA and has DONE this type of work before (information warfare against foreign governments). Much of his paper is just plain logic/reason as well. Think about it, especially with the stolen certificates. If I have stolen certs, those are BIG playing cards, like sitting on golden 0-days. You don't whip those out until you are ready to play hard. Once you reveal your hand (that you have the stolen certs), the certs get revoked and the cleanup begins. So you don't play those cards till the time is right.

It is impossible to say whether a government entity is behind it, but if one IS, this is when and how you would do it. Plausible deniability, etc. And if our government is NOT behind this, they are still not going to complain about it. Also, it would take fairly significant resources, probably a few million dollars, to build, operate, and run a botnet like this and keep it quiet, using compartmentalization: each team/group of people isolated from the others, etc.
Think about it.
Good thing for you most large governments have the root CAs in their pocket and can easily Man in The Middle most SSL transparently, unless the user is superbly vigilant.
Except, that is not true. There are commercial proxies that make it very easy to own users who are using SSL; it just costs money. All the IT administrators have to do is install the proxy's certificate-authority cert in the list of trusted certificates, and transparent man-in-the-middle can be done with ease; the user will never be the wiser. The tools to do this can be built by anyone with a little knowledge of SSL and some time, as well. This is a major fallacy: it is only difficult for organizations that are lazy and/or can't afford the proper tools. So it is easier to fight it administratively than to pony up for the commercial tools to do it.
Your GPS device is capable of measurements nearly that precise; you just have to let it sit there a while. Let it collect data for a long time and then, voila, find the center point of all the GPS coordinates it recorded (they will jump around) and you have an incredibly accurate measurement.
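The averaging trick works because each fix is the true position plus roughly independent noise, so the mean converges on the truth (error shrinks like 1/sqrt(n)). A sketch with simulated fixes; the noise level and coordinates are made up for illustration:

```python
import random
import statistics

random.seed(1)
true_lat, true_lon = 45.000000, -122.000000

# Simulate 5000 fixes, each jittered by ~11 m (0.0001 deg) of Gaussian noise.
fixes = [(true_lat + random.gauss(0, 0.0001),
          true_lon + random.gauss(0, 0.0001))
         for _ in range(5000)]

# The estimate is simply the centroid of all recorded fixes.
est_lat = statistics.fmean(lat for lat, _ in fixes)
est_lon = statistics.fmean(lon for _, lon in fixes)

# The averaged estimate lands far closer to the truth than any single fix.
assert abs(est_lat - true_lat) < 0.00001
assert abs(est_lon - true_lon) < 0.00001
```

One caveat the simple model glosses over: real GPS error is not fully independent between fixes (atmospheric conditions and satellite geometry drift slowly), which is why you want hours of data rather than seconds.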
Surely at least one poor individual out there knows every nuance of the language.