I wouldn't say it's the only one worth using. Palm's (now HP's) WebOS is also Linux-based, supports JS-, Java-, and C++-based apps, and they are actively supporting the open-source community, even to the point of officially documenting how to gain root access. Not to mention much better multi-tasking support.
So don't feel like Android is the only remaining underdog to compete w/ Apple... Android itself is a rather closed environment compared to the alternatives that are also out there.
Indeed! Just to clarify things for the AC above...
This is an issue of the authors of some code demanding "adhere to our license or get rid of our code". That's a demand I think everyone can understand the need to honor, if just as a matter of "do unto others, or else".
DeCSS is a completely different case. The code was written by a Norwegian named Jon Johansen, who not only did the cryptographic research to invent the algorithm in the first place, but wrote the code and then released it to the world. Copyright-wise, the code is legally open source. And in every country except the US, the code is legal to use. So for anyone outside the US, there aren't any legal problems with the code. And VLC isn't a US-developed piece of software (though to help Americans, DeCSS is distributed as a separate library in many Linux distributions).
The only thing which taints the algorithm in the US is the "DMCA" law, which outlawed the use of any algorithms which circumvent a "copy protection scheme". The law is so broad that almost *anything* which alters the encoding of data (ROT13, etc) counts as a copy protection scheme; despite the fact that encrypting a DVD in no way prevents you from making copies of it (copies of the encrypted bits play just like the original). So the DVD "CSS" encryption scheme doesn't even stop copying, yet it's able to wrap itself in the legal mantle the DMCA provides. What CSS *does* do is prevent you from playing a DVD unless the software author has paid a license fee to the people who created CSS (NOTE: not the people who created the video codec it uses, that's just MPEG2). So all it does is stop you from making use of your fair use rights under US copyright law. It's your DVD; you have a right to play it, sell it, etc.
Now, you might argue that the DMCA, while unjust, is still the law, and Americans should abide by it. And that's a whole can of worms to which Slashdot has devoted many pages of discussion over the last decade. But initially, the effects of the DMCA were broader: worldwide, there were *no* open-source DVD players. Period. Because the CSS algorithm wasn't available in source form anywhere. DVD player authors worldwide had to pay a license just to link in a binary-only library. That is, until Jon Johansen (and cohorts) successfully reverse-engineered the algorithm in a completely-legal-for-Norway manner (he was tried in court and acquitted of any wrongdoing). Thus allowing the rest of the world to watch DVDs without having to pay money under a racket created by a US-only law.
And *that's* where DeCSS came from, and why it's nothing like this situation, which (while foot-and-bullet stupid) is perfectly within all internationally recognized rights of the authors.
I agree with you: decentralized is fine, and decentralized + PKI would be even nicer security-wise. And as a patient, I'd trust it over a central system for all the reasons mentioned elsewhere in this discussion.
My main point was that while PKI is optional for decentralized PHR, in order to develop a centralized PHR system like Google Health, you pretty much *have* to have PKI before the doctors will use your system. The lack of trust is a design flaw which, somehow, I don't think any of the centralized PHR developers have even realized they have, much less that PKI would fix it... otherwise they'd be hawking it at the forefront of their advertisements to doctors. I'm not really sure how they missed the trust issue, because it's the first thing the doctors I work with mentioned after they heard about Google Health.
BTW, those are some nice links regarding PKI, thanks for them! Going to have to look into how I can put that stuff to use.
It occurs to me I used a bunch of industry specific acronyms in the above post; let me define 'em...
PHR - patient health records
PHI - protected health information - mostly equivalent to PHR, but sometimes with private doctor-to-doctor discussions (such as a patient's drug-seeking habits)
EMR - electronic medical records - "EMR" software as a class is basically the electronic equivalent of the wall of paper charts in your doctor's office. Most PHR exchange will happen between these types of systems, or be printed out, edited, and faxed (sometimes to another EMR).
credentialing / credentials management - tracking of doctor licenses, certifications, etc... this stuff is personal information about the doctors (SSN, etc) that's flying around between their office, the govt, and insurance companies.
NPI / NPDB - National Practitioner Data Bank - government database of the public parts of a doctor's credentials; it's trying to unify and replace all the others that are out there (UPIN, Medicaid, Medicare, DEA). It's in use, but the information is frequently years out of date, even with the best intent of all involved.
I work for a company that produces various types of medical records management software (credentials management, PHI document exchange, EMR); and I've spent a lot of time talking to a number of doctors, both tech-savvy and not so much. That disclaimed...
Let me tell you what the key problem is with electronic medical records: they are legally the property of the patient, but no doctor can (or will) trust the important details of such records unless they come from another doctor, and have a verifiable history leading back to that doctor. Not that they don't believe the part that lists a patient's allergies, but when the medical record says the patient has a debilitating disease which *requires* they be given morphine and lots of it, the doctor has to be able to verify the patient didn't just fake a record for a quick drug fix.
This leads to an interesting state electronically: if data records are to be centralized, a public key system must be set up, tied to each doctor, allowing them both to contribute & authenticate records, and allowing the patient to do the same (though patient contributions will have to remain "untrusted" medically). You can have centralization without a public key system, but then you're just trusting the gatekeeper to never mess up, get hacked, or be paid off. And even if you set up such a system which you know (as a programmer/cryptographer) can be made to work... you have to get the doctors to trust it as well; and given how seriously most of them take the responsibility to safeguard their patients' records, that's a hard sell even to a tech-savvy doctor.
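As a sketch of that contribute-and-authenticate flow, here's a toy in Python (the primes are absurdly small and the record text is invented for illustration; a real system would use a vetted crypto library, 2048+ bit keys, and proper certificates tied to each doctor):

```python
import hashlib

# Toy RSA keypair -- illustration only, never use keys this small.
p, q = 61, 53
n = p * q            # public modulus (3233)
e = 17               # public exponent
d = 2753             # private exponent: e*d = 1 mod (p-1)*(q-1)

def digest(record: str) -> int:
    # Reduce a SHA-256 digest into the toy modulus's range.
    return int.from_bytes(hashlib.sha256(record.encode()).digest(), "big") % n

def sign(record: str) -> int:
    # The contributing doctor signs with their private key.
    return pow(digest(record), d, n)

def verify(record: str, sig: int) -> bool:
    # Any receiving doctor can check the record with the public key (e, n).
    return pow(sig, e, n) == digest(record)

rec = "patient 1234: chronic condition, morphine indicated"  # invented example
s = sign(rec)
print(verify(rec, s))                # True: record traces back to the signer
print(verify(rec + " and lots of it", s))  # almost certainly False: tampered
```

The point of the sketch: the patient (or anyone else) can carry the record around, but only a record whose signature verifies against a known doctor's public key gets medical trust.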
Which is why the only major movement we've had in adoption of electronic records has been a decentralized one... doctors are converting their offices to use electronic systems internally and exchange information electronically, but records are always transmitted in a p2p fashion (whether by email, fax, courier, etc), allowing the receiving doctor to trust the veracity of the information (at least as far as they trust the originating doctor) without requiring them to trust the patient.
Google Health is merely one of the most prominent "my PHR online" projects out there, but the problem they are faced with solving is not merely legal or Luddite-based, but an issue of cryptographic trust in its truest sense.
And that's not to mention that centralization of medical records creates a much more attractive point of failure for all kinds of things (such as identity theft, if merely for the purposes of using someone else's insurance).
And even if a public key system is implemented, the doctor (and staff) are handing off part of their trust to a central database... and given the mess of outdated information the NPI registry contains, they are loath to believe in such a system.
disclaimer: my company has a number of ongoing projects in this field, but my assessment here is pretty well unbiased, architecture- and adoption-wise; as far as I know, we have a number of pokers in the fire fitting most of the above scenarios.
Or, failing that, measure time in
I'm well aware that OAEP was designed for asymmetric ciphers... but I try not to be straitjacketed by the on-the-box labelling of an algorithm, and keep my mind on what the algorithm actually does. OAEP is basically just a general principle for armoring a block of data to foil partial recovery unless you can decrypt a large % of it.
I agree, OAEP doesn't do _anything_ to fix the weakness in the key schedule, but I was under the impression this attack required not just related keys, but related plaintext, and needed the similarity of plaintext in order to finesse the original key's bits out of the schedule. Because of that, doing something like OAEP to scramble the plaintext would seriously hamper this exploit, since they effectively wouldn't know anything about the masked bits passed into AES. It wasn't meant to be a fix, but more of a suggestion of a stop-gap for those of us who will continue to work with legacy AES systems, and need any extra security we can throw in.
[OTOH, if I mis-read, and the exploit doesn't rely on any knowledge of the plaintext, or even similar plaintext, OAEP wouldn't be applicable]
Another (somewhat less-well known) thing that can be done is to use OAEP+ (http://en.wikipedia.org/wiki/Optimal_Asymmetric_Encryption_Padding) to encrypt the datablocks that you're transmitting. The link is to OAEP but OAEP+ is probably what you'd want to use with AES... I don't have a link handy, and the basic principle of the two is the same...
The OAEP algorithm scrambles your data chunks by XORing your plaintext with randomly generated bits, but done in a way that's recoverable IF and ONLY IF you have the entire ciphertext decoded (designed for RSA, but can apply to AES). This means that the same key+plaintext will always result in different ciphertext, and also means that in order to get any useful bits of key/plaintext information, the attacker must get them all, or they're just guessing as to which set of random bits OAEP used (and it generally puts 128 bits' worth in).
While the actual OAEP protocol is a block-level action, and the safe version adds 128 bits of randomness (and thus size), the general idea can be modified to be as cheap or expensive as you want... the idea in general makes many asymmetric ciphers MUCH more secure.
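For concreteness, here's a minimal sketch of the two-branch masking principle described above. This is NOT the full PKCS#1 OAEP (no padding or verification steps), and the counter-mode SHA-256 mask function is my own MGF1-like stand-in:

```python
import hashlib
import os

def G(seed: bytes, n: int) -> bytes:
    # Mask generation function: stretch a seed to n bytes (MGF1-like).
    out = b""
    for i in range((n + 31) // 32):
        out += hashlib.sha256(seed + i.to_bytes(4, "big")).digest()
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def mask(plaintext: bytes) -> bytes:
    r = os.urandom(16)                        # 128 bits of fresh randomness
    X = xor(plaintext, G(r, len(plaintext)))  # data branch, masked by the seed
    Y = xor(r, G(X, 16))                      # seed branch, masked by the data
    return X + Y   # this whole blob is what you'd then encrypt with AES

def unmask(blob: bytes) -> bytes:
    X, Y = blob[:-16], blob[-16:]
    r = xor(Y, G(X, 16))           # need ALL of X to recover the seed...
    return xor(X, G(r, len(X)))    # ...and the seed to recover the data

msg = b"attack at dawn"
assert unmask(mask(msg)) == msg
assert mask(msg) != mask(msg)  # same plaintext, different blob every time
```

Because the seed branch depends on every byte of the data branch, recovering any plaintext bit requires the whole blob, which is the "all or nothing" property the post is leaning on.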
Those brute-forcing apps are great for whittling down the number of possibilities.
But there's more than just simple login delays in your way... there's the password hashing algorithm being used
to hash the password for storage. The two leading algorithms right now are BCrypt and SHA512-Crypt. Both of these algorithms have the facility to increase the number of "rounds" of hashing that's applied to your password when generating the hash.
What does this mean? As computers get more powerful (and/or as you need more security), you can up
the number of rounds required to hash your password, so that it reliably takes a constant amount of time
to verify it.
If you pick enough rounds that it takes 1 second for the system to hash/verify your password,
you won't notice much of a delay when logging in. But consider the worst-case scenario where the attacker
has a copy of your password hashes: even then, each guess costs him about a second,
simply because of the complexity of the calculation you're requiring him to perform. At that rate,
trying all 3-letter combinations would take him almost 5 hours, all 6-letter combinations over 9 years.
Mind you, those numbers are before any whittling away of known subsets is performed. But given that,
you can always up the number of rounds even more to re-balance things. Some high security
systems I've set up take around 5 seconds on a quad-core system just to verify the password!
Parallelization will help, of course, but if your attacker has 128 cores to work with, those 9 years
will still take him 1 month. And if you have something worth an attacker spending _that_ much time and resources,
let's hope a password is not the only thing standing in his way.
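The rounds mechanics above can be sketched with Python's standard library. PBKDF2 stands in here for sha512-crypt (which isn't in `hashlib`); the principle is the same, an iteration count that makes each single guess expensive, and that you can re-tune as hardware gets faster:

```python
import hashlib
import os
import time

def hash_password(password: str, salt: bytes, rounds: int) -> bytes:
    # Each verification costs `rounds` iterations of HMAC-SHA512.
    return hashlib.pbkdf2_hmac("sha512", password.encode(), salt, rounds)

# Calibrate: time a known round count, then scale to the target delay.
salt = os.urandom(16)
start = time.perf_counter()
hash_password("hunter2", salt, 1000)
per_round = (time.perf_counter() - start) / 1000

target = 1.0  # seconds per guess
print(f"use ~{int(target / per_round)} rounds for a {target:.0f}-second verify")

# At one guess per second, exhaustive lowercase search costs:
print(26**3 / 3600, "hours for 3 letters")         # ~4.9 hours
print(26**6 / (3600 * 24 * 365), "years for 6")    # ~9.8 years
```

The calibration step is the whole trick: when you migrate to faster servers, re-run it and store new hashes with the higher round count.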
[re: windows, I don't know windows password hash algorithms at all. I'd love a pointer to some resources though]
I agree... it just plain scares me that so many large systems don't even bother with such trivial precautions as hashing. Fixing it is even more trivial than preventing SQL injection. Until it happened, I would have _never_ guessed MySpace & phpBB stored plaintext. It seems borderline incompetent.
I've implemented tons of little one-off account systems, for websites small enough they'll probably never even see a hacker. But before I even implemented the first one, I went through the trouble of finding the best password hash algorithm I could (http://people.redhat.com/drepper/SHA-crypt.txt)
Sure, I've had customers ask "why can't it just email me my password when I forget?" But you know what? Just a few minutes of quick explanation, and even people with NO math or cs background can understand why it's important.
So for the love of the gods, people, please take an hour out of your time to put in a hash alg (even md5-crypt is better than nothing)... it's just not that hard.
Just to go off on a rant here...
I've also noticed in some web applications there is the tendency to just pick a hash alg at random. Be warned: not all hash algorithms are created equal.
"Checksum" algorithms such as CRC32 are woefully insufficient: easy to reverse (for small strings), easy to find collisions. They're basically just one guessable step away from plaintext.
"Integrity" algorithms such as MD5 & SHA are a little better, since they're very hard to reverse, and difficult to find collisions.
The problem with using these types of hashes directly is that they will always hash a password to the _same_ string. While that's desirable for their purposes (file integrity, etc), that's not good at all for passwords: you can pre-build a table of known mappings beforehand, and use it to quickly guess many passwords in parallel (aka a rainbow table): Given a table of 10k user passwords hashed like this, and a pre-built table, the odds are very good you'll get a significant number of the passwords in a very short amount of time.
This is why a proper "password" hash (eg bcrypt, md5-crypt, sha-crypt) includes a "salt" which is randomly generated each time the password is set (and not just the first time). This prevents the rainbow attacks which are possible on plain integrity hashes. But merely prepending (or appending) the salt is not enough, because its effect can be undone mathematically, at least enough that it presents no real additional barrier.
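As a sketch of what the salt buys you (illustration only: a single raw SHA-256 round, salted or not, is still far too fast to be a real password hash; use bcrypt/sha-crypt as described, this just shows the rainbow-table defense):

```python
import hashlib
import os

def make_hash(password: str) -> tuple[bytes, bytes]:
    # A fresh random salt every time the password is (re)set.
    salt = os.urandom(16)
    return salt, hashlib.sha256(salt + password.encode()).digest()

def check(password: str, salt: bytes, stored: bytes) -> bool:
    return hashlib.sha256(salt + password.encode()).digest() == stored

s1, h1 = make_hash("hunter2")
s2, h2 = make_hash("hunter2")
print(h1 != h2)                  # True: same password, different stored hash
print(check("hunter2", s1, h1))  # True
print(check("letmein", s1, h1))  # False
```

Because two users with the same password end up with different stored hashes, a precomputed password-to-hash table is useless; the attacker has to redo the work per salt, i.e. per user.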
Genuine password hashes, while using an integrity hash as their basis, mix & blend the password and the salt in so many variable ways as to make this reversal impossible. And there are so many nuances here that _you should not roll your own_ (unless you're Bruce Schneier). Read bcrypt, sha-crypt or md5-crypt's specs for some details.
Note: don't use the old unix-crypt; while it is a password hash in the strict sense, it's so old and simple that it's barely stronger than CRC32.
Note: sha-crypt adds additional flexibility via its "rounds" system, allowing it to easily grow more complicated as computers grow more powerful. This is why I prefer it above all the others.
End rant: all this is why you should use sha-crypt or md5-crypt, and nothing lesser.
egrep -n '^[a-z].*\(' $ | sort -t':' +2.0