Here's what it means: One major aspect of modern cryptography is "hash functions". A hash function is a function which essentially has the property that, in general, two inputs with very small differences will give radically different outputs. Ideally, a hash function will also make it hard to find "collisions", which are two different inputs that have the same output. Hash schemes are used for a variety of purposes, including determining whether a file is what it claims to be (by checking that the file has the correct hash value).
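To make that concrete, here's a minimal Python sketch using SHA-256 from the standard library (the specific input strings are just examples): flipping a single character of the input changes most of the output.

```python
import hashlib

# Two inputs that differ in only the final character...
a = hashlib.sha256(b"hello world").hexdigest()
b = hashlib.sha256(b"hello worle").hexdigest()
print(a)
print(b)

# ...produce digests that disagree in most positions (the "avalanche" effect).
differing = sum(1 for x, y in zip(a, b) if x != y)
print(f"{differing} of {len(a)} hex digits differ")
```

A collision would be two distinct inputs for which `a == b`; for a hash function that is doing its job, finding one should be computationally infeasible.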
Every few years, an existing hash system gets broken and needs to be replaced. MD5 is an example of this; it was very popular and then got replaced.
One of the major currently used hash schemes is SHA-1. However, a few days ago, a group from Google described an attack that allowed them to easily find collisions in SHA-1 ("easily" here is comparative - the amount of computational resources needed was still pretty high). The group released evidence that they could do so but didn't describe how in detail: they gave an example of two files with a SHA-1 collision, and they also described some of the theory behind their attack. What TFS is talking about is that, based on this, others have since managed to duplicate the attack, and some have made even more efficient variants of it; so effectively this attack is now in the wild.
If you are a large organization, you can afford more.
Yes, but the point is the way it scales. If you are tiny, you can reasonably assume that there will be almost no occasions when you need to do multiple hashes in a small amount of time. If you are larger, then you end up with a lot of extra RAM that you aren't going to use regularly but will need during peak log-in times. I agree that you can probably afford more, but getting corporations to do so is difficult; at the end of the day, everyone cares about their bottom lines.
RSA is old, broken crypto which should be migrated away from.
This suggests that you have some very opinionated and somewhat unique views.
I hate to resort to appeal to authority, but the actual analysis required to prove it is way more effort than I have time for this morning. Take a look at keylength.com, it has a host of authoritative references.
I'm familiar with many of the references there, so if there are specific ones you'd like to point to (given the large number there) it might be helpful. But I will note that what they say there agrees to a large extent with what I wrote earlier, in that they explicitly say that they are trying to provide key sizes for a desired level of protection.
It's a valid counterexample because RSA key generation, and, to a much lesser extent, RSA private key operations, are computationally expensive enough to stress low-end devices (an issue I often have to deal with... I'm responsible for some of the core crypto subsystems in Android). But it's a weak counterexample because RSA is not modern crypto. It's ancient, outmoded, we have some reasons to suspect that factoring may not be NP hard, using it correctly is fraught with pitfalls, and it's ridiculously expensive computationally. And even still, the common standard of 2048-bit keys is secure for quite some time to come. As that stackexchange article you linked mentions, the tendency has been to choose much larger-than-required keys (not barely large enough) rather than tracking Moore's law.
As discussed in the same stackexchange link, the key choice is due to infrastructural reasons (and in fact I specifically mentioned that in the part of my above comment you apparently decided not to quote). What actually happens is that we use keys that are larger than required and then use them for a *long time* before jumping to larger key sizes when we really need to. Again, the failure to perfectly track Moore's law (or even improvements in algorithms) is infrastructural, and similar issues will apply to many other crypto systems.
Frankly, I'm concerned that you claim to be someone who has done serious crypto work when you say that "we have some reasons to suspect that factoring may not be NP hard, using it correctly is fraught with pitfalls", because this indicates some serious misconceptions. First, it isn't a suspicion that factoring may not be NP-hard; we're very certain of it. If factoring were NP-hard then a whole host of current conjectures that are only slightly stronger than P != NP would have to be false: since factoring is in NP intersect co-NP, if factoring were NP-hard then we'd have NP = co-NP and the polynomial hierarchy would collapse. Moreover, since factoring is in BQP by Shor's algorithm, we'd also have NP contained in BQP, which we're pretty confident doesn't happen.
But there's a more serious failure here, which is that pretty much no major cryptographic system today relies on an NP-hard problem, and reliance on one is not by itself a guarantee of success. For example, the Merkle–Hellman knapsack cryptosystem was based on a problem known to be NP-hard, and it was broken. Similarly, NTRU has a closely related NP-hard problem, but it isn't actually known to be equivalent to it.
There's also another serious failure here; being reliant on an NP-hard problem isn't nearly as important as being reliant on a problem that is hard *for a random instance*. It isn't at all hard to make an NP-complete problem where the vast majority of instances are trivial. In fact, most standard NP-complete problems are easy for random instances under most reasonable distributions. 3-SAT is a good example of this; while there are distributions which seem to give many hard instances with high probability, naive or simple distributions don't do that.
I do agree that RSA is not ideal in some respects, especially concerns about computational efficiency. But the idea that RSA is "broken" is simply not accurate. And criticizing it as old misses that age is one of its major selling points: the older an encryption system is, the more eyes have looked at it. In contrast, far fewer people have looked at elliptic curve cryptographic systems. Moreover, the one unambiguous way in which RSA actually is broken (in the sense of being vulnerable to quantum attacks) applies just as well to ECC.
I suspect that some of our disagreement may stem from the fact that many of the terms we have been using have not been well-quantified, so the degree of actual disagreement may be smaller than we are estimating.
But this is exactly why good password hashing algorithms are moving to RAM consumption as the primary barrier. It's pretty trivial for a server with many GiB of RAM to allocate 256 MiB to hashing a password, for a few milliseconds, but it gets very costly, very fast, for the attacker. And if you can't afford 256 MiB, how about 64?
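For instance, Python's standard library exposes scrypt, a memory-hard key derivation function, via `hashlib`. A minimal sketch (the cost parameters below are illustrative, not a tuning recommendation - scrypt's working set is roughly 128 * n * r bytes, so n=2**14, r=8 is about 16 MiB per hash, and n=2**18 would give the 256 MiB figure above):

```python
import hashlib
import os

# Illustrative scrypt parameters (not a recommendation): memory use is
# roughly 128 * n * r bytes, so n=2**14, r=8 needs about 16 MiB per hash.
n, r, p = 2**14, 8, 1
salt = os.urandom(16)  # random per-user salt, stored alongside the hash

key = hashlib.scrypt(b"correct horse battery staple",
                     salt=salt, n=n, r=r, p=p,
                     maxmem=64 * 1024 * 1024,  # allow the working set
                     dklen=32)
print(key.hex())
```

The asymmetry is exactly the one described above: the server pays the memory cost once per legitimate log-in, while an attacker trying billions of guesses pays it on every guess.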
Using memory dependent hashes works better if one is a small server since one will rarely have a lot of people sending in their passwords at the same time, so the RAM space you need isn't that large. If you are a large organization then this doesn't work as well because you then need room to be able to do many such calculations functionally simultaneously.
Nope. The leverage factor in the password hashing case is linear, since the entropy of passwords is constant (on average). The leverage factor for cryptographic keys is exponential. The reason we don't use much longer keys for public key encryption, etc., is because there's no point in doing so, not because we can't afford it. The key sizes we use are already invulnerable to any practical attack in the near future. For data that must be secret for a long time, we do use larger key sizes, as a hedge against the unknown.
I agree that there's a linear v. exponential difference there (although for many of these it is more like linear v. subexponential, due to algorithms like the number field sieve), but the rest of your comment is essentially wrong. We keep keys just long enough that we consider it highly unlikely that they are going to be vulnerable, but not much more than that. That's why, for example, we've been steadily increasing the size of keys used in RSA, DH and other systems. Note, by the way, that part of the concern is also that many of these algorithms require a fair bit of computation not just on the server side but on the client side as well, which may be a small device like a tablet or phone. In fact, it would be a lot safer if we increased key sizes more than we do, but there are infrastructural problems with that. See e.g. the discussion at http://crypto.stackexchange.com/questions/19655/what-is-the-history-of-recommended-rsa-key-sizes The only place the linear v. exponential (or almost exponential) difference comes into play is in how much we need to increase the underlying key size, or how much longer we need to make the next hash, if we want it to be secure: keys only need to be increased a tiny bit, whereas hashes need to grow a lot more. But in both cases we're still not making them any longer than we can plausibly get away with for most applications.
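The subexponential point can be made concrete with the asymptotic running time of the general number field sieve. A rough Python sketch (constants and lower-order terms are ignored, so the absolute numbers are only indicative; the scaling is the point):

```python
import math

def gnfs_bits(modulus_bits):
    """Rough security level (in bits) against the general number field sieve,
    using its asymptotic cost exp((64/9)^(1/3) (ln n)^(1/3) (ln ln n)^(2/3)).
    Constants are ignored, so treat the numbers as indicative only."""
    ln_n = modulus_bits * math.log(2)
    exponent = (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return exponent / math.log(2)  # convert the e-exponent to bits

for k in (1024, 2048, 3072, 4096):
    print(f"{k}-bit RSA modulus: ~{gnfs_bits(k):.0f} bits of work")
```

Because the attack cost is subexponential rather than exponential in the modulus size, doubling an RSA modulus buys far less than double the security bits, which is why RSA moduli sit in the thousands of bits while symmetric keys sit in the hundreds.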
There's one context in which their concern isn't unreasonable: the default assumption is that if any crypto system (key exchange, public key encryption, hashing system, etc.) becomes common, then people are going to think about it pretty hard. That's going to lead to a lot of insight into how to do better than brute force. The classic example of this is RSA, where Rivest estimated that RSA-129 would take on the order of quadrillions of years to factor, even assuming the same improvement rate in computational power. But now RSA-129 is factorable in a few hours with a standard implementation of the number field sieve. This isn't as much about improvement in hardware as it is improvement in algorithms (modern sieves were inspired in large part by RSA). So if you aim for your key to be large enough that any brute-force method will be physically impossible, you can be more confident that even with algorithmic improvements, cracking will still take a very long time.
The real problem with their idea is that given current hardware, demanding long keys is computationally intensive for all involved (and as you pointed out for the vast majority of these what they want to hide just isn't worth that).
Kosovo is an independent country the same way Abkhazia is an independent country - in name only. It is a puppet state controlled by the Albanian mafia.
I disagree with this, and I suspect that a detailed discussion of the matter would take us far afield and be unlikely to resolve much.
This is also not correct - for example, more soldiers participating in the Crimean War were killed by cholera than by weapons, and typhus was rampant among soldiers during WW1. The use of antibiotics made wounds far less likely to be deadly, as did blood transfusions, which were perfected by the 1960s.
Antibiotics and blood transfusions are relevant improvements. But the death toll totals hold even when one isn't counting deaths from diseases such as cholera.
As for the Taiping Rebellion - true, I guess I am too eurocentric. But there was a reason that WW1 was supposed to be the war to end all wars - never before had Europe been that ravaged, and only WW2 topped it. So the wars in Yugoslavia, and all the conflicts which resulted from the breakup of the USSR, were small potatoes in comparison because of their far smaller scale.
But as a percentage basis of total population at the time, WW1 wasn't that much larger than previous European wars. Around 5 million people died in the Thirty Years war when there were around 600 million people alive. By WW1, there were around 1.6 billion people, and around 20 million people died. So by that standard, WW1 was only about 50% worse than the Thirty Years war.
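For what it's worth, here's the arithmetic behind that comparison, using the rough figures above (all of these numbers are estimates, not precise historical counts):

```python
# Rough figures from above (all are estimates, not precise counts).
thirty_years_deaths, world_pop_1600s = 5_000_000, 600_000_000
ww1_deaths, world_pop_1914 = 20_000_000, 1_600_000_000

rate_tyw = thirty_years_deaths / world_pop_1600s  # fraction of world population
rate_ww1 = ww1_deaths / world_pop_1914

print(f"Thirty Years' War: {rate_tyw:.2%} of world population")
print(f"WW1:               {rate_ww1:.2%} of world population")
print(f"ratio: {rate_ww1 / rate_tyw:.2f}")  # ~1.5, i.e. about 50% worse
```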
(Incidentally, Blindsight is an awesome book and that's a great sig.)
The part with fewer people dying is only true because WW1 and WW2 set the "standards" so ridiculously high. Well, that and better medical support. Compared to the 19th century wars the second half of the 20th century is pretty much competitive.
Improved medical care has certainly mattered, but that's much more a factor of the last 30 or so years (and is partially responsible also for the decrease in homicide rates). But that's relatively recent; modern emergency medicine did improve after World War II, but the casualty death rates during the Korean War and Vietnam were both close to that of World War II. It is only in the last 20 years that emergency medicine has improved so much as to really make a substantial difference there, and even then it isn't large enough to explain the entire effect. And the idea that the world wars set the standard so ridiculously high isn't accurate: the Taiping Rebellion and the Manchu conquest of China both had higher total death tolls than World War I, for example, even though the world population was much smaller (and in fact they occurred in relatively narrow geographic areas). There's an excellent book which discusses many of these issues (although it doesn't give as much attention to improved medical care as I would have liked): "The Better Angels of Our Nature" by Steven Pinker.
Yet magic and hierarchy arise from the same source, and this source has a null pointer.