
Comment Makes the proposed SLS mission even more a waste. (Score 2) 158

There's a proposal for the first SLS mission to be an around-the-moon shot. There are a lot of problems with this; Amy Shira Teitel discussed it in detail. This would make it even more of a bad idea: the SLS mission proposal is already highly unsafe, redundant, and not part of a coherent program, and this would make it super-super redundant.

Comment Here's what it means (Score 4, Informative) 159

Here's what it means: one major aspect of modern cryptography is the "hash function". A hash function is a function with the property that, in general, two inputs with very small differences will give radically different outputs. Ideally a hash function will also make it hard to find "collisions", which are two distinct inputs with the same output. Hash schemes are used for a variety of purposes, including determining whether a file is what it claims to be (by checking that the file has the correct hash value).
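To make that concrete, here's a minimal Python sketch of the "small change, radically different output" property and the file-integrity use, using SHA-256 from the standard library (the specific inputs are just illustrative):

```python
import hashlib

# Two inputs that differ in a single character...
a = hashlib.sha256(b"the quick brown fox").hexdigest()
b = hashlib.sha256(b"the quick brown foy").hexdigest()

# ...produce completely unrelated hex digests.
assert a != b

# Typical integrity check: recompute the hash of a downloaded file and
# compare it to the value the publisher lists.
def matches_published_hash(data: bytes, expected_hex: str) -> bool:
    return hashlib.sha256(data).hexdigest() == expected_hex
```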

Every few years, an existing hash system gets broken and needs to be replaced. MD5 is an example of this; it was very popular and then got replaced.

One of the major currently used hash schemes is SHA-1. However, a few days ago, a group from Google described an attack that allowed them to find collisions in SHA-1 easily ("easily" here is comparative; the amount of computational resources needed was still pretty high). The group released evidence that they could do so but didn't describe their method in full detail: they gave an example of two files with a SHA-1 collision, and they described some of the theory behind the attack. What TFS is talking about is that, based on this, others have since managed to duplicate the attack, and some have made even more efficient variants of it; effectively this attack is now in the wild.
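In code terms, a collision is exactly this (a sketch; the actual colliding PDFs from the Google attack would make `is_collision` return True, whereas for any inputs you generate by chance it essentially never will):

```python
import hashlib

def sha1_hex(data: bytes) -> str:
    return hashlib.sha1(data).hexdigest()

def is_collision(a: bytes, b: bytes) -> bool:
    # A collision is two *different* inputs whose SHA-1 digests agree.
    return a != b and sha1_hex(a) == sha1_hex(b)

# Identical inputs don't count, and ordinary distinct inputs won't match.
assert not is_collision(b"hello", b"hello")
assert not is_collision(b"hello", b"world")
```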

Comment Re:What should happen and what will happen (Score 1) 142

If you are a large organization, you can afford more.

Yes, but the point is the way it scales. If you are tiny you can reasonably assume that there will be almost no occasions when you need to compute multiple hashes in a small amount of time. If you are larger, then you end up with a lot of extra RAM that you won't use regularly but will need during peak log-in times. I agree that you can probably afford more, but getting corporations to do so is difficult; at the end of the day, everyone cares about their bottom lines.

RSA is old, broken crypto which should be migrated away from.

This suggests that you have some very opinionated and somewhat unique views.

I hate to resort to appeal to authority, but the actual analysis required to prove it is way more effort than I have time for this morning. Take a look at the link; it has a host of authoritative references.

I'm familiar with many of the references there, so if there are specific ones you'd like to point to (given the large number there) it might be helpful. But I will note that what they say there agrees to a large extent with what I wrote earlier, in that they explicitly say that they are trying to provide key sizes for a desired level of protection.

It's a valid counterexample because RSA key generation, and, to a much lesser extent, RSA private key operations, are computationally expensive enough to stress low-end devices (an issue I often have to deal with... I'm responsible for some of the core crypto subsystems in Android). But it's a weak counterexample because RSA is not modern crypto. It's ancient, outmoded, we have some reasons to suspect that factoring may not be NP hard, using it correctly is fraught with pitfalls, and it's ridiculously expensive computationally. And even still, the common standard of 2048-bit keys is secure for quite some time to come. As that stackoverflow article you linked mentions, the tendency has been to choose much larger-than-required keys (not barely large enough) rather than tracking Moore's law.

As discussed in the same stackexchange link, the key choice is due to infrastructural reasons (and in fact I specifically mentioned that in the part of my comment above which you apparently decided not to quote). What actually happens is that we use keys that are larger than required and then use them for a *long time* before jumping to larger key sizes when we really need to. Again, the failure to perfectly track Moore's law (or even improvements in algorithms) is infrastructural, and similar issues will apply to many other cryptosystems.

Frankly, I'm concerned that you claim to be someone who has done serious crypto work when you say "we have some reasons to suspect that factoring may not be NP hard, using it correctly is fraught with pitfalls", because this indicates some serious misconceptions. First, it isn't a mere suspicion that factoring is not NP-hard; we're very confident of this. If factoring were NP-hard then a whole host of current conjectures that are only slightly stronger than P != NP would have to be false. Since factoring is in NP intersect co-NP, if factoring were NP-hard we'd have NP = co-NP, and the polynomial hierarchy would collapse. Moreover, since factoring is in BQP by Shor's algorithm, we'd also have NP contained in BQP, which we're pretty confident doesn't happen.

But there's a more serious failure here, which is that pretty much no major cryptographic system today relies on an NP-hard problem, and reliance on such a problem is not by itself a guarantee of success. For example, Merkle–Hellman knapsack was based on a problem known to be NP-hard, and it was broken. Similarly, NTRU has a closely related NP-hard problem, but it isn't actually known to be equivalent to it.

There's also another serious failure here: being reliant on an NP-hard problem isn't nearly as important as being reliant on a problem that is hard *for a random instance*. It isn't at all hard to make an NP-complete problem where the vast majority of instances are trivial. In fact, most standard NP-complete problems are easy for random instances under most reasonable distributions. 3-SAT is a good example: while there are distributions which seem to give many hard instances with high probability, naive or simple distributions don't do that.

I do agree that RSA is not ideal in some respects, especially concerning computational efficiency. But the idea that RSA is "broken" is simply not accurate. And criticizing it as old misses that age is one of its major selling points: the older an encryption system is, the more eyes have looked at it. In contrast, far fewer people have looked at elliptic curve cryptographic systems. Moreover, the one unambiguous way in which RSA actually is broken (in the sense of being vulnerable to quantum attacks) applies just as well to ECC.

I suspect that some of our disagreement may stem from the fact that many of the terms we have been using have not been well-quantified, so the degree of actual disagreement may be smaller than we are estimating.

Comment Re:What should happen and what will happen (Score 1) 142

But this is exactly why good password hashing algorithms are moving to RAM consumption as the primary barrier. It's pretty trivial for a server with many GiB of RAM to allocate 256 MiB to hashing a password, for a few milliseconds, but it gets very costly, very fast, for the attacker. And if you can't afford 256 MiB, how about 64?

Using memory-dependent hashes works better if one is a small server, since one will rarely have a lot of people sending in their passwords at the same time, so the RAM space you need isn't that large. If you are a large organization then this doesn't work as well, because you then need room to do many such calculations functionally simultaneously.
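For concreteness, here's a hedged sketch of memory-hard hashing using scrypt from Python's standard library; the cost parameters are illustrative (scrypt's memory use is roughly 128 * n * r bytes, so n=2**14, r=8 is about 16 MiB per hash, and a server running many of these simultaneously multiplies that):

```python
import hashlib
import os

def hash_password(password, salt=None):
    # n=2**14, r=8 -> ~16 MiB of RAM pinned per hash computation.
    # (Larger n, e.g. 2**16 for ~64 MiB, requires raising maxmem.)
    salt = salt or os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest
```

Each concurrent login then holds ~16 MiB for the duration of the hash, which is negligible for a small site but adds up fast at large scale, which is exactly the scaling point.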

Nope. The leverage factor in the password hashing case is linear, since the entropy of passwords is constant (on average). The leverage factor for cryptographic keys is exponential. The reason we don't use much longer keys for public key encryption, etc., is because there's no point in doing so, not because we can't afford it. The key sizes we use are already invulnerable to any practical attack in the near future. For data that must be secret for a long time, we do use larger key sizes, as a hedge against the unknown.

I agree that there's a linear v. exponential difference there (although for many of these it is more like linear v. subexponential, due to algorithms like the number field sieve), but the rest of your comment is essentially wrong. We keep keys just long enough that we consider it highly unlikely that they will be vulnerable, but not much longer than that. That's why, for example, we've been steadily increasing the size of keys used in RSA, DH and other systems. Note, by the way, that part of the concern is that many of these algorithms require a fair bit of computation not just on the server side but on the client side as well, which may be a small device like a tablet or phone. In fact, it would be a lot safer if we increased key sizes more than we do, but there are infrastructural problems with that; see e.g. the discussion at the link. The only way that the linear v. exponential (or almost exponential) difference comes into play is in how much we need to increase the underlying key size, or how much longer we need to make the next hash system, if we want it to be secure. Keys only need to grow a tiny bit, whereas hashes need to grow a lot more. But in both cases we're still not making them any longer than we can plausibly get away with for most applications.
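A toy calculation (numbers purely illustrative) of the leverage difference being discussed:

```python
# Brute-forcing an n-bit key takes ~2**n guesses: each added bit doubles
# the attacker's work, so tiny key growth buys exponential security.
def key_search_cost(bits):
    return 2 ** bits

assert key_search_cost(129) == 2 * key_search_cost(128)

# A password's entropy is fixed (say ~40 bits' worth of guesses), so making
# the hash k times slower only multiplies cracking cost by k: linear leverage.
def crack_cost(seconds_per_hash, entropy_bits=40):
    return (2 ** entropy_bits) * seconds_per_hash

assert crack_cost(0.2) == 2 * crack_cost(0.1)
```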

Comment Re:Practical? (Score 1) 142

There's one context in which their concern isn't unreasonable: the default assumption is that if any cryptosystem (key exchange, public key encryption, hashing system, etc.) becomes common, then people are going to think about it pretty hard, and that's going to lead to a lot of insight into how to do better than brute force. The classic example is RSA: Rivest estimated that factoring RSA-129 would take on the order of quadrillions of years, even assuming continued improvement in computational power. But now RSA-129 is factorable in a few hours with a standard implementation of the number field sieve. This isn't so much about improvement in hardware as improvement in algorithms (modern sieves were inspired in large part by RSA). So if you aim for your key to be large enough that any brute force method will be physically impossible, you can be more confident that even with algorithmic improvements, cracking will still take a very long time.

The real problem with their idea is that given current hardware, demanding long keys is computationally intensive for all involved (and as you pointed out for the vast majority of these what they want to hide just isn't worth that).

Comment Re:What should happen and what will happen (Score 1) 142

The problem with that is at the other practical end: if you massively increase the resources needed to check a password, you also increase the server-side resources required. It won't be as bad as on the cracking end, but server resources are expensive. There's a point beyond which you aren't going to get people to agree to do it, and at a certain point that reluctance really does become reasonable. This is similar to why we don't use much longer keys for public key encryption or really large primes for DH key exchange.

Comment What should happen and what will happen (Score 4, Interesting) 142

If one looks at the history of what happened the last time a major hash was broken, there was a large gap between when MD5 had its first collisions and when finding collisions became practical: a little under a decade between the first collisions and easy collision-finding. The general expectation is that hash systems will fail gracefully in a similar way, so we have a large amount of warning to switch over. Unfortunately, we've also seen that in practice people don't adopt new hash algorithms nearly as fast as they should. The second-to-last Yahoo security breach was so bad in part because the passwords were hashed with completely unsalted MD5. The lack of salting would have been a problem by itself even when MD5 was still considered secure. That in 2015, a decade after MD5 was broken for almost all purposes, Yahoo was still using it is appalling. Unfortunately, they likely aren't the only one, and I fully expect that if Slashdot is around in a decade we'll read about someone who has foolishly stored passwords using SHA-1.
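For anyone unclear on the salting point, a minimal sketch (MD5 shown only to illustrate the flaw; the salted version uses SHA-256):

```python
import hashlib
import os

# Unsalted: every user with the same password gets the same stored hash,
# so a single precomputed table cracks all of them at once.
def unsalted(pw):
    return hashlib.md5(pw.encode()).hexdigest()

assert unsalted("hunter2") == unsalted("hunter2")

# Salted: a random per-user salt makes identical passwords hash differently,
# so precomputed tables are useless and each account must be attacked alone.
def salted(pw, salt=None):
    salt = salt or os.urandom(16)
    return salt, hashlib.sha256(salt + pw.encode()).hexdigest()

_, h1 = salted("hunter2")
_, h2 = salted("hunter2")
assert h1 != h2
```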

Comment Re: I'm sure he had nothing to hide (Score 1) 895

Kosovo is an independent country the same way Abkhazia is an independent country - in name only. It is a puppet state controlled by the Albanian mafia.

I disagree with this, and I suspect that a detailed discussion of the matter would take us far afield and be unlikely to resolve much.

This is also not correct - for example, more soldiers participating in the Crimean War were killed by cholera than by weapons. Typhus was rampant among soldiers during WW1. The use of antibiotics made wounds far less likely to be deadly, as did blood transfusions, which were perfected by the 1960s.

Antibiotics and blood transfusions are relevant improvements. But the death toll totals hold even when one isn't counting deaths from diseases such as cholera.

As for the Taiping Rebellion - true, I guess I am too eurocentric. But there was a reason that WW1 was supposed to be the war to end all wars - never before had Europe been that ravaged, and only WW2 topped it; the wars in Yugoslavia and all the conflicts which resulted from the breakup of the USSR were small potatoes in comparison because of their far smaller scale.

But on a percentage basis of total population at the time, WW1 wasn't that much larger than previous European wars. Around 5 million people died in the Thirty Years War when there were around 600 million people alive. By WW1, there were around 1.6 billion people, and around 20 million died. So by that standard, WW1 was only about 50% worse than the Thirty Years War.
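Checking that back-of-the-envelope arithmetic:

```python
# Deaths as a fraction of world population at the time.
thirty_years_war = 5e6 / 600e6   # ~0.83%
ww1 = 20e6 / 1.6e9               # ~1.25%

# The ratio is exactly 1.5, i.e. "about 50% worse".
assert abs(ww1 / thirty_years_war - 1.5) < 1e-9
```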

(Incidentally, Blindsight is an awesome book and that's a great sig.)

Comment Re: I'm sure he had nothing to hide (Score 1) 895

Kosovo is an independent country. China continues to have serious problems with Tibet (and the situation there has been part of some ongoing issues - for example, it was part of why the US decided not to include China in the ISS in the 1990s). But your basic point does have some merit; it isn't as if the Russian situation is the only example of this sort of thing, and many such annexations have not been reverted. But every time this happens, there's damage to a taboo which is by and large strong.

The part with fewer people dying is only true because WW1 and WW2 set the "standards" so ridiculously high. Well, that and better medical support. Compared to the 19th century wars the second half of the 20th century is pretty much competitive.

Improved medical care has certainly mattered, but that's much more a matter of the last 30 or so years (and is partially responsible also for the decrease in homicide rates). Modern emergency medicine did improve after World War II, but the casualty death rates during the Korean War and Vietnam were both close to that of World War II. It is only in the last 20 years that emergency medicine has improved enough to really make a substantial difference there, and even then the improvement isn't large enough to explain the entire effect. And the idea that the world wars set the standard ridiculously high isn't accurate: the Taiping Rebellion and the Manchu conquest of China both had higher total death tolls than World War I, even though the world population was much smaller (and in fact they occurred in relatively narrow geographic areas). There's an excellent book which discusses many of these issues (although it doesn't give as much attention to improved medical care as I would have liked): "The Better Angels of Our Nature" by Steven Pinker.

Comment Re: I'm sure he had nothing to hide (Score 1) 895

The claim isn't that there are no wars, but that there have been few large-scale wars. In general, even as the population has gone up, the total number of war casualties has been low, and on a percentage basis, the fraction of people dying in war has gone down. And yes, the claim isn't that there have been no annexations, and yes, every one of those is problematic. The particular problem here is the revanchist aspect: justifying annexation by claims that territory was historically one's own or contains people of one's own ethnic group, which only applies to some of those examples. Note, by the way, that for several of your examples, the country attempting annexation doesn't currently have control; East Timor, for example, is independent.

Comment Re: I'm sure he had nothing to hide (Score 4, Insightful) 895

The difference is how recent the event is, and that's important. A major reason the last 50 years have been relatively peaceful is that, post World War II, a general norm was established that taking territory based on revanchist claims is not acceptable. Russia's actions seriously undermine that norm.

Comment Connected to jobs also (Score 5, Informative) 491

Millennials have fewer job prospects in general and are less wealthy than their parents were at the same age, by a variety of different metrics. In the last few years, something, and it isn't clear what, has been drastically reducing the resources available to young people. This is combining with cost disease in a way that leaves many people in the young age bracket with far less effective purchasing power than their parents would have had for many things. It isn't uniformly the case: some goods, such as computers and cell phones, are far cheaper (and often weren't even available to their parents), but those are a relatively small fraction of total spending. Some other trends are clearly positive, such as the reduction in poverty in the US, and the overall trends throughout the world are mainly positive. But young people in the US specifically are clearly going through a bad time in general.
