
Comment Re:Hard wired (Score 1) 148

As hunter-gatherers (you know, in the time before writing and the invention of religion)

Before writing, yes. I strongly suspect that religion existed even then. All of the hunter-gatherer societies that survived to historical times had religions, often quite sophisticated ones.

Comment Re:Practical? (Score 1) 119

The bit change is not necessary for computation at all from an information-theory perspective. Theoretically, no energy is needed for any computation. Whatever you can do with an active circuit can be done with a passive circuit (e.g. your camera lens can be used for an FFT). Energy is only needed for reading information. So no matter how complex the cryptography is, the theoretical energy required to decrypt is zero.

Yes and no. In my understanding as a physicist, bit flipping per se is free, but you need a minimum of kT ln 2 of energy to destroy a bit of information (Landauer's principle). To avoid destroying information during computation, you basically need to store every step you do, so that the operation becomes reversible (google "reversible computing" for more). This is not usually practical, so most computing does suffer from the kT ln 2 limit per bit operation.

The lens example is valid IMHO, as the Fourier transform is reversible (and there are similar integer transforms that stay bit-exact, if you're worried about floats). But to make that practical, you need to store all that information somewhere.
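For a sense of scale, the Landauer bound is easy to compute directly; a quick sketch in plain Python (the room temperature T = 300 K and the 2^80 bit-erasure count are illustrative assumptions):

```python
import math

# Landauer's principle: erasing one bit dissipates at least k*T*ln(2) joules.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed room temperature, K

e_bit = k_B * T * math.log(2)   # ~2.9e-21 J per erased bit

# Toy figure: a brute-force search that irreversibly erases 2**80 bits
# would dissipate at least this much energy (a few kilojoules):
e_search = e_bit * 2**80
print(f"{e_bit:.3e} J per bit, {e_search:.3e} J for 2^80 erasures")
```

The striking part is how small the per-bit figure is: the thermodynamic floor only becomes a practical concern at astronomical operation counts.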

Comment Why kill yourself? (Score 1) 61

There are a number of people who think that if you are below 60 now, you may well live forever.

That may be a bit extreme but I don't think living to 200 is unlikely if you are anywhere below 50 and keep yourself healthy...

So if you end up missing this 200 years from now, it pretty much means you did yourself in. Don't do that.

Comment Re:I cut off FB a month ago. It's been a good mont (Score 1) 148

I never really understood the allure of heroin. My friends kept urging me... "You gotta try it" they would say. I finally caved in and bought some, but I just couldn't commit to injecting. With no real reward for my effort, I just deemed that it was a huge waste of my time, and disposed of what I had. I have been free and clear of it since, and I couldn't feel better.

FACEBOOK. I meant Facebook, not heroin.

Comment Don't worry I've got a backup (Score 4, Funny) 182

As it turns out I have a backup sample. Because you have to keep it at incredibly high pressure, I keep it in the much more reliably pressurized environment of a dorm room with two Chemical Engineering majors.

Indeed, because of the pressures involved, I had to add some padding around the sample to prevent the rare metal from being crushed.

You can come collect it whenever, except of course when there's a sock on the door handle (P.S. there is never a sock on the door handle).

Comment Re:What should happen and what will happen (Score 1) 119

The problem with that is on the other practical end: if you massively increase the resources needed for cracking, you also increase the server-side resources needed; it won't be as bad as it will be on the cracking end, but server resources are expensive.

It won't be as bad as on the cracking end, that's the whole point. The reason for doing password hashing is to exploit the asymmetric level of effort between hashing and brute force search. To make that work, though, you do need to invest as much as you can afford in the server, to move the bar for the attacker as high as possible -- hopefully out of reach of all but the most serious. If you can't afford very much, fine, but realize that you're also not setting the bar very high.

But this is exactly why good password hashing algorithms are moving to RAM consumption as the primary barrier. It's pretty trivial for a server with many GiB of RAM to allocate 256 MiB to hashing a password, for a few milliseconds, but it gets very costly, very fast, for the attacker. And if you can't afford 256 MiB, how about 64?

What you definitely do not want is a solution that takes microseconds and uses a few dozen bytes. That creates a trivial situation for the attacker given modern hardware, and your server absolutely can afford more than that.
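Python's standard library happens to ship a memory-hard KDF, scrypt, whose RAM footprint is roughly 128 * n * r bytes, so the trade-off above is easy to sketch (the 16 MiB parameters here are illustrative, not a tuning recommendation):

```python
import hashlib
import os

def hash_password(password: bytes, salt: bytes) -> bytes:
    # scrypt memory use is ~128 * n * r bytes: 128 * 2**14 * 8 = 16 MiB here.
    # Raising n raises server cost a little and attacker cost enormously,
    # since the attacker must pay the RAM price per parallel guess.
    return hashlib.scrypt(password, salt=salt,
                          n=2**14, r=8, p=1,
                          maxmem=64 * 1024 * 1024,  # allow up to 64 MiB
                          dklen=32)

salt = os.urandom(16)   # unique random salt per stored password
digest = hash_password(b"correct horse battery staple", salt)
```

The same shape applies to Argon2 and friends; the point is simply that the memory parameter, not just the iteration count, is what moves the bar for GPU and ASIC attackers.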

This is similar to why we don't use much longer keys for public key encryption and use really large primes for DH key exchange.

Nope. The leverage factor in the password hashing case is linear, since the entropy of passwords is constant (on average). The leverage factor for cryptographic keys is exponential. The reason we don't use much longer keys for public key encryption, etc., is because there's no point in doing so, not because we can't afford it. The key sizes we use are already invulnerable to any practical attack in the near future. For data that must be secret for a long time, we do use larger key sizes, as a hedge against the unknown.
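The two growth rates are easy to contrast with toy numbers (the 2**40 password-pool size is an assumption for illustration, not a measured figure):

```python
def stretched_attack_cost(guesses: int, work_factor: int) -> int:
    # Password hashing: attacker cost grows *linearly* with the work factor.
    return guesses * work_factor

def keysearch_cost(bits: int) -> int:
    # Key length: brute-force cost grows *exponentially*, doubling per bit.
    return 2 ** bits

# A millionfold hashing slowdown buys roughly 20 bits (2**20 ~ 1e6) of
# effective strength; each extra key bit doubles the search space outright.
print(stretched_attack_cost(2**40, 10**6))   # ~2**60
print(keysearch_cost(128))                   # vastly larger
```

This is why heroic hashing parameters still only nudge the bar, while even modest key-length increases put brute force permanently out of reach.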

Comment Re:Are two hashes better than one? (Score 1) 119

... however it's worth noting that there are currently no ways of finding a collision for both MD5 and SHA-1 hashes simultaneously

Any crypto geeks want to weigh in on the truth of this statement? I've often wondered about this. Wouldn't using two hash algorithms be easier and more effective over the long term than getting the whole world to upgrade to the Latest And Greatest Hash every ~10 years?

MD5 + SHA1 is a "new hash algorithm". Think about what you have to do to shift to a new algorithm... all of the message formats that have to be updated, all of the stored values that have to be recomputed, all of the cross-system compatibility dances you have to do to ensure that you can upgrade both sides (or all sides; there are often more than two) in order to update without having to make some error-prone attempt to cut over simultaneously.

The challenge of changing hash algorithms has nothing to do with getting correctly-implemented source code for a new algorithm. That's super easy. The challenges are all about how to handle the changeover, which is exactly the same whether you're switching to an actual new algorithm that incorporates the latest ideas and is (currently) completely invulnerable to all known attacks, or to a combination of old, broken algorithms that may or may not be better than either one alone.
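To make that concrete: the concatenated construction is itself just another fixed algorithm with its own output format, and (per Joux's 2004 multicollision result) not much stronger than the better of its two components. A minimal sketch:

```python
import hashlib

def md5_sha1(data: bytes) -> str:
    # Concatenating the two digests yields a single new 288-bit "algorithm";
    # every message format and stored value still has to be migrated to it,
    # exactly as with any genuinely new hash.
    return hashlib.md5(data).hexdigest() + hashlib.sha1(data).hexdigest()

print(md5_sha1(b"abc"))  # 32 hex chars of MD5 followed by 40 of SHA-1
```

Every deployment problem listed above (formats, stored values, cross-system upgrades) applies to this combined digest just as it would to SHA-3.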

The right solution is to build systems with algorithm agility and algorithm negotiation, then to add new algorithms as soon as they're standardized and remove algorithms completely once all parties have updated.

Comment Re:For variable values of "practical" and "relevan (Score 2) 119

Not a lot you can do?

Anything that requires signatures is vulnerable to forgery if the signer's certificate specifies SHA1.

An attacker could forge:

1. Software signatures - to slip malware into a software vendor's distribution channels.

That requires a second pre-image attack, not just a collision attack. (What gweihir called "two-sided" rather than "one-sided"... though that is not standard terminology).

2. SSL certificates - to MITM web connections to phish, steal data, or distribute malware.

Also requires a second pre-image attack.

3. Personal digital signatures - to fabricate documents, including emails, transactions, orders, etc., that are normally trusted implicitly due to the signature

This one can be done with a collision attack. You generate two different documents which hash to the same value but have different contents. The PDF format, unfortunately, makes it pretty easy to generate documents which look sensible and have this property. It's not possible with more transparent formats (not without a second pre-image attack).

4. Subordinate CA certificates - to create trusted certificates which permit all of the above

The problem lies with #4.

This can only be done with a collision attack if the CA is really, really stupid. Proper CAs should include chain-length restrictions in their certificates. That way, even if you can create two certificates which hash to the same value, one with the keyCertSign bit set to true (which the CA would refuse to sign) and one without (which presumably you can get the CA to sign), it wouldn't matter: if you used the former to generate other certs, no one would accept them, because the chain would be too long.

The only solution is to discontinue the use of SHA1 internally and to revoke trust for all CAs that still use SHA1.

I certainly agree that any CA still issuing certificates with SHA1 should not be trusted. Any existing certs based on SHA1 should be scrutinized, but most of them are still secure.

Better crypto has existed for a long time; the standard for SHA-2 was finalized in 2001, well over a decade ago.

Absolutely. Of course, I say that as the maintainer (ish) of an open source crypto library that still uses SHA1. In systems that weren't originally designed for digest agility, it's often hard to retrofit. Today's news is a nice kick in the pants, though.
