
Comment Re:Matter, anti-matter... (Score 2) 393

Are we sure there were equal amounts?

The way I have understood what's been said so far is this. The universe started with equal amounts of matter and antimatter. Matter and antimatter can only be produced and annihilated in equal amounts. Today we have reached a state where there is much more matter than antimatter.

This is obviously inconsistent, so one of those three statements has to be wrong. I for one don't know which one, and I also haven't come across a physicist who had solid evidence for which of them is wrong.

One possibility I have been wondering about is that of antimatter galaxies. Seen from a distance, wouldn't an antimatter galaxy look exactly like one made of matter? I have been told this is not a possibility either, since it would imply that somewhere there would have to be a boundary between matter and antimatter, where a lot of annihilation would be going on and producing gamma radiation, which we have not observed. I am wondering if the reason we are not observing this boundary is that those regions of space are by now so empty that there is no significant amount of annihilation going on anymore. Or could it be that those boundaries are so far apart that there just isn't any such boundary within our event horizon? That would imply that the antimatter is out there somewhere beyond the event horizon, and maybe 10^12 years from now it will become visible.

Comment Re:He pretty much agrees with you on page 12. (Score 1) 277

If you have a list of ten million passwords, and you hash each password and then compare to the password database, you're just generating a rainbow table on the fly. There's no difference between that and doing the ten million hashes beforehand, or getting the list from somebody who already did.

Rainbow tables don't work that way. A rainbow table is not based on a dictionary. When generating a rainbow table you will be hashing pseudorandom inputs (chosen according to a probability distribution), and you are not hashing every input just once; you may end up reaching the same input multiple times. Also, a rainbow table does not store all the computed hashes, only the endpoints of each chain.
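
To make that a bit more concrete, here is a rough Python sketch of how the chains in a rainbow table are built. The charset, password length, chain length and reduction function are all made up for illustration; real tables use tuned parameters and per-table reduction functions.

    import hashlib
    import string

    CHARSET = string.ascii_lowercase
    PW_LEN = 6        # toy assumption: fixed-length lowercase passwords
    CHAIN_LEN = 1000  # hashes per chain; only the endpoints are stored

    def H(password: str) -> bytes:
        return hashlib.sha1(password.encode()).digest()

    def reduce_to_password(digest: bytes, step: int) -> str:
        # Maps a hash back into the password space. The step index makes the
        # reduction differ per chain position, which is what distinguishes a
        # rainbow table from plain hash chains.
        n = int.from_bytes(digest, "big") + step
        chars = []
        for _ in range(PW_LEN):
            n, r = divmod(n, len(CHARSET))
            chars.append(CHARSET[r])
        return "".join(chars)

    def build_chain(start: str) -> tuple:
        pw = start
        for step in range(CHAIN_LEN):
            pw = reduce_to_password(H(pw), step)
        return (start, pw)  # the table stores only these two values

During an attack the same chain computation is repeated from the leaked hash to find which stored endpoint it belongs to, so most hashes are recomputed on the fly rather than stored.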

Case one: the bad guy wants to crack any account, and doesn't care which. The bad guy benefits from large numbers, because it increases the odds of somebody using a lame password.

I did not say having a large number of users made the system harder to attack. I said the slowdown salting imposes on the attack is proportional to the number of users. If salted hashes are used, there are two factors involved as the number of users increases. More users means a higher probability of somebody using a really lame password, which benefits the attacker; I am making no claims about the exact size of this factor. But salting means each password from the dictionary has to be hashed more times, which is a disadvantage to the attacker. In an ideal world these two factors cancel out; in the real world they probably don't cancel out exactly. Nevertheless I stand by my statement about the slowdown of the attack introduced by salting, since it is the other factor about which there is the most uncertainty.

So let's assume an attacker wants to find just one valid password for one user. And let's assume there are n users and that, in order to find one valid password, the attacker needs a dictionary containing m passwords. So far those assumptions say nothing about how passwords are stored, and they are general enough to cover any such scenario. We don't know what n and m will be in a concrete scenario. What I stated is that the number of hashes an attacker needs to compute is n times larger if the password database is salted than if plain unsalted hashes are used.

If the passwords are not salted, the attacker needs to compute just m hashes and compare those against the password database. That comparison is easy to perform by simply sorting the hashes. If OTOH the passwords are salted, the attacker needs to compute m*n different hashes in order to find the one combination where there is a match.

If n is reasonably large, and if there is no strict password policy, it is likely that m will be just 1. But even in that case, the calculations are still valid.
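
To illustrate the counting, here is a rough Python sketch of the two attacks. The data structures are hypothetical; assume the dictionary has m entries and the database has n users.

    import hashlib

    def sha(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def attack_unsalted(dictionary, hashes_by_user):
        # m hash computations in total, no matter how many users there are.
        table = {sha(pw.encode()): pw for pw in dictionary}
        return {user: table[h] for user, h in hashes_by_user.items() if h in table}

    def attack_salted(dictionary, salted_by_user):
        # Up to m * n hash computations: every dictionary entry has to be
        # rehashed separately with each user's salt.
        cracked = {}
        for user, (salt, h) in salted_by_user.items():
            for pw in dictionary:
                if sha(salt + pw.encode()) == h:
                    cracked[user] = pw
                    break
        return cracked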

Comment Re:WTF? (Score 1) 277

An old-school salted hash == partial verification for the whole entry. So the old-school solution is strictly worse than this.

You are right. I misunderstood that detail the first time around. The two bytes that are leaked are not two bytes of the password, but rather two bytes of the salted hash.

An attacker could still utilize those two bytes to perform an offline attack that reduces the length of a dictionary by a factor of 65536, followed by online attempts at logging in using this much shorter dictionary. However, the article did mention how that attack can be detected on the server side.
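
A rough sketch of that filtering step in Python, assuming for illustration that the two leaked bytes are the first two bytes of a salted SHA-256 (the real construction in the article may differ):

    import hashlib

    def filter_dictionary(dictionary, salt: bytes, leaked_two_bytes: bytes):
        # On average only 1 in 65536 candidates survives this offline filter;
        # the survivors are then tried in online login attempts.
        return [pw for pw in dictionary
                if hashlib.sha256(salt + pw.encode()).digest()[:2] == leaked_two_bytes]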

Comment Re:He pretty much agrees with you on page 12. (Score 1) 277

You do not understand what you are talking about. Salting has absolutely no influence on brute-forcing.

I give up. You have clearly demonstrated that you do not know what you are talking about, and that you are not willing to learn. I don't know why you think you can convince me of something by repeating a statement which I know is not true.

If you are not willing to accept that you were mistaken, there is no point in continuing this thread any further.

The number of users has absolutely no influence on the time it takes to brute-force one. You clearly do not know what "brute-force" means. Maybe read up on the concepts before spouting utter nonsense?

  • You should read what I wrote instead of making something up, which I did not write.
  • I'd say taking a university degree in cryptography does count as reading up on the concepts.

Comment Re:He pretty much agrees with you on page 12. (Score 1) 277

Either it is insecure, or it is vulnerable to DoS. So what is your point?

If you use a salted hash based on a cryptographic hash with no known weaknesses, then you won't be as vulnerable to DoS attacks, and security-wise it is a justifiable solution. Hashing and salting add a lot of security: they slow down an attack significantly without a significant cost for legitimate usage. That's what you expect from good cryptography. Iterating the hash will OTOH slow down legitimate usage and attacks by the same factor. Slowing down legitimate usage by the same factor that you slow down attacks is not good cryptography.

Instead of slowing down legitimate usage without being able to slow down attacks by even more, you should be looking at adopting protocols that provide real security improvements. For example, it is entirely possible to perform password authentication without the server ever having a chance of picking up the password in cleartext. Such protocols provide real security improvement. You can also increase the computation cost on the client side rather than the server side, and slow down brute force of a leaked password database that way. The latter is still not great, because you are still only slowing down the attacks by the same factor as legitimate usage. But at least you don't make yourself vulnerable to DoS attacks if those extra computations happen on the client rather than the server.
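
As a rough illustration of moving the cost to the client (not a full protocol, and the scrypt parameters are just example values):

    import hashlib

    def client_stretch(password: str, salt: bytes) -> bytes:
        # The expensive, memory-hard computation runs on the client.
        return hashlib.scrypt(password.encode(), salt=salt,
                              n=2**14, r=8, p=1, dklen=32)

    def server_record(stretched: bytes) -> bytes:
        # The server only performs one cheap hash per login attempt, so it is
        # not a DoS target, yet an attacker with the leaked records still has
        # to pay the full scrypt cost per guess.
        return hashlib.sha256(stretched).digest()

At login the server simply compares server_record(stretched) against the value it stored at enrollment.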

If you go with a salted hash with only 1 or 2 iterations of the hash function to protect yourself against DoS attacks, and you push for adoption of protocols that hide the password from the server, then you are doing more for security than most sites. And should those salted hashes leak, only the very weakest passwords will be brute-forced. In that situation, if a user's password is broken, the user bears the responsibility for choosing such a weak password in the first place.

Comment Re:He pretty much agrees with you on page 12. (Score 1) 277

The ideal would be some form of client certificate. That way, the server either stores a copy of the key, or just stores a hash of it so it can recognize the key material when presented with it.

A certificate means a trusted third party has signed a statement that this particular public key belongs to this user. I'm not hooked on the idea of a trusted third party for this. Having the server store the public key or a hash of it, like you suggest, is a better approach. But then it is not really a client certificate.

That approach is sort of similar to what I describe, except that in my scenario the private key is computed on the fly, and in your case it is stored on the computer. Each approach has advantages. It is possible to design the protocol such that the client can choose whichever of the two approaches it prefers, and the server won't know which of the two is in use.

One drawback of storing the private key on the computer is that there is now a file you can lose, and if you do, you lose access to all sites. My approach would only require you to remember a password, and then you can always get a new computer and use it. That may itself be a drawback in some scenarios: if someone learned your password, they could authenticate as you. OTOH if only the private key is required, somebody stealing your device could authenticate as you (though the private key could be encrypted using a password).

Another drawback of storing the private key is that you would be using the same key with many sites, which could then violate your privacy by deducing that all of those accounts on different sites belong to the same person. My approach would use a different private key for each site, since the key would depend on the salt. The protocol for setting the password in the first place could enforce uniqueness of the salt by requiring it to be a hash combining inputs from both the client and the server.
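
A minimal sketch of that salt negotiation, assuming SHA-256 and 16-byte random contributions from each side (the exact construction would be up to the protocol):

    import hashlib
    import secrets

    def negotiate_salt(client_random: bytes, server_random: bytes) -> bytes:
        # Neither side can force the salt to collide with another site's salt,
        # because the other side contributes unpredictable input to the hash.
        return hashlib.sha256(client_random + server_random).digest()

    salt = negotiate_salt(secrets.token_bytes(16), secrets.token_bytes(16))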

If this doesn't work, maybe a system where an ephemeral key is generated and used, which is signed by the user's real key (which is kept offline.)

If you do go with the stored key approach, then this additional layer of indirection would be beneficial to the security.

but it would get rid of passwords altogether.

I don't believe in getting rid of passwords. If you don't have any passwords at all, then anybody stealing your hardware could authenticate as you. For me the goal is not to get rid of passwords, but to ensure you never need to present your password to an untrusted device.

Comment Re:He pretty much agrees with you on page 12. (Score 1) 277

What really needs to happen is separation of duties and storing the hashes the same way companies store private keys used for signing... a physically secure, hardened appliance with a limited interface out. Backups are done to a USB port physically on the appliance, and the data never is exposed on the network, only calls to use it.

I say the effort is better spent on new protocols where the server will never be able to learn the password, even if an administrator decides to install software to capture data after it has been decrypted by SSL. Such protocols are possible, but not widely deployed.

How many users wouldn't want a system where the administrator couldn't leak the users' passwords even if they wanted to? As an added bonus, you could safely use the same password on all sites that make use of such a more secure protocol. The implication would be that you only have to remember one password, and that would hopefully get users to choose a slightly stronger password than they do today.

Comment Re:He pretty much agrees with you on page 12. (Score 1) 277

Salting actually provides no security at all against brute-forcing. Salting helps against rainbow-tables, but that is a different attack.

You clearly did not understand what I wrote. Salting slows down attacks in proportion to the number of users. The only way you can attack salted hashes as fast as unsalted hashes is if you are attacking a system which is only ever used by one single user.

Rainbow tables are just a way to start the attack before the leak happens. If you already have the leaked hashes, there is no point in using a rainbow table, since rainbow tables are slower than an ordinary brute force attack.

If there are many small leaks from an identical hash function, you would have to decide when there are enough to bother with a brute force attack. If you use rainbow tables, most of the computation can be reused for each leak, because that part can be computed before the leaks happen. But overall you end up spending more CPU time on the attack than you would have if you just waited for all the leaks to happen and only then started a brute force attack.

Comment Re:He pretty much agrees with you on page 12. (Score 1) 277

But there are other ways, for example requiring users to solve a captcha in addition or rate-limiting individual IP addresses.

Rate-limiting individual IP addresses is of limited value. It is not that hard for an attacker to attack you through different IP addresses. And in many cases an attacker would even be able to get a few requests coming through the same IPs as some legitimate users. If an attacker has access to a botnet, the situation gets even worse.

I predict that killing the login service due to lack of CPU capacity requires a much smaller botnet than flooding the network connection would.

Using a captcha may help. But the implication would be that if you are under attack, you are going to require a captcha from lots of legitimate users. That's not very user friendly. I am not aware of any formal analysis of the security of captcha schemes, but in cases like this breaking the captcha would only permit a DoS attack, not gain actual access, so it is not that bad. And you can even adjust the difficulty of the captcha dynamically to keep server load under control.

Comment Re:He pretty much agrees with you on page 12. (Score 1) 277

All of those methods for slowing down password validation are DoS attack vectors.

You can protect against that by moving part of the computation to the client side. Once upon a time, I wrote a proof-of-concept in JavaScript. Much better solutions are possible if you designed a new protocol and were able to get clients to support it.

A rough idea goes like this. The client sends the username to the server. The server responds with a salt. The client uses salt + password to seed a PRNG. Output from the PRNG is used to generate an asymmetric keypair. The client then signs a session ID (from the SSL layer) using the generated secret key. The client sends the public key and signature to the server. The server then validates the public key by computing a salted hash of it (using a different salt value) and comparing against the stored value, and it validates the signature.

For each user, the server would need to store two salts and a hash value. The most expensive part of the above calculation is the generation of the asymmetric keypair, which happens on the client. Thus you are better protected against DoS attacks. And that computation actually requires more CPU time than a typical iterated hash for password validation. The second most expensive part of the calculation is the signing using the secret key, which also happens on the client.
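
A rough Python sketch of the idea, using Ed25519 for the keypair and scrypt for the seed derivation. Both are my own substitutions (my old PoC used neither), and with this choice most of the client-side cost sits in the seed derivation rather than in the key generation itself.

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)

    def derive_keypair(password: str, salt: bytes) -> Ed25519PrivateKey:
        # Deterministic: the same password and salt always yield the same key,
        # so nothing secret needs to be stored on the client.
        seed = hashlib.scrypt(password.encode(), salt=salt,
                              n=2**14, r=8, p=1, dklen=32)
        return Ed25519PrivateKey.from_private_bytes(seed)

    # Client side: derive the key, sign the session ID, send pubkey + signature.
    def client_login(password: str, salt: bytes, session_id: bytes):
        key = derive_keypair(password, salt)
        pub = key.public_key().public_bytes(
            serialization.Encoding.Raw, serialization.PublicFormat.Raw)
        return pub, key.sign(session_id)

    # Server side: it stores salt, salt2 and sha256(salt2 + pubkey) per user.
    def server_verify(stored_hash: bytes, salt2: bytes,
                      pub: bytes, signature: bytes, session_id: bytes) -> bool:
        if hashlib.sha256(salt2 + pub).digest() != stored_hash:
            return False
        try:
            Ed25519PublicKey.from_public_bytes(pub).verify(signature, session_id)
            return True
        except InvalidSignature:
            return False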

The validation of the signature does require a bit of CPU time on the server. But that happens after you have validated the public key. For an attacker, even getting the public key is actually harder than breaking the password protection schemes we are using today. Effectively you could remove the signature validation step from the above protocol, and it would still be more secure than what we are currently using.

Comment Re:He pretty much agrees with you on page 12. (Score 1) 277

Salts do not help that much today, as brute-forcing is faster than generating rainbow-tables.

Salting provides lots of security against brute force attacks. Let's assume you have a system with one million users and you have a list of the 10 million most common passwords. If the system uses unsalted hashes, you only have to hash those 10 million passwords once to know which users have been using a password from that list. If OTOH the hashes are salted, you have one million salts and 10 million passwords. That's 10 trillion (10^13) combinations you have to try in order to know which users used which passwords from your list.

Comment Re:Clarification (Score 1) 277

Or one person with N passwords he logs in with. In which case, why not just give that guy a one time pad sort of thing that he primes each server with?

Actually, for that we can just use a single strong password, which could for example have 128 bits of entropy. So you just need to have one employee capable of memorizing a strong password; it would probably be a good idea to have a few such employees for redundancy.
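
For example, a password with roughly 128 bits of entropy can be generated like this (Python; the encoding is just one convenient choice):

    import secrets

    # 16 random bytes = 128 bits of entropy, encoded as ~22 URL-safe characters.
    password = secrets.token_urlsafe(16)
    print(password)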

Comment Re:WTF? (Score 3, Informative) 277

So... how do you know if a user can log in? You have to wait until a bunch of users want to log in simultaneously?

Exactly. The first of those users will experience the password validation taking longer than usual. How much longer depends on various parameters in the system. Even if some of the users give up and close the connection, you still have the information needed for unlocking, so you don't need all of those users to be logged in simultaneously. You just need enough different users trying to log in after a restart. Once the threshold is reached, the user whose attempt crossed it will get logged in after waiting at most a couple of seconds. Earlier users will get logged in at the same time if they are still waiting.

But I suspect you might be able to DoS that process by just submitting a stream of invalid passwords. They may be able to avoid that through the partial validation described in the paper, but the partial validation sounds like it leaks so much information that I would rather trust an old-school salted hash.

Comment Re:There is a major difference (Score 1) 132

Now, finally, you said "some people have argued... shouldn't even be actively be contacting candidates." The question is ... why is this justified?

I don't know if it is justified. But enough people have taken that position that we need to at least acknowledge there is a group of people with that opinion.

It is not hard to understand why some people have that opinion. Nobody wants to see their own inbox filled up with offers from loads of companies they'd never want to work for. But of course a few unwanted offers per year is better than a situation where you couldn't apply for those jobs even if you wanted to.
