
Comment Re:Dear slashdot, (Score 1) 92

Transaction fees prevent DoS attacks too, even with infinite block size.

I don't think so. Suppose somebody wants to perform a DoS attack while spending as few bitcoins as possible. They just take a tiny amount of bitcoins and spend it all on transaction fees, one satoshi at a time. With transactions paying one satoshi in fees and not actually transferring any bitcoins anywhere, miners would still have an incentive to include those transactions in their blocks. After all, if there is no limit on the block size, a miner may as well take that additional fee.
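As a rough illustration (the figures here are my own assumptions, in particular the ~250 byte transaction size), a single bitcoin split into one-satoshi fees funds an enormous amount of spam:

    # Back-of-the-envelope cost of a fee-only spam attack.
    # All figures are illustrative assumptions, not measurements.
    SATOSHI_PER_BTC = 100_000_000        # 1 BTC = 10^8 satoshi by definition
    TX_SIZE_BYTES = 250                  # assumed size of a minimal transaction

    budget_btc = 1                       # attacker spends a single bitcoin on fees
    spam_txs = budget_btc * SATOSHI_PER_BTC   # one satoshi of fee per transaction
    block_data = spam_txs * TX_SIZE_BYTES     # bytes the network must relay and store

    print(f"{spam_txs:,} transactions, ~{block_data / 10**9:.0f} GB of block data")
    # -> 100,000,000 transactions, ~25 GB of block data

So without a block size limit, one bitcoin worth of fees is enough to ask the network to carry tens of gigabytes of junk.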

That being said, I still think that off-chain transactions are a bit of a kluge.

I absolutely agree.

Some way of infinitely scaling in-chain transactions, while still providing an incentive to mine long-term, would be awesome.

This I also agree with, except for one detail. The current proof-of-work approach is wasteful and must be replaced by something else. There are some ideas about proof-of-stake, which may be suitable at some point.

Comment Re:Dear slashdot, (Score 1) 92

Sorry to reply off-topic, but this part isn't true. We'll just start using more off-chain transactions.

That's actually not off-topic at all. The description of off-chain transactions mentions that one way to do it is through the use of trusted third parties such as Mt. Gox! It does proceed to describe how a system could potentially be designed with auditing that can prove whether fraud is happening, which would be an improvement, but it does not suggest any way to avoid such fraud.

If we forked every time transaction volume neared the limit then there would be no point in any limit at all

Sure there would. Requiring manual action to increase the transaction volume could protect against some kinds of DoS attacks, which would be possible if there were no limit.

You can validate the chain of block headers without ever seeing the content of the blocks. The signatures on individual transactions and their ancestors can be validated without ever seeing the full blocks; you just need a Merkle path from the block header to the transaction, which is only logarithmic in size. There are two reasons this is insufficient to solve the scalability problem. First of all, the number of ancestors of a transaction could grow exponentially over time. Secondly, checking for double spending requires a complete view of all the transactions in all the blocks. Solve those two problems, and you have solved the scalability problem.
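To make that second sentence concrete, here is a minimal sketch of verifying such a logarithmic-size path. It follows Bitcoin's double-SHA256 Merkle tree construction, but glosses over the byte-ordering details of the real protocol, so treat it as an illustration rather than a drop-in SPV verifier:

    import hashlib

    def dsha256(data: bytes) -> bytes:
        # Bitcoin hashes everything with SHA-256 applied twice.
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def verify_merkle_path(tx_hash: bytes, path, merkle_root: bytes) -> bool:
        """path is a list of (sibling_hash, sibling_is_on_right) pairs from the
        leaf up to the root; its length is logarithmic in the number of
        transactions in the block."""
        node = tx_hash
        for sibling, sibling_on_right in path:
            node = dsha256(node + sibling) if sibling_on_right else dsha256(sibling + node)
        return node == merkle_root

Note that this only proves a transaction is in a block whose header you trust; as the rest of the paragraph says, it does nothing to detect double spending.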

Comment Re:Dear slashdot, (Score 1) 92

No, there is no intention to tighten the blockchain rules at this time. This would cause a hard fork, and breaking compatibility with old versions is not considered lightly.

And it should not be taken lightly. But as I understand it, such forks have been done in the past, and another one will be needed due to transaction volume approaching a hard limit imposed by the current rules. The particular tightening of the signature rules could piggyback on another update that would cause a fork anyway. Is there any reason not to piggyback it on the next fork?

Mtgox's software is unique. The reference client, for example, can not be fooled by changing transaction IDs.

And of course changing the reference implementation to mitigate security bugs in alternative implementations has far lower priority than getting the actual bugs in those alternative implementations fixed.

There are two values, each with a 1 in 256 chance. 1/256 + 1/256 = 1/128.

That makes sense. So the success probability is about 0.8%.
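Spelled out (nothing here beyond the parent's own numbers):

    p = 1/256 + 1/256    # two acceptable values, each with a 1 in 256 chance
    print(p, f"{p:.2%}") # 0.0078125, i.e. roughly 0.8%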

But the paper is written to make a much broader claim, and I haven't seen the authors going out of their way to mitigate that misunderstanding in the press, much the opposite.

The news sites I follow haven't picked up anything beyond the original paper.

I believe their research is incomplete, but is there anything incorrect in the research they did perform? And is there anything wrong with the conclusion they reached, which was that transaction malleability cannot explain the bitcoins disappearing from mtgox?

Comment Re:Dear slashdot, (Score 1) 92

The bitcoin software started refusing to relay improperly padded transactions, even though they are still valid if they make it into a block.

Are there any plans to stop accepting them in blocks?

The claimed attack is that people took these transactions, fixed them, and broadcast them.

I guess we can agree that the article is not covering this attack, but rather a very different one.

but they don't work very often, since it involves accepting a transaction over the p2p network, changing it, then broadcasting your version in hopes of winning the race to reach a miner first.

The paper says the success rate is about 20%.

But they aren't particularly useful for scamming mtgox (or anyone else).

Why not? If they have a 20% success rate, compared to the 0.4% success rate of the other attack, why not try it?

profiting on roughly one cycle out of every 128.

How do you arrive at 128? One out of every 256 would sound more likely to me.

Either way, the conclusion appears to be that money was not stolen from mtgox using any version of the malleability attack. Strictly speaking, the paper only argued that they weren't attacked with one particular variant, which would still be correct, though it makes for an incomplete investigation.

Comment Re:Dear slashdot, (Score 1) 92

The transactions did happen by malleability attack. What makes you think they did not?

The paper suggested they happened due to a malleability attack, and I have no reason to think otherwise. I was not the one who said that was nonsense.

It would look like any other transaction.

The paper carefully explained the differences in how the involved transactions look. By saying an attack would look like any other transaction, you are contradicting the paper, and you are providing less evidence to support your case than the paper did. Hence the paper is more trustworthy than your statement.

They failed to steal anything, hence proving the MtGox story is bullshit.

First of all, the paper did not say anything about who those attacks were targeted at, nor whether they succeeded. It is likely that they failed to steal anything, but unless the attacks were targeted at you, you cannot know whether they succeeded.

Even if we assume those copy-cats failed to steal anything, that doesn't prove anything.

Remember that the spike happened after MtGox closed withdrawals.

Yes, I already quoted that from the paper.

The observation in the paper was that if mtgox's announcement that they had closed withdrawals was true, then those attacks could not have been directed at mtgox. So they could be excluded from the set of attacks that could have stolen money from mtgox.

The observation made in the paper was that the total number of attempted malleability attacks across the entire bitcoin network, during the period when the alleged thefts happened, was far too small to account for the amount of bitcoins that were allegedly stolen that way.

I can't figure out who you are trying to say is right - mtgox or the researchers. And I don't see much in your comment pointing one way or the other. For now, the methodology used in the paper appears sound to me. I haven't seen the raw data though, and due to the nature of the attacks only half the raw data will be in the blockchain. Even if they did publish the raw data, I don't know whether it would be possible to independently verify its validity.

Comment Re:Correlation != Causation (Score 1) 351

Correlation is not causation. It's entirely possible that dying natives cause visiting Europeans.

How can we even be sure there is a correlation? We can measure mortality of the tribes that we do find. But then we need to compare that number to the mortality of the tribes that we do not find. Measuring the mortality of tribes that we do not find sounds tricky.

Comment Re:Dear slashdot, (Score 4, Interesting) 92

Just that this paper is nonsense.

Care to answer a few questions then?

  • How did the transactions found by these researchers happen, if not by a malleability attack?
  • If a malleability attack would not result in transactions looking like what was found by these researchers, then what would it look like?
  • What is the explanation for the spike found just after the announcement, if that was not due to copy-cats attempting malleability attacks?

Comment Re:Matter, anti-matter... (Score 2) 393

Are we sure there were equal amounts?

The way I have understood what's been said so far is this. The universe started with equal amounts of matter and antimatter. Matter and antimatter can only be produced and annihilated in equal amounts. Today we have reached a state where there is much more matter than antimatter.

This is obviously inconsistent. So one of those three statements has to be wrong. I for one don't know which one of them is wrong. And I also haven't come across a physicist who had solid evidence for which of them is wrong.

One possibility I have been wondering about is that of antimatter galaxies. Seen from a distance, wouldn't an antimatter galaxy look exactly like one made of matter? I have been told this is not a possibility either, since it would imply that somewhere there would have to be a boundary between matter and antimatter, where a lot of annihilation would be going on and producing gamma radiation, which we have not observed. I am wondering if the reason we are not observing this boundary is that those regions of space are by now so empty that there is no significant amount of annihilation going on anymore. Or could it be that those boundaries are actually so far apart that there just isn't any such boundary within our event horizon? That would imply that the antimatter is out there somewhere beyond the event horizon, and maybe 10^12 years from now it will be visible.

Comment Re:He pretty much agrees with you on page 12. (Score 1) 277

If you have a list of ten million passwords, and you hash each password and then compare to the password database, you're just generating a rainbow table on the fly. There's no difference between that and doing the ten million hashes beforehand, or getting the list from somebody who already did.

Rainbow tables don't work that way. A rainbow table is not based on a dictionary. When generating a rainbow table you will be hashing pseudorandom inputs (chosen according to a probability distribution). And you are not hashing every input just once; you may end up reaching the same input multiple times. Also, a rainbow table does not store all the computed hashes.
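To illustrate the difference, here is a toy sketch of how a rainbow-table chain is built (the reduction function and parameters are invented for illustration; real tables tune them carefully). A reduction function maps each hash back into the password space, and only the chain's start and end points are stored:

    import hashlib
    import string

    ALPHABET = string.ascii_lowercase
    PW_LEN = 6          # toy password space: 6 lowercase letters
    CHAIN_LEN = 1000    # passwords covered per stored chain

    def h(pw: str) -> bytes:
        return hashlib.md5(pw.encode()).digest()

    def reduce_to_password(digest: bytes, position: int) -> str:
        # Maps a hash back into the password space; the chain position is
        # mixed in so each step of the chain uses a different reduction.
        n = int.from_bytes(digest, "big") + position
        chars = []
        for _ in range(PW_LEN):
            chars.append(ALPHABET[n % len(ALPHABET)])
            n //= len(ALPHABET)
        return "".join(chars)

    def build_chain(start: str) -> tuple:
        pw = start
        for i in range(CHAIN_LEN):
            pw = reduce_to_password(h(pw), i)
        # Only (start, end) is stored; everything in between is recomputed
        # on demand during lookup.
        return start, pw

The point being that the table trades storage for recomputation and covers pseudorandomly generated candidates, which is quite different from hashing a fixed dictionary once.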

Case one: the bad guy wants to crack any account, and doesn't care which. The bad guy benefits from large numbers, because it increases the odds of somebody using a lame password.

I did not say having a large number of users made the system harder to attack. I said the slowdown salting imposes on the attack is proportional to the number of users. If salted hashes are used, there are two factors involved as the number of users increases. More users means a higher probability of somebody using a really lame password; this benefits the attacker, and I am making no claims about the exact size of this factor. But salting means each password from the dictionary has to be hashed more times, which is a disadvantage to the attacker. In an ideal world these two factors cancel out. In the real world they probably don't cancel out exactly. Nevertheless, I stand by my statement about the slowdown introduced by salting, as it is the other factor that there is the most uncertainty about.

So let's assume an attacker wants to find just one valid password for one user. And let's assume there are n users, and that in order to find one valid password, the attacker needs a dictionary containing m passwords. So far those assumptions say nothing about how passwords are stored, and they are general enough to cover any such scenario. We don't know what n and m will be in a concrete scenario. What I stated is that the number of hashes an attacker needs to compute is n times larger if the password database is salted than if plain unsalted hashes are used.

If the passwords are not salted, the attacker needs to compute just m hashes and compare those against the password database. That comparison is easy to perform by simply sorting the hashes. If, OTOH, the passwords are salted, the attacker needs to compute m*n different hashes in order to find the one combination where there is a match.

If n is reasonably large, and if there is no strict password policy, it is likely that m will be just 1. But even in that case, the calculations are still valid.
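A small sketch of that counting argument, with SHA-256 standing in for whatever hash the site actually uses (the function names are mine):

    import hashlib

    def crack_unsalted(stored_hashes: set, dictionary: list) -> list:
        # m hash computations in total, no matter how many users there are:
        # each candidate is hashed once and compared against all entries.
        return [pw for pw in dictionary
                if hashlib.sha256(pw.encode()).hexdigest() in stored_hashes]

    def crack_salted(stored: list, dictionary: list) -> list:
        # stored is a list of (salt, hash) pairs, one per user. Every
        # candidate must be re-hashed for every distinct salt, so the work
        # is m * n hash computations.
        hits = []
        for salt, digest in stored:
            for pw in dictionary:
                if hashlib.sha256(salt + pw.encode()).hexdigest() == digest:
                    hits.append((salt, pw))
                    break
        return hits

The salted case cannot be sped up by sorting, because no two users share a hash input even when they share a password.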

Comment Re:WTF? (Score 1) 277

An old-school salted hash == partial verification for the whole entry. So the old-school solution is strictly worse than this.

You are right. I misunderstood that detail the first time around. The two bytes that are leaked are not two bytes of the password, but rather two bytes of the salted hash.

An attacker could still utilize those two bytes to perform an offline attack that reduces the length of a dictionary by a factor of 65536, followed by online attempts at logging in using this much shorter dictionary. However, the article did mention how that attack can be detected on the server side.
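A sketch of that offline filtering step, assuming the attacker also knows the salt and that the leaked value is the first two bytes of the salted hash (the exact construction is the article's; the function here is just an illustration):

    import hashlib

    def filter_dictionary(salt: bytes, leaked_prefix: bytes, dictionary: list) -> list:
        # Offline: keep only candidates whose salted hash starts with the two
        # leaked bytes. On average 1 in 65536 candidates survives, and only
        # the survivors need to be tried against the live login endpoint.
        return [pw for pw in dictionary
                if hashlib.sha256(salt + pw.encode()).digest()[:2] == leaked_prefix]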

Comment Re:He pretty much agrees with you on page 12. (Score 1) 277

You do not understand what you are talking about. Salting has absolutely no influence on brute-forcing.

I give up. You have clearly demonstrated that you do not know what you are talking about, and that you are not willing to learn. I don't know why you think you can convince me of something by repeating a statement which I know is not true.

If you are not willing to accept that you were mistaken, there is no point in continuing this thread any further.

The number of users has absolutely no influence on the time it takes to brute-force one. You clearly do not know what "brute-force" means. Maybe read up on the concepts before spouting utter nonsense?

  • You should read what I wrote instead of making up something I did not write.
  • I'd say taking a university degree in cryptography does count as reading up on the concepts.

Comment Re:He pretty much agrees with you on page 12. (Score 1) 277

Either it is insecure, or it is vulnerable to DoS. So what is your point?

If you use a salted hash based on a cryptographic hash with no known weaknesses, then you won't be as vulnerable to DoS attacks. And security-wise it is a justifiable solution. Hashing and salting add a lot of security: they slow down an attack significantly without a significant cost for legitimate usage. That's what you expect from good cryptography. Iterating the hash, OTOH, slows down legitimate usage and attacks by the same factor. Slowing down legitimate usage by the same factor that you slow down attacks is not good cryptography.

Instead of slowing down legitimate usage without being able to slow down attacks by even more, you should be looking at adopting protocols that provide real security improvements. For example, it is entirely possible to perform password authentication without the server ever having a chance of picking up the password in cleartext. Such protocols provide a real security improvement. You can also increase the computation cost on the client side rather than the server side, and slow down brute force of a leaked password database that way. The latter is still not great, because you are still only slowing down the attacks by the same factor as legitimate usage. But at least you don't make yourself vulnerable to DoS attacks if those extra computations happen on the client rather than the server.
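As a sketch of that last point, the cost-shifting part only (not a full protocol; the parameter choices and names are illustrative), the expensive key stretching can run on the client, with the server storing and checking only a single cheap hash of the result:

    import hashlib
    import hmac

    ITERATIONS = 500_000  # illustrative; the expensive work happens on the client

    def client_derive(password: str, salt: bytes) -> bytes:
        # Runs on the client. The plaintext password never leaves the machine,
        # and the iteration count creates no DoS surface on the server.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)

    def server_store(derived: bytes) -> bytes:
        # Runs on the server once, at registration: one cheap hash.
        return hashlib.sha256(derived).digest()

    def server_verify(derived: bytes, stored: bytes) -> bool:
        return hmac.compare_digest(hashlib.sha256(derived).digest(), stored)

This only covers the cost-shifting; the derived value is still a password equivalent on the wire, so a real deployment would combine it with a challenge-response or PAKE-style protocol of the kind mentioned above.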

If you go with a salted hash with only 1 or 2 iterations of the hash function to protect yourself against DoS attacks, and you push for adoption of protocols that hide the password from the server, then you are doing more for security than most sites. And should those salted hashes leak, only the very weakest passwords will be brute-forced. In that situation, if a user's password is broken, the user bears the responsibility for choosing such a weak password in the first place.

Comment Re:He pretty much agrees with you on page 12. (Score 1) 277

The ideal would be some form of client certificate. That way, the server either stores a copy of the key, or just stores a hash of it so it can recognize the key material when presented with it.

A certificate means a trusted third party has signed a statement saying that this particular public key belongs to this user. I'm not hooked on the idea of a trusted third party for this. Having the server store the public key or a hash of it, like you suggest, is a better approach. But then it is not really a client certificate.

That approach is sort of similar to what I describe, except that in my scenario the private key is computed on the fly, and in your case it is stored on the computer. Each approach has advantages. It is possible to design the protocol such that the client can choose whichever of the two approaches it prefers, and the server won't know which of the two is in use.

One drawback of storing the private key on the computer is that there is now a file you can lose, and if you do, you lose access to all sites. My approach would only require you to remember a password; you could then always get a new computer and use that. This may itself be a drawback in some scenarios, since if someone learned your password, they could authenticate as you. OTOH, if only the private key is required, somebody stealing your device could authenticate as you (though the private key could be encrypted using a password).

Another drawback of storing the private key is that you would be using the same key with many sites, which could then violate your privacy by deducing that all of those accounts on different sites belong to the same person. My approach would use a different private key for each site, since it would depend on the salt. The protocol for setting the password in the first place could enforce uniqueness of the salt by requiring it to be a hash combining inputs from both the client and the server.
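A sketch of how that could fit together, assuming the pyca/cryptography package and Ed25519 as the signature scheme (my choices for illustration, not something from the thread):

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def combined_salt(client_nonce: bytes, server_nonce: bytes) -> bytes:
        # Salt uniqueness enforced by hashing contributions from both sides.
        return hashlib.sha256(client_nonce + server_nonce).digest()

    def derive_site_key(password: str, site_salt: bytes) -> Ed25519PrivateKey:
        # Deterministic: the same password and salt always produce the same
        # 32-byte seed, so nothing has to be stored on the client, and a
        # different salt per site yields an unrelated key pair per site.
        seed = hashlib.pbkdf2_hmac("sha256", password.encode(), site_salt, 500_000)
        return Ed25519PrivateKey.from_private_bytes(seed)

The server would only ever see the corresponding public key, which it could store in place of a password hash.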

If this doesn't work, maybe a system where an ephemeral key is generated and used, which is signed by the user's real key (which is kept offline.)

If you do go with the stored key approach, then this additional layer of indirection would be beneficial to the security.

but it would get rid of passwords altogether.

I don't believe in getting rid of passwords. If you don't have any passwords at all, then anybody stealing your hardware could authenticate as you. For me the goal is not to get rid of passwords, but to ensure you never need to present your password to an untrusted device.

Comment Re:He pretty much agrees with you on page 12. (Score 1) 277

What really needs to happen is separation of duties and storing the hashes the same way companies store private keys used for signing... a physically secure, hardened appliance with a limited interface out. Backups are done to a USB port physically on the appliance, and the data never is exposed on the network, only calls to use it.

I say the effort is better spent on new protocols where the server will never be able to learn the password, even if an administrator decided to install software that captures data after it has been decrypted by SSL. Such protocols are possible, but not widely deployed.

How many users wouldn't want a system where the administrator couldn't leak the users' passwords, even if they wanted to? As an added bonus, you could safely use the same password on all sites that make use of such a more secure protocol. The implication would be that you only have to remember one password, which would hopefully get users to choose a slightly stronger password than they do today.
