Comment Re:Lol wut (Score 1) 128

One wrong jump is all it takes.

That is true even if you keep the array in user space. Kernel code has privileges to do anything, even jumping directly to user space and executing code from there. What you need to review is not the byte array but the code processing it, because that code runs with kernel privileges and therefore needs to be bug-free.

Comment Re:Cut off your nose to spite your face (Score 1) 86

You may recall that elliptic curve encryption was thought to be a highly promising encryption technology at the time.

Yes, compared to other asymmetrical primitives. I have seen no research suggesting that it would be a good idea to replace symmetrical cryptography with elliptic curves. Quite the contrary: symmetrical cryptography is more resistant to cryptanalysis using quantum computers than asymmetrical cryptography is.

Comment Re:Cut off your nose to spite your face (Score 1) 86

There is no evidence that a backdoor actually exists, only that one is possible with the technology.

  • Using asymmetrical primitives to build a PRNG is suspicious, since a PRNG can be built from symmetrical primitives (a minimal sketch follows at the end of this comment), but placing a backdoor which can be used by yourself and not by others requires asymmetrical primitives.
  • Long before Dual_EC_DRBG was published it was well established among cryptographers that you document where your constants came from, and that any constant which is not justified is by default assumed to be a backdoor.
  • It is fully documented how the constants in Dual_EC_DRBG could have been obtained with a backdoor, and to this date no other explanation for the exact value of the constants has been given.
  • Leaked documents suggest that the NSA has been actively working on planting backdoors in cryptographic standards.

To me that is more than sufficient evidence to assume Dual_EC_DRBG has a backdoor. A deliberately placed backdoor is by far the most likely explanation for the structure of Dual_EC_DRBG. By now there is really only one additional piece of information that could change that picture, and that would be the actual calculations that were used to produce the constants.
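
To illustrate the first bullet, here is a minimal sketch (in Python; the function name and the constants are mine, chosen only for illustration) of a generator built purely from a symmetrical primitive, in this case a hash function run in counter mode. It is not the NIST Hash_DRBG construction, just a demonstration that no elliptic-curve machinery is needed to stretch a secret seed into pseudorandom output.

    import hashlib

    def symmetric_drbg(seed: bytes, num_blocks: int) -> bytes:
        """Toy counter-mode generator built from SHA-256, a symmetrical primitive.
        Each output block is the hash of the secret seed plus a counter; a real
        design would also refresh its internal state, which is omitted here."""
        out = []
        for counter in range(num_blocks):
            out.append(hashlib.sha256(seed + counter.to_bytes(8, "big")).digest())
        return b"".join(out)

    # Example: 64 bytes of output from a 32-byte seed.
    stream = symmetric_drbg(b"\x00" * 32, 2)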

Comment Re:Cut off your nose to spite your face (Score 1) 86

The weakness in Dual_EC_DRBG is publicly known.

Sometimes it may be difficult to tell the difference between a weakness and a backdoor. But in the case of Dual_EC_DRBG there is so much evidence indicating that it is an actual backdoor, and not (just) a weakness, that I think it is no longer fair to label it as a weakness. Who placed the backdoor is not officially confirmed, but we all know who the prime suspect is.

Comment Re:Dear slashdot, (Score 1) 92

Transaction fees prevent DoS attacks too, even with infinite block size.

I don't think so. Let's say somebody wants to perform a DoS attack while spending as few bitcoins as possible. Just take a tiny amount of bitcoins and spend it all on transaction fees, one satoshi at a time. With transactions paying one satoshi in fees and not actually transferring any bitcoins anywhere, miners would have an incentive to include those transactions in the blocks. After all, if there is no limit on the block size, a miner may as well take that additional fee.
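
As a back-of-the-envelope sketch of how cheap such spam would be (the 1 BTC budget is a number picked purely for illustration):

    # Hypothetical attack budget, chosen only for illustration.
    SATOSHIS_PER_BTC = 100_000_000
    budget_btc = 1
    fee_per_tx = 1  # one satoshi of fee per spam transaction

    spam_transactions = budget_btc * SATOSHIS_PER_BTC // fee_per_tx
    print(spam_transactions)  # a single bitcoin funds 100,000,000 such transactions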

That being said, I still think that off-chain transactions are a bit of a kluge.

I absolutely agree.

Some way of infinitely scaling in-chain transactions, while still providing an incentive to mine long-term, would be awesome.

This I also agree with, except for one detail. The current proof-of-work approach is wasteful and must be replaced by something else. There are some ideas about proof-of-stake, which may be suitable at some point.

Comment Re:Dear slashdot, (Score 1) 92

Sorry to reply off-topic, but this part isn't true. We'll just start using more off-chain transactions.

That's actually not off-topic at all. The description of off-chain transactions mentions that one way to do it is through the use of trusted third parties such as Mt. Gox! It does go on to describe how a system could potentially be designed with auditing that can prove if fraud is happening, which would be an improvement, but it does not suggest any way to avoid such fraud.

If we forked every time transaction volume neared the limit then there would be no point in any limit at all

Sure there would. Requiring manual action to increase the transaction volume could protect against some kinds of DoS attacks that would be possible if there were no limit.

You can validate the chain of block headers without ever seeing the content of the blocks. The signatures on individual transactions and their ancestors can be validated without ever seeing the full blocks; you just need a path from the block header to the transaction, which is only logarithmic in size. There are two reasons this is insufficient to solve the scalability problem. First, the number of ancestors of a transaction could grow exponentially over time. Second, checking for double spending requires a complete view of all the transactions in all the blocks. Solve those two problems, and you have solved the scalability problem.
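
For the logarithmic path mentioned above, a rough Python sketch of SPV-style verification could look like the following (the function names are mine, and real Bitcoin code additionally has to get serialization and byte order exactly right):

    import hashlib

    def dsha256(data: bytes) -> bytes:
        """Bitcoin-style double SHA-256."""
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def verify_merkle_path(tx_hash: bytes, siblings: list, index: int,
                           merkle_root: bytes) -> bool:
        """Walk from a transaction hash up to the Merkle root in a block header.
        `siblings` holds one hash per tree level, so the proof is logarithmic in
        the number of transactions; the bits of `index` (the transaction's
        position in the block) say whether we are the left or right child at
        each level."""
        h = tx_hash
        for sibling in siblings:
            if index & 1:                # we are the right child on this level
                h = dsha256(sibling + h)
            else:                        # we are the left child on this level
                h = dsha256(h + sibling)
            index >>= 1
        return h == merkle_root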

Comment Re:Dear slashdot, (Score 1) 92

No, there is no intention to tighten the blockchain rules at this time. This would cause a hard fork, and breaking compatibility with old versions is not considered lightly.

And it should not be taken lightly. But as I understand it, such forks have been done in the past, and another one will be needed because transaction volume is approaching a hard limit imposed by the current rules. This particular tightening of the rules about signatures could piggyback on another update that will cause a fork anyway. Is there any reason not to piggyback it on the next fork?

Mtgox's software is unique. The reference client, for example, can not be fooled by changing transaction IDs.

And of course changing the reference implementation to mitigate security bugs in alternative implementations has far lower priority than getting the actual bugs in those alternative implementations fixed.

There are two values, each with a 1 in 256 chance. 1/256 + 1/256 = 1/128.

That makes sense. So the success probability is about 0.8%.

But the paper is written to make a much broader claim, and I haven't seen the authors going out of their way to mitigate that misunderstanding in the press, much the opposite.

The news sites I follow haven't picked up anything beyond the original paper.

I believe their research is incomplete, but is there anything incorrect in the research they did perform? And is there anything wrong with the conclusion they reached, which was that transaction malleability cannot explain the bitcoins disappearing from mtgox?

Comment Re:Dear slashdot, (Score 1) 92

The bitcoin software started refusing to relay transactions with improperly padded signatures, even though they are still valid if they make it into a block.

Are there any plans to stop accepting them in blocks?

The claimed attack is that people took these transactions, fixed them, and broadcast them.

I guess we can agree that the article is not covering this attack, but rather a very different one.

but they don't work very often, since it involves accepting a transaction over the p2p network, changing it, then broadcasting your version in hopes of winning the race to reach a miner first.

The paper says the success rate is about 20%.

But they aren't particularly useful for scamming mtgox (or anyone else).

Why not? If they have a 20% success rate compared to the 0.4% success rate of the other attack, why not try it?

profiting on roughly one cycle out of every 128.

How do you get that to 128? One out of every 256 would sound more likely to me.

Either way, the conclusion appears to be that money was not stolen from mtgox using any version of the malleability attack. The paper only argued that they weren't attacked with one particular variant, which would still be correct, though it makes for an incomplete investigation.

Comment Re:Dear slashdot, (Score 1) 92

The transactions did happen by malleability attack. What makes you think they did not?

The paper suggested they happened due to a malleability attack, and I have no reason to think otherwise. It was not me who said that was nonsense.

It would look like any other transaction.

The paper carefully explained the difference in appearance between the transactions involved. By saying an attack would look like any other transaction, you are contradicting the paper, and you are providing less evidence to support your case than the paper did. Hence the paper is more trustworthy than your statement.

They failed to steal anything, hence proving the MtGox story is bullshit.

First of all, the paper did not say anything about who those attacks were targeted at, nor whether they succeeded. It is likely that they failed to steal anything, but unless the attacks were targeted at you, you cannot know whether they succeeded.

Even if we assume those copy-cats failed to steal anything, that doesn't prove anything.

Remember that the spike happened after MtGox closed withdrawals.

Yes, I already quoted that from the paper.

The observation in the paper was that if mtgox's announcement that they had closed withdrawals was true, then those attacks could not have been directed at mtgox. So they could be excluded from the set of attacks that could have stolen money from mtgox.

The observation made in the paper was that the total number of attempted malleability attacks across the entire bitcoin network, during the period when the alleged thefts happened, was far too small to account for the amount of bitcoins allegedly stolen that way.

I can't figure out who you are trying to say is right - mtgox or the researchers. And I don't see much in your comment pointing one way or the other. For now the methodology used in the paper appears sound to me. I haven't seen the raw data though, and due to the nature of the attacks only half the raw data will be in the blockchain. Even if they did publish the raw data, I don't know whether it would be possible to independently verify its validity.

Comment Re:Correlation != Causation (Score 1) 351

Correlation is not causation. It's entirely possible that dying natives cause visiting Europeans.

How can we even be sure there is a correlation? We can measure the mortality of the tribes that we do find. But then we need to compare that number to the mortality of the tribes that we do not find. Measuring the mortality of tribes that we do not find sounds tricky.

Comment Re:Dear slashdot, (Score 4, Interesting) 92

Just that this paper is nonsense.

Care to answer a few questions then?

  • How did the transactions found by these researchers happen, if not by a malleability attack?
  • If a malleability attack would not result in transactions looking like what these researchers found, then what would one look like?
  • What is the explanation for the spike found just after the announcement, if that was not due to copy-cats attempting malleability attacks?

Comment Re:Matter, anti-matter... (Score 2) 393

Are we sure there were equal amounts?

The way I have understood what has been said so far is this: The universe started with equal amounts of matter and antimatter. Matter and antimatter can only be produced and annihilated in equal amounts. Today we have reached a state where there is much more matter than antimatter.

This is obviously inconsistent. So one of those three statements has to be wrong. I for one don't know which one of them is wrong. And I also haven't come across a physicist who had solid evidence for which of them is wrong.

One possibility I have been wondering about is that of antimatter galaxies. Seen from a distance, wouldn't an antimatter galaxy look exactly like one made of matter? I have been told this is not a possibility either, since it would imply that somewhere there would have to be a boundary between matter and antimatter, where a lot of annihilation would be going on and producing gamma radiation, which we have not observed. I am wondering whether the reason we are not observing this boundary is that those regions of space are by now so empty that there is no significant amount of annihilation going on anymore. Or could it be that those boundaries are so far apart that there just isn't any such boundary within our event horizon? That would imply that the antimatter is out there somewhere beyond the event horizon, and maybe 10^12 years from now it will become visible.

Comment Re:He pretty much agrees with you on page 12. (Score 1) 277

If you have a list of ten million passwords, and you hash each password and then compare to the password database, you're just generating a rainbow table on the fly. There's no difference between that and doing the ten million hashes beforehand, or getting the list from somebody who already did.

Rainbow tables don't work that way. A rainbow table is not based on a dictionary. When generating a rainbow table you will be hashing pseudorandom inputs (chosen according to a probability distribution), and you are not hashing every input just once; you may end up reaching the same input multiple times. Also, a rainbow table does not store all the computed hashes.
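
To make the difference concrete, here is a toy sketch of how a single rainbow chain is built (SHA-1 and the tiny reduction function are stand-ins chosen for brevity, not anything from a real table):

    import hashlib

    ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"

    def reduce_fn(digest: bytes, step: int, length: int = 8) -> str:
        """Map a hash back into the password space. Mixing in the step number
        gives every column its own reduction function, which is what makes it a
        rainbow table rather than a plain hash-chain table."""
        n = int.from_bytes(digest, "big") + step
        chars = []
        for _ in range(length):
            n, r = divmod(n, len(ALPHABET))
            chars.append(ALPHABET[r])
        return "".join(chars)

    def build_chain(start: str, chain_len: int) -> tuple:
        """Alternate hashing and reducing. The inputs along the chain are
        effectively pseudorandom, the same value can show up in several chains,
        and only the two endpoints are stored - none of the intermediate hashes
        are kept."""
        current = start
        for step in range(chain_len):
            digest = hashlib.sha1(current.encode()).digest()
            current = reduce_fn(digest, step)
        return start, current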

Case one: the bad guy wants to crack any account, and doesn't care which. The bad guy benefits from large numbers, because it increases the odds of somebody using a lame password.

I did not say having a large number of users made the system harder to attack. I said the slowdown salting imposes on the attack is proportional to the number of users. If salted hashes are used, there are two factors involved as the number of users increases. More users means a higher probability of somebody using a really lame password; this benefits the attacker, and I am making no claims about the exact size of this factor. But salting means each password from the dictionary has to be hashed more times, which is a disadvantage to the attacker. In the ideal world these two factors cancel out; in the real world they probably don't cancel out exactly. Nevertheless I stand by my statement about the slowdown of the attack introduced by salting, since it is the other factor that there is most uncertainty about.

So let's assume an attacker wants to find just one valid password for one user. And let's assume there are n users and that, in order to find one valid password, the attacker needs a dictionary containing m passwords. So far those assumptions say nothing about how passwords are stored, and they are general enough to cover any such scenario. We don't know what n and m will be in a concrete scenario. What I stated is that the number of hashes an attacker needs to compute is n times larger if the password database is salted than if plain unsalted hashes are used.

If the passwords are not salted, the attacker needs to compute just m hashes and compare those against the password database. That comparison is easy to perform by simply sorting the hashes. If, OTOH, the passwords are salted, the attacker needs to compute m*n different hashes in order to find the one combination where there is a match.
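
A rough Python sketch of the two cases (plain SHA-256 stands in for whatever hash the system actually uses; a real deployment would of course use a deliberately slow password hash):

    import hashlib

    def attack_unsalted(dictionary, stolen_hashes):
        """Unsalted case: m hashes in total, then cheap set lookups
        (equivalent to sorting the hashes and comparing)."""
        stolen = set(stolen_hashes)
        return [w for w in dictionary
                if hashlib.sha256(w.encode()).hexdigest() in stolen]

    def attack_salted(dictionary, salted_records):
        """Salted case: every dictionary word has to be re-hashed per user
        record, so the work grows to m * n hashes."""
        hits = []
        for salt, stored in salted_records:      # n user records
            for w in dictionary:                 # m candidate passwords
                if hashlib.sha256(salt + w.encode()).hexdigest() == stored:
                    hits.append((salt, w))
                    break
        return hits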

If n is reasonably large, and if there is no strict password policy, it is likely that m will be just 1. But even in that case, the calculations are still valid.

Comment Re:WTF? (Score 1) 277

An old-school salted hash == partial verification for the whole entry. So the old-school solution is strictly worse than this.

You are right. I misunderstood that detail the first time around. The two bytes which are leaked are not two bytes of the password, but rather two bytes of the salted hash.

An attacker could still utilize those two bytes to perform an offline attack, reducing the length of a dictionary by a factor of 65536, and then make online attempts at logging in using this much shorter dictionary. However, the article did mention how that attack can be detected on the server side.
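
A sketch of that offline filtering step (using SHA-256 and the first two bytes of the salted hash is my own simplification; the article's scheme will differ in the details):

    import hashlib

    def filter_candidates(dictionary, salt: bytes, leaked: bytes):
        """Keep only the words whose salted hash starts with the two leaked
        bytes. On average roughly 1 in 65536 words survive; the survivors
        still have to be tried online, which the server can notice and
        rate-limit."""
        return [w for w in dictionary
                if hashlib.sha256(salt + w.encode()).digest()[:2] == leaked]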
