Comment Re:This is the dumbest research I've seen this year (Score 1) 486

>According to TFA, they actually do an explicit sync to disk at the end of the writes. So it's not purely writing into cache.

The code in the paper flushes before closing the file. That is not the same as a sync: a flush only moves data out of the runtime's buffers into the OS, while a sync forces the OS to commit it to the physical disk. They don't even flush (or sync) after each write.
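
To make the distinction concrete, here's a minimal Java sketch (the file name and payload are placeholders, not the paper's code):

    import java.io.BufferedOutputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class FlushVsSync {
        public static void main(String[] args) throws IOException {
            FileOutputStream fos = new FileOutputStream("test.dat");
            BufferedOutputStream out = new BufferedOutputStream(fos);
            out.write("some data".getBytes());
            out.flush();         // moves data from the Java buffer into the OS;
                                 // it can still sit in the OS buffer cache (RAM)
            fos.getFD().sync();  // forces the OS to push the data to the disk
            out.close();         // close() implies flush(), but never sync()
        }
    }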

Comment Re:This is the dumbest research I've seen this year (Score 1) 486

>This is the dumbest research I've seen in 2015. There was actually no computation involved -- they just wanted to write a long string to disk. They concluded that adding the superfluous step of concatenating strings in memory, then writing to disk, was slower. Well duh! That's not what memory is for!

Agreed with you on the uselessness of their research, but that is most definitely one important and common use of memory: buffer caches used by the operating system.

Effectively, they unintentionally tested how fast the OS concatenates data in its buffer cache versus string concatenation in Java or Python. The researchers are wrong right out of the gate: they say "Heavy Disk Usage" in their research headline, but at no point did they actually test disk performance; everything they did was handled by the OS buffer cache.

All the researchers have shown is that string concatenation operations in Java and Python are atrociously slow. The Java example used the naive form a=a+b; to concatenate strings, which is one of the slowest ways to do repeated concatenation in Java: each iteration allocates a new String and copies everything accumulated so far, so the loop does O(n^2) work.
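
For illustration, the naive pattern next to the idiomatic one (a sketch, not the paper's actual code):

    // Naive: each iteration allocates a brand-new String and copies
    // everything accumulated so far, so the loop does O(n^2) work.
    String a = "";
    for (int i = 0; i < 1_000_000; i++) {
        a = a + "x";
    }

    // Idiomatic: StringBuilder appends into a growable buffer, O(n) overall.
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < 1_000_000; i++) {
        sb.append("x");
    }
    String b = sb.toString();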

If, in their tests, they had also done a string concatenation in C by allocating a buffer and appending to it through a pointer (not strcat), the speed difference between doing that and making 1 million write calls would have been negligible.
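
Here's the same idea sketched in Java rather than C (a byte array filled by index stands in for the pointer append; file names are made up): fill the buffer and issue one write, versus a million one-byte write calls.

    import java.io.FileOutputStream;
    import java.io.IOException;

    public class OneWriteVsMillion {
        public static void main(String[] args) throws IOException {
            // Append into a preallocated buffer: each append is just a store.
            byte[] buf = new byte[1_000_000];
            for (int pos = 0; pos < buf.length; pos++) {
                buf[pos] = (byte) 'x';
            }
            try (FileOutputStream out = new FileOutputStream("once.dat")) {
                out.write(buf);  // one write for the whole buffer
            }

            // Versus a million separate writes, each a trip into the kernel.
            try (FileOutputStream out = new FileOutputStream("many.dat")) {
                for (int i = 0; i < 1_000_000; i++) {
                    out.write('x');
                }
            }
        }
    }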

Also, if they sync'd after each of a million 1-byte writes to test how slow "Heavy Disk Usage" is compared to a single write of a million bytes, they wouldn't have bothered finishing this paper at all because it's so damn obvious that memory is faster.

Comment Re:Do not use standard passwords (Score 2) 198

What next? You use 15- or 20-character passwords, or a passphrase of several words.

But for the server side, use key strengthening with something like bcrypt or scrypt.
If it takes 1 second on very fast hardware to hash a single password, then your attacker also has to spend a lot of time on each hash attempt.
scrypt was also designed with custom hardware attacks in mind (it uses lots of memory), so it remains slow and expensive even if the attacker has the key derivation logic in an ASIC or FPGA.
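
As a sketch of what that looks like server-side, assuming the jBCrypt library (org.mindrot.jbcrypt) and a made-up cost factor of 12:

    import org.mindrot.jbcrypt.BCrypt;

    public class PasswordStore {
        // Store the returned string; it embeds the salt and the cost factor.
        static String hashForStorage(String password) {
            // gensalt(12) = 2^12 rounds of the expensive setup;
            // raise the factor as hardware gets faster
            return BCrypt.hashpw(password, BCrypt.gensalt(12));
        }

        static boolean verify(String candidate, String storedHash) {
            return BCrypt.checkpw(candidate, storedHash);
        }
    }

Because each hash string embeds its own cost factor, old hashes keep verifying even after you raise the factor for newly stored ones.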

If it takes a tenth of a second for an attacker to derive a key (or hash) from a password, then a 10-character password is still incredibly strong.
If the passwords are salted (as they should be), even a plain English dictionary attack on a 2M-password file will take years to finish.
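
Back-of-envelope, with made-up round numbers: assume a 100,000-word dictionary and 0.1 seconds per guess. Salts force the attacker to attack each account separately, so 2,000,000 passwords * 100,000 guesses * 0.1 s = 2 * 10^10 seconds, or roughly 600 years.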

As faster hardware becomes available, you adapt by changing the key derivation parameters.

Comment Reminds me of a sound demo. (Score 1) 381

I can't remember what software it was, but it included samples labeled "8-bit" and "16-bit" to demonstrate the difference between 8 and 16 bits/sample audio.
I assumed the 8-bit audio file had been deliberately made noisy and grainy, because it sounded much worse than the 16-bit file did after downsampling it to 8 bits.
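
The conversion itself is trivial, which is what made the demo's grainy 8-bit sample suspicious. A Java sketch of the naive downsample (signed 16-bit samples in, signed 8-bit out; a real converter would add dither first):

    // Keep only the top 8 bits of each 16-bit sample.
    static byte[] downsampleTo8Bit(short[] samples16) {
        byte[] samples8 = new byte[samples16.length];
        for (int i = 0; i < samples16.length; i++) {
            samples8[i] = (byte) (samples16[i] >> 8);  // discard the low byte
        }
        return samples8;
    }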

Comment Re:Who generates 512-bit RSA keys these days? (Score 1) 80

>RSA for example needs two prime numbers as a keypair, so while the key length might be 512 bit, there are actually not that many from those 2^512 numbers to choose from. Also, certain key values are prone to attacks.

How many is "not that many"? Bruce Schneier, in Cryptography Engineering, calculates that about 1 in 1386 numbers near 2^2000 is prime. Near 2^512, primes are even more frequent, according to prime counting estimates.
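
For concreteness: by the prime number theorem, the density of primes near n is about 1/ln(n). Near 2^2000 that is 1/(2000 * ln 2), about 1 in 1386, matching Schneier's figure; near 2^512 it is 1/(512 * ln 2), about 1 in 355. That still leaves on the order of 2^512 / 355, roughly 2^503, primes to choose from. "Not that many" is an astronomically large pool.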
