
Comment Re:Wrong optimization (Score 1) 105

It is basically impossible to make a return trip to Mars because of the fuel requirements, which is why there have been no manned missions, and likely will not be any in the near future. It takes a lot of fuel to deliver a very small amount of fuel to Mars. There'd have to be a long series of fuel delivery flights before a manned mission. For those, fuel efficiency is of course priority one.

Comment Re:Really? .. it comes with the job (Score 1) 772

Another problem is that the interrogation techniques were not originally designed to get information. They were originally developed to extract false confessions from captured soldiers.

As was the case here. The Bush/Cheney administration knew that they wouldn't get any useful intelligence from torture, but they wanted "evidence" pointing towards Saddam, so that they could start another war.

Comment Re:Not a strong chain if the IP is the strongest l (Score 1) 84

It's not meant to be the strongest link in the chain. Just a link in the chain. If, every time someone connects in a suspicious way, you call their cell-phone to verify, or ask for an extra one-time password, or at the very least send them an email, then you can detect/prevent a lot of fraud. (This applies not only to Tor, but to any type of "unusual" connection, for example connecting from Russia five minutes after using a credit card in the U.S.)
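The layered check described above can be sketched in a few lines. This is a hypothetical illustration, not any real fraud system; the function names, the country-change heuristic, and the "challenge" response are all my own stand-ins for whatever second factor (SMS code, one-time password, email alert) a site actually uses:

```python
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    user: str
    country: str
    from_tor: bool

def is_suspicious(attempt: LoginAttempt, last_country: str) -> bool:
    """Flag Tor exits and sudden country changes (e.g. Russia minutes
    after a card swipe in the U.S.)."""
    return attempt.from_tor or attempt.country != last_country

def handle_login(attempt: LoginAttempt, last_country: str) -> str:
    # Suspicious logins are not blocked outright -- they just trigger
    # an extra verification step, making Tor a weak-but-useful signal.
    if is_suspicious(attempt, last_country):
        return "challenge"
    return "allow"

print(handle_login(LoginAttempt("alice", "RU", False), "US"))  # challenge
print(handle_login(LoginAttempt("alice", "US", False), "US"))  # allow
```

The point is exactly the one in the comment: the IP-based signal doesn't have to be strong on its own, it just has to decide when to ask for a stronger factor.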

Comment Re:Shyeah, right. (Score 1) 284

Rsync will happily detect and copy changes without propagating whole files, yes. But only on network transfers, and it requires reading the entire file on both ends. When backing up to a local (removable) hard drive, which is what we are discussing here, it is usually faster to copy the file once than to checksum it twice, so that is what rsync does by default.

I have compared the tools, and I do know what they do. I found a hundredfold improvement in the time to back up a set of virtual machines on a Linux server after switching from rsync to ZFS.

Comment Re:Tape Culture Fallacy (Score 1) 284

If the file server uses a file system with checksums, and those checksums are also backed up, then it's a simple matter of reading through the tape and verifying the checksums. You don't need to compare to the original files.

(The probability of a corrupted backup server accidentally creating a correct checksum can be made arbitrarily small. Usually it's something like 2^-256.)
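A minimal sketch of that verification scheme, using SHA-256 (the 2^-256 figure above): record a digest per file at backup time, then verify the tape by re-reading it and comparing digests, with no access to the originals. The file name and manifest format here are illustrative, not any particular backup tool's:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# At backup time: record a manifest of checksums alongside the backup.
original = b"important payroll data"
manifest = {"payroll.db": sha256_of(original)}

# At verification time: read the backup copy and compare digests.
# The original file system is never consulted.
def verify(name: str, backup_copy: bytes) -> bool:
    return sha256_of(backup_copy) == manifest[name]

print(verify("payroll.db", original))                        # True
print(verify("payroll.db", b"important payroll dat\x00"))    # False: corruption detected
```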

Comment Re:Shyeah, right. (Score 1) 284

You might want to look at using ZFS instead of rsync. I switched a while back, and it was definitely worth the initial effort of changing the file system on the server.

With rsync you can get inconsistencies because not all files are backed up at the same instant. ZFS snapshots get around this.

If you modify a large file (say a 100 GB virtual machine image), rsync will re-copy the entire file. ZFS keeps track of which blocks changed and copies only those.

Also if a file on one of your multiple backups is subtly corrupt, you might not notice. Or even if you do compare the copies, you might not know which one is correct. With ZFS, the entire file system is checksummed and a raid or mirror can heal itself.
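A toy sketch of the block-level change tracking described above. ZFS does this internally via copy-on-write metadata rather than re-hashing anything, so this is only an illustration of the idea (the 4 KiB block size is arbitrary; ZFS records are typically 128 KiB):

```python
import hashlib

BLOCK = 4096  # illustrative block size

def block_hashes(data: bytes):
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def changed_blocks(old: bytes, new: bytes):
    """Indices of blocks that differ -- only these need to be copied."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [i for i, (a, b) in enumerate(zip(old_h, new_h)) if a != b]

old = bytes(BLOCK) * 1000        # stand-in for a large VM image
new = bytearray(old)
new[5 * BLOCK] ^= 0xFF           # modify one byte inside block 5
print(changed_blocks(old, bytes(new)))  # [5]
```

One changed byte means one changed block is transferred, not the whole file; that is the source of the speedup over whole-file copying.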

Comment Re:Saturday is Semantics Day (Score 1) 181

After all each time you add a different type of specialty processor into an environment, you introduce another codebase for the application, another toolchain to learn and another set of communication / OS support issues.

That will be an issue only for the OS and library developers. To the applications developer there will be no noticeable difference. It is already the case that you need to use specialized libraries to get maximum performance on common types of tasks.

For example, if you want to use an FFT on a modern "general purpose" processor, you will get much better performance using a standard library function than you would if you wrote your own. There are so many issues with memory access patterns, core and cache utilization, etc. that you will never have time to figure them out if you just want to use the FFT (rather than do research on the algorithm itself.)
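To illustrate the gap even before any hardware tuning: the "obvious" DFT is O(n²), while a textbook radix-2 Cooley-Tukey FFT is O(n log n), and real library FFTs layer cache blocking, SIMD, and threading on top of that. This is a standard textbook sketch, not any particular library's implementation:

```python
import cmath

def naive_dft(x):
    # Direct O(n^2) evaluation of the DFT definition.
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def fft(x):
    # Radix-2 Cooley-Tukey, O(n log n); length must be a power of two.
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

x = [1.0, 2.0, 3.0, 4.0, 0.0, 0.0, 0.0, 0.0]
a, b = naive_dft(x), fft(x)
print(all(abs(u - v) < 1e-9 for u, v in zip(a, b)))  # True: same result, far less work
```

The application code calls one function either way; whether that function is a naive loop, a tuned FFT, or (someday) a hardware FFT instruction is the library's problem.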

If a future CPU gets a built in FFT, then the standard library will be updated, and your application will just run faster. No modification necessary.

Comment Re:Who cares (Score 1) 145

I used to hang out in a Swedish photography/videography forum. Bandwidth is cheap in Sweden, so a lot of these guys were on 100+ Mbit connections and liked to keep a backup in the cloud. Whenever a new "unlimited" storage service came along, they'd hop on and upload tens of terabytes of photos/videos. (None of which could be de-duplicated, since it was all original work.)

Inevitably, the storage service would update its TOS within a year, or go bankrupt.
