
Comment: Not an open problem. (Score 1) 161

by alexhs (#47810333) Attached to: New HTML Picture Element To Make Future Web Faster

Retrieving optimized images from the server, based on device (desktop, tablet, phone) and the device's internet connection (fiber, broadband, mobile), has always been an open problem.

Nope. It was already solved by JPEG's hierarchical mode, more than twenty years ago. You're limited to scaled sizes that are power-of-two fractions of the full size, but on the other hand the client wouldn't even need to inform the server: it could simply start a partial download and stop once it has enough data for the desired resolution.
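To make the constraint concrete, here's a small sketch (my own illustration, not part of any JPEG library) of how a client could pick which power-of-two scale to stop at: given the full image width and the width it actually wants to display, it finds the smallest 1/2^k scale that still meets the target.

```python
def pick_scale(full_width: int, target_width: int, max_levels: int = 5) -> int:
    """Return k such that full_width / 2**k is the smallest scaled width
    still >= target_width, clamped to max_levels hierarchy levels."""
    k = 0
    # Keep halving while the next-smaller scale would still be big enough.
    while k < max_levels and full_width // 2 ** (k + 1) >= target_width:
        k += 1
    return k

# A 4096px-wide original shown on a 400px-wide phone screen only needs
# the 1/8 scale (512px), so the download can stop after that level.
print(pick_scale(4096, 400))  # → 3
```

The client would then stop the transfer after receiving the frames for level k, which is exactly the "no server cooperation needed" point made above.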

Comment: Re:typed that backward. Fingers don't believe me. (Score 4, Insightful) 99

It's interesting that the Microsoft announcement is MORE support for Mac and LESS support for Windows.

The next generation of interesting software will be done on the Macintosh, not the IBM PC.
-- Bill Gates, BusinessWeek, 26 November 1984

Comment: Clueless article (Score 4, Informative) 396

by alexhs (#47236227) Attached to: One Developer's Experience With Real Life Bitrot Under HFS+

People talking about "bit rot" usually have no clue, and this guy is no exception.

It's extremely unlikely that a file would become silently corrupted on disk. Block devices include per-block checksums, so you either get a read error (maybe he did) or the data read is the same as the data previously written. As far as I know, ZFS doesn't help to recover data from read errors; for that you need RAID and/or backups.

Main memory is the weakest link; that's why my next computer will have ECC memory. When you copy a file (or defragment it, modify it, etc.), you read a good copy, a bit flips in RAM, and you write back corrupted data. Your disk receives the corrupted data and happily computes a checksum over it, thereby ensuring you can read back your corrupted data faithfully. That's where ZFS helps. Using checksumming scripts is a good idea, and I do it myself. And since I don't have auto-defrag on Linux, I'm safer: when I detect a corrupted copy, I still have the original.
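The checksumming-script idea above can be sketched like this (a minimal example of my own, with an invented in-memory manifest format, not the author's actual script): record a digest per file once, then re-verify later so a copy that got corrupted in transit through RAM shows up as a mismatch.

```python
import hashlib
import os

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large files don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root: str) -> dict:
    """Map each file path under root (relative) to its SHA-256 digest."""
    manifest = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            manifest[os.path.relpath(path, root)] = sha256_of(path)
    return manifest

def verify(root: str, manifest: dict) -> list:
    """Return the relative paths whose current digest no longer matches."""
    return [rel for rel, digest in manifest.items()
            if sha256_of(os.path.join(root, rel)) != digest]
```

Run build_manifest once over a tree you care about, store the result, and a later verify call lists exactly the files that changed, which is what lets you fall back to the untouched original.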

ext2 was introduced in 1993, and so was NTFS. ext4 is just ext2 updated (ext was a different beast). If anything, HFS+ is the more modern of the three, not that it makes a difference; all of them have been updated since. By the way, I recently noticed that Mac OS X resource forks sometimes contain a CRC32; I saw it in a file coming from Mavericks.

Comment: Re:I'm ignorant (Score 2) 105

by alexhs (#47178769) Attached to: Evidence of Protoplanet Found On Moon

Given enough data, almost all theories are disproven. The only ones that remain are the ones that fit the data.

Given enough data, almost all hypotheses are disproven. The ones which remain and have not yet been disproven by evidence become theories.

Nope, the AC was right.

By your definition, there is ultimately no such thing as a theory. Newtonian physics doesn't fit, as it has been invalidated by Einstein's general relativity, which itself is known to be wrong because it is inconsistent with quantum mechanics (which is also wrong for the same reason).

You can't claim that former theories that were later refined or invalidated were never theories in the first place: the "not yet" in your second sentence is problematic, as it only allows theories to be defined with hindsight.

Therefore:

When data doesn't fit current theories, you form hypotheses and test them. If your hypothesis fits the data better than former theories over some domain of validity (whose boundaries might not be completely known at the time of formulation, and will be refined with time and experiment), good for you: you now have a new theory. It will ultimately be replaced by better theories, usually with an extended domain of validity (covering data that were missing at the time of formulation and testing).

And that was well summed up by the GP.
