6 Gbps is about 2.7 TB/hr.
Just put a 3 TB HDD (or a stack of DVDs) in a car and drive it to the destination in half an hour, and you've achieved more than double the bandwidth.
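The arithmetic is easy to check with a short sketch (decimal units assumed, i.e. 1 TB = 10^12 bytes):

```python
def tb_per_hour(gbps: float) -> float:
    """Convert a link speed in gigabits/s to terabytes/hour (decimal units)."""
    bytes_per_sec = gbps * 1e9 / 8
    return bytes_per_sec * 3600 / 1e12

def sneakernet_tb_per_hour(capacity_tb: float, trip_hours: float) -> float:
    """Effective bandwidth of physically carrying a drive to the destination."""
    return capacity_tb / trip_hours

# 6 Gbps works out to 2.7 TB/hr, while a 3 TB drive
# delivered in 30 minutes is an effective 6 TB/hr.
```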
Are we really hubristic enough to think we will ever have a theory that predicts and explains everything with 100% accuracy at all levels?
"There is another theory which states that this has already happened." - D.A.
He mixed up two different cases. One is the one you described; the other is the 'cat' person who used a "remote control virus" to taunt the police and got them to arrest the wrong people (whose PCs were infected and used to send the emails remotely). He was caught and confessed eventually, I think.
The hardware having the wrong range is probably pretty hard to avoid due to variance between terminals and problems keeping them all tuned over their lifetime.
However, the NFC reader shouldn't be active until the customer tells the cashier they will be paying with a contactless card and the cashier enables the reader.
It wouldn't prevent reading the wrong card if the customer has several NFC cards, but it would at least prevent the kind of surprises shown in the article.
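The gating idea can be sketched as a toy state machine (an illustration of the proposal, not any real terminal's firmware):

```python
class NfcReader:
    """Toy model of a cashier-gated contactless reader (illustrative only)."""

    def __init__(self) -> None:
        self.armed = False

    def arm(self) -> None:
        # Cashier enables the reader only after the customer
        # says they want to pay contactless.
        self.armed = True

    def poll(self, card_in_field: bool) -> bool:
        """Accept a card only while armed; disarm after one successful read,
        so a card wandering into range at any other time is ignored."""
        if self.armed and card_in_field:
            self.armed = False
            return True
        return False
```

With this flow, a card brushing past the terminal before the cashier arms it simply does nothing, which is exactly the surprise the article describes.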
The most notable example being SATA on Intel chipsets:
If Intel wanted to, they could probably ship a new driver that enables port-multiplier support before WD releases the disk.
I hate Flash as much as anyone, but I think the blame is misplaced wrt the Firefox situation.
On my system Firefox only locks up when I close a tab containing Flash, never while Flash is running. I have never had Flash content crash mid-run, let alone bring down the browser.
Sure, killing plugin-container.exe unfreezes the browser, but that's an ugly workaround at best, not a fix.
The Firefox devs have been plugging their ears and closing their eyes every time someone mentions this problem. They can't expect users to believe it's the user's setup (or drivers, or plugins) that's at fault in every case when there are so many reports in the wild.
It also doesn't explain why other browsers have no such problem, or why FF 3.6 didn't have it (lame excuses like "the Flash version is different" don't count).
There are bad sectors on your brand new drive. You can count on it. You have to make the drive find them and map around them because it won't happen in the factory.
In the MFM/RLL days, SCSI disks were tested in the factory and came with a list of known bad C/H/S locations, and the controller also kept a list of bad sectors that developed afterwards. I forget whether the controller board had to skip those sectors during LBA translation or the OS had to avoid using them.
When IDE drives came out, the factory list suddenly disappeared and all drives seemingly shipped with zero bad sectors, but it was understood that the list was just hidden. They also introduced reserved sectors used to replace bad sectors that develop later, so the user/OS can always see/use the same capacity as long as the reserved area isn't used up.
I believe this is still the case (testing in the factory and hiding the list), as two new drives of the same model/batch can perform differently when tested, and sometimes there are consistent speed dips in the performance graph where you can tell something is going on.
That said, drives nowadays are more reliable, and I've never seen a drive develop bad sectors during the initial fill with random data, which I always do when I buy a new one. I wouldn't trust any brand-new drive that does, and old drives that develop bad sectors I won't use for anything important, even though the drive can reallocate them and might still run for years.
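The write-then-verify fill can be sketched like this. It's a safe toy version that targets an ordinary file; a real new-drive check would run against the raw block device (destroying its contents), typically with a dedicated tool such as badblocks:

```python
import random

def surface_check(path: str, size: int, chunk: int = 1 << 20,
                  seed: int = 1234) -> list[int]:
    """Write a reproducible pseudo-random pattern to `path`, read it back,
    and return the byte offsets of chunks that fail verification.

    Illustrative sketch only: the pattern is regenerated per offset from a
    seeded RNG, so the verify pass needs no copy of the written data.
    """
    def pattern(off: int, n: int) -> bytes:
        # Deterministic per-offset data so write and verify agree.
        return random.Random(seed + off).randbytes(n)

    with open(path, "wb") as f:
        for off in range(0, size, chunk):
            f.write(pattern(off, min(chunk, size - off)))

    bad = []
    with open(path, "rb") as f:
        for off in range(0, size, chunk):
            n = min(chunk, size - off)
            if f.read(n) != pattern(off, n):
                bad.append(off)
    return bad
```

An empty result means every chunk read back exactly as written; on a real drive you'd also check the SMART reallocated-sector count afterwards to see whether the fill forced any remapping.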
Please work on something that would actually be useful, like the items below. These are hard to do, but it looks really bad when Mozilla ignores them for nearly 10 years to work on eye candy.
HTML5 <ruby> support
CSS3 writing-mode (vertical text)
We are not a loved organization, but we are a respected one. -- John Fisher