Would it surprise you to learn that nobody drives by to meter your gas usage? It's wireless to the nearest cellular uplink, usually in a wireless electric meter. Nobody has to come query it, since that would mostly defeat the point.
Amazing.. it appears you didn't even look at the post you replied to.
A bigger problem is our reluctance to move off 512-byte sectors. Who needs that fine granularity of LBA?
That's two sectors per kilobyte.. a granularity dating back to the floppy disk. And we still use this quantum on TB hard disks.
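To put that granularity in numbers (back-of-the-envelope sketch, assuming a decimal terabyte as drive vendors count it):

```python
# Address-count arithmetic: 512-byte vs 4096-byte sectors on a 1 TB drive.
# Numbers are illustrative, not tied to any specific product.

SECTOR_512 = 512
SECTOR_4K = 4096
TERABYTE = 10**12  # decimal TB, the marketing convention

lbas_512 = TERABYTE // SECTOR_512
lbas_4k = TERABYTE // SECTOR_4K

print(lbas_512)  # 1,953,125,000 logical block addresses
print(lbas_4k)   # 244,140,625 -- 8x fewer addresses to track
```

Nearly two billion addressable units on one disk, purely because of a floppy-era convention.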
Your CF card is going to use the USB interface
This is Informative?
CF cards are actually IDE devices. The adapters that plug CF into your IDE bus are just passive wiring.. no protocol adapter needed.
It's trivial to replace a laptop drive with a modern high-density CF card, and sometimes a great thing to do.
The highest-performance CF cards today use UDMA for even higher bandwidth.
HighSpeed USB can't reasonably get over 25MB/sec from the cards using a USB-CF adapter, but you can do better by using its native bus.
Older flash devices allowed multiple writes to one page, but new ones do not.
The higher-density MLC devices do not allow you to read a page, flip a bit to 0 and overwrite it. They require that pages be written just once, and in order.
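The constraint looks roughly like this (a toy model; the 64-page block size is just an assumed example):

```python
# Toy model of an MLC NAND block: pages may be programmed only once,
# and only in ascending order within a block. Sizes are illustrative.

class ToyBlock:
    PAGES = 64

    def __init__(self):
        self.next_page = 0              # next programmable page index
        self.data = [None] * self.PAGES

    def program(self, page, payload):
        if page != self.next_page:
            raise ValueError("pages must be programmed once, in order")
        self.data[page] = payload
        self.next_page += 1

blk = ToyBlock()
blk.program(0, b"first")
blk.program(1, b"second")
# blk.program(0, b"again")  # would raise: no partial-page overwrite
```

Any scheme that relies on going back and flipping bits in an already-written page runs straight into that ValueError in real silicon.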
This is causing no end of frustration for the Microsoft mobile filesystems, which frequently overwrote pages to flag them.
That's so oversimplified as to be completely wrong.
The number of write/erase cycles on NAND is significantly less than a hard drive. Typical devices are rated for 10,000 cycles. Bleeding-edge MLC parts can be as low as 5,000 or 7,000 erase cycles.
But.. a well-designed device will perform accurate wear-levelling across all the available blocks, so it doesn't matter what kind of access the user performs -- the whole device will wear evenly.
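A minimal wear-levelling policy can be sketched like this (hypothetical allocator, not any vendor's actual algorithm): always pick the least-worn block next, so erase counts stay near-uniform no matter what the user writes.

```python
# Minimal wear-levelling sketch: a min-heap keyed on erase count
# always hands out the least-worn block, keeping wear even.

import heapq

class WearLeveler:
    def __init__(self, nblocks):
        # heap of (erase_count, block_id)
        self.heap = [(0, b) for b in range(nblocks)]
        heapq.heapify(self.heap)

    def next_block(self):
        count, block = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (count + 1, block))
        return block

wl = WearLeveler(4)
blocks = [wl.next_block() for _ in range(8)]
# After 8 allocations, each of the 4 blocks was chosen exactly twice.
```

Even a pathological workload hammering one logical address gets spread across every physical block.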
There are indeed reserve blocks to mitigate premature death of some parts.
But, the most important part is the ECC mechanism. The parts don't just wear out and die, they get an increasing bit error rate. By overdesigning the ECC logic, you can squeeze longer life out of the parts.
It does not play guess and check.. well-recognized error correction algorithms like Reed-Solomon or BCH are used with really high detect/correct rates.
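The principle is easy to show with a toy code (a 3x repetition code with majority vote -- far weaker than the Reed-Solomon/BCH codes real controllers use, but it demonstrates how redundancy masks a rising bit error rate):

```python
# Toy ECC: 3x repetition with majority-vote decode. A single flipped
# bit per triple -- e.g. from a worn cell -- is corrected silently.

def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(coded):
    out = []
    for i in range(0, len(coded), 3):
        triple = coded[i:i + 3]
        out.append(1 if sum(triple) >= 2 else 0)  # majority vote
    return out

data = [1, 0, 1, 1]
coded = encode(data)
coded[4] ^= 1                  # one bit error in a worn cell
assert decode(coded) == data   # corrected transparently
```

Real BCH implementations correct many errors per page at a tiny storage overhead, which is why "overdesigning the ECC" buys so much extra life.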
Once you have accurate wear levelling, excellent ECC, and some manner of failure prediction, then it doesn't make so much sense to keep all your flash "in reserve" ready to swap out other parts wholesale. You might as well involve all the parts in the mix, so you get longer wear throughout.
MLC brings more density to the table. That's the only reason they do it. Smaller die size and greater storage density mean more MB per dollar.
SLC would be a much smaller capacity drive for the same money. It would be faster at writing, but probably too expensive or too small to have many adopters.
Same reason SLC is all but unheard-of in thumbdrives. (IronKey being one exception.)
One of the biggest challenges of the coming years will be finding and developing filesystems (logical data stores) that take advantage of the strengths of flash memory while diminishing its weaknesses.
Our approach today is mapping large banks of flash to look like a hard drive, and then using a filesystem that is optimized to reduce seek activity. (Cylinders/Heads/Sectors-per-track..)
EXT3 on SSD, FAT on huge SD cards, it's just shoe-horning our old filesystems onto new media. It makes about as much sense as using a hard drive to store a single TAR image only.
Once we make the huge step of designing high-performance filesystems that are exclusively *for* flash media, then we can take advantage of some of the huge benefits that are distinctly flash.
Key things like journalling should be designed with the flash organization in mind: pages and blocks vs "sectors". That kind of thing.
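For instance, a flash-aware journal might look something like this (a hypothetical layout sketch; the 4 KB page size is assumed): records are appended sequentially and padded to page boundaries, so the journal only ever programs fresh pages in order -- exactly the access pattern flash wants, and no in-place rewrites, which flash can't do anyway.

```python
# Sketch of a flash-aware journal: append-only records padded to NAND
# page boundaries, so writes always land on fresh whole pages in order.

PAGE = 4096  # assumed NAND page size

def journal_append(log, record: bytes):
    padded = record + b"\xff" * (-len(record) % PAGE)  # pad to a page
    log.extend(padded)
    return len(log) // PAGE  # pages consumed so far

log = bytearray()
journal_append(log, b"txn1: update inode 7")
pages = journal_append(log, b"txn2: " + b"x" * 5000)
# pages == 3: one page for txn1, two pages for the larger txn2
```

Contrast that with a journal designed for disks, which happily rewrites a small commit record in place -- a free operation on magnetic media, a whole block-erase cycle on flash.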
Access time != sequential bulk read throughput.
Think hard drive vs flash drive.
Flash does have "access time" close to RAM, since it doesn't have to seek or do complex addressing.
When you have these huge banks of flash acting as one drive, then "access time" becomes a computational problem of how fast you can look up the physical location of the user's data, based on a logical sector address.
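Stripped to its essence, that lookup is just a translation table (illustrative sketch; real FTLs use compact tree structures and caches, not a plain dict):

```python
# Bare-bones flash translation layer: logical sector address maps to a
# physical (chip, block, page) location. "Access time" is a table
# lookup, not a head seek.

ftl = {}  # logical sector -> (chip, block, page)

def write_sector(lba, location):
    ftl[lba] = location          # new data lands wherever is free

def read_sector(lba):
    return ftl.get(lba)          # O(1) translation, no mechanical seek

write_sector(100, (0, 12, 5))
write_sector(100, (1, 3, 0))     # overwriting remaps; old copy is stale
assert read_sector(100) == (1, 3, 0)
```

Note the overwrite doesn't touch the old physical page at all -- it just repoints the map, leaving stale data to be erased later.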
Still faster than mechanically moving a drive head, of course.
Again, it's only the ERASE unit that is huge -- 64KB, 128KB, or 256KB on the device itself.
You can't erase 4KB alone.
It gets more complicated when you consider huge parallel arrays of NAND, and the complex logical remapping that goes on to give the appearance of a typical 512-byte sector device.
NAND blocks are *erased* in large blocks, probably 128KB or larger in this case.
However, the read and write operations occur at a *page* level, not block. NAND pages today are typically 2K or 4KB in size.
So you can read and write in smaller units than 128KB.
However, to erase any byte of the NAND, you have to relocate the preserved data and erase a whole block.
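That read-modify-erase-write dance looks roughly like this (simplified sketch, assuming 128 KB blocks of 4 KB pages):

```python
# Simplified read-modify-erase-write: updating one 4 KB page means
# copying the block's other live pages aside, erasing the whole
# 128 KB block, then programming everything back. Sizes are assumed.

BLOCK = 128 * 1024
PAGE = 4 * 1024
PAGES_PER_BLOCK = BLOCK // PAGE  # 32 pages

def rewrite_page(block_pages, page_index, new_data):
    # 1. preserve every live page except the one being replaced
    preserved = [(i, d) for i, d in enumerate(block_pages)
                 if i != page_index and d is not None]
    # 2. erase the whole block (the only erase granularity NAND offers)
    fresh = [None] * PAGES_PER_BLOCK
    # 3. program the preserved data plus the new page back in
    for i, d in preserved:
        fresh[i] = d
    fresh[page_index] = new_data
    return fresh

blk = ["old"] * PAGES_PER_BLOCK
blk = rewrite_page(blk, 3, "new")
# One 4 KB change cost a 128 KB erase plus 31 page copies.
```

This amplification is exactly why controllers prefer to remap writes elsewhere and defer the erase.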
Because these drives operate on huge aggregate arrays of NAND, their block structure may be much larger, or they may have very complicated and smart algorithms that remap and write new data now while deferring the erases until much later.
When cells age, they take longer to erase. This happens over 5,000 to 10,000 cycles or more. It's not dramatic, and eventually the cells fail in a way too severe to be corrected by the ECC.
Because there is a (software) process to bring full speed back to the drive, we can safely conclude that none of the slowdown is related to cell aging or other cell-level issues. It's more of an organization and fragmentation issue.