Comment: Re:Still too expensive... (Score 2, Informative) 207

by Dr. Ion (#26942637) Attached to: Optimizing Linux Systems For Solid State Disks

Your CF card is going to use the USB interface

This is Informative?

CF cards are actually IDE devices. The adapters that plug CF into your IDE bus are just passive wiring.. no protocol adapter needed.

It's trivial to replace a laptop drive with a modern high-density CF card, and sometimes a great thing to do.

The highest-performance CF cards today use UDMA for even higher bandwidth.

Hi-Speed USB can't reasonably get more than about 25MB/sec out of the cards through a USB-CF adapter, but you can do better by using the card's native IDE bus.
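
The arithmetic backs that up. A rough sketch in Python (the bus rates are the published maximums; the USB efficiency factor is an assumption for illustration, real adapters vary):

# Rough throughput comparison: USB 2.0 mass storage vs. native CF/IDE (UDMA).
# The efficiency factor is an assumed, illustrative number.

USB2_RAW_MBPS = 480 / 8        # 480 Mbit/s signalling rate -> 60 MB/s raw
USB2_EFFICIENCY = 0.45         # assumption: protocol + mass-storage overhead
UDMA4_MBPS = 66.7              # ATA UDMA/66 burst rate
UDMA6_MBPS = 133.3             # ATA UDMA/133 burst rate

usb_ceiling = USB2_RAW_MBPS * USB2_EFFICIENCY
print(f"USB 2.0 practical ceiling: ~{usb_ceiling:.0f} MB/s")   # ~27 MB/s
print(f"Native IDE burst: {UDMA4_MBPS} (UDMA/66) or {UDMA6_MBPS} (UDMA/133) MB/s")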

Comment: Re:Why? (Score 2, Informative) 95

by Dr. Ion (#26853565) Attached to: Long-Term Performance Analysis of Intel SSDs

Older flash devices allowed multiple writes to one page, but new ones do not.

The higher-density MLC devices do not allow you to read a page, flip a bit to 0 and overwrite it. They require that pages be written just once, and in order.

This is causing no end of frustration for the Microsoft mobile filesystems, which frequently overwrote pages to flag them.
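
Here's a toy sketch in Python of that constraint (the class and names are made up purely to illustrate; real NAND command sets look nothing like this):

# Minimal sketch of the MLC page-programming constraint described above:
# within an erase block, each page may be programmed only once, and pages
# must be programmed in ascending order.

class EraseBlock:
    def __init__(self, pages_per_block=64, page_size=2048):
        self.page_size = page_size
        self.pages = [None] * pages_per_block   # None = still erased (all 0xFF)
        self.next_page = 0                      # next programmable page index

    def program(self, page_index, data):
        if page_index != self.next_page:
            raise ValueError("MLC NAND: pages must be written in order")
        if self.pages[page_index] is not None:
            raise ValueError("MLC NAND: a page may be programmed only once")
        if len(data) > self.page_size:
            raise ValueError("data exceeds page size")
        self.pages[page_index] = bytes(data)
        self.next_page += 1

    def erase(self):
        """Erasing is the only way to make pages writable again -- and it
        clears the whole block, not a single page."""
        self.pages = [None] * len(self.pages)
        self.next_page = 0

blk = EraseBlock()
blk.program(0, b"metadata")      # fine
blk.program(1, b"payload")       # fine
# blk.program(0, b"flag update") # would raise -- this is exactly the
#                                # overwrite-to-flag pattern that breaks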

Comment: Re:There's got to be some writable space here... (Score 4, Insightful) 95

by Dr. Ion (#26853529) Attached to: Long-Term Performance Analysis of Intel SSDs

That's so oversimplified as to be completely wrong.

The number of write/erase cycles NAND can take is significantly lower than what a hard drive handles. Typical devices are rated for 10,000 cycles, and bleeding-edge MLC parts can be as low as 5,000 or 7,000 erase cycles.

But.. a well-designed device will perform accurate wear-levelling across all the available blocks, so it doesn't matter what kind of access the user performs -- the whole device will wear evenly.
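
A toy wear-levelling sketch in Python, just to illustrate the idea -- not any vendor's actual algorithm:

# New writes always go to the free block with the lowest erase count, so wear
# spreads across every block instead of concentrating on "hot" logical data.

import heapq

class WearLeveler:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks
        # min-heap of (erase_count, block_id) for currently-free blocks
        self.free_heap = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.free_heap)

    def allocate(self):
        """Hand out the least-worn free block for the next write."""
        _, block = heapq.heappop(self.free_heap)
        return block

    def release(self, block):
        """Block's contents became stale; erase it and return it to the pool."""
        self.erase_counts[block] += 1
        heapq.heappush(self.free_heap, (self.erase_counts[block], block))

wl = WearLeveler(num_blocks=8)
for _ in range(1000):              # hammer the same logical data repeatedly
    b = wl.allocate()
    wl.release(b)                  # pretend we rewrote it and freed the old copy
print(wl.erase_counts)             # counts stay even across all blocks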

There are indeed reserve blocks to mitigate premature death of some parts.

But, the most important part is the ECC mechanism. The parts don't just wear out and die, they get an increasing bit error rate. By overdesigning the ECC logic, you can squeeze longer life out of the parts.

It does not play guess and check.. well-recognized error correction algorithms like Reed-Solomon or BCH are used with really high detect/correct rates.
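
To show the detect/correct idea with something small, here's a toy Hamming(7,4) example in Python. Real controllers use the much stronger Reed-Solomon or BCH codes mentioned above; the principle of a syndrome pointing at the flipped bit is the same:

# A worn cell flips a bit, the decoder's syndrome points at it, the data survives.

def hamming74_encode(nibble):
    """Encode 4 data bits into 7 bits (positions 1..7: p1 p2 d1 p3 d2 d3 d4)."""
    d = [(nibble >> i) & 1 for i in range(4)]            # d1..d4
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(bits):
    """Return the corrected 4-bit value; fixes any single flipped bit."""
    b = list(bits)
    s1 = b[0] ^ b[2] ^ b[4] ^ b[6]       # checks positions 1,3,5,7
    s2 = b[1] ^ b[2] ^ b[5] ^ b[6]       # checks positions 2,3,6,7
    s3 = b[3] ^ b[4] ^ b[5] ^ b[6]       # checks positions 4,5,6,7
    syndrome = s1 + (s2 << 1) + (s3 << 2)
    if syndrome:                          # non-zero syndrome = error position
        b[syndrome - 1] ^= 1
    d1, d2, d3, d4 = b[2], b[4], b[5], b[6]
    return d1 | (d2 << 1) | (d3 << 2) | (d4 << 3)

codeword = hamming74_encode(0b1011)
codeword[5] ^= 1                          # simulate one worn-out bit flipping
assert hamming74_decode(codeword) == 0b1011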

Once you have accurate wear levelling, excellent ECC, and some manner of failure prediction, then it doesn't make so much sense to keep all your flash "in reserve" ready to swap out other parts wholesale. You might as well involve all the parts in the mix, so you get longer wear throughout.

Comment: Small vs Large (Score 1) 95

by Dr. Ion (#26853487) Attached to: Long-Term Performance Analysis of Intel SSDs

MLC brings more density to the table. That's the only reason they do it. Smaller die area per bit and higher storage density mean more MB per dollar.

SLC would be a much smaller capacity drive for the same money. It would be faster at writing, but probably too expensive or too small to have many adopters.

Same reason SLC is all but unheard-of in thumbdrives. (IronKey being one exception.)
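
Back-of-the-envelope, with an illustrative die size (the numbers are assumptions, the ratio is the point):

# MLC stores 2 bits per cell, so roughly the same silicon (roughly the same
# cost) yields about twice the capacity of SLC.

cells_per_die = 16 * 2**30          # assumed: 16 Gi cells on one die
slc_bits_per_cell = 1
mlc_bits_per_cell = 2

slc_gib = cells_per_die * slc_bits_per_cell / 8 / 2**30
mlc_gib = cells_per_die * mlc_bits_per_cell / 8 / 2**30
print(f"SLC: {slc_gib:.0f} GiB per die, MLC: {mlc_gib:.0f} GiB per die")
# For a fixed drive price, MLC roughly doubles the GB per dollar -- at the
# cost of slower writes and fewer rated erase cycles, as noted above.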

Comment: Re:File system? (Score 2, Insightful) 95

by Dr. Ion (#26853455) Attached to: Long-Term Performance Analysis of Intel SSDs

One of the biggest challenges of the coming years will be finding and developing filesystems (logical data stores) that take advantage of the strengths of flash memory while diminishing the weaknesses of it.

Our approach today is mapping large banks of Flash to look like a hard drive, and then using a filesystem that is optimized to reduce seek activity. (Cylinders/Heads/Sectors-per-Track..)

EXT3 on SSD, FAT on huge SD cards, it's just shoe-horning our old filesystems onto new media. It makes about as much sense as using a hard drive to store a single TAR image only.

Once we make the huge step of designing high-performance filesystems that are exclusively *for* flash media, then we can take advantage of some of the huge benefits that are distinctly flash.

Key things like journalling should be designed with the flash organization in mind: pages and blocks vs "sectors". That kind of thing.
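
As a rough illustration of what "designed for flash" could mean, here's a toy append-only journal in Python that only ever writes whole pages and reclaims space a whole erase block at a time. The names are made up; real flash filesystems (JFFS2, UBIFS, etc.) are far more involved:

PAGE_SIZE = 2048
PAGES_PER_BLOCK = 64

class FlashJournal:
    def __init__(self, num_blocks=4):
        self.num_blocks = num_blocks
        self.log = []                   # list of (block, page, payload) entries
        self.write_point = (0, 0)       # (block, page) of the next append

    def append(self, payload):
        """Pad the record to one page and append at the current write point --
        nothing is ever overwritten in place."""
        if len(payload) > PAGE_SIZE:
            raise ValueError("record larger than a page; would need spanning")
        block, page = self.write_point
        self.log.append((block, page, payload.ljust(PAGE_SIZE, b"\xff")))
        page += 1
        if page == PAGES_PER_BLOCK:     # block full: move to the next one
            block, page = (block + 1) % self.num_blocks, 0
        self.write_point = (block, page)

    def reclaim(self, block):
        """Drop every record in one erase block -- erase granularity, not pages."""
        self.log = [e for e in self.log if e[0] != block]

j = FlashJournal()
j.append(b"set key=1")
j.append(b"set key=2")                  # supersedes the earlier record in the log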

Comment: Re:Bullshit (Score 1) 95

by Dr. Ion (#26853427) Attached to: Long-Term Performance Analysis of Intel SSDs

Access time != sequential bulk read throughput.

Think hard drive vs flash drive.

Flash does have "access time" close to RAM, since it doesn't have to seek or do complex addressing.

When you have these huge banks of flash acting as one drive, then "access time" becomes a computational problem of how fast you can look up the physical location of the user's data, based on a logical sector address.

Still faster than mechanically moving a drive head, of course.
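
For illustration, the lookup amounts to something like this (a flat Python dict standing in for the controller's real mapping structures):

# "Access time" on a big flash array is mostly the cost of mapping a logical
# sector to a physical (chip, block, page) location.

logical_to_physical = {
    0: ("chip0", 12, 3),    # LBA 0 lives at chip 0, block 12, page 3
    1: ("chip1", 7, 40),
    2: ("chip0", 12, 4),
}

def read_sector(lba):
    try:
        chip, block, page = logical_to_physical[lba]
    except KeyError:
        return b"\xff" * 512            # never written: reads back as erased
    return f"<data from {chip} block {block} page {page}>".encode()

print(read_sector(1))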

Comment: Closer, but.. no. (Score 4, Informative) 95

by Dr. Ion (#26853403) Attached to: Long-Term Performance Analysis of Intel SSDs

NAND is *erased* in large blocks, probably 128KB or larger in this case.

However, the read and write operations occur at a *page* level, not block level. NAND pages today are typically 2KB or 4KB in size.

So you can read and write in smaller units than 128KB.

However, to erase any byte of the NAND, you have to relocate the preserved data and erase a whole block.

Because these drives operate on huge aggregate arrays of NAND, their block structure may be much larger, or they may have very complicated and smart algorithms to re-map and write new data while deferring the erases until much later.
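
A toy sketch of that relocate-then-erase dance in Python, using the 2KB page / 128KB block figures above (purely illustrative):

PAGE_SIZE = 2048
PAGES_PER_BLOCK = 64                      # 64 * 2KB = 128KB erase block

def rewrite_page(block_pages, page_index, new_data, spare_block):
    """To change one 2KB page, copy every still-valid page into an
    already-erased spare block (substituting the updated page), then erase
    the old 128KB block in one shot. Returns (new_block, erased_old_block)."""
    assert len(new_data) == PAGE_SIZE
    for i, old in enumerate(block_pages):
        spare_block[i] = new_data if i == page_index else old
    erased = [b"\xff" * PAGE_SIZE] * PAGES_PER_BLOCK   # block-level erase only
    return spare_block, erased

old = [bytes([i]) * PAGE_SIZE for i in range(PAGES_PER_BLOCK)]
spare = [b"\xff" * PAGE_SIZE] * PAGES_PER_BLOCK
new_block, old_now_erased = rewrite_page(old, 5, b"Z" * PAGE_SIZE, spare)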

Comment: Re:Why? (Score 5, Informative) 95

by Dr. Ion (#26853393) Attached to: Long-Term Performance Analysis of Intel SSDs

Um... no.

When cells age, they take longer to erase. This happens over 5,000 or 10,000 cycles or more. It's not dramatic, and eventually the cells fail in a way more severe than the ECC can correct.

Because there is a (software) process to bring full speed back to the drive, we can safely conclude that none of the slowdown is related to cell aging or other cell-level issues. It's more of an organization and fragmentation issue.
