I think within a year we could see 6 Gb/s surpassed, the way SSDs are going, so making it the standard seems shortsighted. If we're looking for speed, SATA 6 Gb/s is not it, and this ancient CHS scheme has to go to accommodate a better way to map, access, and control data. Ultimately, we need these devices to understand and cooperate with the file system (TRIM does this for SSDs).

For example: the OCZ Vertex nearly saturates the 3 Gb/s link already. The only way the drives 'fail' to sustain that speed is with random writes, which typically hurt when data is written to a spot marked as available but whose NAND isn't erased: the drive either has to erase it first or move on. If the drive knows the OS is deleting a file (rather than just marking the space as available), it can erase those blocks automatically without you noticing. It's only under certain conditions that these drives don't consistently perform at peak:

- free space not consolidated
- free space not pre-erased
- the swap file generating random writes (slows performance)
- indexing, now useless with 0.1 ms seek times

Using write filters, or anything that converts random writes to sequential writes (through buffers, caches, or drivers), greatly enhances speed, such as the MFT software or even Windows SteadyState for these devices.
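To make the TRIM point concrete, here's a toy model of the erase-before-write penalty. This is purely illustrative: the class names and the timing constants are made up, and real flash management is far more involved, but it shows why telling the drive a block is garbage (so it can erase ahead of time) beats letting it find out at write time:

```python
# Toy model of why TRIM helps: writing into a block that still holds
# stale data forces an erase first, while a trimmed (pre-erased) block
# takes the write directly. Costs are arbitrary illustrative numbers.

ERASE_COST = 2.0   # ms, made-up figure
WRITE_COST = 0.2   # ms, made-up figure

class Block:
    def __init__(self):
        self.erased = True
        self.data = None

class ToySSD:
    def __init__(self, nblocks):
        self.blocks = [Block() for _ in range(nblocks)]

    def trim(self, i):
        # The OS tells the drive the block's contents are garbage,
        # so the drive can erase it in the background, ahead of time.
        self.blocks[i].erased = True
        self.blocks[i].data = None

    def write(self, i, data):
        cost = WRITE_COST
        if not self.blocks[i].erased:
            cost += ERASE_COST   # stale data: pay erase-before-write
        self.blocks[i].data = data
        self.blocks[i].erased = False
        return cost

ssd = ToySSD(4)
ssd.write(0, "a")            # first write lands on a pre-erased block
dirty = ssd.write(0, "b")    # overwrite without TRIM: pays the erase
ssd.trim(0)
clean = ssd.write(0, "c")    # after TRIM: write cost only
print(dirty, clean)          # 2.2 0.2
```

The exact numbers don't matter; the gap between the two writes is the whole argument for the drive understanding deletions.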
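The random-to-sequential trick those write filters use can be sketched as a simple coalescing log: every "random" logical write becomes a sequential append, with a map recording where each logical block now lives. This is a rough sketch of the general technique, not what the MFT software or SteadyState actually do internally, and the names are mine:

```python
# Minimal sketch of a write-coalescing log: scattered logical writes
# become sequential appends, plus a map from logical block number to
# its latest position in the log. Real filter drivers also handle
# flushing, recovery, and reclaiming superseded entries.

class CoalescingLog:
    def __init__(self):
        self.log = []       # append-only, strictly sequential storage
        self.mapping = {}   # logical block number -> index in log

    def write(self, lbn, data):
        # However random the logical address, the physical action is
        # always an append; the old copy is simply superseded.
        self.mapping[lbn] = len(self.log)
        self.log.append(data)

    def read(self, lbn):
        return self.log[self.mapping[lbn]]

log = CoalescingLog()
for lbn in (900, 3, 512, 3):       # scattered logical addresses
    log.write(lbn, f"data@{lbn}")
print(log.read(3))                 # data@3  (the latest copy wins)
print(len(log.log))                # 4 sequential appends, no rewrites
```

The drive only ever sees sequential writes, which is exactly the pattern these SSDs are fast at.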
I like the idea of the 'RAM socket' interface someone mentioned above. I think these devices work better in a parallel arrangement; most work like this internally anyway.
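That internal parallelism is basically striping: the controller spreads consecutive pages across several NAND channels so transfers overlap. A round-robin sketch (the channel count is an arbitrary example, not any particular drive's layout):

```python
# Toy round-robin striping of logical pages across NAND channels,
# the kind of internal parallelism an SSD controller exploits.

CHANNELS = 4   # arbitrary example value

def channel_for(page):
    # Consecutive pages land on consecutive channels.
    return page % CHANNELS

# Eight consecutive pages spread evenly over the four channels,
# so up to four transfers can be in flight at once.
placement = {ch: [p for p in range(8) if channel_for(p) == ch]
             for ch in range(CHANNELS)}
print(placement)   # {0: [0, 4], 1: [1, 5], 2: [2, 6], 3: [3, 7]}
```

A wider, more parallel host interface would just extend that same idea past the drive's connector.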