
The Benefits of Hybrid Drives 193

feminazi writes "Flash memory is being integrated with the hard disk by Seagate and Samsung and onto the motherboard by Intel. Potential benefits: faster read/write performance; fewer crashes; improved battery life; faster boot time; lower heat generation; decreased energy consumption. Vista's ReadyDrive will use the hybrid system first for laptops and probably for desktops down the road. The heat and power issues may also make it attractive in server environments."
This discussion has been archived. No new comments can be posted.

  • Re:Old? (Score:5, Informative)

    by nmb3000 ( 741169 ) on Monday July 31, 2006 @12:15AM (#15814950) Journal
    Windows creates an immense swapfile anyway - why not just get the system to do it on either a designated part of the hard drive, or on a USB 2.0 flash drive?

    Actually, has anyone tried that? I expect you could see a decent increase in performance that way.


    Windows' swapfile usage is pretty similar to the way Linux does swap, except that Windows uses a file instead of a partition. By default it's 1.5 times the amount of RAM installed in the system and is created all at once to ensure a contiguous file. On systems with plenty of RAM it's still good to have, because it lets the OS commit to large allocations from applications that request a lot of memory but may never touch most of it. Without a page file, 10-20% of physical memory is wasted because the OS has committed it to such requests (think Photoshop, etc.).

    I don't know how well the pagefile would work on a USB drive, since if you're using much swap you're already seeing serious degradation. Besides, flash drives still suck at write speeds, being many times worse than even an old IDE drive. That's the biggest problem with integrating the two technologies, I would think: making sure you don't introduce bottlenecks like that.
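
    To put numbers on the write-speed complaint, you can just time a big sequential write to each device and compare. A minimal sketch in Python (the drive paths and file size are placeholders, and the OS cache will flatter both numbers unless you fsync as shown):

    import os, time

    def write_throughput(path, size_mb=64, block_kb=256):
        """Write size_mb of data to path and return throughput in MB/s."""
        block = os.urandom(block_kb * 1024)
        blocks = (size_mb * 1024) // block_kb
        start = time.time()
        with open(path, "wb") as f:
            for _ in range(blocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())   # make sure the data actually reaches the device
        elapsed = time.time() - start
        os.remove(path)
        return size_mb / elapsed

    # Hypothetical paths -- point these at the hard disk and the USB stick.
    print("HDD:   %.1f MB/s" % write_throughput("C:/bench.tmp"))
    print("Flash: %.1f MB/s" % write_throughput("E:/bench.tmp"))
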
  • by keith134 ( 935880 ) on Monday July 31, 2006 @12:17AM (#15814955)
    Plug in a USB or flash drive and mount it as a non-removable drive (drivers exist for this purpose; Google them), then set your page file, temp files, etc. to the flash drive.
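
    For what it's worth, Windows keeps the pagefile configuration in the registry, so you can at least inspect what's currently set. A small read-only sketch using Python's standard winreg module (I believe the value name is PagingFiles under the usual Memory Management key, but treat that as an assumption):

    import winreg

    # Pagefile locations are stored as a REG_MULTI_SZ value under this key
    # (assumed value name: "PagingFiles").
    key_path = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
        paging_files, _ = winreg.QueryValueEx(key, "PagingFiles")
        for entry in paging_files:   # e.g. "C:\pagefile.sys 2048 4096"
            print(entry)
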
  • by DDLKermit007 ( 911046 ) on Monday July 31, 2006 @12:25AM (#15814992)
    An age-old method called wear leveling. In practice it usually doesn't work well, since most people do it with USB thumb drives and/or flash memory cards, which get removed on a regular basis, so the reader/writer of the media changes a lot, or the controller doesn't implement wear leveling at all. As such the wear leveling never really gets done very well. With a system like this the wear leveling would be exact, and the flash memory would end up outlasting the moving parts of the hard disk. Also, as parts of the card went bad, the controller would skip over those sectors in the future, which would let it keep working even longer. If even one part of a regular hard disk goes bad, you're boned completely.
  • by phoebe ( 196531 ) on Monday July 31, 2006 @12:26AM (#15814995)

    They are not flash though; the solid-state storage devices use banks of DIMMs with backup batteries, plus hard drives to save state when a power failure occurs.

    Flash isn't a terribly fast medium either, hence all the marketing over 12x, 20x, 50x compact flash cards in the digital camera market.

  • by earnest murderer ( 888716 ) on Monday July 31, 2006 @12:38AM (#15815028)
    Am I missing something here?

    Yes, durability has improved tremendously. Also, they aren't using it for swap. Most of the files that will get cached here are things the OS developer knows (or the system observes) are going to be asked for frequently. Data will also be saved here sometimes to avoid spinning up the disk.

    The sum of these writes is not going to exceed the durability (some millions of writes was the last spec I saw) of modern flash in any reasonable time frame.

    Also, if someone is abusing the technology, or just keeps the same drive around that long, the whole system doesn't fail; it just becomes a bog-standard disk. Since a write failure is known at the time it is written, you don't even lose data.
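
    That "worst case is an ordinary disk" point can be sketched as a toy write path (pure illustration, not any vendor's actual firmware): a write is tried on flash first, and a failed flash write simply falls through to the platters, so nothing is lost.

    class HybridStore:
        """Toy model of a hybrid write path: try flash, fall back to disk."""

        def __init__(self, flash, disk):
            self.flash = flash            # dict-like stand-ins for the two media
            self.disk = disk
            self.retired = set()          # flash blocks that have failed

        def write(self, block, data):
            if block not in self.retired:
                try:
                    self.flash[block] = data      # fast path
                    return "flash"
                except IOError:                   # failure is known at write time
                    self.retired.add(block)       # retire the block for good
            self.disk[block] = data               # worst case: a normal disk write
            return "disk"

    # Usage: store = HybridStore(flash={}, disk={}); store.write(0, b"data")
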
  • by Zan Lynx ( 87672 ) on Monday July 31, 2006 @12:55AM (#15815083) Homepage
    Haven't any of you been playing with the Vista betas? Vista has a sort of swap file / prefetch feature that you can enable on a USB flash drive. Vista first benchmarks the device to determine if it is fast enough. Then you can create a sort of swap file on it, as big as you like.

    It's part of the Vista SuperPrefetch.
    http://www.windowsitpro.com/Windows/Article/ArticleID/48085/48085.html [windowsitpro.com]
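
    The interesting part is the benchmarking step: the feature is only enabled if random access on the stick is fast enough. A rough sketch of that kind of qualification check (the 2 MB/s cutoff is a made-up placeholder, not Microsoft's actual threshold, and the OS file cache will skew results unless the test file is large):

    import os, random, time

    def random_read_speed(path, reads=200, block=4096):
        """Time `reads` random 4 KB reads from an existing file; return MB/s."""
        size = os.path.getsize(path)
        start = time.time()
        with open(path, "rb") as f:
            for _ in range(reads):
                f.seek(random.randrange(0, size - block))
                f.read(block)
        elapsed = time.time() - start
        return (reads * block) / (1024 * 1024) / elapsed

    # Hypothetical test file previously written to the USB stick.
    speed = random_read_speed("E:/flashtest.bin")
    print("%.2f MB/s -- %s" % (speed, "fast enough" if speed >= 2.0 else "too slow"))
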
  • by Anonymous Coward on Monday July 31, 2006 @01:31AM (#15815180)
    Eh? Since when did bad disk sectors make a HD die? Regular magnetic drives have been working around bad sectors since long before the introduction of flash. No, the problem with magnetic hard drives is the moving parts that die, which eliminates the drive completely, and those will still exist in these hybrid drives. Until a pure flash HD comes out, all these hybrids do is create a minor improvement in performance. Flash unfortunately still has a bit to improve in performance (faster-than-HD speeds are expensive for flash), size, and cost. As you said, wear leveling will make flash last quite a while, probably to the life of the HD, but I don't see how useful the addition of flash would be compared to, say, a flash slot (you already have a RAM slot and a connector for the HD, so why not add a flash slot into the mix that is far more flexible, where the OS could be read from it or something along those lines). (I'm aware the idea has been put out, I just can't remember what Intel called it.)
  • by adrianmonk ( 890071 ) on Monday July 31, 2006 @01:47AM (#15815214)
    The technical specifications of the flash memory in my USB drive says that it is guaranteed to work for, at most, 100000 (i.e., one followed by 5 zeros) writes.

    I thought I'd seen specs an order of magnitude larger than that in many cases, but the problem still may not be as bad as you think in many cases even if it is as bad as 100000 writes. The reason? Flash devices have systems built in to their controllers specifically to deal with these problems. The mechanisms may vary, but the ones I know about are wear leveling [wikipedia.org] and excess capacity (beyond the capacity that the device reports to the operating system) that can be pressed into service when a block fails.

    Briefly, wear leveling means that if you write to the same logical address over and over, the controller will map that write to different physical addresses each time. That means that you can't wear out the device by rewriting the same file over and over again; instead, you only add a little bit of wear to each physical block on the flash device. The concept is a little bit like rotating the tires on your car except that it's a more dramatic win since write patterns can be much more uneven than wear on tires on a car.

    The other mechanism for mitigating the effects of limited flash life is putting excess capacity aside (so that it's not reported to the OS) to be used when a physical block does fail. Since it's a matter of probability just which write will cause a given block to fail (meaning that some will fail after less than 100,000 writes and some will probably last much longer), even with wear leveling it's unlikely that all blocks will fail at once. It should be easy to tell when the pool of spare blocks is nearing exhaustion and give you advance warning that your flash device is wearing out. So in that sense, it is actually safer than hard drives, which tend to fail without warning.

    Finally, this whole thing reminds me of the reaction some people (mostly audiophiles) had to compact discs, and digital audio in general, when it first began to replace analog systems. There was some resistance to the technology because it was sure to sound artificial: after all, you were taking the music apart into discrete steps and putting it back together again. Obviously, a system which broke a waveform down into a discrete step can't ever really reproduce exactly the same waveform as the original. And that's true, but what they missed was the fact that analog systems can't ever reproduce exactly the same waveform as the original either. Both systems have limitations, in this case distortion of the signal, and the true question should be not whether the proposed new system has limitations, but whether the limitations of the new system are worse or better. (The answer to that may depend on the intended use.)

    I think the same thing applies to flash devices. Yes, you may have a hard drive that has been humming along for 5 years without a problem, and that's fairly common, but hard drives do fail. When I was a system admin, I saw my fair share of them. (I've seen a few since then too.) The key in the case of flash is probably to get in place a nice warning system that can take advantage of the ability to notice that spare blocks are being depleted and warn the user when failure of the device is nearing. I haven't researched it carefully, but perhaps SMART [wikipedia.org] would be useful for this in some applications, such as where flash is replacing hard disks.
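
    The wear-leveling and spare-block ideas above are easy to model. A toy controller (nothing like real firmware, which also has to migrate live data and track ECC) that always steers the next write at the least-worn live block and retires blocks once their erase count hits the limit:

    class ToyFlashController:
        """Toy wear leveling: map each logical write to the least-worn block."""

        def __init__(self, blocks, spares, max_writes=100_000):
            self.wear = {b: 0 for b in range(blocks + spares)}   # erase counts
            self.dead = set()                                    # failed blocks
            self.mapping = {}                                    # logical -> physical
            self.max_writes = max_writes

        def write(self, logical, data):
            live = [b for b in self.wear if b not in self.dead]
            if not live:
                raise IOError("flash device worn out")
            physical = min(live, key=lambda b: self.wear[b])     # least-worn block
            self.wear[physical] += 1
            if self.wear[physical] >= self.max_writes:
                self.dead.add(physical)          # spare capacity absorbs the loss
            self.mapping[logical] = physical
            # (a real controller would also relocate live data and record ECC)

        def spares_left(self):
            # A warning system would watch this number shrink toward zero.
            return len(self.wear) - len(self.dead)
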

  • Re:Old? (Score:4, Informative)

    by DrXym ( 126579 ) on Monday July 31, 2006 @04:44AM (#15815703)
    Windows' swapfile usage is pretty similar to the way Linux does swap, except that Windows uses a file instead of a partition. By default it's 1.5 times the amount of RAM installed in the system and is made all at once to ensure a contiguous file.

    Sadly, it isn't always contiguous, since it has an initial size and a maximum size. If you run too many apps, or an app goes crazy and consumes all your memory, your pagefile goes through the roof. I was horrified to discover the pagefile.sys on my laptop was split into 3000+ pieces. I had to run PageDefrag over it (a SysInternals tool). After running it a bunch of times, it's still at 800 pieces even now.

    I prefer the Linux method, since you can choose a swap file or a swap partition. A partition guarantees no fragmentation (and optimal performance, since there is no underlying filesystem), but you have the flexibility of a swap file if you need it.

  • Re:Finally... (Score:3, Informative)

    by dfghjk ( 711126 ) on Monday July 31, 2006 @08:45AM (#15816426)
    Technically, the drive controller is in the drive (that's what IDE stands for). The controller they referred to was one that was added to support the flash. These flash parts aren't cache RAMs like everyone seems to be imagining; they're additional storage.
  • by YesIAmAScript ( 886271 ) on Monday July 31, 2006 @11:26AM (#15817457)
    NOR flash (like the BIOS chip in your PC) is good for 1M writes or more.

    But NOR flash is low density. An 8MByte NOR flash is large.

    The flash that is being integrated into these drives is NAND flash. NAND flash is the kind of flash you use in your digital camera. NAND flash is high density.

    And it is crap.

    SLC NAND flash is good for 100,000 writes. But SLC is on the way out because it's only half as dense as MLC NAND flash. MLC NAND flash is good for 10,000 writes.

    Are you scared yet?

    That's a statistical measure, so often cells last longer than 10,000 writes before crapping out. And systems that use NAND flash use ECC (error correction codes) and wear levelling to try to hide the flash wearing out. It's complex, but it does work pretty well.

    But a coworker made a flash-burner app to wear out some flash on purpose. It wrote constantly, and he was able to wear it out in a couple of days. It didn't wear out the entire flash chip, but that's when the flash started to develop sectors that were unusably bad, even with ECC.
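
    For a sense of scale, the endurance numbers translate into lifetime pretty directly once wear leveling spreads writes across the whole part. Back-of-the-envelope arithmetic (the 1 GB capacity and 500 MB/day write rate are made-up inputs, and it assumes ideal wear leveling):

    # Rough lifetime estimate under ideal wear leveling (assumed inputs, not specs).
    capacity_mb = 1024           # flash capacity in the hybrid drive
    cycles = 10_000              # MLC NAND endurance per block
    writes_per_day_mb = 500      # how much the OS pushes at the flash per day

    total_writable_mb = capacity_mb * cycles
    lifetime_days = total_writable_mb / writes_per_day_mb
    print("~%.0f years before wear-out" % (lifetime_days / 365))   # ~56 years here
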
