The Benefits of Hybrid Drives

feminazi writes "Flash memory is being integrated with the hard disk by Seagate and Samsung, and onto the motherboard by Intel. Potential benefits: faster read/write performance; fewer crashes; improved battery life; faster boot time; lower heat generation; decreased energy consumption. Vista's ReadyDrive will use the hybrid system first for laptops and probably for desktops down the road. The heat and power issues may also make it attractive in server environments."
  • Finally... (Score:4, Insightful)

    by Cherita Chen ( 936355 ) on Sunday July 30, 2006 @11:48PM (#15814865) Homepage
    This is not a new idea, nor is it new technology... This has been a long time coming.
    • But it's still pretty cool - a new way to integrate existing technologies, bring them together to make computers work better. I thought TFA was an interesting read, even though it didn't have anything particularly earth-shattering in it.
      • Re:Finally... (Score:2, Interesting)

        by NitsujTPU ( 19263 )
        I don't think that's what he's driving at.

        People have been talking about doing exactly this technique for quite a while. It just never hit the mainstream. I even think that there were a couple commercial implementations of this, but I'm not sure on that last point. It is definitely talked about in research papers on filesystems that I have read.
    • Re:Finally... (Score:5, Interesting)

      by grammar fascist ( 239789 ) on Sunday July 30, 2006 @11:58PM (#15814898) Homepage
      This is not a new idea, nor is it new technology... This has been a long time coming.

      The prices finally fell to where it's economically feasible.

      Personally, I like Intel's idea better (embedding the flash memory in the drive controller), because it should work just fine with existing drives. It might also be upgradeable, but I'm not holding my breath.
      • Re:Finally... (Score:5, Insightful)

        by Knetzar ( 698216 ) on Monday July 31, 2006 @02:18AM (#15815289)
        I don't like that idea, since if a system failure occurs and I want to move my hard drive to another system, there is a chance that the hard drive is in a bad state. Whereas if you have the flash integrated with the HDD, the write buffer is with the disk (as it should be).
        • That's what I was thinking. If the IO call returns as soon as the data is written to the flash, then that data sure as hell should be on the disk somewhere. Keeping what is basically a disk cache on the system board sounds like a terrible idea to me.
          • Re:Finally... (Score:3, Interesting)

            by Fordiman ( 689627 )
            Still, it might not be a bad idea to have flash journalling; have the controller record a disk-to-disk write and return immediately, intelligently handle disk reads, even if the data hasn't been relocated yet, etc. The flash chip just stores a list of actions (like in a journalled FS) and the controller performs them. They can be suitably small (1 block) so as to keep state granularity high.

            No, seriously. Sure, lots of filesystems journal, but how many can journal with separated control? In a normal jou
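The journalling scheme sketched in the comment above might look roughly like this (a toy model with made-up names, purely for illustration, not any real controller's firmware): the flash part holds a small list of pending block actions, the controller acknowledges writes immediately, serves reads from the journal when the data hasn't been relocated yet, and applies the actions to the platters later.

```python
# Toy model of controller-level flash journalling: the flash stores a list
# of pending block actions (like a journalled FS); the controller returns
# immediately on write, answers reads from the journal if needed, and
# flushes journalled actions to the spinning disk in the background.

class JournalingController:
    def __init__(self):
        self.disk = {}      # block number -> data (the spinning platters)
        self.journal = []   # pending (block, data) actions held in flash

    def write(self, block, data):
        # Record the action in flash and return immediately.
        self.journal.append((block, data))

    def read(self, block):
        # Serve from the journal first, even if not yet relocated to disk.
        for b, d in reversed(self.journal):
            if b == block:
                return d
        return self.disk.get(block)

    def flush(self):
        # Done later by the controller: apply journalled actions in order.
        for block, data in self.journal:
            self.disk[block] = data
        self.journal.clear()
```

Because the journal lives in persistent flash, anything still pending after a crash can simply be replayed, which is what would make the early acknowledgement safe.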
        • You'd move your harddrive in a bad state? Surely in the case of a system crash / power outage, you'd be best just booting up the machine first, establishing the state of the harddrive, and then moving it? Otherwise you've no idea what you're putting into the other machine... (or did you mean something else?)

        • You don't think anybody has thought of that? That there won't be utilities to commit cache to disk before moving the hard drive?
          • And if motherboard problems end up being the reason for the crash? Or even a bad CPU...you might not want to replace the CPU, but instead you might just want to move your drive to a new system.
        • No there's not. The hard drive will have a conventional filesystem just as it does today. The flash storage contents are managed separately. Flash storage is not a "write buffer".
          • "Windows, however, is more transactional. It tends to trickle log files and other data even when systems are idle, keeping drives spinning. Placing that data in the write cache allows disk drives to power down."
      • Re:Finally... (Score:3, Informative)

        by dfghjk ( 711126 )
        Technically, the drive controller is in the drive (that's what IDE stands for). The controller they referred to was one that was added to support the flash. These flash parts aren't cache RAMs like everyone seems to be imagining; they're additional storage.
      • Oh, cool.

        So when the flash-containing controller chip wears out after n writes, you get to buy a new motherboard. Awesome.

        Just think of all the new chipsets Intel will be able to sell that way!

    • by reporter ( 666905 ) on Monday July 31, 2006 @12:30AM (#15815012) Homepage
      The technical specifications of the flash memory in my USB drive say that it is guaranteed to work for, at most, 100000 (i.e., one followed by 5 zeros) writes. People do not talk about this limitation, but I have seen it written into the technical specifications of the flash memory in many devices [globalspec.com].

      The hard drive in my Compaq x86 workstation has been humming nicely for more than 5 years. Due to the nature of my work at the institute, the number of writes to the hard drive has easily exceeded 100000 during that time.

      Using flash memory as a fast cache for the hard drive will increase the performance of the drive but will decrease the overall life of the drive. Someone will be awfully upset when she makes a final save of her million-dollar PowerPoint presentation for the CEO and discovers that the save is the 100001st write to the hybrid drive.

      Hopefully, the engineer who designed this hybrid drive has, at a minimum, integrated an LCD counter and a tiny speaker into the drive. The counter shall display the running total of the number of writes to the flash memory. The tiny speaker shall beep like crazy when the total exceeds 99900.

      • by servognome ( 738846 ) on Monday July 31, 2006 @12:37AM (#15815026)
        Hopefully, the engineer who designed this hybrid drive has, at a minimum, integrated an LCD counter and a tiny speaker into the drive. The counter shall display the running total of the number of writes to the flash memory. The tiny speaker shall beep like crazy when the total exceeds 99900.

        It was in the original engineering design, but the lawyers said it would be cheaper to just include a warning in the fine print of the warranty.
      • by Volante3192 ( 953645 ) on Monday July 31, 2006 @12:45AM (#15815053)
        The technical specifications of the flash memory in my USB drive says that it is guaranteed to work for, at most, 100000 (i.e., one followed by 5 zeros) writes. People do not talk about this limitation, but I have seen this limitation written into the technical specifications of the flash memory in many devices

        But, on the other hand, how often do you write to your windows folder? There's the monthly update, the occasional reg hack, but all in all, once it's established, that's a pretty static area of your drive. I could see this as an incredible benefit to system files, which, as has oft been discussed here before, is the big reason for this.

        Loading your PPT file in flash won't help bootup. Loading that fuster-cluck of the system32 folder, though, would.

        Someone will be awfully upset when she makes a final save of her million-dollar PowerPoint presentation for the CEO and discovers that the save is the 100001st write to the hybrid drive.

        Backups? Alternate locations? If this is what it takes for them to learn the necessity of redundant copies, it's even better.

        There should be some level of safeguard built in that anything user created should be stored to the magnetic part of the drive, my documents, program files, but they should have this anyway. I mean, nothing like the last save and then having to call Dell because your drive is spitting out an Error Code 7...
        • by Osty ( 16825 ) on Monday July 31, 2006 @02:34AM (#15815339)

          But, on the other hand, how often do you write to your windows folder? There's the monthly update, the occassional reg hack, but all in all, once it's established, that's a pretty static area of your drive. I could see this as an incredible benefit to system files, which, as has been discussed oft here before, the big reason for this.

          Depends on what you're doing. For example, if you run IIS, your log files (by default; you can change this) are in %WINDIR%\System32\LogFiles. That's going to have a lot of writes. Any new hardware or software installation may cause writes to %WINDIR%. There's a lot of other stuff that legitimately writes to %WINDIR% like installing a new printer (think roaming -- you may print to a different printer every day), the .NET Global Assembly Cache, Visual Styles and themes, and a whole lot more. Whether these things should be in %WINDIR% or not is a different question. The point is that using flash for %WINDIR% under the assumption that you'll not write there very often is a little naive. Perhaps Vista reorganizes %WINDIR% somewhat so that fewer processes need to write there.

          There should be some level of safeguard built in that anything user created should be stored to the magnetic part of the drive, my documents, program files, but they should have this anyway. I mean, nothing like the last save and then having to call Dell because your drive is spitting out an Error Code 7...

          All of this is a moot point anyway, because this use of flash is only as cache. Anything written to the flash drive should eventually be flushed to the hard drive. Similarly, if you've exhausted your write cycles and try to write to the cache, it should seamlessly catch the fault and go directly to hard drive. In that case it would be nice to give an occasional notice that your flash chip is exhausted and you need to replace it, but you should not risk losing any data. I'm not a big fan of on-board flash simply because it may be unreplaceable. Any onboard flash chips should not be surface-mounted, but socketed like RAM, CPU, or the clock battery. That will require some standardization on sockets, but as long as there are only two or three different options and the designers of said options let others build chips using that interface (*cough*Sony*cough*) it shouldn't be a problem.

          In the long run, I think computer manufacturers will love this. How likely do you think your parents will be to replace their onboard flash when they run out of write cycles? The average consumer will just buy another PC for a couple hundred dollars rather than buying a new flash chip and installing it (or paying someone to install it).

          How soon do you think the conspiracy theories will start up that manufacturers like Dell are intentionally shortening the life of onboard flash through factory "testing"?

      • It seems like the sane design would be to use the flash memory if available, but otherwise function like hard drives do today. In other words, if the flash memory craps out, you can still read and write to the drive, although with a performance hit.
        Given, as you mention, the limited number of writes on these, it might also be neat to make using the flash as a speed supplement something that can be turned on or off from the OS. I could see that being useful in a number of ways, if it was writt
      • by evilviper ( 135110 ) on Monday July 31, 2006 @12:49AM (#15815069) Journal
        Someone will be awfully upset when she makes a final save of her million-dollar PowerPoint presentation for the CEO and discovers that the save is the 100001st write to the hybrid drive.

        Yes. Everyone knows flash RAM will explode in a gigantic fireball on the 1st attempt to write to it, once it has gone beyond spec.
      • The hard drive in my Compaq x86 workstation has been humming nicely for more than 5 years. Due to the nature of my work at the institute, the number of writes to the hard drive have easily exceeded 100000 during that time.

        During which you weren't purchasing new drives for your machine.

        I can see why hard drive manufacturers might like the idea of a limited-life-span device...
        • True enough, but what customers are going to pay more for a drive that costs more AND needs to be replaced more often when the only advantage is a possibly insignificant performance increase?

          I doubt many people are going to take this route when existing technologies (RAID comes to mind) work fine for those who absolutely need the extra performance (or rather, have convinced themselves they need it [Hi, owner of that $7,500 gaming rig!]).
          • but what customers are going to pay more for a drive that costs more AND needs to be replaced more often when the only advantage is a possibly insignificant performance increase?

            Very true. I was being a touch cynical hey. ;) It would probably also take phasing the non-combined drives out of the market as well, something we may not see for a while, if ever.

            or rather, have convinced themselves they need it

            Probably a viable market there come to think about it. People who need (or think they need) that little
          • Indeed, but I would think the main market for this would be laptops, where the value is in making the computer usable in the fastest time or with the lowest battery usage, hopefully both. Something like the UMPC/PDA type tablet could boot without firing up the hard drive, till you need something on it. Then just use pretty graphics to get the user to not notice the wait (login screen should do it).

            A lot of data is write rarely, read often, such as individual track info for a media player, and the media player itself
      • Yes, it just means you add a new level of storage. Essentially, flash is less than primary but more than secondary storage. Since it has write limitations, you need to make sure that it is mostly WORM files, such as OS and program files. Write-intensive files such as user data files, temp files, transactional databases, paging files or swap partitions, etc should remain on magnetic media. Flash offers very high read performance, plain and simple. It is not a replacement for a hard drive any more than a
      • by flooey ( 695860 ) on Monday July 31, 2006 @01:09AM (#15815128)
        Using flash memory as a fast cache for the hard drive will increase the performance of the drive but will decrease the overall life of the drive. Someone will be awfully upset when she makes a final save of her million-dollar PowerPoint presentation for the CEO and discovers that the save is the 100001st write to the hybrid drive.

        The thing to note is that that limitation is per flash block, not for the whole thing. So for a 1 GB flash component, given perfect block mapping, you can write around 100 TB of data to it before it wears out. With a 150MB/sec transfer rate, it would take more than a week of continuous writing to write that much. As well, modern flash can withstand a couple million writes, extending the life to several months of continuous writing. Given that this would generally be containing operating system components, which are read often but written to rarely, the lifespan of the memory should be no worry at all.
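The parent's figures can be checked with a quick back-of-the-envelope calculation (assuming perfect block mapping, as the comment does, so wear spreads evenly):

```python
# Back-of-the-envelope check of the figures above, assuming perfect wear
# leveling so every block is written evenly.
capacity = 1e9            # 1 GB flash component, in bytes
cycles = 100_000          # rated write cycles per block
rate = 150e6              # 150 MB/s sustained write rate

total_bytes = capacity * cycles   # ~100 TB writable in total
seconds = total_bytes / rate
days = seconds / 86_400           # ~7.7 days of continuous writing
print(round(days, 1))
```

With million-cycle parts the same arithmetic stretches to a few months of continuous writing, which matches the comment's claim.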
      • That's really 10000 writes to a single cell on the flash chip. All flash memory has built-in algorithms to statistically spread out those writes over all the cells of the chip, so it's not like if you wrote a file 10000 times you'd have an instant failure. If you only write to, on average, say 10% of the disk, then you wouldn't see a failure for probably 100000 writes. Given a 4 GB flash chip, the average write is probably just a few megabytes, so it works out to an even longer time between failures. Still not
        • All flash memory has built-in algorithms to statistically spread out those writes over all the cells of the chip,

          No it doesn't. Some flash controllers have wear-levelling, as do some filesystems specifically designed for flash memory. But it's not done on-chip. The xD and SmartMedia formats have no controller and thus no wear-levelling.

      • by adrianmonk ( 890071 ) on Monday July 31, 2006 @01:47AM (#15815214)
        The technical specifications of the flash memory in my USB drive says that it is guaranteed to work for, at most, 100000 (i.e., one followed by 5 zeros) writes.

        I thought I'd seen specs an order of magnitude larger than that in many cases, but the problem still may not be as bad as you think in many cases even if it is as bad as 100000 writes. The reason? Flash devices have systems built in to their controllers specifically to deal with these problems. The mechanisms may vary, but the ones I know about are wear leveling [wikipedia.org] and excess capacity (beyond the capacity that the device reports to the operating system) that can be pressed into service when a block fails.

        Briefly, wear leveling means that if you write to the same logical address over and over, the controller will map that write to different physical addresses each time. That means that you can't wear out the device by rewriting the same file over and over again; instead, you only add a little bit of wear to each physical block on the flash device. The concept is a little bit like rotating the tires on your car except that it's a more dramatic win since write patterns can be much more uneven than wear on tires on a car.

        The other mechanism for mitigating the effects of limited flash life is putting excess capacity aside (so that it's not reported to the OS) to be used when a physical block does fail. Since it's a matter of probability just which write will cause a given block to fail (meaning that some will fail after less than 100,000 writes and some will probably last much longer), even with wear leveling it's unlikely that all blocks will fail at once. It should be easy to tell when the pool of spare blocks is nearing exhaustion and give you advance warning that your flash device is wearing out. So in that sense, it is actually safer than hard drives, which tend to fail without warning.
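A toy model of the two mechanisms just described (wear leveling plus a hidden spare-block pool) might look like this; all names and numbers are hypothetical, and real controllers are far more sophisticated:

```python
# Toy model of wear leveling with a hidden spare pool. A logical block is
# remapped to a different physical block on every rewrite, so repeated
# writes to one file spread wear across the device; a block that reaches
# its rated cycle count is retired, and the hidden spares absorb the loss.

class WearLeveledFlash:
    def __init__(self, logical_blocks=4, spares=2, max_writes=3):
        total = logical_blocks + spares      # spares are invisible to the OS
        self.wear = [0] * total              # writes seen per physical block
        self.retired = set()                 # physical blocks worn out
        self.mapping = {}                    # logical -> physical
        self.data = {}                       # physical -> payload
        self.max_writes = max_writes

    def write(self, logical, payload):
        # Pick the least-worn physical block that is free and still usable.
        candidates = [p for p in range(len(self.wear))
                      if p not in self.retired
                      and p not in self.mapping.values()]
        target = min(candidates, key=lambda p: self.wear[p])
        self.wear[target] += 1
        if self.wear[target] >= self.max_writes:
            self.retired.add(target)         # retire; spares cover the loss
        self.mapping[logical] = target
        self.data[target] = payload

    def read(self, logical):
        return self.data[self.mapping[logical]]
```

Watching `len(retired)` approach the spare count is exactly the kind of signal that could feed the advance warning described above.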

        Finally, this whole thing reminds me of the reaction some people (mostly audiophiles) had to compact discs, and digital audio in general, when it first began to replace analog systems. There was some resistance to the technology because it was sure to sound artificial: after all, you were taking the music apart into discrete steps and putting it back together again. Obviously, a system which broke a waveform down into a discrete step can't ever really reproduce exactly the same waveform as the original. And that's true, but what they missed was the fact that analog systems can't ever reproduce exactly the same waveform as the original either. Both systems have limitations, in this case distortion of the signal, and the true question should be not whether the proposed new system has limitations, but whether the limitations of the new system are worse or better. (The answer to that may depend on the intended use.)

        I think the same thing applies to flash devices. Yes, you may have a hard drive that has been humming along for 5 years without a problem, and that's fairly common, but hard drives do fail. When I was a system admin, I saw my fair share of them. (I've seen a few since then too.) The key in the case of flash is probably to get in place a nice warning system that can take advantage of the ability to notice that spare blocks are being depleted and warn the user when failure of the device is nearing. I haven't researched it carefully, but perhaps SMART [wikipedia.org] would be useful for this in some applications, such as where flash is replacing hard disks.

      • How about if it just becomes a normal drive once the flash dies?
        • How about if it just becomes a normal drive once the flash dies?

          I imagine this is exactly how they would do it, though perhaps be a bit more advanced about it. It'd be easy enough I hope to have some simple error detection and correction built into the flash. Whenever a particular block is detected to have started to turn bad, it can be recovered (since it's quite fair to assume the vast majority of failures would begin as single-bit ones), then marked as bad from then on. Over time, the amount of flash
      • You're describing this disaster scenario as if it's a new problem introduced by this technology. I've already had plenty of demos I was doing for important people blow up because of a hard drive error (I'm just lucky that way). Which is more likely: that you'll hit the flash write limit, or that the mechanical part of the drive will crash and burn? My experience with hard drives suggests they're none too reliable right now, and anything that can reduce the amount of time they spend moving around has a po
      • Hopefully, the engineer who designed this hybrid drive has, at a minimum, integrated an LCD counter and a tiny speaker into the drive. The counter shall display the running total of the number of writes to the flash memory. The tiny speaker shall beep like crazy when the total exceeds 99900.

        Just enter "4, 8, 15, 16, 23, 42" and everything will be okay.

      • Erm... well here on Slashdot, it /always/ gets brought up. A few points: that's not how many /writes/ it can handle, but complete writes, or to put it another way, that's how many times each block can be overwritten. To get full advantage out of the flash memory, the files you want to store on it are the files that are /read/ most often, not written most often, for example (as others have said) your system files, to make bootup and program startup faster.

        That aside, should the flash memory fail, there's no rea
    • nor is it a brand new technology - but it's only going to happen and appear on the site of your favorite component supplier because MS has decided to support it in its new OS.
      A whole load of new hardware tech never takes off as it's a bit chicken and egg - I'm not buying a PhysX card until I actually find my software will support it and the software's not going to be made until I buy the card etc.
      People might bitch and moan about MS, but it looks like they can actually make new stuff happen.
      MS decide th
  • by Sixtyten ( 991538 ) on Sunday July 30, 2006 @11:51PM (#15814873)
    Will they increase fuel economy as well?
    • Will they increase fuel economy as well?

      I hear the worst part about them is refilling the bits after they run out. The cells are large and clunky and you have to wear special gloves to do it. Even worse, it actually takes many times more bits to create a cell than the cell stores, meaning that it's more economic just to get them from the pump!

      What a waste.
  • Most flash memory I've seen (such as the USB keychain drives) has a rated maximum number of writes before the memory starts having problems.

    Am I missing something here? How are they going to overcome this if they plan on using the same type of memory for disk cache?
    • Yes. There are companies offering solid state storage devices commercially as high-speed replacements for hard drives. BitMicro is one of them, and they offer terabyte RAIDs of the stuff.
    • by DDLKermit007 ( 911046 ) on Monday July 31, 2006 @12:25AM (#15814992)
      An age-old method called wear levelling. In practice it usually doesn't work, since most people do it with USB thumb drives and/or flash memory cards, which are all removed on a regular basis, so the reader/writer of the media gets changed a lot or the controller doesn't implement wear levelling. As such the wear levelling never really gets done very well. With a system like this the wear levelling would be exact and the flash memory would end up outlasting the moving parts of the hard disk. Also, as parts of the card went bad, the controller would skip over those sectors in the future, which would lead to it working even longer. If even one part of a regular hard disk goes bad, you're boned completely.
    • How about adding one of those red jewels from Logan's Run? When the jewel starts flashing, you send it off to Carousel and it can Renew!
      Oh wait, they just died in Logan's Run too..
    • Am I missing something here?

      Yes, durability has improved tremendously. Also, they aren't using it for swap. Most of the files that will get cached here are things the OS developer knows (or the system observes) are going to be asked for frequently. Data will also be saved here sometimes to avoid spinning up the disk.

      The sum of these writes is not going to exceed the durability (some millions of writes was the last spec I saw) of modern flash in any reasonable time frame.

      Also, if someone is abusing the tec
    • There are several things. First: flash chips are easy to make in such a way that some cells last much longer. So many flash chips come with, say, 10k max writes, but a specific 1% of the chip is specified to last at least 10 times longer.

      Secondly, things go bad if you write the same spot repeatedly. Don't do that then. This is much easier to implement for a harddisk-write-cache than for an USB stick. In the write-cache write something like: "write-ID 1234, sector 4567, data:..." // write-ID 1235,
    • I am guessing that it will be dedicated to OS system, and boot files that don't change often... not for a swap file.
    • by Eivind ( 15695 ) <eivindorama@gmail.com> on Monday July 31, 2006 @01:11AM (#15815134) Homepage
      You're missing that the typical commercial flash module is built to withstand 1 million writes or more.

      A 1GB flash module being written to *constantly* (24 hours a day, 365 days a year) with a sustained speed of 5MB/s would thus wear out sometime after 6.5 *YEARS* of continuous operation.

      I'm guessing you can see why this problem is purely hypothetical for 99.99% of all laptops out there. You don't write to disc *constantly*, and even if you did, you don't typically use the laptop 24/365, and even if you did, having a laptop drive fail after 6-7 years is normally not a showstopper.

      If, more realistically, the laptop is used 8 hours/day, 250 days/year, and writes to disc 10% of the time when turned on, then the 1 million writes to flash will be reached after approximately 30 years.

      Even these numbers are high -- my laptop is heavily used as a developer workstation, and it certainly does not write to disc 10% of the time it is turned on.
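The parent's worst-case figure holds up as rough arithmetic; the exact number depends on whether "1 GB" means 10^9 or 2^30 bytes:

```python
# Sanity check of the worst case above: a 1 GB module rated for 1 million
# write cycles, written to constantly at 5 MB/s with perfect wear leveling.
cycles = 1_000_000
rate = 5e6                       # 5 MB/s, in bytes per second
year = 365 * 24 * 3600           # seconds per year

for capacity in (1e9, 2**30):    # "GB" as 10^9 vs 2^30 bytes
    years = capacity * cycles / rate / year
    print(round(years, 1))       # ~6.3 and ~6.8 years respectively
```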

      • by YesIAmAScript ( 886271 ) on Monday July 31, 2006 @11:26AM (#15817457)
        NOR flash (like the BIOS chip in your PC) is good for 1M writes or more.

        But NOR flash is low density. An 8MByte NOR flash is large.

        The flash that is being integrated into these drives is NAND flash. NAND flash is the kind of flash you use in your digital camera. NAND flash is high density.

        And it is crap.

        SLC NAND flash is good for 100,000 writes. But SLC is on the way out because it's only half as dense as MLC NAND flash. MLC NAND flash is good for 10,000 writes.

        Are you scared yet?

        That's a statistical measure, so often cells last longer than 10,000 writes before crapping out. And systems that use NAND flash use ECC (error correction codes) and wear levelling to try to hide the flash wearing out. It's complex, but it does work pretty well.

        But a coworker made a flash-burner app to wear out some flash on purpose. It wrote constantly. He was able to wear it out in a couple of days. It didn't wear out the entire flash chip, but that's when the flash started to develop sectors that were unusably bad, even with ECC.
  • Magnetic-RAM. (Score:2, Insightful)

    by Anonymous Coward
    MRAM would have been a better choice.
  • A good idea (Score:2, Interesting)

    This is a good idea (even if it is old). In fact flash memory is so small that you could scrap hard drives altogether if you had enough money.

    Imagine twenty 1 gig flash memory cards in a row ... less space than the equivalent hard drive.
      Imagine twenty 1 gig flash memory cards in a row ... less space than the equivalent hard drive.

      Now imagine 500 1 gig flash memory cards in a row - I bet the 500GB HDs beat them out on form factor quite considerably. Not to mention the other problems with flash as a replacement for harddrives - read/write times and the relatively low write limit are the things that jump to mind.
  • another "benefit" (Score:2, Interesting)

    another benefit of integrating flash memory onto the motherboard is the ability of hackers to hack your motherboard independently of the OS, and for friendly companies like Microsoft to protect you from yourself by placing code they control in places you can't access on your machine.

    no, I don't like this one bit; it's just a huge security hole begging for exploitation by hackers and DRM vendors.
    • What the hell are you talking about? Most modern-day motherboards integrate flash memory in the form of a flashable BIOS. Hackers don't abuse this because it is such a narrow target; every motherboard is different.

      Regardless, this article is about flash in hard disks.
      • Hackers don't abuse this because it is such a narrow target, every motherboard is different.

        exactly, every mobo is different.. this sounds like something which could make its way in as a standard part of windows computers.. much less narrow a target.
    • Re:another "benefit" (Score:5, Interesting)

      by plasmacutter ( 901737 ) on Monday July 31, 2006 @12:15AM (#15814948)
      I hate to reply to my own post but look, it's not offtopic.

      flash memory is persistent. Unless you provide open APIs to allow anyone to develop applications to wipe it, there is no real way to confirm anything that gets stored on it is actually removed.

      Every platform, but especially windows, has a history of security exploits, and now the viruses will have somewhere to hide where they will be much harder to dig out, and anyone wanting to implement DRM could build an OS designed to hide critical components of it by burying it on the flash memory.
    • One of the key points of cache is that if you change something on the disk, that overrules the cache. That's implemented in hardware.

      Disks have cache now, although it's volatile. All this should change is that the cache will be there when the system boots. Sure, getting a virus in there will be just like if you had a virus on your hard drive, but any changes to the disk should be changed in the cache as well by the controller. The system itself probably won't have any way of directly accessing the cache
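The coherence rule described above (changes to the disk overrule the cache, enforced by the controller) can be sketched as a toy model; the names are made up and a real controller would work at the block level in hardware:

```python
# Toy model of a persistent read cache kept coherent by the controller:
# every write updates the disk and, if the block is cached, the cache too,
# so the flash can never hold stale data after a reboot.

class CachedDisk:
    def __init__(self):
        self.disk = {}           # the spinning platters
        self.cache = {}          # persistent (flash) read cache

    def write(self, block, data):
        self.disk[block] = data
        if block in self.cache:  # keep the cache coherent with the disk
            self.cache[block] = data

    def read(self, block):
        if block not in self.cache:
            self.cache[block] = self.disk[block]   # populate on first read
        return self.cache[block]
```

Since all traffic goes through the controller, a virus in the cache is no more hidden than a virus on the platters: rewriting the block on disk rewrites the cached copy as well.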
  • Plug in a USB or Flash drive and mount it as a non removable drive (drivers exist for this purpose...google them) then set your page file and temp files, etc. to the flash drive.
    • I've done that before, and it's not as good of an idea as it sounds. Don't use flash for paging! It's not designed for that. It will perform slower than a hard drive by an order of magnitude or two, and the amount of stress put on the memory will cause it to degrade quickly.
  • by Fallen Kell ( 165468 ) on Monday July 31, 2006 @12:17AM (#15814956)
    The solid state portion of the drives is really only good for data that will not change often. That section suffers from a limited number of re-writes before the data integrity degrades. The hybrid disks work well mainly for the primary system OS disk, and that is really just about it. The kernel and main OS components will rarely change (patches and kernel updates are the only times). This is why boot times improve with these disks: the OS and kernel are contained on the faster solid state memory...

    Again, in an environment where data is constantly being written and deleted, these disks will fail a lot sooner.
  • preference (Score:3, Insightful)

    by spykemail ( 983593 ) on Monday July 31, 2006 @12:22AM (#15814975) Homepage
    I'd prefer something longer lasting (and faster) than flash memory.
  • by pestilence669 ( 823950 ) on Monday July 31, 2006 @12:26AM (#15814998)
    "...The heat and power issues may also make it attractive in server environments..."

    Not necessarily... perhaps during boot time. These potential savings are reserved for end-users who aren't doing anything data intensive. Last time I checked: database, web, email, and file servers are all data intensive... meaning that the drives will have to be spinning.

    Hybrid drives do less in a server environment than a RAM disk. They can help boot faster, which is great for disaster recovery. If heat & power are a huge concern, flash drives, that are here now, solve those problems.
    • In an enterprise environment, if your database server or web server is often reading from the disk, then chances are you have a major performance problem. Disk I/O should be limited in any environment where you want high performance.

      I can see this being very helpful when writing logs though. Instead of keeping the drive constantly spinning, it can just write to the flash memory and only occasionally spin the drive up to dump that to disk.
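The log-buffering idea above is easy to sketch: accumulate writes in the flash region and spin the disk up only when the buffer fills, so one spin-up is amortized over many log lines. This is a hypothetical model, not any real drive's firmware:

```python
class FlashLogBuffer:
    """Hypothetical sketch: buffer log writes in flash, spin the disk
    up only when the buffer fills, so it can stay spun down otherwise."""

    def __init__(self, flush_threshold=4096):
        self.flush_threshold = flush_threshold
        self.buffer = []          # stands in for the flash region
        self.buffered_bytes = 0
        self.disk = []            # stands in for the spinning platters
        self.spinups = 0

    def log(self, line):
        self.buffer.append(line)
        self.buffered_bytes += len(line)
        if self.buffered_bytes >= self.flush_threshold:
            self.flush()

    def flush(self):
        self.spinups += 1         # one spin-up amortized over many writes
        self.disk.extend(self.buffer)
        self.buffer.clear()
        self.buffered_bytes = 0


log = FlashLogBuffer(flush_threshold=100)
for i in range(50):
    log.log(f"request {i}\n")    # ~10 bytes each
assert log.spinups < 10          # 50 writes, only a handful of spin-ups
```

Since flash is nonvolatile, the buffered lines would also survive a power failure, unlike the same trick done with a RAM buffer.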
    • Most e-mail servers fit your description, but there are plenty of server environments with a mix of peak times where the hard drive is going constantly and off-times where it isn't. For example, it would be nice if corporate servers that get hammered only from 9-5 could reduce their power usage during the evenings with less nuisance than the current power management schemes require.
  • They forgot one... (Score:4, Insightful)

    by mattmacf ( 901678 ) <mattmacf@[ ]online.net ['opt' in gap]> on Monday July 31, 2006 @12:27AM (#15815000) Homepage
    Potential benefits: faster read/write performance; fewer crashes; improved battery life; faster boot time; lower heat generation; decreased energy-consumption.

    What about increased reliability? I realize a lot of this might depend on how the flash memory is interfaced, but it would be awesome to have a small built-in flash chip capable of live backups of critical data. With, say, a spare gig of memory on the hard drive, it should be more than feasible to keep copies of certain folders (e.g. My Documents and system folders) on the off chance that your hard drive actually does fail. Being able to boot directly to the flash chip would be great in emergencies, and a copy of DSL/Puppy Linux/*Your favorite recovery tool* would be perfect to store there. Bonus points if you can easily (i.e. without a soldering iron) swap the flash chip to a fresh drive and do a Stage 1 Gentoo reinstall from scratch.

    Come to think of it, the possibilities of RAIDing these things together could be interesting as well. With a RAID 1, all but the most paranoid wouldn't need to include the flash memory in the mirror. Or, should the flash memory get sufficiently large (say, 20-25% of the hard drive size), you could use the flash memory as dedicated parity in a RAID 4 array. Obviously this means squat if you can't interface the flash memory properly, but hey, at least the possibilities are there.
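For anyone unfamiliar with how the RAID 4 idea would work: the dedicated parity device just stores the XOR of the data disks, which is enough to rebuild any single failed disk. A minimal sketch (toy byte-level version, not a real RAID implementation):

```python
def parity(blocks):
    """XOR parity across equal-sized data blocks (RAID 4/5 style)."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)


def reconstruct(surviving_blocks, parity_block):
    """Rebuild the one missing block: XOR of survivors plus parity."""
    return parity(surviving_blocks + [parity_block])


data = [b"AAAA", b"BBBB", b"CCCC"]   # three data disks
p = parity(data)                      # lives on the flash "parity disk"
lost = data.pop(1)                    # disk 1 fails
assert reconstruct(data, p) == lost   # recovered from survivors + parity
```

The catch for flash-as-parity is exactly the write-endurance issue raised elsewhere in this thread: in RAID 4 the parity device is written on *every* array write, so it would be the most heavily worn component, not the least.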
    • If your data is really critical, do an off-site backup. At the least, burn a CD using a high-quality blank with the most important stuff and put it in a bank safe-deposit box. In another town.

      Putting it in a different area of the same disk drive, using the same drive controller, the same motherboard, the same RAM (you get the idea), is asking for trouble.
  • by mbstone ( 457308 ) on Monday July 31, 2006 @12:31AM (#15815014)
    Another benefit of hybrid drives is, you can use the carpool lane even if you're by yourself.
  • by evilviper ( 135110 ) on Monday July 31, 2006 @12:44AM (#15815051) Journal
    The heat and power issues may also make it attractive in server environments.

    No, it won't. Servers have large amounts of system RAM, which is far faster than flash on the hard drive bus could ever be. They also have battery-backed RAID controllers, meaning flash would be a step down, not a step up.

    This is only really useful in notebooks.
    • Servers that don't access much of the disk (say, less than 1GB or whatever the size of the flash cache is) the majority of the time would benefit from this the same as laptops, by letting their disks spin down.

      Also fast restart is especially good for critical servers as a method of reducing both planned and unplanned downtime. I know at lylix.net [lylix.net], we will be getting one of these as soon as Gentoo Linux properly supports it - you don't want an Asterisk box down longer than it has to be.
      • Servers that don't access much of the disk (say, less than 1GB or whatever the size of the flash cache is) the majority of the time would benefit from this
        What you call a "server" I call a "node" and it may as well not have any disk at all and boot from the network (or boot from flash memory if you want). If you are only talking about 1GB the very fast alternative is a RAM disk. I think all of knoppix can fit inside a 1GB RAM disk.
    • It would seem you've never had a RAID battery backup fail or run out before power could be restored. I have, and as a result I wouldn't suggest flash is a step down from that approach. Sideways instead of up, maybe, but not down.
    • Yes, servers have large banks of RAM. However, Linux performs dirty page writeback every five seconds to avoid congestion on disk writes.

      Without onboard flash, the disk must service the request. The 3.5" disk in my office machine must be shut down for at least 30 seconds to save any energy. A sensible timeout (provably 2-competitive) is 60 seconds. Servers have higher-performance disks than my desktop, so their timeout is going to be longer than that. The disk can never shut down and save energy due to dirty-page writeback arriving every few seconds.
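The "2-competitive" timeout mentioned above is the classic break-even (ski-rental) rule: stay spinning until the energy spent idling equals the cost of one spin-up, then shut down. A small simulation of that policy, with made-up energy units for illustration:

```python
def energy(idle_gaps, breakeven, spinup_cost, idle_power=1.0):
    """Energy used by the break-even spin-down policy.

    For each idle gap (seconds) between disk requests: keep spinning
    for up to `breakeven` seconds, then spin down and pay `spinup_cost`
    when the next request arrives.  With breakeven * idle_power ==
    spinup_cost, this never uses more than 2x the optimal energy.
    """
    total = 0.0
    for gap in idle_gaps:
        if gap <= breakeven:
            total += gap * idle_power                   # kept spinning
        else:
            total += breakeven * idle_power + spinup_cost
    return total


gaps = [5, 120, 30, 600]      # seconds of idle between requests
spinup = 60.0                 # spin-up cost, in idle-power-seconds
online = energy(gaps, breakeven=60, spinup_cost=spinup)
optimal = sum(min(g, spinup) for g in gaps)  # clairvoyant lower bound
assert online <= 2 * optimal
```

It also shows the parent's point: if dirty-page writeback guarantees a disk access every 5 seconds, no gap ever exceeds the 60-second break-even point, and the policy (correctly) never spins the disk down at all.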

  • Vista's ReadyDrive will use the hybrid system first for laptops and probably for desktops down the road.


    Run, Vista! Run! Don't let those bullies Linux and OS X catch you!
  • Are standard filesystems like NTFS or ext3/ReiserFS/whatever optimized for use on spinning hard drives? If flash drives start appearing as main system drives, would new or modified filesystems help in any way? Or are modern filesystems abstract enough that they don't deal with all the little fiddly bits? I don't know enough about this area, but it would seem to me that a new hardware device to store files may benefit from a change in the way the OS uses it.
  • by Michael Woodhams ( 112247 ) on Monday July 31, 2006 @01:12AM (#15815135) Journal
    The article discusses this. Intel want to put it on the MB, the drive manufacturers want to put it in the drive. A third option is to attach it separately and externally (e.g. a USB flash drive.) A final option would be to (e.g.) have a compact-flash-card (or similar) socket on the hard-drive, and users provide their own flash.

    To my mind, the logical place to put it is on the drive. This is where the useful caching information is most easily available. (Which sectors are read/written how often? Which reads are often delayed by waiting for the disk to spin up?) This is also where you can make the process most transparent. The drive's firmware can make the system "just work", like a standard HD, but faster - whatever the OS, no drivers needed. (Although you'd possibly like to have drivers to give the OS more control over what is flash-cached.)
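The "which sectors are read how often" heuristic in the comment above can be sketched as a simple frequency counter that pins the hottest sectors into flash. This is a toy illustration of the policy, not actual drive firmware:

```python
from collections import Counter

class HotSectorTracker:
    """Toy sketch of a firmware-level caching policy: count reads per
    sector and pin the most frequently read ones into flash."""

    def __init__(self, flash_slots=2):
        self.flash_slots = flash_slots
        self.reads = Counter()

    def record_read(self, sector):
        self.reads[sector] += 1

    def sectors_to_pin(self):
        # The flash_slots most-read sectors get copied into flash,
        # so future reads of them need no platter access.
        return {s for s, _ in self.reads.most_common(self.flash_slots)}


tracker = HotSectorTracker(flash_slots=2)
for sector in [7, 7, 7, 3, 3, 9]:   # e.g. a boot-time read pattern
    tracker.record_read(sector)
assert tracker.sectors_to_pin() == {7, 3}
```

This is exactly the kind of bookkeeping only the drive can do transparently for every OS; doing it at the filesystem layer (as the reply below argues) trades that transparency for richer knowledge of what the blocks actually are.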

      How about using CF and a CF->IDE adapter for desktops, hanging the adapter off the IDE bus? CF-based PC Card adapters in laptops, maybe? You can do this today and, IIRC, CF cards have built-in wear leveling. Move your /boot onto this for quicker boots, and/or put your swap here and get quicker restores from hibernation.

      LoB
    • If that were realistic then you wouldn't need OS support at all. The drive is the worst location because it lies underneath the filesystem where much of the knowledge of what is being done is lost. Flash storage is additional nonvolatile storage that is made available to the operating system and hiding it in the drive is the worst place. If you wanted to do that you'd be better off using RAM and just enough battery back to ensure the data gets flushed.
  • I like the part best where these hybrids fund the R&D for pure transistor drives without moving parts.
    • Well, I guess we know why the preview button's there... good thing I code better than I post, otherwise I'd be fired!
  • by kickdown ( 824054 ) on Monday July 31, 2006 @01:44AM (#15815208)
    Damn. The RSS feed made me think this might be about hybrid _cars_, not hard drives. I was already dreaming of making clever comments about how cool it is to own a Toyota Prius. Now I make whiny comments about getting it wrong instead. Damn. Mod me down for futility and insignificance.
  • It seems like there is a huge demand for faster booting systems, but so few people use suspend to disk (hibernate for windows, goes by other names too). Shutting down and booting are faster, and it uses *no* power when off. It seems to me that some people are overly fixated on faster boot times, so long as no interesting software tricks such as suspend are used. Why is that? Many people want a faster booting computer, but refuse to do so with anything other than a traditional boot. I understand the lim
  • I've not been able to figure out why flash RAID setups aren't more popular in portable devices. 10 4GB flash disks in a RAID would give you what, 20GB drive space with 100Mb/s bandwidth. Not bad for memory that doesn't have any moving parts and doesn't vanish without power. And a 'drive' like that would be about the size of a typical laptop network adapter card. I'm guessing cost is the main problem?
  • Massive disk cache (Score:5, Interesting)

    by nmg196 ( 184961 ) * on Monday July 31, 2006 @06:49AM (#15816018)
    We seem to be going backwards. About 10 years ago, I had a VESA Local Bus HDD controller which took SIMMs to use as cache. You could shove up to 32MB on it and it would remain powered even when the system was shut down. This meant you could load DOS and even Windows 3.11 entirely from the disk cache after rebooting. As far as I'm aware, there are no SATA controllers which can take DIMMs or similar to use as a large cache. PLEASE correct me if I'm wrong.

    Why doesn't this exist today? I think it was a really good idea. The closest thing I've found is Gigabyte's iRam, but this isn't really the same thing - as it's purely a RAM drive and doesn't persist to hard disk.

    I think that slow booting is one of the biggest annoyances of computers and the primary reason many people never turn off their machines in an office environment (hibernating on XP rarely works reliably in my experience - usually due to drivers not reinitialising the hardware properly rather than any problem with XP itself).

    If people's machines booted to the desktop in under 10 seconds, far more people would turn them off at the end of the day and worldwide power consumption would be significantly reduced.
    • Hibernate has been pretty reliable for me, but you can't use it on garbage, the hardware must be good, with good drivers. Sleep mode has always been reliable, I don't think there is any excuse to not use it.
  • Write Limitation (Score:2, Interesting)

    by xdxfp ( 992259 )
    100,000 writes is only the median of the distribution. Some cells will last longer and some will fail sooner, so a simple counter would be useless. I'm sure it's made of higher-quality flash than your typical thumb drive. Still, 100,000 writes would last about one day on a server, and probably less than four days for your typical PC [if you assume one write per second to the same cell for a busy server, which is not unreasonable]. I'm not sure a flash cache makes a hell of a difference. Why not just use RAM, and have a battery to keep the memory
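The parent's arithmetic checks out for the worst case, but it assumes every write lands on the same cell; wear leveling (which the CF comment elsewhere in this thread mentions) changes the picture dramatically. A quick back-of-the-envelope calculation, with the spread size as an illustrative assumption:

```python
def lifetime_days(endurance_cycles, writes_per_second_per_cell):
    """Days until a cell hits its rated erase/write endurance."""
    return endurance_cycles / (writes_per_second_per_cell * 86_400)


# No wear leveling: one write per second hammering the same cell
# on 100,000-cycle flash dies in roughly a day, as the parent says.
assert round(lifetime_days(100_000, 1.0), 1) == 1.2

# With wear leveling spreading that workload over a 1 GB region of
# 4 KB pages (262,144 pages -- an assumed geometry), each page sees
# only 1/262,144 writes per second, and the math changes completely:
assert lifetime_days(100_000, 1.0 / 262_144) > 300_000  # centuries
```

So the real question isn't the raw cycle count but how well the controller levels wear, which is presumably why the hybrid designs reserve the flash for rarely rewritten data like the OS image in the first place.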

Math is like love -- a simple idea but it can get complicated. -- R. Drabek
