The Benefits of Hybrid Drives
feminazi writes "Flash memory is being integrated with the hard disk by Seagate and Samsung and onto the motherboard by Intel. Potential benefits: faster read/write performance; fewer crashes; improved battery life; faster boot time; lower heat generation; decreased energy consumption. Vista's ReadyDrive will use the hybrid system first for laptops and probably for desktops down the road. The heat and power issues may also make it attractive in server environments."
Finally... (Score:4, Insightful)
Re:Finally... (Score:2, Insightful)
Re:Finally... (Score:2, Interesting)
People have been talking about doing exactly this technique for quite a while. It just never hit the mainstream. I even think that there were a couple of commercial implementations of this, but I'm not sure on that last point. It is definitely talked about in research papers on filesystems that I have read.
Re:Finally... (Score:5, Interesting)
The prices finally fell to where it's economically feasible.
Personally, I like Intel's idea better (embedding the flash memory in the drive controller), because it should work just fine with existing drives. It might also be upgradeable, but I'm not holding my breath.
Re:Finally... (Score:5, Insightful)
Re:Finally... (Score:2)
Re:Finally... (Score:3, Interesting)
No, seriously. Sure, lots of filesystems journal, but how many can journal with separated control? In a normal jou
Re:Finally... (Score:2)
Re:Finally... (Score:2)
Re:Finally... (Score:2)
Re:Finally... (Score:2)
Re:Finally... (Score:2)
Re:Finally... (Score:3, Informative)
Re:Finally... (Score:2)
So when the flash-containing controller chip wears out after n writes, you get to buy a new motherboard. Awesome.
Just think of all the new chipsets Intel will be able to sell that way!
Catastrophic Failure of Flash Memory (Score:4, Interesting)
The hard drive in my Compaq x86 workstation has been humming nicely for more than 5 years. Due to the nature of my work at the institute, the number of writes to the hard drive has easily exceeded 100000 during that time.
Using flash memory as a fast cache for the hard drive will increase the performance of the drive but will decrease the overall life of the drive. Someone will be awfully upset when she makes a final save of her million-dollar PowerPoint presentation for the CEO and discovers that the save is the 100001st write to the hybrid drive.
Hopefully, the engineer who designed this hybrid drive has, at a minimum, integrated an LCD counter and a tiny speaker into the drive. The counter shall display the running total of the number of writes to the flash memory. The tiny speaker shall beep like crazy when the total exceeds 99900.
Re:Catastrophic Failure of Flash Memory (Score:5, Funny)
It was in the original engineering design, but the lawyers said it would be cheaper to just include a warning in the fine print of the warranty.
Re:Catastrophic Failure of Flash Memory (Score:5, Insightful)
But, on the other hand, how often do you write to your Windows folder? There's the monthly update, the occasional reg hack, but all in all, once it's established, that's a pretty static area of your drive. I could see this as an incredible benefit for system files, which, as has been discussed here often before, are the big reason for this.
Loading your PPT file in flash won't help bootup. Loading that fuster-cluck of the system32 folder, though, would.
Someone will be awfully upset when she makes a final save of her million-dollar PowerPoint presentation for the CEO and discovers that the save is the 100001st write to the hybrid drive.
Backups? Alternate locations? If this is what it takes for them to learn the necessity of redundant copies, it's even better.
There should be some level of safeguard built in so that anything user-created (My Documents, Program Files) is stored on the magnetic part of the drive, but they should have this anyway. I mean, nothing like making that last save and then having to call Dell because your drive is spitting out an Error Code 7...
Re:Catastrophic Failure of Flash Memory (Score:5, Interesting)
Depends on what you're doing. For example, if you run IIS, your log files (by default; you can change this) are in %WINDIR%\System32\LogFiles. That's going to have a lot of writes. Any new hardware or software installation may cause writes to %WINDIR%. There's a lot of other stuff that legitimately writes to %WINDIR% like installing a new printer (think roaming -- you may print to a different printer every day), the .NET Global Assembly Cache, Visual Styles and themes, and a whole lot more. Whether these things should be in %WINDIR% or not is a different question. The point is that using flash for %WINDIR% under the assumption that you'll not write there very often is a little naive. Perhaps Vista reorganizes %WINDIR% somewhat so that fewer processes need to write there.
All of this is a moot point anyway, because this use of flash is only as cache. Anything written to the flash drive should eventually be flushed to the hard drive. Similarly, if you've exhausted your write cycles and try to write to the cache, it should seamlessly catch the fault and go directly to the hard drive. In that case it would be nice to give an occasional notice that your flash chip is exhausted and you need to replace it, but you should not risk losing any data. I'm not a big fan of on-board flash simply because it may not be replaceable. Any onboard flash chips should not be surface-mounted, but socketed like RAM, CPU, or the clock battery. That will require some standardization on sockets, but as long as there are only two or three different options and the designers of said options let others build chips using that interface (*cough*Sony*cough*) it shouldn't be a problem.
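To make that fallback concrete, here's a minimal sketch of a write-through cache that degrades gracefully once the flash is exhausted. Everything here is hypothetical (the class, the block-for-block mapping, the 100,000-cycle limit); a real controller would do this in firmware, but the logic is the same.

    MAX_ERASE_CYCLES = 100_000

    class HybridDrive:
        """Toy write-through flash cache sitting in front of a magnetic disk."""
        def __init__(self, flash_blocks):
            self.flash = {}                                   # block number -> cached data
            self.erase_count = dict.fromkeys(range(flash_blocks), 0)
            self.disk = {}                                    # stand-in for the platters

        def write(self, block, data):
            self.disk[block] = data                           # write-through: the disk always gets the data
            if block in self.erase_count and self.erase_count[block] < MAX_ERASE_CYCLES:
                self.flash[block] = data                      # cache while the flash block is still healthy
                self.erase_count[block] += 1
            else:
                self.flash.pop(block, None)                   # flash worn out: silently bypass it

        def read(self, block):
            return self.flash.get(block, self.disk.get(block))

Either way the data lands on the platters, so a worn-out cache costs you speed and a warning light, not your PowerPoint file.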
In the long run, I think computer manufacturers will love this. How likely do you think your parents will be to replace their onboard flash when they run out of write cycles? The average consumer will just buy another PC for a couple hundred dollars rather than buying a new flash chip and installing it (or paying someone to install it).
How soon do you think the conspiracy theories will start up that manufacturers like Dell are intentionally shortening the life of onboard flash through factory "testing"?
Re:Catastrophic Failure of Flash Memory (Score:2)
C:\Windows\TEMP
C:\WINDOWS\Downloaded Installations
C:\WINDOWS\msdownld.tmp
C:\WINDOWS\Offline Web Pages
C:\WINDOWS\ftpcache
the windows update folders....
There are a lot of places where files are written. Plus, I don't remember anything that said the drive was smart enough to selectively handle certain PATHs, which are seen at the operating system level and not at the drive level. Are we trying to get the hard drive to understand NTFS/FAT32/HFS/HPFS/UFS/ZFS/EXT2/EXT3/ReiserFS/JFS?
Re:Catastrophic Failure of Flash Memory (Score:2)
Given, as you mention, the limited number of writes on these, it might also be neat to make using the flash as a supplement to increase speeds something that can be turned on or off from the OS. I could see that being useful in a number of ways, if it was writt
Re:Catastrophic Failure of Flash Memory (Score:5, Funny)
Yes. Everyone knows flash RAM will explode in a gigantic fireball on the 1st attempt to write to it, once it has gone beyond spec.
Re:Catastrophic Failure of Flash Memory (Score:2, Funny)
Re:Catastrophic Failure of Flash Memory (Score:2)
During which you weren't purchasing new drives for your machine.
I can see why hard drive manufacturers might like the idea of a limited-life-span device...
Re:Catastrophic Failure of Flash Memory (Score:2, Insightful)
I doubt many people are going to take this route when existing technologies (RAID comes to mind) work fine for those who absolutely need the extra performance (or rather, have convinced themselves they need it [Hi, owner of that $7,500 gaming rig!]).
Re:Catastrophic Failure of Flash Memory (Score:2)
Very true. I was being a touch cynical hey.
or rather, have convinced themselves they need it
Probably a viable market there come to think about it. People who need (or think they need) that little
Re:Catastrophic Failure of Flash Memory (Score:2, Insightful)
A lot of data is write-rarely, read-often, such as individual track info for a media player, and the media player itself
Re:Catastrophic Failure of Flash Memory (Score:2)
Re:Catastrophic Failure of Flash Memory (Score:5, Interesting)
The thing to note is that that limitation is per flash block, not for the whole thing. So for a 1 GB flash component, given perfect block mapping, you can write around 100 TB of data to it before it wears out. With a 150MB/sec transfer rate, it would take more than a week of continuous writing to write that much. As well, modern flash can withstand a couple million writes, extending the life to several months of continuous writing. Given that this would generally be containing operating system components, which are read often but written to rarely, the lifespan of the memory should be no worry at all.
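Spelling the arithmetic out, using only the figures quoted above (1 GB of flash, 100,000 cycles per block, 150 MB/sec, perfect block mapping):

    capacity_gb = 1
    cycles_per_block = 100_000
    write_speed_mb_per_s = 150

    total_writable_tb = capacity_gb * cycles_per_block / 1000          # 1 GB x 100,000 cycles = ~100 TB
    seconds = total_writable_tb * 1_000_000 / write_speed_mb_per_s     # TB -> MB, then divide by speed
    print(f"{total_writable_tb:.0f} TB, {seconds / 86_400:.1f} days of non-stop writing")
    # prints: 100 TB, 7.7 days of non-stop writing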
Re:Catastrophic Failure of Flash Memory (Score:2)
Re:Catastrophic Failure of Flash Memory (Score:2)
No it doesn't. Some flash controllers have wear-levelling, as do some filesystems specifically designed for flash memory. But it's not done on-chip. The xD and SmartMedia formats have no controller and thus no wear-levelling.
Re:Catastrophic Failure of Flash Memory (Score:4, Informative)
I thought I'd seen specs an order of magnitude larger than that, but even if it is as bad as 100000 writes, the problem may not be as serious as you think. The reason? Flash devices have systems built into their controllers specifically to deal with these problems. The mechanisms may vary, but the ones I know about are wear leveling [wikipedia.org] and excess capacity (beyond the capacity that the device reports to the operating system) that can be pressed into service when a block fails.
Briefly, wear leveling means that if you write to the same logical address over and over, the controller will map that write to different physical addresses each time. That means that you can't wear out the device by rewriting the same file over and over again; instead, you only add a little bit of wear to each physical block on the flash device. The concept is a little bit like rotating the tires on your car except that it's a more dramatic win since write patterns can be much more uneven than wear on tires on a car.
The other mechanism for mitigating the effects of limited flash life is putting excess capacity aside (so that it's not reported to the OS) to be used when a physical block does fail. Since it's a matter of probability just which write will cause a given block to fail (meaning that some will fail after less than 100,000 writes and some will probably last much longer), even with wear leveling it's unlikely that all blocks will fail at once. It should be easy to tell when the pool of spare blocks is nearing exhaustion and give you advance warning that your flash device is wearing out. So in that sense, it is actually safer than hard drives, which tend to fail without warning.
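A toy illustration of the two mechanisms together, with everything invented for the example (the mapping scheme, the retire-on-threshold rule); real controllers are more sophisticated, but the principle is this:

    class WearLevelledFlash:
        """Toy flash translation layer: spread writes, retire dead blocks to spares."""
        def __init__(self, visible_blocks, spare_blocks, cycles=100_000):
            total = visible_blocks + spare_blocks
            self.wear = [0] * total            # erase count per physical block
            self.alive = [True] * total
            self.mapping = {}                  # logical block -> physical block
            self.cycles = cycles

        def _least_worn(self):
            healthy = [i for i, ok in enumerate(self.alive) if ok]
            if not healthy:
                raise IOError("flash exhausted: no healthy blocks left")
            return min(healthy, key=lambda i: self.wear[i])

        def write(self, logical_block):
            phys = self._least_worn()          # wear levelling: rotate writes across physical blocks
            self.wear[phys] += 1
            if self.wear[phys] >= self.cycles: # block hit its rated cycles: retire it
                self.alive[phys] = False       # a block from the hidden spare pool takes over
            self.mapping[logical_block] = phys

        def healthy_blocks(self):
            return sum(self.alive)             # watch this shrink -> "your flash is wearing out" warning

Rewriting one logical address over and over now ages every block a little instead of killing one of them, and the shrinking pool of healthy blocks is exactly the signal a SMART-style warning could report.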
Finally, this whole thing reminds me of the reaction some people (mostly audiophiles) had to compact discs, and digital audio in general, when it first began to replace analog systems. There was some resistance to the technology because it was sure to sound artificial: after all, you were taking the music apart into discrete steps and putting it back together again. Obviously, a system which breaks a waveform down into discrete steps can't ever really reproduce exactly the same waveform as the original. And that's true, but what they missed was the fact that analog systems can't ever reproduce exactly the same waveform as the original either. Both systems have limitations, in this case distortion of the signal, and the true question should be not whether the proposed new system has limitations, but whether the limitations of the new system are worse or better. (The answer to that may depend on the intended use.)
I think the same thing applies to flash devices. Yes, you may have a hard drive that has been humming along for 5 years without a problem, and that's fairly common, but hard drives do fail. When I was a system admin, I saw my fair share of them. (I've seen a few since then too.) The key in the case of flash is probably to get in place a nice warning system that can take advantage of the ability to notice that spare blocks are being depleted and warn the user when failure of the device is nearing. I haven't researched it carefully, but perhaps SMART [wikipedia.org] would be useful for this in some applications, such as where flash is replacing hard disks.
Re:Catastrophic Failure of Flash Memory (Score:2)
Re:Catastrophic Failure of Flash Memory (Score:2)
I imagine this is exactly how they would do it, though perhaps be a bit more advanced about it. It'd be easy enough I hope to have some simple error detection and correction built into the flash. Whenever a particular block is detected to have started to turn bad, it can be recovered (since it's quite fair to assume the vast majority of failures would begin as single-bit ones), then marked as bad from then on. Over time, the amount of flash
Re:Catastrophic Failure of Flash Memory (Score:2)
Re:Catastrophic Failure of Flash Memory (Score:2, Funny)
Hopefully, the engineer who designed this hybrid drive has, at a minimum, integrated an LCD counter and a tiny speaker into the drive. The counter shall display the running total of the number of writes to the flash memory. The tiny speaker shall beep like crazy when the total exceeds 99900.
Just enter "4, 8, 15, 16, 23, 42" and everything will be okay.
Re:Catastrophic Failure of Flash Memory (Score:2)
That aside, should the flash memory fail, there's no rea
It's not a brand new idea (Score:2)
A whole load of new hardware tech never takes off as it's a bit chicken and egg - I'm not buying a PhysX card until I actually find my software will support it and the software's not going to be made until I buy the card etc.
People might bitch and moan about MS, but it looks like they can actually make new stuff happen.
MS decide th
Hybrid Drives! (Score:4, Funny)
Re:Hybrid Drives! (Score:2)
I hear the worst part about them is refilling the bits after they run out. The cells are large and clunky and you have to wear special gloves to do it. Even worse, it actually takes many times more bits to create a cell than the cell stores, meaning that it's more economic just to get them from the pump!
What a waste.
Re:Hybrid Drives! (Score:4, Insightful)
The article said that it will be integrated into Windows server architecture, so that your servers will power down the HDDs to save power. But this idea has flaws:
* First of all, who the hell wants to spin down server HDDs? You can't cache hundreds of gigabytes, and servers that would save any noticeable amount of power from that can't cache all the necessary data to the tiny DRAM or flash.
* Second, there is no real "mega power save" here; Intel makes CPUs that still float near 100W while they are at full speed, whereas a modern HDD goes under 10W in normal conditions while spinning normally.
* Third, if it's mainly used as a booting speedup, how many times do you really want to start your server (yeah ok, on Windows, the update cycle needs you to boot once per month, but still
* Fourth, spinning any physical item down and up again will reduce its lifetime; temperature changes in the oil and materials make it less resistant to damage.
* Fifth, spinning up the HDD requires a lot more power than keeping it spinning.
* Sixth, unless this works transparently (emulating some 'natural' disk operations will certainly make it slower than just disk access), who the hell is going to rework all the RAID software that you have enhanced your boxes with?
* Seventh, add all the things up from here, and although you find the disks inexpensive, the total cost will be expensive and may not save you a dime.
To save power I currently look at AMD Geode and laptop CPUs (from both Intel and AMD). If I stack up my machines with those I will save more power per work unit than any flash trick.
For a desktop or notebook that you boot once per day, this of course seems like a nice idea, way to go.
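For what it's worth, a back-of-the-envelope using only the numbers quoted above (one ~100W CPU, one drive at ~10W spinning; the 1W figure for a parked drive is my own rough guess):

    cpu_w = 100.0
    disk_spinning_w = 10.0
    disk_parked_w = 1.0                      # assumed, not from the article

    active = cpu_w + disk_spinning_w
    parked = cpu_w + disk_parked_w
    print(f"best-case saving from spinning the disk down: {(active - parked) / active:.0%}")
    # prints: best-case saving from spinning the disk down: 8%

A box stuffed with disks changes that ratio, but then it's also the box least able to cache its working set in a bit of flash.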
Maximum Writes for Flash Memory? (Score:2, Interesting)
Am I missing something here? How are they going to overcome this if they plan on using the same type of memory for disk cache?
Re:Maximum Writes for Flash Memory? (Score:2)
Re:Maximum Writes for Flash Memory? (Score:2, Informative)
They are not flash though; the solid-state storage devices use banks of DIMMs with backup batteries and hard drives to save state when a power failure occurs.
Flash isn't a terribly fast medium either, hence all the marketing over 12x, 20x, 50x compact flash cards in the digital camera market.
Re:Maximum Writes for Flash Memory? (Score:2)
http://www.bitmicro.com/products_edisk_35_ide.php [bitmicro.com]
Re:Maximum Writes for Flash Memory? (Score:5, Informative)
Re:Maximum Writes for Flash Memory? (Score:2)
Solution: Do a RAID, so if a drive does go bad, there's the second drive. Just replace it. (I don't know too much about RAID, so correct me if I'm wrong.)
The advantage of flash is what the article says though. Faster O.S. booting. Hybrid drives are a really good idea. Something fast to put the O.S. onto mea
Re:Maximum Writes for Flash Memory? (Score:2)
Re:Maximum Writes for Flash Memory? (Score:2)
Oh wait, they just died in Logan's Run too..
Re:Maximum Writes for Flash Memory? (Score:3, Informative)
Yes, durability has improved tremendously. Also, they aren't using it for swap. Most of the files that will get cached here are things the OS developer knows (or the system observes) are going to be asked for frequently. Data will also be saved here sometimes to avoid spinning up the disk.
The sum of these writes is not going to exceed the durability (some millions of writes was the last spec I saw) of modern flash in any reasonable time frame.
Also, if someone is abusing the tec
Re:Maximum Writes for Flash Memory? (Score:2)
Re:Maximum Writes for Flash Memory? (Score:2)
Secondly, things go bad if you write to the same spot repeatedly. Don't do that, then. This is much easier to implement for a hard-disk write cache than for a USB stick. In the write cache, write something like: "write-ID 1234, sector 4567, data:..."
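A minimal sketch of that record format, treating the flash cache as an append-only journal that wraps around instead of rewriting one fixed spot. The layout (write-ID, sector, 512-byte payload) is invented for illustration, not a real drive format:

    import struct

    RECORD = struct.Struct("<QQ512s")        # write-ID, sector number, 512-byte payload

    class WriteJournal:
        def __init__(self, flash_bytes):
            self.flash = bytearray(flash_bytes)
            self.head = 0                    # next append position
            self.next_id = 0

        def append(self, sector, data):
            rec = RECORD.pack(self.next_id, sector, data)   # short payloads are null-padded to 512 bytes
            if self.head + RECORD.size > len(self.flash):
                self.head = 0                # wrap: the oldest entries have long since been flushed to disk
            self.flash[self.head:self.head + RECORD.size] = rec
            self.head += RECORD.size
            self.next_id += 1

Repeated saves of the same document then get spread across the whole cache instead of hammering one cell, which is exactly the point.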
Re:Maximum Writes for Flash Memory? (Score:2)
Re:Maximum Writes for Flash Memory? (Score:5, Interesting)
A 1GB flash module being written to *constantly* (24 hours a day, 365 days a year) with a sustained speed of 5MB/s would thus wear out sometime after 6.5 *YEARS* of continuous operation.
I'm guessing you can see why this problem is purely hypothetical for 99.99% of all laptops out there. You don't write to disc *constantly*, and even if you did, you don't typically use the laptop 24/365, and even if you did, having a laptop drive fail after 6-7 years is normally not a showstopper.
If, more realistically, the laptop is used 8 hours/day, 250 days/year, and writes to disc 10% of the time when turned on, then the 1 million writes to flash will get reached after approximately 30 years.
Even these numbers are high -- my laptop is heavily used as a developer workstation, and it certainly does not write to disc 10% of the time it is turned on.
you're thinking of the wrong kind of flash... (Score:5, Informative)
But NOR flash is low density. An 8MByte NOR flash is large.
The flash that is being integrated into these drives is NAND flash. NAND flash is the kind of flash you use in your digital camera. NAND flash is high density.
And it is crap.
SLC NAND flash is good for 100,000 writes. But SLC is on the way out because it's only half as dense as MLC NAND flash. MLC NAND flash is good for 10,000 writes.
Are you scared yet?
That's a statistical measure, so often cells last longer than 10,000 writes before crapping out. And systems that use NAND flash use ECC (error correction codes) and wear levelling to try to hide the flash wearing out. It's complex, but it does work pretty well.
But a coworker made a flash burner app to wear out some flash on purpose. It wrote constantly. He was able to wear it out in a couple of days. It didn't wear out the entire flash chip, but that's when the flash started to develop sectors that were unusably bad, even with ECC.
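Something along those lines is easy to approximate, with heavy caveats: the sketch below just rewrites and verifies one file on the device under test, the path is a placeholder, and on a real card the controller's wear levelling plus the OS page cache will blur which physical cells actually get hit (the original burner presumably talked to the raw flash).

    import os, sys

    TARGET = sys.argv[1] if len(sys.argv) > 1 else "/mnt/flash/burn.bin"  # placeholder path
    BLOCK = bytes([0xA5]) * 4096                                          # 4 KiB test pattern

    writes = 0
    while True:
        with open(TARGET, "wb") as f:
            f.write(BLOCK)
            f.flush()
            os.fsync(f.fileno())          # push the write all the way to the device
        with open(TARGET, "rb") as f:     # caveat: this read may be served from the page cache
            if f.read() != BLOCK:
                print(f"read-back failed after {writes} writes")
                break
        writes += 1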
Magnetic-RAM. (Score:2, Insightful)
A good idea (Score:2, Interesting)
Imagine twenty 1 gig flash memory cards in a row
Re:A good idea (Score:2)
Now imagine 500 1 gig flash memory cards in a row - I bet the 500GB HDs beat them out on form factor quite considerably. Not to mention the other problems with flash as a replacement for hard drives - read/write times and the relatively low write limit are the things that jump to mind.
another "benefit" (Score:2, Interesting)
No, I don't like this one bit; it's just a huge security hole begging for exploitation by hackers and DRM vendors.
Re:another "benefit" (Score:2)
Regardless, this article is about flash in hard disks.
Re:another "benefit" (Score:3, Insightful)
Exactly, every mobo is different. This sounds like something which could make its way in as a standard part of Windows computers: a much less narrow target.
Re:another "benefit" (Score:5, Interesting)
Flash memory is persistent. Unless you provide open APIs to allow anyone to develop applications to wipe it, there is no real way to confirm that anything that gets stored on it is actually removed.
Every platform, but especially windows, has a history of security exploits, and now the viruses will have somewhere to hide where they will be much harder to dig out, and anyone wanting to implement DRM could build an OS designed to hide critical components of it by burying it on the flash memory.
Re:another "benefit" (Score:2)
Disks have cache now, although it's volatile. All this should change is that the cache will be there when the system boots. Sure, getting a virus in there will be just like if you had a virus on your hard drive, but any changes to the disk should be changed in the cache as well by the controller. The system itself probably won't have any way of directly accessing the cache
Re:another "benefit" (Score:2)
I really am interested in how exactly the hypothetical I put forward is not possible; it would certainly ease my mind. But from my substantial time weeding out viruses and malware, experience has shown that if you are unable to delete every part of it, it will grow back like a friggin' weed.
Less expensive and probably just as effective (Score:2, Informative)
Re:Less expensive and probably just as effective (Score:2)
Impracticle in large data storage... (Score:3, Insightful)
Again, in an environment where data is constantly being written and deleted, these disks will fail a lot sooner.
How is this redundant when 15 mins BEFORE others?! (Score:2, Insightful)
preference (Score:3, Insightful)
Not applicable to server environments (Score:4, Interesting)
Not necessarily... perhaps during boot time. These potential savings are reserved for end-users who aren't doing anything data intensive. Last time I checked: database, web, email, and file servers are all data intensive... meaning that the drives will have to be spinning.
Hybrid drives do less in a server environment than a RAM disk. They can help boot faster, which is great for disaster recovery. If heat and power are a huge concern, flash drives, which are here now, solve those problems.
Re:Not applicable to server environments (Score:2)
I can see this being very helpful when writing logs though. Instead of keeping the drive constantly spinning, it can just write to the flash memory and only occasionally spin the drive up to dump that to disk.
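Roughly like this, assuming the flash staging area is visible to software at all (every name and threshold below is made up):

    FLUSH_THRESHOLD = 4 * 1024 * 1024        # spin up once ~4 MB of log data has piled up

    class BatchingLogger:
        """Stage log lines in flash, spin the platters up only for whole batches."""
        def __init__(self, spin_up, write_to_disk):
            self.buffer = []                 # imagine this living in the persistent flash
            self.size = 0
            self.spin_up = spin_up
            self.write_to_disk = write_to_disk

        def log(self, line):
            self.buffer.append(line)
            self.size += len(line)
            if self.size >= FLUSH_THRESHOLD:
                self.flush()

        def flush(self):
            if not self.buffer:
                return
            self.spin_up()                   # the one expensive, noisy operation
            self.write_to_disk("".join(self.buffer))
            self.buffer.clear()
            self.size = 0

The drive spins up once per few megabytes of logs instead of once per line, and because the staging area is non-volatile, a power cut between flushes doesn't lose anything.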
Re:Not applicable to server environments (Score:2)
They forgot one... (Score:4, Insightful)
What about increased reliability? I realize a lot of this might depend on how the flash memory is interfaced, but it would be awesome to have a small built-in flash chip capable of live backups of critical data. With, say, a spare gig of memory on the hard drive, it should be more than feasible to keep copies of certain folders (e.g. My Documents and system folders) on the off chance that your hard drive actually does fail. Being able to boot directly to the flash chip would be great in emergencies, and a copy of DSL/Puppy Linux/*Your favorite recovery tool* would be perfect to store there. Bonus points if you can easily (i.e. without a soldering iron) swap the flash chip to a fresh drive and do a Stage 1 Gentoo reinstall from scratch.
Come to think of it, the possibilities of RAIDing these things together could be interesting as well. With a RAID 1, all but the most paranoid wouldn't need to include the flash memory in the mirror. Or, should the flash memory get sufficiently large (say, 20-25% of the hard drive size), you could use the flash memory as dedicated parity in a RAID 4 array. Obviously this means squat if you can't interface the flash memory properly, but hey, at least the possibilities are there.
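For what it's worth, RAID 4's dedicated parity is just XOR, so the flash-as-parity idea is easy to picture (toy sketch, two-byte blocks):

    from functools import reduce

    def parity(blocks):
        # XOR all data blocks together -> the parity block (this is what would live on the flash)
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    def rebuild(surviving_blocks, parity_block):
        # XOR the parity with the survivors to reconstruct the missing block
        return parity(list(surviving_blocks) + [parity_block])

    data = [b"\x01\x02", b"\x0f\x0f", b"\xaa\x55"]
    p = parity(data)
    assert rebuild(data[:2], p) == data[2]   # lose the third block, get it back from parity

The catch is the classic RAID 4 one: every write touches the parity device, which is exactly the wear pattern flash likes least.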
Re:They forgot one... (Score:2)
Putting it in a different area of the same disk drive, using the same drive controller, the same motherboard, the same RAM (you get the idea), is asking for trouble.
Another benefit... (Score:5, Funny)
Not in most servers... (Score:4, Interesting)
No, it won't. Servers have large amounts of system RAM, which is far faster than flash on the hard drive bus could ever be. They also have battery-backed RAID controllers, meaning flash would be a step down, not a step up.
This is only really useful in notebooks.
Re:Not in most servers... (Score:3, Insightful)
Also fast restart is especially good for critical servers as a method of reducing both planned and unplanned downtime. I know at lylix.net [lylix.net], we will be getting one of these as soon as Gentoo Linux properly supports it - you don't want an Asterisk box down longer than it has to be.
Re:Not in most servers... (Score:2)
Re:Not in most servers... (Score:2)
Re:Not in most servers... (Score:2)
Without onboard flash, the disk must service the request. The 3.5" disk in my office machine must be shut down for at least 30 seconds to save energy. A sensible timeout (provably 2-competitive) is 60 seconds. Servers have higher-performance disks than my desktop, so their timeout is going to be longer than that. The disk can never shut down and save energy due to di
Finally! Something Vista will have first! (Score:2)
Run, Vista! Run! Don't let those bullies Linux and OS X catch you!
Are standard file formats fine for use on flash? (Score:2, Interesting)
Re:Are standard file formats fine for use on flash (Score:4, Interesting)
There are research filesystems that are optimized for this kind of a hybrid environment. These were written for MEMS instead of flash, but the basic ideas are nearly the same.
http://www.ssrc.ucsc.edu/proj/mems.html [ucsc.edu]
Disclaimer: I work there. I may be biased.
Where to put the flash? (Score:5, Interesting)
To my mind, the logical place to put it is on the drive. This is where the useful caching information is most easily available. (Which sectors are read/written how often? Which reads are often delayed by waiting for the disk to spin up?) This is also where you can make the process most transparent. The drive's firmware can make the system "just work", like a standard HD, but faster - whatever the OS, no drivers needed. (Although you'd possibly like to have drivers to give the OS more control over what is flash-cached.)
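A simplistic sketch of the kind of bookkeeping the firmware could do, purely hypothetical: count reads per sector and keep the hottest ones pinned in the flash, up to its capacity.

    from collections import Counter

    class FirmwareCachePolicy:
        """Toy policy: pin the most frequently read sectors into the flash."""
        def __init__(self, flash_capacity_sectors):
            self.capacity = flash_capacity_sectors
            self.read_counts = Counter()
            self.pinned = set()

        def on_read(self, sector):
            self.read_counts[sector] += 1
            # a real drive would update this lazily, not on every single read
            self.pinned = {s for s, _ in self.read_counts.most_common(self.capacity)}
            return sector in self.pinned     # True -> served from flash, platters stay parked

Since all of this lives below the block interface, the host just sees a normal, faster disk, which is the transparency argument above.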
Re:Where to put the flash? (Score:3, Interesting)
LoB
Re:Where to put the flash? (Score:2)
Hybrid Momentum (Score:2)
Excellent for servers but what does linux support? (Score:2)
My question is - what kinds of support can/will the Linux kernel have for this? We run Gentoo Linux as our host OS, and I cannot see us migrating to Windows for the foreseeable futu
Re:Excellent for servers but what does linux suppo (Score:2)
Re:SPAM (Score:2)
I'll rephrase it:
'But, does it run Linux?'
The obvious answer is: it doesn't exist yet; how could Linux possibly have drivers for it yet? On the other hand, name some 'basic' hardware that Linux -doesn't- support. Some intrepid soul will eventually write a driver for it; it's just a matter of time. I don't know whether they left Linux out on purpose (knowing someone else would write an open source driver if they wasted their time on a
Damn, can't show off! (Score:3, Funny)
Suspend to disk + flash for better boot times. (Score:2, Interesting)
Re:Suspend to disk + flash for better boot times. (Score:2)
Flash RAID (Score:2)
Massive disk cache (Score:5, Interesting)
Why doesn't this exist today? I think it's a really good idea. The closest thing I've found is Gigabyte's iRam, but this isn't really the same thing, as it's purely a RAM drive and doesn't persist to hard disk.
I think that slow booting is one of the biggest annoyances of computers and the primary reason many people never turn off their machines in an office environment (hibernating on XP rarely works reliably in my experience - usually due to driver issues not reinitialising the hardware properly rather than there being any problem with XP itself).
If people's machines booted to the desktop in under 10 seconds, far more people would turn them off at the end of the day and worldwide power consumption would be significantly reduced.
Re:Massive disk cache (Score:2)
Write Limitation (Score:2, Interesting)
Re:Old? (Score:5, Informative)
Actually, has anyone tried that? I expect you could see a decent increase in performance that way.
Windows' swapfile usage is pretty similar to the way Linux does swap, except that Windows uses a file instead of a partition. By default it's 1.5 times the amount of RAM installed in the system and is made all at once to ensure a contiguous file. On systems with plenty of RAM it's still good to have because it means the OS can commit to having plenty of memory for applications which request a lot, most of which they might never use. Without a page file 10-20% of physical memory is wasted because the OS has committed to having it (think Photoshop, etc).
I don't know how well the pagefile would work on a USB drive since if you're using much swap you're already seeing serious degradation. Besides, flash drives still suck at write speeds, being many times worse than even an old IDE drive. That's the biggest problem with integrating the two technologies I would think--making sure that you don't introduce bottlenecks due to stuff like that.
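Spelling out the default sizing rule from the parent, for a hypothetical 4 GB machine:

    ram_gb = 4                                 # example machine
    pagefile_gb = 1.5 * ram_gb                 # Windows default: 1.5 x installed RAM
    commit_limit_gb = ram_gb + pagefile_gb     # roughly what the OS can promise to applications
    print(f"pagefile: {pagefile_gb:g} GB, commit limit: {commit_limit_gb:g} GB")
    # prints: pagefile: 6 GB, commit limit: 10 GB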
Re:Old? (Score:4, Informative)
Sadly, it isn't always contiguous, since it has an initial size and a maximum size. If you run too many apps or an app goes crazy and consumes all your memory, your pagefile goes through the roof. I was horrified to discover the pagefile.sys on my laptop was split into 3000+ pieces. I had to run PageDefrag over it (a SysInternals tool). After running it a bunch of times, it's still at 800 pieces even now.
I prefer the Linux method, since you can choose a swap file or a swap partition. A partition guarantees no fragmentation (and optimal performance since there is no underlying fs), but you have the flexibility of a swap file if you need it.
Re:Old? (Score:2)
You're right. "Ensure" is too strong a word I suppose. You can minimize fragmentation of the pagefile by setting custom values, using the same number for the initial and maximum size (1.5 x RAM). This will prevent it from growing and fragmenting that way. The other way it can fragment is by creating
USB Flash, Swap, Windows Vista (Score:2, Informative)
It's part of the Vista SuperPrefetch.
http://www.windowsitpro.com/Windows/Article/ArticleID/48085/48085.html [windowsitpro.com]