How Much Virtual Memory is Enough?

whitroth asks: "Ten years ago, Received Wisdom said that virtual memory should be, on average, two to two-and-a-half times real memory. In these days, where 2G of RAM is not unusual and many times that is not uncommon, is this rule now unreasonable? What's the sense of the community as to what is a reasonable size for swap these days?"
This discussion has been archived. No new comments can be posted.

  • lots (Score:5, Funny)

    by emphatic ( 671123 ) on Tuesday August 29, 2006 @08:59PM (#16004033)
    lots
  • by FishWithAHammer ( 957772 ) on Tuesday August 29, 2006 @08:59PM (#16004034)
    Under Windows, it seems it'll swap out whether the RAM is needed or not (there's a registry setting to change this, though). Under Linux, you won't swap much unless you actually need to.

    I run a Core Duo laptop with 1GB of RAM and have never swapped out in Linux, no matter what I was doing.
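
    If you want to check that on your own box, the stock tools will show whether swap has ever been touched (output formats vary slightly by distro):

      $ free -m        # the Swap: row shows total/used/free in MB
      $ swapon -s      # per-device swap usage
      $ vmstat 1 5     # watch the si/so columns for swap-in/swap-out activity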
    • by Propaganda13 ( 312548 ) on Tuesday August 29, 2006 @09:05PM (#16004056)
      Simple. Monitor your own resource usage and figure out what YOU require. Everyone has different hardware, programs, and habits.
    • Re: (Score:3, Insightful)

      by toller8 ( 705418 )
      Different jobs, different needs....

      Two of my (Linux) servers have lots of memory and lots of small processes, so anything that does swap out swaps out quickly. These don't use a lot of swap (512MB?) and don't have gig-sized processes to write into swap... so they don't really need the 2+ GB of allocated swap.

      One other (Linux) server has big processes (1Gig or more) and when they have to swap out, watch the machine fall apart while the process is swapped out - it takes a while to write 1 gig of ram into swap! Since the process is large, swap needs to be large.... Just hope that server needs to have 3 or 4 multi gig processes swapped out....

      • by mathew7 ( 863867 ) on Wednesday August 30, 2006 @02:12AM (#16005325)
        "One other (Linux) server has big processes (1Gig or more) and when they have to swap out, watch the machine fall apart while the process is swapped out - it takes a while to write 1 gig of ram into swap! Since the process is large, swap needs to be large.... Just hope that server needs to have 3 or 4 multi gig processes swapped out...."

        You seem to miss the idea of swap. All modern OSes, combined with the processors (from the 386 up in the x86 range), swap 4KB pages. So if memory is needed, the least recently accessed page (4KB) in RAM is swapped out (and the algorithm continues until no more RAM is required). When one of the swapped-out 4KB pages is needed, it's retrieved from swap into free RAM (if no free RAM is available, it swaps out another page).
        I don't think it swaps out all of your application, and if it does, you should increase your RAM. The thing is that your app can try to access the "just swapped" page, which is a performance killer. Swapping is done in page chunks, not app chunks.
        PS: the term pagefile probably comes from Windows 95 because it contains "pages". All modern processors have an MMU (http://en.wikipedia.org/wiki/Memory_management_unit), which divides memory into pages of 4-64KB.
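
        As a quick sanity check on the page-size claim, you can ask the system directly; getconf is standard on Linux and the BSDs, and 4096 is the typical answer on x86:

          $ getconf PAGESIZE
          4096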
    • Exactly. And on a laptop the HD is slower, so swapping is a complete waste of battery and time. So I have ZERO virtual memory. On Linux it'd be a different story, because it's smart enough not to needlessly swap. I barely go over 500 MB, unless I'm playing Civ4, in which case it might go to a gig. But with 2GB RAM, why does Windows swap so much? Well, for me, not any more.
    • by megaditto ( 982598 ) on Tuesday August 29, 2006 @09:17PM (#16004103)
      To control how much 'it will swap' on Linux:
      # echo [0-100] > /proc/sys/vm/swappiness

      A better question is how much memory you can address. Could your 32-bit Windows system address over 2^36 bytes of memory (64GB), for example? And could you allocate over 2GB to the Windows kernel?
      Could your 64-bit Linux system address over 2^48 bytes of memory?
      • Re: (Score:3, Interesting)

        Could your 64-bit Linux system address over 2^48 bytes of memory?

        Doubt it. I think AMD64 tops out at 41-42 address lines right now.

        • by jrumney ( 197329 )
          Address lines are for addressing physical RAM. Virtual memory is not limited by the availability of address lines. Many older (386, 486) CPUs could address 4GB of virtual memory but only had 26 or 28 address lines, for example, so were limited to 64MB or 256MB of RAM, which was far more than most people could imagine using.
      • by Spackler ( 223562 ) on Wednesday August 30, 2006 @08:26AM (#16006425) Journal
        # echo [0-100] > /proc/sys/vm/swappiness

        I hope this is not your example of how Linux is ready for the mainstream.
        • Re: (Score:3, Informative)

          by walt-sjc ( 145127 )
          I know you're just trolling, but...

          Just because the kernel has this tuning feature does not mean everyone has to muck with it. Having the capability to tune and customize is what makes Linux flexible enough to use on devices from watches to supercomputing clusters and mainframes. If you don't want to make your own Linux Myth PVR, get a Linux-based TiVo that doesn't require any mucking around at all. Linux, the kernel, has been in the mainstream for YEARS.
        • by astralbat ( 828541 ) on Wednesday August 30, 2006 @09:48AM (#16006927)
          This parameter was introduced with 2.6 and it's useful for laptops, where a lower value means the system swaps less. It could be used by a distribution's event scripts to change the value when, for example, the user unplugs their laptop from AC power.

          The idea is that the user's battery life is extended slightly without them realising how.
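
          A minimal sketch of such an event script, assuming the sysfs power-supply interface (the AC adapter path varies between machines and ACPI setups, so treat it as a placeholder):

            #!/bin/sh
            # Lower swappiness on battery to keep the disk idle; restore the default on AC.
            if grep -q 0 /sys/class/power_supply/AC/online 2>/dev/null; then
                echo 10 > /proc/sys/vm/swappiness   # on battery: swap reluctantly
            else
                echo 60 > /proc/sys/vm/swappiness   # on AC: the usual kernel default
            fi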

    • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Tuesday August 29, 2006 @09:29PM (#16004157) Journal
      Swapping out makes sense sometimes, though. For instance, there are tiny chunks of the system -- daemons and such -- that are pretty much never accessed. I'd rather reclaim that, if only to cache something worthwhile.

      Also, remember that suspend2 requires swap, so figure out how big an image you'll need (and how much is cache that can be freed) and get a bit more than that. My own rule of thumb is that swap should be roughly 1x to 1.5x RAM, so that I can be sure I have room for the suspend image. But I have the space, and Windows doesn't use swap for this anyway; it uses hiberfil.sys.
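
      A quick way to check that suspend headroom is a straight read of /proc/meminfo (the field names are standard; the numbers below are illustrative, and you want SwapTotal comfortably above MemTotal if you hibernate):

        $ grep -E 'MemTotal|SwapTotal' /proc/meminfo
        MemTotal:      1035140 kB
        SwapTotal:     1574804 kB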
    • by Junta ( 36770 ) on Tuesday August 29, 2006 @09:35PM (#16004189)
      Linux has futzed with this a lot (and lets the user tweak VM behavior extensively; /proc/sys/vm/swappiness goes a long way...). Both Linux and Windows will swap well ahead of running out of free memory, for good reason. Just wanted to go into detail because I keep seeing people complain that they see swap used in Linux or Windows when they still have free memory, not realizing this generally isn't a bad thing.

      There are generally two strategies:
      -The common-sense one, where you swap when you run out of memory. This makes a lot of practical sense on systems with limited write cycles (flash-based swap, though you really never should do that anyway) and systems that want to spin down drives to conserve battery power. Performance-wise (this may surprise people who haven't spent time thinking about it), this can often be bad. Avoiding swapping is generally only good on systems where resource utilization is carefully managed and you know it won't ever swap (the IO operations of unneeded swapping can interfere with the productive activity of a constantly busy system). This is actually a vast minority of systems in the world (no matter how l33t one may think themselves, they most certainly don't have a usage pattern that would be impacted by the extraneous IO of an occasional write to swap).

      -Pre-emptive swapping. When the IO subsystem is idle and the system can afford to copy memory to the swap area, it does so (depending on criteria). Generally speaking it will select memory not accessed much and write it to disk, but leave the in-memory copy in place if the physical memory is not immediately needed. A fair amount of the swap used in an apparently underutilized system is duplicated in physical memory and swap space. The benefit here is that if the process reads back that memory, it doesn't incur any penalty, despite the data also being in swap (the system may make certain decisions about the best swap candidates and write different data to disk). The benefit of writing this stuff to swap even when not needed is clear when an application comes along that allocs more memory than the system has free in physical space. In the first strategy, this means the malloc blocks while data is written to disk, and the new application starting or needing a lot of data is severely impacted. In the pre-emptive swap case, the system notices the condition, knows which memory it has a backup of in swap that hasn't been used lately, and can free that memory and satisfy the malloc pretty much instantly.

      For those who have 1GB of RAM or so, it becomes less likely that the system will have to flush memory from physical RAM, but there is a balance to be struck between memory directly invoked by applications, what the application memory access pattern is, and what RAM you can use to buffer filesystem access. If your total application memory allocation is 75% of RAM, it still may make sense performance-wise to keep only 50% of your physical memory dedicated to the applications (the other bit relegated to swap) and 50% of the memory to buffer disk I/O.
      • by Junta ( 36770 ) on Tuesday August 29, 2006 @10:08PM (#16004363)
        My strategy generally is to use a file for swap rather than a partition, even in Linux. I figure that if memory has to be swapped in from disk, it's already crappy once you're going to disk, so the extra overhead of a file doesn't matter much, and I have freedom to adjust it up or down depending on my needs. (This is a desktop/laptop circumstance.) I generally start at 512MB or so, increasing maybe if IO is faster on the drive. I view swap like a rumble strip on a road before a stop sign. With no swap, you don't realize a process leaked memory until it's too late; with swap, while it eats through your swap the performance will degrade and you'll see the end coming ahead of time, and may be able to head it off with a kill. It may well be that your 4GB of RAM is technically capable of handling the same load your 1GB RAM + 1GB swap handled in the past, but having some noticeable impact when things start going wrong is nice. I realize theoretically there are better approaches, but nothing gets in your face like poor performance and tons of disk accesses.

        On a production server or a problematic system where I want support and the OS likes to dump a core to swap, I'll ensure a generous swap partition is available (generally observed active swap x 1.5 + physical memory size). In this case a file-backed swap may depend on layers of the kernel that are in an invalid state, and a swap partition is more likely to be reliably writable. The only system I would even theoretically hibernate on is my laptop, and I only ever suspend to RAM or shut down completely, so I don't consider my laptop as needing a swap partition of any significant size.
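
        For reference, the usual root sequence for setting up (or later regrowing) a swap file like that, using the 512MB figure from above and a hypothetical /swapfile path:

          dd if=/dev/zero of=/swapfile bs=1M count=512   # allocate 512MB of zeroed space
          chmod 600 /swapfile                            # swap contents shouldn't be world-readable
          mkswap /swapfile                               # write the swap signature
          swapon /swapfile                               # start using it
          swapoff /swapfile                              # ...and this releases it before resizing or removal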
        • by mcrbids ( 148650 ) on Wednesday August 30, 2006 @01:12AM (#16005133) Journal
          My strategy generally is to use a file for swap rather than a partition, even in linux.

          What I find curious is that you have a strategy. On what relevant experience do you base this strategy? 1 GB of disk space costs less than $0.50. [pricewatch.com] Set up 3 GB of VM if it makes you feel good. The latte you drink while you set it up costs more than the extra disk space!

          So go for it!!! Who cares what you do? Heck, give yourself 10x the RAM and see if it actually makes any difference!!! (it won't)

          This is sort of like asking: "Which goes faster: the yellow Pacer or the red Pacer?"!
          • Re: (Score:3, Interesting)

            So go for it!!! Who cares what you do? Heck, give yourself 10x the RAM and see if it actually makes any difference!!! (it won't)

            In general, I approve of your philosophy. But remember there are addressing overheads to map all that disk space into memory, and all that page management can give you a bit of a performance hit too. It isn't the cost of the disk, it's the cost of managing it that means you have to put a little bit of thought into it. I know the stuff is cheap, but you still have to compute with it.

      • by irritating environme ( 529534 ) on Tuesday August 29, 2006 @10:56PM (#16004592)
        Sure when you had 128MB of ram, and you had a 256MB swap.

        But dude, my next box will have two GIGABYTES of RAM!

        Every one of your usage options assumes you'll run out of physical ram. Maybe if the OS is wasting it on pointless disk caching, but don't you think the programs in memory should have priority over blind disk caching?

        Lest a foolish reader believe your two options (swap immediately, or swap as lazily/late as you can) are the only two possibilities, how about swapping when, say, only 20% of physical RAM is left? That way my Firefox and Eclipse don't swap to disk and take twenty seconds to swap in when I have 500MB of GODDAMN FREE RAM!
        • by r00t ( 33219 ) on Tuesday August 29, 2006 @11:09PM (#16004655) Journal
          You have a GUI to run: 600 MB for firefox, 1800 MB for OpenOffice.org, 100 MB for X, 100 MB for desktop odds and ends, 300 MB for Evolution or Thunderbird, and 10 MB for old-style stuff running in the background.

          Total: 2910 MB

          Yep, you need a gigabyte of swap. OpenOffice.org was made 64-bit clean for a reason. If you plan ahead, not wanting to reallocate disk space in the next few years, you'll allow for this:

          2 GB for firefox, 5 GB for OpenOffice.org, 1/2 GB for X, 1/2 GB for desktop odds and ends, 1 GB for Evolution or Thunderbird, and 10 MB for old-style stuff running in the background

          That's 9.01 GB. You're exactly 7.01 GB short, so you'll be needing that swap space before you know it.
    • Well... (Score:3, Funny)

      by eliot1785 ( 987810 )
      ...I see your swap is as big as mine...
  • Depends (Score:5, Interesting)

    by beavis88 ( 25983 ) on Tuesday August 29, 2006 @09:01PM (#16004042)
    My rule of thumb these days is 1.5x RAM, unless you're at 2GB, in which case I go with 2GB swap as well. This is for *gasp* Windows, though.
    • It really does depend.

      It depends on what you're doing with the computer, and what hardware resources are available. Out of memory is bad. Very bad. On systems which have oodles of RAM, I tend to give low or no swap; on systems tight on RAM I may give 10x or more the amount of RAM.

      Here, "oodles of RAM" and "tight on RAM" are very dependant on what the system's being used for. For a home NAT gateway 64MB may be oodles; for an image processing station, 1GB may be tight (especially when dealing with medium
  • by bob whoops ( 808543 ) <bobwhoopsNO@SPAMgmail.com> on Tuesday August 29, 2006 @09:02PM (#16004044) Homepage

    Back when I had 512MB of memory, I had a 512MB swap partition, but I noticed that I never came close to using all of it.

    When I got my new machine with 1G, I never bothered to make one at all, and I've never had a problem with it. If I do ever find myself in a situation where I need some swap space, I could always just create a swap file. It's a lot more convenient because it wouldn't have to be a fixed size, doesn't take up space when I don't need it, and I have one less partition.

    Especially if you have 2G or more, I don't see a real reason to use swap

    • LVM (Score:3, Informative)

      by XanC ( 644172 )
      If you use LVM (which you should, it's great!), you can expand and contract your swap partition as needed.
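
      The usual dance for growing it, assuming a volume group named vg0 with free extents (all names here are placeholders):

        swapoff /dev/vg0/swap              # stop using the volume
        lvextend -L +1G /dev/vg0/swap      # grow it by 1GB (lvreduce shrinks it)
        mkswap /dev/vg0/swap               # rebuild the swap signature at the new size
        swapon /dev/vg0/swap               # put it back into service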
    • by MyDixieWrecked ( 548719 ) on Tuesday August 29, 2006 @09:18PM (#16004111) Homepage Journal
      not creating a swap partition at all is a bad idea, imo...

      you never know when some runaway process is going to eat all yer RAM and need to use swap... no matter how much RAM you've got.

      I typically just make a 1 or 2 GB swap partition since I've got more than enough space to spare. I mean, back in the days when 128MB of RAM was considered a lot, and a 5GB drive was considered huge, no one would consider using 20% of their storage space for swap. Now, it's not unusual to have 300GB of storage, so what's 1% of that being used for swap?

      I've also got a serious collection of 2-6GB harddrives kicking around, now, so I've been using them for swap. It's really pointless to have a 4GB partition for data, so I just use the entire 6GB drive for swap on some machines.

      my primary server right now has a 4GB swap partition and 1.25GB of RAM... a piece of bad AJAX code that ran overnight wound up using all the RAM and had some seriously detrimental effects on the performance of the server. It took 25 minutes to ssh in in the morning, and when I finally got in, I found that the load averages were over 100 (I've NEVER seen that before).

      my point is that even if you have a LOT of RAM, it's still handy to have some spillover available.
      • by edmudama ( 155475 ) on Tuesday August 29, 2006 @09:32PM (#16004169)
        If you've got a 300GB primary drive, it's foolish to use a 5GB drive for your swap. While you gain the benefit of having that drive separate from the primary (and potentially not contending for the bus), those drives are so far apart technology wise that you'd probably be better off with a swap partition on your most modern disk.

        That 2/5/6GB drive may have a 20MB/s sequential rate at OD and half that at ID. Modern drives more than double that sequential performance (or triple), which is what's critical when swapping in/out a large job. Many drives in that generation don't support UDMA either, and talk with PIO, meaning you get no data checksum on your transfers.

        You can span generations when you're using a cost-reduced modern drive (fewer heads, same formats), but the drive that was stretching to make 5GB across 6/8 heads will be a real POS compared to modern drives performance-wise.

        Thrashing is bad, but thrashing to a slow disk I'd think would be worse. It is even compounded since that 5GB drive is probably PATA, meaning you're going to have your swap drive and primary drive sharing a cable, which will basically nuke most of the savings of 2 disks since they'll be reselecting master/slave at almost every command.

        • by EvanED ( 569694 ) <evaned@NOspAM.gmail.com> on Tuesday August 29, 2006 @09:55PM (#16004289)
          Well, again, that depends; if your usage patterns don't cause enough memory use to justify swapping, and you're just creating a swap partition for the emergency where some program decides to break, then it hardly matters if your swap drives are slow, because they are never accessed.
      • by gfxguy ( 98788 ) on Tuesday August 29, 2006 @09:37PM (#16004202)
        you never know when some runaway process is going to eat all yer RAM and need to use swap... no matter how much RAM you've got.

        Frankly, while I do use swap, in this case I'd rather have the process crash sooner rather than later.
      • Re: (Score:3, Insightful)

        by hackstraw ( 262471 ) *
        you never know when some runaway process is going to eat all yer RAM and need to use swap... no matter how much RAM you've got.

        Personally, I prefer a runaway process to run out of resources and stop vs take over my whole system. It takes a long time to page out 1+ Gigs of RAM. It takes a long time to unpage all of that at shutdown or even when an app is closed.

        Swap completely depends on the computer's real RAM available and the purpose of the computer and the OS on said computer.

        To adequately answer the question, "How much Virtual Memory is Enough?" The correct answer is "It depends".
        • Re: (Score:3, Insightful)

          Swap completely depends on the computer's real RAM available and the purpose of the computer and the OS on said computer.

          To adequately answer the question, "How much Virtual Memory is Enough?" The correct answer is "It depends".


          exactly... and some OSes (read: OS X) cache less-frequently used data (cached window contents, other images, etc.) to the drive to free up real RAM; it doesn't matter how much RAM is installed on the machine, it'll still use the swap. Even my machine at work with 8GB of RAM frequen
      • by fuzz6y ( 240555 ) on Tuesday August 29, 2006 @11:20PM (#16004696)
        you never know when some runaway process is going to eat all yer RAM and need to use swap... no matter how much RAM you've got.
        You never know when some runaway process is going to eat all yer RAM and swap combined, no matter how much swapspace you've got.
        a piece of bad AJAX code that ran overnight wound up using all the RAM and had some seriously detrimental effects on the performance of the server
        too bad you had all that swapspace for it to run rabid across. if you'd had no swap at all, 1 of 2 things would have happened:
        1. the kernel kills the process because of a low memory condition
        2. an attempt to allocate memory fails. The application then handles this somehow. Since we've established that it's a lousy application, I'd guess it handles it by crashing.
        Either way, the Dude^W server abides.
        Naturally if you actually had that much physical RAM, the process would have still gone nuts, but your server wouldn't have had to thrash its disk for every process except the prodigal son, so the performance hit probably wouldn't have been noticeable.
      • Re: (Score:3, Interesting)

        "you never know when some runaway process is going to eat all yer RAM and need to use swap... no matter how much RAM you've got."

        If you truly have a runaway process, it will use up all of your swap, no matter how much swap you've got. In most cases, it would be better for it to die sooner rather than later.

        I am a very heavy user and run many applications simultaneously. I have been running XP with 1 or 2GB of RAM and no swap file for over a year now. Despite having dozens of tabs open in two different

      • Re: (Score:3, Interesting)

        by MikShapi ( 681808 )
        But how does swap help?

        If you have 2GB of RAM and a process starts leaking violently, providing it with 1.5 gigs of (physical) RAM to work with before it or the box dies, or 3.5 gigs of RAM (2 of which are swap), is meaningless. If it's chugging that much memory, it's probably leaking without restraint anyway.

        This really depends on how likely you see a scenario where you'll be (legitimately) using more than your physical 2GB. For my office desktop box, that's a "never ever ever, not by a long shot", so I plain
      • Re: (Score:3, Insightful)

        by Jeremi ( 14640 )
        you never know when some runaway process is going to eat all yer RAM and need to use swap... no matter how much RAM you've got.

        The thing is, in that situation, swap just makes things worse. Now instead of having a computer with all its RAM used up, you have a computer with all its RAM and all its swap space being used up, and it's slow as molasses due to constantly waiting for the hard disk I/O.

        At least without swap, the runaway process will be killed in a few seconds and then you can continue working.

  • Enough... (Score:3, Funny)

    by talkingpaperclip ( 952112 ) on Tuesday August 29, 2006 @09:02PM (#16004046) Homepage
    640k should be enough for anybody.
  • I just make mine equal to my ram these days.
  • I use this (Score:5, Insightful)

    by Anonymous Coward on Tuesday August 29, 2006 @09:07PM (#16004062)
    2X physical memory for under 2G RAM
    2G swap for up to 8G RAM
    +1G swap for every 4G RAM beyond that
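
    That rule is easy to mechanize; a small sketch (swap_gb is a hypothetical helper, RAM size in whole GB, integer arithmetic):

      # swap_gb RAM_GB -> suggested swap size in GB, per the rule above
      swap_gb() {
          ram=$1
          if [ "$ram" -lt 2 ]; then
              echo $((ram * 2))              # under 2G: 2x physical memory
          elif [ "$ram" -le 8 ]; then
              echo 2                         # up to 8G: a flat 2G of swap
          else
              echo $((2 + (ram - 8) / 4))    # +1G for every 4G beyond 8G
          fi
      }
      swap_gb 16   # prints 4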
  • 1GB ram using XP (Score:3, Informative)

    by Karloskar ( 980435 ) on Tuesday August 29, 2006 @09:09PM (#16004066)
    I disable virtual memory on computers with more than 1GB of ram unless the user is going to be manipulating large images. Never had a problem yet.
    • by uler ( 583670 ) <postNO@SPAMkaylix.net> on Tuesday August 29, 2006 @09:20PM (#16004121)
      One of the real advantages of using swap isn't to avoid memory exhaustion at all; by moving infrequently accessed pages from memory you make more room for the disk cache, thereby possibly improving overall system performance by reducing hard drive reads.
      • by pe1chl ( 90186 ) on Wednesday August 30, 2006 @05:17AM (#16005887)
        Unfortunately the Linux system has a hard time determining what are "infrequently accessed pages" and what are useful pages to keep in the disk cache.

        This is most obvious when you are copying large amounts of data, e.g. during a backup.
        Say you have a 250GB disk and you copy it to another one. The system will continuously try to keep the files you have read in the disk cache (because you may read them again) and try to keep room for many dirty pages that still have to be written to the destination disk (because you may change them again before the final write).
        All of this "(because)" is never going to happen as everything is read once and written once and then no longer needed.
        But still, it will swap out running processes to make room for the above.

        The net effect you see is that the source and swap disks are very busy, the destination disk sits idle for long stretches until the kernel feels like flushing out some dirty buffers, and the other programs slow down to a crawl fighting over the swap space.

        It can be tuned with the "swappiness" variable but it remains a tough thing to control. It looks like Windows does a better job in this (not so hypothetical) case.

        There should be some "file copy mode" (used during backups and other large tree copies) where it:
        - discards all disk USERDATA caches immediately after use (directory and other filesystem allocation data may be kept)
        - immediately writes out any written USERDATA to the destination disk, not having it populate the dirty pages until bdflush comes around to write them
        - keeps re-using the same small set of buffers to pump the data from source to destination, without stealing memory from others

        Issue is of course: how could this mode be enabled? It could be a special system call, but who would call it, and where?
        Personally, I would already be happy with a program like "nice" or "ionice" that would run a command line in a special mode (e.g. with a very small buffer quota) to force such behaviour. But the world at large would of course be better served if this happened automatically when lots of data are copied sequentially.
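
        Until something like that exists, one partial workaround is GNU dd's direct I/O flags, which bypass the page cache on both sides for exactly this kind of one-pass copy (paths are placeholders; O_DIRECT wants an aligned block size, and a big one keeps throughput up):

          dd if=/source/bigfile of=/dest/bigfile bs=4M iflag=direct oflag=direct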

        • Re: (Score:3, Funny)

          by 0123456 ( 636235 )
          "It looks like Windows does a better job in this (not so hypothetical) case."

          LOL. Windows will swap out my web browser when I'm copying a 2GB file from one drive to another.

          The whole idea of kicking out real applications to increase disk cache size is absolutely retarded. Unless the cache is below some absolute minimum size, it should never, ever swap out an application just to try to cache data that I'm probably never going to use again. The operating system has no damn clue about how important a file may
  • by TLouden ( 677335 ) on Tuesday August 29, 2006 @09:11PM (#16004079)
    If you really want to know, I use 1-2 GB swap with 1GB ram and the same for 512MB ram.

    However, you might just do what I do and try out different values to figure out what works. If you're talking about a linux system a real-time memory/swap usage graph can be added to most window managers so that you can see what's happening. You could also try to estimate usages based on what the machine is expected to do.
  • The first time I used a friend's 128k RAM Macintosh, I noticed how busy the HD seemed to be.
    After some poking around in the system, we found that we were in the topsy-turvy situation of having the OS running in RAM and all the applications running in the swap file on the HD!

    As soon as he got rid of the silly voices and other frippery (cool, though!), it went back to behaving in a more sensible manner.

    I think RAM prices have fallen faster than HD speeds have risen, so it has more impact than it used to

    • Re: (Score:2, Informative)

      by Anonymous Coward
      You're full of it.

      1. The 128 KB Mac did not have a HD (though there were some companies that made disks that plugged into the floppy port).

      But more importantly:

      2. There was no "swap" (virtual memory) for the Mac OS until System 7, which wouldn't run on anything less than a Mac Plus.
  • by Millenniumman ( 924859 ) on Tuesday August 29, 2006 @09:19PM (#16004116)
    I use 4x750 GB hard drives (RAID), purely for virtual memory. It increases the speed on the RAM preprocessing directive, but demodulates the core processing utility monitor. I find it to be a good setup, especially for running Naibed Linux.
  • by StikyPad ( 445176 ) on Tuesday August 29, 2006 @09:19PM (#16004118) Homepage
    According to MS, it's 1.5 times the total RAM [microsoft.com]. I assume you're asking because you're trying to avoid a fragmented page file. While the benefits of an unfragmented page file are dubious at best (since it will be randomly accessing different parts of the page file), it's better to err on the side of caution: If you have 2GB of memory, you likely have an equally compensating-for-something hard drive, so you probably won't miss 3GB of space, or even 4. It's better to waste a little space than have Windows run out of Virtual Memory. Otherwise, just let it do its dynamic page file adjustment thing.

    If you're asking about creating a swap partition for Linux then 1.5X is also recommended. Just be generous, unless -- for some reason -- you've got 2GB of RAM and a 50 meg hard drive. Too much is always better than not enough.
    • Before talking about swapping, paging, and virtual memory, please learn and understand this equation: Virtual Memory = Physical Memory + Swap (or Page) File. I let the OS (Windows) manage my page file. The current generation of Windows OSes (2k, XP, & 03) manage the swap file much more efficiently than Windows 9x did. All this mumbo-jumbo about tweaking the swapfile came about because these old versions of Windows needed to be tweaked; they had memory problems, and tweaking the swapfile would improve performance
  • I still use a small multiplier, typically 2-3x physical RAM, for swap partition sizes on Solaris, Linux, xBSD, etc.

    Systems typically are paging less now that we have multiple gigs of RAM per server, but if something goes wrong, disk is so cheap that having the overhead installed and ready to use is fine. Having a live, active safety margin is just good system-planner sense.

    If you skimp on OS hard disks so much that 2-3x physical RAM is an excessive chunk out of the hard disks, then you're doing somethin
  • auto (Score:3, Insightful)

    by Joe The Dragon ( 967727 ) on Tuesday August 29, 2006 @09:25PM (#16004137)
    just let windows set it for you.
  • No swap at all (Score:4, Interesting)

    by DrZaius ( 6588 ) <gary.richardson+slashdot@gmail.com> on Tuesday August 29, 2006 @09:38PM (#16004207) Homepage
    I think it was one of the Live Journal guys at OScon that said, "If your server starts to swap, you've lost the battle".

    With all of our 64-bit, 4GB-of-RAM-minimum hosts floating around, there is no longer a point to having swap -- if your server really is swapping, it's under a huge load and the IO is making the problem worse. Let the OS kill a few processes to get it back under control.
    • Re:No swap at all (Score:5, Interesting)

      by georgewilliamherbert ( 211790 ) on Tuesday August 29, 2006 @09:46PM (#16004245)
      If the server starts to swap, you've lost the battle. But randomly killing things or locking up is losing the war.

      It's fine to set off alerts and alarms if you're paging; in fact you should, if your servers start paging. Randomly killing things instead? Insanity.

      You can never build reliable services for users/customers unless you can handle random or accidental error conditions gracefully. Swap space is a cheap and easy key way to do that.

    • Rule of thumb... (Score:5, Insightful)

      by tachyonflow ( 539926 ) * on Tuesday August 29, 2006 @11:19PM (#16004693) Homepage

      But... but... the rule of thumb says to have twice as much swap as RAM!

      It's a pet peeve of mine that so many system administrators appeal to "rules of thumb" about decisions such as this, instead of actually thinking it through. Sys admins pass around these nuggets of wisdom with unquestioning reverence, like they were handed down from some bearded UNIX guru sitting on a mountaintop. These rules either 1) happen to reflect reality, 2) do not reflect reality, or 3) reflected reality 20 years ago but nobody got around to issuing some sort of "revocation rule of thumb". :)

      My experience is that very little swap is needed these days, and the rule of thumb falls into category #3. Long gone are the days that the OS demanded swap space for all process memory [san-francisco.ca.us].

      If I have a machine with 1GB of RAM, I'll usually give it 512MB of swap or so. As discussed elsewhere in this thread, a little bit of swap is good for pre-emptive swapping and for emergencies (to avoid the dreaded Linux "oom killer".) Also, if you're going to use hibernate, you'll want at least as much swap as real memory.

  • by Fry-kun ( 619632 ) on Tuesday August 29, 2006 @09:59PM (#16004311)
    The OP poses the wrong question. Virtual memory is built into the OS and cannot be turned off. What the OP means is the paging or swap file (i.e. simulating memory using HD space). The rest of this reply will ignore this difference.

    Very simply, if you use Windows and use it heavily (run some intensive tasks or need performance), turning off the page file will give you a nice performance boost... or rather, will not take away from performance.
    I have 1GiB of physical memory on my laptop, and reaching the limit in Windows with my paging file off posed a challenge (in other words, it worked perfectly well without it).
    This is because Windows attempts to use the paging file whenever it can (proactive), unlike Linux, which uses it only when there's no other way (reactive). Depending on the applications you're running, one approach will be better than the other, though from what I've seen, I don't like what Windows does...
    Caveat lector: this might be because I wasn't seeing the slowdowns that might've been caused by the reactive approach. I've yet to formulate a firm opinion - but so far it looks very reasonable.

    If using Linux, keep the swap partition and forget about it.
    In Windows, the best way to figure out if you need your page file is to load up as many apps as you normally load, maybe a few more - and check the memory usage (don't trust "VM usage" in the Windows Task Manager; it doesn't show you what you think it shows you!). If the usage is lower than your physical RAM by a [few] hundred MiB, turn off the page file and don't look back. If it's closer, set the page file to a small size, usually no more than 512MiB. If you set the file, make its size static, so that Windows doesn't try to adjust it all the time (it's too stupid to understand that you want to keep it as small as possible)

    Interesting to note that the paging file is not used for hibernation, even though you'd think it were almost tailor-made for that purpose. I've heard that early betas of Windows 2000 woke up from hibernation in a few seconds - I bet they were using the paging file for hibernation then... but I digress

    HTH
  • 4GB RAM, 4GB swap (Score:4, Insightful)

    by Agelmar ( 205181 ) * on Tuesday August 29, 2006 @10:34PM (#16004486)
    I have 4GB of physical ram (ddr2-6400) and 4gb of swap. There are actually a few reasons for this, YMMV (obviously I think the answer to this question depends on what you do).

    I have a lot of things running which, usually, are doing nothing. For instance, apache2, mysql, postfix, and courier-imapd-ssl are always running, but they're rarely actually *doing* anything. (If I get a hit or an email, it's relatively rare, as I have very little hosted off of my home box - nevertheless, I do want these running.) So I'm happy to let these get swapped out. When I start up Matlab and start dealing with huge datasets, I know it's going to swap most of these out. That's good. It will also swap out some of my Matlab data that's loaded but not currently being used (and yes, it's quite possible to have >4GB in your workspace). For me, I have the swap because I need it. Figure out what you need, and you will have the answer to your question.
  • Mac OS X swap (Score:5, Informative)

    by atomm1024 ( 570507 ) on Tuesday August 29, 2006 @10:58PM (#16004609)
    On Mac OS X, swap is stored (by default) in files in the /var/vm directory on the boot hard drive, instead of on a separate partition. So there's no limit to how much is used, nor a predefined minimum amount of space reserved; the swap space expands and contracts as needed. That seems reasonable.
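
    You can watch that behavior directly; OS X names the files swapfile0, swapfile1, and so on, adding and removing them as demand changes (the listing below is only an example):

      $ ls -lh /var/vm
      -rw-------  1 root  wheel    64M Aug 29 21:00 swapfile0
      -rw-------  1 root  wheel    64M Aug 29 22:13 swapfile1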
  • BSDs like more (Score:5, Insightful)

    by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Tuesday August 29, 2006 @11:00PM (#16004619) Homepage Journal
    According to FreeBSD's tuning(7) [freebsd.org] man page:
    The kernel's VM paging algorithms are tuned to perform best when there is at least 2x swap versus main memory. Configuring too little swap can lead to inefficiencies in the VM page scanning code as well as create issues later on if you add more memory to your machine.

    Disk is always far cheaper and more plentiful than memory. If you have four gigs of memory, what's wrong with carving eight gigs of swap out of your terabyte RAID? If you have that much memory in the first place, then you're probably running large apps. Do yourself and them a favor and give them a little breathing room.

  • by hpa ( 7948 ) on Tuesday August 29, 2006 @11:33PM (#16004752) Homepage
    One thing to consider is whether or not you're using tmpfs for /tmp. For performance, I recommend using tmpfs for /tmp, and basically treating the swap partition as your /tmp partition. It may seem counterintuitive: "why would it be faster than a filesystem when it's backed to disk anyway, and my filesystem caches files just fine if need be?" The answer is that tmpfs never needs to worry about consistency. On the kernel.org machines, we have seen /tmp-intensive tasks run 10-100 times faster with tmpfs than with a plain filesystem. The downside, of course, is that on a reboot, tmpfs is gone.
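
    For anyone trying this, the /etc/fstab entry is a one-liner; the size cap is optional and the 2g here is only an example (tmpfs pages spill into swap under memory pressure, which is why the swap partition effectively becomes your /tmp):

      tmpfs   /tmp   tmpfs   defaults,size=2g   0 0

    Or, to try it without editing anything: mount -t tmpfs -o size=2g tmpfs /tmp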

  • Read (please!) (Score:5, Informative)

    by Anonymous Coward on Tuesday August 29, 2006 @11:43PM (#16004789)
    Man, it's utterly depressing to see the same useless "rules-of-thumb" still in effect when the original question is asking if the rule-of-thumb is a good idea.

    1) Page space is not swap space. There's a small distinction that's generally lost (and generally ignored). Page space is used to move individual memory pages to and from disk. Swap space, technically, is for moving entire processes out to disk. The difference is mainly based on when your OS was created (i.e., its technological underpinnings) and there's no need to get into it now... but the difference is meaningful.

    2) Page space is not *free*. There's a misconception that if you have 500G of disk space then "how does it hurt" to put 8G of swap on 4G RAM. Depending on your OS, the size of the page table can grow remarkably depending on how much memory (RAM + VM) is allocated. This means that adding 2G of page space may not cost anything, but adding 2.5G may suddenly take up another chunk of real, non-pageable memory because the page table cannot itself be paged. This means that if your app is thrashing, then adding page space may make it worse.

    3) Even with lots of RAM, it's still often a good idea (depending on your usage) to have some page space. Modern OSes will still page out unused pages to use RAM for better stuff. I.e., if you have a huge file open in a graphics application, but are not actively using that application for a length of time (an hour, say) then the OS will page it to disk. This makes better use of your physical RAM. On some OSes the OS will use page space even if free RAM is available. It can then toggle a page out by flipping a bit in the page table and not have to do an expensive write.

    4) In some systems you can overcommit memory. Applications tend to request a lot more memory from the OS than they'll actually use. This is useful in many instances, but it again depends on your usage. If you're running a single application that doesn't dynamically allocate memory then you can run pageless. If a new app requests memory that's not available then it will get a failure on its malloc request. This can be desirable in some circumstances (see the sketch after this list).

    5) There are benefits to running page space on a separate disk, but for the vast majority of home users, the difference is negligible. This applies to Windows and Linux. Once you start stressing the VM subsystem then a separate disk is highly desirable.

    6) You can create page files on Unix/Linux. It's not desirable generally because of the extra filesystem overhead and possibility of fragmentation. But hey, in a pinch it works.

    7) Why this 2x RAM rule? A lot of it comes from old VM subsystems that needed a "picture" of the entire memory space. This made the page-out algorithms easier to code. Newer algorithms don't require the 2X RAM.

    KL
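
    Regarding point 4: on Linux, the overcommit policy is a runtime tunable. A quick sketch (mode 2 makes the kernel refuse allocations beyond swap plus a fraction of RAM, so malloc can actually fail up front instead of the OOM killer firing later):

      cat /proc/sys/vm/overcommit_memory        # 0 = heuristic (default), 1 = always allow, 2 = strict
      echo 2 > /proc/sys/vm/overcommit_memory   # enable strict accounting
      echo 80 > /proc/sys/vm/overcommit_ratio   # percent of RAM counted toward the commit limit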

  • min(2*RAM, 512MB) (Score:4, Interesting)

    by YGingras ( 605709 ) <ygingras@ygingras.net> on Wednesday August 30, 2006 @01:39AM (#16005218) Homepage
    I never use more than 512MB of swap. If you have a runaway process, you can let it live, but you avoid a lot of thrashing. If more than one process starts consuming RAM like crazy, you actually want them to die from an out-of-memory error; otherwise your whole system will grind to a halt while it spends most of its time swapping one and then the other back in. At 512MB you can absorb a little excess memory usage but won't go beyond what you can swap back in within a time quantum (mostly).

    Smarter per-process resource quotas would probably be better, and it would be nice to have a thrashiness function tied to disk speed, but so far 512MB sounds like the dividing line between reaching for the reset button and just taking a coffee break when you see the HD LED blinking like a strobe.

    It is just easier to try the approach where you consume a lot of RAM first and re-code if it doesn't work. I work in bioinformatics and we often have huge datasets; I always try to load the whole thing and do the computation in RAM. Only when I get an out-of-memory error do I segment the dataset and try a smarter approach. That might explain my choice of 512MB; the right threshold for other people might be bigger or smaller, but I'm pretty sure it's below 8GB.
  • by haeger ( 85819 ) on Wednesday August 30, 2006 @03:04AM (#16005457)
    "Memory is like an orgasm. It's a lot better if you don't have to fake it."
                      -- Seymour Cray, on virtual memory.

    • Re: (Score:3, Funny)

      by Gnavpot ( 708731 )
      "Memory is like an orgasm. It's a lot better if you don't have to fake it."
                                          -- Seymour Cray, on virtual memory.

      It is usually recommended to use analogies which the target audience can relate to.
  • "It Depends" (Score:3, Insightful)

    by edward.virtually@pob ( 6854 ) on Wednesday August 30, 2006 @04:27AM (#16005733)
    The equation stays about the same even as the scale of memory sizes increases. If one ran a set number of processes that all fit within core (RAM) memory and did not grow over time, one wouldn't need virtual memory at all. On a properly sized computer of a given generation, the typical set of processes fits, or almost fits, in core memory, so a virtual memory size equal to the core size provides ample protection against memory exhaustion (both core and virtual memory full). Exactly how much memory this is increases as the sizes of those typical processes increase.

    These days 2GB of core is usually large enough to avoid the need to use virtual memory, but it can be consumed pretty quickly by either large numbers of typical processes or a few memory-intensive ones. Memory exhaustion is a very unpleasant situation and leads to data loss and service outages; the computer does not react well to having literally no room to think. Given this, and given that virtual memory (disk space) gets cheaper at (somewhat) the same pace as core (RAM), it is much safer and more cost-effective to err on the side of caution and make the virtual memory bigger than necessary for day-to-day operation.

    Regardless of the scale of current-generation memory sizes, a virtual memory space equal to one or so times the core space of a properly equipped machine is the right size. For small-core machines, the larger the core memory deficit, the more times larger the virtual memory space must be to avoid running out of total memory. A machine running the latest Windows environment in 512MB of core would need a virtual memory much larger than one or two times that size to be safe. Said machine would still perform very poorly due to the cost of continually accessing the virtual memory, but it would avoid crashing due to memory shortage. Systems with much more than average core memory may be able to do safely with less or even no virtual space, but that is arguably a foolish place to conserve, since disk space is cheap and maintaining at least a one-times-core-sized virtual memory space is insurance against the pain of memory exhaustion.

    Or distilled: less RAM than average needs more than two times that for virtual, average RAM needs one to two times that, and lots more RAM than average can probably get away with less than one times or even none but probably should use one times anyway.

    Again note that average refers to the RAM size of a current generation machine configured to run the typical number of typical current programs with reasonable performance.
  • by Terje Mathisen ( 128806 ) on Wednesday August 30, 2006 @04:47AM (#16005796)
    What the original article didn't mention, and none of the replies seemed to go into, is the fact that with current CPUs, effectively all RAM is 'virtual':

    Only on-chip memory, i.e. cache, is "real" these days, and all accesses to DRAM are handled in paging units of 64/128 bytes or so. If this sounds familiar, it should! CPUs with 1 to 4 MB of real memory and lots of virtual memory are what the mainframes and minicomputers had 20-30 years ago.

    What this means is that now, just like then, all performance-critical code needs to be written to keep the working set within the amount of "real" memory you have available. When you passed this limit, you needed to make sure that you handled paging in suitably large blocks, to overcome the initial seek time overhead.

    Today this corresponds to the difference between random access to DRAM and burst-mode (block transfer) which can be nearly an order of magnitude faster.

    In the old days, when you passed the limits of your drum/disk swap device, you had to go to tape, which was a purely sequential device. Today, when you pass the limits of DRAM, you have to go to disk, which also needs to be treated as a bulk transfer/sequential device.

    I.e. all the programming algorithms that were developed to handle resource limitations on old mainframes should now be resurrected!

      "Those who forget their history are condemned to repeat it"

    Terje
