How Much Virtual Memory is Enough?

whitroth asks: "Ten years ago, Received Wisdom said that virtual memory should be, on average, two to two-and-a-half times real memory. These days, when 2GB of RAM is not unusual and many times that is not uncommon, is this still reasonable? What's the sense of the community as to what is a reasonable size for swap these days?"
This discussion has been archived. No new comments can be posted.

  • by FishWithAHammer ( 957772 ) on Tuesday August 29, 2006 @08:59PM (#16004034)
    Under Windows, it seems it'll swap out whether the free RAM is needed or not (there's a registry setting to change this, though). Under Linux, you won't swap much anyway unless you need it.

    I run a Core Duo laptop with 1GB of RAM and have never swapped out in Linux, no matter what I was doing.
  • by bob whoops ( 808543 ) <bobwhoops@NoSPaM.gmail.com> on Tuesday August 29, 2006 @09:02PM (#16004044) Homepage

    Back when I had 512MB of memory, I had a 512MB swap partition, but I noticed that I never came close to using all of it.

    When I got my new machine with 1G, I never bothered to make one at all, and I've never had a problem with it. If I do ever find myself in a situation where I need some swap space, I could always just create a swap file. It's a lot more convenient because it wouldn't have to be a fixed size, it doesn't take up space when I don't need it, and I have one less partition.

    Especially if you have 2G or more, I don't see a real reason to use swap.
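A minimal sketch of the swap-file approach the comment describes, on Linux (the path and size are illustrative, not prescriptive):

```shell
# Create a 1 GiB swap file (path and size are examples; adjust to taste)
dd if=/dev/zero of=/tmp/swapfile bs=1M count=1024
chmod 600 /tmp/swapfile   # swap files should not be world-readable
mkswap /tmp/swapfile      # write the swap signature to the file
# As root, enable it (and confirm with `swapon -s` or `cat /proc/swaps`):
#   swapon /tmp/swapfile
# Later, to reclaim the space:
#   swapoff /tmp/swapfile && rm /tmp/swapfile
```

Unlike a partition, the file can be grown, shrunk, or deleted whenever needs change, which is exactly the flexibility the comment is after.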

  • I use this (Score:5, Insightful)

    by Anonymous Coward on Tuesday August 29, 2006 @09:07PM (#16004062)
    2X physical memory for under 2G RAM
    2G swap for up to 8G RAM
    +1G swap for every 4G RAM beyond that
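The rule of thumb above can be written out as a tiny function (a sketch; the function name is made up, and sizes are in whole GB):

```shell
# Suggested swap (GB) from RAM (GB), per the rule above:
#   under 2 GB RAM -> 2x RAM
#   2-8 GB RAM     -> 2 GB swap
#   over 8 GB RAM  -> 2 GB + 1 GB per additional 4 GB
suggest_swap_gb() {
  ram=$1
  if [ "$ram" -lt 2 ]; then
    echo $(( ram * 2 ))
  elif [ "$ram" -le 8 ]; then
    echo 2
  else
    echo $(( 2 + (ram - 8) / 4 ))
  fi
}

suggest_swap_gb 1    # -> 2
suggest_swap_gb 4    # -> 2
suggest_swap_gb 16   # -> 4
```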
  • by toller8 ( 705418 ) on Tuesday August 29, 2006 @09:15PM (#16004097)
    Different jobs, different needs....

    Two of my (Linux) servers have lots of memory and lots of small processes, so anything that does swap out swaps out quickly. These don't use a lot of swap (512MB?) and don't have gig-sized processes to write into swap... so they don't really need the 2+ gig of allocated swap.

    One other (Linux) server has big processes (1GB or more), and when they have to swap out, watch the machine fall apart while the process is swapped out - it takes a while to write 1GB of RAM into swap! Since the processes are large, swap needs to be large.... Just hope that server never needs to have 3 or 4 multi-gig processes swapped out at once....

    So, YMMV! Know your machines and what *may* need to swap out, and you can live on the edge and figure your minimum swap.... or you can play it safe and boring and allocate 1.5 times RAM... After all... who needs that critical app to keep running after memory gets tight and the kernel kills it because it was the memory hog?

    :)

  • by StikyPad ( 445176 ) on Tuesday August 29, 2006 @09:19PM (#16004118) Homepage
    According to MS, it's 1.5 times the total RAM [microsoft.com]. I assume you're asking because you're trying to avoid a fragmented page file. While the benefits of an unfragmented page file are dubious at best (since it will be randomly accessing different parts of the page file), it's better to err on the side of caution: If you have 2GB of memory, you likely have an equally compensating-for-something hard drive, so you probably won't miss 3GB of space, or even 4. It's better to waste a little space than have Windows run out of Virtual Memory. Otherwise, just let it do its dynamic page file adjustment thing.

    If you're asking about creating a swap partition for Linux then 1.5X is also recommended. Just be generous, unless -- for some reason -- you've got 2GB of RAM and a 50 meg hard drive. Too much is always better than not enough.
  • by uler ( 583670 ) <post@ k a y l i x . net> on Tuesday August 29, 2006 @09:20PM (#16004121)
    One of the real advantages of using swap isn't to avoid memory exhaustion at all; by moving infrequently accessed pages from memory you make more room for the disk cache, thereby possibly improving overall system performance by reducing hard drive reads.
  • auto (Score:3, Insightful)

    by Joe The Dragon ( 967727 ) on Tuesday August 29, 2006 @09:25PM (#16004137)
    just let windows set it for you.
  • by gfxguy ( 98788 ) on Tuesday August 29, 2006 @09:37PM (#16004202)
    you never know when some runaway process is going to eat all yer RAM and need to use swap... no matter how much RAM you've got.

    Frankly, while I do use swap, in this case I'd rather have the process crash sooner rather than later.
  • by EvanED ( 569694 ) <{evaned} {at} {gmail.com}> on Tuesday August 29, 2006 @09:55PM (#16004289)
    Well, again, that depends; if your usage patterns don't cause enough memory use to justify swapping, and you're just creating a swap partition for the emergency where some program decides to break, then it hardly matters if your swap drives are slow, because they are never accessed.
  • by Junta ( 36770 ) on Tuesday August 29, 2006 @10:08PM (#16004363)
    My strategy is generally to use a file for swap rather than a partition, even in Linux. I figure that if memory has to be swapped in from disk, performance is already suffering from going to disk, so the extra overhead doesn't matter much, and I have the freedom to adjust it up or down depending on my needs. (This is a desktop/laptop circumstance.) I generally start at 512MB or so, increasing it maybe if IO is faster on the drive.

    I view swap like a rumble strip on a road before a stop sign. With no swap, you don't realize a process has leaked memory until it's too late; with swap, performance degrades while it eats through your swap, so you see the end coming ahead of time and may be able to head it off with a kill. It may well be that your 4GB of RAM is technically capable of handling the same load your 1GB RAM + 1GB swap handled in the past, but having some noticeable impact when things start going wrong is nice. I realize there are theoretically better approaches, but nothing gets in your face like poor performance and tons of disk accesses.

    On a production server, or a problematic system where I want support and the OS likes to dump a core to swap, I'll ensure a generous swap partition is available (generally observed active swap × 1.5 + physical memory size). In this case a file-backed swap may depend on layers of the kernel that are in an invalid state, and a swap partition is more likely to be reliably writable. The only system I would even theoretically hibernate is my laptop, and I only ever suspend to RAM or shut down completely, so I don't consider my laptop as needing a swap partition of any significant size.
  • by hackstraw ( 262471 ) * on Tuesday August 29, 2006 @10:23PM (#16004427)
    you never know when some runaway process is going to eat all yer RAM and need to use swap... no matter how much RAM you've got.

    Personally, I prefer a runaway process to run out of resources and stop vs take over my whole system. It takes a long time to page out 1+ Gigs of RAM. It takes a long time to unpage all of that at shutdown or even when an app is closed.

    Swap completely depends on the computer's real RAM available and the purpose of the computer and the OS on said computer.

    To adequately answer the question, "How much Virtual Memory is Enough?" The correct answer is "It depends".

    Having too much swap on a HPC type of machine is a nightmare and will kill performance. Having too little swap on a general purpose server (moreso real RAM) is going to hurt performance. Paging out too much on a laptop with a slow disk can be very painful and slow down the shutdown process.

    There is no right answer.

  • 4GB RAM, 4GB swap (Score:4, Insightful)

    by Agelmar ( 205181 ) * on Tuesday August 29, 2006 @10:34PM (#16004486)
    I have 4GB of physical RAM (DDR2-6400) and 4GB of swap. There are actually a few reasons for this; YMMV (obviously I think the answer to this question depends on what you do).

    I have a lot of things running which, usually, are doing nothing. For instance, apache2, mysql, postfix, and courier-imapd-ssl are always running, but they're rarely actually *doing* anything. (If I get a hit or an email, it's relatively rare, as I have very little hosted off of my home box - nevertheless, I do want these running.) So I'm happy to let these get swapped out. When I start up matlab and start dealing with huge datasets, I know it's going to swap most of these out. That's good. It will also swap out some of my matlab data that's loaded but not currently being used (and yes, it's quite possible to have >4GB in your workspace). For me, I have the swap because I need it. Figure out what you need, and you will have the answer to your question.
  • by MyDixieWrecked ( 548719 ) on Tuesday August 29, 2006 @10:57PM (#16004596) Homepage Journal
    Swap completely depends on the computer's real RAM available and the purpose of the computer and the OS on said computer.

    To adequately answer the question, "How much Virtual Memory is Enough?" The correct answer is "It depends".


    exactly... and some OSes (read: OS X) cache less-frequently used data (cached window contents, other images, etc.) to the drive to free up real RAM; it doesn't matter how much RAM is installed on the machine, it'll still use the swap. Even my machine at work with 8GB of RAM frequently uses the swap before even 1/4 of the RAM has been touched.
  • BSDs like more (Score:5, Insightful)

    by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Tuesday August 29, 2006 @11:00PM (#16004619) Homepage Journal
    According to FreeBSD's tuning(7) [freebsd.org] man page:
    The kernel's VM paging algorithms are tuned to perform best when there is at least 2x swap versus main memory. Configuring too little swap can lead to inefficiencies in the VM page scanning code as well as create issues later on if you add more memory to your machine.

    Disk is always far cheaper and more plentiful than memory. If you have four gigs of memory, what's wrong with carving eight gigs of swap out of your terabyte RAID? If you have that much memory in the first place, then you're probably running large apps. Do yourself and them a favor and give them a little breathing room.

  • Rule of thumb... (Score:5, Insightful)

    by tachyonflow ( 539926 ) * on Tuesday August 29, 2006 @11:19PM (#16004693) Homepage

    But... but... the rule of thumb says to have twice as much swap as RAM!

    It's a pet peeve of mine that so many system administrators appeal to "rules of thumb" about decisions such as this, instead of actually thinking it through. Sys admins pass around these nuggets of wisdom with unquestioning reverence, like they were handed down from some bearded UNIX guru sitting on a mountaintop. These rules either 1) happen to reflect reality, 2) do not reflect reality, or 3) reflected reality 20 years ago but nobody got around to issuing some sort of "revocation rule of thumb". :)

    My experience is that very little swap is needed these days, and the rule of thumb falls into category #3. Long gone are the days that the OS demanded swap space for all process memory [san-francisco.ca.us].

    If I have a machine with 1GB of RAM, I'll usually give it 512MB of swap or so. As discussed elsewhere in this thread, a little bit of swap is good for pre-emptive swapping and for emergencies (to avoid the dreaded Linux "oom killer".) Also, if you're going to use hibernate, you'll want at least as much swap as real memory.
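One quick way to sanity-check the hibernate point above on a Linux box is to compare total RAM against total swap (a sketch; Linux-specific):

```shell
# For hibernation, SwapTotal should be at least MemTotal,
# since the whole RAM image is written to swap.
grep -E '^(MemTotal|SwapTotal)' /proc/meminfo
```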

  • by Anonymovs Coward ( 724746 ) on Wednesday August 30, 2006 @12:01AM (#16004860)

    2 GB for firefox, 5 GB for OpenOffice.org, 1/2 GB for X, 1/2 GB for desktop odds and ends, 1 GB for Evolution or Thunderbird, and 10 MB for old-style stuff running in the background
    If you're using "top" to get those numbers, you've probably got them wrong [kdedevelopers.org]. They definitely look wrong.
  • by mcrbids ( 148650 ) on Wednesday August 30, 2006 @01:12AM (#16005133) Journal
    My strategy generally is to use a file for swap rather than a partition, even in linux.

    What I find curious is that you have a strategy. On what relevant experience do you base this strategy? 1 GB of disk space costs less than $0.50. [pricewatch.com] Set up 3 GB of VM if it makes you feel good. The latte you drink while you set it up costs more than the extra disk space!

    So go for it!!! Who cares what you do? Heck, give yourself 10x the RAM and see if it actually makes any difference!!! (it won't)

    This is sort of like asking: "Which goes faster: the yellow Pacer or the red Pacer?"!
  • by TheNetAvenger ( 624455 ) on Wednesday August 30, 2006 @01:22AM (#16005167)
    Score 5 for this comment? Windows can't live without swap memory. 0 is automatically ignored (as are other small values), and Windows decides to use as much as it wants. This is easily verified by running the DirectX Diagnostic Tool, dxdiag.exe.

    In Windows 2000/XP you can't disable swap memory - plain and simple. Swap size can be reduced, that's all, but Windows will only follow your setting until the need arises (and that won't be when Windows has run out of RAM, as others have explained).



    Actually, the pagefile can be turned off in Windows XP, easily, with no problems whatsoever if you have a large amount of memory.

    In fact, with the way Windows DOES handle memory, it is better at running without a paging file than most OSes, because it will not shove crap loads of content into the pagefile anticipating that the application will use it.

    Windows Vista also can and will run well without a pagefile, without incident.

    Where Windows has 'sucked' at pagefiles in the past is that it would give priority to file operations that are not application-load related and use the RAM cache, thereby paging existing applications out to the hard drive. (This is changed in Vista; file copy operations should no longer consume RAM cache at the expense of applications.)
  • by Anonymous Coward on Wednesday August 30, 2006 @02:18AM (#16005341)
    Why in the name of Pete does the kernel just *kill* anything, ever??? I know of no real unix, or unix clone, that does this. Who was smoking what when they decided: "Out of memory? Kill something!"????

    JP

    P.S. That's a serious question, not a troll...
  • by Jeremi ( 14640 ) on Wednesday August 30, 2006 @02:42AM (#16005404) Homepage
    you never know when some runaway process is going to eat all yer RAM and need to use swap... no matter how much RAM you've got.


    The thing is, in that situation, swap just makes things worse. Now instead of having a computer with all its RAM used up, you have a computer with all its RAM and all its swap space being used up, and it's slow as molasses due to constantly waiting for the hard disk I/O.


    At least without swap, the runaway process will be killed in a few seconds and then you can continue working.

  • "It Depends" (Score:3, Insightful)

    by edward.virtually@pob ( 6854 ) on Wednesday August 30, 2006 @04:27AM (#16005733)
    The equation stays about the same even though the scale of memory sizes involved increases. If one ran a set number of processes that all fit within core (RAM) memory and did not increase in size over time, one wouldn't need virtual memory at all. When using a properly sized computer of a given generation, the typical set of processes being run fits, or almost fits, in core memory, so a virtual memory size equal to the core size provides ample protection against memory exhaustion (both core and virtual memories full). Exactly how much memory this is increases as the sizes of those typical processes increase.

    These days 2GB of core is usually large enough to avoid the need to use virtual memory, but it can be consumed pretty quickly by either large numbers of typical processes or a few memory-intensive ones. Memory exhaustion is a very unpleasant situation and leads to data loss and service outages. The computer does not react well to having literally no room to think. So given this, and that virtual memory (disk space) gets cheaper at (somewhat) the same pace as core (RAM), it is much safer and more cost-effective to err on the side of caution and make the virtual memory bigger than necessary for day-to-day operation.

    Regardless of the scale of the current generation's memory sizes, a virtual memory space equal to one or so times the core space of a properly equipped machine is the right size. For small-core machines, the larger the core memory deficit, the more times larger the virtual memory space must be to avoid running out of total memory. A machine running the latest Windows environment in 512MB of core would need a virtual memory much larger than one or two times that size to be safe. Said machine would still perform very poorly due to the cost of continually accessing the virtual memory, but it would avoid crashing due to memory shortage.

    Systems with much more than average core memory may safely be able to do with less or even no virtual space, but it is arguably a foolish place to conserve, since disk space is cheap and maintaining a virtual memory space of at least one times core size is insurance against the pain of memory exhaustion.

    Or distilled: less RAM than average needs more than two times that for virtual, average RAM needs one to two times that, and lots more RAM than average can probably get away with less than one times or even none but probably should use one times anyway.

    Again note that average refers to the RAM size of a current generation machine configured to run the typical number of typical current programs with reasonable performance.
  • by Terje Mathisen ( 128806 ) on Wednesday August 30, 2006 @04:47AM (#16005796)
    What the original article didn't mention, and none of the replies seemed to go into, is the fact that with current CPUs, effectively all RAM is 'virtual':

    Only on-chip memory, i.e. cache, is "real" these days, and all accesses to DRAM will be handled in paging units of 64/128 bytes or so. If this sounds familiar, it should! CPUs with 1 to 4 MB of real memory and lots of virtual memory are what the mainframes and minicomputers had about 20-30 years ago.

    What this means is that now, just like then, all performance-critical code needs to be written to keep the working set within the amount of "real" memory you have available. Back then, once you passed this limit, you needed to make sure that you handled paging in suitably large blocks to overcome the initial seek-time overhead.

    Today this corresponds to the difference between random access to DRAM and burst-mode (block transfer) which can be nearly an order of magnitude faster.

    In the old days, when you passed the limits of your drum/disk swap device, you had to go to tape, which was a purely sequential device. Today, when you pass the limits of DRAM, you have to go to disk, which also needs to be treated as a bulk transfer/sequential device.

    I.e., all the programming algorithms that were developed to handle resource limitations on old mainframes should now be resurrected!

      "those who forget their history, are condemned to repeat it"

    Terje
  • by astralbat ( 828541 ) on Wednesday August 30, 2006 @09:48AM (#16006927)
    This parameter was introduced with 2.6 and is useful for laptops, where a lower value means the system swaps less. It could be used in a distribution's event scripts to change the value when, for example, the user unplugs their laptop from AC power.

    The idea is that the user's battery life is extended slightly without them realising how.
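Assuming the parameter in question is the 2.6 kernel's vm.swappiness knob (the comment doesn't name it, so treat that as an assumption), an event script could adjust it like this:

```shell
# vm.swappiness ranges 0-100; lower means the kernel swaps less eagerly.
cat /proc/sys/vm/swappiness          # read the current value
# A hypothetical on-battery hook could lower it (needs root):
#   sysctl vm.swappiness=10
#   (or: echo 10 > /proc/sys/vm/swappiness)
```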

  • by LogicHoleFlaw ( 557223 ) on Wednesday August 30, 2006 @01:34PM (#16008836) Homepage
    I recently attended IBM's "Performance Tuning with AIX" course. It could basically be summed up as "Don't Use Paging Space. Ever." Then it went into lots of detail about AIX memory management techniques and the VM subsystems, with a brief foray into network performance.

    It is a very sickening feeling to go and power-cycle a production system which is completely halted due to running out of memory. Almost as bad is a system which is hitting the swap and responding like molasses.

    Look at the work you need your server to do, then put the RAM in it you need to get the job done. I've not worked with Linux in a full-on production environment, but I will go look into its systems for dealing with OOM errors. I'm sure it will be interesting.
  • by darkonc ( 47285 ) <stephen_samuel AT bcgreen DOT com> on Wednesday August 30, 2006 @02:44PM (#16009456) Homepage Journal
    For me, swap space is like insurance. It's better (and often cheaper) to have it and never need it than to need it and not have it.

    Yes, RAM is incredibly cheap, and any amount of serious swapping is to be avoided. On the other hand, once in a while you do something stupid like having vi load a 2GB log file into RAM, or whip Firefox into an 800MB frenzy and then load that 16k x 32k image into GIMP, or run that database query that uses *way* more RAM than you'd expected.

    In general, I'd rather have my system slow to a crawl than blow up in my face when something like that happens. At least, then, I've got the choice of what I want to kill/stop, rather than having random (critical) processes die on me and have no choice other than a post mortem.

    If you're that worried about your system slowing to a crawl when you start eating into swap space, then put instrumentation onto the system that alerts you when swap gets over 100MB. At least that way, you keep uptime and some hope of a controlled recovery. With the price of hard disk storage being what it is today, it's *not* having a few spare gigabytes of backup VM resources that seems like a bad idea.

  • by billstewart ( 78916 ) on Wednesday August 30, 2006 @05:29PM (#16010944) Journal
    "But who ever sits there watching their hard drive light?"

    I do - any time I'm running Mozilla with a lot of tabs open and it decides to go into annoying-swapping-mode (on WinXP and predecessors) for no obviously good reason, so I've got to wait for Mozilla to swap itself in or out before I can see the web page or other application I want. It doesn't help that I mainly use it on a laptop, where the drive is slow and the RAM is a fairly large 384MB, but it also happens on my home desktop, where the drives are faster and there's 640MB of RAM, which ought to be enough for anybody.
