How Much Virtual Memory is Enough? 544

whitroth asks: "Ten years ago, Received Wisdom said that virtual memory should be, on average, two to two-and-a-half times real memory. In these days, where 2GB of RAM is not unusual, and many times that is not uncommon, is this unreasonable? What's the sense of the community as to what is a reasonable size for swap these days?"
  • by Propaganda13 ( 312548 ) on Tuesday August 29, 2006 @09:05PM (#16004056)
    Simple. Monitor your own resource usage and figure out what YOU require. Everyone has different hardware, programs, and habits.
  • 1GB ram using XP (Score:3, Informative)

    by Karloskar ( 980435 ) on Tuesday August 29, 2006 @09:09PM (#16004066)
    I disable virtual memory on computers with more than 1GB of ram unless the user is going to be manipulating large images. Never had a problem yet.
  • Depends... (Score:1, Informative)

    by PianoComp81 ( 589011 ) on Tuesday August 29, 2006 @09:09PM (#16004069)
    It depends on what you want to use. If you don't care about Hibernate mode, then you probably wouldn't need any swap (or much). However, if you want to use that mode, you need to have at least the same amount of swap space as memory. I've tried it with less, and it wouldn't even attempt to go to sleep (for obvious reasons - swap is used to store what's currently in RAM when going into hibernate mode).
  • LVM (Score:3, Informative)

    by XanC ( 644172 ) on Tuesday August 29, 2006 @09:10PM (#16004074)
    If you use LVM (which you should, it's great!), you can expand and contract your swap partition as needed.
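    As a rough sketch (assuming the swap LV is called /dev/vg0/swap - substitute your own volume group and LV names), growing it looks something like:
    swapoff /dev/vg0/swap
    lvresize -L +1G /dev/vg0/swap
    mkswap /dev/vg0/swap
    swapon /dev/vg0/swap
    The mkswap step is needed because the swap signature records the old size.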
  • by TLouden ( 677335 ) on Tuesday August 29, 2006 @09:11PM (#16004079)
    If you really want to know, I use 1-2 GB swap with 1GB ram and the same for 512MB ram.

    However, you might just do what I do and try out different values to figure out what works. If you're talking about a linux system a real-time memory/swap usage graph can be added to most window managers so that you can see what's happening. You could also try to estimate usages based on what the machine is expected to do.
  • by larry bagina ( 561269 ) on Tuesday August 29, 2006 @09:14PM (#16004090) Journal

    system control panel -> advanced -> performance options -> advanced -> virtual memory.

    Set to no paging.

  • by megaditto ( 982598 ) on Tuesday August 29, 2006 @09:17PM (#16004103)
    To control how much 'it will swap' on Linux:
    #echo [0-100] > /proc/sys/vm/swappiness
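    A value written to /proc doesn't survive a reboot; a common way to make it stick (assuming your distro reads /etc/sysctl.conf at boot) is something like:
    echo "vm.swappiness = 10" >> /etc/sysctl.conf
    sysctl -p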

    A better question is how much memory you can address. Could your 32-bit Windows system address over 2^36 bytes of memory (64GB), for example? And could you allocate over 2GB to the Windows kernel?
    Could your 64-bit Linux system address over 2^48 bytes of memory?
  • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Tuesday August 29, 2006 @09:29PM (#16004157) Journal
    Swapping out makes sense sometimes, though. For instance, there are tiny chunks of the system -- daemons and such -- that are pretty much never accessed. I'd rather reclaim that, if only to cache something worthwhile.

    Also, remember that suspend2 requires swap, so figure out how much of an image you'll need (and how much is cache that can be freed) and get a bit more than that. My own rule of thumb is that swap is roughly 1x to 1.5x RAM, so that I can be sure I have room for the suspend. But I have the space, and Windows doesn't use swap for this anyway; it uses hiberfil.sys.
  • by Junta ( 36770 ) on Tuesday August 29, 2006 @09:35PM (#16004189)
    Linux has futzed with this a lot (and lets the user tweak VM behavior a lot, though /proc/sys/vm/swappiness goes a long way...). Both Linux and Windows will swap well ahead of running out of free memory (for good reason). Just wanted to go into detail because I keep seeing people complain that they see swap used in Linux or Windows when they still have free memory, not realizing this generally isn't a bad thing.

    There are generally two strategies:
    -The common-sense one, where you swap only when you run out of memory. This makes a lot of practical sense on systems with limited write cycles (flash-based swap, though you really never should do that anyway), and on systems that want to spin down drives to conserve battery power. Performance-wise (this may surprise people who haven't spent time thinking about it), this can often be bad. Avoiding swapping is generally only good on systems where resource utilization is carefully managed and you know it won't ever swap (the IO operations of unneeded swapping can interfere with the productive activity of a constantly busy system). This is actually a vast minority of systems in the world (no matter how l33t one may think themselves, most people don't have a usage pattern that would be impacted by the extraneous IO of an occasional write to swap).

    -Pre-emptive swapping. When the IO subsystem is idle and the system can afford to copy memory to the swap area, it does so (depending on criteria). Generally speaking it will select memory that isn't accessed much and write it to disk, but leave the in-memory copy in place if the physical memory is not immediately needed. A fair amount of the swap used in an apparently underutilized system is duplicated in physical memory and swap space. The benefit here is that if the process reads that memory back, it doesn't incur any penalty, despite the data also being in swap (the system may decide different data is the best swap candidate and write that to disk instead). The benefit of writing this stuff to swap even when not needed becomes clear when an application comes along that allocates more memory than the system has free in physical space. In the first strategy, this means the malloc blocks while data is written to disk, and the new application starting or needing a lot of data is severely impacted. In the pre-emptive swap case, the system notices the condition, knows which memory it already has a backup of in swap and which hasn't been used lately, and can free that memory and satisfy the malloc pretty much instantly.

    For those who have 1GB of RAM or so, it becomes less likely that the system will have to flush memory from physical RAM, but there is a balance to be struck between memory directly allocated by applications, what the applications' memory access patterns are, and what RAM you can use to buffer filesystem access. If your total application memory allocation is 75%, it may still make sense performance-wise to keep only 50% of your physical memory dedicated to the applications (the other bit relegated to swap) and 50% of the memory to buffer disk I/O.
  • by LiquidCoooled ( 634315 ) on Tuesday August 29, 2006 @09:37PM (#16004199) Homepage Journal
    This works really well until the one day you leave everything running, start up Half-Life and flick around the levels (which uses memory faster...).

    The bleeding thing cannot smoothly say "You are running out of memory, setting up an emergency page file now..." without something crashing.

    Fix this problem and you are cooking on gas. A modern computer should be able to accommodate every malloc up to RAM + free disk space, and right now it can't do that easily.
  • by Anonymous Coward on Tuesday August 29, 2006 @09:38PM (#16004206)
    You're full of it.

    1. The 128 KB Mac did not have a hard disk (though there were some companies that made disks that plugged into the floppy port).

    But more importantly:

    2. There was no "swap" (virtual memory) for the Mac OS until System 7, which wouldn't run on anything less than a Mac Plus.
  • old discussion (Score:1, Informative)

    by Anonymous Coward on Tuesday August 29, 2006 @10:20PM (#16004411)
    There's an old discussion [kerneltrap.org] amongst various kernel developers about this issue which, even though it's a couple of years old, is almost certainly going to be more insightful than anything you'll read on slashdot. You'll note that there doesn't seem to be a ready consensus even amongst the folks who know this stuff best, so if you see anyone posting the "correct" answer, call bullshit.

    My advice is that you should just do the same thing as some random poster on slashdot who says "these days I set up my swap like blah blah blah" without any explanation or justification.
  • Re:Depends... (Score:5, Informative)

    by Limecron ( 206141 ) on Tuesday August 29, 2006 @10:26PM (#16004446)
    This is completely wrong.

    In Windows, your RAM is saved to a file called "hiberfil.sys" which is the exact size of your physical RAM. Your swap file stays exactly the way it is, otherwise you'd lose the data that was swapped to it.

    In Linux, it depends on what program you are using to suspend, but typically, it's a file in /tmp.
  • by cookd ( 72933 ) <douglascook&juno,com> on Tuesday August 29, 2006 @10:30PM (#16004461) Journal
    That's not due to pre-emptive swapping. Pre-emptive swapping makes your hard disk work more when the system is idle, but it doesn't force anything out of memory.

    Your issue is due to an incorrect decision somewhere (not sure where) about how much memory to make available to WoW's direct (memory allocation) and indirect (disk cache) needs. WoW IS taking advantage (directly or indirectly) of that extra memory, but it probably only makes a 0.1% performance difference and you would rather it left your other programs in RAM. That is a hard situation to tune for.

    Note that there are (at least) two different ways for memory to be used even when it shows up as "free". One is via disk cache. The other is via large temporary allocations that are made, used, and then freed before they really register on the performance monitor.
  • by markk ( 35828 ) on Tuesday August 29, 2006 @10:41PM (#16004512)
    This depends - see other comments for most situations. However, if you have a large Sun, HP, Fujitsu, IBM, etc. with 16+ CPUs and say 2 to 8 GB per CPU (not uncommon in the big systems), then at minimum you need 3 times the --per CPU-- memory, because if one of the CPUs goes bad, the hot-swap mechanism is going to use the swap space to hold the processes (at least on some of these systems) while moving them to the other CPUs as it marks the bad one offline. You certainly don't need 2 times the total memory, or several hundred gig. This is assuming the kind of NUMA architecture that I think all of these systems still have.
    Generally we just used to use, say, 36 Gig local drives as (mirrored) swap for simplicity. In this environment you are probably on a SAN and people will say to move everything there, and that might be more true now than a year or two ago.
  • Mac OS X swap (Score:5, Informative)

    by atomm1024 ( 570507 ) on Tuesday August 29, 2006 @10:58PM (#16004609)
    On Mac OS X, swap is stored (by default) in files in the /var/vm directory on the boot hard drive, instead of on a separate partition. So there's no limit to how much is used, nor a predefined minimum amount of space used; the swap space expands and contracts as needed. That seems reasonable.
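    If you're curious, you can watch those files come and go with something like the following (file names and sizes will vary; swapfile0, swapfile1, and so on is the usual pattern):
    ls -lh /var/vm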
  • Re:Depends... (Score:3, Informative)

    by cnettel ( 836611 ) on Tuesday August 29, 2006 @11:06PM (#16004642)
    A note regarding Windows, though: from XP on, it's very rare that the complete hiberfil.sys is used. Pages are swapped out aggressively to general swap, or to whatever binary file is backing read-only pages. The remaining pages are compressed. However, by the time all these decisions are made, it would be impossible/inconvenient to discover that the file was really too small, so a worst-case allocation is made.
  • Re:Depends... (Score:2, Informative)

    by gyrojoe ( 600717 ) <gyrojoe+slashdot@NoSPaM.gmail.com> on Tuesday August 29, 2006 @11:15PM (#16004679)
    swsusp stores the data in the swap partition.
    Suspend2 can write it to a file instead.
    See http://www.suspend2.net/features [suspend2.net]
  • by WuphonsReach ( 684551 ) on Tuesday August 29, 2006 @11:24PM (#16004715)
    About the only thing I'd use them for is a PostgreSQL xlog location (the scratch area that PostgreSQL writes to prior to committing the writes to the database). It's all sequential writes, not very high volume, but when the xlog is on the same spindle as the database you get a lot of contention and slowdown in write-heavy applications.

    Even then, I'd probably replace the 5GB drive with a more modern 300GB or 400GB spindle. Create 5GB for the swap area on it, use the rest for temp directories, the xlog, and a quick-n-dirty backup location for rsync snapshots.

    (Older drives are *really* slow... 5-10MB/s vs 30+ MB/s for a more modern drive. The 750GB drives do 75MB/s at the outer diameter.)
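    For the record, a rough sketch of moving the xlog onto such a spindle (the paths are made up, the server must be stopped first, and this assumes a PostgreSQL 8.x layout where the directory is called pg_xlog):
    pg_ctl stop -D /var/lib/pgsql/data
    mv /var/lib/pgsql/data/pg_xlog /mnt/spindle2/pg_xlog
    ln -s /mnt/spindle2/pg_xlog /var/lib/pgsql/data/pg_xlog
    pg_ctl start -D /var/lib/pgsql/data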
  • by m0rph3us0 ( 549631 ) on Tuesday August 29, 2006 @11:35PM (#16004761)
    It's a problem called free-list erosion. Windows will swap out apps to make room for buffer cache. So when WoW reads a 2GB file, all your programs end up swapped out to disk.
  • by Anonymous Coward on Tuesday August 29, 2006 @11:38PM (#16004776)
    Turning off the page file in Windows can be regarded as a hint that you won't need it. If you leave it turned on, Windows will assume it might be needed and use it proactively, to make the best use of it when it is absolutely needed.

    However, this is probably not a good hint to give. However great you think it is not to use the pagefile, you'd probably rather some of that memory get used for disk I/O buffers than hold super-ancient stale pages that nobody is accessing.

    I think the best way to look at this is the same way as the SuperFetch-whatever flash drive feature of Vista: obviously adding 1G of DRAM is better than adding 1G of flash memory, but even if you think it's pointless, a lot of very intelligent people are finding ways to make the OS take advantage of whatever resources you can scrape up for it, and there is plenty of room for improvement and innovation in this respect.

    Similarly, if you are choosing between 2G of page file and 2G of DRAM--choose the DRAM! But if you can't fit any more DRAM in, you'll wish you gave Windows the extra resources it can use to tune your performance.
  • swap bits (Score:2, Informative)

    by walshy007 ( 906710 ) on Tuesday August 29, 2006 @11:41PM (#16004783)
    Linux can swap however you wish; swap partitions are just the most frequently used option because they are the best performance-wise, with no overhead from dealing with the filesystem. Linux can and will use basically any type of storage you can imagine to swap on, though.

    If I ever encounter a Linux box without swap, a quick dd creating an empty file and then swapon on that file fixes everything. You can also use multiple swap files/partitions if necessary - same deal.
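    For example (a 1GB file; the size is arbitrary, and mkswap is needed to put a swap signature on the file before swapon will accept it):
    dd if=/dev/zero of=/swapfile bs=1M count=1024
    mkswap /swapfile
    swapon /swapfile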

    The only real annoyance with Mac OS VM is that it swaps out way before necessary - not as bad as Windows, but it still does some strange things that my Mac tech friends cannot fathom the reason for. It also appears not to be changeable behaviour. Correct me if I'm wrong on that, since said friends are quite annoyed by it :)
  • Read (please!) (Score:5, Informative)

    by Anonymous Coward on Tuesday August 29, 2006 @11:43PM (#16004789)
    Man, it's utterly depressing to see the same useless "rules-of-thumb" still in effect when the original question is asking if the rule-of-thumb is a good idea.

    1) Page space is not swap space. There's a small distinction that's generally lost (and generally ignored). Page space is used to move memory pages to and from disk. Swap space is technically used to move entire processes out to disk. The difference is mainly based on when your OS was created (i.e., its technological underpinnings), and there's no need to get into it now... but the difference is meaningful.

    2) Page space is not *free*. There's a misconception that if you have 500G of disk space, then it "can't hurt" to put 8G of swap on a 4G RAM machine. Depending on your OS, the size of the page table can grow remarkably depending on how much memory (RAM + VM) is allocated. This means that adding 2G of page space may not cost anything, but adding 2.5G may suddenly take up another chunk of real, non-pageable memory, because the page table cannot itself be paged. So if your app is thrashing, adding page space may make it worse.

    3) Even with lots of RAM, it's still often a good idea (depending on your usage) to have some page space. Modern OSes will still page out unused pages to use RAM for better stuff. I.e., if you have a huge file open in a graphics application, but are not actively using that application for a length of time (an hour, say) then the OS will page it to disk. This makes better use of your physical RAM. On some OSes the OS will use page space even if free RAM is available. It can then toggle a page out by flipping a bit in the page table and not have to do an expensive write.

    4) On some systems you can overcommit memory. Applications tend to request a lot more memory from the OS than they'll actually use. This is useful in many instances, but it again depends on your usage. If you're running a single application that doesn't dynamically allocate memory, then you can run pageless. If a new app requests memory that's not available, it will get a failure on the malloc request. This can be desirable in some circumstances. (A sketch of the Linux knobs for this follows the list.)

    5) There are benefits to running page space on a separate disk, but for the vast majority of home users, the difference is negligible. This applies to Windows and Linux. Once you start stressing the VM subsystem then a separate disk is highly desirable.

    6) You can create page files on Unix/Linux. It's not desirable generally because of the extra filesystem overhead and possibility of fragmentation. But hey, in a pinch it works.

    7) Why this 2x RAM rule? A lot of it comes from old VM subsystems that needed a "picture" of the entire memory space. This made the page-out algorithms easier to code. Newer algorithms don't require the 2X RAM.
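
    Regarding point 4: on Linux the overcommit behaviour is tunable. A sketch of the knobs (mode 0 is the default heuristic, 1 always overcommits, 2 enforces strict accounting against swap plus overcommit_ratio percent of RAM):
    echo 2 > /proc/sys/vm/overcommit_memory
    echo 80 > /proc/sys/vm/overcommit_ratio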

    KL

  • by LO0G ( 606364 ) on Wednesday August 30, 2006 @12:02AM (#16004868)
    VM != VA. You're confusing the two.

    VA is Virtual Address space. For a 32-bit processor, you have 32 bits of virtual address space - each process can occupy no more than 3G of RAM (on XP, with the /3GB switch, which can hurt other parts of the system because it reduces the memory that the kernel can use).

    If you have more than one process, you have more than one virtual address space. So saying that each process can only address 3G of RAM doesn't matter - with 30 processes running, you could theoretically have 90G of VA allocated.

    What's important is VM.

    VM is virtual memory. VM is what backs the pages that are mapped into the VA.

    The maximum amount of VM you can have allocated on a machine is measured by the commitment limit on the machine, which is typically measured as "physical RAM + page file space". If overall VM always stays below physical RAM, you don't need a paging file. But if it EVER goes above it, you're toast if you don't have a paging file. All those pages from the boot process that normally would have been discarded to the paging file (or were allocated by daemons that started during boot but haven't done anything since then) stick in the craw of the memory manager taking up space that COULD be used for your application, but can't because you've not told the OS where to put them.

    That's why you have a paging file - it gives the OS a place to put the mouldy old pages that were allocated by apps that aren't actively doing things so your application can re-use the memory that those apps were using.

    Btw, it's my understanding that ALL modern virtual-memory-based operating systems have essentially the same VM architecture - Linux, Windows, whatever. They all use paging files for essentially the same things - discarding writable pages that are not in current use by applications (read-only pages can typically be reloaded from the binary image).
  • by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Wednesday August 30, 2006 @12:03AM (#16004874) Homepage Journal
    In order to determine the correct amount of virtual memory, you need to decide what contingencies virtual memory is being used for. Don't worry about what you are normally doing, because on modern systems you can always buy enough RAM to do whatever you do normally. On old systems, where physical RAM was capped at a very small amount, you didn't have that option. So, virtual memory today is what you need when there are abnormal conditions.

    The 2.5x case comes from two simple rules of thumb. Firstly, you need enough to be able to hold the whole of what is in RAM now, plus everything you want to swap in, plus enough to minimise fragmentation and cover overheads. Secondly, the more swap space you have, the more metadata you need to manage it AND the greater the latency to perform any kind of swap AND the more swapping you need to do to run all active processes. Too much virtual memory is a Bad Thing. Having 2.5 x RAM was considered a good compromise and it is one I use to this day.

    Today, both rules of thumb still hold. The largest single object you can have is one that fills ALL of RAM after the kernel, and you absolutely must have sufficient swap space to be able to dump that object to disk. If you don't, then the kernel will either panic, kill the process or cause any other activity to behave unpredictably. It won't have the resources to behave correctly. Any number of these objects could, in theory, be swapped out - but remember that they don't run when on disk, only when in memory, so the more you have, the smaller the timeslice each will get - and the sum of those timeslices will go down, as you need to allow time for the swap to take place.

    However, today isn't quite the same as yesterday. The difference in performance between hard drives and RAM has changed. There is better caching on the drive. The swap algorithms are smarter and there is more understanding of what metadata is useful and what really has no value. Process handling is also smarter, so processes aren't necessarily run in order - round-robin scheduling is used for some time-critical stuff on Linux, but most applications use a more relaxed system.

    Also, programming has changed. There is greater re-use of tools and libraries - well, sometimes - and this means that the largest object you really have to handle at a time is much smaller than the size of RAM. A certain fraction of what's left will be used by shared libraries and shared resources.

    Lastly, because hard drives are reasonably cheap and most PCs can handle several at the same time, you are far far better off getting a drive and dedicating it to swap. This is good for many reasons, not least because the drive won't have to move the read heads from data space to swap space and back. You eliminate a vast chunk of seek time, reduce the stress on the drive AND can experiment with different swap sizes without risking losing data.

    I would therefore STRONGLY advise using the classic 2.5x and a different hard drive, but if you can't do this for some reason and want an updated formula, here is what I would suggest:

    The meaningful RAM will be equal to the total RAM minus the space used by the kernel and vital, non-swappable resources/daemons. Multiply this by three for 7200RPM hard drives or by five for 15000RPM hard drives. Multiply by one and a quarter for basic swap schemes, or by one and an eighth for profiling/intelligent swap schemes. Add the size of the hard disk cache, if the cache uses a high water mark to control operations. Subtract the size of the hard drive cache (unless this takes the size below zero) if the behavior is controlled by a low water mark only. Add one megabyte per simultaneous user. Add one megabyte for each large -or- long-running application likely to be running simultaneously. Subtract the total size of all the shared libraries likely to be loaded in the case just considered.

    This is a LOT more complex than 2.5x, so much so that I generally wouldn't bother using it except

  • No...not really (Score:5, Informative)

    by Chas ( 5144 ) on Wednesday August 30, 2006 @12:36AM (#16005009) Homepage Journal
    There's no real hard and fast rule anymore. And setting it against a static value (like physical memory) is incredibly wasteful.

    It's a much better idea to set it interactively. Use the system without adjusting the Virtual Memory for a while. Then take a look at your usage and set your virtual memory against that usage.

    For instance.

    If you're in a Windows machine, let it run normally for a few days.
    Run everything the way you normally use it.
    Multiple apps, multiple instances, games out the ass, everything.

    Then open up the Task Manager and look at the Performance tab.
    Take a look at the Peak value under "Commit Charge".
    Set your virtual memory, min and max, at about 10% above that value to leave yourself a little headroom.
    Normally this will be enough to deal with your maximum swap requests.

    If, somehow, you begin bumping against virtual memory limits again AFTER that, bump it another 10%.

    If you still have problems, keep bumping it in 10% increments, and start looking for apps that are memory leaking.
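    To make that concrete with invented numbers: if the peak commit charge reads 1,843,200 K (about 1.8GB), then 1.8GB plus 10% is roughly 2GB, so you'd set both min and max to about 2048MB.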
  • by EvanED ( 569694 ) <{evaned} {at} {gmail.com}> on Wednesday August 30, 2006 @12:40AM (#16005025)
    Did you set the options in Firefox that disable caching of back pages? It by default stores (I think) prerendered pages (or at least a representation a lot closer to a rendered page than the HTML) in RAM so if you hit back it can come up nearly instantly. Before I turned that off, I'd have to restart Firefox every couple days because it would start to eat up so much RAM. The processes tab in task manager would tell me that it would routinely use several hundred megs. I have a post in my LiveJournal where I complain that it was using 700 MB.

    Heck, even with that option turned off, it says the mem usage is at 210MB now. That's with 17 tabs in three windows open.
  • Re:Depends... (Score:2, Informative)

    by Griffyn ( 948838 ) on Wednesday August 30, 2006 @12:47AM (#16005053)
    for obvious reasons - swap is used to store what's currently in RAM when going into hibernate mode

    Nope. There is a separate file called hibernate.fil (I think) that's stored in the root folder of the same drive containing your Windows folder.

  • by spuzzzzzzz ( 807185 ) on Wednesday August 30, 2006 @01:09AM (#16005122) Homepage
    ...unlike Linux, which uses it only when there's no other way (reactive)
    Not quite true. Read this. [kerneltrap.org]
  • by spongman ( 182339 ) on Wednesday August 30, 2006 @01:24AM (#16005173)
    start TaskMgr.exe, go to the processes tab, View->Select Columns: "I/O Reads", "I/O Writes". Sort by Process Name, wait a while, see which processes are still reading/writing. Kill them.
  • Re:Mac OS X swap (Score:1, Informative)

    by Anonymous Coward on Wednesday August 30, 2006 @01:55AM (#16005273)
    Read here [columbia.edu] about moving swapfiles on Tiger (or Panther).
  • by mathew7 ( 863867 ) on Wednesday August 30, 2006 @02:12AM (#16005325)
    "One other (Linux) server has big processes (1Gig or more) and when they have to swap out, watch the machine fall apart while the process is swapped out - it takes a while to write 1 gig of ram into swap! Since the process is large, swap needs to be large.... Just hope that server needs to have 3 or 4 multi gig processes swapped out...."

    You seem to miss the idea of swap. All modern OSes, combined with processors (from the 386 onwards in the x86 range), will swap 4KB pages. So if memory is needed, the least recently accessed page (4KB) in RAM will be swapped out (and the algorithm continues until no more RAM is required). When one of the swapped 4KB pages is needed, it's retrieved from swap into free RAM (if no free RAM is available, another page is swapped out first).
    I don't think it swaps out all of your application, and if it does, you should increase your RAM. The thing is that your app can try to access the "just swapped" page, which is a performance killer. Swapping is done in page chunks, not app chunks.
    PS: the term pagefile probably comes from Windows 95 because it contains "pages". All modern processors have an MMU (http://en.wikipedia.org/wiki/Memory_management_unit), which divides memory into pages of 4-64KB.
  • by qazsedcft ( 911254 ) on Wednesday August 30, 2006 @02:35AM (#16005390)
    In Windows 2000/XP you can't disable swap memory - plain and simple. Swap size can be reduced, that's all, but Windows will only follow your setting until need arises (and that won't be when Windows has run out of RAM, as others have explained).

    You apparently do not have a Win XP SP2 machine to check this out. In the control panel there is an option "No page file" which is not the same as setting the size to zero. I've been running my machine without a pagefile for over a year without any problems whatsoever.
  • by ookaze ( 227977 ) on Wednesday August 30, 2006 @03:40AM (#16005554) Homepage
    Yes, swap is useful in any situation when you don't know if you'll have enough RAM to run everything.
    And RAM can be so "cheap" as you say, but disk is still far cheaper.

    With swap, you also have some way to find out that you're running out of memory. You can monitor it, and you can also sometimes see a performance decrease (if it's a desktop), though you'll probably not notice it with SCSI disks. But you still have the monitoring, right?
  • by shani ( 1674 ) <shane@time-travellers.org> on Wednesday August 30, 2006 @04:04AM (#16005645) Homepage
    Or to put it a third way, is there any situation where swapping is helpful, anymore?

    Sure. Consider Andrew Morton's logic:

    http://kerneltrap.org/node/3000 [kerneltrap.org]

    In your average program, most code never gets executed, and most data is never used. For a long-lived process, swapping out the unnecessary bits frees the memory for disk cache.

    While you may improve overall performance, by minimizing the average completion time for operations, the downside is responsiveness. As a user, I don't care if Firefox reads cached images a few milliseconds faster (by reading from cache instead of disk) if I have to wait 3 seconds for Thunderbird to respond to my clicks (because it has to swap in) after I've been browsing for a while. Average speed be damned! :)

    Having said that, I just set my swappiness to 100.
  • by paganizer ( 566360 ) <thegrove1@hotmail . c om> on Wednesday August 30, 2006 @04:13AM (#16005678) Homepage Journal
    Here is The True Word from an MCSE of long standing on the subject of virtual memory and Windows (please note that nearly every person who has worked with Windows will have a different true word):
    All windows: defrag your drives first.
    Win98SE: if RAM is >= 256MB, set Min equal to the amount of RAM and Max to 1.5 times the amount of RAM.
    If less than 256MB, set Min to 1.5 times the amount of RAM, and Max to 2.5 times or 512MB, whichever comes first.

    WinNT:
    If you have 2 drives (not two partitions, two drives), create swap files on both drives with min/max equal to the amount of physical memory in the system. This is a way to make WinNT scream when it comes to disk writes.
    Otherwise, if RAM is less than 256MB, set virtual memory, both min & max, to twice your amount of RAM; if you have >= 256MB, set min & max to 1.5 times the amount of RAM.

    Win2k: if you have less than or equal to 512MB, set min to 1.5 times RAM and max to 2 times RAM. If you have greater than 512MB, set swap min/max to 1.5 times RAM.
    If you ever get an "out of virtual memory" error, defrag and add 100MB to min/max.
    If you have >= 2GB RAM, disable swap, unless you are running Server, in which case 4GB is the magic number.
    The 2-drive swap method just doesn't seem to work as well on Win2k as it did on WinNT; no clue why, but I've tested it repeatedly.

    WinXP Pro: Luser. Why are you running the Windows ME of the 21st century? At least you aren't running WinXP Home, though. Just follow the guidelines for Win2k, since that is all WinXP Pro is: Win2k with add-on crap, no changes to the kernel or underlying function.

    Win2003: No clue.

    Vista: Not only have no clue, but I promise you I never will.
  • by Vlad_the_Inhaler ( 32958 ) on Wednesday August 30, 2006 @04:30AM (#16005744)
    That would make a nice variation on those 'BSD is dead' trolls - 'BSD kills your memory hogs'.

    I work under a mainframe OS, and before VM was introduced (20 years ago?) the OS would happily swap processes or unused parts of processes out, but it would kill any process which tried to allocate more memory than was physically available.
  • Re:lots (Score:1, Informative)

    by Anonymous Coward on Wednesday August 30, 2006 @05:04AM (#16005850)
    A gig of RAM costs 50 times more than a Gig of HDD.

    With this in mind, the no-brainer option would be to set an extra-large maximum swap file size and then set your computer to only use it when absolutely necessary (e.g. in Windows use "ConservativeSwapFileUsage = 1" in system.ini). Minimum chance of running out of memory space, minimum unnecessary slowdown.
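    If memory serves, on the Win9x family that line goes in the [386Enh] section of system.ini, i.e. something like:
    [386Enh]
    ConservativeSwapfileUsage=1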
  • by gameforge ( 965493 ) on Wednesday August 30, 2006 @05:06AM (#16005860) Journal
    You're right about all of that, except that no 32-bit OS should be forced to deal with more than 4GB of physical address space.

    One of the above posts outlines exactly what the min and max settings for each version of Windows' pagefile should be, except of course XP's. :) Still, with XP, the max pagefile size combined with your total RAM size is not more than 4GB by default. This is on purpose. If all of the processes' virtual-to-physical memory maps contain unique addresses within a 4GB address space, then Windows doesn't have to go through the map and adjust every address every time the program is swapped one way or the other. In reality, as you approach the limit, it still will; but at least if all the tables are on chip and ready to go, it's speedy about it.

    With WinXP, you can have 4GB of RAM and another 4GB of pagefile space and total an 8GB commit limit (and that is the REAL limit without PAE); however, as said, doing so slows it down.

    PAE is faster (but not as fast as 4GB actually exists or not - you can even do /NOLOWMEM to ensure that device drivers get fed 64-bit physical addresses). Run some benchmarks.

    You can safely put 4GB of RAM into your computer and forget about pagefiles. Windows will love you for it.
  • by Anonymous Coward on Wednesday August 30, 2006 @05:11AM (#16005870)
    Not true at all. The WinXP kernel is significantly different from the Win2K kernel in one very important area of memory management. The Win2K kernel kept the registry hives in the paged pool. This means they took up kernel address space and prototype PTE entries. As a side effect, this limited the number of hives in the system, which was problematic for terminal services.

    WinXP and newer do something similar to the NT cache manager and map hive views in and out as they need them. This means that WinXP can deal better with low memory pressure in terms of pageable kernel data.

    So the rules for setting up optimal swapfiles are different between XP and 2K.

    Sorry, but MCSE is no match for somebody familiar with the kernel in detail. :)
  • by Damouze ( 766305 ) on Wednesday August 30, 2006 @05:51AM (#16005988)
    In my opinion the rule of thumb still applies. No matter what you are going to do with your system, it never hurts to have (more than) enough swap space. Like several people mentioned earlier: the more free RAM you have, the more RAM is available to the OS for the disk and buffer caches.

    There is, however, a potentially severe case if you have two processes accessing the same resource simultaneously. Every good computer scientist and programmer knows that such a case is the ultimate no-no in software engineering. Unfortunately, there are conceivable scenarios in which it will happen nonetheless.

    Back when I was still working on my Bachelor's degree, a couple of friends and I tried to simulate this theoretical possibility and see what happens. We had two processes, called 'ss1' and 'ss2', accessing the same resource at the same time:

    ss1 would create a file sized X and go into an endless loop writing random bytes at random positions in the file. ss2 would open that file and mmap() it. That way it would be in the buffer cache as long as data was written to it (and since data was written to it by the other process, that was actually the case). The result of the mmap() was a character array and ss2 would write random bytes to that character array at random positions.

    We tested this on the following OSes: Linux 2.0, Linux 2.2, Solaris x86 (can't remember which version), FreeBSD 3.3, Irix 4.0.5, 5.3 and 6.2 and Windows NT 4.0 Workstation. We ran the application with administrative or superuser privileges.

    As long as the size (X) did not approach half the physical amount of RAM present in the machine, there were no problems whatsoever. However, as soon as X passed that threshold, bad things started to happen. The only exception was Windows NT, which simply aborted the process with a page fault and an out-of-memory error.

    All the aforementioned machines that were running Linux or a variant of UNIX suffered the same problem: a non-responsive system. The processes could only be terminated by doing a hardware reset of the machine. A kill -9 of the two processes did not work, because they were in a non-interruptible sleep. And the reason they were is that the OS was trying to fulfill the resource demands of the processes by swapping out other stuff, including, as we theorized, other parts of the file that were not "hot" at that time.

    This piece of intentionally badly-written software and intentionally bad system administration of course proved that, while it was highly unlikely to happen, it could happen, and would have dire consequences for the system.

    Ordinarily, one should never run programs as a privileged user unless one absolutely has to and the two competing processes would have been terminated by the OS had they not run as root on the Linux and UNIX machines. But regardless of whether the OS in question uses the optimistic or pessimistic approach when allocating resources for a new process, the net result of having such a (in our case intentionally) badly written piece of software is the same: the system becomes non-responsive.

    In this case, it does not matter much how much swap space you have; the only difference is that if you have only a small amount of swap space, the "dreaded" OOM killer starts to kill off processes very early instead of when it is already too late (and virtually incapable of functioning properly and actually doing its job).

    Personally, I would still recommend using at least the same amount of swap space as you have physical RAM, and preferably twice that amount. Bad things happen all the time, and it is better to be prepared for them. Therefore, the rule of thumb still applies.
  • by Anonymous Coward on Wednesday August 30, 2006 @06:21AM (#16006069)
    The demand for virtual memory in Minix drove Linus to create Linux, and now people don't want to use it.

    Disk is so cheap and plentiful that I now configure swap at ~8x DRAM, so I can suspend large jobs and still start new jobs instead of having to kill a process. I also install max DRAM to minimize swapping and paging. But I'm a scientist working w/ large datasets and don't do Windows or web stuff, so your mileage may vary.

    old, bearded, Unix guy
  • by ajcarr ( 467073 ) on Wednesday August 30, 2006 @06:59AM (#16006168)
    The VM Size listed in Activity Monitor is not the size of the swapfiles. I'm running on a machine with 1 GB RAM; my VM size is 6.53 GB, but I only have 128 MB of swapfiles. You might find it interesting to install MenuMeters http://www.ragingmenace.com/software/menumeters/ [ragingmenace.com] to keep track of what's going on.
  • Re:Mac OS X swap (Score:2, Informative)

    by Anonymous Coward on Wednesday August 30, 2006 @08:26AM (#16006428)
    So there's no limit to how much is used, nor a predefined minimum amount of space used, the swap space expanding and contracting as needed. That seems reasonable.

    I hope that there is some upper limit on how much is used! It's bad enough when a memory-leaky process uses up all of your RAM, but all of your hard-drive space too (in the form of swap)? Yeesh!
  • by walt-sjc ( 145127 ) on Wednesday August 30, 2006 @08:47AM (#16006541)
    But that's the ONLY way to fully answer the question.

    The old guideline of swap size = 2x RAM size still holds, in that increases in RAM usage (application bloat) / system memory automatically mean swap space increases. But that was a general-purpose guideline, and the guidance has ALWAYS been to set your swap space size to what you need based on actual usage. Your only other option is to just set it to a ridiculously high number.

    If you are concerned about something yet are unwilling to spend 10 minutes educating yourself on how to deal with your concerns, then you have to live with the current situation or pay someone to handle your concerns for you. There is no magic bullet.
  • by walt-sjc ( 145127 ) on Wednesday August 30, 2006 @09:16AM (#16006693)
    I know you're just trolling, but...

    Just because the kernel has this tuning feature does not mean everyone has to muck with it. Having the capability to tune / customize is what makes linux flexible enough to use on devices from watches to supercomputing clusters / mainframes. If you don't want to make your own Linux Myth PVR, get a Linux based TIVO that doesn't require any mucking around at all. Linux, the kernel, has been in the mainstream for YEARS.
  • by VdG ( 633317 ) on Wednesday August 30, 2006 @11:50AM (#16007944)
    My own perspective is from UNIX servers. As I keep telling people, any use of swap/paging space is bad for performance, so the ideal solution is to add RAM. That's not always practicable, so the real answer to how much swap space to allocate is "enough".

    I still get software suppliers (mostly SAP AG) moaning that we've got to allocate 3.5x RAM, which is arrant nonsense. It might have been necessary years back, when 2GB was a lot of memory. Now I've got servers with tens of GBs and I really don't want to waste hundreds of GBs of disk on swap space which simply isn't going to be used. Sure, disk is cheap, but it all adds up. One of the larger servers I support has 128GB of RAM and 32GB of paging space (only 1% of which is actually used at the moment). A few servers like that and you're saving TBs of disk space.

    Of course, if you're going to keep your swap space to a minimum, you need to have good monitoring in place so that you can extend it before it becomes a problem if something unexpected happens, and it's sensible to be a bit generous about it. We do occasionally have problems when processes suddenly start writing vast amounts of data to memory, but I doubt that having loads more swap space would help in those cases, as there are usually bugs in the code. Fortunately root can usually still get in (if you're patient), identify the offending processes and kill them.

    It also helps to have an OS that makes effective use of memory. What I know best is AIX, and a few years back (quite a lot of years in IT terms!) the memory allocation processes were changed so that even if you requested an enormous amount of memory, it wasn't really allocated until you actually started to use it (i.e. put some data in there). That made a considerable difference. I would expect any modern and efficient OS to do something similar.

    Paging can be dreadful for performance as you get a multiple hit: the process that needs swapped-out pages runs slow as it waits for data to be paged in; your system as a whole also runs slowly as CPU cycles are taken up servicing the paging requests; your I/O subsystem suffers as it spends time reading and writing to/from paging spaces rather than actually doing useful I/O. It's one of the first things I always target when I'm investigating performance problems on a server, just as it was a couple of decades ago when I was doing the same things with MVS.
  • Re:Virtual what ? (Score:3, Informative)

    by dal20402 ( 895630 ) * <dal20402@ m a c . com> on Wednesday August 30, 2006 @12:05PM (#16008076) Journal

    Power Mac G5
    OS X.4.7
    3GB physical RAM
    64MB swap file, which has never grown bigger since I added the extra RAM

    ...so, no, at least on OS X there's no point in having 6GB swap files.

  • IMO 1GB is too much. (Score:3, Informative)

    by TheLink ( 130905 ) on Wednesday August 30, 2006 @12:28PM (#16008282) Journal
    I think most people who think that swap should be in terms of multiples of physical RAM are missing the point.

    How much swap you have should be related to the longest you are willing to wait for stuff to be swapped in and out.

    Adjust your swap so that your computer is as slow as you can tolerate when it runs out of memory.

    For example: if you have a typical ATA drive, random read transfers would be about 10-15MB/sec. So if you ever need to swap in 400MB of stuff, you'd have to wait about 30-40 seconds before all of it is read in.

    What complicates things is that there are some applications/programs that allocate memory they will practically never use, so you may want to add swap for that.

    So the swap amount would be something like: total swap = "permanently swapped out unused stuff" + (seconds willing to wait * random read speed).
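    Plugging in made-up numbers: say 150MB of allocated-but-never-touched stuff, a 30-second pain threshold, and 12MB/sec random reads - that gives 150MB + (30 x 12MB) = 150MB + 360MB, or roughly 512MB of swap.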

    Of course virtual mem doesn't really behave exactly like that - when you are low on RAM the computer will be continuously reading in the program it needs, while writing out the stuff it thinks is less important, but basically you're kind of reliving the old days of "drum/disk memory" - where you ran stuff from drum or disk. And that's really slow.

    The problem with running out of memory is that under some conditions some operating systems (e.g. Linux) can mess up and kill the wrong process to free memory. I think this has improved somewhat - but Linux used to be pretty stupid and kill pretty important stuff...

    This is mainly because of the default overcommitting of memory. With overcommit, the O/S can say "fine" even if there really isn't enough memory, but when it turns out you really do need it all, the O/S goes around looking for stuff to kill...

    If you turn off overcommit things can become safer, but you'll need enough memory to hold all allocated memory even if unused.
  • Swap... (Score:2, Informative)

    by Corwn of Amber ( 802933 ) <corwinofamber@@@skynet...be> on Thursday August 31, 2006 @10:00AM (#16015447) Journal
    Zero. ZERO.

    Zero swap. Buy enough ram, deactivate swap, watch your computer run as fast as it should.
  • My experience (Score:2, Informative)

    by Frozen Void ( 831218 ) on Friday September 01, 2006 @01:33PM (#16025231) Homepage
    I haven't used swap for about six years, starting with a 256MB RAM machine running Win98.
    Thing is, I ran into out-of-memory errors when running a lot of stuff, though rarely (Windows takes 35MB by itself here). Now with 512MB I can run practically anything.

    My advice: turn off swap, buy more RAM.

     
