How Much Virtual Memory is Enough?
whitroth asks: "Ten years ago, Received Wisdom said that virtual memory should be, on average, two to two-and-a-half times real memory. In these days, where 2GB of RAM is not unusual, and many times that is not uncommon, is this now unreasonable? What's the sense of the community as to what is a reasonable size for swap these days?"
Re:Not much, anymore... (Score:5, Informative)
1GB ram using XP (Score:3, Informative)
Depends... (Score:1, Informative)
LVM (Score:3, Informative)
Well, there's what I do and then there's reality (Score:4, Informative)
However, you might just do what I do and try out different values to figure out what works. If you're talking about a Linux system, a real-time memory/swap usage graph can be added to most window managers so that you can see what's happening. You could also try to estimate usage based on what the machine is expected to do.
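For example, here's a minimal sketch of checking usage from a shell instead of a graph (assuming the standard procps tools; exact columns vary a little by distro):
# show memory and swap usage in MB, refreshing every 2 seconds
$ free -m -s 2
# or watch swap-in/swap-out activity in the si/so columns
$ vmstat 2
If si/so stay at zero during your normal workload, your current swap size is clearly not hurting you.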
Re:Not much, anymore... (Score:5, Informative)
System control panel -> Advanced -> Performance options -> Advanced -> Virtual memory.
Set to no paging.
Re:Not much, anymore... (Score:5, Informative)
# echo [0-100] > /proc/sys/vm/swappiness    (0 = avoid swapping where possible, 100 = swap aggressively)
A better question is how much memory you can address. Could your 32-bit Windows system address over 2^36 bytes of memory (64GB), for example? And could you allocate over 2GB to the Windows kernel?
Could your 64-bit Linux system address over 2^48 bytes of memory (256TB)?
Re:Not much, anymore... (Score:4, Informative)
Also, remember that suspend2 requires swap, so figure out how big an image you'll need (and how much is cache that can be freed) and get a bit more than that. My own rule of thumb is that swap should be roughly 1x to 1.5x RAM, so that I can be sure I have room for the suspend image. But I have the space, and Windows doesn't use swap for this anyway; it uses hiberfil.sys.
Pre-emptive swapping... (Score:5, Informative)
There are generally two strategies:
-The common-sense one, where you swap only when you run out of memory. This makes a lot of practical sense on systems with limited write cycles (flash-based swap, though you really should never do that anyway) and on systems that want to spin down drives to save battery. Performance-wise (this may surprise people who haven't spent time thinking about it), this can often be bad. Avoiding swapping is generally only good on systems where resource utilization is carefully managed and you know they will never swap (the IO operations of unneeded swapping can interfere with the productive activity of a constantly busy system). Such systems are actually a vast minority of systems in the world (no matter how l33t people may think themselves, they most certainly don't have a usage pattern that would be impacted by the extraneous IO of an occasional write to swap).
-Pre-emptive swapping. When the IO subsystem is idle and the system can afford to copy memory to the swap area, it does so (depending on various criteria). Generally it selects memory that hasn't been accessed much and writes it to disk, but leaves the copy in RAM if the physical memory is not immediately needed. So a fair amount of the swap used on an apparently underutilized system is duplicated in physical memory and swap space. The benefit is that if the process reads that memory back, it incurs no penalty, since the page is still in RAM even though it also exists in swap (the system may make its own decisions about which pages are the best swap candidates and write different data to disk). The payoff for writing this stuff to swap before it's needed comes when an application allocates more memory than the system has free in physical space. With the first strategy, the allocation blocks while data is written out to disk, and the application that is starting up or suddenly needs a lot of data is severely impacted. In the pre-emptive case, the system notices the condition, knows which pages already have a backup copy in swap and haven't been touched lately, and can free that memory and satisfy the malloc almost instantly.
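On Linux you can actually see this duplication: /proc/meminfo exposes a SwapCached figure, which is memory that is resident in RAM right now but also has an up-to-date copy in swap (a rough illustration, assuming a reasonably recent 2.6 kernel):
# pages currently in RAM that also exist, unmodified, in swap
$ grep SwapCached /proc/meminfo
A non-trivial SwapCached value on an idle box is exactly the pre-emptive behaviour described above, not a sign that you're short of memory.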
For those who have 1GB of RAM or so, it becomes less likely that the system will have to flush memory from physical RAM, but there is a balance to be struck between memory directly allocated by applications, the applications' memory access patterns, and what RAM you can use to buffer filesystem access. If your total application memory allocation is 75% of RAM, it may still make sense, performance-wise, to keep only 50% of your physical memory dedicated to the applications (the rest relegated to swap) and use the other 50% to buffer disk I/O.
Re:Not much, anymore... (Score:3, Informative)
The bleeding thing cannot smoothly say "You are running out of memory, setting up an emergency page file now..." without something crashing.
Fix this problem and you are cooking on gas. A modern computer should be able to accommodate every malloc up to memory + free disk space, and it can't do that easily.
Re:Excuse me while I reminisce... (Score:2, Informative)
1. The 128 KB Mac did not have a HD (though there were some companies that made disks that plugged into the floppy port).
But more importantly:
2. There was no "swap" (Virtual Memory) for the Mac OS until System 7, which wouldn't run on anything less than a Mac Plus.
old discussion (Score:1, Informative)
My advice is that you should just do the same thing as some random poster on slashdot who says "these days I set up my swap like blah blah blah" without any explanation or justification.
Re:Depends... (Score:5, Informative)
In Windows, your RAM is saved to a file called "hiberfil.sys" which is the exact size of your physical RAM. Your swap file stays exactly the way it is, otherwise you'd lose the data that was swapped to it.
In Linux, it depends on what program you are using to suspend, but typically, it's a file in
Re:Pre-emptive swapping... (Score:2, Informative)
Your issue is due to an incorrect decision somewhere (not sure where) about how much memory to make available to WoW's direct (memory allocation) and indirect (disk cache) needs. WoW IS taking advantage (directly or indirectly) of that extra memory, but it probably only makes a 0.1% performance difference and you would rather it left your other programs in RAM. That is a hard situation to tune for.
Note that there are (at least) two different ways for memory to be used even when it shows up as "free". One is via disk cache. The other is via large temporary allocations that are made, used, and then freed before they really register on the performance monitor.
Depends on if CPUs are hot-swappable (Score:2, Informative)
Generally we just used to use, say, 36 Gig local drives as (mirrored) swap for simplicity. In this environment you are probably on a SAN and people will say to move everything there, and that might be more true now than a year or two ago.
Mac OS X swap (Score:5, Informative)
Re:Depends... (Score:3, Informative)
Re:Depends... (Score:2, Informative)
Suspend2 can write it to a file instead.
See http://www.suspend2.net/features [suspend2.net]
Re:If you have enough, none (Score:3, Informative)
Even then, I'd probably replace the 5GB drive with a more modern 300GB or 400GB spindle. Create 5GB for the swap area on it, use the rest for temp directories, the xlog, and a quick-n-dirty backup location for rsync snapshots.
(Older drives are *really* slow... 5-10MB/s vs 30+ MB/s for a more modern drive. The 750GB drives do 75MB/s at the outer diameter.)
Re:Pre-emptive swapping... (Score:5, Informative)
windows knows better than you (Score:1, Informative)
Still, this is probably not a good hint to make. However great you think it is not to use the pagefile, you'd probably rather have some of that memory used for disk I/O buffers than holding super-ancient stale pages that nobody is accessing.
I think the best way to look at this is the same way as the SuperFetch-whatever flash drive feature of Vista: obviously adding 1G of DRAM is better than adding 1G of flash memory, but even if you think it's pointless, a lot of very intelligent people are finding ways to make the OS take advantage of whatever resources you can scrape up for it, and there is plenty of room for improvement and innovation in this respect.
Similarly, if you are choosing between 2G of page file and 2G of DRAM--choose the DRAM! But if you can't fit any more DRAM in, you'll wish you gave Windows the extra resources it can use to tune your performance.
swap bits (Score:2, Informative)
If I ever encounter a Linux box without swap, a quick dd to create an empty file and then swapon on the file fixes it all; you can also use multiple swap files/partitions if necessary. Same deal.
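Roughly the sequence meant here, as a sketch (run as root; the 1GB size and the /swapfile path are just examples, and newer kernels also want the file to be root-only):
# create a 1GB file, mark it as swap, and enable it
$ dd if=/dev/zero of=/swapfile bs=1M count=1024
$ chmod 600 /swapfile
$ mkswap /swapfile
$ swapon /swapfile
# confirm it's in use
$ swapon -s
swapoff /swapfile undoes it just as quickly, which is handy for experimenting with sizes.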
The only main annoyance with Mac OS VM is that it swaps out way before necessary; not as bad as Windows, but it still does some strange things that my Mac tech friends cannot fathom. It also appears not to be changeable behaviour. Correct me if I'm wrong on that, since said friends are quite annoyed by it.
Read (please!) (Score:5, Informative)
1) Page space is not swap space. There's a small distinction that's generally lost (and generally ignored). Page space is used to move individual memory pages to and from disk; swap space is technically for moving entire processes out to disk. The difference is mainly a matter of when your OS was created (i.e., its technological underpinnings) and there's no need to get into it now... but the difference is meaningful.
2) Page space is not *free*. There's a misconception that if you have 500G of disk space then "how does it hurt" to put 8G of swap on 4G RAM. Depending on your OS, the size of the page table can grow remarkably depending on how much memory (RAM + VM) is allocated. This means that adding 2G of page space may not cost anything, but adding 2.5G may suddenly take up another chunk of real, non-pageable memory because the page table cannot itself be paged. This means that if your app is thrashing, then adding page space may make it worse.
3) Even with lots of RAM, it's still often a good idea (depending on your usage) to have some page space. Modern OSes will still page out unused pages to use RAM for better stuff. I.e., if you have a huge file open in a graphics application, but are not actively using that application for a length of time (an hour, say) then the OS will page it to disk. This makes better use of your physical RAM. On some OSes the OS will use page space even if free RAM is available. It can then toggle a page out by flipping a bit in the page table and not have to do an expensive write.
4) On some systems you can control whether memory is overcommitted. Applications tend to request a lot more memory from the OS than they'll actually use. Overcommitting is useful in many instances, but again it depends on your usage. If you're running a single application that doesn't dynamically allocate memory, then you can run pageless. With overcommit disabled, a new app that requests memory that isn't available simply gets a failure on the malloc request, which can be desirable in some circumstances. (On Linux this is tunable; see the sketch at the end of this post.)
5) There are benefits to running page space on a separate disk, but for the vast majority of home users, the difference is negligible. This applies to Windows and Linux. Once you start stressing the VM subsystem then a separate disk is highly desirable.
6) You can create page files on Unix/Linux. It's not desirable generally because of the extra filesystem overhead and possibility of fragmentation. But hey, in a pinch it works.
7) Why this 2x RAM rule? A lot of it comes from old VM subsystems that needed a "picture" of the entire memory space. This made the page-out algorithms easier to code. Newer algorithms don't require the 2X RAM.
KL
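On Linux, the overcommit behaviour mentioned in point 4 is tunable. A sketch using the standard vm.* sysctls (0 = heuristic overcommit, 1 = always overcommit, 2 = strict accounting against swap plus a percentage of RAM):
# show the current policy
$ cat /proc/sys/vm/overcommit_memory
# strict mode (run as root): allocations beyond the commit limit fail at malloc time
# instead of triggering the OOM killer later
$ echo 2 > /proc/sys/vm/overcommit_memory
$ echo 80 > /proc/sys/vm/overcommit_ratio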
Re:Not much, anymore... (Score:5, Informative)
VA is Virtual Address space. For a 32-bit processor, you have 32 bits of virtual address space - each process can address no more than 3G of it (on XP, with the /3GB boot switch; 2G by default).
If you have more than one process, you have more than one virtual address space. So the fact that each process can only address 3G doesn't matter - with 30 processes running, you could theoretically have 90G of VA allocated.
What's important is VM.
VM is virtual memory. VM is what backs the pages that are mapped into the VA.
The maximum amount of VM you can have allocated on a machine is measured by the commitment limit on the machine, which is typically measured as "physical RAM + page file space". If overall VM always stays below physical RAM, you don't need a paging file. But if it EVER goes above it, you're toast if you don't have a paging file. All those pages from the boot process that normally would have been discarded to the paging file (or were allocated by daemons that started during boot but haven't done anything since then) stick in the craw of the memory manager taking up space that COULD be used for your application, but can't because you've not told the OS where to put them.
That's why you have a paging file - it gives the OS a place to put the mouldy old pages that were allocated by apps that aren't actively doing things so your application can re-use the memory that those apps were using.
Btw, it's my understanding that ALL modern virtual memory based operating systems have essentially the same VM architecture - Linux, Windows, whatever. They both use paging files for essentially the same things - discarding writable pages that are not in current use by applications (read-only pages can typically be loaded from the binary image).
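Linux exposes the same bookkeeping, if you want to watch it: /proc/meminfo reports CommitLimit (roughly swap plus a percentage of RAM; only enforced when strict overcommit is enabled) and Committed_AS (the total VM currently committed). A quick sketch:
# how much VM the kernel would allow vs. how much is currently committed
$ grep -E 'CommitLimit|Committed_AS' /proc/meminfo
If Committed_AS regularly exceeds your physical RAM, running without a paging file is asking for trouble.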
First, understanding virtual memory (Score:2, Informative)
The 2.5x case comes from two simple rules of thumb. Firstly, you need enough to be able to hold the whole of what is in RAM now, plus everything you want to swap in, plus enough to minimise fragmentation and cover overheads. Secondly, the more swap space you have, the more metadata you need to manage it AND the greater the latency to perform any kind of swap AND the more swapping you need to do to run all active processes. Too much virtual memory is a Bad Thing. Having 2.5 x RAM was considered a good compromise and it is one I use to this day.
Today, both rules of thumb still hold. The largest single object you can have is one that fills ALL of RAM after the kernel, and you absolutely must have sufficient swap space to be able to dump that object to disk. If you don't, then the kernel will either panic, kill the process or cause any other activity to behave unpredictably. It won't have the resources to behave correctly. Any number of these objects could, in theory, be swapped out - but remember that they don't run when on disk, only when in memory, so the more you have, the smaller the timeslice each will get - and the sum of those timeslices will go down, as you need to allow time for the swap to take place.
However, today isn't quite the same as yesterday. The difference in performance between hard drives and RAM has changed. There is better caching on the drive. The swap algorithms are smarter and there is more understanding of what metadata is useful and what really has no value. Process handling is also smarter, so processes aren't necessarily run in order - round-robin scheduling is used for some time-critical stuff on Linux, but most applications use a more relaxed system.
Also, programming has changed. There is greater re-use of tools and libraries - well, sometimes - and this means that the largest object you really have to handle at a time is much smaller than the size of RAM. A certain fraction of what's left will be used by shared libraries and shared resources.
Lastly, because hard drives are reasonably cheap and most PCs can handle several at the same time, you are far far better off getting a drive and dedicating it to swap. This is good for many reasons, not least because the drive won't have to move the read heads from data space to swap space and back. You eliminate a vast chunk of seek time, reduce the stress on the drive AND can experiment with different swap sizes without risking losing data.
I would therefore STRONGLY advise using the classic 2.5x and a different hard drive, but if you can't do this for some reason and want an updated formula, here is what I would suggest:
The meaningful RAM will be equal to the total RAM minus the space used by the kernel and vital, non-swappable resources/daemons. Then:
-Multiply this by three for 7200RPM hard drives, or by five for 15000RPM hard drives.
-Multiply by one and a quarter for basic swap schemes, or by one and an eighth for profiling/intelligent swap schemes.
-Add the size of the hard disk cache if the cache uses a high-water mark to control operations; subtract the size of the hard drive cache (unless this takes the size below zero) if its behaviour is controlled by a low-water mark only.
-Add one megabyte per simultaneous user.
-Add one megabyte for each large or long-running application likely to be running simultaneously.
-Subtract the total size of all the shared libraries likely to be loaded in the case just considered.
This is a LOT more complex than 2.5x, so much so that I generally wouldn't bother using it except
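For what it's worth, here is the core of that formula as a quick shell calculation (my own sketch of the poster's rules; the disk-cache, per-user and shared-library corrections are left out, the numbers are just example inputs, and everything is in MB):
#!/bin/sh
ram_mb=2048          # total RAM
reserved_mb=256      # kernel plus vital non-swappable daemons (estimate)
drive_factor=3       # 3 for 7200RPM drives, 5 for 15000RPM drives
scheme_pct=125       # 125 = basic swap scheme (x1.25), 112 = intelligent scheme (~x1.125)
meaningful=$((ram_mb - reserved_mb))
swap_mb=$((meaningful * drive_factor * scheme_pct / 100))
echo "suggested swap: ${swap_mb} MB (before cache/user/library corrections)"
With those inputs it suggests about 6.7GB of swap for a 2GB machine, which shows how quickly this heuristic outgrows the plain 2.5x rule.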
No...not really (Score:5, Informative)
It's a much better idea to set it interactively. Use the system without adjusting the Virtual Memory for a while. Then take a look at your usage and set your virtual memory against that usage.
For instance.
If you're in a Windows machine, let it run normally for a few days.
Run everything the way you normally use it.
Multiple apps, multiple instances, games out the ass, everything.
Then open up the Task Manager and look at the Performance tab.
Take a look at the Peak value under "Commit Charge".
Set your virtual memory, min and max, at about 10% above that value to leave yourself a little headroom.
Normally this will be enough to deal with your maximum swap requests.
If, somehow, you begin bumping against virtual memory limits again AFTER that, bump it another 10%.
If you still have problems, keep bumping it in 10% increments, and start looking for apps that are memory leaking.
Re:your next box needs swap (Score:3, Informative)
Heck, even with that option turned off, it says the mem usage is at 210MB now. That's with 17 tabs in three windows open.
Re:Depends... (Score:2, Informative)
Nope. There is a separate file called hiberfil.sys that's stored in the root folder of the same drive containing your Windows folder.
Re:heavy windows usage = 0, anything else = defaul (Score:4, Informative)
Re:Not much, anymore... (Score:3, Informative)
Re:Mac OS X swap (Score:1, Informative)
Re:Not much, anymore... (Score:4, Informative)
You seem to miss the idea of swap. All modern OSes, combined with processors from the 386 onward in the x86 range, swap 4KB pages. So if memory is needed, the least recently accessed page (4KB) in RAM will be swapped out (and the algorithm continues until no more RAM is required). When one of the swapped 4KB pages is needed, it's retrieved from swap into free RAM (if no free RAM is available, it swaps out another page).
I don't think it swaps out all of your application, and if it does, you should increase your RAM. The risk is that your app may try to access a "just swapped" page, which is a performance killer. Swapping is done in page chunks, not app chunks.
PS: the term pagefile probably comes from Windows 95, because it contains "pages". All modern processors have an MMU (http://en.wikipedia.org/wiki/Memory_management_unit [wikipedia.org]).
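If you want to see the page granularity and the actual paging traffic on a Linux box, something like this works (a sketch; getconf and /proc/vmstat are standard, though counter names can differ between kernel versions):
# page size in bytes - usually 4096 on x86
$ getconf PAGESIZE
# pages swapped in and out since boot
$ grep -E 'pswpin|pswpout' /proc/vmstat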
Re:Not much, anymore... (Score:2, Informative)
You apparently do not have a Win XP SP2 machine to check this out. In the control panel there is an option "No page file" which is not the same as setting the size to zero. I've been running my machine without a pagefile for over a year without any problems whatsoever.
Re:Is swapping obsolete? (was:Rules of thumb are d (Score:3, Informative)
And RAM can be so "cheap" as you say, but disk is still far cheaper.
With swap, you also have some way to find out that you're running out of memory. You can monitor it, and you can also sometimes see a performance decrease (if it's a desktop), though you'll probably not notice it with SCSI disks. But you still have the monitor, right?
Re:Is swapping obsolete? (Score:4, Informative)
Sure. Consider Andrew Morton's logic:
http://kerneltrap.org/node/3000 [kerneltrap.org]
In your average program, most code never gets executed, and most data is never used. For a long-lived process, swapping out the unnecessary bits frees the memory for disk cache.
While you may improve overall performance, by minimizing the average completion time for operations, the downside is responsiveness. As a user, I don't care if Firefox reads cached images a few milliseconds faster (by reading from cache instead of disk) if I have to wait 3 seconds for Thunderbird to respond to my clicks (because it has to swap in) after I've been browsing for a while. Average speed be damned!
Having said that, I just set my swappiness to 100.
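For reference, the knob being referred to, as a sketch (assumes a 2.6-era or newer kernel with the usual sysctl tooling):
# one-off change, as root
$ sysctl -w vm.swappiness=100
# or make it survive reboots
$ echo 'vm.swappiness = 100' >> /etc/sysctl.conf
Lower values tell the kernel to prefer dropping page cache over swapping out process pages; 100 does the opposite.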
Re:Not much, anymore... (Score:5, Informative)
All windows: defrag your drives first.
Win98SE: if RAM is >= 256MB, set the Min setting equal to the amount of RAM and Max to 1.5 times the amount of RAM.
If less than 256MB, set the Min setting to 1.5 times the amount of RAM, and Max to 2.5 times or 512MB, whichever comes first.
WinNT:
If you have 2 drives (not two partitions, two drives), create swap files with min/max equal to the amount of physical memory in the system on both drives. This is a way to make WinNT scream when it comes to disk writes.
Otherwise, if RAM is less than 256MB, set virtual memory, both min and max, to twice your amount of RAM; if you have >= 256MB, set min and max to 1.5 times the amount of RAM.
Win2k: if you have less than or equal to 512MB, set min to 1.5 times RAM and max to 2 times RAM. If you have greater than 512MB, set swap min/max to 1.5 times RAM.
If you ever get an "out of virtual memory" error, defrag and add 100MB to min/max.
If you have >= 2GB RAM, disable swap, unless you are running Server, in which case 4GB is the magic number.
The 2-drive swap method just doesn't seem to work as well on Win2k as it did on WinNT; no clue why, but I've tested it repeatedly.
WinXP Pro: Luser. Why are you running the Windows ME of the 21st century? At least you aren't running WinXP Home, though. Just follow the guidelines for Win2k, since that is all WinXP Pro is: Win2k with add-on crap, no changes to the kernel or underlying function.
Win2003: No clue.
Vista: Not only have no clue, but I promise you I never will.
Re:Not much, anymore... (Score:3, Informative)
I work under a mainframe OS, and before VM was introduced (20 years ago?) the OS would happily swap processes or unused parts of processes out, but it would kill any process which tried to allocate more memory than was physically available.
Re:lots (Score:1, Informative)
With this in mind, the no-brainer option would be to set an extra-large maximum swap file size and then set your computer to only use it when absolutely necessary (e.g. in Windows 9x, use "ConservativeSwapFileUsage=1" in system.ini). Minimum chance of running out of memory space, minimum unnecessary slowdown.
Re:Not much, anymore... (Score:3, Informative)
One of the above posts outlines exactly what the min. and max. settings for each version of Windows' pagefile should be, except of course XP's.
With WinXP, you can have 4GB of RAM and another 4GB of pagefile space and total an 8GB commit limit (and that is the REAL limit without PAE); however, as said, doing so slows it down.
PAE is faster (but not as fast as 4GB actually exists or not - you can even do
You can safely put 4GB of RAM into your computer and forget about pagefiles. Windows will love you for it.
Re:Not much, anymore... (Score:2, Informative)
WinXP and newer do something similar to the NT cache manager and map hive views in and out as they need them. This means that WinXP can deal better with low memory pressure in terms of pageable kernel data.
So the rules for setting up optimal swapfiles are different between XP and 2K.
Sorry, but MCSE is no match for somebody familiar with the kernel in detail.
The rule of thumb still applies (Score:2, Informative)
There is, however, a potentially severe case if you have two processes accessing the same resource simultaneously. Every good informatician and computer programmer knows that such a case is the ultimate no-no in software engineering. Unfortunately, there are conceivable scenarios in which it will happen nonetheless.
Back when I was still working on my Bachelor's degree, a couple of friends and I tried to simulate this theoretical possibility and see what happens. We had two processes, called 'ss1' and 'ss2', accessing the same resource at the same time:
ss1 would create a file sized X and go into an endless loop writing random bytes at random positions in the file. ss2 would open that file and mmap() it. That way it would be in the buffer cache as long as data was written to it (and since data was written to it by the other process, that was actually the case). The result of the mmap() was a character array and ss2 would write random bytes to that character array at random positions.
We tested this on the following OSes: Linux 2.0, Linux 2.2, Solaris x86 (can't remember which version), FreeBSD 3.3, Irix 4.0.5, 5.3 and 6.2 and Windows NT 4.0 Workstation. We ran the application with administrative or superuser privileges.
As long as the size (X) did not approach half the physical amount of RAM present in the machine, there were no problems whatsoever. However, as soon as X passed that threshold, bad things started to happen. The only exception was Windows NT, which simply aborted the process with a page fault and an out-of-memory error.
All the aforementioned machines that were running Linux or a variant of UNIX suffered the same problem: a non-responsive system. The processes could only be terminated by doing a hardware reset of the machine. A kill -9 of the two processes did not work, because they were in a non-interruptible sleep. And the reason they were is that the OS was trying to fulfill the resource demands of the processes by swapping out other stuff, including, as we theorized, other parts of the file that were not "hot" at that time.
This piece of intentionally badly-written software and intentionally bad system operatorship of course proved that, while it was highly unlikely to happen, it could happen and would have dire consequences for the system.
Ordinarily, one should never run programs as a privileged user unless one absolutely has to and the two competing processes would have been terminated by the OS had they not run as root on the Linux and UNIX machines. But regardless of whether the OS in question uses the optimistic or pessimistic approach when allocating resources for a new process, the net result of having such a (in our case intentionally) badly written piece of software is the same: the system becomes non-responsive.
In this case, it does not matter much how much swap space you have; the only difference is that if you have only a small amount of swap space, the "dreaded" OOM killer starts to kill off processes at a very early hour, instead of when it is already too late (and it is virtually incapable of functioning properly and actually doing its job).
Personally, I would still recommend using at least the same amount of swap space as you have physical RAM, and preferably twice that amount. Bad things happen all the time, and it is better to be prepared for them. Therefore, the rule of thumb still applies.
Andy Tanenbaum must be laughing (Score:1, Informative)
Disk is so cheap and plentiful I now configure w/ swap ~8x DRAM so I can suspend large jobs and still start new jobs instead of having to kill the process. I also install max DRAM to minimize swapping and paging. But I'm a scientist working w/ large datasets and don't do Windows or web stuff, so your mileage may vary.
old, bearded, Unix guy
Re:OSX - 4 gigs RAM, 14 gigs swap?!? (Score:4, Informative)
Re:Mac OS X swap (Score:2, Informative)
I hope that there is some upper limit on how much is used! It's bad enough when a memory-leaky process uses up all of your RAM, but all of your hard-drive space too (in the form of swap)? Yeesh!
Re:Not much, anymore... (Score:4, Informative)
The old guideline of swap size = 2x RAM size still holds, since increases in RAM usage (application bloat) and system memory automatically mean swap space increases. But that was a general-purpose guideline, and the guidance has ALWAYS been to set your swap space size to what you need based on actual usage. Your only other option is to just set it to a ridiculously high number.
If you are concerned about something yet are unwilling to spend 10 minutes educating yourself on how to deal with your concerns, then you have to live with the current situation or pay someone to handle your concerns for you. There is no magic bullet.
Re:Not much, anymore... (Score:3, Informative)
Just because the kernel has this tuning feature does not mean everyone has to muck with it. Having the capability to tune / customize is what makes linux flexible enough to use on devices from watches to supercomputing clusters / mainframes. If you don't want to make your own Linux Myth PVR, get a Linux based TIVO that doesn't require any mucking around at all. Linux, the kernel, has been in the mainstream for YEARS.
Re:gig of RAM costs 50 times more than a Gig of HD (Score:3, Informative)
I still get software suppliers (mostly SAP AG) moaning that we've got to allocate 3.5xRAM, which is arrant nonsense. It might have been necessary years back when 2GB was a lot of memory. Now I've got servers with 10s of GBs and I really don't want to waste 100s of GBs of disk on swap space which simply isn't going to be used. Sure: disk is cheap, but it all adds up. One of the larger servers I support has 128GB of RAM and 32GB of paging space (only 1% is actually used at the moment). A few servers like that and you're saving TBs of disk space.
Of course, if you're going to keep your swap space to a minimum you need to have good monitoring in place so that you can extend it before it becomes a problem if something unexpected happens, and it's sensible to be a bit generous about it. We do occasionally have problems when processes suddenly start writing vast amounts of data to memory, but I doubt that having loads more swap space would help in those cases, as there are usually bugs in the code. Fortunately root can usually still get in (if you're patient), identify the offending processes and kill them.
It also helps to have an OS that makes effective use of memory. What I know best is AIX, and a few years back (quite a lot of years in IT terms!) the memory allocation processes were changed so that even if you requested an enormous amount of memory it wasn't really allocated until you actually started to use it (i.e. put some data in there). That made a considerable difference. I would expect any modern and efficient OS to do something similar.
Paging can be dreadful for performance as you get a multiple hit: the process that needs swapped-out pages runs slow as it waits for data to be paged in; your system as a whole also runs slowly as CPU cycles are taken up servicing the paging requests; your I/O subsystem suffers as it spends time reading and writing to/from paging spaces rather than actually doing useful I/O. It's one of the first things I always target when I'm investigating performance problems on a server, just as it was a couple of decades ago when I was doing the same things with MVS.
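When I say it's one of the first things I target: on a Unix-ish box the quickest check is something like this (a sketch; the columns are from Linux vmstat, and AIX's vmstat/topas differ slightly):
# si/so = amount swapped in/out per second; sustained non-zero values mean the box is paging
$ vmstat 5
# broader paging statistics, if the sysstat package is installed
$ sar -B 5 5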
Re:Virtual what ? (Score:3, Informative)
Power Mac G5
OS X.4.7
3GB physical RAM
64MB swap file, which has never grown bigger since I added the extra RAM
...so, no, at least on OS X there's no point in having 6GB swap files.
IMO 1GB is too much. (Score:3, Informative)
How much swap you have should be related to the longest you are willing to wait for stuff to be swapped in and out.
Adjust your swap so that your computer is as slow as you can tolerate when it runs out of memory.
For example: if you have a typical ATA drive, random read transfers would be about 10-15MB/sec. So if you ever need to swap in 400MB of stuff, you'd have to wait about 30-40 seconds before all of it is read in.
What complicates things is that there are some applications/programs that allocate memory they will practically never use, so you may want to add swap for that.
So the swap amount would be something like: total swap = "permanently swapped out unused stuff" + (seconds willing to wait * random read speed).
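Plugging in some example numbers (a trivial sketch; the 300MB of permanently-parked allocations is just an assumed figure):
parked_mb=300      # allocated-but-rarely-used stuff you're happy to leave in swap
wait_sec=30        # longest swap-in delay you'll tolerate
read_mb_s=12       # random read throughput of the swap device, MB/sec
swap_mb=$((parked_mb + wait_sec * read_mb_s))
echo "swap: ${swap_mb} MB"     # 300 + 360 = 660 MB here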
Of course virtual memory doesn't really behave exactly like that - when you are low on RAM the computer will be continuously reading in the programs it needs while writing out the stuff it thinks is less important, but basically you're kind of reliving the old days of "drum/disk memory" - where you were running stuff from drum or disk. And that's really slow.
The problem with running out of memory is that under some conditions some operating systems (e.g. Linux) can mess up and kill the wrong process to free memory. I think this has improved somewhat - but Linux used to be pretty stupid and kill pretty important stuff...
This is mainly because of the default overcommitting of memory. With overcommit, the O/S can say "fine" even if there really isn't enough memory, but when it turns out you really do need it all, the O/S goes around looking for stuff to kill...
If you turn off overcommit things can become safer, but you'll need enough memory to hold all allocated memory even if unused.
Swap... (Score:2, Informative)
Zero swap. Buy enough ram, deactivate swap, watch your computer run as fast as it should.
My experience (Score:2, Informative)
Thing is, I run into out-of-memory errors when running a lot of stuff, though rarely (Windows takes 35MB by itself here). Now with 512MB I could run practically anything.
My advice: turn off swap, buy more RAM.