How Much Virtual Memory is Enough?
whitroth asks: "Ten years ago, Received Wisdom said that virtual memory should be, on average, two to two-and-a-half times real memory. In these days, when 2G of RAM is not unusual and many times that is not uncommon, is this unreasonable? What's the sense of the community as to what is a reasonable size for swap these days?"
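For concreteness, the old rule and a modern cap can be sketched as a tiny shell helper. The function name and the 2048MB cap are purely illustrative assumptions for discussion, not anything from the question:

```shell
# suggest_swap_mb: hypothetical helper applying the traditional
# "2x RAM" rule, capped at 2048MB on the theory that huge swap on a
# big-RAM box mostly just prolongs thrashing.
# Takes RAM in MB, prints a suggested swap size in MB.
suggest_swap_mb() {
    ram_mb=$1
    swap_mb=$((ram_mb * 2))
    cap_mb=2048
    if [ "$swap_mb" -gt "$cap_mb" ]; then
        swap_mb=$cap_mb
    fi
    echo "$swap_mb"
}

suggest_swap_mb 256    # the 256MB box of ten years ago -> 512
suggest_swap_mb 2048   # a 2GB box of today -> capped at 2048
```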
Depends (Score:5, Interesting)
Re:If you have enough, none (Score:5, Interesting)
you never know when some runaway process is going to eat all yer RAM and need to use swap... no matter how much RAM you've got.
I typically just make a 1 or 2 GB swap partition since I've got more than enough space to spare. I mean, back in the days when 128MB of RAM was considered a lot, and a 5GB drive was considered huge, no one would consider using 20% of their storage space for swap. Now, it's not unusual to have 300GB of storage, so what's 1% of that being used for swap?
I've also got a serious collection of 2-6GB harddrives kicking around, now, so I've been using them for swap. It's really pointless to have a 4GB partition for data, so I just use the entire 6GB drive for swap on some machines.
My primary server right now has a 4GB swap partition and 1.25GB of RAM... a piece of bad AJAX code that ran overnight wound up using all the RAM and had some seriously detrimental effects on the performance of the server. It took 25 minutes to ssh in in the morning, and when I finally got in, I found that the load averages were over 100 (I've NEVER seen that before).
My point is that even if you have a LOT of RAM, it's still handy to have some spillover available.
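A quick way to catch that kind of spillover before the box melts is to watch how much swap is actually in use. A minimal sketch, parsing the SwapTotal/SwapFree lines that Linux exposes in /proc/meminfo; it's run against a canned sample file here so the arithmetic is visible, but on a live box you'd point it at /proc/meminfo itself:

```shell
# swap_used_kb: print the amount of swap in use (kB) from a
# meminfo-style file.
swap_used_kb() {
    awk '/^SwapTotal:/ {t=$2} /^SwapFree:/ {f=$2} END {print t - f}' "$1"
}

# Canned sample roughly matching the parent's box: 1.25GB RAM nearly
# gone, 1GB of a 4GB swap partition in use.
cat > sample_meminfo <<'EOF'
MemTotal:        1310720 kB
MemFree:           20480 kB
SwapTotal:       4194304 kB
SwapFree:        3145728 kB
EOF

swap_used_kb sample_meminfo   # prints 1048576 (i.e. 1GB of swap in use)
```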
Re:If you have enough, none (Score:5, Interesting)
That 2-6GB drive may have a 20MB/s sequential rate at the outer diameter and half that at the inner. Modern drives double or triple that sequential performance, which is what's critical when swapping a large job in and out. Many drives of that generation don't support UDMA either, and talk with PIO, meaning you get no data checksum on your transfers.
You can span generations when you're using a cost-reduced modern drive (fewer heads, same platter format), but a drive that was stretching to make 5GB across 6-8 heads will be a real POS compared to modern drives, performance-wise.
Thrashing is bad, but thrashing to a slow disk, I'd think, is worse. It's compounded further because that 5GB drive is probably PATA, meaning your swap drive and primary drive are sharing a cable, which basically nukes most of the benefit of two disks since they'll be reselecting master/slave at almost every command.
No swap at all (Score:4, Interesting)
With all of our 64-bit, 4GB-of-RAM-minimum hosts floating around, there is no longer a point to having swap -- if your server really is swapping, it's under a huge load and the I/O is making the problem worse. Let the OS kill a few processes to get it back under control.
Re:No swap at all (Score:5, Interesting)
You should set off alerts and alarms if your servers start paging. Randomly killing things instead? Insanity.
You can never build reliable services for users/customers unless you can handle random or accidental error conditions gracefully. Swap space is a cheap and easy way to do that.
heavy windows usage = 0, anything else = default (Score:3, Interesting)
Very simply, if you use Windows and use it heavily (run some intensive tasks or need performance), turning off the page file will give you a nice performance boost... or rather, will not take away from performance.
I have 1GiB of physical memory on my laptop, and even with the paging file off, actually reaching the memory limit in Windows was a challenge (in other words, it worked perfectly well without one).
This is because Windows attempts to use the paging file whenever it can (proactive), unlike Linux, which uses it only when there's no other way (reactive). Depending on the applications you're running, one of the approaches will be better than the other, though from what I've seen, I don't like what Windows does...
Caveat lector: this might be because I wasn't hitting the slowdowns that the reactive approach can cause. I have yet to form a firm opinion on it -- but so far it looks very reasonable.
If using Linux, keep the swap partition and forget about it.
In Windows, the best way to figure out if you need your page file is to load up as many apps as you normally load, maybe a few more, and check the memory usage (don't trust "VM usage" in Windows Task Manager -- it doesn't show you what you think it shows you!). If the usage is lower than your physical RAM by a few hundred MiB, turn off the page file and don't look back. If it's closer, set the page file to a small size, usually no more than 512MiB. If you do set a page file, make its size static, so that Windows doesn't try to adjust it all the time (it's too stupid to understand that you want to keep it as small as possible).
Interesting to note that the paging file is not used for hibernation, even though you'd think it was almost tailor-made for that purpose. I've heard that early betas of Windows 2000 woke up from hibernation in a few seconds -- I bet they were using the paging file for hibernation then... but I digress.
HTH
The way I use swap (Score:2, Interesting)
Re:Not much, anymore... (Score:4, Interesting)
I see. So how do you get around that little address space issue? I'm quite certain a regular 32-bit PC w/ 4GB of RAM doesn't need a swapfile unless you're running Linux (and an AWFUL lot of software, or software with VERY large RAM requirements). Even in that case, it's a special kernel option, and if you can actually max your 4GB of RAM to the gills by multitasking regular, every-day software, you deserve to be penalized with sluggish performance! Okay no, but still.
Since RAM is dirt cheap these days, everyone really should have 4GB on their 32-bit computers for the sole purpose of turning their swapfile off; it's probably the least money you could spend for the performance increase in Windows.
Re:Rules of thumb are dumb (Score:5, Interesting)
A few years ago, we had a customer with multiple colocated servers complaining that they sometimes crashed for no apparent reason.
After much debugging, we figured out one of their scripts was leaking memory, eventually consuming all RAM (2.5G) + all swap (1-2G).
Now the real problem is this: those were LIVE processes, so the system was constantly paging back and forth, using 90%-95% CPU just to swap the damned things in and out and starving the actual processes.
Linux 2.4, Linux 2.6 (early 2.6): same deal. Amazingly, the distro made a difference; Red Hat was pure hell, Debian slightly better (though still not acceptable).
FreeBSD was much smarter: it just killed the offending processes. It sure wasn't ideal, but at least the server was still serving its clients.
To this day, I never put more than 256M as swap, even on servers with 4G of RAM. That's where we had the least problems.
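A cap like that is easy to apply even after install, since swap doesn't have to be a partition. A sketch using a swap file instead (the commands need root, and the path is illustrative):

```shell
# Create and enable a 256M swap file -- and nothing more.
fallocate -l 256M /swapfile   # or: dd if=/dev/zero of=/swapfile bs=1M count=256
chmod 600 /swapfile           # swap must not be world-readable
mkswap /swapfile              # write the swap signature
swapon /swapfile              # enable it
swapon -s                     # confirm: one 256M swap area listed, nothing else
```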
You think like a dinosaur (Score:4, Interesting)
But dude, my next box will have two GIGABYTES of RAM!
Every one of your usage options assumes you'll run out of physical ram. Maybe if the OS is wasting it on pointless disk caching, but don't you think the programs in memory should have priority over blind disk caching?
Lest a foolish reader believe your two options (swap immediately, or swap as lazily/late as you can) are the only two possibilities, how about swapping when, say, only 20% of physical RAM is left? That way my Firefox and Eclipse don't swap to disk and take twenty seconds to swap in when I have 500MB of GODDAMN FREE RAM!
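For what it's worth, Linux does expose a dial between those two extremes: the vm.swappiness sysctl (lower values make the kernel keep programs in RAM and sacrifice cache first). A sketch of inspecting and lowering it; the value 10 is just an illustrative choice:

```shell
# Show the current swappiness (the default on most distros is 60).
cat /proc/sys/vm/swappiness

# As root, make the kernel much more reluctant to swap programs out:
#   sysctl vm.swappiness=10
# and to persist the setting across reboots:
#   echo 'vm.swappiness = 10' >> /etc/sysctl.conf
```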
Re:Not much, anymore... (Score:3, Interesting)
I think most people's issue with this is that there aren't a lot of good options in Windows. Either you deal with its crappy swapping decision algorithms, or you go without swap in an OS that has assumed swap is there for about a decade.
Re:If you have enough, none (Score:4, Interesting)
Naturally if you actually had that much physical RAM, the process would have still gone nuts, but your server wouldn't have had to thrash its disk for every process except the prodigal son, so the performance hit probably wouldn't have been noticeable.
Are you using tmpfs or not? (Score:5, Interesting)
Re:If you have enough, none (Score:3, Interesting)
"you never know when some runaway process is going to eat all yer RAM and need to use swap... no matter how much RAM you've got."
If you truly have a runaway process, it will use up all of your swap, no matter how much swap you've got. In most cases, it would be better for it to die sooner rather than later.
I am a very heavy user and run many applications simultaneously. I have been running XP with 1 or 2GB of RAM and no swap file for over a year now. Despite having dozens of tabs open in two different browsers and running programs like Studio 10 for movie editing, I've never come close to running out of RAM.
Interestingly, it seems that Windows programs in general use less RAM if they have less virtual memory available to them. I experimented to prove this to myself by reloading two identical 2GB machines with identical OS and software loads, setting one to have a 5GB swap and the other to have no swap, and starting up the same dozen or so applications on each. The "commit charge" on the machine without the swap was over 200MB lower than on the one with swap.
Re:If you have enough, none (Score:3, Interesting)
If you have 2GB of RAM and a process starts leaking violently, whether it has 1.5 gigs of (physical) RAM to chew through before it or the box dies, or 3.5 gigs (2 of which are swap), is meaningless. If it's chugging that much memory, it's probably leaking without restraint anyway.
This really depends on how likely you think it is that you'll (legitimately) use more than your physical 2GB. For my office desktop box, that's a "never ever ever, not by a long shot", so I plain don't need the swap. Were I running a 512MB box, 1GB of swap would be fine. Two gigs of real RAM? When and how the hell would I be using that swap?
Of course, if I were running software that'd be using up 1.8GB on average, say some game with a chubby 3D engine, it'd be a whole different ballgame.
As always, depends on your personal needs.
min(2*RAM, 512Mb) (Score:4, Interesting)
Smarter per-process resource quotas would probably be better, and it would be nice to have a thrashiness function that accounts for disk speed, but so far 512MB sounds like the line between reaching for the reset button and just taking a coffee break when you see the HD LED blinking like a strobe.
It is just easier to try the approach where you consume a lot of RAM first and re-code if it doesn't work. I work in bioinformatics and we often have huge datasets; I always try to load the whole thing and do the computation in RAM. Only when I get an out-of-mem error do I segment the dataset and try a smarter approach. That might explain my choice of 512MB; the right threshold for other people might be higher or lower, but I'm pretty sure it's below 8GB.
Is swapping obsolete? (was:Rules of thumb are dumb (Score:5, Interesting)
That raises the question: is swapping obsolete? Or to put it more explicitly, has the speed difference between modern CPUs and hard drives become so large, and RAM so cheap, that it's better to consider running out of RAM to be indicative of a software failure? That way you end up with a system where one or more processes may fail (or be terminated) but at least the machine remains usable and doesn't swap itself into non-responsiveness.
In my experience, the answer is yes: with 2GB of RAM in my machine, I never need to swap, and in the few instances where swapping did occur, it was because of buggy software (memory leaks), and manually terminating the offending processes is what I needed to do to resolve the memory shortage. So why not just have the OS do that automatically?
Or to put it a third way, is there any situation where swapping is helpful, anymore?
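For what it's worth, Linux already does this when it truly runs out: the OOM killer picks a victim by heuristic score, and you can inspect and bias that choice per process. A read-only sketch (the write shown in the comment needs root, or ownership of the target process):

```shell
# Each process has an OOM badness score plus an adjustable bias,
# from -1000 (never kill this) to +1000 (kill this first).
cat /proc/self/oom_score       # current heuristic score of this shell
cat /proc/self/oom_score_adj   # bias, usually 0
# To volunteer a process as the preferred victim:
#   echo 500 > /proc/1234/oom_score_adj
```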
Re:Not much, anymore... (Score:3, Interesting)
Could your 64-bit Linux system address over 2^48 bytes of memory?
Doubt it. I think AMD64 tops out at 41-42 address lines right now.
Re:But really, who cares? (Score:3, Interesting)
In general, I approve of your philosophy. But remember there are addressing overheads to map all that disk space into memory, and all that page management can give you a bit of a performance hit too. It isn't the cost of disk, it's the cost of managing it that means you have to put a little bit of thought into it. I know the stuff is cheap, but you still have to compute with it.
Or to quote my favorite philosopher, Yogi Berra, "In theory, there's no difference between theory and practice. In practice, there is".
Re:Don't forget disk cache (Score:4, Interesting)
This is most obvious when you are copying large amounts of data, e.g. during a backup.
Say you have a 250GB disk and you copy it to another one. The system will continuously try to keep the files you have read in disk cache (because you may read them again) and try to keep room for many dirty pages that still have to be written to the destination disk (because you may change them again before the final write).
Neither of those "becauses" is ever going to happen: everything is read once, written once, and then no longer needed.
But still, it will swap out running processes to make room for the above.
The net effect you see is that the source and swap disks are very busy, the destination disk sits idle for long stretches until the kernel feels like flushing out some dirty buffers, and the other programs slow to a crawl fighting for the swap space.
It can be tuned with the "swappiness" variable but it remains a tough thing to control. It looks like Windows does a better job in this (not so hypothetical) case.
There should be some "file copy mode" (used during backups and other large tree copies) where it:
- discards all disk USERDATA caches immediately after use (directory and other filesystem allocation data may be kept)
- immediately writes out any written USERDATA to the destination disk, not having it populate the dirty pages until bdflush comes around to write them
- keeps re-using the same small set of buffers to pump the data from source to destination, without stealing memory from others
The issue, of course, is how this mode could be enabled. It could be a special system call, but who would call it, and where?
Personally, I would already be happy with a program like "nice" or "ionice" that would run a command line in a special mode (e.g. with a very small buffer quota) to force such behaviour. But the world at large would of course be better served if this happened automatically whenever lots of data are copied sequentially.
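Something close to the first two bullets already exists in GNU dd's nocache flag, which tells the kernel (via posix_fadvise) not to keep the copied data in the page cache. A sketch; the filenames are illustrative, and it assumes GNU coreutils:

```shell
# Make a small demo source file.
dd if=/dev/urandom of=source.bin bs=1M count=4 2>/dev/null

# Copy it while advising the kernel to drop both the source and
# destination data from the page cache as it goes, so a big copy
# doesn't evict everyone else's working set.
dd if=source.bin of=backup.bin bs=1M iflag=nocache oflag=nocache conv=fdatasync 2>/dev/null

cmp source.bin backup.bin && echo "copy intact"
```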
Witch Doctors (Score:2, Interesting)
Oh, please. How do you quantify "more responsive?"
Sorry to say this, but the witch-doctory of computer maintenance is not engineering, it isn't science, it isn't even common sense
If you get "thrashing" in your system, it is because you don't have enough RAM for the applications you are using. At this stage, the application has one of two options: use user-space files to reduce the "in RAM" data size, or quit with an out-of-memory error. Almost all applications in this scenario perform better when the OS provides virtual memory. (Not quite all, because the default swap algorithm can be a worst case, but I digress.)
The Macintosh's early virtual memory (pre-OS X) and the old OS/2 1.x virtual memory systems created a lot of lore about the woes of virtual memory. The reality is that the modern VMMs as implemented in the Pentiums and later are very good, the OS support for them is excellent, and a system is almost always better off with VM than without.
In an honest discussion it is hard to speak in absolutes, so no one can say any one way is better than any other 100% of the time, unless it is an obvious choice, like drink poison or drink pure, distilled, sterilized, and safe water in moderate amounts when you are thirsty and have balanced electrolytes.
That being said, unless you have a specific and quantifiable case where virtual memory hurts performance dramatically, use virtual memory, you will be better off.
I don't have the magic bullet, but... (Score:1, Interesting)
(coming from a 256M system, and moving to a new build with 2G, win2k pro & Ubuntu 6.06 on the machine currently)
For Ubuntu (Linux kernel v. 2.6.16) I went with a 2G swap partition, or a 1:1 DRAM/VM ratio.
For win2k SP4 + all post SP4 updates: I'm currently just letting windows manage the VM space. On the old 256M machine, I went with a fixed 3x DRAM VM pagefile.
Caveats: I've only recently moved to this system, and have NOT stressed it to any real degree with truly memory-intensive operations. The most stressful application on the new build to date has been Oblivion under win2k. The max pagefile use in those conditions has been c. 385M; however, judging from performance, that space was not being actively utilized (i.e. I didn't notice a lot of extraneous swapping, going by disk activity and pagefile usage growth (none)).
Under Linux, I can't really think of anything I use that would require vast amounts of memory ATM, although I may look at building new Linux kernels and gcc from source. I expect the gcc build would be the most demanding, but suspect even that may not stress a system w/ 2G. I have no large databases, nor a way (or time, or interest ATM, really) to generate enough use artificially to see what would happen, but I strongly suspect that swap around the size of physical memory may be enough, especially on systems with even more memory, as the idea in those situations is undoubtedly to minimize swap use and avoid thrashing, or even lower-level swapping activity...
Back in the day, VM was great: disk space was much cheaper than physical RAM, and large VM swapfiles/partitions let us run applications that otherwise couldn't run at all. Most of those same apps now run on machines with MUCH larger physical RAM installations, while using not much more memory than before, meaning they can run directly from RAM, which is significantly faster than swapping under VM. The best strategy would seem to be to allocate some swap (I like the 1:1 ratio), as applications will still allocate large amounts of space, much of which can come from the VM pool (reducing the physical RAM footprint), while still avoiding actual swapping. On more limited systems (in number/size of apps) you could probably even turn off VM, although I really don't think it would gain you much, and each running app would have a MUCH larger physical RAM footprint, as all of its requested memory would have to come from the physical RAM pool...
Babbling aside: you'll likely need to experiment with a worst-case load scenario to see what configuration works best for you AND how much disk space you can AFFORD to allocate to swap. Another idea would be to look at limited embedded systems, which just don't have large amounts of storage OR physical RAM, and see what happens there; most embedded Linux systems I have seen don't even have a 1:1 ratio of RAM to VM -- usually it's a very small VM space with RAM several times that amount. Of course this gets a little fuzzy, as most embedded systems don't have anything other than some sort of RAM (flash, static, etc.).
gig of RAM costs 50 times more than a Gig of HDD (Score:5, Interesting)
So, to answer the original question :
Optimal amount of Swap ? 0 !
Even my old PIII 1GHz takes 2 gigs of RAM. The newest system we have at the office takes 64 gigs.
=> right now I consider the Linux systems @ work as having a problem if they use swap...
Adding just a wee bit of RAM to your system and seeing swap use disappear means your performance just exploded on that particular task...
Best Regards,
D.
In the old days you could control it.... (Score:3, Interesting)
For Windows XP, the geniuses at Microsoft removed that control, and the whole system runs much worse because of it.
Every time you do something which reads big files from disk on XP all your apps get paged out to disk. I don't know in which fantasy world this is supposed to be an "improvement", but it's one of my favourite reasons to hate XP.
Re:gig of RAM costs 50 times more than a Gig of HD (Score:5, Interesting)
1) On a system with zero swap, when apache gets slammed (say you get to the top of digg or slashdot), apache starts consuming lots of memory to handle new inbound requests. When it runs out, the machine grinds to a halt because it can't allocate more, and requires a power cycle. (Setting a low maximum number of children really only helps if you are happy denying traffic to the people who are trying to see your site... it's best to plan for capacity and put quite a few servers behind a load balancer.)
2) On a system with any appreciable swap (IMHO, more than 128 megs, up to 512 megs), if you're monitoring the system (watch -n 1 free -m, for example) and all of a sudden it starts using swap, the machine is on the edge of dying. This gives you an early warning that maximum machine performance/throughput has been reached, and you can restart apache or shut it down or similar -- do something to temporarily lower or remove load from that machine. This doesn't give you *much* time, but it gives you some.
In our real-world experience, at digg and slashdot loads you have about 10-15 seconds to stop apache once it starts swapping. After that, performance degrades so badly that the machine becomes catatonic, same as #1, requiring a power reset (obviously because virtual memory on HD is orders of magnitude slower than RAM, as numerous others have suggested). The key here is to realize that some swap is good for letting unused programs be swapped out, such as login terminals that just sit there. It's great for detecting problems, but if your heavy app is the one using swap, your machine is about to crash anyway.
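A 10-15-second window is why the warning has to be automated rather than eyeballed. A minimal sketch (thresholds and wording are assumptions, not load-tested numbers): sample the kernel's cumulative swap-in/swap-out counters from /proc/vmstat twice and flag any paging in between:

```shell
# Sum of pages swapped in and out since boot.
swap_traffic() {
    awk '/^pswpin/ {i=$2} /^pswpout/ {o=$2} END {print i + o}' /proc/vmstat
}

before=$(swap_traffic)
sleep 1
after=$(swap_traffic)

if [ "$after" -gt "$before" ]; then
    echo "WARN: box is paging -- shed load now"
else
    echo "ok: no swap traffic"
fi
```

In practice you'd run this from cron or a monitoring agent and page someone instead of echoing.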
Re:Pre-emptive swapping... (Score:1, Interesting)
At least as much as RAM for FreeBSD and crash(8) (Score:1, Interesting)
So, when building a machine, I typically include a single swap partition equal to the amount of RAM plus 96MB or so. Disk is cheap, and the ability to diagnose panics can be extremely valuable.
Note: This has worked well for FreeBSD 4.x, 5.x, and 6.x.
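That sizing rule is trivial to script; a sketch (the helper name is made up, and the 96MB of headroom is just the parent's figure):

```shell
# dump_swap_mb: swap size that leaves room for a full kernel crash
# dump -- all of RAM plus ~96MB of headroom. Takes RAM in MB.
dump_swap_mb() {
    echo $(( $1 + 96 ))
}

dump_swap_mb 1024   # 1GB of RAM -> 1120MB of swap
```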