How Much Virtual Memory is Enough?
whitroth asks: "Ten years ago, Received Wisdom said that virtual memory should be, on average, two to two-and-a-half times real memory. These days, when 2GB of RAM is not unusual and many times that amount is not uncommon, is this unreasonable? What's the sense of the community as to what is a reasonable size for swap these days?"
lots (Score:5, Funny)
gig of RAM costs 50 times more than a Gig of HDD (Score:5, Interesting)
So, to answer the original question:
Optimal amount of swap? 0!
Even my old PIII 1GHz takes 2GB of RAM. The newest system we have at the office takes 64GB.
=> Right now I consider the Linux systems at work as having a problem if they use swap...
Adding just a wee bit of RAM to your system and seeing swap disappear means your performance just exploded on that particular task...
Best Regards,
D.
Re: (Score:3, Informative)
I still get software suppliers, (mostly SAP AG) moaning that we've got to allocate 3.5xRAM, which is arrant nonsense. It might have been necessary years back when 2GB was a lot of memory. Now I've got servers with 10s of GBs and I really don't want to was
Always nice to have some spare (Score:3, Insightful)
Yes, RAM is incredibly cheap, and any amount of serious swapping is to be avoided. On the other hand, once in a while you do something stupid like having vi load a 2GB log file into RAM, or whip Firefox into an 800MB frenzy and then load that 16kx32k image into GIMP, or run that database query that uses *way* more RAM than you'd expected.
In general, I'd rather have my system
Re:gig of RAM costs 50 times more than a Gig of HD (Score:5, Interesting)
1) On a system with zero swap, when Apache gets slammed (say you hit the top of Digg or Slashdot), Apache starts consuming lots of memory to handle new inbound requests. When it runs out, the machine grinds to a halt because it can't allocate more, and it requires a power cycle. (Setting a low max children really only helps if you are happy denying traffic to the people who are trying to see your site... it's best to plan for capacity and put up quite a few load-balanced servers.)
2) On a system with any appreciable swap (IMHO, more than 128MB, up to 512MB), if you're monitoring the system (watch -n 1 free -m, for example) and all of a sudden it starts using swap, the machine is on the edge of dying. This gives you an early warning that the machine has hit its maximum performance/throughput, so you can restart Apache or shut it down, or do something else to temporarily lower or remove load from that machine. This doesn't give you *much* time, but it gives you some.
In our real-world experience, at Digg and Slashdot loads you have about 10-15 seconds to stop Apache once it starts swapping. After that, performance degrades so badly that the machine becomes catatonic, the same as #1, requiring a power reset (obviously because virtual memory on disk is orders of magnitude slower than RAM, as numerous others have pointed out). The key here is to realize that some swap is good for letting unused programs be swapped out, such as login terminals that just sit there. It's great for detecting problems, but if your heavy app is the one using swap, your machine is about to crash anyway.
IMO 1GB is too much. (Score:3, Informative)
How much swap you have should be related to the longest you are willing to wait for stuff to be swapped in and out.
Adjust your swap so that your computer is as slow as you can tolerate when it runs out of memory.
For example: if you have a typical ATA drive, random read transfers would be about 10-15MB/sec. So if you ever need to swap in 400MB of stuff, you'd have to wait about 30-40 seconds before a
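A quick back-of-the-envelope sketch of that rule in Python (the drive throughput figures are assumptions, not measurements):

# Rough sketch of the parent's sizing rule: pick a swap size whose
# worst-case swap-in time you can still tolerate. Rates are assumed.

def worst_case_swap_in_seconds(swap_mb, random_read_mb_s):
    """Time to page swap_mb back in at the given random-read rate."""
    return swap_mb / random_read_mb_s

def max_tolerable_swap_mb(max_wait_s, random_read_mb_s):
    """Largest swap you'd want, given the longest stall you'll accept."""
    return max_wait_s * random_read_mb_s

# The parent's example: 400 MB of swapped-out data on a typical ATA drive.
for rate in (10, 15):   # MB/s, assumed random-read throughput
    print(f"400 MB at {rate} MB/s -> {worst_case_swap_in_seconds(400, rate):.0f} s")

# Working backwards: if 30 s is the longest stall you'll accept at ~12 MB/s,
# keep the swap you actually use under roughly this amount.
print(f"30 s budget at 12 MB/s -> {max_tolerable_swap_mb(30, 12):.0f} MB")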
Watching their hard drive light? Mozilla users (Score:3, Insightful)
I do - any time I'm running Mozilla with a lot of tabs open and it decides to go into annoying-swapping-mode (on WinXP and predecessors) for no obviously good reason, so I've got to wait for Mozilla to swap itself in or out before I can see the web page or other application I want. It doesn't help that I mainly use it on a laptop, where the drive is slow and the RAM is a fairly large 384MB, but it also happens on my home desktop, where the drives a
Re: (Score:3, Informative)
Power Mac G5
OS X.4.7
3GB physical RAM
64MB swap file, which has never grown bigger since I added the extra RAM
...so, no, at least on OS X there's no point in having 6GB swap files.
Not much, anymore... (Score:5, Insightful)
I run a Core Duo laptop with 1GB of RAM and have never swapped out in Linux, no matter what I was doing.
Re:Not much, anymore... (Score:5, Informative)
Re:Not much, anymore... (Score:4, Informative)
The old guideline of swap size = 2x RAM still holds, in the sense that as RAM usage (application bloat) grows along with system memory, the swap space grows automatically with it. But that was a general-purpose guideline, and the guidance has ALWAYS been to set your swap space to what you need based on actual usage. Your only other option is to just set it to a ridiculously high number.
If you are concerned about something yet are unwilling to spend 10 minutes educating yourself on how to deal with your concerns, then you have to live with the current situation or pay someone to handle your concerns for you. There is no magic bullet.
Re: (Score:3, Insightful)
Two of my (Linux) servers have lots of memory and lots of small processes, so anything that does swap out swaps out quickly. These don't use a lot of swap (512MB?) and don't have gigabyte-sized processes to write into swap... so they don't really need the 2+ GB of allocated swap.
One other (Linux) server has big processes (1GB or more), and when they have to swap out, watch the machine fall apart while the process is swapped out - it takes a while to write 1GB of RAM into
Re:Not much, anymore... (Score:4, Informative)
You seem to miss the idea of swap. All modern OSes, combined with processors from the 386 onward in the x86 range, swap 4KB pages. So if memory is needed, the least recently accessed page (4KB) in RAM will be swapped out (and the algorithm continues until no more RAM is required). When one of the swapped 4KB pages is needed, it's retrieved from swap into free RAM (if no free RAM is available, another page is swapped out).
I don't think it swaps out all of your application, and if it does, you should increase your RAM. The thing is that your app can try to access the "just swapped" page, which is a performance killer. Swapping is done in page chunks, not app chunks.
PS: the term pagefile probably comes from Windows 95 because it contains "pages". All modern processors have an MMU (http://en.wikipedia.org/wiki/Memory_management_u
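If you want to see this for yourself on Linux, here's a rough Python sketch that lists how much of each process is currently sitting in swap. It relies on the VmSwap field in /proc/<pid>/status, which only newer kernels expose:

# Sketch: swapping happens per page, not per process - list how much of
# each process is in swap right now. Linux-only, needs the VmSwap field.
import os

def swapped_kb_by_process():
    usage = {}
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/status") as f:
                fields = dict(line.split(":", 1) for line in f if ":" in line)
            name = fields.get("Name", "?").strip()
            swap_kb = int(fields.get("VmSwap", "0 kB").strip().split()[0])
            usage[(pid, name)] = swap_kb
        except (OSError, ValueError):
            continue   # process exited, or field missing on an old kernel
    return usage

if __name__ == "__main__":
    top = sorted(swapped_kb_by_process().items(), key=lambda kv: -kv[1])[:10]
    for (pid, name), kb in top:
        print(f"{name:20s} pid {pid:>6s}  {kb:>8d} kB in swap")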
Re: (Score:3, Informative)
I work on a mainframe OS, and before VM was introduced (20 years ago?) the OS would happily swap processes or unused parts of processes out, but it would kill any process that tried to allocate more memory than was physically available.
Re: (Score:2)
Re:Not much, anymore... (Score:5, Informative)
# echo [0-100] > /proc/sys/vm/swappiness
A better question is how much memory you can address. Could your 32-bit Windows system address over 2^36 bytes of memory (64GB), for example? And could you allocate over 2GB to the Windows kernel?
Could your 64-bit Linux system address over 2^48 bytes of memory?
Re: (Score:3, Interesting)
Could your 64-bit Linux system address over 2^48 bytes of memory?
Doubt it. I think AMD64 tops out at 41-42 address lines right now.
Re: (Score:3)
Re:Not much, anymore... (Score:5, Funny)
I hope this is not your example of how Linux is ready for the mainstream.
Re: (Score:3, Informative)
Just because the kernel has this tuning feature does not mean everyone has to muck with it. Having the ability to tune and customize is what makes Linux flexible enough to use on devices from watches to supercomputing clusters and mainframes. If you don't want to build your own Linux MythTV PVR, get a Linux-based TiVo that doesn't require any mucking around at all. Linux, the kernel, has been in the mainstream for YEARS.
Re:Not much, anymore... (Score:4, Insightful)
The idea is that the user's battery life is extended slightly without them realising how.
Re:Not much, anymore... (Score:4, Informative)
Also, remember that suspend2 requires swap, so figure out how large an image you'll need (and how much is cache that can be freed) and allocate a bit more than that. My own rule of thumb is that swap should be roughly 1x to 1.5x RAM, so that I can be sure I have room for the suspend image. But I have the space, and Windows doesn't use swap for this anyway; it uses hiberfil.sys.
Pre-emptive swapping... (Score:5, Informative)
There are generally two strategies:
-The common-sense one, where you swap only when you run out of memory. This makes a lot of practical sense on systems with limited write cycles (flash-based swap, though you really never should do that anyway) and on systems that want to spin down drives to conserve battery power. Performance-wise (this may surprise people who haven't spent time thinking about it), this can often be bad. Avoiding swapping is generally only good on systems where resource utilization is carefully managed and you know it will never swap (unneeded IO operations can interfere with the productive activity of a constantly busy system). That is actually a vast minority of the systems in the world (no matter how l33t one may think themselves, they most certainly don't have a usage pattern that would be impacted by the extraneous IO of an occasional write to swap).
-Pre-emptive swapping. When the IO subsystem is idle and the system can afford to copy memory to the swap area, it does so (depending on criteria). Generally speaking it will select memory that isn't accessed much and write it to disk, but leave the in-memory copy in place if the physical memory is not immediately needed. So a fair amount of the swap used on an apparently underutilized system is duplicated in physical memory and swap space. The benefit is that if the process reads that memory again, it incurs no penalty, despite the data also being in swap (the system may make its own decisions about the best swap candidates and write different data to disk). The benefit of writing this stuff to swap even when not needed becomes clear when an application comes along that allocates more memory than the system has free in physical space. With the first strategy, the malloc blocks while data is written to disk, and the application that is starting or suddenly needs a lot of data is severely impacted. In the pre-emptive case, the system notices the condition, knows which rarely-used memory already has a backup copy in swap, and can free that memory and satisfy the malloc almost instantly.
For those with 1GB of RAM or so, it becomes less likely that the system will have to flush memory from physical RAM, but there is a balance to be struck between memory directly allocated by applications, the applications' memory access patterns, and the RAM you can use to buffer filesystem access. If your total application memory allocation is 75% of RAM, it may still make sense performance-wise to keep only 50% of physical memory dedicated to the applications (the rest relegated to swap) and use the other 50% to buffer disk I/O.
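On Linux you can watch the "duplicated" pages described above directly; here's a small Python sketch reading /proc/meminfo (SwapCached counts memory that has been written to swap but is still resident in RAM, so it can be dropped instantly when a big allocation arrives):

# Sketch: show the pre-emptively swapped, still-resident pages on a Linux box.
def meminfo_mb():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0]) // 1024   # kB -> MB
    return info

m = meminfo_mb()
print(f"MemTotal   {m['MemTotal']:>8d} MB")
print(f"MemFree    {m['MemFree']:>8d} MB")
print(f"Cached     {m['Cached']:>8d} MB   (page cache, also cheap to reclaim)")
print(f"SwapCached {m['SwapCached']:>8d} MB   (in swap AND still in RAM)")
print(f"SwapUsed   {m['SwapTotal'] - m['SwapFree']:>8d} MB")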
But more on topic... (Score:5, Insightful)
On a production server, or a problematic system where I want support and the OS likes to dump a core to swap, I'll ensure a generous swap partition is available (generally observed active swap x 1.5, plus physical memory size). In this case a file-backed swap may depend on layers of the kernel that are in an invalid state, and a swap partition is more likely to be reliably writable. The only system I would even theoretically hibernate is my laptop, and I only ever suspend to RAM or shut down completely, so I don't consider my laptop as needing a swap partition of any significant size.
But really, who cares? (Score:5, Insightful)
What I find curious is that you have a strategy. On what relevant experience do you base this strategy? 1 GB of disk space costs less than $0.50. [pricewatch.com] Set up 3 GB of VM if it makes you feel good. The latte you drink while you set it up costs more than the extra disk space!
So go for it!!! Who cares what you do? Heck, give yourself 10x your RAM in swap and see if it actually makes any difference!!! (it won't)
This is sort of like asking: "Which goes faster: the yellow Pacer or the red Pacer?"!
Re: (Score:3, Interesting)
In general, I approve of your philosophy. But remember there are addressing overheads to map all that disk space into memory, and all that page management can give you a bit of a performance hit too. It isn't the cost of disk, it's the cost of managing it that means you have to put a little bit of thought into it. I know the stuff is cheap, but you still have to compute with it
Re:But really, who cares? (Score:5, Funny)
You think like a dinosaur (Score:4, Interesting)
But dude, my next box will have two GIGABYTES of RAM!
Every one of your usage options assumes you'll run out of physical ram. Maybe if the OS is wasting it on pointless disk caching, but don't you think the programs in memory should have priority over blind disk caching?
Lest a foolish reader believe your two options (swap immediately, or swap as lazily/late as you can) are the only two possibilities, how about swapping when, say, only 20% of physical RAM is left? That way my Firefox and Eclipse don't swap to disk and take twenty seconds to swap in when I have 500MB of GODDAMN FREE RAM!
your next box needs swap (Score:5, Funny)
Total: 2910 MB
Yep, you need a gigabyte of swap. OpenOffice.org was made 64-bit clean for a reason. If you plan ahead, not wanting to reallocate disk space in the next few years, you'll allow for this:
2 GB for firefox, 5 GB for OpenOffice.org, 1/2 GB for X, 1/2 GB for desktop odds and ends, 1 GB for Evolution or Thunderbird, and 10 MB for old-style stuff running in the background
That's 9.01 GB. You're exactly 7.01 GB short, so you'll be needing that swap space before you know it.
Re: (Score:3, Informative)
Re: (Score:3, Funny)
Double clicking on its desktop icon?
Re:your next box needs swap (Score:4, Funny)
Re:Pre-emptive swapping... (Score:5, Informative)
In the old days you could control it.... (Score:3, Interesting)
For Windows XP the geniuses at Microsoft removed this ability and the whole system runs much worse because of it.
Every time you do something which reads b
Well... (Score:3, Funny)
Re:Not much, anymore... (Score:5, Informative)
system control panel -> advanced -> performance options -> advanced -> virtual memory.
Set to no paging.
Re: (Score:3, Informative)
The bleeding thing cannot smoothly say "You are running out of memory, setting up an emergency page file now..." without something crashing.
Fix this problem and you are cooking on gas. A modern computer should be able to accommodate every malloc up to memory + free disk space, and it can't easily.
Re:Not much, anymore... (Score:4, Interesting)
I see. So how do you get around that little address space issue? I'm quite certain a regular 32-bit PC w/ 4GB of RAM doesn't need a swapfile unless you're running Linux (and an AWFUL lot of software, or software with VERY large RAM requirements). Even in that case, it's a special kernel option, and if you can actually max your 4GB of RAM to the gills by multitasking regular, every-day software, you deserve to be penalized with sluggish performance! Okay no, but still.
Since RAM is dirt cheap these days, everyone really should have 4GB in their 32-bit computers for the sole purpose of turning the swapfile off; it's probably the least amount of money you could spend for the resulting performance increase in Windows.
Re:Not much, anymore... (Score:5, Informative)
VA is Virtual Address space. For a 32-bit processor, you have 32 bits of virtual address space - each process can occupy no more than 3GB of it (on XP, with the /3GB boot switch).
If you have more than one process, you have more than one virtual address space. So saying that each process can only address 3G of RAM doesn't matter - with 30 processes running, you could theoretically have 90G of VA allocated.
What's important is VM.
VM is vitual memory. VM is what backs the pages that are mapped into the VA.
The maximum amount of VM you can have allocated on a machine is the commit limit, which is typically physical RAM + page file space. If overall VM always stays below physical RAM, you don't need a paging file. But if it EVER goes above that, you're toast if you don't have one. All those pages from the boot process that would normally have been discarded to the paging file (or were allocated by daemons that started during boot but haven't done anything since) stick in the craw of the memory manager, taking up space that COULD be used for your application but can't, because you've not told the OS where to put them.
That's why you have a paging file - it gives the OS a place to put the mouldy old pages that were allocated by apps that aren't actively doing things so your application can re-use the memory that those apps were using.
Btw, it's my understanding that ALL modern virtual-memory operating systems have essentially the same VM architecture - Linux, Windows, whatever. They all use paging files for essentially the same thing: discarding writable pages that are not in current use by applications (read-only pages can typically be reloaded from the binary image).
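A toy worked example of the VA vs. VM distinction above (all the numbers are made up for illustration):

# Lots of virtual address space can be reserved cheaply; what matters is
# committed VM versus the commit limit (physical RAM + page file).
ram_gb       = 2.0
pagefile_gb  = 2.0
commit_limit = ram_gb + pagefile_gb           # 4 GB of backing store

processes    = 30
va_per_proc  = 3.0                            # GB of address space each
total_va     = processes * va_per_proc        # 90 GB reserved - and that's fine

peak_commit  = 2.6                            # GB actually backed by RAM/pagefile
print(f"Reserved VA: {total_va:.0f} GB (harmless)")
print(f"Commit {peak_commit} GB vs limit {commit_limit} GB ->",
      "OK" if peak_commit <= commit_limit else "allocation failures")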
Re: (Score:3, Informative)
One of the above posts outlines exactly what the min. and max. settings for each version of Windows' pagefile should be, except of course XP's.
Re:Not much, anymore... (Score:5, Informative)
All Windows versions: defrag your drives first.
Win98SE: if RAM is >= 256MB, set the Min setting equal to the amount of RAM and Max to 1.5x the amount of RAM.
If RAM is less than 256MB, set Min to 1.5x the amount of RAM and Max to 2.5x or 512MB, whichever comes first.
WinNT:
If you have 2 drives (not two partitions, two drives), create swap files with min/max equal to the amount of physical memory in the system on both drives. This is a way to make WinNT scream when it comes to disk writes.
Otherwise, if RAM is less than 256MB, set virtual memory, both Min and Max, to twice your amount of RAM; if you have >= 256MB, set Min and Max to 1.5x the amount of RAM.
Win2k: if you have less than or equal to 512MB, set Min to 1.5x RAM and Max to 2x RAM. If you have more than 512MB, set swap Min/Max to 1.5x RAM.
If you ever get an "out of virtual memory" error, defrag and add 100MB to Min/Max.
If you have >= 2GB RAM, disable swap, unless you are running Server, in which case 4GB is the magic number.
The two-drive swap method just doesn't seem to work as well on Win2k as it did on WinNT; no clue why, but I've tested it repeatedly.
WinXP Pro: Luser. Why are you running the Windows ME of the 21st century? At least you aren't running WinXP Home. Just follow the guidelines for Win2k, since that is all WinXP Pro is: Win2k with add-on crap, no changes to the kernel or underlying function.
Win2003: No clue.
Vista: Not only have no clue, but I promise you I never will.
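For what it's worth, here are the poster's Win2k rules boiled down to a small Python helper - purely the rule of thumb stated above, not anything Microsoft documents:

# Win2k pagefile sizing per the parent's rules (sizes in MB).
def win2k_pagefile_mb(ram_mb, is_server=False):
    if ram_mb >= 2048:
        # ">= 2GB RAM: disable swap, unless running Server, where 4GB is the magic number"
        if not is_server or ram_mb >= 4096:
            return (0, 0)
    if ram_mb <= 512:
        return (int(ram_mb * 1.5), int(ram_mb * 2))      # min 1.5x, max 2x RAM
    return (int(ram_mb * 1.5), int(ram_mb * 1.5))        # > 512MB: min = max = 1.5x

for ram in (256, 512, 1024, 2048):
    print(f"{ram} MB RAM -> min/max pagefile {win2k_pagefile_mb(ram)}")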
Re: (Score:3, Interesting)
I think most people's issue with this is that there aren't a lot of good options in Windows. Either you deal with its crappy swapping decision algorithms, or you go without swap in an OS that has assumed swap is there for about a decade.
No...not really (Score:5, Informative)
It's a much better idea to set it interactively. Use the system without adjusting the Virtual Memory for a while. Then take a look at your usage and set your virtual memory against that usage.
For instance.
If you're in a Windows machine, let it run normally for a few days.
Run everything the way you normally use it.
Multiple apps, multiple instances, games out the ass, everything.
Then open up the Task Manager and look at the Performance tab.
Take a look at the Peak value under "Commit Charge".
Set your virtual memory, min and max, at about 10% above that value to leave yourself a little headroom.
Normally this will be enough to deal with your maximum swap requests.
If, somehow, you begin bumping against virtual memory limits again AFTER that, bump it another 10%.
If you still have problems, keep bumping it in 10% increments, and start looking for apps that are memory leaking.
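The same procedure as a trivial Python helper (the peak commit figure is just an example; plug in whatever Task Manager shows you):

# Size a fixed Windows pagefile ~10% above the observed peak Commit Charge.
def pagefile_mb(peak_commit_mb, headroom=0.10):
    size = int(peak_commit_mb * (1 + headroom))
    return size, size          # set min == max so Windows never resizes it

print(pagefile_mb(1800))       # e.g. an observed 1800 MB peak -> (1980, 1980)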
Re:Not much, anymore... (Score:4, Insightful)
In Windows 2000/XP you can't disable swap memory - plain and simple. Swap size can be reduced, that's all, but Windows will only follow your setting until the need arises (and that won't be when Windows has run out of RAM, as others have explained).
Actually, the pagefile can be turned off in Windows XP easily, with no problems whatsoever if you have enough RAM to hold your memory footprint.
In fact, the way Windows DOES handle memory, it is better at running without a paging file than most OSes, because it will not shove craploads of content into the pagefile anticipating that the application will use it.
Windows Vista also can and will run well without a pagefile, without incident.
Where Windows has 'sucked' at pagefiles in the past is that it would give priority to file operations that are not application-load related and use the RAM cache for them, thereby paging existing applications out to the hard drive. (This is changed in Vista; file copy operations should no longer consume RAM cache at the expense of applications.)
Re: (Score:3, Informative)
Depends (Score:5, Interesting)
Rules of thumb are dumb (Score:2)
It depends on what you're doing with the computer, and what hardware resources are available. Out of memory is bad. Very bad. On systems which have oodles of RAM, I tend to give low or no swap; on systems tight on RAM I may give 10x or more the amount of RAM.
Here, "oodles of RAM" and "tight on RAM" are very dependant on what the system's being used for. For a home NAT gateway 64MB may be oodles; for an image processing station, 1GB may be tight (especially when dealing with medium
Re:Rules of thumb are dumb (Score:5, Interesting)
A few years ago, we had a customer with multiple colocated servers complaining that sometimes they crashed for no apparent reason.
After much debugging, we figured out one of their scripts was leaking memory, eventually consuming all RAM (2.5G) plus all swap (1-2G).
Now the real problem is this: those were LIVE processes, so the system was constantly paging back and forth, using 90%-95% CPU just to swap the damned things in and out and starving the actual processes.
Linux 2.4, Linux 2.6 (early 2.6): same deal. Amazingly, the distro made a difference; Red Hat was pure hell, Debian slightly better (though still not acceptable).
FreeBSD was much smarter: it just killed the offending processes. It sure wasn't ideal, but at least the server was still serving its clients.
To this day, I never put more than 256MB of swap even on servers with 4G of RAM. That's where we had the least problems.
Is swapping obsolete? (was:Rules of thumb are dumb (Score:5, Interesting)
That raises the question: is swapping obsolete? Or to put it more explicitly, has the speed difference between modern CPUs and hard drives become so large, and RAM so cheap, that it's better to consider running out of RAM to be indicative of a software failure? That way you end up with a system where one or more processes may fail (or be terminated) but at least the machine remains usable and doesn't swap itself into non-responsiveness.
In my experience, the answer is yes: with 2GB of RAM in my machine, I never need to swap, and in the few instances where swapping did occur, it was because of buggy software (memory leaks), and manually terminating the offending processes is what I needed to do to resolve the memory shortage. So why not just have the OS do that automatically?
Or to put it a third way, is there any situation where swapping is helpful, anymore?
Re:Is swapping obsolete? (was:Rules of thumb are d (Score:3, Informative)
And RAM can be so "cheap" as you say, but disk is still far cheaper.
With swap, you also have some way to find out that you're running out of memory. You can monitor it, and you can also sometimes notice a performance decrease (if it's a desktop), though you'll probably not notice it with SCSI disks. But you still have the monitoring, right?
Re:Is swapping obsolete? (Score:4, Informative)
Sure. Consider Andrew Morton's logic:
http://kerneltrap.org/node/3000 [kerneltrap.org]
In your average program, most code never gets executed, and most data is never used. For a long-lived process, swapping out the unnecessary bits frees the memory for disk cache.
While you may improve overall performance by minimizing the average completion time for operations, the downside is responsiveness. As a user, I don't care if Firefox reads cached images a few milliseconds faster (by reading from cache instead of disk) if I have to wait 3 seconds for Thunderbird to respond to my clicks (because it has to swap back in) after I've been browsing for a while. Average speed be damned!
Having said that, I just set my swappiness to 100.
If you have enough, none (Score:4, Insightful)
Back when I had 512MB of memory, I had a 512MB swap partition, but I noticed that I never came close to using all of it.
When I got my new machine with 1GB, I never bothered to make one at all, and I've never had a problem with it. If I do ever find myself in a situation where I need some swap space, I could always just create a swap file. It's a lot more convenient because it wouldn't have to be a fixed size, doesn't take up space when I don't need it, and I have one less partition.
Especially if you have 2GB or more, I don't see a real reason to use swap.
LVM (Score:3, Informative)
Re:If you have enough, none (Score:5, Interesting)
you never know when some runaway process is going to eat all yer RAM and need to use swap... no matter how much RAM you've got.
I typically just make a 1 or 2 GB swap partition since I've got more than enough space to spare. I mean, back in the days when 128MB of RAM was considered a lot, and a 5GB drive was considered huge, no one would consider using 20% of their storage space for swap. Now, it's not unusual to have 300GB of storage, so what's 1% of that being used for swap?
I've also got a serious collection of 2-6GB harddrives kicking around, now, so I've been using them for swap. It's really pointless to have a 4GB partition for data, so I just use the entire 6GB drive for swap on some machines.
My primary server right now has a 4GB swap partition and 1.25GB of RAM... a piece of bad AJAX code that ran overnight wound up using all the RAM and had some seriously detrimental effects on the performance of the server. It took 25 minutes to ssh in in the morning, and when I finally got in, I found that the load averages were over 100 (I've NEVER seen that before).
My point is that even if you have a LOT of RAM, it's still handy to have some spillover available.
Re:If you have enough, none (Score:5, Interesting)
That 2/5/6GB drive may have a 20MB/s sequential rate at OD and half that at ID. Modern drives more than double that sequential performance (or triple), which is what's critical when swapping in/out a large job. Many drives in that generation don't support UDMA either, and talk with PIO, meaning you get no data checksum on your transfers.
You can span generations when you're using a cost reduced modern drive (fewer heads, same formats) but the drive that was stretching to make 5GB across 6/8 heads will be a real POS compared to modern drives performance wise.
Thrashing is bad, but thrashing to a slow disk I'd think would be worse. It is even compounded since that 5GB drive is probably PATA, meaning you're going to have your swap drive and primary drive sharing a cable, which will basically nuke most of the savings of 2 disks since they'll be reselecting master/slave at almost every command.
Re:If you have enough, none (Score:4, Insightful)
Re: (Score:3, Informative)
Even then, I'd probably replace the 5GB drive with a more modern 300GB or 400GB spindle. Create 5GB for the swap area on it, use the rest for temp directories, the xlog, a
Re:If you have enough, none (Score:5, Insightful)
Frankly, while I do use swap, in this case I'd rather have the process crash sooner rather than later.
Re: (Score:3, Insightful)
Personally, I prefer a runaway process to run out of resources and stop rather than take over my whole system. It takes a long time to page out 1+ GB of RAM, and a long time to page all of that back in at shutdown or even when an app is closed.
Swap completely depends on the computer's real RAM available and the purpose of the computer and the OS on said computer.
To adequately answer the q
Re: (Score:3, Insightful)
To adequately answer the question, "How much Virtual Memory is Enough?" The correct answer is "It depends".
Exactly... and some OSes (read: OS X) cache less-frequently used data (cached window contents, other images, etc.) to the drive to free up real RAM; it doesn't matter how much RAM is installed on the machine, it'll still use the swap. Even my machine at work with 8GB of RAM frequen
Re:If you have enough, none (Score:4, Interesting)
Naturally if you actually had that much physical RAM, the process would have still gone nuts, but your server wouldn't have had to thrash its disk for every process except the prodigal son, so the performance hit probably wouldn't have been noticeable.
Re: (Score:3, Interesting)
"you never know when some runaway process is going to eat all yer RAM and need to use swap... no matter how much RAM you've got."
If you truly have a runaway process, it will use up all of your swap, no matter how much swap you've got. In most cases, it would be better for it to die sooner rather than later.
I am a very heavy user and run many applications simultaneously. I have been running XP with 1 or 2GB of RAM and no swap file for over a year now. Despite having dozens of tabs open in two different
Re: (Score:3, Interesting)
If you have 2GB of RAM and a process starts leaking violently, the difference between giving it 1.5GB of (physical) RAM to chew through before it or the box dies, and 3.5GB (2 of which are swap), is meaningless. If it's chugging that much memory, it's probably leaking without restraint anyway.
This really depends on how likely you see a scenario where you'll be (legitimately) using more than your physical 2GB. For my office desktop box, that's a "never ever ever, not by a long shot", so I plain
Re: (Score:3, Insightful)
The thing is, in that situation, swap just makes things worse. Now instead of having a computer with all its RAM used up, you have a computer with all its RAM and all its swap space being used up, and it's slow as molasses due to constantly waiting for the hard disk I/O.
At least without swap, the runaway process will be killed in a few seconds and then you can continue working.
Enough... (Score:3, Funny)
Equal to. (Score:2)
I use this (Score:5, Insightful)
2G swap for up to 8G RAM
+1G swap for every 4G RAM beyond that
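That rule written out as a small Python function (just the heuristic above, nothing more):

# Swap sizing per the parent's rule, in GB.
def swap_gb(ram_gb):
    if ram_gb <= 8:
        return 2
    return 2 + (ram_gb - 8) // 4   # +1 GB of swap per extra 4 GB of RAM

for ram in (4, 8, 16, 32, 64):
    print(f"{ram:>3} GB RAM -> {swap_gb(ram)} GB swap")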
1GB ram using XP (Score:3, Informative)
Don't forget disk cache (Score:5, Insightful)
Re:Don't forget disk cache (Score:4, Interesting)
This is most obvious when you are copying large amounts of data, e.g. during a backup.
Say you have a 250GB disk and you copy it to another one. The system will continuously try to keep the files you have read in the disk cache (because you may read them again) and try to keep room for many dirty pages that still have to be written to the destination disk (because you may change them again before the final write).
All of this "(because)" is never going to happen as everything is read once and written once and then no longer needed.
But still, it will swap out running processes to make room for the above.
The net effect you see is that the source and swap disks are very busy, the destination disk sits idle for long stretches until the kernel feels like flushing out some dirty buffers, and the other programs slow down to a crawl fighting over the swap space.
It can be tuned with the "swappiness" variable but it remains a tough thing to control. It looks like Windows does a better job in this (not so hypothetical) case.
There should be some "file copy mode" (used during backups and other large tree copies) where it:
- discards all disk USERDATA caches immediately after use (directory and other filesystem allocation data may be kept)
- immediately writes out any written USERDATA to the destination disk, not having it populate the dirty pages until bdflush comes around to write them
- keeps re-using the same small set of buffers to pump the data from source to destination, without stealing memory from others
The issue is, of course: how could this mode be enabled? It could be a special system call, but who would call it, and where?
Personally, I would already be happy with a program like "nice" or "ionice" that would run a command line in a special mode (e.g. with a very small buffer quota) to force such behaviour. But the world at large would of course be better served if this happened automatically when lots of data are copied sequentially.
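As far as I know no such mode exists today, but per file you can approximate it yourself. A rough Python sketch that copies through one small reusable buffer and asks the kernel to drop the cached data behind it with posix_fadvise (Unix-only, Python 3.3+; fsync per chunk is deliberately crude, just to keep dirty pages from piling up):

# Sketch of a cache-friendly copy: small fixed buffer, drop the page cache
# for each chunk right after it's been read/written.
import os

def cache_friendly_copy(src_path, dst_path, bufsize=1 << 20):
    src = os.open(src_path, os.O_RDONLY)
    dst = os.open(dst_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        offset = 0
        while True:
            chunk = os.read(src, bufsize)          # reuse one buffer's worth
            if not chunk:
                break
            os.write(dst, chunk)
            os.fsync(dst)                          # force dirty pages out now...
            # ...then drop both cached copies so they don't evict anything else
            os.posix_fadvise(src, offset, len(chunk), os.POSIX_FADV_DONTNEED)
            os.posix_fadvise(dst, offset, len(chunk), os.POSIX_FADV_DONTNEED)
            offset += len(chunk)
    finally:
        os.close(src)
        os.close(dst)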
Re: (Score:3, Funny)
LOL. Windows will swap out my web browser when I'm copying a 2GB file from one drive to another.
The whole idea of kicking out real applications to increase disk cache size is absolutely retarded. Unless the cache is below some absolute minimum size, it should never, ever swap out an application just to try to cache data that I'm probably never going to use again. The operating system has no damn clue about how important a file may
Well, there's what I do and then there's reality (Score:4, Informative)
However, you might just do what I do and try out different values to figure out what works. If you're talking about a linux system a real-time memory/swap usage graph can be added to most window managers so that you can see what's happening. You could also try to estimate usages based on what the machine is expected to do.
Excuse me while I reminisce... (Score:2)
After some poking around in the system, we found that we were in the topsy-turvy situation of having the OS running in RAM and all the applications running in the swap file on the HD!
As soon as he got rid of the silly voices and other frippery (cool, though!), it went back to behaving in a more sensible manner.
I think RAM prices have fallen faster than HD speeds have risen, so it has more impact than it used to to
Re: (Score:2, Informative)
1. The 128 KB Mac did not have a HD (though there were some companies that made disks that plugged into the floppy port).
But more importantly:
2. There was no "swap" (Virtual Memory) for the Mac OS until System 7, which wouldn't run anything less than a Mac Plus.
More is better! (Score:5, Funny)
Set it and forget it (Score:5, Insightful)
If you're asking about creating a swap partition for Linux then 1.5X is also recommended. Just be generous, unless -- for some reason -- you've got 2GB of RAM and a 50 meg hard drive. Too much is always better than not enough.
Please memorize this equation (Score:3, Funny)
Still 2-3x physical RAM (Score:2)
Systems typically page less now that we have multiple gigs of RAM per server, but if something goes wrong, disk is so cheap that having the overhead installed and ready to use is fine. Having a live, active safety margin is just good system-planner sense.
If you skimp on OS hard disks so much that 2-3x physical RAM is an excessive chunk out of the hard disks, then you're doing somethin
auto (Score:3, Insightful)
No swap at all (Score:4, Interesting)
With all of our 64-bit, 4GB-of-RAM-minimum hosts floating around, there is no longer a point to having swap -- if your server really is swapping, it's under a huge load and the IO is making the problem worse. Let the OS kill a few processes to get it back under control.
Re:No swap at all (Score:5, Interesting)
You should set off alerts and alarms if your servers start paging. Randomly killing things instead? Insanity.
You can never build reliable services for users/customers unless you can handle random or accidental error conditions gracefully. Swap space is a cheap and easy way to do that.
Rule of thumb... (Score:5, Insightful)
But... but... the rule of thumb says to have twice as much swap as RAM!
It's a pet peeve of mine that so many system administrators appeal to "rules of thumb" about decisions such as this, instead of actually thinking it through. Sys admins pass around these nuggets of wisdom with unquestioning reverence, like they were handed down from some bearded UNIX guru sitting on a mountaintop. These rules either 1) happen to reflect reality, 2) do not reflect reality, or 3) reflected reality 20 years ago but nobody got around to issuing some sort of "revocation rule of thumb". :)
My experience is that very little swap is needed these days, and the rule of thumb falls into category #3. Long gone are the days that the OS demanded swap space for all process memory [san-francisco.ca.us].
If I have a machine with 1GB of RAM, I'll usually give it 512MB of swap or so. As discussed elsewhere in this thread, a little bit of swap is good for pre-emptive swapping and for emergencies (to avoid the dreaded Linux "oom killer".) Also, if you're going to use hibernate, you'll want at least as much swap as real memory.
heavy windows usage = 0, anything else = default (Score:3, Interesting)
Very simply, if you use Windows and use it heavily (run some intensive tasks or need performance), turning off the page file will give you a nice performance boost... or rather, will not take away from performance.
I have 1GiB of physical memory on my laptop, and reaching the limit in Windows with my paging file off posed a challenge (in other words, it worked perfectly well without it).
This is because Windows attempts to use the paging file whenever it can (proactive), unlike Linux, which uses it only when there's no other way (reactive). Depending on the applications you're running, one approach will be better than the other, though from what I've seen, I don't like what Windows does...
Caveat lector: this might be because I wasn't seeing the slowdowns which might've been caused by the reactive approach. I've still yet to form an opinion on it - but so far it looks very reasonable.
If using Linux, keep the swap partition and forget about it.
In Windows, the best way to figure out if you need your page file is to load up as many apps as you normally load, maybe a few more - and check the memory usage (don't trust "VM usage" in windows task manager, it doesn't show you what you think it shows you!). If the usage is lower than your physical ram by a [few] hundred MiBs, turn off the page file and don't look back. If it's closer, set the page file to a small size, usually no more than 512MiB. If you set the file, make its size static, so that Windows doesn't try to adjust it all the time (it's too stupid to understand that you want to keep it as small as possible)
Interesting to note that the paging file is not used for hibernation, even though you'd think it were almost tailor-made for that purpose. I've heard that early betas of Windows 2000 woke up from hibernation in a few seconds - I bet they were using the paging file for hibernation then... but I digress
HTH
Re:heavy windows usage = 0, anything else = defaul (Score:4, Informative)
4GB RAM, 4GB swap (Score:4, Insightful)
I have a lot of things running which, usually, are doing nothing. For instance, apache2, mysql, postfix, and courier-imapd-ssl are always running, but they're rarely actually *doing* anything. (If I get a hit or an email it's relatively rare, as I have very little hosted off of my home box - nevertheless, I do want these running.) So I'm happy to let these get swapped out. When I start up MATLAB and begin dealing with huge datasets, I know it's going to swap most of these out. That's good. It will also swap out some of my MATLAB data that's loaded but not currently being used (and yes, it's quite possible to have >4GB in your workspace). For me, I have the swap because I need it. Figure out what you need, and you will have the answer to your question.
Mac OS X swap (Score:5, Informative)
Re:OSX - 4 gigs RAM, 14 gigs swap?!? (Score:4, Informative)
BSDs like more (Score:5, Insightful)
Disk is always far cheaper and more plentiful than memory. If you have four gigs of memory, what's wrong with carving eight gigs of swap out of your terabyte RAID? If you have that much memory in the first place, then you're probably running large apps. Do yourself and them a favor and give them a little breathing room.
Are you using tmpfs or not? (Score:5, Interesting)
Read (please!) (Score:5, Informative)
1) Page space is not swap space. There's a small distinction that's generally lost (and generally ignored). Page space is used to move memory pages to and from disk. Swap space is technically to move entire processes out to disk. The difference is mainly based on when your OS was created (i.e., technological underpinnings) and no need to get into it now... but the difference is meaningful.
2) Page space is not *free*. There's a misconception that if you have 500G of disk space then "how does it hurt" to put 8G of swap on 4G RAM. Depending on your OS, the size of the page table can grow remarkably depending on how much memory (RAM + VM) is allocated. This means that adding 2G of page space may not cost anything, but adding 2.5G may suddenly take up another chunk of real, non-pageable memory because the page table cannot itself be paged. This means that if your app is thrashing, then adding page space may make it worse.
3) Even with lots of RAM, it's still often a good idea (depending on your usage) to have some page space. Modern OSes will still page out unused pages to use RAM for better stuff. I.e., if you have a huge file open in a graphics application, but are not actively using that application for a length of time (an hour, say) then the OS will page it to disk. This makes better use of your physical RAM. On some OSes the OS will use page space even if free RAM is available. It can then toggle a page out by flipping a bit in the page table and not have to do an expensive write.
4) On some systems you can overcommit memory. Applications tend to request a lot more memory from the OS than they'll actually use. This is useful in many instances, but again it depends on your usage. If you're running a single application that doesn't dynamically allocate memory, then you can run pageless. If a new app then requests memory that isn't available, it will get a failure on the malloc request. This can be desirable in some circumstances (see the sketch after this list).
5) There are benefits to running page space on a separate disk, but for the vast majority of home users, the difference is negligible. This applies to Windows and Linux. Once you start stressing the VM subsystem then a separate disk is highly desirable.
6) You can create page files on Unix/Linux. It's not desirable generally because of the extra filesystem overhead and possibility of fragmentation. But hey, in a pinch it works.
7) Why this 2x RAM rule? A lot of it comes from old VM subsystems that needed a "picture" of the entire memory space. This made the page-out algorithms easier to code. Newer algorithms don't require the 2X RAM.
KL
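A small Linux-flavoured sketch for point 4: the overcommit policy is just a sysctl, and /proc/meminfo shows the resulting commit limit (the field names are real kernel ones; the printed numbers will obviously vary per machine):

# Read the current overcommit policy and commit accounting on Linux.
# Mode 2 makes allocations fail up front (malloc returns NULL) instead of
# letting the OOM killer pick a victim later.
def read(path):
    with open(path) as f:
        return f.read().strip()

mode = read("/proc/sys/vm/overcommit_memory")   # 0=heuristic, 1=always, 2=strict
print("vm.overcommit_memory =", mode)

with open("/proc/meminfo") as f:
    info = dict(line.split(":", 1) for line in f)
limit_kb     = int(info["CommitLimit"].split()[0])    # swap + ratio% of RAM
committed_kb = int(info["Committed_AS"].split()[0])   # what's been promised so far
print(f"Committed {committed_kb // 1024} MB of a {limit_kb // 1024} MB commit limit")

# To switch to strict accounting (as root):  echo 2 > /proc/sys/vm/overcommit_memory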
min(2*RAM, 512MB) (Score:4, Interesting)
Smarter per-process resource quotas would probably be better, and it would be nice to have a thrashing threshold that accounts for disk speed, but so far 512MB sounds like the dividing line between reaching for the reset button and just taking a coffee break when you see the HD LED blinking like a strobe.
It is just easier to try the approach where you consume a lot of RAM first and re-code if it doesn't work. I work in bioinformatics and we often have huge datasets; I always try to load the whole thing and do the computation in RAM. Only when I get an out-of-memory error do I segment the dataset and try a smarter approach. That might explain my choice of 512MB; the right threshold for other people might be bigger or smaller, but I'm pretty sure it's below 8GB.
Listen to Mr Cray. (Score:5, Funny)
"Memory is like an orgasm. It's a lot better if you don't have to fake it."
-- Seymour Cray, on virtual memory.
Re: (Score:3, Funny)
It is usually recommended to use analogies which the target audience can relate to.
"It Depends" (Score:3, Insightful)
Or, distilled: a machine with less RAM than average needs more than two times that much virtual memory; average RAM needs one to two times; and a machine with lots more RAM than average can probably get away with less than one times, or even none, but should probably use one times anyway.
Again note that average refers to the RAM size of a current generation machine configured to run the typical number of typical current programs with reasonable performance.
All memory is virtual these days! (Score:4, Insightful)
Only on-chip memory, i.e. cache, is "real" these days, and all accesses to DRAM are handled in paging units of 64/128 bytes or so. If this sounds familiar, it should! CPUs with 1 to 4 MB of real memory and lots of virtual memory are what the mainframes and minicomputers had about 20-30 years ago.
What this means is that now, just like then, all performance-critical code needs to be written to keep the working set within the amount of "real" memory you have available. When you passed this limit, you needed to make sure that you handled paging in suitably large blocks, to overcome the initial seek time overhead.
Today this corresponds to the difference between random access to DRAM and burst-mode (block transfer) which can be nearly an order of magnitude faster.
In the old days, when you passed the limits of your drum/disk swap device, you had to go to tape, which was a purely sequential device. Today, when you pass the limits of DRAM, you have to go to disk, which also needs to be treated as a bulk transfer/sequential device.
I.e., all the programming algorithms that were developed to handle resource limitations on old mainframes should now be resurrected!
"those who forget their history, are condemned to repeat it"
Terje
Re: (Score:2)
Re: (Score:2)
Personally I use 5GB of swap; that's enough should anything start gobbling it up (I have seen one buggy game use over 6GB once - 1.8GB in real memory, the rest swapped).
But then, I beta-test a lot of games that have memory handling like the Titanic had water pumps.
Re: (Score:2)
Re:Depends... (Score:5, Informative)
In Windows, your RAM is saved to a file called "hiberfil.sys" which is the exact size of your physical RAM. Your swap file stays exactly the way it is, otherwise you'd lose the data that was swapped to it.
In Linux, it depends on what program you are using to suspend, but typically, it's a file in