How Much Virtual Memory is Enough?

whitroth asks: "Ten years ago, Received Wisdom said that virtual memory should be, on average, two to two-and-a-half times real memory. These days, when 2 GB of RAM is not unusual and many times that is not uncommon, is this rule still reasonable? What's the sense of the community as to what is a reasonable size for swap these days?"
  • Depends (Score:5, Interesting)

    by beavis88 ( 25983 ) on Tuesday August 29, 2006 @09:01PM (#16004042)
    My rule of thumb these days is 1.5x RAM, unless you're at 2GB, in which case I go with 2GB swap as well. This is for *gasp* Windows, though.
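    Purely as an illustration, the parent's rule of thumb is easy to write down as a tiny helper; this is only a sketch of the heuristic described above, not anything Windows itself computes:

    ```c
    #include <stdio.h>

    /* Hypothetical helper expressing the rule of thumb above:
     * 1.5x RAM for swap, but once you reach 2 GB of RAM just match it with 2 GB. */
    static unsigned long suggested_swap_mb(unsigned long ram_mb)
    {
        if (ram_mb >= 2048)
            return 2048;              /* cap: 2 GB swap at 2 GB RAM and above */
        return ram_mb + ram_mb / 2;   /* otherwise 1.5x RAM */
    }

    int main(void)
    {
        unsigned long sizes[] = { 256, 512, 1024, 2048, 4096 };
        for (unsigned i = 0; i < sizeof sizes / sizeof sizes[0]; i++)
            printf("%lu MB RAM -> %lu MB swap\n", sizes[i], suggested_swap_mb(sizes[i]));
        return 0;
    }
    ```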
  • by MyDixieWrecked ( 548719 ) on Tuesday August 29, 2006 @09:18PM (#16004111) Homepage Journal
    not creating a swap partition at all is a bad idea, imo...

    you never know when some runaway process is going to eat all yer RAM and need to use swap... no matter how much RAM you've got.

    I typically just make a 1 or 2 GB swap partition since I've got more than enough space to spare. I mean, back in the days when 128MB of RAM was considered a lot, and a 5GB drive was considered huge, no one would consider using 20% of their storage space for swap. Now, it's not unusual to have 300GB of storage, so what's 1% of that being used for swap?

    I've also got a serious collection of 2-6GB hard drives kicking around now, so I've been using them for swap. It's really pointless to have a 4GB partition for data, so I just use the entire 6GB drive for swap on some machines.

    my primary server right now has a 4GB swap partition and 1.25GB of RAM... a piece of bad AJAX code that ran overnight wound up using all the RAM and had some seriously detrimental effects on the performance of the server. it took 25 minutes to ssh in in the morning and when I finally got in, I found that the load averages were at over 100 (I've NEVER seen that before).

    my point is that even if you have a LOT of RAM, it's still handy to have some spillover available.
  • by edmudama ( 155475 ) on Tuesday August 29, 2006 @09:32PM (#16004169)
    If you've got a 300GB primary drive, it's foolish to use a 5GB drive for your swap. While you gain the benefit of having that drive separate from the primary (and potentially not contending for the bus), those drives are so far apart technology-wise that you'd probably be better off with a swap partition on your most modern disk.

    That 2/5/6GB drive may have a 20MB/s sequential rate at the OD and half that at the ID. Modern drives more than double (or triple) that sequential performance, which is what's critical when swapping a large job in or out. Many drives of that generation don't support UDMA either and talk PIO, meaning you get no data checksum on your transfers.

    You can span generations when you're using a cost-reduced modern drive (fewer heads, same formats), but a drive that was stretching to make 5GB across 6/8 heads will be a real POS compared to modern drives performance-wise.

    Thrashing is bad, but thrashing to a slow disk, I'd think, would be worse. It's even compounded since that 5GB drive is probably PATA, meaning you're going to have your swap drive and primary drive sharing a cable, which will basically nuke most of the savings of two disks since they'll be reselecting master/slave at almost every command.

  • No swap at all (Score:4, Interesting)

    by DrZaius ( 6588 ) <gary.richardson+slashdot@gmail.com> on Tuesday August 29, 2006 @09:38PM (#16004207) Homepage
    I think it was one of the LiveJournal guys at OSCON who said, "If your server starts to swap, you've lost the battle."

    With all of our 64-bit, 4GB-of-RAM-minimum hosts floating around, there is no longer a point to having swap -- if your server really is swapping, it's under a huge load and the I/O is making the problem worse. Let the OS kill a few processes to get it back under control.
  • Re:No swap at all (Score:5, Interesting)

    by georgewilliamherbert ( 211790 ) on Tuesday August 29, 2006 @09:46PM (#16004245)
    If the server starts to swap, you've lost the battle. But randomly killing things or locking up is losing the war.

    You should set off alerts and alarms if your servers start paging. Randomly killing things instead? Insanity.

    You can never build reliable services for users/customers unless you can handle random or accidental error conditions gracefully. Swap space is a cheap and easy way to do that.

  • by Fry-kun ( 619632 ) on Tuesday August 29, 2006 @09:59PM (#16004311)
    The OP poses the wrong question. Virtual memory is built into the OS and cannot be turned off. What the OP means is paging, or the swap file (i.e. simulating memory using HD space). The rest of this reply will ignore this difference.

    Very simply, if you use Windows and use it heavily (run some intensive tasks or need performance), turning off the page file will give you a nice performance boost... or rather, will not take away from performance.
    I have 1GiB of physical memory on my laptop, and reaching that limit in Windows with the paging file off posed a challenge (in other words, it worked perfectly well without it).
    This is because Windows attempts to use the paging file whenever it can (proactive), unlike Linux, which uses it only when there's no other way (reactive). Depending on the applications you're running, one approach will be better than the other, though from what I've seen, I don't like what Windows does...
    Caveat lector: this might be because I wasn't seeing the slowdowns that might've been caused by the reactive approach. I've yet to formulate an opinion on it - but so far it looks very reasonable.

    If using Linux, keep the swap partition and forget about it.
    In Windows, the best way to figure out if you need your page file is to load up as many apps as you normally load, maybe a few more - and check the memory usage (don't trust "VM usage" in the Windows task manager; it doesn't show you what you think it shows you!). If the usage is lower than your physical RAM by a [few] hundred MiB, turn off the page file and don't look back. If it's closer, set the page file to a small size, usually no more than 512MiB. If you do set the file, make its size static, so that Windows doesn't try to adjust it all the time (it's too stupid to understand that you want to keep it as small as possible).
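    For what it's worth, the same check can be made programmatically; this is just a minimal Win32 sketch that reads the counters via GlobalMemoryStatusEx(), not anything the poster above necessarily used:

    ```c
    #include <stdio.h>
    #include <windows.h>

    /* Rough sketch of the check described above: load your usual applications,
     * then see how far physical memory and total commit actually are from their
     * limits before deciding to shrink or disable the page file. */
    int main(void)
    {
        MEMORYSTATUSEX ms;
        ms.dwLength = sizeof(ms);
        if (!GlobalMemoryStatusEx(&ms)) {
            fprintf(stderr, "GlobalMemoryStatusEx failed: %lu\n", GetLastError());
            return 1;
        }
        printf("Physical RAM : %llu / %llu MiB free\n",
               ms.ullAvailPhys >> 20, ms.ullTotalPhys >> 20);
        printf("Commit limit : %llu / %llu MiB free\n",
               ms.ullAvailPageFile >> 20, ms.ullTotalPageFile >> 20);
        printf("Memory load  : %lu%%\n", ms.dwMemoryLoad);
        return 0;
    }
    ```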

    Interesting to note that the paging file is not used for hibernation, even though you'd think it were almost tailor-made for that purpose. I've heard that early betas of Windows 2000 woke up from hibernation in a few seconds - I bet they were using the paging file for hibernation then... but I digress

    HTH
  • The way I use swap (Score:2, Interesting)

    by Thaidog ( 235587 ) <slashdot753@@@nym...hush...com> on Tuesday August 29, 2006 @10:46PM (#16004536)
    Many people think that you should save the fastest part of the HD to allocate as swap. But after thinking about it, no matter where your swap is on a hard disk, it's going to be noticeable when the system pages. What is not noticeable is the perceived difference, in human terms, between VM at the outermost part of the platter and the innermost. And if you're allocating a gig of VM or more, you're wasting space for system files or application files, where things like boot time and application launch speed can be faster (along with prefetch data). In Linux you can use ReiserFS for your system root to further optimize this, since most system and application files are small in size (along with a small block allocation size). Also, if you're using a secure VM file, you may notice a difference if you move the paging file to a secondary drive with the file allocated at the outermost section. No matter what the system, I make a page file. If it needs to be secure, you should lock the application's pages into RAM. In any event, when using a disk you're going to notice paging, and at that point you'll be waiting anyway... I doubt you'll care or notice at that point whether the page finished 2ms faster or not.
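    On the "lock the application's pages into RAM" point: on POSIX systems that is what mlockall(2) does. A minimal sketch (it needs root or a sufficient RLIMIT_MEMLOCK, and the "sensitive work" part is hypothetical):

    ```c
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Lock all current and future pages of this process into RAM so the
         * kernel never writes them out to swap. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");
            return 1;
        }

        /* ... handle sensitive data here; it will not hit the page file ... */

        munlockall();
        return 0;
    }
    ```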
  • by gameforge ( 965493 ) on Tuesday August 29, 2006 @10:51PM (#16004564) Journal
    A modern computer should be able to accommodate every malloc up to memory + free disk space, and it can't easily.

    I see. So how do you get around that little address space issue? I'm quite certain a regular 32-bit PC w/ 4GB of RAM doesn't need a swapfile unless you're running Linux (and an AWFUL lot of software, or software with VERY large RAM requirements). Even in that case, it's a special kernel option, and if you can actually max your 4GB of RAM to the gills by multitasking regular, every-day software, you deserve to be penalized with sluggish performance! Okay no, but still.

    Since RAM is dirt cheap these days, everyone really should have 4GB on their 32-bit computers for the sole purpose of turning their swapfile off; it's probably the cheapest performance improvement you can buy for Windows.
  • by Feyr ( 449684 ) on Tuesday August 29, 2006 @10:52PM (#16004565) Journal
    all this swapping talk is giving me nightmares.

    a few years ago, we had a customer with multiple colocated servers complaining that sometimes they crashed for no apparent reasons.
    after much debugging, we figured out one of their scripts was leaking memory, eventually consuming all RAM (2.5G) + all swap (1-2G).

    now the real problem is this: those were LIVE processes, so the system was constantly paging back and forth, using 90%-95% CPU just to swap the damned things in and out and starving the actual processes.
    linux 2.4, linux 2.6 (early 2.6): same deal. amazingly, the distro made a difference; redhat was pure hell, debian slightly better (though still not acceptable).
    freebsd was much smarter: it just killed the offending processes. it sure wasn't ideal, but at least the server was still serving its clients.

    to this day, i never put more than 256m as swap even on servers with 4G of ram. that's where we had the least problems.
  • by irritating environme ( 529534 ) on Tuesday August 29, 2006 @10:56PM (#16004592)
    Sure, when you had 128MB of RAM, you had a 256MB swap.

    But dude, my next box will have two GIGABYTES of RAM!

    Every one of your usage options assumes you'll run out of physical ram. Maybe if the OS is wasting it on pointless disk caching, but don't you think the programs in memory should have priority over blind disk caching?

    Lest a foolish reader believe your two options (swap immediately, or swap as lazily/late as you can) are the only two possibilities, how about swapping when, say, only 20% of physical RAM is left? That way my Firefox and Eclipse don't swap to disk and take twenty seconds to swap in when I have 500MB of GODDAMN FREE RAM!
  • by irritating environme ( 529534 ) on Tuesday August 29, 2006 @11:00PM (#16004615)
    I have not run this option, but most people indicate that Windows is totally built around assuming there's a swapfile, and doesn't handle the absence of one in an optimized manner. Enough seemingly smart people have said this that I don't do it.

    I think most people's issue with this is that there aren't a lot of good options in Windows. Either you deal with their crappy swapping decision algorithms, or you go without swap in an OS that has assumed swap is there for about a decade.
  • by fuzz6y ( 240555 ) on Tuesday August 29, 2006 @11:20PM (#16004696)
    you never know when some runaway process is going to eat all yer RAM and need to use swap... no matter how much RAM you've got.
    You never know when some runaway process is going to eat all yer RAM and swap combined, no matter how much swapspace you've got.
    a piece of bad AJAX code that ran overnight wound up using all the RAM and had some seriously detrimental effects on the performance of the server
    Too bad you had all that swapspace for it to run rabid across. If you'd had no swap at all, one of two things would have happened:
    1. the kernel kills the process because of a low memory condition
    2. an attempt to allocate memory fails. The application then handles this somehow. Since we've established that it's a lousy application, I'd guess it handles it by crashing.
    Either way, the Dude^W server abides.
    Naturally if you actually had that much physical RAM, the process would have still gone nuts, but your server wouldn't have had to thrash its disk for every process except the prodigal son, so the performance hit probably wouldn't have been noticeable.
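    To make case 2 from the list above concrete, "handling it somehow" just means checking what malloc() returns. A minimal sketch (note that on Linux with memory overcommit enabled, the allocation may appear to succeed and you land in case 1 instead):

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t request = (size_t)1 << 30;      /* try to grab 1 GiB */
        char *buf = malloc(request);

        if (buf == NULL) {
            /* Case 2: the allocation failed, and the application decides what
             * to do -- log, shed load, or shut down cleanly -- instead of the
             * whole box thrashing. */
            fprintf(stderr, "allocation of %zu bytes failed, degrading gracefully\n",
                    request);
            return 1;
        }

        memset(buf, 0, request);               /* actually touch the pages */
        free(buf);
        return 0;
    }
    ```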
  • by hpa ( 7948 ) on Tuesday August 29, 2006 @11:33PM (#16004752) Homepage
    One thing to consider is whether or not you're using tmpfs for /tmp. For performance, I recommend using tmpfs for /tmp, and basically treating the swap partition as your /tmp partition. It may seem counterintuitive -- "why would it be faster than a filesystem when it's backed to disk anyway, and my filesystem caches files just fine if need be?" The answer is that tmpfs never needs to worry about consistency. On the kernel.org machines, we have seen /tmp-intensive tasks run 10-100 times faster with tmpfs than with a plain filesystem. The downside, of course, is that on a reboot, tmpfs is gone.
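    Normally you'd just give /tmp a tmpfs line in /etc/fstab; for completeness, the equivalent mount(2) call looks roughly like this (a sketch only - the size= value is an arbitrary example, and it has to run as root):

    ```c
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        /* Roughly what "mount -t tmpfs -o size=2g,noatime tmpfs /tmp" does.
         * tmpfs pages live in the page cache and spill to swap under memory
         * pressure, which is why a decent swap partition pairs well with a
         * tmpfs /tmp. */
        if (mount("tmpfs", "/tmp", "tmpfs", MS_NOATIME, "size=2g") != 0) {
            perror("mount tmpfs on /tmp");
            return 1;
        }
        return 0;
    }
    ```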

  • by RhettLivingston ( 544140 ) on Tuesday August 29, 2006 @11:36PM (#16004765) Journal

    "you never know when some runaway process is going to eat all yer RAM and need to use swap... no matter how much RAM you've got."

    If you truly have a runaway process, it will use up all of your swap, no matter how much swap you've got. In most cases, it would be better for it to die sooner rather than later.

    I am a very heavy user and run many applications simultaneously. I have been running XP with 1 or 2GB of RAM and no swap file for over a year now. Despite having dozens of tabs open in two different browsers and running programs like Studio 10 for movie editing, I've never come close to running out of RAM.

    Interestingly, it seems that Windows programs in general use less RAM if they have less virtual memory available to them. I have experimented with this to prove it to myself by reloading two identical 2GB machines with identical OS and software loads, setting one to have a 5GB swap and the other to have no swap, and starting up the same dozen or so applications on each. The "commit charge" on the machine without the swap was over 200MB lower than on the one with swap.

  • by MikShapi ( 681808 ) on Tuesday August 29, 2006 @11:39PM (#16004778) Journal
    But how does swap help?

    If you have 2GB of RAM and a process starts leaking violently, it makes no difference whether it gets 1.5 gigs of (physical) RAM to work with before it or the box dies, or 3.5 gigs (2 of which are swap). If it's chugging that much memory, it's probably leaking without restraint anyway.

    This really depends on how likely you think it is that you'll (legitimately) be using more than your physical 2GB. For my office desktop box, that's a "never ever ever, not by a long shot", so I plain don't need the swap. Were I running a 512MB box, 1GB of swap would be fine. But 2 gigs of real RAM? When and how the hell would I be using that swap?

    Of course, if I were running software that'd be using up 1.8GB on average, say some game with a chubby 3D engine, it'd be a whole different ballgame.

    As always, depends on your personal needs.
  • min(2*RAM, 512Mb) (Score:4, Interesting)

    by YGingras ( 605709 ) <ygingras@ygingras.net> on Wednesday August 30, 2006 @01:39AM (#16005218) Homepage
    I never use more than 512MB of swap. If you have a runaway process you can let it live, but you avoid a lot of thrashing. If more than one process starts consuming RAM like crazy, you actually want them to die from an out-of-mem error; otherwise your whole system will grind to a halt while it spends most of its time swapping one and then the other back in. At 512MB you can tolerate a little excess memory usage, but you won't go beyond what you can swap back in within a time quantum (mostly).

    Smarter per-process resource quotas would probably be better, and it would be nice to have a thrashiness function that accounts for disk speed, but so far 512MB sounds like the limit between reaching for the reset button or just taking a coffee break when you see the HD LED blinking like a strobe.

    It is just easier to try the approach where you consume a lot of RAM first and re-code if it doesn't work. I work in bioinformatics and we often have huge datasets; I always try to load the whole thing and do the computation in RAM. Only when I get an out-of-mem error do I segment the dataset and try a smarter approach. That might explain my choice of 512MB; the right threshold for other people might be higher or lower, but I'm pretty sure it's below 8GB.
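    That "load it all, segment only on an out-of-mem error" approach looks roughly like this in C; process_all() and process_chunk() are hypothetical stand-ins for whatever the real analysis does:

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical stand-ins for the real computation. */
    static void process_all(double *data, size_t n)   { (void)data; (void)n; }
    static void process_chunk(double *data, size_t n) { (void)data; (void)n; }

    int main(void)
    {
        size_t n = 500UL * 1000 * 1000;            /* pretend dataset size */
        double *data = malloc(n * sizeof *data);   /* greedy: try it all in RAM */

        if (data != NULL) {
            process_all(data, n);                  /* whole dataset fit; easy path */
            free(data);
            return 0;
        }

        /* Out of memory: segment the dataset and stream it through a small buffer. */
        size_t chunk = 10UL * 1000 * 1000;
        double *buf = malloc(chunk * sizeof *buf);
        if (buf == NULL)
            return 1;
        for (size_t done = 0; done < n; done += chunk) {
            size_t this_chunk = (n - done < chunk) ? n - done : chunk;
            process_chunk(buf, this_chunk);        /* real code would load data here */
        }
        free(buf);
        return 0;
    }
    ```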
  • by Jeremi ( 14640 ) on Wednesday August 30, 2006 @01:56AM (#16005279) Homepage
    To this day, i never put more than 256m as swap even on servers with 4G of ram. that's where we had the least problems.


    That raises the question: is swapping obsolete? Or to put it more explicitly, has the speed difference between modern CPUs and hard drives become so large, and RAM so cheap, that it's better to consider running out of RAM to be indicative of a software failure? That way you end up with a system where one or more processes may fail (or be terminated) but at least the machine remains usable and doesn't swap itself into non-responsiveness.


    In my experience, the answer is yes: with 2GB of RAM in my machine, I never need to swap, and in the few instances where swapping did occur, it was because of buggy software (memory leaks), and manually terminating the offending processes was what I needed to do to resolve the memory shortage. So why not just have the OS do that automatically?


    Or to put it a third way, is there any situation where swapping is helpful, anymore?

  • by Fulcrum of Evil ( 560260 ) on Wednesday August 30, 2006 @03:36AM (#16005534)

    Could your 64-bit linux system address over 2^48 bits of memory?

    Doubt it. I think AMD64 tops out at 41-42 address lines right now.

  • by Nefarious Wheel ( 628136 ) on Wednesday August 30, 2006 @03:55AM (#16005609) Journal
    So go for it!!! Who cares what you do? Heck, give yourself 10x the RAM and see if it actually makes any difference!!! (it won't)

    In general, I approve of your philosophy. But remember there are addressing overheads to map all that disk space into memory, and all that page management can give you a bit of a performance hit too. It isn't the cost of disk, it's the cost of managing it that means you have to put a little bit of thought into it. I know the stuff is cheap, but you still have to compute with it.

    Or to quote my favorite philosopher, Yogi Berra, "In theory, there's no difference between theory and practice. In practice, there is".

  • by pe1chl ( 90186 ) on Wednesday August 30, 2006 @05:17AM (#16005887)
    Unfortunately the Linux system has a hard time determining what are "infrequently accessed pages" and what are useful pages to keep in the disk cache.

    This is most obvious when you are copying large amounts of data, e.g. during a backup.
    Say you have a 250GB disk and you copy it to another one. The system will continuously try to keep the files you have read in the disk cache (because you may read them again) and try to keep room for many dirty pages that still have to be written to the destination disk (because you may change them again before the final write).
    Neither of those "becauses" is ever going to happen, since everything is read once, written once, and then no longer needed.
    But still, it will swap out running processes to make room for the above.

    The net effect you see is that the source and swap disks are very busy, the destination disk sits idle for long stretches until the kernel feels like flushing out some dirty buffers, and the other programs slow down to a crawl fighting for swap space.

    It can be tuned with the "swappiness" variable but it remains a tough thing to control. It looks like Windows does a better job in this (not so hypothetical) case.

    There should be some "file copy mode" (used during backups and other large tree copies) where it:
    - discards all disk USERDATA caches immediately after use (directory and other filesystem allocation data may be kept)
    - immediately writes out any written USERDATA to the destination disk, not having it populate the dirty pages until bdflush comes around to write them
    - keeps re-using the same small set of buffers to pump the data from source to destination, without stealing memory from others

    The issue, of course, is how this mode could be enabled. It could be a special system call, but who would call it, and where?
    Personally, I would already be happy with a program like "nice" or "ionice" that would run a command line in a special mode (e.g. with a very small buffer quota) to force such behaviour (one crude userspace approximation is sketched below). But the world at large would of course be better served if this happened automatically when lots of data are copied sequentially.
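    For what it's worth, a userspace copy tool can already approximate the first two bullets with fdatasync(2) plus posix_fadvise(2); a rough sketch (the file names and the 1 MiB buffer size are made up):

    ```c
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int in  = open("/backup/source.img", O_RDONLY);
        int out = open("/backup/dest.img", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0) { perror("open"); return 1; }

        static char buf[1 << 20];   /* one small, reused 1 MiB buffer */
        ssize_t n;
        off_t done = 0;
        while ((n = read(in, buf, sizeof buf)) > 0) {
            if (write(out, buf, (size_t)n) != n) { perror("write"); return 1; }
            fdatasync(out);                                    /* push dirty pages out now... */
            posix_fadvise(out, done, n, POSIX_FADV_DONTNEED);  /* ...then drop them from cache */
            posix_fadvise(in,  done, n, POSIX_FADV_DONTNEED);  /* and drop the read cache too */
            done += n;
        }
        close(in);
        close(out);
        return 0;
    }
    ```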

  • Witch Doctors (Score:2, Interesting)

    by mlwmohawk ( 801821 ) on Wednesday August 30, 2006 @08:31AM (#16006451)
    The computer is a lot more responsive without virtual memory, and I get no thrashing, ever!

    Oh, please. How do you quantify "more responsive?"

    Sorry to say this, but the witch-doctory of computer maintenance is not engineering; it isn't science; it isn't even common sense.

    If you get "thrashing" in your system it is because you don't have enough RAM for the applications you are using. At this stage, the application has one of two options, use user space files to reduce the "in RAM" data size or quit with an out of memory error. Most all applications, in this scenario, perform better when the OS provides virtual memory. (Obviously not all because the default swap algorithm can be a worst case, but I digress.)

    The Macintosh's early virtual memory (pre-OS X) and the old OS/2 1.x virtual memory systems really created a lot of lore about the woes of virtual memory. The reality is that the modern VMMs as implemented in the Pentiums and later are very good, the OS support for them is excellent, and a system is almost always better off with VM than without.

    In an honest discussion it is hard to speak in absolutes, so no one can say any one way is better than any other way 100% of the time, unless it is an obvious choice, like drinking poison versus drinking pure, distilled, sterilized and safe water in moderate amounts when you are thirsty and have balanced electrolytes.

    That being said, unless you have a specific and quantifiable case where virtual memory hurts performance dramatically, use virtual memory; you will be better off.

  • by Anonymous Coward on Wednesday August 30, 2006 @09:21AM (#16006728)
    ... here's what I'm currently experimenting with:
    (coming from a 256M system, and moving to a new build with 2G, win2k pro & Ubuntu 6.06 on the machine currently)

    For Ubuntu (Linux kernel v. 2.6.16) I went with a 2G swap partition, or a 1:1 DRAM/VM ratio.

    For win2k SP4 + all post SP4 updates: I'm currently just letting windows manage the VM space. On the old 256M machine, I went with a fixed 3x DRAM VM pagefile.

    Caveats: I've only recently moved to this system, and have NOT stressed it to any real degree with truly memory-intensive operations. The most stressful application to date on the new build has been Oblivion under win2k. The max pagefile use in those conditions has been c. 385M, though judging from performance that space was not being actively utilized (i.e. I didn't notice a lot of extraneous swapping going on, judging by disk activity and pagefile usage growth (none)).

    Under Linux, I can't really think of anything I use it for that would require vast amounts of memory ATM, although I may look at building new Linux kernels and gcc from source. I expect the gcc build would be the most demanding, but suspect that may not even stress a system with 2G. I have no large databases, or a way to generate enough use, artificially or otherwise (or the time or interest ATM, really), to see what would happen, but I strongly suspect that a swapfile around the size of physical memory may be enough, especially on systems with even more memory, as undoubtedly the idea in those situations is to minimize swap space use to avoid thrashing, or even lower-level swapping activity...

    Back in the day VM was great, as disk space was much cheaper than physical RAM, and large VM swapfiles/partitions allowed us to run applications that may not have been able to run before. Most of those same apps now run on machines with MUCH larger physical RAM installations, while using not very much more memory than before, meaning that they can run directly from RAM, which is significantly faster than swapping under VM. It would seem the best strategy is to allocate some (I like the 1:1 ratio), as applications will still allocate large amounts of space, much of which can come from the VM pool (reducing the physical RAM footprint) while still avoiding swapping. In more limited (number/size of apps) systems you could probably even turn off VM, although I really don't think it would gain you much, and each running app would have a MUCH larger physical RAM footprint, as all of its requested memory would have to come from the physical RAM pool...

    i.e. babbling aside, you'll likely need to do some experimentation with a worst-case load scenario and see what sort of configuration works best for you AND how much disk space you can AFFORD to allocate to swapfile usage. Another potential idea would be to look at limited embedded systems, which just don't have large amounts of storage OR physical RAM, and see what happens with those; most embedded systems with Linux that I have seen usually don't even have a 1:1 ratio of RAM to VM - usually it's a very small VM space with RAM that is several times that amount. Of course this gets a little fuzzy, as most embedded systems don't have anything other than some sort of RAM (flash, static, etc.).
  • by da5idnetlimit.com ( 410908 ) on Wednesday August 30, 2006 @09:28AM (#16006767) Journal
    And is 1000 times faster?

    So, to answer the original question:
    Optimal amount of swap? 0!

    Even my old PIII 1GHz takes 2 gigs of RAM. The newest system we have at the office takes 64 gigs.

    => right now I consider the Linux systems at work as having a problem if they use swap...

    Adding just a wee bit of RAM to your system and seeing swap disappear means your performance just exploded on that particular task...

    Best Regards,

    D.
  • by Joce640k ( 829181 ) on Wednesday August 30, 2006 @12:11PM (#16008137) Homepage
    In the old days (Windows 95/98/NT) you could control the maximum size of the file cache... and it really, really made the whole system run a lot smoother. Things like burning CDs became 1000% more reliable when you did it - you could keep on working normally while the CD was burning, instead of causing a flurry of paging every time you touched the mouse.


    For Windows XP the geniuses at Microsoft removed this ability and the whole system runs much worse because of it.


    Every time you do something which reads big files from disk on XP all your apps get paged out to disk. I don't know in which fantasy world this is supposed to be an "improvement", but it's one of my favourite reasons to hate XP.

  • by mrball_cb ( 463566 ) on Wednesday August 30, 2006 @12:28PM (#16008279) Homepage
    We've found that 512 Megs of swap is more than enough for our 2 and 4 Gig machines. Why even have swap? Here is an example:

    1) On a system with zero swap, when Apache gets slammed (say you get to the top of Digg or Slashdot), Apache starts consuming lots of memory to handle new inbound requests. When it runs out, the machine grinds to a halt because it can't allocate more, and it requires a power cycle. (Setting a low max-children limit really only helps if you are happy denying traffic to the people who are trying to see your site... it's best to plan for capacity and put quite a few servers behind a load balancer.)
    2) On a system with any appreciable swap (IMHO, more than 128 megs, up to 512 megs), if you're monitoring the system (watch -n 1 df -h, for example) and all of a sudden it starts using swap, the machine is on the edge of dying. This gives you an early warning that maximum machine performance/throughput has been reached. You can restart Apache or shut it down or similar; you can do something to temporarily lower or remove load from that machine. This doesn't give you *much* time, but it gives you some.

    In our real-world experience, at Digg and Slashdot loads you have about 10-15 seconds to stop Apache once it starts swapping. After that, the performance degrades so badly that the machine becomes catatonic, the same as #1, requiring a power reset (obviously because virtual memory on HD is orders of magnitude slower than RAM, as numerous others have suggested). The key here is that you must realize that some swap is good for allowing unused programs to be swapped out, such as login terminals that just sit there. It's great for detecting problems, but if your heavy app is the one utilizing swap, your machine is about to crash anyway.
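    One cheap way to get that early warning is to read swap usage straight from /proc/meminfo; a minimal sketch that a cron job or monitoring script could run (the threshold and wording are arbitrary examples):

    ```c
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        if (!f) { perror("/proc/meminfo"); return 1; }

        char line[256];
        long swap_total = -1, swap_free = -1;
        while (fgets(line, sizeof line, f)) {
            sscanf(line, "SwapTotal: %ld kB", &swap_total);
            sscanf(line, "SwapFree: %ld kB", &swap_free);
        }
        fclose(f);

        if (swap_total > 0 && swap_free < swap_total) {
            /* Any swap in use at all is the "edge of dying" signal described above. */
            printf("WARNING: %ld kB of swap in use -- shed load now\n",
                   swap_total - swap_free);
            return 2;
        }
        return 0;
    }
    ```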
  • by Anonymous Coward on Wednesday August 30, 2006 @01:26PM (#16008783)
    Go get OO smart cache. You need to get their defrag anyway, since the one included with Windows doesn't work. It will let you limit your filesystem cache size, as well as make some registry edits for you, like turning off DOS-compatible filename creation and last-access-time recording, which can speed up filesystem use.
  • by Anonymous Coward on Wednesday August 30, 2006 @05:36PM (#16011005)
    Others have already touched on the contents of tuning(7) by saying that FreeBSD's VM functions more efficiently when swap>=2xRAM. However, nobody seems to have yet thought of panics. While rare, system panics still happen on occasion (particularly in research environments, where people may be causing the system to do things it wasn't originally designed to do). If a dump device is defined (using dumpon(8)), FreeBSD will use it to save the crash dump. dumpon(8) says a swap device should be used, and not a filesystem, mainly because (in a panic situation) "the kernel cannot trust its internal representation of the state of any given filesystem....." The manpage also states that dumps typically aren't larger than the actual RAM on the system.


    So, when building a machine, I typically include a single swap partition equal to the amount of RAM plus 96MB or so. Disk is cheap, and the ability to diagnose panics can be extremely valuable.


    Note: This has worked well for FreeBSD 4.x, 5.x, and 6.x.
