> Still fails badly though, because the task manager will show lots of available memory when a lot of caching is being done
And what's the problem with this? If memory is shown as available then it *is* available. When an application asks for a page of memory and there are no free/zeroed pages left, the memory manager will take a "cached available" (a.k.a. "standby") page, zero it and give it to the application.
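For reference, the same counters can be read programmatically. Here's a minimal sketch in plain Win32 (link against psapi.lib); ullAvailPhys is documented as the sum of the standby, free and zero lists, which is exactly the "available" figure in question:

```c
/* Minimal sketch: read the same numbers Task Manager shows.
   Link against psapi.lib. */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms = { sizeof(ms) };          /* dwLength must be set */
    PERFORMANCE_INFORMATION pi = { sizeof(pi) };

    GlobalMemoryStatusEx(&ms);
    GetPerformanceInfo(&pi, sizeof(pi));

    /* ullAvailPhys = standby + free + zeroed pages, i.e. everything the
       memory manager can hand out on demand */
    printf("available physical: %llu MB\n", ms.ullAvailPhys >> 20);
    printf("commit charge:      %llu MB\n",
           (unsigned long long)pi.CommitTotal * pi.PageSize >> 20);
    printf("system cache:       %llu MB\n",
           (unsigned long long)pi.SystemCache * pi.PageSize >> 20);
    return 0;
}
```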
> It's not very hard to find systems that consider more than 30% of memory available, but are considerably slowed down by swap activity.
It's also easy to find systems that are paging like crazy while their commit charge is nowhere near the total amount of RAM. Any workload that involves a lot of cached or memory-mapped file IO can potentially cause this, because file-backed pages don't consume commit but they can definitely consume a lot of RAM.
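To make that concrete, here's a rough sketch of the mapped-file case (the file name is just a placeholder for any sufficiently large file): it maps the file read-only and touches one byte per page, which drives up RAM usage while commit charge barely moves.

```c
/* Sketch: map a large file and touch every page. The faulted-in pages are
   file-backed, so they consume RAM (working set, then standby) but add
   almost nothing to commit charge. "big.bin" is just a placeholder. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE file = CreateFileA("big.bin", GENERIC_READ, FILE_SHARE_READ, NULL,
                              OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
    if (!mapping) return 1;

    const unsigned char *view = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
    if (!view) return 1;

    LARGE_INTEGER size;
    GetFileSizeEx(file, &size);

    volatile unsigned char sink = 0;
    for (LONGLONG off = 0; off < size.QuadPart; off += 4096)
        sink ^= view[off];                 /* fault in one byte per page */

    printf("touched %lld MB of file-backed pages - compare working set vs "
           "commit\n", size.QuadPart >> 20);
    getchar();                             /* pause so counters can be inspected */

    UnmapViewOfFile(view);
    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}
```

While it's paused, Task Manager / resmon should show RAM usage up by roughly the file size while commit stays flat.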
The situation you describe (lots of paging even though 30% of RAM is available) is actually not all that common. Let's say we're talking about a 2 GB system, so 30% is around 600 MB of available pages. This is far above the threshold at which the memory manager starts actively trimming working sets. That makes it pretty hard to come up with a scenario that would cause a lot of paging while maintaining at least 600 MB of available memory. Consider the steps that need to occur before a page can be hard faulted on:
- A page is allocated and added to the working set of a process
- The page is trimmed and placed on the modified list if it's dirty (otherwise it goes directly to the standby list)
- The page is written to the pagefile (or its backing file, if it's a file page) and placed on the standby list
- The page is repurposed from the standby list
- Finally, the page is accessed again, causing a hard fault
The first hurdle is step #2. If there is no trimming, pages will simply accumulate in process working sets (until available memory drops below the trimming threshold). And even if the app explicitly trims its pages, they will normally sit on the standby list for a long time. The standby list is FIFO, so somebody will need to push 600 MB worth of pages into it before a given page is repurposed.
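For completeness, here's a rough sketch of the explicit-trim case from step #2, using SetProcessWorkingSetSize to drop a buffer from the working set and then touching it again. The buffer size and the PageFaultCount comparison are just illustrative; the point is that the second pass is satisfied from the modified/standby lists unless the pages have already been repurposed.

```c
/* Rough sketch of the explicit-trim case. Allocate a buffer, dirty it,
   ask the memory manager to drop it from our working set, then touch it
   again. Unless the pages were repurposed in the meantime, the second
   pass is resolved with soft faults only - no pagefile reads.
   Link against psapi.lib. */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    const SIZE_T size = 256u * 1024 * 1024;            /* 256 MB */
    unsigned char *buf = VirtualAlloc(NULL, size,
                                      MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (!buf) return 1;

    for (SIZE_T i = 0; i < size; i += 4096)            /* dirty every page */
        buf[i] = 1;

    /* explicit trim: empty this process's working set */
    SetProcessWorkingSetSize(GetCurrentProcess(), (SIZE_T)-1, (SIZE_T)-1);

    PROCESS_MEMORY_COUNTERS pmc = { sizeof(pmc) };
    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
    DWORD before = pmc.PageFaultCount;                 /* hard + soft combined */

    for (SIZE_T i = 0; i < size; i += 4096)            /* touch it all again */
        buf[i] = 2;

    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
    printf("faults on the second pass: %lu\n",
           (unsigned long)(pmc.PageFaultCount - before));

    VirtualFree(buf, 0, MEM_RELEASE);
    return 0;
}
```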
Low available memory, on the other hand, is a great predictor of paging. If available pages get low, that means the standby list is much shorter and trimming is more frequent. Both of these are necessary conditions for paging. Also note that it doesn't matter whether RAM is being consumed by pagefile-backed or file-backed pages: available memory will drop in either case, unlike commit charge, which only counts pagefile-backed pages.
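If you want to watch that correlation yourself, here's a quick sketch that polls the two relevant perfmon counters through PDH (the counter paths are the English names; link against pdh.lib):

```c
/* Quick sketch: poll available memory and the hard-fault read rate. */
#include <windows.h>
#include <pdh.h>
#include <stdio.h>

int main(void)
{
    PDH_HQUERY query;
    PDH_HCOUNTER avail, pages_in;
    PDH_FMT_COUNTERVALUE a, p;

    PdhOpenQuery(NULL, 0, &query);
    PdhAddEnglishCounterA(query, "\\Memory\\Available MBytes", 0, &avail);
    PdhAddEnglishCounterA(query, "\\Memory\\Pages Input/sec", 0, &pages_in);

    PdhCollectQueryData(query);            /* rate counters need two samples */
    for (;;) {
        Sleep(1000);
        PdhCollectQueryData(query);
        PdhGetFormattedCounterValue(avail,    PDH_FMT_DOUBLE, NULL, &a);
        PdhGetFormattedCounterValue(pages_in, PDH_FMT_DOUBLE, NULL, &p);
        printf("available: %6.0f MB   pages read in: %6.0f /sec\n",
               a.doubleValue, p.doubleValue);
    }
}
```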
> If you have a system with 2 GB of memory and 3 GB of commit, it's almost a given that you will soon suffer from memory starvation.
I'm typing this on a 2 GB x64 Win7 machine. It's running two Office apps, Visual Studio, several browser windows and a couple of memory-hogging apps that we use internally where I work. Commit charge is stable at around 3.4 GB, but there are ~800 MB of available pages. Reads from the pagefile (according to resmon) are very sporadic - maybe a couple per minute. Pagefile writes are even less frequent (Page Writes/sec in perfmon shows a steady 0, and !vm 9 in the kernel debugger shows that the last write occurred 12 minutes ago).