Comment Re:If only the cache were actually -good- (Score 1) 334

Now, it would be nice if we could see what's in the cache in Windows, but you can't.

If you're really curious, you can download windbg and run !memusage from a local kernel debugging session (or using livekd). You can also use meminfo -a -f, though you'll probably need to post-process the output to group used pages per file like !memusage does.

It would be nice if you could peg a file to the cache, in Windows, but you can't.

Not that I would recommend that, but you can write a small app that maps a file into memory and VirtualLock's the entire view. That will make sure the file's pages remain resident.
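For the curious, here's what that looks like in practice. This is a POSIX analogue sketch, not the Windows code: on Windows you'd use CreateFileMapping/MapViewOfFile plus VirtualLock, while this uses mmap(2) and mlock(2) via ctypes to pin a mapped file's pages. The function name is made up, and mlock can fail under RLIMIT_MEMLOCK, so the sketch treats failure as non-fatal:

```python
import ctypes
import ctypes.util
import mmap
import os

_libc = ctypes.CDLL(ctypes.util.find_library("c") or None, use_errno=True)

def pin_file_pages(path):
    """Map a whole file and try to lock its pages in RAM.

    Returns (data, locked): the file contents and whether the lock
    succeeded. Locking may fail under RLIMIT_MEMLOCK; that is reported,
    not raised.
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        size = os.fstat(fd).st_size
        # MAP_PRIVATE + PROT_WRITE gives ctypes a writable buffer it can
        # take the address of; any writes would go to private copies, and
        # we only read anyway.
        view = mmap.mmap(fd, size, flags=mmap.MAP_PRIVATE,
                         prot=mmap.PROT_READ | mmap.PROT_WRITE)
    finally:
        os.close(fd)
    buf = (ctypes.c_char * size).from_buffer(view)
    addr = ctypes.addressof(buf)  # mmap returns page-aligned addresses
    # mlock keeps the mapped pages resident, like VirtualLock on a view.
    locked = _libc.mlock(ctypes.c_void_p(addr), ctypes.c_size_t(size)) == 0
    data = view[:size]
    if locked:
        _libc.munlock(ctypes.c_void_p(addr), ctypes.c_size_t(size))
    del buf          # release the buffer export so the mapping can close
    view.close()
    return data, locked
```

A real pinning tool would of course hold the lock for its lifetime instead of unlocking immediately; the unlock here just keeps the sketch self-contained.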

Now you might think that after a day of builds and recompiles, at least some of that stuff would wind up in cache, but it feels like it doesn't.

It's also possible that something else makes things slow. You could profile the build process using xperf, or at least check whether these files are actually being read from disk using resmon.exe or some other disk tracing tool.

Even if it turns out that the files are in fact not cached, it's usually not the operating system's fault. Something must have pushed the cached pages out of memory. For example, maybe Visual Studio itself (or the tools it spawns, like the linker) temporarily consumes a lot of memory during the build, eating into the cache. (Though in that particular case, Superfetch should eventually reload the files back into memory.)

Comment Re:So (Score 4, Informative) 334

Starting with Vista, working sets of GUI processes are no longer emptied when the main window is minimized.

For the standby cache recycle problem, Superfetch can help a lot. First of all, it can detect when apps do things like read lots of files sequentially without using FILE_FLAG_SEQUENTIAL_SCAN (or when they do this through a mapped view) and deprioritize these pages so they don't affect normal standby memory. And if useful pages still end up being recycled (e.g. because some app temporarily consumed lots of memory), Superfetch can re-populate them from disk later.
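For comparison, the POSIX world makes the same hint explicit rather than inferred. This is a sketch of the analogue, assuming a platform that provides posix_fadvise (the function name and chunk size are illustrative; FILE_FLAG_SEQUENTIAL_SCAN is the Windows-side flag being mirrored):

```python
import os

def read_sequential(path, chunk_size=1 << 16):
    """One-pass sequential read with a cache-friendliness hint."""
    fd = os.open(path, os.O_RDONLY)
    try:
        # POSIX analogue of opening with FILE_FLAG_SEQUENTIAL_SCAN: tell
        # the page cache we will read front-to-back exactly once, so it
        # can read ahead aggressively and age these pages out sooner
        # instead of letting them crowd out more useful standby memory.
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
        total = 0
        while True:
            buf = os.read(fd, chunk_size)
            if not buf:
                break
            total += len(buf)
        return total
    finally:
        os.close(fd)
```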

Comment Re:Windows 7 and Vista Lie about memory usage (Score 1) 334

Still fails badly though, because the task manager will show lots of available memory when a lot of caching is being done

And what's the problem with this? If memory is shown as available then it *is* available. When an application asks for a page of memory and there are no free/zeroed pages left, the memory manager will take a "cached available" (a.k.a. "standby") page, zero it, and give it to the application.

It's not very hard to find systems that consider more than 30% memory available, but are considerably slowed down by swap activity.

It's also easy to find systems that are paging like crazy while their commit charge is nowhere near the total amount of RAM. Any workload that involves a lot of cached or memory mapped file IO can potentially cause this, because file backed pages don't consume commit but they can definitely consume a lot of RAM.

The situation you describe (lots of paging even though 30% of RAM is available) is actually not all that common. Let's say we're talking about a 2 GB system, so 30% is around 600 MB of available pages. This is far above the threshold at which the memory manager starts actively trimming working sets. That makes it pretty hard to come up with a scenario that would cause a lot of paging while maintaining at least 600 MB of available memory. Consider the steps that need to occur before a page can be hard faulted on:

  1. A page is allocated and added to the working set of a process
  2. The page is trimmed and placed on the modified list if it's dirty (otherwise it goes directly to standby list)
  3. The page is written to the pagefile (or its backing file, if it's a file page) and placed on the standby list
  4. The page is repurposed from the standby list
  5. Finally, the page is accessed again, causing a hard fault

The first hurdle is step #2. If there is no trimming, pages will simply accumulate in process working sets (until available memory drops below the trimming threshold). And even if the app explicitly trims its pages, they will normally sit on the standby list for a long time. The standby list is FIFO, so somebody will need to push 600 MB worth of pages into it before a given page is repurposed.

Low available memory on the other hand is a great predictor of paging. If available pages get low that means the standby list is much shorter, and trimming is more frequent. Both of these are necessary conditions for paging. Also note that it doesn't matter whether RAM is being consumed by pagefile-backed or file-backed pages - available memory will drop in either case, unlike commit charge which only counts pagefile-backed pages.
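The FIFO argument above can be sketched as a toy model. This is purely illustrative (the class and page names are made up, and the real standby list is prioritized, not a single queue), but it shows why a page with 600 MB of older pages ahead of it survives moderate pressure:

```python
from collections import deque

class StandbyList:
    """Toy FIFO model of the standby list in the steps above.

    Trimmed pages enter at the tail (steps 2-3), repurposing takes the
    oldest page from the head (step 4), and a page touched while still
    on the list comes back for free (a soft fault, no disk IO).
    """
    def __init__(self):
        self._fifo = deque()

    def trim(self, page):
        self._fifo.append(page)

    def repurpose(self):
        return self._fifo.popleft() if self._fifo else None

    def soft_fault(self, page):
        try:
            self._fifo.remove(page)   # still cached: back to the working set
            return True
        except ValueError:
            return False              # already repurposed: next access hard faults

# Pretend each entry is 1 MB and ~600 MB already sits on standby when our
# page is trimmed. FIFO order protects it: all 599 older pages must be
# repurposed before it is.
standby = StandbyList()
for i in range(599):
    standby.trim(("older", i))
standby.trim("obj-file")

for _ in range(599):          # heavy allocation pressure...
    standby.repurpose()       # ...reclaims the oldest pages first

print(standby.soft_fault("obj-file"))  # -> True (no hard fault needed)
```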

If you have a system with 2 GB of memory and 3 GB of commit, it's almost always a given that you will soon suffer from memory starvation.

I'm typing this on a 2 GB x64 win7 machine. It's running two Office apps, Visual Studio, several browser windows and a couple of memory hogging apps that we use internally where I work. Commit charge is stable around 3.4 GB, but there is ~800 MB of available pages. Reads from the pagefile (according to resmon) are very sporadic - maybe a couple per minute. Pagefile writes are even less frequent (Page Writes/sec in perfmon shows a steady 0, and !vm 9 in the kernel debugger shows that the last write occurred 12 minutes ago).

Comment Re:Windows 7 and Vista Lie about memory usage (Score 4, Informative) 334

Back in the day of Windows 2000 and XP, the Task Manager chart reported the memory commit charge. Basically, that was the amount of memory applications (and Windows) requested to be allocated. This does not mean that much memory was actually used, but with the exception of very badly written/buggy programs, it should be close.

Not necessarily. Many programs commit large chunks of memory in case they need it later but only use a small portion initially. This simplifies program logic because you don't have to free and reallocate the buffer when you need more space, deal with potential reallocation failures, etc. Or a program might want to specify a larger-than-default stack commit size to make sure it doesn't hit a stack overflow if it tries to extend the stack while the system is temporarily out of commit (most services and other system-critical processes do that). Or it might map a copy-on-write view of a file, in which case commit is charged for the entire view but no extra physical memory is used until the program actually writes to the pages. The end result is that you can't really say anything conclusive about physical memory usage by looking at commit charge.
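A toy model makes the distinction concrete. This is a deliberately simplified sketch (the class and numbers are invented, and the real memory manager tracks per-process working sets, pagefile backing, and much more): committing charges against the commit limit immediately, but physical pages are consumed only on first touch.

```python
class ToyMemoryManager:
    """Minimal model: commit is a promise of backing store, not RAM."""
    def __init__(self, commit_limit_mb):
        self.commit_limit = commit_limit_mb
        self.commit_charge = 0
        self.resident = 0

    def commit(self, mb):
        # Committing guarantees that backing store (RAM or pagefile) will
        # exist when the pages are touched; it consumes no RAM by itself.
        if self.commit_charge + mb > self.commit_limit:
            raise MemoryError("commit limit exceeded")
        self.commit_charge += mb

    def touch(self, mb):
        # First access actually materializes physical pages.
        self.resident += mb

mm = ToyMemoryManager(commit_limit_mb=4096)
mm.commit(512)   # e.g. a large growable buffer committed up front
mm.touch(40)     # ...but only 40 MB of it is ever written
print(mm.commit_charge, mm.resident)   # -> 512 40
```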

Commit charge is a virtual memory metric. It's great for detecting memory leaks and deciding how big your pagefile needs to be, but not so great for understanding physical memory usage. Often it might seem like there is a correlation between commit charge and physical memory, but you can also find systems that are very low on available RAM yet have plenty of available commit, and vice versa.

Task manager now shows used physical memory (defined as Total - Available). Available memory is the most straightforward way to understand whether your system needs more memory or not, and this is why in Vista/Win7 it was chosen as the main indicator of "memory usage".

Comment Re:So (Score 4, Informative) 334

So, pray tell, where do I learn the meanings of the various stats in Task Manager?

You can press F1 while in task manager and then search for a particular metric, e.g. "available memory". This produces results that seem moderately useful, for example:

Under Physical Memory (MB), Total is the amount of RAM installed on your computer, listed in megabytes (MB). Cached refers to the amount of physical memory used recently for system resources. Available is the amount of memory that's immediately available for use by processes, drivers, or the operating system. Free is the amount of memory that is currently unused or doesn't contain useful information (unlike cached files, which do contain useful information).

For more details about particular counters you can check the Windows Internals book, or Memory Performance Information on MSDN. Also, many counters in task manager have similar or identical perfmon counters, and perfmon has its own help (IIRC there's a "show description" option in the counter selection dialog).


Comment Re:When do people get this (Score 1) 613

> The Win7 task manager does show a "cached" stat, though, so your effectively free memory is "free"+"cached".

Not quite. "Cached" includes pages on the modified list, which are not immediately available (or may not be available at all if your pagefile is full or disabled, which is one of the reasons you shouldn't disable it, BTW).

"Effectively free" memory is shown as "Available" in win7's task manager. It is the sum of "Free" (or more precisely, free+zeroed) and "Standby". Check out the memory usage chart in the resource monitor to get a better idea of how all these counters relate to each other.
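The counter relationships reduce to simple arithmetic. This sketch uses invented numbers (in MB) just to show why "free + cached" overstates what is immediately reusable, by exactly the modified-list pages:

```python
def available_mb(free, zeroed, standby):
    # Task manager's "Available" = free + zeroed + standby pages, all of
    # which can be handed to an allocator immediately.
    return free + zeroed + standby

def cached_mb(standby, modified):
    # "Cached" counts modified pages too, which must be written to the
    # pagefile (or their backing file) before they can be reused.
    return standby + modified

# Illustrative counters (MB):
free, zeroed, standby, modified = 100, 50, 900, 200
print(available_mb(free, zeroed, standby))            # -> 1050
print(free + zeroed + cached_mb(standby, modified))   # -> 1250
# The naive "free + cached" figure is 200 MB too high: the modified pages.
```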

Comment Re:Plans set years ago (Score 1) 720

I don't think this does anything to undercut my basic assertion that Win 7 is really a service pack for Vista

Windows service packs are mostly collections of bug fixes. New features or large under-the-hood changes in service packs are extremely rare (things like the new firewall in XP SP2 are an exception rather than the rule). Something like a completely different taskbar UI would never make it into a service pack, not to mention things like a major rewrite of the scheduler, or extensive changes in the memory manager (e.g. the removal of the PFN lock).

Comment Re:Plans set years ago (Score 1) 720

Yet in May of 2007 Slashdot reported that Microsoft announced that Vista was to be its last 32-bit OS and that the successor to Vista would be 64-bit only. See here...

Actually, what was announced was that WS 2008 would be the last 32-bit *server* release. This is even mentioned in some of the comments from the link you supplied:

http://slashdot.org/comments.pl?sid=235071&cid=19172261

Which is exactly what happened by the way - the server edition of win7 (WS 2008 R2) is 64-bit only.

Comment Re:Great startegy (Score 1) 279

Win7 on 512 MB works fine for simple tasks like browsing.

> That says to me that the 700MB commit isn't all cache

Cached file data is not even included in the commit charge.

You can't actually draw any conclusions about physical memory usage from the 700 MB number. Part of that 700 MB is resident in RAM, another part is in the paging file, and yet another is purely virtual - it doesn't exist anywhere until the corresponding pages are accessed by the application (think guard pages in thread stacks, etc). Commit charge tells you how much *virtual* memory is in use. For RAM usage, check out the "physical memory usage" graph in task manager.
