
Comment Re:Way to twist reality... (Score 1) 85

If you beat your head against the wall repeatedly, do you blame the wall? Browsers that compile JS into native code use VirtualProtect. The VirtualProtect call looks like this: BOOL VirtualProtect( [in] LPVOID lpAddress, [in] SIZE_T dwSize, [in] DWORD flNewProtect, [out] PDWORD lpflOldProtect ); You can call it once per page, or once for a whole range of pages. Guess which Firefox was doing?

Comment Re:In the future. (Score 1) 187

The people with the big bucks control the requirements, not peons on Slashdot. The United States Department of Defense (DoD) specifies that "new computer assets (e.g., server, desktop, laptop, thin client, tablet, smartphone, personal digital assistant, mobile phone) procured to support DoD will include a TPM version 1.2 or higher where required by DISA STIGs and where such technology is available." DoD anticipates that TPM is to be used for device identification, authentication, encryption, and device integrity verification.[11] https://en.wikipedia.org/wiki/...

Comment Re:16 GB of RAM to open the browser on Windows 10 (Score 1) 140

You're assuming that the active allocations are contiguous, or nearly so, throughout the commit space. They aren't. That's why the segment heap uses less RAM: it decommits the empty spaces between active pages. Chatty allocators like the browser's leave sandbars in the heaps and a lot of excess commit. Do this enough times and you have active allocations scattered across many nearly-empty pages. In other words, if those pages were empty, the excess commit would only be a page file usage issue, but they are not.

Comment Re:16 GB of RAM to open the browser on Windows 10 (Score 1) 140

If you're using Windows Task Manager to look at Chrome's memory usage, you are viewing the process private working set, not the process commit. I'd bet your commit use is higher than 10 GB, with most of it in the page file, or most of your tabs have been discarded (killed). Use the browser task manager (Shift-Esc) and look at the memory column; that's the private commit for each browser process. chrome://discards will show you the tab state. I'll bet most of your tabs have been discarded and are just placeholders for the last URL visited, and the entire page contents will be reloaded if you switch to them.

Comment Re:16 GB of RAM to open the browser on Windows 10 (Score 1) 140

Actually, these are not abnormal amounts of RAM. Those of you using the Windows Task Manager to look at memory usage are most likely seeing the private working set of the process, not how much commit the process is using. You can add the commit column to the Windows Task Manager, but why not use the browser task manager instead? Right-click on Chrome's or Edge's title bar and select "Browser task manager", or press Shift-Esc while the browser has focus. There you will see how much commit (process private memory) each browser process is using, which will almost always be a lot more than the private working set shown in the Windows Task Manager. Seeing 1+ GB for the GPU process and a long-lived mail or Facebook tab is normal; more without the segment heap, unfortunately. Note that it is the excess commit, together with large working sets, that creates memory pressure, leading to paging and decreased system responsiveness. In that case a fast CPU doesn't save you.

Opossums Overrun Brooklyn, Fail To Eliminate Rats 343

__roo writes "In a bizarre case of life imitating The Simpsons, New York City officials introduced a population of opossums into Brooklyn parks and under the boardwalk at Coney Island, apparently convinced that the opossums would eat all of the rats in the borough and then conveniently die of starvation. Several years later, the opossums have not only failed to eliminate the rat epidemic from New York City, but they have thrived, turning into a sharp-toothed, foul-odored epidemic of their own."
Operating Systems

Extreme Memory Oversubscription For VMs 129

Laxitive writes "Virtualization systems currently have a pretty easy time oversubscribing CPUs (running lots of VMs on a few CPUs), but have had a very hard time oversubscribing memory. GridCentric, a virtualization startup, just posted on their blog a video demoing the creation of 16 one-gigabyte desktop VMs (running X) on a computer with just 5 gigs of RAM. The blog post includes a good explanation of how this is accomplished, along with a description of how it's different from the major approaches being used today (memory ballooning, VMware's page sharing, etc.). Their method is based on a combination of lightweight VM cloning (sort of like fork() for VMs) and on-demand paging. Seems like the 'other half' of resource oversubscription for VMs might finally be here."

Comment Re:Tits on a bull (Score 1) 334

Vista and Win7 have I/O priorities, but they are implemented in the I/O stack above the hardware. Low-priority I/O that is not yet in flight on the hardware will be delayed. The longest a single low-priority I/O operation can block access to the hardware is the time it takes to complete, which is relatively short, typically less than 10 ms. Once higher-priority I/O requests are received, low-priority I/O is delayed until all higher-priority I/O has completed. After no higher-priority I/O requests have been received for a significant period, low-priority I/O resumes. Thanks to this mechanism, SuperFetch interferes very little with foreground application I/O in Win7. In addition, if FF is a process you use frequently, SuperFetch will have the relevant code and data files in memory before you launch it. The disk track buffer is used mostly for disk writes, since the I/O bandwidth to the rotational media can't keep up with the bus transfer bandwidth.

Comment Re:When do people get this (Score 1) 613

No, that tells the OS to keep all of your process private pages in memory. Anything backed by a file, like code and data files, will still be purged from memory when other processes need it. Sequential reads from code files (.dll, .exe) and data files tend to be faster than random I/O from the page file. How much forcing the OS down this path impacts performance is very dependent on the memory usage patterns of the system and of all the processes executed on it.

Comment Re:When do people get this (Score 1) 613

If you disable the page file, you don't disable demand paging (guys, swapping went out of vogue eons ago). What you do is change how the OS can respond to memory pressure, by forcing it to write out or repurpose only the file-backed pages in memory, for example code not currently being executed, or data files. Remember, only process private pages are written to the page file (think heap and VirtualAlloc). It also forces the OS to use more physical memory for things like thread stacks, which grow dynamically and change their physical memory usage. If you disable the page file, the entire commit size of the stack (typically 1 MB per thread for Windows apps) comes from physical memory. With the page file enabled, only a few stack pages will be in physical memory and the rest are accounted for by space reserved in the page file. Since a typical system with a few applications will have between 500 and 1,000 threads, you could be blowing 0.5 to 1 GB on stack pages the system will never use. Disabling the page file is an extremely bad idea if you really want your system to remain performant. There are cases where it can help, but most users don't know enough about how demand paging works to know when it is appropriate.
