I've been happy with gkg.net. I like that they started offering IPv6 glue records very early.
I just wish I had more of a reason to use it. As seminal as Slashdot was in the genesis of geek culture, it's decayed. We've left the nest.
So why not apply this less-rigorous process when upgrading from Firefox 7 to Firefox 8?
How do you distinguish between a user and an application he's running? How do you do it over a network?
Any piece of information an application (here, a client-side program) can access, a user can access too. If we can't distinguish between users and applications, we're forced to rely on the user as the unit of trust.
The situation is different for a "web application" that can store information inaccessible to users. But for local applications, a secret key is pointless.
Do we really have to re-learn the same lessons every 5-10 years? Trust users, not programs; don't trust the client; security through obscurity is no security at all: these are fundamental concepts, but we keep forgetting them.
What exactly is the point of the API key? Anything an application can do, a user with access to that application can do. Spammers can extract a key from an application and pretend to be that application. You stop spam at the user level.
These attacks by LulzSec, Anonymous, et al. remind me of the old Twilight Zone episode "It's a Good Life". In that episode, a child with godlike mental powers causes untold misery when, without understanding, he compels the residents of a small Ohio town to conform to his whims. Likewise, these hacktivist groups wield previously unknown power, and they use it to capriciously destroy whatever offends their ego, whimsy, or underdeveloped sense of justice. In the process, they not only hurt innocent bystanders and undermine the legitimacy of their cause, but actually encourage more stringent regulation of the Internet. Like a character from a Sophocles play, they hasten the very outcome they fight against.
They are legion. They do not forgive. They do not forget. They do not plan. They do not show restraint. They do not choose their battles. They do not help.
CreateMemoryResourceNotification is precisely the wrong thing for reasons the GP has already described. It fails the "what if every application did this?" test.
The right solution is better support for discardable memory. But that's a "nice to have", not a "must have" feature.
What's the worst case?
Look: you can always give up efficiency to gain predictability. That's how real-time operating systems work. If you need hard bounds on access time, you can turn off the pagefile (or lock your application into memory). But the price is much less efficient use of RAM --- when you disable paging, the OS is forced to keep useless junk in memory, making less available for useful things.
In the real world, we don't need hard realtime guarantees in the vast majority of situations. In the real world, paging is the right thing to do because it's a huge efficiency win, and because the OS makes the right page replacement decisions most of the time.
But sure, if you're writing robot control software and people will die if the velocity control routine needs to be paged in, fine. Turn off paging. Or better yet, use a realtime OS like QNX. But for the rest of us, letting the OS manage access patterns is the right thing to do because the OS knows more than your application possibly can.
handle some asynchronous task because things are taking too long takes up 1 or more megabytes of RAM
You're still working with a naive mental model of how memory works. Thread stacks don't "take up" memory when they're created. Memory is not real estate. Thread stacks take up no memory until they're used.
Yes, the kernel will set aside pagefile space to make sure it can satisfy requests for memory (unless you're on Linux with overcommit enabled) --- but that's not the same as actually keeping all that memory resident.
Calling mlock() can indeed prevent memory from being paged out
And that's why it's a privileged operation. Yes, there are exceptions to the general theme, but real-world systems are always more complex than one would suppose from a distance. The presence of mlock doesn't change my overall point though: the operating system decides what gets to stay resident.
A single program that hogs memory
You don't get it. The operating system arbitrates between applications and decides whose memory is actually in physical RAM. It makes these decisions based on access-pattern information unavailable to normal applications. Yes, all other things being equal, accessing less memory is better. But imagining applications as "selfish" or as "hogging" memory means applying a very naive mental model to a very complex real-world system. In general, that doesn't go well.
let the OS know that instead of paging this memory to disk when low on memory, it should instead just free it and let the application know it has done so
the OS really doesn't have any intelligent insight into the usefulness of a particular allocation to an application
But applications can tell the OS which pages are important. On Unix, applications can use posix_fadvise and madvise. On NT, each page has a priority attached to it, and pages with lower priority are evicted before those with higher priority.
Of course it's better to touch fewer bytes and to keep the bytes you do touch as close together as possible. Virtual memory doesn't magically make these things happen for you. What it does do is make decisions about what to keep in RAM based on access patterns for the whole system, something no individual program can do on its own. In other words, it's exactly what the OP asked for!
If it were possible for programs to allocate caches that work like the filesystem cache, where old items get discarded automatically to make room for anything more important, then this would make sense
The system you describe is called malloc()!
In a system with a unified buffer cache (essentially, every OS in wide use except OpenBSD), it makes little difference whether a page of memory comes from a private memory allocation (e.g., a heap allocation), a memory-mapped file, or the OS's disk cache. When a process needs a page not already present in memory, the kernel's memory manager tries to find an unused page. If one is available, it hands it to the program that requested memory.
Otherwise, it looks for an in-use page, saves its contents, and hands the just-freed page to the program requesting memory. If that page is "dirty" --- i.e., it's backed by a file and somebody's written to that part of the file, or it's a private allocation backed by the page file --- the memory manager writes the page out to disk first. If the page isn't dirty, the memory manager can just discard its contents because it knows it can reconstruct it by reading back the original file.
When the memory manager has to go to disk to satisfy a request for a new page, it's called a hard fault. The mission of the memory manager is to reduce the number of hard faults, because hard faults are slow. The fewer hard faults you have, the less time will be spent waiting for the disk, and the faster your system will run.
The most important part of the memory manager is page replacement: i.e., how the memory manager chooses which page to evict in order to satisfy a memory allocation request. Most systems use an approximation of LRU (i.e., least recently used), throwing out pages that haven't been accessed in a while. It doesn't usually matter where a page came from. The only important factor is how recently it was accessed.
So, you can see that it makes no difference whether a program maps a file into memory and modifies it, reads and writes it using file APIs, or just manipulates an equal amount of information in buffers created with malloc. To the kernel, all memory is made up of pages.
The "go away for a while" problem isn't caused by any particular memory strategy. It's an artifact of the memory manager's LRU approach. How does it know that the pages corresponding to Firefox are going to be used again? If some other program needs those pages, the older ones will be discarded. There is nothing applications can do.
Instead, the OS itself has to be tweaked to preserve interactivity. Sometimes the memory manager will prefer disk cache pages to malloc-backed ones. Sometimes (e.g., for Windows SuperFetch) the OS will try to identify pages belonging to active applications and try harder to keep those in memory. Some systems favor keeping executable pages over private allocations. You can tweak the page replacement algorithm, but the basic idea, that all memory is made up of pages subject to the same management scheme, applies.
Ultimately, it's ridiculous to hear people talk about programs "keeping things in memory" like we were still dealing with DOS 6 and OS 9. The actual situation is a lot more subtle, and silly memory counters don't even come close to giving you a good picture of what's actually going on.
In short, don't worry about fine-tuning what's "in memory". Don't change behavior based on total amount of memory in the system. Operating systems (OpenBSD aside) ALREADY DO THAT. Just let the memory manager do its job, and give it enough information (via interactivity information, memory priority, etc.) to do its job properly. Don't try to hack around problems at the wrong layers.
Is anyone else absolutely sick of sensationalized headlines?
You passed up a perfect opportunity to use, "if my calculations are correct".
Physician: One upon whom we set our hopes when ill and our dogs when well. -- Ambrose Bierce