


Comment Re:Problem of perception? (Score 5, Informative) 375

If it were possible for programs to allocate caches that work like the filesystem cache, where old items get discarded automatically to make room for anything more important, then this would make sense

The system you describe is called malloc()!

In a system with a unified buffer cache (essentially, every OS in wide use except OpenBSD), it makes little difference whether a page of memory comes from a private memory allocation (e.g., a heap allocation), a memory-mapped file, or the OS's disk cache. When a process needs a page not already present in memory, the kernel's memory manager tries to find an unused page. If one is available, it hands it to the program that requested memory.

Otherwise, it looks for an in-use page, saves its contents, and hands the just-freed page to the program requesting memory. If that page is "dirty" --- i.e., it's backed by a file and somebody's written to that part of the file, or it's a private allocation backed by the page file --- the memory manager writes the page out to disk first. If the page isn't dirty, the memory manager can just discard its contents because it knows it can reconstruct them by reading the original file back in.

When the memory manager has to go to disk to satisfy a request for a new page, it's called a hard fault. The mission of the memory manager is to reduce the number of hard faults, because hard faults are slow. The fewer hard faults you have, the less time will be spent waiting for the disk, and the faster your system will run.
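A back-of-envelope calculation shows why hard faults dominate performance. The latencies below are illustrative assumptions of mine, not figures from the comment:

```python
# Illustrative latencies (assumed): DRAM access ~100 ns, a disk-backed hard
# fault ~5 ms on rotating media.
RAM_NS = 100          # assumed DRAM access latency, nanoseconds
DISK_NS = 5_000_000   # assumed hard-fault service time, nanoseconds

def effective_access_ns(hard_fault_rate: float) -> float:
    """Average memory access time for a given hard-fault rate."""
    return (1 - hard_fault_rate) * RAM_NS + hard_fault_rate * DISK_NS

print(effective_access_ns(0.0))      # 100.0
print(effective_access_ns(0.0001))   # ~600: one fault per 10,000 accesses
                                     # already multiplies average latency 6x
```

Even a tiny hard-fault rate swamps the cost of every in-memory access, which is why page replacement quality matters so much.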

The most important part of the memory manager is page replacement: i.e., how the memory manager chooses which page to evict in order to satisfy a memory allocation request. Most systems use an approximation of LRU (i.e., least recently used), throwing out pages that haven't been accessed in a while. It doesn't usually matter where a page came from. The only important factor is how recently it was accessed.
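The LRU policy can be sketched in a few lines. This is a minimal model with made-up page numbers, not a real kernel's algorithm (real kernels use approximations such as the clock algorithm rather than exact LRU):

```python
from collections import OrderedDict

class LRUPageSet:
    """Toy model of exact-LRU page replacement over a fixed set of frames."""

    def __init__(self, frames: int):
        self.frames = frames
        self.pages = OrderedDict()  # insertion order doubles as recency order
        self.hard_faults = 0

    def touch(self, page: int) -> None:
        if page in self.pages:
            self.pages.move_to_end(page)        # recently used -> most recent
        else:
            self.hard_faults += 1               # page must come from disk
            if len(self.pages) >= self.frames:
                self.pages.popitem(last=False)  # evict least recently used
            self.pages[page] = True

mem = LRUPageSet(frames=3)
for p in [1, 2, 3, 1, 4]:    # touching 4 evicts page 2, the LRU page
    mem.touch(p)
print(list(mem.pages))       # [3, 1, 4]
```

Note that nothing in `touch` asks where the page came from; recency is the only input, which is exactly the point made above.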

So, you can see that there's little difference between a program mapping a file into memory and modifying it, reading and writing the file through file APIs, or simply manipulating an equal amount of data in buffers created with malloc. To the kernel, all memory is made up of pages.
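The equivalence is easy to see from user code. In this sketch (the temp file is a throwaway of mine), a file-backed mapping and an anonymous buffer are manipulated identically; the kernel backs both with pages, differing only in where dirty pages get written back:

```python
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"hello, pages")

with mmap.mmap(fd, 0) as m:   # file-backed pages (whole file, shared mapping)
    m[0:5] = b"HELLO"         # writing the memory modifies the file
    m.flush()

os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 12)
print(data)                   # b'HELLO, pages'
os.close(fd)
os.unlink(path)

buf = bytearray(b"hello, pages")   # anonymous (swap-backed) pages
buf[0:5] = b"HELLO"                # same operation, same kind of memory
```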

The "go away for a while" problem isn't caused by any particular memory strategy. It's an artifact of the memory manager's LRU approach. How does it know that the pages corresponding to Firefox are going to be used again? If some other program needs those pages, the older ones will be discarded. There is nothing applications can do.

Instead, the OS itself has to be tweaked to preserve interactivity. Sometimes the memory manager will prefer to evict disk cache pages before malloc-backed ones. Sometimes (e.g., with Windows SuperFetch) the OS will try to identify pages belonging to active applications and try harder to keep those in memory. Some systems favor keeping executable pages over private allocations. You can tweak the page replacement algorithm, but the basic idea, that all memory is made up of pages subject to the same management scheme, still applies.
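One such tweak can be sketched by layering priorities on top of LRU. This is a hypothetical illustration in the spirit of memory priorities, not any real kernel's or SuperFetch's actual algorithm: evict the least recently used page from the lowest non-empty priority class first, so high-priority pages survive cache pressure:

```python
from collections import OrderedDict

class PriorityLRU:
    """Toy model: LRU within a priority class, lowest class evicted first."""

    def __init__(self, frames: int):
        self.frames = frames
        self.classes = {}            # priority -> OrderedDict of pages

    def touch(self, page, priority: int) -> None:
        cls = self.classes.setdefault(priority, OrderedDict())
        if page in cls:
            cls.move_to_end(page)    # refresh recency within its class
            return
        if sum(len(c) for c in self.classes.values()) >= self.frames:
            lowest = min(p for p, c in self.classes.items() if c)
            self.classes[lowest].popitem(last=False)   # evict LRU of lowest class
        cls[page] = True

mem = PriorityLRU(frames=3)
mem.touch("firefox", priority=2)   # interactive app: high priority
mem.touch("cache1", priority=0)    # disk cache: low priority
mem.touch("cache2", priority=0)
mem.touch("cache3", priority=0)    # evicts cache1; "firefox" is untouched
print(list(mem.classes[2]))        # ['firefox']
```

The mechanism is still page replacement over uniform pages; only the eviction order changed, which is the "same management scheme" point above.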

Ultimately, it's ridiculous to hear people talk about programs "keeping things in memory" like we were still dealing with DOS 6 and OS 9. The actual situation is a lot more subtle, and silly memory counters don't even come close to giving you a good picture of what's actually going on.

In short, don't worry about fine-tuning what's "in memory". Don't change behavior based on total amount of memory in the system. Operating systems (OpenBSD aside) ALREADY DO THAT. Just let the memory manager do its job, and give it enough information (via interactivity information, memory priority, etc.) to do its job properly. Don't try to hack around problems at the wrong layers.

Comment The stupid, it burns (Score 1) 126

The proposal above will do nothing to stop oppressive governments from taking advantage of blacklists created by western companies. These adversaries can simply request updates from fully-supported jurisdictions and forward them privately to filters running on their gateway routers. The filters are made up of bytes. Bytes can be copied. If adversaries are already pirating the software itself, they can certainly pirate updates to the software.

Yes, yes, you can try using some kind of traitor tracing technique to figure out who might be leaking blocking lists --- but it's a cat and mouse game, and these regimes have more resources than you do.

Look: in a larger sense, antipathy toward western hardware and software companies is misplaced. To internet censors, filtering is an existential imperative, especially in light of the recent unrest in the Middle East. No cost is too great. If our adversaries need to sign up with multiple expensive dummy accounts in order to receive filter lists, they will. If they need to break DRM, they'll do it. And if all that becomes too expensive, they'll just switch to open source and home-grown filtering solutions. Currently, they use these filtering products because they're cheap, not because they're essential.

We all want to stop internet censorship, but haranguing individual companies over the misuse of their software won't do it. Circumvention works. Alternative routing works. Political pressure works.

Internet censorship is a real problem. While it may feel good, hysterically screaming at corporations does nothing to solve it. Let's talk about things we can do that actually help.

(Note: I have a bit of experience in this area.)

Comment Re:Credit card fees (Score 1) 187

Any market with a large barrier to entry will not exhibit competitive behavior in the long run. The presence of a big network effect is one of the more common causes of high barriers to entry. Regardless of the cause, incumbent corporations go on to become "natural monopolies"* and are able to charge monopoly prices higher than would otherwise be possible. The excess profit is called economic rent and causes an inefficient allocation of resources, effectively impoverishing us all.

In the past, we'd take a sober look at these situations and either regulate these markets or outright nationalize them. Today, we've been so thoroughly swayed by laissez-faire economic ideas that we're reluctant to remedy an obvious injustice in an environment we intellectually know is not amenable to free competition.

In short, the big credit card processors have no effective competition because small players can't really enter the market, and as a society, we can choose between regulating them for the benefit of all or allowing them to skim a disproportionate amount of wealth from the rest of society. I would prefer to outright nationalize the entire financial system and run it as a public utility for the benefit of the real economy, but barring that, regulation helps.

* or oligopolies, which from an economic perspective are indistinguishable from monopolies

Comment Re:Well, they WERE more accurate (Score 1) 135

We just gave $800 billion in tax breaks to millionaires, and even before that, our tax rates were some of the lowest in the industrialized world. We can certainly afford these programs. We merely need to decide what's more important: millions for a few, or safety, comfort, and happiness for millions. Personally, I'm on the side of humanity.

Comment Re:Domestic oil is an alternative (Score 1) 314

Bullshit. EROEI isn't everything, or even the dominant factor in extraction.

Extraction at EROEI below 1 makes perfect sense when you think about it. Petroleum is even more useful as a chemical feedstock than it is as a fuel, and even as a fuel, petroleum products are portable and convenient in a way unmatched by any alternative. We'll see extraction continue well past the EROEI = 1 break-even point, with the energy deficit made up by nuclear, wind, solar, and so on.
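The arithmetic is simple. With made-up numbers (not from the comment), sub-unity EROEI just means other energy sources cover the shortfall:

```python
# EROEI = energy returned / energy invested. Below 1, extraction consumes
# more energy than the fuel returns, and something else must cover the gap.
def energy_deficit(fuel_energy_out: float, eroei: float) -> float:
    """Energy (same units) that other sources must supply per batch of fuel."""
    energy_in = fuel_energy_out / eroei
    return max(0.0, energy_in - fuel_energy_out)

# Producing 100 units of fuel energy at EROEI 0.8 costs 125 units of input;
# nuclear/wind/solar would need to supply the ~25-unit shortfall.
print(energy_deficit(100.0, 0.8))
```

Whether that trade is worth making depends on the fuel's portability and feedstock value, not on the EROEI ratio alone.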
