
Comment Re: pytorch on intel (Score 1) 23

An aspect of proprietary software that has downstream effects for corporations: regardless of whether the product is good or bad now, if it was ever good in the past, it has likely been adopted to a degree that far outstrips the customers' ability to rid themselves of the dependency. Being proprietary strongly reinforces this, with incentives from all sides to keep the status quo. The same cannot be said of more open technology options.

Comment Re:Well, finally (Score -1, Troll) 407

Translation into US terms, with the tinge of the blueanon lens removed: the Taliban is a political uniparty, like the Democrat Party; al Qaeda is a terrorist organization that the Taliban sheltered while in power, like Antifa; ISIS are religious fundamentalist revolutionaries, like those nut jobs you see on TV who have lots of vaccines and think they're going to overthrow the virus.

Comment Probably not the IO scheduler (Score 5, Informative) 472

This is almost certainly not a problem with the IO scheduler. IO scheduling priorities are orthogonal to CPU scheduling priorities.

What you are likely running into is the dirty_ratio limit. Linux maintains a threshold on "dirty" memory (memory destined to be written back to disk); once it is crossed, you get exactly the symptoms you've described. The dirty_ratio values can be tuned via /proc (see the sketch below), but beware that the kernel internally layers its own heuristics on top of whatever values you plug in.
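For the curious, here is a minimal Python sketch of inspecting and adjusting those tunables through procfs. The paths are the standard ones on recent kernels; writing them requires root, and the percentages in the commented-out lines are illustrative placeholders, not a recommendation:

from pathlib import Path

VM = Path("/proc/sys/vm")

def get_tunable(name: str) -> int:
    return int((VM / name).read_text())

def set_tunable(name: str, value: int) -> None:
    (VM / name).write_text(str(value))  # needs root

# Percentage of reclaimable memory at which background writeback starts:
print("dirty_background_ratio =", get_tunable("dirty_background_ratio"))
# Percentage at which allocating tasks get throttled into doing writeback:
print("dirty_ratio =", get_tunable("dirty_ratio"))

# Illustrative only: start writeback earlier and throttle sooner, so a
# large copy can't build up an enormous dirty backlog first.
# set_tunable("dirty_background_ratio", 5)
# set_tunable("dirty_ratio", 10)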

When the threshold is crossed, in an attempt to "slow down the dirtiers", the kernel will penalize (in rate-limited fashion) any and every task on the system that tries to allocate a page. The allocation may be in response to userland needing a new page, but it can also happen when the kernel allocates memory for internal data structures in response to a system call the process made. When this happens, the kernel forces the allocating thread (again, rate-limited) to take part in flushing dirty pages, under the (misguided) assumption that whoever is allocating a lot of memory is also the one dirtying a lot of memory. A toy model of this follows below.
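As a toy model of that behavior (emphatically not the kernel's actual code, just the shape of the idea), here is why an "innocent" task that merely allocates pages ends up blocked doing writeback once someone else pushes dirty memory over the threshold:

from dataclasses import dataclass

RATELIMIT = 32  # pages allocated between throttle checks (the rate-limiting)

@dataclass
class System:
    dirty_pages: int
    dirty_threshold: int

    def writeback(self, pages: int) -> None:
        # In reality the caller blocks here while pages are flushed to disk.
        self.dirty_pages -= min(pages, self.dirty_pages)

@dataclass
class Task:
    allocs_since_check: int = 0

def allocate_page(task: Task, sysm: System) -> None:
    task.allocs_since_check += 1
    if (sysm.dirty_pages > sysm.dirty_threshold
            and task.allocs_since_check >= RATELIMIT):
        task.allocs_since_check = 0
        sysm.writeback(64)  # drafted into flushing, dirtier or not

# Some other process dirtied 5000 pages; the threshold is 1000.
s = System(dirty_pages=5000, dirty_threshold=1000)
reader = Task()
for _ in range(2000):  # a task that only allocates, never dirties
    allocate_page(reader, s)
print("dirty pages drained by the innocent reader:", 5000 - s.dirty_pages)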

There are a couple of ways to work around this problem (which is very typical when copying large amounts of data). For one, the copying process can be fixed to rate-limit itself and to synchronously flush its data at some reasonable interval, as sketched below. Another way for a system administrator to manage this sort of task (if automated, of course) is Linux's memory cgroup controller, which essentially isolates memory subsystem performance between tasks; unfortunately, its support is still incomplete, and I don't know of any popular distribution that automates the use of this cgroup subsystem.
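A rough sketch of the first workaround in Python: copy in chunks, fdatasync() after every so many megabytes so the copier flushes its own dirt synchronously, and use posix_fadvise(DONTNEED) as a best-effort hint to drop the already-written pages from the page cache. The chunk and flush sizes are arbitrary placeholders to tune for your hardware:

import os

CHUNK = 1 << 20          # 1 MiB per read/write
FLUSH_EVERY = 64 << 20   # flush after every 64 MiB written

def throttled_copy(src_path: str, dst_path: str) -> None:
    src = os.open(src_path, os.O_RDONLY)
    dst = os.open(dst_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    offset = 0
    written_since_flush = 0
    try:
        while True:
            buf = os.read(src, CHUNK)
            if not buf:
                break
            os.write(dst, buf)
            written_since_flush += len(buf)
            if written_since_flush >= FLUSH_EVERY:
                os.fdatasync(dst)  # synchronously flush our own dirty pages
                # Best-effort hint: we won't reread these pages, so let the
                # kernel drop them instead of evicting everyone else's cache.
                os.posix_fadvise(dst, offset, written_since_flush,
                                 os.POSIX_FADV_DONTNEED)
                offset += written_since_flush
                written_since_flush = 0
        os.fdatasync(dst)
    finally:
        os.close(src)
        os.close(dst)

This is essentially what tools that behave well during bulk copies do: the periodic fdatasync() keeps the process's dirty backlog bounded, so it never drives the whole system over the dirty_ratio threshold in the first place.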

Either way, it is very unlikely to be the IO scheduler.
