Comment Re: Backwards Headline (Score 1) 201
Request denied. It's not my job to coddle your obviously fragile emotional state. Your car metaphor fails for numerous reasons, not least of which is that a job count can't convincingly be compared to car speed, nor memory to the radius of a curve in the road; and a car is constrained by law and physics, neither of which applies to building software. In no legal, physical, ethical, or rational context is a user obligated to follow any made-up rules. They are not obligated to RTFM. And malicious software is certainly not going to honor your ridiculous made-up rules. And your rule is totally made up; it's worth less than thin air.
But maybe you can go whine and bellyache some more about how the rowhammer.js folks didn't follow the rules. Good luck marketing your whining and bellyaching, persuasive writing is definitely not in your skill set.
The default 'ninja' command sets the number of jobs based only on CPU count: nproc + 2. In the example case that's 10, because it was a system with 8 logical cores. If I reboot with nr_cpus=4, then the default 'ninja' command spawns 6 jobs. 10 jobs (20 processes total) eventually wants more than 14 GiB of memory, which exceeds both RAM and swap, both of which are readily discoverable. This is all discussed in detail in the URL I supplied from the outset, by the way; there's no obfuscation. You could certainly blame the program for not being smarter about spawning only the number of jobs that can actually complete given the resources on the machine. But as it's an unprivileged program, that's not a convincing argument.
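A "smarter" job count is straightforward to sketch. This is a minimal illustration, not ninja's actual logic: it caps the nproc + 2 default by available memory, assuming a per-job peak of 1.5 GiB, which is an invented round figure for illustration; real compile jobs vary widely.

```python
GIB = 1024 ** 3

def memory_capped_jobs(logical_cpus, available_bytes, per_job_bytes=int(1.5 * GIB)):
    """Cap the parallel job count by available memory, not just CPU count."""
    cpu_jobs = logical_cpus + 2                # ninja-style default: nproc + 2
    mem_jobs = max(1, available_bytes // per_job_bytes)  # never drop below 1 job
    return min(cpu_jobs, mem_jobs)

# 8 logical cores but only 8 GiB available: cap at 5 jobs instead of 10.
print(memory_capped_jobs(8, 8 * GIB))   # -> 5
# Plenty of memory: fall back to the CPU-based default.
print(memory_capped_jobs(8, 64 * GIB))  # -> 10
```

On Linux the available figure can be read from the MemAvailable line of /proc/meminfo, which is exactly the "readily discoverable" part.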
Anyway, the problem is well understood. If you committed more resources to reading than to whining, you'd see, over on the devel thread I originally cited, a 2008 paper expressly discussing the lack of interactivity guarantees in the Linux kernel, and how they could be provided. The author-developers did exactly that, and demonstrated consistent total take-downs of stock Linux while their modified kernel retained positive user-space interactivity. They made particular note that this was tedious to demonstrate, because every take-down required rebooting the system running stock Linux; that's how badly it face-planted. Known problem. Not a secret. But a big problem for modern usage of computers, rather than insisting all users everywhere are just going to follow arbitrary made-up rules to coddle the fragility of the kernel.
There are also other ideas in that thread, including running the build as a systemd --user service with resource constraints, and now in Rawhide cgroups v2 is enabled by default, which allows considerably more sophisticated limiters to be developed. But you wouldn't know that either, because you're here whining instead, pining for some past epoch of computing when things were simpler. Fortunately, there are forward-looking people with active imaginations to innovate meaningful solutions for these problems.
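For the systemd --user idea, a constrained service is a few lines of unit file. This is a hypothetical sketch (the unit name, path, and limit values are illustrative, not from the thread); MemoryHigh and MemoryMax are real cgroup v2 properties that throttle and then hard-cap the whole process tree, build jobs included.

```
# ~/.config/systemd/user/build.service (hypothetical example)
[Service]
Type=oneshot
WorkingDirectory=%h/src/project
ExecStart=/usr/bin/ninja
# Throttle/reclaim above 6 GiB, hard-kill above 8 GiB (cgroup v2).
MemoryHigh=6G
MemoryMax=8G
```

The same constraints can be applied ad hoc with systemd-run --user -p MemoryHigh=6G -p MemoryMax=8G; either way the kernel enforces the limit on the cgroup, so no cooperation from the build tool is required, which is the whole point given that unprivileged programs won't police themselves.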