They're treating the symptoms of the problem, not the cause. This is usually a bad idea.
No, this is not clickbait.
Normal, mentally healthy humans have a lot of empathy - without it, we'd be psychopaths. Sure, the amount of empathy varies - mainly as a function of whether the animal in question tends to act human-like. We should embrace this, not cynically write it off - empathy *IS* humanity.
Yes, that also means that anyone who is intelligent and reflective will be uncomfortable with eating meat, concerned about how the animal died, and of course about what kind of animal it was. This is basically orthogonal to issues of environmental or ecological impact.
These systems are the moral equivalent of leaving your door not just unlocked but ajar. It doesn't change the morality of anyone trespassing to steal or destroy, but it does make the owner much more culpable. We do not face a threat to our cyber-infrastructure, but rather have irresponsibly left the infrastructure unprotected, and should not be surprised that people of varying motives might take advantage.
We do not need a cyber-infrastructure police force, unless they're actually tiger teams who publicly shame the idiots who leave their systems unprotected...
The really, really stupid thing is that the desktop isn't even the reason Linux matters. Obviously no server needs dbus, let alone kdbus, and plenty of desktops don't either. Yes, it's amusing that I get a popup when I plug in a USB stick, but is that essential functionality? Sure, some very simple form of event multicast would be good, but is this it?
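For the record, here's roughly what I mean by "very simple event multicast" - a sketch, not a proposal: plain UDP multicast, no daemon, no bus. The group address, port, and event name are made up for illustration.

    # A minimal "event multicast" sketch: publish/subscribe over UDP multicast.
    # Group/port are arbitrary illustrative choices, not any standard.
    import socket
    import struct
    import sys

    GROUP, PORT = "239.255.42.99", 4242  # ad-hoc multicast group (hypothetical)

    def publish(event: str) -> None:
        # Fire-and-forget: every current subscriber sees the event.
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay local
        s.sendto(event.encode(), (GROUP, PORT))
        s.close()

    def subscribe() -> None:
        # Join the group and print every event, forever.
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", PORT))
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        while True:
            data, _ = s.recvfrom(1500)
            print("event:", data.decode())

    if __name__ == "__main__":
        subscribe() if "sub" in sys.argv else publish("usb-stick-inserted")

No access control, no introspection, no method calls - which is the point: most "a thing happened" notifications don't need any of that.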
Everything LP touches seems to epitomize rebellion against, or ignorance of, the *nix/OSS philosophy (you know, modularity, loosely joined, liberal-in-what-you-accept, etc). systemd is the USSR of rc systems. pulse only remains because apps can still bypass it.
the proliferation of distros is just stupid - people don't seem to understand what "distro" means, or why they should be offering add-ons to an existing distro rather than pretending they're building a new OS.
the ONLY value a distro offers is in establishing a particular set of package versions, with a modicum of config consistency and hopefully some testing. none of them offers anything significant that is also distinctive - just slightly different versions of the same packages, maintained by others and used by all the other distros. yes, apt vs rpm - so what? they're functionally equivalent.
the real point is really a matter of software engineering: forking a distro is bad, since it increases the friction experienced by source-code changes. streamOS (sic) people may be diligent and honestly propagate their changes upstream, but fundamentally, they should really just be running an apt repo containing their trivially modded packages (sketch below). sure, that may mean a different kernel - big whoopie; very little of user-space is sensitive to anything but huge kernel changes.
but yeah: it wouldn't be very sexy to say "I've got a repo of 37 tweaked packages I call a brand new whizzy *OS*".
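for concreteness, the whole "repo instead of distro" idea is two tiny files on the user's machine - a sketch where the repo URL, origin, and name are all hypothetical:

    # /etc/apt/sources.list.d/whizzy.list -- overlay repo on top of the base distro
    deb https://repo.example.org/whizzy stable main

    # /etc/apt/preferences.d/whizzy -- pin so the overlay's tweaked packages win
    # wherever they exist, while everything else stays stock
    Package: *
    Pin: origin repo.example.org
    Pin-Priority: 1001

users keep the base distro's security updates for the other ~20,000 packages, and the "new OS" reduces to one sources line plus a pin - which is all it ever was.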
can you give examples? perhaps you mean that implementing brain-inspired special-function processors is best done in hardware - if you want a widget that detects pictures of cats, or something like that. but the study/understanding side is not often rate- or scale-limited.
My daily commute is less than 10 km, and I would love to have an affordable, safe, less-consumptive/polluting vehicle. I would be very tempted by a car-like EV that was very small and light, with a 50 km range, if it cost something like $5-7k. (For $10k I can get a small used ICE that burns absurdly little gas.) It has to be able to take me up a decent-sized hill at 50 kph, though. An in-town EV could make a lot of sense, market-wise, but I think it should be purpose-designed, not just an ICE vehicle with the engine swapped out.
Otherwise, the problem is that EVs and hybrids try to deliver long range and highway performance and wind up simply being too expensive. Hybrids in particular wind up carrying so much extra weight that you can usually do better with pure EV *xor* pure ICE. It doesn't make sense to pretend that the technology supports non-premium EVs yet (Tesla is great, but it's a sports car at sports-car prices). In some sense, the problem is that a petroleum ICE sets a high bar for energy density. I often wonder if there's a place for an EV with an optional ICE add-in module for range (maybe a fuel cell some day; maybe petroleum+turbine today, or just a conventional diesel).
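Back-of-envelope for why I think that price point is plausible - every figure below (mass, drag, consumption, cell cost) is my assumption, not a quote:

    # Rough sizing for a small, light city EV. All constants are assumptions.
    mass_kg = 600.0     # assumed: purpose-built lightweight body
    v_ms    = 50 / 3.6  # the 50 kph hill-climb requirement
    grade   = 0.08      # "decent-sized hill": assumed 8% grade
    g, crr  = 9.81, 0.012   # gravity; typical rolling-resistance coefficient
    cda_m2, rho = 0.6, 1.2  # assumed drag area (m^2); air density (kg/m^3)

    climb   = mass_kg * g * grade * v_ms      # power to lift the car uphill
    rolling = crr * mass_kg * g * v_ms        # tire losses
    aero    = 0.5 * rho * cda_m2 * v_ms ** 3  # aerodynamic drag
    print(f"hill power: {(climb + rolling + aero) / 1e3:.1f} kW")  # ~8.5 kW

    wh_per_km, range_km = 120, 50   # assumed consumption; my commute envelope
    usd_per_kwh = 150               # assumed cell cost at pack level
    pack_kwh = wh_per_km * range_km / 1000
    print(f"battery: {pack_kwh:.1f} kWh, ~${pack_kwh * usd_per_kwh:.0f} in cells")

A ~15 kW motor and a ~6 kWh pack are cheap parts; it's the 300 km highway-capable battery that blows the budget, which is exactly why the in-town EV should be purpose-designed.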
I'm not saying that lagging software is a problem: it's not. The problem is that there are so few real needs that justify the top, say, 10 computers. Most of the top500 are large not because they need to be - that is, not because they'll be running one large job - but because it makes you look cool if you have a big computer/cock.
Most science is done at very modest sizes (relative to top-of-the-list): say, under a few hundred cores. OK, maybe a few thousand. These days, a thousand cores will take less than 32U, and yes, could stand beside your desk, though you'd need more than one normal office circuit and some pretty decent airflow. I think people lose sight of how cheaply we can now build very big machines filled with extremely fast cores. You read all that whinging about how we hit the clock-scaling (Dennard) wall around the P4 and life has been hell ever since - bullshit! Today's cores are lots faster, and you get a boatload more of them for the same dollar and watt. And that's if you stick with completely conventional x86_64/OpenMP/MPI tech, not delving into proprietary stuff like CUDA.
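The 32U claim is easy to sanity-check - the node specs below are assumptions, but ordinary ones:

    # Does a thousand cores fit in 32U? Node specs are assumed figures.
    cores_per_node = 2 * 16  # dual-socket, 16-core CPUs: unremarkable parts
    node_power_w   = 450     # assumed draw under load, per 1U node

    nodes = (1000 + cores_per_node - 1) // cores_per_node  # ceil -> 32 nodes
    print(f"{nodes} x 1U nodes -> {nodes}U of rack space")
    print(f"power: {nodes * node_power_w / 1e3:.1f} kW")   # ~14 kW

    # A normal 15A/120V office circuit is ~1.8 kW, hence "more than one
    # circuit" - and the airflow comment is not a joke either.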
People who watch the top of top500 closely are addicts of hero-numbers and hero-facilities. The fact is you can buy whatever position you want: just pay up. Certainly it's impressive how much effort goes into a top10 facility, but we should always be asking: what whole-machine job is going to run on it? IMO, the sweet spot for HPC is a few tens of racks - easy to find space, easy to manage, can provide enough resources for hundreds of researchers.
Amazon makes a killing renting computers. Certain kinds of enterprises really want to pay extra for the privilege of outsourcing some of their IT to Amazon - sometimes it really makes sense and sometimes they're just fooling themselves.
People who do HPC usually do a lot of HPC, so owning/operating the hardware is simply a matter of not handing that fat profit to Amazon. Most HPC takes place in consortia or other arrangements where a large cluster can be scheduled to efficiently interleave bursty usage patterns. That is, of course, precisely what Amazon does, though it tunes mainly for commercial (Netflix, etc.) workloads - significantly different from computational ones. (Real HPC clusters often don't have UPSes, for instance, and almost always have higher-performance, high-bisection, flat/uniform networks, since inter-node traffic dominates.)
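The "fat profit" claim is checkable with illustrative numbers - every input below is an assumption for the sketch, not anyone's actual price list:

    # Own-vs-rent, amortized $/core-hour. All inputs are assumed figures.
    capex_usd   = 300_000  # assumed: ~1000-core cluster with interconnect
    overhead    = 1.0      # assumed: power+cooling+admin roughly double capex
    cores       = 1000
    life_years  = 4
    utilization = 0.70     # consortium scheduling keeps the queue full

    hours = life_years * 365 * 24 * utilization
    own   = capex_usd * (1 + overhead) / (cores * hours)
    rent  = 0.04           # assumed rough on-demand $/vCPU-hour

    print(f"own:  ${own:.3f}/core-hour")   # ~$0.024
    print(f"rent: ${rent:.3f}/core-hour")  # ~1.6x at these assumptions

The crossover is utilization: renting wins only when your duty cycle is low, which is exactly why consortia that can keep a cluster busy keep buying hardware.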