
Comment The stupid thing is (Score 2) 341

The really, really stupid thing is that the desktop isn't even why people run Linux. Obviously no server needs dbus, let alone kdbus, and plenty of desktops don't either. Yes, it's amusing that I get a popup when I plug in a USB stick, but is that essential functionality? Sure, some very simple form of event multicast would be good, but is this it?
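For scale, here's a hedged sketch of what "very simple event multicast" could look like: plain UDP multicast from the Python standard library, with a made-up group address, no daemon, and no bus:

```python
# minimal publish/subscribe over UDP multicast; group/port are hypothetical
import socket
import struct

GROUP, PORT = "239.255.42.42", 4242  # made-up administratively-scoped group

def publish(event: bytes) -> None:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay local
    s.sendto(event, (GROUP, PORT))

def subscribe() -> None:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, _ = s.recvfrom(4096)
        print("event:", data.decode(errors="replace"))
```

Any process can publish, any process can listen - no central broker, no wire-format committee. That's roughly the scale of mechanism the USB-stick popup actually needs.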

Everything LP touches seems to epitomize rebellion against, or ignorance of, the *nix/OSS philosophy (you know: modularity, loose coupling, being liberal in what you accept, etc). systemd is the USSR of rc systems. pulse only survives because apps can still bypass it.

Comment this is idiotic. (Score 2, Insightful) 201

the proliferation of distros is just stupid - people don't seem to understand what "distro" means, or why they should be offering add-ons to an existing distro rather than pretending they're building a new OS.

the ONLY value a distro offers is establishing a particular set of versions, with a modicum of config consistency and hopefully some testing. none of them offers anything significant that is also distinctive - just slightly different versions of the same packages, maintained by others and used by all the other distros. yes, apt vs rpm - so what? they're functionally equivalent.

the real point is a matter of software engineering: forking a distro is bad, since it increases the friction experienced by source-code changes. the streamOS (sic) people may be diligent and honestly propagate their changes upstream, but fundamentally they should just be running an apt repo containing their trivially modded packages - see the sketch below. sure, that may mean a different kernel - big whoopie (very little of user-space is sensitive to anything but huge kernel changes).
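to make "just run an apt repo" concrete, here's a hedged sketch of a flat repo built from a directory of tweaked .debs. it assumes dpkg-dev is installed (for dpkg-scanpackages); the path and URL are made up for illustration:

```python
# hypothetical sketch: index a directory of tweaked .deb packages as a
# flat apt repo; assumes the dpkg-scanpackages tool (dpkg-dev) is on PATH
import gzip
import subprocess
from pathlib import Path

REPO = Path("/srv/myrepo")  # hypothetical directory full of modded .debs

def build_index(repo: Path) -> None:
    # dpkg-scanpackages walks the tree and writes a Packages index to stdout
    out = subprocess.run(
        ["dpkg-scanpackages", "--multiversion", "."],
        cwd=repo, capture_output=True, check=True,
    ).stdout
    (repo / "Packages").write_bytes(out)
    (repo / "Packages.gz").write_bytes(gzip.compress(out))

if __name__ == "__main__":
    build_index(REPO)
```

clients add one sources.list line - deb [trusted=yes] http://example.org/myrepo ./ - and their existing distro pulls the tweaked packages. no new OS required.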

but yeah: it wouldn't be very sexy to say "I've got a repo of 37 tweaked packages I call a brand new whizzy *OS*".

Comment it's the price, stupid. (Score 2) 810

My daily commute is less than 10 km, and I would love to have an affordable, safe, less-consumptive/polluting vehicle. I would be very tempted by a car-like EV that was very small and light, with a 50 km range, if it cost something like $5-7k (for $10k I can get a small used ICE that burns absurdly little gas). It has to be able to take me up a decent-sized hill at 50 kph, though. An in-town EV could make a lot of sense, market-wise, but I think it should be purpose-designed, not just an ICE vehicle with the engine swapped out.

Otherwise, the problem is that EVs and hybrids try to deliver long range and highway performance and wind up simply being too expensive. Hybrids in particular wind up carrying so much extra weight that you can usually do better with pure EV *xor* ICE. It doesn't make sense to pretend that the technology supports non-premium EVs yet (Tesla is great, but it's a sports car at sports car prices). In some sense, the problem is that petroleum ICE sets a high bar for energy density. I often wonder if there's a place for an EV with an optional ICE add-in module for range (maybe a fuel cell some day; maybe petroleum+turbine today, or just a conventional diesel).
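A back-of-envelope pass at why the short-range version could plausibly hit that price point, using assumed round numbers (not quotes from any manufacturer):

```python
# all figures are illustrative assumptions, not manufacturer data
CONSUMPTION_KWH_PER_KM = 0.15   # assumed for a very small, light vehicle
PACK_COST_PER_KWH = 200         # assumed battery pack cost, USD

for range_km in (50, 400):
    kwh = CONSUMPTION_KWH_PER_KM * range_km
    print(f"{range_km} km -> {kwh:.0f} kWh pack, ~${kwh * PACK_COST_PER_KWH:,.0f}")
# 50 km  -> ~8 kWh,  ~$1,500: plausible inside a $5-7k vehicle
# 400 km -> ~60 kWh, ~$12,000: the battery alone blows the budget
```

The 8x gap between those two packs is exactly the long-range-plus-highway trap described above.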

Comment We can build bigger than we can use. (Score 1) 118

I'm not saying that lagging software is a problem: it's not. The problem is that there are so few real needs that justify the top, say, 10 computers. Most of the top500 are large not because they need to be - that is, not because they'll be running one large job - but because it makes you look cool to have a big computer/cock.

Most science is done at very modest (relative to top-of-the-list) sizes: say, under a few hundred cores. OK, maybe a few thousand. These days, a thousand cores will take less than 32U, and yes, could stand beside your desk, though you'd need more than one normal office circuit and some pretty decent airflow. I think people lose touch with how cheaply we can now build very big machines filled with extremely fast cores. You read all that whinging about how we hit the clock-scaling (Dennard) wall around the P4 and life has been hell ever since - bullshit! Today's cores are lots faster, and you get a boatload more of them for the same dollar and watt. And that's if you stick with completely conventional x86_64/OpenMP/MPI tech, without delving into proprietary stuff like CUDA.
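A quick sanity check on the thousand-cores-beside-your-desk claim, with assumed generic node specs:

```python
# core counts and power draw are assumed round numbers for illustration
CORES_PER_NODE = 32      # e.g. a dual-socket 2x16-core 1U box (assumed)
NODE_POWER_W = 500       # assumed draw under load

nodes = -(-1000 // CORES_PER_NODE)   # ceil(1000 / 32) = 32 nodes
print(f"{nodes} x 1U nodes = {nodes}U, ~{nodes * NODE_POWER_W / 1000:.0f} kW")
# 32U and ~16 kW: physically desk-adjacent, but roughly nine standard
# 15 A/120 V circuits' worth of power - hence the airflow caveat
```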

People who watch the top of the top500 closely are addicts of hero numbers and hero facilities. The fact is you can buy whatever position you want: just pay up. Certainly it's impressive how much effort goes into a top-10 facility, but we should always be asking: what whole-machine job is going to run on it? IMO, the sweet spot for HPC is a few tens of racks - easy to find space for, easy to manage, and enough resources for hundreds of researchers.

Comment Just a stunt. (Score 1) 54

Amazon makes a killing renting computers. Certain kinds of enterprises really want to pay extra for the privilege of outsourcing some of their IT to Amazon - sometimes it really makes sense and sometimes they're just fooling themselves.

People who do HPC usually do a lot of HPC, so owning and operating the hardware is a simple matter of not handing that fat profit to Amazon. Most HPC takes place in consortia or other arrangements where a large cluster can be scheduled to efficiently interleave bursty usage patterns. That is, of course, precisely what Amazon does, though it tunes mainly for commercial (Netflix, etc.) workloads, which are significantly different from computational ones. (Real HPC clusters often don't have UPS, for instance, and almost always have higher-performance, high-bisection, flat/uniform networks, since inter-node traffic dominates.)
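An illustrative rent-vs-own break-even, using made-up round numbers rather than real AWS pricing:

```python
# all prices are assumptions for illustration, not actual cloud rates
NODE_CAPEX = 8000        # assumed purchase price per node, USD
NODE_OPEX_PER_HR = 0.15  # assumed power/cooling/admin cost per node-hour
CLOUD_PER_HR = 1.50      # assumed on-demand rate for a comparable instance

hours = NODE_CAPEX / (CLOUD_PER_HR - NODE_OPEX_PER_HR)
print(f"break-even at ~{hours:,.0f} node-hours "
      f"({hours / 8766:.1f} years at 100% utilization)")
# ~5,926 node-hours, well under a year of sustained use: a consortium that
# keeps its cluster busy crosses that line almost immediately
```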

Comment screw circuits; it's gates that count (Score 1) 37

This would be far more interesting if they could produce even low-performance transistors. But I suspect you'd want to start out with a flatbed, and you'd wind up focusing on non-flexible devices that you could build up through many layers. Interestingly, big, low-performance transistors would change some of the typical features of VLSI: you could do incremental testing before layering on more circuits - perhaps even printing replacement devices if certain already-printed components didn't work. You'd probably also worry less about heat, since if your CPU is spread over n times the linear dimension, its heat density is going to be n^2 lower.
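The n^2 point, made concrete with illustrative numbers:

```python
# spreading the same power over a die scaled up n-fold per side divides
# areal heat density by n^2; figures below are assumed for illustration
def heat_density_w_per_mm2(power_w: float, side_mm: float) -> float:
    return power_w / side_mm ** 2   # square die assumed

POWER_W = 10.0                               # assumed total dissipation
print(heat_density_w_per_mm2(POWER_W, 10))   # 0.1   W/mm^2 (10 mm die)
print(heat_density_w_per_mm2(POWER_W, 100))  # 0.001 W/mm^2: 10x the side,
                                             # 100x lower density
```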

Comment systemd tries to do too much (Score 2) 362

systemd falls into the same trap as "desktop environments". It starts with appealing goals (basically, make startup a dependency graph traversed in parallel, breadth-first), but it winds up sucking. Consider what happens when systemd dies. This happened to me recently (Fedora 19, upon resume) - there's not much you can do except reboot. Yes, this could have happened with sysvinit, but who among us has ever had init crash? I certainly haven't, and I'm a certified greybeard.
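The appealing core idea really is small. Here's a toy sketch of it (service names and graph are made up, and a real init does vastly more):

```python
# toy parallel, breadth-first service startup over a dependency graph
from concurrent.futures import ThreadPoolExecutor

DEPS = {                    # service -> services it needs started first
    "network": set(),
    "syslog": set(),
    "sshd": {"network", "syslog"},
    "httpd": {"network", "syslog"},
}

def start(name: str) -> None:
    print(f"starting {name}")   # stand-in for fork/exec of the daemon

remaining, started = dict(DEPS), set()
with ThreadPoolExecutor() as pool:
    while remaining:
        # everything whose dependencies are all started forms the next level
        level = [s for s, d in remaining.items() if d <= started]
        if not level:
            raise RuntimeError("dependency cycle")
        list(pool.map(start, level))   # the whole level starts in parallel
        started.update(level)
        for s in level:
            del remaining[s]
```

Everything beyond that - journaling, device events, session management - is where the borg-ing starts.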

AFAICT, the problem is that it's trying to borg a whole bunch of subsystems that do a great job by themselves. For instance, systemd tries to replace syslog for the most part. It's easy to see why it would want to do this, since daemon/server IO is a useful part of management. But in trying to do so, the system becomes more fragile and *narrower* in its applicability - more specific to how one guy (Lennart) thinks every system should behave.

I suspect what will happen is that systemd will get shaved down a bit, with some of the excess functionality removed, and in the process will become reasonably robust (i.e., NEVER crash).

Comment The real question is power (maybe network) (Score 1) 115

Containerized servers are old hat, and they don't make a lot of sense under normal conditions. Mobility and redeployment really need to be important goals to justify the compromises.

Containers are roughly 8x8x40 feet, so one could naively hold 80 54U racks, which means up to 2 MW per container. In reality, density probably wouldn't be nearly that high, but probably the better part of 1 MW. Water cooling with Aquasar-type heatsinks would be an obvious implementation. The barge looks like a 3x3x2 stack of these containers, so it will likely want around 20 MW. My first guess about cooling would be to make the whole hull into a heat exchanger - double-walled hulls are quite common in shipbuilding, and it wouldn't take that much engineering to create a reasonably efficient circulation pattern.
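The power arithmetic, spelled out (per-rack draw is an assumed round figure):

```python
# naive upper-bound estimate; 25 kW/rack is an assumption for a dense rack
RACKS_PER_CONTAINER = 80
KW_PER_RACK = 25
CONTAINERS = 3 * 3 * 2            # the visible 3x3x2 stack

print(f"naive: {RACKS_PER_CONTAINER * KW_PER_RACK / 1000:.0f} MW/container")
print(f"barge at ~1 MW/container: ~{CONTAINERS} MW")   # ~18, call it 20
```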

But I'm pretty skeptical about whether that kind of power could be gotten from wave generation.

Comment certification (Score 2) 73

People tend to focus on surface issues when considering how traditional Higher Education (HE) will relate to Online Education (OE) - things like the fate of lectures, or the character of universities if research and teaching are severed.

But much of the value (and much of an instructor's effort) actually goes toward establishing some measure of the student's competency: a grade. Other comments here have mentioned the Honor Code, for instance, but that's not so much a problem as an attempt to ensure that a face-to-face course's grading accurately assigns competence to individuals. For OE, it's even more natural to seek some form of collaborative learning (or outside assistance), especially if the OE course is self-paced. And really, why shouldn't a student simply continue to take the OE course until they are competent (or give up)? In which case, the import of an OE course is mainly in the competency testing - its certification aspect.

So, is certification the way that traditional HE institutions become relevant to the future where everything is OE?

Comment Re:64 bit - Really, what's the point? (Score 1) 259

The point is the new register set. Wider registers are a happy side-effect, as is the larger virtual address space, but the main point of AMD64 is more registers. And it started a sequence of ISA extensions that have dramatically improved compute-bound throughput via SIMD.

Comment should we be helping? (Score 1) 220

as a bit of a strawman, I'm suggesting that we IT people have a moral obligation to get involved in projects like this - sort of the way doctors are obliged to help any patient who presents, regardless of who they are or what they've done.

these sorts of megaprojects seem to be self-justifying in some weird way: managers who don't know what they're doing adopt an incredibly conservative attitude toward risk management when any large project is proposed. once that phase-space is entered, it's an upward spiral to oblivion, since the project becomes more and more scary and gains a kind of management momentum. the event horizon is when it exceeds the fear threshold of the strongest and/or highest-up manager.

a major part of the problem is that these projects happen in domains where money is funny - a bit made up, subject to arbitrary stretching (or inflation). certainly governments, but also certain kinds of businesses, and definitely public institutions. (the higher-ed landscape is littered with the smoking radioactive craters of failed ERP projects.)

typically these projects are considered internal - improving the business process - and so not really offered for public review. but maybe that shouldn't be the case, at least for branches of government.
