Comment Re: Piss off systemd (Score 5, Insightful) 416

Ah, but with every single major distro adopting it, you better quit crying and get used to it, buddy!

They changed to systemd, they can change away just as well. Oh sure, the systemd cancer has spread to many daemons, but it can be excised from them as well. (Ironically, the daemons need exorcism...)

Comment Re:Ah, Berlin (Score 3, Informative) 416

A binary file avoids the need to have a bloated text parsing engine.

It doesn't do that, because all the data is interpreted by humans as text, and has to be presented to them as such. It also has to be understood as text, so if the journal processing tools do any of the things that systemd proponents claim they do, they will need a "bloated" text parsing engine.

Comment Re:Startup management subsystem (Score 1) 416

It does seem a bit much, but the systemd transition is a slow one. Many packages are still using init.d startup scripts, which means we can't take advantage of systemd's features with them.

You should be able to take advantage of all of systemd's features whether the daemon was designed to be run from an init script or not, and even whether it is actually run from an init script or not. If not, there is either something deeply wrong with you (incompetence) or something deeply wrong with systemd (poor design).
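To make that concrete, here's a minimal sketch of a native unit file wrapping a hypothetical legacy daemon (/usr/sbin/foo is an invented name, not a real package) that only ever shipped with an init.d script. The daemon itself knows nothing about systemd:

    [Unit]
    Description=Hypothetical legacy daemon wrapped in a native unit
    After=network.target

    [Service]
    # The daemon forks into the background exactly as it did under init.d
    Type=forking
    ExecStart=/usr/sbin/foo --daemon
    PIDFile=/run/foo.pid
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Drop something like that into /etc/systemd/system/foo.service, run systemctl daemon-reload, and you get dependency ordering, automatic restarts, and systemctl status / journalctl -u foo without changing a line of the daemon.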

Comment Re: Really? (Score 1) 528

Note that the problem with taking things out when shooting straight up is that humans are really pretty crappy at judging distance directly overhead. Which makes judging the lead you give the target pretty much guesswork...

That's why you don't lead the target. Instead, you sweep the barrel. Get the bead moving with the target and pull the trigger. Then you don't have to guess. Humans are pretty crappy at gauging distance in general. We use tricks to compensate.

Comment Re:We have no idea what "superintelligent" means. (Score 1) 262

Empires need information processing to function, so before computers humanity developed bureaucracies, which are a kind of human-operated information-processing machine. And the administration of a large empire has always eventually lost coherence, leading to the empire falling apart.

Harold Innis talked about this in his lecture series, and subsequent book, Empire and Communications. Excerpt:

The rise of absolutism in a bureaucratic state reflected the influence of writing and was supported by an increase in the production of papyrus.

See also Empire and Communications at Wikipedia. Excerpt:

The spread of writing hastened the downfall of the Roman Republic, he argues, facilitating the emergence of a Roman Empire stretching from Britain to Mesopotamia. To administer such a vast empire, the Romans were forced to establish centralized bureaucracies. These bureaucracies depended on supplies of cheap papyrus from the Nile Delta for the long-distance transmission of written rules, orders and procedures. The bureaucratic Roman state backed by the influence of writing, in turn, fostered absolutism, the form of government in which power is vested in a single ruler. Innis adds that Roman bureaucracy destroyed the balance between oral and written law giving rise to fixed, written decrees. The torture of Roman citizens and the imposition of capital punishment for relatively minor crimes became common as living law "was replaced by the dead letter."

See also Harold Innis's communications theories.

Innis has his admirers -- John Brunner's epic Stand on Zanzibar opens with a laudatory quote from Marshall McLuhan about Innis.

But Innis also has his critics, who dismiss (or ridicule) the idea that the Roman empire fell because it lost access to Egyptian papyrus. See:

Comment Obligatory (Score 1) 80

No.

If we look back into the shrouded mists of time, we see that Moblin begat Meego begat Tizen.

Moblin was Linux with a cool OpenGL interface from Intel, on which Intel spent most of its effort ripping out the parts it didn't need.

Meego was the effort to put those parts back and make something useful on more than just Intel hardware.

Tizen is the attempt to convince you that this zombie project has life left in it. It doesn't. It's dead. Stick a fork in it.

Comment Re:wft ever dude! (Score 1) 215

One of the design goals of IPv6 was to simplify the routing logic so we could make faster and cheaper hardware. That's why routers no longer do IP fragmentation, for example. Making the fields variable size defeats that. It's much easier to build hardware for fixed field sizes.
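To illustrate the fixed-layout point, here's a tiny sketch (Python is my choice here, nothing from the post) of the IPv6 fixed header: every field sits at a constant offset, so pulling the destination address out of a packet needs no parsing state at all.

    import struct

    # IPv6 fixed header: always 40 bytes, every field at a constant offset.
    # version/traffic class/flow label (4 bytes), payload length (2),
    # next header (1), hop limit (1), source address (16), destination (16)
    IPV6_HEADER = struct.Struct("!IHBB16s16s")
    assert IPV6_HEADER.size == 40

    def destination(packet: bytes) -> bytes:
        # The destination address is the last 16 bytes of the header,
        # at the same offset in every packet -- no length fields to chase.
        _, _, _, _, _, dst = IPV6_HEADER.unpack_from(packet, 0)
        return dst

Compare that with IPv4, where variable-length options mean you have to parse the header before you even know where the payload starts.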

Plus you can't project exponential growth out to infinity. It is inevitable that some factor will come to limit the growth. It has been really incredible how long transistor counts have kept growing, but even that seems to be coming to an end.

Also, we're probably not going to have a 64->128 bit transition. Not without a fundamental change in the way we do computing.

Comment We have no idea what "superintelligent" means. (Score 4, Insightful) 262

When faced with a tricky question, one thing you have to ask yourself is 'Does this question actually make any sense?' For example you could ask "Can anything get colder than absolute zero?" and the simplistic answer is "no"; but it might be better to say the question itself makes no sense, like asking "What is north of the North Pole?"

I think "superintelligence" is a linguistic construct that sounds to us like it makes sense, but I don't think we have any precise idea of what we're talking about. What *exactly* do we mean by a "superintelligent computer" -- if computers today are not already there? After all, they already work on bigger problems than we can. But as Geist notes, there are diminishing returns on many problems which are inherently intractable; so there is no physical possibility of "God-like intelligence" arising from simply making computers bigger and faster. In any case it's hard to conjure an existential threat out of computers that can, say, determine that two very large regular expressions match exactly the same input.
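As an aside, checking whether two regular expressions accept exactly the same language really is decidable (convert both to minimal DFAs and compare). Here's a rough brute-force sketch of the idea in Python -- it only checks agreement on strings up to a bounded length over a small alphabet, so it's an approximation of the real construction, and the function name and patterns are made-up examples:

    import itertools
    import re

    def agree_up_to(pattern_a, pattern_b, alphabet="ab", max_len=8):
        """Return (True, None) if both regexes accept exactly the same strings
        of length <= max_len over the alphabet, else (False, witness).
        A bounded stand-in for the real DFA-equivalence check."""
        ra, rb = re.compile(pattern_a), re.compile(pattern_b)
        for n in range(max_len + 1):
            for chars in itertools.product(alphabet, repeat=n):
                s = "".join(chars)
                if bool(ra.fullmatch(s)) != bool(rb.fullmatch(s)):
                    return False, s
        return True, None

    # Two different-looking but equivalent patterns (toy examples):
    print(agree_up_to(r"a(ba)*", r"(ab)*a"))   # (True, None)
    print(agree_up_to(r"(ab)*", r"(ab)+"))     # (False, '') -- they disagree on the empty string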

Someone who has an IQ of 150 is not 1.5 times as smart as an average person with an IQ of 100. General intelligence doesn't work that way. In fact I think IQ is a pretty unreliable way to rank people by "smartness" when you're well away from the mean -- say over 160 (i.e. four standard deviations) or so. Yes you can rank people in that range by *score*, but that ranking is meaningless. And without a meaningful way to rank two set members by some property, it makes no sense to talk about "increasing" that property.

We can imagine building an AI which is intelligent in the same way people are. Let's say it has an IQ of 100. We fiddle with it and the IQ goes up to 160. That's a clear success, so we fiddle with it some more and the IQ score goes up to 200. That's a more dubious result. Beyond that we make changes, but since we're talking about a machine built to handle questions that are beyond our grasp, we don't know whether we're actually making the machine smarter or just messing it up. This is still true if we leave the changes up to the computer itself.

So the whole issue is just "begging the question"; it's badly framed because we don't know what "God-like" or "super-" intelligence *is*. Here's what I think is a better framing: will we become dependent upon systems whose complexity has grown to the point where we can neither understand nor control them in any meaningful way? I think this describes the concerns about "superintelligent" computers without recourse to words we don't know the meaning of. And I think it's a real concern. In a sense we've been here before as a species. Empires need information processing to function, so before computers humanity developed bureaucracies, which are a kind of human-operated information-processing machine. And the administration of a large empire has always eventually lost coherence, leading to the empire falling apart. The only difference is that a complex AI system could continue to run well after human society collapsed.

Comment Re:It's coming. Watch for it.. (Score 1) 163

The overriding principle in any encounter between vehicles should be safety; after that, efficiency. A cyclist should make way for a motorist to pass, but *only when doing so poses no hazard*. The biggest hazard presented by the operation of any kind of vehicle is unpredictability. For a bike, swerving in and out of the lane is how the rider presents the greatest danger to himself and others on the road.

The correct, safe, and courteous thing to do is look for the earliest opportunity where it is safe to make enough room for the car to pass, move to the side, then signal the driver it is OK to pass. Note this doesn't mean *instantaneously* moving to the side, which might lead to an equally precipitous move *back* into the lane.

Bikes are just one of the many things you need to deal with in the city, and if the ten or fifteen seconds you spend waiting to put the accelerator down is making you late for wherever you're going, then you probably should have left a few minutes earlier, because in city driving if it's not one thing it'll be another. In any case, if you look at the video the driver was not being significantly delayed by the cyclist, and even if he had been, that is no excuse for driving in an unsafe manner, although in his defense he probably doesn't know how to handle an encounter with a cyclist correctly.

The cyclist, of course, ought to know how to handle an encounter with a car, though, and for that reason it's up to the cyclist to manage the encounter to the greatest degree possible. He should have more experience and a lot more situational awareness. In this case the cyclist's mistake was that he was sorta-kinda to one side of the lane, leaving just enough room that the driver thought he was supposed to squeeze past him. The cyclist ought to have clearly claimed the entire lane while acknowledging the presence of the car; that way, when he moves to the side it's clear to the driver that it's time to pass.
