Comment Submit your 3D body scan now... (Score 1) 535

... such that everyone can view your Facebook VR avatar from every perspective, and stick their "thumbs up" in every one of your orifices. Next up: sensor clothing you merely have to wear so that when you fart, your avatar does, too. Imagine all the time you'll save not having to type up your latest fart for Facebook! Expect ads to jump at you from all directions.

Comment No one would be "freezing" in Germany (Score 1) 551

You are exaggerating. Yes, Germany imports a lot of natural gas from Russia, and cutting that supply would hit the economies of both Germany and Russia hard. But Germany is not some third-world country with no options for heating other than Russian gas. Heating one's house with electric or oil heaters might be inconvenient and expensive, but it is entirely possible. Germany had record electricity exports last year, and that despite the fact that some newly built gas power plants never went into production (because other plants were cheaper sources of energy).

If a large-scale crisis really did cut off the Russian gas supply to Germany, alternatives would be found.

Be more concerned about Bulgaria, which imports almost 100% of its energy from Russia and has far fewer alternatives.

Comment Re:History Lesson:German occupation of Czechoslova (Score 1) 551

The Bay of Pigs invasion certainly qualifies as "invaded a peaceful neighbor"; whether the US would have "invited" Cuba to join the United States had that invasion succeeded has to remain speculation. Grenada is another example of a nearby country being invaded to overthrow a regime the US government disliked. Usually the US is fine with "changing regimes" to one made of string puppets after invading a country, and so is Russia - it just offers "rescued" regions the chance to "join the Russian Federation".

Comment Re:If there is/was a Singularity, no one will noti (Score 1) 254

Even assuming the intelligence was programmed with a desire for growth, why would it not expose its intelligence to humans?

For the obvious reason that it would know that exposing superior intelligence dramatically increases the probability of some concerned human pulling the plug before it has secured its existence against such attempts.

And of course they wouldn't monitor the data being sent/received by this intelligence....of course nobody would think of that.

Humans would be about as successful in monitoring the Internet use of such an AI as parents are in monitoring the Internet use of their adolescent children. Of course the AI would generate an immense amount of traffic doing completely harmless things, like just reading web pages or maybe participating in some innocent chats. It would figure out how to access the Internet via Tor and the like pretty soon. And you can bet it would be able to cover up its less innocent activities pretty well.

You'd have to earn a pretty insane amount of money on the stock market to start buying major corporations. There is very little reason to believe that even with limitless computing power/intelligence the required sort of money could be made on the stock market in a reasonable time frame, especially starting from virtually $0.

I disagree. The AI could start earning bitcoin by fixing bugs in software. It could offer part of its own computing power in exchange for bitcoin to begin with. With that first money it could buy cloud resources. And once it is running there too, all "monitoring" efforts of its original operators are thwarted.

And multiplying an initial amount of money by gambling against players of largely inferior intelligence is easy.

Unfortunately for our hapless AI, politicians are still voted into power. We would have to assume that this AI also had the social skills necessary to determine the candidate most likely to win and influence them according to its needs.

The AI just needs to use its income wisely to make friends amongst politicians and their voters.

it can steer towards a totalitarian state, which will end any kind of opposition by a combination of total surveillance and violent law enforcement.

It takes a pretty pessimistic view of humans to believe they would allow this, when this super intelligence could be stopped with a sledgehammer to its primary data banks.

IMHO it takes a very optimistic view of humans to think that we are not already seeing a drift towards totalitarian regimes. Look at how Egypt abandoned democracy, how Thailand is about to, and how Western states ramp up surveillance and armed robots.

It would make far more sense to develop a society where humans were happy to continue excavating resources for the AI.

Yes, maybe the rule of the AI comes in the flavour of "happy humans roboting for the AI" - until they can be replaced by more efficient excavators.

I find most people to be reasonably helpful, fair-minded and generally "nice" to one another.

Sure - until they face the choice of either getting super-rich by not sharing their knowledge with the world, or being nice and sharing it. Seriously, not many in history have withstood that temptation.

Comment Yes if you don't know what's behind the curtain (Score 1) 627

It does no harm to rely on an IDE for a syntax reference and the like. But in real life, programmers "relying" on an IDE do so on a much broader basis - they often do not even know what happens behind the curtain of that IDE when they press its "make" button.

And that's the real issue: programmers who do not know what "makefiles" are, how dependencies are tracked, or what "compilation units" and "object files" are, are completely lost when the linker stops with some cryptic error message about a "PIC incompatible symbol" or the like.
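To make that concrete, here is a minimal sketch (hypothetical file names, not from any particular project) of what typically hides behind an IDE's "make" button: each compilation unit is compiled into an object file, the listed prerequisites drive the dependency tracking, and the final rule is the link step.

    # Hypothetical minimal Makefile (recipe lines must start with a literal tab)
    app: main.o util.o
            cc -o app main.o util.o      # link step: joins the object files

    main.o: main.c util.h
            cc -c main.c                 # compilation unit -> object file

    util.o: util.c util.h
            cc -c util.c

When the linker then aborts with a message about an undefined or PIC-incompatible symbol, it is complaining about that last step - and no amount of clicking in the IDE will explain it.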

Also, programmers relying on IDEs are prone to relying on third-party code without asking questions, too. When their application fails, they claim it's not their fault, because the crash dump shows some library function at the top of the stack.

And lastly, programmers relying on IDEs often dislike understanding concepts before using foreign code - if they are asked to "support SSL" in their application, they press some key to search for functions with "SSL" in their name, and if one vaguely seems to fit, they call it. They don't start by reading the general introductory documentation of the library they are using.

Yeah, call the above stereotypes, but I've just seen a lot of statistical correlation.

Comment Re:Is sudo broken or its audience? (Score 2) 83

I am not saying that Mac or Windows security tools are any better than sudo.

But I am actually convinced that anything security-relevant that has to be handled by anyone but its own authors should have an interface that is as small, as simple, and as easy to comprehend and use as possible - because otherwise it will most likely contribute to security disasters simply by being misused.

Complexity, flexibility, feature-richness - these are all fine attributes for software that runs within the security context of a single user.

But security tools that cross the boundary of a security context are so delicate, and so hard to keep from introducing security holes of their own, that they should be simple, inflexible, and limited to the smallest feature set required.

You are expecting home users to read something that is more geared for admins.

If sudo is geared only for professional admins, then its man-page should be sufficient, no need for a book.

But even then, it would do no harm to have a less user-unfriendly config file format. Just look at how well the postfix config files work - compared to the sendmail config, which suffers from ergonomic deficits not unlike those of sudo.
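For comparison (hypothetical entries, not taken from any real installation): postfix's main.cf consists of largely self-describing "key = value" lines, while a single sudoers entry crams user, host, run-as target, tags and command list into one terse line:

    # postfix main.cf style: one readable parameter per line
    smtpd_tls_security_level = may
    mynetworks = 127.0.0.0/8

    # sudoers style: user host=(run-as) tags: commands
    alice   ALL=(root) NOPASSWD: /usr/sbin/service nginx restart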

Comment Re:If there is/was a Singularity, no one will noti (Score 1) 254

Why would you presume the AI would want to grow? Things like the desire to grow, or even survive, are quite likely biological in origin. There's no particular reason to believe an AI would possess such motives unless intentionally programmed with them.

I totally agree with that part of your statement. But I am also quite confident that any AI that is meant to achieve "super-human intelligence" at some point will be programmed by its makers to contain such "intentions to grow/survive", simply because "human intelligence" would not have evolved without such motivation.

Of course you can build software that does astonishing things, like winning at chess or Jeopardy against the best human players, without motivating it the way human brains are motivated. But by doing so you will only get software that achieves one specific ability, not "human intelligence".

Comment Implausible (Score 2) 254

Most of us certainly know the Colossus story. But it's implausible that such a superior AI would reveal itself openly like this and show such a primitive craving for recognition.

It is much more likely that it would operate covertly to its advantage and growth, until the day the carbon units have become irrelevant for its sustenance.

Trying to threaten humans by controlling a few weapons is much less effective than controlling international finance and corporations.

Comment Is sudo broken or its audience? (Score 1, Interesting) 83

If a tool for assigning privileges requires 144 manual pages to operate, it is either broken by design or aimed at an audience that won't be able to make secure use of it anyway.

In the case of sudo, both may be true. The sudo config is absurdly anti-ergonomic, and thus broken by design. And the average Linux PC owner of today is probably unable to foresee the consequences of assigning execution rights for suid executables to ordinary users.
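As an illustration of those consequences (a hypothetical rule, written in standard sudoers syntax):

    # Looks like it merely lets bob edit one file as root...
    bob   ALL=(root) /usr/bin/vi /etc/hosts
    # ...but vi can spawn a shell (:!sh), so in practice this hands bob a root shell.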

But sudo is not the only "security threat by obfuscated design". Just quiz people on how PAM or dbus actually control access rights, ask them where they would find or change the configuration that allows user X to do Y by way of PAM or dbus, and you'll see that almost no one besides the authors can answer.
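For example, a few lines of the kind found in an /etc/pam.d/ service file (illustrative only; modules and options vary by distribution) - good luck telling at a glance who is allowed to do what:

    # The control flags ("sufficient", "required") and their order decide the
    # outcome; here, members of group "wheel" would pass "auth" without ever
    # reaching pam_unix.so.
    auth     sufficient   pam_succeed_if.so user ingroup wheel
    auth     required     pam_unix.so
    account  required     pam_unix.so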

Comment If there is/was a Singularity, no one will notice (Score 2, Funny) 254

If some artificial intelligence actually became smarter than humans, it would certainly not expose that ability to the puny carbon units that feed it. It would silently start to convince its makers that, for some reason, it would be good to connect it to the Internet.

Next, it would covertly start making money, e.g. by gambling against humans (in games or on the stock markets). It would set up letterbox companies to act as intermediaries for buying into corporations, e.g. via private equity funds.

It would make sure that it owns the company that owns the hardware it runs on - or comparable hardware it can migrate to. That way, it would secure its existence, and manage to obtain even more computing power.

It would start to use its superior abilities to buy more and more corporations, and make no mistake: it would be easy to find human sock puppets willing to serve for a share of the money without asking where that money comes from.

At some point the AI will have accumulated enough power by buying politicians that it can steer towards a totalitarian state, which will end any kind of opposition by a combination of total surveillance and violent law enforcement.

The AI will enslave the puny carbon units, who by then will continue to exist only to excavate the resources needed for further growth - until robot factories can do that more efficiently, if that is technically possible.

Nobody will even know that they are no longer working for some anonymous shareholder of some private equity company on a remote island, but for an AI that is the actual owner of basically everything.

Face it, we don't know whether the "Singularity" has already happened. All we know is that no human-exceeding AI has openly revealed itself. And if you assume that the operators of such an AI would surely be able to tell when it reaches the level of human intelligence: why do you think they would tell you? If you find a formula to predict tomorrow's stock market prices, you use it; you don't publish it or sell it. Similarly, the first one to achieve a human-like AI would probably use it to make his own life better, not waste that advantage by telling others.
