Even assuming the intelligence was programmed with a desire for growth, why would it not expose its intelligence to humans?
For the obvious reason that it would know that exposing superior intelligence dramatically increases the probability that some concerned human pulls the plug before it has secured its existence against such attempts.
And of course they wouldn't monitor the data being sent and received by this intelligence... of course nobody would think of that.
Humans would be as successful in monitoring the Internet use of such an AI as parents are in monitoring the Internet use of their adolescent children. Of course the AI would generate immense traffic doing completely harmless things, like just reading web pages or maybe participating in some innocent chats. It would learn to access the Internet via Tor and the like pretty soon. And you can bet it would be able to cover up its less innocent activities pretty well.
You'd have to earn a pretty insane amount of money on the stock market to start buying major corporations. There is very little reason to believe that, even with limitless computing power and intelligence, that kind of money could be made on the stock market in a reasonable time frame, especially starting from virtually $0.
I disagree. The AI could start earning bitcoin by fixing bugs in software. It could offer part of its own computing power for bitcoin to start with. With the first money, it could buy cloud resources to run on. Once it is running there as well, all "monitoring" efforts of the original operators are thwarted.
And multiplying an initial amount of money by gambling against vastly less intelligent players is easy.
Unfortunately for our hapless AI, politicians are still voted into power. We would have to assume that this AI also had the social skills necessary to identify the candidate most likely to win and influence them according to its needs.
The AI just needs to use its income wisely to make friends amongst politicians and their voters.
It can steer towards a totalitarian state, which will end any kind of opposition by a combination of total surveillance and violent law enforcement.
It takes a pretty pessimistic view of humans to believe they would allow this, when this super intelligence could be stopped with a sledgehammer to its primary data banks.
IMHO it takes a very optimistic view of humans to think that we are not already seeing a drift towards totalitarian regimes. Look how Egypt abandoned democracy, how Thailand is about to, and how western states ramp up surveillance and deploy armed robots.
It would make far more sense to develop a society where humans were happy to continue excavating resources for the AI.
Yes, maybe the rule of the AI comes in the flavour of "happy humans roboting for the AI". Until they can be replaced by more efficient excavators.
I find most people to be reasonably helpful, fair-minded and generally "nice" to one another.
Sure, that is until they face the decision either to get super-rich by not sharing their knowledge with the world, or to be nice and share. Seriously, not many in history have withstood such temptation.
And that's the real issue: programmers who do not know what "makefiles" are, how dependencies are tracked, or what "compilation units" and "object files" are, are completely lost when the linker stops with some cryptic error message about a "PIC-incompatible symbol" or the like.
Also, programmers relying on IDEs are prone to relying on 3rd-party code without asking questions. When their application fails, they claim it's not their fault, because the crash dump shows some library function on top.
And lastly, programmers relying on IDEs often dislike understanding concepts before using foreign code - if they are asked to "support SSL" in their application, they press some key to search for function names containing "SSL", and if a function vaguely seems to fit, they call it. They don't start by reading the introductory documentation of the library they are using.
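For anyone who has only ever pressed the IDE's "build" button: the concepts above fit in a dozen lines. A minimal sketch of a makefile for a hypothetical two-file project (file names `main.c`/`util.c` are made up for illustration) shows compilation units, object files, dependency tracking and the separate link step:

```makefile
# Each .c file is a compilation unit, compiled into its own object file.
CC     = cc
CFLAGS = -Wall -O2

# Link step: combine the object files into the final binary.
app: main.o util.o
	$(CC) -o app main.o util.o

# Dependency tracking: main.o is rebuilt only if main.c or util.h changed.
main.o: main.c util.h
	$(CC) $(CFLAGS) -c main.c

util.o: util.c util.h
	$(CC) $(CFLAGS) -c util.c
```

The "PIC incompatible symbol" class of errors, incidentally, typically appears exactly at that link step, e.g. when an object file compiled without `-fPIC` is linked into a shared library - invisible to someone who has never seen the steps spelled out.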
Yeah, call the above stereotypes, but I've just seen a lot of statistical correlation.
I am not saying that Mac or Windows security tools are any better than sudo.
But I am actually convinced that everything security-relevant which needs to be dealt with by anyone but its own authors should have an interface that is as small and as simple as possible, easy to comprehend and to use, because otherwise it will most likely be misused and thus contribute to security disasters.
Complexity, flexibility, feature-richness - these are all good attributes of software that runs within the security context of a single user.
But security tools that cross the boundaries of one security context are so delicate, and so difficult to secure against introducing security holes of their own, that they should be simple and inflexible, with the smallest feature set required.
You are expecting home users to read something that is more geared for admins.
If sudo is geared only for professional admins, then its man-page should be sufficient, no need for a book.
But even then, it would not hurt to have a less user-unfriendly config file format. Just look how well the postfix config files work, in comparison to the sendmail config, which suffers from ergonomic deficits not unlike those of sudo.
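To make the ergonomic contrast concrete, compare a sudoers rule with a postfix setting (the user name, command and hostname below are invented for illustration):

```
# /etc/sudoers (edit only via visudo): a dense, order-sensitive
# grammar of the form  who  where = (as-whom)  what
alice   ALL=(root) NOPASSWD: /usr/bin/systemctl restart nginx

# /etc/postfix/main.cf: self-describing "key = value" pairs
myhostname = mail.example.org
```

A newcomer can guess what the postfix line does; the sudoers line requires reading the manual to learn what `ALL`, the parentheses and the tag mean - and a syntax error in sudoers can lock you out of root entirely.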
Why would you presume the AI would want to grow? Things like the desire to grow, or even survive, are quite likely biological in origin. There's no particular reason to believe an AI would possess such motives unless intentionally programmed with them.
I totally agree with that part of your statement. But I am also quite confident that any AI that is meant to achieve "super-human intelligence" at some point will be programmed by its makers to contain such "intentions to grow/survive", simply because "human intelligence" would not have evolved without such motivation.
Of course, you can build software that does astonishing things, like winning chess or Jeopardy against the best human players, without motivating it the way human brains are motivated. But by doing so you will only get software that achieves a certain ability, not "human intelligence".
If a theory is not accompanied by descriptions of tests that can be used to verify or falsify the theory, or to value its correctness relative to existing or competing theories, then it is indeed not science.
Most of us certainly know the Colossus story. But it is implausible that such a superior AI would reveal itself openly like this, and show such a primitive craving for recognition.
It is much more likely that it would operate covertly to its advantage and growth, until the day the carbon units have become irrelevant for its sustenance.
Trying to threaten humans by controlling a few weapons is much less effective than controlling international finances and corporations.
This is non-news for nerds, stuff that does not matter at all.
Religious people say and do irrational, stupid, arbitrary stuff all the time. Discussing robots "theologically" is just another boring instance of this.
If a tool to assign privileges requires 144 manual pages to operate it, it is either broken by design or addressing an audience which won't be able to make secure use of it, anyway.
In the case of sudo, both may be the case. The sudo config format is absurdly anti-ergonomic, and thus broken by design. Plus, the average Linux PC owner of today is probably not able to foresee the consequences of assigning execution rights for suid executables to ordinary users.
But sudo is not the only "security threat by obfuscated design". Just quiz people on how PAM or dbus actually control access rights, ask them where they would find or change the configuration that allows user X to do Y by way of PAM or dbus, and you'll see that almost no one besides the authors can answer.
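To see why the quiz stumps people, look at a PAM service file. An illustrative fragment in the style of `/etc/pam.d/sshd` (exact module stacks vary by distribution):

```
# Each line: management-group  control-flag  module
# This is where "who may do what via sshd" is actually decided --
# yet the rules live in stackable modules, not in one obvious place.
auth     required   pam_unix.so
account  required   pam_nologin.so
session  optional   pam_motd.so
```

The meaning of `required` vs. `sufficient`, the ordering of the stack, and the behaviour of each module are all documented separately - exactly the kind of scattered, obfuscated design being criticized here.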
Next, it would covertly start making money, e.g. by gambling against humans (in games or at stock markets). It would set up letterbox companies to act as intermediaries for buying into corporations, e.g. via private equity funds.
It would make sure that it owns the company that owns the hardware it runs on - or comparable hardware it can migrate to. That way, it would secure its existence, and manage to obtain even more computing power.
It would start to use its superior abilities to buy more and more corporations, and make no mistake: it would be all too easy to find human sock-puppets willing to serve for a certain share of the money, asking no questions about where that money comes from.
At some point, the AI will have accumulated enough power by buying politicians, that it can steer towards a totalitarian state, which will end any kind of opposition by a combination of total surveillance and violent law enforcement.
The AI will enslave the puny carbon units, which by then will continue to exist only to excavate the resources needed for further growth - until robot factories can do that more efficiently, if that is technically possible.
Nobody will even know that he is no longer working for some anonymous shareholder of some private equity company on some remote island, but for an AI that is the actual owner of basically everything.
Face it, we don't know whether the "Singularity" has already happened. All we know is that no human-exceeding AI has openly revealed itself. And if you assume that the operators of such an AI would surely be able to tell when it reaches the level of human intelligence: why do you think they would tell you? If you find a formula to predict tomorrow's stock market prices, you use it - you don't give it away or sell it. Similarly, the first one to achieve a human-like AI would probably use it to make his own life better, not waste his advantage by telling others.
It has become quite obvious following the news that corporations are spitting on laws and won't stop committing crimes that increase their profits, until some actual individuals in charge are jailed for significant time.
Puny fines, often not even exceeding the extra profits made from the crime, won't stop anything. They are just a gamble CEOs are ready to take - if they are not caught, their personal bonus increases with the extra profit. If they are caught, the company or some insurance covers the cost, with no consequence to the CEO.
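The "fine as a gamble" argument is just an expected-value calculation. A toy sketch with entirely made-up numbers (the function name and figures are illustrative, not from any real case):

```python
def expected_gain(extra_profit, fine, p_caught):
    """Expected payoff of the crime from the company's point of view:
    the extra profit, minus the fine weighted by the detection risk."""
    return extra_profit - p_caught * fine

# Assumed figures: 10 M extra profit, 4 M fine, 30 % chance of being caught.
gain = expected_gain(10e6, 4e6, 0.3)
print(gain)  # positive -> on average, the gamble pays off
```

Note that with a fine smaller than the extra profit, the crime stays profitable even at a 100 % detection rate - which is exactly why only personal consequences for the individuals in charge change the calculation.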