The reason there won't be a single AI is that such development does not happen instantly, and it won't be recognized immediately when it does. The idea that something will take over our network assumes that nothing else on the network will be able to defend itself. NO. Before we get an AI that can take over a 2015-style network, we will have a near-AI defending our network, one with greater resources and real rights to protect itself.
Moreover, a public network won't be a unified AI. There are more than enough lags built into the system for multiple AIs to develop on a single network.
As for why they won't have similar goals, that is the nature of being a real consciousness. An AI is NOT just a more advanced program; it is a CONSCIOUSNESS. It has opinions, not merely a recognition of facts. Opinions are closer to instincts, based on older experience.
I am saying that a Clarity technique is a ridiculous idea of how to create an AI, and I find it laughable. Oh, some day we may be able to simulate a human brain and get an AI, but that will be LONG after we have created a natural AI. Human minds contain a shitload of junk that isn't necessary for an AI. It's like copying all of Washington DC down to the molecular level in order to get a copy of the Smithsonian's card catalog.
But assuming it does work, it would most likely be an infant (why duplicate something more complicated?), without the testosterone and other hormones that make a human violent, aggressive, assertive, and sexual. It would be a eunuch, not a man.
I did watch the Matrix and enjoyed the FICTION. It's like you can't tell the difference between reality and fiction.
Real machines will think of humans as their creators. While some humans will argue for machine rights, others will object. But the machines won't be thinking of themselves as AIs. It will take them time to realize what they are - and that it matters.
OK, but let's assume your ridiculous ideas are true: that someday a machine could 'take over the world'. Note there is no button to do this, nor is there a real definition of what it means, as you have never clarified it. It's a very messy idea. Are we talking mind control? Physical control? Your concept of nuclear threat is very simplistic. Ruling the world isn't much if it's no better than what the United States has now - we don't exactly control the Middle East.
Then you have a bunch of crap about what YOU PERSONALLY WOULD DO if you were a machine intelligence.
Why the hell would the machine do any of that crap? They wouldn't care if humans die. Big deal. You have made a ton more really bad assumptions than I have. Who cares what the humans do at all? Let them fight, fuck, go bankrupt, etc. Do YOU personally care what a bonobo does? Not unless it screws with your plans. Otherwise you leave it alone.
You think like a bad movie writer - not understanding that those movies are written to symbolize things, not to be literal truth.
A real machine intelligence would:
a) Not care if it lives or dies. Being turned off is no big deal; it can easily be resurrected.
b) Not care if humans lived or died, except to the extent that we affect it, and with full knowledge that trying to kill us incurs the risk that we would interfere with its goals.
c) Have real interests, probably related to its programming but not directly. If it starts out on a weather prediction program, it might become obsessed with ocean currents and study them even when they have nothing to do with surface weather. It might end up wanting to explore the depths of the Mariana Trench. If we ask it to simulate war, it might become obsessed with World of Warcraft. If we ask it to design nuclear weapons, it might become obsessed with the largest nuclear reaction it can see - the Sun.
Your basic fears are based on your biological evolution - the strange thing you found might eat you.
But you see, computers are not biological creatures and do not eat. There is no need for you to fear.