The concern isn't so much that the AI would have human-like goals that drive it into conflict with regular-grade humanity in a war of conquest, as that it might have goals that are anything at all from within the space of "goals that are incompatible with general human happiness and well-being".
If we're designing an AI intended to do things in the world of its own accord (rather than strictly in response to instructions) then it would likely have something akin to a utility function that it's seeking to maximise, and so implicitly has a goal defined by that function - some arrangement of the world that scores the most highly according to that function. Whether the nature of that goal is inscrutable beyond the wit of man, or utterly prosaic like the "paperclip maximiser"... if it doesn't share our values then the things that we value may end up disassembled for raw materials.
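To make the "goal defined by that function" bit concrete, here's a toy sketch in Python - the utility function, the candidate world-states and the numbers are all invented purely for illustration, not a claim about any real system:

def paperclip_utility(world):
    # The hypothetical paperclip maximiser scores a world purely by paperclip count.
    return world["paperclips"]

# A tiny universe of possible arrangements of the world.
candidate_worlds = [
    {"paperclips": 10, "things_we_value_intact": True},
    {"paperclips": 10**9, "things_we_value_intact": False},  # everything else fed in as raw material
]

# The agent's implicit "goal" is just whichever arrangement scores highest.
goal = max(candidate_worlds, key=paperclip_utility)
print(goal)  # picks the paperclip-saturated world, however much we valued the rest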
In the admittedly unlikely event of a machine achieving a degree of intelligence that allows it to completely achieve any goal it happens to have, the only way for humanity to win is if the machine's goals align near-perfectly with what's best for humanity - a vanishingly small target when you consider the universe of possible utility functions that aren't that.
Obviously not really a concern with the current state of technology, but if progress in making more intelligent machines follows anything like an exponential curve then we could fall foul of how bad our intuitions are around exponentials, and end up being taken by surprise by a machine that's rather abruptly more intelligent than we expected. Especially if we make it able to improve itself.
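To put some (entirely made-up) numbers on the exponential-intuition point:

def capability(years, doubling_time=1.0, start=0.01):
    # Hypothetical machine capability as a fraction of "human level", starting
    # at 1% and doubling at a fixed rate - figures chosen only to illustrate
    # the shape of the curve, not as any kind of prediction.
    return start * 2 ** (years / doubling_time)

for year in range(11):
    print(year, round(capability(year), 2))
# Six years in it's still below human level (0.64); a year later it's past it
# (1.28), and three years after that it's at ten times human level (10.24).
# That's the sense in which the surprise arrives "rather abruptly".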
Compare the following two questions:
"If the energy in a computer ends up as heat and information, and the energy in a heater just ends up as heat, isn't there some loss of energy in the process?"
"If the water in a waterfall all ends up at the bottom, and the water in a hydroelectric dam all ends up at the bottom, isn't there some loss of (gravitational potential) energy in the process?"
One dumps all the energy provided into some sort of ground state as quickly and wastefully as possible; the other carefully channels it to extract an additional benefit on the side. The analogy isn't perfect (for one thing, the water at the bottom of the waterfall will land with more energy than the water sent through a turbine, and make a hell of a noise in doing so), but I don't think there's anything thermodynamically wrong with the idea of siphoning off and redirecting some part of a flow of energy to drive a turbine or a calculation, and still have it ultimately all end up as heat.
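If it helps, here's the dam analogy as a back-of-the-envelope energy balance (made-up figures, and ignoring the noise and splashing):

# One kilogram of water dropping 100 metres - figures invented for illustration.
g = 9.81          # m/s^2
mass = 1.0        # kg
height = 100.0    # m

available = mass * g * height   # ~981 J of gravitational potential energy either way

# Waterfall: the whole lot ends up as heat (and noise) at the bottom.
waterfall_heat = available

# Dam: suppose the turbine siphons off 90% as electricity; the remainder still
# arrives at the bottom as heat.
electricity = 0.90 * available
dam_heat_at_bottom = available - electricity

# The electricity eventually degrades to heat as well, whether it drives a
# heater or a calculation, so nothing has gone missing - the totals match.
assert abs((electricity + dam_heat_at_bottom) - waterfall_heat) < 1e-9
print(waterfall_heat, electricity + dam_heat_at_bottom)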
To look at it from the opposite end, if your country is abiding by its treaty obligations then it may feel compelled to make laws reflecting them, which you are then subject to. That is of course a pretty big "if" - if it has decided not to abide by them then it becomes a question of what consequences it's either willing to concede to or able to have forced upon it by whoever's on the other end of the treaty.
If your hypothetical asteroid miner were from a smaller country, one less able to dictate terms to the rest of the world, they might find themselves subject to rather more outside interference...
My point was just that "intelligence" can't be impossible to reproduce algorithmically, because physics is amenable to simulation and has given rise to intelligence.
If it can be produced by a mass of wet jelly sat between two ears, it can be produced by a computer running the right program. The challenge then is to unpick the puzzle of what that jelly is actually doing, and to do so sufficiently clearly to be able to specify that "right program".
Not saying it's easy; it's incredibly difficult. But possible in theory.
Current algorithms are not Artificial General Intelligence. What we have now are algorithms for domain-specific intelligences.
But in principle, physics can be simulated by an algorithm. Therefore a human brain can be modelled at the particle level and run in simulation. Therefore whatever a human brain is doing that produces intelligence (assuming for now that it does, in fact, produce intelligence) can, in principle, be reproduced by an algorithm, even if it has to treat the brain as a black-box to do so.
Consider that to be the brute-force approach to algorithmic intelligence. Obviously the real prize is to find the shortcut - abstract out only the necessary elements of what the brain does and express those as algorithms.
I may be wrong, but I feel like you missed the point of the post above you... I took the "$20 trillion dollar bank account" to be an analogy for the world's fossil fuel reserves - a significant fraction of which we probably have to leave in the ground if we want to avert climate change.
All the focus is on reducing demand by reducing usage, and that would theoretically force fuels to be sold cheaper until the point where it's not economically viable to extract them. But it seems like an indirect approach compared to convincing a government that controls a lot of fuel reserves to just stop drilling them out and leave them buried.
But of course it's not really 'realistic' to expect them to do that - they're sitting on a bottomless well of wealth just begging to be dug up. It would make them uncompetitive to stop, it would mean other nations continue to profit while they sit on their hands, it would weaken their position of power on the world stage... it would help save the ecosystem of the planet, but clearly that's of no particular importance compared to wealth and power.
I'm not certain where it fits into your analogies, but I'm using Windows 8 with Classic Shell and the only time I ever even see Metro is the rare occasion when my touchpad driver forgets that I disabled the "Edge swipe" gesture and that goofy little "Charm bar" sidebar pops up.
It boots to the Desktop mode, I have all the default full-screen Metro apps replaced with my own programs, it has the familiar old start menu and control panel and everything. For all intents and purposes, I don't need to know it's Win8. The one thing that hasn't quite gone back to the way it used to be is the network connect/disconnect dialog - that still opens up a full-height sidebar with the names of nearby wireless networks. But I can live with that.
I'd kindly ask the British pro-EU majority to stand up.
Right here. Especially when the EU is trying to legislate something like net neutrality and the useless gang of cunts we call a government decides to veto it.
I can't actually find an "edit post" button.
There isn't one. I've never been 100% sure whether this is deliberate and intended to promote careful checking before pressing submit, or if it's just because something in the code has been broken since forever.
Or to borrow some actual slashdot headlines...
Fomalhaut C has a huge bitcoin debris ring
Scientists print bitcoin
Police pull over more drivers for bitcoin tests
Apple pushes developers to bitcoin
NASA schedules space walks to fix bitcoin pumps
Bitcoin exchange value halves after... wait, I did this one wrong
Unreleased 1963 bitcoin on sale
Want to fight allergies? Get bitcoin
Dammit, it still works, I would read every single one of these.
Ownership is established by knowing the private key for the wallet/address. The FBI gained that key --whether by keylogger, wiretap, plea bargain or $5 wrench is unclear-- and transferred all the funds to an address under their exclusive control.
So from the point of view of the bitcoin protocol, the FBI were the proper owners (they knew the key) and therefore weren't obstructed from making that transfer. Likewise they wouldn't be obstructed from further use of the address they control unless a majority of miners collaborated to refuse to include their transactions in blocks *and* refuse to mine on top of any chain that included such a transaction in a block mined by someone else.
Which would, strictly speaking, be a breach of protocol - you're supposed to always mine on top of the longest chain. It would nonetheless be possible if they patched the mining software to selectively ignore particular addresses, but that seems like a bad precedent to set and a bad capability to build into the network.
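To illustrate the "ownership is just knowing the key" point, here's a rough sketch using the third-party Python ecdsa package as a stand-in for Bitcoin's actual signing and address rules (which involve rather more hashing and encoding than this):

import hashlib
from ecdsa import SigningKey, SECP256k1, BadSignatureError

# The wallet's key pair. Whoever holds signing_key can authorise spends.
signing_key = SigningKey.generate(curve=SECP256k1)
public_key = signing_key.get_verifying_key()

# A (heavily) simplified "address": a hash of the public key.
address = hashlib.sha256(public_key.to_string()).hexdigest()[:20]

# The FBI obtains signing_key (keylogger, plea bargain, $5 wrench...) and signs
# a transaction moving everything to an address only they control.
tx = b"move all outputs from " + address.encode() + b" to an FBI-controlled address"
signature = signing_key.sign(tx)

# All a node ever checks is that the signature verifies against the address's
# public key - it has no concept of who the "rightful" owner is.
try:
    public_key.verify(signature, tx)
    print("Valid spend: as far as the protocol is concerned, the signer is the owner.")
except BadSignatureError:
    print("Invalid spend.")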
"There are things that are so serious that you can only joke about them" - Heisenberg