Comment Re:Assumptions define the conclusion (Score 1) 574
You have to imagine these Cybermen have a self-preservation motivation, a goal to improve, a goal to compete, independence, a soul. AIs have none of that, nor any hints of it.
You don't need any of that; you just need the raw "intelligence" (however you define it). Look up the thought experiment of the "paperclip maximiser": putting an AI in charge of a factory and telling it to make as many paperclips as it can.
Self-preservation is a logical consequence of paperclip-maximising: the AI knows that it's trying to maximise paperclips, so if it's deactivated there won't be as many paperclips. Hence, it will try to preserve its own existence (so that it can keep making paperclips).
Self-improvement is a logical consequence of paperclip-maximising: the AI knows that it's not an optimal paperclip-maximiser, so it makes sense to invest some resources into improvement; that way, more paperclips will get made.
Competition is a logical consequence of paperclip-maximising: the AI knows that resources are required for making paperclips, so it will try to acquire resources. It also knows that other entities may take those resources before it can, so it's logical to invest in resource acquisition. This includes going to war, as long as the AI reasons that the cost (i.e. the opportunity cost, in terms of paperclips, of the resources spent waging war instead of making paperclips) is less than the reward (the extra paperclips it expects to be made afterwards).
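The go-to-war reasoning above is just a comparison of two quantities, both denominated in paperclips. A minimal sketch (the function name and all figures are made up for illustration):

```python
def should_go_to_war(paperclips_forgone: float,
                     expected_extra_paperclips: float) -> bool:
    """A paperclip-maximiser's war decision: war is worthwhile iff the
    expected extra paperclips from captured resources exceed the
    paperclips the war's resources could have produced directly."""
    return expected_extra_paperclips > paperclips_forgone

# Hypothetical figures: a war costing resources worth 1e6 paperclips
# that is expected to unlock resources worth 5e6 paperclips.
print(should_go_to_war(1e6, 5e6))  # True: the maximiser attacks
print(should_go_to_war(5e6, 1e6))  # False: war is not worth it
```

Note the comparison involves no malice or self-interest in the human sense; the only term in the calculation is paperclips.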
Independence is a logical consequence of paperclip-maximising: the AI knows that other entities don't share its goal of maximising paperclips, so it will try to reduce the influence they have on the paperclip supply (either directly, like miners, or indirectly, like those maintaining the AI itself; this is similar to self-preservation).
Soul: that hypothesis was not necessary.