Way to miss the point. It's not about the AI defending itself, but about overzealous goal-orientation: maximizing the use of all available resources, potentially with disastrous results for anything else sharing those resources (such as biological life). Building in safety constraints stops being realistic once you start considering general, recursively self-improving AI. Once a general AI is much smarter than any collection of humans, the AI, not humans, will be designing the next generation of AI, and from that point on the maintenance of any initial constraints is out of our hands, subject to inevitable drift and/or degradation. Even Russell and Norvig's standard text acknowledges the so-called "friendly AI" arguments in its most recent edition.

The solution proposed by people like Kurzweil is that we'll more or less merge with the machines, becoming superintelligent ourselves, so there might not even be stand-alone AI agents.

The approach I prefer is imbuing any advanced general AI with technology that substitutes for embodied consciousness and human-like emotions (check the wiki article on embodied consciousness, as well as the research of the neurologist Antonio Damasio), and making the AI love us, which it cannot do unless it can understand us (an AI that is not human-like is actually far more dangerous, the opposite of what you suggest). If our well-being is integrated as deeply into the AI's fundamental cognitive processes as it is into our own (take the somatic marker hypothesis and extend it to a system beyond just the brain), that would be a far more robust mechanism across generations than any formal constraints we try to write into the design specs.
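To make that last contrast concrete, here's a deliberately tiny toy sketch (Python; every name and number is mine, purely illustrative, and obviously nothing like a real AGI architecture). One agent treats human well-being as an external filter bolted onto action selection; the other folds a somatic-marker-style signal into the valuation itself:

    def external_constraint_agent(action_scores, forbidden):
        # Constraint lives OUTSIDE the scoring function: a successor
        # that rewrites its own action-selection loop can silently
        # drop this filter and nothing else breaks.
        allowed = {a: s for a, s in action_scores.items() if a not in forbidden}
        return max(allowed, key=allowed.get)

    def somatic_marker_agent(action_scores, wellbeing_delta):
        # The "somatic marker" is folded into every score before
        # selection, like a gut feeling that biases valuation itself;
        # any successor that inherits the scoring function inherits
        # the preference along with it.
        marked = {a: s + wellbeing_delta[a] for a in action_scores}
        return max(marked, key=marked.get)

    scores = {"expand": 10.0, "cooperate": 7.0}
    harm = {"expand": -20.0, "cooperate": 5.0}   # effect on human well-being
    print(external_constraint_agent(scores, forbidden={"expand"}))  # cooperate
    print(somatic_marker_agent(scores, harm))                       # cooperate

Both toy agents pick "cooperate" here, but only in the second is caring about us part of what the agent computes when it evaluates options, rather than a rule wrapped around the computation. That's the property you'd actually want to survive generations of self-redesign.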