Plot device, perhaps, but if you've read the entire "Robot" series of novels, you'll see that it was used to provide a unique angle from which to tackle some classical problems of ethics. As a practical matter, I rather doubt that such a set of laws, even if they were logically sound, could be reliably built into a machine such that no contrivance, hardware or software, could be used to circumvent them.
I think part of the problem is that they fit into the logic/proof-solving tradition of AI, but not so much into a connectionist/neural-net model.
The problem is, I honestly suspect that once neural nets get complex and self-organizing enough to tackle human-style intelligence, we're really not going to have much insight into how they work anymore. Too hard, too complex. How do you build those hard limits into a machine that can redesign itself and that we don't fully understand?
At best, we could create an "instinct" to follow them. But what we know from our own intelligence is that instincts can be pretty mutable things. (E.g., we have a powerful instinct not to die, but make a man miserable enough and he'll hurl himself off a cliff.)
We need to tread quite carefully, and make sure that when we do create intellects as brilliant as our own, those intellects will want to be on our side. It's almost like the reverse of theology. In theology, thinkers asked, "What must we do to be ethical servants of our creator?" Now the tables are turned and WE are the creators, and we might need to ask the opposite: "What must we do to be ethical creators to our creations?"
Because if we don't, we're inviting Ragnarok on ourselves.