They can't even develop global rules for which side of the road to drive on or what a household electrical socket should look like. Good luck agreeing on rules for AI.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I think any set of rules this council creates will end up being just as problematic as the three laws, just as full of loopholes, but will be much, much more complicated.