An anonymous reader writes "With the personal robotics revolution imminent, a law professor and a roboticist (called Professor Smart!) argue that the law needs to think about robots properly. In particular, they say we should avoid 'the Android Fallacy' — the idea that robots are just like us, only synthetic. 'Even in research labs, cameras are described as "eyes," robots are "scared" of obstacles, and they need to "think" about what to do next. This projection of human attributes is dangerous when trying to design legislation for robots. Robots are, and for many years will remain, tools. ... As the autonomy of the system increases, it becomes harder and harder to form the connection between the inputs (your commands) and the outputs (the robot's behavior), but it exists, and is deterministic. The same set of inputs will generate the same set of outputs every time. The problem, however, is that the robot will never see exactly the same input twice. ... The problem is that this different behavior in apparently similar situations can be interpreted as "free will" or agency on the part of the robot. While this mental agency is part of our definition of a robot, it is vital for us to remember what is causing this agency. Members of the general public might not know, or even care, but we must always keep it in mind when designing legislation. Failure to do so might lead us to design legislation based on the form of a robot, and not the function. This would be a grave mistake."
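The determinism point in the quote can be made concrete with a small sketch. The policy below is hypothetical (not from the article): a deterministic control function where the same sensor reading always produces the same action, yet two readings that differ by a fraction of a centimeter can cross an internal threshold and trigger visibly different behavior, which an observer might misread as agency.

```python
def avoid_obstacle(distance_cm: float) -> str:
    """Deterministic policy: map a range-sensor reading to an action.

    Hypothetical example; the function name and thresholds are
    illustrative, not drawn from the article.
    """
    if distance_cm < 30.0:
        return "turn_left"
    elif distance_cm < 60.0:
        return "slow_down"
    else:
        return "go_straight"

# Identical inputs yield identical outputs, every time.
assert avoid_obstacle(29.9) == avoid_obstacle(29.9)

# But a real sensor never reports exactly the same value twice, so
# nearly identical situations can produce different actions:
print(avoid_obstacle(29.9))  # turn_left
print(avoid_obstacle(30.1))  # slow_down
```

The variation comes entirely from the inputs, not from anything inside the policy, which is the authors' point: the mapping from commands and sensor data to behavior remains deterministic no matter how opaque it looks from outside.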