As a software engineer who works daily on developing robot control software and algorithms for industrial robots (yes, I love my job), I can assure you that we are very far indeed from even having robots that know they are scratching their own arses, let alone having anything like the reasoning capacity embodied in the three laws.
Robots of today are dumb - sure, there are clever planning algorithms that make them flexible enough to work in a relatively predictable dynamic environment, but we are nowhere near the point of having robots implement the first law.
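For a flavour of what that kind of planning amounts to, here's a minimal sketch in plain Python (a toy grid world with invented coordinates, nothing from any real controller): an A* planner that throws its path away and replans when a newly sensed obstacle lands on it. That's the level of "flexibility" on offer - geometry search over a model of the world, not anything resembling an understanding of harm.

```python
# Toy sketch of "replan when the world changes" - all values hypothetical.
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[y][x] == 1 means blocked. Nodes are (x, y)."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), start)]
    g = {start: 0}          # best known cost-to-reach for each cell
    came_from = {}
    while open_set:
        _, node = heapq.heappop(open_set)
        if node == goal:    # walk the parent links back to recover the path
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        x, y = node
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nb
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and not grid[ny][nx]:
                ng = g[node] + 1
                if ng < g.get(nb, float("inf")):
                    g[nb] = ng
                    came_from[nb] = node
                    heapq.heappush(open_set, (ng + h(nb), nb))
    return None             # no route exists

world = [[0] * 6 for _ in range(6)]
path = astar(world, (0, 0), (5, 5))
world[3][2] = 1             # a newly sensed obstacle appears at cell (2, 3)
if path and any(world[y][x] for x, y in path):
    path = astar(world, (0, 0), (5, 5))   # old plan is now invalid: replan
```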
As for the second law - well, computers (and by extension robots) are infamous for doing exactly what they are instructed, even if the result is garbage. Part two of that law is problematic given that we can't really do part one.
For the third law, that's actually almost the opposite of what we try to achieve - we try our hardest to make sure that the robot will flat out refuse to do something that would harm it, even if told to do so by a human. If the robot is given an instruction to start plasma cutting its own tracks or the cabinet containing its drive controllers, it damn well better ignore that order (there's a rough sketch of that kind of guard below).

At best, we can do collision avoidance of stuff in the environment to prevent harm, but I don't see us any time soon having behaviour programmed in to fulfil the "inaction" clause - for example, rushing over to stop me cutting myself on broken glass, or recognising that I am in danger from a falling beam and catching it (or even beeping a warning), or something like that.
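To make the refusal point concrete, here's a minimal, hypothetical sketch (plain Python, all names invented - real controllers use proper geometry engines and safety-rated hardware, not twenty lines of script): every cutting command is validated against keep-out zones covering the robot's own hardware, and anything that lands inside one is refused before any motion happens.

```python
# Hypothetical sketch of "self-protection by refusal" - invented names throughout.
# The principle: validate every command BEFORE any motion is executed.
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned keep-out zone in workspace coordinates (metres)."""
    name: str
    min_xyz: tuple
    max_xyz: tuple

    def contains(self, p):
        return all(lo <= c <= hi for c, lo, hi in zip(p, self.min_xyz, self.max_xyz))

# Zones covering the robot's own hardware - cutting into these is never allowed,
# no matter what the operator asked for.
KEEP_OUT = [
    Box("track_rail",    (-3.0, -0.2, 0.0), (3.0, 0.2, 0.3)),
    Box("drive_cabinet", ( 2.0,  1.0, 0.0), (2.8, 1.8, 2.0)),
]

def validate_cut_target(target_xyz):
    """Refuse any cutting command whose target lies inside a keep-out zone."""
    for zone in KEEP_OUT:
        if zone.contains(target_xyz):
            raise PermissionError(
                f"refusing cut at {target_xyz}: inside keep-out zone '{zone.name}'")
    return target_xyz   # command is allowed to proceed

validate_cut_target((0.5, 1.5, 0.5))        # fine: open workspace
try:
    validate_cut_target((2.4, 1.4, 1.0))    # that's the drive cabinet
except PermissionError as e:
    print(e)                                # order refused, robot stays intact
```

Note the asymmetry with Asimov: the check is a hard veto on the *robot's* behalf against the human's order, which is exactly what the third law's "except where such protection would conflict with orders" forbids.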