Asimov's laws exist only as devices in his FICTIONAL books. They're not real.
I hate to break that to people. I know, it's hard to believe there are things called laws which nobody follows and which aren't real. But Asimov's laws are even more fake than speed limits or campaign ethics laws, in that they just don't exist.
As for implementing Asimov's ideas in real silicon, how the hell would you ever give an AI the capability to look over a given situation and make the judgement calls the laws demand? It would require some sort of God-like ability to see into the future and weigh every aspect of an action to know whether doing or NOT doing it would cause harm to a human. It's impossible. Even flesh-and-blood humans can't do that. We just do something, and occasionally the consequences bite us and kill somebody else. We dodge the deer in the road, yay, then slam head-on into oncoming traffic and kill everybody in a compact car.
Or take a real local case: a lady driving too fast and not paying proper attention (compare her to an AI driving system that's late to react) came upon a big transit bus stopped to pick up passengers. With no room left to stop, the driver had three options: veer into oncoming traffic, hit the bus, or veer right up onto the sidewalk.
The proper action would have been to hit the bus: both vehicles would absorb the crash and probably everybody walks away. Vehicles can be fixed. But that choice would trip the Asimov law against allowing harm to come to the driver, because the driver MIGHT get hurt. So, as an AI might have done, the driver instead chose the sidewalk. The driver suffered no harm; Asimov's law was unbroken. However, standing on that sidewalk were all the people waiting to board the bus. The car mowed them down and obliterated the bus stop shelter next to them. It was a severe impact; several people died and others were badly injured.
So veering onto the sidewalk turned out to be a horrible choice. Had an AI made that choice, smug in the satisfaction that it had protected its driver, and then found a LOT of innocent people in the way, what do you expect it to do? It's going to be unable to avoid harming humans. There would be no option and no time. Not even for a human driver.
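To make the underlying problem concrete, here's a minimal toy sketch of why "pick the least harmful option" fails when the machine can't see everything. All the names and harm scores below are invented for illustration; no real driving stack works like this. The point is that an option the sensors score as harmless can be the deadliest one.

```python
# Hypothetical minimum-harm chooser with incomplete perception.
# Harm scores are made-up numbers standing in for whatever the
# system can actually estimate in the split second before impact.

def choose_action(options):
    """Pick the option with the lowest ESTIMATED harm."""
    return min(options, key=lambda o: o["estimated_harm"])

# What the system believes it can see. The pedestrians waiting on
# the sidewalk are outside its model, so the sidewalk scores as
# harmless -- exactly the fatal choice from the anecdote above.
options = [
    {"name": "hit the bus",        "estimated_harm": 2},  # occupants might be hurt
    {"name": "oncoming traffic",   "estimated_harm": 9},  # head-on collision
    {"name": "veer onto sidewalk", "estimated_harm": 0},  # looks empty to the sensors
]

best = choose_action(options)
print(best["name"])  # prints "veer onto sidewalk"
```

The chooser isn't broken; its model of the world is. That gap between estimated harm and actual harm is the part no law, Asimov's or otherwise, can legislate away.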
If we can't even manage to do this right as humans, we can't hope to create AIs smart enough to do better.