AI may be advancing with giant strides, but robotics is still far, far away from producing anything remotely similar to a Terminator, even the simplest models.
Musk and many others are not saying that AI is already dangerous. They are worried about something called the singularity - the point at which AI can improve upon itself, creating a positive feedback loop where AI evolution outpaces our ability to follow, understand, or stop it.
The tipping point is not "when will the first computer achieve sentience?" - that question is ill defined, and a machine might never be sentient in a human sense, only in some different way. The tipping point is "when does machine evolution decouple from human understanding?". Since some systems already evolve on their own, and some already do things in ways we don't understand, that point seems near. And once AI reaches it, given the massive processing power available, it could pull away from us and stay permanently not just one but two, three, ten, a hundred steps ahead. If it then decides it doesn't need humans anymore, it won't play out like the movies. We won't even understand what happened. It will have watched all those movies and made sure to avoid every mistake those movie AIs made.
Once the technology exists, maybe you actually can legislate it away, but by then it won't matter anymore. The only point where you can stop this is before the runaway effect starts.