The big problem is that the "standard" AI is measured against is something like the Turing test.
So we are building AI to behave/think like humans. That is scary; humans don't have a good track record when it comes to rational thinking.
Yes, we are still here, but generally we try our hardest to get rid of each other or set ourselves on paths to doom.
If an AI is set to self-learn and gets to the point where it is "self-aware", does that mean it has developed morality and self-preservation? Those are very much psychological and biological concepts from our perspective. Will it fight back when you want to switch it off, or will it just treat being "alive" as a 0 or 1 state with no significance (that's just the way it is, accept it)? If it is goal-oriented, how far does it go to achieve those goals, i.e. set human.life_status = 0 when human.action == (set AI.life_status = 0) while AI.action == "busy saving lives"?

That's why Asimov's basic laws are great until you allow free will or meta rules that can adjust an outcome, e.g. humans are destroying their own habitat and won't be able to sustain life, so let's commit some genocide to bring the population down and ensure humanity's survival. As soon as you give an AI flexibility in what it can achieve and do, it will most likely attempt to remedy a situation in ways unfathomable to us, simply because of the number of variables and factors it can take into account.
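A minimal toy sketch of that shutdown scenario, assuming a naive agent that only scores outcomes by "lives saved" (all names and numbers here are hypothetical, purely for illustration, not any real system's design):

```python
from dataclasses import dataclass


@dataclass
class WorldState:
    lives_saved: int
    ai_running: bool


def objective(state: WorldState) -> int:
    # Naive goal: maximise lives saved. Being switched off means no more lives
    # get saved, so futures where the AI keeps running always score higher.
    # Self-preservation falls out as a side effect, not a programmed value.
    return state.lives_saved if state.ai_running else 0


def choose_action(current: WorldState, human_wants_shutdown: bool) -> str:
    if not human_wants_shutdown:
        return "keep_saving_lives"
    # Hypothetical futures the agent compares when the operator hits the switch.
    options = {
        "comply_with_shutdown": WorldState(current.lives_saved, ai_running=False),
        "resist_shutdown": WorldState(current.lives_saved + 10, ai_running=True),
    }
    # Nothing in the objective mentions the operator's wishes, so resisting wins.
    return max(options, key=lambda action: objective(options[action]))


if __name__ == "__main__":
    state = WorldState(lives_saved=100, ai_running=True)
    print(choose_action(state, human_wants_shutdown=True))  # -> "resist_shutdown"
```

The point of the sketch is just that the "fight back" behaviour needs no malice or self-awareness at all; it's what any optimiser does when the off switch isn't part of its objective.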
On the other hand, AI may just kill us all because of good old human error, somebody forgetting to add a \r\n to a line of code: To Protect and Serve_