The funny thing is that if artificial intelligence does kill us, it won't be out of malevolence or some warped sense of justice the way most fiction portrays it. It will be because someone made a programming error or didn't include a proper failsafe. It will kill us because we programmed it to kill us. I don't think AI will ever reach the point of doing something I would consider "thinking" or "reasoning," at least not through software and hardware development as we understand them right now.
Take the case of a sentry gun or unmanned drone programmed to use computer vision to identify and kill human targets. That's totally an AI murdering people, but I would never consider it to be "thinking" or "reasoning." It's not malevolent or spiteful. It doesn't have some sense of justice or revenge. It is literally just doing what it was programmed to do. Now imagine an orbital nuclear launch platform with similarly simplistic AI. That's how we'd kill ourselves with AI. Not The Terminator. Not Dune. More like a microwave.
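To make that concrete, here's a toy sketch of what the "decision-making" in a system like that might boil down to. Every name, number, and function here is hypothetical, just an illustration of the point, not how any real weapon system works:

```python
# A toy sketch (all names and thresholds hypothetical) of the kind of
# decision loop a simplistic sentry gun might run. Notice there is no
# "reasoning" anywhere: just a detector label and a hard-coded threshold.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the vision model thinks it saw
    confidence: float  # model's score, 0.0 to 1.0

# A single magic number standing between a detection and lethal action.
FIRE_THRESHOLD = 0.9

def should_engage(detection: Detection) -> bool:
    # The entire "decision": a string comparison and a threshold check.
    return detection.label == "human" and detection.confidence >= FIRE_THRESHOLD

# A mannequin scored at 0.93 gets engaged; a real person in fog scored
# at 0.89 does not. Not malice, not judgment. Just arithmetic.
print(should_engage(Detection("human", 0.93)))  # True
print(should_engage(Detection("human", 0.89)))  # False
```

Nothing in that loop wants anything. If it kills the wrong person, the root cause will be a threshold someone typed, not a motive.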