I think you are operating on a false dichotomy. Though I also wish we could more effectively mitigate the effects of morally vicious humans and human ideologies, that concern is neither mutually exclusive with all other concerns nor distinct from the particular concern of morally vicious AIs.
There are myriad and massive systems, techniques, etc. devoted to the task of human governance, however efficient or inefficient they may be. It's not that humans aren't trying on this front; it's just a difficult problem, because humans have this quality we call intelligence. So if you are concerned about the moral viciousness of humans, who have lots of evolutionarily built social instincts, you should be concerned about the future of AIs, because it will most likely be humans who engineer AIs' instincts, at least at first. The two problems are intricately interwoven in a reciprocal relationship.

Humans are devoting considerable energy to birthing AI. It is likely that strong AIs will eventually emerge (though who knows when), and it's wise to devote as much if not more energy to engineering the parameters from which AI will emerge. We need to engineer it with as much forethought, prudence, and, importantly, respect as we can muster. Moreover, there will be morally vicious humans who will attempt to use AIs and their precursors in morally vicious ways, so the rest of us should not bury our heads in small horizons.