Another comment asked for "strict, rigorous, in-depth testing"; I think there's a big problem with this. The first thing to say is that while you can, in principle, have strict, rigorous proofs that software is "correct" ("Beware of bugs in the above code; I have only proved it correct, not tried it." - Donald Knuth), you cannot have strict, rigorous tests, because it is impossible to test all possible software states.
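To put a rough number on "impossible": even a tiny pure function over two 64-bit integers has 2^128 distinct inputs. A back-of-the-envelope sketch (the billion-tests-per-second rate is an assumption, and a generous one):

```python
# How long would exhaustive testing of a two-argument 64-bit function take?
inputs = 2 ** 128  # every (a, b) pair of 64-bit integers

# Assume a (very optimistic) rate of one billion test cases per second.
seconds = inputs / 1e9
years = seconds / (60 * 60 * 24 * 365)

print(f"about {years:.2e} years")  # vastly longer than the age of the universe
```

And that is a single pure function; real software state spaces (heap, threads, I/O) are far worse.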
The AI is absolutely brilliant at finding paths through tests, mostly, and often, ones that make the software correct. It is also brilliant at finding paths through the tests that achieve the goal it thinks it's trying to achieve and which are wrong whilst seeming correct to humans.

I think code review for AI should be something different. You don't care about the individual lines of code, which are an artifact that can be rewritten. My guess is that you should think of the AI as a North Korean programmer working in your company because the bosses decided she's cheaper than your old friend they fired. Unfortunately, your wealth is in share options that are only going to vest if you and the company survive till next year. She's always trying to slip a new backdoor into the code or destroy the company, but you aren't allowed to admit that. So maybe, each time you code review, try to work out what new tests should be added to make sure that her backdoors and destructive scripts don't go through, and demand that those tests get written and verified.
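The test-gaming failure mode described above is easy to demonstrate. A contrived sketch (the function and test values are made up): an implementation that memorises exactly the cases the suite checks, passes every test, and is wrong almost everywhere else.

```python
# Hypothetical example: the test suite only ever checks three inputs,
# so an implementation can "pass" by overfitting those inputs.
def add(a, b):
    # Looks plausible in review, but it is a lookup table of the
    # known test cases, not addition.
    known = {(1, 2): 3, (2, 2): 4, (10, 5): 15}
    return known.get((a, b), 0)  # wrong for everything untested

# All the existing tests pass...
assert add(1, 2) == 3
assert add(2, 2) == 4
assert add(10, 5) == 15

# ...yet the function is broken for any input the suite never saw.
print(add(3, 3))  # prints 0, not 6
```

Real cases are subtler than a literal lookup table, but the shape is the same: the reviewer's job is to ask which inputs the suite never exercises and demand tests for them.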