> The LLM and the compiler and the formatter will get the low-level details right.

Maybe 90% of the time, if you are lucky. That still leaves roughly a 10% error rate, which is far too much.
> Your job is to make sure the structure is correct and maintainable, and that the test suites cover all the bases,

That depends on the definition of "bases". A passing test suite does not show your program is correct. And if the test suite is also AI-generated, you are back at the same problem: whether the tests themselves are correct.
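A passing suite only shows that the program agrees with the tests, nothing more. A minimal sketch of the failure mode, with invented names and a deliberately weak, AI-generated-looking suite:

```python
def total(items):
    # Bug: only the first two items are ever summed.
    return items[0] + items[1] if len(items) >= 2 else sum(items)

# A plausible-looking generated suite that passes anyway:
assert total([]) == 0
assert total([7]) == 7
assert total([2, 3]) == 5
# The case the suite misses: total([1, 1, 1]) returns 2, not 3.
```

The suite is green, the function is wrong, and nothing in "the tests pass" tells you which of the two is broken.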
> and then to scan the code for anomalies that make your antennas twitch,

Vibe error detection goes nicely with vibe programming. That said, experienced programmers do have a talent for spotting errors. But spotting an error here and there is far from a full code review. You can, of course, ask the LLM to do the review as well, and many of its proposals are good; Greg Kroah-Hartman estimates that about two thirds are good and the rest only marginally usable.
> then dig into those and start asking questions -- not of product managers and developers, usually, but of the LLM!

Nothing goes quite as "nicely" as discussing with an LLM: the longer you are at it, the more askew it goes.
My point is that 25k LOC a month (god forbid a week) is a lot. It may look like it works from the outside, but it is likely full of (hopefully only small) errors, especially once you decide you do not need to human-review all the LLM-generated code. On the other hand, if you count e.g. the lines of an XML file defining your UI (which you have drawn in some GUI designer) as valid LOC, then sure, 25k is no big deal. Not all LOCs are equal.
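To make "not all LOCs are equal" concrete, here is a made-up GUI-designer-style XML fragment; every element and attribute name is invented for illustration:

```xml
<!-- Hypothetical designer output: the names here are invented. -->
<window id="main" width="800" height="600">
  <button id="okButton"
          x="700" y="560"
          width="80" height="30"
          text="OK"
          enabled="true"
          visible="true"/>
</window>
```

Nine lines of "code" for one trivial decision. Counting such lines toward a 25k total says very little about the actual work done.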