It seems like an open question whether being repetitive and rule-based actually counts as a virtue for an AI use case or not.
'AI' is an easy sell for people who want to do some 'digital transformation' they can thought-leader about on LinkedIn without actually doing the ditch-digging involved in solving the problem conventionally: "Hey, just throw some unstructured inputs at the problem and the magic of Agentic will make the answer come out!" That's not really a good argument in favor of doing it that way, though. Dealing with such a cryptic, unpredictable, and expensive tool is at its most compelling when you have a problem that isn't readily amenable to conventional solutions; it looks a lot more like sheer laziness when you take a problem that basically just requires some form validation logic and a decision tree and throw an LLM at it because you can't be bothered to construct the decision tree.
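To make the contrast concrete, here is a minimal sketch of the kind of "desperately conventional" problem being described: a hypothetical refund-approval check done with plain validation plus an explicit decision tree. All field names and thresholds are invented for illustration; the point is that every branch is deterministic, auditable, and free per call.

```python
# Hypothetical example: a refund-approval check that sometimes gets pitched
# as an 'AI' project but is really just validation plus a decision tree.
# Field names and thresholds are made up for illustration.

def approve_refund(request: dict) -> str:
    # Form validation: reject malformed input outright, deterministically.
    required = {"amount", "days_since_purchase", "item_condition"}
    if not required <= request.keys():
        return "rejected: missing fields"
    if not isinstance(request["amount"], (int, float)) or request["amount"] <= 0:
        return "rejected: invalid amount"

    # Decision tree: a handful of explicit, auditable branches.
    if request["days_since_purchase"] > 30:
        return "denied: outside return window"
    if request["item_condition"] == "damaged_by_customer":
        return "denied: customer damage"
    if request["amount"] > 500:
        return "escalate: manual review"
    return "approved"
```

An LLM handed the same job would cost money per request and could return a different answer for the same input; the tree above can be unit-tested exhaustively.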
There are definitely problems, some of them even useful, that are absolutely not amenable to conventional approaches; those at least have the argument that unpredictable results may be better than no results, or than purely manual ones. But if you've got some desperately conventional business-logic case that someone is turning into an 'AI' project, either because they're a trend chaser or because they think programming is an obscurantist conspiracy by fiddly syntax nerds against the natural-language Idea Guys, that's not a good sign.