However, even if the factors that make minorities more risky are already accounted for, an AI may be biased against them because the training data contained a correlation between race and perceived risk.
There is no such thing as "perceived risk." There is either risk or there is not. If you perceive risk and the risk is real, it isn't merely "perceived"; it's just risk. If you perceive risk and it is not real, you are simply in error.
There is this broad SJW initiative to discount reality whenever reality conflicts with what SJWs want to be true. Reality laughs at things like this, because it is reality. If poor people have a higher risk of defaulting on a loan, that is simply fact. The algorithm isn't racist for determining that. The fact that a significant fraction of the poor is also a racial minority is irrelevant to the algorithm. Only overly sensitive SJW humans make that connection and, despite the reality of the situation, want the algorithm to ignore reality and proceed as if the risk didn't exist.
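The point that race is structurally irrelevant to an income-only model can be sketched in a few lines. This is a toy illustration with made-up numbers, not any real lender's scoring model: if race never enters the computation, two applicants with the same income get the same score by construction.

```python
def default_risk(income):
    """Hypothetical default-risk score computed from income alone.

    Toy calibration: risk falls linearly as income rises,
    clamped to the range [0.02, 0.95]. Not real actuarial data.
    """
    return max(0.02, min(0.95, 1.0 - income / 100_000))

# Two hypothetical applicants, identical incomes, different races.
applicant_a = {"income": 30_000, "race": "minority"}
applicant_b = {"income": 30_000, "race": "majority"}

# Race is never read by default_risk, so equal incomes
# necessarily yield equal scores.
assert default_risk(applicant_a["income"]) == default_risk(applicant_b["income"])
```

The caveat raised in the quoted passage is that real training data can smuggle race back in through correlated proxy features; this sketch only shows that a model whose inputs exclude race cannot condition on it directly.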
Then reality intrudes: the loans default, and the banks, which granted them out of terror of a civil-rights lawsuit if they didn't, go belly up. Those same SJWs then deny their actions had anything to do with the situation. And those of us who opposed this idiotic reality-denial end up paying the tab. And the SJWs never learn, so they do it all over again. And again. And again.