These kinds of poor outcomes were described thoroughly in Cathy O'Neil's book "Weapons of Math Destruction." She cites examples in bail and parole recommendation algorithms, HR screening tools, insurance pricing, and more. In her view, a
WMD is a computer system with some or most of these characteristics:
* it makes serious decisions affecting people other than the person using the tool,
* it uses proxy measurements (zip code, socioeconomic status) for the thing it's actually trying to quantify (e.g., risk of recidivism),
* its inner workings are opaque and/or built on data of unknown provenance,
* it is not, or cannot be, corrected in light of new data or mistakes,
* its decisions are difficult or impossible to contest,
* it operates with little to no regulation.
The book was published in 2017, well before LLMs and generative AI really hit the scene. But the dangers were already apparent even then, and f*&k-all has been done to mitigate them.