Except that people don't review the AI-generated text they're sending out. Lawyers have been sanctioned for submitting hallucinated precedents to judges. The US health department was caught releasing a report that cited papers that don't exist. Journalists have published AI-generated "top 10 books" articles featuring books the cited authors never wrote, or that don't exist at all.

My wife is a psychologist. Her industry is switching to a technology that records a session's audio, runs speech-to-text on it, uses an LLM to summarize the transcript into notes, and then deletes the recording for patient-confidentiality reasons. The psychologist is responsible for reviewing the summary for accuracy, but we all know many won't, out of laziness or naivety. That means session summaries will be saved with inaccurate information, with no original recording left to check against. LLMs are thus rewriting history. This is very dangerous, and it will take years before regulators react to all this nonsense.