I expect several consequences of this, including:
1. Model collapse. Training LLMs on the output of other LLMs has been shown to lower the resulting model's quality, and the degradation compounds with each iteration. So, the Internet has become less valuable as a source of LLM training data, and this trend will continue, making it harder to train new models or improve existing ones.
2. Increased demand for guaranteed human-generated content. This comes both from competition between LLM training businesses, which need original sources to use as training data, AND from humans who want or need something that is not hallucination-polluted slop.
3. Increased incidence of humans submitting LLM-generated slop AS human-generated content. We have already seen this happening in every place you might expect, often with comical effect when people are caught red-handed lying about it.
4. The bursting of the LLM bubble. Recently, experts in the field have said that current training methods have already hit "peak AI" even with good data sources. The landscape continues to change rapidly, so I don't know if that is true, but it is at least plausible given what is known. An overall decrease in the availability of high-quality training data will only make this worse. The ensuing stagnation in LLM improvement will flatten out the demand curve for LLM services in general.
5. Profit! Especially for everyone who managed to eliminate a lot of human employees thanks to LLMs.
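The mechanism behind point 1 can be illustrated with a toy simulation. This is not how real LLM training works, just a minimal statistical analogy: each "generation" fits a simple model (here, a Gaussian's mean and standard deviation) to data generated by the previous generation's model. Because each fit is lossy, the distribution degenerates over successive generations — the statistical flavor of model collapse. All names and parameters below are made up for the sketch.

```python
import random
import statistics

def next_generation(data, n_samples):
    # "Train" a toy model on the data: fit a Gaussian by estimating
    # its mean and standard deviation (a lossy summary of the data),
    # then "publish" new content by sampling from that fitted model.
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    return [random.gauss(mu, sigma) for _ in range(n_samples)]

random.seed(0)  # fixed seed so the run is reproducible

# Generation 0: "human-generated" data from a standard normal.
data = [random.gauss(0.0, 1.0) for _ in range(20)]
stds = [statistics.pstdev(data)]

# Each generation trains only on the previous generation's output.
for generation in range(300):
    data = next_generation(data, 20)
    stds.append(statistics.pstdev(data))

# The estimated spread collapses toward zero: diversity in the
# original data is progressively lost with each iteration.
print(f"generation   0 std: {stds[0]:.4f}")
print(f"generation 300 std: {stds[-1]:.6f}")
```

Running this, the standard deviation shrinks by orders of magnitude over the generations: each model reproduces only what its predecessor's fit retained, and the tails of the original distribution vanish first, which is the same qualitative failure reported for LLMs trained on LLM output.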