Because the legal landscape around AI training-data collection carries significant risks, we decided to reject AI technology outright. AI vendors should understand that their market entry has been easier because professional authors are abandoning the field over copyright concerns. These legal rejections are dangerous: while they reduce competition in the field, they also signal that serious legal risks are involved. In effect, professional authors are voting with their feet and leaving the field to would-be criminals, or to players who can absorb $300,000 damage awards.
We have developed an easy test for detecting copyright infringement in AI databases:
1) remove all content that was not properly licensed from the original owner of the material
2) if your product still works, you're fine; if it doesn't, you're infringing.
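Step 1 of the test could be sketched as a simple dataset filter run before training. This is a minimal illustration only; the record format and the `license` field are hypothetical assumptions, not a description of any real AI pipeline:

```python
# Sketch of step 1: drop all content not properly licensed from its owner.
# The record structure and "license" field are assumed for illustration.

def filter_licensed(records):
    """Keep only records whose content was properly licensed."""
    return [r for r in records if r.get("license") == "licensed"]

corpus = [
    {"id": 1, "text": "scraped novel excerpt", "license": "unlicensed"},
    {"id": 2, "text": "public-domain essay", "license": "licensed"},
    {"id": 3, "text": "pirated article", "license": "unlicensed"},
]

clean_corpus = filter_licensed(corpus)
print(len(clean_corpus))  # how much training data survives step 1
```

Step 2 of the test would then be to retrain the system on `clean_corpus` alone and see whether the product still works.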
The training phase of essentially every AI system fails this easy test.
A consequence of this test is that content creation must happen before AI system creation. All investment in the content is therefore embedded in the resulting AI system, and there is a clear causal relation between content creation and AI system creation.
This time-based causality is a good basis for seeking damage awards in court.