If an AI is trained by copying art from a variety of artists, wouldn't that mean, by definition, that anything the AI creates is derived from the originals?
And if the AI model can't show what it took from each of the training images to produce its final image, we would have to assume that any of the artists' works could have influenced the AI, with some element copied in some manner, further reinforcing the idea that the AI's work is derived from the work of the original artists.
Finally, if the AI had not been trained on anything at all, could it have created anything close to what it did with the artists' works? The artists' works were necessary for the AI to create its art, which again points toward it being a derivative work.
Consider Pharrell Williams being sued, and losing, over a song that Marvin Gaye's estate claimed was a rip-off of one of Gaye's songs. Pharrell acknowledged he was influenced by it.
This is an oversimplification of everything, but at its heart, given copyright law around derivative works, this could be a problem for AIs trained by copying lots of data. It's not the same as a human seeing art while out and about: here someone, or some entity, deliberately took copies of images and fed them to an algorithm. That's very different from a human seeing something and being influenced by it.
At first I thought fair use might be a good defense for the AIs. While the created work is novel and arguably transformative, the question remains: how much of the artists' works was used to train the AI? Complete images? If so, that may be another issue, and it may make a fair use argument harder to win.
It will be interesting to watch this play out in court and to see what kind of precedent ultimately gets set.