> And yet the training photos have been identified from AI output, because the AI incorporated bit-by-bit identical chunks.
I'd love to see the evidence behind that. How big is this chunk? There's a threshold below which something obviously isn't an "identical chunk" by any reasonable analysis, even though technically it is one. If it's a single pixel, it can't be a copied chunk by definition; or rather, the claim loses meaning, since a single pixel could plausibly be "taken" from almost any image. In a high-resolution image, a 128x128-pixel chunk might well qualify, though even then, consider the mere existence of a smooth gradient. And it gets fuzzier from there. Look at image compression algorithms: lossy compression changes the bits, probably quite a lot. Is the result now "not a copy"? If you reject that sidestep, it seems to cut the other way too. Drawing a pair of eyes isn't copying or a lack of creativity, yet there's a finite (if large) number of photos with eyes in them, and any two pairs of eyes in an image look a lot alike; that similarity is what makes them eyes. How different does the output have to be before it's considered not copying?
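To make the "what counts as an identical chunk" question concrete, here's a minimal sketch (assuming Python with NumPy and Pillow installed; `training_photo.png` is a hypothetical placeholder filename). It checks for aligned 128x128 bit-for-bit block matches, then shows how a single lossy JPEG round-trip destroys byte-identity even though the image looks unchanged to a human:

```python
# Sketch: bit-for-bit chunk matching vs. one lossy compression round-trip.
# Assumes NumPy and Pillow; the input filename is a placeholder.
import io
import numpy as np
from PIL import Image

CHUNK = 128  # the candidate threshold from the discussion above

def exact_chunk_match(a: np.ndarray, b: np.ndarray, size: int = CHUNK) -> bool:
    """Return True if any size x size block of `a` is byte-identical to the
    block at the same grid position in `b` (aligned, non-overlapping blocks:
    the cheapest variant of this comparison)."""
    h, w = min(a.shape[0], b.shape[0]), min(a.shape[1], b.shape[1])
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            if np.array_equal(a[y:y+size, x:x+size], b[y:y+size, x:x+size]):
                return True
    return False

original = np.asarray(Image.open("training_photo.png").convert("RGB"))

# Lossy round-trip: JPEG quantization perturbs pixel values slightly,
# so byte-identical chunks almost never survive recompression.
buf = io.BytesIO()
Image.fromarray(original).save(buf, format="JPEG", quality=90)
buf.seek(0)
recompressed = np.asarray(Image.open(buf).convert("RGB"))

print(exact_chunk_match(original, original))      # True: trivially "a copy"
print(exact_chunk_match(original, recompressed))  # almost always False
```

Which is the point: a real duplicate detector would have to fall back on perceptual hashing or feature matching rather than byte equality, and that lands you right back at the "how different before it's not copying" question.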
This AI isn't producing plaster casts; those are exact copies of an entire sculpture. To someone on the outside, this looks a lot like giving a brief to an artist and getting four or six options to pick from or refine further. The artist doesn't have unlimited creativity either; they have to produce something that fits the brief. But the artist surely isn't just making plaster casts.
> Yes, the human painter is influenced by everything they've seen before, but the vast majority of that isn't even artwork.
OK, so your argument is that a bigger training set is needed.
> The painter has self-awareness, tastes, and preferences independent of ANY input.
All of that is basically random transforms adjusted by external inputs (life experience), which is more or less what matters for creating to a spec, and that's what people are using these AIs for. It just seems to be a difference in complexity. This looks like an odd "god of the gaps" argument in reverse: because we can fully explain what the AI is doing, it's considered less impressive. I think that's wrong. I'd bet we'll eventually explain what the human brain is doing in much the same way we explain these AIs, and that won't fundamentally diminish the brain's creativity just because we can explain it.