So you could attribute a black Viking or a black George Washington to error. Fine. I'd say it's unlikely, though, given how much training data is out there (i.e., images) that would steer the model away from that output. A rather unlikely error at best.
What, then, do you make of the program refusing to depict, say, a happy family with white skin pigmentation, while being fine with other skin pigmentations? Is it conspiratorial to say that at some point a human actively made that decision?
Top Ten Things Overheard At The ANSI C Draft Committee Meetings: (5) All right, who's the wiseguy who stuck this trigraph stuff in here?