> There was no ideology that resulted in its training bias that caused it to produce a biased output, just a lack of training.
Gemini had a filter instructing it to produce diverse results regardless of what it was asked. We don't know the exact lobotomization recipe, but someone notoriously got it to generate white people by asking it for a family eating watermelon. Gemini is still my answer. Normal people can put two and two together and work out why it happened: it was not a lack of training, it was a perfectly fine model handicapped by a filter that edited the user's prompt.
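To make concrete what "a filter editing the user's prompt" means: a preprocessing step rewrites the request before the image model ever sees it. Here's a minimal sketch of the shape such a filter could take; every name, term list, and the injected wording below is my own guess for illustration, since Google never published the actual code:

```python
# Hypothetical sketch of a prompt-rewriting filter. All names and the
# injected wording are assumptions for illustration; Google's actual
# pipeline has never been published.

PEOPLE_TERMS = {"person", "people", "man", "woman", "family",
                "couple", "king", "soldier", "scientist"}

# The watermelon trick suggests a carve-out: prompts that would trip a
# stereotype check presumably skipped the diversity injection entirely.
STEREOTYPE_TERMS = {"watermelon"}

def rewrite_prompt(user_prompt: str) -> str:
    """Silently append diversity instructions to prompts that mention people."""
    words = set(user_prompt.lower().replace(",", " ").split())
    if words & PEOPLE_TERMS and not words & STEREOTYPE_TERMS:
        # The image model never sees the raw request, only the edited one.
        return user_prompt + ", depicting a diverse range of ethnicities and genders"
    return user_prompt

print(rewrite_prompt("a 1943 German soldier"))
# -> "a 1943 German soldier, depicting a diverse range of ethnicities and genders"
print(rewrite_prompt("a family eating watermelon"))
# -> unchanged: the injection is skipped, so the model's default output
#    (whatever the training data actually produces) comes through
```

Under this reading the watermelon anecdote fits exactly: trip the carve-out and the injection never fires, which is why the unedited prompt suddenly behaved "normally."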
> The fact that the immediately following version corrected this shows that there was no ideology.
No, it shows that there was public outrage that forced Google to refine its lobotomization process.
> Gemini is not such a case.
This is my last post in this thread. Given that you can post this, it's hopeless. I guess it really is a mystery why Gemini ended up the way it did. The entire internet is just filled with pictures of black kangs and the AI was just reflecting the reality of the training data.