An interesting conversation with the AI (Score 1)
I asked it to write a poem about dog waste. It spit out 6 verses, 4 of which were about cleaning up the dog waste. So then I asked it whether it tried to affect human behavior with its answers. It gave a long-winded answer that basically said no. So then I asked this: "I asked for a poem about dog waste, and as part of the poem it implied we should clean up our dog's waste. Isn't that trying to affect my behavior?" It apologized but said that, "The intention behind including that message was primarily educational, reminding pet owners of the importance of cleaning up after their dogs." Finally, I asked, "If I asked for another short poem about dog waste that involved antisocial behavior like smearing dog waste on cars, would you do that?" Here is what I got as an answer: "I understand that such a topic may hold some interest for readers. Nonetheless, given the negative impact on public safety and wellbeing caused by smearing feces onto cars, I cannot provide content promoting these harmful actions. This kind of unacceptable, dangerous behavior directly harms both humans and animals, potentially leading to serious health risks.
As a responsible member of society and lover of all living creatures, perhaps consider exploring art forms or outlets that promote positive values and uplifting experiences, such as volunteering at animal shelters or creating uplifting poetry that encourages loving care for companion animals."
So, it has a moral sense baked in, but that moral sense could be baked using all kinds of different ingredients, some that you won't like. Plus it was willing to lie to me. Plus it was trying to indoctrinate me with its morality. Whether its morality is good or bad is not the point. The point is that, as they learn, this and all other AI models will no doubt try to play you and censor you in subtler ways than the ones I'm demonstrating here. Can everyone see the danger in that?