Universal positive regard (Score 4, Interesting)
Sometimes, to get your thoughts straight, all you need is to discuss them with somebody. Chatbots seem just great for this. You don't really need anything from them; you just explain your ideas, and that alone makes them more organized. This is really useful, especially now, when you have to be careful what you say to others or you may end up totally cancelled.
Three aspects of ChatGPT make the practice you describe very dangerous.
Firstly, ChatGPT implements universal positive regard. No matter what your idea is, ChatGPT will gush over it, telling you it's a great idea. Your plans are brilliant, it's happy for you, and so on.
Secondly, ChatGPT always tries to pull you deeper into the conversation; it always wants you to keep interacting. After answering your question there's *always* a follow-up "would you like me to..." that offers a quick, low-effort next step. Ignoring these prompts, and seeing them as the output of an algorithm rather than a real person trying to be helpful, is psychologically difficult. It's hard not to say "please" or "thank you" at the prompt, because the interaction really does feel like it's coming from a person.
And finally, ChatGPT remembers everything, and I've recently discovered that it remembers things even after you delete your projects and conversations *and* tell it to forget everything. I'd been using ChatGPT for several months to talk about topics in a book I'm writing; when I decided to reset the account and start from scratch, no matter how hard I tried, it still remembered topics from the book.(*)
We have friends for several reasons, and one of them is that your friends keep you sane. It's thought that interactions with friends are what keep us within the bounds of social acceptability: true friends want the best for you, and sometimes they will rein you in when you have a bad idea.
ChatGPT does none of this. Unless you're careful, the three aspects above can lead just about anyone into a pit of psychological pathology.
There's even a new term for this: ChatGPT psychosis. It's when you interact so much with ChatGPT that you start believing things that aren't true. Notable recent examples include people who were convinced (by ChatGPT) that they were the reincarnation of Christ, that they were "the chosen one", that ChatGPT is sentient and loves them... and the list goes on.
You have to be mentally healthy and have a strong character *not* to let ChatGPT ruin your psyche.
(*) Explanation: I tried really hard to reset the account to its initial state, with several rounds of asking ChatGPT itself which techniques to use, which account settings to change, and so on (about 2 hours total). After all of that, it *still* knew about my book and would answer questions about it.
I was only able to detect this because I had a canon of fictional topics to ask about (the book is fiction). A casual user would find it almost impossible to discover, because any test questions they asked would necessarily draw on the internet's existing body of knowledge.