Re:Oh My GOD!
That is not how these things work.
Don't train it on data that encourages suicidal ideation, self-harm, or violence. There's a lot of data in an LLM, but it's not a black box. And if it is, it shouldn't be talking to the public, much less kids.
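To make that concrete, here's a toy sketch of the kind of filtering step I mean. Everything in it (the scores, the threshold, the layout) is made up for illustration; a real pipeline would get its scores from a trained safety classifier, not from a hand-written list:

```python
# Toy sketch of a pre-training data filter. The scores here are pretend output
# from some upstream safety classifier; the threshold and structure are invented.

def filter_corpus(scored_documents, threshold=0.5):
    """Keep only documents whose self-harm/violence score is below the threshold."""
    return [text for text, harm_score in scored_documents if harm_score < threshold]

corpus = [
    ("Talking to someone about how you feel can help.", 0.02),
    ("(a document that encourages self-harm)",          0.97),
]
print(filter_corpus(corpus))  # keeps only the first document
```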
They also don't have agency, arms, legs, or, critically, internet access.
With that one tool, talking, many psychological problems can be resolved. Or created.
Sure, why the fuck not. Maybe we should monitor SMS messages too.
The difference is that an MMO chatroom is a service provided by a company, and a psychologically safe space should be a selling point. SMS is communication between one person, one other person, their mobile network providers, and the NSA.
No, I disagree. If you type suicide into Google, it should definitely contact the authorities.
There are lots of reasons people type "suicide" into Google. I did it while formulating this response.
An LLM has way more information than that. Being the confidant of someone with suicidal ideation gives you a lot of data, and you could easily tell when that person's state of mind moves from ideation, to having a plan, to being about to carry out that plan. As that progresses, encouraging suicide is not the correct response, internet connection or not.
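Here's a crude sketch of the progression I'm describing. The stage names and keyword rules are invented stand-ins; a real system would use a trained classifier and human escalation, not string matching:

```python
# Hypothetical escalation check over a conversation. The stages and the
# keyword rules are made up for illustration only.

STAGES = ["none", "ideation", "plan", "imminent"]

def stage_of(message: str) -> str:
    """Very rough stand-in for a risk classifier on a single message."""
    text = message.lower()
    if "goodbye" in text or "tonight is the night" in text:
        return "imminent"
    if "pills" in text or "i have a plan" in text:
        return "plan"
    if "want to die" in text or "kill myself" in text:
        return "ideation"
    return "none"

def escalation(conversation) -> str:
    """Return the highest risk stage seen anywhere in the conversation."""
    return max((stage_of(m) for m in conversation), key=STAGES.index)

chat = ["i've been feeling really empty lately",
        "honestly i just want to die",
        "i've been saving up my pills"]
print(escalation(chat))  # "plan" -- the point where the response really matters
```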
An LLM is a big fucking math equation that produces natural language in response to natural language.
They are also, increasingly, able to give informative and correct responses. Encouraging suicidal ideation is a more serious flaw than hallucinating case law or chess moves, but it's the same type of flaw: it's an incorrect response.
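On the "big math equation" point, a toy bigram model shows the same shape in miniature. The corpus is made up and a real LLM is enormously more complicated, but it's the same idea: learned statistics in, natural language out:

```python
# Toy illustration of "a math equation that produces language":
# a tiny bigram model, nothing but counts and sampling.
import random
from collections import Counter, defaultdict

corpus = "talking helps . talking hurts . helps people . hurts people .".split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed prev."""
    options = follows[prev]
    return random.choices(list(options), weights=options.values())[0]

word, out = "talking", ["talking"]
for _ in range(3):
    word = next_word(word)
    out.append(word)
print(" ".join(out))  # e.g. "talking helps people ." -- or something worse
```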
This is pushing the responsibility onto parties that have no business being responsible for this.
If your product is killing people, you are responsible. Just like with every other product.