Re:Tragedy is not a sufficient reason for liability (Score 3, Insightful)
As someone whose parent took his own life because an undiagnosed medical condition caused him to waste away and go insane, I agree that what happened to that kid is terrible. But I've worked with and known a few people who took their own lives, back in the 1980s, and my father went that way in the late 1990s. As much as we hate to admit it, people sometimes fall apart, whether for physical reasons or from depression, get stuck in a rut they can't escape, and decide to end their lives.

What's the best defense for AIs against people abusing them to learn efficient ways to do themselves in? For adults, I don't know if that's even achievable. The best thing AI companies can do to prevent this from happening again is probably to add age requirements: screen for minors asking such questions and simply refuse to give them the information until they turn 18 (in the U.S.). After 18? If an adult is determined to commit suicide, they're going to do it unless someone intervenes to stop them.

In my father's case, he tried once and got the three-day 5150 hold. A few months after he was released, he slipped out into his backyard late one night, found some heavy-gauge extension cords and a high tree branch, and you can guess the rest. Should we have been able to sue the psychiatrists and hospital that had him on antidepressants and treated him? It was determined later that he had undiagnosed celiac disease, which caused his depression; it went undiagnosed because his blood work never tested positive for antibodies to gliadin proteins.

Should OpenAI pay for that kid's suicide? I don't know. That's going to be an interesting case to follow. But, IMO, age-limit the answers to questions about how to commit self-harm. And maybe also limit the answers to questions teens ask about committing mass harm with firearms or other weapons? Seems like a good idea.