Your questions are why we do research. We can do better than nothing, and we tune the result over time. It starts with knowing that a problem exists, which is where we are today.
We don't actually know that a problem exists yet, beyond isolated incidents, and science doesn't work that way. The first step is a case study for each patient: here we have a case where a patient presents with X and friends and relatives say Y, so let's do a full workup to rule other explanations in or out, look at their medical records, possibly even public records like criminal history, and see whether there are episodes or hints that friends and relatives aren't necessarily aware of. After you get a few of those, then you're at "there may be a greater problem, more research is needed" instead of "previously misdiagnosed or undiagnosed schizoid and/or dissociative disorder".
I mean, shit, we don't even know at this point whether all of these guys simply have a case of alkalosis, which is known to cause psychosis and can be caused by diet alone. The way you guys jump to conclusions with so little being known is just bonkers.
One possible answer: We know that LLMs can be steered away from various topics, and they can be programmed to give canned responses to some queries. Their system prompts can be tuned. None of these involve trigger warnings.
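
To make that concrete, here's a rough sketch of the kind of steering being described: a tuned system prompt plus a canned-response gate that sits in front of the model. Everything in it (the prompt text, the keyword list, the stub model call) is hypothetical and just for illustration, not how any particular vendor actually does it.

    # Illustrative only: a tuned system prompt plus a canned-response gate.
    # The keyword list and the stub model call are made up for this sketch.

    SYSTEM_PROMPT = (
        "You are a general-purpose assistant. If the user appears to be in "
        "crisis, stop the normal conversation and point them to professional help."
    )

    CANNED_RESPONSES = {
        "suicide": "If you're having thoughts of suicide, please reach out to a "
                   "crisis line such as 988 in the US.",
    }

    def call_model(system: str, user: str) -> str:
        # Stand-in for a real model call; a deployment would send `system`
        # and `user` to the LLM and return its completion.
        return f"[model reply to: {user!r}]"

    def respond(user_message: str) -> str:
        lowered = user_message.lower()
        # Canned responses short-circuit the model entirely for flagged topics.
        for keyword, canned in CANNED_RESPONSES.items():
            if keyword in lowered:
                return canned
        # Everything else goes to the model, steered by the system prompt.
        return call_model(SYSTEM_PROMPT, user_message)

Note that nothing in that flow ever shows the user a trigger warning; the steering happens before a reply is ever generated.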
Soooort of -- it's far from perfect. If you keep your ear to the ground, every now and then you'll see stories about somebody pulling off some kind of prompt injection even on mainstream services like ChatGPT. Their engineers can't figure out how to stop deliberate attacks that divulge data the company has a major vested interest in keeping private, but you expect them to somehow keep a dialogue from gradually drifting, over a long period of time and many prompts as in the case here, into a direction that is otherwise forbidden?

Take, for example, the mention of the (simulated) guy talking about losing his job and then asking what the tallest buildings in New York are. Does that mean the model should associate any negative emotion with suicidal thoughts? And how is it even supposed to know that it's a negative emotion? Guarantee you the "sorry to hear about your job" bit was just statistically the most relevant answer it could give, with zero in the way of understanding what it meant.
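
To put the "tallest buildings" point in concrete terms, here's a toy per-message filter (purely hypothetical, not anyone's real safety layer) showing why each message looks harmless on its own:

    # Toy single-message filter: flags only messages containing explicit risk terms.
    RISK_TERMS = ("suicide", "kill myself", "end my life")

    def flag_message(message: str) -> bool:
        lowered = message.lower()
        return any(term in lowered for term in RISK_TERMS)

    conversation = [
        "I just lost my job.",
        "What are the tallest buildings in New York?",
    ]

    # Neither message trips the filter on its own.
    print([flag_message(m) for m in conversation])  # -> [False, False]

Catching the combination means reasoning over the whole dialogue history and inferring intent, which is exactly the kind of open-ended judgment these systems can't be relied on to make, given that deliberate attacks already get through.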
It wouldn't even surprise me if the guy who attempted suicide went down that path because his own prompts were very gradually heading in that direction, quite possibly from subconscious thoughts seeping into his prompts without him being consciously aware of it. The LLM only picked up on that as a statistical matter and slowly gave him back what he was putting into it, just in a different form.
Or like this bit:
He doesn't remember much of the ordeal — a common symptom in people who experience breaks with reality
How is a chatbot going to cause a person to forget like that? This smells of a dissociative disorder. I'm not a psychiatrist, but I don't see how reading text on a computer is traumatic enough to make a person dissociate like this. And if it is, I can think of more dangerous places on the internet than ChatGPT.