Artificial Intelligence Can Now Predict Suicide With Remarkable Accuracy (qz.com) 161
An anonymous reader writes: Colin Walsh, data scientist at Vanderbilt University Medical Center, and his colleagues have created machine-learning algorithms that predict, with unnerving accuracy, the likelihood that a patient will attempt suicide. In trials, results have been 80-90% accurate when predicting whether someone will attempt suicide within the next two years, and 92% accurate in predicting whether someone will attempt suicide within the next week. The prediction is based on data that's widely available from all hospital admissions, including age, gender, zip codes, medications, and prior diagnoses. Walsh and his team gathered data on 5,167 patients from Vanderbilt University Medical Center who had been admitted with signs of self-harm or suicidal ideation. They read each of these cases to identify the 3,250 instances of suicide attempts. This set of more than 5,000 cases was used to train the machine to distinguish those at risk of attempted suicide from those who committed self-harm but showed no evidence of suicidal intent.
Have things changed in recent years... (Score:2)
I never did find ELIZA to be that effective as a program.
https://en.wikipedia.org/wiki/ELIZA [wikipedia.org]
Re:Have things changed in recent years... (Score:5, Funny)
Why is it you say you don't find ELIZA to be that effective as a program?
Re: (Score:2)
Why is it you say you don't find ELIZA to be that effective as a program?
I like it, I like it.
Re:Have things changed in recent years... (Score:4, Funny)
Why is it you say you don't find ELIZA to be that effective as a program?
All those questions are enough to drive someone to commit suicide. Wait a minute...
An Algorithm.... (Score:4, Insightful)
Re: (Score:3)
Re: (Score:3)
Artificial Intelligence uses algorithms.
Natural Intelligence uses algorithms too ...
Re:An Algorithm.... (Score:4)
Re: (Score:3)
Unless you don't know what you're doing, then you're going to try "heuristically"
Heuristics are algorithms.
Re: (Score:3, Insightful)
Re:An Algorithm.... (Score:4)
Or can you write a formal, terminating, deterministic sequence of elementary steps for reliably generating "Eureka!" moments in humans?
There is no requirement that algorithms be formal. Or terminating. Or deterministic. Or a sequence. Or consist of elementary steps.
Exempli gratia: ANNs (Artificial Neural Nets).
Re: (Score:2)
Re: (Score:2)
Your argument that BNNs don't use algorithms can be equally applied to ANNs.
If you use a vague definition of algorithm, then it can apply to both.
If you use a strict definition, it will apply to neither.
Re: (Score:2)
Re:An Algorithm.... (Score:5, Funny)
Drugs. Lots of drugs.
It may not be the "Eureka" moment you are expecting but from my perspective I discovered the meaning of existence.
Re: (Score:3)
Or can you write a formal, terminating, deterministic sequence of elementary steps for reliably generating "Eureka!" moments in humans?
People can't reliably generate Eureka moments, so it would be impossible to put that in an algorithm.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Heuristics are algorithms.
Not according to my profs in school. A key part of the definition of an algorithm was that it was guaranteed to terminate. It may take a long time, but it was guaranteed to return an answer someday. A heuristic doesn't have a guaranteed stopping condition, just a time limit the caller is willing to wait, after which it returns the best solution found so far.
I believe this to be the typical definition of algorithm, not just a specialization for computer science. Note that the Merriam-Webster definition [merriam-webster.com] includes a particularly key
Re:An Algorithm.... (Score:4, Insightful)
Re: (Score:2)
Re: (Score:2)
Since you admit that you don't know how it works, how would you tell the difference between the "real thing" and a suitably advanced "ersatz"?
Re: (Score:2)
Re: (Score:2)
I -- and everyone else -- don't know how HUMAN cognition/consciousness/self-awareness/actual THOUGHT/creativity works
If you don't know how it works, then you can't claim that someone didn't capture the essential elements in an algorithm. The only meaningful thing you can do is look at the output, and the output is pretty good.
We know how this fake AI works: Poorly, by comparison.
It seems to work better than a human psychologist.
Re: (Score:2)
Until you can show me a so-called AI that is at least everything that defines us as human beings, then all you've got is a piss-poor imitation that doesn't deserve to be called 'artificial intelligence'. 'Machine learning' and 'algorithms' aren't even as smart as a dog-brain and don't qualify.
Re: (Score:2)
You are absolutely correct about the high school thing, but I think you need to learn more about algorithms and realize how broad of a term it is. It's math on paper, it's used to solve Rubik's Cubes for example, and is used in computing. From all the responses above, I doubt anyone really bothered to do any research before commenting and just wanted to be "on screen." So, I gave a few links below to hopefully lessen the burden.
https://www.wired.com/insights/2014/09/artificial-intelligence-algorithms-2/
http
Re: (Score:2)
Re:An Algorithm.... (Score:4, Insightful)
This story is about machine learning. Whether you consider machine learning to be "artificial intelligence" probably says more about your definition of "artificial intelligence" than it does about machine learning.
Machine learning definitely replaces human judgment at certain tasks -- in this case classifying a thing by its attributes -- however it does it in ways that an unaided human brain cannot duplicate. For example it might examine the goodness of fit of a large number of alternative (although structurally similar) algorithms against a vast body of training data.
Many years ago, when I was a college student, AI enthusiasts used to say things like, "The best way to understand the human mind is to duplicate its functions." I believe that after three decades that has proven to be true, but not in the way people thought it would be true. It turns out the human way of doing things is just one possible way.
I think that's a pretty significant discovery. But is it "AI"? It's certainly not what people are expecting. On the plus side, methods like classification and regression trees produce algorithms that can be examined and critiqued analytically.
Re: (Score:2)
Your carefully crafted expert systems need knowledge.
however it does it in ways that an unaided human brain cannot duplicate.
You also cannot duplicate my human method without aid (and pretty sure not even WITH aid)
You are adding requirements that don't exist, and not even doing it honestly.
Re: (Score:2)
Expert systems work in a completely different way than machine learning approaches. Expert systems do indeed require the analysis of human knowledge as a starting point. Machine learning approaches do not; they just need data.
You also cannot duplicate my human method without aid (and pretty sure not even WITH aid)
My point is that duplicating the way you think isn't really necessary. You can in many cases be replaced by something that works in a completely different way.
Re: (Score:2)
Expert systems work in a completely different way than machine learning approaches.
Proving that you don't know what you are talking about.
Converting your shit into a car analogy: "Bicycles work in a completely different way to tractor trailers"
You are just proving that you don't know anything about at least one of the two things you are trying to talk about. Didn't you know bikes are ridden? Didn't you know tractor trailers haul cargo? You think the difference is how they 'work'? Really?
Re: (Score:2)
Converting your shit into a car analogy: "Bicycles work in a completely different way to tractor trailers"
Exactly. I don't see why you think that's ridiculous. Bikes and tractor trailers have some broad similarities, but they're built to accomplish different things so analogies between them aren't particularly useful.
Re: (Score:2)
There is AI and there is AI. This program is almost certainly AI in some sense of the term.
On the one hand we have what people sometimes call general intelligence or "true AI". That means capable of independent and original thought, and possibly passing the Turing Test someday. (I'm not convinced that even a true AI will pass the Turing Test because its life experiences will be so different from those of a human, or at least won't pass until it becomes enough smarter than humans to be able to fake out the t
Re: (Score:2)
Re: (Score:2)
Skynet (Score:1)
Perhaps this study is just a cover, and SkyNet is actually developing a subtler approach to offing humanity?
Simple solution (Score:5, Insightful)
Give people a reason to not kill themselves and you'll see rates drop.
Re: Simple solution (Score:2)
How about "a chance to prove the AI wrong."
Re: (Score:2)
Dude, this.
This, so fucking much.
Re: (Score:2, Insightful)
Technology is wonderful but it has a dark side for the society it brings so much convenience: It requires conformity. As individuals put their lives online, those who disagree with the group-think and propaganda are easier to detect and punish; essentially criminalizing all deviation from normality. This is the very reason we don't want people with guns, often known as 'the government', watching everything we do.
Obedience to social conventions is required everywhere: At work, in town, in other perso
Re: (Score:2)
You may rest assured that I know exactly how depression works.
Re: (Score:2)
Given that I know exactly how depression works in a certain individual, WTF did you mean?
False positive rate (Score:1)
Does the two-year 80-90% accuracy also translate to a false positive rate of 10-20%?
If yes: What do you do with the millions of false positives? Since the overall suicide rate is small, even a 10% false positive rate means the vast majority of people flagged would be false positives.
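The parent's worry is easy to make concrete with back-of-envelope numbers. Everything below is assumed for illustration (the article gives no base rate): a screened population of one million, a 0.5% two-year attempt rate, 90% sensitivity, and a 10% false positive rate.

```python
# Back-of-envelope sketch of the base-rate problem. All rates here are
# hypothetical, chosen only to illustrate how a low base rate interacts
# with a 10% false positive rate.
population = 1_000_000
base_rate = 0.005           # assumed fraction who will attempt suicide
sensitivity = 0.90          # assumed true positive rate
false_positive_rate = 0.10  # assumed FPR implied by "90% accuracy"

true_cases = population * base_rate
non_cases = population - true_cases

true_positives = true_cases * sensitivity
false_positives = non_cases * false_positive_rate

# Fraction of flagged people who are actually at risk:
precision = true_positives / (true_positives + false_positives)
print(f"flagged: {true_positives + false_positives:.0f}")  # 104000
print(f"precision: {precision:.3f}")                       # ~0.043
```

So under these assumed numbers, only about 4% of the people the system flags would actually be at risk; the other 96% are false positives.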
Re: (Score:3, Funny)
You have been deemed to be suicidal. Please check into your nearest healthcare location. Refusal to do so will result in you being placed imminently into level two treatment. Which may result in loss of job, loss of family, and the loss of your pet named Spot.
Re:False positive rate (Score:5, Insightful)
It's probably mostly meaningless. I mean, they scanned for features of people who are suicidal. They were in the hospital because they inflicted self harm, and were on medications specifically prescribed to make people not do that. So as far as I can tell, this doesn't predict anything, it just measures that "80-90% of the time doctors do the same thing for folks who would hurt themselves".
It's not like they randomly picked a bunch of people off the street and determined from THAT. Like basically every single other artificial intelligence or machine learning story, it's a bunch of dumb hype, eventually to get folks investing in stupid startups.
We Need Better Reporters (Score:3)
I wish that those reporting on medicine had a basic grasp of science and simple statistics so that they could ask the relevant questions such as: what is the false positive rate?, does 80-90% mean that your stat
Re: (Score:3)
Comment removed (Score:5, Interesting)
Re: (Score:2, Insightful)
Dystopian prediction: Life insurance payout denied. Despite a clean tox screen, your relative was suicidal (according to our algorithm) and was intentionally driving at a time of night when she knew a lot of drunk drivers would be on the road.
Re: (Score:2)
Given that very few people want to kill themselves in any given year, false positives can be approximated as 10-20% of the general population wanting to do themselves in in the next two years.
So, it'll show around 50M Americans wanting to do themselves in. Which, given the last general election, might not be too far out, I suppose....
Re: (Score:2)
Because I am sure that this will calculate that I want to kill myself even when I have no desire to do so, and tell me that I won't when I actually do.
I'd like to take that test . . . just to see if I can avoid any long-term planning issues. So when the bank invites me to come around, so they can turn my worthless surplus cash in my bank account into their juicy sales commissions for dubious financial "products", I can tell them with a good conscience, "No, thanks, I'm probably going to commit suicide within the next two years anyway. AI said I would."
Re: (Score:2)
In the actual paper, they report precision = 0.79 and recall = 0.95, which means that they predicted nearly all of the attempts (very few false negatives) and most of what they predicted were actual suicide attempts (few false positives). They report the actual numbers, too, but that table is a pain to copy and paste.
http://journals.sagepub.com/do... [sagepub.com]
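For anyone unsure how those two numbers relate, a quick sketch (the confusion-matrix counts below are invented to roughly match the reported 0.79 / 0.95, not taken from the paper's table):

```python
# Precision and recall from a confusion matrix. The counts are
# illustrative, not the paper's actual numbers.
tp, fp = 790, 210  # flagged as at-risk: correct / incorrect
fn, tn = 42, 958   # not flagged: missed attempts / correct

precision = tp / (tp + fp)  # of those flagged, how many attempted
recall = tp / (tp + fn)     # of actual attempts, how many were flagged

print(f"precision = {precision:.2f}")  # 0.79
print(f"recall    = {recall:.2f}")     # 0.95
```

High recall with lower precision is exactly the pattern described: almost no missed attempts, at the cost of some false alarms.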
Hmm... (Score:2)
All of their study group had indicated suicidal tendencies, and around 60% had actually attempted suicide.
I don't need a computer to tell me that there is a good chance some of these people will attempt suicide again.
Re: (Score:3)
I don't need a computer to tell me that there is a good chance some of these people will attempt suicide again.
Yes, but which ones? That's the whole point, surely? You'd want to use this as a diagnostic tool, in cases where you're dealing with a lot of depressed people and you need to know which ones you particularly need to watch out for in terms of suicide risk. Mental health clinics would find this invaluable, wouldn't they?
It's pretty much the same thing as being able to tell a cardiac clinic which of their heart-disease patients are most at risk of having a heart attack soon. Obviously everyone who is a patient
Re: (Score:2)
Re:92% accuracy! (Score:5, Insightful)
you are correct, these morons used a group of suicidal patients for their case study and are now claiming great success.
Re: (Score:2)
Comment removed (Score:4, Funny)
Re: (Score:2)
Re: (Score:2)
So, having an algorithm delivering 92% accuracy would imply that people could detect these situations even more accurately than that(?!)
No, it wouldn't. The different patterns of behavior could be so complicated and subtle that people can't pick them up, especially in an area where people tend to have biases.
Re: (Score:2)
Re: (Score:2)
But the question is: how are you expecting an algorithm, precisely developed by a person, to succeed where people will fail? It doesn't seem too logical, right?
Teach the algorithm by providing it with a list of properties from patients in the past, together with the patient outcome (suicide after N days, or no suicide). The algorithm then searches for patterns in the properties that have a high chance of resulting in suicide.
The developer doesn't even need to be educated in the field of psychology.
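The process the parent describes - feed in past patients' properties plus outcomes and let the model find the patterns - can be sketched in a few lines. Everything below (features, records, model choice) is made up for illustration; real systems use far richer records and stronger models than this toy logistic regression.

```python
# Toy sketch of "train on past patients, score new ones".
# Records are entirely hypothetical: (age/100, prior admissions,
# on medication) -> whether the patient later attempted suicide (1/0).
import math

training = [
    ((0.25, 3, 1), 1),
    ((0.60, 0, 0), 0),
    ((0.35, 2, 1), 1),
    ((0.45, 0, 1), 0),
    ((0.30, 4, 0), 1),
    ((0.55, 1, 0), 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Logistic regression fit by plain stochastic gradient descent.
weights = [0.0, 0.0, 0.0]
bias = 0.0
lr = 0.01
for _ in range(5000):
    for features, label in training:
        z = bias + sum(w * x for w, x in zip(weights, features))
        err = sigmoid(z) - label
        bias -= lr * err
        weights = [w - lr * err * x for w, x in zip(weights, features)]

# Risk score for a new, unseen (hypothetical) patient:
new_patient = (0.28, 3, 1)
risk = sigmoid(bias + sum(w * x for w, x in zip(weights, new_patient)))
print(f"estimated risk for new patient: {risk:.2f}")
```

Note that nothing in the loop encodes any psychology: the developer only supplies recorded properties and outcomes, and the fit discovers which properties predict the label.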
Re: (Score:2)
Re: (Score:2)
For example, the programmer building a chess engine can understand why it performs each movement, but will always lose in a game against it.
So, having an algorithm delivering 92% accuracy would imply that people could detect these situations even more accurately than that(?!)
If a chess engine developer can be outperformed by his own algorithm, then a suicide predictor developer can also be outperformed by his own algorithm. It's the same concept.
Re: (Score:2)
Re: (Score:2)
This is wrong. The basic requirements are a set of data on each individual case, including the desired final outcome. We enter data for patient 1 and whether patient 1 attempted suicide. We do the same for all the other patients in the "training" process. The "required knowledge" is objectively recorded, including whether the patient attempted suicide. The "training" is a mechanical process
Re: (Score:2)
Re: (Score:2)
Except when it works, and it often works much better than you appear to think. What matters is not what you think of the process, but how well the end product works. If the end product does a better job than human judgment, then it is a success.
Re: (Score:2)
Re: (Score:2)
Statistically speaking, is there a reliable way to win the lottery? Statistically speaking, does whatever Charlie's mom does (I haven't watched the video) work? I'm an empiricist. Give me some evidence, such as a comparatively better success rate.
Let's see.
- Person one thinks that a deeper expert understanding is a basic requisite.
- Person two intelligently defends other approaches by pointing to evidence that they sometimes work. Person two has also mentioned that the approach used doesn't always
Re: (Score:2)
Re: (Score:2)
Optimization (Score:5, Funny)
Minority report procogs? (Score:2)
Percentages are misleading (Score:5, Insightful)
Simple accuracy percentages are misleading when applied to low-probability events. An "AI" that always returned "No" to the query "Will this person commit suicide within the next two years?" would be 97.2% accurate (and 99.975% accurate for the next-week variant). And yet, that "AI" would be absolutely useless for any practical purpose.
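The parent's arithmetic checks out, and it takes only a few lines to show (the attempt rates below are simply the ones implied by the quoted accuracy figures, not measured values):

```python
# A classifier that always answers "No" is right whenever the person
# does not attempt suicide, so its "accuracy" is just one minus the
# base rate. Rates below are back-solved from the parent's figures.
attempt_rate_2yr = 0.028     # implied by the 97.2% figure
attempt_rate_week = 0.00025  # implied by the 99.975% figure

accuracy_always_no_2yr = 1 - attempt_rate_2yr
accuracy_always_no_week = 1 - attempt_rate_week

print(f"{accuracy_always_no_2yr:.3f}")   # 0.972
print(f"{accuracy_always_no_week:.5f}")  # 0.99975
```

Despite those impressive-looking percentages, this classifier catches exactly zero at-risk people, which is why raw accuracy is the wrong metric for rare events.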
Not to mention, with suicides, access to means has been a better statistical predictor than anything else, even mental illness. A person with no personal or family history of mental illness, but with a gun and a gas oven in their house, is at higher risk of killing themselves than a bipolar alcoholic with neither.
Re: (Score:2, Insightful)
A person with no personal or family history of mental illness, but with a gun and a gas oven in their house, is at higher risk of killing themselves than a bipolar alcoholic with neither.
Knowing what the mortality trends are for those with bipolar disorder - I'm going to have to call bullshit.
Re:Percentages are misleading (Score:4, Informative)
Not that I think this is a particularly useful bit of research but - the study patients' pretest probability of suicide was much higher than the general population's. These are people who are ADMITTED TO A HOSPITAL with concerns of self harm. They've already passed a bunch of screens to separate them from everybody else.
So you are talking about a group of people that the current system thinks is at some non-trivial risk of suicide, and trying to figure out which ones are at the highest risk.
So it's quite a bit more useful than some of the posters have been assuming. Still not sure how generalizable this will be, but give the researchers a bit of a break.
Re: (Score:2, Informative)
Suicide is a low-probability event in the general population, but their initial data set was not random; it was 5,000 patients already exhibiting symptoms of self harm. Picking out the people in that group likely to kill themselves is a pretty impressive feat.
So what? (Score:2)
Re: (Score:2)
Re: (Score:2)
Sounds like the results are better than what doctors can do:
https://www.scientificamerican... [scientificamerican.com]
No, please. (Score:4, Interesting)
As someone who's been down that road (but never gone through with an attempt), I automatically hate this invention. When depressed to that point, emotions tend to swing so hard and so fast that any mention of predictions during this state of mind is utmost bullshit.
The very slightest of triggers can either send you overboard or keep you in one piece depending on how your inner conversation is going with yourself. This can be anything... a faint sound, perhaps a song that reminds you of good/shitty times, from a car passing by not too far away.
I consider myself lucky to be both scared of the afterlife enough to have thoughts force second-guessings into me (although the older I grow the less I care), and have enough positive triggers to bring myself back. Nobody, not even myself, could predict if these will always work for me as well as they have however.
Suicidal/depressive folks definitely need help, but not from the machines of this day and age. A positive trigger could well be overridden by a "fuck it", and it only takes a split second to follow through with the act. You can't predict that kind of stuff with a high degree of accuracy, at least not yet.
Disclaimer: I did not RTFA. I find stuff like this appalling as it hits me right in the feels, and I would be deeply insulted if a machine tried to guess whether I was going to kill myself or not. There's much more to it than some algorithms a team of engineers wrote.
Re: (Score:2)
So while I'll grant that it is improbable that the machine could predict *what* will push you too far, I suspect that it is far better than the average human at identifying whether you'r
Re: (Score:2)
Than the average human, yes.
Then again the average human seems more worried about what Trump tweeted last night than the fact their spouses came in the door visibly exhausted and down.
What we need is to address our own and our fellow humans' emotions, not work ourselves to death while absorbing as much entertainment and drugs as we can during our down time with the money we've made.
Re: (Score:2)
I think you completely missed my point. That's OK though.
Re: (Score:2)
When depressed to that point, emotions tend to swing so hard and so fast that any mention of predictions during this state of mind is utmost bullshit.
It doesn't try to predict if a person will try to commit suicide this second. Rather, I assume it tries to predict when a person will get "depressed to that point". So yes, emotions are unpredictable, but if you are sufficiently depressed, at some point you are likely to consider or attempt suicide.
It's like saying "winter is cold" even though you might have a couple 60 degree days in December - true enough in the big picture.
Of course, the software could be worthless, but I think such software *could* work
Re: (Score:2)
Nobody is special.
In fact we are quite lucky to even be having this conversation, you and I, Anonymous Coward. Astronomically so.
I am however, far from wrong. These "statistics" are hogwash. Place them back in your ass where they came from.
Predict or arrange? (Score:2)
This reminded me of a sci-fi novel in which an AI arranges for people to die in bizarre and apparently accidental ways by interfering with other automated systems.
As mentioned in other comments, this is just an algorithm, but maybe it's not a huge leap to a more complex system doing the same thing, and given the goal of improving the accuracy percentage... well, there's one option that would work: just kill off individuals that have already been flagged as at risk.
Involuntary commitment? (Score:5, Insightful)
We don't know what the rate of false positives is, but with our current state of health insurance, getting locked up for a week and then getting a $50k bill would probably drive most people to suicide.
Re: (Score:2)
And can they be sued for false negatives? If someone commits suicide but the family finds out that the system didn't flag them as a risk, then are they at risk for a lawsuit? I'm sure that someone will sue, but what the courts decide their responsibility was is a different matter.
I doubt the person would get locked away for the week but I'm sure that a visit from a social worker or someone with some training in spotting the signs of someone who might commit suicide soon would be sent. Which then leads into w
Wrong title (Score:4, Insightful)
The title speaks of suicides while the article speaks only of _attempted_ suicides, since it checks admissions to hospitals.
Real suicides get admitted to the morgue instead.
I'd make a suggestion, but you wouldn't listen. (Score:2)
No one ever does.
Re: And we thought skynet will kill us all... (Score:1)
Re: (Score:1)
I just read the article, and no, it is as mathematically vacuous as the summary.
Re: (Score:2)
Like the new kids remember Clippy.
Re: (Score:2)
Paid.
Re: (Score:3)
Re: (Score:2)
lol. He's poisoning his own well.
So when you lot see him do ridiculous shit, do you briefly roll your eyes and go "aw shit, I have to defend this/put a spin on this" or do you just think "HURRRRRR YEAH GO DONNNNNNIE HAHAHAUHUHUHUH"?
Re: (Score:2)
Re: (Score:2)