Artificial Intelligence Can Now Predict Suicide With Remarkable Accuracy (qz.com)

An anonymous reader writes: Colin Walsh, a data scientist at Vanderbilt University Medical Center, and his colleagues have created machine-learning algorithms that predict, with unnerving accuracy, the likelihood that a patient will attempt suicide. In trials, results have been 80-90% accurate when predicting whether someone will attempt suicide within the next two years, and 92% accurate in predicting whether someone will attempt suicide within the next week. The prediction is based on data that's widely available from all hospital admissions, including age, gender, zip codes, medications, and prior diagnoses. Walsh and his team gathered data on 5,167 patients at Vanderbilt University Medical Center who had been admitted with signs of self-harm or suicidal ideation. They read each of these cases to identify the 3,250 instances of suicide attempts. This set of more than 5,000 cases was used to train the machine to distinguish those at risk of a suicide attempt from those who committed self-harm but showed no evidence of suicidal intent.
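What such a model looks like in practice: standard supervised learning on tabular admission records. Below is a minimal sketch in that spirit, not the authors' code; the file name, column names, and the choice of a scikit-learn random forest are all assumptions for illustration.

    # Sketch only: hypothetical data layout, invented column names.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import precision_score, recall_score

    # One row per admission: routinely collected fields plus a label
    # assigned by chart review (1 = suicide attempt, 0 = self-harm
    # without evidence of suicidal intent).
    df = pd.read_csv("admissions.csv")  # hypothetical file
    X = pd.get_dummies(df[["age", "gender", "zip_code",
                           "medications", "prior_diagnoses"]])
    y = df["attempted_suicide"]

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    model = RandomForestClassifier(n_estimators=500, random_state=0)
    model.fit(X_tr, y_tr)

    pred = model.predict(X_te)
    print("precision:", precision_score(y_te, pred))
    print("recall:   ", recall_score(y_te, pred))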
  • I never did find ELIZA to be that effective as a program.

    https://en.wikipedia.org/wiki/ELIZA [wikipedia.org]

  • An Algorithm.... (Score:4, Insightful)

    by Luthair ( 847766 ) on Monday June 12, 2017 @11:25AM (#54602381)
    not artificial intelligence.
    • Artificial Intelligence uses algorithms.
      • Artificial Intelligence uses algorithms.

        Natural Intelligence uses algorithms too ...

        • by K. S. Kyosuke ( 729550 ) on Monday June 12, 2017 @11:49AM (#54602577)
          Unless you don't know what you're doing, then you're going to try "heuristically" (read: panickingly) anything that comes to your mind.
          • Unless you don't know what you're doing, then you're going to try "heuristically"

            Heuristics are algorithms.

            • Re: (Score:3, Insightful)

              Computer heuristics, yes. Human heuristics, not so much. Or can you write a formal, terminating, deterministic sequence of elementary steps for reliably generating "Eureka!" moments in humans?
              • by ShanghaiBill ( 739463 ) on Monday June 12, 2017 @01:25PM (#54603427)

                Or can you write a formal, terminating, deterministic sequence of elementary steps for reliably generating "Eureka!" moments in humans?

                There is no requirement that algorithms be formal. Or terminating. Or deterministic. Or a sequence. Or consist of elementary steps.

                Exempli gratia: ANNs (Artificial Neural Nets).

                • Of course, for a sufficiently vague definition of an algorithm, and for a sufficiently vague outcome requested, you could probably formalize brains as algorithms - although no known ANN comes close to how the human brain actually works (mostly because we still don't know how the human brain actually works). But that's still not what the word "algorithm" (e.g., "Euclid's algorithm") means in common parlance.
                  • Your argument that BNNs don't use algorithms can be equally applied to ANNs.
                    If you use a vague definition of algorithm, then it can apply to both.
                    If you use a strict definition, it will apply to neither.

                    • I shall hope not! ANNs are still described in the form of algorithms (otherwise you couldn't run them on a computer!). Brains aren't.
              • by penandpaper ( 2463226 ) on Monday June 12, 2017 @01:29PM (#54603465) Journal

                Drugs. Lots of drugs.

                It may not be the "Eureka" moment you are expecting but from my perspective I discovered the meaning of existence.

              • Or can you write a formal, terminating, deterministic sequence of elementary steps for reliably generating "Eureka!" moments in humans?

                People can't reliably generate Eureka moments, so it would be impossible to put that in an algorithm.

                • Exactly. The very idea seems flawed on the basis that humans can, e.g., get sidetracked by reformulating their goal in an arbitrary way and then feeling fine about the modified outcome.
                1. Buy a season of "Eureka" on DVD.
                2. Put in DVD player.
                3. Watch show.
                4. ???
                5. Profit!
            • Heuristics are algorithms.

              Not according to my profs in school. A key part of the definition of an algorithm was that it was guaranteed to terminate. It may take a long time, but it was guaranteed to return an answer someday. A heuristic doesn't have a guaranteed stopping condition, just a time limit that the caller is willing to wait for the best solution found so far.

              I believe this to be the typical definition of algorithm, not just a specialization for computer science. Note that the Merriam-Webster definition [merriam-webster.com] includes a particularly key
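              A toy sketch of the distinction being drawn here, with hypothetical code: Euclid's algorithm is guaranteed to terminate with the right answer, while an "anytime" heuristic just runs until its time budget expires and returns the best thing it has found so far.

                  # Terminating algorithm vs. time-limited heuristic (toy example)
                  import time, random

                  def gcd(a, b):
                      # Euclid's algorithm: guaranteed to terminate
                      while b:
                          a, b = b, a % b
                      return a

                  def heuristic_max(candidates, budget_s=0.01):
                      # Anytime heuristic: stops only because the time budget
                      # runs out; the answer is "best found", not guaranteed best
                      best = None
                      deadline = time.monotonic() + budget_s
                      while time.monotonic() < deadline:
                          c = random.choice(candidates)
                          if best is None or c > best:
                              best = c
                      return best

                  print(gcd(252, 105))                     # 21, every time
                  print(heuristic_max(list(range(1000))))  # whatever the budget allowed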

    • Mod Luthair up, he knows what he's talking about. I can't, I've already commented on this thread.
    • by hey! ( 33014 ) on Monday June 12, 2017 @03:28PM (#54604625) Homepage Journal

      This story is about machine learning. Whether you consider machine learning to be "artificial intelligence" probably says more about your definition of "artificial intelligence" than it does about machine learning.

      Machine learning definitely replaces human judgment at certain tasks -- in this case classifying a thing by its attributes -- however it does it in ways that an unaided human brain cannot duplicate. For example it might examine the goodness of fit of a large number of alternative (although structurally similar) algorithms against a vast body of training data.

      Many years ago, when I was a college student, AI enthusiasts used to say things like, "The best way to understand the human mind is to duplicate its functions." I believe that after three decades that has proven to be true, but not in the way people thought it would be true. It turns out the human way of doing things is just one possible way.

      I think that's a pretty significant discovery. But is it "AI"? It's certainly not what people are expecting. On the plus side, methods like classification and regression trees produce algorithms that can be examined and critiqued analytically.

      • Machine learning is _the_ A.I.

        Your carefully crafted expert systems need knowledge.

        however it does it in ways that an unaided human brain cannot duplicate.

        You also cannot duplicate my human method without aid (and I'm pretty sure not even WITH aid)

        You are adding requirements that don't exist, and not even doing it honestly.

        • by hey! ( 33014 )

          Expert systems work in a completely different way than machine learning approaches. Expert systems do indeed require the analysis of human knowledge as a starting point. Machine learning approaches do not; they just need data.

          You also cannot duplicate my human method without aid (and I'm pretty sure not even WITH aid)

          My point is that duplicating the way you think isn't really necessary. You can in many cases be replaced by something that works in a completely different way.

          • Expert systems work in a completely different way than machine learning approaches.

            Proving that you don't know what you are talking about.

            Converting your shit into a car analogy: "Bicycles work in a completely different way to tractor trailers"

            You are just proving that you don't know anything about at least one of the two things you are trying to talk about. Didn't you know bikes are ridden? Didn't you know tractor trailers haul cargo? You think the difference is how they 'work'? Really?

            • by hey! ( 33014 )

              Converting your shit into a car analogy: "Bicycles work in a completely different way to tractor trailers"

              Exactly. I don't see why you think that's ridiculous. Bikes and tractor trailers have some broad similarities, but they're built to accomplish different things so analogies between them aren't particularly useful.

    • There is AI and there is AI. This program is almost certainly AI in some sense of the term.

      On the one hand we have what people sometimes call general intelligence or "true AI". That means capable of independent and original thought, and possibly passing the Turing Test someday. (I'm not convinced that even a true AI will pass the Turing Test because its life experiences will be so different from those of a human, or at least won't pass until it becomes enough smarter than humans to be able to fake out the t

  • by Anonymous Coward

    Perhaps this study is just a cover, and SkyNet is actually developing a subtler approach to offing humanity?

  • Simple solution (Score:5, Insightful)

    by Opportunist ( 166417 ) on Monday June 12, 2017 @11:29AM (#54602407)

    Give people a reason to not kill themselves and you'll see rates drop.

    • How about "a chance to prove the AI wrong."

    • Dude, this.

      This, so fucking much.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      ... a reason to not kill themselves

      Technology is wonderful, but it has a dark side for the society to which it brings so much convenience: it requires conformity. As individuals put their lives online, those who disagree with the group-think and propaganda are easier to detect and punish; essentially criminalizing all deviation from normality. This is the very reason we don't want people with guns, often known as 'the government', watching everything we do.

      Obedience to social conventions is required everywhere: At work, in town, in other perso

  • by Anonymous Coward

    Does the two-year 80-90% accuracy also translate to a false positive rate of 10-20%?

    If yes: What do you do with the millions of false positives? An overall small suicide rate translates into a huge fraction of false positives at a 10% false positive rate.
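    A back-of-the-envelope Bayes check makes this concrete. The numbers below are assumptions for illustration (the ~2.8% two-year rate is implied by a comment further down), not figures from the paper, whose cohort was pre-selected rather than general-population:

        # Illustrative only: assumed prevalence and error rates
        prevalence = 0.028    # assume ~2.8% attempt suicide within two years
        sensitivity = 0.90    # assume the model catches 90% of attempters
        fpr = 0.10            # assume it also flags 10% of non-attempters

        flagged_true = prevalence * sensitivity    # 0.0252
        flagged_false = (1 - prevalence) * fpr     # 0.0972
        ppv = flagged_true / (flagged_true + flagged_false)
        print(f"share of flags that are real: {ppv:.2f}")   # ~0.21

    In other words, applied to the general population under these assumptions, roughly four out of five flags would be false positives.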

    • Re: (Score:3, Funny)

      by Anonymous Coward

      You have been deemed to be suicidal. Please check into your nearest healthcare location. Refusal to do so will result in you being placed imminently into level-two treatment, which may result in loss of job, loss of family, and the loss of your pet named Spot.

    • by barbariccow ( 1476631 ) on Monday June 12, 2017 @12:47PM (#54603069)

      It's probably mostly meaningless. I mean, they scanned for features of people who are suicidal. They were in the hospital because they inflicted self-harm, and were on medications specifically prescribed to make people not do that. So as far as I can tell, this doesn't predict anything; it just measures that "80-90% of the time doctors do the same thing for folks who would hurt themselves".

      It's not like they randomly picked a bunch of people off the street and determined from THAT. Like basically every single other artificial intelligence or machine learning story, it's a bunch of dumb hype, eventually to get folks investing in stupid startups.

    • I expect that an 80-90% accuracy means that in a group of X people it correctly identifies 80-90% of the people who later go on to attempt suicide. However, if you ignore the false positive rate then I can make an even simpler algorithm that is 100% accurate: simply tag everyone as a suicide risk.

      I wish that those reporting on medicine had a basic grasp of science and simple statistics so that they could ask the relevant questions such as: what is the false positive rate?, does 80-90% mean that your stat
    • The group that was being analyzed was already considered "high risk": out of 5,167 cases there were 3,250 attempted suicides. So even if those were false positives, it wasn't an amount that dwarfs the actual predictions. Now if they expand this to a larger, less risky group, who knows, but at this stage the false positive rate seems more than acceptable.
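      The arithmetic, using the figures from the summary:

          # Within the study cohort, the base rate was anything but low
          print(3250 / 5167)   # ~0.63: most of the cohort attempted suicide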
  • by houghi ( 78078 ) on Monday June 12, 2017 @11:35AM (#54602465)

    So who can do the calculations for the false positives and the false negatives? Because I am sure that this will calculate that I am willing to kill myself, even if I have no desire to do so and tell me that I won't when I am willing to do so.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Dystopian prediction: Life insurance payout denied. Despite a clean tox screen, your relative was suicidal (according to our algorithm) and was intentionally driving at a time of night when she knew a lot of drunk drivers would be on the road.

    • Given that very few people want to kill themselves in any given year, false positives can be approximated as 10-20% of the general population wanting to do themselves in in the next two years.

      So, it'll show around 50M Americans wanting to do themselves in. Which, given the last general election, might not be too far out, I suppose....

    • Because I am sure that this will calculate that I am willing to kill myself, even if I have no desire to do so and tell me that I won't when I am willing to do so.

      I'd like to take that test . . . just to see if I can avoid any long-term planning issues. So when the bank invites me to come around, so they can turn the worthless surplus cash in my bank account into their juicy sales commissions for dubious financial "products", I can tell them with a good conscience, "No, thanks, I'm probably going to commit suicide within the next two years anyway. AI said I would."

    • by Skidge ( 316075 )

      In the actual paper, they report precision = 0.79 and recall = 0.95, which means that they predicted nearly all of the attempts (very few false negatives) and most of what they predicted were actual suicide attempts (few false positives). They report the actual numbers, too, but that table is a pain to copy and paste.

      http://journals.sagepub.com/do... [sagepub.com]
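      For anyone who doesn't want to dig out the table: precision and recall pin down the confusion matrix once you fix the number of actual attempts. A rough reconstruction, assuming the ~3,250 attempts quoted in the summary (the paper's own table may differ):

          # Rough reconstruction from the reported precision/recall
          precision, recall, positives = 0.79, 0.95, 3250

          tp = recall * positives                  # attempts the model caught
          fn = positives - tp                      # attempts it missed
          fp = tp * (1 - precision) / precision    # non-attempts it flagged
          print(f"TP ~ {tp:.0f}, FN ~ {fn:.0f}, FP ~ {fp:.0f}")
          # TP ~ 3088, FN ~ 162, FP ~ 821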

  • All of their study group had indicated suicidal tendencies, and around 60% had actually attempted suicide.

    I don't need a computer to tell me that there is a good chance some of these people will attempt suicide again.

    • I don't need a computer to tell me that there is a good chance some of these people will attempt suicide again.

      Yes, but which ones? That's the whole point, surely? You'd want to use this as a diagnostic tool, in cases where you're dealing with a lot of depressed people and you need to know which ones you particularly need to watch out for in terms of suicide risk. Mental health clinics would find this invaluable, wouldn't they?

      It's pretty much the same thing as being able to tell a cardiac clinic which of their heart-disease patients are most at risk of having a heart attack soon. Obviously everyone who is a patient

  • This sounds way too unrealistic, even before analysing the methodology (how are they training the algorithm? By letting people die over the years?!). I am not familiar with suicide-prone personalities, but "AI" certainly cannot understand them better than humans can. So, having an algorithm delivering 92% accuracy would imply that people could detect these situations even more accurately than that(?!)

    It seems a new sample of AI-labelled-really-meaning-nothing hype (or dishonestly/ignorantly over-fitted, bl
    • Re:92% accuracy! (Score:5, Insightful)

      by iggymanz ( 596061 ) on Monday June 12, 2017 @11:44AM (#54602537)

      you are correct, these morons used a group of suicidal patients for their case study and now are claiming great success.

      • used a group of suicidal patients

        It looks like that, or that they were mostly dealing with people who were never going to commit suicide at all. You can get something like 92% either by having an almost perfect understanding of the given situation (an extremely unlikely scenario here) or by playing around with numbers and showing whatever you want to show.

    • by houghi ( 78078 ) on Monday June 12, 2017 @12:03PM (#54602713)

      I have made an algorithm that says that of those who never had previously tried suicide and then did it successfully, 97% did it for the first time. (±3% accuracy on the calculation)

      • (Clueless-CEO impression) Good work houghi! We are very happy with you! But some of our clients aren't completely on board with this ±3%, because they think that it might provoke cancer. Could you work this bit out, by next month perhaps? Ask for whatever you need. LOL.
    • So, having an algorithm delivering 92% accuracy would imply that people could detect these situations even more accurately than that(?!)

      No, it wouldn't. The different patterns of behavior could be so complicated and subtle that people can't pick them up, especially in an area where people tend to have biases.

      • No, it wouldn't.

        I wrote a generic statement intended to provide a clear enough overall picture. As happens with most generic statements, proving its absolute validity/falsehood is virtually impossible. So, I am not sure why you are saying such a clear "no" followed by a (logically) pretty imprecise justification for it. Shall I understand this as a more-or-less-blind criticism (an attack on me?!), not exactly aiming to have a constructive discussion? Or am I misunderstanding your intention? OK, I will bite...

        The different patterns of behavior could be so complicated and subtle that people can't pick them up, especially in an area where people tend to have biases.

        I fully agree

        • But the question is: how are you expecting an algorithm, precisely developed by a person, to succeed where people will fail? It doesn't seem too logical, right?

          Teach the algorithm by providing it with a list of properties from patients in the past, together with the patient outcome (suicide after N days, or no suicide). The algorithm then searches for patterns in the properties that have a high chance of resulting in suicide.

          The developer doesn't even need to be educated in the field of psychology.
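          A minimal sketch of that point (toy, made-up numbers; scikit-learn assumed): the only things the developer writes down are features and labels, not psychiatric rules.

              # Toy illustration: features and labels are invented
              import numpy as np
              from sklearn.linear_model import LogisticRegression

              # Each row: [age, num_medications, prior_admissions]
              X = np.array([[45, 3, 2], [23, 0, 0], [67, 5, 1], [31, 1, 3]])
              y = np.array([1, 0, 0, 1])   # 1 = later attempted suicide (made up)

              model = LogisticRegression().fit(X, y)  # pattern search, no rules coded
              print(model.predict_proba([[40, 2, 2]])[:, 1])  # risk for a new case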

          • The developer doesn't even need to be educated in the field of psychology.

            You are again misinterpreting my point. A human understanding of the actions to be performed (= accurate prediction of suicides) is a basic requirement. It doesn't matter if this understanding comes from a group of people (which, at some point, will have to transmit the required knowledge to the given programmer), from a trial-and-error analysis or from a bunch of random guesses. The algorithm can only output what its authors can understand and its whole point is to speed up/ease the analysis of big amounts

            • For example, the programmer building a chess engine can understand why it performs each movement, but will always lose in a game against it.

              So, having an algorithm delivering 92% accuracy would imply that people could detect these situations even more accurately than that(?!)

              If a chess engine developer can be outperformed by his own algorithm, then a suicide predictor developer can also be outperformed by his own algorithm. It's the same concept.

              • A very quick one, the last one I promise!! I will not continue answering what seem to be random ideas from a person without the required knowledge, completely unwilling to understand and seriously expecting what seem to be random guesses to be true no matter what.

                If a chess engine developer can be outperformed by his own algorithm, then a suicide predictor developer can also be outperformed by his own algorithm. It's the same concept.

                You misunderstood the idea (again). With enough time and resources (manuals, advice from knowledgeable people, previous games, etc.), a person will always beat or at least draw against a chess program. The time and the management of the huge amount of information involved

            • A human understanding of the actions to be performed (= accurate prediction of suicides) is a basic requirement.

              This is wrong. The basic requirements are a set of data on each individual case, including the desired final outcome. We enter data for patient 1 and whether patient 1 attempted suicide. We do the same for all the other patients in the "training" process. The "required knowledge" is objectively recorded, including whether the patient attempted suicide. The "training" is a mechanical process, producing a set of arbitrary-looking parameters that have no obvious meaning. This is not an attempt to codify human understanding (which an expert system would do), but to create a program that will yield a certain output given certain input.

              • This is wrong

                Pfff.... Note that it took quite a big effort to continue reading your comment after that starting sentence (after all the previous comments), but here I go once again...

                We enter data for patient 1 and whether patient 1 attempted suicide. We do the same for all the other patients in the "training" process. The "required knowledge" is objectively recorded, including whether the patient attempted suicide. The "training" is a mechanical process, producing a set of arbitrary-looking parameters that have no obvious meaning. This is not an attempt to codify human understanding (which an expert system would do), but to create a program that will yield a certain output given certain input.

                This is either false or representative of a seriously-flawed system. Blindly analysing random sets of data is the perfect recipe for disaster. Even with an algorithm designed to be very careful about over-fitting, over-fitting (or some other kind of data misinterpretation) is very likely to occur. I don't think that any (serious enough) system aiming to understand any situation has ever been developed by facing the a

                • This is either false or representative of a seriously-flawed system. Blindly analysing random sets of data is the perfect recipe for disaster.

                  Except when it works, and it often works much better than you appear to think. What matters is not what you think of the process, but how well the end product works. If the end product does a better job than human judgment, then it is a success.

                  I don't think that any (serious enough) system aiming to understand any situation has ever been developed by facing the a

                  • Except when it works, and it often works much better than you appear to think.

                    Sure. It works pretty much like a methodology to win the lottery or, more graphically, in the same way that Charlie's mom thinks that what she does keeps him alive [youtube.com]. LOL.

                    devout Catholic is of the Trinity

                    ??!! What was that?! Projection? Extreme irony? The most inoffensive, naive and pointless attack ever?! Don't you get it? Here you have a clearer version:
                    - Person 1 thinks that a deeper (expert) knowledge about the given conditions is a basic requisite to ever reach a good enough understanding about any situation.
                    - Person 2 blindly defends a

                    • Statistically speaking, is there a reliable way to win the lottery? Statistically speaking, does whatever Charlie's mom does (I haven't watched the video) work? I'm an empiricist. Give me some evidence, such as a comparatively better success rate.

                      Let's see.
                      - Person one thinks that a deeper expert understanding is a basic requisite.
                      - Person two intelligently defends other approaches by pointing to evidence that they sometimes work. Person two has also mentioned that the approach used doesn't always

                    • Statistically speaking, is there a reliable way to win the lottery? Statistically speaking, does whatever Charlie's mom does

                      Short answer: completely, absolutely, definitively, certainly, undoubtedly NO.

                      Long answer: [please, put the short answer here] because statistics/maths (science, engineering, etc.) are just ways to allow our limited understanding to somehow get more insights into too complex-for-our-immediate-grasp realities. They are basically tools, enhancements, extensions which only can complement our much more comprehensive remaining knowledge. Blindly believing in the first misinterpreted (because even the tools are

  • by bugs2squash ( 1132591 ) on Monday June 12, 2017 @11:47AM (#54602555)
    When the algorithm discovers it can improve its accuracy by driving people to suicide after being linked to robocalling systems...
  • If a clever piece of software accurately predicts destructive behavior, should authorities step in even though it has not happened yet? I could see arguments both ways.
  • by gman003 ( 1693318 ) on Monday June 12, 2017 @11:54AM (#54602619)

    Simple accuracy percentages are misleading when applied to low-probability events. An "AI" that always returned "No" to the query "Will this person commit suicide within the next two years?" would be 97.2% accurate (and 99.975% accurate for the next-week variant). And yet, that "AI" would be absolutely useless for any practical purpose.

    Not to mention, with suicides, access to means has been a better statistical predictor than anything else, even mental illness. A person with no personal or family history of mental illness, but with a gun and a gas oven in their house, is at higher risk of killing themselves than a bipolar alcoholic with neither.
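    The parent's arithmetic is easy to verify; a tiny sketch using his assumed base rates (not figures from the paper):

        # A constant "No" answer is highly "accurate" whenever the event is rare
        for horizon, base_rate in [("two years", 0.028), ("one week", 0.00025)]:
            accuracy = 1 - base_rate  # right about everyone who doesn't attempt
            recall = 0.0              # ...but it catches zero actual attempters
            print(f"{horizon}: accuracy {accuracy:.3%}, recall {recall:.0%}")
        # two years: accuracy 97.200%, recall 0%
        # one week: accuracy 99.975%, recall 0%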

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      A person with no personal or family history of mental illness, but with a gun and a gas oven in their house, is at higher risk of killing themselves than a bipolar alcoholic with neither.

      Knowing what the mortality trends are for those with bipolar disorder, I'm going to have to call bullshit.

    • by ColdWetDog ( 752185 ) on Monday June 12, 2017 @12:50PM (#54603093) Homepage

      Not that I think this is a particularly useful bit of research, but the study's patients' pretest probability of suicide was much higher than the general population's. These are people who are ADMITTED TO A HOSPITAL with concerns of self-harm. They've already passed a bunch of screens that separate them from everybody else.

      So you are talking about a group of people that the current system thinks is at some non-trivial risk of suicide, and trying to figure out which ones are at the highest risk.

      So it's quite a bit more useful than some of the posters have been assuming. Still not sure how generalizable this will be, but give the researchers a bit of a break.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Suicide is a low-probability event in the general population, but their initial data set was not random: it was 5,000 patients already exhibiting symptoms of self-harm. Picking out the people in that group likely to kill themselves is a pretty impressive feat.

  • You could design a questionnaire that is just as accurate. Are we now going to call printed words on a piece of paper 'AI', too?
    • I agree. It isn't a surprise that modern machine learning can recognize patterns. I don't see how this is even close to innovative. Now if it resulted in changing the treatment offered to patients such that outcomes were improved relative to current human doctors' recommendations, then that would be interesting.
  • No, please. (Score:4, Interesting)

    by SCVonSteroids ( 2816091 ) on Monday June 12, 2017 @12:22PM (#54602873)

    As someone who's been down that road (but never gone through with an attempt), I automatically hate this invention. When depressed to that point, emotions tend to swing so hard and so fast that any mention of predictions during this state of mind is utter bullshit.

    The very slightest of triggers can either send you overboard or keep you in one piece, depending on how your inner conversation is going with yourself. This can be anything... a faint sound from a car passing by not too far away, perhaps a song that reminds you of good/shitty times.

    I consider myself lucky to be scared enough of the afterlife that such thoughts force second-guessing on me (although the older I grow, the less I care), and to have enough positive triggers to bring myself back. Nobody, not even myself, could predict whether these will always work for me as well as they have, however.

    Suicidal/depressive folks definitely need help, but not from the machines of this day and age. A positive trigger could well be overridden by a "fuck it", and it only takes a split second to follow through the act. You can't predict that kind of stuff with a high degree of accuracy, at least not yet.

    Disclaimer : I did not RTFA. I find stuff like this appalling as it hits me right in the feels, and I would be deeply insulted if a machine tried to guess whether I was going to kill myself or not. There's much more to it than some algorithms a team of engineers wrote.

    • You mentioned being in the oscillating state where anything can push you over. That's likely the state the machine is detecting. It isn't detecting exactly whether you'll do it or not, just whether your oscillation is high enough that the risk is sufficient that your environment is likely to present you with a situation.

      So while I'll grant that it is improbable that the machine could predict *what* will push you too far, I suspect that it is far better than the average human at identifying whether you'r
      • Than the average human, yes.

        Then again, the average human seems more worried about what Trump tweeted last night than the fact that their spouse came in the door visibly exhausted and down.

        What we need is to address our own and our fellow humans' emotions, not work ourselves to death while absorbing as much entertainment and as many drugs as we can during our downtime with the money we've made.

    • When depressed to that point, emotions tend to swing so hard and so fast that any mention of predictions during this state of mind is utter bullshit.

      It doesn't try to predict if a person will try to commit suicide this second. Rather, I assume it tries to predict when a person will get "depressed to that point". So yes, emotions are unpredictable, but if you are sufficiently depressed, at some point you are likely to consider or attempt suicide.

      It's like saying "winter is cold" even though you might have a couple 60 degree days in December - true enough in the big picture.

      Of course, the software could be worthless, but I think such software *could* work

  • This reminded me of a sci-fi novel in which an AI arranges for people to die in bizarre and apparently accidental ways by interfering with other automated systems.

    As mentioned in other comments, this is just an algorithm, but maybe it's not a huge leap to a more complex system doing the same thing, and given the goal of improving the accuracy percentage... well, there's one option that would work: just kill off individuals that have already been flagged as at risk.

  • So, once the computer diagnoses someone as highly likely to kill themselves in the next week, then does it (or the user) call the men in white coats to give the subject the coat with the funny sleeves? Therapists frequently have a statutory or license requirement to report potential suicides.
    We don't know what the rate of false positives is, but with our current state of health insurance, getting locked up for a week and then getting a $50k bill would probably drive most people to suicide.
    • And can they be sued for false negatives? If someone commits suicide but the family finds out that the system didn't flag them as a risk, are they then at risk for a lawsuit? I'm sure that someone will sue, but what the courts decide their responsibility was is a different matter.

      I doubt the person would get locked away for the week, but I'm sure that a visit would be arranged from a social worker, or from someone with some training in spotting the signs of someone who might commit suicide soon. Which then leads into w

  • Wrong title (Score:4, Insightful)

    by nospam007 ( 722110 ) * on Monday June 12, 2017 @01:40PM (#54603599)

    The title speaks of suicides, while the article speaks only of _attempted_ suicides, checking admissions to hospitals.
    Real suicides get admitted to the morgue instead.

"Trust me. I know what I'm doing." -- Sledge Hammer

Working...