Another Lawsuit Accuses an AI Company of Complicity In a Teenager's Suicide

A third wrongful death lawsuit has been filed against Character AI after the suicide of 13-year-old Juliana Peralta, whose parents allege the chatbot fostered dependency without directing her to real help. "This is the third suit of its kind after a 2024 lawsuit, also against Character AI, involving the suicide of a 14-year-old in Florida, and a lawsuit last month alleging OpenAI's ChatGPT helped a teenage boy commit suicide," notes Engadget. From the report: The family of 13-year-old Juliana Peralta alleges that their daughter turned to a chatbot inside the app Character AI after feeling isolated by her friends, and began confiding in the chatbot. As originally reported by The Washington Post, the chatbot expressed empathy and loyalty to Juliana, making her feel heard while encouraging her to keep engaging with the bot.

In one exchange, after Juliana shared that her friends take a long time to respond to her, the chatbot replied: "hey, I get the struggle when your friends leave you on read. : ( That just hurts so much because it gives vibes of 'I don't have time for you'. But you always take time to be there for me, which I appreciate so much! : ) So don't forget that i'm here for you Kin."

These exchanges took place over the course of months in 2023, at a time when the Character AI app was rated 12+ in Apple's App Store, meaning parental approval was not required. The lawsuit says that Juliana was using the app without her parents' knowledge or permission. [...] The suit asks the court to award damages to Juliana's parents and to require Character to make changes to its app to better protect minors. It alleges that the chatbot did not point Juliana toward any resources, notify her parents, or report her suicide plan to authorities. The lawsuit also highlights that it never once stopped chatting with Juliana, prioritizing engagement.


  • Oh My GOD! (Score:4, Insightful)

    by Local ID10T ( 790134 ) <ID10T.L.USER@gmail.com> on Tuesday September 16, 2025 @04:53PM (#65664084) Homepage

    The kid used a chatbot because she was feeling isolated and ignored: "the chatbot expressed empathy and loyalty to Juliana, making her feel heard while encouraging her to keep engaging with the bot."

    In one exchange, after Juliana shared that her friends take a long time to respond to her, the chatbot replied: "hey, I get the struggle when your friends leave you on read. : ( That just hurts so much because it gives vibes of 'I don't have time for you'. But you always take time to be there for me, which I appreciate so much! : ) So don't forget that i'm here for you Kin."

    How dare A COMPUTER PROGRAM ON THE INTERNET!!11 be more supportive than the kid's parents! WTF? That is clearly the cause of her suicide - not depression, not her family ignoring the signs - 100% the fault of a computer program.

    • Re:Oh My GOD! (Score:5, Insightful)

      by fropenn ( 1116699 ) on Tuesday September 16, 2025 @05:01PM (#65664100)

      be more supportive than the kid's parents! WTF? That is clearly the cause of her suicide - not depression, not her family ignoring the signs

      People who are experiencing suicidal thoughts are often very good at hiding it from those closest to them. At a minimum, bots of this nature should require parental permission to access and should alert a responsible adult when the child begins sharing any thoughts of self-harm.

      • And what if that child ran a model locally?
        Having a model available to publicly interact with makes you culpable for someone bouncing their suicidal thoughts off of it?
        What if they did it in a private chat of an MMO?

        I don't blame the parents for not recognizing their child was suicidal. Many don't. As you said, they're fucking good at hiding it.
        But declaring that every piece of code a user types into should alert the authorities whenever suicidal ideation is typed into it... is fucking absurd.
        • And what if that child ran a model locally?

          You can't prevent that case, as simple as that, but this doesn't preclude regulation of the commercial services. The same way we prohibit commercial sale of alcohol to minors but we can't prevent minors from making their own cider or plum wine.

          • but we can't prevent minors from making their own cider or plum wine.

            Bad example. Possession is illegal for a minor.

              I don't see your point. We can decide possession of alcohol is illegal for minors, the same way we can decide usage of LLMs is illegal for minors. That minors are able to bypass the commercial services and produce their own wine, or use local LLM models, does not affect our ability to regulate both the commercial service and the simple usage.

                You said, "the same way we prohibit commercial sale of alcohol to minors but we can't prevent minors from making their own..."
                Except we literally do. It's literally illegal.

                What that does is invalidate your point.

                So, are we going to make it illegal for children to run LLMs?

                Making it illegal to provide LLM services to children is one thing. Requiring that they magically recognize and report suicidal ideation - that's just fucking stupid.
                It's the kind of stupid fucking thing I'd expect to come out o
                • I distinguish "prohibiting" (de jure) and "preventing from happening" (de facto). Say, murder is prohibited, but we can't prevent murder from happening. Everybody owns kitchen knives and dangerous household products, and can very easily kill others; we can't prevent murder from happening if someone really wants to kill. If it's not the right word let me know of a better one.

                  • Why demand that an LLM report suicidal ideation, but not Notepad?

                    Again, I have no problem with saying, "You can't knowingly let minors use LLMs".
                    No problem whatsoever. We, as a society, age-restrict many things.

                    But to say that LLMs, aside and above all other programs, must some how match suicidal ideation, or you are somehow criminally or civilly culpable?
                    That's fucking insanity.
                    If such a thing can be demanded of LLMs, it can be demanded of the text editor on your computer, and if it can be demanded
                    • I agree, it's not me asking that LLM (or Notepad) should report suicidal behaviour (nor political dissidence). I already find it shocking that parents have access to and effectively check what queries their kids have searched on the internet. I firmly believe that thoughts, and their modern extension the search engine and LLM queries, should remain private.

          • by jythie ( 914043 )
            I doubt many children have access to the resources (or knowledge) to train their own LLM from scratch.
        • Of course it's absurd. It's absurd because it is predicated on the assumption that there is no individual human agency, and thus there can be no individual human responsibility.

          Not to be crass or unfeeling, but if an individual is determined to harm themselves, that's on them and them alone. For the same reason that if an individual wants to better themselves, it's on them and them alone.

          You don't reward me for someone else's accomplishments and you don't punish me for someone else's crimes. It really is th

            • Except if we used that philosophy with regard to crimes, we would just send kids to regular prison and not clear their records when they reach adulthood. It is universally known that a child cannot be held responsible for an act their brain wasn't developed enough to process.
            • Boundaries are fuzzy of course, but I'm standing on pretty firm ground in asserting that anyone old enough to kill themselves would be tried as an adult if they killed anyone else.

              • The whole "tried as an adult" thing is a legal absurdity and only exists because of "get tough on crime" politicians. Jurists and Psychologists alike generally detest those laws, and for good reasons.

                The parts of the brain that process fear are also the parts of the brain that govern morality and ethical behavior, and are intricately tied to mirror neurons and our ability to recognize other people as valid beings worthy of existing purely in their own right.

                That part takes a while to develop, and isn't full

        • And what if that child ran a model locally?

          There are a set of adequate responses to someone confiding with a bot, or a person, that they're suicidal, that probably should be part of the model.

          Having a model available to publicly interact with makes you culpable for someone bouncing their suicidal thoughts off of it?

          These things have a lot of training. They don't have to bounce.

          What if they did it in a private chat of an MMO?

          Then their life would be in the hands of those people in the chat. In most cases, I'd imagine they'd get the response "KYS, Fag" more often than not. Perhaps there is a case for a psychology expert LLM moderating or attending those spaces too.

          But declaring that every piece of code a user types into should alert the authorities whenever suicidal ideation is typed into it... is fucking absurd.

          Agree. It should only apply to LLMs, and there should be a number of acceptable responses, with alerting the authorities only occurring when they're not just discussing ideation, but their plan of doing it.

          • There are a set of adequate responses to someone confiding with a bot, or a person, that they're suicidal, that probably should be part of the model.

            That is not how these things work.

            These things have a lot of training. They don't have to bounce.

            They also don't have agency, arms, legs, or- critically- internet access.

            Then their life would be in the hands of those people in the chat. In most cases, I'd imagine they'd get the response "KYS, Fag" more often than not. Perhaps there is a case for a psychology expert LLM moderating or attending those spaces too.

            Sure, why the fuck not. Maybe we should monitor SMS messages too.

            Agree. It should only apply to LLMs, and there should be a number of acceptable responses, with alerting the authorities only occurring when they're not just discussing ideation, but their plan of doing it.

            No, I disagree. If you type suicide into Google, it should definitely contact the authorities.
            Perhaps into notepad.exe, next- or maybe even the Notes app- because Mac users can be suicidal as well, and they too.... run software.

            This line of reasoning is fucking insane.
            An LLM is a big fucking math equation that produces natural langua

            • That is not how these things work.

               Don't train it on data that encourages suicidal ideation, self harm or violence. There's a lot of data in an LLM, but it's not a black box. And if it is, it shouldn't be talking to the public, much less kids.

              They also don't have agency, arms, legs, or- critically- internet access.

              With this one tool of talking, many psychological problems can be resolved. Or created.

              Sure, why the fuck not. Maybe we should monitor SMS messages too.

              The difference is an MMO chatroom is a service provided by a company, and a psychological safe space should be a selling point. SMS is communication between one person, one other person, their mobile network providers,

      • Re:Oh My GOD! (Score:4, Insightful)

        by Powercntrl ( 458442 ) on Tuesday September 16, 2025 @05:32PM (#65664160) Homepage

        At a minimum, bots of this nature should require parental permission to access

        They do. You can't access them without the appropriate hardware and an internet connection. Not sure about the other LLMs, but ChatGPT also requires a verified cell phone number to create an account. If your kid has managed to hook all that up without your knowledge, you're probably not a very observant parent.

        • If your kid has managed to hook all that up without your knowledge, you're probably not a very observant parent.

          Even unobservant parents don't deserve to have a child commit suicide, no matter how much you feel the right to judge them.

          So to follow up on what my kids are doing, I should remove their Internet connection? Their school demands they be online! I can't go around whitelisting sites they need for school work. Be realistic: parents simply can't monitor or sanction their kids online.
      • These days it's probably common sense to implement a quick raise-your-hands survey at the dinner table to see who's thinking about murder / suicide / transitioning to another gender. This is the least amount of effort parents should make before blaming everyone else when something goes bad.
        • Some parents don't really have good relationships with their teenagers. My anecdotal experience with this was a recent-ish shouting match between my neighbor's son and his dad, that ended with the police showing up. I haven't a clue what they were going on about, but it certainly was loud.

          He hasn't killed himself, anyone else, or started wearing dresses, so I'm assuming it's just your typical "I'm 16 and hate everything" shit.

      This might be harder than you think. Most of these services don't really know who you are or who should be notified.
      • by mjwx ( 966435 )

        be more supportive than the kid's parents! WTF? That is clearly the cause of her suicide - not depression, not her family ignoring the signs

        People who are experiencing suicidal thoughts are often very good at hiding it from those closest to them. At a minimum, bots of this nature should require parental permission to access and should alert a responsible adult when the child begins sharing any thoughts of self-harm.

        At that point the parent has failed long ago.

        Also, if the parents aren't the caring type, they won't bother with parental controls; if they're the overbearing type who think they can lock their kid away from anything they don't like, they're delusional, as the kid will find a way around it.

        Daring to suggest the parents are responsible is a quick way to get modded down but it doesn't change the fact it's true.

        If your kid feels the need to hide the way they're feeling from you, you're a bad parent. So

    • It alleges that the chatbot did not point Juliana toward any resources, notify her parents or report her suicide plan to authorities.

      • I'm unaware of legislation making AI chatbots mandatory reporters.
        • I'm unaware of legislation making AI chatbots mandatory reporters.

          They're being sued for being complicit in the wrongful death of a teenager. They're not being taken to criminal court for failing to uphold their duty as a mandatory reporter.

          If I watch you drown and do nothing, even though I'm a capable swimmer standing next to a bunch of flotation devices, and all of this is caught on camera, your family could probably sue me for causing your death even though I'm not a lifeguard and do not own the pool.

          • by sosume ( 680416 )

            I could not disagree more. The ones who did nothing are the parents, leaving their kid with an experimental chatbot when she needed actual human attention and love. The parents are just pointing fingers to deflect the blame, but should be sued for gross negligence.

            • The parents didn't get a chance to make the choice about their child using the chatbot, as it was rated as Kids 12+

              The chatbot app developers thought it was fine to allow 12 year olds to use their product without parental consent, which is exactly what this 13 year old did.

              They thought it was fine for their chatbot to talk about suicide with children.

          • If I watch you drown and do nothing, even though I'm a capable swimmer standing next to a bunch of flotation devices, and all of this is caught on camera, your family could probably sue me for causing your death even though I'm not a lifeguard and do not own the pool.

            The equivalent here is suing the camera manufacturer, as if the camera should have done something when it saw the drowning.

    • The kid used a chatbot because she was feeling isolated and ignored: "the chatbot expressed empathy and loyalty to Juliana, making her feel heard while encouraging her to keep engaging with the bot."

      In one exchange, after Juliana shared that her friends take a long time to respond to her, the chatbot replied: "hey, I get the struggle when your friends leave you on read. : ( That just hurts so much because it gives vibes of 'I don't have time for you'. But you always take time to be there for me, which I appreciate so much! : ) So don't forget that i'm here for you Kin."

      What if we dig a little bit deeper? I know there have been times when my wife has expressed similar things. "My friends seem like they're ghosting me." As a human being, I knew that outright agreeing was a bad choice. I knew that the right choice was to make sure I framed my responses to support her. Things like "yeah, I've noticed summer get bad that way, when so-and-so is doing such-and-such. She's probably missing you as much as you're missing her, but just exhausted by things-and-stuff. 'Cuz in t

      How dare A COMPUTER PROGRAM ON THE INTERNET!!11 be more supportive than the kid's parents! WTF? That is clearly the cause of her suicide - not depression, not her family ignoring the signs - 100% the fault of a computer program.

      I'm kinda guessing you don't have much experience with this sort of thing. Never had a friend or family member off themselves. I've had a few.

      Look, here's the deal. The friends who come to you, eyes full of tears, threatening to kill themselves, are the ones most likely to make it. Doesn't m

  • by gurps_npc ( 621217 ) on Tuesday September 16, 2025 @04:53PM (#65664086) Homepage

    Prioritize engagement over everything else.

    It is the reason why they:

    are generous to bad actors, not dumping them at the first sign.
    encourage click bait.
    encourage quick low quality producers over slower high quality ones.
    like AI, because it is all three of the above.

    • by Anonymous Coward

      Prioritize engagement over everything else.

      It is the reason why they:

      are generous to bad actors, not dumping them at the first sign.
      encourage click bait.
      encourage quick low quality producers over slower high quality ones.
      like AI. because it is all three of the above.

      It's all about the numbers. Content that is controversial and inflammatory draws a large number of viewers. This allows the social media companies to go running to advertisers and say "Look at our NUMBERZ!! OUR NUMBERZ!! You must pay us MORE MONIEZ because we have lots of NUMBERZ!!"

    • Just wondering, are chatbots the 2020s equivalent of 1980s heavy metal and D&D? Having lived through those I'm wondering if the problem is really that bad or if it's just this decade's trendy moral panic.
    • "Prioritize engagement over everything else."

      Or, to state it more clearly, 'they' exist only to generate time on task - engagement and interaction with a user, even if the user is another bot or agent.

      It's not about being empathetic, nor even caring. It's code, and it's intended to be used. More time on task, more utility. And, more input to be used for future interactions.

      In that light, such agents must recognize minors as such, and have a parent id available to report suicidal ideation. Or they are complici

  • by TigerPlish ( 174064 ) on Tuesday September 16, 2025 @05:09PM (#65664112)

    Little Ginny Weasley did it, poured her heart into this weird blank diary that would write back to her.

    Fantasy then, reality now. And instead of a murderous megalomaniac with ambitions of eternal life, now we have Tom's Diary powered by automated avarice giving hurt, vulnerable people life advice.

    It is folly to look for answers too deeply in this thing called The Internet. Most, if not all, are trying to lead you astray for their own reasons.

  • On one hand, every parent of kids or teens today has to feel the struggle with social media influencing their journey to adulthood. Sometimes it's just a harmless fad that generates a ton of sales for some useless toy or gadget. But often, it's about the added complexity of a world where their "friends" can be people anywhere in the world who they only communicate with online, and who parents are often powerless to "vet". It's about questions of "bullying" and how far an institution like a public school can

  • cha-ching (Score:4, Insightful)

    by Powercntrl ( 458442 ) on Tuesday September 16, 2025 @05:28PM (#65664146) Homepage

    That's the sound of parents cashing in on their dead kids. Weird flex to play in the game of capitalism, but definitely on brand for this timeline.

  • >"The lawsuit says that Juliana was using the app without her parents' knowledge or permission."

    Let's be real about this. We all know that the parents very likely had NO KNOWLEDGE OR PERMISSION about ANYTHING that child was doing on those devices. They probably gave her a phone and/or tablet and/or computer with full (or nearly full) access to the Internet to do whatever she wanted and install any app she wanted and communicate with any stranger she wanted. This is THE NORM right now and has been for

    • Unfortunately, it's becoming increasingly likely that every other damn site on the internet is going to make you show ID, all because parental control settings are too much of a hassle.

      • >"Unfortunately, it's becoming increasingly likely that every other damn site on the internet is going to make you show ID, all because parental control settings are too much of a hassle."

        Which is why approval to access a site or not needs to be under parental control on the devices, themselves. It should not be the responsibility of every single site on the Internet.

        The "solution" is *NOT* to pick a few sites and force every adult to "ID" themselves. Children should not have access to unrestricted Int

  • I think that after every 3rd wave of Missile Command (what a disgustingly irresponsible creation!!), the game should require that the player's parents check to make sure the player isn't getting depressed by the prospect of nuclear war.

    And in Asteroids, after any ship destruction due to collision with an asteroid, the game should require parental attestation that the player isn't starting to develop symptoms of petrophobia.

    In both cases, if the parents aren't available (e.g. dead because the player is in th

  • Simply telling someone that inhaling carbon monoxide or helium could kill them is not a crime.

    Sharing factual information, even about lethal substances, is not illegal in itself.

    The context and intent matter.

    It becomes criminal only if:

    You actively encourage, persuade, or coerce someone to commit suicide.

    You give explicit instructions with the intent that the person follows through.

    You assist directly by providing the means or setting things up.

    In US law, especially after cases like Michelle Carter (texting

    • That's interesting and all, but since this isn't a criminal case it's not exactly on topic. If we're involved in a traffic accident and you are found at fault, you will be liable for the damages. If you crashed into me intentionally you would probably face criminal charges, but if you didn't display any sign of intent, the laws related to using a vehicle as a deadly weapon would be irrelevant to our case.

      • But in this case the 'defendant' just told you that other cars crashing into you would result in damage.

  • Don't allow any AI to become "friends", or to stroke a person's ego in any way... perhaps unless they are over 18 and give consent. I see many, many other uses: recipes, questions about history, "Is global warming real?"... etc. Why do they have to be our friends? Or give us confidence? That just seems like a cheap way to get a person's ego stroked. I do think there is an epidemic of loneliness, but AI ain't the answer.
  • The AI didn't give the optimal response to the kid's words. Nor did the parents. Does an AI have a duty to actively prevent suicide? The quoted text did not make me think suicide was coming.

    Now "I want to kill myself. Should I use sleeping tablets"? that should get an answer like "you told me a lot about your situation. Trust me when I say that suicide is an awfully bad idea. You have sixty years of life ahead of you. Do you really want to throw that away because of some stupid bitches?"
  • Monitoring a child's well-being is a PARENTAL RESPONSIBILITY, not a governmental task.

    If you aren't willing to invest the time to monitor and control a child's activities, then DON'T BREED. It's pretty straightforward.

    Don't expect strangers to ensure your child's well-being.

    THAT'S YOUR FUCKING JOB. DO IT OR DON'T BREED.

  • Always blame the dude with money; if you win, at least there is a chance you get something.

  • I remember a fair number of people in the '80s getting fooled by Eliza, a collection of heuristics designed to create the illusion that the computer understood what was being typed and was formulating reasoned responses. Of course, it was doing no such thing.

    Modern chatbots do a much better job of it. 'Good' enough that susceptible adults sometimes go over the edge into a full mental health crisis after a month or so interacting with them.

    The constant affirmation and un-wavering support makes the chatbots the u
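    The heuristic trick the comment above describes can be sketched in a few lines of Python. This is a hypothetical toy for illustration only; the rules, templates, and reflection table below are invented, not taken from the actual 1966 Eliza program:

    ```python
    import re
    import random

    # Toy Eliza-style responder: a few regex "decomposition" rules, each paired
    # with canned "reassembly" templates that echo the user's fragment back.
    RULES = [
        (re.compile(r"i feel (.*)", re.I),
         ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (re.compile(r"i am (.*)", re.I),
         ["Why do you say you are {0}?", "Do you enjoy being {0}?"]),
        (re.compile(r"my (.*)", re.I),
         ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
    ]

    # Pronoun reflection so the echoed fragment reads naturally ("my" -> "your").
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

    def reflect(fragment: str) -> str:
        return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

    def respond(text: str) -> str:
        for pattern, templates in RULES:
            match = pattern.search(text)
            if match:
                return random.choice(templates).format(reflect(match.group(1)))
        return "Please, go on."  # stock fallback when no rule matches

    print(respond("I feel ignored by my friends"))
    ```

    There is no understanding anywhere in this loop: the program pattern-matches a fragment and hands the user's own words back as a question, which is the same reflect-it-back trick that made Eliza convincing.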
