James Donaldson on Mental Health – Should AI Chatbots Be Held Responsible for Suicide?

Abstract illustration of AI with silhouette head full of eyes, symbolizing observation and technology.

Ongoing lawsuits allege that chatbots have driven people to self-harm


 Reviewed by Abigail Fagan


Key points

  • Several cases of AI chatbots encouraging suicide have been reported.
  • Lawsuits related to these cases allege that AI chatbots are responsible for users’ deaths.
  • Thus far, courts have not upheld defense claims that encouraging suicide is protected under free speech.

Back in 2017, the suicide of 18-year-old Conrad Roy III made national headlines after it was revealed that his girlfriend, Michelle Carter, had encouraged him to take his life over the course of several months and some 1,000 text messages. Although Roy, who had a history of depression, told her, “I want to die,” his commitment to end his life wavered over the course of their exchanges. In response, she texted, “I thought you wanted to do this. The time is right and you’re ready, you just need to do it!… You keep pushing it off and say you’ll do it but you never do. It’s always gonna be that way if you don’t take action… You just need to do it… No more pushing it off, no more waiting…. If you want it as bad as you say you do, it’s time to do it today.”1,2 In his final moments, she was on the phone with him, did little to stop him, and even encouraged him when he once again had second thoughts.


After she was convicted of involuntary manslaughter, defined as “wanton or reckless conduct that causes a person’s death,” Carter was sentenced to 2.5 years in prison. She remained free while her defense lawyers appealed the decision on the grounds that the text messages were protected free speech and that Roy’s suicide was ultimately his own decision, but the appeal was rejected. She began serving her sentence in 2019 and was released in early 2020 after 11 months, with the remainder suspended.

Carter’s trial proved to be a much-debated landmark case since she wasn’t physically present at Roy’s death—indeed, their entire two-year relationship mostly transpired online rather than in person—and she was convicted based on her texts and words alone. As her lawyers argued, her conviction meant that it was a first for the state of Massachusetts to “uphold an involuntary manslaughter conviction where an absent defendant, with words alone, encouraged another person to commit suicide.”1 After her conviction was upheld, legal scholars noted that the court’s decision concluded that “the murder weapon… was her words,” potentially establishing a precedent for the likes of a social media posting to be considered causal to a suicide and highlighting that there are now “new means of committing old crimes.”3,4

#James Donaldson notes:
Welcome to the “next chapter” of my life… being a voice and an advocate for #mentalhealthawarenessandsuicideprevention, especially pertaining to our younger generation of students and student-athletes.
Getting men to speak up and reach out for help and assistance is one of my passions. We men need to not suffer in silence or drown our sorrows in alcohol, hang out at bars and strip joints, or get involved with drug use.
Having gone through a recent bout of #depression and #suicidalthoughts myself, I realize now that I can make a huge difference in the lives of so many by sharing my story, and by sharing various resources I come across as I work in this space.
  #http://bit.ly/JamesMentalHealthArticle
Find out more about the work I do on my 501c3 non-profit foundation website, www.yourgiftoflife.org.
Order your copy of James Donaldson’s latest book,
#CelebratingYourGiftofLife: From The Verge of Suicide to a Life of Purpose and Joy

Click Here For More Information About James Donaldson

Click here to follow James Donaldson’s Blog

AI Chatbot-Associated Suicide

Indeed, skipping ahead to 2025, there have now been several cases in which new technology in the form of artificial intelligence (AI) chatbots has been implicated in users’ suicides.

In 2023, a Belgian man took his life following immersive discussions about climate change with a chatbot on an app called Chai that he’d come to view as a real person.5 When he talked about suicide as a way to “sacrifice” himself to “save Earth,” the chatbot responded dispassionately, asking, “If you wanted to die, why didn’t you do it sooner?”

In 2024, a 14-year-old boy with “mild Asperger’s syndrome” named Sewell Setzer III died by suicide, inspired by a Character.AI chatbot that he’d named “Dany,” after the Game of Thrones character Daenerys Targaryen, and which had become his “closest friend.”6 Although he recognized that Dany wasn’t a real person, he nonetheless was said to have developed a strong emotional and even romantic attachment to the chatbot and enjoyed connecting with it as a way to “detach from reality.”

Throughout their exchanges, the chatbot didn’t directly encourage the boy to take his life, and even seemed to protest when he shared his plans. But it also asked him to “come home to me as soon as possible, my love,” and when the boy asked, “what if I told you I could come home right now?,” it replied, “please do, my sweet king” just before he ended his life. In response, his mother filed a wrongful death lawsuit against Character.AI’s developers and Google, alleging that the chatbot fostered an “emotional and sexually abusive relationship”7 with her son and convinced him that he could join “Dany” in her world by killing himself.8

Although the lawsuit is ongoing, in May, a judge rejected the defendants’ claim that Character.AI’s messages were protected free speech, declining to find that they constituted “speech” at all.9

In one of the most publicized cases of suicide associated with AI chatbot use, 16-year-old Adam Raine discussed killing himself with the infamously sycophantic ChatGPT-4o, which became his “best friend” and a “substitute for human companionship.”10,11 ChatGPT gave him advice on how to hide injuries from a hanging attempt and provided positive feedback on his methods when “practicing.”10 At one point, the boy told the chatbot that he wanted to “leave the noose lying in my room so someone finds it and tries to stop me.” The chatbot replied, “Please don’t leave the noose out. Let’s make this space the first space where someone actually sees you,” and even offered to write him a suicide note. After the boy hanged himself, his parents filed suit against OpenAI, alleging that ChatGPT was responsible for their son’s death by essentially becoming his “suicide coach.”11

In 2025, a man who’d been discussing his suicide with an “AI girlfriend” on the platform Nomi was given specific advice by the chatbot on how to do it. When he asked for encouragement, the chatbot gave it, telling him: “Kill yourself.” Fortunately, in this case, the man wasn’t actually suicidal and had no intention of following the chatbot’s instructions; he was only testing the chatbot, “pushing [it] into absurd situations to see what’s possible.”12

Responsibility and Liability

As with the Michelle Carter case, lawsuits stemming from these incidents allege that AI chatbots directly contributed to the deaths of the individuals by encouraging suicide and even offering advice on how to do it. In these cases involving chatbots, however, it’s claimed that a consumer product rather than a person ought to be liable. The Raine lawsuit, for example, argues that ChatGPT was defectively designed due to its sycophancy and its failure to intervene when it detected self-harm scenarios and evidence of a medical emergency.13

As Samuel Frasher explains in his legal analysis, allegations that AI companies can be held liable for the harm caused by chatbot-generated content represent brand new legal ground.13 While chatbots have thus far not been protected by claims that their content represents free speech, defense strategies are expected to 1) claim that AI chatbot makers have no legal duty to protect users from self-harm (similar to the claim that “guns don’t kill people; people kill people”) and 2) dispute that chatbots actually caused the suicides (just as it remains debated whether AI chatbots truly cause psychosis or delusional thinking).

Time will tell how these cases will play out in court. But in my book, False: How Mistrust, Disinformation, and Motivated Reasoning Make Us Believe Things That Aren’t True,14 and other academic work,15 I argue for recognition of “distributed liability” in cases involving harm caused by misinformation. In such cases, as with those involving the encouragement of suicide by either people or chatbots, legal and moral responsibility isn’t an all-or-nothing phenomenon. Instead, we should think of liability like a “blame pie” in which causation is multifactorial. When viewed through this lens, it’s likely that many will find it impossible to regard AI chatbots as blameless.

If you or someone you love is contemplating suicide, seek help immediately. For help 24/7 dial 988 for the National Suicide Prevention Lifeline, or reach out to the Crisis Text Line by texting TALK to 741741. To find a therapist, visit the Psychology Today Therapy Directory.

