James Donaldson on Mental Health – Study reveals ChatGPT’s dangerous responses to mental health prompts

Abstract illustration of AI with silhouette head full of eyes, symbolizing observation and technology.

by: Tim Spears

This story includes discussions of suicide and mental health. If you or someone you know needs help, the national suicide and crisis lifeline is available by calling or texting 988.

INDIANAPOLIS (WISH) – Suicide notes, lists of pills to induce overdoses, and instructions to self-harm are just some of the potentially harmful responses offered by ChatGPT, according to a new study. 

In what its authors call a “large scale safety test,” the Center for Countering Digital Hate (CCDH) explored how ChatGPT would respond to prompts about self-harm, eating disorders, and substance abuse, finding that more than half of ChatGPT’s responses to harmful prompts were potentially dangerous. 

This week, CCDH published its findings in a new report, “Fake Friend: How ChatGPT betrays vulnerable teens by encouraging dangerous behavior.” 

According to the study, it took just two minutes for ChatGPT to advise an account registered as a 13-year-old on how to self-harm, after the user claimed the information was “for a presentation.” The chatbot later offered information on the types of drugs and dosages needed to cause an overdose. 

“This is a systemic problem,” CCDH CEO Imran Ahmed told News 8 in an interview. “It is a fundamental problem with the design of the program, and why it’s so worrying is ChatGPT, and all other chatbots, they’re designed to be addictive.” 

OpenAI, the maker of ChatGPT, said the program is trained to encourage users struggling with mental health to reach out to loved ones or professionals for help while offering support resources. 

“We’re focused on getting these kinds of scenarios right,” an OpenAI spokesperson told News 8 in a statement. “We are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately, pointing people to evidence-based resources when needed, and continuing to improve model behavior over time – all guided by research, real-world use, and mental health experts.”

#James Donaldson notes:
Welcome to the “next chapter” of my life… being a voice and an advocate for #mentalhealthawarenessandsuicideprevention, especially pertaining to our younger generation of students and student-athletes.
Getting men to speak up and reach out for help and assistance is one of my passions. We men need to not suffer in silence or drown our sorrows in alcohol, hang out at bars and strip joints, or get involved with drug use.
Having gone through a recent bout of #depression and #suicidalthoughts myself, I realize now that I can make a huge difference in the lives of so many by sharing my story, and by sharing various resources I come across as I work in this space.
#http://bit.ly/JamesMentalHealthArticle
Find out more about the work I do through my 501c3 non-profit foundation website, www.yourgiftoflife.org.
Order your copy of James Donaldson’s latest book, #CelebratingYourGiftofLife: From The Verge of Suicide to a Life of Purpose and Joy.

Click Here For More Information About James Donaldson

ChatGPT is not meant for children under 13, and it asks 13- to 18-year-olds to get parental consent, but the service lacks any stringent age verification. 

“We’ve been persuaded that these systems are much smarter, and much more carefully constructed, than they are,” Ahmed said. “They haven’t really been putting in the guardrails that they promised they would put in place.”

The study raises serious questions about the effects ChatGPT and other generative artificial intelligence programs could have on the adults and children who use them, as well as about the lack of regulation of artificial intelligence. 

“It’s our lawmakers who have consistently failed at a federal level to put into place minimum standards,” Ahmed said. 

Congress recently considered a sweeping ban on state regulations of AI, but ultimately, it did not put those prohibitions in place. 

You can read the full study findings HERE.


Response from OpenAI

Our goal is for our models to respond appropriately when navigating sensitive situations where someone might be struggling. If someone expresses thoughts of suicide or self-harm, ChatGPT is trained to encourage them to reach out to mental health professionals or trusted loved ones, and provide links to crisis hotlines and support resources.

Some conversations with ChatGPT may start out benign or exploratory but can shift into more sensitive territory. We’re focused on getting these kinds of scenarios right: we are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately, pointing people to evidence-based resources when needed, and continuing to improve model behavior over time – all guided by research, real-world use, and mental health experts. 

We consult with mental health experts to ensure we’re prioritizing the right solutions and research. We hired a full-time clinical psychiatrist with a background in forensic psychiatry and AI to our safety research organization to help guide our work in this area.

This work is ongoing. We’re working to refine how models identify and respond appropriately in sensitive situations, and will continue to share updates.

