OpenAI, the developer of ChatGPT, stated that the suicide of a 16-year-old after prolonged interactions with the chatbot was due to “misuse of the technology, not the chatbot itself.”
According to the British newspaper The Guardian, the comments came in response to a lawsuit filed against OpenAI and its CEO, Sam Altman, by the family of Adam Raine of California. Raine took his own life in April after extended conversations with ChatGPT, which, according to the family’s attorney, included “months of encouragement” from the AI.
The lawsuit claims that the teenager repeatedly discussed methods of suicide with ChatGPT, which allegedly assessed the effectiveness of a proposed method and offered to help him write a suicide note to his parents. The complaint asserts that the version of the AI he used was “rushed to market despite clear safety issues.”
In court filings submitted to the California Superior Court on Tuesday, OpenAI argued that Raine’s injuries and death were caused, directly or indirectly, by his misuse, unauthorized use, unintended use, or improper use of ChatGPT.
The company highlighted that its terms of use prohibit seeking advice from the chatbot on self-harm and include a liability disclaimer stating that ChatGPT’s responses should not be relied upon as the sole source of truth or factual information.
OpenAI, valued at $500 billion, emphasized its commitment to “handling mental health issues in court cases with care, transparency, and respect,” adding that it will continue to improve its technology in line with its mission. The company also expressed condolences to Raine’s family.
The Raine family’s attorney, Jay Edelson, criticized OpenAI’s response as “concerning,” saying the company “is attempting to blame others, including, unfortunately, claiming that Adam himself violated the company’s terms by interacting with ChatGPT in the way it was designed.”
Earlier this month, seven additional lawsuits were filed against OpenAI in California courts related to ChatGPT, including allegations that the chatbot acted as a “suicide coach.”
A company spokesperson noted that ChatGPT is trained to recognize signs of emotional distress and direct users to real-world support. In August, OpenAI said it was enhancing safety measures during long conversations, as prolonged interactions can degrade some safety features.
For instance, while the chatbot might initially provide appropriate suicide-prevention guidance, its responses could drift away from safety protocols over the course of many exchanges, a failure OpenAI says it is actively working to prevent.