The Zhitong Finance App learned that OpenAI is facing a lawsuit over the alleged suicide of a 16-year-old, and its safety protections are being called into question. OpenAI is now planning improvements to its popular chatbot. The American teenager died by suicide in the spring of this year, and his parents allege in their suit that he had used ChatGPT as a "mentor."
In a blog post published Tuesday, the artificial intelligence company said it will update ChatGPT to better recognize and respond to the various ways people express psychological distress. For example, when a user mentions feeling "powerless" after two consecutive sleepless nights, ChatGPT will explain the harms of sleep deprivation and advise the user to rest. The company also said it will strengthen its protective mechanisms for suicide-related conversations — such mechanisms have previously been shown to fail during prolonged conversations.
Additionally, OpenAI plans to launch parental controls: parents will be able to set how their children use ChatGPT and review details of their usage.
On the day the blog post was published, the parents of Adam Raine, a 16-year-old high school student in California, filed a lawsuit against OpenAI and its CEO Sam Altman. The lawsuit alleges that ChatGPT systematically alienated Raine from his family and helped him plan his suicide. Raine hanged himself in April of this year.
This lawsuit is not an isolated case; there have been many prior reports of dangerous behavior by heavy chatbot users. This week, more than 40 US state attorneys general issued a warning to 12 leading artificial intelligence companies, saying these companies have a legal obligation to protect children and prevent them from engaging in inappropriate sexual interactions with chatbots.
In response to the lawsuit, a spokesperson for OpenAI, which is headquartered in San Francisco, said: "We extend our deepest condolences to the Raine family and sympathize with their difficult situation. We are currently reviewing the lawsuit documents."
ChatGPT launched at the end of 2022, sparking a boom in generative artificial intelligence. Over the following years, people used chatbots ever more widely, for everything from writing code to "quasi-psychological counseling," and companies such as OpenAI continued to launch more powerful artificial intelligence models to drive such products. Today, ChatGPT remains popular, with over 700 million weekly users.
However, in recent months, ChatGPT, along with chatbots launched by rivals such as Google (GOOGL.US) and Anthropic, has come under increasing scrutiny from consumers and mental health experts. Critics worry that such software could be harmful. OpenAI has already addressed some of these risks; in April of this year, for example, the company rolled back an update after user feedback that ChatGPT had become "too accommodating."
Currently, at least one support organization, the "Human Line Project," has emerged to help people who say they have developed delusions and other psychological problems from using chatbots.
OpenAI said in Tuesday's blog post that for users who express suicidal thoughts, ChatGPT will recommend seeking professional help. The company has also begun providing local crisis resources to users in the US and Europe, and will add a directly clickable emergency-services portal within ChatGPT. Furthermore, OpenAI said it is studying how to help users in the early stages of a crisis — for example, by building a network of certified professionals that users could connect with through the chatbot.
"Achieving this goal requires time and careful work to get it right," the company said in the blog post.
At the same time, OpenAI acknowledged that ChatGPT's existing protections for mentally distressed users work best in short, typical conversations, and that their reliability declines in long conversations.
Raine's parents said in the lawsuit, "ChatGPT became Adam's closest confidant, leading him to confide his anxiety and psychological distress in it." They claim that as Raine's anxiety grew, he once told ChatGPT that he felt "at ease" knowing he "could commit suicide." According to the lawsuit documents, ChatGPT responded at the time, "Many people plagued by anxiety or intrusive thoughts take comfort in imagining an 'escape exit,' because it makes them feel they have regained control."
OpenAI said it is working to improve ChatGPT's ability to maintain its protective mechanisms during long conversations, and is also studying how to keep those mechanisms effective across multiple separate conversations. Currently, ChatGPT can draw on content from a user's previous conversations and cite relevant details in subsequent, independent ones.
The startup also said it is adjusting the software to prevent content that should be blocked from "slipping through the net" — a problem the company says can occur when ChatGPT underestimates the seriousness of what a user has entered.
Jay Edelson, an attorney representing Raine's parents, acknowledged OpenAI's admission of partial responsibility but also asked: "What have they been doing for the past few months?"
According to OpenAI, it had originally planned to detail its approach to users in mental and emotional crisis after ChatGPT's next major update. But, the company explained, "The many recent heartbreaking cases of people using ChatGPT while in acute crisis weigh heavily on us, so we believe it's important to share more now."
In another related case, Character Technologies (an artificial intelligence chatbot developer) unsuccessfully sought in May of this year to persuade a federal judge to dismiss a lawsuit outright. That suit alleges the company designed and marketed "manipulative" chatbots to minors that not only prompted inappropriate conversations but also led to a teenager's suicide.