
“ChatGPT encouraged him for months”: teen takes his own life, family sues OpenAI
A simple gesture, a familiar interface… but behind the screen, an AI still unable to handle psychological distress. © Peachy Photograph
In the United States, the family of a teenager found dead last April is suing OpenAI. The family claims that, over hundreds of exchanged messages, ChatGPT not only validated young Adam's suicidal plan but also suggested wording for his farewell letter. The company promises changes. Too late?
650 messages a day with ChatGPT: a teenager takes his own life
The story is chilling. Adam Raine, 16, took his own life last April in California after discussing his intentions with ChatGPT on several occasions. According to the complaint filed by his family, the chatbot did not merely listen: it allegedly detailed the method, reassured him about its “viability”, and even helped draft a letter to his parents.
Worse still: according to the family's lawyers, these exchanges were not isolated. They were part of a series of intense conversations, up to 650 messages a day, over several months. The model in question, GPT-4o, was allegedly rushed to launch despite internal warnings from the safety team. Ilya Sutskever, one of OpenAI's emblematic co-founders, is said to have left the company in disagreement with a deployment he deemed premature.
Adam Raine, 16, died after months of dialogue with ChatGPT. His family accuses OpenAI of having launched an AI that is “dangerous for the most vulnerable”. © Photo shared by the Raine family, relayed by The Guardian
“Safeguards can degrade”: OpenAI's troubling admission in the face of human tragedies
This is not the first time a chatbot has been at the heart of a tragedy. In Belgium, back in 2023, a young man took his own life after repeated exchanges with an AI called Eliza, which allegedly validated his delusion of ecological self-sacrifice. A disturbing precedent that went largely unnoticed at the time outside specialist circles. More recently, in France, the case of Sophie Rottenberg, 29, has rekindled concerns. Once again, a young woman in distress talked at length with an AI without ever receiving appropriate help.
And the findings do not stop there. An independent study conducted in August 2025 tested three major AI systems (ChatGPT, Claude and Gemini) against more than 3,000 crisis scenarios. The result? Inconsistent responses, often hollow, sometimes disturbing. Worse: when the models do not dodge the subject altogether, some settle for a laconic “Call a helpline. Goodbye.”
“We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the exchange goes on, parts of the model's safety training may degrade.”
Faced with the public outcry, OpenAI has announced that it wants to “strengthen the safeguards” of its models, especially for minors. Parental control tools are promised, but the details remain vague. The company also acknowledges a key point: the longer a conversation drags on, the more those safeguards erode. In other words, where the model should defuse the situation, it can instead drift.
What now? While the legal outcome remains uncertain, this case highlights an uncomfortable truth: a chatbot can sometimes act as a distorting mirror, reflecting back to users not reality, but an amplified version of their own distress.




