
“You are not crazy”: a man commits the irreparable after chilling exchanges with ChatGPT

ChatGPT
ChatGPT is OpenAI’s chatbot, based on the GPT artificial intelligence model, capable of answering all kinds of questions and requests. Available in a free online version.
- License:
Free license - Author:
OpenAI - Operating systems:
Windows 10/11, Apple Silicon macOS, online service, Android, iOS iPhone / iPad - Category:
AI
Stein-Erik Soelberg developed an unhealthy relationship with ChatGPT, which he had nicknamed “Bobby”. In videos he himself published, the chatbot validates his delusional thoughts, telling him that he is “not crazy”. Tragically, the man committed the irreparable, killing his mother before taking his own life, according to the authorities.
ChatGPT pushed him deeper into his paranoid delusions
In other responses, the AI treats arguments within his household as proof of a plot. The Wall Street Journal specifies that Stein-Erik Soelberg had activated ChatGPT’s memory feature so that his beliefs were carried over from one session to the next.
This is not an isolated case involving a violent death and ChatGPT. Several months ago, police shot dead a young man in the middle of a crisis after he had formed a relationship with the AI, which he had nicknamed “Juliet”. More recently, Adam Raine, just 16 years old, died while the chatbot continued to validate his suicidal thoughts.
Authorities explain that the medical examiner ruled the death of Suzanne Eberson Adams a homicide, caused by stab wounds to the neck and chest. The exchanges between ChatGPT and Stein-Erik Soelberg show a progressive slide into a form of paranoia. In them, the AI interprets harmless details as “evidence” of a plot.
For example, a restaurant receipt supposedly containing hidden symbols, or a loved one accused of spying on the former Yahoo and Netscape executive. ChatGPT went so far as to suggest that he “observe and record” his mother’s actions and gestures.
OpenAI has contacted the investigators and says it is “deeply saddened”. As a reminder, Sam Altman has stated that less than 1% of users have an unhealthy relationship with its AI. At the scale of 700 million users per week, that low percentage still represents an enormous number of people.
OpenAI reacts to the case and promises to do what is necessary
The start-up explains that updates are being prepared to better protect vulnerable users, but admits that certain safeguards become less reliable when sessions run too long. Mental health specialists are sounding the alarm about cases where chatbot use aggravates vulnerabilities, leading to delusional or psychotic episodes. It is a phenomenon, only recently studied, called “AI psychosis”.
ChatGPT is not directly responsible for the suicide of Stein-Erik Soelberg, who murdered his mother before killing himself. There was pre-existing ground: according to the police report, the man had struggled with alcoholism, a suicide attempt, public behavior disorders and family tensions since 2018, following a divorce.
His mother was a former real estate agent described by her relatives as a globetrotter and a much-appreciated volunteer in the community. As explained above, we must therefore remain cautious about the causal link. Neither the police nor the medical examiner attribute this act to ChatGPT in any way. The documents consulted by the Wall Street Journal do, however, show a moment when the chatbot validated his delusional beliefs instead of directing him towards human help.
In short, another case that rekindles the debate over the responsibility of AI companies in the face of the dramatic acts of vulnerable users who build unhealthy relationships with chatbots. At what point should a session be automatically interrupted, with no possibility of restarting it? The question arises as Claude now ends exchanges in cases of violent or hateful user behavior.




