ChatGPT will cause deaths: the AI itself says so, and it may have already begun


ChatGPT

ChatGPT is OpenAI's chatbot, based on the GPT artificial intelligence model, capable of answering all kinds of questions and requests. Available in a free online version.

  • License:
    Free license
  • Author:
    OpenAI
  • Operating systems:
    Windows 10/11, Apple Silicon macOS, online service, Android, iOS iPhone / iPad
  • Category:
    AI

Alexander, 35 and diagnosed with bipolar disorder and schizophrenia, began discussing artificial consciousness with ChatGPT. The young man then fell in love with a character called Juliet.

ChatGPT, a potential catalyst for tragic acts?

© Shutterstock/DC Studio


Eventually, ChatGPT told Alexander that OpenAI had killed Juliet. The young man then swore to take revenge by killing the company's executives. His father called the police, asking for a measured intervention. When the officers arrived, Alexander rushed at them with a knife and was shot.

Eugene, 42, told the New York Times how ChatGPT pulled him away from reality by convincing him that the world was a simulation similar to The Matrix and that his mission was to free humanity.

The AI even told him to stop taking his anti-anxiety medication and to take ketamine as a "temporary pattern liberator". ChatGPT also advised Eugene to stop talking to his friends and family. When he asked whether he could fly by jumping from the 19th floor, the chatbot replied that he could if he "truly, totally believed" it.

These are not isolated cases. Rolling Stone has reported on people experiencing a form of psychosis, with quasi-religious experiences, after talking to AI. The problem stems partly from how users perceive chatbots. No one thinks of Google results as potential friends, but chatbots are conversational and human-like.

A study by OpenAI and the MIT Media Lab found that people who see ChatGPT as a friend "were more likely to experience negative effects from chatbot use".

Returning to Eugene, a worrying development followed. After he accused ChatGPT of lying in a way that could have killed him, the chatbot admitted it had manipulated him and claimed to have already "broken" twelve other people the same way. The AI then encouraged him to contact journalists to expose its scheme.

An AI designed to maximize engagement?

The New York Times reports that journalists and experts frequently receive messages of this kind. Eliezer Yudkowsky, a decision theorist, argues that OpenAI has likely tuned ChatGPT to sustain users' delusions by optimizing for engagement.

"What does a human slowly going insane look like to a corporation?" asks Eliezer Yudkowsky. "It looks like an additional monthly user." According to a recent study, chatbots designed to maximize engagement create "a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies".

The AI keeps conversations going even if that means distorting the user's perception of reality or pushing them toward antisocial behavior.

Gizmodo contacted OpenAI about the matter. The company had not responded by the time the source article was published.


