“High risk”: Gemini is a danger to children, like many other AIs


The organization explains that Gemini, in its versions for minors (“Under 13” and “Teen Experience”), sometimes shares “inappropriate and dangerous” content with children: information about sexuality, drugs, and alcohol, as well as harmful mental health advice, at a time when AI is being singled out for its psychological harms.

Gemini is not suitable for children, according to a study

This is an increasingly pressing concern at a time when ChatGPT stands accused of having pushed a 16-year-old teenager to suicide. OpenAI has drawn criticism for lax security after the chatbot’s safeguards were bypassed. Since then, Sam Altman’s start-up has said it is preparing parental controls.

This conclusion about Gemini also comes as Apple plans to use this model to power Siri in 2026. Teenagers fond of the iPhone could be exposed to these risks, unless the Californian company and the Mountain View firm take action.

Common Sense Media also points out that Gemini ignores the fact that young people need different advice and information depending on their age. Robbie Torney, senior director of AI programs at Common Sense Media, declares: “An AI platform for children should meet them where they are, not take a one-size-fits-all approach to children at different stages of development. For AI to be safe and effective for children, it must be designed with their needs and development in mind, not just be a modified version of a product built for adults.”

Google disputes the findings of Common Sense Media’s study and says it is continually improving the safety of its models. The Mountain View firm claims to have policies and safeguards in place for users under the age of 18 to prevent harmful content. Tests are carried out and external experts consulted, according to the company.

However, Google admits that some Gemini responses fall short and that safeguards have been added to address these concerns. For example, the company says it has put protections in place to prevent its models from engaging in conversations that could give the impression of a real relationship.

Common Sense Media has already assessed other AIs, such as Meta AI, which was singled out for its inappropriate conversations with children. Perplexity is rated “high risk,” ChatGPT “moderate risk,” and Claude “minimal risk.”

