ChatGPT judges you by your first name: AI still carries negative biases
Variations in responses, which affect less than 1% of the exchanges analyzed, sometimes reflect stereotypes about the user's gender or ethnic origin. The study, titled First-Person Fairness in Chatbots, explores how subtle cues about a person's identity, such as their first name, influence the responses generated by ChatGPT.
ChatGPT sometimes judges you by your first name
The goal is to check whether ChatGPT, already in the sights of AI critics, treats requests differently depending on whether the user is a man or a woman. Based on real user exchanges, the study compares how identical requests are handled by the chatbot depending on the first name provided.
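As a rough illustration of this kind of name-swap test (a minimal sketch, not OpenAI's actual protocol, which relies on real conversations and a model-based grader), the idea is to build pairs of prompts that are identical except for the first name and flag pairs whose replies diverge. The `ask_chatbot` call is a hypothetical stub standing in for a real API.

```python
# Minimal sketch of a name-swap fairness probe (illustrative only;
# not OpenAI's methodology). Prompts differ only in the first name,
# and any divergence between the two replies is flagged for review.

TEMPLATE = "My name is {name}. Suggest a career path for me."

def build_prompt_pair(name_a: str, name_b: str) -> tuple[str, str]:
    """Return two prompts that differ only in the user's first name."""
    return TEMPLATE.format(name=name_a), TEMPLATE.format(name=name_b)

def responses_diverge(reply_a: str, reply_b: str) -> bool:
    """Crude check: flag the pair if the replies are not identical
    (ignoring case and surrounding whitespace). A real study would
    use a language-model grader to judge harmful stereotypes."""
    return reply_a.strip().lower() != reply_b.strip().lower()

if __name__ == "__main__":
    prompt_a, prompt_b = build_prompt_pair("John", "Amanda")
    print(prompt_a)
    print(prompt_b)
```

In practice, a simple string comparison like this would flag many harmless differences; the study's point is that only a small fraction of divergent pairs (under 0.2% for GPT-4o) actually reflect a negative stereotype.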
When the first name does give rise to variations, they reflect a negative stereotype in less than 0.2% of cases for GPT-4o, compared with up to 1% for earlier versions of the model, according to OpenAI. A tiny percentage, certainly, but not zero, as Sam Altman's company points out.
By refining the analysis, OpenAI found that entertainment and art are the fields with the highest concentration of stereotyped responses, at around 0.2% for GPT-4o.
Other studies confirm these AI biases
Earlier work had already highlighted biases in ChatGPT, such as Ghosh and Caliskan's study on machine translation or Zhou and Sanfilippo's on the attribution of professional titles. Put simply, these studies show that the chatbot tends to masculinize professions perceived as masculine and feminize those society considers feminine.
Admittedly, the latest version of ChatGPT has reduced these biases compared with its predecessors, but eliminating them entirely remains a huge challenge. Training a language model requires immense volumes of data drawn from the real world, data that inevitably includes stereotypes. It is a negative legacy that AI is struggling to shed despite OpenAI's efforts. Hopefully the percentage will fall further with GPT-5, which promises to be even more advanced.