AI finally reliable? An MIT start-up teaches models to “know they don’t know”

Since late 2022, ChatGPT has profoundly changed how we search the web, learn, communicate and even write.

The major problem of hallucinations

However, behind every response from generative pre-trained transformers (the GPT in ChatGPT) lies a risk. These large language models, built on the Transformer architecture invented by Google, are designed to produce a plausible answer based on a set of probabilities. But they do not always know how to recognize that they do not know. So rather than saying nothing, they “hallucinate”: they give an answer grounded in neither facts nor reality.
A major problem that affects even (and sometimes especially) the most advanced versions of so-called “reasoning” AI, such as OpenAI’s o3 and o4-mini. Yet these “intelligent” systems are increasingly used in fields where the accuracy of the answer is essential: helping to discover new drugs, assisting in the design of new chips, analyzing complex data, and so on.

Teaching AI to know what it doesn’t know

MIT researchers, through a start-up called Themis AI, have just offered an answer. They help a model quantify the uncertainty of its response and correct unreliable results before it hallucinates.
To do so, they created Capsa, a platform that can work with any machine-learning model and correct unreliable results within seconds. Capsa modifies the AI model so that it can detect recurring patterns in its processing of data that indicate ambiguity, an incomplete dataset or a bias. “It will then improve the model,” explained Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-founder of Themis AI, on the MIT website.
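To make the idea concrete, here is a minimal sketch of one well-known uncertainty-quantification technique, Monte Carlo dropout: the model is run several times with dropout left active, and the spread of its predictions serves as an uncertainty score. This is an illustrative assumption about how such a wrapper can work in general, not Capsa’s actual method or API.

```python
# Minimal sketch of uncertainty quantification via Monte Carlo dropout.
# NOT Capsa's actual API; it only illustrates the general idea of wrapping
# a model so each prediction comes back with an uncertainty estimate.
import torch
import torch.nn as nn

class MCDropoutWrapper(nn.Module):
    """Wraps any model containing dropout layers, keeps dropout active
    at inference time, and runs several stochastic forward passes."""
    def __init__(self, model: nn.Module, n_samples: int = 20):
        super().__init__()
        self.model = model
        self.n_samples = n_samples

    def forward(self, x: torch.Tensor):
        self.model.train()  # keep dropout active during inference
        with torch.no_grad():
            preds = torch.stack([self.model(x) for _ in range(self.n_samples)])
        mean = preds.mean(dim=0)         # the prediction itself
        uncertainty = preds.std(dim=0)   # spread across passes = uncertainty
        return mean, uncertainty

# Hypothetical usage: flag answers whose uncertainty exceeds a threshold.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Dropout(0.2), nn.Linear(64, 1))
wrapped = MCDropoutWrapper(model)
x = torch.randn(8, 16)
mean, unc = wrapped(x)
print("prediction:", mean.squeeze())
print("uncertainty:", unc.squeeze())
```

The design point is the one the article describes: the wrapper leaves the underlying model’s task unchanged but adds a signal that says how much to trust each output.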

© Illustration generated with Gemini

Predict failure, prevent error

“We want to enable the use of AI in high-stakes applications across all industries,” said Alexander Amini, the start-up’s other co-founder. “We have all seen examples of AI that hallucinates or makes mistakes. As AI is deployed more widely, these errors could lead to devastating consequences. Themis allows an AI to anticipate and predict its own mistakes before they happen.”
Themis AI is already in talks with pharmaceutical giants and with companies that want to design new semiconductors with AI but have so far hesitated to entrust their data to systems that could produce erroneous solutions.

A new step towards local AI

In addition, this new approach could accelerate the deployment of local AI on our smartphones and PCs, making it more capable, explained Stewart Jamieson, head of technology at Themis AI, on the MIT website. “Normally, the smaller models that run on our phones or embedded systems are not very accurate compared to those running on a server. But now we can reconcile the best of both worlds: the low latency and efficiency of edge computing (when computations are performed locally or as close as possible to the user, editor’s note) without sacrificing quality.”
Themis AI’s head of technology also said he expects that, in the future, most computations will be carried out locally, with queries handed off to a central server as soon as the local model is uncertain of its result.
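In outline, such a hybrid scheme could look like the sketch below, where a query is answered on-device unless the local model’s uncertainty crosses a threshold. Every name and value here (local_predict, ask_server, THRESHOLD) is an illustrative assumption, not Themis AI’s actual design.

```python
# Hedged sketch of the edge/cloud split described above: answer locally
# when the small model is confident, escalate to a server otherwise.
from typing import Tuple

THRESHOLD = 0.3  # assumed uncertainty cutoff, tuned per application

def local_predict(query: str) -> Tuple[str, float]:
    """Placeholder for an on-device model returning (answer, uncertainty)."""
    return "local answer", 0.45  # dummy values for the sketch

def ask_server(query: str) -> str:
    """Placeholder for a call to the larger, server-side model."""
    return "server answer"

def answer(query: str) -> str:
    result, uncertainty = local_predict(query)
    if uncertainty > THRESHOLD:
        # Low confidence locally: fall back to the central server.
        return ask_server(query)
    return result  # confident enough to answer at the edge, with low latency

print(answer("example query"))
```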

The end of hallucinations, for AI that is even more capable and ubiquitous? It remains to be seen what OpenAI and the others working hard on this problem will make of this work. In the meantime, be careful and... keep doubting.
