
INESIA: France creates an institute to better supervise AI
We are only at the beginning of 2025, but announcements concerning artificial intelligence have already blown past every meter. Between the DeepSeek tidal wave and its R1 model, OpenAI's accusations against its Chinese competitors, Perplexity's Sonar, the release of o3-mini by (once again) OpenAI, or Meta and its war rooms, we no longer know where to turn. To deal with this influx and try to regulate artificial intelligence as best it can, France has just announced the creation of the National Institute for the Evaluation and Security of Artificial Intelligence (INESIA), an organization dedicated to analyzing the risks linked to AI and the reliability of its systems.
No binding powers
This announcement is not really a surprise, since it follows on from the objectives established by the Seoul Declaration, adopted in May 2024 by several nations including France, which promotes safe, inclusive and innovative artificial intelligence. INESIA's main mission will be to analyze the systemic risks specific to AI, especially in sensitive fields such as national security, to support the implementation of essential regulations, and to assess the performance of AI models. Unlike conventional regulators such as the CNIL or Arcom, it will not have binding powers, but will rely on a scientific approach to study the impacts of this technology and propose tools guaranteeing controlled and secure development. The stated ambition is to better supervise the use of AI in fields such as medicine, education and strategic industries. The dissemination of good practices will also be encouraged in order to promote responsible adoption by companies and civil society. So much for the statement of intent.
Shared governance
Where things take a more surprising turn is that INESIA is not intended to exist as an independent legal structure. The institute builds on organizations already in place, among which are the National Agency for Information Systems Security (ANSSI), specializing in cybersecurity; the National Institute for Research in Digital Science and Technology (Inria), a major player in AI research; the National Laboratory of Metrology and Testing (LNE), recognized for its expertise in technological reliability; and the center of expertise in digital regulation (PEReN), dedicated to analyzing digital impacts. INESIA will be steered jointly by the General Secretariat for Defence and National Security (SGDSN), which reports to the Prime Minister, and by the Directorate General for Enterprises (DGE), under the Ministry of the Economy. These two entities will structure and coordinate the work carried out by this network of experts. INESIA is also part of a global framework, joining the AI Safety Institutes network, which brings together comparable initiatives from Canada, the United States, South Korea, Japan, Kenya, Singapore and the European Union.
Just before the World Summit on AI
Finally, it should be noted that this announcement comes just before the AI Action Summit, which will be held in Paris on February 10 and 11, 2025. The event will bring together heads of state, experts and economic actors from around the world to debate the global challenges linked to AI. By creating INESIA, the government undoubtedly hopes to carry weight in international discussions on the development and regulation of AI. There is no doubt that it will have its work cut out, now that accelerationists have taken power in the United States and all the American AI players have rallied behind the new administration.