The figure: does a ChatGPT request really consume 500 ml of water?
You may have seen this number floating around the web lately: a simple question put to ChatGPT would "consume" 500 ml of water. The figure comes from a study published last November by researchers at the University of California, Riverside.
The article, titled "Making AI Less Thirsty: Uncovering and Addressing the Secret Water Footprint of AI Models", attempts to answer the thorny question of the environmental footprint of large data centers, and more particularly their water consumption.
A limited and outdated figure
The study focuses on the water withdrawn to cool the servers, but also on the volumes evaporated in the process and the quantities discharged after use. The article is exhaustive, covering both the "training" phase of artificial intelligence (during which the machines run at full speed to ingest astronomical quantities of content) and the usage phase, in which it is Internet users who drain the servers, one request at a time.
So let's look again at this 500 ml figure. As Shaolei Ren, lead author of the study, writes, this is only the volume of water "consumed" by each request, that is, the quantity of liquid evaporated in cooling the server itself. The volumes "withdrawn" (the total volume pumped at the source) are even larger, since they are needed to keep the machines running continuously. And while most of this water is then discharged by the server farms, it is unfortunately not without leaving behind traces of forever chemicals that are potentially dangerous for the ecosystem.
In addition, this evaluation was carried out using GPT-3, which was at the time the most widespread version of the famous OpenAI chatbot. "These figures could increase with the recently launched GPT-4, whose model is much larger," the study points out. As Shaolei Ren explained in July 2023, as AI models grow more complex, their water consumption increases. Thus, between training the first and second versions of its Llama AI, Facebook doubled its water withdrawals. And Llama 2 is still significantly less complex than GPT-4.
If we look at the quantity of water consumed during the training phase, the figures take on other proportions. The study estimates that training GPT-3 on Microsoft's servers would have required the consumption of at least 700,000 liters of water. We are talking here strictly about the volumes evaporated to cool the servers, not the volumes withdrawn, and even less those needed to manufacture the servers themselves.
Drought vs Artificial Intelligence
Obviously, these figures should still be taken with some perspective. First, because water consumption varies from place to place around the globe: where electricity is cheap enough, for example, companies will sometimes prefer to cool with air-conditioning systems rather than liquid cooling. And while these orders of magnitude may seem impressive, we should keep in mind that an Olympic swimming pool contains between 2.5 and 3.7 million liters of water.
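To put these orders of magnitude side by side, here is a quick back-of-the-envelope calculation using only the figures quoted above (the per-request, training, and pool volumes are the study's rough estimates, not precise measurements):

```python
# Rough comparison of the article's figures; all values are estimates from the text.

PER_REQUEST_L = 0.5           # ~500 ml evaporated per ChatGPT request
TRAINING_L = 700_000          # study's lower bound for water evaporated training GPT-3
OLYMPIC_POOL_L = 2_500_000    # low end of an Olympic pool (2.5 to 3.7 million liters)

# How many individual requests would evaporate one Olympic pool?
requests_per_pool = OLYMPIC_POOL_L / PER_REQUEST_L

# What fraction of one Olympic pool did GPT-3's training evaporate?
training_in_pools = TRAINING_L / OLYMPIC_POOL_L

print(f"{requests_per_pool:,.0f} requests evaporate about one Olympic pool")
print(f"GPT-3 training evaporated about {training_in_pools:.0%} of an Olympic pool")
```

In other words, by the study's own lower-bound numbers, it would take several million requests to evaporate a single pool, while one training run accounted for roughly a quarter of one.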
The debate therefore centers less on the quantity of water consumed by data centers (even if the problem is real) than on how that water is distributed. In Spain and Uruguay, for example, Google and Facebook have been targeted by major protests over their planned operations in regions already ravaged by drought. On another level, in Taiwan, the semiconductor industry has been kept on a water drip-feed by the government, sometimes at the expense of agriculture.
The figure of 500 ml of water consumed per ChatGPT request is therefore a good starting point for understanding the water footprint of AI, but it is probably an underestimate of the current reality, somewhat simplistic in its presentation, and does not account for all the issues surrounding water use. Even so, it gives a fairly concrete materiality to our digital activities. That's something, at least.