Whether we fear AI, believe in its great potential, or simply see it as a helpful tool, we don’t have to look to the future to see the technology’s tangible effects.
Today, AI is already responsible for enormous energy and water consumption, high CO2 emissions and large amounts of electronic waste. According to a recent study by the Öko-Institut on behalf of Greenpeace, global greenhouse gas emissions from AI-powered data centres will increase almost sixfold between 2023 and 2030, from 29 to 166 million tons of CO2 equivalents.
Regions with a high density of data centres are already running out of electricity, forcing operators to install their own, often CO2-intensive power sources or to rely on nuclear power. In many regions already affected by drought, protests are emerging against the construction of new data centres. And the rapid turnover of AI hardware is swelling electronic scrap, already the fastest-growing waste stream, even further.
As a result, the technology is leaving a rapidly growing ecological footprint. However, AI-based applications can be responsible for more or fewer CO2 emissions and other environmental impacts. It depends on how we design and use them.
This article gives an overview of the direct and indirect environmental impacts of AI for anyone who develops AI policies, is responsible for AI projects, or is simply interested in the topic. From a sustainability perspective, social and economic aspects are also taken into account. Along the way, we point to starting points for more sustainable AI development and use.
AI – what exactly is it about?
When we speak of artificial intelligence (AI) in the following, we are primarily referring to machine learning (ML). This includes generative AI, but not exclusively.
A simplified way to describe an ML system is as a model-based prediction system that, after a training phase, performs various tasks using statistical analysis of previously entered data. The model makes a prediction based on the data it has seen before.
The pre-selection of data and the training do not happen without human input. For example, image creation is improved with huge datasets of photos that human experts have previously classified as “good” or “bad.” Purchase recommendations are created by tracking what others who were classified as similar to us have bought. And before an AI assistant can give meaningful answers, thousands of real conversations have been fed into the model’s training. Its answers are then composed of predictions about which word is most likely to come next. This approach gave rise to the term “generative AI.” Generative AI models like GPT and others are trained on massive datasets and can take on more and more tasks, from text generation to video production.
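The “most likely next word” idea can be illustrated with a toy example. The corpus and counts below are invented for illustration only; real language models learn such statistics with neural networks over billions of parameters, not simple word-pair counts.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "thousands of real conversations".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a crude stand-in for the statistical
# prediction a language model performs over its training data.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" more often than "mat" or "fish"
```

Generating text then means repeating this prediction word by word, each output becoming part of the context for the next step.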
We have taken a more detailed look at large language models in this article: Are there sustainable language models? Three perspectives from the current debate on AI and GenAI.
Sustainable AI: Is the promise outweighing the problem?
Before we look at how an AI-based application can be designed more sustainably, we should first answer a fundamental question: Is this resource-intensive technology really the best solution for the problem we’re trying to solve?
As simple as this question may sound, it’s often forgotten. The current societal and political discourse around AI is highly ideological and heavily steered by big tech companies. Much of the political and business world has adopted the narrative of AI evangelists, who market the technology’s potential as immense. According to this view, AI is virtually a universal solution for almost everything, whether it’s for better healthcare and administration or the optimisation of industrial processes.
The situation is similar in environmental and climate protection. Given the enormous challenge posed by the climate crisis, a lot of hope is being placed in the technology. The UN is enthusiastic that AI will prove to be a “groundbreaking” and “revolutionary” technology in the fight against climate change. Likewise, current EU Commission documents on Europe’s impending “digital transformation” primarily highlight the opportunities that AI could have for sustainability.
Most political agendas and studies, however, are primarily concerned with potential. Sustainability expert Vlad Coroamă therefore calls this selective perception of the AI industry “chronic ‘potentialitis’.” In reality, however, the actual added value cannot be proven for all AI-based applications. Many scientific studies cast doubt on whether the technology actually contributes to achieving sustainability goals—at least not across the board.

AI for good? No way!
Currently, the use of AI is anything but sustainable. A large portion of AI applications are being used for the improved extraction of fossil fuels. While none of the major tech companies publish detailed information on this, various analyses assume that spending on AI in the oil and gas sector will double to $2.7 billion by 2027. A recent report in The Atlantic also revealed that Microsoft is moving forward with deals worth hundreds of millions of dollars with various oil companies, including ExxonMobil, Chevron, and Shell.
What’s more, all the major tech companies have reported an unprecedented increase in resource consumption in recent years. Due to their large-scale AI implementation, they have thrown their already lax sustainability promises overboard.
To be clear, effective AI-based sustainability solutions may very well exist.
These include areas like the optimisation of the circular economy, reducing resource and energy consumption, local CO2 emission reduction, sustainable mobility, waste separation and disposal and radically improved approaches to Earth observation and the detection of environmental pollution. Here at RESET, we also highlight exciting projects in this area. So, the central question before using and developing AI should always be: Does the specific application actually solve my concrete problem?
🌿 The answer to this question should also always include a sustainability assessment of the model itself. Let’s take a closer look at what that can entail.
The actual results and consequences of an AI-based application are highly context-dependent. To make specific statements about their ecological effectiveness, they must be considered individually and within their specific use case.
AI for sustainability is not automatically sustainable itself
Most “green” programs that aim to promote sustainability through technology tend to assume a harmonious relationship between the digital and ecological transformations. The sustainability of the technology itself is often not considered or remains largely opaque.
One reason for this blind spot is the current lack of will, reliable data, suitable measurement methods, and binding standards to truly assess the sustainability of AI-based applications. This is according to Josephin Wagner, a researcher at the Institute for Ecological Economy Research (IÖW). As part of the SustAin project, she and her colleagues developed a sustainability index for AI. According to Wagner, developers and companies are very secretive when it comes to documenting the sustainability of their systems. “We still hear that disclosing environmental data on AI systems is too complex,” says Wagner. She also points to an information gap: “Discussions with organisations have led us to the conclusion that many organisations and companies don’t even know which criteria are relevant.”
What are the relevant criteria for AI that conserves resources and is oriented towards the common good?
For many, the decisive criterion for the sustainability of an AI system is still the level of CO2 emissions in data centres. For example, labels are being discussed that would show how much CO2 and computing power went into creating an algorithm. However, this view of sustainability falls short given the technology’s enormous overall impact. It forgets that every AI-based application has a long and widely branching supply chain before it is even used for the first time.
We have already become accustomed to this perspective when it comes to classic imported products. Most of us are aware that a chocolate bar has travelled a long way before we hold it in our hands. The raw materials come from one end of the world, from where middlemen ship them to the other. There, they are processed into chocolate and exported to every corner of the planet.
It’s similar with our technologies—only even more complex. Let’s take a language model as an example.
The data used to train it is collected by large companies from all the digital traces that people around the world leave on the internet. This data is often pre-sorted and labelled by subcontractors in low-wage countries like Kenya or Indonesia. The model’s development and training take place in data centres in the USA, whose servers require minerals like lithium, cobalt, and rare earths sourced from the Congo and Vietnam. If we send a query to the finished model, it might be processed in a data centre in Norway. And when the hardware in these data centres becomes obsolete after just a few years, it ends up as e-waste in India, China, or Nigeria.
While smaller AI models are trained with much less and more specific data, they still go through essentially the same phases.
To understand the effects of AI on people and the environment, we have to know every step of its supply chain. This then allows for what’s known as a life cycle analysis.
Phase 1: From raw to training data
Let’s start with the basis of every AI: the data required for training. For large language models and other generative AI in particular, so-called clickworkers sort the data sets for their training. This work is often outsourced to subcontractors in low-wage countries where people are employed under exploitative working conditions. In this context, the NGO AlgorithmWatch has researched how AI training has created a global black market for clickwork jobs.
Additionally, biases can be incorporated during the training of algorithmic systems, as the training data often already contains a bias. This can lead to discrimination and unfair treatment of individuals or groups. Many examples of discriminatory algorithms can be found in sensitive areas like healthcare and criminal justice.
The use of AI in environmental and climate protection has far less direct impact on people. However, ethical risks should not be ignored here either. Let’s take an example: An AI is to be used to decide where to install charging stations based on existing usage patterns of electric vehicles. The AI would probably favour wealthy areas when placing charging stations, as most electric cars are found there, purely based on the data. However, this systematically disadvantages less affluent areas.
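The charging-station scenario can be made concrete in a few lines. The district names and EV counts below are invented; the point is only that a purely data-driven placement rule reproduces the existing imbalance.

```python
# Hypothetical usage data: existing EVs registered per district.
ev_counts = {
    "wealthy_suburb": 420,
    "city_centre": 310,
    "low_income_district": 35,
}

# A naive "AI" placement rule: put the next charging stations where
# current usage is highest, which is exactly what a model trained only
# on existing usage patterns would learn to do.
def place_stations(counts, n=2):
    return sorted(counts, key=counts.get, reverse=True)[:n]

print(place_stations(ev_counts))  # ['wealthy_suburb', 'city_centre']
```

The low-income district never gets a station, and the gap is self-reinforcing: fewer stations there make future EV adoption, and thus future usage data, even less likely.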
Even when AI-based applications are used to combat climate change, they can pose privacy risks. If they access non-personal data, such as meteorological and geographical data, to understand the climate crisis, this is usually unproblematic. It’s different, however, in a scenario where patterns of human behaviour are evaluated to save emissions. The optimisation of heating systems based on the habits of residents is one such case. Knowing when people are at home and which rooms are used at what times constitutes sensitive data.
Data protection and smart data use: ecobee shows how they go together
How can sensitive data remain well protected and be used sensibly at the same time? Ecobee makes it possible with its intelligent thermostats – and wants to reduce the enormous emissions of the building sector.
Of course, it is difficult for a single organisation that uses AI systems to influence all aspects of AI development. “But this organisation can see itself as part of a community that works towards making value chains more transparent and ensuring fair working conditions everywhere,” says Josephin Wagner.
🌿 Essential measures for organisations developing AI:
- Ensure ethical practice and data protection through clear responsibilities.
- Use evaluation frameworks, such as the Fairwork AI Initiative.
- Check the supply chain—ask providers for details on data procurement and working conditions.
- Involve communities, especially the people who will use your system.
- Transparently document what your AI system does and where the data comes from.
- Ensure the transparency and traceability of your decisions.
Phase 2: Training and processing in data centres
To understand what happens in data centres, it’s important to distinguish between two phases: training (or “learning”) and inference.
For an AI model to perform its desired tasks, it’s first trained with selected datasets. Once that training phase is complete, the inference process begins, which is the actual usage phase.
While the training phase is more computationally and thus more energy-intensive in the short term, the more often a model is used, the more the energy consumption of the inference phase increases. For example, a popular model like Google Translate receives billions of requests per day. While training represents a large but one-time energy cost, the majority of today’s energy consumption comes from the continuous use of these models in our daily lives.
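A back-of-envelope calculation shows why inference dominates at scale. All numbers below are illustrative assumptions, not measured values for any real model.

```python
# Illustrative assumptions (not measured values for any real system):
training_energy_kwh = 1_000_000      # one-time training cost
energy_per_query_kwh = 0.001         # per-request inference cost
queries_per_day = 1_000_000_000      # "billions of requests per day"

daily_inference_kwh = queries_per_day * energy_per_query_kwh

# Days until cumulative inference energy overtakes the one-time training cost:
break_even_days = training_energy_kwh / daily_inference_kwh

print(daily_inference_kwh)  # 1000000.0 kWh per day, equal to the whole training run
print(break_even_days)      # 1.0: under these assumptions, a single day of use
```

The exact figures matter less than the structure: a per-query cost multiplied by billions of queries quickly dwarfs any one-time training cost.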
CO2 worries with GenAI? How to use language models more economically
AI chatbots like ChatGPT, Perplexity, or Gemini enrich your daily life, but are you concerned about their sustainability? We’ll show you how to use language models more sparingly—and question whether that’s even a sensible approach in the first place.
The amount of CO2 emissions from the development and use of AI depends on the model’s design and the data centre’s power source. Simply put, smaller models require less energy. And the higher the share of renewable energy in data centres, the lower the CO2 emissions will be in the training and inference phases.
Another resource that’s essential for almost every data centre is water. Indirectly, water consumption varies depending on the energy source that powers the data centre. The direct water consumption comes from the water used to cool the servers. This is particularly high for AI data centres, as the chips get very hot. To make matters worse, the water often comes from regions already plagued by drought.
The hardware needed for data centres, such as microprocessors, requires minerals and rare earths. Where do these come from? Mostly from countries where people mine them under inhumane working conditions. Furthermore, forests are cleared for these mines and groundwater is contaminated with pollutants.
The production of the hardware itself also releases greenhouse gases and is water-intensive. For example, the US company Nvidia, which produces high-end graphics processing units (GPUs) that are essential for AI applications, releases over 2.1 million tonnes of CO2 equivalents in greenhouse gas emissions annually (as of May 2024) and consumes vast amounts of water.
Attention, it’s getting complex!
A noteworthy analysis by Sophia Falk and colleagues carried out a comprehensive life cycle assessment, from production to disposal, of the Nvidia A100 SXM GPU, one of the graphics cards most frequently used for AI training.
Most of the current work on the environmental impact of AI, however, largely focuses on energy-related CO2 emissions during the training phase, as these are relatively easy to measure. In contrast, it is harder to find detailed studies on the CO2 emissions from usage. Even more rarely are water consumption or the mining of rare earths included, as Sasha Luccioni and Yacine Jernite from Hugging Face and Emma Strubell note in one study.
🌿 Essential measures
To get specific statements on the ecological sustainability of an AI application, the first step is to determine the actual level of CO2 emissions and other relevant environmental impacts. In a second step, targeted measures can then be taken to reduce them.
- Measure the impact with tools such as CodeCarbon, Carbontracker or EcoLogits.
- Choose data centres that use renewable energy and have clear sustainability goals. A local choice is not necessarily the best one here. Data centres in Northern Europe, for example, need less energy and water for cooling thanks to lower outside temperatures, and the share of green electricity is higher there.
- Evaluate your systems with frameworks such as model cards and data sheets.
- Integrate monitoring tools into the development process.
- Ask yourself how large your data set really needs to be.
Phase 3: End of life
At the end of the chain is electronic waste: outdated servers that are no longer suitable for training new models. AI hardware becomes obsolete much faster than less powerful servers, which is why it ends up in landfills after just a few years. And where are these landfills? Often in Asian or African countries, where there are hardly any safe recycling facilities and people suffer from the environmental consequences of the highly toxic waste. Find out more here: How toxic electronic waste harms us and our planet.
Whose models do I use?
The development and use of AI are embedded in a highly monopolised market. A handful of global companies control the majority of the IT and cloud infrastructure market. The problem is that big tech companies are focused on growth, which means fundamental values, societal needs and ecological problems often fall by the wayside. They also use their size and vertical integration to create barriers for other players in the market.
This dominance by a few global players also has concrete impacts on communities worldwide. While providers use local resources and infrastructure, the economic value—including tax revenues, job creation, and innovation potential—usually flows out of the regions. This has been proven time and again with data centres.
Furthermore, big tech’s lack of transparency makes it difficult to enforce sustainable practices, as also noted by the SDI Alliance.
🌿 Essential measures:
- AI developers can use open-source models and adapt them if necessary, or make their own models freely accessible.
- AI users have the option to favour AI models from providers who openly share their practices and their impact on the environment and the local economy. This presents an opportunity to promote digital infrastructures that benefit local communities through the creation of jobs, education, skills development, and economic growth. We report on this in more detail in this article: Public Interest AI.
Help with sustainable AI development
Bringing all sustainability aspects together can be very challenging. The Sustainability Index for Artificial Intelligence, developed by Josephin Wagner and her team in the SustAIn project, aims to help with this. The project is a collaboration between the Institute for Ecological Economy Research, the DAI-Laboratory of TU Berlin, and AlgorithmWatch. With the tool, organisations that use and develop AI can check how sustainable their AI systems are.
The approach of the sustainability index is very comprehensive. “The tool is based on a total of 13 sustainability criteria that relate to the ecological, social, and economic effects of AI development and use,” says Josephin Wagner. “Our project partner from the DAI-Laboratory took the lead in developing the ecological criteria, focusing on the direct and indirect impacts of AI on the environment. Naturally, energy consumption and greenhouse gas emissions are very prominent here. Our social criteria focus on human dignity and autonomy, freedom from discrimination, and inclusivity in AI development and use. For example, two of our criteria are an inclusive and participatory design, as well as the cultural sensitivity of the AI system. With the economic criteria, we consider the effects of developing and using AI systems on economic structures and dynamics. Market concentrations and monopolistic structures play a particular role here.”
These sustainability criteria are integrated into the evaluation of AI systems along the complete AI lifecycle. The meaningfulness of the results, of course, depends on how much an organisation knows about its AI systems. “A central finding from the test can therefore also be: we don’t know enough yet about the systems we are developing or applying,” says Wagner.
Those who click through the tool receive an evaluation of their self-assessment at the end and an overview that highlights areas for action. “In addition, for each criterion, it states what must be met at a minimum to be rated as good. For example, for the criterion of transparency and accountability, a code of conduct should be in place that contains comprehensive values and norms and is ideally evaluated regularly,” says Josephin Wagner. The sustainability index also draws attention to existing and easy-to-use methods that can measure the energy and water consumption of AI systems and the emissions they cause.
Of course, the tool doesn’t make AI applications more sustainable at the push of a button. We must remember: sustainable AI development is complex. “The tool’s strength lies in the fact that it shows exactly this complexity and provides organisations with possible leverage points,” says Wagner. And it might even question whether such a thing as sustainable AI can exist—at least in the foreseeable future.
If you allow this thought, follow-up questions arise not just within an organisation, but for society as a whole. Do we really need so many AI applications? Do they truly improve the overall situation? And can we achieve our goal with a low-tech alternative?
Impact AI: Evaluation of the social impact of AI systems for sustainability and the common good
The research project Impact AI: Evaluation of the social impact of AI systems for sustainability and the common good, led by Theresa Züger, is investigating a total of 15 AI initiatives from various sectors. The aim is to systematically and comprehensively assess their actual impact on society and the environment. To this end, a method is being developed that combines indicators such as energy efficiency or emissions caused by the AI system with a qualitative assessment of ethical and social aspects. In this way, both the sustainability of AI and aspects of sustainability through AI are to be visualised.
It is being carried out by the Alexander von Humboldt Institute for Internet and Society (HIIG) in cooperation with Greenpeace and Gemeinwohlökonomie e.V.
Is bigger always better?
Given the current “bigger is better” and “AI-everything” mentality, these questions are entirely justified. Instead of small models trained for a specific task, AI applications that are supposed to perform a variety of tasks simultaneously currently dominate the market. We’re talking about ChatGPT and its peers—we all know them. This change is happening in both research and consumer applications.
However, in view of the enormous energy demands of these large models, it’s often like using a sledgehammer to crack a nut when they are deployed for clearly defined tasks like web searches and navigation. The larger an AI system is, the more devastating its ecological and social impacts are. In many areas, it has been shown that with some development effort, similarly useful results can be achieved with conventional, resource-efficient data processing methods without AI.
The elephant in the room: a lack of transparency and missing data
A problem still remains: since we are missing key figures, the CO2 footprint of the most widespread AI applications is still based on “best-possible estimates,” as Lynn Kaack and other AI researchers have emphasised. AI researchers currently use the few publicly available data points, as well as projections from open, scientific models, to make any claims at all. “We need more data about AI!” is a statement that is as ironic as it is urgent, as a research team led by Sasha Luccioni notes. This is because, although AI is the most data-dependent technology in history, comprehensive data about its environmental impact is still remarkably incomplete. This applies to the entire AI sector, regardless of whether applications are labelled as “green” or not. Chris Adams, head of technology and policy at the Green Web Foundation, also points out in a post that this huge data gap makes the discussion about AI considerably more difficult.
Google, OpenAI, and Mistral recently published figures on the energy and water consumption of their applications; the pressure seems to have grown great enough that total secrecy is no longer a viable strategy. However, the figures are heavily curated, as data analyst Ketan Joshi notes: “In the clever realisation that it’s better to spread a narrative they can control than to leave too much room for speculation from annoying critics, the big players are now releasing selected morsels of information.” Furthermore, the companies seem to want to use the published figures primarily to argue that individual use has only a marginal impact on the environment and climate.
Information on the CO2 emissions of individual search queries says little about the massive impact of billions of queries. Moreover, it leaves out the long AI supply chains and their diverse effects. In addition, the companies’ figures are barely comparable because the respective calculation methods are not disclosed.
To truly make progress in sustainability assessment, uniform and comprehensive reporting and documentation standards are indispensable. This is the only way to record and compare environmental impacts across the entire AI lifecycle. At the same time, such measurements should no longer be left to the industry alone. To prevent greenwashing, independent or public bodies must take on the assessments.
We have shown that AI-based applications have far-reaching impacts on people and the environment throughout their entire life cycle. We have also named various ways in which AI users and developers can influence the sustainability of an AI model. In doing so, we found that a lack of transparency in the AI industry prevents a comprehensive sustainability assessment. Politics must therefore recognise the ecological impact of AI as a risk and establish an appropriate regulatory framework.
There are many ways to regulate AI without reinventing the wheel. To create more transparency in the industry, AlgorithmWatch, for example, is calling for a mandatory AI transparency register for Germany. In the next step, existing infrastructures for environmental assessment and existing environmental incentive systems could also be applied to AI.
This includes a consistent ecological life cycle assessment of digital services, for which corresponding tools are already available in the construction sector, as Lena Winter and Theresa Züger from the Humboldt Institute recommend. For this to happen, the entire digital supply chain—in other words, everything that is needed to provide AI systems—would have to be disclosed and included in the assessment. Carbon pricing could also be extended to digital services, especially those provided outside of Europe.
The EU’s AI Act is a start. For example, since February 2025, it has been forbidden to place certain AI applications classified as harmful by the regulation on the market or to use them in the EU, says Josephin Wagner. As part of the SustAIn project, Wagner and her colleagues have published 13 recommendations for European politicians and AI researchers on how to reduce the environmental impact of AI development.
How can we ensure a green digital future?
Growing e-waste, carbon emissions from AI, data centre water usage—is rampant digitalisation compatible with a healthy planet? Our latest project explores how digital tools and services can be developed with sustainability in mind.
Given the many challenges and the widely branched supply chain, it can hardly be assumed that AI applications can ever be truly sustainable. Especially since the current conditions of an economy geared towards permanent growth make the realisation of sustainable AI difficult, if not impossible.
But of course, we are not helplessly at the mercy of technical developments. “As a society, it is in our hands which technologies we develop and use,” says Wagner. And individually, it’s quite safe to say that it is more sustainable to use AI as little as possible in everyday life.