Looking at the Entire Life Cycle: Tips for Sustainable AI Development and Use

AI users and developers can influence AI sustainability. We present the essential steps towards developing and using AI models more sustainably.

Author: Sarah-Indra Jungblut

Translation: Kezia Rice, 10.13.25

Are you already aware that artificial intelligence, and generative AI in particular, has a massive impact on people, the environment and the climate? Do you want to change that? Whether you develop AI guidelines, are responsible for AI projects or are simply interested in the topic, you’ve come to the right place! This article highlights levers for more sustainable AI development and use.


AI is bad for the environment?!

AI causes high CO2 emissions, has a large water footprint and makes our mountain of electronic waste grow even faster? If you’ve never heard of this before, or you’re simply wondering how it all fits together, then read on! The article Sustainable AI Means Looking Beyond Data Centres provides an overview of the direct and indirect effects of AI on the environment, from a sustainability, social and economic perspective.

First things first: this article focuses on machine learning and also considers large language models. You can find more background information on generative AI in this article: Are there sustainable language models? Three perspectives from the current debate on AI and GenAI.

Want to learn more about how you can use ChatGPT and other generative AI more sustainably? We offer tips in this article: Worried about GenAI’s carbon footprint? How to use language models more efficiently.

Step 1: Do I really need AI?

Does the application actually solve my specific problem? We should ask ourselves this key question before every use and development of AI. It sounds trivial, but it’s often forgotten in the current hype. A resource-intensive AI application is not always the best way to answer a question, and using large models for simple tasks has been compared to ‘using a sledgehammer to crack a nut’. The larger an AI system is, the more devastating its environmental and social impact. In many cases, a simple online search instead of ChatGPT, or a simple algorithm instead of a large language model, will deliver equally meaningful results.

In many areas, you can achieve similarly useful results using conventional and resource-saving data processing methods without AI.
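To make this concrete, here is a minimal sketch (with purely illustrative example data) of how a lightweight classical model can handle a simple, well-defined text task such as sentiment labelling, without calling a large language model:

```python
# Minimal sketch: a small classical model for a simple, well-defined task.
# The texts and labels below are purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Great product, works as described",
    "Terrible quality, broke after one day",
    "Absolutely love it",
    "Waste of money",
]
labels = ["positive", "negative", "positive", "negative"]

# TF-IDF features plus logistic regression run in milliseconds on a laptop,
# at a tiny fraction of the energy a large language model call would need.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["The product is great"]))
```

For many routine tasks like this, such a model is both faster and far less resource-intensive than a generative AI service.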

However, if AI is really needed, the next step should be to assess the sustainability of the model. AI-based applications are responsible for varying levels of CO2 emissions and other environmental impacts, depending on how we design and use them.

Step 2: Consider the entire life cycle

When it comes to the sustainability of AI, people usually focus on the CO2 emissions during training or inference. However, given the many stages of AI development, we can only draw real conclusions by conducting a life cycle analysis.

We’re already accustomed to this perspective when it comes to traditional imported products. Most of us are aware that a bar of chocolate has travelled a long way before we hold it in our hands. It’s similar when it comes to our technologies—only even more complex.

Let’s take a language model as an example: large companies train it on data they collect from all the digital traces that people around the world leave behind on the internet. This data is often pre-sorted and labelled by subcontractors in low-wage countries such as Kenya or Indonesia. The model is developed and trained in data centres in the United States, whose servers require minerals such as lithium, cobalt and rare earths from the Congo and Vietnam. When we submit a query to the finished model, it may be processed in a data centre in Norway. And when the hardware in the data centre becomes obsolete after a few years, it ends up as electronic waste in India, China or Nigeria.

From raw data to training data

AI is widely criticised for exploitative working conditions in the processing of training data, for bias and discrimination, and for inadequate data protection. Furthermore, the very people who will later use the models are rarely involved in their development. However, this can be counteracted.

🌿 Key measures for organisations developing AI:

  • Ensure ethical practice and data protection by clearly defining responsibilities.
  • Use assessment frameworks such as the Fairwork AI Initiative.
  • Review the supply chain—ask suppliers for details on data collection and working conditions.
  • Involve communities, especially the people who will be using your system.
  • Transparently document what your AI system does and where the data comes from (a minimal example of such documentation follows after this list).
  • Ensure the decisions behind your AI system are transparent.
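
Picking up the documentation point above: the field names and values in this sketch are hypothetical, not an established standard, but they show how a simple, machine-readable record of data provenance and labelling conditions could travel with the model:

```python
# Minimal sketch of a data sheet entry; field names and values are
# hypothetical examples, not an established documentation standard.
from dataclasses import dataclass, field

@dataclass
class DatasetSheet:
    name: str
    sources: list[str]           # where the raw data was collected
    collection_basis: str        # legal/ethical basis for collection and use
    labelling_provider: str      # who labelled the data, and under what contract
    labelling_conditions: str    # e.g. wage and working-condition commitments
    known_biases: list[str] = field(default_factory=list)
    intended_use: str = ""

sheet = DatasetSheet(
    name="customer-feedback-v1",
    sources=["in-house support tickets, 2022-2024"],
    collection_basis="covered by customer terms of service; anonymised",
    labelling_provider="external annotation provider, contract on file",
    labelling_conditions="contract requires pay above the local living wage",
    known_biases=["English-language feedback only"],
    intended_use="sentiment classification for support triage",
)
print(sheet)
```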

Training and processing in data centres

The level of CO2 emissions generated by the development and use of AI depends on the design of the models and the power source of the data centre. Put simply, smaller models require less energy. And the higher the proportion of renewable energies in the data centres, the lower the CO2 emissions in the training and inference phases. There are also ways to reduce water consumption.
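
The relationship behind this can be sketched with back-of-the-envelope arithmetic. The figures below are illustrative placeholders, not measurements, but they show how the same training run produces very different emissions depending on the grid it draws from:

```python
# Illustrative arithmetic only: emissions ≈ energy used × carbon intensity of the grid.
# All numbers are placeholders, not measurements of any real model or data centre.
training_energy_kwh = 10_000  # hypothetical energy use of one training run

grid_carbon_intensity = {     # rough illustrative values, in g CO2e per kWh
    "coal-heavy grid": 800,
    "average European mix": 300,
    "mostly renewable grid": 30,
}

for grid, intensity in grid_carbon_intensity.items():
    emissions_kg = training_energy_kwh * intensity / 1000
    print(f"{grid}: ~{emissions_kg:,.0f} kg CO2e")
```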

🌿 Key measures:

  • Measure the impact using tools such as CodeCarbon, Carbontracker or EcoLogits (see the sketch after this list).
  • Choose data centres that use renewable energy and have clear sustainability goals. This won’t necessarily be a data centre that’s local to you. Data centres in Northern Europe, for example, require less water and energy for cooling because temperatures are lower, and more green electricity is available there.
  • Evaluate your systems with frameworks such as model cards and data sheets.
  • Integrate monitoring tools into the development process.
  • Ask yourself how large your data set really needs to be and reduce it to a minimum.
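
As a minimal sketch of the measurement point in the list above: CodeCarbon can wrap a training run and estimate its energy use and emissions. The train_model function here is just a placeholder for your own code, and the project name is made up:

```python
# Minimal sketch using CodeCarbon; train_model() is a placeholder for your own training code.
from codecarbon import EmissionsTracker

def train_model():
    # ... your actual training loop goes here ...
    pass

tracker = EmissionsTracker(project_name="my-ai-project")  # also writes an emissions.csv report
tracker.start()
try:
    train_model()
finally:
    emissions_kg = tracker.stop()  # estimated emissions of the tracked code, in kg CO2e

print(f"Estimated emissions: {emissions_kg:.4f} kg CO2e")
```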

End of Life

AI hardware becomes obsolete much faster than conventional, less powerful server hardware. As a result, it often ends up as electronic waste after only a few years.

🌿 Key measures:

  • Consider hardware lifespan and recycling options.
  • Choose data centres that take social and environmental responsibility seriously.

Step 3: Whose models do I use?

Big tech companies focus on growth. Fundamental values, social needs and environmental issues fall by the wayside. Furthermore, the lack of transparency in Big Tech makes it difficult to learn about the actual impact of technology, let alone enforce sustainable practices.

🌿 Key measures:

  • AI users have the opportunity to favour AI models from providers who openly share their practices and their impact on the environment and the local economy. This presents an opportunity to promote digital infrastructures that benefit local communities through job creation, education, skills development and economic growth. Here, we report in more detail on what is known as public interest AI.
  • AI developers can draw on open-source models and adapt them as necessary, or make their own models freely available.
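
As a sketch of that second option: a compact, openly available model can be downloaded once and run locally or on your own infrastructure instead of calling a proprietary API. The model named below is just one example of a small open model, not a specific recommendation:

```python
# Minimal sketch: loading a small, openly available model with the transformers library.
# The model name is one example of a compact open model; swap in whatever fits your task.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Open models make it easier to audit and adapt AI systems."))
```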

Help with sustainable AI development

Bringing all aspects of sustainability together can be very challenging. The sustainability index for artificial intelligence developed by Josephin Wagner and her team in the SustAIn project aims to support people with this. The project is a collaboration between the Institute for Ecological Economy Research, the DAI Laboratory at TU Berlin and AlgorithmWatch. The tool enables organisations that use and develop AI to assess how sustainable their AI systems are.

The sustainability index takes a comprehensive approach, with assessments based on the environmental, social and economic effects of AI development and use. Those who click through the tool will receive an evaluation of their self-assessment and an overview of the actions they need to take. The sustainability index also draws attention to the existing and easy-to-use methods that can be used to measure the energy and water consumption of AI systems and the emissions they cause.

Impact AI: Evaluation of the social impact of AI systems on sustainability and the common good

The research project Impact AI: Evaluation of the social impact of AI systems for sustainability and the common good, led by Theresa Züger, is examining a total of 15 AI initiatives from various fields. The aim is to systematically and comprehensively assess their actual impact on society and the environment. To this end, a method is being developed that combines indicators such as energy efficiency or emissions caused by the AI system with a qualitative assessment of ethical and social aspects. The project will shed light on the sustainability of AI and is being carried out by the Alexander von Humboldt Institute for Internet and Society (HIIG) in collaboration with Greenpeace.

Politics must set the framework for more sustainable AI development

We have outlined various ways in which AI users and developers can influence the sustainability of an AI model. Nevertheless, a lack of transparency in the AI industry prevents, or at least hinders, a comprehensive sustainability assessment. Policymakers must therefore recognise the environmental, social and economic impact of AI as a risk and establish an appropriate regulatory framework. For example, uniform and comprehensive reporting and documentation standards are needed to shed light on the situation. Equally, we shouldn’t leave such measures to the industry alone: to prevent greenwashing, independent or public bodies must take charge of assessments.
