The ongoing AI boom poses a challenge. On the one hand, generative AI and AI chatbots such as ChatGPT can be very practical in everyday life. On the other, they are extremely harmful to the environment. In our guide on the sustainability of language models, we don’t recommend using general-purpose language models (such as those from OpenAI, Meta and others) because of their environmental impact.
AI systems from Big Tech companies also raise ethical and political concerns, but those aren’t the focus of this article. Instead, we’re outlining ways to reduce your digital carbon footprint while using GenAI. Here are our rules and recommendations for using language models sustainably.
Rule 1: Does my task really require a language model?
Turning to ChatGPT or Perplexity when writing texts or conducting research usually has one main advantage: it seems to save time. Even if you’re critical of language models, you can hardly deny that simple text work or rough research using AI chatbots is quick. However, we shouldn’t focus solely on convenience when deciding whether to use AI systems.
What is the environmental impact of GenAI?
A few pictures here, a few questions there—does GenAI really have an impact on the planet?
A meta-study by the Öko-Institut analysed 95 studies on the environmental impact of AI models. According to the study, the electricity demand of AI data centres will be eleven times higher by 2030 than in 2023, while water demand will increase threefold to 664 billion litres by 2030.
Overall, the authors of the study predict that GenAI will extend the service life of fossil fuel power plants and jeopardise climate targets.
There are many reports that the training data of GenAI discriminates against certain groups of people (study by the Anti-Discrimination Agency), reproduces inequalities and reinforces right-wing ideologies. Sources cited by AI chatbots are also not critically vetted, and search queries often return incorrect information.
This unreliability can mean that a ‘clean’ search via AI chatbots takes longer than using Google or a green search engine directly. And since a single query to ChatGPT requires around ten times as much energy as a conventional search query, we advise simply Googling directly.
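To put that ten-fold figure in perspective, here is a back-of-the-envelope calculation. The per-search energy value is a placeholder assumption for illustration, not an official measurement:

```python
# Back-of-the-envelope comparison of search energy use.
# Assumption (hypothetical, for illustration): a conventional web search
# consumes roughly 0.3 Wh. Per the ten-fold figure above, a chatbot query
# would then consume about ten times that.

SEARCH_WH = 0.3               # assumed energy per conventional search (Wh)
CHATBOT_WH = SEARCH_WH * 10   # ten times as much, per the comparison above

queries_per_day = 20
days_per_year = 365

search_kwh = SEARCH_WH * queries_per_day * days_per_year / 1000
chatbot_kwh = CHATBOT_WH * queries_per_day * days_per_year / 1000

print(f"Search engine: {search_kwh:.1f} kWh/year")
print(f"AI chatbot:    {chatbot_kwh:.1f} kWh/year")
print(f"Extra energy:  {chatbot_kwh - search_kwh:.1f} kWh/year")
```

Even under these rough assumptions, routing everyday lookups through a chatbot instead of a search engine adds up to a noticeable amount of energy over a year.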
If you do decide to use a language model, ask the following questions: Is the source reliable? Are the results one-sided? And do the facts it gives me actually correspond to reality?
Rule 2: Do I want to support Big Tech companies while doing my task?
The most powerful and best-known applications for GenAI come from a few technology companies in the US, such as OpenAI, Google and Meta. These companies benefit when we use their services—even if they are free of charge and we don’t think we’re handing over much personal data via our search query.
The popularity of ChatGPT or Gemini earns Big Tech companies money in various ways. OpenAI secured a USD 40 billion investment in 2025; the enormous usage figures for ChatGPT may have played a role in this. Hardware manufacturer NVIDIA even overtook Apple in market value in 2025.
The bigger Big Tech companies become, the greater their influence on politics, society and many other areas of life. Social researcher Paul Schütze explained to us in an interview that AI technologies from Big Tech are currently “suspected of fuelling a fascist restructuring of the state” in the USA. AI service providers such as OpenAI are also developing weapons systems such as autonomous drones or, like Elon Musk’s xAI, building data centres in structurally weak areas and operating them with gas generators that are harmful to health.
A conscious and sustainable approach to technology should take such developments into account and favour alternatives.
Rule 3: Proportionality and specialised applications
Language models can process a huge range of tasks and queries. This is because they have been trained on vast, unspecific datasets so that they can offer universal solutions. However, this flexibility also means that their training and day-to-day operation—also known as inference—are extremely resource-intensive.
Tech blogger ‘stk’ raises the following question in his video on AI-supported administration: When is the use of large language models proportionate? He illustrates this with the following comparison:
“I could heat up a tin of ravioli with a marine diesel engine that I run in my garden. But it might not be the most sensible way [to heat the ravioli].” In other words, why does a huge language model have to be used for simple tasks such as a search query?
Specialised computer programs—with or without the AI label—often deliver better results and are much more efficient. Take the AI tool Simba, for example, which translates German texts into simple language. It is available online free of charge and is based on a customised version of the foundation model Llama-3-8B-Instruct. According to the EcoLogits calculator, this comparatively small language model is around 23 times more efficient than OpenAI’s GPT-4o.
Simba was developed by a team of researchers from the Alexander von Humboldt Institute for Internet and Society (HIIG). Project Manager Dr Theresa Züger tells us more about so-called ‘public interest AI’ in this interview.
Rule 4: AI chatbots are not humans
At the end of 2024, IKEA announced the introduction of a new chatbot for customer support based on ChatGPT. However, IKEA has had a virtual customer advisor, Anna, on its website since 2005.
Anna was programmed to simulate natural customer conversations. She greets users and asks how she can help. Accordingly, users tend to greet her, thank her and say goodbye to her at the end of the conversation. Large language models continue this tradition with their natural-looking chat applications.
However, greetings, thank-yous, apologies and other pleasantries must each be processed as a new request. Even if it feels unnatural, we advise being ‘unfriendly’ to an AI chatbot to save energy. Remember, you are chatting with a computer system, not a human being, and you can’t hurt a computer’s feelings because it has none. That reminder is also healthy in view of the increasing number of parasocial relationships with AI chatbots.
Rule 5: Longer answers mean more computing power
According to studies, the longer an answer from GenAI, the higher the energy and water consumption and the greater the load on the hardware. The non-profit organisation GenAI Impact offers the EcoLogits calculator, which compares the resource consumption of current language models. We can also compare how these factors change depending on the length of the text output.
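As a simplified sketch of why answer length matters, assume inference energy grows roughly linearly with the number of generated tokens. The per-token value below is a placeholder assumption, not a measured figure for any specific model:

```python
# Simplified model: inference energy grows roughly linearly with the
# number of generated tokens. The per-token value is a hypothetical
# placeholder chosen for illustration only.

ENERGY_PER_TOKEN_WH = 0.01  # assumed energy per generated token (Wh)

def answer_energy_wh(output_tokens: int) -> float:
    """Estimate the energy of one answer from its length in tokens."""
    return output_tokens * ENERGY_PER_TOKEN_WH

short_answer = answer_energy_wh(50)   # a concise reply
long_answer = answer_energy_wh(500)   # a verbose, padded reply

print(f"Short answer: {short_answer:.2f} Wh")
print(f"Long answer:  {long_answer:.2f} Wh ({long_answer / short_answer:.0f}x)")
```

Under this linear assumption, an answer ten times as long costs ten times the energy—which is why concise outputs add up across millions of queries.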
As users, we can operate AI chatbots more efficiently if we formulate requests that generate the shortest possible answers. The author Edgar Linscheid has developed an approach under the label ‘Sustainable Prompting’ which teaches sustainable and ethical AI skills.
How much energy does my AI use need?
Even if Big Tech companies are reluctant to talk about the true resource consumption of their applications, there are tools on the web that can help us find out.
The non-profit organisation GenAI Impact offers the EcoLogits Calculator, a free web tool that can be used to compare the resource consumption of different models.
Alternatively, ChatUI-energy lets you use some language models while calculating the energy consumption and CO2 emissions of each query.
Linscheid advised RESET to formulate requests so that we receive the desired content in as few prompts as possible:
‘The better you brief generative AI, the better the result, the fewer iterations you need, the more resource-efficiently you use AI.’
It’s not necessarily advisable to ask a chatbot to always reply in one sentence or in just a few words, because we may then have to make several requests to achieve the desired result.
However, there are settings in ChatGPT that can be used to customise the outputs. In the basic settings, for example, you can prioritise ‘short, precise answers’ and instruct the chatbot to ‘avoid repeating the question and introductory sentences’. You can also activate the storage of instructions so that ChatGPT does not have to ask for them repeatedly.
So-called deep research functions, which enable AI chatbots to provide more precise answers, require additional energy and should also not be permanently switched on.
Rule 6: Stock photos and stock videos instead of AI generation
In addition to text, GenAI can also create images and videos. However, creating images is many times more computationally intensive than generating text. We recommend using image AI tools with their environmental impact in mind.
In principle, it’s not difficult to find free images and videos on the internet. Stock platforms such as Pexels, Unsplash and Pixabay, as well as knowledge databases such as Wikimedia Commons, offer a lot of free content; just make sure to credit it correctly. With these platforms, you can also be sure that photographers have given permission for their work to be used, which isn’t always the case with AI applications.
However, AI-generated content is increasingly appearing on stock platforms. It is not generated by the platforms themselves but is created and uploaded by users. Similar to car sharing, where a car’s CO2 emissions are split between the people using it, the carbon footprint of a piece of AI-generated content shrinks per use the more it is shared and reused.
If the AI generation of images and videos cannot be avoided, similar rules apply as with texts. Requests that are formulated as precisely as possible mean that we need fewer prompts to get to the ideal result.
Rule 7: Search for AI-supported applications on green servers
Our last rule does not relate directly to language models but is significantly influenced by the current AI boom. Companies are increasingly integrating AI functions, often based on large language models, into their digital products; examples include Apple’s ‘Apple Intelligence’ and the new IKEA customer advisor mentioned above.
Companies with sustainable digital strategies oriented towards corporate digital responsibility (CDR), or that generally focus on sustainability, sometimes also pay attention to green hosting for their AI applications. However, there is no general answer as to whether a given implementation is genuinely sustainable or just greenwashing.
Conclusion: less is more, avoidance is better
In view of their massive ecological and social impact, the most sustainable approach is not to use AI applications at all. And if you do, we recommend specialised models: they handle many tasks better than large language models and are more efficient.
Of course, using AI chatbots in everyday life can save time or simply be more entertaining, and they give people barrier-free access to information and content online. If you do use them, what matters most is the length of the answers and the number of requests: make queries as precise as possible and instruct the AI chatbot to keep its answers short.

