Are There Sustainable Language Models? Three Perspectives From the Current Debate on AI and GenAI

Will AI save the climate? Or are applications like ChatGPT just CO2 guzzlers? We take a look at three perspectives on language models.

Author Benjamin Lucks:

Translation Kezia Rice, 07.21.25

Rarely has a new technology excited so many people, or at least aroused their interest. According to a Deloitte study of 30,000 Europeans, 44 percent of respondents had used generative AI. The AI boom has seemed unstoppable since the release of ChatGPT at the end of 2022. When we talk about digitalisation, we can no longer ignore language models: from OpenAI’s GPT models to Meta’s Llama models to BLOOM from Hugging Face, accessed through apps such as ChatGPT, Gemini, Perplexity and others.

It’s not surprising that these AI chatbots are so popular. On the one hand, the technical advances of chatbots and tools for generative AI are constantly making headlines. On the other hand, language models are changing the way we interact with computers. Since the end of 2022, the future finally seems to be becoming what sci-fi films with AI fantasies (think J.A.R.V.I.S. or HAL 9000) have been promising us for decades.

AI, LLM, language models, GenAI: What are the differences?

A brief explanation of the key terminology:

  • AI: Stands for artificial intelligence and describes all computer systems that can perform human-like tasks. This is usually done with machine learning. AI is used as an umbrella term to categorise all other terms.
  • Language model: In somewhat simplified terms, a mathematical model that puts words in a certain order based on probabilities.
  • LLM: ‘Large language model’. They form the basis for tools such as ChatGPT or Gemini: mathematical models that have been trained on data sets.
  • Parameters: The basic building blocks of LLMs: not words or sentences but numerical values. The more parameters a model has, the more complex it is.
  • GenAI: Stands for Generative AI and describes all AI applications that can generate new texts, images, videos, lines of code and more.
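The “words in a certain order based on probabilities” idea from the definitions above can be sketched as a toy next-word generator. This is a deliberately tiny illustration: the vocabulary and the probability table below are invented, and a real LLM learns billions of parameters rather than using a hand-written table:

```python
import random

# Toy bigram "language model": for each word, the probabilities of the next word.
# All words and probabilities here are invented for illustration only.
bigram_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "model": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "model": {"sat": 0.1, "ran": 0.9},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start, max_words=5):
    """Repeatedly pick the next word according to its probability."""
    words = [start]
    for _ in range(max_words):
        options = bigram_probs.get(words[-1])
        if not options:  # no continuation known: stop
            break
        next_word = random.choices(
            list(options.keys()), weights=list(options.values())
        )[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))
```

An LLM does the same thing at vastly greater scale: instead of a lookup table, a neural network with billions of parameters estimates the probability of every possible next token.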

Given a little more time, could AI-based applications help us solve the climate crisis? Developers promise new technologies that will meet our growing energy needs without emitting greenhouse gases. AI could automate the transport sector with autonomous vehicles and, in the distant future, even make us immortal.

Increasingly critical voices counter these promises, questioning both current and future AI developments. In 2025, it’s difficult to distinguish between true AI-related innovation and AI hype.

In countries such as Argentina, the US and Germany, populist parties use AI-generated content in election campaigns. Former supporters of artificial intelligence now warn of the dangers of AI-based superintelligence. And pretty much all major tech companies abandoned their sustainability goals shortly after the release of ChatGPT, significantly increasing their CO2 emissions.

In this article, we will present three perspectives from the current debate on sustainable language models. Then we’ll give our recommendations for taking action!

Why aren’t language models sustainable?

The development of language models requires huge amounts of energy, data, water and other resources. To provide users with the most suitable answers, language models calculate the most likely sequence of words and characters in response to a query. To do this, they are guided by huge data sets of natural language and programming code.

These datasets are based on books, documents, scientific studies, images, videos, social media posts, lines of code and other digital content, which have to be processed in a months-long training process. As of June 2025, established models use hundreds of billions of parameters. Training language models therefore causes high CO2 emissions before we even make a single request. According to one study, ChatGPT generates as much carbon dioxide per month as 260 flights from New York to London. Generally speaking, submitting a search query via ChatGPT requires around 10 times more energy than a Google search.

The reason for this is that training new language models takes place in huge data centres. And by their very nature, these require large amounts of energy, water for cooling and hardware in the form of semiconductors. The day-to-day work of queries and responses (known as ‘inference’ in the AI world) means language models continue to consume resources throughout their lifetime.

But exactly how high is this resource consumption? Is the investment of resources worthwhile? What is the social benefit of language models? This is precisely where the following three perspectives differ significantly.

First perspective: AI will solve the climate crisis and other social problems as a ‘superintelligence’

In their Social Media Watchblog, Simon Hurtz and Martin Fehrensen summarise some quotes from the CEOs of tech companies:

Luis von Ahn from Duolingo describes the future of his company as ‘AI-first’. He warns against waiting to take action when major upheavals (editor’s note: such as the AI boom) occur: “It’s not a question of if or when—it’s already happening.” Barbara Peng, CEO of Business Insider, wants “100 percent of her employees to use ChatGPT in the future”, even though she plans to shrink her organisation by 21 percent thanks to new AI technologies. Micha Kaufman from the freelance marketplace Fiverr sums up this somewhat contradictory expectation: “AI is after your jobs. To be honest, it’s after mine too.”

Is ‘AI’ problematic as a collective term?

Should we use the term ‘AI’ at all? According to authors Emily M. Bender and Alex Hanna, by using the term, we’re contributing to the artificial ‘AI hype’ of big tech companies.

The authors describe this in detail in their book ‘The AI Con: How to Fight Big Tech’s Hype and Create The Future We Want’. You can also listen to this podcast on the topic.

These quotes have one thing in common, and it illustrates how many people view language models and ‘AI’. They see these systems not as one new technology among others in a pluralistic, differentiated digital society, but as the natural evolution of a technological society. In doing so, they reflect a technological determinism that equates the spread of language models with the industrial revolution or the printing press. The ‘AI age’ becomes simply the next stage in the development of a technological society, and that development is assumed, in itself, to be positive social change.

Indeed, 800 million people used ChatGPT in April 2025: almost 10 percent of the world’s population. The platform records 1.5 billion visits per month, making it one of the fastest-growing online platforms in the history of the internet.

At the same time, it’s increasingly difficult to avoid language models. Google rolled out its multimodal chatbot Gemini Live to millions of Android devices in April 2025. Meta integrated its own ‘Meta AI’ into the world’s most popular messenger, WhatsApp, without any deactivation options. And most search engines reportedly already offer AI-based functions.

Despite this huge proliferation, the ecological impact of language models is negligible, according to proponents of the technology. A blog post by OpenAI CEO Sam Altman illustrates why.

“About a fifteenth of a teaspoon”

In the article published in June 2025, Altman comments on ChatGPT’s energy and water consumption. “People are increasingly curious about how much energy a request to ChatGPT consumes,” he writes. “The average request consumes about 0.34 watt-hours: about what an oven consumes in just over a second, or an efficient light bulb in a couple of minutes. One request also uses 0.000085 gallons of water, about one-fifteenth of a teaspoon.”

According to Altman, the resource consumption of a single ChatGPT request is therefore negligible, even when we remember the 1.5 billion visits per month. For Altman, there is no reason to worry:

“The rate of technological progress will continue to accelerate…and there will be difficult things, like a whole class of jobs disappearing. But on the other hand, the world will get richer so quickly that we will need completely new ideas for social strategies. We may not have a new social coexistence immediately but when we look back in a few months, after many small changes, we will see that something big has emerged.”
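Taken at face value, Altman’s per-request figures can be scaled up for a rough sense of the aggregate. This back-of-the-envelope sketch makes one simplifying assumption, namely that each of the 1.5 billion monthly visits corresponds to a single request (in reality, a visit often contains several, so this is a lower bound):

```python
# Altman's per-request figures, taken at face value
WH_PER_REQUEST = 0.34            # watt-hours
GALLONS_PER_REQUEST = 0.000085   # US gallons
LITRES_PER_GALLON = 3.785

# Simplifying assumption: one request per monthly visit
VISITS_PER_MONTH = 1.5e9

energy_mwh = VISITS_PER_MONTH * WH_PER_REQUEST / 1e6  # watt-hours -> megawatt-hours
water_litres = VISITS_PER_MONTH * GALLONS_PER_REQUEST * LITRES_PER_GALLON

print(f"Energy per month: {energy_mwh:,.0f} MWh")      # 510 MWh
print(f"Water per month:  {water_litres:,.0f} litres")  # ~480,000 litres
```

Even under this conservative assumption, the “fifteenth of a teaspoon” adds up to hundreds of megawatt-hours of electricity and roughly half a million litres of water every month, before training is even counted.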

A room full of servers.
© CERN
Data centres are becoming increasingly important in times of generative AI. But they consume high amounts of water and electricity.

According to Sam Altman, companies such as OpenAI are using their AI technologies to develop a new form of society in the long term. And language models such as ChatGPT will ultimately lead to ‘artificial general intelligence’, or AI superintelligence. According to some experts, this will be able to solve, and even reverse, the challenges of man-made climate change.

Systems that work with machine learning can indeed be used to increase efficiency in supply chains and agriculture and to develop new technologies for reducing CO2 emissions. The IMF estimates that the spread of ‘AI systems’ will bring economic benefits and could reduce CO2 emissions in the long term. Historically, however, efficiency gains have often led to rebound effects that ultimately fail to reduce CO2 emissions; this is known as the ‘Jevons paradox’. But Altman and his colleagues don’t seem concerned.

As a recommendation for action from this technology-deterministic perspective, we can therefore conclude: we just have to hold out a little longer until AI solves our problems. And then the world will become a better place. What would this better world look like? Which social group would this world benefit? How will AI superintelligence stop climate change? This perspective doesn’t answer these questions.

Second perspective: Language models offer potential if they are used correctly

The second perspective that we want to explore views language models and AI-based technologies primarily as tools whose social benefit depends on their use.

German IT expert Björn Ommer describes why generative AI is revolutionising the way we deal with computer systems:

“With generative AI, computers finally understand us in natural language. They learn what we need and then provide us with solutions that really meet our needs. This means that everyone can fully utilise the capabilities of computers, not just those who can program.”

Generative AI means that what companies advertised as ‘personal computers’ decades ago are finally becoming more ‘personal’ and needs-oriented. While Björn Ommer primarily draws from this the need for European companies to develop their own AI systems, it also promises a more egalitarian and barrier-free approach to computers.

Potential of public interest AI

Language models don’t only enable people who can’t program to interact with computers in a new way. They also provide computer access to people who were previously largely excluded from digital life. The AI tool Simba, for example, specialises in simplifying complex texts; people can use it, say, to make the dense language of public institution websites easier to understand.

Simba is part of a project for ‘Public Interest AI’. It illustrates how AI technologies can lead to positive social change even without the promise of superintelligence.

In an interview with RESET, Theresa Züger from the Alexander von Humboldt Institute for Internet and Society criticises the main directions companies are currently pursuing when it comes to AI: “The AI industry puts market interests first. The question of the social impact of AI systems is of secondary importance.” The majority of AI models we use in Germany and Europe were developed by Big Tech companies in the US. When analysed closely, their products rarely add value to society as a whole.

Züger would therefore like to explore the potential of public interest AI with her ‘AI & Society Lab’ at the HIIG. She believes AI systems with the right focus can certainly add value to society. As a first step, she’s compiled an overview of AI systems with this type of focus.

Specialised AI applications can circumvent the problems of LLMs

Other projects are also trying to make language models and generative AI more sustainable. According to Therry Zhang, ‘frugal’ AI requires a different approach to today’s large-scale language models. With BLOOM, the BigScience project led by Hugging Face has tried to develop an ethical and sustainable language model. However, with 176 billion parameters covering 46 natural languages, BLOOM is also extensive and unspecific.

Such projects show that the hype surrounding language models is also producing social and sustainable approaches. However, many of these alternatives are specialised, so they aren’t seen as replacements for ChatGPT, Gemini and the like. Those mainstream chatbots are popular because they’re free, deliver seemingly accurate information and human-like responses, and understand our input in natural language.

However, as our last perspective shows, these supposed advantages also conceal fallacies. And as with many supposedly free services on the Internet—such as cloud storage from Google or social media platforms—the use of ChatGPT and its counterparts comes with hidden costs.

Third perspective: Language models are inherently unsustainable and replicate inequalities and discrimination

A critical examination of language models has many dimensions. From an ecological perspective, it’s primarily data centres to which experts such as Ralph Hintemann attribute a “major part of the CO2 emissions of digitalisation” in the future. The supposedly marginal energy and water costs that Altman describes in his blog post look very different in independent studies.

A meta-study carried out by the Öko-Institut on behalf of Greenpeace summarises the results of 95 studies. According to it, the electricity requirements of AI data centres will be eleven times higher in 2030 than in 2023. “AI will then require as much electricity as all conventional data centres combined today,” the team of authors summarises. The amount of water required to cool the servers in AI data centres will more than triple, from 175 billion litres in 2023 to 664 billion litres by 2030.
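A quick sanity check of the meta-study’s two water figures shows what this growth means in concrete terms:

```python
water_2023 = 175  # billion litres of cooling water in 2023
water_2030 = 664  # billion litres projected for 2030

growth_factor = water_2030 / water_2023
increase = water_2030 - water_2023

print(f"Growth factor: {growth_factor:.1f}x")          # 3.8x
print(f"Additional water: {increase} billion litres")  # 489 billion litres
```

At roughly 3.8 times the 2023 level, the projection is in fact closer to a quadrupling than a tripling of cooling-water demand.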


How do we deal with AI’s unquenchable thirst?

The water consumption of AI data centres is more complex than just cooling systems.

We explore why colonial patterns are repeated in the construction of data centres and what solutions are available. Read more here.

In addition to cooling, the production of the semiconductors needed to build new data centres requires water and energy. Expert Julia Hess explains this problem in more detail in an interview about her ‘Semiconductor Emission Explorer’. At the same time, GenAI places such heavy demands on hardware that graphics processors wear out faster than in other applications. Because of this, AI likely produces a lot of electronic waste.

Even if the electricity demand of AI data centres could be met entirely by renewable energy by 2040, as per the meta-study, the experts still expect greenhouse gas emissions to increase. This is because the additional electricity required will extend the service life of fossil fuel power plants and jeopardise climate targets overall. The current trend also shows that Big Tech tends to favour nuclear power and gas-fired power plants, sometimes without licences and with devastating effects on surrounding communities.

© Greenpeace/Öko-Institut
According to several estimates, the energy consumption of AI data centres will continue to rise sharply until 2030.

From a purely ecological perspective, language models are a particularly resource-intensive technology. And their spread is happening at a time when we already need to drastically reduce our greenhouse gas emissions using existing technologies.

However, according to the narrative of proponents such as Sam Altman, these investments will pay off in the long term. Altman believes that the new technology, as an ‘artificial general intelligence’, will solve the climate crisis. However, there are clear doubts about this promise.

‘LLMs always orient themselves towards the status quo’

In response to our question about the social benefits of a sustainable language model, Osnabrück-based social researcher Paul Schütze says that we must also critically scrutinise their supposed advantages. It has recently become increasingly clear, for example, that “GenAI is primarily destructive in the current socio-economic situation”. AI-generated images, for instance, have become strongly associated with right-wing aesthetics, and the use of AI “is interpreted in the US via Elon Musk and DOGE as a fascist reorganisation of the state”.

Rainer Rehak also warns that current AI systems “can only implement what we as a society have decided so far”. As a result, AI systems are “always created under the current, fundamentally unsustainable conditions” and only ever operate within this framework. Under the current conditions, sustainable AI is therefore only ever able to continue “business as usual”.

© Steve Jones, Flight by Southwings for SELC
Aerial photos show that Elon Musk’s latest data centre is running environmentally harmful and illegal natural gas turbines. The local population in Memphis, USA, is suffering as a result.

Paul Schütze summarises this problem. “Sustainable AI is the technical solution to the climate crisis from a techno-solutionist perspective and merely reproduces the status quo.”

Under this assumption, language models do not solve social problems. They simply perpetuate the inequalities that led to these social problems in the first place. This is because they are trained on datasets that contain these inequalities.

Conclusion: A sustainable future with AI requires specialised solutions instead of language models

If we formulate recommendations for action from these three perspectives, they would be as follows:

From an ecological perspective, we can’t recommend using large language models such as ChatGPT, Meta AI and similar technologies. It’s already clear that they are extremely resource-intensive, reproduce social inequalities and, above all, serve the interests of the people largely responsible for their development and popularity. There are other critical issues too: imprecise results due to hallucinations, discrimination against people of colour and anyone who isn’t a cis man, and much more. Technology critic Paris Marx dedicates his podcast ‘Tech Won’t Save Us’ to these issues, and re:publica 2025 also hosted interesting discussions on the topic.


Nevertheless, in view of the three perspectives, it’s important to distinguish between large language models and specialised AI applications. A differentiated critique of AI, as called for by the authors of the Social Media Watchblog mentioned at the beginning of this article, must take into account the social transformation potential of new technologies. Public interest AI shows that AI systems can make digital technologies more accessible, and ethically and ecologically developed systems can lead to positive social change, especially if they are open source.

However, a sustainable approach to language models and artificial intelligence at an individual level means critically weighing up when, and to what extent, using such technologies makes sense. Looking only at the productivity and time savings of language models is not just short-sighted; in view of tech-solutionism and the ecological and social problems the technology causes, it is increasingly dangerous.
