Public Interest AI: Artificial Intelligence for the Common Good Requires a Rethink

AI that serves the common good? To do so, it must be designed differently from its big tech siblings. Theresa Züger is researching this at the HIIG.

Author: Sarah-Indra Jungblut
Translation: Lana O'Sullivan, 06.19.25

The hype surrounding artificial intelligence continues unabated. New applications launched on the market still regularly make headlines. Governments are setting aside budgets worth billions for the development of new AI models. And countries and companies seem to be racing to build large AI data centres so as not to be left behind. The big question, however, is whether we as a society will actually benefit from this supposed one-solution-fits-all technology, as the narrative of AI enthusiasts would have it.

Theresa Züger has serious concerns about this. She heads the AI & Society Lab at the Humboldt Institute for Internet and Society (HIIG) and researches how AI can serve the common good. “Central parts of the AI industry are in very problematic hands, which in my view do not have the common good in mind at all,” says Züger. Her research therefore focusses on the question of how AI can be developed with added value for the common good. Public Interest AI (PIAI), also known as “AI for Good” or “AI in the public interest”, is not just about tackling ecological or social problems with these applications. The approach is also based on a fundamentally different understanding of technology development.

Theresa Züger from HIIG. © HIIG

AI industry in problematic hands

“The AI industry puts market interests first. The question of the social impact of AI systems is of secondary importance,” says Theresa Züger. This can be seen not only in how ruthlessly new, often unproven AI models are brought to market, but also in the fact that many big tech companies are working with the fossil fuel industry, using intelligent systems to discover new oil and gas deposits or better exploit existing ones, as a Greenpeace study has shown. Other systems are designed to exert ideological and political influence on societies. Züger cites Elon Musk’s Grok as an example.

Another worrying aspect of the AI industry is that 70 percent of the AI models used in Europe were developed by a few big tech companies in the USA, according to a report by the EuroStack initiative. Incidentally, these are the same big tech companies that own our clouds, our software and the most popular social media platforms. The result is monopolies with significant consequences – social division, hate speech and fake news, to name the most familiar.

In addition, there are massive effects on our environment. With the triumph of generative AI such as ChatGPT and co., AI models are becoming ever larger, more complex and more energy-hungry, driving a massive increase in global electricity demand. By 2030, the computing requirements for AI are estimated to be 11 times higher than in 2023. Data centres in the USA could then consume more electricity in just a few years than the entire energy-intensive production of goods – cement, chemicals, steel – combined, according to a report published by the International Energy Agency (IEA) in 2025. In addition to the high electricity consumption, high water usage and large quantities of electronic waste also add to the technology’s ecological footprint.

So should we throw the whole technology out the window? Perhaps not. AI can certainly be a useful technology, as we show in many examples on RESET. But we need to ask ourselves how it should be designed so that it has added value for the environment and society. “Out of an interest in the present and our future, we should keep an eye on the potential damage caused by AI applications as well as the question of whether and how AI can be used for the common good,” says Theresa Züger.

Is Public Interest AI an answer?

Together with her research group Public Interest AI, Theresa Züger documented AI projects between 2022 and the end of 2024 that are geared not towards profit but towards social purposes such as environmental protection, health, education, social justice and political participation. They found a growing number of PIAI projects and recorded them on an interactive map. These include projects for monitoring biodiversity or CO2 emissions, as well as AI applications that help reduce waste or support recycling.

A unique data set has emerged from the many projects collected. Government institutions are increasingly investing in AI for the common good, and some big players have also jumped on the bandwagon. In Germany, for example, the “AI Lighthouses for the Environment, Climate, Nature and Resources” programme, run by the Federal Ministry for the Environment, specifically promotes sustainable AI projects. Google presented the AI for Social Good Awards in 2021 and Microsoft runs its own AI for Good Lab. Until now, however, there has been no detailed and structured overview of public interest AI projects.

However, even though the researchers have compiled an impressive number of PIAI projects, we are talking about a niche here. Only a fraction of all AI applications are actually developed with the aim of serving the public interest and protecting the environment and climate, says Friederike Rohde, sociologist of technology at the Institute for Ecological Economy Research (IÖW). “As they only make up a fraction of all applications, their overall impact is also relatively limited.” The budgets that countries and companies invest in these developments are correspondingly modest.

To change this, the dataset compiled by Theresa Züger and her team is intended to provide researchers and society with valuable insights into the practical application of AI for the common good. It should make it possible to track developments, identify success factors and connect stakeholders with one another. The comprehensive body of data also supports well-founded political and scientific decisions to promote and further develop AI for the common good in a targeted manner.

Of course, the positive narrative that AI will help us combat climate change and other major social challenges sounds good and gives us hope, says Züger. “But in order to really understand whether, how and under what conditions AI systems contribute to solutions for the common good, we need to look at projects much more closely and in detail. Sustainable developments can only succeed if we understand the conditions for this and also the failures in detail.”


But what exactly is Public Interest AI or “AI for Good”?

In Theresa Züger’s research project, public interest AI includes all applications that, according to Barry Bozeman’s definition, serve “the long-term survival and well-being of a collective, understood as the public”. The development of these AI applications is therefore not about private or commercial interests, but about the well-being of the community as a whole.

This becomes clearer with a comparison: applications such as ChatGPT or DALL-E are primarily designed to provide services for individuals or companies. In contrast, AI projects focused on the common good use their technologies specifically to address social challenges.

One such AI tool is Simba. Developed at HIIG, the application helps to break down language barriers on the internet. As a web application, Simba simplifies texts that users enter; as a browser extension, it automatically summarises texts on websites. The aim is to make complex language easier to understand, especially for people with learning difficulties or little knowledge of German, and thus to promote more equal access to information. Unlike many other services, Simba is freely available, open source as far as possible and deliberately non-commercial.
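For readers curious what such a tool might look like under the hood, here is a minimal sketch in Python. It is purely illustrative and not Simba’s actual code: it assumes the Hugging Face Transformers library and an openly available English summarisation model as stand-ins for whatever a real simplification service would use.

```python
# Illustrative sketch only -- NOT Simba's actual implementation.
# Assumptions: the Hugging Face Transformers library is installed, and the
# public summarisation model named below stands in for whatever model a
# real text-simplification service would choose.
from transformers import pipeline

# Reusing a pre-trained model: the heavy training cost was paid once,
# upstream; the project itself only pays the much smaller inference cost.
summarise = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

def simplify(text: str, max_length: int = 60) -> str:
    """Return a shorter, easier-to-read version of the input text."""
    result = summarise(text, max_length=max_length, min_length=10)
    return result[0]["summary_text"]

if __name__ == "__main__":
    sample = (
        "Complex administrative language often excludes people with learning "
        "difficulties from information that directly concerns them, and "
        "automatic text simplification can help lower that barrier."
    )
    print(simplify(sample))
```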

Theoretical foundations of the term “common good”

Theresa Züger and Hadi Asghari from HIIG have summarised further considerations on the theoretical foundations of the “common good” concept and the connection to AI applications in this article. An overview of the most important principles can also be found on the research group’s website.

For Theresa Züger, the fact that Simba is open to scrutiny by outsiders and also complies with open science standards is a key aspect of the development of AI for the common good. “This should ensure that the systems do what they promise. This requires open code or at least the possibility for auditors to view it.” Another key aspect is the subsequent use of technologies. “What was once created in the interest of the common good should also be able to continue to be used in this sense.”

The latter question in particular concerns many developers in the open source community. This is because the main beneficiaries of open source and free licences are currently the big commercial players. They integrate the developments into applications that they then market commercially – often with little to no added value for the common good. “How licences and openness can be designed for the common good is a question that will continue to occupy us in the coming years,” says Züger.

There are also other important criteria for AI oriented towards the common good. For example, there must be good reasons to use AI as a solution in the first place. “AI is not always the best or most robust solution, especially not for core social problems.” Nor is it always the most sustainable from an ecological perspective.

The systems should also promote equality and take the positions and experiences of those affected into account during the development process and design. “There is also a need for technical standards that guarantee the security and reliability of systems. This prevents them from being misused for other purposes or performing so poorly that they don’t fulfil their purpose,” adds Theresa Züger.

AI applications that fulfil these criteria are thus not developed over people’s heads, but with their involvement.

Is PIAI better for the environment and climate?

Projects oriented towards the common good differ fundamentally from their big tech siblings in terms of their focus. The large corporations are primarily concerned with advancing innovations and new technologies in order to maximise profits. And where no need for an application exists yet, one has to be created first. In contrast, a project oriented towards the common good usually aims to develop a solution to a specific problem. Here, it is easier to start by asking whether an AI system is necessary at all or whether a less resource-intensive solution would also fulfil the purpose.

What’s more, smaller, community-oriented projects are often based on pre-trained models, which has ecological advantages: adapting an existing model requires less computing power, and therefore less energy, than training a new model from scratch. And the PIAI models themselves can be further developed and reused, as the sketch below illustrates.
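As a rough illustration of where those savings come from, the following sketch freezes a pre-trained backbone and trains only a tiny new classification head. The PyTorch and Hugging Face tooling and the model choice are assumptions made for the example, not anything PIAI projects prescribe.

```python
# Sketch: adapting a pre-trained model instead of training one from scratch.
# Assumptions: PyTorch + Hugging Face Transformers; "bert-base-uncased" is
# an arbitrary illustrative choice of pre-trained backbone.
import torch
from transformers import AutoModel

backbone = AutoModel.from_pretrained("bert-base-uncased")
for param in backbone.parameters():
    param.requires_grad = False  # frozen: no gradient updates, far less compute

# The only newly trained part: a small task-specific head.
head = torch.nn.Linear(backbone.config.hidden_size, 2)

trainable = sum(p.numel() for p in head.parameters())
total = trainable + sum(p.numel() for p in backbone.parameters())
print(f"Training {trainable:,} of {total:,} parameters "
      f"({100 * trainable / total:.3f} percent)")
```

Running this shows that well under one percent of the parameters ever receive gradient updates; the energy-intensive training of the other roughly 110 million was done once, upstream, and is simply reused.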

If the applications are open source, it is also easier to measure their energy consumption. “Projects can use tools to track their own energy consumption in order to estimate how much they are using,” says Züger.
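One such tool, sketched below, is the open-source CodeCarbon library; the article itself names no specific tool, so this choice is an assumption.

```python
# Sketch: estimating the energy use and emissions of a workload with the
# open-source CodeCarbon library (one possible tool; the article names none).
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="piai-demo")
tracker.start()
try:
    # Placeholder workload -- a real project would train or query its model here.
    total = sum(i * i for i in range(10_000_000))
finally:
    emissions_kg = tracker.stop()  # estimated kilograms of CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```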

It can also be easier to ensure a sustainable infrastructure in smaller projects. This includes the use of data centres that are powered by renewable energy and the use of technologies that reduce energy consumption in the training of models.


“Politicians should limit and promote”

A look at the interactive map created as part of the project reveals that most of the PIAI projects are located in Europe. This is mainly due to where the project itself is based, reports Theresa Züger. “We do try very hard to reach out to international projects so that they can register on the map, but language barriers and geographical distance make this more difficult.” She is hoping for more entries, as the projects themselves would also benefit: from visibility within a global network of stakeholders committed to AI for the common good, from better networking opportunities with similar initiatives, and from the chance that researchers, funding institutions or political decision-makers become aware of them.

Screenshot of the map showing public interest AI projects. The projects can be filtered by area using the map created in the “Public Interest AI” project.

Since January 2025, Theresa Züger has been leading the transdisciplinary research project Impact AI, in which Gemeinwohl-Ökonomie e.V. and Greenpeace are involved. This project also focusses on the topic of “AI for the common good”. To understand what impact such projects have on sustainability and the common good, 15 international AI projects are being analysed in depth.

However, in view of the massive negative impact of the current course of AI, it is already clear that the technology should be steered. According to Züger, this requires a much stronger political will that does not shy away from intervening in current developments. “Limiting where AI systems exacerbate problematic social developments. Promoting, so that a real ecosystem for AI for the common good can emerge, beyond the hype. And scrutinising, as we do not yet sufficiently understand the social side effects of AI use in many areas. I believe that states have a great responsibility here to lead the way.”

She is hopeful that a growing number of social actors are addressing the question of how AI can be used for the common good. And at the political level, too, the idea keeps surfacing. The EU’s AI Act is definitely a start here.

However, we should not forget that AI applications are only ever a small part of a socio-technical solution, the researcher reminds us. AI alone will certainly not solve any of our social or ecological problems; it can only ever be one building block alongside other social and political measures. This is why Rainer Rehak, who researches digitalisation, sustainability and participation at the Weizenbaum Institute in Berlin, also advocates taking a problem- and goal-oriented approach rather than a technology- or even AI-driven one. Because: “From the perspective of limited resources, we have to carefully weigh up where we use AI – and where we don’t.”
