When Is GenAI Actually Useful? Stefan Kaufmann Shares His Toolbox for Using AI in Administration

Stefan Kaufmann from Wikimedia Germany reveals the limitations of GenAI and shares his toolbox for figuring out when it's worth using it.

Author: Benjamin Lucks

Translation: Kezia Rice, 10.20.25

Crypto mining, NFTs and now GenAI: time and again, the digital world produces technologies that are briefly touted as universal solutions to all kinds of problems. For several years now, it has been language models such as GPT and Gemini that citizens, organisations and companies supposedly need to use in order to survive in the new world of AI. At the same time, it is becoming increasingly clear that generative AI is unreliable, exacerbates inequalities and has a huge ecological footprint. Even years after the launch of ChatGPT, truly sustainable language models are still nowhere to be found.

But how do we figure out when it makes sense to use generative AI? Does it make sense for Germany to invest billions in its own language model, or are there more reliable technologies? We talked to Stefan Kaufmann from Wikimedia Germany, who uses a pretty simple principle to figure this out.

Generative AI as a magical ‘silver bullet’

“Whether it’s generative AI or blockchain, there’s always a search for a silver bullet,” says Stefan Kaufmann in an interview with RESET. Stefan works as a consultant in the Politics and Public Sector team at Wikimedia Germany. By silver bullet, Kaufmann means using technologies and practices as a universal solution to problems that arise. “Language models benefit from the fact that they feel magical when you use them,” says Kaufmann. “This tempts people to seek salvation in these quick fixes.”

In many areas, however, these quick fixes are highly problematic. Systems such as ChatGPT, for example, are meant to ease the workload of administrative staff by extracting data from free-form text and preparing it for further processing. In more and more publications, public sector employees are being encouraged to use generative AI systems for a wide range of tasks.


AI is bad for the environment?!

AI causes high CO2 emissions, has a large water footprint and makes our mountain of electronic waste grow even faster? If you’ve never heard of this before, or you’re simply wondering how it all fits together, then read on! The article Sustainable AI Means Looking Beyond Data Centres provides an overview of the direct and indirect effects of AI on the environment, from a sustainability, social and economic perspective.

However, Kaufmann continues, “Language models always work with heuristics [a computing term that means a model will follow rules of thumb to generate results in an acceptable amount of time]. Answers sound plausible, even if they are not correct.” For administrative staff, this means that they have to manually check all generated data records, texts and other content. If they don’t do this after AI has sped up their work, there’s a risk of passing on incorrect information. The extra checks mean it’s unlikely that AI saves them time. Despite this, more and more administrations and companies rely on the use of language models.

“Tools [such as language models] are often introduced as an end in themselves, because that is what is demanded politically. By using them, you are modern. And you are following a logic of applause that arises solely from the desire to jump on the bandwagon,” criticises Stefan Kaufmann. Of course, there are also useful applications for connectionist AI such as LLMs. For example, it can improve text recognition in archives: when digitising archive material, a language model can understand context and thus help handwriting recognition interpret words and sentences correctly. However, using language models just for the sake of it is too resource-intensive and eats up funds that could be used for more meaningful projects.

Linked open data or ‘good old-fashioned AI’

Kaufmann advocates the use of a different type of AI here. Linked data belongs to the family of symbolic AI and avoids the error-prone nature of large language models. Such systems only work with data that is coded in a way computers can evaluate logically, which removes the risk of heuristic false statements.

This is because “anything that is not coded cannot be answered. This can be an advantage, as in this case, a system does not fervently produce false information, but displays an error message,” explains Kaufmann.
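Kaufmann's point can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (all names and facts below are made up, not real data): facts are stored as explicit triples, and a query for anything that is not coded produces an explicit error rather than a plausible-sounding guess.

```python
# Hypothetical knowledge base of coded facts, stored as
# (subject, predicate, object) triples in the linked-data style.
facts = {
    ("Exampleton", "mayor", "J. Doe"),
    ("Exampleton", "state", "Examplia"),
}

def query(subject, predicate):
    """Return every coded value for (subject, predicate), or fail loudly."""
    results = [o for (s, p, o) in facts if s == subject and p == predicate]
    if not results:
        # Unlike a language model, the system refuses rather than invents.
        raise LookupError(f"no coded fact for ({subject}, {predicate})")
    return results

print(query("Exampleton", "state"))      # ['Examplia']

try:
    query("Exampleton", "population")    # nothing coded for this predicate
except LookupError as err:
    print(err)                           # explicit error, not a fabrication
```

The behaviour in the `except` branch is exactly the advantage Kaufmann describes: the system returns an error message instead of "fervently" producing false information.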

Better air quality with open data!

We see time and again in the field of sustainability how important the free availability of data is. For example, a group of volunteers in Poland were able to bring about political change by collecting data on air pollution and making it freely available. Interested parties could then expand these data sets or use them for their own initiatives. Ultimately, this significantly improved air quality in Poland. Read our article for the full story.

Linked data also increases the reusability of information that can be published as open data. This allows data sets to be linked together to establish new connections. According to Kaufmann, official sources could be usefully linked with proprietary knowledge databases or collaborative sources, such as WikiData.
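How linking establishes new connections can also be sketched briefly. In this hypothetical Python example (the QIDs follow Wikidata's naming scheme, but the population and air-quality figures are purely illustrative), two independently published datasets can be joined because they share stable identifiers:

```python
# Hypothetical excerpt of an official register, keyed by
# Wikidata-style identifiers (figures are illustrative).
official_register = {
    "Q1055": {"name": "Hamburg", "population": 1_945_000},
    "Q1022": {"name": "Stuttgart", "population": 632_000},
}

# Hypothetical measurements from a civil-society sensor project,
# published as open data using the same identifiers.
air_quality = {
    "Q1055": {"pm25": 9.1},
    "Q1022": {"pm25": 11.4},
}

# Because both datasets use the same identifiers, they can be
# joined to establish a connection neither publisher coded alone.
linked = {
    qid: {**official_register[qid], **air_quality[qid]}
    for qid in official_register.keys() & air_quality.keys()
}

print(linked["Q1055"])  # {'name': 'Hamburg', 'population': 1945000, 'pm25': 9.1}
```

The shared identifier is what does the work here: it is how an official source, a proprietary knowledge database or a collaborative source such as WikiData can all describe the same entity.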

When is GenAI actually useful?

In addition to ethical and ecological risks, the current AI hype also means that we’re choosing to use language models instead of other sensible solutions. Advancing digitalisation in administration through linked open data would not only prevent errors in the long term. Civil society projects could also use the data to promote their own interests. The CorrelAid Association, for example, made data on the need for bicycle parking spaces in Paris freely available. Citizens could then use the data to submit a citizens’ initiative.

To demystify generative AI and assess its potential and possibilities more sensibly, Stefan Kaufmann proposes a toolbox. This is based on the principle of proportionality, as Kaufmann explains in a lecture given at the Chaos Computer Club’s Goulash Programming Night in the summer of 2025.

This principle is actually intended “to weigh up whether interventions in individual freedoms are justified and proportionate”. From this, Kaufmann formulated four essential questions that we should ask ourselves before using generative AI:

  • Is the goal legitimate or, in the case of GenAI, even defined?
  • Is GenAI at least fundamentally suitable for achieving the goal?
  • Is GenAI even necessary? Are there alternatives with fewer side effects?
  • Are the side effects appropriate for the expected success?

Many proponents of generative AI point out how much time such models save. However, this often turns out to be a fallacy. Stefan Kaufmann’s toolbox illustrates why this is the case.


Regarding the second question, Kaufmann says, “If I require accuracy in knowledge retrieval, language models are out of the question. I’ve made it a bit of a hobby to ask local government chatbots whether the mayor [editor’s note: in his example, Stefan Kaufmann typed in the name of the respective mayor] is responsible for this or that task. The answer is often: ‘I don’t know who that person is.’”

The third question ultimately reveals the crux of the current debate on language models. Alternatives with fewer side effects—including electricity and water consumption—are simply not being promoted due to the hype surrounding generative AI.
