Artificial intelligence is one of the biggest buzzwords of the day. Experts agree that the burgeoning technology will significantly change the world as we know it. In fact, it already is. We use AI applications in our everyday lives today – whether it’s in the form of handy translation programmes or for image recognition.
Many believe that the technology could have huge potential when it comes to protecting the environment and tackling climate change, with a huge number of projects already working under the banner of “AI for good”. AI applications are already being used to shrink the carbon footprint of buildings, reduce food waste and protect the rainforest. But at the same time, machine learning itself consumes enormous amounts of energy. Artificial intelligence – just like most other technologies – cannot be labelled simply as “good” or “bad” per se.
AI is a tool, not a solution. It’s up to us to decide what its impact will be. And that’s why it’s crucial we address how best to design it to ensure it has a positive impact on our planet.
Stephan, in the study for the Federal Environment Agency, you looked at the potential of AI for environmental sustainability. Which areas did you identify as having the most potential?
In our study we focused on different development paths where AI has the potential for positive impact, mainly on the basis of interviews with experts. Of course, this by no means provides a complete picture – and we’re not able to demonstrate where the greatest potential lies. But there are other studies, such as the excellent study “Tackling Climate Change with Machine Learning”, which was published by David Rolnick and other colleagues at the end of 2019.
Focusing on different sectors, it looks at what is happening in the field of machine learning in a sustainability context. There are two strands of thought: How can AI be used to mitigate climate change, and how can AI be used to adapt to the consequences of climate change? Interestingly, the authors of that study also aren’t able to make any scientifically based statements like: These are the top potentials, or these are the “low hanging fruits”. Unfortunately, there is no one solution that stops climate change or one adaptation strategy that offers us the ultimate answer. But there are many small, very partial solutions that, if used intelligently, could contribute to sustainability.
Which sectors are seeing a lot of innovation in AI and machine learning?
In the energy sector, for example, a fairly large sector that naturally releases a lot of greenhouse gases, the focus is on predicting energy consumption. So: How is energy consumed in a city or in many small households? When is the energy network used to capacity, and by how much? AI can be used to model these kinds of very complex systems.
When it comes to the mobility sector, it’s about predicting traffic. So how can you positively influence traffic in a city, for example, if you know in advance how it will develop in the next few minutes or hours? But in the mobility sector it is also about identifying patterns. For example, as an energy provider, I need to know where and when electric cars will be charged – because then I might suddenly need a whole lot of energy. Machine learning can help there too.
But when I use AI to save resources, there might be unexpected rebound effects that aren’t necessarily positive…
True. An example comes to mind from the aviation industry, where AI can be used to improve wind forecasting and ultimately optimise flight routes and save fuel.
It sounds positive at first, but there is a catch?
If I save fuel, then of course I also save money. And then we have the problem of the so-called Jevons paradox. If we increase efficiency, i.e. use less fuel, then the airline can make more profits. In other words, it could offer flights more cheaply in the competitive environment in which it operates. That could lead to more demand for flights. And that in turn could lead to the fact that the savings in kerosene, which we originally achieved by using AI to calculate optimised flight routes, would be lost due to the increased demand. In a worst case scenario, if the increased demand were to exceed the savings potential, that could even have negative effects on the climate. That’s not an AI-specific problem, of course, but an economic and system-specific problem.
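The rebound logic described above can be made concrete with a small back-of-envelope calculation. All figures here are illustrative assumptions, not data from any real airline: the point is only that a per-flight efficiency gain survives a small demand rebound but is wiped out by a large one.

```python
# Illustrative-only numbers for the Jevons paradox: a hypothetical airline
# whose AI-optimised routes cut fuel per flight by 5%, followed by a rise
# in demand because flights have become cheaper.

def total_fuel(flights: int, fuel_per_flight: float) -> float:
    """Total fuel burned across all flights, in arbitrary units."""
    return flights * fuel_per_flight

baseline = total_fuel(flights=1000, fuel_per_flight=100.0)   # 100,000 units

# AI routing cuts per-flight fuel by 5%...
optimised_per_flight = 100.0 * 0.95

# ...but cheaper tickets raise demand. Up to roughly 5.3% extra flights the
# savings survive; beyond that, total fuel exceeds the baseline.
modest_rebound = total_fuel(flights=1030, fuel_per_flight=optimised_per_flight)
strong_rebound = total_fuel(flights=1080, fuel_per_flight=optimised_per_flight)

assert modest_rebound < baseline   # net saving despite 3% more flights
assert strong_rebound > baseline   # 8% more flights wipe out the saving
```

The break-even point falls out of the arithmetic: with a 5% per-flight saving, total fuel returns to the baseline once the number of flights grows by a factor of 1/0.95, i.e. about 5.3%.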
© Anna Tiessen. Stephan Richter, a research associate at the iit, contributed to the UBA study “Künstliche Intelligenz im Umweltbereich” (“Artificial Intelligence in the Environmental Sector”).
Getting back to your study, you have mainly looked at the sustainability potential of artificial intelligence. But training AI is an incredibly energy- and data-intensive process. How can artificial intelligence itself be made sustainable?
That’s a tricky question. You have to look at what kind of sustainability you’re talking about. Is it economic sustainability? Then there soon comes the demand for easy data accessibility, or for new and different rules on data ownership so that everyone can access everything. This is very questionable if you put that back into the context of social sustainability. However, when it comes to environmental sustainability, I think we should put the issue of AI in a broader context, namely in the context of digitalisation and capitalism. So we cannot make AI sustainable if we do not make digital capitalism sustainable.
And how can digitalisation, or rather, digital capitalism be made sustainable?
There are already approaches to solving this problem. For example, there is a legislative framework like Germany’s carbon pricing system, which was passed in 2019 and is an environmental tax on carbon dioxide emissions. One idea would be to further expand and tighten up approaches like that. In the future, companies would have to fully offset their product and service-related CO2 emissions, by taking compensatory measures, for example. A binding framework at European, or even better, international level would be a big step towards sustainable digitalisation and also towards sustainable AI.
In very simplified terms, we can keep in mind that the “problems” we have to address in machine learning in the field of environmental sustainability are mainly energy problems. And energy is not usually produced in a 100% sustainable way. You still have a large amount of coal and of nuclear power – and that is definitely not sustainable and not desirable.
This high energy consumption is mostly due to the training of AI applications. How can that training be made more energy efficient?
There are more publications on how to improve life cycle assessments using artificial intelligence than there are on life cycle assessments of AI itself. We need a lot more research into life cycle assessments of AI. One often-cited work, lead-authored by Strubell at the University of Massachusetts in 2019, showed that training an AI model for natural language processing can emit about as much CO2 as five cars over their entire life cycle, i.e. from creation to operation and disposal. That sounds like a lot of energy at first, and it is. But the question is: What do you do with the result? How are the AI tools being used, how are they being applied, and how large is the impact?
Do you have an example?
The “Stena Fuel Pilot” project, for example, uses AI to optimise shipping routes. In this case, AI was used to calculate how to cover the best possible route between Kiel and Gothenburg with as little fuel as possible. It was shown that two to three percent of fuel costs can be saved. Training the AI was certainly expensive and energy-intensive; if the tool were to be used in just one ferry, it would certainly not be worth it. However, if the pilot project were scaled up and the AI tool transferred to the entire fleet, each ferry would save a few percent of fuel on each crossing. And if the developed technology were then operated over a long period of time, the impact would become even greater. Although the training of the AI was energy-intensive, the highly-scaled use of the AI tool in this example ultimately pays off and has a positive climate impact.
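The amortisation argument above can be sketched numerically. The figures below are hypothetical placeholders chosen for illustration, not numbers from the Stena Fuel Pilot itself: a one-off training cost is set against a recurring per-crossing saving.

```python
# Back-of-envelope amortisation of a one-off AI training cost against
# recurring fuel savings. All numbers are illustrative assumptions.

training_cost_kwh = 50_000             # assumed one-off energy cost of training
saving_per_crossing_kwh = 2_000        # assumed per-crossing fuel saving (kWh-equivalent)

def crossings_to_break_even(training_kwh: float, saving_kwh: float) -> float:
    """Number of crossings after which the training cost is paid back."""
    return training_kwh / saving_kwh

break_even = crossings_to_break_even(training_cost_kwh, saving_per_crossing_kwh)
# With these assumptions: 25 crossings. One ferry making a handful of trips
# never recoups the cost; a fleet making daily crossings for years does so
# many times over.
```

The design point is simply that the training cost is fixed while the savings scale with usage, which is why the same tool can be a net loss on one ferry and a clear win across a fleet.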
What possibilities are there for a “greener”, more sustainable design of AI – for example, for companies or developers?
Training AI tools or machine learning tools uses a lot of energy – when it comes to the hardware, the cloud providers and computing times. The location of the data centre also plays an important role. Germany, Switzerland and the USA all have a different energy mix. In these contexts, for example, as a programmer, I can have at least some influence. Do I work with the most efficient hardware or algorithms that are as efficient as possible? When I look at a cloud provider, I can check whether it uses green energy, whether it is carbon neutral, and what its sustainability commitments look like. And I can also get an overview of the location of the data centre. There’s a nice paper about that, that resulted in the Machine Learning Emission Calculator. You can enter parameters such as hardware, cloud provider, runtime and location to get a rough estimate of how much CO2 is actually emitted by your application. It’s certainly not a conclusive solution, but it’s a small, interesting project developed by the AI community to help raise awareness for the topic.
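The kind of estimate the Machine Learning Emission Calculator produces can be sketched as: energy drawn by the hardware over the training run, times a data-centre overhead factor, times the carbon intensity of the local grid. The power draws, overhead factor and grid intensities below are rough illustrative assumptions, not figures from the calculator itself.

```python
# A minimal sketch of a training-emissions estimate, in the spirit of the
# Machine Learning Emission Calculator. All numeric inputs are assumptions.

def training_co2_kg(gpu_count: int, watts_per_gpu: float, hours: float,
                    grid_kg_co2_per_kwh: float, pue: float = 1.5) -> float:
    """Estimate CO2 emissions of a training run in kilograms.

    pue: power usage effectiveness of the data centre (overhead for
    cooling etc.; ~1.5 is a common rough assumption).
    """
    energy_kwh = gpu_count * watts_per_gpu / 1000.0 * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh

# The same 72-hour, 8-GPU job on grids with different carbon intensities
# (illustrative values in kg CO2 per kWh):
coal_heavy_grid = training_co2_kg(8, 300, 72, 0.7)    # fossil-heavy mix
low_carbon_grid = training_co2_kg(8, 300, 72, 0.05)   # e.g. mostly hydro

# Location alone changes the result by more than an order of magnitude.
assert coal_heavy_grid > 10 * low_carbon_grid
```

This is why the interview stresses the data centre’s location and energy mix: for a fixed workload, the grid’s carbon intensity is a direct multiplier on the emissions.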
Are there ways to make the training of AI applications or machine learning itself less energy-intensive?
Well, at the end of the day, machine learning is about an algorithm that can detect certain patterns by digging through data. So if you don’t tell the algorithm what to look for or which parameters to optimise, then it can end up looking through a huge number of different patterns and parameters in a data set.
For example, if you ride home from here on a bicycle, you pay attention to two or three parameters in order to get home as quickly, as relaxed or as safely as possible among the other road users. As a human being, you can manage that quite well. With ten parameters, it becomes more complex. And with 100 parameters that you want to include, all of which can interact with each other, it becomes barely possible for a human being. For that, you could use a machine learning algorithm that analyses, based on data, which parameters could be interrelated and what impact they have. However, if the algorithm does not know what to look for, it scours the entire data set extremely intensively. But if we tell it where to search, i.e. restrict the search field, then it has to comb through far less of that mass of data. That means you can ultimately save energy by knowing which parameters you want to optimise and how.
And what about data efficiency? Training machine learning algorithms always requires huge amounts of data – or can this be reduced?
You always need data sets to train AI. At the end of the day, all you’re doing is building a statistical model using AI to detect or even predict patterns in a data set. However, it is important to select and pre-sort the data sets beforehand. So if you know exactly where you want to go, you can limit the search field, and the AI requires less computing power. This is important for two reasons: it both reduces the workload and uses less energy.
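Pre-sorting the data before training can be sketched with a standard feature-selection step. Again I am using scikit-learn as an illustrative tool of my own choosing: keeping only the informative columns shrinks the dataset the model has to chew through, which reduces compute per training run.

```python
# Pre-selecting data before training: drop uninformative features so the
# model trains on a smaller dataset. Sizes here are illustrative.

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# 50 features, of which only a handful are actually informative.
X, y = make_classification(n_samples=500, n_features=50, n_informative=5,
                           random_state=0)

# Keep the 10 features most associated with the target (ANOVA F-test).
selector = SelectKBest(score_func=f_classif, k=10)
X_small = selector.fit_transform(X, y)

print(X.shape, "->", X_small.shape)   # (500, 50) -> (500, 10)
```

The model that is trained afterwards sees one fifth of the original columns, which is the “limit the search field” idea from the answer above applied to the data itself rather than to the parameters.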
And what can governments do to make AI applications less energy-hungry?
One right step is to invest in research and development, for example to develop more efficient AI algorithms and hardware. And then we have to look at digitalisation as a whole. While research and development in the area of efficiency improvement is helping, we also need regulation when it comes to CO2. If every gram of CO2 had a price tag or had to be compensated in some way, then science, business, and individual citizens, who would end up carrying those costs, would work to ensure that digitalisation were as sustainable as possible. That also means that we should primarily be applying AI in use cases where we predict that it will have an overall positive environmental or social impact.
This is a translation of an original article that first appeared on RESET’s German-language site.
This article is part of the RESET Special Feature “Artificial Intelligence – Can Computing Power Save Our Planet?” Explore the rest of our articles in the series right here.