Interview: Can We Trust Artificial Intelligence?

How can we design AI-based systems so that they benefit the common good? RESET talked to Kristina Penner from AlgorithmWatch about algorithmic bias and the potential for AI to reproduce systemic inequalities – but also about transparency and civic participation.

Author: Lydia Skrabania

Translation: Lydia Skrabania, 05.26.20

AI-based systems are already being used for automated decision-making processes in many areas of society, including in police work, job application procedures, and in the financial, educational and health sectors. Artificial intelligence is already deciding whether someone gets a loan, who gets ahead in the job market – and even whether or not a child in a family is at risk of abuse. AI-based systems not only shape the lives of individuals, but also have an enormous impact on society as a whole.

What is behind the decisions made by AI-based systems? How can we trust them? How can we detect discrimination? And how can we design and use such systems so that they benefit the common good?

We talked about this with Kristina Penner from the nonprofit organisation AlgorithmWatch.

Kristina, in your 2019 report “Automating Society”, you looked at automated decision-making systems in twelve EU countries and at EU level, including many AI-based systems. The report contains many examples of “social scoring” – a term mainly used in the context of total surveillance of the Chinese population. What should we think about systems like this?

In “social scoring”, i.e. systems that assess people, place them in groups and assign them to certain profiles, it is no longer individual behaviour that counts, but instead which group or profile you belong to. When individual situations and complexities are no longer taken into account, that’s when it becomes tricky.

In all the countries that we examined, we found systems in the area of policing, for example, that definitely could be seen as having a “social scoring” character. In Denmark, they planned to create a system that identified cases of potential neglect of children. However, this is currently on hold because civil society resisted its implementation. In the Netherlands there is a system in place to predict child abuse and domestic violence. And in Spain, they planned to create a system that predicted the likelihood of young offenders re-offending.

The systems that involve this kind of prediction are the ones that are highly problematic – because they’re dealing with predictions of behaviour that hasn’t happened yet. You have to be very careful which data, which categories and which variables are put together, how they are weighted and which statistical calculation methods are used – so that you don’t come to any false conclusions.

What negative social impacts can AI-based systems have? What risks are there?

It is important to realise that AI-based systems can lead to unequal opportunities and unequal access – when they’re used in job application procedures, for example. And they can be economically marginalising and discriminatory, for example when there are errors in allocating people to certain profiles, managing people’s social benefits, working out their credit scores or evaluating staff. And if stigmatisation, stereotypes and prejudices are reinforced or reproduced by the data used to train these systems, they can lead to people being socially disadvantaged – and that can happen as early as the design and development stage, caused by the prejudices of those who develop and use these systems.

Does this mean that systems, which are actually supposed to evaluate and make decisions as objectively as possible, ultimately perpetuate the same patterns of discrimination that already exist in our society?

When it’s not your personal behaviour or your individual case that decides which result, which decision, comes out of the system, but the fact that you belong to a certain group, then yes. That’s something we have to keep in mind and test for, because the potential for discrimination often only becomes apparent when you use the system in real life. When we think of systems that are used in the justice system or in police work, it can even result in people’s civil liberties being restricted and their freedom being unlawfully taken away. If those systems are not developed in a comprehensible way, are not tested independently with the right data, or are based on discriminatory assumptions and data, that’s what can happen.
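To give a concrete, if very simplified, picture of what such testing could involve, here is a minimal sketch that compares the rate of favourable decisions across two groups in a set of past decisions. The records and the 80-per-cent threshold are invented for illustration and are not taken from any of the systems discussed in this interview.

```python
# Minimal sketch of a group-level audit of an automated decision system.
# The decision records below are invented for illustration.
from collections import defaultdict

decisions = [
    # (group, favourable_decision)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
favourable = defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    favourable[group] += outcome

rates = {group: favourable[group] / totals[group] for group in totals}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# A simple "four-fifths"-style heuristic: flag the system if one group's rate
# of favourable decisions falls below 80% of another group's rate.
if min(rates.values()) < 0.8 * max(rates.values()):
    print("Potential disparate impact - review the model, its data and its weights.")
```

A check like this only surfaces disparities once real decisions are recorded and measured, which is precisely why the potential for discrimination often stays invisible until a system is already in use.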

Do you have an example of an AI-based system that has the potential to discriminate?

In Austria, an algorithm-based system has been used since 2020 to evaluate and predict the employment chances of job seekers. The result of the scoring system then decides the next steps – whether the person receives further training, for example, or another kind of support. Within this system, women are predicted to have lower chances of finding work again solely on the basis of their gender – because that’s what the model predicts. If other factors such as care obligations for children or parents are added, or if I have a migrant background, then I get an even worse score – and thus less support, even though those factors probably mean I am actually disadvantaged in society already.
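As a purely hypothetical sketch of the mechanism Penner describes – group membership lowering a score regardless of individual behaviour – the following example uses invented variables and weights; it is not the actual Austrian model.

```python
# Hypothetical illustration of a group-based employment scoring model.
# Variable names and weights are invented for this sketch; they are NOT
# the real coefficients of the Austrian system.

BASE_SCORE = 0.70

def employment_score(female: bool, care_obligations: bool, migrant_background: bool) -> float:
    """Return a score in [0, 1]; a lower score leads to less support."""
    score = BASE_SCORE
    if female:
        score -= 0.08           # penalty attached purely to group membership
        if care_obligations:
            score -= 0.06       # care work only counts against women in this sketch
    if migrant_background:
        score -= 0.05
    return max(0.0, min(1.0, score))

# Two job seekers with identical work histories end up in different
# support categories purely because of the groups they belong to.
print(employment_score(female=False, care_obligations=True, migrant_background=False))  # 0.70
print(employment_score(female=True,  care_obligations=True, migrant_background=True))   # 0.51
```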

What changes would you suggest there? Should the system be more supportive, rather than just reproducing social disadvantages?

People are still arguing: “Well, that’s just the way the job market is, that’s just real life, and that’s why we can’t intervene and change things to how they should be.” You have to question that argument. If it could be used to compensate for inequalities and injustices in a sensitive way, maybe the system wouldn’t be such a bad idea. For men who have childcare responsibilities – and that is the discriminatory thing about this – that variable does not have a negative impact on their score. When they assume those responsibilities, they don’t need to worry about it having any effect on their career in real life – unlike women. The interesting thing about this example is that the system is quite transparent: the weighting and the basis of the calculation have been made public, so the people in Austria know how it works. That means that civil society can discuss it and those affected can understand how they were assessed. That’s a first step towards a competent handling of systems like these.

So to identify and counteract discrimination by AI-based systems, we need transparency and participation?

Yes, these systems are already being used in many places to make life-changing decisions about people – often without us or the institutions that use them really being aware of how these systems work. There is a lack of transparency. Technologies such as these aren’t always being used with negative intentions, but especially when they are self-learning systems, the people who use them are often not even aware of what the result is and what effect it has. In other words, transparency at various stages of the development and use of these systems is a prerequisite, but still not enough – we also have to come up with ways for people to become empowered and get involved in shaping these systems.

What information and data should be disclosed?

The riskier and more sensitive the fields of application, and the more far-reaching the effects on the fabric of society, the common good and individual rights and freedoms, the more transparently these systems should be dealt with.

If we take the public sector as a starting point, for example, there should be transparency about where these systems are used and what they aim to achieve. There is little information available about this. Then there’s the question of whether the system is “successful” – in other words, whether it fulfils its purpose, and whether that purpose was defined in advance. We also need transparency about how that goal is achieved: Which assumptions, which hypotheses, which modelling is behind it? What data was used? Has data been used for a purpose other than that for which it was originally collected? Have algorithms been used for a different purpose than the one for which they were originally developed?

Then we need information about who developed the system. We see public authorities in particular as having a duty to publish that kind of information. If systems have a huge influence, we as citizens should know where they are being used and how they work, so that we can deal with them with more confidence and self-determination and – where possible – be free to decide whether we are subject to them.

You just said that transparency is also needed at the development stage. Are there review mechanisms at the development level to counteract negative impacts such as potential for discrimination?

We’ve been aware of the so-called diversity crisis, or diversity challenge, in most developer teams for a long time. Some teams of developers are facing bigger challenges than others. Developer teams should include diverse people with different experiences and group affiliations and involve all of those people in the modelling, development and programming stages of algorithm-based systems, and also in the data collection. Because at the end of the day, their cultural and social influences and their knowledge are very much part of the development – and the systems they develop will ultimately reflect those experiences.

Technology has often reflected the inequalities and injustices found in society. Cars and seat belts, for example, are simply not effectively designed for women or pregnant women, because those groups are still not, or not sufficiently, involved in the design and testing stages. And there are voice-recognition assistance systems that recognise women’s voices up to 70 per cent worse than men’s voices, because they were developed and trained using a certain set of experiences and data. Examples like this are well documented and allow us to learn from what has happened in the past.

We need a rethink within the fields of policy and funding. We have to promote and support tech companies and developer teams that are tackling these diversity challenges.

How can we even trust a machine, an algorithm? How can we make sure that the decisions made by an AI-based system do not contain bias?

Even if you’re using a very sensitively created data set, it will never be completely without bias – and the system will never work without bias. The question of trust is interesting. There are systems that I trust because I understand how they work, for example a recommendation system that offers me the right products or services that are relevant for me – and above all explains to me how this pre-selection is made. But I don’t have to follow its recommendations. As soon as there’s a system that I don’t understand, or where I don’t even know that I’m being influenced or affected by it, the question of trust becomes more complex.

How can we establish the necessary trust in systems like that?

One idea for individuals would be platforms that explain how AI systems work using simulations and scenarios. I might not be able to avoid them completely, but I could at least gain awareness of the criteria that play a role. And if I notice a mistake, I should be able to demand that it be corrected. After all, knowing how to respond to violations of my rights also creates trust.

And if someone wants to raise an objection, there has to be an easy way of doing so. We can’t place the control of these systems solely in the hands of one individual. We have to be taken seriously as co-designers of the use of these systems. It is very important to promote social dialogue and co-determination and to steer the development and use of such systems towards the common good.

The obsession with “data collection” nowadays often goes against an individual’s right to data privacy. But for many AI applications, it is precisely these huge amounts of data that are needed for learning and training. How can we ensure that the necessary data is available, but that it is used legally and legitimately?

It is important to take a differentiated view – collecting data is not bad per se, and collecting the right data for the right use is very valuable. However, being able to assess and ensure this, and at the same time minimise the risks, not only requires resources and skills, but also poses challenges in terms of transparency and, in some cases, the legal situation.

Solutions have been suggested, however – for example, the idea of “data custodians”. This is a very interesting area, still in its infancy, but very promising. In this concept, it would not be the platform or the administration – i.e. the data collector – that decides who may use which data for which purpose, but an intermediary, independent body that oversees the use of data and offers explanations about what it is being used for and how. The aim is to ensure that these technologies and the processes involved, such as data collection, data use and data evaluation, always serve the common good.

This is a translation of an original article that first appeared on RESET’s German-language site.

This article is part of the RESET Special Feature “Artificial Intelligence – Can Computing Power Save Our Planet?” Explore the rest of our articles in the series right here.
