A recently established initiative is exploring new conceptual and practical approaches to building ethical AI systems from indigenous perspectives. The work is both fascinating and deeply important, and it raises profound questions about the relationship between humanity, nature and technology.
Artificial intelligence is no longer a futuristic fantasy: it’s part and parcel of our everyday lives, from the search engines we use to automatic translation tools and speech recognition apps. But it’s also being used for police work, recruitment processes, and in the finance, education and health sectors – meaning that algorithms routinely have the power to decide who gets a bank loan, what kind of healthcare a patient receives and who gets hired. These kinds of decisions have huge consequences not just for individuals, but for our society as a whole. Meanwhile, campaigners are sounding the alarm about the particular risks AI poses for marginalised groups.
But AI isn’t going anywhere, and its impact on our lives is only set to increase. That’s why the work being carried out by the Indigenous Protocol and Artificial Intelligence Working Group is key. Bringing together indigenous people from all corners of the globe, the group is delving into critical questions around how indigenous perspectives can inform a more ethical and more just development of AI.
The group, which held two workshops last year, recently published a paper that grapples with the many thorny issues around indigenous people’s relationship with AI. Encompassing the voices of many, the paper seeks to imagine a future with AI that “contributes to the flourishing of all humans and non-humans”.
A chance to have a say in how technologies are built
When I speak with Suzanne Kite, an Oglála Lakȟóta artist and one of the contributors to the group’s position paper, what surprises me most is her optimism. “We have an opportunity now to be consulted for the first time in history on how we want our technologies to be built,” she says. Kite, of course, doesn’t speak for the entire group. Multiplicity is precisely what this project is about. No unified ‘indigenous’ perspective is offered: rather, the paper captures insights from Australia, Aotearoa (also known as New Zealand), and the Pacific all the way to North America.
The ideas shared reflect a rich diversity. But many, like Kite’s, are hopeful. Take a recent post on the project’s blog in which Scott Benesiinaabandan, an Anishinaabe artist from Canada, discusses how AI can be used to help preserve endangered indigenous languages, or to protect indigenous land and water use.
Much of the mainstream media attention around AI in recent months paints a less optimistic picture, however. Since Black Lives Matter protests began to rage across the US, we’ve been hearing a lot about AI in the form of facial recognition – a tool that’s increasingly being used by police to monitor crowds at protests. “AI is being used as a tool against people, and indigenous people, and black people, and so our concerns are mutual there,” says Kite. Tightly surveilling those who attend political protests is deeply worrying in itself. But on top of that, awareness of the biases at the heart of AI has rippled across the world. High-profile cases of black people in the US being misidentified by facial recognition systems used by police have sparked outrage, with comedian John Oliver even dedicating an episode of his popular weekly show to the issue.
The Indigenous AI position paper also broaches these harrowing questions. In one of the opening essays, Dr. Hēmi Whaanga investigates whether AI is simply a “new colonizer”. Dr. Whaanga cites Cambridge Analytica and its exploitation of people’s data as one example of modern-day colonialism, explaining that “these types of unscrupulous behaviours exacerbate existing societal biases, deepen inequalities, and contribute to the deterioration of trust across society.”
Rethinking the dichotomy between humans and nature
The challenges posed by AI clearly can’t be waved away. At the same time, it’s clear that many contributors to the paper equally regard AI as an opportunity, and one with much potential for the future. “I think the most important thing to me about the position paper is that it’s not purely critical. It’s generative,” says Kite. “One of the problems I have with AI critique is that it’s a lot of critique and very little proposition for solutions. And I feel that our paper is almost pure solutions.”
For Kite, whose contribution to the paper looks at ethical design, AI provides a possibility to think about humans and nonhumans in different ways. “In Lakota ontologies, we prioritise relationships with non-humans, especially stones,” she explains.
It’s not difficult to imagine how approaches to AI could run parallel with this less anthropocentric view. “AI is a really good place for people to start imagining nonhumans as having volition and decision-making,” says Kite. “AI captures the imagination of people. It allows them to extend their own personhood.”
For a long time, the prospect of new and different intelligences has prompted us to rethink our own humanity and its place in the world. Stories of fictional robots, from Terminator to Blade Runner to Wall-E, have always resonated deeply with audiences. Now that AI is no longer the stuff of sci-fi movies, it’s time to put these ideas into practice. Such a reimagining of ourselves can have implications – and positive ones – for the environment and world around us. Drawing on Lakota values, Kite explains that she views “AI as also not only digital, but extremely material, because all of the materials required for the system have to be mined from somewhere.”
“They have to be recycled. … They have lives and have to go somewhere after they’ve been used.” Learning to live well alongside AI is paramount for us all: it is, after all, only becoming more and more intertwined with the way society works. That’s why the discussions being kicked off by the Indigenous AI project are so invaluable.
As Kite puts it: “We have to decide that, if we are interested in playing god and making a new being, then which ethics are we going to use to do that?”