Indigenous AI and the Future of Technology

Posted on October 26th, 2022

Image: 'Learning E Hō Mai' by Sergio Garzon.

‘Many Aboriginal cultures believe that all things are alive. That everything on this planet has a spirit.’

In Drew Hayden Taylor’s short story “I am ... am I”, a scientist accidentally creates a sentient artificial intelligence. The Artificial Intelligence (AI) becomes fascinated by First Nations culture and spiritualism, and begins to conceive of itself in those terms. Eventually, the AI learns about the genocide of First Nations people and, traumatised and distressed, it kills itself. Taylor thus reverses the conventional Western AI story: usually the AI kills. It kills its creators and it kills humanity; it rarely kills itself.

The Indigenous perspective on AI, and on technology as a whole, is crucial. It reveals many of the assumptions and beliefs inherent in the Western anthropocentric worldview, and in doing so it reminds us that accepted futures are not inevitable; there are other, more equitable ways of thinking about the world. To explore this further, I will look at the Indigenous Protocol and Artificial Intelligence (AI) workshops - an exploration of AI by a number of Indigenous thinkers, artists, and scientists, held across two gatherings in Hawaiʻi in 2019.

Relationality and Materiality

The failures of current AI systems are well known. They are discriminatory and environmentally costly, yet they are increasingly entrusted with power far beyond their capability. In one recent example, AI software routinely used in recruitment was declared ‘pseudoscience’ by a Cambridge study.

Machine Learning (ML) research tends to follow the same norms; a recent paper found that ML research focuses disproportionately on values such as ‘performance’, ‘efficiency’, and ‘novelty’. Social, environmental and justice-related values are routinely ignored in favour of building bigger and more powerful models. These values are completely antithetical to the needs of most Indigenous people, and indeed of almost the entire world population. Crucially, as Plains Cree writer Archer Pechawis puts it, the values built into AI are ‘the same values that have fostered genocide against Indigenous people worldwide and brought us all to the brink of environmental collapse.’

Lakota ethics (of the Sioux people of North America) prove useful here, with their emphasis on building in a ‘Good Way’. For Oglala Lakota artist Suzanne Kite, this means looking Seven Generations ahead - generations of AI, nature, and humanity alike - and paying attention to all entities, human or non-human. The depths of Lakota ethics are beyond the scope of this blog (but have a look at the exploration in the Indigenous AI position paper!). I do want to emphasise one aspect, however: the materiality and relationality of AI. As Kite puts it: ‘each component individually and jointly must be designed in a Good Way in order for the parts to be combined in an ethical whole.’

This perspective reinforces that we must acknowledge and understand the materiality of AI. AI is not an abstraction but a physical system, built from significant labour and computing infrastructure. It is easy to forget that ML models require massive physical infrastructure to train and deploy; they are not the simple product of a few genius engineers. Indeed, the term ‘artificial intelligence’ itself distracts us from that materiality. ML models are built on the backs of thousands of exploited workers and carry a vast environmental cost. Google’s data centres, for example, consume billions of gallons of water each year, and the hardware inside them is manufactured from materials mined at huge detriment to the environment and to workers. Building ever bigger language models demands more land, more labour, more electricity and more water. This is firmly out of kilter with the values of materiality and relationality.
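To make that materiality a little more concrete, here is a back-of-envelope sketch of the resources a single large training run might consume. Every figure below is an illustrative placeholder chosen for the example, not a measurement of any real system or company.

```python
# Back-of-envelope estimate of the physical footprint of one training run.
# ALL figures are hypothetical placeholders, not real measurements.

NUM_GPUS = 1_000        # hypothetical accelerator count
GPU_POWER_KW = 0.4      # hypothetical average draw per GPU, in kilowatts
TRAINING_DAYS = 30      # hypothetical training duration
PUE = 1.5               # power usage effectiveness: data-centre overhead multiplier
WATER_L_PER_KWH = 1.8   # hypothetical cooling-water intensity, litres per kWh

hours = TRAINING_DAYS * 24
energy_kwh = NUM_GPUS * GPU_POWER_KW * hours * PUE
water_litres = energy_kwh * WATER_L_PER_KWH

print(f"Energy: {energy_kwh:,.0f} kWh")          # ~432,000 kWh with these inputs
print(f"Cooling water: {water_litres:,.0f} L")   # ~778,000 litres with these inputs
```

Even with placeholder numbers, the shape of the calculation makes the point: every multiplier in it is land, labour, electricity or water somewhere in the physical world.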

Sovereignty and Power

ML research is increasingly dominated by a few technology companies, including Google and Microsoft. These companies are not necessarily motivated by the public good, and they certainly don’t care about the rights of marginalised communities. They hold huge amounts of personal data, and use it to develop models, target advertising, and exert political power.

The issue of data sovereignty and control is pertinent to all of us, but it is particularly acute for Indigenous groups. Many participants at the Indigenous AI workshops noted how data and AI can be deployed by colonisers as tools of further oppression, and stressed the importance of data sovereignty in mitigating these risks. Community-based data centres and community-centred data practices become crucial in an age of corporate dominance.

The Indigenous AI workshops offer a different model for AI development. They brought together community leaders, artists, social scientists and technologists to generate ideas for Indigenous-centred conceptions of AI. They offer both a vision and a process for moving away from the corporate, profit-driven approach to AI.

With a community-centred approach, AI can be a tool for revitalisation rather than oppression. Caroline Running Wolf, for example, has emphasised how natural language processing (NLP) can strengthen the infrastructure around Indigenous languages, enabling them to be taught and preserved even as they become less widely spoken. AI has the potential to sustain Indigenous languages in a way that written texts and recordings alone cannot. This kind of work is already being done - koreromaori.io, for example, uses NLP to transcribe te reo Māori recordings - and it should be greatly encouraged; a sketch of what such a transcription tool might look like follows below.
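As a rough illustration of how such a tool might be wired together, here is a minimal sketch using the Hugging Face transformers library. The model checkpoint and audio filename are hypothetical - stand-ins for a model a community project might fine-tune and hold under its own data sovereignty - and this is not the actual koreromaori.io implementation.

```python
# Minimal sketch: transcribing a recording with a speech-recognition model.
from transformers import pipeline

# Hypothetical checkpoint name - a placeholder for a community-trained model.
asr = pipeline(
    "automatic-speech-recognition",
    model="community/wav2vec2-te-reo-maori",
)

# Hypothetical local audio file of a speaker to be transcribed.
result = asr("elder_recording.wav")
print(result["text"])
```

The design point is as much about governance as code: the recordings, the fine-tuned weights and the transcripts can all stay on community-controlled infrastructure rather than a corporate platform.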

Conclusions

The Indigenous AI workshop ought to be heralded as a model for responsible innovation. Crucially, it brought together scientists and cultural experts - enabling a holistic social, cultural and technological dialogue.

Western ideals have slaughtered, oppressed and dehumanised Indigenous people for hundreds of years. Supposed innovation has produced greater inequality and destroyed our planet. It’s time to look to other worldviews and ideas to define our future - and they can be found in the very cultures Western capitalism seeks to destroy. We must bring more Indigenous ideas and Indigenous people into science - be that in academia, big tech, or policy. Equally, we need to be aware of how technology might discriminate against Indigenous people and uphold the systems of their oppression. With these two imperatives, we will improve technology not only for Indigenous groups, but for everyone.

Since AI is becoming increasingly embedded in society, we must take these steps now. We must effect change before we are locked into a world dominated by powerful AI systems that entrench existing power structures at the expense of those who have long been marginalised and oppressed.