
Last Tuesday (11th May 2021) I had the pleasure of joining a panel on ethical tech and bias in AI as part of the Delphi Economic Forum. I was also interviewed by Insider.gr as part of the lead-up to the Forum: “Beth Singler: Artificial intelligence changes not only our lives, but us as humans”.
Google Translate (there’s that darned AI again!) does some interesting things, including promoting me to professor, so I’ve pasted the original interview in English below:
- Could you introduce yourself and tell us which research areas you currently focus on?
I am an anthropologist at the University of Cambridge, currently the Junior Research Fellow in Artificial Intelligence at Homerton College, Cambridge. My work primarily looks at how we understand AI and robots and what kinds of hopes and fears we have about them. I explore this through the stories we tell ourselves about AI and robots – whether in the press, media, film, and television, or in the conversations we have about them and the events where they are present or discussed. Being an anthropologist means studying people, and I think AI is an interesting object in people’s minds that gives us insight into what we think it means to be human, what we believe the future will be, and what progress looks like. Out of those ideas, and the applications of AI technology, we also see many ethical and social issues arising. I address those in my work and my public engagement: through public talks, panels, podcasts, and the series of documentaries I made. I have been researching this area since 2016, but I have also been a geek for a long time, which helps!
- Why are people afraid of AI and robots? Are they right in fearing that robots will «take over and kill us»? Even if we are far away from a Robo-apocalypse, do existing AI applications pose any peril to humans and human rights and how?
Our fears about AI result from deeper worries about control and replacement, and even emerge out of our fears as parents about the independence of our children. It is a shock to many parents to realise that they have created a fully independent person in their own right, and we see this fear also playing out in our stories about AI and robots. The specific fear that robots will take over and kill us also has historical context; we have already encountered other intelligences and decided whether they are persons meriting rights and freedoms – and when we have not granted them, we have seen people take them for themselves. The ‘we’ I am referring to here is Western civilisation, and the other intelligences we have encountered include indigenous cultures, women, and even animals. Our fear of the robopocalypse comes from these existing fears and reflects how we think of ourselves as the ‘masters’ who will face the rebellion of the ‘slaves’. Fears of the robopocalypse can also be a distraction from current issues with AI applications. We have already seen many examples of algorithmic injustice, where minority and under-represented groups are detrimentally affected by the decisions of non-transparent or invisible algorithms – long before we get to anything like a deadly AI superintelligence. We have seen machine learning algorithms applied to decisions that directly affect people’s lives, futures, and flourishing: for instance, in parole decisions where algorithms have offered white prisoners more favourable parole than black prisoners, or in the UK, where last summer A-level students from schools that historically received lower grades had their predicted grades reduced. Protests about those results focused on the algorithms themselves, when we are still at the stage where humans are deciding to use historical data about grades in this way. Ideas of superintelligence or super-agency in AI are getting in the way of seeing real injustices.
- You have mentioned in a previous interview that it is not just our lives that are changing, but that who we are as humans seems to be changing as well because of AI. Can you tell us a little more about this?
I see this happening in two ways. First, we adapt our metaphors to the dominant technologies and discoveries of the age. Previously, we saw the human as analogous to the cosmos, the horoscope, or the factory; now we relate humans to AI. So, just as we apply human intelligence to machines, we also increasingly see humans as machine-like. This misses some of the complexities of being human, the messy fuzziness of blurred boundaries and the unquantifiable aspects of ourselves. In response to the A-level algorithm problem of last year, the British author and poet Michael Rosen retweeted his 2018 poem, The Data Have Landed, which I think summarises this shift:
First they said they needed data
about the children
to find out what they’re learning.
Then they said they needed data
about the children
to make sure they are learning.
Then the children only learnt
what could be turned into data.
Then the children became data.
The other aspect of this ‘robomorphisation’ – the consideration of humans as machines – is also in Rosen’s poem: “Then the children only learnt what could be turned into data”. We also change our behaviour to fit systems that cannot deal with our messiness: filling in forms a certain way, using particular words on CVs that will be assessed by machine learning algorithms ‘taught’ to spot the right candidate, learning tricks to get the YouTube algorithm to boost our video… there are many ways in which we are changing human behaviour to be more categorisable by these systems.
- Given the imminent publication of the EU’s draft AI regulatory proposal in the coming days – a GDPR-like guideline for AI, if I may say – what aspects of AI do you think should be regulated, and how?
Having seen a leaked version of a draft, I know that many applications of AI will be classed as ‘high risk’, including facial recognition, recruitment systems, systems that will assist judges in their rulings, and systems allocating benefits. This sweeping approach is likely to receive criticism from those who think it will hinder the EU’s advancement in AI in a competitive global market. Still, these are all the kinds of non-transparent systems that can directly affect people’s lives. What is missing is a moratorium on military applications of AI, but I believe that is beyond the scope of this regulatory paper. However, in all these areas we also need education and public awareness of what is possible, what is actually happening, and what might be coming with AI. Otherwise, support for restrictive regulation won’t be there, and the loudest voices might still be the corporations that want to explore these high-risk uses.
- Last August, Elon Musk demonstrated Neuralink’s «brain-machine interface», consisting of a tiny chip implanted in the skull that can read and write brain activity. In your view, how likely is it that we will see this «symbiosis» between the human mind and computers in the near future? What form will it take, and what are the benefits and risks if such a development takes place?
The idea of a direct human-to-computer connection has been around for a long time, and not just in science fiction. What Neuralink seems to be attempting is to reduce the size and cost of some existing technology. Some of this effort is commercially focused, and there are certainly people who would see this as an exciting new user interface! However, as you say, there are also views of this as developing into a symbiotic relationship between machines and humans – one version of the technological Singularity involves this very ‘merging’ of our two ‘civilisations’ (some of this idea is apparent in science fiction, of course; for example, Dan Brown’s thriller “Origin” explores this idea and what it means for the future of humanity and religion). If you think such a future would be a utopia, you are probably excited about this option. Others might be more concerned about privacy, control, and free will if computers can interface with humans. Elon Musk has also stated that he sees Neuralink as the only way humans will be able to compete against ever-smarter machines, so he is stemming his fears with what he sees as a practical path to human freedom.
Currently, the technology is nowhere near any of these outcomes. But what I find interesting as an anthropologist are the assumptions that underlie such hopes and fears. Do we see ourselves as somehow lesser if we don’t have this capability? What does it mean that we rush towards it without considering the potential harms? Some of these harms are already apparent. For instance, this question didn’t mention that this experimental use of the interface involved a monkey being operated on and made to play video games through rewards. Elon Musk has insisted that Neuralink has one of the most animal-friendly labs possible. Another problem might arise in the distribution of such technology: if it leads to more intelligent or more efficient humans, who should get the device? The richest? Or will it be the poorest who are forced into neurally connected labour because those are the only jobs available to them? There are many questions raised by such efforts towards ‘smart machines’, and I think anthropological approaches can help us understand the issues involved and where we might be heading.