Trust the anthropologist to talk about humans at a conference on Artificial Intelligence…
Yesterday I spent an extremely stimulating day at the Rustat Conference, held at Jesus College, where the topic was “Artificial Intelligence – the Future Impact of Machine Intelligence”. The Chatham House Rule prevents me from describing who said what exactly, but there were panels and talks on “Ethics, Regulation, Responsibility”, “From Artificial Intelligence to Super-Intelligence”, and “AGI and Existential Risk”, and attendees included leading figures in AI tech research, as well as academics, financiers, and entrepreneurs. If you are interested in the field of AI, you would have recognised many of the people attending. Being invited to attend, and to come along to the pre-conference dinner the night before, was an honour for this newly minted post-doc!
But perhaps I should reframe my first sentence. ‘Humans’ were in fact in evidence in many of the conversations: in discussions of a Universal Basic Income for a humanity that may be displaced by automation, in discussions of state and individual human ideologies that might come into conflict with AI futures, and in considerations of the ‘humanlikeness’ of AI.
The latter stirred me to make a point, drawing, as the anthropologist in the room (although not the only one, as my PhD supervisor was a speaker), on some ethnographic examples. I am not entirely convinced I was clear in my point, so I want to expand on it here.
One of the discussions involved mapping scales of consciousness and ‘humanlikeness’ for both current and theoretical future AI, with the proviso that both concepts are quantitatively, and semantically, difficult. As an example, a brick would score at a low point on both, while Ava, the embodied AI from the film Ex Machina, would hypothetically be placed further up and to the right on these x and y axes. Recognising this proviso, my issue was with the idea that ‘humanlikeness’ is an objective quality. After this point about scales, someone argued that technology simply cannot have an essence that would make harming it a problem: we are not concerned by the thought of people torturing iPhones, which raised a laugh from most of the attendees.
To me this is to misunderstand ‘humanlikeness’ and to ignore an incipient and prevalent anthropomorphism in response to AI. For example, AlphaGo, the program developed by Google DeepMind, would score only slightly higher than a brick on these ‘objective’ scales of humanlikeness and consciousness. Its only human quality is that it plays a game, and there is no claim for its consciousness. And yet… fan art in response to the program’s success depicts AlphaGo as a young anime-style boy or girl (the latter is also interesting because of the gender issues around professional Go, although drawing females may simply be more aesthetically pleasing for these artists).
So here we have:
(nb the second AlphaGo, by Anon or possibly the same artist, has mechanical hands, and is therefore less ‘humanlike’ than the one on the left by @rakita_JH)
Young male AlphaGo by Otokira-is-Good
AlphaGo by Nunsaram (again this AlphaGo has mechanical hands choosing vertices on a holographic-seeming Go board, mechanical rabbit-esque ears, and is connected by wires to… something…)
AlphaGo by Knocker12, in this case a more abstract humanoid, with a touch of the Tuxedo Mask about ‘him’.
That these anthropomorphisms are variable in their humanlikeness (the mechanical hands, abstract head, wires, animal ears) shows again that the axis of ‘humanlikeness’ is interpretive. Likewise, these artists seem to attribute differing levels of consciousness. So, we might note that the last AlphaGo is presented by Knocker12 as a little linguistically simplistic in its bouts with Lee Sedol:
Whereas some versions are more verbal, if still using ‘computer speak’:
Of course, there is a case to be made for a general anthropomorphism among humans, the same kind that led to interpretations of natural phenomena and animals in terms of humanlike intentions, and perhaps led to instances of the deification of phenomena and the non-human.(1)
However, in the context of discussions about AI, a potential new non-human intelligence, perhaps we need more reflexivity about how we mark the distinction between the human and the non-human. Paying particular attention to the entanglements of anthropomorphism and personification will shape how we understand our positioning of these technologies on any imagined scale of humanlikeness, and therefore how we interact with them (especially when a diminished attribution of humanlikeness might underlie justifications for abuse).
Therefore, laughter at the thought of being concerned about the torture of iPhones ignores this mass anthropomorphism. In discussion I also drew attention to the public reaction on social media to the Boston Dynamics videos of their prototype robots being knocked down or tripped up. While the robots were not ‘hurt’ in a way that might be objectively measured, human reactions again anthropomorphised these events, and cries of “Bully!” could be heard online.
Some responses online were obviously more tongue in cheek, as in this spoof charity video, but others were more concerned for the ‘poor’ robots (or even concerned about humanity’s future after a hypothetical ‘Robot Uprising’, when our new overlords might look back at our actions with regard to their ancestors!)
Admittedly, and I said so at the time, the conference was not really about robots of this kind, so this example was perhaps a side point (although embodied AI, and the importance of embodiment for real consciousness, was also discussed during the panels). But any discussion of humanlikeness has to take into account how humans subjectively apply the quality to the non-human, especially when they do so in surprising ways, and among a general populace much wider than those dedicating themselves to AI research. Considerations of the future of AI will certainly need an awareness of perception, inference, and anthropomorphism, and this is an aspect to which the anthropologist can contribute by paying attention to responses outside of the conference and the research lab.
(1) This ‘next’ step is of interest to me, and it is worth noting that Knocker12 also represents AlphaGo as a many-armed god, although this is outside the immediate point of this post: