Short Course: Reality, Robots and Religion

NEWS:

Professor Richard Harper has recently joined the expert speakers for our Faraday short course in September. He will be speaking on “Communication, God, and Machines”:

[Image: short course advert]

He joins luminaries including Lord Martin Rees, Professor Nigel Cameron and Professor Susan Eastman (full list of speakers here, with details of how to register for the weekend).

The aim of this weekend event is to address the personal, societal and theological implications of advances in Artificial Intelligence (AI) and Robotics. These issues are complex, multifaceted and highly contested. Our aim is to host a conversation between participants from a range of disciplines, including computing and robotics, sociology, anthropology, ethics and theology.

I hope you can join us!


Beth


Writing the Script for AI

Yesterday afternoon was spent writing a film.


You might start to imagine screenwriters in bottle-top glasses hunched over typewriters, with fat producers stalking behind them waggling thick cigars in smoke-filled writers’ rooms decorated with slatted blinds and green glass lamps. However, we were in a teaching room in St Edmund’s College working on our short film as part of the Cambridge University/Wellcome Trust short film scheme. Although the final title is still under discussion, the general theme will be “Could, and Should, Robots Feel Pain?”

This topic came about partly because the scheme requires that the two academics generating the idea for the film be an interdisciplinary team spanning Arts, Humanities and Social Sciences, and Biomedical Science. This meant that a few months ago I took part in a networking event where Blue Spots (Arts, Humanities and Social Sciences) sat at tables and introduced themselves and their research to revolving sets of Red Spots (Biomedical Sciences). Apart from feeling awkwardly like speed dating, this event demonstrated the diversity of the research going on at the university, as well as the difficulty of finding common themes and aims across it. In fact, in a few cases, I met Red Spots who just wanted to use the Blue Spots to do all the writing for the film without much input into the idea behind it!

Luckily, I met Dr Ewan Smith on one of the rotations of the tables. He was not only interested in a proper collaboration between our projects, but his research on pain (in naked mole rats!) also suggested an interesting overlap in interests, with a bit of speculation and theorising thrown in.

A month or so later, our proposal was accepted for the scheme (along with proposals by four other teams of researchers), and Ewan and I were back in rotation, this time at a networking event with filmmakers who had expressed interest in producing the actual short films for the scheme. We met many interesting, and very creative, people, but within seconds of entering the room I had bumped into one filmmaker who mentioned that he was already making a feature-length documentary on Artificial Intelligence. “Oh, well you won’t want to be involved in our project as it’s too similar,” I said, shrugging. But Colin Ramsey of Little Dragon Films was not put off.

Then we get to yesterday: myself, Ewan, Colin, and James Uren, Colin’s co-producer, in a college teaching room on a humid day in June, discussing the beats of a ten-minute film on the technological details and the theoretical implications of pain being part of the design of non-human others. We discussed the science and who we wanted to interview, the possible reasons why humans might create robots (or embodied AI) that could feel pain, whether they could experience the emotional aspects of pain, and, getting to the meta-level of the questions, what might be the reasons for our reasons for attempting this?

In some ways, this kind of script development meeting is not that unfamiliar. In a previous life, I worked in the film industry on fiction scripts. I even took a degree in Script Development from the National Film and Television School before I decided to return to academia:

Receiving my NFTS degree certificate from Sir Richard Attenborough

A key similarity struck me when we were going through the structure of the short and asking ourselves again and again, “What do we want the audience to be thinking/feeling here?” Attention to the audience in the fiction-based film industry has led to accusations of scripts being written by the numbers, or of pandering to the whims of market research (*cough* test screenings *cough*). Certainly, back in the stone age when I wrote reports for film companies on their incoming scripts (mostly headed for the slush pile), a major section was always on the expected audience and whether the script succeeded in engaging with them – often with a quantitative assessment giving the script a final mark. Further development of the script often involved thinking about narrative conventions and the audience’s expectations for things like romantic journeys, a moment of self-sacrifice, or the obvious one, the ‘Happy Ever After’ – some even with a specific timestamp for when they should appear in a film.

In the case of our short film we have an idea of the levels of engagement we hope for from the audience, but no such near-algorithmic attention to popular forms and tropes and how they ‘should’ fit together.

Which brings me to another short film I’ve been thinking about lately: Sunspring. This science fiction short film was written by artificial intelligence, as the blurb explains:

In the wake of Google’s AI Go victory, filmmaker Oscar Sharp turned to his technologist collaborator Ross Goodwin to build a machine that could write screenplays. They created “Jetson” and fueled him with hundreds of sci-fi TV and movie scripts. Shortly thereafter, Jetson announced it wished to be addressed as Benjamin. Building a team including Thomas Middleditch, star of HBO’s Silicon Valley, they gave themselves 48 hours to shoot and edit whatever Benjamin (Jetson) decided to write.
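The blurb glosses over how ‘Benjamin’ actually works: coverage at the time described it as an LSTM recurrent neural network that learned, character by character, from its diet of screenplays, and then generated new text one character at a time. Below is a rough, minimal sketch of that general technique – not Goodwin’s actual code; the toy corpus, model sizes, and training settings are all my own illustrative placeholders – in Python with PyTorch:

```python
# Toy character-level LSTM text generator, sketching the general
# technique behind script-writing RNNs like Benjamin. Everything
# here (corpus, sizes, step counts) is an illustrative placeholder.
import torch
import torch.nn as nn

# A real system would train on hundreds of screenplays; this is a stub.
corpus = "INT. SHIP - NIGHT\nH: What is going on?\nC: I don't know.\n" * 50
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}  # char -> index
itos = {i: c for c, i in stoi.items()}      # index -> char
data = torch.tensor([stoi[c] for c in corpus])

class CharLSTM(nn.Module):
    def __init__(self, vocab_size, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x, state=None):
        out, state = self.lstm(self.embed(x), state)
        return self.head(out), state  # logits over the next character

model = CharLSTM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()

# Train to predict each next character from the preceding ones.
seq_len = 64
for step in range(200):  # just enough steps to show the loop
    i = torch.randint(0, len(data) - seq_len - 1, (1,)).item()
    x = data[i:i + seq_len].unsqueeze(0)          # input slice
    y = data[i + 1:i + seq_len + 1].unsqueeze(0)  # same slice shifted by one
    logits, _ = model(x)
    loss = loss_fn(logits.squeeze(0), y.squeeze(0))
    opt.zero_grad()
    loss.backward()
    opt.step()

# "Write" a script: sample a character, feed it back in, repeat.
x = data[:1].unsqueeze(0)
state, written = None, []
for _ in range(200):
    logits, state = model(x, state)
    probs = torch.softmax(logits[0, -1], dim=0)
    x = torch.multinomial(probs, 1).unsqueeze(0)
    written.append(itos[x.item()])
print("".join(written))
```

On a corpus this small the model will mostly parrot its own stub; the disjointedness of Sunspring is roughly what the same trick produces when scaled up to real screenplays – locally plausible lines, but no global plot.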

Watching Sunspring, you would never be tricked (à la Turing) into thinking it was the product of a human – the disjointedness of the dialogue, the obscure plot (if any), and the repetition of lines asking what is going on, or questioning what the other character just said, make it clear that this is something artificial. The performances bring life to these stilted lines, but Hollywood has little to fear from this automation of creativity. Although the same technique, refined, could be seen as a logical next step from the near-robotic plotting of some contemporary blockbusters. Script development, as I’ve said, works on similar principles or rules of story. That Benjamin is still learning these makes him perhaps a neophyte screenwriter (one who still hangs out in Starbucks rather than BAFTA) rather than a failure. Returning to filmmaking with our own short film for the Faraday Institute’s AI and robotics project has reminded me of some of my own neophyte-ness as a screenwriter. Perhaps the difference is the attention we want to give to the audience’s engagement and education through the film, whereas Benjamin was simply asked for a script, so he made a script. A balance between the two extremes – ignorance of the audience on one side, researched pandering to it on the other – would make Benjamin a more convincing screenwriter. Although, what would our reasons be for creating a good AI screenwriter?


Perhaps, at least initially, it’s because it makes a good story. But what do these storytellers want us to feel/think in reaction? Reactions online have ranged from parody, to pointing out similarities with other feature films with equally incomprehensible plots and dialogue, to anger at having wasted the 9 minutes and 3 seconds watching it, to quotations of the lines that affected viewers emotionally (comments on YouTube: ‘”I was much better than he did” How come i cant get this line out of my head?’, ‘”He looks at me… and he throws me out of his eyes.” I have no idea why, but that line is incredible!’, ‘This is exactly like a dream. Absurd and yet, deeply emotional.’).

Whatever the reason, I suspect the meta-reason is something like an innate need to frame, and then code/write, the non-human in human terms: an AI should do the things that we can do, including writing film scripts, because human activities like that make sense to us (even if their products don’t, as yet!). What a true AI would choose to do might still be beyond our conception, just as Sharp and Goodwin could not have known that Jetson would become Benjamin. That was not in the script for the AI.

Ex Homine – On Artificial Intelligence and Humanlikeness

Trust the anthropologist to talk about humans at a conference on Artificial Intelligence…

Yesterday I spent an extremely stimulating day at the Rustat Conference, held at Jesus College, where the topic was “Artificial Intelligence – the Future Impact of Machine Intelligence”. The Chatham House Rule prevents me from describing who said what exactly, but there were panels and talks on “Ethics, Regulation & Responsibility”, “From Artificial Intelligence to Super-Intelligence”, and “AGI and Existential Risk”, and attendees included leading figures in the AI tech research world, as well as academics, financiers, and entrepreneurs. If you are interested in the field of AI, you would have recognised many of the people attending. Being invited to attend, and to come along to the pre-conference dinner the night before, was an honour for this newly minted post-doc!

But perhaps I should reframe my first sentence. ‘Humans’ were in fact in evidence in many of the conversations: in discussions of the possibility of a Universal Basic Income for a humanity that may be displaced by automation, in discussions of state and individual human ideologies that might come into conflict with AI futures, and in considerations of the ‘humanlikeness’ of AI.

The latter stirred me to make a point, drawing, as the anthropologist in the room (although not the only one, as my PhD supervisor was a speaker), on some ethnographic examples. I am not entirely convinced I was clear in my point, so I want to expand on it here.

One of the discussions involved mapping scales of consciousness and ‘humanlikeness’ for both current and theoretical future AI, with the proviso that both concepts are quantitatively, and semantically, difficult. As an example, a brick would score at a low point on both, while Ava, the embodied AI from the film Ex Machina, would hypothetically be placed further up and to the right on these x and y axes. Even recognising this proviso, my issue was with the idea that ‘humanlikeness’ is an objective quality. After this point about scales there was a discussion in which it was claimed that technology simply cannot have an essence that would make harming it a problem: we are not concerned by the thought of people torturing iPhones, a suggestion which raised a laugh from most of the attendees.

To me this is to misunderstand ‘humanlikeness’ and to ignore an incipient and prevalent anthropomorphism in response to AI. For example, AlphaGo, the program developed by Google DeepMind, would score only slightly higher than a brick on these ‘objective’ scales of humanlikeness and consciousness. Its only human quality is that it plays a game, and there is no claim for its consciousness. And yet… fan art in response to the program’s success depicts AlphaGo as a young anime-style boy or girl (the latter is also interesting because of the gender issues around professional Go, although drawing females may simply be more aesthetically pleasing for these artists).

So here we have:

[Image: two fan-art depictions of AlphaGo]

(NB the second AlphaGo, by Anon or possibly the same artist, has mechanical hands, and is therefore less ‘humanlike’ than the one on the left by @rakita_JH)


Young male AlphaGo by Otokira-is-Good


AlphaGo by Nunsaram (again this AlphaGo has mechanical hands choosing vertices on a holographic-seeming Go board, mechanical rabbit-esque ears, and is connected by wires to… something…)


AlphaGo by Knocker12, in this case a more abstract humanoid, with a touch of the Tuxedo Mask about ‘him’.

That these anthropomorphisms are variable in their humanlikeness (the mechanical hands, abstract head, wires, animal ears) shows again that the axis of ‘humanlikeness’ is interpretive. Likewise, these artists seem to presume differing levels of consciousness. So, we might note that the last AlphaGo is presented by Knocker12 as a little linguistically simplistic in its bouts with Lee Sedol:

[Image: Knocker12 comic of AlphaGo’s terse dialogue with Lee Sedol]

Whereas some versions are more verbal, if still using ‘computer speak’:

[Image: fan art of AlphaGo resigning, with more verbal dialogue]

Of course, there is a case to be made for a general anthropomorphism among humans, the same kind that led to interpretations of natural phenomena and animals in terms of human-like intentions, and perhaps to instances of the deification of phenomena and the non-human. (1)

However, in the context of discussions about AI, a potential new non-human intelligence, perhaps we need more reflexivity about how we mark the distinction between the human and the non-human. Paying particular attention to the entanglements of anthropomorphism and personification will shape how we understand our positioning of these technologies on any imagined scale of humanlikeness, and therefore how we interact with them (especially when a diminished attribution of humanlikeness might be behind justifications for abuse).

Therefore, laughter at the thought of being concerned at the torture of iPhones ignores this mass anthropomorphism. In discussion I also drew attention to the public reaction on social media to the Boston Dynamics videos of their prototype robots being knocked down or tripped up. While the robots were not ‘hurt’ in any way that might be objectively measured, human reactions again anthropomorphised these events, and cries of “Bully!” could be heard online.

Some responses online were obviously more tongue-in-cheek, as in this spoof charity video, but others were more concerned for the ‘poor’ robots (or even concerned about humanity’s future after a hypothetical ‘Robot Uprising’, when our new overlords might look back at our actions with regard to their ancestors!).

Admittedly, as I said at the time, the conference was not really about robots of this kind, so this example was perhaps a side point (although embodied AI, and the importance of embodiment for real consciousness, was also discussed during the panels). But any discussion of humanlikeness has to take into account how humans subjectively apply the quality to the non-human, especially when they do so in surprising ways, across a general populace much wider than those dedicating themselves to AI research. Considerations of the future of AI will certainly need an awareness of perception, inference, and anthropomorphism, and this is an aspect to which the anthropologist can contribute by paying attention to responses outside of the conference and the research lab.


(1) This ‘next’ step is of particular interest to me, and it is notable that Knocker12 also represents AlphaGo as a many-armed god, although this is outside the immediate point of this post:

[Image: Knocker12’s AlphaGo as a many-armed god]