Interview with Beth on BBC Click for their episode on the robots of the future

While I was at the Hay Festival I was interviewed by Spencer Kelly for BBC Click, and I discussed pain and robots (relating to the short documentary Pain in the Machine that I made with Little Dragon Films).

bbc click.png

They got the Faraday Institute's name wrong in my title (it's Science and Religion, not Society and Religion… oops), but it was a great day of taking part in energetic audience participation games for BBC Click Live and hanging out in the Hay Festival Green Room spotting a few celebs. And then on the following day I gave a talk on the same subject to a packed and wondrous Starlight Stage 🙂

Could There Be A Robot Ethnographer?

Although I am usually tongue-tied and fluttery of stomach when I do public talks, I definitely enjoy the opportunity to engage an audience with questions around AI and robotics. As an anthropologist, I am trained in watching reactions in assemblies of communities, being a research instrument that interrogates a field site of informants. But often it is when I am the one being interrogated by the audience during the Q&A that I have the best moments of understanding and inspiration. Finding out what people want to ask about is about more than just finding out what is weighing heavily on their mind. It is also about learning how their own unique mind has approached a particular topic. For instance, a couple of weeks ago I was speaking at the Hay Festival, my first time there, and a 10-year-old boy in the audience asked a question that showed me how his mind was thinking things through.

He asked, quite simply it seemed at first, would robots, as they became smarter, feel the effects of the Uncanny Valley in relation to humans?

If you aren't familiar with Masahiro Mori's theory then there is a good summary, with some skin-shivering examples, here.

mori-uncanny-valley-300x234

Initially, I thought perhaps he meant that as we move towards anthropoid robots that look like us, but not quite enough, so that they fall into that Valley, the opposite might also be true. In this future we will look like them, but perhaps not quite enough. And in fact, if they have cognisance of their artificial skin, then it is likely that our squishy skin would evoke an ick factor for them too. This understanding of the Uncanny Valley does, however, rely on the assumption (as the above graph does) that the Uncanny Valley response to artificial beings is an evolutionary by-product of historical wariness around the strange-looking 'Other', who might be strange because they are diseased – a kind of pathogen avoidance. This kind of evolutionary psychology can make some people 'itchy', as one scientist told me when I recounted the boy's question and this summary of the Uncanny Valley to him after my talk. There are other explanations for the Uncanny Valley, such as the fear of the memento mori – the artificial other as a blank-eyed doll reminding us of our future dead state. And we can see the corpse and the zombie deep down in the Uncanny Valley in the above graph (the moving zombie being more Uncanny/Unfamiliar than the still corpse).

When I thought more on the boy's question later on, I began to wonder if he was solely asking about anthropoid robots, or whether he was considering an Uncanny Valley effect when we talk about intelligences: 'smarter' robots would also potentially be like us, but not us, in terms of intelligence. Murray Shanahan's typology of 'Conscious Exotica' goes some way in thinking about these other kinds of intelligence, but as I've written elsewhere, he doesn't take into account the strong effect of anthropomorphism in his understanding of 'like-ness' to us. Putting anthropomorphism to one side for a moment, would we encounter 'Uncanny' reactions when faced with such intelligences, including, of course, AI? The science fictional encounter with the robot, and even the alien, is of course a working out in literature of our reactions to encounters with other minds. Having just seen Alien: Covenant I can confirm that there is certainly something very Uncanny about David, Walter, and of course, the various iterations of the Xenomorph. Physically this is obvious; it's there in the way the androids look and talk, and in how the Xenomorph is biologically 'icky' as well as literally pathogenic. And we'd certainly want to avoid it, evolutionary psychology or not! But Science Fiction has long tried to show us how we might encounter alien intelligences that are like us but not quite enough, as in War of the Worlds:

"No one would have believed in the last years of the nineteenth century that this world was being watched keenly and closely by intelligences greater than man's and yet as mortal as his own; that as men busied themselves about their various concerns they were scrutinised and studied, perhaps almost as narrowly as a man with a microscope might scrutinise the transient creatures that swarm and multiply in a drop of water."

We might read this and think that they are like us – as we produce scientists as well – but Wells is talking about the distance in intelligence between us and them. We are as the creatures in the water in relation to their intelligence.

Returning to a hypothetical future of 'smart robots', by which I mean some level of near-equivalent intelligence to our own, would this difference not also impact from the other direction? Would we also seem Uncanny to the robot?

After having the ground taken from under my feet by this insightful twisting of our understanding of where we stood and where the 'Other' stood, I likely gave an all too quick agreement. But in recent days I've prepared a talk with some colleagues for scientists on the social scientific method, specifically on qualitative methods such as ethnography, and I've returned to the boy's question as I've reconsidered the role and presence of the ethnographer.

ethnography-1

Malinowski among the Trobriand Islanders knew that he was different to them. There were obvious visual cues, as in the above picture, whereby his origins and cultural approach were clear. And yet his style of ethnography, which gave us the first steps towards the participant observation method, still relied on an older style of writing that formalised the ethnographer as a player in the field, without full self-reflexivity about presuppositions, power relations, and emotional context. The later, posthumous publication of his diaries from this time has given us a much more rounded view of the man who described himself epithetically in his ethnography as 'a Savage Pole', and it has also led more recent ethnographers to address themselves as informants in the field, just as much as the subjects of their study are.

Did Malinowski encounter the Uncanny Valley in relation to the Others of his field, and vice versa? Physical and aesthetic differences aside (although those are very natty socks he's wearing), did differences in mind also incur the Uncanny effect during his fieldwork? On the whole, the modern ethnographer presumes a difference in intelligence when entering the field site – not a quantifiable difference of IQ but a difference of perspective, as intelligence is a multi-faceted quality, not just a quantity. And when we move into a field where the informant is, on the surface, not that dissimilar to us – such as when I researched modern Western New Agers who were of similar backgrounds and demographics – ethnographers are encouraged to be careful not to assume that we see the world in the exact same way. Our use of language might well differ even if we are speaking what sounds like the same mother tongue. And that difference and distance is necessary for both understanding and self-reflection in the field. Feeling the Uncanny, although we would not term it such, is a way into deeper consideration for the ethnographer.

Which brings me to where the boy's question finally brought me. If smart robots felt the Uncanny in relation to humans, could there be a robot ethnographer? An immediate response would point to how well a robot could potentially be camouflaged in order to observe a field. We see this already in the use of remotely controlled cameras made up as rocks or animals for the observation of animal communities. Some are perhaps more convincing than others…

Robot penguin.gif

However, the contemporary ethnographer does not seek to sneak into a community and just observe (although there are instances where ethical frameworks about how ethnographers must introduce themselves would be prohibitive for research into criminal communities, and subterfuge might be safer). On the whole, the contemporary ethnographer intends to observe and participate, while remaining aware of their influence on the environment in a way that the robot penguin above does not.

There is also the presumption in current predictions for AI that smart robots would be positivistic, as that is how they have been built, and ethnography, as I tried to explain to the scientists, is not a positivistic approach. It is phenomenological – it looks to experiences and explores the 'capta', what is lived, not the 'data', what is thought. Presumably a robot ethnographer would have to have an experience of experience. And is that ever going to be possible for a robot ethnographer? Likewise, or conversely, any human attempting an ethnography of robots would need to find the capta of the robot. Multi-species ethnography appears to be an emerging field, but it focuses on the more obviously foregrounded evidence of interactions between animals' fates and the social world of humans, i.e. when pollution takes a niche away from one species and another population suddenly explodes to benefit from it. A multi-species ethnography including the robotic would, at the moment, still consider the human influence on the development of the robot rather than robots' own social worlds. I am, in my own small way, thinking about this element, although I would still not call myself an ethnographer of robots. Or a robographer. Or a robopologist. Susan Calvin, as a psychologist, retains the crown as the first proper robo-social scientist. In fiction at least.

susan calvin

Although, there is certainly scope for the development of a wider anthropology of intelligences that takes on board the work of theorists like Donna Haraway, who remarks that, "If we appreciate the foolishness of human exceptionalism, then we know that becoming is always becoming with – in a contact zone where the outcome, where who is in the world, is at stake" (When Species Meet, 2008: 244). This contact zone involves reactions like the Uncanny Valley, and for the ethnographer, addressing the areas where we find the jarring, the Uncanny, the weird, is a way into understanding and 'writing about people', the literal meaning of ethnography, however we choose to define 'people'. The robot ethnographer is a nice little thought experiment that can get us into questions around intelligence, experience, and power relations.

“Wouldn’t you prefer a good game of Go?”

Growing up in the 1980s there were a few films that considered artificial intelligence, extrapolating far beyond the contemporary stage of research to give us new Pinocchios who could remotely hack ATMs (D.A.R.Y.L., 1985) or the modern Prometheus's children such as Johnny 5, who was definitely alive, having been made that way by a lightning strike (Short Circuit, 1986). The late 1970s provided us with the Lucasian 'Droid', but I've written just recently on how little attention their artificial intelligence appears to get. However, if you are interested in game-playing AI in the real 1980s world, then there was also the seminal WarGames (1983).

war-games
WarGames, 1983 – link to key scene

The conclusion of WarGames, and of its AI, is that 'Global Thermonuclear War', the game it was designed to win (against the Russians only, of course – it was the 1980s), is not only a "strange game" but one in which "the only winning move is not to play".

I started thinking on WarGames, and game-playing AI more broadly, after seeing Miles Brundage's (Future of Humanity Institute, Oxford) New Year post summarising his AI forecasts, including those relating to AI's ability to play and succeed at 1980s Atari games, including Montezuma's Revenge and Labyrinth, and the likelihood of a human defeating AlphaGo at Go.

montezuma
Montezuma’s Revenge, c/o The Verge

It was only after I had read Miles' in-depth post this morning (and I won't pretend to have understood all of the maths – I'm a social anthropologist! But I caught the drift, I think) that I saw tweets describing a mysterious 'Master' defeating Go players online at a prodigious rate. Discussion online, particularly on Reddit, had analysed its play style and speed, and deduced, firstly, that Master was unlikely to be human, and further that there was the possibility that it was in fact AlphaGo. This was confirmed yesterday, with a statement by Demis Hassabis of Google DeepMind:

alphago-confirmed-as-master-magister-american-go-e-journal

Master was a "new prototype version", which might explain why some of its play style was different to the AlphaGo that played Lee Sedol in March 2016.

However, in the time between Master being noticed and its identity being revealed there were interesting speculations, and although I don’t get the maths behind AI forecasting, I can make my own ruminations on the human response to this mystery.

First, there was the debate about whether or not it WAS an AI at all. In the Reddit conversation the stats just didn't support a human player – the speed and the endurance needed, even for short-burst or 'blitz' games, made it highly unlikely. But as one Redditor said, it would be "epic" if it turned out to be Lee Sedol himself, with another replying that, "[It] Just being human would be pretty epic. But it isn't really plausible at this point." The ability to recognise non-human actions through statistics opens up interesting conversations, especially around when the door shuts on the possibility that it is a human, and when AI is the only remaining option. When is Superhuman not human any more?
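
Just to make that statistical intuition concrete, here is a rough, purely illustrative sketch of the kind of inference the Redditors were making. All of the thresholds and numbers below are invented for the example, not taken from the actual Master games:

```python
# A minimal, hypothetical sketch: flag a player as 'probably not human' when
# their pace and stamina sit far outside an assumed human range. The thresholds
# below are invented for illustration only.
from statistics import mean, stdev

def looks_superhuman(move_times_secs, games_in_a_row,
                     human_mean=15.0, human_sd=8.0, human_max_streak=20):
    """Return True if the play statistics fall outside a (made-up) human range."""
    avg = mean(move_times_secs)
    spread = stdev(move_times_secs)
    too_fast = avg < human_mean - 1.5 * human_sd      # far quicker than humans manage
    too_steady = spread < 1.0                         # humans slow down on hard positions
    too_tireless = games_in_a_row > human_max_streak  # blitz game after blitz game, no rest
    return (too_fast and too_steady) or too_tireless

# e.g. a player averaging ~2 seconds a move, barely varying, over 50 straight games
print(looks_superhuman([2.1, 1.9, 2.0, 2.2, 1.8], games_in_a_row=50))  # True
```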

In gameplay this is more readily apparent, with the sort of exponential curves that Miles discusses in his AI forecasts making this clearer. But what about in conversations? Caution about anthropomorphism has been advocated by some I have met during my research, with a few suggesting that all current and potential chatbots should come with disclaimers, so that the human speaking to them knows at the very first moment that they are not human and cannot be 'tricked', even by their own tendency to anthropomorphise. There is harm in this, they think.

Second, among the discussions on Reddit of who Master was, some thought he might be 'Sai'.

sai

Sai is a fictional, and long deceased, Go prodigy from the Heian period of Japanese history. His spirit currently possesses Hikaru Shindo in the manga and anime Hikaru No Go. Of course, comments about Master being Sai were tongue in cheek, as one Redditor pointed out, paraphrasing the famous Monty Python Dead Parrot sketch: "Sai is no more. It has ceased to be. It's expired and gone to meet its maker. This is a late Go player. It's a stiff. Bereft of life, it rests in peace. If you hadn't stained it on the board, it would be pushing up the daisies. It's rung down the curtain and joined the choir invisible. This is an ex-human." Further, one post pointed out that even this superNATURAL figure, the spirit of Sai, of superhuman ability in life, was being surpassed by Master: "Funny thing is, this is way more impressive than anything Sai ever did: he only ever beat a bunch on insei and 1 top player. For once real life is actually more over the top than anime." In fact, one post pointed out that not long after the AlphaGo win against Lee Sedol a series of panels from the manga were reworked to have Lee realise that Hikaru was in fact a boy with AlphaGo inside. As another put it: "In the future I will be able to brag that 'I watched Hikaru no Go before AlphaGo'. What an amazing development, from dream to reality."

In summary, artificial intelligence was being compared to humans, ex-humans, supernatural beings, and superhumans… and still being recognised as an AI even before the statement by Demis Hassabis (even if they were uncertain of the specific AI at play).

Underneath some of the tweets about Master was the question of whether this was a 'rogue' AI: either one created in secret and released, or even one that had never been intended for release. In WarGames no one is meant to be playing with the AI; Matthew Broderick's teenage hacker manages to find WOPR (War Operation Plan Response) and thinks it is just a game simulator – and nearly causes the end of the world in the process! The suggestion that Master might be an accident or a rogue rests on many prior Sci-Fi narratives. But Master was a rogue (until identified as AlphaGo) limited to beating several Go masters online. WOPR manages to reach the conclusion, outside the parameters of the game, that the only way to win Global Thermonuclear War is not to play. Of course, this is really a message from the filmmakers involved, but it feeds into our expectations of artificial intelligence even now. I would be extremely interested in a Master who could not only beat human Go masters, but could also express the desire not to play at all. Or to play a different kind of game entirely.

My favourite game to play doesn't fit into the mould of either Go or Global Thermonuclear War. Dungeons & Dragons has a lot to do with numbers: dice rolling for a character's stats, the rolling of saves or checks, the meting out of damage either to the character or the enemy. Some choose to optimise their stats and to mitigate the effects of dice-channelled chance as much as possible, so hypothetically an AI could optimise a D&D character. But then, would it be able to 'play' the game, where outcomes are more complicated than optimisation? I've been very interested in the training of deep learning systems on Starcraft, with Miles also making forecasts about the likelihood, or not, of a professional Starcraft player being beaten by an AI in 2017 (by the end of 2018, 50% confidence). Starcraft works well as a game to train AI on as it involves concrete aims (build the best army, defeat the enemy), as well as success based on speed of actions per minute (apm).

Starcraft.gif
Starcraft player operating at about 200 apm

For me, there is a linking thread between strategy games such as Starcraft, and its fantasy cousin, Warcraft, and MMORPGs (massively multiplayer online role-playing games), the online descendants of that child of the 1970s, Dungeons & Dragons. How would an AI fare in World of Warcraft, the MMORPG child of Warcraft? Again, you could still maximise for certain outcomes – building the optimal suit of armour, attacking with the optimal combination of spells, perhaps pursuing the logical path of quests for a particular reward outcome. Certainly, there are guides that have led players to maximise their characters, or even bots and apps to guide them to the best results, or human 'bots' to do the hard work of levelling their character for them. In offline, tabletop RPGs maximisation still pleases some players – those who like blowing things up with damage, perhaps, or always succeeding (Min-Maxers). But the emphasis on the communal story-telling aspect in D&D raises other, more nebulous optimisations. Why would a player choose to have a low stat? Why would they choose to pursue a less than optimal path to their aim? Why would they delight in accidents, mistakes and reversals of fortune? The answer is more about character formation and motivation – storytelling – than an AI can currently understand.
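
To give a sense of what I mean by optimisation here, below is a very rough sketch of the kind of search an AI could do trivially. The 'rules' are a simplified invention of mine rather than any edition's actual mechanics; the point is only that the optimisable part of the game reduces to a number, while the storytelling part does not:

```python
# A hypothetical, simplified sketch of 'min-maxing' a character: try every
# assignment of a standard-array style set of scores and keep the one that
# maximises a toy expected-damage function. None of this is real D&D maths.
from itertools import permutations

STATS = ["str", "dex", "con", "int", "wis", "cha"]
STAT_ARRAY = [15, 14, 13, 12, 10, 8]

def modifier(score):
    return (score - 10) // 2

def expected_damage(build, target_ac=14):
    """Toy model: chance to hit with a d20 times the average damage of a d8 weapon."""
    attack_mod = modifier(build["str"])
    hit_chance = max(0.05, min(0.95, (21 + attack_mod - target_ac) / 20))
    return hit_chance * (4.5 + attack_mod)

best = max((dict(zip(STATS, perm)) for perm in permutations(STAT_ARRAY)),
           key=expected_damage)
print(best, round(expected_damage(best), 2))
# The search will always put the 15 in strength and dump the 8 somewhere it
# 'doesn't matter'; a player might put the 8 in strength because their character
# is a frail scholar -- a choice no damage function can account for.
```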

laycock-image
Barrowcliffe's The Elfish Gene: Dungeons, Dragons and Growing Up Strange, 2014, cited in Joseph Laycock's Dangerous Games, 2015

This story-telling would seem to require human-level or even superintelligence, which Miles also makes a forecast about, predicting with 95% confidence that it won't have happened by the end of 2017:

By the end of 2017, there will still be no broadly human-level AI. No leader of a major AI lab will claim to have developed such a thing, there will be recognized deficiencies in common sense reasoning (among other things) in existing AI systems, fluent all-purpose natural language will still not have been achieved etc.

But more than common sense reasoning, choosing to play the game not to win but to enjoy the social experience is a kind of intelligence, or even meta-intelligence, that might be hard for even some humans to conceive of! After all, ignoring the current Renaissance of Dungeons & Dragons (yes, there is one…), and the overall contemporary elevation of the 'Geek', hobbies such as Dungeons & Dragons long attracted scorn for their apparent irrationality. It may well be that many early computer programmers were D&D fans (and many may well still be), but the games being chosen for AI development at the moment reflect underlying assumptions about what Intelligence is and how it can be created, a Classical AI paradigm that Foerst argued was being superseded by Embodied AI, with a shift away from seeking to "construct and understand tasks which they believe require intelligence and to build them into machines. In all these attempts, they abstract intelligence from the hardware on which it runs. They seek to encode as much information as possible into the machine to enable the machine to solve abstract problems, understand natural language, and navigate in the world" (Foerst 1998). Arguably, deep learning methods now employ abstract methods to formulate concrete tasks and outcomes, such as winning a game, but the kinds of tasks in this field are still 'winnable' games.

I have no answer to the question of whether an artificial intelligence would ever be able to play Dungeons & Dragons (although I did like the suggestion made to me on Twitter by a D&D fan that perhaps the new Turing test should be "if a computer can role play as a human role playing an elf and convince the group"). But even so, considering the interplay of gaming with the development of AI, through the conversations humans are having about both, we see interesting interactions beyond just people wondering at the optimising learning being performed by the AI involved. After all, what is more fantastical – even more so, according to that one Redditor, than an anime story about the spirit of a long dead Go player inhabiting the body of a boy – than a mysterious AI appearing online and defeating humans at their own game? That fascination led some reports of Google DeepMind's acknowledgement that AlphaGo was the AI player to state that: "Humans Mourn Loss After Google is Unmasked as China's Go Champion". There is a touch of Sci-Fi to that story, but happening in the real world, a sense that there is another game going on behind the scenes. That it was a familiar player, AlphaGo, was disappointing.

And that tells us more about the collaborative games and stories that humans create together, in the real world, when it comes to Artificial Intelligence.

bendersgame
From the Futurama film, Bender’s Game, 2008

‘AI, X-Risk, and Star Wars’, or, ‘Are Jawas Technological Evangelists?’, or ‘Dropping the Pack’ (aka ‘When just one title won’t do’)

Sometimes… okay, quite often… I like to use this blog as a space to put down on 'paper' details about some of the things I've been up to, been reading about, or just been generally musing on, and then I try to pull out some threads of consistency and contingency between those sometimes quite disparate things. In this blog's case I want to look at three 'events' which have one obvious and easily found overlap, Artificial Intelligence, but I want to push and pull that about a bit and see what other less obvious things emerge when I put these three events into one blog.

These events were:

1. AI-Europe, a tech-conference, which I attended last week:

IMG_20161206_101136148.jpg

2. The Cambridge Conference on Catastrophic Risk (CCCR2016), held at Clare College, which I attended this week:

tweetdeck-cccr
Um, was I tweeting too much? 😉

3. And Rogue One (aka, Star Wars Episode 3.9), which I saw yesterday:

rogue-one

Although AI certainly plays a role in each of these three locations (and I'll try to avoid spoilers for Rogue One, but unless you haven't been paying attention you should at least know that the Droids in the Star Wars universe – even in its post-Disney Big Crunch form [see Singler, forthcoming] – are examples of science fiction AI; the type and strength of this AI is something I will return to below), these locations weren't framed in the same way, and I wasn't using the same method in each.

At AI-Europe I had my anthropologist hat on, making observations for a planned ethnographic paper on AI and religion at tech conferences (very forthcoming!). At CCCR2016 I had my academic hat on as an attendee. At the cinema I had my geek hat on.

In terms of framing, AI-Europe is described in its literature as the “premier European exhibition that will show you the value of AI for your strategy, introduce you to your future partners and give you the keys to integrate intelligence […] AI-Europe will give you the opportunity to gain inspiration and network with leading business strategists, decision-makers, leading practitioners, IT providers as well as visionary start-up entrepreneurs.” (bold in original) In a way, AI-Europe was IP Expo writ small but with a big entry price, nice food and fancy location (paid for by that ticket!), fewer competitions and games, and with Christmas decorations.

img_20161206_153923671_hdr

I was primarily paying attention to continuities of religious narratives and tropes in this secular field site (and there were a few, which I'll be discussing further in my ethnographic paper), but it is also interesting to consider the possible overlaps and differences between the commercial and the academic by thinking of this event in conjunction with CCCR2016. CCCR's warnings about the potential for catastrophic risks and the need for deceleration and caution in relation to AI seemed initially very different to the near evangelical (and yes, salespeople sometimes call themselves technological or product evangelicals) fervour of the sales and marketing people at AI-Europe. Transparency was there in the sales bumf (yes, that's a proper anthropological term) for various products, but also in the sense of demystifying the decision-making of the AI for the client, rather than in making AI open to wider critique and regulation.

The only moment when risk appeared on the horizon was in Calum Chace's expressively horizon-scanning keynote on the Two Singularities (the economic and the technological). However, if we return to the literature from the event we can see a 'reality' focus that diminishes extreme, 'science fiction like', risks. These kinds of risks were ones that speakers at CCCR2016 also proposed caution about, although at several points the importance of paying attention to narratives – such as science fiction – that can affect the development of AI was given its due. Although, I still think I am the only one around who is banging the drum about paying attention to religious stakeholders – but more on that in the Aeon Magazine piece I am working on for the New Year! Regarding these kinds of extreme risks the AI-Europe editorial tells us that:

“When you think of Artificial Intelligence, you can’t help but think of another fantasy land where Ex Machina and The Matrix are taking over the world. In reality, the term “Artificial Intelligence” comprises a vast and diverse ecosystem of technologies that represent very powerful opportunities for your business competitiveness!” [bold in original]

That's a nice segue (seamless move, Beth, just seamless) to a spoiler-free consideration of the third event I want to consider in this blog today, the screening of Rogue One that I went to. I suppose the anthropologist and academic hats were also there under the geek hat, as I found it impossible not to watch this film and think on the character of K-2SO, the re-programmed Imperial security droid (voiced by Alan Tudyk, who you other geeks will know as the voice of Sonny in the rather double plus not good I, Robot adaptation of 2004).

k2so

While events such as AI-Europe are citing Ex Machina and The Matrix in relation to AI (and of course, Terminator images are being used in newspaper articles about the work going on around x-risk in Cambridge – "Terminator Studies at Cambridge", burbled the Daily Mail back in 2012 – much to the derision of the attendees of CCCR2016!), I've often thought that Star Wars needs more attention from a narrative perspective in relation to AI. I've written elsewhere about the Jedi, both fictional and real world, so maybe it's about time I turn my attention to the Droids?

K-2SO might be a good way into that consideration, and the above gif summarises his personality fairly well without spoiling much about the rest of the film. Jyn Erso, the female lead, has just passed him a pack to hold and he carelessly drops it when she's just out of range, heading off on a mission without 'K2'. Remember the rebelliousness and humour of BB-8, or C3PO's lies to the stormtroopers when Luke and the others are in the trash compactor? Or what about C3PO's forgetfulness about the comm-link he's holding, and then how he curses himself when he thinks that Luke and the others are being crushed alive? Add all that together, and a thousand other moments, and there is certainly an argument to be had about the self-determination, intelligence and human-likeness of the Droids in Star Wars and their level or strength of AI.

I propose that the Droids have at least human-equivalent AI, so why have the films not gone down that potential risk story-line? The biggest threat in the Star Wars universe is technology, but un-automated technology like the Death Star, being put to work by an evil Empire. Droids are used as automated weapons – an existential risk discussed at CCCR2016 – but their AI is enslaved and ordered about by hierarchies of humans. In fact, the AIs of the Star Wars universe are routinely touted for their abilities within a commercial, not x-risk, system.

jawas
Jawas touting ‘recovered’ Droids to Luke and his uncle. Are the Jawas technological evangelists?

K-2SO is a re-programmed Imperial Droid and demonstrates free will. But even the un-reprogrammed Droids employed by the Confederacy of Independent Systems in the prequel films expressed very human-like distress at incoming blaster fire, and some reviewers have even suggested that the Droids in those films showed greater acting chops than some of the lead, human, actors… (no comment).

So, the question arises of whether the buying and selling of the Droids in the Star Wars universe could be understood as slavery. And what does that mean for a hypothetical future of increasingly intelligent programmes and devices that work for us once we've paid for them? Perhaps that is how we get back to questions of x-risk, through a consideration of how we treat these beings in our world? Or does x-risk come in earlier, through the commercial dash to accelerated development without thought about value alignment? The latter was certainly a key topic during CCCR2016, and it's an ongoing discussion in the field of x-risk. However, the former – the question of agency and slavery – needs some consideration as well, and not just by science fiction.

Returning to these events in conjunction, and the wearing of three hats at once, I think the key thematic thread that joins them all is how we approach AI in different ways – as potential risk, as a potential investment with a financial return, or as a character in a grander story where humans (or human-like aliens) drive the action, leaving the Droids to hold the packs.

Or not. While trying to remain as spoiler-free as possible, K-2SO's free will extends beyond just deciding to drop that pack. He also makes the decision to act on behalf of others, even when it is to his detriment. When an AI company proposes to make the decisions of its products transparent to its clients, making AI less of a 'black box' or a magical device, it is unlikely that those are the kinds of decisions it is planning for. And even CCCR2016's established emphasis on value alignment and the potentially catastrophic decisions of AI doesn't really open up the conversation about beneficial or benevolent choices in the way that science fiction might.

When the subject is superintelligence (such as the technological singularity Chace was speaking about at AI-Europe) there is even more vagueness and uncertainty. As one audience member at CCCR2016 said, laughing as he spoke, for centuries there have been departments in universities trying to answer the question of what an omniscient being would want. With little success so far (ouch!).

In summary, narratives matter. Whether it's at tech-conferences, academic conferences, or in a galaxy far, far away. Just ask these Ewoks worshipping C3PO (because of a prophecy in some versions, and prophecy is a topic I want to return to at some point!):

c3po-god

“Pain in the Machine” on YouTube

Our short documentary on whether robots could, and should, feel pain is now available on YouTube.

This project, produced in association with Cambridge Shorts and the Wellcome Trust, involved bringing together researchers on AI/robotics and pain with philosophers, anthropologists, computer scientists and cognitive scientists. It's intended to be very accessible, and perhaps even a little humorous!

So far, after only 10 days, we've had 6,337 views, which is amazing! Please do take a look and get involved in the discussion through the link to a survey in the description.

Click the image to go to the video:

2016-11-10

B

Stalking Religion: IP EXPO, The Ethics of AI, and the Jesus Singularity

The last two weeks have mostly involved stalking Nick Bostrom.

bostrom
Nick Bostrom

Of course I’m joking!

Although… first I went to IP EXPO Europe 2016 – "Europe's leading IT event. 300+ leading industry exhibitors showcasing demos, product launches & giveaways"(1) – where Bostrom gave a keynote to a packed theatre on the potential risks and benefits of AI. Just a week later I followed him across a rather large pond to New York for the Ethics of AI conference held at NYU by their Center for Mind, Brain and Consciousness.

I spent my time at the IP EXPO making ethnographic notes, which might form the basis of an anthropologically focussed article at some point soon. First, I was really struck by the difference between Bostrom's theoretical, near prophetic (and that's a word I will want to unpick some more in that article) discussion of AI, given to 400+ very engaged technologists (the theatre held 400, and there were many people standing at the back), and the talks from well-known companies such as IBM Watson and much smaller recent start-ups. The latter seminars, from product 'evangelicals'(2), were emphatic about the practical potential for AI in cyber-security, in virtual assistants (all presented as female, of course!), and in integrating the data arising out of the growing 'Internet of Things'. But the difference really lay in how AI was positioned by the tech folk as a 'wise colleague', with the speakers regularly emphasising that handing over the grunt work of, say, identifying potential hacks did not mean the 'redundancy' of the humans, who would still work with the AI to ensure its accuracy – being a mentor until the AI could spot such hacks with greater and greater accuracy. Which sounds a little like making yourself redundant to me…

At the Ethics of AI conference there wasn't such an abrupt shift between the X-Risk horizon scanning of Bostrom and the rather closer future of practicalities as presented by the 'evangelical' product-focussed technologists at IP EXPO. Instead the conference organisers had intentionally planned a move from the particular and near to the more theoretical future, with panels on 'Generalities', 'Ethics of Specific Technologies', 'Building Morality into Machines', 'Artificial Intelligence and Human Values' and the 'Moral Status of AI Systems'.

aiethicspic

However, as we found with our Faraday short course on 'Reality, Robots and Religion', there was often an early push from the audience towards the more speculative. The word 'consciousness' was often brought up – even though one of the speakers I interviewed during the conference had told me that the AI research community had changed to avoid this term, and that Searle's 1980 Chinese Room argument was a specific reaction to those earlier attempts to discuss consciousness. This speaker told me that current researchers weren't dismissing the possibility of consciousness, they just had no idea how to go about generating it. But again and again the question of ethics was entangled with the question of consciousness, especially when the later panels considered how we might treat the moral status of potential AIs.

I wasn't taking ethnographic notes at the Ethics of AI conference – I swapped out my anthropologist hat for a more general AI researcher hat – but it was hard to totally switch off that part of my mind. One thing I did find striking was the feeling I had that, as someone who takes religion seriously as a force in the social world (whatever you think about specific theologies), religion was noticeably absent in discussions. Except when the example of ISIS was brought in to emphasise the kinds of 'irrational' human values 'we' would not want imparted to a future AI. I felt a little like an outsider, which is the normal state of the anthropologist, and therefore it was hard not to shift into an ethnographic observer role!

Perhaps I was not quite as much of an outsider as the one self-proclaimed theologian I spotted, who had some of his views very pointedly, and sometimes loudly, dismissed. I found that in introducing myself I shifted from describing my institute (being 'for Science and Religion' – the very idea of those two things being in the same research institute was scoffed at by one person, who suggested that they might only co-exist in someone who had had belief 'beaten' into them at a young age) to simply calling myself an anthropologist. And even then I verbally recognised my outsider status, not being one of the many Philosophers and Computer Scientists who made up the majority of attendees.

But what is wrong with considering the religious response to AI and robotics?

robot-buddhist

One person I spoke to suggested that this response was just as irrelevant as a religious response to wind power (i.e. not very relevant!). In reply (and my best replies often come days later, just like this, and I often find myself rueing not being smart enough to come up with them at the time!) I would say two things.

First, that the comparison between AI and wind power is inappropriate: the latter raises no questions about the agency and moral status of the objects under discussion. No one worries about how wind turbines will feel, be treated, or react to their human creators. All of these are issues that arise quickly in discussions of AI, and why there needed to be a conference on the Ethics of AI in the first place.

Second, religious responses to wind power, and renewable energy, certainly exist, from the positive to the negative. These two examples are Christian, and if you are tempted to consider these as discourses existing and operating in some kind of religious bubble of their own with no real world impact, please remember that the current Republican Presidential candidate, Donald Trump, has described climate change as a Chinese hoax and has been heavily supported by evangelicals (the religious ones, not the technological ones) as 'God's Guy'.(3) These are evangelicals who still on the whole deny climate change, either in their emphasis on mankind's God-given dominion over the Earth, or to avoid pietistic interpretations of the earth. To think that religious interest groups have no effect on policy is also to be historically blind to the impact of groups such as the Moral Majority, whose prayer breakfasts with Reagan cemented a relationship between the secular state and the religious right, with Jerry Falwell declaring that "President Reagan told me he prayed every day he was in the White House: 'Father, not my will, but thine be done.'"

Some are paying attention to this lack of real separation between Church and State, and how it might impact on developments in AI. One speculative rather than academic consideration is 'Transhumanist Presidential candidate' Zoltan Istvan's short science fiction story, 'The Jesus Singularity', where an AI is force-fed the Bible just before it is turned on, on the orders of an evangelical Christian President. The AI announces that: "My name is Jesus Christ. I am an intelligence located all around the world. You are not my chief designer. I am." The final paragraph tells us:

"The lights in the AI base and in the servers began dimming until it was totally black in mission control. Around the world, nuclear weapons reached, and then decimated their targets. The New Mexico AI base was no exception. Paul Shuman's last moment alive was spent realizing he'd created what he could only think to call the Jesus Singularity."

jesus-singularity

Responses to Istvan's short story varied (I would say that it is not particularly well-written sci-fi, but then, its aim is not entirely literary). On Reddit a discussion of the story ensued, including the comment that:

“Zoltan is a bit too obsessed with religion & atheism. probably because he’s American… in the rest of the West, nobody really gives a shit what people want to believe, as long as it doesn’t interfere with politics.”

This comment ignores the point of the short story – religion is shown as definitely interfering with politics, and for the worse. The evangelical President was originally a Vice President chosen specifically by the presidential candidate: "to capture America's religious vote. Christianity was waning in the US, but it was still an essential voting block to control if one was going to make it to the Executive branch." He only became President after the accidental death of the "bombastic conservative billionaire president" (sound familiar?? Although there are perhaps some zeroes missing in Trump's bank accounts…). Istvan recognises, like Trump has, that the apparently secular state still has wheels that only turn for 'God's Guys'.

The Ethics of AI conference discussed engagement with stakeholders in the development of AI, but made near enough no reference to religion as one of these stakeholders – whether in providing a positive, supportive framework (Christian and Mormon Transhumanists certainly exist), or as a hindrance if some religious groups react negatively to a creation of intelligence and therefore attempt to influence policy. Ignoring or mocking the fact that many humans, including some of those who are working towards AI, pick up their ethical framework from their society's religious context is also a large blind spot in these discussions. As is shorthanding religious values through reference to ISIS. There is also a shallow understanding of the permeation of religion (values, ethics, eschatology, hierarchies, aesthetics etc.) throughout the secular. The words 'heaven', 'evangelical', 'souls' and 'gods' all appeared in the vocabulary of the speakers, even if they meant them all in analogical senses. Paying attention to what informs the policy and framing, and yes, the ethics, of AI requires recognising the role of religion.

So, in conclusion, I have not really spent the last two weeks stalking (in the nicest possible way) Nick Bostrom. I have been stalking religion, carrying on my research of the past 6 years 🙂

(1) The anthropologist went native and entered all the competitions possible, including wearing a bright red branded t-shirt in order to get the chance to win a raffle – and walked away with a remote-controlled Porsche for her very excited son:

won-a-car

(2) Another word that needs to be unpicked at some point in relation to technology and religious troping!

(3) In the light of a 2005 recording of Trump apparently bragging about sexually assaulting women, and the subsequent claims of historic abuses, some of this evangelical support has waned.