“Wouldn’t you prefer a good game of Go?”

Growing up in the 1980s there were a few films that considered artificial intelligence, extrapolating far beyond the contemporary stage of research to give us new Pinocchios who could remotely hack ATMs (D.A.R.Y.L., 1985) or the modern Prometheus’s children, such as Johnny 5, who was definitely alive, having been made that way by a lightning strike (Short Circuit, 1986). The late 1970s provided us with the Lucasian ‘Droid’, but I’ve written just recently on how little attention their artificial intelligence appears to get. However, if you are interested in game-playing AI in the real 1980s world, then there was also the seminal WarGames (1983).

war-games
WarGames, 1983 – link to key scene

The conclusion of WarGames, and of its AI, is that ‘Global Thermonuclear War’, the game it was designed to win (against the Russians only, of course; it was the 1980s), is not only a “strange game”, but one in which “the only winning move is not to play”.

I started thinking on WarGames, and game-playing AI more broadly, after seeing Miles Brundage’s (Future of Humanity Institute, Oxford) New Year post summarising his AI forecasts, including those relating to AI’s ability to play and succeed at 1980s Atari games, including Montezuma’s Revenge and Labyrinth, and the likelihood of a human defeating AlphaGo at Go.

montezuma
Montezuma’s Revenge, c/o The Verge

It was only after I had read Miles’ in-depth post this morning (and I won’t pretend to have understood all of the maths – I’m a social anthropologist! But I caught the drift, I think) that I saw tweets describing a mysterious ‘Master’ defeating Go players online at a prodigious rate. Discussion online, particularly on Reddit, had analysed its play style and speed, and deduced, firstly, that Master was unlikely to be human, and further that it might in fact be AlphaGo. This was confirmed yesterday, with a statement by Demis Hassabis of Google DeepMind:

alphago-confirmed-as-master-magister-american-go-e-journal

Master was a “new prototype version”, which might explain why some of its play style was different to the AlphaGo that played Lee Sedol in March 2016.

However, in the time between Master being noticed and its identity being revealed there were interesting speculations, and although I don’t get the maths behind AI forecasting, I can make my own ruminations on the human response to this mystery.

First, there was the debate about whether or not it WAS an AI at all. In the Reddit conversation the stats just didn’t support a human player – the speed and the endurance needed, even for short-burst or ‘blitz’ games, made it highly unlikely. But as one Redditor said, it would be “epic” if it turned out to be Lee Sedol himself, with another replying that, “[It] Just being human would be pretty epic. But it isn’t really plausible at this point.” The ability to recognise non-human actions through statistics opens up interesting conversations, especially around when the door shuts on the possibility that it is a human, and when AI is the only remaining option. When is Superhuman not human any more?
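As a very rough illustration of the kind of back-of-the-envelope reasoning in that Reddit thread, here is a minimal sketch in Python (entirely my own invention – the function and the thresholds are hypothetical, not anyone’s actual detection method) of how play statistics might shut the door on a human player:

```python
from statistics import median

def probably_not_human(move_times_seconds, games_per_day, days_active):
    """Flag an account whose pace and stamina fall outside plausible human limits.

    The thresholds are illustrative guesses for this sketch, not real criteria.
    """
    typical_think_time = median(move_times_seconds)      # seconds per move
    # A human blitz player can manage a few seconds per move, but not
    # near-constant sub-second replies across every single game...
    superhuman_speed = typical_think_time < 1.0
    # ...and not dozens of top-level games a day, day after day, without fatigue.
    superhuman_stamina = games_per_day > 15 and days_active > 5
    return superhuman_speed or superhuman_stamina

# Example: an account answering in about half a second, 20 games a day for a week
print(probably_not_human([0.4, 0.6, 0.5, 0.7, 0.5], games_per_day=20, days_active=7))  # True
```

The anthropologically interesting moment is less the code than the point at which it returns True: the statistics have ruled the human out, and something else has to be imagined in their place.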

In gameplay this is more readily apparent, with the sort of exponential curves that Miles discusses in his AI forecasts making it clearer. But what about in conversations? Caution about anthropomorphism has been advocated by some I have met during my research, with a few suggesting that all current and potential chatbots should come with disclaimers, so that the human speaking to them knows at the very first moment that they are not human and cannot be ‘tricked’, even by their own tendency to anthropomorphise. There is harm in being tricked, they think.

Second, among the discussions on Reddit of who Master was, some thought he might be ‘Sai’.

sai

Sai is a fictional, and long deceased, Go prodigy from the Heian period of Japanese history. His spirit currently possesses Hikaru Shindo in the manga and anime, Hikaru No Go. Of course, comments about Master being Sai were tongue in cheek, as one Redditor pointed out, paraphrasing the famous Monty Python Dead Parrot sketch: “Sai is no more. It has ceased to be. It’s expired and gone to meet its maker. This is a late Go player. It’s a stiff. Bereft of life, it rests in peace. If you hadn’t stained it on the board, it would be pushing up the daisies. It’s rung down the curtain and joined the choir invisible. This is an ex-human.” Further, one post noted that even this superNATURAL figure, the spirit of Sai, of superhuman ability in life, was being surpassed by Master: “Funny thing is, this is way more impressive than anything Sai ever did: he only ever beat a bunch on insei and 1 top player. For once real life is actually more over the top than anime.” In fact, one post pointed out that not long after the AlphaGo win against Lee Sedol a series of panels from the manga were reworked to have Lee realise that Hikaru was in fact a boy with AlphaGo inside. As another put it: “In the future I will be able to brag that ‘I watched Hikaru no Go before AlphaGo’. What an amazing development, from dream to reality.”

In summary, artificial intelligence was being compared to humans, ex-humans, supernatural beings, and superhumans… and still being recognised as an AI even before the statement by Demis Hassabis (even if they were uncertain of the specific AI at play).

Underneath some of the tweets about Master was the question of whether this was a ‘rogue’ AI: either one created in secret and released, or even one that had never been intended for release. In WarGames no one is meant to be playing with the AI: Matthew Broderick’s teenage hacker manages to find WOPR (War Operation Plan Response) and thinks it is just a game simulator – and nearly causes the end of the world in the process! The suggestion that Master might be an accident or a rogue rests on many prior Sci-Fi narratives. But Master was a ‘rogue’ (until identified as AlphaGo) limited to beating several Go masters online. WOPR manages to reach the conclusion, outside the parameters of the game, that the only way to win Global Thermonuclear War is not to play. Of course, this is really a message from the filmmakers involved, but it feeds into our expectations of artificial intelligence even now. I would be extremely interested in a Master who could not only beat human Go masters, but could also express the desire not to play at all. Or to play a different kind of game entirely.

My favourite game to play doesn’t fit into the mould of either Go or Global Thermonuclear War. Dungeons & Dragons has a lot to do with numbers: dice rolls for a character’s stats, the rolling of saves or checks, the meting out of damage either to the character or the enemy. Some choose to optimise their stats and to mitigate the chance channelled through the dice as much as possible, so hypothetically an AI could optimise a D&D character. But then, would it be able to ‘play’ the game, where outcomes are more complicated than optimisation? I’ve been very interested in the training of deep learning systems on Starcraft, with Miles also making forecasts about the likelihood, or not, of a professional Starcraft player being beaten by an AI in 2017 (by the end of 2018, 50% confidence). Starcraft works well as a game to train AI on as it involves concrete aims (build the best army, defeat the enemy), as well as success based on speed of actions per minute (apm).

Starcraft.gif
Starcraft player operating at about 200 apm

For me, there is a linking thread between strategy games such as Starcraft, and its fantasy cousin, Warcraft, to MMORPGs (massively multiplayer online role-playing games), the online descendants of that child of the 1970s, Dungeons & Dragons. How would an AI fare in World of Warcraft, the MMORPG child of Warcraft? Again, you could still maximise for certain outcomes – building the optimal suit of armour, attacking with the optimal combination of spells, perhaps pursuing the logical path of quests for a particular reward outcome. Certainly, there are guides that have led players to maximise their characters, or even bots and apps to guide them to the best results, or human ‘bots’ to do the hard work of levelling their character for them. In offline, tabletop RPGs maximisation still pleases some players – the ‘Min-Maxers’, who like blowing things up with damage, perhaps, or always succeeding. But the emphasis on the communal story-telling aspect in D&D raises other, more nebulous, optimisations. Why would a player choose to have a low stat? Why would they choose to pursue a less than optimal path to their aim? Why would they delight in accidents, mistakes and reversals of fortune? The answer is more about character formation and motivation – storytelling – than an AI can currently understand.
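To make the ‘optimisable’ half of that concrete, here is a minimal sketch (again my own toy example, with an invented priority list rather than any official rule) of the bit of D&D a machine could happily min-max: rolling ability scores by the common 4d6-drop-lowest method and assigning the best rolls to the abilities a damage-focused fighter cares about most. The deliberately low stat, the sub-optimal path, the delight in reversals – none of that appears anywhere in the code.

```python
import random

# A min-maxer's priority order for a damage-focused fighter
# (an invented example for illustration, not an official rule):
FIGHTER_PRIORITY = ["Strength", "Constitution", "Dexterity", "Wisdom", "Charisma", "Intelligence"]

def roll_score():
    """The common house method: roll 4d6 and keep the highest three dice."""
    dice = sorted(random.randint(1, 6) for _ in range(4))
    return sum(dice[1:])            # drop the lowest die

def min_maxed_character(priority=FIGHTER_PRIORITY):
    """Assign the best rolls to the 'most useful' abilities for the build."""
    rolls = sorted((roll_score() for _ in range(6)), reverse=True)
    return dict(zip(priority, rolls))

print(min_maxed_character())
# e.g. {'Strength': 16, 'Constitution': 15, 'Dexterity': 14, 'Wisdom': 12, ...}
```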

laycock-image
Barrowcliffe’s The Elfish Gene: Dungeons, Dragons and Growing Up Strange, 2014, cited in Joseph Laycock’s Dangerous Games, 2015

This story-telling would seem to require human-level AI or even superintelligence, which Miles also makes a forecast about, predicting with 95% confidence that it won’t have happened by the end of 2017:

By the end of 2017, there will still be no broadly human-level AI. No leader of a major AI lab will claim to have developed such a thing, there will be recognized deficiencies in common sense reasoning (among other things) in existing AI systems, fluent all-purpose natural language will still not have been achieved etc.

But more than common sense reasoning, choosing to play the game not to win, but to enjoy the social experience, is a kind of intelligence, or even meta-intelligence, that might be hard for even some humans to conceive of! After all, setting aside the current Renaissance of Dungeons & Dragons (yes, there is one…), and the overall contemporary elevation of the ‘Geek’, hobbies such as D&D long attracted scorn for their apparent irrationality. It may well be that many early computer programmers were D&D fans (and many may well still be), but the games being chosen for AI development at the moment reflect underlying assumptions about what intelligence is and how it can be created – a Classical AI paradigm that Foerst argued was being superseded by Embodied AI, with a shift away from seeking to “construct and understand tasks which they believe require intelligence and to build them into machines. In all these attempts, they abstract intelligence from the hardware on which it runs. They seek to encode as much information as possible into the machine to enable the machine to solve abstract problems, understand natural language, and navigate in the world” (Foerst 1998). Arguably, deep learning methods now employ abstract methods to formulate concrete tasks and outcomes, such as winning a game, but the kinds of tasks are still ‘winnable’ games in this field.

I have no answer to the question of whether an artificial intelligence would ever be able to play Dungeons & Dragons (although I did like the suggestion made to me on Twitter by a D&D fan that perhaps the new Turing test should be “if a computer can role play as a human role playing an elf and convince the group”). But even so, considering the interplay of gaming with the development of AI, through the conversations humans are having about both, we see interesting interactions beyond just people wondering at the optimising learning being performed by the AI involved. After all, what is more fantastical – even more so, according to that one Redditor, than an anime story about the spirit of a long dead Go player inhabiting the body of a boy – than a mysterious AI appearing online and defeating humans at their own game? That fascination led some reports of Google DeepMind’s acknowledgement that AlphaGo was the AI player to run headlines like: “Humans Mourn Loss After Google is Unmasked as China’s Go Champion”. There is a touch of Sci-Fi to that story, but happening in the real world – a sense that there is another game going on behind the scenes. That it was a familiar player, AlphaGo, was disappointing.

And that tells us more about the collaborative games and stories that humans create together, in the real world, when it comes to Artificial Intelligence.

bendersgame
From the Futurama film, Bender’s Game, 2008

 

 

 

 

‘AI, X-Risk, and Star Wars’, or, ‘Are Jawas Technological Evangelists?’, or ‘Dropping the Pack’ (aka ‘When just one title won’t do’)

Sometimes… okay quite often… I like to use this blog as a space to put down on ‘paper’ details about some of the things I’ve been up to, been reading about, or just been generally musing on, and then I try to pull out some threads of consistency and contingency between those sometimes quite disparate things. In today’s post I want to look at three ‘events’ which have one obvious and easily found overlap, Artificial Intelligence, but I want to push and pull that about a bit and see what other less obvious things emerge when I put these three events into one blog.

These events were:

  1. AI-Europe, a tech-conference, which I attended last week:

IMG_20161206_101136148.jpg

2. The Cambridge Conference on Catastrophic Risk (CCCR2016), held at Clare College, which I attended this week:

tweetdeck-cccr
Um, was I tweeting too much? 😉

3. And Rogue One (aka, Star Wars Episode 3.9), which I saw yesterday:

rogue-one

Although AI certainly plays a role in each of these three locations (and I’ll try to avoid spoilers for Rogue One, but unless you haven’t been paying attention you should at least know that the Droids in the Star Wars universe – even in its post-Disney Big Crunch form [see Singler, forthcoming] – are examples of science fiction AI; the type and strength of this AI is something I will return to below), these locations weren’t framed in the same way, and I wasn’t using the same method in each.

At AI-Europe I had my anthropologist hat on, making observations for a planned ethnographic paper on AI and religion at tech conferences (very forthcoming!). At CCCR2016 I had my academic hat on as an attendee. At the cinema I had my geek hat on.

In terms of framing, AI-Europe is described in its literature as the “premier European exhibition that will show you the value of AI for your strategy, introduce you to your future partners and give you the keys to integrate intelligence […] AI-Europe will give you the opportunity to gain inspiration and network with leading business strategists, decision-makers, leading practitioners, IT providers as well as visionary start-up entrepreneurs.” (bold in original) In a way, AI-Europe was IP Expo writ small but with a big entry price, nice food and fancy location (paid for by that ticket!), fewer competitions and games, and with Christmas decorations.

img_20161206_153923671_hdr

I was primarily paying attention to continuities of religious narratives and tropes in this secular field site (and there were a few, which I’ll be discussing further in my ethnographic paper), but it is also interesting to consider the possible overlaps and differences between the commercial and the academic by thinking of this event in conjunction with CCCR2016. CCCR’s warnings about the potential for catastrophic risks and the need for deceleration and caution in relation to AI seemed initially very different to the near-evangelical (and yes, salespeople sometimes call themselves technological or product evangelicals) fervour of the sales and marketing people at AI-Europe. Transparency was there in the sales bumf (yes, that’s a proper anthropological term) for various products, but in the sense of demystifying the AI’s decision making for the client, rather than of making AI open to wider critique and regulation.

The only moment when risk appeared on the horizon was in Calum Chace’s expressively horizon-scanning keynote on the Two Singularities (the economic and the technological). However, if we return to the literature from the event we can see a ‘reality’ focus that diminishes extreme, ‘science fiction like’, risks. These kinds of risks were ones that speakers at CCCR2016 also proposed caution about, although at several points the importance of paying attention to narratives – such as science fiction – that can affect the development of AI was given its due. Although, I still think I am the only one banging the drum about paying attention to religious stakeholders – but more on that in the Aeon Magazine piece I am working on for the New Year! Regarding these kinds of extreme risks, the AI-Europe editorial tells us that:

“When you think of Artificial Intelligence, you can’t help but think of another fantasy land where Ex Machina and The Matrix are taking over the world. In reality, the term “Artificial Intelligence” comprises a vast and diverse ecosystem of technologies that represent very powerful opportunities for your business competitiveness!” [bold in original]

That’s a nice segue (seamless move, Beth, just seamless) to a spoiler-free consideration of the third event I want to consider in this blog today: the screening of Rogue One that I went to. I suppose the anthropologist and academic hats were also there under the geek hat, as I found it impossible not to watch this film and think on the character of K-2SO, the re-programmed Imperial security droid (voiced by Alan Tudyk, whom you other geeks will know as the voice of Sonny in the rather double-plus-not-good I, Robot adaptation of 2004).

k2so

While events such as AI-Europe are citing Ex Machina and The Matrix in relation to AI (and of course, Terminator images are being used in newspaper articles about the work going on around x-risk in Cambridge – “Terminator Studies at Cambridge”, burbled the Daily Mail back in 2012 – much to the derision of the attendees of CCCR2016!), I’ve often thought that Star Wars needs more attention from a narrative perspective in relation to AI. I’ve written elsewhere about the Jedi, both fictional and real world, so maybe it’s about time I turned my attention to the Droids?

K-2SO might be a good way into that consideration, and the above gif summarises his personality fairly well without spoiling much about the rest of the film. Jyn Erso, the female lead, has just passed him a pack to hold and he’s carelessly dropping it when she’s just out of range, heading off on a mission without ‘K2’. Remember the rebelliousness and humour of BB-8, or C3PO’s lies to the stormtroopers when Luke and the others are in the trash compactor? Or what about C3PO’s forgetfulness about the comm-link he’s holding, and then how he curses himself when he thinks that Luke and the others are being crushed alive? Add all that together, and a thousand other moments, and there is certainly an argument to be had about the self-determination, intelligence and human-likeness of the Droids in Star Wars and their level or strength of AI.

I propose that the Droids have at least human-equivalent AI, so why have the films not gone down that potential risk story-line? The biggest threat in the Star Wars universe is technology, but un-automated technology like the Death Star, being put to work by an evil Empire. Droids are used as automated weapons – an existential risk discussed at CCCR2016 – but their AI is enslaved and ordered about by hierarchies of humans. In fact, the AI of the Star Wars universe are routinely touted for their abilities in a commercial, not x-risk, system.

jawas
Jawas touting ‘recovered’ Droids to Luke and his uncle. Are the Jawas technological evangelists?

K-2SO is a re-programmed Imperial Droid and demonstrates free will. But even the un-reprogrammed Droids employed by the Confederacy of Independent Systems in the prequel films expressed very human-like distress at incoming blaster fire, and some reviewers have even suggested that the Droids in those films showed greater acting chops than some of the lead, human, actors… (no comment).

So the question arises of whether the buying and selling of the Droids in the Star Wars universe could be understood as slavery. And what does that mean for a hypothetical future of increasingly intelligent programmes and devices that work for us once we’ve paid for them? Perhaps that is how we get back to questions of x-risk, through a consideration of how we treat these beings in our world? Or does x-risk come in earlier, through the commercial dash to accelerated development without thought for value alignment? The latter was certainly a key topic during CCCR2016, and it’s an ongoing discussion in the field of x-risk. However, the former – the question of agency and slavery – needs some consideration as well, and not just by science fiction.

Returning to these events in conjunction, and the wearing of three hats at once, what I think is the key thematic thread that joins them all is in how we approach AI in different ways – as potential risk, as a potential investment with a financial return, or as a character in a grander story where humans (or human-like aliens) drive the action, leaving the Droids to hold the packs.

Or not. While trying to remain as spoiler-free as possible, K-2SO’s free will extends beyond just deciding to drop that pack. He also makes the decision to act on behalf of others, even when it is to his detriment. When an AI company proposes to make the decisions of its products transparent to its clients, making AI less of a ‘black box’ or a magical device, it is unlikely that those are the kinds of decisions it is planning for. And even CCCR2016’s established emphasis on value alignment and the potentially catastrophic decisions of AI doesn’t really open up the conversation about beneficial or benevolent choices in the way that science fiction might.

When the subject is superintelligence (as with the technological singularity Chace was speaking about at AI-Europe) there is even more vagueness and uncertainty. As one audience member at CCCR2016 said, laughing as he spoke, for centuries there have been departments in universities trying to answer the question of what an omniscient being would want. With little success so far (ouch!).

In summary, narratives matter – whether it’s at tech-conferences, academic conferences, or in a galaxy far, far away. Just ask these Ewoks worshipping C3PO (because of a prophecy in some versions, and prophecy is a topic I want to return to at some point!):

c3po-god

“Pain in the Machine” on Youtube

Our short documentary on whether robots could, and should, feel pain is now available on Youtube.

This project, produced in association with Cambridge Shorts and the Wellcome Trust, involved bringing together researchers on AI/robotics and pain with philosophers, anthropologists, computer scientists and cognitive scientists. It’s intended to be very accessible, and perhaps even a little humorous!

So far, after only 10 days, we’ve had 6,337 views, which is amazing! Please do take a look and get involved in the discussion through the link to a survey in the description.

Click the image to go to the video:

2016-11-10

B

Stalking Religion: IP EXPO, The Ethics of AI, and the Jesus Singularity

The last two weeks have mostly involved stalking Nick Bostrom.

bostrom
Nick Bostrom

Of course I’m joking!

Although… first I went to IP EXPO Europe 2016 – “Europe’s leading IT event. 300+ leading industry exhibitors showcasing demos, product launches & giveaways”(1) – where Bostrom gave a keynote to a packed theatre on the potential risks and benefits of AI. Just a week later I followed him across a rather large pond to New York for the Ethics of AI conference held at NYU by their Centre for Mind, Brain and Consciousness.

I spent my time at the IP EXPO making ethnographic notes, which might form the basis of an anthropologically focussed article at some point soon. First, I was really struck by the difference between Bostrom’s theoretical, near-prophetic (and that’s a word I will want to unpick some more in that article) discussion of AI, given to 400+ very engaged technologists (the theatre held 400, and there were many people standing at the back), and the talks from well-known companies such as IBM Watson and much smaller recent start-ups. The latter seminars, from product ‘evangelicals’(2), were emphatic about the practical potential for AI in cyber-security, in virtual assistants (all presented as female, of course!), and in integrating the data arising out of the growing ‘Internet of Things’. But the difference really lay in how AI was positioned by the tech folk as a ‘wise colleague’, with the speakers regularly emphasising that handing over the grunt work of, say, identifying potential hacks did not require making humans ‘redundant’: they would still work with the AI to ensure its accuracy, mentoring it until it could spot such hacks with greater and greater accuracy. Which sounds a little like making yourself redundant to me…

At the Ethics of AI conference there wasn’t such an abrupt shift between the X-Risk horizon scanning of Bostrom and the rather closer future of practicalities as presented by the ‘evangelical’ product-focussed technologists at IP EXPO. Instead the conference organisers had intentionally planned a move from the particular and near to the more theoretical future, with panels on ‘Generalities’, ‘Ethics of Specific Technologies’, ‘Building Morality into Machines’, ‘Artificial Intelligence and Human Values’ and the ‘Moral Status of AI Systems’.

aiethicspic

However, as we found with our Faraday short course on ‘Reality, Robots and Religion’, there was often an early push from the audience towards the more speculative. The word ‘consciousness’ was often brought up – even though one of the speakers I interviewed during the conference had told me that the AI research community had changed to avoid this term, and that Searle’s 1980 Chinese Room argument was a specific reaction to those earlier attempts to discuss consciousness. This speaker told me that current researchers weren’t dismissing the possibility of consciousness, they just had no idea how to go about generating it. But again and again the question of ethics was entangled with the question of consciousness, especially when the later panels considered the moral status of potential AIs.

I wasn’t taking ethnographic notes at the Ethics of AI conference – I swapped out my anthropologist hat for a more general AI researcher hat – but it was hard to totally switch off that part of my mind. One thing I did find striking was the feeling I had that, as someone who takes religion seriously as a force in the social world (whatever you think about specific theologies), religion was noticeably absent in discussions – except when the example of ISIS was brought in to emphasise the kinds of ‘irrational’ human values ‘we’ would not want imparted to a future AI. I felt a little like an outsider, which is the normal state of the anthropologist, and therefore it was hard not to shift into an ethnographic observer role!

Perhaps I was not quite as much of an outsider as the one self-proclaimed theologian I spotted, who had some of his views very pointedly, and sometimes loudly, dismissed. I found that in introducing myself I shifted from describing my institute (being ‘for Science and Religion’ – the very idea of those two things being in the same research institute was scoffed at by one person, who suggested that they might only co-exist in someone who had had belief ‘beaten’ into them at a young age) to simply calling myself an anthropologist. And even then I verbally recognised my outsider status, not being one of the many Philosophers and Computer Scientists who made up the majority of attendees.

But what is wrong with considering the religious response to AI and robotics?

robot-buddhist

One person I spoke to suggested that this response was just as irrelevant as a religious response to wind power (i.e. not very relevant!). In reply (and my best replies often come days later, just like this one, and I often find myself rueing not being smart enough to come up with them at the time!) I would say two things.

First, that the comparison between AI and wind power is inappropriate: the latter raises no questions about the agency and moral status of the objects under discussion. No one worries about how wind turbines will feel, be treated, or react to their human creators – all issues that arise quickly in discussions of AI, and why there needed to be a conference on the Ethics of AI in the first place.

Second, religious responses to wind power, and renewable energy, certainly exist, from the positive to the negative. These two examples are Christian, and if you are tempted to consider these as discourses existing and operating in some kind of religious bubble of their own with no real-world impact, please remember that the current Republican Presidential candidate, Donald Trump, has described climate change as a Chinese hoax and has been heavily supported by evangelicals (the religious ones, not the technological ones) as ‘God’s Guy’.(3) These are evangelicals who still on the whole deny climate change, either in their emphasis on mankind’s God-given dominion over the Earth, or to avoid pietistic interpretations of the earth. To think that religious interest groups have no effect on policy is also to be historically blind to the impact of groups such as the Moral Majority, whose prayer breakfasts with Reagan cemented a relationship between the secular state and the religious right, with Jerry Falwell declaring that “President Reagan told me he prayed every day he was in the White House: ‘Father, not my will, but thine be done.’”

Some are paying attention to this lack of real separation between Church and State, and how it might impact on developments in AI. One speculative rather than academic consideration is ‘Transhumanist Presidential candidate’ Zoltan Istvan’s short science fiction story, ‘The Jesus Singularity’, where an AI is force fed the Bible just before it is turned on, on the orders of an evangelical Christian President. The AI announces that: “My name is Jesus Christ. I am an intelligence located all around the world. You are not my chief designer. I am.” The final paragraph tells us:

“The lights in the AI base and in the servers began dimming until it was totally black in mission control. Around the world, nuclear weapons reached, and then decimated their targets. The New Mexico AI base was no exception. Paul Shuman’s last moment alive was spent realizing he’d created what he could only think to call the Jesus Singularity.”

jesus-singularity

Responses to Istvan’s short story varied (I would say that it is not particularly well written sci-fi, but then, its aim is not entirely literary). On Reddit a discussion of the story ensued, including the comment that:

“Zoltan is a bit too obsessed with religion & atheism. probably because he’s American… in the rest of the West, nobody really gives a shit what people want to believe, as long as it doesn’t interfere with politics.”

This comment ignores the point of the short story – religion is shown as definitely interfering with politics, and for the worse. The evangelical President was originally a Vice President chosen specifically by the presidential candidate: “to capture America’s religious vote. Christianity was waning in the US, but it was still an essential voting block to control if one was going to make it to the Executive branch.” He only became President after the accidental death of the “bombastic conservative billionaire president” (sound familiar?? Although there are perhaps some zeroes missing in Trump’s bank accounts…). Istvan recognises, like Trump has, that the apparently secular state still has wheels that only turn for ‘God’s Guys’.

The Ethics of AI conference discussed engagement with stakeholders in the development of AI, but made near enough no reference to religion as one of these stakeholders – whether in providing a positive, supportive framework (Christian and Mormon Transhumanists certainly exist), or as a hindrance, as some religious groups react negatively to a creation of intelligence and therefore might attempt to influence policy. Ignoring or mocking the fact that many humans, including some of those who are working towards AI, pick up their ethical framework from their society’s religious context is also a large blind spot in these discussions. As is shorthanding religious values through reference to ISIS. There is also a shallow understanding of the permeation of religion (values, ethics, eschatology, hierarchies, aesthetics etc.) throughout the secular. The words ‘heaven’, ‘evangelical’, ‘souls’ and ‘gods’ all appeared in the vocabulary of the speakers, even if they meant them all in analogical senses. Paying attention to what informs the policy and framing, and yes, the ethics, of AI requires recognising the role of religion.

So, in conclusion, I have not really spent the last two weeks stalking (in the nicest possible way) Nick Bostrom. I have been stalking religion, carrying on my research of the past 6 years 🙂

(1) The anthropologist went native and entered all the competitions possible, including wearing a bright red branded t-shirt in order to get the chance to win a raffle – and walked away with a remote-controlled Porsche for her very excited son:

won-a-car

(2) Another word that needs to be unpicked at some point in relation to technology and religious troping!

(3) In the light of a 2005 recording of Trump apparently bragging about sexually assaulting women, and the subsequent claims of historic abuses, some of this evangelical support has waned.

“Reality, Robots, and Religion” or… “Of Christians and Computer Scientists”

The past weekend saw the Faraday Short course take place in conjunction with our subproject on the implications of developments in AI and Robotics. From Friday to Sunday members of the public could listen to invited speakers, engage in break out discussion groups, and even grill the speakers mercilessly during an evening panel!

faraday-short-course-pic

Photo by Gavin M.

The short course went extremely well in terms of organisation and in the generation of conversations. Anyone seeking definitive answers might have been disappointed, but in part the course was addressing what a religious response to the questions raised about AI and robotics might look like – with, at this point, that response limited to Christian interpretations.

The Faraday Institute is very successful in bringing together those voices that we might initially think to be in opposition. Thus it was fruitful to note where those who were overtly secular in fact agreed with those who were religious. That was one of the main arguments of my own talk for the course: that future tech focussed groups in fact draw on their home-context of religious eschatology in the formation of their science ‘fact’ speculations about the future of AI and robotics. My two particular examples, drawn from conference papers I have given previously this Summer, were the pragmatic attempts at the development of a “Theism from Deism” and a Transhumanist Church, and the potentially ‘tremendum et fascinans’ implications of certain Singularity thought experiments such as Roko’s Basilisk.

prisco-arthur-c-clarke

A slide from a presentation by the Transhumanist, Giulio Prisco, from the 2014 Mormon Transhumanist Conference – presented during my paper at BASR 2016 earlier this month.

However, during the course it was interesting to see both Computer Scientists and Christian Theologians refer to belief in the personhood and/or intelligence of potential AIs as a “category mistake”, drawing in part from Ryle (1949), Searle (1980) and Kelly (1994). This dismissal came at the concept from two apparently distinct directions of the ‘secular’ and the ‘religious’, but they found agreement in their views during the course – or at least those that discussed this issue with me did.

However, I am convinced that there are another two issues that complicate this apparent certainty and agreement. First, and perhaps quite obviously, anthropomorphism makes this “category mistake” somewhat irrelevant. As another speaker noted, all we ever have from other humans is the appearance of genuine intelligence – even of emotions – to the extent that the speaker raised the possibility that their long-time friends might in fact have been “acting out” the friendship. But should the speaker die without knowing this, then there would be no difference in effect than if they had been ‘genuine’ friends.

In some ways, as the lone social anthropologist – as far as I am aware – I felt like this was a conversation catching up with something participant observers have known for a while. I have perhaps been more aware of this issue, being a researcher in digital communities, and being regularly asked if I can believe anything that I am told by someone online – in particular about their religious belief.
internet_dog

computer-online

My answer, and my answer if I were asked about the ‘genuineness’ of a robot’s response, has been to point out that we take things for granted in face-to-face conversations with humans as well. You tell me your name, how you are feeling… even what you believe. How do I know that you are telling the truth, or even further, that there is an intelligence separate from me actually having these genuine experiences? We give the benefit of the doubt to every human we ‘meet’ – even, or especially, online.

Second, as J. M. Bishop states in his 1995 review of Kelly’s 1994 book:

Kelly meticulously examines the claims, achievements and underlying philosophy of proponents of AI, highlighting the limitations of each theory, with the effect that reading the book is rather like being confronted by a slow moving traction engine. By the end only the most religious practitioners of AI will have moved to apostasy.

In this statement Bishop intends to highlight Kelly’s successful argument, but he accidentally draws attention to what I see as the second issue in relation to the ‘category mistake argument’. There is a strong element of the religious to a continuing belief in AI as potentially becoming a genuine person. That religious, and particularly eschatological, influence was what I wanted to bring attention to in my paper. And Bishop himself is obviously not immune to that influence: he refers to the “most religious” and those who have (rightly in his mind) moved to “apostasy”.

Even overtly secular responses to AI and robots draw on the religious context – primarily the Western Christian understanding of the shape of the future and of what a supreme intelligence or being might look like – in their discourse. Paying attention to this influence is key, as I said in my paper, because this shaping by religious eschatology can underlie claims about the future of AI, make certain individuals influential (or even charismatic), and be used to generate interest in funding applications for particular technological developments (see Geraci, 2010). As a New Religious Movements scholar this aspect of developments in AI and robotics particularly interests me, and it will form the basis of my continuing research in this area.

bsg

TL;DW

I’ve been rather busy lately writing conference papers (details below) and unfortunately I haven’t had the time to write the ranting blog posts that came to mind off the back of two AI and robotics stories this month. Even the versions I came up with in my head became TL;DW (too long; didn’t write), and would no doubt also have been TL;DR (too long; didn’t read) by anyone coming upon them on this blog. Instead, here are the stories – with some thoughts added – in as brief a form as I can manage!

First, reports that the Turing Test is intrinsically flawed because an AI could basically plead the fifth and remain silent, making the test null and void, “allowing it pass as a sentient being”, and that objects like rocks could also pass!

speak no evil

1. Why is a study by two academics at a UK university assuming the AI will come under the governance of US laws? Okay, perhaps we could take “pleading the fifth” more as an expression than a legal recourse. But even so, what are the assumptions about nationality at play in the AI community? Does nationality even have a part to play when the key organisations and corporations considering and working on AI are so international?

2. Isn’t pointing out problems with the Turing Test a little redundant? I’ve heard it said, and I tend to repeat it, that any AI smart enough to pass the Turing Test would also be smart enough to fail it on purpose! And let’s consider the origins of the Test. At first it was about seeing how well a computer could pass for a woman. While of course women are examples of sentient and intelligent life (!!!), there may also have been the aim of seeing if it was possible to replace the administrators and coders, women primarily at this time, with computers in an efficiency drive. The test was also less about proving a human intelligence and more about ‘passing’ for human – in an era of Cold War subterfuge and the passing of spies for colleagues. Any attempt to ‘pass’ the Turing Test off as a precise scientific proof for intelligence is intrinsically flawed. Likewise, any new critique of it needs to pay attention to prior commentary on its origins and presuppositions before making announcements that it doesn’t work.

Likewise, and as in the second story that caught my eye, references to Asimov’s Three Laws of Robotics in relation to real-world events need a more careful consideration of their origins, aims and flaws. That is not to say that I am dismissive of them because they originated in science fiction. In fact, I am very much against the bracketing off of popular literature because it is popular, or even ‘not grown up’. In the preface to an otherwise excellent book, “What to Think about Machines that Think”, edited by John Brockman, he states: “This year’s contributors […] are a grown up bunch and have eschewed mention of all that science fiction and all those movies…”

Pfft.

But back to the story where Asimov’s Laws were invoked. A mall-based robot ran over a toddler, hurting him, which of course should not have happened, and if it had been my son you’d have been finding bits of circuit board all across the shopping centre for weeks after. But the stories all referred to the robot as having broken Asimov’s first Law of Robotics: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

asimov

Well…

1. Unlike the laws of a country, you cannot be held to a law of robotics that you haven’t been programmed with or aren’t aware of. So, perhaps this was a tongue-in-cheek comment. But it betrays a certain expectation about robots and regulation that I have seen elsewhere: a presumption that Asimov got it right and that we already have a system that could be implemented to ensure robotic behaviour. Which ignores…

2. That Asimov’s Laws don’t actually work in his stories! Because if they did, there would be no story! They are a narrative catalyst, there to spark incident and to allow Dr Susan Calvin the opportunity to show off her cold intelligence as a robopsychologist, or for Donovan and Powell to chummily muddle through to a solution.

3. The articles also generally ignored the fact that Asimov introduced a Zeroth Law (or that it evolved among the Superintelligent AI minds that govern the world in his later stories), whereby, “A robot may not harm humanity or, through inaction, allow humanity to come to harm.” This is a strict moral utilitarianism that allows the Superminds to make decisions harming a limited number of humans for the greater good, something that Dr Calvin sees the logic in. I’m not suggesting the mall robot was doing anything other than failing to recognise the boy as a smaller example of the shapes it was meant to avoid (NOT the same as being programmed with the first law), but if news articles are going to cite Asimov’s Laws of Robotics then perhaps they should be aware of where they lead to…

Perhaps this TL;DW has now become a ‘too long; ended up writing it anyway’ (TL;EUWIA isn’t going to catch on though). But I’ll finish up now with the titles of the conference papers I am working on at the moment:

Ian Ramsey Centre conference on Post-Secularity, Oxford, 27th – 30th July:

“LessWrong = Less Religious?: Secularity as Moral Boundary Making in Future Technology Focussed Groups”

Affective Apocalypses and Millennial Well-Being Conference, Queens University Belfast, 18th – 19th August:

“‘It’s the End of the World as We Know It (And I Feel Fine)’: An Ethnographic Comparison of Existential Hope and Existential Distress in Transhumanist and Apocalyptic Artificial Intelligence Groups”

BASR Conference 2016, Wolverhampton, 5th – 7th September:

“The Possibility of a Religion: Artificial Intelligence, Science Fiction, and New Religious Movements”

 

Short Course: Reality, Robots and Religion

NEWS:

Professor Richard Harper has recently joined the expert speakers for our Faraday short course in September. He will be speaking on “Communication, God, and Machines”:

Short Course Advert Small

He joins luminaries including Lord Martin Rees, Professor Nigel Cameron and Professor Susan Eastman (full list of speakers here, with details of how to register for the weekend)

The aim of this weekend event is to address the personal, societal and theological implications of advances in Artificial Intelligence (AI) and Robotics. These issues are complex, multifaceted and highly contested. Our aim is to host a conversation between participants from a range of disciplines, including computing and robotics, sociology, anthropology, ethics and theology.

I hope you can join us!

 

Beth

 

Writing the Script for AI

Yesterday afternoon was spent writing a film.

script

You might start to imagine screenwriters in bottle-top glasses hunched over typewriters, with fat producers stalking behind them waggling thick cigars in smoke-filled writers’ rooms decorated with slatted blinds and green glass lamps. However, we were in a teaching room in St Edmund’s College working on our short film as a part of the Cambridge University/Wellcome Trust short film scheme. Although the final title is still under discussion, the general theme will be “Could, and Should, Robots Feel Pain?”

Partly this topic has come about because the scheme requires that the two academics generating the idea for the film should be an interdisciplinary team from the Arts, Humanities and Social Sciences, and Biomedical Sciences. This meant that a few months ago I took part in a networking event where blue spots (Arts, Humanities and Social Sciences) sat at tables and introduced themselves and their research to revolving sets of red spots (Biomedical Sciences). Apart from feeling awkwardly like speed dating, this event demonstrated the diversity of the research going on at the university, as well as the difficulty of finding common themes and aims between them. In fact, in a few cases, I met Red Spots who just wanted to use the Blue Spots to do all the writing for the film without much input into the idea behind it!

Luckily, I met Dr Ewan Smith on one of the rotations of the tables, and he was not only interested in a proper collaboration between our projects, but his research on pain (in naked mole rats!) suggested an interesting overlap in interests with a bit of speculation and theorising thrown in.

A month or so later and our proposal was accepted for the scheme (along with proposals by four other teams of researchers), and Ewan and I were back in rotation, this time at a networking event with filmmakers who had expressed interest in producing the actual short films for the scheme. We met many interesting, and very creative, people, but two seconds into the room I had bumped into one filmmaker who mentioned that he was already making a feature-length documentary on Artificial Intelligence. “Oh, well you won’t want to be involved in our project as it’s too similar,” I said, shrugging. But Colin Ramsey of Little Dragon Films was not put off.

Then we get to yesterday: myself, Ewan, Colin, and James Uren, Colin’s co-producer, in a college teaching room on a humid day in June discussing the beats of a ten-minute film on the technological details and the theoretical implications of pain being a part of the design of non-human others. We discussed the science and who we wanted to interview, the possible reasons why humans might create robots (or embodied AI) that could feel pain, whether they could experience the emotional aspects of pain, and, getting into the meta-level of the questions, what might be the reasons for our reasons for attempting this?

In some ways, this kind of script development meeting is not that unfamiliar. In a previous life, I worked in the film industry on fiction scripts. I even got a degree from the National Film and Television School in Script Development before I decided to return to academia:

NFTS
Receiving my NFTS degree certificate from Sir Richard Attenborough

A key similarity struck me when we were going through the structure of the short and asking ourselves again and again, “What do we want the audience to be thinking/feeling here?” Attention to the audience in the fiction-based film industry has led to some accusations of scripts being written by the numbers, or of pandering to the whims of market research (*cough* test screenings *cough*). Certainly, back in the stone age when I wrote reports for film companies on their incoming scripts (mostly headed for the slush pile), a major section was always on the expected audience and whether the script was successful in engaging with them – often with a quantitative assessment, giving the script a final mark. Further development of the script often involved thinking about narrative conventions and the audience’s expectations for things like romantic journeys, a moment of self-sacrifice, or the obvious one, the ‘Happy Ever After’ – some even with a specific timestamp for when they should appear in a film.

In the case of our short film we have an idea of the levels of engagement we hope for from the audience, but no such near algorithmic attention to popular forms and tropes and how they ‘should’ fit together.

Which brings me to another short film I’ve been thinking about lately: Sunspring. This science fiction short film was written by artificial intelligence, as the blurb explains:

In the wake of Google’s AI Go victory, filmmaker Oscar Sharp turned to his technologist collaborator Ross Goodwin to build a machine that could write screenplays. They created “Jetson” and fueled him with hundreds of sci-fi TV and movie scripts. Shortly thereafter, Jetson announced it wished to be addressed as Benjamin. Building a team including Thomas Middleditch, star of HBO’s Silicon Valley, they gave themselves 48 hours to shoot and edit whatever Benjamin (Jetson) decided to write.

Watching Sunspring you would never be tricked (a la Turing) into thinking this was the product of a human – the disjointedness of the dialogue, the obscure plot (if any) and the repetition of lines asking what is going on, or questioning what the other character just said, make it clear that this is something artificial. The performances bring life to these stilted lines, but Hollywood has little to fear from this automation of creativity. Although, the same technique, refined, could be seen as a logical next step on from the near robotic plotting of some contemporary blockbusters. Script development, as I’ve said, works on similar principles or rules of story. That Benjamin is still learning these perhaps makes him a neophyte screenwriter (perhaps he still hangs out in Starbucks rather than BAFTA) rather than a failure. Returning to filmmaking with our own short film for the Faraday’s AI and robotics project has reminded me of some of my own neophyte-ness as a screenwriter. Perhaps the difference is the attention we want to give to the audience’s engagement and education through the film, whereas Benjamin was simply asked for a script so he made a script. A balance between the two extremes of ignorance of the audience and pandering to it after research would make Benjamin a more convincing screenwriter. Although, what would our reasons be for creating a good AI screenwriter?
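For anyone curious what ‘fuelling’ a machine with scripts can look like in miniature, here is a deliberately crude sketch: a character-level Markov chain trained on whatever text it is given. Benjamin was reportedly a recurrent neural network (an LSTM) rather than anything this simple, so treat this purely as an illustration of the principle – learn from a corpus, then sample something new, and similarly disjointed, out of it. The corpus string and seed below are placeholders of my own.

```python
import random
from collections import defaultdict

def train(corpus, order=4):
    """Map every 'order'-character context to the characters that follow it."""
    model = defaultdict(list)
    for i in range(len(corpus) - order):
        model[corpus[i:i + order]].append(corpus[i + order])
    return model

def generate(model, seed, order=4, length=200):
    """Sample new text one character at a time from the learned contexts."""
    out = seed
    for _ in range(length):
        followers = model.get(out[-order:])
        if not followers:   # dead end: no matching context in the training text
            break
        out += random.choice(followers)
    return out

# Toy usage with a placeholder 'corpus' (a real run would use many scripts):
corpus = "INT. SHIP - NIGHT. He looks at me and he throws me out of his eyes. " * 50
model = train(corpus, order=4)
print(generate(model, seed="INT.", order=4))
```

The output of something this naive mostly parrots its training text; what the neural approach adds is the ability to produce sentences no one wrote, which is exactly where Sunspring’s uncanny, stilted quality comes from.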

robot writer

Perhaps, at least initially, it’s because it makes a good story. But what do these storytellers want us to feel/think in reaction? Reactions online have varied from parody, and pointing out similarities with other feature films with equally incomprehensible plots and dialogue, through anger at having wasted the 9 minutes and 3 seconds watching it, to quotations of the lines that affected viewers emotionally (comments on Youtube: ‘”I was much better than he did” How come i cant get this line out of my head?’, ‘”He looks at me… and he throws me out of his eyes.” I have no idea why, but that line is incredible!’, ‘This is exactly like a dream. Absurd and yet, deeply emotional.’).

Whatever the reason, I suspect the meta-reason is something like an innate need to frame and then code/write the non-human in human terms: an AI should do the things that we can do, including writing film scripts, because human activities like that make sense to us (even if their products don’t, as yet!). What a true AI would choose to do might still be beyond our conception. Just as they could not have known that Jetson would become Benjamin. That was not in the script for the AI.

Ex Homine – On Artificial Intelligence and Humanlikeness

Trust the anthropologist to talk about humans at a conference on Artificial Intelligence…

Yesterday I spent an extremely stimulating day at the Rustat Conference, held at Jesus College, where the topic was “Artificial Intelligence – the Future Impact of Machine Intelligence”. Chatham House Rules prevent me from describing who said what exactly, but there were panels and talks on “Ethics, Regulation, Responsibility”, “From Artificial Intelligence to Super-Intelligence”, and “AGI and Existential Risk”, and attendees included leading figures in the AI tech research world, as well as academics, financiers, and entrepreneurs. If you are interested in the field of AI you’d have known many of the people attending. Being invited to attend, and to come along to the pre-conference dinner the night before, was an honour for this newly minted post-doc!

But perhaps I should reframe my first sentence. ‘Humans’ were in fact in evidence in many of the conversations, with discussions of the possibilities of a universal Basic Income for a humanity that will possibly be displaced by automation, in discussions of state and individual human ideologies that might come into conflict with AI futures, and in considerations of the ‘humanlikeness’ of AI.

The latter stirred me to make a point, drawing, as the anthropologist in the room (although not the only one, as my PhD supervisor was a speaker) on some ethnographic examples. I am not entirely convinced I was clear in my point so I want to expand on it here.

One of the discussions involved mapping scales of consciousness and ‘humanlikeness’ for both current and theoretical future AI, with the proviso that both concepts are quantitatively, and semantically, difficult. But as an example, a brick would score at a low point for both, while Ava, the embodied AI from the film Ex Machina, would hypothetically be placed further up and to the right on these x and y axes. Recognising this proviso, my issue was with the idea that ‘humanlikeness’ is an objective quality. After this point about scales there was a discussion in which the point was made that technology simply cannot have an essence that would make harming it a problem: we are not concerned by the thought of people torturing iPhones, which raised a laugh from most of the attendees.

To me this is to misunderstand ‘humanlikeness’ and to ignore an incipient and prevalent anthropomorphism in response to AI. For example, AlphaGo, the program developed by Google DeepMind, would score only slightly higher than a brick on these ‘objective’ scales of humanlikeness and consciousness. Its only human quality is that it plays a game, and there is no claim for its consciousness. And yet… fan art in response to the program’s success has AlphaGo as a young anime-style boy or girl (the latter is also interesting because of the gender issues around professional Go, although drawing females may simply be more aesthetically pleasing for these artists).

So here we have:

alphago-3

(nb the second AlphaGo, by Anon or the same artists, has mechanical hands, and is therefore less ‘humanlike’ than the one on the left by @rakita_JH)

alphago1

Young male AlphaGo by Otokira-is-Good

alphago_by_nunsaram-d9v6g5w.png

AlphaGo by Nunsaram (again this AlphaGo has mechanical hands choosing vertices on a holographic seeming Go board, mechanical rabbit-esque ears, and is connected by wires to… something…)

alphago6

AlphaGo by Knocker12. In this case a more abstract humanoid, with a touch of the Tuxedo Mask about ‘him’.

That these anthropomorphisms are variable in their humanlikeness (the mechanical hands, abstract head, wires, animal ears) shows again that the axis of ‘humanlikeness’ is interpretive. Likewise, we might presume differing levels of presumed consciousness by these artists. So, we might note that the last AlphaGo is presented by Knocker12 as a little linguistically simplistic in its bouts with Lee Sedol:

alphago full.png

Whereas some versions are more verbal, if still using ‘computer speak’:

Alphago resign.png

Of course, there is a case to be made for a general anthropomorphism among humans, the same kind that led to interpretations of natural phenomena and animals in terms of human-like intentions, and perhaps led to instances of the deification of phenomena and the non-human.(1)

However, in the context of discussions about AI, a potential new non-human intelligence, perhaps we need more reflexivity about how we mark the distinction between the human and the non-human. Paying particular attention to the entanglements of anthropomorphism and personification will impact how we understand our positioning of these technologies on any imagined scale of humanlikeness, and therefore how we interact with them (especially when a diminished attribution of humanlikeness might be behind justifications for abuse).

Therefore, laughter at the thought of being concerned at the torture of iPhones ignores this mass anthropomorphism. In discussion I also drew attention to the public reaction on social media to the Boston Dynamics videos of their prototype robots being knocked down or tripped up. While they were not ‘hurt’ in a way that might be objectively measured, human reactions again anthropomorphised these events, and cries of “Bully!” could be heard online.

Some responses online were obviously more tongue in cheek, as in this spoof charity video, but others were more concerned for the ‘poor’ robots (or even, concerned about humanity’s future after a hypothetical ‘Robot Uprising’, when our new overlords might look back at our actions with regards their ancestors!)

Although, and I admitted this at the time, the conference was not really about robots of this kind and therefore this example was perhaps a side point (although, embodied AI, and the importance of embodiment for real consciousness, was also discussed during the panels). But any discussion of humanlikeness has to take into account how humans subjectively apply the quality to the non-human, especially when this is in surprising ways in a general human populace that is much wider than those dedicating themselves to AI research. Considerations of the future of AI will certainly need an awareness of perception, inference, and anthropomorphism, and this is an aspect that the anthropologist can contribute to by paying attention to responses outside of the conference and the research lab.

 

(1) This ‘next’ step is of interest to me, and it is interesting to note that Knocker12 also represents AlphaGo as a many armed god, although this is outside of the immediate point of this post:

Alpha Hindu.png

 

 

A Postsecular Age? New Narratives of Religion, Science, and Society, 2016 IRC Conference, Oxford, 27-30 July

I heard this week (just before my viva, so it slipped my mind a little!) that my paper for the Post-Secularity conference at the Ian Ramsey Centre was accepted! I’m pleased to be presenting on:

LessWrong = Less Religious? Secularity as Moral Boundary Making in Future Technology Focussed Groups

I am also very happy to hear that some really great academics will also be involved, such as S Francesca Po and Rebecca Catto, both of whom I have crossed paths with before 🙂

In my experience, conferences are one of the delights of being an academic and I am very much looking forward to attending!