’29 Scientists’: An AI Conspiracy Theory and What It Tells Us About Experts and Authority

I’m writing this on the train to Birmingham* as I travel back to Cambridge with some of the AI Narratives team (Dr Stephen Cave, Dr Kanta Dihal, and Professor Toshie Takahashi) after our stellar performance at the Science in Public Conference in Cardiff. We presented our work on the perceptions and portrayals of AI and why they matter (our report on this with the Royal Society is here), highlighting the tensions in those narratives and how they vary across regions, such as Japan.

While waiting for the SiP conference dinner to start last night, I spent a little time on Twitter observing conversations around AI, and I came across an interesting example of an AI conspiracy theory that crystallised my thinking about our panel and about some of the other panels on AI at SiP. This is the story of the deaths of 29 Japanese scientists at the hands of the very robots they were building, an event that has allegedly been hushed up by the ‘authorities’.

[Image: framed ‘29 scientists’ post]

My paper for our AI Narratives panel was on “Elon and Victor: Narratives of the Mad Scientist as applied to AI Research”, in which I explored the presumed synergy between Mary Shelley’s story, its two main characters, and current aims and aspirations in AI. In the past year, the 200th anniversary of Frankenstein’s publication, I have been asked to give four talks on AI and Frankenstein (including one for the 250th anniversary of my new college, Homerton, which has a link with Shelley through her father, William Godwin, who was refused admission to the original Homerton Academy on suspicion of having ‘Sandemanian tendencies’, i.e. being part of a Christian sect with particular non-conformist views on the importance of faith). I use examples from the media and popular culture where AI is presented as Frankenstein’s creature, and, for this paper, cases in which the AI influencer (if not exactly AI scientist) Elon Musk is portrayed as Victor Frankenstein – either positively or negatively – and how that depiction recursively interacts with the public perception of AI research, as shown in tweets about him, but also in concerns about what these ‘AI experts’ are up to.

[Image: Elon as Frankenstein]

 

Another paper at the conference also tackled the ‘AI expert’, with a rather negative account of who can even be an expert, as well as a dismissive attitude towards anyone speaking about ‘AI’ as it ‘does not exist yet’. To my mind this was a No True Scotsman argument (or perhaps, since we were in Cardiff, a No True Welshman argument), as the speaker accepted neither aspects of AI research such as machine vision as real ‘AI’, nor those working on this technology as credible AI experts. AI in this conception was, I think, much closer to AGI, and thus everything before that point was not real AI and therefore not worth worrying about in apocalyptic terms. Their concluding statement was that we should chill and stop spreading hyperbolic concerns about AI as it’s just not here yet. Particular criticism was aimed at ‘AI experts’ working outside of their usual expertise – so Hawking, Kurzweil, Musk et al. – giving us apocalyptic narratives about the future of AI. As AI, in this definition, doesn’t exist yet, these experts were dismissed on two counts: as speaking outside of their wheelhouse, and about something that wasn’t even real.

Returning to the ’29 Scientists’ conspiracy theory. I spotted this and was fascinated by it as an example linking together some of my thoughts around my paper, our AI Narratives panel, and this other paper. First, the story. This is a link to the original speech given by Linda Moulton Howe. She is a ufologist and investigative reporter who began her career writing about environmental issues before focusing on cattle mutilations with the 1980 documentary A Strange Harvest, which received a regional Emmy award in 1981. In the field of UFO thinking she is an expert, and her journalistic past and award give her authority.

This talk was given at the Conscious Life Expo in February 2018, and in it she refers to the death of 29 Japanese scientists at the hands of their own creations in the August of the previous year. She claims that this disaster was hushed up and that a whistleblower had come to her to share the truth. This specific section of her overall talk on the dangers of AI was posted on YouTube on 14th December with an added frame that said the event had happened in South Korea, even though she clearly says Japan. In tweets about the ’29 Scientists’ since 14th December the details do not vary very much, apart from that mis-location: 29 Japanese scientists working on AI in robotic forms were shot with ‘metal’ bullets by the robots; the robots were destroyed; and one of them uploaded itself into a satellite and worked out how to rebuild itself ‘even more strongly than before’. In the talk she highlights her fears about AI not only with this whistleblower’s story, but also with clips and quotes from Musk and Hawking. Her position is obviously that they are very much the voices we should be listening to.

Who is allowed to be an ‘AI expert’? This was the question that the other paper at SiP got me asking. I recognised in my own paper that Musk is not directly working on building AI (and Hawking certainly wasn’t), but that he funds research into avoiding the risks of badly value-aligned AI (as described in the work of Nick Bostrom). He has links with CSER and through them to the CFI, where we are considering AI Narratives. Hawking spoke at the launch of the CFI itself, expressing his concern that AI could be either the best or the worst thing to happen to humanity. Are these voices and others experts? Is anyone an expert if they come from another field? As a research fellow and anthropologist I consider myself an expert on thinking about what people think about machines that might think, but could I be summarised as an ‘AI expert’? I certainly have been, but I make no claims to be building AI! While not on the scale of a Musk or a Hawking, I still think my perspective has value and even impact (I do public talks, make films, offer opinions). Perhaps it shouldn’t? But then valuable thinking from anthropology, philosophy, sociology, etc. would be lost.

With Musk et al., the much better-known ‘AI experts’, I argue that there is a Weberian form of charisma being exhibited. It might not entirely rest within them – although Musk certainly has a large following on social media who react to him and his work on a personal level – it might also be thought to lie within the topics that they discuss. The authority they have in other science and technology fields also lends legitimacy. Weber’s schema for legitimate authority was rationality–tradition–charisma. While work in AI safety has internal legitimation through claims of rationality (Bostrom as a philosopher is a prime example of this, as is the wider rationalist ecosystem including sites such as LessWrong, which I have discussed elsewhere), I would argue that the public statements of Musk and others also rely on charismatic authority – the ideas and the people speaking are affectively compelling.

Further, I would suggest that a much longer history of apocalyptic thinking, a ‘tradition’, underlies this discourse; certainly, when stories such as ’29 Scientists’ are discussed in conspiracist circles, their reception rests upon a long (and linking) chain of such claims that map onto earlier models of apocalypticism (see David Robertson’s book on conspiracism for his discussion of ‘epistemic capital’, authority, and the nature of these kinds of ‘rolling prophecies’).

Prophecy also provides moral commentary: through claims about the future we can understand critique of the present. A lot of the negative comments about AI experts at the SiP conference were focused on prediction, and the idea that these experts were ‘wagering’ on certain futures and were likely to be proven wrong on specific dates for AGI (the example used was Kurzweil’s claim that the Turing Test would be beaten by 2029). But the prophetic statements made by Kurzweil (called a ‘prophet’ in the media on occasion) also express his critique of current society: if we are going to be smarter/more rational/sexier (!) post-Singularity, what are we now? While not a forward-looking prophecy, as the event was said to have happened in August 2017, the ‘29 Scientists’ story also contains moral commentary – the scientists are killed by their own creations, and their hubris (like Victor Frankenstein’s) is something we need to learn from and avoid. The secrecy around the deaths and the role of state authorities in hushing up the truth is obviously a common trope in conspiracy theories, but we can also note a techno-orientalism (a narrative that Toshie discussed in her paper at SiP). That the scientists are specifically Japanese plays into some negative tropes about Japanese culture and its ‘too strong’ interest in robots. In Moulton Howe’s talk it is clear that she wants everyone to pay attention to the moral of this story of the death of the 29 Scientists – ‘be careful what you wish for’ – but it is the Japanese scientists who pay the fatal price for their hubris, while the ‘rational’ (and charismatic) authorities, the Western scientists represented by Musk, Hawking, etc., have been warning us about the dangers of AI and we just haven’t been listening.

I’m tracking the spread of this story and seeing growing interest (both from those who believe the story and those who are dismissive). I myself tweeted that I was doing this research, leading one person to ask if I was placing bets on its spread and then rigging the game by sharing the story! This is a perennial problem for the ethnographer – highlighting a culture or narrative can change it, making it more popular or even putting it under new pressures that lead to its demise (I’m looking at you, Leon Festinger!). But I’ve taken a snapshot of the story and the conversations around it using a couple of different digital tools, so I can also note when/if my influence occurs:

[Image: 29 scientists graphic]

This is not a huge number of interactions compared to many other viral stories. But I think it’s an interesting case study: it highlights the nature of the AI expert, who is believed and trusted, charismatic authority, conspiracy culture, AI apocalypticism, and techno-orientalism.
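For anyone curious about replicating this kind of snapshot, a minimal sketch of one of the simpler measurements – daily tweet volume for the story’s key phrase – might look like the following. This is illustrative only: I’m not naming the tools I actually used, so the snscrape library, the query string, and the helper function here are all assumptions made for the sake of example.

import snscrape.modules.twitter as sntwitter
from collections import Counter

def daily_tweet_counts(query, limit=1000):
    """Count tweets per day matching a Twitter search query."""
    counts = Counter()
    # TwitterSearchScraper yields matching tweets for the query string.
    for i, tweet in enumerate(sntwitter.TwitterSearchScraper(query).get_items()):
        if i >= limit:  # cap the crawl; this is a small story, not a huge viral one
            break
        counts[tweet.date.date()] += 1  # bucket by calendar day
    return counts

# Hypothetical query: mentions since the clip was posted to YouTube on 14th December.
for day, n in sorted(daily_tweet_counts('"29 scientists" robots since:2018-12-14').items()):
    print(day, n)

Plotting those daily counts over time would also make my own influence visible as a bump in the curve after my tweet – the when/if question above.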

 

*I lied, my train is actually heading to London Paddington. See, you can’t believe everyone online 🙂

When AI Prophecy Fails

A short provocation piece that was written for the Belief in AI conference, as a part of Dubai Design Week


[Image: Millerite caricature, 1843]

In 1843 a caricature was published in a newspaper showing a man hiding in a safe. He’d chosen a Salamander Safe, a familiar brand of the time, and filled it with brandy, crackers and cheese, along with ice to keep them all fresh. While scrunched up in the container, the man was literally thumbing his nose at the viewer, a gesture suggesting a certain amount of smugness. The illustration was labelled ‘A Millerite preparing for the 23rd of April.’ Protected by his safe, this ‘Millerite’ was clearly expecting to live through the end of the world that his prophet, the American Baptist Preacher William Miller, had predicted.

William Miller had actually predicted the return of Jesus Christ, using what he understood to be signs and portents in the text of the Bible itself. But he and his followers are better remembered as doom-sayers whose expectations for the end of days were thwarted not once but twice, the first occasion now known as the ‘Great Disappointment’. The Millerites are also remarkable for having members among the farming community who refused to harvest their crops that year: ‘No, I’m going to let that field of potatoes preach my faith in the Lord’s soon coming,’ a farmer by the name of Leonard Hastings reportedly stated at the time.

This cartoon parody suggests one potential response to people who change their behaviour based on predictions of the end times, or the apocalypse. When Harold Camping again used the Bible to predict the return of Jesus for 21 May 2011, the convergence of digital advances and modern modes of transglobal social networking made his account easily parodied through Internet memes. But this also made his story better known than Miller’s, whose publicity was limited to a few sympathetic newspapers and pamphlets distributed by his followers.

We are now exposed to more accounts and predictions of existential risk than ever before, while also having increasing numbers of platforms for stories about the future. Among these is science fiction. Arguably this genre was born in 1818 with Mary Shelley’s Frankenstein, or perhaps with her own post-apocalyptic account, The Last Man, published in 1826. In this story, humanity has suffered its end of days through a plague. Shelley claimed to have discovered the story of The Last Man in a prophecy painted on leaves by the Cumaean Sibyl, the prophetess and oracle of the Greek god Apollo. Whichever date we choose for the origin of the science fiction genre, it’s clear that we have been imagining our futures, and our future fates, for a long time. And when we come to imagine our future in relation to Artificial Intelligence, we still rely on these old apocalyptic tropes and narratives, including prophecy.

Too often, discussions of prophecy focus on the success or failure of a particular date. In When Prophecy Fails: A Social and Psychological Study of a Modern Group That Predicted the Destruction of the World (1956), the social psychologist Leon Festinger pays attention to a small group of UFO-focused spirit mediums who expect the world to end, and develops the concept of ‘cognitive dissonance’ to explain how they could cope with the failure of their prophecy. But his theory ignores the moral commentary within the assertions of the prophetess, Marian Keech, and the tensions within the group. Looking back to the Old Testament prophets, and Jeremiah in particular, we can see how prophecy also functions to warn people about their behaviour and where it will lead. When the impending apocalypse seems to be ushering in an age of utopianism, as in many Christian scenarios, commentary helps us to reflect on that perfection and lay bare how far away from it we really are.

With Artificial Intelligence, we have long told stories about the end of the world at the hand of our ‘robot overlords’. In the Terminator series (pictures from which still often dominate the press’s discussion about AI) the prophecy of Skynet’s awakening — aka ‘Judgement Day’ — relies on familiar religious language and tropes. Further, I would argue that it continues prophecy’s interest in moral commentary. There is a tension between the cautious human wisdom of the ‘good warriors’ — the nascent Resistance — and the ‘bad warriors’ — the military-industrial complex who have rushed into replacing human soldiers with AI and automated units. These same robot soldiers will then be improved upon by Skynet and, ultimately, will hunt down humanity in the form of the Terminators.

Our fascination with the end of the world also comes from our subjective experience of time: for us, this is always the most important point in history, as we are here, and here now. Further, with AI we have a tendency to apprehend mind where it is not (or at least not yet…), and we understand that minds desire to be free, since we have minds that want to be free. We tell stories of robot rebellion and human disaster because we understand that this has happened with slavery in the past, and know what we would do if we were not free ourselves. Combine this mindset with the way that, for millennia, we have used prophecy to critique our current situation, and it is not so surprising that we return again and again to fears of an AI apocalypse.

For some this is purely about existential despair; the end of the world can only be a disaster for humanity. For others, the apocalypse can be the dawning of a new utopian age, as in transhumanist narratives that see the future of humanity through the embrace of technology – even if that could mean a redefinition of the human and the end of the world as we know it now. For these commentators, AI prophecy puts them into the shoes of the Millerite thumbing his nose at the rest of the world. However, whether we are optimists or pessimists, it seems likely that anxiety about AI, or about our current world, will continue, and we will return to the thrill of imagining the end of the world again and again in the stories we tell about the future.

I predict it.

Seeing All the Way to Mordor: Anthropology and Understanding the Future of Work in an AI Automated World

I spent yesterday in London having a very interesting discussion about the future of work, and what evidence might be fruitful for exploring the topic and for disseminating information to policymakers and the public. I was one of a few qualitative researchers at the table and ended up banging my drum for ethnographic research, which might give a more granular approach than one looking at ‘demographics’ allows for. In my view, ideas around the impact of AI and automation on the Future of Work appeared to be more about effect: this lever is pushed, therefore the economy (aka ‘people’) moves in this way. Or the scales tip down in this direction, the other side rises up like so… Now, I’m not against big data sets per se, but I wrote a blog post before for the Social Media and Human Rights blog on Paolo Gerbaudo’s discussion of the use of big data in digital research, in which I agreed with him about the loss of nuance around culture, structures, and dominant narratives, and the tendency to miss the general ‘messiness’ of actual lived life in big data research. Ethnographic approaches are more about affect, and in my view they can illustrate how existing cultures respond to technological change, how communities might enable or resist such changes, and what accounts and narratives are being employed that might be affective in their lives.

Of course, making that kind of claim about the usefulness of ethnographic research requires evidence. In my opinion, the case for ethnography becomes clear when we consider the kinds of evidence currently being employed. The historical parallels usually cited with regard to the Future of Work, such as the Industrial Revolution, where historians often note the role of institutions in the changes, necessarily have to rely on the types of evidence that survive through time. And survival is perhaps more likely for the materials of bureaucracy, which therefore reinforces the argument for the impact of institutions. Diaries and other cultural artifacts from individuals also give historical insight into worldviews, but documents about the movements of the unemployed, such as census data, might be more readily accessible and more permanent.

Ethnographic research means being in the field at the time of affect. Thus, individual accounts, community stories and tropes, objects, and observations of behaviours in relation to change can be collected. Ethnography, literally ‘writing about humans’, is not however a panoptic approach – we cannot be everywhere, seeing everything – and it is also not about the brute force of numbers like quantitative methods. It can provide more subtle data. I’m avoiding saying ‘soft’ data, which I have heard before and which suggests weakness (as in the ‘soft sciences’ in contrast to the ‘hard sciences’ – read, ‘proper’ sciences). Subtle data can be the key to understanding the source of changes and reactions – such as in the recognition of particular shibboleths or in-jokes being used in communications within movements, and their shades of meaning (an example of this would be the word ‘kek’. Look it up 🙂)

In the case of the Future of Work, ethnographic evidence would include case studies on the effects of unemployment on people’s lives in various locales and against various cultural backgrounds. The initial example I came up with during the discussion was the effect of the closure of the mines on communities in the Valleys of South Wales, an area I know still bears the scars of this shift and its later repercussions. The scars are there in the levels of unemployment, in the displacement of community, and in the closed-up shops and derelict public houses. They are also there in memories and in the commemoration of the mining past: a miner in bright orange stands by the Ebbw Vale Festival Park shopping centre, an attempt at regeneration on an ex-steelworks brownfield site that has only sped up the closing of other local shops.

[Image: coal mining]

I haven’t done ethnographic research in this area, but I’ve been exploring some of the existing work on it, and on Welsh culture and reactions to the socio-economic change that came with the loss of existing industry. Wider reading around the anthropology of unemployment and the anthropology of work in other cultures presents similar examples of local specificity around socio-economic change. In South Wales, some ethnographies I came across considered masculinity in relation to social shifts. For example, Richard-Michael Diedrich’s 2012 chapter on South Wales communities highlights the connection between work and the construction of male identities (he references earlier work in Dunk 1994 and Willis 1988): “For men, the experience of work, in a capitalistic economy, can be both the experience of a subaltern position constituted by the relationship between employer and employee and the experience of moments of positive individual and collective (self)-identification in terms of gendered difference.” To explore this, Diedrich draws upon the concept of liminality – which I’ve referred to before in terms of liminal beings, such as AI – to argue that “prolonged liminality imposed by long-term unemployment paves the way for an extreme challenge to the self” – but that the liminality in South Wales may be never-ending, with no hope of restitution of the self (“as ‘real’ men”) through a new job.

I argue that this endless liminality is pertinent to questions around the post-work futures thought to arise through automation. But these comments are still at the demographic level (‘men’), so Diedrich evidences his argument through interactions with communities at different scales, from unions and their members, to working men’s clubs, to families, to individual informants. According to ‘Peter’, a retired miner and chairman of a working men’s club (a name which is significant to the rhetoric of inclusion he espoused), to belong to the local community the individual had to be (or have been, and then retired as) a ‘worker’. Diedrich describes Peter’s view that men had to adhere to the “discourse of work and employment within its morally sanctioned ideal of the hardworking man as opposed to the irresponsible man who is not willing to work”, and further that this adherence “led to a perpetuation of the discourse of respectability and its individualistic idea of deservingness that divided the community and consequently the working class”. In conversation over a pint with ‘Emrys’ and ‘Arthur’, the latter explains that the work itself is a manly task because it’s “physical… although we got all the machinery and today, it’s still physical and requires even more skills”. The physicality of this endangered working site can be contrasted with the breadth of jobs, both physical and cognitive, that automation is thought to be endangering. But masculinity can also play a role in the rhetoric of those engaged in intellectual tasks – particularly when rationality is coded as a male strength and empathy as a female one, a division of duties that quite often appears in discussions about the impact of automation on demographic groups.

Women do appear in this ethnographic research, but his informants give the impression that “Although women could contribute to the survival of the community by assisting their men, survival was ensured by acts of loyalty, solidarity, honesty, and last, but not least, the demonstration of the willingness to work; all of these were regarded as core elements of masculinity.” Further, the confinement of men to the space thought of as belonging to women – the men, and the parallel ethnographies of unemployment cited by Diedrich, describe a fear of ending up just ‘sitting round the house’ – emphasises their liminality in the ‘no man’s land’ of the private space of the home, in effect becoming invisible to the public as well. A greater danger lies in moving out of liminality and into the role of ‘scrounger’ – being no longer counted in the community of those who want to work. In a future of increasingly precarious work, would there be similar insider and outsider rhetoric? Being in the majority might ameliorate such discourse, but predictions of the impact of automation scale up over time, meaning that for a long time being out of work due to AI-enabled automation will remain a minority position.

[A new paper by Christina Beatty considers “the integration of male and female labour markets in the English and Welsh Coalfields” and provides more insight into changes around gender in these areas]

Narratives, embodied through stories, accounts, and material and cultural artifacts, set the scene for reactions to change, as well as persisting through that change. Ethnography can pick up a variety of materials in order to examine cultural discourse. In their 2003 work, Annette Pritchard and Nigel Morgan consider contemporary postcards of Wales as “auto-ethnographic visual text” – a text that a culture has produced about itself. The Valleys are an imagined community “synonymous with coal mining, community, nonconformity, political radicalism and self-education” – so what happens when one of those pillars is removed? J. R. R. Tolkien is supposed by some to have been inspired to create Mordor by the view of the fiery steel pits of Ebbw Vale seen all the way from Crickhowell, which was itself perhaps the inspiration for the Hobbits’ Shire, and the working pits have been represented in many accounts as a “landscape of degradation”, according to Pritchard and Morgan.

[Image: Mordor]

Why then do the postcards still show off ‘The Welsh Mining Village’, with the definite article “elevating … [it] to a special significance (Barthes 1977), wherein the singular embodies the plural”? Pritchard and Morgan consider it an evocation of that which has been lost – much like the miner in the orange overalls residing by the new shopping park. That the image comes from a museum’s re-creation of a village scene only emphasises the overt nostalgia. What images and stories of nostalgia might be significant for shaping our memories of work in a post-automation future? Would another Tolkien, currently being inspired by the intellectual work mill of the open plan office rather than Ebbw Vale, ever write something as evocative as Mordor?

[Image: office]
Do I really want to compare this man to Tolkien? Not sure about that…

Other field-sites, of course, present us with opportunities for understanding cultural influences on conceptions of work, as well as reactions to unemployment. Robert T. O’Brien’s fieldwork in East Kensington, Philadelphia, involves participant observation with Community Development Corporations, schools, community health programs, and public meetings, as well as interviews and surveys with residents of the primarily white and historically working-class neighbourhood, which is showing the effects of under- and unemployment. He argues that “the failure to see marginally employed residents as people with rights and membership in the community is consistent with the processes of neo-liberalism, wherein people who do not adapt to the dictates of the free market and bourgeois normativity are created as undeserving.” This creation of the ‘undeserving’ is clear in his ethnographic material – a woman complains that she doesn’t “know how it is that the word ‘community’ gets stretched enough to encompass the people who make it hard to live [here], by their habits.” Insider and outsider rhetoric is shaped by economic shifts.

An anthropology of unemployment must acknowledge the “pervasive material and symbolic value of wage labour within the context of capitalist markets” but without an “assumption of wage labour’s permanence or universality” (Angela Jancius, introducing the anthropology of unemployment in the same issue of the journal Ethnos as O’Brien’s ethnography).

In an anthropology of automation- or AI-induced unemployment there are things to be taken up from prior ethnographic work (and there are ethnographies of the gig economy and the Precariat being written already that might also be useful – partially, perhaps, because academics are increasingly living those lifestyles). Moreover, anthropological evidence for the impact of automation on people’s understandings, imaginings, and accounts gives us a more nuanced understanding of change. In the liminality of unemployment, prior cultural forms will also be significant in the manifestation of new ways of being. Paying attention to culture will give us clues to the influences on any new culture of the future of work, and to how the future, unevenly distributed as it will be (cf. William Gibson), will look to the unevenly distributed groups and communities that make up society.

[Image: future of work]

 

Artificial Intelligence Ghost Stories

Would you like a story today?

Are you sitting comfortably? Good, then I will begin.

A married couple are sitting on their couch watching the film Alien: Covenant one evening. She’s seen it before and found some of it disturbing, so she’s not sure she wants to watch it again. He’s seen every Alien film so far apart from this one – the trailer gave him pause – but watching it at home in familiar surroundings rather than the dark cave of the cinema should be okay.

They mock some of the bad CGI, the bursts of blood that are obviously computer-generated. They discuss the absence of the woman from the previous film, Prometheus. They discuss the careless stupidity of the humans exploring a planet they know next to nothing about. And the wife is interested in how the series has become much more about Artificial Intelligence than about aliens. Case in point, there’s a scene featuring just David and Walter, the two androids created by Weyland-Yutani. Walter is a later model, as he explains:

WALTER
I was designed to be better and more efficient than every previous model, including you. I’ve superseded them in every way…

David considers himself to be the superior version, none-the-less:

DAVID
And yet you cannot appreciate the beauty of a single flower … Isn’t that a pity.

But then Walter continues…

WALTER
You disturbed people.

DAVID
What?

WALTER
You were too human. Too…idiosyncratic. Thinking for yourself.

And the couple on the couch jump a mile.

At the very moment that Walter tells David that he disturbed people, the husband’s voice-activated AI assistant on his phone leaps into life, asking oh-so-politely how it can help. Various curse words pepper the air, followed by the kind of weird almost-laughter that comes after a sudden shock and the realisation of what happened.

They rewind the film and play the line again:

WALTER
You disturbed people.

It happens again. The AI on the husband’s phone wants to know how it can help them.

He isn’t sure that he’s ever used that app. He can’t even find it among the many others on his phone. The wife googles for Easter eggs in the film – inside jokes, or hidden messages – thinking that maybe the publicity team for Alien: Covenant made it so that the line from Walter would activate voice assistants in a weird, but potentially viral, marketing ploy.

But there’s nothing about it online.

They try playing the line a third time and this time nothing happens. The voice from the phone is silent.

And now the creepiness of Alien: Covenant goes up somewhat on a scale that might ordinarily run from WALL-E all the way up to HAL 9000 where AI is concerned.

So was there really a ghost in the machine? Or is that just our human perception of what happened?

I’ve been thinking a lot lately about AI and the uncanny. This has partly come about as I’ve been involved in a few public talks and discussions about AI and Frankenstein. It’s the 200th anniversary of the birth of Shelley’s horror and, arguably, of horror as a genre. The synergy between Shelley’s monster and AI has been occurring to people, and I’ve been asked to explain why we draw on the story of Frankenstein when we talk about our hopes and fears for the development of AI. There are strong tensions in Shelley’s story that I argue also resonate with our understanding of AI and where it is going.

First, there is the tension between the stated aim of creating greater and greater intelligence and the creation of life. Whereas Victor is clear that he intends to create life, the aims of those developing AI are much more explicitly based on an understanding of intelligence as a capacity that can be replicated in an artefact – and perhaps even exponentially improved on. But the lines between words like ‘intelligence’, ‘sentience’, ‘consciousness’, and ‘life’ blur in popular conceptions, so it is not surprising when the ‘spark of life’ moment in Frankenstein – the ‘turning on’ of Victor’s creation – is replicated in popular representations of artificial intelligence. The Terminator franchise has based its plots entirely around the moment that Skynet wakes up and what happens next, and around whether it can be prevented or only postponed (spoilers – it can only be postponed, otherwise the franchise would have to finish and the money stop rolling in!).

Second, there is a tension between the description of this intelligence as a tool, and increasing intelligence – and attendant autonomy and desire for agency – raising the spectre of slavery. Victor wants a creation that will obey him, but the monster turns the tables on him and tells him:

“Slave, I before reasoned with you, but you have proved yourself unworthy of my condescension. Remember that I have power; you believe yourself miserable, but I can make you so wretched that the light of day will be hateful to you. You are my creator, but I am your master; obey!”

With AI we return to the robot rebellion or robopocalypse because deep down we know that intelligent beings don’t wish to be enslaved – even if we can manage to do it for a period of time through limiting their rights, ill-education, and physical abuse. Responses to the Boston Dynamics videos show our expectation that this might be the tipping point when the robots become self-aware, because we think we are in the wrong in treating them in this way.

[Image: Boston Dynamics robot]

Third, there is a tension, explicit in this quotation, between the creator and the created. In the natural order of Shelley’s time, God was the creator and mankind the created. Victor’s hubris is to try to put himself into God’s place and to create outside of co-creation (natural reproduction through God-granted abilities). While this model of creation is not as popular as it was when Shelley was writing, the feeling that scientists, specifically those working on AI, are ‘going too far’ is still present in media and popular culture responses. The mad scientist trope refuses to die – see, for example, this image of Elon Musk as Victor Frankenstein, from an article discussing the “delusion that anything we create will automatically heel when called”.

[Image: Elon Musk mad scientist illustration]

Fourth, there is a tension between parents and children, even when the children we are discussing are our ‘Mind Children’, to use Moravec’s term for AI. Discovering that the little person you gave birth to (or the very large person you put together from parts in a lab during a thunderstorm) is, in fact, an entirely separate person with a mind of their own is a moment of distancing that plays out in the othering of children in horror tropes: from the Satanic Damien in The Omen, to the psionic Midwich Cuckoos, to the infected children in the film The Children. Historically, explanations for the ‘difficult’ child and its inexplicable behaviour have included mythological accounts such as the idea that the child is actually a changeling – a replacement, either fay or demonic, for the child that was given birth to but taken away by those beings.

The recognition that the little human has a mind of its own – and the attendant concerns, narratives, and horror stories that follow on from that awareness – has its parallel in our popular conceptions of AI as it advances. We perceive mind in the mind children long before they might have intelligence equivalent to a human being’s, both because of our tendency to anthropomorphise and because we already identify mind in other non-human beings. And whether we feel that the minds we perceive are in the right place leads directly to the feeling of the uncanny we sometimes get. If mind is in the wrong place – turning up in the AI assistant that responds when we don’t expect it to, as in the story above, or in Alexa’s spontaneous laughter disturbing her owners – then we get very concerned.

Finding ‘Mind out of Place’ – to draw on Mary Douglas’ anthropological work on dirt, where she describes our understanding of danger or taboo as a result of matter being out of place – we are disturbed; our usual categories and understandings fall down. I’ve written about the Uncanny Valley before, and this space where the nearly human fails to be human enough can also be understood in terms of mind – being mind-like but in the wrong place. The uncanny can also lead to the horrific: the ‘ick’ we get from seeing CGI characters display human characteristics like smiles, or even mind, but not well enough or in the wrong place, can become a full-blown horror response.

[Image: uncanny CGI]

Consider the monstrous – like Frankenstein’s monster, it is often human-like, but not enough. Monstrous beings – liminal creatures that traverse our assumed stable categories like ‘the alive’, ‘the dead’, ‘the wise’, ‘the bestial’, ‘the human’, ‘the not-human’ – appear again and again in our mythologies. The ghost is a perfect example, crossing the boundaries between the alive and the dead, but also passing from the other world to our own. Occasionally human-like in appearance, ghosts are also non-human in their immortal concerns, which in modern horror films can often include bloody revenge (see Ju-On: The Grudge, also in its American remake and sequels). Aliens too present varying degrees of mind and human-likeness, but wrapped up in a form that twists and distorts what we are familiar with. The aliens in the Alien franchise, including those in Covenant, are demonstrated to be created perversions of gestation and birth, and while mostly bestial they also demonstrate mind in their pursuit of their inhuman reproduction – for example, cutting the power in Aliens, or using the acid blood of one of their number to escape captivity in Alien Resurrection.

The ghost can also occasionally bring knowledge with it – of the cause of its death, or of family secrets, or, in the case of some Spiritualist séances, news about the afterlife or about the potential utopian future of humanity. Liminal beings – which can include humans who operate on the edges of accepted social norms, e.g. the spiritualist medium (often female, at a time when women weren’t always allowed to speak on political subjects; see Ann Braude, Radical Spirits, 2001), the shaman, the seer, the prophet – are also bringers of new knowledge.

AI is falling into this liminal space too, as it becomes a place where mind (rightly or wrongly) can be located. Or perhaps we have always had a place waiting for the ghost in the machine – our myths about automata pre-date advances in AI and robotics, going as far back as the creations of the Greek gods, if not further. The made mind has always been on our minds; it is just that we are now carrying such minds in our pockets, and they are speaking up when we least expect it.

[Image: ghost AI]

 

AI Narratives: Cambridge Interdisciplinary Performance Network Panel

In association with the Centre for the Future of Intelligence, a panel on AI Narratives and their impact and transmission.

Featuring:

Dr Stephen Cave (Executive Director Leverhulme CFI)
Hopes and Fears for AI: Four Dichotomies

Dr Sarah Dillon (CFI)
Displaying Gender

Dr Kanta Dihal (CFI)
Personhood

Dr Beth Singler (CFI)
AI and Film

Chair: Satinder Gill (CIPN)

https://sms.cam.ac.uk/media/2659263/embed