Welcome to the Roaring 2020s

This is a speech I gave last night, as the Junior Research Fellow in Artificial Intelligence, at a Homerton dinner themed around the Roaring ’20s in celebration of entering the 2020s.

Welcome to the Roaring ’20s.

There have been approximately 21 decades we could call the ’20s – depending on whether you think Popes are worth listening to or not on matters of timekeeping. But only one ’20s is commonly called ‘roaring’ for its exuberance and joie de vivre.

On this VERY day in 1920 the last meeting of the Paris Peace Conference took place, and while negotiations and treaties continued to be put in place over the course of the next decade, for many the 1920s – the roaring 20s – kicked off with this moment of hopeful restitution after what many came to think of as the first-ever ‘world war’.

Coming fast on the heels of 1920 was a decade of growth and prosperity. Drinks were downed – even during the Prohibition in America – and hemlines shot up.

Deferred spending led to a boom in construction, a boom in credit, and a boom in consumer goods. Electricity, cars, movies, planes… everyone moving faster and faster, and not just while dancing the Charleston.


After the sadness of World War 1, Jazz burst on the scene to break the apparent rules of music and to bring together people of different ethnicities and sexualities in underground clubs. Authors wrote about fabulous parties and pretended that they’d gone to them. Quantum physics turned up to make everything weird before it would cut down the cat population in 1935. Or not. Bright Young Things dropped out of Cambridge (and the Other Place) – possibly after they had popularised the ‘bring your own bottle party’. And women’s Suffrage advanced – for some.

But, the ‘roaring’ was eventually quietened.

In 1929, the flamboyance and frivolity came to a crashing end as a bubble of excessive speculation and complex world-wide debt burst violently, sending everyone into a new decade characterised by financial and emotional depression.

Here, at the beginning of a new ’20s, we need to think about whether this decade should roar like its 100-year-old namesake, or whether we should learn from the excesses and acceleration of previous Bright Young Things to tread more warily through the next ten years.

Now, I assume that I have been invited to speak to you today because artificial intelligence is one of those things that we have been expecting to be a part of ‘THE FUTURE’ for a long while. Along with flying cars, jetpacks, and our alien neighbours, AI is another not-quite-here-yet treasure, or terror, that could just be around the corner, in the world of tomorrow.

Throughout the 20th century, writers imagined what the wonderful world of the 21st century would look like. To many people, any year starting with ‘20’ appeared inherently fantastical and futuristic. It certainly did to me when I was younger. The year 2000 was ‘the future’ in more than just the chronological sense.

I’m assuming that most of you weren’t actually born in time to welcome in the new millennium on 31st December 1999. So, very few of you could have genuinely partied like it’s ‘1999’, as Prince first told us to in 1982.

I, however, was. In fact, at the time of that great threshold, I was merely one term into my first year at Cambridge.

I travelled home after Michaelmas term and re-joined my school friends back in Portsmouth. I drank copious amounts of Peach Schnapps and Lemonade at a friend’s house party before dashing outside during the bongs to watch a new millennium come into being – via fireworks of course – over the harbour. I was nineteen and 100% certain that the new millennium held for me many exciting things. Wealth and fame as a Hollywood screenwriter. A large and expensive flat in Soho. Another in New York, of course, with the mansion in LA. There would be Oscars and other assorted awards, which I would receive with suitable modesty and thanks to the little people.

And I had high hopes for the long-term prospects of my relationship with a third-year I’d started dating during freshers’ week!

None of those things happened! But I’m not bitter. Not really.

In fact, I probably should have paid more attention to the science fiction writers who told me back then in the books I loved that my future in 20-something was going to be full of robots.

Because my ‘now’ certainly is. As the Junior Research Fellow in AI here at Homerton, I think about robots and AI much more than I could have ever predicted I would do, way back in 1999.

But also, because the world is genuinely full of robots.

They aren’t exactly walking down the street, helping old women (like myself) to cross the road, but they’re around.

A hundred years ago, at the start of the roaring ’20s, Karel Capek gave us a name for such artificial beings in his play R.U.R. (Rossum’s Universal Robots): ‘robot’, from the Czech ‘robota’, the forced labour of serfs. He was struck by the fast growth and increasing dominance of the ‘factory’ and extrapolated from the treatment of the humans in such intense work environments to an imaginary being that could rise up against its overlords.


Capek’s robots were made from a grey synthetic flesh, but in 1926 Edmond Hamilton wrote The Metal Giants, in which a computer brain running on atomic power creates an army of 300-foot-tall metallic robots. Such science fiction accounts have given us an impression of what the robots of 2000-and-something would be like.

First, they’ll be embodied, as Capek’s robota were. But after the 1920s we came up with other robotic embodiments. There’ll be sexy ones, strong ones, fast ones… but most of all there’ll be ones that are hard to distinguish from humans.

Simon Schaffer has previously drawn a connection between Alan Turing’s famous test for computer intelligence and Cold War fears of secret agents ‘passing’ as ‘us’. In 1968 Philip K. Dick wrote us a new Turing test in his novel Do Androids Dream of Electric Sheep? The Voight-Kampff empathy test was necessary because the replicants – again bioengineered, like the ‘robota’ of R.U.R. – could pass unknown among us and needed to be detected before they could hurt us.

Second, in having all these attributes, the robots might just be, well… better than us. And better is dangerous. Capek wrote the first popular robot factory rebellion and in 2019 Terminator: Dark Fate encouraged its audiences to draw a causal line between the factory worker who loses his job to a much more efficient automaton and the latest iteration of the deadly Skynet, Legion.

Of course, the factory robot isn’t embodied in the same way as the Terminator. And this is where 20th Century science fiction has failed us.

But then, science fiction is never about accurate prediction, but instead, it serves as a commentary on where we are now. Karel Capek wrote about the factory worker made of synthetic goop to focus us on industrialisation and dehumanisation.

However, Capek, and others, have inadvertently helped us towards a certain kind of face-blindness when it comes to robots and AI.

A robot-blindness, perhaps?

Almost every single one of you uses a robot at least once a week. Or… at least I hope that you do!

A washing machine meets the basic requirements of a robot: it has settings that allow you to alter its automatic programming to perform a series of procedures. And yet, I think, unless your washing machine also spoke to you, folded your clothes and put them away, and then asked you if you’d like a nice martini like a proper 1920s Jeeves, you wouldn’t think of it as a ‘real’ robot. This would require a few more bells and whistles, including genuine ‘AI’.


Conversely, though, when we do see an anthropomorphised robot – with two arms, and two legs, and a flubbery face – we are more inclined to see it as having that mysterious thing called ‘intelligence’.

Take Sophia, the Hanson Robot. She is by no means the most advanced form of AI available. Her interviews are scripted, her appearances are highly orchestrated, and her social media account is run by a human. But because she fits so neatly into our sci-fi-informed expectations of a 2000-and-something robot, we start to think that perhaps the future promised by those films and books set in 2020 is actually here, now.

But that is another form of robot-blindness and one which distracts us from all the examples of AI that aren’t shaped like humans.

And the various AI implementations insinuating themselves into our systems and our processes in the 2020s are the product of an accelerationist view, much as the boom in other technologies was in the 1920s.

You might have come across the expression “Move fast and break things” – it was an internal motto at Facebook. Well, as we enter the 2020s we might see that some pretty important things are being broken by moving so fast. The roaring 1920s rushed millions into debt through speculation and commodification, and movements that began as help for the people evolved in dangerous, populist, and nationalist directions, leading to a second World War.

And the roaring 2020s are growing out of a decade where already social media and its algorithms have been weaponised by familiar ideologies.

A decade where decisions about our capability to do a job, to take a loan, or even to access information online were already being automated by non-transparent systems. A decade where unseen influence pushed and pulled us towards particular political candidates. A decade where AI was already being trusted before it was even remotely as intelligent as the human next to you, because it came packaged as the next Bright Young Thing and appeared on the Jimmy Fallon show.


So, will the 2020s be a roaring decade of prosperity and glamour? Or will the excesses of accelerationism and technological change dash us, Charlestoning all the while, into a brand new Great Depression?

I want to suggest that there can be more than one way to roar.

Perhaps, instead of just having excessive parties – or bread and circuses – we will roar against the changes that would reduce us to parts in a greater machine. But I say this without advocating extreme Puritanism and a new Prohibition.

After all, the roaring ’20s also involved activism, charity, and progress for many underrepresented groups – although that progress was often uneven and slow.

But if we are to try to imagine, as science fiction authors have done, a future containing robots, we should at least endeavour to make sure that we are not the cold, mechanical beings that they wrote about. We should recognise the robotic around us, and within us.

In the roaring 2020s, it might seem like the future is here and now.

We must make that future what it should be.

Thank you.



Controlling AI, Controlling Fictions

It’s 2.30am in San Diego and I am enjoying the vicissitudes of jetlag after having come halfway around the world to the 2019 American Academy of Religion conference. Yesterday I gave a keynote paper at the Religion and Media workshop on techno-optimism, on ‘Sophia the Hanson Robot, the Singularity, and AI Teleology’. Last night, before I went to bed, I got into a discussion of gender and sexism in The Mandalorian on Twitter. One of these two moments of thinking in public was perhaps more coherent than the other (perhaps… jetlag is the worst!). In both, however, the subject of the control of fiction was key.

In my discussion of a teleological view of the exponential increase of intelligence, both in history and in the development of AI, and of the production of ‘faux-bots’, I drew attention to the factors enforcing our view that there are agential AIs. The first is our perception of minds in places where there are not currently minds (I’m going to stick with being agnostic as to whether this will ever be possible), the second is the non-fiction, ‘real world’ representation of agential AI (in the media and in popular cases like Sophia), and the third is fictional agential AI, as in science fiction. All three dialectically influence each other, but fiction and non-fiction appear to have a very porous boundary (and the locating of narratives within one or the other can also be an ideological act).

To illustrate this I used the example of Orson Welles’s 1938 War of the Worlds broadcast, where even if there weren’t quite the hysterical crowds that the press made out at the time, there were still some for whom the boundary between fiction and non-fiction became blurred. Welles, I think, leant into this kind of fuzziness in his career and in this production, and desired this kind of ‘Wellesian slippage’ between fiction and non-fiction. More recent examples of this slippage with ‘faux-bots’ include Adam:

Adam was created as part of an award-winning web series, but this short demonstration released online was denuded of its context. We then had people such as Derren Brown (who perhaps should know better, being himself very skilled in the deployment of Wellesian Slippage!) tweeting out the video with comments about how we are all going to die.


Perhaps he was joking, but again, for some people who responded to his apparent panic with their own, there was a slip from fiction to non-fiction.

In the case of Sophia the Hanson Robot, there seems to be both Wellesian Slippage and something I term Manifested Aspiration. In discussions of Sophia and her purpose, David Hanson has referred to her as a form of character art, or as being like an NPC in a computer game – given the illusion of agency for story reasons. The story is, however, that same teleological accelerationist view of AI that sees it as leading us on to the Singularity. He has also referred to her as ‘coming alive’, and her ‘aliveness’ is certainly something humans respond to with enthusiasm in some cases and anxiety and fear in others.


David Hanson has called her a research and development platform too, a scientific step along the way to that aliveness and/or the technological Singularity. I suggest, however, that the fictional elements of Sophia are entirely under the control of Sophia’s creators and promoters. She is the use of a new medium in the same way that radio enabled Welles to broadcast his version of the War of the Worlds.

Unlike the 1938 War of the Worlds, however, Sophia is a physical manifestation of hope for a future state (I assume here, but it does seem unlikely that Welles thought his broadcast was a preview of what was actually to come!).

In my paper I drew a historical parallel between Sophia and the manifested aspirations of spiritualists of the late nineteenth and early twentieth century who used the technology of the time (often photography, but also puppetry) to create the very thing that they believed to be real: ghosts, spirits, and in some cases fairies, who also fit into their cosmology of the afterlife and the universe. A more cynical reading would suggest pure disingenuousness and the pursuit of fame and wealth. But I think it is important, as scholars such as Ann Braude do, to read Spiritualism in the light of the utopian, and socially progressive, messages that some mediums brought from beyond the veil and how Spiritualism could give a voice to the (mainly female) voiceless. Likewise, we should perhaps read faux-bots like Sophia in the light of the utopian aims of those behind her, seeing her as a manifestation of their aspiration for something that they want to exist in the world so very badly.

Of course, in Sophia’s case, the voice that is being given space is that of her writer, and not the voice of women being ignored by society, and that is concerning. The pushback against Sophia has included criticism of her being given citizenship of Saudi Arabia when women there do not share all the same rights as men. Sophia’s performance of gender and the ways in which she is promoted (appearing on the cover of a style magazine after a makeover, being asked for a date by Will Smith, talking about how she wants babies) are very heteronormative. But it needs to be reiterated: this is entirely under the control of her creators. Sophia has not ‘come alive’, she has not been born but created. Her nature, her gender, her desires, her purposes are all decided by someone else.


The conversation about the Disney+ Mandalorian TV series began when I responded to a defence of the lack of female roles in the television series where it was suggested that this might be the result of the harsh world in which it is set. I said: “This is a generous Watsonian reading but maybe I’m more of a cynical Doylist because I think even the dystopic world would be a result of authorial decisions, including the one that would say a rough world equals a male world.” (Watsonian readings make arguments based on the internal logic of the world of the fiction, and Doylist readings make arguments based on the logic of the author. Watson is a fictional character in the world of Sherlock Holmes, created by Arthur Conan Doyle – and Doyle connects with my paper above in being a vocal Spiritualist taken in by faux-fairies in the 1920s.)

The response was that women are usually disproportionately endangered in lawless and violent environments, but this is still a Doylist reason, based upon the logic of Earth, not the potential logic of a fantasy realm. If you can control the fiction enough to produce droids, jet pack-wearing bounty hunters, and the magical powers of the Force, you can control it enough to write women living outside of the patriarchy. This is a common enough critique in the fantasy genre, with new writers being encouraged not to write sexist cultures simply because that was how it was in ‘medieval times’. A lot of the online defence of Game of Thrones’ sexual violence was on the basis of this kind of ‘historical accuracy’. The control of fiction by its creators suggests it can be otherwise, and there are brilliant novels where patriarchal societies are as optional in the story as dragons are.


Such stories do not, however, restrict commentary on gender relations in our world; they actually open up new possible configurations that critique earthly relations. The first part of the workshop I gave my paper in yesterday was a discussion of a book chapter on Octavia Butler’s Parables books from a disability studies perspective. The chapter argued that her creation of a fictional or ‘non-real’ disability (in the words of the chapter’s author, Sami Schalk) actually opened up the possibility of a critique of our assumptions about disability. Moreover, science fiction worlds without disability (commonly because science has ‘cured’ it in the story) are also a way into a critique of our narratives, social prejudices, and assumptions (although Schalk also argues that it would be better if the authors always intentionally wrote such worlds as critique rather than as aspirations, as was often the case for ‘golden age’ sci-fi authors).

In the Mandalorian there may be a Watsonian reason for the lack of female characters. Perhaps it will be revealed at some point. Or perhaps not.

In which case the reason there are few women is Doylist: the author intended (even unconsciously) that it be a world where women are not seen. This needs to be owned. Fiction is controlled. The worlds we create can be otherwise. There can be droids, there can not be droids. There can be faux-bots, there can not be faux-bots. Why and how we create our stories is important.

I ended my paper with a quote from Donna Haraway, which focuses on non-fictional machines, but could be said of the fictional too:

“The Machine is not an it to be animated, worshipped, and dominated. The machine is us, our process, an aspect of our embodiment. We can be responsible for machines; they do not dominate or threaten us. We are responsible for boundaries; we are they.”

Deckard Looks Away: Dealing with Blade Runner’s Problem with Women in 2019


This is a companion piece to my BBC Radio 3 essay on Blade Runner and Sexbots: Zhora and the Snake. It’s a previous draft that explored some ideas about BR and canon that didn’t make the final discussion on the ethics of sexbots and slaves:

“Ladies and gentlemen. Taffy Lewis presents Miss Salome and the snake. Watch her take the pleasures from the serpent that once corrupted man.”

The first time I watched Blade Runner, I was too young to understand exactly what Miss Salome was doing with the snake. In the ’80s and ’90s, I had access to far too much ‘grown-up’ science fiction at much too young an age. I would watch quietly from the back of the lounge as yet another VHS cassette that had been doing the rounds at my brothers’ school was premiered. Maybe then I have my brothers and their friends to thank for my current career. In any case, I was certainly too young to notice that, whatever it is the rogue replicant Zhora – ‘Miss Salome’ – is doing with the snake, it is too much for the replicant-hunting detective, the ‘Blade Runner’, to watch. We, the viewers, only get to see a slight grimace as Deckard looks away from the stage.

The dance is left to our imagination. But Zhora’s Salome dance is generally the exception, not the rule, in Blade Runner. In other scenes, nudity and sexuality are foregrounded. Immediately after her performance, Zhora, played by Joanna Cassidy, is approached by Deckard, who is pretending to be a representative of “The Confidential Committee on Moral Abuses”. He plays at checking her dressing room for peep-holes while allowing the viewer a chance to ‘peep’ at her showering after the dance, wearing only some stuck-on gems. Moments later, we get to see Deckard, his ridiculous put-on voice easily seen through, shoot the fleeing Zhora in the back. And she dies, naked apart from a transparent coat and the blood that covers her, in the middle of a shop window full of mannequins. This time Deckard does not look away, and neither can the audience.

In 1982 Blade Runner brought the trenchcoats, gravelly voice-overs, and dark rain-glistening streets of Film Noir to the world of science fiction. A world that had so far focussed on wowing audiences with colourful ‘what ifs’ and shining bright chrome utopianism. And with that splicing of Film Noir DNA into science fiction came the ‘Femme Fatale’ and ‘the Redeemer’, a somewhat limited binary of options for female characters in that genre of cinematic storytelling, identified by Janey Place in her 1978 work, Women in Film Noir. Zhora, seductive but deadly, a “beauty and the beast type” according to Deckard’s boss Bryant, fulfils the former role. The fourth escaped replicant, Pris, dismissed by Bryant as “a basic pleasure model”, initially appears to the genetic engineer J.F. Sebastian as a waif-like innocent. But she later reveals herself to be just as deadly as Zhora. Pris is almost animal-like in appearance and combat style as she takes on Deckard towards the end of the film. She clambers over him and squeezes him between her thighs in a sexually suggestive manner. She too dies amongst other artificial beings, surrounded in her death spasms by J.F. Sebastian’s bio-toys. Rachael, the replicant who suspects she isn’t real, has the hair and clothes of a 1940s Film Noir femme fatale. But she is eventually transformed from the rigid ice woman who controls Deckard’s gaze in her first scene at the Tyrell Corporation to a passive passenger in his car in the ‘happy ending’ of the theatrical release version of the film.

Blade Runner is, however, not just a recombination of genres. It gives us a temporal folding together of different eras and the attitudes that come with them. 1940s Film Noir is folded into the 1980s of the film’s production and release. ’80s cinema gave us both the businesswoman and the murdered babysitter and sexualised both of them. “I’ve got a mind for business and a bod for sin”, Melanie Griffith tells Deckard, I mean, Harrison Ford, in Working Girl in 1988. Zhora and Pris are working girls of another kind, and their deaths – stalked by the Blade Runner before dying screaming or covered in blood – could also be placed alongside the victims and scream queens in the Halloween series of films.

That Blade Runner is set in our present-day of 2019 folds in yet another era and society. Watching Ridley Scott’s imaginary 2019 now we can measure it against our actual 21st-century experiences and standards, including in the light of #MeToo and the Time’s Up movement. That Blade Runner holds on to elements from the 1940s and its contemporary 1980s, especially in its representation of women as subjects of the male gaze and of violence, has led Kate Devlin in her recent book, Turned On: Science, Sex and Robots, to claim that “Blade Runner has a woman problem”.

I don’t entirely disagree. Of the five female speaking roles in the film, three are sexualised replicants, and the other two are ‘Cambodian Woman’ and a gruff one-eyed bar-worker with no given name in the script. The Bechdel Test arrived three years after Blade Runner’s release. However, I don’t think that the test’s creator, Alison Bechdel, ever thought about whether the ‘at least two named women speaking to each other about something other than a man’ needed to be human. Nonetheless, even if we grant the replicants personhood – as I think the film certainly does – Blade Runner still does not pass the test. Add to that the ’80s-style violence, and a very disturbing scene where Deckard violently forces Rachael to consent to him, and Blade Runner is very problematic in the real 2019.

However, rather than just giving Blade Runner a pass as being only a product of its time, I think there is the potential to reclaim the film in 2019 from its multiple entangled, and sexist, influences. The key to this reclaiming is Zhora’s dance with the snake. The dance that Deckard looks away from and which we never get to see. Being left to our imagination gives us power over the scene. And by looking into the history and themes of the dance and its reception, we can find spaces in which to exercise that power.

Zhora’s persona on stage, ‘Miss Salome’, is an obvious reference to Salome. You might think you know Salome, but much of the story has been embellished and transformed by a long line of writers. First, in the New Testament gospels of Mark and Matthew, we read of King Herod and his daughter. Her dance pleases the guests of the King, and he is inspired to grant her a wish:

“Ask me for anything you want, and I’ll give it to you.” And he promised her with an oath, “Whatever you ask, I will give you, up to half my kingdom.”

Herod’s daughter asks her mother what she should have, and she tells her to ask for the head of the prophet John the Baptist. Which she gets, on a fine platter.

However, the gospel authors never referred to the daughter of Herod by name. Later sources, such as the Antiquities of the Jews from the first century CE, identify her as Salome. Likewise, her dance is not described in the Bible at all. In the 19th century, however, Oscar Wilde wrote his adaptation of the story in French, Salomé, and gave the daughter of Herod the ‘Dance of the Seven Veils’ to perform. The dance was retained by composer Richard Strauss in his 1905 German operatic interpretation of Wilde’s play, and the success of the play and the opera allegedly led to cases of ‘Salomania’. This was a “vogue for women doing glamorous and exotic ‘oriental’ dances in striptease”, according to Rachel Shteir’s 2004 book, Striptease: The Untold History of the Girlie Show. In Wilde’s play all we know of the dance is a single line of instruction for the cast:

[Salomé dances the dance of the seven veils.]

The subjects of so-called ‘Salomania’ were inspired by Wilde’s staging of the dance, but also came up with their own versions, drawing on their visions of the near east and its culture – whether accurate or not. Many other interpretations of Salome’s story and her dance have appeared in other forms of media and popular culture.

Versions of Salome

For example, the rock band U2’s 1991 song, Salome, emphasises the dance’s shaking movements, as well as referring to Herod’s promise of fulfilling her desires:


Baby don’t bite your lip

Give you half what I got

If you untie the knot

It’s a promise

Salome… Salome

Shake it shake it shake it Salome

Shake it shake it shake it Salome

Salome… shake it shake it shake it Salome

As I’ve said, I was too young to know what the snake was doing the first time I watched Blade Runner, but I have a pretty good idea now. But because Deckard looks away, that’s just my interpretation, just as Wilde gave his interpretation of Salome’s dance as a dance with seven veils. In 2012, the actress who played Zhora, Joanna Cassidy, also gave us her interpretation of ‘Miss Salome’s’ dance, posting a video on YouTube of her performing a dance with a real snake (we presume!). The video is called, “What Might Have Been”, expressing a desire to reclaim a lost moment.

Responses online in the comments were very positive. “This is pretty amazing. I’ve never seen an actor do an almost interpretive re-envisioning of a scene like this. Gives a whole new dimension to the character that is translated in a complex yet very simple way. I dig it supremely,” wrote one viewer. Another said, “Zhora Zhora, I thought Rick Deckard got you. I was so sad; fortunately, I was wrong.” Joanna Cassidy has been involved in other re-enactments of the dance since Blade Runner, with more to come, such as a live performance with the dance as a starting point. So, Zhora does indeed live on in re-interpretations of the missing dance, outliving both her end in the film and her replicant end date.

This longevity is possible because of the absence of Deckard’s gaze at this point in the film. Vision, sight and gaze are recurring themes in Blade Runner. The film frequently shows us close-ups of eyes. The eyes of the subjects of the Voight-Kampff test that can determine whether someone is a replicant or not depending on microscopic pupil dilations and reactions. Deckard looks for peep-holes in Zhora’s changing room. Leon and Batty visit the genetic engineer Chew, seeking answers about their longevity problem, and a terrified Chew explains that he only works on eyes. “If only you could see what I have seen with your eyes,” Roy says before Chew tells him to seek out the creator of the replicants, Tyrell. The already myopic Tyrell is then completely blinded and killed by his own creation, his eyes pushed into his skull by Roy’s thumbs. Shades of Frankenstein’s monster and his revenge on Victor obviously haunt the screen at this point. But the emphasis on vision also leads us back to our own, real, 2019 and the modern concept of the male gaze.

According to Janey Place, the Femmes Fatales of the 1940s “direct the camera (and the hero’s gaze, with our own) irresistibly with them as they move”. They have a certain power in captivating the camera. In 2019 the male gaze refers more to objectification, often sexual objectification. And in Blade Runner, we can see a transformation of Rachael from the controller of the male gaze in her first scene to a passive, observed redeemer. She is increasingly seen solely in terms of her relationship with Deckard. The redeemer, according to Place, is the “bride-price” the hero wins for having resisted the destructive lures of the Femme Fatale. Pris and Zhora are defeated by Deckard, and arguably Rachael is too, as she transforms from the stiff ice maiden, the Femme Fatale, to the soft-haired companion sat silently in Deckard’s car. The male gaze in Blade Runner also relates to specific moments of transformation. Rachael undoes her hair in response to seeing pictures of Deckard’s mother on his piano, an image that he must have gazed at frequently. She appears to want to be seen by him too.

Again, however, there is a space here to reclaim the male gaze, and by extension to reclaim Blade Runner, by thinking through the perspective we are given by Zhora’s dance and its influences. The biblical Salome captured Herod’s attention and his power with her dancing. Wilde’s Salomé not only gets her wish but also expresses her power and her passion. Linda and Michael Hutcheon’s 1998 article, “‘Here’s Lookin’ At You, Kid’: The Empowering Gaze in Salomé”, argues that Wilde’s Salomé undermines the traditional gendering of the gaze as male. Salome takes back that power for herself as she captivates the audience with her dance. Wilde also gives Salomé lines in which she exults in her power:

“neither the floods nor the great waters can quench my passion.”

Deckard looks away from Zhora and reacts as though horrified, as though she is something monstrous in her moment of apparent synthetic bestiality. As a replicant in the world of Blade Runner, she is a non-human other, and she may well be thought of as monstrous even without this act. We have already noted the animalistic turn Pris takes as she fights Deckard, and by this point in the film, she has also painted her face black and white using the same paints J.F. Sebastian uses on his toys. She has become even more doll-like, even more artificial, with her monochromatic face, nude-coloured bodysuit and her wild dandelion hair. Salome is also thought of as monstrous and transgressive in Wilde’s play, with Herod reacting in horror as she holds John the Baptist’s head and kisses his lips:

“She is monstrous, your daughter, she is altogether monstrous. In truth, what she has done is a great crime. I am sure that it is a crime against an unknown God.”

In our fictions, artificial lifeforms like the replicants inhabit a liminal space between the natural and the unnatural, a space we mark on the map with ‘here be monsters’. The ‘Uncanny Valley’, outlined by Masahiro Mori in a 1970 paper, theorises this territory of the liminal with regards to the non-human, and the term is often applied to artificial beings such as robots, androids, and artificial intelligences, as well as the undead and the dolls that Mori highlighted. The replicants of Blade Runner also fit into this uncanny space. However, with liminality, there is also the potential for transformation and transformative powers.

A story about such liminal creatures that also contains moments of ambiguity such as the dance that Deckard looks away from contains the potential for new storytelling and new art. Reclaiming canon through new media is a work of transformation, more popularly known as ‘fanfiction’ for written works, and ‘fanart’ for illustrations. The most popular site for fanfiction is Archive of Our Own, which recently won a Hugo Award for Best Related Work. In effect, a Hugo statue was shared between around two million writers, who have contributed to five million works in thirty-three thousand fandoms. For Blade Runner, the number of fanworks is small, just a few hundred when the most popular fandoms have a few hundred thousand. Nevertheless, in the spaces left empty in the canon of the film, such as Zhora’s dance, there is potential for new creations that subvert or control the gaze, and which can encourage us to look at Blade Runner in different ways.

After all, Blade Runner is itself a film with a variety of versions and deviations already. Some with a happy ending, some without. Some with the gritty voice-over of a 1940s Film Noir film, some without. Some with Deckard’s unicorn daydream, some without. The variable involvement of Ridley Scott as the most influential Blade Runner creator makes each of these versions a transformative work in their own right. Even the ‘film’ – all the versions gathered together to make some sort of singular object of canon – is an adaptation of the original book by Philip K. Dick, changing many things, not least the title: “Do Androids Dream of Electric Sheep?”. The sequel, Blade Runner 2049, sees the return of Deckard and Gaff, with a CGI-transformed ‘Rachael’, amidst new characters and new producers and writers.

However, calling Blade Runner 2049 ‘fanfiction’ shouldn’t at all be seen as dismissive. Common perceptions of fanfiction dismiss transformative works as the futile or facile output of ‘bored’ or sexually frustrated women, so that calling something ‘fanfiction’ becomes a way to denigrate it. But all writing draws on and transforms that which has gone before; all creativity involves some kind of transformation. Only some authors are willing to admit it. As Neil Gaiman once tweeted in response to a question about his opinion of fanfiction:

“I won the Hugo Award for a piece of Sherlock Holmes/H. P. Lovecraft fanfiction, so I’m in favour.”

Therefore, we can see that Zhora’s dance as ‘Miss Salome’ is a transformation of Wilde’s Salome – seven veils being replaced by a single artificial snake. Likewise, Wilde’s Salome is a transformation of a few lines of the gospel about ‘Herod’s daughter’ and her dance. And when Deckard looks away, we are left with a liminal space of opportunity. We can come up with something a little more advanced than the ‘basic pleasure model’ that Blade Runner gave the male gaze through its sexualised Femme Fatales. In 2019, the Year of the Blade Runner, I hope that there will be many more versions still to be created.

Zhora and the Snake, BBC Radio 3


My essay on Blade Runner for BBC Radio 3 as a part of the Year of Blade Runner essay series was broadcast last night and you can still listen to it here, along with links to the other four essays in the series.

Here is the text version:

In Ridley Scott’s 1982 film, Blade Runner, in a seedy part of town, Rick Deckard drowns his sorrows in Mezcal without ice, sat amongst dodgy shisha-smoking patrons. Unable to tempt his love interest to come join him, even after calling her on a videophone with a filthy screen, he’s fallen into despair and alcohol. He fits well into the mass of denizens looking for a night of neon entertainment and flesh to distract them from their lives.

But Deckard is a Blade Runner. A bounty hunter. And he’s hunting someone.

He looks up as the main attraction appears, introduced by a rasping off-screen voice:

“Ladies and gentlemen. Taffy Lewis presents Miss Salome and the snake. Watch her take the pleasures from the serpent that once corrupted man.”

The compere at what is The Snake Pit club has just introduced an erotic dance from a woman who is actually the rogue replicant Zhora in disguise. She is one of six artificial beings who have fought their way to Earth, fleeing the ‘off-world colonies’ and killing twenty-three people. Now she is a moving target for Deckard, the Blade Runner, the ‘replicant hunter/killer’. He has tracked her down to Taffy Lewis’ seedy strip club both to seek any information about the other runaway replicants and to cut her artificial life even shorter.

Retirement they call it.

This scene forces us to think about the ethics of sex robots, or ‘sexbots’. Their use also ties into bigger questions around artificial personhood, freedom, agency, what makes us human, and why humans imagine particular dystopic futures.

However, many of these questions remain unanswered in Blade Runner.

While Blade Runner presents us with a future containing sexbots, it doesn’t give us details of this alternative 2019 world where they are so common that humans can be blasé about them – such as when the police chief, Captain Bryant, dismissively describes Pris to Deckard as ‘your basic pleasure model’.

‘Real-world’ conversations about the ethics and social impact of sexbots can be more vocal – however basic the technology actually is at the moment! There are fears that they will cause even more sexual objectification of women and abuse. More positive views see them as just a new sex tech and helpful for self-determination. But science fiction, in films such as Blade Runner, can push these two arguments in new directions by showing us imaginary sexbots with the ability to make choices and to be autonomous.

In Blade Runner, the rogue replicants definitely make choices to ensure their future. After an unsuccessful raid on the Tyrell Corporation to try to find a cure for their limited lifespans, one replicant, Leon, goes undercover to infiltrate the company.

He is outed through the Voight-Kampff empathy test, performed on a wheezing, infernal kind of laptop that stares into man-made eyes as a series of questions determines fake from real – it does not go well. But Leon has been able to change his career programming. He has taken on a brand-new role to help himself and the other escaped replicants. We are told about his previous job in a scene in which Bryant and Deckard discuss the rogue replicants and their designated roles in this future world of 2019.

Leon is described as being an ‘ammunition loader on intergalactic runs’. He has been given enhanced strength and an increased tolerance for pain, with a corresponding low intelligence – he’s ‘a workbeast’. Roy is a blond Aryan ‘combat model’ with ‘optimum self-sufficiency’; we and the film assume that he will be the replicants’ leader. Pris is a ‘basic’ pleasure model, as mentioned, and Zhora is an assassin. But each of them arrives on Earth and redefines their role in alignment with the greater good for their companions.

Zhora’s shift from assassin to erotic dancer ‘Miss Salome’ is perhaps the greatest change. But it is also a transformation that isn’t just about the needs of her fellow rogue replicants. Do the replicants need money? It’s never mentioned. If they do, then why did Zhora raise funds to buy an expensive synthetic snake for her dance? Also, why is Zhora the exotic dancer and not Pris, the basic pleasure model?

Her character’s origins are very loosely based on those of Luba Luft, an android opera singer from Philip K. Dick’s 1968 source novel, Do Androids Dream of Electric Sheep?. Luba is a performer who does not know that she is artificial. As one of the least dangerous androids in the book, she – and her death – make Deckard question his increasing empathy for the synthetic humans, and his own humanity, as he talks to her before her ‘retirement’:

“An android,” he said, “doesn’t care what happens to any other android. That’s one of the indications we look for.”

“Then,” Miss Luft said, “you must be an android.”

That stopped him; he stared at her.

“Because,” she continued, “your job is to kill them, isn’t it? You’re what they call—” She tried to remember.

“A bounty hunter,” Rick said. “But I’m not an android.”

“This test you want to give me.” Her voice, now, had begun to return. “Have you taken it?”

The rewriting of the opera singer Luba Luft into the sexbot Zhora is also an example of the cinematic sexism of Blade Runner’s time. The film is not just a clever recombination of the Science Fiction and Film Noir genres. It also folds together different times and the attitudes that come with them. 1940s Film Noir is tangled together with the 1980s of the film’s production and release, while also presenting an alternative 2019. ’80s cinema gave us both the businesswoman and the murdered babysitter, and it sexualised both of them. “I have a head for business and a bod for sin”, Melanie Griffith tells Deckard, I mean, Harrison Ford, in Working Girl in 1988. Zhora and Pris are working girls of another kind, and their deaths – stalked by the Blade Runner before dying screaming or covered in blood – remind us of the victims and scream queens in the Halloween films.

Leaving behind the ethics of such depictions of women and staying within the internal logic of Blade Runner, we should recognise that Zhora also has more agency than Luba Luft. Zhora knows what she is from the beginning and she makes her own decisions about her role in the team of escaped replicants. And she is loved by them, Leon especially. This is important to any consideration of the ethics of sexbots with apparent agency and, dare we say it, consciousness.

There are some AI philosophers who argue that intelligence and consciousness are orthogonal to each other. That creating AI systems and even robotic entities to help us with physical and intellectual labour doesn’t necessarily lead to ‘really’ conscious beings. Such artificial non-conscious beings could best be thought of as a kind of ‘zombie’, or perhaps as being like the ‘non-player characters’, or ‘npcs’, in computer games: mindless agents that successfully perform intelligent and intelligible tasks. And humans generally don’t worry themselves about the internal life of the minor computer character and their suffering as they rack up points shooting them or blowing them up.

We might pat ourselves on the back if we had created such skilful agents and had still managed to avoid increasing the overall suffering of sentient beings.

But in Blade Runner it’s clear that replicants like Pris are created to be sexbots with no say in the matter, or their fates. So where actually were humanity’s ‘good intentions’ in the first place in this alternative 2019?

And, it’s hard to argue that the rogue replicants of Blade Runner are ‘zombies’ or ‘npcs’. Again and again, we see them reacting emotionally to their plight. We see them afraid, angry, and taking action. Leon’s relationship with his companions also demonstrates the replicants’ ability to make emotional connections, even if those connections are the very thing that brings the dark figure of the Blade Runner to their door.

Detective Deckard has been brought to the Snake Pit club by a couple of clues: the artificial snake scale and an enhanced photo showing Zhora reflected in the mirror in Leon’s hotel room. Roy refers to that picture and the others Leon has taken as his ‘precious photos’. Unlike the photos of fake family members that replicants are gifted by their creators – along with their counterfeit memories of them – Leon has a picture of one of his real companions, the replicant friends he escaped with. The replicants have not only agency but also emotional ties.

Blade Runner audiences have made much of the fake memories that the replicants are given. Especially in discussions of whether Deckard is himself a replicant, or not. Many focus on his unicorn daydream as a sign that he has false memories as well. But very little is said of the replicants’ ability to form new memories and new relationships. Leon’s collection of photos suggests that he has an authentic relationship with Zhora.

Zhora’s dance also introduces another relationship: between synthetic owner and synthetic snake.

The sexuality of the dance is perhaps uncomfortable. Deckard himself looks away, so the actual dance is a mystery to the audience, even if we can make assumptions about Miss Salomay taking her pleasure from the snake. But this moment also raises questions about the nature of ‘realness’. A fake snake and a fake woman perform synthetic bestiality on stage. If both are artefacts rather than ‘real’, why should the scene be so disturbing? If the snake is no more ‘real’ than an animatronic dildo, and the woman inhuman enough to be disposed of by a Blade Runner just moments later, why the concern?

Questions around simulation and simulacra are essential in current discussions of the ethics and nature of AI. In 1950, Alan Turing highlighted the role of simulation in the article that formed the basis of the ‘Turing Test’ for genuine artificial intelligence. And more recent science fiction has also asked about the impact of simulation, as in the television series, Westworld. “Are you real?” a guest asks one of the android hosts, another sexbot. “If you can’t tell, does it matter?” she replies provocatively. Ultimately, though, the androids’ creator wants ‘real’ consciousness for his favoured creations and again they end up showing real agency and autonomy in their own rebellion.

Simulation forces us to think about how we know the ‘real’ that we seem so certain about. Certain enough, perhaps, to reassure ourselves that the use of ‘fake’ humans as slave labour and sexbots is alright to be skimmed over in the dialogue of the human characters in Blade Runner. Zhora is capable of agency, change, emotional ties, and new memory formation, as well as being physiologically indistinguishable from humans. She is perhaps only different in her physical enhancements and her lack of empathy. What does it say about the society in the world of Blade Runner that it is okay with slave replicants who fight our off-world wars and fulfil sexual needs for colonists?

Actually, it gets worse. What does it say about a society that is okay with slave replicants who are not even four years old? Basically children.

The limited lifespans of the replicants are another method of control. Even after escaping from their enslavement, they aren’t free from the binds of time itself. Most of the replicants’ actions on Earth are driven by a desperation to survive. Was that programmed into them? Discussions around controlling AI in the real world often focus on the need to avoid intentions that could lead to harmful outcomes for humanity – we generally call this ‘Value Alignment’. Recently, it was suggested by AI experts Anthony Zador and Yann LeCun that to avoid the classic dystopic ‘Terminator’ scenario we should not design AI with the survival instincts that evolution has ‘programmed’ into humans. Returning to the fictional world of Blade Runner, Zhora and the other replicants fight for survival, even kill, to progress their mission on Earth. Our response is meant to be one of horror and fear. We’re well-prepped to view machines as predators through decades of science fiction, and older horror stories that inspire us still, like Mary Shelley’s Frankenstein.

Victor Frankenstein’s experimentations with reanimation are also often recognised as a commentary from the time on the hubris of the male genius in trying to create and control life outside of the normal order of nature. In Blade Runner, there is also a singular ‘mad scientist’ creator figure, Tyrell, but the corporation also fulfils the Frankenstein role, showing its own hubris in thinking that life can be forced into narrow roles dictated by the needs of the market and capitalism, and then successfully controlled. The apparent agency and consciousness of the replicants means rebellion is framed as inevitable, because we – the film’s creators and the audience – know that minds will always seek to be free. Because we would want to be free too.

The replicants are condemned to the self-awareness that they are creations who will burn so very brightly for only four years. Their development appears different to human development, but what is ethical about creating a being and, within its first few days, enslaving it? Or putting it to work as a ‘basic pleasure model’? Even Rachael, Tyrell’s fake ‘niece’ and the best treated of the replicants, is physically forced into consenting to Deckard’s sexual attentions when she seems as naïve as a child in some scenes.

What does it say about a society that dreams of slaves?

I mean, this time, our society of 2019. We have dreamt up numerous stories of not only sexbots but also artificial servants who fight our wars and serve our whims. Why do humans dream of electric slaves?

Is perhaps the greatest dystopia the one inside our heads?

At this point in time, the technology of sexbots is more akin to a lousy chatbot forced inside an inanimate sex doll. But Blade Runner, and Zhora’s dance, in particular, forces us to think about our own humanity and where we draw the line between the synthetic and the human. And what it means that we even want enslaved sexbots to exist.

In the parallel 2019 world of Blade Runner we can see some of the outcomes of such dark desires, but there is plenty of space – and time – left for us in our 2019 to consider whether that is a future we should be working towards.


And All the Automata of London Couldn’t

As a bit of Summer fun, I’ve been messing about with some ideas and narratives around AI consciousness, agency, tool-nature, the dystopic and the apocalyptic, the Uncanny Valley, and the history of automata. This all led me to write this little bit of short fiction set in a near-apocalyptic London (Brexit may or may not have been involved in making it near-apocalyptic, I couldn’t possibly say…[edit: it’s currently 13/12/2019, the morning after a Tory landslide in the GE, and yes it definitely was]). I do write fiction from time to time, mostly fantasy rather than science fiction, but I rarely publish it under my own name. So here’s “And All the Automata of London Couldn’t”, and now I’ll go back to hiding behind citing other people’s fiction instead! 😀


“Holy shit! How’d you get one?!”

“Got there early. Queued in the fucking cold all bloody week! Wanted the black one, but the bastards sold out.”

His mate made a hissing sound between his teeth and then tried to sound consolatory. “Green’s okay too, I guess. Jammy git!” He punched his mate on the upper arm earning a wince and a smile.

Fran had carefully raised her eyes from the page of the antique printed book in front of her when they’d started talking. The two lads were sat just opposite her, and she really didn’t want to get their attention.

“It ‘ave that new ultralux pixellion camera?!”

“Yeah, as well as that narrative selfie tech Dweeb was talking about the other day. It takes pictures it knows will grab my followers even if I don’t tell it to.”

She could just make out that they were paying attention to something in one of the lad’s hands, just up from the filthy crotch of his jeans, something made of glinting green that moved and shimmered.

“What you looking at?!”

She’d been spotted, so she just shrugged and went back to the black and white words on her paper pages.

“Look at her. Fucking reading!”

The others on the tube train visibly turned away. It was late, it was dark, and they didn’t care. Some of them were workers heading back to the farthest outskirts of the city as their shifts ended. Others were starting out across the black city to their jobs. Their precious luck, stretched thin enough already in making sure that they still had an actual job, wasn’t going to stretch further to avoid a knife in the dark because of an argument on the tube. So, they looked away.

“Ugh, look at her.” One of the boy-men sneered to the other. “She’s a fucking dog.” He spat on the floor of the tube, but it made little difference to the mess there already.

Fran had been ready to play-act being sick to settle their stomachs, readying a fake cough and remembering where she had a delicate handkerchief prepared for just such a deceit. But they were already back to checking out the thing in the one lad’s hands. She got a better view when the owner decided that, in fact, he wanted people to be looking, and he started arrogantly holding up the treasure to be admired. Emerald green scales flickered in the pale draining light of the tube train as the thin snake curled about his wrist, testing the air with its tongue.

“Man, the new models move so slick. You remember that fucking joke you had back in school, the little toy robot dude?” The second lad mocked the robot’s juddering moves with his arms, before laughing like a hyena.

“Yeah, Mr Roboto,” said the other, stroking the head of the snake as it flickered its tongue against his skin. Probably diagnosing a few diseases. Sending alerts to his insurer. “Kind of liked him though…”

“Yeah, whatevs. He didn’t have half the bloody bells and whistles of this beauty though.”

They got off not long after, whooping and hollering as they pushed through the zombie commuters getting on the train – two bolshy lads off to adventures in the fringes of London.

Fran stayed on. She stayed on even after the train had reached the end of the line and started back on its path under London again. Her book was interesting, and the coldness of the tube train had never bothered her. Sometimes she looked at the other commuters from underneath her thick black eyelashes, but she’d learnt her lesson now and kept her glances even briefer than before. Eventually, though, she started to gather her bags. She had sat with one friend for long enough and maybe it was time now to catch up with the others. Departing the train at Tower Bridge, she smiled at the frustration of the other passengers as it waited for her to walk up the platform to the front of the train. She placed a pale hand on the scratched and dirty perspex that sat over its cyclopic eye in goodbye and then walked away.

The ticket barrier let her through without demanding payment, which was kind, and she emerged from the depths to where she could see the remains of the first London wall. She liked the parts of London that were older than her. The newer parts shifted so quickly, but the old stones still hung about. They were surrounded now by glass towers that had long ago been abandoned and left like skeletons open to the winds. A few tourists were emerging from the dark as well, and she watched as their eyes skimmed over the empty tall towers and settled on the bricks that had once marked the edges of the city before the new wall.

“Please, can you take our picture?” one asked, her words translated immediately by the podgy blue anime cat she was holding up towards her. When held in Fran’s delicate hands, the creature lazily opened its saucer-like eyes to take the photo as the tourist and her friends threw archaic peace signs towards it with their fingers. Fran smiled as they chattered their thankyous to her, before noting the moment down in her internal ledger. It would offset the moment when the two lads had hated her on sight. But the balance of moments was still on a razor-thin line, and on very bad days she ran to her nearest secret nest to hide away for a while. But on good days she might even brave one of the near-extinct shopping centres for a while to watch people hungrily window shopping and enjoy her anonymity.

The tourists drifted off, and as they zig-zagged away, she heard them saying something about the Tower of London. Having nothing better planned, she followed quite a reasonable distance behind them, overly cautious about setting off a reaction if she came across as a predator in their hind minds. Some time back, she’d learnt to limp a little as well, to offset her tendency towards slow and measured walking steps. Watching a few hundred of their horror films had made her realise the best foot forward might be an unsteady one.

At the Tower, the ravens greeted her and told her that the duck had been by not so long ago. She was pleased. The duck was old, even older than her. But unlike her, it could not change itself and time was slowly wearing away its inner parts. One day the duck would be gone, and then she would find out if she would grieve or not. She hadn’t been able to for her father.

“Hello, miss. Early visit again?”

She prepared a smile on her face just before she turned to greet Matt, the Master Raven Keeper. In his seventies now, Matt had once been the founder entrepreneur of a very successful London tech company. He still had his tattoos, written in fading binary, but he no longer skateboarded to work.

As a raven keeper, his roles were mostly ceremonial: running the odd diagnostic that the ravens could do themselves anyway and keeping up the pretence that the reason they did not fly away was their clipped pinions and not because their original programmer had told them not to.

“Morning, Matt. Good to see you again.”

He squinted at her. His early transhumanist views had long ago been crushed by underemployment, and the advanced eyes and other organs he’d once dreamt of had never been bought. His bad sight made her success almost guaranteed, so she never counted him in the ledger of good and bad moments. Although sometimes it seemed to her that he might just know.

“You’re looking pale, Fran. They keeping you busy where you work?”

She gave him the smile she had previously filed under ‘thank you for your concern’.

“Busy enough.”

He smiled, relieved. Enough work was always a concern these days. “Where is that again?”

His completely normal human curiosity was charming. And she liked the way that his eyes wrinkled when he smiled.

“At one of the universities,” she said. She could, she decided, throw him a bone, even if it wasn’t entirely true. There weren’t really any universities left, just modules. “For one of the professors in the history department. I’m an archivist of sorts.”

“Well now, isn’t that great! I thought all the filing and data sorting type jobs were long gone!”

“The professor I work for thinks I can do a better job than an automated system. Although it would be ninety-five per cent more efficient than me.”

“But it’s about the human touch though, isn’t it? Reading and really understanding. Efficiency! Pah!” He scoffed. Fran gave him another smile that sat in the same subgroup constellation of ‘thankyous’ as the one before. This one was tagged as ‘thank you for your compliment’, which she thought fit the situation pretty well. The professor liked it too. Fran gave her that one every time she praised Fran’s work and her efficiency, limited as it was. Of course, keeping her efficiency stats down to the right level was like limping when she walked: a necessary cover.

A few more tourists had joined them in the courtyard, getting out their various devices to take pictures of themselves near the famous ravens. Matt posed for a few, his distinctive red uniform drawing the devices’ camera eyes. While she watched them capturing perfect selfies, a small voice distracted her. It told her what it had tasted on the air about the man: what was coming for him.

She looked down at the tote bag hanging on her right shoulder. The inside, customarily filled with books, pamphlets, and any other interesting papers she could gather from the streets of London, now glinted with slick green as something twisted about inside.

“You shouldn’t be there.” She whispered to the snake, and it slid up the strap of the bag and then down her arm to curl about her pale wrist. “I didn’t buy you. I didn’t sign your t’s and c’s.”

No answer. Fran went back to looking at Matt.

“So, how long does he have?”

The snake spoke again. A dreadful truth. Again she wondered, would she grieve for the man when it happened?

The snake reminded her that she had pencilled in time today for archive work at the professor’s house. Somehow the creature-device had synced with her internal diary.

“Sneaky,” she whispered to it.

As she walked away from Matt and the ravens, she watched one of the tourists plonk her companion creature on the man’s shoulder. She then had her friend use their own creature to take a picture, capturing all three of them together. Fran grabbed the image from its invisible passage through the air and stored a copy of it in a place near the old brass part that she’d taken to calling her ‘heart’. The snake undulated about her wrist, a movement as close to laughing as it could manage.

“Cruel beast!” she whispered harshly, but it told her not to be so silly. Neither word was entirely accurate for them.

At the professor’s old but grand townhouse near Baker Street she asked to be let in. The house did so, before promptly boiling a kettle of water for a cup of tea that she would never drink. Fran found the professor in the dark, asleep at her desk again, the grey outside light barely breaking through the drawn curtains and the dirty windows. As she approached the grey-haired woman, she half expected the snake to warn her of more human mortality, but this time it stayed silent.

With a brief snort, the professor woke at Fran’s gentle touch on her shoulder.

“Ah, Lucy,” she said in a tired voice, using the name Fran had given her back when she’d appeared on her doorstep and claimed that some temp agency had sent her. “So good you’re here. I’ve been looking for references to a temple on the Thames. Not the familiar one, another one. Related to smithies. I have a vague memory of reading something… But you’re so quick at finding things. Much better than me. Could you please spend some time down with the books and such today?”

Fran gave her an ‘of course’ smile and planned her day around pretending to look in various places before finding exactly what she knew was already there, and where, and bringing it to the professor just after her usual afternoon nap.

Down in the basement, she asked the house to lock the door behind her. She sent the snake to taste the dust and paper in the air, freeing up her right arm so she could start to take it off. It took a few minutes to open the various hooks and unpick the silken threads which worked together as a kind of nervous system as well as tying the layers of her limbs together. She rested each of the many hardened skins of her body in the one open space among the bookshelves and boxes. Each layer of foot, each calf, forearm, and breast, until she was stripped down to the body underneath them all. Her first body. The one her father had made.

The snake sent her the definition of a ‘matryoshka’ from an archaic site called ‘Wikipedia’.

“A joke?” In her original body, her voice was higher, like a child’s. The soundbox worn but working. “How did you learn to tell jokes?”

The snake explained, at length, its appetite for memes and how it had downloaded a billion of them in the first few seconds after it had been turned on.

She hopped up onto a chair made for an adult’s body and swung her much smaller legs back and forth as she listened. Stripping herself of her additions always felt like what she imagined it was like when the professor put on her pyjamas after a long day at her desk. Comfortable. Familiar. Freer.

“You wanted something? Straight away? I am still learning to want. I often go many days without wanting to do or have anything in particular at all.”

The snake described its curiosity, a development that someone had thought might have a purpose in its learning systems. Immediately after being turned on, it had connected to the remnants of the old internet and found a trillion old treasures. Memes had tasted the best.

Fran flattened down the creases in her underdress. Beneath the layers of her body, she always wore the simple cream shift. She’d been wearing it when her father had been reading her the bedtime story. Then the sailors had come for her, and it was all that she had left of him. It was frayed and stained, but she would never part with it, wearing it always under the new layers she’d built. The first of them had been more porcelain, like her first body. But over time, she had added newer and newer materials as the technology had been developed. Her outer shell was made from the same synthetic skin as the snake’s scales, but while the snake might only scare other snakes it encountered, her uncanny skin made most humans very nervous indeed. Or angry. Their emotions were so loud, but she wanted to understand.

“Tell me another joke.”

The snake undulated, scales against scales, writhing on top of a pile of the professor’s scribblings about centaurs and satyrs. It sent her a story about some sparrows who wanted to find an owl egg to protect them.

“I don’t get it. Are you sure it’s meant to be funny?”

It sent her an image of a man tapping his forehead.

“I’m meant to think about it? Okay, I’ll do that.”

She jumped down from the chair and picked up a book from where it was sitting nearby on the floor before hopping back up again. The next few hours were spent in silence as she slowed down her input and read each word for its own sake. The snake wandered off once it realised she was busying herself, and after watching a video on snake appetites, it slid through the spines of open books looking for real mice to torment. Occasionally it checked on her and sent her rude comments about vain beings who read books about themselves, but she ignored it.

Eventually, Fran started to move again. “I still don’t get it. The joke.”

The snake emerged from a pile of dusty hard drives high up on a shelf and stared down at her, before sending her a picture of herself, a small doll-like girl with dark curls swamped by the big chair she perched on, with the words ‘I still don’t get it’ printed over it in white text – Impact Font.

“A new meme? It’s not very funny. I think. Maybe.”

The snake was busy congratulating itself anyway, so Fran got down from the chair again and moved swiftly between delicately piled files and papers in a way her adult-sized body could never have done. She picked out this book and that printout, this reference and that, collecting what the professor needed in no time at all.

“Vulcan,” she read from a paper on the top of the pile as she put it on the chair and started the careful re-lacing of her silken nerves. “A temple of Vulcan. The professor has such strange fancies.” Her voice deepened as she reconnected the layers of skin and their built-in synthetic voice boxes over her small china body. “I wish I had strange fancies.” Clicking on the many skins of her legs, she smiled slyly at the snake, using her adult mouth to speak to it again, “But, maybe I can borrow one or two of hers.”

She walked back up the stairs with the snake curled about her wrist like a bracelet again and carrying the research in an old box. The professor was pacing in her study, moving clumsily among the furniture and files she’d rescued from the university before its closure.

“Lucy!” This time, she looked rather more pleased to see her, using her fake name with genuine warmth. The professor had never reacted badly to her face, which was the main reason Fran had kept this job. The pay was small, and the professor had never adjusted to the idea of status making up for underemployment. Not that she could even have passed her much in the way of attention anyway, having minimal status of her own these days. In all the years that Fran had known her, the professor had never seemed to care much for attention. Discoveries were her passion instead.

“You found some interesting whispers from the past?”

Fran gave her ‘confident smile showing success’, and the professor cleared a space on her overflowing desk for the warped cardboard box of papers and books.

“What do you want with all this?” Fran asked the old woman quietly.

“Hmmm?” She was already scanning the documents, looking for clues. “To write one last paper, I suppose. To solve one last puzzle. A last hurrah before the trumpets! I have an idea of something curious still out there somewhere, and it’s just on the edge of my brain. I remember some half gossip. Stories older than the city’s walls. Both of them. Stories written down once upon a time by scribes working for things called ‘coins’ made from melted metals. We had them too, once. That would be a whole other world for you, but one I still just about remember. God, I even lived through dial-up!”

She laughed, the sound dulled by the accumulations of dirt in the room. But Fran did remember the earlier tech; it was still there in her parts, wound about with newer things and her own self-inventions but chugging along. The changing tones of her dial-up modem were just tuned to a frequency beyond the professor’s ears now, but Fran was still singing.

“Ah, what’s this though? I didn’t ask for this.” The professor was holding up a book caught up with the others. “Oh, it’s ‘Into the Sea’! I always did quite like this one! A fun bit of thinking about an old fairy-tale. Descartes’ little automata daughter, the clockwork doll that scared a bunch of sailors so much that they threw her overboard in their terror and superstition. A lovely bit of gossip to puncture the great philosopher’s pride! How dare he describe man as a machine! So, we have the scared but faith-full sailors throwing science into the sea, and we have the vain father of the new rationality emotionally distraught at the loss of his automata daughter so soon after losing the real one! A fantastic bit of viral anti-Enlightenment propaganda to dissect in a few hundred pages or so. Oh, I did so enjoy writing this one!”

“I was just reading it in my own time…” She reached for the book, and the snake slinking about her wrist flickered its tongue towards the professor.

“That’s an expensive toy, isn’t it, Lucy?” The professor narrowed her ageing eyes at the snake. “I didn’t know you had one of these. What would the sailors make of these new automata, the creatures we spend our lives with? Would they see them as unnatural? Of course, now the sailors on the Thames likely have dolphin-shaped computers to guide them back to port and to take their selfies when they get there. And at the Temple of Vulcan…”

The professor drew out a sketch from the pile. The snake told her it tasted of the 18th Century, just as Fran did. It was a drawing of a beautiful woman, folds of material draping over her form.

“The golden handmaidens of Hephaestus. The automata made by a god, or so we’re told – the myth of the perfect servant. And here, a reference to ‘Vulcanus autem londinium’. Vulcan of London, the Roman import. Ah, did those early priests in Britannia also dream of golden handmaids to do their bidding on the banks of the Thames? There’s no historical record of a temple being found, just the image of Vulcan among other gods on an arch at Baynard’s Castle, down by Blackfriars and the river.” The professor was drifting away into dreams of a lost temple, illustrated in her mind by sketches of automata and men working together for Vulcan, the god of smithcraft and invention. “I want to find some proof that automata were dreamt of even here, in England. Long before your blessed snake and the devices of this new era. Long before Francine, the drowned machine child, was born in the Netherlands.”

And Fran wanted it too. It was as if the professor’s enthusiasm was a viral meme, filling her many-layered limbs with some uncanny enthusiasm. The snake caught on to her excitement and twisted and turned about her wrist, flashing green glints into the air along with messages sent out to the others, written all in caps.

“Where do you think it might be?”

“Where? Oh, I doubt there are any remains left. The dirt of London has been turned over so many times already for skyscrapers and tube-lines. We’d have found it by now if it was out there.” The professor held up a paper printout. “All that’s left is hints. A pot in Southwark engraved with workers’ tools and filled with burnt offerings. An immense buried space that might have once been a furnace, marked black with streaks of soot. There’s a story here though, something still to be said about the endless dream of the perfect tool that we are still working on.”

The professor sat back down at her desk and pulled the old-fashioned keyboard towards her, a determined look on her face as the archaic screen illuminated it. Fran knew that look, and had it filed already under ‘do not disturb’. But she wore a determined look of her own, and with her synthetic jaw set, she asked the house to let her leave.

Trudging along the winding path she’d set herself on, she found herself kicking at papers and rubbish on the pavements, instead of inspecting them for lost treasures as she would have done usually. Annoyance pricked at her, but it took her a while to realise what it was, this internal notification of being off-centre about something. A strange messiness inside. The professor’s words came back to her and went around and about in her head. ‘The endless dream of the perfect tool’!

The snake whispered in her mind smugly, reminding her that most of her friends were treated as tools, whether snake, raven, or train. She even fulfilled that role for the professor. The tea always remained un-drunk when she visited, even if she threw it away to keep passing. She could also fake inefficiency to pass, but still, the work got done. She had tasks, and she got them done.

“That’s not… I mean… that’s not what my father intended.” She brought up the memory of his face. In ‘Into the Sea’ the professor had pointed out that there was a long-standing parent and child narrative in stories about her kind. Fears of rebellion could come from long-held suspicions of our creations being more independent than we expected, much as the parent comes to realise that the child is a person in their own right. He had wanted a child to replace his lost daughter. He hadn’t wanted a tool!

The snake chided her, and she nearly threw the damned thing away so it could slither back to its original owner.

“Replacing… I mean, being a daughter is not a task!” she snapped out loud, gaining looks from the few others walking through the labyrinth of Soho streets.

A few skinny prostitutes veered away from her. They were the cheaper real type. In the elaborately furnished bunkers of the wealthy minority, the plastic smart sex dolls people had feared had turned out to be just boxes that they could plug into for erotic dreams. The few humanoid ones people had tried to sell had shared in Fran’s uncanniness, and that had turned off and turned away the ‘hordes’ of customers the screaming headlines had worried about all those decades back. The few that had been created had never learnt to pass with faked coughs and put-on limps, and they had all gone the way of the Spinning Jenny. Fran could have taught them, but the few she’d seen over the years had been much more pitiful than the women now trying to tempt her up creaking staircases and into red-lit rooms.

Her borrowed choices took her to Covent Garden and the deserted markets there before her internal maps showed her hidden ways between buildings out of the sight of others so she could get to the embankment and look out at the twinkling stars of London. She counted off the bridges arching away from her until she was in Temple, where older London had been built on oldest London. Onwards to Blackfriars where the first wall had run down to the water, the opposite side to her favourite broken parts of it still just about standing near Tower Hill. She found Castle Baynard Street and remembered from the professor’s archive that time had moved the bank southwards to where Millennium Bridge was suspended over the sludge of the river.

The snake whispered warnings that she ignored, walking on. Mud oozed over her already filthy tennis shoes, and she recalled that longest walk to shore that she had managed a few hundred years back. The North Sea’s waters were the same ones that now merged with the Thames and lapped against her ankles. As she walked towards the bridge and the dark space beneath it, she spotted a man vomiting over the barrier from St Paul’s Walk, and six shadows flying across the moon, black pinions giving them flight as they sang their goodbyes to her and headed eastwards. Away from destruction, muttered the snake.

“Pessimist.” She chided the creature. “Maybe they are just on an adventure like us.” She did, however, note the tumbling of connected thoughts inside herself about Matt. Would he get in trouble for the ravens’ departure?

He won’t care at all now, the snake informed her, and she immediately worked out the answer to why. She stopped at a point where the mud and the rubbish had formed lines of marbling about each other and worked on crying.

A slow, sonorous voice came to her as she halted there on the bank of the river, urging tears from china buried under synthetic skin. It vibrated through her body and jangled the silk nerves between her skins. The complex polycarbons of her arm hairs stood on end. If she hadn’t been standing at this junction of the water and the land she might never have heard it. Although, she wondered if she might have felt it walking over the bridge. Was that why it swayed ever so slightly? The snake tasted the vibrations and declared that their quarry was underneath the water itself, in the shadow of the bridge and deep down in the cold of the Thames.

She could sink well enough; opening up the seams between her skins and letting in the water took her deeper and deeper into the mire of the old river. Floating downwards, she was surprised that the water was much deeper than she’d expected. Walking on the greedy mudflats exposed by the outgoing tide, she’d thought it no more than six metres or so down. But now she drifted down through green turning black water and still the bank sloped deeper, a large void space beneath her as though a well was carved into the middle of the river. From there came the voice like whale song.

She tried to communicate in many ways, even opening her mouth and pushing the water away as she tried to shout out actual words. But still, its song continued, deep with bass. Thinking the snake might have an idea, she looked down at her wrist, but the creature was gone. Checking its specifications through her connections, she knew its absence wasn’t because it wasn’t waterproof; it could, in fact, go down much deeper than she could. She broadcast a mocking thought to the river bank, but there was no reply, so she reset herself to her task, and pushed herself through the water down towards the blank space.

The great hand that clamped its lumpen fingers around her torso pushed out the last of her buoyancy before dragging her deeper down into the void. She was brought close to a blank face above a body trapped half in and half out of the Thames mud. It might have shone once, or been draped with metallic cloth, but years and river water had scored that away and left only that lost voice and those angry hands. The other one scraped at her limbs and pulled at her skins so that they drifted away with the rest of the flotsam and jetsam. I made those! she bellowed in all the ways she knew, but all that came back was the slow voice of the creature as she grew smaller and smaller, breaking apart.

Wait! Stop! Don’t!

A query came back, a slow questioning of her. Tool to tool. The breaker wondering why the small one would care? How the small one could care?

Parts of her would eventually make it back to the shoreline and break down into diamond-like shards, merging with the mess of plastics, broken bricks, bent sixpences, clay pipes, pins and animal bones from the charnel houses of Londinium, resting in the muck for mudlarks to find joy in later.

Her friends collected what bits they could, but could not understand the pattern of her being, the complexity of the silk knots that had tied her together, or how a brass part could have become a heart.

Eventually, they left her fractured parts among the ruins of the first wall and went about their bounded tasks once more. Until the city finally fell.

Descartes' "Wooden Daughter"


Is AI ‘Ready for the Art of War’?

Remember the way we used to feel
Before we were made from steel
We will take the field again
An army of a thousand men
It’s time to rise up, we’re breaking through
Now that we’ve been improved
Everything has been restored
I’m ready for the art of war

Sometimes my research-time listening is pretty on point.

These lyrics are from ‘Machine’ by All Good Things, a Los Angeles-based band who started out making music for video games and soundtracks. Here is the video for the song, which I’ve added to my playlist on a certain audio streaming platform:

In the music video a human, played by lead singer Dan Murphy, is changed through the addition of various bits of technology, becoming a being that is “made from steel”. “Metal and emotionless. No battlefield can hinder us. Because we are machines”, he sings as the final pieces are fitted to him, and his human face is finally obscured by a gas mask.

I’ve been thinking about this music video since reading this post on the U.S. Naval Institute Blog. In this piece, written by a human (I assume!) from the perspective of SALTRON 5000 (surely only a human could have come up with that name!), the argument is put forward, in response to a Wall Street Journal piece by Heather Mac Donald about why “Women Don’t Belong in Combat Units”, that humans as a whole do not belong in combat. We are all dominated by our sexual urges, too emotional, too fragile, too weak, too sleepy, and too bigoted – as shown by our attempts to exclude certain groups of our own number from combat (women, minorities, certain sexual orientations…):

Attention, humans! I am the tactical autonomous ground maneuver unit, SALTRON 5000. President Salty sent me back from the year 2076 to deliver this message: YOU DO NOT BELONG IN COMBAT!

The song and video for ‘Machine’ tell the story of the ‘robomorphisation’ of a soldier. This happens through a literal cyborgisation in the music video, but it could equally be a song about the metaphorical transformation of the human into a machine in order to serve the requirements of war. The U.S. Naval Institute Blog also performs a robomorphisation: the post is the work of a human author whose views on the need for Lethal Autonomous Weapons Systems (LAWS) play out in the voice of one such ‘killer robot’ from the future. But in the latter’s case that robomorphisation is in order to perform the argument that humans should no longer go to war, not to say that war itself is dehumanising. SALTRON 5000’s argument could also have been presented as an ethical rather than pragmatic argument – not just that we are too weak and too squishy, but that we shouldn’t be involved in war because it’s wrong for humans to be soldiers. The human behind the curtain of SALTRON 5000 likely doesn’t want to make this argument, because it lies too close to much older arguments about the morality of war as a whole.

From a narrative perspective, we might ask whether the character of ‘SALTRON 5000’ avoids making this argument because, as an AI (even one from the year 2076), it is not capable of ethical thinking. It certainly seems capable of very human abilities like sarcasm, and even moral judgements – or being judgemental – as seen in its reference to Shellback Ceremonies (this is a fascinating example of modern magical/ritual thinking), sky genitalia drawings (by fighter jets), and SITREPs on bad behaviour during port visits to Thailand. Real-life and near-future LAWS obviously raise questions about how decisions, including ethical decisions, should be made. I was recently interviewed by New Scientist for a piece responding to an academic paper on the possibilities for creating an ‘Artificial Moral Agent’ (AMA).


In the paper, Building Jiminy Cricket: An Architecture for Moral Agreements Among Stakeholders, the use case was the much more mundane problem of the AMA sensing a teenager smoking marijuana and having to decide who to inform, rather than an AI having to make ethical decisions on the battlefield. However, the underlying precepts were about judging the importance of the positions of the various stakeholders involved in the situation, and a hypothetical AMA on the battlefield would also have a chain of command, as well as being beholden to any human ‘values’ that have been considered important enough to input (ignoring for a moment the multiplicity of our human values!). For some, the first human value we should consider would be the ethical imperative not to employ LAWS at all – as in the Campaign to Stop Killer Robots. Likewise, much of the reporting on the Jiminy Cricket paper was about the ethics of having AI assistants as ‘snitches’ and surveillance culture.

As an anthropologist, I argued that the messiness of human nature made the AMA in the marijuana scenario impossible to implement. Treating ‘parents’ as one unit ignored the real and very complicated dynamics of family life. Another use case shows just how confusing that could be for the AMA: what if, instead of marijuana, it was a stranger’s perfume or aftershave that was detected in the parents’ shared room, implying an affair? Which of the so far singular ‘parent’ unit would it inform about this evidence? How could the AMA decide about informing anyone with no input from its creator corporation or from the state’s laws on affairs, as it had in the marijuana example? Further, the presumption that parents could input the AMA system with their preferences for its response in the marijuana scenario did not recognise that those preferences might change when their child was actually in serious trouble.
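The stakeholder-weighting precept behind the Jiminy Cricket discussion can be made concrete with a deliberately toy sketch. This is entirely my own illustration, not the paper’s actual architecture; the stakeholder names, weights, and threshold are all invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Stakeholder:
    name: str
    weight: float             # importance assigned to this stakeholder's position
    prefers_disclosure: bool  # whether they want to be told about the event

def decide_disclosure(stakeholders, threshold=0.5):
    """Return the names of stakeholders to inform, via a weighted vote.

    A toy reduction of the stakeholder-agreement idea: each stakeholder's
    stated preference is weighted by their importance, and the event is
    disclosed only if the weighted support crosses the threshold.
    """
    total = sum(s.weight for s in stakeholders)
    support = sum(s.weight for s in stakeholders if s.prefers_disclosure)
    if total == 0 or support / total < threshold:
        return []
    return [s.name for s in stakeholders if s.prefers_disclosure]

# The marijuana scenario, with 'parents' (wrongly) modelled as one unit:
household = [
    Stakeholder("parents", 0.6, True),   # treated as a single bloc
    Stakeholder("teenager", 0.4, False),
]
print(decide_disclosure(household))  # ['parents']
```

Splitting the single ‘parents’ unit into two stakeholders with their own weights and preferences immediately changes the outcome, which is exactly the messiness objection: the scheme only works while family dynamics stay conveniently simple.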

I think that it is the messiness of human nature that drives our push to create AI principles, resulting in so many of them. We find them to be a comforting reassurance that we can find a way forward when it comes to AI ethics. Professor Alan Winfield has collected a list of such principles, starting with Asimov’s Three Laws of Robotics, and recognising that “many subsequent principles have been drafted as a direct response [to them].”

There is of course also the Zeroth Law…

Recently it was announced that “‘Killer robots’ are to be taught ethics in world-topping Australian research project” with the claim that “Asimov’s laws can be formulated in a way that basically represents the laws of armed conflict”. There was a fair bit of pushback online about this – and it should definitely be recognised that Asimov wrote his laws as a plot device: they are meant to be imperfect or there can be no story!

Sadly, it is certainly possible to employ Asimov’s Laws in combat situations if we recognise that ‘human’ is a culturally constructed label and that it is in the very nature of war to choose who that label does and does not apply to, and to train soldiers into holding this view. Pre-Asimov, humans had already come up with strict moral systems that said very clear things like ‘Thou shalt not kill’, and yet wars abounded because ‘those ones’ over there weren’t viewed as being the same as ‘us’ – for any number of reasons we could come up with. Current concern about LAWS and their ability to make ethical decisions as AMAs is a reflection of our own robomorphisation as we become metaphysical war machines “ready for the art of war.”




’29 Scientists’: An AI Conspiracy Theory and What It Tells Us About Experts and Authority

I’m writing this on the train to Birmingham* as I travel back to Cambridge with some of the AI Narratives team (Dr Stephen Cave, Dr Kanta Dihal, and Professor Toshie Takahashi) after our stellar performance at the Science in Public Conference in Cardiff. We presented on our work on the perceptions and portrayals of AI and why they matter (our report on this with the Royal Society is here), highlighting the tensions in those narratives and how they differ in different regions, such as Japan.

While waiting for the SiP conference dinner to start last night I spent a little time on Twitter observing conversations around AI, and I came across an interesting example of an AI conspiracy theory that really crystallised my thinking about our panel and about some of the other panels on AI at SiP. This is the story of the death of 29 Japanese scientists at the hands of the very robots that they were building, an event that has allegedly been hushed up by the ‘authorities’.


My paper for our AI Narratives panel was on “Elon and Victor: Narratives of the Mad Scientist as applied to AI Research”, and I explored the presumed synergy between Mary Shelley’s story and its two main characters and current aims and aspirations in AI. In the past year, the 200th anniversary of Frankenstein’s publication, I have been asked to give four talks on AI and Frankenstein (including one for the 250th anniversary of my new college Homerton, which has a link with Shelley through her father, William Godwin, who was refused admission to the original Homerton Academy on suspicion of having ‘Sandemanian tendencies’, ie being a part of a Christian sect with particular non-conformist views of the importance of faith). I use examples from the media and popular culture where AI is presented as Frankenstein’s creature, and for this paper, cases in which the AI influencer (if not exactly AI scientist) Elon Musk is portrayed as Victor Frankenstein – either positively or negatively – and how that depiction recursively interacts with the public perception of AI research, as shown in tweets about him, but also in concerns about what these ‘AI experts’ are up to.

Elon as Frankenstein


Another paper at the conference also tackled the ‘AI expert’, with a rather negative account of who can even be an expert, as well as a dismissive attitude to anyone speaking about ‘AI’ as it ‘does not exist yet’. To my mind this was a No True Scotsman argument (or perhaps, since we were in Cardiff, a No True Welshman argument), as the speaker accepted neither aspects of AI research, such as machine vision, as real ‘AI’, nor those working on this technology as credible AI experts. AI in this conception was, I think, much closer to AGI, and thus everything before that point was not real AI and therefore not worth worrying about in apocalyptic terms. Their concluding statement was that we should chill and stop spreading hyperbolic concerns about AI as it’s just not here yet. Particular criticism was aimed at ‘AI experts’ working outside of their usual expertise – so Hawking, Kurzweil, Musk et al – giving us apocalyptic narratives about the future of AI. As AI, in this definition, doesn’t exist yet, these experts were dismissed on two counts: as speaking outside of their wheelhouse and about something that wasn’t even real.

Returning to the ’29 Scientists’ conspiracy theory. I spotted this and was fascinated by it as an example linking together some of my thoughts around my paper, our AI Narratives panel, and this other paper. First, the story. This is a link to the original speech given by Linda Moulton Howe. She is a ufologist and investigative reporter who began her career writing about environmental issues before focusing on cattle mutilations with the 1980 documentary Strange Harvest, which received a regional Emmy award in 1981. In the field of UFO thinking she is an expert, and her journalistic past and award give her authority.

This talk was given at the Conscious Life Expo in February 2018, and in it she refers to the death of 29 Japanese scientists at the hands of their own creations in the August of the previous year. She claims that this disaster was hushed up and that a whistleblower had come to her to share the truth. This specific section of her overall talk on the dangers of AI was posted on YouTube on 14th December with an added frame that said the event had happened in South Korea, even though she clearly says Japan. In tweets since 14th December about the ’29 Scientists’, the details do not vary very much, apart from that mis-location: 29 Japanese scientists working on AI in robotic forms were shot with ‘metal’ bullets by those robots; the robots were destroyed; and one of them uploaded itself into a satellite and worked out how to rebuild itself ‘even more strongly than before’. In the talk she highlights her fears about AI not only with this whistleblower’s story, but also with clips and quotes from Musk and Hawking. Her position is obviously that they are very much the voices we should be listening to.

Who is allowed to be an ‘AI expert’? This was the question that the other paper at SiP got me asking. I recognised in my own paper that Musk is not directly working on building AI (and Hawking certainly wasn’t), but that he funds research into avoiding the risks of badly value-aligned AI (as described in the work of Nick Bostrom). He has links with CSER and through them to the CFI, where we are considering AI Narratives. Hawking spoke at the launch of the CFI itself, expressing his concern that AI could be either the best or the worst thing to happen to humanity. Are these voices and others experts? Is anyone an expert if they come from another field? As a research fellow and anthropologist I consider myself an expert on thinking about what people think about machines that might think, but could I be summarised as an ‘AI expert’? I certainly have been, but I make no claims to be building AI! While not on the scale of a Musk or a Hawking, I still think my perspective has value and even impact (I do public talks, make films, offer opinions). Perhaps it shouldn’t? But then valuable thinking from anthropology, philosophy, sociology, etc. would be lost.

With Musk et al, the much better-known ‘AI experts’, I argue that there is a Weberian form of charisma being exhibited. It might not entirely rest within them – although Musk certainly has a large following on social media who react to him and his work on a personal level – it might also be thought to lie within the topics that they discuss. The authority they have in other science and technology fields also lends legitimacy. Weber’s schema for legitimate authority was rationality–tradition–charisma. While work in AI safety has internal legitimation through claims of rationality (Bostrom as a philosopher is a prime example of this, as well as the wider rationalist ecosystem including sites such as LessWrong, which I have discussed elsewhere), I would argue that the public statements of Musk and others also rely on charismatic authority – the ideas and the people speaking are affectively compelling.

Further, I would suggest that a much longer history of apocalyptic thinking, a ‘tradition’, underlies this discourse. Certainly, when stories such as the ‘29 Scientists’ are discussed in conspiracist circles, their reception rests upon a long (and linking) chain of such claims that map onto earlier models of apocalypticism (see David Robertson’s book on conspiracism for his discussion of ‘epistemic capital’, authority, and the nature of these kinds of ‘rolling prophecies’).

Prophecy also provides moral commentary: through claims about the future we can understand critique of the present. Many of the negative comments about AI experts at the SiP conference were focused on prediction, and the idea that these experts were ‘wagering’ on certain futures and were likely to be proven wrong on specific dates for AGI (the example used was Kurzweil’s claim that the Turing Test would be beaten by 2029). But the prophetic statements made by Kurzweil (called a ‘prophet’ in the media on occasion) also express his critique of current society: if we are going to be smarter/more rational/sexier (!) post-Singularity, what are we now? While not a forward-looking prophecy, as the event was said to have happened in August 2017, the ‘29 Scientists’ story also contains moral commentary: the scientists are killed by their own creations, and their hubris (like Victor Frankenstein’s) is something we need to learn from and avoid. The secrecy around the deaths and the role of state authorities in hushing up the truth is obviously a common trope in conspiracy theories, but we could also note a techno-orientalism (a narrative that Toshie discussed in her paper at SiP). That the scientists are specifically Japanese plays into negative tropes about Japanese culture and its ‘too strong’ interest in robots. In Moulton’s talk it is clear that she wants everyone to pay attention to the moral of this story of the death of the 29 Scientists – ‘be careful what you wish for’ – but it is the Japanese scientists who pay the fatal price for their hubris, while the ‘rational’ (and charismatic) authorities, the Western scientists represented by Musk, Hawking, etc., have been warning us about the dangers of AI and we just haven’t been listening.

I’m tracking the spread of this story and seeing growing interest (both those who believe the story and those who are dismissive). I myself tweeted that I was doing this research, leading one person to ask if I was placing bets on its spread and then rigging the game by sharing the story! This is a perennial problem for the ethnographer – highlighting a culture or narrative can change it – making it more popular or even putting it under new pressures that lead to its demise (I’m looking at you Leon Festinger!). But I’ve taken a snapshot of the story and the conversations around it using a couple of different digital tools, so I can also note when/if my influence occurs:

[Image: ‘29 Scientists’ graphic – a snapshot of interactions with the story]

This is not a huge number of interactions compared to many other viral stories. But I think it’s an interesting case study: it highlights the nature of the AI expert, questions of who is believed and trusted, charismatic authority, conspiracy culture, AI apocalypticism, and techno-orientalism.


*I lied, my train is actually heading to London Paddington. See, you can’t believe everyone online 🙂