Sometimes… okay, quite often… I like to use this blog as a space to put down on ‘paper’ details about some of the things I’ve been up to, been reading about, or just been generally musing on, and then to pull out some threads of consistency and contingency between those sometimes quite disparate things. In the case of today’s blog, I want to look at three ‘events’ which have one obvious and easily found overlap, Artificial Intelligence, but I want to push and pull at that a bit and see what other, less obvious, things emerge when I put these three events together.
These events were:
1. AI-Europe, a tech conference, which I attended last week;
2. The Cambridge Conference on Catastrophic Risk (CCCR2016), held at Clare College, which I attended this week;
3. And Rogue One (aka Star Wars Episode 3.9), which I saw yesterday.
Although AI certainly plays a role in each of these three locations (and I’ll try to avoid spoilers for Rogue One, but unless you haven’t been paying attention you should at least know that the Droids in the Star Wars universe – even in its post-Disney Big Crunch form [see Singler, forthcoming] – are examples of science fiction AI; the type and strength of this AI is something I will return to below), these locations weren’t framed in the same way, and I wasn’t using the same method in each.
At AI-Europe I had my anthropologist hat on, making observations for a planned ethnographic paper on AI and religion at tech conferences (very forthcoming!). At CCCR2016 I had my academic hat on as an attendee. At the cinema I had my geek hat on.
In terms of framing, AI-Europe is described in its literature as the “premier European exhibition that will show you the value of AI for your strategy, introduce you to your future partners and give you the keys to integrate intelligence […] AI-Europe will give you the opportunity to gain inspiration and network with leading business strategists, decision-makers, leading practitioners, IT providers as well as visionary start-up entrepreneurs.” (bold in original) In a way, AI-Europe was IP Expo writ small but with a big entry price, nice food and a fancy location (paid for by that ticket!), fewer competitions and games, and Christmas decorations.
I was primarily paying attention to continuities of religious narratives and tropes in this secular field site (and there were a few, which I’ll be discussing further in my ethnographic paper), but it is also interesting to consider the possible overlaps and differences between the commercial and the academic by thinking of this event in conjunction with CCCR2016. CCCR’s warnings about the potential for catastrophic risks and the need for deceleration and caution in relation to AI seemed initially very different to the near-evangelical (and yes, salespeople sometimes call themselves technology or product evangelists) fervour of the sales and marketing people at AI-Europe. Transparency was there in the sales bumf (yes, that’s a proper anthropological term) for various products, but in the sense of demystifying the decision-making of the AI for the client, rather than of making AI open to wider critique and regulation.
The only moment when risk appeared on the horizon was in Calum Chace’s expressly horizon-scanning keynote on the Two Singularities (the economic and the technological). However, if we return to the literature from the event we can see a ‘reality’ focus that diminishes extreme, ‘science fiction like’, risks. These kinds of risks were ones that speakers at CCCR2016 also proposed caution about, although at several points the importance of paying attention to narratives – such as science fiction – that can affect the development of AI was given its due. Although, I still think I am the only one out there banging the drum about paying attention to religious stakeholders – but more on that in the Aeon Magazine piece I am working on for the New Year! Regarding these kinds of extreme risks, the AI-Europe editorial tells us that:
“When you think of Artificial Intelligence, you can’t help but think of another fantasy land where Ex Machina and The Matrix are taking over the world. In reality, the term “Artificial Intelligence” comprises a vast and diverse ecosystem of technologies that represent very powerful opportunities for your business competitiveness!” [bold in original]
That’s a nice segue (seamless move, Beth, just seamless) to a spoiler-free consideration of the third event I want to consider in this blog today: the screening of Rogue One that I went to. I suppose the anthropologist and academic hats were also there under the geek hat, as I found it impossible not to watch this film and think on the character of K-2SO, the re-programmed Imperial security droid (voiced by Alan Tudyk, who you other geeks will know as the voice of Sonny in the rather double-plus-not-good 2004 I, Robot adaptation).
While events such as AI-Europe are citing Ex Machina and The Matrix in relation to AI (and of course, Terminator images are being used in newspaper articles about the work going on around x-risk in Cambridge – “Terminator Studies at Cambridge”, burbled the Daily Mail back in 2012 – much to the derision of the attendees of CCCR2016!), I’ve often thought that Star Wars needs more attention from a narrative perspective in relation to AI. I’ve written elsewhere about the Jedi, both fictional and real world, so maybe it’s about time I turned my attention to the Droids?
K-2SO might be a good way into that consideration, and the above gif summarises his personality fairly well without spoiling much about the rest of the film. Jyn Erso, the female lead, has just passed him a pack to hold, and he carelessly drops it once she’s just out of range, heading off on a mission without ‘K2’. Remember the rebelliousness and humour of BB-8, or C3PO’s lies to the stormtroopers when Luke and the others are in the trash compactor? Or what about C3PO’s forgetfulness about the comm-link he’s holding, and then how he curses himself when he thinks that Luke and the others are being crushed alive? Add all that together, along with a thousand other moments, and there is certainly an argument to be had about the self-determination, intelligence and human-likeness of the Droids in Star Wars, and about their level or strength of AI.
I propose that the Droids have at least human-equivalent AI, so why have the films not gone down that potential risk story-line? The biggest threat in the Star Wars universe is technology, but un-automated technology like the Death Star, being put to work by an evil Empire. Droids are used as automated weapons – an existential risk discussed at CCCR2016 – but their AI is enslaved and ordered about by hierarchies of humans. In fact, the AI of the Star Wars universe are routinely touted for their abilities in a commercial context, not an x-risk one.

K-2SO is a re-programmed Imperial Droid and demonstrates free will. But even the un-reprogrammed Droids employed by the Confederacy of Independent Systems in the prequel films expressed very human-like distress at incoming blaster fire, and some reviewers have even suggested that the Droids in those films showed greater acting chops than some of the lead, human, actors… (no comment).
So the question arises: could the buying and selling of the Droids in the Star Wars universe be understood as slavery? And what does that mean for a hypothetical future of increasingly intelligent programmes and devices that work for us once we’ve paid for them? Perhaps that is how we get back to questions of x-risk, through a consideration of how we treat these beings in our world? Or does x-risk come in earlier, through the commercial dash to accelerated development without thought about value alignment? The latter was certainly a key topic during CCCR2016, and it’s an ongoing discussion in the field of x-risk. However, the former – the question of agency and slavery – needs some consideration as well, and not just by science fiction.
Returning to these events in conjunction, and the wearing of three hats at once, I think the key thematic thread that joins them all is how we approach AI in different ways: as a potential risk, as a potential investment with a financial return, or as a character in a grander story where humans (or human-like aliens) drive the action, leaving the Droids to hold the packs.
Or not. While trying to remain as spoiler-free as possible, K-2SO’s free will extends beyond just deciding to drop that pack. He also makes the decision to act on behalf of others, even when it is to his detriment. When an AI company proposes to make the decisions of its products transparent to its clients, making AI less of a ‘black box’ or a magical device, it is unlikely that those are the kinds of decisions it is planning for. And even CCCR2016’s established emphasis on value alignment and the potentially catastrophic decisions of AI doesn’t really open up the conversation about beneficial or benevolent choices in the way that science fiction might.
When the subject is superintelligence (as in the technological singularity Chace was speaking about at AI-Europe) there is even more vagueness and uncertainty. As one audience member at CCCR2016 said, laughing as he spoke, for centuries there have been departments in universities trying to answer the question of what an omniscient being would want. With little success so far (ouch!).
In summary, narratives matter. Whether it’s at tech conferences, academic conferences, or in a galaxy far, far away. Just ask these Ewoks worshipping C3PO (because of a prophecy in some versions, and prophecy is a topic I want to return to at some point!):