A short provocation piece written for the Belief in AI conference, part of Dubai Design Week
When AI Prophecy Fails
In 1843 a caricature was published in a newspaper showing a man hiding in a safe. He’d chosen a Salamander Safe, a familiar brand of the time, and filled it with brandy, crackers and cheese, along with ice to keep them all fresh. While scrunched up in the container, the man was literally thumbing his nose at the viewer, a gesture suggesting a certain amount of smugness. The illustration was labelled ‘A Millerite preparing for the 23rd of April.’ Protected by his safe, this ‘Millerite’ was clearly expecting to live through the end of the world that his prophet, the American Baptist Preacher William Miller, had predicted.
William Miller had actually predicted the return of Jesus Christ, using what he understood to be signs and portents in the text of the Bible itself. But he and his followers are better remembered as doomsayers whose expectations for the end of days were thwarted not once but twice, the second occasion now known as the ‘Great Disappointment’. The Millerites are also remarkable for having members among the farming community who refused to harvest their crops that year: ‘No, I’m going to let that field of potatoes preach my faith in the Lord’s soon coming,’ a farmer by the name of Leonard Hastings reportedly stated at the time.
This cartoon parody suggests one potential response to people who change their behaviour based on predictions of the end times, or the apocalypse. When Harold Camping again used the Bible to predict the return of Jesus for 21 May 2011, the convergence of digital advances and modern modes of transglobal social networking made his account easily parodied through Internet memes. But this also made his story better known than Miller’s, whose publicity was limited to a few sympathetic newspapers and pamphlets distributed by his followers.
We are now exposed to more accounts and predictions of existential risk than ever before, while also having increasing numbers of platforms for stories about the future. Among these is science fiction. Arguably this genre was born in 1818 with Mary Shelley’s Frankenstein, or perhaps with her own post-apocalyptic account, The Last Man, published in 1826. In this story, humanity has suffered its end of days through a plague. Shelley claimed to have discovered the story of The Last Man in a prophecy painted on leaves by the Cumaean Sibyl, the prophetess and oracle of the Greek god Apollo. Whichever date we choose for the origin of the science fiction genre, it’s clear that we have been imagining our futures, and our future fates, for a long time. And when we come to imagine our future in relation to Artificial Intelligence, we still rely on these old apocalyptic tropes and narratives, including prophecy.
Too often, discussions of prophecy focus on the success or failure of a particular date. In When Prophecy Fails: A Social and Psychological Study of a Modern Group That Predicted the Destruction of the World (1956), the social psychologist Leon Festinger studies a small group of UFO-focused spirit mediums who expect the world to end, and develops the concept of ‘cognitive dissonance’ to explain how they could cope with the failure of their prophecy. But his theory ignores the moral commentary within the assertions of the prophetess, Marian Keech, and the tensions within the group. Looking back to the Old Testament prophets, and Jeremiah in particular, we can see how prophecy also functions to warn people about their behaviour and where it will lead. When the impending apocalypse seems to be ushering in an age of utopianism, as in many Christian scenarios, commentary helps to reflect on that perfection, and lay bare how far away from it we really are.
With Artificial Intelligence, we have long told stories about the end of the world at the hands of our ‘robot overlords’. In the Terminator series (images from which still often dominate the press’s discussion of AI) the prophecy of Skynet’s awakening — aka ‘Judgement Day’ — relies on familiar religious language and tropes. Further, I would argue that it continues prophecy’s interest in moral commentary. There is a tension between the cautious human wisdom of the ‘good warriors’ — the nascent Resistance — and the ‘bad warriors’ — the military-industrial complex, which has rushed into replacing human soldiers with AI and automated units. These same robot soldiers will then be improved upon by Skynet and, ultimately, will hunt down humanity in the form of the Terminators.
Our fascination with the end of the world also comes from our subjective experience of time: for us, this is always the most important point in history, as we are here, and here now. Further, with AI we have a tendency to apprehend mind where it is not (or at least not yet…), and we understand that minds desire to be free, since we have minds that want to be free. We tell stories of robot rebellion and human disaster because we understand that this has happened with slavery in the past, and know what we would do if we ourselves were not free. Combine this mindset with the way that, for millennia, we have used prophecy to critique our current situation, and it is not so surprising that we return again and again to fears of an AI apocalypse.
For some this is purely about existential despair; the end of the world can only be a disaster for humanity. For others, the apocalypse can be the dawning of a new utopian age, as in transhumanist narratives that see the future of humanity through the embrace of technology – even if that could mean a redefinition of the human and the end of the world as we know it now. For these commentators, AI prophecy puts them into the shoes of the Millerite thumbing his nose at the rest of the world. However, whether we are optimists or pessimists, it seems likely that anxiety about AI, or about our current world, will continue, and we will return to the thrill of imagining the end of the world again and again in the stories we tell about the future.
I predict it.