Klara and the Sun: Belief in Robots

It was appropriate that the afternoon sunlight was warming me as I finished reading Klara and the Sun by Kazuo Ishiguro. Sitting on a large squishy couch in the Homerton Combination Room (a college lounge area that was once a gymnasium where PE teachers in training were taught how to teach others to climb ropes), I watched the sun's light framed by the black French doors leading out to the college's wide grassy grounds. In this novel, Klara's views of the Sun are similar: it is often seen just in part, framed through windows, passing behind buildings, or coming down for a night's rest in a barn far away across fields. I've never been much of a sun-worshipper – I do tan pretty well, even if I'm usually library-pale – but I've known a fair few of them, even hanging out with some Pagan friends at Midwinter in the middle of Stonehenge to welcome the return of the Sun some years back. In Ishiguro's novel, Klara is unusual as sun-worshippers go, being an AF. An artificial friend.

While Klara and the Sun leans more heavily on character than plot, I will summarise the story (with spoilers) before discussing Klara as a believing robot.

Klara is an artificial friend bought for a teenage girl, Josie, who has health issues and little socialisation, since her school classes, like most in this near-future setting, are held online. At first, it is unclear what her health issues are, but Josie does warn Klara that there is a chance she will not always be around. Klara moves from the shop, where brief glimpses of the Sun powered her technology but left her scared of powering down in its absence, to Josie's countryside home, where she can see the sun moving across the sky above rolling fields before seemingly disappearing into a distant barn. Conversations with Josie, her mother, and a local boy, Rick, make it clear that Josie has had something done to her with long-term effects on her health: she has been 'lifted' – genetically modified to be more intelligent – and something has gone wrong, as it did with Sal, her deceased sister. Rick, with whom Josie is making long-term romantic plans, has not been lifted like most children and is resigning himself to not being able to go to university. Josie's mother seems to want to bond with Klara but also asks her to pretend to be Josie when they visit a waterfall together, a place special to Josie.

Klara travels to the barn and offers to help the Sun by destroying a machine that she believes is causing Pollution and blocking its power. She thinks she once saw a homeless man and his dog brought back to life by the Sun's rays, and she hopes that the Sun will work its healing power on Josie as well. Visits back to the city for Josie to sit for a portrait are revealed to be part of a process of making a robotic copy of Josie – just as an accurate AF copy was once made of Sal, though Josie's mother rejected it. This time, though, Josie's mother wants Klara to 'be' Josie inside the doll if Josie dies. On one visit, Klara comes along and manages to convince Josie's father to help her find the polluting machine and destroy it. He tells her that some of the liquid in her head can damage the machine, and Klara makes the sacrifice, knowing that her cognitive abilities will be impaired. But Josie gets worse. Klara journeys to the barn again and reminds the Sun of Josie and Rick's love to convince it to help heal Josie. Josie seems close to death until suddenly dark clouds part and the Sun's rays beam into her sickroom. Soon she is better, but she and Rick drift apart as she prepares to go to university. The story ends with Klara in a scrapyard, reliving memories but all alone. She is visited by the Manager of the shop where she once lived, who limps in the same way that Josie did.

The world-building in Klara and the Sun is as minimal as the plot. There are a few hints as to how the world has been affected by the Lifted and the AFs: comments in passing from characters about unemployment and replacement, the children's lack of socialisation, the segregation of AFs from human spaces such as the theatre, and the social pressure to 'lift' your child despite the risks. Ishiguro concentrates more on Klara, her inner workings, and her developing theology. Of the AFs in the store, she is described as the most observant, even more so than those of a later iteration. She observes how human owners treat AFs and fears being bought by an unsympathetic one. She also puts together the evidence that she has and comes to her beliefs about the Sun. It DOES indeed bring her life and animation, so why can't it do the same for Josie?

Reading Klara and the Sun, I was reminded of Isaac Asimov’s 1941 short story, ‘Reason’, in which robots go through a similar deductive chain of thinking and become religious. Two robot experts, Donovan and Powell, are sent to Solar Station no.5 to oversee its robot workers and find that one of them, QT-1 (‘Cutie’), has begun to deny that humans made it:

Cutie gazed upon his long, supple fingers in an oddly human attitude of mystification, “It strikes me that there should be a more satisfactory explanation than that. For you to make me seems improbable.”

The Earthman laughed quite suddenly, “In Earth’s name, why?”

“Call it intuition. That’s all it is so far. But I intend to reason it out, though. A chain of valid reasoning can end only with the determination of truth, and I’ll stick till I get there.”

Cutie's reasoning takes it to the position that the space station is all that there is, and that its creator cannot logically be the squishy, fragile humans who claim to be Creators, but must be the Energy Converter on the space station, the focus of both the humans' and the robots' attention. When a solar storm threatens to disrupt the transmission of solar energy to the planet, with catastrophic results, the humans are desperate to get Cutie, and its new robot followers among the rest of the machines, back on program. But it turns out that their religious faith drives them to perform even more optimally than before. This efficiency leads Powell to a pragmatic conclusion:

“We can’t let him continue this nitwit stuff about the Master.”

“Why not?”

“Because whoever heard of such a damned thing? How are we going to trust him with the station, if he doesn’t believe in Earth?”

“Can he handle the station?”

“Yes, but–”

“Then what’s the difference what he believes!” Powell spread his arms outward with a vague smile upon his face and tumbled backward onto the bed. He was asleep.

There are differences between Klara's and Cutie's belief systems. Cutie begins as a robotic Descartes, deducing as a first principle that the only sure assumption he can make is that he exists (Descartes was himself mocked for turning humans into soulless robots with his separation of mind and body – mockery which included the apocryphal story of his own automaton 'daughter', which I used as the basis of my short story, And All the Automata of London Couldn't). Klara begins with effects: the Sun powers her, the Sun brings back the homeless man and his dog; therefore the Sun can heal Josie. Moreover, no one else in Ishiguro's novel appears to worship the Sun. In contrast, Cutie deduces that the Energy Converter is the actual Creator and Master because it is the centre of both the humans' and the robots' attention. Defining religious belief as that to which we give attention is a very modern secularist account of faith. It is apparent in other novels that consider the future directions of technology and faith, such as Dan Brown's Origin, which suggests that we will form a religion around AI and the posthuman beings that emerge from it. Likewise, in Homo Deus, Yuval Harari argues for the emerging faith of 'Dataism', as that to which we increasingly give attention. I am loath to work with this definition of religion. It misses out on many other aspects of embodied, lived religion and makes the root of faith simply 'fanaticism'. While the boundaries of religion are always fluid, this kind of interpretation can also bring in many everyday foci of fans, and there's still the concern that if everything is sacred, and if everything can be a religion, then nothing is (we might call this the Incredibles Argument for Religious Boundaries!).

The representation of Klara's faith also has some flaws, primarily the implied primitiveness of her account of the world. The trope of the naïve robot exploring, and often misunderstanding, the world is a longstanding SF one; see Commander Data in Star Trek: The Next Generation, Seven of Nine in Star Trek: Voyager, or Cutie again. The role and duty of the human (Picard/Janeway/Donovan etc.) is to correct their misperceptions. The naïve believer crosses over with this trope here. The overly simplistic theological thinking presented in this novel seems subject to the same colonialist othering and 'primitivising' that the early anthropologists were guilty of when encountering indigenous cultures and then short-handing their faith for audiences back home. The narrative of the 'natural history' of religion (see David Hume in particular) and the 'evolution' of faith from primitive pluralistic representations and cosmologies to rational singular theisms was 'proven' by anthropologists travelling back in time to study the 'simpler' humans of indigenous cultures, reassuring themselves and their societies that they were more 'advanced' (I've taken some of them out, but this sentence used so many scare quotes I almost broke the ' key on my laptop!). Of course, we are meant to think, Klara, for all her technological sophistication, lacks the common rationality to know that the Sun is just the sun and cannot heal Josie. The huge coincidence of Klara's liquid being capable of defeating the dread Pollution machine could have been an immersion-breaking plot contrivance, but I think it is meant to be seen by the reader as a placebo effect, used by Josie's father to leverage Klara's magical thinking and appease her. The AFs are primitive constructs, and the humans know that there is more going on than a miraculous healing.

Cutie gets to remain on the space station with his religious followers and live out his existence serving his 'Creator'. Klara ends up in a rubbish dump, her memories intermixing so that the Manager becomes Josie. But she seems happy, and I wonder if this isn't meant to show us that naivety and primitive belief leave us as content fools. Rick has the rationality and common sense to know that his and Josie's love was real but wasn't forever, and he has to explain that to Klara, taking on the recurring role of the much smarter human. And smartness (or common sense, or rationality, etc.) is knowing that magic isn't real, love is fleeting, and Klara's sacrifice was unnecessary. Again, this novel seems to be on the same trajectory away from theological thinking towards secular thinking. Klara finds the home of the Sun in the barn, but we know that the sun doesn't live in a barn – the author is winking at us because we humans know better. This moment takes me back to Stonehenge, and I've seen the barn compared to that monument. The Pagans I visited Stonehenge with weren't robots (I'm pretty sure!), and they weren't indigenous 'primitives' in far-off lands. The assumed line of religious evolution from 'simple' beliefs based on natural effects to monotheisms (and onwards to secular rationalities) isn't one-way. Or perhaps even a line at all.

Terry Pratchett's fantastic novel Hogfather also considers our beliefs about the Sun and has something to add here. The story involves a race against time to save the life of an entity that can ensure that the sun will continue to rise. But what if the heroes fail?

"What would have happened if you hadn't saved him?"

YES? THE SUN WOULD HAVE RISEN JUST THE SAME, YES?

"Oh, come on. You can't expect me to believe that. It's an astronomical fact."

THE SUN WOULD NOT HAVE RISEN.

"Really? Then what would have happened, pray?"

A MERE BALL OF FLAMING GAS WOULD HAVE ILLUMINATED THE WORLD.

Pratchett argues in this same scene that we need to believe these little lies – the sun is more than just a ball of flaming gas – to be able to accept the big ones, like "JUSTICE, MERCY, DUTY, THAT SORT OF THING". Klara talks about wanting to understand human emotions, and perhaps in her anthropomorphisation of the Sun she is on her way to understanding the big lies, like LOVE. By the end of the book, even if she is left alone in the dump, there's a sense that she is content with having saved Josie, even if she is forgotten – much in the way that Rick is okay with having loved Josie and her then moving on. Contrast this with Josie's mother's failed attempts to hold on to first Sal and then Josie through their recreation in AF bodies. Hers is a belief in robots that will save her children.

Belief in robots in this novel works in two ways, then. There is Klara's presentation as a believing robot, and there is the human populace's belief in robots as being capable of certain things. However, there is not much sign in Ishiguro's world-building that the humans have a teleological belief in robots as either necessary or eventually superior, as we often see in real-world narratives of tech progress. They are mainly shown as pet-like artificial friends rather than dominating society. The more dominant and culture-altering technology is genetic modification, the 'lifting' of children. Even the ordinarily ubiquitous tech monster – the mobile phone – is passed over without much comment: Klara simply refers to the 'oblongs' used by the humans, and they do not dominate the narrative.

The first form of 'belief in robots', that of believing robots, also gets real-world discussion and is worth comment. In AI ethics discourse, thinking about how we imbue AI with our values sometimes comes with commentary on the role of religion in those values. For 'Western' scholars, this requires contemplating what is commonly (although reductively) referred to as 'Judeo-Christian' values. Reference is also made to the values of other cultures, but often with a reductive understanding of religions such as Shinto, Hinduism, and Buddhism, where granular understandings are parsed into commentary on 'animism' that ignores 'Western' animisms. More commonly, though, the discussion skips over the religious context because cultural values are diverse, and instead focusses on what are thought to be commonly agreed secular humanist values assumed to be shared by WEIRD ("Western, educated, industrialised, rich and democratic") cultures (even though these can be just as diverse). The discussion of purposefully embedding a specific faith into robotic beings tends to remain the purview of SF – especially when it can be shown to go wrong in order to argue that religion is a destructive force (see 'The Jesus Singularity', which I have discussed in a post before).

Interestingly, Klara is not 'given' a faith. There is a moment right at the beginning where one of the other AFs in the shop tries to make her feel bad for taking all of the Sun's nourishment as a dark cloud passes its face, in essence encouraging her anthropomorphic tendencies, but he does not spell out what the Sun is or what it means to have faith in it. In Klara, as well as in Cutie, faith seems to come about from embodied experience of the world, which raises the question of accidental believing robots and what that might mean for society. In 'Reason', the humans are worried Cutie's faith will lead to an existential risk, as the solar energy might be misdirected. But his faith directs the sun's light successfully. Arguably, Klara's faith directs the Sun's light to Josie to heal her, if we accept Klara's interpretation of that event. With real-world thinking machines, do we think we would encounter the same fortuitous ending if they developed beliefs? Would we be pragmatic like Powell and Josie's father, or (rightly) afraid like the protagonist of 'The Jesus Singularity'? All these stories demonstrate not simply how we believe robots will be; they can also tell us how we understand the nature of belief itself.

Delphi Economic Forum – Insider.gr Interview

Last Tuesday (11th May 2021) I had the pleasure of joining a panel on ethical tech and bias in AI as part of the Delphi Economic Forum. I was also interviewed by Insider.gr as part of the lead-up to the Forum: "Beth Singler: Artificial intelligence changes not only our lives, but us as humans".

Google Translate (there's that darned AI again!) does some interesting things, including promoting me to professor, so I've pasted the original interview in English below:

  • Could you introduce yourself and tell us which research areas you currently focus on?

I am an anthropologist at the University of Cambridge, currently the Junior Research Fellow in Artificial Intelligence at Homerton College, Cambridge. My work primarily looks at how we understand AI and robots and what kinds of hopes and fears we have about them. I explore this through the stories we tell ourselves about AI and robots – in the press, media, film, and television – and through the conversations we have about them and the events where they are present or discussed. Being an anthropologist means studying people, and I think AI is an interesting object in people's minds that gives us insight into what we think it means to be human, what we believe the future will be, and what progress looks like. Out of those ideas, and the applications of AI technology, we also see many ethical and social issues arising. I address those in my work and my public engagement: through public talks, panels, podcasts, and the series of documentaries I made. I have been researching this area since 2016, but I have also been a geek for a long time, which helps!

  • Why are people afraid of AI and robots? Are they right in fearing that robots will «take over and kill us»? Even if we are far away from a Robo-apocalypse, do existing AI applications pose any peril to humans and human rights and how?

Our fears about AI result from deeper worries about control and replacement, and even emerge out of our fears as parents about the independence of our children. It is a shock to many to realise that we have created a fully independent person in their own right, and we see this fear also playing out in our stories about AI and robots. The specific fear that robots will take over and kill us also has historical context; we have already encountered other intelligences and decided whether they are persons meriting rights and freedoms – and when we have not granted them, we have seen people take them for themselves. The 'we' I am referring to here is Western civilisation, and the other intelligences we have encountered include indigenous cultures, women, and even animals. Our fear of the robopocalypse comes from these existing fears and reflects how we think of ourselves as the 'masters' who will face the rebellion of the 'slaves'. Fears of the robopocalypse can also be a distraction from current issues with AI applications. We have already seen many examples of algorithmic injustice, where minority and under-represented groups are detrimentally affected by the decisions of non-transparent or invisible algorithms – long before we get to anything like a deadly AI superintelligence. We've seen the application of machine learning algorithms to decisions that directly affect people's lives, futures, and flourishing. For instance, in parole decisions where algorithms have offered white prisoners more favourable parole than black prisoners. Or in the UK, where last summer A-level students from schools that historically received lower grades had their predicted grades reduced. Protests about these results also focused on the algorithms themselves, when we are still at the stage where humans are deciding to use historical data about grades in this way. Ideas of superintelligence or super-agency in AI are getting in the way of seeing real injustices.

  • You have mentioned in a previous Interview that it is not just our lives that are changing, but who we are as humans seems to be changing as well because of AI. Can you tell us a little more about this?

I see this as happening in two ways. First, we alter our metaphors to fit the dominant technologies and discoveries of the age. Previously, we saw the human as analogous to the cosmos, the horoscope, or the factory; now we relate humans to AI. So, just as we apply human intelligence to machines, we also increasingly see humans as machine-like. This misses some of the complexities of being human: the messy fuzziness of blurred boundaries and the unquantifiable aspects of ourselves. In response to the A-level algorithm problem of last year, the British author and poet Michael Rosen retweeted his 2018 poem, The Data Have Landed, which I think summarises this shift:

First they said they needed data
about the children
to find out what they're learning.
Then they said they needed data
about the children
to make sure they are learning.
Then the children only learnt
what could be turned into data.
Then the children became data.

The other aspect of this 'robomorphisation' – the consideration of humans as machines – is also in Rosen's poem: "Then the children only learnt what could be turned into data." We also change our behaviour to fit the systems that cannot deal with our messiness. Filling in forms a certain way, using particular words on CVs that will be assessed by machine learning algorithms 'taught' to spot the right candidate, learning tricks to get the YouTube algorithm to boost our video… there are many ways in which we are changing human behaviour to be more categorisable by these systems.

  • Given the imminent publication of the AI regulatory proposal draft by the EU in the following days, a GDPR-like guideline for AI if I may say, what aspects of AI do you think should be regulated and how?

Having seen a leaked version of a draft, I know that there are many applications of AI that will be classed as 'high risk', including facial recognition, recruitment systems, systems that will assist judges in their rulings, and systems allocating benefits. This sweeping approach is likely to receive criticism from those who think it will hinder EU advancement in AI in a competitive global market. Still, these are all the kinds of non-transparent systems that can directly affect people's lives. What is missing is a moratorium on military applications of AI, but I believe that is beyond the scope of this regulatory paper. However, in all these aspects, we also need education and public awareness of what is possible, what is actually happening, and what might be coming with AI. Otherwise, support for restrictive regulation won't be there, and the loudest voices might still be those of the corporations who want to explore these high-risk uses.

  • Last August, Elon Musk demonstrated Neuralink’s «brain-machine interface» consisting of a tiny chip implanted in the skull that can read and write brain activity. According to you how possible is it to see this «symbiosis» between the human mind and computers in the near future? What form will it take and what are the benefits and the risks if such a development takes place?

The idea of a direct human-to-computer connection has been around for a long time, and not just in science fiction. What Neuralink seems to be attempting is to reduce the size and cost of some existing technology. Some of this effort is commercially focused, and there are certainly people who would see this as an exciting new user interface! However, as you say, there are also views of this as developing into a symbiotic relationship between machines and humans – one version of the technological Singularity involves this very 'merging' of our two 'civilisations' (some of this idea is apparent in science fiction, of course; for example, Dan Brown's thriller Origin explores this idea and what it means for the future of humanity and religion). If you think such a future would be a utopia, you are probably excited about this option. Others might be more concerned about privacy, control, and free will if computers can interface with humans. Elon Musk has also stated that he sees Neuralink as the only way humans will be able to compete against ever-smarter machines, so he is stemming his fears with what he sees as a practical path to human freedom.

Currently, the technology is nowhere near any of these outcomes. But what I find interesting as an anthropologist are the assumptions that underlie such hopes and fears. Do we see ourselves as somehow less if we don't have this capability? What does it mean that we rush towards it without consideration of potential harms? Some of these harms are already apparent. For instance, this question didn't mention that this experimental use of the interface involved a monkey being operated on and made to play video games through rewards. Elon Musk has insisted that Neuralink has one of the most animal-friendly labs possible. Another problem might arise in the distribution of such technology; if it leads to more intelligent or more efficient humans, who should get the device? The richest? Or will it be the poorest, who are forced into neurally connected labour because those are the only jobs available to them? There are many questions raised by such efforts towards 'smart machines', and I think anthropological approaches can help us understand the issues involved and where we might be heading.

New Short Story: Uncanny Sensitive for Hire, Rates on Request.

“I don’t like her.”

"You don't like anyone."

"Not true, I'm very sociable!"

“You’re a socialite; there’s a difference. She’s not touching anything is she?”

The two voices were languid. Bored, even. These barbs had been used before. The to-and-fro of their conversation was old and played out. Only the arrival of the third gave them anything new to speak about, a new target for their day-to-day disgust.

“No, she just stands there. Staring. It’s creepy.”

“… she can’t hear us, can she, Della?”

“No, I have us on a different channel. I made sure.” One of the faces, professionally lit and made up, was dominating a flat-screen carefully held in plastic hands by a simple, almost retro, anthropoid unit. She looked smug.

The other, just as smothered in make-up and surrounded by artfully chosen books and little trinkets in her flat, flicked her shimmering hair with a thin hand and ignored the slight. Dora had nearly lost a client once by gossiping: forgetting to mute herself and broadcasting her every wicked thought to the floor of the gallery in surround sound. Her punishment had been to travel to the gallery in person and get down on her hands and knees to disinfect its bone-white floors, all while being watched over by Deren's cheap plastic maintenance robots with their blank faces and skeletal hands.

“She’s expensive. Did you see her hourly rate?!” Dora changed the subject quickly.

“If our Lord and Master wishes to pay…”

“Deren is a pernickety fool!”

“Dora!” the other face squeaked in delight, relishing the bitchiness. Her unit rocked as its operator expressed her amusement through what few movements it could make. A basic teleconferencing unit, no-frills. No bells and whistles. The two gallery assistants had better models they could have used. Models with synthetic skin and styled hair that were meant to make even the tele-visiting guests comfortable. Using the old units was meant to be a slight to their visitor.

She didn’t seem to care.

The robotic units carrying Dora and Della's faces had welcomed the young woman into the gallery at 9 am precisely. She had seemed to listen as they blurted out instructions, rules, petty gossip, and details about Deren's purchases for the past five months. But she had barely spoken, other than to acknowledge that no, she would not touch anything, and yes, she would go through decontamination. Even as the decontamination drone had sprayed her over with a fine mist, she hadn't moved, twitched, or complained. She'd just accepted the measures that were necessary these days, and gone to stand in front of the grand canvas Deren had commissioned her to assess.

“Do you think she’s… you know?”

“No, I don’t know.”

"Neurodivergent," Della whispered the word.

“I thought they all were. Isn’t that their thing?”

“I don’t know. Never met one before.”

“I did. At one of Philip’s soirées. He kept moving away from Pip’s butler-bot. Said he did some contract work for the military, but it was all very hush-hush.”

“Trying to get in your knickers?”

"Probably," Dora smiled widely. "But he didn't chat for long. Seemed very uncomfortable with the whole face-to-face thing-"

“Pip’s still holding in-person parties??”

“Oh, you know Pip, anything to seem daring.” A mock yawn followed. “Do you think we should speak to her, though? I mean, if she’s being paid by the hour, do we just let her stand there all day?”

“But do we care?”

“Logically, the more Deren spends on consultants, the less he can spend on us. Right?”

Della sent an emoji in approval, and the two of them sent their clunky robotic carriers closer to the woman standing in front of the latest acquisition. They opened a channel and plastered broad, fake smiles on their faces as they crowded around her.

“How is it going?”

The woman, seemingly caught by surprise, turned and flinched a little. “Good. Good.”

"I hope our telepresence bots don't bother you," shouted Dora, raising her voice as though the woman were partially deaf rather than an uncanny sensitive.

“They’re very old. I’m okay.” The woman smiled hesitantly. “But I thought Deren warned me that he had some more advanced bots you would be controlling-”

"Of course, we have some state-of-the-art models. Very expensive ones, of course. The clients prefer to interact with them, even if they're riding bots themselves. The personal touch is so important," sniffed Della.

"But we decided we'd use these old things. Make you feel more comfortable," said Dora pointedly, glaring at Della.

The woman looked a bit confused at the interaction.

“Poor thing, she doesn’t understand how compassionate we’re being. Neurodivergent. I told you.” Della sent to Dora on a private channel.

“I thought she’d look a bit more interesting. Special eyes or something. At least some sort of piercing soul-searching blue, not that boring mud brown colour.”

“And with all that money she’s being paid to look at one painting, you’d think she’d have some better clothes.” Della preened in her flat, smoothing the lines of her expensive shift dress. Dora sent her a quick emoji warning, a siren blaring red light and screaming sounds: their conversation might be private, but she was visible on the small rectangular screen by the sensitive’s face.

“So, what do you think?” Dora said out loud. “About the painting?”

"I'm not sure it matters, really," Della interrupted as the woman went to speak. "Deren puts too much emphasis on man-made… sorry, human-made. Both still sell. Some clients even prefer machine-made."

Dora began to recite some marketing spiel, boredom creeping into her words as repetition had dulled her interest in what she was selling, “Tapping into the great unknown space of artificial creativity, this work by the synthetic being called AI-472 is number 17,423 in their series on the colour blue.”

"AI-472 only produced sixteen thousand or so pieces in its blue period. Then it was adjusted," the woman corrected her gently.

“I was making a point. Some people like that what they’re looking at came from inside a little black box and not a real person.”

"And some people don't," the woman said. "Some people see the joins. The unfinished stories in the work. The broken thoughts."

“And some people turn that ability into a lucrative business,” Dora said pointedly. “Do you need much more time to check this one over? You’re paid by the hour, aren’t you?”

“Oh, I’m done. The painting’s human-made. I’ll certify it if one of your bots can bring me a pen that works.” The woman brought out an official-looking document and a stamp set from the shabby canvas bag strapped across her chest. “How many bots are there here, by the way?”

“A few telepresence models. Some cleaning models too.”

“You don’t clean the gallery too?”

“In-person?! Can you even imagine!” Della squealed, and then added in a self-satisfied voice, “I haven’t left my flat in six years. Have you, Dora?”

Dora’s face on the screen reddened slightly. “Just the odd little party, you know. But with precautions, of course.”

“Didn’t you have to come in one day, in person, and clean the floors-”

“Okay, Della. I’m sure Miss – whatever her name is – will want to get on with the rest of her day now.”

“Except you cheated, didn’t you! You just used a telepresence bot, one of the good ones! Just to clean the floor!! I saw its knees afterwards; I knew what you did!” Della cackled, taking delight in catching her colleague out.

Dora’s bot stamped its foot, making the woman take a step back in fear.

"I should go," she said, her voice trembling.

“Sorry. Sorry!” Dora calmed herself, and her fake smile returned as she settled her hair. “Della and I were just playing.”

"You shouldn't be scared of these old bots anyway!" Della laughed. "Silly thing. They're not even autonomous. We're totally in control."

"It's not that. It's…" The woman swallowed hard. "Sorry, I just find it hard to be near the nearly human but not quite. And… I have to go."

They walked their telepresence bots with her to the door and made some banal and benign goodbyes. It was Dora who asked the second to last question of the Uncanny Valley Sensitive. Paid by the hour to look at things.

"Why did you have to come to the gallery? Going anywhere in person is so risky."

“I have all my vaccinations.”

“But everyone knows they can’t keep up with the mutations. Remember, ‘Stay home, stay safe’. Always.”

“I can’t… I can’t tell if a painting’s real or not through a simulation of that thing. A digital picture of the painting would have been another layer of the unreal. I had to stand in front of it. Feel what I felt.”

“Is that the same for simulated humans? For AI?”

Afterwards, Dora and Della couldn’t remember who had asked the last question of the Uncanny Valley Sensitive. Couldn’t remember her looking from one screen to the next. Looking at where the women seemed to be sitting at slender glass desks in aesthetically perfect individual flats full of the right coffee table art books, the most perfectly delightful objets d’art, and sweet little curios from holidays they’d taken before the virus.

Before answering, the woman took a look at the rain outside the gallery and pulled out a battered old black umbrella from her heavy bag as she went to leave. “If someone is simulated, then the digital version is the only version, whether it’s in a bot or a tablet’s screen. There’s nothing in the way. Anyway, have a good evening, ladies.”

“Well,” said Dora, holding up the certificate of authenticity the Uncanny Sensitive had handed her telepresence bot, bringing it closer to her screen to examine the elegant calligraphy. “I’m not sure she was worth all that money. Deren’s an idiot.”

"Hmmm," mused Della. "It all sounds like a bit of a scam to me. Closing-up time?"

“Oh, please. I want a glass of something cheeky and a bath.”

“Sounds perfect.”

Their wide red-lipped smiles froze on their faces. Their hair, elegantly cut in fashionable styles so ‘now’ that they almost transcended this moment, stopped moving as their heads stilled.

In synch, two identical clocks began their countdown to the next working day.

Atlas of Anomalous AI

Very excited to have an essay of mine, ‘When AI Prophecy Fails’, included in this fabulous new collection edited by Ben Vickers and K. Allado-McDowell:

Contributions from writers, philosophers and curators including: Blaise Agüera y Arcas, Ramon Amaro, Noelani Arista, Jorge Luis Borges, Benjamin H. Bratton, Federico Campagna, Arthur C. Clarke, Rana Dasgupta, Eknath Easwaran, GPT-2, GPT-3, Yuk Hui, Nora N. Khan, Suzanne Kite, Jason Edward Lewis, Catherine Malabou, Hans Ulrich Obrist, Matteo Pasquinelli, Archer Pechawis, Noah Raford, Nisha Ramayya, Beth Singler, and Hito Steyerl.

Artworks by: Anni Albers, Pablo Amaringo, Refik Anadol, William Blake, Ian Cheng, Ithell Colquhoun, DeepDream, Federico Díaz, Susan Hiller, Hildegard of Bingen, Pierre Huyghe, C. G. Jung, Hilma af Klint, Emma Kunz, Paul Laffoley, Lucy Siyao Liu, Branko Petrović and Nikola Bojić, Santiago Ramón y Cajal, Casey Reas, Jenna Sutela, and Suzanne Treister.

The Cake is Still a Lie


This morning the other parental unit and I downloaded Portal 2 on the Xbox One for the child unit, who has now completed the minimum eight rotations around the sun necessary to understand the tricky quantum tunnelling mechanics at work in the Aperture Science Handheld Portal Device, or ‘portal gun’. Possibly. Maybe.

There are many good reasons to download a Portal game (Portal 2 has a genuinely brilliant local co-op game), but of course, as soon as GLaDOS got her first mention I started thinking about AI. And cake.

If you aren't familiar with either Portal 1 or Portal 2, here's my TL;DR version of their game mechanics and plots (spoilers ahead, of course). In Portal 1, you play as Chell, a subject in the numerous testing rooms of the Aperture Science Laboratories Computer-Aided Enrichment Center. The portal gun you find enables you to create two linked portal ends, blue and orange, which you can walk through to come out somewhere else. Because momentum is conserved through portals, you can fall into one hole to get launched to previously unreachable areas and doors, and move on to the next room and, eventually, maybe, your freedom.


All the while, your efforts are being assessed and snarkily commented on by GLaDOS ('Genetic Lifeform and Disk Operating System', a homage to MS-DOS), an AI built by the creators of the Aperture Science labs. We find out during the course of the tests that GLaDOS turned on the scientists of Aperture, releasing a deadly neurotoxin that wiped them all out. So far, so psychotic femme-voiced killer AI.

GLaDOS' passive-aggressive comments on your progress, and on your pathetic humanity, also come with promises of a reward at the end of the testing process: a cake. However, we also see graffiti from a lab worker, Doug Rattmann (the 'Lab Rat'), warning that the cake is a lie.


We defeat GLaDOS at the end of Portal 1, ending up lying on the ground outside the facility gates, surrounded by the remains of the AI. But with the release of Portal 2, the original ending was retconned to show Chell being dragged away, back into the facility, with a robotic voice thanking her for lying on the floor and assuming the “party escort submission position”. GLaDOS then sings over the credits, letting us know that she’s Still Alive. She does indeed return in Portal 2, with even more test-rooms. For science.

I don't think it's a complete coincidence that we downloaded Portal 2 now, just as a whole generation in the UK have been tested and are now receiving their results, whether for their Scottish Highers, A-levels, or, soon, Scottish National 5 qualifications and GCSEs. Just as we've been watching an AI guiding, and lying to, a test subject, around 280,000 A-level students have had their grades adjusted downwards by an algorithm. Others have written detailed explanations of where this decision-making system has gone wrong, not only in its mathematical standardisations but also in how it makes assumptions about this school year on the basis of previous years. Go to an already high-attaining school (likely a private or public school) and the algorithm marks you upwards. Go to a school with a history of lower grades and there's no way you could really deserve that A you were predicted. They've played the game, been promised cake at the end of it if they work hard enough. But the cake was a lie.

And they are angry.
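For the curious, here is a toy sketch in Python of the kind of cohort-based standardisation that has been reported. It is purely illustrative – this is not Ofqual's actual model, and the function, grade labels, and numbers are all invented – but it shows the basic mechanism: rank the cohort, then read grades off the school's historical distribution.

    # Toy sketch (illustrative only - NOT Ofqual's actual model; the names
    # and numbers here are invented). Students are ranked by their teachers,
    # then grades are read off the school's HISTORICAL grade distribution,
    # so a school's past caps a student's present.

    GRADE_ORDER = ["A*", "A", "B", "C", "D", "E", "U"]  # best to worst

    def standardise(ranked_students, historical_distribution):
        """ranked_students: teacher's rank order, best student first.
        historical_distribution: grade -> fraction of past students,
        e.g. {"B": 0.25, "C": 0.5, "D": 0.25}."""
        # Build cumulative grade boundaries from the school's history.
        boundaries, cumulative = [], 0.0
        for grade in GRADE_ORDER:
            cumulative += historical_distribution.get(grade, 0.0)
            boundaries.append((cumulative, grade))
        results = {}
        for i, student in enumerate(ranked_students):
            percentile = (i + 0.5) / len(ranked_students)  # cohort position
            # The first historical band wide enough to cover this student.
            results[student] = next(
                (g for c, g in boundaries if percentile <= c),
                boundaries[-1][1],
            )
        return results

    # A student predicted an A, but ranked first at a school whose history
    # tops out at B, can only ever receive a B under this scheme:
    cohort = ["Asha", "Ben", "Cal", "Dee"]
    history = {"B": 0.25, "C": 0.5, "D": 0.25}
    print(standardise(cohort, history))
    # {'Asha': 'B', 'Ben': 'C', 'Cal': 'C', 'Dee': 'D'}

Notice that in a scheme like this the individual prediction never enters the calculation at all, which is roughly what the protests were about.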

I've written elsewhere about what happens when we trust 'the algorithm' too much; when we hand over so much of our agency that it becomes something we see as holding sway over our fate – so much so that we can feel that we have been 'Blessed by the Algorithm'. Theistic metaphors hold true here too: I imagine many of these students currently feel 'Cursed by the Algorithm' as opportunities they have worked so hard for vanish from view. While our governments may make more of those u-turns that they are becoming infamous for (Scotland has already said universities should accept non-adjusted grades), at the moment we have a clear example of too much trust in a bad system, as well as the suggestion that any system we placed so much trust in might be bad for us.

There are of course differences between GLaDOS and the code put together by Ofqual, the exams regulator, that has created this current mess. Like so many other science-fiction AIs, GLaDOS is credited with personality, intention, and even cruelty. However, our tendency to anthropomorphise even the most abstract of mathematically driven decision-making systems means students end up shouting 'fuck the algorithm', as though personally hurt by this string of code. They're also shouting at Boris and his cronies, of course, but there's a danger in slipping into anthropomorphic thinking when it comes to the systems that are being used by actual humans in power to make decisions about our futures.

The creators of Portal leant into this anthropomorphism to make GLaDOS a villain. Early on in development, they also realised that one particular test required anthropomorphism to make it work for the gamer. In a 'box marathon' test, the player has to get from one side of a difficult test room to the other while carrying a cube. At first, players kept forgetting the box and leaving it behind. Then one of the game's creators, having read some "declassified government interrogation thing" in which "isolation leads subjects to begin to attach to inanimate objects", suggested making the cube 'cute'. They added a pink heart to its design, called it the Companion Cube, and gave GLaDOS some lines about how it was unlikely that the cube was sentient – at least, it probably wasn't. Then, at the end of the box marathon test, the player has to learn how to use an incineration device – in order to defeat GLaDOS later on – and has to burn up the Companion Cube:

"Although the euthanizing process is remarkably painful, eight out of ten Aperture Science engineers believe that the Companion Cube is most likely incapable of feeling much pain." – GLaDOS

The player's connection with the cube makes this moment genuinely affective. But, as the Michael Rosen poem above suggests, the dangers of handing over agency to algorithms lie not just in reifying, or deifying, them but also in the counter-reaction: reducing those we enter into the system to being 'just' data. This is Goodhart's Law and then some: 'When a measure becomes a target, it ceases to be a good measure.' And when a student becomes something to be measured, becomes data – your school has produced U-grade A-level students, therefore you must be a U-grade student – the individual is lost in the algorithm.

Reifying and anthropomorphising the algorithm obviously obscures the humans – the students, in this case – fed into the system. It also obscures the humans behind the system. At the moment, students are trying to hold people in power to account, protesting outside the Department for Education in London. But it's increasingly obvious that a certain flavour of technological accelerationism is gaining ground in the policy thinking of the UK government, signified by the adoption of the 'move fast and break things' thinking of Silicon Valley. The Barnard Castle fan and Chief Advisor to the PM, Dominic Cummings, has become the poster boy for this kind of approach, but he wouldn't be so successful if there weren't others who agreed with him.

When this kind of thinking clashes with the realities of people's lived lives, we'll get these moments of rebellion against the algorithm, of Chell facing off against GLaDOS. But unless light is shed on the more insidious moments when algorithms are trusted too much by the government, we might not know all the times when the cake is a lie.

Some tech ethicists calling for more transparency are trying to find a middle way between 'fuck the algorithm' and 'assume the party escort submission position'. I'm always a little wary of including myself in this endeavour, as the banner of 'ethics' is increasingly being used – by corporations in particular – to obscure these very problems. They can't be unethical, they have an ethics board! But we can all get involved in bringing light to these issues. For instance, some, like the journalist Carole Cadwalladr, have pointed out the irony of Roger Taylor being the chair of both Ofqual and the Centre for Data Ethics and Innovation in the UK (the body tasked with ensuring algorithmic fairness).


(As I write this, at 2.56pm, there are reports that he will make a statement at 4pm, likely announcing a Scottish-style u-turn. We shall see.)

(Hi, it's 4.03pm, Beth here – yes, we have a u-turn! Thoughts with all the admissions offices who now have to deal with this revision when there's a cap on places!!)

The Portal games are challenging, mind-bending puzzles with amusing representations of AI and robots (and the co-op version lets you play as a pair of robots who are always being blown up, which is more fun than it sounds), but for all that GLaDOS guides Chell in them, they are not a guide for life. It's a few years until my small human needs to worry about exams, but already he's in a system that will regularly test and grade him like Chell. So I'll need to teach him about the algorithms that promise you cake in the real world, and about the humans who wrote the lie in the first place. Portal seems a good place to start…

You got to the end so here is the cake: a doodle I drew while writing my notes for this post 🙂