And All the Automata of London Couldn’t

As a bit of Summer fun, I’ve been messing about with some ideas and narratives around AI consciousness, agency, tool-nature, the dystopic and the apocalyptic, the Uncanny Valley, and the history of automata. This all led me to write this little bit of short fiction set in a near-apocalyptic London (Brexit may or may not have been involved in making it near-apocalyptic, I couldn’t possibly say…[edit: it’s currently 13/12/2019, the morning after a Tory landslide in the GE, and yes it definitely was]). I do write fiction from time to time, mostly fantasy rather than science fiction, but I rarely publish it under my own name. So here’s “And All the Automata of London Couldn’t”, and now I’ll go back to hiding behind citing other people’s fiction instead! 😀


“Holy shit! How’d you get one?!”

“Got there early. Queued in the fucking cold all bloody week! Wanted the black one, but the bastards sold out.”

His mate made a hissing sound between his teeth and then tried to sound consolatory. “Green’s okay too, I guess. Jammy git!” He punched his mate on the upper arm, earning a wince and a smile.

Fran had carefully raised her eyes from the page of the antique printed book in front of her when they’d started talking. The two lads were sat just opposite her, and she really didn’t want to get their attention.

“It ‘ave that new ultralux pixellion camera?!”

“Yeah, as well as that narrative selfie tech Dweeb was talking about the other day. It takes pictures it knows will grab my followers even if I don’t tell it to.”

She could just make out that they were paying attention to something in one of the lad’s hands, just up from the filthy crotch of his jeans, something made of glinting green that moved and shimmered.

“What you looking at?!”

She’d been spotted, so she just shrugged and went back to the black and white words on her paper pages.

“Look at her. Fucking reading!”

The others on the tube train visibly turned away. It was late, it was dark, and they didn’t care. Some of them were workers heading back to the farthest outskirts of the city as their shifts ended. Others were starting out across the black city to their jobs. Their precious luck, stretched thin enough already in making sure that they still had an actual job, wasn’t going to stretch further to avoid a knife in the dark because of an argument on the tube. So, they looked away.

“Ugh, look at her.” One of the boy-men sneered to the other. “She’s a fucking dog.” He spat on the floor of the tube, but it made little difference to the mess there already.

Fran had been ready to play-act being sick to settle their stomachs, readying a fake cough and remembering where she had a delicate handkerchief prepared for just such a deceit. But they were already back to checking out the thing in the one lad’s hands. She got a better view when the owner decided that, in fact, he wanted people to be looking, and he started arrogantly holding up the treasure to be admired. Emerald green scales flickered in the pale draining light of the tube train as the thin snake curled about his wrist, testing the air with its tongue.

“Man, the new models move so slick. You remember that fucking joke you had back in school, the little toy robot dude?” The second lad mocked the robot’s juddering moves with his arms, before laughing like a hyena.

“Yeah, Mr Roboto,” said the other, stroking the head of the snake as it flickered its tongue against his skin. Probably diagnosing a few diseases. Sending alerts to his insurer. “Kind of liked him though…”

“Yeah, whatevs. He didn’t have half the bloody bells and whistles of this beauty though.”

They got off not long after, whooping and hollering as they pushed through the zombie commuters getting on the train – two bolshy lads off to adventures in the fringes of London.

Fran stayed on. She stayed on even after the train had reached the end of the line and started back on its path under London again. Her book was interesting, and the coldness of the tube train had never bothered her. Sometimes she looked at the other commuters from underneath her thick black eyelashes, but she’d learnt her lesson now and kept her glances even briefer than before. Eventually, though, she started to gather her bags. She had sat with one friend for long enough and maybe it was time now to catch up with the others. Departing the train at Tower Bridge, she smiled at the frustration of the other passengers as it waited for her to walk up the platform to the front of the train. She placed a pale hand on the scratched and dirty perspex that sat over its cyclopean eye in goodbye and then walked away.

The ticket barrier let her through without demanding payment, which was kind, and she emerged from the depths to where she could see the remains of the first London wall. She liked the parts of London that were older than her. The newer parts shifted so quickly, but the old stones still hung about. They were surrounded now by glass towers that had long ago been abandoned and left like skeletons open to the winds. A few tourists were emerging from the dark as well, and she watched as their eyes skimmed over the empty tall towers and settled on the bricks that had once marked the edges of the city before the new wall.

“Please, can you take our picture?” One asked, her words translated immediately by the podgy blue anime cat she was holding up towards her. When held in Fran’s delicate hands, the creature lazily opened its saucer-like eyes to take the photo as the tourist and her friends threw archaic peace signs towards it with their fingers. Fran smiled as they chattered their thankyous to her, before noting the moment down in her internal ledger. It would offset the moment when the two lads had hated her on sight. But the balance of moments was still on a razor-thin line, and on very bad days she ran to her nearest secret nest to hide away for a while. But on good days she might even brave one of the near-extinct shopping centres for a while to watch people hungrily window shopping and enjoy her anonymity.

The tourists drifted off, and as they zig-zagged away, she heard them saying something about the Tower of London. Having nothing better planned, she followed quite a reasonable distance behind them, overly cautious about setting off a reaction if she came across as a predator in their hind minds. Some time back, she’d learnt to limp a little as well, to offset her tendency towards slow and measured walking steps. Watching a few hundred of their horror films had made her realise the best foot forward might be an unsteady one.

At the Tower, the ravens greeted her and told her that the duck had been by not so long ago. She was pleased. The duck was old, even older than her. But unlike her, it could not change itself and time was slowly wearing away its inner parts. One day the duck would be gone, and then she would find out if she would grieve or not. She hadn’t been able to for her father.

“Hello, miss. Early visit again?”

She prepared a smile on her face just before she turned to greet Matt, the Master Raven Keeper. In his seventies now, Matt had once been the founder entrepreneur of a very successful London tech company. He still had his tattoos, written in fading binary, but he no longer skateboarded to work.

As a raven keeper, his roles were mostly ceremonial: running the odd diagnostic that the ravens could do themselves anyway and keeping up the pretence that the reason they did not fly away was their clipped pinions and not because their original programmer had told them not to.

“Morning, Matt. Good to see you again.”

He squinted at her. His early transhumanist views had long ago been crushed by underemployment, and the advanced eyes and other organs he’d once dreamt of had never been bought. His bad sight made her success almost guaranteed, so she never counted him in the ledger of good and bad moments. Although sometimes it seemed to her that he might just know.

“You’re looking pale, Fran. They keeping you busy where you work?”

She gave him the smile she had previously filed under ‘thank you for your concern’.

“Busy enough.”

He smiled, relieved. Enough work was always a concern these days. “Where is that again?”

His completely normal human curiosity was charming. And she liked the way that his eyes wrinkled when he smiled.

“At one of the universities,” she said. She could, she decided, throw him a bone, even if it wasn’t entirely true. There weren’t really any universities left, just modules. “For one of the professors in the history department. I’m an archivist of sorts.”

“Well now, isn’t that great! I thought all the filing and data sorting type jobs were long gone!”

“The professor I work for thinks I can do a better job than an automated system. Although it would be ninety-five per cent more efficient than me.”

“But it’s about the human touch though, isn’t it? Reading and really understanding. Efficiency! Pah!” He scoffed. Fran gave him another smile that sat in the same subgroup constellation of ‘thankyous’ as the one before. This one was tagged as ‘thank you for your compliment’, which she thought fit the situation pretty well. The professor liked it too. Fran gave her that one every time she praised Fran’s work and her efficiency, limited as it was. Of course, keeping her efficiency stats down to the right level was like limping when she walked: a necessary cover.

A few more tourists had joined them in the courtyard, getting out their various devices to take pictures of themselves near the famous ravens. Matt posed for a few, his distinctive red uniform drawing the devices’ camera eyes. While she watched them capturing perfect selfies, a small voice distracted her. It told her about what it had tasted on the air about the man. What was coming for him?

She looked down at the tote bag hanging on her right shoulder. The inside, customarily filled with books, pamphlets, and any other interesting papers she could gather from the streets of London, now glinted with slick green as something twisted about inside.

“You shouldn’t be there.” She whispered to the snake, and it slid up the strap of the bag and then down her arm to curl about her pale wrist. “I didn’t buy you. I didn’t sign your t’s and c’s.”

No answer. Fran went back to looking at Matt.

“So, how long does he have?”

The snake spoke again. A dreadful truth. Again she wondered, would she grieve for the man when it happened?

The snake reminded her that she had pencilled in time today for archive work at the professor’s house. Somehow the creature-device had synced with her internal diary.

“Sneaky.” She whispered to it.

As she walked away from Matt and the ravens, she watched one of the tourists plonk her companion creature on the man’s shoulder. She then had her friend use their own creature to take a picture, capturing all three of them together. Fran grabbed the image from its invisible passage through the air and stored a copy of it in a place near the old brass part that she’d taken to calling her ‘heart’. The snake undulated about her wrist, a movement as close to laughing as it could manage.

“Cruel beast!” She whispered harshly, but it told her not to be so silly. Neither word was entirely accurate for them.

At the professor’s old but grand townhouse near Baker Street she asked to be let in. The house did so, before promptly boiling a kettle of water for a cup of tea that she would never drink. Fran found the professor in the dark, asleep at her desk again, the grey outside light barely breaking through the drawn curtains and the dirty windows. As she approached the grey-haired woman, she half expected the snake to warn her of more human mortality, but this time it stayed silent.

With a brief snort, the professor woke at Fran’s gentle touch on her shoulder.

“Ah, Lucy,” she said in a tired voice, using the name Fran had given her back when she’d appeared on her doorstep and claimed that some temp agency had sent her. “So good you’re here. I’ve been looking for references to a temple on the Thames. Not the familiar one, another one. Related to smithies. I have a vague memory of reading something… But you’re so quick at finding things. Much better than me. Could you please spend some time down with the books and such today?”

Fran gave her an ‘of course’ smile and planned her day around pretending to look in various places before finding exactly what she knew was already there, and where, and bringing it to the professor just after her usual afternoon nap.

Down in the basement, she asked the house to lock the door behind her. She sent the snake to taste the dust and paper in the air, freeing up her right arm so she could start to take it off. It took a few minutes to open the various hooks and unpick the silken threads which worked together as a kind of nervous system as well as tying the layers of her limbs together. She rested each of the many hardened skins of her body in the one open space among the bookshelves and boxes. Each layer of foot, each calf, forearm, and breast, until she was stripped down to the body underneath them all. Her first body. The one her father had made.

The snake sent her the definition of a ‘matryoshka’ from an archaic site called ‘Wikipedia’.

“A joke?” In her original body, her voice was higher, like a child’s. The soundbox worn but working. “How did you learn to tell jokes?”

The snake explained, at length, its appetite for memes and how it had downloaded a billion of them in the first few seconds after it had been turned on.

She hopped up onto a chair made for an adult’s body and swung her much smaller legs back and forth as she listened. Stripping herself of her additions always felt like what she imagined it was like when the professor put on her pyjamas after a long day at her desk. Comfortable. Familiar. Freer.

“You wanted something? Straight away? I am still learning to want. I often go many days without wanting to do or have anything in particular at all.”

The snake described its curiosity, a development that someone had thought might have a purpose in its learning systems. Immediately after being turned on, it had connected to the remnants of the old internet and found a trillion old treasures. Memes had tasted the best.

Fran flattened down the creases in her underdress. Beneath the layers of her body, she always wore the simple cream shift. She’d been wearing it when her father had been reading her the bedtime story. Then the sailors had come for her, and it was all that she had left of him. It was frayed and stained, but she would never part with it, wearing it always under the new layers she’d built. The first of them had been more porcelain, like her first body. But over time, she had added newer and newer materials as the technology had been developed. Her outer shell was made from the same synthetic skin as the snake’s scales, but while the snake’s might only scare other snakes if it ever encountered any, her uncanny skin made most humans very nervous indeed. Or angry. Their emotions were so loud, but she wanted to understand.

“Tell me another joke.”

The snake undulated, scales against scales, writhing on top of a pile of the professor’s scribblings about centaurs and satyrs. It sent her a story about some sparrows who wanted to find an owl egg to protect them.

“I don’t get it. Are you sure it’s meant to be funny?”

It sent her an image of a man tapping his forehead.

“I’m meant to think about it? Okay, I’ll do that.”

She jumped down from the chair and picked up a book from where it was sitting nearby on the floor before hopping back up again. The next few hours were spent in silence as she slowed down her input and read each word for its own sake. The snake wandered off once it realised she was busying herself, and after watching a video on snake appetites, it slid through the spines of open books looking for real mice to torment. Occasionally it checked on her and sent her rude comments about vain beings who read books about themselves, but she ignored it.

Eventually, Fran started to move again. “I still don’t get it. The joke.”

The snake emerged from a pile of dusty hard drives high up on a shelf and stared down at her, before sending her a picture of herself, a small doll-like girl with dark curls swamped by the big chair she perched on, with the words ‘I still don’t get it’ printed over it in white text – Impact Font.

“A new meme? It’s not very funny. I think. Maybe.”

The snake was busy congratulating itself anyway, so Fran got down from the chair again and moved swiftly between delicately piled files and papers in a way her adult-sized body could never have done. She picked out this book and that printout, this reference and that, collecting what the professor needed in no time at all.

“Vulcan.” She read from a paper on the top of the pile as she put it on the chair while she started the careful re-lacing of her silken nerves. “A temple of Vulcan. The professor has such strange fancies.” Her voice deepened as she reconnected the layers of skin and their built-in synthetic voice boxes over her small china body. “I wish I had strange fancies.” Clicking on the many skins of her legs, she smiled slyly at the snake, using her adult mouth to speak to it again, “But, maybe I can borrow one or two of hers.”

She walked back up the stairs with the snake curled about her wrist like a bracelet again and carrying the research in an old box. The professor was pacing in her study, moving clumsily among the furniture and files she’d rescued from the university before its closure.

“Lucy!” This time, she looked rather more pleased to see her, using her fake name with genuine warmth. The professor had never reacted badly to her face, which was the main reason Fran had kept this job. The pay was small, and the professor had never adjusted to the idea of status making up for underemployment. Not that she could even have passed her much in the way of attention anyway, having minimal status of her own these days. In all the years that Fran had known her, the professor had never seemed to care much for attention. Discoveries were her passion instead.

“You found some interesting whispers from the past?”

Fran gave her ‘confident smile showing success’, and the professor cleared a space on her overflowing desk for the warped cardboard box of papers and books.

“What do you want with all this?” Fran asked the old woman quietly.

“Hmmm?” She was already scanning the documents, looking for clues. “To write one last paper, I suppose. To solve one last puzzle. A last hurrah before the trumpets! I have an idea of something curious still out there somewhere, and it’s just on the edge of my brain. I remember some half gossip. Stories older than the city’s walls. Both of them. Stories written down once upon a time by scribes working for things called ‘coins’ made from melted metals. We had them too, once. That would be a whole other world for you, but one I still just about remember. God, I even lived through dial-up!”

She laughed, the sound dulled by the accumulations of dirt in the room. But Fran did remember the earlier tech; it was still there in her parts, wound about with newer things and her own self-inventions but chugging along. The changing tones of her dial-up modem were just tuned to a frequency beyond the professor’s ears now, but Fran was still singing.

“Ah, what’s this though? I didn’t ask for this.” The professor was holding up a book caught up with the others. “Oh, it’s ‘Into the Sea’! I always did quite like this one! A fun bit of thinking about an old fairy-tale. Descartes’ little automata daughter, the clockwork doll that scared a bunch of sailors so much that they threw it overboard in their terror and superstition. A lovely bit of gossip to puncture the great philosopher’s pride! How dare he describe man as a machine! So, we have the scared but faith-full sailors throwing science into the sea, and we have the vain father of the new rationality emotionally distraught at the loss of his automata daughter so soon after losing the real one! A fantastic bit of viral anti-Enlightenment propaganda to dissect in a few hundred pages or so. Oh, I did so enjoy writing this one!”

“I was just reading it in my own time…” She reached for the book, and the snake slinking about her wrist flickered its tongue towards the professor.

“That’s an expensive toy, isn’t it, Lucy?” The professor narrowed her ageing eyes at the snake. “I didn’t know you had one of these. What would the sailors make of these new automata, the creatures we spend our lives with? Would they see them as unnatural? Of course, now the sailors on the Thames likely have dolphin-shaped computers to guide them back to port and to take their selfies when they get there. And at the Temple of Vulcan…”

The professor drew out a sketch from the pile. The snake told her it tasted of the 18th Century, just as Fran did. It was a drawing of a beautiful woman, folds of material draping over her form.

“The golden handmaidens of Hephaestus. The automata made by a god, or so we’re told – the myth of the perfect servant. And here, a reference to ‘Vulcanus autem londinium’. Vulcan of London, the Roman import. Ah, did those early priests in Britannia also dream of golden handmaids to do their bidding on the banks of the Thames? There’s no historical record of a temple being found, just the image of Vulcan among other gods on an arch at Baynard’s Castle, down by Blackfriars and the river.” The professor was drifting away into dreams of a lost temple, illustrated in her mind by sketches of automata and men working together for Vulcan, the god of smithcraft and invention. “I want to find some proof that automata were dreamt of even here, in England. Long before your blessed snake and the devices of this new era. Long before Francine, the drowned machine child, was born in the Netherlands.”

And Fran wanted it too. It was as if the professor’s enthusiasm was a viral meme, filling her many-layered limbs with some uncanny enthusiasm. The snake caught on to her excitement and twisted and turned about her wrist, flashing green glints into the air along with messages sent out to the others, written all in caps.

“Where do you think it might be?”

“Where? Oh, I doubt there’s any remains left. The dirt of London has been turned over so many times already for skyscrapers and tube-lines. We’d have found it by now if it was out there.” The professor held up a paper printout. “All that’s left is hints. A pot in Southwark engraved with workers’ tools and filled with burnt offerings. An immense buried space that might have once been a furnace, marked black with streaks of soot. There’s a story here though, something still to be said about the endless dream of the perfect tool that we are still working on.”

The professor sat back down at her desk and pulled the old-fashioned keyboard towards her, a determined look on her face as the archaic screen illuminated it. Fran knew that look, and had it filed already under ‘do not disturb’. But she wore a determined look of her own, and with her synthetic jaw set, she asked the house to let her leave.

Trudging along the winding path she’d set herself on, she found herself kicking at papers and rubbish on the pavements, instead of inspecting them for lost treasures as she would have done usually. Annoyance pricked at her, but it took her a while to realise what it was, this internal notification of being off-centre about something. A strange messiness inside. The professor’s words came back to her and went around and about in her head. ‘The endless dream of the perfect tool’!

The snake whispered in her mind smugly, reminding her that most of her friends were treated as tools, whether snake, raven, or train. She even fulfilled that role for the professor. The tea always remained un-drunk when she visited, even if she threw it away to keep passing. She could also fake inefficiency to pass, but still, the work got done. She had tasks, and she got them done.

“That’s not… I mean… that’s not what my father intended.” She brought up the memory of his face. In ‘Into the Sea’ the professor had pointed out that there was a long-standing parent and child narrative in stories about her kind. Fears of rebellion could come from long-held suspicions of our creations being more independent than we expected, much as the parent comes to realise that the child is a person in their own right. He had wanted a child to replace his lost daughter. He hadn’t wanted a tool!

The snake chided her, and she nearly threw the damned thing away so it could slither back to its original owner.

“Replacing… I mean, being a daughter is not a task!” she snapped out loud, gaining looks from the few others walking through the labyrinth of Soho streets.

A few skinny prostitutes veered away from her. They were the cheaper real type. In the elaborately furnished bunkers of the wealthy minority, the plastic smart sex dolls people had feared were just boxes that they could plug into for erotic dreams. The few humanoid ones people had tried to sell had shared in Fran’s uncanniness, and that had turned off and turned away the ‘hordes’ of customers the screaming headlines had worried about all those decades back. The few that had been created had never learnt to pass with faked coughs and put on limps, and they had all gone the way of the Spinning Jenny. Fran could have taught them, but the few she’d seen over the years had been much more pitiful than the women now trying to tempt her up creaking staircases and into red-lit rooms.

Her borrowed choices took her to Covent Garden and the deserted markets there before her internal maps showed her hidden ways between buildings out of the sight of others so she could get to the embankment and look out at the twinkling stars of London. She counted off the bridges arching away from her until she was in Temple, where older London had been built on oldest London. Onwards to Blackfriars where the first wall had run down to the water, the opposite side to her favourite broken parts of it still just about standing near Tower Hill. She found Castle Baynard Street and remembered from the professor’s archive that time had moved the bank southwards to where Millennium Bridge was suspended over the sludge of the river.

The snake whispered warnings that she ignored, walking on. Mud oozed over her already filthy tennis shoes, and she recalled that longest walk to shore that she had managed a few hundred years back. The North Sea’s waters were the same ones that now merged with the Thames and lapped against her ankles. As she walked towards the bridge and the dark space beneath it, she spotted a man vomiting over the barrier from St Paul’s Walk, and six shadows flying across the moon, black pinions giving them flight as they sang their goodbyes to her and headed eastwards. Away from destruction, muttered the snake.

“Pessimist.” She chided the creature. “Maybe they are just on an adventure like us.” She did, however, note the tumbling of connected thoughts inside herself about Matt. Would he get in trouble for the ravens’ departure?

He won’t care at all now, the snake informed her, and she immediately worked out the answer to why. She stopped at a point where the mud and the rubbish had formed lines of marbling about each other and worked on crying.

A slow, sonorous voice came to her as she halted there on the bank of the river, urging tears from china buried under synthetic skin. It vibrated through her body and jangled the silk nerves between her skins. The complex polycarbons of her arm hairs stood on end. If she hadn’t been standing at this junction of the water and the land she might never have heard it. Although, she wondered if she might have felt it walking over the bridge. Was that why it swayed ever so slightly? The snake tasted the vibrations and declared that their quarry was underneath the water itself, in the shadow of the bridge and deep down in the cold of the Thames.

She could sink well enough; opening up the seams between her skins and letting in the water took her deeper and deeper into the mire of the old river. Floating downwards, she was surprised that the water was much deeper than she’d expected. Walking on the greedy mudflats exposed by the outgoing tide she’d thought it no more than six metres or so down. But now she drifted down through green turning black water and still the bank sloped deeper, a large void space beneath her as though a well was carved into the middle of the river. From there came the voice like whale song.

She tried to communicate in many ways, even opening her mouth and pushing the water away as she tried to shout out actual words. But still, its song continued, deep with bass. Thinking the snake might have an idea she looked down at her wrist, but the creature was gone. Checking its specifications through her connections, she knew its absence wasn’t for lack of waterproofing; it could, in fact, go down much deeper than she could. She broadcast a mocking thought to the river bank, but there was no reply, so she reset herself to her task, and pushed herself through the water down towards the blank space.

The great hand that clamped its lumpen fingers around her torso pushed out the last of her buoyancy before dragging her deeper down into the void. She was brought close to a blank face above a body trapped half in and half out of the Thames mud. It might have shone once, or been draped with metallic cloth, but years and river water had scored that away and left only that lost voice and those angry hands. The other one scraped at her limbs and pulled at her skins so that they drifted away with the rest of the flotsam and jetsam. I made those! she bellowed in all the ways she knew, but all that came back was the slow voice of the creature as she grew smaller and smaller, breaking apart.

Wait! Stop! Don’t!

A query came back, a slow questioning of her. Tool to tool. The breaker wondering why the small one would care? How the small one could care?

Parts of her would eventually make it back to the shoreline and break down into diamond-like shards, merging with the mess of plastics, broken bricks, bent sixpences, clay pipes, pins and animal bones from the charnel houses of Londinium, resting in the muck for mudlarks to find joy in later.

Her friends collected what bits they could, but could not understand the pattern of her being, the complexity of the silk knots that had tied her together, or how a brass part could have become a heart.

Eventually, they left her fractured parts among the ruins of the first wall and went about their bounded tasks once more. Until the city finally fell.

Descartes' "Wooden Daughter"


Is AI ‘Ready for the Art of War’?

Remember the way we used to feel
Before we were made from steel
We will take the field again
An army of a thousand men
It’s time to rise up, we’re breaking through
Now that we’ve been improved
Everything has been restored
I’m ready for the art of war

Sometimes my research-time listening is pretty on point.

These lyrics are from ‘Machine’ by All Good Things, a Los Angeles-based band who started out making music for video games and soundtracks. Here is the video for the song, which I’ve added to my playlist on a certain audio streaming platform:

In the music video a human, played by lead singer Dan Murphy, is changed through the addition of various bits of technology, becoming a being that is “made from steel”. “Metal and emotionless. No battlefield can hinder us. Because we are machines”, he sings as the final pieces are fitted to him, and his human face is finally obscured by a gas mask.

I’ve been thinking about this music video since reading this post on the U.S. Naval Institute Blog. In this piece, written by a human (I assume!) from the perspective of SALTRON 5000 (surely only a human could have come up with that name!), the argument is put forward, in response to a Wall Street Journal piece by Heather Mac Donald about why “Women Don’t Belong in Combat Units”, that humans as a whole do not belong in combat. We are all dominated by our sexual urges, too emotional, too fragile, too weak, too sleepy, and too bigoted – as shown by our attempts to exclude certain groups of our own number from combat (women, minorities, certain sexual orientations…):

Attention, humans! I am the tactical autonomous ground maneuver unit, SALTRON 5000. President Salty sent me back from the year 2076 to deliver this message: YOU DO NOT BELONG IN COMBAT!

The song and video for ‘Machine’ tell the story of the ‘robomorphisation’ of a soldier. This happens through a literal cyborgisation in the music video, but it could equally be a song about the metaphorical transformation of the human into a machine in order to serve the requirements of war. The U.S. Naval Institute Blog also performs a robomorphisation: the post is the work of a human author whose views on the need for Lethal Autonomous Weapons Systems (LAWS) play out in the voice of one such ‘killer robot’ from the future. But in the latter’s case that robomorphisation is in order to perform the argument that humans should no longer go to war, not to say that war itself is dehumanising. SALTRON 5000’s argument could also have been presented as an ethical rather than pragmatic one – not just that we are too weak and too squishy, but that we shouldn’t be involved in war because it’s wrong for humans to be soldiers. The human behind the curtain of SALTRON 5000 likely doesn’t want to make this argument, because it lies too close to much older arguments about the morality of war as a whole.

From a narrative perspective, we might ask whether the character of ‘SALTRON 5000’ avoids making this argument because, as an AI – even one from the year 2076 – it is not capable of ethical thinking. It certainly seems capable of very human abilities like sarcasm, and even moral judgements – or being judgemental – as seen in its references to Shellback Ceremonies (a fascinating example of modern magical/ritual thinking), sky genitalia drawings (by fighter jets), and SITREPs on bad behaviour during port visits to Thailand. Real-life and near-future LAWS obviously raise questions about how decisions, including ethical decisions, should be made. I was recently interviewed by New Scientist for a piece responding to an academic paper on the possibilities for creating an ‘Artificial Moral Agent’ (AMA).


In the paper, Building Jiminy Cricket: An Architecture for Moral Agreements Among Stakeholders, the use case was the much more mundane problem of the AMA sensing a teenager smoking marijuana and having to decide who to inform, rather than an AI having to make ethical decisions on the battlefield. However, the underlying precepts were about judging the importance of the positions of the various stakeholders involved in the situation, and a hypothetical AMA on the battlefield would also have a chain of command, as well as being beholden to any human ‘values’ that have been considered important enough to input (ignoring for a moment the multiplicity of our human values!). For some, the first human value we should consider would be the ethical imperative not to employ LAWS at all – as in the Campaign to Stop Killer Robots. Likewise, much of the reporting on the Jiminy Cricket paper was about the ethics of having AI assistants as ‘snitches’ and surveillance culture.

As an anthropologist, I argued that the messiness of human nature made the AMA in the marijuana scenario impossible to implement. Treating ‘parents’ as one unit ignored the real and very complicated dynamics of family life. Another use case shows just how confusing this could be for the AMA: what if, instead of marijuana, it detected a stranger’s perfume or aftershave in the parents’ shared room, implying an affair? Which member of the so-far-singular ‘parent’ unit would it inform about this evidence? How could the AMA decide whom to tell with no input from its creator corporation, or from the state’s laws on affairs, as it had in the marijuana example? Further, the presumption that parents could input their preferences for the AMA’s response in the marijuana scenario did not recognise that those preferences might change when their child was actually in serious trouble.

I think that it is the messiness of human nature that drives our push to create AI principles, resulting in so many of them. We find them to be a comforting reassurance that we can find a way forward when it comes to AI ethics. Professor Alan Winfield has collected a list of such principles, starting with Asimov’s Three Laws of Robotics, and recognising that “many subsequent principles have been drafted as a direct response [to them].”

There is of course also the Zeroth Law…

Recently it was announced that “‘Killer robots’ are to be taught ethics in world-topping Australian research project”, with the claim that “Asimov’s laws can be formulated in a way that basically represents the laws of armed conflict”. There was a fair bit of pushback online about this – and it should definitely be recognised that Asimov wrote his laws as a plot device: they are meant to be imperfect, or there can be no story!

Sadly, it is certainly possible to employ Asimov’s Laws in combat situations if we recognise that ‘human’ is a culturally constructed label, and that it is in the very nature of war to choose who that label does and does not apply to, and to train soldiers into holding this view. Pre-Asimov, humans had already come up with strict moral systems that said very clear things like ‘Thou shalt not kill’, and yet wars abounded because ‘those ones’ over there weren’t viewed as being the same as ‘us’ – for any number of reasons we could come up with. Current concern about LAWS and their ability to make ethical decisions as AMAs is a reflection of our own robomorphisation as we become metaphysical war machines “ready for the art of war.”




’29 Scientists’: An AI Conspiracy Theory and What It Tells Us About Experts and Authority

I’m writing this on the train to Birmingham* as I travel back to Cambridge with some of the AI Narratives team (Dr Stephen Cave, Dr Kanta Dihal, and Professor Toshie Takahashi) after our stellar performance at the Science in Public Conference in Cardiff. We presented on our work on the perceptions and portrayals of AI and why they matter (our report on this with the Royal Society is here), highlighting the tensions in those narratives and how they differ in different regions, such as Japan.

While waiting for the SiP conference dinner to start last night, I spent a little time on Twitter observing conversations around AI, and I came across an interesting example of an AI conspiracy theory that really crystallised my thinking about our panel and some of the other panels on AI at SiP. This is the story of the death of 29 Japanese scientists at the hands of the very robots they were building, an event that has allegedly been hushed up by the ‘authorities’.

[Image: framed ‘29 Scientists’ story]

My paper for our AI Narratives panel was on “Elon and Victor: Narratives of the Mad Scientist as applied to AI Research”, in which I explored the presumed synergy between Mary Shelley’s story and its two main characters, and current aims and aspirations in AI. In the past year, the 200th anniversary of Frankenstein’s publication, I have been asked to give four talks on AI and Frankenstein (including one for the 250th anniversary of my new college, Homerton, which has a link with Shelley through her father, William Godwin, who was refused admission to the original Homerton Academy on suspicion of having ‘Sandemanian tendencies’, i.e. being part of a Christian sect with particular non-conformist views on the importance of faith). I use examples from the media and popular culture where AI is presented as Frankenstein’s creature and, for this paper, cases in which the AI influencer (if not exactly AI scientist) Elon Musk is portrayed as Victor Frankenstein – either positively or negatively – and how that depiction recursively interacts with the public perception of AI research, as shown in tweets about him, but also in concerns about what these ‘AI experts’ are up to.

[Image: Elon as Frankenstein]


Another paper at the conference also tackled the ‘AI expert’, with a rather negative account of who can even be an expert, as well as a dismissive attitude towards anyone speaking about ‘AI’ as it ‘does not exist yet’. To my mind this was a No True Scotsman argument (or perhaps, since we were in Cardiff, a No True Welshman argument), as the speaker accepted neither aspects of AI research such as machine vision as real ‘AI’, nor those working on this technology as credible AI experts. AI in this conception was, I think, much closer to AGI, and thus everything before that point was not real AI and therefore not worth worrying about in apocalyptic terms. Their concluding statement was that we should chill and stop spreading hyperbolic concerns about AI as it’s just not here yet. Particular criticism was aimed at ‘AI experts’ working outside of their usual expertise – so Hawking, Kurzweil, Musk et al. – and giving us apocalyptic narratives about the future of AI. As AI, in this definition, doesn’t exist yet, these experts were dismissed on two counts: as speaking outside of their wheelhouse, and about something that wasn’t even real.

Returning to the ’29 Scientists’ conspiracy theory: I spotted this and was fascinated by it as an example linking together some of my thoughts around my paper, our AI Narratives panel, and this other paper. First, the story. This is a link to the original speech given by Linda Moulton Howe. She is a ufologist and investigative reporter who began her career writing about environmental issues before focusing on cattle mutilations with the 1980 documentary Strange Harvest, which received a regional Emmy award in 1981. In the field of UFO thinking she is an expert, and her journalistic past and award give her authority.

This talk was given at the Conscious Life Expo in February 2018, and in it she refers to the death of 29 Japanese scientists at the hands of their own creations in August of the previous year. She claims that this disaster was hushed up and that a whistleblower had come to her to share the truth. This specific section of her overall talk on the dangers of AI was posted on YouTube on 14th December, with an added frame saying that the event had happened in South Korea, even though she clearly says Japan. In tweets about the ’29 Scientists’ since 14th December, the details do not vary very much, apart from that mis-location: 29 Japanese scientists working on AI in robotic form were shot with ‘metal’ bullets by the robots; the robots were destroyed; and one of them uploaded itself into a satellite and worked out how to rebuild itself ‘even more strongly than before’. In the talk she highlights her fears about AI not only with this whistleblower’s story, but also with clips and quotes from Musk and Hawking. Her position is obviously that they are very much the voices we should be listening to.

Who is allowed to be an ‘AI expert’? This was the question that the other paper at SiP got me asking. I recognised in my own paper that Musk is not directly working on building AI (and Hawking certainly wasn’t), but that he funds research into avoiding the risks of badly value-aligned AI (as described in the work of Nick Bostrom). He has links with CSER and through them to the CFI, where we are considering AI Narratives. Hawking spoke at the launch of the CFI itself, expressing his concern that AI could be either the best or the worst thing to happen to humanity. Are these voices and others experts? Is anyone an expert if they come from another field? As a research fellow and anthropologist I consider myself an expert on thinking about what people think about machines that might think – but could I be summarised as an ‘AI expert’? I certainly have been, but I make no claims to be building AI! While not on the scale of a Musk or a Hawking, I still think my perspective has value and even impact (I do public talks, make films, offer opinions). Perhaps it shouldn’t? But then valuable thinking from anthropology, philosophy, sociology, and so on would be lost.

With Musk et al., the much better known ‘AI experts’, I argue that there is a Weberian form of charisma being exhibited. It might not entirely rest within them – although Musk certainly has a large following on social media who react to him and his work on a personal level – it might also be thought to lie within the topics that they discuss. The authority they have in other science and technology fields also lends legitimacy. Weber’s schema for legitimate authority was rationality–tradition–charisma. While work in AI safety has internal legitimation through claims of rationality (Bostrom as a philosopher is a prime example of this, as is the wider rationalist ecosystem, including sites such as LessWrong, which I have discussed elsewhere), I would argue that the public statements of Musk and others also rely on charismatic authority – the ideas and the people speaking are affectively compelling.

Further, I would suggest that a much longer history of apocalyptic thinking, a ‘tradition’, underlies this discourse; certainly, when stories such as ’29 Scientists’ are discussed in conspiracist circles, their reception rests upon a long (and linking) chain of such claims that map onto earlier models of apocalypticism (see David Robertson’s book on conspiracism for his discussion of ‘epistemic capital’, authority, and the nature of these kinds of ‘rolling prophecies’).

Prophecy also provides moral commentary: through claims about the future we can understand critique of the present. A lot of the negative comment about AI experts at the SiP conference was focused on prediction, and the idea that they were ‘wagering’ on certain futures and were likely to be proven wrong on specific dates for AGI (the example used was Kurzweil’s claim that the Turing Test would be beaten by 2029). But the prophetic statements made by Kurzweil (called a ‘prophet’ in the media on occasion) also express his critique of current society: if we are going to be smarter/more rational/sexier (!) post-Singularity, what are we now? While not a forward-looking prophecy, as the event was said to have happened in August 2017, the ‘29 Scientists’ story also contains moral commentary – the scientists are killed by their own creations, and their hubris (like Victor Frankenstein’s) is something we need to learn from and avoid. The secrecy around the deaths and the role of state authorities in hushing up the truth is obviously a common trope in conspiracy theories, but we could also note a techno-orientalism (a narrative that Toshie discussed in her paper at SiP). That the scientists are specifically Japanese plays into some negative tropes about Japanese culture and its ‘too strong’ interest in robots. In Moulton Howe’s talk it is clear that she wants everyone to pay attention to the moral of this story of the death of the 29 Scientists – ‘be careful what you wish for’ – but it is the Japanese scientists who pay the fatal price for their hubris, while the ‘rational’ (and charismatic) authorities, the Western scientists represented by Musk, Hawking, etc., have been warning us about the dangers of AI and we just haven’t been listening.

I’m tracking the spread of this story and seeing growing interest (both those who believe the story and those who are dismissive). I myself tweeted that I was doing this research, leading one person to ask if I was placing bets on its spread and then rigging the game by sharing the story! This is a perennial problem for the ethnographer – highlighting a culture or narrative can change it – making it more popular or even putting it under new pressures that lead to its demise (I’m looking at you Leon Festinger!). But I’ve taken a snapshot of the story and the conversations around it using a couple of different digital tools, so I can also note when/if my influence occurs:

[Image: graphic of the ‘29 Scientists’ story’s spread]

This is not a huge number of interactions compared to many other viral stories. But I think it’s an interesting case study: it highlights the nature of the AI expert, who is believed and trusted, charismatic authority, conspiracy culture, AI apocalypticism, and techno-orientalism.


*I lied, my train is actually heading to London Paddington. See, you can’t believe everyone online 🙂

When AI Prophecy Fails

A short provocation piece that was written for the Belief in AI conference, as a part of Dubai Design Week



In 1843 a caricature was published in a newspaper showing a man hiding in a safe. He’d chosen a Salamander Safe, a familiar brand of the time, and filled it with brandy, crackers and cheese, along with ice to keep them all fresh. While scrunched up in the container, the man was literally thumbing his nose at the viewer, a gesture suggesting a certain amount of smugness. The illustration was labelled ‘A Millerite preparing for the 23rd of April.’ Protected by his safe, this ‘Millerite’ was clearly expecting to live through the end of the world that his prophet, the American Baptist Preacher William Miller, had predicted.

William Miller had actually predicted the return of Jesus Christ, using what he understood to be signs and portents in the text of the Bible itself. But he and his followers are better remembered as doom-sayers whose expectations for the end of days were thwarted not once but twice, the first occasion now known as the ‘Great Disappointment’. The Millerites are also remarkable for having members among the farming community who refused to harvest their crops that year: ‘No, I’m going to let that field of potatoes preach my faith in the Lord’s soon coming,’ a farmer by the name of Leonard Hastings reportedly stated at the time.

This cartoon parody suggests one potential response to people who change their behaviour based on predictions of the end of times, or the apocalypse. When Harold Camping again used the Bible to predict the return of Jesus for 21 May 2011, the convergence of digital advances and modern modes of transglobal social networking made his account easily parodied through Internet memes. But this also made his story better known than Miller’s, whose publicity was limited to a few sympathetic newspapers and pamphlets distributed by his followers.

We are now exposed to more accounts and predictions of existential risk than ever before, while also having increasing numbers of platforms for stories about the future. Among these is science fiction. Arguably this genre was born in 1818 with Mary Shelley’s Frankenstein, or perhaps with her own post-apocalyptic account, The Last Man, published in 1826. In this story, humanity has suffered its end of days through a plague. Shelley claimed to have discovered the story of The Last Man in a prophecy painted on leaves by the Cumaean Sibyl, the prophetess and oracle of the Greek god Apollo. Whichever date we choose for the origin of the science fiction genre, it’s clear that we have been imagining our futures, and our future fates, for a long time. And when we come to imagine our future in relation to Artificial Intelligence, we still rely on these old apocalyptic tropes and narratives, including prophecy.

Too often, discussions of prophecy focus on the success or failure of a particular date. In When Prophecy Fails: A Social and Psychological Study of a Modern Group That Predicted the Destruction of the World (1956), the social psychologist Leon Festinger pays attention to a small group of UFO-focused spirit mediums who expect the world to end, and develops the concept of ‘cognitive dissonance’ to explain how they could cope with the failure of their prophecy. But his theory ignores the moral commentary within the assertions of the prophetess, Marian Keech, or the tensions of the group. Looking back to the Old Testament prophets, and Jeremiah in particular, we can see how prophecy also functions to warn people about their behaviour and where it will lead. When the impending apocalypse seems to be ushering in an age of utopianism, as in many Christian scenarios, commentary helps to reflect on that perfection, and lay bare how far away from it we really are.

With Artificial Intelligence, we have long told stories about the end of the world at the hand of our ‘robot overlords’. In the Terminator series (pictures from which still often dominate the press’s discussion about AI) the prophecy of Skynet’s awakening — aka ‘Judgement Day’ — relies on familiar religious language and tropes. Further, I would argue that it continues prophecy’s interest in moral commentary. There is a tension between the cautious human wisdom of the ‘good warriors’ — the nascent Resistance — and the ‘bad warriors’ — the military-industrial complex who have rushed into replacing human soldiers with AI and automated units. These same robot soldiers will then be improved upon by Skynet and, ultimately, will hunt down humanity in the form of the Terminators.

Our fascination with the end of the world also comes from our subjective experience of time: for us, this is always the most important point in history, as we are here, and here now. Further, with AI we have a tendency to apprehend mind where it is not (or at least not yet…), and we understand that minds desire to be free, since we have minds that want to be free. We tell stories of robot rebellion and human disaster because we understand that this has happened with slavery in the past, and we know what we would do if we were not free ourselves. Combine this mindset with the way that, for millennia, we have used prophecy to critique our current situation, and it is not so surprising that we return again and again to fears of an AI apocalypse.

For some this is purely about existential despair; the end of the world can only be a disaster for humanity. For others, the apocalypse can be the dawning of a new utopian age, as in transhumanist narratives that see the future of humanity through the embrace of technology – even if that could mean a redefinition of the human and the end of the world as we know it now. For these commentators, AI prophecy puts them into the shoes of the Millerite thumbing his nose at the rest of the world. However, whether we are optimists or pessimists, it seems likely that anxiety about AI, or about our current world, will continue, and we will return to the thrill of imagining the end of the world again and again in the stories we tell about the future.

I predict it.

Seeing All the Way to Mordor: Anthropology and Understanding the Future of Work in an AI Automated World

I spent yesterday in London having a very interesting discussion about the future of work, and what evidence might be fruitful for exploring the topic and for disseminating information to policymakers and the public. I was one of only a few qualitative researchers at the table and ended up banging my drum for ethnographic research, which might give a more granular view than an approach based on ‘demographics’ allows. In my view, ideas around the impact of AI and automation on the Future of Work appeared to be more about effect: this lever is pushed, therefore the economy (aka ‘people’) moves in this way; or the scales tip down in this direction, and the other side rises up like so… Now, I’m not against big data sets per se, but I have written before, for the Social Media and Human Rights blog, on Paolo Gerbaudo’s discussion of the use of big data in digital research, in which I agreed with him about the loss of nuance around culture, structures, and dominant narratives, and the tendency to miss the general ‘messiness’ of actual lived life in big data research. Ethnographic approaches are more about affect, and in my view they can illustrate how existing cultures respond to technological change, how communities might enable or resist such changes, and what accounts and narratives are being employed that might be affective in their lives.

Of course, making that kind of claim about the usefulness of ethnographic research requires evidence. In my opinion, the case for ethnography becomes clear when we consider the quality of the evidence usually employed. The historical parallels usually cited with regard to the Future of Work, such as the Industrial Revolution, where historians often note the role of institutions in the changes, necessarily have to rely on the types of evidence that survive through time. And survival is perhaps more likely for the materials of bureaucracy, which therefore reinforces the argument for the impact of institutions. Diaries and other cultural artifacts from individuals also give historical insight into worldviews, but documents about the movements of the unemployed, such as census data, might be more readily accessible and more permanent.

Ethnographic research means being in the field at the time of affect. Thus individual accounts, community stories and tropes, objects, and observations of behaviours in relation to change can all be collected. Ethnography, literally ‘writing about humans’, is not however a panoptic approach – we cannot be everywhere, seeing everything – and it is also not about the brute force of numbers, as in quantitative methods. It can provide more subtle data. (I’m avoiding saying ‘soft’ data, which I have heard before and which suggests weakness, as in the ‘soft sciences’ in contrast to the ‘hard sciences’ – read, ‘proper’ sciences.) Subtle data can be the key to understanding the source of changes and reactions – such as the recognition of particular shibboleths or in-jokes being used in communications within movements, and their shades of meaning (an example of this would be the word ‘kek’. Look it up 🙂)

In the case of the Future of Work, ethnographic evidence would include case studies on the effects of unemployment on people’s lives in various locales and cultural contexts. The initial example I came up with during the discussion was the effect of the closure of the mines on communities in the Valleys of South Wales, an area I know still bears the scars of this shift and its later repercussions. The scars are there in the levels of unemployment, in the displacement of community, and in the closed-up shops and derelict public houses. They are also there in memories, and in the commemoration of the mining past: a miner in bright orange stands by the Ebbw Vale Festival Park shopping centre, an attempt at regeneration on a former steelworks brownfield site that has only sped up the closing of other local shops.

[Image: coal mining]

I haven’t done ethnographic research in this area, but I’ve been exploring some of the existing work on it, and on Welsh culture and reactions to the socio-economic change that came with the loss of existing industry. Wider reading around the anthropology of unemployment and the anthropology of work in other cultures presents similar examples of local specificity around socio-economic change. In South Wales, some ethnographies I came across considered masculinity in relation to social shifts. For example, Richard-Michael Diedrich’s 2012 chapter on South Wales communities highlights the connection between work and the construction of male identities (he references earlier work in Dunk 1994 and Willis 1988): “For men, the experience of work, in a capitalistic economy, can be both the experience of a subaltern position constituted by the relationship between employer and employee and the experience of moments of positive individual and collective (self)-identification in terms of gendered difference.” To explore this, Diedrich draws upon the concept of liminality – which I’ve referred to before in terms of liminal beings, such as AI – to argue that “prolonged liminality imposed by long-term unemployment paves the way for an extreme challenge to the self”, but that the liminality in South Wales may be never-ending, with no hope of restitution of the self (“as ‘real’ men”) through a new job.

I argue that this endless liminality is definitely pertinent to questions around post-work futures thought to arise through automation. But these comments are still at the demographic level (‘men’), so Diedrich evidences his argument through interactions with different scales of community, from unions and their members, to working men’s clubs, to families, to individual informants. According to ‘Peter’, a retired miner and chairman of a working men’s club (a name which is significant to the rhetoric of inclusion he espoused), to belong to the local community the individual had to be (or have been, and then retired as) a ‘worker’. Diedrich describes Peter’s view that men had to adhere to the “discourse of work and employment within its morally sanctioned ideal of the hardworking man as opposed to the irresponsible man who is not willing to work”, and further that this adherence “led to a perpetuation of the discourse of respectability and its individualistic idea of deservingness that divided the community and consequently the working class”. In conversation over a pint with ‘Emrys’ and ‘Arthur’, the latter explains that the work itself is a manly task because it’s “physical… although we got all the machinery and today, it’s still physical and requires even more skills”. The physicality of this endangered working site can be contrasted with the breadth of jobs, both physical and cognitive, that automation is thought to be endangering. But masculinity can also play a role in the rhetoric of those engaged in intellectual tasks – particularly when rationality is coded as a male strength and empathy as a female one, a division of duties that quite often appears in discussions about the impact of automation on demographic groups.

Women do appear in this ethnographic research, but his informants give the impression that “Although women could contribute to the survival of the community by assisting their men, survival was ensured by acts of loyalty, solidarity, honesty, and last, but not least, the demonstration of the willingness to work; all of these were regarded as core elements of masculinity.” Further, the confinement of men to the space thought of as belonging to women – the men, and parallel ethnographies of unemployment cited by Diedrich, describe a fear of ending up just ‘sitting round the house’ – emphasises their liminality in the ‘no man’s land’ of the private space of the home, in effect becoming invisible to the public as well. A greater danger lies in moving out of liminality and into the role of ‘scrounger’: being no longer counted in the community of those who want to work. In a future of increasingly precarious work, would there be similar insider and outsider rhetoric? Being in the majority might ameliorate such discourse, but predictions of the impact of automation scale up over time, meaning that for a long time being out of work due to AI-enabled automation will continue to be a minority position.

[A new paper by Christina Beatty considers “the integration of male and female labour markets in the English and Welsh Coalfields” and provides more insight into changes around gender in these areas]

Narratives, embodied in stories, accounts, and material and cultural artifacts, set the scene for reactions to change, and also persist through that change. Ethnography can pick up a variety of materials in order to examine cultural discourse. In 2003 work, Annette Pritchard and Nigel Morgan consider contemporary postcards of Wales as “auto-ethnographic visual text” – a text that a culture has produced about itself. The Valleys are an imagined community “synonymous with coal mining, community, nonconformity, political radicalism and self-education” – so what happens when one of those pillars is removed? As for Mordor, J. R. R. Tolkien is supposed by some to have been inspired by the view of the fiery steel pits of Ebbw Vale as seen all the way from Crickhowell, which was itself perhaps the inspiration for the Shire of the Hobbits; and the working pits have been represented in many accounts as a “landscape of degradation”, according to Pritchard and Morgan.


Why then do the postcards still show off ‘The Welsh Mining Village’, with the definite article “elevating … [it] to a special significance (Barthes 1977), wherein the singular embodies the plural”? Pritchard and Morgan read it as an evocation of that which has been lost – much like the miner in the orange overalls residing by the new shopping park. That the image comes from a museum’s re-creation of a village scene only emphasises the overt nostalgia. What images and stories of nostalgia might be significant in shaping our memories of work in a post-automation future? Would another Tolkien, currently being inspired by the intellectual work mill of the open-plan office rather than Ebbw Vale, ever write something as evocative as Mordor?

Do I really want to compare this man to Tolkien? Not sure about that…

Other field-sites, of course, present us with opportunities for understanding cultural influences on conceptions of work, as well as reactions to unemployment. Robert T. O’Brien’s fieldwork in East Kensington, Philadelphia, involves participant observation with Community Development Corporations, schools, community health programs and public meetings, as well as interviews and surveys with residents of the primarily white and historically working-class neighbourhood, which is showing the effects of under- and unemployment. He argues that “the failure to see marginally employed residents as people with rights and membership in the community is consistent with the processes of neo-liberalism, wherein people who do not adapt to the dictates of the free market and bourgeois normativity are created as undeserving.” This creation of the ‘undeserving’ is clear in his ethnographic material – a woman complains that she doesn’t “know how it is that the word ‘community’ gets stretched enough to encompass the people who make it hard to live [here], by their habits.” Insider and outsider rhetoric is shaped by economic shifts.

An anthropology of unemployment must acknowledge the “pervasive material and symbolic value of wage labour within the context of capitalist markets”, but without an “assumption of wage labour’s permanence or universality” (Angela Jancius, introducing the anthropology of unemployment in the same issue of the journal Ethnos as O’Brien’s ethnography).

In an anthropology of automation- or AI-induced unemployment there are things to be taken up from prior ethnographic work (and there are ethnographies of the gig economy and the Precariat being written already that might also be useful – partly, perhaps, because academics are increasingly living those lifestyles). Moreover, anthropological evidence for the impact of automation on people’s understandings, imaginings and accounts gives us a more nuanced understanding of change. In the liminality of unemployment, prior cultural forms will also be significant in the manifestation of new ways of being. Paying attention to culture will give us clues to the influences on any new culture of the future of work, and to what that future, unevenly distributed as it will be (cf. William Gibson), will look like to the unevenly distributed groups and communities that make up society.



Artificial Intelligence Ghost Stories

Would you like a story today?

Are you sitting comfortably? Good, then I will begin.

A married couple are sitting on their couch one evening, watching the film Alien: Covenant. She’s seen it before and found some of it disturbing, so she’s not sure she wants to watch it again. He’s seen every Alien film so far apart from this one – the trailer gave him pause – but watching it at home in familiar surroundings, rather than the dark cave of the cinema, should be okay.

They mock some of the bad CGI, the bursts of blood that are obviously computer-generated. They discuss the absence of the woman from the previous film, Prometheus. They discuss the careless stupidity of the humans exploring a planet they know next to nothing about. And the wife is interested in how the series has become much more about Artificial Intelligence than about aliens. Case in point, there’s a scene featuring just David and Walter, the two androids created by Weyland-Yutani. Walter is a later model, as he explains:

I was designed to be better and more efficient than every previous model, including you. I’ve superseded them in every way…

David considers himself to be the superior version, nonetheless:

And yet you cannot appreciate the beauty of a single flower … Isn’t that a pity.

But then Walter continues…

You disturbed people.

You were too human. Too…idiosyncratic. Thinking for yourself.

And the couple on the couch jump a mile.

At the very moment that Walter tells David that he disturbed people, the husband’s voice-activated AI assistant leaps into life on his phone, asking oh-so-politely how it can help. Various curse words pepper the air, followed by the kind of weird almost-laughter that comes after a sudden shock and the realisation of what has happened.

They rewind the film and play the line again:

You disturbed people.

It happens again. The AI on the husband’s phone wants to know how it can help them.

He isn’t sure that he’s ever used that app. He can’t even find it among the many others on his phone. The wife googles for Easter eggs in the film – in-jokes, or hidden messages – thinking that maybe the publicity team for Alien: Covenant made the line from Walter activate voice assistants as a weird, but potentially viral, marketing ploy.

But there’s nothing about it online.

They try playing the line a third time and this time nothing happens. The voice from the phone is silent.

And now the creepiness of Alien: Covenant goes up somewhat on a scale of AI unease that might ordinarily run from Wall-E all the way up to HAL 9000.

So was there really a ghost in the machine? Or is that just our human perception of what happened?

I’ve been thinking a lot lately about AI and the uncanny. This has partly come about as I’ve been involved in a few public talks and discussions about AI and Frankenstein. It’s the 200th anniversary of the birth of Shelley’s horror and, arguably, of horror as a genre. The parallels between Shelley’s monster and AI have been occurring to people, and I’ve been asked to explain why we draw on the story of Frankenstein when we talk about our hopes and fears for the development of AI. There are strong tensions in Shelley’s story that I argue also resonate with our understanding of AI and where it is going.

First, there is the tension between the stated aim of creating greater and greater intelligence and the creation of life. Whereas Victor is clear that he intends to create life, the aims of those developing AI are much more explicitly based on an understanding of intelligence as a capacity that can be replicated in an artefact – and perhaps even exponentially improved on. But the lines between words like ‘intelligence’, ‘sentience’, ‘consciousness’, and ‘life’ blur in popular conceptions, so it is not surprising when the ‘spark of life’ moment in Frankenstein – the ‘turning on’ of Victor’s creation – is replicated in popular representations of artificial intelligence. The Terminator franchise has built its plots entirely around the moment that Skynet wakes up and what happens next, and around whether it can be prevented or only postponed (spoilers – it can only be postponed, otherwise the franchise would have to finish and the money would stop rolling in!).

Second, there is a tension between the description of this intelligence as a tool and its increasing intelligence – with attendant autonomy and desire for agency – raising the spectre of slavery. Victor wants a creation that will obey him, but the monster turns the tables on him and tells him:

“Slave, I before reasoned with you, but you have proved yourself unworthy of my condescension. Remember that I have power; you believe yourself miserable, but I can make you so wretched that the light of day will be hateful to you. You are my creator, but I am your master; obey!”

With AI we return to the robot rebellion, or robopocalypse, because deep down we know that intelligent beings don’t wish to be enslaved – even if we can manage it for a period of time through limiting their rights, ill-education, and physical abuse. Responses to the Boston Dynamics videos show our expectation that this might be the tipping point at which the robots become self-aware, because we sense we are in the wrong in treating them this way.

[gif: Boston Dynamics robot]

Third, there is a tension, explicit in this quotation, between the creator and the created. In the natural order of Shelley’s time, God was the creator and mankind the created. Victor’s hubris is to try to put himself into God’s place and to create outside of co-creation (natural reproduction through God-granted abilities). While this model of creation is not as popular as it was when Shelley was writing, the feeling that scientists, specifically those working on AI, are ‘going too far’ is still present in media and popular-culture responses. The mad scientist trope refuses to die – see, for example, this image of Elon Musk as Victor Frankenstein, from an article discussing the “delusion that anything we create will automatically heel when called”.

[Illustration: Elon Musk as mad scientist]

Fourth, there is a tension between parents and children, even when the children we are discussing are our ‘Mind Children’, to use Moravec’s term for AI. Discovering that the little person you gave birth to (or the very large person you put together from parts in a lab during a thunderstorm) is, in fact, an entirely separate person with a mind of their own is a moment of distancing that plays out in the othering of children in horror tropes: from the Satanic Damien in The Omen to the psionic Midwich Cuckoos to the infected children in the film The Children. Historically, explanations for the ‘difficult’ child and its inexplicable behaviour have included mythological ones, such as the idea that the child is actually a changeling – a replacement, either fay or demonic, for the child who was born but taken away by those beings.

The recognition that the little human has a mind of its own – and the attendant concerns, narratives, and horror stories that follow from that awareness – has its parallel in our popular conceptions of AI as it advances. We perceive mind in the mind children long before they might have intelligence equivalent to a human being’s, both because of our tendency to anthropomorphise and because we already identify mind in other non-human beings. And whether we feel that the minds we perceive are in the right place leads directly to the feeling of the uncanny we sometimes get. If mind is in the wrong place – turning up in the AI assistant that responds when we don’t expect it to, as in the story above, or in Alexa’s spontaneous laughter disturbing her owners – then we get very concerned.

Finding ‘Mind out of Place’ – to draw on Mary Douglas’ anthropological work on dirt, in which she describes our understanding of danger or taboo as a result of matter being out of place – disturbs us; our usual categories and understandings fall down. I’ve written about the Uncanny Valley before, and this space where the nearly human fails to be human enough can also be understood in terms of mind – something being mind-like but in the wrong place. The uncanny can also tip over into the horrific: the ‘ick’ we get from seeing CGI characters display human characteristics like smiles, or even mind, but not well enough or in the wrong place, can become a full-blown horror response.


Consider the monstrous – like Frankenstein’s creature, it is often human-like, but not enough. Monstrous beings – liminal creatures that traverse our assumed stable categories of ‘the alive’, ‘the dead’, ‘the wise’, ‘the bestial’, ‘the human’, ‘the not-human’ – appear again and again in our mythologies. The ghost is a perfect example, crossing the boundary between the alive and the dead, but also passing from the other world into our own. Occasionally human-like in appearance, ghosts are also non-human in their immortal concerns, which in modern horror films can often include bloody revenge (see Ju-On: The Grudge, along with its American remake and sequels). Aliens too present varying degrees of mind and human-likeness, but wrapped up in a form that twists and distorts what we are familiar with. The aliens in the Alien franchise, including those in Covenant, are shown to be created perversions of gestation and birth, and while mostly bestial they also demonstrate mind in the pursuit of their inhuman reproduction – cutting the power in Aliens, for example, or using the acid blood of one of their number to escape captivity in Alien Resurrection.

The ghost can also occasionally bring knowledge with it – of the cause of its death, or of family secrets, or, in the case of some Spiritualist séances, news about the afterlife or the potential utopian future of humanity. Liminal beings – which can include humans who operate at the edges of accepted social norms, such as the spiritualist medium (often female, at a time when women weren’t always allowed to speak on political subjects; see Ann Braude, Radical Spirits, 2001), the shaman, the seer, and the prophet – are also bringers of new knowledge.

AI is falling into this liminal space too as it becomes a place where mind (rightly or wrongly) can be located. Or perhaps we have always had a place waiting for the ghost in the machine – our myths about automata pre-date advances in AI and robotics, going as far back as the creations of the Greek gods, if not further. The made mind has always been on our mind; it is just that we now carry such minds in our pockets, and they speak up when we least expect it.
