A desire to project agency and intelligence onto inanimate matter, though, is deeply human, says Beth Singler, a digital anthropologist at the University of Cambridge. “You don’t have to go as far as Ameca has with facial features before people start bringing animated entities into what I call their cosmology of potential beings,” she tells The Verge. “There’s this sense that what is around us could be intelligence, and different cultures react to that in different ways.”
Traditions like Shinto and Buddhism are more open about this impulse to ascribe soul to objects, says Singler, but the same instincts run deep in the West. “We like to think we’re immune to this because we had the Enlightenment and became very serious and rational,” she says. “But I don’t see that. When I see people’s interactions with animated technological entities — and that can be everything from a robot to a Roomba — I see that same animistic tendency.” In other words: we still want to believe.
I’m so very grateful to Neil Lawrence, the Google DeepMind Professor in Machine Learning at the University of Cambridge, for a shout-out for my research in his recent Cambridge Festival talk on “Artificial Intelligence: Reclaiming Control”.
I was very honoured to be invited to be involved in this ‘Salon’ and have the chance to discuss my research into our conceptions of AI and our hopes and fears for the future:
Conversation from Tuesday, March 15, 2022 at the Center for Humans and Machines at the Max Planck Institute for Human Development in Berlin. Artificial intelligence isn’t just changing how we communicate, work, and live day to day. It’s transforming our beliefs and dreams about existence and how we imagine the future. Dr. Beth Singler, an anthropologist specializing in artificial intelligence, has studied these questions closely and written extensively about the lived experience of people as they engage with AI and robots. Please join us for this one-hour salon featuring Dr. Singler in conversation with CHM visiting scholar William Powers.
A great chat about religion and AI with me, Hope McGovern, and Ed Kessler of the Woolf Institute 🙂
We have Alexa, we have drones in the sky, killer robots on the battlefield, and creepy algorithms designed to anticipate our every need. But do we lose sight of the potential benefits of A.I.? Beth Singler and Hope McGovern throw some light on a much-discussed subject…
I was delighted to be a part of this panel along with Ruth Peacock and Andrew Brown of the Religion Media Centre; Maria Farrell, writer and speaker on technology, who has worked in technology policy for 20 years and has taught internet governance; Dr Jason Pridmore, Vice Dean Education: Erasmus School of History, Culture and Communication, Rotterdam; and Emma Holland creative manager of the ‘Pray as you Go’ Jesuit app.
I’m seeing a lot of posts and chatter about a robotic artwork, ‘Can’t Help Myself’ by Sun Yuan & Peng Yu, which was commissioned by the Guggenheim museum and installed in 2016.
As you can see in this timelapse and footage from the museum, the robotic arm, which also employs visual-recognition sensors, has the apparently Sisyphean task of sweeping the dark reddish liquid back towards itself whenever it flows beyond a certain point.
The accounts of this artwork I am seeing online offer their own interpretation, but they are based on a misunderstanding of how the robotic arm actually works. They are affected by the image of a robot trying to stay alive by eternally returning its own essential fluids – its ‘blood’ – to itself. They read the noises the shovel makes as it scratches the floor, pulling the fluid back, as ‘screams’ and ‘groans’:
“No piece of art has ever emotionally affected me the way this robot arm piece has. It’s programmed to try to contain the hydraulic fluid that’s constantly leaking out and required to keep itself running…if too much escapes, it will die so it’s desperately trying to pull it back to continue to fight for another day.”
This comment comes from an interpretation I’ve seen in several places now. The author, credited as ‘James Kricked Parr’ – a music label owner – originally posted his perspective on Instagram back in November 2021. He goes on to claim:
“Many years later… (as you see it now in the video) it looks tired and hopeless as there isn’t enough time to dance anymore.. It now only has enough time to try to keep itself alive as the amount of leaked hydraulic fluid became unmanageable as the spill grew over time. Living its last days in a never-ending cycle between sustaining life and simultaneously bleeding out… (Figuratively and literally as its hydraulic fluid was purposefully made to look like it’s actual blood). The robot arm finally ran out of hydraulic fluid in 2019, slowly came to a halt and died – And I am now tearing up over a friggin robot arm. It was programmed to live out this fate and no matter what it did or how hard it tried, there was no escaping it. Spectators watched as it slowly bled out until the day that it ceased to move forever. Saying that ‘this resonates’ doesn’t even do it justice imo. Created by Sun Yuan & Peng Yu, they named the piece, ‘Can’t Help Myself’. What a masterpiece. What a message.”
Except this isn’t accurate. The fluid was never essential. It wasn’t used as hydraulic fluid. The robot never died. It simply reached the end of its exhibition ‘life’ and was installed for the Venice Biennale in 2019. The creators’ intention was to comment on surveillance culture and the maintenance of national boundaries by the ‘machine’. It’s a piece of art about the desperation of humans, not the desperation of a ‘dying’ robot.
Of course, art should be subjective. Your account of it will differ from mine, and no artist has, or should have, control over the affective response to their work. But what’s interesting to me is, again, how easily our sympathy for the robot is invoked. Commenters online, including those responding to the original post, described their emotional reactions: their concern for the robot itself, their reflections on their own lives and the futile struggles they felt they also endured, the sadness the robot invoked:
“this was insanely powerful to me. Resonates deeply. Sharing it bc I thought maybe you guys would appreciate this artist statement. Love you all. Makes me want to live every day like it’s my last…”
“I broke down crying the first time I saw a video of this exhibit. No physical art piece has ever made me get close to that emotional. The futile, endless, yet hopeful strain to keep it together just stings me in a vehement heart-rending way.”
And, as is so often the case, the emotional response could transmute from sympathy into fear and robopocalyptic narratives and statements:
“I like to think that at some point the robot will gain sentience and stop doing anything cause it knows it will just shut off and be refilled after. And maybe it will try the same with his creator/programmer. The beginning of the machine war starts here. With artistic torture.”
One Reddit thread I followed drew on existing robopocalyptic narratives, such as Roko’s Basilisk (which I have written about in more depth here). Some users simply dropped links to details about Roko’s Basilisk without any preamble, leading to discussion of its dangers, merits, and flaws:
“Warning: link contains an infohazard. Information that, by knowing it, could be detrimental to oneself or others. Note: it’s a thought experiment and I’m mostly joking… Unless it turns out to be real eventually lol”
“Doesn’t us knowing about Roko’s Basilisk help it exist later on?“
“Edit to say that if this sounds interesting to you probably don’t look it up – it’s hard to find for a reason, that reason being that apparently some people think about it too hard and go off the deep end. The guy who came up with it/first posted about it pretty clearly regrets it and doesn’t want people reading about it. I was already in the “we live in a simulation” camp when I came across it so I’m fine I guess, your mileage may vary so pretty much don’t.”
“I’m continually amazed by tech-bros’ ability to rephrase theological arguments and believe them completely unironically as long as you replace “God” with “AI”.”
Of course, this loop back to things I’ve worked on, like the theistic interpretation of AI, fascinates me. As does how the account given by James Kricked Parr is being so quickly accepted and shared.
Both speak to our very human tendency to include the non-human Other, the robotic arm, in our cosmology of beings. Even when we know its createdness and the limitations of its possible actions, we still extend care to it and reflect on what it tells us about our own human condition.
But again, the danger lies in that care obscuring the message of the object’s creators about the care we should be extending to those whose freedom is controlled and curtailed by the state-run machinery of boundary-making and surveillance. I’ve argued in a few places (e.g. this CogX panel) that along with the anthropomorphism of AI and robots we’re seeing ‘robomorphisation’… the treatment of humans as machines, from the metaphors we live by to how corporations and our bureaucratic processes increasingly treat us like machines. This particular formation of the human goes back centuries, of course, but with actual robots increasingly available to take over, robomorphisation is being intentionally implemented to narrow the gap between the human worker and the robot worker. I am not surprised people saw themselves in the ‘dying robot’, but it is still a moral hazard if we turn it into Narcissus’ reflective pool and fail to see the people drowning in that water.
My interview with the lovely Emily Bates at the New Scientist is now available in both a text version and a video:
We are growing used to the idea of artificial intelligence influencing our daily lives, but for some people, it is an opaque technology that poses more of a threat than an opportunity. Anthropologist Beth Singler studies our relationship with AI and robotics and suggests that the lack of transparency behind it leads some people to elevate AI to a mysterious deity-like figure.
“People might talk about being blessed by the algorithm,” says Singler, but actually it probably comes down to a very distinct decision being made at a corporate level. Rather than fearing a robot rebellion or a deity version of AI judging us, she tells Emily Bates, we should identify and be critical of those making decisions about how AI is used. “When fear becomes too all-consuming and it distracts you from those questions, I think that’s a concern,” says Singler…