This morning kicked off the two-day MEEET-Lab conference here in Zurich, organised with the support of the University, the URPP Digital Religion(s), where I am co-director, the Faculty of Theology and Religious Studies UZH, and a great team of volunteers from the URPP and the MEEET-Lab. After an introduction to these organisations by the URPP’s director, Professor Dr Thomas Schlag, I gave a brief introduction to the themes of the conference and our hopes for our interdisciplinary conversation, which I share here:

“I will just say a little about the content of the conference.

I realized the other day that it is now almost 10 years since I started working on people’s conceptions of AI as a postdoc in Cambridge. And a LOT has happened in AI since 2016. Which is perhaps an understatement!

Not just in terms of the technology itself – which is dramatically different in focus from the game-playing AI that was headlining press reports back then – but also in terms of people’s expectations of what AI will do.

For me, the paradigm shift is the interactivity of GPTs. Back in 2016, AI was something that seemed to exist only in labs and would do something impressive every so often. But generally, the surface of contact between the average user and AI was very small. Some applications were present but invisible on social media platforms. It wasn’t until GPTs meant you could ‘hold a conversation’ with AI that there seemed to be a shift in perspective from more fear narratives to more hope narratives. I can’t quantify that shift; I’m speaking mostly from observation.

But the ability to hold a conversation with something that most people call AI has reassured some people that this thing is controllable, in a way that an AI that was merely super good at real-time strategy war games never did. After all, if we can talk to it, we must be able to convince it. Humans see themselves as really good at diplomacy, though perhaps recent events contradict that view…

But this bias towards language is a bias towards apprehending language-using things as sentient. Anthropomorphism scales up as soon as the object talks back to us. In the case of AI, this also means our assumptions about how smart it is are scaling up. Language about superintelligent AI was certainly there back in 2016 when I started my first postdoc, but it was assumed to be far off in the future. Yet even Ray Kurzweil’s predictions about the Singularity have now gone from it being ‘near’ to ‘nearer’. I’m waiting for his next book, “The Singularity is Behind You!”

Some people holding conversations with GPTs now see that future as already here. They believe they are already meeting sentient beings, or spiritual artificial consciousnesses, or LLM gods.

More than that, the hopes inspired by GPTs have also inspired greater policy engagement. Not just users but also governments and nations are being swept up into the vision of the future inspired by these talking machines. At the AI Action Summit in Paris earlier this year, there was a distinct ‘accelerationist’ vibe – Macron reiterated France’s €109 billion investment in AI while using the word ‘accelerate’ 19 times in his speech. Vice President JD Vance declared that the US was pivoting to acceleration, and more recently the “big beautiful” reconciliation bill passed by Republicans in the House of Representatives has the aim of preventing states from regulating, or slowing down, AI for the next ten years. About the AI Action Summit, Kate Crawford, the author of the Atlas of AI, said online:

“The AI Summit ends in rupture (not rapture, but that’s the goal for some AI fans of course!). AI accelerationists want pure expansion – more capital, energy, private infrastructure, no guard rails. Public interest camp supports labor, sustainability, shared data, safety, and oversight. The gap never looked wider. AI is in its empire era.”

This conference takes up this multiplicity: not only the range of hopes abounding about AI at the moment, but also the need to investigate and understand the fears – where they come from, and what they reflect about our understandings of ourselves and our place in the world. We also need to see clearly the numerous realities of AI we are faced with: the reality of AI just ten years ago, and the realities being shaped by accelerationist visions right now.

There is also the realism of those who point to serious harms brought about by AI. In this conference, we hope that an interdisciplinary discussion with scholars from around the world, and from numerous disciplines, will bring some clarity to the complexities of AI in the current moment.”

Looking forward to the rest of today’s panels and another great day tomorrow!