Don’t Read This Post

I’ve recently had a paper accepted for the 2016 ‘Denton’ Conference on Implicit Religion. This is the first piece of research coming out of the Faraday AI and Robotics project, so it’s very exciting to have been accepted!

Here’s the abstract, and once you read it and learn about Roko’s Basilisk you’ll know why I titled this post as I have!

Roko’s Basilisk, or Pascal’s? Thinking of Singularity Thought Experiments as Implicit Religion

 “The original version of [Roko’s] post caused actual psychological damage to at least some readers. This would be sufficient in itself for shutdown even if all issues discussed failed to be true, which is hopefully the case.

Please discontinue all further discussion of the banned topic.

All comments on the banned topic will be banned.

Exercise some elementary common sense in future discussions. With sufficient time, effort, knowledge, and stupidity it is possible to hurt people. Don’t.

As we used to say on SL4: KILLTHREAD.” – Eliezer Yudkowsky, July 2010

This KILLTHREAD command came in response to a post written only four hours earlier on the ‘LessWrong’ community blog by ‘Roko’, a post which had introduced a rather disturbing idea to the other members, who are dedicated to “refining the art of human rationality”, according to LessWrong literature.

Roko had proposed that the hypothetical, but inevitable, super-artificial intelligence often known as the ‘Singularity’ would, according to its intrinsic utilitarian principles, punish those who failed to help it, or to help create it, including those from both its present and its past, through the creation of perfect virtual simulations based on their data.  Therefore, merely knowing about the possibility of this superintelligence now could open you up to punishment in the future, even after your physical death.  In response to this acausal threat, the founder of LessWrong, Eliezer Yudkowsky, responded with the above diktat, which stood for over five years.

Roko’s Basilisk, as this theory came to be known for the effect it had on those who ‘saw’ it, has been described by the press in florid terms as “The Most Terrifying Thought Experiment of All Time!” (Slate, 2014).  It has also been dismissed by members as either based on extremely flawed deductions, or as an attempt to incentivise greater “effective altruism” – directed financial donations for the absolute greatest good; in this case, donation specifically to MIRI, the Machine Intelligence Research Institute, which works towards developing a strictly human-value-oriented AI, and which is also directly linked to the LessWrong forum itself. Others have dismissed it as a futurologist reworking of Blaise Pascal’s famous wager, or as just a fanciful dystopian fairytale.

This paper will not debate the logic, or validity, of this thought experiment. Instead, it will approach the case of Roko’s Basilisk from a social anthropological perspective to consider how its similarities with theologically inclined arguments highlight the moral boundary-making between the religious and the secular being performed by rationalist forums of futurologists, transhumanists and Singularitarians such as LessWrong and the AI mailing list ‘SL4’ that Yudkowsky referred to.  This paper also raises wider questions of how implicitly religious thought experiments can be, and how this boundary-making in apparently secular thought communities can be critically addressed.

Slate (2014) “The Most Terrifying Thought Experiment Of All Time: Why Are Techno-Futurists So Freaked Out By Roko’s Basilisk?”, available at http://www.slate.com/articles/technology/bitwise/2014/07/roko_s_basilisk_the_most_terrifying_thought_experiment_of_all_time.html (accessed 22/02/2016)

Sorry, but I did warn you. /KILLTHREAD


Ch-Ch-Changes!

If you have a quick squiz at the title of this blog you might notice something has changed.  This is no longer my “PhD Diary + Blog”, but is now my “Post Doc Diary + Blog”.  That’s because in the seven months since my last post on here, several good things happened (in fact, they all happened around October/November 2015!)*:

  1. I signed a book deal for my thesis (Thank you Ashgate!!)
  2. I was offered a post doc job as a research associate
  3. I submitted my PhD thesis**

While finishing up my thesis my blogging got left a little behind, but now that I am settling into my new position as a research associate at the Faraday Institute for Science and Religion (more on that below) I am returning to occasional posts… this time with a focus more on the AI and Robots project I am involved with (though some NRM things will slip in, as I am still working on an edited volume on NRMs, and my research into Transhumanism and Singularity theories drifts into NRM studies).

But I am getting ahead of myself.  What is my post doc and what am I doing?

I am now based at the Faraday Institute for Science and Religion, which sits in Benet House at St Edmund’s College, Cambridge. So although I am coming to the end of my third degree at Pembroke College (thanks!), I am staying in Cambridge for at least the next 3 years***. My specific project, under the overall project on “Human Flourishing”, is:

Human identity in an age of nearly-human machines – the impact of advances in robotics and AI technology on human identity and self-understanding

What does that mean? Well, we are considering:

“the theological, social and philosophical implications of recent developments in robotics and AI technology for secular and religious understandings of human nature and identity. Of particular interest and concern is the development of humanoid robots whose appearance, motor behaviour and responsiveness are becoming virtually indistinguishable from human beings. In addition, new technical developments provide increasingly realistic simulation by AIs of human compassion, empathy and emotional intelligence. These developments raise urgent and profound questions and challenges for human self-understanding.

To date there has been very little genuinely multidisciplinary and informed debate about these issues. The current sub-project aims to address the implications of these developments using an academically rigorous and structured approach. In particular we will investigate whether there is a genuine convergence and blurring of human/machine abilities and behaviour and if so whether this is likely to lead to fundamental changes in common social and religious understandings of what it means to be human.”

TLDR version: AI! Robots! The Future!


As a social anthropologist of NRMs, my particular interests lie in how people respond to new technological advances and weave them into their religious narratives.  But there are wider questions about how these developments will affect our understanding of what it means to be human, and a core element of the project is getting experts and academics together to start the conversation on this.  You may be aware of the boost in funding that projects considering AI and robots have received (the Centre for the Study of Existential Risk is a prime example, where the focus is mitigating the possibility of ‘unfriendly’ AI), and this is generally agreed to be a key time for considering these issues, when we are on the cusp of some dramatic discoveries around intelligence (or not, depending on who you ask).

Posts in this blog will now reflect this new research focus, with occasional dips into NRM studies. I hope this is of interest!

Beth


* Some bad things happened in those seven months too, and the title of this post is in honour of David Bowie (1947-2016).

** No, I still don’t have a viva date. I’ll let you know when I know…

*** Barring a zombie apocalypse.