I’ve recently had a paper accepted for the 2016 ‘Denton’ Conference on Implicit Religion. This is the first piece of research coming out of the Faraday AI and Robotics project, so it’s very exciting to have been accepted!
Here’s the abstract, and once you read it and learn about Roko’s Basilisk you’ll know why I titled this post as I have!
Roko’s Basilisk, or Pascal’s? Thinking of Singularity Thought Experiments as Implicit Religion
“The original version of [Roko’s] post caused actual psychological damage to at least some readers. This would be sufficient in itself for shutdown even if all issues discussed failed to be true, which is hopefully the case.
Please discontinue all further discussion of the banned topic.
All comments on the banned topic will be banned.
Exercise some elementary common sense in future discussions. With sufficient time, effort, knowledge, and stupidity it is possible to hurt people. Don’t.
As we used to say on SL4: KILLTHREAD.” – Eliezer Yudkowsky, July 2010
This KILLTHREAD command came in response to a post written only four hours earlier on the ‘LessWrong’ community blog by ‘Roko’, a post which introduced a rather disturbing idea to the other members, who, according to LessWrong literature, are dedicated to “refining the art of human rationality”.
Roko had proposed that the hypothetical, but inevitable, super-artificial intelligence often known as the ‘Singularity’ would, according to its intrinsic utilitarian principles, punish those who failed to help it, or to help to create it, including those from both its present and its past, through the creation of perfect virtual simulations based on their data. Therefore, merely knowing about the possibility of this superintelligence now could open you up to punishment in the future, even after your physical death. Faced with this acausal threat, the founder of LessWrong, Eliezer Yudkowsky, responded with the above diktat, which stood for over five years.
Roko’s Basilisk, as this theory came to be known for the effect it had on those who ‘saw’ it, has been described by the press in florid terms as “The Most Terrifying Thought Experiment of All Time!” (Slate, 2014). It has also been dismissed by members as either based on extremely flawed deductions, or as an attempt to incentivise greater “effective altruism”: directed financial donations for the absolute greatest good, in this case donation specifically to MIRI, the Machine Intelligence Research Institute, which works towards developing a strictly human-value-oriented AI and which is directly linked to the LessWrong forum itself. Others have dismissed it as a futurologist reworking of Blaise Pascal’s famous wager, or as just a fanciful dystopian fairytale.
This paper will not debate the logic, or validity, of this thought experiment. Instead it will approach the case of Roko’s Basilisk from a social anthropological perspective to consider how its similarities with theologically inclined arguments highlight the moral boundary-making between the religious and the secular being performed by rationalist forums of futurologists, transhumanists and Singularitarians such as LessWrong and the AI mailing list ‘SL4’ that Yudkowsky referred to. This paper also raises wider questions of how implicitly religious thought experiments can be, and how this boundary-making in apparently secular thought communities can be critically addressed.
Slate (2014) “The Most Terrifying Thought Experiment of All Time: Why Are Techno-Futurists So Freaked Out by Roko’s Basilisk?”, available at http://www.slate.com/articles/technology/bitwise/2014/07/roko_s_basilisk_the_most_terrifying_thought_experiment_of_all_time.html (accessed 22 February 2016)
Sorry, but I did warn you. /KILLTHREAD