I’ve been rather busy lately writing conference papers (details below) and unfortunately I haven’t had the time to write the ranting blog posts that came to mind off the back of two AI and robotics stories this month. Even the versions I came up with in my head became TL;DW (too long; didn’t write), and would no doubt also have been TL;DR (too long; didn’t read) by anyone coming upon them on this blog. Instead, here are the stories – with some thoughts added – in as brief a form as I can manage!
First, reports that the Turing Test is intrinsically flawed because an AI could basically plead the fifth and remain silent, making the test null and void, “allowing it to pass as a sentient being”, and that objects like rocks could also pass!
1. Why is a study by two academics at a UK university assuming the AI will come under the governance of US laws? Okay, perhaps we could take “pleading the fifth” more as an expression than a legal recourse. But even so, what are the assumptions about nationality at play in the AI community? Does nationality even have a part to play when the key organisations and corporations considering and working on AI are so international?
2. Isn’t pointing out problems with the Turing Test a little redundant? I’ve heard it said, and I tend to repeat it, that any AI smart enough to pass the Turing Test would also be smart enough to fail it on purpose! And let’s consider the origins of the Test. At first it was about seeing how well a computer could pass for a woman. While of course women are examples of sentient and intelligent life (!!!), there may also have been the aim of seeing if it was possible to replace the administrators and coders, primarily women at that time, with computers in an efficiency drive. The test was also less about proving human intelligence and more about ‘passing’ for human – in an era of Cold War subterfuge and of spies passing as colleagues. Any attempt to ‘pass’ the Turing Test off as a precise scientific proof of intelligence is intrinsically flawed. Likewise, any new critique of it needs to pay attention to prior commentary on its origins and presuppositions before announcing that it doesn’t work.
Likewise, and as in the second story that caught my eye, references to Asimov’s Three Laws of Robotics in relation to real-world events need more careful consideration of their origins, aims, and flaws. That is not to say that I am dismissive of them because they originated in science fiction. In fact, I am very much against the bracketing off of popular literature because it is popular, or even ‘not grown up’. In the preface to his otherwise excellent edited collection, “What to Think about Machines that Think”, John Brockman states: “This year’s contributors […] are a grown up bunch and have eschewed mention of all that science fiction and all those movies…”
Pfft.
But back to the story where Asimov’s Laws were invoked. A mall-based robot ran over a toddler, hurting him, which of course should not have happened, and if it had been my son you’d have been finding bits of circuit board all across the shopping centre for weeks after. But the stories all referred to the robot as having broken Asimov’s First Law of Robotics: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
Well…
1. Unlike the laws of a country, you cannot be held to a law of robotics that you haven’t been programmed with, or aren’t aware of. So perhaps this is a tongue-in-cheek comment. But it betrays a certain expectation about robots and regulation that I have seen elsewhere: a presumption that Asimov got it right and that we already have a system that could be implemented to ensure robotic behaviour. Which ignores…
2. That Asimov’s Laws don’t actually work in his stories! Because if they did, there would be no story! They are a narrative catalyst, there to spark incident and to allow Dr Susan Calvin the opportunity to show off her cold intelligence as a robopsychologist, or for Donovan and Powell to chummily muddle through to a solution.
3. The articles also generally ignored the fact that Asimov introduced a Zeroth Law (or that it evolved among the superintelligent AI minds that govern the world in his later stories), whereby “A robot may not harm humanity or, through inaction, allow humanity to come to harm.” This is a strict moral utilitarianism that allows the Superminds to make decisions harming a limited number of humans for the greater good, something that Dr Calvin sees the logic in. I’m not suggesting the mall robot was doing anything other than failing to recognise the boy as a smaller example of the shapes it was meant to avoid (NOT the same as being programmed with the First Law), but if news articles are going to cite Asimov’s Laws of Robotics then perhaps they should be aware of where they lead…
Perhaps this TL;DW has now become a ‘too long; ended up writing it anyway’ (TL;EUWIA isn’t going to catch on, though). But I’ll finish up now with the titles of the conference papers I am working on at the moment:
Ian Ramsey Centre conference on Post-Secularity, Oxford, 27th – 30th July:
“LessWrong = Less Religious?: Secularity as Moral Boundary Making in Future Technology Focussed Groups”
Affective Apocalypses and Millennial Well-Being Conference, Queen’s University Belfast, 18th – 19th August:
“‘It’s the End of the World as We Know It (And I Feel Fine)’: An Ethnographic Comparison of Existential Hope and Existential Distress in Transhumanist and Apocalyptic Artificial Intelligence Groups”
BASR Conference 2016, Wolverhampton, 5th – 7th September:
“The Possibility of a Religion: Artificial Intelligence, Science Fiction, and New Religious Movements”