Friend in the Machine, the second in our series of four films on developments in AI and robotics and their implications, is available on YouTube.
Can a robot be a true friend? Are we lonely enough to consider relationships with machines? What is companionship and can a machine be a substitute for a human companion? Made with Cambridge University and international experts discussing topical issues within the field of artificial intelligence – Friend in the Machine presents fascinating insights from academia and industry about the world of companion robots and asks what it means to be human in an age of nearly human machines.
Meanwhile, at the Pan-Asian Deep Learning Conference in Kuala Lumpur in 2028:
So, Deep Learning – aka making dancing robots in this case – has become ‘sexy’. Arguably this moment came earlier, around August 2017, when Stanford University grad Alexandre Robicquet, a machine vision researcher, fronted an Yves Saint Laurent ad campaign that included his occupation on the posters. But with ‘Filthy’, Timberlake’s video director Mark Romanek (under JT’s guidance, no doubt) has out-CES’d CES just as CES is starting in Las Vegas. Currently the number two trending video on YouTube, ‘Filthy’ is a smorgasbord of unpackable assumptions about the tech, the tech community, and deeper stances about the value of the ‘real’ and the ‘fake’.
First off, the setting: a Pan-Asian conference on Deep Learning. This is definitely more down the corporate side than the academic, with its light shows and booth babes flourishing the achievements of a humanoid robot. It might be nice to think that by 2028 booth babes are at least on the endangered list, if not entirely extinct! The ones I have seen and interacted with have all been at the more corporate exhibitions where sales are the driving force. Demonstrations of deep learning based tech at these corporate events have also primarily been in the fintech, security or AI assistant domains. Humanoid robots are of less interest, especially ones that at first seem only able to perform the box-lifting kinds of tasks they are already achieving here in 2018. My only experience of an Asian conference so far was the AI and Society conference in Tokyo last year – a far more sedate affair in comparison.
Salesforce had a confetti drop at Benioff’s keynote at Dreamforce in 2017 of course, and as a music video a certain panache is necessary. A lot of this panache comes from JT himself, wearing the Steve Jobs-esque black and white with trainers of the tech entrepreneur, and channelling his dance moves to the robot. At first Pinocchio still has his strings, while he’s still doing his relatively mundane balancing/walking/lifting schtick, but once they are removed he cuts a rug to the wonder of the audience. Although JT seems to be at least inspiring the moves, perhaps even directing them at moments. So much for Deep Learning?
I say ‘he’ as the robot is presented as masculine. The model is based on JT himself using motion capture techniques, and some of the song’s lines are delivered through the JT-bot’s masculine features on his oddly malleable face (Q: wouldn’t this tech be far more fantastic than the dancing?). Also, there are the NSFW moments where the JT-bot simulates sex with the female dancers. Simulation is a key theme of this video, but the gender dynamics of the video are blunt, and not at all as ‘advanced’ as the tech is presented as being to the audience.
[UPDATE: After writing this blog post CES began in Las Vegas and stories emerged about not only the disturbing lack of female keynote speakers there (sigh. Charlotte Jee has curated a list of 348 women speakers in Tech, which is a good starting point if there are any claims about them being hard to find) but also that Giles Walker’s robo-strippers were appearing on stage with human female pole dancers. Described as originally being an artistic commentary on surveillance culture, the juxtaposition of the robo-strippers with human dancers only serves to highlight the lack of advancement in gender dynamics and the kinds of commodification at play at such tech conferences. Both of which I would like to think will have definitely been retired by 2028, if not before, so that this kind of display at CES AND the demonstration in ‘Filthy’ seem anachronistic.]
Aside from the booth babes, the other women in the video are the stage manager and a few members of the audience. They give reaction shots: the woman with white glasses who adjusts them, the stage manager who dances along to the track but who doesn’t actually seem to be doing any work (someone else is pressing buttons), a woman in the audience who tilts her head (in a robotic way?) as the robot first walks down the stairs, women clapping, and the woman who reacts to the groin grab (at the line “And what you gonna do with all that beast?”, which is followed by a roaring noise). There are male reactors in the audience too of course, but the emphasis is on observation for the women, whereas the men in the audience gape more with wonder, especially at the NSFW moments.
What is the signalling and message here? The uniform is an obvious cue – here is the Zuckerberg/Musk/Jobs figure bounding on stage and calling the shots. The stage presentation, again more CES than conference, sets up the expectations of a demonstration. But in combination with the lyrics the increasingly familiar cultural context is entwined with current deeper themes in the development of Artificial Intelligence:
If you know what’s good
(If you know what’s good)
If you know what’s good
(If you know what’s good)
Hey, if you know what’s good
(If you know what’s good)
But what is ‘good’ and do you know it? Laser lights aside, this is not a question of knowing what is ethical – there’s no suggestion of the robot turning on the audience due to its own understanding of what is ‘good’. It’s a value question – if you know what’s good you’ll have distinguished between the real and the fake. And the show is all about the ‘real’. The robot can do remarkable things, but gets more remarkable when there are no strings on him. Pinocchio is freed, and real.
However, knowing what’s real and what’s not is tricky. The twist in the tale is that JT is himself a simulation. With the refrain “put your filthy hands on me” JT runs his hands over his own body and his image breaks up into coloured blocks of light, before he finally vanishes entirely, bringing the song to an end. Which brings another angle to the line, “Baby, don’t you mind if I do, yeah. Exactly what you like times two, yeah”. He and the robot are the two that he’s bringing; two synthetic beings that can do what you like (sexually, we are led to understand). But this seems to run counter to the main theme of the song, the ‘good’ that he’s explaining: that “Haters gon’ say it’s fake. [but it’s] So real.”
Music videos can present narratives, but they are not necessarily tied into the same conventions of storytelling – that call for satisfying and thematically sensible endings – that we see in longer cinematic forms. If we take ‘Filthy’ in comparison with a feature film with a similar aesthetic, Ex Machina, which did try for a satisfying ending, we can make some further observations about the understanding of the culture that both are presenting.
JT is presented in ‘Filthy’ as a tech CEO, and Nathan Bateman in Ex Machina has strong similarities in his monochromatic fashion taste and dance moves (although Bateman also has tank tops in his clothing repertoire). The android in Ex Machina, Ava, is in a more complete form than the bare metallic bones and muscles of the JT-bot, and also demonstrates more actual Deep Learning rather than just physical prowess (you can of course make dancing bots without deep learning). JT-bot of course has the disadvantage of being in a music video rather than a film with space for dialogue. But that is not true of Kyoko, the Asian sex-bot that Bateman dances with in Ex Machina, who never speaks. There is some similar passiveness in the Asian audience in ‘Filthy’. Uncharitably, anyone working in AI research might think of JT’s presentation at a Pan-Asian conference as a little like ‘selling ice to Inuits’ – technological progress in Asia is moving at a rather rapid pace, and by 2028 whatever an American entrepreneur could take there to demonstrate might be even more archaic in that context than a dancing robot is now. Just two days ago China announced it would be building a $2.1bn AI research park.
Returning to themes, the central question of Ex Machina is realness as well – the test is to see if the third main character, Caleb, will think of Ava as really conscious even when her inner robotic workings are visible. Nathan’s final statement that Ava’s ability to manipulate Caleb is the true proof of her intelligence fatally ignores that the test should really have been about her consciousness and personhood – and he pays the price. Knowing what’s ‘good’ in Ex Machina is about knowing who is going to harm you – initially it’s about knowing that Nathan is bad, but then both Caleb and Nathan underestimate Ava, and she sets out into the world. They fail the test; they do not see the realness of her threat. Knowing what’s good in ‘Filthy’, beyond knowing what’s good for you sexually, is a confusion between knowing that the tech is really doing what it seems to be doing and the mixed message of JT himself not actually being real. I’d argue for simulation being a kind of real in ‘Filthy’, but JT looks genuinely surprised and upset to find out he’s not real.
The video ends with a light show and standing ovation – both currently not uncommon in the tech field, but this ‘Greatest Show on Earth’ grandstanding would likely feel old hat by 2028. Really, the video provides us with a reflection on how much the front-facing part of the tech industry has transformed into a show. Regularly there are calls from technologists and researchers to pull back on the hype around AI – the reaction to Sophia, the Hanson robot who cannot dance but can hold a conversation and who has been made a citizen of Saudi Arabia, has become an ignition point for conversations about robot rights, human rights, anthropomorphism and the ‘realness’ of claims about AI. Yann LeCun tackled this on Twitter in response to an article on Tech Insider in which Sophia was ‘interviewed’:
This is to AI as prestidigitation is to real magic.
Perhaps we should call this “Cargo Cult AI” or “Potemkin AI” or “Wizard-of-Oz AI”.
In other words, it’s complete bullsh*t (pardon my French).
Tech Insider: you are complicit in this scam. https://t.co/zhUE4V2PSR
The real vs fake debate will continue of course, much as the debate as to what should and should not be included under the words ‘Artificial Intelligence’. JT’s reference to Deep Learning works in a similar way – signalling a hot topic which he then plays out with funky aesthetics. But the field can’t be all smoke and mirrors, metallic confetti and lasers, booth babes and simulation, given the impact on society that AI will actually have.
At the end of 2017 I was thrilled to be a part of the live shows that BBC Click does at Broadcasting House (I appear from about 16:00). I got some really great questions from the audience of both school students and adults (and I got to save a robot!).
A film made by Dr Beth Singler, Dr Ewan St John Smith from the University of Cambridge, and Little Dragon Films of Cambridge has made the shortlist for the Arts and Humanities Research Council’s prestigious 2017 Research in Film Awards.
The film, ‘Pain in the Machine’, has been shortlisted for Best Research Film of the Year.
Hundreds of films were submitted for the Awards this year, and the overall winner of each category, who will receive £2,000 towards their filmmaking, will be announced at a special ceremony at 195 Piccadilly in London, home of BAFTA, on 9 November.
Launched in 2015, the Research in Film Awards celebrate short films, up to 30 minutes long, that have been made about the arts and humanities and their influence on our lives.
There are five categories in total with four of them aimed at the research community and one open to the public.
Filmmaker Beth Singler said: “Pain in the Machine was a chance to ask a provocative question about the future of AI and robotics. We have world-class experts really considering whether robots could, or should, feel pain. And this film has also opened up the conversation to a wider audience. We have had 17,000 views of the film on the University’s YouTube channel alongside several public screenings. Being shortlisted for this award recognises the excellent research and effort put in by the whole production team and we are thrilled.”
Mike Collins, Head of Communications at the Arts and Humanities Research Council, said: “The standard of filmmaking in this year’s Research in Film Awards has been exceptionally high and the range of themes covered span the whole breadth of arts and humanities subjects.
“While watching the films I was impressed by the careful attention to detail and rich storytelling that the filmmakers had used to engage their audiences. The quality of the shortlisted films further demonstrates the endless potential of using film as a way to communicate and engage people with academic research. Above all, the shortlist showcases the art of filmmaking as a way of helping us to understand the world that we live in today.”
A team of judges watched the longlisted films in each of the categories to select the shortlist and ultimately the winner. Key criteria included looking at how the filmmakers came up with creative ways of telling stories – either factual or fictional – on camera that capture the importance of arts and humanities research to all of our lives.
Judges for the 2017 Research in Film Awards include Richard Davidson-Houston, Head of All 4 at Channel 4 Television; Lindsay Mackie, co-founder of Film Club; and Matthew Reisz from Times Higher Education.
The winning films will be shared on the Arts and Humanities Research Council website and YouTube channel. On 9 November you’ll be able to follow the fortunes of the shortlisted films on Twitter via the hashtag #RIFA2017.