Remember the way we used to feel
Before we were made from steel
We will take the field again
An army of a thousand men
It’s time to rise up, we’re breaking through
Now that we’ve been improved
Everything has been restored
I’m ready for the art of war
Sometimes my research-time listening is pretty on point.
These lyrics are from ‘Machine’ by All Good Things, a Los Angeles-based band who started out making music for video games and soundtracks. Here is the video for the song, which I’ve added to my playlist on a certain audio streaming platform:
In the music video a human, played by lead singer Dan Murphy, is changed through the addition of various bits of technology, becoming a being that is “made from steel”. “Metal and emotionless. No battlefield can hinder us. Because we are machines”, he sings as the final pieces are fitted to him and his human face is at last obscured by a gas mask.
I’ve been thinking about this music video since reading this post on the U.S. Naval Institute Blog. In that piece, written by a human (I assume!) from the perspective of SALTRON 5000 (surely only a human could have come up with that name!), the argument is put forward, in response to a Wall Street Journal piece by Heather Mac Donald on why “Women Don’t Belong in Combat Units”, that humans as a whole do not belong in combat. We are all too dominated by our sexual urges, too emotional, too fragile, too weak, too sleepy, and too bigoted – as shown by our attempts to exclude certain groups of our own number (women, minorities, certain sexual orientations…) from combat:
Attention, humans! I am the tactical autonomous ground maneuver unit, SALTRON 5000. President Salty sent me back from the year 2076 to deliver this message: YOU DO NOT BELONG IN COMBAT!
The song and video for ‘Machine’ tell the story of the ‘robomorphisation’ of a soldier. In the music video this happens through a literal cyborgisation, but it could equally be a song about the metaphorical transformation of the human into a machine in order to serve the requirements of war. The U.S. Naval Institute Blog post also performs a robomorphisation: it is the work of a human author whose views on the need for Lethal Autonomous Weapons Systems (LAWS) play out in the voice of one such ‘killer robot’ from the future. But in the latter’s case the robomorphisation serves to perform the argument that humans should no longer go to war, not to say that war itself is dehumanising. SALTRON 5000’s argument could also have been presented as an ethical rather than a pragmatic one – not just that we are too weak and too squishy, but that we shouldn’t be involved in war because it is wrong for humans to be soldiers. The human behind the curtain of SALTRON 5000 likely doesn’t want to make this argument, because it lies too close to much older arguments about the morality of war as a whole.
From a narrative perspective, we might ask whether the character of ‘SALTRON 5000’ avoids this argument because, as an AI, even one from the year 2076, it is not capable of ethical thinking. It certainly seems capable of very human abilities like sarcasm, and even of moral judgements – or of being judgemental – as seen in its references to Shellback Ceremonies (a fascinating example of modern magical/ritual thinking), sky genitalia drawings (by fighter jets), and SITREPs on bad behaviour during port visits to Thailand. Real-life and near-future LAWS obviously raise questions about how decisions, including ethical decisions, should be made. I was recently interviewed by New Scientist for a piece responding to an academic paper on the possibilities for creating an ‘Artificial Moral Agent’ (AMA).
In the paper, Building Jiminy Cricket: An Architecture for Moral Agreements Among Stakeholders, the use case was the much more mundane problem of the AMA sensing a teenager smoking marijuana and having to decide who to inform, rather than an AI having to make ethical decisions on the battlefield. However, the underlying precepts were about judging the importance of the positions of the various stakeholders involved in the situation, and a hypothetical AMA on the battlefield would also have a chain of command, as well as being beholden to any human ‘values’ that have been considered important enough to input (ignoring for a moment the multiplicity of our human values!). For some, the first human value we should consider would be the ethical imperative not to employ LAWS at all – as in the Campaign to Stop Killer Robots. Likewise, much of the reporting on the Jiminy Cricket paper was about the ethics of having AI assistants as ‘snitches’ and surveillance culture.
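To make the shape of that precept a little more concrete, here is a minimal sketch, in Python and not drawn from the paper itself, of an AMA weighing stakeholder positions to decide whether to disclose what it has sensed. The stakeholder names, weights, and the simple threshold rule are all invented for illustration; they are not the Jiminy Cricket architecture.

from dataclasses import dataclass

@dataclass
class Stakeholder:
    name: str               # e.g. "teenager", "parents", "manufacturer", "state"
    weight: float           # how much this party's position counts (invented values)
    wants_disclosure: bool  # does this party want the sensed event reported?

def decide_disclosure(stakeholders) -> bool:
    # Sum the weighted positions: positive if a stakeholder wants disclosure,
    # negative if they do not, and report only if the balance is positive.
    score = sum(s.weight if s.wants_disclosure else -s.weight for s in stakeholders)
    return score > 0

# Hypothetical marijuana scenario; every weight here is illustrative only.
scenario = [
    Stakeholder("teenager", weight=1.0, wants_disclosure=False),
    Stakeholder("parents", weight=1.5, wants_disclosure=True),   # treated as one unit
    Stakeholder("manufacturer", weight=0.5, wants_disclosure=False),
    Stakeholder("state", weight=1.0, wants_disclosure=True),
]

print(decide_disclosure(scenario))  # True: the weighted balance favours informing the parents

Even this toy version makes the problem visible: ‘parents’ is a single entry with a single weight, which is exactly the simplification I take issue with below.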
As an anthropologist, I argued that the messiness of human nature made the AMA in the marijuana scenario impossible to implement. Treating ‘parents’ as one unit ignored the real and very complicated dynamics of family life. Another use case shows just how confusing this could be for the AMA: what if, instead of marijuana, it was a stranger’s perfume or aftershave that was detected in the parents’ shared room, implying that an affair was taking place? Which member of the so-far-singular ‘parents’ unit would it inform about this evidence? How could the AMA decide whether to inform anyone at all, with no input from its creator corporation or from the state’s laws on affairs, as it had had in the marijuana example? Further, the presumption that parents could input their preferences for the AMA’s response to the marijuana scenario did not recognise that those preferences might change once their child was actually in serious trouble.
I think it is this messiness of human nature that drives our push to create AI principles, and explains why there are so many of them. We take them as a comforting reassurance that we can find a way forward when it comes to AI ethics. Professor Alan Winfield has collected a list of such principles, starting with Asimov’s Four Laws of Robotics, and recognising that “many subsequent principles have been drafted as a direct response [to them].”

Recently it was announced that “‘Killer robots’ are to be taught ethics in world-topping Australian research project”, with the claim that “Asimov’s laws can be formulated in a way that basically represents the laws of armed conflict”. There was a fair bit of pushback online about this – and it should definitely be recognised that Asimov wrote his laws as a plot device: they are meant to be imperfect, or there can be no story!
Sadly, it is certainly possible to employ Asimov’s Laws in combat situations if we recognise that ‘human’ is a culturally constructed label, and that it is in the very nature of war to choose who that label does and does not apply to and to train soldiers into holding this view. Pre-Asimov, humans had already come up with strict moral systems that said very clear things like ‘Thou shalt not kill’, and yet wars abounded because ‘those ones’ over there weren’t viewed as being the same as ‘us’ – for any number of reasons we could come up with. Current concern about LAWS and their ability to make ethical decisions as AMAs is a reflection of our own robomorphisation as we become metaphysical war machines “ready for the art of war.”