On killer robots, celebrity scientists, and the campaign to ban lethal autonomous weapons


Screencap of a South Korean autonomous weapon in action, courtesy of Richard Anders via YouTube. Reticle added by Curiousmatic.

Amidst endless screenshots from Terminator 3: Rise of the Machines (Warner Bros Pictures, 2003), and seemingly obligatory invocations of Stephen Hawking, Elon Musk and Steve Wozniak as signatories, the media reported the release on 28 July of an open letter signed by thousands of robotics and AI researchers calling for a ban on lethal autonomous weapons. The letter’s release to the press was timed to coincide with the opening of the International Joint Conference on Artificial Intelligence (IJCAI 2015) in Buenos Aires. Far more significant than the inclusion of celebrity signatories – their stunning effect in drawing international media attention notwithstanding – is the number of prominent computer scientists (not a group prone to adding their names to political calls to action) who have been moved to endorse the letter. Consistent with this combination of noise and signal, the commentaries generated by the occasion of the letter’s release range from aggravatingly misleading to helpfully illuminating.

The former category is well represented in an interview by Fox News’ Shepard Smith with theoretical physicist and celebrity scientist Michio Kaku. In response to Smith’s opening question regarding whether or not concerns about autonomous weapons are overblown, Kaku suggests that “Hollywood has us brainwashed” into thinking that Terminator-style robots are just around the corner. Quite the contrary, he assures us, “we have a long ways to go before we have sentient robots on the battlefield.” This ‘long ways to go’ is typical of futurist hedges that, while seemingly interrupting narratives of the imminent rise of the machines, implicitly endorse the assumption of continuing progress in that direction. Kaku then further affirms the possibility, if not inevitability, of the humanoid weapon: “Now, the bad news of course is that once we do have such robots, these autonomous killing machines could be a game changer.” Having effectively clarified that his complaint with Hollywood is less the figure of the Terminator-style robot than its timeline, he reassures us that “the good news is, they’re decades away. We have plenty of time to deal with this threat.” “Decades away, for sure?” asks Smith. “Not for sure, cuz we don’t know how progress is,” Kaku replies, and then offers what could be a more fundamental critique of the sentient robot project. Citing the disappointments of the recent DARPA Robotics Challenge as evidence, he explains: “It turns out that our brain is not really a digital computer.” The lesson to take from this, he proposes, is that the autonomous killing machine “is a long term threat, it’s a threat that we have time to digest and deal with, rather than running to the hills like a headless chicken” (at which he and Smith share a laugh). While I applaud Kaku’s scepticism regarding advances in humanoid robots, it’s puzzling that he himself frames the question in these terms, suggesting that it’s the prospect of humanoid killer robots to which the open letter is addressed, and (at least implicitly) dismissing its signatories as the progeny of Chicken Little.

Having by now spent all but 30 seconds of his 3 minutes and 44 seconds, Kaku then points out that “one day we may have a drone that can seek out human targets and just kill them indiscriminately. That could be a danger, a drone whose only mission is to kill anything that resembles a human form … so that is potentially a problem – it doesn’t require that much artificial intelligence for a robot to simply identify a human form, and zap it.” Setting aside the hyperbolic reference to indiscriminate targeting of any human form (though see the Super aEgis II system projected to patrol the heavily armed ‘demilitarized zone’ between North and South Korea), this final sentence (after which the interview concludes) begins to acknowledge the actual concerns behind the urgency of the campaign for a ban on lethal autonomous weapons. Those concerns turn not on the prospect of a Terminator-style humanoid or ‘sentient’ bot, but on the much more mundane progression of increasing automation in military weapon systems: in this case, automation of the identification of particular categories of humans (those in a designated area, or who fit a specified and machine-readable profile) as legitimate targets for killing. In fact, it’s only the popular media that have raised the prospect of fully intelligent humanoid robots: the letter, and the wider campaign for a ban on lethal autonomous weapons, have nothing to do with ‘Terminator-style’ robots. The developments cited in the letter are both far more specific and more imminent.

That specificity is clarified in a CNET story about the open letter, produced by Luke Westaway and broadcast on 27 July. Despite its inclusion of cuts from Terminator 3 and its invocation of the celebrity triad, we’re also informed that the open letter defines autonomous weapons as those that “select and engage targets without human intervention.” The story features interviews with Noel Sharkey of the International Committee for Robot Arms Control (ICRAC) and Thomas Nash of the UK NGO Article 36. Sharkey helpfully points out that rather than assuming humanoid form, lethal autonomous weapons are much more likely to look like already-existing weapons systems, including tanks, battleships and jet fighters. He explains that the core issue for the campaign is an international ban that would pre-empt the delegation of ‘decisions’ to kill to machines. It’s worth noting that the word ‘decision’ in this context needs to be read without the connotations that associate it with human deliberation. A crucial issue here – and one that could, in my view, be much more systematically highlighted – is that this delegation of ‘the decision to kill’ presupposes the specification, in a computationally tractable way, of algorithms for the discriminatory identification of a legitimate target. The latter, under the Rules of Engagement, International Humanitarian Law and the Geneva Conventions, is an opponent who is engaged in combat and poses an ‘imminent threat’. We have ample evidence of the increasing uncertainties involved in differentiating combatants from non-combatants under contemporary conditions of war fighting (even apart from crucial contests over the legitimacy of targeting protocols). The premise that legitimate target identification could be rendered sufficiently unambiguous to be automated reliably is at this point unfounded (apart from certain nonhuman targets, such as incoming missiles with very specific ‘signatures’, which also clearly pose an imminent threat).
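To make that presupposition concrete, consider a minimal sketch of the two kinds of targeting rule at issue. It is purely illustrative (written here in Python): every type, field name, label and threshold is hypothetical, drawn from no real system, and stands in only for the shape of the problem.

```python
# Purely illustrative sketch: what automating the 'decision to kill' presupposes.
# All names and values below are hypothetical, not taken from any real system.
from dataclasses import dataclass


@dataclass
class Track:
    """A sensor track of a candidate target (all fields hypothetical)."""
    radar_signature: str     # e.g. "anti-ship-missile" (hypothetical label)
    closing_velocity: float  # metres per second, toward the defended asset


def is_incoming_missile(track: Track) -> bool:
    """The tractable case conceded above: a nonhuman target whose very
    specific, machine-readable 'signature' itself constitutes the threat."""
    return (track.radar_signature == "anti-ship-missile"
            and track.closing_velocity > 300.0)


def is_legitimate_human_target(track: Track) -> bool:
    """The intractable case: the laws of war require a combatant, engaged
    in hostilities, posing an imminent threat. None of these judgments
    reduces to a machine-readable signature."""
    raise NotImplementedError(
        "combatant status and 'imminent threat' have no reliable "
        "computational specification")
```

The contrast between the two functions carries the argument: the first is the narrow, signature-based case that existing defensive systems already automate; the second marks the judgments that, on the campaign’s account, admit no reliable computational specification.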

‘Do we want to live in a world in which we have given machines the power to take human lives, without a human being there to pull the trigger?’ asks Thomas Nash of Article 36 (CNET, 27 July 2015). Of course the individual human with their hand on the trigger is effectively dis-integrated – or better, highly distributed – in the case of advanced weapon systems. But the existing regulatory apparatus that comprises the laws of war relies fundamentally on the possibility of assigning moral and legal responsibility. However partial and fragile its reach, this regime is our best current hope for articulating limits on killing. The precedent for a ban on lethal autonomous weapons lies in the United Nations Convention on Certain Conventional Weapons (CCW), the body created ‘to ban or restrict the use of specific types of weapons that are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately.’ Achieving that kind of legally binding international agreement, as Westaway points out, is a huge task, but as Thomas Nash explains, there has been some progress. Since the launch of the campaign in 2013, the CCW has put the debate on lethal autonomous weapons onto its agenda and held two international ‘expert’ consultations. At the end of this year, the CCW will consider whether to continue discussions or to move forward on the negotiation of an international treaty.


Convention on Certain Conventional Weapons, May 2014

To appreciate the urgency of interventions into the development of lethal autonomous weapons, science and technology studies (STS) offers a useful concept. The idea of ‘irreversibility’ points to the observation that, while technological trajectories are never self-determining or inevitable, the difficulties of undoing technological projects increase over time (see, for example, Michel Callon (1990), ‘Techno-economic networks and irreversibility’, The Sociological Review, 38: 132–161). Investments (both financial and political) increase, as does the iterative installation and institutionalization of associated infrastructures (both material and social). The investments required to dismantle established systems grow commensurately. In the CNET interview, Nash points to the entrenched and expanding infrastructures of drone technology as a case in point.

BBC World News (after invoking the Big Three, and also offering the obligatory reference to The Terminator) interviews Professor Heather Roff, who helped to draft the letter. The BBC’s Dominic Laurie asks Roff to clarify the difference between a remotely operated drone and the class of weapons to which the letter is addressed. Roff points to the fact that the targets for current drone operations are ‘vetted and checked’, in the case of the US military by a Judge Advocate General (JAG). She is quick to add, “Now, whether or not that was an appropriate target or that there are friendly fire issues or there are collateral killings is a completely different matter”; what matters for a ban on lethal autonomous weapons, she emphasizes, is that “there is a human being actually making that decision, and there is a locus of responsibility and accountability that we can place on that human.” In the case of lethal autonomous weapons, she argues, human control is lacking “in any meaningful sense”.

The question of ‘meaningful human control’ has become central to debates about lethal autonomous weapons. As formulated by Article 36, and embraced by the United Nations Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, it is precisely the ambiguity of the phrase that works to open up the discussion in vital and generative ways. In collaboration with Article 36, Roff is now beginning a project – funded by the Future of Life Institute – to develop the concept of meaningful human control more fully. The project aims to create a dataset “of existing and emerging semi-autonomous weapons, to examine how autonomous functions are already being deployed and how human control is maintained. The project will also bring together a range of actors including computer scientists, roboticists, ethicists, lawyers, diplomats and others to feed into international discussions in this area.”

While those of us engaged in thinking through STS are preoccupied with the contingent and shifting distributions of agency that comprise complex sociotechnical systems, the hope for calling central actors to account rests on the possibility of articulating relevant legal and normative frameworks. These two approaches are not, in my view, incommensurable. Jutta Weber and I have recently attempted to set out a conception of human-machine autonomies that recognizes the inseparability of human and machine agencies, and the always contingent nature of ideas of autonomy, in a way that supports the campaign against lethal autonomous weapons. Like the signatories to the open letter, and as part of a broader concern to interrupt the intensification of automated killing, we write of the urgent need to reinstate human deliberation at the heart of matters of life and death.
