Postscript to Ethical Governor 0.1

I’ve been encouraged by a colleague to add a postscript to my last post, lest its irony be lost on any of my readers. The post was a thought experiment on what it would mean to take Ron Arkin at his word (at least in the venue of the aforementioned debate), putting his proposal to the test by following it out to (one of) its logically absurd conclusions. That is, if, as Arkin claims, it’s the failures of humans that are his primary concern, and that his ‘ethical governor’ is designed to correct, why wait for the realization of robot weapons to implement it? Why not introduce it as a restraint into conventional weapons in the first instance, as a check on the faulty behaviours of the humans who operate them? I assume, of course, that the answer to this question is that the ‘governor’ remains in the realm of aspirational fantasy, existing, I’m told, only as the sketch of an idea and some preliminary mathematics developed within a briefly funded student project back in 2009, with no actual proposal for how to translate the requisite legal frameworks into code. Needless to say, I hope, my proposal for Ethical Governor 0.1 is not something that I would want the DoD actually to fund, though there seems little danger that they would be keen to introduce such a restraint into existing weapon systems even if it could plausibly be realized.

There are two crucial issues here. The first is Arkin’s premise that, insofar as war is conducted outside of the legal frameworks developed to govern it, there could be a technological solution to that problem. The second is that such a solution could take the form of an ‘ethical governor’ based on the translation of legal frameworks like the Geneva Conventions, International Humanitarian Law and Human Rights Law into algorithmic specifications for robot perception and action. Both of these premises have been carefully critiqued by my ICRAC colleagues (see http://icrac.net/resources/ for references), as well as in a paper that I’ve co-authored with ICRAC Chair Noel Sharkey. A core problem is that prescriptive frameworks like these presuppose, rather than specify, the capacities for comprehension and judgment required to implement them in any actual situation. And it’s precisely those capacities that artificial intelligences lack, now and for the foreseeable future. Arkin’s imaginary of encoding battlefield ethics brings the field no closer to realizing the highly contingent and contextual abilities requisite to the situated enactment of ethical conduct; it begs these fundamental questions rather than seriously addressing them.

Ethical Governor 0.1

Last Monday, November 18th, Georgia Tech’s Center for Ethics and Technology hosted a debate on lethal autonomous robots between roboticist Ron Arkin (author of Governing Lethal Behavior in Autonomous Robots, 2009) and philosopher of technology Rob Sparrow (founding member of the International Committee for Robot Arms Control). Along with the crucial issues raised by Sparrow (the lost opportunities of ongoing, disproportionate expenditures on military technologies; American exceptionalism and the assumption that ‘we’ will be on the programming end and not the targets of such weapons; the prospects of an arms race in robotic weaponry and its contribution to greater global insecurity, etc.), and the various objections that I would raise to Arkin’s premise that the situational awareness requisite to legitimate killing could be programmable, one premise of Arkin’s position in particular inspired this immediate response.

Arkin insists that his commitment to research on lethal autonomous robots is based first and foremost in a concern for saving the lives of non-combatants. He proceeds from there to the ‘thesis’ that an ethically-governed robot could adhere more reliably to the laws of armed conflict and rules of engagement than human soldiers have demonstrably done. He points to the history of atrocities committed by humans, and emphasizes that his project is aimed not at the creation of an ethical robot (which would require moral agency), but (simply and/or more technically) at the creation of an ‘ethical governor’ to control the procedures for target identification and engagement and to ensure their compliance with international law. Taking that premise seriously, my partner (whom I’ll refer to here as the Lapsed Computer Scientist) suggests an intriguing beta release for Arkin’s project, preliminary to the creation of fully autonomous, ethically-governed robots: incorporate ethical governors into existing, human-operated weapon systems. Before a decision to fire could be made, in other words, the conditions of engagement would be represented and submitted to the assessment of the automated ethical governor; only if the requirements for justifiable killing were met would the soldier’s rifle or the Hellfire missile be enabled. This would at once provide a means of testing the efficacy of Arkin’s governor (he insists that proof of its reliability would be a prerequisite to its deployment) and hasten its beneficial effects on the battlefield (reckless actions on the part of human soldiers being automatically prevented). I would be interested in the response to this suggestion, both from Ron Arkin (who insists that he’s not a proponent of lethal autonomous weapons per se, but pursues this research only in the interest of saving lives) and from the Department of Defense, by which his research is funded. If there were objections to proceeding in this way, what would they be?
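
For readers who like to see such proposals spelled out, here is a minimal sketch, in Python, of the control flow that this beta release would imply: the governor interposed as an interlock between an operator’s fire request and the weapon. Every name in it is hypothetical, and the one function that matters, assess_compliance, is left as a stub, since translating the laws of armed conflict and rules of engagement into such a check is precisely what does not exist.

```python
# Purely illustrative sketch of the "ethical governor as interlock" idea
# described above. All names are hypothetical; nothing here corresponds to
# any actual system.

from dataclasses import dataclass


@dataclass
class EngagementContext:
    """A (necessarily impoverished) representation of the conditions of engagement."""
    target_description: str
    civilians_possibly_present: bool
    rules_of_engagement_met: bool  # who or what establishes this is the open question


def assess_compliance(context: EngagementContext) -> bool:
    """Stand-in for the governor's judgment.

    The proposal presupposes that this function could encode the Geneva
    Conventions, IHL, and the rules of engagement. The stub below merely
    restates inputs it cannot itself establish.
    """
    return context.rules_of_engagement_met and not context.civilians_possibly_present


def fire_requested(context: EngagementContext) -> str:
    """The interlock: the weapon is enabled only if the governor assents."""
    if assess_compliance(context):
        return "weapon enabled"
    return "weapon disabled: engagement not authorized"


if __name__ == "__main__":
    print(fire_requested(EngagementContext(
        target_description="unidentified vehicle",
        civilians_possibly_present=True,
        rules_of_engagement_met=False,
    )))
```

What the sketch makes visible is that the interlock itself is trivial; everything of consequence is deferred to a judgment that the proposal presupposes rather than provides.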