Ethical Governor 0.1

Last Monday, November 18th, Georgia Tech's Center for Ethics and Technology hosted a debate on lethal autonomous robots between roboticist Ron Arkin (author of Governing Lethal Behavior in Autonomous Robots, 2009) and philosopher of technology Rob Sparrow (founding member of the International Committee for Robot Arms Control). Rob Sparrow raised crucial issues: the lost opportunities of ongoing, disproportionate expenditures on military technologies; American exceptionalism and the assumption that 'we' will be on the programming end, and not the targets, of such weapons; and the prospects of an arms race in robotic weaponry and its contributions to greater global insecurity. I would add various objections to Arkin's premise that the situational awareness requisite to legitimate killing could be programmable. But one premise of Arkin's position in particular inspired this immediate response.

Arkin insists that his commitment to research on lethal autonomous robots is grounded first and foremost in a concern for saving the lives of non-combatants. He proceeds from there to the 'thesis' that an ethically-governed robot could adhere more reliably to the laws of armed conflict and rules of engagement than human soldiers have demonstrably done. He points to the history of atrocities committed by humans, and emphasizes that his project is aimed not at the creation of an ethical robot (which would require moral agency), but (simply and/or more technically) at the creation of an 'ethical governor' to control the procedures for target identification and engagement and ensure their compliance with international law.

Taking that premise seriously, my partner (whom I'll refer to here as the Lapsed Computer Scientist) suggests an intriguing beta release for Arkin's project, preliminary to the creation of fully autonomous, ethically-governed robots: incorporate ethical governors into existing, human-operated weapon systems. Before a decision to fire could be made, in other words, the conditions of engagement would be represented and submitted to the assessment of the automated ethical governor; only if the requirements for justifiable killing were met would the soldier's rifle or the Hellfire missile be enabled. This would at once provide a means for testing the efficacy of Arkin's governor (he insists that proof of its reliability would be a prerequisite to its deployment), and hasten its beneficial effects on the battlefield (reckless actions on the part of human soldiers being automatically prevented).

I would be interested in the response to this suggestion, both from Ron Arkin (who insists that he's not a proponent of lethal autonomous weapons per se, but only interested in saving lives) and from the Department of Defense, which funds his research. If there were objections to proceeding in this way, what would they be?
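For readers who want the proposal in mechanical terms, the interlock the Lapsed Computer Scientist describes can be sketched in a few lines. Everything below is hypothetical illustration: the condition names and checks are invented for this sketch, bear no relation to Arkin's actual architecture, and deliberately sidestep the hard question raised above, namely whether such conditions could ever be reliably computed from a battlefield scene in the first place.

```python
# A minimal sketch of the proposed "governor-in-the-loop" arrangement:
# the human operator's fire command only takes effect if an automated
# governor judges every represented condition of engagement satisfied.
# All names and rules here are assumptions made for illustration.

from dataclasses import dataclass


@dataclass
class EngagementConditions:
    """A (grossly simplified) representation of the situation that would
    be submitted to the governor before the weapon could be enabled."""
    target_identified_as_combatant: bool
    protected_persons_at_risk: bool
    force_proportional_to_threat: bool


def governor_permits_fire(c: EngagementConditions) -> bool:
    """Enable the weapon only if every encoded constraint holds.

    Any failed check leaves the weapon disabled -- the governor acts as
    a veto on the human decision, not as an autonomous trigger.
    """
    return (
        c.target_identified_as_combatant
        and not c.protected_persons_at_risk
        and c.force_proportional_to_threat
    )
```

The point of the sketch is the asymmetry of the design: the governor can only withhold fire, never initiate it, which is what would make a human-operated beta test of the governor possible before any autonomous deployment.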
