Postscript to Ethical Governor 0.1

I’ve been encouraged by a colleague to add a postscript to my last post, lest its irony be lost on any of my readers. The post was a form of thought experiment on what it would mean to take Ron Arkin at his word (at least in the venue of the aforementioned debate), to put his proposal to the test by following it out to (one of) its logically absurd conclusions. That is, if as Arkin claims it’s the failures of humans that are his primary concern, and that his ‘ethical governor’ is designed to correct, why wait for the realization of robot weapons to implement it? Why not introduce it as a restraint into conventional weapons in the first instance, as a check on the faulty behaviours of the humans who operate them? Of course I assume that the answer to this question is that the ‘governor’ remains in the realm of aspirational fantasy, existing I’m told only in the form of a sketch of an idea and some preliminary mathematics developed within a briefly funded student project back in 2009, with no actual proposal for how to translate the requisite legal frameworks into code. Needless to say, I hope, my proposal for the Ethical Governor 0.1 is not something that I would want the DoD actually to fund, though there seems little danger that they would be keen to introduce such a restraint into existing weapon systems even if it could plausibly be realized.

There are two crucial issues here. The first is Arkin’s premise that, insofar as war is conducted outside of the legal frameworks developed to govern it, there could be a technological solution to that problem. And the second is that such a solution could take the form of an ‘ethical governor’ based on the translation of legal frameworks like the Geneva Conventions, International Humanitarian Law and Human Rights Law into algorithmic specifications for robot perception and action. Both of these have been carefully critiqued by my ICRAC colleagues (see http://icrac.net/resources/ for references), as well as in a paper that I’ve co-authored with ICRAC Chair Noel Sharkey. A core problem is that prescriptive frameworks like these presuppose, rather than specify, the capacities for comprehension and judgment required for their implementation in any actual situation. And it’s precisely those capacities that artificial intelligences lack, now and for the foreseeable future. Arkin’s imaginary of the encoding of battlefield ethics brings the field no closer to the realization of the highly contingent and contextual abilities that are requisite to the situated enactment of ethical conduct, begging these fundamental questions rather than seriously addressing them.

Comments

  • Lucy Suchman  On November 27, 2013 at 3:32 pm

    A post-postscript: Ron Arkin informs me that the ethical governor was funded by the Army Research Office over 3 years from 2006-2009, resulting in what he characterizes as a ‘prototype proof-of-concept system.’ For a video demonstration see http://www.cc.gatech.edu/ai/robot-lab/ethics/#multi. My understanding from this is that the project has not received further DoD funding.

  • robotman  On November 29, 2013 at 1:38 am

    Hi Lucy – thank you for sharing this piece. I just followed the link that you provide but could not find anything that could be considered a “proof of concept” – even a prototype one. I assume that you mean the four little video clips where there is a voiceover that simply restates what is in the book plus a few shots of a crude interface. Other than that, there is a sort of cartoon aerial vehicle that simply moves along in a straight line.

    These are essentially a slide show with a voiceover. I cannot see anything that could be characterised as a proof – prototype or otherwise. Quite disappointing really.

  • Lucy Suchman  On November 29, 2013 at 7:30 am

    Hi robotman – thanks in turn for your comment. The link was forwarded to me by Ron Arkin, along with pointers to his book and other writings on this topic. Your comment makes clear that ‘proof’ is contestable … not a contest that I would want to pursue further in the context of this blog, but one that is important in the wider debate on automation/autonomy in weapon systems.

  • Nao@snet.net  On December 2, 2013 at 2:29 pm

    http://today.uconn.edu/blog/2010/11/the-ethical-robot/
