Autonomy

Media reports of developments in so-called robotic weapons systems (a broad category that includes any system involving some degree of pre-programming as well as remote control) are haunted by the question of ‘autonomy’; specifically, the prospect that technologies acting independently of human operators will run ‘out of control’ (a fear addressed by Langdon Winner in his 1977 book Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought). While recognizing the very real dangers posed by increasing resort to on-board, algorithmic encoding of controls in military systems, I want to track the discussion of autonomy with respect to weapons systems a bit more closely. A recent story in the LA Times, noted and under discussion by my colleagues in the International Committee for Robot Arms Control (ICRAC), provides a good starting place.

While I’m going to suggest here that autonomy is something of a red herring in the context of this story, let me be clear at the outset that I believe we should be deeply concerned about the developments reported. They continue the longstanding investment in automation in the (questionable) interest of economy; compound the dangers of ever-intensified speed in war fighting; contribute to the extraordinary inflation of spending on weapons systems at the expense of other social spending (see my earlier post Arming Robots); and add to the threat to global security posed by the already existing infrastructure of networked warfare. With that said, I want to question the framing of the developments reported in this article, centered on the question of autonomy, as the beginning of something new, unprecedented and (as often goes along with these adjectives) inevitable.

The article reports on the X-47B drone, a demonstration aircraft currently being tested by the Navy at a cost of $813 million.

“The X-47B drone, above, marks a paradigm shift in warfare, one that is likely to have far-reaching consequences. With the drone’s ability to be flown autonomously by onboard computers, it could usher in an era when death and destruction can be dealt by machines operating semi-independently.” (Chad Slattery, Northrop Grumman / January 25, 2012)

A major technical requirement for this plane is that it should be able to land under onboard control on the deck of an aircraft carrier, “one of aviation’s most difficult maneuvers.” In this respect, the X-47B is the next logical step in an ongoing process of automation, of the replacement of labour with capital equipment, through the delegation to machines of actions previously performed by skillful humans. The familiarity of the story in this respect raises the question: what exactly is the “paradigm shift” here? And what are the stakes in the assertion that there is one? The author observes:

“With the drone’s ability to be flown autonomously by onboard computers, it could usher in an era when death and destruction can be dealt by machines operating semi-independently.”

Most commercial aircraft, as well as existing drones, can be put under ‘autopilot’ control, and are always operating ‘semi-independently.’ And the U.S. drone campaign is already dealing death and destruction.

“Although humans would program an autonomous drone’s flight plan and could override its decisions, the prospect of heavily armed aircraft screaming through the skies without direct human control is unnerving to many.”

Aren’t populations in Pakistan, Afghanistan, Yemen and other areas that are the target of U.S. drones already unnerved by heavily armed aircraft screaming through the skies?  And to what extent has ‘direct human control’ over existing drone systems ensured that civilians won’t be killed, whether as a consequence of mistaken targeting, or what seems to be accepted within military procedure as unavoidable ‘collateral’ damage?

“‘The deployment of such systems would reflect … a major qualitative change in the conduct of hostilities,’ committee [of the International Red Cross] President Jakob Kellenberger said at a recent conference. ‘The capacity to discriminate, as required by [international humanitarian law], will depend entirely on the quality and variety of sensors and programming employed within the system.’”

It is clear that the ‘capacity to discriminate’ is already based on complex networks of sensors and code, and the history of the use of armed drones includes recurring examples of misrecognition of targets, extra-judicial killing, and a range of other violations of international law.

“Weapons specialists in the military and Congress acknowledge that policymakers must deal with these ethical questions long before these lethal autonomous drones go into active service, which may be a decade or more away.”

These questions – not only ethical but also moral and legal – must equally have been dealt with before lethal remotely controlled drones went into active service. That they were not means that the latter are, in their current use, unethical, immoral and illegal.

“More aggressive robotry development could lead to deploying far fewer U.S. military personnel to other countries, achieving greater national security at a much lower cost and most importantly, greatly reduced casualties,” aerospace pioneer Simon Ramo, who helped develop the intercontinental ballistic missile, wrote in his new book, “Let Robots Do the Dying.”

The promise of lower cost rings hollow in the context of a defense budget that continues to grow, and the prediction that annual global spending on drones will double to $11.5 billion in the next few years (reported by the New Internationalist in their December 2011 issue).  But ‘most importantly,’ as Ramo puts it, the ‘reduction in casualties’ refers only to ‘our’ side, and it is not only robots that are dying.

The Air Force says in the Unmanned Aircraft Systems Flight Plan 2009-2047 that “it’s only a matter of time before drones have the capability to make life-or-death decisions as they circle the battlefield.” What’s missing from this projection (we should be suspicious whenever we hear ‘it’s only a matter of time’) are the unresolved problems of decision-making that plague already existing armed drone systems. The focus on the future ignores the already unacceptable present. And the focus on autonomy as the threat directs our attention away from the autonomous arms-industry-out-of-control, of which the X-47B is a symptom.


Comments

  • andrewclement  On January 30, 2012 at 6:43 pm

    While achieving the kind of ‘autonomy’ that these drones are intended to exhibit, as in landing on an aircraft carrier without a remote human controller, would represent a major technological advance, it wouldn’t make any difference in assigning responsibility for any actions taken by the craft. Driving with cruise control puts the accelerator pedal under ‘autonomous’ algorithmic control, but offers no defense in the case of a driving infraction.

  • Mark Gubrud  On January 31, 2012 at 12:11 pm

    The X-47B’s ability to land autonomously on the deck of an aircraft carrier is by itself of little significance other than as a milestone of technological progress in UAV development. As you point out, conventional aircraft have long been autopiloted and even auto-landed with the aid of computers and radio beacons. Landing on a carrier is much harder, but still just an evolutionary advance.

    What would be – and, unless somehow prevented, will be – a profound and fundamental change is when UAVs and other lethal robots are enabled and allowed to make life-and-death decisions autonomously or with only minimal human oversight; when we allow machines to decide when, where, and whom to kill; when machines are turned loose to hunt human beings and kill them.

    This kind of autonomy is not a red herring at all, it is the heart of the issue posed by today’s explosion of military robotics.

    • Lucy Suchman  On January 31, 2012 at 12:35 pm

      Mark’s comment affords me an opportunity to try to clarify my argument on the question of autonomy. A primary issue in this context is targeting, the premise of ‘positive identification’ as a precondition for legal engagement. We see that targeting is a serious problem already, in the existing configurations of human and automated sensing that comprise the drone system, as the reports of misrecognition and civilian casualties multiply. Given that, what justifiable basis is there for further automation, for the intensification of the move towards fewer humans, dependent on more complexly mediated perception, operating larger numbers of lethal weapons simultaneously, and at greater speed?

      So we agree, I think, that the move towards increasing automation is an urgent concern. And this raises the question of how we would draw the line beyond which automated targeting becomes illegal. Thinking through that question is vitally important but so, I want to argue, is articulating the illegality of those systems already in use, notwithstanding the inclusion of human oversight.

  • dbellin  On March 4, 2012 at 2:58 pm

    See this: ‘Aerial robots swarm the stage at TED’ [video]. The link is at:

    http://arstechnica.com/science/news/2012/03/robots-swarm-the-stage-at-ted.ars?utm_source=rss&utm_medium=rss&utm_campaign=rss

    Things are going to get pretty scary….

  • Lucy Suchman  On April 25, 2012 at 9:35 am

    I’ll do a longer reading of this video soon, after the Drone Summit on Killing and Spying by Remote Control in Washington, D.C. this weekend. (See: http://www.codepinkalert.org/article.php?id=6065). But in the meantime we might think about Medea Benjamin’s report, in her new book Drone Warfare: Killing by Remote Control, that on September 1, 2011 the company AeroVironment announced that it was awarded a $4.9 million contract from the U.S. Army to build a 5 ½ pound drone called the Switchblade. Designed to “carry an explosive payload into a target” (Scott Shane, “Coming Soon – The Drone Arms Race,” The New York Times, October 9, 2011), the Switchblade, Benjamin observes, can also serve as the U.S. military’s very own robotic suicide bomber. Scary, and very sad.

  • Rob Wilcox  On September 23, 2012 at 9:51 am

    The autonomous machine on the battlefield was one of the concerns of the late Gary Chapman, the first executive director of CPSR and an ex-Special Forces officer. Even if the practice were outlawed by international treaty, breakout capacity could be maintained, as is the case with mines, where countries are permitted to retain them for research. The issue is with upstream research, and there are only a handful of agencies in the world funding this work.
