
Autonomy

Media reports of developments in so-called robotic weapons systems (a broad category that includes any system involving some degree of pre-programming as well as remote control) are haunted by the question of ‘autonomy’; specifically, the prospect that technologies acting independently of human operators will run ‘out of control’ (a fear addressed by Langdon Winner in his 1977 book Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought). While recognizing the very real dangers posed by increasing resort to on-board, algorithmic encoding of controls in military systems, I want to track the discussion of autonomy with respect to weapons systems a bit more closely. A recent story in the LA Times, noted and under discussion by my colleagues in the International Committee for Robot Arms Control (ICRAC), provides a good starting place.

While I’m going to suggest here that autonomy is something of a red herring in the context of this story, let me be clear at the outset that I believe we should be deeply concerned about the developments reported. They extend the longstanding investment in automation in the (questionable) interest of economy; compound the dangers of ever-intensified speed in war fighting; continue the extraordinary inflation of spending on weapons systems at the expense of other social spending (see the post ‘Arming robots’ below); and deepen the threat to global security posed by the already existing infrastructure of networked warfare. With that said, I want to question the framing of the developments reported in this article, centered on the question of autonomy, as the beginning of something new, unprecedented and (as often goes along with these adjectives) inevitable.

The article reports on the X-47B drone, a demonstration aircraft currently being tested by the Navy at a cost of $813 million.

“The X-47B drone, above, marks a paradigm shift in warfare, one that is likely to have far-reaching consequences. With the drone’s ability to be flown autonomously by onboard computers, it could usher in an era when death and destruction can be dealt by machines operating semi-independently.” (Chad Slattery, Northrop Grumman / January 25, 2012)

A major technical requirement for this plane is that it should be able to land under onboard control on the deck of an aircraft carrier, “one of aviation’s most difficult maneuvers.” In this respect, the X-47B is the next logical step in an ongoing process of automation: the replacement of labour with capital equipment, through the delegation of actions previously performed by skillful humans to machines. The familiarity of the story raises the question: what exactly is the “paradigm shift” here? And what are the stakes in the assertion that there is one? The author observes:

“With the drone’s ability to be flown autonomously by onboard computers, it could usher in an era when death and destruction can be dealt by machines operating semi-independently.”

Most commercial aircraft, as well as existing drones, can be put under ‘autopilot’ control, and in that sense are already operating ‘semi-independently.’ And the U.S. drone campaign is already dealing death and destruction.

“Although humans would program an autonomous drone’s flight plan and could override its decisions, the prospect of heavily armed aircraft screaming through the skies without direct human control is unnerving to many.”

Aren’t populations in Pakistan, Afghanistan, Yemen and other areas that are the target of U.S. drones already unnerved by heavily armed aircraft screaming through the skies?  And to what extent has ‘direct human control’ over existing drone systems ensured that civilians won’t be killed, whether as a consequence of mistaken targeting, or what seems to be accepted within military procedure as unavoidable ‘collateral’ damage?

“‘The deployment of such systems would reflect … a major qualitative change in the conduct of hostilities,’ [International Committee of the Red Cross] President Jakob Kellenberger said at a recent conference. ‘The capacity to discriminate, as required by [international humanitarian law], will depend entirely on the quality and variety of sensors and programming employed within the system.’”

It is clear that the ‘capacity to discriminate’ is already based on complex networks of sensors and code, and the history of the use of armed drones includes recurring examples of misrecognition of targets, extra-judicial killing, and a range of other violations of international law.

“Weapons specialists in the military and Congress acknowledge that policymakers must deal with these ethical questions long before these lethal autonomous drones go into active service, which may be a decade or more away.”

These questions – not only ethical but also moral and legal – must equally have been dealt with before lethal remotely controlled drones went into active service; which means that the latter, in their current use, are unethical, immoral and illegal.

“More aggressive robotry development could lead to deploying far fewer U.S. military personnel to other countries, achieving greater national security at a much lower cost and most importantly, greatly reduced casualties,” aerospace pioneer Simon Ramo, who helped develop the intercontinental ballistic missile, wrote in his new book, “Let Robots Do the Dying.”

The promise of lower cost rings hollow in the context of a defense budget that continues to grow, and the prediction that annual global spending on drones will double to $11.5 billion in the next few years (reported by the New Internationalist in its December 2011 issue). But ‘most importantly,’ as Ramo puts it, the ‘reduction in casualties’ refers only to ‘our’ side, and it is not only robots that are dying.

The Air Force says in its Unmanned Aircraft Systems Flight Plan 2009-2047 that “it’s only a matter of time before drones have the capability to make life-or-death decisions as they circle the battlefield.” Missing from this projection (we should be suspicious whenever we hear ‘it’s only a matter of time’) are the unresolved problems of decision-making that plague already existing armed drone systems. The focus on the future ignores the already unacceptable present. And the focus on autonomy as the threat directs our attention away from the autonomous arms-industry-out-of-control, of which the X-47B is a symptom.

Arming robots

One of my central concerns in this blog is developments in remotely controlled weapon systems, including the arming of ground robots.  A sense of the wider context for these developments is provided by Dinyar Godrej in his succinct, and chilling, analysis of the current state of the arms industry globally, published in the December 2011 issue of the New Internationalist.  As Godrej observes: “Despite the fact that arms manufacturing in most Western nations ultimately represents vast fortunes of public funds flowing into private coffers for products that deal in injury or death, the industry is usually represented as a source of pride … Nowhere is this more evident than in the US, which spends almost as much as the rest of the world combined on arms [almost $700 billion in 2010, 43% of all military expenditures globally] and is the world’s largest arms exporter to boot.  Between 2001 (the start of the ‘war on terror’) and 2003, just the increase in military spending of this country was larger than the entire military budgets of countries like China or Britain.”

So war continues to be good for business, as military funding supports technology research and development, which brings us back to iRobot. According to IEEE Spectrum, iRobot has only relatively recently, and reluctantly, entered the field of weaponized robots, perhaps driven to do so by the increasing importance of the military market to the company’s financial well-being. An initial configuration of iRobot’s 710 Warrior, as described in Defense Systems, features an Anti-Personnel Obstacle Breaching System or APOBS: more specifically, “an explosive line charge deployed by a rocket that pulls a rope with a string of fragmentation grenades attached and a small parachute at the opposite end. The explosive line charge, which the robot fires from a distance of 35 meters, can clear a path 45 meters wide.”

iRobot’s reluctance may help to explain some of the notable absences in the company’s representations of the range and functionality of its robotic products. As an ‘anti-personnel obstacle breaching system,’ the Warrior can be seen as not only a technological but also a logical extension of iRobot’s previous offerings in the line of ‘life-saving’ devices, a kind of bigger brother to the PackBot, furthering the objective of clearing away potential explosives planted by an enemy. But where the PackBot would be sent to inspect a single device, the Warrior – as seen in this demonstration video – has a wider and more indeterminate target.

The presence or absence of humans as targets of the Warrior is a point of debate even among the actors involved: IEEE Spectrum’s Automaton blog reports that in response to its first post, titled ‘iRobot demonstrates new weaponized robot,’ “Some readers argued that the APOBS, or Anti-Personnel Obstacle Breaching System, developed in a joint program of the U.S. Army and Navy, is not, technically, a weapon, because it’s not an anti-personnel system but rather a system used against obstacles. Perry Villanueva, the project engineer for the APOBS program on the Army side, says the APOBS ‘is not a weapon in the traditional sense, but it is a weapon.’” The point of confusion here seems to center on the question of just which ‘personnel’ the explosives are deployed against (in reports of IEDs, the ‘personnel’ involved are assumed to be on ‘our’ side, and ‘anti-personnel obstacles’ to be those deployed by the other side). The demonstration video, in any case, makes no reference to the possibility that other humans might be targets, or even caught within the Warrior’s very wide destructive path: the only bodies that we see are three US soldiers lying prone in readiness at a safe distance from the explosion.

The built environment here appears as an indistinct collection of metal objects and other debris, a kind of junk yard ready to be further pulverized. In actual use, however, we can assume a high probability that the vicinity of the ‘obstacle’ would itself be populated. The problems arise, most obviously, when the barren ‘battlefield’ of the demonstration video (staged at China Lake in the Mojave Desert) is replaced by more densely inhabited landscapes, home as well to non-combatants.

Robot alerts

One of the aims of this blog is to offer some critical readings of popular media representations of robots, particularly in the areas of warfare and healthcare. So let’s take the most recent Google ‘alert’ on robots to arrive in my inbox, dated January 22, 2012. We get the usual collection of stories, falling roughly into these genres:

Heroic robot ‘rescue’ missions. Reports on the use of remotely controlled, non-humanoid robots in responding to a variety of emergency situations. In this case, The Telegraph reports on the use of an ‘underwater robot equipped with a camera’ sent to monitor the area of the wreckage of the cruise ship Costa Concordia in an ongoing search for victims. A second story, in the Irish Independent, reports the failure of a Navy team equipped with a ‘robot camera’ to find the bodies of three missing fishermen in a trawler wrecked off the west coast of Ireland. I note that the almost mundane use of this relatively straightforward technology is performed as newsworthy in these stories through its figuration as at once humanlike and more-than-human in its capabilities. A familiar theme, in this case working to keep the robot future alive in the face of a tragic cessation in the recovery of those humans who have died.

Roboticists’ commentaries on the field. I’m pleased to see Helen Greiner, co-founder of iRobot Corporation and CEO of robotics start-up CyPhy Works, writing a column in the New Scientist urging that roboticists get more serious, less focused on ‘cool’ and more on ‘practicality, ruggedness and cost,’ three qualities that she believes are necessary to move robots from promissory prototypes to products on the market. To exemplify the latter she points to the non-humanoid yet useful Roomba vacuuming robot (perhaps more on Roomba in a later post), and to the success of ‘iRobot’s military robots, originally deployed in Afghanistan to defuse improvised explosive devices, [which] proved very useful to the human teams dealing with the nuclear emergency at the Fukushima Daiichi power plant in Japan.’ (See ‘heroic robots’ above.) Notably absent from mention is the iRobot 710 Warrior. Nor does iRobot advertise the robot’s ‘firefighting’ potential on its product web pages, but Wikipedia tells us that iRobot has teamed up with Australian partner Metal Storm to mount an electronically controlled firing system on a Warrior, capable of firing up to 16 rounds per second (definitely more on the Warrior in a later post).

Care robots. The majority of stories echo the pervasive fantasy of the robot caregiver: humanoid projects framed as vague promises of a future in which those figured as dependents – children on one hand, the elderly on the other – will be cared for by loving machines, relieving us of the burden of our responsibility for them. While not my focus here, these stories invariably translate the extraordinarily skillful, open-ended and irreducible complexities of caregiving into a cartoon of itself – another instance of asserting the existence of a world in which the autonomous robot would be possible, rather than imaginatively rethinking the assistive possibilities that a robot not invested in its own humanness might actually embody.

Automata.  Finally, and most interestingly, we find on the IEEE Spectrum Automaton blog a story on the work of animatronic designer Chris Clarke.  Animation, in its many and evolving forms, is an art that relies upon the animator’s close and insightful observations of the creatures that inform his or her machines, combined with ingenious invention and reconfiguration of materials and mechanisms.  Not fetishizing autonomy, the art of animation relies instead on the same suspension of disbelief that enlivens the cinema – some ideas that my colleague Jackie Stacey and I explore at greater length in our paper ‘Animation and Automation: The liveliness and labours of bodies and machines’, soon to be out in the journal Body & Society.

Remote control

According to media reports, more than 7,000 drones of all types are in use over Iraq and Afghanistan, and remote control is seen as the vanguard of a ‘revolution in military affairs’ in which U.S. military and intelligence agencies are heavily invested, in both senses of the word. With the integration of Hellfire missiles, the first armed version of the Predator drone (designated MQ-1) was deployed in Afghanistan in 2002 as part of what the U.S. military names Operation Enduring Freedom (originally Operation Infinite Justice), under the auspices of what George W. Bush declared in September 2001 to be a Global War (without end) on Terror. In 2001, the U.S. Congress gave the Pentagon the goal of making one-third of ground combat vehicles remotely operated by 2015. A decade later, under President Obama’s less colorfully named ‘Overseas Contingency Operations’, the amount of money being spent on research for military robotics surpasses the budget of the entire National Science Foundation.

‘War would be a lot safer, the Army says, if only more of it were fought by robots’ (John Markoff, NY Times, November 27, 2010). Statements like this at once assume the reader to be one of the ‘we’ for whom war would be safer, while deleting war’s Others from our view. This erasure is rendered more graphically in the image that accompanies Markoff’s article, captioned ‘Remotely controlled: Some armed robots are operated with video-game-style consoles, helping to keep humans away from danger’ (my emphasis on ‘humans’). These reports valorize the nonhuman qualities of the robots, which, Markoff reports, are ‘never distracted, using an unblinking digital eye, or “persistent stare,” that automatically detects even the smallest motion. Nor do they ever panic under fire … When a robot looks around a battlefield [says Joseph W. Dyer, a former vice admiral and the chief operating officer of iRobot], the remote technician who is seeing through its eyes can take time to assess a scene without firing in haste at an innocent person.’ But the translation of bodies into persons, and of persons into targets, is not a straightforward one.

My thinking about the human-machine interface to this point has focused on questioning assumptions about the fixity of its boundaries, while at the same time slowing down too easy erasures of differences that matter between humans and machines.  I’ve been particularly concerned with machines modelled in the image of a human that many of us in science and technology studies and feminist theory have been at pains to refigure; that is, one for whom autonomous agency and instrumental reasoning are the gold standard.  In the interest of avoiding essentialism, I’ve tried to base my arguments for difference on the ways in which different forms of embodiment afford different possibilities for reflexively co-enacting what we think of as shared situations, or reciprocity, or mutual intelligibility, or what feminist scholars like Donna Haraway have proposed that we think about as ‘response-ability’.  This argument has provided a generative basis for critique of initiatives in artificial intelligence, robotics and the like.

“Some of us think that the right organizational structure for the future is one that skillfully blends humans and intelligent machines,” says John Arquilla, executive director of the Information Operations Center at the Naval Postgraduate School. “We think that that’s the key to the mastery of 21st-century military affairs” (quoted in Markoff, November 27, 2010). Hardly a new idea (recall the Strategic Computing Initiative of the Reagan era), this persistent vision of mastery-to-come underwrites old and new alliances in research and development, funded by defense spending, taken up by academic and industrial suppliers, echoed and hyped by the media, and embraced by entertainment producers and consumers. So how, I’m now wondering, might I usefully mobilise and expand my earlier arguments regarding shifting boundaries and differences that matter between humans and machines, to aid efforts to map and interrupt what James Der Derian (2009) has called ‘virtuous war’ – that is, warfighting justified on the grounds of a presumed moral superiority, persistent mortal threat and, most crucially, minimal casualties on our side – and the military-industrial-media-entertainment network that comprises its infrastructure?