
Slow robots and slippery rhetorics


The recently concluded DARPA Robotics Challenge (DRC), held this past week at a NASCAR racetrack near Homestead, Florida, seems to have had a refreshingly sobering effect on media coverage of advances in robotics.  A field of sixteen teams, the victors of earlier trials (it was to have been seventeen, but ‘travel issues’ prevented the Chinese team from participating), represented the international state of the art in the development of mobile, and more specifically ‘legged’, robots.  The majority of the teams worked with machines configured as upright, bipedal humanoids, while two figured their robots as primates (RoboSimian and CHIMP), and one as a non-anthropomorphised ‘hexapod’. The Challenge staged a real-time, public demonstration of the state of the art; one which, it seems, proved disillusioning to many who witnessed it.  For all but the most technically knowledgeable in the audience, the actual engineering achievements were hard to appreciate.  More clearly evident were the slowness and clumsiness of the robots, and their vulnerability to failure at tasks that human contenders would have found quite unremarkable.  A photo gallery titled ‘Robots to the Rescue, Slowly’ is indicative, and the BBC titled its coverage of the Challenge ‘Robot competition reveals rise of the machines not imminent’.

Reporter Zachary Fagenson sets the scene with a representative moment in the competition:

As a squat, red and black robot nicknamed CHIMP gingerly pushed open a spring-loaded door a gust of wind swooped down onto the track at the Homestead-Miami Speedway and slammed the door shut, eliciting a collective sigh of disappointment from the audience.

In the BBC’s video coverage of the event, Dennis Hong, Director of the Virginia Tech Robotics Lab, tells the interviewer: “When many people think about robots, they watch so many science fiction movies, they think that robots can run and do all the things that humans can do.  From this competition you’ll actually see that that is not the truth. The robots will fall, it’s gonna be really, really slow…” and DARPA Director Arati Prabhakar concurs: “I think that robotics is an area where our imaginations have run way ahead of where the technology actually is, and this challenge is not about science fiction it’s about science fact.”  While many aspects of the competition would challenge the separateness of fiction and fact (not least the investment of its funders and competitors in figuring robots as humanoids), this is nonetheless a difference that matters.

These cautionary messages are contradicted, however, in a whiplash-inducing moment at the close of the BBC clip, when Boston Dynamics Project Manager Joe Bondaryk makes the canonical analogy between the trials and the Wright brothers’ first flight, reassuring us that “if all this keeps going, then we can imagine having robots by 2015 that will, you know, that will help our firefighters, help our policemen to do their jobs” (just one year after next year’s Finals, and a short time frame even by comparison with the remarkable history of flight).

The winning team, University of Tokyo spin-out company Schaft (recently acquired by Google), attributes its edge in the competition to a new high-voltage, liquid-cooled motor technology that uses a capacitor rather than a battery for power, which the engineers say lets the robot’s arms move and pivot at higher speeds than would otherwise be possible. Second and fourth place went to teams that had adopted the Boston Dynamics (another recent Google acquisition) Atlas robot as their hardware platform, the Florida Institute for Human and Machine Cognition (IHMC) and MIT respectively. (With much fanfare, DARPA funded the delivery of Atlas robots to a number of the contenders earlier this year.)  Third place went to Carnegie Mellon University’s ‘CHIMP’, while one of the least successful entrants, scoring zero points, was NASA’s ‘Valkyrie’, described in media reports as the only gendered robot in the group (as signaled by its white vinyl body and suggestive bulges in the ‘chest’ area).  Asked about the logic of Valkyrie’s form factor, Christopher McQuin, NASA’s chief engineer for hardware development, offered: “The goal is to make it comfortable for people to work with and to touch.”  (To adequately read this comment, and Valkyrie’s identification as gendered against the ‘neutrality’ of the other competitors, would require its own post.)  The eight teams with the highest scores are eligible to apply for up to $1 million in funding to prepare for the final round of the Challenge in late 2014, where a winner will take a $2 million prize.

An article on the Challenge in the MIT Technology Review by journalist Will Knight includes the sidebar: ‘Why it Matters: If they can become nimbler, more dexterous, and safer, robots could transform the way we work and live.’  Knight thereby implies that we should care about robots, their actual clumsiness and unwieldiness notwithstanding, because if they were like us, they could transform our lives.  The invocation of the way we live here echoes the orientation of the Challenge overall, away from robots as weapons – as instruments of death – and towards the figure of the first responder as the preserver of life.  Despite its sponsorship by the Defense Advanced Research Projects Agency (DARPA), the agency charged with developing new technology for the military, the Challenge is framed not in terms of military R&D, but as an exercise in the development of ‘rescue robots‘.

More specifically, DARPA statements, as well as media reports, position the Challenge itself, along with the eight tasks assigned to the robotics teams (e.g. walking over rubble, clearing debris, punching a hole in drywall, turning a valve, attaching a fire hose, climbing a ladder), as a response to the disastrous meltdown of the Fukushima-Daiichi nuclear power plant.  (For a challenge to this logic see Maggie Mort’s comment on my earlier post ‘will we be rescued?’)  While this leaves open the question of how robots would be hardened against the effects of nuclear radiation, and at what cost (the robots competing in the Challenge already cost up to several million dollars each), Knight suggests that if robots can be developed that are capable of taking on these tasks, “they could also be useful for much more than just rescue missions.”  Knight observes that the robot of the winning team “is the culmination of many years of research in Japan, inspired in large part by concerns over the country’s rapidly aging population,” a proposition affirmed by DARPA Program Manager Gill Pratt, who “believes that home help is the big business opportunity [for] humanoid robots.”  Just what the connection might be between these pieces of heavy machinery and care at home is left to our imaginations, but quite remarkably Pratt further suggests “that the challenges faced by the robots involved in the DARPA event are quite similar to those that would be faced in hospitals and nursing homes.”

In an article by Pratt published early in December in the Bulletin of the Atomic Scientists, titled Robot to the Rescue, we catch a further glimpse of what the ‘more than rescue’ applications for the Challenge robots might be.  Pratt’s aspirations for the DARPA Robotics Challenge invoke the familiar (though highly misleading) analogy between the robot and the developing human: “by the time of the DRC Finals, DARPA hopes the competing robots will demonstrate the mobility and dexterity competence of a 2-year-old child, in particular the ability to execute autonomous, short tasks such as ‘clear out the debris in front of you’ or ‘close the valve,’ regardless of outdoor lighting conditions and other variations.”  I would challenge this comparison on the basis that it underestimates the competencies of the 2-year-old child, but I suspect that many parents of 2-year-olds might question its aptness on other grounds as well.

Having set out the motivation and conditions of the Challenge, in a section titled ‘Don’t be scared of the robot’ Pratt turns to the “broad moral, ethical, and societal questions” that it raises, noting that “although the DRC will not develop lethal or fully autonomous systems, some of the technology being developed in the competition may eventually be used in such systems.”  He continues:

society is now wrestling with moral and ethical issues raised by remotely operated unmanned aerial vehicles that enable reconnaissance and projection of lethal force from a great distance … the tempo of modern warfare is escalating, generating a need for systems that can respond faster than human reflexes. The Defense Department has considered the most responsible way to develop autonomous technology, issuing a directive in November 2012 that carefully regulates the way robotic autonomy is developed and used in weapons systems. Even though DRC robots may look like those in the movies that are both lethal and autonomous, in fact they are neither.

The slippery slope of automation and autonomy in military systems, and the U.S. Defense Department’s ambiguous assurances about its commitment to the continued role of humans in targeting and killing, are the topic of ongoing debate and of a growing campaign to ban lethal autonomous weapons (see the ICRAC website for details).  I would simply note here the moment of tautological reasoning wherein ‘the tempo of modern warfare,’ presented as a naturally occurring state of the world, becomes the problem for which faster response is the solution, which in turn justifies the need for automation, which in turn increases the tempo, and so on.

In elaborating the motivation for the Challenge, Gill Pratt invokes a grab-bag of familiar specters of an increasingly ‘vulnerable society’ (population explosion with disproportionate numbers of frail elderly, climate change, weapons of mass destruction held in the wrong hands) as calling for, if not a technological solution, at least a broad mandate for robotics research and development:

The world’s population is continuing to grow and move to cities situated along flood-prone coasts. The population over age 65 in the United States is forecast to increase from 13 percent to 20 percent by 2030, and the elderly require more help in emergency situations. Climate change and the growing threat of proliferation of weapons of mass destruction to non-state actors add to the concern. Today’s natural, man-made and mixed disasters might be only modest warnings of how vulnerable society is becoming.

One implication of this enumeration is that even disaster can be good for business, and humanoid robotics research, Pratt assures us, is all about saving lives and protecting humans (a term that seems all-encompassing at the same time that it erases from view how differently actual persons are valued in the rhetorics of ‘Homeland Security’).  The figure of the ‘warfighter’ appears only once, towards the end of Pratt’s piece, and even there the robot’s role in the military is about preserving, not taking, life.  But many of us are not reassured by the prospect of robot rescue, and would instead call on the U.S. Government to take action to mitigate climate change, to dial down its commitment to militarism along with its leading role in the international arms trade, and to invest in programs that provide meaningful jobs at a living wage to humans, not least those engaged in the work of care.  The robot Challenge could truly be an advance if the spectacle of slow robots were to raise questions about the future of humanoid robotics as a project for our governments and universities to be invested in, and about the good faith of slippery rhetorics that promise the robot first responder as the remedy for our collective vulnerability.

The vagaries of ‘precision’ in targeted killing

Two events in the past week highlight the striking contrast between the Obama administration’s current policy regarding the use of armed drones as part of the U.S. ‘Counterterrorism Strategy,’ and the positions of those who challenge that strategy’s legality and morality.

The first is the Drone Summit held on April 28-29 in Washington, D.C., co-organized by the activist group CodePink, the Center for Constitutional Rights, and the UK organization Reprieve. The summit presentations offered compelling testimony, from participants including Pakistani attorney Shahzad Akbar, Reprieve’s Clive Stafford Smith, Chris Woods of the Bureau of Investigative Journalism, Pakistani journalist Madiha Tahir, and Somali activist Sadia Ali Aden, to documented and extensive civilian injury and death from U.S. drone strikes in Pakistan, Yemen and Somalia.  While popular support in the United States is based on the premise (and promise) that strikes kill only ‘militants,’ these speakers underscored the vagaries of the categories that inform the (il)legitimacy of extrajudicial targeted killing.

According to the Bureau of Investigative Journalism, between 2004 and 2011 the CIA conducted over 300 drone strikes in Pakistan, killing somewhere between 2,372 and 2,997 people.  Waziristan, in the northwest of Pakistan on the frontier with Afghanistan (part of the so-called Federally Administered Tribal Areas), is the focus of these targeted killings. Shahzad Akbar cited estimates that more than 3,000 people have been killed in the area, but its closure to outside journalists adds to the secrecy in which the killings are carried out. One recent victim of the strikes, 16-year-old Tariq Aziz, had joined a campaign organized by Akbar’s Foundation for Fundamental Rights, in collaboration with Reprieve, to crowdsource documentation of strikes inside Waziristan using cell phones. Within 72 hours of his participation in the training, Aziz himself was killed in a drone strike on the car in which he was traveling with his younger cousin.  Whether Aziz was deliberately targeted or was another innocent casualty remains unknown.

In the targeting of houses believed to shelter ‘militants’, Akbar reported, strikes are concentrated at mealtimes and at night, when families are most likely to be assembled.  Not only do immediate family members die in these strikes, but often those in neighboring houses as well, particularly children hit by shrapnel. So how is the category of ‘militant’ defined?  Clive Stafford Smith of Reprieve points out that targeted killing relies upon the same intelligence that informed the detention of ‘militants’ at Guantanamo, where 80% of those held have been cleared.  He reported as well that the U.S. routinely offers informants $5,000 for information leading to the identification of ‘bad guys’, a sum equivalent, for a relatively affluent American, to a quarter of a million dollars.

Particularly in those areas where targeted killings are concentrated, being identified as a ‘militant’, even being armed, does not in itself meet the criterion of posing an imminent threat to the United States.  But the U.S. Government has so far refused to release either the criteria or the evidentiary bases for its placement of persons on targeted kill lists.  This problem is intensified by the administration’s recent endorsement of so-called ‘signature’ targeting, in which, in place of positive identification of individuals who pose a concrete, specific and imminent threat to life (as required by the laws of armed conflict), targeting can be based on patterns of behavior, observed from the air, that correspond with profiles specified as evidence of ‘militancy’. Shahzad Akbar points out that ‘signature’ effectively means profiling, adding that “before they used to arrest and question you, now they just kill you.”  The elision of distinctions between being armed and being a ‘terror suspect’ allows wide scope for action, as does the failure to recognize how these ‘targeted’ killings (where we now have to put targeted into scare quotes as well, insofar as we are coming to recognize the questions and uncertainties that the term masks) might themselves be experienced as terror by civilians on the ground.  Pakistani journalist Madiha Tahir urges us, in considering who is a ‘militant,’ to ask how a person becomes one.  People join ‘militant’ groups largely in relation to internal divisions quite apart from actions aimed at the U.S., but now increasingly also because of U.S. attacks. “On what grounds,” she asked, “does it make sense to terrorize people in order to hunt terrorists?”

The second event of the past week was the appearance of President Obama’s ‘top counterterrorism advisor’ John Brennan at the Wilson Center, where he asserted that the growing use of armed unmanned aircraft in Pakistan, Yemen and Somalia has saved American lives, and that civilian casualties from U.S. drones are “exceedingly rare” and rigorously investigated.  As the LA Times reports, “Brennan emphasized throughout his speech that drone strikes are carried out against ‘individual terrorists.’ He did not mention so-called signature strikes, a type of attack the U.S. has used in Pakistan against facilities and suspected militants without knowing the target’s name. When asked later by a member of the audience whether the standards he outlined for drone attacks also applied to signature strikes, Brennan said he was not speaking of signature strikes but that all attacks launched by the U.S. are done in accordance with the rule of law. The White House this month approved the use of signature strikes in Yemen after U.S. officials previously insisted that it would target only people whose names are known. The new rules permit attacks against individuals suspected of militant activities, even when their names are unknown or only partially known, a U.S. official said.”

Contrasting war by remote control with traditional military operations, Brennan argued that “large, intrusive military deployments risk playing into Al Qaeda’s strategy of trying to draw us into long, costly wars that drain us financially, inflame anti-American resentment and inspire the next generation of terrorists.”  The implication is that death by remote control does not have the same consequences.

CodePink anti-warrior Medea Benjamin brought the contradictions of these two events together when she staged a courageous interruption of Brennan’s speech, continuing her testimony on behalf of innocent victims of U.S. drone strikes even as she was literally lifted off her feet and carried out of the room by a burly security guard.

For a compelling and carefully researched introduction to the drone industry and its problems see Medea Benjamin’s new book Drone Warfare: Killing by remote control.

Autonomy

Media reports of developments in so-called robotic weapons systems (a broad category that includes any system involving some degree of pre-programming as well as remote control) are haunted by the question of ‘autonomy’; specifically, the prospect that technologies acting independently of human operators will run ‘out of control’ (a fear addressed by Langdon Winner in his 1977 book Autonomous Technology: technics-out-of-control as a theme in political thought).  While recognizing the very real dangers posed by increasing resort to on-board, algorithmic encoding of controls in military systems, I want to track the discussion of autonomy with respect to weapons systems a bit more closely.  A recent story in the LA Times, noted and under discussion by my colleagues in the International Committee for Robot Arms Control (ICRAC), provides a good starting place.

While I’m going to suggest here that autonomy is something of a red herring in the context of this story, let me be clear at the outset that I believe that we should be deeply concerned about the developments reported. They represent a continuation of the longstanding investment in automation in the (questionable) interest of economy; the dangers of ever-intensified speed in war fighting; the extraordinary inflation of spending on weapons systems at the expense of other social spending (see post Arming Robots); and the threat to global security of the already existing infrastructure of networked warfare.  With that said, I want to question the framing of the developments reported in this article as the beginning of something new, unprecedented and (as often goes along with these adjectives) inevitable, centering on the question of autonomy.

The article reports on the X-47B drone, a demonstration aircraft currently being tested by the Navy at a cost of $813 million.

“The X-47B drone, above, marks a paradigm shift in warfare, one that is likely to have far-reaching consequences. With the drone’s ability to be flown autonomously by onboard computers, it could usher in an era when death and destruction can be dealt by machines operating semi-independently.” (Chad Slattery, Northrop Grumman / January 25, 2012)

A major technical requirement for this plane is that it be able to land under onboard control on the deck of an aircraft carrier, “one of aviation’s most difficult maneuvers.”  In this respect, the X-47B is a next logical step in an ongoing process of automation, of the replacement of labour with capital equipment, through the delegation to machines of actions previously done by skillful humans. The familiarity of the story in this respect raises the question: what exactly is the “paradigm shift” here?  And what are the stakes in the assertion that there is one?  The author observes:

“With the drone’s ability to be flown autonomously by onboard computers, it could usher in an era when death and destruction can be dealt by machines operating semi-independently.”

Most commercial aircraft, as well as existing drones, can be put under ‘auto pilot’ controls, and are always operating ‘semi-independently.’  And the U.S. drone campaign is already dealing death and destruction.

“Although humans would program an autonomous drone’s flight plan and could override its decisions, the prospect of heavily armed aircraft screaming through the skies without direct human control is unnerving to many.”

Aren’t populations in Pakistan, Afghanistan, Yemen and other areas that are the target of U.S. drones already unnerved by heavily armed aircraft screaming through the skies?  And to what extent has ‘direct human control’ over existing drone systems ensured that civilians won’t be killed, whether as a consequence of mistaken targeting, or what seems to be accepted within military procedure as unavoidable ‘collateral’ damage?

“‘The deployment of such systems would reflect … a major qualitative change in the conduct of hostilities,’ committee [of the International Red Cross] President Jakob Kellenberger said at a recent conference. ‘The capacity to discriminate, as required by [international humanitarian law], will depend entirely on the quality and variety of sensors and programming employed within the system.’”

It is clear that the ‘capacity to discriminate’ is already based on complex networks of sensors and code, and the history of the use of armed drones includes recurring examples of misrecognition of targets, extra-judicial killing, and a range of other violations of international law.

“Weapons specialists in the military and Congress acknowledge that policymakers must deal with these ethical questions long before these lethal autonomous drones go into active service, which may be a decade or more away.”

These questions – not only ethical but also moral and legal – must equally have been dealt with before lethal remotely-controlled drones went into active service.  Which means that the latter are, in their current use, unethical, immoral and illegal.

“More aggressive robotry development could lead to deploying far fewer U.S. military personnel to other countries, achieving greater national security at a much lower cost and most importantly, greatly reduced casualties,” aerospace pioneer Simon Ramo, who helped develop the intercontinental ballistic missile, wrote in his new book, “Let Robots Do the Dying.”

The promise of lower cost rings hollow in the context of a defense budget that continues to grow, and the prediction that annual global spending on drones will double to $11.5 billion in the next few years (reported by the New Internationalist in their December 2011 issue).  But ‘most importantly,’ as Ramo puts it, the ‘reduction in casualties’ refers only to ‘our’ side, and it is not only robots that are dying.

The Air Force says in the Unmanned Aircraft Systems Flight Plan 2009-2047 that “it’s only a matter of time before drones have the capability to make life-or-death decisions as they circle the battlefield.” What’s missing from this projection (we should be suspicious whenever we hear ‘it’s only a matter of time’) are the unresolved problems of decision-making that plague already existing armed drone systems. The focus on the future ignores the already unacceptable present.  And the focus on autonomy as the threat directs our attention away from the autonomous arms-industry-out-of-control, of which the X-47B is a symptom.

remote control

According to media reports more than 7,000 drones of all types are in use over Iraq and Afghanistan, and remote control is seen as the vanguard of a ‘revolution in military affairs’ in which U.S. military and intelligence agencies are heavily invested, in both senses of the word.  With the integration of Hellfire missiles, the first armed version of the Predator drone (designated MQ-1) was deployed in Afghanistan in 2002 as part of what the U.S. military names Operation Enduring Freedom (originally Operation Infinite Justice), under the auspices of what George Bush declared in September 2001 to be a  Global War (without end) on Terror.  In 2001, the U.S. Congress gave the Pentagon the goal of making one-third of ground combat vehicles remotely operated by 2015.  A decade later, under President Obama’s less colorfully named ‘Overseas Contingency Operations’, the amount of money being spent on research for military robotics surpasses the budget of the entire National Science Foundation.

‘War would be a lot safer, the Army says, if only more of it were fought by robots’  (John Markoff, NY Times, November 27, 2010).  Statements like this at once assume the reader to be one of the ‘we’ for whom war would be safer, while deleting war’s Others from our view.  This erasure is rendered more graphically in the image that accompanies Markoff’s article, titled ‘Remotely controlled: Some armed robots are operated with video-game-style consoles, helping to keep humans away from danger’ (my emphasis).  These reports valorize the nonhuman qualities of the robots which, Markoff reports, are ‘never distracted, using an unblinking digital eye, or “persistent stare,” that automatically detects even the smallest motion. Nor do they ever panic under fire … When a robot looks around a battlefield [says Joseph W. Dyer, a former vice admiral and the chief operating officer of iRobot], the remote technician who is seeing through its eyes can take time to assess a scene without firing in haste at an innocent person.’  But the translation of bodies into persons, and persons into targets, is not a straightforward one.

My thinking about the human-machine interface to this point has focused on questioning assumptions about the fixity of its boundaries, while at the same time slowing down too easy erasures of differences that matter between humans and machines.  I’ve been particularly concerned with machines modelled in the image of a human that many of us in science and technology studies and feminist theory have been at pains to refigure; that is, one for whom autonomous agency and instrumental reasoning are the gold standard.  In the interest of avoiding essentialism, I’ve tried to base my arguments for difference on the ways in which different forms of embodiment afford different possibilities for reflexively co-enacting what we think of as shared situations, or reciprocity, or mutual intelligibility, or what feminist scholars like Donna Haraway have proposed that we think about as ‘response-ability’.  This argument has provided a generative basis for critique of initiatives in artificial intelligence, robotics and the like.

“Some of us think that the right organizational structure for the future is one that skillfully blends humans and intelligent machines,” [says John Arquilla, executive director of the Information Operations Center at the Naval Postgraduate School] “We think that that’s the key to the mastery of 21st-century military affairs” (quoted in Markoff, November 27, 2010). Hardly a new idea (remembering the Strategic Computing Initiative of the Reagan era), this persistent vision of mastery-to-come underwrites old and new alliances in research and development, funded by defense spending, taken up by academic and industrial suppliers, echoed and hyped by the media, and embraced by entertainment producers and consumers. So how, I’m now wondering, might I usefully mobilise and expand my earlier arguments regarding shifting boundaries and differences that matter between humans and machines, to aid efforts to map and interrupt what James Der Derian (2009) has called ‘virtuous war’ – that is, warfighting justified on the grounds of a presumed moral superiority, persistent mortal threat and, most crucially, minimal casualties on our side – and the military-industrial-media-entertainment network that comprises its infrastructure?