Slow robots and slippery rhetorics

The recently concluded DARPA Robotics Challenge (DRC), held this past week at a NASCAR racetrack near Homestead, Florida, seems to have had a refreshingly sobering effect on media coverage of advances in robotics.  The field of sixteen competitors, victors of earlier trials (it was to have been seventeen, but ‘travel issues’ prevented the Chinese team from participating), represented the international state of the art in the development of mobile, and more specifically ‘legged’, robots.  The majority of the teams worked with machines configured as upright, bipedal humanoids, while two figured their robots as primates (RoboSimian and CHIMP), and one as a non-anthropomorphised ‘hexapod’. The Challenge staged a real-time, public demonstration of the state of the art; one which, it seems, proved disillusioning to many who witnessed it.  For all but the most technically knowledgeable in the audience, the actual engineering achievements were hard to appreciate.  More clearly evident were the slowness and clumsiness of the robots, and their vulnerability to failure at tasks that human contenders would have found quite unremarkable.  A photo gallery titled ‘Robots to the Rescue, Slowly’ is indicative, and the BBC titles its coverage of the Challenge ‘Robot competition reveals rise of the machines not imminent’.

Reporter Zachary Fagenson sets the scene with a representative moment in the competition:

As a squat, red and black robot nicknamed CHIMP gingerly pushed open a spring-loaded door a gust of wind swooped down onto the track at the Homestead-Miami Speedway and slammed the door shut, eliciting a collective sigh of disappointment from the audience.

In the BBC’s video coverage of the event, Dennis Hong, Director of the Virginia Tech Robotics Lab, tells the interviewer: “When many people think about robots, they watch so many science fiction movies, they think that robots can run and do all the things that humans can do.  From this competition you’ll actually see that that is not the truth. The robots will fall, it’s gonna be really, really slow…” and DARPA Director Arati Prabhakar concurs: “I think that robotics is an area where our imaginations have run way ahead of where the technology actually is, and this challenge is not about science fiction it’s about science fact.”  While many aspects of the competition would challenge the separateness of fiction and fact (not least the investment of its funders and competitors in figuring robots as humanoids), this is nonetheless a difference that matters.

These cautionary messages are contradicted, however, in a whiplash-inducing moment at the close of the BBC clip, when Boston Dynamics Project Manager Joe Bondaryk makes the canonical analogy between the trials and the Wright brothers’ first flight, reassuring us that “If all this keeps going, then we can imagine having robots by 2015 that will, you know, that will help our firefighters, help our policemen to do their jobs” (just one year after next year’s Finals, and a short time frame even compared to the remarkable history of flight).

The winning team, University of Tokyo spin-out company Schaft (recently acquired by Google), attributed its differentiating edge in the competition to a new high-voltage, liquid-cooled motor technology that uses a capacitor rather than a battery for power, which the engineers say lets the robot’s arms move and pivot at higher speeds than would otherwise be possible. Second and fourth place went to teams that had adopted the Atlas robot from Boston Dynamics (another recent Google acquisition) as their hardware platform, the Florida Institute for Human and Machine Cognition (IHMC) and MIT teams respectively. (With much fanfare, DARPA funded the delivery of Atlas robots to a number of the contenders earlier this year.)  Third place went to Carnegie Mellon University’s ‘CHIMP’, while one of the least successful entrants, scoring zero points, was NASA’s ‘Valkyrie’, described in media reports as the only gendered robot in the group (as signaled by its white vinyl body and suggestive bulges in the ‘chest’ area).  Asked about the logic of Valkyrie’s form factor, Christopher McQuin, NASA’s chief engineer for hardware development, offered: “The goal is to make it comfortable for people to work with and to touch.”  (To adequately read this comment, and Valkyrie’s identification as gendered against the ‘neutrality’ of the other competitors, would require its own post.)  The eight teams with the highest scores are eligible to apply for up to $1 million in funding to prepare for the final round of the Challenge in late 2014, where a winner will take a $2 million prize.

An article on the Challenge in the MIT Technology Review by journalist Will Knight includes the sidebar: ‘Why it Matters: If they can become nimbler, more dexterous, and safer, robots could transform the way we work and live.’  Knight thereby implies that we should care about robots, their actual clumsiness and unwieldiness notwithstanding, because if they were like us, they could transform our lives.  The invocation of the way we live here echoes the orientation of the Challenge overall, away from robots as weapons – as instruments of death – and towards the figure of the first responder as the preserver of life.  Despite its sponsorship by the Defense Advanced Research Projects Agency (DARPA), the agency charged with developing new technology for the military, the Challenge is framed not in terms of military R&D, but as an exercise in the development of ‘rescue robots’.

More specifically, DARPA statements, as well as media reports, position the Challenge itself, along with the eight tasks assigned to the robotics teams (e.g. walking over rubble, clearing debris, punching a hole in drywall, turning a valve, attaching a fire hose, climbing a ladder), as a response to the disastrous meltdown of the Fukushima Daiichi nuclear power plant.  (For a challenge to this logic see Maggie Mort’s comment on my earlier post ‘will we be rescued?’)  While this raises the question of how robots would be hardened against the effects of nuclear radiation, and at what cost (the robots competing in the Challenge already cost up to several million dollars each), Knight suggests that if robots can be developed that are capable of taking on these tasks, “they could also be useful for much more than just rescue missions.”  Knight observes that the robot of the winning team “is the culmination of many years of research in Japan, inspired in large part by concerns over the country’s rapidly aging population,” a proposition affirmed by DARPA Program Manager Gill Pratt, who “believes that home help is the big business opportunity [for] humanoid robots.”  Just what the connection might be between these pieces of heavy machinery and care at home is left to our imaginations, but quite remarkably Pratt further suggests “that the challenges faced by the robots involved in the DARPA event are quite similar to those that would be faced in hospitals and nursing homes.”

In an article published early in December in the Bulletin of the Atomic Scientists, titled ‘Robot to the Rescue’, Pratt offers a further glimpse of what the ‘more than rescue’ applications for the Challenge robots might be.  His aspirations for the DARPA Robotics Challenge invoke the familiar (though highly misleading) analogy between the robot and the developing human: “by the time of the DRC Finals, DARPA hopes the competing robots will demonstrate the mobility and dexterity competence of a 2-year-old child, in particular the ability to execute autonomous, short tasks such as ‘clear out the debris in front of you’ or ‘close the valve,’ regardless of outdoor lighting conditions and other variations.”  I would challenge this comparison on the grounds that it underestimates the 2-year-old child’s competencies, but I suspect that many parents of 2-year-olds might question its aptness on other grounds as well.

Having set out the motivation and conditions of the Challenge, in a section titled ‘Don’t be scared of the robot’ Pratt turns to the “broad moral, ethical, and societal questions” that it raises, noting that “although the DRC will not develop lethal or fully autonomous systems, some of the technology being developed in the competition may eventually be used in such systems.”  He continues:

society is now wrestling with moral and ethical issues raised by remotely operated unmanned aerial vehicles that enable reconnaissance and projection of lethal force from a great distance … the tempo of modern warfare is escalating, generating a need for systems that can respond faster than human reflexes. The Defense Department has considered the most responsible way to develop autonomous technology, issuing a directive in November 2012 that carefully regulates the way robotic autonomy is developed and used in weapons systems. Even though DRC robots may look like those in the movies that are both lethal and autonomous, in fact they are neither.

The slippery slope of automation and autonomy in military systems, and the U.S. Defense Department’s ambiguous assurances about its commitment to the continued role of humans in targeting and killing, are the topics of ongoing debate and of a growing campaign to ban lethal autonomous weapons (see the ICRAC website for details).  I would simply note here the moment of tautological reasoning wherein ‘the tempo of modern warfare,’ presented as a naturally occurring state of the world, becomes the problem for which faster response is the solution, which in turn justifies the need for automation, which in turn increases the tempo, which in turn, etc.

In elaborating the motivation for the Challenge, Gill Pratt invokes a grab-bag of familiar specters of an increasingly ‘vulnerable society’ (population explosion with disproportionate numbers of frail elderly, climate change, weapons of mass destruction held in the wrong hands) as calling for, if not a technological solution, at least a broad mandate for robotics research and development:

The world’s population is continuing to grow and move to cities situated along flood-prone coasts. The population over age 65 in the United States is forecast to increase from 13 percent to 20 percent by 2030, and the elderly require more help in emergency situations. Climate change and the growing threat of proliferation of weapons of mass destruction to non-state actors add to the concern. Today’s natural, man-made and mixed disasters might be only modest warnings of how vulnerable society is becoming.

One implication of this enumeration is that even disaster can be good for business, and humanoid robotics research, Pratt assures us, is all about saving lives and protecting humans (a term that seems all-encompassing at the same time that it erases from view how differently actual persons are valued in the rhetorics of ‘Homeland Security’).  The figure of the ‘warfighter’ appears only once towards the end of Pratt’s piece, and even there the robot’s role in the military is about preserving, not taking, life.  But many of us are not reassured by the prospect of robot rescue, and would instead call on the U.S. Government to take action to mitigate climate change, to dial down its commitment to militarism along with its leading role in the international arms trade, and to invest in programs to provide meaningful jobs at a living wage to humans, not least those engaged in the work of care.  The robot Challenge could truly be an advance if the spectacle of slow robots were to raise questions about the future of humanoid robotics as a project for our governments and universities to invest in, and about the good faith of slippery rhetorics that promise the robot first responder as the remedy for our collective vulnerability.

Postscript to Ethical Governor 0.1

I’ve been encouraged by a colleague to add a postscript to my last post, lest its irony be lost on any of my readers. The post was a form of thought experiment on what it would mean to take Ron Arkin at his word (at least in the venue of the aforementioned debate), to put his proposal to the test by following it out to (one of) its logically absurd conclusions. That is, if, as Arkin claims, it’s the failures of humans that are his primary concern, and that his ‘ethical governor’ is designed to correct, why wait for the realization of robot weapons to implement it?  Why not introduce it as a restraint into conventional weapons in the first instance, as a check on the faulty behaviours of the humans who operate them?  Of course I assume that the answer to this question is that the ‘governor’ remains in the realm of aspirational fantasy, existing, I’m told, only in the form of a sketch of an idea and some preliminary mathematics developed within a briefly funded student project back in 2009, with no actual proposal for how to translate the requisite legal frameworks into code. Needless to say, I hope, my proposal for the Ethical Governor 0.1 is not something that I would want the DoD actually to fund, though there seems little danger that they would be keen to introduce such a restraint into existing weapon systems even if it could plausibly be realized.

There are two crucial issues here. The first is Arkin’s premise that, insofar as war is conducted outside of the legal frameworks developed to govern it, there could be a technological solution to that problem. And the second is that such a solution could take the form of an ‘ethical governor’ based on the translation of legal frameworks like the Geneva Convention, International Humanitarian Law and Human Rights Law into algorithmic specifications for robot perception and action.  Both of these have been carefully critiqued by my ICRAC colleagues (see http://icrac.net/resources/ for references), as well as in a paper that I’ve co-authored with ICRAC Chair Noel Sharkey. A core problem is that prescriptive frameworks like these presuppose, rather than specify, the capacities for comprehension and judgment required for their implementation in any actual situation.  And it’s precisely those capacities that artificial intelligences lack, now and for the foreseeable future. Arkin’s imaginary of the encoding of battlefield ethics brings the field no closer to the realization of the highly contingent and contextual abilities that are requisite to the situated enactment of ethical conduct, begging these fundamental questions rather than seriously addressing them.

Ethical Governor 0.1

Last Monday, November 18th, Georgia Tech’s Center for Ethics and Technology hosted a debate on lethal autonomous robots between roboticist Ron Arkin (author of Governing Lethal Behavior in Autonomous Robots, 2009) and philosopher of technology Rob Sparrow (founding member of the International Committee for Robot Arms Control).  Beyond the crucial issues raised by Rob Sparrow (regarding the lost opportunities of ongoing, disproportionate expenditures on military technologies; American exceptionalism and the assumption that ‘we’ will be on the programming end and not the targets of such weapons; the prospects of an arms race in robotic weaponry and its contributions to greater global insecurity, etc.), and the various objections that I would raise to Arkin’s premise that the situational awareness requisite to legitimate killing could be programmable, one premise of Arkin’s position in particular inspired this immediate response.

Arkin insists that his commitment to research on lethal autonomous robots is based first and foremost in a concern for saving the lives of non-combatants.  He proceeds from there to the ‘thesis’ that an ethically-governed robot could adhere more reliably to the laws of armed conflict and rules of engagement than human soldiers have demonstrably done.  He points to the history of atrocities committed by humans, and emphasizes that his project is aimed not at the creation of an ethical robot (which would require moral agency), but (simply and/or more technically) at the creation of an ‘ethical governor’ to control the procedures for target identification and engagement and to ensure their compliance with international law.  Taking that premise seriously, my partner (whom I’ll refer to here as the Lapsed Computer Scientist) suggests an intriguing beta release for Arkin’s project, preliminary to the creation of fully autonomous, ethically-governed robots. This would be to incorporate ethical governors into existing, human-operated weapon systems.  Before a decision to fire could be made, in other words, the conditions of engagement would be represented and submitted to the assessment of the automated ethical governor; only if the requirements for justifiable killing were met would the soldier’s rifle or the Hellfire missile be enabled.  This would at once provide a means of testing the efficacy of Arkin’s governor (he insists that proof of its reliability would be a prerequisite to its deployment), and hasten its beneficial effects on the battlefield (reckless actions on the part of human soldiers being automatically prevented).  I would be interested in the response to this suggestion, both from Ron Arkin (who insists that he’s not a proponent of lethal autonomous weapons per se, but is interested only in saving lives) and from the Department of Defense, by whom his research is funded.  If there were objections to proceeding in this way, what would they be?
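For readers who want the thought experiment made concrete, here is a minimal sketch of the proposed interlock – in Python, and entirely hypothetical: no such system exists, and all of the names and predicates below are my own invention. The logic is simply that the weapon is enabled only when an automated check of the represented conditions of engagement reports that the legal requirements for the use of force are met.

```python
# Ethical Governor 0.1 -- a deliberately schematic sketch of the thought
# experiment, not a description of any existing system. Every name and
# predicate here is hypothetical.

from dataclasses import dataclass


@dataclass
class Engagement:
    """A (hypothetical) machine-readable representation of the conditions
    under which a weapon is about to be fired."""
    target_positively_identified: bool  # distinction: combatant, not civilian
    threat_concrete_and_imminent: bool  # imminence, as the law requires
    force_proportionate: bool           # proportionality to military advantage
    collateral_harm_minimized: bool     # precaution in attack


def governor_permits(e: Engagement) -> bool:
    """Enable the weapon only if every legal requirement is (claimed to be)
    met. The governor can only veto; it cannot initiate fire."""
    return all([
        e.target_positively_identified,
        e.threat_concrete_and_imminent,
        e.force_proportionate,
        e.collateral_harm_minimized,
    ])


def weapon_fires(e: Engagement, operator_fires: bool) -> bool:
    """The interlock: the weapon discharges only if the governor permits
    AND the human operator decides to fire."""
    return operator_fires and governor_permits(e)
```

The sketch also makes plain where the real difficulty lies: everything contentious is hidden in how those four booleans would ever be reliably set. The code relocates, rather than resolves, the question of who or what judges a person to be a combatant, a threat to be imminent, or harm to be proportionate – which is precisely the point of the postscript above.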

Will we be rescued?

I’m well overdue for a follow-up post to ‘Don’t Kick the Dog’, and the latest media barrage on the untethering of Boston Dynamics’ WildCat has got me moving.  An unholy hybrid that runs backwards (at least in biomimetic terms, according to the bend of its ‘knees’), the WildCat appears as a larger offspring of Boston Dynamics’ Cheetah, and a kind of faster, if no less noisy*, cousin of BigDog (now officially christened with the DOD acronym LS3, for Legged Squad Support System).

The occasion for WildCat’s celebration, via a YouTube-disseminated demonstration, is its release from the hydraulic line that previously connected it to an offboard power source, into a parking lot where it can dash around seemingly autonomously, albeit under the watchful eye of its minders.  What’s less obvious from this two-minute demonstration are the enduring problems (or, alternatively, saving limits) of the power supply, reliant as these devices still are on noisy motors or the short life of batteries.

Also relegated to the margins are the projected applications for the Wild Cat, which brings me to Boston Dynamics’ other recent release, the robot Atlas, and the Defense Advanced Research Projects Agency (DARPA) Challenge that accompanies it.

As the Boston Globe announces it, university teams selected for the competition are “vying to give [the] brawny robot a brain” (Bray, August 5, 2013).  With the robots delivered to seven universities selected by the DOD for the semi-finals, the web is now awash with ‘unboxing’ videos of technicians sporting crisp Boston Dynamics uniforms unpacking the robot into large warehouses specially configured to receive it, as the select students stand around excitedly anticipating the opportunity to develop its applications. Weighing in at 330 lbs. and standing over 6 ft. tall, the device corporealizes the quintessentially threatening humanoid robot.  As the Boston Globe observes, “It’s easy to imagine Atlas’s hydraulic arms smashing a flimsy wooden door to reach the humans cowering inside.”  But this scene of terror is quickly displaced by the reassurance that “[i]f that ever happens, the humans will probably cheer. Designed by Boston Dynamics, a Waltham company founded by MIT engineers, Atlas is intended to be a first-responder or rescue robot, working in environments too dangerous for humans.”  This fantasy rescue scenario will be carried through in the “real-world” test that ends the current qualifying round, scheduled for December of this year, in which the Atlases programmed by the universities will compete for the chance to prepare for a final competition in December 2014.

Atlas, we’re told, is the DARPA/Boston Dynamics response to the Fukushima earthquake, though it seems highly unlikely that the project was inspired uniquely by, or initiated directly as a consequence of, that event.  The DARPA/Boston Dynamics partnership insists that:

“The two-legged Atlas is designed for rescue, not combat. ‘Many of the places where disasters might occur are places that are designed for people,’ said [Boston Dynamics co-founder Marc] Raibert. ‘People can fit in there and maneuver through them.’ So too will Atlas, using the same tools as human first responders. A humanoid robot could climb into a car and drive itself to the disaster scene. Once there, it could open doors, climb ladders, turn valves or throw switches, just like a person.”

Should we be heartened by the fact that DARPA feels compelled to clothe Atlas’s threatening body in fantasies of rescue?   I’m afraid not.  On the contrary, to my reading this appropriation of the body of the first responder adds another layer of harm to these projects, resonant with the claimed virtues of the ‘precision’ of the targeted strike.  While DARPA challengers and their sponsoring institutions may find a moral high ground on which to stand under the cover of these reassuring scenarios, the robot’s ancestral inheritance from its military-industrial-academic family is too deeply coded into Atlas’s contemporary lifeworld for the claim of its innocent future to be a credible one.

* Health warning: I would advise using your mute key when watching this video.  It’s also worth noting here the report by Matthew Humphries on Sept 24 of this year that DARPA has awarded an additional $10 million to Boston Dynamics to redesign the LS3 to be “as close to silent as possible so as to be stealthy, while also being bulletproof.”

Talking the talk

Following two busy teaching terms I’m finally able to turn back to Robot Futures, and into my inbox comes an uncommonly candid account of the most recent demonstration of one of the longest-lived humanoid robots, Honda’s Asimo.  Titled ‘Honda robot has trouble talking the talk’ (Yuri Kageyama, 04 July 2013, Independent.ie), the article describes Asimo’s debut as a proposed museum tour guide at the Miraikan science museum.  A seemingly endless stream of media reports on Asimo in the years since the robot’s first version was announced in 2000 has focused on the robot’s ability to walk the walk of a humanoid biped, although troubles have occurred there as well.  But to join the ranks of imagined robot service providers requires that Asimo add to its navigational abilities some interactional ones.  And it’s here that this latest trouble arises, as Kageyama reports that “The bubble-headed Asimo machine had problems telling the difference between people raising their hands to ask questions and those aiming their smartphones to take photos. It froze mid-action and repeated a programmed remark, ‘Who wants to ask Asimo a question?’.”  The same technological revolution that has provided the context for Asimo’s humanoid promise, in other words, has configured a human whose raised hand comprises a noisy signal, ambiguously identifying her as interlocutor or spectator.

At least some publics, it seems, are growing weary of the perpetually imminent arrival of useful humanoids. Kageyama cites complaints that Honda has yet to develop any practical applications for Asimo.  While one of the uses promised for Asimo and his service-robot kin has been to take up tasks too dangerous for human bodies, it seems that robot bodies may be just as fragile: Kageyama reports that “Asimo was too sensitive to go into irradiated areas after the 2011 Fukushima nuclear crisis.”  As a less demanding alternative, Asimo’s engineering overseer, Satoshi Shigemi, suggests that a “possible future use for Asimo would be to help people buy tickets from vending machines at train stations,” speeding up the process for humans unfamiliar with those devices.  I can’t help noting the similarity of this projected robot future, however, to the expert-system photocopier coach that was the first object of my own research in the mid-1980s.  As the ethnomethodologists have taught us, instructions presuppose the competencies required for their successful execution.  This poses, if not an infinite, at least a pragmatically indefinite, regress for artificial intelligences and interactive machines.

Asimo’s troubles take on far more serious proportions in the case of robotic weapon systems, required to make critical and exceedingly more challenging discriminations among the humans who face them.  For a recent reflection on this worrying robot future, see Sharkey and Suchman, ‘Wishful Mnemonics and Autonomous Killing Machines’, in the most recent issue of AISB Quarterly, the newsletter of the Society for the Study of Artificial Intelligence and Simulation of Behaviour, No. 136, May 2013.

Robot rhetorics

An announcement from Voice of America online (repeated from many other media sources over the past couple of days) nicely illustrates the slippery discourses of robotic ability.  Titled ‘Autonomous Aerial Robot Maneuvers Like a Bird,’ the article announces that researchers at Cornell University ‘have developed a flying robot they say is “as smart as a bird” because it can maneuver to avoid obstacles,’ then concludes several paragraphs later:

‘Still, hurdles remain before the robot could be used in a real-world scenario. It still needs to be able to adapt to environmental variations like wind as well as be able to detect moving objects like birds.’

Enough said.

Robot Celebrities in the Military-Entertainment Complex

The announcement of this year’s inductees into the Robot Hall of Fame® (‘powered by Carnegie Mellon’) reaffirms the celebrity of four highly mediagenic automata, all positioned at the centre of what historian Tim Lenoir (2000) has named the military-entertainment complex.  Of the candidates, my vote would have gone to just one – Pixar’s WALL-E, about whom media studies scholar Vivian Sobchack has written so eloquently.

In her analysis of the film, in which WALL-E stars as the last surviving/operating human-like machine, Sobchack reads it as a portrait of humans becoming more frenzied at the same time that they are increasingly inert: a kind of inverse relation between motion and life. The recumbent and machine-dependent humans on the mother ship Axiom, Sobchack observes, are members of the ultimate leisure society:

their possibilities for demonstrating any meaningful human agency – purposeful effort, curiosity, desire – are both limited and regulated by the computerized screens and electronic machines that constantly surround them. Round as they are, these cartoon humans have been flattened into automated similitude (2009: 388).

At the same time, the ship that supports the life of both humans and robots is itself the ultimate automaton: a deterministically directed entity, following out its program with perfect correctness but with no possibility – until WALL-E’s intercession – of questioning the continued validity of the directive’s logic.

The program of the mother ship is the link that joins the film to two of WALL-E’s fellow inductees, BigDog and PackBot, both of whom have appeared in previous posts.*  That these two American armed robots-in-the-making should gain the popular vote is hardly surprising, given their frequent appearances in the popular media.  But it’s testimony as well to the degree to which the U.S. military comes second only to Hollywood in informing what we recognize as achievements in robotic design, and in defining the limits of our collective imagination.

*see Don’t Kick the Dog and Arming Robots.  The fourth inductee is Aldebaran Robotics’ NAO, whose synchronized choreography is impressive.  But personally I’d rather watch Cyrus and Twitch from season 9 of So You Think You Can Dance …

References

Lenoir, Tim (2000) All But War is Simulation: The Military-Entertainment Complex. Configurations 8.

Sobchack, Vivian (2009) Animation and automation, or, the incredible effortfulness of being. Screen 50: 375-391.

See also Stacey, Jackie and Suchman, Lucy (2012) Animation and Automation: The liveliness and labours of bodies and machines.  Body & Society 18(1): 1-46.

Made in the U.S.A.

Well placed during an election season in which US foreign policy has been almost entirely displaced by a focus on the domestic economy – and specifically jobs – Rodney Brooks’ start-up Rethink Robotics has announced its first product, Baxter the ‘friendly-faced factory robot.’ Dutifully (robotically, we might even say) picked up and repeated by the media, reports of Baxter’s arrival invariably emphasize the promise of a return of manufacturing to the homeland from offshore, made possible by an increase in American worker efficiency and U.S. competitiveness. Associated prospects of further U.S. job losses are muted in these stories, and in any case we’re reminded that U.S. factory workers have little to say, since their unions have already been decimated by offshoring. Those few workers who are left, we’re assured, will come to love their Baxter co-workers as they learn how quickly the robots can be programmed to perform the menial assembly-line tasks that have previously gone to even less empowered workers elsewhere.

Photo: David Yellen for IEEE Spectrum. Caption: ‘BAD BOY: Rodney Brooks, who has been called the “bad boy of robotics,” is back with another disruptive creation: a factory robot to help workers become more productive.’

In the implicit elision of ‘the human’ and ‘we Americans’ that I’ve commented on with respect to remotely controlled weapon systems, IEEE Spectrum enthuses that ‘by improving the efficiency of human employees, [Rethink Robotics’ products] could make making things in the industrialized world just as cost effective as making them in the developing world.’ I can’t help noting as well that Brooks’ departure in 2008 from his previous start-up, iRobot, and the founding of Rethink Robotics coincide with (or perhaps precede?) iRobot’s entry into the armed robots market (see Arming Robots).  It’s at least possible that for Brooks, Rethink Robotics represents not only a return to US manufacturing, but an escape from the global assembly line of remotely controlled killing machines.

Don’t kick the Dog

A chaff of media stories entitled ‘Running Robot is Faster than Usain Bolt’ (or close variations) in the past week announces the unveiling of Boston Dynamics’ Cheetah robot, developed with funding from the Defense Advanced Research Projects Agency (DARPA).  Invoking the name (as well as the persona and body) of the world record-breaking Jamaican sprinter who was the star of the recent London Olympics, these headlines suggest that a machine has for the first time outrun the fastest human. Closer inspection reveals that the Cheetah’s sprint occurred on a treadmill, with the robot tethered to the hydraulic pump that supplies its power. In the genre of media proclamations of the arrival of artificial intelligence in 1997, on the occasion of Deep Blue’s chess victory over world champion Garry Kasparov, the headlines obscure the differences between robotic and human accomplishments, as well as the extensive networks of associated people and technologies that make those accomplishments possible.

Taken on its own terms the Cheetah is unquestionably a remarkable machine, one of an extended family of masterfully engineered navigational robots created by Boston Dynamics over the past two decades. Inspired by nature, according to their designers, these robots are characterized by their uncanny resemblances to familiar animal figures and gaits – a resemblance that inspires a mix of affection and horror in the robots’ many commentators.  I find myself experiencing more the latter in my own response to the video demonstrations of BigDog and other Boston Dynamics robots that densely populate YouTube.  For some time now I’ve wanted to try to articulate the basis for my reaction, less one of horror perhaps than of distress.

There’s no question that the distress begins with the plan for these machines’ conscription to serve as beasts of burden (and perhaps inevitably, bearers of weaponry) for the U.S. military.  The prospect of the appearance of BigDog and its kin in parts of the world distant from the Waltham warehouses of their creation, as part of the American military’s projection of force, further helps me to appreciate the latter’s invasive alienness and its attendant terrors for local populations. Coupled with this are the intensely technophilic, science-fiction fantasies that inform these robots’ figuration as animate creatures, designed to inspire new forms of shock and awe. Combined with that ambition is the slavish subservience that the robots themselves materialize in concert with their human masters, exemplified in the act of kicking the robot that seems to be an obligatory element of every demonstration video, so that we can watch it stagger and right itself again. (As well as its explicit figuration as an animal – canine and/or insect – BigDog evokes for me the image of two stooped humans sharing a heavy load, one walking forward and one walking backwards.) More generally, I note the complete absence of any critical discussion of the wider context of these robots’ development, in service of the increasing automation of the so-called irregular warfare in which the United States is now interminably engaged.

I wonder in the end how, within a very different political environment and funding regime, the extraordinary technical achievements of Boston Dynamics might be configured differently.  This would require much greater imagination than currently inspires the field of robotics, as well as a radical change in our collective sense of what’s worth a headline.

The vagaries of ‘precision’ in targeted killing

Two events in the past week highlight the striking contrast between the Obama administration’s current policy regarding the use of armed drones as part of the U.S. ‘Counterterrorism Strategy,’ and the positions of those who challenge that strategy’s legality and morality.

The first is the Drone Summit held on April 28-29 in Washington, D.C., co-organized by the activist group CodePink, the Center for Constitutional Rights, and the UK organization Reprieve. The summit presentations offered compelling testimony, from participants including Pakistani attorney Shahzad Akbar, Reprieve’s Clive Stafford Smith, Chris Woods of the Bureau of Investigative Journalism, Pakistani journalist Madiha Tahir, and Somali activist Sadia Ali Aden, to documented and extensive civilian injury and death from U.S. drone strikes in Pakistan, Yemen and Somalia.  While popular support in the United States is based on the premise (and promise) that strikes only kill ‘militants,’ these speakers underscored the vagaries of the categories that inform the (il)legitimacy of extrajudicial targeted killing.

According to the Bureau of Investigative Journalism, between 2004 and 2011 the CIA conducted over 300 drone strikes in Pakistan, killing somewhere between 2,372 and 2,997 people.  Waziristan, in the northwest of Pakistan on the frontier with Afghanistan (the so-called Federally Administered Tribal Areas), is the focus of these targeted killings. Shahzad Akbar cited estimates that more than 3,000 people have been killed in the area, but its closure to outside journalists adds to the secrecy in which killings are carried out. One recent victim of the strikes, 16-year-old Tariq Aziz, had joined a campaign organized by Akbar’s Foundation for Fundamental Rights in collaboration with Reprieve to crowdsource documentation of strikes inside Waziristan using cell phones. Within 72 hours of his participation in the training, Aziz himself was killed in a drone strike on the car in which he was traveling with his younger cousin.  Whether Aziz was deliberately targeted or was another innocent casualty remains unknown.

In the targeting of houses believed to shelter ‘militants’, according to Akbar, strikes are concentrated during mealtimes and at night, when families are most likely to be assembled.  Not only do immediate family members die in these strikes, but often those in neighboring houses as well, particularly children hit by shrapnel. So how is the category of ‘militant’ defined?  Clive Stafford Smith of Reprieve points out that targeted killing relies upon the same intelligence that informed the detention of ‘militants’ at Guantanamo, where 80% of those held have been cleared.  He reported as well that the U.S. routinely offers informants $5,000 – the equivalent of a quarter of a million dollars to relatively more affluent Americans – for information leading to the identification of ‘bad guys.’

Particularly in those areas where targeted killings are concentrated, being identified as ‘militant’, even being armed, does not in itself meet the criterion of posing an imminent threat to the United States.  But the U.S. Government has so far refused to release either the criteria or the evidentiary bases for its placement of persons on targeted kill lists.  This problem is intensified by the administration’s recent endorsement of so-called ‘signature’ targeting, where in place of positive identification of individuals who pose a concrete, specific and imminent threat to life (as required by the laws of armed conflict), targeting can be based on patterns of behavior, observed from the air, that correspond with profiles specified as evidence of ‘militancy’. Shahzad Akbar points out that ‘signature’ effectively means profiling, adding that “before they used to arrest and question you, now they just kill you.”  The elision of distinctions between being armed and being a ‘terror suspect’ allows wide scope for action, as does the failure to recognize how these ‘targeted’ killings (where we now have to put ‘targeted’ as well into scare quotes, insofar as we’re coming to recognize the questions and uncertainties that it masks) might themselves be experienced as terror by civilians on the ground.  Pakistani journalist Madiha Tahir urges us, in considering who is a ‘militant’, to ask: how does a person become one?  People join ‘militant’ groups largely in relation to internal divisions quite apart from actions aimed at the U.S., but now increasingly also because of U.S. attacks. “On what grounds,” she asked, “does it make sense to terrorize people in order to hunt terrorists?”

The second event of the past week was the appearance of President Obama’s ‘top counterterrorism advisor’ John Brennan at the Wilson Center, where he asserted that the growing use of armed unmanned aircraft in Pakistan, Yemen and Somalia has saved American lives, and that civilian casualties from U.S. drones were “exceedingly rare” and rigorously investigated.  As the LA Times reports, “Brennan emphasized throughout his speech that drone strikes are carried out against ‘individual terrorists.’ He did not mention so-called signature strikes, a type of attack the U.S. has used in Pakistan against facilities and suspected militants without knowing the target’s name. When asked later by a member of the audience whether the standards he outlined for drone attacks also applied to signature strikes, Brennan said he was not speaking of signature strikes but that all attacks launched by the U.S. are done in accordance with the rule of law. The White House this month approved the use of signature strikes in Yemen after U.S. officials previously insisted that it would target only people whose names are known. The new rules permit attacks against individuals suspected of militant activities, even when their names are unknown or only partially known, a U.S. official said.”

Contrasting war by remote control with traditional military operations, Brennan argued that “large, intrusive military deployments risk playing into Al Qaeda’s strategy of trying to draw us into long, costly wars that drain us financially, inflame anti-American resentment and inspire the next generation of terrorists.”  The implication is that death by remote control does not have the same consequences.

CodePink anti-warrior Medea Benjamin brought the contradictions of these two events together when she staged a courageous interruption of Brennan’s speech, continuing her testimony on behalf of innocent victims of U.S. drone strikes even as she was literally lifted off her feet and carried out of the room by a burly security guard.

For a compelling and carefully researched introduction to the drone industry and its problems, see Medea Benjamin’s new book Drone Warfare: Killing by Remote Control.
