Category Archives: (mis)identification

Talking the talk

Following two busy teaching terms I’m finally able to turn back to Robot Futures, and into my inbox comes an uncommonly candid account of the most recent demonstration of one of the longest-lived humanoid robots, Honda’s Asimo.  Titled ‘Honda robot has trouble talking the talk’ (Yuri Kageyama, 04 July 2013, Independent.ie), the article describes Asimo’s debut as a proposed museum tour guide at the Miraikan science museum.  A seemingly endless stream of media reports on Asimo in the years since the robot’s first version was announced in 2000 has focused on the robot’s ability to walk the walk of a humanoid biped, although troubles have occurred there as well.  But to join the ranks of imagined robot service providers requires that Asimo add to its navigational abilities some interactional ones.  And it’s here that this latest trouble arises, as Kageyama reports that “The bubble-headed Asimo machine had problems telling the difference between people raising their hands to ask questions and those aiming their smartphones to take photos. It froze mid-action and repeated a programmed remark, ‘Who wants to ask Asimo a question?’.”  The same technological revolution that has provided the context for Asimo’s humanoid promise, in other words, has configured a human whose raised hand constitutes a noisy signal, ambiguously identifying her as interlocutor or spectator.

At least some publics, it seems, are growing weary of the perpetually imminent arrival of useful humanoids. Kageyama cites complaints that Honda has yet to develop any practical applications for Asimo.  While one of the uses promised for Asimo and his service robot kin has been to take up tasks too dangerous for human bodies, it seems that robot bodies may be just as fragile: Kageyama reports that “Asimo was too sensitive to go into irradiated areas after the 2011 Fukushima nuclear crisis.”  As a less demanding alternative, Asimo’s engineering overseer, Satoshi Shigemi, suggests that a “possible future use for Asimo would be to help people buy tickets from vending machines at train stations,” speeding up the process for humans unfamiliar with those devices.  I can’t help noting the similarity of this projected robot future, however, to the expert system photocopier coach that was the first object of my own research in the mid-1980s.  As the ethnomethodologists have taught us, instructions presuppose the competencies that are required for their successful execution.  This poses if not an infinite, at least a pragmatically indefinite, regress for artificial intelligences and interactive machines.

Asimo’s troubles take on far more serious proportions in the case of robotic weapon systems, which are required to make critical and still more challenging discriminations among the humans who face them.  For a recent reflection on this worrying robot future, see Sharkey and Suchman, ‘Wishful Mnemonics and Autonomous Killing Machines’, in the most recent issue of the AISB Quarterly, the Newsletter of the Society for the Study of Artificial Intelligence and Simulation of Behaviour, No. 136, May 2013.

The vagaries of ‘precision’ in targeted killing

Two events in the past week highlight the striking contrast between the Obama administration’s current policy regarding the use of armed drones as part of the U.S. ‘Counterterrorism Strategy,’ and the position of those who challenge that strategy’s legality and morality.

The first is the Drone Summit held on April 28-29th in Washington, D.C., co-organized by activist group CodePink, the Center for Constitutional Rights, and the UK organization Reprieve. The summit presentations offered compelling testimony, from participants including Pakistani attorney Shahzad Akbar, Reprieve’s Clive Stafford Smith, Chris Woods of the Bureau of Investigative Journalism, Pakistani journalist Madiha Tahir, and Somali activist Sadia Ali Aden, to documented and extensive civilian injury and death from U.S. drone strikes in Pakistan, Yemen and Somalia.  While popular support in the United States is based on the premise (and promise) that strikes only kill ‘militants,’ these speakers underscored the vagaries of the categories that inform the (il)legitimacy of extrajudicial targeted killing.

According to the Bureau of Investigative Journalism, between 2004 and 2011 the CIA conducted over 300 drone strikes in Pakistan, killing somewhere between 2,372 and 2,997 people.  Waziristan, in the northwest of Pakistan on the frontier with Afghanistan (the so-called Federally Administered Tribal Areas), is the focus of these targeted killings. Shahzad Akbar cited estimates that more than 3,000 people have been killed in the area, but its closure to outside journalists adds to the secrecy in which killings are carried out. One recent victim of the strikes, 16-year-old Tariq Aziz, had joined a campaign organized by Akbar’s Foundation for Fundamental Rights in collaboration with Reprieve to crowdsource documentation of strikes inside Waziristan using cell phones. Within 72 hours of his participation in the training, Aziz himself was killed in a drone strike on the car in which he was traveling with his younger cousin.  Whether Aziz was deliberately targeted or was another innocent casualty remains unknown.

In the targeting of buildings believed to house ‘militants’, according to Akbar, strikes are concentrated during mealtimes and at night, when families are most likely to be assembled.  Not only do immediate family members die in these strikes, but often those in neighboring houses as well, particularly children hit by shrapnel. So how is the category of ‘militant’ defined?  Clive Stafford Smith of Reprieve points out that targeted killing relies upon the same intelligence that informed the detention of ‘militants’ at Guantanamo, where 80% of those held have been cleared.  He reported as well that the U.S. routinely offers $5,000 to informants for information leading to the identification of ‘bad guys’ — a sum equivalent, relative to local incomes, to a quarter of a million dollars for more affluent Americans.

Particularly in those areas where targeted killings are concentrated, being identified as ‘militant,’ even being armed, does not in itself meet the criterion of posing an imminent threat to the United States.  But the U.S. Government has so far refused to release either the criteria or the evidentiary bases for its placement of persons on targeted kill lists.  This problem is intensified by the administration’s recent endorsement of so-called ‘signature’ targeting, where in place of positive identification of individuals who pose a concrete, specific and imminent threat to life (as required by the laws of armed conflict), targeting can be based on patterns of behavior, observed from the air, that correspond with profiles specified as evidence for ‘militancy’. Shahzad Akbar points out that ‘signature’ effectively means profiling, adding that “before they used to arrest and question you, now they just kill you.”  The elision of distinctions between being armed and being a ‘terror suspect’ allows wide scope for action, as does the failure to recognize how these ‘targeted’ killings (where we now have to put targeted as well into scare quotes, insofar as we’re coming to recognize the questions and uncertainties that it masks) might themselves be experienced as terror by civilians on the ground.  Pakistani journalist Madiha Tahir urges us, in considering who is a ‘militant,’ to ask: how does a person become one?  People join ‘militant’ groups largely in relation to internal divisions quite apart from actions aimed at the U.S., but now increasingly also because of U.S. attacks. “On what grounds,” she asked, “does it make sense to terrorize people in order to hunt terrorists?”

The second event of the past week was the appearance of President Obama’s ‘top counterterrorism advisor’ John Brennan at the Wilson Center, where he asserted that the growing use of armed unmanned aircraft in Pakistan, Yemen and Somalia has saved American lives, and that civilian casualties from U.S. drones were “exceedingly rare” and rigorously investigated.  As the LA Times reports, “Brennan emphasized throughout his speech that drone strikes are carried out against ‘individual terrorists.’ He did not mention so-called signature strikes, a type of attack the U.S. has used in Pakistan against facilities and suspected militants without knowing the target’s name. When asked later by a member of the audience whether the standards he outlined for drone attacks also applied to signature strikes, Brennan said he was not speaking of signature strikes but that all attacks launched by the U.S. are done in accordance with the rule of law. The White House this month approved the use of signature strikes in Yemen after U.S. officials previously insisted that it would target only people whose names are known. The new rules permit attacks against individuals suspected of militant activities, even when their names are unknown or only partially known, a U.S. official said.”

Contrasting war by remote control with traditional military operations, Brennan argued that “large, intrusive military deployments risk playing into Al Qaeda’s strategy of trying to draw us into long, costly wars that drain us financially, inflame anti-American resentment and inspire the next generation of terrorists.”  The implication is that death by remote control does not have the same consequences.

CodePink anti-warrior Medea Benjamin brought the contradictions of these two events together when she staged a courageous interruption of Brennan’s speech, continuing her testimony on behalf of innocent victims of U.S. drone strikes even as she was literally lifted off her feet and carried out of the room by a burly security guard.

For a compelling and carefully researched introduction to the drone industry and its problems see Medea Benjamin’s new book Drone Warfare: Killing by remote control.