Talking the talk

Following two busy teaching terms I’m finally able to turn back to Robot Futures, and into my inbox comes an uncommonly candid account of the most recent demonstration of one of the longest-lived humanoid robots, Honda’s Asimo. Titled ‘Honda robot has trouble talking the talk’ (Yuri Kageyama, 04 July 2013, Independent.ie), the article describes Asimo’s debut as a proposed museum tour guide at the Miraikan science museum. The seemingly innumerable media reports on Asimo in the years since the robot’s first version was announced in 2000 have focused on the robot’s ability to walk the walk of a humanoid biped, although troubles have occurred there as well. But to join the ranks of imagined robot service providers requires that Asimo add to its navigational abilities some interactional ones. And it’s here that this latest trouble arises, as Kageyama reports that “The bubble-headed Asimo machine had problems telling the difference between people raising their hands to ask questions and those aiming their smartphones to take photos. It froze mid-action and repeated a programmed remark, ‘Who wants to ask Asimo a question?’.” The same technological revolution that has provided the context for Asimo’s humanoid promise, in other words, has configured a human whose raised hand comprises a noisy signal, ambiguously identifying her as interlocutor or spectator.

At least some publics, it seems, are growing weary of the perpetually imminent arrival of useful humanoids. Kageyama cites complaints that Honda has yet to develop any practical applications for Asimo. While one of the uses promised for Asimo and his service robot kin has been to take up tasks too dangerous for human bodies, it seems that robot bodies may be just as fragile: Kageyama reports that “Asimo was too sensitive to go into irradiated areas after the 2011 Fukushima nuclear crisis.” As a less demanding alternative, Asimo’s engineering overseer, Satoshi Shigemi, suggests that a “possible future use for Asimo would be to help people buy tickets from vending machines at train stations,” speeding up the process for humans unfamiliar with those devices. I can’t help noting the similarity of this projected robot future, however, to the expert system photocopier coach that was the first object of my own research in the mid-1980s. As the ethnomethodologists have taught us, instructions presuppose the competencies that are required for their successful execution. This poses, if not an infinite, then at least a pragmatically indefinite, regress for artificial intelligences and interactive machines.

Asimo’s troubles take on far more serious proportions in the case of robotic weapon systems, required to make critical and vastly more challenging discriminations among the humans who face them. For a recent reflection on this worrying robot future, see Sharkey and Suchman, ‘Wishful Mnemonics and Autonomous Killing Machines’, in the most recent issue of the AISB Quarterly, the Newsletter of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour, No. 136, May 2013.

Robot rhetorics

An announcement from Voice of America online (echoed by many other media sources over the past couple of days) nicely illustrates the slippery discourses of robotic ability. Titled ‘Autonomous Aerial Robot Maneuvers Like a Bird,’ the article announces that researchers at Cornell University ‘have developed a flying robot they say is “as smart as a bird” because it can maneuver to avoid obstacles,’ then concludes several paragraphs later:

‘Still, hurdles remain before the robot could be used in a real-world scenario. It still needs to be able to adapt to environmental variations like wind as well as be able to detect moving objects like birds.’

Enough said.

Robot Celebrities in the Military-Entertainment Complex

The announcement of this year’s inductees into the Robot Hall of Fame® (‘powered by Carnegie Mellon’) reaffirms the celebrity of four highly mediagenic automata, all positioned at the centre of what historian Tim Lenoir (2000) has named the military-entertainment complex. Of the candidates, my vote would have gone to just one – Pixar’s WALL-E, about whom media studies scholar Vivian Sobchack has written so eloquently.

In her analysis of the film in which WALL-E stars as the last surviving/operating human-like machine, Sobchack reads the film as a portrait of humans becoming more frenzied at the same time that they are increasingly inert: a kind of inverse relation between motion and life. The recumbent and machine-dependent humans on the mother ship Axiom, Sobchack observes, are members of the ultimate leisure society:

their possibilities for demonstrating any meaningful human agency – purposeful effort, curiosity, desire – are both limited and regulated by the computerized screens and electronic machines that constantly surround them. Round as they are, these cartoon humans have been flattened into automated similitude (2009: 388).

At the same time, the ship that supports the life of both humans and robots is itself the ultimate automaton: a deterministically directed entity, following out its program with perfect correctness but no possibility – until WALL-E’s intercession – of questioning the continued validity of the directive’s logic.

The program of the mother ship is the link that joins the film to two of WALL-E’s fellow inductees, BigDog and PackBot, both of whom have appeared in previous posts.* That these two American armed robots-in-the-making should gain the popular vote is hardly surprising given their frequent media appearances. But it’s testimony as well to the degree to which the U.S. military comes second only to Hollywood in informing what we recognize as achievements in robotic design, and defining the limits of our collective imagination.

*see Don’t Kick the Dog and Arming Robots.  The fourth inductee is Aldebaran Robotics’ NAO, whose synchronized choreography is impressive.  But personally I’d rather watch Cyrus and Twitch from season 9 of So You Think You Can Dance …

References

Lenoir, Tim (2000) All But War is Simulation: The Military-Entertainment Complex. Configurations 8.

Sobchack, Vivian (2009) Animation and automation, or, the incredible effortfulness of being. Screen 50: 375-391.

See also Stacey, Jackie and Suchman, Lucy (2012) Animation and Automation: The liveliness and labours of bodies and machines.  Body & Society 18(1): 1-46.

Made in the U.S.A.

Well placed during an election season in which US foreign policy has been almost entirely displaced by a focus on the domestic economy – and specifically jobs – Rodney Brooks’ start-up Rethink Robotics has announced its first product, Baxter the ‘friendly faced factory robot.’ Dutifully (robotically, we might even say) picked up and repeated by the media, reports of Baxter’s arrival invariably emphasize the promise of a return of manufacturing to the homeland from offshore, made possible by an increase in American worker efficiency and U.S. competitiveness. Associated prospects of further U.S. job losses are muted in these stories, and in any case we’re reminded that U.S. factory workers have little to say since their unions have already been decimated by offshoring. Those few workers who are left, we’re assured, will come to love their Baxter co-workers as they learn how quickly the robots can be programmed to perform the menial assembly line tasks that have previously gone to even less empowered workers elsewhere.


Photo: David Yellen for IEEE Spectrum, caption ‘BAD BOY: Rodney Brooks, who has been called the “bad boy of robotics,” is back with another disruptive creation: a factory robot to help workers become more productive.’

In an implicit elision of ‘the human’ and ‘we Americans’ of the kind I’ve commented on with respect to remotely controlled weapon systems, IEEE Spectrum enthuses that ‘by improving the efficiency of human employees, [Rethink Robotics’ products] could make making things in the industrialized world just as cost effective as making them in the developing world.’ I can’t help noting as well that Brooks’ departure in 2008 from his previous start-up, iRobot, and the founding of Rethink Robotics coincides with (or perhaps precedes?) iRobot’s entry into the armed robots market (see Arming Robots). It’s at least possible that for Brooks, Rethink Robotics represents not only a return to US manufacturing, but an escape from the global assembly line of remotely-controlled killing machines.

Don’t kick the Dog

A chaff of media stories entitled ‘Running Robot is Faster than Usain Bolt’ (or close variations) in the past week announce the unveiling of Boston Dynamics’ Cheetah robot, developed with funding from the Defense Advanced Research Projects Agency (DARPA). Invoking the name (as well as the persona and body) of the world record-breaking Jamaican sprinter who was the star of the recent London Olympics, these headlines suggest that a humanoid has for the first time outrun the fastest human. Closer inspection reveals that the Cheetah’s sprint occurred on a treadmill, with the robot tethered to the hydraulic pump that supplies its power. In the genre of media proclamations of the arrival of artificial intelligence in 1997, on the occasion of Deep Blue’s chess victory over world champion Garry Kasparov, the headlines obscure the differences between robotic and human accomplishments, as well as the extensive networks of associated people and technologies that make those accomplishments possible.

Taken on its own terms the Cheetah is unquestionably a remarkable machine, one of an extended family of masterfully engineered navigational robots created by Boston Dynamics over the past two decades. Inspired by nature, according to their designers, these robots are characterized by their uncanny resemblances to familiar animal figures and gaits – a resemblance that inspires a mix of affection and horror in the robots’ many commentators. I find myself experiencing more of the latter in my own response to the video demonstrations of BigDog and other Boston Dynamics robots that densely populate YouTube. For some time now I’ve wanted to try to articulate the basis for my reaction, less one of horror perhaps than of distress.

There’s no question that the distress begins with the plan for these machines’ conscription to serve as beasts of burden (and perhaps inevitably, bearers of weaponry) for the U.S. military. The prospect of the appearance of BigDog and its kin in parts of the world distant from the Waltham warehouses of their creation, as part of the American military’s projection of force, further helps me to appreciate the latter’s invasive alienness and its attendant terrors for local populations. Coupled with this are the intensely technophilic, science fiction fantasies that inform these robots’ figuration as animate creatures, designed to inspire new forms of shock and awe. Combined with that ambition is the slavish subservience that the robots themselves materialize in concert with their human masters, exemplified in the act of kicking the robot that seems to be an obligatory element of every demonstration video, so that we can watch it stagger and right itself again. (As well as its explicit figuration as an animal – canine and/or insect – BigDog evokes for me the image of two stooped humans sharing a heavy load, one walking forward and one walking backwards.) More generally, I note the complete absence of any critical discussion of the wider context of these robots’ development, in service of the increasing automation of the so-called irregular warfare in which the United States is now interminably engaged.

I wonder in the end how, within a very different political environment and funding regime, the extraordinary technical achievements of Boston Dynamics might be configured differently.  This would require much greater imagination than currently inspires the field of robotics, as well as a radical change in our collective sense of what’s worth a headline.

The vagaries of ‘precision’ in targeted killing

Two events in the past week highlight the striking contrast between the Obama administration’s current policy regarding the use of armed drones as part of the U.S. ‘Counterterrorism Strategy,’ and the arguments of those who challenge that strategy’s legality and morality.

The first is the Drone Summit held on April 28-29 in Washington, D.C., co-organized by the activist group CodePink, the Center for Constitutional Rights, and the UK organization Reprieve. The summit presentations offered compelling testimony, from participants including Pakistani attorney Shahzad Akbar, Reprieve’s Clive Stafford Smith, Chris Woods of the Bureau of Investigative Journalism, Pakistani journalist Madiha Tahir, and Somali activist Sadia Ali Aden, to documented and extensive civilian injury and death from U.S. drone strikes in Pakistan, Yemen and Somalia. While popular support in the United States is based on the premise (and promise) that strikes only kill ‘militants,’ these speakers underscored the vagaries of the categories that inform the (il)legitimacy of extrajudicial targeted killing.

According to the Bureau of Investigative Journalism, between 2004 and 2011 the CIA conducted over 300 drone strikes in Pakistan, killing somewhere between 2,372 and 2,997 people. Waziristan, in the northwest of Pakistan on the frontier with Afghanistan (part of the so-called Federally Administered Tribal Areas), is the focus of these targeted killings. Shahzad Akbar cited estimates that more than 3,000 people have been killed in the area, but its closure to outside journalists adds to the secrecy in which killings are carried out. One recent victim of the strikes, 16-year-old Tariq Aziz, had joined a campaign organized by Akbar’s Foundation for Fundamental Rights in collaboration with Reprieve to crowdsource documentation of strikes inside Waziristan using cell phones. Within 72 hours of his participation in the training, Aziz himself was killed in a drone strike on the car in which he was traveling with his younger cousin. Whether Aziz was deliberately targeted or was another innocent casualty remains unknown.

In the targeting of houses believed to shelter ‘militants’, according to Akbar, strikes are concentrated during mealtimes and at night, when families are most likely to be assembled. Not only do immediate family members die in these strikes, but often those in neighboring houses as well, particularly children hit by shrapnel. So how is the category of ‘militant’ defined? Clive Stafford Smith of Reprieve points out that targeted killing relies upon the same intelligence that informed the detention of ‘militants’ at Guantanamo, where 80% of those held have been cleared. He reports as well that the U.S. routinely offers $5,000 to informants – the equivalent, for relatively more affluent Americans, of a quarter of a million dollars – for information leading to the identification of ‘bad guys.’

Particularly in those areas where targeted killings are concentrated, being identified as ‘militant,’ even being armed, does not in itself meet the criterion of posing an imminent threat to the United States. But the U.S. Government has so far refused to release either the criteria or the evidentiary bases for its placement of persons on targeted kill lists. This problem is intensified by the administration’s recent endorsement of so-called ‘signature’ targeting, where in place of positive identification of individuals who pose a concrete, specific and imminent threat to life (as required by the laws of armed conflict), targeting can be based on patterns of behavior, observed from the air, that correspond with profiles specified as evidence for ‘militancy’. Shahzad Akbar points out that ‘signature’ effectively means profiling, adding that “before they used to arrest and question you, now they just kill you.” The elision of distinctions between being armed and being a ‘terror suspect’ allows wide scope for action, as does the failure to recognize how these ‘targeted’ killings (where we now have to put targeted as well into scare quotes, insofar as we’re coming to recognize the questions and uncertainties that the term masks) might themselves be experienced as terror by civilians on the ground. Pakistani journalist Madiha Tahir urges us, in considering who is a ‘militant,’ to ask: how does a person become one? People join ‘militant’ groups largely in relation to internal divisions quite apart from actions aimed at the U.S., but now increasingly also because of U.S. attacks. ‘On what grounds,’ she asked, ‘does it make sense to terrorize people in order to hunt terrorists?’

The second event of the past week was the appearance of President Obama’s ‘top counterterrorism advisor’ John Brennan at the Wilson Center, where he asserted that the growing use of armed unmanned aircraft in Pakistan, Yemen and Somalia has saved American lives, and that civilian casualties from U.S. drones were “exceedingly rare” and rigorously investigated. As the LA Times reports, “Brennan emphasized throughout his speech that drone strikes are carried out against ‘individual terrorists.’ He did not mention so-called signature strikes, a type of attack the U.S. has used in Pakistan against facilities and suspected militants without knowing the target’s name. When asked later by a member of the audience whether the standards he outlined for drone attacks also applied to signature strikes, Brennan said he was not speaking of signature strikes but that all attacks launched by the U.S. are done in accordance with the rule of law. The White House this month approved the use of signature strikes in Yemen after U.S. officials previously insisted that it would target only people whose names are known. The new rules permit attacks against individuals suspected of militant activities, even when their names are unknown or only partially known, a U.S. official said.”

Contrasting war by remote control with traditional military operations, Brennan argued that “large, intrusive military deployments risk playing into Al Qaeda’s strategy of trying to draw us into long, costly wars that drain us financially, inflame anti-American resentment and inspire the next generation of terrorists.”  The implication is that death by remote control does not have the same consequences.

CodePink anti-warrior Medea Benjamin brought the contradictions between these two events together when she staged a courageous interruption of Brennan’s speech, continuing her testimony on behalf of innocent victims of U.S. drone strikes even as she was literally lifted off her feet and carried out of the room by a burly security guard.

For a compelling and carefully researched introduction to the drone industry and its problems, see Medea Benjamin’s new book Drone Warfare: Killing by Remote Control.

Autonomy

Media reports of developments in so-called robotic weapons systems (a broad category that includes any system involving some degree of pre-programming as well as remote control) are haunted by the question of ‘autonomy’; specifically, the prospect that technologies acting independently of human operators will run ‘out of control’ (a fear addressed by Langdon Winner in his 1977 book Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought). While recognizing the very real dangers posed by increasing resort to on-board, algorithmic encoding of controls in military systems, I want to track the discussion of autonomy with respect to weapons systems a bit more closely. A recent story in the LA Times, noted and under discussion by my colleagues in the International Committee for Robot Arms Control (ICRAC), provides a good starting place.

While I’m going to suggest here that autonomy is something of a red herring in the context of this story, let me be clear at the outset that I believe that we should be deeply concerned about the developments reported. They represent a continuation of the longstanding investment in automation in the (questionable) interest of economy; the dangers of ever-intensified speed in war fighting; the extraordinary inflation of spending on weapons systems at the expense of other social spending (see post Arming Robots); and the threat to global security of the already existing infrastructure of networked warfare.  With that said, I want to question the framing of the developments reported in this article as the beginning of something new, unprecedented and (as often goes along with these adjectives) inevitable, centering on the question of autonomy.

The article reports on the X-47B drone, a demonstration aircraft currently being tested by the Navy at a cost of $813 million.

“The X-47B drone, above, marks a paradigm shift in warfare, one that is likely to have far-reaching consequences. With the drone’s ability to be flown autonomously by onboard computers, it could usher in an era when death and destruction can be dealt by machines operating semi-independently.” (Chad Slattery, Northrop Grumman / January 25, 2012)

A major technical requirement for this plane is that it should be able to land under onboard controls on the deck of an aircraft carrier, “one of aviation’s most difficult maneuvers.” In this respect, the X-47B is a next logical step in an ongoing process of automation, of the replacement of labour with capital equipment, through the delegation of actions previously done by skillful humans to machines. The familiarity of the story in this respect raises the question: what exactly is the “paradigm shift” here? And what are the stakes in the assertion that there is one? The author observes:

“With the drone’s ability to be flown autonomously by onboard computers, it could usher in an era when death and destruction can be dealt by machines operating semi-independently.”

Most commercial aircraft, as well as existing drones, can be put under ‘autopilot’ controls, and are always operating ‘semi-independently.’ And the U.S. drone campaign is already dealing death and destruction.

“Although humans would program an autonomous drone’s flight plan and could override its decisions, the prospect of heavily armed aircraft screaming through the skies without direct human control is unnerving to many.”

Aren’t populations in Pakistan, Afghanistan, Yemen and other areas that are the target of U.S. drones already unnerved by heavily armed aircraft screaming through the skies?  And to what extent has ‘direct human control’ over existing drone systems ensured that civilians won’t be killed, whether as a consequence of mistaken targeting, or what seems to be accepted within military procedure as unavoidable ‘collateral’ damage?

“‘The deployment of such systems would reflect … a major qualitative change in the conduct of hostilities,’ committee [of the International Red Cross] President Jakob Kellenberger said at a recent conference. ‘The capacity to discriminate, as required by [international humanitarian law], will depend entirely on the quality and variety of sensors and programming employed within the system.’”

It is clear that the ‘capacity to discriminate’ is already based on complex networks of sensors and code, and the history of the use of armed drones includes recurring examples of misrecognition of targets, extra-judicial killing, and a range of other violations of international law.

“Weapons specialists in the military and Congress acknowledge that policymakers must deal with these ethical questions long before these lethal autonomous drones go into active service, which may be a decade or more away.”

These questions – not only ethical but also moral and legal – should equally have been dealt with before lethal remotely-controlled drones went into active service. They were not, which means that the latter are, in their current use, unethical, immoral and illegal.

“More aggressive robotry development could lead to deploying far fewer U.S. military personnel to other countries, achieving greater national security at a much lower cost and most importantly, greatly reduced casualties,” aerospace pioneer Simon Ramo, who helped develop the intercontinental ballistic missile, wrote in his new book, “Let Robots Do the Dying.”

The promise of lower cost rings hollow in the context of a defense budget that continues to grow, and the prediction that annual global spending on drones will double to $11.5 billion in the next few years (reported by the New Internationalist in their December 2011 issue).  But ‘most importantly,’ as Ramo puts it, the ‘reduction in casualties’ refers only to ‘our’ side, and it is not only robots that are dying.

The Air Force says in the Unmanned Aircraft Systems Flight Plan 2009-2047 that “it’s only a matter of time before drones have the capability to make life-or-death decisions as they circle the battlefield.” What’s missing from this projection (we should be suspicious whenever we hear ‘it’s only a matter of time’) are the unresolved problems of decision-making that plague already existing armed drone systems. The focus on the future ignores the already unacceptable present. And the focus on autonomy as the threat directs our attention away from the autonomous arms-industry-out-of-control, of which the X-47B is a symptom.

Arming robots

One of my central concerns in this blog is developments in remotely controlled weapon systems, including the arming of ground robots.  A sense of the wider context for these developments is provided by Dinyar Godrej in his succinct, and chilling, analysis of the current state of the arms industry globally, published in the December 2011 issue of the New Internationalist.  As Godrej observes: “Despite the fact that arms manufacturing in most Western nations ultimately represents vast fortunes of public funds flowing into private coffers for products that deal in injury or death, the industry is usually represented as a source of pride … Nowhere is this more evident than in the US, which spends almost as much as the rest of the world combined on arms [almost $700 billion in 2010, 43% of all military expenditures globally] and is the world’s largest arms exporter to boot.  Between 2001 (the start of the ‘war on terror’) and 2003, just the increase in military spending of this country was larger than the entire military budgets of countries like China or Britain.”

So war continues to be good for business, as military funding supports technology research and development, which brings us back to iRobot. According to IEEE Spectrum, iRobot has only relatively recently, and reluctantly, entered into the field of weaponized robots, perhaps driven to do so by the increasing importance of the military market to the company’s financial well-being. An initial configuration of iRobot’s 710 Warrior, as described in Defense Systems, features an Anti-Personnel Obstacle Breaching System or APOBS: more specifically “an explosive line charge deployed by a rocket that pulls a rope with a string of fragmentation grenades attached and a small parachute at the opposite end. The explosive line charge, which the robot fires from a distance of 35 meters, can clear a path 45 meters wide.”

iRobot’s reluctance may help to explain some of the notable absences in the company’s representations of the range and functionality of its robotic products. As an ‘anti-personnel obstacle breaching system,’ the Warrior can be seen as not only a technological but also a logical extension of iRobot’s previous offerings in the line of ‘life saving’ devices, a kind of bigger brother to the PackBot, furthering the objective of clearing away potential explosives planted by an enemy. But where the PackBot would be sent to inspect a single device, the Warrior – as seen in this demonstration video – has a wider and more indeterminate target.

The presence or absence of humans as targets of the Warrior is a point of debate even among the actors involved: IEEE Spectrum’s Automaton blog reports that in response to its first report titled ‘iRobot demonstrates new weaponized robot,’ “Some readers argued that the APOBS, or Anti-Personnel Obstacle Breaching System, developed in a joint program of the U.S. Army and Navy, is not, technically, a weapon, because it’s not an anti-personnel system but rather a system used against obstacles. Perry Villanueva, the project engineer for the APOBS program on the Army side, says the APOBS “is not a weapon in the traditional sense, but it is a weapon.” The point of confusion here seems to center on the question of just which ‘personnel’ the explosives are being deployed against (in reports of IEDs, the ‘personnel’ involved are assumed to be on ‘our’ side, and ‘anti-personnel obstacles’ those deployed by the other side). The demonstration video, in any case, makes no reference to the possibility that other humans might be targets, or even caught within the Warrior’s very wide destructive path: the only bodies that we see are three US soldiers lying prone in readiness at a safe distance from the explosion.

The built environment here appears as an indistinct collection of metal objects and other debris, a kind of junkyard ready to be further pulverized. In actual use situations, however, we can assume a high probability that the vicinity of the ‘obstacle’ would itself be populated. The problems arise, most obviously, when the barren ‘battlefield’ of the demonstration video (staged at China Lake in the Mojave Desert) is replaced by more densely inhabited landscapes, home as well to non-combatants.

Robot alerts

One of the aims of this blog is to offer some critical readings of popular media representations of robots, particularly in the areas of warfare and healthcare. So let’s take the most recent Google ‘alert’ on robots to come across my inbox, dated January 22, 2012. We get the usual collection of stories, falling roughly into these genres:

Heroic robot ‘rescue’ missions.  Reports on the use of remotely controlled, non-humanoid robots in responding to a variety of emergency situations.  In this case, The Telegraph reports on the use of an ‘underwater robot equipped with a camera’ sent to monitor the area of the wreckage of the cruise ship Costa Concordia in an ongoing search for victims.  A second story in the Irish Independent reports the failure of a Navy team equipped with a ‘robot camera’ to find the bodies of three missing fishermen in a trawler wrecked off the West coast of Ireland.  I note that the almost mundane use of this relatively straightforward technology is performed as newsworthy in these stories through its figuration as at once humanlike, and more-than-human in its capabilities.  A familiar theme, in this case working to keep the robot future alive in the face of a tragic cessation in the recovery of those humans who have died.

Roboticists’ commentaries on the field. I’m pleased to see Helen Greiner, co-founder of iRobot Corporation and CEO of robotics start-up CyPhy Works, writing a column in the New Scientist urging that roboticists get more serious, less focused on ‘cool’ and more on ‘practicality, ruggedness and cost,’ three qualities that she believes necessary to move robots from promissory prototypes to products on the market. To exemplify the latter she points to the non-humanoid, yet useful Roomba vacuuming robot (perhaps more on Roomba in a later post), and the success of ‘iRobot’s military robots, originally deployed in Afghanistan to defuse improvised explosive devices, [which] proved very useful to the human teams dealing with the nuclear emergency at the Fukushima Daiichi power plant in Japan.’ (See ‘heroic robots’ above.) Notably absent from mention is the iRobot 710 Warrior. Nor does iRobot advertise the robot’s ‘firefighting’ potential on its product web pages, but Wikipedia tells us that iRobot has teamed up with Australian partner Metal Storm to mount an electronically controlled firing system on a Warrior, capable of firing up to 16 rounds per second (definitely more on the Warrior in a later post).

Care robots. The majority of stories echo the pervasive fantasy of the robot caregiver, humanoid projects framed as vague promises of a future in which the burden of our responsibility for those figured as dependents – children on one hand, the elderly on the other – will be taken up by loving machines. While not my focus here, these stories invariably translate the extraordinarily skillful, open-ended and irreducible complexities of caregiving into a cartoon of itself – another instance of asserting the existence of a world in which the autonomous robot would be possible, rather than imaginatively rethinking the assistive possibilities that a robot not invested in its own humanness might actually embody.

Automata.  Finally, and most interestingly, we find on the IEEE Spectrum Automaton blog a story on the work of animatronic designer Chris Clarke.  Animation, in its many and evolving forms, is an art that relies upon the animator’s close and insightful observations of the creatures that inform his or her machines, combined with ingenious invention and reconfiguration of materials and mechanisms.  Not fetishizing autonomy, the art of animation relies instead on the same suspension of disbelief that enlivens the cinema – some ideas that my colleague Jackie Stacey and I explore at greater length in our paper ‘Animation and Automation: The liveliness and labours of bodies and machines’, soon to be out in the journal Body & Society.

Remote control

According to media reports more than 7,000 drones of all types are in use over Iraq and Afghanistan, and remote control is seen as the vanguard of a ‘revolution in military affairs’ in which U.S. military and intelligence agencies are heavily invested, in both senses of the word.  With the integration of Hellfire missiles, the first armed version of the Predator drone (designated MQ-1) was deployed in Afghanistan in 2002 as part of what the U.S. military names Operation Enduring Freedom (originally Operation Infinite Justice), under the auspices of what George Bush declared in September 2001 to be a  Global War (without end) on Terror.  In 2001, the U.S. Congress gave the Pentagon the goal of making one-third of ground combat vehicles remotely operated by 2015.  A decade later, under President Obama’s less colorfully named ‘Overseas Contingency Operations’, the amount of money being spent on research for military robotics surpasses the budget of the entire National Science Foundation.

‘War would be a lot safer, the Army says, if only more of it were fought by robots’ (John Markoff, NY Times, November 27, 2010). Statements like this at once assume the reader to be one of the ‘we’ for whom war would be safer, while deleting war’s Others from our view. This erasure is rendered more graphically in the image that accompanies Markoff’s article, titled ‘Remotely controlled: Some armed robots are operated with video-game-style consoles, helping to keep humans away from danger’ (my emphasis). These reports valorize the nonhuman qualities of the robots, which, Markoff reports, are ‘never distracted, using an unblinking digital eye, or “persistent stare,” that automatically detects even the smallest motion. Nor do they ever panic under fire … When a robot looks around a battlefield [says Joseph W. Dyer, a former vice admiral and the chief operating officer of iRobot], the remote technician who is seeing through its eyes can take time to assess a scene without firing in haste at an innocent person.’ But the translation of bodies into persons, and persons into targets, is not a straightforward one.

My thinking about the human-machine interface to this point has focused on questioning assumptions about the fixity of its boundaries, while at the same time slowing down too easy erasures of differences that matter between humans and machines.  I’ve been particularly concerned with machines modelled in the image of a human that many of us in science and technology studies and feminist theory have been at pains to refigure; that is, one for whom autonomous agency and instrumental reasoning are the gold standard.  In the interest of avoiding essentialism, I’ve tried to base my arguments for difference on the ways in which different forms of embodiment afford different possibilities for reflexively co-enacting what we think of as shared situations, or reciprocity, or mutual intelligibility, or what feminist scholars like Donna Haraway have proposed that we think about as ‘response-ability’.  This argument has provided a generative basis for critique of initiatives in artificial intelligence, robotics and the like.

“Some of us think that the right organizational structure for the future is one that skillfully blends humans and intelligent machines,” [says John Arquilla, executive director of the Information Operations Center at the Naval Postgraduate School]. “We think that that’s the key to the mastery of 21st-century military affairs” (quoted in Markoff November 27, 2010). Hardly a new idea (remembering the Strategic Computing Initiative of the Reagan era), this persistent vision of mastery-to-come underwrites old and new alliances in research and development, funded by defense spending, taken up by academic and industrial suppliers, echoed and hyped by the media, and embraced by entertainment producers and consumers. So how, I’m now wondering, might I usefully mobilise and expand my earlier arguments regarding shifting boundaries and differences that matter between humans and machines, to aid efforts to map and interrupt what James der Derian (2009) has called ‘virtuous war’ – that is, warfighting justified on the grounds of a presumed moral superiority, persistent mortal threat and, most crucially, minimal casualties on our side – and the military-industrial-media-entertainment network that comprises its infrastructure?