
Reality Bites



LS3 test during Rim of the Pacific Exercise, July 2014

A pack of international news outlets has reported over the past few days that the US Department of Defense has abandoned Boston Dynamics' Legged Squad Support System, or LS3 (aka 'Big Dog'), and its offspring (see Don't kick the Dog). After five years and USD $42 million in investment, what was promised to be a best-in-breed warfighting companion stumbled over a mundane but apparently intractable problem – noise. The robot is powered by a gas (petrol) motor likened in sound to a lawnmower, and its capacity for carrying heavy loads (400 lbs or 181.4 kg), along with its much celebrated ability to navigate rough terrain and right itself after falling (or be easily assisted in doing so), were in the end not enough to make up for the fact that, in the assessment of the US Marines who tested it, the LS3 was simply 'too loud' (BBC News 30 January 2015). The trial's inescapable conclusion was that the noise would reveal a unit's presence and position, bringing more danger than aid to the U.S. warfighters it was deployed to support.

A second concern contributing to the DoD's decision was the question of the machine's maintenance and repair. Long ignored in narratives of technological progress, the essential practices of inventive maintenance and repair have recently become a central topic in social studies of science and technology (see Steven J. Jackson, "Rethinking Repair," in Tarleton Gillespie, Pablo Boczkowski, and Kirsten Foot, eds., Media Technologies: Essays on Communication, Materiality and Society. Cambridge, MA: MIT Press, 2014). These studies are part of a wider project of recognizing the myriad forms of invisible labour that are essential to keeping machines working – one of the enduring continuities in the history of technology.

The LS3 trials were run by the Marine Corps Warfighting Lab, most recently at the Kahuku Training Area in Hawaii during the Rim of the Pacific exercise in July of 2014. Kyle Olson, spokesperson for the Lab, reported that seeing the robot's potential was challenging "because of the limitations of the robot itself." This phrasing is noteworthy, as it is the robot itself – the actual material technology – that interrupts the progressive elaboration of the promise that keeps investment in place. According to the Guardian report (30 December 2015), both 'Big Dog' and 'Spot,' an electrically powered and therefore quieter but significantly smaller prototype, are now in storage, with no future experiments planned.

The cessation of the DoD investment will presumably come as a relief to Google, which acquired Boston Dynamics in 2013, saying at the time that it planned to move away from the military contracts that it inherited with the acquisition. Boston Dynamics will now, we can assume, turn its prodigious ingenuity in electrical and mechanical engineering to other tasks of automation, most obviously in manufacturing. The automation of industrial labour has, somewhat ironically given its status as the original site for robotics, recently been proclaimed to be robotics' next frontier. While both the BBC and the Guardian offer links to a 2013 story about the great plans that accompanied Google's investments in robotics, more recent reports characterize the status of the initiative (internally named 'Replicant') as "in flux," and its goal of producing a consumer robot by 2020 as in question (Business Insider November 8, 2015). This follows the departure of former Google VP Andy Rubin in 2014 (to launch his own company with the extraordinary name 'Playground Global'), just a year after he was hailed as the great visionary leader who would turn Google's much celebrated acquisition of a suite of robotics companies into a unified effort. Having joined Google in 2005, when the latter acquired Android, the mobile software company that he co-founded, Rubin was assigned to the leadership of Google's robotics division by co-founder Larry Page. According to Business Insider's Jillian D'Onfro, Page

had a broad vision of creating general-purpose bots that could cook, take care of the elderly, or build other machines, but the actual specifics of Replicant’s efforts were all entrusted to Rubin. Rubin has said that Page gave him a free hand to run the robotics effort as he wanted, and the company spent an estimated $50 million to $90 million on eight wide-ranging acquisitions before the end of 2013.

The unifying vision apparently left with Rubin, who has yet to be replaced. D’Onfro continues:

One former high-ranking Google executive says the robot group is a “mess that hasn’t been cleaned up yet.” The robot group is a collection of individual companies “who didn’t know or care about each other, who were all in research in different areas,” the person says. “I would never want that job.”

So another reality that 'bites back' is added to those that make up the robot itself; that is, the alignment of the humans engaged in its creation. Meanwhile, Boston Dynamics' attempt to position itself on the entertainment side of the military-entertainment complex this holiday season was met less with amusement than with alarm, as media coverage characterized it variously as 'creepy' and 'nightmarish.'


Resistance, it seems, is not entirely futile.

On killer robots, celebrity scientists, and the campaign to ban lethal autonomous weapons


Screencap of South Korean autonomous weapon in action courtesy of Richard Anders via YouTube.  Reticle added by Curiousmatic.

Amidst endless screen shots from Terminator 3: Rise of the Machines (Warner Bros Pictures, 2003), and seemingly obligatory invocations of Stephen Hawking, Elon Musk and Steve Wozniak as signatories, the media reported the release on 28 July of an open letter signed by thousands of robotics and AI researchers calling for a ban on lethal autonomous weapons. The letter’s release to the press was timed to coincide with the opening of the International Joint Conference on Artificial Intelligence (IJCAI 2015) in Buenos Aires. Far more significant than the inclusion of celebrity signatories – their stunning effect in drawing international media attention notwithstanding – is the number of prominent computer scientists (not a group prone to add their names to political calls to action) who have been moved to endorse the letter. Consistent with this combination of noise and signal, the commentaries generated by the occasion of the letter’s release range from aggravatingly misleading to helpfully illuminating.

The former category is well represented in an interview by Fox News' Shepard Smith with theoretical physicist and science communicator Michio Kaku. In response to Smith's opening question regarding whether or not concerns about autonomous weapons are overblown, Kaku suggests that "Hollywood has us brainwashed" into thinking that Terminator-style robots are just around the corner. Quite the contrary, he assures us, "we have a long ways to go before we have sentient robots on the battlefield." This 'long ways to go' is typical of futurist hedges that, while seemingly interrupting narratives of the imminent rise of the machines, implicitly endorse the assumption of continuing progress in that direction. Kaku then further affirms the possibility, if not inevitability, of the humanoid weapon: "Now, the bad news of course is that once we do have such robots, these autonomous killing machines could be a game changer." Having effectively clarified that his complaint with Hollywood is less the figure of the Terminator-style robot than its timeline, he reassures us that "the good news is, they're decades away. We have plenty of time to deal with this threat." "Decades away, for sure?" asks Smith. "Not for sure, cuz we don't know how progress is," Kaku replies, and then offers what could be a more fundamental critique of the sentient robot project. Citing the disappointments of the recent DARPA Robotics Challenge as evidence, he explains: "It turns out that our brain is not really a digital computer." The lesson to take from this, he proposes, is that the autonomous killing machine "is a long term threat, it's a threat that we have time to digest and deal with, rather than running to the hills like a headless chicken" (at which he and Smith share a laugh). While I applaud Kaku's scepticism regarding advances in humanoid robots, it's puzzling that he himself frames the question in these terms, suggesting that it's the prospect of humanoid killer robots to which the open letter is addressed, and (at least implicitly) dismissing its signatories as the progeny of Chicken Little.

Having by now spent all but 30 seconds of his 3 minutes and 44 seconds, Kaku then points out that "one day we may have a drone that can seek out human targets and just kill them indiscriminately. That could be a danger, a drone whose only mission is to kill anything that resembles a human form … so that is potentially a problem – it doesn't require that much artificial intelligence for a robot to simply identify a human form, and zap it." Setting aside the hyperbolic reference to indiscriminate targeting of any human form (though see the Super Aegis 2 system projected to patrol the heavily armed 'demilitarized zone' between North and South Korea), this final sentence (after which the interview concludes) begins to acknowledge the actual concerns behind the urgency of the campaign for a ban on lethal autonomous weapons. Those turn not on the prospect of a Terminator-style humanoid or 'sentient' bot, but on the much more mundane progression of increasing automation in military weapon systems: in this case, automation of the identification of particular categories of humans (those in a designated area, or who fit a specified and machine-readable profile) as legitimate targets for killing. In fact, it's only the popular media that have raised the prospect of fully intelligent humanoid robots: the letter, and the wider campaign for a ban on lethal autonomous weapons, have nothing to do with 'Terminator-style' robots. The developments that are cited in the letter are both far more specific and more imminent.

That specificity is clarified in a CNET story about the open letter, produced by Luke Westaway and broadcast on 27 July. Despite its inclusion of cuts from Terminator 3 and its invocation of the celebrity triad, we're also informed that the open letter defines autonomous weapons as those that "select and engage targets without human intervention." The story features interviews with ICRAC's Noel Sharkey, and Thomas Nash of the UK NGO Article 36. Sharkey helpfully points out that rather than assuming humanoid form, lethal autonomous weapons are much more likely to look like already-existing weapons systems, including tanks, battleships and jet fighters. He explains that the core issue for the campaign is an international ban that would pre-empt the delegation of 'decisions' to kill to machines. It's worth noting that the word 'decision' in this context needs to be read without the connotations of that term that associate it with human deliberation. A crucial issue here – and one that could be much more systematically highlighted in my view – is that this delegation of 'the decision to kill' presupposes the specification, in a computationally tractable way, of algorithms for the discriminatory identification of a legitimate target. The latter, under the Rules of Engagement, International Humanitarian Law and the Geneva Conventions, is an opponent that is engaged in combat and poses an 'imminent threat'. We have ample evidence for the increasing uncertainties involved in differentiating combatants from non-combatants under contemporary conditions of war fighting (even apart from crucial contests over the legitimacy of targeting protocols). The premise that legitimate target identification could be rendered sufficiently unambiguous to be automated reliably is at this point unfounded (apart from certain nonhuman targets like incoming missiles with very specific 'signatures', which also clearly pose an imminent threat).

'Do we want to live in a world in which we have given machines the power to take human lives, without a human being there to pull the trigger?' asks Thomas Nash of Article 36 (CNET 27 July 2015). Of course the individual human with their hand on the trigger is effectively dis-integrated – or better, highly distributed – in the case of advanced weapon systems. But the existing regulatory apparatus that comprises the laws of war relies fundamentally on the possibility of assigning moral and legal responsibility. However partial and fragile its reach, this regime is our best current hope for articulating limits on killing. The precedent for a ban on lethal autonomous weapons lies in the United Nations Convention on Certain Conventional Weapons (CCW), the body created 'to ban or restrict the use of specific types of weapons that are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately.' Achieving that kind of legally binding international agreement, as Westaway points out, is a huge task, but, as Thomas Nash explains, there is some progress. Since the launch of the campaign in 2013, the CCW has put the debate on lethal autonomous weapons onto its agenda and held two international 'expert' consultations. At the end of this year, the CCW will consider whether to continue discussions or to move forward on the negotiation of an international treaty.


Convention on Certain Conventional Weapons, May 2014

To appreciate the urgency of interventions into the development of lethal autonomous weapons, science and technology studies (STS) offers a useful concept. The idea of 'irreversibility' points to the observation that, while technological trajectories are never self-determining or inevitable, the difficulties of undoing technological projects increase over time (see for example Michel Callon (1990), 'Techno-economic networks and irreversibility', The Sociological Review, 38: 132–161). Investments (both financial and political) increase, as does the iterative installation and institutionalization of associated infrastructures (both material and social). The investments required to dismantle established systems grow commensurately. In the CNET interview, Nash points to the entrenched and expanding infrastructures of drone technology as a case in point.

BBC World News (after invoking the Big Three, and also offering the obligatory reference to The Terminator) interviews Professor Heather Roff, who helped to draft the letter. The BBC's Dominic Laurie asks Roff to clarify the difference between a remotely operated drone and the class of weapons to which the letter is addressed. Roff points to the fact that the targets for current drone operations are 'vetted and checked', in the case of the US military by a Judge Advocate General (JAG). She is quick to add, "Now, whether or not that was an appropriate target or that there are friendly fire issues or there are collateral killings is a completely different matter"; what matters for a ban on lethal autonomous weapons, she emphasizes, is that "there is a human being actually making that decision, and there is a locus of responsibility and accountability that we can place on that human." In the case of lethal autonomous weapons, she argues, human control is lacking "in any meaningful sense".

The question of ‘meaningful human control’ has become central to debates about lethal autonomous weapons. As formulated by Article 36 and embraced by United Nations special rapporteur on extrajudicial, summary or arbitrary executions Christof Heyns, it is precisely the ambiguity of the phrase that works to open up the discussion in vital and generative ways. In collaboration with Article 36, Roff is now beginning a project – funded by the Future of Life Institute – to develop the concept of meaningful human control more fully. The project aims to create a dataset “of existing and emerging semi-autonomous weapons, to examine how autonomous functions are already being deployed and how human control is maintained. The project will also bring together a range of actors including computer scientists, roboticists, ethicists, lawyers, diplomats and others to feed into international discussions in this area.”

While those of us engaged in thinking through STS are preoccupied with the contingent and shifting distributions of agency that comprise complex sociotechnical systems, the hope for calling central actors to account rests on the possibility of articulating relevant legal and normative frameworks. These two approaches are not, in my view, incommensurable. Jutta Weber and I have recently attempted to set out a conception of human-machine autonomies that recognizes the inseparability of human and machine agencies, and the always contingent nature of ideas of autonomy, in a way that supports the campaign against lethal autonomous weapons. Like the signatories to the open letter, and as part of a broader concern to interrupt the intensification of automated killing, we write of the urgent need to reinstate human deliberation at the heart of matters of life and death.


Slow robots and slippery rhetorics


The recently concluded DARPA Robotics Challenge (DRC), held this past week at a NASCAR racetrack near Homestead, Florida, seems to have had a refreshingly sobering effect on the media coverage of advances in robotics. The field of sixteen competitors (it was to have been seventeen, but 'travel issues' prevented the Chinese team from participating), the victors of earlier trials, represented the international state of the art in the development of mobile, and more specifically 'legged', robots. The majority of the teams worked with machines configured as upright, bipedal humanoids, while two figured their robots as primates (Robosimian and CHIMP), and one as a non-anthropomorphised 'hexapod'. The Challenge staged a real-time, public demonstration of the state of the art; one which, it seems, proved disillusioning to many who witnessed it. For all but the most technically knowledgeable in the audience, the actual engineering achievements were hard to appreciate. More clearly evident were the slowness and clumsiness of the robots, and their vulnerability to failure at what to human contenders would have been quite unremarkable tasks. A photo gallery titled Robots to the Rescue, Slowly is indicative, and the BBC titles its coverage of the Challenge 'Robot competition reveals rise of the machines not imminent'.

Reporter Zachary Fagenson sets the scene with a representative moment in the competition:

As a squat, red and black robot nicknamed CHIMP gingerly pushed open a spring-loaded door a gust of wind swooped down onto the track at the Homestead-Miami Speedway and slammed the door shut, eliciting a collective sigh of disappointment from the audience.

In the BBC's video coverage of the event, Dennis Hong, Director of the Virginia Tech Robotics Lab, tells the interviewer: "When many people think about robots, they watch so many science fiction movies, they think that robots can run and do all the things that humans can do. From this competition you'll actually see that that is not the truth. The robots will fall, it's gonna be really, really slow…" and DARPA Director Arati Prabhakar concurs: "I think that robotics is an area where our imaginations have run way ahead of where the technology actually is, and this challenge is not about science fiction it's about science fact." While many aspects of the competition would challenge the separateness of fiction and fact (not least the investment of its funders and competitors in figuring robots as humanoids), this is nonetheless a difference that matters.

These cautionary messages are contradicted, however, in a whip-lash inducing moment at the close of the BBC clip, when Boston Dynamics Project Manager Joe Bondaryk makes the canonical analogy between the trials and the Wright brothers’ first flight, reassuring us that “If all this keeps going, then we can imagine having robots by 2015 that will, you know, that will help our firefighters, help our policemen to do their jobs” (just one year after next year’s Finals, and a short time frame even compared to the remarkable history of flight).

The winning team, University of Tokyo spin-out company Schaft (recently acquired by Google), attributes its differentiating edge in the competition to a new high-voltage, liquid-cooled motor technology making use of a capacitor rather than a battery for power, which the engineers say lets the robot's arms move and pivot at higher speeds than would otherwise be possible. Second and fourth place went to teams that had adopted the Atlas robot from Boston Dynamics (another recent Google acquisition) as their hardware platform, the Florida Institute for Human and Machine Cognition (IHMC) and MIT teams respectively. (With much fanfare, DARPA funded the delivery of Atlas robots to a number of the contenders earlier this year.) Third place went to Carnegie Mellon University's 'CHIMP,' while one of the least successful entrants, scoring zero points, was NASA's 'Valkyrie', described in media reports as the only gendered robot in the group (as signaled by its white plastic body and suggestive bulges in the 'chest' area). Asked about the logic of Valkyrie's form factor, Christopher McQuin, NASA's chief engineer for hardware development, offered: "The goal is to make it comfortable for people to work with and to touch." (To adequately read this comment, and Valkyrie's identification as gendered against the 'neutrality' of the other competitors, would require its own post.) The eight teams with the highest scores are eligible to apply for up to $1 million in funding to prepare for the final round of the Challenge in late 2014, where a winner will take a $2 million prize.

An article on the Challenge in the MIT Technology Review by journalist Will Knight includes the sidebar: 'Why it Matters: If they can become nimbler, more dexterous, and safer, robots could transform the way we work and live.' Knight thereby implies that we should care about robots, their actual clumsiness and unwieldiness notwithstanding, because if they were like us, they could transform our lives. The invocation of the way we live here echoes the orientation of the Challenge overall, away from robots as weapons – as instruments of death – and towards the figure of the first responder as the preserver of life. Despite its sponsorship by the Defense Advanced Research Projects Agency (DARPA), the agency charged with developing new technology for the military, the Challenge is framed not in terms of military R&D, but as an exercise in the development of 'rescue robots'.

More specifically, DARPA statements, as well as media reports, position the Challenge itself, along with the eight tasks assigned to the robotics teams (e.g. walking over rubble, clearing debris, punching a hole in drywall, turning a valve, attaching a fire hose, climbing a ladder), as a response to the disastrous meltdown of the Fukushima Daiichi nuclear power plant. (For a challenge to this logic see Maggie Mort's comment to my earlier post 'will we be rescued?') While this raises the question of how robots would be hardened against the effects of nuclear radiation, and at what cost (the robots competing in the Challenge already cost up to several million dollars each), Knight suggests that if robots can be developed that are capable of taking on these tasks, "they could also be useful for much more than just rescue missions." Knight observes that the robot of the winning team "is the culmination of many years of research in Japan, inspired in large part by concerns over the country's rapidly aging population," a proposition affirmed by DARPA Program Manager Gill Pratt, who "believes that home help is the big business opportunity [for] humanoid robots." Just what the connection might be between these pieces of heavy machinery and care at home is left to our imaginations, but quite remarkably Pratt further suggests "that the challenges faced by the robots involved in the DARPA event are quite similar to those that would be faced in hospitals and nursing homes."

In an article by Pratt published early in December in the Bulletin of the Atomic Scientists titled Robot to the Rescue, we catch a further glimpse of what the 'more than rescue' applications for the Challenge robots might be. Pratt's aspirations for the DARPA Robotics Challenge invoke the familiar (though highly misleading) analogy between the robot and the developing human: "by the time of the DRC Finals, DARPA hopes the competing robots will demonstrate the mobility and dexterity competence of a 2-year-old child, in particular the ability to execute autonomous, short tasks such as 'clear out the debris in front of you' or 'close the valve,' regardless of outdoor lighting conditions and other variations." I would challenge this comparison on the basis that it underestimates the level of the 2-year-old child's competencies, but I suspect that many parents of 2-year-olds might question its aptness on other grounds as well.

Having set out the motivation and conditions of the Challenge, in a section titled 'Don't be scared of the robot', Pratt turns to the "broad moral, ethical, and societal questions" that it raises, noting that "although the DRC will not develop lethal or fully autonomous systems, some of the technology being developed in the competition may eventually be used in such systems." He continues:

society is now wrestling with moral and ethical issues raised by remotely operated unmanned aerial vehicles that enable reconnaissance and projection of lethal force from a great distance … the tempo of modern warfare is escalating, generating a need for systems that can respond faster than human reflexes. The Defense Department has considered the most responsible way to develop autonomous technology, issuing a directive in November 2012 that carefully regulates the way robotic autonomy is developed and used in weapons systems. Even though DRC robots may look like those in the movies that are both lethal and autonomous, in fact they are neither.

The slippery slope of automation and autonomy in military systems, and the U.S. Defense Department's ambiguous assurances about its commitment to the continued role of humans in targeting and killing, are the topic of ongoing debate and of a growing campaign to ban lethal autonomous weapons (see the ICRAC website for details). I would simply note here the moment of tautological reasoning wherein 'the tempo of modern warfare,' presented as a naturally occurring state of the world, becomes the problem for which faster response is the solution, which in turn justifies the need for automation, which in turn increases the tempo, which in turn, etc.

In elaborating the motivation for the Challenge, Gill Pratt invokes a grab-bag of familiar specters of an increasingly ‘vulnerable society’ (population explosion with disproportionate numbers of frail elderly, climate change, weapons of mass destruction held in the wrong hands) as calling for, if not a technological solution, at least a broad mandate for robotics research and development:

The world’s population is continuing to grow and move to cities situated along flood-prone coasts. The population over age 65 in the United States is forecast to increase from 13 percent to 20 percent by 2030, and the elderly require more help in emergency situations. Climate change and the growing threat of proliferation of weapons of mass destruction to non-state actors add to the concern. Today’s natural, man-made and mixed disasters might be only modest warnings of how vulnerable society is becoming.

One implication of this enumeration is that even disaster can be good for business, and humanoid robotics research, Pratt assures us, is all about saving lives and protecting humans (a term that seems all-encompassing at the same time that it erases from view how differently actual persons are valued in the rhetorics of ‘Homeland Security’).  The figure of the ‘warfighter’ appears only once towards the end of Pratt’s piece, and even there the robot’s role in the military is about preserving, not taking life.  But many of us are not reassured by the prospect of robot rescue, and would instead call on the U.S. Government to take action to mitigate climate change, to dial down its commitment to militarism along with its leading role in the international arms trade, and to invest in programs to provide meaningful jobs at a living wage to humans, not least those engaged in the work of care.  The robot Challenge could truly be an advance if the spectacle of slow robots would raise questions about the future of humanoid robotics as a project for our governments and universities to be invested in, and about the good faith of slippery rhetorics that promise the robot first responder as the remedy for our collective vulnerability.

Talking the talk

Following two busy teaching terms I'm finally able to turn back to Robot Futures, and into my inbox comes an uncommonly candid account of the most recent demonstration of one of the longest-lived humanoid robots, Honda's Asimo. Titled 'Honda robot has trouble talking the talk' (Yuri Kageyama, 04 July 2013, Independent.ie), the article describes Asimo's debut as a proposed museum tour guide at the Miraikan science museum. A seemingly endless stream of media reports on Asimo in the years since the robot's first version was announced in 2000 has focused on the robot's ability to walk the walk of a humanoid biped, although troubles have occurred there as well. But to join the ranks of imagined robot service providers requires that Asimo add to its navigational abilities some interactional ones. And it's here that this latest trouble arises, as Kageyama reports that "The bubble-headed Asimo machine had problems telling the difference between people raising their hands to ask questions and those aiming their smartphones to take photos. It froze mid-action and repeated a programmed remark, 'Who wants to ask Asimo a question?'." The same technological revolution that has provided the context for Asimo's humanoid promise, in other words, has configured a human whose raised hand comprises a noisy signal, ambiguously identifying her as interlocutor or spectator.

At least some publics, it seems, are growing weary of the perpetually imminent arrival of useful humanoids. Kageyama cites complaints that Honda has yet to develop any practical applications for Asimo. While one of the uses promised for Asimo and his service robot kin has been to take up tasks too dangerous for human bodies, it seems that robot bodies may be just as fragile: Kageyama reports that "Asimo was too sensitive to go into irradiated areas after the 2011 Fukushima nuclear crisis." As a less demanding alternative, Asimo's engineering overseer, Satoshi Shigemi, suggests that a "possible future use for Asimo would be to help people buy tickets from vending machines at train stations," speeding up the process for humans unfamiliar with those devices. I can't help noting the similarity of this projected robot future, however, to the expert system photocopier coach that was the first object of my own research in the mid-1980s. As the ethnomethodologists have taught us, instructions presuppose the competencies that are required for their successful execution. This poses, if not an infinite, at least a pragmatically indefinite, regress for artificial intelligences and interactive machines.

Asimo's troubles take on far more serious proportions in the case of robotic weapon systems, required to make critical and exceedingly more challenging discriminations among the humans who face them. For a recent reflection on this worrying robot future, see Sharkey and Suchman, 'Wishful Mnemonics and Autonomous Killing Machines', in the most recent issue of the AISB Quarterly, the newsletter of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour, No. 136, May 2013.

remote control

According to media reports, more than 7,000 drones of all types are in use over Iraq and Afghanistan, and remote control is seen as the vanguard of a 'revolution in military affairs' in which U.S. military and intelligence agencies are heavily invested, in both senses of the word. With the integration of Hellfire missiles, the first armed version of the Predator drone (designated MQ-1) was deployed in Afghanistan in 2002 as part of what the U.S. military names Operation Enduring Freedom (originally Operation Infinite Justice), under the auspices of what George Bush declared in September 2001 to be a Global War (without end) on Terror. In 2001, the U.S. Congress gave the Pentagon the goal of making one-third of ground combat vehicles remotely operated by 2015. A decade later, under President Obama's less colorfully named 'Overseas Contingency Operations', the amount of money being spent on research for military robotics surpasses the budget of the entire National Science Foundation.

'War would be a lot safer, the Army says, if only more of it were fought by robots' (John Markoff, NY Times, November 27, 2010). Statements like this at once assume the reader to be one of the 'we' for whom war would be safer, while deleting war's Others from our view. This erasure is rendered more graphically in the image that accompanies Markoff's article, titled 'Remotely controlled: Some armed robots are operated with video-game-style consoles, helping to keep humans away from danger' (my emphasis). These reports valorize the nonhuman qualities of the robots which, Markoff reports, are 'never distracted, using an unblinking digital eye, or "persistent stare," that automatically detects even the smallest motion. Nor do they ever panic under fire … When a robot looks around a battlefield [says Joseph W. Dyer, a former vice admiral and the chief operating officer of iRobot], the remote technician who is seeing through its eyes can take time to assess a scene without firing in haste at an innocent person.' But the translation of bodies into persons, and persons into targets, is not a straightforward one.

My thinking about the human-machine interface to this point has focused on questioning assumptions about the fixity of its boundaries, while at the same time slowing down too easy erasures of differences that matter between humans and machines.  I’ve been particularly concerned with machines modelled in the image of a human that many of us in science and technology studies and feminist theory have been at pains to refigure; that is, one for whom autonomous agency and instrumental reasoning are the gold standard.  In the interest of avoiding essentialism, I’ve tried to base my arguments for difference on the ways in which different forms of embodiment afford different possibilities for reflexively co-enacting what we think of as shared situations, or reciprocity, or mutual intelligibility, or what feminist scholars like Donna Haraway have proposed that we think about as ‘response-ability’.  This argument has provided a generative basis for critique of initiatives in artificial intelligence, robotics and the like.

"Some of us think that the right organizational structure for the future is one that skillfully blends humans and intelligent machines," [says John Arquilla, executive director of the Information Operations Center at the Naval Postgraduate School]. "We think that that's the key to the mastery of 21st-century military affairs" (quoted in Markoff November 27, 2010). Hardly a new idea (remembering the Strategic Computing Initiative of the Reagan era), this persistent vision of mastery-to-come underwrites old and new alliances in research and development, funded by defense spending, taken up by academic and industrial suppliers, echoed and hyped by the media, and embraced by entertainment producers and consumers. So how, I'm now wondering, might I usefully mobilise and expand my earlier arguments regarding shifting boundaries and differences that matter between humans and machines, to aid efforts to map and interrupt what James der Derian (2009) has called 'virtuous war' – that is, warfighting justified on the grounds of a presumed moral superiority, persistent mortal threat and, most crucially, minimal casualties on our side – and the military-industrial-media-entertainment network that comprises its infrastructure?