Category Archives: autonomous technologies

Still unsafe at any speed?


Mindful of the fact that most of the posts on this blog are provoked by media coverage that works to further mystify AI/robotics, I thought I would take the occasion to recognize a recent story that, to my reading, breaks with that pattern. A piece by Andrew J. Hawkins in The Verge reviews the state of robot taxis in Phoenix, Arizona. Having been assured on several occasions by tech-savvy friends that self-driving cars are in everyday operation in Phoenix, I found that this story helps to clarify the actual state of the technology. And Hawkins raises some welcome questions as well about the claimed benefits of autonomous vehicles; questions that should be at the forefront of discussion about public investment in transportation infrastructures going forward.

While it’s not until paragraph nine that we discover that the autonomous vehicles deployed in the Waymo One taxi service actually still include a human ‘safety driver,’ there’s much to learn here. The piece is headlined by a video report from Hawkins on his own experience of the service. Hawkins points out that Alphabet/Google’s Waymo is the longest-running and most extensive of the autonomous vehicle projects, with the lowest number of recorded “disengagements,” or events in which the human driver has to take over the wheel. The current trial is limited to four towns in the greater Phoenix area, and to voluntary members of Waymo’s “early rider” program (with requisite non-disclosure agreements and, we might assume, liability waivers). We might note, in the aerial views of the designated areas, the flatness of the Arizona landscape, and of course we know that the reason for the old prescription to “ship your sinuses to Arizona” (familiar at least to TV watchers of my generation) is that state’s relatively rain-free climate. There’s not much discussion of Arizona’s particularities here or in the media more generally, but they point us towards the question of what environmental conditions are required for the self-driving car’s successful operation.

Hawkins very helpfully introduces us to the infrastructure of sensor technologies that make the autonomous vehicle possible (the car’s sensor view is live streamed for the passenger, in part presumably to make the ride less boring). As we watch we begin to get a sense of how the car/environment might be a more apt unit of analysis than the car alone. Surrounding vehicles and pedestrians are rendered as categorically color-coded, edge-detected objects. As Hawkins compares the experience to “being in the back seat with a very cautious student driver,” we get to sit through an “unprotected left turn” (turning left across oncoming traffic without the protection of a dedicated signal, by waiting for a break in the flow). We see how the Waymo One turns only when the change in the traffic light at the intersection ahead creates a clear, clean break in the traffic. Well worth the wait, I suspect most of us would say, though a source of reported frustration for other human drivers. For Waymo this breakdown in driving tempo poses the challenge of developing its software to enable the car to drive “more organically, more like a human.” We get a sense here, for better and worse, of the difference between an operation based entirely on metrics and algorithms, and a practice based on embodied experiences of space and time. Nonetheless, Daniel Chu, Waymo’s Director of Product, translates aggregated time on the road and associated statistics into a characterization of each Waymo One vehicle as equivalent to “the world’s most experienced driver.”

Perhaps the most welcome moment in Hawkins’s account comes when he turns to the question of whether the car is actually the most imaginative, or even desirable, vehicle for the future of transportation. Millennials, he reports, indicate in poll data some doubt about the future of car travel, and a preference for better public transportation, along with safer spaces in which to bike and walk. While touted as a remedy to the proven fallibility of human drivers, comparable safety statistics for the driverless car aren’t really available, according to Sean Sweat of the Urban Phoenix Project, given the relative size of data sets on cars driven by humans and driverless cars over time. Sweat points out that the question of driver safety also sidesteps the question of how the design of urban spaces, particularly streets, might contribute to pedestrian fatalities or, alternatively, to their avoidance. This points to the much larger issue of how the investment in a future of self-driving cars might drive the reconfiguration of transport infrastructures required to enable them. Not only the cars themselves, but our roadways and urban landscapes will likely become further instrumented in the service of vehicle autonomy. This isn’t inherently a bad thing; the Copenhagen subway system, for example, has evolved through a thoroughgoing makeover of the city infrastructure to create a driverless and extremely safe transport system to accompany its bicycle-friendly streetscape (most obviously, there’s no open access to the track, even from the station platform). But in car cultures the financial expense required to re-engineer highways and cities in order to make them autonomous vehicle friendly is accompanied by opportunity costs, beginning with a sidelining of discussion about alternative possibilities. Meanwhile the future of autonomous cars that can drive any road, under any conditions, may take decades, Hawkins concludes, or may never happen.
Far from inevitable, then, the driverless car is a project urgently in need of braking, to open a space for more innovative ways of thinking about safe and sustainable transport.

Which Sky is Falling?

Justin Wood, from https://www.nytimes.com/2018/06/09/technology/elon-musk-mark-zuckerberg-artificial-intelligence.html

The widening arc of public discussion regarding the promises and threats of AI includes a recurring conflation of what are arguably two intersecting but importantly different matters of concern. The first is the proposition, repeated by a few prominent members of the artificial intelligentsia and their followers, that AI is proceeding towards a visible horizon of ‘superintelligence,’ culminating in the point at which power over ‘us’ (the humans who imagine themselves as currently in control) will be taken over by ‘them’ (in this case, the machines that those humans have created).[1] The second concern arises from the growing insinuation of algorithmic systems into the adjudication of a wide range of distributions, from social service provision to extrajudicial assassination. The first concern reinforces the premise of AI’s advancement, as the basis for alarm. The second concern requires no such faith in the progress of AI, but only attention to existing investments in automated ‘decision systems’ and their requisite infrastructures.

The conflation of these concerns is exemplified in a NY Times piece this past summer, reporting on debates within the upper echelons of the tech world including billionaires like Elon Musk and Mark Zuckerberg, at a series of exclusive gatherings held over the past several years. Responding to Musk’s comparison of the dangers of AI with those posed by nuclear weapons, Zuckerberg apparently invited Musk to discuss his concerns at a small dinner party in 2014. We might pause here to note the gratuitousness of the comparison; it’s difficult to take this as other than a rhetorical gesture designed to claim the gravitas of an established existential threat. But an even longer pause is warranted for the grounds of Musk’s concern, that is the ‘singularity’ or the moment when machines are imagined to surpass human intelligence in ways that will ensure their insurrection.

Let’s set aside for the moment the deeply problematic histories of slavery and rebellion that animate this anxiety, to consider the premise. To share Musk’s concern we need to accept the prospect of machine ‘superintelligence,’ a proposition that others in the technical community, including many deeply engaged with research and development in AI, have questioned. In much of the coverage of debates regarding AI and robotics it seems that to reject the premise of superintelligence is to reject the alarm that Musk raises and, by slippery elision, to reaffirm the benevolence of AI and the primacy of human control.

To demonstrate the extent of concern within the tech community (and by implication those who align with Musk over Zuckerberg), NY Times AI reporter Cade Metz cites recent controversy over the Pentagon’s Project Maven. But of course Project Maven has nothing to do with superintelligence. Rather, it is an initiative to automate the analysis of surveillance footage gathered through the US drone program, based on labels or ‘training data’ developed by military personnel. So being concerned about Project Maven does not require belief in the singularity, but only skepticism about the legality and morality of the processes of threat identification that underwrite the current US targeted killing program. The extensive evidence for the imprecision of those processes that has been gathered by civil society organizations is sufficient to condemn the goal of rendering targeted killing more efficient. The campaign for a prohibitive ban on lethal autonomous weapon systems is aimed at interrupting the logical extension of those processes to the point where target identification and the initiation of attack is put under fully automated machine control. Again this relies not on superintelligence, but on the automation of existing assumptions regarding who and what constitutes an imminent threat.

The invocation of Project Maven in this context is symptomatic of a wider problem. Raising alarm over the advent of machine superintelligence serves to reassert AI’s promise, while redirecting the debate away from closer examination of more immediate and mundane problems in automation and algorithmic decision systems. The spectacle of the superintelligentsia at war with each other distracts us from the increasing entrenchment of digital infrastructures built out of unaccountable practices of classification, categorization, and profiling. The greatest danger (albeit one differentially distributed across populations) is that of injurious prejudice, intensified by ongoing processes of automation. Not worrying about superintelligence, in other words, doesn’t mean that there’s nothing about which we need to worry.

As critiques of the reliance on data analytics in military operations are joined by revelations of data bias in domestic systems, it is clear that present dangers call for immediate intervention in the governance of current technologies, rather than further debate over speculative futures. The admission by AI developers that so-called machine learning algorithms evade human understanding is taken to suggest the advent of forms of intelligence superior to the human. But an alternative explanation is that these are elaborations of pattern analysis based not on significance in the human sense, but on computationally-detectable correlations that, however meaningless, eventually produce results that are again legible to humans. From training data to the assessment of results, it is humans who inform the input and evaluate the output of the black box’s operations. And it is humans who must take the responsibility for the cultural assumptions, and political and economic interests, on which those operations are based and for the life-and-death consequences that already follow.
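The point about computationally-detectable but meaningless correlations can be made concrete with a small sketch. Everything below is invented for illustration (it describes a general statistical phenomenon, not any particular deployed system): search enough purely random ‘features’ and some will correlate strongly with a purely random ‘target’, with no significance in the human sense at all.

```python
import random

random.seed(0)

def corr(xs, ys):
    # Pearson correlation between two equal-length sequences
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A random 'target' and many random 'features': nothing here is related
target = [random.gauss(0, 1) for _ in range(30)]
features = [[random.gauss(0, 1) for _ in range(30)] for _ in range(500)]

# Searching enough meaningless features still surfaces a 'strong' correlation
best = max(abs(corr(f, target)) for f in features)
print(round(best, 2))  # almost always well above 0.4, despite pure noise
```

A system built to mine such correlations will reliably find them; whether they mean anything is exactly the judgment that remains with the humans who evaluate the output.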

[1] We could replace the uprising subject in this imaginary with other subjugated populations figured as plotting a takeover, the difference being that here the power of the master/creator is confirmed by the increasing threat posed by his progeny. Thus also the allusion to our own creation in the illustration by Justin Wood that accompanies the article provoking this post.

Unpriming the pump: Remystifications of AI at the UN’s Convention on Certain Conventional Weapons


In the lead up to the next meeting of the CCW’s Group of Governmental Experts at the United Nations April 9-13th in Geneva, the UN’s Institute for Disarmament Research has issued a briefing paper titled The Weaponization of Increasingly Autonomous Technologies: Artificial Intelligence.  Designated a primer for CCW delegates, the paper lists no authors, but a special acknowledgement to Paul Scharre, Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security, suggests that the viewpoints of the Washington, D.C.-based CNAS are well represented.

Surprisingly for a document positioning itself as “an introductory primer for non-technical audiences on the current state of AI and machine learning, designed to support the international discussions on the weaponization of increasingly autonomous technologies” (pp. 1-2), the paper opens with a series of assertions regarding “rapid advances” in the field of AI. The evidence offered is the case of Google/Alphabet affiliate Deep Mind’s AlphaGo Zero, announced in December 2017 (“only a few weeks after the November 2017 GGE”) as having achieved better-than-human competency at (simulations of) the game of Go:

Although AlphaGo Zero does not have direct military applications, it suggests that current AI technology can be used to solve narrowly defined problems provided that there is a clear goal, the environment is sufficiently constrained, and interactions can be simulated so that computers can learn over time (p.1).

The requirements listed – a clear (read computationally specifiable) goal, within a constrained environment that can be effectively simulated – might be underscored as cautionary qualifications on claims for AI’s applicability to military operations. The tone of these opening paragraphs suggests, however, that these developments are game-changers for the GGE debate.
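For readers less familiar with how systems like AlphaGo Zero learn, a deliberately minimal sketch may help fix those three requirements in mind. The gridworld, states, and numbers below are invented for illustration and bear no relation to AlphaGo or to any military system; they simply make the requirements literal.

```python
import random

random.seed(1)

# The primer's three requirements, made literal: a clear goal (reach state 4),
# a tightly constrained environment (five states on a line), and interactions
# cheap enough to simulate thousands of times (the step function below).
N_STATES, GOAL = 5, 4
q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, 1)}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0)   # reward only at the goal

for _ in range(500):                            # 500 simulated episodes
    s = 0
    while s != GOAL:
        if random.random() < 0.2:               # occasional exploration
            a = random.choice((-1, 1))
        else:                                   # otherwise act greedily
            a = max((-1, 1), key=lambda act: q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(q[(nxt, -1)], q[(nxt, 1)])
        q[(s, a)] += 0.5 * (r + 0.9 * best_next - q[(s, a)])
        s = nxt

# The learned policy marches straight toward the goal
policy = [max((-1, 1), key=lambda act: q[(s, act)]) for s in range(GOAL)]
print(policy)  # -> [1, 1, 1, 1]
```

Remove any one of the three conditions (blur the goal, open up the environment, make interactions costly to simulate) and this kind of learning loop has nothing to optimize against, which is precisely the cautionary point for claims about military applicability.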

The paper’s first section, titled ‘What is artificial intelligence,’ opens with the tautological statement that “Artificial intelligence is the field of study devoted to making machines intelligent” (p. 2). A more demystifying description might say, for example, that AI is the field of study devoted to developing computational technologies that automate aspects of human activity conventionally understood to require intelligence. The authors observe that as systems become more established they shift from characterizations of “intelligence” to more mundane designations like “automation” or “computation,” but they suggest that this shift is itself somehow an effect of the field’s advancement rather than the result of demystification. One implication of this logic is that the ever-receding horizon of machine intelligence should be understood not as a marker of the technology’s limits, but of its success.

We begin to get a more concrete sense of the field in the section titled ‘Machine learning,’ which outlines the latter’s various forms. Even here, however, issues central to the deliberations of the GGE are passed over. For example, in the statement that “[r]ather than follow a proscribed [sic] set of if–then rules for how to behave in a given situation, learning machines are given a goal to optimize – for example, winning at the game of chess” (p. 2) the example is not chosen at random, but rather is illustrative of the unstated requirement that the ‘goal’ be computationally specifiable. The authors do helpfully explain that “[s]upervised learning is a machine learning technique that makes use of labelled training data” (my emphasis, p. 3), but the contrast with “unsupervised learning,” or “learning from unlabelled data based on the identification of patterns” fails to emphasize the role of the human in assessing the relevance and significance of patterns identified. In the case of reinforcement learning “in which an agent learns by interacting with its environment,” the (unmarked) examples are again from strategy games in which, implicitly, the range of agent/environment interactions are sufficiently constrained. And finally, the section on ‘Deep learning’ helpfully emphasizes that so called neural networks rely either on very large data sets and extensive labours of human classification (for example, the labeling of images to enable their ‘recognition’), or on domains amenable to the generation of synthetic ‘data’ through simulation (for example, in the case of strategy games like Go). Progress in AI, in sum, has been tied to growth in the availability of large data sets and associated computational power, along with increasingly sophisticated algorithms within highly constrained domains of application.
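The distinction the primer draws, and the human role it underplays, can be illustrated with a toy sketch (all data and methods below are invented for illustration): in the supervised case, human-assigned labels define the categories outright; in the unsupervised case, the same points still split into groups, but nothing in the computation says what those groups mean.

```python
# Four toy points, each with a human-assigned label -- the 'labelled
# training data' of supervised learning.
labelled = [((1.0, 1.2), "a"), ((0.9, 0.8), "a"),
            ((4.1, 3.9), "b"), ((3.8, 4.2), "b")]

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Supervised: the labels tell the learner in advance what the categories are.
centroids = {lab: centroid([p for p, l in labelled if l == lab])
             for lab in ("a", "b")}

def classify(point):
    return min(centroids, key=lambda lab: dist(point, centroids[lab]))

print(classify((1.1, 1.0)))  # -> a

# Unsupervised: strip the labels and the same points still fall into two
# clusters, but the algorithm cannot say what the clusters *mean* -- judging
# their relevance and significance remains human work.
points = [p for p, _ in labelled]
clusters = [[points[0]], [points[2]]]           # crude seeding
for p in (points[1], points[3]):
    nearest = min(range(2), key=lambda i: dist(p, centroid(clusters[i])))
    clusters[nearest].append(p)
print([len(c) for c in clusters])  # -> [2, 2]
```

In both halves, humans sit at the boundaries: they supplied the labels in the first case, and they alone can decide whether the groupings in the second case signify anything.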

Yet in spite of these qualifications, the concluding sections of the paper return to the prospects for increasing machine autonomy:

Intelligence is a system’s ability to determine the best course of action to achieve its goals. Autonomy is the freedom a system has in accomplishing its goals. Greater autonomy means more freedom, either in the form of undertaking more tasks, with less supervision, for longer periods in space and time, or in more complex environments … Intelligence is related to autonomy in that more intelligent systems are capable of deciding the best course of action for more difficult tasks in more complex environments. This means that more intelligent systems could be granted more autonomy and would be capable of successfully accomplishing their goals (p. 5, original emphasis).

The logical leap exemplified in this passage’s closing sentence is at the crux of the debate regarding lethal autonomous weapon systems. The authors of the primer concede that “all AI systems in existence today fall under the broad category of “narrow AI”. This means that their intelligence is limited to a single task or domain of knowledge” (p. 5). They acknowledge as well that “many advance [sic] AI and machine learning methods suffer from problems of predictability, explainability, verifiability, and reliability” (p. 8). These are precisely the concerns that have been consistently voiced, over the past five meetings of the CCW, by those states and civil society organizations calling for a ban on autonomous weapon systems. And yet the primer takes us back, once again, to a starting point premised on general claims for the field of AI’s “rapid advance,” rather than careful articulation of its limits. Is it not the latter that are most relevant to the questions that the GGE is convened to consider?

The UNIDIR primer comes at the same time that the United States has issued a new position paper in advance of the CCW titled ‘Humanitarian benefits of emerging technologies in the area of lethal autonomous weapon systems’ (CCW/GGE.1/2018/WP.4). While the US has taken a cautionary position in relation to lethal autonomous weapon systems in past meetings, asserting the efficacy of already-existing weapons reviews to address the concerns raised by other member states and civil society groups, it now appears to be moving toward active promotion of LAWS, on the grounds of promised increases in targeting precision and accuracy with associated limits on unintended civilian casualties – promises that have been extensively critiqued at previous CCW meetings. Taken together, the UNIDIR primer and the US working paper suggest that, rather than moving forward from the debates of the past five years, the 2018 meetings of the CCW will require renewed efforts to articulate the limits of AI, and their relevance to the CCW’s charter to enact Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects.

On killer robots, celebrity scientists, and the campaign to ban lethal autonomous weapons


Screencap of South Korean autonomous weapon in action courtesy of Richard Anders via YouTube.  Reticle added by Curiousmatic.

Amidst endless screen shots from Terminator 3: Rise of the Machines (Warner Bros Pictures, 2003), and seemingly obligatory invocations of Stephen Hawking, Elon Musk and Steve Wozniak as signatories, the media reported the release on 28 July of an open letter signed by thousands of robotics and AI researchers calling for a ban on lethal autonomous weapons. The letter’s release to the press was timed to coincide with the opening of the International Joint Conference on Artificial Intelligence (IJCAI 2015) in Buenos Aires. Far more significant than the inclusion of celebrity signatories – their stunning effect in drawing international media attention notwithstanding – is the number of prominent computer scientists (not a group prone to add their names to political calls to action) who have been moved to endorse the letter. Consistent with this combination of noise and signal, the commentaries generated by the occasion of the letter’s release range from aggravatingly misleading to helpfully illuminating.

The former category is well represented in an interview by Fox News’ Shepard Smith with theoretical physicist and science popularizer Michio Kaku. In response to Smith’s opening question regarding whether or not concerns about autonomous weapons are overblown, Kaku suggests that “Hollywood has us brainwashed” into thinking that Terminator-style robots are just around the corner. Quite the contrary, he assures us, “we have a long ways to go before we have sentient robots on the battlefield.” This ‘long ways to go’ is typical of futurist hedges that, while seemingly interrupting narratives of the imminent rise of the machines, implicitly endorse the assumption of continuing progress in that direction. Kaku then further affirms the possibility, if not inevitability, of the humanoid weapon: “Now, the bad news of course is that once we do have such robots, these autonomous killing machines could be a game changer.” Having effectively clarified that his complaint with Hollywood is less the figure of the Terminator-style robot than its timeline, he reassures us that “the good news is, they’re decades away. We have plenty of time to deal with this threat.” “Decades away, for sure?” asks Smith. “Not for sure, cuz we don’t know how progress is,” Kaku replies, and then offers what could be a more fundamental critique of the sentient robot project. Citing the disappointments of the recent DARPA Robotics Challenge as evidence, he explains: “It turns out that our brain is not really a digital computer.” The lesson to take from this, he proposes, is that the autonomous killing machine “is a long term threat, it’s a threat that we have time to digest and deal with, rather than running to the hills like a headless chicken” (at which he and Smith share a laugh).
While I applaud Kaku’s scepticism regarding advances in humanoid robots, it’s puzzling that he himself frames the question in these terms, suggesting that it’s the prospect of humanoid killer robots to which the open letter is addressed, and (at least implicitly) dismissing its signatories as the progeny of Chicken Little.

Having by now spent all but 30 seconds of his 3 minutes and 44, Kaku then points out that “one day we may have a drone that can seek out human targets and just kill them indiscriminately. That could be a danger, a drone whose only mission is to kill anything that resembles a human form … so that is potentially a problem – it doesn’t require that much artificial intelligence for a robot to simply identify a human form, and zap it.” Setting aside the hyperbolic reference to indiscriminate targeting of any human form (though see the Super Aegis 2 system projected to patrol the heavily armed ‘demilitarized zone’ between North and South Korea), this final sentence (after which the interview concludes) begins to acknowledge the actual concerns behind the urgency of the campaign for a ban on lethal autonomous weapons. Those turn not on the prospect of a Terminator-style humanoid or ‘sentient’ bot, but on the much more mundane progression of increasing automation in military weapon systems: in this case, automation of the identification of particular categories of humans (those in a designated area, or who fit a specified and machine-readable profile) as legitimate targets for killing. In fact, it’s only the popular media that have raised the prospect of fully intelligent humanoid robots: the letter, and the wider campaign for a ban on lethal autonomous weapons, has nothing to do with ‘Terminator-style’ robots. The developments that are cited in the letter are both far more specific, and more imminent.

That specificity is clarified in a CNET story about the open letter produced by Luke Westaway, broadcast on July 27th. Despite its inclusion of cuts from Terminator 3 and its invocation of the celebrity triad, we’re also informed that the open letter defines autonomous weapons as those that “select and engage targets without human intervention.” The story features interviews with ICRAC’s Noel Sharkey, and Thomas Nash of the UK NGO Article 36. Sharkey helpfully points out that rather than assuming humanoid form, lethal autonomous weapons are much more likely to look like already-existing weapons systems, including tanks, battle ships and jet fighters. He explains that the core issue for the campaign is an international ban that would pre-empt the delegation of ‘decisions’ to kill to machines. It’s worth noting that the word ‘decision’ in this context needs to be read without the connotations of that term that associate it with human deliberation. A crucial issue here – and one that could be much more systematically highlighted in my view – is that this delegation of ‘the decision to kill’ presupposes the specification, in a computationally tractable way, of algorithms for the discriminatory identification of a legitimate target. The latter, under the Rules of Engagement, International Humanitarian Law and the Geneva Conventions, is an opponent that is engaged in combat and poses an ‘imminent threat’. We have ample evidence for the increasing uncertainties involved in differentiating combatants from non-combatants under contemporary conditions of war fighting (even apart from crucial contests over the legitimacy of targeting protocols). The premise that legitimate target identification could be rendered sufficiently unambiguous to be automated reliably is at this point unfounded (apart from certain nonhuman targets like incoming missiles with very specific ‘signatures’, which also clearly pose an imminent threat).

‘Do we want to live in a world in which we have given machines the power to take human lives, without a human being there to pull the trigger?’ asks Thomas Nash of Article 36 (CNET 27 July 2015). Of course the individual human with their hand on the trigger is effectively dis-integrated – or better, highly distributed – in the case of advanced weapon systems. But the existing regulatory apparatus that comprises the laws of war relies fundamentally on the possibility of assigning moral and legal responsibility. However partial and fragile its reach, this regime is our best current hope for articulating limits on killing. The precedent for a ban on lethal autonomous weapons lies in the United Nations Convention on Certain Conventional Weapons (CCW), the body created ‘to ban or restrict the use of specific types of weapons that are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately.’  Achieving that kind of legally binding international agreement, as Westaway points out, is a huge task but as Thomas Nash explains there is some progress. Since the launch of the campaign in 2013, the CCW has put the debate on lethal autonomous weapons onto its agenda and held two international ‘expert’ consultations. At the end of this year, the CCW will consider whether to continue discussions, or to move forwards on the negotiation of an international treaty.


Convention on Certain Conventional Weapons, May 2014

To appreciate the urgency of interventions into the development of lethal autonomous weapons, science and technology studies (STS) offers a useful concept. The idea of ‘irreversibility’ points to the observation that, while technological trajectories are never self-determining or inevitable, the difficulties of undoing technological projects increase over time. (See for example Callon, Michel (1990), Techno-economic networks and irreversibility. The Sociological Review, 38: 132–161) Investments (both financial and political) increase as does the iterative installation and institutionalization of associated infrastructures (both material and social). The investments required to dismantle established systems grow commensurately. In the CNET interview, Nash points to the entrenched and expanding infrastructures of drone technology as a case in point.

BBC World News (after invoking the Big Three, and also offering the obligatory reference to The Terminator) interviews Professor Heather Roff who helped to draft the letter. The BBC’s Dominic Laurie asks Roff to clarify the difference between a remotely-operated drone, and the class of weapons to which the letter is addressed. Roff points to the fact that the targets for current drone operations are ‘vetted and checked’, in the case of the US military by a Judge Advocate General (JAG). She is quick to add, “Now, whether or not that was an appropriate target or that there are friendly fire issues or there are collateral killings is a completely different matter”; what matters for a ban on lethal autonomous weapons, she emphasizes, is that “there is a human being actually making that decision, and there is a locus of responsibility and accountability that we can place on that human.” In the case of lethal autonomous weapons, she argues, human control is lacking “in any meaningful sense”.

The question of ‘meaningful human control’ has become central to debates about lethal autonomous weapons. As formulated by Article 36 and embraced by United Nations special rapporteur on extrajudicial, summary or arbitrary executions Christof Heyns, it is precisely the ambiguity of the phrase that works to open up the discussion in vital and generative ways. In collaboration with Article 36, Roff is now beginning a project – funded by the Future of Life Institute – to develop the concept of meaningful human control more fully. The project aims to create a dataset “of existing and emerging semi-autonomous weapons, to examine how autonomous functions are already being deployed and how human control is maintained. The project will also bring together a range of actors including computer scientists, roboticists, ethicists, lawyers, diplomats and others to feed into international discussions in this area.”

While those of us engaged in thinking through STS are preoccupied with the contingent and shifting distributions of agency that comprise complex sociotechnical systems, the hope for calling central actors to account rests on the possibility of articulating relevant legal and normative frameworks. These two approaches are not, in my view, incommensurable. Jutta Weber and I have recently attempted to set out a conception of human-machine autonomies that recognizes the inseparability of human and machine agencies, and the always contingent nature of ideas of autonomy, in a way that supports the campaign against lethal autonomous weapons. Like the signatories to the open letter, and as part of a broader concern to interrupt the intensification of automated killing, we write of the urgent need to reinstate human deliberation at the heart of matters of life and death.


Slow robots and slippery rhetorics


The recently concluded DARPA Robotics Challenge (DRC), held this past week at a NASCAR racetrack near Homestead, Florida, seems to have had a refreshingly sobering effect on the media coverage of advances in robotics.  A field of sixteen competitors, the victors of earlier trials (it was to be seventeen, but ‘travel issues’ prevented the Chinese team from participating), the teams represented the state of the art internationally in the development of mobile, and more specifically ‘legged’ robots.  The majority of the teams worked with machines configured as upright, bipedal humanoids, while two figured their robots as primates (Robosimian and CHIMP), and one as a non-anthropomorphised ‘hexapod’. The Challenge staged a real-time, public demonstration of the state of the art; one which, it seems, proved disillusioning to many who witnessed it.  For all but the most technically knowledgeable in the audience, the actual engineering achievements were hard to appreciate.  More clearly evident was the slowness and clumsiness of the robots, and their vulnerability to failure at what to human contenders would have proven quite unremarkable tasks.  A photo gallery titled Robots to the Rescue, Slowly is indicative, and the BBC titles its coverage of the Challenge Robot competition reveals rise of the machines not imminent.

Reporter Zachary Fagenson sets the scene with a representative moment in the competition:

As a squat, red and black robot nicknamed CHIMP gingerly pushed open a spring-loaded door a gust of wind swooped down onto the track at the Homestead-Miami Speedway and slammed the door shut, eliciting a collective sigh of disappointment from the audience.

In the BBC’s video coverage of the event, Dennis Hong, Director of the Virginia Tech Robotics Lab, tells the interviewer: “When many people think about robots, they watch so many science fiction movies, they think that robots can run and do all the things that humans can do.  From this competition you’ll actually see that that is not the truth. The robots will fall, it’s gonna be really, really slow…” and DARPA Director Arati Prabhakar concurs: “I think that robotics is an area where our imaginations have run way ahead of where the technology actually is, and this challenge is not about science fiction it’s about science fact.”  While many aspects of the competition would challenge the separateness of fiction and fact (not least the investment of its funders and competitors in figuring robots as humanoids), this is nonetheless a difference that matters.

These cautionary messages are contradicted, however, in a whip-lash inducing moment at the close of the BBC clip, when Boston Dynamics Project Manager Joe Bondaryk makes the canonical analogy between the trials and the Wright brothers’ first flight, reassuring us that “If all this keeps going, then we can imagine having robots by 2015 that will, you know, that will help our firefighters, help our policemen to do their jobs” (just one year after next year’s Finals, and a short time frame even compared to the remarkable history of flight).

The winning team, University of Tokyo’s spin-out company Schaft (recently acquired by Google), attributes its differentiating edge in the competition to a new high-voltage, liquid-cooled motor technology that uses a capacitor rather than a battery for power, which the engineers say lets the robot’s arms move and pivot at higher speeds than would otherwise be possible. Second and fourth place went to teams that had adopted the Boston Dynamics (another recent Google acquisition) Atlas robot as their hardware platform, the Florida Institute for Human and Machine Cognition (IHMC) and MIT teams respectively. (With much fanfare, DARPA funded the delivery of Atlas robots to a number of the contenders earlier this year.)  Third place went to Carnegie Mellon University’s ‘CHIMP,’ while one of the least successful entrants, scoring zero points, was NASA’s ‘Valkyrie’, described in media reports as the only gendered robot in the group (as signaled by its white plastic body and suggestive bulges in the ‘chest’ area).  Asked about the logic of Valkyrie’s form factor, Christopher McQuin, NASA’s chief engineer for hardware development, offered: “The goal is to make it comfortable for people to work with and to touch.”  (To adequately read this comment, and Valkyrie’s identification as gendered against the ‘neutrality’ of the other competitors, would require its own post.)  The eight teams with the highest scores are eligible to apply for up to $1 million in funding to prepare for the final round of the Challenge in late 2014, where the winner will take a $2 million prize.

An article on the Challenge in the MIT Technology Review by journalist Will Knight includes the sidebar: ‘Why it Matters: If they can become nimbler, more dexterous, and safer, robots could transform the way we work and live.’  Knight thereby implies that we should care about robots, their actual clumsiness and unwieldiness notwithstanding, because if they were like us, they could transform our lives.  The invocation of the way we live here echoes the orientation of the Challenge overall, away from robots as weapons – as instruments of death – and towards the figure of the first responder as the preserver of life.  Despite its sponsorship by the Defense Advanced Research Projects Agency (DARPA), the agency charged with developing new technology for the military, the Challenge is framed not in terms of military R&D, but as an exercise in the development of ‘rescue robots‘.

More specifically, DARPA statements, as well as media reports, position the Challenge itself, along with the eight tasks assigned to the robotics teams (e.g. walking over rubble, clearing debris, punching a hole in drywall, turning a valve, attaching a fire hose, climbing a ladder), as a response to the disastrous meltdown of the Fukushima Daiichi nuclear power plant.  (For a challenge to this logic see Maggie Mort’s comment to my earlier post ‘will we be rescued?’)  While this raises the question of how robots would be hardened against the effects of nuclear radiation, and at what cost (the robots competing in the Challenge already cost up to several million dollars each), Knight suggests that if robots can be developed that are capable of taking on these tasks, “they could also be useful for much more than just rescue missions.”  Knight observes that the robot of the winning team “is the culmination of many years of research in Japan, inspired in large part by concerns over the country’s rapidly aging population,” a proposition affirmed by DARPA Program Manager Gill Pratt who “believes that home help is the big business opportunity [for] humanoid robots.”  Just what the connection might be between these pieces of heavy machinery and care at home is left to our imaginations, but quite remarkably Pratt further suggests “that the challenges faced by the robots involved in the DARPA event are quite similar to those that would be faced in hospitals and nursing homes.”

In an article by Pratt published early in December in the Bulletin of the Atomic Scientists titled Robot to the Rescue, we catch a further glimpse of what the ‘more than rescue’ applications for the Challenge robots might be.  Pratt’s aspirations for the DARPA Robotics Challenge invoke the familiar (though highly misleading) analogy between the robot and the developing human: “by the time of the DRC Finals, DARPA hopes the competing robots will demonstrate the mobility and dexterity competence of a 2-year-old child, in particular the ability to execute autonomous, short tasks such as ‘clear out the debris in front of you’ or ‘close the valve,’ regardless of outdoor lighting conditions and other variations.”  I would challenge this comparison on the basis that it underestimates the level of the 2-year-old child’s competencies, but I suspect that many parents of 2-year-olds might question its aptness on other grounds as well.

Having set out the motivation and conditions of the Challenge, in a section titled ‘Don’t be scared of the robot’ Pratt  turns to the “broad moral, ethical, and societal questions” that it raises, noting that “although the DRC will not develop lethal or fully autonomous systems, some of the technology being developed in the competition may eventually be used in such systems.”  He continues:

society is now wrestling with moral and ethical issues raised by remotely operated unmanned aerial vehicles that enable reconnaissance and projection of lethal force from a great distance … the tempo of modern warfare is escalating, generating a need for systems that can respond faster than human reflexes. The Defense Department has considered the most responsible way to develop autonomous technology, issuing a directive in November 2012 that carefully regulates the way robotic autonomy is developed and used in weapons systems. Even though DRC robots may look like those in the movies that are both lethal and autonomous, in fact they are neither.

The slippery slope of automation and autonomy in military systems, and the U.S. Defense Department’s ambiguous assurances about their commitment to the continued role of humans in targeting and killing, are the topic of ongoing debate and a growing campaign to ban lethal autonomous weapons (See ICRAC website for details.)  I would simply note here the moment of tautological reasoning wherein ‘the tempo of modern warfare,’ presented as a naturally occurring state of the world, becomes the problem for which faster response is the solution, which in turn justifies the need for automation, which in turn increases the tempo, which in turn, etc.

In elaborating the motivation for the Challenge, Gill Pratt invokes a grab-bag of familiar specters of an increasingly ‘vulnerable society’ (population explosion with disproportionate numbers of frail elderly, climate change, weapons of mass destruction held in the wrong hands) as calling for, if not a technological solution, at least a broad mandate for robotics research and development:

The world’s population is continuing to grow and move to cities situated along flood-prone coasts. The population over age 65 in the United States is forecast to increase from 13 percent to 20 percent by 2030, and the elderly require more help in emergency situations. Climate change and the growing threat of proliferation of weapons of mass destruction to non-state actors add to the concern. Today’s natural, man-made and mixed disasters might be only modest warnings of how vulnerable society is becoming.

One implication of this enumeration is that even disaster can be good for business, and humanoid robotics research, Pratt assures us, is all about saving lives and protecting humans (a term that seems all-encompassing at the same time that it erases from view how differently actual persons are valued in the rhetorics of ‘Homeland Security’).  The figure of the ‘warfighter’ appears only once towards the end of Pratt’s piece, and even there the robot’s role in the military is about preserving, not taking life.  But many of us are not reassured by the prospect of robot rescue, and would instead call on the U.S. Government to take action to mitigate climate change, to dial down its commitment to militarism along with its leading role in the international arms trade, and to invest in programs to provide meaningful jobs at a living wage to humans, not least those engaged in the work of care.  The robot Challenge could truly be an advance if the spectacle of slow robots would raise questions about the future of humanoid robotics as a project for our governments and universities to be invested in, and about the good faith of slippery rhetorics that promise the robot first responder as the remedy for our collective vulnerability.

Postscript to Ethical Governor 0.1

I’ve been encouraged by a colleague to add a postscript to my last post, lest its irony be lost on any of my readers. The post was a form of thought experiment on what it would mean to take Ron Arkin at his word (at least in the venue of the aforementioned debate), to put his proposal to the test by following it out to (one of) its logically absurd conclusions. That is, if as Arkin claims it’s the failures of humans that are his primary concern, and that his ‘ethical governor’ is designed to correct, why wait for the realization of robot weapons to implement it?  Why not introduce it as a restraint into conventional weapons in the first instance, as a check on the faulty behaviours of the humans who operate them?  Of course I assume that the answer to this question is that the ‘governor’ remains in the realm of aspirational fantasy, existing, I’m told, only in the form of a sketch of an idea and some preliminary mathematics developed within a briefly funded student project back in 2009, with no actual proposal for how to translate the requisite legal frameworks into code. Needless to say, I hope, my proposal for the Ethical Governor 0.1 is not something that I would want the DoD actually to fund, though there seems little danger that they would be keen to introduce such a restraint into existing weapon systems even if it could plausibly be realized.

There are two crucial issues here. The first is Arkin’s premise that, insofar as war is conducted outside of the legal frameworks developed to govern it, there could be a technological solution to that problem. And the second is that such a solution could take the form of an ‘ethical governor’ based on the translation of legal frameworks like the Geneva Convention, International Humanitarian Law and Human Rights Law into algorithmic specifications for robot perception and action.  Both of these have been carefully critiqued by my ICRAC colleagues (see http://icrac.net/resources/ for references), as well as in a paper that I’ve co-authored with ICRAC Chair Noel Sharkey. A core problem is that prescriptive frameworks like these presuppose, rather than specify, the capacities for comprehension and judgment required for their implementation in any actual situation.  And it’s precisely those capacities that artificial intelligences lack, now and for the foreseeable future. Arkin’s imaginary of the encoding of battlefield ethics brings the field no closer to the realization of the highly contingent and contextual abilities that are requisite to the situated enactment of ethical conduct, begging these fundamental questions rather than seriously addressing them.

Ethical Governor 0.1

Last Monday, November 18th, Georgia Tech’s Center for Ethics and Technology hosted a debate on lethal autonomous robots, between roboticist Ron Arkin (author of Governing Lethal Behavior in Autonomous Robots, 2009) and philosopher of technology Rob Sparrow (founding member of the International Committee for Robot Arms Control).  Along with the crucial issues raised by Rob Sparrow (regarding the lost opportunities of ongoing, disproportionate expenditures on military technologies; American exceptionalism and the assumption that ‘we’ will be on the programming end and not the targets of such weapons; the prospects of an arms race in robotic weaponry and its contributions to greater global insecurity, etc.), as well as the various objections that I would raise to Arkin’s premise that the situational awareness requisite to legitimate killing could be programmable, one premise of Arkin’s position in particular inspired this immediate response.

Arkin insists that his commitment to research on lethal autonomous robots is based first and foremost in a concern for saving the lives of non-combatants.  He proceeds from there to the ‘thesis’ that an ethically-governed robot could adhere more reliably to the laws of armed conflict and rules of engagement than human soldiers have demonstrably done.  He points to the history of atrocities committed by humans, and emphasizes that his project is aimed not at the creation of an ethical robot (which would require moral agency), but (simply and/or more technically) at the creation of an ‘ethical governor’ to control the procedures for target identification and engagement and ensure their compliance with international law.  Taking seriously that premise, my partner (who I’ll refer to here as the Lapsed Computer Scientist) suggests an intriguing beta release for Arkin’s project, preliminary to the creation of fully autonomous, ethically-governed robots. This would be to incorporate ethical governors into existing, human-operated weapon systems.  Before a decision to fire could be made, in other words, the conditions of engagement would be represented and submitted to the assessment of the automated ethical governor; only if the requirements for justifiable killing were met would the soldier’s rifle or the hellfire missile be enabled.  This would at once provide a means for testing the efficacy of Arkin’s governor (he insists that proof of its reliability would be a prerequisite to its deployment), and hasten its beneficial effects on the battlefield (reckless actions on the part of human soldiers being automatically prevented).  I would be interested in the response to this suggestion, both from Ron Arkin (who insists that he’s not a proponent of lethal autonomous weapons per se, but only in the interest of saving lives) and from the Department of Defense by whom his research is funded.  If there were objections to proceeding in this way, what would they be?

Robot rhetorics

An announcement from Voice of America online (repeated from many other media sources over the past couple of days) nicely illustrates the slippery discourses of robotic ability.  Titled ‘Autonomous Aerial Robot Maneuvers Like a Bird,’ the article announces that researchers at Cornell University ‘have developed a flying robot they say is “as smart as a bird” because it can maneuver to avoid obstacles,’ then concludes several paragraphs later:

‘Still, hurdles remain before the robot could be used in a real-world scenario. It still needs to be able to adapt to environmental variations like wind as well as be able to detect moving objects like birds.’

Enough said.

Autonomy

Media reports of developments in so-called robotic weapons systems (a broad category that includes any system involving some degree of pre-programming as well as remote control) are haunted by the question of ‘autonomy’; specifically, the prospect that technologies acting independently of human operators will run ‘out of control’ (a fear addressed by Langdon Winner in his 1977 book Autonomous Technology: technics-out-of-control as a theme in political thought).  While recognizing the very real dangers posed by increasing resort to on-board, algorithmic encoding of controls in military systems, I want to track the discussion of autonomy with respect to weapons systems a bit more closely.  A recent story in the LA Times, noted and under discussion by my colleagues in the International Committee for Robot Arms Control (ICRAC), provides a good starting place.

While I’m going to suggest here that autonomy is something of a red herring in the context of this story, let me be clear at the outset that I believe that we should be deeply concerned about the developments reported. They represent a continuation of the longstanding investment in automation in the (questionable) interest of economy; the dangers of ever-intensified speed in war fighting; the extraordinary inflation of spending on weapons systems at the expense of other social spending (see post Arming Robots); and the threat to global security of the already existing infrastructure of networked warfare.  With that said, I want to question the framing of the developments reported in this article as the beginning of something new, unprecedented and (as often goes along with these adjectives) inevitable, centering on the question of autonomy.

The article reports on the X-47B drone, a demonstration aircraft currently being tested by the Navy at a cost of $813 million.

“The X-47B drone, above, marks a paradigm shift in warfare, one that is likely to have far-reaching consequences. With the drone’s ability to be flown autonomously by onboard computers, it could usher in an era when death and destruction can be dealt by machines operating semi-independently.” (Chad Slattery, Northrop Grumman / January 25, 2012)

A major technical requirement for this plane is that it should be able to land under onboard controls on the deck of an aircraft carrier, “one of aviation’s most difficult maneuvers.”  In this respect, the X-47B is a next logical step in an ongoing process of automation, of the replacement of labour with capital equipment, through the delegation of actions previously done by skillful humans to machines. The familiarity of the story in this respect raises the question: what exactly is the “paradigm shift” here?  And what are the stakes in the assertion that there is one?  The author observes:

“With the drone’s ability to be flown autonomously by onboard computers, it could usher in an era when death and destruction can be dealt by machines operating semi-independently.”

Most commercial aircraft, as well as existing drones, can be put under ‘autopilot’ control, and are always operating ‘semi-independently.’  And the U.S. drone campaign is already dealing death and destruction.

“Although humans would program an autonomous drone’s flight plan and could override its decisions, the prospect of heavily armed aircraft screaming through the skies without direct human control is unnerving to many.”

Aren’t populations in Pakistan, Afghanistan, Yemen and other areas that are the target of U.S. drones already unnerved by heavily armed aircraft screaming through the skies?  And to what extent has ‘direct human control’ over existing drone systems ensured that civilians won’t be killed, whether as a consequence of mistaken targeting, or what seems to be accepted within military procedure as unavoidable ‘collateral’ damage?

“‘The deployment of such systems would reflect … a major qualitative change in the conduct of hostilities,’ committee [of the International Red Cross] President Jakob Kellenberger said at a recent conference. ‘The capacity to discriminate, as required by [international humanitarian law], will depend entirely on the quality and variety of sensors and programming employed within the system.’”

It is clear that the ‘capacity to discriminate’ is already based on complex networks of sensors and code, and the history of the use of armed drones includes recurring examples of misrecognition of targets, extra-judicial killing, and a range of other violations of international law.

“Weapons specialists in the military and Congress acknowledge that policymakers must deal with these ethical questions long before these lethal autonomous drones go into active service, which may be a decade or more away.”

These questions – not only ethical but also moral and legal – must equally have been dealt with before lethal remotely-controlled drones went into active service.  Which means that the latter are, in their current use, unethical, immoral and illegal.

“More aggressive robotry development could lead to deploying far fewer U.S. military personnel to other countries, achieving greater national security at a much lower cost and most importantly, greatly reduced casualties,” aerospace pioneer Simon Ramo, who helped develop the intercontinental ballistic missile, wrote in his new book, “Let Robots Do the Dying.”

The promise of lower cost rings hollow in the context of a defense budget that continues to grow, and the prediction that annual global spending on drones will double to $11.5 billion in the next few years (reported by the New Internationalist in their December 2011 issue).  But ‘most importantly,’ as Ramo puts it, the ‘reduction in casualties’ refers only to ‘our’ side, and it is not only robots that are dying.

The Air Force says in the Unmanned Aircraft Systems Flight Plan 2009-2047 that “it’s only a matter of time before drones have the capability to make life-or-death decisions as they circle the battlefield.” What’s missing from this projection (we should be suspicious whenever we hear ‘it’s only a matter of time’) are the unresolved problems of decision-making that plague already existing armed drone systems. The focus on the future ignores the already unacceptable present.  And the focus on autonomy as the threat directs our attention away from the autonomous arms-industry-out-of-control, of which the X-47B is a symptom.