
Unpriming the pump: Remystifications of AI at the UN’s Convention on Certain Conventional Weapons


In the lead-up to the next meeting of the CCW’s Group of Governmental Experts at the United Nations, April 9-13 in Geneva, the UN’s Institute for Disarmament Research (UNIDIR) has issued a briefing paper titled The Weaponization of Increasingly Autonomous Technologies: Artificial Intelligence. Designated a primer for CCW delegates, the paper lists no authors, but a special acknowledgement to Paul Scharre, Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security, suggests that the viewpoints of the Washington, D.C.-based CNAS are well represented.

Surprisingly for a document positioning itself as “an introductory primer for non-technical audiences on the current state of AI and machine learning, designed to support the international discussions on the weaponization of increasingly autonomous technologies” (pp. 1-2), the paper opens with a series of assertions regarding “rapid advances” in the field of AI. The evidence offered is the case of Google/Alphabet affiliate DeepMind’s AlphaGo Zero, announced in December 2017 (“only a few weeks after the November 2017 GGE”) as having achieved better-than-human competency at (simulations of) the game of Go:

Although AlphaGo Zero does not have direct military applications, it suggests that current AI technology can be used to solve narrowly defined problems provided that there is a clear goal, the environment is sufficiently constrained, and interactions can be simulated so that computers can learn over time (p.1).

The requirements listed – a clear (read: computationally specifiable) goal, within a constrained environment that can be effectively simulated – might be underscored as cautionary qualifications on claims for AI’s applicability to military operations. The tone of these opening paragraphs suggests, however, that these developments are game-changers for the GGE debate.
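To make those qualifications concrete, consider a minimal sketch – entirely illustrative, not taken from the primer, and assuming nothing beyond the Python standard library – of the kind of “narrowly defined problem” that techniques such as reinforcement learning can address. Every element the system “learns” from must first be written down by its designers: the goal as a reward function, the environment as a simulator, and the interactions as episodes that can be replayed at negligible cost.

```python
# A toy illustration (my own sketch, not drawn from the UNIDIR paper): tabular
# Q-learning on a 5x5 gridworld. Note how much must be specified in advance --
# the goal as a reward function, the environment's dynamics as a simulator,
# and thousands of cheap simulated episodes -- before any "learning" occurs.
import random

GRID = 5                                        # the entire "world" is 5x5
GOAL = (4, 4)                                   # the goal, reduced to a coordinate
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]    # right, left, down, up

def step(state, action):
    """Fully specified, deterministic dynamics: the simulator the learner needs."""
    x, y = state
    dx, dy = action
    new_state = (min(max(x + dx, 0), GRID - 1), min(max(y + dy, 0), GRID - 1))
    reward = 1.0 if new_state == GOAL else -0.01  # the computationally specified goal
    return new_state, reward, new_state == GOAL

Q = {}                                          # table of state-action values
alpha, gamma, epsilon = 0.5, 0.9, 0.1           # learning rate, discount, exploration

for episode in range(2000):                     # interactions are simulated, so cheap
    state, done = (0, 0), False
    for _ in range(200):                        # cap episode length
        if done:
            break
        if random.random() < epsilon:
            action = random.choice(ACTIONS)     # explore
        else:
            action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))  # exploit
        new_state, reward, done = step(state, action)
        best_next = max(Q.get((new_state, a), 0.0) for a in ACTIONS)
        Q[(state, action)] = Q.get((state, action), 0.0) + alpha * (
            reward + gamma * best_next - Q.get((state, action), 0.0))
        state = new_state
```

Nothing in the resulting table of values generalizes beyond the tiny world specified above; a battlefield offers neither the closed dynamics nor the unambiguous reward that the sketch presupposes.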

The paper’s first section, titled ‘What is artificial intelligence,’ opens with the tautological statement that “Artificial intelligence is the field of study devoted to making machines intelligent” (p. 2). A more demystifying description might say, for example, that AI is the field of study devoted to developing computational technologies that automate aspects of human activity conventionally understood to require intelligence. While the authors observe that as systems become more established they shift from characterizations of “intelligence” to more mundane designations like “automation” or “computation,” they suggest that, rather than being the result of demystification, this shift is itself somehow an effect of the field’s advancement. One implication of this logic is that the ever-receding horizon of machine intelligence should be understood not as a marker of the technology’s limits, but of its success.

We begin to get a more concrete sense of the field in the section titled ‘Machine learning,’ which outlines the latter’s various forms. Even here, however, issues central to the deliberations of the GGE are passed over. For example, in the statement that “[r]ather than follow a proscribed [sic] set of if–then rules for how to behave in a given situation, learning machines are given a goal to optimize – for example, winning at the game of chess” (p. 2), the example is not chosen at random, but rather is illustrative of the unstated requirement that the ‘goal’ be computationally specifiable. The authors do helpfully explain that “[s]upervised learning is a machine learning technique that makes use of labelled training data” (my emphasis, p. 3), but the contrast with “unsupervised learning,” or “learning from unlabelled data based on the identification of patterns,” fails to emphasize the role of the human in assessing the relevance and significance of patterns identified. In the case of reinforcement learning “in which an agent learns by interacting with its environment,” the (unmarked) examples are again from strategy games in which, implicitly, the range of agent/environment interactions is sufficiently constrained. And finally, the section on ‘Deep learning’ helpfully emphasizes that so-called neural networks rely either on very large data sets and extensive labours of human classification (for example, the labeling of images to enable their ‘recognition’), or on domains amenable to the generation of synthetic ‘data’ through simulation (for example, in the case of strategy games like Go). Progress in AI, in sum, has been tied to growth in the availability of large data sets and associated computational power, along with increasingly sophisticated algorithms within highly constrained domains of application.
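The dependence of supervised learning on prior human labelling is easiest to see in a minimal example – again a sketch of my own, assuming the widely used scikit-learn library rather than anything cited in the primer:

```python
# Illustrative only: a tiny supervised classifier. The model does nothing more
# than extrapolate labels that people have already assigned; judging whether
# that extrapolation is relevant or significant remains human work.
from sklearn.linear_model import LogisticRegression

# Four training examples, each labelled in advance by a person
# (1 = "vehicle", 0 = "not a vehicle"). The labels come from outside the algorithm.
training_features = [[0.9, 0.1], [0.8, 0.3], [0.2, 0.7], [0.1, 0.9]]
training_labels = [1, 1, 0, 0]

model = LogisticRegression()
model.fit(training_features, training_labels)       # "learning" from labelled data

# Predictions extend the human labelling to new, similar inputs.
print(model.predict([[0.85, 0.2], [0.15, 0.8]]))    # expected: [1 0]
```

Remove the labels and the same data supports only “unsupervised” pattern-finding, whose relevance still has to be assessed by someone – precisely the human labour that the primer’s contrast passes over.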

Yet in spite of these qualifications, the concluding sections of the paper return to the prospects for increasing machine autonomy:

Intelligence is a system’s ability to determine the best course of action to achieve its goals. Autonomy is the freedom a system has in accomplishing its goals. Greater autonomy means more freedom, either in the form of undertaking more tasks, with less supervision, for longer periods in space and time, or in more complex environments … Intelligence is related to autonomy in that more intelligent systems are capable of deciding the best course of action for more difficult tasks in more complex environments. This means that more intelligent systems could be granted more autonomy and would be capable of successfully accomplishing their goals (p. 5, original emphasis).

The logical leap exemplified in this passage’s closing sentence is at the crux of the debate regarding lethal autonomous weapon systems. The authors of the primer concede that “all AI systems in existence today fall under the broad category of “narrow AI”. This means that their intelligence is limited to a single task or domain of knowledge” (p. 5). They acknowledge as well that “many advance [sic] AI and machine learning methods suffer from problems of predictability, explainability, verifiability, and reliability” (p. 8). These are precisely the concerns that have been consistently voiced, over the past five meetings of the CCW, by those states and civil society organizations calling for a ban on autonomous weapon systems. And yet the primer takes us back, once again, to a starting point premised on general claims for the field of AI’s “rapid advance,” rather than careful articulation of its limits. Is it not the latter that are most relevant to the questions that the GGE is convened to consider?

The UNIDIR primer comes at the same time that the United States has issued a new position paper in advance of the CCW titled ‘Humanitarian benefits of emerging technologies in the area of lethal autonomous weapon systems’ (CCW/GGE.1/2018/WP.4). While the US has taken a cautionary position on lethal autonomous weapon systems in past meetings, asserting the efficacy of already-existing weapons reviews to address the concerns raised by other member states and civil society groups, it now appears to be moving toward active promotion of LAWS on the grounds of promised gains in targeting precision and accuracy, with associated limits on unintended civilian casualties – promises that have been extensively critiqued at previous CCW meetings. Taken together, the UNIDIR primer and the US working paper suggest that, rather than moving forward from the debates of the past five years, the 2018 meetings of the CCW will require renewed efforts to articulate the limits of AI and their relevance to the CCW’s charter to enact Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects.

Reality Bites

 


LS3 test during Rim of the Pacific Exercise, July 2014

A pack of international news outlets have reported in the past few days the abandonment by the US Department of Defense of Boston Dynamics’ Legged Squad Support System or LS3 (aka ‘Big Dog’) and its offspring (see Don’t kick the Dog). After five years and US$42 million in investment, what was promised to be a best-in-breed warfighting companion stumbled over a mundane but apparently intractable problem – noise. The robot’s capacity for carrying heavy loads (400 lbs or 181.4 kg) and its much-celebrated ability to navigate rough terrain and right itself after falling (or be easily assisted in doing so) were not enough, in the end, to make up for the fact that its gas (petrol) motor, likened in sound to a lawnmower, made the LS3 simply ‘too loud’ in the assessment of the US Marines who tested it (BBC News 30 January 2015). The trial’s inescapable conclusion was that the noise would reveal a unit’s presence and position, bringing more danger than aid to the US warfighters it was deployed to support.

A second concern contributing to the DoD’s decision was the question of the machine’s maintenance and repair. Long ignored in narratives of technological progress, the essential practices of inventive maintenance and repair have recently become a central topic in social studies of science and technology (see Steven J. Jackson, “Rethinking Repair,” in Tarleton Gillespie, Pablo Boczkowski, and Kirsten Foot, eds., Media Technologies: Essays on Communication, Materiality and Society, Cambridge, MA: MIT Press, 2014). These studies are part of a wider project of recognizing the myriad forms of invisible labour that are essential conditions for keeping machines working – one of the enduring continuities in the history of technology.

The LS3 trials were run by the Marine Corps Warfighting Laboratory, most recently at the Kahuku Training Area in Hawaii during the Rim of the Pacific exercise in July of 2014. Kyle Olson, spokesperson for the Lab, reported that seeing the robot’s potential was challenging “because of the limitations of the robot itself.” This phrasing is noteworthy, as it is the robot itself – the actual material technology – that interrupts the progressive elaboration of the promise that keeps investment in place. According to the Guardian report (30 December 2015), both ‘Big Dog’ and ‘Spot,’ an electrically powered and therefore quieter but significantly smaller prototype, are now in storage, with no future experiments planned.

The cessation of the DoD investment will presumably come as a relief to Google, which acquired Boston Dynamics in 2013, saying at the time that it planned to move away from the military contracts that it inherited with the acquisition.  Boston Dynamics will now, we can assume, turn its prodigious ingenuity in electrical and mechanical engineering to other tasks of automation, most obviously in manufacturing. The automation of industrial labour has, somewhat ironically given its status as the original site for robotics, recently been proclaimed to be robotics’ next frontier. While both the BBC and Guardian offer links to a 2013 story about the great plans that accompanied Google’s investments in robotics, more recent reports characterize the status of the initiative (internally named ‘Replicant’) as “in flux,” and its goal of producing a consumer robot by 2020 as in question (Business Insider November 8, 2015). This follows the departure of former Google VP Andy Rubin in 2014 (to launch his own company with the extraordinary name ‘Playground Global’), just a year after he was hailed as the great visionary leader who would turn Google’s much celebrated acquisition of a suite of robotics companies into a unified effort. Having joined Google in 2005, when the latter acquired his smartphone company Android, Rubin was assigned to the leadership of Google’s robotics division by co-founder Larry Page. According to Business Insider’s Jillian D’Onfro, Page

had a broad vision of creating general-purpose bots that could cook, take care of the elderly, or build other machines, but the actual specifics of Replicant’s efforts were all entrusted to Rubin. Rubin has said that Page gave him a free hand to run the robotics effort as he wanted, and the company spent an estimated $50 million to $90 million on eight wide-ranging acquisitions before the end of 2013.

The unifying vision apparently left with Rubin, who has yet to be replaced. D’Onfro continues:

One former high-ranking Google executive says the robot group is a “mess that hasn’t been cleaned up yet.” The robot group is a collection of individual companies “who didn’t know or care about each other, who were all in research in different areas,” the person says. “I would never want that job.”

So another reality that ‘bites back’ is added to those that make up the robot itself; that is, the alignment of the humans engaged in its creation. Meanwhile, Boston Dynamics’ attempt to position itself on the entertainment side of the military-entertainment complex this holiday season was met less with amusement than alarm, as media coverage characterized it variously as ‘creepy’ and ‘nightmarish.’


Resistance, it seems, is not entirely futile.