Which Sky is Falling?

Illustration by Justin Wood, from https://www.nytimes.com/2018/06/09/technology/elon-musk-mark-zuckerberg-artificial-intelligence.html

The widening arc of public discussion regarding the promises and threats of AI includes a recurring conflation of what are arguably two intersecting but importantly different matters of concern. The first is the proposition, repeated by a few prominent members of the artificial intelligentsia and their followers, that AI is proceeding towards a visible horizon of ‘superintelligence,’ culminating in the point at which power over ‘us’ (the humans who imagine themselves as currently in control) will be taken over by ‘them’ (in this case, the machines that those humans have created).[1] The second concern arises from the growing insinuation of algorithmic systems into the adjudication of a wide range of distributions, from social service provision to extrajudicial assassination. The first concern reinforces the premise of AI’s advancement, as the basis for alarm. The second concern requires no such faith in the progress of AI, but only attention to existing investments in automated ‘decision systems’ and their requisite infrastructures.

The conflation of these concerns is exemplified in a NY Times piece this past summer, reporting on debates within the upper echelons of the tech world including billionaires like Elon Musk and Mark Zuckerberg, at a series of exclusive gatherings held over the past several years. Responding to Musk’s comparison of the dangers of AI with those posed by nuclear weapons, Zuckerberg apparently invited Musk to discuss his concerns at a small dinner party in 2014. We might pause here to note the gratuitousness of the comparison; it’s difficult to take this as other than a rhetorical gesture designed to claim the gravitas of an established existential threat. But an even longer pause is warranted for the grounds of Musk’s concern, that is the ‘singularity’ or the moment when machines are imagined to surpass human intelligence in ways that will ensure their insurrection.

Let’s set aside for the moment the deeply problematic histories of slavery and rebellion that animate this anxiety, to consider the premise. To share Musk’s concern we need to accept the prospect of machine ‘superintelligence,’ a proposition that others in the technical community, including many deeply engaged with research and development in AI, have questioned. In much of the coverage of debates regarding AI and robotics it seems that to reject the premise of superintelligence is to reject the alarm that Musk raises and, by slippery elision, to reaffirm the benevolence of AI and the primacy of human control.

To demonstrate the extent of concern within the tech community (and by implication those who align with Musk over Zuckerberg), NY Times AI reporter Cade Metz cites recent controversy over the Pentagon’s Project Maven. But of course Project Maven has nothing to do with superintelligence. Rather, it is an initiative to automate the analysis of surveillance footage gathered through the US drone program, based on labels or ‘training data’ developed by military personnel. So being concerned about Project Maven does not require belief in the singularity, but only skepticism about the legality and morality of the processes of threat identification that underwrite the current US targeted killing program. The extensive evidence for the imprecision of those processes that has been gathered by civil society organizations is sufficient to condemn the goal of rendering targeted killing more efficient. The campaign for a pre-emptive ban on lethal autonomous weapon systems is aimed at interrupting the logical extension of those processes to the point where target identification and the initiation of attack are put under fully automated machine control. Again, this relies not on superintelligence, but on the automation of existing assumptions regarding who and what constitutes an imminent threat.

The invocation of Project Maven in this context is symptomatic of a wider problem. Raising alarm over the advent of machine superintelligence serves to reassert AI’s promise, while redirecting the debate away from closer examination of more immediate and mundane problems in automation and algorithmic decision systems. The spectacle of the superintelligentsia at war with each other distracts us from the increasing entrenchment of digital infrastructures built out of unaccountable practices of classification, categorization, and profiling. The greatest danger (albeit one differentially distributed across populations) is that of injurious prejudice, intensified by ongoing processes of automation. Not worrying about superintelligence, in other words, doesn’t mean that there’s nothing about which we need to worry.

As critiques of the reliance on data analytics in military operations are joined by revelations of data bias in domestic systems, it is clear that present dangers call for immediate intervention in the governance of current technologies, rather than further debate over speculative futures. The admission by AI developers that so-called machine learning algorithms evade human understanding is taken to suggest the advent of forms of intelligence superior to the human. But an alternative explanation is that these are elaborations of pattern analysis based not on significance in the human sense, but on computationally-detectable correlations that, however meaningless, eventually produce results that are again legible to humans. From training data to the assessment of results, it is humans who inform the input and evaluate the output of the black box’s operations. And it is humans who must take the responsibility for the cultural assumptions, and political and economic interests, on which those operations are based and for the life-and-death consequences that already follow.
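
To make that alternative explanation concrete, here is a minimal, hypothetical sketch (the data and names are invented for illustration, not drawn from any deployed system): the arithmetic below will report a correlation between any two series handed to it, meaningful or not; deciding whether the number signifies anything, and what to do about it, remains a human judgment.

```python
import random

def pearson(xs, ys):
    # Plain arithmetic: the correlation coefficient is computed the same way
    # whether the pairing of xs and ys is meaningful or entirely spurious.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

random.seed(1)
# Two unrelated series of 'measurements', invented for illustration.
series_a = [random.random() for _ in range(20)]
series_b = [random.random() for _ in range(20)]

print(f"detected correlation: {pearson(series_a, series_b):.2f}")
# The program reports a number either way; whether that number signifies
# anything, and what follows from acting on it, is decided by people.
```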

[1] We could replace the uprising subject in this imaginary with other subjugated populations figured as plotting a takeover, the difference being that here the power of the master/creator is confirmed by the increasing threat posed by his progeny. Thus also the allusion to our own creation in the illustration by Justin Wood that accompanies the article provoking this post.

Corporate Accountability


This graphic appears at https://www.jacobinmag.com/2018/06/google-project-maven-military-tech-workers

On June 7th, Google CEO Sundar Pichai published a post on the company’s public blog site titled ‘AI at Google: our Principles.’ (Subsequently abbreviated to Our Principles.) The release of this statement was responsive in large measure to dissent from Google employees beginning early in the Fall of last year; while these debates are not addressed directly, their traces are evident in the subtext. The employee dissent focused on the company’s contracts with the US Department of Defense, particularly for work on its Algorithmic Warfare Cross Functional Team, also known as Project Maven. The controversy was receiving increasingly widespread attention in the press.

It is to the credit of Google workers that they have the courage and commitment to express their concerns. And it is to Google management’s credit that, unusually among major US corporations, it both encourages dissent and feels compelled to respond. I was involved in organizing a letter from researchers in support of Googlers and other tech workers, and in that capacity was gratified to hear Google announce that it would not renew the Project Maven contract next year. (Disclosure: I think US militarism is a global problem, perpetrating unaccountable violence while further jeopardizing the safety of US citizens.) In this post I want to take a step away from that particular issue, however, to do a closer reading of the principles that Pichai has set out. In doing so, I want to acknowledge Google’s leadership in creating a public statement of its principles for the development of technologies, a move that is, as far as I’m aware, unprecedented among private corporations. And I want to emphasize that the critique that I set out here is not aimed at Google uniquely, but rather is meant to highlight matters of concern across the tech industry, as well as within wider discourses of technology development.

One question we might ask at the outset is why this statement of principles is framed in terms of AI, rather than software development more broadly. Pichai’s blog post opens with this sentence: “At its heart, AI is computer programming that learns and adapts.” Those who have been following this blog will be able to anticipate my problems with this statement, singularizing ‘AI’ as an agent with a ‘heart’ that engages in learning, and in that way contributing to its mystification. I would rephrase this along the lines of “AI is the cover term for a range of techniques for data analysis and processing, the relevant parameters of which can be adjusted according to either internally or externally generated feedback.” One could substitute “information technologies (IT)” or “software” for AI throughout the principles, moreover, and their sense would be the same.
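
To illustrate what that rephrasing describes, and as a deliberately minimal sketch rather than an account of any Google system, ‘parameters adjusted according to feedback’ can be as mundane as the following loop, which nudges a single number until an error signal shrinks:

```python
# A deliberately mundane rendering of "computer programming that learns and
# adapts": one parameter, adjusted step by step according to an externally
# supplied error signal. The data and learning rate are invented for
# illustration only.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (input, observed output)

weight = 0.0           # the adjustable parameter
learning_rate = 0.01

for _ in range(1000):
    for x, observed in data:
        predicted = weight * x
        error = predicted - observed           # feedback: how far off the prediction is
        weight -= learning_rate * error * x    # adjust the parameter accordingly

print(f"fitted parameter: {weight:.2f}")       # settles near 2.0 for this data
```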

Pichai continues: “It [AI] can’t solve every problem, but its potential to improve our lives is profound.” While this is a familiar (and some would argue innocent enough) premise, it’s always worth asking several questions in response: What’s the evidentiary basis for AI’s “profound potential”? Whose lives, more specifically, stand to be improved? And what other avenues for the enhancement of human well being might the potential of AI be compared to, both in terms of efficacy and the number of persons positively affected?

Regrettably, the opening paragraph closes with some product placement, as Pichai asserts that Google’s development of AI makes its products more useful, from “email that’s spam-free and easier to compose, to a digital assistant you can speak to naturally, to photos that pop the fun stuff out for you to enjoy,” with embedded links to associated promotional sites (removed here in order not to propagate the promotion). The subsequent paragraph then offers a list of non-commercial applications of Google’s data analytics, whose “clear benefits are why Google invests heavily in AI research and development.”

This promotional opening then segues to the preamble to the Principles, explaining that they are motivated by the recognition that “How AI is developed and used will have a significant impact on society for many years to come.” Readers familiar with the field of science and technology studies (STS) will know that the term ‘impact’ has been extensively critiqued within STS for its presupposition that technology is somehow outside of society to begin with. Like any technology, AI/IT does not originate elsewhere, like an asteroid, and then make contact. Rather, like Google, AI/IT is constituted from the start by relevant cultural, political, and economic imaginaries, investments, and interests. The challenge is to acknowledge the genealogies of technical systems and to take responsibility for ongoing, including critical, engagement with their consequences.

The preamble then closes with this proviso: “We acknowledge that this area is dynamic and evolving, and we will approach our work with humility, a commitment to internal and external engagement, and a willingness to adapt our approach as we learn over time.” Notwithstanding my difficulties in thinking of a precedent for humility in the case of Google (or any of the other Big Five), this is a welcome statement, particularly in its commitment to continuing to listen both to employees and to relevant voices beyond the company.

The principles themselves are framed as a set of objectives for the company’s AI applications, all of which are unarguable goods. These are: being socially beneficial, avoiding the creation or reinforcement of social bias, ensuring safety, providing accountability, protecting privacy, and upholding standards of scientific excellence. Taken together, Google’s technologies should “be available for uses that support these principles.” While there is much to commend here, some passages shouldn’t go by unremarked.

The principle, “Be built and tested for safety” closes with this sentence: “In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.” What does this imply for the cases where this is not “appropriate,” that is, what would justify putting AI technologies into use in unconstrained environments, where their operations are more consequential but harder to monitor?

The principle “Be accountable to people” states: “We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.” This is a key objective, but how, realistically, will this promise be implemented? As worded, it implicitly acknowledges a series of complex and unsolved problems: the increasing opacity of algorithmic operations, the absence of due process for those who are adversely affected, and the increasing threat that automation will translate into autonomy, in the sense of technologies that operate in ways that matter without provision for human judgment or accountability. Similarly, for privacy design, Google promises to “give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.” Again, we know that these are precisely the areas that have been demonstrated to be highly problematic with more conventional techniques; when and how will those longstanding, and intensifying, problems be fully acknowledged and addressed?

The statement closes, admirably, with an explicit list of applications that Google will not pursue. The first item, however, includes a rather curious set of qualifications:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.

What are the qualifiers “overall” and “material” doing here? What will be the basis for the belief that “the benefits substantially outweigh the risks,” and who will adjudicate that?

There is a welcome commitment not to participate in the development of

  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

As the Project Maven example illustrates, the line between a weapon and a weapon system can be a tricky one to draw. Again from STS we know that technologies are not discrete entities; their purposes and implementations need to be assessed in the context of the more extended sociotechnical systems of which they’re part.

And finally, Google pledges not to develop:

  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

Again, these commitments are laudable; however, we know that the normative and legal frameworks governing surveillance and human rights are highly contested and frequently violated. This means that adherence to these principles will require working with relevant NGOs (for example, the International Committee of the Red Cross, Human Rights Watch), continuing to monitor the application of Google’s technologies, and welcoming challenges based on evidence of uses that violate the principles.

A coda to this list affirms Google’s commitment to work with “governments and the military in many other areas,” under the pretense that this can be restricted to operations that “keep [LS: read US] service members and civilians safe.” This odd pairing of governments, in the plural, and the military, in the singular, might raise further questions regarding the obligations of global companies like Google and the other Big Five information technology companies. What if it were to read “governments and militaries in many other areas”? What does working with one nation’s military, or with many, imply for Google’s commitment to users and customers around the world?

The statement closes with:

We believe these principles are the right foundation for our company and the future development of AI. This approach is consistent with the values laid out in our original Founders’ Letter back in 2004. There we made clear our intention to take a long-term perspective, even if it means making short-term tradeoffs. We said it then, and we believe it now.

This passage is presumably responsive to media reports of changes to Google’s Code of Conduct, from “Don’t Be Evil” (highly lauded but actually setting quite a low bar), to Alphabet’s “Do the Right Thing.” This familiar injunction is also a famously vacuous one, in the absence of the requisite bodies for deliberation, appeal, and redress.

The overriding question for all of these principles, in the end, concerns the processes through which their meaning and adherence to them will be adjudicated. It’s here that Google’s own status as a private corporation, but one that is now a giant operating within wider economic and political orders, needs to be brought forward from the subtext and subjected to more explicit debate. While Google can rightfully claim some leadership among the Big Five in being explicit about its guiding principles and areas that it will not pursue, this is only because the standards are so abysmally low. We should demand a lot more from companies as large as Google, which control such disproportionate amounts of the world’s wealth, and yet operate largely outside the realm of democratic or public accountability.

Unpriming the pump: Remystifications of AI at the UN’s Convention on Certain Conventional Weapons


In the lead-up to the next meeting of the CCW’s Group of Governmental Experts, to be held at the United Nations in Geneva April 9–13, the UN’s Institute for Disarmament Research has issued a briefing paper titled The Weaponization of Increasingly Autonomous Technologies: Artificial Intelligence.  Designated a primer for CCW delegates, the paper lists no authors, but a special acknowledgement to Paul Scharre, Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security, suggests that the viewpoints of the Washington, D.C.-based CNAS are well represented.

Surprisingly for a document positioning itself as “an introductory primer for non-technical audiences on the current state of AI and machine learning, designed to support the international discussions on the weaponization of increasingly autonomous technologies” (pp. 1-2), the paper opens with a series of assertions regarding “rapid advances” in the field of AI. The evidence offered is the case of Google/Alphabet affiliate Deep Mind’s AlphaGo Zero, announced in December 2017 (“only a few weeks after the November 2017 GGE”) as having achieved better-than-human competency at (simulations of) the game of Go:

Although AlphaGo Zero does not have direct military applications, it suggests that current AI technology can be used to solve narrowly defined problems provided that there is a clear goal, the environment is sufficiently constrained, and interactions can be simulated so that computers can learn over time (p.1).

The requirements listed – a clear (read computationally specifiable) goal, within a constrained environment that can be effectively simulated – might be underscored as cautionary qualifications on claims for AI’s applicability to military operations. The tone of these opening paragraphs suggests, however, that these developments are game-changers for the GGE debate.
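
The requirements themselves can be made concrete with a toy, entirely hypothetical example (a ten-cell corridor, nothing resembling a military setting): the reinforcement learner sketched below succeeds precisely because its goal is computationally specified as a reward, its environment is tiny and fixed, and its interactions can be simulated thousands of times at no cost. None of those conditions holds for the situations with which the GGE is concerned.

```python
import random

# Toy illustration of the three conditions noted above: a computationally
# specified goal (reach cell 9), a tightly constrained environment (a
# ten-cell corridor), and interactions that can be simulated cheaply
# (thousands of practice episodes). Everything here is invented for
# illustration; it describes no actual system.

N_CELLS = 10
ACTIONS = (-1, +1)            # step left or step right
Q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1
random.seed(0)

def greedy(state):
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for _ in range(3000):                                # simulated episodes
    state = 0
    while state != N_CELLS - 1:
        action = random.choice(ACTIONS) if random.random() < epsilon else greedy(state)
        next_state = min(max(state + action, 0), N_CELLS - 1)
        reward = 1.0 if next_state == N_CELLS - 1 else 0.0      # the 'clear goal'
        target = reward + gamma * max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (target - Q[(state, action)])
        state = next_state

# The learned policy: after enough simulated practice, every cell points right.
print(["right" if Q[(s, +1)] >= Q[(s, -1)] else "left" for s in range(N_CELLS - 1)])
```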

The paper’s first section, titled ‘What is artificial intelligence,’ opens with the tautological statement that “Artificial intelligence is the field of study devoted to making machines intelligent” (p. 2). A more demystifying description might say, for example, that AI is the field of study devoted to developing computational technologies that automate aspects of human activity conventionally understood to require intelligence. While the authors observe that as systems become more established they shift from characterizations of “intelligence” to more mundane designations like “automation” or “computation,” they suggest that rather than the result of demystification this is itself somehow an effect of the field’s advancement. One implication of this logic is that the ever-receding horizon of machine intelligence should be understood not as a marker of the technology’s limits, but of its success.

We begin to get a more concrete sense of the field in the section titled ‘Machine learning,’ which outlines the latter’s various forms. Even here, however, issues central to the deliberations of the GGE are passed over. For example, in the statement that “[r]ather than follow a proscribed [sic] set of if–then rules for how to behave in a given situation, learning machines are given a goal to optimize – for example, winning at the game of chess” (p. 2), the example is not chosen at random, but rather is illustrative of the unstated requirement that the ‘goal’ be computationally specifiable. The authors do helpfully explain that “[s]upervised learning is a machine learning technique that makes use of labelled training data” (my emphasis, p. 3), but the contrast with “unsupervised learning,” or “learning from unlabelled data based on the identification of patterns” fails to emphasize the role of the human in assessing the relevance and significance of patterns identified. In the case of reinforcement learning “in which an agent learns by interacting with its environment,” the (unmarked) examples are again from strategy games in which, implicitly, the range of agent/environment interactions is sufficiently constrained. And finally, the section on ‘Deep learning’ helpfully emphasizes that so-called neural networks rely either on very large data sets and extensive labours of human classification (for example, the labeling of images to enable their ‘recognition’), or on domains amenable to the generation of synthetic ‘data’ through simulation (for example, in the case of strategy games like Go). Progress in AI, in sum, has been tied to growth in the availability of large data sets and associated computational power, along with increasingly sophisticated algorithms within highly constrained domains of application.
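
The difference between these forms of learning, and the human labour each presupposes, can be rendered in a few lines. In the hypothetical sketch below (the tiny dataset and its labels are invented for illustration), the ‘supervised’ routine simply reproduces labels that a person has already assigned, while the ‘unsupervised’ routine reports a grouping of unlabelled points; in both cases it falls to humans to supply the labels in the first place, or to decide whether the discovered grouping means anything at all.

```python
# Supervised versus unsupervised learning, reduced to a toy.
# The data, labels, and initialization are invented purely for illustration.

# Supervised: each point (hours of activity, messages sent) has been tagged
# by a person as 'bot' or 'human'. The machine never decides what the
# categories mean; it only reproduces the labelling.
labelled = [((0.2, 5), "human"), ((0.3, 8), "human"),
            ((9.5, 240), "bot"), ((8.7, 310), "bot")]

def nearest_label(point):
    # 1-nearest-neighbour: copy the label of the closest labelled example.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labelled, key=lambda pair: dist(pair[0], point))[1]

print(nearest_label((7.9, 275)))   # prints 'bot': the label people gave to similar points

# Unsupervised: no labels. A two-means clustering of unlabelled points
# discovers a grouping, but naming that grouping, or acting on it,
# remains a human judgment.
unlabelled = [(0.1, 4), (0.4, 9), (9.1, 260), (8.2, 290)]
centres = [unlabelled[0], unlabelled[-1]]            # crude initialization
for _ in range(10):
    groups = [[], []]
    for p in unlabelled:
        d = [sum((x - c) ** 2 for x, c in zip(p, centre)) for centre in centres]
        groups[d.index(min(d))].append(p)
    centres = [tuple(sum(xs) / len(xs) for xs in zip(*g)) for g in groups if g]

print(groups)   # two clusters, with no indication of what, if anything, they signify
```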

Yet in spite of these qualifications, the concluding sections of the paper return to the prospects for increasing machine autonomy:

Intelligence is a system’s ability to determine the best course of action to achieve its goals. Autonomy is the freedom a system has in accomplishing its goals. Greater autonomy means more freedom, either in the form of undertaking more tasks, with less supervision, for longer periods in space and time, or in more complex environments … Intelligence is related to autonomy in that more intelligent systems are capable of deciding the best course of action for more difficult tasks in more complex environments. This means that more intelligent systems could be granted more autonomy and would be capable of successfully accomplishing their goals (p. 5, original emphasis).

The logical leap exemplified in this passage’s closing sentence is at the crux of the debate regarding lethal autonomous weapon systems. The authors of the primer concede that “all AI systems in existence today fall under the broad category of “narrow AI”. This means that their intelligence is limited to a single task or domain of knowledge” (p. 5). They acknowledge as well that “many advance [sic] AI and machine learning methods suffer from problems of predictability, explainability, verifiability, and reliability” (p. 8). These are precisely the concerns that have been consistently voiced, over the past five meetings of the CCW, by those states and civil society organizations calling for a ban on autonomous weapon systems. And yet the primer takes us back, once again, to a starting point premised on general claims for the field of AI’s “rapid advance,” rather than careful articulation of its limits. Is it not the latter that are most relevant to the questions that the GGE is convened to consider?

The UNIDIR primer comes at the same time that the United States has issued a new position paper in advance of the CCW titled ‘Humanitarian benefits of emerging technologies in the area of lethal autonomous weapon systems’ (CCW/GGE.1/2018/WP.4). While the US has taken a cautionary position in relation to lethal autonomous weapon systems in past meetings, asserting the efficacy of already-existing weapons reviews to address the concerns raised by other member states and civil society groups, it now appears to be moving in the direction of active promotion of LAWS on the grounds of promised increases in precision and greater accuracy of targeting, with associated limits on unintended civilian casualties – promises that have been extensively critiqued at previous CCW meetings. Taken together, the UNIDIR primer and the US working paper suggest that, rather than moving forward from the debates of the past five years, the 2018 meetings of the CCW will require renewed efforts to articulate the limits of AI, and their relevance to the CCW’s charter to enact Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects.

Swords to Ploughshares


Having completed my domestic labors for the day (including a turn around the house with my lovely red Miele), I take a moment to consider the most recent development from Boston Dynamics (now part of the robotics initiative at sea in Google’s Alphabet soup).  More specifically, the news is of the latest incarnation of Boston Dynamics’ bipedal robot, nicknamed Atlas and famous for its distribution as a platform for DARPA’s Robotics Challenges. Previously figured as life-saving first responder and life-destroying robot soldier, Atlas is now being repositioned – tongue firmly in cheek – as a member of the domestic workforce.

While irony operates effectively to distance its author from serious investment in truth claims or moral positioning, irony also generally offers a glimpse into its stance towards its objects. So at the risk of humorlessness, it’s worth reading this latest rendering of Atlas’ promises seriously, in the context of other recent developments in humanoid robotics. I wrote in an earlier post about the U.S. military’s abandonment of Boston Dynamics’ Big Dog and its kin, attributed by commentators to some combination of disappointment in the robot’s performance in the field, and a move toward disinvestment in military applications on the part of Google’s ‘Replicant’ initiative (recently restructured as the ‘X’ group). This leaves a robot solution in search of its problem, and where better to turn than to the last stronghold against automation: that is, the home. Along with the work of care (another favourite for robotics prognosticators), domestic labor (the wonders of dish- and clothes-washing machines notwithstanding) has proven remarkably resistant to automation (remarkable at least to roboticists, if not to those of us well versed in this work’s practical contingencies). In a piece headlined ‘Multimillion dollar humanoid robot doesn’t make for a good cleaner,’ the Guardian reproduces a video clip (produced in fast motion with upbeat technomusic soundtrack) showing Florida’s Institute for Human and Machine Cognition (IHMC), runner-up in the 2015 Robotics Challenge, testing new code ‘by getting the multimillion dollar Atlas robot to do household chores.’ In an interesting inversion, the robot is described as the ‘Google-developed US government Atlas robot,’ a formulation which sounds as though the development path went from industry to the public sector, rather than the other way around.

Housework, we’re told, proves ‘more difficult than you might imagine,’ suggesting that the reader imagined by the Guardian is one unfamiliar with the actual exigencies of domestic work (while for other readers those difficulties are easily imaginable). The challenge of housework is revealing of the conditions required for effective automation, and their absence in particular forms of labor. Specifically, robots work well just to the extent that their environments – basically the stimuli that they have to process, and the conditions for an appropriate response – can be engineered to fit their capacities. The factory assembly line has, in this respect, been made into the robot’s home. Domestic spaces, in contrast, and the practicalities of work within them (not least the work of care) are characterized by a level of contingency that has so far flummoxed attempts at automation beyond the kinds of appliances that can either depend on human peripherals to set up their conditions of operation (think loading the dishwasher), or can operate successfully through repetitive, random motion (think Roomba and its clones). Long underestimated in the value chain of labor, robotics for domestic work might just teach us some lessons about the extraordinary complexity of even the most ordinary human activities.
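
A toy simulation, offered only as an illustration of the point about engineered environments, suggests why ‘repetitive, random motion’ is enough for a bounded, empty rectangle and not for a home:

```python
import random

# A sketch of why random motion suffices for a Roomba-style device in a
# bounded, simple space: a random walk on a small grid eventually visits
# most cells, no model of the room required. Grid size and step count are
# invented for illustration.

random.seed(2)
W, H = 8, 6                          # a small, empty, rectangular 'room'
visited = set()
x, y = 0, 0

for step in range(5000):
    visited.add((x, y))
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    # bump-and-turn behaviour: if the move would leave the room, stay put this step
    if 0 <= x + dx < W and 0 <= y + dy < H:
        x, y = x + dx, y + dy

print(f"covered {len(visited) / (W * H):.0%} of the room after 5000 random steps")
# What the sketch leaves out is exactly what housework involves: clutter,
# stairs, cords, pets, and objects that must be recognized and handled
# rather than merely bumped into.
```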

Meanwhile in Davos the captains of multinational finance and industry and their advisors are gathered to contemplate the always-imminent tsunami of automation, including artificial intelligence and humanoid robots, that is predicted to sweep world economies in the coming decades. The Chicago Tribune reports:

At IBM, researchers are working to build products atop the Watson computing platform – best known for its skill answering questions on the television quiz show “Jeopardy” – that will search for job candidates, analyze academic research or even help oncologists make better treatment decisions. Such revolutionary technology is the only way to solve “the big problems” like climate change and disease, while also making plenty of ordinary workers more productive and better at their jobs, according to Guru Banavar, IBM’s vice president for cognitive computing. “Fundamentally,” Banavar said, “people have to get comfortable using these machines that are learning and reasoning.” [SIC]

Missing between the lines of the reports from and around Davos are the persistent gaps between the rhetoric of AI and robotics, and the realities. These gaps mean that the progress of automation will be more one of degradation of labor than its replication, so that those who lose their jobs will be accompanied by those forced to adjust to the limits and rigidities of automated service provision. The threat, in other words, is not that any job can be automated, as the gurus assert, but rather that in a political economy based on maximizing profitability for the few, more and more jobs will be transformed into jobs that can be automated, regardless of what is lost.  Let us hope that in this economy, low- and no-wage jobs, like care provision and housework, might show a path to resistance.

hu·bris /ˈ(h)yo͞obrəs/ noun: hubris 1. excessive pride or self-confidence.


Nemesis, by Alfred Rethel (1837)

The new year opens with an old story, as The Independent headlines that Facebook multibillionaire Mark Zuckerberg (perhaps finding himself in a crisis of work/life balance) will “build [a] robot butler to look after his child” [sic: those of us who watch Downton Abbey know that childcare is not included in the self-respecting butler’s job description; even the account of divisions of labour among the servants is garbled here], elaborating that “The Facebook founder and CEO’s resolution for 2016 is to build an artificially intelligent system that will be able to control his house, watch over his child and help him to run Facebook.” To put this year’s resolution into perspective, we learn (too much information) that “Mr. Zuckerberg has in the past taken on ‘personal challenges’ that have included reading two books per month, learning Mandarin and meeting a new person each day.” “Every challenge has a theme,” Zuckerberg explains, “and this year’s theme is invention” (a word that, as we know, has many meanings).

We’re reminded that FB has already made substantial investments in AI in areas such as automatic image analysis, though we learn little about the relations (and differences) between those technologies and the project of humanoid robotics. I’m reassured to hear that Zuckerberg has said “that he would start by looking into existing technologies,” and hope that might include signing up to be a follower of this blog.  But as the story proceeds, it appears in any case that the technologies that Z has in mind are less humanoid robots than the so-called Internet of Things (i.e. networked devices, presumably including babycams) and data visualization (for his day job). This is of course all much more mundane and so, in the eyes of The Independent’s headline writers, less newsworthy.

The title of this post is of course the most obvious conclusion to draw regarding the case of Mark Zuckerberg; in its modern form, ‘hubris’ refers to an arrogant individual who believes himself capable of anything. And surely in a political economy where excessive wealth enables disproportionate command of other resources, Zuckerberg’s self-confidence is not entirely unwarranted. In this case, however, Zuckerberg’s power is further endowed by non-investigative journalism, which fails to engage in any critical interrogation of his announcement. Rather than questioning Zuckerberg’s resolution for 2016 on the grounds of its shaky technical feasibility or dubious politics (trivializing the labours of service and ignoring their problematic histories), The Independent makes a jump cut to the old saws of Stephen Hawking, Elon Musk and Ex Machina. Of course The Independent wouldn’t be the first to notice the film’s obvious citation of Facebook and its founder and CEO (however well the latter is disguised by the hyper-masculine and morally degenerate figure of Nathan). But the comparison, I think, ends there; and of course, however fabulous, neither Zuckerberg nor Facebook is fictional.

The original Greek connotations of the term ‘hubris’ referenced not just overweening pride, but more violent acts of humiliation and degradation, offensive to the gods. While Zuckerberg’s pride is certainly more mundane, his ambitions join with those of his fellow multibillionaires in their distorting effects on the worlds in which their wealth is deployed (see Democracy Now for the case of Zuckerberg’s interventions into education).  And it might be helpful to be reminded that in Greek tragedy excessive pride towards or defiance of the gods leads to nemesis. The gods may play a smaller role in the fate of Mark Zuckerberg, however, and the appropriate response, I think, is less retributive than redistributive justice.

Reality Bites

 


LS3 test during Rim of the Pacific Exercise, July 2014

A pack of international news outlets over the past few days have reported the abandonment by the US Department of Defense of Boston Dynamics’ Legged Squad Support System or LS3 (aka ‘Big Dog’) and its offspring (see Don’t kick the Dog). After five years and US$42 million in investment, what was promised to be a best-in-breed warfighting companion stumbled over a mundane but apparently intractable problem – noise. Powered by a gas (petrol) motor likened to a lawnmower in sound, the robot’s capacity for carrying heavy loads (400 lbs or 181.4kg), and its much-celebrated ability to navigate rough terrain and right itself after falling (or be easily assisted in doing so), in the end were not enough to make up for the fact that, in the assessment of the US Marines who tested the robot, the LS3 was simply ‘too loud’ (BBC News 30 January 2015). The trial’s inescapable conclusion was that the noise would reveal a unit’s presence and position, bringing more danger than aid to the U.S. warfighters that it was deployed to support.

A second concern contributing to the DoD’s decision was the question of the machine’s maintenance and repair. Long ignored in narratives about technological progress, the place of essential practices of inventive maintenance and repair has recently become a central topic in social studies of science and technology (see Steven J. Jackson, “Rethinking Repair,” in Tarleton Gillespie, Pablo Boczkowski, and Kirsten Foot, eds. Media Technologies: Essays on Communication, Materiality and Society. MIT Press: Cambridge MA, 2014.). These studies are part of a wider project of recognizing the myriad forms of invisible labour that are essential conditions for keeping machines working – one of the enduring continuities in the history of technology.

The LS3 trials were run by the Marine Corps Warfighting Lab, most recently at Kahuku Training Area in Hawaii during the Rim of the Pacific exercise in July of 2014. Kyle Olson, spokesperson for the Lab, reported that seeing the robot’s potential was challenging “because of the limitations of the robot itself.” This phrasing is noteworthy, as the robot itself – the actual material technology – interrupts the progressive elaboration of the promise that keeps investment in place. According to the Guardian report (30 December 2015), both ‘Big Dog’ and ‘Spot,’ an electrically powered and therefore quieter but significantly smaller prototype, are now in storage, with no future experiments planned.

The cessation of the DoD investment will presumably come as a relief to Google, which acquired Boston Dynamics in 2013, saying at the time that it planned to move away from the military contracts that it inherited with the acquisition.  Boston Dynamics will now, we can assume, turn its prodigious ingenuity in electrical and mechanical engineering to other tasks of automation, most obviously in manufacturing. The automation of industrial labour has, somewhat ironically given its status as the original site for robotics, recently been proclaimed to be robotics’ next frontier. While both the BBC and Guardian offer links to a 2013 story about the great plans that accompanied Google’s investments in robotics, more recent reports characterize the status of the initiative (internally named ‘Replicant’) as “in flux,” and its goal of producing a consumer robot by 2020 as in question (Business Insider November 8, 2015). This follows the departure of former Google VP Andy Rubin in 2014 (to launch his own company with the extraordinary name ‘Playground Global’), just a year after he was hailed as the great visionary leader who would turn Google’s much celebrated acquisition of a suite of robotics companies into a unified effort. Having joined Google in 2005, when the latter acquired his smartphone company Android, Rubin was assigned to the leadership of Google’s robotics division by co-founder Larry Page. According to Business Insider’s Jillian D’Onfro, Page

had a broad vision of creating general-purpose bots that could cook, take care of the elderly, or build other machines, but the actual specifics of Replicant’s efforts were all entrusted to Rubin. Rubin has said that Page gave him a free hand to run the robotics effort as he wanted, and the company spent an estimated $50 million to $90 million on eight wide-ranging acquisitions before the end of 2013.

The unifying vision apparently left with Rubin, who has yet to be replaced. D’Onfro continues:

One former high-ranking Google executive says the robot group is a “mess that hasn’t been cleaned up yet.” The robot group is a collection of individual companies “who didn’t know or care about each other, who were all in research in different areas,” the person says. “I would never want that job.”

So another reality that ‘bites back’ is added to those that make up the robot itself; that is, the alignment of the humans engaged in its creation. Meanwhile, Boston Dynamics’ attempt to position itself on the entertainment side of the military-entertainment complex this holiday season was met less with amusement than alarm, as media coverage characterized it variously as ‘creepy’ and ‘nightmarish.’


Resistance, it seems, is not entirely futile.

Just-so stories


Alerted that BBC News/Technology has developed a story titled ‘Intelligent Machines: The Truth Behind AI Fiction’, I follow the link with some hopeful anticipation. The piece opens: ‘Over the next week, the BBC will be looking into all aspects of artificial intelligence – from how to build a thinking machine, to the ethics of doing so, to questions about whether an AI can ever be creative.’  But as I read on, my state changes to one that my English friends would characterize as gobsmacked.  Instead of in-depth, critical journalism, this piece reads like a (somewhat patronizing) children’s primer with corporate sponsorship.  We’re told, for example, that Watson, IBM’s supercomputer, ‘can understand natural language and read millions of documents in seconds’.  But if it’s a deeper understanding of the state of the art in AI that we’re after, we can’t let terms like ‘understand’ and ‘read’ go by unremarked. Rather, it’s precisely the translation of computational processes as ‘understanding’ or ‘reading’, and the difference lost in that translation from our understanding and reading of those terms, that needs to be illuminated.  We might then fully appreciate the ingenious programming that enables the system singularized as ‘Watson’ to compete successfully on the televised quiz show Jeopardy, despite the machine’s cluelessness regarding the cultural references that its algorithms and databases encode.

Things go from bad to worse, however, when we’re told that Watson ‘is currently working in harmony with humans, in diverse fields such as the research and development departments of big companies such as Proctor and Gamble and Coca-Cola – helping them find new products’.  Why equate harmonious working relations with the deployment of an IBM supercomputer in the service of corporate R&D?  And what kinds of ongoing labours of code development and maintenance are required to reconfigure a cluster of ninety IBM Power 750 servers, each of which uses a 3.5 GHz POWER7 eight-core processor, in such a way that it can operate usefully within these enterprises?  The anthropomorphism of Watson obfuscates, rather than explicates, these ‘truths’ about artificial intelligence and its agencies.
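
To see what is lost in that translation, consider a minimal sketch of what ‘reading millions of documents’ can amount to computationally (an illustrative toy, emphatically not a description of Watson’s actual architecture): scoring word overlap between a question and stored passages, and returning whichever passage scores highest, with no grasp of what any of it is about.

```python
# A toy of document 'reading' as word-overlap scoring. The documents and
# question are invented; this is not Watson's actual pipeline.

documents = {
    "doc1": "the eiffel tower is in paris and was completed in 1889",
    "doc2": "mount everest is the highest mountain above sea level",
    "doc3": "the great wall of china is visible across northern china",
}

def score(question, text):
    # Count shared words: no parsing, no reference, no understanding,
    # just overlap between two bags of words.
    return len(set(question.lower().split()) & set(text.lower().split()))

question = "what is the highest mountain"
best = max(documents, key=lambda d: score(question, documents[d]))
print(best)   # prints 'doc2', by arithmetic on word counts alone
```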

The structure of the story is a series of loops between fiction and ‘fact’, moving from blockbuster films to just-so stories. In place of the Terminator, we’re told, ‘The US military unit Darpa [sic] is developing lots of robotic kit, such as exoskeletons to give soldiers superhuman strength and access to visual displays that will help their decision making. It is also using Atlas robots, developed by Boston Dynamics, intended for search and rescue.’ (There is a brief mention of the campaign against lethal autonomous weapons, though with no links provided).  After a reference to C-3PO, we’re told that ‘In the real world, companion robots are really starting to take off’, exemplified by Pepper, which ‘has learnt about human emotions by watching videos showing facial expressions.’ (See my earlier post on companion robots here.) From Wall-E, surely among the most endearing of fictional robots (see Vivian Sobchack’s brilliant analysis) we go to Roomba, about which we’re told that ‘[a]necdotal evidence suggests some people become as attached to them as pets and take them on holiday.’ We finally close (not a moment too soon) with Ex Machina’s AVA on one hand, and roboticist Hiroshi Ishiguro’s humanoid twin on the other, along with the assurance by Prof Chetan Dube, chief executive of software firm IPsoft, that his virtual assistant Amelia ‘will be given human form indistinguishable from the real thing at some point this decade.’

In the absence of any indication that this story is part of a paid advertisement, I’m at a loss to explain how it achieved the status of investigative journalism within the context of a news source like the BBC. If this is what counts as thoughtful reporting, the prospects for AI-based replication are promising indeed.

On killer robots, celebrity scientists, and the campaign to ban lethal autonomous weapons


Screencap of South Korean autonomous weapon in action courtesy of Richard Anders via YouTube.  Reticle added by Curiousmatic.

Amidst endless screen shots from Terminator 3: Rise of the Machines (Warner Bros Pictures, 2003), and seemingly obligatory invocations of Stephen Hawking, Elon Musk and Steve Wozniak as signatories, the media reported the release on 28 July of an open letter signed by thousands of robotics and AI researchers calling for a ban on lethal autonomous weapons. The letter’s release to the press was timed to coincide with the opening of the International Joint Conference on Artificial Intelligence (IJCAI 2015) in Buenos Aires. Far more significant than the inclusion of celebrity signatories – their stunning effect in drawing international media attention notwithstanding – is the number of prominent computer scientists (not a group prone to add their names to political calls to action) who have been moved to endorse the letter. Consistent with this combination of noise and signal, the commentaries generated by the occasion of the letter’s release range from aggravatingly misleading to helpfully illuminating.

The former category is well represented in an interview by Fox News’ Shepard Smith with theoretical physicist and media scientist Michio Kaku. In response to Smith’s opening question regarding whether or not concerns about autonomous weapons are overblown, Kaku suggests that “Hollywood has us brainwashed” into thinking that Terminator-style robots are just around the corner. Quite the contrary, he assures us, “we have a long ways to go before we have sentient robots on the battlefield.” This ‘long ways to go’ is typical of futurist hedges that, while seemingly interrupting narratives of the imminent rise of the machines, implicitly endorse the assumption of continuing progress in that direction. Kaku then further affirms the possibility, if not inevitability, of the humanoid weapon: “Now, the bad news of course is that once we do have such robots, these autonomous killing machines could be a game changer.” Having effectively clarified that his complaint with Hollywood is less the figure of the Terminator-style robot than its timeline, he reassures us that “the good news is, they’re decades away. We have plenty of time to deal with this threat.” “Decades away, for sure?” asks Shepard Smith. “Not for sure, cuz we don’t know how progress is,” Kaku replies, and then offers what could be a more fundamental critique of the sentient robot project. Citing the disappointments of the recent DARPA Robotics Challenge as evidence, he explains: “It turns out that our brain is not really a digital computer.” The lesson to take from this, he proposes, is that the autonomous killing machine “is a long term threat, it’s a threat that we have time to digest and deal with, rather than running to the hills like a headless chicken” (at which he and Shepard share a laugh). While I applaud Kaku’s scepticism regarding advances in humanoid robots, it’s puzzling that he himself frames the question in these terms, suggesting that it’s the prospect of humanoid killer robots to which the open letter is addressed, and (at least implicitly) dismissing its signatories as the progeny of Chicken Little.

Having by now spent all but 30 seconds of his 3 minutes and 44, Kaku then points out that “one day we may have a drone that can seek out human targets and just kill them indiscriminately. That could be a danger, a drone that’s only mission is to kill anything that resembles a human form … so that is potentially a problem – it doesn’t require that much artificial intelligence for a robot to simply identify a human form, and zap it.” Setting aside the hyperbolic reference to indiscriminate targeting of any human form (though see the Super Aegis 2 system projected to patrol the heavily armed ‘demilitarized zone’ between North and South Korea), this final sentence (after which the interview concludes) begins to acknowledge the actual concerns behind the urgency of the campaign for a ban on lethal autonomous weapons. Those turn not on the prospect of a Terminator-style humanoid or ‘sentient’ bot, but on the much more mundane progression of increasing automation in military weapon systems: in this case, automation of the identification of particular categories of humans (those in a designated area, or who fit a specified and machine-readable profile) as legitimate targets for killing. In fact, it’s only the popular media that have raised the prospect of fully intelligent humanoid robots: the letter, and the wider campaign for a ban on lethal autonomous weapons, has nothing to do with ‘Terminator-style’ robots. The developments that are cited in the letter are both far more specific, and more imminent.

That specificity is clarified in a CNET story about the open letter produced by Luke Westaway, broadcast on July 27th. Despite its inclusion of cuts from Terminator 3 and its invocation of the celebrity triad, we’re also informed that the open letter defines autonomous weapons as those that “select and engage targets without human intervention.” The story features interviews with ICRAC’s Noel Sharkey, and Thomas Nash of the UK NGO Article 36. Sharkey helpfully points out that rather than assuming humanoid form, lethal autonomous weapons are much more likely to look like already-existing weapons systems, including tanks, battle ships and jet fighters. He explains that the core issue for the campaign is an international ban that would pre-empt the delegation of ‘decisions’ to kill to machines. It’s worth noting that the word ‘decision’ in this context needs to be read without the connotations of that term that associate it with human deliberation. A crucial issue here – and one that could be much more systematically highlighted in my view – is that this delegation of ‘the decision to kill’ presupposes the specification, in a computationally tractable way, of algorithms for the discriminatory identification of a legitimate target. The latter, under the Rules of Engagement, International Humanitarian Law and the Geneva Conventions, is an opponent that is engaged in combat and poses an ‘imminent threat’. We have ample evidence for the increasing uncertainties involved in differentiating combatants from non-combatants under contemporary conditions of war fighting (even apart from crucial contests over the legitimacy of targeting protocols). The premise that legitimate target identification could be rendered sufficiently unambiguous to be automated reliably is at this point unfounded (apart from certain nonhuman targets like incoming missiles with very specific ‘signatures’, which also clearly pose an imminent threat).

‘Do we want to live in a world in which we have given machines the power to take human lives, without a human being there to pull the trigger?’ asks Thomas Nash of Article 36 (CNET, 27 July 2015). Of course the individual human with their hand on the trigger is effectively dis-integrated – or better, highly distributed – in the case of advanced weapon systems. But the existing regulatory apparatus that comprises the laws of war relies fundamentally on the possibility of assigning moral and legal responsibility. However partial and fragile its reach, this regime is our best current hope for articulating limits on killing. The precedent for a ban on lethal autonomous weapons lies in the United Nations Convention on Certain Conventional Weapons (CCW), the body created ‘to ban or restrict the use of specific types of weapons that are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately.’  Achieving that kind of legally binding international agreement, as Westaway points out, is a huge task, but as Nash explains, there is some progress. Since the launch of the campaign in 2013, the CCW has put the debate on lethal autonomous weapons onto its agenda and held two international ‘expert’ consultations. At the end of this year, the CCW will consider whether to continue discussions, or to move forwards on the negotiation of an international treaty.


Convention on Certain Conventional Weapons, May 2014

To appreciate the urgency of interventions into the development of lethal autonomous weapons, science and technology studies (STS) offers a useful concept. The idea of ‘irreversibility’ points to the observation that, while technological trajectories are never self-determining or inevitable, the difficulties of undoing technological projects increase over time. (See for example Callon, Michel (1990), Techno-economic networks and irreversibility. The Sociological Review, 38: 132–161) Investments (both financial and political) increase as does the iterative installation and institutionalization of associated infrastructures (both material and social). The investments required to dismantle established systems grow commensurately. In the CNET interview, Nash points to the entrenched and expanding infrastructures of drone technology as a case in point.

BBC World News (after invoking the Big Three, and also offering the obligatory reference to The Terminator) interviews Professor Heather Roff, who helped to draft the letter. The BBC’s Dominic Laurie asks Roff to clarify the difference between a remotely operated drone and the class of weapons to which the letter is addressed. Roff points to the fact that the targets for current drone operations are ‘vetted and checked’, in the case of the US military by a Judge Advocate General (JAG). She is quick to add, “Now, whether or not that was an appropriate target or that there are friendly fire issues or there are collateral killings is a completely different matter”; what matters for a ban on lethal autonomous weapons, she emphasizes, is that “there is a human being actually making that decision, and there is a locus of responsibility and accountability that we can place on that human.” In the case of lethal autonomous weapons, she argues, human control is lacking “in any meaningful sense”.

The question of ‘meaningful human control’ has become central to debates about lethal autonomous weapons. As formulated by Article 36 and embraced by United Nations special rapporteur on extrajudicial, summary or arbitrary executions Christof Heyns, it is precisely the ambiguity of the phrase that works to open up the discussion in vital and generative ways. In collaboration with Article 36, Roff is now beginning a project – funded by the Future of Life Institute – to develop the concept of meaningful human control more fully. The project aims to create a dataset “of existing and emerging semi-autonomous weapons, to examine how autonomous functions are already being deployed and how human control is maintained. The project will also bring together a range of actors including computer scientists, roboticists, ethicists, lawyers, diplomats and others to feed into international discussions in this area.”

While those of us engaged in thinking through STS are preoccupied with the contingent and shifting distributions of agency that comprise complex sociotechnical systems, the hope for calling central actors to account rests on the possibility of articulating relevant legal and normative frameworks. These two approaches are not, in my view, incommensurable. Jutta Weber and I have recently attempted to set out a conception of human-machine autonomies that recognizes the inseparability of human and machine agencies, and the always contingent nature of ideas of autonomy, in a way that supports the campaign against lethal autonomous weapons. Like the signatories to the open letter, and as part of a broader concern to interrupt the intensification of automated killing, we write of the urgent need to reinstate human deliberation at the heart of matters of life and death.

 

Humanizing humanity

A series of recent media reports on robotic futures has provoked this post.  I’ll begin with the latest announcements of the imminent arrival of the perfect domestic robot friend/pet/servant, this time in the form of Jibo, the ‘family robot’.  The crowdfunding appeal via Indiegogo features a promotional video headlined by Jibo, Inc. CEO Cynthia Breazeal, a faculty member in MIT’s Media Lab.


[Image: still from the Jibo ‘a robot for your family’ promotional video]

In a kind of retro throwback to the sitcoms and Mad Men-esque consumer advertising of the 1950s and 60s, the video shows us an affluent, Caucasian, heteronormative American family demonstrating their love and connectedness through a series of vignettes in which Jibo plays a supporting, but clearly central, role. With a feel-good solo-piano soundtrack playing in the background, the video opens with a slow zoom in on an image of a pristine family home, as the narrator explains “This is your house [cut to slow zoom on the family car parked in the driveway] this is your car, [cut and slow zoom to electric toothbrush on the bathroom vanity] this is your toothbrush. These are your things, but these [cut to slow zoom on framed family photo] are the things that matter.  And somewhere in between [cut to Jibo, which swivels its ‘head’ in the direction of the camera] is this guy. Introducing Jibo, the world’s first family robot” (my emphasis).  As this stereotypical American family becomes the world, or at least those first to experience what the world presumably desires, we see a series of scenes in which their already privileged lives are further enhanced through Jibo’s obsequious intercessions.  At the video’s end, the scene shifts to Cynthia Breazeal, seated in what looks like a tidy garage workshop, who poses the questions: “What if technology actually treated you like a human being? … What if technology helped you, like a partner, rather than simply being a tool? That’s what Jibo’s about.”  This is followed by a call for our help “to build Jibo, to bring it to the world, and to build the community.  Let’s work together, to make Jibo truly great. And together, we can humanize technology.”

As promotion morphs into mobilization, and consumerism into a call for collective action, we might turn to a second story from The China Post published several days earlier, titled ‘Foxconn to increase robot usage to curb workers’ suicide rates’ (Lan Lan and Li Jun, Asia News Network, July 14, 2014).

[Image: Foxconn assembly line]

From this story we learn that “Foxconn Technology Group plans to use more robots in its various manufacturing operations as part of its efforts to replace ‘dangerous, boring and repeated’ work, which has often been blamed for the series of suicides at its various facilities in recent years.”  While the embedded quote is not attributed, it echoes the oft-repeated trio of ‘dangerous, dull, and dirty’ that characterizes those forms of labour considered a priority for automation.  Because such jobs are assumed to be ones that no human would want, this valuation renders invisible the fact that they are the only jobs that, worldwide, increasing numbers of people rely upon to survive.  The article goes on to describe the new industrial park in Guiyang being custom designed for Foxconn’s automated production lines, in which energy saving and environmental protection will be prioritized to meet the preference of customers like Apple for more environmentally friendly manufacturing.

As robots like Jibo, designed for friendship with certain humans, appear in these stories, other humans (those whose already-precarious labour is soon to be displaced by further automation) are erased.  And then there’s the robot apocalypse, which, according to tech reporter Dylan Love, “scientists are afraid to talk about” (Business Insider, July 18, 2014).  In a story that invites roboticists and other experts to comment on the prospective risks of a “post-singularity world” (the ‘singularity’ being that moment at which the capacities of artificially intelligent machines exceed those of the human), Love quotes Northwestern University law professor John O. McGinnis, who in his paper ‘Accelerating AI’ writes:

The greatest problem is that such artificial intelligence may be indifferent to human welfare. Thus, for instance, unless otherwise programmed, it could solve problems in ways that could lead to harm against humans. But indifference, rather than innate malevolence, is much more easily cured. Artificial intelligence can be programmed to weigh human values in its decision making. The key will be to assure such programming.

In the context of these earlier stories, concerns about the possibility that future humanlike machines might be indifferent to human welfare cannot help but raise the question of contemporary humans’ seeming indifference to the welfare of other humans.  As long as representations of the human family like those of Jibo’s promotion continue to universalize the privileged forms of life that they depict, they effectively erase the unequal global divisions of labour and livelihood on which the production of ‘our things’ currently depends.  As long as news of Foxconn celebrates the company’s turn to environmentally friendly manufacturing while failing to acknowledge the desperate labour conditions that drive Foxconn workers first to take the dangerous, boring, and repetitive work on offer in the manufacture of Apple products, then drive many of them to suicide, and now threaten to render their lives more desperate with the loss of even those jobs, the problem of just what our shared ‘human values’ are remains.  And before we take seriously the question of what it would mean for our technology to treat us as human beings, we might ask what it would mean for us to treat other humans as human beings, including the commitments to social justice that this would entail.

Postscript:  For a small bit of good news we might turn to one more story that appears this week. Reporter Martyn Williams writes today in PC World that, since its purchase by Google, robot company Boston Dynamics has seen its funding from the US Defense Department drop from the $30 million-a-year range of the past several years to just $1.1 million for 2014 (the latter for participation in DARPA’s Robotics Challenge).  Our relief might be mitigated by speculation that Google will focus its own robotics efforts on factory automation and ‘home help’, but this small movement away from militarism is a welcome one nonetheless.

Slow robots and slippery rhetorics

[Image: the DARPA Robotics Challenge (DRC)]

The recently concluded DARPA Robotics Challenge (DRC), held this past week at a NASCAR racetrack near Homestead, Florida, seems to have had a refreshingly sobering effect on the media coverage of advances in robotics.  The field of sixteen competitors (it was to be seventeen, but ‘travel issues’ prevented the Chinese team from participating), all victors of earlier trials, represented the state of the art internationally in the development of mobile, and more specifically ‘legged’, robots.  The majority of the teams worked with machines configured as upright, bipedal humanoids, while two figured their robots as primates (RoboSimian and CHIMP), and one as a non-anthropomorphised ‘hexapod’. The Challenge staged a real-time, public demonstration of the state of the art; one which, it seems, proved disillusioning to many who witnessed it.  For all but the most technically knowledgeable in the audience, the actual engineering achievements were hard to appreciate.  More clearly evident were the slowness and clumsiness of the robots, and their vulnerability to failure at tasks that human contenders would have found quite unremarkable.  A photo gallery titled ‘Robots to the Rescue, Slowly’ is indicative, and the BBC titles its coverage of the Challenge ‘Robot competition reveals rise of the machines not imminent’.

Reporter Zachary Fagenson sets the scene with a representative moment in the competition:

As a squat, red and black robot nicknamed CHIMP gingerly pushed open a spring-loaded door a gust of wind swooped down onto the track at the Homestead-Miami Speedway and slammed the door shut, eliciting a collective sigh of disappointment from the audience.

In the BBC’s video coverage of the event, Dennis Hong, Director of the Virginia Tech Robotics Lab, tells the interviewer: “When many people think about robots, they watch so many science fiction movies, they think that robots can run and do all the things that humans can do.  From this competition you’ll actually see that that is not the truth. The robots will fall, it’s gonna be really, really slow…”, and DARPA Director Arati Prabhakar concurs: “I think that robotics is an area where our imaginations have run way ahead of where the technology actually is, and this challenge is not about science fiction, it’s about science fact.”  While many aspects of the competition would challenge the separateness of fiction and fact (not least the investment of its funders and competitors in figuring robots as humanoids), this is nonetheless a difference that matters.

These cautionary messages are contradicted, however, in a whiplash-inducing moment at the close of the BBC clip, when Boston Dynamics Project Manager Joe Bondaryk makes the canonical analogy between the trials and the Wright brothers’ first flight, reassuring us that “If all this keeps going, then we can imagine having robots by 2015 that will, you know, that will help our firefighters, help our policemen to do their jobs” (just one year after next year’s Finals, and a short time frame even compared to the remarkable history of flight).

The winning team, the University of Tokyo spin-out company Schaft (recently acquired by Google), attributes its differentiating edge in the competition to a new high-voltage, liquid-cooled motor technology that draws on a capacitor rather than a battery for power, which the engineers say lets the robot’s arms move and pivot at higher speeds than would otherwise be possible. Second and fourth place went to teams that had adopted the Atlas robot from Boston Dynamics (another recent Google acquisition) as their hardware platform: the Florida Institute for Human and Machine Cognition (IHMC) and MIT, respectively. (With much fanfare, DARPA funded the delivery of Atlas robots to a number of the contenders earlier this year.)  Third place went to Carnegie Mellon University’s ‘CHIMP’, while one of the least successful entrants, scoring zero points, was NASA’s ‘Valkyrie’, described in media reports as the only gendered robot in the group (as signaled by its white plastic vinyl body and suggestive bulges in the ‘chest’ area).  Asked about the logic of Valkyrie’s form factor, Christopher McQuin, NASA’s chief engineer for hardware development, offered: “The goal is to make it comfortable for people to work with and to touch.”  (To adequately read this comment, and Valkyrie’s identification as gendered against the ‘neutrality’ of the other competitors, would require its own post.)  The eight teams with the highest scores are eligible to apply for up to $1 million in funding to prepare for the final round of the Challenge in late 2014, where the winner will take a $2 million prize.
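
For readers wondering why a capacitor bank should make such a difference, a rough back-of-envelope comparison may help; the numbers below are my own assumptions rather than Schaft’s unpublished specifications. Peak deliverable power scales with the square of a source’s voltage and inversely with its internal resistance, and capacitor banks typically have far lower internal resistance than battery packs of comparable voltage, as this minimal Python sketch illustrates.

# Back-of-envelope sketch of why a low-resistance capacitor bank can deliver more
# peak power to actuators than a battery pack alone. All numbers are assumed for
# illustration; they are not Schaft's actual specifications.

def peak_power_watts(open_circuit_voltage: float, internal_resistance_ohms: float) -> float:
    # Maximum power a source can deliver into a matched load: V^2 / (4R).
    return open_circuit_voltage ** 2 / (4 * internal_resistance_ohms)

battery_peak = peak_power_watts(open_circuit_voltage=48.0, internal_resistance_ohms=0.10)    # assumed pack
capacitor_peak = peak_power_watts(open_circuit_voltage=48.0, internal_resistance_ohms=0.01)  # assumed bank

print(f"assumed battery pack peak power:   {battery_peak:,.0f} W")    # 5,760 W
print(f"assumed capacitor bank peak power: {capacitor_peak:,.0f} W")  # 57,600 W
# With these assumptions the capacitor bank offers roughly ten times the peak power,
# the kind of headroom that brief, high-torque arm movements demand.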

An article on the Challenge in the MIT Technology Review by journalist Will Knight includes the sidebar: ‘Why it Matters: If they can become nimbler, more dexterous, and safer, robots could transform the way we work and live.’  Knight thereby implies that we should care about robots, their actual clumsiness and unwieldiness notwithstanding, because if they were like us, they could transform our lives.  The invocation of the way we live here echoes the orientation of the Challenge overall, away from robots as weapons – as instruments of death – and towards the figure of the first responder as the preserver of life.  Despite its sponsorship by the Defense Advanced Research Projects Agency (DARPA), the agency charged with developing new technology for the military, the Challenge is framed not in terms of military R&D, but as an exercise in the development of ‘rescue robots’.

More specifically, DARPA statements, as well as media reports, position the Challenge itself, along with the eight tasks assigned to the robotics teams (e.g. walking over rubble, clearing debris, punching a hole in drywall, turning a valve, attaching a fire hose, climbing a ladder), as a response to the disastrous meltdown of the Fukushima Daiichi nuclear power plant.  (For a challenge to this logic see Maggie Mort’s comment on my earlier post ‘will we be rescued?’)  While this raises the question of how robots would be hardened against the effects of nuclear radiation, and at what cost (the robots competing in the Challenge already cost up to several million dollars each), Knight suggests that if robots can be developed that are capable of taking on these tasks, “they could also be useful for much more than just rescue missions.”  Knight observes that the robot of the winning team “is the culmination of many years of research in Japan, inspired in large part by concerns over the country’s rapidly aging population,” a proposition affirmed by DARPA Program Manager Gill Pratt, who “believes that home help is the big business opportunity [for] humanoid robots.”  Just what the connection might be between these pieces of heavy machinery and care at home is left to our imaginations, but quite remarkably Pratt further suggests “that the challenges faced by the robots involved in the DARPA event are quite similar to those that would be faced in hospitals and nursing homes.”

In an article by Pratt published early in December in the Bulletin of the Atomic Scientists, titled ‘Robot to the Rescue’, we catch a further glimpse of what the ‘more than rescue’ applications for the Challenge robots might be.  Pratt’s aspirations for the DARPA Robotics Challenge invoke the familiar (though highly misleading) analogy between the robot and the developing human: “by the time of the DRC Finals, DARPA hopes the competing robots will demonstrate the mobility and dexterity competence of a 2-year-old child, in particular the ability to execute autonomous, short tasks such as ‘clear out the debris in front of you’ or ‘close the valve,’ regardless of outdoor lighting conditions and other variations.”  I would challenge this comparison on the basis that it underestimates the 2-year-old child’s competencies, but I suspect that many parents of 2-year-olds might question its aptness on other grounds as well.

Having set out the motivation and conditions of the Challenge, in a section titled ‘Don’t be scared of the robot’ Pratt turns to the “broad moral, ethical, and societal questions” that it raises, noting that “although the DRC will not develop lethal or fully autonomous systems, some of the technology being developed in the competition may eventually be used in such systems.”  He continues:

society is now wrestling with moral and ethical issues raised by remotely operated unmanned aerial vehicles that enable reconnaissance and projection of lethal force from a great distance … the tempo of modern warfare is escalating, generating a need for systems that can respond faster than human reflexes. The Defense Department has considered the most responsible way to develop autonomous technology, issuing a directive in November 2012 that carefully regulates the way robotic autonomy is developed and used in weapons systems. Even though DRC robots may look like those in the movies that are both lethal and autonomous, in fact they are neither.

The slippery slope of automation and autonomy in military systems, and the U.S. Defense Department’s ambiguous assurances about its commitment to the continued role of humans in targeting and killing, are the topic of ongoing debate and of a growing campaign to ban lethal autonomous weapons (see the ICRAC website for details).  I would simply note here the moment of tautological reasoning wherein ‘the tempo of modern warfare,’ presented as a naturally occurring state of the world, becomes the problem for which faster response is the solution, which in turn justifies the need for automation, which in turn increases the tempo, which in turn, etc.

In elaborating the motivation for the Challenge, Gill Pratt invokes a grab-bag of familiar specters of an increasingly ‘vulnerable society’ (population explosion with disproportionate numbers of frail elderly, climate change, weapons of mass destruction held in the wrong hands) as calling for, if not a technological solution, at least a broad mandate for robotics research and development:

The world’s population is continuing to grow and move to cities situated along flood-prone coasts. The population over age 65 in the United States is forecast to increase from 13 percent to 20 percent by 2030, and the elderly require more help in emergency situations. Climate change and the growing threat of proliferation of weapons of mass destruction to non-state actors add to the concern. Today’s natural, man-made and mixed disasters might be only modest warnings of how vulnerable society is becoming.

One implication of this enumeration is that even disaster can be good for business, and humanoid robotics research, Pratt assures us, is all about saving lives and protecting humans (a term that seems all-encompassing at the same time that it erases from view how differently actual persons are valued in the rhetorics of ‘Homeland Security’).  The figure of the ‘warfighter’ appears only once, towards the end of Pratt’s piece, and even there the robot’s role in the military is about preserving, not taking, life.  But many of us are not reassured by the prospect of robot rescue, and would instead call on the U.S. Government to take action to mitigate climate change, to dial down its commitment to militarism along with its leading role in the international arms trade, and to invest in programs that provide meaningful jobs at a living wage to humans, not least those engaged in the work of care.  The robot Challenge could truly be an advance if the spectacle of slow robots were to raise questions about the future of humanoid robotics as a project for our governments and universities to invest in, and about the good faith of slippery rhetorics that promise the robot first responder as the remedy for our collective vulnerability.