Author Archives: Lucy Suchman

The algorithmically accelerated killing machine

Destruction caused by Israeli bombings is seen inside Al-Shati refugee camp, northern Gaza Strip, November 16, 2023. (Yonatan Sindel/Flash90)

On 11 January 2024, the International Court of Justice opened proceedings on charges of genocide brought by South Africa against Israel over its operations in Gaza. Israel, for its part, frames its military operations as self-defense and a justifiable response to the massacre of Israeli civilians by Hamas on 7 October 2023. Amid the media coverage of Israeli operations in Gaza, one investigative report stood out for those of us who have been following developments in the algorithmic intensification* of military killing machines: a story about Israel’s AI-enabled targeting system named Habsora, or the Gospel.

Headlined ‘“A mass assassination factory”: Inside Israel’s calculated bombing of Gaza,’ the report draws on sources within the Israeli intelligence community who confirm that Israel Defense Forces (IDF) operations in the Gaza Strip combine more permissive authorization for the bombing of non-military targets with a loosening of constraints regarding expected civilian casualties. This policy sanctions the bombing of densely populated civilian areas, including high-rise residential and public buildings designated as so-called ‘power targets’. Official legal guidelines require that selected buildings must house a legitimate military target and be empty at the time of their destruction; the latter requirement has resulted in the IDF’s issuance of a constant and shifting succession of unfeasible evacuation orders to those trapped in ever-diminishing areas of Gaza. These targeting practices are presumably facilitated by the extent and intensity of the surveillance infrastructure in the Occupied Palestinian Territories (see Antony Loewenstein’s The Palestine Laboratory). Moreover, once Israel declares the entire surface of Gaza to be cover for Hamas tunnels, all of which are assumed to be legitimate military targets, the entire strip becomes fair game for destruction.

A direct corollary of this operational strategy is the need for an unbroken stream of candidate targets. To meet this requirement, Habsora is designed to accelerate the generation of targets from surveillance data, creating what one former intelligence officer (quoted in the story’s headline) describes as a “mass assassination factory”. Most notably, the Israeli bombardment of Gaza has shifted the argument for AI-enabled targeting from claims of greater precision and accuracy to the objective of accelerating the rate of destruction. IDF spokesperson Rear Admiral Daniel Hagari has acknowledged that in the bombing of Gaza “the emphasis is on damage and not on accuracy.” For those who have been advancing precision and accuracy as the high moral ground of data-driven targeting, this admission must surely be disruptive. It shifts the narrative from a technology in aid of adherence to International Humanitarian Law and the Geneva Conventions, to automation in the name of industrial-scale productivity in target generation, enabling greater speed and efficiency in killing. As the intelligence sources acknowledge, moreover, Israel’s operations are not indiscriminate but are deliberately designed to create ‘shock’ among the civilian population, under the premise that this will somehow contribute to Israel’s aim of eliminating Hamas.

Israel’s mobilization of algorithmic intensification to accelerate target production should be understood within the wider technopolitical context of so-called network-centric warfare. A project dating back to the 1990s, with roots in the cybernetic imaginary of the Cold War, data-driven warfighting promises a technological solution to the longstanding problem of ‘situational awareness’ as a prerequisite for the perpetuation of military logics. As National Defense Magazine observes of the various proposals for networked warfare, “what all these concepts have in common is the vision of a truly networked battlefield in which data moves at the speed of light to connect not only sensors to shooters, but also the totality of deployed forces and platforms.” Data here are naturalised, treated as self-evident signs emitted by an objectively existing world ‘out there,’ rather than as the product of an extensively engineered chain of translation from machine-readable signals to ad hoc systems of classification and interpretation. And contra the idea that it is the demonstrated value of data that leads to surveillance and data gathering, data-driven operations are mandated by investment in those infrastructures. The on-faith investment in surveillance and data gathering, in other words, feeds a desire to rely on data for decision-making, however questionable the provenance and chains of inference.

All of this occurs in the context of Israel’s economic commitment to establishing itself as a leading purveyor of high-tech military technoscience, not least in so-called AI-enabled warfighting. For both Ukraine and Israel, these wars are an opportunity to boost arms sales. Battle-tested systems are easier to sell, and US venture capital firms like Eric Schmidt’s Innovation Endeavors, and companies like Palantir, are lining up to be part of the booming weapons industry. Yet enormous questions remain regarding the validity of the assumptions built into these systems about who comprises an imminent threat, and about the legitimacy of their targeting functions under the Geneva Conventions and the laws of war. We know that these platforms require continually updated datasets sourced from satellite imagery, drone footage and other surveillance data monitoring the movements and behaviour patterns of individuals and groups, including cell phone tracking, social media, and intercepted communications. But we don’t know how data quality is validated or what assumptions are built into categories like “military objects” or “persons of interest” and their designation as legitimate targets. In Gaza, where as of this writing civilian casualties have surpassed 25,000 (and are likely significantly higher), including over 10,000 children, and roughly 70% of Gaza’s buildings and critical infrastructure have been destroyed, the gospel of AI-enabled precision and accuracy has been revealed as a pretext for the acceleration of unrestrained and criminal acts of killing. While the masters of war enjoy their short-term profits through the promise of technological solutions, critical voices, including a growing number inside Israel, agree that only an immediate ceasefire and unconditional release of hostages can re-open the path to a political solution. Let us hope that the ICJ reaches the same conclusion.

*An alternative reading of ‘AI’ suggested by Andrew Clement (personal communication)

DoD Patronage

Ash Carter (Jacquelyn Martin/AP)

On June 6th, 2019, The Boston Globe published an opinion piece titled ‘The morality of defending America: A letter to a young Googler.’ The title evokes an earlier set of epistles, German lyric poet Rainer Maria Rilke’s Letters to a Young Poet, published in 1929 (though Carter’s more immediate model is likely E.O. Wilson’s 2013 Letters to a Young Scientist). This letter was penned not by a poet, however, nor even by a senior Googler, but rather by Ashton Carter, former Secretary of Defense under Barack Obama and 30-year veteran of the US Department of Defense.

Let’s start with the letter’s title. By taking as given that the project of the US military is to defend America, the title of Carter’s letter assumes that the homeland is under threat; a premise that is arguable in other contexts (though evidently not for Carter here). Carter’s title also forecloses the possibility that US military operations might themselves be seen as questionable, even immoral, by some of the country’s citizens. In that way it silences from the outset a concern that might be at the core of a Google employee’s act of resistance. It ignores, for example, the possibility that what is being defended by US militarism extends beyond the safety of US citizens, to the strategic interests of only some, very specific beneficiaries. It denies that the US military might itself be a provocateur of conflicts and a perpetrator of acts of aggression across the globe, which not only harm those caught up in their path but also, arguably, increase American insecurity. (See for example the reflections of retired colonel Andrew Bacevich on American militarism.) With a ‘defense’ budget that exceeds those of the next 8 most heavily militarized countries in the world combined (including China and Russia), the role of the United States as defender of the homeland and peacekeeper elsewhere is at the very least highly contested. Yet Carter’s letter presupposes a US military devoted to “the work it takes to make our world safer, freer, and more peaceful.” If we accept that premise then yes, of course, declining to work for the Defense Department can only be read as churlish, or at the very least (as the title implies) childish.

Carter assures his young reader that as a scientist by background, “I share your commitment to ensuring that technology is used for moral ends.” As a case in point (somewhat astonishingly) he offers “the tradition of the Manhattan Project scientists who created that iconic ‘disruptive’ technology: atomic weapons.” Some of us are immediately reminded of the enormous moral agonies faced by those scientists, captured most poignantly in ‘father’ of the bomb J. Robert Oppenheimer’s quote from the Bhagavad Gita, “Now I am become Death, destroyer of worlds.” We might think as well about the question of whether the bombings of Hiroshima and Nagasaki should be recorded among history’s most horrendous of war crimes, rather than as proportional actions that, as Carter asserts, “saved lives by bringing a swift end to World War II.” Again, this point is at least highly contested. And Carter’s assertion that the invention of the atom bomb “then deterred another, even more destructive war between superpowers” fails to recognize the continued threat of nuclear war, as the most immediate potential for collective annihilation that we currently face.

After taking some personal credit for working to minimize the threats created by the atomic weapons he so admires, Carter moves on to set out his reasons for urging his readers to become contributors to the DoD. First on the list is the “inescapable necessity” of national defense, and the simple fact that “AI is an increasingly important military tool.” Who says so; that is, who exactly is invested in the necessity of a national defense based on military hegemony? And who promotes ‘AI’ as a necessary part of the military ‘toolkit’? Rather than acknowledging these questions, Carter poses this one:

Will it be crude, indiscriminate, and needlessly destructive? Or will it be controlled, precise, and designed to follow US laws and international norms?

If you don’t step up to the project of developing AI, Carter continues, you face “ceding the assignment to others who may not share your skill or moral center.” The choice here, it would seem, is a clear one, between indiscriminate killing and killing that is controlled and precise. The possibility that one might rightly question the logics and legitimacy of US targeting operations, and the project of further automating the technologies that enable those operations, seems beyond the scope of Carter’s moral compass.

Carter next cites his own role in the issuance of the Pentagon’s 2012 policy on autonomy in weapon systems. The requirement of human involvement in any decision to use force, he points out, is tricky; in the case of the real-time calculations involved in, for example, a guided missile, “avoidance of error is designed into the weapon and checked during rigorous testing.” There has of course been extensive critical analysis of the possibility of testing weapons in conditions that match those in which they’ll subsequently be used, but the key issue here is that of target identification before a weapon is fired. In the case of automated target identification Carter observes:

A reasonable level of traceability — strong enough to satisfy the vital standard of human responsibility — must be designed into those algorithms. The integrity of the huge underlying data sets must also be checked. This is complex work, and it takes specialists like you to ensure it’s done right.

Again, the assumption that automated targeting can be “done right” leaves unexamined the wider and more critical question of how the legitimacy of targeting is determined by the US military command at present. As long as serious questions remain regarding the practices through which the US military defines what comprises an ‘imminent threat,’ rendering targeting more efficient through automation is equally questionable.

Carter then turns to Google’s Dragonfly Project, the controversial development of a search engine compliant with the Chinese government’s censorship regime, instructing his young reader that “Working in and for China effectively means cooperating with the People’s Liberation Army.” Even without addressing the breadth of this statement, it ignores the possibility that it might be those same “young Googlers” who also strenuously protested the company’s participation in the Dragonfly effort. Moreover, the implication is that the illegitimacy of one company project precludes raising concerns about the legitimacy of another, and that somehow not participating in DoD projects is itself effectively aiding and abetting America’s designated ‘enemies.’

Carter’s final point requires citation in full:

Third, and perhaps most important, Google itself and most of your colleagues are American. Your way of life, the democratic institutions that empower you, the laws that enable the operations and profits of the corporation, and even your very survival rely on the protection of the United States. Surely you have a responsibility to contribute as best you can to the shared project of defending the country that has given Google so much.

To be a loyal American, in other words, is to embrace the version of defense of the homeland that Carter assumes; one based in US military hegemony. Nowhere is there a hint of what the alternatives might be – resources redirected to US diplomatic capacities, or working to alleviate rather than exacerbate the conditions that lead to armed conflict around the world (many of them arguably the effects of earlier US interventions). Anything less than unquestioning support of the project of achieving US security through supremacy in military technology is ingratitude, unworthy of us as US citizens, dependent upon the benevolent patronage of the military hand that must continue to feed us and that we, in return, must loyally serve.

Still unsafe at any speed?

Waymo One

Mindful of the fact that most of the posts on this blog are provoked by media coverage that works to further mystify AI/robotics, I thought I would recognize a recent story that, to my reading, breaks with that pattern. A piece by Andrew J. Hawkins in The Verge reviews the state of robot taxis in Phoenix, Arizona. Having been assured on several occasions by tech-savvy friends that self-driving cars are in everyday operation in Phoenix, I found that this story helps to clarify the actual state of the technology. And Hawkins raises some welcome questions as well about the claimed benefits of autonomous vehicles; questions that should be at the forefront of discussion about public investment in transportation infrastructures going forward.

While it’s not until paragraph nine that we discover that the autonomous vehicles deployed in the Waymo One taxi service actually still include a human ‘safety driver,’ there’s much to learn here. The piece is headlined by a video report from Hawkins on his own experience of the service. Hawkins points out that Alphabet/Google’s Waymo is the longest running and most extensive of the autonomous vehicle projects, with the lowest number of recorded “disengagements,” or events in which the human driver has to take over the wheel. The current trial is limited to four towns in the greater Phoenix area, and to voluntary members of Waymo’s “early rider” program (with requisite non-disclosure agreements and, we might assume, liability waivers). We might note, in the aerial views of the designated areas, the flatness of the Arizona landscape, and of course we know that the reason for the old prescription to “ship your sinuses to Arizona” (familiar at least to TV watchers of my generation) is that state’s relatively rain-free climate. There’s not much discussion of Arizona’s particularities here or in the media more generally, but they point us towards the question of what environmental conditions are required for the self-driving car’s successful operation.

Hawkins very helpfully introduces us to the infrastructure of sensor technologies that make the autonomous vehicle possible (the car’s sensor view is live streamed for the passenger, in part presumably to make the ride less boring). As we watch we begin to get a sense of how the car/environment might be a more apt unit of analysis than the car alone. Surrounding vehicles and pedestrians are rendered as categorically color-coded, edge-detected objects. As Hawkins compares the experience to “being in the back seat with a very cautious student driver,” we get to sit through an “unprotected left turn” (a left turn made across oncoming traffic without a dedicated green arrow, by waiting for a break in the flow). We see how the Waymo One turns only when the change in the traffic light at the intersection ahead creates a clear, clean break in the traffic. Well worth the wait, I suspect most of us would say, though a source of reported frustration for other human drivers. For Waymo this breakdown in driving tempo poses the challenge of developing its software to enable the car to drive “more organically, more like a human.” We get a sense here, for better and worse, of the difference between an operation based entirely on metrics and algorithms, and a practice based on embodied experiences of space and time. Nonetheless, Daniel Chu, Director of Product at Waymo, translates aggregated time on the road and associated statistics into a characterization of each Waymo One vehicle as equivalent to “the world’s most experienced driver.”

Perhaps the most welcome moment in Hawkins’s account comes when he turns to the question of whether the car is actually the most imaginative, or even desirable, vehicle for the future of transportation. Millennials, he reports, indicate in poll data some doubt about the future of car travel, and a preference for better public transportation, along with safer spaces in which to bike and walk. While touted as a remedy to the proven fallibility of human drivers, comparable safety statistics for the driverless car aren’t really available, according to Sean Sweat of the Urban Phoenix Project, given the relative size of data sets on cars driven by humans and driverless cars over time. Sweat points out that the question of driver safety also sidesteps the question of how the design of urban spaces, particularly streets, might contribute to pedestrian fatalities or, alternatively, to their avoidance. This points to the much larger issue of how the investment in a future of self-driving cars might drive the reconfiguration of transport infrastructures required to enable them. Not only the cars themselves, but our roadways and urban landscapes will likely become further instrumented in the service of vehicle autonomy. This isn’t inherently a bad thing; the Copenhagen metro, for example, has evolved through a thoroughgoing makeover of the city infrastructure to create a driverless and extremely safe transport system to accompany its bicycle-friendly streetscape (most obviously, there’s no open access to the track, even from the station platform). But in car cultures the financial expense required to re-engineer highways and cities in order to make them autonomous vehicle friendly is accompanied by opportunity costs, beginning with a sidelining of discussion about alternative possibilities. Meanwhile the future of autonomous cars that can drive any road, under any conditions, may take decades to arrive, Hawkins concludes, or may never happen. Far from inevitable, then, the driverless car is a project urgently in need of braking, to open a space for more innovative ways of thinking about safe and sustainable transport.

Which Sky is Falling?

Justin Wood, from https://www.nytimes.com/2018/06/09/technology/elon-musk-mark-zuckerberg-artificial-intelligence.html

The widening arc of public discussion regarding the promises and threats of AI includes a recurring conflation of what are arguably two intersecting but importantly different matters of concern. The first is the proposition, repeated by a few prominent members of the artificial intelligentsia and their followers, that AI is proceeding towards a visible horizon of ‘superintelligence,’ culminating in the point at which power over ‘us’ (the humans who imagine themselves as currently in control) will be taken over by ‘them’ (in this case, the machines that those humans have created).[1] The second concern arises from the growing insinuation of algorithmic systems into the adjudication of a wide range of distributions, from social service provision to extrajudicial assassination. The first concern reinforces the premise of AI’s advancement, as the basis for alarm. The second concern requires no such faith in the progress of AI, but only attention to existing investments in automated ‘decision systems’ and their requisite infrastructures.

The conflation of these concerns is exemplified in a NY Times piece this past summer, reporting on debates within the upper echelons of the tech world, including billionaires like Elon Musk and Mark Zuckerberg, at a series of exclusive gatherings held over the past several years. Responding to Musk’s comparison of the dangers of AI with those posed by nuclear weapons, Zuckerberg apparently invited Musk to discuss his concerns at a small dinner party in 2014. We might pause here to note the gratuitousness of the comparison; it’s difficult to take this as other than a rhetorical gesture designed to claim the gravitas of an established existential threat. But an even longer pause is warranted for the grounds of Musk’s concern, that is, the ‘singularity’, or the moment when machines are imagined to surpass human intelligence in ways that will ensure their insurrection.

Let’s set aside for the moment the deeply problematic histories of slavery and rebellion that animate this anxiety, to consider the premise. To share Musk’s concern we need to accept the prospect of machine ‘superintelligence,’ a proposition that others in the technical community, including many deeply engaged with research and development in AI, have questioned. In much of the coverage of debates regarding AI and robotics it seems that to reject the premise of superintelligence is to reject the alarm that Musk raises and, by slippery elision, to reaffirm the benevolence of AI and the primacy of human control.

To demonstrate the extent of concern within the tech community (and by implication those who align with Musk over Zuckerberg), NY Times AI reporter Cade Metz cites recent controversy over the Pentagon’s Project Maven. But of course Project Maven has nothing to do with superintelligence. Rather, it is an initiative to automate the analysis of surveillance footage gathered through the US drone program, based on labels or ‘training data’ developed by military personnel. So being concerned about Project Maven does not require belief in the singularity, but only skepticism about the legality and morality of the processes of threat identification that underwrite the current US targeted killing program. The extensive evidence for the imprecision of those processes that has been gathered by civil society organizations is sufficient to condemn the goal of rendering targeted killing more efficient. The campaign for a preemptive ban on lethal autonomous weapon systems is aimed at interrupting the logical extension of those processes to the point where target identification and the initiation of attack are put under fully automated machine control. Again this relies not on superintelligence, but on the automation of existing assumptions regarding who and what constitutes an imminent threat.

The invocation of Project Maven in this context is symptomatic of a wider problem, in other words. Raising alarm over the advent of machine superintelligence serves to reassert AI’s promise, while redirecting the debate away from closer examination of more immediate and mundane problems in automation and algorithmic decision systems. The spectacle of the superintelligentsia at war with each other distracts us from the increasing entrenchment of digital infrastructures built out of unaccountable practices of classification, categorization, and profiling. The greatest danger (albeit one differentially distributed across populations) is that of injurious prejudice, intensified by ongoing processes of automation. Not worrying about superintelligence, in other words, doesn’t mean that there’s nothing about which we need to worry.

As critiques of the reliance on data analytics in military operations are joined by revelations of data bias in domestic systems, it is clear that present dangers call for immediate intervention in the governance of current technologies, rather than further debate over speculative futures. The admission by AI developers that so-called machine learning algorithms evade human understanding is taken to suggest the advent of forms of intelligence superior to the human. But an alternative explanation is that these are elaborations of pattern analysis based not on significance in the human sense, but on computationally-detectable correlations that, however meaningless, eventually produce results that are again legible to humans. From training data to the assessment of results, it is humans who inform the input and evaluate the output of the black box’s operations. And it is humans who must take the responsibility for the cultural assumptions, and political and economic interests, on which those operations are based and for the life-and-death consequences that already follow.
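To make the point concrete, consider a minimal, purely illustrative sketch in Python (the features, labels, and their names are all invented for the example; no actual deployed system is being described). A model fitted to human-labelled data tracks whatever correlations are computationally detectable, including accidental ones, and it is humans who supply the labels going in and read meaning into the scores coming out.

```python
import numpy as np

# Hypothetical illustration: least-squares weights fitted to human-labelled data
# reward any computationally detectable correlate of the labels, whether or not
# a human would consider it meaningful.
rng = np.random.default_rng(1)
n = 500
relevant = rng.normal(size=n)                   # a feature an analyst would call meaningful
labels = (relevant > 0).astype(float)           # classifications supplied by human annotators
incidental = labels + 0.5 * rng.normal(size=n)  # a background cue that happens to co-occur with the label
noise = rng.normal(size=n)                      # an unrelated measurement

X = np.column_stack([relevant, incidental, noise])
weights, *_ = np.linalg.lstsq(X, labels, rcond=None)
print("fitted weights:", weights)               # the accidental cue is rewarded alongside the 'meaningful' one

scores = X @ weights                            # outputs that humans must then interpret and act on
```

The point of the sketch is not that such models fail, but that what they track is correlation, with human judgment doing the work of labelling at one end and interpretation at the other.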

[1] We could replace the uprising subject in this imaginary with other subjugated populations figured as plotting a takeover, the difference being that here the power of the master/creator is confirmed by the increasing threat posed by his progeny. Thus also the allusion to our own creation in the illustration by Justin Wood that accompanies the article provoking this post.

Corporate Accountability


This graphic appears at https://www.jacobinmag.com/2018/06/google-project-maven-military-tech-workers

On June 7th, Google CEO Sundar Pichai published a post on the company’s public blog site titled ‘AI at Google: our Principles.’ (Subsequently abbreviated to Our Principles.) The release of this statement was responsive in large measure to dissent from Google employees beginning early in the Fall of last year; while these debates are not addressed directly, their traces are evident in the subtext. The employee dissent focused on the company’s contracts with the US Department of Defense, particularly for work on its Algorithmic Warfare Cross Functional Team, also known as Project Maven. The controversy was receiving increasingly widespread attention in the press.

It is to the credit of Google workers that they have the courage and commitment to express their concerns. And it is to Google management’s credit that, unusually among major US corporations, it both encourages dissent and feels compelled to respond. I was involved in organizing a letter from researchers in support of Googlers and other tech workers, and in that capacity was gratified to hear Google announce that it would not renew the Project Maven contract next year. (Disclosure: I think US militarism is a global problem, perpetrating unaccountable violence while further jeopardizing the safety of US citizens.) In this post I want to take a step away from that particular issue, however, to do a closer reading of the principles that Pichai has set out. In doing so, I want to acknowledge Google’s leadership in creating a public statement of its principles for the development of technologies; a move that is also quite unprecedented, as far as I’m aware, for private corporations. And I want to emphasize that the critique that I set out here is not aimed at Google uniquely, but rather is meant to highlight matters of concern across the tech industry, as well as within wider discourses of technology development.

One question we might ask at the outset is why this statement of principles is framed in terms of AI, rather than software development more broadly. Pichai’s blog post opens with this sentence: “At its heart, AI is computer programming that learns and adapts.” Those who have been following this blog will be able to anticipate my problems with this statement, which singularizes ‘AI’ as an agent with a ‘heart’ that engages in learning, and in that way contributes to its mystification. I would rephrase this along the lines of “AI is the cover term for a range of techniques for data analysis and processing, the relevant parameters of which can be adjusted according to either internally or externally generated feedback.” One could substitute “information technologies (IT)” or “software” for AI throughout the principles, moreover, and their sense would be the same.
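To unpack that rephrasing for readers without a programming background, here is a minimal sketch in Python (the data, labels, and learning rate are all invented for the illustration; this is my example, not Google’s): a linear classifier whose parameters are repeatedly adjusted according to an external feedback signal, in this case labels supplied along with the data.

```python
import numpy as np

# A minimal sketch of "parameters adjusted according to feedback":
# a linear classifier whose weights are nudged whenever its output
# disagrees with an externally supplied label (the feedback signal).
# All data here are synthetic, purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                         # 200 examples, 3 features
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(int)  # labels: the external feedback

w = np.zeros(3)                                       # the "relevant parameters"
for _ in range(10):                                   # repeated passes over the data
    for xi, yi in zip(X, y):
        pred = int(xi @ w > 0)                        # current output
        w += 0.1 * (yi - pred) * xi                   # adjust parameters from the feedback

print("learned parameters:", w)
```

Nothing in such a procedure requires, or acquires, a ‘heart’; the ‘learning’ is an iterative adjustment of numerical parameters against a feedback signal.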

Pichai continues: “It [AI] can’t solve every problem, but its potential to improve our lives is profound.” While this is a familiar (and some would argue innocent enough) premise, it’s always worth asking several questions in response: What’s the evidentiary basis for AI’s “profound potential”? Whose lives, more specifically, stand to be improved? And what other avenues for the enhancement of human well being might the potential of AI be compared to, both in terms of efficacy and the number of persons positively affected?

Regrettably, the opening paragraph closes with some product placement, as Pichai asserts that Google’s development of AI makes its products more useful, from “email that’s spam-free and easier to compose, to a digital assistant you can speak to naturally, to photos that pop the fun stuff out for you to enjoy,” with embedded links to associated promotional sites (removed here in order not to propagate the promotion). The subsequent paragraph then offers a list of non-commercial applications of Google’s data analytics, whose “clear benefits are why Google invests heavily in AI research and development.”

This promotional opening then segues to the preamble to the Principles, explaining that they are motivated by the recognition that “How AI is developed and used will have a significant impact on society for many years to come.” Readers familiar with the field of science and technology studies (STS) will know that the term ‘impact’ has been extensively critiqued within STS for its presupposition that technology is somehow outside of society to begin with. Like any technology, AI/IT does not originate elsewhere, like an asteroid, and then make contact. Rather, like Google, AI/IT is constituted from the start by relevant cultural, political, and economic imaginaries, investments, and interests. The challenge is to acknowledge the genealogies of technical systems and to take responsibility for ongoing, including critical, engagement with their consequences.

The preamble then closes with this proviso: “We acknowledge that this area is dynamic and evolving, and we will approach our work with humility, a commitment to internal and external engagement, and a willingness to adapt our approach as we learn over time.” Notwithstanding my difficulties in thinking of a precedent for humility in the case of Google (or any of the other Big Five), this is a welcome statement, particularly in its commitment to continuing to listen both to employees and to relevant voices beyond the company.

The principles themselves are framed as a set of objectives for the company’s AI applications, all of which are unarguable goods. These are: being socially beneficial, avoiding the creation or reinforcement of social bias, ensuring safety, providing accountability, protecting privacy, and upholding standards of scientific excellence. Taken together, Google’s technologies should “be available for uses that support these principles.” While there is much to commend here, some passages shouldn’t go by unremarked.

The principle, “Be built and tested for safety” closes with this sentence: “In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.” What does this imply for the cases where this is not “appropriate,” that is, what would justify putting AI technologies into use in unconstrained environments, where their operations are more consequential but harder to monitor?

The principle “Be accountable to people,” states “We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.” This is a key objective but how, realistically, will this promise be implemented? As worded, it implicitly acknowledges a series of complex and unsolved problems: the increasing opacity of algorithmic operations, the absence of due process for those who are adversely affected, and the increasing threat that automation will translate into autonomy, in the sense of technologies that operate in ways that matter without provision for human judgment or accountability. Similarly, for privacy design, Google promises to “give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.” Again we know that these are precisely the areas that have been demonstrated to be highly problematic with more conventional techniques; when and how will those longstanding, and intensifying, problems be fully acknowledged and addressed?

The statement closes, admirably, with an explicit list of applications that Google will not pursue. The first item, however, includes a rather curious set of qualifications:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.

What are the qualifiers “overall” and “material” doing here? What will be the basis for the belief that “the benefits substantially outweigh the risks,” and who will adjudicate that?

There is a welcome commitment not to participate in the development of

2.  Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

As the Project Maven example illustrates, the line between a weapon and a weapon system can be a tricky one to draw. Again from STS we know that technologies are not discrete entities; their purposes and implementations need to be assessed in the context of the more extended sociotechnical systems of which they’re part.

And finally, Google pledges not to develop:

  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

Again, these commitments are laudable; however we know that the normative and legal frameworks governing surveillance and human rights are highly contested and frequently violated. This means that adherence to these principles will require working with relevant NGOs (for example, the International Committee of the Red Cross, Human Rights Watch), continuing to monitor the application of Google’s technologies, and welcoming challenges based on evidence for uses that violate the principles.

A coda to this list affirms Google’s commitment to work with “governments and the military in many other areas,” under the pretense that this can be restricted to operations that “keep [LS: read US] service members and civilians safe.” This odd pairing of governments, in the plural, with the military, in the singular, might raise further questions regarding the obligations of global companies like Google and the other Big Five information technology companies. What if it were to read “governments and militaries in many other areas”? What does work either with one nation’s military, or many, imply for Google’s commitment to users and customers around the world?

The statement closes with:

We believe these principles are the right foundation for our company and the future development of AI. This approach is consistent with the values laid out in our original Founders’ Letter back in 2004. There we made clear our intention to take a long-term perspective, even if it means making short-term tradeoffs. We said it then, and we believe it now.

This passage is presumably responsive to media reports of changes to Google’s Code of Conduct, from “Don’t Be Evil” (highly lauded but actually setting quite a low bar), to Alphabet’s “Do the Right Thing.” This familiar injunction is also a famously vacuous one, in the absence of the requisite bodies for deliberation, appeal, and redress.

The overriding question for all of these principles, in the end, concerns the processes through which their meaning and adherence to them will be adjudicated. It’s here that Google’s own status as a private corporation, but one now a giant operating in the context of wider economic and political orders, needs to be brought forward from the subtext and subject to more explicit debate. While Google can rightfully claim some leadership among the Big Five in being explicit about its guiding principles and areas that it will not pursue, this is only because the standards are so abysmally low. We should demand a lot more from companies as large as Google, which control such disproportionate amounts of the world’s wealth, and yet operate largely outside the realm of democratic or public accountability.

Unpriming the pump: Remystifications of AI at the UN’s Convention on Certain Conventional Weapons


In the lead-up to the next meeting of the CCW’s Group of Governmental Experts at the United Nations, April 9–13 in Geneva, the UN’s Institute for Disarmament Research has issued a briefing paper titled The Weaponization of Increasingly Autonomous Technologies: Artificial Intelligence. Designated a primer for CCW delegates, the paper lists no authors, but a special acknowledgement to Paul Scharre, Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security, suggests that the viewpoints of the Washington, D.C.-based CNAS are well represented.

Surprisingly for a document positioning itself as “an introductory primer for non-technical audiences on the current state of AI and machine learning, designed to support the international discussions on the weaponization of increasingly autonomous technologies” (pp. 1-2), the paper opens with a series of assertions regarding “rapid advances” in the field of AI. The evidence offered is the case of Google/Alphabet affiliate DeepMind’s AlphaGo Zero, announced in December 2017 (“only a few weeks after the November 2017 GGE”) as having achieved better-than-human competency at (simulations of) the game of Go:

Although AlphaGo Zero does not have direct military applications, it suggests that current AI technology can be used to solve narrowly defined problems provided that there is a clear goal, the environment is sufficiently constrained, and interactions can be simulated so that computers can learn over time (p.1).

The requirements listed – a clear (read computationally specifiable) goal, within a constrained environment that can be effectively simulated – might be underscored as cautionary qualifications on claims for AI’s applicability to military operations. The tone of these opening paragraphs suggests, however, that these developments are game-changers for the GGE debate.

The paper’s first section, titled ‘What is artificial intelligence,’ opens with the tautological statement that “Artificial intelligence is the field of study devoted to making machines intelligent” (p. 2). A more demystifying description might say, for example, that AI is the field of study devoted to developing computational technologies that automate aspects of human activity conventionally understood to require intelligence. While the authors observe that, as systems become more established, they shift from characterizations of “intelligence” to more mundane designations like “automation” or “computation,” they suggest that this shift is somehow an effect of the field’s advancement rather than a result of demystification. One implication of this logic is that the ever-receding horizon of machine intelligence should be understood not as a marker of the technology’s limits, but of its success.

We begin to get a more concrete sense of the field in the section titled ‘Machine learning,’ which outlines the latter’s various forms. Even here, however, issues central to the deliberations of the GGE are passed over. For example, in the statement that “[r]ather than follow a proscribed [sic] set of if–then rules for how to behave in a given situation, learning machines are given a goal to optimize – for example, winning at the game of chess” (p. 2) the example is not chosen at random, but rather is illustrative of the unstated requirement that the ‘goal’ be computationally specifiable. The authors do helpfully explain that “[s]upervised learning is a machine learning technique that makes use of labelled training data” (my emphasis, p. 3), but the contrast with “unsupervised learning,” or “learning from unlabelled data based on the identification of patterns” fails to emphasize the role of the human in assessing the relevance and significance of patterns identified. In the case of reinforcement learning “in which an agent learns by interacting with its environment,” the (unmarked) examples are again from strategy games in which, implicitly, the range of agent/environment interactions are sufficiently constrained. And finally, the section on ‘Deep learning’ helpfully emphasizes that so called neural networks rely either on very large data sets and extensive labours of human classification (for example, the labeling of images to enable their ‘recognition’), or on domains amenable to the generation of synthetic ‘data’ through simulation (for example, in the case of strategy games like Go). Progress in AI, in sum, has been tied to growth in the availability of large data sets and associated computational power, along with increasingly sophisticated algorithms within highly constrained domains of application.
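For readers who would like to see the distinction rather than take it on trust, here is a minimal sketch using Python, scikit-learn, and entirely synthetic data (an illustration of mine, not an example drawn from the primer): in the supervised case the model optimizes against labels that humans have already produced, while in the ‘unsupervised’ case the algorithm returns clusters whose relevance and significance still have to be assessed, and named, by a person.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Two synthetic groups of points standing in for any classified domain.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)   # labels: the product of prior human classification work

# Supervised learning: the model optimises against the human-provided labels.
clf = LogisticRegression().fit(X, y)
print("supervised accuracy on the labelled data:", clf.score(X, y))

# 'Unsupervised' learning: the algorithm returns clusters, but deciding what
# the clusters mean, and whether they matter, remains a human judgement.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes found without labels:", np.bincount(clusters))
```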

Yet in spite of these qualifications, the concluding sections of the paper return to the prospects for increasing machine autonomy:

Intelligence is a system’s ability to determine the best course of action to achieve its goals. Autonomy is the freedom a system has in accomplishing its goals. Greater autonomy means more freedom, either in the form of undertaking more tasks, with less supervision, for longer periods in space and time, or in more complex environments … Intelligence is related to autonomy in that more intelligent systems are capable of deciding the best course of action for more difficult tasks in more complex environments. This means that more intelligent systems could be granted more autonomy and would be capable of successfully accomplishing their goals (p. 5, original emphasis).

The logical leap exemplified in this passage’s closing sentence is at the crux of the debate regarding lethal autonomous weapon systems. The authors of the primer concede that “all AI systems in existence today fall under the broad category of “narrow AI”. This means that their intelligence is limited to a single task or domain of knowledge” (p. 5). They acknowledge as well that “many advance [sic] AI and machine learning methods suffer from problems of predictability, explainability, verifiability, and reliability” (p. 8). These are precisely the concerns that have been consistently voiced, over the past five meetings of the CCW, by those states and civil society organizations calling for a ban on autonomous weapon systems. And yet the primer takes us back, once again, to a starting point premised on general claims for the field of AI’s “rapid advance,” rather than careful articulation of its limits. Is it not the latter that are most relevant to the questions that the GGE is convened to consider?

The UNIDIR primer comes at the same time that the United States has issued a new position paper in advance of the CCW titled ‘Humanitarian benefits of emerging technologies in the area of lethal autonomous weapon systems’ (CCW/GGE.1/2018/WP.4). While the US has taken a cautionary position in relation to lethal autonomous weapon systems in past meetings, asserting the efficacy of already-existing weapons reviews to address the concerns raised by other member states and civil society groups, it now appears to be moving in the direction of active promotion of LAWS, on the grounds of promised gains in the precision and accuracy of targeting, with associated limits on unintended civilian casualties – promises that have been extensively critiqued at previous CCW meetings. Taken together, the UNIDIR primer and the US working paper suggest that, rather than moving forward from the debates of the past five years, the 2018 meetings of the CCW will require renewed efforts to articulate the limits of AI, and their relevance to the CCW’s charter to enact Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects.

Swords to Ploughshares


Having completed my domestic labors for the day (including a turn around the house with my lovely red Miele), I take a moment to consider the most recent development from Boston Dynamics (now part of the robotics initiative at sea in Google’s Alphabet soup). More specifically, the news is of the latest incarnation of Boston Dynamics’ bipedal robot, nicknamed Atlas and famous for its distribution as a platform for DARPA’s Robotics Challenges. Previously figured as life-saving first responder and life-destroying robot soldier, Atlas is now being repositioned – tongue firmly in cheek – as a member of the domestic workforce.

While irony operates effectively to distance its author from serious investment in truth claims or moral positioning, it also generally offers a glimpse into the author’s stance towards its objects. So at the risk of humorlessness, it’s worth reading this latest rendering of Atlas’ promises seriously, in the context of other recent developments in humanoid robotics. I wrote in a previous post about the U.S. military’s abandonment of Boston Dynamics’ Big Dog and its kin, attributed by commentators to some combination of disappointment in the robot’s performance in the field and a move toward disinvestment in military applications on the part of Google’s ‘Replicant’ initiative (recently restructured as the ‘X’ group). This leaves a robot solution in search of its problem, and where better to turn than to the last stronghold against automation: the home. Along with the work of care (another favourite of robotics prognosticators), domestic labor (the wonders of dish- and clothes-washing machines notwithstanding) has proven remarkably resistant to automation (remarkable at least to roboticists, if not to those of us well versed in this work’s practical contingencies). In a piece headlined ‘Multimillion dollar humanoid robot doesn’t make for a good cleaner,’ the Guardian reproduces a video clip (produced in fast motion with an upbeat technomusic soundtrack) showing Florida’s Institute for Human and Machine Cognition (IHMC), runner-up in the 2015 Robotics Challenge, testing new code ‘by getting the multimillion dollar Atlas robot to do household chores.’ In an interesting inversion, the robot is described as the ‘Google-developed US government Atlas robot,’ a formulation which sounds as though the development path went from industry to the public sector, rather than the other way around.

Housework, we’re told, proves ‘more difficult than you might imagine,’ suggesting that the reader imagined by the Guardian is one unfamiliar with the actual exigencies of domestic work (while for other readers those difficulties are easily imaginable). The challenge of housework is revealing of the conditions required for effective automation, and their absence in particular forms of labor. Specifically, robots work well just to the extent that their environments – basically the stimuli that they have to process, and the conditions for an appropriate response – can be engineered to fit their capacities. The factory assembly line has, in this respect, been made into the robot’s home. Domestic spaces, in contrast, and the practicalities of work within them (not least the work of care) are characterized by a level of contingency that has so far flummoxed attempts at automation beyond the kinds of appliances that can either depend on human peripherals to set up their conditions of operation (think loading the dishwasher), or can operate successfully through repetitive, random motion (think Roomba and its clones). Long underestimated in the value chain of labor, domestic work might just teach robotics some lessons about the extraordinary complexity of even the most ordinary human activities.

Meanwhile in Davos the captains of multinational finance and industry and their advisors are gathered to contemplate the always-imminent tsunami of automation, including artificial intelligence and humanoid robots, that is predicted to sweep world economies in the coming decades. The Chicago Tribune reports:

At IBM, researchers are working to build products atop the Watson computing platform – best known for its skill answering questions on the television quiz show “Jeopardy” – that will search for job candidates, analyze academic research or even help oncologists make better treatment decisions. Such revolutionary technology is the only way to solve “the big problems” like climate change and disease, while also making plenty of ordinary workers more productive and better at their jobs, according to Guru Banavar, IBM’s vice president for cognitive computing. “Fundamentally,” Banavar said, “people have to get comfortable using these machines that are learning and reasoning.” [SIC]

Missing between the lines of the reports from and around Davos are the persistent gaps between the rhetoric of AI and robotics, and the realities. These gaps mean that the progress of automation will be more one of degradation of labor than its replication, so that those who lose their jobs will be accompanied by those forced to adjust to the limits and rigidities of automated service provision. The threat, in other words, is not that any job can be automated, as the gurus assert, but rather that in a political economy based on maximizing profitability for the few, more and more jobs will be transformed into jobs that can be automated, regardless of what is lost. Let us hope that in this economy, low- and no-wage jobs, like care provision and housework, might show a path to resistance.

hu·bris /ˈ(h)yo͞obrəs/ noun: hubris 1. excessive pride or self-confidence.


Nemesis, by Alfred Rethel (1837)

The new year opens with an old story, as The Independent headlines that Facebook multibillionaire Mark Zuckerberg (perhaps finding himself in a crisis of work/life balance) will “build [a] robot butler to look after his child” [sic: those of us who watch Downton Abbey know that childcare is not included in the self-respecting butler’s job description; even the account of divisions of labour among the servants is garbled here], elaborating that “The Facebook founder and CEO’s resolution for 2016 is to build an artificially intelligent system that will be able to control his house, watch over his child and help him to run Facebook.” To put this year’s resolution into perspective, we learn (too much information) that “Mr. Zuckerberg has in the past taken on ‘personal challenges’ that have included reading two books per month, learning Mandarin and meeting a new person each day.” “Every challenge has a theme,” Zuckerberg explains, “and this year’s theme is invention” (a word that, as we know, has many meanings).

We’re reminded that FB has already made substantial investments in AI in areas such as automatic image analysis, though we learn little about the relations (and differences) between those technologies and the project of humanoid robotics. I’m reassured to hear that Zuckerberg has said “that he would start by looking into existing technologies,” and hope that might include signing up to be a follower of this blog.  But as the story proceeds, it appears in any case that the technologies that Z has in mind are less humanoid robots, than the so-called Internet of things (i.e. networked devices, presumably including babycams) and data visualization (for his day job). This is of course all much more mundane and so, in the eyes of The Independent’s headline writers, less newsworthy.

The title of this post is of course the most obvious conclusion to draw regarding the case of Mark Zuckerberg; in its modern form, ‘hubris’ refers to an arrogant individual who believes himself capable of anything. And surely in a political economy where excessive wealth enables disproportionate command of other resources, Zuckerberg’s self-confidence is not entirely unwarranted. In this case, however, Zuckerberg’s power is further endowed by non-investigative journalism, which fails to engage in any critical interrogation of his announcement. Rather than questioning Zuckerberg’s resolution for 2016 on the grounds of its shaky technical feasibility or dubious politics (trivializing the labours of service and ignoring their problematic histories), The Independent makes a jump cut to the old saws of Stephen Hawking, Elon Musk and Ex Machina. Of course The Independent wouldn’t be the first to notice the film’s obvious citation of Facebook and its founder and CEO (however well the latter is disguised by the hyper-masculine and morally degenerate figure of Nathan). But the comparison, I think, ends there; and of course, however fabulous, neither Zuckerberg nor Facebook is fictional.

The original Greek connotations of the term ‘hubris’ referenced not just overweening pride, but more violent acts of humiliation and degradation, offensive to the gods. While Zuckerberg’s pride is certainly more mundane, his ambitions join with those of his fellow multibillionaires in their distorting effects on the worlds in which their wealth is deployed (see Democracy Now for the case of Zuckerberg’s interventions into education). And it might be helpful to be reminded that in Greek tragedy excessive pride towards or defiance of the gods leads to nemesis. The gods may play a smaller role in the fate of Mark Zuckerberg, however, and the appropriate response I think is less retributive than redistributive justice.

Reality Bites

 


LS3 test during Rim of the Pacific Exercise, July 2014

A pack of international news outlets over the past few days have reported the abandonment by the US Department of Defense of Boston Dynamics’ Legged Squad Support System or LS3 (aka ‘Big Dog’) and its offspring (see Don’t kick the Dog). After five years and USD $42 million in investment, what was promised to be a best-in-breed warfighting companion stumbled over a mundane but apparently intractable problem – noise. Powered by a gas (petrol) motor likened to a lawnmower in sound, the robot’s capacity for carrying heavy loads (400 lbs or 181.4 kg), and its much celebrated ability to navigate rough terrain and right itself after falling (or be easily assisted in doing so), in the end were not enough to make up for the fact that, in the assessment of the US Marines who tested the robot, the LS3 was simply ‘too loud’ (BBC News 30 January 2015). The trial’s inescapable conclusion was that the noise would reveal a unit’s presence and position, bringing more danger than aid to the U.S. warfighters that it was deployed to support.

A second concern contributing to the DoD’s decision was the question of the machine’s maintenance and repair. Long ignored in narratives about technological progress, the essential practices of inventive maintenance and repair have recently become a central topic in social studies of science and technology (see Steven J. Jackson, “Rethinking Repair,” in Tarleton Gillespie, Pablo Boczkowski, and Kirsten Foot, eds., Media Technologies: Essays on Communication, Materiality and Society, Cambridge, MA: MIT Press, 2014). These studies are part of a wider project of recognizing the myriad forms of invisible labour that are essential conditions for keeping machines working – one of the enduring continuities in the history of technology.

The LS3 trials were run by the Marine Corps Warfighting Lab, most recently at the Kahuku Training Area in Hawaii during the Rim of the Pacific exercise in July 2014. Kyle Olson, spokesperson for the Lab, reported that seeing the robot’s potential was challenging “because of the limitations of the robot itself.” This phrasing is noteworthy, as the robot itself – the actual material technology – interrupts the progressive elaboration of the promise that keeps investment in place. According to the Guardian report (30 December 2015), both ‘Big Dog’ and ‘Spot,’ an electrically powered and therefore quieter but significantly smaller prototype, are now in storage, with no future experiments planned.

The cessation of the DoD investment will presumably come as a relief to Google, which acquired Boston Dynamics in 2013, saying at the time that it planned to move away from the military contracts that it inherited with the acquisition. Boston Dynamics will now, we can assume, turn its prodigious ingenuity in electrical and mechanical engineering to other tasks of automation, most obviously in manufacturing. The automation of industrial labour has, somewhat ironically given its status as the original site for robotics, recently been proclaimed to be robotics’ next frontier. While both the BBC and Guardian offer links to a 2013 story about the great plans that accompanied Google’s investments in robotics, more recent reports characterize the status of the initiative (internally named ‘Replicant’) as “in flux,” and its goal of producing a consumer robot by 2020 as in question (Business Insider, November 8, 2015). This follows the departure of former Google VP Andy Rubin in 2014 (to launch his own company with the extraordinary name ‘Playground Global’), just a year after he was hailed as the great visionary leader who would turn Google’s much celebrated acquisition of a suite of robotics companies into a unified effort. Having joined Google in 2005, when the latter acquired his smartphone software company Android, Rubin was assigned to the leadership of Google’s robotics division by co-founder Larry Page. According to Business Insider’s Jillian D’Onfro, Page

had a broad vision of creating general-purpose bots that could cook, take care of the elderly, or build other machines, but the actual specifics of Replicant’s efforts were all entrusted to Rubin. Rubin has said that Page gave him a free hand to run the robotics effort as he wanted, and the company spent an estimated $50 million to $90 million on eight wide-ranging acquisitions before the end of 2013.

The unifying vision apparently left with Rubin, who has yet to be replaced. D’Onfro continues:

One former high-ranking Google executive says the robot group is a “mess that hasn’t been cleaned up yet.” The robot group is a collection of individual companies “who didn’t know or care about each other, who were all in research in different areas,” the person says. “I would never want that job.”

So another reality that ‘bites back’ joins those that make up the robot itself: the alignment of the humans engaged in its creation. Meanwhile, Boston Dynamics’ attempt to position itself on the entertainment side of the military-entertainment complex this holiday season was met less with amusement than with alarm, as media coverage characterized it variously as ‘creepy’ and ‘nightmarish.’


Resistance, it seems, is not entirely futile.

Just-so stories


Alerted that BBC News/Technology has published a story titled ‘Intelligent Machines: The Truth Behind AI Fiction’, I follow the link with some hopeful anticipation. The piece opens: ‘Over the next week, the BBC will be looking into all aspects of artificial intelligence – from how to build a thinking machine, to the ethics of doing so, to questions about whether an AI can ever be creative.’ But as I read on, my state changes to one that my English friends would characterize as gobsmacked. Instead of in-depth, critical journalism, this piece reads like a (somewhat patronizing) children’s primer with corporate sponsorship. We’re told, for example, that Watson, IBM’s supercomputer, ‘can understand natural language and read millions of documents in seconds’. But if it’s a deeper understanding of the state of the art in AI that we’re after, we can’t let terms like ‘understand’ and ‘read’ go by unremarked. Rather, it’s precisely the translation of computational processes into ‘understanding’ or ‘reading’, and the difference lost in that translation from our own understanding and reading of those terms, that needs to be illuminated. We might then fully appreciate the ingenious programming that enables the system singularized as ‘Watson’ to compete successfully on the televised quiz show Jeopardy, despite the machine’s cluelessness regarding the cultural references that its algorithms and databases encode.

Things go from bad to worse, however, when we’re told that Watson ‘is currently working in harmony with humans, in diverse fields such as the research and development departments of big companies such as Procter and Gamble and Coca-Cola – helping them find new products’. Why equate harmonious working relations with the deployment of an IBM supercomputer in the service of corporate R&D? And what kinds of ongoing labours of code development and maintenance are required to reconfigure a cluster of ninety IBM Power 750 servers, each with a 3.5 GHz eight-core POWER7 processor, in such a way that it can operate usefully within these enterprises? The anthropomorphism of Watson obfuscates, rather than explicates, these ‘truths’ about artificial intelligence and its agencies.

The structure of the story is a series of loops between fiction and ‘fact’, moving from blockbuster films to just-so stories. In place of the Terminator, we’re told, ‘The US military unit Darpa [sic] is developing lots of robotic kit, such as exoskeletons to give soldiers superhuman strength and access to visual displays that will help their decision making. It is also using Atlas robots, developed by Boston Dynamics, intended for search and rescue.’ (There is a brief mention of the campaign against lethal autonomous weapons, though with no links provided.) After a reference to C-3PO, we’re told that ‘In the real world, companion robots are really starting to take off’, exemplified by Pepper, which ‘has learnt about human emotions by watching videos showing facial expressions.’ (See my earlier post on companion robots here.) From Wall-E, surely among the most endearing of fictional robots (see Vivian Sobchack’s brilliant analysis), we go to Roomba, about which we’re told that ‘[a]necdotal evidence suggests some people become as attached to them as pets and take them on holiday.’ We finally close (not a moment too soon) with Ex Machina’s Ava on the one hand and roboticist Hiroshi Ishiguro’s humanoid twin on the other, along with the assurance by Prof Chetan Dube, chief executive of software firm IPsoft, that his virtual assistant Amelia ‘will be given human form indistinguishable from the real thing at some point this decade.’

In the absence of any indication that this story is part of a paid advertisement, I’m at a loss to explain how it achieved the status of investigative journalism within the context of a news source like the BBC. If this is what counts as thoughtful reporting, the prospects for AI-based replication are promising indeed.