AI and accountability

DoD Patronage

Ash Carter. Photo: Jacquelyn Martin/AP

On June 6th, 2019 The Boston Globe published an opinion piece titled ‘The morality of defending America: A letter to a young Googler.’ The title evokes an earlier set of epistles, German lyric poet Rainer Maria Rilke’s Letters to a Young Poet, published in 1929 (though Carter’s more likely referent is E. O. Wilson’s 2013 Letters to a Young Scientist). This letter was penned not by a poet, however, or even by a senior Googler, but rather by Ashton Carter, former Secretary of Defense under Barack Obama and 30-year veteran of the US Department of Defense.

Let’s start with the letter’s title. By taking as given that the project of the US military is to defend America, the title of Carter’s letter assumes that the homeland is under threat, a premise that is arguable in other contexts (though evidently not for Carter here). Carter’s title also forecloses the possibility that US military operations might themselves be seen as questionable, even immoral, by some of the country’s citizens. In that way it silences from the outset a concern that might be at the core of a Google employee’s act of resistance. It ignores, for example, the possibility that what is being defended by US militarism extends beyond the safety of US citizens, to the strategic interests of only some, very specific beneficiaries. It denies that the US military might itself be a provocateur of conflicts and a perpetrator of acts of aggression across the globe, which not only harm those caught up in their path but also, arguably, increase American insecurity. (See for example the reflections of retired colonel Andrew Bacevich on American militarism.) With a ‘defense’ budget that exceeds those of the next 8 most heavily militarized countries in the world combined (including China and Russia), the role of the United States as defender of the homeland and peacekeeper elsewhere is at the very least highly contested. Yet Carter’s letter presupposes a US military devoted to “the work it takes to make our world safer, freer, and more peaceful.” If we accept that premise then yes, of course, declining to work for the Defense Department can only be read as churlish, or at the very least (as the title implies) childish.

Carter assures his young reader that as a scientist by background, “I share your commitment to ensuring that technology is used for moral ends.” As a case in point (somewhat astonishingly) he offers “the tradition of the Manhattan Project scientists who created that iconic ‘disruptive’ technology: atomic weapons.” Some of us are immediately reminded of the enormous moral agonies faced by those scientists, captured most poignantly in ‘father’ of the bomb J. Robert Oppenheimer’s quote from the Bhagavad Gita, “Now I am become Death, the destroyer of worlds.” We might think as well about the question of whether the bombings of Hiroshima and Nagasaki should be recorded among history’s most horrendous war crimes, rather than as proportional actions that, as Carter asserts, “saved lives by bringing a swift end to World War II.” Again, this point is at least highly contested. And Carter’s assertion that the invention of the atom bomb “then deterred another, even more destructive war between superpowers” fails to recognize the continued threat of nuclear war, as the most immediate potential for collective annihilation that we currently face.

After taking some personal credit for working to minimize the threats created by the atomic weapons he so admires, Carter moves on to set out his reasons for urging his readers to become contributors to the DoD. First on the list is the “inescapable necessity” of national defense, and the simple fact that “AI is an increasingly important military tool.” Who says so? That is, who exactly is invested in the necessity of a national defense based on military hegemony? And who promotes ‘AI’ as a necessary part of the military ‘toolkit’? Rather than acknowledging these questions, Carter asks only this:

Will it be crude, indiscriminate, and needlessly destructive? Or will it be controlled, precise, and designed to follow US laws and international norms?

If you don’t step up to the project of developing AI, Carter continues, you face “ceding the assignment to others who may not share your skill or moral center.” The choice here, it would seem, is a clear one, between indiscriminate killing and killing that is controlled and precise. The possibility that one might rightly question the logics and legitimacy of US targeting operations, and the project of further automating the technologies that enable those operations, seems beyond the scope of Carter’s moral compass.

Carter next cites his own role in the issuance of the Pentagon’s 2012 policy on AI in weapon systems. The requirement of human involvement in any decision to use force, he points out, is tricky; in the case of the real-time calculations involved in, for example, a guided missile, “avoidance of error is designed into the weapon and checked during rigorous testing.” There has of course been extensive critical analysis of the possibility of testing weapons in conditions that match those in which they’ll subsequently be used, but the key issue here is that of target identification before a weapon is fired. In the case of automated target identification Carter observes:

A reasonable level of traceability — strong enough to satisfy the vital standard of human responsibility — must be designed into those algorithms. The integrity of the huge underlying data sets must also be checked. This is complex work, and it takes specialists like you to ensure it’s done right.

Again, the assumption that automated targeting can be “done right” sidesteps the wider, more profound and critical question of how the legitimacy of targeting is determined by the US military command at present. As long as serious questions remain regarding the practices through which the US military defines what comprises an ‘imminent threat,’ rendering targeting more efficient through its automation is equally questionable.

Carter then turns to Google’s Dragonfly Project, the controversial development of a search engine compliant with the Chinese government’s censorship regime, instructing his young reader that “Working in and for China effectively means cooperating with the People’s Liberation Army.” Even without addressing the breadth of this statement, it ignores the possibility that it might be those same “young Googlers” who also strenuously protested the company’s participation in the Dragonfly effort. Moreover, the implication is that the illegitimacy of one company project precludes raising concerns about the legitimacy of another, and that somehow not participating in DoD projects is itself effectively aiding and abetting America’s designated ‘enemies.’

Carter’s final point requires citation in full:

Third, and perhaps most important, Google itself and most of your colleagues are American. Your way of life, the democratic institutions that empower you, the laws that enable the operations and profits of the corporation, and even your very survival rely on the protection of the United States. Surely you have a responsibility to contribute as best you can to the shared project of defending the country that has given Google so much.

To be a loyal American, in other words, is to embrace the version of defense of the homeland that Carter assumes; one based in US military hegemony. Nowhere is there a hint of what the alternatives might be – resources redirected to US diplomatic capacities, or working to alleviate rather than exacerbate the conditions that lead to armed conflict around the world (many of them arguably the effects of earlier US interventions). Anything less than unquestioning support of the project of achieving US security through supremacy in military technology is ingratitude, unworthy of us as US citizens, dependent upon the benevolent patronage of the military hand that must continue to feed us and that we, in return, must loyally serve.

Which Sky is Falling?

Illustration by Justin Wood, from https://www.nytimes.com/2018/06/09/technology/elon-musk-mark-zuckerberg-artificial-intelligence.html

The widening arc of public discussion regarding the promises and threats of AI includes a recurring conflation of what are arguably two intersecting but importantly different matters of concern. The first is the proposition, repeated by a few prominent members of the artificial intelligentsia and their followers, that AI is proceeding towards a visible horizon of ‘superintelligence,’ culminating in the point at which power over ‘us’ (the humans who imagine themselves as currently in control) will be taken over by ‘them’ (in this case, the machines that those humans have created).[1] The second concern arises from the growing insinuation of algorithmic systems into the adjudication of a wide range of distributions, from social service provision to extrajudicial assassination. The first concern reinforces the premise of AI’s advancement, as the basis for alarm. The second concern requires no such faith in the progress of AI, but only attention to existing investments in automated ‘decision systems’ and their requisite infrastructures.

The conflation of these concerns is exemplified in a NY Times piece this past summer, reporting on debates within the upper echelons of the tech world, including billionaires Elon Musk and Mark Zuckerberg, at a series of exclusive gatherings held over the past several years. Responding to Musk’s comparison of the dangers of AI with those posed by nuclear weapons, Zuckerberg apparently invited Musk to discuss his concerns at a small dinner party in 2014. We might pause here to note the gratuitousness of the comparison; it’s difficult to take this as other than a rhetorical gesture designed to claim the gravitas of an established existential threat. But an even longer pause is warranted for the grounds of Musk’s concern, that is, the ‘singularity,’ or the moment when machines are imagined to surpass human intelligence in ways that will ensure their insurrection.

Let’s set aside for the moment the deeply problematic histories of slavery and rebellion that animate this anxiety, to consider the premise. To share Musk’s concern we need to accept the prospect of machine ‘superintelligence,’ a proposition that others in the technical community, including many deeply engaged with research and development in AI, have questioned. In much of the coverage of debates regarding AI and robotics it seems that to reject the premise of superintelligence is to reject the alarm that Musk raises and, by slippery elision, to reaffirm the benevolence of AI and the primacy of human control.

To demonstrate the extent of concern within the tech community (and, by implication, among those who align with Musk over Zuckerberg), NY Times AI reporter Cade Metz cites recent controversy over the Pentagon’s Project Maven. But of course Project Maven has nothing to do with superintelligence. Rather, it is an initiative to automate the analysis of surveillance footage gathered through the US drone program, based on labels or ‘training data’ developed by military personnel. So being concerned about Project Maven does not require belief in the singularity, but only skepticism about the legality and morality of the processes of threat identification that underwrite the current US targeted killing program. The extensive evidence for the imprecision of those processes that has been gathered by civil society organizations is sufficient to condemn the goal of rendering targeted killing more efficient. The campaign for a preemptive ban on lethal autonomous weapon systems is aimed at interrupting the logical extension of those processes to the point where target identification and the initiation of attack are put under fully automated machine control. Again, this relies not on superintelligence, but on the automation of existing assumptions regarding who and what constitutes an imminent threat.
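To make that point concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn; the features, labels, and dimensions are invented and bear no relation to the actual Maven pipeline). A classifier trained on analyst-supplied labels can only reproduce, at scale, the judgments that those labels already encode:

```python
# Hypothetical sketch: supervised classification reproduces its training labels.
# Nothing here reflects the real Project Maven system; it illustrates the logic only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 16))       # stand-in for features extracted from video frames
labels = (features[:, 0] > 0).astype(int)   # stand-in for analysts' rule: 1 = 'threat', 0 = 'no threat'

model = LogisticRegression().fit(features, labels)

new_frames = rng.normal(size=(5, 16))
print(model.predict(new_frames))            # the analysts' categories, now applied automatically
```

Whatever assumptions define the ‘threat’ category at the labeling stage are simply propagated, faster and at greater volume, by the automated stage.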

The invocation of Project Maven in this context is symptomatic of a wider problem. Raising alarm over the advent of machine superintelligence serves the self-interested purpose of reasserting AI’s promise, while redirecting the debate away from closer examination of more immediate and mundane problems in automation and algorithmic decision systems. The spectacle of the superintelligentsia at war with each other distracts us from the increasing entrenchment of digital infrastructures built out of unaccountable practices of classification, categorization, and profiling. The greatest danger (albeit one differentially distributed across populations) is that of injurious prejudice, intensified by ongoing processes of automation. Not worrying about superintelligence, in other words, doesn’t mean that there’s nothing about which we need to worry.

As critiques of the reliance on data analytics in military operations are joined by revelations of data bias in domestic systems, it is clear that present dangers call for immediate intervention in the governance of current technologies, rather than further debate over speculative futures. The admission by AI developers that so-called machine learning algorithms evade human understanding is taken to suggest the advent of forms of intelligence superior to the human. But an alternative explanation is that these are elaborations of pattern analysis based not on significance in the human sense, but on computationally detectable correlations that, however meaningless, eventually produce results that are again legible to humans. From training data to the assessment of results, it is humans who inform the input and evaluate the output of the black box’s operations. And it is humans who must take responsibility for the cultural assumptions, and political and economic interests, on which those operations are based and for the life-and-death consequences that already follow.
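For a sense of what ‘correlation without significance’ can look like in practice, here is a minimal, hypothetical sketch (Python with scikit-learn; every variable is invented for illustration). The learner exploits whatever feature happens to co-vary with the labels, whether or not it means anything, and its apparent competence evaporates once that correlation breaks:

```python
# Hypothetical sketch: a model that learns a meaningless but label-correlated artifact.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
label = rng.integers(0, 2, size=n)

# Feature 0 is a meaningless artifact (say, a watermark or sensor ID) that happens to
# track the label in the training data; feature 1 is weakly but genuinely informative.
artifact = label + rng.normal(scale=0.1, size=n)
signal = label + rng.normal(scale=2.0, size=n)
X_train = np.column_stack([artifact, signal])

model = LogisticRegression().fit(X_train, label)

# In deployment the artifact no longer tracks the label, and performance drops toward chance.
X_deploy = np.column_stack([rng.normal(scale=0.1, size=n), label + rng.normal(scale=2.0, size=n)])
print("training accuracy:  ", model.score(X_train, label))
print("deployment accuracy:", model.score(X_deploy, label))
```

The model is not grasping significance; it is fitting whatever statistical regularities the humans who assembled the data happened to leave in it.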

[1] We could replace the uprising subject in this imaginary with other subjugated populations figured as plotting a takeover, the difference being that here the power of the master/creator is confirmed by the increasing threat posed by his progeny. Thus also the allusion to our own creation in the illustration by Justin Wood that accompanies the article provoking this post.

Corporate Accountability


This graphic appears at https://www.jacobinmag.com/2018/06/google-project-maven-military-tech-workers

On June 7th, 2018, Google CEO Sundar Pichai published a post on the company’s public blog site titled ‘AI at Google: our Principles’ (hereafter ‘the Principles’). The release of this statement was responsive in large measure to dissent from Google employees beginning early in the fall of last year; while those debates are not addressed directly, their traces are evident in the subtext. The employee dissent focused on the company’s contracts with the US Department of Defense, particularly for work on its Algorithmic Warfare Cross-Functional Team, also known as Project Maven, a controversy that had been receiving increasingly widespread attention in the press.

It is to the credit of Google workers that they have the courage and commitment to express their concerns. And it is to Google management’s credit that, unusually among major US corporations, it both encourages dissent and feels compelled to respond. I was involved in organizing a letter from researchers in support of Googlers and other tech workers, and in that capacity was gratified to hear Google announce that it would not renew the Project Maven contract when it expires next year. (Disclosure: I think US militarism is a global problem, perpetrating unaccountable violence while further jeopardizing the safety of US citizens.) In this post I want to take a step away from that particular issue, however, to do a closer reading of the principles that Pichai has set out. In doing so, I want to acknowledge Google’s leadership in creating a public statement of its principles for the development of technologies, a move that is also quite unprecedented, as far as I’m aware, for private corporations. And I want to emphasize that the critique that I set out here is not aimed at Google uniquely, but rather is meant to highlight matters of concern across the tech industry, as well as within wider discourses of technology development.

One question we might ask at the outset is why this statement of principles is framed in terms of AI, rather than software development more broadly. Pichai’s blog post opens with this sentence: “At its heart, AI is computer programming that learns and adapts.” Those who have been following this blog will be able to anticipate my problems with this statement, which singularizes ‘AI’ as an agent with a ‘heart’ that engages in learning, and in that way contributes to its mystification. I would rephrase this along the lines of “AI is the cover term for a range of techniques for data analysis and processing, the relevant parameters of which can be adjusted according to either internally or externally generated feedback.” One could substitute “information technologies (IT)” or “software” for AI throughout the principles, moreover, and their sense would remain the same.
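As a gloss on that rephrasing (mine, not Pichai’s), here is a minimal sketch of ‘learning and adapting’ as nothing more mysterious than parameter adjustment in response to an error signal; the data and numbers are invented for illustration:

```python
# Hypothetical sketch: 'learning' as iterative parameter adjustment from feedback.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]    # (input, observed output) pairs

weight = 0.0                                    # the adjustable parameter
learning_rate = 0.05

for _ in range(200):
    for x, y in data:
        error = weight * x - y                  # feedback: how far off the current parameter is
        weight -= learning_rate * error * x     # adjust the parameter to reduce that error

print(f"fitted weight: {weight:.2f}")           # settles near 2: a statistical fit, not a mind
```

The point is not that such techniques are trivial, but that describing them as parameter adjustment keeps the human choices of data, objective, and feedback in view, where talk of a learning ‘agent’ tends to obscure them.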

Pichai continues: “It [AI] can’t solve every problem, but its potential to improve our lives is profound.” While this is a familiar (and some would argue innocent enough) premise, it’s always worth asking several questions in response: What’s the evidentiary basis for AI’s “profound potential”? Whose lives, more specifically, stand to be improved? And what other avenues for the enhancement of human well-being might the potential of AI be compared to, both in terms of efficacy and of the number of persons positively affected?

Regrettably, the opening paragraph closes with some product placement, as Pichai asserts that Google’s development of AI makes its products more useful, from “email that’s spam-free and easier to compose, to a digital assistant you can speak to naturally, to photos that pop the fun stuff out for you to enjoy,” with embedded links to associated promotional sites (removed here in order not to propagate the promotion). The subsequent paragraph then offers a list of non-commercial applications of Google’s data analytics, whose “clear benefits are why Google invests heavily in AI research and development.”

This promotional opening then segues to the preamble to the Principles, explaining that they are motivated by the recognition that “How AI is developed and used will have a significant impact on society for many years to come.” Readers familiar with the field of science and technology studies (STS) will know that the term ‘impact’ has been extensively critiqued within STS for its presupposition that technology is somehow outside of society to begin with. Like any technology, AI/IT does not originate elsewhere, like an asteroid, and then make contact. Rather, like Google, AI/IT is constituted from the start by relevant cultural, political, and economic imaginaries, investments, and interests. The challenge is to acknowledge the genealogies of technical systems and to take responsibility for ongoing, including critical, engagement with their consequences.

The preamble then closes with this proviso: “We acknowledge that this area is dynamic and evolving, and we will approach our work with humility, a commitment to internal and external engagement, and a willingness to adapt our approach as we learn over time.” Notwithstanding my difficulties in thinking of a precedent for humility in the case of Google (or any of the other Big Five), this is a welcome statement, particularly in its commitment to continuing to listen both to employees and to relevant voices beyond the company.

The principles themselves are framed as a set of objectives for the company’s AI applications, all of which are unarguable goods. These are: being socially beneficial, avoiding the creation or reinforcement of social bias, ensuring safety, providing accountability, protecting privacy, and upholding standards of scientific excellence. Finally, Google’s technologies should “be available for uses that support these principles.” While there is much to commend here, some passages shouldn’t go by unremarked.

The principle “Be built and tested for safety” closes with this sentence: “In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.” What does this imply for the cases where this is not “appropriate”; that is, what would justify putting AI technologies into use in unconstrained environments, where their operations are more consequential but harder to monitor?

The principle “Be accountable to people” states: “We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.” This is a key objective, but how, realistically, will this promise be implemented? As worded, it implicitly acknowledges a series of complex and unsolved problems: the increasing opacity of algorithmic operations, the absence of due process for those who are adversely affected, and the increasing threat that automation will translate into autonomy, in the sense of technologies that operate in ways that matter without provision for human judgment or accountability. Similarly, for privacy design, Google promises to “give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.” Again we know that these are precisely the areas that have been demonstrated to be highly problematic with more conventional techniques; when and how will those longstanding, and intensifying, problems be fully acknowledged and addressed?

The statement closes, admirably, with an explicit list of applications that Google will not pursue. The first item, however, includes a rather curious set of qualifications:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.

What are the qualifiers “overall” and “material” doing here? What will be the basis for the belief that “the benefits substantially outweigh the risks,” and who will adjudicate that?

There is a welcome commitment not to participate in the development of

2.  Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

As the Project Maven example illustrates, the line between a weapon and a weapon system can be a tricky one to draw. Again from STS we know that technologies are not discrete entities; their purposes and implementations need to be assessed in the context of the more extended sociotechnical systems of which they’re part.

And finally, Google pledges not to develop:

  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

Again, these commitments are laudable; however, we know that the normative and legal frameworks governing surveillance and human rights are highly contested and frequently violated. This means that adherence to these principles will require working with relevant NGOs (for example, the International Committee of the Red Cross and Human Rights Watch), continuing to monitor the application of Google’s technologies, and welcoming challenges based on evidence of uses that violate the principles.

A coda to this list affirms Google’s commitment to work with “governments and the military in many other areas,” under the pretense that this can be restricted to operations that “keep [LS: read US] service members and civilians safe.” This odd pairing of governments, in the plural, with the military, in the singular, might raise further questions regarding the obligations of global companies like Google and the other Big Five information technology companies. What if it were to read “governments and militaries in many other areas”? What does working either with one nation’s military, or with many, imply for Google’s commitment to users and customers around the world?

The statement closes with:

We believe these principles are the right foundation for our company and the future development of AI. This approach is consistent with the values laid out in our original Founders’ Letter back in 2004. There we made clear our intention to take a long-term perspective, even if it means making short-term tradeoffs. We said it then, and we believe it now.

This passage is presumably responsive to media reports of changes to Google’s Code of Conduct, from “Don’t Be Evil” (highly lauded but actually setting quite a low bar), to Alphabet’s “Do the Right Thing.” This familiar injunction is also a famously vacuous one, in the absence of the requisite bodies for deliberation, appeal, and redress.

The overriding question for all of these principles, in the end, concerns the processes through which their meaning, and adherence to them, will be adjudicated. It’s here that Google’s own status as a private corporation, now a giant operating in the context of wider economic and political orders, needs to be brought forward from the subtext and subjected to more explicit debate. While Google can rightfully claim some leadership among the Big Five in being explicit about its guiding principles and the areas that it will not pursue, this is only because the standards are so abysmally low. We should demand a lot more from companies as large as Google, which control such disproportionate amounts of the world’s wealth and yet operate largely outside the realm of democratic or public accountability.