Tag Archives: artificial intelligence

DoD Patronage

Ash Carter (photo: Jacquelyn Martin/AP)

On June 6th, 2019 The Boston Globe published an opinion piece titled ‘The morality of defending America: A letter to a young Googler.’ The title evokes an earlier set of epistles, German lyric poet Rainer Maria Rilke’s Letters to a Young Poet, published in 1929 (though the citation by Carter is more likely to E. O. Wilson’s 2013 Letters to a Young Scientist). This letter was penned not by a poet, however, or even by a senior Googler, but rather by Ashton Carter, former Secretary of Defense under Barack Obama and 30-year veteran of the US Department of Defense.

Let’s start with the letter’s title. By taking as given that the project of the US military is to defend America, the title of Carter’s letter assumes that the homeland is under threat – a premise that is arguable in other contexts (though evidently not for Carter here). Carter’s title also forecloses the possibility that US military operations might themselves be seen as questionable, even immoral, by some of the country’s citizens. In that way it silences from the outset a concern that might be at the core of a Google employee’s act of resistance. It ignores, for example, the possibility that what is being defended by US militarism extends beyond the safety of US citizens, to the strategic interests of only some, very specific beneficiaries. It denies that the US military might itself be a provocateur of conflicts and a perpetrator of acts of aggression across the globe, which not only harm those caught up in their path but also, arguably, increase American insecurity. (See for example the reflections of retired colonel Andrew Bacevich on American militarism.) With a ‘defense’ budget that exceeds those of the next 8 most heavily militarized countries in the world combined (including China and Russia), the role of the United States as defender of the homeland and peacekeeper elsewhere is at the very least highly contested. Yet Carter’s letter presupposes a US military devoted to “the work it takes to make our world safer, freer, and more peaceful.” If we accept that premise then yes of course, declining to work for the Defense Department can only be read as churlish, or at the very least (as the title implies) childish.

Carter assures his young reader that as a scientist by background, “I share your commitment to ensuring that technology is used for moral ends.” As a case in point (somewhat astonishingly) he offers “the tradition of the Manhattan Project scientists who created that iconic ‘disruptive’ technology: atomic weapons.” Some of us are immediately reminded of the enormous moral agonies faced by those scientists, captured most poignantly in ‘father’ of the bomb J. Robert Oppenheimer’s quote from the Bhagavad Gita, “Now I am become Death, the destroyer of worlds.” We might think as well about the question of whether the bombings of Hiroshima and Nagasaki should be recorded among history’s most horrendous war crimes, rather than as proportional actions that, as Carter asserts, “saved lives by bringing a swift end to World War II.” Again, this point is at least highly contested. And Carter’s assertion that the invention of the atom bomb “then deterred another, even more destructive war between superpowers” fails to recognize the continued threat of nuclear war, as the most immediate potential for collective annihilation that we currently face.

After taking some personal credit for working to minimize the threats that the atomic weapons he so admires created, Carter moves on to set out his reasons for urging his readers to become contributors to the DoD. First on the list is the “inescapable necessity” of national defense, and the simple fact that “AI is an increasingly important military tool.” Who says so; that is, who exactly is invested in the necessity of a national defense based on military hegemony? And who promotes ‘AI’ as a necessary part of the military ‘toolkit’? Rather than even acknowledge these questions, the question Carter poses is this:

Will it be crude, indiscriminate, and needlessly destructive? Or will it be controlled, precise, and designed to follow US laws and international norms?

If you don’t step up to the project of developing AI, Carter continues, you face “ceding the assignment to others who may not share your skill or moral center.” The choice here, it would seem, is a clear one, between indiscriminate killing and killing that is controlled and precise. The possibility that one might rightly question the logics and legitimacy of US targeting operations, and the project of further automating the technologies that enable those operations, seems beyond the scope of Carter’s moral compass.

Carter next cites his own role in the issuance of the Pentagon’s 2012 policy on autonomy in weapon systems. The requirement of human involvement in any decision to use force, he points out, is tricky; in the case of real-time calculations involved in, for example, a guided missile, “avoidance of error is designed into the weapon and checked during rigorous testing.” There has of course been extensive critical analysis of the possibility of testing weapons in conditions that match those in which they’ll subsequently be used, but the key issue here is that of target identification before a weapon is fired. In the case of automated target identification Carter observes:

A reasonable level of traceability — strong enough to satisfy the vital standard of human responsibility — must be designed into those algorithms. The integrity of the huge underlying data sets must also be checked. This is complex work, and it takes specialists like you to ensure it’s done right.

Again, the assumption that automated targeting can be “done right” sidesteps the wider, more profound and critical question of how the legitimacy of targeting is determined by the US military command at present. As long as serious questions remain regarding the practices through which the US military defines what comprises an ‘imminent threat,’ rendering targeting more efficient through its automation is equally questionable.

Carter then turns to Google’s Dragonfly Project, the controversial development of a search engine compliant with the Chinese government’s censorship regime, instructing his young reader that “Working in and for China effectively means cooperating with the People’s Liberation Army.” Even setting aside the breadth of this statement, it ignores the possibility that it might be those same “young Googlers” who also strenuously protested the company’s participation in the Dragonfly effort. Moreover the implication is that the illegitimacy of one company project precludes raising concerns about the legitimacy of another, and that somehow not participating in DoD projects is itself effectively aiding and abetting America’s designated ‘enemies.’

Carter’s final point requires citation in full:

Third, and perhaps most important, Google itself and most of your colleagues are American. Your way of life, the democratic institutions that empower you, the laws that enable the operations and profits of the corporation, and even your very survival rely on the protection of the United States. Surely you have a responsibility to contribute as best you can to the shared project of defending the country that has given Google so much.

To be a loyal American, in other words, is to embrace the version of defense of the homeland that Carter assumes; one based in US military hegemony. Nowhere is there a hint of what the alternatives might be – resources redirected to US diplomatic capacities, or working to alleviate rather than exacerbate the conditions that lead to armed conflict around the world (many of them arguably the effects of earlier US interventions). Anything less than unquestioning support of the project of achieving US security through supremacy in military technology is ingratitude, unworthy of us as US citizens, dependent upon the benevolent patronage of the military hand that must continue to feed us and that we, in return, must loyally serve.

Just-so stories


Alerted that BBC News/Technology has developed a story titled ‘Intelligent Machines: The Truth Behind AI Fiction’, I follow the link with some hopeful anticipation. The piece opens: ‘Over the next week, the BBC will be looking into all aspects of artificial intelligence – from how to build a thinking machine, to the ethics of doing so, to questions about whether an AI can ever be creative.’ But as I read on my state changes to one that my English friends would characterize as gobsmacked. Instead of in-depth, critical journalism this piece reads like a (somewhat patronizing) children’s primer with corporate sponsorship. We’re told, for example, that Watson, IBM’s supercomputer, ‘can understand natural language and read millions of documents in seconds’. But if it’s a deeper understanding of the state of the art in AI that we’re after, we can’t let terms like ‘understand’ and ‘read’ go by unremarked. Rather, it’s precisely the translation of computational processes as ‘understanding’ or ‘reading’, and the difference lost in that translation from our understanding and reading of those terms, that needs to be illuminated. We might then fully appreciate the ingenious programming that enables the system singularized as ‘Watson’ to compete successfully on the televised quiz show Jeopardy, despite the machine’s cluelessness regarding the cultural references that its algorithms and databases encode.

Things go from bad to worse, however, when we’re told that Watson ‘is currently working in harmony with humans, in diverse fields such as the research and development departments of big companies such as Procter & Gamble and Coca-Cola – helping them find new products’. Why equate harmonious working relations with the deployment of an IBM supercomputer in the service of corporate R&D? And what kinds of ongoing labours of code development and maintenance are required to reconfigure a cluster of ninety IBM Power 750 servers, each of which uses a 3.5 GHz POWER7 eight-core processor, in such a way that it can operate usefully within these enterprises? The anthropomorphism of Watson obfuscates, rather than explicates, these ‘truths’ about artificial intelligence and its agencies.

The structure of the story is a series of loops between fiction and ‘fact’, moving from blockbuster films to just-so stories. In place of the Terminator, we’re told, ‘The US military unit Darpa [sic] is developing lots of robotic kit, such as exoskeletons to give soldiers superhuman strength and access to visual displays that will help their decision making. It is also using Atlas robots, developed by Boston Dynamics, intended for search and rescue.’ (There is a brief mention of the campaign against lethal autonomous weapons, though with no links provided.) After a reference to C-3PO, we’re told that ‘In the real world, companion robots are really starting to take off’, exemplified by Pepper, which ‘has learnt about human emotions by watching videos showing facial expressions.’ (See my earlier post on companion robots here.) From Wall-E, surely among the most endearing of fictional robots (see Vivian Sobchack’s brilliant analysis), we go to Roomba, about which we’re told that ‘[a]necdotal evidence suggests some people become as attached to them as pets and take them on holiday.’ We finally close (not a moment too soon) with Ex Machina’s Ava on one hand, and roboticist Hiroshi Ishiguro’s humanoid twin on the other, along with the assurance by Prof Chetan Dube, chief executive of software firm IPsoft, that his virtual assistant Amelia ‘will be given human form indistinguishable from the real thing at some point this decade.’

In the absence of any indication that this story is part of a paid advertisement, I’m at a loss to explain how it achieved the status of investigative journalism within the context of a news source like the BBC. If this is what counts as thoughtful reporting, the prospects for AI-based replication are promising indeed.