Which Sky is Falling?

[Illustration: Justin Wood]

The widening arc of public discussion regarding the promises and threats of AI includes a recurring conflation of what are arguably two intersecting but importantly different matters of concern. The first is the proposition, repeated by a few prominent members of the artificial intelligentsia and their followers, that AI is proceeding towards a visible horizon of ‘superintelligence,’ culminating in the point at which power over ‘us’ (the humans who imagine themselves as currently in control) will be taken over by ‘them’ (in this case, the machines that those humans have created).[1] The second concern arises from the growing insinuation of algorithmic systems into the adjudication of a wide range of distributions, from social service provision to extrajudicial assassination. The first concern reinforces the premise of AI’s advancement, as the basis for alarm. The second concern requires no such faith in the progress of AI, but only attention to existing investments in automated ‘decision systems’ and their requisite infrastructures.

The conflation of these concerns is exemplified in a NY Times piece this past summer, reporting on debates within the upper echelons of the tech world, including billionaires like Elon Musk and Mark Zuckerberg, at a series of exclusive gatherings held over the past several years. Responding to Musk’s comparison of the dangers of AI with those posed by nuclear weapons, Zuckerberg apparently invited Musk to discuss his concerns at a small dinner party in 2014. We might pause here to note the gratuitousness of the comparison; it’s difficult to take it as other than a rhetorical gesture designed to claim the gravitas of an established existential threat. But an even longer pause is warranted for the grounds of Musk’s concern, that is, the ‘singularity,’ or the moment when machines are imagined to surpass human intelligence in ways that will ensure their insurrection.

Let’s set aside for the moment the deeply problematic histories of slavery and rebellion that animate this anxiety, to consider the premise. To share Musk’s concern we need to accept the prospect of machine ‘superintelligence,’ a proposition that others in the technical community, including many deeply engaged with research and development in AI, have questioned. In much of the coverage of debates regarding AI and robotics it seems that to reject the premise of superintelligence is to reject the alarm that Musk raises and, by slippery elision, to reaffirm the benevolence of AI and the primacy of human control.

To demonstrate the extent of concern within the tech community (and by implication those who align with Musk over Zuckerberg), NY Times AI reporter Cade Metz cites recent controversy over the Pentagon’s Project Maven. But of course Project Maven has nothing to do with superintelligence. Rather, it is an initiative to automate the analysis of surveillance footage gathered through the US drone program, based on labels or ‘training data’ developed by military personnel. Being concerned about Project Maven, then, does not require belief in the singularity, but only skepticism about the legality and morality of the processes of threat identification that underwrite the current US targeted killing program. The extensive evidence for the imprecision of those processes, gathered by civil society organizations, is sufficient to condemn the goal of rendering targeted killing more efficient. The campaign for a preemptive ban on lethal autonomous weapon systems is aimed at interrupting the logical extension of those processes to the point where target identification and the initiation of attack are put under fully automated machine control. Again this relies not on superintelligence, but on the automation of existing assumptions regarding who and what constitutes an imminent threat.

The invocation of Project Maven in this context is symptomatic of a wider problem, in other words. Raising alarm over the advent of machine superintelligence serves the self-interested purpose of reasserting AI’s promise, while redirecting the debate away from closer examination of more immediate and mundane problems in automation and algorithmic decision systems. The spectacle of the superintelligentsia at war with each other distracts us from the increasing entrenchment of digital infrastructures built out of unaccountable practices of classification, categorization, and profiling. The greatest danger (albeit one differentially distributed across populations) is that of injurious prejudice, intensified by ongoing processes of automation. Not worrying about superintelligence, in other words, doesn’t mean that there’s nothing about which we need to worry.

As critiques of the reliance on data analytics in military operations are joined by revelations of data bias in domestic systems, it is clear that present dangers call for immediate intervention in the governance of current technologies, rather than further debate over speculative futures. The admission by AI developers that so-called machine learning algorithms evade human understanding is taken to suggest the advent of forms of intelligence superior to the human. But an alternative explanation is that these are elaborations of pattern analysis based not on significance in the human sense, but on computationally detectable correlations that, however meaningless, eventually produce results that are again legible to humans. From training data to the assessment of results, it is humans who inform the input and evaluate the output of the black box’s operations. And it is humans who must take responsibility for the cultural assumptions, and the political and economic interests, on which those operations are based, and for the life-and-death consequences that already follow.

[1] We could replace the uprising subject in this imaginary with other subjugated populations figured as plotting a takeover, the difference being that here the power of the master/creator is confirmed by the increasing threat posed by his progeny. Thus also the allusion to our own creation in the illustration by Justin Wood that accompanies the article provoking this post.