Category Archives: labour and automation

Which Sky is Falling?

Illustration by Justin Wood, from https://www.nytimes.com/2018/06/09/technology/elon-musk-mark-zuckerberg-artificial-intelligence.html

The widening arc of public discussion regarding the promises and threats of AI includes a recurring conflation of what are arguably two intersecting but importantly different matters of concern. The first is the proposition, repeated by a few prominent members of the artificial intelligentsia and their followers, that AI is proceeding towards a visible horizon of ‘superintelligence,’ culminating in the point at which power over ‘us’ (the humans who imagine themselves as currently in control) will be taken over by ‘them’ (in this case, the machines that those humans have created).[1] The second concern arises from the growing insinuation of algorithmic systems into the adjudication of a wide range of distributions, from social service provision to extrajudicial assassination. The first concern reinforces the premise of AI’s advancement, as the basis for alarm. The second concern requires no such faith in the progress of AI, but only attention to existing investments in automated ‘decision systems’ and their requisite infrastructures.

The conflation of these concerns is exemplified in a NY Times piece this past summer, reporting on debates within the upper echelons of the tech world, including billionaires like Elon Musk and Mark Zuckerberg, at a series of exclusive gatherings held over the past several years. Responding to Musk’s comparison of the dangers of AI with those posed by nuclear weapons, Zuckerberg apparently invited Musk to discuss his concerns at a small dinner party in 2014. We might pause here to note the gratuitousness of the comparison; it’s difficult to take this as anything other than a rhetorical gesture designed to claim the gravitas of an established existential threat. But an even longer pause is warranted for the grounds of Musk’s concern, that is, the ‘singularity,’ or the moment when machines are imagined to surpass human intelligence in ways that will ensure their insurrection.

Let’s set aside for the moment the deeply problematic histories of slavery and rebellion that animate this anxiety, to consider the premise. To share Musk’s concern we need to accept the prospect of machine ‘superintelligence,’ a proposition that others in the technical community, including many deeply engaged with research and development in AI, have questioned. In much of the coverage of debates regarding AI and robotics it seems that to reject the premise of superintelligence is to reject the alarm that Musk raises and, by slippery elision, to reaffirm the benevolence of AI and the primacy of human control.

To demonstrate the extent of concern within the tech community (and by implication those who align with Musk over Zuckerberg), NY Times AI reporter Cade Metz cites recent controversy over the Pentagon’s Project Maven. But of course Project Maven has nothing to do with superintelligence. Rather, it is an initiative to automate the analysis of surveillance footage gathered through the US drone program, based on labels or ‘training data’ developed by military personnel. So being concerned about Project Maven does not require belief in the singularity, but only skepticism about the legality and morality of the processes of threat identification that underwrite the current US targeted killing program. The extensive evidence for the imprecision of those processes that has been gathered by civil society organizations is sufficient to condemn the goal of rendering targeted killing more efficient. The campaign for a preemptive ban on lethal autonomous weapon systems is aimed at interrupting the logical extension of those processes to the point where target identification and the initiation of attack are put under fully automated machine control. Again, this relies not on superintelligence, but on the automation of existing assumptions regarding who and what constitutes an imminent threat.

The invocation of Project Maven in this context is symptomatic of a wider problem, in other words. Raising alarm over the advent of machine superintelligence serves the self-interested purpose of reasserting AI’s promise, while redirecting the debate away from closer examination of more immediate and mundane problems in automation and algorithmic decision systems. The spectacle of the superintelligentsia at war with each other distracts us from the increasing entrenchment of digital infrastructures built out of unaccountable practices of classification, categorization, and profiling. The greatest danger (albeit one differentially distributed across populations) is that of injurious prejudice, intensified by ongoing processes of automation. Not worrying about superintelligence, in other words, doesn’t mean that there’s nothing about which we need to worry.

As critiques of the reliance on data analytics in military operations are joined by revelations of data bias in domestic systems, it is clear that present dangers call for immediate intervention in the governance of current technologies, rather than further debate over speculative futures. The admission by AI developers that so-called machine learning algorithms evade human understanding is taken to suggest the advent of forms of intelligence superior to the human. But an alternative explanation is that these are elaborations of pattern analysis based not on significance in the human sense, but on computationally-detectable correlations that, however meaningless, eventually produce results that are again legible to humans. From training data to the assessment of results, it is humans who inform the input and evaluate the output of the black box’s operations. And it is humans who must take the responsibility for the cultural assumptions, and political and economic interests, on which those operations are based and for the life-and-death consequences that already follow.
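The point about meaningless-but-detectable correlations can be illustrated with a toy sketch (hypothetical, not a model of any particular deployed system): given enough purely random ‘features,’ a search will reliably turn up one that correlates impressively with an equally random target. Nothing here is significant in the human sense; the ‘pattern’ is an artifact of searching many candidates.

```python
import random
import statistics

random.seed(0)

def corr(xs, ys):
    # Pearson correlation coefficient of two equal-length sequences.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# 30 observations of a purely random 'target', and 1000 purely
# random candidate 'features' with no relationship to it.
n_samples, n_features = 30, 1000
target = [random.gauss(0, 1) for _ in range(n_samples)]
features = [[random.gauss(0, 1) for _ in range(n_samples)]
            for _ in range(n_features)]

# Searching enough meaningless features, one will always appear
# to 'predict' the target.
best = max(abs(corr(f, target)) for f in features)
print(f"strongest correlation found among random features: {best:.2f}")
```

With these numbers the strongest spurious correlation typically lands well above 0.5, which looks like a finding to a human reader even though the data are noise by construction.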

[1] We could replace the uprising subject in this imaginary with other subjugated populations figured as plotting a takeover, the difference being that here the power of the master/creator is confirmed by the increasing threat posed by his progeny. Thus also the allusion to our own creation in the illustration by Justin Wood that accompanies the article provoking this post.

Swords to Ploughshares


Having completed my domestic labors for the day (including a turn around the house with my lovely red Miele), I take a moment to consider the most recent development from Boston Dynamics (now part of the robotics initiative at sea in Google’s Alphabet soup). More specifically, the news is of the latest incarnation of Boston Dynamics’ bipedal robot, nicknamed Atlas and famous for its distribution as a platform for DARPA’s Robotics Challenge. Previously figured as life-saving first responder and life-destroying robot soldier, Atlas is now being repositioned – tongue firmly in cheek – as a member of the domestic workforce.

While irony operates effectively to distance its author from serious investment in truth claims or moral positioning, it also generally offers a glimpse of its author’s stance towards its objects. So at the risk of humorlessness, it’s worth reading this latest rendering of Atlas’ promises seriously, in the context of other recent developments in humanoid robotics. I wrote in a former post about the U.S. military’s abandonment of Boston Dynamics’ BigDog and its kin, attributed by commentators to some combination of disappointment in the robot’s performance in the field, and a move toward disinvestment in military applications on the part of Google’s ‘Replicant’ initiative (recently restructured as the ‘X’ group). This leaves a robot solution in search of its problem, and where better to turn than to the last stronghold against automation; that is, the home. Along with the work of care (another favourite for robotics prognosticators), domestic labor (the wonders of dish- and clothes-washing machines notwithstanding) has proven remarkably resistant to automation (remarkable at least to roboticists, if not to those of us well versed in this work’s practical contingencies). In a piece headlined ‘Multimillion dollar humanoid robot doesn’t make for a good cleaner,’ the Guardian reproduces a video clip (produced in fast motion with an upbeat techno soundtrack) showing Florida’s Institute for Human and Machine Cognition (IHMC), runner-up in the 2015 Robotics Challenge, testing new code ‘by getting the multimillion dollar Atlas robot to do household chores.’ In an interesting inversion, the robot is described as the ‘Google-developed US government Atlas robot,’ a formulation which sounds as though the development path went from industry to the public sector, rather than the other way around.

Housework, we’re told, proves ‘more difficult than you might imagine,’ suggesting that the reader imagined by the Guardian is one unfamiliar with the actual exigencies of domestic work (while for other readers those difficulties are easily imaginable). The challenge of housework is revealing of the conditions required for effective automation, and their absence in particular forms of labor. Specifically, robots work well just to the extent that their environments – basically the stimuli that they have to process, and the conditions for an appropriate response – can be engineered to fit their capacities. The factory assembly line has, in this respect, been made into the robot’s home. Domestic spaces, in contrast, and the practicalities of work within them (not least the work of care) are characterized by a level of contingency that has so far flummoxed attempts at automation beyond the kinds of appliances that can either depend on human peripherals to set up their conditions of operation (think loading the dishwasher), or can operate successfully through repetitive, random motion (think Roomba and its clones). Long underestimated in the value chain of labor, robotics for domestic work might just teach us some lessons about the extraordinary complexity of even the most ordinary human activities.

Meanwhile in Davos the captains of multinational finance and industry and their advisors are gathered to contemplate the always-imminent tsunami of automation, including artificial intelligence and humanoid robots, that is predicted to sweep world economies in the coming decades. The Chicago Tribune reports:

At IBM, researchers are working to build products atop the Watson computing platform – best known for its skill answering questions on the television quiz show “Jeopardy” – that will search for job candidates, analyze academic research or even help oncologists make better treatment decisions. Such revolutionary technology is the only way to solve “the big problems” like climate change and disease, while also making plenty of ordinary workers more productive and better at their jobs, according to Guru Banavar, IBM’s vice president for cognitive computing. “Fundamentally,” Banavar said, “people have to get comfortable using these machines that are learning and reasoning.” [sic]

Missing between the lines of the reports from and around Davos are the persistent gaps between the rhetoric of AI and robotics, and the realities. These gaps mean that the progress of automation will be more one of degradation of labor than its replication, so that those who lose their jobs will be accompanied by those forced to adjust to the limits and rigidities of automated service provision. The threat, in other words, is not that any job can be automated, as the gurus assert, but rather that in a political economy based on maximizing profitability for the few, more and more jobs will be transformed into jobs that can be automated, regardless of what is lost. Let us hope that in this economy, low- and no-wage jobs, like care provision and housework, might show a path to resistance.

hu·bris /ˈ(h)yo͞obrəs/ noun: hubris 1. excessive pride or self-confidence.


Nemesis, by Alfred Rethel (1837)

The new year opens with an old story, as The Independent headlines that Facebook multibillionaire Mark Zuckerberg (perhaps finding himself in a crisis of work/life balance) will “build [a] robot butler to look after his child” [sic: those of us who watch Downton Abbey know that childcare is not included in the self-respecting butler’s job description; even the account of divisions of labour among the servants is garbled here], elaborating that “The Facebook founder and CEO’s resolution for 2016 is to build an artificially intelligent system that will be able to control his house, watch over his child and help him to run Facebook.” To put this year’s resolution into perspective, we learn (too much information) that “Mr. Zuckerberg has in the past taken on ‘personal challenges’ that have included reading two books per month, learning Mandarin and meeting a new person each day.” “Every challenge has a theme,” Zuckerberg explains, “and this year’s theme is invention” (a word that, as we know, has many meanings).

We’re reminded that FB has already made substantial investments in AI in areas such as automatic image analysis, though we learn little about the relations (and differences) between those technologies and the project of humanoid robotics. I’m reassured to hear that Zuckerberg has said “that he would start by looking into existing technologies,” and hope that might include signing up to be a follower of this blog. But as the story proceeds, it appears in any case that the technologies that Z has in mind are less humanoid robots than the so-called Internet of Things (i.e. networked devices, presumably including babycams) and data visualization (for his day job). This is of course all much more mundane and so, in the eyes of The Independent’s headline writers, less newsworthy.

The title of this post is of course the most obvious conclusion to draw regarding the case of Mark Zuckerberg; in its modern form, ‘hubris’ refers to an arrogant individual who believes himself capable of anything. And surely in a political economy where excessive wealth enables disproportionate command of other resources, Zuckerberg’s self-confidence is not entirely unwarranted. In this case, however, Zuckerberg’s power is further endowed by non-investigative journalism, which fails to engage in any critical interrogation of his announcement. Rather than questioning Zuckerberg’s resolution for 2016 on the grounds of its shaky technical feasibility or dubious politics (trivializing the labours of service and ignoring their problematic histories), The Independent makes a jump cut to the old saws of Stephen Hawking, Elon Musk and Ex Machina. Of course The Independent wouldn’t be the first to notice the film’s obvious citation of Facebook and its founder and CEO (however well the latter is disguised by the hyper-masculine and morally degenerate figure of Nathan). But the comparison, I think, ends there, and of course, however fabulous, neither Zuckerberg nor Facebook is fictional.

The original Greek connotations of the term ‘hubris’ referenced not just overweening pride, but more violent acts of humiliation and degradation, offensive to the gods. While Zuckerberg’s pride is certainly more mundane, his ambitions join with those of his fellow multibillionaires in their distorting effects on the worlds in which their wealth is deployed (see Democracy Now for the case of Zuckerberg’s interventions into education). And it might be helpful to be reminded that in Greek tragedy excessive pride towards or defiance of the gods leads to nemesis. The gods may play a smaller role in the fate of Mark Zuckerberg, however, and the appropriate response I think is less retributive than redistributive justice.

Humanizing humanity

A series of recent media reports on robotic futures have provoked a post. I’ll begin with the latest announcements of the imminent arrival of the perfect domestic robot friend/pet/servant, this time in the form of Jibo the ‘family robot’. The crowdfunding appeal via Indiegogo features a promotional video headlined by Cynthia Breazeal, CEO of Jibo, Inc. and faculty member in MIT’s Media Lab.

 


In a kind of retro throwback to the sitcoms and Mad Men-esque consumer advertising of the 1950s and 60s, the video shows us an affluent, Caucasian, heteronormative American family demonstrating their love and connectedness through a series of vignettes in which Jibo plays a supporting, but clearly central role. With a feel-good piano solo playing in the background, the video opens with a slow zoom in on an image of a pristine family home, as the narrator explains “This is your house [cut to slow zoom on the family car parked in the driveway] this is your car, [cut and slow zoom to electric toothbrush on the bathroom vanity] this is your toothbrush. These are your things, but these [cut to slow zoom on framed family photo] are the things that matter. And somewhere in between [cut to Jibo, which swivels its ‘head’ in the direction of the camera] is this guy. Introducing Jibo, the world’s first family robot” (my emphasis). As this stereotypical American family becomes the world, or at least those first to experience what the world presumably desires, we see a series of scenes in which their already privileged lives are further enhanced through Jibo’s obsequious intercessions. At the video’s end, the scene shifts to Cynthia Breazeal, seated in what looks like a tidy garage workshop, who poses the questions: “What if technology actually treated you like a human being? … What if technology helped you, like a partner, rather than simply being a tool? That’s what Jibo’s about.” This is followed by a call for our help “to build Jibo, to bring it to the world, and to build the community. Let’s work together, to make Jibo truly great. And together, we can humanize technology.”

As promotion morphs into mobilization, and consumerism into a call for collective action, we might turn to a second story from The China Post published several days earlier, titled ‘Foxconn to increase robot usage to curb workers’ suicide rates’ (Lan Lan and Li Jun, Asia News Network, July 14, 2014).


From this story we learn that “Foxconn Technology Group plans to use more robots in its various manufacturing operations as part of its efforts to replace ‘dangerous, boring and repeated’ work, which has often been blamed for the series of suicides at its various facilities in recent years.” While the embedded quote is not attributed, it echoes the oft-repeated triple of ‘dangerous, dull, and dirty’ that characterizes those forms of labour considered a priority for automation. Assumed to be jobs that no human would want, this valuation obscures the fact that these are the only jobs that, worldwide, increasing numbers of people rely upon to survive. The article goes on to describe the new industrial park in Guiyang being custom designed for Foxconn’s automated production lines, in which energy saving and environmental protection will be prioritized to meet the preference of customers like Apple for more environmentally friendly manufacturing.

As robots like Jibo, designed for friendship with certain humans, appear in these stories, other humans (those whose already-precarious labour is soon to be displaced by further automation) are erased.  And then there’s the robot apocalypse, which according to tech reporter Dylan Love, “scientists are afraid to talk about” (Business Insider, July 18, 2014).  In a story that invites roboticists and other experts to comment on the prospective risks of a “post-singularity world” (the ‘singularity’ being that moment at which the capacities of artificially intelligent machines exceed those of the human), Love quotes Northwestern University law professor John O. McGinnis, who in his paper ‘Accelerating AI’ writes:

The greatest problem is that such artificial intelligence may be indifferent to human welfare. Thus, for instance, unless otherwise programmed, it could solve problems in ways that could lead to harm against humans. But indifference, rather than innate malevolence, is much more easily cured. Artificial intelligence can be programmed to weigh human values in its decision making. The key will be to assure such programming.

In the context of these earlier stories, concerns about the possibility that future humanlike machines might be indifferent to human welfare can’t help but raise the question of contemporary humans’ seeming indifference to the welfare of other humans. As long as representations of the human family like those of Jibo’s promotion continue to universalize the privileged forms of life that they depict, they effectively erase the unequal global divisions of labour and livelihood on which the production of ‘our things’ currently depends. As long as news of Foxconn celebrates the company’s turn to environmentally friendly manufacturing while failing to acknowledge the desperate labour conditions that drive Foxconn workers first to take the dangerous, boring, and repetitive work on offer in the manufacture of Apple products, then drive many of them to suicide, and now threaten to render their lives more desperate with the loss of even those jobs, the problem of just what our shared ‘human values’ are remains. And before we take seriously the question of what it would mean for our technology to treat us as human beings, we might ask what it would mean for us to treat other humans as human beings, including the commitments to social justice that would entail.

Postscript: For a small bit of good news we might turn to one more story that appears this week. Reporter Martyn Williams writes today in PC World that since its purchase by Google, robot company Boston Dynamics’ funding from the US Defense Department has dropped from the $30 million/year range of the past several years, to just $1.1 million for 2014 (the latter for participation in DARPA’s Robotics Challenge). Our relief might be mitigated by speculation that Google will focus its own robotics efforts on factory automation and ‘home help’, but this small movement away from militarism is a welcome one nonetheless.