A series of recent media reports on robotic futures has prompted this post. I’ll begin with the latest announcement of the imminent arrival of the perfect domestic robot friend/pet/servant, this time in the form of Jibo the ‘family robot’. The crowdfunding appeal via Indiegogo features a promotional video headlined by Jibo, Inc. CEO Cynthia Breazeal, a faculty member in MIT’s Media Lab.
In a kind of retro throwback to the sitcoms and Mad Men-esque consumer advertising of the 1950s and 60s, the video shows us an affluent, Caucasian, heteronormative American family demonstrating their love and connectedness through a series of vignettes in which Jibo plays a supporting, but clearly central, role. With a feel-good solo piano soundtrack playing in the background, the video opens with a slow zoom in on an image of a pristine family home, as the narrator explains: “This is your house [cut to slow zoom on the family car parked in the driveway] this is your car, [cut and slow zoom to electric toothbrush on the bathroom vanity] this is your toothbrush. These are your things, but these [cut to slow zoom on framed family photo] are the things that matter. And somewhere in between [cut to Jibo, which swivels its ‘head’ in the direction of the camera] is this guy. Introducing Jibo, the world’s first family robot” (my emphasis). As this stereotypical American family becomes the world, or at least those first to experience what the world presumably desires, we see a series of scenes in which their already privileged lives are further enhanced through Jibo’s obsequious intercessions. At the video’s end, the scene shifts to Cynthia Breazeal, seated in what looks like a tidy garage workshop, who poses the questions: “What if technology actually treated you like a human being? … What if technology helped you, like a partner, rather than simply being a tool? That’s what Jibo’s about.” This is followed by a call for our help “to build Jibo, to bring it to the world, and to build the community. Let’s work together, to make Jibo truly great. And together, we can humanize technology.”
As promotion morphs into mobilization, and consumerism into a call for collective action, we might turn to a second story from The China Post published several days earlier, titled ‘Foxconn to increase robot usage to curb workers’ suicide rates’ (Lan Lan and Li Jun, Asia News Network, July 14, 2014).
From this story we learn that “Foxconn Technology Group plans to use more robots in its various manufacturing operations as part of its efforts to replace ‘dangerous, boring and repeated’ work, which has often been blamed for the series of suicides at its various facilities in recent years.” While the embedded quote is not attributed, it echoes the oft-repeated triple of ‘dangerous, dull, and dirty’ that characterizes those forms of labour considered priorities for automation. Assumed to be jobs that no human would want, this valuation obscures the fact that these are the only jobs that, worldwide, increasing numbers of people rely upon to survive. The article goes on to describe the new industrial park in Guiyang being custom designed for Foxconn’s automated production lines, in which energy saving and environmental protection will be prioritized to meet the preference of customers like Apple for more environmentally friendly manufacturing.
As robots like Jibo, designed for friendship with certain humans, appear in these stories, other humans (those whose already-precarious labour is soon to be displaced by further automation) are erased. And then there’s the robot apocalypse, which according to tech reporter Dylan Love, “scientists are afraid to talk about” (Business Insider, July 18, 2014). In a story that invites roboticists and other experts to comment on the prospective risks of a “post-singularity world” (the ‘singularity’ being that moment at which the capacities of artificially intelligent machines exceed those of the human), Love quotes Northwestern University law professor John O. McGinnis, who in his paper ‘Accelerating AI’ writes:
The greatest problem is that such artificial intelligence may be indifferent to human welfare. Thus, for instance, unless otherwise programmed, it could solve problems in ways that could lead to harm against humans. But indifference, rather than innate malevolence, is much more easily cured. Artificial intelligence can be programmed to weigh human values in its decision making. The key will be to assure such programming.
In the context of these earlier stories, concerns about the possibility that future humanlike machines might be indifferent to human welfare inevitably raise the question of contemporary humans’ seeming indifference to the welfare of other humans. As long as representations of the human family like those of Jibo’s promotion continue to universalize the privileged forms of life that they depict, they effectively erase the unequal global divisions of labour and livelihood on which the production of ‘our things’ currently depends. And as long as news of Foxconn celebrates the company’s turn to environmentally friendly manufacturing while failing to acknowledge the desperate labour conditions that first drive Foxconn workers to take the dangerous, boring, and repetitive work on offer in the manufacture of Apple products, then drive many of them to suicide, and now threaten to render their lives more desperate with the loss of even those jobs, the problem of just what our shared ‘human values’ are remains. Before we take seriously the question of what it would mean for our technology to treat us as human beings, we might ask what it would mean for us to treat other humans as human beings, and what commitments to social justice that would entail.
Postscript: For a small bit of good news we might turn to one more story that appeared this week. Reporter Martyn Williams writes today in PC World that since its purchase by Google, robot company Boston Dynamics’ funding from the US Defense Department has dropped from the $30 million per year range of the past several years to just $1.1 million for 2014 (the latter for participation in DARPA’s robotics challenge). Our relief might be tempered by speculation that Google will focus its own robotics efforts on factory automation and ‘home help’, but this small movement away from militarism is a welcome one nonetheless.