BRAINIAC Q&A (17)
November 25, 2012
From late September 2002 through early 2006, HILOBROW’s Joshua Glenn wrote THE EXAMINED LIFE, a weekly three-item column for the Boston Globe’s Ideas section; and from late 2006 through mid-2008, he wrote BRAINIAC, an Ideas section blog that was repurposed as a three-item weekly column in the paper. This series reprints a few Q&As from Glenn’s two Ideas columns. [Brainiac image via 4CP]
February 6, 2005
THE THEOLOGICAL ROBOT
While visiting MIT’s Artificial Intelligence Lab in the fall of 1995, esteemed Harvard Divinity School professor Harvey Cox noticed that the motor-driven eyes of Cog, a 7-foot-tall humanoid robot, were tracking his every movement. So he reached out and shook the creature’s hand. “There was a collective gasp from the Harvard theologians and MIT scientists present,” self-described robotics theologian Anne Foerst recounts in her new book, God in the Machine (Dutton). In her book, Foerst seeks to bridge the divide between religion and AI research by arguing that robots have much to teach us about ourselves and our relationship with God. Foerst spoke with me from St. Bonaventure University in upstate New York, where she teaches theology and computer science.
IDEAS: You engineered the Cox-Cog meetup while working as a theologian at MIT’s AI Lab. Why would a robotics group invite you to join their team?
FOERST: Back in 1993, when I met Rodney Brooks, the AI Lab’s associate director at the time, he’d broken with the traditional assumption within AI that intelligence is merely a kind of software that can be programmed into a machine. Rod’s group had recently built Cog, a machine that learned through physical embodiment and social interaction, just like we humans do… I wanted to ask [his team], “What does it mean to be human? Are we made in the image of God? Can a robot be human?” He decided that my questions might prove helpful to their work, and invited me aboard.
IDEAS: So did you decide whether or not robots can, in fact, be human?
FOERST: What I learned from the AI Lab’s robots, which were designed to trigger emotional and social responses, is that we can bond with them. So although they can’t be human — to be human, I think, means needing to participate in the mutual process of telling stories that make sense of the world and who we are — humanoid robots can still be considered persons. Personhood simply means playing a role, if only a passive one, in that mutual narrative process. Like babies, or Alzheimer’s patients, humanoid robots don’t tell their own stories, but they play a role in our lives so we include them in our narrative structures. This suggests that perhaps we ought to think about treating robots right.
IDEAS: And what does this have to do with God?
FOERST: We too often use narratives of exclusivity — based on skin color, religion, language — to define the personhood of others. Yet the author of Psalm 139 writes of God that “You created me as a golem in my mother’s womb…./My frame was not hidden from you, when I was being made.” God built us, according to this ancient biblical tradition, in much the same way that we now build emotional and social robots. Yet despite knowing each of us so intimately, in all our imperfection, God loves all of us. Thinking about humanoid robots can possibly help us learn to tell inclusive stories, narratives that are unprejudiced.