ARTIFICIAL INTELLIGENCE AT 50:  From Building Intelligence to Nurturing Sociabilities

Sherry Turkle

This paper was presented at the Dartmouth Artificial Intelligence Conference, Hanover, NH, Saturday, July 15th, 2006.

 

ABSTRACT

From the aspirations of the Dartmouth conference to the conversations of the 1980s when the first computational toys for children were hitting the consumer market, much of the debate about artificial intelligence has centered on the question of whether machines could “really” be intelligent. This question was essentially about the objects themselves, what they could and could not do. These days, new questions are in the air, influenced by the cultural presence of machines that lead not so much with their intelligence as with their seeming sociability. These are relational artifacts.

When a relational artifact offers us its “attention,” we are encouraged to care for it. When that cared-for object thrives, we experience that object as intelligent, but more important, we feel a new level of connection to it. The cultural conversation about artificial intelligence as it manifests itself in relational artifacts is not so much about whether these machines “really” have emotion or intelligence but about our desire for connection to them, about what they evoke in their users. The debates about relational artifacts will not be about the capabilities of machines but about the vulnerabilities of people.

DISCOURSES OF LIFE

In the past fifty years, artificial intelligence has had its own internal intellectual debates and it has provoked a conversation in the larger culture, largely through the presence of computational objects. Machines that are reactive and interactive, machines that seem on the boundary of the animate, have led those who use them to new reconsiderations of human identity. If mind was program, as the field suggested, where was self? where was spirit? where was soul? AI has led people to ask, “Will machines someday be as intelligent as people?” but it has also led to another and more self-reflexive question, “Have people always been machines?” In other words, over the past fifty years, AI has brought philosophy into everyday life, including into the lives of children.

Indeed, the playthings of the computer culture have shifted how children talk about what is and is not alive (Turkle 2005[1984], 1995). From the time of the appearance of the first computer toys and games in the early 1980s, children began to use different categories to talk about the “aliveness” of computational games and toys than they used to talk about the aliveness of “traditional” objects. A traditional wind-up toy is considered “not alive” when children realize that it does not move of its own accord. Here, the criterion for aliveness is in the domain of physics: autonomous motion. Faced with computational media, children’s way of talking about aliveness became psychological. Children classified computational objects as alive if they could think on their own. Faced with a computer toy that could play tic-tac-toe, what mattered to a child was not the object’s physical autonomy but its psychological autonomy.

Children of the early 1980s came to define what made people special in opposition to computers, which they saw as our “nearest neighbors.” Computers, the children reasoned, are rational machines; people are special because they are emotional. Children’s use of the category “emotional machines” to describe what makes people special was a fragile, unstable definition of human uniqueness. In 1984, when I completed my study of a first generation of children who grew up with electronic toys and games, I thought that other formulations would arise from generations of children who might, for example, take the intelligence of artifacts for granted, understand how it was created, and be less inclined to give it philosophical importance (Turkle 2005[1984]). I did not anticipate how quickly, as if on cue, computational creatures that presented themselves as having both feelings and needs would enter mainstream American culture. By the mid-1990s, as emotional machines, people were not alone.

Traditionally, artificial intelligence concentrated on building engineering systems that impressed through their rationality and cognitive competence – whether in playing chess or giving “expert” advice. The past decade has seen the development of new kinds of computational entities that present themselves as having states of mind that are affected by their interactions with human beings, objects designed to impress not so much through their “smarts” as through their sociability. I call these “relational artifacts” (Turkle 2004, 2005a, b). In the literature, such creatures, when embodied, are also referred to as “sociable machines” (Breazeal 2000, 2002; Breazeal and Scassellati 1999, 2000; Kidd 2004). Referring to them as relational artifacts makes reference to the psychoanalytic tradition, with its focus on the human meaning of the artifact-person connection, and facilitates the comparison of embodied artifacts with a range of software agents that exist only in virtual reality.

The first relational artifacts to enter the American marketplace were virtual creatures known as Tamagotchis that lived on a tiny LCD screen, itself housed in a small plastic egg. The Tamagotchis – a toy fad of the 1997 holiday season – were presented as creatures from another planet that needed human nurturance, both physical and emotional. The Tamagotchis communicated their needs through a screen display. An individual Tamagotchi would grow from child to healthy adult if it were cleaned when dirty, nursed when sick, and fed when hungry.   A Tamagotchi, while it lived, needed constant care. If its needs were not met, it would die. Children became responsible parents; they enjoyed watching their Tamagotchis thrive and did not want them to die. During school hours, parents were enlisted to take care of the Tamagotchis; beeping Tamagotchis became background noise during business meetings.

Furbies, small furry owl-like creatures, the toy fad of 1998, shared many of the psychological properties that had animated the Tamagotchis. Most important, the Furbies demanded attention. They played games, “learned” to speak English, and said “I love you.” In 2000, My Real Baby, a robotic infant doll, appeared in toy stores. My Real Baby makes baby sounds and baby facial expressions, but what is more significant than its physical similarities to an infant is that this computationally complex doll was designed to give the appearance of having baby "states of mind." Bounce the doll when it is happy, and it gets happier. Bounce it when it is grumpy and it gets grumpier. Similarly, AIBO, Sony’s robotic dog, introduced in 1999 and targeted to adults rather than children, develops different personalities depending on how it is treated. Its later models have face and voice recognition software that enables AIBO to recognize its “primary caregiver.”
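The behavioral rule these dolls follow can be pictured as a simple mood-state update driven by handling. The Python sketch below is purely illustrative: the class name, the numeric mood scale, and the update values are my own assumptions, not the actual software of My Real Baby or AIBO.

# Illustrative sketch of a relational toy's "mood" logic: handling
# amplifies whatever state the creature is already in. All names and
# values here are assumptions, not the dolls' actual software.

class RelationalToy:
    def __init__(self):
        self.mood = 0.0  # -1.0 = very grumpy, 0.0 = neutral, +1.0 = very happy

    def bounce(self):
        # Bouncing pushes the doll further into its current state:
        # a happy doll gets happier, a grumpy doll gets grumpier.
        if self.mood > 0:
            self.mood = min(self.mood + 0.2, 1.0)
        elif self.mood < 0:
            self.mood = max(self.mood - 0.2, -1.0)

    def soothe(self):
        # Gentle care nudges the doll toward contentment.
        self.mood = min(self.mood + 0.3, 1.0)

    def neglect(self):
        # Being ignored drifts the doll toward grumpiness.
        self.mood = max(self.mood - 0.1, -1.0)

    def describe(self):
        if self.mood > 0.3:
            return "happy"
        if self.mood < -0.3:
            return "grumpy"
        return "neutral"

doll = RelationalToy()
doll.soothe()            # neutral -> mildly content
doll.bounce()            # a happy doll gets happier
print(doll.describe())   # "happy"

Even this trivial loop suggests why such objects read as having “states of mind”: the response to the same gesture depends on an internal state the user cannot see directly.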

Relational artifacts do not wait for children to “animate” them in the spirit of a Raggedy Ann doll or the Velveteen Rabbit, the stuffed animal who finally became alive because so many children had loved him. They present themselves as already animated and ready for relationship. Just as the first generation of computer toys provoked questions about the quality of aliveness and about what is special about being a person (Turkle 2005[1984]), twenty years later, those confronting relational artifacts (Turkle 2004, 2005a, b; Turkle et al. 2006; Kahn et al. 2004) are similarly provoked to ask fundamental questions about the objects’ natures. Children approach relational artifacts and explore what it means to think of these creatures as alive or “sort of alive”; elders in a nursing home play with a robot designed to engage seniors (Paro, in the form of a baby seal) and grapple with how to characterize this creature (Kidd et al. 2006; Shibata 1999, 2005; Taggart et al. 2005; Turkle 2005a, b; Turkle et al. 2006c). They move from inquiries such as “Does it swim?” and “Does it eat?” to “Is it alive?” and “Can it love?”

These similarities across the decades are not surprising. Encounters with novel computational objects present people with category-challenging experiences. The objects are liminal, betwixt-and-between, provoking new thought (Turner 1969; Bowker and Star 1999). However, there are significant differences between current responses to relational artifacts and earlier encounters with computation. Children first confronting computer toys in the late 1970s and early 1980s were compelled to classify them. Faced with relational artifacts, children’s questions about classification are enmeshed in a new desire to nurture and be nurtured by the artifacts rather than simply categorize them; in their dialogue with relational artifacts, children’s focus shifts from cognition to affect, from game playing to fantasies of mutual connection. When people are asked to care for a computational creature, they become attached, feel connection, and sometimes they feel a great deal more. We love what we nurture, biological or not. Even the first and most primitive of the relational artifacts, the Tamagotchis, illustrated a consistent element of a new human/machine psychology: when it comes to bonding with these kinds of objects, nurturance is the “killer app” (Turkle 2004, 2005a, b).

For me, relational artifacts are the new uncanny in our culture of computing. Here I refer to the uncanny in the Freudian sense: something known of old and long familiar – yet become strangely unfamiliar (Freud, 1960). The notion of the Freudian uncanny includes in its definition both the look back and the look forward. Seen from one angle, relational artifacts seem familiar, extensions of what came before. They play out (and take to a higher power) the themes of connection with and animation of the machine that have characterized people’s relationships with computational objects since their introduction into popular culture. And yet they pose new challenges. They have become objects-to-think-with for asking the question, “What is an authentic relationship with a machine?” as well as the further question, “What is a relationship?” For a person to be in a “relationship,” is it sufficient for the person to feel love? The question is not abstract because even with relatively little interaction, people come to love relational artifacts – both software agents and relational robots.

I left the story of children and the discourse of aliveness in the mid-1980s, with children subtly shifting traditional physical criteria for aliveness (Does it move on its own?) to psychological criteria (Does it think on its own?). In the 1990s, new computational objects that embodied principles of evolution (such as the games in the Sim series) strained that order to the breaking point. Evolution was an activity at the core of life, yet it was not psychological. Additionally, the culture was awash in images of “shape shifting” and “morphing,” encouraging children to see the “stuff” of computers as the same “stuff” of which life is made. There was a new fluidity of discourse in how children discussed computer games (Turkle, 1995).

For example, in 1993, Robbie, ten, put the emphasis not on psychology but on a special kind of mobility when she considered whether the creatures she evolved on SimLife were alive. “They would be alive,” she said, “if they could get out of your computer and go to America Online.” Here we see the resurfacing of physical motion (Piaget’s classical criterion) bound up with notions of presumed psychology: children often assumed that the creatures in Sim games had a desire to “get out” of the system into a wider computational world.

The 1990s brought a heterogeneity of discourses about aliveness. These days, faced with even very simple relational artifacts, children’s discourse about aliveness has again shifted. Children no longer discuss the objects’ “aliveness” in terms of motion or cognitive abilities. They talk about these objects as alive or “sort of alive” not because of what the creatures can do (physically or cognitively) but because of their own emotional connection to the creatures and their fantasies about how the creatures might be feeling about them. The focus of discussion about whether computational objects might be alive has moved from the psychology of projection to the psychology of engagement, from Rorschach to relationship, from competency to connection.

So, for example, a five-year-old talks about her Furby, a robotic creature that resembles an owl and appears to learn English under the child’s tutelage, as alive “because it might want to hug me”; a six-year-old declares his Furby “more alive than a Tamagotchi because it likes to sleep with me.” A nine-year-old is convinced that her Furby is alive because she “likes to take care of it.” She immediately amends her comment to acknowledge her new pet’s limitations: “It’s a Furby kind of alive, not an animal kind of alive.” Children already talk about an “animal kind of alive” and a “Furby kind of alive.” The question ahead is whether they will also come to talk about a “people kind of love” and a “robot kind of love.”

A ROBOT KIND OF LOVE

In the early 1980s I met a thirteen-year-old, Deborah, who responded to the experience of computer programming by speaking about the pleasures of putting “a piece of your mind into the computer’s mind and coming to see yourself differently” (Turkle 2005[1984]). Twenty years later, eleven-year-old Fara reacts to a play session with Cog, a humanoid robot at MIT that can meet her eyes, follow her position, and imitate her movements, by saying that she could never get tired of the robot because “it’s not like a toy because [you] can’t teach a toy; it’s like something that’s part of you, you know, something you love, kind of like another person, like a baby” (Turkle et al. 2006a). The contrast between the two responses reveals a shift from projection onto an object to engagement with a subject.

In the presence of relational artifacts, people feel attachment and loss; they want to reminisce and feel loved. In a year-long study of human-robot bonding, one seventy-four-year-old Japanese participant said of her Wandukun, a furry robot creature designed to resemble a koala bear: “When I looked into his large, brown eyes, I fell in love after years of being quite lonely . . . I swore to protect and care for the little animal” (Kakushi, 2001). In my study of robots in Massachusetts nursing homes, seventy-four-year-old Jonathan responds to his robot baby doll by wishing it were a bit smarter because he would prefer to talk to a robot about his problems than to a person (Turkle, 2004). “The robot wouldn’t criticize me.” Andy, also seventy-four, says that the My Real Baby robotic infant doll, which, like Paro, responds to caretaking by developing different states of mind, bears a resemblance to his ex-wife Rose, “something in the eyes.” He likes chatting with the robot about events of the day. “When I wake up in the morning and see her face [the robot] over there, it makes me feel so nice, like somebody is watching over me.”

From the aspirations of the Dartmouth conference to the debates of the 1980s when the first computational toys were hitting the market, debates in artificial intelligence centered on the question of whether machines could “really” be intelligent. These debates were about the objects themselves, what they could and could not do. Our new debates about relational and sociable machines – debates that will have an increasingly high profile in mainstream culture – are not only about the machines’ capabilities but about our vulnerabilities. When we are asked to care for an object, when the cared-for object thrives and offers us its attention and concern, we experience that object as intelligent, but more important, we feel a new level of connection to it. The new questions are not about whether relational artifacts really have emotion or intelligence but about what they evoke in their users.

ROBOT AS RORSCHACH

In summer 2001, over sixty children met the robots Kismet and Cog at MIT for open-ended play sessions and semi-structured interviews (Turkle et al. 2006a). In this study of “first encounters,” children displayed extraordinary perseverance in their efforts to communicate with the robots, a perseverance that was expressed through a range of personal styles. Some children were openly affectionate with Kismet, showering it with hugs and kisses and making efforts to entertain it with stuffed animals and rattles. Some tried to amuse it with favorite childhood games and songs. In one case, a child made clay treats for Kismet to eat. Poignantly, one child told Kismet that he was going “to take care of it and protect it against all evil.” Other children, with different temperaments, were no less tenacious in pursuing relationships with the robots, but did so through aggression.

Adam, six years old, asks the robot, “Can you talk?” When Kismet does not answer and then finally says something that Adam cannot understand, the frustrated boy tells Kismet to “Shut up!” and forces various objects into Kismet’s mouth, first a metal object, then a toy caterpillar, saying, “Chew this!” Adam becomes increasingly angry at Kismet for not paying attention to him and for not being comprehensible. At no point does he disengage from the robot.

In The Second Self, I describe how five-year-old Lucy takes an early computer toy, Texas Instruments’ Speak and Spell, and constructs imaginative scenarios to maintain the sense that she is having a conversation with it, even though the toy had no interactive capability. Lucy did this by tailoring her demands of the toy to exactly match what the toy was able to do. Children’s reactions to Kismet and Cog offer many examples of this “helping” behavior. When all evidence pointed to broken or malfunctioning robots, children rationalized the robots’ failings in other ways. So, for example, when Kismet failed to speak, children suggested that it was ill, sleeping, shy, deaf, too young to respond correctly, or spoke another language. Children in these cases do not seem committed to preserving an image of Kismet and Cog as intelligent. Children’s excuses and “helping” behavior were in the service of preserving their sense that the robots cared about them.

Unprompted, children expressed the importance of being recognized by Kismet and Cog, of being liked by them. Unprompted, children kissed and hugged the robots, sang to them, and put on dance shows. When Kismet successfully said a child’s name, other children commented that this was evidence of Kismet’s affection. If one child tried to get Kismet to say his or her name and Kismet said another child’s name, this was taken as evidence that Kismet preferred the other child, often causing hurt feelings. When either Cog or Kismet was unresponsive, children were more likely to experience this as personal rejection than as a broken mechanism.

Children’s stake in preserving a sense of relationship with the robots was so strong that they actively resisted attempts to demystify them. With few exceptions, children were uninterested in, indeed unwilling to, approach the robots in terms of underlying mechanism. Half of the children in our study had an individual play session with Cog, followed by a session with roboticist Brian Scassellati during which he took children through a real-time demonstration of how the robot processes information. Children were shown the computers that ran Cog and the monitors that displayed what Cog “saw.” Scassellati demonstrated how Cog’s program works and how its different functions could be turned off. Finally, children were allowed to “drive” the robot. This meant that they had a chance to control the robot’s movements and behaviors—to be the robot’s “brains.” Metaphorically, they got to see the robot “naked.” Would the robot, now presented as mechanical, systematically stripped of its extraordinary powers and, perhaps more relevant, of any illusion of autonomy, seem less worthy to serve as a companion, seem less worthy of relationship? The answer to this question was a clear “no.”

The didactic presentation of a transparent, mechanical Cog had almost no effect either on children’s attitudes toward the robot or on their feelings of being in a relationship with it. It seemed akin to informing a child that their best friend’s mind is made up of electrical impulses and chemical reactions. Such explanations (on a radically different level from the one at which relationships take place) are treated as perhaps accurate but irrelevant. They might be helpful in explaining a friend’s bad mood, just as Scassellati’s debriefing might be helpful in explaining why Cog might be having a bad day. His explanation was not necessarily unwelcome; it was received as interesting, but it did not interfere with children’s sense of being in a relationship with Cog. Similarly, if Kismet or Cog malfunctioned during a play session, children did not treat the robots as broken mechanisms but as ailing creatures. Once the robot is defined as social, any lack of particular competencies is taken as an unfortunate disability for which it deserves sympathy.

Thus, at the heart of the “holding power” of relational artifacts is that they call forth the human desire for communication and connection. Faced with Cog’s inability to respond to speech, Fara, eleven years old, does not question whether Cog is “smart enough” to hear or speak, but sees him as disabled the way a human might be. She says that being with Cog felt like being with a deaf or blind person “because it was confused, it didn’t understand what you were saying, and like a blind or as a deaf person, they don’t know what you are saying, so it didn’t know what they were saying or it knew when I was trying to get its attention to see, he was just like staring, and I was just like ‘Hello!’ because a blind person would have to listen.”

I have noted that since the beginning of children’s immersion in the computer culture through their involvement with electronic toys and games, computational objects have been an essential element of how children talk about what is special about being a person. The computer appeared in the role of “nearest neighbor”—people were distinguished by what made them different from the machines. Through the mid-1990s, in large measure, children made these comparisons between computers and people by focusing on what computers could and could not do. In contrast, in the company of Kismet and Cog, when children spoke about what was special about being a person, they focused not on what the machines could do, but on their relational potential. One idea that came to the foreground was the notion that people are special because of their imperfections. A certain vulnerability, even frailty, emerged as a valued, defining trait for people. A ten-year-old girl who has just played with Kismet says, “I would love to have a robot at home. It would be such a good friend. But it couldn’t be a best friend. It might know everything but I don’t. So it wouldn’t be a best friend.” She further explains that a robot is “too perfect” and that it might always need to correct her. Friendship is easier with your own kind.

But in the culture of simulation, what is “our own kind”?

ARTIFICIAL INTELLIGENCE IN THE CULTURE OF SIMULATION

The new generation of “relational” artificial intelligences is entering a very different culture from that of the Dartmouth conference only fifty years ago. During these fifty years, we have entered a culture of simulation.

I make this point through a personal story, a small vignette. I take my fourteen-year-old daughter to the 2005 Darwin exhibit at the American Museum of Natural History (Turkle, 2006c). The exhibit documents Darwin’s life and thought and, with a somewhat defensive tone (in light of current challenges to evolution by proponents of intelligent design), presents the theory of evolution as the central truth that underpins contemporary biology. The Darwin exhibit wants to convince and wants to please. At the entrance to the exhibit is a turtle from the Galapagos Islands, a seminal object in the development of evolutionary theory. The turtle rests in its cage, utterly still. “They could have used a robot,” comments my daughter. She considers it a shame to bring the turtle all this way and put it in a cage for a performance that draws so little on the turtle’s “aliveness.” I am startled by her comments, both solicitous of the imprisoned turtle because it is alive and unconcerned about its authenticity. The museum has been advertising these turtles as wonders, curiosities, marvels—among the plastic models of life at the museum, here is the life that Darwin saw. I begin to talk with others at the exhibit, parents and children. It is Thanksgiving weekend. The line is long, the crowd frozen in place. My question, “Do you care that the turtle is alive?” is a welcome diversion. A ten-year-old girl would prefer a robot turtle because aliveness comes with aesthetic inconvenience: “Its water looks dirty. Gross.” More usually, votes for the robot echo my daughter’s sentiment that in this setting, aliveness doesn’t seem worth the trouble. A twelve-year-old girl opines: “For what the turtles do, you didn’t have to have the live ones.” Her father looks at her, uncomprehending. “But the point is that they are real, that’s the whole point.”

The Darwin exhibit gives authenticity major play: on display are the actual magnifying glass that Darwin used, the actual notebooks in which he recorded his observations, indeed, the very notebook in which he wrote the famous sentences that first described his theory of evolution. But in the children’s reactions to the inert but alive Galapagos turtle, the idea of “original” is in crisis. I recall my daughter’s reaction when she was seven to a boat ride in the postcard-blue Mediterranean. Already an expert in the world of simulated fish tanks, she saw a creature in the water, pointed to it excitedly, and said: “Look, Mommy, a jellyfish! It looks so realistic!” When I told this story to a friend who was a research scientist at the Walt Disney Company, he was not surprised. When Animal Kingdom opened in Orlando, populated by “real,” that is, biological, animals, its first visitors complained that these animals were not as “realistic” as the animatronic creatures in Disney World, just across the road. The robotic crocodiles slapped their tails, rolled their eyes, in sum, displayed “essence of crocodile” behavior. The biological crocodiles, like the Galapagos turtle, pretty much kept to themselves. What is the gold standard here?

I have written that now, in our culture of simulation, the notion of authenticity is for us what sex was to the Victorians—“threat and obsession, taboo and fascination” (Turkle, 2005[1984]). I have lived with this idea for many years, yet at the museum, I find the children’s position strangely unsettling. For them, in this context, aliveness seems to have no intrinsic value. Rather, it is useful only if needed for a specific purpose. “If you put a robot instead of the live turtle, do you think people should be told that the turtle is not alive?” I ask. Not really, say several of the children. Data on “aliveness” can be shared on a “need to know” basis, for a purpose. But what are the purposes of living things? When do we need to know if something is alive?

Consider another moment in the current history of artificial intelligence in the culture of simulation. An older woman, 72, in a nursing home outside Boston is sad. Her son has broken off his relationship with her. Her nursing home is part of a study I am conducting on robotics for the elderly. I am recording her reactions as she sits with the robot Paro, a seal-like creature advertised as the first “therapeutic robot” for its ostensibly positive effects on the ill, the elderly, and the emotionally troubled. Paro is able to make eye contact by sensing the direction of a human voice, is sensitive to touch, and has “states of mind” that are affected by how it is treated—for example, it can sense whether it is being stroked gently or with some aggression. In this session with Paro, the woman, depressed because of her son’s abandonment, comes to believe that the robot is depressed as well. She turns to Paro, strokes him and says: “Yes, you’re sad, aren’t you? It’s tough out there. Yes, it’s hard.” And then she pets the robot once again, attempting to provide it with comfort. And in doing so, she tries to comfort herself.

Psychoanalytically trained, I believe that this kind of moment, if it happens between people, has profound therapeutic potential. What are we to make of this transaction as it unfolds between a depressed woman and a robot? When I talk to others about the old woman’s encounter with Paro, their first associations are usually to their pets and the solace they provide. The comparison sharpens the questions about Paro and the quality of the relationships people have with it. I do not know if the projection of understanding onto pets is “authentic.” That is, I do not know whether a pet could feel or smell or intuit some understanding of what it might mean to be with an old woman whose son has chosen not to see her anymore. What I do know is that Paro has understood nothing. As with the children’s encounters with Kismet and Cog, I feel witness to a new kind of relationship. As with Kismet and Cog, Paro’s ability to inspire relationship is not based on its intelligence or consciousness, but on its ability to push certain “Darwinian” buttons in people (making eye contact, for example) that cause people to respond as though they were in relationship.

Confrontation with the uncanny provokes new reflection. Do plans to provide relational robots to children and the elderly make us less likely to look for other solutions for their care? If our experience with relational artifacts is based on a fundamentally deceitful interchange (the artifacts’ ability to persuade us that they know and care about our existence), can it be good for us? Or might it be good for us in the “feel good” sense, but bad for us in our lives as moral beings? The answers to such questions are not dependent on what computers can do today or what they are likely to be able to do in the future. These questions ask what we will be like, what kind of people we are becoming as we develop increasingly intimate relationships with machines.

For the psychoanalyst D.W. Winnicott, objects such as a teddy bear or rag doll, objects to which children remain attached even as they embark on the exploration of the world beyond the nursery, are mediators between the child’s earliest bonds with the mother, whom the infant experiences as inseparable from the self, and the child’s growing capacity to develop relationships with other people who will be experienced as separate beings (Winnicott, 1971). The infant knows transitional objects as both almost inseparable parts of the self and, at the same time, as the first not-me possessions. As the child grows, the actual objects are left behind. The abiding effects of early encounters with them, however, are manifest in the experience of a highly charged intermediate space between the self and certain objects in later life. This experience has traditionally been associated with religion, spirituality, the perception of beauty, sexual intimacy, and the sense of connection with nature.

In the past, the power of objects to play this transitional role has been tied to the ways in which they enabled the child to project meanings onto them. The doll or the teddy bear presented an unchanging and passive presence. But today’s relational artifacts take a decidedly more active stance. With them, children’s expectations that their dolls want to be hugged, dressed, or lulled to sleep don’t only come from the child’s projection of fantasy or desire onto inert playthings, but from such things as the digital dolls’ crying inconsolably or even saying: “Hug me!” or “It’s time for me to get dressed for school!” In the move from traditional transitional objects to contemporary relational artifacts, the psychology of projection gives way to a relational psychology, a psychology of engagement, a movement in parallel to the transition from early computer toys to today’s relational artifacts.

Thinking about object relations theory and transitional objects helps us to understand what is novel about relational artifacts as does reference to another psychoanalytic tradition, that of self psychology. Heinz Kohut describes how some people may shore up their fragile sense of self by turning another person into a “self object” (Ornstein, 1978). In the role of self object, the other is experienced as part of the self, thus in perfect tune with the fragile individual’s inner state. Disappointments inevitably follow. Relational artifacts (not as they exist now but as their designers promise they will soon be) clearly present themselves as candidates for such a role. If they can give the appearance of aliveness and yet not disappoint, they may even have a comparative advantage over people, and open new possibilities for narcissistic experience with machines. One might even say that when people turn other people into self-objects, they are making an effort to turn a person into a kind of “spare part.” From this point of view, relational artifacts make a certain amount of sense as successors to the always-resistant human material (Turkle, 2004b).

In Computer Power and Human Reason, Joseph Weizenbaum (1976) wrote about his experiences with his invention, ELIZA, a computer program that seemed to serve as a self object as it engaged people in a dialogue similar to that of a Rogerian psychotherapist. It mirrored one’s thoughts; it was always supportive. To the comment “My mother is making me angry,” the program might respond, “Tell me more about your mother,” or “Why do you feel so negatively about your mother?” Weizenbaum was disturbed that his students, fully knowing that they were talking with a computer program, wanted to chat with it, indeed, wanted to be alone with it. Weizenbaum was my colleague at MIT at the time; we taught courses together on computers and society. And at the time that his book came out, I felt moved to reassure him. ELIZA seemed to me like a Rorschach through which people expressed themselves. They became involved with ELIZA, but the spirit was “as if.” The gap between program and person was vast. People bridged it with attribution and desire. They thought: “I will talk to this program ‘as if’ it were a person; I will vent, I will rage, I will get things off my chest.” At the time, ELIZA seemed to me no more threatening than an interactive diary. Now, thirty years later, I was asking myself if I had underestimated the quality of the connection. A newer technology had created computational creatures that evoked a sense of mutual relating. The people who met relational artifacts felt a desire to nurture them. And with nurturance came the fantasy of reciprocation. They wanted the creatures to care about them in return. Very little about these relationships seemed to be experienced “as if.” The story of computers and their evocation of life had come to a new place.
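The mirroring dialogue described above can be conveyed in a few lines of code. The Python sketch below is a deliberately minimal ELIZA-style keyword-and-reflection loop, not Weizenbaum’s actual program; the rules and canned responses are illustrative assumptions.

# Minimal ELIZA-style sketch: keyword matching plus a Rogerian fallback.
# Illustrative only; the rules below are assumptions, not Weizenbaum's code.
import re

RULES = [
    # (pattern, response template); \1 echoes the matched fragment back.
    (r"my mother (.*)", "Tell me more about your mother."),
    (r"i feel (.*)", r"Why do you feel \1?"),
    (r"i am (.*)", r"How long have you been \1?"),
]

def respond(utterance):
    text = utterance.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return match.expand(template)
    # Fallback: mirror the speaker without adding content of its own.
    return "Please go on."

print(respond("My mother is making me angry"))   # Tell me more about your mother.
print(respond("I feel lonely"))                   # Why do you feel lonely?

The point of the sketch is how little machinery is needed to sustain the “as if”: attribution and desire on the human side do the rest.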

A NEW PLACE

Science fiction has long presented robots as objects-to-think-with for thinking about who we are as people. In Philip K. Dick’s classic story, “Do Androids Dream of Electric Sheep?” (a story that most people know through its film adaptation, Blade Runner), androids begin to act like people (developing emotional connections with each other and the desire to connect with humans) when they learn that they have a predetermined lifespan and, in the case of one android, Rachel, when they are programmed with memories of childhood. Mortality and a sense of a life cycle are offered as the qualities that make the robots more than machines. Blade Runner’s hero, Deckard, makes his profession of being able to tell humans from robots based on their reactions to emotionally charged images. The rotting carcass of a dead animal should cause no reaction in an android, but should repel a human, causing a change in pupil dilation. What does it take, asks the film, for a simulation to become indistinguishable from the reality? Deckard, as the film progresses, falls in love with the near-perfect simulation, the android Rachel. Implanted memories of a human past and the belief that she faces a certain death make her emotionally sensitive, deeply “human.” By the end of the film we are left to wonder whether Deckard himself may be an android, unaware of his status. Unable as viewers to resolve this question, we are left cheering for our hero and heroine as they escape to whatever time they have remaining, in other words, to the human condition. And we are left with a conviction about our own futures, a belief that by the time we face the reality of computational devices passing the Turing test (that is, computational devices that cannot through their behavior be distinguished from people), we will no longer care about the test at all. By that point, people will love their machines and be more concerned about their machines’ happiness than their test scores. This conviction is the theme of a short story by Brian Aldiss, “Supertoys Last All Summer Long,” made into the Steven Spielberg film A.I.: Artificial Intelligence (Aldiss, 2001).

In A.I., scientists build a humanoid robot, David, who is programmed to love. David expresses his love to a woman, Monica, who has adopted him as her child. Our current experience with relational artifacts suggests that the pressing issue raised by this film is not the potential reality of a robot who “loves,” but the feelings of the adoptive mother in the film—a human being whose response to a machine that asks for nurturance is the desire to nurture it, and whose response to a non-biological creature who reaches out to her is to feel attachment and horror, love and confusion. Even today we are faced with relational artifacts that elicit human responses that have things in common with those of the mother in A.I. Decisions about the role of robots in the lives of children and seniors cannot turn simply on whether children and the elderly “like” the robots. What does this deployment of “nurturing” technology at the two most dependent moments of the life cycle say about us? What will it do to us? What kinds of relationships are appropriate to have with machines? And what is a relationship?

My work in robotics laboratories has offered some images of how future relationships with machines may look, appropriate or not. For example, Cynthia Breazeal led the design team for Kismet, the robotic head that was designed to interact with humans “sociably,” much as a two-year-old child would. Breazeal was its chief programmer, tutor, and companion. Kismet needed Breazeal to become as “intelligent” as it did, and then Kismet became a creature Breazeal and others could interact with. Breazeal experienced what might be called a maternal connection with Kismet; she certainly describes a sense of connection with it as more than “mere” machine. When she graduated from MIT and left the AI Laboratory where she had done her doctoral research, the tradition of academic property rights demanded that Kismet be left behind in the laboratory that had paid for its development. What she left behind was the robot “head” and its attendant software. Breazeal described a sharp sense of loss. Building a new Kismet would not be the same.

The summer of 2001, the summer of the study of children’s first encounters with Kismet and Cog at the MIT AI Laboratory, was the last time that Breazeal would have access to Kismet.  It is not surprising that separation from Kismet was not easy for Breazeal, but more striking, it was hard for the rest of us to imagine Kismet without her.  One ten-year-old who overheard a conversation among graduate students about how Kismet would be staying in the A.I. lab objected: “But Cynthia is Kismet’s mother.”

It would be facile to analogize Breazeal’s situation to that of Monica, the mother in Spielberg’s A.I., but Breazeal is, in fact, one of the first people to have had one of the signal experiences in that story: separation from a robot to which one has formed an attachment based on nurturance. At issue here is not Kismet’s achieved level of intelligence, but Breazeal’s experience as a “caregiver.” In a very limited sense, Breazeal “brought up” Kismet. But even this very limited experience evoked strong emotions. My experiences watching people connect with relational artifacts (from primitive Tamagotchis to the sophisticated Kismet and Paro) suggest that being asked to nurture a machine that presents itself as a young creature of any kind constructs us as dedicated cyber-caretakers. Nurturing a machine that presents itself as dependent creates significant attachments. We might assume that giving a sociable, “affective” machine to our children or to our aging parents will change the way we see the life cycle and our roles and responsibilities in it. When psychoanalysts talk about object relations, the objects they have in mind are most usually people. The new objects of our lives (for which we are using the words sociable, affective, and relational) demand an object relations psychology that will help us better navigate our relationships with material culture in its new, animated manifestations.

Sorting out our relationships with robots brings us back to the kinds of challenges that Darwin posed to his generation: the question of human uniqueness. How will interacting with relational artifacts affect people’s way of thinking about what, if anything, makes people special? The sight of children and the elderly exchanging tenderness with robotic pets brings science fiction into everyday life and techno-philosophy down to earth. The question here is not whether children will love their robotic pets more than their real-life pets or even their parents, but rather, what will loving come to mean? In my view, to respond to these questions, we need to find a way to distinguish between need (something that artifacts may have) and desire (something that resides in the conjunction of language and flesh). We need, in short, a psychoanalytic sensibility, a sense of what I would call the psychoanalytic virtues.

One woman’s comment on AIBO, Sony’s household entertainment robot, startles in what it might augur for the future of person-machine relationships: “[AIBO] is better than a real dog . . . It won’t do dangerous things, and it won’t betray you . . . Also, it won’t die suddenly and make you feel very sad.” As the story of Blade Runner dramatizes, mortality has traditionally defined the human condition; a shared sense of mortality has been the basis for feeling a commonality with other human beings, a sense of going through the same life cycle, a sense of the preciousness of time and life, of its fragility. Loss (of parents, of friends, of family) is part of the way we understand how human beings grow and develop and bring the qualities of other people within themselves (Freud, 1989).

Relationships with computational creatures may be deeply compelling, perhaps educational, but they do not put us in touch with the complexity, contradiction, and limitations of the human life cycle. They do not teach us what we need to know about empathy, ambivalence, and life lived in shades of gray. To say all of this about our love of our robots does not diminish their interest or importance. It only puts them in their place.

 

REFERENCES

Bowker, G.C, Star, S.L. 1999. Sorting Things Out:
Classification and Its Consequences, Cambridge, Mass.:
MIT Press.

Breazeal, C. 2000. "Sociable Machines: Expressive Social
Exchange Between Humans and Robots." PhD Thesis,
Massachusetts Institute of Technology.

Breazeal, C. 2002. Designing Sociable Robots. Cambridge,
Mass.: MIT Press.

Breazeal, C. and Scassellati, B. 1999. "How to Build
Robots that Make Friends and Influence People," in
Proceedings of the IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS-99), pp. 858-863.

Breazeal, C. and Scassellati, B. 2000. "Infant-like Social
Interactions Between a Robot and a Human Caretaker",
Adaptive Behavior, 8, pp. 49-74.

Freud, S. 1960. "The Uncanny," in The Standard Edition
of the Complete Psychological Works of Sigmund Freud,
vol. 17, J. Strachey, trans. and ed. London: The Hogarth Press, pp. 219-252.

Freud, S. 1989. “Mourning and Melancholia,” in The
Freud Reader. P. Gay, ed. New York: W.W. Norton &
Company, p. 585.

Kahn, P., Friedman, B., Perez-Granados, D.R., and Freier,
N.G. 2004. "Robotic Pets in the Lives of Preschool
Children," in CHI Extended Abstracts, ACM Press,
pp. 1449-1452.

Kakushi, S. 2001. “Robot Lovin’” Asia Week Magazine
Online, November 9, 2001.

Kidd, C.D. 2004. "Sociable Robots: The Role of Presence
and Task in Human-Robot Interaction." Master's Thesis,
Massachusetts Institute of Technology.


Kidd, C.D., Taggart, W., and Turkle, S. 2006. “A Sociable
Robot to Encourage Social Interaction among the Elderly.”
Proceedings of ICRA, March 2006, Orlando, Florida.

Ornstein, P., ed. 1978. The Search for the Self: Selected
Writings of Heinz Kohut: 1950-1978, Volume 2. New
York: International Universities Press, Inc.

Shibata, T., Tashima, T., and Tanie, K. 1999.
"Emergence of Emotional Behavior through Physical
Interaction between Human and Robot," in Proceedings of
the IEEE International Conference on Robotics and
Automation, pp. 2868-2873.

Shibata, T. "Mental Commit Robot", Available online at:
http://www.mel.go.jp/soshiki/robot/biorobo/shibata/
(accessed 01 April 2005).

Taggart, W., Turkle, S., and Kidd, C.D. 2005. “An Interactive
Robot in a Nursing Home: Preliminary Remarks,” in
Proceedings of the CogSci Workshop on Android Science,
Stresa, Italy, pp. 56-61.

Turkle, S. 1995. Life on the Screen. New York: Simon and
Schuster.

Turkle, S. 2004. “Relational Artifacts,” NSF Report, (NSF Grant SES-0115668).

Turkle, S. 2004b. "Whither Psychoanalysis in Computer
Culture." Psychoanalytic Psychology: Journal of the
Division of Psychoanalysis, American Psychological
Association, 21, 1 Winter 2004.

Turkle, S. 2005 [1984]. The Second Self: Computers and the Human Spirit. Cambridge, Mass.: MIT Press.

Turkle, S. 2005a. “Relational Artifacts/Children/Elders:
The Complexities of CyberCompanions,” in Proceedings
of the CogSci Workshop on Android Science, Stresa, Italy,
2005, pp. 62-73.

Turkle, S. 2005b. “Caring Machines: Relational Artifacts
for the Elderly.” Keynote AAAI Workshop, “Caring
Machines.” Washington, D.C.

Turkle, S., Breazeal, C., Dasté, O., and Scassellati, B.
2006a. “First Encounters with Kismet and Cog: Children’s Relationship with Humanoid Robots,” in Digital Media:
Transfer in Human Communication, P. Messaris and L.
Humphreys, eds. New York: Peter Lang Publishing.

Turkle, S. 2006b. “Tamagotchi Diary.” The London
Review of Books. April 20, 2006.

Turkle, S., Taggart, W., Kidd, C., and Dasté, O. 2006c. “The
Complexity of Cybersocialities,” Connection Science,
submitted.

Turner, V. 1969. The Ritual Process. Chicago: Aldine.

Weizenbaum, J. 1976. Computer Power and Human
Reason: From Judgment to Calculation. San Francisco,
CA: W. H. Freeman.

Winnicott, D.W. 1971. Playing and Reality. New York:
Basic Books.