Thursday, September 29, 2011

Readings for 9/29/11: Sociable robots: Simulated stand-ins or full-time friends?

Source:
Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. New York: Basic Books.
Excerpts:
Part One: “The robotic moment: In solitude, new intimacies”

Overview & Synthesis:
Part one discusses the “robotic moment” and sociable robots, and how they create unsettling new relationships and instabilities in how we understand privacy, community, intimacy, and solitude. Interactions with robots like ELIZA, Tamagotchis, Furbies, AIBO, My Real Baby, Cog, Kismet, and Paro illustrate the bonds, deep emotional connections, and feelings of pseudo-parental attachment these objects can evoke. Turkle offers the troubling observation that robots are filling gaps in society and that we hope to use them as a solution to our own imperfections, as an easy substitute for the difficulty of dealing with others.
Turkle (2011) describes technology as the “architect of our intimacies,” seductive because what it offers meets our human vulnerabilities (p. 1). We are drawn to the illusion of companionship with convenience, without the demands of intimacy. However, this presents risks of emotional dislocation. In her examination of the relationship between humans and robots, Turkle suggests we don’t mind giving human qualities to inanimate objects and are content to treat each other as things.
In chapter one, “Nearest Neighbors,” she uses the phrase “alive enough” to describe a relational readiness when speaking about sociable robots (Turkle, 2011, pp. 28-29). She argues that robots are evocative objects because they prompt people to think about them and, in turn, about themselves. Chapter two, “Alive Enough,” continues this conversation about how we are at the point of seeing digital objects as both creatures and machines. Computers and robots no longer ask us to think with them but ask us to feel for and with them as they become more sociable, affective, and relational (Turkle, 2011, p. 39). She argues that robots are a slippery slope because they take advantage of our instinct and capacity to project human traits onto inanimate objects. For example, children empathize with the “needs” and “feelings” of the Tamagotchi and Furby, which in reality have no intelligence but can fake attachment. This interactivity prompts our minds to project consciousness, and we compensate by filling in the gaps. A machine only needs to act clever and people will play along. Additionally, she points out that this simulation is enough to provoke empathetic urges, as shown by the test in which subjects are asked to take a Furby, a Barbie doll, and a live gerbil and hold them upside down.
AIBO, the robotic dog
In chapter three, “True Companions,” Turkle points out that since we already filter companionship through machines, we now look to accept machines as companions. Robots have the power to dramatically alter our social lives by offering contact. However, they pose psychological risks by making us vulnerable to simplicities that may diminish us. In her discussion of AIBO, a robotic dog, Turkle points out that it is not practice for the real thing but a possible alternative that is not necessarily second best. There is a possibility that after robots serve as a better-than-nothing substitute, they might become equal, or even preferable, to a pet or person (Turkle, 2011, p. 64). So what are we sacrificing when we look to robots as electronic companions? Turkle (2011) says, “Dependence on a robot presents itself as risk free. But when one becomes accustomed to ‘companionship’ without demands, life with people may seem overwhelming” (p. 66). Chapter four, “Enchantment,” elaborates on how AIBO and My Real Baby “encourage people to imagine robots in everyday life” and serve as evocative objects that “give people a way to talk about their disappointments with the people around them” (Turkle, 2011, p. 68). As we think about robots with artificial feelings and intelligence, we reflect differently on our own and use them to practice our own relationship skills.
Paro, the therapeutic seal
Chapter five, “Complicities,” brings up the idea that robots and people are not so different from each other. Turkle (2011) argues that our new relationships with robots create a loop, drawing us into the complicities that make them possible; we play along, willing to defer to what the robots are able to do (p. 100). Chapter six, “Love’s Labor Lost,” raises the startling possibility that if robots are at all successful, they could replace people. The interactivity and reactivity of robots can serve a therapeutic process by ramping up our emotional involvement. To the objection that robots can only seem to care or understand, Turkle (2011) points out that people, too, feign caring or understanding (p. 123). While we are somewhat relieved by the prospect of robots coming to the rescue, this raises certain moral issues. Turkle (2011) says, “As we learn to get the ‘most’ out of robots, we may lower our expectations of all relationships, including those with people. In the process, we betray ourselves” (p. 125).
Chapter seven, “Communion,” describes an encounter between Rich and Kismet, an example of how robots can offer a fantasy of near communion, letting us imagine a meeting of the minds. Robots offer what meets our human vulnerabilities. Turkle (2011) points out that we can interact with robots in full knowledge of their limitations yet still be comforted by what must be an unrequited love (p. 133).
Turkle’s consideration of the robot as a real companion opens up daunting philosophical and psychological conversations. As society gradually becomes more “machine ready,” it seems less shocking to put robots in places where people used to be (Turkle, 2011, p. 146). This prevalence of robots may make us unwilling to put in the work required by real human relationships, which presents certain risks. Part one describes how the boundaries between people and things are shifting, but it leaves the reader questioning which of those boundaries are worth maintaining.

Questions & Reflections:
Is a machine “alive enough” to die? Or does it just break or stop working and need to be fixed? Do we mourn the loss of this artificial life? Think about things that may be important to you, like your phone or computer. How would you feel if it “died”? Is it okay in our society to “mourn” technology?

Do we attach ourselves to particular machines, the way children attach to their Furbies and Tamagotchis, where even an identical one cannot replace the original? Think about how you have personalized your own technologies, like your laptop or cell phone. If you lost or broke one of them, could it ever be the same again? Would you feel differently toward a replacement?

Is using robots as “stand-ins,” such as in the care of children or the elderly, morally wrong? Does this substitution pose ethical problems? On one hand it frees people to devote their time elsewhere, but at the same time does it deprive these people of human touch in their lives? What is the balance, and which is more important in our society?
