Solace Shen
University of Washington
Publication
Featured research published by Solace Shen.
Developmental Psychology | 2012
Peter H. Kahn; Takayuki Kanda; Hiroshi Ishiguro; Nathan G. Freier; Rachel L. Severson; Brian T. Gill; Jolina H. Ruckert; Solace Shen
Children will increasingly come of age with personified robots and potentially form social and even moral relationships with them. What will such relationships look like? To address this question, 90 children (9-, 12-, and 15-year-olds) initially interacted with a humanoid robot, Robovie, in 15-min sessions. Each session ended when an experimenter interrupted Robovie's turn at a game and, against Robovie's stated objections, put Robovie into a closet. Each child was then engaged in a 50-min structural-developmental interview. Results showed that during the interaction sessions, all of the children engaged in physical and verbal social behaviors with Robovie. The interview data showed that the majority of children believed that Robovie had mental states (e.g., was intelligent and had feelings) and was a social being (e.g., could be a friend, offer comfort, and be trusted with secrets). In terms of Robovie's moral standing, children believed that Robovie deserved fair treatment and should not be harmed psychologically but did not believe that Robovie was entitled to its own liberty (Robovie could be bought and sold) or civil rights (in terms of voting rights and deserving compensation for work performed). Developmentally, while more than half the 15-year-olds conceptualized Robovie as a mental, social, and partly moral other, they did so to a lesser degree than the 9- and 12-year-olds. Discussion focuses on how (a) children's social and moral relationships with future personified robots may well be substantial and meaningful and (b) personified robots of the future may emerge as a unique ontological category.
human-robot interaction | 2012
Peter H. Kahn; Takayuki Kanda; Hiroshi Ishiguro; Brian T. Gill; Jolina H. Ruckert; Solace Shen; Heather E. Gary; Aimee L. Reichert; Nathan G. Freier; Rachel L. Severson
Robots will increasingly take on roles in our social lives where they can cause humans harm. When robots do so, will people hold robots morally accountable? To investigate this question, 40 undergraduate students individually engaged in a 15-minute interaction with ATR's humanoid robot, Robovie. The interaction culminated in a situation where Robovie incorrectly assessed the participant's performance in a game, and prevented the participant from winning a $20 prize. Each participant was then interviewed in a 50-minute session. Results showed that all of the participants engaged socially with Robovie, and many of them conceptualized Robovie as having mental/emotional and social attributes. Sixty-five percent of the participants attributed some level of moral accountability to Robovie. Statistically, participants held Robovie less accountable than they would a human, but more accountable than they would a vending machine. Results are discussed in terms of the New Ontological Category Hypothesis and robotic warfare.
human-robot interaction | 2011
Peter H. Kahn; Aimee L. Reichert; Heather E. Gary; Takayuki Kanda; Hiroshi Ishiguro; Solace Shen; Jolina H. Ruckert; Brian T. Gill
human-robot interaction | 2010
Peter H. Kahn; Jolina H. Ruckert; Takayuki Kanda; Hiroshi Ishiguro; Aimee L. Reichert; Heather E. Gary; Solace Shen
This paper discusses converging evidence to support the hypothesis that personified robots and other embodied personified computational systems may represent a new ontological category, where ontology refers to basic categories of being, and ways of distinguishing them.
human-robot interaction | 2015
Peter H. Kahn; Takayuki Kanda; Hiroshi Ishiguro; Brian T. Gill; Solace Shen; Heather E. Gary; Jolina H. Ruckert
This conceptual paper broaches possibilities and limits of establishing psychological intimacy in HRI.
human-robot interaction | 2013
Jolina H. Ruckert; Peter H. Kahn; Takayuki Kanda; Hiroshi Ishiguro; Solace Shen; Heather E. Gary
Will people keep the secret of a socially compelling robot who shares, in confidence, a “personal” (robot) failing? Toward answering this question, 81 adults participated in a 20-minute interaction with (a) a humanoid robot (Robovie) interacting in a highly social way as a lab tour guide, and (b) with a human being interacting in the same highly social way. As a baseline comparison, participants also interacted with (c) a humanoid robot (Robovie) interacting in a more rudimentary social way. In each condition, the tour guide asks the participant to keep a secret. Results showed that the majority of the participants (59%) kept the secret of the highly social robot, and did not tell the experimenter when asked directly, with the robot present. This percentage did not differ statistically from the percentage who kept the human's secret (67%). It did differ statistically when the robot engaged in the more rudimentary social interaction (11%). These results suggest that as humanoid robots become increasingly social in their interaction, people will form increasingly intimate and trusting psychological relationships with them. Discussion focuses on design principles (how to engender psychological intimacy in human-robot interaction) and norms (whether it is even desirable to do so, and if so in what contexts).
human-robot interaction | 2014
Peter H. Kahn; Jolina H. Ruckert; Takayuki Kanda; Hiroshi Ishiguro; Heather E. Gary; Solace Shen
This conceptual paper provides design guidelines to enhance the sociality of human-robot interaction. The paper draws on the Interaction Pattern Approach in HRI, which seeks to specify the underlying structures and functions of human interaction. We extend this approach by proposing that in the same way people effectively engage the social world with different personas in different contexts, so can robots be designed not only with a single persona but with multiple personas.
Human Development | 2013
Peter H. Kahn; Heather E. Gary; Solace Shen
This paper shows how humor can be used as an interaction pattern to help establish sociality in human-robot interaction. Drawing illustratively from our published research on people interacting with ATR’s humanoid robot Robovie, we highlight four forms of humor that we successfully implemented: wit and the ice-breaker, the corny joke, subtle humor, and dry humor and self-deprecation.
human-robot interaction | 2011
Solace Shen
It has been happening for centuries. New technologies are invented that reshape individual thought and restructure social life. The emergence of the printing press, for example, provided access to written language for millions of people. In so doing, it gave rise to new forms of mental representations and to a powerful new means for the accretion of knowledge across generations. We would like to suggest that an equally astonishing technology is on the immediate horizon: social embodied technological networked entities. For purposes here we will narrow the class of these entities to perhaps their most canonical form: social robots. These robots embody aspects of people insofar as they have a persona, are adaptive and autonomous, and can talk, learn, use natural cues, and self-organize. One such humanoid robot, Robovie, was used as a museum guide at the Osaka Science Museum in Japan, where it autonomously led groups of children around the museum and conversed with them [Shiomi, Kanda, Ishiguro, & Hagita, 2007]. In the near future, think of a child coming home from school and interacting with a robot nanny or a robot friend [Tanaka, Cicourel, & Movellan, 2007]. Or think of an elderly person who is assisted in the home by a robot caretaker and who finds company through interaction with the robot [Sparrow & Sparrow, 2006]. Or, building conceptually on early software programs like ELIZA [Weizenbaum, 1966], think of patients in sessions with a robot therapist. These scenarios are not just science fiction anymore. They are in research laboratories and will soon emerge in common life. These robots will change not only our everyday lives but also the way we think about what constitutes a social and moral other. We’ve been finding this to be so in our collaborative research. In one study, for example, 90 children (9-, 12-, and 15-year-olds) initially interacted with the humanoid robot Robovie in 15-minute sessions [Kahn et al., 2012a].
Each session ended when an experimenter interrupted Robovie’s turn at a game and, against Robovie’s stated moral objections – based on moral considerations of fairness and psychological harm to itself – put Robovie into a closet. Each child was then engaged in a 50-minute structural-developmental interview.
human-robot interaction | 2016
Peter H. Kahn; Takayuki Kanda; Hiroshi Ishiguro; Brian T. Gill; Solace Shen; Jolina H. Ruckert; Heather E. Gary
This conceptual paper draws upon moral philosophy to broach the question: Are robots moral agents?