Heather E. Gary
University of Washington
Publications
Featured research published by Heather E. Gary.
human-robot interaction | 2012
Peter H. Kahn; Takayuki Kanda; Hiroshi Ishiguro; Brian T. Gill; Jolina H. Ruckert; Solace Shen; Heather E. Gary; Aimee L. Reichert; Nathan G. Freier; Rachel L. Severson
Robots will increasingly take on roles in our social lives where they can cause humans harm. When robots do so, will people hold robots morally accountable? To investigate this question, 40 undergraduate students individually engaged in a 15-minute interaction with ATR's humanoid robot, Robovie. The interaction culminated in a situation where Robovie incorrectly assessed the participant's performance in a game, and prevented the participant from winning a $20 prize. Each participant was then interviewed in a 50-minute session. Results showed that all of the participants engaged socially with Robovie, and many of them conceptualized Robovie as having mental/emotional and social attributes. Sixty-five percent of the participants attributed some level of moral accountability to Robovie. Statistically, participants held Robovie less accountable than they would a human, but more accountable than they would a vending machine. Results are discussed in terms of the New Ontological Category Hypothesis and robotic warfare.
human-robot interaction | 2011
Peter H. Kahn; Aimee L. Reichert; Heather E. Gary; Takayuki Kanda; Hiroshi Ishiguro; Solace Shen; Jolina H. Ruckert; Brian T. Gill
human-robot interaction | 2010
Peter H. Kahn; Jolina H. Ruckert; Takayuki Kanda; Hiroshi Ishiguro; Aimee L. Reichert; Heather E. Gary; Solace Shen
This paper discusses converging evidence to support the hypothesis that personified robots and other embodied personified computational systems may represent a new ontological category, where ontology refers to basic categories of being, and ways of distinguishing them.
human-robot interaction | 2015
Peter H. Kahn; Takayuki Kanda; Hiroshi Ishiguro; Brian T. Gill; Solace Shen; Heather E. Gary; Jolina H. Ruckert
This conceptual paper broaches possibilities and limits of establishing psychological intimacy in HRI.
human-robot interaction | 2013
Jolina H. Ruckert; Peter H. Kahn; Takayuki Kanda; Hiroshi Ishiguro; Solace Shen; Heather E. Gary
Will people keep the secret of a socially compelling robot who shares, in confidence, a “personal” (robot) failing? Toward answering this question, 81 adults participated in a 20-minute interaction with (a) a humanoid robot (Robovie) interacting in a highly social way as a lab tour guide, and (b) a human being interacting in the same highly social way. As a baseline comparison, participants also interacted with (c) a humanoid robot (Robovie) interacting in a more rudimentary social way. In each condition, the tour guide asks the participant to keep a secret. Results showed that the majority of the participants (59%) kept the secret of the highly social robot and did not tell the experimenter when asked directly, with the robot present. This percentage did not differ statistically from the percentage who kept the human's secret (67%). It did differ statistically from the condition in which the robot engaged in the more rudimentary social interaction (11%). These results suggest that as humanoid robots become increasingly social in their interaction, people will form increasingly intimate and trusting psychological relationships with them. Discussion focuses on design principles (how to engender psychological intimacy in human-robot interaction) and norms (whether it is even desirable to do so, and if so, in what contexts). Categories and Subject Descriptors: J.4 [Social and Behavioral Sciences]: psychology; H.1.2 [Models and Principles]: User/Machine Systems, human factors. General Terms: Experimentation, Human Factors
human-robot interaction | 2014
Peter H. Kahn; Jolina H. Ruckert; Takayuki Kanda; Hiroshi Ishiguro; Heather E. Gary; Solace Shen
This conceptual paper provides design guidelines to enhance the sociality of human-robot interaction. The paper draws on the Interaction Pattern Approach in HRI, which seeks to specify the underlying structures and functions of human interaction. We extend this approach by proposing that, just as people effectively engage the social world with different personas in different contexts, robots can be designed not with a single persona but with multiple personas.
Human Development | 2013
Peter H. Kahn; Heather E. Gary; Solace Shen
This paper shows how humor can be used as an interaction pattern to help establish sociality in human-robot interaction. Drawing illustratively from our published research on people interacting with ATR’s humanoid robot Robovie, we highlight four forms of humor that we successfully implemented: wit and the ice-breaker, corny joke, subtle humor, and dry humor and self-deprecation. Categories and Subject Descriptors: K.4.2 [Computers and Society]: Social Issues. General Terms: Design, Human Factors, Theory
Early Education and Development | 2016
Nadia Chernyak; Heather E. Gary
It has been happening for centuries. New technologies are invented that reshape individual thought and restructure social life. The emergence of the printing press, for example, provided access to written language for millions of people. In so doing, it gave rise to new forms of mental representations and to a powerful new means for the accretion of knowledge across generations.

We would like to suggest that an equally astonishing technology is on the immediate horizon: social, embodied, technological, networked entities. For purposes here we will narrow the class of these entities to perhaps their most canonical form: social robots. These robots embody aspects of people insofar as they have a persona, are adaptive and autonomous, and can talk, learn, use natural cues, and self-organize. One such humanoid robot, Robovie, was used as a museum guide at the Osaka Science Museum in Japan, where it autonomously led groups of children around the museum and conversed with them [Shiomi, Kanda, Ishiguro, & Hagita, 2007].

In the near future, think of a child coming home from school and interacting with a robot nanny or a robot friend [Tanaka, Cicourel, & Movellan, 2007]. Or think of an elderly person who is assisted in the home by a robot caretaker and who finds company through interaction with the robot [Sparrow & Sparrow, 2006]. Or, building conceptually on early software programs like ELIZA [Weizenbaum, 1966], think of patients in sessions with a robot therapist. These scenarios are not just science fiction anymore. They are in research laboratories and will soon emerge in common life. These robots will not only change our everyday lives but also the way we think about what constitutes a social and moral other.

We have been finding this to be so in our collaborative research. In one study, for example, 90 children (9-, 12-, and 15-year-olds) initially interacted with the humanoid robot Robovie in 15-minute sessions [Kahn et al., 2012a]. Each session ended when an experimenter interrupted Robovie’s turn at a game and, against Robovie’s stated moral objections (based on moral considerations of fairness and psychological harm to itself), put Robovie into a closet. Each child was then engaged in a
human robot interaction | 2016
Peter H. Kahn; Takayuki Kanda; Hiroshi Ishiguro; Brian T. Gill; Solace Shen; Jolina H. Ruckert; Heather E. Gary
Research Findings: Interactive technology has become ubiquitous in young children’s lives, but little is known about how children incorporate such technologies into their intuitive biological theories. Here we explore how the manner in which technology is introduced to young children impacts their biological reasoning, moral regard, and prosocial behavior toward it. We asked 5- and 7-year-old children to interact with a robot dog that was described either as moving autonomously or as remote controlled. Compared with a controlled robot, the autonomous robot caused children to ascribe higher emotional and physical sentience to the robot, to reference the robot as having desires and physiological states, and to reference moral concerns as applying to the robot. Children who owned a dog at home were more likely to behave prosocially toward the autonomous robot than those who did not. Practice or Policy: Recent work has begun to use robots as learning tools. Our results suggest that the manner in which robots are introduced to young children may differentially impact children’s learning. Presenting robots as autonomous agents may help promote children’s social-emotional development, whereas presenting robots as human controlled may help promote robots as purely cognitive educational tools.
ubiquitous computing | 2014
Peter H. Kahn; Takayuki Kanda; Hiroshi Ishiguro; Solace Shen; Heather E. Gary; Jolina H. Ruckert
Is it possible to design robots of the future so that they can enhance people's creative endeavors? Forty-eight young adults were asked to produce their most creative ideas in a small Zen rock garden in a laboratory setting. Participants were randomly assigned to one of two conditions. In one condition, the robot Robovie (through a Wizard of Oz interface) encouraged participants to generate creative ideas (e.g., “Can you think of another way to do that?”) and pulled relevant images and video clips from the web for each participant to look at, to help spur further creative expression. In a second condition, participants engaged in the same Zen rock garden task with the same core information, but through the modality of a self-paced PowerPoint presentation. Results showed that participants engaged in the creativity task longer and provided almost twice the number of creative expressions in the robot condition compared to the PowerPoint condition. Discussion focuses on a vision of social robotics coupled with advances in natural language processing to enhance the human creative mind and spirit.