Jolina H. Ruckert
University of Washington
Publications
Featured research published by Jolina H. Ruckert.
Developmental Psychology | 2012
Peter H. Kahn; Takayuki Kanda; Hiroshi Ishiguro; Nathan G. Freier; Rachel L. Severson; Brian T. Gill; Jolina H. Ruckert; Solace Shen
Children will increasingly come of age with personified robots and potentially form social and even moral relationships with them. What will such relationships look like? To address this question, 90 children (9-, 12-, and 15-year-olds) initially interacted with a humanoid robot, Robovie, in 15-min sessions. Each session ended when an experimenter interrupted Robovie's turn at a game and, against Robovie's stated objections, put Robovie into a closet. Each child was then engaged in a 50-min structural-developmental interview. Results showed that during the interaction sessions, all of the children engaged in physical and verbal social behaviors with Robovie. The interview data showed that the majority of children believed that Robovie had mental states (e.g., was intelligent and had feelings) and was a social being (e.g., could be a friend, offer comfort, and be trusted with secrets). In terms of Robovie's moral standing, children believed that Robovie deserved fair treatment and should not be harmed psychologically but did not believe that Robovie was entitled to its own liberty (Robovie could be bought and sold) or civil rights (in terms of voting rights and deserving compensation for work performed). Developmentally, while more than half the 15-year-olds conceptualized Robovie as a mental, social, and partly moral other, they did so to a lesser degree than the 9- and 12-year-olds. Discussion focuses on how (a) children's social and moral relationships with future personified robots may well be substantial and meaningful and (b) personified robots of the future may emerge as a unique ontological category.
Current Directions in Psychological Science | 2009
Peter H. Kahn; Rachel L. Severson; Jolina H. Ruckert
Two world trends are powerfully reshaping human existence: the degradation, if not destruction, of large parts of the natural world, and unprecedented technological development. At the nexus of these two trends lies technological nature—technologies that in various ways mediate, augment, or simulate the natural world. Current examples of technological nature include videos and live webcams of nature, robot animals, and immersive virtual environments. Does it matter for the physical and psychological well-being of the human species that actual nature is being replaced with technological nature? As the basis for our provisional answer (it is “yes”), we draw on evolutionary and cross-cultural developmental accounts of the human relation with nature and some recent psychological research on the effects of technological nature. Finally, we discuss the issue—and area for future research—of “environmental generational amnesia.” The concern is that, by adapting gradually to the loss of actual nature and to the increase of technological nature, humans will lower the baseline across generations for what counts as a full measure of the human experience and of human flourishing.
human-robot interaction | 2008
Cady M. Stanton; Peter H. Kahn; Rachel L. Severson; Jolina H. Ruckert; Brian T. Gill
This study investigated whether a robotic dog might aid in the social development of children with autism. Eleven children diagnosed with autism (ages 5-8) interacted with the robotic dog AIBO and, during a different period within the same experimental session, a simple mechanical toy dog (Kasha), which had no ability to detect or respond to its physical or social environment. Results showed that, in comparison to Kasha, the children spoke more words to AIBO, and more often engaged in three types of behavior with AIBO typical of children without autism: verbal engagement, reciprocal interaction, and authentic interaction. In addition, we found suggestive evidence (with p values ranging from .07 to .09) that the children interacted more with AIBO, and, while in the AIBO session, engaged in fewer autistic behaviors. Discussion focuses on why robotic animals might benefit children with autism.
human-robot interaction | 2008
Peter H. Kahn; Nathan G. Freier; Takayuki Kanda; Hiroshi Ishiguro; Jolina H. Ruckert; Rachel L. Severson; Shaun K. Kane
We propose that Christopher Alexander's idea of design patterns can benefit the emerging field of HRI. We first discuss four features of design patterns that appear particularly useful. For example, a pattern should be specified abstractly enough such that many different instantiations of the pattern can be uniquely realized in the solution to specific problems in context. Then, after describing our method for generating patterns, we offer and describe eight possible design patterns for sociality in human-robot interaction: initial introduction, didactic communication, in motion together, personal interests and history, recovering from mistakes, reciprocal turn-taking in game context, physical intimacy, and claiming unfair treatment or wrongful harms. We also discuss the issue of validation of design patterns. If a design pattern program proves successful, it will provide HRI researchers with basic knowledge about human-robot interaction, and save time through the reuse of patterns to achieve high levels of sociality.
human-robot interaction | 2012
Peter H. Kahn; Takayuki Kanda; Hiroshi Ishiguro; Brian T. Gill; Jolina H. Ruckert; Solace Shen; Heather E. Gary; Aimee L. Reichert; Nathan G. Freier; Rachel L. Severson
Robots will increasingly take on roles in our social lives where they can cause humans harm. When robots do so, will people hold robots morally accountable? To investigate this question, 40 undergraduate students individually engaged in a 15-minute interaction with ATR's humanoid robot, Robovie. The interaction culminated in a situation where Robovie incorrectly assessed the participant's performance in a game, and prevented the participant from winning a $20 prize. Each participant was then interviewed in a 50-minute session. Results showed that all of the participants engaged socially with Robovie, and many of them conceptualized Robovie as having mental/emotional and social attributes. Sixty-five percent of the participants attributed some level of moral accountability to Robovie. Statistically, participants held Robovie less accountable than they would a human, but more accountable than they would a vending machine. Results are discussed in terms of the New Ontological Category Hypothesis and robotic warfare.
human-robot interaction | 2011
Peter H. Kahn; Aimee L. Reichert; Heather E. Gary; Takayuki Kanda; Hiroshi Ishiguro; Solace Shen; Jolina H. Ruckert; Brian T. Gill
human-robot interaction | 2010
Peter H. Kahn; Jolina H. Ruckert; Takayuki Kanda; Hiroshi Ishiguro; Aimee L. Reichert; Heather E. Gary; Solace Shen
This paper discusses converging evidence to support the hypothesis that personified robots and other embodied personified computational systems may represent a new ontological category, where ontology refers to basic categories of being, and ways of distinguishing them.
human-robot interaction | 2015
Peter H. Kahn; Takayuki Kanda; Hiroshi Ishiguro; Brian T. Gill; Solace Shen; Heather E. Gary; Jolina H. Ruckert
This conceptual paper broaches possibilities and limits of establishing psychological intimacy in HRI.
human-robot interaction | 2013
Jolina H. Ruckert; Peter H. Kahn; Takayuki Kanda; Hiroshi Ishiguro; Solace Shen; Heather E. Gary
Will people keep the secret of a socially compelling robot who shares, in confidence, a "personal" (robot) failing? Toward answering this question, 81 adults participated in a 20-minute interaction with (a) a humanoid robot (Robovie) interacting in a highly social way as a lab tour guide, and (b) a human being interacting in the same highly social way. As a baseline comparison, participants also interacted with (c) a humanoid robot (Robovie) interacting in a more rudimentary social way. In each condition, the tour guide asks for the secret-keeping behavior. Results showed that the majority of the participants (59%) kept the secret of the highly social robot, and did not tell the experimenter when asked directly, with the robot present. This percentage did not differ statistically from the percentage who kept the human's secret (67%). It did differ statistically from when the robot engaged in the more rudimentary social interaction (11%). These results suggest that as humanoid robots become increasingly social in their interaction, people will form increasingly intimate and trusting psychological relationships with them. Discussion focuses on design principles (how to engender psychological intimacy in human-robot interaction) and norms (whether it is even desirable to do so, and if so in what contexts).
human-robot interaction | 2014
Peter H. Kahn; Jolina H. Ruckert; Takayuki Kanda; Hiroshi Ishiguro; Heather E. Gary; Solace Shen
This conceptual paper provides design guidelines to enhance the sociality of human-robot interaction. The paper draws on the Interaction Pattern Approach in HRI, which seeks to specify the underlying structures and functions of human interaction. We extend this approach by proposing that in the same way people effectively engage the social world with different personas in different contexts, so can robots be designed not only with a single persona but multiple personas.