
Publications


Featured research published by Crystal Chao.


IEEE Transactions on Autonomous Mental Development | 2010

Designing Interactions for Robot Active Learners

Maya Cakmak; Crystal Chao; Andrea Lockerd Thomaz

This paper addresses some of the problems that arise when applying active learning to the context of human-robot interaction (HRI). Active learning is an attractive strategy for robot learners because it has the potential to improve the accuracy and the speed of learning, but it can cause issues from an interaction perspective. Here we present three interaction modes that enable a robot to use active learning queries. The three modes differ in when they make queries: the first makes a query every turn, the second makes a query only under certain conditions, and the third makes a query only when explicitly requested by the teacher. We conduct an experiment in which 24 human subjects teach concepts to our upper-torso humanoid robot, Simon, in each interaction mode, and we compare these modes against a baseline mode using only passive supervised learning. We report results from both a learning and an interaction perspective. The data show that the three modes using active learning are preferable to the mode using passive supervised learning both in terms of performance and human subject preference, but each mode has advantages and disadvantages. Based on our results, we lay out several guidelines that can inform the design of future robotic systems that use active learning in an HRI setting.
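
As a rough illustration of the three query modes (not the paper's implementation), the sketch below decides per turn whether the robot queries; the function name, threshold, and mode labels are invented:

    def should_query(mode, uncertainty, teacher_requested, threshold=0.5):
        """Decide whether the robot makes an active-learning query this turn.

        mode: "every_turn"  - query on every turn
              "conditional" - query only when model uncertainty is high
              "on_demand"   - query only when the teacher explicitly asks
        """
        if mode == "every_turn":
            return True
        if mode == "conditional":
            return uncertainty > threshold
        if mode == "on_demand":
            return teacher_requested
        return False  # baseline: passive supervised learning, no queries

    # Example: one teaching turn under each mode.
    for mode in ["every_turn", "conditional", "on_demand", "passive"]:
        query = should_query(mode, uncertainty=0.7, teacher_requested=False)
        print(f"{mode:>11}: {'query the teacher' if query else 'observe label'}")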


Human-Robot Interaction | 2010

Transparent active learning for robots

Crystal Chao; Maya Cakmak; Andrea Lockerd Thomaz

This research aims to enable robots to learn from human teachers. Motivated by human social learning, we believe that a transparent learning process can help guide the human teacher to provide the most informative instruction. We believe active learning is an inherently transparent machine learning approach because the learner formulates queries to the oracle that reveal information about areas of uncertainty in the underlying model. In this work, we implement active learning on the Simon robot in the form of nonverbal gestures that query a human teacher about a demonstration within the context of a social dialogue. Our preliminary pilot study data show potential for transparency through active learning to improve the accuracy and efficiency of the teaching process. However, our data also seem to indicate possible undesirable effects from the human teacher's perspective regarding the balance of the interaction. These preliminary results argue for control strategies that balance leading and following during a social learning interaction.
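
A minimal sketch of uncertainty-driven query selection, assuming NumPy; the margin criterion is a generic active-learning heuristic, not necessarily the one used on Simon:

    import numpy as np

    def most_uncertain(probs):
        """probs: (n_samples, n_classes) class probabilities from the learner.
        Returns the index of the sample with the smallest margin between the
        top two classes, i.e. where a teacher's label is most informative."""
        sorted_p = np.sort(probs, axis=1)
        margin = sorted_p[:, -1] - sorted_p[:, -2]
        return int(np.argmin(margin))

    probs = np.array([[0.90, 0.10], [0.55, 0.45], [0.70, 0.30]])
    print("query sample:", most_uncertain(probs))  # -> 1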


International Conference on Development and Learning | 2011

Towards grounding concepts for transfer in goal learning from demonstration

Crystal Chao; Maya Cakmak; Andrea Lockerd Thomaz

We aim to build robots that frame the task learning problem as goal inference so that they are natural to teach and meet people's expectations for a learning partner. The focus of this work is the scenario of a social robot that learns task goals from human demonstrations without prior knowledge of high-level concepts. In the system that we present, these discrete concepts are grounded from low-level continuous sensor data through unsupervised learning, and task goals are subsequently learned on them using Bayesian inference. The grounded concepts are derived from the structure of the Learning from Demonstration (LfD) problem and exhibit degrees of prototypicality. These concepts can be used to transfer knowledge to future tasks, resulting in faster learning of those tasks. Using sensor data taken during demonstrations to our robot from five human teachers, we show the expressivity of using grounded concepts when learning new tasks from demonstration. We then show how the learning curve improves when transferring the knowledge of grounded concepts to future tasks.
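
As a hedged sketch of the grounding step, assuming scikit-learn and synthetic sensor data (the paper's actual features and clustering method are not reproduced here):

    import numpy as np
    from sklearn.cluster import KMeans

    # Synthetic continuous sensor readings from demonstrations
    # (e.g. two object features per observation).
    rng = np.random.default_rng(0)
    obs = np.vstack([rng.normal(0.2, 0.05, (20, 2)),   # one latent concept
                     rng.normal(0.8, 0.05, (20, 2))])  # another latent concept

    # Ground discrete concepts from the continuous data, unsupervised.
    symbols = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(obs)

    # Goal learning can then estimate concept probabilities by counting,
    # a simple stand-in for the Bayesian inference described above.
    counts = np.bincount(symbols, minlength=2)
    print("P(concept):", counts / counts.sum())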


Robot and Human Interactive Communication | 2011

Simon plays Simon says: The timing of turn-taking in an imitation game

Crystal Chao; Jinhan Lee; Momotaz Begum; Andrea Lockerd Thomaz

Turn-taking is fundamental to the way humans engage in information exchange, but robots currently lack the turn-taking skills required for natural communication. In order to bring effective turn-taking to robots, we must first understand the underlying processes in the context of what is possible to implement. We describe a data collection experiment with an interaction format inspired by “Simon says,” a turn-taking imitation game that engages the channels of gaze, speech, and motion. We analyze data from 23 human subjects interacting with a humanoid social robot and propose the principle of minimum necessary information (MNI) as a factor in determining the timing of the human response. We also describe the other observed phenomena of channel exclusion, efficiency, and adaptation. We discuss the implications of these principles and propose some ways to incorporate our findings into a computational model of turn-taking.
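
To make the MNI idea concrete, a hypothetical latency computation (all timings invented): the human's response is measured from the point where the robot's action becomes unambiguous, not from the end of its turn:

    def response_offsets(mni_time, turn_end_time, response_time):
        """Latency of the human response from the MNI point and from the
        end of the robot's turn, in seconds."""
        return response_time - mni_time, response_time - turn_end_time

    from_mni, from_end = response_offsets(mni_time=1.2, turn_end_time=3.0,
                                          response_time=1.6)
    print(f"after MNI: {from_mni:+.1f}s, after turn end: {from_end:+.1f}s")
    # A negative offset from turn end means the human responded before the
    # robot finished its turn -- the overlap the MNI principle predicts.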


Human-Robot Interaction | 2012

Timing in multimodal turn-taking interactions: control and analysis using timed Petri nets

Crystal Chao; Andrea Lockerd Thomaz

Turn-taking interactions with humans are multimodal and reciprocal in nature. In addition, the timing of actions is of great importance, as it influences both social and task strategies. To enable the precise control and analysis of timed discrete events for a robot, we develop a system for multimodal collaboration based on a timed Petri net (TPN) representation. We also argue for action interruptions in reciprocal interaction and describe their implementation within our system. Using the system, our autonomously operating humanoid robot Simon collaborates with humans through both speech and physical action to solve the Towers of Hanoi, during which the human and the robot take turns manipulating objects in a shared physical workspace. We hypothesize that action interruptions have a positive impact on turn-taking and evaluate this in the Towers of Hanoi domain through two experimental methods. One is a between-groups user study with 16 participants. The other is a simulation experiment using 200 simulated users of varying speed, initiative, compliance, and correctness. In these experiments, action interruptions are either present or absent in the system. Our collective results show that action interruptions lead to increased task efficiency through increased user initiative, improved interaction balance, and higher sense of fluency. In arriving at these results, we demonstrate how these evaluation methods can be highly complementary in the analysis of interaction dynamics.
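
A toy timed Petri net, sketched only to make the representation concrete (class and place names are invented, and this is far simpler than the paper's system): a transition fires when its input places all hold tokens and its delay has elapsed:

    class TimedPetriNet:
        def __init__(self):
            self.marking = {}        # place -> token count
            self.transitions = []    # (name, inputs, outputs, delay)

        def add_place(self, place, tokens=0):
            self.marking[place] = tokens

        def add_transition(self, name, inputs, outputs, delay):
            self.transitions.append((name, inputs, outputs, delay))

        def enabled(self, inputs):
            return all(self.marking[p] > 0 for p in inputs)

        def fire(self, name, now, enabled_since):
            for t_name, inputs, outputs, delay in self.transitions:
                if t_name != name:
                    continue
                if self.enabled(inputs) and now - enabled_since >= delay:
                    for p in inputs:
                        self.marking[p] -= 1
                    for p in outputs:
                        self.marking[p] += 1
                    return True
            return False

    net = TimedPetriNet()
    net.add_place("robot_has_floor", tokens=1)
    net.add_place("human_has_floor")
    # Yield the floor only after holding it for at least 0.5 s.
    net.add_transition("yield_floor", ["robot_has_floor"],
                       ["human_has_floor"], delay=0.5)
    print(net.fire("yield_floor", now=1.0, enabled_since=0.2))  # True
    print(net.marking)  # {'robot_has_floor': 0, 'human_has_floor': 1}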


AI Magazine | 2011

Turn-Taking Based on Information Flow for Fluent Human-Robot Interaction

Andrea Lockerd Thomaz; Crystal Chao

Turn-taking is a fundamental part of human communication. Our goal is to devise a turn-taking framework for human-robot interaction that, like the human skill, represents something fundamental about interaction and generalizes across contexts and domains. We propose a model of turn-taking, and conduct an experiment with human subjects to inform this model. Our findings from this study suggest that information flow is an integral part of human floor-passing behavior. Following this, we implement autonomous floor relinquishing on a robot and discuss our insights into the nature of a general turn-taking model for human-robot interaction.
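
A deliberately small sketch of information-flow-driven floor passing (the predicate and names are invented): the floor changes hands when the information a turn was meant to convey has been delivered, rather than after a fixed duration:

    def floor_action(has_floor, info_remaining, partner_speaking):
        """Pass the floor based on information flow rather than fixed timing."""
        if has_floor:
            return "hold" if info_remaining > 0 else "relinquish"
        return "wait" if partner_speaking else "seize"

    print(floor_action(has_floor=True, info_remaining=0, partner_speaking=False))
    # -> relinquish: the turn's information has been delivered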


Human-Robot Interaction | 2013

Controlling social dynamics with a parametrized model of floor regulation

Crystal Chao; Andrea Lockerd Thomaz

Turn-taking is ubiquitous in human communication, yet turn-taking between humans and robots continues to be stilted and awkward for human users. The goal of our work is to build autonomous robot controllers for successfully engaging in human-like turn-taking interactions. Towards this end, we present CADENCE, a novel computational model and architecture that explicitly reasons about the four components of floor regulation: seizing the floor, yielding the floor, holding the floor, and auditing the owner of the floor. The model is parametrized to enable the robot to achieve a range of social dynamics for the human-robot dyad. In a between-groups experiment with 30 participants, our humanoid robot uses this turn-taking system at two contrasting parametrizations to engage users in autonomous object play interactions. Our results from the study show that: (1) manipulating these turn-taking parameters results in significantly different robot behavior; (2) people perceive the robot's behavioral differences and consequently attribute different personalities to the robot; and (3) changing the robot's personality results in different behavior from the human, manipulating the social dynamics of the dyad. We discuss the implications of this work for various contextual applications as well as the key limitations of the system to be addressed in future work.
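
An illustrative parametrization of the four floor-regulation components named above; the parameter names and values are invented, not CADENCE's actual ones:

    import random

    class FloorRegulator:
        def __init__(self, seize_eagerness, max_hold_time):
            self.seize_eagerness = seize_eagerness  # 0..1: how readily to take the floor
            self.max_hold_time = max_hold_time      # seconds before yielding

        def act(self, has_floor, hold_time, floor_free):
            if has_floor:
                # Hold the floor until the parametrized limit, then yield it.
                return "hold" if hold_time < self.max_hold_time else "yield"
            if floor_free and random.random() < self.seize_eagerness:
                return "seize"
            return "audit"  # watch whoever currently owns the floor

    # Two contrasting parametrizations, echoing the between-groups study.
    active = FloorRegulator(seize_eagerness=0.9, max_hold_time=2.0)
    passive = FloorRegulator(seize_eagerness=0.2, max_hold_time=6.0)
    print(active.act(has_floor=True, hold_time=3.0, floor_free=False))  # yield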


International Journal of Social Robotics | 2012

Multi-cue Contingency Detection

Jinhan Lee; Crystal Chao; Aaron F. Bobick; Andrea Lockerd Thomaz

The ability to detect a human’s contingent response is an essential skill for a social robot attempting to engage new interaction partners or maintain ongoing turn-taking interactions. Prior work on contingency detection focuses on single cues from isolated channels, such as changes in gaze, motion, or sound. We propose a framework that integrates multiple cues for detecting contingency from multimodal sensor data in human-robot interaction scenarios. We describe three levels of integration and discuss our method for performing sensor fusion at each of these levels. We perform a Wizard-of-Oz data collection experiment in a turn-taking scenario in which our humanoid robot plays the turn-taking imitation game “Simon says” with human partners. Using this data set, which includes motion and body pose cues from a depth and color image and audio cues from a microphone, we evaluate our contingency detection module with the proposed integration mechanisms and show gains in accuracy of our multi-cue approach over single-cue contingency detection. We show the importance of selecting the appropriate level of cue integration as well as the implications of varying the referent event parameter.
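
A sketch of decision-level cue fusion, one of several possible integration levels; the weights and threshold here are arbitrary placeholders, not the paper's fusion method:

    def fused_contingency(cue_probs, weights=None, threshold=0.5):
        """cue_probs: per-cue probabilities that the observed change (gaze,
        motion, sound) is a contingent response to the robot's referent event."""
        if weights is None:
            weights = [1.0] * len(cue_probs)
        score = sum(w * p for w, p in zip(weights, cue_probs)) / sum(weights)
        return score > threshold, score

    contingent, score = fused_contingency([0.8, 0.4, 0.7])  # gaze, motion, audio
    print(contingent, round(score, 2))  # True 0.63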


The International Journal of Robotics Research | 2016

Timed Petri nets for fluent turn-taking over multimodal interaction resources in human-robot collaboration

Crystal Chao; Andrea Lockerd Thomaz

The goal of this work is to develop computational models of social intelligence that enable robots to work side by side with humans, solving problems and achieving task goals through dialogue and collaborative manipulation. A defining problem of collaborative behavior in an embodied setting is the manner in which multiple agents make use of shared resources. In a situated dialogue, these resources include physical bottlenecks such as objects or spatial regions, and cognitive bottlenecks such as the speaking floor. For a robot to function as an effective collaborative partner with a human, it must be able to seize and yield such resources appropriately according to social expectations. We describe a general framework that uses timed Petri nets for the modeling and execution of robot speech, gaze, gesture, and manipulation for collaboration. The system dynamically monitors resource requirements and availability to control real-time turn-taking decisions over resources that are shared with humans, reasoning about different resource types independently. We evaluate our approach with an experiment in which our robot Simon performs a collaborative assembly task with 26 different human partners, showing that the multimodal reciprocal approach results in superior task performance, fluency, and balance of control.
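
A toy sketch of per-resource turn-taking (names are illustrative): each interaction resource, whether the speaking floor, an object, or a region of space, is tracked independently, and the robot acts only on resources that are currently free:

    class ResourceMonitor:
        def __init__(self, resources):
            self.owner = {r: None for r in resources}

        def try_seize(self, resource, agent):
            if self.owner[resource] is None:
                self.owner[resource] = agent
                return True
            return False  # held by the partner: wait or pursue another resource

        def release(self, resource, agent):
            if self.owner[resource] == agent:
                self.owner[resource] = None

    shared = ResourceMonitor(["speaking_floor", "workspace_center", "part_A"])
    shared.try_seize("speaking_floor", "human")
    print(shared.try_seize("speaking_floor", "robot"))   # False: human is talking
    print(shared.try_seize("workspace_center", "robot")) # True: reason per resource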


International Conference on Multimodal Interfaces | 2012

Timing multimodal turn-taking for human-robot cooperation

Crystal Chao

In human cooperation, the concurrent usage of multiple social modalities such as speech, gesture, and gaze results in robust and efficient communicative acts. Such multimodality in combination with reciprocal intentions supports fluent turn-taking. I hypothesize that human-robot turn-taking can be made more fluent through appropriate timing of multimodal actions. Managing timing includes understanding the impact that timing can have on interactions as well as having a control system that supports the manipulation of such timing. To this end, I propose to develop a computational turn-taking model of the timing and information flow of reciprocal interactions. I also propose to develop an architecture based on the timed Petri net (TPN) for the generation of coordinated multimodal behavior, inside of which the turn-taking model will regulate turn timing and action initiation and interruption in order to seize and yield control. Through user studies in multiple domains, I intend to demonstrate the system's generality and evaluate it on balance of control, fluency, and task effectiveness.

Collaboration


Dive into Crystal Chao's collaborations.

Top Co-Authors

Andrea Lockerd Thomaz, University of Texas at Austin
Maya Cakmak, University of Washington
Jinhan Lee, Georgia Institute of Technology
Aaron F. Bobick, Georgia Institute of Technology
Cynthia Breazeal, Massachusetts Institute of Technology
Matt Berlin, Massachusetts Institute of Technology
Michael J. Gielniak, Georgia Institute of Technology
Jesse Gray, Massachusetts Institute of Technology