Sonia Chernova
Georgia Institute of Technology
Publications
Featured research published by Sonia Chernova.
Synthesis Lectures on Artificial Intelligence and Machine Learning | 2014
Sonia Chernova; Andrea Lockerd Thomaz
Learning from Demonstration (LfD) explores techniques for learning a task policy from examples provided by a human teacher. The field of LfD has grown into an extensive body of literature over the past 30 years, with a wide variety of approaches for encoding human demonstrations and modeling skills and tasks. Additionally, we have recently seen a focus on gathering data from non-expert human teachers (i.e., domain experts but not robotics experts). In this book, we provide an introduction to the field with a focus on the unique technical challenges associated with designing robots that learn from naive human teachers. We begin, in the introduction, with a unification of the various terminology seen in the literature as well as an outline of the design choices one has in designing an LfD system. Chapter 2 gives a brief survey of the psychology literature that provides insights from human social learning that are relevant to designing robotic social learners. Chapter 3 walks through an LfD interaction, surveying the design choices one makes and state-of-the-art approaches in prior work. First is the choice of input: how the human teacher interacts with the robot to provide demonstrations. Next is the choice of modeling technique. Currently, there is a dichotomy in the field between approaches that model low-level motor skills and those that model high-level tasks composed of primitive actions; we devote a chapter to each. Chapter 7 is devoted to interactive and active learning approaches that allow the robot to refine an existing task model. Finally, Chapter 8 provides best practices for evaluation of LfD systems, with a focus on how to approach experiments with human subjects in this domain.
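The core LfD setting described here, learning a policy from teacher-provided examples, can be made concrete with a minimal sketch: collect (state, action) pairs from demonstrations and answer queries with the action of the nearest demonstrated state. All names and the toy state space below are illustrative assumptions, not material from the book.

```python
# Minimal sketch of LfD as supervised policy learning via nearest-neighbor lookup.
import numpy as np

class NearestNeighborPolicy:
    """Predict the action whose demonstrated state is closest to a query state."""

    def fit(self, states, actions):
        self.states = np.asarray(states, dtype=float)   # (N, d) demonstrated states
        self.actions = list(actions)                    # corresponding actions

    def predict(self, state):
        dists = np.linalg.norm(self.states - np.asarray(state, dtype=float), axis=1)
        return self.actions[int(np.argmin(dists))]

# Toy demonstrations: the teacher moves right when x < 5, stops otherwise.
demos = [((0.0, 1.0), "move_right"), ((3.0, 1.0), "move_right"), ((6.0, 1.0), "stop")]
policy = NearestNeighborPolicy()
policy.fit([s for s, _ in demos], [a for _, a in demos])
print(policy.predict((2.5, 1.0)))  # -> "move_right"
```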
Robot and Human Interactive Communication | 2011
Halit Bener Suay; Sonia Chernova
The Interactive Reinforcement Learning algorithm enables a human user to train a robot by providing rewards in response to past actions and anticipatory guidance to steer the selection of future actions. Past work with software agents has shown that incorporating user guidance into the policy learning process through Interactive Reinforcement Learning significantly reduces policy learning time by reducing the number of states the agent explores. We present the first study of Interactive Reinforcement Learning in real-world robotic systems. We report on four experiments that study the effects that teacher guidance and state space size have on policy learning performance. We discuss modifications made to apply Interactive Reinforcement Learning to a real-world system and show that guidance significantly reduces the learning time, and that its positive effects increase with state space size.
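As a rough illustration of the algorithm described above, here is a hedged sketch of one tabular Q-learning step in which a human reward signal supplements the environment reward and optional guidance restricts which actions the agent considers. The environment interface, hyperparameters, and guidance format are assumptions for illustration, not the paper's implementation.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # assumed hyperparameters

def interactive_q_step(Q, state, actions, env_step, human_reward, guidance):
    """One learning step; `guidance` maps a state to a suggested subset of actions."""
    allowed = guidance.get(state, actions)  # guidance prunes the actions explored
    if random.random() < EPSILON:
        action = random.choice(allowed)     # epsilon-greedy exploration
    else:
        action = max(allowed, key=lambda a: Q[(state, a)])
    next_state, env_reward = env_step(state, action)
    # Teacher feedback on the chosen action supplements the environment reward.
    reward = env_reward + human_reward(state, action)
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    return next_state

Q = defaultdict(float)  # tabular action-value estimates
```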
Human-Robot Interaction | 2013
Cynthia Breazeal; Nick DePalma; Jeff Orkin; Sonia Chernova; Malte Jung
Supporting a wide variety of interaction styles across a diverse set of people is a significant challenge in human-robot interaction (HRI). In this work, we explore a data-driven approach that relies on crowdsourcing as a rich source of interactions covering a wide repertoire of human behavior. We first develop an online game that requires two players to collaborate to solve a task. One player takes the role of a robot avatar and the other a human avatar, each with a different set of capabilities that must be coordinated to overcome challenges and complete the task. Leveraging the interaction data recorded in the online game, we present a novel technique for data-driven behavior generation using case-based planning for a real robot. We compare the resulting autonomous robot behavior against a Wizard of Oz baseline condition in a real-world reproduction of the online game conducted at the Boston Museum of Science. Results of a post-study survey of participants indicate that the autonomous robot behavior matched the performance of the human-operated robot on several important measures. We examined video recordings of the real-world game to draw additional insights into how the novice participants attempted to interact with the robot in a loosely structured collaborative task. We discovered that many of the collaborative interactions were generated in the moment and were driven by interpersonal dynamics, not necessarily by the task design. We also explored bids analysis as a meaningful construct for tapping into the affective qualities of HRI. An important lesson from this work is that in loosely structured collaborative tasks, robots need to be skillful in handling these in-the-moment interpersonal dynamics, as they have an important impact on the affective quality of the interaction for people. How such interactions dovetail with more task-oriented policies is an important area for future work, as we anticipate such interactions becoming commonplace as personal robots perform loosely structured tasks with people in human living spaces.
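To make the case-based planning idea concrete, a minimal sketch: store each recorded game interaction as a context-action case, and have the robot reuse the action of the most similar case. The feature encoding and case format below are illustrative assumptions, not the paper's actual representation.

```python
# Assumed case format: the game context a player faced and the action taken.
case_base = [
    {"context": {"holding": "key", "near": "door"}, "action": "unlock_door"},
    {"context": {"holding": None, "near": "table"}, "action": "pick_up_key"},
]

def similarity(ctx_a, ctx_b):
    """Fraction of context features (e.g., held object, location) that match."""
    keys = set(ctx_a) | set(ctx_b)
    return sum(ctx_a.get(k) == ctx_b.get(k) for k in keys) / len(keys)

def retrieve_action(context):
    """Reuse the action of the most similar recorded case."""
    best = max(case_base, key=lambda case: similarity(case["context"], context))
    return best["action"]

print(retrieve_action({"holding": "key", "near": "door"}))  # -> "unlock_door"
```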
Robot and Human Interactive Communication | 2011
Sonia Chernova; Nick DePalma; Elisabeth Morant; Cynthia Breazeal
The ability for robots to engage in interactive behavior with a broad range of people is critical for future development of social robotic applications. In this paper, we propose the use of online games as a means of generating large-scale data corpora for human-robot interaction research in order to create robust and diverse interaction models. We describe a data collection approach based on a multiplayer game that was used to collect movement, action and dialog data from hundreds of online users. We then study how these records of human-human interaction collected in a virtual world can be used to generate contextually correct social and task-oriented behaviors for a robot collaborating with a human in a similar real-world environment. We evaluate the resulting behavior model using a physical robot in the Boston Museum of Science, and show that the robot successfully performs the collaborative task and that its behavior is strongly influenced by patterns in the crowdsourced dataset.
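One plausible (assumed, not taken from the paper) way such crowdsourced records can drive behavior: log context-response events from the game, then have the robot produce the response most frequently observed for its current context.

```python
from collections import Counter, defaultdict

# Assumed log format: one context-response event per recorded game exchange.
log = [
    {"context": "visitor_greets", "response": "say_hello"},
    {"context": "visitor_greets", "response": "say_hello"},
    {"context": "visitor_greets", "response": "wave"},
    {"context": "task_complete", "response": "celebrate"},
]

model = defaultdict(Counter)
for event in log:
    model[event["context"]][event["response"]] += 1

def respond(context):
    """Return the response humans produced most often in this context."""
    return model[context].most_common(1)[0][0]

print(respond("visitor_greets"))  # -> "say_hello"
```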
Human-Robot Interaction | 2015
Anahita Mohseni-Kabir; Charles Rich; Sonia Chernova; Candace L. Sidner; Daniel Miller
We have developed learning and interaction algorithms to support a human teaching hierarchical task models to a robot using a single demonstration, in the context of a mixed-initiative interaction with bi-directional communication. In particular, we have identified and implemented two important heuristics for suggesting task groupings, based on the physical structure of the manipulated artifact and on the data flow between tasks. We have evaluated our algorithms with users in a simulated environment and shown both that the overall approach is usable and that the grouping suggestions significantly improve the learning and interaction.
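The data-flow heuristic lends itself to a compact sketch: suggest grouping two consecutive demonstrated actions when an artifact produced by the first is consumed by the second. The action record format below is an illustrative assumption.

```python
# Assumed action records: named steps with input artifacts and an output artifact.
demo = [
    {"name": "attach_wheel", "inputs": {"wheel", "axle"}, "output": "wheel_assembly"},
    {"name": "mount_assembly", "inputs": {"wheel_assembly", "frame"}, "output": "cart"},
    {"name": "pick_up_seat", "inputs": {"seat"}, "output": "seat"},
]

def suggest_groupings(actions):
    """Yield index pairs (i, i+1) linked by data flow: one step's output feeds the next."""
    for i in range(len(actions) - 1):
        if actions[i]["output"] and actions[i]["output"] in actions[i + 1]["inputs"]:
            yield (i, i + 1)

print(list(suggest_groupings(demo)))  # -> [(0, 1)]: the first two steps form a group
```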
Human-Robot Interaction | 2011
Halit Bener Suay; Sonia Chernova
Most human interactions with the environment depend on our ability to navigate freely and to use our hands and arms to manipulate objects. Developing natural means of controlling these abilities in humanoid robots can significantly broaden the usability of such platforms. An ideal interface for humanoid robot teleoperation would be inexpensive, person-independent, require no wearable equipment, and be easy to use, requiring little or no user training. This work presents a new humanoid robot control and interaction interface that uses depth images and skeletal tracking software to control the navigation, gaze and arm gestures of a humanoid robot. To control the robot, the user stands in front of a depth camera and assumes a specific pose to initiate skeletal tracking. The initial location of the user automatically becomes the origin of the control coordinate system. The user can then use leg and arm gestures to turn the robot's motors on and off, to switch operation modes, and to control the behavior of the robot. We present two control modes. The body control mode enables the user to control the arms and navigation direction of the robot using the person's own arms and location, respectively. The gaze direction control mode enables the user to control the robot's focus of attention by pointing with one hand while giving commands through gestures of the other hand. We present a demonstration of this interface in which a combination of these two control modes is used to successfully enable an Aldebaran Nao robot to carry an object from one location to another. Our work makes use of the Microsoft Kinect depth sensor.
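Two pieces of the described interface, deriving an arm angle from tracked joints and detecting a leg gesture as a mode switch, can be sketched as follows. Joint names, coordinate conventions, and thresholds are illustrative assumptions rather than the system's actual values.

```python
import math

def arm_pitch(shoulder, hand):
    """Angle of the shoulder-to-hand vector in the camera's x-z plane (radians)."""
    dx, dz = hand[0] - shoulder[0], hand[2] - shoulder[2]
    return math.atan2(dz, dx)

def mode_switch_gesture(left_foot, right_foot, threshold=0.25):
    """Treat raising one foot well above the other as the mode-toggle gesture."""
    return abs(left_foot[1] - right_foot[1]) > threshold  # y is up, in meters

# Skeleton joints as (x, y, z) camera-frame coordinates in meters.
shoulder, hand = (0.0, 1.4, 2.0), (0.3, 1.4, 1.6)
print(math.degrees(arm_pitch(shoulder, hand)))                # ~ -53: hand ahead of shoulder
print(mode_switch_gesture((0.1, 0.1, 2.0), (0.3, 0.5, 2.0)))  # True: one foot raised
```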
2013 IEEE Conference on Technologies for Practical Robot Applications (TePRA) | 2013
Nicholas Alunni; Calder Phillips-Grafftin; Halit Bener Suay; Daniel M. Lofaro; Dmitry Berenson; Sonia Chernova; Robert W. Lindeman; Paul Y. Oh
This paper presents our progress toward a user-guided manipulation framework for high degree-of-freedom robots operating in environments with limited communication. The system we propose consists of three components: (1) a user-guided perception interface that assists the user in providing task-level commands to the robot, (2) planning algorithms that autonomously generate robot motion while obeying relevant constraints, and (3) a trajectory execution and monitoring system that detects errors in execution. We have performed quantitative experiments on these three components and qualitative experiments of the entire pipeline with the PR2 robot turning a valve for the DARPA Robotics Challenge. We ran 20 tests of the entire framework, with an average run time of two minutes, and also report results for tests of each individual component.
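Component (3), trajectory execution and monitoring, might look roughly like the following sketch: step through planned waypoints and report an error when the measured state drifts past a tolerance. The interfaces here are assumptions; the actual system runs on a PR2 under limited communication.

```python
import numpy as np

def execute_and_monitor(waypoints, send_command, read_joint_state, tol=0.05):
    """Execute waypoints in order; return (success, index of the failing waypoint)."""
    for i, target in enumerate(waypoints):
        send_command(target)                       # command the next waypoint
        actual = np.asarray(read_joint_state())    # measured joint positions
        if np.linalg.norm(actual - np.asarray(target)) > tol:
            return False, i                        # deviation: hand back for replanning
    return True, None
```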
AI Magazine | 2006
Manuela M. Veloso; Paul E. Rybski; Scott Lenser; Sonia Chernova; Douglas L. Vail
CMRoboBits is a course offered at Carnegie Mellon University that introduces students to all the concepts needed to create a complete intelligent robot. In particular, the course focuses on the areas of perception, cognition, and action by using the Sony AIBO robot as the focus for the programming assignments. This course shows how an AIBO and its software resources make it possible for students to investigate and work with an unusually broad variety of AI topics within a single semester. While material presented in this article describes using AIBOs as the primary platform, the concepts presented in the course are not unique to the AIBO and can be applied on different kinds of robotic hardware.
Intelligent Robots and Systems | 2015
Russell Toris; Julius Kammerl; David V. Lu; Jihoon Lee; Odest Chadwicke Jenkins; Sarah Osentoski; Mitchell Wills; Sonia Chernova
Since its official introduction in 2012, the Robot Web Tools project has grown tremendously as an open-source community, enabling new levels of interoperability and portability across heterogeneous robot systems, devices, and front-end user interfaces. At the heart of Robot Web Tools is the rosbridge protocol as a general means for messaging ROS topics in a client-server paradigm suitable for wide area networks, and human-robot interaction at a global scale through modern web browsers. Building from rosbridge, this paper describes our efforts with Robot Web Tools to advance: 1) human-robot interaction through usable client and visualization libraries for more efficient development of front-end human-robot interfaces, and 2) cloud robotics through more efficient methods of transporting high-bandwidth topics (e.g., kinematic transforms, image streams, and point clouds). We further discuss the significant impact of Robot Web Tools through a diverse set of use cases that showcase the importance of a generic messaging protocol and front-end development systems for human-robot interaction.
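The rosbridge protocol itself is a set of JSON operations exchanged over a transport such as WebSockets. The sketch below subscribes to one topic and publishes to another; the host and topic names are illustrative, while the op/topic/msg structure follows the published rosbridge JSON schema.

```python
import json
import websocket  # pip install websocket-client

ws = websocket.create_connection("ws://robot.example.org:9090")  # hypothetical host

# Subscribe to a pose topic; rosbridge streams matching publish operations back.
ws.send(json.dumps({"op": "subscribe", "topic": "/robot_pose",
                    "type": "geometry_msgs/Pose"}))
print(ws.recv())  # first incoming message for the subscribed topic

# Advertise and publish a velocity command on another topic.
ws.send(json.dumps({"op": "advertise", "topic": "/cmd_vel",
                    "type": "geometry_msgs/Twist"}))
ws.send(json.dumps({"op": "publish", "topic": "/cmd_vel",
                    "msg": {"linear": {"x": 0.2, "y": 0.0, "z": 0.0},
                            "angular": {"x": 0.0, "y": 0.0, "z": 0.5}}}))
ws.close()
```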
Human-Robot Interaction | 2012
Russell Toris; Halit Bener Suay; Sonia Chernova
Research on robot learning from demonstration has seen significant growth in recent years, but existing evaluations have focused exclusively on algorithmic performance rather than usability factors, especially with respect to naïve users. Here we present findings from a comparative user study in which we asked non-experts to evaluate three distinctly different robot learning from demonstration algorithms: Behavior Networks, Interactive Reinforcement Learning, and Confidence Based Autonomy. Participants in the study showed a preference for interfaces in which they controlled the robot directly (teleoperation and guidance) over those providing retroactive feedback for past actions (reward and correction). Our results show that the best policy performance on most metrics was achieved using the Confidence Based Autonomy algorithm.
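Confidence Based Autonomy's central loop can be sketched briefly: act autonomously when the policy's confidence clears a threshold, otherwise request a demonstration and retrain on it. The classifier interface and threshold handling below are illustrative assumptions, not the algorithm's published implementation.

```python
def cba_step(policy, state, threshold, ask_teacher, demonstrations):
    """One Confidence Based Autonomy step (sketch; interfaces are assumed)."""
    action, confidence = policy.predict_with_confidence(state)  # assumed interface
    if confidence >= threshold:
        return action                        # confident: act autonomously
    demo_action = ask_teacher(state)         # uncertain: request a demonstration
    demonstrations.append((state, demo_action))
    policy.retrain(demonstrations)           # fold the new example into the policy
    return demo_action
```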