Maya Cakmak
University of Washington
Publications
Featured research published by Maya Cakmak.
human-robot interaction | 2012
Baris Akgun; Maya Cakmak; Jae Wook Yoo; Andrea Lockerd Thomaz
Kinesthetic teaching is an approach to providing demonstrations to a robot in Learning from Demonstration whereby a human physically guides a robot to perform a skill. In the common usage of kinesthetic teaching, the robot's trajectory during a demonstration is recorded from start to end. In this paper we consider an alternative, keyframe demonstrations, in which the human provides a sparse set of consecutive keyframes that can be connected to perform the skill. We present a user study (n = 34) comparing the two approaches and highlighting their complementary nature. The study also tests and shows the potential benefits of iterative and adaptive versions of keyframe demonstrations. Finally, we introduce a hybrid method that combines trajectories and keyframes in a single demonstration.
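To make the contrast concrete, here is a minimal sketch, not taken from the paper's implementation, of the two demonstration types for a single hypothetical degree of freedom: a trajectory demonstration records the full path, while a keyframe demonstration keeps only a few poses that are later connected (here by a cubic spline) to reproduce the motion.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical 1-DOF skill: a trajectory demonstration densely records the
# path, while a keyframe demonstration keeps only a sparse set of poses.
t_dense = np.linspace(0.0, 1.0, 200)
trajectory_demo = np.sin(np.pi * t_dense)        # full recorded trajectory

keyframe_times = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
keyframe_demo = np.sin(np.pi * keyframe_times)   # sparse keyframes only

# Reproduce the skill from the keyframes by connecting them with a spline.
reproduced = CubicSpline(keyframe_times, keyframe_demo)(t_dense)
print("max deviation from the dense trajectory:",
      float(np.max(np.abs(reproduced - trajectory_demo))))
```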
human-robot interaction | 2012
Maya Cakmak; Andrea Lockerd Thomaz
Programming new skills on a robot should take minimal time and effort. One approach to achieving this goal is to allow the robot to ask questions. This idea, called Active Learning, has recently attracted a lot of attention in the robotics community. However, it has not been explored from a human-robot interaction perspective. In this paper, we identify three types of questions (label, demonstration and feature queries) and discuss how a robot can use these while learning new skills. Then, we present an experiment on human question asking which characterizes the extent to which humans use these question types. Finally, we evaluate the three question types within a human-robot teaching interaction. We investigate the ease with which different types of questions are answered and whether there is a general preference for one type of question over another. Based on our findings from both experiments we provide guidelines for designing question-asking behaviors on a robot learner.
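As a rough illustration of one of these question types, the sketch below shows a label query chosen by uncertainty sampling; the classifier and synthetic data are stand-ins for the paper's skill-learning setup, not its actual implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative label query: ask the teacher to label the candidate the
# current model is least certain about (probability closest to 0.5).
rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(20, 2))
y_labeled = (X_labeled[:, 0] > 0).astype(int)
X_pool = rng.normal(size=(100, 2))               # unlabeled candidates

clf = LogisticRegression().fit(X_labeled, y_labeled)
p = clf.predict_proba(X_pool)[:, 1]
query_idx = int(np.argmin(np.abs(p - 0.5)))      # most uncertain example
print("ask the teacher to label candidate", query_idx)
```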
human-robot interaction | 2013
Kyle Strabala; Min Kyung Lee; Anca D. Dragan; Jodi L. Forlizzi; Siddhartha S. Srinivasa; Maya Cakmak; Vincenzo Micelli
A handover is a complex collaboration, where actors coordinate in time and space to transfer control of an object. This coordination comprises two processes: the physical process of moving to get close enough to transfer the object, and the cognitive process of exchanging information to guide the transfer. Despite this complexity, we humans are capable of performing handovers seamlessly in a wide variety of situations, even when unexpected. This suggests a common procedure that guides all handover interactions. Our goal is to codify that procedure. To that end, we first study how people hand over objects to each other in order to understand their coordination process and the signals and cues that they use and observe with their partners. Based on these studies, we propose a coordination structure for human-robot handovers that considers the physical and social-cognitive aspects of the interaction separately. This handover structure describes how people approach, reach out their hands, and transfer objects while simultaneously coordinating the what, when, and where of handovers: to agree that the handover will happen (and with what object), to establish the timing of the handover, and to decide the configuration at which the handover will occur. We experimentally evaluate human-robot handover behaviors that exploit this structure and offer design implications for seamless human-robot handover interactions.
IEEE Transactions on Autonomous Mental Development | 2010
Maya Cakmak; Crystal Chao; Andrea Lockerd Thomaz
This paper addresses some of the problems that arise when applying active learning to the context of human-robot interaction (HRI). Active learning is an attractive strategy for robot learners because it has the potential to improve the accuracy and the speed of learning, but it can cause issues from an interaction perspective. Here we present three interaction modes that enable a robot to use active learning queries. The three modes differ in when they make queries: the first makes a query every turn, the second makes a query only under certain conditions, and the third makes a query only when explicitly requested by the teacher. We conduct an experiment in which 24 human subjects teach concepts to our upper-torso humanoid robot, Simon, in each interaction mode, and we compare these modes against a baseline mode using only passive supervised learning. We report results from both a learning and an interaction perspective. The data show that the three modes using active learning are preferable to the mode using passive supervised learning both in terms of performance and human subject preference, but each mode has advantages and disadvantages. Based on our results, we lay out several guidelines that can inform the design of future robotic systems that use active learning in an HRI setting.
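The following sketch, which only illustrates the idea and is not the Simon system's code, contrasts the three query-timing modes with the passive baseline; the uncertainty threshold used by the conditional mode is an assumed placeholder.

```python
# Query-timing logic for the three active learning modes plus the passive
# baseline. The uncertainty threshold is an illustrative placeholder.
def should_query(mode, uncertainty, teacher_requested, threshold=0.5):
    if mode == "every_turn":      # query on every turn
        return True
    if mode == "conditional":     # query only when sufficiently uncertain
        return uncertainty > threshold
    if mode == "on_request":      # query only when the teacher asks for one
        return teacher_requested
    return False                  # passive supervised baseline: never query

for mode in ("every_turn", "conditional", "on_request", "passive"):
    print(mode, should_query(mode, uncertainty=0.7, teacher_requested=False))
```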
International Journal of Social Robotics | 2012
Baris Akgun; Maya Cakmak; Karl Jiang; Andrea Lockerd Thomaz
We present a framework for learning skills from novel types of demonstrations that have been shown to be desirable from a Human-Robot Interaction perspective. Our approach, Keyframe-based Learning from Demonstration (KLfD), takes demonstrations that consist of keyframes: a sparse set of points in the state space that produces the intended skill when visited in sequence. Conventional trajectory demonstrations, or a hybrid of the two, are also handled by KLfD through a conversion to keyframes. Our method produces a skill model that consists of an ordered set of keyframe clusters, which we call Sequential Pose Distributions (SPD). The skill is reproduced by splining between clusters. We present results from two domains: mouse gestures in 2D and scooping, pouring and placing skills on a humanoid robot. KLfD has performance similar to existing LfD techniques when applied to conventional trajectory demonstrations. Additionally, we demonstrate that KLfD may be preferable when the demonstration type is suited to the skill.
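A rough sketch of this pipeline, under simplifying assumptions (2D points, k-means cluster means standing in for full Sequential Pose Distributions), is given below; it is meant to convey the cluster-then-spline idea, not the paper's exact method.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.interpolate import CubicSpline

# Keyframes from several noisy 2D demonstrations of the same skill.
rng = np.random.default_rng(1)
ideal = np.array([[0, 0], [1, 2], [2, 3], [3, 2], [4, 0]], dtype=float)
keyframes = np.vstack([ideal + rng.normal(scale=0.1, size=ideal.shape)
                       for _ in range(5)])

# Group keyframes into clusters (stand-ins for Sequential Pose Distributions)
# and order them along the skill; here progress is approximated by x.
centers = KMeans(n_clusters=len(ideal), n_init=10,
                 random_state=0).fit(keyframes).cluster_centers_
ordered = centers[np.argsort(centers[:, 0])]

# Reproduce the skill by splining between the ordered clusters.
s = np.linspace(0.0, 1.0, len(ordered))
s_dense = np.linspace(0.0, 1.0, 50)
reproduction = np.column_stack([CubicSpline(s, ordered[:, d])(s_dense)
                                for d in range(2)])
print("reproduced path shape:", reproduction.shape)
```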
Proceedings of the IEEE | 2012
Siddhartha S. Srinivasa; Dmitry Berenson; Maya Cakmak; Alvaro Collet; Mehmet Remzi Dogar; Anca D. Dragan; Ross A. Knepper; Tim Niemueller; Kyle Strabala; M. Vande Weghe; Julius Ziegler
We present the hardware design, software architecture, and core algorithms of Herb 2.0, a bimanual mobile manipulator developed at the Personal Robotics Lab at Carnegie Mellon University, Pittsburgh, PA. We have developed Herb 2.0 to perform useful tasks for and with people in human environments. We exploit two key paradigms in human environments: that they have structure that a robot can learn, adapt and exploit, and that they demand general-purpose capability in robotic systems. In this paper, we reveal some of the structure present in everyday environments that we have been able to harness for manipulation and interaction, comment on the particular challenges of working in human spaces, and describe some of the lessons we learned from extensively testing our integrated platform in kitchen and office environments.
intelligent robots and systems | 2011
Maya Cakmak; Siddhartha S. Srinivasa; Min Kyung Lee; Jodi Forlizzi; Sara Kiesler
Handing over objects to humans is an essential capability for assistive robots. While there are countless ways to hand over an object, robots should be able to choose the one that is best for the human. In this paper we focus on choosing the robot and object configuration at which the transfer of the object occurs, i.e. the hand-over configuration. We advocate the incorporation of user preferences in choosing hand-over configurations. We present a user study in which we collect data on human preferences and a human-robot interaction experiment in which we compare hand-over configurations learned from human examples against configurations planned using a kinematic model of the human. We find that the learned configurations are preferred in terms of several criteria; however, planned configurations provide better reachability. Additionally, we find that humans prefer hand-overs with default orientations of objects, and we identify several latent variables about the robot's arm that capture significant human preferences. These findings point towards planners that can generate not only optimal but also preferable hand-over configurations for novel objects.
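The toy sketch below contrasts the two strategies compared in the study: a "learned" configuration taken from human-provided examples versus a "planned" one chosen to maximize a reachability score. The configurations and the scoring function are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hand-over configurations summarized as (x, y, z) positions, illustrative only.
human_examples = rng.normal(loc=[0.6, 0.2, 1.1], scale=0.05, size=(10, 3))
learned_config = human_examples.mean(axis=0)     # mimic human preferences

def reachability(cfg):                           # placeholder kinematic score
    return -np.linalg.norm(cfg - np.array([0.8, 0.0, 1.0]))

candidates = rng.uniform([0.3, -0.3, 0.8], [0.9, 0.5, 1.4], size=(200, 3))
planned_config = max(candidates, key=reachability)

print("learned :", np.round(learned_config, 3))
print("planned :", np.round(planned_config, 3))
```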
human-robot interaction | 2011
Maya Cakmak; Siddhartha S. Srinivasa; Min Kyung Lee; Sara Kiesler; Jodi Forlizzi
For robots to become integrated into daily tasks assisting humans, robot-human interactions will need to reach a level of fluency close to that of human-human interactions. In this paper we address the fluency of robot-human hand-overs. From an observational study with our robot HERB, we identify the key problems with a baseline hand-over action. We find that the failure to convey the intention of handing over causes delays in the transfer, while the lack of an intuitive signal indicating the timing of the hand-over causes early, unsuccessful attempts to take the object. We propose to address these problems with the use of spatial contrast, in the form of distinct hand-over poses, and temporal contrast, in the form of unambiguous transitions to the hand-over pose. We conduct a survey to identify distinct hand-over poses and to determine the variables of the pose with the most communicative potential for the intent of handing over. We present an experiment that analyzes the effect of the two types of contrast on the fluency of hand-overs. We find that temporal contrast is particularly useful in improving fluency by eliminating early attempts by the human.
human-robot interaction | 2009
Andrea Lockerd Thomaz; Maya Cakmak
A general learning task for a robot in a new environment is to learn about objects and what actions/effects they afford. To approach this, we look at ways that a human partner can intuitively help the robot learn, an approach we call Socially Guided Machine Learning. We present experiments conducted with our robot, Junior, and make six observations characterizing how people approached teaching about objects. We show that Junior successfully used transparency to mitigate errors. Finally, we present the impact of “social” versus “non-social” data sets when training SVM classifiers.
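A hedged sketch of that final comparison is shown below: the same SVM is trained on a "social" and a "non-social" training set and evaluated on held-out data. The synthetic features and the assumption that the social set carries less label noise are illustrative, not Junior's actual data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def make_set(n, label_noise):
    # Synthetic stand-in for object features; labels flipped with some noise.
    X = rng.normal(size=(n, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    flip = rng.random(n) < label_noise
    y[flip] = 1 - y[flip]
    return X, y

X_social, y_social = make_set(100, label_noise=0.05)   # assumed cleaner data
X_nonsoc, y_nonsoc = make_set(100, label_noise=0.25)
X_test, y_test = make_set(400, label_noise=0.0)

for name, X, y in [("social", X_social, y_social),
                   ("non-social", X_nonsoc, y_nonsoc)]:
    acc = SVC(kernel="rbf").fit(X, y).score(X_test, y_test)
    print(f"{name:10s} test accuracy: {acc:.2f}")
```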
international conference on robotics and automation | 2007
Emre Ugur; Mehmet Remzi Dogar; Maya Cakmak; Erol Sahin
We are interested in how the concept of affordances can affect our view of autonomous robot control, and how the results obtained from autonomous robotics can be reflected back upon the discussion and studies of the concept of affordances. In this paper, we study how a mobile robot, equipped with a 3D laser scanner, can learn to perceive the traversability affordance and use it to wander in a room filled with spheres, cylinders and boxes. The results show that after learning, the robot can wander around avoiding contact with non-traversable objects (i.e. boxes, upright cylinders, or lying cylinders in certain orientations), while moving over traversable objects (such as spheres, and lying cylinders in a rollable orientation with respect to the robot), rolling them out of its way. We show that for each action approximately 1% of the perceptual features are relevant for determining whether the action is afforded, and that these relevant features are located in certain regions of the range image. The experiments are conducted both in a physics-based simulator and on a real robot.
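The sketch below gives a simplified picture of such a learning setup: each sample is a feature vector extracted from a range image, labeled traversable or not, and the weights of a linear classifier hint at which few features actually matter. The synthetic data and the choice of a linear SVM are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
n_samples, n_features = 500, 100

# Synthetic range-image features; only a handful are assumed relevant
# (e.g. local slope in the region the robot would drive over).
X = rng.normal(size=(n_samples, n_features))
relevant = [3, 17]
y = (X[:, relevant].sum(axis=1) > 0).astype(int)   # traversable or not

clf = LinearSVC(C=1.0, max_iter=5000).fit(X, y)
top = np.argsort(np.abs(clf.coef_[0]))[::-1][:5]
print("features with the largest weights:", top)
```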