
Publication


Featured research published by Anca D. Dragan.


The International Journal of Robotics Research | 2013

CHOMP: Covariant Hamiltonian optimization for motion planning

Matthew Zucker; Nathan D. Ratliff; Anca D. Dragan; Mihail Pivtoraiko; Matthew Klingensmith; Christopher M. Dellin; J. Andrew Bagnell; Siddhartha S. Srinivasa

In this paper, we present CHOMP (covariant Hamiltonian optimization for motion planning), a method for trajectory optimization invariant to reparametrization. CHOMP uses functional gradient techniques to iteratively improve the quality of an initial trajectory, optimizing a functional that trades off between a smoothness and an obstacle avoidance component. CHOMP can be used to locally optimize feasible trajectories, as well as to solve motion planning queries, converging to low-cost trajectories even when initialized with infeasible ones. It uses Hamiltonian Monte Carlo to alleviate the problem of convergence to high-cost local minima (and for probabilistic completeness), and is capable of respecting hard constraints along the trajectory. We present extensive experiments with CHOMP on manipulation and locomotion tasks, using seven-degree-of-freedom manipulators and a rough-terrain quadruped robot.
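The trade-off the abstract describes can be illustrated with a toy functional-gradient step. The sketch below (all parameters invented) discretizes a 2D trajectory into waypoints and descends a cost that sums a smoothness term and an obstacle-clearance penalty; note that real CHOMP additionally preconditions the gradient with the inverse of the finite-difference metric to make the update covariant to reparametrization, which this plain Euclidean step omits.

```python
import math

# Toy trajectory from (0,0) to (1,0) with a circular obstacle at (0.5, 0).
# Weights, step size, and clearance are made up for the sketch.
START, GOAL = (0.0, 0.0), (1.0, 0.0)
OBS, CLEAR, W_OBS, STEP, N = (0.5, 0.0), 0.2, 5.0, 0.01, 9

def full(xi):
    return [START] + xi + [GOAL]

def cost(xi):
    # Smoothness: sum of squared finite differences along the trajectory.
    pts = full(xi)
    smooth = sum((pts[i+1][0]-pts[i][0])**2 + (pts[i+1][1]-pts[i][1])**2
                 for i in range(len(pts)-1))
    # Obstacle term: quadratic penalty inside the clearance radius.
    obst = sum(max(0.0, CLEAR - math.hypot(x-OBS[0], y-OBS[1]))**2
               for x, y in xi)
    return smooth + W_OBS * obst

def grad(xi):
    pts = full(xi)
    g = []
    for i in range(1, len(pts)-1):
        px, py = pts[i]
        gx = 2*(2*px - pts[i-1][0] - pts[i+1][0])
        gy = 2*(2*py - pts[i-1][1] - pts[i+1][1])
        d = math.hypot(px-OBS[0], py-OBS[1])
        if 0 < d < CLEAR:
            c = -2*W_OBS*(CLEAR - d)/d
            gx += c*(px-OBS[0])
            gy += c*(py-OBS[1])
        g.append((gx, gy))
    return g

# Initialize on the straight line, nudged slightly off the obstacle center
# so the obstacle gradient has a direction to push along.
xi = [((i+1)/(N+1), 0.001) for i in range(N)]
c0 = cost(xi)
for _ in range(500):
    xi = [(x - STEP*gx, y - STEP*gy)
          for (x, y), (gx, gy) in zip(xi, grad(xi))]
```

After the iterations the trajectory bows around the obstacle while staying smooth, i.e. the functional's value drops below that of the (infeasible) straight-line initialization.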


Human-Robot Interaction | 2013

Legibility and predictability of robot motion

Anca D. Dragan; Kenton C.T. Lee; Siddhartha S. Srinivasa

A key requirement for seamless human-robot collaboration is for the robot to make its intentions clear to its human collaborator. A collaborative robot's motion must be legible, or intent-expressive. Legibility is often described in the literature as an effect of predictable, unsurprising, or expected motion. Our central insight is that predictability and legibility are fundamentally different and often contradictory properties of motion. We develop a formalism to mathematically define and distinguish predictability and legibility of motion. We formalize the two based on inferences between trajectories and goals in opposing directions, drawing the analogy to action interpretation in psychology. We then propose mathematical models for these inferences based on optimizing cost, drawing the analogy to the principle of rational action. Our experiments validate our formalism's prediction that predictability and legibility can contradict, and provide support for our models. Our findings indicate that for robots to seamlessly collaborate with humans, they must change the way they plan their motion.
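The cost-based inference the abstract mentions can be sketched with a toy "rational action" observer: the probability of a goal given a partial trajectory falls off exponentially with how suboptimal the trajectory is for that goal. The coordinates and the Euclidean-length cost below are invented for the sketch; the point is that a trajectory exaggerated away from a competing goal raises the observer's posterior on the true goal, even though the straight (predictable) motion is cheaper.

```python
import math

def dist(a, b):
    return math.hypot(a[0]-b[0], a[1]-b[1])

def goal_posterior(start, point, goals):
    # P(G | passing through `point`) ∝ exp(-(cost via point - optimal cost)),
    # a Boltzmann observer with cost = Euclidean path length.
    scores = [math.exp(-(dist(start, point) + dist(point, g) - dist(start, g)))
              for g in goals]
    z = sum(scores)
    return [s / z for s in scores]

start = (0.0, 0.0)
goals = [(1.0, 1.0), (1.0, -1.0)]   # true goal first, competing goal second

# Probability assigned to the true goal when the motion passes through a
# point on the optimal (predictable) path vs. a point exaggerated upward
# (legible), away from the competing goal.
predictable = goal_posterior(start, (0.5, 0.5), goals)[0]
legible = goal_posterior(start, (0.3, 0.8), goals)[0]
```

Here `legible > predictable`: the costlier, exaggerated motion communicates the goal better, which is exactly the tension the paper formalizes.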


Human-Robot Interaction | 2013

Toward seamless human-robot handovers

Kyle Strabala; Min Kyung Lee; Anca D. Dragan; Jodi L. Forlizzi; Siddhartha S. Srinivasa; Maya Cakmak; Vincenzo Micelli

A handover is a complex collaboration, where actors coordinate in time and space to transfer control of an object. This coordination comprises two processes: the physical process of moving to get close enough to transfer the object, and the cognitive process of exchanging information to guide the transfer. Despite this complexity, we humans are capable of performing handovers seamlessly in a wide variety of situations, even when unexpected. This suggests a common procedure that guides all handover interactions. Our goal is to codify that procedure. To that end, we first study how people hand over objects to each other in order to understand their coordination process and the signals and cues that they use and observe with their partners. Based on these studies, we propose a coordination structure for human-robot handovers that considers the physical and social-cognitive aspects of the interaction separately. This handover structure describes how people approach, reach out their hands, and transfer objects while simultaneously coordinating the what, when, and where of handovers: to agree that the handover will happen (and with what object), to establish the timing of the handover, and to decide the configuration at which the handover will occur. We experimentally evaluate human-robot handover behaviors that exploit this structure and offer design implications for seamless human-robot handover interactions.


The International Journal of Robotics Research | 2013

A policy-blending formalism for shared control

Anca D. Dragan; Siddhartha S. Srinivasa

In shared control teleoperation, the robot assists the user in accomplishing the desired task, making teleoperation easier and more seamless. Rather than simply executing the user’s input, which is hindered by the inadequacies of the interface, the robot attempts to predict the user’s intent, and assists in accomplishing it. In this work, we are interested in the scientific underpinnings of assistance: we propose an intuitive formalism that captures assistance as policy blending, illustrate how some of the existing techniques for shared control instantiate it, and provide a principled analysis of its main components: prediction of user intent and its arbitration with the user input. We define the prediction problem, with foundations in inverse reinforcement learning, discuss simplifying assumptions that make it tractable, and test these on data from users teleoperating a robotic manipulator. We define the arbitration problem from a control-theoretic perspective, and turn our attention to what users consider good arbitration. We conduct a user study that analyzes the effect of different factors on the performance of assistance, indicating that arbitration should be contextual: it should depend on the robot’s confidence in itself and in the user, and even the particulars of the user. Based on the study, we discuss challenges and opportunities that a robot sharing the control with the user might face: adaptation to the context and the user, legibility of behavior, and the closed loop between prediction and user behavior.
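The arbitration component described above can be sketched in a few lines. This is a minimal linear-blending illustration, not the paper's full formalism: the arbitration function and the idea that it should scale with the robot's confidence come from the abstract, while the specific linear form and the example vectors are assumptions of the sketch.

```python
def blend(user_input, predicted_action, confidence):
    # Arbitrated command: alpha interpolates between the user's raw input
    # and the robot's prediction-driven action; a confident robot assists
    # more, an unconfident one passes the input through.
    alpha = min(max(confidence, 0.0), 1.0)
    return [(1 - alpha)*u + alpha*a
            for u, a in zip(user_input, predicted_action)]

# Low confidence: the user's command passes through unchanged.
assert blend([1.0, 0.0], [0.0, 1.0], 0.0) == [1.0, 0.0]
# High confidence: the robot takes over toward the predicted goal.
assert blend([1.0, 0.0], [0.0, 1.0], 1.0) == [0.0, 1.0]
```

The study's conclusion that arbitration should be contextual corresponds to making `confidence` a function of the situation and the user rather than a constant.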


Robotics: Science and Systems | 2012

Formalizing Assistive Teleoperation

Anca D. Dragan; Siddhartha S. Srinivasa

In assistive teleoperation, the robot helps the user accomplish the desired task, making teleoperation easier and more seamless. Rather than simply executing the user's input, which is hindered by the inadequacies of the interface, the robot attempts to predict the user's intent, and assists in accomplishing it. In this work, we are interested in the scientific underpinnings of assistance: we formalize assistance under the general framework of policy blending, show how methods from previous work instantiate this formalism, and provide a principled analysis of its main components: prediction of user intent and its arbitration with the user input. We define the prediction problem, with foundations in Inverse Reinforcement Learning, discuss simplifying assumptions that make it tractable, and test these on data from users teleoperating a robotic manipulator under various circumstances. We propose that arbitration should be moderated by the confidence in the prediction. Our user study analyzes the effect of the arbitration type, together with the prediction correctness and the task difficulty, on the performance of assistance and the preferences of users.


Proceedings of the IEEE | 2012

Herb 2.0: Lessons Learned From Developing a Mobile Manipulator for the Home

Siddhartha S. Srinivasa; Dmitry Berenson; Maya Cakmak; Alvaro Collet; Mehmet Remzi Dogar; Anca D. Dragan; Ross A. Knepper; Tim Niemueller; Kyle Strabala; M. Vande Weghe; Julius Ziegler

We present the hardware design, software architecture, and core algorithms of Herb 2.0, a bimanual mobile manipulator developed at the Personal Robotics Lab at Carnegie Mellon University, Pittsburgh, PA. We have developed Herb 2.0 to perform useful tasks for and with people in human environments. We exploit two key paradigms in human environments: that they have structure that a robot can learn, adapt and exploit, and that they demand general-purpose capability in robotic systems. In this paper, we reveal some of the structure present in everyday environments that we have been able to harness for manipulation and interaction, comment on the particular challenges of working in human spaces, and describe some of the lessons we learned from extensively testing our integrated platform in kitchen and office environments.


International Conference on Robotics and Automation | 2011

Manipulation planning with goal sets using constrained trajectory optimization

Anca D. Dragan; Nathan D. Ratliff; Siddhartha S. Srinivasa

Goal sets are omnipresent in manipulation: picking up objects, placing them on counters or in bins, handing them off — all of these tasks encompass continuous sets of goals. This paper describes how to design optimal trajectories that exploit goal sets. We extend CHOMP (Covariant Hamiltonian Optimization for Motion Planning), a recent trajectory optimizer that has proven effective on high-dimensional problems, to handle trajectory-wide constraints, and relate the solution to the intuition of taking unconstrained steps and subsequently projecting them onto the constraints. We then show how this projection simplifies for goal sets (i.e. constraints that affect only the end-point). Finally, we present experiments on a personal robotics platform that show the importance of exploiting goal sets in trajectory optimization for day-to-day manipulation tasks.
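The "take an unconstrained step, then project onto the constraint" intuition can be made concrete for the endpoint-only case. The sketch below shows just the geometric projection of a trajectory endpoint onto a goal set, here a hypothetical line segment of valid placements; the paper's method projects the whole update through the trajectory metric, which this omits.

```python
def project_to_segment(p, a, b):
    # Closest point on segment a->b to point p: clamp the scalar
    # projection parameter t to [0, 1].
    ax, ay = a
    bx, by = b
    px, py = p
    t = ((px - ax)*(bx - ax) + (py - ay)*(by - ay)) \
        / ((bx - ax)**2 + (by - ay)**2)
    t = min(max(t, 0.0), 1.0)
    return (ax + t*(bx - ax), ay + t*(by - ay))

# Hypothetical goal set: any endpoint on the segment from (0,1) to (1,1).
goal_set = ((0.0, 1.0), (1.0, 1.0))
# Endpoint after an unconstrained optimizer step, off the goal set.
endpoint = (0.3, 1.4)
projected = project_to_segment(endpoint, *goal_set)
```

The unconstrained step is free to move the endpoint anywhere; the projection then returns it to the nearest point of the goal set, so the optimizer can exploit the entire set rather than a single fixed goal.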


Robotics: Science and Systems | 2016

Planning for Autonomous Cars that Leverage Effects on Human Actions

Dorsa Sadigh; Shankar Sastry; Sanjit A. Seshia; Anca D. Dragan

Traditionally, autonomous cars make predictions about other drivers’ future trajectories, and plan to stay out of their way. This tends to result in defensive and opaque behaviors. Our key insight is that an autonomous car’s actions will actually affect what other cars will do in response, whether the car is aware of it or not. Our thesis is that we can leverage these responses to plan more efficient and communicative behaviors. We model the interaction between an autonomous car and a human driver as a dynamical system, in which the robot’s actions have immediate consequences on the state of the car, but also on human actions. We model these consequences by approximating the human as an optimal planner, with a reward function that we acquire through Inverse Reinforcement Learning. When the robot plans with this reward function in this dynamical system, it comes up with actions that purposefully change human state: it merges in front of a human to get them to slow down or to reach its own goal faster; it blocks two lanes to get them to switch to a third lane; or it backs up slightly at an intersection to get them to proceed first. Such behaviors arise from the optimization, without relying on hand-coded signaling strategies and without ever explicitly modeling communication. Our user study results suggest that the robot is indeed capable of eliciting desired changes in human state by planning using this dynamical system.
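The nesting the abstract describes, where the robot optimizes its reward through the human's predicted response rather than around a fixed prediction, can be shown with a toy discrete game. The action names and reward numbers below are invented for the sketch (the paper uses continuous dynamics and a human reward learned via Inverse Reinforcement Learning), but the structure is the same: model the human as a best-responder, then plan against that response.

```python
ROBOT_ACTIONS = ["stay", "merge"]
HUMAN_ACTIONS = ["keep", "slow"]

def human_reward(r, h):
    # Hypothetical: the human dislikes slowing a little, collisions a lot.
    collision = -10.0 if (r == "merge" and h == "keep") else 0.0
    return collision + (-1.0 if h == "slow" else 0.0)

def robot_reward(r, h):
    # Hypothetical: merging helps the robot reach its goal, unless it
    # causes a collision.
    collision = -10.0 if (r == "merge" and h == "keep") else 0.0
    return collision + (2.0 if r == "merge" else 0.0)

def best_response(r):
    # Approximate the human as an optimal planner reacting to the robot.
    return max(HUMAN_ACTIONS, key=lambda h: human_reward(r, h))

def plan():
    # The robot evaluates each action under the human's predicted
    # response, not under a fixed human trajectory.
    return max(ROBOT_ACTIONS, key=lambda r: robot_reward(r, best_response(r)))
```

Planning against a fixed "keep speed" prediction would make merging look like a collision and the robot would stay; planning through the response model, the robot merges, anticipating that the human will slow down, which is the kind of behavior the user study examines.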


Human-Robot Interaction | 2014

Deliberate delays during robot-to-human handovers improve compliance with gaze communication

Henny Admoni; Anca D. Dragan; Siddhartha S. Srinivasa; Brian Scassellati

As assistive robots become popular in factories and homes, there is greater need for natural, multi-channel communication during collaborative manipulation tasks. Non-verbal communication such as eye gaze can provide information without overloading more taxing channels like speech. However, certain collaborative tasks may draw attention away from these subtle communication modalities. For instance, robot-to-human handovers are primarily manual tasks, and human attention is therefore drawn to robot hands rather than to robot faces during handovers. In this paper, we show that a simple manipulation of a robot's handover behavior can significantly increase both awareness of the robot's eye gaze and compliance with that gaze. When eye gaze communication occurs during the robot's release of an object, delaying object release until the gaze is finished draws attention back to the robot's head, which increases conscious perception of the robot's communication. Furthermore, the handover delay increases people's compliance with the robot's communication over a non-delayed handover, even when compliance results in counterintuitive behavior.


Robot and Human Interactive Communication | 2012

Learning the communication of intent prior to physical collaboration

Kyle Strabala; Min Kyung Lee; Anca D. Dragan; Jodi Forlizzi; Siddhartha S. Srinivasa

When performing physical collaboration tasks, like packing a picnic basket together, humans communicate strongly and often subtly via multiple channels like gaze, speech, gestures, movement and posture. Understanding and participating in this communication enables us to predict a physical action rather than react to it, producing seamless collaboration. In this paper, we automatically learn key discriminative features that predict the intent to hand over an object using machine learning techniques. We train and test our algorithm on multi-channel vision and pose data collected from an extensive user study in an instrumented kitchen. Our algorithm outputs a tree of possibilities, automatically encoding various types of pre-handover communication. A surprising outcome is that mutual gaze and inter-personal distance, often cited as being key for interaction, were not key discriminative features. Finally, we discuss the immediate and future impact of this work for human-robot interaction.

Collaboration

Top co-authors of Anca D. Dragan:

- Shankar Sastry, University of California
- Jaime F. Fisac, University of California
- Pieter Abbeel, University of California
- Dorsa Sadigh, University of California
- Elizabeth Cha, Carnegie Mellon University
- Ken Goldberg, University of California
- Michael Laskey, University of California