Alex X. Lee
University of California, Berkeley
Publication
Featured research published by Alex X. Lee.
Robotics: Science and Systems | 2013
John Schulman; Jonathan Ho; Alex X. Lee; Ibrahim Awwal; Henry Bradlow; Pieter Abbeel
We present a novel approach for incorporating collision avoidance into trajectory optimization as a method of solving robotic motion planning problems. At the core of our approach are (i) a sequential convex optimization procedure, which penalizes collisions with a hinge loss and increases the penalty coefficients in an outer loop as necessary, and (ii) an efficient formulation of the no-collisions constraint that directly considers continuous-time safety and enables the algorithm to reliably solve motion planning problems, including problems involving thin and complex obstacles. We benchmarked our algorithm against several other motion planning algorithms, solving a suite of 7-degree-of-freedom (DOF) arm-planning problems and 18-DOF full-body planning problems. We compared against sampling-based planners from OMPL, as well as CHOMP, a leading approach for trajectory optimization. Our algorithm was faster than the alternatives, solved more problems, and yielded higher-quality paths. Experimental evaluation on the following additional problem types also confirmed the speed and effectiveness of our approach: (i) planning foot placements with 34 degrees of freedom (28 joints + 6-DOF pose) for the Atlas humanoid robot as it maintains static stability and negotiates environmental constraints; (ii) industrial box picking; and (iii) real-world motion planning for the PR2 that requires considering all degrees of freedom simultaneously.
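The hinge-penalty scheme described above can be sketched in toy form. This is not the authors' implementation: the 2-D waypoints, circular obstacle, squared hinge (for a smooth gradient), and all constants are invented for illustration; only the structure (inner optimization plus an outer loop that raises the penalty coefficient) follows the abstract.

```python
import numpy as np

def signed_distance(p, center, radius):
    """Signed distance from point p to a circular obstacle (negative inside)."""
    return np.linalg.norm(p - center) - radius

def plan(start, goal, center=(0.5, 0.1), radius=0.2, d_safe=0.05,
         n_wp=12, mu0=2.0, outer_iters=6, inner_iters=300, step=0.01):
    center = np.asarray(center)
    traj = np.linspace(start, goal, n_wp)  # naive straight-line initialization
    mu = mu0
    for _ in range(outer_iters):
        for _ in range(inner_iters):
            grad = np.zeros_like(traj)
            # Smoothness term: sum_t ||x_{t+1} - x_t||^2.
            grad[1:-1] += 2 * (2 * traj[1:-1] - traj[:-2] - traj[2:])
            # Squared-hinge collision penalty: mu * max(d_safe - sd, 0)^2.
            for t in range(1, n_wp - 1):
                sd = signed_distance(traj[t], center, radius)
                if sd < d_safe:
                    away = traj[t] - center
                    away /= np.linalg.norm(away) + 1e-9
                    grad[t] -= 2 * mu * (d_safe - sd) * away
            traj[1:-1] -= step * grad[1:-1]
        if all(signed_distance(p, center, radius) >= d_safe for p in traj):
            break
        mu *= 2.0  # outer loop: raise the penalty coefficient and retry
    return traj
```

If any waypoint still violates the safety margin after the inner optimization, the coefficient `mu` is doubled and the problem is re-solved, mirroring the outer-loop strategy the abstract describes.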
The International Journal of Robotics Research | 2014
John Schulman; Yan Duan; Jonathan Ho; Alex X. Lee; Ibrahim Awwal; Henry Bradlow; Jia Pan; Sachin Patil; Ken Goldberg; Pieter Abbeel
We present a new optimization-based approach for robotic motion planning among obstacles. Like CHOMP (Covariant Hamiltonian Optimization for Motion Planning), our algorithm can be used to find collision-free trajectories from naïve, straight-line initializations that might be in collision. At the core of our approach are (a) a sequential convex optimization procedure, which penalizes collisions with a hinge loss and increases the penalty coefficients in an outer loop as necessary, and (b) an efficient formulation of the no-collisions constraint that directly considers continuous-time safety. Our algorithm is implemented in a software package called TrajOpt. We report results from a series of experiments comparing TrajOpt with CHOMP and randomized planners from OMPL, with regard to planning time and path quality. We consider motion planning for 7-DOF robot arms, 18-DOF full-body robots, statically stable walking motion for the 34-DOF Atlas humanoid robot, and physical experiments with the 18-DOF PR2. We also apply TrajOpt to plan curvature-constrained steerable needle trajectories in the SE(3) configuration space and multiple non-intersecting curved channels within 3D-printed implants for intracavitary brachytherapy. Details, videos, and source code are freely available at: http://rll.berkeley.edu/trajopt/ijrr.
International Conference on Robotics and Automation | 2013
John Schulman; Alex X. Lee; Jonathan Ho; Pieter Abbeel
We introduce an algorithm for tracking deformable objects from a sequence of point clouds. The proposed tracking algorithm is based on a probabilistic generative model that incorporates observations of the point cloud and the physical properties of the tracked object and its environment. We propose a modified expectation maximization algorithm to perform maximum a posteriori estimation to update the state estimate at each time step. Our modification makes it practical to perform the inference through calls to a physics simulation engine. This is significant because (i) it allows for the use of highly optimized physics simulation engines for the core computations of our tracking algorithm, and (ii) it makes it possible to naturally, and efficiently, account for physical constraints imposed by collisions, grasping actions, and material properties in the observation updates. Even in the presence of the relatively large occlusions that occur during manipulation tasks, our algorithm is able to robustly track a variety of types of deformable objects, including ones that are one-dimensional, such as ropes; two-dimensional, such as cloth; and three-dimensional, such as sponges. Our implementation can track these objects in real time.
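The expectation-maximization idea above can be illustrated with a minimal soft-correspondence step. This is a hypothetical simplification: the paper's M-step runs through a physics simulation engine that enforces material and collision constraints, which is replaced here by a plain weighted average; names and constants are invented.

```python
import numpy as np

def em_track_step(nodes, cloud, sigma=0.1, step=0.5):
    """One E/M iteration: soft-assign observed cloud points to model nodes,
    then move each node toward the weighted mean of its assigned points."""
    # E-step: responsibilities r[i, j] ∝ exp(-||cloud_i - node_j||^2 / (2 σ^2))
    d2 = ((cloud[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)
    r = np.exp(-d2 / (2 * sigma ** 2))
    r /= r.sum(axis=1, keepdims=True) + 1e-12
    # M-step: weighted target per node; the paper would instead drive a
    # physics engine toward these targets subject to physical constraints.
    w = r.sum(axis=0)
    targets = (r.T @ cloud) / (w[:, None] + 1e-12)
    return nodes + step * (targets - nodes)
```

Iterating this step pulls a coarse model of the tracked object toward the observed point cloud while the soft assignments keep it robust to ambiguous correspondences.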
International Conference on Robotics and Automation | 2014
Ben Kehoe; Gregory Kahn; Jeffrey Mahler; Jonathan Kim; Alex X. Lee; Anna Lee; Keisuke Nakagawa; Sachin Patil; W. Douglas Boyd; Pieter Abbeel; Ken Goldberg
Autonomous robot execution of surgical sub-tasks has the potential to reduce surgeon fatigue and facilitate supervised tele-surgery. This paper considers the sub-task of surgical debridement: removing dead or damaged tissue fragments to allow the remaining healthy tissue to heal. We present an autonomous multilateral surgical debridement system using the Raven, an open-architecture surgical robot with two cable-driven 7 DOF arms. Our system combines stereo vision for 3D perception with trajopt, an optimization-based motion planner, and model predictive control (MPC). Laboratory experiments involving sensing, grasping, and removal of 120 fragments suggest that an autonomous surgical robot can achieve robustness comparable to human performance. Our robot system demonstrated the advantage of multilateral systems, as the autonomous execution was 1.5× faster with two arms than with one; however, it was two to three times slower than a human. Execution speed could be improved with better state estimation that would allow more travel between MPC steps and fewer MPC replanning cycles. The three primary contributions of this paper are: (1) introducing debridement as a sub-task of interest for surgical robotics, (2) demonstrating the first reliable autonomous robot performance of a surgical sub-task using the Raven, and (3) reporting experiments that highlight the importance of accurate state estimation for future research. Further information including code, photos, and video is available at: http://rll.berkeley.edu/raven.
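The abstract's point about state estimation and MPC step length can be illustrated with a toy replanning loop (hypothetical names and noise model, not the Raven system's controller): noisier estimates would force a smaller travel cap per step, and a smaller cap means more replanning cycles to reach a target.

```python
import numpy as np

def mpc_reach(target, start, max_travel, noise, tol=0.02, seed=0, max_steps=400):
    """Replan toward `target` from a noisy state estimate, moving at most
    `max_travel` per MPC step; returns the number of replanning cycles."""
    rng = np.random.default_rng(seed)
    x = np.asarray(start, dtype=float)
    target = np.asarray(target, dtype=float)
    for steps in range(max_steps):
        estimate = x + rng.normal(0.0, noise, size=x.shape)  # imperfect estimate
        error = target - estimate
        if np.linalg.norm(error) < tol:
            return steps
        # Plan straight at the goal, but cap travel per replanning cycle.
        x = x + error * min(1.0, max_travel / (np.linalg.norm(error) + 1e-12))
    return max_steps
```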
International Conference on Robotics and Automation | 2015
Alex X. Lee; Henry Lu; Abhishek Gupta; Sergey Levine; Pieter Abbeel
Manipulation of deformable objects often requires a robot to apply specific forces to bring the object into the desired configuration. For instance, tightening a knot requires pulling on the ends, flattening an article of clothing requires smoothing out wrinkles, and erasing a whiteboard requires applying downward pressure. We present a method for learning force-based manipulation skills from demonstrations. Our approach uses non-rigid registration to compute a warping function that transforms both the end-effector poses and forces in each demonstration into the current scene, based on the configuration of the object. Our method then uses the variation between the demonstrations to extract a single trajectory, along with time-varying feedback gains that determine how much to match poses or forces. This results in a learned variable-impedance control strategy that trades off force and position errors, providing for the right level of compliance that applies the necessary forces at each stage of the motion. We evaluate our approach by tying knots in rope, flattening towels, and erasing a whiteboard.
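The gain-scheduling intuition above can be sketched in a toy scalar-axis form (invented names and scales; the paper's actual gains come from its registration-based pipeline): where demonstrations agree on position, track position stiffly; where they agree on force, weight the force error instead.

```python
import numpy as np

def impedance_gains(demo_poses, demo_forces, eps=1e-3):
    """demo_* : (n_demos, T) arrays for one scalar axis. Returns per-timestep
    position/force gains, each inversely related to cross-demo variance."""
    kp = 1.0 / (demo_poses.var(axis=0) + eps)
    kf = 1.0 / (demo_forces.var(axis=0) + eps)
    s = kp + kf
    return kp / s, kf / s  # normalized trade-off between the two errors

def control(kp_t, kf_t, p_err, f_err):
    # Variable-impedance command: a convex trade-off of position/force errors.
    return kp_t * p_err + kf_t * f_err
```

Low variance in the demonstrated poses at some timestep yields a high position gain there, and vice versa for forces, giving the compliance trade-off the abstract describes.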
Intelligent Robots and Systems | 2013
Alex X. Lee; Yan Duan; Sachin Patil; John Schulman; Zoe McCarthy; Jur van den Berg; Ken Goldberg; Pieter Abbeel
In many home and service applications, an emerging class of articulated robots such as the Raven and Baxter trade off precision in actuation and sensing to reduce costs and to reduce the potential for injury to humans in their workspaces. For planning and control of such robots, planning in belief space, i.e., modeling such problems as POMDPs, has shown great promise, but existing belief-space planning methods have primarily been applied to cases where robots can be approximated as points or spheres. In this paper, we extend the belief-space framework to treat articulated robots where the linkage can be decomposed into convex components. To allow planning and collision avoidance in Gaussian belief spaces, we introduce the concept of sigma hulls: convex hulls of robot links transformed according to the sigma standard-deviation boundary points generated by the unscented Kalman filter (UKF). We characterize the signed distances between sigma hulls and obstacles in the workspace to formulate efficient collision avoidance constraints compatible with the Gilbert-Johnson-Keerthi (GJK) and Expanding Polytope Algorithm (EPA) within an optimization-based planning framework. We report results in simulation for planning motions for a 4-DOF planar robot and a 7-DOF articulated robot with imprecise actuation and inaccurate sensors. These experiments suggest that the sigma-hull framework can significantly reduce the probability of collision and is computationally efficient enough to permit iterative re-planning for model predictive control.
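The sigma-hull construction can be sketched for a translating planar link. This is a deliberate simplification (translation-only pose uncertainty and unit UKF scaling); the sigma-point and hull routines are generic textbook versions, not the authors' code.

```python
import numpy as np

def sigma_points(mean, cov):
    """Unscented-style sigma points: mean plus/minus matrix-sqrt columns."""
    L = np.linalg.cholesky(cov * len(mean))
    pts = [mean]
    for i in range(len(mean)):
        pts += [mean + L[:, i], mean - L[:, i]]
    return np.array(pts)

def _cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices as an array."""
    pts = sorted(map(tuple, points))
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and _cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return np.array(half(pts) + half(pts[::-1]))

def sigma_hull(link_vertices, pose_mean, pose_cov):
    """Convex hull of the link placed at every sigma point of its pose belief."""
    placed = [link_vertices + sp for sp in sigma_points(pose_mean, pose_cov)]
    return convex_hull(np.vstack(placed))
```

The resulting hull inflates the link to cover its pose uncertainty, so a signed-distance check against it (e.g. via GJK/EPA) accounts for that uncertainty in one convex query.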
Intelligent Robots and Systems | 2014
Alex X. Lee; Sandy H. Huang; Dylan Hadfield-Menell; Eric Tzeng; Pieter Abbeel
Recent work [1], [2] has shown promising results in enabling robotic manipulation of deformable objects through learning from demonstrations. Their method computes a registration from the training scene to the test scene, and then applies an extrapolation of this registration to the training-scene gripper motion to obtain the gripper motion for the test scene. The warping cost of scene-to-scene registrations is used to determine the nearest neighbor from a set of training demonstrations. Once the gripper motion has been generalized to the test situation, they apply trajectory optimization [3] to plan the robot motions that will track the predicted gripper motions. In many situations, however, the predicted gripper motions cannot be followed perfectly due to, for example, joint limits or obstacles. In this case the past work finds a path that minimizes deviation from the predicted gripper trajectory, as measured by Euclidean distance for position and angular distance for orientation. Measuring the error this way during the motion planning phase, however, ignores the underlying structure of the problem, namely the idea that rigid registrations are preferred when generalizing from training scene to test scene. Deviating from the gripper trajectory predicted by the extrapolated registration effectively changes the warp induced by the registration in the part of the space where the gripper trajectories are. The main contribution of this paper is an algorithm that considers this effective final warp as the criterion to optimize in a unified optimization that simultaneously considers the scene-to-scene warping and the robot trajectory (which past work separated into two sequential steps). This results in an approach that adjusts to infeasibility in a way that adapts directly to the geometry of the scene and minimizes the introduction of additional warping cost.
In addition, this paper proposes to learn the motion of the gripper pads, whereas past work considered the motion of a coordinate frame attached to the gripper as a whole. This enables learning more precise grasping motions. Our experiments, which consider the task of knot tying, show that both unified optimization and explicit consideration of gripper pad motion result in improved performance.
Intelligent Robots and Systems | 2015
Alex X. Lee; Abhishek Gupta; Henry Lu; Sergey Levine; Pieter Abbeel
Learning from demonstration by means of non-rigid point cloud registration is an effective tool for learning to manipulate a wide range of deformable objects. However, most methods that use non-rigid registration to transfer demonstrated trajectories assume that the test and demonstration scene are structurally very similar, with any variation explained by a non-linear transformation. In real-world tasks with clutter and distractor objects, this assumption is unrealistic. In this work, we show that a trajectory-aware non-rigid registration method that uses multiple demonstrations to focus the registration process on points that are relevant to the task can effectively handle significantly greater visual variation than prior methods that are not trajectory-aware. We demonstrate that this approach achieves superior generalization on several challenging tasks, including towel folding and grasping objects in a box containing irrelevant distractors.
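The trajectory-aware weighting idea can be illustrated with a toy translation fit. This is hypothetical: the method above uses non-rigid registration over multiple demonstrations, whereas here each demo point is simply weighted by its proximity to the demonstrated gripper path, so distractor points barely influence the estimate.

```python
import numpy as np

def weighted_translation(demo_pts, test_pts, traj, bandwidth=0.2):
    """Estimate a demo-to-test translation, downweighting points far from
    the demonstrated gripper trajectory `traj`."""
    d = np.min(np.linalg.norm(demo_pts[:, None, :] - traj[None, :, :], axis=2),
               axis=1)
    w = np.exp(-(d / bandwidth) ** 2)
    w /= w.sum()
    # Weighted average of per-point displacements; distractors contribute ~0.
    return (w[:, None] * (test_pts - demo_pts)).sum(axis=0)
```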
International Conference on Robotics and Automation | 2015
Dylan Hadfield-Menell; Alex X. Lee; Chelsea Finn; Eric Tzeng; Sandy H. Huang; Pieter Abbeel
We consider the problem of learning from demonstrations to manipulate deformable objects. Recent work [1], [2], [3] has shown promising results that enable robotic manipulation of deformable objects through learning from demonstrations. Their approach is able to generalize from a single demonstration to new test situations, and suggests a nearest neighbor approach to select a demonstration to adapt to a given test situation. Such a nearest neighbor approach, however, ignores important aspects of the problem: brittleness (versus robustness) of demonstrations when generalized through this process, and the extent to which a demonstration makes progress towards a goal. In this paper, we frame the problem of selecting which demonstration to transfer as an options Markov decision process (MDP). We present max-margin Q-function estimation: an approach to learn a Q-function from expert demonstrations. Our learned policies account for variability in robustness of demonstrations and the sequential nature of our tasks. We developed two knot-tying benchmarks to experimentally validate the effectiveness of our proposed approach. The selection strategy described in [2] achieves success rates of 70% and 54%, respectively. Our approach performs significantly better, with success rates of 88% and 76%, respectively.
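The max-margin Q-function idea can be sketched with structured-perceptron updates on linear features (a hedged simplification: the paper's estimator, features, and sequential MDP structure are not reproduced here). The learner pushes the expert's chosen demonstration to score above every alternative by a margin.

```python
import numpy as np

def fit_q(features, expert_idx, margin=1.0, lr=0.1, epochs=50):
    """features: list of (n_options, d) arrays, one per decision state.
    expert_idx: index of the expert's chosen option in each state."""
    d = features[0].shape[1]
    w = np.zeros(d)
    for _ in range(epochs):
        for phi, j in zip(features, expert_idx):
            scores = phi @ w
            # Most-violated alternative under the margin requirement.
            rival = max((k for k in range(len(phi)) if k != j),
                        key=lambda k: scores[k])
            if scores[j] < scores[rival] + margin:
                w += lr * (phi[j] - phi[rival])
    return w

def choose(w, phi):
    return int(np.argmax(phi @ w))
```

A full max-margin formulation would solve this as a QP with slack variables; the perceptron-style loop above is just the cheapest way to show the margin constraint in action.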
Nano Letters | 2018
Dylan Lu; Ye Zhang; Minliang Lai; Alex X. Lee; Chenlu Xie; Jia Lin; Teng Lei; Zhenni Lin; Christopher S. Kley; Jianmei Huang; Eran Rabani; Peidong Yang
Surface condition plays an important role in the optical performance of semiconductor materials. As an emerging class of semiconductors, metal-halide perovskites are promising for next-generation optoelectronic devices. We discover significantly improved light-emission efficiencies in lead halide perovskites due to surface oxygen passivation. The enhancement approaches three orders of magnitude as the perovskite dimensions decrease to the nanoscale, improving external quantum efficiencies from <0.02% to over 12%. Along with an approximately four-fold increase in spontaneous carrier recombination lifetimes, we show that oxygen exposure enhances light emission by reducing the nonradiative recombination channel. Supported by X-ray surface characterization and theoretical modeling, we propose that excess lead atoms on the perovskite surface create deep-level trap states that can be passivated by oxygen adsorption.