Adam Leeper
Stanford University
Publications
Featured research published by Adam Leeper.
Human-Robot Interaction | 2012
Adam Leeper; Kaijen Hsiao; Matei T. Ciocarlie; Leila Takayama; David Gossow
Human-in-the-loop robotic systems have the potential to handle complex tasks in unstructured environments by combining the cognitive skills of a human operator with autonomous tools and behaviors. Along these lines, we present a system for remote human-in-the-loop grasp execution. An operator uses a computer interface to visualize a physical robot and its surroundings, and a point-and-click mouse interface to command the robot. We implemented and analyzed four different strategies for performing grasping tasks, ranging from direct, real-time operator control of the end-effector pose to autonomous motion and grasp planning that is simply adjusted or confirmed by the operator. The results of our controlled experiment (N=48) indicate that people grasped more objects successfully and caused fewer unwanted collisions when using the strategies with more autonomous assistance. We used an untethered robot over wireless communications, making our strategies applicable to remote, human-in-the-loop robotic applications.
IEEE Robotics & Automation Magazine | 2013
Tiffany L. Chen; Matei T. Ciocarlie; Steve Cousins; Phillip M. Grice; Kelsey P. Hawkins; Kaijen Hsiao; Charles C. Kemp; Chih-Hung King; Daniel A. Lazewatsky; Adam Leeper; Hai Nguyen; Andreas Paepcke; Caroline Pantofaru; William D. Smart; Leila Takayama
Assistive mobile manipulators (AMMs) have the potential to one day serve as surrogates and helpers for people with disabilities, giving them the freedom to perform tasks such as scratching an itch, picking up a cup, or socializing with their families.
International Conference on Robotics and Automation | 2012
Adam Leeper; Sonny Chan; Kenneth Salisbury
We present a constraint-based strategy for haptic rendering of arbitrary point cloud data. With the recent proliferation of low-cost range sensors, dense 3D point cloud data is readily available at high update rates. Taking a cue from the graphics literature, we propose that point data should be represented as an implicit surface, which can be formulated to be mathematically smooth and efficient for computing interaction forces, and for which haptic constraint algorithms are already well known. The method is resistant to sensor noise and makes no assumptions about surface connectivity or orientation, and its data pre-processing is fast enough for use with streaming data. We compare the performance of two different implicit representations and discuss our strategy for handling time-varying point clouds from a depth camera. Applications of haptic point cloud rendering to remote sensing, as in robot telemanipulation, are also discussed.
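As a rough illustration of the implicit-surface idea, the Python/NumPy sketch below builds one simple implicit function, the signed distance to a locally weighted plane fitted to nearby cloud points, and derives a penalty force from it. The Gaussian kernel width `h`, the use of per-point normals, and the penalty-style force are assumptions made for the example; they are not necessarily either of the two representations compared in the paper, whose algorithm is constraint-based.

```python
import numpy as np

def implicit_value(q, points, normals, h=0.01):
    """Signed distance of query point q to a locally weighted plane fitted
    to the point cloud; one simple implicit-surface choice (illustrative)."""
    d = points - q
    w = np.exp(-np.sum(d * d, axis=1) / (h * h))   # Gaussian weights by distance
    w /= w.sum() + 1e-12
    centroid = np.sum(w[:, None] * points, axis=0)
    normal = np.sum(w[:, None] * normals, axis=0)
    normal /= np.linalg.norm(normal) + 1e-12
    return float(np.dot(q - centroid, normal)), normal

def haptic_force(q, points, normals, stiffness=500.0):
    """Penalty force pushing the haptic proxy back toward the zero level set."""
    phi, n = implicit_value(q, points, normals)
    return -stiffness * phi * n if phi < 0.0 else np.zeros(3)
```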
Intelligent Robots and Systems | 2012
Matei T. Ciocarlie; Kaijen Hsiao; Adam Leeper; David Gossow
We present a mobile manipulation platform operated by a motor-impaired person using input from a head-tracker, single-button mouse. The platform is used to perform varied and unscripted manipulation tasks in a real home, combining navigation, perception, and manipulation. The operator can make use of a wide range of interaction methods and tools, from direct tele-operation of the gripper or mobile base to autonomous sub-modules performing collision-free base navigation or arm motion planning. We describe the complete set of tools that enable the execution of complex tasks, and share the lessons learned from testing them in a real user's home. In the context of grasping, we show how the use of autonomous sub-modules improves performance in complex, cluttered environments, and compare the results to those obtained by novice, able-bodied users operating the same system.
International Symposium on Experimental Robotics | 2014
Adam Leeper; Kaijen Hsiao; Eric Chu; J. Kenneth Salisbury
Robotic grasping in unstructured environments requires the ability to adjust and recover when a pre-planned grasp faces imminent failure. Even for a single object, modeling uncertainties due to occluded surfaces, sensor noise, and calibration errors can cause grasp failure; cluttered environments exacerbate the problem. In this work, we propose a simple but robust approach to both pre-touch grasp adjustment and grasp planning for unknown objects in clutter, using a small-baseline stereo camera attached to the gripper of the robot. By employing a 3D sensor from the perspective of the gripper, we gain information about the object and nearby obstacles immediately prior to grasping that is not available during head-sensor-based grasp planning. We use a feature-based cost function on local 3D data to evaluate the feasibility of a proposed grasp. In cases where only minor adjustments are needed, our algorithm performs gradient descent on this cost function to find optimal grasps near the original grasp. In cases where no suitable grasp is found, the robot can search for a significantly different grasp pose rather than blindly attempting a doomed grasp. We present experimental results that validate our approach by grasping a wide range of unknown objects in cluttered scenes. Our results show that reactive pre-touch adjustment can correct for a fair amount of uncertainty in the measured position and shape of the objects, or the presence of nearby obstacles.
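The local-search step lends itself to a compact sketch. Below is a generic finite-difference gradient descent over a 6-DOF grasp pose; the feature-based cost itself is left as a callable, since its actual terms (for example, point coverage between the fingers or collision penalties) are defined in the paper and only hinted at here. The step size, perturbation, and iteration count are placeholders.

```python
import numpy as np

def refine_grasp(pose0, cost_fn, step=0.01, eps=1e-3, iters=50):
    """Finite-difference gradient descent over a 6-DOF grasp pose
    (x, y, z, roll, pitch, yaw). cost_fn maps a pose vector to a scalar
    cost computed from local 3D features (not reproduced here)."""
    pose = np.asarray(pose0, dtype=float)
    for _ in range(iters):
        grad = np.zeros_like(pose)
        for i in range(pose.size):                 # numerical gradient
            d = np.zeros_like(pose)
            d[i] = eps
            grad[i] = (cost_fn(pose + d) - cost_fn(pose - d)) / (2.0 * eps)
        candidate = pose - step * grad
        if cost_fn(candidate) >= cost_fn(pose):    # stop when no improvement
            break
        pose = candidate
    return pose
```

In a real system the step size would likely differ between translational and rotational components; a single scalar is used here only to keep the sketch short.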
IEEE-RAS International Conference on Humanoid Robots | 2013
Adam Leeper; Kaijen Hsiao; Matei T. Ciocarlie; Ioan Alexandru Sucan; Kenneth Salisbury
We introduce CAT, a constraint-aware teleoperation method that can track continuously updating 6-DOF end-effector goals while avoiding environment collisions, self-collisions, and joint limits. Our method uses sequential quadratic programming to generate motion trajectories that obey kinematic constraints while attempting to reduce the distance to the goal with each step. Environment models are created and updated at run-time using a commodity depth camera. We compare our method to three additional teleoperation strategies, based on global motion planning, inverse kinematics, and Jacobian-transpose control. Our analysis, using a real robot in a variety of scenes, highlights the strengths of each method, and shows that the CAT method we introduce performs well over a wide range of scenarios.
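As a rough analogue of the per-step optimization, the sketch below uses SciPy's SLSQP solver (an SQP method) to compute a bounded joint-space step that reduces the end-effector's distance to the commanded goal while keeping a nonnegative collision clearance and respecting joint limits. The `fk` and `clearance` callables, the step bound, and the lumped pose-distance metric are assumptions made for the example; the paper's actual constraint set and solver details may differ.

```python
import numpy as np
from scipy.optimize import minimize

def cat_step(q, goal_pose, fk, clearance, q_min, q_max, max_step=0.05):
    """One constraint-aware teleoperation step (illustrative only).

    fk(q)        -> 6-vector end-effector pose for joint vector q (assumed given)
    clearance(q) -> signed distance to the nearest obstacle, from the
                    depth-camera environment model (assumed given)
    """
    q = np.asarray(q, dtype=float)

    def objective(dq):
        # Crude pose error: position and orientation lumped into one norm.
        return float(np.linalg.norm(fk(q + dq) - goal_pose) ** 2)

    constraints = [
        {"type": "ineq", "fun": lambda dq: clearance(q + dq)},              # stay collision-free
        {"type": "ineq", "fun": lambda dq: max_step - np.linalg.norm(dq)},  # bounded step size
    ]
    bounds = [(lo - qi, hi - qi) for qi, lo, hi in zip(q, q_min, q_max)]    # joint limits
    res = minimize(objective, np.zeros_like(q), method="SLSQP",
                   bounds=bounds, constraints=constraints)
    return q + res.x
```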
IEEE Haptics Symposium | 2012
Adam Leeper; Sonny Chan; Kaijen Hsiao; Matei T. Ciocarlie; Kenneth Salisbury
We present an efficient 6-DOF haptic algorithm for rendering interaction forces between a rigid proxy object and a set of unordered point data. We further explore the use of haptic feedback for remotely supervised robots performing grasping tasks. The robot captures the geometry of a remote environment (as a cloud of 3D points) at run-time using a depth camera or laser scanner. An operator then uses a haptic device to position a virtual model of the robot gripper (the haptic proxy), specifying a desired grasp pose to be executed by the robot. The haptic algorithm enforces a proxy pose that is non-colliding with the observable environment, and provides both force and torque feedback to the operator. Once the operator confirms the desired gripper pose, the robot computes a collision-free arm trajectory and executes the specified grasp. We apply this method for grasping a wide range of objects, previously unseen by the robot, from highly cluttered scenes typical of human environments. Our user experiment (N=20) shows that people with no prior experience using the visualization system on which our interfaces are based are able to successfully grasp more objects with a haptic device providing force feedback than with just a mouse.
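The paper's algorithm is constraint-based, keeping the proxy strictly non-colliding; as a much simpler stand-in, the sketch below computes a penalty-style force and torque on a proxy approximated by a set of spheres, which conveys how unordered points can produce a full 6-DOF wrench. The sphere decomposition, stiffness value, and penalty formulation are assumptions for illustration only.

```python
import numpy as np

def proxy_wrench(proxy_spheres, proxy_center, points, stiffness=400.0):
    """Penalty-style force and torque on a rigid proxy approximated by
    (center, radius) spheres, pushed away from nearby cloud points."""
    force = np.zeros(3)
    torque = np.zeros(3)
    for center, radius in proxy_spheres:
        d = center - points                        # vectors from points to sphere center
        dist = np.linalg.norm(d, axis=1)
        for vec, dn in zip(d[dist < radius], dist[dist < radius]):
            n = vec / (dn + 1e-9)                  # push direction, away from the point
            f = stiffness * (radius - dn) * n      # penetration-depth penalty force
            force += f
            torque += np.cross(center - proxy_center, f)
    return force, torque
```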
Intelligent Robots and Systems | 2012
Tiffany L. Chen; Matei T. Ciocarlie; Steve Cousins; Phillip M. Grice; Kelsey P. Hawkins; Kaijen Hsiao; Charles C. Kemp; Chih-Hung King; Daniel A. Lazewatsky; Adam Leeper; Hai Nguyen; Andreas Paepcke; Caroline Pantofaru; William D. Smart; Leila Takayama
The Robots for Humanity project aims to enable people with severe motor impairments to interact with their own bodies and their environment through the use of an assistive mobile manipulator, thereby improving their quality of life. Assistive mobile manipulators (AMMs) are mobile robots that physically manipulate the world in order to provide assistance to people with disabilities. They present an exciting frontier for assistive technology, as they can operate away from the user, have a large dexterous workspace (due to their mobility), and avoid directly encumbering their users. The cornerstone of this project is an ongoing, interactive design process with a quadriplegic user, Henry Evans, and his wife and primary caregiver, Jane Evans. Henry has been enabled, through the use of a PR2 robot, to scratch his own face, shave, fetch a towel from his kitchen, and hand out Halloween candy to trick-or-treating children at a local mall.
Intelligent Robots and Systems | 2011
Reuben D. Brewer; Adam Leeper; J. Kenneth Salisbury
We present a new mechanical design for a 3-DOF haptic device with spherical kinematics (pitch, yaw, and prismatic radial). All motors are grounded in the base to decrease inertia and increase compactness near the user's hand. An aluminum-aluminum friction differential allows actuation of pitch and yaw with mechanical robustness while allowing a cable transmission to route through its center. This novel cabling system provides simple, compact, and high-performance actuation of the radial DOF independent of motions in pitch and yaw. We show that the device's capabilities are suitable for general haptic rendering, as well as for specialized applications of spherical kinematics such as laparoscopic surgery simulation.
International Conference on Robotics and Automation | 2013
Anthony Pratkanis; Adam Leeper; Kenneth Salisbury
We describe our development of an autonomous robotic system that safely navigates through an unmodified campus environment to purchase and deliver a cup of coffee. To accomplish this task, the robot navigates through indoor and outdoor environments, opens heavy spring-loaded doors, calls, enters, and exits an elevator, waits in line with other customers, interacts with coffee shop employees to purchase beverages, and returns to its original location to deliver the beverages. This paper makes four contributions: a robust infrastructure for unifying multiple 2D navigation maps; a process for detecting and opening transparent, heavy spring-loaded doors; algorithms for operating elevators; and software that enables the intuitive passing of objects to and from untrained humans.
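Of the four contributions, the multi-map navigation infrastructure is the easiest to caricature in code. The sketch below treats each 2D map as a node in a graph whose edges record transition poses (doorways, elevators) and finds the sequence of maps to traverse with breadth-first search. All map names, poses, and the data layout are invented for the example and are not the paper's actual representation.

```python
from collections import deque

# Hypothetical multi-map graph: nodes are 2D navigation maps, and each edge
# records the pose (x, y, yaw) where the robot can transition between them.
TRANSITIONS = {
    ("lab_floor2", "elevator_floor2"): (4.2, 1.0, 0.0),
    ("elevator_floor2", "elevator_floor1"): (0.5, 0.5, 3.14),
    ("elevator_floor1", "outdoor_path"): (2.0, -3.0, 1.57),
    ("outdoor_path", "coffee_shop"): (12.0, 8.0, 0.0),
}

def neighbors(m):
    for (a, b), pose in TRANSITIONS.items():
        if a == m:
            yield b, pose
        elif b == m:
            yield a, pose

def plan_map_sequence(start, goal):
    """Breadth-first search over maps; returns the list of (map, entry_pose)
    pairs the local 2D planner must traverse in order."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        m, path = frontier.popleft()
        if m == goal:
            return path
        for nxt, pose in neighbors(m):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [(nxt, pose)]))
    return None

print(plan_map_sequence("lab_floor2", "coffee_shop"))
```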