Publication


Featured research published by Paul Hebert.


International Conference on Robotics and Automation | 2011

Fusion of stereo vision, force-torque, and joint sensors for estimation of in-hand object location

Paul Hebert; Nicolas Hudson; Jeremy Ma; Joel W. Burdick

This paper develops a method to fuse stereo vision, force-torque sensor, and joint angle encoder measurements to estimate and track the location of a grasped object within the hand. We pose the problem as a hybrid systems estimation problem, where the continuous states are the object's 6D pose, finger contact location, and wrist-to-camera transform, and the discrete states are the finger contact modes with the object. This paper develops the key measurement equations that govern the fusion process. Experiments with a Barrett Hand, a Bumblebee 2 stereo camera, and an ATI Omega force-torque sensor validate and demonstrate the method.
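The hybrid-estimation idea can be made concrete with a small sketch: each discrete contact mode keeps its own Gaussian estimate of the object pose, measurements from the different sensors correct each mode, and the measurement likelihoods reweight the modes. The sketch below is an illustrative one-dimensional version; the mode names, state, and numbers are hypothetical assumptions, not the paper's implementation.

```python
# A minimal sketch of hybrid estimation (not the authors' code): each
# discrete contact mode keeps its own Gaussian pose estimate, and mode
# weights are updated from the measurement likelihood.
import numpy as np

def kalman_update(x, P, z, R):
    """Scalar Kalman update with a direct state measurement (H = 1)."""
    K = P / (P + R)               # Kalman gain
    x_new = x + K * (z - x)       # corrected estimate
    P_new = (1.0 - K) * P         # corrected covariance
    # Gaussian likelihood of the innovation, used to weight modes
    S = P + R
    lik = np.exp(-0.5 * (z - x) ** 2 / S) / np.sqrt(2 * np.pi * S)
    return x_new, P_new, lik

# Two hypothetical contact modes (e.g., "stable grasp" vs. "slipped"),
# each predicting the object position differently before the update.
modes = [
    {"x": 0.10, "P": 0.02, "w": 0.5},   # mode 0: stable grasp
    {"x": 0.14, "P": 0.05, "w": 0.5},   # mode 1: object slipped
]

z_stereo, R_stereo = 0.12, 0.01         # stereo-vision pose measurement
z_ft, R_ft = 0.11, 0.02                 # contact inferred from force-torque

for m in modes:
    for z, R in [(z_stereo, R_stereo), (z_ft, R_ft)]:
        m["x"], m["P"], lik = kalman_update(m["x"], m["P"], z, R)
        m["w"] *= lik                   # reweight mode by evidence

total = sum(m["w"] for m in modes)
for i, m in enumerate(modes):
    m["w"] /= total
    print(f"mode {i}: pose={m['x']:.3f}, weight={m['w']:.2f}")
```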


Journal of Field Robotics | 2015

Mobile Manipulation and Mobility as Manipulation: Design and Algorithms of RoboSimian

Paul Hebert; Max Bajracharya; Jeremy Ma; Nicolas Hudson; Alper Aydemir; Jason Reid; Charles F. Bergh; James Borders; Matthew Frost; Michael Hagman; John Leichty; Paul G. Backes; Brett Kennedy; Paul Karplus; Brian W. Satzinger; Katie Byl; Krishna Shankar; Joel W. Burdick

This article presents the hardware design and software algorithms of RoboSimian, a statically stable quadrupedal robot capable of both dexterous manipulation and versatile mobility in difficult terrain. The robot has generalized limbs and hands capable of mobility and manipulation, along with almost fully hemispherical three-dimensional sensing with passive stereo cameras. The system is semi-autonomous, enabling low-bandwidth, high-latency control from a standard laptop. Because the limbs are used for both mobility and manipulation, a single unified mobile manipulation planner is used to generate autonomous behaviors, including walking, sitting, climbing, grasping, and manipulating. The remote operator interface is optimized to designate, parametrize, sequence, and preview behaviors, which are then executed by the robot. RoboSimian placed fifth in the DARPA Robotics Challenge Trials, demonstrating its ability to perform disaster-recovery tasks in degraded human environments.


International Conference on Robotics and Automation | 2012

End-to-end dexterous manipulation with deliberate interactive estimation

Nicolas Hudson; Thomas M. Howard; Jeremy Ma; Abhinandan Jain; Max Bajracharya; Steven Myint; Calvin Kuo; Larry H. Matthies; Paul G. Backes; Paul Hebert; Thomas J. Fuchs; Joel W. Burdick

This paper presents a model-based approach to autonomous dexterous manipulation, developed as part of the DARPA Autonomous Robotic Manipulation (ARM) program. The autonomy system uses robot, object, and environment models to identify and localize objects, as well as to plan and execute the required manipulation tasks. Deliberate interaction with objects and the environment increases the system's knowledge of the combined robot and environmental state, enabling high-precision tasks such as key insertion to be performed in a consistent framework. This approach has been demonstrated across a wide range of manipulation tasks, and in independent DARPA testing it achieved the most successfully completed tasks with the fastest average task execution of any evaluated team.


International Conference on Robotics and Automation | 2013

The next best touch for model-based localization

Paul Hebert; Thomas M. Howard; Nicolas Hudson; Jeremy Ma; Joel W. Burdick

This paper introduces a tactile or contact method whereby an autonomous robot equipped with suitable sensors can choose its next touch-based sensing action in order to accurately localize an object in its environment. The method uses an information gain metric based on the uncertainty of the object's pose to determine the next best touching action. Intuitively, the optimal action is the most informative one. The action is then carried out and the estimate of the object's pose is updated using an estimator. The method is further extended to choose the most informative action to simultaneously localize the object and estimate its model parameters or model class. Results are presented both in simulation and in experiments on the DARPA Autonomous Robotic Manipulation Software (ARM-S) robot.
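As a rough illustration of the information-gain selection described above, the sketch below represents the pose belief with one-dimensional particles and scores candidate touch actions by their expected entropy reduction. The action names and noise values are assumptions for illustration; the paper's metric and models are richer.

```python
# A minimal sketch of choosing the "next best touch" by information gain
# (illustrative, not the paper's implementation). The object pose belief is
# a set of 1-D particles; each candidate touch yields a contact measurement
# whose expected posterior entropy we approximate by simulation.
import numpy as np

rng = np.random.default_rng(0)
particles = rng.normal(0.0, 0.05, size=500)   # belief over object offset (m)

def entropy(samples):
    """Gaussian-approximation entropy of a sample set."""
    var = np.var(samples) + 1e-12
    return 0.5 * np.log(2 * np.pi * np.e * var)

def simulate_touch(particles, noise_std):
    """Expected posterior entropy after a touch with given sensor noise,
    averaged over measurements sampled from the current belief."""
    post_entropies = []
    for z in rng.choice(particles, size=20) + rng.normal(0, noise_std, 20):
        w = np.exp(-0.5 * ((particles - z) / noise_std) ** 2)
        w /= w.sum()
        resampled = rng.choice(particles, size=len(particles), p=w)
        post_entropies.append(entropy(resampled))
    return np.mean(post_entropies)

# Hypothetical candidate actions: touching different faces gives different
# measurement noise (some contacts are more informative than others).
actions = {"touch_top": 0.03, "touch_side": 0.01, "touch_edge": 0.06}

prior_h = entropy(particles)
gains = {a: prior_h - simulate_touch(particles, s) for a, s in actions.items()}
best = max(gains, key=gains.get)
print(f"information gain per action: {gains}")
print(f"next best touch: {best}")
```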


International Conference on Robotics and Automation | 2012

Combined shape, appearance and silhouette for simultaneous manipulator and object tracking

Paul Hebert; Nicolas Hudson; Jeremy Ma; Thomas M. Howard; Thomas J. Fuchs; Max Bajracharya; Joel W. Burdick

This paper develops an estimation framework for sensor-guided manipulation of a rigid object via a robot arm. Using an unscented Kalman filter (UKF), the method combines dense range information (from stereo cameras and 3D ranging sensors) with visual appearance features and silhouettes of the object and manipulator to track both an object-fixed frame and a manipulator tool or palm frame. If available, tactile data is also incorporated. By using these different imaging sensors and different imaging properties, we can leverage the advantages of each sensor and each feature type to realize more accurate and robust object and reference frame tracking. The method is demonstrated using the DARPA ARM-S system, consisting of a Barrett WAM manipulator.
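A minimal sketch of the unscented-fusion step follows, under simplifying assumptions: a 2-D position state and a stacked nonlinear measurement standing in for the paper's dense range and silhouette features. This is plain sigma-point algebra, not the authors' code.

```python
# A minimal unscented-update sketch in the spirit of the paper's UKF fusion
# (illustrative only). Two heterogeneous "sensors" (a range reading and a
# silhouette-like bearing) are stacked into one nonlinear measurement.
import numpy as np

def sigma_points(x, P, kappa=1.0):
    n = len(x)
    S = np.linalg.cholesky((n + kappa) * P)
    pts = [x] + [x + S[:, i] for i in range(n)] + [x - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def h(x):
    """Stacked measurement: range to origin (dense 3D sensor) and bearing
    (silhouette edge direction), both nonlinear in the state."""
    return np.array([np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])])

x = np.array([1.0, 0.5])                  # prior mean
P = np.diag([0.04, 0.04])                 # prior covariance
R = np.diag([0.01, 0.005])                # sensor noise (range, bearing)
z = np.array([1.15, 0.48])                # actual stacked measurement

pts, w = sigma_points(x, P)
Z = np.array([h(p) for p in pts])         # propagate sigma points
z_hat = w @ Z                             # predicted measurement
Pzz = sum(wi * np.outer(zi - z_hat, zi - z_hat) for wi, zi in zip(w, Z)) + R
Pxz = sum(wi * np.outer(pi - x, zi - z_hat) for wi, pi, zi in zip(w, pts, Z))
K = Pxz @ np.linalg.inv(Pzz)              # Kalman gain
x = x + K @ (z - z_hat)                   # fused posterior mean
P = P - K @ Pzz @ K.T                     # fused posterior covariance
print("fused estimate:", x)
```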


Intelligent Robots and Systems | 2014

More solutions means more problems: Resolving kinematic redundancy in robot locomotion on complex terrain

Brian W. Satzinger; Jason Reid; Max Bajracharya; Paul Hebert; Katie Byl

This paper addresses the open challenge of planning quasi-static walking motions for robots with kinematically redundant limbs. Focusing on RoboSimian, a quadrupedal robot developed by the Jet Propulsion Laboratory (JPL), we develop a practical method for generating statically stable walking motions by pre-computing a reduced-dimensional inverse kinematics (IK) lookup table with certain uniqueness and smoothness properties. We then use that lookup table to generate IK solutions at the beginning and end of walking phases (e.g., swing, body shift), and connect these waypoints using the Rapidly-exploring Random Tree Connect (RRT-Connect) algorithm [1]. Thus, we avoid arbitrarily choosing an IK solution at the goal (one that may turn out to be difficult to reach from the start) by fixing this choice through the design and use of a task-specific lookup table, which can be analyzed offline. Our approach also introduces a complementary formulation of the RRT-Connect configuration space that addresses contact and closure constraints by using the forward kinematics of one stance leg to determine the body pose, while treating additional stance limbs as dependent on the body pose and solving their inverse kinematics with IK table lookup. We demonstrate an implementation of parts of this framework on RoboSimian and discuss generalizations and extensions.
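The lookup-table idea can be sketched on a planar two-link limb: fixing one elbow branch gives each reachable foot position a unique, smoothly varying IK solution, which is tabulated offline and queried by nearest neighbor. The link lengths, grid resolution, and 2-D workspace below are illustrative assumptions, far simpler than RoboSimian's seven-joint limbs.

```python
# A minimal sketch of the precomputed IK lookup-table idea (illustrative,
# not RoboSimian's implementation). Multiple-solution ambiguity is resolved
# by always picking one elbow branch, giving a unique, smooth solution at
# both the start and goal of a planned motion.
import numpy as np

L1, L2 = 0.4, 0.3   # link lengths (assumed)

def ik_elbow_up(x, y):
    """Closed-form 2-link IK, always choosing the elbow-up branch."""
    d2 = x * x + y * y
    c2 = (d2 - L1**2 - L2**2) / (2 * L1 * L2)
    if abs(c2) > 1.0:
        return None                       # unreachable
    q2 = -np.arccos(c2)                   # fixed sign = unique branch
    q1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(q2), L1 + L2 * np.cos(q2))
    return np.array([q1, q2])

# Precompute the table offline over a workspace grid.
table = {}
for x in np.linspace(0.2, 0.6, 41):
    for y in np.linspace(-0.3, 0.3, 61):
        q = ik_elbow_up(x, y)
        if q is not None:
            table[(round(x, 3), round(y, 3))] = q

def lookup(x, y):
    """Nearest grid entry; the consistent branch choice keeps this smooth."""
    key = min(table, key=lambda k: (k[0] - x) ** 2 + (k[1] - y) ** 2)
    return table[key]

print("joint angles at (0.45, 0.10):", lookup(0.45, 0.10))
```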


International Conference on Robotics and Automation | 2015

Supervised Remote Robot with Guided Autonomy and Teleoperation (SURROGATE): A framework for whole-body manipulation

Paul Hebert; Jeremy Ma; James Borders; Alper Aydemir; Max Bajracharya; Nicolas Hudson; Krishna Shankar; Sisir Karumanchi; Bertrand Douillard; Joel W. Burdick

The use of human cognitive capabilities to help guide the autonomy of robotic platforms, typically called "supervised autonomy," is becoming more commonplace in robotics research. The work discussed in this paper presents an approach to a human-in-the-loop mode of robot operation that integrates high-level human cognition and commanding with the intelligence and processing power of autonomous systems. Our framework for a "Supervised Remote Robot with Guided Autonomy and Teleoperation" (SURROGATE) is demonstrated on a robotic platform consisting of a pan-tilt perception head and two 7-DOF arms connected by a single 7-DOF torso, mounted on a tracked-wheel base. We present an architecture that allows high-level supervisory commands and intents to be specified by a user and then interpreted by the robotic system to perform whole-body manipulation tasks autonomously. We use a concept of "behaviors" to chain together sequences of "actions" for the robot to perform, which are then executed in real time.
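A minimal sketch of this behavior/action chaining: a behavior is an ordered list of parameterized actions executed in sequence, aborting if any action fails. The action names and executor below are hypothetical stand-ins, not the SURROGATE framework's actual interfaces.

```python
# A minimal sketch of chaining "actions" into a "behavior" (illustrative;
# the action set and executor are assumptions, not the actual framework).
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Action:
    name: str
    execute: Callable[..., bool]          # returns True on success
    params: dict = field(default_factory=dict)

@dataclass
class Behavior:
    name: str
    actions: list

    def run(self) -> bool:
        """Execute actions in order, aborting the chain on first failure."""
        for a in self.actions:
            print(f"[{self.name}] executing {a.name}({a.params})")
            if not a.execute(**a.params):
                print(f"[{self.name}] aborted: {a.name} failed")
                return False
        return True

# Hypothetical primitive actions for a whole-body manipulation task.
def drive_to(x, y): return True
def look_at(target): return True
def reach(arm, target): return True
def grasp(arm): return True

turn_valve = Behavior("approach_and_grasp", [
    Action("drive_to", drive_to, {"x": 1.2, "y": 0.0}),
    Action("look_at", look_at, {"target": "valve"}),
    Action("reach", reach, {"arm": "right", "target": "valve"}),
    Action("grasp", grasp, {"arm": "right"}),
])
print("behavior succeeded:", turn_valve.run())
```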


Autonomous Robots | 2014

Model-based autonomous system for performing dexterous, human-level manipulation tasks

Nicolas Hudson; Jeremy Ma; Paul Hebert; Abhinandan Jain; Max Bajracharya; Thomas F. Allen; Rangoli Sharan; Matanya B. Horowitz; Calvin Kuo; Thomas M. Howard; Larry H. Matthies; Paul G. Backes; Joel W. Burdick

This article presents a model-based approach to autonomous dexterous manipulation, developed as part of the DARPA Autonomous Robotic Manipulation Software (ARM-S) program. Performing human-level manipulation tasks is achieved through a novel combination of perception in uncertain environments, precise tool use, forceful dual-arm planning and control, persistent environmental tracking, and task-level verification. Deliberate interaction with the environment is incorporated into planning and control strategies, which, when coupled with world estimation, allows for refinement of models and precise manipulation. The system takes advantage of sensory feedback immediately, with little open-loop execution, attempting true autonomous reasoning and multi-step sequencing that adapts in the face of changing and uncertain environments. A tire-change scenario utilizing human tools, discussed throughout the article, is used to describe the system approach. A second scenario of cutting a wire is also presented and is used to illustrate system component reuse and generality.


International Conference on Robotics and Automation | 2013

Dual arm estimation for coordinated bimanual manipulation

Paul Hebert; Nicolas Hudson; Jeremy Ma; Joel W. Burdick

This paper develops an estimation framework for sensor-guided dual-arm manipulation of a rigid object. Using an unscented Kalman filter (UKF), the approach combines visual and kinesthetic information to track both the manipulators and the object. From visual updates of the object and manipulators, together with tactile updates, the method estimates both the robot's internal state and the object's pose. Nonlinear constraints are incorporated into the framework to handle the additional arm and ensure the state remains consistent. Two frameworks are compared: the first runs two single-arm filters in parallel, while the second consists of an augmented dual-arm filter with nonlinear constraints. Experiments on a wheel-changing task are demonstrated using the DARPA ARM-S system, consisting of dual Barrett WAM manipulators.
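One way to picture the constraint step is the standard covariance-weighted projection of an estimate onto an equality constraint. The sketch below applies it to a one-dimensional stand-in state, forcing the two arms' object estimates to agree; this is textbook constrained-estimate projection, not necessarily the paper's exact formulation.

```python
# A minimal sketch of enforcing a rigid-grasp consistency constraint on a
# dual-arm estimate (illustrative; a 1-D position stands in for full 6-DOF
# poses). After independent updates, the two arms' estimates of the object
# position are projected onto the constraint x_left = x_right.
import numpy as np

# Stacked state: object position as seen by the left and right arm filters.
x = np.array([0.52, 0.47])                # inconsistent after separate updates
P = np.diag([0.01, 0.04])                 # left arm is more certain here

# Equality constraint D @ x = 0 with D = [1, -1] (both estimates must agree).
D = np.array([[1.0, -1.0]])
# Minimum-variance projection: x* = x - P D^T (D P D^T)^-1 (D x)
S_inv = np.linalg.inv(D @ P @ D.T)
x_constrained = x - P @ D.T @ S_inv @ (D @ x)
P_constrained = P - P @ D.T @ S_inv @ D @ P

print("constrained estimate:", x_constrained)   # both entries now equal
```

Note how the correction is weighted by the covariance: the less certain arm's estimate moves more, so the fused value stays close to the better-instrumented arm.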


Journal of Neuroscience Methods | 2012

Expert-like performance of an autonomous spike tracking algorithm in isolating and maintaining single units in the macaque cortex

Shubhodeep Chakrabarti; Paul Hebert; Michael T. Wolf; Michael Campos; Joel W. Burdick; Alexander Gail

Isolating the action potentials of a single neuron (unit) is essential for intra-cortical neurophysiological recordings. Yet, during extracellular recordings in semi-chronic awake preparations, the relationship between the neuronal soma and the recording electrode is typically not stationary. Neuronal waveforms often change in shape and, in the absence of counter-measures, merge with the background noise. To avoid this, experimenters can repeatedly re-adjust electrode positions to maintain the shapes of isolated spikes. In recordings with a larger number of electrodes, this process becomes extremely difficult. We report the performance of an automated algorithm that tracks neurons to obtain well-isolated spiking and autonomously adjusts electrode position to maintain good isolation. We tested the performance of this algorithm in isolating units with multiple individually adjustable micro-electrodes in a cortical surface area of macaque monkeys. We compared its performance, in terms of signal quality and signal stability, against passive placement of microelectrodes and against the performance of three human experts. The results show that our SpikeTrack2 algorithm achieves significantly better signal quality than passive placement. It is at least as good as humans in initially finding and isolating units, and in maintaining signal quality and signal stability it is better than the average and at least as good as the most proficient of the three human experimenters. The autonomous tracking performance, the scalability of the system to large numbers of individual channels, and the possibility of objectifying single-unit recording criteria make SpikeTrack2 a highly valuable tool for all multi-channel recording systems with individually adjustable electrodes.
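A toy sketch of the tracking loop's logic: score unit isolation each cycle and reverse the micro-drive step when quality degrades. The isolation metric, thresholds, and simulated recording below are illustrative assumptions; SpikeTrack2's actual spike sorting and control are far more sophisticated.

```python
# A minimal sketch of an autonomous spike-tracking loop (illustrative; not
# SpikeTrack2 itself). Each cycle scores unit isolation as peak-to-peak
# amplitude over noise, and reverses the electrode step when it degrades.
import numpy as np

rng = np.random.default_rng(1)

def isolation_score(waveforms, noise_rms):
    """Mean peak-to-peak spike amplitude divided by background noise."""
    p2p = waveforms.max(axis=1) - waveforms.min(axis=1)
    return p2p.mean() / noise_rms

def record_waveforms(depth_um, n=50, samples=32):
    """Fake recording: signal amplitude falls off with distance from an
    (unknown) optimal depth at 120 um; stands in for real acquisition."""
    amp = 80.0 * np.exp(-abs(depth_um - 120.0) / 50.0)
    t = np.linspace(0, 1, samples)
    spike = amp * np.sin(2 * np.pi * t) * np.exp(-4 * t)
    return spike + rng.normal(0, 5.0, (n, samples))

depth, step, noise_rms = 60.0, 10.0, 5.0
best_score = 0.0
for cycle in range(12):
    score = isolation_score(record_waveforms(depth), noise_rms)
    print(f"depth {depth:5.1f} um  isolation {score:5.2f}")
    if score < 0.95 * best_score:        # quality degrading: reverse course
        step = -step
    best_score = max(best_score, score)
    depth += step                        # advance the micro-drive
```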

Collaboration


Dive into Paul Hebert's collaborations.

Top co-authors:

Joel W. Burdick, California Institute of Technology
Jeremy Ma, Jet Propulsion Laboratory
Nicolas Hudson, Jet Propulsion Laboratory
Max Bajracharya, California Institute of Technology
James Borders, California Institute of Technology
Paul G. Backes, California Institute of Technology
Brett Kennedy, California Institute of Technology