Publications


Featured research published by Kevin Nickels.


Image and Vision Computing | 2002

Estimating uncertainty in SSD-based feature tracking

Kevin Nickels; Seth Hutchinson

Sum-of-squared-differences (SSD) based feature trackers have enjoyed growing popularity in recent years, particularly in the field of visual servo control of robotic manipulators. These trackers use SSD correlation measures to locate target features in sequences of images. The results can then be used to estimate the motion of objects in the scene, to infer the 3D structure of the scene, or to control robot motions. The reliability of the information provided by these trackers can be degraded by a variety of factors, including changes in illumination, poor image contrast, occlusion of features, or unmodeled changes in objects. This has led other researchers to develop confidence measures that are used to either accept or reject individual features that are located by the tracker. In this paper, we derive quantitative measures for the spatial uncertainty of the results provided by SSD-based feature trackers. Unlike previous confidence measures that have been used only to accept or reject hypotheses, our new measure allows the uncertainty associated with a feature to be used to weight its influence on the overall tracking process. Specifically, we scale the SSD correlation surface, fit a Gaussian distribution to this surface, and use this distribution to estimate values for a covariance matrix. We illustrate the efficacy of these measures by showing the performance of an example object tracking system with and without the measures.
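
The core of the uncertainty measure can be sketched in a few lines: scale the SSD surface into a pseudo-likelihood, then take its first and second moments as the mean and covariance of a fitted Gaussian. The following is a minimal NumPy sketch of that idea; the exponential scaling and the constant k are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def ssd_covariance(ssd, k=0.05):
    """Estimate the spatial uncertainty of an SSD feature match.

    ssd : 2-D array of sum-of-squared-differences scores over the
          search window (low score = good match).
    k   : scaling constant turning scores into pseudo-likelihoods
          (illustrative value, not taken from the paper).
    Returns the match location (mean, in row/col) and a 2x2 covariance.
    """
    # Scale the surface so low SSD scores become high weights, then
    # normalize it into a discrete probability distribution.
    w = np.exp(-k * (ssd - ssd.min()))
    w /= w.sum()

    rows, cols = np.indices(ssd.shape)
    mean = np.array([(w * rows).sum(), (w * cols).sum()])

    # Second central moments give the covariance of the fitted Gaussian.
    dr = rows - mean[0]
    dc = cols - mean[1]
    cov = np.array([[(w * dr * dr).sum(), (w * dr * dc).sum()],
                    [(w * dr * dc).sum(), (w * dc * dc).sum()]])
    return mean, cov
```

A sharply peaked correlation surface yields a small covariance (the feature can be trusted), while a flat or ridge-like surface yields a large covariance, so the feature is down-weighted rather than rejected outright.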


International Conference on Robotics and Automation | 2001

Model-based tracking of complex articulated objects

Kevin Nickels; Seth Hutchinson

In this paper, we present methods for tracking complex, articulated objects. We assume that an appearance model and the kinematic structure of the object to be tracked are given, leading to what is termed a model-based object tracker. At each time step, this tracker observes a new monocular grayscale image of the scene and combines information gathered from this image with knowledge of the previous configuration of the object to estimate the configuration of the object at the time the image was acquired. Each degree of freedom in the model has an uncertainty associated with it, indicating the confidence in the current estimate for that degree of freedom. These uncertainty estimates are updated after each observation. An extended Kalman filter with appropriate observation and system models is used to implement this updating process. The methods that we describe are potentially beneficial to areas such as automated visual tracking in general, visual servo control, and human-computer interaction.
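
As a rough illustration of the estimation loop described above, the sketch below shows one EKF time step over the object's configuration vector, assuming a constant-configuration system model with additive process noise; the paper's actual system and observation models are richer, and h, H_of, R, and Q here are placeholders for them.

```python
import numpy as np

def ekf_step(x, P, z, h, H_of, R, Q):
    """One EKF time step for a model-based articulated-object tracker.

    x, P : configuration estimate (e.g. joint parameters) and covariance
    z    : observed image feature positions for the new frame
    h    : observation model mapping configuration -> predicted features
    H_of : function returning the Jacobian of h at x
    R, Q : measurement and process noise covariances (assumed values)
    """
    # Predict: a constant-configuration system model; process noise Q
    # lets the per-DOF uncertainty grow between observations.
    P = P + Q

    # Update: weight the image measurement against the prediction.
    H = H_of(x)
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - h(x))
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```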


Southeastern Symposium on System Theory | 2012

Robot operating systems: Bridging the gap between human and robot

John Kerr; Kevin Nickels

A robot operating system (ROS) is a collection of programs that allows a user to easily control the mobile operations of a robot. This paper describes research conducted on sixteen different ROSs to determine which one will best accommodate future Trinity undergraduates in further robotics research. The goal of this research is to reduce this list of sixteen ROSs to a single ROS that can be used by Trinity undergraduates with limited programming experience to perform simple robotic motion tasks. First, a detailed list of criteria describing the ideal ROS was created. The list of ROSs was then narrowed down to the single ROS that best fit these criteria: Player/Stage. Next, Player/Stage was tested to ensure the validity of the research performed. In these tests, a robot's mobility and sensors were controlled by a user via Player/Stage. This ROS excelled in both the mobility tests and the sensor tests, and also proved simple to navigate and manage.


Journal of Field Robotics | 2007

Visual end-effector position error compensation for planetary robotics

Max Bajracharya; Matthew DiCicco; Paul G. Backes; Kevin Nickels

This paper describes a vision-guided manipulation algorithm that improves arm end-effector positioning to subpixel accuracy and meets the highly restrictive imaging and computational constraints of a planetary robotic flight system. Analytical, simulation-based, and experimental analyses of the algorithm's effectiveness and sensitivity to camera and arm model error are presented, along with results on several prototype research systems and “ground-in-the-loop” technology experiments on the Mars Exploration Rover (MER) vehicles. A computationally efficient and robust subpixel end-effector fiducial detector that is instrumental to the algorithm's ability to achieve high accuracy is also described, along with its validation results on MER data.
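
Stripped to its essentials, the compensation idea is a damped closed loop: command the arm, detect the end-effector fiducial in the image, and convert the remaining pixel error into a Cartesian correction. The sketch below assumes hypothetical helpers (observe_px, move_to) and a pre-calibrated approximate inverse image Jacobian J_inv; the flight algorithm additionally contends with strict imaging and computation limits.

```python
import numpy as np

def refine_arm_target(target, goal_px, observe_px, move_to, J_inv,
                      gain=0.7, iters=5):
    """Close the loop between camera and arm: move, observe where the
    end-effector fiducial actually appears, and nudge the commanded
    target until the observed pixel matches the goal pixel.

    goal_px    : desired fiducial image location (pixels)
    observe_px : function returning the detected fiducial pixel
    move_to    : function commanding the arm to a Cartesian target
    J_inv      : assumed pre-calibrated 3x2 inverse image Jacobian
                 (pixels -> meters)
    """
    for _ in range(iters):
        move_to(target)
        err = goal_px - observe_px()      # pixel error at the fiducial
        if np.linalg.norm(err) < 0.5:     # subpixel tolerance reached
            break
        # Map the pixel error into a Cartesian correction; the gain
        # damps the step so model error does not cause overshoot.
        target = target + gain * (J_inv @ err)
    return target
```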


International Conference on Robotics and Automation | 1998

Weighting observations: the use of kinematic models in object tracking

Kevin Nickels; Seth Hutchinson

We describe a model-based object tracking system that updates the configuration parameters of an object model based upon information gathered from a sequence of monocular images. Realistic object and imaging models are used to determine the expected visibility of object features, and to determine the expected appearance of all visible features. We formulate the tracking problem as one of parameter estimation from partially observed data, and apply the extended Kalman filtering (EKF) algorithm. The models are also used to determine what point feature movement reveals about the configuration parameters of the object. This information is used by the EKF to update estimates for parameters, and for the uncertainty in the current estimates, based on observations of point features in monocular images.
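
The question of what point feature movement reveals about the configuration parameters is answered by the observation Jacobian, which the EKF uses to distribute each feature's innovation across the parameters. A minimal numerical version, assuming an observation model h that stacks the predicted feature pixels into one vector:

```python
import numpy as np

def observation_jacobian(h, x, eps=1e-6):
    """Numerically estimate how image feature positions change with the
    object's configuration parameters, i.e. the Jacobian H = dh/dx.

    h : observation model mapping configuration -> stacked feature pixels
    x : current configuration estimate (float array)
    """
    z0 = h(x)
    H = np.zeros((len(z0), len(x)))
    for j in range(len(x)):
        dx = np.zeros_like(x, dtype=float)
        dx[j] = eps
        # Column j: image-space sensitivity of all features to parameter j.
        H[:, j] = (h(x + dx) - z0) / eps
    return H
```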


Robotics and Autonomous Systems | 2010

Vision guided manipulation for planetary robotics - position control

Kevin Nickels; Matthew DiCicco; Max Bajracharya; Paul G. Backes

Manipulation systems for planetary exploration operate under severe restrictions. They need to integrate vision and manipulation to achieve the reliability, safety, and predictability required of expensive systems operating on remote planets. They also must operate on very modest hardware that is shared with many other systems, and must operate without human intervention. Typically, such systems employ calibrated stereo cameras and calibrated manipulators to achieve precision on the order of one centimeter with respect to instrument placement activities. This paper presents three complementary approaches to vision-guided manipulation designed to robustly achieve high precision in manipulation. These approaches are described and compared, both in simulation and on hardware. In situ estimation and adaptation of the manipulator and/or camera models in these methods account for changes in the system configuration, thus ensuring consistent precision for the life of the mission. All three methods provide several-fold increases in manipulator positioning accuracy over the standard flight approach.
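
The in-situ model adaptation mentioned above amounts to a small nonlinear least-squares problem: adjust the camera (or arm) model parameters until predicted fiducial projections match what the cameras actually see. Below is a Gauss-Newton sketch under assumed interfaces (a parameter vector theta and a project function); the mission system's parameterization and solver details are not specified here.

```python
import numpy as np

def adapt_camera_model(theta, project, fiducials_xyz, observed_px, iters=10):
    """In-situ refinement of camera model parameters from arm fiducials.

    theta         : current camera parameter vector (float array)
    project       : (theta, xyz) -> predicted pixel for one 3-D point
    fiducials_xyz : fiducial positions from the manipulator model
    observed_px   : corresponding detected image locations
    """
    def residual(t):
        pred = np.concatenate([project(t, p) for p in fiducials_xyz])
        return pred - np.concatenate(observed_px)

    eps = 1e-6
    for _ in range(iters):
        r = residual(theta)
        # Numerical Jacobian of the reprojection residual w.r.t. theta.
        J = np.zeros((len(r), len(theta)))
        for j in range(len(theta)):
            dth = np.zeros_like(theta)
            dth[j] = eps
            J[:, j] = (residual(theta + dth) - r) / eps
        # Gauss-Newton step; damping (Levenberg-Marquardt) could be added.
        theta = theta - np.linalg.lstsq(J, r, rcond=None)[0]
    return theta
```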


International Conference on Robotics and Automation | 1999

Measurement error estimation for feature tracking

Kevin Nickels; Seth Hutchinson

Performance estimation for feature tracking is a critical issue if feature tracking results are to be used intelligently. In this paper, we derive quantitative measures for the spatial accuracy of a particular feature tracker. This method uses the results from the sum-of-squared-differences correlation measure commonly used for feature tracking to estimate the accuracy (in the image plane) of the feature tracking result. In this way, feature tracking results can be analyzed and exploited to a greater extent without placing undue confidence in inaccurate results or throwing out accurate results. We argue that this interpretation of results is more flexible and useful than simply using a confidence measure on tracking results to accept or reject features. For example, an extended Kalman filtering framework can assimilate these tracking results directly to monitor the uncertainty in the estimation process for the state of an articulated object.
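
Concretely, the per-feature accuracy estimates can slot into an EKF as the measurement noise: stacking the 2x2 covariances into a block-diagonal R weights each feature by its estimated accuracy instead of thresholding it away. A small sketch of that assembly (the surrounding filter is assumed):

```python
import numpy as np
from scipy.linalg import block_diag

def build_measurement_covariance(feature_covs):
    """Stack per-feature 2x2 covariance estimates from the SSD tracker
    into one block-diagonal measurement covariance R for the EKF, so
    uncertain features are down-weighted rather than discarded."""
    return block_diag(*feature_covs)

# Example: a sharply tracked feature and a poorly tracked one.
R = build_measurement_covariance([np.eye(2) * 0.2, np.eye(2) * 9.0])
```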


International Conference on System of Systems Engineering | 2006

Vision-guided self-alignment and manipulation in a walking robot

Kevin Nickels; Brett Kennedy; Hrand Aghazarian; Curtis Collins; Mike Garrett; Avi Okon; Julie Townsend

One of the robots under development at NASA's Jet Propulsion Laboratory (JPL) is the limbed excursion mechanical utility robot, or LEMUR. Several of the tasks slated for this robot require computer vision, as a system, to interface with the other systems in the robot, such as walking, body pose adjustment, and manipulation. This paper describes the vision algorithms used in several tasks, as well as the vision-guided manipulation algorithms developed to mitigate mismatches between the vision system and the limbs used for manipulation. Two system-level tasks are described: one involving a two-meter walk culminating in a bolt-fastening task, and one involving a vision-guided alignment ending with the robot mating with a docking station.


International Conference on Robotics and Automation | 2001

Inertially assisted stereo tracking for an outdoor rover

Kevin Nickels; Eric Huber

Ego-motion (self-motion) of the camera is a considerable problem in outdoor rover applications. Stereo tracking of a single moving target is a difficult problem that becomes even more challenging when rough terrain causes significant, high-acceleration motion of the camera in the world. The paper discusses the use of inertial measurements to estimate camera ego-motion and the use of these estimates to augment stereo tracking. Some initial results from an outdoor rover are presented, illustrating the efficacy of the method. The method introduces fast but predictable image-location transients, while reducing the amplitude of the transients caused by the rough terrain.
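
A rotation-only version of the compensation can be written compactly: integrate the gyro over one frame interval, rotate the target's predicted viewing ray, and re-project. Camera translation and lens distortion are ignored in this sketch, and the small-angle rotation matrix is an assumption made for brevity.

```python
import numpy as np

def compensate_ego_motion(px, omega, dt, K):
    """Shift a predicted target image location to account for camera
    rotation measured by an inertial sensor over one frame interval.

    px    : predicted pixel before compensation (2-vector)
    omega : angular rate from the IMU (rad/s, camera frame, 3-vector)
    dt    : time between frames
    K     : 3x3 camera intrinsic matrix
    """
    # Small-angle rotation from the integrated gyro reading.
    wx, wy, wz = omega * dt
    R = np.array([[1.0, -wz,  wy],
                  [ wz, 1.0, -wx],
                  [-wy,  wx, 1.0]])

    # Rotate the normalized viewing ray and re-project to pixels.
    ray = np.linalg.inv(K) @ np.array([px[0], px[1], 1.0])
    ray = R.T @ ray          # camera rotates by R, so the ray rotates by R^T
    p = K @ (ray / ray[2])
    return p[:2]
```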


International Conference on Robotics and Automation | 1999

Development of a visual space-mouse

Tobias Peter Kurpjuhn; Kevin Nickels; Alexa Hauck; Seth Hutchinson

The pervasiveness of computers in everyday life, coupled with recent rapid advances in computer technology, has created both the need and the means for sophisticated human-computer interaction (HCI) technology. Despite all the progress in computer technology and robotic manipulation, the interfaces for controlling manipulators have changed very little in the last decade. Therefore, human-computer interfaces for controlling robotic manipulators are of great interest. A flexible and useful robotic manipulator is one capable of movement in three translational degrees of freedom and three rotational degrees of freedom. In addition to research labs, six-degree-of-freedom robots can be found in construction areas or other environments unfavorable for human beings. This paper proposes an intuitive and convenient visually guided interface for controlling a robot with six degrees of freedom. Two orthogonal cameras are used to track the position and the orientation of the hand of the user. This allows the user to control the robotic arm in a natural way.
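
Under a simple scaled-orthographic assumption, the two orthogonal views decouple neatly: the front camera constrains (x, y) and the side camera constrains (z, y). A minimal sketch of the position part follows (orientation tracking is omitted, and the scale factors are assumed calibration constants, not values from the paper):

```python
import numpy as np

def hand_position(front_px, side_px, scale_front, scale_side):
    """Recover a 3-D hand position from two orthogonally mounted cameras.

    front_px, side_px : hand centroid pixels relative to each image center
    scale_*           : assumed meters-per-pixel calibration factors
    """
    x = scale_front * front_px[0]   # lateral position from the front view
    z = scale_side * side_px[0]     # depth from the side view
    # Height is observed by both cameras; average the two estimates.
    y = 0.5 * (scale_front * front_px[1] + scale_side * side_px[1])
    return np.array([x, y, z])
```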

Collaboration


Dive into Kevin Nickels's collaborations.

Top Co-Authors

Matthew DiCicco, California Institute of Technology
Avi Okon, California Institute of Technology
Brett Kennedy, California Institute of Technology
Curtis Collins, California Institute of Technology
Max Bajracharya, California Institute of Technology
Julie Townsend, California Institute of Technology
Paul G. Backes, California Institute of Technology