

Publication


Featured research published by Donald B. Gennery.


International Journal of Computer Vision | 1992

Visual tracking of known three-dimensional objects

Donald B. Gennery

A method is described of visually tracking a known three-dimensional object as it moves with six degrees of freedom. The method uses the predicted position of known features on the object to find the features in images from one or more cameras, measures the position of the features in the images, and uses these measurements to update the estimates of position, orientation, linear velocity, and angular velocity of the object model. The features usually used are brightness edges that correspond to markings or the edges of solid objects, although point features can be used. The solution for object position and orientation is a weighted least-squares adjustment that includes filtering over time, which reduces the effects of errors, allows extrapolation over times of missing data, and allows the use of stereo information from multiple-camera images that are not coincident in time. The filtering action is derived so as to be optimum if the acceleration is random. (Alternatively, random torque can be assumed for rotation.) The filter is equivalent to a Kalman filter, but for efficiency it is formulated differently in order to take advantage of the dimensionality of the observations and the state vector which occur in this problem. The method can track accurately with arbitrarily large angular velocities, as long as the angular acceleration (or torque) is small. Results are presented showing the successful tracking of partially obscured objects with clutter.
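The random-acceleration filtering the abstract describes can be illustrated with a minimal one-dimensional constant-velocity Kalman filter. This is a sketch only: the paper's tracker estimates full six-degree-of-freedom pose and velocity and reformulates the filter for efficiency, whereas the toy below tracks a single scalar position; all names and noise values are illustrative.

```python
import numpy as np

def kalman_track(zs, dt=1.0, q=1e-3, r=0.25):
    """Minimal 1-D constant-velocity Kalman filter (a toy analogue of the
    paper's 6-DOF tracker): state = [position, velocity], with
    random-acceleration process noise q and measurement variance r."""
    F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity dynamics
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],    # random-acceleration noise
                      [dt**2 / 2, dt]])
    H = np.array([[1.0, 0.0]])                   # only position is observed
    x = np.array([zs[0], 0.0])                   # initial state estimate
    P = np.eye(2)                                # initial state covariance
    for z in zs[1:]:
        x = F @ x                                # predict state
        P = F @ P @ F.T + Q                      # predict covariance
        y = z - H @ x                            # innovation (residual)
        S = H @ P @ H.T + r                      # innovation covariance
        K = (P @ H.T) / S                        # Kalman gain
        x = x + (K * y).ravel()                  # update state
        P = (np.eye(2) - K @ H) @ P              # update covariance
    return x                                     # final [position, velocity]
```

The prediction step also supports the extrapolation over times of missing data that the abstract mentions: skipping the update when no measurement arrives simply coasts the state forward.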


Autonomous Robots | 1999

Traversability Analysis and Path Planning for a Planetary Rover

Donald B. Gennery

A method of analyzing three-dimensional data such as might be produced by stereo vision or a laser range finder in order to plan a path for a vehicle such as a Mars rover is described. In order to produce robust results from data that is sparse and of varying accuracy, the method takes into account the accuracy of each data point, as represented by its covariance matrix. It computes estimates of smoothed and interpolated height, slope, and roughness at equally spaced horizontal intervals, as well as accuracy estimates of these quantities. From this data, a cost function is computed that takes into account both the distance traveled and the probability that each region is traversable. A parallel search algorithm that finds the path of minimum cost also is described. Examples using real data are presented.
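A cost function blending distance traveled with traversal risk, searched for a minimum-cost path, can be sketched as below. This is illustrative only: the paper derives per-cell traversability probabilities from accuracy-weighted height, slope, and roughness estimates and uses a parallel search, whereas this toy takes the probabilities as given and runs a plain Dijkstra search; the cost weighting is an assumption.

```python
import heapq

def plan_path(p_traversable, start, goal, risk_weight=10.0):
    """Dijkstra search on a grid where each cell's cost blends unit distance
    with the probability that the cell is NOT traversable (an illustrative
    stand-in for the paper's accuracy-weighted cost function)."""
    rows, cols = len(p_traversable), len(p_traversable[0])

    def cost(r, c):
        return 1.0 + risk_weight * (1.0 - p_traversable[r][c])

    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue                              # stale heap entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost(nr, nc)
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, cell = [], goal                         # walk back from goal
    while cell != start:
        path.append(cell)
        cell = prev[cell]
    path.append(start)
    return path[::-1]
```

With a high risk weight, the planner routes around low-probability cells even when that lengthens the path, which is the qualitative behavior the abstract describes.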


International Journal of Computer Vision | 2006

Generalized Camera Calibration Including Fish-Eye Lenses

Donald B. Gennery

A method is described for accurately calibrating cameras including radial lens distortion, by using known points such as those measured from a calibration fixture. Both the intrinsic and extrinsic parameters are calibrated in a single least-squares adjustment, but provision is made for including old values of the intrinsic parameters in the adjustment. The distortion terms are relative to the optical axis, which is included in the model so that it does not have to be orthogonal to the image sensor plane. These distortion terms represent corrections to the basic lens model, which is a generalization that includes the perspective projection and the ideal fish-eye lens as special cases. The position of the entrance pupil point as a function of off-axis angle also is included in the model. (The complete camera model including all of these effects often is called CAHVORE.) A way of adding decentering distortion also is described. A priori standard deviations can be used to apply weight to given initial approximations (which can be zero) for the distortion terms, for the difference between the optical axis and the perpendicular to the sensor plane, and for the terms representing movement of the entrance pupil, so that the solution for these is well determined when there is insufficient information in the calibration data. For the other parameters, initial approximations needed for the nonlinear least-squares adjustment are obtained in a simple manner from the calibration data and other known information. (Weight can be given to these also, if desired.) Outliers among the calibration points that disagree excessively with the other data are removed by means of automatic editing based on analysis of the residuals. The use of the camera model also is described, including partial derivatives for propagating both from object space to image space and vice versa. These methods were used to calibrate the cameras on the Mars Exploration Rovers.
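One way to see how a single lens model can contain both the perspective projection and the ideal fish-eye as special cases is the generalization sketched below, where a linearity parameter interpolates between them. This is illustrative only and not claimed to be the exact CAHVORE parameterization; the function name and parameter are assumptions.

```python
import math

def project_radius(theta, f=1.0, linearity=1.0):
    """Radial image distance for off-axis angle theta (radians) under a
    generalized projection: linearity = 1 gives the perspective model
    r = f * tan(theta); linearity -> 0 gives the ideal fish-eye
    (equidistant) model r = f * theta. Illustrative sketch, not the
    exact CAHVORE formula."""
    if abs(linearity) < 1e-9:
        return f * theta                          # ideal fish-eye limit
    return f * math.tan(linearity * theta) / linearity
```

Since tan(L * theta) / L approaches theta as L goes to 0, the two special cases join continuously, which is what lets one least-squares adjustment cover ordinary and fish-eye lenses alike.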


international conference on robotics and automation | 1992

Robotic vehicles for planetary exploration

Brian H. Wilcox; Larry H. Matthies; Donald B. Gennery; Brian K. Cooper; Tam T. Nguyen; Todd Litwin; Andrew Mishkin; Henry W. Stone

Future missions to the moon, Mars, or other planetary surfaces will use planetary rovers for exploration or other tasks. Operation of these rovers as unmanned robotic vehicles with some form of remote or semi-autonomous control is desirable to reduce the cost and increase the capability and safety of many types of missions. However, the long time delays and relatively low bandwidths associated with radio communications between planets preclude a total “telepresence” approach to controlling the vehicle. A program to develop planetary rover technology has been initiated at the Jet Propulsion Laboratory (JPL) under sponsorship of the National Aeronautics and Space Administration (NASA). Developmental systems with the necessary sensing, computing, power, and mobility resources to demonstrate realistic forms of control for various missions have been developed and initial testing has been completed. These testbed systems, the associated navigation techniques currently used and planned for implementation, and long-term mission strategies employing them are described.


computer vision and pattern recognition | 1989

Visual terrain matching for a Mars rover

Donald B. Gennery

A method of matching unequally spaced height maps is described. This method would be useful in a Mars rover which tries to refine the estimate of its position by matching the data from its stereo vision system or laser rangefinder to data obtained from a camera orbiting Mars. The refined position can be used to merge the two sets of data with proper registration. The method is designed to make full use of the information contained in the data, including accuracy estimates in the form of covariance matrices and reliability estimates in the form of probabilities. Means of extracting this information from stereo vision are discussed. The terrain matching process uses a coarse-to-fine strategy, and it includes automatic editing to remove points with excessive disagreement. An example using real data from an experimental vehicle is presented.
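The combination of accuracy-weighted estimation and automatic editing that the abstract describes can be sketched in a deliberately reduced form: estimating a single vertical offset between two already-registered height sets. The full method solves for horizontal registration as well, coarse-to-fine, with full covariance matrices; the scalar variances and single editing pass here are simplifications.

```python
import numpy as np

def match_heights(h_rover, h_orbital, var, edit_sigma=3.0):
    """Estimate the vertical offset between two registered height sets by a
    variance-weighted mean of the differences, with one pass of automatic
    editing that discards points whose residual exceeds edit_sigma standard
    deviations (a toy analogue of the paper's covariance-weighted matching)."""
    d = np.asarray(h_rover, float) - np.asarray(h_orbital, float)
    v = np.asarray(var, float)
    w = 1.0 / v                                   # inverse-variance weights
    offset = np.sum(w * d) / np.sum(w)            # weighted LS estimate
    resid = d - offset
    keep = np.abs(resid) < edit_sigma * np.sqrt(v)  # automatic editing
    return np.sum(w[keep] * d[keep]) / np.sum(w[keep])
```

Points with large measured variance both pull less on the estimate and get a wider editing tolerance, so sparse, uneven-accuracy data does not let a single bad range point corrupt the registration.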


Robotics and IECON '87 Conferences | 1987

A Vision System For A Mars Rover

Brian H. Wilcox; Donald B. Gennery; Andrew Mishkin; Brian K. Cooper; Teri B. Lawton; N. Keith Lay; Steven P. Katzmann

A Mars rover must be able to sense its local environment with sufficient resolution and accuracy to avoid local obstacles and hazards while moving a significant distance each day. Power efficiency and reliability are extremely important considerations, making stereo correlation an attractive method of range sensing compared to laser scanning, if the computational load and correspondence errors can be handled. Techniques for treatment of these problems, including the use of more than two cameras to reduce correspondence errors and possibly to limit the computational burden of stereo processing, have been tested at JPL. Once a reliable range map is obtained, it must be transformed to a plan view and compared to a stored terrain database, in order to refine the estimated position of the rover and to improve the database. The slope and roughness of each terrain region are computed, which form the basis for a traversability map allowing local path planning. Ongoing research and field testing of such a system are described.
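The idea of using a third camera to reject false correspondences can be sketched with a trinocular consistency check: with three cameras equally spaced along one baseline, the disparity between the outer pair must be twice the disparity between the inner pair, and candidates failing that check are discarded. This is an illustrative sketch under those geometric assumptions, not the specific matcher used at JPL; the SAD window matching and function names are assumptions.

```python
def best_disparity(row_ref, row_other, x, window=2, max_d=10):
    """Best sum-of-absolute-differences match along a scanline for the
    pixel at column x of row_ref."""
    def sad(d):
        return sum(abs(row_ref[x + k] - row_other[x - d + k])
                   for k in range(-window, window + 1))
    return min(range(0, max_d + 1), key=sad)

def verified_disparity(row1, row2, row3, x, tol=1):
    """Trinocular check: for cameras 1, 2, 3 equally spaced on one baseline,
    the 1-3 disparity should be twice the 1-2 disparity. Accept a candidate
    only when both pairs agree, which rejects many false correspondences."""
    d12 = best_disparity(row1, row2, x)
    d13 = best_disparity(row1, row3, x, max_d=20)
    return d12 if abs(d13 - 2 * d12) <= tol else None
```

An ambiguous or repetitive texture that fools one camera pair rarely fools both pairs at consistent disparities, which is the error-reduction mechanism the abstract points to.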


international conference on robotics and automation | 1987

Sensing and perception research for space telerobotics at JPL

Donald B. Gennery; Todd Litwin; Brian H. Wilcox; Bruce Bon

A useful space telerobot for on-orbit assembly, maintenance, and repair tasks must have a sensing and perception subsystem which can provide the locations, orientations, and velocities of all relevant objects in the work environment. This function must be accomplished with sufficient speed and accuracy to permit effective grappling and manipulation. Appropriate symbolic names must be attached to each object for use by higher-level planning algorithms. Sensor data and inferences must be presented to the remote human operator in a way that is both comprehensible enough to ensure safe autonomous operation and useful for direct teleoperation. Research at JPL toward these objectives is described.


Machine Intelligence and Pattern Recognition | 1986

Stereo Vision for the Acquisition and Tracking of Moving Three-Dimensional Objects

Donald B. Gennery

A multiple-camera motion stereo solution is described that uses two-dimensional positions and velocities of tracked features to produce absolute three-dimensional positions of the features on a moving rigid object, without matching features between cameras. A system is described for automatically initializing the tracking of a known moving object by using such a motion stereo solution, using its results to aid in stereo matching to improve the accuracy of these results, and then matching these three-dimensional feature positions to an object model.
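The core of motion stereo, recovering depth from image velocity rather than from matching between cameras, can be illustrated for the simplest case of pure known translation under a pinhole model. Differentiating x = f*X/Z gives the image velocity u = (f*Vx - x*Vz)/Z, so each velocity component yields a depth estimate. This is a reduced sketch: the paper's solution also handles rotation, multiple cameras, and a rigid-object constraint, and the function below is an assumption for illustration.

```python
def depth_from_motion(x, y, u, v, V, f=1.0):
    """Depth of a feature from its image position (x, y) and image velocity
    (u, v), given a known relative translational velocity V = (Vx, Vy, Vz)
    (pinhole camera, focal length f). Differentiating x = f*X/Z gives
    u = (f*Vx - x*Vz)/Z, and similarly for v; each nonzero component yields
    a depth estimate, combined here by simple averaging."""
    estimates = []
    if abs(u) > 1e-12:
        estimates.append((f * V[0] - x * V[2]) / u)
    if abs(v) > 1e-12:
        estimates.append((f * V[1] - y * V[2]) / v)
    return sum(estimates) / len(estimates)
```

Because depth comes from each camera's own tracked velocities, no correspondence between cameras is needed, which is what lets the system bootstrap the later, more accurate stereo matching.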


Autonomous robot vehicles | 1990

A Mars rover for the 1990's

Brian H. Wilcox; Donald B. Gennery

Some technical issues concerning a Mars rover launched in the 1990’s are discussed. Two particular modes of controlling the travelling of the vehicle are described. In one mode, most of the control is from Earth, by human operators viewing stereo pictures sent from the rover and designating short routes to follow. In the other mode, computer vision is used in order to make the rover more autonomous, but reliability is aided by the use of orbital imagery and approximate long routes sent from Earth. In the latter case, it is concluded that average travel rates of around 10 km/day are feasible.


OE LASE'87 and EO Imaging Symp (January 1987, Los Angeles) | 1987

Real-Time Model-Based Vision System For Object Acquisition And Tracking

Brian H. Wilcox; Donald B. Gennery; Bruce Bon; Todd Litwin

A useful space telerobot for on-orbit assembly, maintenance, and repair tasks must have a sensing and perception subsystem which can provide the locations, orientations, and velocities of all relevant objects in the work environment. This function must be accomplished with sufficient speed and accuracy to permit effective grappling and manipulation. Appropriate symbolic names must be attached to each object for use by higher-level planning algorithms. Sensor data and inferences must be presented to the remote human operator in a way that is both comprehensible enough to ensure safe autonomous operation and useful for direct teleoperation. Research at JPL toward these objectives is described.

Collaboration

Top co-authors of Donald B. Gennery (all at the California Institute of Technology):

Brian H. Wilcox
Andrew Mishkin
Todd Litwin
Brian K. Cooper
Bruce Bon
Eugene C. Chalfant
Henry W. Stone
Larry H. Matthies
N. Keith Lay
Steven P. Katzmann