James R. McBride
Ford Motor Company
Publication
Featured research published by James R. McBride.
The International Journal of Robotics Research | 2011
Gaurav Pandey; James R. McBride; Ryan M. Eustice
In this paper we describe a data set collected by an autonomous ground vehicle testbed based upon a modified Ford F-250 pickup truck. The vehicle is outfitted with professional (Applanix POS-LV) and consumer (Xsens MTi-G) grade inertial measurement units, a Velodyne three-dimensional lidar scanner, two push-broom forward-looking Riegl lidars, and a Point Grey Ladybug3 omnidirectional camera system. Here we present the time-registered data from these sensors mounted on the vehicle, collected while driving the vehicle around the Ford Research Campus and downtown Dearborn, MI, during November–December 2009. The vehicle path trajectory in these data sets contains several large- and small-scale loop closures, which should be useful for testing various state-of-the-art computer vision and simultaneous localization and mapping algorithms.
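The abstract emphasizes that the sensor streams are time-registered. As a rough illustration of what consuming such a data set involves, the sketch below matches each camera frame to its nearest lidar scan by timestamp; the file names and the 50 ms tolerance are assumptions for illustration, not the data set's actual layout.

```python
import numpy as np

# Hypothetical per-sensor timestamp files (seconds); names are placeholders.
lidar_t = np.loadtxt("velodyne_timestamps.txt")    # one timestamp per scan
camera_t = np.loadtxt("ladybug_timestamps.txt")    # one timestamp per image

def nearest_match(query_t, ref_t, max_dt=0.05):
    """Index of the nearest reference timestamp for each query timestamp,
    or -1 when the gap exceeds max_dt seconds."""
    idx = np.clip(np.searchsorted(ref_t, query_t), 1, len(ref_t) - 1)
    idx -= (query_t - ref_t[idx - 1]) < (ref_t[idx] - query_t)
    dt = np.abs(ref_t[idx] - query_t)
    return np.where(dt <= max_dt, idx, -1)

pairs = nearest_match(camera_t, lidar_t)
print(f"matched {np.sum(pairs >= 0)} of {len(camera_t)} images to lidar scans")
```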
IFAC Proceedings Volumes | 2010
Gaurav Pandey; James R. McBride; Silvio Savarese; Ryan M. Eustice
We propose an approach for external calibration of a 3D laser scanner with an omnidirectional camera system. The utility of an accurate calibration is that it allows for precise co-registration between the camera imagery and the 3D point cloud. This association can be used to enhance various state-of-the-art algorithms in computer vision and robotics. The extrinsic calibration technique used here is similar to the calibration of a 2D laser range finder and a single camera as proposed by Zhang (2004), but has been extended to the case where we have a 3D laser scanner and an omnidirectional camera system. The procedure requires a planar checkerboard pattern to be observed simultaneously from the laser scanner and the camera system from a minimum of three views. The normal of the planar surface and the 3D points lying on the surface constrain the relative position and orientation of the laser scanner and the omnidirectional camera system. These constraints can be used to form a non-linear optimization problem that is solved for the extrinsic calibration parameters and the covariance associated with the estimated parameters. Results are presented for a real-world data set collected by a vehicle mounted with a 3D laser scanner and an omnidirectional camera system.
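A minimal sketch of the kind of non-linear optimization described above: each checkerboard view contributes point-to-plane constraints between the camera-observed board plane and the lidar points on it, and the six extrinsic parameters are solved by least squares. The data containers and the SciPy solver are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def plane_residuals(x, views):
    """x = [rx, ry, rz, tx, ty, tz]: rotation vector and translation taking
    lidar-frame points into the camera frame. Each view holds the checkerboard
    plane seen by the camera (unit normal n, offset d with n.X = d) and the
    lidar points lying on that board."""
    R = Rotation.from_rotvec(x[:3]).as_matrix()
    t = x[3:]
    res = []
    for n, d, pts in views:                       # pts: (N, 3) lidar points
        res.append(pts @ (R.T @ n) + n @ t - d)   # signed point-to-plane error
    return np.concatenate(res)

def calibrate(views, x0=np.zeros(6)):
    """Solve for the extrinsics from at least three checkerboard views."""
    sol = least_squares(plane_residuals, x0, args=(views,))
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```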
Journal of Field Robotics | 2015
Gaurav Pandey; James R. McBride; Silvio Savarese; Ryan M. Eustice
This paper reports on an algorithm for automatic, targetless, extrinsic calibration of a lidar and optical camera system based upon the maximization of mutual information between the sensor-measured surface intensities. The proposed method is completely data-driven and does not require any fiducial calibration targets, making in situ calibration easy. We calculate the Cramér-Rao lower bound (CRLB) of the estimated calibration parameter variance, and we show experimentally that the sample variance of the estimated parameters empirically approaches the CRLB when the amount of data used for calibration is sufficiently large. Furthermore, we compare the calibration results to independent ground truth where available and observe that the mean error empirically approaches zero as the amount of data used for calibration is increased, thereby suggesting that the proposed estimator is a minimum-variance unbiased estimator of the calibration parameters. Experimental results are presented for three different lidar-camera systems: (i) a three-dimensional (3D) lidar and omnidirectional camera, (ii) a 3D time-of-flight sensor and monocular camera, and (iii) a 2D lidar and monocular camera.
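At the heart of the method is a mutual-information score between lidar reflectivity and camera intensity at co-registered points. A minimal sketch of such a score, estimated from a joint histogram, is given below; projecting the lidar points into the image and the search over the six calibration parameters are omitted, and the bin count is arbitrary.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """MI (in nats) between two co-registered intensity samples, e.g. lidar
    reflectivity values and the grayscale values at their projected pixels,
    estimated from the joint histogram."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal over the first sample
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal over the second sample
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

# The calibration is then the parameter vector that maximizes this score:
#   theta* = argmax_theta  mutual_information(reflectivity, gray_at(project(theta)))
```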
international conference on robotics and automation | 2011
Gaurav Pandey; Silvio Savarese; James R. McBride; Ryan M. Eustice
This paper reports a novel algorithm for bootstrapping the automatic registration of unstructured 3D point clouds collected using co-registered 3D lidar and omnidirectional camera imagery. Here, we exploit the co-registration of the 3D point cloud with the available camera imagery to associate high-dimensional feature descriptors such as scale-invariant feature transform (SIFT) or speeded-up robust features (SURF) with the 3D points. We first establish putative point correspondences in the high-dimensional feature space and then use these correspondences in a random sample consensus (RANSAC) framework to obtain an initial rigid-body transformation that aligns the two scans. This initial transformation is then refined in a generalized iterative closest point (ICP) framework. The proposed method is completely data-driven and does not require any initial guess on the transformation. We present results from a real-world data set collected by a vehicle equipped with a 3D laser scanner and an omnidirectional camera.
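The pipeline above boils down to 3D-3D correspondences (obtained by matching image feature descriptors attached to the points) fed into a RANSAC loop that hypothesizes rigid transforms from minimal samples. A self-contained sketch of that RANSAC stage, using an SVD-based (Kabsch) alignment, is shown below; the descriptor matching and the generalized ICP refinement are omitted, and the thresholds are placeholders.

```python
import numpy as np

def kabsch(P, Q):
    """Rigid transform (R, t) minimizing ||R P_i + t - Q_i|| over paired points."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    return R, cq - R @ cp

def ransac_align(P, Q, iters=500, thresh=0.5, seed=0):
    """P, Q: (N, 3) putative 3D-3D correspondences (e.g. points paired by
    matching the SIFT/SURF descriptors attached to them). Returns the rigid
    transform supported by the largest inlier set."""
    rng = np.random.default_rng(seed)
    best, best_inliers = (np.eye(3), np.zeros(3)), 2
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)
        R, t = kabsch(P[idx], Q[idx])
        inliers = np.linalg.norm(P @ R.T + t - Q, axis=1) < thresh
        if inliers.sum() > best_inliers:
            best, best_inliers = kabsch(P[inliers], Q[inliers]), inliers.sum()
    return best   # feed this as the initial guess to generalized ICP
```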
intelligent robots and systems | 2012
Gaurav Pandey; James R. McBride; Silvio Savarese; Ryan M. Eustice
This paper reports a novel mutual information (MI) based algorithm for automatic registration of unstructured 3D point clouds built from co-registered 3D lidar and camera imagery. The proposed method provides a robust and principled framework for fusing the complementary information obtained from these two different sensing modalities. High-dimensional features are extracted from a training set of textured point clouds (scans) and hierarchical k-means clustering is used to quantize these features into a set of codewords. Using this codebook, any new scan can be represented as a collection of codewords. Under the correct rigid-body transformation aligning two overlapping scans, the MI between the codewords present in the scans is maximized. We apply a James-Stein-type shrinkage estimator to estimate the true MI from the marginal and joint histograms of the codewords extracted from the scans. Experimental results using scans obtained by a vehicle equipped with a 3D laser scanner and an omnidirectional camera are used to validate the robustness of the proposed algorithm over a wide range of initial conditions. We also show that the proposed method works well with 3D data alone.
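A short sketch of the James-Stein-type shrinkage idea mentioned above: empirical codeword frequencies are shrunk toward a uniform target before the MI is computed from the joint histogram. This follows the common Hausser-Strimmer formulation and is an illustration, not necessarily the paper's exact estimator.

```python
import numpy as np

def shrinkage_probs(counts):
    """James-Stein-type shrinkage of empirical frequencies toward a uniform
    target: p = lam * 1/K + (1 - lam) * p_ML, with lam chosen analytically."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    p_ml = counts / n
    target = np.full_like(p_ml, 1.0 / p_ml.size)
    denom = (n - 1.0) * np.sum((target - p_ml) ** 2)
    lam = 1.0 if denom == 0 else float(np.clip((1.0 - np.sum(p_ml ** 2)) / denom, 0.0, 1.0))
    return lam * target + (1.0 - lam) * p_ml

def shrinkage_mi(joint_counts):
    """MI between two scans' codewords from their joint count table (K x K)."""
    p = shrinkage_probs(joint_counts)
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))
```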
intelligent robots and systems | 2011
Nicholas Carlevaris-Bianco; Anush Mohan; James R. McBride; Ryan M. Eustice
This paper reports on a method for tracking a camera system within an a priori known map constructed from co-registered 3D light detection and ranging (LIDAR) and omnidirectional image data. Our method pre-processes the raw 3D LIDAR and camera data to produce a sparse map that can scale to city-size environments. From the original LIDAR and camera data we extract visual features and identify those that are most robust to varying viewpoint. This allows us to include only the visual features that are most useful for localization in the map. Additionally, we quantize the visual features using a vocabulary tree to further reduce the map's file size. We then use vision-based localization to track the vehicle's motion through the map. We present results on urban data collected with Ford Motor Company's autonomous vehicle testbed. In our experiments the map is built using urban data from winter 2009, and localization is performed using data collected in fall 2010 and winter 2011. This demonstrates our algorithm's robustness to temporal changes in the environment.
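The map compression step rests on quantizing descriptors against a visual vocabulary so that each landmark stores only a codeword index. The sketch below uses a single flat k-means codebook in place of a hierarchical vocabulary tree; the descriptor dimensions and vocabulary size are placeholders.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

# Hypothetical 128-D descriptors (e.g. SIFT) extracted offline from the map
# imagery; the array sizes and vocabulary size here are illustrative only.
descriptors = np.random.rand(5000, 128).astype(np.float32)

# Build a flat codebook; the paper uses a hierarchical vocabulary tree, but a
# single k-means level keeps the sketch short.
codebook, _ = kmeans2(descriptors, 256, minit="points")

# In the stored map each landmark keeps only a small integer codeword index
# instead of a full 128-float descriptor, shrinking the map file size.
words, _ = vq(descriptors, codebook)
print(words.shape, words.min(), words.max())
```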
Sensors and Actuators B-chemical | 2001
James R. McBride; K.E. Nietering; K.R. Ellwood
A large demand exists for sensors capable of measuring the various gas constituents present in automotive exhaust. Future advancements in engine control systems and on-board diagnostics for monitoring tailpipe emissions rely critically on the development of such devices. Sensors designed to employ the principle of differential calorimetry have been identified as among the more promising candidates for near-term automotive use in the detection of hydrocarbons and other combustible species. These calorimetric devices essentially consist of two temperature-sensing elements, one of which has been coated with a catalytic layer. Heat generated from oxidation of combustible species raises the temperature of the catalytically coated element relative to the other, thus providing a measure of the concentration of combustibles in the exhaust. To date, several different prototype calorimetric devices have been evaluated under laboratory and dynamometer conditions. The sensitivity of the devices tested, measured as temperature rise per concentration of combustible, has typically been about an order of magnitude less than that theoretically possible. In this paper, we examine how the choice of calorimeter design affects the optimum achievable sensitivity. Simple physical arguments are applied to explain why the sensors tested thus far have demonstrated sensitivities substantially lower than the theoretical limits. We discuss an alternative design configuration which significantly improves the calorimeter sensitivity. Results are presented from an analytical model describing this calorimeter design and from a simple experiment which validates the salient features predicted by the model.
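As a back-of-the-envelope illustration of the operating principle, the steady-state balance below equates the heat released by oxidation at the catalytic element with the heat lost from it, giving a temperature rise proportional to the combustible concentration. All coefficient values are placeholders chosen only to show plausible orders of magnitude, not figures from the paper.

```python
# Steady-state balance for one catalytically coated element:
#   heat generated by oxidation = heat lost from the element
#   dH * (mass-transfer flux of combustible) = h * dT
# All coefficient values below are placeholders for illustration only.
dH = 890e3     # J/mol, heat of combustion (methane, as an example species)
k_m = 0.05     # m/s, mass-transfer coefficient to the catalytic surface
h = 200.0      # W/(m^2 K), effective heat-loss coefficient of the element
c_tot = 20.0   # mol/m^3, total molar gas concentration (hot exhaust, ~1 atm)

def delta_t(ppm):
    """Temperature rise (K) of the coated element over the reference element."""
    flux = k_m * c_tot * ppm * 1e-6      # mol/(m^2 s) reaching the catalyst
    return dH * flux / h

print(f"dT ~ {delta_t(1000):.1f} K at 1000 ppm, i.e. {delta_t(1)*1e3:.1f} mK/ppm")
```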
Archive | 2000
V. V. Khatko; Eleftherios M. Logothetis; Richard E. Soltis; J. W. Hangas; James R. McBride
We are interested in the development of catalytic materials that are suitable for incorporation onto micromachined silicon gas sensors. The goal of the work is to explore methods to increase the activity and surface area of Pd and Pt catalysts. We employ sputtering techniques to fabricate catalytic layers with small grain size. In this paper, we study the catalytic properties of Pt/SiO2, Pd/SiO2, Pd/Al2O3, Pt/Al2O3, Pd/Au and Pt/Cr multilayer stacks formed by successive step-by-step deposition of ultra-thin films of the corresponding materials.
intelligent robots and systems | 2017
Gaurav Pandey; Shashank Giri; James R. McBride
This paper reports on a novel two-step algorithm for the estimation of the full 6-degree-of-freedom (DOF) [t_x, t_y, t_z, θ_x, θ_y, θ_z] rigid-body transformation between any two overlapping point clouds that have a dominant ground plane. We first estimate the ground plane (X-Y plane) from the two 3D point clouds and align them to obtain a good estimate of the distance between the ground planes (i.e. t_z) and the rotations θ_x and θ_y about the X and Y axes, respectively, using the Rodrigues rotation formula. The remaining parameters (t_x, t_y, θ_z) are then estimated by maximizing the total mutual information (MI) between the 2D feature maps generated from the multi-modal sensor data. Experimental results using scans obtained by a vehicle equipped with a 3D laser scanner and an omnidirectional camera are used to validate the robustness of the proposed algorithm over a wide range of initial conditions. The proposed method provides an efficient framework for multi-modal sensor data fusion and a robust solution to the scan alignment problem.
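A sketch of the first step described above, assuming the ground points of each scan have already been segmented out: fit a plane to each set, rotate one normal onto the other with the Rodrigues formula, and take the plane-offset difference as t_z. The MI maximization over (t_x, t_y, θ_z) is not shown.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane n.x = d through (N, 3) ground points."""
    c = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c)
    n = Vt[-1]
    if n[2] < 0:                      # orient every normal upward (+Z)
        n = -n
    return n, float(n @ c)

def rotation_between(a, b):
    """Rodrigues rotation taking unit vector a onto unit vector b (both ground
    normals are oriented +Z here, so the anti-parallel case is ignored)."""
    v = np.cross(a, b)
    s, c = np.linalg.norm(v), float(a @ b)
    if s < 1e-12:
        return np.eye(3)
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K * ((1.0 - c) / s**2)

def align_ground(src_ground, dst_ground):
    """Step 1: fix t_z, theta_x, theta_y by aligning the two ground planes."""
    n_s, d_s = fit_plane(src_ground)
    n_d, d_d = fit_plane(dst_ground)
    R = rotation_between(n_s, n_d)
    t_z = d_d - d_s                   # offset difference along the near-vertical normal
    return R, t_z
```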
The International Journal of Robotics Research | 2012
Gaurav Pandey; James R. McBride; Ryan M. Eustice
Owing to errors made at SAGE, the article: Ford Campus vision and lidar data set by Gaurav Pandey, James R McBride, and Ryan M Eustice, doi: 10.1177/0278364911400640, The International Journal of Robotics Research, November 2011 30: 1543–1552 was printed in black and white rather than colour. The online version of the paper is published in colour. SAGE apologises to the authors and to the readers.