Ross S. Eaton
Charles River Analytics
Publications
Featured research published by Ross S. Eaton.
International Conference on Computer Vision Systems | 2006
Ross S. Eaton; Mark R. Stevens; Jonah C. McBride; Greydon T. Foil; Magnus Snorrason
Over the last 30 years, scale space representations have emerged as a fundamental tool for allowing systems to become increasingly robust against changes in camera viewpoint. Unfortunately, the implementation details that are required to properly construct a scale space representation are not published in the literature. Incorrectly implementing these details will lead to extremely poor system performance. In this paper, we address the practical considerations associated with scale space representations. Our focus is to make explicit how a scale space is constructed, thereby increasing the accessibility of this powerful representation to developers of computer vision systems.
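To make the construction concrete, here is a minimal Python sketch of a conventional Gaussian scale-space pyramid; the octave count, scales per octave, and base sigma are illustrative defaults, not values taken from the paper.

```python
import cv2
import numpy as np

def build_scale_space(image, num_octaves=4, scales_per_octave=3, sigma0=1.6):
    """Build a Gaussian scale-space pyramid.

    Illustrative parameters only; the paper's exact construction
    details (initial blur, sampling strategy) may differ.
    """
    pyramid = []
    base = image.astype(np.float32)
    k = 2.0 ** (1.0 / scales_per_octave)
    for _ in range(num_octaves):
        octave_images = []
        for s in range(scales_per_octave):
            # Blur for this level relative to the octave's base image.
            sigma = sigma0 * (k ** s)
            blurred = cv2.GaussianBlur(base, (0, 0), sigmaX=sigma)
            octave_images.append(blurred)
        pyramid.append(octave_images)
        # Downsample the most-blurred image to seed the next octave.
        base = cv2.resize(octave_images[-1], None, fx=0.5, fy=0.5,
                          interpolation=cv2.INTER_NEAREST)
    return pyramid
```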
Computer Vision and Pattern Recognition | 2005
Jonah C. McBride; Magnus Snorrason; Thomas G. Goodsell; Ross S. Eaton; Mark R. Stevens
Mobile robot designers frequently look to computer vision to solve navigation, obstacle avoidance, and object detection problems such as those encountered in parking lot surveillance. Stereo reconstruction is a useful technique in this domain. The advantage of a single-camera stereo method versus a stereo rig is the flexibility to change the baseline distance to best match each scenario. This directly increases the robustness of the stereo algorithm and increases the effective range of the system. The challenge comes from accurately rectifying the images into an ideal stereo pair. Structure from motion (SFM) can be used to compute the camera motion between the two images, but its accuracy is limited and small errors can cause rectified images to be misaligned. We present a single-camera stereo system that incorporates a Levenberg-Marquardt minimization of rectification parameters to bring the rectified images into alignment.
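As an illustration of the refinement step, the sketch below uses SciPy's Levenberg-Marquardt solver to minimize vertical disparity between matched points; the three-angle parameterization and residual definition are assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import least_squares

def vertical_disparity_residuals(params, pts_left, pts_right):
    """Residuals: row differences of corresponding points after applying
    a small corrective rotation (rx, ry, rz) to the right image.

    Points are assumed to be in normalized camera coordinates; the
    parameterization is an illustrative choice.
    """
    rx, ry, rz = params
    # Small-angle rotation matrix (first-order approximation).
    R = np.array([[1.0, -rz, ry],
                  [rz, 1.0, -rx],
                  [-ry, rx, 1.0]])
    pts_h = np.hstack([pts_right, np.ones((len(pts_right), 1))])
    corrected = pts_h @ R.T
    corrected = corrected[:, :2] / corrected[:, 2:3]
    # In an ideal rectified pair, matching points share the same row.
    return corrected[:, 1] - pts_left[:, 1]

# pts_left, pts_right: Nx2 arrays of matched feature locations
# result = least_squares(vertical_disparity_residuals, x0=np.zeros(3),
#                        args=(pts_left, pts_right), method='lm')
```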
International Conference on Pattern Recognition | 2004
Mark R. Stevens; Magnus Snorrason; Ross S. Eaton; Jonah C. McBride
All aircraft rely on onboard sensor systems for navigation. The drawback to inertial sensors is that their position error compounds over time. Global Positioning Systems (GPS) overcome that problem for location but not orientation; GPS is also sensitive to signal dropout and hostile jamming. We present a system capable of estimating the six-degree-of-freedom (6DOF) state (geo-location and orientation) of an air vehicle given digital terrain elevation data (DTED), an estimate of the vehicle's velocity, and a sequence of images from an onboard video camera. This system first reconstructs individual terrain map sections from pairs of images taken by the onboard video camera as it flies over the terrain. The velocity estimate is used to convert the arbitrary, up-to-scale units produced by the reconstruction to metric units (such as meters). The individual reconstructed map sections are then stitched together to form a larger terrain map, which is matched against the DTED to produce an estimate of the 6DOF state of the vehicle.
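The scale-recovery step can be illustrated in a few lines: a single-camera reconstruction is only defined up to scale, and the known vehicle speed fixes that scale. Function and variable names here are hypothetical.

```python
import numpy as np

def metric_scale_factor(speed_mps, frame_interval_s, recon_translation):
    """Recover the metric scale of an up-to-scale SFM reconstruction.

    SFM from a single camera yields translation only up to an unknown
    scale; a known vehicle speed fixes it. The constant-velocity
    assumption between frames is illustrative.
    """
    # Distance actually traveled between the two frames, in meters.
    metric_baseline = speed_mps * frame_interval_s
    # Baseline length in the reconstruction's arbitrary units.
    recon_baseline = np.linalg.norm(recon_translation)
    return metric_baseline / recon_baseline

# Apply: points_metric = metric_scale_factor(...) * points_recon
```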
Proceedings of SPIE, the International Society for Optical Engineering | 2008
Ross S. Eaton; Magnus Snorrason; John M. Irvine; Steve Vanstone
Automatic target detection (ATD) systems process imagery to detect and locate targets in support of intelligence, surveillance, reconnaissance, and strike missions. Accurate prediction of ATD performance would assist in system design and trade studies, collection management, and mission planning. Specifically, a need exists for ATD performance prediction based exclusively on information available from the imagery and its associated metadata. In response to this need, we undertake a modeling effort that consists of two phases: a learning phase, where image measures are computed for a set of test images, the ATD performance is measured, and a prediction model is developed; and a second phase to test and validate performance prediction. The learning phase produces a mapping, valid across various ATD algorithms, which is even applicable when no image truth is available (e.g., when evaluating denied area imagery). Ongoing efforts to develop such a prediction model have met with some success. Previous results presented models to predict performance for several ATD methods. This paper extends the work in several ways: extension to a new ATD method, application of the modeling to a new image set, and an investigation of systematic changes in the image properties (resolution, noise, contrast). The paper concludes with a discussion of future research.
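A minimal sketch of the learning phase described above, assuming a simple linear model mapping image measures to detection performance; the measures, values, and model form are placeholders, not the paper's.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical learning phase: rows are test images, columns are image
# measures computed from the imagery and metadata (e.g., contrast,
# noise level, resolution); y is measured ATD performance.
X_train = np.array([[0.42, 12.0, 0.30],
                    [0.55, 8.5, 0.45],
                    [0.38, 15.2, 0.25]])   # image measures (placeholders)
y_train = np.array([0.61, 0.78, 0.52])     # measured detection rates

model = LinearRegression().fit(X_train, y_train)

# Validation phase: predict performance for new imagery, with no
# image truth required.
X_new = np.array([[0.48, 10.1, 0.35]])
predicted_pd = model.predict(X_new)
```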
Unmanned Ground Vehicle Technology Conference | 2004
Jonah C. McBride; Magnus Snorrason; Thomas R. Goodsell; Ross S. Eaton; Mark R. Stevens
Mobile robot designers frequently look to computer vision to solve navigation, obstacle avoidance, and object detection problems. Potential solutions using low-cost video cameras are particularly alluring. Recent results in 3D scene reconstruction from a single moving camera seem particularly relevant, but robot designers who attempt to use such 3D techniques have uncovered a variety of practical concerns. We present lessons learned from developing a single-camera 3D scene reconstruction system that provides both a real-time camera motion estimate and a rough model of major 3D structures in the robot's vicinity. Our objective is to use the motion estimate to supplement GPS (indoors in particular) and to use the model to provide guidance for further vision processing (look for signs on walls, obstacles on the ground, etc.). The computational geometry involved is closely related to traditional two-camera stereo; however, a number of degenerate cases exist. We also demonstrate how structure from motion (SFM) can be used to improve the performance of two specific robot navigation tasks.
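For reference, a standard two-view motion-estimation pipeline of the kind discussed here can be sketched with OpenCV; the feature choice (ORB) and RANSAC settings are illustrative, not the paper's implementation.

```python
import cv2
import numpy as np

def estimate_camera_motion(img1, img2, K):
    """Estimate relative camera motion between two frames from a single
    moving camera, via feature matching and the essential matrix.
    """
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC rejects outlier matches; degenerate motions (e.g., pure
    # rotation) need special handling, as the abstract notes.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # translation is recovered only up to scale
```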
Proceedings of SPIE | 2013
Ross S. Eaton; Jonah C. McBride; Joseph Bates
The Office of Naval Research (ONR) is looking for methods to perform higher levels of sensor processing onboard UAVs to alleviate the need to transmit full motion video to ground stations over constrained data links. Charles River Analytics is particularly interested in performing intelligence, surveillance, and reconnaissance (ISR) tasks using UAV sensor feeds. Computing with approximate arithmetic can provide a 10,000x improvement in size, weight, and power (SWAP) over desktop CPUs, thereby enabling ISR processing onboard small UAVs. Charles River and Singular Computing are teaming on an ONR program to develop these low-SWAP ISR capabilities using a small, low-power, single-chip machine, developed by Singular Computing, with many thousands of cores. Producing reliable results efficiently on massively parallel approximate machines requires adapting the core kernels of algorithms. We describe a feature-aided tracking algorithm adapted for the novel hardware architecture, which will be suitable for use onboard a UAV. Tests have shown the algorithm produces results equivalent to state-of-the-art traditional approaches while achieving a 6400x improvement in speed/power ratio.
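The tolerance of tracking kernels to approximate arithmetic can be illustrated by computing a correlation score at reduced precision; float16 on a CPU is only a crude stand-in for Singular Computing's hardware, and the kernel below is a generic example rather than the paper's algorithm.

```python
import numpy as np

def ncc(template, window, dtype=np.float32):
    """Normalized cross-correlation score, computed at a given precision.

    Simulating low precision with float16 is only a rough proxy for
    dedicated approximate-arithmetic cores.
    """
    t = template.astype(dtype)
    w = window.astype(dtype)
    t = t - t.mean()
    w = w - w.mean()
    denom = np.sqrt((t * t).sum() * (w * w).sum())
    return float((t * w).sum() / denom)

rng = np.random.default_rng(0)
template = rng.random((16, 16))
window = template + 0.05 * rng.random((16, 16))
exact = ncc(template, window, np.float32)
approx = ncc(template, window, np.float16)
# The two scores agree closely, illustrating why correlation-based
# tracking kernels can tolerate reduced-precision arithmetic.
```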
Automatic Target Recognition XVII | 2007
Ross S. Eaton; Magnus Snorrason
Automatic target recognition (ATR) using an infrared (IR) sensor is a particularly appealing combination, because an IR sensor can overcome various types of concealment and works in both day and night conditions. We present a system for ATR on low-resolution IR imagery. We describe the system architecture and methods for feature extraction and feature subset selection. We also compare two types of classifier, K-Nearest Neighbors (KNN) and Random Decision Tree (RDT). Our experiments test the recognition accuracy of the classifiers, within our ATR system, on a variety of IR datasets. Results show that RDT and KNN achieve comparable performance across the tested datasets, but that RDT requires significantly less retrieval time on large datasets and in high-dimensional feature spaces. Therefore, we conclude that RDT is a promising classifier to enable a robust, real-time ATR solution.
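The classifier comparison can be mimicked with off-the-shelf tools; here ExtraTreesClassifier serves as a rough analogue of the paper's random decision trees (it is not the authors' implementation), and the data are synthetic placeholders for IR feature vectors.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data: rows are feature vectors extracted from IR
# image chips, labels are target classes.
rng = np.random.default_rng(0)
X = rng.random((500, 40))
y = rng.integers(0, 5, 500)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("RDT-like", ExtraTreesClassifier(n_estimators=50,
                                                    random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```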
Applied Imagery Pattern Recognition Workshop | 2015
John M. Irvine; Mon Young; Stan German; Ross S. Eaton
The quality of an image affects its utility for various analytic tasks. For security screening of baggage, the quality of the X-ray image will affect the ability of human operators to detect and identify relevant objects. This paper presents a recent protocol aimed at the development of a perception-based standard for assessing the quality of X-ray images of baggage. This standard provides a quantitative method for assessing X-ray image quality from the display, as presented to security officers. Furthermore, it provides a framework for understanding how different variables (belt speed, scanner orientation, degree of clutter in the image, ambient lighting, etc.) affect the quality of images taken from X-ray scanners at security checkpoints. The paper describes the protocol, summarizes the analysis and findings, and presents a method for employing the results to assess the performance of a scanner system.
Applied Imagery Pattern Recognition Workshop | 2008
Ross S. Eaton; Jessica Lowell; Magnus Snorrason; John M. Irvine; Jonathan Mills
Computer vision methods, such as automatic target recognition (ATR) techniques, have the potential to improve the accuracy of military systems for weapon deployment and targeting, resulting in greater utility and reduced collateral damage. A major challenge, however, is training the ATR algorithm to the specific environment and mission. Because of the wide range of operating conditions encountered in practice, advanced training based on a pre-selected training set may not provide the robust performance needed. Training on a mission-specific image set is a promising approach, but requires rapid selection of a small, but highly representative training set to support time-critical operations. To remedy these problems and make short-notice seeker missions a reality, we developed learning and mining using bagged augmented decision trees (LAMBAST). LAMBAST examines large databases and extracts sparse, representative subsets of target and clutter samples of interest. For data mining, LAMBAST uses a variant of decision trees, called random decision trees (RDTs). This approach guards against overfitting and can incorporate novel, mission-specific data after initial training via perpetual learning. We augment these trees with a distribution modeling component that eliminates redundant information, ignores misrepresentative class distributions in the database, and stops training when decision boundaries are sufficiently sampled. These augmented random decision trees enable fast investigation of multiple images to train a reliable, mission-specific ATR. This paper presents the augmented random decision tree framework, develops the sampling procedure for efficient construction of the sample, and illustrates the procedure using relevant examples.
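A greedy sketch of the kind of subset selection LAMBAST performs: keep only samples the current trees are uncertain about, and stop when new batches contribute little. The thresholds, loop structure, and use of ExtraTreesClassifier as an RDT stand-in are all assumptions, not the published procedure.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

def select_representative_subset(X, y, batch=50, conf_thresh=0.9,
                                 min_new=5):
    """Greedily grow a sparse, representative training subset.

    Samples the current ensemble already classifies confidently are
    treated as redundant and skipped; training stops once a batch
    yields too few informative additions.
    """
    idx = np.arange(len(X))
    np.random.default_rng(0).shuffle(idx)
    keep = list(idx[:batch])                       # seed subset
    clf = ExtraTreesClassifier(n_estimators=25, random_state=0)
    for start in range(batch, len(idx), batch):
        clf.fit(X[keep], y[keep])
        cand = idx[start:start + batch]
        conf = clf.predict_proba(X[cand]).max(axis=1)
        new = cand[conf < conf_thresh]             # uncertain samples only
        keep.extend(new.tolist())
        if len(new) < min_new:                     # boundaries well sampled
            break
    return np.array(keep)
```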
Proceedings of SPIE, the International Society for Optical Engineering | 2006
Jonah C. McBride; Magnus Snorrason; Ross S. Eaton; N. Checka; Austin Reiter; G. Foil; Mark R. Stevens
Many fielded mobile robot systems have demonstrated the importance of directly estimating the 3D shape of objects in the robot's vicinity. The most mature solutions available today use active laser scanning or stereo camera pairs, but both approaches require specialized and expensive sensors. In prior publications, we have demonstrated the generation of stereo images from a single very low-cost camera using structure from motion (SFM) techniques. In this paper we demonstrate the practical usage of single-camera stereo in real-world mobile robot applications. Stereo imagery tends to produce incomplete 3D shape reconstructions of man-made objects because of smooth/glary regions that defeat stereo matching algorithms. We demonstrate robust object detection despite such incompleteness through matching of simple parameterized geometric models. Results are presented where parked cars are detected, and then recognized via license plate recognition, all in real time by a robot traveling through a parking lot.
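Matching a simple parameterized geometric model to an incomplete 3D reconstruction can be sketched as a least-squares fit; the box model and ground-plane parameterization below are illustrative stand-ins for the paper's vehicle models, with hypothetical names and dimensions.

```python
import numpy as np
from scipy.optimize import least_squares

def box_residuals(params, points, dims):
    """Distance from each 3D point to the surface of a yawed box.

    A simple parameterized model (center x, z and yaw for a box of
    known dimensions) standing in for a parked-car model; incomplete
    point clouds still constrain the fit.
    """
    cx, cz, yaw = params
    c, s = np.cos(yaw), np.sin(yaw)
    # Transform points into the box frame (ground-plane rotation only).
    p = points - np.array([cx, 0.0, cz])
    local = np.column_stack([c * p[:, 0] + s * p[:, 2],
                             p[:, 1],
                             -s * p[:, 0] + c * p[:, 2]])
    # Distance outside the box along each axis, zero for inside points.
    half = np.asarray(dims) / 2.0
    outside = np.maximum(np.abs(local) - half, 0.0)
    return np.linalg.norm(outside, axis=1)

# points: Nx3 partial stereo reconstruction of a suspected vehicle
# fit = least_squares(box_residuals, x0=[0.0, 0.0, 0.0],
#                     args=(points, (4.5, 1.5, 1.8)))
```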