Publications
Featured research published by Gary Witus.
Journal of Field Robotics | 2006
Lauro Ojeda; Johann Borenstein; Gary Witus; Robert E. Karlsen
This paper introduces novel methods for terrain classification and characterization with a mobile robot. In the context of this paper, terrain classification aims at associating terrains with one of a few predefined, commonly known categories, such as gravel, sand, or asphalt. Terrain characterization, on the other hand, aims at determining key parameters of the terrain that affect its ability to support vehicular traffic. Such properties are collectively called “trafficability.” The proposed terrain classification and characterization system comprises a skid-steer mobile robot, as well as some common and some uncommon but optional onboard sensors. Using these components, our system can characterize and classify terrain in real time and during the robot's actual mission. The paper presents experimental results for both the terrain classification and characterization methods. The methods proposed in this paper can likely also be implemented on tracked robots, although we did not test this option in our work.
Proceedings of SPIE - The International Society for Optical Engineering | 2005
Lauro Ojeda; Johann Borenstein; Gary Witus
Most research on off-road mobile robot sensing focuses on obstacle negotiation, path planning, and position estimation. These issues have conventionally been the foremost factors limiting the performance and speeds of mobile robots. Very little attention has been paid to date to the issue of terrain trafficability, that is, the terrain's ability to support vehicular traffic. Yet, trafficability is of great importance if mobile robots are to reach speeds that human-driven vehicles can reach on rugged terrain. For example, it is obvious that the maximal allowable speed for a turn is lower when driving over sand or wet grass than when driving on packed dirt or asphalt. This paper presents our work on automated real-time characterization of terrain with regard to trafficability for small mobile robots. The two proposed methods can be implemented on skid-steer mobile robots and possibly also on tracked mobile robots. The paper also presents experimental results for each of the two implemented methods.
SPIE's 1995 Symposium on OE/Aerospace Sensing and Dual Use Photonics | 1995
Grant R. Gerhart; Thomas J. Meitzler; Eui Jung Sohn; Gary Witus; George H. Lindquist; J. Richard Freeling
This paper examines the applicability of computational vision models (CVM) to characterize thermal and visual imagery. A specific CVM model is described for the analysis of individual target characteristics and background clutter. A unique feature of the methodology is the spatial and temporal decomposition of the input image into various bandpass filters or channels. A description is given of the various model processes along with some representative examples of the subsequent analysis.
Intelligent Robots and Systems | 2007
Robert E. Karlsen; Gary Witus
This paper presents a method to forecast terrain trafficability from visual appearance. During training, the system identifies a set of image chips (or exemplars) that span the range of terrain appearance. Each chip is assigned a vector tag of vehicle-terrain interaction characteristics that are obtained from on-board sensors and simple performance models, as the vehicle traverses the terrain. The system uses the exemplars to segment images into regions, based on visual similarity to the terrain patches observed during training, and assigns the appropriate vehicle-terrain interaction tag to them. This methodology will therefore allow the online forecasting of vehicle performance on upcoming terrain. Currently, we are using fuzzy c-means clustering and exploring a number of different features for characterizing the visual appearance of the terrain.
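The fuzzy c-means clustering mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the toy "image chip" feature vectors, cluster count, fuzziness exponent, and iteration budget are all illustrative assumptions.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns (cluster centers, membership matrix U)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # each row sums to 1
    for _ in range(iters):
        Um = U ** m
        # Weighted centroids of the fuzzy memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distances of every sample to every center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                  # avoid division by zero
        # Standard membership update: u_ij ∝ d_ij^(-2/(m-1))
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Toy features standing in for image-chip descriptors: two well-separated
# terrain-appearance clusters.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
centers, U = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)
```

In a trafficability setting, each exemplar chip's hard label (the membership argmax) would index into the stored vehicle-terrain interaction tags, while the soft memberships indicate how confidently a new image region matches a trained terrain class.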
SPIE's International Symposium on Optical Engineering and Photonics in Aerospace Sensing | 1994
George H. Lindquist; Gary Witus; Thomas H. Cook; J. Richard Freeling; Grant R. Gerhart
The current DoD target acquisition models have two primary deficiencies: they use simplistic representations of the vehicle and background signatures, and a highly simplified description of the human observer. The current signature representation often fails for complex signature configurations, yields inaccurate detectability and marginal pay-off predictions for low signature vehicles, is not extensible to false alarms and temporal cues, and precludes vehicle design guidance and diagnosis. The current human observer model is simplified to the same degree as the signature representation, and as such is not extensible to high fidelity signature representations. In answer to the noted deficiencies, we have developed the TARDEC visual model (TVM). We have adopted an alternative approach that is based on emerging academic computational vision models (CVM). Our approach is tailored to visual signatures, though the model is applicable to thermal, SAR, and other categories of imagery. Color imagery, input to the model, is initially transformed into a 3D color-opponent space comprising luminance, red-green, and yellow-blue axes. Each plane in the color-opponent space is then decomposed by local, oriented spatial frequency analyzers (Gabor or wavelet filters) in keeping with current knowledge of retinal/cortical processing. Signal-to-noise statistics are then calculated on each channel, appropriately aggregated over all channels, and used within the signal detection theory context to predict detection and false alarm probabilities.
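The front-end processing described above (color-opponent transform followed by oriented Gabor decomposition) can be sketched in a few lines. This is a generic illustration under common textbook conventions, not the TVM code; the specific opponent-axis weights, filter sizes, and the FFT-based energy measure are assumptions.

```python
import numpy as np

def opponent_channels(rgb):
    """RGB image (H, W, 3) -> luminance, red-green, yellow-blue planes."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    lum = (r + g + b) / 3.0
    rg = r - g
    yb = (r + g) / 2.0 - b
    return lum, rg, yb

def gabor_kernel(size, theta, freq, sigma):
    """Real (cosine-phase) oriented Gabor kernel."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate the carrier axis
    env = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return env * np.cos(2.0 * np.pi * freq * xr)

def channel_energy(plane, kernel):
    """Mean squared filter response via FFT (circular) convolution."""
    h, w = plane.shape
    K = np.fft.fft2(kernel, s=(h, w))
    resp = np.real(np.fft.ifft2(np.fft.fft2(plane) * K))
    return float(np.mean(resp**2))

# A gray vertical grating: it should drive a matched vertically tuned
# filter far harder than an orthogonally tuned one.
x = np.arange(64)
grating = np.cos(2.0 * np.pi * x / 8.0)[None, :].repeat(64, axis=0)
rgb = np.stack([grating] * 3, axis=-1)
lum, rg, yb = opponent_channels(rgb)
e_matched = channel_energy(lum, gabor_kernel(15, 0.0, 1/8, 4.0))
e_orthogonal = channel_energy(lum, gabor_kernel(15, np.pi / 2, 1/8, 4.0))
```

Per-channel energies like these are the raw material for the signal-to-noise statistics the abstract refers to; a full model would pool them across orientations, scales, and the three opponent planes.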
Proceedings of SPIE, the International Society for Optical Engineering | 2008
Robert E. Karlsen; Gary Witus
This paper presents an exploration of methods for estimating terrain trafficability from visual appearance. Two different sets of data are used. The first set is extracted from video sequences and has a small number of different terrains. A fuzzy c-means clustering algorithm is used to predict terrain type. The second set is derived from high-resolution still images and has a large variety of terrains. A decision tree algorithm is used to provide a subjective assessment of trafficability. A variety of local features are explored, based on color and texture, as input to the learning algorithms.
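The local color/texture features and tree-based assessment can be sketched as follows. This is a hypothetical illustration, not the paper's feature set: the patch descriptor and the one-level tree (a decision stump, standing in for a full decision tree) are simplifying assumptions.

```python
import numpy as np

def patch_features(patch):
    """Simple color + texture features for one image patch (H, W, 3):
    mean color plus a gradient-based roughness proxy."""
    mean_color = patch.reshape(-1, 3).mean(axis=0)
    gray = patch.mean(axis=2)
    gx = np.diff(gray, axis=1)
    gy = np.diff(gray, axis=0)
    texture = np.array([gx.std() + gy.std()])
    return np.concatenate([mean_color, texture])

def fit_stump(X, y):
    """One-level decision tree: pick the (feature, threshold, polarity)
    that maximizes training accuracy on binary labels y."""
    best = None
    for j in range(X.shape[1]):
        for t in X[:, j]:
            for pol in (0, 1):
                pred = (X[:, j] > t).astype(int) ^ pol
                acc = float((pred == y).mean())
                if best is None or acc > best[0]:
                    best = (acc, j, t, pol)
    return best

# Toy data: smooth bright patches (label 0, e.g. "easily trafficable")
# vs. rough noisy patches (label 1).
rng = np.random.default_rng(1)
smooth = [np.full((8, 8, 3), 0.8) for _ in range(5)]
rough = [rng.random((8, 8, 3)) for _ in range(5)]
X = np.array([patch_features(p) for p in smooth + rough])
y = np.array([0] * 5 + [1] * 5)
acc, feat, thresh, pol = fit_stump(X, y)
```

A real decision tree would recurse on each split, but even this stump shows the mechanism: a learned threshold on a color or texture feature yields a subjective trafficability class for each image region.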
Unmanned Systems Technology IX | 2007
Robert E. Karlsen; Gary Witus
This paper presents a method to forecast terrain trafficability from visual appearance. During training, the system identifies a set of image chips (or exemplars) that span the range of terrain appearance. Each chip is assigned a vector tag of vehicle-terrain interaction characteristics that are obtained from simple performance models and on-board sensors, as the vehicle traverses the terrain. The system uses the exemplars to segment images into regions, based on visual similarity to the terrain patches observed during training, and assigns the appropriate vehicle-terrain interaction tag to them. This methodology will therefore allow the online forecasting of vehicle performance on upcoming terrain. Currently, the system uses a fuzzy c-means clustering algorithm for training. In this paper, we explore a number of different features for characterizing the visual appearance of the terrain and measure their effect on the prediction of vehicle performance.
Proceedings of the 24th US Army Science Conference | 2006
Robert E. Karlsen; James L. Overholt; Gary Witus
Military and security operations often require that participants move as quickly as possible, while avoiding harm. Humans judge how fast they can drive, how sharply they can turn and how hard they can brake, based on a subjective assessment of vehicle handling, which results from responsiveness to driving commands, ride quality, and prior experience in similar conditions. Vehicle handling is a product of the vehicle dynamics and the vehicle-terrain interaction. Near real-time methods are needed for unmanned ground vehicles to assess their handling limits on the current terrain in order to plan and execute extreme maneuvers. This paper describes preliminary research to develop on-the-fly procedures to capture vehicle-terrain interaction data and simple models of vehicle response to driving commands, given the vehicle-terrain interaction data.
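One way to picture "simple models of vehicle response to driving commands" is fitting a first-order response model to logged command/response pairs by least squares. The model form, the yaw-rate example, and all parameter values below are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def fit_first_order(u, y, dt):
    """Fit y[k+1] = a*y[k] + b*u[k] by least squares (len(y) == len(u) + 1).
    Returns the implied steady-state gain b/(1-a) and time constant -dt/ln(a)."""
    A = np.column_stack([y[:-1], u])
    a, b = np.linalg.lstsq(A, y[1:], rcond=None)[0]
    return b / (1.0 - a), -dt / np.log(a)

# Simulate the noisy yaw-rate response of a hypothetical skid-steer
# platform to random steering commands, then recover its parameters.
dt, a_true, b_true = 0.05, 0.9, 0.2       # gain = 2.0, tau ~ 0.47 s
rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, 400)           # steering commands
y = np.zeros(401)                         # yaw-rate log
for k in range(400):
    y[k + 1] = a_true * y[k] + b_true * u[k] + 0.001 * rng.standard_normal()

gain, tau = fit_first_order(u, y, dt)
```

Refit continuously over a sliding window of recent data, estimates like these would track how the current terrain changes the vehicle's effective steering gain and lag, which is the kind of on-the-fly handling assessment the abstract calls for.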
Targets and Backgrounds VI: Characterization, Visualization, and the Detection Process | 2000
Gary Witus; R. Darin Ellis
This paper describes a series of large-scale perception experiments designed to collect human observer visual search and discrimination performance data for use in calibrating and validating computer models of visual acquisition of military ground vehicles. The first experiment provides data for development of models of color and luminance adaptation and contrast sensitivity to extract information needed to discriminate simple 2-D shapes, as a function of size, adaptation, blur and contrast. The second experiment provides data for development of models of search and discrimination for simple 3-D shapes in cluttered backgrounds, as a function of size, clutter level, and facet contrast. The third experiment provides data for development of models of search and discrimination of military ground vehicles in natural settings. These stimuli include vehicles at close and far ranges, with and without cue feature suppression, with and without camouflage, and under clear, hazy and dark conditions. Observer response test results show that the stimuli are uniformly distributed from very high to very low signatures. This paper also reports on insights for modeling visual discrimination.
Targets and backgrounds : characterization and representation. Conference | 1997
Gary Witus; Paul G. Gottschalk; Mitchell A. Cohen; Grant R. Gerhart; Robert E. Karlsen; Thomas J. Meitzler; Richard C. Goetz; Eui Jung Sohn; Darryl Bryk
We present and demonstrate a method to characterize a background scene, to extrapolate the background characteristics into a specified target region, and to generate a synthetic target image with the visual characteristics of the surrounding background. The algorithm is based on a computational model of spatial pattern analysis in the front-end retinal-cortical visual system. It uses nonstationary multi-resolution spatial filtering to extrapolate the intensity and the intensity modulation amplitude of the surrounding background into the target region. The algorithm provides a method to compute the background-induced bias for use as a zero-reference in computational models of target boundary perception and shape discrimination. We demonstrate the method with a complex, heterogeneous scene containing many discrete objects and backgrounds. The contrast and texture of the visualization blend into the local background. In most cases, the target boundaries are difficult to see, and the target regions are difficult to distinguish from the background. The results provide insight into the capabilities and limitations of the underlying model of front-end human visual pattern analysis. They also provide insight into the roles that scene segmentation, shape properties, and prior knowledge of scene organization and object appearance play in modeling visual discrimination.
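The core idea of extrapolating surrounding background intensity into a target region can be approximated with a much cruder, single-scale stand-in: iteratively diffusing the intensities at the region boundary inward until the fill is smooth. This is only a sketch of the extrapolation concept; the paper's actual method uses nonstationary multi-resolution filtering and also carries modulation amplitude, which this toy does not.

```python
import numpy as np

def fill_region(img, mask, iters=500):
    """Diffuse surrounding intensities into a masked target region.
    Unmasked pixels are held fixed; masked pixels relax toward the
    average of their four neighbors (Jacobi iteration)."""
    out = img.astype(float).copy()
    out[mask] = out[~mask].mean()          # neutral starting value
    for _ in range(iters):
        sm = (np.roll(out, 1, 0) + np.roll(out, -1, 0)
              + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = sm[mask]               # update only the target region
    return out

# A horizontal intensity ramp with a corrupted center block: diffusion
# recovers the ramp inside the block from its surroundings.
clean = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))
mask = np.zeros((16, 16), dtype=bool)
mask[6:10, 6:10] = True
corrupt = clean.copy()
corrupt[mask] = 0.0
filled = fill_region(corrupt, mask)
```

In the paper's terms, the filled region is a "background-like" prediction for the target area; a target whose actual appearance matches this prediction has no background-induced boundary cue.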