Grinnell Jones
University of California, Riverside
Publications
Featured research published by Grinnell Jones.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 1999
Grinnell Jones; Bir Bhanu
A model-based automatic target recognition system is developed to recognize articulated and occluded objects in synthetic aperture radar (SAR) images, based on invariant features of the objects. Characteristics of SAR target image scattering centers, azimuth variation, and articulation invariants are presented. The basic elements of the new recognition system are described, and performance results for articulated, occluded, and occluded articulated objects are related to target articulation invariance and percent unoccluded.
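Several of the results below quantify invariance as the percentage of scattering-center locations that persist between a baseline signature and an articulated or occluded one. A minimal sketch of that kind of measure, assuming scattering centers are given as (row, col) peak locations and using an illustrative one-pixel matching tolerance (both assumptions, not details taken from the paper):

```python
import numpy as np

def percent_invariant(baseline, articulated, tol=1):
    """Fraction (in percent) of baseline scattering-center locations that
    reappear within `tol` pixels (Chebyshev distance) in the articulated
    signature. Tolerance and distance metric are illustrative assumptions."""
    baseline = np.asarray(baseline)        # shape (N, 2): (row, col) peaks
    articulated = np.asarray(articulated)  # shape (M, 2)
    if len(baseline) == 0:
        return 0.0
    matched = 0
    for loc in baseline:
        d = np.max(np.abs(articulated - loc), axis=1)  # Chebyshev distance
        if d.min() <= tol:
            matched += 1
    return 100.0 * matched / len(baseline)

# Example: if 8 of 10 baseline scatterers persist, the measure is 80%.
```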
Pattern Recognition | 2001
Grinnell Jones; Bir Bhanu
This paper presents the first successful approach for recognizing articulated vehicles in real synthetic aperture radar (SAR) images. This approach is based on invariant properties of the objects. Using SAR scattering center locations and magnitudes as features, the invariance of these features with articulation (e.g. turret rotation of a tank) is shown for XPATCH-generated synthetic SAR signatures and actual signatures from the MSTAR (public) data. Although related to geometric hashing, our recognition approach is specifically designed for SAR, taking into account the great azimuthal variation and moderate articulation invariance of SAR signatures. We present a basic recognition system for the XPATCH data, using scatterer relative locations, and an improved recognition system, using scatterer locations and magnitudes, that achieves excellent results with the more limited articulation invariance encountered with the real SAR targets in the MSTAR data. The articulation invariant properties of the objects are used to characterize recognition system performance in terms of probability of correct identification as a function of percent invariance with articulation.
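As a rough illustration of the table-lookup recognition idea summarized above (related to geometric hashing, with scatterer locations voting for target hypotheses), the sketch below builds a lookup table keyed by quantized scatterer locations and accumulates votes for (target, azimuth) model instances. The cell size, the vote threshold, and the use of absolute rather than origin-relative locations are simplifying assumptions, not the paper's actual parameters:

```python
from collections import defaultdict

CELL = 2  # quantization cell size in pixels (illustrative)

def build_table(models):
    """models: {(target, azimuth): [(row, col), ...] scatterer locations}.
    Returns a lookup table mapping quantized cells to the model instances
    that have a scatterer in that cell."""
    table = defaultdict(list)
    for key, scatterers in models.items():
        for (r, c) in scatterers:
            table[(r // CELL, c // CELL)].append(key)
    return table

def recognize(test_scatterers, table, min_votes=6):
    """Each test scatterer votes for every (target, azimuth) instance found
    in its cell; the top hypothesis wins if it clears the vote threshold."""
    votes = defaultdict(int)
    for (r, c) in test_scatterers:
        for key in table.get((r // CELL, c // CELL), []):
            votes[key] += 1
    if not votes:
        return "unknown", 0
    best, n = max(votes.items(), key=lambda kv: kv[1])
    return (best, n) if n >= min_votes else ("unknown", n)
```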
IEEE Transactions on Aerospace and Electronic Systems | 2001
Grinnell Jones; Bir Bhanu
Recognizing occluded vehicle targets in synthetic aperture radar (SAR) images is addressed. Recognition algorithms, based on local features, are presented that successfully recognize highly occluded objects in both XPATCH synthetic SAR signatures and real SAR images of actual vehicles from the MSTAR data. Extensive experimental results are presented for a basic recognition algorithm, using SAR scattering center relative locations as features with the XPATCH data and for an improved algorithm, using scatterer locations and magnitudes with the real SAR targets in the MSTAR data. The results show the effect of occlusion on recognition performance in terms of probability of correct identification, receiver operating characteristic curves, and confusion matrices.
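The ROC curves cited in these results trade correct identifications against false alarms as a decision threshold on the match score is swept. A generic bookkeeping sketch of that computation (not the papers' specific scoring), assuming each target test and each confuser test yields a scalar match score:

```python
def roc_points(target_scores, confuser_scores):
    """Sweep a decision threshold over the pooled match scores and return
    (P_false_alarm, P_correct_identification) pairs. Generic illustration:
    a target is counted as correctly identified, and a confuser as a false
    alarm, when its score meets the threshold."""
    thresholds = sorted(set(target_scores) | set(confuser_scores), reverse=True)
    points = []
    for t in thresholds:
        pci = sum(s >= t for s in target_scores) / len(target_scores)
        pfa = sum(s >= t for s in confuser_scores) / len(confuser_scores)
        points.append((pfa, pci))
    return points
```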
Machine Vision and Applications | 2000
Bir Bhanu; Yingqiang Lin; Grinnell Jones; Jing Peng
Target recognition is a multilevel process requiring a sequence of algorithms at low, intermediate and high levels. Generally, such systems are open loop with no feedback between levels, and assuring their performance at the given probability of correct identification (PCI) and probability of false alarm (Pf) is a key challenge in computer vision and pattern recognition research. In this paper, a robust closed-loop system for recognition of SAR images based on reinforcement learning is presented. The parameters in model-based SAR target recognition are learned. The method meets performance specifications by using PCI and Pf as feedback for the learning system. It has been experimentally validated by learning the parameters of the recognition system for SAR imagery, successfully recognizing articulated targets, targets of different configurations and targets at different depression angles.
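A schematic of the closed-loop idea described above: the probability of correct identification and the false-alarm probability measured on training imagery act as feedback that drives the search over recognition parameters. Plain random search stands in for the paper's reinforcement-learning algorithm here, and the parameter names and reward weighting are illustrative assumptions:

```python
import random

def closed_loop_tune(evaluate, pci_spec=0.9, pf_spec=0.05, iters=200, seed=0):
    """Closed-loop parameter search driven by PCI/Pf feedback.

    `evaluate(params)` must run the recognizer on training imagery and
    return (PCI, Pf). Random search is a stand-in for the paper's
    reinforcement-learning method; the parameter names, ranges, and the
    reward weighting below are assumptions for illustration only."""
    rng = random.Random(seed)
    best, best_reward = None, float("-inf")
    for _ in range(iters):
        params = {"vote_threshold": rng.randint(4, 12),
                  "location_tolerance": rng.choice([1, 2, 3])}
        pci, pf = evaluate(params)
        if pci >= pci_spec and pf <= pf_spec:
            return params            # performance specification met
        reward = pci - 2.0 * pf      # reward weighting is an assumption
        if reward > best_reward:
            best, best_reward = params, reward
    return best                      # best effort if the spec was never met
```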
Optical Engineering | 2000
Bir Bhanu; Grinnell Jones
The focus of this paper is recognizing articulated vehicles and actual vehicle configuration variants in real synthetic aperture radar (SAR) images. Using SAR scattering-center locations and magnitudes as features, the invariance of these features is shown with articulation (e.g., rotation of a tank turret), with configuration variants, and with a small change in depression angle. This scatterer-location and magnitude quasi-invariance is used as a basis for development of a SAR recognition system that successfully identifies real articulated and nonstandard-configuration vehicles based on nonarticulated, standard recognition models. Identification performance results are presented as vote-space scatterplots and receiver operating characteristic curves for configuration variants, for articulated objects, and for a small change in depression angle with the MSTAR public data.
Algorithms for Synthetic Aperture Radar Imagery (Conference) | 2002
Bir Bhanu; Grinnell Jones
The focus of this paper is optimizing the recognition of vehicles in Synthetic Aperture Radar (SAR) imagery using multiple SAR recognizers at different look angles. The variance of SAR scattering center locations with target azimuth leads to recognition system results at different azimuths that are independent, even for small azimuth deltas. Extensive experimental recognition results are presented in terms of receiver operating characteristic (ROC) curves to show the effects of multiple look angles on recognition performance for MSTAR vehicle targets with configuration variants, articulation, and occlusion.
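Because decisions from looks at different azimuths are treated as independent, the multi-look system combines several single-look recognizers. The sketch below shows one simple fusion rule (majority vote over per-look identifications); the paper's actual combination rule may differ:

```python
from collections import Counter

def fuse_looks(per_look_labels):
    """Majority vote over the class labels returned by recognizers run at
    different look angles; ties or an empty input fall back to 'unknown'."""
    if not per_look_labels:
        return "unknown"
    counts = Counter(per_look_labels).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return "unknown"
    return counts[0][0]

# Example: fuse_looks(["T72", "T72", "BMP2"]) -> "T72"
```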
Algorithms for Synthetic Aperture Radar Imagery (Conference) | 1997
Bir Bhanu; Grinnell Jones
The performance of a model-based automatic target recognition (ATR) engine with articulated and occluded objects in SAR imagery is characterized based on invariant properties of the objects. Using SAR scattering center locations as features, the invariance with articulation is shown as a function of object azimuth. The basic elements of our model-based recognition engine are described and performance results are given for various design parameters. The articulation invariant properties of the objects are used to characterize recognition engine performance, in terms of probability of correct identification as a function of percent invariance with articulation. Similar results are presented for object occlusion in the presence of noise, with percent unoccluded as the invariant measure. Finally, performance is characterized for occluded articulated objects as a function of number of features that are used. Results are presented using 4320 chips generated by XPATCH for 5 targets.
Optical Engineering | 2002
Bir Bhanu; Grinnell Jones
The focus of this work is optimizing recognition models for synthetic aperture radar (SAR) signatures of vehicles to improve the performance of a recognition algorithm under the extended operating conditions of target articulation, occlusion, and configuration variants. The recognition models are based on quasi-invariant local features, scattering center locations, and magnitudes. The approach determines the similarities and differences among the various vehicle models. Methods to penalize similar features or reward dissimilar features are used to increase the distinguishability of the recognition model instances. Extensive experimental recognition results are presented in terms of confusion matrices and receiver operating characteristic (ROC) curves to show the improvements in recognition performance for real SAR signatures of vehicle targets with articulation, configuration variants, and occlusion.
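One way to read the model-optimization step described above is as feature weighting: a scatterer feature shared by many vehicle models is penalized, while a feature unique to one model is rewarded. The inverse-frequency style weighting below is an illustrative stand-in for the paper's penalty/reward method, not a reproduction of it:

```python
import math
from collections import defaultdict

def feature_weights(models):
    """models: {model_id: set of quantized scatterer cells}.
    Returns {(model_id, cell): weight}, down-weighting cells shared by many
    models and rewarding cells unique to one model (illustrative
    inverse-frequency weighting)."""
    occurrences = defaultdict(int)
    for cells in models.values():
        for cell in cells:
            occurrences[cell] += 1
    n_models = len(models)
    weights = {}
    for model_id, cells in models.items():
        for cell in cells:
            weights[(model_id, cell)] = math.log(1 + n_models / occurrences[cell])
    return weights
```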
Proceedings of SPIE | 1998
Bir Bhanu; Grinnell Jones; Joon Ahn
The focus of this paper is recognizing articulated objects and the pose of the articulated parts in SAR images. Using SAR scattering center locations as features, the invariance with articulation (i.e. turret rotation for the T72, T80 and M1a tanks, missile erect vs. down for the SCUD launcher) is shown as a function of object azimuth. Similar data is shown for configuration differences in the MSTAR (Public) Targets. The UCR model-based recognition engine (which uses non-articulated models to recognize articulated, occluded and non-standard configuration objects) is described, and target identification performance results are given as confusion matrices and ROC curves for six-inch and one-foot resolution XPATCH images and the one-foot resolution MSTAR data. Separate body and turret models are developed that are independent of the relative positions between the body and the turret. These models are used with a subsequent matching technique to refine the pose of the body and determine the pose of the turret. An expression for the probability that a random match will occur is derived, and this function is used to set thresholds to minimize the probability of a random match for the recognition system. Results for identification, body pose and turret pose are presented as a function of percent occlusion for articulated XPATCH data, and results are given for identification and body pose for articulated MSTAR data.
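The random-match analysis mentioned above can be illustrated with a simple occupancy model: if each of N random scatterers independently lands in an occupied lookup-table cell with probability p, the chance of accumulating at least a threshold number of accidental votes is binomial. This is an assumed stand-in for the expression derived in the paper, shown only to make the threshold-setting idea concrete:

```python
from math import comb

def prob_random_match(n_scatterers, vote_threshold, p_cell_occupied):
    """P(at least `vote_threshold` of `n_scatterers` random scatterers hit
    occupied lookup-table cells), under an independent-hit binomial model
    (an assumption, not the paper's derived expression)."""
    return sum(comb(n_scatterers, k)
               * p_cell_occupied ** k
               * (1 - p_cell_occupied) ** (n_scatterers - k)
               for k in range(vote_threshold, n_scatterers + 1))

def smallest_safe_threshold(n_scatterers, p_cell_occupied, alpha=1e-3):
    """Smallest vote threshold whose random-match probability falls below
    alpha; illustrates how such a probability can set decision thresholds."""
    for t in range(1, n_scatterers + 1):
        if prob_random_match(n_scatterers, t, p_cell_occupied) < alpha:
            return t
    return n_scatterers
```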
Proceedings of the IEEE Workshop on Computer Vision Beyond the Visible Spectrum: Methods and Applications | 2000
Bir Bhanu; Grinnell Jones
This paper outlines an approach and experimental results for synthetic aperture radar (SAR) object recognition using the MSTAR data. With SAR scattering center locations and magnitudes as features, the invariance of these features is shown with object articulation (e.g., rotation of a tank turret) and with external configuration variants. This scatterer location and magnitude quasi-invariance is used as a basis for development of a SAR recognition system that successfully identifies articulated and non-standard configuration vehicles based on non-articulated, standard recognition models. The forced recognition results and pose accuracy are given. The effect of different confusers on the receiver operating characteristic (ROC) curves is illustrated, along with ROC curves for configuration variants, articulations and small changes in depression angle. Results show that integrating the outputs of multiple recognizers can lead to significantly improved performance over the single best recognizer.