
Publications


Featured research published by Gregory J. Power.


Proceedings of SPIE | 1998

Determining a confidence factor for automatic target recognition based on image sequence quality

Gregory J. Power; Mohammad A. Karim

For the Automatic Target Recognition (ATR) algorithm, the quality of the input image sequence can be a major determining factor in the ATR algorithm's ability to recognize an object. Based on quality, an image can be easy to recognize, barely recognizable, or even mangled beyond recognition. If a determination of the image quality can be made prior to entering the ATR algorithm, then a confidence factor can be applied to the probability of recognition. This confidence factor can be used to rate sensors; to improve quality through selectively preprocessing image sequences prior to applying ATR; or to limit the problem space by determining which image sequences need not be processed by the ATR algorithm. It could even determine when human intervention is needed. To get a flavor for the scope of the image quality problem, this paper reviews analog and digital forms of image degradation. It looks at traditional quality metric approaches such as peak signal-to-noise ratio. It examines a newer metric based on human vision data, a metric introduced by the Institute for Telecommunication Sciences. These objective quality metrics can be used as confidence factors primarily in ATR systems that use image sequences degraded due to transmission systems. However, to determine the quality metric, a transmission system needs the original input image sequence and the degraded output image sequence. This paper suggests a more general approach to determining quality using analysis of spatial and temporal vectors where the original input sequence is not explicitly given. This novel approach would be useful where there is no transmission system but where the ATR system is part of the sensor, on-board a mobile platform. The results of this work are demonstrated on a few standard image sequences.
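The traditional peak signal-to-noise ratio mentioned in the abstract is easy to sketch. This is a minimal illustrative version (not the paper's implementation), operating on frames given as flat lists of grayscale pixel values:

```python
import math

def psnr(original, degraded, peak=255.0):
    """Peak signal-to-noise ratio, in dB, between two equally sized
    grayscale frames given as flat lists of pixel values."""
    mse = sum((o - d) ** 2 for o, d in zip(original, degraded)) / len(original)
    if mse == 0:
        return float("inf")  # identical frames: no degradation
    return 10.0 * math.log10(peak ** 2 / mse)

# A lightly degraded frame scores a finite PSNR; higher is better.
frame = [10, 50, 200, 255]
noisy = [12, 48, 202, 253]
print(psnr(frame, noisy))
```

As the abstract notes, a metric of this kind needs both the original and the degraded sequence, which is exactly the limitation the paper's spatial/temporal-vector approach is meant to remove.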


Optical Engineering | 1993

Object tracking by an optoelectronic inner product complex neural network

Abdul Ahad Sami Awwal; Gregory J. Power

A complex associative memory model based on a neural network architecture is proposed for tracking three-dimensional objects in a dynamic environment. The storage representation of the complex associative memory model is based on an efficient amplitude-modulated phase-only matched filter. The input to the memory is derived from the discrete Fourier transform of the edge coordinates of the to-be-recognized moving object, where the edges are obtained through motion-based segmentation of the image scene. An adaptive threshold is used during the decision-making process to indicate a match or identify a mismatch. Computer simulation on real-world data proves the effectiveness of the proposed model. The proposed scheme is readily amenable to optoelectronic implementation.
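The storage representation described above, a DFT of edge coordinates treated as complex numbers, can be sketched in a few lines. This is an illustrative reduction (the function names and the truncation to a few coefficients are choices made here, not the authors'), paired with a normalized complex inner product for matching:

```python
import cmath

def edge_descriptor(edge_points, n_coeffs=4):
    """DFT of edge coordinates treated as complex numbers x + iy,
    truncated to the first n_coeffs coefficients."""
    zs = [complex(x, y) for x, y in edge_points]
    n = len(zs)
    return [sum(z * cmath.exp(-2j * cmath.pi * k * m / n)
                for m, z in enumerate(zs)) / n
            for k in range(n_coeffs)]

def cip(a, b):
    """Normalized complex inner product of two descriptors:
    1.0 for a perfect match, smaller for a mismatch."""
    num = abs(sum(x * y.conjugate() for x, y in zip(a, b)))
    den = (sum(abs(x) ** 2 for x in a) ** 0.5 *
           sum(abs(y) ** 2 for y in b) ** 0.5)
    return num / den
```

Because the DFT is linear, a uniformly scaled copy of a shape yields a scaled descriptor and the normalized inner product still scores 1.0, which is the kind of invariance that makes the representation useful for tracking.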


Optical Pattern Recognition (conference) | 2002

Correlators for rank order shape similarity measurement

Jason B. Gregga; Gregory J. Power; Khan M. Iftekharuddin

Correlators have been used for detecting shapes but not as often for measuring shape similarity. The complex inner product (CIP) has been used in various formulations as a shape similarity measure. The CIP is essentially a one-dimensional correlation approach to measuring similarity. One-dimensional variants of the correlation techniques, including the matched filter (MF), phase-only filter (POF), and amplitude-modulated phase-only filter (AMPOF), are shown to measure shape similarity in a trend that approaches human perception; however, clear performance differences are noted. The results show that the best correlator for measuring shape similarity is not the best correlator for detecting a shape. It is suggested that detection and shape similarity are fundamentally different functions that are in opposition to some degree. Ideal detection and ideal similarity measurement functions are explored. The degree to which various formulations of correlators approach the ideal functions of detection and similarity measurement is shown, as well as results from human psychophysical experiments.


Proceedings of SPIE, the International Society for Optical Engineering | 2000

Optoelectronic complex inner product for evaluating quality of image segmentation

Gregory J. Power; Abdul Ahad Sami Awwal

In automatic target recognition and machine vision applications, segmentation of the images is a key step. Poor segmentation reduces the recognition performance. For some imaging systems, such as MRI and Synthetic Aperture Radar (SAR), it is difficult even for humans to agree on the location of the edge which allows for segmentation. A real-time dynamic approach to determine the quality of segmentation can enable vision systems to refocus or apply appropriate algorithms to ensure high quality segmentation for recognition. A recent approach to evaluate the quality of image segmentation uses percent-pixels-different (PPD). For some cases, PPD provides a reasonable quality evaluation, but it has a weakness in providing a measure for how well the shape of the segmentation matches the true shape. This paper introduces the complex inner product approach for providing a goodness measure for evaluating the segmentation quality based on shape. The complex inner product approach is demonstrated on SAR target chips obtained from the Moving and Stationary Target Acquisition and Recognition (MSTAR) program sponsored by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL). The results are compared to the PPD approach. A design for an optoelectronic implementation of the complex inner product for dynamic segmentation evaluation is introduced.
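The percent-pixels-different baseline the paper compares against is simple to state. A minimal sketch on binary masks given as flat lists (the function name is chosen here for illustration):

```python
def percent_pixels_different(seg_a, seg_b):
    """PPD between two equally sized binary segmentation masks:
    the share of pixels on which the masks disagree, as a percent."""
    diff = sum(1 for a, b in zip(seg_a, seg_b) if a != b)
    return 100.0 * diff / len(seg_a)
```

The weakness noted in the abstract follows directly: two segmentations with very different outlines can disagree on the same number of pixels and so receive the same PPD score, which is what motivates the shape-sensitive complex inner product measure.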


Algorithms and Systems for Optical Information Processing IV | 2000

Segmenting Shadows from synthetic aperture radar imagery using edge-enhanced region growing

Gregory J. Power; Kelce S. Wilson

An enhanced region-growing approach for segmenting regions is introduced. A region-growing algorithm is merged with stopping criteria based on a robust noise-tolerant edge-detection routine. The region-grow algorithm is then used to segment the shadow region in a Synthetic Aperture Radar (SAR) image. This approach recognizes that SAR phenomenology causes speckle in imagery even in the shadow area due to energy injected from the surrounding clutter and target. The speckled image makes determination of edges a difficult task even for the human observer. This paper outlines the edge-enhanced region-grow approach and compares the results to three other segmentation approaches: the region-grow-only approach, an automated-threshold approach based on a priori knowledge of the SAR target information, and the manual segmentation approach. The comparison is shown using a tri-metric inter-algorithmic approach. The metrics used to evaluate the segmentation include percent-pixels-same (PPS), the partial-directed Hausdorff (PDH) metric, and a shape-based metric based on the complex inner product (CIP). Experimental results indicate that the enhanced region-growing technique produces reasonable segmentations for the SAR target image chips obtained from the Moving and Stationary Target Acquisition and Recognition (MSTAR) program.
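Of the three metrics named above, the partial-directed Hausdorff is the least self-explanatory. A minimal sketch under the usual definition (the ranked-distance formulation; the fraction parameter and function name are choices made here, not taken from the paper):

```python
def partial_directed_hausdorff(points_a, points_b, fraction=0.9):
    """Partial directed Hausdorff distance from point set A to B:
    the ranked (rather than maximum) nearest-neighbor distance,
    so a share of outlier points is tolerated."""
    dists = sorted(min(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                       for bx, by in points_b)
                   for ax, ay in points_a)
    k = max(0, min(len(dists) - 1, int(fraction * len(dists)) - 1))
    return dists[k]
```

With fraction=1.0 this reduces to the ordinary directed Hausdorff distance; lowering the fraction discards the worst-matching boundary points, which matters for speckled SAR edges where a few stray pixels would otherwise dominate the score.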


Algorithms for Synthetic Aperture Radar Imagery (conference) | 2003

Effects of SAR parametric variations on the performance of automatic target recognition algorithms

Kefu Xue; Sam Sink; Gregory J. Power

Synthetic aperture radar (SAR) imagery is one of the most valuable sensor data sources for today's military battlefield surveillance and analysis. The collection of SAR images by various platforms (e.g. Global Hawk, NASA/JPL AIRSAR, etc.) and on various missions for multiple purposes (e.g. reconnaissance, terrain mapping, etc.) has resulted in a vast amount of data over wide surveillance areas. The pixel-to-eye ratio is simply too high for human analysts to rapidly sift through massive volumes of sensor data and yield engagement decisions quickly and precisely. Effective automatic target recognition (ATR) algorithms to process this growing mountain of information are clearly needed. However, even after many years of research, SAR ATR still remains a highly challenging research problem. What makes SAR ATR problems difficult is the amount of variability exhibited in the SAR image signatures of targets and clutter. There are many different factors that can cause the variability in SAR image signatures. It is conventional to categorize these factors into three major groups known as extended operating conditions (OCs) of target, environment and sensor. The group of sensor OCs includes SAR sensor parametric variations in depression angle, polarization, squint angle, frequencies (UHF, VHF, X band) and bandwidth, pulse repetition frequency (PRF), multi-look, antenna geometry and type, image formation algorithms, platform variations and geometric errors, noise level, etc. Many existing studies of SAR ATR have traditionally focused on the variability of SAR signatures caused by a sub-space of target OCs and environment OCs. Similar studies in terms of SAR parametric variations in sensor OCs have been very limited due to the lack of data across the sensor OCs and the inherent difficulties, as well as the high cost, in supplying various sensor OCs during the data collections.
This paper presents the results of a comprehensive survey of SAR ATR research work involving the subjects of various sensor OCs. We found in the survey that, to date, very little research has been devoted to the problems of sensor OCs and their effects on the performance of SAR-image-based ATR algorithms. Due to the importance of sensor OCs in ATR applications, we have developed a research platform as well as important focus areas of future research in SAR parametric variations. A number of baseline ATR algorithms in the research platform have been implemented and verified. We have also planned and started a SAR data simulation process across the spectrum of sensor OCs. A road-map for the future research of SAR parametric variations (sensor OCs) and their impact on ATR algorithms is laid out in this paper.


Proceedings of SPIE | 2001

Benchtop methodology for evaluating the automatic segmentation of ladar images

Gregory J. Power

Numerous approaches to segmentation exist, requiring an evaluation methodology to determine the most appropriate technique for a specific ladar design. A benchtop evaluation methodology that uses multiple measures is used to evaluate ladar-specific image segmentation algorithms. The method uses multiple measures along with an inter-algorithmic approach that was recently introduced for evaluating Synthetic Aperture Radar (SAR) imagery. Ladar imagery is considered easier to segment than SAR since it generally contains less speckle and has both a range and intensity map to assist in segmentation. A system of multiple measures focuses on area, shape and edge closeness to judge the segmentation. The judgement is made on the benchtop by comparing the segmentation to supervised hand-segmented images. To demonstrate the approach, a ladar image is segmented using several segmentation approaches introduced in the literature. The system of multiple measures is then demonstrated on the segmented ladar images. An interpretation of the results is given. This paper demonstrates that the original evaluation approach designed for evaluating SAR imagery can be generalized across differing sensor modalities even though the segmentation and sensor acquisition approaches are different.


Proceedings of SPIE | 2001

An Objective Evaluation of Four SAR Image Segmentation Algorithms

Jason B. Gregga; Steven C. Gustafson; Gregory J. Power

Because of the large number of SAR images the Air Force generates and the dwindling number of available human analysts, automated methods must be developed. A key step towards automated SAR image analysis is image segmentation. There are many segmentation algorithms, but they have not been tested on a common set of images, and there are no standard test methods. This paper evaluates four SAR image segmentation algorithms by running them on a common set of data and objectively comparing them to each other and to human segmentations. This objective comparison uses a multi-measure approach with a set of master segmentations as ground truth. The measure results are compared to a Human Threshold, which defines the performance of human segmentors compared to the master segmentations. Also, methods that use the multi-measures to determine the best algorithm are developed. These methods show that of the four algorithms, Statistical Curve Evolution produces the best segmentations; however, none of the algorithms are superior to human segmentations. Thus, with the Human Threshold and Statistical Curve Evolution as benchmarks, this paper establishes a new and practical framework for testing SAR image segmentation algorithms.


Optical Engineering | 1999

Synthetic correlation-based modified signed-digit trinary logic processing

Farid Ahmed; Abdul Ahad Sami Awwal; Gregory J. Power

An optical implementation using a correlation technique for modified signed-digit (MSD) trinary logic processing is presented. In particular, a synthetic matched filter (SMF) correlator model is used to demonstrate the realization of the carry-propagation-free trinary MSD addition. It is shown that proper encoding of the MSD numerals is of utmost importance for the correlator model to work. The developed method is expected to have far-reaching applications in optical higher-radix numeric and logic processing.


Journal of Electronic Imaging | 1999

Velocital information feature for charting spatio-temporal changes in digital image sequences

Gregory J. Power; Mohammad A. Karim; Farid Ahmed

This paper introduces a velocital information feature that is extracted for each frame of an image sequence. The feature is based on the optical flow in each frame. A mathematical formulation for the velocital information feature is derived. Charting the feature over a sequence provides a quality metric called velocital information content (VIC). The relationship of VIC to the spatial and temporal information content is shown. VIC offers a different role from traditional transmission-based quality metrics, which require two images, the original input image and the degraded output image, to calculate the quality metric. VIC can detect artifacts from a single image sequence by charting variations from the norm. Therefore, VIC offers a metric for judging the quality of the image frames prior to transmission, without a transmission system or without any knowledge of the higher quality image input. The differences between VIC and transmission-oriented quality metrics can provide a different role for VIC in analysis and image sequence processing. Results show that VIC is able to detect gradual and sudden changes in an image sequence. Results are shown for using VIC as a filter on electro-optical infrared image sequences where VIC detects frames suffering from erratic noise.
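VIC itself is built on per-frame optical flow, which is beyond a short sketch; but the charting idea, flagging frames whose temporal change departs from the norm, can be illustrated with a much cruder stand-in based on frame differencing (this substitute, and the function name, are assumptions made here, not the paper's feature):

```python
def temporal_change(frames):
    """Mean absolute frame-to-frame pixel change for each frame after
    the first. A crude stand-in for an optical-flow-based feature,
    but enough to chart sudden changes across a sequence."""
    chart = []
    for prev, cur in zip(frames, frames[1:]):
        chart.append(sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur))
    return chart
```

A corrupted or erratic frame shows up as a spike in the chart relative to its neighbors, which is the single-sequence, no-reference detection role the abstract describes for VIC.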

Collaboration


Dive into Gregory J. Power's collaborations.

Top Co-Authors

Farid Ahmed (The Catholic University of America)
Jason B. Gregga (Air Force Research Laboratory)
Kefu Xue (Wright State University)
Brian G. Swahn (Pennsylvania State University)
Kelce S. Wilson (Air Force Research Laboratory)