Edmund G. Zelnio
Air Force Research Laboratory
Publication
Featured research published by Edmund G. Zelnio.
Proceedings of SPIE, the International Society for Optical Engineering | 2005
Michael J. Minardi; LeRoy A. Gorham; Edmund G. Zelnio
A unified way of detecting and tracking moving targets with SAR, called SAR-MTI, is presented. SAR-MTI differs from STAP or DPCA in that it is a generalization of SAR processing and can work with only a single phase center. SAR-MTI requires formation of a series of images assuming different sensor ground speeds, from v_s - v_t,max to v_s + v_t,max, where v_s is the actual sensor ground speed and v_t,max is the maximum target speed of interest. Each image will capture a different set of target velocities, and the complete set of images will focus all target speeds less than a desired maximum regardless of direction and target location. Thus the two-dimensional SAR image is generalized to a three-dimensional cube, or stack of images. All linearly moving targets below the desired speed will be focused somewhere in the cube. The third dimension represents the along-track velocity of the mover, a piece of information not available to standard airborne MTI. A mover will remain focused at the same place within the cube as long as the motion of the mover and the sensor remain linear. Because stationary targets also focus within the detection cube, move-stop-move targets are handled smoothly and without changing waveforms or modes; a further consequence is that SAR-MTI has no minimum detectable velocity. SAR-MTI has an inherent ambiguity because the four dimensions of target parameters (two in velocity and two in position) are mapped into a three-dimensional detection space. This ambiguity is characterized, and methods for resolving it for geolocation are discussed. The point spread function in the detection cube is also described.
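A minimal numerical sketch of the velocity-stack construction described in this abstract, assuming a user-supplied single-image formation routine; the names (form_sar_image, phase_history, velocity_cube) are illustrative, not from the paper:

```python
import numpy as np

# Hedged sketch of the SAR-MTI detection cube: one SAR image is formed for
# each hypothesized sensor ground speed between v_s - v_t,max and
# v_s + v_t,max, so that every along-track target velocity of interest is
# focused in some slice of the cube.  form_sar_image is a placeholder for
# any conventional image-formation routine returning equally sized images.
def velocity_cube(phase_history, v_sensor, v_target_max, n_hypotheses, form_sar_image):
    speeds = np.linspace(v_sensor - v_target_max,
                         v_sensor + v_target_max, n_hypotheses)
    # Each slice is an ordinary SAR image formed as if the platform had
    # moved at the hypothesized speed rather than the true speed.
    cube = np.stack([form_sar_image(phase_history, v) for v in speeds])
    return speeds, cube   # the third dimension indexes along-track velocity
```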
IEEE Transactions on Aerospace and Electronic Systems | 1996
Jian Li; Edmund G. Zelnio
Target detection with synthetic aperture radar (SAR) is considered. We derive generalized likelihood ratio (GLR) detection algorithms that may be used with SAR images that are obtained with coherent subtraction or have Gaussian distributions. We analytically compare the performance of (1) a single-pixel detector, (2) a detector using complete knowledge of the target signature information and known orientation information, (3) a detector using incomplete knowledge of the target signature information and known orientation information, (4) a detector using unknown target signature information and known orientation information, and (5) a detector using unknown target signature information and unknown orientation information.
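As a concrete stand-in for the simplest detector in the list above, a single-pixel test can be written as a normalized-energy threshold; estimating the clutter power from surrounding training pixels is an assumption for illustration, not the paper's exact GLR derivation:

```python
import numpy as np

# Minimal single-pixel test on complex SAR pixels, assuming zero-mean
# complex Gaussian clutter with variance estimated from training pixels.
def single_pixel_detector(test_pixel, training_pixels, threshold):
    sigma2 = np.mean(np.abs(training_pixels) ** 2)   # clutter power estimate
    statistic = np.abs(test_pixel) ** 2 / sigma2     # normalized pixel energy
    return statistic > threshold, statistic
```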
IEEE Signal Processing Magazine | 2014
Joshua N. Ash; Emre Ertin; Lee C. Potter; Edmund G. Zelnio
Advances in radar hardware have enabled the sensing of ever-wider synthesized apertures. In this article, radar video - a sequence of radar images indexed on subaperture - is discussed as a natural, convenient, and revealing representation to capture wide-angle scattering behavior of complex objects. We review the inverse problem of recovering wide-angle scene reflectivity from synthetic aperture radar (SAR) measurements, survey signal processing approaches for its solution, and introduce a novel Bayesian estimation method. Examples from measured and simulated scattering data are presented to illustrate scattering behavior conveniently revealed by the SAR video framework.
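The radar-video representation itself is easy to sketch: split the wide synthesized aperture into (possibly overlapping) subapertures and form one conventional image per subaperture. The function below is only illustrative; form_image stands in for any narrow-angle image-formation routine:

```python
import numpy as np

# Illustrative "radar video" construction: each frame is a SAR image formed
# from a window of pulses, so the frame index plays the role of aspect angle.
def radar_video(pulses, subaperture_len, step, form_image):
    frames = []
    for start in range(0, pulses.shape[0] - subaperture_len + 1, step):
        frames.append(form_image(pulses[start:start + subaperture_len]))
    return np.stack(frames)
```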
IEEE Transactions on Aerospace and Electronic Systems | 2014
Gregory E. Newstadt; Edmund G. Zelnio; Alfred O. Hero
This work combines the physical, kinematic, and statistical properties of targets, clutter, and sensor calibration as manifested in multichannel synthetic aperture radar (SAR) imagery into a unified Bayesian structure that simultaneously estimates 1) clutter distributions and nuisance parameters, and 2) target signatures required for detection/inference. A Monte Carlo estimate of the posterior distribution is provided that infers the model parameters directly from the data with little tuning of algorithm parameters. Performance is demonstrated on both measured and synthetic wide-area datasets.
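The paper's model is not reproduced here, but the kind of Monte Carlo posterior estimate it refers to can be illustrated with a generic random-walk Metropolis-Hastings skeleton; log_posterior and the step size are placeholders rather than the paper's hierarchy:

```python
import numpy as np

# Generic random-walk Metropolis-Hastings sampler: draws samples whose
# empirical distribution approximates the posterior defined by log_posterior.
def metropolis_hastings(log_posterior, theta0, n_samples, step=0.1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    logp = log_posterior(theta)
    samples = []
    for _ in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.shape)
        logp_prop = log_posterior(proposal)
        if np.log(rng.uniform()) < logp_prop - logp:   # accept/reject step
            theta, logp = proposal, logp_prop
        samples.append(theta.copy())
    return np.array(samples)
```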
Proceedings of SPIE | 2010
Gregory E. Newstadt; Edmund G. Zelnio; LeRoy A. Gorham; Alfred O. Hero
In this work, the problem of detecting and tracking targets with synthetic aperture radar is considered. A novel approach is presented in which prior knowledge of target motion is assumed to be known for small patches within the field of view. Probability densities are derived as priors on the moving-target signature within backprojected SAR images, based on the work of Jao [1]. Furthermore, detection and tracking algorithms are presented that take advantage of the derived prior densities. It was found that pure detection suffered from a high false alarm rate as the number of targets in the scene increased. Thus, tracking algorithms were implemented through a particle filter based on the Joint Multi-Target Probability Density (JMPD) particle filter [2] and the unscented Kalman filter (UKF) [3], which could be used in a track-before-detect scenario. It was found that the particle filter was superior to the UKF and was able to track 5 targets at 0.1-second intervals with a tracking error of 0.20 ± 1.61 m (95% confidence interval).
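A minimal bootstrap (SIR) particle filter, in the spirit of the tracking stage described above, might look as follows; the constant-velocity motion model and Gaussian position likelihood are illustrative assumptions rather than the JMPD formulation used in the paper:

```python
import numpy as np

# Bootstrap (SIR) particle filter for a single 2-D target.
# State per particle: [x, y, vx, vy].
def particle_filter(measurements, n_particles=1000, dt=0.1,
                    process_sigma=1.0, meas_sigma=2.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    particles = np.zeros((n_particles, 4))
    particles[:, :2] = measurements[0] + 5.0 * rng.standard_normal((n_particles, 2))
    estimates = []
    for z in measurements:
        # propagate with a constant-velocity model plus process noise
        particles[:, :2] += dt * particles[:, 2:]
        particles += process_sigma * rng.standard_normal(particles.shape)
        # weight by a Gaussian measurement likelihood on position
        d2 = np.sum((particles[:, :2] - z) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / meas_sigma ** 2) + 1e-12   # avoid all-zero weights
        w /= w.sum()
        estimates.append(w @ particles)                    # posterior-mean estimate
        idx = rng.choice(n_particles, n_particles, p=w)    # resample
        particles = particles[idx]
    return np.array(estimates)
```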
Proceedings of SPIE, the International Society for Optical Engineering | 2007
LeRoy A. Gorham; Brian D. Rigling; Edmund G. Zelnio
The polar format algorithm (PFA) is a well-known method for forming imagery in both the radar community and the medical imaging community. PFA is attractive because it has low computational cost and partially compensates for phase errors due to a target's motion through resolution cells (MTRC). Since the imaging scenarios for remote sensing and medical imaging are traditionally different, the PFA implementation differs between the communities. This paper describes the differences in PFA implementation. The performance of two illustrative implementations is compared using synthetic radar and medical imagery.
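A rough sketch of the polar format idea, independent of either community's implementation details: phase-history samples collected on a polar raster in spatial frequency are resampled onto a rectangular grid, and a 2-D inverse FFT then yields the image. The linear resampling and variable names below are illustrative choices:

```python
import numpy as np
from scipy.interpolate import griddata

# Polar-to-rectangular resampling of phase history in k-space, then 2-D IFFT.
# kx, ky give the spatial-frequency location of each phase-history sample.
def polar_format_image(phase_history, kx, ky, grid_size=512):
    kx_lin = np.linspace(kx.min(), kx.max(), grid_size)
    ky_lin = np.linspace(ky.min(), ky.max(), grid_size)
    KX, KY = np.meshgrid(kx_lin, ky_lin)
    pts = np.column_stack([kx.ravel(), ky.ravel()])
    # resample real and imaginary parts onto the rectangular k-space grid
    re = griddata(pts, phase_history.real.ravel(), (KX, KY), fill_value=0.0)
    im = griddata(pts, phase_history.imag.ravel(), (KX, KY), fill_value=0.0)
    rect = re + 1j * im
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(rect)))
```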
IEEE Transactions on Aerospace and Electronic Systems | 2016
Kristjan H. Greenewald; Edmund G. Zelnio; Alfred O. Hero
This paper proposes a spatiotemporal decomposition for the detection of moving targets in multi-antenna synthetic aperture radar (SAR). As a high-resolution radar imaging modality, SAR detects and localizes nonmoving targets accurately, giving it an advantage over lower-resolution ground-moving target indication (GMTI) radars. Moving target detection is more challenging due to target smearing and masking by clutter. Space-time adaptive processing (STAP) is often used to remove the stationary clutter and enhance the moving targets. In this work, it is shown that the performance of STAP can be improved by modeling the clutter covariance as a space versus time Kronecker product with low-rank factors. Based on this model, a low-rank Kronecker product covariance estimation algorithm is proposed, and a novel separable clutter cancelation filter based on the Kronecker covariance estimate is introduced. The proposed method provides orders of magnitude reduction in the required number of training samples as well as improved robustness to corruption of the training data. Simulation results and experiments using the Gotcha SAR GMTI challenge dataset are presented that confirm the advantages of our approach relative to existing techniques.
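The space-versus-time Kronecker covariance model can be illustrated with the standard flip-flop estimate for matrix-normal data, followed by whitening of a test snapshot with the Kronecker inverse; this omits the low-rank factors and the separable cancelation filter the paper develops:

```python
import numpy as np

# Flip-flop estimate of a Kronecker-structured covariance B (x) A from
# training snapshots, where A is the spatial factor and B the temporal one.
def kronecker_covariance(snapshots, n_iter=5):
    # snapshots: (n, M, N) array, M spatial channels by N slow-time samples
    n, M, N = snapshots.shape
    A = np.eye(M, dtype=complex)
    B = np.eye(N, dtype=complex)
    for _ in range(n_iter):
        Binv = np.linalg.inv(B)
        A = sum(X @ Binv @ X.conj().T for X in snapshots) / (n * N)
        Ainv = np.linalg.inv(A)
        B = sum(X.conj().T @ Ainv @ X for X in snapshots) / (n * M)
    return A, B

def whiten(test_snapshot, A, B):
    # applies inv(B kron A) to vec(test_snapshot) without forming the
    # MN x MN matrix, via vec(A^-1 X B^-T)
    return np.linalg.inv(A) @ test_snapshot @ np.linalg.inv(B).T
```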
Electronic Imaging | 2006
Philip M. Hanna; Brian D. Rigling; Edmund G. Zelnio
There is a need for persistent-surveillance assets to capture high-resolution, three-dimensional data for use in assisted target recognition systems. Passive electro-optic imaging systems are presently limited to providing only 2-D measurements. We describe a methodology and system that uses existing technology to obtain 3-D information from disparate 2-D observations. This data can then be used to locate and classify objects under obscuration and noise. We propose a novel methodology for 3-D object reconstruction through the use of established confocal microscopy techniques. A moving airborne sensing platform captures a sequence of geo-referenced, electro-optic images. Confocal processing of this data can synthesize a large virtual lens with an extremely sharp (small) depth of focus, thus yielding a highly discriminating 3-D data collection capability based on 2-D imagery. This allows existing assets to be used to obtain high-quality 3-D data (due to the fine z-resolution). This paper presents a stochastic algorithm for reconstruction of a 3-D target from a sequence of affine projections. We iteratively gather 2-D images over a known path, detect target edges, and aggregate the edges in 3-D space. In the final step, an expectation is computed, resulting in an estimate of the target structure.
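One way to picture the edge-aggregation step is to project a grid of candidate 3-D points into each image with its known affine projection, accumulate edge hits per point, and threshold the per-point average; the grid, projection matrices, and threshold here are assumed inputs rather than the paper's algorithm:

```python
import numpy as np

# Aggregate binary 2-D edge maps into a set of 3-D points supported by
# many views, using known 2x4 affine projection matrices.
def aggregate_edges(edge_images, projections, grid_pts, threshold=0.5):
    homog = np.hstack([grid_pts, np.ones((grid_pts.shape[0], 1))])
    votes = np.zeros(grid_pts.shape[0])
    for edges, P in zip(edge_images, projections):
        uv = (P @ homog.T).T                       # project grid into image
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        inside = (0 <= v) & (v < edges.shape[0]) & (0 <= u) & (u < edges.shape[1])
        votes[inside] += edges[v[inside], u[inside]]
    mean_votes = votes / len(edge_images)          # per-point expectation
    return grid_pts[mean_votes > threshold]
```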
IEEE Radar Conference | 2014
Christopher Paulson; Edmund G. Zelnio
Synthetic Aperture Radar (SAR) is used for a variety of missions, many of which require higher resolutions, larger scene sizes, and a variety of platform flight paths. All of these factors affect vehicle signatures and can cause the signature of the same vehicle to look significantly different, even if the vehicle is at the same azimuth and elevation angle with respect to the center of the synthetic aperture and the radar has the same bandwidth and coherent integration angle. Vehicle signatures are compared both qualitatively and quantitatively using long integration times, different flight paths, and different image locations with respect to scene center. The images are quantitatively compared using a metric based on previously successful SAR classification approaches. It is shown that the classifier must account for these signature differences to achieve reasonable performance.
Proceedings of SPIE | 2014
B. A. Blakeslee; Edmund G. Zelnio; Daniel E. Koditschek
We explore the potential on-line adjustment of sensory controls for improved object identification and discrimination in the context of a simulated high-resolution camera system carried onboard a maneuverable robotic platform that can actively choose its observational position and pose. Our early numerical studies suggest the significant efficacy and enhanced performance achieved by even very simple feedback-driven iteration of the view, in contrast to identification from a fixed pose uninformed by any active adaptation. Specifically, we contrast the discriminative performance of the same conventional classification system when informed by: a random glance at a vehicle; two random glances at a vehicle; or a random glance followed by a guided second look. After each glance, edge detection algorithms isolate the most salient features of the image and template matching is performed through the use of the Hausdorff distance, comparing the simulated sensed images with reference images of the vehicles. We present initial simulation statistics that overwhelmingly favor the third scenario. We conclude with a sketch of our near-future steps in this study, which will entail: the incorporation of more sophisticated image processing and template matching algorithms; more complex discrimination tasks such as distinguishing between two similar vehicles or vehicles in motion; more realistic models of the observer's mobility, including platform dynamics and eventually environmental constraints; and expanding the sensing task beyond the identification of a specified object selected from a pre-defined library of alternatives.
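For reference, the symmetric Hausdorff distance between two edge-point sets used for template matching above can be computed directly from pairwise distances; this brute-force version is adequate for modest point counts:

```python
import numpy as np

# Symmetric Hausdorff distance between two sets of edge-pixel coordinates,
# a (Na, 2) array and b (Nb, 2) array.
def hausdorff(a, b):
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab = d.min(axis=1).max()   # farthest point of a from its nearest point in b
    d_ba = d.min(axis=0).max()   # farthest point of b from its nearest point in a
    return max(d_ab, d_ba)
```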