Publications


Featured research published by Ernest E. Armstrong.


IEEE Transactions on Image Processing | 1997

Joint MAP registration and high-resolution image estimation using a sequence of undersampled images

Russell C. Hardie; Kenneth J. Barnard; Ernest E. Armstrong

In many imaging systems, the detector array is not sufficiently dense to adequately sample the scene with the desired field of view. This is particularly true for many infrared focal plane arrays. Thus, the resulting images may be severely aliased. This paper examines a technique for estimating a high-resolution image, with reduced aliasing, from a sequence of undersampled frames. Several approaches to this problem have been investigated previously. However, in this paper a maximum a posteriori (MAP) framework for jointly estimating image registration parameters and the high-resolution image is presented. Several previous approaches have relied on knowing the registration parameters a priori or have utilized registration techniques not specifically designed to treat severely aliased images. In the proposed method, the registration parameters are iteratively updated along with the high-resolution image in a cyclic coordinate-descent optimization procedure. Experimental results are provided to illustrate the performance of the proposed MAP algorithm using both visible and infrared images. Quantitative error analysis is provided and several images are shown for subjective evaluation.
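
To make the cyclic coordinate-descent idea concrete, here is a minimal numerical sketch, not the paper's exact MAP formulation: it assumes integer shifts, a simple block-average decimation model, and a quadratic smoothness prior in place of the paper's Gaussian prior covariance. All function and parameter names are illustrative.

```python
import numpy as np

def laplacian(z):
    """Discrete Laplacian, used here as the gradient of a simple smoothness prior."""
    return (4 * z - np.roll(z, 1, 0) - np.roll(z, -1, 0)
                  - np.roll(z, 1, 1) - np.roll(z, -1, 1))

def observe(z, dy, dx, factor):
    """Toy observation model: shift the high-resolution image by whole pixels,
    then average factor x factor blocks to mimic detector integration."""
    w = np.roll(np.roll(z, dy, axis=0), dx, axis=1)
    h, wd = w.shape
    return w.reshape(h // factor, factor, wd // factor, factor).mean(axis=(1, 3))

def map_coordinate_descent(frames, factor, n_iters=30, step=0.2, prior_weight=0.01):
    """Alternate between a gradient step on the high-resolution image (shifts held
    fixed) and an exhaustive search over integer shifts (image held fixed)."""
    z = np.kron(np.asarray(frames[0], dtype=float), np.ones((factor, factor)))
    shifts = [(0, 0)] * len(frames)
    candidates = [(dy, dx) for dy in range(-factor, factor + 1)
                           for dx in range(-factor, factor + 1)]
    for _ in range(n_iters):
        # image update: gradient of the sum-of-squares data term plus prior term
        grad = prior_weight * laplacian(z)
        for y, (dy, dx) in zip(frames, shifts):
            resid = observe(z, dy, dx, factor) - y
            up = np.kron(resid, np.ones((factor, factor))) / factor ** 2
            grad += np.roll(np.roll(up, -dy, axis=0), -dx, axis=1)
        z = z - step * grad
        # registration update: pick the shift that best explains each frame
        shifts = [min(candidates,
                      key=lambda s: np.sum((observe(z, s[0], s[1], factor) - y) ** 2))
                  for y in frames]
    return z, shifts
```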


Optical Engineering | 1998

High-resolution image reconstruction from a sequence of rotated and translated frames and its application to an infrared imaging system

Russell C. Hardie; Kenneth J. Barnard; John G. Bognar; Ernest E. Armstrong; Edward A. Watson

Some imaging systems employ detector arrays that are not sufficiently dense to meet the Nyquist criterion during image acquisition. This is particularly true for many staring infrared imagers. Thus, the full resolution afforded by the optics is not being realized in such a system. This paper presents a technique for estimating a high-resolution image, with reduced aliasing, from a sequence of undersampled rotated and translationally shifted frames. Such an image sequence can be obtained if an imager is mounted on a moving platform, such as an aircraft. Several approaches to this type of problem have been proposed in the literature. Here we extend some of this previous work. In particular, we define an observation model which incorporates knowledge of the optical system and detector array. The high-resolution image estimate is formed by minimizing a new regularized cost function which is based on the observation model. We show that with the proper choice of a tuning parameter, our algorithm exhibits robustness in the presence of noise. We consider both gradient descent and conjugate gradient optimization procedures to minimize the cost function. Detailed experimental results are provided to illustrate the performance of the proposed algorithm using digital video from an infrared imager.
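
A regularized cost of this general shape (per-frame data-fidelity terms plus a smoothness penalty weighted by a tuning parameter) can be minimized with a conjugate-gradient routine, as in the sketch below. The forward model and its adjoint stand in for the paper's rotate/shift/blur/decimate operator and are caller-supplied placeholders; the Laplacian penalty and periodic boundary handling are assumptions, not the paper's exact choices.

```python
import numpy as np
from scipy.optimize import minimize

def super_resolve(frames, forward, adjoint, lam, z0, maxiter=50):
    """Minimize sum_k ||forward(z, k) - y_k||^2 + lam * ||laplacian(z)||^2 with
    nonlinear conjugate gradients. forward(z, k) applies the observation model
    for frame k; adjoint(r, k) applies its transpose to a residual."""
    shape = z0.shape

    def lap(z):
        return (4 * z - np.roll(z, 1, 0) - np.roll(z, -1, 0)
                      - np.roll(z, 1, 1) - np.roll(z, -1, 1))

    def cost_and_grad(zf):
        z = zf.reshape(shape)
        cost, grad = 0.0, np.zeros(shape)
        for k, y in enumerate(frames):
            r = forward(z, k) - y
            cost += np.sum(r ** 2)
            grad += 2.0 * adjoint(r, k)
        l = lap(z)
        cost += lam * np.sum(l ** 2)
        grad += 2.0 * lam * lap(l)   # Laplacian is self-adjoint under periodic rolls
        return cost, grad.ravel()

    res = minimize(cost_and_grad, z0.ravel(), jac=True, method='CG',
                   options={'maxiter': maxiter})
    return res.x.reshape(shape)
```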


Applied Optics | 2000

Scene-based Nonuniformity Correction with Video Sequences and Registration

Russell C. Hardie; Majeed M. Hayat; Ernest E. Armstrong; Brian J. Yasuda

We describe a new, to our knowledge, scene-based nonuniformity correction algorithm for array detectors. The algorithm relies on the ability to register a sequence of observed frames in the presence of the fixed-pattern noise caused by pixel-to-pixel nonuniformity. In low-to-moderate levels of nonuniformity, sufficiently accurate registration may be possible with standard scene-based registration techniques. If the registration is accurate, and motion exists between the frames, then groups of independent detectors can be identified that observe the same irradiance (or true scene value). These detector outputs are averaged to generate estimates of the true scene values. With these scene estimates, and the corresponding observed values through a given detector, a curve-fitting procedure is used to estimate the individual detector response parameters. These can then be used to correct for detector nonuniformity. The strength of the algorithm lies in its simplicity and low computational complexity. Experimental results, to illustrate the performance of the algorithm, include the use of visible-range imagery with simulated nonuniformity and infrared imagery with real nonuniformity.
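
A compact sketch of the registration-then-fit idea, under simplifying assumptions: a linear detector response (output = gain * scene + offset), global integer frame shifts already produced by a registration step, and wrap-around handling of frame edges. The per-detector gain and offset come from a closed-form linear least-squares fit; names and tolerances are illustrative rather than taken from the paper.

```python
import numpy as np

def scene_based_nuc(frames, shifts):
    """Registration-based NUC sketch. frames: list of HxW arrays; shifts: each
    frame's (dy, dx) integer displacement relative to the first frame."""
    frames = [np.asarray(f, dtype=float) for f in frames]
    # 1. Average the registered frames: independent detectors that observed the
    #    same irradiance are pooled into a single scene estimate.
    aligned = [np.roll(np.roll(f, -dy, axis=0), -dx, axis=1)
               for f, (dy, dx) in zip(frames, shifts)]
    scene = np.mean(aligned, axis=0)
    # 2. Pair every detector's raw outputs with the scene values it observed and
    #    fit gain/offset per detector by closed-form linear least squares.
    obs = np.stack(frames)                                        # (N, H, W)
    truth = np.stack([np.roll(np.roll(scene, dy, axis=0), dx, axis=1)
                      for (dy, dx) in shifts])
    n = obs.shape[0]
    sx, sy = truth.sum(axis=0), obs.sum(axis=0)
    sxx, sxy = (truth * truth).sum(axis=0), (truth * obs).sum(axis=0)
    gain = (n * sxy - sx * sy) / (n * sxx - sx ** 2 + 1e-12)
    offset = (sy - gain * sx) / n
    safe_gain = np.where(np.abs(gain) < 1e-6, 1.0, gain)
    corrected = [(f - offset) / safe_gain for f in frames]
    return gain, offset, corrected
```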


Applied Optics | 1999

Statistical algorithm for nonuniformity correction in focal-plane arrays

Majeed M. Hayat; Sergio N. Torres; Ernest E. Armstrong; Stephen C. Cain; Brian J. Yasuda

A statistical algorithm has been developed to compensate for the fixed-pattern noise associated with spatial nonuniformity and temporal drift in the response of focal-plane array infrared imaging systems. The algorithm uses initial scene data to generate initial estimates of the gain, the offset, and the variance of the additive electronic noise of each detector element. The algorithm then updates these parameters by use of subsequent frames and uses the updated parameters to restore the true image by use of a least-mean-square error finite-impulse-response filter. The algorithm is applied to infrared data, and the restored images compare favorably with those restored by use of a multiple-point calibration technique.
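
The paper's recursive estimator is involved; as a rough illustration of the same scene-statistics idea, the sketch below uses the simpler constant-statistics assumption (every detector sees scene samples with the same temporal mean and variance over the training sequence). This is a related but different estimator from the one developed in the paper.

```python
import numpy as np

def constant_statistics_nuc(frames):
    """Constant-statistics NUC: if, over enough frames, every detector sees
    scene samples with the same temporal mean and variance, then each
    detector's temporal statistics expose its own gain and offset."""
    cube = np.stack([np.asarray(f, dtype=float) for f in frames])  # (N, H, W)
    mu = cube.mean(axis=0)                 # per-detector temporal mean
    sigma = cube.std(axis=0) + 1e-12       # per-detector temporal std-dev
    gain = sigma / sigma.mean()            # normalize the average gain to 1
    offset = mu - gain * mu.mean()
    corrected = (cube - offset) / gain
    return gain, offset, corrected
```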


IEEE Transactions on Image Processing | 2001

Projection-based image registration in the presence of fixed-pattern noise

Stephen C. Cain; Majeed M. Hayat; Ernest E. Armstrong

A computationally efficient method for image registration is investigated that can achieve an improved performance over the traditional two-dimensional (2-D) cross-correlation-based techniques in the presence of both fixed-pattern and temporal noise. The method relies on transforming each image in the sequence of frames into two vector projections formed by accumulating pixel values along the rows and columns of the image. The vector projections corresponding to successive frames are in turn used to estimate the individual horizontal and vertical components of the shift by means of a one-dimensional (1-D) cross-correlation-based estimator. While gradient-based shift estimation techniques are computationally efficient, they often exhibit degraded performance under noisy conditions in comparison to cross-correlators due to the fact that the gradient operation amplifies noise. The projection-based estimator, on the other hand, significantly reduces the computational complexity associated with the 2-D operations involved in traditional correlation-based shift estimators while improving the performance in the presence of temporal and spatial noise. To show the noise rejection capability of the projection-based shift estimator relative to the 2-D cross correlator, a figure-of-merit is developed and computed reflecting the signal-to-noise ratio (SNR) associated with each estimator. The two methods are also compared by means of computer simulation and tests using real image sequences.
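
A minimal sketch of the projection idea for integer shifts: collapse each frame to its row and column sums and cross-correlate the 1-D projections. Subpixel interpolation and the noise-weighted details analyzed in the paper are omitted, and names are illustrative.

```python
import numpy as np

def projection_shift(frame_a, frame_b, max_shift=10):
    """Estimate the integer (dy, dx) displacement between two frames from the
    cross-correlation of their 1-D row and column projections."""
    a = np.asarray(frame_a, dtype=float)
    b = np.asarray(frame_b, dtype=float)

    def best_lag(pa, pb):
        pa = pa - pa.mean()
        pb = pb - pb.mean()
        lags = range(-max_shift, max_shift + 1)
        # lag that best aligns frame_b's projection with frame_a's (circular)
        return max(lags, key=lambda s: float(np.dot(pa, np.roll(pb, s))))

    dy = best_lag(a.sum(axis=1), b.sum(axis=1))   # row projections -> vertical shift
    dx = best_lag(a.sum(axis=0), b.sum(axis=0))   # column projections -> horizontal shift
    return dy, dx
```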


Applied Optics | 2006

Flash light detection and ranging range accuracy limits for returns from single opaque surfaces via Cramer-Rao bounds

Stephen C. Cain; Richard D. Richmond; Ernest E. Armstrong

A Cramer-Rao lower bound on the range accuracy obtainable by a Flash light detection and ranging (LADAR) system receiving a return from a single surface in the instantaneous field of view of each detector is developed and verified with experimental data. The bound is compared to the performance of a new algorithm and that of a matched filter receiver by using both simulated and measured LADAR data. The simulated data are used to show that the estimator is nearly unbiased and efficient for systems that match the negative paraboloid model used in its derivation. It is found that the achievable range accuracy for the LADAR system and for the target geometry used to collect the measured data is of the order of 2.5 in. while the bound predicts a range accuracy limit of approximately 0.6 in.
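
Bounds of this kind follow from the Fisher information of the return waveform; a generic numeric version for Poisson-distributed counts per range bin is sketched below. The Gaussian pulse in the example is a stand-in, not the negative-paraboloid model derived in the paper, and all names and numbers are illustrative.

```python
import numpy as np

def crlb_range(expected_counts, r0, bins, dr=1e-3):
    """Numeric Cramer-Rao lower bound on the std-dev of a range estimate.
    expected_counts(r, bins) returns the expected photon count per range bin for
    a surface at range r; bin counts are assumed independent Poisson variables."""
    lam = expected_counts(r0, bins)
    dlam = (expected_counts(r0 + dr, bins) - expected_counts(r0 - dr, bins)) / (2 * dr)
    fisher = np.sum(dlam ** 2 / np.maximum(lam, 1e-12))   # Fisher information
    return 1.0 / np.sqrt(fisher)                          # lower bound on range std-dev

# Illustrative only: a Gaussian pulse 0.15 m wide (in range) peaking near 50 counts.
bins = np.linspace(25.0, 35.0, 400)
pulse = lambda r, b: 50.0 * np.exp(-(b - r) ** 2 / (2 * 0.15 ** 2))
print(crlb_range(pulse, 30.0, bins))   # bound on achievable range accuracy, meters
```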


Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XI | 2000

Kalman-filtering approach for nonuniformity correction in focal plane array sensors

Sergio N. Torres; Majeed M. Hayat; Ernest E. Armstrong; Brian J. Yasuda

A Kalman filter is developed to estimate the temporal drift in the gain and the offset of detectors in focal-plane array sensors from scene data. The novelty of this approach is that the gain and the offset are modeled by random sequences (state variables) which must be estimated from the current and past noisy scene data. The gain and the offset are assumed constant over fixed-length blocks of frames; however, these parameters may slowly drift from block to block according to a temporal discrete-time Gauss-Markov process. The input to the Kalman filter consists of a sequence of blocks of frames, and the output at any time is a vector containing current estimates of the gain and the offset for each detector. Once these estimates are generated, the true image is restored by means of a least-mean-square error temporal FIR filter. The efficacy of the reported technique is demonstrated by applying it to two sets of real infrared data, and the advantage rendered by the Gauss-Markov model is shown.
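
A bare-bones per-detector Kalman filter illustrating the block-to-block drift model is sketched below. The state transition, the noise levels, and the source of the per-block (gain, offset) observations are all assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def kalman_gain_offset(block_estimates, alpha=0.95, q=1e-3, r=1e-2):
    """Track one detector's [gain, offset] state across blocks of frames.
    block_estimates: sequence of noisy per-block (gain, offset) measurements,
    e.g. produced by a scene-based fit; alpha, q, r are assumed Gauss-Markov
    drift and noise parameters, not values from the paper."""
    F = alpha * np.eye(2)            # state transition (Gauss-Markov decay)
    H = np.eye(2)                    # the state is observed directly, with noise
    Q = q * np.eye(2)                # process (drift) noise covariance
    R = r * np.eye(2)                # measurement noise covariance
    x = np.array([1.0, 0.0])         # initial state: unit gain, zero offset
    P = np.eye(2)                    # initial state covariance
    history = []
    for z in block_estimates:
        x = F @ x                                          # predict
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Kalman gain
        x = x + K @ (np.asarray(z, dtype=float) - H @ x)   # update
        P = (np.eye(2) - K @ H) @ P
        history.append(x.copy())
    return np.array(history)         # filtered [gain, offset] per block
```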


Optical Engineering | 2006

Maximum a posteriori image and seeing condition estimation from partially coherent two-dimensional light detection and ranging images

Adam MacDonald; Stephen C. Cain; Ernest E. Armstrong

Recent developments in staring focal plane technology have spawned significant interest in the application of gated laser radar systems to battlefield remote sensing. Such environments are characterized by rapidly changing atmospheric seeing conditions and significant image distortion caused by long slant-range paths through the densest regions of the atmosphere. Limited weight, space, and computational resources tend to prohibit the application of adaptive optics systems to mitigate atmospheric image distortion. We demonstrate and validate the use of a fast, iterative, maximum a posteriori (MAP) estimator to estimate both the original target scene and the ensemble-averaged atmospheric optical transfer function parameterized by Fried's seeing parameter. Wide-field-of-view sensor data are simulated to emulate images collected on an experimental test range. Simulated and experimental multiframe motion-compensated average images are deconvolved by the MAP estimator to produce most likely estimates of the truth image as well as the atmospheric seeing condition. For comparison, Fried's seeing parameter is estimated from experimentally collected images using a knife-edge response technique. The MAP estimator is found to yield seeing condition estimates within approximately 6% using simulated speckle images, and within approximately 8% of knife-edge derived truth for a limited set of experimentally collected image data.
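
The estimator's only unknown blur parameter is Fried's r0. For reference, the textbook long-exposure and average short-exposure atmospheric MTF approximations that such a parameterization rests on are sketched below; these are standard forms, not the paper's exact observation model, and variable names are illustrative. A joint estimator can then search over r0, rebuilding the candidate PSF from one of these MTFs at each step.

```python
import numpy as np

def long_exposure_mtf(f, r0, wavelength):
    """Long-exposure atmospheric MTF. f: angular spatial frequency in
    cycles/radian; r0: Fried's seeing parameter (m); wavelength in meters."""
    return np.exp(-3.44 * (wavelength * np.asarray(f) / r0) ** (5.0 / 3.0))

def short_exposure_mtf(f, r0, D, wavelength, alpha=1.0):
    """Fried's average short-exposure approximation: the aperture-dependent
    correction term (alpha = 1 near field, 0.5 far field) restores some high
    frequencies relative to the long-exposure case. D: aperture diameter (m)."""
    u = wavelength * np.asarray(f)
    return np.exp(-3.44 * (u / r0) ** (5.0 / 3.0)
                  * (1.0 - alpha * (u / D) ** (1.0 / 3.0)))
```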


Optical Science and Technology, the SPIE 49th Annual Meeting | 2004

Image restoration techniques for partially coherent 2-D LADAR imaging systems

Adam MacDonald; Stephen C. Cain; Ernest E. Armstrong

A new image reconstruction algorithm is used to remove the effect of atmospheric turbulence on motion-compensated frame-averaged data collected by a laser-illuminated 2-D imaging system. The algorithm simultaneously computes a high-resolution image and Fried's seeing parameter via a MAP estimation technique. This blind deconvolution algorithm differs from other techniques in that it parameterizes the unknown component of the impulse response as an average short-exposure point spread function. The utility of the approach lies in its application to laser-illuminated imaging, where laser speckle and turbulence effects dominate other sources of error and the field of view of the sensor greatly exceeds the isoplanatic angle.


Proceedings of SPIE | 2010

The application of iterative closest point (ICP) registration to improve 3D terrain mapping estimates using the flash 3D ladar system

Jack Woods; Ernest E. Armstrong; Walter Armbruster; Richard D. Richmond

The primary purpose of this research was to develop an effective means of creating a 3D terrain map image (point cloud) in GPS-denied regions from a sequence of co-boresighted visible and 3D LADAR images. Both the visible and 3D LADAR cameras were hard-mounted to a vehicle. The vehicle was then driven around the streets of an abandoned village used as a training facility by the German Army, and imagery was collected. The visible and 3D LADAR images were then fused and 3D registration was performed using a variation of the iterative closest point (ICP) algorithm. The ICP algorithm is widely used for spatial and geometric alignment of 3D imagery, producing a set of rotation and translation transformations between two 3D images. The ICP rotation and translation information obtained from registering the fused visible and 3D LADAR imagery was then used to calculate the x-y plane, range, and intensity (xyzi) coordinates of various structures (buildings, vehicles, trees, etc.) along the driven path. The xyzi coordinate information was then combined to create a 3D terrain map (point cloud). In this paper, we describe the development and application of 3D imaging techniques (most specifically the ICP algorithm) used to improve spatial, range, and intensity estimates of imagery collected during urban terrain mapping, using a co-boresighted, commercially available digital video camera with a 640×480-pixel focal plane and a 3D flash LADAR. Various representations of the reconstructed point clouds for the drive-through data will also be presented.
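
For reference, a self-contained sketch of basic point-to-point ICP follows: nearest-neighbour matching plus a closed-form SVD solve for the rigid transform. The paper's specific ICP variation, the visible/LADAR fusion, and the outlier handling needed for real drive-through data are not reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (closed-form Kabsch/Horn solution via SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, n_iters=30, tol=1e-6):
    """Basic point-to-point ICP: repeatedly match each source point to its
    nearest destination point and solve for the rigid transform, until the
    mean squared match distance stops improving."""
    src = np.asarray(src, dtype=float).copy()
    dst = np.asarray(dst, dtype=float)
    tree = cKDTree(dst)
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(n_iters):
        dists, idx = tree.query(src)
        R, t = best_rigid_transform(src, dst[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = float(np.mean(dists ** 2))
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total
```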

Collaboration


Dive into Ernest E. Armstrong's collaborations.

Top Co-Authors

Brian J. Yasuda (Wright-Patterson Air Force Base)
Stephen C. Cain (Wright-Patterson Air Force Base)
Kenneth J. Barnard (Air Force Research Laboratory)
Richard D. Richmond (Air Force Research Laboratory)
Adam MacDonald (Air Force Institute of Technology)
Edward A. Watson (Air Force Research Laboratory)
Jack Woods (Air Force Research Laboratory)