Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Jonathan D. Fanning is active.

Publication


Featured research published by Jonathan D. Fanning.


IEEE Sensors Journal | 2013

New Image Quality Assessment Algorithms for CFA Demosaicing

Robert A. Maschal; S. Susan Young; Joseph P. Reynolds; Keith Krapels; Jonathan D. Fanning; Ted Corbin

To address the frequent lack of a reference image or ground truth when performance testing Bayer-pattern color filter array (CFA) demosaicing algorithms, we propose two new no-reference quality assessment algorithms. These algorithms provide a relative comparison of two demosaicing algorithms by measuring the presence of two common artifacts, zippering and false coloring, in their output images. The first algorithm, the edge slope measure, tests the overall sharpness of each of the three color channels, thus estimating the relative edge reconstruction accuracy of each demosaicing algorithm. The second algorithm, the false color measure, estimates deviations from the established constant color difference image model and operates on the green-red and green-blue color difference planes, thereby estimating the red and blue channel reconstruction of each demosaicing algorithm. We evaluate and rank common demosaicing algorithms using these new algorithms. Furthermore, we present real image examples for subjective evaluation to justify the rankings suggested by the new quality assessment algorithms.
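The false color measure lends itself to a compact illustration. The sketch below is a minimal interpretation of the idea described above, not the authors' published algorithm: it scores a demosaiced image by how much the green-red and green-blue difference planes vary locally, since the constant color difference model predicts those planes should be locally smooth. The window size and the use of local variance are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def false_color_measure(rgb, window=5):
    """Illustrative no-reference false-color score for a demosaiced image.

    Under the constant color difference model, the G-R and G-B planes
    should be locally smooth; large local variance suggests false coloring.
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    score = 0.0
    for diff in (g - r, g - b):
        mean = uniform_filter(diff, size=window)
        var = uniform_filter(diff * diff, size=window) - mean * mean
        score += np.maximum(var, 0.0).mean()  # clamp tiny negative rounding errors
    return score  # higher score -> more suspected false coloring
```

Used as a relative measure, two demosaicing algorithms would be run on the same raw frame and the lower score taken to indicate better red and blue channel reconstruction, mirroring the comparative spirit of the paper.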


Proceedings of SPIE | 2011

Improved noise model for the US Army sensor performance metric

Bradley L. Preece; Jeffrey T. Olson; Joseph P. Reynolds; Jonathan D. Fanning

Image noise, originating from a sensor system, is often the limiting factor in target acquisition performance. This is especially true of reflective-band sensors operating in low-light conditions. To accurately predict target acquisition range performance, image degradation introduced by the sensor must be properly combined with the limitations of the human visual system. This is modeled by adding system noise and blur to the contrast threshold function (CTF) of the human visual system, creating a combined system CTF. Current U.S. Army sensor performance models (NVThermIP, SSCAMIP, IICAM, and IINVD) do not properly address how external noise is added to the CTF as a function of display luminance. Historically, the noise calibration constant was fit from data using image intensifiers operating at low display luminance, typically much less than one foot-Lambert. However, noise calibration experiments with thermal imagery used a higher display luminance, on the order of ten foot-Lamberts, resulting in a larger noise calibration constant. To address this discrepancy, hundreds of CTF measurements were taken as a function of display luminance, apparent target angle, frame rate, noise intensity and filter shape. The experimental results show that the noise calibration constant varies as a function of display luminance. To account for this luminance dependence, a photon shot noise term representing an additional limitation in the performance of the human visual system is added to the observer model. The new noise model will be incorporated in the new U.S. Army Integrated Performance Model (NV-IPM), allowing accurate comparisons over a wide variety of sensor modalities and display luminance levels.
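The mechanism the abstract describes, noise inflating the observer's contrast threshold, is often written as the eye CTF divided by system blur and multiplied by a noise term scaled by the calibration constant. The sketch below shows only that structure; the exact NVESD formulation and constant values are not given in the abstract, so `alpha` is supplied as a luminance-dependent callable, per the paper's finding.

```python
import numpy as np

def system_ctf(ctf_eye, mtf_sys, sigma, luminance, alpha):
    """Sketch of a noise-augmented system CTF (structure only; the exact
    NVESD formulation and calibration values may differ).

    ctf_eye   : eye contrast threshold at each spatial frequency
    mtf_sys   : system blur (MTF) at each frequency
    sigma     : perceived display noise at each frequency (luminance units)
    luminance : display luminance
    alpha     : noise calibration "constant"; per this paper it varies
                with display luminance, so it is supplied as a callable
    """
    a = alpha(luminance)
    return (ctf_eye / mtf_sys) * np.sqrt(1.0 + (a * sigma / luminance) ** 2)
```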


Optical Engineering | 2014

Human vision noise model validation for the U.S. Army sensor performance metric

Bradley L. Preece; Jeffrey T. Olson; Joseph P. Reynolds; Jonathan D. Fanning; David P. Haefner

Image noise originating from a sensor system is often the limiting factor in target acquisition performance, especially when limited by atmospheric transmission or low-light conditions. To accurately predict target acquisition range performance for a wide variety of imaging systems, image degradation introduced by the sensor must be properly combined with the limitations of the human visual system (HVS). This crucial step of incorporating the HVS has been improved and updated within NVESD’s latest imaging system performance model. The new noise model discussed here shows how an imaging system’s noise and blur are combined with the contrast threshold function (CTF) to form the system CTF. Model calibration constants were found by presenting low-contrast sine gratings with additive noise in a two-alternative forced-choice experiment. One of the principal improvements comes from adding an eye photon noise term, allowing the noise CTF to be accurate over a wide range of luminance. The latest HVS noise model is then applied to the targeting task performance metric responsible for predicting system performance from the system CTF. To validate this model, human target acquisition performance was measured from a series of infrared and visible-band noise-limited imaging systems.
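The eye photon noise term can be grafted onto the same structure as the sketch above. What follows is a toy form under stated assumptions: photon shot noise variance grows with luminance, so its contrast-domain variance falls off roughly as 1/luminance, which is what lets the calibration constant stay fixed across display luminance. The constant `k_eye` and the exact combination rule are placeholders, not the published values.

```python
import numpy as np

def system_ctf_eye_noise(ctf_eye, mtf_sys, sigma, luminance, alpha, k_eye):
    """Toy noise-augmented CTF with an added eye photon (shot) noise term.

    The k_eye / luminance term models the eye's own photon noise in the
    contrast domain; including it lets alpha remain a single constant
    over a wide luminance range.  k_eye is a placeholder value.
    """
    contrast_noise_sq = (alpha * sigma / luminance) ** 2 + k_eye / luminance
    return (ctf_eye / mtf_sys) * np.sqrt(1.0 + contrast_noise_sq)
```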


Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XVIII | 2007

IR system field performance with superresolution

Jonathan D. Fanning; Justin Miller; Jennifer K. Park; Gene D. Tener; Joseph Reynolds; Patrick O'Shea; Carl E. Halford; Ronald G. Driggers

Superresolution processing is currently being used to improve the performance of infrared imagers through an increase in sampling, the removal of aliasing, and the reduction of fixed-pattern noise. The performance improvement of superresolution has not been previously tested on military targets. This paper presents the results of human perception experiments to determine field performance on the NVESD standard military eight-target set using a prototype LWIR camera. These experiments test and compare human performance on both still images and movie clips, each generated with and without superresolution processing. Lockheed Martin's XR® algorithm is tested as a specific example of a modern combined superresolution and image processing algorithm. Basic superresolution with no additional processing is tested to help determine the benefit of the separate processes. The superresolution processing is modeled in NVThermIP for comparison to the perception test. The measured range to 70% probability of identification using XR® is increased by approximately 34%, while the 50% range is increased by approximately 19% for this camera. A comparison case is modeled using a more undersampled commercial MWIR sensor that predicts a 45% increase in range performance from superresolution.


Proceedings of SPIE, the International Society for Optical Engineering | 2005

LWIR and MWIR fusion algorithm comparison using image metrics

Srikant Chari; Jonathan D. Fanning; S. M. Salem; Aaron L. Robinson; Carl E. Halford

This study determines the effectiveness of a number of image fusion algorithms through the use of the following image metrics: mutual information, fusion quality index, weighted fusion quality index, edge-dependent fusion quality index, and the Mannos-Sakrison filter. The results obtained from this study provide objective comparisons between the algorithms. It is postulated that multi-spectral sensors enhance the probability of target discrimination through the additional information available from the multiple bands. The results indicate that more information is present in the fused image than in either single-band image. The image quality metrics quantify the benefits of fusion of MWIR and LWIR imagery.
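Of the listed metrics, mutual information is the simplest to state concretely: it measures how much knowing one image reduces uncertainty about the other. A minimal histogram-based estimate (the bin count is an arbitrary choice for illustration) might look like:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Histogram estimate of mutual information (bits) between two images."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()                # joint distribution
    px = pxy.sum(axis=1, keepdims=True)    # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)    # marginal of img_b
    nz = pxy > 0                           # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())
```

A fusion score in this spirit would sum MI(fused, MWIR) and MI(fused, LWIR); a higher total suggests the fused image preserves more information from both bands, consistent with the study's finding.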


Proceedings of SPIE, the International Society for Optical Engineering | 2008

Effect of image magnification on target acquisition performance

Brian P. Teaney; Jonathan D. Fanning

The current US Army target acquisition models have a dependence on magnification. This is due in part to the structure of the observer Contrast Threshold Function (CTF) used in the model. Given the shape of the CTF, both over-magnification and under-magnification can dramatically impact modeled performance. This paper presents the results from two different perception studies, one using degraded imagery and the other using field imagery. The results presented demonstrate the correlation between observer performance and model prediction and provide guidance for accurately representing system performance in under- and over-magnified cases.


Proceedings of SPIE, the International Society for Optical Engineering | 2008

Target identification performance of superresolution versus dither

Jonathan D. Fanning; Joseph P. Reynolds

This paper presents the results of a performance comparison between superresolution reconstruction and dither, also known as microscan. Dither and superresolution are methods to improve the performance of spatially undersampled systems by reducing aliasing and increasing sampling. The performance measured is the probability of identification versus range for a set of tracked, armored military vehicles. The performance improvements of dither and superresolution are compared to the performance of the base system with no additional processing. Field data were collected for all types of processing using the same basic sensor, which allows the methods to be compared without comparing different sensors. The performance of the various methods is compared experimentally using human perception tests. The perception test results are compared to modeled predictions of the range performance. Measured and modeled performance agree well for all of the methods.


Proceedings of SPIE | 2012

Modeling boost performance using a two dimensional implementation of the targeting task performance metric

Bradley L. Preece; David P. Haefner; Jonathan D. Fanning

Using post-processing filters to enhance image detail, a process commonly referred to as boost, can significantly affect the performance of an EO/IR system. The US Army's target acquisition models currently use the Targeting Task Performance (TTP) metric to quantify sensor performance. The TTP metric accounts for each element in the system, including blur and noise introduced by the imager, any additional post-processing steps, and the effects of the Human Visual System (HVS). The current implementation of the TTP metric assumes spatial separability, which can introduce significant errors when the TTP is applied to systems using non-separable filters. To accurately apply the TTP metric to systems incorporating boost, we have implemented a two-dimensional (2D) version of the TTP metric. The accuracy of the 2D TTP metric was verified through a series of perception experiments involving various levels of boost. The 2D TTP metric has been incorporated into the Night Vision Integrated Performance Model (NV-IPM), allowing accurate system modeling of non-separable image filters.
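The separability assumption the paper removes has a crisp linear-algebra reading: a 2D kernel h(x, y) = f(x)g(y) is exactly a rank-1 matrix, and the SVD gives both a separability test and the best separable approximation. The check below is illustrative only and not part of NV-IPM:

```python
import numpy as np

def separable_approximation_error(kernel):
    """Relative error of the best rank-1 (separable) fit to a 2D kernel.

    A separable kernel has one nonzero singular value; energy in the
    remaining singular values is what a 1D x 1D model cannot represent.
    """
    s = np.linalg.svd(np.asarray(kernel, dtype=float), compute_uv=False)
    return float(np.sqrt((s[1:] ** 2).sum() / (s ** 2).sum()))

# A Gaussian blur is separable (error ~ 0), but a common "plus"-shaped
# boost/sharpening kernel is rank 2, so a separable model misses part of it.
plus = [[0, -1, 0],
        [-1, 5, -1],
        [0, -1, 0]]
print(separable_approximation_error(plus))  # clearly nonzero
```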


Proceedings of SPIE | 2011

TOD to TTP calibration

Piet Bijl; Joseph P. Reynolds; Wouter K. Vos; Maarten A. Hogervorst; Jonathan D. Fanning

The TTP (Targeting Task Performance) metric, developed at NVESD, is the current standard US Army model to predict EO/IR Target Acquisition performance. This model, however, does not have a corresponding lab or field test to empirically assess the performance of a camera system. The TOD (Triangle Orientation Discrimination) method, developed at TNO in The Netherlands, provides such a measurement. In this study, we make a direct comparison between TOD performance for a range of sensors and the extensive historical US observer performance database built to develop and calibrate the TTP metric. The US perception data were collected from military personnel performing an identification task on a standard 12-target, 12-aspect tactical vehicle image set that was processed through simulated sensors in which the most fundamental sensor parameters, such as blur, sampling, and spatial and temporal noise, were varied. In the present study, we measured TOD sensor performance using exactly the same sensors processing a set of TOD triangle test patterns. The study shows that good overall agreement is obtained when the ratio between target characteristic size and TOD test pattern size at threshold equals 6.3. Note that this number is purely based on empirical data without any intermediate modeling. The calibration of the TOD to the TTP is highly beneficial to the sensor modeling and testing community for a variety of reasons. These include: i) a connection between requirement specification and acceptance testing, and ii) a very efficient method to quickly validate or extend the TTP range prediction model to new systems and tasks.
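Taken at face value, the 6.3 ratio turns a lab TOD measurement directly into a range prediction. The few lines below work through the unit arithmetic under that reading; the vehicle size and threshold value are purely illustrative numbers, not data from the study.

```python
def tod_predicted_range_km(target_size_m, tod_threshold_mrad, ratio=6.3):
    """Range at which the target's angular size is `ratio` times the
    TOD threshold triangle size.

    A target of characteristic size S [m] at range R [km] subtends
    S / R [mrad], so S / R = ratio * theta  =>  R = S / (ratio * theta).
    """
    return target_size_m / (ratio * tod_threshold_mrad)

# Illustrative numbers only: a 3.1 m characteristic-size vehicle and a
# measured TOD threshold of 0.1 mrad give roughly a 4.9 km ID range.
print(tod_predicted_range_km(3.1, 0.1))
```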


Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XVIII | 2007

Direct view optics model for facial identification

Ronald G. Driggers; Steve Moyer; Keith Krapels; Lou Larsen; Jonathan D. Fanning; Jonathan G. Hixson; Richard H. Vollmerhausen

Direct view optics is a class of sensors that includes the unaided human eye and the eye coupled to rifle scopes, spotter scopes, binoculars, and telescopes. The target acquisition model for direct view optics is based on the contrast threshold function of the eye with a modification for the optics modulation transfer function and the optical magnification. In this research, we extend the direct view model to the application of facial identification. The model is described, and the experimental method for calibrating the task of human facial identification is discussed.

Collaboration


Dive into Jonathan D. Fanning's collaboration.

Top Co-Authors

Keith Krapels
Office of Naval Research

Ronald G. Driggers
Ben-Gurion University of the Negev

Paul Larson
Office of Naval Research