Alan R. Pinkus
Air Force Research Laboratory
Publications
Featured research published by Alan R. Pinkus.
Proceedings of SPIE, the International Society for Optical Engineering | 2008
Kelly E. Neriani; Alan R. Pinkus; David W. Dommett
It is believed that the fusion of multiple different images into a single image should be of great benefit to Warfighters engaged in a search task. As such, research has increasingly focused on improving algorithms designed for image fusion. Many different fusion algorithms have already been developed; however, the majority of these algorithms have not been assessed in terms of their visual performance-enhancing effects using militarily relevant scenarios. The goal of this research is to apply a visual performance-based assessment methodology to four algorithms specifically designed for fusion of multispectral digital images. The image fusion algorithms used in this study included a Principal Component Analysis (PCA) based algorithm, a shift-invariant wavelet transform algorithm, a contrast-based algorithm, and the standard method of fusion, pixel averaging. The methodology was developed to acquire objective human visual performance data as a means of evaluating the image fusion algorithms. Standard objective performance metrics, such as response time and error rate, were used to compare the fused images against two baseline conditions comprising the individual source images used to create the fused test images (one from a visible sensor and one from a thermal sensor). Observers completed a visual search task using a spatial forced-choice paradigm: they searched images for a target (a military vehicle) hidden among foliage and then indicated in which quadrant of the screen the target was located. Response time and percent correct were measured for each observer. Results of this study and future directions are discussed.
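As an illustration of the simplest of these approaches, the sketch below shows pixel averaging and a PCA-weighted combination of a co-registered visible/thermal image pair. The array names and the weighting scheme are illustrative assumptions, not the specific algorithms evaluated in the study.

```python
import numpy as np

def fuse_average(visible, thermal):
    """Baseline fusion: per-pixel mean of two co-registered grayscale images."""
    return (visible.astype(float) + thermal.astype(float)) / 2.0

def fuse_pca(visible, thermal):
    """PCA-weighted fusion: weight each band by its loading on the first
    principal component of the two-band pixel distribution."""
    stack = np.stack([visible.ravel(), thermal.ravel()]).astype(float)
    cov = np.cov(stack)                      # 2x2 covariance of the two bands
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    w = np.abs(eigvecs[:, -1])               # loadings of the dominant component
    w = w / w.sum()                          # normalize to fusion weights
    return w[0] * visible + w[1] * thermal

# Toy example with synthetic 4x4 "sensor" images
vis = np.random.rand(4, 4)
ir = np.random.rand(4, 4)
print(fuse_average(vis, ir))
print(fuse_pca(vis, ir))
```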
Electronic Imaging | 2007
Daniel W. Repperger; Alan R. Pinkus; Julie A. Skipper; Christina D. Schrider
Discrimination of friendly or hostile objects is investigated using information-theoretic measures/metrics in an image that has been compromised by a number of factors. In aerial military images, objects with different orientations can be reasonably approximated by a single identification signature consisting of the average histogram of the object under rotations. Three different information-theoretic measures/metrics are studied as possible criteria to help classify the objects. The first measure is the standard mutual information (MI) between the sampled object and the library object signatures. A second measure is based on information efficiency, which differs from MI. Finally, an information distance metric is employed, which determines the distance, in an information sense, between the sampled object and the library object. It is shown that the three (parsimonious) information-theoretic variables introduced here form an independent basis, in the sense that any variable in the information channel can be uniquely expressed in terms of these three parameters. The methodology is tested on a sample set of standardized images to evaluate its efficacy. A performance standardization methodology is presented which is based on manipulation of the contrast, brightness, and size attributes of the sample objects of interest.
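For reference, the sketch below computes the standard discrete mutual information and the information distance H(X|Y) + H(Y|X) from a joint histogram. The "information efficiency" term is paper-specific; only a commonly used normalization, I(X;Y)/H(X,Y), is shown here as an assumed stand-in.

```python
import numpy as np

def entropies(joint):
    """Entropy terms from a joint histogram of two discrete variables."""
    p = joint / joint.sum()                  # joint pmf
    px, py = p.sum(axis=1), p.sum(axis=0)    # marginal pmfs
    def H(q):
        q = q[q > 0]
        return -np.sum(q * np.log2(q))
    return H(px), H(py), H(p.ravel())

def info_measures(joint):
    Hx, Hy, Hxy = entropies(joint)
    mi = Hx + Hy - Hxy                   # mutual information I(X;Y)
    dist = Hxy - mi                      # information distance H(X|Y) + H(Y|X)
    eff = mi / Hxy if Hxy > 0 else 0.0   # one common normalization (assumption)
    return mi, dist, eff

# Joint histogram of sampled-object vs. library-object gray levels (toy data)
joint = np.array([[12, 3], [2, 15]], dtype=float)
print(info_measures(joint))
```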
Head- and Helmet-Mounted Displays XII: Design and Applications | 2007
H. Lee Task; Alan R. Pinkus
The image quality of night vision goggles is often expressed in terms of visual acuity, resolution, or modulation transfer function. The primary reason for providing a measure of image quality is the underlying assumption that the image quality metric correlates with the level of visual performance that one could expect when using the device, for example, target detection or target recognition performance. This paper provides a theoretical analysis of the relationships between these three image quality metrics: visual acuity, resolution, and modulation transfer function. Results from laboratory and field studies were used to relate these metrics to visual performance. These results can also be applied to non-image-intensifier-based imaging systems such as a helmet-mounted display coupled to an imaging sensor.
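As one concrete example of how such metrics interrelate (using commonly cited conversion conventions, not values from the paper), a limiting resolution quoted in cycles per milliradian can be restated as an equivalent Snellen acuity:

```python
import math

def snellen_from_cycles_per_mrad(cyc_per_mrad):
    """Convert limiting resolution (cycles/mrad) to an equivalent Snellen
    denominator, using the convention that 20/20 corresponds to resolving
    30 cycles per degree (1 arcmin detail)."""
    mrad_per_degree = math.pi / 180 * 1000       # ~17.45 mrad per degree
    cyc_per_degree = cyc_per_mrad * mrad_per_degree
    return 20 * 30 / cyc_per_degree              # Snellen denominator, 20/X

# A goggle resolving ~1.3 cycles/mrad is roughly 20/26 (illustrative numbers)
print(round(snellen_from_cycles_per_mrad(1.3)))
```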
Proceedings of SPIE, the International Society for Optical Engineering | 2008
Alan R. Pinkus; Miriam J. Poteet; Allan Pantle
In an empirical study, observers gave ratings of their ability to detect a military target in filtered images of natural scenes. The purpose of the study was twofold. First, the absolute values of the convolution images generated with oriented Gabor filters of different scales and orientations, and with pairs of filters (corner filters), provided brightness images that were evaluated as saliency maps of potential target locations. The generation of the saliency maps with oriented Gabor filters was modeled after the second-order processing of texture in the visual system. Second, two methods of presentation of the saliency maps were compared. With the flicker presentation method, a saliency map was flickered on and off at a 2-Hz rate and superimposed upon the image of the original scene. The flicker presentation method was designed to take advantage of the known properties of the magnocellular pathway of the visual system. A second method (toggle presentation), used simply for comparison, required observers to switch back and forth between the saliency image and the image of the original scene. The primary results were that (1) saliency images produced with corner filters were rated higher than those produced with simple Gabor filters, and (2) ratings obtained with the flicker method were higher than those obtained with the toggle method, with the greatest advantage for filters tuned to lower spatial frequencies. The second result suggests that the flicker presentation method holds considerable promise as a new technique for combining information (dynamic image fusion) from two or more independently obtained (e.g., multi-spectral) or processed images.
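A minimal sketch of the underlying filtering step, rectified Gabor convolution used as a saliency map, is shown below. The kernel parameters and the max-over-orientations pooling are illustrative assumptions rather than the exact filters used in the study.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size, wavelength, theta, sigma):
    """Odd-symmetric Gabor kernel at orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.sin(2 * np.pi * xr / wavelength)

def saliency_map(image, wavelength=8, sigma=4, n_orient=4):
    """Rectified (absolute-value) Gabor responses pooled across orientations."""
    responses = []
    for k in range(n_orient):
        kern = gabor_kernel(21, wavelength, k * np.pi / n_orient, sigma)
        responses.append(np.abs(convolve2d(image, kern, mode="same")))
    return np.max(responses, axis=0)

scene = np.random.rand(64, 64)     # stand-in for a natural scene image
print(saliency_map(scene).shape)   # (64, 64) brightness/saliency image
```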
PLOS ONE | 2016
Alexander Toet; Maarten A. Hogervorst; Alan R. Pinkus; Houbing Song
The fusion and enhancement of multiband nighttime imagery for surveillance and navigation has been the subject of extensive research for over two decades. Despite the ongoing efforts in this area, there is still only a small number of static multiband test images available for the development and evaluation of new image fusion and enhancement methods. Moreover, dynamic multiband imagery is also currently lacking. To fill this gap we present the TRICLOBS dynamic multi-band image data set containing sixteen registered visual (0.4–0.7 μm), near-infrared (NIR, 0.7–1.0 μm) and long-wave infrared (LWIR, 8–14 μm) motion sequences. They represent different military and civilian surveillance scenarios recorded in three different scenes. Scenes include (military and civilian) people that are stationary, walking or running, or carrying various objects. Vehicles, foliage, and buildings or other man-made structures are also included in the scenes. This data set is primarily intended for the development and evaluation of image fusion, enhancement and color mapping algorithms for short-range surveillance applications. The imagery was collected during several field trials with our newly developed TRICLOBS (TRI-band Color Low-light OBServation) all-day all-weather surveillance system. This system records a scene in the visual, NIR and LWIR parts of the electromagnetic spectrum using three optically aligned sensors (two digital image intensifiers and an uncooled long-wave infrared microbolometer). The three sensor signals are mapped to three individual RGB color channels, digitized, and stored as uncompressed RGB (false) color frames. The TRICLOBS data set enables the development and evaluation of (both static and dynamic) image fusion, enhancement and color mapping algorithms. To allow the development of realistic color remapping procedures, the data set also contains color photographs of each of the three scenes. The color statistics derived from these photographs can be used to define color mappings that give the multi-band imagery a realistic color appearance.
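A minimal sketch of the statistics-based color remapping idea the data set supports is shown below: per-channel mean and standard-deviation matching of a false-color multiband frame to a daytime reference photograph. This is a deliberate simplification carried out directly in RGB; published color-mapping methods (e.g., lookup-table based transfers) are more involved.

```python
import numpy as np

def match_color_statistics(false_color, reference):
    """Shift and scale each channel of the false-color frame so its mean and
    standard deviation match those of the reference color photograph."""
    out = np.empty_like(false_color, dtype=float)
    for c in range(3):                      # R, G, B channels
        src = false_color[..., c].astype(float)
        ref = reference[..., c].astype(float)
        scale = ref.std() / (src.std() + 1e-6)
        out[..., c] = (src - src.mean()) * scale + ref.mean()
    return np.clip(out, 0, 255)

# Toy frames: a 3-band sensor frame stored as RGB, plus a daytime reference photo
frame = np.random.randint(0, 256, (120, 160, 3))
photo = np.random.randint(0, 256, (120, 160, 3))
print(match_color_statistics(frame, photo).shape)
```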
Helmet- and Head-Mounted Displays IX: Technologies and Applications | 2004
Alan R. Pinkus; Harry L. Task
When night vision goggle (NVG) image intensifier tubes (I2Ts) are replaced during maintenance, the output luminances of the two channels must not exceed a ratio of 1.5 (brighter channel luminance divided by the dimmer channel luminance) in order to meet the current allowed binocular luminance disparity specification. Two studies were performed to investigate the validity of this requirement. The first study estimated thresholds of binocular luminance disparity detection for observers looking through NVGs. For eight observers, the 25% corrected-for-chance probability of detecting an interocular luminance difference yielded an average ratio of 1.43, indicating that the current 1.5 specification is perhaps too loose. The second study investigated the Pulfrich phenomenon, a pseudo-stereo effect that can be induced by presenting luminance imbalances to the eyes. This study created NVG luminance imbalances using neutral density (ND) filters and then investigated whether or not the various imbalance levels were sufficient to cause the Pulfrich phenomenon to be perceived. Results indicated that an imbalance ratio of 1.10 was insufficient to cause the effect to be seen, but a ratio of 1.26 was sufficient (p ≤ 0.0003) for the effect to be seen at least part of the time. Based on these results, it is apparent that the allowed binocular luminance disparity ratio should probably be tightened to at least 1.3, with a goal of 1.2.
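Two of the quantities in this abstract are straightforward to compute and are sketched below for reference: the binocular luminance disparity ratio (brighter channel over dimmer channel) and a corrected-for-chance detection probability under the standard guessing correction. The chance level used here is an assumption for illustration, not a value taken from the paper.

```python
def luminance_ratio(left_cd_m2, right_cd_m2):
    """Binocular luminance disparity: brighter channel over dimmer channel."""
    hi, lo = max(left_cd_m2, right_cd_m2), min(left_cd_m2, right_cd_m2)
    return hi / lo

def corrected_for_chance(p_observed, p_chance=0.5):
    """Standard guessing correction: p_c = (p_obs - p_chance) / (1 - p_chance)."""
    return (p_observed - p_chance) / (1.0 - p_chance)

print(luminance_ratio(1.43, 1.0))      # 1.43, near the measured average threshold
print(corrected_for_chance(0.625))     # 0.25, i.e., the 25% corrected-for-chance level
```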
Proceedings of SPIE | 2015
John P. McIntire; Paul R. Havig; Alan R. Pinkus
In this work, we provide some common methods, techniques, information, concepts, and relevant citations for those conducting human factors-related research with stereoscopic 3D (S3D) displays. We give suggested methods for calculating binocular disparities, and show how to verify on-screen image separation measurements. We provide typical values for inter-pupillary distances that are useful in such calculations. We discuss the pros, cons, and suggested uses of some common stereovision clinical tests. We discuss the phenomena and prevalence rates of stereoanomalous, pseudo-stereoanomalous, stereo-deficient, and stereoblind viewers. The problems of eyestrain and fatigue-related effects from stereo viewing, and the possible causes, are enumerated. System and viewer crosstalk are defined and discussed, and the issue of stereo camera separation is explored. Typical binocular fusion limits are also provided for reference, and discussed in relation to zones of comfort. Finally, the concept of measuring disparity distributions is described. The implications of these issues for the human factors study of S3D displays are covered throughout.
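A minimal sketch of the kind of disparity calculation the paper collects is shown below: converting an on-screen image separation into an angular disparity and a perceived depth for a given viewing distance and inter-pupillary distance. The small-angle approximation and the example values are assumptions for illustration.

```python
import math

def angular_disparity_arcmin(separation_m, viewing_distance_m):
    """Retinal disparity relative to the screen plane (small-angle approximation)."""
    return math.degrees(separation_m / viewing_distance_m) * 60

def perceived_depth_m(separation_m, viewing_distance_m, ipd_m=0.063):
    """Distance behind (positive) or in front of (negative) the screen at which
    the fused point appears, from similar-triangles screen geometry."""
    return separation_m * viewing_distance_m / (ipd_m - separation_m)

# 5 mm uncrossed separation viewed from 0.7 m with a 63 mm IPD
print(angular_disparity_arcmin(0.005, 0.7))   # ~24.6 arcmin
print(perceived_depth_m(0.005, 0.7))          # ~0.060 m behind the screen

# Crossed separation: pass a negative value; a negative depth means in front of the screen
print(perceived_depth_m(-0.005, 0.7))
```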
Proceedings of SPIE | 2013
Alan R. Pinkus; David W. Dommett; H. Lee Task
This paper is the fifth in a series exploring the possibility of using a synthetic observer to assess the resolution of both real and synthetic (fused) sensors. The previous paper introduced an Automatic Triangle Orientation Detection Algorithm (ATODA) that was capable of recognizing the orientation of an equilateral triangle used as a resolution target, which complemented the Automatic Landolt C Orientation Recognition (ALCOR) software developed earlier. Three different spectral band sensors (infrared, near infrared and visible) were used to collect images that included both resolution targets and militarily relevant targets at multiple distances. The resolution targets were evaluated using the two software algorithms described above. For the current study, subjects viewed the same set of images previously used in order to obtain human-based assessments of the resolutions of these three sensors for comparison with the automated approaches. In addition, the same set of images contained hand-held target objects so that human performance in recognizing the targets could be compared to both the automated and human-based assessment of resolution for each sensor.
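The sketch below illustrates one simple way a synthetic observer can call the orientation of a resolution target: correlation against rotated templates. It is not the ATODA or ALCOR algorithm itself, only an assumed stand-in to show the idea.

```python
import numpy as np

def make_triangle(size, rotation_k):
    """Binary triangle template pointing up, rotated in 90-degree steps
    (a crude stand-in for the probe orientations)."""
    img = np.zeros((size, size))
    for r in range(size):
        half = r // 2
        img[r, size // 2 - half: size // 2 + half + 1] = 1.0
    return np.rot90(img, rotation_k)

def classify_orientation(patch):
    """Return the template rotation (0-3) with the highest normalized correlation."""
    p = (patch - patch.mean()) / (patch.std() + 1e-6)
    scores = []
    for k in range(4):
        t = make_triangle(patch.shape[0], k)
        t = (t - t.mean()) / (t.std() + 1e-6)
        scores.append(np.mean(p * t))
    return int(np.argmax(scores))

target = make_triangle(17, 2) + 0.1 * np.random.randn(17, 17)  # noisy probe image
print(classify_orientation(target))   # expected: 2
```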
Proceedings of SPIE | 2009
Alan R. Pinkus; Alexander Toet; H. Lee Task
In this effort we acquired and registered a multi-spectral dynamic image test set with the intent of using the imagery to assess the operational effectiveness of static and dynamic image fusion techniques for a range of relevant military tasks. This paper describes the image acquisition methodology, the planned human visual performance task approach, the lessons learned during image acquisition, and plans for a future, improved image set, resolution assessment methodology, and human visual performance task.
Proceedings of SPIE, the International Society for Optical Engineering | 2008
Hulya Demiryont; Kenneth C. Shannon; Jan Isidorsson; Sharon Dixon; Alan R. Pinkus
Helmet-Mounted Displays (HMDs) do not allow the pilot to change the transmission level of the visor when transitioning between high and low light levels. A variable-transmittance visor (VTV) is a possible solution. The Eclipse Variable Electrochromic Device (EclipseECD™) is well suited for these light-modulation applications. The EclipseECD™ modulates light intensity by changing its transmission level under an applied electric field; the optical density may be continuously changed by varying the voltage. The EclipseECD™ comprises vacuum-deposited layers of a transparent bottom electrode, an active element, and a transparent top electrode, incorporating an all-solid-state electrolyte. The solid-state electrolyte eliminates possible complications associated with gel-based technologies, the need for lamination, and any additional visor modifications. The low-temperature deposition process enables direct application onto HMD flight visors. Additionally, the coating is easily manufactured, can be trimmed, has near spectral neutrality, and fails in the clear (bleached) condition. Before introducing VTV technology to the warfighter, there are numerous human factors issues that must be assessed. Considerations include optical characteristics such as transmissive range, haze, irising, internal reflections, multiple imaging, user controllability, ease of fit, and field of view. Advanced materials tailoring, coupled with meeting these critical criteria, will help ensure successful integration of VTV technology.
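For reference on how the transmission figures such a visor would be specified by relate to optical density (a general optics relation, not data from the paper):

```python
import math

def optical_density(transmittance):
    """OD = -log10(T); e.g., T = 0.10 (10% transmission) gives OD 1.0."""
    return -math.log10(transmittance)

def transmittance(od):
    """Inverse relation: T = 10**(-OD)."""
    return 10.0 ** (-od)

# A visor dimming continuously from 70% to 10% transmission spans roughly OD 0.15 to 1.0
print(round(optical_density(0.70), 2), round(optical_density(0.10), 2))
```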