Penny R. Warren
United States Naval Research Laboratory
Publications
Featured research published by Penny R. Warren.
Machine Vision Applications | 1999
Dean A. Scribner; Penny R. Warren; Jonathan M. Schuler
There is a rapid growth in sensor technology in regions of the electromagnetic spectrum beyond the visible. Also, the parallel growth of powerful, inexpensive computers and digital electronics has made many new imaging applications possible. Although there are many approaches to sensor fusion, this paper provides a discussion of relevant infrared phenomenology and attempts to apply known methods of human color vision to achieve image fusion. Two specific topics of importance are color contrast enhancement and color constancy.
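Color constancy, the second topic named above, is often approximated with the gray-world assumption: rescale each display channel so that the scene averages to gray, discounting an illumination color cast. The sketch below is illustrative only and is not the algorithm developed in the paper; the function name and the NumPy formulation are assumptions.

```python
import numpy as np

def gray_world_constancy(rgb):
    """Gray-world color constancy: rescale each channel so its mean
    matches the global mean, discounting an illumination color cast.
    `rgb` is an H x W x 3 float array in [0, 1]."""
    means = rgb.reshape(-1, 3).mean(axis=0)   # per-channel means
    gain = means.mean() / means               # per-channel gains
    return np.clip(rgb * gain, 0.0, 1.0)
```

After correction the three channel means coincide, so a uniform color cast (e.g. from solar illumination) no longer dominates a fused false-color display.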
Optics & Photonics News | 1998
Dean A. Scribner; Penny R. Warren; Jon Schuler; Michael Satyshur; Melvin R. Kruer
Sensor fusion is the science of combining information from multiple sensor responses. One approach, visualization, fuses various images and adds false color to define features that might otherwise be hidden and to aid the viewer in deciphering a scene.
Proceedings of SPIE | 1998
Dean A. Scribner; Jonathon M. Schuler; Penny R. Warren; Michael P. Satyshur; Melvin R. Kruer
The concept of multi-band infrared color vision is discussed in terms of combining two or more bands of infrared imagery into a single composite color image. This work is motivated by emerging new technologies in which two or more infrared bands are simultaneously imaged for improved discrimination of objects from backgrounds. One of the current objectives of this work is to quantify the improvement obtained over single band infrared imagery to detect dim targets in clutter. Methods are discussed for mapping raw image data into an appropriate color space and then processing it to achieve an intuitively meaningful color display for a human viewer. In this regard, the final imagery should provide good color contrast between objects and backgrounds and consistent colors regardless of environmental conditions such as solar illumination and variations in surface temperature. Initial performance measures show that infrared color can improve discrimination significantly over single band imaging.
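As an illustration of mapping raw two-band data into a color space for display, the sketch below contrast-stretches each band and assigns the bands to opposing channels, so inter-band differences appear as color and common intensity as brightness. This is a simplified stand-in, not the mapping developed in the paper; the function name and the percentile stretch are assumptions.

```python
import numpy as np

def two_band_false_color(band_a, band_b):
    """Map two co-registered infrared bands to an RGB display image.
    Each band is percentile contrast-stretched; band differences then
    show up as color while common intensity shows up as brightness."""
    def stretch(x):
        lo, hi = np.percentile(x, (1, 99))    # robust display range
        return np.clip((x - lo) / (hi - lo + 1e-9), 0.0, 1.0)
    r = stretch(band_a)                       # band A drives red
    b = stretch(band_b)                       # band B drives blue
    g = 0.5 * (r + b)                         # shared intensity in green
    return np.dstack([r, g, b])
```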
Proceedings of SPIE, the International Society for Optical Engineering | 2000
Dean A. Scribner; Jonathon M. Schuler; Penny R. Warren; J. Grant Howard; Melvin R. Kruer
The emergence of new infrared sensor technologies and the availability of powerful, inexpensive computers have made many new imaging applications possible. Researchers working in the area of traditional image processing are showing an increased interest in working with infrared imagery. However, because of the inherent differences between infrared and visible phenomenology, a number of fundamental problems arise when trying to apply traditional processing methods to the infrared. Furthermore, the technologies required to image in the infrared are currently much less mature than comparable camera technologies used in visible imaging. Also, infrared sensors need to capture six to eight bits of additional dynamic range beyond the eight normally used for visible imaging. Over the past half-century, visible cameras have become highly developed and can deliver images that meet engineering standards compatible with image displays. Similar image standards are often not possible in the infrared for a number of technical and phenomenological reasons. The purpose of this paper is to describe some of these differences and discuss a related topic known as image preprocessing. This is an area of processing that lies roughly between traditional image processing and image generation; because the camera images are less than ideal, additional processing is needed to perform necessary functions such as dynamic range management, non-uniformity correction, resolution enhancement, or color processing. A long-range goal for the implementation of these algorithms is to move them on-chip as analog retina-like or cortical-like circuits, thus achieving extraordinary reductions in power dissipation, size, and cost. Because this area of research is relatively new and still evolving, the discussion in this paper is limited to only a partial overview of the topic.
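Of the preprocessing functions listed, non-uniformity correction is the most standardized: a two-point calibration estimates a per-pixel gain and offset from frames of two uniform reference scenes. The following is a generic textbook sketch, not the paper's implementation; names and the reference-frame convention are assumptions.

```python
import numpy as np

def two_point_nuc(raw, dark, flat):
    """Two-point non-uniformity correction for a staring focal plane
    array. `dark` and `flat` are frames of two uniform reference
    scenes (cold and warm); per-pixel gain and offset variations are
    removed from `raw` so a uniform scene maps to a uniform image."""
    gain = (flat.mean() - dark.mean()) / (flat - dark + 1e-9)
    offset = dark.mean() - gain * dark
    return gain * raw + offset
```

Applied to any frame of a uniform scene, the output is uniform, which is the defining property of the correction.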
Proceedings of SPIE | 2001
Ronald G. Driggers; Keith Krapels; Richard H. Vollmerhausen; Penny R. Warren; Dean A. Scribner; J. Grant Howard; Brian H. Tsou; William K. Krebs
Current target acquisition models are for monochrome imagery systems (single detector). The increasing interest in multispectral infrared systems and color daylight imagers highlights the need for models that describe the target acquisition process for color systems (2 or more detectors). This study investigates the detection of simple color targets in a noisy color background.
Targets and Backgrounds VI: Characterization, Visualization, and the Detection Process | 2000
Jonathon M. Schuler; J. Grant Howard; Penny R. Warren; Dean A. Scribner; Richard Klien; Michael P. Satyshur; Melvin R. Kruer
Sensor fusion of up to three disparate imagers can readily be achieved by assigning each component video stream to a separate channel of any standard RGB color monitor, such as those used in television or personal computer systems. Provided the component imagery is pixel registered, such a straightforward system can provide improved object-background separation, yielding quantifiable human-factors performance improvement compared to viewing monochrome imagery from a single sensor. Consideration is given to appropriate dynamic range management of the available color gamut, and to appropriate color saturation in the presence of imager noise.
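A minimal sketch of this channel-assignment fusion, with simple dynamic range management and a saturation control, might look as follows. The names and the gray-blend desaturation scheme are assumptions for illustration, not the paper's exact processing.

```python
import numpy as np

def fuse_rgb(bands, saturation=0.8):
    """Assign up to three pixel-registered sensor bands to the R, G, B
    channels of a display. Each band is normalized to the display
    range; the result is blended toward its per-pixel gray level so
    that sensor noise does not masquerade as saturated false color."""
    chans = []
    for band in bands[:3]:
        lo, hi = band.min(), band.max()
        chans.append((band - lo) / (hi - lo + 1e-9))  # per-band normalization
    while len(chans) < 3:                             # pad if only two bands
        chans.append(np.zeros_like(chans[0]))
    rgb = np.dstack(chans)
    gray = rgb.mean(axis=2, keepdims=True)            # per-pixel gray level
    return saturation * rgb + (1.0 - saturation) * gray
```

Lowering `saturation` trades color contrast for robustness when the component imagers are noisy.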
Targets and Backgrounds VI: Characterization, Visualization, and the Detection Process | 2000
J. Grant Howard; Penny R. Warren; Richard Klien; Jonathon M. Schuler; Michael P. Satyshur; Dean A. Scribner; Melvin R. Kruer
Increases in the power of personal computers and the availability of infrared focal plane array cameras allow new options in the development of real-time color fusion systems for human visualization. This paper describes on-going development of an inexpensive, real-time PC-based infrared color visualization system. The hardware used in the system is all commercial off-the-shelf (COTS), making it relatively inexpensive to maintain and modify. It consists of a dual Pentium II PC, with fast digital storage and up to five PCI frame-grabber cards. The frame-grabber cards allow data to be selected from RS-170 (analog) or RS-422 (digital) cameras. Software allows the system configuration to be changed on the fly, so cameras can be swapped at will and new cameras can be added to the system in a matter of minutes. The software running on the system reads up to five separate images from the frame-grabber cards. These images are then digitally registered using a rubber-sheeting algorithm to reshape and shift the images. The registered data, from two or three cameras, are then processed by the selected fusion algorithm to produce a color-fused image, which is displayed in real-time. The real-time capability of this system allows interactive laboratory testing of issues such as band selection, fusion algorithm optimization, and visualization trade-offs.
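Rubber-sheeting registration is typically driven by matched control points in the two images. As a simplified stand-in, an affine transform can be fit to the control points by least squares; the actual rubber-sheeting warp is more general (locally varying), and the function names below are assumptions.

```python
import numpy as np

def affine_from_points(src, dst):
    """Least-squares affine transform taking `src` control points to
    `dst` control points (N x 2 arrays, at least three points)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coords
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coeffs                                  # 3 x 2 matrix

def warp_points(pts, coeffs):
    """Apply the fitted transform to an N x 2 array of points."""
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ coeffs
```

A full rubber-sheeting warp would fit local transforms (or a spline) so that different regions of the image can stretch independently.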
Proceedings of SPIE | 2001
Stuart Horn; James Campbell; Ronald G. Driggers; Thomas J. Soyka; Paul R. Norton; Philip Perconti; Timothy E. Ostromek; Joseph P. Estrera; Antonio V. Bacarella; Timothy R. Beystrum; Dean A. Scribner; Penny R. Warren
Fusion of reflected/emitted radiation light sensors can provide significant advantages for target identification and detection. The two bands -- 0.6-0.9 or 1-2 micrometer reflected light and 8-12 micrometer emitted radiation -- offer the greatest contrast since those bands have the lowest correlation, hence the greatest amount of combined information for infrared imaging. Data from fused imaging systems are presented for optical overlay as well as digital pixel fusion. Advantages of the digital fusion process are discussed, as well as the advantages of having both bands present for military operations. Finally, perception test results are presented that show how color can significantly enhance target detection. A factor of two reduction in minimum resolvable temperature difference is postulated from perception tests in the chromaticity plane. Although initial results do not yet validate this finding, it is expected that with the right fusion algorithms and displays this important result will be proven shortly.
Applied Optics | 1999
Ronald G. Driggers; Mel Kruer; Dean A. Scribner; Penny R. Warren; Jon Leachtenauer
Target acquisition infrared imaging sensors are characterized by their minimum resolvable temperature parameter that is translated to the probability of identification (Pid) performance estimate for a given target. Intelligence-surveillance-reconnaissance (ISR) sensors are characterized by the general image quality equation to give a national imagery interpretability rating scale (NIIRS) performance estimate. Sensors, such as those on Predator and Global Hawk, will soon be used for both ISR and target acquisition purposes. We present a performance conversion that includes both sensor resolution and sensitivity. We also provide the first empirical results to our knowledge ever to be presented that relate NIIRS and Pid for a given set of targets.
Infrared and Passive Millimeter-wave Imaging Systems: Design, Analysis, Modeling, and Testing | 2002
Jonathon M. Schuler; J. Grant Howard; Penny R. Warren; Dean A. Scribner
A general method is described to improve the operational resolution of an Electro-Optic (EO) imaging sensor using multiple frames of an image sequence. This method only assumes the constituent video has some ambient motion between the sensor and stationary background, and the optical image is electronically captured and digitally recorded by a staring focal plane detector array. Compared to alternative techniques that may require externally controlled or measured dither motion, this approach offers significantly enhanced operational resolution with substantially relaxed constraints on sensor stabilization, optics, and exposure time.
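The simplest member of this family of multi-frame techniques is shift-and-add: accumulate each frame onto a finer grid at its estimated sub-pixel offset. Below is a minimal sketch assuming the inter-frame shifts are already known; in practice they must be estimated from the imagery itself, and this is not the paper's specific algorithm.

```python
import numpy as np

def shift_and_add(frames, shifts, factor=2):
    """Multi-frame resolution enhancement by shift-and-add. Each
    low-resolution frame is accumulated onto a `factor`-times finer
    grid at its (dy, dx) sub-pixel offset; ambient motion between the
    sensor and the stationary background supplies sampling diversity."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        yy = np.clip(np.round((ys + dy) * factor).astype(int), 0, h * factor - 1)
        xx = np.clip(np.round((xs + dx) * factor).astype(int), 0, w * factor - 1)
        np.add.at(acc, (yy, xx), frame)   # scatter-add the samples
        np.add.at(cnt, (yy, xx), 1)       # track visit counts
    return acc / np.maximum(cnt, 1)       # average; zero where unvisited
```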