J. Grant Howard
United States Naval Research Laboratory
Publication
Featured research published by J. Grant Howard.
Proceedings of SPIE, the International Society for Optical Engineering | 2000
Dean A. Scribner; Jonathon M. Schuler; Penny R. Warren; J. Grant Howard; Melvin R. Kruer
The emergence of new infrared sensor technologies and the availability of powerful, inexpensive computers have made many new imaging applications possible. Researchers working in traditional image processing are showing increased interest in working with infrared imagery. However, because of the inherent differences between infrared and visible phenomenology, a number of fundamental problems arise when trying to apply traditional processing methods to the infrared. Furthermore, the technologies required to image in the infrared are currently much less mature than comparable camera technologies used in visible imaging. Infrared sensors also need to capture six to eight bits of additional dynamic range beyond the eight normally used for visible imaging. Over the past half-century, visible cameras have become highly developed and can deliver images that meet engineering standards compatible with image displays. Similar image standards are often not possible in the infrared for a number of technical and phenomenological reasons. The purpose of this paper is to describe some of these differences and discuss a related topic known as image preprocessing. This is an area of processing that lies roughly between traditional image processing and image generation; because the camera images are less than ideal, additional processing is needed to perform necessary functions such as dynamic range management, non-uniformity correction, resolution enhancement, or color processing. A long-range goal for the implementation of these algorithms is to move them on-chip as analog retina-like or cortical-like circuits, thus achieving extraordinary reductions in power dissipation, size, and cost. Because this area of research is relatively new and still evolving, the discussion in this paper is limited to a partial overview of the topic.
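The dynamic range management mentioned in the abstract can be illustrated with a minimal sketch: mapping a simulated high-dynamic-range infrared frame onto the 8-bit range of a standard display by clipping to robust percentiles. This is a generic illustration under assumed percentile bounds, not the specific preprocessing algorithm the paper discusses.

```python
import numpy as np

def compress_dynamic_range(frame, lo_pct=1.0, hi_pct=99.0):
    """Map a high-dynamic-range IR frame (e.g. 14-16 bit) to the 8-bit
    display range by clipping to robust percentiles and rescaling."""
    lo, hi = np.percentile(frame, [lo_pct, hi_pct])
    if hi <= lo:  # degenerate (flat) frame
        return np.zeros_like(frame, dtype=np.uint8)
    scaled = (np.clip(frame, lo, hi) - lo) / (hi - lo)
    return (scaled * 255).astype(np.uint8)

# Simulated 14-bit infrared frame with a hot target
rng = np.random.default_rng(0)
frame = rng.normal(8000, 200, (64, 64))
frame[20:30, 20:30] += 4000  # hot spot well above the background
display = compress_dynamic_range(frame)
print(display.dtype, display.min(), display.max())  # uint8 0 255
```

Percentile clipping rather than a plain min/max stretch keeps a few hot or dead pixels from consuming the whole output range.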
Optical Engineering | 2012
K. Peter Judd; Jonathan M. Nichols; J. Grant Howard; James R. Waterman; Kenneth M. Vilardebo
This work offers a comparison of broadband shortwave infrared, defined as the spectral band from 0.9 to 1.7 μm, and hyperspectral shortwave infrared imagers in a marine environment under various daylight conditions. Both imagers are built around a Raytheon Vision Systems large format (1024×1280) indium-gallium-arsenide focal plane array with high dynamic range and low noise electronics. Sample imagery from a variety of objects and scenes indicates roughly the same visual performance between the two systems. However, we show that the more detailed spectral information provided by the hyperspectral system allows for object detection and discrimination. A vessel was equipped with panels coated with a variety of paints that possessed spectral differences in the 0.9 to 1.7 μm waveband. The vessel was imaged at various ranges, states of background clutter, and times of the day. Using a standard correlation receiver, it is demonstrated that image pixels containing the paint can be easily identified. During the exercise, it was also observed that both bow waves and near-field wakes from a wide variety of vessel traffic provide a spectral signature in the shortwave infrared waveband that could potentially be used for object tracking.
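The standard correlation receiver used for paint detection can be sketched as a normalized correlation (cosine similarity) between each pixel spectrum and a reference spectrum. The three-band cube, spectra, and detection threshold below are toy assumptions for illustration, not the experimental data or band count of the actual system.

```python
import numpy as np

def correlation_receiver(cube, reference):
    """Score each pixel of a hyperspectral cube (H, W, B) against a
    reference spectrum (B,) via normalized correlation in [-1, 1]."""
    ref = reference / np.linalg.norm(reference)
    pix = cube / np.linalg.norm(cube, axis=2, keepdims=True)
    return pix @ ref  # (H, W) correlation scores

# Toy 3-band SWIR cube: uniform background with a painted panel
bg = np.array([0.2, 0.5, 0.3])
paint = np.array([0.6, 0.1, 0.3])
cube = np.tile(bg, (8, 8, 1)).astype(float)
cube[2:4, 2:4] = paint  # 2x2 painted region
scores = correlation_receiver(cube, paint)
detections = scores > 0.99  # threshold chosen for this toy example
print(detections.sum())  # 4 painted pixels detected
```

Because the score is normalized, pixels matching the paint's spectral shape stand out even when overall brightness varies.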
Proceedings of SPIE | 2001
Ronald G. Driggers; Keith Krapels; Richard H. Vollmerhausen; Penny R. Warren; Dean A. Scribner; J. Grant Howard; Brian H. Tsou; William K. Krebs
Current target acquisition models are for monochrome imaging systems (single detector). The increasing interest in multispectral infrared systems and color daylight imagers highlights the need for models that describe the target acquisition process for color systems (two or more detectors). This study investigates the detection of simple color targets in a noisy color background.
Targets and Backgrounds VI: Characterization, Visualization, and the Detection Process | 2000
Jonathon M. Schuler; J. Grant Howard; Penny R. Warren; Dean A. Scribner; Richard Klein; Michael P. Satyshur; Melvin R. Kruer
Sensor fusion of up to three disparate imagers can readily be achieved by assigning each component video stream to a separate channel of any standard RGB color monitor, such as those used in television or personal computer systems. Provided the component imagery is pixel-registered, such a straightforward system can provide improved object-background separation, yielding quantifiable human-factors performance improvement compared to viewing monochrome imagery from a single sensor. Consideration is given to appropriate dynamic range management of the available color gamut and to appropriate color saturation in the presence of imager noise.
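The channel-assignment scheme described above can be sketched as stacking three pixel-registered, contrast-stretched bands into one false-color image. The band names are hypothetical placeholders; real systems would also handle gamut and saturation management, which this sketch omits.

```python
import numpy as np

def fuse_to_rgb(band_r, band_g, band_b):
    """Fuse three pixel-registered sensor bands into a false-color image
    by mapping each band to one channel of an RGB display."""
    def stretch(b):  # per-band linear stretch to [0, 1]
        b = b.astype(float)
        span = b.max() - b.min()
        return (b - b.min()) / span if span > 0 else np.zeros_like(b)
    return np.stack([stretch(band_r), stretch(band_g), stretch(band_b)], axis=-1)

# Hypothetical registered bands (random stand-ins for real imagery)
mwir = np.random.default_rng(1).random((32, 32))
lwir = np.random.default_rng(2).random((32, 32))
swir = np.random.default_rng(3).random((32, 32))
rgb = fuse_to_rgb(mwir, lwir, swir)
print(rgb.shape)  # (32, 32, 3)
```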
Targets and Backgrounds VI: Characterization, Visualization, and the Detection Process | 2000
J. Grant Howard; Penny R. Warren; Richard Klein; Jonathon M. Schuler; Michael P. Satyshur; Dean A. Scribner; Melvin R. Kruer
Increases in the power of personal computers and the availability of infrared focal plane array cameras allow new options in the development of real-time color fusion systems for human visualization. This paper describes on-going development of an inexpensive, real-time PC-based infrared color visualization system. The hardware used in the system is all COTS, making it relatively inexpensive to maintain and modify. It consists of a dual Pentium II PC with fast digital storage and up to five PCI frame-grabber cards. The frame-grabber cards allow data to be selected from RS-170 (analog) or RS-422 (digital) cameras. Software allows the system configuration to be changed on the fly, so cameras can be swapped at will and new cameras can be added to the system in a matter of minutes. The software running on the system reads up to five separate images from the frame-grabber cards. These images are then digitally registered using a rubber-sheeting algorithm to reshape and shift the images. The registered data, from two or three cameras, is then processed by the selected fusion algorithm to produce a color-fused image, which is displayed in real time. The real-time capability of this system allows interactive laboratory testing of issues such as band selection, fusion algorithm optimization, and visualization trade-offs.
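The rubber-sheeting registration step can be approximated in a simplified form: fitting a geometric transform to tie-points by least squares, then warping coordinates through it. A full rubber-sheet warp is more general (locally nonlinear); the affine model and tie-point values below are assumptions for illustration only.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Fit a 2-D affine transform mapping src tie-points to dst tie-points
    by least squares (a simplified stand-in for rubber-sheeting)."""
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    A = np.hstack([src, np.ones((len(src), 1))])       # (N, 3) homogeneous
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)   # (3, 2) transform
    return coeffs

def apply_affine(coeffs, pts):
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ coeffs

# Tie-points related by a shift plus a slight per-axis scale
src = [(0, 0), (10, 0), (0, 10), (10, 10)]
dst = [(2, -1), (12.1, -1), (2, 9.2), (12.1, 9.2)]
M = fit_affine(src, dst)
print(np.round(apply_affine(M, [(5, 5)]), 2))  # [[7.05 4.1 ]]
```

With more tie-points than unknowns, the least-squares fit averages out small tie-point selection errors.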
Infrared and Passive Millimeter-wave Imaging Systems: Design, Analysis, Modeling, and Testing | 2002
Jonathon M. Schuler; J. Grant Howard; Penny R. Warren; Dean A. Scribner
A general method is described to improve the operational resolution of an Electro-Optic (EO) imaging sensor using multiple frames of an image sequence. This method only assumes the constituent video has some ambient motion between the sensor and stationary background, and the optical image is electronically captured and digitally recorded by a staring focal plane detector array. Compared to alternative techniques that may require externally controlled or measured dither motion, this approach offers significantly enhanced operational resolution with substantially relaxed constraints on sensor stabilization, optics, and exposure time.
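The idea of building a higher-resolution image from multiple frames with relative motion can be sketched with a naive shift-and-add scheme. Note the hedge: the paper's approach works with uncontrolled ambient motion, whereas this sketch assumes the subpixel shifts have already been estimated and fall on a regular grid.

```python
import numpy as np

def shift_and_add(frames, shifts, factor=2):
    """Naive shift-and-add super-resolution: place low-res frames onto a
    finer grid according to their subpixel shifts, then average."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        oy = int(round(dy * factor)) % factor  # subpixel shift -> grid offset
        ox = int(round(dx * factor)) % factor
        acc[oy::factor, ox::factor] += frame
        cnt[oy::factor, ox::factor] += 1
    return acc / np.maximum(cnt, 1)  # average where samples landed

# Four low-res frames assumed to sample a half-pixel-shifted grid
rng = np.random.default_rng(0)
frames = [rng.random((16, 16)) for _ in range(4)]
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
hires = shift_and_add(frames, shifts, factor=2)
print(hires.shape)  # (32, 32)
```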
Visual Communications and Image Processing | 2002
Jonathon M. Schuler; J. Grant Howard; Penny R. Warren; Dean A. Scribner
Many imaging systems consist of a combination of distinct electro-optic sensors that are constrained to view the same scene through a common aperture or from a common platform. Oftentimes, a spatial registration of one sensor's image is required to conform to the slightly disparate imaging geometry of a different sensor on the same platform. This is generally achieved through a judicious selection of image tie-points and a geometric transformation model. This paper outlines a procedure to improve any such registration technique by leveraging the temporal motion within a pair of video sequences and imposing the additional constraint of minimizing the disparity in optical flow between the registered video sequences.
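The optical-flow-disparity constraint can be illustrated with a crude translation-only proxy: estimating the residual shift between paired frames of two registered sequences by phase correlation, and averaging its magnitude. The full method would use a dense flow field and a richer transform; this sketch, with its synthetic rolled frames, only shows the metric going to zero when registration removes all relative motion.

```python
import numpy as np

def translation_offset(a, b):
    """Estimate the integer translation between two frames via phase
    correlation (a crude stand-in for dense optical flow)."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

def flow_disparity(seq_a, seq_b):
    """Mean residual shift between paired frames of two sequences:
    zero when registration has removed all relative motion."""
    offs = [translation_offset(a, b) for a, b in zip(seq_a, seq_b)]
    return float(np.mean([np.hypot(dy, dx) for dy, dx in offs]))

rng = np.random.default_rng(0)
base = rng.random((32, 32))
seq_a = [np.roll(base, (i, 0), axis=(0, 1)) for i in range(4)]
seq_b = [np.roll(base, (i, 2), axis=(0, 1)) for i in range(4)]  # 2 px misregistered
print(flow_disparity(seq_a, seq_a), flow_disparity(seq_a, seq_b))  # 0.0 2.0
```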
Proceedings of SPIE | 2001
Jonathon M. Schuler; J. Grant Howard; Dean A. Scribner; Penny R. Warren; Richard B. Klein; Michael P. Satyshur; Melvin R. Kruer
This paper outlines a generalized image reconstruction approach that improves the resolution of an Electro-Optic (EO) imaging sensor based on multiple frame exposures during a temporal window of video. Such an approach is innovative in that it does not depend on controlled micro dithering of the camera, nor require the set of exposures to maintain any strictly defined transformation. It suffices to assume such video is physically captured by a focal plane array, and loosely requires some relative motion between sensor and subject.
Visual Communications and Image Processing | 2002
Jonathon M. Schuler; J. Grant Howard; Penny R. Warren; Dean A. Scribner
This paper outlines a generalized image reconstruction approach to improve the resolution of an Electro-Optic (EO) imaging sensor using multiple frames of an image sequence. This method only assumes the constituent video has some ambient motion between the sensor and stationary background, and the optical image is physically captured by a staring focal plane array.
Infrared and Passive Millimeter-wave Imaging Systems: Design, Analysis, Modeling, and Testing | 2002
Jonathon M. Schuler; J. Grant Howard; Penny R. Warren; Dean A. Scribner
The performance of any scene-adaptive Nonuniformity Correction (NUC) algorithm is fundamentally limited by the quality of the scene-based predicted value of each pixel. TARID-based composite imagery can serve as a scene-based pixel predictor with improved robustness and lower noise than more common scene-based pixel predictors. These improved properties result in dramatically faster algorithm convergence, generating corrected imagery with reduced spatial noise due to intrinsically nonuniform or inoperative pixels in a Focal Plane Array.
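The role of the scene-based pixel predictor can be seen in a textbook LMS-style NUC sketch: per-pixel gain and offset are nudged each frame so the corrected output tracks the predicted value. The TARID-based predictor itself is not reproduced here; this toy example simply feeds in the true scene as a perfect prediction, and all array sizes and the step size are assumptions.

```python
import numpy as np

def nuc_lms_step(raw, predicted, gain, offset, mu=0.05):
    """One LMS-style scene-based NUC update: adjust per-pixel gain and
    offset to reduce the error between corrected output and prediction."""
    corrected = gain * raw + offset
    err = corrected - predicted
    gain -= mu * err * raw      # gradient step on 0.5 * err**2 w.r.t. gain
    offset -= mu * err          # gradient step w.r.t. offset
    return corrected, gain, offset

# Toy FPA: true scene corrupted by fixed-pattern gain/offset nonuniformity
rng = np.random.default_rng(0)
true_gain = 1 + 0.1 * rng.standard_normal((8, 8))
true_off = 0.2 * rng.standard_normal((8, 8))
gain = np.ones((8, 8))
offset = np.zeros((8, 8))
for _ in range(2000):
    scene = rng.random((8, 8))           # changing scene content
    raw = true_gain * scene + true_off   # what the detector reports
    corrected, gain, offset = nuc_lms_step(raw, scene, gain, offset)
residual = np.abs(gain * raw + offset - scene).mean()
print(residual < 0.05)  # True: fixed-pattern noise largely removed
```

A better predictor lowers the error driving each update, which is why convergence speed tracks predictor quality.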