Jonathon M. Schuler
United States Naval Research Laboratory
Publications
Featured research published by Jonathon M. Schuler.
Proceedings of SPIE | 1998
Dean A. Scribner; Jonathon M. Schuler; Penny R. Warren; Michael P. Satyshur; Melvin R. Kruer
The concept of multi-band infrared color vision is discussed in terms of combining two or more bands of infrared imagery into a single composite color image. This work is motivated by emerging new technologies in which two or more infrared bands are simultaneously imaged for improved discrimination of objects from backgrounds. One of the current objectives of this work is to quantify the improvement obtained over single band infrared imagery to detect dim targets in clutter. Methods are discussed for mapping raw image data into an appropriate color space and then processing it to achieve an intuitively meaningful color display for a human viewer. In this regard, the final imagery should provide good color contrast between objects and backgrounds and consistent colors regardless of environmental conditions such as solar illumination and variations in surface temperature. Initial performance measures show that infrared color can improve discrimination significantly over single band imaging.
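The core idea above, mapping two registered infrared bands into a single color composite so that spectral differences show up as color contrast, can be illustrated with a minimal sketch. This is not the authors' algorithm; the channel assignment, the toy data, and the function name are illustrative assumptions only.

```python
import numpy as np

def fuse_two_band(band_a, band_b):
    """Illustrative two-band false-color fusion: each band is stretched
    to [0, 1]; band A drives red, band B drives blue, and their mean
    drives green, so material differences between the bands appear as
    color contrast between objects and backgrounds."""
    def stretch(img):
        lo, hi = img.min(), img.max()
        return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img, dtype=float)

    a, b = stretch(band_a.astype(float)), stretch(band_b.astype(float))
    return np.dstack([a, (a + b) / 2.0, b])

# Hypothetical 4x4 two-band scene: a "target" bright in band A only.
band_a = np.zeros((4, 4)); band_a[1, 1] = 10.0
band_b = np.ones((4, 4))
rgb = fuse_two_band(band_a, band_b)   # target pixel shows as saturated red
```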
Proceedings of SPIE, the International Society for Optical Engineering | 2000
Dean A. Scribner; Jonathon M. Schuler; Penny R. Warren; J. Grant Howard; Melvin R. Kruer
The emergence of new infrared sensor technologies and the availability of powerful, inexpensive computers have made many new imaging applications possible. Researchers working in the area of traditional image processing are showing an increased interest in working with infrared imagery. However, because of the inherent differences between infrared and visible phenomenology, a number of fundamental problems arise when trying to apply traditional processing methods to the infrared. Furthermore, the technologies required to image in the infrared are currently much less mature than comparable camera technologies used in visible imaging. Infrared sensors also need to capture six to eight bits of additional dynamic range beyond the eight normally used for visible imaging. Over the past half-century, visible cameras have become highly developed and can deliver images that meet engineering standards compatible with image displays. Similar image standards are often not possible in the infrared for a number of technical and phenomenological reasons. The purpose of this paper is to describe some of these differences and discuss a related topic known as image preprocessing. This is an area of processing that lies roughly between traditional image processing and image generation; because the camera images are less than ideal, additional processing is needed to perform necessary functions such as dynamic range management, non-uniformity correction, resolution enhancement, or color processing. A long-range goal for the implementation of these algorithms is to move them on-chip as analog retina-like or cortical-like circuits, thus achieving extraordinary reductions in power dissipation, size, and cost. Because this area of research is relatively new and still evolving, the discussion in this paper is limited to a partial overview of the topic.
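Of the preprocessing functions named above, non-uniformity correction is the most readily sketched. The following is a standard two-point gain/offset correction, shown only as a hedged illustration of the idea (the reference temperatures, array sizes, and error magnitudes are invented for the example):

```python
import numpy as np

def two_point_nuc(frame, cold_ref, hot_ref, t_cold, t_hot):
    """Two-point non-uniformity correction: solve a per-pixel gain and
    offset from two uniform-source reference frames (e.g. blackbody
    views at two temperatures), then map each pixel's raw response
    onto a common radiometric scale."""
    gain = (t_hot - t_cold) / (hot_ref - cold_ref)
    offset = t_cold - gain * cold_ref
    return gain * frame + offset

# Hypothetical FPA with per-pixel gain/offset errors viewing a flat scene.
rng = np.random.default_rng(0)
g = 1.0 + 0.1 * rng.standard_normal((8, 8))   # pixel gain non-uniformity
o = 5.0 * rng.standard_normal((8, 8))         # pixel offset non-uniformity
cold = g * 20.0 + o        # response to a uniform 20-unit source
hot = g * 40.0 + o         # response to a uniform 40-unit source
raw = g * 30.0 + o         # raw view of a flat 30-unit scene
corrected = two_point_nuc(raw, cold, hot, 20.0, 40.0)   # flat field restored
```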
Targets and Backgrounds VI: Characterization, Visualization, and the Detection Process | 2000
Jonathon M. Schuler; J. Grant Howard; Penny R. Warren; Dean A. Scribner; Richard Klein; Michael P. Satyshur; Melvin R. Kruer
Sensor fusion of up to three disparate imagers can readily be achieved by assigning each component video stream to a separate channel of any standard RGB color monitor, such as those used in television or personal computer systems. Provided the component imagery is pixel-registered, such a straightforward system can provide improved object-background separation, yielding quantifiable human-factors performance improvement compared to viewing monochrome imagery from a single sensor. Consideration is given to appropriate dynamic range management of the available color gamut, and to appropriate color saturation in the presence of imager noise.
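The channel-assignment and dynamic-range-management steps described above might look like the following sketch. A percentile stretch stands in for whatever mapping the authors used; the percentile values and test data are assumptions for illustration.

```python
import numpy as np

def stretch_percentile(band, lo_pct=1.0, hi_pct=99.0):
    """Percentile stretch: clip sensor outliers so one hot pixel cannot
    crush the usable display gamut of the channel."""
    lo, hi = np.percentile(band, [lo_pct, hi_pct])
    return np.clip((band.astype(float) - lo) / max(hi - lo, 1e-12), 0.0, 1.0)

def fuse_rgb(band_r, band_g, band_b):
    """Assign three pixel-registered sensor bands to the R, G, B
    channels of a standard color display."""
    return np.dstack([stretch_percentile(b) for b in (band_r, band_g, band_b)])

# Hypothetical three-band frame with a hot-pixel outlier in one band.
rng = np.random.default_rng(1)
bands = [rng.uniform(0, 100, (16, 16)) for _ in range(3)]
bands[0][0, 0] = 1e6      # without the stretch this pixel dominates the range
rgb = fuse_rgb(*bands)
```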
Targets and Backgrounds VI: Characterization, Visualization, and the Detection Process | 2000
J. Grant Howard; Penny R. Warren; Richard Klein; Jonathon M. Schuler; Michael P. Satyshur; Dean A. Scribner; Melvin R. Kruer
Increases in the power of personal computers and the availability of infrared focal plane array cameras allow new options in the development of real-time color fusion systems for human visualization. This paper describes on-going development of an inexpensive, real-time PC-based infrared color visualization system. The hardware used in the system is all COTS, making it relatively inexpensive to maintain and modify. It consists of a dual Pentium II PC with fast digital storage and up to five PCI frame-grabber cards. The frame-grabber cards allow data to be selected from RS-170 (analog) or RS-422 (digital) cameras. Software allows the system configuration to be changed on the fly, so cameras can be swapped at will and new cameras can be added to the system in a matter of minutes. The software running on the system reads up to five separate images from the frame-grabber cards. These images are then digitally registered using a rubber-sheeting algorithm to reshape and shift the images. The registered data, from two or three cameras, is then processed by the selected fusion algorithm to produce a color-fused image, which is displayed in real time. The real-time capability of this system allows interactive laboratory testing of issues such as band selection, fusion algorithm optimization, and visualization trade-offs.
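A tie-point-driven warp of the kind used to register the camera streams can be sketched as follows. A least-squares affine fit stands in for the full rubber-sheeting (piecewise/polynomial) warp described in the paper; the tie-point coordinates are invented for the example.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine transform from image tie-points; a simple
    stand-in for the rubber-sheeting warp that reshapes and shifts one
    camera's image onto another's pixel grid."""
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    A = np.hstack([src, np.ones((len(src), 1))])       # rows of [x, y, 1]
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)   # (3, 2) parameter matrix
    return params

def apply_affine(params, pts):
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ params

# Hypothetical tie-points related by a known scale-and-shift disparity.
src = [[0, 0], [10, 0], [0, 10], [10, 10]]
dst = [[1.1 * x + 2.0, 0.9 * y - 1.0] for x, y in src]
M = fit_affine(src, dst)
mapped = apply_affine(M, [[5, 5]])    # where the second camera sees (5, 5)
```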
Optical Engineering | 1999
Jonathon M. Schuler; Dean A. Scribner
A general method is outlined to increase the spatial resolution of digital video sequences containing uncontrolled camera motion. This is accomplished by combining multiple frames of video into a composite image of a higher spatial sampling rate than the original video. Fundamental to this technique is the computation of optical flow. This technique has the greatest impact on camera systems where the focal plane array spatially undersamples the projected optical image. Implicit to the technique is the potential trade-off between frame rate, latency, and display resolution in a digital imaging system.
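The frame-combination step described above can be illustrated with a shift-and-add sketch: registered low-resolution frames are accumulated onto a grid with a higher sampling rate. The subpixel shifts that the optical-flow stage would estimate are assumed known here, and the nearest-neighbor accumulation is a deliberate simplification of the actual reconstruction.

```python
import numpy as np

def shift_and_add(frames, shifts, factor=2):
    """Accumulate registered low-res frames onto a finer sampling grid.
    `shifts` are per-frame (dy, dx) offsets in low-res pixels; each
    frame's samples land at the nearest high-res grid cell, and covered
    cells are averaged."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        ry = np.clip(np.round(np.arange(h) * factor + dy * factor).astype(int),
                     0, h * factor - 1)
        rx = np.clip(np.round(np.arange(w) * factor + dx * factor).astype(int),
                     0, w * factor - 1)
        acc[np.ix_(ry, rx)] += frame
        cnt[np.ix_(ry, rx)] += 1
    cnt[cnt == 0] = 1      # leave never-sampled cells at zero
    return acc / cnt

# Hypothetical 4-frame sequence of a flat scene with half-pixel motion:
# the four shifts tile every cell of the 2x upsampled grid.
frames = [np.full((8, 8), 5.0)] * 4
shifts = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
hires = shift_and_add(frames, shifts, factor=2)
```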
Infrared and Passive Millimeter-wave Imaging Systems: Design, Analysis, Modeling, and Testing | 2002
Lisimachos P. Kondi; Dean A. Scribner; Jonathon M. Schuler
In this paper, we compare two resolution enhancement techniques. The first technique is based on Maximum A Posteriori (MAP) estimation, while the second performs Temporal Accumulation of Registered Image Data (TARID) followed by Wiener filtering. Both techniques are described and the merits of each are discussed. Experimental results are presented and conclusions are drawn.
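The Wiener filtering stage of the TARID approach can be sketched in the frequency domain. This is a generic Wiener deconvolution, not the paper's exact filter; the box-blur PSF, noise-to-signal ratio, and point-source test image are assumptions for illustration.

```python
import numpy as np

def wiener_deconvolve(image, psf, nsr=0.01):
    """Frequency-domain Wiener filter, W = H* / (|H|^2 + NSR).
    In a TARID-style pipeline this follows temporal accumulation of
    registered frames, which supplies a low-noise but blurred image."""
    H = np.fft.fft2(psf, s=image.shape)
    G = np.fft.fft2(image)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))

# Hypothetical test case: a point source under a 3x3 box blur.
truth = np.zeros((32, 32)); truth[16, 16] = 1.0
psf = np.ones((3, 3)) / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(psf, s=truth.shape) * np.fft.fft2(truth)))
restored = wiener_deconvolve(blurred, psf)   # sharper than the blurred input
```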
Infrared and Passive Millimeter-wave Imaging Systems: Design, Analysis, Modeling, and Testing | 2002
Jonathon M. Schuler; J. Grant Howard; Penny R. Warren; Dean A. Scribner
A general method is described to improve the operational resolution of an Electro-Optic (EO) imaging sensor using multiple frames of an image sequence. This method only assumes the constituent video has some ambient motion between the sensor and stationary background, and the optical image is electronically captured and digitally recorded by a staring focal plane detector array. Compared to alternative techniques that may require externally controlled or measured dither motion, this approach offers significantly enhanced operational resolution with substantially relaxed constraints on sensor stabilization, optics, and exposure time.
Optical Science and Technology, SPIE's 48th Annual Meeting | 2004
Rulon Mayer; James R. Waterman; Jonathon M. Schuler; Dean A. Scribner
To achieve enhanced target discrimination, prototype three-band long wave infrared (LWIR) focal plane arrays (FPA) for missile defense applications have recently been constructed. The cutoff wavelengths, widths, and spectral overlap of the bands are critical parameters for the multicolor sensor design. Previous calculations for sensor design did not account for target and clutter spectral features in determining the optimal band characteristics. The considerable spectral overlap and correlation between the bands, and the attendant reduction in color contrast, is another unexamined issue. To optimize and simulate the projected behavior of three-band sensors, this report examined a hyperspectral LWIR image cube. Our study starts with 30 bands of the LWIR spectra of three man-made targets and natural backgrounds that were binned to 3 bands using weighted band binning. This work achieves optimal binning by using a genetic algorithm approach and the target-to-clutter ratio (TCR) as the optimization criterion. Another approach applies a genetic algorithm to maximize discrimination among the spectral reflectivities in the Non-conventional Exploitation Factors Data System (NEFDS) library. Each candidate band was weighted using a Fermi function to represent four interacting band edges for three bands. It is found that the choice of target can significantly influence the optimal choice of bands as expressed through the TCR and the Receiver Operating Characteristic curve. This study shows that whitening the image data prominently displays targets relative to backgrounds by increasing color contrast while also maintaining color constancy. Three-color images are displayed by assigning red, green, and blue directly to the whitened data set. Achieving constant colors of targets and backgrounds over time can greatly aid human viewers in interpreting the images and discriminating targets.
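The whitening step described above, decorrelating the overlapping bands before assigning them directly to red, green, and blue, can be sketched with a ZCA whitening transform. The specific transform and the correlated toy cube are assumptions; the paper does not specify its whitening implementation.

```python
import numpy as np

def zca_whiten(cube):
    """ZCA-whiten an (H, W, bands) image cube so the band covariance
    becomes the identity, removing the inter-band correlation that
    reduces color contrast in the displayed composite."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    X = X - X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + 1e-8)) @ vecs.T   # ZCA transform
    return (X @ W).reshape(h, w, b)

# Hypothetical 3-band cube whose bands are strongly correlated,
# mimicking the spectral overlap of adjacent LWIR bands.
rng = np.random.default_rng(2)
base = rng.standard_normal((32, 32, 1))
cube = np.concatenate(
    [base + 0.1 * rng.standard_normal((32, 32, 1)) for _ in range(3)], axis=2)
white = zca_whiten(cube)
cov_after = np.cov(white.reshape(-1, 3), rowvar=False)   # ~ identity
```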
visual communications and image processing | 2002
Jonathon M. Schuler; J. Grant Howard; Penny R. Warren; Dean A. Scribner
Many imaging systems consist of a combination of distinct electro-optic sensors that are constrained to view the same scene through a common aperture or from a common platform. Oftentimes, a spatial registration of one sensor's image is required to conform to the slightly disparate imaging geometry of a different sensor on the same platform. This is generally achieved through a judicious selection of image tie-points and a geometric transformation model. This paper outlines a procedure to improve any such registration technique by leveraging the temporal motion within a pair of video sequences and imposing the additional constraint of minimizing the disparity in optical flow between the registered video sequences.
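The quantity being minimized, the disparity between the optical-flow fields of the two registered sequences, can be written as a simple metric. This sketch only evaluates the cost; the paper's refinement procedure, which would adjust the warp parameters to drive it down, is not reproduced here, and the flow fields are invented.

```python
import numpy as np

def flow_disparity(flow_a, flow_b):
    """Mean endpoint disparity between two dense flow fields of shape
    (H, W, 2). A registration refinement of the kind proposed here
    would adjust the geometric transform to drive this toward zero."""
    return float(np.mean(np.linalg.norm(flow_a - flow_b, axis=-1)))

# Hypothetical flows: identical motion vs. a 1-pixel horizontal bias,
# as would arise from a residual registration error between sensors.
flow_ref = np.zeros((10, 10, 2))
flow_mis = flow_ref.copy(); flow_mis[..., 0] += 1.0
cost_good = flow_disparity(flow_ref, flow_ref)   # perfectly registered
cost_bad = flow_disparity(flow_ref, flow_mis)    # misregistered
```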
Proceedings of SPIE | 2001
Jonathon M. Schuler; J. Grant Howard; Dean A. Scribner; Penny R. Warren; Richard B. Klein; Michael P. Satyshur; Melvin R. Kruer
This paper outlines a generalized image reconstruction approach that improves the resolution of an Electro-Optic (EO) imaging sensor based on multiple frame exposures during a temporal window of video. Such an approach is innovative in that it does not depend on controlled micro dithering of the camera, nor require the set of exposures to maintain any strictly defined transformation. It suffices to assume such video is physically captured by a focal plane array, and loosely requires some relative motion between sensor and subject.