
Publications


Featured research published by David A. Fay.


Neural Networks | 1995

Neural processing of targets in visible, multispectral IR and SAR imagery

Allen M. Waxman; Michael Seibert; Alan N. Gove; David A. Fay; Ann Marie Bernardon; Carol H. Lazott; William R. Steele; Robert K. Cunningham

We have designed and implemented computational neural systems for target enhancement, detection, learning, and recognition in visible, multispectral infrared (IR), and synthetic aperture radar (SAR) imagery. The system architectures are motivated by designs of biological vision systems, drawing insights from retinal processing of contrast and color, occipital lobe processing of shading, color and contour, and temporal lobe processing of pattern and shape. Distinguishing among similar targets and accumulating evidence across image sequences are also described. Similar neurocomputational principles and modules are used across these various sensing domains. We show how 3D target learning and recognition from visible silhouettes and SAR return patterns are related, and how models of contrast enhancement, contour, shading and color vision can be used to enhance targets in multispectral IR and SAR imagery, aiding in target detection.
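The center-surround shunting network that recurs throughout this body of work can be made concrete with a small sketch. The following is a minimal illustration, not the authors' implementation, of the steady state of Grossberg's shunting equation used as a contrast-enhancement and normalization operator; the kernel widths and rate constants are illustrative assumptions.

```python
# Minimal sketch (illustrative parameters, not the authors' code) of a
# steady-state center-surround shunting operator. The equilibrium of
#   dx/dt = -A*x + (B - x)*C - (D + x)*S
# is x = (B*C - D*S) / (A + C + S), which both enhances contrast and
# normalizes (compresses) dynamic range.
import numpy as np
from scipy.ndimage import gaussian_filter

def shunt_enhance(img, sigma_c=1.0, sigma_s=4.0, A=1.0, B=1.0, D=1.0):
    """Contrast-enhance and normalize a nonnegative image."""
    img = np.asarray(img, dtype=np.float64)
    C = gaussian_filter(img, sigma_c)   # excitatory center response
    S = gaussian_filter(img, sigma_s)   # inhibitory surround response
    return (B * C - D * S) / (A + C + S)
```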


Proceedings of SPIE | 1998

Real-time fusion of low-light CCD and uncooled IR imagery for color night vision

Mario Aguilar; David A. Fay; William Ross; Allen M. Waxman; David B. Ireland; Joseph P. Racamato

We present an approach to color night vision through fusion of information derived from visible and thermal infrared sensors. Building on the work reported at SPIE in 1996 and 1997, we show how opponent-color processing and center-surround shunting neural networks can achieve informative multi-band image fusion. In particular, by emulating spatial and color processing in the retina, we demonstrate an effective strategy for multi-sensor color night vision. We have developed a real-time visible/IR fusion processor from multiple C80 DSP chips using commercially available Matrox Genesis boards, which we use in conjunction with the Lincoln Lab low-light CCD and a Raytheon TI Systems uncooled IR camera. Limited human factors testing of visible/IR fusion is presented, showing improvements in human performance using our color-fused imagery relative to alternative fusion strategies or either single image modality alone. We conclude that fusion architectures that match opponent-sensor contrast to human opponent-color processing will yield fused image products of high image quality and utility.
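The opponent-color fusion step can be sketched compactly. The following minimal illustration reuses the shunt_enhance() operator sketched above; forming Vis-vs-IR single-opponent channels with a shunting operator whose center and surround are fed by different sensors follows the spirit of these papers, but the display-channel assignment below is an illustrative assumption, not the authors' exact mapping.

```python
# Sketch of two-band opponent-color fusion. Assumes co-registered,
# nonnegative float images and the shunt_enhance() function above.
import numpy as np
from scipy.ndimage import gaussian_filter

def opponent(center, surround, sigma_c=1.0, sigma_s=4.0, A=1.0):
    """Single-opponent channel: center fed by one sensor, surround
    by the other, via the same shunting ratio form as above."""
    C = gaussian_filter(np.asarray(center, dtype=np.float64), sigma_c)
    S = gaussian_filter(np.asarray(surround, dtype=np.float64), sigma_s)
    return (C - S) / (A + C + S)

def fuse_vis_ir(vis, ir):
    vis_plus_ir_minus = opponent(vis, ir)   # Vis+ / IR- channel
    ir_plus_vis_minus = opponent(ir, vis)   # IR+ / Vis- channel
    g = shunt_enhance(vis)                  # enhanced visible band
    # Illustrative channel assignment: warm (IR-dominant) -> red,
    # visible detail -> green, cool (Vis-dominant) -> blue.
    rgb = np.stack([ir_plus_vis_minus, g, vis_plus_ir_minus], axis=-1)
    rgb -= rgb.min()
    return rgb / (rgb.max() + 1e-9)         # normalized for display
```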


International Conference on Information Fusion | 2000

Fused multi-sensor image mining for feature foundation data

William W. Streilein; Allen M. Waxman; William Ross; Fang Liu; Michael Braun; David A. Fay; Paul Harmon; Chung Hye Read

Presents work on methods and user interfaces developed for interactive mining for feature foundation data (e.g. roads, rivers, orchards, forests) in fused multi-sensor imagery. A suite of client/server-based tools, including the Site Mining Tool and Image Map Interface, enables image analysts (IAs) to mine multi-sensor imagery for feature foundation data and to share trainable search agents, search results and image annotations with other IAs connected via a computer network. We discuss extensions to the fuzzy ARTMAP neural network which enable the Site Mining Tool to report confidence measures for detected search targets and to automatically select the critical features in the input vector which are most relevant for particular searches. Examples of the use of the Site Mining Tool and Image Map Interface are shown for an electro-optical (EO), IR and SAR data set derived from Landsat and Radarsat imagery, as well as multispectral (4-band) and hyperspectral (224-band) data sets. In addition, we present an architecture for the enhancement of hyperspectral fused imagery that utilizes internal category activity maps of a trained fuzzy ARTMAP network to enhance the visualization of targets in the color-fused imagery.
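The fuzzy ART computations at the core of fuzzy ARTMAP are compact enough to sketch. The following shows the standard category-choice and vigilance-match steps, plus one simple confidence proxy (normalized winner activation); the confidence formula is an assumption for illustration and does not reproduce the paper's extensions.

```python
# Minimal sketch of fuzzy ART category choice and match, with a
# simple (assumed, illustrative) confidence measure for the winner.
import numpy as np

def fuzzy_art_choice(I, W, alpha=0.001):
    """I: complement-coded input, shape (2M,); W: category weights,
    shape (N, 2M). Returns T_j = |I ^ w_j| / (alpha + |w_j|)."""
    fuzzy_and = np.minimum(I, W)                 # component-wise min
    return fuzzy_and.sum(axis=1) / (alpha + W.sum(axis=1))

def predict_with_confidence(I, W, rho=0.7):
    """Winner-take-all prediction; returns (category or None, confidence)."""
    T = fuzzy_art_choice(I, W)
    j = int(np.argmax(T))
    match = np.minimum(I, W[j]).sum() / I.sum()  # vigilance criterion
    if match < rho:                              # no category matches
        return None, 0.0
    return j, float(T[j] / (T.sum() + 1e-9))     # normalized activation
```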


Proceedings of SPIE | 1996

Progress on color night vision: visible/IR fusion, perception and search, and low-light CCD imaging

Allen M. Waxman; Alan N. Gove; Michael C. Seibert; David A. Fay; James E. Carrick; Joseph P. Racamato; Eugene D. Savoye; Barry E. Burke; Robert K. Reich; William H. McGonagle; David M. Craig

We report progress on our development of a color night vision capability, using biological models of opponent-color processing to fuse low-light visible and thermal IR imagery and render it in realtime in natural colors. Preliminary results of human perceptual testing are described for a visual search task, the detection of embedded small low-contrast targets in natural night scenes. The advantages of color fusion over two alternative grayscale fusion products are demonstrated in the form of consistent, rapid detection across a variety of low-contrast (+/- 15% or less) visible and IR conditions. We also describe advances in our development of a low-light CCD camera, capable of imaging in the visible through near-infrared in starlight at 30 frames/sec with wide intrascene dynamic range, and the locally adaptive dynamic range compression of this imagery. Example CCD imagery is shown under controlled illumination conditions, from full moon down to overcast starlight. By combining the low-light CCD visible imager with a microbolometer array LWIR imager, a portable image processor, and a color LCD on a chip, we can realize a compact design for a color fusion night vision scope.


Enhanced and Synthetic Vision 1999 | 1999

Field evaluations of dual-band fusion for color night vision

Mario Aguilar; David A. Fay; David B. Ireland; Joseph P. Racamato; William Ross; Allen M. Waxman

As part of an advanced night vision program sponsored by DARPA, a method for real-time color night vision based on the fusion of visible and infrared sensors has been developed and demonstrated. The work, based on principles of color vision in humans and primates, achieves an effective strategy for combining the complementary information present in the two sensors. Our sensor platform consists of a 640 x 480 low-light CCD camera developed at MIT Lincoln Laboratory and a 320 x 240 uncooled microbolometer thermal infrared camera from Lockheed Martin Infrared. Image capture, data processing, and display are implemented in real-time (30 fps) on commercial hardware. Recent results from field tests at Lincoln Laboratory and in collaboration with U.S. Army Special Forces at Fort Campbell are presented. During the tests, we evaluated the performance of the system for ground surveillance and as a driving aid. Here, we report on the results using both wide field-of-view (42 deg.) and narrow field-of-view (7 deg.) platforms.


Enhanced and Synthetic Vision 2000 | 2000

Fusion of 2-/3-/4-sensor imagery for visualization, target learning, and search

David A. Fay; Allen M. Waxman; Mario Aguilar; David B. Ireland; Joseph P. Racamato; William Ross; William W. Streilein; Michael Braun

We present recent work on methods for fusion of imagery from multiple sensors for night vision capability. The fusion system architectures are based on biological models of the spatial and opponent-color processes in the human retina and visual cortex. The real-time implementation of the dual-sensor fusion system combines imagery from either a low-light CCD camera (developed at MIT Lincoln Laboratory) or a short-wave infrared camera (from Sensors Unlimited, Inc.) with thermal long-wave infrared imagery (from a Lockheed Martin microbolometer camera). Example results are shown for an extension of the fusion architecture to include imagery from all three of these sensors as well as imagery from a mid-wave infrared imager (from Raytheon Amber Corp.). We also demonstrate how the results from these multi-sensor fusion systems can be used as inputs to an interactive tool for target designation, learning, and search based on a fuzzy ARTMAP neural network.
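The bridge from fused imagery to the fuzzy ARTMAP search tool can be sketched in a few lines: per-pixel features from the enhanced bands and opponent channels are complement-coded before training. This is a minimal illustration of standard complement coding, under the assumption that features have already been scaled to [0, 1].

```python
# Sketch of complement coding, the standard input encoding for
# fuzzy ART/ARTMAP (assumes features pre-scaled to [0, 1]).
import numpy as np

def complement_code(features):
    """features: (..., M) array in [0, 1]; returns the (..., 2M)
    complement-coded vectors a -> [a, 1 - a]."""
    f = np.clip(np.asarray(features, dtype=np.float64), 0.0, 1.0)
    return np.concatenate([f, 1.0 - f], axis=-1)

# Per pixel: stack the enhanced bands and opponent channels,
# complement-code them, and hand the vectors at operator-designated
# pixels to the fuzzy ARTMAP routines sketched earlier.
```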


Proceedings of SPIE | 1997

Electronic imaging aids for night driving: low-light CCD, uncooled thermal IR, and color-fused visible/LWIR

Allen M. Waxman; Eugene D. Savoye; David A. Fay; Mario Aguilar; Alan N. Gove; James E. Carrick; Joseph P. Racamato

MIT Lincoln Laboratory is developing new electronic night vision technologies for defense applications which can be adapted for civilian applications such as night driving aids. These technologies include (1) low-light CCD imagers capable of operating under starlight illumination conditions at video rates, (2) realtime processing of wide dynamic range imagery (visible and IR) to enhance contrast and adaptively compress dynamic range, and (3) realtime fusion of low-light visible and thermal IR imagery to provide a color display of the night scene to the operator in order to enhance situational awareness. This paper compares imagery collected during night driving, including low-light CCD visible imagery, intensified-CCD visible imagery, uncooled long-wave IR imagery, cryogenically cooled mid-wave IR imagery, and visible/IR dual-band imagery fused for gray and color display.


Proceedings of SPIE | 2001

Fusion of Multi-Sensor Passive and Active 3D Imagery

David A. Fay; Jacques Verly; Michael Braun; Carl E. Frost; Joseph P. Racamato; Allen M. Waxman

We have extended our previous capabilities for fusion of multiple passive imaging sensors to now include 3D imagery obtained from a prototype flash ladar. Real-time fusion of low-light visible + uncooled LWIR + 3D LADAR, and SWIR + LWIR + 3D LADAR is demonstrated. Fused visualization is achieved by opponent-color neural networks for passive image fusion, which is then textured upon segmented object surfaces derived from the 3D data. An interactive viewer, coded in Java3D, is used to examine the 3D fused scene in stereo. Interactive designation, learning, recognition and search for targets, based on fused passive + 3D signatures, is achieved using Fuzzy ARTMAP neural networks with a Java-coded GUI. A client-server web-based architecture enables remote users to interact with fused 3D imagery via a wireless palmtop computer.


International Conference on Information Fusion | 2003

Learn-while-tracking, feature discovery and fusion of high-resolution radar range profiles

Richard T. Ivey; Allen M. Waxman; David A. Fay; Daniel P. Martin

High-Resolution Radar (HRR) range profile data, obtained simultaneously with radar GMTI detection of a moving target, can provide a means to improve a target tracker's performance when multiple vehicles are in kinematically ambiguous situations. There is a need for methods that can process HRR profiles on-the-fly, enhance their features, discover which of the features are salient, and learn robust representations from a small number of views. We present signal processing and pattern recognition methods based on neural models of human visual processing, learning, and recognition that provide a novel approach to address this need. Promising simulation results using a set of military targets from the MSTAR dataset indicate that these methods can be exploited in the field, on-line, to improve the association between GMTI detections and tracks. The approach developed here is extensible and can easily accommodate multiple sensor platforms and incorporate other target signatures that may complement the HRR profile, for example spectral imagery of the target. Thus, our approach opens the way to sensor-fused target tracking.
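The on-the-fly feature-enhancement step can be sketched by applying the same center-surround shunting ratio used for imagery to a 1D range profile. This is a minimal illustration under assumed kernel widths, not the paper's processing chain; the rectified output could then be complement-coded and fed to the fuzzy ART routines sketched earlier.

```python
# Sketch of 1D center-surround enhancement of an HRR range profile
# (illustrative parameters; rectification keeps the "on" channel).
import numpy as np
from scipy.ndimage import gaussian_filter1d

def enhance_profile(profile, sigma_c=1.0, sigma_s=6.0, A=1.0):
    """Contrast-enhance and normalize a 1D range profile."""
    p = np.abs(np.asarray(profile, dtype=np.float64))
    C = gaussian_filter1d(p, sigma_c)   # excitatory center
    S = gaussian_filter1d(p, sigma_s)   # inhibitory surround
    x = (C - S) / (A + C + S)           # shunting ratio, as for imagery
    return np.clip(x, 0.0, None)        # rectified enhanced profile
```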


Enhanced and Synthetic Vision 2004 | 2004

Multisensor image fusion and mining: learning targets across extended operating conditions

David A. Fay; Allen M. Waxman; Richard T. Ivey; Neil A. Bomberger; Marianne Chiarella

We have continued development of a system for multisensor image fusion and interactive mining based on neural models of color vision processing, learning and pattern recognition. We pioneered this work while at MIT Lincoln Laboratory, initially for color-fused night vision (low-light visible and uncooled thermal imagery) and later extended it to multispectral IR and 3D ladar. We also developed a proof-of-concept system for EO, IR, SAR fusion and mining. Over the last year we have generalized this approach and developed a user-friendly system integrated into a COTS exploitation environment known as ERDAS Imagine. In this paper, we will summarize the approach and the neural networks used, and demonstrate fusion and interactive mining (i.e., target learning and search) of low-light Visible/SWIR/MWIR/LWIR night imagery, and IKONOS multispectral and high-resolution panchromatic imagery. In addition, we will demonstrate how target learning and search can be enabled over extended operating conditions by allowing training over multiple scenes. This will be illustrated for the detection of small boats in coastal waters using fused Visible/MWIR/LWIR imagery.
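The "training over multiple scenes" idea amounts to pooling complement-coded exemplars from several scenes into one training set so a single learned target model covers the extended operating conditions. The following is a simplified sketch of an unsupervised fuzzy ART learning loop (the ARTMAP map field is omitted, and all parameters are illustrative), not the system's actual trainer.

```python
# Simplified fuzzy ART learning over complement-coded input rows.
import numpy as np

def train_fuzzy_art(inputs, alpha=0.001, beta=1.0, rho=0.75):
    """Returns the learned category weight vectors."""
    W = []
    for I in inputs:
        placed = False
        if W:
            Wa = np.asarray(W)
            T = np.minimum(I, Wa).sum(axis=1) / (alpha + Wa.sum(axis=1))
            for j in np.argsort(-T):          # search by choice value
                # vigilance test: does category j match well enough?
                if np.minimum(I, W[j]).sum() / I.sum() >= rho:
                    W[j] = beta * np.minimum(I, W[j]) + (1 - beta) * W[j]
                    placed = True
                    break
        if not placed:                        # commit a new category
            W.append(np.asarray(I, dtype=np.float64).copy())
    return W

# Pooling exemplars from several scenes, e.g.
#   train_fuzzy_art(np.vstack([scene_a_vecs, scene_b_vecs]))
# yields one target model spanning both scenes' conditions.
```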

Collaboration


Dive into David A. Fay's collaborations.

Top Co-Authors

Allen M. Waxman (Massachusetts Institute of Technology)
Joseph P. Racamato (Massachusetts Institute of Technology)
Alan N. Gove (Massachusetts Institute of Technology)
Michael Braun (Massachusetts Institute of Technology)
William Ross (Massachusetts Institute of Technology)
James E. Carrick (Massachusetts Institute of Technology)
Eugene D. Savoye (Massachusetts Institute of Technology)
Mario Aguilar (Massachusetts Institute of Technology)