Joseph P. Racamato
Massachusetts Institute of Technology
Publications
Featured research published by Joseph P. Racamato.
Proceedings of SPIE | 1998
Mario Aguilar; David A. Fay; William Ross; Allen M. Waxman; David B. Ireland; Joseph P. Racamato
We present an approach to color night vision through fusion of information derived from visible and thermal infrared sensors. Building on the work reported at SPIE in 1996 and 1997, we show how opponent-color processing and center-surround shunting neural networks can achieve informative multi-band image fusion. In particular, by emulating spatial and color processing in the retina, we demonstrate an effective strategy for multi-sensor color night vision. We have developed a real-time visible/IR fusion processor from multiple C80 DSP chips using commercially available Matrox Genesis boards, which we use in conjunction with the Lincoln Lab low-light CCD and a Raytheon TI Systems uncooled IR camera. Limited human factors testing of visible/IR fusion is presented showing improvements in human performance using our color fused imagery relative to alternative fusion strategies or either single image modality alone. We conclude that fusion architectures that match opponent-sensor contrast to human opponent-color processing will yield fused image products of high image quality and utility.
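The center-surround shunting operator named in this abstract is well documented in the neural-network literature; the sketch below illustrates the general idea in Python under assumed parameter values. The opponent pairings and the assignment of single-opponent contrasts to RGB display channels are illustrative simplifications, not the authors' exact published architecture.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shunting_center_surround(center, surround, sigma_c=1.0, sigma_s=4.0,
                             gain=1.0, decay=0.1):
    # Steady-state shunting response: gain * (C - S) / (decay + C + S).
    # Dividing by local activity normalizes contrast, which is what gives
    # these networks their adaptive dynamic-range behavior.
    C = gaussian_filter(center.astype(float), sigma_c)    # excitatory center
    S = gaussian_filter(surround.astype(float), sigma_s)  # inhibitory surround
    return gain * (C - S) / (decay + C + S)

def fuse_visible_ir(vis, ir):
    # Single-opponent contrasts between the two bands (hypothetical pairings):
    vis_on_ir_off = shunting_center_surround(vis, ir)
    ir_on_vis_off = shunting_center_surround(ir, vis)
    vis_enhanced = shunting_center_surround(vis, vis)  # within-band enhancement
    # Map the opponent contrasts to display channels and rescale to [0, 1].
    rgb = np.stack([ir_on_vis_off, vis_enhanced, vis_on_ir_off], axis=-1)
    return (rgb - rgb.min()) / (np.ptp(rgb) + 1e-8)
```

Running this on a registered visible/IR pair produces a false-color composite in which warm objects and visible-band detail land in different hues, the qualitative effect the abstract describes.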
Proceedings of SPIE | 1996
Allen M. Waxman; Alan N. Gove; Michael C. Siebert; David A. Fay; James E. Carrick; Joseph P. Racamato; Eugene D. Savoye; Barry E. Burke; Robert K. Reich; William H. McGonagle; David M. Craig
We report progress on our development of a color night vision capability, using biological models of opponent-color processing to fuse low-light visible and thermal IR imagery and render it in real time in natural colors. Preliminary results of human perceptual testing are described for a visual search task, the detection of embedded small low-contrast targets in natural night scenes. The advantages of color fusion over two alternative grayscale fusion products are demonstrated in the form of consistent, rapid detection across a variety of low-contrast (+/- 15% or less) visible and IR conditions. We also describe advances in our development of a low-light CCD camera, capable of imaging in the visible through near-infrared in starlight at 30 frames/sec with wide intrascene dynamic range, and the locally adaptive dynamic range compression of this imagery. Example CCD imagery is shown under controlled illumination conditions, from full moon down to overcast starlight. By combining the low-light CCD visible imager with a microbolometer array LWIR imager, a portable image processor, and a color LCD on a chip, we can realize a compact design for a color fusion night vision scope.
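The "locally adaptive dynamic range compression" mentioned here maps the CCD's wide intrascene range onto a display. The paper's own mechanism is a shunting neural network; the retinex-style sketch below stands in for the idea rather than reproducing it, and sigma and eps are assumed values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def compress_dynamic_range(img, sigma=15.0, eps=1e-6):
    # Divide each pixel by a Gaussian-smoothed local mean so that moonlit
    # and shadowed regions both land within display range, then apply a
    # gentle global log compression and rescale for an 8-bit display.
    img = np.asarray(img, dtype=float)
    local_mean = gaussian_filter(img, sigma)
    ratio = img / (local_mean + eps)   # locally normalized intensity
    out = np.log1p(ratio)              # compress the remaining global range
    return out / (out.max() + eps)
```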
Enhanced and Synthetic Vision 1999 | 1999
Mario Aguilar; David A. Fay; David B. Ireland; Joseph P. Racamato; William Ross; Allen M. Waxman
As part of an advanced night vision program sponsored by DARPA, a method for real-time color night vision based on the fusion of visible and infrared sensors has been developed and demonstrated. The work, based on principles of color vision in humans and primates, achieves an effective strategy for combining the complementary information present in the two sensors. Our sensor platform consists of a 640 × 480 low-light CCD camera developed at MIT Lincoln Laboratory and a 320 × 240 uncooled microbolometer thermal infrared camera from Lockheed Martin Infrared. Image capture, data processing, and display are implemented in real time (30 fps) on commercial hardware. Recent results from field tests at Lincoln Laboratory and in collaboration with U.S. Army Special Forces at Fort Campbell are presented. During the tests, we evaluated the performance of the system for ground surveillance and as a driving aid. Here, we report results using both wide field-of-view (42 deg.) and narrow field-of-view (7 deg.) platforms.
Enhanced and Synthetic Vision 2000 | 2000
David A. Fay; Allen M. Waxman; Mario Aguilar; David B. Ireland; Joseph P. Racamato; William Ross; William W. Streilein; Michael Braun
We present recent work on methods for fusion of imagery from multiple sensors for night vision capability. The fusion system architectures are based on biological models of the spatial and opponent-color processes in the human retina and visual cortex. The real-time implementation of the dual-sensor fusion system combines imagery from either a low-light CCD camera (developed at MIT Lincoln Laboratory) or a short-wave infrared camera (from Sensors Unlimited, Inc.) with thermal long-wave infrared imagery (from a Lockheed Martin microbolometer camera). Example results are shown for an extension of the fusion architecture to include imagery from all three of these sensors as well as imagery from a mid-wave infrared imager (from Raytheon Amber Corp.). We also demonstrate how the results from these multi-sensor fusion systems can be used as inputs to an interactive tool for target designation, learning, and search based on a Fuzzy ARTMAP neural network.
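Fuzzy ARTMAP, the classifier named in this abstract, is the supervised variant of Fuzzy ART. The sketch below implements only the unsupervised category-learning half, with one label stored per category and no match tracking; that is a deliberate, loudly flagged simplification of the full ARTMAP map field the paper uses.

```python
import numpy as np

def complement_code(x):
    # Standard ART preprocessing: represent x in [0, 1] as [x, 1 - x]
    # so that learned category "boxes" can shrink as they are trained.
    x = np.asarray(x, dtype=float)
    return np.concatenate([x, 1.0 - x])

class FuzzyARTClassifier:
    def __init__(self, vigilance=0.8, alpha=1e-3):
        self.rho = vigilance      # match-criterion threshold
        self.alpha = alpha        # choice-function tie-breaker
        self.weights = []         # one weight vector per committed category
        self.labels = []          # one target label per category

    def _choice(self, I, w):
        # Category choice T_j = |I ^ w_j| / (alpha + |w_j|), with ^ = min.
        return np.minimum(I, w).sum() / (self.alpha + w.sum())

    def train(self, x, label):
        I = complement_code(x)
        order = sorted(range(len(self.weights)),
                       key=lambda j: -self._choice(I, self.weights[j]))
        for j in order:
            w = self.weights[j]
            match = np.minimum(I, w).sum() / I.sum()
            if match >= self.rho and self.labels[j] == label:
                self.weights[j] = np.minimum(I, w)   # fast learning (beta = 1)
                return j
        self.weights.append(I)                       # commit a new category
        self.labels.append(label)
        return len(self.weights) - 1

    def predict(self, x):
        I = complement_code(x)
        best = max(range(len(self.weights)),
                   key=lambda j: self._choice(I, self.weights[j]))
        return self.labels[best]
```

In the interactive tool described here, the feature vector for each pixel would be its set of fused-band responses scaled to [0, 1], with labels supplied by an operator designating targets on screen.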
Proceedings of SPIE | 1997
Allen M. Waxman; Eugene D. Savoye; David A. Fay; Mario Aguilar; Alan N. Gove; James E. Carrick; Joseph P. Racamato
MIT Lincoln Laboratory is developing new electronic night vision technologies for defense applications which can be adapted for civilian applications such as night driving aids. These technologies include (1) low-light CCD imagers capable of operating under starlight illumination conditions at video rates, (2) realtime processing of wide dynamic range imagery (visible and IR) to enhance contrast and adaptively compress dynamic range, and (3) realtime fusion of low-light visible and thermal IR imagery to provide color display of the night scene to the operator in order to enhance situational awareness. This paper compares imagery collected during night driving including: low-light CCD visible imagery, intensified-CCD visible imagery, uncooled long-wave IR imagery, cryogenically cooled mid-wave IR imagery, and visible/IR dual-band imagery fused for gray and color display.
Proceedings of SPIE | 2001
David A. Fay; Jacques Verly; Michael Braun; Carl E. Frost; Joseph P. Racamato; Allen M. Waxman
We have extended our previous capabilities for fusion of multiple passive imaging sensors to include 3D imagery obtained from a prototype flash LADAR. Real-time fusion of low-light visible + uncooled LWIR + 3D LADAR, and SWIR + LWIR + 3D LADAR, is demonstrated. Fused visualization is achieved by opponent-color neural networks for passive image fusion, which is then textured upon segmented object surfaces derived from the 3D data. An interactive viewer, coded in Java3D, is used to examine the 3D fused scene in stereo. Interactive designation, learning, recognition, and search for targets, based on fused passive + 3D signatures, is achieved using Fuzzy ARTMAP neural networks with a Java-coded GUI. A client-server web-based architecture enables remote users to interact with fused 3D imagery via a wireless palmtop computer.
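Texturing the fused passive imagery onto LADAR-derived surfaces reduces, at its core, to a color lookup per 3D point. The sketch below assumes the points are already registered into the passive camera's frame and that K is its 3×3 intrinsic matrix; both are assumptions, since the paper's registration details aren't given here.

```python
import numpy as np

def color_ladar_points(points_xyz, fused_rgb, K):
    # Pinhole projection: pixel ~ K @ [X, Y, Z]^T, then divide by depth.
    uvw = points_xyz @ K.T
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    h, w, _ = fused_rgb.shape
    valid = ((uv[:, 0] >= 0) & (uv[:, 0] < w) &
             (uv[:, 1] >= 0) & (uv[:, 1] < h) & (uvw[:, 2] > 0))
    colors = np.zeros((len(points_xyz), 3))
    colors[valid] = fused_rgb[uv[valid, 1], uv[valid, 0]]
    return colors, valid
```

A mesh viewer (Java3D in the paper's case) would then interpolate these per-point colors across the segmented object surfaces.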
Neural Networks | 1997
Allen M. Waxman; Alan N. Gove; David A. Fay; Joseph P. Racamato; James E. Carrick; Michael Seibert; Eugene D. Savoye
Archive | 1994
Allen M. Waxman; David A. Fay; Alan N. Gove; Michael Seibert; Joseph P. Racamato
Storage and Retrieval for Image and Video Databases | 1995
Allen M. Waxman; David A. Fay; Alan N. Gove; Michael Seibert; Joseph P. Racamato; James E. Carrick; Eugene D. Savoye
Archive | 1998
Allen M. Waxman; Mario Aguilar; David A. Fay; David B. Ireland; Joseph P. Racamato; William Ross; James E. Carrick; Alan N. Gove; Michael Seibert; Eugene D. Savoye; Robert K. Reich; Barry E. Burke; William H. McGonagle; David M. Craig