
Publication


Featured research published by Alan N. Gove.


Visual Neuroscience | 1995

Brightness perception, illusory contours, and corticogeniculate feedback

Alan N. Gove; Stephen Grossberg; Ennio Mingolla

A neural network model is developed to explain how visual thalamocortical interactions give rise to boundary percepts such as illusory contours and surface percepts such as filled-in brightnesses. Top-down feedback interactions are needed in addition to bottom-up feed-forward interactions to simulate these data. One feedback loop is modeled between lateral geniculate nucleus (LGN) and cortical area V1, and another within cortical areas V1 and V2. The first feedback loop realizes a matching process which enhances LGN cell activities that are consistent with those of active cortical cells, and suppresses LGN activities that are not. This corticogeniculate feedback, being endstopped and oriented, also enhances LGN ON cell activations at the ends of thin dark lines, thereby leading to enhanced cortical brightness percepts when the lines group into closed illusory contours. The second feedback loop generates boundary representations, including illusory contours, that coherently bind distributed cortical features together. Brightness percepts form within the surface representations through a diffusive filling-in process that is contained by resistive gating signals from the boundary representations. The model is used to simulate illusory contours and surface brightness induced by Ehrenstein disks, Kanizsa squares, Glass patterns, and café wall patterns in single contrast, reverse contrast, and mixed contrast configurations. 
These examples illustrate how boundary and surface mechanisms can generate percepts that are highly context-sensitive, including how illusory contours can be amodally recognized without being seen, how model simple cells in V1 respond preferentially to luminance discontinuities using inputs from both LGN ON and OFF cells, how model bipole cells in V2 with two colinear receptive fields can help to complete curved illusory contours, how short-range simple cell groupings and long-range bipole cell groupings can sometimes generate different outcomes, and how model double-opponent, filling-in and boundary segmentation mechanisms in V4 interact to generate surface brightness percepts in which filling-in of enhanced brightness and darkness can occur before the net brightness distribution is computed by double-opponent interactions.
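The diffusive filling-in contained by resistive boundary gating can be sketched as a discrete gated diffusion: feature activity spreads to its four neighbors, but the permeability between cells drops where boundary strength is high, so activity pools inside closed boundary compartments. This is a minimal illustrative toy, not the paper's equations; the function name, gating gain, and update rule are assumptions.

```python
import numpy as np

def fill_in(feature, boundary, iters=200, rate=0.2):
    """Toy gated diffusion: feature activity spreads to 4-neighbors,
    but permeability drops where boundary strength is high, so
    activity pools within closed boundary compartments."""
    s = feature.astype(float).copy()
    # Permeability between cells falls with boundary strength
    # (the gain of 10 is an arbitrary illustrative choice).
    perm = 1.0 / (1.0 + 10.0 * boundary.astype(float))
    for _ in range(iters):
        flux = np.zeros_like(s)
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            nb = np.roll(s, shift, axis=axis)
            p = np.minimum(perm, np.roll(perm, shift, axis=axis))
            diff = (nb - s) * p
            # Cancel the wraparound flux np.roll introduces at the border.
            if axis == 0 and shift == 1:
                diff[0, :] = 0.0
            elif axis == 0 and shift == -1:
                diff[-1, :] = 0.0
            elif axis == 1 and shift == 1:
                diff[:, 0] = 0.0
            else:
                diff[:, -1] = 0.0
            flux += diff
        s += rate * flux
    return s
```

With a strong vertical boundary between two compartments, a point of feature activity on one side fills in its own compartment while leaking only weakly across the boundary; total activity is conserved.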


Neural Networks | 1995

Neural processing of targets in visible, multispectral IR and SAR imagery

Allen M. Waxman; Michael Seibert; Alan N. Gove; David A. Fay; Ann Marie Bernardon; Carol H. Lazott; William R. Steele; Robert K. Cunningham

We have designed and implemented computational neural systems for target enhancement, detection, learning and recognition in visible, multispectral infrared (IR), and synthetic aperture radar (SAR) imagery. The system architectures are motivated by designs of biological vision systems, drawing insights from retinal processing of contrast and color, occipital lobe processing of shading, color and contour, and temporal lobe processing of pattern and shape. Distinguishing among similar targets, and accumulation of evidence across image sequences is also described. Similar neurocomputational principles and modules are used across these various sensing domains. We show how 3D target learning and recognition from visible silhouettes and SAR return patterns are related. We show how models of contrast enhancement, contour, shading and color vision can be used to enhance targets in multispectral IR and SAR imagery, aiding in target detection.


Proceedings of SPIE | 1996

Progress on color night vision: visible/IR fusion, perception and search, and low-light CCD imaging

Allen M. Waxman; Alan N. Gove; Michael C. Siebert; David A. Fay; James E. Carrick; Joseph P. Racamato; Eugene D. Savoye; Barry E. Burke; Robert K. Reich; William H. McGonagle; David M. Craig

We report progress on our development of a color night vision capability, using biological models of opponent-color processing to fuse low-light visible and thermal IR imagery, and render it in realtime in natural colors. Preliminary results of human perceptual testing are described for a visual search task, the detection of embedded small low-contrast targets in natural night scenes. The advantages of color fusion over two alternative grayscale fusion products are demonstrated in the form of consistent, rapid detection across a variety of low-contrast (+/- 15% or less) visible and IR conditions. We also describe advances in our development of a low-light CCD camera, capable of imaging in the visible through near-infrared in starlight at 30 frames/sec with wide intrascene dynamic range, and the locally adaptive dynamic range compression of this imagery. Example CCD imagery is shown under controlled illumination conditions, from full moon down to overcast starlight. By combining the low-light CCD visible imager with a microbolometer array LWIR imager, a portable image processor, and a color LCD on a chip, we can realize a compact design for a color fusion night vision scope.
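The opponent-color fusion idea can be sketched as a simple two-band-to-RGB mapping: the visible band drives a luminance-like green channel, and a visible-versus-IR opponent signal shifts the red/blue balance so that thermally warm targets stand out in red. This is an illustrative toy mapping only, not the published architecture; the function name, normalization, and channel weights are assumptions.

```python
import numpy as np

def fuse_visible_ir(visible, ir):
    """Toy two-band color fusion: visible drives the green channel,
    a visible-vs-IR opponent signal shifts the red/blue balance."""
    vis = visible.astype(float)
    lwir = ir.astype(float)
    # Normalize each band to [0, 1].
    vis = (vis - vis.min()) / (np.ptp(vis) + 1e-9)
    lwir = (lwir - lwir.min()) / (np.ptp(lwir) + 1e-9)
    # Opponent signal: positive where visible dominates, negative where IR does.
    opponent = vis - lwir
    rgb = np.stack([
        np.clip(lwir + 0.5 * np.maximum(-opponent, 0.0), 0.0, 1.0),  # R: IR-dominant
        np.clip(vis, 0.0, 1.0),                                      # G: visible
        np.clip(vis - 0.5 * np.maximum(opponent, 0.0), 0.0, 1.0),    # B
    ], axis=-1)
    return rgb
```

Given registered visible and IR frames of the same shape, this returns an H x W x 3 array in [0, 1] suitable for direct display.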


SPIE's 1995 Symposium on OE/Aerospace Sensing and Dual Use Photonics | 1995

Color night vision: fusion of intensified visible and thermal IR imagery

Allen M. Waxman; David A. Fay; Alan N. Gove; Michael Seibert; Joseph P. Racamato; James E. Carrick; Eugene D. Savoye

We introduce an apparatus and methodology to support realtime color imaging for night operations. Registered imagery obtained in the visible through near IR band is combined with thermal IR imagery using principles of biological color vision. The visible imagery is obtained using a Gen III image intensifier tube optically coupled to a conventional CCD, while the thermal IR imagery is obtained using an uncooled thermal imaging array, the two fields of view being matched and imaged through a dichroic beam splitter. Remarkably realistic color renderings of night scenes are obtained, and examples are given in the paper. We also describe a compact integrated version of our system in the form of a color night vision device, in which the intensifier tube is replaced by a high resolution low-light sensitive CCD. Example CCD imagery obtained under starlight conditions is also shown. The system described here has the potential to support safe and efficient night flight, ground, sea and search & rescue operations, as well as night surveillance.


International Symposium on Neural Networks | 1992

Processing of synthetic aperture radar images by the boundary contour system and feature contour system

Dan Cruthirds; Alan N. Gove; Stephen Grossberg; Ennio Mingolla; Nicholas Nowak; James R. Williamson

An improved boundary contour system (BCS) and feature contour system (FCS) neural network model of preattentive vision was applied to two large images containing range data gathered by a synthetic aperture radar sensor. Early processing by shunting center-surround networks compresses signal dynamic range and performs local contrast enhancement. Subsequent processing by filters sensitive to oriented contrast, including short-range competition and long-range cooperation, segments the image into regions. Finally, a diffusive filling-in operation within the segmented regions produces coherent visible structures. The combination of BCS and FCS capitalizes on the form-sensitive operations of a neural network model to detect and enhance structure based on information over large, variably sized and variably shaped regions of the image.
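The early shunting center-surround stage has a well-known equilibrium form: for on-center activity C and off-surround activity S, the steady state of dx/dt = -a*x + (b - x)*C - (d + x)*S is x = (b*C - d*S)/(a + C + S), which both enhances local contrast and compresses dynamic range (the output is bounded in [-d, b] regardless of input scale). A minimal sketch, assuming uniform box kernels in place of whatever kernels the model actually uses; the parameter values are illustrative.

```python
import numpy as np

def shunting_center_surround(image, a=1.0, b=1.0, d=1.0,
                             center_size=1, surround_size=3):
    """Equilibrium of a shunting on-center off-surround network:
    x = (b*C - d*S) / (a + C + S). Bounded output in [-d, b]."""
    def box_mean(img, k):
        # Box filter with edge padding; avoids a SciPy dependency.
        pad = k // 2
        padded = np.pad(img, pad, mode="edge")
        out = np.zeros(img.shape, dtype=float)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)

    center = box_mean(image.astype(float), center_size)
    surround = box_mean(image.astype(float), surround_size)
    # Steady state of dx/dt = -a*x + (b - x)*C - (d + x)*S.
    return (b * center - d * surround) / (a + center + surround)
```

With b = d, a spatially uniform input yields zero response: the network signals contrast, not absolute level, which is why it tolerates the wide dynamic range of SAR returns.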


Proceedings of SPIE | 1997

Electronic imaging aids for night driving: low-light CCD, uncooled thermal IR, and color-fused visible/LWIR

Allen M. Waxman; Eugene D. Savoye; David A. Fay; Mario Aguilar; Alan N. Gove; James E. Carrick; Joseph P. Racamato

MIT Lincoln Laboratory is developing new electronic night vision technologies for defense applications which can be adapted for civilian applications such as night driving aids. These technologies include (1) low-light CCD imagers capable of operating under starlight illumination conditions at video rates, (2) realtime processing of wide dynamic range imagery (visible and IR) to enhance contrast and adaptively compress dynamic range, and (3) realtime fusion of low-light visible and thermal IR imagery to provide color display of the night scene to the operator in order to enhance situational awareness. This paper compares imagery collected during night driving including: low-light CCD visible imagery, intensified-CCD visible imagery, uncooled long-wave IR imagery, cryogenically cooled mid-wave IR imagery, and visible/IR dual-band imagery fused for gray and color display.


SPIE's 1995 Symposium on OE/Aerospace Sensing and Dual Use Photonics | 1995

Neural processing of SAR imagery for enhanced target detection

Allen M. Waxman; Carol H. Lazott; David A. Fay; Alan N. Gove; W. R. Steele

Neural network models of early visual computation have been adapted for processing single polarization (VV channel) SAR imagery, in order to assess their potential for enhanced target detection. In particular, nonlinear center-surround shunting networks and multi-resolution boundary contour/feature contour system processing have been applied to a spotlight sequence of tactical targets imaged by the Lincoln ADT sensor at 1 ft resolution. We show how neural processing can modify the target and clutter statistics, thereby separating the populations more effectively. ROC performance curves indicating detection versus false alarm rate are presented, clearly showing the potential benefits of neural pre-processing of SAR imagery.
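An ROC curve of the kind described is built by sweeping a detection threshold over pooled detector scores and recording, at each threshold, the fraction of targets detected against the fraction of clutter cells that false-alarm. A generic sketch (the function name and the per-pixel-score framing are assumptions, not the paper's procedure):

```python
import numpy as np

def roc_points(target_scores, clutter_scores):
    """Sweep a threshold over all observed scores and return
    (false-alarm rate, detection rate) pairs, highest threshold first."""
    thresholds = np.sort(np.concatenate([target_scores, clutter_scores]))[::-1]
    pts = []
    for t in thresholds:
        pd = np.mean(target_scores >= t)    # detection probability
        pfa = np.mean(clutter_scores >= t)  # false-alarm rate
        pts.append((pfa, pd))
    return pts
```

Pre-processing that widens the gap between the target and clutter score populations pushes these points toward the top-left corner (high detection at low false-alarm rate).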


SPIE's 1995 Symposium on OE/Aerospace Sensing and Dual Use Photonics | 1995

Learning to distinguish similar objects

Michael Seibert; Allen M. Waxman; Alan N. Gove

This paper describes how the similarities and differences among similar objects can be discovered during learning to facilitate recognition. The application domain is single views of flying model aircraft captured in silhouette by a CCD camera. The approach was motivated by human psychovisual and monkey neurophysiological data. The implementation uses neural net processing mechanisms to build a hierarchy that relates similar objects to superordinate classes, while simultaneously discovering the salient differences between objects within a class. Learning and recognition experiments both with and without the class similarity and difference learning show the effectiveness of the approach on this visual data. In testing, the hierarchical approach was compared to a non-hierarchical one and was found to improve the average percentage of correctly classified views from 77% to 84%.


Archive | 1996

Low-light-level imaging and image processing

Eugene D. Savoye; Allen M. Waxman; Robert K. Reich; Barry E. Burke; James A. Gregory; William H. McGonagle; Andrew H. Loomis; Bernard B. Kosicki; Robert W. Mountain; Alan N. Gove; David A. Fay; James E. Carrick


Neural Networks | 1997

Color night vision: opponent processing in the fusion of visible and IR imagery

Allen M. Waxman; Alan N. Gove; David A. Fay; Joseph P. Racamato; James E. Carrick; Michael Seibert; Eugene D. Savoye

Collaboration


Top co-authors of Alan N. Gove:

Allen M. Waxman (Massachusetts Institute of Technology)
David A. Fay (Massachusetts Institute of Technology)
James E. Carrick (Massachusetts Institute of Technology)
Eugene D. Savoye (Massachusetts Institute of Technology)
Joseph P. Racamato (Massachusetts Institute of Technology)
Michael Seibert (Massachusetts Institute of Technology)
Barry E. Burke (Massachusetts Institute of Technology)
Robert K. Reich (Massachusetts Institute of Technology)