Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Neil A. Bomberger is active.

Publication


Featured research published by Neil A. Bomberger.


International Conference on Information Fusion | 2002

Information fusion for image analysis: geospatial foundations for higher-level fusion

Allen M. Waxman; David A. Fay; Bradley J. Rhodes; Timothy S. McKenna; Richard T. Ivey; Neil A. Bomberger; Val K. Bykoski; Gail A. Carpenter

In support of the AFOSR program in Information Fusion, the CNS Technology Laboratory at Boston University is developing and applying neural models of image and signal processing, pattern learning and recognition, associative learning dynamics, and 3D visualization, to the domain of Information Fusion for Image Analysis in a geospatial context. Our research is focused by a challenge problem involving the emergence of a crisis in an urban environment, brought on by a terrorist attack or other man-made or natural disaster. We aim to develop methods aiding preparation and monitoring of the battlespace, deriving context from multiple sources of imagery (high-resolution visible and low-resolution hyperspectral) and signals (GMTI from moving vehicles, and ELINT from emitters). This context will serve as a foundation, in conjunction with existing knowledge nets, for exploring neural methods in higher level information fusion supporting situation assessment and creation of a common operating picture (COP).


Archive | 2009

Anomaly Detection & Behavior Prediction: Higher-Level Fusion Based on Computational Neuroscientific Principles

Bradley J. Rhodes; Neil A. Bomberger; Majid Zandipour; Lauren H. Stolzar; Denis Garagic; James R. Dankert; Michael Seibert

Higher-level fusion aims to enhance situational awareness and assessment (Endsley, 1995). Enhancing the understanding analysts/operators derive from fused information is a key objective. Modern systems are capable of fusing information from multiple sensors, often using inhomogeneous modalities, into a single, coherent kinematic track picture. Although this provides a self-consistent representation of considerable data, having hundreds, or possibly thousands, of moving elements depicted on a display does not make for ease of comprehension (even with the best possible human-computer interface design). Automated assistance for operators that supports ready identification of those elements most worthy of their attention is one approach for effectively leveraging lower-level fusion products. A straightforward, commonly employed method is to use rule-based motion analysis techniques. Pre-defined activity patterns can be detected and identified to operators. Detectable patterns range from simple trip-wire crossing or zone penetration to more sophisticated multi-element interactions, such as rendezvous. Despite having a degree of utility, rule-based methods do not provide a complete solution. The complexity of real-world situations arises from the myriad combinations of conditions and contexts that make development of thorough, all-encompassing sets of rules impossible. Furthermore, it is often the case that the events of interest and/or the conditions and contexts in which they are noteworthy can change at rates for which it is impractical to extend or modify large rule corpora. Also, pre-defined rules cannot assist operators interested in being able to determine whether any unusual activity is occurring in the track picture they are monitoring. Timely identification and assessment of anomalous activity within an area of interest is an increasingly important capability—one that falls under the enhanced situational awareness objective of higher-level fusion. A prerequisite for automatically notifying operators about the presence of anomalous activity is the capability to detect deviations from normal behavior. To do this, a model of normal behavior is required. It is impractical to consider a rule-based approach for achieving such a task, so an adaptive method, that is, a capability to learn what is normal in a scene, is required. This normalcy representation can then be used to assess new data in order to determine their degree of normalcy and provide notification when any significant deviation is detected.
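
The core idea above, learning a normalcy model and then scoring new data against it, can be sketched compactly. The toy example below is not the chapter's neural approach; the data, features, and Gaussian-mixture model are stand-ins chosen for brevity. It fits a density model to "normal" kinematic tracks and flags low-likelihood observations:

```python
# Minimal sketch of normalcy learning for anomaly detection (illustrative
# only; the chapter's actual models are neural, not Gaussian mixtures).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical training data: kinematic features (x, y, speed, heading)
# drawn from two "normal" traffic lanes.
lane_a = rng.normal([0.0, 0.0, 10.0, 0.0], [1.0, 1.0, 1.0, 0.1], size=(500, 4))
lane_b = rng.normal([5.0, 5.0, 12.0, 1.5], [1.0, 1.0, 1.0, 0.1], size=(500, 4))
normal_tracks = np.vstack([lane_a, lane_b])

# Learn a model of "normal" behavior from historical tracks.
model = GaussianMixture(n_components=2, random_state=0).fit(normal_tracks)

# Score new observations; flag those far below typical log-likelihood.
threshold = np.percentile(model.score_samples(normal_tracks), 1)
new_obs = np.array([[0.2, -0.1, 10.3, 0.05],   # ordinary
                    [2.5, 2.5, 40.0, 3.0]])    # fast, off-lane
flags = model.score_samples(new_obs) < threshold
print(flags)  # expect [False  True]
```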


Enhanced and Synthetic Vision 2004 | 2004

Multisensor image fusion and mining: learning targets across extended operating conditions

David A. Fay; Allen M. Waxman; Richard T. Ivey; Neil A. Bomberger; Marianne Chiarella

We have continued development of a system for multisensor image fusion and interactive mining based on neural models of color vision processing, learning and pattern recognition. We pioneered this work while at MIT Lincoln Laboratory, initially for color fused night vision (low-light visible and uncooled thermal imagery) and later extended it to multispectral IR and 3D ladar. We also developed a proof-of-concept system for EO, IR, SAR fusion and mining. Over the last year we have generalized this approach and developed a user-friendly system integrated into a COTS exploitation environment known as ERDAS Imagine. In this paper, we will summarize the approach and the neural networks used, and demonstrate fusion and interactive mining (i.e., target learning and search) of low-light Visible/SWIR/MWIR/LWIR night imagery, and IKONOS multispectral and high-resolution panchromatic imagery. In addition, we will demonstrate how target learning and search can be enabled over extended operating conditions by allowing training over multiple scenes. This will be illustrated for the detection of small boats in coastal waters using fused Visible/MWIR/LWIR imagery.
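
As a rough sketch of the opponent-contrast idea underlying this line of work (illustrative only; the actual system uses shunting neural dynamics and many additional processing stages, and every parameter below is invented), two co-registered bands can be fused by center-surround opponency and mapped into color channels:

```python
# Toy sketch of opponent-band image fusion via center-surround contrast
# (illustrative of the general idea only, not the authors' pipeline).
import numpy as np
from scipy.ndimage import gaussian_filter

def opponent_contrast(center_img, surround_img, sigma=3.0):
    """Center-surround opponency: band A center against band B surround."""
    center = gaussian_filter(center_img, sigma=1.0)
    surround = gaussian_filter(surround_img, sigma=sigma)
    # Normalized so the response stays bounded, loosely mimicking the
    # saturation behavior of shunting neural dynamics.
    return (center - surround) / (1.0 + center + surround)

# Hypothetical co-registered low-light visible and thermal (LWIR) frames.
rng = np.random.default_rng(1)
visible = rng.random((128, 128))
thermal = rng.random((128, 128))

# Map opponent responses into a false-color composite.
r = opponent_contrast(thermal, visible)   # warm targets pop in red
g = opponent_contrast(visible, thermal)   # visible detail in green
b = gaussian_filter(visible, 1.0)         # visible context in blue
fused = np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)
print(fused.shape)  # (128, 128, 3)
```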


Journal of Vision | 2005

The structure of cortical hypercolumns: Receptive field scatter may enhance rather than degrade boundary contour representation in V1

Neil A. Bomberger; Eric L. Schwartz

The spatial relationship of orientation mapping, ocularity, and receptive field (RF) position provides an operational definition of the term “hypercolumn” in V1. Optical recording suggests that pinwheel centers and blobs are spatially uncorrelated. However, error analysis indicates a 100–150 micron systematic pinwheel center positional offset. This analysis suggests that pinwheel singularities and cytochrome oxidase blobs in primate V1 may in fact be coterminous. The only model to date that accounts for this detailed spatial relationship of ocularity, orientation mapping, and RF position is the columnar shear model (Wood and Schwartz, Neural Networks, 12:205–210, 1999). Here, we generalize this model to include RF scatter, which is observed to be in the range of one third to one half of the local RF size. This model provides a computational basis to address the following question: How is the existence of RF scatter consistent with accurate edge localization? We show that scatter of about one half the average RF size can provide an accurate representation of region and edge structure in an image, based on a simple form of local inhibition between the blob (spatially low-pass) and interblob (spatially band-pass) neurons, resulting in a process equivalent to nonlinear diffusion. The advantages afforded by this mechanism for edge preservation and noise suppression are that it avoids the slowness of diffusion (where time is proportional to distance squared) and is fully consistent with a correct understanding of the structure of the cortical hypercolumn. We demonstrate the effectiveness of this algorithm, known in the computer vision literature as the offset filter (Fischl and Schwartz, IEEE PAMI 22:42–48, 1999), by providing results on natural images corrupted with noise. This work emphasizes the importance of an un-normalized, low-pass response for accurate edge representation—a function usually attributed to the intensity-normalized, band-pass response of extra-blob neurons. Abstract number 894. Supported by NIH/NIBIB EB001550.
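
A loose, hypothetical rendering of the offset-filter idea (sampling a smoothed image at offsets that move sample points away from edges, rather than iterating a diffusion PDE) might look like the following. The offset rule and all parameters here are simplified stand-ins, not the published algorithm:

```python
# Rough sketch of an offset-style edge-preserving filter: a loose
# approximation of the idea behind the offset filter, not a
# reimplementation of Fischl & Schwartz's method.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def offset_filter(img, sigma=1.5, step=2.0):
    smooth = gaussian_filter(img, sigma)
    gy, gx = np.gradient(smooth)
    mag = np.hypot(gx, gy)                 # edge-strength map
    # Offset each sample point away from nearby edges (down the gradient
    # of edge strength), so smoothing pulls values from region interiors
    # instead of mixing across the boundary.
    ey, ex = np.gradient(mag)
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    coords = [yy - step * ey / (np.abs(ey).max() + 1e-8),
              xx - step * ex / (np.abs(ex).max() + 1e-8)]
    return map_coordinates(smooth, coords, order=1, mode='nearest')

rng = np.random.default_rng(2)
# Synthetic step edge corrupted with noise.
img = np.zeros((64, 64))
img[:, 32:] = 1.0
noisy = img + 0.2 * rng.standard_normal(img.shape)
print(offset_filter(noisy).shape)  # (64, 64)
```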


Infrared Technology and Applications XXIX | 2003

Multisensor image fusion and mining in a COTS exploitation environment

David A. Fay; Richard T. Ivey; Neil A. Bomberger; Allen M. Waxman

We have continued development of a system for multisensor image fusion and interactive mining based on neural models of color vision processing, learning and pattern recognition. We pioneered this work while at MIT Lincoln Laboratory, initially for color fused night vision (low-light visible and uncooled thermal imagery) and later extended it to multispectral IR and 3D ladar. We also developed a proof-of-concept system for EO, IR, SAR fusion and mining. Over the last year we have generalized this approach and developed a user-friendly system integrated into a COTS exploitation environment known as ERDAS Imagine. In this paper, we will summarize the approach and the neural networks used, and demonstrate fusion and interactive mining (i.e., target learning and search) of low-light visible/SWIR/MWIR/LWIR night imagery, and IKONOS multispectral and high-resolution panchromatic imagery. In addition, we will demonstrate how target learning and search can be enabled over extended operating conditions by allowing training over multiple scenes. This will be illustrated for the detection of small boats in coastal waters using fused visible/MWIR/LWIR imagery.


Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications 2004 | 2004

Spiking neural networks for higher-level information fusion

Neil A. Bomberger; Allen M. Waxman; Felipe M. Pait

This paper presents a novel approach to higher-level (2+) information fusion and knowledge representation using semantic networks composed of coupled spiking neuron nodes. Networks of spiking neurons have been shown to exhibit synchronization, in which sub-assemblies of nodes become phase locked to one another. This phase locking reflects the tendency of biological neural systems to produce synchronized neural assemblies, which have been hypothesized to be involved in feature binding. The approach in this paper embeds spiking neurons in a semantic network, in which a synchronized sub-assembly of nodes represents a hypothesis about a situation. Likewise, multiple synchronized assemblies that are out-of-phase with one another represent multiple hypotheses. The initial network is hand-coded, but additional semantic relationships can be established by associative learning mechanisms. This approach is demonstrated with a simulated scenario involving the tracking of suspected criminal vehicles between meeting places in an urban environment.
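
The synchronization behavior described can be illustrated with Kuramoto phase oscillators, a much simpler stand-in for the paper's spiking neuron nodes: two strongly intra-coupled sub-assemblies phase-lock internally while remaining out of phase with each other, mirroring the multiple-hypothesis representation. All coupling values below are illustrative:

```python
# Toy illustration of assembly synchronization with Kuramoto phase
# oscillators (a stand-in for the paper's spiking neuron nodes, not
# the model it describes).
import numpy as np

rng = np.random.default_rng(3)
n = 10
theta = rng.uniform(0, 2 * np.pi, n)   # initial phases
omega = np.full(n, 1.0)                # identical natural frequencies

# Two sub-assemblies (nodes 0-4 and 5-9): strong coupling inside each
# assembly, weak repulsive coupling between assemblies.
K = np.full((n, n), -0.2)
K[:5, :5] = K[5:, 5:] = 2.0
np.fill_diagonal(K, 0.0)

dt = 0.01
for _ in range(5000):
    # Kuramoto update: dtheta_i/dt = omega_i + sum_j K_ij sin(theta_j - theta_i)
    theta += dt * (omega + (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1))

# Within-assembly phases converge; the two assemblies stay out of phase.
print(np.round(np.mod(theta, 2 * np.pi), 2))
```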


Applied Imagery Pattern Recognition Workshop | 2003

Multisensor & spectral image fusion & mining: from neural systems to applications

David A. Fay; Richard T. Ivey; Neil A. Bomberger; Allen M. Waxman

We have continued development of a system for multisensor image fusion and interactive mining based on neural models of color vision processing, learning and pattern recognition. We pioneered this work while at MIT Lincoln Laboratory, initially for color fused night vision (low-light visible and uncooled thermal imagery) and later extended it to multispectral IR and 3D ladar. We also developed a proof-of-concept system for EO, IR, SAR fusion and mining. Over the last year we have generalized this approach and developed a user-friendly system integrated into a COTS exploitation environment known as ERDAS Imagine. In this paper, we summarize the approach and the neural networks used, and demonstrate fusion and interactive mining (i.e., target learning and search) of low-light visible/SWIR/MWIR/LWIR night imagery, and IKONOS multispectral and high-resolution panchromatic imagery. In addition, we demonstrate how target learning and search can be enabled over extended operating conditions by allowing training over multiple scenes. This is illustrated for the detection of small boats in coastal waters using fused visible/MWIR/LWIR imagery.


IEEE Workshop on Advances in Techniques for Analysis of Remotely Sensed Data | 2003

Multisensor image fusion and mining: from neural systems to COTS software with application to remote sensing AFE

Marianne Chiarella; David A. Fay; Allen M. Waxman; Richard T. Ivey; Neil A. Bomberger

We summarize our methods for the fusion of multisensor/spectral imagery based on concepts derived from neural models of visual processing (adaptive contrast enhancement, opponent-color contrast, multi-scale contour completion, and multi-scale texture enhancement) and semi-supervised pattern learning and recognition. These methods have been applied to the problem of aided feature extraction (AFE) from remote sensing airborne multispectral and hyperspectral imaging systems, and space-based multi-platform multi-modality imaging sensors. The methods enable color fused 3D visualization, as well as interactive exploitation and data mining in the form of human-guided machine learning and search for objects, landcover, and cultural features. This technology has been evaluated on space-based imagery for the National Imagery and Mapping Agency, and real-time implementation has also been demonstrated for terrestrial fused-color night imaging. We have recently incorporated these methods into a commercial software platform (ERDAS Imagine) for imagery exploitation. We describe the approach and user interfaces, and show results for a variety of sensor systems with application to remote sensing feature extraction including EO/IR/MSI/SAR imagery from Landsat and Radarsat, multispectral Ikonos imagery, and Hyperion and HyMap hyperspectral imagery.
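
One of the listed preprocessing stages, adaptive contrast enhancement, can be sketched as an on-center/off-surround shunting operation. The steady-state form and parameters below are a simplified illustration, not the system's implementation:

```python
# Simplified center-surround shunting contrast enhancement: a toy version
# of one preprocessing stage described above (parameters are illustrative).
import numpy as np
from scipy.ndimage import gaussian_filter

def shunting_contrast(img, sigma_c=1.0, sigma_s=4.0, decay=0.1):
    """Steady state of a shunting on-center/off-surround network:
    x = (excitation - inhibition) / (decay + excitation + inhibition),
    which boosts local contrast while keeping the response bounded."""
    center = gaussian_filter(img, sigma_c)    # on-center excitation
    surround = gaussian_filter(img, sigma_s)  # off-surround inhibition
    return (center - surround) / (decay + center + surround)

rng = np.random.default_rng(4)
band = rng.random((256, 256))  # hypothetical single-band image in [0, 1]
enhanced = shunting_contrast(band)
print(enhanced.min(), enhanced.max())  # bounded, roughly zero-centered
```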


International Conference on Information Fusion | 2003

Image fusion & mining tools for a COTS environment

David A. Fay; Richard T. Ivey; Neil A. Bomberger; Allen M. Waxman


Archive | 2009

Information and Motion Pattern Learning and Analysis Using Neural Techniques

Brad Rhodes; Neil A. Bomberger; Majid Zandipour; Denis Garagic; James R. Dankert

Collaboration


Dive into Neil A. Bomberger's collaborations.

Top Co-Authors

Allen M. Waxman
Massachusetts Institute of Technology

David A. Fay
Massachusetts Institute of Technology

Michael Seibert
Massachusetts Institute of Technology