Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where James R. Williamson is active.

Publication


Featured research published by James R. Williamson.


Neural Networks | 1996

Gaussian ARTMAP: a neural network for fast incremental learning of noisy multidimensional maps

James R. Williamson

A new neural network architecture for incremental supervised learning of analog multidimensional maps is introduced. The architecture, called Gaussian ARTMAP, is a synthesis of a Gaussian classifier and an adaptive resonance theory (ART) neural network, achieved by defining the ART choice function as the discriminant function of a Gaussian classifier with separable distributions, and the ART match function as the same, but with the distributions normalized to a unit height. While Gaussian ARTMAP retains the attractive parallel computing and fast learning properties of fuzzy ARTMAP, it learns a more efficient internal representation of a mapping while being more resistant to noise than fuzzy ARTMAP on a number of benchmark databases. Several simulations are presented which demonstrate that Gaussian ARTMAP consistently obtains a better trade-off of classification rate to number of categories than fuzzy ARTMAP. Results on a vowel classification problem are also presented which demonstrate that Gaussian ARTMAP outperforms many other classifiers. Copyright 1996 Elsevier Science Ltd
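To make the choice/match distinction concrete, the following is a minimal Python sketch of per-category scoring under separable Gaussians, in the spirit of the abstract; the function name, parameter names, and toy values are illustrative assumptions, not taken from the paper.

import numpy as np

def gaussian_artmap_scores(x, mu, sigma, n):
    """Choice and match values for each category, sketching the
    separable-Gaussian discriminant described in the abstract.

    x     : (D,)   input vector
    mu    : (J, D) per-category means
    sigma : (J, D) per-category standard deviations
    n     : (J,)   per-category instance counts (priors)
    """
    # Squared Mahalanobis distance along each (separable) dimension.
    d2 = np.sum(((x - mu) / sigma) ** 2, axis=1)          # (J,)
    # Match: the same Gaussians normalized to unit height,
    # so only the exponential term remains.
    match = np.exp(-0.5 * d2)
    # Choice: the full discriminant, with prior and height terms.
    choice = (n / np.prod(sigma, axis=1)) * match
    return choice, match

# Toy usage: two categories in a 2-D feature space.
mu = np.array([[0.2, 0.8], [0.7, 0.3]])
sigma = np.full((2, 2), 0.1)
n = np.array([5.0, 3.0])
choice, match = gaussian_artmap_scores(np.array([0.25, 0.75]), mu, sigma, n)
winner = np.argmax(choice)   # ART choice: most active category
# A vigilance test on the winner's match value would then gate learning.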


Neural Networks | 1995

Synthetic aperture radar processing by a multiple scale neural system for boundary and surface representation

Stephen Grossberg; Ennio Mingolla; James R. Williamson

A neural network model of boundary segmentation and surface representation is developed to process images containing range data gathered by a synthetic aperture radar (SAR) sensor. The boundary and surface processing are accomplished by an improved Boundary Contour System (BCS) and Feature Contour System (FCS), respectively, that have been derived from analyses of perceptual and neurobiological data. BCS/FCS processing makes structures such as motor vehicles, roads, and buildings more salient and interpretable to human observers than they are in the original imagery. Early processing by ON cells and OFF cells embedded in shunting center-surround networks models preprocessing by the lateral geniculate nucleus (LGN). Such preprocessing compensates for illumination gradients, normalizes input dynamic range, and extracts local ratio contrasts. ON cell and OFF cell outputs are combined in the BCS to define oriented filters that model cortical simple cells. Pooling ON and OFF outputs at simple cells overcomes complementary processing deficiencies of each cell type along concave and convex contours, and enhances simple cell sensitivity to image edges. Oriented filter outputs are rectified and outputs sensitive to opposite contrast polarities are pooled to define complex cells. The complex cells output to stages of short-range spatial competition (or endstopping) and orientational competition among hypercomplex cells. Hypercomplex cells activate long-range cooperative bipole cells that begin to group image boundaries. Nonlinear feedback between bipole cells and hypercomplex cells segments image regions by cooperatively completing and regularizing the most favored boundaries while suppressing image noise and weaker boundary groupings. Boundary segmentation is performed by three copies of the BCS at small, medium, and large filter scales, whose subsequent interaction distances covary with the size of the filter. Filling-in of multiple surface representations occurs within the FCS at each scale via a boundary-gated diffusion process. Diffusion is activated by the normalized LGN ON and OFF outputs within ON and OFF filling-in domains. Diffusion is restricted to the regions defined by gating signals from the corresponding BCS boundary segmentation. The filled-in opponent ON and OFF signals are subtracted to form double opponent surface representations. These surface representations are shown to be sensitive to both image ratio contrasts and background luminance. The three scales of surface representation are then added to yield a final multiple-scale output. The BCS and FCS are shown to perform favorably in comparison to several other techniques for speckle removal.
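The shunting center-surround preprocessing stage can be sketched at equilibrium in a few lines of Python. This is a minimal illustration of the ON/OFF normalization idea, assuming Gaussian center and surround kernels and illustrative parameter values; it is not the paper's exact model.

import numpy as np
from scipy.ndimage import gaussian_filter

def on_off_cells(image, sigma_c=1.0, sigma_s=3.0, A=1.0, B=1.0, D=1.0):
    """Equilibrium ON and OFF responses of a shunting center-surround
    network, sketching the LGN-style preprocessing in the abstract."""
    C = gaussian_filter(image, sigma_c)   # narrow excitatory center
    S = gaussian_filter(image, sigma_s)   # broad inhibitory surround
    # Shunting dynamics dx/dt = -A*x + (B - x)*C - (x + D)*S at steady
    # state give a divisively normalized, ratio-contrast response.
    x = (B * C - D * S) / (A + C + S)
    on = np.maximum(x, 0.0)               # half-wave rectified ON channel
    off = np.maximum(-x, 0.0)             # OFF channel (opposite polarity)
    return on, off

# Toy usage on a synthetic image with an illumination gradient.
img = np.linspace(0.0, 1.0, 64)[None, :] * np.ones((64, 1))
img[24:40, 24:40] += 0.5                  # a brighter patch
on, off = on_off_cells(img)

The divisive term in the denominator is what discounts the illumination gradient: responses depend on local contrast ratios rather than absolute luminance.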


Vision Research | 1999

A self-organizing neural system for learning to recognize textured scenes

Stephen Grossberg; James R. Williamson

A self-organizing ARTEX model is developed to categorize and classify textured image regions. ARTEX specializes the FACADE model of how the visual cortex sees, and the ART model of how temporal and prefrontal cortices interact with the hippocampal system to learn visual recognition categories and their names. FACADE processing generates a vector of boundary and surface properties, notably texture and brightness properties, by utilizing multi-scale filtering, competition, and diffusive filling-in. Its context-sensitive local measures of textured scenes can be used to recognize scenic properties that gradually change across space, as well as abrupt texture boundaries. ART incrementally learns recognition categories that classify FACADE output vectors, class names of these categories, and their probabilities. Top-down expectations within ART encode learned prototypes that pay attention to expected visual features. When novel visual information creates a poor match with the best existing category prototype, a memory search selects a new category with which to classify the novel data. ARTEX is compared with psychophysical data, and is benchmarked on classification of natural textures and synthetic aperture radar images. It outperforms state-of-the-art systems that use rule-based, backpropagation, and K-nearest neighbor classifiers.
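The memory-search step the abstract mentions has a simple generic form in ART networks. Below is a minimal Python sketch of that search loop, assuming precomputed choice and match scores (as in the Gaussian ARTMAP sketch above); the vigilance value and names are illustrative.

import numpy as np

def art_search(match, choice, vigilance=0.8):
    """Sketch of ART memory search: categories are tried in order of
    choice value, and the first one whose match exceeds the vigilance
    threshold resonates; if none does, a new category is needed."""
    for j in np.argsort(choice)[::-1]:    # best choice first
        if match[j] >= vigilance:         # resonance: accept category j
            return j
    return None                           # poor match everywhere: allocate new

# Usage with made-up scores for three existing categories.
j = art_search(np.array([0.6, 0.9, 0.4]), np.array([2.0, 1.5, 0.3]))
# j == 1 here; a None return would trigger creation of a new category.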


Neural Computation | 2001

Self-Organization of Topographic Mixture Networks Using Attentional Feedback

James R. Williamson

This article proposes a neural network model of supervised learning that employs the biologically motivated constraints of local, on-line, constructive learning. The model possesses two novel learning mechanisms. The first is a network for learning topographic mixtures. The network's internal category nodes are the mixture components, which learn to encode smooth distributions in the input space by taking advantage of topography in the input feature maps. The second mechanism is an attentional biasing feedback circuit. When the network makes an incorrect output prediction, this feedback circuit modulates the learning rates of the category nodes, by amounts based on the sharpness of their tuning, in order to improve the network's prediction accuracy. The network is evaluated on several standard classification benchmarks and shown to perform well in comparison to other classifiers.
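As a rough illustration of the attentional biasing idea, the Python sketch below scales per-category learning rates by tuning sharpness after an incorrect prediction. The sharpness measure and scaling rule here are crude stand-ins; the paper's actual feedback circuit differs.

import numpy as np

def attentional_learning_rates(base_rate, sigma, wrong_prediction):
    """Sketch: on a wrong output prediction, modulate each category's
    learning rate by its tuning sharpness (here crudely summarized as
    the inverse mean tuning width; an illustrative assumption)."""
    sharpness = 1.0 / np.mean(sigma, axis=1)      # narrower tuning = sharper
    if not wrong_prediction:
        return np.full(len(sigma), base_rate)
    # Bias learning toward sharply tuned categories, so corrections
    # fall where the network was most committed.
    return base_rate * sharpness / sharpness.max()

rates = attentional_learning_rates(0.1, np.array([[0.05, 0.1], [0.3, 0.2]]), True)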


Neural Computation | 1997

A constructive, incremental-learning network for mixture modeling and classification

James R. Williamson

Gaussian ARTMAP (GAM) is a supervised-learning adaptive resonance theory (ART) network that uses gaussian-defined receptive fields. Like other ART networks, GAM incrementally learns and constructs a representation of sufficient complexity to solve a problem it is trained on. GAM's representation is a gaussian mixture model of the input space, with learned mappings from the mixture components to output classes. We show a close relationship between GAM and the well-known expectation-maximization (EM) approach to mixture modeling. GAM outperforms an EM classification algorithm on three classification benchmarks, thereby demonstrating the advantage of the ART match criterion for regulating learning and the ARTMAP match tracking operation for incorporating environmental feedback in supervised learning situations.
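For comparison with the GAM scoring sketched earlier, here is the standard EM E-step for a diagonal (separable) Gaussian mixture in Python; variable names and the toy values are illustrative.

import numpy as np

def em_responsibilities(x, mu, sigma, pi):
    """E-step posteriors for a diagonal-covariance Gaussian mixture,
    the standard EM quantity the abstract relates GAM to."""
    # Per-component log density, up to the shared (2*pi)^(D/2) constant.
    log_p = (np.log(pi)
             - np.sum(np.log(sigma), axis=1)
             - 0.5 * np.sum(((x - mu) / sigma) ** 2, axis=1))
    p = np.exp(log_p - log_p.max())       # numerically stabilized softmax
    return p / p.sum()                    # responsibilities sum to 1

r = em_responsibilities(np.array([0.25, 0.75]),
                        np.array([[0.2, 0.8], [0.7, 0.3]]),
                        np.full((2, 2), 0.1),
                        np.array([0.6, 0.4]))

Note that the GAM choice function in the earlier sketch is proportional to this unnormalized numerator (with instance counts in place of mixing weights), whereas EM normalizes across components; the ART match criterion and match tracking are what GAM adds beyond EM.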


International Symposium on Neural Networks | 1992

Processing of synthetic aperture radar images by the boundary contour system and feature contour system

Dan Cruthirds; Alan N. Gove; Stephen Grossberg; Ennio Mingolla; Nicholas Nowak; James R. Williamson

An improved boundary contour system (BCS) and feature contour system (FCS) neural network model of preattentive vision was applied to two large images containing range data gathered by a synthetic aperture radar sensor. Early processing by shunting center-surround networks compresses signal dynamic range and performs local contrast enhancement. Subsequent processing by filters sensitive to oriented contrast, including short-range competition and long-range cooperation, segments the image into regions. Finally, a diffusive filling-in operation within the segmented regions produces coherent visible structures. The combination of BCS and FCS capitalizes on the form-sensitive operations of a neural network model to detect and enhance structure based on information over large, variably sized and variably shaped regions of the image.
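The diffusive filling-in operation can be sketched as boundary-gated neighbor exchange on a pixel grid. The Python below is a minimal illustration; the permeability gating form, step count, and rate are assumptions, not the paper's equations.

import numpy as np

def fill_in(feature, boundary, steps=200, rate=0.2):
    """Sketch of boundary-gated diffusive filling-in: feature signals
    spread between neighboring pixels, with permeability shut down
    where boundary signals are strong."""
    s = feature.copy()
    perm = 1.0 / (1.0 + 10.0 * boundary)  # strong boundary -> low permeability
    for _ in range(steps):
        # Average permeability on each edge gates the neighbor exchange.
        for shift in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            nb = np.roll(s, shift, axis=(0, 1))
            p = 0.5 * (perm + np.roll(perm, shift, axis=(0, 1)))
            s += rate * p * (nb - s) / 4.0
    return s

bnd = np.zeros((32, 32)); bnd[:, 16] = 1.0     # a vertical boundary
feat = np.zeros((32, 32)); feat[:, :4] = 1.0   # feature input on the left
filled = fill_in(feat, bnd)                    # spreads up to the boundary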


Neural Computation | 1996

Neural network for dynamic binding with graph representation: Form, linking, and depth-from-occlusion

James R. Williamson

A neural network is presented that explicitly represents form attributes and relations between them, thus solving the binding problem without temporal coding. Rather, the network creates a graph representation by dynamically allocating nodes to code local form attributes and establishing arcs to link them. With this representation, the network selectively groups and segments objects in depth based on line junction information, producing results consistent with those of several recent visual search experiments. In addition to depth-from-occlusion, the network provides a sufficient framework for local line-labeling processes to recover other three-dimensional (3-D) variables, such as edge/surface contiguity, edge slant, and edge convexity.
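The graph representation itself is easy to picture as a data structure: nodes for local form attributes, arcs for the bindings between them. The Python sketch below shows that structure only; all names and labels are illustrative, and the paper's network allocates and links nodes through neural dynamics rather than explicit calls.

from dataclasses import dataclass, field

@dataclass
class FormNode:
    """A dynamically allocated node coding a local form attribute
    (e.g., an oriented edge or a line junction) at an image location."""
    attribute: str
    position: tuple
    links: list = field(default_factory=list)   # arcs to related nodes

def link(a: FormNode, b: FormNode, relation: str):
    """Establish an arc binding two attributes, e.g., edges meeting at
    a T-junction that signals occlusion (labels are illustrative)."""
    a.links.append((relation, b))
    b.links.append((relation, a))

# Toy usage: a T-junction relates an occluding edge to an occluded one.
top = FormNode("edge", (10, 5))
stem = FormNode("edge", (10, 8))
link(top, stem, "t_junction")   # binding without temporal coding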


Cerebral Cortex | 2001

A Neural Model of how Horizontal and Interlaminar Connections of Visual Cortex Develop into Adult Circuits that Carry Out Perceptual Grouping and Learning

Stephen Grossberg; James R. Williamson


Archive | 1996

A Self-Organizing System for Classifying Complex Images: Natural Textures and Synthetic Aperture Radar

Stephen Grossberg; James R. Williamson


Archive | 1996

Neural networks for image processing, classification, and understanding

James R. Williamson

Collaboration


Dive into James R. Williamson's collaborations.

Top Co-Authors

Alan N. Gove

Massachusetts Institute of Technology
