Publication


Featured research published by Gail A. Carpenter.


IEEE Transactions on Neural Networks | 1992

Fuzzy ARTMAP: A neural network architecture for incremental supervised learning of analog multidimensional maps

Gail A. Carpenter; Stephen Grossberg; Natalya Markuzon; John H. Reynolds; David B. Rosen

A neural network architecture is introduced for incremental supervised learning of recognition categories and multidimensional maps in response to arbitrary sequences of analog or binary input vectors, which may represent fuzzy or crisp sets of features. The architecture, called fuzzy ARTMAP, achieves a synthesis of fuzzy logic and adaptive resonance theory (ART) neural networks by exploiting a close formal similarity between the computations of fuzzy subsethood and ART category choice, resonance, and learning. Four classes of simulations illustrate fuzzy ARTMAP performance in relation to benchmark backpropagation and genetic algorithm systems. These simulations include finding points inside versus outside a circle, learning to tell two spirals apart, incremental approximation of a piecewise-continuous function, and a letter recognition database. The fuzzy ARTMAP system is also compared with Salzberg's NGE system and with Simpson's FMMC system.
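
To make the core computations concrete, here is a minimal sketch of a single fuzzy ART module, the unsupervised building block of fuzzy ARTMAP, using the standard choice, match, and fast-learning rules; the parameter names (alpha, beta, rho) follow common usage in the fuzzy ART literature, and the helper functions are illustrative rather than taken from the paper.

    import numpy as np

    def complement_code(a):
        """Complement coding: input a in [0,1]^d becomes (a, 1 - a)."""
        return np.concatenate([a, 1.0 - a])

    def fuzzy_and(x, w):
        """Fuzzy AND (componentwise minimum), the fuzzy-subsethood operation."""
        return np.minimum(x, w)

    def fuzzy_art_step(I, weights, rho=0.75, alpha=0.001, beta=1.0):
        """Present one (complement-coded) input I to a fuzzy ART module.

        weights: list of category weight vectors, each the same length as I.
        Returns the index of the resonating category, committing a new one
        if no existing category passes the vigilance test.
        """
        # Category choice: T_j = |I ^ w_j| / (alpha + |w_j|), with |.| the L1 norm
        T = [fuzzy_and(I, w).sum() / (alpha + w.sum()) for w in weights]
        for j in np.argsort(T)[::-1]:          # search in order of choice value
            match = fuzzy_and(I, weights[j]).sum() / I.sum()
            if match >= rho:                    # resonance: vigilance test passed
                # Learning: w_j <- beta * (I ^ w_j) + (1 - beta) * w_j
                weights[j] = beta * fuzzy_and(I, weights[j]) + (1 - beta) * weights[j]
                return j
        weights.append(I.copy())                # no category matched: commit a new node
        return len(weights) - 1

In fuzzy ARTMAP proper, two such modules (ARTa and ARTb) are linked through a map field; the sketch above covers only the single-module computations the abstract describes.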


Applied Optics | 1987

ART 2: Self-organization of stable category recognition codes for analog input patterns

Gail A. Carpenter; Stephen Grossberg

Adaptive resonance architectures are neural networks that self-organize stable pattern recognition codes in real time in response to arbitrary sequences of input patterns. This article introduces ART 2, a class of adaptive resonance architectures which rapidly self-organize pattern recognition categories in response to arbitrary sequences of either analog or binary input patterns. In order to cope with arbitrary sequences of analog input patterns, ART 2 architectures embody solutions to a number of design principles, such as the stability-plasticity tradeoff, the search-direct access tradeoff, and the match-reset tradeoff. In these architectures, top-down learned expectation and matching mechanisms are critical in self-stabilizing the code learning process. A parallel search scheme updates itself adaptively as the learning process unfolds, and realizes a form of real-time hypothesis discovery, testing, learning, and recognition. After learning self-stabilizes, the search process is automatically disengaged. Thereafter input patterns directly access their recognition codes without any search. Thus recognition time for familiar inputs does not increase with the complexity of the learned code. A novel input pattern can directly access a category if it shares invariant properties with the set of familiar exemplars of that category. A parameter called the attentional vigilance parameter determines how fine the categories will be. If vigilance increases (decreases) due to environmental feedback, then the system automatically searches for and learns finer (coarser) recognition categories. Gain control parameters enable the architecture to suppress noise up to a prescribed level. The architecture's global design enables it to learn effectively despite the high degree of nonlinearity of such mechanisms.
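
Since the abstract's central computational claim is that the vigilance parameter sets category granularity, a quick way to see the effect is to run the fuzzy ART sketch given earlier at two vigilance levels; this usage example assumes the complement_code and fuzzy_art_step functions defined above and uses made-up random data.

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.random((200, 4))                 # 200 random analog patterns in [0,1]^4

    for rho in (0.5, 0.9):                      # coarse vs. fine vigilance
        weights = []
        for a in data:
            fuzzy_art_step(complement_code(a), weights, rho=rho)
        print(f"vigilance {rho}: {len(weights)} categories")

Higher vigilance forces a tighter match at each category, so the network commits more and finer categories, which is the finer/coarser behavior the abstract describes.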


Computer Vision, Graphics, and Image Processing | 1987

A massively parallel architecture for a self-organizing neural pattern recognition machine

Gail A. Carpenter; Stephen Grossberg

A neural network architecture for the learning of recognition categories is derived. Real-time network dynamics are completely characterized through mathematical analysis and computer simulations. The architecture self-organizes and self-stabilizes its recognition codes in response to arbitrary orderings of arbitrarily many and arbitrarily complex binary input patterns. Top-down attentional and matching mechanisms are critical in self-stabilizing the code learning process. The architecture embodies a parallel search scheme which updates itself adaptively as the learning process unfolds. After learning self-stabilizes, the search process is automatically disengaged. Thereafter input patterns directly access their recognition codes without any search. Thus recognition time does not grow as a function of code complexity. A novel input pattern can directly access a category if it shares invariant properties with the set of familiar exemplars of that category. These invariant properties emerge in the form of learned critical feature patterns, or prototypes. The architecture possesses a context-sensitive self-scaling property which enables its emergent critical feature patterns to form. They detect and remember statistically predictive configurations of featural elements which are derived from the set of all input patterns that are ever experienced. Four types of attentional process (priming, gain control, vigilance, and intermodal competition) are mechanistically characterized. Top-down priming and gain control are needed for code matching and self-stabilization. Attentional vigilance determines how fine the learned categories will be. If vigilance increases due to an environmental disconfirmation, then the system automatically searches for and learns finer recognition categories. A new nonlinear matching law (the 2/3 Rule) and new nonlinear associative laws (the Weber Law Rule, the Associative Decay Rule, and the Template Learning Rule) are needed to achieve these properties. All the rules describe emergent properties of parallel network interactions. The architecture circumvents the noise, saturation, capacity, orthogonality, and linear predictability constraints that limit the codes which can be stably learned by alternative recognition models.
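
The named rules are easy to state concretely for binary inputs. In the sketch below (same style and caveats as the earlier fuzzy ART sketch, with illustrative parameter names), the Weber Law Rule appears in the denominator of the choice function, the 2/3 Rule reduces to intersection-based matching, and the Template Learning Rule with Associative Decay prunes each template to its intersection with the input.

    import numpy as np

    def art1_step(I, templates, rho=0.8, beta=0.5):
        """Present one binary (0/1) input vector I to an ART 1 module.

        templates: list of binary critical feature patterns (prototypes).
        """
        # Weber Law Rule: choice T_j = |I & w_j| / (beta + |w_j|)
        T = [np.logical_and(I, w).sum() / (beta + w.sum()) for w in templates]
        for j in np.argsort(T)[::-1]:           # search in order of choice value
            w = templates[j]
            # 2/3 Rule (binary case): fraction of I confirmed by the template
            if np.logical_and(I, w).sum() / I.sum() >= rho:
                # Template Learning Rule: w_j <- I & w_j (fast learning)
                templates[j] = np.logical_and(I, w).astype(int)
                return j
        templates.append(I.copy())              # commit a new critical feature pattern
        return len(templates) - 1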


Neural Networks | 1991

ARTMAP: Supervised real-time learning and classification of nonstationary data by a self-organizing neural network

Gail A. Carpenter; Stephen Grossberg; John H. Reynolds

This article introduces a new neural network architecture, called ARTMAP, that autonomously learns to classify arbitrarily many, arbitrarily ordered vectors into recognition categories based on predictive success. This supervised learning system is built up from a pair of Adaptive Resonance Theory modules (ARTa and ARTb) that are capable of self-organizing stable recognition categories in response to arbitrary sequences of input patterns. During training trials, the ARTa module receives a stream {a(p)} of input patterns, and ARTb receives a stream {b(p)} of input patterns, where b(p) is the correct prediction given a(p). These ART modules are linked by an associative learning network and an internal controller that ensures autonomous system operation in real time. During test trials, the remaining patterns a(p) are presented without b(p), and their predictions at ARTb are compared with b(p). Tested on a benchmark machine learning database in both on-line and off-line simulations, the ARTMAP system learns orders of magnitude more quickly, efficiently, and accurately than alternative algorithms, and achieves 100% accuracy after training on less than half the input patterns in the database. It achieves these properties by using an internal controller that conjointly maximizes predictive generalization and minimizes predictive error by linking predictive success to category size on a trial-by-trial basis, using only local operations. This computation increases the vigilance parameter ρa of ARTa by the minimal amount needed to correct a predictive error at ARTb. Parameter ρa calibrates the minimum confidence that ARTa must have in a category, or hypothesis, activated by an input a(p) in order for ARTa to accept that category, rather than search for a better one through an automatically controlled process of hypothesis testing. Parameter ρa is compared with the degree of match between a(p) and the top-down learned expectation, or prototype, that is read out subsequent to activation of an ARTa category. Search occurs if the degree of match is less than ρa. ARTMAP is hereby a type of self-organizing expert system that calibrates the selectivity of its hypotheses based upon predictive success. As a result, rare but important events can be quickly and sharply distinguished even if they are similar to frequent events with different consequences. Between input trials ρa relaxes to a baseline vigilance ρ̄a. When ρ̄a is large, the system runs in a conservative mode, wherein predictions are made only if the system is confident of the outcome. Very few false-alarm errors then occur at any stage of learning, yet the system reaches asymptote with no loss of speed. Because ARTMAP learning is self-stabilizing, it can continue learning one or more databases, without degrading its corpus of memories, until its full memory capacity is utilized.
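
The match-tracking step, by which a predictive error at ARTb raises ARTa vigilance just enough to force a new hypothesis, can be sketched as follows; the function name, eps, and the scalar interface are illustrative placeholders rather than the paper's notation.

    def match_tracking(match_a, rho_baseline, predicted_ok, eps=1e-6):
        """Return the ARTa vigilance to use for the remainder of this trial.

        match_a:      degree of match between input a(p) and the chosen ARTa
                      category's learned expectation, a value in [0, 1].
        rho_baseline: baseline vigilance that rho_a relaxes to between trials.
        predicted_ok: True if the prediction at ARTb matched b(p).
        """
        if predicted_ok:
            return rho_baseline
        # Predictive error: raise vigilance minimally above the current match,
        # so the active ARTa category fails the vigilance test and search resumes.
        return match_a + eps

Raising ρa only to match_a + eps, rather than to some fixed high value, is what lets the controller conjointly minimize predictive error and maximize generalization (category size).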


Psyccritiques | 1991

Pattern Recognition by Self-Organizing Neural Networks

Gail A. Carpenter; Stephen Grossberg

From the Publisher: Pattern Recognition by Self-Organizing Neural Networks presents the most recent advances in an area of research that is becoming vitally important in the fields of cognitive science, neuroscience, artificial intelligence, and neural networks in general. The 20 articles take up developments in competitive learning and computational maps, adaptive resonance theory, and specialized architectures and biological connections.


Neural Networks | 1990

ART 3: Hierarchical search using chemical transmitters in self-organizing pattern recognition architectures

Gail A. Carpenter; Stephen Grossberg

A model to implement parallel search of compressed or distributed pattern recognition codes in a neural network hierarchy is introduced. The search process functions well with either fast learning or slow learning, and can robustly cope with sequences of asynchronous input patterns in real time. The search process emerges when computational properties of the chemical synapse, such as transmitter accumulation, release, inactivation, and modulation, are embedded within an Adaptive Resonance Theory architecture called ART 3. Formal analogs of ions such as Na+ and Ca2+ control nonlinear feedback interactions that enable presynaptic transmitter dynamics to model the postsynaptic short-term memory representation of a pattern recognition code. Reinforcement feedback can modulate the search process by altering the ART 3 vigilance parameter or directly engaging the search mechanism. The search process is a form of hypothesis testing capable of discovering appropriate representations of a nonstationary input environment.
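
To illustrate the kind of synaptic computation the abstract refers to, the sketch below implements a generic habituative transmitter gate of the sort used throughout Adaptive Resonance Theory models: transmitter accumulates toward a ceiling and is inactivated by release. The specific equation, constants, and names are a textbook-style illustration, not the published ART 3 equations.

    def transmitter_step(z, S, dt=0.01, accumulate=0.1, inactivate=1.0):
        """One Euler step for a habituative transmitter gate.

        z: available transmitter in [0, 1];  S: presynaptic signal (>= 0).
        dz/dt = accumulate * (1 - z) - inactivate * S * z
        """
        z = z + dt * (accumulate * (1.0 - z) - inactivate * S * z)
        gated = S * z      # the signal actually transmitted downstream
        return z, gated

Under a sustained signal S, the gated output S * z is transiently large and then habituates as z depletes, which is the property that lets a mismatched category node yield to other nodes during hierarchical search.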


International Symposium on Neural Networks | 1991

ART 2-A: an adaptive resonance algorithm for rapid category learning and recognition

Gail A. Carpenter; Stephen Grossberg; David B. Rosen

The authors introduce ART 2-A, an efficient algorithm that emulates the self-organizing pattern recognition and hypothesis testing properties of the ART 2 neural network architecture, but at a speed two to three orders of magnitude faster. Analysis and simulation show how the ART 2-A systems correspond to ART 2 dynamics both at the fast-learn limit and at intermediate learning rates. Intermediate learning rates permit fast commitment of category nodes but slow recoding, analogous to properties of word frequency effects, encoding specificity effects, and episodic memory. Better noise tolerance is achieved without a loss of learning stability. The speed of ART 2-A makes practical the use of ART 2 modules in large-scale neural computation.


Encyclopedia of Cognitive Science | 2006

Adaptive Resonance Theory

Gail A. Carpenter; Stephen Grossberg

Adaptive resonance theory is a cognitive and neural theory about how the brain develops and learns to recognize and recall objects and events throughout life. It shows how processes of learning, categorization, expectation, attention, resonance, synchronization, and memory search interact to enable the brain to learn quickly and to retain its memories stably, while explaining many data about perception, cognition, learning, memory, and consciousness. Keywords: adaptive resonance; recognition; learning; categorization; amnesia


Neural Networks | 1991

ART 2-A: an adaptive resonance algorithm for rapid category learning and recognition

Gail A. Carpenter; Stephen Grossberg; David B. Rosen

This article introduces Adaptive Resonance Theory 2-A (ART 2-A), an efficient algorithm that emulates the self-organizing pattern recognition and hypothesis testing properties of the ART 2 neural network architecture, but at a speed two to three orders of magnitude faster. Analysis and simulations show how the ART 2-A systems correspond to ART 2 dynamics both at the fast-learn limit and at intermediate learning rates. Intermediate learning rates permit fast commitment of category nodes but slow recoding, analogous to properties of word frequency effects, encoding specificity effects, and episodic memory. Better noise tolerance is hereby achieved without a loss of learning stability. The ART 2 and ART 2-A systems are contrasted with the leader algorithm. The speed of ART 2-A makes practical the use of ART 2 modules in large-scale neural computation.
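
The source of the speedup is that ART 2-A replaces the ART 2 differential equations with closed-form algebra on normalized vectors. The sketch below shows the flavor of that algebra (dot-product choice for a committed node, and a convex-combination update followed by renormalization); the parameter name eta is illustrative, and this is not a full transcription of the published ART 2-A equations.

    import numpy as np

    def normalize(x):
        n = np.linalg.norm(x)
        return x / n if n > 0 else x

    def art2a_update(I, w, eta=0.1):
        """ART 2-A-style update for a committed category node.

        I: unit-normalized input; w: unit-normalized category weight vector.
        eta = 1 recovers the fast-learn limit (w jumps to I's direction);
        small eta gives the slow recoding of already-committed categories
        that underlies the word frequency analogy in the abstract.
        """
        choice = float(np.dot(I, w))            # choice value for a committed node
        w_new = normalize(eta * I + (1.0 - eta) * w)
        return w_new, choice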


Neural Networks | 1989

Neural network models for pattern recognition and associative memory

Gail A. Carpenter

This review outlines some fundamental neural network modules for associative memory, pattern recognition, and category learning. Included are discussions of the McCulloch-Pitts neuron, perceptrons, adaline and madaline, back propagation, the learning matrix, linear associative memory, embedding fields, instars and outstars, the avalanche, shunting competitive networks, competitive learning, computational mapping by instar/outstar families, adaptive resonance theory, the cognitron and neocognitron, and simulated annealing. Adaptive filter formalism provides a unified notation. Activation laws include additive and shunting equations. Learning laws include back-coupled error correction, Hebbian learning, and gated instar and outstar equations. Also included are discussions of real-time and off-line modeling, stable and unstable coding, supervised and unsupervised learning, and self-organization.
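
Of the learning laws the review catalogs, the gated instar and outstar equations are simple enough to state inline; the sketch below gives their standard discrete-time forms with illustrative variable names (w, x, y may be scalars or NumPy arrays), following the common textbook statement rather than any one paper's notation.

    def instar_step(w, x, y, lr=0.1):
        """Gated instar: the weights converging on a node track the input
        pattern x, with learning gated by the postsynaptic activity y."""
        return w + lr * y * (x - w)

    def outstar_step(w, x_src, y, lr=0.1):
        """Gated outstar: the weights fanning out from a source node track
        the border activity pattern y, gated by the source activity x_src."""
        return w + lr * x_src * (y - w)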
