
Publications


Featured research published by Gary Bradski.


Biological Cybernetics | 1994

STORE working memory networks for storage and recall of arbitrary temporal sequences

Gary Bradski; Gail A. Carpenter; Stephen Grossberg

Neural network models of working memory, called “sustained temporal order recurrent” (STORE) models, are described. They encode the invariant temporal order of sequential events in short-term memory (STM) in a way that mimics cognitive data about working memory, including primacy, recency, and bowed order and error gradients. As new items are presented, the pattern of previously stored items remains invariant in the sense that relative activations remain constant through time. This invariant temporal order code enables all possible groupings of sequential events to be stably learned and remembered in real time, even as new events perturb the system. Such competence is needed to design self-organizing temporal recognition and planning systems in which any subsequence of events may need to be categorized in order to control and predict future behavior or external events. STORE models show how arbitrary event sequences may be invariantly stored, including repeated events. A preprocessor interacts with the working memory to represent event repeats in spatially separate locations. It is shown why at least two processing levels are needed to invariantly store events presented with variable durations and interstimulus intervals. It is also shown how network parameters control the type and shape of primacy, recency, or bowed temporal order gradients that will be stored.
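
To make the invariance idea concrete, here is a minimal discrete-time sketch in Python. It is not the published STORE differential equations; the parameter omega and the unit storage strength are illustrative assumptions. Each arriving event multiplies all previously stored activities by the same factor, so their ratios (the temporal-order code) never change, and the sign of the stored gradient is set by whether omega is above or below one.

```python
# Minimal sketch of the invariance principle behind STORE working memories.
# NOT the published STORE equations: a discrete caricature with one assumed
# parameter `omega`. A new item is stored with unit strength and every
# previously stored activity is multiplied by omega, so the RATIOS of stored
# activities (the temporal-order code) stay fixed as new events arrive.
#   omega > 1 -> primacy gradient (earlier items strongest)
#   omega < 1 -> recency gradient (later items strongest)
#   omega = 1 -> flat gradient
# The bowed gradients of the full two-level model are not reproduced here.

def present(memory, omega):
    """Store one new event while preserving relative order information."""
    return [omega * x for x in memory] + [1.0]

def normalize(memory):
    """Optional readout normalization; scaling by a constant keeps ratios."""
    total = sum(memory)
    return [x / total for x in memory]

seq = "ABCD"
memory = []
for _ in seq:
    memory = present(memory, omega=1.3)   # primacy: A > B > C > D
print(dict(zip(seq, normalize(memory))))
```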


Neural Computation | 1992

Working memory networks for learning temporal order with application to three-dimensional visual object recognition

Gary Bradski; Gail A. Carpenter; Stephen Grossberg

Working memory neural networks, called Sustained Temporal Order REcurrent (STORE) models, encode the invariant temporal order of sequential events in short-term memory (STM). Inputs to the networks may be presented with widely differing growth rates, amplitudes, durations, and interstimulus intervals without altering the stored STM representation. The STORE temporal order code is designed to enable groupings of the stored events to be stably learned and remembered in real time, even as new events perturb the system. Such invariance and stability properties are needed in neural architectures which self-organize learned codes for variable-rate speech perception, sensorimotor planning, or three-dimensional (3-D) visual object recognition. Using such a working memory, a self-organizing architecture for invariant 3-D visual object recognition is described. The new model is based on the model of Seibert and Waxman (1990a), which builds a 3-D representation of an object from a temporally ordered sequence of its two-dimensional (2-D) aspect graphs. The new model, called an ARTSTORE model, consists of the following cascade of processing modules: Invariant Preprocessor → ART 2 → STORE Model → ART 2 → Outstar Network.


Neural Networks | 1995

Fast-learning VIEWNET architectures for recognizing three-dimensional objects from multiple two-dimensional views

Gary Bradski; Stephen Grossberg

The recognition of three-dimensional (3-D) objects from sequences of their two-dimensional (2-D) views is modeled by a family of self-organizing neural architectures, called VIEWNET, that use View Information Encoded With NETworks. VIEWNET incorporates a preprocessor that generates a compressed but 2-D invariant representation of an image, a supervised incremental learning system that classifies the preprocessed representations into 2-D view categories whose outputs are combined into 3-D invariant object categories, and a working memory that makes a 3-D object prediction by accumulating evidence from 3-D object category nodes as multiple 2-D views are experienced. The simplest VIEWNET achieves high recognition scores without the need to explicitly code the temporal order of 2-D views in working memory. Working memories are also discussed that save memory resources by implicitly coding temporal order in terms of the relative activity of 2-D view category nodes, rather than as explicit 2-D view transitions. Variants of the VIEWNET architecture may be used for scene understanding by using a preprocessor and classifier that can determine both what objects are in a scene and where they are located. The present VIEWNET preprocessor includes the CORT-X 2 filter, which discounts the illuminant, regularizes and completes figural boundaries, and suppresses image noise. This boundary segmentation is rendered invariant under 2-D translation, rotation, and dilation by use of a log-polar transform. The invariant spectra undergo Gaussian coarse coding to further reduce noise and 3-D foreshortening effects, and to increase generalization. These compressed codes are input into the classifier, a supervised learning system based on the fuzzy ARTMAP algorithm. Fuzzy ARTMAP learns 2-D view categories that are invariant under 2-D image translation, rotation, and dilation as well as 3-D image transformations that do not cause a predictive error. Evidence from sequences of 2-D view categories converges at 3-D object nodes that generate a response invariant under changes of 2-D view. These 3-D object nodes input to a working memory that accumulates evidence over time to improve object recognition. In the simplest working memory, each occurrence (nonoccurrence) of a 2-D view category increases (decreases) the corresponding node's activity in working memory. The maximally active node is used to predict the 3-D object. Recognition is studied with noisy and clean images using slow and fast learning. Slow learning at the fuzzy ARTMAP map field is adapted to learn the conditional probability of the 3-D object given the selected 2-D view category. VIEWNET is demonstrated on an MIT Lincoln Laboratory database of 128 × 128 2-D views of aircraft with and without additive noise. A recognition rate of up to 90% is achieved with one 2-D view and up to 98.5% with three 2-D views. The properties of 2-D view and 3-D object category nodes are compared with those of cells in monkey inferotemporal cortex.
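
The preprocessing chain described above (centroid centering, log-polar resampling, coarse coding) can be caricatured in a few lines of NumPy. The grid sizes, block averaging, and nearest-neighbor sampling below are illustrative assumptions, not the CORT-X 2 / VIEWNET implementation; the point is only that centering removes translation and that rotation and dilation become shifts along the (log r, theta) axes, which the subsequent coarse coding helps absorb.

```python
# Rough sketch of centroid-centered log-polar resampling plus coarse coding.
# Sizes, sampling, and block averaging are assumptions for illustration only.
import numpy as np

def log_polar_coarse_code(img, n_r=32, n_theta=64, out=(8, 16)):
    ys, xs = np.nonzero(img)                       # active boundary pixels
    cy, cx = ys.mean(), xs.mean()                  # centroid -> translation invariance
    r_max = np.hypot(img.shape[0], img.shape[1]) / 2.0
    log_r = np.linspace(0.0, np.log(r_max), n_r)
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(np.exp(log_r), theta, indexing="ij")
    sy = np.clip((cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
    sx = np.clip((cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
    lp = img[sy, sx].astype(float)                 # (log r, theta) resampling
    # Block averaging as a crude stand-in for Gaussian coarse coding.
    bh, bw = n_r // out[0], n_theta // out[1]
    return lp.reshape(out[0], bh, out[1], bw).mean(axis=(1, 3))

img = np.zeros((64, 64), dtype=np.uint8)           # small synthetic boundary
img[20:44, 20] = img[20:44, 43] = img[20, 20:44] = img[43, 20:44] = 1
print(log_polar_coarse_code(img).shape)            # (8, 16) compressed code
```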


International Symposium on Neural Networks | 1992

Dynamic programming for optimal control of setup scheduling with neural network modifications

Gary Bradski

An optimal control solution to machine setup-change scheduling is demonstrated, based on dynamic programming average-cost-per-stage value iteration as set forth by M. Caramanis et al. (1991) for the 2-D case. The difficulty with the optimal approach lies in the explosive computational growth of the resulting solution. A method of reducing the computational complexity is developed using ideas from biology and neural networks. A real-time controller is described that uses a linear-log representation of state space, with neural networks employed to fit cost surfaces.
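
For readers unfamiliar with average-cost-per-stage value iteration, the following is a compact sketch of the "relative" variant on a tiny made-up two-state setup problem. The states, costs, and transition matrices are invented purely for illustration; this is not the Caramanis et al. scheduling model, and it omits the neural-network cost-surface fit described above.

```python
# Relative (average-cost) value iteration on a toy two-state setup problem.
# The MDP below is illustrative only.
import numpy as np

def relative_value_iteration(P, c, n_iters=500, ref=0):
    """P[a, s, s'] transition probabilities, c[a, s] stage costs.
    Returns the estimated average cost per stage and relative values h."""
    h = np.zeros(P.shape[1])
    gain = 0.0
    for _ in range(n_iters):
        q = c + np.einsum("ast,t->as", P, h)   # Bellman backup per action
        h_new = q.min(axis=0)
        gain = h_new[ref]                      # subtract a reference state
        h = h_new - gain
    return gain, h

# Two setups (state 0, state 1); actions: keep the setup or switch it.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],        # keep
              [[0.0, 1.0], [1.0, 0.0]]])       # switch
c = np.array([[1.0, 4.0],                      # per-stage cost of keeping
              [3.0, 2.0]])                     # cost of switching
gain, h = relative_value_iteration(P, c)
print("average cost per stage:", gain, "relative values:", h)
```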


International Symposium on Neural Networks | 1992

Working memories for storage and recall of arbitrary temporal sequences

Gary Bradski; Gail A. Carpenter; Stephen Grossberg

A working memory model is described that is capable of storing and recalling arbitrary temporal sequences of events, including repeated items. These memories encode the invariant temporal order of sequential events that may be presented at widely differing speeds, durations, and interstimulus intervals. This temporal order code is designed to enable all possible groupings of sequential events to be stably learned and remembered in real time, even as new events perturb the system.


International Symposium on Neural Networks | 1991

Working memory networks for learning multiple groupings of temporally ordered events: applications to 3-D visual object recognition

Gary Bradski; Gail A. Carpenter; Stephen Grossberg

Working memory neural networks which encode the invariant temporal order of sequential events that may be presented at widely differing speeds, durations, and interstimulus intervals are characterized. Working memory, a kind of short-term memory, can be quickly erased by a distracting event, unlike long-term memory. The authors describe a working memory architecture for the storage of temporal order information across a series of item representations. This temporal order code is designed to enable all possible groupings of sequential events to be stably learned and remembered in real time, even as new events perturb the system. Such a competence is needed in neural architectures which self-organize learned codes. Using such a working memory, a self-organizing architecture for invariant 3D visual object recognition, based on the model of M. Seibert and A.M. Waxman (1990), is described.


Archive | 1994

Recognition of 3-D Objects from Multiple 2-D Views by a Self-Organizing Neural Architecture

Gary Bradski; Stephen Grossberg

The recognition of 3-D objects from sequences of their 2-D views is modeled by a neural architecture, called VIEWNET, that uses View Information Encoded With NETworks. VIEWNET illustrates how several types of noise and variability in image data can be progressively removed while incomplete image features are restored and invariant features are discovered using an appropriately designed cascade of processing stages. VIEWNET first processes 2-D views of 3-D objects using the CORT-X 2 filter, which discounts the illuminant, regularizes and completes figural boundaries, and removes noise from the images. Boundary regularization and completion are achieved by the same mechanisms that suppress image noise. A log-polar transform is taken with respect to the centroid of the resulting figure and then re-centered to achieve 2-D scale and rotation invariance. The invariant images are coarse coded to further reduce noise, reduce foreshortening effects, and increase generalization. These compressed codes are input into a supervised learning system based on the fuzzy ARTMAP algorithm. Recognition categories of 2-D views are learned before evidence from sequences of 2-D view categories is accumulated to improve object recognition. Recognition is studied with noisy and clean images using slow and fast learning. VIEWNET is demonstrated on an MIT Lincoln Laboratory database of 2-D views of jet aircraft with and without additive noise. A recognition rate of 90% is achieved with one 2-D view category and of 98.5% correct with three 2-D view categories.


Intelligent Robots and Computer Vision XIII: Algorithms and Computer Vision | 1994

VIEWNET: a neural architecture for learning to recognize 3D objects from multiple 2D views

Stephen Grossberg; Gary Bradski

A self-organizing neural network is developed for recognition of 3-D objects from sequences of their 2-D views. Called VIEWNET because it uses view information encoded with networks, the model processes 2-D views of 3-D objects using the CORT-X 2 filter, which discounts the illuminant, regularizes and completes figural boundaries, and removes noise from the images. A log-polar transform is taken with respect to the centroid of the resulting figure and then re-centered to achieve 2-D scale and rotation invariance. The invariant images are coarse coded to further reduce noise, reduce foreshortening effects, and increase generalization. These compressed codes are input into a supervised learning system based on the Fuzzy ARTMAP algorithm which learns 2-D view categories. Evidence from sequences of 2-D view categories is stored in a working memory. Voting based on the unordered set of stored categories determines object recognition. Recognition is studied with noisy and clean images using slow and fast learning. VIEWNET is demonstrated on an MIT Lincoln Laboratory database of 2-D views of aircraft with and without additive noise. A recognition rate of up to 90% is achieved with one 2-D view category and of up to 98.5% correct with three 2-D view categories.
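
A toy version of the evidence-accumulation/voting step is sketched below. The view-to-object associations, the object names, and the optional decay term are hypothetical stand-ins; in VIEWNET these associations are learned by the fuzzy ARTMAP map field rather than given as a table.

```python
# Toy evidence accumulation over a sequence of 2-D view categories.
# Associations and decay are illustrative assumptions, not the learned model.
from collections import defaultdict

def recognize(view_sequence, view_to_object, decay=0.0):
    """Accumulate votes per 3-D object node; return the maximally active one."""
    evidence = defaultdict(float)
    for view in view_sequence:
        for obj in evidence:                    # optional decay for absent objects
            evidence[obj] *= (1.0 - decay)
        evidence[view_to_object[view]] += 1.0   # vote for the associated object
    return max(evidence, key=evidence.get), dict(evidence)

# Hypothetical associations: three view categories per object.
view_to_object = {"v1": "object_A", "v2": "object_A", "v3": "object_A",
                  "v4": "object_B", "v5": "object_B", "v6": "object_B"}
print(recognize(["v2", "v4", "v1"], view_to_object))
# -> ('object_A', {'object_A': 2.0, 'object_B': 1.0})
```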


Archive | 1995

Fast Learning VIEWNET Architectures for Recognizing 3-D Objects from Multiple 2-D Views

Gary Bradski; Stephen Grossberg


Archive | 1991

Working Memory Networks for Learning Temporal Order, with Application to 3-D Visual Object Recognition

Gary Bradski; Gail A. Carpenter; Stephen Grossberg
