
Publication


Featured research published by Robert A. Jacobs.


Neural Computation | 1991

Adaptive mixtures of local experts

Robert A. Jacobs; Michael I. Jordan; Steven J. Nowlan; Geoffrey E. Hinton

We present a new supervised learning procedure for systems composed of many separate networks, each of which learns to handle a subset of the complete set of training cases. The new procedure can be viewed either as a modular version of a multilayer supervised network, or as an associative version of competitive learning. It therefore provides a new link between these two apparently different approaches. We demonstrate that the learning procedure divides up a vowel discrimination task into appropriate subtasks, each of which can be solved by a very simple expert network.
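The core idea of combining many simple expert networks can be sketched in a few lines: a gating network assigns each input a set of mixing proportions, and the system output is the gate-weighted blend of the experts' outputs. The following minimal NumPy sketch uses made-up dimensions and linear experts for illustration; the names `W_gate` and `W_experts` are assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Illustrative dimensions: 2 input features, 3 linear experts, scalar output.
n_in, n_experts = 2, 3
W_gate = rng.normal(size=(n_in, n_experts))     # gating network weights
W_experts = rng.normal(size=(n_experts, n_in))  # one linear expert per row

def mixture_output(x):
    """Blend expert outputs with gating proportions g(x)."""
    g = softmax(x @ W_gate)    # (n_experts,) mixing proportions, sum to 1
    y = W_experts @ x          # (n_experts,) each expert's prediction
    return g @ y               # gate-weighted combination

x = np.array([0.5, -1.0])
print(mixture_output(x))
```

During learning, the gate comes to route each training case to the expert that handles it best, which is what divides a task such as vowel discrimination into subtasks.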


Neural Networks | 1987

Increased Rates of Convergence Through Learning Rate Adaptation

Robert A. Jacobs

While there exist many techniques for finding the parameters that minimize an error function, only those methods that solely perform local computations are used in connectionist networks. The most popular learning algorithm for connectionist networks is the back-propagation procedure [13], which can be used to update the weights by the method of steepest descent. In this paper, we examine steepest descent and analyze why it can be slow to converge. We then propose four heuristics for achieving faster rates of convergence while adhering to the locality constraint. These heuristics suggest that every weight of a network should be given its own learning rate and that these rates should be allowed to vary over time. Additionally, the heuristics suggest how the learning rates should be adjusted. Two implementations of these heuristics, namely momentum and an algorithm called the delta-bar-delta rule, are studied and simulation results are presented.
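The delta-bar-delta rule gives each weight its own learning rate, increasing it additively while the current gradient agrees in sign with an exponential average of past gradients, and shrinking it multiplicatively when the sign flips. A minimal sketch on a one-dimensional quadratic follows; the constants `kappa`, `phi`, and `theta` are illustrative values, not the ones used in the paper.

```python
import numpy as np

def delta_bar_delta(grad_fn, w, steps=200, kappa=0.01, phi=0.5, theta=0.7, lr0=0.1):
    lr = np.full_like(w, lr0)    # per-weight learning rates
    bar = np.zeros_like(w)       # exponential average of past gradients
    for _ in range(steps):
        g = grad_fn(w)
        # Grow lr additively where the gradient sign agrees with its average;
        # shrink it multiplicatively where the sign flips.
        lr = np.where(bar * g > 0, lr + kappa, lr)
        lr = np.where(bar * g < 0, lr * phi, lr)
        bar = (1 - theta) * g + theta * bar
        w = w - lr * g           # steepest descent with per-weight rates
    return w

# Minimize f(w) = (w - 3)^2, whose gradient is 2(w - 3); the iterate
# should settle near w = 3.
w_star = delta_bar_delta(lambda w: 2 * (w - 3), np.array([0.0]))
print(w_star)
```

The sign-agreement test is purely local to each weight, which is the locality constraint the paper's heuristics are designed to respect.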


Cognitive Science | 1991

Task Decomposition Through Competition in a Modular Connectionist Architecture: The What and Where Vision Tasks

Robert A. Jacobs; Michael I. Jordan; Andrew G. Barto

A novel modular connectionist architecture is presented in which the networks composing the architecture compete to learn the training patterns. An outcome of the competition is that different networks learn different training patterns and, thus, learn to compute different functions. The architecture performs task decomposition in the sense that it learns to partition a task into two or more functionally independent tasks and allocates distinct networks to learn each task. In addition, the architecture tends to allocate to each task the network whose topology is most appropriate to that task. The architecture's performance on “what” and “where” vision tasks is presented and compared with the performance of two multilayer networks. Finally, it is noted that function decomposition is an underconstrained problem, and, thus, different modular architectures may decompose a function in different ways. A desirable decomposition can be achieved if the architecture is suitably restricted in the types of functions that it can compute. Finding appropriate restrictions is possible through the application of domain knowledge. A strength of the modular architecture is that its structure is well suited for incorporating domain knowledge.


International Symposium on Neural Networks | 1993

Hierarchical mixtures of experts and the EM algorithm

Michael I. Jordan; Robert A. Jacobs

We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIMs). Learning is treated as a maximum likelihood problem; in particular, we present an expectation-maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an online learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain.
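In the EM treatment, the E-step computes each expert's posterior responsibility for a training pair by weighting its gating prior with its likelihood for the observed output. The sketch below shows that step for a flat (single-level) mixture of linear experts; Gaussian expert noise with unit variance and the weight names are illustrative assumptions, and the M-step is omitted.

```python
import numpy as np

def responsibilities(x, y, W_gate, W_experts):
    """E-step: posterior responsibility h_i of each expert for (x, y)."""
    logits = x @ W_gate
    g = np.exp(logits - logits.max())
    g = g / g.sum()                       # gating priors g_i(x)
    mu = W_experts @ x                    # each expert's prediction of y
    lik = np.exp(-0.5 * (y - mu) ** 2)    # Gaussian likelihood, sigma = 1
    h = g * lik
    return h / h.sum()                    # posteriors, sum to 1

rng = np.random.default_rng(1)
W_gate = rng.normal(size=(2, 3))          # 2 inputs, 3 experts
W_experts = rng.normal(size=(3, 2))
h = responsibilities(np.array([1.0, -0.5]), 0.2, W_gate, W_experts)
print(h)
```

In the M-step, each expert and the gate are then refit as weighted GLIM problems, with the responsibilities `h` serving as the case weights.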


Systems, Man, and Cybernetics | 1993

Learning piecewise control strategies in a modular neural network architecture

Robert A. Jacobs; Michael I. Jordan

The authors describe a multinetwork, or modular, neural network architecture that learns to perform control tasks using a piecewise control strategy. The architecture's networks compete to learn the training patterns. As a result, a plant's parameter space is adaptively partitioned into a number of regions, and a different network learns a control law in each region. This learning process is described in a probabilistic framework, and learning algorithms that perform gradient ascent in a log-likelihood function are discussed. Simulations show that the modular architecture's performance is superior to that of a single network on a multipayload robot motion control task.


Journal of Vision | 2002

Comparing perceptual learning tasks: a review.

Ione Fine; Robert A. Jacobs

We compared perceptual learning in 16 psychophysical studies, ranging from low-level spatial frequency and orientation discrimination tasks to high-level object and face-recognition tasks. All studies examined learning over at least four sessions and were carried out foveally or using free fixation. Comparison of learning effects across this wide range of tasks demonstrates that the amount of learning varies widely between different tasks. A variety of factors seems to affect learning, including the number of perceptual dimensions relevant to the task, external noise, familiarity, and task complexity.


Trends in Cognitive Sciences | 2002

What determines visual cue reliability?

Robert A. Jacobs

Visual environments contain many cues to properties of an observed scene. To integrate information provided by multiple cues in an efficient manner, observers must assess the degree to which each cue provides reliable versus unreliable information. Two hypotheses are reviewed regarding how observers estimate cue reliabilities, namely that the estimated reliability of a cue is related to the ambiguity of the cue, and that people use correlations among cues to estimate cue reliabilities. Cue reliabilities are shown to be important both for cue combination and for aspects of visual learning.
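A standard formalization of reliability-based cue combination weights each cue's estimate by its inverse variance, so less reliable cues contribute less. The sketch below shows that rule with made-up numbers for a depth judgment from two hypothetical cues.

```python
import numpy as np

def combine(estimates, variances):
    """Inverse-variance-weighted cue combination.

    Returns the combined estimate and its variance, which is never
    larger than the variance of the best single cue.
    """
    inv = 1.0 / np.asarray(variances)
    w = inv / inv.sum()                  # normalized reliability weights
    return w @ np.asarray(estimates), 1.0 / inv.sum()

# A precise stereo cue (variance 1) and a noisy texture cue (variance 4):
depth, combined_var = combine([10.0, 14.0], [1.0, 4.0])
print(depth, combined_var)   # 10.8, 0.8
```

Here the precise cue gets weight 0.8 and the noisy one 0.2, pulling the combined estimate toward the reliable cue while reducing overall variance below either cue alone.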


Journal of the American Statistical Association | 1996

Bayesian Inference in Mixtures-of-Experts and Hierarchical Mixtures-of-Experts Models with an Application to Speech Recognition

Fengchun Peng; Robert A. Jacobs; Martin A. Tanner

Machine classification of acoustic waveforms as speech events is often difficult due to context dependencies. Here a vowel recognition task with multiple speakers is studied via the use of a class of modular and hierarchical systems referred to as mixtures-of-experts and hierarchical mixtures-of-experts models. The statistical model underlying the systems is a mixture model in which both the mixture coefficients and the mixture components are generalized linear models. A full Bayesian approach is used as a basis of inference and prediction. Computations are performed using Markov chain Monte Carlo methods. A key benefit of this approach is the ability to obtain a sample from the posterior distribution of any functional of the parameters of the given model. In this way, more information is obtained than can be provided by a point estimate. Also avoided is the need to rely on a normal approximation to the posterior as the basis of inference. This is particularly important in cases where the posteri...


Journal of Cognitive Neuroscience | 1992

Computational consequences of a bias toward short connections

Robert A. Jacobs; Michael I. Jordan

A fundamental observation in the neurosciences is that the brain is a modular system in which different regions perform different tasks. Recent evidence, however, raises questions about the accuracy of this characterization with respect to neonates. One possible interpretation of this evidence is that certain aspects of the modular organization of the adult brain arise developmentally. To explore this hypothesis we wish to characterize the computational principles that underlie the development of modular systems. In previous work we have considered computational schemes that allow a learning system to discover the modular structure that is present in the environment (Jacobs, Jordan, & Barto, 1991). In the current paper we present a complementary approach in which the development of modularity is due to an architectural bias in the learner. In particular, we examine the computational consequences of a simple architectural bias toward short-range connections. We present simulations that show that systems that learn under the influence of such a bias have a number of desirable properties, including a tendency to decompose tasks into subtasks, to decouple the dynamics of recurrent subsystems, and to develop location-sensitive internal representations. Furthermore, the system's units develop local receptive and projective fields, and the system develops characteristics that are typically associated with topographic maps.
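One simple way to realize a bias toward short connections is a weight-decay penalty that grows with connection length, so long-range weights are pushed toward zero faster than short-range ones. The sketch below is an illustrative assumption about how such a penalty could be implemented, not the paper's exact formulation; the 1-D unit layout and the `lam` value are made up.

```python
import numpy as np

n = 4                                        # units laid out on a 1-D line
pos = np.arange(n, dtype=float)
dist = np.abs(pos[:, None] - pos[None, :])   # connection lengths d_ij

def decay_gradient(W, lam=0.1):
    """Gradient of the length-weighted penalty 0.5 * lam * sum d_ij * W_ij**2."""
    return lam * dist * W

W = np.ones((n, n))
W = W - 0.5 * decay_gradient(W)   # one descent step on the penalty alone
print(W)                          # longer connections shrink more
```

Added to the usual error gradient, this term leaves zero-length (local) connections untouched while steadily pruning distant ones, which is the kind of pressure that can yield local receptive fields and topographic structure.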


Nature Neuroscience | 2000

Motor timing learned without motor training.

Daniel V. Meegan; Richard N. Aslin; Robert A. Jacobs

Improvements due to perceptual training are often specific to the trained task and do not generalize to similar perceptual tasks. Surprisingly, given this history of highly constrained, context-specific perceptual learning, we found that training on a perceptual task showed significant transfer to a motor task. This result provides evidence for a common neural architecture underlying analysis of sensory input and control of motor output, and suggests a potential role for perception in motor development and rehabilitation.

Collaboration


Dive into Robert A. Jacobs's collaborations.

Top Co-Authors

Ione Fine, University of Washington
Manu Chhabra, University of Rochester