Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Randall C. O'Reilly is active.

Publication


Featured research published by Randall C. O'Reilly.


Psychological Review | 1995

Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory.

James L. McClelland; Bruce L. McNaughton; Randall C. O'Reilly

Damage to the hippocampal system disrupts recent memory but leaves remote memory intact. The account presented here suggests that memories are first stored via synaptic changes in the hippocampal system, that these changes support reinstatement of recent memories in the neocortex, that neocortical synapses change a little on each reinstatement, and that remote memory is based on accumulated neocortical changes. Models that learn via changes to connections help explain this organization. These models discover the structure in ensembles of items if learning of each item is gradual and interleaved with learning about other items. This suggests that the neocortex learns slowly to discover the structure in ensembles of experiences. The hippocampal system permits rapid learning of new items without disrupting this structure, and reinstatement of new memories interleaves them with others to integrate them into structured neocortical memory systems.
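The interleaving argument above can be sketched in a few lines. This is a toy illustration (a single "weight" trained with a delta rule; all values are made up, not from the paper): focused fast learning of a second item overwrites the first, while slow interleaved learning settles on the shared structure of the ensemble.

```python
# Toy sketch: one weight, two "items" that demand conflicting outputs.
# Fast focused training on item B overwrites what was learned for item A
# (catastrophic interference); slow interleaved training converges on a
# compromise reflecting the ensemble's overall structure.

def train(schedule, lr):
    w = 0.0
    for target in schedule:
        w += lr * (target - w)   # delta-rule step toward the current target
    return w

focused = train([1.0] * 20 + [-1.0] * 20, lr=0.5)   # A then B, fast
interleaved = train([1.0, -1.0] * 20, lr=0.05)      # mixed, gradual

# Focused training ends near B's target (-1.0), "forgetting" A;
# interleaved training ends near the ensemble mean (about 0.0).
print(round(focused, 2), round(interleaved, 2))
```

On this account the hippocampus supplies the fast route for new items, while reinstatement interleaves them into the slow cortical learner without the interference shown by the `focused` schedule.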


Psychological Review | 2003

Modeling Hippocampal and Neocortical Contributions to Recognition Memory: A Complementary-Learning-Systems Approach

Kenneth A. Norman; Randall C. O'Reilly

The authors present a computational neural-network model of how the hippocampus and medial temporal lobe cortex (MTLC) contribute to recognition memory. The hippocampal component contributes by recalling studied details. The MTLC component cannot support recall, but one can extract a scalar familiarity signal from MTLC that tracks how well a test item matches studied items. The authors present simulations that establish key differences in the operating characteristics of the hippocampal-recall and MTLC-familiarity signals and identify several manipulations (e.g., target-lure similarity, interference) that differentially affect the 2 signals. They also use the model to address the stochastic relationship between recall and familiarity and the effects of partial versus complete hippocampal lesions on recognition.
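A scalar familiarity signal of the kind extracted from MTLC can be sketched as a graded match score (toy binary vectors and a hypothetical overlap measure, not the paper's actual model):

```python
# Toy sketch: familiarity as the best normalized overlap between a test
# item and the set of studied items. It is graded (tracks match strength)
# but carries no recallable detail about which item matched.

def familiarity(test, studied):
    def overlap(a, b):
        return sum(x * y for x, y in zip(a, b)) / len(a)
    return max(overlap(test, s) for s in studied)

studied = [[1, 1, 0, 0], [0, 0, 1, 1]]
target = [1, 1, 0, 0]         # a studied item
similar_lure = [1, 0, 0, 0]   # overlaps a studied item, never studied

# A studied target is more familiar than a similar lure, but raising
# target-lure similarity shrinks this margin, the kind of manipulation
# that hurts familiarity more than hippocampal recall.
print(familiarity(target, studied), familiarity(similar_lure, studied))
```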


Cognitive, Affective, & Behavioral Neuroscience | 2001

Interactions between frontal cortex and basal ganglia in working memory: a computational model.

Michael J. Frank; Bryan Loughry; Randall C. O'Reilly

The frontal cortex and the basal ganglia interact via a relatively well understood and elaborate system of interconnections. In the context of motor function, these interconnections can be understood as disinhibiting, or “releasing the brakes,” on frontal motor action plans: The basal ganglia detect appropriate contexts for performing motor actions and enable the frontal cortex to execute such actions at the appropriate time. We build on this idea in the domain of working memory through the use of computational neural network models of this circuit. In our model, the frontal cortex exhibits robust active maintenance, whereas the basal ganglia contribute a selective, dynamic gating function that enables frontal memory representations to be rapidly updated in a task-relevant manner. We apply the model to a novel version of the continuous performance task that requires subroutine-like selective working memory updating and compare and contrast our model with other existing models and theories of frontal-cortex-basal-ganglia interactions.
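The maintenance-plus-gating division of labor can be sketched minimally (assumptions: a single memory slot and a binary gate signal; the actual model uses graded, learned gating across stripes):

```python
# Toy sketch of BG-gated working memory: the "frontal" slot maintains its
# contents robustly against distractors; a basal-ganglia-like "Go" signal
# opens the gate and allows rapid, selective updating.

def run(inputs, gates):
    memory = None
    trace = []
    for stimulus, gate_open in zip(inputs, gates):
        if gate_open:        # BG "Go": update the frontal representation
            memory = stimulus
        # gate closed: robust maintenance, intervening stimuli are ignored
        trace.append(memory)
    return trace

# Store 'A', ignore distractors 'x' and 'y', then selectively update to 'B'.
trace = run(inputs=['A', 'x', 'y', 'B'],
            gates=[True, False, False, True])
print(trace)   # ['A', 'A', 'A', 'B']
```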


Neural Computation | 2006

Making Working Memory Work: A Computational Model of Learning in the Prefrontal Cortex and Basal Ganglia

Randall C. O'Reilly; Michael J. Frank

The prefrontal cortex has long been thought to subserve both working memory (the holding of information online for processing) and executive functions (deciding how to manipulate working memory and perform processing). Although many computational models of working memory have been developed, the mechanistic basis of executive function remains elusive, often amounting to a homunculus. This article presents an attempt to deconstruct this homunculus through powerful learning mechanisms that allow a computational model of the prefrontal cortex to control both itself and other brain areas in a strategic, task-appropriate manner. These learning mechanisms are based on subcortical structures in the midbrain, basal ganglia, and amygdala, which together form an actor-critic architecture. The critic system learns which prefrontal representations are task relevant and trains the actor, which in turn provides a dynamic gating mechanism for controlling working memory updating. Computationally, the learning mechanism is designed to simultaneously solve the temporal and structural credit assignment problems. The model's performance compares favorably with standard backpropagation-based temporal learning mechanisms on the challenging 1-2-AX working memory task and other benchmark working memory tasks.
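The 1-2-AX task mentioned above makes the hierarchical maintenance demand concrete. The task rule is from the literature; the scoring function here is our own toy sketch: an outer cue (1 or 2) must be maintained across inner stimulus pairs, and the target response depends on both the maintained cue and the previous letter.

```python
# Toy sketch of 1-2-AX target detection. Rule: respond "target" ('R') to
# an X that follows an A while the last digit seen was 1, or to a Y that
# follows a B while the last digit seen was 2; otherwise 'L' (nontarget).
# Solving it requires maintaining the outer digit across inner pairs,
# exactly the subroutine-like selective updating the model learns.

def score_sequence(symbols):
    context, prev, responses = None, None, []
    for s in symbols:
        if s in ('1', '2'):
            context = s          # outer loop: update the task context
        target = ((context == '1' and prev == 'A' and s == 'X') or
                  (context == '2' and prev == 'B' and s == 'Y'))
        responses.append('R' if target else 'L')
        prev = s
    return responses

print(''.join(score_sequence(list('1AXBX2BYAX'))))
```

Note that the A-X pair after the 2 is correctly a nontarget: the same inner pair flips status depending on the maintained outer context.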


Behavioral Neuroscience | 2006

A mechanistic account of striatal dopamine function in human cognition: psychopharmacological studies with cabergoline and haloperidol.

Michael J. Frank; Randall C. O'Reilly

The authors test a neurocomputational model of dopamine function in cognition by administering to healthy participants low doses of D2 agents cabergoline and haloperidol. The model suggests that DA dynamically modulates the balance of Go and No-Go basal ganglia pathways during cognitive learning and performance. Cabergoline impaired, while haloperidol enhanced, Go learning from positive reinforcement, consistent with presynaptic drug effects. Cabergoline also caused an overall bias toward Go responding, consistent with postsynaptic action. These same effects extended to working memory and attentional domains, supporting the idea that the basal ganglia/dopamine system modulates the updating of prefrontal representations. Drug effects interacted with baseline working memory span in all tasks. Taken together, the results support a unified account of the role of dopamine in modulating cognitive processes that depend on the basal ganglia.
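The Go/No-Go learning idea can be sketched with toy parameters (the learning rates, gains, and mapping of drug effects onto parameters here are illustrative assumptions, not the model's fitted values): dopamine bursts after positive outcomes strengthen the Go pathway for the chosen response, while dips after negative outcomes strengthen No-Go.

```python
# Toy sketch of Go/No-Go pathway learning under dopamine modulation.
# reward=True  -> DA burst reinforces the Go pathway
# reward=False -> DA dip reinforces the No-Go pathway
# A presynaptic D2 agonist (e.g., cabergoline) would blunt the burst
# (lower burst_gain, impairing Go learning); postsynaptic stimulation
# would add an overall Go response bias. Parameter mapping is illustrative.

def update(go, nogo, reward, lr=0.1, burst_gain=1.0):
    if reward:
        go += lr * burst_gain    # burst: strengthen Go
    else:
        nogo += lr               # dip: strengthen No-Go
    return go, nogo

go, nogo = 0.0, 0.0
for outcome in [True, True, False, True]:
    go, nogo = update(go, nogo, outcome)

print(round(go, 2), round(nogo, 2))   # 0.3 0.1
```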


Archive | 1999

Models of Working Memory: A Biologically Based Computational Model of Working Memory

Randall C. O'Reilly; Todd S. Braver; Jonathan D. Cohen

Five central features of the model

We define working memory as controlled processing involving active maintenance and/or rapid learning, where controlled processing is an emergent property of the dynamic interactions of multiple brain systems, but the prefrontal cortex (PFC) and hippocampus (HCMP) are especially influential owing to their specialized processing abilities and their privileged locations within the processing hierarchy (both the PFC and HCMP are well connected with a wide range of brain areas, allowing them to influence behavior at a global level). The specific features of our model include:

(1) A PFC specialized for active maintenance of internal contextual information that is dynamically updated and self-regulated, allowing it to bias (control) ongoing processing according to maintained information (e.g., goals, instructions, partial products).
(2) An HCMP specialized for rapid learning of arbitrary information, which can be recalled in the service of controlled processing, whereas the posterior perceptual and motor cortex (PMC) exhibits slow, long-term learning that can efficiently represent accumulated knowledge and skills.
(3) Control that emerges from interacting systems (PFC, HCMP, and PMC).
(4) Dimensions that define continua of specialization in different brain systems: for example, robust active maintenance, fast versus slow learning.
(5) Integration of biological and computational principles.

Working memory is an intuitively appealing theoretical construct – perhaps deceptively so.


Philosophical Transactions of the Royal Society B | 2007

Towards an executive without a homunculus: computational models of the prefrontal cortex/basal ganglia system

Thomas E. Hazy; Michael J. Frank; Randall C. O'Reilly

The prefrontal cortex (PFC) has long been thought to serve as an ‘executive’ that controls the selection of actions and cognitive functions more generally. However, the mechanistic basis of this executive function has not been clearly specified, often amounting to a homunculus. This paper reviews recent attempts to deconstruct this homunculus by elucidating the precise computational and neural mechanisms underlying the executive functions of the PFC. The overall approach builds upon existing mechanistic models of the basal ganglia (BG) and frontal systems known to play a critical role in motor control and action selection, where the BG provide a ‘Go’ versus ‘NoGo’ modulation of frontal action representations. In our model, the BG modulate working memory representations in prefrontal areas to support more abstract executive functions. We have developed a computational model of this system that is capable of developing human-like performance on working memory and executive control tasks through trial-and-error learning. This learning is based on reinforcement learning mechanisms associated with the midbrain dopaminergic system and its activation via the BG and amygdala. Finally, we briefly describe various empirical tests of this framework.


Trends in Cognitive Sciences | 2002

Hippocampal and neocortical contributions to memory: advances in the complementary learning systems framework

Randall C. O'Reilly; Kenneth A. Norman

The complementary learning systems framework provides a simple set of principles, derived from converging biological, psychological and computational constraints, for understanding the differential contributions of the neocortex and hippocampus to learning and memory. The central principles are that the neocortex has a low learning rate and uses overlapping distributed representations to extract the general statistical structure of the environment, whereas the hippocampus learns rapidly using separated representations to encode the details of specific events while minimizing interference. In recent years, we have instantiated these principles in working computational models, and have used these models to address human and animal learning and memory findings, across a wide range of domains and paradigms. Here, we review a few representative applications of our models, focusing on two domains: recognition memory and animal learning in the fear-conditioning paradigm. In both domains, the models have generated novel predictions that have been tested and confirmed.
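The representational contrast at the heart of the framework can be shown with made-up binary patterns (a toy illustration, not the models' actual codes): overlapping distributed codes share units across similar events, while sparse separated codes assign nearly disjoint units.

```python
# Toy sketch: overlapping "cortical" codes vs. sparse, separated
# "hippocampal" codes for two similar events. Shared units support
# generalization but invite interference; disjoint units minimize it.

def overlap(a, b):
    return sum(x & y for x, y in zip(a, b))   # count of shared active units

cortex_a = [1, 1, 1, 1, 0, 0, 0, 0]   # dense code for event A
cortex_b = [1, 1, 1, 0, 1, 0, 0, 0]   # similar event B shares most units
hippo_a  = [1, 0, 0, 0, 0, 0, 0, 0]   # sparse code for event A
hippo_b  = [0, 0, 0, 0, 1, 0, 0, 0]   # pattern-separated code for event B

print(overlap(cortex_a, cortex_b), overlap(hippo_a, hippo_b))   # 3 0
```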


Neural Computation | 1996

Biologically plausible error-driven learning using local activation differences: The generalized recirculation algorithm

Randall C. O'Reilly

The error backpropagation learning algorithm (BP) is generally considered biologically implausible because it does not use locally available, activation-based variables. A version of BP that can be computed locally using bidirectional activation recirculation (Hinton and McClelland 1988) instead of backpropagated error derivatives is more biologically plausible. This paper presents a generalized version of the recirculation algorithm (GeneRec), which overcomes several limitations of the earlier algorithm by using a generic recurrent network with sigmoidal units that can learn arbitrary input/output mappings. However, the contrastive Hebbian learning algorithm (CHL, also known as DBM or mean field learning) also uses local variables to perform error-driven learning in a sigmoidal recurrent network. CHL was derived in a stochastic framework (the Boltzmann machine), but has been extended to the deterministic case in various ways, all of which rely on problematic approximations and assumptions, leading some to conclude that it is fundamentally flawed. This paper shows that CHL can be derived instead from within the BP framework via the GeneRec algorithm. CHL is a symmetry-preserving version of GeneRec that uses a simple approximation to the midpoint or second-order accurate Runge-Kutta method of numerical integration, which explains the generally faster learning speed of CHL compared to BP. Thus, all known fully general error-driven learning algorithms that use local activation-based variables in deterministic networks can be considered variations of the GeneRec algorithm (and indirectly, of the backpropagation algorithm). GeneRec therefore provides a promising framework for thinking about how the brain might perform error-driven learning. To further this goal, an explicit biological mechanism is proposed that would be capable of implementing GeneRec-style learning. This mechanism is consistent with available evidence regarding synaptic modification in neurons in the neocortex and hippocampus, and makes further predictions.
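The two learning rules compared above are simple enough to write down directly. The rule forms are standard (minus phase: the network's output given the input alone; plus phase: with the target also clamped); the toy activation values below are our own.

```python
# GeneRec update for a weight from sending unit i to receiving unit j,
# using only local, activation-based variables:
#   dw_ij = lr * x_i_minus * (y_j_plus - y_j_minus)
# The symmetry-preserving CHL form uses phase-specific products:
#   dw_ij = lr * (x_i_plus * y_j_plus - x_i_minus * y_j_minus)

def generec_dw(x_minus, y_minus, y_plus, lr=0.2):
    return [[lr * xi * (yp - ym) for yp, ym in zip(y_plus, y_minus)]
            for xi in x_minus]

def chl_dw(x_minus, x_plus, y_minus, y_plus, lr=0.2):
    return [[lr * (xp * yp - xm * ym) for yp, ym in zip(y_plus, y_minus)]
            for xm, xp in zip(x_minus, x_plus)]

# When the sending activations are equal across phases, the two rules
# coincide, reflecting CHL's status as a variation of GeneRec.
dw1 = generec_dw([0.5, 1.0], [0.2, 0.8], [0.9, 0.6])
dw2 = chl_dw([0.5, 1.0], [0.5, 1.0], [0.2, 0.8], [0.9, 0.6])
max_diff = max(abs(a - b) for ra, rb in zip(dw1, dw2) for a, b in zip(ra, rb))
print(max_diff < 1e-9)   # True
```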


Trends in Cognitive Sciences | 1998

Six principles for biologically based computational models of cortical cognition

Randall C. O'Reilly

This review describes and motivates six principles for computational cognitive neuroscience models: biological realism, distributed representations, inhibitory competition, bidirectional activation propagation, error-driven task learning, and Hebbian model learning. Although these principles are supported by a number of cognitive, computational and biological motivations, the prototypical neural-network model (a feedforward back-propagation network) incorporates only two of them, and no widely used model incorporates all of them. It is argued here that these principles should be integrated into a coherent overall framework, and some potential synergies and conflicts in doing so are discussed.

Collaboration


Dive into Randall C. O'Reilly's collaborations.

Top Co-Authors

Seth A. Herd, University of Colorado Boulder
Jerry W. Rudy, University of Colorado Boulder
Yuko Munakata, University of Colorado Boulder
Thomas E. Hazy, University of Colorado Boulder
Christian Lebiere, Carnegie Mellon University
Dean Wyatte, University of Colorado Boulder
Tim Curran, University of Colorado Boulder
Todd S. Braver, University of Pittsburgh