Publication


Featured research published by A. David Redish.


The Journal of Neuroscience | 2007

Neural Ensembles in CA3 Transiently Encode Paths Forward of the Animal at a Decision Point

Adam Johnson; A. David Redish

Neural ensembles were recorded from the CA3 region of rats running on T-based decision tasks. Examination of neural representations of space at fast time scales revealed a transient but repeatable phenomenon as rats made a decision: the location reconstructed from the neural ensemble swept forward, first down one path and then the other. Estimated representations were coherent and preferentially swept ahead of the animal rather than behind the animal, implying that they represented future possibilities rather than recently traveled paths. Similar phenomena occurred at other important decisions (such as in recovery from an error). Local field potentials from these sites contained pronounced theta and gamma frequencies, but no sharp wave frequencies. Forward-shifted spatial representations were influenced by task demands and experience. These data suggest that the hippocampus does not represent space as a passive computation, but rather that hippocampal spatial processing is an active process likely regulated by cognitive mechanisms.
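
The position reconstruction behind these forward sweeps is typically done by one-step Bayesian decoding from binned spike counts. A minimal sketch under independent-Poisson assumptions, using synthetic Gaussian tuning curves rather than the paper's recorded data:

```python
import numpy as np

def decode_position(spike_counts, tuning_curves, dt):
    """One-step Bayesian decoding with independent-Poisson spiking and
    a uniform prior. Returns a posterior over position bins.

    spike_counts  : (n_cells,) spikes observed in a window of length dt (s)
    tuning_curves : (n_cells, n_positions) expected firing rate (Hz)
    """
    expected = tuning_curves * dt                       # expected counts per bin
    # log P(spikes | x) for independent Poisson cells (constant terms dropped)
    log_like = spike_counts @ np.log(expected + 1e-12) - expected.sum(axis=0)
    log_like -= log_like.max()                          # numerical stability
    posterior = np.exp(log_like)
    return posterior / posterior.sum()

# Toy example: three place cells with Gaussian fields on a 1D track
positions = np.linspace(0.0, 1.0, 51)
centers = np.array([0.2, 0.5, 0.8])
tuning = 20.0 * np.exp(-(positions[None, :] - centers[:, None]) ** 2 / 0.01)
spikes = np.array([0, 5, 0])                            # only the middle cell fired
posterior = decode_position(spikes, tuning, dt=0.25)
print(positions[np.argmax(posterior)])                  # → 0.5
```

Applying the same decode to overlapping windows of tens of milliseconds is what reveals the reconstructed location sweeping ahead of the animal.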


Neuron | 2010

Hippocampal Replay Is Not a Simple Function of Experience

Anoopum S. Gupta; Matthijs A. A. van der Meer; David S. Touretzky; A. David Redish

Replay of behavioral sequences in the hippocampus during sharp wave ripple complexes (SWRs) provides a potential mechanism for memory consolidation and the learning of knowledge structures. Current hypotheses imply that replay should straightforwardly reflect recent experience. However, we find these hypotheses to be incompatible with the content of replay on a task with two distinct behavioral sequences (A and B). We observed forward and backward replay of B even when rats had been performing A for >10 min. Furthermore, replay of nonlocal sequence B occurred more often when B was infrequently experienced. Neither forward nor backward sequences preferentially represented highly experienced trajectories within a session. Additionally, we observed the construction of never-experienced novel-path sequences. These observations challenge the idea that sequence activation during SWRs is a simple replay of recent experience. Instead, replay reflected all physically available trajectories within the environment, suggesting a potential role in active learning and maintenance of the cognitive map.
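
A common way to classify a candidate event as forward or backward replay is a rank-order comparison between the order in which cells fire during the event and the order of their place-field centers on the track. A minimal sketch (Spearman's rho computed directly, assuming no tied ranks; the field centers and spike times below are invented for illustration):

```python
import numpy as np

def rank_order_score(spike_times, field_centers):
    """Spearman rank correlation between firing order in a candidate
    event and place-field order on the track.
    Near +1: forward replay; near -1: backward replay."""
    n = len(spike_times)
    r1 = np.argsort(np.argsort(spike_times)).astype(float)   # firing-order ranks
    r2 = np.argsort(np.argsort(field_centers)).astype(float) # field-order ranks
    d = r1 - r2
    return 1.0 - 6.0 * (d ** 2).sum() / (n * (n ** 2 - 1))   # no-ties formula

# Toy event: five cells whose field centers are ordered along the track
field_centers = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
forward_event = np.array([0.01, 0.02, 0.03, 0.04, 0.05])     # spike times (s)
backward_event = forward_event[::-1]
print(rank_order_score(forward_event, field_centers))        # → 1.0
print(rank_order_score(backward_event, field_centers))       # → -1.0
```

Significance is then usually assessed against a shuffle distribution (e.g., permuting cell identities), which this sketch omits.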


Hippocampus | 1997

Cognitive maps beyond the hippocampus

A. David Redish; David S. Touretzky

We present a conceptual framework for the role of the hippocampus and its afferent and efferent structures in rodent navigation. Our proposal is compatible with the behavioral, neurophysiological, anatomical, and neuropharmacological literature, and suggests a number of practical experiments that could support or refute it.


Hippocampus | 1996

Theory of rodent navigation based on interacting representations of space

David S. Touretzky; A. David Redish

We present a computational theory of navigation in rodents based on interacting representations of place, head direction, and local view. An associated computer model is able to replicate a variety of behavioral and neurophysiological results from the rodent navigation literature. The theory and model generate predictions that are testable with current technologies.


Neural Computation | 1998

The role of the hippocampus in solving the Morris water maze

A. David Redish; David S. Touretzky

We suggest that the hippocampus plays two roles that allow rodents to solve the hidden-platform water maze: self-localization and route replay. When an animal explores an environment such as the water maze, the combination of place fields and correlational (Hebbian) long-term potentiation produces a weight matrix in the CA3 recurrent collaterals such that cells with overlapping place fields are more strongly interconnected than cells with nonoverlapping fields. When combined with global inhibition, this forms an attractor with coherent representations of position as stable states. When biased by local view information, this allows the animal to determine its position relative to the goal when it returns to the environment. We call this self-localization. When an animal traces specific routes within an environment, the weights in the CA3 recurrent collaterals become asymmetric. We show that this stores these routes in the recurrent collaterals. When primed with noise in the absence of sensory input, a coherent representation of position still forms in the CA3 population, but then that representation drifts, retracing a route. We show that these two mechanisms can coexist and form a basis for memory consolidation, explaining the anterograde and limited retrograde amnesia seen following hippocampal lesions.
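
The route-replay mechanism described here (asymmetric recurrent weights plus global inhibition, primed with an initial bump of activity) can be sketched as a toy rate network. The cell count, Gaussian connectivity, and top-k inhibition below are illustrative simplifications, not the paper's model:

```python
import numpy as np

def build_weights(n, sigma=2.0, shift=1.0):
    """Hebbian-style weights for place cells ordered along a 1D route.
    Gaussian connectivity from overlapping fields, shifted forward by
    `shift` cells to model the asymmetry imprinted by repeated traversal."""
    idx = np.arange(n)
    d = idx[None, :] - idx[:, None] - shift  # postsynaptic offset from presynaptic
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))

def replay(W, start, steps=30, k=5):
    """Prime the network with a bump at `start` and let it evolve.
    Each step: recurrent excitation, then a crude global inhibition that
    keeps only the k most active cells. Returns the bump peak per step."""
    n = W.shape[0]
    a = np.exp(-(np.arange(n) - start) ** 2 / 4.0)  # initial activity bump
    peaks = []
    for _ in range(steps):
        a = W.T @ a                                 # recurrent excitation
        thresh = np.sort(a)[-k]
        a = np.where(a >= thresh, a, 0.0)           # global inhibition
        a /= a.sum()                                # normalize total activity
        peaks.append(int(np.argmax(a)))
    return peaks

W = build_weights(60)
peaks = replay(W, start=10)
print(peaks[0], peaks[-1])  # the bump drifts forward, retracing the route
```

With `shift=0` the same network holds a stationary bump (the self-localization attractor); the forward shift is what converts it into route replay.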


Psychological Review | 2007

Reconciling reinforcement learning models with behavioral extinction and renewal: Implications for addiction, relapse, and problem gambling

A. David Redish; Steve Jensen; Adam Johnson; Zeb Kurth-Nelson

Because learned associations are quickly renewed following extinction, the extinction process must include processes other than unlearning. However, reinforcement learning models, such as the temporal difference reinforcement learning (TDRL) model, treat extinction as an unlearning of associated value and are thus unable to capture renewal. TDRL models are based on the hypothesis that dopamine carries a reward prediction error signal; these models predict reward by driving that reward error to zero. The authors construct a TDRL model that can accommodate extinction and renewal through two simple processes: (a) a TDRL process that learns the value of situation-action pairs and (b) a situation recognition process that categorizes the observed cues into situations. This model has implications for dysfunctional states, including relapse after addiction and problem gambling.
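
The two-process model can be caricatured in a few lines: a TD value update whose prediction error stands in for the dopamine signal, plus situation categories that index the learned values. In the paper the situation classifier is learned from cue statistics; here the acquisition/extinction split is hand-coded purely to show why renewal follows:

```python
V = {}  # learned value, indexed by situation category

def td_update(situation, reward, alpha=0.3):
    """One-step TD update; delta plays the role of the dopaminergic
    reward-prediction-error signal."""
    v = V.setdefault(situation, 0.0)
    delta = reward - v              # no successor state in this one-shot task
    V[situation] = v + alpha * delta
    return delta

# Acquisition: cues categorized as situation "A", consistently rewarded
for _ in range(20):
    td_update("A", reward=1.0)

# Extinction: instead of unlearning V["A"], the persistent negative
# prediction error drives the classifier to split off a new situation
for _ in range(20):
    td_update("A-extinction", reward=0.0)

# Renewal: returning to the original context reactivates situation "A",
# whose value was never unlearned
print(round(V["A"], 2), V["A-extinction"])  # → 1.0 0.0
```

Because extinction trains a separate category rather than overwriting the old one, the original association reappears intact when the original situation is recognized again.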


Current Opinion in Neurobiology | 2007

Integrating hippocampus and striatum in decision-making

Adam Johnson; Matthijs A. A. van der Meer; A. David Redish

The learning-and-memory and navigation literatures emphasize interactions between multiple memory systems: a flexible, planning-based system and a rigid, cached-value system. This has profound implications for decision-making. Recent conceptualizations of flexible decision-making employ prospection and projection arising from a network involving the hippocampus. Recent recordings from rodent hippocampus in decision-making situations have found transient forward-shifted representations. Evaluation of that prediction and subsequent action-selection probably occurs downstream (e.g., in orbitofrontal cortex and in ventral and dorsomedial striatum). Classically, striatum has been identified as a crucial component of the less-flexible, incremental system. Current evidence, however, suggests that striatum is involved in both flexible and stimulus-response decision-making, with dorsolateral striatum involved in stimulus-response strategies and ventral and dorsomedial striatum involved in goal-directed strategies.


Frontiers in Integrative Neuroscience | 2009

Covert expectation-of-reward in rat ventral striatum at decision points

Matthijs A. A. van der Meer; A. David Redish

Flexible decision-making strategies (such as planning) are a key component of adaptive behavior, yet their neural mechanisms have remained resistant to experimental analysis. Theories of planning require prediction and evaluation of potential future rewards, suggesting that reward signals may covertly appear at decision points. To test this idea, we recorded ensembles of ventral striatal neurons on a spatial decision task, in which hippocampal ensembles are known to represent future possibilities at decision points. We found representations of reward which were not only activated at actual reward delivery sites, but also at a high-cost choice point and before error correction. This expectation-of-reward signal at decision points was apparent at both the single cell and the ensemble level, and vanished with behavioral automation. We conclude that ventral striatal representations of reward are more dynamic than suggested by previous reports of reward- and cue-responsive cells, and may provide the necessary signal for evaluation of internally generated possibilities considered during flexible decision-making.


The Journal of Neuroscience | 2009

Corticostriatal Interactions during Learning, Memory Processing, and Decision Making

Cyriel M. A. Pennartz; Joshua D. Berke; Ann M. Graybiel; Rutsuko Ito; Carien S. Lansink; Matthijs A. A. van der Meer; A. David Redish; Kyle S. Smith; Pieter Voorn

This mini-symposium aims to integrate recent insights from anatomy, behavior, and neurophysiology, highlighting the anatomical organization, behavioral significance, and information-processing mechanisms of corticostriatal interactions. In this summary of topics, which is not meant to provide a comprehensive survey, we will first review the anatomy of corticostriatal circuits, comparing different ways by which “loops” of cortical–basal ganglia circuits communicate. Next, we will address the causal importance and systems-neurophysiological mechanisms of corticostriatal interactions for memory, emphasizing the communication between hippocampus and ventral striatum during contextual conditioning. Furthermore, ensemble recording techniques have been applied to compare information processing in the dorsal and ventral striatum to predictions from reinforcement learning theory. We will next discuss how neural activity develops in corticostriatal areas when habits are learned. Finally, we will evaluate the role of GABAergic interneurons in dynamically transforming cortical inputs into striatal output during learning and decision making.


Neuron | 2010

Triple Dissociation of Information Processing in Dorsal Striatum, Ventral Striatum, and Hippocampus on a Learned Spatial Decision Task

Matthijs A. A. van der Meer; Adam Johnson; Neil Schmitzer-Torbert; A. David Redish

Decision-making studies across different domains suggest that decisions can arise from multiple, parallel systems in the brain: a flexible system utilizing action-outcome expectancies and a more rigid system based on situation-action associations. The hippocampus, ventral striatum, and dorsal striatum make unique contributions to each system, but how information processing in each of these structures supports these systems is unknown. Recent work has shown covert representations of future paths in hippocampus and of future rewards in ventral striatum. We developed analyses that could be applied identically to all three structures, enabling a direct comparison. Covert representations of future paths and reward were both absent from the dorsal striatum. In contrast, dorsal striatum slowly developed situation representations that selectively represented action-rich parts of the task. This triple dissociation suggests that the different roles these structures play are due to differences in information-processing mechanisms.

Collaboration


Explore A. David Redish's collaborations.

Top Co-Authors

Adam Johnson

University of Minnesota

Zeb Kurth-Nelson

Wellcome Trust Centre for Neuroimaging

Brandy Schmidt

University of Connecticut
