
Publications


Featured research published by David S. Touretzky.


The Journal of Neuroscience | 2006

A Spin Glass Model of Path Integration in Rat Medial Entorhinal Cortex

Mark C. Fuhs; David S. Touretzky

Electrophysiological recording studies in the dorsocaudal region of medial entorhinal cortex (dMEC) of the rat reveal cells whose spatial firing fields show a remarkably regular hexagonal grid pattern (Fyhn et al., 2004; Hafting et al., 2005). We describe a symmetric, locally connected neural network, or spin glass model, that spontaneously produces a hexagonal grid of activity bumps on a two-dimensional sheet of units. The spatial firing fields of the simulated cells closely resemble those of dMEC cells. A collection of grids with different scales and/or orientations forms a basis set for encoding position. Simulations show that the animal’s location can easily be determined from the population activity pattern. Introducing an asymmetry in the model allows the activity bumps to be shifted in any direction, at a rate proportional to velocity, to achieve path integration. Furthermore, information about the structure of the environment can be superimposed on the spatial position signal by modulation of the bump activity levels without significantly interfering with the hexagonal periodicity of firing fields. Our results support the conjecture of Hafting et al. (2005) that an attractor network in dMEC may be the source of path integration information afferent to hippocampus.
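A minimal sketch of the core mechanism described above: a two-dimensional sheet of rate units with local excitation and broader inhibition ("Mexican hat" connectivity) spontaneously forms a regular lattice of activity bumps. This is illustrative, not the authors' simulation; the sheet size, kernel widths, and tonic bias are arbitrary demo choices.

```python
import numpy as np

N = 40                                    # the sheet is N x N units on a torus
xs = np.arange(N)
d1 = np.abs(xs[:, None] - xs[None, :])
d1 = np.minimum(d1, N - d1)               # circular distance along one axis
D2 = d1[:, None, :, None] ** 2 + d1[None, :, None, :] ** 2  # pairwise squared distance
W = np.exp(-D2 / (2 * 3.0 ** 2)) - 0.3 * np.exp(-D2 / (2 * 6.0 ** 2))
W = W.reshape(N * N, N * N)               # excite near neighbors, inhibit farther out

rng = np.random.default_rng(0)
r = rng.random(N * N)                     # random initial firing rates
for _ in range(100):
    r = np.maximum(0.0, W @ r + 10.0)     # recurrent drive plus a tonic bias
    r /= r.max()                          # keep rates in [0, 1]

pattern = r.reshape(N, N)                 # a periodic arrangement of activity bumps
```

The uniform state is unstable at one spatial frequency of this kernel, so random noise self-organizes into evenly spaced bumps, with the troughs between them fully suppressed by the rectification.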


Neuron | 2010

Hippocampal Replay Is Not a Simple Function of Experience

Anoopum S. Gupta; Matthijs A. A. van der Meer; David S. Touretzky; A. David Redish

Replay of behavioral sequences in the hippocampus during sharp wave ripple complexes (SWRs) provides a potential mechanism for memory consolidation and the learning of knowledge structures. Current hypotheses imply that replay should straightforwardly reflect recent experience. However, we find these hypotheses to be incompatible with the content of replay on a task with two distinct behavioral sequences (A and B). We observed forward and backward replay of B even when rats had been performing A for >10 min. Furthermore, replay of nonlocal sequence B occurred more often when B was infrequently experienced. Neither forward nor backward sequences preferentially represented highly experienced trajectories within a session. Additionally, we observed the construction of never-experienced novel-path sequences. These observations challenge the idea that sequence activation during SWRs is a simple replay of recent experience. Instead, replay reflected all physically available trajectories within the environment, suggesting a potential role in active learning and maintenance of the cognitive map.


Cognitive Science | 1988

A Distributed Connectionist Production System.

David S. Touretzky; Geoffrey E. Hinton

DCPS is a connectionist production system interpreter that uses distributed representations. As a connectionist model it consists of many simple, richly interconnected neuron-like computing units that cooperate to solve problems in parallel. One motivation for constructing DCPS was to demonstrate that connectionist models are capable of representing and using explicit rules. A second motivation was to show how “coarse coding” or “distributed representations” can be used to construct a working memory that requires far fewer units than the number of different facts that can potentially be stored. The simulation we present is intended as a detailed demonstration of the feasibility of certain ideas and should not be viewed as a full implementation of production systems. Our current model only has a few of the many interesting emergent properties that we eventually hope to demonstrate: It is damage-resistant, it performs matching and variable binding by massively parallel constraint satisfaction, and the capacity of its working memory is dependent on the similarity of the items being stored.
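A toy sketch of the coarse-coded working memory idea (sizes are invented, and the real DCPS stores triples via interacting Boltzmann-machine modules): each unit's receptive field is a random subset of all possible facts, a fact is stored by turning on every unit whose field contains it, and a fact reads as "present" if all of its units are on. Far fewer units than facts suffice, at the cost of occasional false positives as memory fills.

```python
import random

random.seed(1)

SYMBOLS = "ABCDE"
FACTS = [(a, b, c) for a in SYMBOLS for b in SYMBOLS for c in SYMBOLS]  # 125 triples
UNITS = 60                                   # far fewer units than one-per-fact
fields = [set(random.sample(FACTS, 25)) for _ in range(UNITS)]

def units_for(fact):
    return {u for u in range(UNITS) if fact in fields[u]}

active = set()                               # the working memory state

def store(fact):
    active.update(units_for(fact))           # turn on every unit covering the fact

def present(fact):
    us = units_for(fact)
    return bool(us) and us <= active         # all of the fact's units are on

for fact in [("A", "B", "C"), ("C", "A", "D"), ("E", "E", "B")]:
    store(fact)
```

Stored facts are always recognized; unstored facts are occasionally accepted by chance when their units happen to all be covered by other stored items, which is the crosstalk that bounds capacity.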


Hippocampus | 1997

Cognitive maps beyond the hippocampus

A. David Redish; David S. Touretzky

We present a conceptual framework for the role of the hippocampus and its afferent and efferent structures in rodent navigation. Our proposal is compatible with the behavioral, neurophysiological, anatomical, and neuropharmacological literature, and suggests a number of practical experiments that could support or refute it.


Trends in Cognitive Sciences | 2006

Bayesian theories of conditioning in a changing world

Aaron C. Courville; Nathaniel D. Daw; David S. Touretzky

The recent flowering of Bayesian approaches invites the re-examination of classic issues in behavior, even in areas as venerable as Pavlovian conditioning. A statistical account can offer a new, principled interpretation of behavior, and previous experiments and theories can inform many unexplored aspects of the Bayesian enterprise. Here we consider one such issue: the finding that surprising events provoke animals to learn faster. We suggest that, in a statistical account of conditioning, surprise signals change and therefore uncertainty and the need for new learning. We discuss inference in a world that changes and show how experimental results involving surprise can be interpreted from this perspective, and also how, thus understood, these phenomena help constrain statistical theories of animal and human learning.
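A hedged sketch, not the authors' model, of the qualitative point: in a Kalman-filter view of conditioning, a surprising outcome signals that the world may have changed, inflating the estimated uncertainty and hence the learning rate. All constants and the simple change-detection rule are illustrative.

```python
w_hat, P = 0.0, 1.0      # estimated CS->US weight and its uncertainty
R, Q = 1.0, 0.01         # observation noise, baseline drift (process noise)

def trial(us):
    """One conditioning trial with US magnitude `us`; returns the learning rate."""
    global w_hat, P
    P += Q                           # the world drifts a little every trial
    err = us - w_hat                 # prediction error
    if err * err > 9 * (P + R):      # surprising outcome (> 3 sd): assume change
        P += 5.0                     #   -> inflate uncertainty
    gain = P / (P + R)               # Kalman gain = effective learning rate
    w_hat += gain * err
    P *= (1 - gain)
    return gain

rates = [trial(1.0) for _ in range(30)]   # stable phase: US magnitude is 1
shift_rate = trial(10.0)                  # surprising shift: US jumps to 10
```

During the stable phase the gain settles to a small steady value; the surprising trial inflates uncertainty and the animal briefly learns much faster, matching the phenomenon discussed above.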


Hippocampus | 1996

Theory of rodent navigation based on interacting representations of space

David S. Touretzky; A. David Redish

We present a computational theory of navigation in rodents based on interacting representations of place, head direction, and local view. An associated computer model is able to replicate a variety of behavioral and neurophysiological results from the rodent navigation literature. The theory and model generate predictions that are testable with current technologies.


Neural Computation | 1998

The role of the hippocampus in solving the Morris water maze

A. David Redish; David S. Touretzky

We suggest that the hippocampus plays two roles that allow rodents to solve the hidden-platform water maze: self-localization and route replay. When an animal explores an environment such as the water maze, the combination of place fields and correlational (Hebbian) long-term potentiation produces a weight matrix in the CA3 recurrent collaterals such that cells with overlapping place fields are more strongly interconnected than cells with nonoverlapping fields. When combined with global inhibition, this forms an attractor with coherent representations of position as stable states. When biased by local view information, this allows the animal to determine its position relative to the goal when it returns to the environment. We call this self-localization. When an animal traces specific routes within an environment, the weights in the CA3 recurrent collaterals become asymmetric. We show that this stores these routes in the recurrent collaterals. When primed with noise in the absence of sensory input, a coherent representation of position still forms in the CA3 population, but then that representation drifts, retracing a route. We show that these two mechanisms can coexist and form a basis for memory consolidation, explaining the anterograde and limited retrograde amnesia seen following hippocampal lesions.
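An illustrative sketch (not the paper's simulation) of the two mechanisms on a one-dimensional ring of place cells: symmetric overlap-driven weights plus global inhibition settle a cue into a coherent bump (self-localization), and an added asymmetric component, as would arise from repeatedly running a route in one direction, makes the bump drift forward when the network runs on its own (route replay). All sizes and constants are arbitrary.

```python
import numpy as np

N = 100
pos = np.arange(N)
d = (pos[:, None] - pos[None, :] + N // 2) % N - N // 2   # signed ring distance
W_sym = np.exp(-d ** 2 / (2 * 5.0 ** 2))                  # overlap-driven Hebbian weights
W_asym = 0.4 * np.exp(-(d - 2) ** 2 / (2 * 5.0 ** 2))     # forward bias from one-way runs

def settle(W, r, steps):
    for _ in range(steps):
        r = np.maximum(0.0, W @ r - 8.0 * r.sum() / N)    # recurrence + global inhibition
        r /= r.max()                                      # keep rates bounded
    return r

cue = np.exp(-((pos - 30.0) ** 2) / (2 * 5.0 ** 2))       # local-view cue near position 30
bump = settle(W_sym, cue, steps=100)                      # coherent bump: self-localization
replay = settle(W_sym + W_asym, bump, steps=40)           # asymmetric weights shift the bump
drift = (int(replay.argmax()) - int(bump.argmax())) % N   # forward distance traveled
```

With the symmetric weights alone the bump stays put; the small asymmetric component turns the stable state into a slowly traveling one, retracing the stored direction of motion.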


Artificial Intelligence | 1990

BoltzCONS: dynamic symbol structures in a connectionist network

David S. Touretzky

BoltzCONS is a connectionist model that dynamically creates and manipulates composite symbol structures. These structures are implemented using a functional analog of linked lists, but BoltzCONS employs distributed representations and associative retrieval in place of a conventional memory organization. Associative retrieval leads to some interesting properties, e.g., the model can instantaneously access any uniquely-named internal node of a tree. But the point of the work is not to reimplement linked lists in some peculiar new way; it is to show how neural networks can exhibit compositionality and distal access (the ability to reference a complex structure via an abbreviated tag), two properties that distinguish symbol processing from lower-level cognitive functions such as pattern recognition. Unlike certain other neural net models, BoltzCONS represents objects as a collection of superimposed activity patterns rather than as a set of weights. It can therefore create new structured objects dynamically, without reliance on iterative training procedures, without rehearsal of previously-learned patterns, and without resorting to grandmother cells.
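The functional idea, stripped of the network machinery, can be sketched symbolically: a list is a set of (tag, car, cdr) triples, and any cell is retrieved associatively by its tag, giving distal access via an abbreviated label rather than pointer-chasing from the head. The tags and helper names here are invented for the demo.

```python
memory = set()                       # the "working memory" of stored triples

def cons(tag, car, cdr):
    memory.add((tag, car, cdr))

def fetch(tag):
    # associative retrieval: find the unique triple carrying this tag
    return next(t for t in memory if t[0] == tag)

# build the list (a b c) as linked cells tagged t1 -> t2 -> t3
cons("t1", "a", "t2")
cons("t2", "b", "t3")
cons("t3", "c", None)

def to_list(tag):
    out = []
    while tag is not None:
        _, car, cdr = fetch(tag)     # any cell is directly addressable by tag
        out.append(car)
        tag = cdr
    return out
```

Note that `fetch("t2")` reaches the middle of the structure in one step, which is the "instantaneous access to any uniquely-named internal node" the abstract describes.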


Robotics and Autonomous Systems | 1997

Shaping robot behavior using principles from instrumental conditioning

Lisa M. Saksida; Scott M. Raymond; David S. Touretzky

Shaping by successive approximations is an important animal training technique in which behavior is gradually adjusted in response to strategically timed reinforcements. We describe a computational model of this shaping process and its implementation on a mobile robot. Innate behaviors in our model are sequences of actions and enabling conditions, and shaping is a behavior editing process realized by multiple editing mechanisms. The model replicates some fundamental phenomena associated with instrumental learning in animals, and allows an RWI B21 robot to learn several distinct tasks derived from the same innate behavior.
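A hedged toy illustration (not the paper's behavior-editing model) of shaping by successive approximation: the trainer reinforces responses that exceed a criterion, raises the criterion as behavior improves, and the response distribution gradually shifts toward a target it would almost never reach spontaneously. All numbers are invented.

```python
import random

random.seed(2)

mean, spread = 1.0, 0.5      # current response tendency (e.g., lever-hold time)
criterion = 1.2              # the trainer's current bar; the target is 5.0

for _ in range(500):
    response = random.gauss(mean, spread)
    if response > criterion:                     # strategically timed reinforcement
        mean += 0.2 * (response - mean)          # reinforced variants pull behavior
        criterion = min(5.0, criterion + 0.05)   # raise the bar gradually
```

Reinforcing only above-criterion variants keeps the reinforcement rate high while steadily dragging the whole distribution toward the target, which is the essence of the technique the model formalizes.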


Adaptive Behavior | 1997

Operant conditioning in skinnerbots

David S. Touretzky; Lisa M. Saksida

Instrumental (or operant) conditioning, a form of animal learning, is similar to reinforcement learning (Watkins, 1989) in that it allows an agent to adapt its actions to gain maximally from the environment while being rewarded only for correct performance. However, animals learn much more complicated behaviors through instrumental conditioning than robots presently acquire through reinforcement learning. We describe a new computational model of the conditioning process that attempts to capture some of the aspects that are missing from simple reinforcement learning: conditioned reinforcers, shifting reinforcement contingencies, explicit action sequencing, and state space refinement. We apply our model to a task commonly used to study working memory in rats and monkeys—the delayed match-to-sample task. Animals learn this task in stages. In simulation, our model also acquires the task in stages, in a similar manner. We have used the model to train an RWI B21 robot.
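A hedged toy illustration (not the authors' skinnerbot model) of one of the ingredients named above, the conditioned reinforcer: under temporal-difference learning, a stimulus that reliably precedes food acquires value of its own, and that value then reinforces the earlier action that produced it, which is how chains of behavior can be learned. The states, actions, and constants are invented.

```python
import random

random.seed(0)
alpha, gamma = 0.3, 0.9
V_light = 0.0                                  # value of the light (the conditioned stimulus)
Q = {"press": 0.0, "wander": 0.0}              # action values in the start state

for _ in range(300):
    action = "press" if random.random() < 0.5 else "wander"
    if action == "press":
        V_light += alpha * (1.0 - V_light)     # the light is always followed by food
        target = gamma * V_light               # the light's value backs up to the action
    else:
        target = 0.0                           # wandering leads nowhere
    Q[action] += alpha * (target - Q[action])
```

The lever press is never directly paired with food here; it is reinforced entirely by the learned value of the light, so the press acquires high value while wandering stays at zero.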

Collaboration

Top co-authors of David S. Touretzky:

Mark C. Fuhs (Carnegie Mellon University)
Robert Thibadeau (Carnegie Mellon University)