Network


Latest external collaborations at the country level.

Hotspot


Research topics in which David Servan-Schreiber is active.

Publication


Featured research published by David Servan-Schreiber.


Psychological Review | 1992

Context, cortex, and dopamine: A connectionist approach to behavior and biology in schizophrenia

Jonathan D. Cohen; David Servan-Schreiber

Connectionist models are used to explore the relationship between cognitive deficits and biological abnormalities in schizophrenia. Schizophrenic deficits in tasks that tap attention and language processing are reviewed, as are biological disturbances involving prefrontal cortex and the mesocortical dopamine system. Three computer models are then presented that simulate normal and schizophrenic performance in the Stroop task, the continuous performance test, and a lexical disambiguation task. They demonstrate that a disturbance in the internal representation of contextual information can provide a common explanation for schizophrenic deficits in several attention- and language-related tasks. The models also show that these behavioral deficits may arise from a disturbance in a model parameter (gain) corresponding to the neuromodulatory effects of dopamine, in a model component corresponding to the function of prefrontal cortex.
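
As a concrete illustration of the gain manipulation described above, here is a minimal sketch (my own, not the authors' simulation code): the neuromodulatory effect of dopamine is modeled as a multiplicative gain on a logistic unit's net input, so that reducing gain flattens the unit's response and blurs the distinction between weakly and strongly supported context representations. The function name, bias, and specific values are illustrative assumptions.

```python
import numpy as np

def unit_activation(net_input, gain=1.0, bias=-2.0):
    """Logistic activation with a multiplicative gain term.

    In this framework, gain stands in for the neuromodulatory effect of
    dopamine on a unit's responsivity; the bias value here is illustrative.
    """
    return 1.0 / (1.0 + np.exp(-(gain * net_input + bias)))

# With normal gain, weak and strong context signals are well separated;
# with reduced gain (the "schizophrenic" simulation), the separation shrinks.
for gain in (1.0, 0.5):
    weak, strong = unit_activation(1.0, gain), unit_activation(3.0, gain)
    print(f"gain={gain}: weak context -> {weak:.2f}, strong context -> {strong:.2f}")
```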


Human Brain Mapping | 1994

Activation of the prefrontal cortex in a nonspatial working memory task with functional MRI

Jonathan D. Cohen; Steven D. Forman; Todd S. Braver; B.J. Casey; David Servan-Schreiber; Douglas C. Noll

Functional magnetic resonance imaging (fMRI) was used to examine the pattern of activity of the prefrontal cortex during performance of subjects in a nonspatial working memory task. Subjects observed sequences of letters and responded whenever a letter repeated with exactly one nonidentical letter intervening. In a comparison task, subjects monitored similar sequences of letters for any occurrence of a single, prespecified target letter. Functional scanning was performed using a newly developed spiral scan image acquisition technique that provides high‐resolution, multislice scanning at approximately five times the rate usually possible on conventional equipment (an average of one image per second). Using these methods, activation of the middle and inferior frontal gyri was reliably observed within individual subjects during performance of the working memory task relative to the comparison task. Effect sizes (2–4%) closely approximated those that have been observed within primary sensory and motor cortices using similar fMRI techniques. Furthermore, activation increased and decreased with a time course that was highly consistent with the task manipulations. These findings corroborate the results of positron emission tomography studies, which suggest that the prefrontal cortex is engaged by tasks that rely on working memory. Furthermore, they demonstrate the applicability of newly developed fMRI techniques using conventional scanners to study the associative cortex in individual subjects.
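
For concreteness, here is a small sketch (my own illustration, not the study's stimulus code) of the two conditions described: the memory condition requires detecting a letter that repeats with exactly one nonidentical letter intervening, while the comparison condition only requires spotting a fixed, prespecified target letter.

```python
def memory_hits(letters):
    """Indices where a letter repeats with exactly one nonidentical
    letter intervening (the working memory condition)."""
    return [i for i in range(2, len(letters))
            if letters[i] == letters[i - 2] and letters[i - 1] != letters[i]]

def target_hits(letters, target="X"):
    """Indices of a single prespecified target (the comparison condition)."""
    return [i for i, ch in enumerate(letters) if ch == target]

seq = list("AGAXBXQ")
print(memory_hits(seq))        # [2, 5] -> the A.A and X.X repeats
print(target_hits(seq, "X"))   # [3, 5]
```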


Journal of Abnormal Psychology | 1999

Context-processing deficits in schizophrenia: converging evidence from three theoretically motivated cognitive tasks.

Jonathan D. Cohen; Cameron S. Carter; David Servan-Schreiber

To test the hypothesis that the ability to actively represent and maintain context information is a central function of working memory, and that a disturbance in this function contributes to cognitive deficits in schizophrenia, the authors modified three tasks (the AX version of the Continuous Performance Test, the Stroop task, and a lexical disambiguation task) and administered them to patients with schizophrenia as well as to depressed and healthy controls. The results suggest an accentuation of deficits in patients with schizophrenia in context-sensitive conditions and cross-task correlations of performance in these conditions. However, the results do not definitively eliminate the possibility of a generalized deficit. The significance of these findings is discussed with regard to the specificity of deficits in schizophrenia and the hypothesis concerning the neural and cognitive mechanisms that underlie these deficits.
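
To make the role of context explicit, here is a hypothetical simplification of the trial logic in the AX version of the Continuous Performance Test, one of the three tasks above; the function and specific letters are illustrative, not the authors' task code.

```python
def ax_cpt_response(cue, probe):
    """Target response is correct only for an 'X' probe preceded by an 'A' cue.

    Maintaining the cue (the context) across the delay is what impaired
    context processing is hypothesized to disrupt: without the cue,
    'BX' trials are easily confused with 'AX' trials.
    """
    return "target" if (cue == "A" and probe == "X") else "nontarget"

for cue, probe in [("A", "X"), ("B", "X"), ("A", "Y"), ("B", "Y")]:
    print(cue + probe, "->", ax_cpt_response(cue, probe))
```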


Neural Computation | 1989

Finite state automata and simple recurrent networks

Axel Cleeremans; David Servan-Schreiber; James L. McClelland

We explore a network architecture introduced by Elman (1988) for predicting successive elements of a sequence. The network uses the pattern of activation over a set of hidden units from time-step t-1, together with element t, to predict element t+1. When the network is trained with strings from a particular finite-state grammar, it can learn to be a perfect finite-state recognizer for the grammar. When the network has a minimal number of hidden units, patterns on the hidden units come to correspond to the nodes of the grammar, although this correspondence is not necessary for the network to act as a perfect finite-state recognizer. We explore the conditions under which the network can carry information about distant sequential contingencies across intervening elements. Such information is maintained with relative ease if it is relevant at each intermediate step; it tends to be lost when intervening elements do not depend on it. At first glance this may suggest that such networks are not relevant to natural language, in which dependencies may span indefinite distances. However, embeddings in natural language are not completely independent of earlier information. The final simulation shows that long distance sequential contingencies can be encoded by the network even if only subtle statistical properties of embedded strings depend on the early information.
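
Below is a minimal numpy sketch of the architecture described: the hidden pattern from time-step t-1 is fed back alongside the input element at time t to predict the element at t+1. The class name, network size, learning rate, one-step truncated backpropagation, and the toy training sequence are my own illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

class SimpleRecurrentNetwork:
    """Elman-style SRN: hidden(t) = tanh(W_in x(t) + W_rec hidden(t-1));
    output(t) = softmax(W_out hidden(t)) is the prediction for element t+1."""

    def __init__(self, n_symbols, n_hidden=8, lr=0.1):
        self.W_in = rng.normal(0, 0.5, (n_hidden, n_symbols))
        self.W_rec = rng.normal(0, 0.5, (n_hidden, n_hidden))
        self.W_out = rng.normal(0, 0.5, (n_symbols, n_hidden))
        self.lr = lr
        self.n_hidden = n_hidden

    def step(self, x, h_prev):
        h = np.tanh(self.W_in @ x + self.W_rec @ h_prev)
        z = self.W_out @ h
        p = np.exp(z - z.max()); p /= p.sum()
        return h, p

    def train_sequence(self, symbols):
        """One pass of one-step (truncated) backpropagation through the sequence."""
        h = np.zeros(self.n_hidden)
        for t in range(len(symbols) - 1):
            x = np.eye(self.W_in.shape[1])[symbols[t]]
            target = np.eye(self.W_out.shape[0])[symbols[t + 1]]
            h_prev = h
            h, p = self.step(x, h_prev)
            # Cross-entropy gradient, propagated only one step back in time.
            d_out = p - target
            d_h = (self.W_out.T @ d_out) * (1 - h ** 2)
            self.W_out -= self.lr * np.outer(d_out, h)
            self.W_in -= self.lr * np.outer(d_h, x)
            self.W_rec -= self.lr * np.outer(d_h, h_prev)

# Toy usage: learn to predict the next symbol in repetitions of 0 1 2 0 1 2 ...
net = SimpleRecurrentNetwork(n_symbols=3)
for _ in range(200):
    net.train_sequence([0, 1, 2] * 10)
h, p = net.step(np.eye(3)[0], np.zeros(net.n_hidden))
print("P(next | saw 0) ->", np.round(p, 2))   # should concentrate on symbol 1
```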


American Journal of Psychology | 1992

A parallel distributed processing approach to automaticity

Jonathan D. Cohen; David Servan-Schreiber; James L. McClelland

We consider how a particular set of information processing principles, developed within the parallel distributed processing (PDP) framework, can address issues concerning automaticity. These principles include graded, activation-based processing that is subject to attentional modulation; incremental, connection-based learning; and interactivity and competition in processing. We show how simulation models, based on these principles, can account for the major phenomena associated with automaticity, as well as many of those that have been troublesome for more traditional theories. In particular, we show how the PDP framework provides an alternative to the usual dichotomy between automatic and controlled processing and can explain the relative nature of automaticity as well as the fact that seemingly automatic processes can be influenced by attention. We also discuss how this framework can provide insight into the role that bidirectional influences play in processing: that is, how attention can influence processing at the same time that processing influences attention. Simulation models of the Stroop color-word task and the Eriksen response-competition task are described that help illustrate the application of the principles to performance in specific behavioral tasks.
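
The pathway-strength idea can be sketched in a few lines. In this toy illustration (the weights, bias, and function names are assumptions, not the published simulation parameters), the word-reading pathway has stronger connections than the color-naming pathway, and attention simply adds input to the attended pathway; interference then falls out of the asymmetry in strength rather than from an all-or-none automatic/controlled dichotomy.

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def pathway_activations(attend_to, word_present=1.0, color_present=1.0):
    """Toy two-pathway sketch: the word pathway has stronger ('more automatic')
    connections than the color-naming pathway, and an attention signal adds
    input to the attended pathway. All values are illustrative."""
    word_weight, color_weight, attention_gain, bias = 3.0, 1.5, 2.0, -2.0
    attn_word = attention_gain if attend_to == "word" else 0.0
    attn_color = attention_gain if attend_to == "color" else 0.0
    word_act = logistic(word_weight * word_present + attn_word + bias)
    color_act = logistic(color_weight * color_present + attn_color + bias)
    return word_act, color_act

# On an incongruent trial the two pathways converge on competing responses.
# Attending to the color still leaves the word pathway strongly active
# (interference), whereas attending to the word largely silences the color
# pathway -- automaticity as a matter of degree, not an all-or-none property.
print("attend color:", np.round(pathway_activations("color"), 2))  # ~ [0.73, 0.82]
print("attend word :", np.round(pathway_activations("word"), 2))   # ~ [0.95, 0.38]
```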


Machine Learning | 1991

Graded State Machines: The Representation of Temporal Contingencies in Simple Recurrent Networks

David Servan-Schreiber; Axel Cleeremans; James L. McClelland

We explore a network architecture introduced by Elman (1990) for predicting successive elements of a sequence. The network uses the pattern of activation over a set of hidden units from time-step t-1, together with element t, to predict element t+1. When the network is trained with strings from a particular finite-state grammar, it can learn to be a perfect finite-state recognizer for the grammar. When the net has a minimal number of hidden units, patterns on the hidden units come to correspond to the nodes of the grammar, although this correspondence is not necessary for the network to act as a perfect finite-state recognizer. Next, we provide a detailed analysis of how the network acquires its internal representations. We show, by means of a probability analysis, that the network progressively encodes more and more temporal context. Finally, we explore the conditions under which the network can carry information about distant sequential contingencies across intervening elements. Such information is maintained with relative ease if it is relevant at each intermediate step; it tends to be lost when intervening elements do not depend on it. At first glance this may suggest that such networks are not relevant to natural language, in which dependencies may span indefinite distances. However, embeddings in natural language are not completely independent of earlier information. The final simulation shows that long-distance sequential contingencies can be encoded by the network even if only subtle statistical properties of embedded strings depend on the early information. The network encodes long-distance dependencies by shading internal representations that are responsible for processing common embeddings in otherwise different sequences. This ability to represent simultaneously similarities and differences between several sequences relies on the graded nature of the representations used by the network, which contrasts with the finite states of traditional automata. For this reason, the network and other similar architectures may be called Graded State Machines.
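
For reference, this is the kind of training material such networks receive: strings generated by a random walk through a small finite-state (Reber-style) grammar. The particular transition table below is illustrative and not necessarily the grammar used in the paper.

```python
import random

# A small finite-state grammar: state -> list of (symbol, next_state).
# 'end' marks the accepting state. This toy transition table is illustrative.
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 4)],
    3: [("X", 2), ("S", "end")],
    4: [("P", 3), ("V", "end")],
}

def generate_string(rng=random):
    """Random walk through the grammar from the start state to the end state."""
    state, symbols = 0, ["B"]            # 'B' = begin marker
    while state != "end":
        symbol, state = rng.choice(GRAMMAR[state])
        symbols.append(symbol)
    symbols.append("E")                  # 'E' = end marker
    return "".join(symbols)

random.seed(1)
training_strings = [generate_string() for _ in range(5)]
print(training_strings)
```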


Journal of Cognitive Neuroscience | 1994

Mechanisms of spatial attention: The relation of macrostructure to microstructure in parietal neglect

Jonathan D. Cohen; Richard Romero; David Servan-Schreiber; Martha J. Farah

Parietal-damaged patients respond abnormally slowly to targets presented in the affected hemifield when preceded by cues in the intact hemifield. This inability to disengage attention from the ipsilesional field to reengage it in the contralesional field has been interpreted as evidence for a distinct disengage mechanism, localized in parietal cortex. We present a computational model that accounts for normal attentional effects by interactivity and competition among representations of different locations in space, without a dedicated disengage mechanism. We show that when the model is lesioned, it produces the disengage deficit shown by parietal-damaged patients. This suggests that the deficit observed in such patients can be understood as an emergent property of interactions among the remaining parts of the system, and need not imply the existence of a dedicated disengage mechanism in the normal brain.
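
A toy sketch of the interactivity-and-competition account (my own simplification, with illustrative parameters rather than the published model's): two location units inhibit each other, a cue pre-activates one of them, and "damage" is modeled simply as attenuated bottom-up input to the other side, which is then slower to overcome the cued location's activation.

```python
import numpy as np

def steps_to_detect_target(cue_activation, target_input, steps=300,
                           inhibition=0.6, rate=0.1, threshold=0.7):
    """Two mutually inhibitory 'location' units. The cue has already activated
    one unit; the target then drives the other. Returns the number of update
    steps until the target unit crosses threshold (a crude stand-in for RT).
    All parameters are illustrative."""
    cued, target = cue_activation, 0.0
    for t in range(1, steps + 1):
        cued_in = np.clip(0.0 - inhibition * target, 0, 1)   # the cue itself is gone
        targ_in = np.clip(target_input - inhibition * cued, 0, 1)
        cued, target = cued + rate * (cued_in - cued), target + rate * (targ_in - target)
        if target >= threshold:
            return t
    return steps

# A target in the intact field gets full bottom-up input; damage to the other
# field is modeled as attenuated input, which slows "disengagement" from the
# cued location without any dedicated disengage mechanism.
print("intact field :", steps_to_detect_target(cue_activation=0.9, target_input=1.0))
print("damaged field:", steps_to_detect_target(cue_activation=0.9, target_input=0.8))
```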


Trends in Cognitive Sciences | 1997

Computational modeling of emotion: explorations through the anatomy and physiology of fear conditioning.

Jorge L. Armony; David Servan-Schreiber; Jonathan D. Cohen; Joseph E. LeDoux

Recent discoveries about the neural system and cellular mechanisms in pathways mediating classical fear conditioning have provided a foundation for pursuing concurrent connectionist models of this form of emotional learning. The models described are constrained by the known anatomy underlying the behavior being simulated. To date, implementations capture salient features of fear learning, both at the level of behavior and at the level of single cells, and additionally make use of generic biophysical constraints to mimic fundamental excitatory and inhibitory transmission properties. Owing to the modular nature of the systems model, biophysical modeling can be carried out in a single region, in this case the amygdala. Future directions include application of the biophysical model to questions about temporal summation in the two sensory input paths to amygdala, and modeling of an attentional interrupt signal that will extend the emotional processing model to interactions with cognitive systems.


Behavioral Neuroscience | 1995

An Anatomically Constrained Neural Network Model of Fear Conditioning

Jorge L. Armony; David Servan-Schreiber; Jonathan D. Cohen; Joseph E. LeDoux

Conditioning of fear reactions to an auditory conditioned stimulus (CS) paired with a footshock unconditioned stimulus (US) involves CS transmission to the amygdala from the auditory thalamus, the auditory cortex, or both. This article presents a simple neural network model of this neural system. The model consists of modules of mutually inhibitory nonlinear units representing the different relevant anatomical structures of the thalamo-amygdala and thalamo-corticoamygdala circuitry. Frequency-specific changes produced by fear conditioning were studied at the behavioral level (stimulus generalization) and the single-unit level (receptive fields). The findings mirror effects observed in conditioning studies of animals. This computational model provides an initial grounding for explorations of how emotional information and behavior are related to anatomical and physiological observations.
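
To illustrate the kind of frequency-specific receptive-field change described, here is a toy sketch (the names, tuning curves, and learning rule are my own illustrative assumptions; the published model uses a more elaborate competitive, anatomically constrained architecture): pairing a CS frequency with a US strengthens inputs around that frequency, shifting a unit's best frequency toward the CS.

```python
import numpy as np

frequencies = np.arange(1, 11)                 # 10 input channels (arbitrary units)

def gaussian_tuning(center, width=1.5):
    return np.exp(-((frequencies - center) ** 2) / (2 * width ** 2))

# A single amygdala-like unit initially tuned to frequency 5.
weights = gaussian_tuning(center=5)

def condition(weights, cs_frequency, us_strength=1.0, lr=0.3, trials=20):
    """Hebbian-style update gated by the US: inputs active at the CS frequency
    gain weight on reinforced trials. A toy stand-in for the plasticity rule."""
    w = weights.copy()
    for _ in range(trials):
        cs_input = gaussian_tuning(center=cs_frequency)
        w += lr * us_strength * cs_input * w     # gated, activity-dependent change
        w /= w.max()                              # keep weights bounded
    return w

before = weights
after = condition(weights, cs_frequency=8)
print("best frequency before:", frequencies[np.argmax(before)])   # 5
print("best frequency after :", frequencies[np.argmax(after)])    # shifted toward 8
```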


Biological Psychiatry | 1998

Dopamine and the mechanisms of cognition: Part II. D-amphetamine effects in human subjects performing a selective attention task

David Servan-Schreiber; Cameron S. Carter; Randy M. Bruno; Jonathan D. Cohen

BACKGROUND: A neural network computer model described in a companion paper predicted the effects of increased dopamine transmission on selective attention under two different hypotheses. METHODS: To evaluate these predictions we conducted an empirical study in human subjects of D-amphetamine effects on performance of the Eriksen response competition task. Ten healthy volunteers were tested before and after placebo or D-amphetamine in a double-blind cross-over design. RESULTS: D-amphetamine induced a speeding of reaction time overall and an improvement of accuracy at fast reaction times, but only in the task condition requiring selective attention. CONCLUSIONS: This pattern of results conforms to the prediction of the model under the hypothesis that D-amphetamine primarily affects dopamine transmission in cognitive rather than motor networks. This suggests that the principles embodied in parallel distributed processing models of task performance may be sufficient to predict and explain specific behavioral effects of some drug actions in the central nervous system.

Collaboration


An overview of David Servan-Schreiber's collaborators.

Top Co-Authors

Axel Cleeremans

Université libre de Bruxelles

Todd S. Braver

Washington University in St. Louis

Harry Printz

Carnegie Mellon University
