Stephen V. David
University of Maryland, College Park
Publications
Featured research published by Stephen V. David.
Electroencephalography and Clinical Neurophysiology | 1997
Dennis J. McFarland; Lynn M. McCane; Stephen V. David; Jonathan R. Wolpaw
Individuals can learn to control the amplitude of mu-rhythm activity in the EEG recorded over sensorimotor cortex and use it to move a cursor to a target on a video screen. The speed and accuracy of cursor movement depend on the consistency of the control signal and on the signal-to-noise ratio achieved by the spatial and temporal filtering methods that extract the activity prior to its translation into cursor movement. The present study compared alternative spatial filtering methods. Sixty-four-channel EEG data collected while well-trained subjects were moving the cursor to targets at the top or bottom edge of a video screen were analyzed offline with four different spatial filters: a standard ear reference, a common average reference (CAR), a small Laplacian (3 cm to the set of surrounding electrodes), and a large Laplacian (6 cm to the set of surrounding electrodes). The CAR and large Laplacian methods proved best able to distinguish between top and bottom targets, and were significantly superior to the ear-reference method. The difference in performance between the large Laplacian and small Laplacian methods presumably indicated that the former was better matched to the topographical extent of the EEG control signal. The results as a whole demonstrate the importance of proper spatial filter selection for maximizing the signal-to-noise ratio and thereby improving the speed and accuracy of EEG-based communication.
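The referencing schemes compared in this study are simple linear operations on the multichannel recording. As an illustration (not the study's code), here is a minimal NumPy sketch of a common average reference and a surface Laplacian; the neighbor sets (the 3 cm or 6 cm electrode rings) are supplied as a hypothetical lookup table:

```python
import numpy as np

def common_average_reference(eeg):
    """Subtract the mean across all channels from every channel.

    eeg: array of shape (n_channels, n_samples).
    """
    eeg = np.asarray(eeg, dtype=float)
    return eeg - eeg.mean(axis=0, keepdims=True)

def laplacian(eeg, neighbors):
    """Surface Laplacian: subtract the mean of each channel's neighbors.

    neighbors: dict mapping channel index -> list of surrounding channel
    indices (e.g. the 3 cm ring for a small Laplacian, the 6 cm ring for
    a large one). Channels without an entry are left unreferenced.
    """
    eeg = np.asarray(eeg, dtype=float)
    out = eeg.copy()
    for ch, nbrs in neighbors.items():
        out[ch] = eeg[ch] - eeg[nbrs].mean(axis=0)
    return out
```

The only difference between the small and large Laplacian in this sketch is which electrodes appear in each channel's neighbor list, which is exactly the spatial-scale choice the study evaluated.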
Current Opinion in Neurobiology | 2007
Jonathan B. Fritz; Mounya Elhilali; Stephen V. David; Shihab A. Shamma
Some fifty years after the first physiological studies of auditory attention, the field is now ripening, with exciting recent insights into the psychophysics, psychology, and neural basis of auditory attention. Current research seeks to unravel the complex interactions of pre-attentive and attentive processing of the acoustic scene, the role of auditory attention in mediating receptive-field plasticity in both auditory spatial and auditory feature processing, the contrasts and parallels between auditory and visual attention pathways and mechanisms, the interplay of bottom-up and top-down attentional mechanisms, the influential role of attention, goals, and expectations in shaping auditory processing, and the orchestration of diverse attentional effects at multiple levels from the cochlea to the cortex.
PLOS Biology | 2012
Brian N. Pasley; Stephen V. David; Nima Mesgarani; Adeen Flinker; Shihab A. Shamma; Nathan E. Crone; Robert T. Knight; Edward F. Chang
Direct brain recordings from neurosurgical patients listening to speech reveal that the acoustic speech signals can be reconstructed from neural activity in auditory cortex.
The Journal of Neuroscience | 2004
Stephen V. David; William E. Vinje; Jack L. Gallant
Studies of the primary visual cortex (V1) have produced models that account for neuronal responses to synthetic stimuli such as sinusoidal gratings. Little is known about how these models generalize to activity during natural vision. We recorded neural responses in area V1 of awake macaques to a stimulus with natural spatiotemporal statistics and to a dynamic grating sequence stimulus. We fit nonlinear receptive field models using each of these data sets and compared how well they predicted time-varying responses to a novel natural visual stimulus. On average, the model fit using the natural stimulus predicted natural visual responses more than twice as accurately as the model fit to the synthetic stimulus. The natural vision model produced better predictions in >75% of the neurons studied. This large difference in predictive power suggests that natural spatiotemporal stimulus statistics activate nonlinear response properties in a different manner than the grating stimulus. To characterize this modulation, we compared the temporal and spatial response properties of the model fits. During natural stimulation, temporal responses often showed a stronger late inhibitory component, indicating an effect of nonlinear temporal summation during natural vision. In addition, spatial tuning underwent complex shifts, primarily in the inhibitory, rather than excitatory, elements of the response profile. These differences in late and spatially tuned inhibition accounted fully for the difference in predictive power between the two models. Both the spatial and temporal statistics of the natural stimulus contributed to the modulatory effects.
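The fit-and-cross-predict comparison described in this abstract can be illustrated with a much-simplified stand-in (the paper's receptive-field models are nonlinear). This sketch fits a ridge-regularized linear receptive field to one stimulus set and scores it by correlating predicted and observed responses on held-out data; all names and parameters are illustrative:

```python
import numpy as np

def fit_linear_rf(stimulus, response, ridge=1.0):
    """Ridge-regularized least-squares fit of a linear receptive field.

    stimulus: (n_timepoints, n_features) design matrix
    response: (n_timepoints,) observed firing rate
    """
    n_features = stimulus.shape[1]
    gram = stimulus.T @ stimulus + ridge * np.eye(n_features)
    return np.linalg.solve(gram, stimulus.T @ response)

def prediction_accuracy(weights, stimulus, response):
    """Correlation between the model's predicted response and the
    observed time-varying response on a novel stimulus."""
    predicted = stimulus @ weights
    return np.corrcoef(predicted, response)[0, 1]
```

In the study's design, one model would be fit on natural-movie responses and another on grating responses, and both would be scored with something like `prediction_accuracy` on responses to a novel natural stimulus.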
Neuron | 2009
Serin Atiani; Mounya Elhilali; Stephen V. David; Jonathan B. Fritz; Shihab A. Shamma
Attention is essential for navigating complex acoustic scenes, when the listener seeks to extract a foreground source while suppressing background acoustic clutter. This study explored the neural correlates of this perceptual ability by measuring rapid changes of spectrotemporal receptive fields (STRFs) in primary auditory cortex during detection of a target tone embedded in noise. Compared with responses in the passive state, STRF gain decreased during task performance in most cells. By contrast, STRF shape changes were excitatory and specific, and were strongest in cells with best frequencies near the target tone. The net effect of these adaptations was to accentuate the representation of the target tone relative to the noise by enhancing responses of near-target cells to the tone during high-signal-to-noise ratio (SNR) tasks while suppressing responses of far-from-target cells to the masking noise in low-SNR tasks. These adaptive STRF changes were largest in high-performance sessions, confirming a close correlation with behavior.
Proceedings of the National Academy of Sciences of the United States of America | 2012
Stephen V. David; Jonathan B. Fritz; Shihab A. Shamma
As sensory stimuli and behavioral demands change, the attentive brain quickly identifies task-relevant stimuli and associates them with appropriate motor responses. The effects of attention on sensory processing vary across task paradigms, suggesting that the brain may use multiple strategies and mechanisms to highlight attended stimuli and link them to motor action. To better understand factors that contribute to these variable effects, we studied sensory representations in primary auditory cortex (A1) during two instrumental tasks that shared the same auditory discrimination but required different behavioral responses, either approach or avoidance. In the approach task, ferrets were rewarded for licking a spout when they heard a target tone amid a sequence of reference noise sounds. In the avoidance task, they were punished unless they inhibited licking to the target. To explore how these changes in task reward structure influenced attention-driven rapid plasticity in A1, we measured changes in sensory neural responses during behavior. Responses to the target changed selectively during both tasks but did so with opposite sign. Despite the differences in sign, both effects were consistent with a general neural coding strategy that maximizes discriminability between sound classes. The dependence of the direction of plasticity on task suggests that representations in A1 change not only to sharpen representations of task-relevant stimuli but also to amplify responses to stimuli that signal aversive outcomes and lead to behavioral inhibition. Thus, top-down control of sensory processing can be shaped by task reward structure in addition to the required sensory discrimination.
Journal of the Acoustical Society of America | 2008
Nima Mesgarani; Stephen V. David; Jonathan B. Fritz; Shihab A. Shamma
A controversial issue in neurolinguistics is whether basic neural auditory representations found in many animals can account for human perception of speech. This question was addressed by examining how a population of neurons in the primary auditory cortex (A1) of the naive awake ferret encodes phonemes and whether this representation could account for the human ability to discriminate them. When neural responses were characterized and ordered by spectral tuning and dynamics, perceptually significant features, including formant patterns in vowels and place and manner of articulation in consonants, were readily visualized by activity in distinct neural subpopulations. Furthermore, these responses faithfully encoded the similarity between the acoustic features of these phonemes. A simple classifier trained on the neural representation was able to simulate human phoneme confusion when tested with novel exemplars. These results suggest that A1 responses are sufficiently rich to encode and discriminate phoneme classes and that humans and animals may build upon the same general acoustic representations to learn boundaries for categorical and robust sound classification.
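The classifier analysis can be sketched with a deliberately simple stand-in, not the paper's actual classifier: a nearest-centroid classifier over population response vectors whose errors on held-out exemplars populate a phoneme confusion matrix, the object compared against human confusion data.

```python
import numpy as np

def phoneme_confusion(train_X, train_y, test_X, test_y, n_classes):
    """Nearest-centroid classification of population response vectors.

    train_X/test_X: (n_trials, n_neurons) response vectors
    train_y/test_y: integer phoneme-class labels

    Returns an (n_classes, n_classes) matrix whose entry [i, j] counts
    how often true class i was labeled as class j; off-diagonal mass
    is the confusion pattern.
    """
    centroids = np.stack([train_X[train_y == c].mean(axis=0)
                          for c in range(n_classes)])
    conf = np.zeros((n_classes, n_classes), dtype=int)
    for x, y in zip(test_X, test_y):
        pred = int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
        conf[y, pred] += 1
    return conf
```

Phoneme pairs whose neural representations lie close together produce off-diagonal entries, which is what would be correlated with human confusion rates.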
Nature Neuroscience | 2010
Jonathan B. Fritz; Stephen V. David; Susanne Radtke-Schuller; Pingbo Yin; Shihab A. Shamma
Top-down signals from frontal cortex are thought to be important in cognitive control of sensory processing. To explore this interaction, we compared activity in ferret frontal cortex and primary auditory cortex (A1) during auditory and visual tasks requiring discrimination between classes of reference and target stimuli. Frontal cortex responses were behaviorally gated, selectively encoded the timing and invariant behavioral meaning of target stimuli, could be rapid in onset, and sometimes persisted for hours following behavior. These results are consistent with earlier findings in A1 that attention triggered rapid, selective, persistent, task-related changes in spectrotemporal receptive fields. Simultaneously recorded local field potentials revealed behaviorally gated changes in inter-areal coherence that were selectively modulated between frontal cortex and focal regions of A1 that were responsive to target sounds. These results suggest that A1 and frontal cortex dynamically establish a functional connection during auditory behavior that shapes the flow of sensory information and maintains a persistent trace of recent task-relevant stimulus features.
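The inter-areal coherence measure used in this study can be estimated from simultaneously recorded local field potentials with Welch's averaged-periodogram method. A minimal SciPy sketch follows; the band edges and segment length are illustrative choices, not the paper's analysis parameters:

```python
import numpy as np
from scipy.signal import coherence

def band_coherence(x, y, fs, band=(4.0, 12.0), nperseg=256):
    """Average magnitude-squared coherence between two LFP traces
    within a frequency band, estimated with Welch's method.

    x, y: 1-D arrays of simultaneously sampled field potentials
    fs:   sampling rate in Hz
    Returns a scalar in [0, 1] indexing coupling in that band.
    """
    freqs, cxy = coherence(x, y, fs=fs, nperseg=nperseg)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(cxy[mask].mean())
```

Comparing this scalar between passive and behaving epochs, for frontal and A1 electrode pairs, is one simple way to quantify the behaviorally gated coherence changes the abstract describes.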
Neuron | 2008
Stephen V. David; Benjamin Y. Hayden; James A. Mazer; Jack L. Gallant
Previous neurophysiological studies suggest that attention can alter the baseline or gain of neurons in extrastriate visual areas but that it cannot change tuning. This suggests that neurons in visual cortex function as labeled lines whose meaning does not depend on task demands. To test this common assumption, we used a system identification approach to measure spatial frequency and orientation tuning in area V4 during two attentionally demanding visual search tasks, one that required fixation and one that allowed free viewing during search. We found that spatial attention modulates response baseline and gain but does not alter tuning, consistent with previous reports. In contrast, feature-based attention often shifts neuronal tuning. These tuning shifts are inconsistent with the labeled-line model and tend to enhance responses to stimulus features that distinguish the search target. Our data suggest that V4 neurons behave as matched filters that are dynamically tuned to optimize visual search.
The Journal of Neuroscience | 2009
Stephen V. David; Nima Mesgarani; Jonathan B. Fritz; Shihab A. Shamma
In this study, we explored ways to account more accurately for responses of neurons in primary auditory cortex (A1) to natural sounds. The auditory cortex has evolved to extract behaviorally relevant information from complex natural sounds, but most of our understanding of its function is derived from experiments using simple synthetic stimuli. Previous neurophysiological studies have found that existing models, such as the linear spectro-temporal receptive field (STRF), fail to capture the entire functional relationship between natural stimuli and neural responses. To study this problem, we compared STRFs for A1 neurons estimated using a natural stimulus, continuous speech, with STRFs estimated using synthetic ripple noise. For about one-third of the neurons, we found significant differences between STRFs, usually in the temporal dynamics of inhibition and/or overall gain. This shift in tuning resulted primarily from differences in the coarse temporal structure of the speech and noise stimuli. Using simulations, we found that the stimulus dependence of spectro-temporal tuning can be explained by a model in which synaptic inputs to A1 neurons are susceptible to rapid nonlinear depression. This dynamic reshaping of spectro-temporal tuning suggests that synaptic depression may enable efficient encoding of natural auditory stimuli.
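The depression mechanism invoked in this abstract can be caricatured with a single-resource model: each input consumes a fraction of the available synaptic resources, which then recover exponentially, so sustained or slowly varying inputs are transmitted less effectively than transients. This is a simplified sketch with made-up parameters, not the paper's exact formulation:

```python
import numpy as np

def depress(stimulus, strength=0.2, tau=10.0):
    """Apply a simple dynamic synaptic-depression nonlinearity.

    stimulus: (n_channels, n_timepoints) nonnegative input levels
    strength: fraction of resources consumed per unit of output
    tau:      recovery time constant in samples

    Returns the depressed input that would feed a linear STRF stage.
    """
    stimulus = np.asarray(stimulus, dtype=float)
    n_ch, n_t = stimulus.shape
    resources = np.ones(n_ch)          # fraction of resources available
    out = np.empty_like(stimulus)
    for t in range(n_t):
        out[:, t] = stimulus[:, t] * resources
        resources -= strength * out[:, t]        # consumption
        resources += (1.0 - resources) / tau     # exponential recovery
        resources = np.clip(resources, 0.0, 1.0)
    return out
```

Because the coarse temporal envelope of speech is slower than that of ripple noise, passing each stimulus through a front end like this before a linear STRF changes the apparent temporal tuning in a stimulus-dependent way, which is the effect the simulations in the paper explored.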