Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Andrew R. Dykstra is active.

Publication


Featured research published by Andrew R. Dykstra.


NeuroImage | 2012

Individualized localization and cortical surface-based registration of intracranial electrodes

Andrew R. Dykstra; Alexander M. Chan; Brian T. Quinn; Rodrigo Zepeda; Corey J. Keller; Justine Cormier; Joseph R. Madsen; Emad N. Eskandar; Sydney S. Cash

In addition to its widespread clinical use, the intracranial electroencephalogram (iEEG) is increasingly being employed as a tool to map the neural correlates of normal cognitive function as well as for developing neuroprosthetics. Despite recent advances, and unlike other established brain-mapping modalities (e.g. functional MRI, magneto- and electroencephalography), registering the iEEG with respect to neuroanatomy in individuals (and coregistering functional results across subjects) remains a significant challenge. Here we describe a method which coregisters high-resolution preoperative MRI with postoperative computerized tomography (CT) for the purpose of individualized functional mapping of both normal and pathological (e.g., interictal discharges and seizures) brain activity. Our method accurately (within 3 mm, on average) localizes electrodes with respect to an individual's neuroanatomy. Furthermore, we outline a principled procedure for either volumetric or surface-based group analyses. We demonstrate our method in five patients with medically-intractable epilepsy undergoing invasive monitoring of the seizure focus prior to its surgical removal. The straightforward application of this procedure to all types of intracranial electrodes, its robustness to deformations in both skull and brain, and the ability to compare electrode locations across groups of patients make this procedure an important tool for basic scientists as well as clinicians.


Cerebral Cortex | 2014

Speech-Specific Tuning of Neurons in Human Superior Temporal Gyrus

Alexander M. Chan; Andrew R. Dykstra; Vinay Jayaram; Matthew K. Leonard; Katherine E. Travis; Brian Gygi; Janet M. Baker; Emad N. Eskandar; Leigh R. Hochberg; Eric Halgren; Sydney S. Cash

How the brain extracts words from auditory signals is an unanswered question. We recorded approximately 150 single and multi-units from the left anterior superior temporal gyrus of a patient during multiple auditory experiments. Against low background activity, 45% of units robustly fired to particular spoken words with little or no response to pure tones, noise-vocoded speech, or environmental sounds. Many units were tuned to complex but specific sets of phonemes, which were influenced by local context but invariant to speaker, and suppressed during self-produced speech. The firing of several units to specific visual letters was correlated with their response to the corresponding auditory phonemes, providing the first direct neural evidence for phonological recoding during reading. Maximal decoding of individual phoneme and word identities was attained using firing rates from approximately 5 neurons within 200 ms after word onset. Thus, neurons in human superior temporal gyrus use sparse, spatially organized population encoding of complex acoustic-phonetic features to help recognize auditory and visual words.


Hearing Research | 2014

Functional imaging of auditory scene analysis.

Alexander Gutschalk; Andrew R. Dykstra

Our auditory system is constantly faced with the task of decomposing the complex mixture of sound arriving at the ears into perceptually independent streams constituting accurate representations of individual sound sources. This decomposition, termed auditory scene analysis, is critical for both survival and communication, and is thought to underlie both speech and music perception. The neural underpinnings of auditory scene analysis have been studied utilizing invasive experiments with animal models as well as non-invasive (MEG, EEG, and fMRI) and invasive (intracranial EEG) studies conducted with human listeners. The present article reviews human neurophysiological research investigating the neural basis of auditory scene analysis, with emphasis on two classical paradigms termed streaming and informational masking. Other paradigms - such as the continuity illusion, mistuned harmonics, and multi-speaker environments - are briefly addressed thereafter. We conclude by discussing the emerging evidence for the role of auditory cortex in remapping incoming acoustic signals into a perceptual representation of auditory streams, which are then available for selective attention and further conscious processing. This article is part of a Special Issue entitled Human Auditory Neuroimaging.


Journal of Neuroscience Methods | 2017

iELVis: An open source MATLAB toolbox for localizing and visualizing human intracranial electrode data

David M. Groppe; Stephan Bickel; Andrew R. Dykstra; Pierre Mégevand; Manuel R. Mercier; Fred A. Lado; Ashesh D. Mehta; Christopher J. Honey

BACKGROUND: Intracranial electrical recordings (iEEG) and brain stimulation (iEBS) are invaluable human neuroscience methodologies. However, the value of such data is often unrealized as many laboratories lack tools for localizing electrodes relative to anatomy. To remedy this, we have developed a MATLAB toolbox for intracranial electrode localization and visualization, iELVis. NEW METHOD: iELVis uses existing tools (BioImage Suite, FSL, and FreeSurfer) for preimplant magnetic resonance imaging (MRI) segmentation, neuroimaging coregistration, and manual identification of electrodes in postimplant neuroimaging. Subsequently, iELVis implements methods for correcting electrode locations for postimplant brain shift with millimeter-scale accuracy and provides interactive visualization on 3D surfaces or in 2D slices with optional functional neuroimaging overlays. iELVis also localizes electrodes relative to FreeSurfer-based atlases and can combine data across subjects via the FreeSurfer average brain. RESULTS: It takes 30-60 min of user time and 12-24 h of computer time to localize and visualize electrodes from one brain. We demonstrate iELVis's functionality by showing that three methods for mapping primary hand somatosensory cortex (iEEG, iEBS, and functional MRI) provide highly concordant results. COMPARISON WITH EXISTING METHODS: iELVis is the first public software for electrode localization that corrects for brain shift, maps electrodes to an average brain, and supports neuroimaging overlays. Moreover, its interactive visualizations are powerful and its tutorial material is extensive. CONCLUSIONS: iELVis promises to speed the progress and enhance the robustness of intracranial electrode research. The software and extensive tutorial materials are freely available as part of the EpiSurg software project: https://github.com/episurg/episurg.


Handbook of Clinical Neurology | 2015

Auditory neglect and related disorders.

Alexander Gutschalk; Andrew R. Dykstra

Neglect is a neurologic disorder, typically associated with lesions of the right hemisphere, in which patients are biased towards their ipsilesional - usually right - side of space while awareness for their contralesional - usually left - side is reduced or absent. Neglect is a multimodal disorder that often includes deficits in the auditory domain. Classically, auditory extinction, in which left-sided sounds that are correctly perceived in isolation are not detected in the presence of synchronous right-sided stimulation, has been considered the primary sign of auditory neglect. However, auditory extinction can also be observed after unilateral auditory cortex lesions and is thus not specific for neglect. Recent research has shown that patients with neglect are also impaired in maintaining sustained attention, on both sides, a fact that is reflected by an impairment of auditory target detection in continuous stimulation conditions. Perhaps the most impressive auditory symptom in full-blown neglect is alloacusis, in which patients mislocalize left-sided sound sources to their right, although even patients with less severe neglect still often show disturbance of auditory spatial perception, most commonly a lateralization bias towards the right. We discuss how these various disorders may be explained by a single model of neglect and review emerging interventions for patient rehabilitation.


PLOS ONE | 2012

Dissociation of Detection and Discrimination of Pure Tones following Bilateral Lesions of Auditory Cortex

Andrew R. Dykstra; Christine K. Koh; Louis D. Braida; Mark Jude Tramo

It is well known that damage to the peripheral auditory system causes deficits in tone detection as well as pitch and loudness perception across a wide range of frequencies. However, the extent to which the auditory cortex plays a critical role in these basic aspects of spectral processing, especially with regard to speech, music, and environmental sound perception, remains unclear. Recent experiments indicate that primary auditory cortex is necessary for the normally-high perceptual acuity exhibited by humans in pure-tone frequency discrimination. The present study assessed whether the auditory cortex plays a similar role in the intensity domain and contrasted its contribution to sensory versus discriminative aspects of intensity processing. We measured intensity thresholds for pure-tone detection and pure-tone loudness discrimination in a population of healthy adults and a middle-aged man with complete or near-complete lesions of the auditory cortex bilaterally. Detection thresholds in his left and right ears were 16 and 7 dB HL, respectively, within clinically-defined normal limits. In contrast, the intensity threshold for monaural loudness discrimination at 1 kHz was 6.5±2.1 dB in the left ear and 6.5±1.9 dB in the right ear at 40 dB sensation level, well above the means of the control population (left ear: 1.6±0.22 dB; right ear: 1.7±0.19 dB). The results indicate that auditory cortex lowers just-noticeable differences for loudness discrimination by approximately 5 dB but is not necessary for tone detection in quiet. Previous human and Old-world monkey experiments employing lesion-effect, neurophysiology, and neuroimaging methods to investigate the role of auditory cortex in intensity processing are reviewed.


Epilepsia | 2014

Utility of foramen ovale electrodes in mesial temporal lobe epilepsy.

Sameer A. Sheth; Joshua P. Aronson; Mouhsin M. Shafi; H. Wesley Phillips; Naymee Velez-Ruiz; Brian P. Walcott; Churl-Su Kwon; Matthew K. Mian; Andrew R. Dykstra; Andrew J. Cole; Emad N. Eskandar

To determine the ability of foramen ovale electrodes (FOEs) to localize epileptogenic foci after inconclusive noninvasive investigations in patients with suspected mesial temporal lobe epilepsy (MTLE).


Philosophical Transactions of the Royal Society B | 2017

A roadmap for the study of conscious audition and its neural basis

Andrew R. Dykstra; Peter Cariani; Alexander Gutschalk

How and which aspects of neural activity give rise to subjective perceptual experience—i.e. conscious perception—is a fundamental question of neuroscience. To date, the vast majority of work concerning this question has come from vision, raising the issue of generalizability of prominent resulting theories. However, recent work has begun to shed light on the neural processes subserving conscious perception in other modalities, particularly audition. Here, we outline a roadmap for the future study of conscious auditory perception and its neural basis, paying particular attention to how conscious perception emerges (and of which elements or groups of elements) in complex auditory scenes. We begin by discussing the functional role of the auditory system, particularly as it pertains to conscious perception. Next, we ask: what are the phenomena that need to be explained by a theory of conscious auditory perception? After surveying the available literature for candidate neural correlates, we end by considering the implications that such results have for a general theory of conscious perception as well as prominent outstanding questions and what approaches/techniques can best be used to address them. This article is part of the themed issue ‘Auditory and visual scene analysis’.


Science Advances | 2015

Does the mismatch negativity operate on a consciously accessible memory trace?

Andrew R. Dykstra; Alexander Gutschalk

A change-related component of the auditory evoked response long thought to be preattentive and preconscious may actually require consciousness. The extent to which the contents of short-term memory are consciously accessible is a fundamental question of cognitive science. In audition, short-term memory is often studied via the mismatch negativity (MMN), a change-related component of the auditory evoked response that is elicited by violations of otherwise regular stimulus sequences. The prevailing functional view of the MMN is that it operates on preattentive and even preconscious stimulus representations. We directly examined the preconscious notion of the MMN using informational masking and magnetoencephalography. Spectrally isolated and otherwise suprathreshold auditory oddball sequences were occasionally rendered inaudible by embedding them in random multitone masker “clouds.” Despite identical stimulation/task contexts and a clear representation of all stimuli in auditory cortex, MMN was only observed when the preceding regularity (that is, the standard stream) was consciously perceived. The results call into question the preconscious interpretation of MMN and raise the possibility that it might index partial awareness in the absence of overt behavior.


PLOS ONE | 2015

Interaction of Streaming and Attention in Human Auditory Cortex

Alexander Gutschalk; André Rupp; Andrew R. Dykstra

Serially presented tones are sometimes segregated into two perceptually distinct streams. An ongoing debate is whether this basic streaming phenomenon reflects automatic processes or requires attention focused on the stimuli. Here, we examined the influence of focused attention on streaming-related activity in human auditory cortex using magnetoencephalography (MEG). Listeners were presented with a dichotic paradigm in which left-ear stimuli consisted of canonical streaming stimuli (ABA_ or ABAA) and right-ear stimuli consisted of a classical oddball paradigm. In phase one, listeners were instructed to attend the right-ear oddball sequence and detect rare deviants. In phase two, they were instructed to attend the left-ear streaming stimulus and report whether they heard one or two streams. The frequency difference (ΔF) of the sequences was set such that the smallest and largest ΔF conditions generally induced one- and two-stream percepts, respectively. Two intermediate ΔF conditions were chosen to elicit bistable percepts (i.e., either one or two streams). Attention enhanced the peak-to-peak amplitude of the P1-N1 complex, but only for ambiguous ΔF conditions, consistent with the notion that automatic mechanisms for streaming tightly interact with attention and that the latter is of particular importance for ambiguous sound sequences.

Collaboration


Dive into Andrew R. Dykstra's collaborations.

Top Co-Authors

Eric Halgren
University of California

Alexander M. Chan
Massachusetts Institute of Technology

Joseph R. Madsen
Boston Children's Hospital