Network


Latest external collaboration at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Jeremy D. Thorne is active.

Publications


Featured research published by Jeremy D. Thorne.


BMC Neuroscience | 2011

Transcranial direct current stimulation of the prefrontal cortex modulates working memory performance: combined behavioural and electrophysiological evidence

Tino Zaehle; Pascale Sandmann; Jeremy D. Thorne; Lutz Jäncke; Christoph Herrmann

Background: Transcranial direct current stimulation (tDCS) is a technique that can systematically modify behaviour by inducing changes in the underlying brain function. In order to better understand the neuromodulatory effect of tDCS, the present study examined the impact of tDCS on performance in a working memory (WM) task and its underlying neural activity. In two experimental sessions, participants performed a letter two-back WM task after sham and either anodal or cathodal tDCS over the left dorsolateral prefrontal cortex (DLPFC). Results: tDCS modulated WM performance by altering the underlying oscillatory brain activity in a polarity-specific way. We observed an increase in WM performance and amplified oscillatory power in the theta and alpha bands after anodal tDCS, whereas cathodal tDCS interfered with WM performance and decreased oscillatory power in the theta and alpha bands under posterior electrode sites. Conclusions: The present study demonstrates that tDCS can alter WM performance by modulating the underlying neural oscillations. This result can be considered an important step towards a better understanding of the mechanisms involved in tDCS-induced modulations of WM performance, which is of particular importance given the proposal to use electrical brain stimulation for the therapeutic treatment of memory deficits in clinical settings.
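As an illustration of the kind of spectral measure involved here, the sketch below estimates mean power in the theta (4–7 Hz) and alpha (8–12 Hz) bands of a single EEG epoch via the FFT. This is a minimal sketch on synthetic data, not the study's analysis pipeline; the function name and all parameters are hypothetical.

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean power spectral density of `signal` within `band` (Hz), via FFT."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * signal.size)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Synthetic 2 s epoch at 250 Hz: a 6 Hz (theta) plus a weaker 10 Hz (alpha) component
fs = 250
t = np.arange(0, 2, 1.0 / fs)
epoch = np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)

theta = band_power(epoch, fs, (4, 7))
alpha = band_power(epoch, fs, (8, 12))
```

Because the simulated theta component has twice the amplitude of the alpha component, the theta band-power estimate comes out larger.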


Clinical Neurophysiology | 2009

Semi-automatic identification of independent components representing EEG artifact

Filipa Campos Viola; Jeremy D. Thorne; Barrie A. Edmonds; Till R. Schneider; Tom Eichele; Stefan Debener

Objective: Independent component analysis (ICA) can disentangle multi-channel electroencephalogram (EEG) signals into a number of artifacts and brain-related signals. However, the identification and interpretation of independent components is time-consuming and involves subjective decision making. We developed and evaluated a semi-automatic tool designed for clustering independent components from different subjects and/or EEG recordings. Methods: CORRMAP is an open-source EEGLAB plug-in, based on the correlation of ICA inverse weights, that finds independent components similar to a user-defined template. Component similarity is measured using a correlation procedure that selects components that pass a threshold. The threshold can be either user-defined or determined automatically. CORRMAP clustering performance was evaluated by comparing it with the performance of 11 users from different laboratories familiar with ICA. Results: For eye-related artifacts, a very high degree of overlap between users (phi > 0.80), and between users and CORRMAP (phi > 0.80), was observed. Lower degrees of association were found for heartbeat artifact components, both between users (phi < 0.70) and between users and CORRMAP (phi < 0.65). Conclusions: These results demonstrate that CORRMAP provides an efficient, convenient and objective way of clustering independent components. Significance: CORRMAP helps to use ICA efficiently for the removal of EEG artifacts.
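The matching step that this kind of tool is built on — correlating each component's scalp map (a column of the ICA inverse weight matrix) with a user-defined template and keeping components above a correlation threshold — can be sketched as follows. This is a simplified toy illustration, not the actual CORRMAP code; the absolute correlation is used because ICA component maps have arbitrary sign.

```python
import numpy as np

def match_components(template, inverse_weights, threshold=0.8):
    """Indices of components whose scalp map correlates with `template`
    above `threshold`. Columns of `inverse_weights` are component maps."""
    hits = []
    for idx, comp_map in enumerate(inverse_weights.T):
        r = np.corrcoef(template, comp_map)[0, 1]
        if abs(r) >= threshold:          # absolute value: ICA sign is arbitrary
            hits.append(idx)
    return hits

# Toy example: 5 "channels", 3 components; component 1 is a sign-flipped,
# rescaled copy of the template and should be the only match
template = np.array([1.0, 0.8, 0.2, -0.5, -1.0])
maps = np.column_stack([
    np.array([0.1, -0.3, 0.9, 0.2, -0.1]),   # unrelated map
    -1.1 * template + 0.05,                  # sign-flipped match
    np.array([-0.2, 0.4, -0.6, 0.8, 0.3]),   # unrelated map
])
matched = match_components(template, maps)   # → [1]
```

A linear rescaling or sign flip leaves the absolute correlation at 1, so the flipped copy is detected while the two unrelated maps fall below the threshold.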


The Journal of Neuroscience | 2011

Cross-Modal Phase Reset Predicts Auditory Task Performance in Humans

Jeremy D. Thorne; Maarten De Vos; Filipa Campos Viola; Stefan Debener

In the multisensory environment, inputs to each sensory modality are rarely independent. Sounds often follow a visible action or event. Here we present behaviorally relevant evidence from the human EEG that visual input prepares the auditory system for subsequent auditory processing by resetting the phase of neuronal oscillatory activity in auditory cortex. Subjects performed a simple auditory frequency discrimination task using paired but asynchronous auditory and visual stimuli. Auditory cortex activity was modeled from the scalp-recorded EEG using spatiotemporal dipole source analysis. Phase resetting activity was assessed using time–frequency analysis of the source waveforms. Significant cross-modal phase resetting was observed in auditory cortex at low alpha frequencies (8–10 Hz) peaking 80 ms after auditory onset, at high alpha frequencies (10–12 Hz) peaking at 88 ms, and at high theta frequencies (∼7 Hz) peaking at 156 ms. Importantly, significant effects were only evident when visual input preceded auditory by between 30 and 75 ms. Behaviorally, cross-modal phase resetting accounted for 18% of the variability in response speed in the auditory task, with stronger resetting overall leading to significantly faster responses. A direct link was thus shown between visual-induced modulations of auditory cortex activity and performance in an auditory task. The results are consistent with a model in which the efficiency of auditory processing is improved when natural associations between visual and auditory inputs allow one input to reliably predict the next.
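Phase resetting of this kind is commonly quantified with inter-trial phase coherence (ITC): if a preceding input resets ongoing oscillations, the phase at the frequency of interest becomes consistent across trials, driving ITC toward 1, whereas random phase yields values near 0. The sketch below is a simplified stand-in for the time–frequency analysis described above — it estimates a single-frequency phase per trial and compares ITC for phase-locked versus random-phase synthetic trials; all names and parameters are illustrative.

```python
import numpy as np

def itc(trials, fs, freq):
    """Inter-trial phase coherence at `freq`: magnitude of the mean unit
    phase vector across trials (1 = identical phase, ~0 = random phase)."""
    n = trials.shape[1]
    t = np.arange(n) / fs
    kernel = np.exp(-2j * np.pi * freq * t)   # single-frequency complex kernel
    phases = np.angle(trials @ kernel)        # one phase estimate per trial
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(0)
fs, f, n = 250, 10, 250
t = np.arange(n) / fs
# Condition A: 10 Hz activity with nearly identical phase across trials (as after a reset)
locked = np.array([np.sin(2 * np.pi * f * t + 0.1 * rng.standard_normal())
                   for _ in range(50)])
# Condition B: uniformly random phase per trial
random_phase = np.array([np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
                         for _ in range(50)])

itc_locked = itc(locked, fs, f)       # close to 1
itc_random = itc(random_phase, fs, f) # close to 0
```

With 50 trials the random-phase condition stays well below the phase-locked one, mirroring the logic of testing for significant cross-modal phase reset.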


NeuroImage | 2012

Let's face it, from trial to trial: Comparing procedures for N170 single-trial estimation

Maarten De Vos; Jeremy D. Thorne; Galit Yovel; Stefan Debener

The estimation of event-related single trial EEG activity is notoriously difficult but is of growing interest in various areas of cognitive neuroscience, such as multimodal neuroimaging and EEG-based brain computer interfaces. However, an objective evaluation of different approaches is lacking. The present study therefore compared four frequently-used single-trial data filtering procedures: raw sensor amplitudes, regression-based estimation, bandpass filtering, and independent component analysis (ICA). High-density EEG data were recorded from 20 healthy participants in a face recognition task and were analyzed with a focus on the face-selective N170 single-trial event-related potential. Linear discriminant analysis revealed significantly better single-trial estimation for ICA compared to raw sensor amplitudes, whereas the other two approaches did not improve classification accuracy. Further analyses suggested that ICA enabled extraction of a face-sensitive independent component in each participant, which led to the superior performance in single trial estimation. Additionally, we show that the face-sensitive component does not directly represent activity from a neuronal population exclusively involved in face-processing, but rather the activity of a network involved in general visual processing. We conclude that ICA effectively facilitates the separation of physiological trial-by-trial fluctuations from measurement noise, in particular when the process of interest is reliably reflected in components representing the neural signature of interest.
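The classification step can be illustrated with a plain two-class Fisher discriminant on simulated single-trial feature vectors. This numpy-only sketch is not the study's pipeline; the data, separability, and function names are all synthetic assumptions.

```python
import numpy as np

def fisher_lda(X0, X1):
    """Fisher discriminant direction and bias for two classes of
    single-trial feature vectors (trials x features)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    b = -w @ (m0 + m1) / 2            # decision threshold halfway between class means
    return w, b

def classify(X, w, b):
    return (X @ w + b > 0).astype(int)  # 1 = class 1, 0 = class 0

rng = np.random.default_rng(1)
# Toy single-trial features: two classes separated along the first dimension
X_face    = rng.standard_normal((100, 4)) + np.array([2.0, 0, 0, 0])
X_nonface = rng.standard_normal((100, 4))
w, b = fisher_lda(X_nonface, X_face)
acc = np.mean(np.r_[classify(X_nonface, w, b) == 0,
                    classify(X_face, w, b) == 1])
```

With unit-variance noise and a mean separation of 2, training-set accuracy lands around the theoretical optimum of roughly 84%, well above chance.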


Hearing Research | 2014

Look now and hear what's coming: On the functional role of cross-modal phase reset

Jeremy D. Thorne; Stefan Debener

In our multisensory environment our sensory systems are continuously receiving information that is often interrelated and must be integrated. Recent work in animals and humans has demonstrated that input to one sensory modality can reset the phase of ambient cortical oscillatory activity in another. The periodic fluctuations in neuronal excitability reflected in these oscillations can thereby be aligned to forthcoming anticipated sensory input. In the auditory domain, the example par excellence is speech, because of its inherently rhythmic structure. In contrast, fluctuations of oscillatory phase in the visual system are argued to reflect periodic sampling of the environment. Thus rhythmic structure is imposed on, rather than extracted from, the visual sensory input. Given this distinction, we suggest that cross-modal phase reset subserves separate functions in the auditory and visual systems. We propose a modality-dependent role for cross-modal input in temporal prediction whereby an auditory event signals the visual system to look now, but a visual event signals the auditory system that it needs to hear what is coming. This article is part of a Special Issue.


Advances in Cognitive Psychology | 2013

Visual movement perception in deaf and hearing individuals.

Nadine Hauthal; Pascale Sandmann; Stefan Debener; Jeremy D. Thorne

A number of studies have investigated changes in the perception of visual motion as a result of altered sensory experiences. An animal study has shown that auditory-deprived cats exhibit enhanced performance in a visual movement detection task compared to hearing cats (Lomber, Meredith, & Kral, 2010). In humans, the behavioural evidence regarding the perception of motion is less clear. The present study investigated deaf and hearing adult participants using a movement localization task and a direction of motion task employing coherently-moving and static visual dot patterns. Overall, deaf and hearing participants did not differ in their movement localization performance, although within the deaf group, a left visual field advantage was found. When discriminating the direction of motion, however, deaf participants responded faster and tended to be more accurate when detecting small differences in direction compared with the hearing controls. These results conform to the view that visual abilities are enhanced after auditory deprivation and extend previous findings regarding visual motion processing in deaf individuals.


Brain Topography | 2015

Association of Concurrent fNIRS and EEG Signatures in Response to Auditory and Visual Stimuli

Ling-Chia Chen; Pascale Sandmann; Jeremy D. Thorne; Christoph Herrmann; Stefan Debener

Functional near-infrared spectroscopy (fNIRS) has been proven reliable for investigation of low-level visual processing in both infants and adults. Similar investigation of fundamental auditory processes with fNIRS, however, remains only partially complete. Here we employed a systematic three-level validation approach to investigate whether fNIRS could capture fundamental aspects of bottom-up acoustic processing. We performed a simultaneous fNIRS-EEG experiment with visual and auditory stimulation in 24 participants, which allowed the relationship between changes in neural activity and hemoglobin concentrations to be studied. In the first level, the fNIRS results showed a clear distinction between visual and auditory sensory modalities. Specifically, the results demonstrated area specificity, that is, maximal fNIRS responses in visual and auditory areas for the visual and auditory stimuli respectively, and stimulus selectivity, whereby the visual and auditory areas responded mainly toward their respective stimuli. In the second level, a stimulus-dependent modulation of the fNIRS signal was observed in the visual area, as well as a loudness modulation in the auditory area. Finally in the last level, we observed significant correlations between simultaneously-recorded visual evoked potentials and deoxygenated hemoglobin (DeoxyHb) concentration, and between late auditory evoked potentials and oxygenated hemoglobin (OxyHb) concentration. In sum, these results suggest good sensitivity of fNIRS to low-level sensory processing in both the visual and the auditory domain, and provide further evidence of the neurovascular coupling between hemoglobin concentration changes and non-invasive brain electrical activity.


NeuroImage | 2013

Electrophysiological correlates of auditory change detection and change deafness in complex auditory scenes

Sebastian Puschmann; Pascale Sandmann; Janina Ahrens; Jeremy D. Thorne; Riklef Weerda; Georg M. Klump; Stefan Debener; Christiane M. Thiel

Change deafness describes the failure to perceive even intense changes within complex auditory input, if the listener does not attend to the changing sound. Remarkably, previous psychophysical data provide evidence that this effect occurs independently of successful stimulus encoding, indicating that undetected changes are processed to some extent in auditory cortex. Here we investigated cortical representations of detected and undetected auditory changes using electroencephalographic (EEG) recordings and a change deafness paradigm. We applied a one-shot change detection task, in which participants listened successively to three complex auditory scenes, each of them consisting of six simultaneously presented auditory streams. Listeners had to decide whether all scenes were identical or whether the pitch of one stream was changed between the last two presentations. Our data show significantly increased middle-latency Nb responses for both detected and undetected changes as compared to no-change trials. In contrast, only successfully detected changes were associated with a later mismatch response in auditory cortex, followed by increased N2, P3a and P3b responses, originating from hierarchically higher non-sensory brain regions. These results strengthen the view that undetected changes are successfully encoded at sensory level in auditory cortex, but fail to trigger later change-related cortical responses that lead to conscious perception of change.
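The kind of comparison reported here — condition-wise ERP averages and mean amplitude within a latency window — can be sketched as follows, using simulated trials in which only the "detected" condition carries an extra deflection around 150–250 ms. The epoch layout, window, and noise level are all illustrative assumptions, not the study's parameters.

```python
import numpy as np

def erp_mean_amplitude(epochs, fs, tmin, window):
    """Average epochs (trials x samples) into an ERP and return its mean
    amplitude within `window` (seconds, relative to stimulus onset)."""
    erp = epochs.mean(axis=0)
    t = tmin + np.arange(epochs.shape[1]) / fs
    mask = (t >= window[0]) & (t <= window[1])
    return erp[mask].mean()

rng = np.random.default_rng(2)
fs, tmin, n = 500, -0.1, 300          # 600 ms epochs, 100 ms pre-stimulus baseline
t = tmin + np.arange(n) / fs
# Simulated "detected change" trials carry an extra deflection at 150-250 ms
component  = np.where((t > 0.15) & (t < 0.25), 1.0, 0.0)
detected   = component + 0.5 * rng.standard_normal((80, n))
undetected = 0.5 * rng.standard_normal((80, n))

amp_det = erp_mean_amplitude(detected, fs, tmin, (0.15, 0.25))
amp_und = erp_mean_amplitude(undetected, fs, tmin, (0.15, 0.25))
```

Averaging over 80 trials suppresses the single-trial noise, so the windowed mean amplitude cleanly separates the condition carrying the simulated component from the one without it.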


PLOS ONE | 2014

Interplay of Agency and Ownership: The Intentional Binding and Rubber Hand Illusion Paradigm Combined

Niclas Braun; Jeremy D. Thorne; Helmut Hildebrandt; Stefan Debener

The sense of agency (SoA) refers to the phenomenal experience of initiating and controlling an action, whereas the sense of ownership (SoO) describes the feeling of "mineness" an agent experiences towards his or her own body parts. SoA has been investigated with intentional binding paradigms, and SoO with the rubber-hand illusion (RHI). We investigated the relationship between SoA and SoO by incorporating intentional binding into the RHI. Explicit and implicit measures of agency (SoA questionnaire, intentional binding) and ownership (SoO questionnaire, proprioceptive drift) were used. Artificial hand position (congruent/incongruent) and mode of agent (self-agent/other-agent) were systematically varied. Reported SoO varied mainly with position (higher in congruent conditions), but also with agent (higher in self-agent conditions). Reported SoA was modulated by agent (higher in self-agent conditions), and moderately by position (higher in congruent conditions). Implicit and explicit agency measures were not significantly correlated. Finally, intentional binding tended to be stronger for self-generated than for observed voluntary actions. The results provide further evidence for a partial double dissociation between SoA and SoO, empirically distinct agency levels, and moderate intentional binding differences between self-generated and observed voluntary actions.


Brain Topography | 2014

Source Localisation of Visual Evoked Potentials in Congenitally Deaf Individuals

Nadine Hauthal; Jeremy D. Thorne; Stefan Debener; Pascale Sandmann

Previous studies have suggested that individuals deprived of auditory input can compensate with specific superior abilities in the remaining sensory modalities. To better understand the neural basis of deafness-induced changes, the present study used electroencephalography to examine visual functions and cross-modal reorganization of the auditory cortex in deaf individuals. Congenitally deaf participants and hearing controls were presented with reversing chequerboard stimuli that were systematically modulated in luminance ratio. The two groups of participants showed similar modulation of visual evoked potential (VEP) amplitudes (N85, P110) and latencies (P110) as a function of luminance ratio. Analysis of VEPs revealed faster neural processing in deaf participants compared with hearing controls at early stages of cortical visual processing (N85). Deaf participants also showed higher amplitudes (P110) than hearing participants. In contrast to our expectations, the results from VEP source analysis revealed no clear evidence for cross-modal reorganization in the auditory cortex of deaf participants. However, deaf participants tended to show higher activation in posterior parietal cortex (PPC). Moreover, modulation of PPC responses as a function of luminance was also stronger in deaf than in hearing participants. Taken together, these findings are an indication of more efficient neural processing of visual information in the deaf, which may relate to functional changes, in particular in multisensory parietal cortex, as a consequence of early auditory deprivation.

Collaboration


Dive into Jeremy D. Thorne's collaboration.

Top Co-Authors


Stefan Debener

Royal South Hants Hospital


Niclas Braun

University of Oldenburg
