Publication


Featured research published by Jens Hjortkjær.


Frontiers in Psychology | 2016

Impact of Background Noise and Sentence Complexity on Processing Demands during Sentence Comprehension

Dorothea Wendt; Torsten Dau; Jens Hjortkjær

Speech comprehension in adverse listening conditions can be effortful even when speech is fully intelligible. Acoustical distortions typically make speech comprehension more effortful, but effort also depends on linguistic aspects of the speech signal, such as its syntactic complexity. In the present study, pupil dilations and subjective effort ratings were recorded in 20 normal-hearing participants while performing a sentence comprehension task. The sentences were either syntactically simple (subject-first sentence structure) or complex (object-first sentence structure) and were presented in two levels of background noise, both corresponding to high intelligibility. A digit span and a reading span test were used to assess individual differences in the participants’ working memory capacity (WMC). The results showed that the subjectively rated effort was mostly affected by the noise level and less by syntactic complexity. Conversely, pupil dilations increased with syntactic complexity but only showed a small effect of the noise level. Participants with higher WMC showed increased pupil responses in the higher-level noise condition but rated sentence comprehension as being less effortful compared to participants with lower WMC. Overall, the results demonstrate that pupil dilations and subjectively rated effort represent different aspects of effort. Furthermore, the results indicate that effort can vary in situations with high speech intelligibility.


NeuroImage | 2017

Noise-robust cortical tracking of attended speech in real-world acoustic scenes

Søren A. Fuglsang; Torsten Dau; Jens Hjortkjær

Selectively attending to one speaker in a multi-speaker scenario is thought to synchronize low-frequency cortical activity to the attended speech signal. In recent studies, reconstruction of speech from single-trial electroencephalogram (EEG) data has been used to decode which talker a listener is attending to in a two-talker situation. It is currently unclear how this generalizes to more complex sound environments. Behaviorally, speech perception is robust to the acoustic distortions that listeners typically encounter in everyday life, but it is unknown whether this is mirrored by a noise-robust neural tracking of attended speech. Here we used advanced acoustic simulations to recreate real-world acoustic scenes in the laboratory. In virtual acoustic realities with varying amounts of reverberation and number of interfering talkers, listeners selectively attended to the speech stream of a particular talker. Across the different listening environments, we found that the attended talker could be accurately decoded from single-trial EEG data irrespective of the different distortions in the acoustic input. For highly reverberant environments, speech envelopes reconstructed from neural responses to the distorted stimuli resembled the original clean signal more than the distorted input. With reverberant speech, we observed a late cortical response to the attended speech stream that encoded temporal modulations in the speech signal without its reverberant distortion. Single-trial attention decoding accuracies based on 40–50 s long blocks of data from 64 scalp electrodes were equally high (80–90% correct) in all considered listening environments and remained statistically significant using down to 10 scalp electrodes and short (<30-s) unaveraged EEG segments. In contrast to the robust decoding of the attended talker we found that decoding of the unattended talker deteriorated with the acoustic distortions. These results suggest that cortical activity tracks an attended speech signal in a way that is invariant to acoustic distortions encountered in real-life sound environments. Noise-robust attention decoding additionally suggests a potential utility of stimulus reconstruction techniques in attention-controlled brain-computer interfaces.

Highlights:
- Selective attention to speech in real-world acoustic scenarios.
- Cortical delta-theta activity entrains to attended speech in a noise-robust manner.
- Cortex represents reverberant speech without its acoustic distortion.
- Single-trial EEG decoding of selective attention is robust to acoustic noise.
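
As an illustration of the stimulus-reconstruction approach described above, here is a minimal sketch in Python: a ridge-regularized backward model maps time-lagged EEG to the attended speech envelope, and attention is then classified by correlating the reconstruction with each talker's envelope. The lag range, ridge value, and synthetic data are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

def lag_eeg(eeg, n_lags):
    """Stack time-lagged copies of every EEG channel into a design matrix."""
    n_samples, n_chan = eeg.shape
    X = np.zeros((n_samples, n_chan * n_lags))
    for k in range(n_lags):
        X[k:, k * n_chan:(k + 1) * n_chan] = eeg[:n_samples - k]
    return X

def train_decoder(eeg, attended_env, n_lags=16, ridge=1e2):
    """Backward model: ridge regression from lagged EEG to the attended envelope."""
    X = lag_eeg(eeg, n_lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ attended_env)

def classify_attention(eeg, env_a, env_b, weights, n_lags=16):
    """Reconstruct an envelope from EEG and pick the talker it correlates with most."""
    recon = lag_eeg(eeg, n_lags) @ weights
    r_a = np.corrcoef(recon, env_a)[0, 1]
    r_b = np.corrcoef(recon, env_b)[0, 1]
    return ("A" if r_a > r_b else "B"), r_a, r_b

# Toy demonstration with synthetic data (64 channels, 64 Hz, 50 s block)
rng = np.random.default_rng(0)
fs, n_chan, dur = 64, 64, 50
env_a = np.abs(rng.standard_normal(fs * dur))      # stand-in speech envelopes
env_b = np.abs(rng.standard_normal(fs * dur))
eeg = rng.standard_normal((fs * dur, n_chan)) + 0.5 * np.outer(env_a, rng.standard_normal(n_chan))

w = train_decoder(eeg, env_a)                       # talker A is "attended" here
# For illustration the same block is reused; real analyses use held-out data.
print(classify_attention(eeg, env_a, env_b, w))     # expect "A" with r_a > r_b
```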


NeuroImage | 2018

Decoding the auditory brain with canonical component analysis

Alain de Cheveigné; Daniel D. E. Wong; Giovanni M. Di Liberto; Jens Hjortkjær; Malcolm Slaney; Edmund C. Lalor

The relation between a stimulus and the evoked brain response can shed light on perceptual processes within the brain. Signals derived from this relation can also be harnessed to control external devices for Brain Computer Interface (BCI) applications. While the classic event-related potential (ERP) is appropriate for isolated stimuli, more sophisticated “decoding” strategies are needed to address continuous stimuli such as speech, music or environmental sounds. Here we describe an approach based on Canonical Correlation Analysis (CCA) that finds the optimal transform to apply to both the stimulus and the response to reveal correlations between the two. Compared to prior methods based on forward or backward models for stimulus-response mapping, CCA finds significantly higher correlation scores, thus providing increased sensitivity to relatively small effects, and supports classifier schemes that yield higher classification scores. CCA strips the brain response of variance unrelated to the stimulus, and the stimulus representation of variance that does not affect the response, and thus improves observations of the relation between stimulus and response.

Highlights:
- CCA is a powerful, easy to use method for multichannel data analysis.
- CCA finds an optimal linear model to relate stimulus and brain response.
- Multiple speech components map to distinct spectro-spatial signatures.
- CCA yields large stimulus-response correlation values.
- CCA supports good performance in a classification task.
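
A minimal sketch of the CCA idea, using scikit-learn's CCA on a multichannel stimulus representation and simulated EEG. The stimulus features, channel counts, and number of components are illustrative assumptions and do not reproduce the paper's pipeline (which also involves lagging and dimensionality reduction).

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
n_samples = 5000

# Hypothetical stimulus representation, e.g. a few spectral-band envelopes
stim = rng.standard_normal((n_samples, 8))

# Synthetic EEG: 64 channels, partly driven by the stimulus, plus noise
mixing = rng.standard_normal((8, 64))
eeg = stim @ mixing * 0.3 + rng.standard_normal((n_samples, 64))

# CCA finds paired linear transforms of stimulus and response that
# maximize the correlation between the transformed signals
cca = CCA(n_components=4)
stim_c, eeg_c = cca.fit_transform(stim, eeg)

for k in range(4):
    r = np.corrcoef(stim_c[:, k], eeg_c[:, k])[0, 1]
    print(f"component {k}: canonical correlation = {r:.2f}")
```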


Cerebral Cortex | 2018

Task-Modulated Cortical Representations of Natural Sound Source Categories

Jens Hjortkjær; Tanja Kassuba; Kristoffer Hougaard Madsen; Martin Skov; Hartwig R. Siebner

In everyday sound environments, we recognize sound sources and events by attending to relevant aspects of an acoustic input. Evidence about the cortical mechanisms involved in extracting relevant category information from natural sounds is, however, limited to speech. Here, we used functional MRI to measure cortical response patterns while human listeners categorized real-world sounds created by objects of different solid materials (glass, metal, wood) manipulated by different sound-producing actions (striking, rattling, dropping). In different sessions, subjects had to identify either material or action categories in the same sound stimuli. The sound-producing action and the material of the sound source could be decoded from multivoxel activity patterns in auditory cortex, including Heschl's gyrus and planum temporale. Importantly, decoding success depended on task relevance and category discriminability. Action categories were more accurately decoded in auditory cortex when subjects identified action information. Conversely, the material of the same sound sources was decoded with higher accuracy in the inferior frontal cortex during material identification. Representational similarity analyses indicated that both early and higher-order auditory cortex selectively enhanced spectrotemporal features relevant to the target category. Together, the results indicate a cortical selection mechanism that favors task-relevant information in the processing of nonvocal sound categories.
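
The following is a minimal sketch of the kind of multivoxel pattern decoding described above: a cross-validated linear classifier applied to trial-by-voxel activity patterns labeled by sound-source category. The synthetic data, region size, and classifier choice are illustrative assumptions, not the study's analysis.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Hypothetical data: trial-wise voxel patterns from an auditory ROI,
# labeled by material category (0 = glass, 1 = metal, 2 = wood)
n_trials, n_voxels = 90, 200
labels = np.repeat([0, 1, 2], n_trials // 3)
patterns = rng.standard_normal((n_trials, n_voxels))
patterns += labels[:, None] * 0.2            # inject a weak category signal

clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=5000))
scores = cross_val_score(clf, patterns, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.33)")
```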


Journal of the Acoustical Society of America | 2016

Spectral and temporal cues for perception of material and action categories in impacted sound sources

Jens Hjortkjær; Stephen McAdams

In two experiments, similarity ratings and categorization performance were examined for recorded impact sounds representing three material categories (wood, metal, glass) manipulated by three different categories of action (drop, strike, rattle). Previous research focusing on single impact sounds suggests that temporal cues related to damping are essential for material discrimination, but spectral cues are potentially more efficient for discriminating materials manipulated by different actions that include multiple impacts (e.g., dropping, rattling). Perceived similarity between material categories across different actions was correlated with the distribution of long-term spectral energy (spectral centroid). Similarity between action categories was described by the temporal distribution of envelope energy (temporal centroid) or by the density of impacts. Moreover, perceptual similarity correlated with the pattern of confusion in categorization judgments. Listeners tended to confuse materials with similar spectral centroids, and actions with similar temporal centroids and onset densities. To confirm the influence of these different features, spectral cues were removed by applying the envelopes of the original sounds to a broadband noise carrier. Without spectral cues, listeners retained sensitivity to action categories but not to material categories. Conversely, listeners recognized material but not action categories after envelope scrambling that preserved long-term spectral content.
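
A minimal sketch of the two acoustic descriptors named above, the long-term spectral centroid and the temporal centroid of the amplitude envelope. The weighting conventions and the toy damped-tone stimulus are assumptions for illustration.

```python
import numpy as np
from scipy.signal import hilbert

def spectral_centroid(x, fs):
    """Amplitude-weighted mean frequency of the long-term magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

def temporal_centroid(x, fs):
    """Energy-weighted mean time of the amplitude envelope."""
    env = np.abs(hilbert(x))
    times = np.arange(len(x)) / fs
    return np.sum(times * env) / np.sum(env)

# Toy impact-like sound: an exponentially damped tone (long, metal-like decay)
fs = 44100
t = np.arange(int(0.5 * fs)) / fs
sound = np.sin(2 * np.pi * 2500 * t) * np.exp(-t / 0.15)

print(f"spectral centroid: {spectral_centroid(sound, fs):.0f} Hz")
print(f"temporal centroid: {temporal_centroid(sound, fs) * 1000:.0f} ms")
```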


bioRxiv | 2018

A Comparison of Temporal Response Function Estimation Methods for Auditory Attention Decoding

Daniel D. E. Wong; Søren A. Fuglsang; Jens Hjortkjær; Enea Ceolini; Malcolm Slaney; Alain de Cheveigné

The decoding of selective auditory attention from noninvasive electroencephalogram (EEG) data is of interest in brain-computer interface and auditory perception research. The current state-of-the-art approaches for decoding the attentional selection of listeners are based on temporal response functions (TRFs). In the current context, a TRF is a function that facilitates a mapping between features of sound streams and EEG responses. It has been shown that when the envelope of attended speech and EEG responses are used to derive TRF mapping functions, the TRF model predictions can be used to discriminate between attended and unattended talkers. However, the predictive performance of the TRF models is dependent on how the TRF model parameters are estimated. A number of TRF estimation methods have been published, along with a variety of datasets. It is currently unclear if any of these methods perform better than others, as they have not yet been compared side by side on a single standardized dataset in a controlled fashion. Here, we present a comparative study of the ability of different TRF estimation methods to classify attended speakers from multi-channel EEG data. The performance of the TRF estimation methods is evaluated using different performance metrics on a set of labeled EEG data from 18 subjects listening to mixtures of two speech streams.
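
For orientation, here is a minimal sketch of one common TRF estimator in the forward direction: ridge regression from a time-lagged speech envelope onto a single EEG channel. The lag range, ridge value, and synthetic data are assumptions; the paper compares several estimation methods that are not reproduced here.

```python
import numpy as np

def lag_matrix(envelope, n_lags):
    """Design matrix of time-lagged copies of the speech envelope."""
    X = np.zeros((len(envelope), n_lags))
    for k in range(n_lags):
        X[k:, k] = envelope[:len(envelope) - k]
    return X

def fit_trf(envelope, eeg_channel, n_lags=32, ridge=10.0):
    """Forward TRF via ridge regression: predict EEG from the lagged envelope."""
    X = lag_matrix(envelope, n_lags)
    w = np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ eeg_channel)
    return w                          # one weight per lag = the response function

# Toy data at 64 Hz: an EEG channel driven by a delayed, smeared envelope
rng = np.random.default_rng(3)
fs = 64
env = np.abs(rng.standard_normal(fs * 60))
kernel = np.exp(-np.arange(16) / 4.0)            # hypothetical "true" TRF
eeg = np.convolve(env, kernel)[:len(env)] + rng.standard_normal(len(env))

trf = fit_trf(env, eeg)
pred = lag_matrix(env, 32) @ trf
print(f"prediction correlation: {np.corrcoef(pred, eeg)[0, 1]:.2f}")
```

In attention decoding, the prediction correlation computed this way for the attended and unattended envelopes is what drives the classification.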


bioRxiv | 2018

Multiway Canonical Correlation Analysis of Brain Signals

Alain de Cheveigné; Giovanni M. Di Liberto; Dorothée Arzounian; Daniel Wong; Jens Hjortkjær; Søren A. Fuglsang; Lucas C. Parra

Brain signals recorded with electroencephalography (EEG), magnetoencephalography (MEG) and related techniques often have poor signal-to-noise ratio due to the presence of multiple competing sources and artifacts. A common remedy is to average over repeats of the same stimulus, but this is not applicable for temporally extended stimuli that are presented only once (speech, music, movies, natural sounds). An alternative is to average responses over multiple subjects presented with identical stimuli, but differences in the geometry of brain sources and sensors reduce the effectiveness of this solution. Multiway canonical correlation analysis (MCCA) addresses this problem by allowing data from multiple subjects to be fused in such a way as to extract components common to all. This paper reviews the method, offers application examples that illustrate its effectiveness, and outlines the caveats and risks entailed by the method.
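
A minimal sketch of one way to realize the idea, assuming the common whitening-and-concatenation formulation of MCCA: whiten each subject's data by PCA, concatenate the whitened channels across subjects, and take a further PCA whose leading components capture activity shared by all subjects. This is an approximation for illustration, not the authors' released implementation.

```python
import numpy as np

def mcca(datasets, n_keep=10):
    """Sketch of MCCA: per-subject whitening, concatenation, then a shared PCA."""
    whitened = []
    for X in datasets:                        # X: samples x channels, one subject
        X = X - X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        whitened.append(U[:, :n_keep])        # equal-norm, decorrelated components
    Z = np.hstack(whitened)                   # samples x (subjects * n_keep)
    U, s, Vt = np.linalg.svd(Z - Z.mean(axis=0), full_matrices=False)
    return U, s                               # shared components and their strength

# Toy example: five "subjects" sharing one common source buried in noise
rng = np.random.default_rng(4)
common = np.sin(2 * np.pi * np.arange(2000) / 200.0)
subjects = [np.outer(common, rng.standard_normal(32))
            + 2.0 * rng.standard_normal((2000, 32)) for _ in range(5)]

components, strength = mcca(subjects)
r = np.corrcoef(components[:, 0], common)[0, 1]
print(f"correlation of first shared component with the common source: {abs(r):.2f}")
```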


Frontiers in Neuroscience | 2018

A Comparison of Regularization Methods in Forward and Backward Models for Auditory Attention Decoding

Daniel D. E. Wong; Søren A. Fuglsang; Jens Hjortkjær; Enea Ceolini; Malcolm Slaney; Alain de Cheveigné

The decoding of selective auditory attention from noninvasive electroencephalogram (EEG) data is of interest in brain-computer interface and auditory perception research. The current state-of-the-art approaches for decoding the attentional selection of listeners are based on linear mappings between features of sound streams and EEG responses (forward model), or vice versa (backward model). It has been shown that when the envelope of attended speech and EEG responses are used to derive such mapping functions, the model estimates can be used to discriminate between attended and unattended talkers. However, the predictive/reconstructive performance of the models is dependent on how the model parameters are estimated. A number of model estimation methods have been published, along with a variety of datasets. It is currently unclear if any of these methods perform better than others, as they have not yet been compared side by side on a single standardized dataset in a controlled fashion. Here, we present a comparative study of the ability of different estimation methods to classify attended speakers from multi-channel EEG data. The performance of the model estimation methods is evaluated using different performance metrics on a set of labeled EEG data from 18 subjects listening to mixtures of two speech streams. We find that when forward models predict the EEG from the attended audio, regularized models do not improve regression or classification accuracies. When backward models decode the attended speech from the EEG, regularization provides higher regression and classification accuracies.


European Journal of Neuroscience | 2018

Cortical oscillations and entrainment in speech processing during working memory load

Jens Hjortkjær; Jonatan Märcher-Rørsted; Søren A. Fuglsang; Torsten Dau

Neuronal oscillations are thought to play an important role in working memory (WM) and speech processing. Listening to speech in real‐life situations is often cognitively demanding but it is unknown whether WM load influences how auditory cortical activity synchronizes to speech features. Here, we developed an auditory n‐back paradigm to investigate cortical entrainment to speech envelope fluctuations under different degrees of WM load. We measured the electroencephalogram, pupil dilations and behavioural performance from 22 subjects listening to continuous speech with an embedded n‐back task. The speech stimuli consisted of long spoken number sequences created to match natural speech in terms of sentence intonation, syllabic rate and phonetic content. To burden different WM functions during speech processing, listeners performed an n‐back task on the speech sequences in different levels of background noise. Increasing WM load at higher n‐back levels was associated with a decrease in posterior alpha power as well as increased pupil dilations. Frontal theta power increased at the start of the trial and increased additionally with higher n‐back level. The observed alpha–theta power changes are consistent with visual n‐back paradigms suggesting general oscillatory correlates of WM processing load. Speech entrainment was measured as a linear mapping between the envelope of the speech signal and low‐frequency cortical activity (< 13 Hz). We found that increases in both types of WM load (background noise and n‐back level) decreased cortical speech envelope entrainment. Although entrainment persisted under high load, our results suggest a top‐down influence of WM processing on cortical speech entrainment.
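
The entrainment measure above depends on a low-frequency speech envelope and band-limited EEG. The following is a minimal preprocessing sketch, assuming a Hilbert-envelope plus low-pass filtering approach with illustrative sampling rates and cutoffs; the study's actual filters and its linear stimulus-response mapping may differ, and the closing correlation is a deliberately simplified stand-in for that mapping.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt, resample

def speech_envelope(audio, fs_audio, fs_out=64, cutoff=13.0):
    """Broadband amplitude envelope of speech, low-pass filtered and
    resampled to the EEG sampling rate."""
    env = np.abs(hilbert(audio))
    sos = butter(3, cutoff, btype="low", fs=fs_audio, output="sos")
    env = sosfiltfilt(sos, env)
    return resample(env, int(len(audio) * fs_out / fs_audio))

def lowpass_eeg(eeg, fs_eeg, cutoff=13.0):
    """Restrict multichannel EEG to the low-frequency range (< 13 Hz)."""
    sos = butter(3, cutoff, btype="low", fs=fs_eeg, output="sos")
    return sosfiltfilt(sos, eeg, axis=0)

# Toy usage with stand-in signals (10 s of "speech" at 16 kHz, EEG at 64 Hz)
rng = np.random.default_rng(5)
audio = rng.standard_normal(16000 * 10)
eeg = rng.standard_normal((64 * 10, 64))

env = speech_envelope(audio, fs_audio=16000, fs_out=64)
eeg_lf = lowpass_eeg(eeg, fs_eeg=64)
# Simplest possible tracking measure: envelope-channel correlation
r = np.array([np.corrcoef(env, eeg_lf[:, ch])[0, 1] for ch in range(eeg_lf.shape[1])])
print(f"strongest channel-envelope correlation: {np.abs(r).max():.2f}")
```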


NeuroImage | 2017

Subcortical and cortical correlates of pitch discrimination: Evidence for two levels of neuroplasticity in musicians

Federica Bianchi; Jens Hjortkjær; Sébastien Santurette; Robert J. Zatorre; Hartwig R. Siebner; Torsten Dau

Musicians are highly trained to discriminate fine pitch changes, but the neural bases of this ability are poorly understood. It is unclear whether such training-dependent differences in pitch processing arise already in the subcortical auditory system or are linked to more central stages. To address this question, we combined psychoacoustic testing with functional MRI to measure cortical and subcortical responses in musicians and non-musicians during a pitch-discrimination task. First, we estimated behavioral pitch-discrimination thresholds for complex tones with harmonic components that were either resolved or unresolved in the auditory system. Musicians outperformed non-musicians, showing lower pitch-discrimination thresholds in both conditions. The same participants underwent task-related functional MRI, while they performed a similar pitch-discrimination task. To account for the between-group differences in pitch discrimination, task difficulty was adjusted to each individual's pitch-discrimination ability. Relative to non-musicians, musicians showed increased neural responses to complex tones with either resolved or unresolved harmonics, especially in right-hemispheric areas comprising the right superior temporal gyrus, Heschl's gyrus, insular cortex, and inferior frontal gyrus, and in the inferior colliculus. Both subcortical and cortical neural responses predicted the individual pitch-discrimination performance. However, functional activity in the inferior colliculus correlated with differences in pitch discrimination across all participants, but not within the musicians group alone. Only neural activity in the right auditory cortex scaled with the fine pitch-discrimination thresholds within the musicians. These findings suggest two levels of neuroplasticity in musicians, whereby training-dependent changes in pitch processing arise at the collicular level and are preserved and further enhanced in the right auditory cortex.

Highlights:
- Evidence of both subcortical and cortical plasticity in musicians via fMRI.
- Subcortical responses reflect pitch-discrimination performance across all subjects.
- Responses in the right auditory cortex predict pitch discrimination in musicians.
- Resolvability effect in anterior auditory cortex in musicians and non-musicians.
- Novel paradigm matching task difficulty across musicians and non-musicians.

Collaboration


Dive into Jens Hjortkjær's collaboration.

Top Co-Authors

Torsten Dau (Technical University of Denmark)
Federica Bianchi (Technical University of Denmark)
Hartwig R. Siebner (Copenhagen University Hospital)
Sébastien Santurette (Technical University of Denmark)
Søren A. Fuglsang (Technical University of Denmark)
Dorothea Wendt (Technical University of Denmark)
Daniel D. E. Wong (École Normale Supérieure)