
Publication


Featured research published by Jona Sassenhagen.


Brain and Language | 2014

The P600-as-P3 hypothesis revisited: single-trial analyses reveal that the late EEG positivity following linguistically deviant material is reaction time aligned.

Jona Sassenhagen; Matthias Schlesewsky; Ina Bornkessel-Schlesewsky

The P600, a late positive ERP component following linguistically deviant stimuli, is commonly seen as indexing structural, high-level processes, e.g. of linguistic (re)analysis. It has also been identified with the P3 (P600-as-P3 hypothesis), which is thought to reflect a systemic neuromodulator release facilitating behavioural shifts and is usually response time aligned. We investigated single-trial alignment of the P600 to response, a critical prediction of the P600-as-P3 hypothesis. Participants heard sentences containing morphosyntactic and semantic violations and responded via a button press. The elicited P600 was perfectly response aligned, while an N400 following semantic deviations was stimulus aligned. This is, to our knowledge, the first single-trial analysis of language processing data using within-sentence behavioural responses as temporal covariates. Results support the P600-as-P3 perspective and thus constitute a step towards a neurophysiological grounding of language-related ERPs.


Cortex | 2015

The P600 as a correlate of ventral attention network reorientation

Jona Sassenhagen; Ina Bornkessel-Schlesewsky

When, during language processing, a reader or listener is confronted with a structurally deviant phrase, this typically elicits a late positive ERP deflection (P600). The P600 is often understood as a correlate of structural analysis. This assumption has informed a number of neurocognitive models of language. However, the P600 strongly resembles the P3, likely a more general electrophysiological correlate of reorientation behaviour supported by noradrenergic input to the ventral attention network/VAN. Some researchers have proposed that the P600 is an instance of the P3, not a distinct component reflecting the analysis of structured inputs. Here, we tested the P600-as-P3 hypothesis by estimating the alignment of the P600 elicited in a visual sentence processing task to simultaneously collected behavioural measures. A similar analysis was undertaken for a P3 elicited in a separate non-linguistic (face detection) task. Since the P3 is usually aligned to reaction time/RT, the same should hold for the P600; a failure to find RT alignment of the P600 would pose a problem for the P600-as-P3 hypothesis. In contrast, RT alignment of the P600 would associate it with the well-established VAN/locus coeruleus-noradrenaline network subserving cortical reorientation. We failed to falsify the hypothesis of RT alignment. Secondary measures, while more ambiguous, were also in agreement with the P600-as-P3 hypothesis. We interpret our results as corroborating the hypothesis that the P600 is a P3, in that it shows that the P600 is RT-aligned. This perspective is not predicted by an account of the P600 as indexing high-level processing.


Brain and Language | 2016

A common misapplication of statistical inference: Nuisance control with null-hypothesis significance tests.

Jona Sassenhagen; Phillip M. Alday

Experimental research on behavior and cognition frequently rests on stimulus or subject selection where not all characteristics can be fully controlled, even when attempting strict matching. For example, when contrasting patients to controls, variables such as intelligence or socioeconomic status are often correlated with patient status. Similarly, when presenting word stimuli, variables such as word frequency are often correlated with primary variables of interest. One procedure very commonly employed to control for such nuisance effects is conducting inferential tests on confounding stimulus or subject characteristics. For example, if word length is not significantly different for two stimulus sets, they are considered as matched for word length. Such a test has high error rates and is conceptually misguided. It reflects a common misunderstanding of statistical tests: interpreting significance as referring not to inference about a particular population parameter, but to (1) the sample in question, or (2) the practical relevance of a sample difference (so that a nonsignificant test is taken to indicate evidence for the absence of relevant differences). We show inferential testing for assessing nuisance effects to be inappropriate both pragmatically and philosophically, present a survey showing its high prevalence, and briefly discuss an alternative in the form of regression including nuisance variables.


Neuropsychologia | 2018

No evidence from MVPA for different processes underlying the N300 and N400 incongruity effects in object-scene processing

Dejan Draschkow; Edvard Heikel; Melissa L.-H. Võ; Christian J. Fiebach; Jona Sassenhagen

Attributing meaning to diverse visual input is a core feature of human cognition. Violating environmental expectations (e.g., a toothbrush in the fridge) induces a late negativity of the event-related potential (ERP). This N400 ERP has not only been linked to the semantic processing of language, but also to objects and scenes. Inconsistent object-scene relationships are additionally associated with an earlier negative deflection of the EEG signal between 250 and 350 ms. This N300 is hypothesized to reflect pre-semantic perceptual processes. To investigate whether these two components are truly separable, or if the early object-scene integration activity (250-350 ms) shares certain levels of processing with the late neural correlates of meaning processing (350-500 ms), we used time-resolved multivariate pattern analysis (MVPA), in which a classifier trained at one time point in a trial (e.g., during the N300 time window) is tested at every other time point (i.e., including the N400 time window). Forty participants were presented with semantic inconsistencies in which an object was inconsistent with a scene's meaning. Replicating previous findings, our manipulation produced significant N300 and N400 deflections. MVPA revealed above-chance decoding performance for classifiers trained during time points of the N300 component and tested during later time points of the N400, and vice versa. This provides no evidence for the activation of two separable neurocognitive processes following the violation of context-dependent predictions in visual scene perception. Our data support the early appearance of high-level, context-sensitive processes in visual cognition.
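The temporal-generalization logic used in this and several of the following papers — train a classifier at one time point, test it at every other time point — can be sketched on synthetic data. Everything below (signal shape, classifier choice, dimensions) is invented for illustration and is not taken from the paper; real analyses would use epoched EEG and cross-validation, e.g. via MNE-Python.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical synthetic "EEG": 100 trials x 8 channels x 20 time points,
# with a condition-dependent signal injected from time point 8 onwards.
rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 100, 8, 20
y = rng.integers(0, 2, n_trials)            # binary condition labels
X = rng.normal(size=(n_trials, n_channels, n_times))
X[:, 0, 8:] += 1.5 * y[:, None]             # signal carried by channel 0

# Temporal generalization: fit a classifier at each training time point,
# then score it at every testing time point (a time x time matrix).
half = n_trials // 2                        # simple split: train / test trials
scores = np.zeros((n_times, n_times))
for t_train in range(n_times):
    clf = LogisticRegression(max_iter=1000).fit(X[:half, :, t_train], y[:half])
    for t_test in range(n_times):
        scores[t_train, t_test] = clf.score(X[half:, :, t_test], y[half:])

# Off-diagonal above-chance cells (train at one latency, test at another)
# are the signature of a shared neural pattern across time windows.
```

Decoding stays near chance (0.5) in the pre-signal block of the matrix and rises above chance within the signal window, including off-diagonal cells, which is the pattern the paper reports for N300-trained/N400-tested classifiers.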


Brain and Language | 2018

Time-generalized multivariate analysis of EEG responses reveals a cascading architecture of semantic mismatch processing

Edvard Heikel; Jona Sassenhagen; Christian J. Fiebach

Highlights:
- The cognitive architecture underlying sentence processing is still debated.
- We apply time-generalized MVPA decoding to EEG correlates of semantic processing.
- Decoding yields distinct but overlapping EEG patterns for N400 and P600 time windows.
- This suggests an incrementally cascading as opposed to strictly serial architecture.
- GAT provides valuable complementary insights into language processing.

Abstract: Event-related brain potentials have a strong impact on neurocognitive models, as they inform about the temporal sequence of cognitive processes. Nevertheless, their value for deciding among alternative cognitive architectures is partly limited by component overlap and the possibility of ambiguity regarding component identity. Here, we apply temporally-generalized multivariate pattern analysis – a recently-proposed machine learning method capable of tracking the evolution of neurocognitive processes over time – to constrain possible alternative architectures underlying the processing of semantic incongruency in sentences. In a spoken sentence paradigm, we replicate established N400/P600 correlates of semantic mismatch. Time-generalized decoding indicates that early vs. late mismatch-sensitive processes are (i) distinct in their neural substrate, arguing against recurrent or latency-shifted single-process architectures, and (ii) partially overlapping in time, inconsistent with predictions of strictly serial models. These results are in accordance with an incremental-cascading neurocognitive organization of semantic mismatch processing. We propose time-generalized multivariate decoding as a valuable tool for neurocognitive language studies.


bioRxiv | 2018

Finding the P3 in the P600: Decoding shared neural mechanisms of responses to syntactic violations and oddball target detection

Jona Sassenhagen; Christian J. Fiebach

The P600 Event-Related Brain Potential, elicited by syntactic violations in sentences, is generally interpreted as indicating language-specific structural/combinatorial processing, with far-reaching implications for models of language. P600 effects are also often taken as evidence for language-like grammars in non-linguistic domains like music or arithmetic. An alternative account, however, interprets the P600 as a P3, a domain-general brain response to salience. Using time-generalized multivariate pattern analysis, we demonstrate that P3 EEG patterns, elicited in a visual Oddball experiment, account for the P600 effect elicited in a syntactic violation experiment: P3 pattern-trained MVPA can classify P600 trials just as well as P600-trained ones. A second study replicates and generalizes this finding, and demonstrates its specificity by comparing it to face- and semantic mismatch-associated EEG responses. These results indicate that P3 and P600 share neural patterns to a substantial degree, calling into question the interpretation of P600 as a language-specific brain response and instead strengthening its association with the P3. More generally, our data indicate that observing P600-like brain responses provides no direct evidence for the presence of language-like grammars, in language or elsewhere.


bioRxiv | 2018

Visual word recognition relies on an orthographic prediction error signal

Benjamin Gagl; Jona Sassenhagen; Sophia Haan; Klara Gregorova; Fabio Richlan; Christian J. Fiebach

Current cognitive models of reading assume that word recognition involves the 'bottom-up' assembly of perceived low-level visual features into letters, letter combinations, and words. This rather inefficient strategy, however, is incompatible with neurophysiological theories of Bayesian-like predictive neural computations during perception. Here we propose that prior knowledge of the words in a language is used to 'explain away' redundant and highly expected parts of the percept. As a result, subsequent processing stages operate upon an optimized representation highlighting information relevant for word identification, i.e., the orthographic prediction error. We demonstrate empirically that the orthographic prediction error accounts for word recognition behavior. We then report neurophysiological data showing that this informationally optimized orthographic prediction error is processed around 200 ms after word-onset in the occipital cortex. The remarkable efficiency of reading, thus, is achieved by optimizing the mental representation of the visual percept, based on prior visual-orthographic knowledge.

Most current models assume that the perceptual and cognitive processes of visual word recognition and reading operate upon neuronally coded domain-general low-level visual representations – typically oriented line representations. We here demonstrate, consistent with neurophysiological theories of Bayesian-like predictive neural computations, that prior visual knowledge of words may be utilized to 'explain away' redundant and highly expected parts of the visual percept. Subsequent processing stages, accordingly, operate upon an optimized representation of the visual input, the orthographic prediction error, highlighting only the visual information relevant for word identification. We show that this optimized representation is related to orthographic word characteristics, accounts for word recognition behavior, and is processed early in the visual processing stream, i.e., in V4 and before 200 ms after word-onset. Based on these findings, we propose that prior visual-orthographic knowledge is used to optimize the representation of visually presented words, which in turn allows for highly efficient reading processes.


bioRxiv | 2018

Reading at the speed of speech: the rate of eye movements aligns with auditory language processing

Benjamin Gagl; Julius Golch; Stefan Hawelka; Jona Sassenhagen; David Poeppel; Christian J. Fiebach

Across languages, the speech signal is characterized by a ~4-5 Hz rhythm of the amplitude modulation spectrum, reflecting the processing of linguistic information chunks approximately every 200 ms. Interestingly, ~200 ms is also the typical eye-fixation duration during reading. Prompted by this observation, we estimated the frequency at which readers sample text, and demonstrate that they read sentences at a rate of ~5 Hz. We then examined the generality of this finding in a meta-analysis. While replicating the experimentally measured 5 Hz sampling rate in the language in which it was obtained, i.e., German, we observe that fixation-based sampling frequencies vary across languages between 3.1 and 5.2 Hz, with the majority of languages lying between 4 and 5 Hz. Remarkably, we identify a systematic rate reduction from easy to difficult writing systems. Reading in easy-to-process writing systems thus is aligned with the rhythm of speech, which may constitute an upper boundary for reading. We argue that reading is likely tuned to supply information to linguistic processes at an optimal rate, coincident with the typical rate of speech.

Significance Statement
Across languages, speech is produced and perceived at a rate of 4-5 Hz. When listening to speech, our brain 'picks up' this temporal structure. Here we report that during reading, our eyes sample text at the same rate. We demonstrate this empirically in one language, and generalize this finding in a meta-analysis of 124 empirical studies covering 14 different languages. Reading rates vary between 3.1 and 5.2 Hz, i.e., broadly in the range of the speech signal. We demonstrate that this variance is determined by the orthographic difficulty of different writing systems, and propose that the rate at which our brain processes spoken language acts as a driving force and upper limit for the voluntary control of eye movements during reading.

Abstract
Across languages, the speech signal is characterized by a predominant modulation of the amplitude spectrum at ~4-5 Hz, reflecting the processing of linguistic information chunks (i.e., syllables or words) approximately every 200 ms. Interestingly, ~200 ms is also the typical duration of eye fixations during reading. Prompted by this observation, we estimated the frequency at which German readers sample text, and demonstrate that they read sentences at a rate of ~5 Hz. We then examined the generality of this finding in a meta-analysis including 14 languages. We replicated the empirical result for German and observed that fixation-based sampling frequencies vary across languages between 3.9 and 5.2 Hz. Remarkably, we identified a systematic rate reduction from easy to difficult writing systems. Finally, we directly investigated in a new experiment the association between speech spectrum and eye-movement sampling frequency at a person-specific level and found a significant correlation. Based on this evidence, we argue that during reading, the rate of our eye movements is tuned to supply information to language comprehension processes at a preferred rate, coincident with the typical rate of speech.

Significance Statement
Across languages, speech is produced and perceived at a rate of ~4-5 Hz. When listening to speech, our brain capitalizes on this temporal structure to segment speech. We show empirically that while reading, our eyes sample text at the same rate, and generalize this finding in a meta-analysis covering 14 languages. Reading rates vary between 3.9 and 5.2 Hz, i.e., within the typical range of the speech signal. We demonstrate that the difficulty of writing systems underpins this variance. Lastly, we demonstrate that the speech rate between persons is correlated with the rate at which their eyes sample text. The speech rate of spoken language appears to act as a driving force for the voluntary control of eye movements during reading.


bioRxiv | 2018

Decoding semantic predictions from EEG prior to word onset

Edvard Heikel; Jona Sassenhagen; Christian J. Fiebach

ABSTRACT The outstanding speed of language comprehension necessitates a highly efficient implementation of cognitive-linguistic processes. The domain-general theory of Predictive Coding suggests that our brain solves this problem by continuously forming linguistic predictions about expected upcoming input. The neurophysiological implementation of these predictive linguistic processes, however, is not yet understood. Here, we use EEG (human participants, both sexes) to investigate the existence and nature of online-generated, category-level semantic representations during sentence processing. We conducted two experiments in which some nouns – embedded in a predictive spoken sentence context – were unexpectedly delayed by 1 second. Target nouns were either abstract/concrete (Experiment 1) or animate/inanimate (Experiment 2). We hypothesized that if neural prediction error signals following (temporary) omissions carry specific information about the stimulus, the semantic category of the upcoming target word is encoded in brain activity prior to its presentation. Using time-generalized multivariate pattern analysis, we demonstrate significant decoding of word category from silent periods directly preceding the target word, in both experiments. This provides direct evidence for predictive coding during sentence processing, i.e., that information about a word can be encoded in brain activity before it is perceived. While the same semantic contrast could also be decoded from EEG activity elicited by isolated words (Experiment 1), the identified neural patterns did not generalize to pre-stimulus delay period activity in sentences. Our results not only indicate that the brain processes language predictively, but also demonstrate the nature and sentence-specificity of category-level semantic predictions preactivated during sentence comprehension. 
STATEMENT OF SIGNIFICANCE The speed of language comprehension necessitates a highly efficient implementation of cognitive-linguistic processes. Predictive processing has been suggested as a solution to this problem, but the underlying neural mechanisms and linguistic content of such predictions are only poorly understood. Inspired by Predictive Coding theory, we investigate whether the meaning of expected, but not-yet heard words can be decoded from brain activity. Using EEG, we can predict if a word is, e.g., abstract (as opposed to concrete), or animate (vs. inanimate), from brain signals preceding the word itself. This strengthens predictive coding theory as a likely candidate for the principled neural mechanisms underlying online processing of language and indicates that predictive processing applies to highly abstract categories like semantics.


Language, cognition and neuroscience | 2018

How to analyse electrophysiological responses to naturalistic language with time-resolved multiple regression

Jona Sassenhagen

ABSTRACT Naturalistic language processing cannot be approached with the analysis methods constructed to handle well-controlled experiments. Language is a multi- and cross-level phenomenon, with sequential interdependencies and correlations between various lexical dimensions. A recently-developed method allows the analysis of neural time series during natural story comprehension: time-resolved multiple regression. It consists in modelling continuous brain recordings with multiple regression after embedding linguistic features in a temporal-extension matrix (a distributed-lags model). It identifies neural correlates of linguistic processes, accounting for temporal interdependencies – simultaneously for, e.g. acoustics, phonology and semantics. This has resulted in impactful discoveries about how brains process coherent speech, potentially broadening the class of phenomena that can be studied. I discuss the method conceptually, highlight caveats, and relate it to similar as well as to traditional methods, all with a particular consideration for analysing the processing of coherent narratives. In a practical example, the word frequency-dependent N400 effect is estimated from a half-hour continuous narrative.
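The distributed-lags construction at the heart of time-resolved multiple regression can be sketched in a few lines of NumPy on simulated data. The sampling rate, the "N400-like" kernel, and the event structure below are invented for illustration and are not taken from the paper; the point is only how a lagged design matrix lets ordinary least squares recover a lag-resolved response waveform from continuous recordings.

```python
import numpy as np

rng = np.random.default_rng(0)
sfreq = 100                       # Hz, hypothetical sampling rate
n_samples = 5000
n_lags = 60                       # model 0-590 ms of response per event

# Hypothetical continuous predictor: impulses at word onsets, each scaled
# by a (mean-centred) word-frequency value for that word.
word_onsets = rng.choice(n_samples - n_lags, 80, replace=False)
word_freq = rng.normal(size=80)
stim = np.zeros(n_samples)
stim[word_onsets] = word_freq

# Ground-truth frequency-dependent response kernel, for simulation only:
# a negativity peaking ~400 ms after word onset.
t = np.arange(n_lags) / sfreq
kernel = -np.exp(-((t - 0.4) ** 2) / 0.005)

# Simulated continuous "EEG": convolved responses plus noise.
eeg = np.convolve(stim, kernel)[:n_samples] + rng.normal(scale=0.1, size=n_samples)

# Temporal-extension (distributed-lags) design matrix: one column per lag,
# so column k holds the predictor shifted k samples into the past.
X = np.column_stack([np.roll(stim, lag) for lag in range(n_lags)])
X[:n_lags] = 0                    # zero out wrap-around introduced by np.roll

# Least squares recovers the lag-resolved regression weights, i.e. the
# estimated waveform of the frequency-dependent response.
beta, *_ = np.linalg.lstsq(X, eeg, rcond=None)
```

With overlapping responses, the regression disentangles what simple averaging would smear together; in a real analysis, columns for acoustics, phonology, and other lexical dimensions would be stacked into the same matrix so their temporal responses are estimated simultaneously.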

Collaboration


Dive into Jona Sassenhagen's collaborations.

Top Co-Authors

Edvard Heikel
Goethe University Frankfurt

Benjamin Gagl
Goethe University Frankfurt

Melissa L.-H. Võ
Goethe University Frankfurt

Matthias Schlesewsky
University of South Australia

Dejan Draschkow
Goethe University Frankfurt

Phillip M. Alday
University of South Australia