Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Tijl Grootswagers is active.

Publication


Featured research published by Tijl Grootswagers.


Journal of Cognitive Neuroscience | 2017

Decoding dynamic brain patterns from evoked responses: A tutorial on multivariate pattern analysis applied to time series neuroimaging data

Tijl Grootswagers; Susan G. Wardle; Thomas A. Carlson

Multivariate pattern analysis (MVPA) or brain decoding methods have become standard practice in analyzing fMRI data. Although decoding methods have been extensively applied in brain–computer interfaces, these methods have only recently been applied to time series neuroimaging data such as MEG and EEG to address experimental questions in cognitive neuroscience. In a tutorial style review, we describe a broad set of options to inform future time series decoding studies from a cognitive neuroscience perspective. Using example MEG data, we illustrate the effects that different options in the decoding analysis pipeline can have on experimental results where the aim is to “decode” different perceptual stimuli or cognitive states over time from dynamic brain activation patterns. We show that decisions made at both preprocessing (e.g., dimensionality reduction, subsampling, trial averaging) and decoding (e.g., classifier selection, cross-validation design) stages of the analysis can significantly affect the results. In addition to standard decoding, we describe extensions to MVPA for time-varying neuroimaging data including representational similarity analysis, temporal generalization, and the interpretation of classifier weight maps. Finally, we outline important caveats in the design and interpretation of time series decoding experiments.
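To make the pipeline choices discussed above concrete, here is a minimal time-resolved decoding sketch in Python (scikit-learn). It is not the tutorial's own code: the MEG data are simulated, and the array shapes, the linear discriminant classifier, and the 5-fold cross-validation are assumptions chosen so the example is self-contained.

```python
# Minimal sketch of time-resolved decoding, not the authors' exact pipeline.
# Assumes epoched MEG data shaped (n_trials, n_channels, n_times) and one
# label per trial; both are simulated here so the example runs on its own.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 100
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)           # two stimulus classes
X[y == 1, :, 40:60] += 0.3                 # inject a decodable effect mid-epoch

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
accuracy = np.empty(n_times)
for t in range(n_times):                   # fit and test a classifier per time point
    accuracy[t] = cross_val_score(
        LinearDiscriminantAnalysis(), X[:, :, t], y, cv=cv
    ).mean()

print("peak decoding accuracy:", accuracy.max().round(3))
```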


Neuropsychologia | 2017

Neural signatures of dynamic emotion constructs in the human brain

Tijl Grootswagers; Briana L. Kennedy; Steven B. Most; Thomas A. Carlson

How is emotion represented in the brain: is it categorical or along dimensions? In the present study, we applied multivariate pattern analysis (MVPA) to magnetoencephalography (MEG) to study the brain's temporally unfolding representations of different emotion constructs. First, participants rated 525 images on the dimensions of valence and arousal and by intensity of discrete emotion categories (happiness, sadness, fear, disgust, and anger). Thirteen new participants then viewed subsets of these images within an MEG scanner. We used Representational Similarity Analysis (RSA) to compare behavioral ratings to the unfolding neural representation of the stimuli in the brain. Ratings of valence and arousal explained significant proportions of the MEG data, even after corrections for low-level image properties. Additionally, behavioral ratings of the discrete emotions fear, disgust, and happiness significantly predicted early neural representations, whereas rating models of anger and sadness did not. Different emotion constructs also showed unique temporal signatures. Fear and disgust, both highly arousing and negative, were rapidly discriminated by the brain, but disgust was represented for an extended period of time relative to fear. Overall, our findings suggest that 1) dimensions of valence and arousal are quickly represented by the brain, as are some discrete emotions, and 2) different emotion constructs exhibit unique temporal dynamics. We discuss implications of these findings for theoretical understanding of emotion and for the interplay of discrete and dimensional aspects of emotional experience.
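A short sketch of the RSA logic described above: at each time point, correlate a neural dissimilarity matrix with a model dissimilarity matrix built from behavioural ratings. This is an illustration under assumed shapes and simulated data, not the study's analysis; the valence ratings used to build the model are hypothetical.

```python
# Minimal RSA sketch: one averaged MEG pattern per image per time point is
# compared against a behavioural model RDM (here, hypothetical valence ratings).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_images, n_channels, n_times = 40, 64, 100
meg = rng.standard_normal((n_images, n_channels, n_times))
valence = rng.uniform(-1, 1, n_images)              # hypothetical ratings

model_rdm = pdist(valence[:, None])                 # |valence_i - valence_j|
rsa_timecourse = np.empty(n_times)
for t in range(n_times):
    neural_rdm = pdist(meg[:, :, t], metric="correlation")
    rsa_timecourse[t] = spearmanr(neural_rdm, model_rdm)[0]

print("peak model-brain correlation:", rsa_timecourse.max().round(3))
```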


bioRxiv | 2018

Finding decodable information that is read out in behaviour

Tijl Grootswagers; Radoslaw Martin Cichy; Thomas A. Carlson

Multivariate decoding methods applied to neuroimaging data have become the standard in cognitive neuroscience for unravelling statistical dependencies between brain activation patterns and experimental conditions. The current challenge is to demonstrate that information decoded as such by the experimenter is in fact used by the brain itself to guide behaviour. Here we demonstrate a promising approach to do so in the context of neural activation during object perception and categorisation behaviour. We first localised decodable information about visual objects in the human brain using a spatially-unbiased multivariate decoding analysis. We then related brain activation patterns to behaviour using a machine-learning based extension of signal detection theory. We show that while there is decodable information about visual category throughout the visual brain, only a subset of those representations predicted categorisation behaviour, located mainly in anterior ventral temporal cortex. Our results have important implications for the interpretation of neuroimaging studies, highlight the importance of relating decoding results to behaviour, and suggest a suitable methodology towards this aim.


NeuroImage | 2018

Finding decodable information that can be read out in behaviour

Tijl Grootswagers; Radoslaw Martin Cichy; Thomas A. Carlson

Multivariate decoding methods applied to neuroimaging data have become the standard in cognitive neuroscience for unravelling statistical dependencies between brain activation patterns and experimental conditions. The current challenge is to demonstrate that decodable information is in fact used by the brain itself to guide behaviour. Here we demonstrate a promising approach to do so in the context of neural activation during object perception and categorisation behaviour. We first localised decodable information about visual objects in the human brain using a multivariate decoding analysis and a spatially-unbiased searchlight approach. We then related brain activation patterns to behaviour by testing whether the classifier used for decoding can be used to predict behaviour. We show that while there is decodable information about visual category throughout the visual brain, only a subset of those representations, strongest in anterior ventral temporal cortex, predicted categorisation behaviour. Our results have important implications for the interpretation of neuroimaging studies, highlight the importance of relating decoding results to behaviour, and suggest a suitable methodology towards this aim.

Highlights: We tested whether decodable information can be used by the brain for behaviour. Only a subset of decodable representations predicted behaviour. This has important implications for the interpretation of neuroimaging studies. The results highlight the importance of relating decoding results to behaviour.
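A minimal sketch of the core idea, relating a classifier's graded output to behaviour. It is not the authors' pipeline: the searchlight step is omitted, the data are simulated, and the per-exemplar behavioural scores are hypothetical placeholders.

```python
# Sketch of the "decodable information read out in behaviour" idea: does the
# classifier's graded output for each exemplar predict behavioural
# categorisation performance? All data are simulated.
import numpy as np
from scipy.stats import spearmanr
from sklearn.model_selection import cross_val_predict
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
n_exemplars, n_voxels = 48, 200
X = rng.standard_normal((n_exemplars, n_voxels))
y = np.repeat([0, 1], n_exemplars // 2)            # e.g., animate vs inanimate
X[y == 1] += 0.3

# cross-validated distance to the decision boundary for each exemplar
dist = cross_val_predict(LinearSVC(), X, y, cv=6, method="decision_function")
evidence = np.where(y == 1, dist, -dist)           # signed towards correct class

# hypothetical behavioural accuracy per exemplar (simulated placeholder)
behaviour = rng.uniform(0.6, 1.0, n_exemplars)
rho = spearmanr(evidence, behaviour)[0]
print("brain-behaviour correlation:", round(rho, 3))
```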


Journal of Cognitive Neuroscience | 2017

Asymmetric Compression of Representational Space for Object Animacy Categorization under Degraded Viewing Conditions

Tijl Grootswagers; J. Brendan Ritchie; Susan G. Wardle; Andrew Heathcote; Thomas A. Carlson

Animacy is a robust organizing principle among object category representations in the human brain. Using multivariate pattern analysis methods, it has been shown that distance to the decision boundary of a classifier trained to discriminate neural activation patterns for animate and inanimate objects correlates with observer RTs for the same animacy categorization task [Ritchie, J. B., Tovar, D. A., & Carlson, T. A. Emerging object representations in the visual system predict reaction times for categorization. PLoS Computational Biology, 11, e1004316, 2015; Carlson, T. A., Ritchie, J. B., Kriegeskorte, N., Durvasula, S., & Ma, J. Reaction time for object categorization is predicted by representational distance. Journal of Cognitive Neuroscience, 26, 132–142, 2014]. Using MEG decoding, we tested if the same relationship holds when a stimulus manipulation (degradation) increases task difficulty, which we predicted would systematically decrease the distance of activation patterns from the decision boundary and increase RTs. In addition, we tested whether distance to the classifier boundary correlates with drift rates in the linear ballistic accumulator [Brown, S. D., & Heathcote, A. The simplest complete model of choice response time: Linear ballistic accumulation. Cognitive Psychology, 57, 153–178, 2008]. We found that distance to the classifier boundary correlated with RT, accuracy, and drift rates in an animacy categorization task. Split by animacy, the correlations between brain and behavior were sustained longer over the time course for animate than for inanimate stimuli. Interestingly, when examining the distance to the classifier boundary during the peak correlation between brain and behavior, we found that only degraded versions of animate, but not inanimate, objects had systematically shifted toward the classifier decision boundary as predicted. Our results support an asymmetry in the representation of animate and inanimate object categories in the human brain.
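The distance-to-boundary logic can be sketched as follows: at each time point, fit an animacy classifier, take each exemplar's signed distance from the decision boundary, and correlate it with categorisation RTs. The data, shapes, and omission of cross-validation are simplifying assumptions; this is an illustration rather than the published analysis.

```python
# Sketch of distance-to-boundary analysis over time with simulated MEG data
# and simulated reaction times.
import numpy as np
from scipy.stats import spearmanr
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
n_exemplars, n_sensors, n_times = 48, 32, 80
meg = rng.standard_normal((n_exemplars, n_sensors, n_times))
animacy = np.repeat([0, 1], n_exemplars // 2)
rt = rng.uniform(0.4, 0.8, n_exemplars)            # hypothetical mean RTs

corr = np.empty(n_times)
for t in range(n_times):
    # no cross-validation here, for brevity only
    clf = LinearDiscriminantAnalysis().fit(meg[:, :, t], animacy)
    dist = clf.decision_function(meg[:, :, t])     # signed distance to boundary
    evidence = np.where(animacy == 1, dist, -dist)
    corr[t] = spearmanr(evidence, rt)[0]           # predicted to be negative

print("strongest (most negative) correlation:", corr.min().round(3))
```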


bioRxiv | 2018

The representational dynamics of visual objects in rapid serial visual processing streams

Tijl Grootswagers; Amanda K. Robinson; Thomas A. Carlson

In our daily lives, we are bombarded with a stream of rapidly changing visual input. Humans have the remarkable capacity to detect and identify objects in fast-changing scenes. Yet, when studying brain representations, stimuli are generally presented in isolation. Here, we studied the dynamics of human vision using a combination of fast stimulus presentation rates, electroencephalography and multivariate decoding analyses. Using a rapid presentation rate of 5 images per second, we obtained the representational structure of a large number of stimuli, and showed the emerging abstract categorical organisation of this structure. In a second experiment, we replicated these results using an ultra-rapid presentation rate of 20 images per second. Our results show that the combination of naturalistic stimulus presentation rates and multivariate decoding analyses has unprecedented potential for studying the temporal dynamics of the structure of representations in the human visual system.
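A sketch of how a representational structure can be built from pairwise decoding in such rapid designs: decode every stimulus pair and compare within- versus between-category decodability. The data are simulated, the analysis runs at a single time point for brevity, and all shapes are assumptions.

```python
# Pairwise decoding as a dissimilarity measure, then a within- vs
# between-category comparison to probe categorical organisation.
import numpy as np
from itertools import combinations
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_stimuli, n_trials_per_stim, n_sensors = 16, 20, 32
category = np.repeat([0, 1], n_stimuli // 2)        # e.g., animate / inanimate
# one EEG pattern per trial at a single time point
X = rng.standard_normal((n_stimuli, n_trials_per_stim, n_sensors))
X += category[:, None, None] * 0.3                  # inject a category signal

rdm = np.zeros((n_stimuli, n_stimuli))
for i, j in combinations(range(n_stimuli), 2):
    data = np.vstack([X[i], X[j]])
    labels = np.repeat([0, 1], n_trials_per_stim)
    acc = cross_val_score(LinearDiscriminantAnalysis(), data, labels, cv=5).mean()
    rdm[i, j] = rdm[j, i] = acc                      # decodability as dissimilarity

within = rdm[category[:, None] == category[None, :]]
between = rdm[category[:, None] != category[None, :]]
print("within:", within[within > 0].mean().round(3),
      "between:", between.mean().round(3))
```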


bioRxiv | 2018

Seeing versus Knowing: The Temporal Dynamics of Real and Implied Colour Processing in the Human Brain

Lina Teichmann; Tijl Grootswagers; Thomas A. Carlson; Anina N. Rich

Colour is a defining feature of many objects, playing a crucial role in our ability to rapidly recognise things in the world around us and make categorical distinctions. For example, colour is a useful cue when distinguishing lemons from limes or blackberries from raspberries. That means our representation of many objects includes key colour-related information. The question addressed here is whether the neural representation activated by knowing that something is red is the same as that activated when we actually see something red, particularly in regard to timing. We addressed this question using neural timeseries (magnetoencephalography, MEG) data to contrast real colour perception and implied object colour activation. We applied multivariate pattern analysis (MVPA) to analyse the brain activation patterns evoked by colour accessed via real colour perception and implied colour activation. Applying MVPA to MEG data allows us here to focus on the temporal dynamics of these processes. Male and female human participants (N=18) viewed isoluminant red and green shapes and grey-scale, luminance-matched pictures of fruits and vegetables that are red (e.g., tomato) or green (e.g., kiwifruit) in nature. We show that the brain activation pattern evoked by real colour perception is similar to implied colour activation, but that this pattern is instantiated at a later time. These results demonstrate that a common colour representation can be triggered by activating object representations from memory and perceiving colours. We show here that a difference between these processes lies in the time it takes to access the common colour representation.
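A minimal cross-decoding and time-generalisation sketch for this kind of question: train a red-versus-green classifier on real-colour epochs at each time point and test it on implied-colour epochs at every time point. The data are simulated with an artificial temporal offset; shapes and effect sizes are assumptions, not the study's materials.

```python
# Cross-decoding from real colour to implied colour across a train-time by
# test-time grid (time generalisation), on simulated data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(5)
n_trials, n_sensors, n_times = 100, 32, 60
real = rng.standard_normal((n_trials, n_sensors, n_times))
implied = rng.standard_normal((n_trials, n_sensors, n_times))
colour = rng.integers(0, 2, n_trials)               # 0 = green, 1 = red
real[colour == 1, :, 20:30] += 0.4                  # early signal for real colour
implied[colour == 1, :, 35:45] += 0.4               # later signal for implied colour

gen = np.empty((n_times, n_times))
for train_t in range(n_times):
    clf = LinearDiscriminantAnalysis().fit(real[:, :, train_t], colour)
    for test_t in range(n_times):
        gen[train_t, test_t] = clf.score(implied[:, :, test_t], colour)

# above-chance cells off the diagonal (train early, test late) would indicate
# a shared but temporally shifted colour representation
print("best cross-decoding accuracy:", gen.max().round(3))
```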


Journal of Cognitive Neuroscience | 2018

Decoding digits and dice with magnetoencephalography: evidence for a shared representation of magnitude

Lina Teichmann; Tijl Grootswagers; Thomas A. Carlson; Anina N. Rich

Numerical format describes the way magnitude is conveyed, for example, as a digit (“3”) or Roman numeral (“III”). In the field of numerical cognition, there is an ongoing debate about whether magnitude representation is independent of numerical format. Here, we examine the time course of magnitude processing when using different symbolic formats. We presented participants with a series of digits and dice patterns corresponding to the magnitudes of 1 to 6 while they performed a 1-back task on magnitude. Magnetoencephalography offers an opportunity to record brain activity with high temporal resolution. Multivariate pattern analysis applied to magnetoencephalographic data allows us to draw conclusions about brain activation patterns underlying information processing over time. The results show that we can cross-decode magnitude when training the classifier on magnitude presented in one symbolic format and testing the classifier on the other symbolic format. This suggests a similar representation of these numerical symbols. In addition, results from a time generalization analysis show that digits were accessed slightly earlier than dice, demonstrating temporal asynchronies in their shared representation of magnitude. Together, our methods allow a distinction between format-specific signals and format-independent representations of magnitude, showing evidence that there is a shared representation of magnitude accessed via different symbols.
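A sketch of cross-format decoding in the spirit of the paragraph above: train a six-way magnitude classifier on digit trials and test it on dice trials (and vice versa) at each time point. The data, shapes, and injected format-independent magnitude code are simulated assumptions.

```python
# Cross-format magnitude decoding on simulated MEG data: train on one symbolic
# format, test on the other, averaging both directions; chance level is 1/6.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(6)
n_trials, n_sensors, n_times = 240, 32, 60
magnitude = np.tile(np.arange(1, 7), n_trials // 6)
digits = rng.standard_normal((n_trials, n_sensors, n_times))
dice = rng.standard_normal((n_trials, n_sensors, n_times))
shared = rng.standard_normal((6, n_sensors)) * 0.4   # format-independent code
digits[:, :, 25:40] += shared[magnitude - 1, :, None]
dice[:, :, 30:45] += shared[magnitude - 1, :, None]

acc = np.empty(n_times)
for t in range(n_times):
    clf_digits = LinearDiscriminantAnalysis().fit(digits[:, :, t], magnitude)
    clf_dice = LinearDiscriminantAnalysis().fit(dice[:, :, t], magnitude)
    acc[t] = (clf_digits.score(dice[:, :, t], magnitude)
              + clf_dice.score(digits[:, :, t], magnitude)) / 2

print("peak cross-format accuracy:", acc.max().round(3))
```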


Journal of Vision | 2015

Decoding the emerging representation of degraded visual objects in the human brain.

Tijl Grootswagers; Thomas A. Carlson

Object recognition is fast and reliable, and works even when our eyes are focused elsewhere. The aim of our study was to examine how the visual system compensates for degraded inputs in object recognition by looking at the time course of the brain's processing of naturally degraded visual object stimuli. The study used a set of 48 images depicting real-world objects (24 animate and 24 inanimate). In experiment 1, we degraded the images by varying the simulated focus, such that each image was equally recognizable. In experiment 2, we presented the intact and out-of-focus images to participants while their brain activity was recorded using magnetoencephalography (MEG). In the scanner, participants were asked to categorize the objects as animate or inanimate as quickly and accurately as possible. We predicted a behavioural reaction time effect and accordingly observed that degraded objects were recognized 22 ms more slowly. Time-resolved multivariate pattern analysis was used to decode category (animacy) membership, as well as object identity for all possible pairwise exemplar comparisons, as a function of time. In the decoding analysis, we observed lower decoding performance for degraded images overall, and the decoding onset and peak for degraded stimuli were 15 ms later. We assessed several models to explain the behavioural reaction time difference, including distance-based models, which predict reaction times based on exemplar decodability, as well as time-based models, which use the decoding onset and peak. Our analysis shows that distance-based models are better predictors. These findings suggest that the time at which decodable information emerges is less important for determining reaction time behaviour than the quality of the representation (decodability). Meeting abstract presented at VSS 2015.
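A compact sketch of the model comparison described above: derive a distance-based predictor (peak decodability) and a time-based predictor (decoding onset) for each exemplar, then see which correlates better with RTs. Everything here, including the per-exemplar decoding time courses and reaction times, is simulated for illustration and is not the study's code.

```python
# Comparing distance-based and time-based predictors of reaction time on
# simulated per-exemplar decoding time courses.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
n_exemplars, n_times = 48, 100
# hypothetical per-exemplar decoding time courses (accuracy over time)
gauss = np.exp(-0.5 * ((np.arange(n_times) - 50) / 15) ** 2)
decodability = 0.5 + 0.2 * rng.random(n_exemplars)[:, None] * gauss[None, :]
rt = rng.uniform(0.4, 0.8, n_exemplars)               # hypothetical mean RTs

peak = decodability.max(axis=1)                       # distance-based predictor
onset = (decodability > 0.55).argmax(axis=1)          # time-based predictor
                                                      # (0 if never above threshold)

print("RT vs peak decodability:", round(spearmanr(peak, rt)[0], 3))
print("RT vs decoding onset:   ", round(spearmanr(onset, rt)[0], 3))
```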


NeuroImage | 2016

Perceptual similarity of visual patterns predicts dynamic neural activation patterns measured with MEG

Susan G. Wardle; Nikolaus Kriegeskorte; Tijl Grootswagers; Seyed Mahdi Khaligh-Razavi; Thomas A. Carlson

Collaboration


Dive into Tijl Grootswagers's collaborations.

Top Co-Authors

Nikolaus Kriegeskorte

Cognition and Brain Sciences Unit
