Publication


Featured research published by I. Groen.


Neuron | 2012

Intact memory for irrelevant information impairs perception in amnesia

Morgan D. Barense; I. Groen; Andy C. H. Lee; Lok-Kin Yeung; Sinead M. Brady; Mariella Gregori; Narinder Kapur; Timothy J. Bussey; Lisa M. Saksida; Richard N. Henson

Memory and perception have long been considered separate cognitive processes, and amnesia resulting from medial temporal lobe (MTL) damage is thought to reflect damage to a dedicated memory system. Recent work has questioned these views, suggesting that amnesia can result from impoverished perceptual representations in the MTL, causing an increased susceptibility to interference. Using a perceptual matching task for which fMRI implicated a specific MTL structure, the perirhinal cortex, we show that amnesics with MTL damage including the perirhinal cortex, but not those with damage limited to the hippocampus, were vulnerable to object-based perceptual interference. Importantly, when we controlled such interference, their performance recovered to normal levels. These findings challenge prevailing conceptions of amnesia, suggesting that effects of damage to specific MTL regions are better understood not in terms of damage to a dedicated declarative memory system, but in terms of impoverished representations of the stimuli those regions maintain.


The Journal of Neuroscience | 2013

From image statistics to scene gist: evoked neural activity reveals transition from low-level natural image structure to scene category

I. Groen; Sennay Ghebreab; H. Prins; Victor A. F. Lamme; H.S. Scholte

The visual system processes natural scenes in a split second. Part of this process is the extraction of “gist,” a global first impression. It is unclear, however, how the human visual system computes this information. Here, we show that, when human observers categorize global information in real-world scenes, the brain exhibits strong sensitivity to low-level summary statistics. Subjects rated a specific instance of a global scene property, naturalness, for a large set of natural scenes while EEG was recorded. For each individual scene, we derived two physiologically plausible summary statistics by spatially pooling local contrast filter outputs: contrast energy (CE), indexing contrast strength, and spatial coherence (SC), indexing scene fragmentation. We show that behavioral performance is directly related to these statistics, with naturalness rating being influenced in particular by SC. At the neural level, both statistics parametrically modulated single-trial event-related potential amplitudes during an early, transient window (100–150 ms), but SC continued to influence activity levels later in time (up to 250 ms). In addition, the magnitude of neural activity that discriminated between man-made versus natural ratings of individual trials was related to SC, but not CE. These results suggest that global scene information may be computed by spatial pooling of responses from early visual areas (e.g., LGN or V1). The increased sensitivity over time to SC in particular, which reflects scene fragmentation, suggests that this statistic is actively exploited to estimate scene naturalness.
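
The pooling step described above can be illustrated with a short sketch. This is a hedged approximation rather than the authors' pipeline: it uses a difference-of-Gaussians filter in place of the physiologically motivated contrast filters and summarizes the pooled responses with simple moments to obtain CE- and SC-like values.

```python
# Minimal sketch, not the authors' exact pipeline: CE- and SC-like summary
# statistics obtained by spatially pooling local contrast filter outputs.
# The difference-of-Gaussians filter and moment-based summaries are
# illustrative stand-ins for the filters used in the paper.
import numpy as np
from scipy.ndimage import gaussian_filter

def contrast_statistics(image, sigma=2.0):
    """Return (CE, SC)-like statistics for a 2-D grayscale image array."""
    img = np.asarray(image, dtype=float)
    # Local contrast: magnitude of a difference-of-Gaussians response.
    dog = gaussian_filter(img, sigma) - gaussian_filter(img, 2 * sigma)
    contrast = np.abs(dog).ravel()
    contrast = contrast[contrast > 1e-12]
    ce = contrast.mean()                   # contrast energy: overall contrast strength
    sc = contrast.std() / contrast.mean()  # spatial coherence proxy: spread of the
                                           # pooled distribution (scene fragmentation)
    return ce, sc
```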


PLOS Computational Biology | 2012

Spatially Pooled Contrast Responses Predict Neural and Perceptual Similarity of Naturalistic Image Categories

I. Groen; Sennay Ghebreab; Victor A. F. Lamme; H. Steven Scholte

The visual world is complex and continuously changing. Yet, our brain transforms patterns of light falling on our retina into a coherent percept within a few hundred milliseconds. Possibly, low-level neural responses already carry substantial information to facilitate rapid characterization of the visual input. Here, we computationally estimated low-level contrast responses to computer-generated naturalistic images, and tested whether spatial pooling of these responses could predict image similarity at the neural and behavioral level. Using EEG, we show that statistics derived from pooled responses explain a large amount of variance between single-image evoked potentials (ERPs) in individual subjects. Dissimilarity analysis on multi-electrode ERPs demonstrated that large differences between images in pooled response statistics are predictive of more dissimilar patterns of evoked activity, whereas images with little difference in statistics give rise to highly similar evoked activity patterns. In a separate behavioral experiment, images with large differences in statistics were judged as different categories, whereas images with small differences were confused. These findings suggest that statistics derived from low-level contrast responses can be extracted in early visual processing and can be relevant for rapid judgment of visual similarity. We compared our results with two other, well-known contrast statistics: Fourier power spectra and higher-order properties of contrast distributions (skewness and kurtosis). Interestingly, whereas these statistics allow for accurate image categorization, they do not predict ERP response patterns or behavioral categorization confusions. These converging computational, neural and behavioral results suggest that statistics of pooled contrast responses contain information that corresponds with perceived visual similarity in a rapid, low-level categorization task.
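
For reference, the two comparison statistics mentioned above can be sketched as follows. The exact computations in the paper may differ; the radially averaged power-spectrum slope and the simple high-pass contrast map used here are illustrative choices, not the authors' code.

```python
# Hedged sketch of the comparison statistics: a Fourier power-spectrum summary
# (slope of log power vs. log spatial frequency) and skewness/kurtosis of a
# simple local contrast map. Both are approximations for illustration only.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import kurtosis, skew

def fourier_slope(image):
    """Slope of the radially averaged log-log power spectrum of a 2-D image."""
    img = np.asarray(image, dtype=float)
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    cy, cx = power.shape[0] // 2, power.shape[1] // 2
    y, x = np.indices(power.shape)
    radius = np.hypot(y - cy, x - cx).astype(int)
    radial = np.bincount(radius.ravel(), weights=power.ravel()) / np.bincount(radius.ravel())
    freqs = np.arange(1, min(cy, cx))          # skip the DC component
    slope, _ = np.polyfit(np.log(freqs), np.log(radial[freqs]), 1)
    return slope

def contrast_moments(image, sigma=2.0):
    """Skewness and kurtosis of a high-pass (local contrast) version of the image."""
    img = np.asarray(image, dtype=float)
    contrast = img - gaussian_filter(img, sigma)
    return skew(contrast.ravel()), kurtosis(contrast.ravel())
```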


Journal of Neurophysiology | 2016

The time course of natural scene perception with reduced attention

I. Groen; Sennay Ghebreab; Victor A. F. Lamme; H. Steven Scholte

Attention is thought to impose an informational bottleneck on vision by selecting particular information from visual scenes for enhanced processing. Behavioral evidence suggests, however, that some scene information is extracted even when attention is directed elsewhere. Here, we investigated the neural correlates of this ability by examining how attention affects electrophysiological markers of scene perception. In two electro-encephalography (EEG) experiments, human subjects categorized real-world scenes as manmade or natural (full attention condition) or performed tasks on unrelated stimuli in the center or periphery of the scenes (reduced attention conditions). Scene processing was examined in two ways: traditional trial averaging was used to assess the presence of a categorical manmade/natural distinction in event-related potentials, whereas single-trial analyses assessed whether EEG activity was modulated by scene statistics that are diagnostic of naturalness of individual scenes. The results indicated that evoked activity up to 250 ms was unaffected by reduced attention, showing intact categorical differences between manmade and natural scenes and strong modulations of single-trial activity by scene statistics in all conditions. Thus initial processing of both categorical and individual scene information remained intact with reduced attention. Importantly, however, attention did have profound effects on later evoked activity; full attention on the scene resulted in prolonged manmade/natural differences, increased neural sensitivity to scene statistics, and enhanced scene memory. These results show that initial processing of real-world scene information is intact with diminished attention but that the depth of processing of this information does depend on attention.
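
The two analysis styles (trial averaging versus single-trial modeling) can be sketched on simulated data. The array shapes, the simulated SC values, and the regression setup below are assumptions for illustration, not the study's actual pipeline.

```python
# Illustrative sketch on simulated EEG data: (1) a trial-averaged manmade/natural
# ERP difference and (2) a single-trial regression of amplitude on a scene
# statistic such as SC. All names, shapes, and effect sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_times = 400, 200
sc = rng.gamma(2.0, 1.0, n_trials)                 # per-trial scene statistic
is_natural = sc > np.median(sc)                    # crude category label
eeg = rng.normal(0.0, 1.0, (n_trials, n_times))
eeg[:, 50:80] += 0.5 * sc[:, None]                 # SC modulates an early window

# (1) Trial averaging: categorical manmade vs. natural ERP difference.
erp_diff = eeg[is_natural].mean(axis=0) - eeg[~is_natural].mean(axis=0)

# (2) Single-trial analysis: regress amplitude at each time point on SC.
X = np.column_stack([np.ones(n_trials), sc])
sc_betas = np.linalg.lstsq(X, eeg, rcond=None)[0][1]  # SC weight per time point

print(erp_diff[50:80].mean(), sc_betas[50:80].mean())
```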


Frontiers in Computational Neuroscience | 2012

Low-level contrast statistics are diagnostic of invariance of natural textures

I. Groen; Sennay Ghebreab; Victor A. F. Lamme; H. Steven Scholte

Texture may provide important clues for real world object and scene perception. To be reliable, these clues should ideally be invariant to common viewing variations such as changes in illumination and orientation. In a large image database of natural materials, we found textures with low-level contrast statistics that varied substantially under viewing variations, as well as textures that remained relatively constant. This led us to ask whether textures with constant contrast statistics give rise to more invariant representations compared to other textures. To test this, we selected natural texture images with either high (HV) or low (LV) variance in contrast statistics and presented these to human observers. In two distinct behavioral categorization paradigms, participants more often judged HV textures as “different” compared to LV textures, showing that textures with constant contrast statistics are perceived as being more invariant. In a separate electroencephalogram (EEG) experiment, evoked responses to single texture images (single-image ERPs) were collected. The results show that differences in contrast statistics correlated with both early and late differences in occipital ERP amplitude between individual images. Importantly, ERP differences between images of HV textures were mainly driven by illumination angle, which was not the case for LV images: there, differences were completely driven by texture membership. These converging neural and behavioral results imply that some natural textures are surprisingly invariant to illumination changes and that low-level contrast statistics are diagnostic of the extent of this invariance.
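
The HV/LV selection logic can be sketched as ranking materials by how much their contrast statistics spread across viewing conditions. The data layout and the coefficient-of-variation summary below are assumptions for illustration, not the paper's selection procedure.

```python
# Hedged sketch of the HV/LV split: rank materials by the relative spread of
# their (CE, SC) statistics across photographs taken under different
# illuminations and orientations. Data layout and summary measure are assumed.
import numpy as np

def rank_by_statistic_variance(stats_per_material):
    """stats_per_material maps material name -> array of shape (n_views, 2)
    holding (CE, SC) for each viewing condition. Returns names ordered from
    low variance (LV candidates) to high variance (HV candidates)."""
    spread = {}
    for name, stats in stats_per_material.items():
        stats = np.asarray(stats, dtype=float)
        # Coefficient of variation per statistic, averaged over CE and SC.
        cv = stats.std(axis=0) / (np.abs(stats.mean(axis=0)) + 1e-12)
        spread[name] = cv.mean()
    return sorted(spread, key=spread.get)
```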


Frontiers in Human Neuroscience | 2009

Your conflict matters to me! Behavioral and neural manifestations of control adjustment after self-experienced and observed decision-conflict

Jasper Winkel; Jasper G. Wijnen; K. Richard Ridderinkhof; I. Groen; Jan Derrfuss; Claudia Danielmeier; Birte U. Forstmann

In everyday life we tune our behavior to a rapidly changing environment as well as to the behavior of others. The behavioral and neural underpinnings of such adaptive mechanisms are the focus of the present study. In a social version of a prototypical interference task, we investigated whether trial-to-trial adjustments are comparable when we experience conflicting action tendencies ourselves and when we simulate such conflicts while observing another player performing the task. Using behavioral measures and event-related brain potentials, we showed that both our own and observed conflict result in comparable trial-to-trial adjustments. These adjustments are evident in the efficiency of behavioral responses and in the amplitude of an event-related potential in the N2 time window. In sum, in both behavioral and neural terms, we adapt to conflicts happening to others just as if they happened to us.


Frontiers in Computational Neuroscience | 2015

Visual dictionaries as intermediate features in the human brain

Kandan Ramakrishnan; H. Steven Scholte; I. Groen; Arnold W. M. Smeulders; Sennay Ghebreab

The human visual system is assumed to transform low-level visual features into object and scene representations via features of intermediate complexity. How the brain computationally represents intermediate features is still unclear. To further elucidate this, we compared the biologically plausible HMAX model and the Bag of Words (BoW) model from computer vision. Both computational models use visual dictionaries, candidate features of intermediate complexity, to represent visual scenes, and both have proven effective in automatic object and scene recognition. The models differ, however, in how they compute visual dictionaries and in their pooling techniques. We investigated where in the brain and to what extent human fMRI responses to a short video can be accounted for by multiple hierarchical levels of the HMAX and BoW models. Brain activity of 20 subjects obtained while viewing a short video clip was analyzed voxel-wise using a distance-based variation partitioning method. Results revealed that both HMAX and BoW explain a significant amount of brain activity in early visual regions V1, V2, and V3. However, BoW accounts for brain activity more consistently across subjects than HMAX. Furthermore, the visual dictionary representations of HMAX and BoW also explain a significant amount of brain activity in higher areas that are believed to process intermediate features. Overall, our results indicate that, although both HMAX and BoW account for activity in the human visual system, BoW appears to represent neural responses in low- and intermediate-level visual areas of the brain more faithfully.
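
The variation-partitioning idea can be illustrated with a simple least-squares version. The paper uses a distance-based formulation, so the sketch below only shows how explained variance is split into parts unique to each feature set and a shared part.

```python
# Illustrative sketch of variation partitioning with two predictor sets, e.g.
# HMAX features (X_a) and BoW features (X_b) for a single voxel's responses y.
# Ordinary least squares stands in for the paper's distance-based method.
import numpy as np

def r_squared(X, y):
    """Fraction of variance in y explained by a linear fit on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def partition_variance(X_a, X_b, y):
    """Split explained variance into parts unique to each model and a shared part."""
    r_a, r_b = r_squared(X_a, y), r_squared(X_b, y)
    r_ab = r_squared(np.column_stack([X_a, X_b]), y)
    return {"unique_A": r_ab - r_b,
            "unique_B": r_ab - r_a,
            "shared": r_a + r_b - r_ab}
```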


Social Neuroscience | 2012

Observed and self-experienced conflict induce similar behavioral and neural adaptation

Jasper Winkel; Jasper G. Wijnen; Claudia Danielmeier; I. Groen; Jan Derrfuss; K. Richard Ridderinkhof; Birte U. Forstmann

In adapting our behavior to a rapidly changing environment, we also tune our behavior to that of others. To investigate the neural bases of such adaptive mechanisms, we examined how individuals adjust their actions after decision-conflicts observed in others compared to self-experienced conflicts. Participants responded to the color of a stimulus, while its spatial position elicited either a conflicting or a congruent action. Participants were required either to respond to stimuli themselves or to observe the response of another participant. We studied the difference between interference effects following conflicting or congruent stimuli, an effect known as conflict adaptation. Consistent with earlier reports, we found that the implementation of reactive control, following congruent trials, was accompanied by activation of the right inferior frontal cortex. Individual differences in the efficacy of response inhibition covaried with the level of activation in that region. Sustaining proactive control, following incongruent trials, activated the left lateral prefrontal cortex. Most importantly, adaptive controls induced by decision-conflicts observed in others, as well as the associated prefrontal activations, were comparable to those induced by self-experienced conflicts. We show that in both behavioral and neural terms we adapt to conflicts happening to others just as if they happened to us.


International Conference on Multimedia and Expo | 2014

Visual dictionaries in the Brain: Comparing HMAX and BOW

Kandan Ramakrishnan; I. Groen; H.S. Scholte; Arnold W. M. Smeulders; Sennay Ghebreab

The human visual system is thought to use features of intermediate complexity for scene representation. How the brain computationally represents intermediate features is, however, still unclear. Here we tested and compared two widely used computational models, the biologically plausible HMAX model and the Bag of Words (BoW) model from computer vision, against human brain activity. These computational models use visual dictionaries, candidate features of intermediate complexity, to represent visual scenes, and the models have been proven effective in automatic object and scene recognition. We analyzed where in the brain and to what extent human fMRI responses to natural scenes can be accounted for by the HMAX and BoW representations. Voxel-wise application of a distance-based variation partitioning method reveals that HMAX explains significant brain activity in early visual regions and also in higher regions such as LO and TO, whereas BoW primarily explains brain activity in the early visual areas. Notably, both HMAX and BoW explain the most brain activity in higher areas such as V4 and TO. These results suggest that visual dictionaries might provide a suitable computation for the representation of intermediate features in the brain.


Journal of Vision | 2015

The posterior part of area LO responds to image statistics, the anterior part to categorical differences.

H. Steven Scholte; Ilja G. Sligte; I. Groen; Sennay Ghebreab

In the past we have shown that a substantial part of the early ERP responses, as well as responses in early visual cortex up to area LO, can be well understood on the basis of statistics derived from the distribution of contrasts in an image. One of these statistics, spatial coherence (SC; Groen et al., 2013), describes images along a range from structured to Gaussian. Images on the structured side typically have a strong figure/ground segmentation and often depict man-made scenes, whereas images on the Gaussian side are fragmented and typically depict natural scenes or contain scrambled content. Here we explore to what degree we can understand responses in area LO and ventral cortex using such statistics. We ask whether responses are better understood using statistics related to the spatial structure of an image, or on the basis of a categorical contrast such as image vs. scrambled image. We show that activity in the posterior part of LO is better explained by the SC statistic, whereas activity in the more anterior part of LO is better explained by the categorical contrast of image vs. scrambled. Meeting abstract presented at VSS 2015.
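
A minimal sketch of this kind of comparison, under assumed data shapes and not the authors' actual analysis: for each voxel, fit a continuous SC regressor and a categorical intact-vs-scrambled regressor separately and see which explains more response variance.

```python
# Hypothetical sketch: per-voxel comparison of a continuous SC regressor against
# a categorical image-vs-scrambled regressor. Names and shapes are assumptions.
import numpy as np

def compare_models(responses, sc, is_scrambled):
    """responses: (n_images, n_voxels); sc: (n_images,) spatial coherence values;
    is_scrambled: (n_images,) 0/1 labels. Returns per-voxel R^2 for each model
    and a mask of voxels better explained by the SC statistic."""
    responses = np.asarray(responses, dtype=float)

    def r2(x):
        X = np.column_stack([np.ones(len(x)), np.asarray(x, dtype=float)])
        beta, *_ = np.linalg.lstsq(X, responses, rcond=None)
        resid = responses - X @ beta
        return 1.0 - resid.var(axis=0) / responses.var(axis=0)

    r2_sc, r2_cat = r2(sc), r2(is_scrambled)
    return r2_sc, r2_cat, r2_sc > r2_cat
```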

Collaboration


Dive into I. Groen's collaborations.

Top Co-Authors

H.S. Scholte

University of Amsterdam
