A Bartels
Max Planck Society
Publications
Featured research published by A Bartels.
NeuroImage | 2018
Andreas Schindler; A Bartels
Our phenomenological experience of a stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head motion. Here we circumvented these limitations and allowed participants to move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire the neural signal related to head motion after the observer's head had been stabilized by inflatable air cushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head motion, which was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head rotation. By comparing congruent with incongruent conditions, we found evidence consistent with multi-modal integration of visual cues with head motion into a coherent “stable world” percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human region MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv), and a region in the precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation.
Highlights:
- First fMRI study integrating voluntary head motion with visual self-motion.
- BOLD dynamics allowed acquisition of neural responses to head motion after its offset.
- Evidence for multi-modal integration in insular and visual motion regions.
- Allows future fMRI with high spatio-temporal control over vestibular stimulation.
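The congruent/incongruent manipulation can be illustrated with a brief conceptual sketch. In the following Python snippet, all names and numeric values are hypothetical placeholders (the abstract does not describe the actual rendering pipeline); it shows only the core idea that, per frame, the congruent condition rotates the rendered view exactly with the tracked head so the virtual world stays stable, while the incongruent condition adds arbitrary lateral motion on top:

```python
import numpy as np

def camera_update(head_yaw_deg, t, condition, rng):
    """Per-frame update of the simulated camera (conceptual sketch).

    head_yaw_deg -- head rotation tracked by the external camera
    t            -- time since trial onset in seconds
    condition    -- 'congruent' (stable world) or 'incongruent'
    rng          -- a numpy random Generator for the arbitrary motion
    """
    # Both conditions simulate forward translation of the participant
    # (the speed is an arbitrary placeholder value).
    position = np.array([0.0, 0.0, 0.1 * t])

    # Congruent condition: the rendered view rotates exactly with the
    # tracked head, so the simulated world remains world-fixed.
    yaw_deg = head_yaw_deg

    if condition == 'incongruent':
        # Incongruent condition: arbitrary lateral motion is added,
        # decoupled from the head movement (placeholder waveform).
        position[0] += 0.05 * np.sin(np.pi * t) + rng.normal(0.0, 0.01)

    return yaw_deg, position

# Example: yaw, pos = camera_update(12.0, 0.5, 'incongruent',
#                                   np.random.default_rng(0))
```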
NeuroImage | 2016
Andreas Schindler; A Bartels
High-level regions of the ventral stream exhibit strong category selectivity for stimuli such as faces, houses, or objects. However, recent studies suggest that at least part of this selectivity stems from low-level differences inherent to images of the different categories. For example, visual outdoor and indoor scenes as well as houses differ in spatial frequency, rectilinearity, and obliqueness when compared to face or object images. Correspondingly, the scene-responsive parahippocampal place area (PPA) showed a strong preference for low-level properties of visual scenes even in the absence of high-level scene content. This raises the question of whether all high-level responses in PPA, the fusiform face area (FFA), or the object-responsive lateral occipital complex (LOC) may actually be explained by systematic differences in low-level features. In the present study we contrasted two classes of simple stimuli consisting of ten rectangles each. While both were matched in low-level visual features, only one class of rectangle arrangements gave rise to a percept compatible with a high-level 3D layout such as a scene or an object. We found that areas PPA and the transverse occipital sulcus (TOS, also referred to as occipital place area, OPA), as well as FFA and LOC, showed robust responses to the visual scene class compared to the low-level matched control. Our results suggest that visual category-responsive regions are not purely driven by low-level visual features but also by the high-level perceptual interpretation of the stimulus.
NeuroImage | 2016
Andreas Schindler; A Bartels
When we move, the retinal velocities of objects in our surroundings differ according to their relative distances, giving rise to a powerful three-dimensional visual cue referred to as motion parallax. Motion parallax allows us to infer the 3D structure of our surroundings, as well as our self-motion, from 2D retinal information. However, the neural substrates mediating the link between visual motion and scene processing are largely unexplored. We used fMRI in human observers to study motion parallax by means of an ecologically relevant yet highly controlled stimulus that mimicked the observer's lateral motion past a depth-layered scene. We found parallax-selective responses in parietal regions IPS3 and IPS4, and in a region lateral to the scene-selective occipital place area (OPA). The traditionally defined scene-responsive regions OPA, the parahippocampal place area (PPA), and the retrosplenial cortex (RSC) did not respond to parallax. During parallax processing, the occipital parallax-selective region entertained highly specific functional connectivity with IPS3 and with scene-selective PPA. These results establish a network linking dorsal motion and ventral scene processing regions specifically during parallax processing, which may underlie the brain's ability to derive 3D scene information from motion parallax.
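As a point of reference, the geometry behind this cue can be written out. The following is the standard first-order approximation from textbook vision science, not a formula taken from the paper:

```latex
% For an observer translating laterally at speed v, a point at
% viewing distance Z near the line of sight moves on the retina
% with angular velocity
\[
  \omega \;\approx\; \frac{v}{Z},
\]
% so two points at depths Z_1 < Z_2 separate at
\[
  \Delta\omega \;\approx\; v\left(\frac{1}{Z_1} - \frac{1}{Z_2}\right),
\]
% the 2D retinal signal from which relative depth (and, given the
% depth layout, self-motion) can be inferred.
```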
Magnetic Resonance Imaging | 2011
Steffen Stoewer; Jozien Goense; G.A. Keliris; A Bartels; N.K. Logothetis; John S. Duncan; Natasha Sigala
Functional magnetic resonance imaging (fMRI) experiments with awake nonhuman primates (NHPs) have recently seen a surge of applications. However, the standard fMRI analysis tools designed for human experiments are not optimal for NHP data collected at high fields. One major difference is the experimental setup: although the animal's head is rigidly fixed, making real head movement impossible, MRI image series often contain visible motion artifacts, because body movement results in image position changes and geometric distortions. Since conventional realignment methods are not designed to address such differences, algorithms tailored specifically for animal scanning become essential. We have implemented a series of high-field, NHP-specific methods in a software toolbox, fMRI Sandbox (http://kyb.tuebingen.mpg.de/~stoewer/), which allows us to use different realignment strategies. Here we demonstrate the effect of different realignment strategies on the analysis of awake-monkey fMRI data acquired at high field (7 T). We show that the advantage of using a nonstandard realignment algorithm depends on the amount of distortion in the dataset: while the benefit for less distorted datasets is minor, the improvement of statistical maps for heavily distorted datasets is significant.
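For readers unfamiliar with realignment, the simplest class of strategy is sketched below. This is not the fMRI Sandbox algorithm (the abstract does not describe its internals); it is a minimal pure-translation illustration using phase cross-correlation to estimate and undo per-volume shifts:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def realign_series(volumes):
    """Realign a 4D series of shape (time, x, y, z) to its first volume.

    Pure-translation model: enough to illustrate simple position
    changes; the geometric distortions discussed above would require
    a richer (e.g., nonrigid) model.
    """
    realigned = np.empty_like(volumes)
    realigned[0] = volumes[0]
    for t in range(1, volumes.shape[0]):
        # Estimate the displacement of volume t relative to volume 0.
        displacement, _, _ = phase_cross_correlation(volumes[0], volumes[t])
        # Resample the volume to undo the estimated displacement.
        realigned[t] = nd_shift(volumes[t], -displacement)
    return realigned
```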
NeuroImage | 2018
Matthias Nau; Andreas Schindler; A Bartels
Eye movements induce visual motion that can complicate the stable perception of the world. The visual system compensates for such self-induced visual motion by integrating visual input with efference copies of eye movement commands. This mechanism is central, as it not only supports perceptual stability but also mediates reliable perception of world-centered (objective) motion. In humans, it remains elusive whether visual motion responses in early retinotopic cortex are driven by objective motion or by the retinal motion associated with it. To address this question, we used fMRI to examine functional responses of sixteen visual areas to combinations of planar objective motion and pursuit eye movements. Observers were exposed to objective motion that was faster, matched, or slower relative to pursuit, allowing us to compare conditions that differed in objective motion velocity while retinal motion and eye movement signals were matched. Our results show that not only higher-level motion regions such as V3A and V6, but also early visual areas, signaled the velocity of objective motion, hence the product of integrating retinal with non-retinal signals. These results shed new light on the mechanisms that mediate perceptual stability and real-motion perception, and show that extra-retinal signals related to pursuit eye movements influence processing in human early visual cortex.
Highlights:
- We quantified objective (world-centered) motion responses in sixteen visual areas.
- Objective motion responses were present already in areas V1 and V2.
- Motion areas V3A and V6 had very strong real-motion responses.
- Responses were also present in IPS0, IPS4 and in motion areas VIP, Pc, CSv.
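The design logic can be summarized with a standard kinematic identity (a textbook relation, stated here only to make the manipulation explicit, not a formula quoted from the paper):

```latex
% Velocity of the retinal image during smooth pursuit:
\[
  v_{\mathrm{retinal}} \;=\; v_{\mathrm{objective}} \;-\; v_{\mathrm{eye}} .
\]
% With the same pursuit velocity v_eye in every condition, stimuli
% moving faster or slower than the eye by the same amount produce the
% same retinal speed |v_retinal| while differing in v_objective, so a
% BOLD response that tracks v_objective must reflect integration of
% retinal motion with an extra-retinal pursuit signal.
```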
NeuroImage | 2017
M.M. Bannert; A Bartels
A central problem in color vision is that the light reaching the eye from a given surface can vary dramatically depending on the illumination. Despite this, our color percept, the brain's estimate of surface reflectance, remains remarkably stable. This phenomenon is called color constancy. Here we investigated which human brain regions represent surface color in a way that is invariant with respect to illuminant changes. We used physically realistic rendering methods to display natural yet abstract 3D scenes under three distinct illuminants. In different conditions, the scenes embedded surfaces that differed in their surface color (i.e., in their reflectance). We used multivariate fMRI pattern analysis to probe the neural coding of surface reflectance and illuminant, respectively. While all visual regions encoded surface color when viewed under the same illuminant, we found that only in V1 and V4α were surface color representations invariant to illumination changes. Along the visual hierarchy there was a gradient from V1 to V4α toward increasingly encoding surface color rather than the illuminant. Finally, the effect of a stimulus manipulation on individual behavioral color constancy indices correlated with neural encoding of the illuminant in hV4, providing neural evidence for the Equivalent Illuminant Model. Our results provide a principled characterization of color constancy mechanisms across the visual hierarchy and demonstrate complementary contributions of early and late processing stages.
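The invariance test reduces to a cross-decoding scheme. The sketch below uses a hypothetical data layout and variable names (it is not the authors' exact pipeline); it shows only the core logic: a classifier trained to discriminate surface colors under one illuminant is tested on patterns recorded under the other illuminants, and above-chance transfer indicates an illuminant-invariant representation of surface color:

```python
import numpy as np
from sklearn.svm import LinearSVC

def cross_illuminant_accuracy(patterns, colors, illuminants):
    """Cross-illuminant color decoding (illustrative sketch).

    patterns    -- (n_trials, n_voxels) voxel response patterns
    colors      -- (n_trials,) surface-color labels
    illuminants -- (n_trials,) illuminant labels
    """
    scores = []
    for train_illum in np.unique(illuminants):
        train = illuminants == train_illum
        # Train on one illuminant, test on all remaining illuminants.
        clf = LinearSVC().fit(patterns[train], colors[train])
        scores.append(clf.score(patterns[~train], colors[~train]))
    # Chance level is 1 / number of color classes; accuracy above
    # chance implies the color code generalizes across illuminants.
    return float(np.mean(scores))
```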
Current Biology | 2014
A Bartels
A new human fMRI study shows how early visual cortex makes sense of complex visual scenes by segregating foreground and background, and by highlighting outlier objects. The findings are consistent with two attractive theories: biased competition and predictive coding.
Current Biology | 2013
N Zaretskaya; A Bartels
Binocular rivalry occurs when two distinct visual stimuli are presented separately to the two eyes, causing perceptual ambiguity. The observer's conscious percept then alternates: one stimulus dominates perception while the other is suppressed, and vice versa. These vivid changes in perception during constant visual stimulation allow the study of brain processes involved in conscious visual experience. There is abundant electrophysiological as well as fMRI evidence that neural activity in stimulus-selective areas of the temporal lobe correlates with perceptual changes during rivalry [1–3]. Yet almost nothing is known about the causal contribution of these areas to the dominance and suppression of their preferred stimulus. We induced binocular rivalry in human observers using moving dots presented to one eye and a static face to the other, and applied transcranial magnetic stimulation (TMS) over the motion area V5/hMT+. We show that disrupting activity in V5/hMT+ during rivalry extends periods of motion suppression, with no effect on periods of motion dominance, revealing a state-specific contribution of V5/hMT+ to the competition for awareness in rivalry.
NeuroImage | 2017
P.R. Grassi; N Zaretskaya; A Bartels
A growing body of literature suggests that feedback modulation of early visual processing is ubiquitous and central to cortical computation. In particular, stimuli with high-level content that invariably activate ventral object-responsive regions have been shown to suppress early visual cortex. This suppression has typically been interpreted in the framework of predictive coding and feedback from ventral regions. Here we examined early visual modulation during perception of a bistable Gestalt illusion that has previously been shown to be mediated by dorsal parietal cortex rather than by ventral regions, which were not activated. The bistable dynamic stimulus consisted of moving dots that could be perceived either as the corners of a large moving cube (a global Gestalt) or as distributed sets of locally moving elements. We found that perceptual binding of local moving elements into an illusory Gestalt led to spatially segregated differential modulations in both V1 and V2: representations of the illusory lines and the foreground were enhanced, while those of the inducers and the background were suppressed. Furthermore, correlation analyses suggest that distinct mechanisms govern foreground and background modulation. Our results demonstrate that motion-induced Gestalt perception differentially modulates early visual cortex in the absence of ventral stream activation.
Highlights:
- We used a bistable Gestalt illusion to examine feedback modulations in V1 and V2.
- The Gestalt percept has been shown to be mediated by dorsal cortex.
- Representations of inducers, foreground, and background were differentially modulated.
- Correlation analyses suggest distinct sub-processes underlying the modulations.
- Our results are in line with predictive coding accounts of visual processing.
iScience | 2018
Andreas Schindler; A Bartels
A key question in vision research concerns how the brain compensates for self-induced eye and head movements to form the world-centered, spatiotopic representations we perceive. Although human V3A and V6 integrate eye movements with vision, it is unclear which areas integrate head motion signals with visual retinotopic representations, as fMRI typically prevents the execution of head movements. Here we examined whether human areas V3A and V6 integrate these signals. A previously introduced paradigm allowed participants to move their heads during trials but stabilized the head during data acquisition, exploiting the delay between the blood-oxygen-level-dependent (BOLD) response and the underlying neural signal. Visual stimuli simulated either a stable environment or one with arbitrary head-coupled visual motion. Importantly, both conditions were matched in retinal and head motion. Contrasts revealed differential responses in human V6. Given the lack of vestibular responses in primate V6, these results suggest multi-modal integration of visual signals with neck efference copies or proprioception in V6.