
Publications


Featured research published by Florian Baumgartner.


Scientific Data | 2014

A high-resolution 7-Tesla fMRI dataset from complex natural stimulation with an audio movie.

Michael Hanke; Florian Baumgartner; Pierre Ibe; Falko R. Kaule; Stefan Pollmann; Oliver Speck; Wolf Zinke; Jörg Stadler

Here we present a high-resolution functional magnetic resonance imaging (fMRI) dataset of 20 participants recorded at high field strength (7 Tesla) during prolonged stimulation with an auditory feature film (“Forrest Gump”). In addition, a comprehensive set of auxiliary data (T1w, T2w, DTI, susceptibility-weighted imaging, angiography) as well as measurements to assess technical and physiological noise components have been acquired. An initial analysis confirms that these data can be used to study common and idiosyncratic brain response patterns to complex auditory stimulation. Among the potential uses of this dataset are the study of auditory attention and cognition, language and music perception, and social perception. The auxiliary measurements enable a large variety of additional analysis strategies that relate functional response patterns to structural properties of the brain. Alongside the acquired data, we provide source code and detailed information on all employed procedures, from stimulus creation to data analysis. To facilitate replicative and derived works, only free and open-source software was utilized.
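
As a minimal sketch of how such openly shared BOLD data could be inspected with standard Python tools: the file name below is a hypothetical placeholder, not the dataset's actual layout, and the tSNR check is only a crude first-pass quality measure.

```python
# Sketch: inspecting a 4D BOLD run with nibabel (file name is a placeholder).
import nibabel as nib
import numpy as np

bold = nib.load("sub-01_task-auditoryperception_bold.nii.gz")  # hypothetical file name
data = bold.get_fdata()                       # 4D array: x, y, z, time
print("volume shape:", data.shape[:3])
print("number of volumes:", data.shape[3])
print("voxel size / TR:", bold.header.get_zooms())

# Crude check of temporal signal quality: voxel-wise tSNR over the run.
mean_img = data.mean(axis=3)
std_img = data.std(axis=3)
tsnr = np.zeros_like(mean_img)
nonzero = std_img > 0
tsnr[nonzero] = mean_img[nonzero] / std_img[nonzero]
print("median tSNR in non-zero voxels:", np.median(tsnr[mean_img > 0]))
```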


NeuroImage | 2010

Event-related functional MRI of cortical activity evoked by microsaccades, small visually-guided saccades, and eyeblinks in human visual cortex

Peter U. Tse; Florian Baumgartner; Mark W. Greenlee

We used event-related functional magnetic resonance imaging (fMRI) to determine blood oxygen-level-dependent (BOLD) signal changes following microsaccades, visually-guided saccades, and eyeblinks in retinotopically mapped visual cortical areas V1-V3 and hMT+. A deconvolution analysis revealed a similar pattern of BOLD activation following a microsaccade, a 0.16° voluntary saccade, and a 0.16° displacement of the image under conditions of fixation. In all areas, an initial increase in BOLD signal peaking at approximately 4.5 s after the event was followed by a decline and a decrease below baseline. This modulation appears most pronounced for microsaccades and small voluntary saccades in V1, diminishing in strength from V1 to V3. In contrast, 0.16° real motion under conditions of fixation yields the same level of BOLD signal increase in V1 through V3. The BOLD signal modulates parametrically with the size of voluntary saccades (0.16°, 0.38°, 0.82°, 1.64°, and 3.28°) in V1-V3, but not in hMT+. Eyeblinks generate a larger modulation that peaks by 6.5 s, dips below baseline by 10 s post-event, and also exhibits diminishing modulation from V1 to V3. Our results are consistent with the occurrence of transient neural excitation driven by changes in input to retinal ganglion cell receptive fields that are induced by microsaccades, visually-guided saccades, or small image shifts. The pattern of results in area hMT+ exhibits no significant modulation by microsaccades, relatively small modulation by eyeblinks, and substantial responses to saccades and background jumps, suggesting that spurious image-motion signals arising from microsaccades and eyeblinks are relatively diminished in hMT+.
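
The following is a much-simplified illustration of the kind of finite-impulse-response (FIR) deconvolution used to estimate event-related BOLD time courses; the TR, event timing, and data are assumptions for illustration only, not the study's actual pipeline.

```python
# Sketch: FIR deconvolution of an event-related BOLD time series.
# Timing, TR, and data are illustrative assumptions only.
import numpy as np

TR = 2.0                      # assumed repetition time in seconds
n_scans = 300
fir_length = 10               # estimate the response over 10 post-event scans (20 s)
rng = np.random.default_rng(0)

onsets = np.sort(rng.choice(np.arange(20, n_scans - fir_length), size=40, replace=False))
y = rng.normal(size=n_scans)  # stand-in for one voxel's (or ROI's) time series

# One column per post-event lag: a delta at onset+lag for every event.
X = np.zeros((n_scans, fir_length))
for onset in onsets:
    for lag in range(fir_length):
        X[onset + lag, lag] += 1.0
X = np.column_stack([X, np.ones(n_scans)])   # add an intercept

# Least-squares fit; with real data, the first fir_length betas trace the
# average evoked response at each post-event lag.
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
evoked = betas[:fir_length]
print("estimated response peaks at ~%.1f s post-event" % (np.argmax(evoked) * TR))
```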


Cortex | 2015

Investigating the brain basis of facial expression perception using multi-voxel pattern analysis.

Martin Wegrzyn; Marcel Riehle; Kirsten Labudda; Friedrich G. Woermann; Florian Baumgartner; Stefan Pollmann; Christian G. Bien; Johanna Kissler

Humans can readily decode emotion expressions from faces and perceive them in a categorical manner. The model by Haxby and colleagues proposes a number of different brain regions, each taking on specific roles in face processing. One key question is how these regions directly compare to one another in successfully discriminating between various emotional facial expressions. To address this issue, we compared the predictive accuracy of all key regions from the Haxby model using multi-voxel pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data. Regions of interest were extracted using independent meta-analytical data. Participants viewed four classes of facial expressions (happy, angry, fearful, and neutral) in an event-related fMRI design while performing an orthogonal gender recognition task. Activity in all regions allowed for robust above-chance predictions. When directly comparing the regions to one another, the fusiform gyrus and superior temporal sulcus (STS) showed the highest accuracies. These results underscore the role of the fusiform gyrus as a key region in the perception of facial expressions, alongside the STS. The study suggests the need for further specification of the relative roles of the various brain areas involved in the perception of facial expressions. Face processing appears to rely on more interactive and functionally overlapping neural mechanisms than previously conceptualised.
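
A minimal sketch of the general idea of comparing decoding accuracies across regions of interest with a linear classifier, in the spirit of the MVPA described above; the data are synthetic and the ROI names are placeholders, not the study's actual masks or classifier settings.

```python
# Sketch: ROI-wise decoding comparison with cross-validated linear classification.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_classes = 160, 4                 # e.g. happy / angry / fearful / neutral
labels = np.repeat(np.arange(n_classes), n_trials // n_classes)

# One pattern matrix (trials x voxels) per ROI; here just noise plus a weak signal.
rois = {
    "fusiform_gyrus": rng.normal(size=(n_trials, 200)) + 0.3 * labels[:, None],
    "STS": rng.normal(size=(n_trials, 150)) + 0.3 * labels[:, None],
    "control_region": rng.normal(size=(n_trials, 80)),   # no signal: should stay at chance
}

for name, patterns in rois.items():
    clf = LinearSVC(max_iter=5000)
    acc = cross_val_score(clf, patterns, labels, cv=8).mean()
    print(f"{name}: mean cross-validated accuracy = {acc:.2f} "
          f"(chance = {1 / n_classes:.2f})")
```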


NeuroImage | 2013

Dorsal and ventral working memory-related brain areas support distinct processes in contextual cueing

Angela A. Manginelli; Florian Baumgartner; Stefan Pollmann

Behavioral evidence suggests that the use of implicitly learned spatial contexts for improved visual search may depend on visual working memory resources. Working memory may be involved in contextual cueing in different ways: (1) for keeping implicitly learned working memory contents available during search or (2) for the capture of attention by contexts retrieved from memory. We mapped brain areas that were modulated by working memory capacity. Within these areas, activation was modulated by contextual cueing along the descending segment of the intraparietal sulcus, an area that has previously been related to maintenance of explicit memories. Increased activation for learned displays, but not modulated by the size of contextual cueing, was observed in the temporo-parietal junction area, previously associated with the capture of attention by explicitly retrieved memory items, and in the ventral visual cortex. This pattern of activation extends previous research on dorsal versus ventral stream functions in memory guidance of attention to the realm of attentional guidance by implicit memory.


Frontiers in Human Neuroscience | 2012

Medial temporal lobe-dependent repetition suppression and enhancement due to implicit vs. explicit processing of individual repeated search displays

Thomas Geyer; Florian Baumgartner; Hermann J. Müller; Stefan Pollmann

Using visual search, functional magnetic resonance imaging (fMRI) and patient studies have demonstrated that medial temporal lobe (MTL) structures differentiate repeated from novel displays—even when observers are unaware of display repetitions. This suggests a role for MTL in both explicit and, importantly, implicit learning of repeated sensory information (Greene et al., 2007). However, recent behavioral studies suggest, by examining visual search and recognition performance concurrently, that observers have explicit knowledge of at least some of the repeated displays (Geyer et al., 2010). The aim of the present fMRI study was thus to contribute new evidence regarding the contribution of MTL structures to explicit vs. implicit learning in visual search. It was found that MTL activation was increased for explicit and, respectively, decreased for implicit relative to baseline displays. These activation differences were most pronounced in left anterior parahippocampal cortex (aPHC), especially when observers were highly trained on the repeated displays. The data are taken to suggest that explicit and implicit memory processes are linked within MTL structures, but expressed via functionally separable mechanisms (repetition-enhancement vs. -suppression). They further show that repetition effects in visual search would have to be investigated at the display level.


Frontiers in Human Neuroscience | 2012

Simulated loss of foveal vision eliminates visual search advantage in repeated displays

Franziska Geringswald; Florian Baumgartner; Stefan Pollmann

In the contextual cueing paradigm, incidental visual learning of repeated distractor configurations leads to faster search times in repeated compared to new displays. This contextual cueing is closely linked to the visual exploration of the search arrays, as indicated by fewer fixations and more efficient scan paths in repeated search arrays. Here, we examined contextual cueing under impaired visual exploration induced by a simulated central scotoma that forces the participant to rely on extrafoveal vision. We let normal-sighted participants search for the target either under unimpaired viewing conditions or with a gaze-contingent central scotoma masking the currently fixated area. Under unimpaired viewing conditions, participants showed shorter search times and more efficient exploration of the display for repeated compared to novel search arrays and thus exhibited contextual cueing. When visual search was impaired by the central scotoma, search facilitation for repeated displays was eliminated. These results indicate that a loss of foveal sight, as is commonly observed in maculopathies, may lead to deficits in high-level visual functions well beyond the immediate consequences of a scotoma.
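
A schematic sketch of how a gaze-contingent central mask can be drawn on every frame, here written with PsychoPy. The gaze-reading function is a hypothetical stand-in (driven by the mouse for illustration); the study's actual eye-tracking setup, mask size, and display code are not reproduced.

```python
# Sketch: a gaze-contingent central "scotoma" mask drawn at the current gaze
# position on each frame. get_gaze_position() is a hypothetical placeholder.
from psychopy import visual, event, core

win = visual.Window(size=(1024, 768), color="black", units="pix")
search_display = visual.Rect(win, width=800, height=600, fillColor="darkgray")  # placeholder display
scotoma = visual.Circle(win, radius=60, fillColor="gray", lineColor="gray")
mouse = event.Mouse(win=win)

def get_gaze_position():
    # Hypothetical: replace with the eye tracker's current gaze sample.
    return mouse.getPos()

clock = core.Clock()
while clock.getTime() < 5.0 and not event.getKeys(["escape"]):
    search_display.draw()
    scotoma.pos = get_gaze_position()   # mask whatever is currently fixated
    scotoma.draw()
    win.flip()

win.close()
```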


NeuroImage | 2014

The right temporo-parietal junction contributes to visual feature binding

Stefan Pollmann; Wolf Zinke; Florian Baumgartner; Franziska Geringswald; Michael Hanke

We investigated the neural basis of conjoined processing of color and spatial frequency with functional magnetic resonance imaging (fMRI). A multivariate classification algorithm was trained to differentiate between either isolated color or spatial frequency differences, or between conjoint differences in both feature dimensions. All displays were presented in a singleton search task, avoiding confounds between conjunctive feature processing and search difficulty that arose in previous studies contrasting single feature and conjunction search tasks. Based on patient studies, we expected the right temporo-parietal junction (TPJ) to be involved in conjunctive feature processing. This hypothesis was confirmed in that only conjoined color and spatial frequency differences, but not isolated feature differences could be classified above chance level in this area. Furthermore, we could show that the accuracy of a classification of differences in both feature dimensions was superadditive compared to the classification accuracies of isolated color or spatial frequency differences within the right TPJ. These data provide evidence for the processing of feature conjunctions, here color and spatial frequency, in the right TPJ.
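
One plausible way to frame the superadditivity comparison described above is to ask whether the above-chance information carried by the conjoint classification exceeds the sum of the information carried by the two single-feature classifications. The numbers and chance level below are illustrative assumptions; the study's exact statistical procedure may differ.

```python
# Sketch: superadditivity check on three cross-validated decoding accuracies.
chance = 0.5                      # assumed binary discrimination

acc_color = 0.54                  # decoding isolated color differences
acc_sf = 0.53                     # decoding isolated spatial-frequency differences
acc_conjoint = 0.63               # decoding conjoint color + spatial-frequency differences

# Above-chance information carried by each contrast.
info_color = acc_color - chance
info_sf = acc_sf - chance
info_conjoint = acc_conjoint - chance

superadditive = info_conjoint > (info_color + info_sf)
print(f"conjoint: {info_conjoint:.2f}, "
      f"sum of single features: {info_color + info_sf:.2f}, "
      f"superadditive: {superadditive}")
```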


NeuroImage | 2013

Evidence for feature binding in the superior parietal lobule

Florian Baumgartner; Michael Hanke; Franziska Geringswald; Wolf Zinke; Oliver Speck; Stefan Pollmann

The neural substrates of feature binding are an old, yet still not completely resolved problem. While patient studies suggest that posterior parietal cortex is necessary for feature binding, imaging evidence has been inconclusive in the past. These studies compared visual feature and conjunction search to investigate the neural substrate of feature conjunctions. However, a common problem of these comparisons was a confound with search difficulty. To circumvent this confound, we directly investigated the localized representation of features (color and spatial frequency) and feature conjunctions in a single search task by using multivariate pattern analysis at high field strength (7T). In right superior parietal lobule, we found evidence for the representation of feature conjunctions that could not be explained by the summation of individual feature representations and thus indicates conjoined processing of color and spatial frequency.


Frontiers in Psychology | 2014

Functional asymmetry and effective connectivity of the auditory system during speech perception is modulated by the place of articulation of the consonant: A 7T fMRI study.

Karsten Specht; Florian Baumgartner; Jörg Stadler; Kenneth Hugdahl; Stefan Pollmann

To differentiate between stop-consonants, the auditory system has to detect subtle place of articulation (PoA) and voice-onset time (VOT) differences. How this differential processing is represented at the cortical level remains unclear. The present functional magnetic resonance imaging (fMRI) study takes advantage of the superior spatial resolution and high sensitivity of ultra-high-field 7 T MRI. Subjects listened attentively to consonant–vowel (CV) syllables with an alveolar or bilabial stop-consonant and either a short or long VOT. The results showed an overall bilateral activation pattern in the posterior temporal lobe during the processing of the CV syllables. This pattern was, however, modulated most strongly by PoA, such that syllables with an alveolar stop-consonant showed more strongly left-lateralized activation. In addition, an analysis of the underlying functional and effective connectivity revealed an inhibitory effect of the left planum temporale (PT) on the right auditory cortex (AC) during the processing of alveolar CV syllables. Furthermore, the connectivity results also indicated a directed information flow from the right to the left AC, and further to the left PT, for all syllables. These results indicate that auditory speech perception relies on an interplay between the left and right ACs, with the left PT as a modulator. Furthermore, the degree of functional asymmetry is determined by the acoustic properties of the CV syllables.
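
For orientation only, a minimal functional-connectivity check between two region time courses (simple correlation). The effective-connectivity analysis in the study is more elaborate (directed, model-based); the ROI names, coupling, and data here are illustrative assumptions.

```python
# Sketch: correlation-based functional connectivity between two ROI time courses.
import numpy as np

rng = np.random.default_rng(1)
left_PT = rng.normal(size=240)                     # assumed ROI-averaged time series
right_AC = 0.4 * left_PT + rng.normal(size=240)    # toy coupling for illustration

r = np.corrcoef(left_PT, right_AC)[0, 1]
print(f"functional connectivity (Pearson r) between left PT and right AC: {r:.2f}")
```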


bioRxiv | 2016

Simultaneous fMRI and eye gaze recordings during prolonged natural stimulation – a studyforrest extension

Michael Hanke; Nico Adelhöfer; Daniel Kottke; Vittorio Iacovella; Ayan Sengupta; Falko R. Kaule; Roland Nigbur; Alexander Q. Waite; Florian Baumgartner; Jörg Stadler

Here we present an update of the studyforrest (http://studyforrest.org) dataset that complements the previously released functional magnetic resonance imaging (fMRI) data for natural language processing with a new two-hour 3 Tesla fMRI acquisition during which 15 of the original participants were shown an audio-visual version of the stimulus motion picture. We demonstrate with two validation analyses that these new data support modeling specific properties of the complex natural stimulus, as well as a substantial within-subject BOLD response congruency in brain areas related to the processing of auditory inputs, speech, and narrative when compared to the existing fMRI data for audio-only stimulation. In addition, we provide participants’ eye gaze locations recorded simultaneously with fMRI, and an additional sample of 15 control participants whose eye gaze trajectories for the entire movie were recorded in a lab setting, to enable studies of attentional processes and comparative investigations of the potential impact of the stimulation setting on these processes.

Collaboration


Dive into Florian Baumgartner's collaborations.

Top Co-Authors

Stefan Pollmann (Otto-von-Guericke University Magdeburg)
Michael Hanke (Otto-von-Guericke University Magdeburg)
Franziska Geringswald (Otto-von-Guericke University Magdeburg)
Falko R. Kaule (Otto-von-Guericke University Magdeburg)
Jörg Stadler (Leibniz Institute for Neurobiology)
Oliver Speck (Otto-von-Guericke University Magdeburg)
Ayan Sengupta (Otto-von-Guericke University Magdeburg)
Nico Adelhöfer (Otto-von-Guericke University Magdeburg)
Alexander Q. Waite (Otto-von-Guericke University Magdeburg)