
Publication


Featured research published by Franziska Geringswald.


Experimental Psychology | 2014

Attentional Adjustment to Conflict Strength: Evidence From the Effects of Manipulating Flanker-Target SOA on Response Times and Prestimulus Pupil Size

Mike Wendt; Andrea Kiesel; Franziska Geringswald; Sascha Purmann; Rico Fischer

Current models of cognitive control assume gradual adjustment of processing selectivity to the strength of conflict evoked by distractor stimuli. Using a flanker task, we varied conflict strength by manipulating target and distractor onset. Replicating previous findings, flanker interference effects were larger on trials associated with advance presentation of the flankers compared to simultaneous presentation. Controlling for stimulus and response sequence effects by excluding trials with feature repetitions from stimulus administration (Experiment 1) or from the statistical analyses (Experiment 2), we found a reduction of the flanker interference effect after high-conflict predecessor trials (i.e., trials associated with advance presentation of the flankers) but not after low-conflict predecessor trials (i.e., trials associated with simultaneous presentation of target and flankers). This result supports the assumption of conflict-strength-dependent adjustment of visual attention. The selective adaptation effect after high-conflict trials was associated with an increase in prestimulus pupil diameter, possibly reflecting increased cognitive effort of focusing attention.
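The conflict-adaptation analysis described above amounts to comparing the flanker interference effect (incongruent minus congruent response times) after high-conflict versus low-conflict predecessor trials. A minimal sketch of that computation, assuming hypothetical trial-level data and column names (not taken from the paper):

```python
import pandas as pd

# Hypothetical trial-level data: response time (ms), flanker congruency,
# conflict level of the preceding trial, and whether stimulus features repeated.
# All column names and values are assumptions for illustration.
trials = pd.DataFrame({
    "rt":            [512, 601, 543, 655, 498, 630, 575, 588],
    "congruency":    ["congruent", "incongruent"] * 4,
    "prev_conflict": ["high", "high", "low", "low", "high", "high", "low", "low"],
    "feature_repetition": [False] * 8,
})

# Exclude feature-repetition trials from the analysis to control for
# stimulus and response sequence effects (as in Experiment 2).
clean = trials[~trials["feature_repetition"]]

# Flanker interference = incongruent RT - congruent RT, computed separately
# after high-conflict and low-conflict predecessor trials.
mean_rt = clean.groupby(["prev_conflict", "congruency"])["rt"].mean().unstack()
interference = mean_rt["incongruent"] - mean_rt["congruent"]
print(interference)  # smaller interference after "high" indicates conflict adaptation
```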


Frontiers in Human Neuroscience | 2012

Simulated loss of foveal vision eliminates visual search advantage in repeated displays

Franziska Geringswald; Florian Baumgartner; Stefan Pollmann

In the contextual cueing paradigm, incidental visual learning of repeated distractor configurations leads to faster search times in repeated compared to new displays. This contextual cueing is closely linked to the visual exploration of the search arrays as indicated by fewer fixations and more efficient scan paths in repeated search arrays. Here, we examined contextual cueing under impaired visual exploration induced by a simulated central scotoma that causes the participant to rely on extrafoveal vision. We let normal-sighted participants search for the target either under unimpaired viewing conditions or with a gaze-contingent central scotoma masking the currently fixated area. Under unimpaired viewing conditions, participants revealed shorter search times and more efficient exploration of the display for repeated compared to novel search arrays and thus exhibited contextual cueing. When visual search was impaired by the central scotoma, search facilitation for repeated displays was eliminated. These results indicate that a loss of foveal sight, as is commonly observed in maculopathies, may lead to deficits in high-level visual functions well beyond the immediate consequences of a scotoma.
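A gaze-contingent central scotoma of the kind used here can be approximated by redrawing an opaque mask at the current gaze position on every video frame. Below is a rough sketch using PsychoPy; the get_gaze_position() helper, the stimulus layout, and the mask size are all assumptions standing in for the study's actual eye-tracking setup.

```python
from psychopy import visual, core

win = visual.Window(size=(1024, 768), units="pix", fullscr=False)

# Placeholder search items; in the real experiment this would be the
# distractor configuration and the target of the search display.
items = [visual.TextStim(win, text="L", pos=pos, height=30)
         for pos in [(-250, 150), (200, -100), (300, 250), (-150, -300)]]
target = visual.TextStim(win, text="T", pos=(100, 100), height=30)

# Gray disc masking the currently fixated region (size is an assumption).
scotoma = visual.Circle(win, radius=80, fillColor="gray", lineColor="gray")

def get_gaze_position():
    """Hypothetical stand-in for an eye-tracker call returning gaze in pixels."""
    return (0.0, 0.0)

clock = core.Clock()
while clock.getTime() < 10.0:          # show the search display for up to 10 s
    scotoma.pos = get_gaze_position()  # re-center the mask on the current fixation
    for item in items:
        item.draw()
    target.draw()
    scotoma.draw()                     # drawn last so it occludes foveal content
    win.flip()

win.close()
```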


NeuroImage | 2014

The right temporo-parietal junction contributes to visual feature binding

Stefan Pollmann; Wolf Zinke; Florian Baumgartner; Franziska Geringswald; Michael Hanke

We investigated the neural basis of conjoined processing of color and spatial frequency with functional magnetic resonance imaging (fMRI). A multivariate classification algorithm was trained to differentiate between either isolated color or spatial frequency differences, or between conjoint differences in both feature dimensions. All displays were presented in a singleton search task, avoiding confounds between conjunctive feature processing and search difficulty that arose in previous studies contrasting single feature and conjunction search tasks. Based on patient studies, we expected the right temporo-parietal junction (TPJ) to be involved in conjunctive feature processing. This hypothesis was confirmed in that only conjoined color and spatial frequency differences, but not isolated feature differences could be classified above chance level in this area. Furthermore, we could show that the accuracy of a classification of differences in both feature dimensions was superadditive compared to the classification accuracies of isolated color or spatial frequency differences within the right TPJ. These data provide evidence for the processing of feature conjunctions, here color and spatial frequency, in the right TPJ.
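The classification analysis described above can be illustrated with a standard linear classifier and cross-validation over trial-wise voxel patterns. The sketch below uses scikit-learn on randomly generated placeholder data; the array shapes, labels, and classifier settings are assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: 40 trials x 200 voxels from a right-TPJ region of interest.
# In the actual analysis these would be pattern estimates from the fMRI data.
X = rng.normal(size=(40, 200))
# Labels, e.g., 0 = singleton differs in one feature, 1 = conjoint difference
# in color and spatial frequency.
y = np.repeat([0, 1], 20)

# Linear support vector classifier with 5-fold cross-validation.
clf = SVC(kernel="linear", C=1.0)
accuracies = cross_val_score(clf, X, y, cv=5)

print("mean decoding accuracy:", accuracies.mean())
# Above-chance accuracy (> 0.5) for the conjunction contrast, but not for the
# isolated feature contrasts, would mirror the pattern reported for right TPJ.
```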


Experimental Psychology | 2012

Visual search facilitation in repeated displays depends on visuospatial working memory.

Angela A. Manginelli; Franziska Geringswald; Stefan Pollmann

When distractor configurations are repeated over time, visual search becomes more efficient, even if participants are unaware of the repetition. This contextual cueing is a form of incidental, implicit learning. One might therefore expect that contextual cueing does not (or only minimally) rely on working memory resources. This, however, is debated in the literature. We investigated contextual cueing under either a visuospatial or a nonspatial (color) visual working memory load. We found that contextual cueing was disrupted by the concurrent visuospatial, but not by the color working memory load. A control experiment ruled out that unspecific attentional factors of the dual-task situation disrupted contextual cueing. Visuospatial working memory may be needed to match current display items with long-term memory traces of previously learned displays.


Journal of Vision | 2013

Contextual cueing impairment in patients with age-related macular degeneration.

Franziska Geringswald; Anne Herbik; Michael B. Hoffmann; Stefan Pollmann

Visual attention can be guided by past experience of regularities in our visual environment. In the contextual cueing paradigm, incidental learning of repeated distractor configurations speeds up search times compared to random search arrays. Concomitantly, fewer fixations and more direct scan paths indicate more efficient visual exploration in repeated search arrays. In previous work, we found that simulating a central scotoma in healthy observers eliminated this search facilitation. Here, we investigated contextual cueing in patients with age-related macular degeneration (AMD) who suffer from impaired foveal vision. AMD patients performed visual search using only their more severely impaired eye (n = 13) as well as under binocular viewing (n = 16). Normal-sighted controls developed a significant contextual cueing effect. In comparison, patients showed only a small nonsignificant advantage for repeated displays when searching with their worse eye. When searching binocularly, they profited from contextual cues, but still less than controls. Number of fixations and scan pattern ratios showed a comparable pattern as search times. Moreover, contextual cueing was significantly correlated with acuity in monocular search. Thus, foveal vision loss may lead to impaired guidance of attention by contextual memory cues.
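The reported relation between contextual cueing and acuity can be expressed as a per-participant cueing score (new-display minus repeated-display search time) correlated with acuity. A small sketch with invented numbers, purely for illustration:

```python
import numpy as np
from scipy.stats import pearsonr

# Invented per-participant mean search times (s) and acuities (logMAR; higher = worse).
rt_new      = np.array([2.9, 3.1, 2.7, 3.4, 3.0, 2.8])
rt_repeated = np.array([2.8, 2.6, 2.5, 3.3, 2.9, 2.7])
acuity      = np.array([0.8, 0.3, 0.2, 1.0, 0.7, 0.4])

# Contextual cueing effect: search-time benefit for repeated displays.
cueing = rt_new - rt_repeated

# Correlation between cueing benefit and acuity across participants.
r, p = pearsonr(cueing, acuity)
print(f"r = {r:.2f}, p = {p:.3f}")
```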


NeuroImage | 2013

Evidence for feature binding in the superior parietal lobule

Florian Baumgartner; Michael Hanke; Franziska Geringswald; Wolf Zinke; Oliver Speck; Stefan Pollmann

The neural substrates of feature binding are an old, yet still not completely resolved problem. While patient studies suggest that posterior parietal cortex is necessary for feature binding, imaging evidence has been inconclusive in the past. These studies compared visual feature and conjunction search to investigate the neural substrate of feature conjunctions. However, a common problem of these comparisons was a confound with search difficulty. To circumvent this confound, we directly investigated the localized representation of features (color and spatial frequency) and feature conjunctions in a single search task by using multivariate pattern analysis at high field strength (7T). In right superior parietal lobule, we found evidence for the representation of feature conjunctions that could not be explained by the summation of individual feature representations and thus indicates conjoined processing of color and spatial frequency.


Journal of Vision | 2016

Impairment of visual memory for objects in natural scenes by simulated central scotomata.

Franziska Geringswald; Eleonora Porracin; Stefan Pollmann

Because of the close link between foveal vision and the spatial deployment of attention, typically only objects that have been foveated during scene exploration may form detailed and persistent memory representations. In a recent study on patients suffering from age-related macular degeneration, however, we found surprisingly accurate visual long-term memory for objects in scenes. The patients' exploration patterns suggested that they had learned to rereference saccade targets to an extrafoveal retinal location. This rereferencing may allow use of an extrafoveal location as a focus of attention for efficient object encoding into long-term memory. Here, we tested this hypothesis in normal-sighted observers with gaze-contingent central scotoma simulations. As these observers were inexperienced in scene exploration with central vision loss and had not developed saccadic rereferencing, we expected deficits in long-term memory for objects. We used the same change detection task as in our patient study, probing sensitivity to object changes after a period of free scene exploration. Change detection performance was significantly reduced for two types of scotoma simulation diminishing foveal and parafoveal vision (a visible gray disc and a more subtle image warping) compared with unimpaired controls, confirming our hypothesis. The impact of a smaller scotoma covering specifically foveal vision was less distinct, leading to a marginally significant decrease of long-term memory performance compared with controls. We conclude that attentive encoding of objects is deficient when central vision is lost, as long as successful saccadic rereferencing has not yet developed.
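Sensitivity to object changes in a change detection task like this is commonly summarized as d' computed from hit and false-alarm rates. A short sketch of that computation, with made-up counts (the correction used here is one common convention, not necessarily the paper's):

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' from raw counts, with a simple correction for extreme rates."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    # Add 0.5 to each count (log-linear correction) to avoid infinite z-scores.
    hit_rate = (hits + 0.5) / (n_signal + 1)
    fa_rate = (false_alarms + 0.5) / (n_noise + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Made-up counts for one observer in a scotoma-simulation condition.
print(d_prime(hits=28, misses=12, false_alarms=8, correct_rejections=32))
```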


Ophthalmic and Physiological Optics | 2014

Prediction of higher visual function in macular degeneration with multifocal electroretinogram and multifocal visual evoked potential

Anne Herbik; Franziska Geringswald; H. Thieme; Stefan Pollmann; Michael B. Hoffmann

Visual search can be guided by past experience of regularities in our visual environment. This search guidance by contextual memory cues is impaired by foveal vision loss. Here we compared retinal and cortical visually evoked responses in their predictive value for contextual cueing impairment and visual acuity.


Journal of Experimental Psychology: Learning, Memory and Cognition | 2015

Central and peripheral vision loss differentially affects contextual cueing in visual search

Franziska Geringswald; Stefan Pollmann

Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether central vision loss, simulated with gaze-contingent displays, prevents the incidental learning of contextual cues or the expression of learning, that is, the guidance of search by learned target-distractor configurations. Visual search with a central scotoma reduced contextual cueing both with respect to search times and gaze parameters. However, when the scotoma was subsequently removed, contextual cueing was observed in a comparable magnitude as for controls who had searched without scotoma simulation throughout the experiment. This indicated that search with a central scotoma did not prevent incidental context learning, but interfered with search guidance by learned contexts. We discuss the role of visuospatial working memory load as source of this interference. In contrast to central vision loss, peripheral vision loss was expected to prevent spatial configuration learning itself, because the restricted search window did not allow the integration of invariant local configurations with the global display layout. This expectation was confirmed in that visual search with a simulated peripheral scotoma eliminated contextual cueing not only in the initial learning phase with scotoma, but also in the subsequent test phase without scotoma.


Psychological Research-psychologische Forschung | 2018

Not scene learning, but attentional processing is superior in team sport athletes and action video game players

Anne Schmidt; Franziska Geringswald; Fariba Sharifian; Stefan Pollmann

We tested whether high-level athletes or action video game players have superior context learning skills. Incidental context learning was tested in a spatial contextual cueing paradigm. We found comparable contextual cueing of visual search in repeated displays in high-level amateur handball players, dedicated action video game players, and normal controls. In contrast, both handball players and action video game players searched faster than controls, measured as search time per display item, independent of display repetition. Thus, our data do not indicate superior context learning skills in athletes or action video game players. Rather, both groups showed more efficient visual search in abstract displays that were not related to sport-specific situations.
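One common way to express search efficiency per display item is the slope of search time over set size, obtained by a linear fit. A minimal sketch with invented numbers; this illustrates the measure, not necessarily the exact computation used in the study:

```python
import numpy as np

# Invented mean search times (ms) at three display set sizes for one group.
set_sizes = np.array([8, 12, 16])
search_times = np.array([980, 1180, 1390])

# Linear fit: slope = additional search time per display item (ms/item).
slope, intercept = np.polyfit(set_sizes, search_times, 1)
print(f"{slope:.1f} ms per item")  # flatter slopes indicate more efficient search
```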

Collaboration


Dive into Franziska Geringswald's collaborations.

Top Co-Authors

Stefan Pollmann, Otto-von-Guericke University Magdeburg
Florian Baumgartner, Otto-von-Guericke University Magdeburg
Anne Herbik, Otto-von-Guericke University Magdeburg
Michael B. Hoffmann, Otto-von-Guericke University Magdeburg
Michael Hanke, Otto-von-Guericke University Magdeburg
Anne Schmidt, Otto-von-Guericke University Magdeburg
Jonathan Napp, Otto-von-Guericke University Magdeburg
Klaus D. Toennies, Otto-von-Guericke University Magdeburg
Oliver Speck, Otto-von-Guericke University Magdeburg