Publication


Featured research published by Christian Valuch.


Journal of Vision | 2013

Priming of fixations during recognition of natural scenes

Christian Valuch; Stefanie I. Becker; Ulrich Ansorge

Eye fixations allow the human viewer to perceive scene content with high acuity. If fixations drive visual memory for scenes, a viewer might repeat his or her previous fixation pattern during recognition of a familiar scene. However, visual salience alone could account for similarities between two successive fixation patterns by attracting the eyes in a stimulus-driven, task-independent manner. In the present study, we tested whether the viewer's aim to recognize a scene fosters fixations on scene content that repeats from learning to recognition, over and above the influence of visual salience alone. In Experiment 1, we compared gaze behavior in a recognition task to that in a free-viewing task. By showing the same stimuli in both tasks, the task-independent influence of salience was held constant. We found that during a recognition task, but not during (repeated) free viewing, viewers showed a pronounced preference for previously fixated scene content. In Experiment 2, we tested whether participants remembered visual input that they had fixated during learning better than salient but nonfixated visual input. To that end, we presented participants with smaller cutouts from learned and new scenes. We found that cutouts featuring scene content fixated during encoding were recognized better and faster than cutouts featuring nonfixated but highly salient scene content from learned scenes. Both experiments supported the hypothesis that fixations during encoding, and possibly during recognition, serve visual memory over and above a stimulus-driven influence of visual salience.


International Conference on Signal Processing and Multimedia Applications | 2014

Visual attention in edited dynamical images

Ulrich Ansorge; Shelley Buchinger; Christian Valuch; Aniello Raffaele Patrone; Otmar Scherzer

Edited (or cut) dynamical images, such as videos or graphical animations, are created by changing perspectives in imaging devices. They are abundant in everyday and working life. However, little is known about how attention is steered with regard to this material. Here we propose a simple two-step architecture of gaze control for this situation. This model relies on (1) a down-weighting of repeated information contained in optic flow within takes (between cuts), and (2) an up-weighting of repeated information between takes (across cuts). This architecture is both parsimonious and realistic. We outline the evidence supporting this architecture and identify the outstanding questions.


Multimedia Tools and Applications | 2015

The influence of color during continuity cuts in edited movies: an eye-tracking study

Christian Valuch; Ulrich Ansorge

Professionally edited videos entail frequent editorial cuts – that is, abrupt image changes from one frame to another. The impact of these cuts on human eye movements is currently not well understood. In the present eye-tracking study, we experimentally gauged the degree to which color and visual continuity contributed to viewers’ eye movements following cinematic cuts. In our experiment, viewers were presented with two edited action sports movies on the same screen but they were instructed to watch and keep their gaze on only one of these movies. Crucially, the movies were frequently interrupted and continued after a short break either at the same or at switched locations. Hence, viewers needed to rapidly recognize the continuation of the relevant movie and re-orient their gaze toward it. Properties of saccadic eye movements following each interruption probed the recognition of the relevant movie after a cut. Two key findings were that (i) memory co-determines attention after cuts in edited videos, resulting in faster re-orientation toward scene continuations when visual continuity across the interruption is high than when it is low, and (ii) color contributes to the guidance of attention after cuts, but its benefit largely rests upon enhanced discrimination of relevant from irrelevant visual information rather than memory. Results are discussed with regard to previous research on eye movements in movies and recognition processes. Possible future directions of research are outlined.


Frontiers in Psychology | 2014

Color priming in pop-out search depends on the relative color of the target

Stefanie I. Becker; Christian Valuch; Ulrich Ansorge

In visual search for pop-out targets, search times are shorter when the target and non-target colors from the previous trial are repeated than when they change. This priming effect was originally attributed to a feature weighting mechanism that biases attention toward the target features, and away from the non-target features. However, more recent studies have shown that visual selection is strongly context-dependent: according to a relational account of feature priming, the target color is always encoded relative to the non-target color (e.g., as redder or greener). The present study provides a critical test of this hypothesis, by varying the colors of the search items such that either the relative color or the absolute color of the target always remained constant (or both). The results clearly show that color priming depends on the relative color of a target with respect to the non-targets but not on its absolute color value. Moreover, the observed priming effects did not change over the course of the experiment, suggesting that the visual system encodes colors in a relative manner from the start of the experiment. Taken together, these results strongly support a relational account of feature priming in visual search, and are inconsistent with the dominant feature-based views.


Journal of Vision | 2017

Memory-guided attention during active viewing of edited dynamic scenes

Christian Valuch; Peter König; Ulrich Ansorge

Films, TV shows, and other edited dynamic scenes contain many cuts, which are abrupt transitions from one video shot to the next. Cuts occur within or between scenes, and often join together visually and semantically related shots. Here, we tested to what degree memory for the visual features of the precut shot facilitates shifting attention to the postcut shot. We manipulated visual similarity across cuts and measured how this affected covert attention (Experiment 1) and overt attention (Experiments 2 and 3). In Experiments 1 and 2, participants actively viewed a target movie that randomly switched locations with a second, distractor movie at the time of the cuts. In both experiments, participants were able to deploy attention more rapidly and accurately to the target movie's continuation when visual similarity was high than when it was low. Experiment 3 tested whether this could be explained by stimulus-driven (bottom-up) priming by feature similarity, using one clip at screen center that was followed by two alternative continuations to the left and right. Here, even the highest similarity across cuts did not capture attention. We conclude that following cuts of high visual similarity, memory-guided attention facilitates the deployment of attention, but this effect is (top-down) dependent on the viewer's active matching of scene content across cuts.


Advances in Cognitive Psychology | 2017

Human Eye Movements After Viewpoint Shifts in Edited Dynamic Scenes are Under Cognitive Control

Raphael Seywerth; Christian Valuch; Ulrich Ansorge

We tested whether viewers have cognitive control over their eye movements after cuts in videos of real-world scenes. In the critical conditions, scene cuts constituted panoramic view shifts: half of the view following a cut matched the view on the same scene before the cut. We manipulated the viewing task between two groups of participants. The main experimental group judged whether the scene following a cut was a continuation of the scene before the cut. Results showed that following view shifts, fixations were determined by the task from 250 ms until 1.5 s: participants made more and earlier fixations on scene regions that matched across cuts, compared to nonmatching scene regions. This was evident in comparison to a control group of participants who performed a task that did not require judging scene continuity across cuts, and who did not show the preference for matching scene regions. Our results illustrate that viewing intentions can have robust and consistent effects on gaze behavior in dynamic scenes, immediately after cuts.


ACM International Conference on Interactive Experiences for TV and Online Video | 2014

The effect of cinematic cuts on human attention

Christian Valuch; Ulrich Ansorge; Shelley Buchinger; Aniello Raffaele Patrone; Otmar Scherzer


Animal Behaviour | 2014

Colour and contrast of female faces: attraction of attention and its dependence on male hormone status in Macaca fuscata

Lena S. Pflüger; Christian Valuch; Daria R. Gutleb; Ulrich Ansorge; Bernard Wallner


Journal of Vision | 2015

Why do cuts work? Implicit memory biases attention and gaze after cuts in edited movies.

Christian Valuch; Raphael Seywerth; Peter König; Ulrich Ansorge


Journal of Vision | 2013

Task-dependent priming of fixation selection for recognition of natural scenes

Christian Valuch; Stefanie I. Becker; Ulrich Ansorge

Collaboration


Dive into Christian Valuch's collaborations.

Top Co-Authors


Peter König

University of Osnabrück
