
Publication


Featured research published by Markus Huff.


Journal of Vision | 2010

Conflicting motion information impairs multiple object tracking

Rebecca St.Clair; Markus Huff; Adriane E. Seiffert

People can keep track of target objects as they move among identical distractors using only spatiotemporal information. We investigated whether participants use motion information during the moment-to-moment tracking of objects by adding motion to the texture of moving objects. The texture either remained static or moved relative to the object's direction of motion: in the same direction, the opposite direction, or orthogonal to each object's trajectory. Results showed that, compared with the static texture condition, tracking performance was worse when the texture moved in the opposite direction of the object and better when the texture moved in the same direction as the object. Our results support the conclusion that motion information is used during the moment-to-moment tracking of objects. Motion information may either affect a representation of position or be used to periodically predict the future location of targets.


Behavior Research Methods | 2010

DynAOI: a tool for matching eye-movement data with dynamic areas of interest in animations and movies.

Frank Papenmeier; Markus Huff

Analyzing gaze behavior with dynamic stimulus material is of growing importance in experimental psychology; however, there is still a lack of efficient analysis tools that are able to handle dynamically changing areas of interest. In this article, we present DynAOI, an open-source tool that allows for the definition of dynamic areas of interest. It works automatically with animations that are based on virtual three-dimensional models. When one is working with videos of real-world scenes, a three-dimensional model of the relevant content needs to be created first. The recorded eye-movement data are matched with the static and dynamic objects in the model underlying the video content, thus creating static and dynamic areas of interest. A validation study asking participants to track particular objects demonstrated that DynAOI is an efficient tool for handling dynamic areas of interest.
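The core idea behind dynamic areas of interest can be illustrated with a minimal sketch: for each timestamped gaze sample, test whether it falls inside an object's bounding region as projected for that frame. This is a hypothetical illustration only; the function name, data layout, and matching rule are assumptions, not DynAOI's actual API or algorithm.

```python
# Hypothetical sketch of dynamic-AOI matching (not DynAOI's actual code):
# for each gaze sample, find which object's per-frame bounding box contains it.

def match_gaze_to_aois(gaze_samples, aoi_frames):
    """gaze_samples: list of (frame, x, y) gaze coordinates.
    aoi_frames: dict mapping frame -> {object_name: (xmin, ymin, xmax, ymax)},
    i.e. each object's 2-D bounding box after projecting the 3-D model
    for that frame. Returns a list of (frame, object_name or None)."""
    hits = []
    for frame, x, y in gaze_samples:
        hit = None
        for name, (xmin, ymin, xmax, ymax) in aoi_frames.get(frame, {}).items():
            # Inclusive point-in-rectangle test against the object's box.
            if xmin <= x <= xmax and ymin <= y <= ymax:
                hit = name
                break
        hits.append((frame, hit))
    return hits

# Example: one moving object whose bounding box shifts between frames.
aois = {
    0: {"ball": (10, 10, 30, 30)},
    1: {"ball": (20, 10, 40, 30)},
}
gaze = [(0, 15, 20), (1, 35, 20), (1, 5, 5)]
print(match_gaze_to_aois(gaze, aois))
# → [(0, 'ball'), (1, 'ball'), (1, None)]
```

Because the box coordinates change per frame, the same screen location can hit an object in one frame and miss it in the next, which is exactly what distinguishes dynamic from static areas of interest.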


Visual Cognition | 2010

Eye movements across viewpoint changes in multiple object tracking

Markus Huff; Frank Papenmeier; Georg Jahn; Friedrich W. Hesse

Observers can visually track multiple objects that move independently even if the scene containing the moving objects is rotated smoothly. Abrupt scene rotations make tracking more difficult, but not impossible. For nonrotated, stable dynamic displays, the strategy of looking at the targets' centroid has been shown to be important for visual tracking. But which factors determine successful visual tracking in a nonstable dynamic display? We report two eye-tracking experiments that present evidence for centroid looking. Across abrupt viewpoint changes, gaze on the centroid is more stable than gaze on targets, indicating a process of realigning targets as a group. Further, we show that the relative importance of centroid looking increases with object speed.


Visual Cognition | 2009

Tracking multiple objects across abrupt viewpoint changes

Markus Huff; Georg Jahn; Stephan Schwan

The reported experiment tested the effect of abrupt and unpredictable viewpoint changes on the attentional tracking of multiple objects in dynamic 3-D scenes. Observers tracked targets that moved independently among identical-looking distractors on a rectangular floor plane. The tracking interval was 11 s. Abrupt rotational viewpoint changes of 10°, 20°, or 30° occurred after 8 s. Accuracy of tracking targets across a 10° viewpoint change was comparable to accuracy in a continuous control condition, whereas viewpoint changes of 20° and 30° impaired tracking performance considerably. This result suggests that tracking is mainly dependent on a low-level process whose performance is safeguarded against small disturbances by the visual system's ability to compensate for small changes of retinocentric coordinates. Tracking across large viewpoint changes succeeds only if allocentric coordinates are remembered to relocate targets after displacements.


Memory & Cognition | 2008

Verbalizing events: overshadowing or facilitation?

Markus Huff; Stephan Schwan

Verbal overshadowing refers to the surprising effect whereby additional verbal information about a visual stimulus hinders its subsequent recognition. In two experiments, we analyzed the validity of this effect for event recognition across various conditions of presentation and testing. Participants observed events that were either followed (Experiment 1) or preceded (Experiment 2) by a verbal description. Results showed that verbal overshadowing occurred when the verbal description was presented after the visual presentation, independent of the distractor type. However, when the verbal description preceded the event, recognition performance was seen to improve when distractor items incompatible with the verbal description were used. The findings were interpreted in terms of two interacting mental representations, which differ both in their level of abstraction and in their accessibility.


Perception | 2007

Changing viewpoints during dynamic events.

Bärbel Garsoffky; Markus Huff; Stephan Schwan

The connection of various viewpoints of a visual dynamic scene can be realised in different ways. We examined whether different presentation modes influence scene recognition and the type of cognitive representation. In the learning phase, participants saw clips of basketball scenes (a) from a single, unvaried viewpoint, or with a change of viewpoint during the scene, whereby the connection was realised (b) by an abrupt cut or (c) by a continuous camera move. In the test phase, participants had to recognise video stills presenting basketball scenes from the same or differing viewpoints. As expected, cuts led to lower recognition accuracy than a fixed, unvaried viewpoint, whereas this was not the case for moves. However, the kind of connection between two viewpoints had no influence on the viewpoint dependence of the cognitive representation. Additionally, the amount of viewpoint deviation seemed to influence the overall conservativeness of participants' reactions.


Journal of Experimental Psychology: Human Perception and Performance | 2009

Canonical Views of Dynamic Scenes.

Bärbel Garsoffky; Stephan Schwan; Markus Huff

The visual recognition of dynamic scenes was examined. The authors hypothesized that the notion of canonical views, which has received strong empirical support for static objects, also holds for dynamic scenes. In Experiment 1, viewpoints orthogonal to the main axis of movement in the scene were preferred over other viewpoints, whereas viewpoints in line with the main axis were least preferred. Experiment 2 provided no empirical evidence for a recognition advantage of canonical viewpoints when presented during the initial learning phase, but Experiment 3 showed a cognitive advantage for canonical viewpoints if they were presented as test stimuli during the recognition test. Overall, the findings suggest that on a phenomenological level, viewers are consciously aware of such viewpoints, and, on a cognitive level, viewers benefit from canonical viewpoints in terms of recognition accuracy.


Attention Perception & Psychophysics | 2010

Spatial updating of dynamic scenes: Tracking multiple invisible objects across viewpoint changes

Markus Huff; Hauke S. Meyerhoff; Frank Papenmeier; Georg Jahn

Research on dynamic attention has shown that visual tracking is possible even if the observer’s viewpoint on the scene holding the moving objects changes. In contrast to smooth viewpoint changes, abrupt changes typically impair tracking performance. The lack of continuous information about scene motion, resulting from abrupt changes, seems to be the critical variable. However, hard onsets of objects after abrupt scene motion could explain the impairment as well. We report three experiments employing object invisibility during smooth and abrupt viewpoint changes to examine the influence of scene information on visual tracking, while equalizing hard onsets of moving objects after the viewpoint change. Smooth viewpoint changes provided continuous information about scene motion, which supported the tracking of temporarily invisible objects. However, abrupt and, therefore, discontinuous viewpoint changes strongly impaired tracking performance. Object locations retained with respect to a reference frame can account for the attentional tracking that follows invisible objects through continuous scene motion.


International Conference on Spatial Cognition | 2006

The spatial representation of dynamic scenes: an integrative approach

Markus Huff; Stephan Schwan; Bärbel Garsoffky

This paper addresses the spatial representation of dynamic scenes, in particular the question of whether recognition performance is viewpoint dependent or viewpoint invariant. Beginning with the delimitation of static and dynamic scene recognition, the viewpoint dependency of visual recognition performance and the structure of the underlying mental representation are discussed. Two parameters (an easy-to-identify event model and salient static features) are then identified that appear to account for the viewpoint dependency or viewpoint invariance of visual recognition performance for dynamic scenes.


Proceedings of the Annual Meeting of the Cognitive Science Society | 2006

The Influence of the Serial Order of Visual and Verbal Presentation on the Verbal Overshadowing Effect of Dynamic Scenes

Bärbel Garsoffky; Markus Huff; Stephan Schwan

Collaboration


Dive into Markus Huff's collaboration.

Top Co-Authors

Georg Jahn

University of Greifswald
