Frank Papenmeier
University of Tübingen
Publications
Featured research published by Frank Papenmeier.
Behavior Research Methods | 2010
Frank Papenmeier; Markus Huff
Analyzing gaze behavior with dynamic stimulus material is of growing importance in experimental psychology; however, there is still a lack of efficient analysis tools that are able to handle dynamically changing areas of interest. In this article, we present DynAOI, an open-source tool that allows for the definition of dynamic areas of interest. It works automatically with animations that are based on virtual three-dimensional models. When one is working with videos of real-world scenes, a three-dimensional model of the relevant content needs to be created first. The recorded eye-movement data are matched with the static and dynamic objects in the model underlying the video content, thus creating static and dynamic areas of interest. A validation study asking participants to track particular objects demonstrated that DynAOI is an efficient tool for handling dynamic areas of interest.
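The core idea behind dynamic areas of interest can be illustrated with a short sketch: gaze samples are matched against object positions that change over time. The code below is only an illustration of that matching principle, not DynAOI's implementation; the keyframe format, object names, sampling times, and circular hit-test radius are assumptions made for the example.

```python
# Illustrative sketch of matching gaze samples to dynamic areas of interest
# (AOIs). This is NOT DynAOI's implementation: the keyframe format, object
# names, and circular hit-test radius are assumptions made for the example.
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float  # time in seconds
    x: float  # horizontal gaze position in pixels
    y: float  # vertical gaze position in pixels

def aoi_center(trajectory, t):
    """Linearly interpolate an object's 2D position at time t.
    `trajectory` is a time-sorted list of (time, x, y) keyframes."""
    for (t0, x0, y0), (t1, x1, y1) in zip(trajectory, trajectory[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))
    return None  # t lies outside the animation

def assign_gaze_to_aois(samples, trajectories, radius=50.0):
    """Label each gaze sample with the first dynamic AOI it falls into."""
    hits = []
    for s in samples:
        label = None
        for name, trajectory in trajectories.items():
            center = aoi_center(trajectory, s.t)
            if center and (s.x - center[0]) ** 2 + (s.y - center[1]) ** 2 <= radius ** 2:
                label = name
                break
        hits.append((s.t, label))
    return hits

# Example: one object ("ball") moving left to right across the screen in 2 s.
trajectories = {"ball": [(0.0, 100.0, 300.0), (2.0, 900.0, 300.0)]}
samples = [GazeSample(0.5, 310.0, 305.0), GazeSample(1.5, 690.0, 310.0)]
print(assign_gaze_to_aois(samples, trajectories))  # [(0.5, 'ball'), (1.5, 'ball')]
```

In DynAOI itself, object positions are derived from the three-dimensional model underlying the video content rather than from hand-written keyframes as in this sketch.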
Brain and Cognition | 2012
Georg Jahn; Julia Wendt; Martin Lotze; Frank Papenmeier; Markus Huff
Keeping aware of the locations of objects while one is moving requires the updating of spatial representations. As long as the objects are visible, attentional tracking is sufficient, but knowing where objects out of view went in relation to one's own body involves an updating of spatial working memory. Here, multiple object tracking was employed to study spatial updating and its neural correlates. In a dynamic 3D scene, targets moved among visually indistinguishable distractors. The targets and distractors either stayed visible during continuous viewpoint changes or they turned invisible. The parametric variation of tracking load revealed load-dependent activations of the intraparietal sulcus, the superior parietal lobule, and the lateral occipital cortex in response to the attentive tracking task. Viewpoint changes with invisible objects that demanded retention and updating produced load-dependent activation only in the precuneus, in line with its presumed involvement in updating spatial working memory.
Journal of Experimental Psychology: Human Perception and Performance | 2014
Frank Papenmeier; Hauke S. Meyerhoff; Georg Jahn; Markus Huff
We examined whether surface feature information is utilized to track the locations of multiple objects. In particular, we tested whether surface features and spatiotemporal information are weighted according to their availability and reliability. Accordingly, we hypothesized that surface features should affect location tracking across spatiotemporal discontinuities. Three kinds of spatiotemporal discontinuities were implemented across five experiments: abrupt scene rotations, abrupt zooms, and a reduced presentation frame rate. Objects were briefly colored across the spatiotemporal discontinuity. Distinct coloring that matched spatiotemporal information across the discontinuity improved tracking performance as compared with homogeneous coloring. Swapping distinct colors across the discontinuity impaired performance. Correspondence by color was further demonstrated in the swap condition: distractors appearing in a former target color were mis-selected more often than distractors appearing in a former distractor color. This was true even when color never supported tracking and when participants were instructed to ignore color. Furthermore, effects of object color on tracking occurred with unreliable spatiotemporal information but not with reliable spatiotemporal information. Our results demonstrate that surface feature information can be utilized to track the locations of multiple objects. This contrasts with theories stating that objects are tracked based on spatiotemporal information only. We introduce a flexible-weighting tracking account stating that spatiotemporal information and surface features are both utilized by the location tracking mechanism. The two sources of information are weighted according to their availability and reliability. Surface feature effects on tracking are particularly likely when distinct surface feature information is available and spatiotemporal information is unreliable.
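As a schematic illustration of the flexible-weighting account, the sketch below combines correspondence evidence from the two sources with weights driven by spatiotemporal reliability and feature availability. The paper states the account verbally; the linear combination, the particular weighting rule, and all numbers here are assumptions made for the example.

```python
# Schematic sketch of the flexible-weighting tracking account described above.
# The linear combination, weighting rule, and example numbers are assumptions
# for illustration; the paper gives a verbal account, not this formula.
def correspondence_score(spatiotemporal_match, feature_match,
                         spatiotemporal_reliability, feature_availability):
    """Combine two correspondence cues (all inputs in 0..1), weighting each
    by how much it can currently be trusted."""
    w_st = spatiotemporal_reliability
    w_ft = feature_availability * (1.0 - spatiotemporal_reliability)
    total = w_st + w_ft
    if total == 0.0:
        return 0.0  # no usable information from either source
    return (w_st * spatiotemporal_match + w_ft * feature_match) / total

# Reliable spatiotemporal information: surface features barely matter.
print(correspondence_score(0.9, 0.2,
                           spatiotemporal_reliability=0.95, feature_availability=1.0))
# Spatiotemporal discontinuity (unreliable motion cues): distinct color dominates.
print(correspondence_score(0.3, 0.9,
                           spatiotemporal_reliability=0.2, feature_availability=1.0))
```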
Visual Cognition | 2012
Markus Huff; Frank Papenmeier; Jeffrey M. Zacks
Boundaries between meaningful events are key moments in comprehending human action. At these points, viewers may focus on the events' contents at the expense of other information. We tested whether visual detection was impaired at those moments that perceivers judged to be boundaries between events. Short animated football clips were used as stimulus material, and event boundaries were imposed by having the ball change possession. In a first experiment, we found that possession changes were perceived to be event boundaries. In a second experiment, participants were asked to keep track of 4 of 10 players and to watch for 120 ms probes appearing either at an event boundary or a nonboundary. Probe detection was less accurate at event boundaries. This result suggests that the segmentation of ongoing activity into events corresponds with the regulation of attention over time.
Cognition | 2011
Hauke S. Meyerhoff; Markus Huff; Frank Papenmeier; Georg Jahn; Stephan Schwan
Dynamic tasks often require fast adaptations to new viewpoints. It has been shown that automatic spatial updating is triggered by proprioceptive motion cues. Here, we demonstrate that purely visual cues are sufficient to trigger automatic updating. In five experiments, we examined spatial updating in a dynamic attention task in which participants had to track three objects across scene rotations that occurred while the objects were temporarily invisible. The objects moved on a floor plane acting as a reference frame and unpredictably either were relocated when reference frame rotations occurred or remained in place. Although participants were aware of this dissociation, they were unable to ignore continuous visual cues about scene rotations (Experiments 1a and 1b). This held even when common rotations of the floor plane and objects were less likely than a dissociated rotation (Experiments 2a and 2b). However, identifying only the spatial reference direction was not sufficient to trigger updating (Experiment 3). Thus, we conclude that automatic spatial target updating occurs with purely visual information.
Attention Perception & Psychophysics | 2010
Markus Huff; Hauke S. Meyerhoff; Frank Papenmeier; Georg Jahn
Research on dynamic attention has shown that visual tracking is possible even if the observer’s viewpoint on the scene holding the moving objects changes. In contrast to smooth viewpoint changes, abrupt changes typically impair tracking performance. The lack of continuous information about scene motion, resulting from abrupt changes, seems to be the critical variable. However, hard onsets of objects after abrupt scene motion could explain the impairment as well. We report three experiments employing object invisibility during smooth and abrupt viewpoint changes to examine the influence of scene information on visual tracking, while equalizing hard onsets of moving objects after the viewpoint change. Smooth viewpoint changes provided continuous information about scene motion, which supported the tracking of temporarily invisible objects. However, abrupt and, therefore, discontinuous viewpoint changes strongly impaired tracking performance. Object locations retained with respect to a reference frame can account for the attentional tracking that follows invisible objects through continuous scene motion.
Perception | 2013
Hauke S. Meyerhoff; Frank Papenmeier; Markus Huff
How do observers track multiple moving objects simultaneously? Previous work has shown that adding conflicting texture motion to the tracked objects impairs tracking performance. Here, we test whether texture motion is integrated with object motion in an object-based manner, or whether adding conflicting texture motion to a display causes global interference effects. We added a moving texture onto the surface of tracked objects with the texture moving either in the same or opposite direction to the objects. In the critical trials, we presented both types of texture motion. In these trials, we found a selective impairment for the objects with opposite texture motion, suggesting that multiple motion information sources are integrated in an object-based manner during tracking. The integrated motion signals might be used to anticipate prospective object locations in order to enhance tracking.
Attention Perception & Psychophysics | 2012
Frank Papenmeier; Markus Huff; Stephan Schwan
Locations of multiple stationary objects are represented on the basis of their global spatial configuration in visual short-term memory (VSTM). Once objects move individually, they form a global spatial configuration with varying spatial inter-object relations over time. The representation of such dynamic spatial configurations in VSTM was investigated in six experiments. Participants memorized a scene with six moving and/or stationary objects and performed a location change detection task for one object specified during the probing phase. The spatial configuration of the objects was manipulated between memory phase and probing phase. Full spatial configurations showing all objects caused higher change detection performance than did no or partial spatial configurations for static and dynamic scenes. The representation of dynamic scenes in VSTM is therefore also based on their global spatial configuration. The variation of the spatiotemporal features of the objects demonstrated that spatiotemporal features of dynamic spatial configurations are represented in VSTM. The presentation of conflicting spatiotemporal cues interfered with memory retrieval. However, missing or conforming spatiotemporal cues triggered memory retrieval of dynamic spatial configurations. The configurational representation of stationary and moving objects was based on a single spatial configuration, indicating that static spatial configurations are a special case of dynamic spatial configurations.
Vision Research | 2013
Markus Huff; Frank Papenmeier
In multiple-object tracking, participants can track several moving objects among identical distractors. It has recently been shown that the human visual system uses motion information in order to keep track of targets (St. Clair et al., Journal of Vision, 10(4), 1-13). Texture on the surface of an object that moved in the opposite direction to the object itself impaired tracking performance. In this study, we examined the temporal interval at which texture motion and object motion are integrated in dynamic scenes. In two multiple-object tracking experiments, we manipulated the texture motion on the objects: the texture either moved in the same direction as the objects, in the opposite direction, or alternated between the same and opposite direction at varying intervals. In Experiment 1, we show that the integration of object motion and texture motion can take place at intervals as short as 100 ms. In Experiment 2, we show that there is a linear relationship between the proportion of opposite texture motion and tracking performance. We suggest that texture motion might cause shifts in perceived object locations, thus influencing tracking performance.
Experimental Psychology | 2012
Georg Jahn; Frank Papenmeier; Hauke S. Meyerhoff; Markus Huff
Spatial reference in multiple object tracking is available from configurations of dynamic objects and from static reference objects. In three experiments, we studied the use of spatial reference in tracking and in relocating targets after abrupt scene rotations. Observers tracked 1, 2, 3, 4, and 6 targets in 3D scenes in which white balls moved on a square floor plane. The floor plane was either visible, thus providing static spatial reference, or invisible. Without scene rotations, the configuration of dynamic objects provided sufficient spatial reference, and static spatial reference was not advantageous. In contrast, with abrupt scene rotations of 20°, static spatial reference aided the relocation of targets. A wireframe floor plane lacking local visual detail was as effective as a checkerboard. Individually colored geometric forms serving as static reference objects provided no additional benefit, even if targets were centered on these forms at the abrupt scene rotation. Individualizing the dynamic objects themselves by color for a brief interval around the abrupt scene rotation, however, did improve performance. We conclude that attentional tracking of moving targets proceeds within dynamic configurations but detached from the static local background.