
Publication


Featured research published by Markus Lappe.


Nature | 2000

Postsaccadic visual references generate presaccadic compression of space

Markus Lappe; Holger Awater; Bart Krekelberg

With every rapid gaze shift (saccade), our eyes experience a different view of the world. Stable perception of visual space requires that points in the new image are associated with corresponding points in the previous image. The brain may use an extraretinal eye position signal to compensate for gaze changes, or, alternatively, exploit the image contents to determine associated locations. Support for a uniform extraretinal signal comes from findings that the apparent position of objects briefly flashed around the time of a saccade is often shifted in the direction of the saccade. This view is challenged, however, by observations that the magnitude and direction of the displacement varies across the visual field. Led by the observation that non-uniform displacements typically occurred in studies conducted in slightly illuminated rooms, here we determine the dependence of perisaccadic mislocalization on the availability of visual spatial references at various times around a saccade. We find that presaccadic compression occurs only if visual references are available immediately after, rather than before or during, the saccade. Our findings indicate that the visual processes of transsaccadic spatial localization use mainly postsaccadic visual information.


Proceedings of the National Academy of Sciences of the United States of America | 2002

Perception of biological motion without local image motion

J. A. Beintema; Markus Lappe

A vivid perception of the moving form of a human figure can be obtained from a few moving light points on the joints of the body. This is known as biological motion perception. It is commonly believed that the perception of biological motion rests on image motion signals. Curiously, however, some patients with lesions to motion processing areas of the dorsal stream are severely impaired in image motion perception but can easily perceive biological motion. Here we describe a biological motion stimulus based on a limited lifetime technique that tests the perception of a moving human figure in the absence of local image motion. We find that subjects can spontaneously recognize a moving human figure in displays without local image motion. Their performance is very similar to that for classic point-light displays. We also find that tasks involving the discrimination of walking direction or the coherence of a walking figure can be performed in the absence of image motion. Thus, although image motion may generally aid processes such as segmenting figure from background, we propose that it is not the basis for the percept of biological motion. Rather, we suggest biological motion is derived from dynamic form information on body posture evolving over time.
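The limited-lifetime idea described above can be illustrated with a short sketch: each frame, the dots are redrawn at fresh random positions on the figure's limbs, so no single dot persists long enough to carry a local motion signal. The posture model (random points on straight segments between joints) and all names below are illustrative assumptions, not the authors' implementation.

```python
import random

def sample_points_on_limbs(joints, limbs, n_points):
    """Place n_points at random positions along the limb segments of the
    current posture; called anew every frame (dot lifetime = 1 frame)."""
    points = []
    for _ in range(n_points):
        a, b = random.choice(limbs)          # pick a limb segment
        t = random.random()                  # random position along the limb
        (x1, y1), (x2, y2) = joints[a], joints[b]
        points.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return points

# toy posture: two joints forming a single "limb"
joints = {"hip": (0.0, 1.0), "knee": (0.2, 0.5)}
limbs = [("hip", "knee")]
print(sample_points_on_limbs(joints, limbs, n_points=4))
```

Because the dots are resampled every frame, frame-to-frame dot correspondences are destroyed while the evolving body posture is preserved.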


IEEE Transactions on Visualization and Computer Graphics | 2010

Estimation of Detection Thresholds for Redirected Walking Techniques

Frank Steinicke; Gerd Bruder; Jason Jerald; Harald Frenz; Markus Lappe

In immersive virtual environments (IVEs), users can control their virtual viewpoint by moving their tracked head and walking through the real world. Usually, movements in the real world are mapped one-to-one to virtual camera motions. With redirection techniques, the virtual camera is manipulated by applying gains to user motion so that the virtual world moves differently than the real world. Thus, users can walk through large-scale IVEs while physically remaining in a reasonably small workspace. In psychophysical experiments with a two-alternative forced-choice task, we have quantified how much humans can unknowingly be redirected on physical paths that are different from the visually perceived paths. We tested 12 subjects in three different experiments: (E1) discrimination between virtual and physical rotations, (E2) discrimination between virtual and physical straightforward movements, and (E3) discrimination of path curvature. In experiment E1, subjects performed rotations with different gains, and then had to choose whether the visually perceived rotation was smaller or greater than the physical rotation. In experiment E2, subjects chose whether the physical walk was shorter or longer than the visually perceived scaled travel distance. In experiment E3, subjects estimated the path curvature when walking a curved path in the real world while the visual display showed a straight path in the virtual world. Our results show that users can be turned physically about 49 percent more or 20 percent less than the perceived virtual rotation, distances can be downscaled by 14 percent and upscaled by 26 percent, and users can be redirected on a circular arc with a radius greater than 22 m while they believe that they are walking straight.
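The gains and thresholds reported in this abstract can be sketched in a few lines. This is a minimal illustration, not the authors' code; the gain definition (ratio of physical to virtual motion) and all names are assumptions chosen to match the numbers quoted above.

```python
# Detection thresholds from the abstract, expressed as ratios:
# users can be turned physically ~49% more or ~20% less than the
# virtual rotation, and walked distances can be scaled down 14% / up 26%.
PHYSICAL_ROTATION_RATIO = (0.80, 1.49)
VIRTUAL_DISTANCE_RATIO = (0.86, 1.26)
MIN_CURVATURE_RADIUS_M = 22.0  # arc radius still perceived as walking straight

def undetectable(ratio, bounds):
    """True if the applied gain ratio lies within the estimated thresholds."""
    lo, hi = bounds
    return lo <= ratio <= hi

def physical_rotation(virtual_rotation_deg, ratio):
    """Physical rotation the user performs for a given virtual rotation."""
    return ratio * virtual_rotation_deg

print(physical_rotation(90.0, 1.25))                 # 112.5
print(undetectable(1.25, PHYSICAL_ROTATION_RATIO))   # True
print(undetectable(1.60, PHYSICAL_ROTATION_RATIO))   # False
```

A redirected-walking controller would pick gains inside these bounds so the manipulation stays below the user's detection threshold.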


The Journal of Neuroscience | 2006

A Model of Biological Motion Perception from Configural Form Cues

Joachim Lange; Markus Lappe

Biological motion perception is the compelling ability of the visual system to perceive complex human movements effortlessly and within a fraction of a second. Recent neuroimaging and neurophysiological studies have revealed that the visual perception of biological motion activates a widespread network of brain areas. The superior temporal sulcus has a crucial role within this network. The roles of other areas are less clear. We present a computational model based on neurally plausible assumptions to elucidate the contributions of motion and form signals to biological motion perception and the computations in the underlying brain network. The model simulates receptive fields for images of the static human body, as found by neuroimaging studies, and temporally integrates their responses by leaky integrator neurons. The model shows a high correlation with data obtained by neurophysiological, neuroimaging, and psychophysical studies.
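The leaky temporal integration mentioned in the abstract follows a standard update rule. Below is a minimal sketch of such a leaky integrator; the time constants and the input trace are illustrative assumptions, not values from the paper.

```python
def leaky_integrate(inputs, tau=100.0, dt=10.0, a0=0.0):
    """Accumulate a sequence of inputs with an exponential leak:
    a(t + dt) = a(t) + (dt / tau) * (-a(t) + input(t))."""
    a = a0
    trace = []
    for x in inputs:
        a += (dt / tau) * (-a + x)
        trace.append(a)
    return trace

# made-up responses of one posture template over successive frames
frame_responses = [0.2, 0.8, 0.9, 0.7, 0.95]
print(leaky_integrate(frame_responses))  # activity rises toward the input level
```

Integrating template responses this way lets evidence for a body posture sequence build up over frames while older evidence decays.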


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013

Deep Hierarchies in the Primate Visual Cortex: What Can We Learn for Computer Vision?

Norbert Krüger; Peter Janssen; Sinan Kalkan; Markus Lappe; Aleš Leonardis; Justus H. Piater; Antonio Jose Rodríguez-Sánchez; Laurenz Wiskott

Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. For a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system, considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy, in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.


The Journal of Neuroscience | 1996

Optic Flow Processing in Monkey STS: A Theoretical and Experimental Approach

Markus Lappe; Frank Bremmer; Martin Pekel; Alexander Thiele; Klaus-Peter Hoffmann

How does the brain process visual information about self-motion? In monkey cortex, the analysis of visual motion is performed by successive areas specialized in different aspects of motion processing. Whereas neurons in the middle temporal (MT) area are direction-selective for local motion, neurons in the medial superior temporal (MST) area respond to motion patterns. A neural network model attempts to link these properties to the psychophysics of human heading detection from optic flow. It proposes that populations of neurons represent specific directions of heading. We quantitatively compared single-unit recordings in area MST with single-neuron simulations in this model. Predictions were derived from simulations and subsequently tested in recorded neurons. Neuronal activities depended on the position of the singular point in the optic flow. Best responses to opposing motions occurred for opposite locations of the singular point in the visual field. Excitation by one type of motion is paired with inhibition by the opposite motion. Activity maxima often occur for peripheral singular points. The averaged recorded shape of the response modulations is sigmoidal, which is in agreement with model predictions. We also tested whether the activity of the neuronal population in MST can represent the directions of heading in our stimuli. A simple least-mean-square minimization could retrieve the direction of heading from the neuronal activities with a precision of 4.3°. Our results show good agreement between the proposed model and the neuronal responses in area MST and further support the hypothesis that area MST is involved in visual navigation.
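The least-mean-square readout described in the abstract can be sketched as follows: pick the candidate heading whose predicted population activity best matches the observed activity. The sigmoidal tuning model, its slope, and all parameter values below are illustrative assumptions, not the fitted model from the paper.

```python
import numpy as np

def population_response(heading_deg, preferred_deg, slope=0.1):
    """Sigmoidal response of each MST-like unit as a function of the
    position of the singular point (heading) relative to its preference."""
    return 1.0 / (1.0 + np.exp(-slope * (heading_deg - preferred_deg)))

preferred = np.linspace(-40, 40, 41)              # preferred headings (deg)
observed = population_response(12.0, preferred)   # noiseless "recorded" activity

# least-mean-square readout over a grid of candidate headings
candidates = np.linspace(-40, 40, 801)
errors = [np.mean((population_response(h, preferred) - observed) ** 2)
          for h in candidates]
estimate = candidates[int(np.argmin(errors))]
print(estimate)
```

With noisy activities, the same minimization yields an estimate whose error reflects the population's precision (4.3 degrees in the recordings reported above).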


Journal of Vision | 2008

About the influence of post-saccadic mechanisms for visual stability on peri-saccadic compression of object location

Fred H. Hamker; Marc Zirnsak; Markus Lappe

Peri-saccadic perception experiments have revealed a multitude of mislocalization phenomena. For instance, a briefly flashed stimulus is perceived closer to the saccade target, whereas a displacement of the saccade target usually goes unnoticed. This latter saccadic suppression of displacement has been explained by a built-in characteristic of the perceptual system: the assumption that during a saccade, the environment remains stable. We explored whether the mislocalization of a briefly flashed stimulus toward the saccade target is also grounded in the built-in assumption of a stable environment. If the mislocalization of a peri-saccadically flashed stimulus originates from a post-saccadic alignment process, an additional location marker at the position of the upcoming flash should counteract compression. Alternatively, compression might be the result of peri-saccadic attentional phenomena. In this case, mislocalization should occur even if the position of the flashed stimulus is marked. When subjects were asked about their perceived location, they mislocalized the stimulus toward the saccade target, even though they were fully aware of the correct stimulus location. Thus, our results suggest that the uncertainty about the location of a flashed stimulus is not inherently relevant for compression.


Annals of the New York Academy of Sciences | 1999

Linear vestibular self-motion signals in monkey medial superior temporal area.

Frank Bremmer; Michael Kubischik; Martin Pekel; Markus Lappe; Klaus-Peter Hoffmann

The present study was aimed at investigating the sensitivity to linear vestibular stimulation of neurons in the medial superior temporal area (MST) of the macaque monkey. Two monkeys were moved on a parallel swing while single-unit activity was recorded. About one-half of the cells (28/51) responded in the dark either to forward motion (n = 10), or to backward motion (n = 11), or to both (n = 7). Twenty cells responding to vestibular stimulation in darkness were also tested for their responses to optic flow stimulation simulating forward and backward self-motion. Forty-five percent (9/20) of them preferred the same self-motion directions, that is, combined visual and vestibular signals in a synergistic manner. Thirty percent (6/20) of the cells were not responsive to visual stimulation alone. The remaining 25% (5/20) preferred directions that were anti-aligned. Our results provide strong evidence that neurons in the MST area are at least in part involved in the processing of self-motion.


Neuron | 2004

Perisaccadic Mislocalization Orthogonal to Saccade Direction

Marcus Kaiser; Markus Lappe

Saccadic eye movements transiently distort perceptual space. Visual objects flashed shortly before or during a saccade are mislocalized along the saccade direction, resembling a compression of space around the saccade target. These mislocalizations reflect transient errors of processes that construct spatial stability across eye movements. They may arise from errors of reference signals associated with saccade direction and amplitude or from visual or visuomotor remapping processes focused on the saccade target's position. The second case would predict apparent position shifts toward the target also in directions orthogonal to the saccade. We report that such orthogonal mislocalization indeed occurs. Surprisingly, however, the orthogonal mislocalization is restricted to only part of the visual field. This part comprises distant positions in saccade direction but does not depend on the target's position. Our findings can be explained by a combination of directional and positional reference signals that varies in time course across the visual field.


PLOS Computational Biology | 2008

The peri-saccadic perception of objects and space.

Fred H. Hamker; Marc Zirnsak; Dirk Calow; Markus Lappe

Eye movements affect object localization and object recognition. Around saccade onset, briefly flashed stimuli appear compressed towards the saccade target, receptive fields dynamically change position, and the recognition of objects near the saccade target is improved. These effects have been attributed to different mechanisms. We provide a unifying account of peri-saccadic perception explaining all three phenomena by a quantitative computational approach simulating cortical cell responses on the population level. Contrary to the common view of spatial attention as a spotlight, our model suggests that oculomotor feedback alters the receptive field structure in multiple visual areas at an intermediate level of the cortical hierarchy to dynamically recruit cells for processing a relevant part of the visual field. The compression of visual space occurs at the expense of this locally enhanced processing capacity.

Collaboration


Dive into Markus Lappe's collaborations.

Top Co-Authors

Dirk Calow

University of Münster


Fred H. Hamker

Chemnitz University of Technology
