Mark Wexler
Paris Descartes University
Publications
Featured research published by Mark Wexler.
Cognition | 1998
Mark Wexler; Stephen M. Kosslyn; Alain Berthoz
Much indirect evidence supports the hypothesis that transformations of mental images are at least in part guided by motor processes, even in the case of images of abstract objects rather than of body parts. For example, rotation may be guided by processes that also prime one to see the results of a specific motor action. We directly test this hypothesis by means of a dual-task paradigm in which subjects perform the Cooper-Shepard mental rotation task while executing an unseen motor rotation in a given direction and at a previously learned speed. Four results support the inference that mental rotation relies on motor processes. First, motor rotation that is compatible with mental rotation results in faster response times and fewer errors in the imagery task than when the two rotations are incompatible. Second, the angle through which subjects rotate their mental images and the angle through which they rotate a joystick handle are correlated, but only if the directions of the two rotations are compatible. Third, motor rotation modifies the classical inverted V-shaped mental rotation response time function, favoring the direction of the motor rotation; indeed, in some cases motor rotation even shifts the location of the minimum of this curve in the direction of the motor rotation. Fourth, the preceding effect is sensitive not only to the direction of the motor rotation, but also to the motor speed: a change in the speed of motor rotation can correspondingly slow down or speed up the mental rotation.
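To make the shape of the response-time curve concrete, here is an illustrative parameterization (our notation, not a model from the paper): with stimulus orientation \theta measured from upright and mental rotation rate \omega, the classical inverted-V function is

    \mathrm{RT}(\theta) = \mathrm{RT}_0 + \min(\theta,\; 360^\circ - \theta)\,/\,\omega

The two arms of the V correspond to rotating the image in opposite directions. If a concurrent motor rotation speeds up one direction and slows the other, the arms acquire different effective rates, which skews the V toward the favored direction and can displace its minimum, as the third and fourth results describe.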
Trends in Cognitive Sciences | 2005
Mark Wexler; Jeroen J. A. van Boxtel
The connection between perception and action has classically been studied in one direction only: the effect of perception on subsequent action. Although our actions can modify our perceptions externally, by modifying the world or our view of it, it has recently become clear that even without this external feedback the preparation and execution of a variety of motor actions can have an effect on three-dimensional perceptual processes. Here, we review the ways in which an observer's motor actions (locomotion, head and eye movements, and object manipulation) affect his or her perception and representation of three-dimensional objects and space. Allowing observers to act can drastically change the way they perceive the third dimension, as well as how scientists view depth perception.
Journal of Vision | 2011
Tomas Knapen; Martin Rolfs; Mark Wexler; Patrick Cavanagh
Perceptual aftereffects provide a sensitive tool to investigate the influence of eye and head position on visual processing. There have been recent indications that the tilt aftereffect (TAE) is remapped around the time of a saccade to remain aligned to the adapting location in the world. Here, we investigate the spatial frame of reference of the TAE by independently manipulating retinal position, gaze orientation, and head orientation between adaptation and test. The results show that the critical factor in the TAE is the correspondence between the adaptation and test locations in a retinotopic frame of reference, whereas world- and head-centric frames of reference do not play a significant role. Our results confirm that adaptation to orientation takes place at retinotopic levels of visual processing. We suggest that the remapping process that plays a role in visual stability does not transfer feature gain information around the time of eye (or head) movements.
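The logic of the dissociation can be sketched with a simple additive one-dimensional model (our illustration, not a formula from the paper): writing r for retinal eccentricity, g for eye-in-head orientation, and h for head orientation, a stimulus location has coordinates

    x_{\mathrm{retina}} = r, \qquad x_{\mathrm{head}} = r + g, \qquad x_{\mathrm{world}} = r + g + h

Changing g or h between adaptation and test while holding r fixed keeps the adapting and test locations aligned only in the retinotopic frame; the finding that the TAE follows exactly this manipulation singles out the retinotopic frame.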
Journal of Vision | 2009
Camille Morvan; Mark Wexler
To perceive object motion when the eyes themselves undergo smooth movement, we can either perceive motion directly, by extracting motion relative to a background presumed to be fixed, or through compensation, by correcting retinal motion with information about eye movement. To isolate compensation, we created stimuli in which, while the eye undergoes smooth movement due to inertia, only one object is visible, and the motion of this stimulus is decoupled from that of the eye. Using a wide variety of stimulus speeds and directions, we rule out a linear model of compensation, in which stimulus velocity is estimated as a linear combination of retinal and eye velocities multiplied by a constant gain. In fact, we find that when the stimulus moves in the same direction as the eyes, there is little compensation, but when movement is in the opposite direction, compensation grows in a nonlinear way with speed. We conclude that eye movement is estimated from a combination of extraretinal and retinal signals, the latter based on an assumption of stimulus stationarity. Two simple models, in which the direction of eye movement is computed from the extraretinal signal and the speed from the retinal signal, account well for our results.
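The linear model that is ruled out can be written explicitly (notation ours): with retinal velocity v_r and eye velocity v_e, the estimated stimulus velocity is

    \hat{v} = v_r + g\, v_e, \qquad g = \text{constant gain}

A fixed g predicts compensation that scales linearly with eye speed and is symmetric with respect to the direction of stimulus motion; the observed pattern, with little compensation for same-direction motion and nonlinearly growing compensation for opposite-direction motion, cannot be produced by any single value of g.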
Psychological Science | 2003
Mark Wexler
Although visual input is egocentric, at least some visual perceptions and representations are allocentric, that is, independent of the observer's vantage point or motion. Three experiments investigated the visual perception of three-dimensional object motion during voluntary and involuntary motion in human subjects. The results show that the motor command contributes to the objective perception of space: Observers are more likely to apply, consciously and unconsciously, spatial criteria relative to an allocentric frame of reference when they are executing voluntary head movements than while they are undergoing similar involuntary displacements (which lead to a more egocentric bias). Furthermore, details of the motor command are crucial to spatial vision, as allocentric bias decreases or disappears when self-motion and motor command do not match.
Vision Research | 2001
Mark Wexler; Ivan Lamouret; Jacques Droulez
Because it has long been assumed that extraretinal information plays little or no role in spatial vision, the study of structure from motion (SfM) has confounded a moving observer perceiving a stationary object with a non-moving observer perceiving a rigid object undergoing equal and opposite motion. However, it has recently been shown that extraretinal information does play an important role in the extraction of structure from motion, by enhancing motion cues for objects that are stationary in an allocentric, world-fixed reference frame (Nature 409 (2001) 85). Here, we test whether stationarity per se is a criterion in SfM by pitting it against rigidity. We have created stimuli that, for a moving observer, offer two interpretations: one that is rigid but non-stationary, another that is more stationary but less rigid. In two experiments, with subjects reporting either structure or motion, we show that stationary, non-rigid solutions are preferred over rigid, non-stationary solutions, and that when no perfectly stationary solution is available, the visual system prefers the solution that is most stationary. These results demonstrate that allocentric criteria, derived from extraretinal information, participate in reconstructing the visual scene.
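The ambiguity being exploited can be stated compactly (our notation, a sketch rather than the paper's formalism): for an observer with self-motion S, the retinal optic flow F constrains only the combination of S with the object's world-frame motion M,

    F = \Phi(M, S)

so a single F admits a family of scene interpretations trading object motion against deformation. The stimuli are constructed so that within this family the perfectly rigid member has M \neq 0 (the object moves in the world) while the stationary member with M = 0 is slightly non-rigid; the reported percepts consistently select the stationary end of the family.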
Journal of Vision | 2003
Jeroen J. A. van Boxtel; Mark Wexler; Jacques Droulez
We investigated the perception of three-dimensional plane orientation, focusing on the perception of tilt, from optic flow generated by the observer's active movement around a simulated stationary object, and compared the performance to that of an immobile observer receiving a replay of the same optic flow. We found that perception of plane orientation is more precise in the active than in the immobile case. In particular, in the case of the immobile observer, the presence of shear in the optic flow drastically diminishes the precision of tilt perception, whereas in the active observer this decrease in performance is greatly reduced. The difference between active and immobile observers appears to be due to random rather than systematic errors. Furthermore, perceived slant is better correlated with simulated slant in the active observer. We conclude with a discussion of various theoretical explanations for our results.
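For readers outside this literature, the standard slant-tilt parameterization of surface orientation (a textbook definition, not anything specific to this paper): writing surface depth locally as a linear function of image coordinates,

    z(x, y) = z_0 + p x + q y

slant is \sigma = \arctan\sqrt{p^2 + q^2}, the angle by which the plane is inclined away from frontoparallel, and tilt is \tau = \operatorname{atan2}(q, p), the image direction of the depth gradient. The tilt judgments reported here are thus judgments of the direction in which the simulated plane recedes.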
Psychological Science | 2010
Emmanuelle Combe; Mark Wexler
It is commonly assumed that size constancy—invariance of perceived size of objects as they change retinal size because of changes in distance—depends solely on retinal stimulation and vergence, but on no other action-related signals. Distance to an object can change through displacement of either the observer or the object. The common assumption predicts that the two types of displacement should lead to the same degree of size constancy. We measured size constancy while observers viewed stationary stimuli at different distances. Changes in distance between trials were either actively produced by the observer or generated by real or simulated object displacement, with retinal stimulation held constant across the movement conditions. Responses were always closer to perfect constancy for observer than for object movement. Thus, size constancy is enhanced by information from observer displacement, and, more generally, processes thought to be purely perceptual may have unexpected components related to action.
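For concreteness, the geometry that size constancy must undo: an object of physical size s at distance D subtends a retinal angle of roughly

    \theta \approx s / D

(small-angle approximation), so a constancy-achieving estimate must scale retinal size by an estimate of distance, \hat{s} = \theta \hat{D}. Halving the distance doubles \theta, and perfect constancy requires \hat{D} to track the true distance so that \hat{s} stays fixed. In these terms, the result says that \hat{D} is more accurate when the change in distance is produced by the observer's own displacement than by object displacement.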
Current Biology | 2008
Mark Wexler; Nizar Ouarti
Understanding how we spontaneously scan the visual world through eye movements is crucial for characterizing both the strategies and inputs of vision. Despite the importance of the third, or depth, dimension for perception and action, little is known about how the specifically three-dimensional aspects of scenes affect looking behavior. Here we show that three-dimensional surface orientation has a surprisingly large effect on spontaneous exploration, and we demonstrate that a simple rule predicts eye movements given surface orientation in three dimensions: saccades tend to follow surface depth gradients. The rule proves to be quite robust: it generalizes across depth cues, holds in the presence or absence of a task, and applies to more complex three-dimensional objects. These results not only lead to a more accurate understanding of visuo-motor strategies, but also suggest a possible new oculomotor technique for studying three-dimensional vision from a variety of depth cues in subjects, such as animals or human infants, that cannot explicitly report their perceptions.
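A minimal sketch of the reported rule as a toy implementation (ours, not the authors' code; the planar-surface parameterization is an assumption for illustration): for a surface with depth z = z0 + p*x + q*y, the image-plane depth gradient is (p, q), and the rule predicts that saccade directions cluster along that axis.

    import numpy as np

    def depth_gradient(p, q):
        """Unit vector along the image-plane depth gradient of z = z0 + p*x + q*y."""
        g = np.array([p, q], dtype=float)
        return g / np.linalg.norm(g)

    def saccade_alignment(saccade_vec, p, q):
        """Cosine of the angle between a saccade and the depth-gradient axis.
        The rule predicts |alignment| near 1: saccades tend to follow the
        gradient, in either direction along it."""
        s = np.asarray(saccade_vec, dtype=float)
        s = s / np.linalg.norm(s)
        return float(np.dot(s, depth_gradient(p, q)))

    # Example: a surface receding toward the upper right (p = q = 0.5);
    # a saccade up and to the right is perfectly aligned with the gradient.
    print(saccade_alignment([1.0, 1.0], 0.5, 0.5))  # 1.0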
Journal of Vision | 2005
Camille Morvan; Mark Wexler
To perceive the real motion of objects in the world while moving the eyes, retinal motion signals must be compensated by information about eye movements. Here we study when this compensation takes place in the course of visual processing, and whether uncompensated motion signals are ever available. We used a paradigm based on an asymmetry in motion detection: fast-moving objects are easier to find among slow-moving distractors than slow-moving objects among fast distractors. By coupling object motion to eye motion, we created stimuli that moved fast on the retina but slowly in an eye-independent reference frame, or vice versa. In the first 100 ms after stimulus onset, motion detection is dominated by retinal motion, uncompensated for eye movements. As early as 130 ms, compensated signals become available: objects that move slowly on the retina but fast in an eye-independent frame are detected as easily as those that move fast on the retina.
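The coupling comes down to simple velocity arithmetic (notation ours): with eye velocity v_e and object velocity v_o in a screen-fixed frame, retinal velocity is

    v_r = v_o - v_e

For example, with v_e = 10 deg/s, an object moving at v_o = 11 deg/s is fast in the eye-independent frame yet drifts at only 1 deg/s on the retina, while an object at v_o = 1 deg/s is slow in that frame yet sweeps the retina at 9 deg/s. Which of these two stimulus types pops out among distractors reveals whether the retinal or the compensated representation is driving detection at a given latency.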