Manuel Vidal
Max Planck Society
Publications
Featured research published by Manuel Vidal.
PLOS ONE | 2014
Guy Cheron; Axelle Leroy; Ernesto Palmero-Soler; Caty De Saedeleer; Ana Bengoetxea; Ana Maria Cebolla; Manuel Vidal; Bernard Dan; Alain Berthoz; Joseph McIntyre
Visual perception is not only based on incoming visual signals but also on information about a multimodal reference frame that incorporates vestibulo-proprioceptive input and motor signals. In addition, top-down modulation of visual processing has previously been demonstrated during cognitive operations including selective attention and working memory tasks. In the absence of a stable gravitational reference, the updating of salient stimuli becomes crucial for successful visuo-spatial behavior by humans in weightlessness. Here we found that visually-evoked potentials triggered by the image of a tunnel just prior to an impending 3D movement in a virtual navigation task were altered in weightlessness aboard the International Space Station, while those evoked by a classical 2D-checkerboard were not. Specifically, the analysis of event-related spectral perturbations and inter-trial phase coherency of these EEG signals recorded in the frontal and occipital areas showed that phase-locking of theta-alpha oscillations was suppressed in weightlessness, but only for the 3D tunnel image. Moreover, analysis of the phase of the coherency demonstrated the existence on Earth of a directional flux in the EEG signals from the frontal to the occipital areas mediating a top-down modulation during the presentation of the image of the 3D tunnel. In weightlessness, this fronto-occipital, top-down control was transformed into a diverging flux from the central areas toward the frontal and occipital areas. These results demonstrate that gravity-related sensory inputs modulate primary visual areas depending on the affordances of the visual scene.
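For context, the inter-trial phase coherence (or phase-locking value) referred to above is conventionally defined as the length of the mean of unit phase vectors across trials at each time-frequency point. The sketch below is a generic NumPy illustration of that standard definition, not the authors' actual analysis pipeline; the function name and array layout are assumptions for the example.

import numpy as np

def inter_trial_phase_coherence(tf_coeffs):
    """Generic inter-trial phase coherence (phase-locking) measure.

    tf_coeffs : complex array of shape (n_trials, n_freqs, n_times),
        e.g. per-trial wavelet or Hilbert coefficients.
    Returns an (n_freqs, n_times) array in [0, 1]: 1 means the phase is
    identical across trials (perfect phase-locking), 0 means the phases
    are uniformly distributed across trials.
    """
    unit_phases = np.exp(1j * np.angle(tf_coeffs))  # one unit vector per trial
    return np.abs(unit_phases.mean(axis=0))         # length of the mean vector

A suppression of theta-alpha phase-locking, as reported in weightlessness, would appear as lower values of this measure in roughly the 4-12 Hz frequency rows.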
Journal of Visualized Experiments | 2012
Michael Barnett-Cowan; T Meilinger; Manuel Vidal; Harald Teufel; Hh Bülthoff
Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point [1]. Humans can perform path integration based exclusively on visual [2, 3], auditory [4], or inertial cues [5]. However, with multiple cues present, inertial cues - particularly kinaesthetic - seem to dominate [6, 7]. In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones [5]. Movement through physical space therefore does not seem to be accurately represented by the brain. Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see [3] for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator [8, 9] with a large range of motion to assess whether path integration is similar between horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed. Sixteen observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited-lifetime star field), vestibular-kinaesthetic (passive self-motion with eyes closed), or visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal and frontal planes consisted of two segment lengths (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s² peak acceleration). The angle between the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen. Observers were more likely to underestimate angle size for movement in the horizontal plane compared to the vertical planes. In the frontal plane observers were more likely to overestimate angle size, while there was no such bias in the sagittal plane. Finally, observers responded more slowly when answering based on vestibular-kinaesthetic information alone. Human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. That pointing is consistent with underestimation of the angle moved through in the horizontal plane and overestimation in the vertical planes suggests that the neural representation of self-motion through space is non-symmetrical, which may relate to the fact that humans experience movement mostly within the horizontal plane.
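To make the pointing task concrete, the ideal (error-free) response can be derived by simple vector geometry from the two segment lengths and the turn angle. The short sketch below computes that ideal homing direction for the trajectories described above; it is an illustrative calculation only (the function name and sign conventions are assumptions), not code from the study.

import numpy as np

def ideal_homing_angle(seg1=0.4, seg2=1.0, turn_deg=90.0):
    """Ideal pointing response for a two-segment trajectory.

    The observer moves seg1 metres, turns by turn_deg degrees within the
    plane of motion, then moves seg2 metres; the ideal response is the
    direction from the end position back to the origin, expressed relative
    to the final heading (positive = counterclockwise in that plane).
    """
    turn = np.radians(turn_deg)
    end = np.array([seg1 + seg2 * np.cos(turn), seg2 * np.sin(turn)])
    to_origin = -end
    rel = np.arctan2(to_origin[1], to_origin[0]) - turn  # final heading angle equals the turn
    return np.degrees((rel + np.pi) % (2 * np.pi) - np.pi)

print(ideal_homing_angle(turn_deg=45.0), ideal_homing_angle(turn_deg=90.0))

Systematic under- or overestimation of the angle moved through would show up as consistent deviations of the observers' pointing responses from this ideal angle.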
Experimental Brain Research | 2013
Caty De Saedeleer; Manuel Vidal; Mark Lipshits; Ana Bengoetxea; Ana Maria Cebolla; Alain Berthoz; Guy Cheron; Joseph McIntyre
In the present study, we investigated the effect of weightlessness on the ability to perceive and remember self-motion when passing through virtual 3D tunnels that curve in different directions (up, down, left, right). We asked cosmonaut subjects to perform the experiment before, during and after long-duration space flight aboard the International Space Station (ISS), and we manipulated vestibular versus haptic cues by having subjects perform the task either in a posture rigidly fixed with respect to the space station or while free-floating in weightlessness. Subjects were driven passively at constant speed through the virtual 3D tunnels containing a single turn in the middle of a linear segment, either in pitch or in yaw, in increments of 12.5°. After exiting each tunnel, subjects were asked to report their perception of the turn's angular magnitude by adjusting, with a trackball, the angular bend in a rod symbolizing the outside view of the tunnel. We demonstrate that the strong asymmetry between downward and upward pitch turns observed on Earth showed an immediate and significant reduction when free-floating in weightlessness and a delayed reduction when the cosmonauts were firmly in contact with the floor of the station. These effects of weightlessness on the early processing stages (vestibular and optokinetic) that underlie the perception of self-motion did not stem from a change in alertness or any other uncontrolled factor aboard the ISS, as evidenced by the fact that weightlessness had no effect on the perception of yaw turns. That the effects on the perception of pitch may be partially overcome by haptic cues reflects the fusion of multisensory cues and top-down influences on visual perception.
Attention Perception & Psychophysics | 2013
Guillaume Thibault; Achille Pasqualotto; Manuel Vidal; Jacques Droulez; Alain Berthoz
Although a number of studies have been devoted to 2-D navigation, relatively little is known about how the brain encodes and recalls navigation in complex multifloored environments. Previous studies have proposed that humans preferentially memorize buildings by a set of horizontal 2-D representations. Yet this might stem from the fact that environments were also explored by floors. Here, we have investigated the effect of spatial learning on memory of a virtual multifloored building. Two groups of 28 participants watched a computer movie that showed either a route along floors one at a time or travel between floors by simulated lifts, consisting in both cases of a 2-D trajectory in the vertical plane. To test recognition, the participants viewed a camera movement that either replicated a segment of the learning route (familiar segment) or did not (novel segment—i.e., shortcuts). Overall, floor recognition was not reliably superior to column recognition, but learning along a floor route produced a better spatial memory performance than did learning along a column route. Moreover, the participants processed familiar segments more accurately than novel ones, not only after floor learning, but crucially, also after column learning, suggesting a key role of the observation mode on the exploitation of spatial memory.
Attention Perception & Psychophysics | 2006
Manuel Vidal; Michel-Ange Amorim; Joseph McIntyre; Alain Berthoz
Terrestrial gravity restricts human locomotion to surfaces in which turns involve rotations around the body axis. Because observers are usually upright, one might expect the effects of gravity to induce differences in the processing of vertical versus horizontal turns. Subjects observed visual scenes of bending tunnels, either statically or dynamically, as if they were moving passively through the visual scene, and were then asked to reproduce the turn deviation of the tunnel with a trackball. In order to disentangle inertia-related (earth-centered) from vision-related (body-centered) factors, the subjects were either upright or lying on their right side during the observations. Furthermore, the availability of continuous optic flow, geometrical cues, and eye movements was manipulated in three experiments. The results allowed us to characterize the factors' contributions as follows. Forward turns (pitch down) with all cues were largely overestimated, as compared with backward turns (pitch up). First, eye movements, known to be irregular for vertical stimulation, were largely responsible for this asymmetry. Second, geometry-based estimations are, to some extent, asymmetrical. Third, a cognitive effect corresponding to the evaluation of navigability for upward and downward turns was found (i.e., top-down influences, such as the often-reported fear of falling), which tended to increase the estimation of turns in the direction of gravity.
IEEE Transactions on Human-Machine Systems | 2013
Alain Berthoz; Willem Bles; Hh Bülthoff; B.J. Correia Grácio; Philippus Feenstra; Nicolas Filliard; R. Hühne; Andras Kemeny; Michael Mayrhofer; M. Mulder; Hans-Günther Nusseck; P Pretto; Gilles Reymond; Richard Schlüsselberger; Johann Schwandtner; Harald Teufel; Benjamin Vailleau; M. M. van Paassen; Manuel Vidal; M. Wentink
Advanced driving simulators aim at rendering the motion of a vehicle with maximum fidelity, which requires increased mechanical travel, size, and cost of the system. Motion cueing algorithms reduce the motion envelope by taking advantage of limitations in human motion perception, and the most commonly employed method is simply to scale down the physical motion. However, little is known about the effects of motion scaling on motion perception and on actual driving performance. This paper presents the results of a European collaborative project that explored different motion scale factors in a slalom driving task. Three state-of-the-art simulator systems were used, each capable of generating displacements of several meters. The results of four comparable driving experiments, obtained with a total of 65 participants, indicate a preference for motion scale factors below 1, within a wide range of acceptable values (0.4-0.75). Very reduced or absent motion cues significantly degrade driving performance. Applications of this research are discussed for the design of motion systems and cueing algorithms for driving simulation.
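The "scale down the physical motion" strategy mentioned above can be illustrated with a minimal cueing sketch: the vehicle acceleration is attenuated by a scale factor (the paper reports preferred values of roughly 0.4-0.75) and, in typical cueing schemes, high-pass filtered so the platform washes out toward its neutral position. This is a generic illustration, not the algorithm used on the simulators in the study; the function name, cutoff frequency, and time step are assumptions.

import numpy as np

def scaled_washout(vehicle_accel, scale=0.5, cutoff_hz=0.1, dt=0.01):
    """Scale the vehicle acceleration, then apply a first-order high-pass
    ("washout") filter so that only transient accelerations are reproduced
    and the platform drifts back toward neutral. Illustrative only; real
    motion cueing algorithms add tilt coordination and multi-axis limits.
    """
    tau = 1.0 / (2.0 * np.pi * cutoff_hz)
    a = tau / (tau + dt)                       # discrete first-order high-pass coefficient
    x = scale * np.asarray(vehicle_accel, dtype=float)
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = a * (y[n - 1] + x[n] - x[n - 1])
    return y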
Experimental Psychology | 2011
Nicolas Poirel; Manuel Vidal; Arlette Pineau; Céline Lanoë; Gaëlle Leroux; Amélie Lubin; Marie-Renée Turbelin; Alain Berthoz; Olivier Houdé
This study investigated the influence of egocentric and allocentric viewpoints on a length-comparison task in children and adults. A total of 100 participants ranging in age from 5 years to adulthood were presented with virtual scenes representing a park landscape with two paths, one straight and one serpentine. Scenes were presented from either an egocentric or an allocentric viewpoint. Results showed that when the two paths had the same length, participants always overestimated the length of the straight path in allocentric trials, whereas egocentric trials showed a developmental shift from systematic overestimation of the straight path's length in children to underestimation in adults. We discuss these findings in terms of the influence of both bias-inhibition processes and school learning.
Experimental Brain Research | 2009
Matthieu Lafon; Manuel Vidal; Alain Berthoz
Spatial cognition studies have described two main cognitive strategies involved in the memorization of traveled paths in human navigation. One of these strategies uses an action-based (egocentric) memory of the traveled route, which involves kinesthetic memory, optic flow, and episodic memory, whereas the other privileges a survey memory of cartographic type (allocentric). Most studies have dealt with these two strategies separately, but none has tried to show the interaction between them, in spite of the fact that we commonly use a map to imagine our journey and then proceed using egocentric navigation. An interesting question is therefore: how does prior allocentric knowledge of the environment affect the egocentric, purely kinesthetic processes involved in human navigation? We designed an experiment in which blindfolded subjects first had to walk and memorize a path with kinesthetic cues only. They had previously been shown a map of the path, which was either correct or distorted (uniformly shrunk or enlarged). The latter transformations were studied in order to observe what influence distorted prior knowledge could have on spatial mechanisms. After having completed the first learning walk along the path, they had to perform several spatial tasks during the testing phase: (1) pointing toward the origin, (2) pointing to specific points encountered along the path, (3) a free locomotor reproduction of the path, and (4) a drawing of the memorized path. The results showed that prior cartographic knowledge influenced the paths drawn and the spatial inference capacity, whereas neither locomotor reproduction nor spatial updating was disturbed. Our results strongly support the notion that (1) there are two independent neural bases underlying these mechanisms: a map-like representation allowing allocentric spatial inferences, and a kinesthetic memory of self-motion in space; and (2) a common use of, or switching between, these two strategies is possible. Nevertheless, allocentric representations can emerge from the experience of kinesthetic cues alone.
Experimental Brain Research | 2009
Manuel Vidal; Alexandre Lehmann; Hh Bülthoff
Mental rotation is the capacity to predict the outcome of spatial relationships after a change in viewpoint. These changes arise either from the rotation of the test object array or from the rotation of the observer. Previous studies showed that the cognitive cost of mental rotations is reduced when viewpoint changes result from the observer’s motion, which was explained by the spatial updating mechanism involved during self-motion. However, little is known about how various sensory cues available might contribute to the updating performance. We used a Virtual Reality setup in a series of experiments to investigate table-top mental rotations under different combinations of modalities among vision, body and audition. We found that mental rotation performance gradually improved when adding sensory cues to the moving observer (from None to Body or Vision and then to Body & Audition or Body & Vision), but that the processing time drops to the same level for any of the sensory contexts. These results are discussed in terms of an additive contribution when sensory modalities are co-activated to the spatial updating mechanism involved during self-motion. Interestingly, this multisensory approach can account for different findings reported in the literature.
Experimental Brain Research | 2010
Manuel Vidal; Hh Bülthoff
Many previous studies have focused on how humans combine inputs provided by different modalities about the same physical property. However, it is not yet clear how the different senses that provide information about our own movements combine to produce a motion percept. We designed an experiment to investigate how upright turns are stored, and particularly how vestibular and visual cues interact at the different stages of the memorization process (encoding/recalling). Subjects experienced passive yaw turns stimulated in the vestibular modality (whole-body rotations) and/or in the visual modality (limited-lifetime star-field rotations), with the visual scene turning 1.5 times faster when the two were combined (an unnoticed conflict). They were then asked to actively reproduce the rotation displacement in the opposite direction, with body cues only, visual cues only, or both cues with either the same or a different gain factor. First, we found that in none of the conditions did the reproduced motion dynamics follow those of the presentation phase (Gaussian angular velocity profiles). Second, the unimodal recalling of turns was largely uninfluenced by the other sensory cue it could be combined with during encoding. Therefore, turns in each modality, visual and vestibular, are stored independently. Third, when the intersensory gain was preserved, the bimodal reproduction was more precise (reduced variance) and lay between the two unimodal reproductions. This suggests that when both visual and vestibular cues are available, they combine to improve the reproduction. Fourth, when the intersensory gain was modified, the bimodal reproduction resulted in a substantially larger change for the body rotations than for the visual scene rotations, which indicates that vision prevails for this rotation displacement task when a matching problem is introduced.
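The "reduced variance" finding for bimodal reproduction is the qualitative signature predicted by standard maximum-likelihood cue combination, in which each cue is weighted by its reliability. The snippet below states that textbook prediction for comparison; the abstract does not claim its data were fit with this exact model, and the function and variable names are illustrative.

def ml_combined_estimate(mu_visual, var_visual, mu_body, var_body):
    """Textbook maximum-likelihood combination of two independent cues.

    The combined estimate lies between the two unimodal estimates, weighted
    toward the more reliable (lower-variance) cue, and its variance is never
    larger than either unimodal variance.
    """
    w_visual = var_body / (var_visual + var_body)   # more reliable cue gets more weight
    w_body = 1.0 - w_visual
    mu = w_visual * mu_visual + w_body * mu_body
    var = (var_visual * var_body) / (var_visual + var_body)
    return mu, var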