Richard Palluel-Germain
Centre national de la recherche scientifique
Publications
Featured research published by Richard Palluel-Germain.
Experimental Brain Research | 2012
François Osiurak; Nicolas Morgado; Richard Palluel-Germain
An interesting issue in human tool use is whether people spontaneously and implicitly intend to use an available tool to perform an action that would be impossible without it. Recent research indicates that targets presented just beyond arm's reach are perceived as closer when people intend to reach them with a tool rather than without it. An intriguing question is whether this effect also occurs when people are not explicitly instructed to use a tool to reach targets. To address this question, we asked participants to estimate distances beyond arm's reach in three conditions. Participants who passively held a long baton underestimated the distances compared to participants with no baton (Experiment 1). To examine whether this effect resulted merely from holding a baton, we asked participants to estimate distances while passively holding a shorter baton (Experiment 2). Holding this short baton did not influence distance perception. Our findings demonstrate that when people aim to perform a task beyond their action capabilities, they spontaneously and implicitly intend to use a tool if it substantially extends those capabilities. These findings offer insight into the link between the emergence of tool use, intention, and perception.
Psychonomic Bulletin & Review | 2013
Nicolas Morgado; Edouard Gentaz; Éric Guinet; François Osiurak; Richard Palluel-Germain
A large number of studies have shown that effort influences the visual perception of reaching distance. These studies have mainly focused on the effects of reach-relevant properties of the body and of the objects that people intend to reach. However, any influence of reach-relevant properties of the surrounding environment remains speculative. We investigated this topic by examining the role of obstacle width in distance perception. Participants estimated the straight-line distance to a cylinder located just behind a transparent barrier of varying width. Participants perceived the straight-line distance to the cylinder as longer when they intended to grasp the cylinder by reaching around a wide transparent barrier rather than around narrower ones. Interestingly, this effect may be due to the anticipated effort involved in reaching. Together, our results show that reach-relevant properties of the surrounding environment influence perceived distance, thereby supporting an embodied view of the visual perception of space.
Perception | 2011
Nicolas Morgado; Dominique Muller; Edouard Gentaz; Richard Palluel-Germain
Recent data show that psychosocial factors affect visual perception. We tested this hypothesis by investigating the relationship between affective closeness and the perception of the aperture between two people. People feel discomfort when they are near someone they are not affectively close to; we therefore predicted that they would be less likely to perceive that they can pass between two people they are not affectively close to. Participants had to imagine passing through the aperture between two life-size pictures of classmates. We found that the closer participants felt to their classmates, the more they felt able to pass between them. This provides the first evidence of a relationship between affective closeness and the perception of the aperture between two people, suggesting that psychosocial factors constrain space perception.
Neuroscience Letters | 2004
Richard Palluel-Germain; Frederic Boy; Jean-Pierre Orliaguet; Yann Coello
The aim of the present study was to show that planning and controlling the trajectory of a pointing movement is influenced not solely by physical constraints but also by visual constraints. Subjects were required to point towards targets located at 20°, 40°, 60°, and 80° of eccentricity. Movements were either constrained (i.e., two-dimensional) or unconstrained (i.e., three-dimensional). Furthermore, movements were carried out under either direct or remote visual control (via a video system). Results revealed that trajectories of constrained movements were nearly straight whatever the eccentricity of the target and the type of visual control. Unconstrained movements showed a different pattern: under direct vision, trajectory curvature increased with eccentricity, whereas under indirect vision, trajectories remained nearly straight whatever the eccentricity of the target. Thus, movements controlled through remote visual feedback appear to be planned in extrinsic space, as constrained movements are.
Applied Bionics and Biomechanics | 2015
Chloé Stoll; Richard Palluel-Germain; Vincent Fristot; Denis Pellerin; David Alleysson; Christian Graff
Background. Common manufactured depth sensors generate depth images of the kind humans normally obtain from their eyes and hands. Various designs converting spatial data into sound have recently been proposed, speculating on their applicability as sensory substitution devices (SSDs). Objective. We tested such a design as a travel aid in a navigation task. Methods. Our portable device (MeloSee) converted the 2D array of a depth image into melody in real time. Distance from the sensor was translated into sound intensity, stereo-modulated laterally, and pitch represented verticality. Twenty-one blindfolded young adults navigated along four different paths during two sessions separated by a one-week interval. In some instances, a dual task required them to recognize a temporal pattern applied through a tactile vibrator while they navigated. Results. Participants learnt to use the system both on new paths and on paths they had already navigated. Based on travel time and errors, performance improved from one week to the next. The dual task was achieved successfully, slightly affecting but not preventing effective navigation. Conclusions. The use of Kinect-type sensors to implement SSDs is promising, but it is restricted to indoor use and inefficient at very short range.
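The depth-to-melody mapping described in the abstract (distance → intensity, lateral position → stereo balance, vertical position → pitch) can be illustrated with a minimal sketch. Note this is an assumption-laden illustration of that general mapping, not MeloSee's actual implementation: the function name, frequency range, maximum sensor range, and per-row sinusoid scheme are all hypothetical.

```python
import numpy as np

def depth_to_stereo(depth, sr=22050, dur=0.5,
                    f_lo=220.0, f_hi=880.0, max_range=4.0):
    """Map a 2D depth image (rows x cols, metres) to a stereo buffer.

    Row index    -> pitch (top of image = high pitch)
    Column index -> stereo pan (left column = left channel)
    Depth        -> amplitude (nearer = louder)
    All parameters are illustrative, not taken from the MeloSee device.
    """
    rows, cols = depth.shape
    t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
    left = np.zeros_like(t)
    right = np.zeros_like(t)
    # One sine per image row; pitch descends with row index.
    freqs = np.geomspace(f_hi, f_lo, rows)
    for r in range(rows):
        for c in range(cols):
            d = depth[r, c]
            if not np.isfinite(d) or d <= 0 or d > max_range:
                continue  # out of sensor range: contributes silence
            amp = 1.0 - d / max_range           # nearer -> louder
            pan = c / max(cols - 1, 1)          # 0 = full left, 1 = full right
            tone = amp * np.sin(2 * np.pi * freqs[r] * t)
            left += (1.0 - pan) * tone
            right += pan * tone
    # Normalise to avoid clipping when many pixels contribute.
    peak = max(np.abs(left).max(), np.abs(right).max(), 1e-9)
    return np.stack([left, right]) / peak
```

A near object in the top-left of the depth image would thus produce a loud, high-pitched tone weighted toward the left channel, which is the kind of cue the navigation task relies on.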
Psychological Research / Psychologische Forschung | 2014
François Osiurak; Nicolas Morgado; Guillaume T. Vallet; Marion Drot; Richard Palluel-Germain
Two experiments examined whether people overestimate the benefits provided by tool use in motor tasks. Participants had to move different quantities of objects by hand (two at a time) or with a tool (four at a time). The tool was not within reach, so participants had to fetch it before moving the objects. In Experiment 1, the task was performed in a real and an imagined situation. In Experiment 2, participants had to decide, for each quantity, whether they preferred moving the objects by hand or with the tool. Our findings indicated that people perceive tool actions as less costly in terms of movement time than they actually are (Experiment 1) and decide to use a tool even when it objectively provides a smaller time benefit than using the hands (Experiment 2). Taken together, the data suggest that people overestimate the benefits provided by tool use.
Neuroscience Letters | 2011
Richard Palluel-Germain; Steven A. Jax; Laurel J. Buxbaum
During gain adaptation, participants must learn to adapt to novel visuo-motor mappings in which the movement amplitudes they produce do not match the visual feedback they receive. The aim of the present study was to investigate the neural substrates of gain adaptation by examining its possible disruption following left hemisphere stroke. Thirteen chronic left hemisphere stroke patients and five healthy right-handed control subjects completed three experimental phases involving reaching with the left hand, which was the less-affected hand in patients. First, participants reached without visual feedback to six different target locations (baseline phase). Next, in the adaptation phase, participants executed movements to one target under conditions in which the perceived movement distance was 70% of the produced movement distance. Last, in order to test the generalization of this new visuomotor mapping, participants made movements without visual feedback to untrained target locations (generalization phase). Significant between-patient differences were observed during adaptation. Lesion analyses indicated that these between-patient differences were predicted by the amount of damage to the supramarginal gyrus (Brodmann area 40). In addition, patients performed more poorly than controls in the generalization phase, suggesting that different processes are involved in adaptation and generalization periods.
Journal of Deaf Studies and Deaf Education | 2018
Chloé Stoll; Richard Palluel-Germain; Roberto Caldara; Junpeng Lao; Matthew W. G. Dye; Florent Aptel; Olivier Pascalis
Previous research has suggested that early deaf signers differ in face processing. Which aspects of face processing are changed, and the role that sign language may have played in that change, are however unclear. Here, we compared face categorization (human/non-human) and human face recognition performance in early profoundly deaf signers, hearing signers, and hearing non-signers. In the face categorization task, the three groups performed similarly in terms of both response time and accuracy. However, in the face recognition task, signers (both deaf and hearing) were slower than hearing non-signers to accurately recognize faces, but had a higher accuracy rate. We conclude that sign language experience, but not deafness, drives a speed-accuracy trade-off in face recognition (but not face categorization). This suggests strategic differences in the processing of facial identity for individuals who use a sign language, regardless of their hearing status.
Neuroscience Letters | 2006
Richard Palluel-Germain; Frederic Boy; Jean-Pierre Orliaguet; Yann Coello
The main objective of the present study was to show that the visual context can influence the trajectory formation of grasping movements. We asked participants to reach for and grasp a cylinder placed at three positions: -20°, 0°, and 20° of eccentricity with respect to the midsagittal axis. Grasping movements were performed under direct and indirect visual feedback conditions (the latter controlled through a vertical video display). Results revealed that for grasping movements directed toward objects located at -20° and 0°, path curvatures of the wrist, thumb, and index finger were significantly straighter in the indirect visual feedback condition. However, no significant difference in hand path curvature was observed when the movement was directed toward the object located at 20°. This suggests that grasping movements controlled through remote visual feedback tend to be planned in extrinsic space, and that the effect of the visual context on movement planning is not isotropic over the workspace.
Vision Research | 2018
Chloé Stoll; Richard Palluel-Germain; François-Xavier Gueriot; Christophe Chiquet; Olivier Pascalis; Florent Aptel
Studies have observed that deaf signers have a larger visual field (VF) than hearing non-signers, with a particularly large extension in the lower part of the VF. This increment could stem from early deafness or from extensive use of sign language, since the lower VF is critical for perceiving and understanding linguistic gestures in sign language communication. The aim of the present study was to explore the potential impact of sign language experience, without deafness, on VF sensitivity in its lower part. Using a standard Humphrey Visual Field Analyzer, we compared luminance sensitivity in the fovea and between 3° and 27° of visual eccentricity in the upper and lower VF between hearing users of French Sign Language and age-matched hearing non-signers. Sensitivity in the fovea and in the upper VF was similar in both groups. Hearing signers had, however, higher luminance sensitivity than non-signers in the lower VF, but only between 3° and 15°, the visual location for sign language perception. Sign language experience, even without deafness, may thus modulate VF sensitivity, though only at the very specific locations where signs are perceived.