Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Pascal Mamassian is active.

Publication


Featured research published by Pascal Mamassian.


Seeing and Perceiving | 2010

Multisensory processing in review: from physiology to behaviour.

David Alais; Fiona N. Newell; Pascal Mamassian

Research in multisensory processes has exploded over the last decade. Tremendous advances have been made in a variety of fields from single-unit neural recordings and functional brain imaging through to behaviour, perception and cognition. These diverse approaches have highlighted how the senses work together to produce a coherent multimodal representation of the external world that enables us to function better by exploiting the redundancies and complementarities provided by multiple sensory modalities. With large numbers of new students and researchers being attracted to multisensory research, and the multi-disciplinary nature of the work, our aim in this review is to provide an overview of multisensory processing that includes all fields in a single review. Our intention is to provide a comprehensive source for those interested in learning about multisensory processes, covering a variety of sensory combinations and methodologies, and tracing the path from single-unit neurophysiology through to perception and cognitive functions such as attention and speech.


Experimental Brain Research | 1997

Prehension of objects oriented in three-dimensional space

Pascal Mamassian

When reaching for an object, the proximity of the object, its orientation, and shape should all be correctly estimated well before the hand arrives in contact with it. We were interested in the effects of the object’s orientation on manual prehension. Subjects were asked to reach for an object at one of several possible orientations. We found that the trajectory of the hand and its rotation and opening were significantly affected by the object’s orientation within the first half of the movement. We also detected a slight delay of the wrist relative to the forearm and a small bias of the orientation of the fingers’ tips toward the orientation of the table on which the object lay. Finally, the aperture of the hand was proportional to the physical size of the object, which shows that size constancy was achieved from the variation of the object’s orientation. Taken together, these results indicate that the three components of the movement – the transport, rotation, and opening of the hand – have access to a common visual representation of the object’s orientation.


Proceedings of the National Academy of Sciences of the United States of America | 2010

Flexible mechanisms underlie the evaluation of visual confidence

Simon Barthelmé; Pascal Mamassian

Visual processing is fraught with uncertainty: The visual system must attempt to estimate physical properties despite missing information and noisy mechanisms. Sometimes high visual uncertainty translates into lack of confidence in our visual perception: We are aware of not seeing well. The mechanism by which we achieve this awareness—how we assess our own visual uncertainty—is unknown, but its investigation is critical to our understanding of visual decision mechanisms. The simplest possibility is that the visual system relies on cues to uncertainty, stimulus features usually associated with visual uncertainty, like blurriness. Probabilistic models of the brain suggest a more sophisticated mechanism, in which visual uncertainty is explicitly represented as probability distributions. In two separate experiments, observers performed a visual discrimination task in which confidence could be determined by the cues available (contrast and crowding or eccentricity and masking) or by their actual performance, the latter requiring a more sophisticated mechanism than cue monitoring. Results show that observers’ confidence followed performance rather than cues, indicating that the mechanisms underlying the evaluation of visual confidence are relatively complex. This result supports probabilistic models, which imply the existence of sophisticated mechanisms for evaluating uncertainty.
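The contrast between cue-based and performance-based confidence can be illustrated with a toy signal-detection sketch (a hypothetical illustration under Gaussian assumptions, not the authors' model): a performance-based observer derives confidence from the posterior probability of being correct, while a cue-based observer reads confidence off a stimulus feature such as contrast, regardless of the actual trial-by-trial noise.

```python
import math
import random

def performance_confidence(x, sigma):
    """Posterior probability that sign(x) is the correct choice when
    the stimulus is +1 or -1 and x = stimulus + N(0, sigma)."""
    return 1.0 / (1.0 + math.exp(-2.0 * abs(x) / sigma**2))

def cue_confidence(contrast):
    """Heuristic: confidence read off a stimulus cue (here, contrast),
    blind to other noise sources such as crowding or masking."""
    return contrast

def simulate(sigma, contrast, n=20000, seed=0):
    rng = random.Random(seed)
    correct, conf_perf, conf_cue = 0, 0.0, 0.0
    for _ in range(n):
        s = rng.choice([-1.0, 1.0])
        x = s + rng.gauss(0.0, sigma)
        correct += (x > 0) == (s > 0)
        conf_perf += performance_confidence(x, sigma)
        conf_cue += cue_confidence(contrast)
    return correct / n, conf_perf / n, conf_cue / n

# Two conditions with identical contrast but different internal noise:
acc_lo, perf_lo, cue_lo = simulate(sigma=0.5, contrast=0.8)
acc_hi, perf_hi, cue_hi = simulate(sigma=1.5, contrast=0.8)
# The cue observer is equally confident in both conditions; the
# posterior observer's mean confidence tracks accuracy (calibration).
```

Note that for the posterior observer, mean confidence equals expected accuracy, which is the signature of confidence following performance rather than cues.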


Proceedings of the National Academy of Sciences of the United States of America | 2010

Prior knowledge of illumination for 3D perception in the human brain

Peggy Gerardin; Zoe Kourtzi; Pascal Mamassian

In perceiving 3D shape from ambiguous shading patterns, humans use the prior knowledge that the light is located above their head and slightly to the left. Although this observation has fascinated scientists and artists for a long time, the neural basis of this “light from above left” preference for the interpretation of 3D shape remains largely unexplored. Combining behavioral and functional MRI measurements coupled with multivoxel pattern analysis, we show that activations in early visual areas predict best the light source direction irrespective of the perceived shape, but activations in higher occipitotemporal and parietal areas predict better the perceived 3D shape irrespective of the light direction. These findings demonstrate that illumination is processed earlier than the representation of 3D shape in the visual system. In contrast to previous suggestions, we propose that prior knowledge about illumination is processed in a bottom-up manner and influences the interpretation of 3D structure at higher stages of processing.


PLOS Computational Biology | 2009

Evaluation of objective uncertainty in the visual system.

Simon Barthelmé; Pascal Mamassian

The role of sensory systems is to provide an organism with information about its environment. Because sensory information is noisy and insufficient to uniquely determine the environment, natural perceptual systems have to cope with systematic uncertainty. The extent of that uncertainty is often crucial to the organism: for instance, in judging the potential threat in a stimulus. Inducing uncertainty by using visual noise, we had human observers perform a task where they could improve their performance by choosing the less uncertain among pairs of visual stimuli. Results show that observers had access to a reliable measure of visual uncertainty in their decision-making, showing that subjective uncertainty in this case is connected to objective uncertainty. Based on a Bayesian model of the task, we discuss plausible computational schemes for that ability.
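A minimal sketch of the choice rule in such a task (my illustration, not the paper's model): given two noisy stimulus samples, an observer with access to a reliable internal measure of uncertainty should pick the stimulus whose estimated noise is lower.

```python
import random
import statistics

def estimated_noise(samples):
    """Internal uncertainty estimate: the sample standard deviation
    across stimulus values (a stand-in for the observer's estimate)."""
    return statistics.stdev(samples)

def choose_less_uncertain(stim_a, stim_b):
    """Pick the stimulus with the lower estimated noise."""
    return "A" if estimated_noise(stim_a) < estimated_noise(stim_b) else "B"

rng = random.Random(1)
low_noise = [rng.gauss(0.5, 0.05) for _ in range(100)]   # low visual noise
high_noise = [rng.gauss(0.5, 0.30) for _ in range(100)]  # high visual noise
choice = choose_less_uncertain(low_noise, high_noise)
```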


Vision Research | 2008

Ambiguities and conventions in the perception of visual art

Pascal Mamassian

Visual perception is ambiguous, and the visual arts play with these ambiguities. While perceptual ambiguities are resolved with prior constraints, artistic ambiguities are resolved by conventions. Is there a relationship between priors and conventions? This review surveys recent work related to ambiguities in composition, spatial scale, illumination and color, three-dimensional layout, shape, and movement. While most conventions seem to have their roots in perceptual constraints, those conventions that differ from priors may help us appreciate how the visual arts differ from everyday perception.


Journal of Physiology-paris | 2007

Bayesian modeling of dynamic motion integration

Anna Montagnini; Pascal Mamassian; Laurent Perrinet; Eric Castet; Guillaume S. Masson

The quality of the representation of an object's motion is limited by the noise in the sensory input as well as by an intrinsic ambiguity due to the spatial limitation of the visual motion analyzers (the aperture problem). Perceptual and oculomotor data demonstrate that motion processing of extended objects is initially dominated by local 1D motion cues, related to the object's edges and orthogonal to them, whereas 2D information, related to terminators (or edge-endings), progressively takes over and leads to the final correct representation of global motion. A Bayesian framework accounting for the sensory noise and general expectancies for object velocities has proven successful in explaining several experimental findings concerning early motion processing [Weiss, Y., Adelson, E., 1998. Slow and smooth: a Bayesian theory for the combination of local motion signals in human vision. MIT Technical Report, A.I. Memo 1624]. In particular, these models provide a qualitative account of the initial bias induced by the 1D motion cue. However, a complete functional model, encompassing the dynamical evolution of object motion perception and the integration of different motion cues, is still lacking. Here we outline several experimental observations concerning human smooth pursuit of moving objects, and more particularly the time course of its initiation phase, which reflects the ongoing motion integration process. In addition, we propose a recursive extension of the Bayesian model, motivated and constrained by our oculomotor data, to describe the dynamical integration of 1D and 2D motion information. We compare the model predictions for object motion tracking with human oculomotor recordings.
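The recursive combination of 1D and 2D motion cues can be sketched as precision-weighted Gaussian updating (a schematic illustration under Gaussian assumptions, not the authors' implementation): each cue pulls the velocity estimate in proportion to its reliability, and as the terminator cue's reliability grows over time the estimate shifts from the aperture-biased speed toward the true object speed.

```python
def bayes_update(mu, prec, obs, obs_prec):
    """Combine a Gaussian belief (mean mu, precision prec) with a
    Gaussian observation (obs, obs_prec); precision = 1 / variance."""
    new_prec = prec + obs_prec
    new_mu = (prec * mu + obs_prec * obs) / new_prec
    return new_mu, new_prec

# Toy setup: true object speed = 10 deg/s, but the 1D edge cue
# (aperture problem) signals only 5 deg/s along the edge normal;
# the 2D terminator cue signals the true 10 deg/s.
edge_obs, terminator_obs = 5.0, 10.0
mu, prec = 0.0, 1e-3  # broad prior favoring slow speeds

estimates = []
for t in range(10):
    # Edge reliability is constant; terminator reliability ramps up
    # as 2D features are progressively extracted over time.
    mu, prec = bayes_update(mu, prec, edge_obs, obs_prec=1.0)
    mu, prec = bayes_update(mu, prec, terminator_obs, obs_prec=0.3 * t)
    estimates.append(mu)
# Early estimates sit near the 1D (edge) speed; later estimates
# move toward the true 2D (terminator) speed.
```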


Nature Neuroscience | 2012

More is not always better: adaptive gain control explains dissociation between perception and action

Claudio Simoncini; Laurent Perrinet; Anna Montagnini; Pascal Mamassian; Guillaume S. Masson

Moving objects generate motion information at different scales, which are processed in the visual system with a bank of spatiotemporal frequency channels. It is not known how the brain pools this information to reconstruct object speed and whether this pooling is generic or adaptive; that is, dependent on the behavioral task. We used rich textured motion stimuli of varying bandwidths to decipher how the human visual motion system computes object speed in different behavioral contexts. We found that, although a simple visuomotor behavior such as short-latency ocular following responses takes advantage of the full distribution of motion signals, perceptual speed discrimination is impaired for stimuli with large bandwidths. Such opposite dependencies can be explained by an adaptive gain control mechanism in which the divisive normalization pool is adjusted to meet the different constraints of perception and action.
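The adaptive gain control idea can be sketched as divisive normalization with a task-dependent pool (a schematic, not the published model): each channel's response is divided by the summed activity of a normalization pool, and widening or narrowing that pool changes how strongly a broadband stimulus is attenuated.

```python
def normalize(responses, pool_weights, sigma=0.1):
    """Divisive normalization: each channel response is divided by a
    weighted sum of pool activity plus a semi-saturation constant."""
    pool = sum(w * r for w, r in zip(pool_weights, responses))
    return [r / (sigma + pool) for r in responses]

# Responses of 5 spatiotemporal-frequency channels to a broadband
# motion stimulus (arbitrary units).
responses = [0.2, 0.6, 1.0, 0.6, 0.2]

# Narrow pool (only the central channel contributes) vs wide pool
# (all channels contribute): the pool's extent determines how much
# the broadband input suppresses the peak response.
narrow = normalize(responses, pool_weights=[0, 0, 1, 0, 0])
wide = normalize(responses, pool_weights=[1, 1, 1, 1, 1])
```

Adjusting the pool weights per task is one way to capture the opposite bandwidth dependencies of ocular following and perceptual discrimination described above.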


Vision Research | 2008

What does the illusory-flash look like?

David McCormick; Pascal Mamassian

In the illusory-flash effect (Shams, L., Kamitani, Y., & Shimojo, S. (2000). Illusions. What you see is what you hear. Nature, 408, 788), one flash presented with two tones has a tendency to be seen as two flashes. Previous studies of this effect have been ill-equipped to establish whether this illusory-flash is the result of a genuine percept, or that of a shift in criterion. We addressed this issue by using a stimulus comprising two locations. This enabled contrast-threshold measurement by means of a location detection task. High-contrast white or black flashes were presented simultaneously to both locations, followed by threshold contrast flashes of the same contrast polarity at the two locations in half of the trials; observers reported whether or not the low-contrast flashes had been present. Irrelevant to the task, half of the trials contained one tone, the other half contained two tones. In this way, we were able to compute the change in sensitivity and shift in criterion between illusory and non-illusory trials. We observe both a decrease in visual sensitivity and a criterion shift in the illusory-flash conditions. In a second experiment, we were interested in determining whether this change in visual sensitivity gave rise to measurable visual attributes of the illusory-flash. If it has a contrast, it should interact with a spatio-temporally concurrent real flash. Using a similar two-location stimulus presentation, we found that under certain conditions, we were able to infer the polarity of the perceived illusory-flash. We conclude that the illusory-flash is indeed a perceptual effect with psychophysically assessable characteristics.
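The sensitivity/criterion decomposition used in this study is standard equal-variance signal detection theory; a minimal sketch (with made-up hit and false-alarm rates, not the paper's data) computes d' and criterion c from the two response rates.

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Signal detection theory: sensitivity d' and criterion c from
    hit and false-alarm rates (equal-variance Gaussian model)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical rates for one-tone vs two-tone trials: in this toy
# example the two-tone condition both lowers sensitivity and shifts
# the criterion, the pattern reported in the first experiment.
d1, c1 = sdt_measures(hit_rate=0.85, fa_rate=0.15)
d2, c2 = sdt_measures(hit_rate=0.75, fa_rate=0.30)
```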


Vision Research | 2008

Audiovisual integration of stimulus transients

Tobias S. Andersen; Pascal Mamassian

A change in sound intensity can facilitate luminance change detection. We found that this effect did not depend on whether sound intensity and luminance increased or decreased. In contrast, luminance identification was strongly influenced by the congruence of luminance and sound intensity change, leaving only unsigned stimulus transients as the basis for audiovisual integration. Facilitation of luminance detection occurred even with varying audiovisual stimulus onset asynchrony, and even when the sound lagged behind the luminance change by 75 ms, supporting the interpretation that perceptual integration, rather than a reduction of temporal uncertainty or effects of attention, caused the effect.

Collaboration


Dive into Pascal Mamassian's collaborations.

Top Co-Authors

Andrei Gorea, Paris Descartes University
Marina Zannoli, Paris Descartes University
Simon Barthelmé, Paris Descartes University
Adrien Chopin, Paris Descartes University