Corentin Massot
McGill University
Publications
Featured research published by Corentin Massot.
Experimental Brain Research | 2011
Kathleen E. Cullen; Jessica X. Brooks; Mohsen Jamali; Jerome Carriot; Corentin Massot
In everyday life, vestibular sensors are activated by both self-generated and externally applied head movements. The ability to distinguish inputs that are a consequence of our own actions (i.e., active motion) from those that result from changes in the external world (i.e., passive or unexpected motion) is essential for perceptual stability and accurate motor control. Recent work has made progress toward understanding how the brain distinguishes between these two kinds of sensory inputs. We have performed a series of experiments in which single-unit recordings were made from vestibular afferents and central neurons in alert macaque monkeys during rotation and translation. Vestibular afferents showed no differences in firing variability or sensitivity during active movements when compared to passive movements. In contrast, the analyses of neuronal firing rates revealed that neurons at the first central stage of vestibular processing (i.e., in the vestibular nuclei) were effectively less sensitive to active motion. Notably, however, this ability to distinguish between active and passive motion was not a general feature of early central processing, but rather was a characteristic of a distinct group of neurons known to contribute to postural control and spatial orientation. Our most recent studies have addressed how vestibular and proprioceptive inputs are integrated in the vestibular cerebellum, a region likely to be involved in generating an internal model of self-motion. We propose that this multimodal integration within the vestibular cerebellum is required for eliminating self-generated vestibular information from the subsequent computation of orientation and posture control at the first central stage of processing.
Journal of Neurophysiology | 2011
Corentin Massot; Maurice J. Chacron; Kathleen E. Cullen
Understanding how sensory neurons transmit information about relevant stimuli remains a major goal in neuroscience. Of particular relevance are the roles of neural variability and spike timing in neural coding. Peripheral vestibular afferents display differential variability that is correlated with the importance of spike timing; regular afferents display little variability and use a timing code to transmit information about sensory input. Irregular afferents, conversely, display greater variability and instead use a rate code. We studied how central neurons within the vestibular nuclei integrate information from both afferent classes by recording from a group of neurons termed vestibular-only (VO) neurons, which are known to contribute to vestibulospinal reflexes and to project to higher-order centers. We found that, although individual central neurons had sensitivities that were greater than or equal to those of individual afferents, they transmitted less information. In addition, their velocity detection thresholds were significantly greater than those of individual afferents. This is because VO neurons display greater variability, which is detrimental to information transmission and signal detection. Combining activities from multiple VO neurons increased information transmission. However, the information rates were still much lower than those of equivalent afferent populations. Furthermore, combining responses from multiple VO neurons led to lower velocity detection thresholds approaching those measured from behavior (∼2.5 vs. 0.5–1°/s). Our results suggest that the detailed time course of vestibular stimuli encoded by afferents is not transmitted by VO neurons. Instead, they suggest that higher vestibular pathways must integrate information from central vestibular neuron populations to give rise to behaviorally observed detection thresholds.
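The pooling argument above (averaging independent noisy rate responses lowers the velocity detection threshold roughly as 1/√N) can be illustrated with a minimal simulation. The gain and noise values below are illustrative assumptions, not parameters fitted to the recorded neurons:

```python
import numpy as np

rng = np.random.default_rng(0)

def velocity_threshold(n_neurons, gain=2.0, noise_sd=5.0, n_trials=20000):
    """Velocity (deg/s) at which an observer averaging n_neurons
    independent noisy rate responses (rate = gain * v + Gaussian noise)
    reaches d-prime = 1, plus a Monte-Carlo check of that prediction."""
    v_pred = noise_sd / (gain * np.sqrt(n_neurons))  # analytic threshold
    pooled_noise = rng.normal(0.0, noise_sd, (n_trials, n_neurons)).mean(axis=1)
    pooled_signal = gain * v_pred + rng.normal(0.0, noise_sd, (n_trials, n_neurons)).mean(axis=1)
    d_prime = (pooled_signal.mean() - pooled_noise.mean()) / pooled_noise.std()
    return v_pred, d_prime

v1, _ = velocity_threshold(1)    # single neuron
v25, _ = velocity_threshold(25)  # population of 25 neurons
print(f"threshold, 1 neuron: {v1:.2f} deg/s; 25 neurons: {v25:.2f} deg/s")
# the 25-neuron threshold is 5x lower, the expected 1/sqrt(N) improvement
```

With these illustrative numbers the single-neuron threshold is 2.5 deg/s and the 25-neuron threshold is 0.5 deg/s; independent noise is the key assumption, since correlated variability pools less effectively.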
PLOS Biology | 2012
Corentin Massot; Adam D. Schneider; Maurice J. Chacron; Kathleen E. Cullen
Early vestibular processing in macaque monkeys is inherently nonlinear and is optimized to detect specific features of self-motion.
International Journal of Computer Vision | 2008
Corentin Massot; Jeanny Hérault
This paper addresses the question: at the level of cortical cells in primary visual area V1, is the available information sufficient to extract local perspective from texture? Starting from a model of complex cells in V1, we propose a biologically plausible algorithm for frequency analysis applied to the shape-from-texture problem. First, specific log-normal filters are designed to replace the classical Gabor filters, on account of their theoretical properties and biological plausibility. These filters are separable in frequency and orientation, and they sample the image spectrum more evenly, which makes them appropriate for any pattern-analysis technique. A method to estimate the local frequency in the image, which removes the need to choose a best local scale, is then designed. Based on this frequency-analysis model, a local decomposition of the image into patches leads to an estimate of the local frequency variation, which is used to recover shape from texture. From the analytical relation between local frequency and the geometrical parameters under perspective projection, it is possible to recover the orientation and shape of the original surface. The accuracy of the method is evaluated and discussed on different kinds of textures, both regular and irregular, with planar and curved surfaces, and also on natural scenes and psychophysical stimuli. It compares favorably to the best existing methods and, in addition, has a low computational cost. The biological plausibility of the model is finally discussed.
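A log-normal filter of the kind described above can be sketched directly in polar frequency coordinates: a log-Gaussian radial profile times a Gaussian angular profile, hence separable in frequency and orientation. The bandwidth parameters `sigma_f` and `sigma_theta` below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def log_normal_filter(shape, f0, theta0, sigma_f=0.55, sigma_theta=0.4):
    """Amplitude spectrum of a log-normal filter:
        A(f, theta) = exp(-ln(f/f0)^2 / (2 sigma_f^2))
                    * exp(-(theta - theta0)^2 / (2 sigma_theta^2))
    f0: peak radial frequency (cycles/pixel); theta0: orientation (rad)."""
    h, w = shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    f = np.hypot(fx, fy)
    theta = np.arctan2(fy, fx)
    radial = np.exp(-np.log(np.maximum(f, 1e-9) / f0) ** 2 / (2 * sigma_f ** 2))
    radial[f == 0] = 0.0                         # zero DC response
    dtheta = np.angle(np.exp(1j * (theta - theta0)))  # wrap to [-pi, pi]
    angular = np.exp(-dtheta ** 2 / (2 * sigma_theta ** 2))
    return radial * angular

def filter_image(img, f0, theta0):
    """Apply the filter in the Fourier domain; the magnitude of the
    (complex) result models a complex-cell energy response."""
    A = log_normal_filter(img.shape, f0, theta0)
    return np.fft.ifft2(np.fft.fft2(img) * A)
```

The single-lobe angular profile makes the output complex, analogous to a quadrature (analytic) response; a bank of such filters over several `f0` and `theta0` values tiles the spectrum.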
International Conference on Pattern Recognition | 2005
Zakia Hammal; Corentin Massot; Guillermo Bedoya; Alice Caplier
An efficient algorithm for iris segmentation, and its application to automatic, non-intrusive gaze tracking and vigilance estimation, is presented and discussed. A luminance-gradient technique is used to fit the irises in face images. A preprocessing stage that mimics the human retina makes the system robust to luminance variations and enhances contrast. The proposed algorithm is validated experimentally on three well-known test databases: the FERET, Yale, and Cohn-Kanade databases. Experimental results confirm the effectiveness and robustness of the proposed approach for gaze-direction and vigilance estimation.
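A luminance-gradient circle fit of the kind mentioned above can be sketched as a search over candidate circles for the strongest dark-to-bright (iris-to-sclera) radial transition. This is a generic gradient-maximization sketch, not the paper's exact procedure, and the candidate grids are supplied by the caller for illustration:

```python
import numpy as np

def fit_iris_circle(img, radii, centers):
    """Pick the circle whose outward radial luminance derivative,
    averaged along the circumference, is largest, i.e. the strongest
    dark-to-bright transition. img: 2-D grayscale array."""
    angles = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    best, best_score = None, -np.inf
    for cy, cx in centers:
        for r in radii:
            def ring_mean(rad):
                # mean luminance sampled along a circle of radius rad
                ys = np.clip(np.round(cy + rad * np.sin(angles)).astype(int), 0, img.shape[0] - 1)
                xs = np.clip(np.round(cx + rad * np.cos(angles)).astype(int), 0, img.shape[1] - 1)
                return img[ys, xs].mean()
            score = ring_mean(r + 1) - ring_mean(r - 1)  # radial derivative
            if score > best_score:
                best_score, best = score, (cy, cx, r)
    return best
```

On a synthetic image with a dark disk on a bright background, the search recovers the disk's center and radius; a retina-like contrast normalization would precede this step to stabilize the gradient under varying illumination.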
The Journal of Neuroscience | 2010
Marion R. Van Horn; Diana E. Mitchell; Corentin Massot; Kathleen E. Cullen
The ability to accurately control movement requires the computation of a precise motor command. However, the computations that take place within premotor pathways to determine the dynamics of movements are not understood. Here we studied the local processing that generates dynamic motor commands by simultaneously recording spikes and local field potentials (LFPs) in the network that commands saccades. We first compared the information encoded by LFPs and spikes recorded from individual premotor and motoneurons (saccadic burst neurons, omnipause neurons, and motoneurons) in monkeys. LFP responses consistent with net depolarizations occurred in association with bursts of spiking activity when saccades were made in a neuron's preferred direction. In contrast, when saccades were made in a neuron's nonpreferred direction, neurons ceased spiking and the associated LFP responses were consistent with net hyperpolarizations. Surprisingly, hyperpolarizing and depolarizing LFPs encoded movement dynamics with equal robustness and accuracy. Second, we compared spiking responses at one hierarchical level of processing to LFPs at the next stage. Latencies and spike-triggered averages of LFP responses were consistent with each neuron's place within this circuit. LFPs reflected relatively local events (<500 μm) and encoded important features not available from the spike train (i.e., the hyperpolarizing response). Notably, quantification of their time-varying profiles revealed that a precise balance of depolarization and hyperpolarization underlies the production of precise saccadic eye movement commands at both motor and premotor levels. Overall, simultaneous recordings of LFPs and spiking responses provide an effective means for evaluating the local computations that take place to produce accurate motor commands.
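The spike-triggered average of the LFP used above is a standard computation: average LFP segments aligned on each spike. A minimal sketch, assuming spike times in seconds and a single LFP channel (the window length is an illustrative choice):

```python
import numpy as np

def spike_triggered_average(lfp, spike_times, fs, window=(-0.05, 0.05)):
    """Average LFP segments around each spike.
    lfp: 1-D array sampled at fs Hz; spike_times: seconds.
    Returns (lags_s, sta): lag axis in seconds and the mean segment."""
    pre = int(round(window[0] * fs))
    post = int(round(window[1] * fs))
    lags = np.arange(pre, post)
    segs = []
    for t in spike_times:
        i = int(round(t * fs))
        if i + pre >= 0 and i + post <= len(lfp):  # skip edge spikes
            segs.append(lfp[i + pre : i + post])
    if not segs:
        raise ValueError("no spike falls fully inside the recording")
    return lags / fs, np.mean(segs, axis=0)
```

A consistent deflection in the STA at a short positive lag is the kind of evidence used to place a neuron upstream of the site generating the LFP.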
International Conference on Computer Vision | 2010
Zakia Hammal; Corentin Massot
An automatic system for facial expression recognition should be able to recognize multiple facial expressions on-line (i.e., "emotional segments") without interruption. The current paper proposes a new method for the automatic segmentation of "emotional segments" and the dynamic recognition of the corresponding facial expressions in video sequences. First, a new spatial filtering method based on log-normal filters is introduced for the analysis of the whole face, towards the automatic segmentation of the "emotional segments". Second, a similar filtering-based method is applied to the automatic and precise segmentation of transient facial features (such as nasal-root wrinkles and nasolabial furrows) and the estimation of their orientation. Finally, a dynamic and progressive fusion of the permanent and transient facial-feature deformations is performed inside each "emotional segment" for temporal recognition of the corresponding facial expression. When tested on automatic detection of "emotional segments" in 96 sequences from the MMI and Hammal-Caplier facial expression databases, the proposed method achieved an accuracy of 89%. Tested on 1655 images, the automatic detection of transient features achieved a mean precision of 70% with an error of 2.5 in the estimation of the corresponding orientation. Finally, compared to the original model for static facial expression classification, the introduction of transient features and temporal information increases the precision of facial expression classification by 12% and compares favorably to human observers' performance.
Advanced Concepts for Intelligent Vision Systems | 2005
Corentin Massot; Jeanny Hérault
How does the visual cortex extract perspective information from textured surfaces? To answer this question, we propose a biologically plausible algorithm based on a simplified model of visual processing. First, new log-normal filters are presented to replace the classical Gabor filters. In particular, these filters are separable in frequency and orientation, and this characteristic is used to derive a robust method for estimating the local mean frequency in the image. Based on this approach, a local decomposition of the image into patches, after a retinal pre-processing stage, leads to an estimate of the local frequency variation over the whole surface. The analytical relation between the local frequency and the geometrical parameters of the surface under perspective projection is derived, and finally makes it possible to solve the so-called shape-from-texture problem. The accuracy of the method is evaluated and discussed on different kinds of textures, both regular and irregular, and also on natural scenes.
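The local mean-frequency cue at the heart of this method can be sketched per patch. The paper derives the estimate from log-normal filter outputs; the simpler energy-weighted spectral mean below is an assumption standing in for that machinery, but it captures the same quantity:

```python
import numpy as np

def local_mean_frequency(patch):
    """Energy-weighted mean radial frequency (cycles/pixel) of an image
    patch, computed from its power spectrum. Under perspective projection
    the texture's local frequency increases with distance and slant, so
    mapping this estimate over patches yields the shape-from-texture cue."""
    h, w = patch.shape
    spec = np.abs(np.fft.fft2(patch - patch.mean())) ** 2  # remove DC first
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    f = np.hypot(fx, fy)
    return (f * spec).sum() / spec.sum()
```

On a pure grating the estimate recovers the grating's frequency exactly; on natural textures it tracks the dominant local scale, and its spatial gradient feeds the inverse-perspective equations.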
International Conference on Computer Vision Theory and Applications | 2016
Zakia Hammal; Corentin Massot
Journal of Vision | 2011
Corentin Massot