
Publication


Featured research published by Martial Mermillod.


Behavioral and Brain Sciences | 2010

The Simulation of Smiles (SIMS) model: Embodied simulation and the meaning of facial expression.

Paula M. Niedenthal; Martial Mermillod; Marcus Maringer; Ursula Hess

Recent application of theories of embodied or grounded cognition to the recognition and interpretation of facial expression of emotion has led to an explosion of research in psychology and the neurosciences. However, despite the accelerating number of reported findings, it remains unclear how the many component processes of emotion and their neural mechanisms actually support embodied simulation. Equally unclear is what triggers the use of embodied simulation versus perceptual or conceptual strategies in determining meaning. The present article integrates behavioral research from social psychology with recent research in neurosciences in order to provide coherence to the extant and future research on this topic. The roles of several of the brain's reward systems, the amygdala, somatosensory cortices, and motor centers are examined. These are then linked to behavioral and brain research on facial mimicry and eye gaze. Articulation of the mediators and moderators of facial mimicry and gaze is particularly useful in guiding interpretation of relevant findings from neurosciences. Finally, a model of the processing of the smile, the most complex of the facial expressions, is presented as a means to illustrate how to advance the application of theories of embodied cognition in the study of facial expression of emotion.


Journal of Experimental Psychology: General | 2004

The Role of Bottom-Up Processing in Perceptual Categorization by 3- to 4-Month-Old Infants: Simulations and Data

Robert M. French; Denis Mareschal; Martial Mermillod; Paul C. Quinn

Disentangling bottom-up and top-down processing in adult category learning is notoriously difficult. Studying category learning in infancy provides a simple way of exploring category learning while minimizing the contribution of top-down information. Three- to 4-month-old infants presented with cat or dog images will form a perceptual category representation for cat that excludes dogs and for dog that includes cats. The authors argue that an inclusion relationship in the distribution of features in the images explains the asymmetry. Using computational modeling and behavioral testing, the authors show that the asymmetry can be reversed or removed by using stimulus images that reverse or remove the inclusion relationship. The findings suggest that categorization of nonhuman animal images by young infants is essentially a bottom-up process.


Emotion | 2010

The effect of expectancy of a threatening event on time perception in human adults.

Sylvie Droit-Volet; Martial Mermillod; Raquel Cocenas-Silva; Sandrine Gil

Two experiments were conducted to investigate the effect of a threatening stimulus in human adults in a temporal bisection task. In Experiment 1, for two anchor duration conditions (400/800 vs. 800/1600 ms), the participants completed trials in which the probe duration was followed by an aversive stimulus or a nonaversive stimulus. The results showed that the duration was judged longer when the participants expected an aversive rather than a nonaversive stimulus. In Experiment 2, the effect of the temporal localization of the aversive stimulus was also tested, with the aversive stimulus being presented at the beginning or at the end of the probe duration. The results revealed a temporal overestimation in each condition compared to the trials in which no aversive stimulus was presented. Furthermore, the temporal overestimation was greater when the expectation for the forthcoming threatening stimulus was longer. This temporal overestimation is explained in terms of a speeding-up of the neural timing system in response to the increase in the arousal level produced by the expectation of a threatening stimulus.


PLOS ONE | 2009

Emotional modulation of attention: fear increases but disgust reduces the attentional blink

Nicolas Vermeulen; Jimmy Godefroid; Martial Mermillod

Background: It is well known that facial expressions represent important social cues. In humans expressing facial emotion, fear may be configured to maximize sensory exposure (e.g., it increases visual input) whereas disgust can reduce sensory exposure (e.g., it decreases visual input). To investigate whether such effects also extend to the attentional system, we used the “attentional blink” (AB) paradigm. Many studies have documented that the second target (T2) of a pair is typically missed when presented within a time window of about 200–500 ms from the first to-be-detected target (T1; i.e., the AB effect). It has recently been proposed that the AB effect depends on the efficiency of a gating system which facilitates the entrance of relevant input into working memory, while inhibiting irrelevant input. Following the inhibitory response on post-T1 distractors, prolonged inhibition of the subsequent T2 is observed. In the present study, we hypothesized that processing facial expressions of emotion would influence this attentional gating: fearful faces would increase, but disgust faces would decrease, inhibition of the second target.

Methodology/Principal Findings: We showed that processing fearful versus disgust faces has different effects on these attentional processes. We found that processing fear faces impaired the detection of T2 to a greater extent than did the processing of disgust faces. This finding implies emotion-specific modulation of attention.

Conclusions/Significance: Based on the recent literature on attention, our finding suggests that processing fear-related stimuli exerts greater inhibitory responses on distractors relative to processing disgust-related stimuli. This finding is of particular interest for researchers examining the influence of emotional processing on attention and memory in both clinical and normal populations. For example, future research could extend the current study to examine whether inhibitory processes invoked by fear-related stimuli may be the mechanism underlying the enhanced learning of fear-related stimuli.


Brain and Cognition | 2006

Effect of temporal constraints on hemispheric asymmetries during spatial frequency processing

Carole Peyrin; Martial Mermillod; Sylvie Chokron; Christian Marendaz

Studies on functional hemispheric asymmetries have suggested that the right vs. left hemisphere should be predominantly involved in low vs. high spatial frequency (SF) analysis, respectively. By manipulating the exposure duration of filtered natural scene images, we examined whether the temporal characteristics of SF analysis (i.e., the temporal precedence of low over high spatial frequencies) may interfere with hemispheric specialization. Results showed the classical hemispheric specialization pattern for brief exposure durations and a trend toward a right hemisphere advantage irrespective of the SF content for longer exposure durations. The present study suggests that the hemispheric specialization pattern for visual information processing should be considered as a dynamic system, wherein the superiority of one hemisphere over the other could change according to the level of temporal constraints: the higher the temporal constraints of the task, the more the hemispheres are specialized in SF processing.


Psychological Science | 2010

Are Coarse Scales Sufficient for Fast Detection of Visual Threat?

Martial Mermillod; Sylvie Droit-Volet; Damien Devaux; Alexandre Schaefer; Nicolas Vermeulen

It has recently been suggested that low-spatial-frequency information would provide rapid visual cues to the amygdala for basic but ultrarapid behavioral responses to dangerous stimuli. The present behavioral study investigated the role of different spatial-frequency channels in visually detecting dangerous stimuli belonging to living or nonliving categories. Subjects were engaged in a visual detection task involving dangerous stimuli, and subjects’ behavioral responses were assessed in association with their fear expectations (induced by an aversive 90-dB white noise). Our results showed that, despite its crudeness, low-spatial-frequency information could constitute a sufficient signal for fast recognition of visual danger in a context of fear expectation. In addition, we found that this effect tended to be specific for living entities. These results were obtained despite a strong perceptual bias toward faster recognition of high-spatial-frequency stimuli under supraliminal perception durations.


Cognition | 2009

Neural computation as a tool to differentiate perceptual from emotional processes: the case of anger superiority effect

Martial Mermillod; Nicolas Vermeulen; Daniel Lundqvist; Paula M. Niedenthal

Research findings in social and cognitive psychology imply that it is easier to detect angry faces than happy faces in a crowd of neutral faces [Hansen, C. H., & Hansen, R. D. (1988). Finding the face in the crowd - An anger superiority effect. Journal of Personality and Social Psychology, 54(6), 917-924]. This phenomenon has been held to have evolved over phylogenetic development because it was adaptive to quickly and accurately detect a potential threat in the environment. However, across recent studies, a controversy has emerged about the underlying perceptual versus emotional factors responsible for this so-called anger superiority effect [Juth, P., Lundqvist, D., Karlsson, A., & Ohman, A. (2005). Looking for foes and friends: Perceptual and emotional factors when finding a face in the crowd. Emotion, 5(4), 379-395; Purcell, D. G., Stewart, A. L., & Skov, R. B. (1996). It takes a confounded face to pop out of a crowd. Perception, 25(9), 1091-1108]. To tease apart emotional and perceptual processes, we used neural network analyses of human faces in two different simulations. Results show that a perceptual bias is probably acting against faster and more accurate identification of angry faces compared to happy faces at a purely perceptual level. We suggest that a parsimonious hypothesis related to the simple perceptual properties of the stimuli might explain these behavioral results without reference to evolutionary processes. We discuss the importance of statistical or connectionist analysis for empirical studies that seek to isolate perceptual from emotional factors, but also learned vs. innate factors, in the processing of facial expression of emotion.


Neurocomputing | 2010

Coarse scales are sufficient for efficient categorization of emotional facial expressions: Evidence from neural computation

Martial Mermillod; Patrick Bonin; Laurie Mondillon; David Alleysson; Nicolas Vermeulen

The human perceptual system performs rapid processing within the early visual system: low spatial frequency information is processed rapidly through magnocellular layers, whereas the parvocellular layers process all the spatial frequencies more slowly. The purpose of the present paper is to test the usefulness of low spatial frequency (LSF) information compared to high spatial frequency (HSF) and broad spatial frequency (BSF) visual stimuli in a classification task of emotional facial expressions (EFE) by artificial neural networks. The connectionist modeling results show that the LSF information provided by the frequency domain is sufficient for a distributed neural network to correctly classify EFE, even when all the spatial information relating to these images is discarded. These results suggest that the HSF signal, which is also present in BSF faces, acts as a source of noisy information for classification tasks in an artificial neural system.
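Several of the studies above rely on splitting stimuli into LSF and HSF components; in practice this is typically done with Gaussian filters applied to 2D face images in the Fourier domain. As a minimal sketch of the underlying idea only (not the authors' actual pipeline), the decomposition can be illustrated in pure Python on a 1D "luminance profile" using a moving-average low-pass filter, with the HSF component defined as the residual:

```python
def low_pass(signal, window=3):
    """Crude low-pass filter: moving average with edge clamping.
    Approximates the low-spatial-frequency (LSF) component."""
    n = len(signal)
    half = window // 2
    out = []
    for i in range(n):
        lo = max(0, i - half)
        hi = min(n, i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def split_frequencies(signal, window=3):
    """Split a signal into LSF (smooth trend) and HSF (fast residual).
    By construction, LSF + HSF reconstructs the original (BSF) signal."""
    lsf = low_pass(signal, window)
    hsf = [s - l for s, l in zip(signal, lsf)]
    return lsf, hsf

# Toy 1D luminance profile: a slow trend plus fast alternation.
signal = [10, 12, 9, 13, 8, 14, 7, 15]
lsf, hsf = split_frequencies(signal)
```

The fast alternation lands almost entirely in the HSF residual, while the LSF component keeps only the smooth trend, which is the sense in which LSF stimuli are "coarse but sufficient" carriers of information in these experiments.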


Cognition | 2009

Unintended embodiment of concepts into percepts: Sensory activation boosts attention for same-modality concepts in the attentional blink paradigm

Nicolas Vermeulen; Martial Mermillod; Jimmy Godefroid; Olivier Corneille

This study shows that sensory priming facilitates reports of same-modality concepts in an attentional blink paradigm. Participants had to detect and report two target words (T1 and T2) presented for 53 ms each among a series of nonword distractors at a frequency of up to 19 items per second. The SOA between target words was set to 53 ms or 213 ms, with reduced attention expected for T2 under the longer SOA (attentional blink) and for T1 under the shorter SOA (lag-1 sparing). These effects were found but were reduced when the sensory modality of the concepts matched that of a sensory stimulation occurring prior to the detection trial. Hence, sensory activation increased reports of same-modality concepts. This finding reveals that grounded cognition effects (1) are involved in conceptual processing as soon as a word has reached the point of lexical identification and (2) occur independent of intentional access to sensory properties of concepts.


Connection Science | 2009

The importance of low spatial frequency information for recognising fearful facial expressions

Martial Mermillod; Patrik Vuilleumier; Carole Peyrin; David Alleysson; Christian Marendaz

A recent brain imaging study (Vuilleumier, Armony, Driver and Dolan 2003, Nature Neuroscience, 6, 624–631) has shown that amygdala responses to fearful expressions are preferentially driven by intact or low spatial frequency (LSF) images of faces, rather than by high spatial frequency (HSF) images. These results suggest that LSF components processed rapidly via magnocellular pathways within the visual system might be very efficiently conveyed to the amygdala for the rapid recognition of fearful expressions, perhaps via a subcortical pathway that activates the pulvinar and superior colliculus, but which bypasses any finer visual analysis of HSF cues in the striate and temporal extrastriate cortex. The purpose of this paper is to analyse the statistical properties of LSF compared with HSF and intact faces. The statistical analysis shows that the LSF components in faces, which are typically extracted rapidly by the visual system, provide a better source of information than HSF components for the correct categorisation of fearful expressions in faces. These results support the idea that visual pathways from the magnocellular visual neurons might be optimal, at a computational level, for the rapid classification of fearful emotional expressions in human faces.

Collaboration

Martial Mermillod's top co-authors:

Nicolas Vermeulen (Université catholique de Louvain)
Brice Beffara (Centre national de la recherche scientifique)
Laurie Mondillon (Centre national de la recherche scientifique)
Nathalie Guyader (Centre national de la recherche scientifique)
Anamitra Basu (Indian Institute of Technology Bhubaneswar)
Alan Chauvin (Centre national de la recherche scientifique)