Publications


Featured research published by Junpeng Lao.


Journal of Personality and Social Psychology | 2012

Control deprivation and styles of thinking

Xinyue Zhou; Lingnan He; Qing Yang; Junpeng Lao; Roy F. Baumeister

Westerners habitually think in analytical ways, whereas East Asians tend to favor holistic styles of thinking. We replicated this difference but showed that it disappeared after control deprivation (Experiment 1). Brief experiences of control deprivation, which stimulate increased desire for control, caused Chinese participants to shift toward Western-style analytical thinking in multiple ways (Experiments 2-5). Western Caucasian participants also increased their use of analytical thinking after control deprivation (Experiment 6). Manipulations that required Chinese participants to think in Western, analytical ways caused their sense of personal control to increase (Experiments 7-9). Prolonged experiences of control deprivation, which past work suggested foster an attitude more akin to learned helplessness than striving for control, had the opposite effect of causing Chinese participants to shift back toward a strongly holistic style of thinking (Experiments 10-12). Taken together, the results support the reality of cultural differences in cognition but also the cross-cultural similarity of using analytical thinking when seeking to enhance personal control.


Behavior Research Methods | 2017

iMap4: An open source toolbox for the statistical fixation mapping of eye movement data with linear mixed modeling

Junpeng Lao; Sebastien R Miellet; Cyril Pernet; Nayla Sokhn; Roberto Caldara

A major challenge in modern eye movement research is to statistically map where observers are looking, by isolating the significant differences between groups and conditions. As compared to the signals from contemporary neuroscience measures, such as magneto/electroencephalography and functional magnetic resonance imaging, eye movement data are sparser, with much larger variations in space across trials and participants. As a result, the implementation of a conventional linear modeling approach on two-dimensional fixation distributions often returns unstable estimations and underpowered results, leaving this statistical problem unresolved (Liversedge, Gilchrist, & Everling, 2011). Here, we present a new version of the iMap toolbox (Caldara & Miellet, 2011) that tackles this issue by implementing a statistical framework comparable to those developed in state-of-the-art neuroimaging data-processing toolboxes. iMap4 uses univariate, pixel-wise linear mixed models on smoothed fixation data, with the flexibility of coding for multiple between- and within-subjects comparisons and performing all possible linear contrasts for the fixed effects (main effects, interactions, etc.). Importantly, we also introduced novel nonparametric tests based on resampling, to assess statistical significance. Finally, we validated this approach by using both experimental and Monte Carlo simulation data. iMap4 is a freely available MATLAB open source toolbox for the statistical fixation mapping of eye movement data, with a user-friendly interface providing straightforward, easy-to-interpret statistical graphical outputs. iMap4 matches the standards of robust statistical neuroimaging methods and represents an important step in the data-driven processing of eye movement fixation data, an important field of vision sciences.
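iMap4 itself is a MATLAB toolbox, but the core idea described above — per-pixel statistics on smoothed fixation maps, with resampling-based significance testing — can be sketched in a much-reduced form. The Python code below is a hypothetical analogue, not the toolbox's API: instead of a full linear mixed model it computes per-pixel paired mean differences between two conditions and assesses them with a sign-flip permutation test, one common resampling scheme for paired designs. All function and variable names are illustrative.

```python
import numpy as np

def pixelwise_condition_test(maps_a, maps_b, n_perm=1000, seed=0):
    """Per-pixel two-condition comparison on smoothed fixation maps.

    maps_a, maps_b : arrays of shape (n_subjects, height, width),
    one smoothed fixation map per subject and condition.
    Returns the observed per-pixel mean difference and a per-pixel
    permutation p-value obtained by sign-flipping the paired differences.
    """
    rng = np.random.default_rng(seed)
    diffs = maps_a - maps_b                  # paired differences per subject
    observed = diffs.mean(axis=0)            # per-pixel effect estimate
    count = np.zeros_like(observed)
    for _ in range(n_perm):
        # Under the null, each subject's condition labels are exchangeable,
        # so the sign of each paired difference can be flipped at random.
        signs = rng.choice([-1.0, 1.0], size=(diffs.shape[0], 1, 1))
        perm = (signs * diffs).mean(axis=0)
        count += np.abs(perm) >= np.abs(observed)
    pvals = (count + 1) / (n_perm + 1)       # add-one correction
    return observed, pvals
```

A full mixed-model treatment as in iMap4 would additionally model subjects as random effects and support arbitrary fixed-effect contrasts; the sketch only captures the pixel-wise, resampling-based flavour of the approach.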


Plastic and Reconstructive Surgery | 2013

A Functional Magnetic Resonance Imaging Paradigm to Identify Distinct Cortical Areas of Facial Function: A Reliable Localizer

Marco Romeo; Luca Vizioli; Myrte Breukink; Kiomars Aganloo; Junpeng Lao; Stefano Cotrufo; Roberto Caldara; Stephen Morley

Background: Irreversible facial paralysis can be surgically treated by importing both a new neural and a new motor muscle supply. Various donor nerves can be used. If a nerve supply other than the facial nerve is used, the patient has to adapt to generate a smile. If branches of the fifth cranial nerve are used, the patient has to learn to clench teeth and smile. Currently, controversy exists regarding whether a patient develops a spontaneous smile if a nerve other than the facial nerve is used. The authors postulate that brain adaptation in facial palsy patients can occur because of neural plasticity. The authors aimed to determine whether functional magnetic resonance imaging could topographically differentiate activity between the facial nerve– and the trigeminal nerve–related cortical areas. Methods: A new paradigm of study using functional magnetic resonance imaging based on blood oxygen level–dependent signal activation was tested on 15 voluntary healthy subjects to find a sensitive localizer for teeth clenching and smiling. Subjects smiled to stimulate the facial nerve–related cortex, clenched their jaws to stimulate the trigeminal nerve–related cortex, and tapped their finger as a control condition. Results: Smiling and teeth clenching showed distinct and consistent areas of cortical activation. Trigeminal and facial motor cortex areas were found to be distinct areas with minimal overlapping. Conclusions: The authors successfully devised a functional magnetic resonance imaging paradigm effective for activating specific areas corresponding to teeth clenching and smiling. This will allow accurate mapping of cortical plasticity in facial reanimation patients. CLINICAL QUESTION/LEVEL OF EVIDENCE: Diagnostic, IV.


Scientific Reports | 2016

Mapping female bodily features of attractiveness

Jeanne Bovet; Junpeng Lao; Océane Bartholomée; Roberto Caldara; Michel Raymond

“Beauty is bought by judgment of the eye” (Shakespeare, Love’s Labour’s Lost), but the bodily features governing this critical biological choice are still debated. Eye movement studies have demonstrated that males sample coarse body regions expanding from the face, the breasts and the midriff, while making female attractiveness judgements with natural vision. However, the visual system ubiquitously extracts diagnostic extra-foveal information in natural conditions, thus the visual information actually used by men is still unknown. We thus used a parametric gaze-contingent design while males rated attractiveness of female front- and back-view bodies. Males used extra-foveal information when available. Critically, when bodily features were only visible through restricted apertures, fixations strongly shifted to the hips, to potentially extract hip-width and curvature, then the breasts and face. Our hierarchical mapping suggests that the visual system primarily uses hip information to compute the waist-to-hip ratio and the body mass index, the crucial factors in determining sexual attractiveness and mate selection.
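The abstract names two standard anthropometric quantities, the waist-to-hip ratio and the body mass index. For reference, their textbook definitions (not code from the study) are:

```python
def waist_to_hip_ratio(waist_cm, hip_cm):
    """Waist-to-hip ratio: waist circumference divided by hip circumference."""
    return waist_cm / hip_cm

def body_mass_index(weight_kg, height_m):
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2
```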


Journal of Vision | 2015

iMap 4: An Open Source Toolbox for the Statistical Fixation Mapping of Eye Movement Data with Linear Mixed Modeling

Junpeng Lao; Sebastien R Miellet; Cyril Pernet; Nayla Sokhn; Roberto Caldara

A major challenge in modern eye movement research is to statistically map where observers are looking and to isolate statistically significant differences between groups and conditions. Compared to the signals of contemporary neuroscience measures, such as M/EEG and fMRI, eye movement data are sparse, with much larger variations across trials and participants. As a result, the implementation of a conventional hierarchical linear model approach on two-dimensional fixation distributions often returns unstable estimations and underpowered results, leaving this statistical problem unresolved. Here, we tackled this issue by using the statistical framework implemented in diverse state-of-the-art neuroimaging data-processing toolboxes: Statistical Parametric Mapping (SPM), FieldTrip, and LIMO EEG. We first estimated the mean individual fixation maps per condition by using a trimmed mean to account for the sparseness and high variation of fixation data. We then applied a univariate, pixel-wise linear mixed model (LMM) on the smoothed fixation data, with each subject as a random effect, which offers the flexibility to code for multiple between- and within-subject comparisons. After this step, our approach makes it possible to perform all possible linear contrasts for the fixed effects (main effects, interactions, etc.). Importantly, we also introduced a novel spatial cluster test based on bootstrapping to assess the statistical significance of the linear contrasts. Finally, we validated this approach by using both experimental and computer-simulated data with a Monte Carlo approach. iMap 4 is a freely available MATLAB open source toolbox for the statistical fixation mapping of eye movement data, with a user-friendly interface providing straightforward, easy-to-interpret statistical graphical outputs, and matching the standards of robust statistical neuroimaging methods.
iMap 4 represents a major step in the processing of eye movement fixation data, paving the way to routine use of robust data-driven analyses in this important field of vision sciences. Meeting abstract presented at VSS 2015.


Journal of Deaf Studies and Deaf Education | 2018

Face Recognition is Shaped by the Use of Sign Language

Chloé Stoll; Richard Palluel-Germain; Roberto Caldara; Junpeng Lao; Matthew W. G. Dye; Florent Aptel; Olivier Pascalis

Previous research has suggested that early deaf signers differ in face processing. However, which aspects of face processing are changed, and the role that sign language may have played in that change, remain unclear. Here, we compared face categorization (human/non-human) and human face recognition performance in early profoundly deaf signers, hearing signers, and hearing non-signers. In the face categorization task, the three groups performed similarly in terms of both response time and accuracy. However, in the face recognition task, signers (both deaf and hearing) were slower than hearing non-signers to accurately recognize faces, but had a higher accuracy rate. We conclude that sign language experience, not deafness, drives a speed-accuracy trade-off in face recognition (but not in face categorization). This suggests strategic differences in the processing of facial identity for individuals who use a sign language, regardless of their hearing status.


Journal of Vision | 2017

The Facespan: the perceptual span for face recognition

Michael Papinutto; Junpeng Lao; Meike Ramon; Roberto Caldara; Sebastien R Miellet

In reading, the perceptual span is a well-established concept that refers to the amount of information that can be read in a single fixation. Surprisingly, despite extensive empirical interest in determining the perceptual strategies deployed to process faces and an ongoing debate regarding the factors or mechanism(s) underlying efficient face processing, the perceptual span for faces (the Facespan) remains undetermined. To address this issue, we applied the gaze-contingent Spotlight technique implemented in an old-new face recognition paradigm. This procedure allowed us to parametrically vary the amount of facial information available at a fixated location in order to determine the minimal aperture size at which face recognition performance plateaus. As expected, accuracy increased nonlinearly with spotlight aperture size. Analyses of Structural Similarity comparing the available information during spotlight and natural viewing conditions indicate that the Facespan (the minimum spatial extent of preserved facial information leading to performance comparable to natural viewing) encompasses 7° of visual angle in our viewing conditions (size of the face stimulus: 15.6°; viewing distance: 70 cm), which represents 45% of the face. The present findings provide a benchmark for future investigations that will address whether and how the Facespan is modulated by factors such as cultural, developmental, idiosyncratic, or task-related differences.
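The information comparison above relies on the Structural Similarity (SSIM) index. As a minimal illustration of the metric (not the analysis code used in the study), the sketch below computes a simplified global SSIM over whole images; the standard formulation applies the same formula in local sliding windows, and the constants K1 = 0.01 and K2 = 0.03 are the conventional defaults.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Global (single-window) Structural Similarity index between two images.

    A simplified SSIM computed over the whole image rather than in local
    sliding windows, using the usual constants K1 = 0.01 and K2 = 0.03.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    c1 = (0.01 * data_range) ** 2   # stabilises the luminance term
    c2 = (0.03 * data_range) ** 2   # stabilises the contrast/structure term
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

Identical images score 1; structurally unrelated images score near 0.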


Food Research International | 2017

Visual attention to food cues is differentially modulated by gustatory-hedonic and post-ingestive attributes

David Garcia-Burgos; Junpeng Lao; Simone Munsch; Roberto Caldara

Although attentional biases towards food cues may play a critical role in food choices and eating behaviours, it remains largely unexplored which specific food attributes govern visual attentional deployment. The allocation of visual attention might be modulated by anticipated postingestive consequences, by taste sensations derived from eating itself, or by both. Therefore, in order to obtain a comprehensive understanding of the attentional mechanisms involved in the processing of food-related cues, we recorded eye movements to five categories of well-standardised pictures: neutral non-food, high-calorie, good taste, distaste, and dangerous food. In particular, forty-four healthy adults of both sexes were assessed with an antisaccade paradigm (which requires the generation of a voluntary saccade and the suppression of a reflexive one) and a free-viewing paradigm (which implies the free visual exploration of two images). The results showed that observers directed their initial fixations more often, and faster, at items with high survival relevance, such as nutrients and possible dangers, although an increase in antisaccade error rates was only detected for high-calorie items. We also found longer prosaccade fixation durations and initial-fixation-duration bias scores reflecting maintained attention towards the high-calorie, good taste, and danger categories, while shorter reaction times to correct an erroneous prosaccade reflected less difficulty in inhibiting distasteful images. Altogether, these findings suggest that visual attention is differentially modulated by both accepted and rejected food attributes, and that normal-weight, non-eating-disordered individuals exhibit enhanced approach towards foods' postingestive effects and avoidance of distasteful items (such as bitter vegetables or pungent products).


Journal of Experimental Child Psychology | 2018

Quantifying facial expression signal and intensity use during development

Helen Rodger; Junpeng Lao; Roberto Caldara

Behavioral studies investigating facial expression recognition during development have applied various methods to establish by which age emotional expressions can be recognized. Most commonly, these methods employ static images of expressions at their highest intensity (apex) or morphed expressions of different intensities, but they have not previously been compared. Our aim was to (a) quantify the intensity and signal use for recognition of six emotional expressions from early childhood to adulthood and (b) compare both measures and assess their functional relationship to better understand the use of different measures across development. Using a psychophysical approach, we isolated the quantity of signal necessary to recognize an emotional expression at full intensity and the quantity of expression intensity (using neutral expression image morphs of varying intensities) necessary for each observer to recognize the six basic emotions while maintaining performance at 75%. Both measures revealed that fear and happiness were the most difficult and easiest expressions to recognize across age groups, respectively, a pattern already stable during early childhood. The quantity of signal and intensity needed to recognize sad, angry, disgust, and surprise expressions decreased with age. Using a Bayesian update procedure, we then reconstructed the response profiles for both measures. This analysis revealed that intensity and signal processing are similar only during adulthood and, therefore, cannot be straightforwardly compared during development. Altogether, our findings offer novel methodological and theoretical insights and tools for the investigation of the developing affective system.
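The psychophysical procedure above tracks the signal or intensity needed to hold recognition performance at 75%, and response profiles are reconstructed with a Bayesian update procedure. The sketch below is a hypothetical, simplified illustration of such an update: a grid posterior over candidate thresholds, revised trial by trial under an assumed logistic psychometric function with a guess rate for a six-alternative (basic emotion) choice. None of the names or parameter values come from the paper.

```python
import numpy as np

def update_threshold_posterior(prior, thresholds, intensity, correct,
                               slope=10.0, guess=1 / 6, lapse=0.02):
    """One Bayesian update of a grid posterior over psychometric thresholds.

    prior      : probability over candidate thresholds (sums to 1)
    thresholds : grid of candidate threshold values (e.g. expression intensity)
    intensity  : stimulus level presented on this trial
    correct    : True if the observer responded correctly
    Assumes a logistic psychometric function with a 1/6 guess rate
    (six-alternative choice) and a small lapse rate.
    """
    # Probability of a correct response at this intensity, per candidate threshold
    p_correct = guess + (1 - guess - lapse) / (1 + np.exp(-slope * (intensity - thresholds)))
    likelihood = p_correct if correct else 1 - p_correct
    posterior = prior * likelihood
    return posterior / posterior.sum()   # renormalise
```

A correct response at a high intensity shifts posterior mass towards lower thresholds (an easier observer); an incorrect one shifts it upwards. Repeating the update over trials concentrates the posterior around the observer's threshold.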


American Journal of Human Biology | 2018

No evidence for correlations between handgrip strength and sexually dimorphic acoustic properties of voices

Chengyang Han; Hongyi Wang; Vanessa Fasolt; Amanda C. Hahn; Iris J Holzleitner; Junpeng Lao; Lisa M. DeBruine; David R. Feinberg; Benedict C. Jones

Recent research on the signal value of masculine physical characteristics in men has focused on the possibility that such characteristics are valid cues of physical strength. However, evidence that sexually dimorphic vocal characteristics are correlated with physical strength is equivocal. Consequently, we undertook a further test for possible relationships between physical strength and masculine vocal characteristics.

Collaboration


Dive into Junpeng Lao's collaborations.

Top Co-Authors

Meike Ramon (University of Fribourg)
Nayla Sokhn (University of Fribourg)
Olivier Pascalis (Centre national de la recherche scientifique)