
Publications


Featured research published by Chiara Fiorentini.


Frontiers in Psychology | 2013

Importance of the Inverted Control in Measuring Holistic Face Processing with the Composite Effect and Part-Whole Effect

Elinor McKone; Anne M. Aimola Davies; Hayley Darke; Kate Crookes; Tushara Wickramariyaratne; Stephanie Zappia; Chiara Fiorentini; Simone K Favelle; Mary Broughton; Dinusha Fernando

Holistic coding for faces is shown in several illusions that demonstrate integration of the percept across the entire face. The illusions occur upright but, crucially, not inverted. Converting the illusions into experimental tasks that measure their strength – and thus index degree of holistic coding – is often considered straightforward yet in fact relies on a hidden assumption, namely that there is no contribution to the experimental measure from secondary cognitive factors. For the composite effect, a relevant secondary factor is size of the “spotlight” of visuospatial attention. The composite task assumes this spotlight can be easily restricted to the target half (e.g., top-half) of the compound face stimulus. Yet, if this assumption were not true then a large spotlight, in the absence of holistic perception, could produce a false composite effect, present even for inverted faces and contributing partially to the score for upright faces. We review evidence that various factors can influence spotlight size: race/culture (Asians often prefer a more global distribution of attention than Caucasians); sex (females can be more global); appearance of the join or gap between face halves; and location of the eyes, which typically attract attention. Results from five experiments then show inverted faces can sometimes produce large false composite effects, and imply that whether this happens or not depends on complex interactions between causal factors. We also report, for both identity and expression, that only top-half face targets (containing eyes) produce valid composite measures. A sixth experiment demonstrates an example of a false inverted part-whole effect, where encoding-specificity is the secondary cognitive factor. 
We conclude the inverted face control should be tested in all composite and part-whole studies, and an effect for upright faces should be interpreted as a pure measure of holistic processing only when the experimental design produces no effect inverted.


PLOS ONE | 2012

A Robust Method of Measuring Other-Race and Other-Ethnicity Effects: The Cambridge Face Memory Test Format

Elinor McKone; Sacha Stokes; Jia Liu; Sarah Cohan; Chiara Fiorentini; Madeleine Pidcock; Galit Yovel; Mary Broughton; Michel Pelleg

Other-race and other-ethnicity effects on face memory have remained a topic of consistent research interest over several decades, across fields including face perception, social psychology, and forensic psychology (eyewitness testimony). Here we demonstrate that the Cambridge Face Memory Test format provides a robust method for measuring these effects. Testing the Cambridge Face Memory Test original version (CFMT-original; European-ancestry faces from Boston USA) and a new Cambridge Face Memory Test Chinese (CFMT-Chinese), with European and Asian observers, we report a race-of-face by race-of-observer interaction that was highly significant despite modest sample size and despite observers who had quite high exposure to the other race. We attribute this to high statistical power arising from the very high internal reliability of the tasks. This power also allows us to demonstrate a much smaller within-race other ethnicity effect, based on differences in European physiognomy between Boston faces/observers and Australian faces/observers (using the CFMT-Australian).


Visual Cognition | 2009

Perceiving facial expressions

Chiara Fiorentini; Paolo Viviani

Three experiments investigated the perception of facial displays of emotions. Using a morphing technique, Experiment 1 (identification task) and Experiment 2 (ABX discrimination task) evaluated the merits of categorical and dimensional models of the representation of these stimuli. We argue that basic emotions—as they are usually defined verbally—do not correspond to primary perceptual categories emerging from the visual analysis of facial expressions. Instead, the results are compatible with the hypothesis that facial expressions are coded in a continuous anisotropic space structured by valence axes. Experiment 3 (identification task) introduces a new technique for generating chimeras to address the debate between feature-based and holistic models of the processing of facial expressions. Contrary to the pure holistic hypothesis, the results suggest that an independent assessment of discrimination features is possible, and may be sufficient for identifying expressions even when the global facial configuration is ambiguous. However, they also suggest that top-down processing may improve identification accuracy by assessing the coherence of local features.


Autism Research | 2013

Recognition of Face and Non‐Face Stimuli in Autistic Spectrum Disorder

Leo Arkush; Adam P.R. Smith-Collins; Chiara Fiorentini; David Skuse

The ability to remember faces is critical for the development of social competence. From childhood to adulthood, we acquire a high level of expertise in the recognition of facial images, and neural processes become dedicated to sustaining competence. Many people with autism spectrum disorder (ASD) have poor face recognition memory; changes in hairstyle or other non‐facial features in an otherwise familiar person affect their recollection skills. The observation implies that they may not use the configuration of the inner face to achieve memory competence, but bolster performance in other ways. We aimed to test this hypothesis by comparing the performance of a group of high‐functioning unmedicated adolescents with ASD and a matched control group on a “surprise” face recognition memory task. We compared their memory for unfamiliar faces with their memory for images of houses. To evaluate the role that is played by peripheral cues in assisting recognition memory, we cropped both sets of pictures, retaining only the most salient central features. ASD adolescents had poorer recognition memory for faces than typical controls, but their recognition memory for houses was unimpaired. Cropping images of faces did not disproportionately influence their recall accuracy, relative to controls. House recognition skills (cropped and uncropped) were similar in both groups. In the ASD group only, performance on both sets of task was closely correlated, implying that memory for faces and other complex pictorial stimuli is achieved by domain‐general (non‐dedicated) cognitive mechanisms. Adolescents with ASD apparently do not use domain‐specialized processing of inner facial cues to support face recognition memory. Autism Res 2013, 6: 550–560.


Perception | 2012

The identification of unfolding facial expressions

Chiara Fiorentini; Susanna Schmidt; Paolo Viviani

We asked whether the identification of emotional facial expressions (FEs) involves the simultaneous perception of the facial configuration or the detection of emotion-specific diagnostic cues. We recorded at high speed (500 frames s⁻¹) the unfolding of the FE in five actors, each expressing six emotions (anger, surprise, happiness, disgust, fear, sadness). Recordings were coded every 10 frames (20 ms of real time) with the Facial Action Coding System (FACS, Ekman et al 2002, Salt Lake City, UT: Research Nexus eBook) to identify the facial actions contributing to each expression, and their intensity changes over time. Recordings were shown in slow motion (1/20 of recording speed) to one hundred observers in a forced-choice identification task. Participants were asked to identify the emotion during the presentation as soon as they felt confident to do so. Responses were recorded along with the associated response times (RTs). The RT probability density functions for both correct and incorrect responses were correlated with the facial activity during the presentation. There were systematic correlations between facial activities, response probabilities, and RT peaks, and significant differences in RT distributions for correct and incorrect answers. The results show that a reliable response is possible long before the full FE configuration is reached. This suggests that identification is reached by integrating in time individual diagnostic facial actions, and does not require perceiving the full apex configuration.


PLOS ONE | 2015

Variation in the X-Linked EFHC2 Gene Is Associated with Social Cognitive Abilities in Males

Carla Startin; Chiara Fiorentini; Michelle de Haan; David Skuse

Females outperform males on many social cognitive tasks. X-linked genes may contribute to this sex difference. Males possess one X chromosome, while females possess two X chromosomes. Functional variations in X-linked genes are therefore likely to impact more on males than females. Previous studies of X-monosomic women with Turner syndrome suggest a genetic association with facial fear recognition abilities at Xp11.3, specifically at a single nucleotide polymorphism (SNP rs7055196) within the EFHC2 gene. Based on a strong hypothesis, we investigated an association between variation at SNP rs7055196 and facial fear recognition and theory of mind abilities in males. As predicted, males possessing the G allele had significantly poorer facial fear detection accuracy and theory of mind abilities than males possessing the A allele (with SNP variant accounting for up to 4.6% of variance). Variation in the X-linked EFHC2 gene at SNP rs7055196 is therefore associated with social cognitive abilities in males.


Journal of Experimental Child Psychology | 2016

The integration of visual context information in facial emotion recognition in 5- to 15-year-olds

Anne Theurel; Arnaud Witt; Jennifer Malsert; Fleur Lejeune; Chiara Fiorentini; Koviljka Barisnikov; Edouard Gentaz

The current study investigated the role of congruent visual context information in the recognition of facial emotional expression in 190 participants from 5 to 15 years of age. Children performed a matching task that presented pictures with different facial emotional expressions (anger, disgust, happiness, fear, and sadness) in two conditions: with and without a visual context. The results showed that emotions presented with visual context information were recognized more accurately than those presented in the absence of visual context. The context effect remained steady with age but varied according to the emotion presented and the gender of participants. The findings demonstrated for the first time that children from the age of 5 years are able to integrate facial expression and visual context information, and this integration improves facial emotion recognition.


Journal of Experimental Psychology: Human Perception and Performance | 2017

Adaptive Face Coding Contributes to Individual Differences in Facial Expression Recognition Independently of Affective Factors

Romina Palermo; Linda Jeffery; Jessica Lewandowsky; Chiara Fiorentini; Jessica Irons; Amy Dawel; Nichola Burton; Elinor McKone; Gillian Rhodes

There are large, reliable individual differences in the recognition of facial expressions of emotion across the general population. The sources of this variation are not yet known. We investigated the contribution of a key face perception mechanism, adaptive coding, which calibrates perception to optimize discrimination within the current perceptual “diet.” We expected that a facial expression system that readily recalibrates might boost sensitivity to variation among facial expressions, thereby enhancing recognition ability. We measured adaptive coding strength with an established facial expression aftereffect task and measured facial expression recognition ability with 3 tasks optimized for the assessment of individual differences. As expected, expression recognition ability was positively associated with the strength of facial expression aftereffects. We also asked whether individual variation in affective factors might contribute to expression recognition ability, given that clinical levels of such traits have previously been linked to ability. Expression recognition ability was negatively associated with self-reported anxiety but not with depression, mood, or degree of autism-like or empathetic traits. Finally, we showed that the perceptual factor of adaptive coding contributes to variation in expression recognition ability independently of affective factors.


International Journal of Synthetic Emotions | 2016

Appraisal Inference from Synthetic Facial Expressions

Ilaria Sergi; Chiara Fiorentini; Stéphanie Trznadel; Klaus R. Scherer

Facial expression research largely relies on forced-choice paradigms that ask observers to choose a label to describe the emotion expressed, assuming a categorical encoding and decoding process. In contrast, appraisal theories of emotion suggest that cognitive appraisal of a situation and the resulting action tendencies determine facial actions in a complex cumulative and sequential process. It is plausible to assume that, in consequence, the expression recognition process is driven by the inference of appraisal configurations that can then be interpreted as discrete emotions. To obtain first evidence with realistic but well-controlled stimuli, theory-guided systematic facial synthesis of action units in avatar faces was used, asking judges to rate 42 combinations of facial action units on 9 appraisal dimensions. The results support the view that emotion recognition from facial expression is largely mediated by appraisal-action tendency inferences rather than direct categorical judgment. Implications for affective computing are discussed.


Developmental Cognitive Neuroscience | 2017

Ensemble perception of emotions in autistic and typical children and adolescents

Themelis Karaminis; Louise Neil; Catherine Manning; Marco Turi; Chiara Fiorentini; David C. Burr; Elizabeth Pellicano

Ensemble perception, the ability to assess automatically the summary of large amounts of information presented in visual scenes, is available early in typical development. This ability might be compromised in autistic children, who are thought to present limitations in maintaining summary statistics representations for the recent history of sensory input. Here we examined ensemble perception of facial emotional expressions in 35 autistic children, 30 age- and ability-matched typical children and 25 typical adults. Participants received three tasks: a) an ‘ensemble’ emotion discrimination task; b) a baseline (single-face) emotion discrimination task; and c) a facial expression identification task. Children performed worse than adults on all three tasks. Unexpectedly, autistic and typical children were, on average, indistinguishable in their precision and accuracy on all three tasks. Computational modelling suggested that, on average, autistic and typical children used ensemble-encoding strategies to a similar extent; but ensemble perception was related to non-verbal reasoning abilities in autistic but not in typical children. Eye-movement data also showed no group differences in the way children attended to the stimuli. Our combined findings suggest that the abilities of autistic and typical children for ensemble perception of emotions are comparable on average.

Collaboration


Dive into Chiara Fiorentini's collaborations.

Top Co-Authors
Louise Neil

University College London

Marco Turi

University of Florence

Elinor McKone

Australian National University

Linda Jeffery

University of Western Australia


David Skuse

University College London
