
Publication


Featured research published by Hillel Aviezer.


Psychological Science | 2008

Angry, Disgusted, or Afraid? Studies on the Malleability of Emotion Perception

Hillel Aviezer; Ran R. Hassin; Jennifer D. Ryan; Cheryl L. Grady; Josh Susskind; Adam K. Anderson; Morris Moscovitch; Shlomo Bentin

Current theories of emotion perception posit that basic facial expressions signal categorically discrete emotions or affective dimensions of valence and arousal. In both cases, the information is thought to be directly “read out” from the face in a way that is largely immune to context. In contrast, the three studies reported here demonstrated that identical facial configurations convey strikingly different emotions and dimensional values depending on the affective context in which they are embedded. This effect is modulated by the similarity between the target facial expression and the facial expression typically associated with the context. Moreover, by monitoring eye movements, we demonstrated that characteristic fixation patterns previously thought to be determined solely by the facial expression are systematically modulated by emotional context already at very early stages of visual processing, even by the first time the face is fixated. Our results indicate that the perception of basic facial expressions is not context invariant and can be categorically altered by context at early perceptual levels.


Science | 2012

Body Cues, Not Facial Expressions, Discriminate Between Intense Positive and Negative Emotions

Hillel Aviezer; Yaacov Trope; Alexander Todorov

Joy or Pain? Face recognition and processing are so completely central to human social interactions that these functions are supported by specialized regions in the brain. One of the fundamental aspects being processed is emotion, particularly whether the emotion being expressed is positive or negative. Nevertheless, neuroimaging studies have documented that perceiving opposite emotions often activates the same or overlapping regions. Aviezer et al. (p. 1225) report that the recognition of positive versus negative emotions actually relies on information communicated by the body—the extent to which perceivers identified joy versus grief in composite figures was driven by whether the body came from a joyous (versus grievous) image rather than the face. The body reveals what the face conceals. The distinction between positive and negative emotions is fundamental in emotion models. Intriguingly, neurobiological work suggests shared mechanisms across positive and negative emotions. We tested whether similar overlap occurs in real-life facial expressions. During peak intensities of emotion, positive and negative situations were successfully discriminated from isolated bodies but not faces. Nevertheless, viewers perceived illusory positivity or negativity in the nondiagnostic faces when seen with bodies. To reveal the underlying mechanisms, we created compounds of intense negative faces combined with positive bodies, and vice versa. Perceived affect and mimicry of the faces shifted systematically as a function of their contextual body emotion. These findings challenge standard models of emotion expression and highlight the role of the body in expressing and perceiving emotions.


Emotion Review | 2013

Inherently Ambiguous: Facial Expressions of Emotions, in Context

Ran R. Hassin; Hillel Aviezer; Shlomo Bentin

With a small yet increasing number of exceptions, the cognitive sciences have enthusiastically endorsed the idea that there are basic facial expressions of emotions that are created by specific configurations of facial muscles. We review evidence that suggests an inherent role for context in emotion perception. Context does not merely change emotion perception at the edges; it leads to radical categorical changes. The reviewed findings suggest that configurations of facial muscles are inherently ambiguous, and they call for a different approach towards the understanding of facial expressions of emotions. The costs of sticking with the modal view, and the advantages of an expanded view, are succinctly reviewed.


Emotion | 2011

The automaticity of emotional face-context integration.

Hillel Aviezer; Shlomo Bentin; Dudarev; Ran R. Hassin

Recent studies have demonstrated that context can dramatically influence the recognition of basic facial expressions, yet the nature of this phenomenon is largely unknown. In the present paper we begin to characterize the underlying process of face-context integration. Specifically, we examine whether it is a relatively controlled or automatic process. In Experiment 1, participants were motivated and instructed to avoid using the context while categorizing contextualized facial expressions, or they were led to believe that the context was irrelevant. Nevertheless, they were unable to disregard the context, which exerted a strong effect on their emotion recognition. In Experiment 2, participants categorized contextualized facial expressions while engaged in a concurrent working memory task. Despite the load, the context exerted a strong influence on their recognition of facial expressions. These results suggest that facial expressions and their body contexts are integrated in an unintentional, uncontrollable, and relatively effortless manner.


Journal of Personality and Social Psychology | 2012

Holistic person processing: Faces with bodies tell the whole story

Hillel Aviezer; Yaacov Trope; Alexander Todorov

Faces and bodies are typically encountered simultaneously, yet little research has explored the visual processing of the full person. Specifically, it is unknown whether the face and body are perceived as distinct components or as an integrated, gestalt-like unit. To examine this question, we investigated whether emotional face-body composites are processed in a holistic-like manner by using a variant of the composite face task, a measure of holistic processing. Participants judged facial expressions combined with emotionally congruent or incongruent bodies that have been shown to influence the recognition of emotion from the face. Critically, the faces were either aligned with the body in a natural position or misaligned in a manner that breaks the ecological person form. Converging data from 3 experiments confirm that breaking the person form reduces the facilitating influence of congruent body context as well as the impeding influence of incongruent body context on the recognition of emotion from the face. These results show that faces and bodies are processed as a single unit and support the notion of a composite person effect analogous to the classic effect described for faces.


Brain | 2009

Not on the face alone: perception of contextualized face expressions in Huntington's disease

Hillel Aviezer; Shlomo Bentin; Ran R. Hassin; Wendy S. Meschino; Jeanne Kennedy; Sonya Grewal; Sherali Esmail; Sharon Cohen; Morris Moscovitch

Numerous studies have demonstrated that Huntington's disease mutation-carriers have deficient explicit recognition of isolated facial expressions. There are no studies, however, which have investigated the recognition of facial expressions embedded within an emotional body and scene context. Real-life facial expressions are typically embedded in contexts which may dramatically change the emotion recognized in the face. Moreover, a recent study showed that the magnitude of the contextual bias is modulated by the similarity between the actual expression of the presented face and the facial expression that would typically fit the context, e.g. disgust faces are more similar to anger than to sadness faces and, consequently, are more strongly influenced by contexts expressing anger than by contexts expressing sadness. Since context effects on facial expression perception are not explicitly controlled, their pattern serves as an implicit measure of the processing of facial expressions. In this study we took advantage of the face-in-context design to compare explicit recognition of face expressions by Huntington's disease mutation-carriers with evidence for processing the expressions deriving from implicit measures. In an initial experiment we presented a group of 21 Huntington's disease mutation-carriers with standard tests of face-expression recognition. Relative to controls, they displayed deficits in recognizing disgust and anger faces despite intact recognition of these emotions from non-facial images. In a subsequent experiment, we embedded the disgust faces on images of people conveying sadness and anger as expressed by body language and additional paraphernalia. In addition, sadness and anger faces were embedded on context images conveying disgust. In both cases participants were instructed to categorize the facial expressions, ignoring the context.
Despite the deficient explicit recognition of isolated disgust and anger faces, the perception of the emotions expressed by the faces was affected by context in Huntington's disease mutation-carriers in a similar manner as in control participants. Specifically, they displayed the same sensitivity to face-context pairings. These findings suggest that, despite their impaired explicit recognition of facial expressions, Huntington's disease mutation-carriers display relatively preserved processing of the same facial configurations when embedded in context. The results also show intact utilization of the information elicited by contextual cues about faces expressing disgust even when the actually presented face expresses a different emotion. Overall, our findings shed light on the nature of the deficit in facial expression recognition in Huntington's disease mutation-carriers as well as underscore the importance of context in emotion perception.


Cognition & Emotion | 2016

Contributions of facial expressions and body language to the rapid perception of dynamic emotions

Laura Martinez; Virginia Falvello; Hillel Aviezer; Alexander Todorov

Correctly perceiving emotions in others is a crucial part of social interactions. We constructed a set of dynamic stimuli to determine the relative contributions of the face and body to the accurate perception of basic emotions. We also manipulated the length of these dynamic stimuli in order to explore how much information is needed to identify emotions. The findings suggest that even a short exposure time of 250 milliseconds provided enough information to correctly identify an emotion above the chance level. Furthermore, we found that recognition patterns from the face alone and the body alone differed as a function of emotion. These findings highlight the role of the body in emotion perception and suggest an advantage for angry bodies, which, in contrast to all other emotions, were comparable to the recognition rates from the face and may be advantageous for perceiving imminent threat from a distance.


Cortex | 2012

Impaired Integration of Emotional Faces and Affective Body Context in a Rare Case of Developmental Visual Agnosia

Hillel Aviezer; Ran R. Hassin; Shlomo Bentin

In the current study we examined the recognition of facial expressions embedded in emotionally expressive bodies in case LG, an individual with a rare form of developmental visual agnosia (DVA) who suffers from severe prosopagnosia. Neuropsychological testing demonstrated that LG's agnosia is characterized by profoundly impaired visual integration. Unlike individuals with typical developmental prosopagnosia, who display specific difficulties with face identity (but typically not expression) recognition, LG was also impaired at recognizing isolated facial expressions. By contrast, he successfully recognized the expressions portrayed by faceless emotional bodies handling affective paraphernalia. When presented with contextualized faces in emotional bodies, his ability to detect the emotion expressed by a face did not improve even if it was embedded in an emotionally congruent body context. Furthermore, in contrast to controls, LG displayed an abnormal pattern of contextual influence from emotionally incongruent bodies. The results are interpreted in the context of a general integration deficit in DVA, suggesting that impaired integration may extend from the level of the face to the level of the full person.


Neuropsychologia | 2007

Implicit integration in a case of integrative visual agnosia

Hillel Aviezer; Ayelet N. Landau; Lynn C. Robertson; Mary A. Peterson; Nachum Soroker; Yaron Sacher; Yoram Bonneh; Shlomo Bentin

We present a case (SE) with integrative visual agnosia following ischemic stroke affecting the right dorsal and the left ventral pathways of the visual system. Despite his inability to identify global hierarchical letters [Navon, D. (1977). Forest before trees: The precedence of global features in visual perception. Cognitive Psychology, 9, 353-383], and his dense object agnosia, SE showed normal global-to-local interference when responding to local letters in Navon hierarchical stimuli and significant picture-word identity priming in a semantic decision task for words. Since priming was absent if these features were scrambled, it stands to reason that these effects were not due to priming by distinctive features. The contrast between priming effects induced by coherent and scrambled stimuli is consistent with implicit but not explicit integration of features into a unified whole. We went on to show that possible/impossible object decisions were facilitated by words in a word-picture priming task, suggesting that prompts could activate perceptually integrated images in a backward fashion. We conclude that the absence of SE's ability to identify visual objects except through tedious serial construction reflects a deficit in accessing an integrated visual representation through bottom-up visual processing alone. However, top-down generated images can help activate these visual representations through semantic links.


Emotion | 2016

Beyond pleasure and pain: Facial expression ambiguity in adults and children during intense situations.

Sofia Wenzler; Sarah Levine; Rolf van Dick; Viola Oertel-Knöchel; Hillel Aviezer

According to psychological models as well as common intuition, intense positive and negative situations evoke highly distinct emotional expressions. Nevertheless, recent work has shown that when judging isolated faces, the affective valence of winning and losing professional tennis players is hard to differentiate. However, expressions produced by professional athletes during publicly broadcast sports events may be strategically controlled. To shed light on this matter, we examined whether ordinary people's spontaneous facial expressions evoked during highly intense situations are diagnostic of the situational valence. In Experiment 1 we compared reactions to highly intense positive situations (surprise soldier reunions) versus highly intense negative situations (terror attacks). In Experiment 2, we turned to children and compared facial reactions to highly positive situations (e.g., a child receiving a surprise trip to Disneyland) versus highly negative situations (e.g., a child discovering her parents ate all her Halloween candy). The results demonstrate that facial expressions of both adults and children are often not diagnostic of the valence of the situation. These findings demonstrate the ambiguity of extreme facial expressions and highlight the importance of context in everyday emotion perception.

Collaboration

Top co-authors of Hillel Aviezer:

Ran R. Hassin
Hebrew University of Jerusalem

Shlomo Bentin
Hebrew University of Jerusalem

Anat Perry
University of California

Neta Yitzhak
Hebrew University of Jerusalem

Noga S. Ensenberg
Hebrew University of Jerusalem