
Publication


Featured research published by Reginald B. Adams.


Journal of Vision | 2009

Face gender and emotion expression: are angry women more like men?

Ursula Hess; Reginald B. Adams; Karl Grammer; Robert E. Kleck

Certain features of facial appearance perceptually resemble expressive cues related to facial displays of emotion. We hypothesized that because expressive markers of anger (such as lowered eyebrows) overlap with perceptual markers of male sex, perceivers would identify androgynous angry faces as more likely to be a man than a woman (Study 1) and would be slower to classify an angry woman as a woman than an angry man as a man (Study 2). Conversely, we hypothesized that because perceptual features of fear (raised eyebrows) and happiness (a rounded smiling face) overlap with female sex markers, perceivers would be more likely to identify an androgynous face showing these emotions as a woman than as a man (Study 1) and would be slower to identify happy and fearful men as men than happy and fearful women as women (Study 2). The results of the two studies showed that happiness and fear expressions bias sex discrimination toward the female, whereas anger expressions bias sex perception toward the male.


Cognition & Emotion | 2012

Emotion in the neutral face: A mechanism for impression formation?

Reginald B. Adams; Anthony J. Nelson; José A. Soto; Ursula Hess; Robert E. Kleck

The current work examined contributions of emotion-resembling facial cues to impression formation. There exist common facial cues that make people look emotional, male or female, and from which we derive personality inferences. We first conducted a Pilot Study to assess these effects. We found that neutral female faces were rated as more submissive, affiliative, naïve, honest, cooperative, babyish, fearful, happy, and less angry than neutral male faces. In our Primary Study, we then “warped” these same neutral faces over their corresponding anger and fear displays so the resultant facial appearance cues now structurally resembled emotion while retaining a neutral visage (e.g., no wrinkles, furrows, creases, etc.). The gender effects found in the Pilot Study were replicated in the Primary Study, suggesting clear stereotype-driven impressions. Critically, ratings of the neutral-over-fear warps versus neutral-over-anger warps also revealed a profile similar to the gender-based ratings, revealing perceptually driven impressions directly attributable to emotion overgeneralisation.


Consciousness and Cognition | 2017

Predictions penetrate perception: Converging insights from brain, behaviour and disorder.

Claire O’Callaghan; Kestutis Kveraga; James M. Shine; Reginald B. Adams; Moshe Bar

It is argued that during ongoing visual perception, the brain is generating top-down predictions to facilitate, guide and constrain the processing of incoming sensory input. Here we demonstrate that these predictions are drawn from a diverse range of cognitive processes, in order to generate the richest and most informative prediction signals. This is consistent with a central role for cognitive penetrability in visual perception. We review behavioural and mechanistic evidence indicating that a wide spectrum of domains, including object recognition, contextual associations, cognitive biases and affective state, can directly influence visual perception. We combine these insights from the healthy brain with novel observations from neuropsychiatric disorders involving visual hallucinations, which highlight the consequences of imbalance between top-down signals and incoming sensory information. Together, these lines of evidence converge to indicate that predictive penetration, be it cognitive, social or emotional, should be considered a fundamental framework that supports visual perception.


Brain and Cognition | 2011

Differentially tuned responses to restricted versus prolonged awareness of threat: a preliminary fMRI investigation.

Reginald B. Adams; Robert G. Franklin; Anthony J. Nelson; Heather L. Gordon; Robert E. Kleck; Paul J. Whalen; Nalini Ambady

Responses to threat occur via two known independent processing routes. We propose that early, reflexive processing is predominantly tuned to the detection of congruent combinations of facial cues that signal threat, whereas later, reflective processing is predominantly tuned to incongruent combinations of threat. To test this prediction, we examined responses to threat-gaze expression pairs (anger versus fear expression by direct versus averted gaze). We report on two functional magnetic resonance imaging (fMRI) studies, one employing prolonged presentations (2s) of threat-gaze pairs to allow for reflective processing (Study 1), and one employing severely restricted (33 ms), backward masked presentations of threat-gaze pairs to isolate reflexive neural responding (Study 2). Our findings offer initial support for the conclusion that early, reflexive responses to threat are predominantly tuned to congruent threat-gaze pairings, whereas later reflective responses are predominantly tuned to ambiguous threat-gaze pairings. These findings highlight a distinct dual function in threat perception.


Current Directions in Psychological Science | 2017

Social Vision: Applying a Social-Functional Approach to Face and Expression Perception

Reginald B. Adams; Daniel N. Albohn; Kestutis Kveraga

A social-functional approach to face processing comes with a number of assumptions. First, given that humans possess limited cognitive resources, it assumes that we naturally allocate attention to processing and integrating the most adaptively relevant social cues. Second, from these cues, we make behavioral forecasts about others in order to respond in an efficient and adaptive manner. This assumption aligns with broader ecological accounts of vision that highlight a direct action-perception link, even for nonsocial vision. Third, humans are naturally predisposed to process faces in this functionally adaptive manner. This latter contention is implied by our attraction to dynamic aspects of the face, including looking behavior and facial expressions, from which we tend to overgeneralize inferences, even when forming impressions of stable traits. The functional approach helps to address how and why observers are able to integrate functionally related compound social cues in a manner that is ecologically relevant and thus adaptive.


Archive | 2015

Cross-Cultural Reading the Mind in the Eyes and Its Consequences for International Relations

Robert G. Franklin; Michael T. Stevenson; Nalini Ambady; Reginald B. Adams

Franklin, Stevenson, Ambady, and Adams observe that there is little cross-cultural research on the ability to perceive information (emotions, cognitions, etc.) from the eyes of an individual. The authors argue that the eyes play a major role in social interaction and that examining within- and between-culture use of information from the eyes, referred to as mind reading, is important to understanding nonverbal communication.


I-perception | 2018

Line-Drawn Scenes Provide Sufficient Information for Discrimination of Threat and Mere Negativity

Jasmine Boshyan; Lisa Feldman Barrett; Nicole Betz; Reginald B. Adams; Kestutis Kveraga

Previous work using color photographic scenes has shown that human observers are keenly sensitive to different types of threatening and negative stimuli and reliably classify them by the presence of threat and by its spatial and temporal direction. To test whether such distinctions can be extracted from impoverished visual information, we used 500 line drawings made by hand-tracing the original set of photographic scenes. Sixty participants rated the scenes on spatial and temporal dimensions of threat. Based on these ratings, trend analysis revealed five scene categories that were comparable to those identified for the matching color photographic scenes. Another 61 participants were randomly assigned to rate the valence or arousal evoked by the line drawings. The line drawings perceived to be the most negative were also perceived to be the most arousing, replicating the finding for color photographic scenes. We demonstrate here that humans are very sensitive to the spatial and temporal directions of threat even when they must extract this information from simple line drawings, and rate the line drawings very similarly to matched color photographs. The set of 500 hand-traced line-drawing scenes has been made freely available to the research community: http://www.kveragalab.org/threat.html.


IEEE Transactions on Affective Computing | 2017

Probabilistic Multigraph Modeling for Improving the Quality of Crowdsourced Affective Data

Jianbo Ye; Jia Li; Michelle G. Newman; Reginald B. Adams; James Zijun Wang

We proposed a probabilistic approach to joint modeling of participants’ reliability and humans’ regularity in crowdsourced affective studies. Reliability measures how likely a subject will respond to a question seriously; and regularity measures how often a human will agree with other seriously-entered responses coming from a targeted population. Crowdsourcing-based studies or experiments, which rely on human self-reported affect, pose additional challenges as compared with typical crowdsourcing studies that attempt to acquire concrete non-affective labels of objects. The reliability of participants has been massively pursued for typical non-affective crowdsourcing studies, whereas the regularity of humans in an affective experiment in its own right has not been thoroughly considered. It has been often observed that different individuals exhibit different feelings on the same test question, which does not have a sole correct response in the first place. High reliability of responses from one individual thus cannot conclusively result in high consensus across individuals. Instead, globally testing consensus of a population is of interest to investigators. Built upon the agreement multigraph among tasks and workers, our probabilistic model differentiates subject regularity from population reliability. We demonstrate the method’s effectiveness for in-depth robust analysis of large-scale crowdsourced affective data, including emotion and aesthetic assessments collected by presenting visual stimuli to human subjects.
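The distinction between reliability and regularity builds on agreement between workers who answered the same items. As a much-simplified illustration of that starting point (not the paper's full probabilistic multigraph model), the sketch below computes each worker's raw pairwise agreement rate over shared tasks; the data format and function name are assumptions for the example.

```python
from collections import defaultdict
from itertools import combinations

def agreement_rates(responses):
    """Estimate each worker's raw agreement rate with co-workers.

    `responses` is a list of (worker, task, label) triples. For every
    pair of workers who labeled the same task, we count whether their
    labels agree, then average per worker. This is only the simple
    agreement statistic underlying the multigraph, not the paper's
    joint reliability/regularity model.
    """
    by_task = defaultdict(dict)          # task -> {worker: label}
    for worker, task, label in responses:
        by_task[task][worker] = label

    agree = defaultdict(int)             # worker -> agreeing comparisons
    total = defaultdict(int)             # worker -> total comparisons
    for labels in by_task.values():
        for (w1, l1), (w2, l2) in combinations(labels.items(), 2):
            same = int(l1 == l2)
            total[w1] += 1
            total[w2] += 1
            agree[w1] += same
            agree[w2] += same
    return {w: agree[w] / total[w] for w in total}
```

On affective items such a raw rate conflates the two quantities the paper separates: a careless worker and a sincere but idiosyncratic one both show low agreement, which is why the model treats reliability and regularity as distinct latent factors.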


Psychiatry Research-neuroimaging | 2018

Development and validation of Image Stimuli for Emotion Elicitation (ISEE): A novel affective pictorial system with test-retest repeatability

Hanjoo Kim; Xin Lu; Michael Costa; Baris Kandemir; Reginald B. Adams; Jia Li; James Ze Wang; Michelle G. Newman

The aim of this study was to develop a novel set of pictorial stimuli for emotion elicitation. The Image Stimuli for Emotion Elicitation (ISEE) are the first set of stimuli for which there was an unbiased initial selection method and with images specifically selected for high retest correlation coefficients and high agreement across time. In order to protect against a researcher's subjective bias in screening initial pictures, we crawled 10,696 images from the biggest image hosting website (Flickr.com) based on a computational selection method. In the initial screening study, participants rated stimuli twice for emotion elicitation across a 1-week interval, and 1620 images were selected based on the number of participant ratings and the retest reliability of each picture. Using this set of stimuli, a second phase of the study was conducted, again having participants rate images twice with a 1-week interval, in which we found a total of 158 unique images that elicited various levels of emotionality with both good reliability and good agreement over time. The newly developed pictorial stimuli set is expected to facilitate cumulative science on human emotions.


Frontiers in Psychology | 2018

Culture Moderates the Relationship Between Emotional Fit and Collective Aspects of Well-Being

Sinhae Cho; Natalia Van Doren; Mark R. Minnick; Daniel N. Albohn; Reginald B. Adams; José A. Soto

The present study examined how emotional fit with culture – the degree of similarity between an individual's emotional response and the emotional responses of others from the same culture – relates to well-being in a sample of Asian American and European American college students. Using a profile correlation method, we calculated three types of emotional fit based on self-reported emotions, facial expressions, and physiological responses. We then examined the relationships between emotional fit and individual well-being (depression, life satisfaction) as well as collective aspects of well-being, namely collective self-esteem (one's evaluation of one's cultural group) and identification with one's group. The results revealed that self-report emotional fit was associated with greater individual well-being across cultures. In contrast, culture moderated the relationship between self-report emotional fit and collective self-esteem, such that emotional fit predicted greater collective self-esteem in Asian Americans, but not in European Americans. Behavioral emotional fit was unrelated to well-being. There was a marginally significant cultural moderation in the relationship between physiological emotional fit in a strong emotional situation and group identification. Specifically, physiological emotional fit predicted greater group identification in Asian Americans, but not in European Americans. However, this finding disappeared after a Bonferroni correction. The current finding extends previous research by showing that, while emotional fit may be closely related to individual aspects of well-being across cultures, the influence of emotional fit on collective aspects of well-being may be unique to cultures that emphasize interdependence, social harmony, and thus alignment with other members of the group.

Collaboration


Dive into Reginald B. Adams's collaborations.

Top Co-Authors

Robert G. Franklin, Pennsylvania State University
Anthony J. Nelson, Pennsylvania State University
Daniel N. Albohn, Pennsylvania State University
Jia Li, Pennsylvania State University
Joseph E. Beeney, Pennsylvania State University
José A. Soto, Pennsylvania State University
Michelle G. Newman, Pennsylvania State University