Daniel N. Albohn
Pennsylvania State University
Publication
Featured research published by Daniel N. Albohn.
Current Directions in Psychological Science | 2017
Reginald B. Adams; Daniel N. Albohn; Kestutis Kveraga
A social-functional approach to face processing comes with a number of assumptions. First, given that humans possess limited cognitive resources, it assumes that we naturally allocate attention to processing and integrating the most adaptively relevant social cues. Second, from these cues, we make behavioral forecasts about others in order to respond in an efficient and adaptive manner. This assumption aligns with broader ecological accounts of vision that highlight a direct action-perception link, even for nonsocial vision. Third, humans are naturally predisposed to process faces in this functionally adaptive manner. This latter contention is implied by our attraction to dynamic aspects of the face, including looking behavior and facial expressions, from which we tend to overgeneralize inferences, even when forming impressions of stable traits. The functional approach helps to address how and why observers are able to integrate functionally related compound social cues in a manner that is ecologically relevant and thus adaptive.
Frontiers in Psychology | 2016
Reginald B. Adams; Carlos O. Garrido; Daniel N. Albohn; Ursula Hess; Robert E. Kleck
It might seem a reasonable assumption that when we are not actively using our faces to express ourselves (i.e., when we display nonexpressive, or neutral faces), those around us will not be able to read our emotions. Herein, using a variety of expression-related ratings, we examined whether age-related changes in the face can accurately reveal one’s innermost affective dispositions. In each study, we found that expressive ratings of neutral facial displays predicted self-reported positive/negative dispositional affect, but only for elderly women, and only for positive affect. These findings meaningfully replicate and extend earlier work examining age-related emotion cues in the face of elderly women (Malatesta et al., 1987a). We discuss these findings in light of evidence that women are expected to, and do, smile more than men, and that the quality of their smiles predicts their life satisfaction. Although ratings of old male faces did not significantly predict self-reported affective dispositions, the trend was similar to that found for old female faces. A plausible explanation for this gender difference is that in the process of attenuating emotional expressions over their lifetimes, old men reveal less evidence of their total emotional experiences in their faces than do old women.
Nature Human Behaviour | 2017
Hee Yeon Im; Daniel N. Albohn; Troy G. Steiner; Cody Cushing; Reginald B. Adams; Kestutis Kveraga
In crowds, where scrutinizing individual facial expressions is inefficient, humans can make snap judgments about the prevailing mood by reading ‘crowd emotion’. We investigated how the brain accomplishes this feat in a set of behavioural and functional magnetic resonance imaging studies. Participants were asked to either avoid or approach one of two crowds of faces presented in the left and right visual hemifields. Perception of crowd emotion was improved when crowd stimuli contained goal-congruent cues and was highly lateralized to the right hemisphere. The dorsal visual stream was preferentially activated in crowd emotion processing, with activity in the intraparietal sulcus and superior frontal gyrus predicting perceptual accuracy for crowd emotion perception, whereas activity in the fusiform cortex in the ventral stream predicted better perception of individual facial expressions. Our findings thus reveal significant behavioural differences and differential involvement of the hemispheres and the major visual streams in reading crowd versus individual face expressions. Im et al. examine how people process crowd facial expressions as opposed to individual ones, finding significant behavioural and neural differences.
Scientific Reports | 2018
Cody Cushing; Hee Yeon Im; Reginald B. Adams; Noreen Ward; Daniel N. Albohn; Troy G. Steiner; Kestutis Kveraga
Fearful faces convey threat cues whose meaning is contextualized by eye gaze: While averted gaze is congruent with facial fear (both signal avoidance), direct gaze (an approach signal) is incongruent with it. We have previously shown using fMRI that the amygdala is engaged more strongly by fear with averted gaze during brief exposures. However, the amygdala also responds more to fear with direct gaze during longer exposures. Here we examined previously unexplored brain oscillatory responses to characterize the neurodynamics and connectivity during brief (~250 ms) and longer (~883 ms) exposures of fearful faces with direct or averted eye gaze. We performed two experiments: one replicating the exposure time by gaze direction interaction in fMRI (N = 23), and another where we confirmed greater early phase locking to averted-gaze fear (congruent threat signal) with MEG (N = 60) in a network of face processing regions, regardless of exposure duration. Phase locking to direct-gaze fear (incongruent threat signal) then increased significantly for brief exposures at ~350 ms, and at ~700 ms for longer exposures. Our results characterize the stages of congruent and incongruent facial threat signal processing and show that stimulus exposure strongly affects the onset and duration of these stages.
Frontiers in Psychology | 2018
Sinhae Cho; Natalia Van Doren; Mark R. Minnick; Daniel N. Albohn; Reginald B. Adams; José A. Soto
The present study examined how emotional fit with culture – the degree of similarity between an individual’s emotional response and the emotional responses of others from the same culture – relates to well-being in a sample of Asian American and European American college students. Using a profile correlation method, we calculated three types of emotional fit based on self-reported emotions, facial expressions, and physiological responses. We then examined the relationships between emotional fit and individual well-being (depression, life satisfaction) as well as collective aspects of well-being, namely collective self-esteem (one’s evaluation of one’s cultural group) and identification with one’s group. The results revealed that self-report emotional fit was associated with greater individual well-being across cultures. In contrast, culture moderated the relationship between self-report emotional fit and collective self-esteem, such that emotional fit predicted greater collective self-esteem in Asian Americans, but not in European Americans. Behavioral emotional fit was unrelated to well-being. There was a marginally significant cultural moderation in the relationship between physiological emotional fit in a strong emotional situation and group identification. Specifically, physiological emotional fit predicted greater group identification in Asian Americans, but not in European Americans. However, this finding disappeared after a Bonferroni correction. The current findings extend previous research by showing that, while emotional fit may be closely related to individual aspects of well-being across cultures, the influence of emotional fit on collective aspects of well-being may be unique to cultures that emphasize interdependence, social harmony, and thus alignment with other members of the group.
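The profile correlation approach described in this abstract can be sketched in a few lines: an individual's vector of emotion ratings is correlated with the average rating profile of the other members of their cultural group. The function name, the leave-one-out averaging, and the use of a Pearson correlation here are illustrative assumptions for a minimal sketch, not the authors' exact procedure.

```python
import numpy as np

def profile_correlation(individual: np.ndarray, others: np.ndarray) -> float:
    """Emotional fit as the Pearson correlation between one person's
    emotion-rating profile and the mean profile of other group members.

    individual : 1-D array of ratings across emotion items.
    others     : 2-D array, one row per other group member (same items).
    """
    group_mean = others.mean(axis=0)          # average profile of the group
    r = np.corrcoef(individual, group_mean)[0, 1]
    return float(r)                           # +1 = perfect fit, -1 = opposite
```

Because the Pearson correlation is invariant to shifts in level, a profile that tracks the group's pattern of highs and lows scores a high fit even if the individual's overall intensity differs.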
bioRxiv | 2017
Hee Yeon Im; Daniel N. Albohn; Troy G. Steiner; Cody Cushing; Reginald B. Adams; Kestutis Kveraga
The visual system takes advantage of redundancies in the scene by extracting summary statistics from groups of similar items. Similarly, in social situations, we routinely make snap judgments about crowds of people. Reading “crowd emotion” is critical for guiding us away from danger (e.g., mass panic or violent mobs) and towards help from friendly groups. Scrutinizing each individual’s expression would be too slow and inefficient. How the brain accomplishes this feat, however, remains unaddressed. Here we report a set of behavioral and fMRI studies in which participants made avoidance or approach decisions by choosing between two facial crowds presented in the left and right visual fields (LVF/RVF). Behaviorally, participants were most accurate for crowds containing task-relevant cues (avoiding angry crowds/approaching happy crowds). This effect was amplified by sex-linked facial cues (angry male/happy female crowds), and highly lateralized, with greater recognition of task-congruent stimuli presented in the LVF. In a related fMRI study, the processing of facial crowds evoked right-lateralized activations in the dorsal visual stream, whereas similar processing of single faces preferentially activated the ventral stream bilaterally. Our results shed new light on our understanding of ensemble face coding, revealing qualitatively different mechanisms involved in reading crowd vs. individual emotion.
bioRxiv | 2017
Hee Yeon Im; Sang Chul Chong; Jisoo Sun; Troy G. Steiner; Daniel N. Albohn; Reginald B. Adams; Kestutis Kveraga
In many social situations, we make a snap judgment about crowds of people relying on their overall mood (termed “crowd emotion”). Although reading crowd emotion is critical for interpersonal dynamics, the sociocultural aspects of this process have not been explored. The current study examined how culture modulates the processing of crowd emotion in Korean and American observers. Korean and American participants were briefly presented with two groups of faces whose members varied individually in emotional expression and were asked to choose which of the two groups they would rather avoid. We found that Korean participants were more accurate than American participants overall, in line with the framework on cultural viewpoints: holistic versus analytic processing in East Asians versus Westerners. Moreover, we found a speed advantage for other-race crowds in both cultural groups. Finally, we found different hemispheric lateralization patterns: American participants were more accurate for angry crowds presented in the left visual field and happy crowds presented in the right visual field, replicating previous studies, whereas Korean participants did not show an interaction between emotional valence and visual field. This work suggests that culture plays a role in modulating our perception of the crowd emotion of groups of faces and our responses to them.
Neuroimaging Personality, Social Cognition, and Character | 2016
Daniel N. Albohn; Reginald B. Adams
Abstract In this chapter, we examine how the ecological model of vision can be applied to person perception, with a specific emphasis on the combinatorial nature of face perception. Key behavior and neuroanatomical research from the face perception literature is examined. Throughout the chapter, we stress that cues that share social signal value should not—and to a degree, cannot —be studied independently, as has been historically the case. We illustrate this point by reviewing research on the compound nature of identity cues, such as gender and race, and expressive cues, such as eye gaze and facial expressiveness. We argue that the ecological model provides a lens through which we can interpret the complicated nature of person and face perception, helping to reduce the complexities surrounding the study of compound social cue integration.
Journal of Vision | 2016
Daniel N. Albohn; Kestutis Kveraga; Reginald B. Adams
Adams Jr., R. B., & Kveraga, K. (2015). Social vision: Functional forecasting and the integration of compound social cues. Review of Philosophy and Psychology, 6(4), 591-610.
Adams, R. B., Franklin, R. G., Nelson, A. J., & Stevenson, M. T. (2011). Compound social cues in human face processing. The Science of Social Vision, 7, 90.
Aviezer, H., Hassin, R. R., Ryan, J., Grady, C., Susskind, J., Anderson, A., ... & Bentin, S. (2008). Angry, disgusted, or afraid? Studies on the malleability of emotion perception. Psychological Science, 19(7), 724-732.
de Gelder, B., & Tamietto, M. (2010). Faces, bodies, agent vision and social consciousness. In R. B. Adams, N. Ambady, K. Nakayama, & S. Shimojo (Eds.), The science of social vision. New York: Oxford University Press.
Martinez, L., Falvello, V. B., Aviezer, H., & Todorov, A. (2015). Contributions of facial expressions and body language to the rapid perception of dynamic emotions. Cognition and Emotion, 1-14.
Journal of Vision | 2016
Reginald B. Adams; Hee Yeon Im; Cody Cushing; Noreen Ward; Jasmine Boshyan; Troy G. Steiner; Daniel N. Albohn; Kestutis Kveraga