Stefan Uddenberg
Yale University
Publications
Featured research published by Stefan Uddenberg.
Psychonomic Bulletin & Review | 2016
Benjamin van Buren; Stefan Uddenberg; Brian J. Scholl
Visual processing recovers not only simple features, such as color and shape, but also seemingly higher-level properties, such as animacy. Indeed, even abstract geometric shapes are readily perceived as intentional agents when they move in certain ways, and such percepts can dramatically influence behavior. In the wolfpack effect, for example, subjects maneuver a disc around a display in order to avoid several randomly moving darts. When the darts point toward the disc, subjects (falsely) perceive that the darts are chasing them, and this impairs several types of visuomotor performance. Are such effects reflexive, automatic features of visual processing? Or might they instead arise only as contingent strategies in tasks in which subjects must interact with (and thus focus on the features of) such objects? We explored these questions in an especially direct way—by embedding such displays into the background of a completely independent “foraging” task. Subjects now moved their disc to collect small “food” dots (which appeared sequentially in random locations) as quickly as possible. The darts were task-irrelevant, and subjects were encouraged to ignore them. Nevertheless, foraging was impaired when the randomly moving darts pointed at the subjects’ disc, as compared to control conditions in which they were either oriented orthogonally to the subjects’ disc or pointed at another moving shape—thereby controlling for nonsocial factors. The perception of animacy thus influences downstream visuomotor behavior in an automatic manner, such that subjects cannot completely override the influences of seemingly animate shapes even while attempting to ignore them.
Journal of Experimental Psychology: General | 2018
Stefan Uddenberg; Brian J. Scholl
How is race encoded into memory when viewing faces? Here we demonstrate a novel systematic bias in which our memories of faces converge on certain prioritized regions in our underlying “face space,” as they relate to perceived race. This convergence was made especially salient using a new visual variant of the method of serial reproduction: “TeleFace.” A single face was briefly presented, with its race selected from a smooth continuum between White and Black (matched for mean luminance). The observer then reproduced that face, using a slider to morph a test face along this continuum. Their response was then used as the face initially presented to the next observer, and so on down the line in each reproduction chain. White observers’ chains consistently and steadily converged onto faces significantly Whiter than they had initially encountered—Whiter than both the original face in the chain and the continuum’s midpoint—regardless of where chains began. Indeed, even chains beginning near the Black end of the continuum inevitably ended up well into White space. Very different patterns resulted when the same method was applied to other arbitrary face stimuli. These results highlight a systematic bias in memory for race in White observers, perhaps contributing to the more general notion in social cognition research of a “White default.”
Journal of Vision | 2015
Scott A. Guerin; Stefan Uddenberg; Marcia K. Johnson; Marvin M. Chun
Perception has a clear temporal structure: each stimulus is preceded and followed by other stimuli. However, we may think about recently encountered stimuli in a temporal order that deviates from the perceptual input. How does the brain generate and maintain distinct representations of temporal structure associated with perception and reflection? In this experiment, we constructed a task that dissociates the temporal structure of perception and reflection. Participants viewed one face and one scene (2 s each) in one of two sequences: Face-Scene or Scene-Face. Following perception, participants were cued to direct their internal attention towards one then the other of the just-seen stimuli (refreshing, Johnson et al., 2005). Participants were instructed to imagine the picture as vividly as possible and answer a question about it (Male/Female or Indoor/Outdoor). Each refresh period lasted 2 s. On half the trials, participants refreshed the pictures in the same order they were perceived. On the other half of trials, participants refreshed the pictures in the reverse order. We applied multi-voxel pattern analysis to decode the temporal structure of perception and reflection. Based on an initial analysis of 12 participants, and consistent with our previous findings, we were able to decode the temporal structure of perception above chance in occipital, ventral temporal, and parietal cortices (all p < .001), with a trend in prefrontal cortex (p = .06). Critically, we were also able to decode the temporal structure of reflection in occipital, ventral temporal, parietal (all p < .01), and prefrontal cortices (p < .05). Consistent with previous studies indicating that perception and reflection share overlapping visual representations, our results indicate that perception and reflection share common neural machinery for the representation of temporal structure. Meeting abstract presented at VSS 2015.
Emotion | 2015
Stefan Uddenberg; Won Mok Shim
Journal of Vision | 2016
Stefan Uddenberg; George E. Newman; Brian J. Scholl
Journal of Vision | 2015
Stefan Uddenberg; Brian J. Scholl
Journal of Vision | 2018
Stefan Uddenberg; Brian J. Scholl
Journal of Vision | 2017
Stefan Uddenberg; Brian J. Scholl
Journal of Vision | 2016
Joan Danielle Khonghun Ongchoco; Stefan Uddenberg; Marvin M. Chun
Journal of Vision | 2015
Benjamin van Buren; Stefan Uddenberg; Brian J. Scholl