Publication


Featured research published by Jason Haberman.


Current Biology | 2007

Rapid extraction of mean emotion and gender from sets of faces.

Jason Haberman; David Whitney

We frequently encounter crowds of faces. Here we report that, when presented with a group of faces, observers quickly and automatically extract information about the mean emotion in the group. This occurs even when observers cannot report anything about the individual identities that comprise the group. The results reveal an efficient and powerful mechanism that allows the visual system to extract summary statistics from a broad range of visual stimuli, including faces.


Journal of Experimental Psychology: General | 2012

Sensorimotor Coupling in Music and the Psychology of the Groove

Petr Janata; Stefan T. Tomic; Jason Haberman

The urge to move in response to music, combined with the positive affect associated with the coupling of sensory and motor processes while engaging with music (referred to as sensorimotor coupling) in a seemingly effortless way, is commonly described as the feeling of being in the groove. Here, we systematically explore this compelling phenomenon in a population of young adults. We utilize multiple levels of analysis, comprising phenomenological, behavioral, and computational techniques. Specifically, we show (a) that the concept of the groove is widely appreciated and understood in terms of a pleasurable drive toward action, (b) that a broad range of musical excerpts can be appraised reliably for the degree of perceived groove, (c) that the degree of experienced groove is inversely related to experienced difficulty of bimanual sensorimotor coupling under tapping regimes with varying levels of expressive constraint, (d) that high-groove stimuli elicit spontaneous rhythmic movements, and (e) that quantifiable measures of the quality of sensorimotor coupling predict the degree of experienced groove. Our results complement traditional discourse regarding the groove, which has tended to take the psychological phenomenon for granted and has focused instead on the musical and especially the rhythmic qualities of particular genres of music that lead to the perception of groove. We conclude that groove can be treated as a psychological construct and model system that allows for experimental exploration of the relationship between sensorimotor coupling with music and emotion.


Journal of Experimental Psychology: Human Perception and Performance | 2009

Seeing the mean: Ensemble coding for sets of faces

Jason Haberman; David Whitney

We frequently encounter groups of similar objects in our visual environment: a bed of flowers, a basket of oranges, a crowd of people. How does the visual system process such redundancy? Research shows that rather than code every element in a texture, the visual system favors a summary statistical representation of all the elements. The authors demonstrate that although it may facilitate texture perception, ensemble coding also occurs for faces, a level of processing well beyond that of textures. Observers viewed sets of faces varying in emotionality (e.g., happy to sad) and assessed the mean emotion of each set. Although observers retained little information about the individual set members, they had a remarkably precise representation of the mean emotion. Observers continued to discriminate the mean emotion accurately even when they viewed sets of 16 faces for 500 ms or less. Modeling revealed that perceiving the average facial expression in groups of faces was not due to noisy representation or noisy discrimination. These findings support the hypothesis that ensemble coding occurs extremely fast at multiple levels of visual analysis.
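The core logic here, that an average over many items can be precise even when each individual item is represented coarsely, follows from averaging independent noisy estimates. The sketch below is a minimal illustration of that statistical point, not the authors' model; the per-face noise level and the 0-100 morph units are assumptions.

```python
# Minimal illustration (not the authors' model): if each face's expression is
# encoded with independent noise, the average of the noisy estimates is far more
# precise than any single estimate, roughly a sqrt(N) improvement for N faces.
import numpy as np

rng = np.random.default_rng(0)

N_FACES = 16        # set size used in the study
NOISE_SD = 10.0     # assumed encoding noise per face, in arbitrary morph units
N_TRIALS = 10_000

# True expressions drawn from a happy-to-sad morph continuum (arbitrary 0-100 units).
true_faces = rng.uniform(0, 100, size=(N_TRIALS, N_FACES))
noisy_faces = true_faces + rng.normal(0, NOISE_SD, size=true_faces.shape)

single_error = np.abs(noisy_faces[:, 0] - true_faces[:, 0])               # one individual face
mean_error = np.abs(noisy_faces.mean(axis=1) - true_faces.mean(axis=1))   # the set average

print(f"median error, single face:  {np.median(single_error):.1f} morph units")
print(f"median error, set average:  {np.median(mean_error):.1f} morph units")
```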


Brain Research | 2006

Neural activity of inferences during story comprehension

Sandra Virtue; Jason Haberman; Zoe Clancy; Todd B. Parrish; Mark Beeman

In this event-related functional magnetic resonance imaging (fMRI) study, participants listened to and comprehended short stories implying or explicitly stating inference events. The aim of this study was to examine the neural mechanisms that underlie inference generation, a process essential to successful comprehension. We observed distinct patterns of increased fMRI signal for implied over explicit events at two critical points during the stories: (1) within the right superior temporal gyrus when a verb in the text implied the inference; and (2) within the left superior temporal gyrus at the coherence break, that is, when participants needed to generate an inference to understand the story. To find the most compelling evidence of neural activity during inference generation, we examined fMRI signal at these two critical points separately for people with high working memory capacity (i.e., those individuals who are most likely to draw inferences during text comprehension). Interestingly, high working memory individuals showed greater fMRI signal for implied than explicit events in the left inferior frontal gyrus at the coherence break compared to low working memory individuals. The present study provides evidence that areas within the superior temporal gyrus and inferior frontal gyrus are heavily recruited when individuals generate inferences, even during ongoing comprehension that demands many cognitive processes. In addition, the data suggest that the right hemisphere superior temporal gyrus is particularly involved during early inferential processing, whereas the left hemisphere superior temporal gyrus is particularly involved during later inferential processing in story comprehension.


Attention Perception & Psychophysics | 2010

The visual system discounts emotional deviants when extracting average expression

Jason Haberman; David Whitney

There has been a recent surge in the study of ensemble coding, the idea that the visual system represents a set of similar items using summary statistics (Alvarez & Oliva, 2008; Ariely, 2001; Chong & Treisman, 2003; Parkes, Lund, Angelucci, Solomon, & Morgan, 2001). We previously demonstrated that this ability extends to faces and thus requires a high level of object processing (Haberman & Whitney, 2007, 2009). Recent debate has centered on the nature of the summary representation of size (e.g., Myczek & Simons, 2008) and whether the perceived average simply reflects the sampling of a very small subset of the items in a set. In the present study, we explored this further in the context of faces, asking observers to judge the average expressions of sets of faces containing emotional outliers. Our results suggest that the visual system implicitly and unintentionally discounts the emotional outliers, thereby computing a summary representation that encompasses the vast majority of the information present. Additional computational modeling and behavioral results reveal that an intentional cognitive sampling strategy does not accurately capture observer performance. Observers derive precise ensemble information given a 250-msec exposure, suggesting a rapid and flexible system not bound by the limits of serial attention.
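To make the sampling debate concrete, the toy simulation below contrasts two hypothetical strategies on sets containing a single emotional deviant: averaging a small random subsample versus averaging the full set after discounting the most deviant item. This is an illustration of the contrast, not the modeling reported in the paper; the set size, noise level, and outlier offset are assumed values.

```python
# Toy comparison (assumed parameters): subsampling a couple of faces versus
# averaging the whole set after discounting the emotional outlier.
import numpy as np

rng = np.random.default_rng(1)
N_TRIALS, SET_SIZE, SUBSET = 10_000, 12, 2
OUTLIER_OFFSET = 60.0   # how far the deviant sits from the rest, in arbitrary morph units

errors_subset, errors_discount = [], []
for _ in range(N_TRIALS):
    mean_emotion = rng.uniform(30, 70)
    faces = rng.normal(mean_emotion, 5.0, SET_SIZE)
    faces[rng.integers(SET_SIZE)] += OUTLIER_OFFSET          # insert one emotional outlier

    # Strategy (a): intentionally sample a small subset and average it.
    subset_est = faces[rng.choice(SET_SIZE, SUBSET, replace=False)].mean()

    # Strategy (b): discount the deviant (here, drop the item farthest from the
    # median) and average the remainder.
    deviations = np.abs(faces - np.median(faces))
    discount_est = np.delete(faces, deviations.argmax()).mean()

    errors_subset.append(abs(subset_est - mean_emotion))
    errors_discount.append(abs(discount_est - mean_emotion))

print(f"median error, {SUBSET}-item subsample:     {np.median(errors_subset):.2f}")
print(f"median error, discounted full set: {np.median(errors_discount):.2f}")
```

In this toy setup the outlier lands in a two-item subsample on roughly one trial in six, producing large errors on those trials, whereas discounting it and averaging the remaining items keeps the estimate both accurate and stable.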


Psychonomic Bulletin & Review | 2011

Efficient summary statistical representation when change localization fails

Jason Haberman; David Whitney

People are sensitive to the summary statistics of the visual world (e.g., average orientation, speed, or facial expression) and readily derive this information from complex scenes, often without explicit awareness. Given the fundamental and ubiquitous nature of summary statistical representation, we tested whether this kind of information is subject to the attentional constraints imposed by change blindness. We show that the summary statistics of a scene remain available despite limited conscious access. Observers saw two successively presented sets of 16 faces that varied in expression; four of the faces in the first set changed from one emotional extreme (e.g., happy) to the other (e.g., sad) in the second set. Observers performed poorly when asked to locate any of the faces that changed (change blindness), yet performance remained high when they were asked about the ensemble (which set was happier, on average). Even when observers could not locate the very faces driving the change in average expression between the two sets, they nonetheless derived a precise ensemble representation. Thus, the visual system may be optimized to process summary statistics efficiently, operating despite minimal conscious access to the information presented.
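As a rough worked example (with assumed morph units and change magnitudes, not the stimulus values used in the study), changing four of sixteen faces from one emotional extreme to the other shifts the set average by a substantial amount, and that shift is the signal available to observers even when they cannot localize the changed faces.

```python
# Toy arithmetic: four of sixteen faces flip from a happy extreme to a sad
# extreme; the ensemble mean shifts by 4 * (SAD - HAPPY) / 16 morph units.
# All values are assumed, for illustration only.
import numpy as np

rng = np.random.default_rng(4)

HAPPY, SAD = 80.0, 20.0                        # assumed extremes on a happy-to-sad continuum
set_a = rng.normal(50.0, 10.0, size=16)        # first set of 16 expressions
changed = rng.choice(16, size=4, replace=False)
set_a[changed] = HAPPY                         # four faces sit at the happy extreme...

set_b = set_a.copy()
set_b[changed] = SAD                           # ...and switch to the sad extreme in the second set

print(f"mean of first set:  {set_a.mean():.1f}")
print(f"mean of second set: {set_b.mean():.1f}")
print(f"ensemble shift:     {set_b.mean() - set_a.mean():+.1f} morph units")
```

With these assumed values the shift is 4 × 60 / 16 = 15 morph units toward sad, a sizeable change in the set average even though only a quarter of the items changed.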


Frontiers in Psychology | 2012

The Frozen Face Effect: Why Static Photographs May Not Do You Justice

Robert B. Post; Jason Haberman; Lica Iwaki; David Whitney

When a video of someone speaking is paused, the stationary image of the speaker typically appears less flattering than the video, which contained motion. We call this the frozen face effect (FFE). Here we report six experiments intended to quantify this effect and determine its cause. In Experiment 1, video clips of people speaking in naturalistic settings as well as all of the static frames that composed each video were presented, and subjects rated how flattering each stimulus was. The videos were rated to be significantly more flattering than the static images, confirming the FFE. In Experiment 2, videos and static images were inverted, and the videos were again rated as more flattering than the static images. In Experiment 3, a discrimination task measured recognition of the static images that composed each video. Recognition did not correlate with flattery ratings, suggesting that the FFE is not due to better memory for particularly distinct images. In Experiment 4, flattery ratings for groups of static images were compared with those for videos and static images. Ratings for the video stimuli were higher than those for either the group or individual static stimuli, suggesting that the amount of information available is not what produces the FFE. In Experiment 5, videos were presented under four conditions: forward motion, inverted forward motion, reversed motion, and scrambled frame sequence. Flattery ratings for the scrambled videos were significantly lower than those for the other three conditions. In Experiment 6, as in Experiment 2, inverted videos and static images were compared with upright ones, and the response measure was changed to perceived attractiveness. Videos were rated as more attractive than the static images for both upright and inverted stimuli. Overall, the results suggest that the FFE requires continuous, natural motion of faces, is not sensitive to inversion, and is not due to a memory effect.


Journal of Vision | 2015

Mixed emotions: Sensitivity to facial variance in a crowd of faces

Jason Haberman; Pegan Lee; David Whitney

The visual system automatically represents summary information from crowds of faces, such as the average expression. This is a useful heuristic insofar as it provides critical information about the state of the world, not simply information about the state of one individual. However, the average alone is not sufficient for making decisions about how to respond to a crowd. The variance or heterogeneity of the crowd, the mixture of emotions, conveys information about the reliability of the average, essential for determining whether the average can be trusted. Despite its importance, the representation of variance within a crowd of faces has yet to be examined. This is addressed here in three experiments. In the first experiment, observers viewed a sample set of faces that varied in emotion, and then adjusted a subsequent set to match the variance of the sample set. To isolate variance as the summary statistic of interest, the average emotion of both sets was random. Results suggested that observers had information regarding crowd variance. The second experiment verified that this was indeed a uniquely high-level phenomenon, as observers were unable to derive the variance of an inverted set of faces as precisely as an upright set of faces. The third experiment replicated and extended the first two experiments using the method of constant stimuli. Together, these results show that the visual system is sensitive to emergent information about the emotional heterogeneity, or ambivalence, in crowds of faces.
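The task logic, reproducing the spread of a crowd independently of its mean, can be sketched in a few lines. This is a hypothetical illustration with assumed units and set size, not the stimulus-generation code used in the study.

```python
# Toy illustration: the heterogeneity (standard deviation) of a crowd of
# expressions can be matched while the average emotion differs freely.
# Expression is treated as a value in arbitrary morph units (an assumption).
import numpy as np

rng = np.random.default_rng(2)

# Sample set shown to the observer: random average, fixed heterogeneity.
sample_set = rng.normal(loc=rng.uniform(20, 80), scale=12.0, size=16)
target_sd = sample_set.std(ddof=1)

# A matching set carries a completely different average emotion while
# reproducing the same spread, which is the statistic observers adjusted.
raw = rng.normal(0.0, 1.0, size=16)
match_set = rng.uniform(20, 80) + (raw - raw.mean()) / raw.std(ddof=1) * target_sd

print(f"sample set: mean={sample_set.mean():5.1f}  sd={sample_set.std(ddof=1):4.1f}")
print(f"match set:  mean={match_set.mean():5.1f}  sd={match_set.std(ddof=1):4.1f}")
```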


Behavior Research Methods | 2018

Correcting “confusability regions” in face morphs

Emma ZeeAbrahamsen; Jason Haberman

The visual system represents summary statistical information from a set of similar items, a phenomenon known as ensemble perception. In exploring various ensemble domains (e.g., orientation, color, facial expression), researchers have often employed the method of continuous report, in which observers select their responses from a gradually changing morph sequence. However, given their current implementation, some face morphs unintentionally introduce noise into the ensemble measurement. Specifically, some facial expressions on the morph wheel appear perceptually similar even though they are far apart in stimulus space. For instance, in a morph wheel of happy–sad–angry–happy expressions, an expression between happy and sad may not be discriminable from an expression between sad and angry. Without accounting for this confusability, observer ability will be underestimated. In the present experiments we accounted for this by delineating the perceptual confusability of morphs of multiple expressions. In a two-alternative forced choice task, eight observers were asked to discriminate between anchor images (36 in total) and all 360 facial expressions on the morph wheel. The results were visualized on a “confusability matrix,” depicting the morphs most likely to be confused for one another. The matrix revealed multiple confusable images between distant expressions on the morph wheel. By accounting for these “confusability regions,” we demonstrated a significant improvement in performance estimation on a set of independent ensemble data, suggesting that high-level ensemble abilities may be better than has been previously thought. We also provide an alternative computational approach that may be used to determine potentially confusable stimuli in a given morph space.
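A confusability matrix of this kind can be assembled directly from two-alternative forced-choice accuracy: each cell is the proportion of trials on which an anchor and a comparison morph were not discriminated. The sketch below is a simplified reconstruction under toy assumptions (a synthetic observer whose accuracy depends only on circular distance along the wheel), not the authors' analysis code; in the actual data, confusable pairs also appeared between distant expressions.

```python
# Sketch: build a confusability matrix from simulated 2AFC responses.
# Rows are the 36 anchor images, columns the 360 morph-wheel expressions,
# and each cell is the proportion of trials the pair was NOT discriminated.
import numpy as np

N_ANCHORS, N_MORPHS, N_TRIALS = 36, 360, 20

def confusability_matrix(correct: np.ndarray) -> np.ndarray:
    """correct[a, m, t] = 1 if trial t comparing anchor a with morph m was answered correctly."""
    return 1.0 - correct.mean(axis=2)

# Synthetic observer (toy assumption): accuracy falls toward chance (0.5) as the
# anchor and the comparison morph get closer on the wheel.
rng = np.random.default_rng(3)
anchors = np.arange(0, N_MORPHS, N_MORPHS // N_ANCHORS)            # 36 anchor positions
dist = np.abs(anchors[:, None] - np.arange(N_MORPHS)[None, :])
dist = np.minimum(dist, N_MORPHS - dist)                           # circular distance on the wheel
p_correct = 0.5 + 0.5 * (1.0 - np.exp(-dist / 15.0))               # toy psychometric function
correct = (rng.random((N_ANCHORS, N_MORPHS, N_TRIALS)) < p_correct[:, :, None]).astype(float)

conf = confusability_matrix(correct)
# A cell near 0.5 means the pair was confused on about half the trials (2AFC chance).
print("most confusable comparison for anchor 0:", int(conf[0].argmax()))
```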


PLOS Biology | 2004

Neural Activity When People Solve Verbal Problems with Insight

Mark Jung-Beeman; Edward M. Bowden; Jason Haberman; Jennifer Frymiare; Stella Arambel-Liu; Richard Greenblatt; Paul J. Reber; John Kounios

Collaboration


Dive into Jason Haberman's collaborations.

Top Co-Authors

David Whitney

University of California

Amrita Puri

University of California

Robert B. Post

University of California
