Publication


Featured research published by Jo-Anne Bachorowski.


Current Directions in Psychological Science | 1999

Vocal Expression and Perception of Emotion

Jo-Anne Bachorowski

Speech is an acoustically rich signal that provides considerable personal information about talkers. The expression of emotions in speech sounds and corresponding abilities to perceive such emotions are both fundamental aspects of human communication. Findings from studies seeking to characterize the acoustic properties of emotional speech indicate that speech acoustics provide an external cue to the level of nonspecific arousal associated with emotional processes and, to a lesser extent, the relative pleasantness of experienced emotions. Outcomes from perceptual tests show that listeners are able to accurately judge emotions from speech at rates far greater than expected by chance. More detailed characterizations of these production and perception aspects of vocal communication will necessarily involve knowledge about differences among talkers, such as those components of speech that provide comparatively stable cues to individual talkers' identities.


Journal of the Acoustical Society of America | 2001

The acoustic features of human laughter.

Jo-Anne Bachorowski; Moria J. Smoski; Michael J. Owren

Remarkably little is known about the acoustic features of laughter. Here, acoustic outcomes are reported for 1024 naturally produced laugh bouts recorded from 97 young adults as they watched funny video clips. Analyses focused on temporal features, production modes, source- and filter-related effects, and indexical cues to laugher sex and individual identity. Although a number of researchers have previously emphasized stereotypy in laughter, its acoustics were instead found to be variable and complex. Among the variety of findings reported, evident diversity in production modes, remarkable variability in fundamental frequency characteristics, and consistent lack of articulation effects in supralaryngeal filtering are of particular interest. In addition, formant-related filtering effects were found to be disproportionately important as acoustic correlates of laugher sex and individual identity. These outcomes are examined in light of existing data concerning laugh acoustics, as well as a number of hypotheses and conjectures previously advanced about this species-typical vocal signal.


Psychological Science | 2001

Not All Laughs are Alike: Voiced but Not Unvoiced Laughter Readily Elicits Positive Affect

Jo-Anne Bachorowski; Michael J. Owren

We tested whether listeners are differentially responsive to the presence or absence of voicing, a salient, distinguishing acoustic feature, in laughter. Each of 128 participants rated 50 voiced and 20 unvoiced laughs twice according to one of five different rating strategies. Results were highly consistent regardless of whether participants rated their own emotional responses, likely responses of other people, or one of three perceived attributes concerning the laughers, thus indicating that participants were experiencing similarly differentiated affective responses in all these cases. Specifically, voiced, songlike laughs were significantly more likely to elicit positive responses than were variants such as unvoiced grunts, pants, and snortlike sounds. Participants were also highly consistent in their relative dislike of these other sounds, especially those produced by females. Based on these results, we argue that laughers use the acoustic features of their vocalizations to shape listener affect.


Psychological Science | 1995

Vocal Expression of Emotion: Acoustic Properties of Speech Are Associated With Emotional Intensity and Context

Jo-Anne Bachorowski; Michael J. Owren

Acoustic properties of speech likely provide external cues about internal emotional processes, a phenomenon called vocal expression of emotion. Testing this supposition, we examined fundamental frequency (F0) and two perturbation measures, jitter and shimmer, in short speech samples recorded from subjects performing a lexical decision task. Statistically significant differences were found between baseline and on-task values and as interaction effects involving differences in trait levels of emotional intensity and the proportion of success versus failure feedback received. These results indicate that acoustic properties of speech can be used to index emotional processes and that characteristic differences in emotional intensity may mediate vocal expression of emotion.
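Jitter and shimmer are standard perturbation measures: cycle-to-cycle variability in the glottal period and in peak amplitude, respectively. A minimal sketch of the common "local" variants (mean absolute difference between consecutive cycles, relative to the overall mean); the function names and inputs are illustrative, and the paper's exact computation may differ:

```python
def jitter(periods):
    # local jitter: mean absolute difference between consecutive
    # glottal periods, divided by the mean period
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def shimmer(amplitudes):
    # local shimmer: the same ratio computed over per-cycle peak amplitudes
    diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
    return (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))
```

A perfectly periodic voice yields zero on both measures; higher values indicate greater cycle-to-cycle instability of the source signal.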


Cognition & Emotion | 1999

Emotions and Psychopathology

Ann M. Kring; Jo-Anne Bachorowski

Emotional disturbances are central to diverse psychopathologies. In this article, we argue that the functions of emotion are comparable for persons with and without psychopathology. However, impairment in one or more components of emotional processing disrupts the achievement of adaptive emotion functions. Adopting a theoretical conceptualisation of emotional processes that stresses activity in centrally mediated approach and withdrawal systems, we discuss the role of emotion in several forms of psychopathology, including major depression, some of the anxiety disorders, psychopathy, and schizophrenia. In doing so, we highlight the nature of emotion disturbance and attendant behavioural and cognitive deficits. Finally, we discuss the merits of this approach for conceptualising emotional disturbance in psychopathology.


Journal of the Acoustical Society of America | 1999

Acoustic correlates of talker sex and individual talker identity are present in a short vowel segment produced in running speech

Jo-Anne Bachorowski; Michael J. Owren

Although listeners routinely perceive both the sex and individual identity of talkers from their speech, explanations of these abilities are incomplete. Here, variation in vocal production-related anatomy was assumed to affect vowel acoustics thought to be critical for indexical cueing. Integrating this approach with source-filter theory, patterns of acoustic parameters that should represent sex and identity were identified. Due to sexual dimorphism, the combination of fundamental frequency (F0, reflecting larynx size) and vocal tract length cues (VTL, reflecting body size) was predicted to provide the strongest acoustic correlates of talker sex. Acoustic measures associated with presumed variations in supralaryngeal vocal tract-related anatomy occurring within sex were expected to be prominent in individual talker identity. These predictions were supported by results of analyses of 2500 tokens of the /epsilon/ phoneme, extracted from the naturally produced speech of 125 subjects. Classification by talker sex was virtually perfect when F0 and VTL were used together, whereas talker classification depended primarily on the various acoustic parameters associated with vocal-tract filtering.
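Vocal tract length is not measured directly; under source-filter theory it is commonly estimated from formant frequencies by treating the tract as a uniform tube closed at the glottis, whose nth resonance is Fn = (2n - 1)c / 4L. A minimal sketch under that assumption (the constant and function name are illustrative, not the paper's procedure):

```python
SPEED_OF_SOUND_CM_S = 35000.0  # approximate speed of sound in warm, moist air

def estimate_vtl(formants_hz):
    """Estimate vocal tract length (cm) from measured formant frequencies,
    modeling the tract as a uniform tube closed at one end, where the
    nth resonance is Fn = (2n - 1) * c / (4 * L)."""
    lengths = [(2 * n - 1) * SPEED_OF_SOUND_CM_S / (4.0 * f)
               for n, f in enumerate(formants_hz, start=1)]
    # average the per-formant length estimates
    return sum(lengths) / len(lengths)
```

For example, a 17.5 cm tube (a typical adult male vocal tract) has resonances near 500, 1500, and 2500 Hz. Pairing such a VTL estimate with F0, which reflects larynx size, is the combination the study found to classify talker sex almost perfectly.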


Journal of Nonverbal Behavior | 2003

Reconsidering the Evolution of Nonlinguistic Communication: The Case of Laughter

Michael J. Owren; Jo-Anne Bachorowski

Nonlinguistic communication is typically proposed to convey representational messages, implying that particular signals are associated with specific signaler emotions, intentions, or external referents. However, common signals produced by both nonhuman primates and humans may not exhibit such specificity, with human laughter for example showing significant diversity in both acoustic form and production context. We therefore outline an alternative to the representational approach, arguing that laughter and other nonlinguistic vocalizations are used to influence the affective states of listeners, thereby also affecting their behavior. In the case of laughter, we propose a primary function of accentuating or inducing positive affect in the perceiver in order to promote a more favorable stance toward the laugher. Two simple strategies are identified, namely producing laughter with acoustic features that have an immediate impact on listener arousal, and pairing these sounds with positive affect in the listener to create learned affective responses. Both depend on factors like the listener's current emotional state and past interactions with the vocalizer, with laughers predicted to adjust their sounds accordingly. This approach is used to explain findings from two experimental studies that examined the use of laughter in same-sex and different-sex dyads composed of either friends or strangers, and may be applicable to other forms of nonlinguistic communication.


Child Development | 1999

Child‐Directed Speech Produced by Mothers with Symptoms of Depression Fails to Promote Associative Learning in 4‐Month‐Old Infants

Peter S. Kaplan; Jo-Anne Bachorowski; Patricia Zarlengo-Strouse

Child-directed (CD) speech segments produced by 20 mothers who varied in self-reported symptoms of depression, recorded during a structured play interaction with their 2- to 6-month-old infants, were used as conditioned stimuli with face reinforcers in a conditioned attention paradigm. After pairings of speech segments and faces, speech segments were assessed for their ability to increase time spent looking at a novel checker-board pattern (summation test) using 225 4-month-old infants of nondepressed mothers. Significant positive summation, an index of associative learning, was obtained in groups of infants tested with speech produced by mothers with comparatively fewer self-reported symptoms of depression (Beck Depression Inventory or BDI < or = 15). However, significant positive summation was not achieved using speech samples produced by mothers with comparatively more symptoms of depression (BDI > 15). These results indicate that the CD speech produced by mothers with symptoms of depression does not promote associative learning in infants.


Annals of the New York Academy of Sciences | 2006

Sounds of Emotion

Jo-Anne Bachorowski; Michael J. Owren

In his writing, Darwin emphasized direct, veridical links between vocal acoustics and vocalizer emotional state. Yet he also recognized that acoustics influence the emotional state of listeners. This duality—that particular vocal expressions are likely linked to particular internal states, yet may specifically function to influence others—lies at the heart of contemporary efforts aimed at understanding affect‐related vocal acoustics. That work has focused most on speech acoustics and laughter, where the most common approach has been to argue that these signals reflect the occurrence of discrete emotional states in the vocalizer. An alternative view is that the underlying states can be better characterized using a small number of continuous dimensions such as arousal (or activation) and a valenced dimension such as pleasantness. A brief review of the evidence suggests, however, that neither approach is correct. Data from speech‐related research provide little support for a discrete‐emotions view, with emotion‐related aspects of the acoustics seeming more to reflect vocalizer arousal. However, links to a corresponding emotional valence dimension have also been difficult to demonstrate, suggesting a need for interpretations outside this traditional dichotomy. We therefore suggest a different perspective in which the primary function of signaling is not to express signaler emotion, but rather to impact listener affect and thereby influence the behavior of these individuals. In this view, it is not expected that nuances of signaler states will be highly correlated with particular features of the sounds produced, but rather that vocalizers will be using acoustics that readily affect listener arousal and emotion. Attributions concerning signaler states thus become a secondary outcome, reflecting inferences that listeners base on their own affective responses to the sounds, their past experience with such signals, and the context in which signaling is occurring. This approach has found recent support in laughter research, with the bigger picture being that the sounds of emotion—be they carried in speech, laughter, or other species‐typical signals—are not informative, veridical beacons of vocalizer states so much as tools of social influence used to capitalize on listener sensitivities.


Infancy | 2001

Role of Clinical Diagnosis and Medication Use in Effects of Maternal Depression on Infant-Directed Speech

Peter S. Kaplan; Jo-Anne Bachorowski; Moria J. Smoski; Michael C. Zinser

Infant-directed (ID) speech was recorded from mothers as they interacted with their 4- to 12-month-old infants. Hierarchical regression analyses revealed that two variables, the mother's age and the mother's diagnosed depression, independently accounted for significant proportions of the variance in the extent of change in fundamental frequency (ΔF0). Specifically, depressed mothers produced ID speech with smaller ΔF0 than did nondepressed mothers, and older mothers produced ID speech with larger ΔF0 than did younger mothers. Mothers who were taking antidepressant medication and who were diagnosed as being in at least partial remission produced ID speech with mean ΔF0 values that were comparable to those of nondepressed mothers. These results demonstrate explicit associations between major depressive disorder and an acoustic attribute of ID speech that is highly salient to young infants.

Collaboration


Dive into Jo-Anne Bachorowski's collaborations.

Top Co-Authors

Peter S. Kaplan (University of Colorado Denver)
Michael C. Zinser (University of Colorado Denver)
Ann M. Kring (University of California)
Christopher D. Linker (University of Colorado Denver)