Publications


Featured research published by Petri Laukka.


Psychological Bulletin | 2003

Communication of emotions in vocal expression and music performance: Different channels, same code?

Patrik N. Juslin; Petri Laukka

Many authors have speculated about a close relationship between vocal expression of emotions and musical expression of emotions, but evidence bearing on this relationship has unfortunately been lacking. This review of 104 studies of vocal expression and 41 studies of music performance reveals similarities between the 2 channels concerning (a) the accuracy with which discrete emotions were communicated to listeners and (b) the emotion-specific patterns of acoustic cues used to communicate each emotion. The patterns are generally consistent with K. R. Scherer's (1986) theoretical predictions. The results can explain why music is perceived as expressive of emotion, and they are consistent with an evolutionary perspective on vocal expression of emotions. Discussion focuses on theoretical accounts and directions for future research.


Journal of New Music Research | 2004

Expression, perception, and induction of musical emotions: A review and a questionnaire study of everyday listening

Patrik N. Juslin; Petri Laukka

In this article, we provide an up-to-date overview of theory and research concerning expression, perception, and induction of emotion in music. We also provide a critique of this research, noting that previous studies have tended to neglect the social context of music listening. The most likely reason for this neglect, we argue, is that most research on musical emotion has, implicitly or explicitly, taken the perspective of the musician in understanding responses to music. In contrast, we argue that a promising avenue toward a better understanding of emotional responses to music involves diary and questionnaire studies of how ordinary listeners actually use music in everyday life contexts. Accordingly, we present findings from an exploratory questionnaire study featuring 141 music listeners (between 17 and 74 years of age) that offers some novel insights. The results provide preliminary estimates of the occurrence of various emotions in listening to music, as well as clues to how music is used by listeners in a number of different emotional ways in various life contexts. These results confirm that emotion is strongly related to most people's primary motives for listening to music.


Emotion | 2001

Impact of intended emotion intensity on cue utilization and decoding accuracy in vocal expression of emotion

Patrik N. Juslin; Petri Laukka

Actors vocally portrayed happiness, sadness, anger, fear, and disgust with weak and strong emotion intensity while reading brief verbal phrases aloud. The portrayals were recorded and analyzed according to 20 acoustic cues. Listeners decoded each portrayal by using forced-choice or quantitative ratings. The results showed that (a) portrayals with strong emotion intensity yielded higher decoding accuracy than portrayals with weak intensity, (b) listeners were able to decode the intensity of portrayals, (c) portrayals of the same emotion with different intensity yielded different patterns of acoustic cues, and (d) certain acoustic cues (e.g., fundamental frequency, high-frequency energy) were highly predictive of listeners' ratings of emotion intensity. It is argued that lack of control for emotion intensity may account for some of the inconsistencies in cue utilization reported in the literature.
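
A minimal sketch (not the authors' analysis pipeline) of extracting two of the cues named in this abstract, mean fundamental frequency and high-frequency energy, from a recorded portrayal. The file name, pitch range, and 1 kHz cutoff are illustrative assumptions.

```python
import numpy as np
import librosa

def f0_and_hf_energy(path, hf_cutoff_hz=1000.0):
    # Load the portrayal at its native sampling rate.
    y, sr = librosa.load(path, sr=None)

    # F0 track via probabilistic YIN; unvoiced frames come back as NaN.
    f0, voiced, _ = librosa.pyin(y, fmin=75, fmax=500, sr=sr)
    mean_f0 = float(np.nanmean(f0))
    sd_f0 = float(np.nanstd(f0))

    # Proportion of spectral energy above the cutoff: a crude "high-frequency
    # energy" measure (the cutoff used in the literature varies).
    n_fft = 2048
    spec = np.abs(librosa.stft(y, n_fft=n_fft)) ** 2
    freqs = librosa.fft_frequencies(sr=sr, n_fft=n_fft)
    hf_ratio = float(spec[freqs >= hf_cutoff_hz].sum() / spec.sum())

    return mean_f0, sd_f0, hf_ratio

if __name__ == "__main__":
    # Hypothetical file name for one recorded portrayal.
    print(f0_and_hf_energy("portrayal_anger_strong.wav"))
```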


Cognition & Emotion | 2005

A dimensional approach to vocal expression of emotion

Petri Laukka; Patrik N. Juslin; Roberto Bresin

This study explored a dimensional approach to vocal expression of emotion. Actors vocally portrayed emotions (anger, disgust, fear, happiness, sadness) with weak and strong emotion intensity. Listeners (30 university students and 6 speech experts) rated each portrayal on four emotion dimensions (activation, valence, potency, emotion intensity). The portrayals were also acoustically analysed with respect to 20 vocal cues (e.g., speech rate, voice intensity, fundamental frequency, spectral energy distribution). The results showed that: (a) there were distinct patterns of ratings of activation, valence, and potency for the different emotions; (b) all four emotion dimensions were correlated with several vocal cues; (c) listeners' ratings could be successfully predicted from the vocal cues for all dimensions except valence; and (d) the intensity dimension was positively correlated with the activation dimension in the listeners' ratings.
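
A minimal sketch, not the authors' analysis, of predicting listeners' ratings on one emotion dimension (e.g., activation) from a matrix of vocal cues with cross-validated linear regression. The arrays `cues` and `activation_ratings` are synthetic placeholders; the sizes are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_portrayals, n_cues = 176, 20                      # illustrative sizes only
cues = rng.normal(size=(n_portrayals, n_cues))       # stand-in for 20 vocal cues per portrayal
# Synthetic ratings generated as a noisy linear function of the cues.
activation_ratings = cues @ rng.normal(size=n_cues) + rng.normal(size=n_portrayals)

# Cross-validated R^2 indicates how well the cues predict the ratings.
model = LinearRegression()
r2_scores = cross_val_score(model, cues, activation_ratings, cv=5, scoring="r2")
print("cross-validated R^2 per fold:", np.round(r2_scores, 2))
```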


Schizophrenia Bulletin | 2010

Getting the Cue: Sensory Contributions to Auditory Emotion Recognition Impairments in Schizophrenia

David I. Leitman; Petri Laukka; Patrik N. Juslin; Erica Saccente; Pamela D. Butler; Daniel C. Javitt

Individuals with schizophrenia show reliable deficits in the ability to recognize emotions from vocal expressions. Here, we examined emotion recognition ability in 23 schizophrenia patients relative to 17 healthy controls using a stimulus battery with well-characterized acoustic features. We further evaluated performance deficits relative to ancillary assessments of underlying pitch perception abilities. As predicted, patients showed reduced emotion recognition ability across a range of emotions, which correlated with impaired basic tone matching abilities. Emotion identification deficits were strongly related to pitch-based acoustic cues such as mean and variability of fundamental frequency. Whereas healthy subjects' performance varied as a function of the relative presence or absence of these cues, with higher cue levels leading to enhanced performance, schizophrenia patients showed significantly less variation in performance as a function of cue level. In contrast to pitch-based cues, both groups showed equivalent variation in performance as a function of intensity-based cues. Finally, patients were less able than controls to differentiate between expressions with high and low emotion intensity, and this deficit was also correlated with impaired tone matching ability. Both emotion identification and intensity rating deficits were unrelated to valence of intended emotions. Deficits in both auditory emotion identification and more basic perceptual abilities correlated with impaired functional outcome. Overall, these findings support the concept that auditory emotion identification deficits in schizophrenia reflect, at least in part, a relative inability to process critical acoustic characteristics of prosodic stimuli and that such deficits contribute to poor global outcome.


Frontiers in Human Neuroscience | 2010

“It’s not what you say, but how you say it”: A reciprocal temporo-frontal network for affective prosody

David I. Leitman; Daniel H. Wolf; J. Daniel Ragland; Petri Laukka; James Loughead; Jeffrey N. Valdez; Daniel C. Javitt; Bruce I. Turetsky; Ruben C. Gur

Humans communicate emotion vocally by modulating acoustic cues such as pitch, intensity and voice quality. Research has documented how the relative presence or absence of such cues alters the likelihood of perceiving an emotion, but the neural underpinnings of acoustic cue-dependent emotion perception remain obscure. Using functional magnetic resonance imaging in 20 subjects, we examined a reciprocal circuit consisting of superior temporal cortex, amygdala and inferior frontal gyrus that may underlie affective prosodic comprehension. Results showed that increased saliency of emotion-specific acoustic cues was associated with increased activation in superior temporal cortex [planum temporale (PT), posterior superior temporal gyrus (pSTG), and posterior middle temporal gyrus (pMTG)] and amygdala, whereas decreased saliency of acoustic cues was associated with increased inferior frontal activity and temporo-frontal connectivity. These results suggest that sensory-integrative processing is facilitated when the acoustic signal is rich in affective information, yielding increased activation in temporal cortex and amygdala. Conversely, when the acoustic signal is ambiguous, greater evaluative processes are recruited, increasing activation in inferior frontal gyrus (IFG) and IFG-STG connectivity. Auditory regions may thus integrate acoustic information with amygdala input to form emotion-specific representations, which are evaluated within inferior frontal regions.


Emotion | 2005

Categorical perception of vocal emotion expressions.

Petri Laukka

Continua of vocal emotion expressions, ranging from one expression to another, were created using speech synthesis. Each emotion continuum consisted of expressions differing by equal physical amounts. In 2 experiments, subjects identified the emotion of each expression and discriminated between pairs of expressions. Identification results show that the continua were perceived as 2 distinct sections separated by a sudden category boundary. Also, discrimination accuracy was generally higher for pairs of stimuli falling across category boundaries than for pairs belonging to the same category. These results suggest that vocal expressions are perceived categorically. Results are interpreted from an evolutionary perspective on the function of vocal expression.
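
A minimal sketch of the two ideas in this abstract, under assumed numbers: building a continuum whose adjacent steps differ by equal physical amounts between two prototype cue settings, and locating the category boundary as the step where identification of the first emotion falls below 50%. The cue names, endpoint values, and identification proportions are hypothetical, not the study's stimuli or data.

```python
import numpy as np

# Prototype acoustic settings for the two continuum endpoints (assumed values).
cue_names = ["mean_f0_hz", "f0_sd_hz", "speech_rate_syll_per_s", "intensity_db"]
expr_a = np.array([180.0, 15.0, 3.5, 62.0])   # e.g., a sadness-like endpoint
expr_b = np.array([320.0, 60.0, 5.5, 74.0])   # e.g., a fear-like endpoint

# Equal physical steps between the endpoints.
n_steps = 7
weights = np.linspace(0.0, 1.0, n_steps)[:, None]
continuum = (1 - weights) * expr_a + weights * expr_b
print(cue_names)
print(np.round(continuum, 1))

# Hypothetical identification data: proportion of listeners labelling each step
# as expression A. A sharp drop between adjacent steps suggests a category boundary.
p_label_a = np.array([0.97, 0.95, 0.90, 0.55, 0.12, 0.06, 0.03])
boundary_step = int(np.argmax(p_label_a < 0.5))
print("category boundary between steps", boundary_step - 1, "and", boundary_step)
```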


Musicae Scientiae | 2011

Emotional reactions to music in a nationally representative sample of Swedish adults: Prevalence and causal influences

Patrik N. Juslin; Simon Liljeström; Petri Laukka; Daniel Västfjäll; Lars-Olov Lundqvist

Empirical studies have indicated that listeners value music primarily for its ability to arouse emotions. Yet little is known about which emotions listeners normally experience when listening to music, or about the causes of these emotions. The goal of this study was therefore to explore the prevalence of emotional reactions to music in everyday life and how this is influenced by various factors in the listener, the music, and the situation. A self-administered mail questionnaire was sent to a random and nationally representative sample of 1,500 Swedish citizens between the ages of 18 and 65, and 762 participants (51%) responded to the questionnaire. Thirty-two items explored both musical emotions in general (semantic estimates) and the most recent emotion episode featuring music for each participant (episodic estimates). The results revealed several variables (e.g., personality, age, gender, listener activity) that were correlated with particular emotions. A multiple discriminant analysis indicated that three of the most common emotion categories in a set of musical episodes (i.e., happiness, sadness, nostalgia) could be predicted with a mean accuracy of 70% correct based on data obtained from the questionnaire. The results may inform theorizing about musical emotions and guide the selection of causal variables for manipulation in future experiments.
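
A minimal sketch of a multiple discriminant analysis in the spirit of the one reported above: classifying the emotion category of a listening episode (happiness, sadness, nostalgia) from listener and situation variables and reporting cross-validated accuracy. The predictors and data are synthetic placeholders, not the study's questionnaire items.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_episodes, n_predictors = 300, 8                   # illustrative sizes only
X = rng.normal(size=(n_episodes, n_predictors))      # e.g., age, personality scores, activity codes
y = rng.integers(0, 3, size=n_episodes)              # 0=happiness, 1=sadness, 2=nostalgia

# Linear discriminant analysis with 5-fold cross-validation.
lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=5, scoring="accuracy")
print("mean classification accuracy:", round(acc.mean(), 2))
```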


American Journal of Psychiatry | 2012

Auditory Emotion Recognition Impairments in Schizophrenia: Relationship to Acoustic Features and Cognition

Rinat Gold; Pamela D. Butler; Nadine Revheim; David I. Leitman; John A. Hansen; Ruben C. Gur; Joshua T. Kantrowitz; Petri Laukka; Patrik N. Juslin; Gail Silipo; Daniel C. Javitt

OBJECTIVE: Schizophrenia is associated with deficits in the ability to perceive emotion based on tone of voice. The basis for this deficit remains unclear, however, and relevant assessment batteries remain limited. The authors evaluated performance in schizophrenia on a novel voice emotion recognition battery with well-characterized physical features, relative to impairments in more general emotional and cognitive functioning.

METHOD: The authors studied a primary sample of 92 patients and 73 comparison subjects. Stimuli were characterized according to both intended emotion and acoustic features (e.g., pitch, intensity) that contributed to the emotional percept. Parallel measures of visual emotion recognition, pitch perception, general cognition, and overall outcome were obtained. More limited measures were obtained in an independent replication sample of 36 patients, 31 age-matched comparison subjects, and 188 general comparison subjects.

RESULTS: Patients showed statistically significant large-effect-size deficits in voice emotion recognition (d=1.1) and were preferentially impaired in recognition of emotion based on pitch features but not intensity features. Emotion recognition deficits were significantly correlated with pitch perception impairments both across (r=0.56) and within (r=0.47) groups. Path analysis showed both sensory-specific and general cognitive contributions to auditory emotion recognition deficits in schizophrenia. Similar patterns of results were observed in the replication sample.

CONCLUSIONS: The results demonstrate that patients with schizophrenia show a significant deficit in the ability to recognize emotion based on tone of voice and that this deficit is related to impairment in detecting the underlying acoustic features, such as change in pitch, required for auditory emotion recognition. This study provides tools for, and highlights the need for, greater attention to physical features of stimuli used in studying social cognition in neuropsychiatric disorders.
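
A minimal sketch of the two summary statistics quoted above, a between-group effect size (Cohen's d) for voice emotion recognition and the correlation between recognition and tone-matching scores, computed on synthetic placeholder data rather than the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical recognition scores (%) for 92 patients and 73 comparison subjects.
patients = rng.normal(loc=55, scale=12, size=92)
controls = rng.normal(loc=70, scale=12, size=73)

# Cohen's d with a pooled standard deviation.
n1, n2 = len(patients), len(controls)
pooled_sd = np.sqrt(((n1 - 1) * patients.std(ddof=1) ** 2 +
                     (n2 - 1) * controls.std(ddof=1) ** 2) / (n1 + n2 - 2))
d = (controls.mean() - patients.mean()) / pooled_sd

# Pearson correlation between emotion recognition and tone-matching scores
# (tone-matching scores simulated as a noisy function of recognition scores).
recognition = np.concatenate([patients, controls])
tone_matching = recognition * 0.6 + rng.normal(scale=8, size=n1 + n2)
r, p = stats.pearsonr(recognition, tone_matching)

print(f"Cohen's d = {d:.2f}, r = {r:.2f} (p = {p:.3f})")
```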


Computer Speech & Language | 2011

Expression of affect in spontaneous speech: Acoustic correlates and automatic detection of irritation and resignation

Petri Laukka; Daniel Neiberg; Mimmi Forsell; Inger Karlsson; Kjell Elenius

The majority of previous studies on vocal expression have been conducted on posed expressions. In contrast, we utilized a large corpus of authentic affective speech recorded from real-life voice-controlled telephone services. Listeners rated a selection of 200 utterances from this corpus with regard to level of perceived irritation, resignation, neutrality, and emotion intensity. The selected utterances came from 64 different speakers who each provided both neutral and affective stimuli. All utterances were further automatically analyzed regarding a comprehensive set of acoustic measures related to F0, intensity, formants, voice source, and temporal characteristics of speech. First, several significant acoustic differences were found between utterances classified as neutral and utterances classified as irritated or resigned using a within-persons design. Second, listeners' ratings on each scale were associated with several acoustic measures. In general the acoustic correlates of irritation, resignation, and emotion intensity were similar to previous findings obtained with posed expressions, though the effect sizes were smaller for the authentic expressions. Third, automatic classification (using LDA classifiers both with and without speaker adaptation) of irritated, resigned, and neutral utterances performed at a level comparable to human performance, though human listeners and machines did not necessarily classify individual utterances similarly. Fourth, clearly perceived exemplars of irritation and resignation were rare in our corpus. These findings are discussed in relation to future research.
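
A minimal sketch of the classification setup described above, on synthetic data: LDA classification of utterances into neutral, irritated, or resigned from acoustic features, with and without a simple per-speaker z-scoring of the features as a crude stand-in for speaker adaptation (the paper's actual adaptation scheme is not reproduced here). Feature and sample sizes are illustrative only.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_utts, n_feats, n_speakers = 200, 12, 64            # loosely mirrors the corpus
X = rng.normal(size=(n_utts, n_feats))               # F0, intensity, formant, voice-source, timing measures
y = rng.integers(0, 3, size=n_utts)                  # 0=neutral, 1=irritated, 2=resigned
speaker = rng.integers(0, n_speakers, size=n_utts)   # speaker identity per utterance

def per_speaker_zscore(X, speaker):
    # Standardize each speaker's features separately (naive "adaptation").
    Xn = X.copy()
    for s in np.unique(speaker):
        idx = speaker == s
        mu, sd = X[idx].mean(axis=0), X[idx].std(axis=0) + 1e-8
        Xn[idx] = (X[idx] - mu) / sd
    return Xn

for name, feats in [("raw features", X),
                    ("per-speaker z-scored", per_speaker_zscore(X, speaker))]:
    acc = cross_val_score(LinearDiscriminantAnalysis(), feats, y, cv=5).mean()
    print(f"{name}: mean accuracy {acc:.2f}")
```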

Collaboration


Dive into Petri Laukka's collaborations.

Top Co-Authors

Hillary Anger Elfenbein, Washington University in St. Louis

Daniel C. Javitt, Nathan Kline Institute for Psychiatric Research

David I. Leitman, University of Pennsylvania

Daniel Neiberg, Royal Institute of Technology

Gail Silipo, Nathan Kline Institute for Psychiatric Research

Joshua T. Kantrowitz, Nathan Kline Institute for Psychiatric Research