Hauke Egermann
Technical University of Berlin
Publications
Featured research published by Hauke Egermann.
Cognitive, Affective, & Behavioral Neuroscience | 2013
Hauke Egermann; Marcus T. Pearce; Geraint A. Wiggins; Stephen McAdams
We present the results of a study testing the often-theorized role of musical expectations in inducing listeners’ emotions in a live flute concert experiment with 50 participants. Using an audience response system developed for this purpose, we measured subjective experience and peripheral psychophysiological changes continuously. To confirm the existence of the link between expectation and emotion, we used a threefold approach. (1) On the basis of an information-theoretic cognitive model, melodic pitch expectations were predicted by analyzing the musical stimuli used (six pieces of solo flute music). (2) A continuous rating scale was used by half of the audience to measure their experience of unexpectedness toward the music heard. (3) Emotional reactions were measured using a multicomponent approach: subjective feeling (valence and arousal rated continuously by the other half of the audience members), expressive behavior (facial EMG), and peripheral arousal (the latter two being measured in all 50 participants). Results confirmed the predicted relationship between high-information-content musical events, the violation of musical expectations (in corresponding ratings), and emotional reactions (psychologically and physiologically). Musical structures leading to expectation reactions were manifested in emotional reactions at different emotion component levels (increases in subjective arousal and autonomic nervous system activations). These results emphasize the role of musical structure in emotion induction, leading to a further understanding of the frequently experienced emotional effects of music.
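The expectation predictions in this study come from an information-theoretic model of melody. As an illustration only (not the authors' IDyOM implementation), the sketch below shows the underlying idea: a note's information content is its surprisal under a probabilistic model of the melodic context, here a simple bigram model trained on a hypothetical corpus of MIDI pitch sequences.

```python
# Illustrative sketch: information content (surprisal) of each note under a
# bigram pitch model. Corpus, smoothing and vocabulary size are assumptions.
import math
from collections import defaultdict

def train_bigram(corpus):
    """Count bigram and context frequencies over pitch sequences."""
    bigrams, contexts = defaultdict(int), defaultdict(int)
    for melody in corpus:
        for prev, nxt in zip(melody, melody[1:]):
            bigrams[(prev, nxt)] += 1
            contexts[prev] += 1
    return bigrams, contexts

def information_content(melody, bigrams, contexts, vocab_size, alpha=1.0):
    """IC(note) = -log2 P(note | previous note), with add-alpha smoothing.
    High values mark notes the model finds unexpected."""
    ics = []
    for prev, nxt in zip(melody, melody[1:]):
        p = (bigrams[(prev, nxt)] + alpha) / (contexts[prev] + alpha * vocab_size)
        ics.append(-math.log2(p))
    return ics

# Hypothetical usage: MIDI pitch sequences as training corpus and test melody.
corpus = [[60, 62, 64, 65, 67], [67, 65, 64, 62, 60], [60, 64, 67, 72]]
bigrams, contexts = train_bigram(corpus)
print(information_content([60, 62, 64, 71], bigrams, contexts, vocab_size=128))
```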
Musicae Scientiae | 2011
Hauke Egermann; Mary Elizabeth Sutherland; Oliver Grewe; Frederik Nagel; Reinhard Kopiez; Eckart Altenmüller
Music has often been shown to induce emotion in listeners and is also often heard in social contexts (e.g., concerts, parties), yet until now, the influence of social settings on the emotions experienced by listeners was not known. This exploratory study investigated whether listening to music in a group setting alters the emotions felt by listeners. The emotional reactions to 10 musical excerpts were measured both psychologically (ratings on retrospective questionnaires and button presses indicating the experience of a chill, defined as a shiver down the spine or goose pimples) and physiologically (skin conductance response) using a new, innovative multi-channel measuring device. In a repeated measures design, 14 members of an amateur orchestra (7 male, 7 female; mean age 29) came in for two testing sessions: once alone and once as a group. Chills were validated in the data analysis: a chill was counted only if the button press was accompanied by a corresponding skin conductance response. The results showed no differences between conditions (group vs. solitary) for retrospective emotion ratings; however, the number of validated chills showed a non-significant trend towards more chills in the solitary listening session. Also, skin conductance responses during chills were significantly higher in the solitary listening condition. These and other results suggested that music listening was more arousing alone, possibly due to a lack of social feedback and of concentration on the music in the group setting.
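The chill-validation rule described above is essentially a matching step between button presses and skin conductance responses (SCRs). A minimal sketch of that rule follows; the response window and the assumption that SCR onsets have already been detected are illustrative, not values reported in the study.

```python
# Minimal sketch of the chill-validation rule: a button press counts as a
# chill only if an SCR onset occurs within a short window after it.
import numpy as np

def validate_chills(press_times, scr_onset_times, window_s=5.0):
    """Return the button presses followed by an SCR within window_s seconds."""
    scr = np.asarray(scr_onset_times)
    validated = []
    for t in press_times:
        if np.any((scr >= t) & (scr <= t + window_s)):
            validated.append(t)
    return validated

# Hypothetical example: three presses, two of them accompanied by an SCR.
print(validate_chills(press_times=[12.0, 48.5, 90.2],
                      scr_onset_times=[13.1, 49.0, 130.0]))
```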
Frontiers in Psychology | 2015
Hauke Egermann; Nathalie Fernando; Lorraine Chuen; Stephen McAdams
Subjective and psychophysiological emotional responses to music from two different cultures were compared across these two cultures. Two identical experiments were conducted: the first in the Congolese rainforest with an isolated population of Mebenzélé Pygmies without any exposure to Western music and culture, the second with a group of Western music listeners who had no experience with Congolese music. Forty Pygmies and 40 Canadians listened in pairs to 19 music excerpts of 29–99 s in duration in random order (eight from the Pygmy population and 11 Western instrumental excerpts). For both groups, emotion components were continuously measured: subjective feeling (using a two-dimensional valence and arousal rating interface), peripheral physiological activation, and facial expression. While Pygmy music was rated as positive and arousing by Pygmies, ratings of Western music by Westerners covered the range from arousing to calming and from positive to negative. Comparing psychophysiological responses to emotional qualities of Pygmy music across participant groups showed no similarities. However, Western stimuli rated as high and low arousing by Canadians created similar responses in both participant groups (with high arousal associated with increases in subjective and physiological activation). Several low-level acoustical features of the music presented (tempo, pitch, and timbre) were shown to affect subjective and physiological arousal similarly in both cultures. Results suggest that while the subjective dimension of emotional valence might be mediated by cultural learning, changes in arousal might involve a more basic, universal response to low-level acoustical characteristics of music.
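The low-level acoustical features named above (tempo, pitch, timbre) can be extracted with standard audio-analysis tools. The sketch below is an assumption-laden illustration using librosa, not the feature pipeline used in the study; the file name and frequency bounds are placeholders.

```python
# Hedged sketch: summarising tempo, pitch, timbre and intensity of an excerpt.
import numpy as np
import librosa

def summarise_excerpt(path):
    y, sr = librosa.load(path, sr=None, mono=True)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)            # tempo estimate (BPM)
    f0 = librosa.yin(y, fmin=65.0, fmax=1000.0, sr=sr)        # frame-wise pitch (Hz)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # timbral brightness
    rms = librosa.feature.rms(y=y)                            # intensity proxy
    return {
        "tempo_bpm": float(np.atleast_1d(tempo)[0]),
        "median_pitch_hz": float(np.nanmedian(f0)),
        "mean_spectral_centroid_hz": float(centroid.mean()),
        "mean_rms": float(rms.mean()),
    }

# Hypothetical usage:
# print(summarise_excerpt("excerpt_01.wav"))
```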
Annals of the New York Academy of Sciences | 2009
Hauke Egermann; Oliver Grewe; Reinhard Kopiez; Eckart Altenmüller
Numerous studies have shown that music is a powerful means to induce emotions. The present study investigates whether these emotional effects can be manipulated by social feedback. In an Internet-based study, 3315 participants were randomly assigned to one of two groups and listened to different music excerpts. After each excerpt, participants rated their emotions on arousal and valence dimensions. Additionally, those in group 2 received feedback allegedly based on the emotional ratings of preceding participants. Results show that this feedback significantly shifted the ratings of group 2 in the manipulated direction compared with the group without feedback.
PLOS ONE | 2014
Bruno L. Giordano; Hauke Egermann; Roberto Bresin
Several studies have investigated the encoding and perception of emotional expressivity in music performance. A relevant question concerns how the ability to communicate emotions in music performance is acquired. In accordance with recent theories on the embodiment of emotion, we suggest here that both the expression and recognition of emotion in music might at least in part rely on knowledge about the sounds of expressive body movements. We test this hypothesis by drawing parallels between the musical expression of emotions and the expression of emotions in sounds associated with a non-musical motor activity: walking. In a combined production-perception design, two experiments were conducted, and expressive acoustical features were compared across modalities. An initial performance experiment tested for similar feature use in walking sounds and music performance, and revealed strong similarities: features related to sound intensity, tempo and tempo regularity were used similarly in both domains. Participants in a subsequent perception experiment were able to recognize both non-emotional and emotional properties of the sound-generating walkers. An analysis of the acoustical correlates of behavioral data revealed that variations in sound intensity, tempo, and tempo regularity were likely used to recognize the expressed emotions. Taken together, these results lend support to the motor-origin hypothesis for the musical expression of emotions.
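Tempo and tempo regularity, two of the features named above, are commonly derived from inter-onset intervals (IOIs). The sketch below shows one simple way to compute them from a list of sound-event onset times (e.g. footsteps or note onsets); it is an assumption for illustration, not the authors' analysis pipeline.

```python
# Illustrative sketch: tempo and regularity from inter-onset intervals.
import numpy as np

def tempo_and_regularity(onset_times_s):
    iois = np.diff(np.sort(np.asarray(onset_times_s, dtype=float)))
    tempo_bpm = 60.0 / iois.mean()                 # events per minute
    regularity = 1.0 - (iois.std() / iois.mean())  # 1 = perfectly regular
    return tempo_bpm, regularity

# Hypothetical footstep onsets (seconds): a fairly regular walking pattern.
print(tempo_and_regularity([0.00, 0.52, 1.01, 1.55, 2.04, 2.58]))
```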
Annals of the New York Academy of Sciences | 2009
Mary Elizabeth Sutherland; Oliver Grewe; Hauke Egermann; Frederik Nagel; Reinhard Kopiez; Eckart Altenmüller
The aim of this study was to investigate whether listening to music in a group setting influenced the emotion felt by the listeners. We hypothesized that individuals hearing music in a group would experience more intense emotions than the same individuals hearing the same music on their own. The emotional reactions to 10 musical excerpts (previously shown to contain chill‐inducing psychoacoustic parameters) were measured in a within‐subjects design. We found, contrary to our hypothesis, that the participants (all musicians) did not experience more chills when listening to music in a group than when listening alone. These findings may be explained by a lesser degree of concentration on the music in the group condition.
PLOS ONE | 2016
Melanie Irrgang; Hauke Egermann
Music is often said to be emotional because it reflects expressive movement in audible form. A valid approach to measuring musical emotion could therefore be to assess the movement stimulated by music. In two experiments we evaluated the discriminative power of mobile-device-generated acceleration data, produced by free movement during music listening, for predicting ratings on the Geneva Emotional Music Scales (GEMS-9). The quality of prediction varied between experiments across GEMS dimensions: tenderness (R² = .50 in Experiment 1 vs. .39 in Experiment 2), nostalgia (.42 vs. .30), wonder (.25 vs. .34), sadness (.24 vs. .35), peacefulness (.20 vs. .35), joy (.19 vs. .33) and transcendence (.14 vs. .00). For others, such as power (.42 vs. .49) and tension (.28 vs. .27), results were almost reproduced across experiments. Furthermore, we extracted two principal components from the GEMS ratings, one representing the arousal and the other the valence of the experienced feeling. Both arousal and valence could be predicted from the acceleration data, indicating that these data provide information on both the quantity and the quality of the experience. On the one hand, these findings show how music-evoked movement patterns relate to music-evoked feelings. On the other hand, they contribute to integrating findings from the field of embodied music cognition into music recommender systems.
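The two analysis steps described above (principal components over GEMS ratings, and prediction of emotion from acceleration-derived features) can be sketched as follows. Variable names, data shapes and the use of linear regression with cross-validated R² are assumptions for illustration; the placeholder data are random and will not reproduce the reported values.

```python
# Minimal sketch: (1) two principal components from GEMS-9 ratings
# (arousal- and valence-like), (2) predicting them from movement features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 120
gems_ratings = rng.random((n_trials, 9))     # placeholder GEMS-9 ratings
accel_features = rng.random((n_trials, 12))  # placeholder acceleration features

# (1) Two components summarising the GEMS ratings.
components = PCA(n_components=2).fit_transform(gems_ratings)

# (2) Cross-validated R^2 for predicting each component from movement features.
for name, target in zip(["arousal-like", "valence-like"], components.T):
    r2 = cross_val_score(LinearRegression(), accel_features, target,
                         scoring="r2", cv=5).mean()
    print(name, round(r2, 2))
```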
Archive | 2017
Gina Emerson; Hauke Egermann
Digital musical instruments (DMIs) rarely feature a clear, causal relationship between the performer’s actions and the sounds produced. Instead, they often function simply as controllers, triggering sounds that are or have been synthesised elsewhere; they are not necessarily sources of sound in themselves (Miranda and Wanderley 2006). Consequently, the performer’s physical interaction with the device frequently does not appear to correlate directly with the sonic output, making it difficult for spectators to discern how gestures and actions are translated into sounds. This relationship between input and output is determined by the mapping, the term for the process of establishing relationships of cause and effect between the control and sound generation elements of the instrument (Hunt et al. 2003). While there has been much consideration of the creative and expressive potential of mapping from the perspective of the performer and/or instrument designer, there has been little focus on how DMIs are received by audiences. How do spectators respond to the perceptual challenge DMIs present them? What influence do mapping and other aspects of instrument design (e.g. the type of controller used and the sound design) have on the success of an instrument when considered from the spectator’s point of view? And to what extent can (and should) this area of artistic exploration be made more accessible to audiences? This article aims to consider these questions by providing a critical review of the existing theoretical and empirical work on DMI reception.
Musicae Scientiae | 2018
Gina Emerson; Hauke Egermann
Over the past four decades, the number, diversity and complexity of digital musical instruments (DMIs) has increased rapidly. There are very few constraints on DMI design as such systems can be easily reconfigured, offering near limitless flexibility for music-making. Given that new acoustic musical instruments have in many cases been created in response to the limitations of available technologies, what motivates the development of new DMIs? We conducted an interview study with ten designers of new DMIs, in order to explore (a) the motivations electronic musicians may have for wanting to build their own instruments; and (b) the extent to which these motivations relate to the context in which the artist works and performs (academic vs. club settings). We found that four categories of motivation were mentioned most often: M1 – wanting to bring greater embodiment to the activity of performing and producing electronic music; M2 – wanting to improve audience experiences of DMI performances; M3 – wanting to develop new sounds; and M4 – wanting to build responsive systems for improvisation. There were also some detectable trends in motivation according to the context in which the artists work and perform. Our results offer the first systematically gathered insights into the motivations for new DMI design. It appears that the challenges of controlling digital sound synthesis drive the development of new DMIs, rather than the shortcomings of any one particular design or existing technology.
Convergence | 2015
Steffen Lepa; Anne-Kathrin Hoklas; Hauke Egermann; Stefan Weinzierl
Within academic music research, ‘musical expertise’ is often employed as a ‘moderator variable’ when conducting empirical studies on music listening. Prevalent conceptualizations typically conceive of it as a bundle of cognitive skills acquired through formal musical education. By implicitly drawing on the paradigm of the Western classical live concert, such conceptualizations ignore that, for most people nowadays, the term ‘music’ refers to electro-acoustically generated sound waves rendered by audio or multimedia electronic devices. Hence, our article tries to challenge the traditional musicologist’s view by drawing on empirical findings from three more recent music-related research lines that explicitly include the question of media playback technologies. We conclude by suggesting a revised concept of musical expertise that extends the traditional dimensions to also incorporate expertise gained through ecological perception, material practice and embodied listening experiences in everyday life. Altogether, our contribution is intended to draw attention to growing convergences between musicology and media and communications research.