Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Frederik Nagel is active.

Publication


Featured research published by Frederik Nagel.


Behavior Research Methods | 2007

EMuJoy: Software for continuous measurement of perceived emotions in music

Frederik Nagel; Reinhard Kopiez; Oliver Grewe; Eckart Altenmüller

An adequate study of emotions in music and film should be based on the real-time measurement of self-reported data using a continuous-response method. The recording system discussed in this article reflects two important aspects of such research: First, for a better comparison of results, experimental and technical standards for continuous measurement should be taken into account, and second, the recording system should be open to the inclusion of multimodal stimuli. In light of these two considerations, our article addresses four basic principles of the continuous measurement of emotions: (1) the dimensionality of the emotion space, (2) data acquisition (e.g., the synchronization of media and the self-reported data), (3) interface construction for emotional responses, and (4) the use of multiple stimulus modalities. Researcher-developed software (EMuJoy) is presented as a freeware solution for the continuous measurement of responses to different media, along with empirical data from the self-reports of 38 subjects listening to emotional music and viewing affective pictures.
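The core of such a recording system is a sampling loop that timestamps two-dimensional (valence/arousal) self-report values relative to stimulus onset so they can later be aligned with the media. The sketch below illustrates that idea only; it is not the actual EMuJoy code, and the sampling rate, value range, and class names are assumptions.

```python
# Minimal sketch of continuous two-dimensional emotion logging, in the spirit
# of EMuJoy but not its actual implementation. Assumed design: samples of
# (valence, arousal) in [-1, 1] are timestamped relative to stimulus onset.
import time

class EmotionRecorder:
    def __init__(self, sample_rate_hz=20.0):
        self.sample_interval = 1.0 / sample_rate_hz
        self.samples = []          # (t_seconds, valence, arousal)
        self.t0 = None

    def start(self):
        """Call at stimulus onset to anchor all timestamps."""
        self.t0 = time.monotonic()

    def record(self, valence, arousal):
        """Store the current position in emotion space, timestamped."""
        if self.t0 is None:
            raise RuntimeError("start() must be called at stimulus onset")
        t = time.monotonic() - self.t0
        self.samples.append((t, valence, arousal))

# Example: a short fake recording with a constant response.
rec = EmotionRecorder()
rec.start()
for _ in range(5):
    rec.record(valence=0.3, arousal=-0.1)   # values would come from a GUI slider/mouse
    time.sleep(rec.sample_interval)
print(len(rec.samples), "samples recorded")
```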


International Conference on Acoustics, Speech, and Signal Processing | 2009

A harmonic bandwidth extension method for audio codecs

Frederik Nagel; Sascha Disch

Today's efficient audio codecs for low-bitrate application scenarios often rely on parametric coding of the upper frequency band portion of a signal, while the lower frequency band portion of the same signal is conveyed by a waveform-preserving coding method. At the decoder, the upper frequency signal is approximated from the lower frequency data using the upper frequency band parameters. However, commonly used methods of bandwidth extension almost inevitably suffer from a sensation of unpleasant roughness, which is especially present for tonal music items. In this paper, we expose the origin of this roughness and propose a bandwidth extension method that does not introduce roughness into the reconstructed audio signal. A listening test demonstrates the advantage of the proposed method compared to a standard bandwidth extension.
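To see why a fixed copy-up patch produces inharmonicity, consider a harmonic tone and a conventional SBR-style patch offset. The following sketch is illustrative only (the fundamental, offset, and band limits are invented): the replicated partials generally fall off the harmonic grid, which is the source of the roughness the paper's method is designed to avoid.

```python
# Illustrative sketch (not the paper's method): a harmonic tone with
# f0 = 220 Hz has partials at k*220 Hz; copying the 0-4 kHz band up by a
# fixed 4 kHz offset puts the replicated partials at k*220 + 4000 Hz,
# which is generally not a multiple of 220 Hz.
f0 = 220.0
low_band_partials = [k * f0 for k in range(1, 19) if k * f0 < 4000.0]
copy_offset = 4000.0                      # fixed patch offset used by naive copy-up BWE
replicated = [f + copy_offset for f in low_band_partials]

for f in replicated[:5]:
    remainder = f % f0
    print(f"{f:7.1f} Hz  -> offset from harmonic grid: {min(remainder, f0 - remainder):6.1f} Hz")
# The nonzero offsets illustrate why the replicated band clashes with the
# harmonic series and is perceived as rough.
```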


Musicae Scientiae | 2011

Does music listening in a social context alter experience? A physiological and psychological perspective on emotion

Hauke Egermann; Mary Elizabeth Sutherland; Oliver Grewe; Frederik Nagel; Reinhard Kopiez; Eckart Altenmüller

Music has often been shown to induce emotion in listeners and is also often heard in social contexts (e.g., concerts, parties), yet until now the influence of social settings on the emotions experienced by listeners was not known. This exploratory study investigated whether listening to music in a group setting alters the emotion felt by listeners. The emotional reactions to 10 musical excerpts were measured both psychologically (ratings on retrospective questionnaires and button presses indicating the experience of a chill, defined as a shiver down the spine or goose pimples) and physiologically (skin conductance response) using a new multi-channel measuring device. In a repeated-measures design, 14 members of an amateur orchestra (7 male, 7 female; mean age 29) came in for two testing sessions: once alone and once as a group. Chills were validated in the data analysis: each chill was counted only if the button press was accompanied by a corresponding skin conductance response. The results showed no differences between conditions (group vs. solitary) for retrospective emotion ratings; however, the number of validated chills showed a non-significant trend towards more chills in the solitary listening session. Skin conductance responses during chills were also significantly higher in the solitary listening condition. These and other results suggested that music listening was more arousing alone, possibly due to the lack of social feedback and to reduced concentration on the music in the group setting.
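The chill-validation step can be expressed compactly: a button press counts as a chill only if a skin conductance response follows within a short window. The function below is a minimal sketch of that rule; the window length and timestamps are illustrative and not taken from the study.

```python
# Sketch of the chill-validation rule described above, under assumed
# parameters: a button press counts as a chill only if an SCR onset occurs
# within a short window after the press.
def validate_chills(button_press_times, scr_onset_times, window_s=5.0):
    """Return the button presses accompanied by an SCR within window_s seconds."""
    validated = []
    for press in button_press_times:
        if any(0.0 <= scr - press <= window_s for scr in scr_onset_times):
            validated.append(press)
    return validated

# Example with made-up timestamps (seconds from excerpt onset):
presses = [12.4, 47.0, 81.3]
scrs = [13.1, 60.2, 83.0]
print(validate_chills(presses, scrs))   # -> [12.4, 81.3]
```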


Musicae Scientiae | 2008

Psychoacoustical correlates of musically induced chills

Frederik Nagel; Reinhard Kopiez; Oliver Grewe; Eckart Altenmüller

Music listening is often accompanied by the experience of emotions, sometimes even by so-called “strong experiences of music” (SEMs). SEMs can include such pleasurable reactions as shivers down the spine or goose pimples, which are referred to as “chills”. In the present study, the role of psychoacoustical features was investigated with respect to the experience of chills. Psychoacoustical parameters of short musical segments (total duration: 20 s), characterized as chill-inducing, were analyzed and compared with musical excerpts which did not induce chill responses. A significant increase of loudness in the frequency range between 8 and 18 Bark (920–4400 Hz) was found in those excerpts for which chills were reported. Frequency-dependent changes of loudness seem to play an important role in the induction of chills.
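A rough version of this analysis can be sketched as band-limited spectral energy in the 8–18 Bark range, using a standard Hz-to-Bark approximation. The code below is illustrative only: it substitutes plain spectral energy for a full psychoacoustic loudness model and uses synthetic signals.

```python
# Sketch of a band-limited loudness comparison in the 8-18 Bark range
# (roughly 920-4400 Hz), using the Traunmüller (1990) Hz-to-Bark
# approximation and simple spectral energy as a stand-in for loudness.
import numpy as np

def hz_to_bark(f_hz):
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def band_energy(signal, sample_rate, bark_lo=8.0, bark_hi=18.0):
    """Energy of the spectrum restricted to the given Bark range."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    barks = hz_to_bark(freqs)
    mask = (barks >= bark_lo) & (barks <= bark_hi)
    return spectrum[mask].sum()

# Example: compare two synthetic 1-second excerpts at 44.1 kHz.
sr = 44100
t = np.arange(sr) / sr
quiet = 0.1 * np.sin(2 * np.pi * 2000 * t)
loud = 0.5 * np.sin(2 * np.pi * 2000 * t)    # stronger energy around 2 kHz (inside 8-18 Bark)
print(band_energy(quiet, sr) < band_energy(loud, sr))   # True
```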


International Conference on Acoustics, Speech, and Signal Processing | 2010

A continuous modulated single sideband bandwidth extension

Frederik Nagel; Sascha Disch; Stephan Wilde

Bandwidth extension (BWE) is an important parametric technique applied by modern audio coders in order to achieve efficient data-rate compression at low bitrates. The perceptual quality of BWE-enhanced signals is, however, often hampered by artifacts caused by inharmonicity. We therefore propose a bandwidth extension method that avoids inharmonicity and, at the same time, avoids the costly transmission of additional control parameters for frequency shifts. Harmonicity of the decoded signal is ensured by calculation of the autocorrelation function of the magnitude spectrum. The proposed bandwidth extension method is implemented by single sideband modulation (SSM). Fitting nicely into the general scheme of well-established spectral band replication (SBR), the new method has some potential to replace the SBR patching algorithm. Potentially, the SSM can additionally govern the subsequent spectral shaping. Listening test results demonstrate an advantage of the novel scheme compared to SBR as used in High-Efficiency Advanced Audio Coding (HE-AAC).
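The harmonicity idea described here can be illustrated by estimating the harmonic spacing from the autocorrelation of the magnitude spectrum and rounding the patch shift to a multiple of it. The sketch below makes several simplifying assumptions (integer-bin spacing, a synthetic comb spectrum) and is not the codec's actual SSM implementation.

```python
# Sketch: estimate harmonic spacing from the autocorrelation of the
# magnitude spectrum, then round the desired copy-up shift to a multiple
# of that spacing so the extended band stays on the harmonic grid.
import numpy as np

def harmonic_spacing_bins(magnitude_spectrum, min_lag=2):
    """Lag (in bins) of the strongest autocorrelation peak of the spectrum."""
    x = magnitude_spectrum - magnitude_spectrum.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return min_lag + int(np.argmax(acf[min_lag:]))

def harmonic_patch_shift(desired_shift_bins, spacing_bins):
    """Round the desired copy-up shift to an integer multiple of the harmonic spacing."""
    return spacing_bins * max(1, round(desired_shift_bins / spacing_bins))

# Example: synthetic harmonic tone with partials every 10 bins.
spectrum = np.zeros(512)
spectrum[10::10] = 1.0
spacing = harmonic_spacing_bins(spectrum)
print(spacing, harmonic_patch_shift(desired_shift_bins=256, spacing_bins=spacing))  # 10 260
```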


Journal of the Acoustical Society of America | 2013

Audio quality evaluation by experienced and inexperienced listeners

Nadja Schinkel-Bielefeld; Netaya Lotze; Frederik Nagel

Basic perceptual quality of coded audio material is commonly evaluated using ITU-R BS.1534 MUSHRA (Multiple Stimuli with Hidden Reference and Anchor) listening tests. MUSHRA guidelines call for experienced listeners. However, the majority of consumers using the final product are not expert listeners. The degree of expertise may also vary amongst listeners in the same laboratory. It would be useful to know how audio quality evaluation differs between trained and untrained listeners, and how training and actual tests should be designed in order to be as reliable as possible. To investigate the rating differences between experts and non-experts, we performed MUSHRA listening tests with 13 experienced and 11 inexperienced listeners using 5 speech and audio codecs delivering a wide range of basic audio quality. Except for the hidden reference, absolute ratings of non-experts were consistently higher than those of experts. However, rank order only rarely changed between experts and non-experts.
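The two comparisons reported here, absolute rating offsets and rank-order stability, can be illustrated on made-up MUSHRA-style scores; the group means and condition names below are invented for demonstration.

```python
# Illustrative comparison on invented MUSHRA-style scores (0-100):
# non-experts rate conditions higher in absolute terms, but the rank
# order of the conditions is unchanged.
expert_means =     {"ref": 99, "codec_a": 82, "codec_b": 70, "codec_c": 55, "anchor": 20}
non_expert_means = {"ref": 99, "codec_a": 90, "codec_b": 81, "codec_c": 68, "anchor": 35}

# Absolute offsets per condition (zero only for the hidden reference here).
offsets = {c: non_expert_means[c] - expert_means[c] for c in expert_means}
print(offsets)

# Rank order of the conditions is identical for both listener groups.
rank = lambda scores: [c for c, _ in sorted(scores.items(), key=lambda kv: -kv[1])]
print(rank(expert_means) == rank(non_expert_means))   # True
```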


Musicae Scientiae | 2009

Individual emotional reactions towards music: Evolutionary-based universals?

Oliver Grewe; Frederik Nagel; Eckart Altenmüller; Reinhard Kopiez

Music can elicit strong feelings and physiological arousal in listeners. However, it is still under debate as to whether these reactions are based on universal reaction patterns or are acquired during a process of individual acculturation. Here we present evidence for the latter hypothesis: Subjective ratings on the axes of valence and arousal as well as physiological measurements of skin conductance response of 38 participants were assessed. Data were recorded continuously over time while participants listened to seven different musical pieces as well as five to ten pieces which they selected individually. Individual reactions showed extreme heterogeneity and revealed no systematic reaction patterns for all participants. In an exploratory approach, reactions of female and male participants were compared in response to singing voices of different registers (basso, tenor, alto, and soprano). The comparison of genders showed no significant differences, either in subjective ratings or in physiological reactions. The data presented here suggests that individual differences in the subjectively felt reactions to music dominate possible universal patterns. We argue that the high diversity in individual affective responses to music suggests a high adaptability of the underlying reaction patterns. This response mechanism might be evolutionarily beneficial due to its potential for social differentiation.


Annals of the New York Academy of Sciences | 2009

The Influence of Social Situations on Music Listening

Mary Elizabeth Sutherland; Oliver Grewe; Hauke Egermann; Frederik Nagel; Reinhard Kopiez; Eckart Altenmüller

The aim of this study was to investigate whether listening to music in a group setting influenced the emotion felt by the listeners. We hypothesized that individuals hearing music in a group would experience more intense emotions than the same individuals hearing the same music on their own. The emotional reactions to 10 musical excerpts (previously shown to contain chill‐inducing psychoacoustic parameters) were measured in a within‐subjects design. We found, contrary to our hypothesis, that the participants (all musicians) did not experience more chills when listening to music in a group than when listening alone. These findings may be explained by a lesser degree of concentration on the music in the group condition.


International Conference on Acoustics, Speech, and Signal Processing | 2013

A MDCT based harmonic spectral bandwidth extension method

Christian Neukam; Frederik Nagel; Gerald Schuller; Michael Schnabel

Modern audio coding technologies apply methods of bandwidth extension (BWE) to efficiently represent audio data at low bitrates. An established method is the well-known spectral band replication (SBR) that is part of MPEG High Efficiency Advanced Audio Coding (HE-AAC). However, if the signal features a distinct harmonic spectral structure, the use of these methods tends to result in audible artifacts, because the harmonic structure is not reconstructed correctly. In this paper, a bandwidth extension method is proposed that eliminates these undesirable effects and allows for an efficient implementation in the Modified Discrete Cosine Transform (MDCT) domain. The proposed Harmonic Spectral Bandwidth Extension (HSBE) method uses arbitrary frequency shifts to modulate the replicated spectrum such that the harmonic structure of the signal is preserved. A listening test demonstrates the advantage of the proposed method compared to the state of the art.
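In spirit, the patching step copies low-band coefficients into the high band with a shift chosen so the harmonic grid is continued. The sketch below is a heavily simplified illustration on a generic coefficient array: real HSBE operates on MDCT coefficients of a lapped transform and supports arbitrary sub-bin shifts, whereas this toy version only shifts by whole bins.

```python
# Highly simplified sketch of spectral patching with a chosen frequency shift,
# on a generic coefficient array (not an actual lapped MDCT pipeline).
import numpy as np

def patch_high_band(coeffs, crossover_bin, shift_bins):
    """Copy the band below the crossover into the band above it, shifted by shift_bins."""
    out = coeffs.copy()
    src = coeffs[crossover_bin - shift_bins:crossover_bin]
    out[crossover_bin:crossover_bin + len(src)] = src
    return out

# Example: partials every 12 bins below the crossover; a shift that is a
# multiple of 12 keeps the replicated partials on the same harmonic grid.
coeffs = np.zeros(256)
coeffs[12:128:12] = 1.0
extended = patch_high_band(coeffs, crossover_bin=128, shift_bins=120)
print(np.flatnonzero(extended))   # all indices remain multiples of 12
```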


Archive | 2011

MPEG Unified Speech and Audio Coding – Bridging the Gap

Markus Multrus; Max Neuendorf; Jérémie Lecomte; Guillaume Fuchs; Stefan Bayer; Julien Robilliard; Frederik Nagel; Stephan Wilde; Daniel Fischer; Johannes Hilpert; Christian Helmrich; Sascha Disch; Ralf Geiger; Bernhard Grill

Speech and audio coding schemes originate from different worlds. Speech coding schemes typically assume a source model, i.e., the human vocal tract. General audio coding schemes primarily rely on a sink model, i.e., the human auditory system. While speech coding schemes work well at very low rates for the signal class they were designed for, they are known to fail for general audio signals even at higher rates. In contrast, general audio coders work well for any content at higher rates, but typically have limited performance, especially for speech signals, at very low rates. Recently, the ISO/MPEG group started a standardization activity to develop a new Unified Speech and Audio Coding scheme. A state-of-the-art AAC-based general audio coder, featuring transform coding, parametric bandwidth extension, and parametric stereo coding, was extended by source-model coding tools. All codec modules were further improved and revised for enhanced performance, in particular at very low bitrates. The new unified coding scheme outperforms dedicated speech and general audio coding schemes and bridges the gap between both worlds. This paper describes the new codec in detail and shows how the goal of consistent high quality for all signal types is reached.
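The switched-coding idea can be caricatured as a per-frame decision between a source-model (speech) tool and a transform-based (general audio) tool. The sketch below is only a schematic illustration with a toy classifier; USAC's actual decision logic, frame handling, and coding tools are far more elaborate.

```python
# Schematic sketch of switched speech/audio coding: classify each frame and
# hand it to a placeholder "speech_tool" or "transform_tool". The heuristic,
# frame size, and threshold are invented for illustration.
import numpy as np

FRAME = 1024

def looks_like_speech(frame):
    """Toy heuristic: strong short-term predictability suggests a speech-like frame."""
    if np.allclose(frame, 0.0):
        return False
    r0 = float(np.dot(frame, frame))
    r1 = float(np.dot(frame[:-1], frame[1:]))
    return r1 / r0 > 0.95          # placeholder threshold, not USAC's classifier

def encode(signal):
    decisions = []
    for start in range(0, len(signal) - FRAME + 1, FRAME):
        frame = signal[start:start + FRAME]
        decisions.append("speech_tool" if looks_like_speech(frame) else "transform_tool")
    return decisions

# Example: a highly correlated low-frequency signal vs. white noise.
t = np.arange(4 * FRAME) / 48000.0
tonal = np.sin(2 * np.pi * 200 * t)
noise = np.random.default_rng(0).standard_normal(4 * FRAME)
print(encode(tonal))   # "speech_tool" under this toy heuristic
print(encode(noise))   # "transform_tool"
```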

Collaboration


Dive into Frederik Nagel's collaborations.

Top Co-Authors

Guillaume Fuchs (Université de Sherbrooke)
Christian Helmrich (University of Erlangen-Nuremberg)
Christian Ertel (University of Erlangen-Nuremberg)