Tanja Bänziger
University of Geneva
Publications
Featured research published by Tanja Bänziger.
Speech Communication | 2005
Tanja Bänziger; Klaus R. Scherer
The influence of emotions on intonation patterns (more specifically F0/pitch contours) is addressed in this article. A number of authors have claimed that specific intonation patterns reflect specific emotions, whereas others have found little evidence supporting this claim and argued that F0/pitch and other vocal aspects are continuously, rather than categorically, affected by emotions and/or emotional arousal. In this contribution, a new coding system for the assessment of F0 contours in emotion portrayals is presented. Results obtained for actor-portrayed emotional expressions show that the mean level and range of F0 in the contours vary strongly as a function of the degree of activation of the portrayed emotions. In contrast, there was comparatively little evidence for qualitatively different contour shapes for different emotions.
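As a rough illustration of the two contour parameters the study highlights, the sketch below summarizes an F0 contour by its mean level and range. It is a minimal, hypothetical example, not the paper's actual coding system; the contour values and function names are invented.

```python
# Minimal sketch (not the authors' coding system): summarizing an F0 contour
# by its mean level and range, the two parameters found to vary most strongly
# with emotional activation. F0 is in Hz; unvoiced frames are NaN.
import numpy as np

def f0_summary(f0_hz):
    """Return mean level and range of the voiced parts of an F0 contour."""
    voiced = f0_hz[~np.isnan(f0_hz)]  # drop unvoiced frames
    return {
        "mean_level_hz": float(voiced.mean()),
        "range_hz": float(voiced.max() - voiced.min()),
    }

# Hypothetical contours: a high-activation portrayal (e.g., hot anger)
# typically shows a higher mean and wider range than a low-activation one.
high_activation = np.array([220, 310, np.nan, 280, 350, 240], dtype=float)
low_activation = np.array([110, 120, np.nan, 115, 125, 112], dtype=float)

print(f0_summary(high_activation))  # larger mean level and range
print(f0_summary(low_activation))   # smaller mean level and range
```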
Emotion | 2012
Tanja Bänziger; Marcello Mortillaro; Klaus R. Scherer
Research on the perception of emotional expressions in faces and voices is exploding in psychology, the neurosciences, and affective computing. This article provides an overview of some of the major emotion expression (EE) corpora currently available for empirical research and introduces a new, dynamic, multimodal corpus of emotion expressions, the Geneva Multimodal Emotion Portrayals Core Set (GEMEP-CS). The design features of the corpus are outlined and justified, and detailed validation data for the core set selection are presented and discussed. Finally, an associated database with microcoded facial, vocal, and body action elements, as well as observer ratings, is introduced.
Emotion | 2009
Tanja Bänziger; Didier Maurice Grandjean; Klaus R. Scherer
Emotion recognition ability has been identified as a central component of emotional competence. We describe the development of an instrument that objectively measures this ability on the basis of actor portrayals of dynamic expressions of 10 emotions (2 variants each for 5 emotion families), operationalized as recognition accuracy in 4 presentation modes combining the visual and auditory sense modalities (audio/video, audio only, video only, still picture). Data from a large validation study, including construct validation using related tests (Profile of Nonverbal Sensitivity; Rosenthal, Hall, DiMatteo, Rogers, & Archer, 1979; Japanese and Caucasian Facial Expressions of Emotion; Biehl et al., 1997; Diagnostic Analysis of Nonverbal Accuracy; Nowicki & Duke, 1994; Emotion Recognition Index; Scherer & Scherer, 2008), are reported. The results show the utility of a test designed to measure both coarse and fine-grained emotion differentiation and modality-specific skills. Factor analysis of the data suggests 2 separate abilities, visual and auditory recognition, which seem to be largely independent of personality dispositions.
Affective Computing and Intelligent Interaction | 2007
Tanja Bänziger; Klaus R. Scherer
Emotion research is intrinsically confronted with a serious difficulty in accessing pertinent data. For both practical and ethical reasons, genuine and intense emotions are problematic to induce in the laboratory, and sampling sufficient data to capture an adequate variety of emotional episodes requires extensive resources. For researchers interested in emotional expressivity and nonverbal communication of emotion, this situation is further complicated by the pervasiveness of expressive regulation. Given that emotional expressions are likely to be regulated in most situations of our daily lives, spontaneous emotional expressions are especially difficult to access. We argue in this paper that, in view of the needs of current research programs in this field, well-designed corpora of acted emotion portrayals can play a useful role. We present some of the arguments motivating the creation of a multimodal corpus of emotion portrayals (Geneva Multimodal Emotion Portrayal, GEMEP) and discuss its overall benefits and limitations for emotion research.
Progress in Brain Research | 2006
Didier Maurice Grandjean; Tanja Bänziger; Klaus R. Scherer
The vocal expression of human emotions is embedded within language, and the study of intonation has to take into account two interacting levels of information: emotional and semantic meaning. In addition to the discussion of this dual coding system, an extension of Brunswik's lens model is proposed. This model includes the influences of conventions, norms, and display rules (pull effects) and psychobiological mechanisms (push effects) on emotional vocalizations produced by the speaker (encoding) and the reciprocal influences of these two aspects on attributions made by the listener (decoding), allowing the dissociation and systematic study of the production and perception of intonation. Three empirical studies are described as examples of possibilities of dissociating these different phenomena at the behavioral and neurological levels in the study of intonation.
Speech Communication | 2000
Inger Karlsson; Tanja Bänziger; Jana Dankovicová; Tom Johnstone; Johan Lindberg; Håkan Melin; Francis Nolan; Klaus R. Scherer
Some experiments have been carried out to study and compensate for within-speaker variation in speaker verification. To induce speaker variation, a speaking-behaviour elicitation software package has been developed. A 50-speaker database with voluntary and involuntary speech variation has been recorded using this software. The database has been used for acoustic analysis as well as for automatic speaker verification (ASV) tests. The voluntary speech variations are used to form an enrolment set for the ASV system. This set is called structured training and is compared to neutral training, in which only normal speech is used. Both sets contain the same number of utterances. The ASV system is found to perform better on a mixed speaking-style test without any loss of performance on tests with normal speech.
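Since the abstract does not specify the verifier itself, the sketch below is only a generic illustration of the two enrolment strategies: a template-averaging verifier with cosine scoring, enrolled either on normal speech only ("neutral training") or on an equally sized mix of normal and voluntarily varied speech ("structured training"). All feature vectors and names here are synthetic placeholders.

```python
# Illustrative sketch only: a generic template-averaging verifier with cosine
# scoring, used to contrast "neutral" and "structured" enrolment sets of
# equal size. The actual ASV system in the paper is not reproduced here.
import numpy as np

def enroll(feature_vectors):
    """Average per-utterance feature vectors into one speaker template."""
    return np.mean(feature_vectors, axis=0)

def score(template, test_vector):
    """Cosine similarity between a speaker template and a test utterance."""
    return float(np.dot(template, test_vector) /
                 (np.linalg.norm(template) * np.linalg.norm(test_vector)))

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(6, 16))            # normal-speech utterances
varied = normal[:3] + rng.normal(0.0, 0.5, (3, 16))    # voluntarily varied speech

neutral_template = enroll(normal)                               # neutral training
structured_template = enroll(np.vstack([normal[:3], varied]))   # same set size

test_mixed = normal[0] + rng.normal(0.0, 0.5, 16)  # mixed-style test utterance
print("neutral:   ", score(neutral_template, test_mixed))
print("structured:", score(structured_template, test_mixed))
```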
Vision Research | 2010
Hans Richter; Tanja Bänziger; S. Abdi; Mikael Forsman
In an experimental study, four levels of oculomotor load were induced binocularly. Trapezius muscle activity was measured with bipolar surface electromyography and normalized to a submaximal contraction. Twenty-eight subjects with a mean age of 29 (range 19-42, SD 8) viewed a high-contrast fixation target for four 5-min periods through: (i) -3.5 dioptre (D) lenses; (ii) 0 D lenses; (iii) individually adjusted prism lenses (1-2 D base out); and (iv) +3.5 D lenses. The target was placed close to the individual's age-appropriate near point of accommodation in conditions (i-iii) and at 3 m in condition (iv). Each subject's ability to compensate for the added blur was extracted via infrared photorefraction measurements. A piecewise linear regression model was fitted at group level with eye-lens refraction on the x-axis and normalized trapezius muscle EMG (%RVE) on the y-axis. The model had a constant level of trapezius muscle activity, where subjects had not compensated for the incurred defocus by a change in eye-lens accommodation, and a slope, where the subjects had compensated. The slope coefficient was significantly positive in the -D (i) and +D (iv) blur conditions. During no blur (ii) and prism blur (iii) there were no signs of a relationship, nor was there any relationship between the convergence response and trapezius muscle EMG in any of the experimental conditions. The results appear directly attributable to an engagement of the eye-lens accommodative system and most likely reflect sensorimotor processing along its reflex arc for the purpose of achieving stabilization of gaze.
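The piecewise ("broken-stick") model described above can be sketched as follows. The breakpoint, data, and variable names are hypothetical; the example only shows how a constant level followed by a positive slope is fitted to refraction vs. EMG data.

```python
# Sketch of the piecewise linear regression described above: a constant level
# of trapezius EMG up to a breakpoint in eye-lens refraction, then a linear
# slope where subjects compensated for the defocus. Data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def broken_stick(x, breakpoint, level, slope):
    """Constant at `level` for x <= breakpoint, linear rise beyond it."""
    return np.where(x <= breakpoint, level, level + slope * (x - breakpoint))

# Hypothetical group-level data: eye-lens refraction (D) vs. EMG (%RVE)
refraction = np.linspace(-1.0, 3.5, 30)
rng = np.random.default_rng(1)
emg = broken_stick(refraction, 1.0, 2.0, 0.8) + rng.normal(0, 0.1, 30)

params, _ = curve_fit(broken_stick, refraction, emg, p0=[1.0, 2.0, 0.5])
print(dict(zip(["breakpoint", "level", "slope"], params)))
```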
Emotion | 2012
Marc Mehu; Marcello Mortillaro; Tanja Bänziger; Klaus R. Scherer
We tested Ekman's (2003) suggestion that movements of a small number of reliable facial muscles are particularly trustworthy cues to experienced emotion because they tend to be difficult to produce voluntarily. On the basis of theoretical predictions, we identified two subsets of facial action units (AUs): reliable AUs and versatile AUs. A survey on the controllability of facial AUs confirmed that reliable AUs indeed seem more difficult to control than versatile AUs, although the distinction between the two sets of AUs should be understood as a difference in degree of controllability rather than a discrete categorization. Professional actors enacted a series of emotional states using method acting techniques, and their facial expressions were rated by independent judges. The effect of the two subsets of AUs (reliable AUs and versatile AUs) on identification of the emotion conveyed, its perceived authenticity, and perceived intensity was investigated. Activation of the reliable AUs had a stronger effect than that of versatile AUs on the identification, perceived authenticity, and perceived intensity of the emotion expressed. We found little evidence, however, for specific links between individual AUs and particular emotion categories. We conclude that reliable AUs may indeed convey trustworthy information about emotional processes but that most of these AUs are likely to be shared by several emotions rather than providing information about specific emotions. This study also suggests that the issue of reliable facial muscles may generalize beyond the Duchenne smile.
International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2008
Asimina Vasalou; Adam N. Joinson; Tanja Bänziger; Peter Goldie; Jeremy Pitt
Archive | 2010
Klaus R. Scherer; Tanja Bänziger; Etienne B. Roesch