
Publications


Featured research published by Daniel Neiberg.


Journal on Multimodal User Interfaces | 2011

Continuous Interaction with a Virtual Human

Dennis Reidsma; Iwan de Kok; Daniel Neiberg; Sathish Pammi; Bart van Straalen; Khiet Phuong Truong; Herwin van Welbergen

This paper presents our progress in developing a Virtual Human capable of being an attentive speaker. Such a Virtual Human should be able to attend to its interaction partner while speaking, and modify its communicative behavior on the fly based on what it observes in the partner's behavior. We report new developments concerning a number of aspects, including scheduling and interrupting multimodal behavior, automatic classification of listener responses, generation of response-eliciting behavior, and strategies for generating appropriate reactions to listener responses. On the basis of this progress, a task-based setup for a responsive Virtual Human was implemented and used to carry out two user studies, whose results are presented and discussed in this paper.
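The paper describes an architecture rather than publishing code, but the attentive-speaker behavior can be pictured as a simple control loop: deliver speech in interruptible chunks, classify the listener's response during each chunk, and react accordingly. The Python sketch below is a hypothetical illustration of that loop; every name in it is invented, and the authors' actual system is a full multimodal behavior realizer:

```python
from enum import Enum, auto

class ListenerResponse(Enum):
    NONE = auto()          # no visible reaction: keep going
    BACKCHANNEL = auto()   # nod or "uh-huh": acknowledge and keep going
    INTERRUPTION = auto()  # listener takes the turn: stop speaking
    CONFUSION = auto()     # puzzled look: rephrase before moving on

def attentive_speak(chunks, classify_response, rephrase):
    """Deliver `chunks` one at a time, reacting to listener responses.

    `classify_response` observes the listener while a chunk is delivered
    and returns a ListenerResponse; `rephrase` rewords a confusing chunk.
    Both are hypothetical stand-ins for the perception and generation
    components described in the abstract.
    """
    for chunk in chunks:
        print("speaking:", chunk)
        response = classify_response()
        if response is ListenerResponse.INTERRUPTION:
            print("(yielding the turn)")
            return
        if response is ListenerResponse.CONFUSION:
            print("speaking:", rephrase(chunk))
```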


international conference on acoustics, speech, and signal processing | 2011

Syllabification of conversational speech using Bidirectional Long-Short-Term Memory Neural Networks

Christian Landsiedel; Jens Edlund; Florian Eyben; Daniel Neiberg; Björn W. Schuller

Segmentation of speech signals is a crucial task in many types of speech analysis. We present a novel approach to segmentation at the syllable level, using a bidirectional Long Short-Term Memory neural network. The network estimates syllable-nucleus positions by regressing perceptually motivated input features onto a smooth target function, and peak selection is then applied to obtain valid nucleus positions. Performance of the model is evaluated at the level of both syllables and the vowel segments making up the syllable nuclei. The general applicability of the approach is illustrated by good results for two common databases, Switchboard and TIMIT, covering both read and spontaneous speech, and by a favourable comparison with other published results.
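As a rough illustration of the peak-selection step, the Python sketch below picks syllable-nucleus candidates from a smoothed per-frame score such as the network's regression output. The frame rate, minimum peak spacing, and threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.signal import find_peaks

def select_nuclei(scores, frame_rate=100, min_gap_s=0.06, threshold=0.3):
    """Pick syllable-nucleus times from a smoothed frame-level score curve.

    A frame is kept as a nucleus if its score is a local maximum above
    `threshold` and at least `min_gap_s` seconds from a stronger peak.
    """
    min_gap = max(1, int(min_gap_s * frame_rate))
    peaks, _ = find_peaks(np.asarray(scores), height=threshold,
                          distance=min_gap)
    return peaks / frame_rate  # nucleus times in seconds
```

Separating regression from peak picking keeps the network's training objective smooth while still yielding discrete nucleus positions at the end.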


conference of the international speech communication association | 2014

Evidence for cultural dialects in vocal emotion expression: Acoustic classification within and across five nations.

Petri Laukka; Daniel Neiberg; Hillary Anger Elfenbein

The possibility of cultural differences in the fundamental acoustic patterns used to express emotion through the voice is an unanswered question central to the larger debate about the universality versus cultural specificity of emotion. This study used emotionally inflected standard-content speech segments expressing 11 emotions produced by 100 professional actors from 5 English-speaking cultures. Machine learning simulations were employed to classify expressions based on their acoustic features, using conditions where training and testing were conducted on stimuli coming from either the same or different cultures. A wide range of emotions were classified with above-chance accuracy in cross-cultural conditions, suggesting vocal expressions share important characteristics across cultures. However, classification showed an in-group advantage with higher accuracy in within- versus cross-cultural conditions. This finding demonstrates cultural differences in expressive vocal style, and supports the dialect theory of emotions according to which greater recognition of expressions from in-group members results from greater familiarity with culturally specific expressive styles.
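As a sketch of the within- versus cross-cultural protocol (not the authors' actual pipeline), the code below trains a generic scikit-learn SVM on one culture's acoustic features and tests it on held-out samples from every culture; the classifier choice and split sizes are illustrative assumptions:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def in_group_advantage(X, y, culture, seed=0):
    """Mean within-culture minus mean cross-culture test accuracy.

    X: acoustic feature matrix; y: emotion labels; culture: per-sample
    culture label. Each culture's data is split in half so that
    within-culture accuracy is also measured on held-out samples.
    """
    splits = {}
    for c in np.unique(culture):
        idx = np.where(culture == c)[0]
        tr, te = train_test_split(idx, test_size=0.5,
                                  random_state=seed, stratify=y[idx])
        splits[c] = (tr, te)
    within, cross = [], []
    for c_train, (tr, _) in splits.items():
        clf = make_pipeline(StandardScaler(), SVC())
        clf.fit(X[tr], y[tr])
        for c_test, (_, te) in splits.items():
            acc = clf.score(X[te], y[te])
            (within if c_test == c_train else cross).append(acc)
    return float(np.mean(within) - np.mean(cross))
```

A positive return value corresponds to the in-group advantage reported in the paper.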


IEEE Transactions on Audio, Speech, and Language Processing | 2012

Exploring the Predictability of Non-Unique Acoustic-to-Articulatory Mappings

Gopal Ananthakrishnan; Olov Engwall; Daniel Neiberg

This paper explores statistical tools for analyzing the predictability of the acoustic-to-articulatory inversion of speech, using an Electromagnetic Articulography database of simultaneously recorded acoustic and articulatory data. Since speech acoustics have been shown to map to non-unique articulatory modes, the variance of the articulatory parameters is not sufficient to understand the predictability of the inverse mapping. We therefore estimate an upper bound on the conditional entropy of the articulatory distribution. This provides a probabilistic estimate of the range of articulatory values (either over a continuum or over discrete non-unique regions) for a given acoustic vector in the database. The analysis is performed for different British/Scottish English consonants with respect to which articulators (lips, jaw, or tongue) are important for producing the phoneme. The paper shows that the acoustic-articulatory mappings for the important articulators have a low upper bound on the entropy, but can still have discrete non-unique configurations.
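For concreteness, the quantity being bounded is the conditional entropy of the articulatory parameters Y given an acoustic vector x. One standard way to obtain such an upper bound, assuming the conditional distribution is modelled as a Gaussian mixture (the paper's exact estimator may differ), is:

```latex
% Conditional entropy of articulatory parameters Y given acoustic vector x:
H(Y \mid X = x) = -\int p(y \mid x)\,\log p(y \mid x)\,dy
% If p(y | x) is a K-component Gaussian mixture with weights w_k and
% covariances \Sigma_k (d = dimension of y), then, because the entropy of
% a marginal never exceeds the joint entropy with the component label Z:
H(Y \mid X = x) \le -\sum_{k=1}^{K} w_k \log w_k
    + \sum_{k=1}^{K} \frac{w_k}{2}\,\log\!\left((2\pi e)^{d}\,\lvert\Sigma_k\rvert\right)
```

The first term is the entropy of the component label and captures discrete non-uniqueness; the second averages the entropies of the individual Gaussian modes and captures the spread within each mode.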


symposium on haptic interfaces for virtual environment and teleoperator systems | 2005

A haptic enabled multimodal pre-operative planner for hip arthroplasty

Silvano Imboden; Marco Petrone; Paolo Quadrani; Cinzia Zannoni; R. Mayoral; Gordon Clapworthy; Debora Testi; Marco Viceconti; Daniel Neiberg; Nikolaos G. Tsagarakis; Darwin G. Caldwell

This paper introduces the multisense idea, with special reference to the use of haptics in the medical field and, in particular, in the planning of total hip replacement surgery. We emphasise the integration of different modalities and the capability of the multimodal system to gather and register data coming from different sources.


international conference on spoken language processing | 2006

Emotion Recognition in Spontaneous Speech Using GMMs

Daniel Neiberg; Kjell Elenius; Kornel Laskowski


Computer Speech & Language | 2011

Expression of affect in spontaneous speech: Acoustic correlates and automatic detection of irritation and resignation

Petri Laukka; Daniel Neiberg; Mimmi Forsell; Inger Karlsson; Kjell Elenius


conference of the international speech communication association | 2008

Automatic Recognition of Anger in Spontaneous Speech

Daniel Neiberg; Kjell Elenius


conference of the international speech communication association | 2008

The Acoustic to Articulation Mapping: Non-linear or Non-unique?

Daniel Neiberg; Gopal Ananthakrishnan; Olov Engwall


Fonetik 2006, Lund, Sweden, June 7-9, 2006 | 2006

Emotion Recognition in Spontaneous Speech

Daniel Neiberg; Kjell Elenius; Inger Karlsson; Kornel Laskowski

Collaboration


Dive into Daniel Neiberg's collaborations.

Top Co-Authors

Joakim Gustafson (Royal Institute of Technology)
Gopal Ananthakrishnan (Royal Institute of Technology)
Kjell Elenius (Royal Institute of Technology)
Olov Engwall (Royal Institute of Technology)