
Publication


Featured research published by Adela Barbulescu.


Motion in Games | 2014

Beyond basic emotions: expressive virtual actors with social attitudes

Adela Barbulescu; Rémi Ronfard; Gérard Bailly; Georges Gagneré; Hüseyin Çakmak

The purpose of this work is to evaluate the contribution of audio-visual prosody to the perception of complex mental states of virtual actors. We propose that global audio-visual prosodic contours - i.e. melody, rhythm and head movements over the utterance - constitute discriminant features for both the generation and recognition of social attitudes. The hypothesis is tested on an acted corpus of social attitudes in virtual actors and evaluation is done using objective measures and perceptual tests.


9th International Summer Workshop on Multimodal Interfaces (eNTERFACE) | 2013

Reactive Statistical Mapping: Towards the Sketching of Performative Control with Data

Nicolas d’Alessandro; Joëlle Tilmanne; Maria Astrinaki; Thomas Hueber; Rasmus Dall; Thierry Ravet; Alexis Moinet; Hüseyin Çakmak; Onur Babacan; Adela Barbulescu; Valentin Parfait; Victor Huguenin; Emine Sümeyye Kalaycı; Qiong Hu

This paper presents the results of our participation in the ninth eNTERFACE workshop on multimodal user interfaces. Our goal for this workshop was to bring some technologies currently used in speech recognition and synthesis to a new level, i.e. making them the core of a new HMM-based mapping system. The idea of statistical mapping has been investigated, more precisely how to use Gaussian Mixture Models and Hidden Markov Models for realtime and reactive generation of new trajectories from inputted labels, and for realtime regression in a continuous-to-continuous use case. As a result, we have developed several proofs of concept, including an incremental speech synthesiser, a tool for exploring stylistic spaces for gait and facial motion in realtime, a reactive audiovisual laughter synthesiser and a prototype demonstrating the realtime reconstruction of lower-body gait motion strictly from upper-body motion, with conservation of the stylistic properties. This project has been an opportunity to formalise HMM-based mapping, integrate several of these innovations into the Mage library and explore the development of a realtime gesture recognition tool.
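The continuous-to-continuous statistical mapping described above can be sketched as GMM-based regression: fit a joint Gaussian Mixture Model on stacked input/output vectors, then map a new input to the conditional expectation of the output under that model. This is a minimal illustrative sketch on toy data, not the workshop's code; the data, component count and helper name are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy joint data: output y is a noisy linear function of input x.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(500, 1))
y = 2.0 * x + 0.1 * rng.normal(size=(500, 1))

# Fit a joint GMM over stacked [x, y] vectors.
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(np.hstack([x, y]))

def gmm_map(x_new):
    """Hypothetical helper: E[y | x] under the joint GMM (per-component
    conditional means, weighted by each component's responsibility for x)."""
    x_new = np.atleast_2d(x_new)
    dx = x.shape[1]
    means_x, means_y = gmm.means_[:, :dx], gmm.means_[:, dx:]
    cov_xx = gmm.covariances_[:, :dx, :dx]
    cov_yx = gmm.covariances_[:, dx:, :dx]
    # Responsibility of each component given x alone (shared constants cancel).
    log_resp = np.zeros((x_new.shape[0], gmm.n_components))
    for k in range(gmm.n_components):
        diff = x_new - means_x[k]
        inv = np.linalg.inv(cov_xx[k])
        log_resp[:, k] = (np.log(gmm.weights_[k])
                          - 0.5 * np.einsum("ni,ij,nj->n", diff, inv, diff)
                          - 0.5 * np.log(np.linalg.det(cov_xx[k])))
    resp = np.exp(log_resp - log_resp.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)
    # Blend the per-component linear regressions y_k(x).
    out = np.zeros((x_new.shape[0], means_y.shape[1]))
    for k in range(gmm.n_components):
        reg = cov_yx[k] @ np.linalg.inv(cov_xx[k])
        out += resp[:, k:k + 1] * (means_y[k] + (x_new - means_x[k]) @ reg.T)
    return out

pred = gmm_map(np.array([[0.5]]))  # y ≈ 2x, so the mapping should land near 1.0
```

Because each mapping call only evaluates Gaussian densities and small matrix products, this kind of regression can run per-frame, which is what makes it suitable for the reactive, realtime use cases the paper targets.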


IEEE Virtual Reality Conference | 2017

A system for creating virtual reality content from make-believe games

Adela Barbulescu; Maxime Garcia; Antoine Begault; Marie-Paule Cani; Maxime Portaz; Alexis Viand; Romain Dulery; Laurence Boissieux; Pierre Heinish; Rémi Ronfard; Dominique Vaufreydaz

Pretend play is a storytelling technique, naturally used from very young ages, which relies on object substitution to represent the characters of the imagined story. We propose a system which assists the storyteller by generating a virtualized story from a recorded dialogue performed with 3D printed figurines. We capture the gestures and facial expressions of the storyteller using Kinect cameras and IMU sensors and transfer them to their virtual counterparts in the story-world. As a proof-of-concept, we demonstrate our system with an improvised story involving a prince and a witch, which was successfully recorded and transferred into 3D animation.


Speech Communication | 2017

Which prosodic features contribute to the recognition of dramatic attitudes?

Adela Barbulescu; Rémi Ronfard; Gérard Bailly

In this work we explore the capability of audiovisual prosodic features (such as fundamental frequency, head motion or facial expressions) to discriminate among different dramatic attitudes. We extract the audiovisual parameters from an acted corpus of attitudes and structure them as frame, syllable and sentence-level features. Using Linear Discriminant Analysis classifiers, we show that prosodic features present a higher discriminating rate at sentence-level. This finding is confirmed by the perceptual evaluation results of audio and/or visual stimuli obtained from the recorded attitudes.


IEEE Computer Graphics and Applications | 2017

A Generative Audio-Visual Prosodic Model for Virtual Actors

Adela Barbulescu; Rémi Ronfard; Gérard Bailly

An important problem in computer animation of virtual characters is the expression of complex mental states during conversation using the coordinated prosody of voice, rhythm, facial expressions, and head and gaze motion. In this work, the authors propose an expressive conversion method for generating natural speech and facial animation in a variety of recognizable attitudes, using neutral speech and animation as input. Their method works by automatically learning prototypical prosodic contours at the sentence level from an original dataset of dramatic attitudes.


Conference of the International Speech Communication Association | 2016

Characterization of Audiovisual Dramatic Attitudes

Adela Barbulescu; Rémi Ronfard; Gérard Bailly

In this work we explore the capability of audiovisual parameters (such as voice frequency, rhythm, head motion or facial expressions) to discriminate among different dramatic attitudes. We extract the audiovisual parameters from an acted corpus of attitudes and structure them as frame, syllable, and sentence-level features. Using Linear Discriminant Analysis classifiers, we show that sentence-level features present a higher discriminating rate among the attitudes and are less dependent on the speaker than frame and syllable features. We also compare the classification results with the perceptual evaluation tests, showing that voice frequency is correlated to the perceptual results for all attitudes, while other features, such as head motion, contribute differently, depending both on the attitude and the speaker.
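The classification setup described above, attitude labels predicted from sentence-level prosodic feature vectors with an LDA classifier, can be sketched as follows. This is an illustrative sketch on synthetic data, not the paper's corpus or code; the attitude labels, feature count and separability are assumptions made for the example.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for sentence-level prosodic features (e.g. f0 statistics,
# rhythm, head-motion statistics). Labels are hypothetical attitude classes.
rng = np.random.default_rng(1)
n_per_class, n_feats = 60, 6
classes = ["declarative", "ironic", "seductive"]

X_parts, y = [], []
for label in classes:
    centre = rng.normal(scale=2.0, size=n_feats)  # class-specific prosodic profile
    X_parts.append(centre + rng.normal(scale=1.0, size=(n_per_class, n_feats)))
    y += [label] * n_per_class
X = np.vstack(X_parts)

# Cross-validated LDA accuracy; chance level here is 1/3.
lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
mean_acc = scores.mean()
```

Comparing `mean_acc` across feature sets built at frame, syllable and sentence level is the kind of experiment that supports the paper's finding that sentence-level features discriminate best.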


12th International Conference on Auditory-Visual Speech Processing (AVSP 2013) | 2013

Audio-Visual Speaker Conversion using Prosody Features

Adela Barbulescu; Thomas Hueber; Gérard Bailly; Rémi Ronfard


WOCCI 2017 - 6th Workshop on Child Computer Interaction at ICMI 2017 - 19th ACM International Conference on Multimodal Interaction | 2017

Figurines, a multimodal framework for tangible storytelling

Maxime Portaz; Maxime Garcia; Adela Barbulescu; Antoine Begault; Laurence Boissieux; Marie-Paule Cani; Rémi Ronfard; Dominique Vaufreydaz


1st Joint Conference on Facial Analysis, Animation and Auditory-Visual Speech Processing (FAAVSP 2015) | 2015

Audiovisual Generation of Social Attitudes from Neutral Stimuli

Adela Barbulescu; Gérard Bailly; Rémi Ronfard; Maël Pouget


6th Workshop on Intelligent Cinematography and Editing (WICED 2017) | 2017

Making Movies from Make-Believe Games

Adela Barbulescu; Maxime Garcia; Dominique Vaufreydaz; Marie-Paule Cani; Rémi Ronfard

Collaboration


Dive into Adela Barbulescu's collaborations.

Top Co-Authors

Gérard Bailly, Centre national de la recherche scientifique

Thomas Hueber, Centre national de la recherche scientifique