Atef Ben Youssef
Grenoble Institute of Technology
Publications
Featured research published by Atef Ben Youssef.
intelligent virtual agents | 2015
Atef Ben Youssef; Mathieu Chollet; Hazaël Jones; Nicolas Sabouret; Catherine Pelachaud; Magalie Ochs
This paper presents a socially adaptive virtual agent that can adapt its behaviour according to social constructs (e.g. attitude, relationship) that are updated depending on the behaviour of its interlocutor. We consider the context of job interviews, with the virtual agent playing the role of the recruiter. The evaluation of our approach is based on a comparison of the socially adaptive agent with a simple scripted agent and with an emotionally reactive one. Videos of these three agents in interaction were created and evaluated by 83 participants. This subjective evaluation shows that the simulation and expression of social attitude is perceived by users and affects the evaluation of the agent’s credibility. We also found that while the virtual agent’s emotion expression has an immediate impact on the user’s experience, the impact of its attitude expression becomes stronger after a few speaking turns.
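The adaptive behaviour described above, in which social constructs such as attitude are updated from the interlocutor's behaviour and then drive the agent's expression, can be conveyed with a minimal sketch. The class, cue names, and update rule below are hypothetical illustrations, not the authors' actual model.

```python
# Minimal illustrative sketch (hypothetical, not the paper's model):
# a recruiter agent keeps simple social constructs (attitude, relationship)
# and nudges them after each of the interlocutor's speaking turns.
from dataclasses import dataclass

@dataclass
class SocialState:
    attitude: float = 0.0      # -1 = hostile, +1 = friendly
    relationship: float = 0.0  # -1 = distant, +1 = close

def update_social_state(state: SocialState, turn: dict, rate: float = 0.2) -> SocialState:
    """Update social constructs from observed cues of the interlocutor's turn.

    `turn` is assumed to contain normalised cues in [-1, 1],
    e.g. {"politeness": 0.6, "engagement": -0.2}.
    """
    cue = 0.5 * turn.get("politeness", 0.0) + 0.5 * turn.get("engagement", 0.0)
    state.attitude += rate * (cue - state.attitude)
    state.relationship += 0.5 * rate * (cue - state.relationship)
    return state

def select_behaviour(state: SocialState) -> str:
    """Map the current attitude to a coarse expressive behaviour class."""
    if state.attitude > 0.3:
        return "warm"       # e.g. smiles, head nods
    if state.attitude < -0.3:
        return "dominant"   # e.g. frowns, fewer back-channels
    return "neutral"

# Example turn: the candidate is polite but not very engaged.
state = SocialState()
state = update_social_state(state, {"politeness": 0.6, "engagement": -0.1})
print(select_behaviour(state))
```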
intelligent virtual agents | 2013
Atef Ben Youssef; Hiroshi Shimodaira; David A. Braude
It is known that subjects vary in their head movements. This paper presents an analysis of this variation across tasks and speakers and of its impact on head motion synthesis. Head and articulatory movements measured by an electromagnetic articulograph (EMA) and recorded synchronously with audio were used. A data set of speech from 12 people recorded on different tasks confirms that head motion varies over tasks and speakers. Experimental results confirmed that the proposed models were capable of learning and synthesising task-dependent head motion from speech. A subjective evaluation of the synthesised head motion shows that models trained on the matched task outperform models trained on a mismatched task, and that free-speech data yield models whose predicted motion participants prefer over models trained on read speech.
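As a rough illustration of synthesising head motion from speech, the sketch below regresses head rotation angles from acoustic features, with one model per task. The feature choice (MFCCs), context windowing, and ridge regression are assumptions made for illustration only, not the models proposed in the paper.

```python
# Illustrative sketch (assumed: MFCC input, ridge regression, one model per task);
# not the paper's actual head-motion models.
import numpy as np
from sklearn.linear_model import Ridge

def make_context(features: np.ndarray, width: int = 5) -> np.ndarray:
    """Stack +/- `width` frames of acoustic features as regression input."""
    padded = np.pad(features, ((width, width), (0, 0)), mode="edge")
    return np.hstack([padded[i:i + len(features)] for i in range(2 * width + 1)])

def train_task_model(mfcc: np.ndarray, head_angles: np.ndarray) -> Ridge:
    """Fit a per-task mapping from speech features to head rotation (roll, pitch, yaw)."""
    model = Ridge(alpha=1.0)
    model.fit(make_context(mfcc), head_angles)
    return model

# Toy example with random data standing in for one speaker on one task.
rng = np.random.default_rng(0)
mfcc = rng.normal(size=(500, 13))   # 500 frames x 13 MFCCs
head = rng.normal(size=(500, 3))    # roll, pitch, yaw per frame
model = train_task_model(mfcc, head)
predicted = model.predict(make_context(mfcc))
print(predicted.shape)  # (500, 3)
```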
Archive | 2011
Gérard Bailly; Pierre Badin; Lionel Revéret; Atef Ben Youssef
The production of speech sounds entails coordinated action of the respiratory system to generate the air stream conditions needed for vocal fold vibration at the larynx, together with complex neuromuscular control of the vocal tract articulators - such as the tongue, lips, jaw, and velum - that shape the vocal tract continuously through time. When we speak, we have access to a large variety of signals that inform us about the current state of the production process. These include somesthetic signals - motor commands available as copies of efferent motoneural commands, proprioceptive signals that give access, for example, to muscular elongation, acoustic structure conveyed via tissue vibration, and haptic signals delivered by surface tissues - as well as exteroceptive acoustic information delivered by the ears. When we speak, the interlocutor has access to exteroceptive acoustic and visual information about our articulation. Thus both speakers and listeners have access to a great variety of redundant and complementary information associated with speech movements. In this chapter, we describe and discuss approaches to examining the visible characteristics of speech production and their link with other sensory information, in particular articulation and acoustics.
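One simple way to quantify the link between articulation and acoustics mentioned above is to measure how well articulatory trajectories can be predicted from the synchronous acoustic signal. The sketch below does this with a plain linear mapping between MFCCs and EMA coil coordinates; the data shapes and the mapping are assumptions for illustration, not a method from the chapter.

```python
# Illustrative sketch (assumed data shapes, linear least-squares mapping;
# not a method from the chapter): estimate how much articulatory movement
# is predictable from the synchronous acoustics.
import numpy as np

def linear_mapping_r2(acoustic: np.ndarray, articulatory: np.ndarray) -> float:
    """Fit articulatory = acoustic @ W by least squares and return the mean R^2."""
    X = np.hstack([acoustic, np.ones((len(acoustic), 1))])  # add bias term
    W, *_ = np.linalg.lstsq(X, articulatory, rcond=None)
    pred = X @ W
    ss_res = ((articulatory - pred) ** 2).sum(axis=0)
    ss_tot = ((articulatory - articulatory.mean(axis=0)) ** 2).sum(axis=0)
    return float(np.mean(1.0 - ss_res / ss_tot))

# Toy example: 1000 synchronous frames of 13 MFCCs and 12 EMA coordinates
# (e.g. x/y positions of 6 coils on the tongue, lips, and jaw).
rng = np.random.default_rng(1)
mfcc = rng.normal(size=(1000, 13))
ema = mfcc @ rng.normal(size=(13, 12)) + 0.5 * rng.normal(size=(1000, 12))
print(f"mean R^2: {linear_mapping_r2(mfcc, ema):.2f}")
```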
conference of the international speech communication association | 2009
Atef Ben Youssef; Pierre Badin; Gérard Bailly; Panikos Heracleous
conference of the international speech communication association | 2010
Pierre Badin; Atef Ben Youssef; Gérard Bailly; Frédéric Elisei; Thomas Hueber
conference of the international speech communication association | 2013
Atef Ben Youssef; Hiroshi Shimodaira; David A. Braude
conference of the international speech communication association | 2012
Thomas Hueber; Atef Ben Youssef; Gérard Bailly; Pierre Badin; Frédéric Elisei
conference of the international speech communication association | 2010
Atef Ben Youssef; Pierre Badin; Gérard Bailly
conference of the international speech communication association | 2013
David A. Braude; Hiroshi Shimodaira; Atef Ben Youssef
9th International Conference on Auditory-Visual Speech Processing (AVSP 2010) | 2010
Atef Ben Youssef; Pierre Badin; Gérard Bailly