Publication


Featured research published by Hüseyin Çakmak.


international conference on acoustics, speech, and signal processing | 2013

Evaluation of HMM-based laughter synthesis

Jérôme Urbain; Hüseyin Çakmak; Thierry Dutoit

In this paper we explore the potential of Hidden Markov Models (HMMs) for laughter synthesis. Several versions of HMMs are developed, with varying contextual information and algorithms for estimating the parameters of the source-filter synthesis model. These methods are compared, in a perceptual test, with the naturalness of actual human laughs and copy-synthesis laughs. The evaluation shows that 1) the addition of contextual information did not increase naturalness, 2) the proposed method is significantly less natural than human and copy-synthesized laughs, but 3) it significantly improves laughter synthesis naturalness compared to the state of the art. The evaluation also demonstrates that the durations of the laughter units can be learnt efficiently by the HMM-based parametric synthesis methods.
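
For readers unfamiliar with HMM-based parametric synthesis, here is a minimal sketch of the core idea: train an HMM on acoustic feature sequences, then generate a parameter trajectory from the state means and self-transition probabilities. It uses the hmmlearn library on toy features; the paper's actual system relies on context-dependent models and source-filter vocoding, so this is an illustration only, not the authors' implementation.

```python
# Minimal sketch of HMM-based parametric synthesis (illustrative only).
# Train one HMM per laughter phone on acoustic features, then decode a
# parameter trajectory by holding each state's mean for its expected
# duration. Real systems (e.g. HTS) add context and dynamic features.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_phone_hmm(feature_sequences, n_states=3):
    """Fit a diagonal-covariance HMM on concatenated feature frames."""
    X = np.vstack(feature_sequences)
    lengths = [len(s) for s in feature_sequences]
    hmm = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
    hmm.fit(X, lengths)
    return hmm

def generate_trajectory(hmm):
    """Crude parameter generation: each state emits its mean for the
    expected duration implied by its self-transition probability."""
    frames = []
    for s in range(hmm.n_components):
        p_stay = hmm.transmat_[s, s]
        expected_dur = max(1, int(round(1.0 / max(1e-6, 1.0 - p_stay))))
        frames.extend([hmm.means_[s]] * expected_dur)
    return np.array(frames)

# Toy usage with random "MFCC-like" 13-dimensional features.
rng = np.random.default_rng(0)
seqs = [rng.normal(size=(40, 13)) for _ in range(5)]
traj = generate_trajectory(train_phone_hmm(seqs))
print(traj.shape)
```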


IEEE Journal of Selected Topics in Signal Processing | 2014

Arousal-Driven Synthesis of Laughter

Jérôme Urbain; Hüseyin Çakmak; Aurelie Charlier; Maxime Denti; Thierry Dutoit; Stéphane Dupont

This paper presents the adaptation of HMM-based speech synthesis to laughter signals. Acoustic laughter synthesis HMMs are built with only 3 minutes of laughter data. An evaluation experiment shows that the method achieves significantly better performance than previous works. In addition, the first method to generate laughter phonetic transcriptions from high-level signals (in our case, arousal signals) is described. This makes it possible to generate new laughter phonetic sequences that do not exist in the original data. The generated phonetic sequences are used as input for HMM synthesis and reach a perceived naturalness similar to that of laughs synthesized from existing phonetic transcriptions. These methods open promising perspectives for the integration of natural laughs in man-machine interfaces, and could also be applied to other vocalizations (sighs, cries, coughs, etc.).
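
The abstract does not spell out how arousal is mapped to phones, so the snippet below is a purely hypothetical illustration of deriving a laughter phonetic transcription from a high-level arousal curve: high-arousal spans become voiced bursts, low-arousal spans become inhalations, and durations follow the span lengths. The phone labels, threshold and frame rate are all assumptions.

```python
# Hypothetical arousal-to-transcription mapping (not the paper's method):
# threshold the arousal curve and emit one (phone, duration) pair per
# contiguous span above or below the threshold.
import numpy as np

def arousal_to_phones(arousal, frame_s=0.01, threshold=0.5):
    """Return a list of (phone, duration_s) pairs from an arousal signal."""
    phones = []
    labels = arousal > threshold
    start = 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            dur = (i - start) * frame_s
            phones.append(("burst" if labels[start] else "inhale", dur))
            start = i
    return phones

t = np.linspace(0, 2, 200)
arousal = 0.5 + 0.4 * np.sin(2 * np.pi * 1.5 * t)  # toy arousal contour
print(arousal_to_phones(arousal))
```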


affective computing and intelligent interaction | 2013

Automatic Phonetic Transcription of Laughter and Its Application to Laughter Synthesis

Jérôme Urbain; Hüseyin Çakmak; Thierry Dutoit

In this paper, automatic phonetic transcription of laughter is achieved with the help of Hidden Markov Models (HMMs). The models are evaluated in a speaker-independent way. Several measures for evaluating the quality of the transcriptions are discussed, some focusing on the recognized sequences (without paying attention to the segmentation of the phones), others taking into account only the segmentation boundaries (without involving the phonetic labels). Although the results are far from perfect recognition, it is shown that using this kind of automatic transcription does not impair the naturalness of laughter synthesis too much. The paper opens interesting perspectives in automatic laughter analysis as well as in laughter synthesis, as it will enable faster development of laughter synthesis on large sets of laughter data.
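
The two families of measures discussed above can be made concrete as follows: a sequence-level phone error rate that ignores segmentation, and a boundary F-score with a tolerance window that ignores labels. These are generic, textbook formulations, not necessarily the exact definitions used in the paper.

```python
# Generic transcription-quality measures: edit-distance phone error rate
# (segmentation ignored) and tolerance-windowed boundary F-score
# (labels ignored).
def phone_error_rate(ref, hyp):
    """Levenshtein distance between phone sequences, divided by |ref|."""
    d = [[i + j if i * j == 0 else 0 for j in range(len(hyp) + 1)]
         for i in range(len(ref) + 1)]
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]))
    return d[-1][-1] / len(ref)

def boundary_f1(ref_times, hyp_times, tol=0.02):
    """Match hypothesis boundaries to references within `tol` seconds."""
    matched = sum(any(abs(h - r) <= tol for r in ref_times) for h in hyp_times)
    precision = matched / len(hyp_times)
    recall = matched / len(ref_times)
    return 2 * precision * recall / (precision + recall)

print(phone_error_rate(["h", "a", "h", "a"], ["h", "a", "a"]))  # 0.25
print(boundary_f1([0.0, 0.31, 0.64], [0.01, 0.30, 0.70]))       # ~0.67
```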


intelligent technologies for interactive entertainment | 2013

Multimodal Analysis of Laughter for an Interactive System

Jérôme Urbain; Radoslaw Niewiadomski; Maurizio Mancini; Harry J. Griffin; Hüseyin Çakmak; Laurent Ach; Gualtiero Volpe

In this paper, we focus on the development of new methods to detect and analyze laughter, in order to enhance human-computer interactions. First, the general architecture of such a laughter-enabled application is presented. Then, we propose the use of two new modalities, namely body movements and respiration, to enrich the audiovisual laughter detection and classification phase. These additional signals are acquired using easily constructed, affordable sensors. Features to characterize laughter from body movements are proposed, as well as a method to detect laughter from a measure of thoracic circumference.
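
As an illustration of the respiration modality, here is a hypothetical detector sketch based on the intuition that laughter produces faster, larger-amplitude oscillations of thoracic circumference than quiet breathing. The energy threshold, window size and sampling rate are assumptions; the paper's actual detection method is not reproduced here.

```python
# Hypothetical laughter detector on a thoracic circumference signal:
# flag frames where the short-term energy of the first difference
# greatly exceeds its median (quiet breathing is slow and smooth).
import numpy as np

def detect_laughter(thorax, fs=50, win_s=0.5, k=3.0):
    """Boolean mask of frames whose differenced-signal energy exceeds
    k times the median energy."""
    diff = np.diff(thorax, prepend=thorax[0])
    win = int(win_s * fs)
    energy = np.convolve(diff ** 2, np.ones(win) / win, mode="same")
    return energy > k * np.median(energy)

fs = 50
t = np.arange(0, 10, 1 / fs)
breathing = 0.5 * np.sin(2 * np.pi * 0.25 * t)        # ~15 breaths/min
laugh = 0.8 * np.sin(2 * np.pi * 4.0 * t) * (t > 6)   # fast bouts after 6 s
mask = detect_laughter(breathing + laugh, fs=fs)
print(mask.mean())  # fraction of frames flagged as laughter
```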


international conference on acoustics, speech, and signal processing | 2014

Evaluation of HMM-based visual laughter synthesis

Hüseyin Çakmak; Jérôme Urbain; Joëlle Tilmanne; Thierry Dutoit

In this paper we apply speaker-dependent training of Hidden Markov Models (HMMs) to audio and visual laughter synthesis separately. The two modalities are synthesized with a forced-durations approach and are then combined to render audio-visual laughter on a 3D avatar. This paper focuses on the visual synthesis of laughter and its perceptual evaluation when combined with synthesized audio laughter. HMM-based synthesis has previously been applied successfully to audio and visual speech, and the extrapolation to audio laughter synthesis has already been done. This paper shows that it is possible to extrapolate to visual laughter synthesis as well.
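
A minimal sketch of the forced-durations idea follows: both modalities are rendered against the same externally imposed phone durations, so the acoustic parameter stream and the facial animation stream share one timeline even though their frame rates differ. The per-phone generators below are placeholders, not the paper's trained models.

```python
# Forced-durations rendering sketch: one shared transcription with
# imposed durations drives two modality-specific frame generators.
import numpy as np

def render(transcription, generator, fps):
    """transcription: list of (phone, duration_s); generator(phone, n)
    returns n frames of modality-specific parameters."""
    frames = [generator(phone, max(1, int(round(dur * fps))))
              for phone, dur in transcription]
    return np.vstack(frames)

trans = [("fricative", 0.12), ("burst", 0.20), ("inhale", 0.35)]
audio = render(trans, lambda p, n: np.zeros((n, 13)), fps=200)  # 5 ms frames
video = render(trans, lambda p, n: np.zeros((n, 6)), fps=25)    # blendshapes
print(audio.shape, video.shape)  # same total duration, different rates
```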


motion in games | 2014

Beyond basic emotions: expressive virtual actors with social attitudes

Adela Barbulescu; Rémi Ronfard; Gérard Bailly; Georges Gagneré; Hüseyin Çakmak

The purpose of this work is to evaluate the contribution of audio-visual prosody to the perception of complex mental states of virtual actors. We propose that global audio-visual prosodic contours (i.e. melody, rhythm and head movements over the utterance) constitute discriminant features for both the generation and recognition of social attitudes. The hypothesis is tested on an acted corpus of social attitudes in virtual actors, and evaluation is done using objective measures and perceptual tests.
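
As a sketch of the recognition half of this claim, the snippet below classifies two toy attitudes from global contour descriptors (mean, slope and range of F0 and head pitch) with linear discriminant analysis. Both the feature set and the classifier are assumptions made for illustration; the objective measures used in the paper may differ.

```python
# Toy attitude recognition from global prosodic-contour descriptors.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def contour_features(f0, head_pitch):
    """Global contour descriptors per utterance: mean, slope, range."""
    x = np.arange(len(f0))
    return [f0.mean(), np.polyfit(x, f0, 1)[0], np.ptp(f0),
            head_pitch.mean(), np.ptp(head_pitch)]

rng = np.random.default_rng(1)
X = np.array([contour_features(rng.normal(120 + 30 * y, 10, 100),
                               rng.normal(5 * y, 2, 100))
              for y in (0, 1) for _ in range(20)])
y = np.repeat([0, 1], 20)  # two synthetic attitudes
clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.score(X, y))
```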


Toward Robotic Socially Believable Behaving Systems (I) | 2016

Laughter Research: A Review of the ILHAIRE Project

Stéphane Dupont; Hüseyin Çakmak; William Curran; Thierry Dutoit; Jennifer Hofmann; Olivier Pietquin; Tracey Platt; Willibald Ruch; Jérôme Urbain

Laughter is everywhere, so much so that we often do not even notice it. First, laughter has a strong connection with humour. Most of us seek out laughter and people who make us laugh, and it is what we do when we gather together as groups, relaxing and having a good time. But laughter also plays an important role in making sure we interact with each other smoothly. It provides social bonding signals that allow our conversations to flow seamlessly between topics, help us repair conversations that are breaking down, and end our conversations on a positive note.


international symposium on signal processing and information technology | 2015

An HMM approach for synthesizing amused speech with a controllable intensity of smile

Kevin El Haddad; Hüseyin Çakmak; Alexis Moinet; Stéphane Dupont; Thierry Dutoit

Smile is not only a visual expression. When it occurs together with speech, it also alters the acoustic realization of that speech. Being able to synthesize speech altered by the expression of smile can hence be an important contribution to adding naturalness and expressiveness to interactive systems. In this work, we present a first attempt to develop a Hidden Markov Model (HMM)-based synthesis system that allows the degree of smile in speech to be controlled. It relies on a model interpolation technique, enabling speech-smile sentences with various smiling intensities to be generated. Sentences synthesized using this approach have been evaluated through a perceptual test, and encouraging results are reported.
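
The model interpolation idea can be illustrated compactly: blend the Gaussian output parameters of a neutral-speech model and a smiled-speech model with a weight alpha that plays the role of smile intensity. Real HTS-style interpolation also covers durations and dynamic features; the sketch below, with made-up parameter shapes, shows only the parameter blend.

```python
# Model interpolation sketch: per-state Gaussian means and (diagonal)
# covariances of two HMMs are linearly blended with weight alpha.
import numpy as np

def interpolate_gaussians(means_a, covs_a, means_b, covs_b, alpha):
    """alpha = 0 gives model A (neutral), alpha = 1 gives model B (smiled)."""
    means = (1 - alpha) * means_a + alpha * means_b
    covs = (1 - alpha) * covs_a + alpha * covs_b
    return means, covs

neutral_means, neutral_covs = np.zeros((5, 13)), np.ones((5, 13))
smiled_means, smiled_covs = np.ones((5, 13)), 2 * np.ones((5, 13))
for alpha in (0.0, 0.3, 0.7, 1.0):  # increasing smile intensity
    m, c = interpolate_gaussians(neutral_means, neutral_covs,
                                 smiled_means, smiled_covs, alpha)
    print(alpha, m[0, 0], c[0, 0])
```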


international conference on acoustics, speech, and signal processing | 2015

Synchronization rules for HMM-based audio-visual laughter synthesis

Hüseyin Çakmak; Jérôme Urbain; Thierry Dutoit

In this paper we propose synchronization rules between acoustic and visual laughter synthesis systems. Previous works have addressed acoustic and visual laughter synthesis separately, following an HMM-based approach. The need for synchronization rules comes from the constraint that, for laughter, HMM-based synthesis cannot be performed with a unified system in which common transcriptions are used, as has been shown to be possible for audio-visual speech synthesis. The acoustic and visual models are therefore trained independently, without any synchronization constraints. In this work, we propose rules derived from the analysis of audio and visual laughter transcriptions in order to generate a visual laughter transcription corresponding to given audio laughter data.
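
To make the notion of synchronization rules concrete, here is a hypothetical rule table mapping audio laughter phone classes to visual gesture labels while keeping the audio timing. The actual rules in the paper are derived from corpus analysis and are not reproduced here; all labels below are invented.

```python
# Hypothetical rule-based audio-to-visual transcription mapping.
AUDIO_TO_VISUAL = {  # assumed mapping, for illustration only
    "burst": "jaw_open",
    "fricative": "smile",
    "inhale": "head_back",
    "silence": "neutral",
}

def audio_to_visual(audio_trans):
    """audio_trans: list of (phone, start_s, end_s) tuples."""
    return [(AUDIO_TO_VISUAL.get(phone, "neutral"), start, end)
            for phone, start, end in audio_trans]

audio = [("burst", 0.0, 0.2), ("fricative", 0.2, 0.35), ("inhale", 0.35, 0.7)]
print(audio_to_visual(audio))
```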


9th International Summer Workshop on Multimodal Interfaces (eNTERFACE) | 2013

Reactive Statistical Mapping: Towards the Sketching of Performative Control with Data

Nicolas d’Alessandro; Joëlle Tilmanne; Maria Astrinaki; Thomas Hueber; Rasmus Dall; Thierry Ravet; Alexis Moinet; Hüseyin Çakmak; Onur Babacan; Adela Barbulescu; Valentin Parfait; Victor Huguenin; Emine Sümeyye Kalaycı; Qiong Hu

This paper presents the results of our participation in the ninth eNTERFACE workshop on multimodal user interfaces. Our target for this workshop was to bring some technologies currently used in speech recognition and synthesis to a new level, i.e. to make them the core of a new HMM-based mapping system. The idea of statistical mapping has been investigated, more precisely how to use Gaussian Mixture Models and Hidden Markov Models for realtime, reactive generation of new trajectories from input labels and for realtime regression in a continuous-to-continuous use case. As a result, we have developed several proofs of concept, including an incremental speech synthesiser, software for exploring stylistic spaces for gait and facial motion in realtime, a reactive audiovisual laughter synthesiser, and a prototype demonstrating the realtime reconstruction of lower-body gait motion strictly from upper-body motion, with conservation of the stylistic properties. This project has been the opportunity to formalise HMM-based mapping, integrate several of these innovations into the Mage library and explore the development of a realtime gesture recognition tool.
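
The "realtime regression in a continuous-to-continuous use case" mentioned above is commonly realized with Gaussian mixture regression: fit a GMM on joint input-output vectors offline, then compute the conditional expectation E[y|x] at runtime. The sketch below follows that standard recipe; the dimensions and the upper-body-to-lower-body reading are illustrative assumptions, not the project's exact setup.

```python
# Gaussian mixture regression sketch: offline GMM fit on joint (x, y),
# runtime prediction as a responsibility-weighted sum of per-component
# conditional means.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def gmr(gmm, x, dx):
    """Conditional expectation E[y | x] under a joint GMM on [x, y]."""
    w = np.empty(gmm.n_components)
    cond = np.empty((gmm.n_components, gmm.means_.shape[1] - dx))
    for k in range(gmm.n_components):
        mu, S = gmm.means_[k], gmm.covariances_[k]
        mu_x, mu_y = mu[:dx], mu[dx:]
        Sxx, Syx = S[:dx, :dx], S[dx:, :dx]
        w[k] = gmm.weights_[k] * multivariate_normal.pdf(x, mu_x, Sxx)
        cond[k] = mu_y + Syx @ np.linalg.solve(Sxx, x - mu_x)
    w /= w.sum()
    return w @ cond

rng = np.random.default_rng(2)
x_train = rng.uniform(-1, 1, (500, 2))             # e.g. upper-body features
y_train = np.sin(3 * x_train) + 0.05 * rng.normal(size=(500, 2))
gmm = GaussianMixture(n_components=8, covariance_type="full").fit(
    np.hstack([x_train, y_train]))
print(gmr(gmm, np.array([0.3, -0.2]), dx=2))       # predicted lower-body y
```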

Collaboration


Dive into Hüseyin Çakmak's collaboration.

Top Co-Authors

Olivier Pietquin

Institut Universitaire de France
