Joëlle Tilmanne
University of Mons
Publications
Featured research published by Joëlle Tilmanne.
Journal of Sleep Research | 2009
Joëlle Tilmanne; Jérôme Urbain; Mayuresh V. Kothare; Alain Vande Wouwer; Sanjeev V. Kothare
The aim of this study was to investigate two new scoring algorithms employing artificial neural networks and decision trees for distinguishing sleep and wake states in infants using actigraphy, and to validate and compare the performance of the proposed algorithms with known actigraphy scoring algorithms. The study used a previously recorded longitudinal physiological infant data set from the Collaborative Home Infant Monitoring Evaluation (CHIME) study conducted between 1994 and 1998 [ http://dccwww.bumc.bu.edu/ChimeNisp/Main_Chime.asp ; Sleep 26 (1997) 553 ] at five clinical sites around the USA. The original CHIME data set contains recordings of 1079 infants <1 year old. In our study, we used the overnight polysomnography scored data and ankle actimeter (Alice 3) raw data for 354 infants from this data set. The participants were heterogeneous and grouped into four categories: healthy term, preterm, siblings of SIDS and infants with apparent life‐threatening events (apnea of infancy). The selection of the most discriminant actigraphy features was carried out using Fisher's discriminant analysis. Approximately 80% of all the epochs were used to train the artificial neural network and decision tree models, which were then validated on the remaining 20% of the epochs. The use of artificial neural networks and decision trees was able to capture potentially nonlinear classification characteristics, compared to the previously reported linear combination methods, and hence showed improved performance. The quality of sleep–wake scoring was further improved by including more wake epochs in the training phase and by employing rescoring rules to remove artifacts. The large size of the database (approximately 337 000 epochs for 354 patients) provided a solid basis for determining the efficacy of actigraphy in sleep scoring. The study also suggested that artificial neural networks and decision trees could be much more routinely utilized in the context of clinical sleep research.
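A minimal sketch of the decision-tree branch of this approach, in Python with scikit-learn; the features, labels, and model settings below are placeholders, since the abstract does not specify the CHIME features or the Fisher-selected subset:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder data: rows are actigraphy epochs described by a few
# activity-count features; labels are 0 = wake, 1 = sleep.
X = rng.random((10_000, 8))
y = rng.integers(0, 2, size=10_000)

# Roughly the 80/20 train/validation split described in the abstract.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# class_weight="balanced" mimics giving more weight to the rarer wake epochs.
clf = DecisionTreeClassifier(max_depth=6, class_weight="balanced")
clf.fit(X_train, y_train)
print("validation accuracy:", accuracy_score(y_val, clf.predict(X_val)))
```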
Human Factors in Computing Systems | 2016
Marco Gillies; Rebecca Fiebrink; Atau Tanaka; Jérémie Garcia; Frédéric Bevilacqua; Alexis Heloir; Fabrizio Nunnari; Wendy E. Mackay; Saleema Amershi; Bongshin Lee; Nicolas D'Alessandro; Joëlle Tilmanne; Todd Kulesza; Baptiste Caramiaux
Machine learning is one of the most important and successful techniques in contemporary computer science. It involves the statistical inference of models (such as classifiers) from data. It is often conceived in a very impersonal way, with algorithms working autonomously on passively collected data. However, this viewpoint hides considerable human work: tuning the algorithms, gathering the data, and even deciding what should be modeled in the first place. Examining machine learning from a human-centered perspective includes explicitly recognizing this human work, as well as reframing machine learning workflows based on situated human working practices and exploring the co-adaptation of humans and systems. A human-centered understanding of machine learning in human contexts can lead not only to more usable machine learning tools, but to new ways of framing learning computationally. This workshop will bring together researchers to discuss these issues and suggest future research questions aimed at creating a human-centered approach to machine learning.
EURASIP Journal on Advances in Signal Processing | 2012
Joëlle Tilmanne; Alexis Moinet; Thierry Dutoit
In this work we present an expressive gait synthesis system based on hidden Markov models (HMMs), following and modifying a procedure originally developed for speaking-style adaptation in speech synthesis. A large database of neutral motion capture walk sequences was used to train an HMM of the average walk. The model was then used for automatic adaptation to a particular walking style using only a small amount of training data from the target style. The open-source toolkit that we adapted for motion modeling also enabled us to take into account the dynamics of the data and to model accurately the duration of each HMM state. We also address the assessment issue and propose a procedure for qualitative user evaluation of the synthesized sequences. Our tests show that the style of these sequences can easily be recognized and that the sequences look natural to the evaluators.
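A sketch of the first stage (training an "average walk" HMM on motion-capture feature vectors), using hmmlearn as a stand-in for the speech-synthesis toolkit the paper adapted; style adaptation and explicit duration modeling are omitted, and the data shapes are illustrative:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

# Placeholder database: 50 neutral walk sequences of 120 frames each,
# every frame a 30-dimensional joint-angle feature vector.
sequences = [rng.random((120, 30)) for _ in range(50)]
X = np.concatenate(sequences)
lengths = [len(s) for s in sequences]

# Fit a 10-state Gaussian HMM with diagonal covariances as the "average walk".
average_walk = GaussianHMM(n_components=10, covariance_type="diag", n_iter=20)
average_walk.fit(X, lengths)

# Sampling gives a (coarse) synthetic trajectory from the average model;
# the paper instead generates smooth trajectories using dynamic features.
frames, states = average_walk.sample(120)
print(frames.shape)  # (120, 30)
```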
Motion in Games | 2010
Joëlle Tilmanne; Thierry Dutoit
In this paper we analyze walking sequences of an actor performing walks in eleven different states of mind. These walk sequences, captured with an inertial motion capture system, are used as training data to model walking in a reduced-dimensionality space through principal component analysis (PCA). In that reduced PC space, the variability of the walk cycles for each emotion and the length of each cycle are modeled using Gaussian distributions. Using this modeling, new walk sequences can be synthesized for each expression, taking into account the variability of walk cycles over time in a continuous sequence.
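A sketch of the modeling idea: project walk cycles into a PCA space, fit a Gaussian per emotion over the PC coefficients, then sample new cycles. The data shapes, the number of PCs, and the emotion labels below are placeholders:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Placeholder: 200 time-normalized walk cycles, each flattened to one vector
# (60 frames x 30 joint angles), with an emotion label (0-10) per cycle.
cycles = rng.random((200, 60 * 30))
labels = rng.integers(0, 11, size=200)

pca = PCA(n_components=10).fit(cycles)
coeffs = pca.transform(cycles)

# Gaussian model of cycle-to-cycle variability for one emotion.
c = coeffs[labels == 3]
mean, cov = c.mean(axis=0), np.cov(c, rowvar=False)

# Synthesize a new cycle for that emotion by sampling in PC space and
# projecting back to joint-angle space.
new_cycle = pca.inverse_transform(rng.multivariate_normal(mean, cov))
print(new_cycle.reshape(60, 30).shape)  # (60, 30)
```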
International Conference on Acoustics, Speech, and Signal Processing | 2014
Hüseyin Çakmak; Jérôme Urbain; Joëlle Tilmanne; Thierry Dutoit
In this paper we apply speaker-dependent training of hidden Markov models (HMMs) to audio and visual laughter synthesis separately. The two modalities are synthesized with a forced-durations approach and are then combined to render audiovisual laughter on a 3D avatar. This paper focuses on the visual synthesis of laughter and its perceptual evaluation when combined with synthesized audio laughter. Previous work on audio and visual synthesis has been successfully applied to speech, and the extrapolation to audio laughter synthesis has already been done. This paper shows that it is possible to extrapolate to visual laughter synthesis as well.
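A minimal sketch of one forced-durations idea: given per-state output Gaussians and an externally imposed duration for each state, emit each state's mean for the prescribed number of frames. All parameters below are placeholders, not the paper's trained laughter models:

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, feat_dim = 5, 20
state_means = rng.random((n_states, feat_dim))  # per-state Gaussian means
forced_durations = [8, 12, 30, 12, 8]           # frames imposed per state

# Emit each state's mean vector for its forced duration; a real system would
# also smooth the trajectory using dynamic (delta) features.
trajectory = np.concatenate(
    [np.tile(state_means[s], (d, 1)) for s, d in enumerate(forced_durations)]
)
print(trajectory.shape)  # (70, 20): one visual feature vector per frame
```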
Computer Animation and Virtual Worlds | 2016
Sohaib Laraba; Joëlle Tilmanne
We present in this paper a hidden Markov model‐based system for real‐time gesture recognition and performance evaluation. The system decodes performed gestures and outputs, at the end of a recognized gesture, a likelihood value that is transformed into a score. This score is used to evaluate a performance by comparing it with a reference one. For the learning procedure, a set of relational features was extracted from a high‐precision motion capture system and used to train hidden Markov models. At runtime, a low‐cost sensor (Microsoft Kinect) is used to capture a learner's movements. An intermediate model adaptation step was hence required to allow recognizing gestures captured by this low‐cost sensor. We present one application of this gesture evaluation system in the context of learning traditional dance basics. The estimation of the log‐likelihood allows giving feedback to the learner as a score related to his performance.
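A sketch of turning an HMM log-likelihood into a performance score, using hmmlearn; the logistic mapping to a 0-100 score is an illustrative choice, not the paper's exact formula, and the training data are placeholders:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

# Placeholder training data: 30 reference executions of one gesture,
# each 100 frames of 15 relational features.
refs = [rng.random((100, 15)) for _ in range(30)]
model = GaussianHMM(n_components=8, covariance_type="diag", n_iter=20)
model.fit(np.concatenate(refs), [len(r) for r in refs])

# Average per-frame log-likelihood of the references, used as the anchor.
ref_ll = np.mean([model.score(r) / len(r) for r in refs])

def performance_score(sequence):
    """Map the per-frame log-likelihood of a performed gesture to a
    0-100 score via logistic squashing around the reference level."""
    ll = model.score(sequence) / len(sequence)
    return float(100.0 / (1.0 + np.exp(-(ll - ref_ll))))

print(performance_score(rng.random((90, 15))))
```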
Transactions on Computational Science | 2012
Joëlle Tilmanne; Thierry Dutoit
We present a Hidden Markov Model (HMM) based stylistic walk synthesizer, where the synthesized styles are combinations or exaggerations of the walk styles present in the training database. Our synthesizer is also capable of generating walk sequences with controlled style transitions. In a first stage, Hidden Markov Models of eleven different gait styles are trained, using a database of motion capture walk sequences. In a second stage, the probability density functions inside the stylistic models are interpolated or extrapolated in order to synthesize walks with styles or style intensities that were not present in the training database. A continuous model of the style parameter space is thus constructed around the eleven original walk styles. Qualitative user evaluation of the synthesized sequences showed that the naturalness of motions is preserved after linear interpolation between styles and that evaluators are sensitive to the interpolation factor.
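A sketch of the interpolation/extrapolation stage, assuming two trained style models reduced to per-state Gaussian means and diagonal variances (stand-in arrays below); the actual synthesizer interpolates the probability density functions inside full HMMs:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, feat_dim = 10, 30

# Per-state means and diagonal variances of two trained walk-style models
# (placeholder values); e.g. model A and model B are two of the eleven styles.
mean_a = rng.random((n_states, feat_dim))
var_a = rng.random((n_states, feat_dim)) + 0.1
mean_b = rng.random((n_states, feat_dim))
var_b = rng.random((n_states, feat_dim)) + 0.1

def blend_style(alpha):
    """alpha = 0 gives style A, 1 gives style B; values outside [0, 1]
    extrapolate, i.e. exaggerate one style beyond the training data."""
    mean = (1 - alpha) * mean_a + alpha * mean_b
    var = np.maximum((1 - alpha) * var_a + alpha * var_b, 1e-6)  # keep variances positive
    return mean, var

halfway = blend_style(0.5)      # style halfway between A and B
exaggerated = blend_style(1.5)  # extrapolated, exaggerated style B
```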
Cyberworlds | 2011
Joëlle Tilmanne; Thierry Dutoit
In this work, we present a Hidden Markov Model (HMM) based stylistic walk synthesizer, where the synthesized styles are combinations or exaggerations of the walk styles present in the training database. In a first stage, Hidden Markov Models of eleven different styles of gait are trained, using a database of motion capture walk sequences. In a second stage, the probability density functions inside the stylistic models are interpolated or extrapolated in order to synthesize walks with styles or style intensities that were not present in the training database. A continuous model of the style parameter space is thus constructed around the eleven original walk styles. An informal user evaluation of the synthesized sequences showed that the naturalness of motions is preserved after linear interpolation.
International Conference on Acoustics, Speech, and Signal Processing | 2008
F. Ofli; Cristian Canton-Ferrer; Joëlle Tilmanne; Y. Demir; Elif Bozkurt; Yücel Yemez; Engin Erzin; A.M. Tekalp
This paper presents a framework for audio-driven human body motion analysis and synthesis. We address the problem in the context of a dance performance, where the gestures and movements of the dancer are mainly driven by a musical piece and characterized by the repetition of a set of dance figures. The system is trained in a supervised manner using multiview video recordings of the dancer. The human body posture is extracted from the multiview video information without any human intervention, using a novel marker-based algorithm relying on annealing particle filtering. The audio is analyzed to extract beat and tempo information. The joint analysis of audio and motion features provides a correlation model that is then used to animate a dancing avatar when driven with any musical piece of the same genre. Results are provided showing the effectiveness of the proposed algorithm.
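One way the beat and tempo extraction step could look, sketched with librosa as a stand-in audio front end (the abstract does not specify the implementation); "dance_music.wav" is a hypothetical input file:

```python
import librosa

# "dance_music.wav" is a hypothetical musical piece of the target genre.
y, sr = librosa.load("dance_music.wav")
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

# Beat times would then be aligned with the motion features to learn the
# audio-motion correlation model described in the abstract.
print("estimated tempo:", tempo, "BPM;", len(beat_times), "beats found")
```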
Computer Animation and Virtual Worlds | 2017
Sohaib Laraba; Mohammed Brahimi; Joëlle Tilmanne; Thierry Dutoit
In recent years, 3D skeleton‐based action recognition has become a popular approach to action classification, thanks to the development and availability of cheaper depth sensors. State‐of‐the‐art methods generally represent motion sequences as high-dimensional trajectories followed by a time‐warping technique. These trajectories are used to train a classification model to predict the classes of new sequences. Despite the success of these techniques in some fields, particularly when the data are captured by a high‐precision motion capture system, action classification is still less successful than image classification, especially since the advance of deep learning. In this paper, we present a new representation of motion sequences (Seq2Im, for sequence to image), which projects motion sequences onto the RGB domain. The 3D coordinates of the joints are mapped to red, green, and blue values, so that action classification becomes an image classification problem to which algorithms from that field can be applied. This representation was tested with basic image classification algorithms (namely, support vector machines, k-nearest neighbors, and random forests) in addition to convolutional neural networks. Evaluation of the proposed method on standard 3D human action recognition datasets shows its potential for action recognition, outperforming most state-of-the-art results.
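A sketch of the Seq2Im mapping as described: per-frame 3D joint coordinates become RGB values, so a sequence becomes a (joints x frames) color image. The normalization details below are illustrative and may differ from the paper's exact scaling:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder skeleton sequence: 120 frames, 25 joints, (x, y, z) per joint.
seq = rng.random((120, 25, 3)) * 2 - 1

def seq2im(sequence):
    """Scale x, y, z to [0, 255] per channel and lay joints out as image
    rows and frames as columns, so image classifiers (SVM, k-NN, random
    forests, CNNs) can be applied directly."""
    mins = sequence.min(axis=(0, 1), keepdims=True)
    maxs = sequence.max(axis=(0, 1), keepdims=True)
    scaled = (sequence - mins) / (maxs - mins) * 255.0
    return scaled.transpose(1, 0, 2).astype(np.uint8)  # (joints, frames, 3)

image = seq2im(seq)
print(image.shape, image.dtype)  # (25, 120, 3) uint8, ready for a CNN
```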