Yoann Baveye
Technicolor
Publications
Featured research published by Yoann Baveye.
affective computing and intelligent interaction | 2015
Yoann Baveye; Emmanuel Dellandréa; Christel Chamaret; Liming Chen
Recently, mainly due to advances in deep learning, performance in scene and object recognition has been progressing rapidly. On the other hand, more subjective recognition tasks, such as emotion prediction, remain at moderate levels. In this context, can affective computational models benefit from the breakthroughs in deep learning? This paper proposes to bring the strength of deep learning to emotion prediction in videos. The two main contributions are as follows: (i) a new, publicly available dataset, composed of 30 movies under Creative Commons licenses and continuously annotated along the induced valence and arousal axes, is introduced, for which (ii) the performance of Convolutional Neural Networks (CNNs) through supervised fine-tuning, Support Vector Machines for Regression (SVR), and the combination of both (transfer learning) is computed and discussed. To the best of our knowledge, this is the first approach in the literature to use CNNs to predict dimensional affective scores from videos. The experimental results show that the limited size of the dataset prevents the learning or fine-tuning of CNN-based frameworks, but that transfer learning is a promising way to improve the performance of affective movie content analysis frameworks as long as very large datasets annotated along affective dimensions remain unavailable.
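A minimal sketch of the transfer-learning baseline described above: fixed CNN activations feed a Support Vector Regressor that predicts a continuous affective score. The feature matrix and valence labels below are random stand-ins (the paper uses fine-tuned CNN features and LIRIS-ACCEDE annotations); only the shape of the pipeline is illustrative.

```python
# Transfer-learning sketch: precomputed CNN features -> SVR valence predictor.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4096))   # hypothetical fc-layer activations, one row per clip
y = rng.uniform(-1.0, 1.0, 500)    # hypothetical induced-valence scores in [-1, 1]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
svr = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X_tr, y_tr)
print("R^2 on held-out clips:", svr.score(X_te, y_te))
```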
computational color imaging workshop | 2013
Yoann Baveye; Fabrice Urban; Christel Chamaret; Vincent Demoulin; Pierre Hellier
The focus of this paper is automatic color harmonization, which amounts to re-coloring an image so that the resulting color palette is more harmonious to human observers. The proposed automatic algorithm builds on the pioneering works described in [3,12], where templates of harmonious colors are defined on the hue wheel. We make three contributions in this paper: first, saliency [9] is used to predict the most visually attractive areas and estimate a consistent harmonious template. Second, an efficient color segmentation algorithm, adapted from [4], is proposed to perform consistent color mapping. Third, a new mapping function replaces the usual color-shifting method. Results show that the method limits the visual artifacts of state-of-the-art methods and leads to a visually consistent harmonization.
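For context, a minimal sketch of the classical hue-template step that the paper builds on (and whose hard color-shifting the new mapping function replaces): hues lying outside a single harmonious sector on the hue wheel are clipped to the nearest sector border. The sector center and width here are illustrative assumptions.

```python
import numpy as np

def harmonize_hues(hues_deg, center_deg, width_deg=18.0):
    """Shift hues outside a harmonious sector onto its nearest border.
    Angles are in degrees; the sector is centered at center_deg."""
    # signed angular distance from the sector center, wrapped to [-180, 180)
    delta = (hues_deg - center_deg + 180.0) % 360.0 - 180.0
    half = width_deg / 2.0
    clipped = np.clip(delta, -half, half)  # hues inside the sector are unchanged
    return (center_deg + clipped) % 360.0

print(harmonize_hues(np.array([10.0, 100.0, 350.0]), center_deg=0.0))
```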
international conference on computer vision | 2012
Yoann Baveye; Fabrice Urban; Christel Chamaret
Visual saliency models aim at predicting where people look. In free-viewing conditions, people look at relevant objects that are in focus. Assuming that blurred or out-of-focus objects do not belong to the region of interest, this paper proposes and validates a significant improvement to a saliency model that takes blur into account. Blur identification is combined with a spatio-temporal saliency model. Bottom-up models are designed to mimic the low-level processing of the human visual system and can therefore flag out-of-focus objects as salient. Blur identification decreases saliency values in blurred areas while increasing them in sharp areas. In order to validate our new saliency model, we conducted eye-tracking experiments to record ground-truth observer fixations on images and videos. Blur identification significantly improves fixation prediction for natural images and videos.
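A minimal sketch of the general idea: a saliency map is modulated by a sharpness map so that blurred regions are down-weighted and sharp regions are emphasized. The local-Laplacian-energy sharpness measure below is an assumption for illustration; the paper uses its own blur-identification step within a spatio-temporal model.

```python
import numpy as np
from scipy import ndimage

def blur_weighted_saliency(image_gray, saliency):
    """Reweight a saliency map by local sharpness (placeholder measure)."""
    lap = ndimage.laplace(image_gray.astype(np.float64))
    sharpness = ndimage.uniform_filter(lap ** 2, size=15)  # local Laplacian energy
    sharpness /= sharpness.max() + 1e-8                    # normalize to [0, 1]
    weighted = saliency * sharpness                        # suppress blurred areas
    return weighted / (weighted.max() + 1e-8)
```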
international conference on multimedia and expo | 2014
Yoann Baveye; Emmanuel Dellandréa; Christel Chamaret; Liming Chen
Automatic prediction of emotions requires reliably annotated data, which can be obtained using scoring or pairwise ranking. But can we predict an emotional score from a ranking-based annotation approach? In this paper, we propose to answer this question by describing a regression analysis that maps crowdsourced rankings to affective scores in the induced valence-arousal emotional space. This process takes advantage of Gaussian Processes for regression, which can account for the variance of the ratings and thus the subjectivity of emotions. The regression models successfully fit the input data and provide valid predictions. Two distinct experiments were carried out using a small subset of the publicly available LIRIS-ACCEDE affective video database, for which crowdsourced ranks as well as affective ratings are available for arousal and valence. This enriches LIRIS-ACCEDE by providing absolute video ratings for the whole database in addition to the video rankings that are already available.
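A minimal sketch of rank-to-score regression with a Gaussian Process whose per-sample noise term encodes annotator disagreement, mirroring the abstract's use of rating variance. All data below are synthetic, and scikit-learn's GaussianProcessRegressor stands in for the paper's exact model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical data: normalized crowdsourced rank per clip, a mean affective
# rating, and the rating variance across annotators.
rng = np.random.default_rng(1)
ranks = np.linspace(0, 1, 40).reshape(-1, 1)
ratings = 2.0 * ranks.ravel() - 1.0 + rng.normal(0, 0.1, 40)
rating_var = rng.uniform(0.01, 0.05, 40)

# Per-sample noise (alpha) makes the GP trust clips with high annotator
# disagreement less, reflecting the subjectivity argument in the abstract.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), alpha=rating_var)
gp.fit(ranks, ratings)
mean, std = gp.predict([[0.5]], return_std=True)
print(f"predicted score at mid-rank: {mean[0]:.2f} +/- {std[0]:.2f}")
```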
acm multimedia | 2015
Ting Li; Yoann Baveye; Christel Chamaret; Emmanuel Dellandréa; Liming Chen
On one hand, the fact that Galvanic Skin Response (GSR) is highly correlated with the user's affective arousal makes it possible to apply GSR to emotion detection. On the other hand, the temporal correlation between real-time GSR and self-assessed arousal has not been well studied. This paper confronts two modalities representing the emotion induced when watching 30 movies extracted from the LIRIS-ACCEDE database. While continuous arousal annotations were self-assessed by 5 participants using a joystick, the real-time GSR signal of 13 other subjects is assumed to capture the users' emotional response objectively, without their interpretation. As its main contribution, this paper introduces a method that makes the temporal comparison of both signals possible. Temporal correlations between continuous arousal peaks and GSR were then calculated for all 30 movies. A global Pearson correlation of 0.264 and a Spearman rank correlation coefficient of 0.336 were achieved. This result supports the validity of using both signals to measure arousal and provides a reliable framework for the analysis of such signals.
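A minimal sketch of the final correlation step: two signals recorded at different rates are brought onto a common temporal grid, then Pearson and Spearman coefficients are computed. The synthetic traces and the simple resampling-based alignment are assumptions; the paper's contribution is a more careful temporal comparison method.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr
from scipy.signal import resample

# Hypothetical signals: a continuous arousal self-assessment and a coarser,
# phase-shifted GSR trace standing in for a skin-conductance recording.
t = np.linspace(0, 60, 600)
arousal = np.sin(0.2 * t) + 0.1 * np.random.default_rng(2).normal(size=600)
gsr = resample(np.sin(0.2 * t + 0.3), 240)

gsr_aligned = resample(gsr, len(arousal))  # common temporal grid
r, _ = pearsonr(arousal, gsr_aligned)
rho, _ = spearmanr(arousal, gsr_aligned)
print(f"Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")
```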
Proceedings of the 2014 International ACM Workshop on Crowdsourcing for Multimedia | 2014
Yoann Baveye; Christel Chamaret; Emmanuel Dellandréa; Liming Chen
Recently, we released a large affective video dataset, LIRIS-ACCEDE, which was annotated through crowdsourcing along both the induced valence and arousal axes using pairwise comparisons. In this paper, we design an annotation protocol that enables the scoring of induced affective feelings, in order to cross-validate the annotations of the LIRIS-ACCEDE dataset and identify any potential bias. In a controlled setup, we collected ratings from 28 users on a subset of video clips carefully selected from the dataset by computing inter-observer reliabilities on the crowdsourced data. In contrast to the crowdsourced rankings gathered in unconstrained environments, users were asked to rate each video using the Self-Assessment Manikin tool. The significant correlation between crowdsourced rankings and controlled ratings validates the reliability of the dataset for future use in affective video analysis and paves the way for the automatic generation of ratings over the whole dataset.
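A minimal sketch of the cross-validation check implied above: mean Self-Assessment Manikin ratings are correlated against crowdsourced ranks for the same clips, with a significant rank correlation supporting the dataset. All data below are synthetic placeholders.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical clips: crowdsourced rank per clip, and mean SAM ratings on a
# 1-9 scale (averaged over raters) that roughly follow the ranking.
rng = np.random.default_rng(3)
ranks = np.arange(1, 41)
sam = np.clip(9 - 0.18 * ranks + rng.normal(0, 0.8, 40), 1, 9)

rho, p = spearmanr(ranks, sam)
print(f"Spearman rho = {rho:.3f} (p = {p:.2g})")
```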
IEEE Transactions on Affective Computing | 2017
Yoann Baveye; Christel Chamaret; Emmanuel Dellandréa; Liming Chen
In present-day society, cinema has become one of the major forms of entertainment, providing unlimited contexts of emotion elicitation for the emotional needs of human beings. Since emotions are universal and shape all aspects of our interpersonal and intellectual experience, they have proved to be a highly multidisciplinary research field, ranging from psychology, sociology, and neuroscience to computer science. However, affective multimedia content analysis work from the computer science community benefits little from the progress achieved in these other fields. In this paper, a multidisciplinary state of the art for affective movie content analysis is given, in order to promote and encourage exchanges between researchers from a very wide range of fields. In contrast to other state-of-the-art papers on affective video content analysis, this work confronts the ideas and models of psychology, sociology, neuroscience, and computer science. The concepts of aesthetic emotions and emotion induction, as well as the different representations of emotions, are introduced based on psychological and sociological theories. Previous global and continuous affective video content analysis work, including video emotion recognition and violence detection, is also presented in order to point out its limitations.
computational color imaging workshop | 2013
Yoann Baveye; Fabrice Urban; Christel Chamaret; Vincent Demoulin; Pierre Hellier
In the original version, reference 16 was wrong. It should read as follows: 16. Skurowski, P., Kozielski, M.: Investigating human color harmony preferences using unsupervised machine learning. In: European Conference on Colour in Graphics, Imaging, and Vision, pp. 59-64 (2012)
IEEE Transactions on Affective Computing | 2015
Yoann Baveye; Emmanuel Dellandréa; Christel Chamaret; Liming Chen
MediaEval 2015 Workshop | 2015
Mats Sjöberg; Yoann Baveye; Hanli Wang; Vu Lam Quang; Bogdan Ionescu; Emmanuel Dellandréa; Markus Schedl; Claire-Hélène Demarty; Liming Chen