
Publication


Featured research published by Christel Chamaret.


Proceedings of SPIE | 2010

Adaptive 3D Rendering based on Region-of-Interest

Christel Chamaret; Sylvain Godeffroy; Patrick Lopez; Olivier Le Meur

3D processing techniques are promising, but several hurdles remain to be overcome. In this paper, two of them are examined. The first concerns the management of high disparities, which is not yet well mastered and strongly affects the viewing of 3D scenes on stereoscopic screens. The second concerns the salient regions of the scene, commonly called Regions of Interest (RoIs) in the image processing domain. A problem appears when there is more than one region of interest in a video scene: it then becomes difficult for the eyes to scan them, especially if the depth difference between them is high. In this contribution, the 3D experience is improved by applying effects related to the RoIs. The shift between the two views is adaptively adjusted in order to obtain a null disparity on a given area of the scene; in the proposed approach, these areas are the visually interesting ones. A constant disparity on the salient areas improves the viewing experience over the video sequence.
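
To make the disparity-adjustment idea concrete, here is a minimal sketch of shifting one view so that the most salient region lands at zero disparity. The function name, the top-5% saliency threshold and the integer-pixel shift are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def zero_disparity_shift(left, right, disparity, saliency, top_percent=5):
    """Shift the right view so the most salient region lands at zero disparity.

    `disparity` holds per-pixel horizontal disparity between the views;
    `saliency` is a visual attention map. Both inputs are hypothetical.
    """
    # Keep the top-N% most salient pixels as the region of interest (RoI).
    threshold = np.percentile(saliency, 100 - top_percent)
    roi = saliency >= threshold
    # Disparity to cancel: mean disparity over the salient region.
    shift = int(round(float(disparity[roi].mean())))
    # Horizontal translation of the right view makes the RoI disparity ~0.
    return left, np.roll(right, -shift, axis=1)
```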


Eurographics | 2014

A Survey of Color Mapping and its Applications

Hasan Sheikh Faridul; Tania Pouli; Christel Chamaret; Jurgen Stauder; Alain Trémeau; Erik Reinhard

Color mapping or color transfer methods aim to recolor a given image or video by deriving a mapping between that image and another image serving as a reference. This class of methods has received considerable attention in recent years, both in the academic literature and in industrial applications. Methods for recoloring images have often appeared under the labels of color correction, color transfer or color balancing, to name a few, but their goal is always the same: mapping the colors of one image to another. In this report, we present a comprehensive overview of these methods and offer a classification of current solutions depending not only on their algorithmic formulation but also on their range of applications. We discuss the relative merits of each class of techniques through examples and show how color mapping solutions can be, and have been, applied to a diverse range of problems.
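
As a concrete illustration of the class of methods surveyed, the sketch below implements one of its simplest members, Reinhard-style per-channel statistics matching. It is a generic textbook example, not a method proposed in the report, and assumes float images with values in [0, 1]:

```python
import numpy as np

def stats_color_transfer(source, reference):
    """Per-channel mean/std matching between two float RGB images in [0, 1].

    A classic, simple color-mapping method of the family surveyed in the
    report; real methods often work in a decorrelated space such as Lab.
    """
    src = source.reshape(-1, 3)
    ref = reference.reshape(-1, 3)
    # Normalize the source statistics, then impose the reference statistics.
    out = (src - src.mean(axis=0)) / (src.std(axis=0) + 1e-8)
    out = out * ref.std(axis=0) + ref.mean(axis=0)
    return out.reshape(source.shape).clip(0.0, 1.0)
```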


Affective Computing and Intelligent Interaction | 2013

A Large Video Data Base for Computational Models of Induced Emotion

Yoann Baveye; Jean-Noel Bettinelli; Emmanuel Dellandréa; Liming Chen; Christel Chamaret

To address the need for emotional databases and affective tagging, LIRIS-ACCEDE is proposed in this paper. LIRIS-ACCEDE is an Annotated Creative Commons Emotional DatabasE composed of 9800 video clips extracted from 160 movies shared under Creative Commons licenses, which allows the database to be made publicly available without copyright issues. The 9800 video clips (each 8-12 seconds long) are sorted along the induced valence axis, from the clip perceived most negatively to the clip perceived most positively. The annotation was carried out by 1518 annotators from 89 different countries using crowdsourcing. A baseline late fusion scheme using the ground truth from these annotations is computed to predict emotion categories in video clips.
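
A late fusion baseline of the kind mentioned above could look like the following sketch: one classifier per modality, with their class probabilities averaged. The SVM choice, the audio/visual split and the fusion weight are assumptions, as the paper's exact features are not described here:

```python
from sklearn.svm import SVC

def late_fusion_baseline(X_audio, X_visual, y, w=0.5):
    """Train one classifier per modality, then fuse their probabilities.

    Generic late-fusion sketch: the SVM choice, the two modalities and the
    fusion weight `w` are assumptions, not the paper's exact setup.
    """
    clf_audio = SVC(probability=True).fit(X_audio, y)
    clf_visual = SVC(probability=True).fit(X_visual, y)

    def predict(xa, xv):
        # Weighted average of per-modality class probabilities.
        proba = (w * clf_audio.predict_proba(xa)
                 + (1 - w) * clf_visual.predict_proba(xv))
        return proba.argmax(axis=1)

    return predict
```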


Affective Computing and Intelligent Interaction | 2015

Deep learning vs. kernel methods: Performance for emotion prediction in videos

Yoann Baveye; Emmanuel Dellandréa; Christel Chamaret; Liming Chen

Recently, mainly due to advances in deep learning, performance in scene and object recognition has progressed rapidly. On the other hand, more subjective recognition tasks, such as emotion prediction, stagnate at moderate levels. In this context, is it possible to make affective computational models benefit from the breakthroughs in deep learning? This paper proposes to introduce the strengths of deep learning in the context of emotion prediction in videos. The two main contributions are as follows: (i) a new dataset, composed of 30 movies under Creative Commons licenses and continuously annotated along the induced valence and arousal axes, is introduced and made publicly available, for which (ii) the performance of Convolutional Neural Networks (CNNs) through supervised fine-tuning, Support Vector Machines for Regression (SVR) and the combination of both (transfer learning) is computed and discussed. To the best of our knowledge, this is the first approach in the literature using CNNs to predict dimensional affective scores from videos. The experimental results show that the limited size of the dataset prevents the learning or fine-tuning of CNN-based frameworks, but that transfer learning is a promising solution for improving the performance of affective movie content analysis frameworks as long as very large datasets annotated along affective dimensions are not available.
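
The CNN+SVR combination can be sketched as follows, assuming the CNN features have already been extracted from video frames; the RBF kernel and hyperparameters are illustrative defaults rather than the paper's settings:

```python
from sklearn.svm import SVR

def fit_affective_regressors(cnn_features, valence, arousal):
    """Fit one kernel regressor per affective axis on pretrained CNN features.

    Sketch of the transfer-learning setup (CNN features + SVR); the RBF
    kernel and hyperparameters are illustrative defaults.
    """
    svr_valence = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(cnn_features, valence)
    svr_arousal = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(cnn_features, arousal)
    return svr_valence, svr_arousal
```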


Computer Graphics Forum | 2016

Colour Mapping: A Review of Recent Methods, Extensions and Applications

H. Sheikh Faridul; Tania Pouli; Christel Chamaret; Jurgen Stauder; Erik Reinhard; D. Kuzovkin; Alain Trémeau

The objective of colour mapping or colour transfer methods is to recolour a given image or video by deriving a mapping between that image and another image serving as a reference. These methods have received considerable attention in recent years, both in the academic literature and in industrial applications. Methods for recolouring images have often appeared under the labels of colour correction, colour transfer or colour balancing, to name a few, but their goal is always the same: mapping the colours of one image to another. In this paper, we present a comprehensive overview of these methods and offer a classification of current solutions depending not only on their algorithmic formulation but also on their range of applications. We also provide a new dataset and a novel evaluation technique called ‘evaluation by colour mapping roundtrip’. We discuss the relative merits of each class of techniques through examples and show how colour mapping solutions can be, and have been, applied to a diverse range of problems.
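
A plausible reading of the roundtrip evaluation is sketched below: map image A toward reference B, map the result back toward A, and measure the drift from the original. This is an interpretation for illustration, not the paper's exact protocol:

```python
import numpy as np

def roundtrip_rmse(image_a, image_b, color_map):
    """Map A toward B, map the result back toward A, and measure the drift.

    `color_map(source, reference)` is any colour-mapping function returning
    a recoloured copy of `source`; the RMSE criterion is an assumption.
    """
    forward = color_map(image_a, image_b)   # A recoloured to match B
    back = color_map(forward, image_a)      # ...and mapped back toward A
    return float(np.sqrt(np.mean((back - image_a) ** 2)))
```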


Computational Color Imaging Workshop | 2013

Saliency-Guided Consistent Color Harmonization

Yoann Baveye; Fabrice Urban; Christel Chamaret; Vincent Demoulin; Pierre Hellier

The focus of this paper is automatic color harmonization, which amounts to re-coloring an image so that the obtained color palette is more harmonious for human observers. The proposed automatic algorithm builds on the pioneering works described in [3,12], where templates of harmonious colors are defined on the hue wheel. This paper makes three contributions: first, saliency [9] is used to predict the most attractive visual areas and estimate a consistent harmonious template. Second, an efficient color segmentation algorithm, adapted from [4], is proposed to perform consistent color mapping. Third, a new mapping function replaces the usual color-shifting method. Results show that the method limits the visual artifacts of state-of-the-art methods and leads to a visually consistent harmonization.
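
The saliency-guided template estimation can be illustrated as follows: exhaustively rotate a single-sector hue template and keep the orientation that captures the most saliency-weighted hue mass. The sector width (the 93.6° of a 'V'-type template from the cited harmony literature) and the 1° search step are illustrative choices:

```python
import numpy as np

def best_template_center(hue, saliency, width_deg=93.6):
    """Rotate a single-sector hue template and keep the best orientation.

    `hue` is in degrees [0, 360); `saliency` weights each pixel, so the
    template is fitted to the visually attractive areas first.
    """
    # Saliency-weighted hue histogram with 1-degree bins.
    hist, _ = np.histogram(hue, bins=360, range=(0, 360), weights=saliency)
    half = int(width_deg / 2)
    best_center, best_mass = 0, -1.0
    for center in range(360):
        # Wrap the sector around the hue wheel.
        idx = np.arange(center - half, center + half + 1) % 360
        mass = hist[idx].sum()
        if mass > best_mass:
            best_center, best_mass = center, mass
    return best_center
```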


Computer Vision and Pattern Recognition | 2013

No-reference Harmony-Guided Quality Assessment

Christel Chamaret; Fabrice Urban

Color harmony of simple color patterns has been widely studied for color design. Rules derived from psychological experiments have been applied to compute image aesthetic scores or to re-colorize pictures. But what is harmonious or not in an image? What can the human eye perceive as disharmonious? Extensive research has been done in the context of quality assessment to define what is visible or not in images and videos. Techniques based on human visual system models use signal masking to define visibility thresholds. Building on results in both fields, we present a harmony quality assessment method to assess what is harmonious or not in an image. Color rules are used to detect which parts of an image are disharmonious, and visual masking is applied to estimate to what extent an image area can be perceived as disharmonious. The resulting perceptual harmony quality map and scores can be used in a photo-editing framework to guide the user toward the best artistic effects. Results show that the harmony maps reflect what a user perceives and that the score is correlated with the artistic intent.
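
The combination of a disharmony map with visual masking might be sketched as below, where both maps are assumed precomputed and normalized to [0, 1]; the paper's actual masking model is not reproduced here:

```python
import numpy as np

def harmony_quality(disharmony, masking):
    """Discount disharmony that falls below the local visibility threshold.

    `disharmony` comes from color-rule violations and `masking` from a
    visual-masking model; both are assumed precomputed in [0, 1].
    """
    # Disharmony weaker than the masking threshold is not perceived.
    visible = np.clip(disharmony - masking, 0.0, None)
    quality_map = 1.0 - visible      # per-pixel harmony quality
    return quality_map, float(quality_map.mean())  # map and global score
```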


International Conference on Computer Vision | 2012

Image and video saliency models improvement by blur identification

Yoann Baveye; Fabrice Urban; Christel Chamaret

Visual saliency models aim at predicting where people look. In free-viewing conditions, people look at relevant objects that are in focus. Assuming that blurred or out-of-focus objects do not belong to the region of interest, this paper proposes a significant improvement and the validation of a saliency model that takes blur into account. Blur identification is combined with a spatio-temporal saliency model. Bottom-up models are designed to mimic the low-level processing of the human visual system and can thus detect out-of-focus objects as salient. Blur identification decreases saliency values in blurred areas while increasing them in sharp areas. To validate our new saliency model, we conducted eye-tracking experiments to record ground truth of observers' fixations on images and videos. Blur identification significantly improves fixation prediction for natural images and videos.
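
A minimal sketch of the reweighting step follows, using local Laplacian energy as a stand-in for the paper's blur identification; the kernel size and normalization are illustrative:

```python
import cv2
import numpy as np

def blur_weighted_saliency(gray, saliency, ksize=9):
    """Down-weight saliency in blurred areas, boost it in sharp ones.

    Local Laplacian energy is used here as a simple sharpness proxy; the
    paper's actual blur identification method is not reproduced.
    """
    lap = cv2.Laplacian(gray.astype(np.float32), cv2.CV_32F)
    sharpness = cv2.blur(np.abs(lap), (ksize, ksize))  # local sharpness map
    sharpness /= sharpness.max() + 1e-8
    weighted = saliency * sharpness                    # reweighted saliency
    return weighted / (weighted.max() + 1e-8)
```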


Proceedings of SPIE | 2012

Video retargeting for stereoscopic content under 3D viewing constraints

Christel Chamaret; Guillaume Boisson; C. Chevance

The imminent deployment of new devices such as TVs, tablets and smartphones supporting stereoscopic display creates a need for retargeting the content. New devices bring their own aspect ratios and potentially small screen sizes. Aspect ratio conversion becomes mandatory, and an automatic solution would be of high value, especially if it maximizes visual comfort. Some issues inherent to the 3D domain are considered in this paper: no vertical disparity, and no object with negative disparity (outward perception) on the border of the cropping window. A visual attention model is applied to each view and provides saliency maps of the most attractive pixels. A dedicated 3D retargeting step correlates the 2D attention maps of both views, as well as additional computed information, to determine the best cropping window. Specific constraints induced by the 3D experience influence the retargeted window through the computation of a map of objects that should not be cropped. Compared with the original 2.35:1 content, whose black stripes provide a limited 3D experience on a TV screen, the automatic cropping exploits the full screen and yields a more immersive experience. The proposed system is fully automatic and ensures a good final quality without missing parts that are fundamental to the global understanding of the scene. Eye-tracking data recorded on stereoscopic content were compared against the retargeted window in order to ensure that the most attractive areas lie inside the final video.
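
The cropping-window search can be sketched as a sliding-window maximization of combined saliency over both views, sped up with an integral image. The paper's additional 3D constraints (no vertical disparity, no negative-disparity objects on the window border) are not modelled in this simplified version:

```python
import numpy as np

def best_crop_window(sal_left, sal_right, win_h, win_w):
    """Find the fixed-size window covering the most combined saliency.

    The two views' attention maps are summed, and an integral image makes
    each window sum O(1). Returns the top-left corner (y, x).
    """
    sal = sal_left + sal_right
    # Integral image padded with a zero row/column.
    ii = np.pad(sal, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    h, w = sal.shape
    best_xy, best_mass = (0, 0), -1.0
    for y in range(h - win_h + 1):
        for x in range(w - win_w + 1):
            mass = (ii[y + win_h, x + win_w] - ii[y, x + win_w]
                    - ii[y + win_h, x] + ii[y, x])
            if mass > best_mass:
                best_xy, best_mass = (y, x), mass
    return best_xy
```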


International Conference on Multimedia and Expo | 2014

From crowdsourced rankings to affective ratings

Yoann Baveye; Emmanuel Dellandréa; Christel Chamaret; Liming Chen

Automatic prediction of emotions requires reliably annotated data, which can be obtained using scoring or pairwise ranking. But can we predict an emotional score using a ranking-based annotation approach? In this paper, we propose to answer this question by describing a regression analysis that maps crowdsourced rankings into affective scores in the induced valence-arousal emotional space. This process takes advantage of Gaussian Processes for regression, which can take into account the variance of the ratings and thus the subjectivity of emotions. The regression models successfully learn to fit the input data and provide valid predictions. Two distinct experiments were carried out using a small subset of the publicly available LIRIS-ACCEDE affective video database, for which crowdsourced ranks as well as affective ratings are available for arousal and valence. This makes it possible to enrich LIRIS-ACCEDE with absolute video ratings for the whole database, in addition to the video rankings that are already available.
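
With scikit-learn, the rank-to-rating regression might be sketched as follows; feeding the per-clip rating variance through the GP's `alpha` parameter is one way to account for the subjectivity mentioned above. The kernel and length scale are illustrative assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def ranks_to_ratings(ranks_rated, ratings, rating_var, ranks_all):
    """Fit a GP on the small rated subset, then predict scores for all clips.

    Passing the per-clip rating variance through `alpha` lets the GP account
    for annotation subjectivity; the RBF kernel and its length scale are
    illustrative choices.
    """
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=100.0),
                                  alpha=rating_var, normalize_y=True)
    gp.fit(np.asarray(ranks_rated).reshape(-1, 1), ratings)
    mean, std = gp.predict(np.asarray(ranks_all).reshape(-1, 1), return_std=True)
    return mean, std
```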

Collaboration


Dive into Christel Chamaret's collaborations.

Top Co-Authors

Liming Chen

École centrale de Lyon


Ronan Boitard

University of British Columbia
