Publication


Featured research published by Damien Muselet.


Computer Vision and Pattern Recognition | 2013

Discriminative Color Descriptors

Rahat Khan; Joost van de Weijer; Fahad Shahbaz Khan; Damien Muselet; Christophe Ducottet; Cécile Barat

Color description is a challenging task because of large variations in RGB values which occur due to scene accidental events, such as shadows, shading, specularities, illuminant color changes, and changes in viewing geometry. Traditionally, this challenge has been addressed by capturing the variations in physics-based models, and deriving invariants for the undesired variations. The drawback of this approach is that sets of distinguishable colors in the original color space are mapped to the same value in the photometric invariant space. This results in a drop of discriminative power of the color description. In this paper we take an information theoretic approach to color description. We cluster color values together based on their discriminative power in a classification problem. The clustering has the explicit objective to minimize the drop of mutual information of the final representation. We show that such a color description automatically learns a certain degree of photometric invariance. We also show that a universal color representation, which is based on other data sets than the one at hand, can obtain competing performance. Experiments show that the proposed descriptor outperforms existing photometric invariants. Furthermore, we show that combined with shape description these color descriptors obtain excellent results on four challenging datasets, namely, PASCAL VOC 2007, Flowers-102, Stanford dogs-120 and Birds-200.
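
The clustering objective can be illustrated with a small sketch: starting from a joint count table of (color bin, class label) co-occurrences, bins are greedily merged so that the mutual information I(C; Y) drops as little as possible at each step. This is a minimal illustration of the idea, not the paper's algorithm; the `counts` table and the greedy agglomerative strategy are assumptions made for the example.

```python
# Minimal sketch: merge color bins while losing as little I(C; Y) as possible.
import numpy as np

def mutual_information(counts):
    """I(C;Y) in nats from a joint count matrix (color bins x classes)."""
    p = counts / counts.sum()
    pc = p.sum(axis=1, keepdims=True)          # marginal over color bins
    py = p.sum(axis=0, keepdims=True)          # marginal over classes
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log(p / (pc * py)), 0.0)
    return terms.sum()

def merge_bins(counts, n_clusters):
    """Greedily merge the pair of bins whose fusion loses the least I(C;Y)."""
    rows = [counts[i].astype(float) for i in range(counts.shape[0])]
    while len(rows) > n_clusters:
        best, best_mi = None, -np.inf
        for i in range(len(rows)):
            for j in range(i + 1, len(rows)):
                merged = rows[:i] + rows[i + 1:j] + rows[j + 1:] \
                         + [rows[i] + rows[j]]
                mi = mutual_information(np.vstack(merged))
                if mi > best_mi:
                    best, best_mi = (i, j), mi
        i, j = best
        rows = rows[:i] + rows[i + 1:j] + rows[j + 1:] + [rows[i] + rows[j]]
    return np.vstack(rows)

# Toy example: 8 color bins, 3 classes, reduced to 4 discriminative clusters.
rng = np.random.default_rng(0)
counts = rng.integers(1, 50, size=(8, 3))
clustered = merge_bins(counts, n_clusters=4)
print(mutual_information(counts), mutual_information(clustered))
```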


Computer Vision and Pattern Recognition | 2012

Discriminative feature fusion for image classification

Basura Fernando; Elisa Fromont; Damien Muselet; Marc Sebban

Bag-of-words-based image classification approaches mostly rely on low-level local shape features. However, it has been shown that combining multiple cues such as color, texture, or shape is a challenging and promising task which can improve the classification accuracy. Most state-of-the-art feature fusion methods aim to weight the cues without considering their statistical dependence in the application at hand. In this paper, we present a new logistic regression-based fusion method, called LRFF, which takes advantage of the different cues without being tied to any of them. We also design a new marginalized kernel by making use of the output of the regression model. We show that such kernels, surprisingly ignored so far by the computer vision community, are particularly well suited to image classification tasks. We compare our approach with existing methods that combine color and shape on three datasets. The proposed learning-based feature fusion process clearly outperforms the state-of-the-art fusion methods for image classification.
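
As a rough illustration of the fusion-then-kernel pipeline, the sketch below trains a logistic regression over concatenated cues and builds a marginalized kernel K(x, x') = Σ_y P(y|x) P(y|x') from its class posteriors. The random `color_feats`/`shape_feats` arrays and the exact kernel form are assumptions for illustration; the paper's LRFF construction may differ in detail.

```python
# Hedged sketch: logistic-regression fusion, then a marginalized kernel
# built from the class posteriors and fed to a kernel SVM.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 200
color_feats = rng.normal(size=(n, 16))   # stand-in for color descriptors
shape_feats = rng.normal(size=(n, 32))   # stand-in for shape descriptors
y = rng.integers(0, 3, size=n)

# Fusion step: a logistic regression over the concatenated cues learns how
# much each cue contributes, instead of using fixed per-cue weights.
X = np.hstack([color_feats, shape_feats])
fusion = LogisticRegression(max_iter=1000).fit(X, y)
P = fusion.predict_proba(X)              # class posteriors, shape (n, 3)

# Marginalized kernel from the posteriors: K(x, x') = sum_y P(y|x) P(y|x').
K = P @ P.T
svm = SVC(kernel="precomputed").fit(K, y)
print(svm.score(K, y))
```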


Computer Vision and Pattern Recognition | 2015

Landmarks-based kernelized subspace alignment for unsupervised domain adaptation

Rahaf Aljundi; Rémi Emonet; Damien Muselet; Marc Sebban

Domain adaptation (DA) has seen a lot of success in recent years in computer vision for dealing with situations where the learning process has to transfer knowledge from a source to a target domain. In this paper, we introduce a novel unsupervised DA approach based on both subspace alignment and the selection of landmarks similarly distributed between the two domains. Those landmarks are selected so as to reduce the discrepancy between the domains and are then used to non-linearly project the data into the same space, where an efficient subspace alignment (in closed form) is performed. We carry out a large experimental comparison in visual domain adaptation showing that our new method outperforms the most recent unsupervised DA approaches.
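
A minimal sketch of the pipeline, under simplifying assumptions: landmarks are picked here by a crude balance heuristic (the paper uses a principled discrepancy-reduction criterion), both domains are mapped through a Gaussian kernel to the landmarks, and the classic closed-form subspace alignment A = Ps^T Pt is applied between the PCA bases of the two kernelized domains.

```python
# Hedged sketch of landmarks-based kernelized subspace alignment.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(2)
Xs = rng.normal(0.0, 1.0, size=(150, 20))   # toy source domain
Xt = rng.normal(0.5, 1.2, size=(120, 20))   # shifted toy target domain

# Landmark selection (placeholder heuristic): keep candidate points whose
# mean kernel similarity to the source and to the target is most balanced,
# i.e. points that look similarly distributed in both domains.
cand = np.vstack([Xs, Xt])
sim_s = rbf_kernel(cand, Xs).mean(axis=1)
sim_t = rbf_kernel(cand, Xt).mean(axis=1)
landmarks = cand[np.argsort(np.abs(sim_s - sim_t))[:30]]

# Non-linear projection: represent every point by its Gaussian kernel
# similarities to the landmarks, putting both domains in the same space.
Ks = rbf_kernel(Xs, landmarks)
Kt = rbf_kernel(Xt, landmarks)

# Closed-form subspace alignment between the PCA bases of the two domains.
d = 10
pca_s = PCA(n_components=d).fit(Ks)
pca_t = PCA(n_components=d).fit(Kt)
A = pca_s.components_ @ pca_t.components_.T   # alignment matrix Ps^T Pt
Zs = pca_s.transform(Ks) @ A                  # aligned source features
Zt = pca_t.transform(Kt)                      # target features
print(Zs.shape, Zt.shape)                     # (150, 10) (120, 10)
```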


British Machine Vision Conference | 2012

Spatial orientations of visual word pairs to improve Bag-of-Visual-Words model

Rahat Khan; Cécile Barat; Damien Muselet; Christophe Ducottet

This paper presents a novel approach to incorporate spatial information in the bag-of-visual-words model for category level and scene classification. In the traditional bag-of-visual-words model, feature vectors are histograms of visual words. This representation is appearance based and does not contain any information regarding the arrangement of the visual words in the 2D image space. In this framework, we present a simple and efficient way to infuse spatial information. Particularly, we are interested in explicit global relationships among the spatial positions of visual words. Therefore, we take advantage of the orientation of the segments formed by Pairs of Identical visual Words (PIW). An evenly distributed normalized histogram of angles of PIW is computed. Histograms produced by each word type constitute a powerful description of intra-type visual word relationships. Experiments on challenging datasets demonstrate that our method is competitive with concurrent ones. We also show that our method provides important complementary information to the spatial pyramid matching and can improve the overall performance.
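
The PIW construction is easy to sketch: for each visual word type, accumulate the orientations of the segments joining every pair of its occurrences into a normalized angle histogram. The inputs below (`positions`, `words`) and the 8-bin quantization are illustrative assumptions.

```python
# Sketch of Pairs of Identical visual Words (PIW) angle histograms.
import numpy as np

def piw_histograms(positions, words, n_words, n_bins=8):
    """One angle histogram per word type; angles are undirected, in [0, pi)."""
    hists = np.zeros((n_words, n_bins))
    for w in range(n_words):
        pts = positions[words == w]
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                dx, dy = pts[j] - pts[i]
                angle = np.arctan2(dy, dx) % np.pi   # segments are undirected
                hists[w, int(angle / np.pi * n_bins) % n_bins] += 1
    sums = hists.sum(axis=1, keepdims=True)
    return np.divide(hists, sums, out=np.zeros_like(hists), where=sums > 0)

# Toy example: 100 keypoints assigned to 5 visual words in a 256x256 image.
rng = np.random.default_rng(3)
positions = rng.uniform(0, 256, size=(100, 2))
words = rng.integers(0, 5, size=100)
print(piw_histograms(positions, words, n_words=5).shape)   # (5, 8)
```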


Pattern Recognition | 2012

Supervised learning of Gaussian mixture models for visual vocabulary generation

Basura Fernando; Elisa Fromont; Damien Muselet; Marc Sebban

The creation of semantically relevant clusters is vital in bag-of-visual-words models, which are known to be very successful for image classification tasks. Generally, unsupervised clustering algorithms, such as K-means, are employed to create such clusters, from which visual dictionaries are deduced. K-means achieves a hard assignment by associating each image descriptor with the cluster with the nearest mean; in this way, the within-cluster sum of squared distances is minimized. A limitation of this approach in the context of image classification is that it usually does not use any supervision, which limits the discriminative power of the resulting visual words (typically the centroids of the clusters). More recently, some supervised dictionary creation methods based on both supervised information and data fitting were proposed, leading to more discriminative visual words. But none of them considers the uncertainty present at both the image descriptor and cluster levels. In this paper, we propose a supervised learning algorithm based on a Gaussian mixture model which not only generalizes the K-means algorithm by allowing soft assignments, but also exploits supervised information to improve the discriminative power of the clusters. Technically, our algorithm aims at optimizing, using an EM-based approach, a convex combination of two criteria: the first one is unsupervised and based on the likelihood of the training data; the second is supervised and takes into account the purity of the clusters. We show on two well-known datasets that our method is able to create more relevant clusters by comparing its behavior with state-of-the-art dictionary creation methods.
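
One plausible reading of the criterion can be sketched as an EM-like loop whose E-step blends the usual likelihood-based responsibilities with a supervised term favoring pure clusters, weighted by a coefficient alpha. This is an illustration of the idea only; the paper's exact objective and update rules may differ.

```python
# Hedged sketch of a supervised GMM: EM with purity-blended responsibilities.
import numpy as np
from scipy.stats import multivariate_normal

def supervised_gmm(X, y, k, alpha=0.5, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, k, replace=False)]
    var = np.full((k, d), X.var(axis=0))
    pi = np.full(k, 1.0 / k)
    classes = np.unique(y)
    for _ in range(n_iter):
        # Likelihood-based responsibilities (soft assignments).
        lik = np.stack([pi[j] * multivariate_normal.pdf(X, mu[j], np.diag(var[j]))
                        for j in range(k)], axis=1)
        r = lik / lik.sum(axis=1, keepdims=True)
        # Purity term: how much each cluster "belongs" to each class.
        purity = np.stack([r[y == c].sum(axis=0) for c in classes])  # (C, k)
        purity /= purity.sum(axis=0, keepdims=True) + 1e-12
        s = r * purity[np.searchsorted(classes, y)]    # favor pure clusters
        s /= s.sum(axis=1, keepdims=True) + 1e-12
        r = alpha * r + (1 - alpha) * s                # convex combination
        # Standard M-step with the blended responsibilities.
        nk = r.sum(axis=0) + 1e-12
        mu = (r.T @ X) / nk[:, None]
        var = np.stack([(r[:, j, None] * (X - mu[j]) ** 2).sum(0) / nk[j]
                        for j in range(k)]) + 1e-6
        pi = nk / n
    return mu, var, pi

# Toy example: two labeled Gaussian blobs, two clusters.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
mu, var, pi = supervised_gmm(X, y, k=2)
print(mu)
```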


Computer Vision and Image Understanding | 2015

Spatial histograms of soft pairwise similar patches to improve the bag-of-visual-words model

Rahat Khan; Cécile Barat; Damien Muselet; Christophe Ducottet

Highlights:
- A new approach to improve image representation for category-level classification.
- We encode pairwise relative spatial information of patches in the bag-of-words model.
- A simple approach complementary to the Spatial Pyramid Representation (SPR).
- Can be combined with SPR and outperforms other existing spatial methods.
- Experimental validation of the approach is shown on 5 challenging datasets.

In the context of category level scene classification, the bag-of-visual-words model (BoVW) is widely used for image representation. This model is appearance based and does not contain any information regarding the arrangement of the visual words in the 2D image space. To overcome this problem, recent approaches try to capture information about either the absolute or the relative spatial location of visual words. In the first category, the so-called Spatial Pyramid Representation (SPR) is very popular thanks to its simplicity and good results. Alternatively, adding information about occurrences of relative spatial configurations of visual words was proven to be effective, but at the cost of higher computational complexity, specifically when relative distances and angles are taken into account. In this paper, we introduce a novel way to incorporate both distance and angle information in the BoVW representation. The novelty is first to provide a computationally efficient representation adding relative spatial information between visual words, and second to use a soft pairwise voting scheme based on the distance in the descriptor space. Experiments on the challenging datasets MSRC-2, 15Scene, Caltech101, Caltech256 and Pascal VOC 2007 demonstrate that our method outperforms or is competitive with concurrent ones. We also show that it provides important complementary information to the spatial pyramid matching and can improve the overall performance.
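
The soft pairwise voting can be sketched as follows: patch pairs vote into a joint (distance, angle) histogram, each vote weighted by the similarity of the two patch descriptors so that nearly identical patches also contribute. The inputs and bin counts below are illustrative assumptions.

```python
# Sketch of soft pairwise voting into a joint (distance, angle) histogram.
import numpy as np

def soft_pairwise_histogram(positions, descriptors, n_dist=4, n_ang=8,
                            max_dist=362.0, sigma=1.0):
    hist = np.zeros((n_dist, n_ang))
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = positions[j] - positions[i]
            dist = np.hypot(dx, dy)
            ang = np.arctan2(dy, dx) % np.pi          # undirected segments
            db = min(int(dist / max_dist * n_dist), n_dist - 1)
            ab = int(ang / np.pi * n_ang) % n_ang
            # Soft vote: weight decays with descriptor-space distance.
            w = np.exp(-np.sum((descriptors[i] - descriptors[j]) ** 2)
                       / (2 * sigma ** 2))
            hist[db, ab] += w
    return hist / (hist.sum() + 1e-12)

# Toy example: 60 patches with positions in a 256x256 image.
rng = np.random.default_rng(5)
pos = rng.uniform(0, 256, size=(60, 2))
desc = rng.normal(size=(60, 8))
print(soft_pairwise_histogram(pos, desc).shape)   # (4, 8)
```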


Pattern Recognition Letters | 2007

Combining color and spatial information for object recognition across illumination changes

Damien Muselet; Ludovic Macaire

One of the most widely used approaches in the context of object recognition across illumination changes consists in comparing the images by means of the intersection between invariant histograms. However, this approach does not provide satisfying results with limited image databases. We propose to cope with the problem of illumination changes by analyzing simultaneously the color components of the pixels and their spatial arrangement in the image. For this purpose, we introduce the chromatic co-occurrence matrices to characterize the relationship between the color component levels of neighboring pixels. In order to compare two images acquired under different illuminations, these matrices are transformed into adapted co-occurrence matrices that are determined so that their intersection is higher when the two images contain the same object lit with different illuminations than when they contain different objects.
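
A chromatic co-occurrence matrix is straightforward to compute: for a chosen pair of color channels, count how often quantized level u in one channel neighbors level v in the other, then compare two images by histogram intersection of the normalized matrices. The sketch below uses a 4-neighborhood and 16 quantization levels as arbitrary illustrative choices; the adapted matrices used for cross-illumination comparison are not reproduced here.

```python
# Sketch: chromatic co-occurrence matrix and histogram intersection.
import numpy as np

def chromatic_cooccurrence(img, ch_a, ch_b, levels=16):
    q = img.astype(int) * levels // 256          # quantize each channel
    a, b = q[..., ch_a], q[..., ch_b]
    m = np.zeros((levels, levels))
    h, w = a.shape
    # 4-neighborhood: accumulate pairs (a[p], b[p + offset]).
    for dy, dx in ((0, 1), (1, 0), (0, -1), (-1, 0)):
        sa = a[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
        sb = b[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
        np.add.at(m, (sa.ravel(), sb.ravel()), 1)
    return m / m.sum()

def intersection(m1, m2):
    return np.minimum(m1, m2).sum()              # 1.0 means identical matrices

rng = np.random.default_rng(6)
img1 = rng.integers(0, 256, size=(64, 64, 3))
img2 = rng.integers(0, 256, size=(64, 64, 3))
m1 = chromatic_cooccurrence(img1, 0, 1)          # R-G co-occurrence
m2 = chromatic_cooccurrence(img2, 0, 1)
print(intersection(m1, m2))
```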


International Conference on Computer Vision | 2012

Lighting estimation in indoor environments from low-quality images

Natalia Neverova; Damien Muselet; Alain Trémeau

Lighting conditions estimation is a crucial point in many applications. In this paper, we show that combining color images with corresponding depth maps (provided by modern depth sensors) makes it possible to improve the estimation of the positions and colors of multiple lights in a scene. Since such devices usually provide low-quality images, for many steps of our framework we propose alternatives to classical algorithms that fail when the image quality is low. Our approach consists in decomposing an original image into specular shading, diffuse shading and albedo. The two shading images are used to render different versions of the original image by changing the light configuration. Then, using an optimization process, we find the lighting conditions that minimize the difference between the original image and the rendered one.
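
The render-and-compare step can be illustrated in a much-simplified form: given a diffuse shading image and per-pixel normals (derivable from the depth map), fit the direction and intensity of a single Lambertian directional light by least squares. The paper handles multiple colored lights and low-quality inputs; the toy version below only shows the optimization loop.

```python
# Toy sketch: fit one Lambertian directional light by render-and-compare.
import numpy as np
from scipy.optimize import least_squares

def render(params, normals):
    direction = params[:3] / np.linalg.norm(params[:3])
    intensity = params[3]
    return intensity * np.clip(normals @ direction, 0.0, None)

def residuals(params, normals, observed):
    return render(params, normals) - observed

# Synthetic ground truth: light from above-left, intensity 0.8.
rng = np.random.default_rng(7)
normals = rng.normal(size=(500, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
true_params = np.array([-1.0, 1.0, 2.0, 0.8])
observed = render(true_params, normals) + rng.normal(0, 0.01, 500)

fit = least_squares(residuals, x0=np.array([0.0, 0.0, 1.0, 1.0]),
                    args=(normals, observed))
print(fit.x[:3] / np.linalg.norm(fit.x[:3]), fit.x[3])
```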


Neurocomputing | 2017

Multi-task, multi-domain learning

Damien Fourure; Rémi Emonet; Elisa Fromont; Damien Muselet; Natalia Neverova; Alain Trémeau; Christian Wolf

We present an approach that leverages multiple datasets annotated for different tasks (e.g., classification with different labelsets) to improve the predictive accuracy on each individual dataset. Domain adaptation techniques can correct dataset bias but they are not applicable when the tasks differ, and they need to be complemented to handle multi-task settings. We propose a new selective loss function that can be integrated into deep neural networks to exploit training data coming from multiple datasets annotated for related but possibly different labelsets. We show that the gradient-reversal approach for domain adaptation can be used in this setup to additionally handle domain shifts. We also propose an auto-context approach that further captures existing correlations across tasks. Thorough experiments on two types of applications (semantic segmentation and hand pose estimation) show the relevance of our approach in different contexts.
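
The selective loss is simple to sketch: each dataset owns a slice of the output units, and a sample's cross-entropy is computed only over the slice of the dataset it comes from, so unrelated outputs receive no supervision from that sample. The label-set slices and toy batch below are illustrative assumptions, written in PyTorch.

```python
# Hedged sketch of a selective cross-entropy over per-dataset label sets.
import torch
import torch.nn.functional as F

def selective_cross_entropy(logits, targets, dataset_ids, label_slices):
    """logits: (N, total_units); targets are indices local to each label set."""
    losses = []
    for d, sl in enumerate(label_slices):
        mask = dataset_ids == d
        if mask.any():
            # Softmax restricted to this dataset's own output units.
            losses.append(F.cross_entropy(logits[mask][:, sl], targets[mask]))
    return torch.stack(losses).mean()

# Toy setup: dataset 0 owns output units 0-4, dataset 1 owns units 5-11.
label_slices = [slice(0, 5), slice(5, 12)]
logits = torch.randn(8, 12, requires_grad=True)
targets = torch.tensor([0, 3, 1, 4, 2, 6, 0, 5])   # local label indices
dataset_ids = torch.tensor([0, 0, 0, 0, 0, 1, 1, 1])
loss = selective_cross_entropy(logits, targets, dataset_ids, label_slices)
loss.backward()
print(loss.item())
```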


Pattern Recognition | 2013

Affine transforms between image space and color space for invariant local descriptors

Xiaohu Song; Damien Muselet; Alain Trémeau

Accurate local region description is a key point in many applications and has been the topic of many recent papers. Starting from the very accurate SIFT, most approaches exploit local gradient information, which suffers from several drawbacks: first, it is unstable under severe geometric distortions; second, it cannot easily be summarized in a compact way; and third, it is not designed to account for vectorial color information. In this paper, we propose an alternative by designing compact descriptors that account for both the colors present in the region and their spatial distribution. Each pixel being characterized by five coordinates, two in the image space and three in the color space, we estimate affine transforms that map the spatial coordinates to the color coordinates and vice versa. Obviously, no exact transform of this kind exists, but we show that, after applying the fitted transform to the original coordinates, the resulting positions are both discriminative and invariant to many acquisition conditions. Hence, depending on the original space (image or color) and the destination space (color or image), we design different complementary descriptors. Their discriminative power and invariance properties are assessed and compared with the best color descriptors in the context of region matching and object classification.
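
The core fitting step is compact enough to sketch: within a local region, estimate the least-squares affine map from homogeneous spatial coordinates (x, y, 1) to color coordinates (R, G, B); applying the fitted map to pixel coordinates yields positions that jointly reflect the colors and their spatial layout. The patch and coordinate conventions below are illustrative.

```python
# Sketch: least-squares affine map from image space to color space.
import numpy as np

def fit_space_to_color(region):
    """region: (H, W, 3) patch. Returns the 3x3 affine map [x, y, 1] -> RGB."""
    h, w, _ = region.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # (N, 3)
    B = region.reshape(-1, 3).astype(float)                        # (N, 3)
    M, *_ = np.linalg.lstsq(A, B, rcond=None)                      # (3, 3)
    return M

rng = np.random.default_rng(8)
patch = rng.integers(0, 256, size=(16, 16, 3))
M = fit_space_to_color(patch)
coords = np.array([[0.0, 0.0, 1.0], [15.0, 15.0, 1.0]])
print(coords @ M)   # transformed positions used as descriptor components
```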

Collaboration


Dive into Damien Muselet's collaborations.

Top Co-Authors

Basura Fernando

Australian National University

Marc Sebban

Centre national de la recherche scientifique

Jack-Gérard Postaire

Centre national de la recherche scientifique
