Emanuel Aldea
Centre national de la recherche scientifique
Publications
Featured research published by Emanuel Aldea.
Advances in Computing and Communications | 2012
Nadege Zarrouati; Emanuel Aldea; Pierre Rouchon
In this paper, we use the known camera motion associated with a video sequence of a static scene in order to estimate and incrementally refine the surrounding depth field. We exploit the SO(3)-invariance of the brightness and depth field dynamics to customize standard image processing techniques. Inspired by the Horn-Schunck method, we propose an SO(3)-invariant cost to estimate the depth field. At each time step, this provides a diffusion equation on the unit Riemannian sphere of R^3 that is numerically solved to obtain a real-time depth field estimation of the entire field of view. Two asymptotic observers are derived from the governing equations of the dynamics, based respectively on optical flow and on depth estimations: implemented on noisy sequences of synthetic images as well as on real data, they perform a more robust and accurate depth estimation. This approach is complementary to most methods employing state observers for range estimation, which concern only single or isolated feature points.
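For reference, the classical Horn-Schunck cost that inspired the proposed formulation penalizes the brightness-constancy residual plus a smoothness term (standard image-plane notation; the paper's SO(3)-invariant version replaces these operators with counterparts defined on the unit sphere):

```latex
E(u, v) = \iint_{\Omega} \left( I_x u + I_y v + I_t \right)^2
        + \alpha^2 \left( \lVert \nabla u \rVert^2 + \lVert \nabla v \rVert^2 \right)
        \,\mathrm{d}x\,\mathrm{d}y
```

Here $(u, v)$ is the optical flow, $I_x, I_y, I_t$ are the image derivatives, and $\alpha$ weights the regularization; minimizing such a cost yields the diffusion-type equation mentioned in the abstract.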
GbRPR 2007: 6th IAPR-TC-15 International Conference on Graph-Based Representations in Pattern Recognition | 2007
Emanuel Aldea; Jamal Atif; Isabelle Bloch
We propose in this article an image classification technique based on kernel methods and graphs. Our work explores the possibility of applying marginalized kernels to image processing. In machine learning, high-performance algorithms have been developed for data organized as real-valued arrays; these algorithms are used for various purposes such as classification or regression. However, they are inappropriate for direct use on complex data sets. Our work consists of two distinct parts. In the first, we model the images by graphs in order to represent their structural properties and inherent attributes. In the second, we use kernel functions to project the graphs into a mathematical space that allows the use of high-performance classification algorithms. Experiments are performed on medical images acquired with various modalities and concerning different parts of the body.
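The idea of comparing labeled graphs by summing over matching walks can be sketched with a finite-length random-walk kernel, a simplified stand-in for the marginalized kernel used in the paper (the toy adjacency-list representation and the fixed walk length are illustrative assumptions, not the paper's setup):

```python
def random_walk_kernel(adj1, labels1, adj2, labels2, walk_len=3):
    """Count pairs of walks of length `walk_len`, one per graph, whose
    vertex label sequences agree. Graphs are adjacency lists plus one
    label per vertex. Walks that match in both graphs correspond to
    walks in the label-restricted product graph."""
    # States of the product graph: vertex pairs with equal labels.
    pairs = [(i, j) for i in range(len(labels1)) for j in range(len(labels2))
             if labels1[i] == labels2[j]]
    # Dynamic programming over walk length.
    count = {p: 1.0 for p in pairs}
    for _ in range(walk_len):
        nxt = {p: 0.0 for p in pairs}
        for (i, j), c in count.items():
            for ni in adj1[i]:
                for nj in adj2[j]:
                    if labels1[ni] == labels2[nj]:
                        nxt[(ni, nj)] += c
        count = nxt
    return sum(count.values())
```

The true marginalized kernel additionally weights walks by start/transition/stop probabilities and sums over all lengths; the product-graph dynamic program above is the computational core shared by both.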
Journal of Electronic Imaging | 2015
Emanuel Aldea; Sylvie Le Hégarat-Mascle
Abstract. We are interested in the performance of currently available algorithms for the detection of cracks in the specific context of aerial inspection, which is characterized by image quality degradation. We focus on two widely used families of algorithms, based on minimal cost path analysis and on image percolation, and we highlight their limitations in this context. Furthermore, we propose an improved strategy based on a-contrario modeling, which is able to withstand significant motion blur because it avoids the various thresholds usually required to cope with varying crack appearances and varying levels of degradation. The experiments are performed on real image datasets to which we applied complex blur, and the results show that the proposed strategy is effective, while other methods which perform well on good-quality data experience significant difficulties with degraded images.
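The a-contrario principle can be sketched as follows: a candidate structure is declared meaningful when its expected number of chance occurrences under a background model, the Number of False Alarms (NFA), is at most 1. The binomial background model and all names below are illustrative, not the paper's exact formulation:

```python
from math import comb

def binomial_tail(n, k, p):
    """P[X >= k] for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def nfa(n_tests, n, k, p):
    """Number of false alarms for a candidate crack path: a path of n
    pixels, k of which pass a local darkness test that a background
    pixel passes with probability p, tested among n_tests candidates.
    The candidate is kept when nfa(...) <= 1, i.e. at most one such
    detection is expected by chance."""
    return n_tests * binomial_tail(n, k, p)
```

The appeal in the degraded-image setting is that the single NFA criterion replaces the per-image appearance thresholds that other crack detectors must retune.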
International Conference on Belief Functions | 2016
Marie Lachaize; Sylvie Le Hégarat-Mascle; Emanuel Aldea; Aude Maitrot; Roger Reynaud
Hyperspectral imagery is a powerful source of information for recognition problems in a variety of fields. However, the resulting data volume is a challenge for classification methods, especially considering industrial context requirements. Support Vector Machines (SVMs), commonly used classifiers for hyperspectral data, are originally suited to binary problems. Building on the basic belief assignment (BBA) allocation for binary classifiers proposed in [12], we investigate different strategies to combine two-class SVMs and tackle the multiclass problem. We evaluate the use of belief functions for SVM fusion with hyperspectral data in a waste sorting industrial application. We specifically highlight two possible ways of building a fast multiclass classifier using the belief functions framework, which takes into account the process uncertainties and can use different information sources such as complementary spectral features.
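The combination step can be sketched by turning each one-vs-rest classifier score into a simple BBA and merging the BBAs with Dempster's rule. The mass allocation below (fixed discounting, a small mass left on full ignorance) is a toy simplification, not the exact scheme of [12]:

```python
def dempster(m1, m2):
    """Dempster's rule for masses indexed by frozensets of class labels."""
    combined, conflict = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + va * vb
            else:
                conflict += va * vb  # mass on incompatible hypotheses
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

def binary_bba(cls, score, frame):
    """Toy BBA for a one-vs-rest classifier: a score in [0, 1] supports
    {cls}, (1 - score) supports its complement, and a residual mass is
    kept on the whole frame to encode process uncertainty."""
    omega = frozenset(frame)
    return {frozenset([cls]): 0.9 * score,
            omega - {cls}: 0.9 * (1.0 - score),
            omega: 0.1}
```

A multiclass decision is then read off the combined mass, e.g. by picking the singleton with the largest mass.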
Intelligent Robots and Systems | 2014
Gaspard Florentz; Emanuel Aldea
In this study, we propose a novel solution to regulate the number of interest points extracted from an image without significant additional computational cost. Our method acts at the very beginning of the detection process by using a corner occurrence model to predict the optimal threshold for a user-defined number of detections. Compared to existing approaches, which guarantee a reasonable number of corners by using a low threshold and then pruning the result, our approach is faster and more regular in terms of computation time, as it avoids scoring and sorting the detected corners. Using the FAST detector as a testbed, the strategy outlined in this article is evaluated in typical environments for robotics applications, and we report improved detection reliability during important scene variations. Taking into account the underlying visual navigation algorithms, we show that by regularizing the data input our solution facilitates a stable processing load, lower inter-frame computation time, and robustness to scene variations.
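The threshold-prediction idea can be sketched as follows: given a histogram of corner response occurrences (e.g. accumulated over recent frames), pick the lowest threshold expected to keep at most the target number of detections. This is a simplified stand-in for the paper's corner occurrence model:

```python
def predict_threshold(score_hist, target):
    """Lowest threshold expected to yield at most `target` detections.

    `score_hist[s]` holds the number of pixels whose corner response
    was exactly s; a detector with threshold t keeps all pixels with
    response >= t. We scan from the strongest responses downwards and
    stop just before the cumulative count exceeds the target."""
    total = 0
    for s in range(len(score_hist) - 1, -1, -1):
        total += score_hist[s]
        if total > target:
            return s + 1
    return 0  # even the lowest threshold stays within budget
```

Because the threshold is chosen before detection, no per-corner scoring and sorting pass is needed, which is the source of the runtime regularity claimed above.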
EGC (best of volume) | 2010
Emanuel Aldea; Isabelle Bloch
This paper deals with structural representations of images for machine learning and image categorization. The representation consists of a graph where vertices represent image regions and edges represent spatial relations between them. Both vertices and edges are attributed. The method is based on graph kernels, in order to derive a metric for comparing images. We show in particular the importance of edge information (i.e. spatial relations), specifically the influence of whether a relation between two regions is satisfied or not. The main contribution of the paper lies in highlighting the challenges that follow, in terms of image representation, when fuzzy models are considered for estimating relation satisfiability.
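A fuzzy model of relation satisfaction returns a degree in [0, 1] rather than a yes/no answer, and that degree can then be stored as an edge attribute. The centroid-based sketch below is only illustrative; fuzzy spatial relations of this kind are usually evaluated on whole regions, not centroids:

```python
from math import atan2, cos

def right_of_degree(c_ref, c_obj):
    """Fuzzy degree to which centroid c_obj is 'to the right of'
    centroid c_ref: full satisfaction when the direction between them
    is the positive x-axis, decreasing to zero beyond 90 degrees."""
    theta = atan2(c_obj[1] - c_ref[1], c_obj[0] - c_ref[0])
    return max(0.0, cos(theta))
```

An edge between two region vertices would then carry, for each relation of interest, such a degree instead of a binary satisfied/not-satisfied flag.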
International Symposium on Visual Computing | 2007
Emanuel Aldea; Geoffroy Fouquier; Jamal Atif; Isabelle Bloch
Various kernel functions on graphs have been defined recently. In this article, our purpose is to assess the efficiency of a marginalized kernel for image classification using structural information. Graphs are built from image segmentations, and various types of information concerning the underlying image regions, as well as the spatial relationships between them, are incorporated as attributes in the graph labeling. The main contribution of this paper consists in studying the impact of fusing kernels for different attributes on the classification decision, while proposing the use of fuzzy attributes for estimating spatial relationships.
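Kernel fusion often amounts to a nonnegative weighted sum of per-attribute kernels, which stays positive definite and can therefore be fed directly to an SVM. A minimal sketch (the weights are illustrative and would be tuned in practice, e.g. by cross-validation):

```python
def fuse_kernels(kernel_values, weights):
    """Weighted sum of per-attribute kernel evaluations k_i(x, y).

    A nonnegative combination of positive-definite kernels is again
    positive definite, so the fused value remains a valid kernel."""
    assert all(w >= 0 for w in weights), "negative weights may break positive definiteness"
    return sum(w * k for w, k in zip(weights, kernel_values))
```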
Asian Conference on Computer Vision | 2014
Emanuel Aldea; Khurom H. Kiyani
In this paper we address the problem of multiple camera calibration in the presence of a homogeneous scene, and without the possibility of employing calibration-object-based methods. The proposed solution exploits salient features present in a larger field of view, but instead of employing active vision we replace the cameras with stereo rigs featuring a long-focal-length analysis camera as well as a short-focal-length registration camera. Thus, we are able to propose an accurate solution that does not require intrinsic variation models, as in the case of zooming cameras. Moreover, the simultaneous availability of the two views in each rig allows for pose re-estimation between rigs as often as necessary. The algorithm has been successfully validated in an indoor setting, as well as on a difficult scene featuring a highly dense pilgrim crowd in Makkah.
International Journal of Approximate Reasoning | 2018
Marie Lachaize; Sylvie Le Hégarat-Mascle; Emanuel Aldea; Aude Maitrot; Roger Reynaud
This paper addresses the difficult problem of segmenting objects in a scene while simultaneously estimating their material class. Focusing on the case where no single dataset can achieve such a task on its own, multiple sensor datasets are considered, including images for retrieving the spatial information. The proposed approach is based on mutual validation between class decision (using the most relevant dataset) and segmentation (derived from image data). The main originality lies in the ability to make these two modules (classification and segmentation) interactive. Specifically, our application focuses on object-level material labeling using classic RGB images, laser profilometer images and a NIR spectral sensor. Starting from a superpixel segmentation, the relevant data are introduced as constraints modifying the initial segmentation in a split-and-merge process, which interacts with the material labeling process. In this work, we use the belief function framework to model the information extracted from each kind of data and to transfer it from one processing module to another. In particular, we show the relevance of the evidential conflict measure for driving the split process and controlling the merge process. Experiments have been performed on actual scenes with stacked objects and difficult material cases such as transparent polymers. They allow us to assess the performance of the proposed approach in terms of both material labeling and object segmentation, as well as to illustrate some borderline cases.
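The role of conflict in the merge step can be sketched as follows: two adjacent segments, each carrying a BBA over material classes, are merge candidates only when their evidence is compatible, i.e. when the Dempster conflict mass between their BBAs stays low. The threshold and the criterion below are illustrative, not the paper's exact rule:

```python
def conflict(m1, m2):
    """Dempster conflict mass between two BBAs, each a dict mapping
    frozensets of class labels to mass values: the total mass assigned
    to pairs of focal elements with empty intersection."""
    return sum(v1 * v2
               for a, v1 in m1.items()
               for b, v2 in m2.items()
               if not (a & b))

def should_merge(m1, m2, threshold=0.5):
    """Merge two adjacent segments only when their material evidence
    is compatible (low evidential conflict). Toy criterion; the paper
    couples this with a split step in a full split-and-merge loop."""
    return conflict(m1, m2) < threshold
```

A symmetric use of the same measure (high conflict between a segment and its own sub-parts) can drive the split decision.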
International Journal of Approximate Reasoning | 2018
Nicola Pellicanò; Sylvie Le Hégarat-Mascle; Emanuel Aldea
Abstract This paper introduces an innovative approach for handling 2D compound hypotheses within the Belief Function framework. We propose a polygon-based generic representation which relies on polygon clipping operators, as well as on a topological ordering of the focal elements within a directed acyclic graph encoding their interconnections. This approach allows us to make the computational cost for the hypothesis representation independent of the cardinality of the discernment frame. For belief combination, canonical decomposition and decision making, we propose efficient algorithms which rely on hashes for fast lookup, and which benefit from the proposed graph representation. An implementation of the functionalities proposed in this paper is provided as an open source library. In addition to an illustrative synthetic example, quantitative experimental results on a pedestrian localization problem are reported. The experiments show that the solution is accurate and that it fully benefits from the scalability of the 2D search space granularity provided by our representation.
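The core geometric operation behind combining 2D masses is polygon clipping: the conjunctive combination of two polygonal focal elements is their intersection, which is again a polygon. A minimal Sutherland-Hodgman sketch for the convex case (the library described above handles the general case with dedicated clipping operators; names here are illustrative):

```python
def clip_convex(subject, clip_poly):
    """Clip polygon `subject` against a CONVEX polygon `clip_poly`,
    both given as counter-clockwise lists of (x, y) vertices. Returns
    the vertices of the intersection polygon."""
    def inside(p, a, b):
        # Left of (or on) the directed edge a->b, for CCW clip polygons.
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def intersect(p, q, a, b):
        # Intersection of segment p-q with the infinite line through a-b.
        x1, y1, x2, y2 = p[0], p[1], q[0], q[1]
        x3, y3, x4, y4 = a[0], a[1], b[0], b[1]
        den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

    output = list(subject)
    for i in range(len(clip_poly)):
        a, b = clip_poly[i], clip_poly[(i + 1) % len(clip_poly)]
        input_list, output = output, []
        if not input_list:
            break  # empty intersection
        s = input_list[-1]
        for e in input_list:
            if inside(e, a, b):
                if not inside(s, a, b):
                    output.append(intersect(s, e, a, b))
                output.append(e)
            elif inside(s, a, b):
                output.append(intersect(s, e, a, b))
            s = e
    return output
```

The directed-acyclic-graph bookkeeping described in the abstract then avoids recomputing such intersections for focal elements whose interconnections are already known.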