Bruno Emile
University of Orléans
Publications
Featured research published by Bruno Emile.
International Conference on Pattern Recognition | 2008
Yannick Benezeth; Pierre-Marc Jodoin; Bruno Emile; Hélène Laurent; Christophe Rosenberger
Locating moving objects in a video sequence is the first step of many computer vision applications. Among the various motion-detection techniques, background subtraction methods are commonly implemented, especially for applications relying on a fixed camera. Since the basic inter-frame difference with a global threshold is often too simplistic, more elaborate (and often probabilistic) methods have been proposed. These methods often aim at making the detection process more robust to noise, background motion, and camera jitter. In this paper, we present commonly implemented background subtraction algorithms and evaluate them quantitatively. To gauge the performance of each method, tests are performed on a wide range of real, synthetic, and semi-synthetic video sequences representing different challenges.
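The baseline the abstract refers to, inter-frame difference against a reference image with a single global threshold, can be sketched in a few lines (the threshold value and toy images below are illustrative, not taken from the paper):

```python
import numpy as np

def frame_difference_mask(frame, background, threshold=25):
    """Basic background subtraction: flag pixels whose absolute
    difference from the background image exceeds a global threshold."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

# Toy example: a flat background and a frame with a bright "object".
background = np.full((4, 4), 100, dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200          # a 2x2 moving object
mask = frame_difference_mask(frame, background)
print(mask.sum())              # number of foreground pixels
```

The single global threshold is exactly what makes this method fragile: noise, shadows, and illumination changes all move pixel differences past a fixed cutoff, which motivates the probabilistic models discussed next.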
Journal of Electronic Imaging | 2010
Yannick Benezeth; Pierre-Marc Jodoin; Bruno Emile; Hélène Laurent; Christophe Rosenberger
In this paper, we present a comparative study of several state-of-the-art background subtraction methods. Approaches ranging from simple background subtraction with global thresholding to more sophisticated statistical methods have been implemented and tested on different videos with ground truth. The goal of this study is to provide a solid analytic ground for underscoring the strengths and weaknesses of the most widely implemented motion detection methods. The methods are compared based on their robustness to different types of video, their memory requirements, and the computational effort they require. The impact of a Markovian prior as well as of some post-processing operators is also evaluated. Most of the videos used in this study come from state-of-the-art benchmark databases and represent different challenges such as poor signal-to-noise ratio, multimodal background motion, and camera jitter. Overall, this study not only helps to understand which types of video each method is best suited to, but also estimates how much better sophisticated methods perform compared to basic background subtraction.
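One common family of post-processing operators evaluated in studies like this removes isolated false detections from the binary motion mask. A minimal sketch of a 3x3 majority-vote filter (the window size and vote count are illustrative choices, not necessarily the operators tested in the paper):

```python
import numpy as np

def majority_filter(mask):
    """3x3 majority vote over a binary mask: a pixel stays foreground
    only if at least 5 of the 9 pixels in its neighborhood are foreground."""
    padded = np.pad(mask.astype(np.uint8), 1)
    counts = sum(padded[i:i + mask.shape[0], j:j + mask.shape[1]]
                 for i in range(3) for j in range(3))
    return counts >= 5

# Isolated noise pixel is removed; a solid 3x3 object survives.
mask = np.zeros((5, 5), dtype=bool)
mask[0, 0] = True              # isolated false detection
mask[2:5, 2:5] = True          # solid 3x3 object
cleaned = majority_filter(mask)
```

Note that such a filter also erodes object corners slightly, which is one reason post-processing choices affect the comparison between detectors.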
EURASIP Journal on Advances in Signal Processing | 2006
Sébastien Chabrier; Bruno Emile; Christophe Rosenberger; Hélène Laurent
We present in this paper a study of unsupervised evaluation criteria that enable quantification of the quality of an image segmentation result. These evaluation criteria compute statistics for each region or class in a segmentation result. Such criteria can be useful for different applications: the comparison of segmentation results, the automatic choice of the best-fitted parameters of a segmentation method for a given image, or the definition of new segmentation methods by optimization. We first present the state of the art of unsupervised evaluation, and then compare six unsupervised evaluation criteria. For this comparative study, we use a database composed of 8400 synthetic gray-level images segmented in four different ways. Vinet's measure (correct classification rate) is used as an objective criterion to compare the behavior of the different criteria. Finally, we present experimental results on the segmentation evaluation of a few gray-level natural images.
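An unsupervised criterion of the kind described, one that computes a statistic per region, can be illustrated with a size-weighted intra-region variance (a generic homogeneity measure for illustration; not necessarily one of the six criteria compared in the paper):

```python
import numpy as np

def intra_region_variance(image, labels):
    """Unsupervised segmentation criterion: per-region gray-level
    variances, weighted by region size. Lower values indicate more
    homogeneous (better separated) regions."""
    total = 0.0
    for r in np.unique(labels):
        pixels = image[labels == r]
        total += pixels.size * pixels.var()
    return total / image.size

# A segmentation that matches the image structure scores 0; merging
# the two distinct regions into one yields a large variance.
image = np.array([[10, 10, 200, 200]] * 4)
good = np.array([[0, 0, 1, 1]] * 4)
bad = np.zeros_like(good)
```

Because no ground truth appears in the formula, such a criterion can rank segmentations of arbitrary images, which is what makes it usable for parameter selection or as an optimization objective.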
EURASIP Journal on Image and Video Processing | 2008
Sébastien Chabrier; Christophe Rosenberger; Bruno Emile; Hélène Laurent
Many works in the literature focus on the definition of evaluation metrics and criteria that enable quantification of the performance of an image processing algorithm. These evaluation criteria can be used to define new image processing algorithms by optimizing them. In this paper, we propose a general scheme for segmenting images with a genetic algorithm. The developed method uses an evaluation criterion that quantifies the quality of an image segmentation result. The proposed segmentation method can integrate a local ground truth, when available, in order to set the desired level of precision of the final result. A genetic algorithm is then used to determine the best combination of the information extracted by the selected criterion. We then show that this approach can be applied to either gray-level or multicomponent images, in a supervised or an unsupervised context. Finally, we demonstrate the efficiency of the proposed method through experimental results on several gray-level and multicomponent images.
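The idea of driving segmentation by optimizing an evaluation criterion can be illustrated with a toy genetic algorithm that evolves a global threshold, using as fitness the within-class variance that Otsu's method minimizes (the GA parameters and fitness choice here are illustrative assumptions, not the paper's actual scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(image, t):
    """Segmentation quality of threshold t: negative within-class
    variance (the criterion Otsu's method minimizes); higher is better."""
    lo, hi = image[image < t], image[image >= t]
    if lo.size == 0 or hi.size == 0:
        return -np.inf
    return -(lo.size * lo.var() + hi.size * hi.var()) / image.size

def ga_threshold(image, pop_size=20, generations=30):
    """Toy genetic algorithm: evolve a population of thresholds,
    keeping the best half and mutating it with Gaussian noise."""
    pop = rng.uniform(image.min() + 1, image.max(), size=pop_size)
    for _ in range(generations):
        pop = pop[np.argsort([fitness(image, t) for t in pop])][::-1]
        parents = pop[:pop_size // 2]
        children = parents + rng.normal(0, 5, size=parents.size)
        pop = np.concatenate([parents, children])
    return max(pop, key=lambda t: fitness(image, t))
```

On a bimodal image with modes at 10 and 200, any threshold between the modes achieves the optimal fitness, so the GA converges quickly; the paper's contribution lies in using richer criteria (and optional local ground truth) rather than a single threshold.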
EURASIP Journal on Image and Video Processing | 2008
Sébastien Chabrier; Hélène Laurent; Christophe Rosenberger; Bruno Emile
We present in this article a comparative study of well-known supervised evaluation criteria that enable quantification of the quality of contour detection algorithms. The tested criteria are often used or combined in the literature to create new ones. Although these criteria are classical, no comparison has been made on a large amount of data to understand their relative behaviors. The objective of this article is to overcome this lack by using large test databases, both synthetic and real, allowing comparison in various situations and application fields, and consequently to start a general comparison that could be extended by anyone interested in this topic. After a review of the most common criteria used to quantify the quality of contour detection algorithms, their respective performances are presented on synthetic segmentation results in order to show their relevance in the face of undersegmentation, oversegmentation, or situations combining both perturbations. These criteria are then tested on natural images in order to cover the diversity of situations that may be encountered. The databases used and the resulting study can constitute the groundwork for any researcher who wants to compare a new criterion against well-known ones.
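One classical supervised contour-evaluation criterion of the kind surveyed is Pratt's figure of merit, which scores detected contour pixels by their squared distance to the nearest reference contour pixel (whether it is among the criteria tested here is not stated in the abstract; this is a generic sketch):

```python
import numpy as np

def pratt_fom(detected, reference, alpha=1 / 9):
    """Pratt's figure of merit for contour detection: 1.0 means a
    perfect match; shifted or spurious contour pixels lower the score."""
    det = np.argwhere(detected)
    ref = np.argwhere(reference)
    # Brute-force nearest squared distance (fine for small images).
    d2 = ((det[:, None, :] - ref[None, :, :]) ** 2).sum(-1).min(1)
    return (1.0 / (1.0 + alpha * d2)).sum() / max(len(det), len(ref))
```

The `max(len(det), len(ref))` denominator is what penalizes both oversegmentation (too many detected pixels) and undersegmentation (too few), the two perturbations the abstract singles out.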
International Conference on Image Processing | 2005
Anant Choksuriwong; Hélène Laurent; Bruno Emile
This paper deals with the performance evaluation of three invariant object descriptors: Hu moments, Zernike moments, and Fourier-Mellin descriptors. Experiments are conducted on a database of 100 objects extracted from the Columbia Object Image Library (COIL-100). Original images from this database only present geometric transformations. They therefore allow us to quantify the scale and rotation invariance of the different features and to compare their ability to discriminate between objects. In order to test the robustness of the three descriptors, we have completed the data set with images including different perturbations: noise, occlusion, luminance variation, and backgrounds (uniform, noisy, textured) added behind the object. Recognition tests are carried out using a support vector machine as the supervised classification method. Experimental results are summarized and analyzed, allowing a comparison of the global performance of the different descriptors.
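The first two Hu moment invariants can be computed directly from normalized central moments; the sketch below checks rotation invariance on a toy binary shape (a generic implementation for illustration, not the paper's code):

```python
import numpy as np

def moment_invariants(image):
    """First two Hu moment invariants, built from normalized central
    moments; unchanged under translation, rotation, and scaling."""
    img = image.astype(float)
    m00 = img.sum()
    ys, xs = np.indices(img.shape)
    xc, yc = (xs * img).sum() / m00, (ys * img).sum() / m00

    def eta(p, q):
        mu = (((xs - xc) ** p) * ((ys - yc) ** q) * img).sum()
        return mu / m00 ** (1 + (p + q) / 2)

    i1 = eta(2, 0) + eta(0, 2)
    i2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return i1, i2

# An asymmetric "L" shape: its invariants survive a 90-degree rotation.
img = np.zeros((8, 8))
img[2:6, 3] = 1.0
img[2, 3:6] = 1.0
```

Zernike moments and Fourier-Mellin descriptors achieve the same kind of invariance with richer (and more discriminative) feature sets, which is what the paper's comparison measures.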
International Conference on Image and Signal Processing | 2008
Yannick Benezeth; Bruno Emile; Hélène Laurent; Christophe Rosenberger
We present in this article a human detection and tracking algorithm using infrared vision, intended to provide reliable information on room occupancy. We intend to use this information to limit energy consumption (lighting, heating). We first perform foreground segmentation with a Gaussian background model. A tracking step based on connected component intersections allows us to collect information on the 2D displacements of moving objects in the image plane. A classification based on a cascade of boosted classifiers is used for recognition. Experimental results show the efficiency of the proposed algorithm.
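Tracking by connected-component intersections can be sketched as follows: label the blobs in consecutive foreground masks, then associate any pair of blobs whose pixel supports overlap (details such as 4-connectivity are assumptions for the sketch, not the paper's specification):

```python
import numpy as np
from collections import deque

def label_blobs(mask):
    """4-connected component labeling by breadth-first search."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        current += 1
        labels[start] = current
        queue = deque([start])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

def match_blobs(prev_labels, curr_labels, n_prev, n_curr):
    """Associate blobs across frames when their pixel supports intersect."""
    matches = []
    for i in range(1, n_prev + 1):
        for j in range(1, n_curr + 1):
            if np.any((prev_labels == i) & (curr_labels == j)):
                matches.append((i, j))
    return matches
```

A blob that moves only a few pixels between frames still intersects its previous support, so this simple test is enough to chain detections into 2D trajectories at typical frame rates.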
International Journal of Social Robotics | 2010
Yannick Benezeth; Bruno Emile; Hélène Laurent; Christophe Rosenberger
In this paper, we propose a vision-based system for human detection and tracking in an indoor environment using a static camera. The proposed method is based on object recognition in still images combined with methods using temporal information from the video. In doing so, we improve the performance of the overall system and reduce the task complexity. We first use background subtraction to limit the search space of the classifier. The segmentation is performed by modeling each background pixel with a single Gaussian model. As each connected component detected by the background subtraction potentially corresponds to one person, each blob is tracked independently. The tracking process is based on the analysis of connected component positions and on interest-point tracking. In order to determine the nature of the various objects that could be present in the scene, we use multiple cascades of boosted classifiers based on Haar-like filters. We also present in this article a wide evaluation of this system on a large set of videos.
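A per-pixel single-Gaussian background model of the kind described maintains a running mean and variance per pixel and flags pixels that deviate by more than a few standard deviations (the learning rate and threshold below are illustrative values, not the paper's):

```python
import numpy as np

def update_gaussian(mean, var, frame, alpha=0.05):
    """Running update of a per-pixel single-Gaussian background model."""
    diff = frame - mean
    mean = mean + alpha * diff
    var = (1 - alpha) * var + alpha * diff ** 2
    return mean, var

def classify(frame, mean, var, k=2.5):
    """A pixel is foreground if it lies more than k standard deviations
    from the background mean."""
    return np.abs(frame - mean) > k * np.sqrt(var)
```

Restricting the Haar-cascade classifier to the blobs this model produces is what cuts the search space: the expensive sliding-window detection only runs where motion was observed.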
International Conference on Indoor Positioning and Indoor Navigation | 2013
Wael Elloumi; Kamel Guissous; Aladine Chetouani; Raphael Canals; Rémy Leconge; Bruno Emile; Sylvie Treuillet
Indoor navigation assistance is a highly challenging task that is increasingly needed in various types of applications, such as guidance of the visually impaired, emergency intervention, and tourism. Many alternatives to GPS have been explored to deal with this challenge, such as pre-installed sensor networks (WiFi, ultra-wideband, Bluetooth, radio-frequency identification, etc.), inertial sensors, or cameras. This paper presents an indoor navigation system on a smartphone designed with low cost, portability, and a lightweight algorithm (in terms of computational power and storage space) in mind. The proposed solution relies on embedded vision. A robust and fast camera orientation (three degrees of freedom) is estimated by tracking three orthogonal vanishing points in a video stream acquired with the camera of a hand-held smartphone. The developed algorithm enables indoor pedestrian localization in two steps. An off-line learning step defines a reference path by selecting key frames along the way using a saliency extraction method and computing the camera orientation in these frames. Then, in the localization step, an approximate but realistic position of the walker is estimated in real time by comparing the camera orientation in the current image with that of the reference, in order to assist the pedestrian with navigation guidance. Unlike SLAM, this approach does not require building a 3D map of the environment. The online walking direction is given by the smartphone camera, which advantageously replaces the compass sensor, since the latter performs very poorly indoors due to electromagnetic noise. Experiments executed online on the smartphone show the feasibility and evaluate the accuracy of the proposed positioning approach on different indoor paths.
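Under a pinhole camera model, the heading relative to a corridor axis follows directly from the horizontal image position of the dominant vanishing point; a minimal sketch of that single ingredient (the principal point and focal length below are hypothetical calibration values, and the paper's full method estimates all three orientation angles from three orthogonal vanishing points):

```python
import numpy as np

def yaw_from_vanishing_point(u, cx, focal):
    """Camera yaw (radians) relative to the corridor axis, from the
    horizontal image coordinate u of the dominant vanishing point,
    given the principal point cx and focal length (both in pixels)."""
    return np.arctan2(u - cx, focal)

# Vanishing point at the image center means the camera looks straight
# down the corridor; an offset of one focal length means a 45-degree yaw.
straight = yaw_from_vanishing_point(320, 320, 500)
turned = yaw_from_vanishing_point(820, 320, 500)
```

Because this angle comes from image geometry alone, it is immune to the electromagnetic disturbances that degrade magnetic compass readings indoors.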
Journal of Electronic Imaging | 2008
Anant Choksuriwong; Bruno Emile; Hélène Laurent; Christophe Rosenberger
Although many invariant object descriptors have been proposed in the literature, putting them into practice to obtain a recognition system that is robust to several perturbations remains an open problem. After presenting the most commonly used global invariant descriptors, a comparative study shows their ability to discriminate between objects with little training. The Columbia Object Image Library (COIL-100) database, which presents the same objects translated, rotated, and scaled, is used to test invariance to geometric transforms. Images with partial object occlusion or complex backgrounds are used to test the robustness of the studied descriptors. We compare them in both a global and a local context (computed on the neighborhood of a pixel). The scale-invariant feature transform (SIFT) descriptor is used as a reference for local invariant descriptors. This study shows the relative performance of invariant descriptors used in both global and local contexts and identifies the situations for which each is best suited.