Gilles Simon
French Institute for Research in Computer Science and Automation
Publication
Featured research published by Gilles Simon.
international conference on computer vision | 1998
Gilles Simon; Marie-Odile Berger
We present a model registration system capable of tracking an object whose model is known through an image sequence. It integrates tracking, pose determination and updating of the visible features. The heart of our system is the pose computation method, which handles various features (points, lines and free-form curves) in a very robust way and gives a correct estimate of the pose even when tracking errors occur. The reliability of the system is demonstrated on an augmented reality project.
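The robustness the abstract ascribes to the pose computation typically comes from an M-estimator that downweights features hit by tracking errors. A minimal sketch, assuming a Tukey biweight on pixel residuals (the function name and threshold are illustrative, not taken from the paper):

```python
import numpy as np

def tukey_weights(residuals, c=4.685):
    """Tukey biweight: weight falls to 0 for residuals beyond c (outliers)."""
    r = np.abs(residuals)
    w = np.zeros_like(r, dtype=float)
    inlier = r < c
    w[inlier] = (1.0 - (r[inlier] / c) ** 2) ** 2
    return w

# Reprojection residuals in pixels: mostly small (good tracks),
# plus one gross tracking error.
res = np.array([0.3, -0.5, 0.8, -0.2, 25.0])
w = tukey_weights(res)
# the 25-pixel outlier gets zero weight, so it cannot bias the pose estimate
```

In an iteratively reweighted least-squares loop, these weights multiply each feature's contribution to the pose update, which is how a single bad track is prevented from corrupting the estimate.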
machine vision applications | 1999
Marie-Odile Berger; Brigitte Wrobel-Dautcourt; Sylvain Petitjean; Gilles Simon
Mixing video and computer-generated images is a new and promising area of research for enhancing reality. It can be used in situations where a complete simulation would be difficult to implement. Past work on the subject has relied largely on human intervention at key moments of the composition. In this paper, we show that if enough geometric information about the environment is available, efficient tools from the computer vision literature can be used to build a highly automated augmented reality loop. We focus on outdoor urban environments and present an application for the visual assessment of a new lighting project for the bridges of Paris. We present a fully augmented 300-image sequence of a specific bridge, the Pont Neuf. Emphasis is put on the robust computation of the camera position. We also detail the techniques used for matching 2D and 3D primitives and for tracking features over the sequence. Our system overcomes two major difficulties. First, it can handle poor-quality images: the sequence was shot at night, since the goal was to simulate a new lighting system. Second, it can deal with significant changes in viewpoint position and in appearance along the sequence. Throughout the paper, many results illustrate the different steps and the difficulties encountered.
international symposium on mixed and augmented reality | 2006
Gilles Simon
This paper proposes a method to automatically detect and reconstruct planar surfaces for immediate use in AR tasks. Traditional methods for plane detection are typically based on comparing the transfer errors of a homography, which makes them sensitive to the choice of a discrimination threshold. We propose a very different approach: the image is divided into a grid, and rectangles that belong to the same planar surface are clustered around the local maxima of a Hough transform. As a result, we simultaneously obtain clusters of coplanar rectangles and the image of their intersection line with a reference plane, which readily yields their 3D position and orientation. Results are shown on both synthetic and real data.
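The contrast with threshold-based homography tests can be illustrated with a toy Hough accumulator: each grid rectangle casts a vote for the (theta, rho) parameters of its intersection line with the reference plane, and coplanar rectangles pile up at a common peak. A hypothetical sketch, with bin counts and votes invented for illustration:

```python
import numpy as np

def hough_peak(votes, n_theta=18, n_rho=20, rho_max=10.0):
    """Accumulate (theta, rho) votes in a coarse grid and return the peak cell."""
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for theta, rho in votes:
        ti = int(theta / np.pi * n_theta) % n_theta
        ri = min(int(rho / rho_max * n_rho), n_rho - 1)
        acc[ti, ri] += 1
    ti, ri = np.unravel_index(np.argmax(acc), acc.shape)
    return ti, ri, acc[ti, ri]

# Three rectangles on one plane (same line, small noise) plus one stray vote.
votes = [(1.0, 3.0), (1.02, 3.05), (0.99, 3.2), (2.5, 8.0)]
ti, ri, count = hough_peak(votes)
# the peak cell collects the three coplanar rectangles
```

No per-rectangle accept/reject threshold is needed: cluster membership falls out of which accumulator cell a rectangle votes into.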
international symposium on mixed and augmented reality | 1999
Gilles Simon; Marie-Odile Berger
We focus on the problem of adding computer-generated objects to video sequences shot with a zoom lens camera. While numerous papers have been devoted to registration with fixed focal length, little attention has been paid to zoom lens cameras. We propose an efficient two-stage algorithm for handling the zoom changes that are likely to occur in a video sequence. We first attempt to partition the video into camera motions and zoom variations. Then, classical registration methods are used on the frames labelled camera motion while keeping the internal parameters constant, whereas only the zoom parameters are updated for the frames labelled zoom variation. Results demonstrate registration on various sequences. Augmented video sequences are also shown.
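The first stage of the algorithm amounts to labelling each frame transition as either camera motion or zoom variation. A toy sketch of that partitioning, assuming an inter-frame magnification estimate is available (the tolerance value and function name are illustrative, not from the paper):

```python
# Label each frame transition: a zoom variation when the inter-frame
# magnification departs from 1, camera motion otherwise. Pose parameters
# and focal length can then be updated separately per label.
def partition(magnifications, tol=0.02):
    labels = []
    for m in magnifications:
        labels.append("zoom" if abs(m - 1.0) > tol else "motion")
    return labels

mags = [1.00, 1.01, 1.10, 1.12, 0.99]
labels = partition(mags)
# -> ['motion', 'motion', 'zoom', 'zoom', 'motion']
```

Keeping the internal parameters fixed on "motion" frames avoids the well-known ambiguity between a forward translation and a zoom-in, which is what makes the two-stage split worthwhile.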
Computer Graphics Forum | 1996
Marie-Odile Berger; Christine Chevrier; Gilles Simon
Augmented reality shows great promise in fields where a simulation in situ would be impossible or too expensive. When mixing synthetic and real objects in the same animated sequence, we must ensure both geometric and photometric coherence. One major challenge is to compute the camera viewpoint with sufficient accuracy to ensure a satisfactory composition. We address this point in particular in this paper, using computer vision techniques and robust statistical methods. We show that such techniques make it possible to compute the viewpoint almost automatically for long video sequences, even for poor-quality images in outdoor environments. Significant results on the lighting simulation of the bridges of Paris are shown.
international symposium on mixed and augmented reality | 2005
Javier-Flavio Vigueras Gomez; Gilles Simon; Marie-Odile Berger
This paper confronts theoretical camera models with reality and evaluates the suitability of these models for effective augmented reality (AR). It analyses what level of accuracy can be expected in real situations using a particular camera model, and how robust the results are against realistic calibration errors. The experimental protocol consists of taking images of a particular scene with cameras of different quality mounted on a 4-DOF micro-controlled device. The scene is made of a calibration target and three markers placed at different distances from the target. This protocol enables us to consider assessment criteria specific to AR, such as alignment error and visual impression, in addition to the classical camera positioning error.
asian conference on computer vision | 1998
Marie-Odile Berger; Gilles Simon
We present our augmented reality system for image composition, designed to avoid heavy and tedious user interaction. In this paper, we focus in particular on the robust temporal registration method we have devised. An original method for resolving occlusions is also presented.
european conference on computer vision | 2000
Gilles Simon; Marie-Odile Berger
We focus on the problem of adding computer-generated objects to video sequences shot with a zoom lens camera. While numerous papers have been devoted to registration with fixed focal length, little attention has been paid to zoom lens cameras. We propose an efficient two-stage algorithm for handling the zoom changes that are likely to occur in a video sequence. We first attempt to partition the video into camera motions and zoom variations. Then, classical registration methods are used on the frames labelled camera motion while keeping the internal parameters constant, whereas only the zoom parameters are updated for the frames labelled zoom variation. Results demonstrate registration on various sequences. Augmented video sequences are also shown.
international symposium on mixed and augmented reality | 2009
Gilles Simon
In this paper, we describe a purely image-based system that allows a user to interactively capture the 3D geometry of a polyhedral scene with the aid of its physical presence. A video camera is used as both an interaction and tracking device. The 3D user interface is intuitive to a non-expert and the mouseless control procedure makes the system particularly suitable for mobile devices such as PDAs and mobile phones. The efficiency and accuracy of the method are demonstrated on a polyhedral scene made of two house-like boxes.
international conference on pattern recognition | 2010
Srikrishna Bhat K. K; Marie-Odile Berger; Gilles Simon; Frédéric Sur
We present a transitive-closure-based visual word formation technique for obtaining robust object representations from smoothly varying multiple views. Each of our visual words is represented by a set of feature vectors obtained by performing a transitive closure operation on SIFT features. We also present a range-reducing tree structure to speed up the transitive closure operation. The robustness of our visual word representation is demonstrated on structure from motion (SfM) and location identification in video images.
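The transitive closure step can be sketched with a union-find structure: if feature a matches b and b matches c, all three end up in the same visual word. A minimal illustration of the closure idea only, not of the paper's range-reducing tree:

```python
# Merge pairwise SIFT matches across views by transitive closure,
# implemented with union-find (path-halving find).
def closure(n_features, matches):
    parent = list(range(n_features))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in matches:
        parent[find(a)] = find(b)

    words = {}
    for f in range(n_features):
        words.setdefault(find(f), []).append(f)
    return list(words.values())

# Matches chain features 0-1-2 across three views; 3 and 4 match separately.
words = closure(5, [(0, 1), (1, 2), (3, 4)])
# -> two visual words: {0, 1, 2} and {3, 4}
```

A visual word built this way covers the appearance variation of one physical point across the smoothly varying views, which is what makes the representation usable for SfM and location identification.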