David Roussel
Centre national de la recherche scientifique
Publication
Featured research published by David Roussel.
Virtual Reality | 2011
Mahmoud Haydar; David Roussel; Madjid Maidi; Samir Otmane; Malik Mallem
This paper addresses the preservation of cultural heritage using virtual reality (VR) and augmented reality (AR) technologies in a cultural context. While both technologies are covered, the focus is on 3D visualisation and 3D interaction modalities, illustrated through three demonstrators: two VR demonstrators (immersive and semi-immersive) and an AR demonstrator including tangible user interfaces. To show the benefits of VR and AR for studying and preserving cultural heritage, we investigated visualisation of and interaction with reconstructed underwater archaeological sites. The idea behind using VR and AR techniques is to offer archaeologists and the general public new insights into the reconstructed archaeological sites, allowing archaeologists to study directly from within the virtual site and the general public to immersively explore a realistic reconstruction of the sites. Both activities are based on the same VR engine, but differ drastically in the way they present information and exploit interaction modalities. The visualisation and interaction techniques developed through these demonstrators are the result of an ongoing dialogue between archaeological requirements and the technological solutions developed.
international conference on virtual reality | 2008
Mahmoud Haydar; Madjid Maidi; David Roussel; Malik Mallem; Pierre Drap; Kim Bale; Paul Chapman
This paper describes the ongoing developments in photogrammetry and mixed reality for the Venus European project (Virtual ExploratioN of Underwater Sites, http://www.venus-project.eu). The main goal of the project is to provide archaeologists and the general public with virtual and augmented reality tools for exploring and studying deep underwater archaeological sites out of reach of divers. These sites have to be reconstructed in terms of environment (seabed) and content (artifacts) by performing bathymetric and photogrammetric surveys on the real site and matching points between geolocalized pictures. The idea behind using mixed reality techniques is to offer archaeologists and the general public new insights into the reconstructed archaeological sites, allowing archaeologists to study directly from within the virtual site and the general public to immersively explore a realistic reconstruction of the sites. Both activities are based on the same VR engine but differ drastically in the way they present information. General-public activities emphasize the visual and auditory realism of the reconstruction, while archaeologists' activities emphasize functional aspects focused on the cargo study rather than realism, which leads to the development of two parallel VR demonstrators. This paper focuses on several key points developed for the reconstruction process as well as on both VR demonstrators (archaeological and general public). The first key point concerns the densification of seabed points obtained through photogrammetry in order to obtain high-quality terrain reproduction. The second concerns the development of the virtual and augmented reality (VR/AR) demonstrators for archaeologists, designed to exploit the results of the photogrammetric reconstruction. The third concerns the development of the VR demonstrator for the general public, aimed at creating awareness of both the artifacts that were found and the process by which they were discovered, by recreating the dive process from ship to seabed.
international conference on robotics and automation | 2005
Jean-Yves Didier; David Roussel; Malik Mallem
One of the key problems in augmented reality systems is the synchronization of the real and virtual worlds. In the context of our research, we need to augment a view of the real world with virtual elements, which first requires determining the position and orientation of the point of view, and then computing a virtual view for that location. Augmented reality uses many different sensors to estimate the camera's or operator's point of view. These sensors can provide samples faster than the mix of virtual and real information can be displayed. We present a way to take into account samples generated during the mixing process by combining augmented reality and virtual reality techniques. The method uses a post-rendering technique involving a texture, carried out by a software graphics pipeline, and is mainly devoted to optical see-through augmented reality systems. The algorithm can introduce deformations in the final rendered image. We compute the theoretical errors introduced by this method and compare the results with those obtained on a simulation test bench implementing our proposal.
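For the rotation-only case, such a post-rendering correction can be expressed as an image-space homography applied to the already rendered frame. The sketch below is an illustrative assumption, not the paper's implementation: `post_render_warp_homography` and `warp_pixel` are hypothetical names, and the formula H = K·R·K⁻¹ only covers a rotational pose update.

```python
import numpy as np

def post_render_warp_homography(K, R_delta):
    """Homography re-projecting an already rendered frame under a small
    rotation-only pose update: H = K @ R_delta @ inv(K).
    Illustrative sketch under stated assumptions, not the paper's code."""
    return K @ R_delta @ np.linalg.inv(K)

def warp_pixel(H, u, v):
    """Apply homography H to pixel (u, v) using homogeneous coordinates."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

With the identity rotation the homography is the identity, so pixels stay in place; a small head rotation between pose sample and display yields a cheap image warp instead of a full re-render.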
INTELLIGENT SYSTEMS AND AUTOMATION: 2nd Mediterranean Conference on Intelligent Systems and Automation (CISA'09) | 2009
Mahmoud Haydar; Madjid Maidi; David Roussel; Malik Mallem
Navigation in virtual environments is a complex task that imposes a high cognitive load on the user: it consists in maintaining knowledge of the user's current position and orientation while moving through the space. In this paper, we present a novel approach for navigation in 3D virtual environments. The method is based on the principle of skiing, and the idea is to give the user full control of navigation speed and rotation using both hands. This technique enables user-steered exploration by determining the direction and speed of motion from the positions of the user's hands. A speed-control module lets the user easily adjust speed through the angle between the hands, while the direction of motion is given by the axis orthogonal to the segment joining the two hands. A user study shows the efficiency of the method in performing exploration tasks in complex, large-scale 3D environments. Furthermore, we proposed an experimental protocol to prove ...
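The two geometric rules the abstract states (direction orthogonal to the hand segment, speed from the angle between the hands) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the body-centered hand vectors, and the linear angle-to-speed mapping are assumptions.

```python
import numpy as np

def ski_navigation(body, left_hand, right_hand, max_speed=2.0):
    """Derive a 2D motion direction and a speed from two 3D hand positions.

    Direction: orthogonal to the horizontal projection of the segment
    joining the hands. Speed: proportional to the angle between the
    body-to-hand vectors (a hypothetical mapping for illustration).
    """
    body, lh, rh = (np.asarray(p, dtype=float)
                    for p in (body, left_hand, right_hand))

    # Segment joining the two hands, projected on the horizontal plane.
    segment = (rh - lh)[:2]
    n = np.linalg.norm(segment)
    if n == 0.0:                       # degenerate: hands coincide
        return np.zeros(2), 0.0
    # Direction of motion: axis orthogonal to that segment.
    direction = np.array([-segment[1], segment[0]]) / n

    # Speed control: angle between the two body-to-hand vectors.
    v1, v2 = lh - body, rh - body
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.arccos(np.clip(cos_a, -1.0, 1.0))
    speed = max_speed * angle / np.pi  # 0 (hands aligned) .. max_speed
    return direction, speed
```

For example, hands held symmetrically one unit in front of the body at a right angle yield a unit forward direction and half the maximum speed.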
international symposium on mixed and augmented reality | 2004
Jean-Yves Didier; David Roussel; Malik Mallem
One of the key problems in augmented reality systems is registration, that is to say the synchronization of the real and virtual worlds. Augmented reality uses many different sensors to estimate the camera's or operator's point of view. These sensors can provide samples faster than the mix of virtual and real information can be displayed. We present a way to take into account samples generated during the mixing process. The method uses a post-rendering technique involving a texture to perform this task. We report the error reduction obtained with this technique on a simulation test bench implementing our proposal.
advanced concepts for intelligent vision systems | 2016
Hakim Elchaoui Elghor; David Roussel; Fakhreddine Ababsa; El-Houssine Bouyakhf
Applications such as Simultaneous Localization and Mapping (SLAM) can greatly benefit from RGB-D sensor data to produce 3D maps of the environment as well as an estimate of the sensor's trajectory. However, the resulting 3D point map can be cumbersome, and since indoor environments are mainly composed of planar surfaces, the idea is to use planes as building blocks for the SLAM process. This paper describes an RGB-D SLAM system that benefits from plane segmentation to generate lightweight 3D plane-based maps. Our goal is to produce reduced 3D maps composed solely of plane sections that can be used on platforms with limited memory and computation resources. We present the introduction of planar regions into a regular RGB-D SLAM system and evaluate the benefits for both the resulting map and the estimated camera trajectory.
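The core of a plane-based map is replacing a cloud of coplanar points with a few plane parameters. The sketch below shows one standard way to do this (least-squares plane fit via SVD); it is an illustrative stand-in, not the paper's segmentation pipeline, and the function name is an assumption.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit returning (n, d) with n . p + d = 0.

    The normal n is the right singular vector associated with the
    smallest singular value of the mean-centered point matrix.
    Illustrative sketch of representing a planar map region by its
    plane parameters instead of raw points.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # Smallest-variance direction of the centered points = plane normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    d = -normal.dot(centroid)
    return normal, d
```

A map entry can then store just `(normal, d)` plus a boundary polygon for the plane section, which is what makes the resulting map lightweight compared with thousands of raw RGB-D points.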
international conference on pattern recognition | 2002
Fakhreddine Ababsa; David Roussel; Malik Mallem; Jean-Yves Didier
This paper presents a new method for automatic matching between a free-form 3D object and a single image. This matching can be used for object recognition and/or 3D object recovery. The problem has raised a tremendous amount of interest in research fields such as computer vision, and more specifically in augmented reality. The original idea of our method is to use a photo-clinometric framework (such as shape from shading) to extract normal vector information from the image. Our interest focuses on the distribution of angles between the surface normal vector and the incident light vector. This information is discriminative for free-form objects without symmetries, and we show that it can be used to match the image to the model using an aspect graph featuring this distribution.
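The angle distribution the abstract describes can be summarized as a normalized histogram of normal-to-light angles. The sketch below is a hypothetical stand-in for such a signature (the function name and bin count are assumptions), not the paper's actual descriptor.

```python
import numpy as np

def angle_histogram(normals, light_dir, bins=18):
    """Normalized histogram of angles between surface normals and the
    incident light direction, over [0, pi].

    Hypothetical illustration of an angle-distribution signature that
    could be compared between image-derived and model-derived normals.
    """
    n = np.asarray(normals, dtype=float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    # Angle between each unit normal and the unit light vector.
    angles = np.arccos(np.clip(n @ l, -1.0, 1.0))
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, np.pi))
    return hist / hist.sum()
```

Two such histograms (one from shape-from-shading normals in the image, one from a candidate model view) could then be compared with any standard distribution distance to score a match.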
digital identity management | 1999
David Roussel; Patrick Bourdot; Rachid Gherbi
We propose a reconstruction technique that consists in defining a parametric surface lying on a closed curve by setting the extremal behavior of the surface along the curve on which it lies. We must therefore first build a set of space curves from a stereo reconstruction of matched closed contours in an image pair. Then, we use photometric information extracted near the image contours to define the local behavior of the surfaces lying on the closed curves. To build such a surface, we use the duality between stereo-based reconstruction, which determines the position of points in the scene, and photoclinometry, which gives information about the shape of the objects in the scene. Our geometric model is composed of a radial set of triparametric Gregory patches for each surface to reconstruct. The topological concepts introduced by this kind of surface, together with the use of photometric models, give a framework for both image analysis and geometric reconstruction.
international symposium on mixed and augmented reality | 2005
Jean-Yves Didier; David Roussel; Malik Mallem; Samir Otmane; Sylvie Naudet; Quoc-Cuong Pham; Steve Bourgeois; Christine Mégard; Christophe Leroux; Arnaud Hocquard
international conference on robotics and automation | 2004
Fakhreddine Ababsa; Malik Mallem; David Roussel