Publication


Featured research published by Francois de Sorbier.


Scandinavian Conference on Image Analysis | 2013

3D Object Pose Estimation Using Viewpoint Generative Learning

Dissaphong Thachasongtham; Takumi Yoshida; Francois de Sorbier; Hideo Saito

Conventional local features such as SIFT or SURF are robust to scale and rotation changes but sensitive to large perspective changes. Because perspective change always occurs when a 3D object moves, using these features to estimate the pose of a 3D object is a challenging task. In this paper, we extend one of our previous works on viewpoint generative learning to 3D objects. Given a model of a textured object, we virtually generate several patterns of the model from different viewpoints and select stable keypoints from those patterns. Our system then learns a collection of feature descriptors from the stable keypoints. Finally, we are able to estimate the pose of a 3D object by using these robust features. Our experimental results demonstrate that our system is robust against large viewpoint changes and even under partial occlusion.
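
The stable-keypoint selection described above can be sketched as follows; this is a minimal illustration that assumes each generated viewpoint yields a set of re-detected model keypoint IDs (the rendering and feature detection steps are omitted, and `min_ratio` is a hypothetical threshold, not a value from the paper):

```python
import numpy as np

def select_stable_keypoints(detections_per_view, n_model_points, min_ratio=0.8):
    """Keep model keypoints re-detected in at least min_ratio of the
    generated viewpoints (the 'stable keypoint' selection step)."""
    counts = np.zeros(n_model_points, dtype=int)
    for detected_ids in detections_per_view:
        counts[list(detected_ids)] += 1
    threshold = min_ratio * len(detections_per_view)
    return np.flatnonzero(counts >= threshold)

# Toy example: 6 model keypoints, 5 generated viewpoints.
views = [{0, 1, 2, 5}, {0, 1, 2}, {0, 2, 3, 5}, {0, 1, 2, 5}, {0, 2, 4, 5}]
stable = select_stable_keypoints(views, 6, min_ratio=0.8)
print(stable)  # keypoints 0, 2 and 5 are seen in >= 4 of the 5 views
```

The descriptors of the surviving keypoints would then be collected into the database used at pose-estimation time.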


Multimedia Signal Processing | 2010

Depth camera based system for auto-stereoscopic displays

Francois de Sorbier; Yuko Uematsu; Hideo Saito

Stereoscopic displays are becoming very popular as more and more content is now available. As an extension, auto-stereoscopic screens allow several users to watch stereoscopic images without wearing any glasses. For the moment, synthesized content is the easiest way to provide, in real time, all the multiple input images required by this kind of technology. Live video, however, is an important issue in fields like augmented reality, but remains difficult to apply to auto-stereoscopic displays. In this paper, we present a system based on a depth camera and a color camera that are combined to produce the multiple input images in real time. The result of this approach can easily be used with any kind of auto-stereoscopic screen.
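
The multiple-view generation can be illustrated with a toy forward warp in the DIBR style; this is a sketch under a simplified model where horizontal disparity is inversely proportional to depth, not the authors' implementation:

```python
import numpy as np

def synthesize_views(color, depth, n_views, baseline_px=8.0):
    """Forward-warp one color+depth pair into n_views horizontally shifted
    virtual views, as required by an auto-stereoscopic display (toy model)."""
    h, w = depth.shape
    xs = np.arange(w)
    views = []
    for k in range(n_views):
        # Virtual cameras placed symmetrically around the captured one.
        offset = (k - (n_views - 1) / 2.0) * baseline_px
        disparity = offset / np.maximum(depth, 1e-6)  # closer -> larger shift
        out = np.zeros_like(color)
        for y in range(h):
            tx = np.clip(np.round(xs + disparity[y]).astype(int), 0, w - 1)
            out[y, tx] = color[y, xs]  # holes (newly exposed areas) stay at zero
        views.append(out)
    return views
```

For an odd number of views, the central view coincides with the captured image; the side views are shifted copies with holes where the warp exposed new areas.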


Virtual Systems and Multimedia | 2010

Augmented reality for 3D TV using depth camera input

Francois de Sorbier; Yuki Takaya; Yuko Uematsu; Ismaël Daribo; Hideo Saito

This paper presents a capture system based on a depth camera that is used for an augmented reality application. Most depth cameras are unable to capture the color information corresponding to the same viewpoint. A color camera is therefore added beside the depth camera, and we apply a transformation algorithm to match the depth map with the color camera's viewpoint. Then, using a depth image based rendering (DIBR) approach, it becomes possible to synthesize new virtual views from the 2D-plus-depth data. We also address a key issue in the generation of the virtual views: dealing with the newly exposed areas, appearing as holes and denoted as disocclusions, which may be revealed in each warped image. The color image and its corresponding enhanced depth image are then combined to produce a mesh representing the real scene, which is used to easily integrate virtual objects into the real scene. Finally, the result can be rendered to create the input images required by an auto-stereoscopic screen.
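
One common way to fill such disoccluded holes is to propagate the nearest valid pixel from one side, on the assumption that holes open on the background side; a minimal sketch (the paper's actual hole-filling strategy may differ):

```python
import numpy as np

def fill_disocclusions(warped, hole_mask):
    """Fill holes revealed by the warp with the last valid pixel seen when
    scanning each row left to right (simple background propagation)."""
    out = warped.copy()
    for y in range(out.shape[0]):
        last = out[y, 0]
        for x in range(out.shape[1]):
            if hole_mask[y, x]:
                out[y, x] = last
            else:
                last = out[y, x]
    return out
```

More elaborate inpainting methods exist, but this scan-line heuristic already removes the visually disturbing zero-valued gaps.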


European Workshop on Visual Information Processing | 2014

Real-time enhancement of RGB-D point clouds using piecewise plane fitting

Kazuki Matsumoto; Francois de Sorbier; Hideo Saito

In this paper, we propose an efficient framework for reducing noise and holes in depth maps captured with an RGB-D camera. This is performed by applying plane fitting to the groups of points assimilable to planar structures and filtering the curved-surface points. We present a new method for finding global planar structures in a 3D scene by combining superpixel segmentation and graph component labeling. The superpixel segmentation is based not only on color information but also on depth and normal maps. The labeling process is carried out by considering each normal in the given superpixel clusters. We evaluate the reliability of each plane structure and apply the plane fitting only to true planar surfaces. As a result, our system can reduce the noise of the depth map, especially on planar areas, while preserving curved surfaces. The process runs in real time thanks to GPGPU acceleration via the CUDA architecture.
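
The core plane-fitting step can be sketched with an ordinary least-squares fit; this assumes a segment's points are already grouped (the superpixel segmentation and graph labeling are omitted):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through an Nx3 point array."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def snap_to_plane(points, coeffs):
    """Replace noisy z values with the fitted plane's z (the denoising step
    applied to segments classified as planar)."""
    a, b, c = coeffs
    out = points.copy()
    out[:, 2] = a * out[:, 0] + b * out[:, 1] + c
    return out
```

Snapping the depths of a planar segment onto its fitted plane removes per-pixel sensor noise while leaving curved segments untouched.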


European Conference on Computer Vision | 2014

Visualization of Temperature Change Using RGB-D Camera and Thermal Camera

Wataru Nakagawa; Kazuki Matsumoto; Francois de Sorbier; Maki Sugimoto; Hideo Saito; Shuji Senda; Takashi Shibata; Akihiko Iketani

In this paper, we present a system for visualizing temperature changes in a scene using an RGB-D camera coupled with a thermal camera. This system has applications in the context of maintenance of power equipment. We propose a two-stage approach made of an offline and an online phase. During the first stage, after calibration, we generate a 3D reconstruction of the scene with the color and the thermal data. We then apply the Viewpoint Generative Learning (VGL) method on the colored 3D model to create a database of descriptors obtained from features robust to strong viewpoint changes. During the second, online phase, we compare the descriptors extracted from the current view against the ones in the database to estimate the pose of the camera. We can then display the current thermal data and compare it with the data saved during the offline phase.
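
The online descriptor lookup can be sketched as nearest-neighbour matching against the offline database; the ratio test used here is a standard heuristic, not necessarily the exact criterion of the paper:

```python
import numpy as np

def match_to_database(query_desc, db_desc, ratio=0.8):
    """Match each query descriptor to the offline database by nearest
    neighbour, keeping a match only if the best distance is clearly
    smaller than the second best (ratio test). Returns (query, db) pairs."""
    matches = []
    for i, q in enumerate(query_desc):
        d = np.linalg.norm(db_desc - q, axis=1)
        j, k = np.argsort(d)[:2]
        if d[j] < ratio * d[k]:
            matches.append((i, int(j)))
    return matches
```

The resulting 2D-3D correspondences (each database descriptor is anchored on the 3D model) would then feed a standard PnP-style pose estimation.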


Proceedings of SPIE | 2014

Joint upsampling and noise reduction for real-time depth map enhancement

Kazuki Matsumoto; Chiyoung Song; Francois de Sorbier; Hideo Saito

An efficient system that upsamples the depth map captured by a Microsoft Kinect while jointly reducing the effect of noise is presented. The upsampling is carried out by detecting and exploiting the piecewise locally planar structures of the downsampled depth map, based on the corresponding high-resolution RGB image. The amount of noise is reduced by simultaneously accumulating the downsampled data. By benefiting from the massively parallel computing capability of modern commodity GPUs, the system is able to maintain a high frame rate. Our system is observed to produce an upsampled depth map that is very close to the original depth map both visually and mathematically.
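
The noise-reduction-by-accumulation idea can be sketched as an exponentially weighted running average of incoming depth frames; `alpha` is a hypothetical smoothing factor, and the paper's actual accumulation scheme may differ:

```python
import numpy as np

class DepthAccumulator:
    """Exponentially weighted running average of depth frames: a simple way
    to reduce per-frame sensor noise by accumulating data over time."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha  # weight of the newest frame
        self.state = None

    def push(self, depth):
        if self.state is None:
            self.state = depth.astype(float).copy()
        else:
            self.state = (1 - self.alpha) * self.state + self.alpha * depth
        return self.state
```

Because zero-mean noise averages out across frames while the true surface stays put, the accumulated map is less noisy than any single frame (at the cost of some lag on moving scenes).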


International Conference on Computer Vision Theory and Applications | 2015

Illumination Estimation and Relighting using an RGB-D Camera

Yohei Ogura; Takuya Ikeda; Francois de Sorbier; Hideo Saito

In this paper, we propose a relighting system combined with an illumination estimation method using an RGB-D camera. Relighting techniques can achieve the photometric registration of composite images. They often need the illumination environment of the scene, which includes a target object and the background scene. Some relighting methods obtain the illumination environment beforehand; in this case, they cannot be used under an unknown, dynamic illumination environment. Some online illumination estimation methods need light probes, which can intrude on the scene geometry. In our method, the illumination environment is estimated online from pixel intensity, a normal map, and surface reflectance, based on inverse rendering. The normal map of the arbitrary object, which is used in both the illumination estimation part and the relighting part, is calculated from the denoised depth image on each frame. Relighting is achieved by calculating the ratio of the estimated illumination environments of each scene. Thus our implementation can be used with dynamic illumination or a dynamic object.
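
Under a simple Lambertian model, the inverse-rendering step reduces to a least-squares problem; a minimal sketch assuming unit normals, a single distant light, and albedo folded into the light vector (the paper estimates a fuller illumination environment):

```python
import numpy as np

def estimate_light(normals, intensities):
    """Estimate a scaled light direction l from per-pixel unit normals n and
    observed intensities I, assuming Lambertian shading I = n . l."""
    l, *_ = np.linalg.lstsq(normals, intensities, rcond=None)
    return l

def relight(intensities, normals, l_old, l_new, eps=1e-6):
    """Relight each pixel by the ratio of new to old estimated shading."""
    old = np.maximum(normals @ l_old, eps)
    new = np.maximum(normals @ l_new, 0.0)
    return intensities * new / old
```

The ratio form mirrors the abstract: once the illumination of the current frame is estimated, pixels are rescaled by new shading over old shading, so no per-pixel albedo needs to be recovered explicitly.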


3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2014

Camera pose estimation for mixed and diminished reality in FTV

Hideo Saito; Toshihiro Honda; Yusuke Nakayama; Francois de Sorbier

In this paper, we present methods for camera pose estimation for mixed and diminished reality visualization in FTV applications. We first present Viewpoint Generative Learning (VGL) based on a 3D scene model reconstructed using multiple cameras, including an RGB-D camera. In VGL, a database of feature descriptors is generated for the 3D scene model to make the pose estimation robust to viewpoint changes. Then we introduce an application of VGL to diminished reality. We also present our novel line feature descriptor, LEHF, which is also applied to line-based SLAM to improve camera pose estimation.


Revised Selected and Invited Papers of the International Workshop on Advances in Depth Image Analysis and Applications - Volume 7854 | 2012

Towards an Augmented Reality System for Violin Learning Support

Hiroyuki Shiino; Francois de Sorbier; Hideo Saito

The violin is one of the most beautiful but also one of the most difficult musical instruments for a beginner. This paper presents ongoing work on a new augmented reality system for learning how to play the violin. We propose to help players by virtually guiding the movement of the bow and the correct position of their fingers for pressing the strings. Our system also recognizes the musical note played and the correctness of its pitch. The main benefit of our system is that it does not require any specific marker, since our real-time solution is based on a depth camera.
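
The pitch-correctness check can be sketched by mapping a detected frequency to the nearest equal-tempered note plus an error in cents; this assumes A4 = 440 Hz and says nothing about how the system actually detects the frequency:

```python
import math

A4 = 440.0
NOTES = ['A', 'A#', 'B', 'C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#']

def note_and_cents(freq):
    """Return the nearest equal-tempered note name and the pitch error in
    cents (100 cents = one semitone), relative to A4 = 440 Hz."""
    semitones = 12 * math.log2(freq / A4)
    nearest = round(semitones)
    cents = 100 * (semitones - nearest)
    return NOTES[nearest % 12], cents

print(note_and_cents(440.0))  # ('A', 0.0)
print(note_and_cents(660.0))  # an E, about 2 cents sharp
```

A tolerance on the cents value (say, a few tens of cents) would then decide whether the played note counts as in tune.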


International Conference on Pattern Recognition Applications and Methods | 2015

Plane Fitting and Depth Variance Based Upsampling for Noisy Depth Map from 3D-ToF Cameras in Real-time

Kazuki Matsumoto; Francois de Sorbier; Hideo Saito

Recent advances in ToF depth sensor devices enable us to easily retrieve scene depth data at high frame rates. However, the resolution of the depth map captured by these devices is much lower than that of color images, and the depth data suffers from optical noise effects. In this paper, we propose an efficient algorithm that upsamples the depth map captured by ToF depth cameras and reduces noise. The upsampling is carried out by applying plane-based interpolation to the groups of points similar to planar structures, and depth-variance-based joint bilateral upsampling to curved or bumpy surface points. For dividing the depth map into piecewise planar areas, we apply superpixel segmentation and graph component labeling. In order to distinguish planar areas from curved areas, we evaluate the reliability of the detected plane structures. Compared with other state-of-the-art algorithms, our method is observed to produce an upsampled depth map that is smoother and closer to the ground-truth depth map both visually and numerically. Since the algorithm is parallelizable, it can work in real time by utilizing the highly parallel processing capabilities of modern commodity GPUs.
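
The plane-reliability test that routes a segment to plane-based interpolation or to joint bilateral upsampling can be sketched as thresholding the residual of a least-squares plane fit; `tol` is a hypothetical threshold, not a value from the paper:

```python
import numpy as np

def plane_residual_std(points):
    """Standard deviation of residuals of a least-squares plane
    z = a*x + b*y + c fitted through an Nx3 point array."""
    A = np.column_stack([points[:, :2], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return np.std(points[:, 2] - A @ coeffs)

def is_planar(points, tol=0.01):
    """Classify a segment as truly planar when the fit residual is small."""
    return plane_residual_std(points) < tol
```

Segments passing the test get their depths interpolated from the fitted plane; the rest keep a variance-aware bilateral treatment so curved surfaces are not flattened.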

Collaboration


Top co-authors of Francois de Sorbier.
Pascal Chaudeyrac (University of Marne-la-Vallée)

Patrice Bouvier (University of Marne-la-Vallée)

Venceslas Biri (University of Marne-la-Vallée)