
Publication


Featured research published by Paul Kerbiriou.


IEEE Haptics Symposium | 2012

Framework for enhancing video viewing experience with haptic effects of motion

Fabien Danieau; Julien Fleureau; Audrey Cabec; Paul Kerbiriou; Philippe Guillotel; Nicolas Mollet; Marc Christie; Anatole Lécuyer

This work aims to enhance a classical video viewing experience by introducing realistic haptic effects in a consumer environment. More precisely, we propose a complete framework to both produce and render the motion embedded in audiovisual content, enhancing a natural movie viewing session. We focus on the case of first-person point-of-view audiovisual content and propose a general workflow to address this problem. The workflow includes a novel approach for capturing both the motion and the video of the scene of interest, together with a haptic rendering system that generates a sensation of motion. Finally, a complete methodology for evaluating the relevance of our framework is proposed and demonstrates the value of our approach.


Proceedings of SPIE | 2014

Fusion of Kinect depth data with trifocal disparity estimation for near real-time high quality depth maps generation

Guillaume Boisson; Paul Kerbiriou; Valter Drazic; Olivier Bureller; Neus Sabater; Arno Schubert

Generating depth maps along with video streams is valuable for cinema and television production. Thanks to improvements in depth acquisition systems, the fusion of depth sensing and disparity estimation is widely investigated in computer vision. This paper presents a new framework for generating depth maps from a rig made of a professional camera, two satellite cameras and a Kinect device. A new disparity-based calibration method is proposed so that registered Kinect depth samples become perfectly consistent with the disparities estimated between rectified views. A new hierarchical fusion approach is also proposed for combining depth sensing and disparity estimation on the fly, circumventing their respective weaknesses. Depth is determined by minimizing a global energy criterion that takes into account both the matching reliability and the consistency with the Kinect input. The resulting depth maps are relevant in both uniform and textured areas, without holes due to occlusions or structured-light shadows. Our GPU implementation reaches 20 fps when generating quarter-pel-accurate HD720p depth maps along with the main view, which is close to real-time performance for video applications. The estimated depth is of high quality and suitable for 3D reconstruction or virtual view synthesis.
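The energy-minimization step in the abstract can be pictured with a minimal per-pixel sketch (not the paper's actual implementation; the function name `fuse_depth`, the cost-volume layout and the penalty weight `lam` are illustrative assumptions): each pixel takes the disparity minimizing its matching cost plus a penalty for deviating from the registered Kinect sample, applied only where a Kinect measurement exists.

```python
import numpy as np

def fuse_depth(cost_volume, kinect_disp, kinect_mask, lam=0.5):
    """Per-pixel energy minimization over disparity candidates.

    cost_volume : (n_disp, H, W) matching cost per candidate disparity
    kinect_disp : (H, W) registered Kinect disparity samples
    kinect_mask : (H, W) 1.0 where a Kinect sample exists, else 0.0
    lam         : weight of the Kinect-consistency term
    """
    n_disp = cost_volume.shape[0]
    disps = np.arange(n_disp).reshape(-1, 1, 1)
    # Consistency term: only active where the Kinect mask is set.
    penalty = lam * np.abs(disps - kinect_disp[None]) * kinect_mask[None]
    energy = cost_volume + penalty
    # Winner-takes-all over the combined energy (the paper minimizes
    # a *global* criterion; this local version is only a sketch).
    return energy.argmin(axis=0)
```

A global solver (e.g. graph cuts or semi-global matching) would add spatial smoothness on top of this data term; the sketch keeps only the two terms named in the abstract.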


International Conference on Computer Vision Theory and Applications | 2016

Shape and Reflectance from RGB-D Images using Time Sequential Illumination

Matis Hudon; Adrien Gruson; Paul Kerbiriou; Rémi Cozot; Kadi Bouatouch

In this paper we propose a method for recovering the shape (geometry) and the diffuse reflectance of a scene from an image (or video) using a hybrid setup consisting of a depth sensor (Kinect), a consumer camera and partially controlled illumination (a flash). The objective is to show how combining RGB-D acquisition with time-sequential illumination helps shape and reflectance recovery. A pair of images is captured: one without flash (under ambient illumination only) and one with flash. A pure flash image is computed by subtracting the non-flashed image from the flashed one. We propose a novel, near real-time algorithm, based on a local illumination model of our flash and the pure flash image, to enhance the geometry (from the noisy depth map) and recover reflectance information.
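The pure-flash computation described above is a direct subtraction, sketched below under the standard assumptions for flash/no-flash pairs (linear sensor response, aligned exposures); the function name is illustrative:

```python
import numpy as np

def pure_flash_image(ambient, flashed):
    """Isolate the flash contribution to the scene's illumination.

    Subtracts the ambient-only exposure from the flashed exposure and
    clips negatives (which can appear due to noise) to zero. Assumes
    both images are linear (not gamma-encoded) and pixel-aligned.
    """
    a = np.asarray(ambient, dtype=np.float64)
    f = np.asarray(flashed, dtype=np.float64)
    return np.clip(f - a, 0.0, None)
```

Because the flash is the only illuminant left in the difference image, its position and falloff are known, which is what makes the local illumination model of the paper applicable.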


Proceedings of SPIE | 2010

Looking for an adequate quality criterion for depth coding

Paul Kerbiriou; Guillaume Boisson

This paper deals with 3DTV, and more specifically with 3D content transmission using a disparity-based format. In 3DTV, the problem of measuring the stereoscopic quality of 3D content remains open. Depth-signal degradations due to 3DTV transmission induce new types of artifacts in the final rendered views. Whereas we have some experience with the issue of texture coding, the consequences of depth coding are largely unknown, and this paper focuses on that particular issue. For that purpose we considered LDV (Layered Depth Video) contents and performed various encodings of their depth information - i.e. depth maps plus depth occlusion layers - using MPEG-4 Part 10 AVC/H.264 MVC. We investigate the impact of depth-coding artifacts on the quality of the final views by computing the correlation between depth-coding errors and the quality of the synthesized views. The criteria used for synthesized views include MSE and structural criteria such as SSIM. The criteria used for depth maps also include a topological measure in 3D space (the Hausdorff distance). Correlations between the two sets of criteria are presented, and trends as a function of quantization are discussed.
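The core measurement of the study, correlating a depth-map error criterion with a synthesized-view criterion across encodings, can be sketched as follows (a minimal illustration, not the paper's evaluation code; SSIM and the Hausdorff distance are omitted for brevity):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images or depth maps."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(np.mean((a - b) ** 2))

def pearson(x, y):
    """Pearson correlation between two lists of per-encoding scores,
    e.g. depth-map MSEs vs. synthesized-view MSEs."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))
```

One score pair would be computed per quantization setting: `mse(depth_ref, depth_decoded)` against `mse(view_ref, view_synthesized)`, and `pearson` applied across all settings reveals how predictive the depth criterion is of final view quality.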


International Conference on Multimedia and Expo | 2017

Camera-agnostic format and processing for light-field data

Mitra Damghanian; Paul Kerbiriou; Valter Drazic; Didier Doyen; Laurent Blonde

Light fields (LF) are foreseen as an enabler for the next generation of 3D/AR/VR experiences. However, the lack of unified representation, storage and processing formats, the variety of LF acquisition systems and capture-specific LF processing algorithms prevent cross-platform approaches and constrain the advancement and standardization of LF information. In this work we present our vision for a camera-agnostic format and processing of LF data, aiming at a common ground for LF data storage, communication and processing. As a proof of concept for a camera-agnostic pipeline, we present a new and efficient LF storage format (for 4D rays) and demonstrate the feasibility of camera-agnostic LF processing by implementing a camera-agnostic depth extraction method. We use LF data from a camera-rig acquisition setup and several synthetic inputs, including plenoptic and non-plenoptic captures, to emphasize the camera-agnostic nature of the proposed LF storage and processing pipeline.
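The abstract does not detail the 4D-ray format itself; a common camera-agnostic convention is the two-plane parameterization, where each ray is stored by its intersections (u, v) and (s, t) with two reference planes plus its radiance. The record layout and function below are illustrative assumptions, not the paper's format:

```python
import numpy as np

# Hypothetical per-ray record: two-plane coordinates plus RGB radiance.
ray_dtype = np.dtype([
    ("u", np.float32), ("v", np.float32),   # first reference plane
    ("s", np.float32), ("t", np.float32),   # second reference plane
    ("rgb", np.float32, 3),                 # radiance along the ray
])

def pack_rays(uv, st, rgb):
    """Pack N rays into a flat, capture-independent structured array.

    uv, st : (N, 2) plane-intersection coordinates
    rgb    : (N, 3) radiance samples
    """
    rays = np.empty(len(uv), dtype=ray_dtype)
    rays["u"], rays["v"] = uv[:, 0], uv[:, 1]
    rays["s"], rays["t"] = st[:, 0], st[:, 1]
    rays["rgb"] = rgb
    return rays
```

The point of such a layout is that rays from a plenoptic camera, a camera rig or a synthetic renderer all reduce to the same records, so downstream processing (e.g. depth extraction) never needs to know the capture device.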


Computer Vision and Pattern Recognition | 2017

Dataset and Pipeline for Multi-view Light-Field Video

Neus Sabater; Guillaume Boisson; Benoit Vandame; Paul Kerbiriou; Frederic Babon; Matthieu Hog; Remy Gendrot; Tristan Langlois; Olivier Bureller; Arno Schubert; Valerie Allie

The quantity and diversity of data in light-field videos make this content valuable for many applications, such as mixed and augmented reality or post-production in the movie industry. Some of these applications require a large parallax between the different views of the light field, making multi-view capture a better option than plenoptic cameras. In this paper we propose a dataset and a complete pipeline for light-field video. The proposed algorithms are specially tailored to process sparse, wide-baseline multi-view videos captured with a camera rig. Our pipeline includes algorithms such as geometric calibration, color homogenization, view pseudo-rectification and depth estimation. Such elemental algorithms are well known in the state of the art, but they must achieve high accuracy to guarantee the success of other algorithms using our data. Along with this paper, we publish our light-field video dataset, which we believe will be of special interest to the community. We provide the original sequences, the calibration parameters and the pseudo-rectified views. Finally, we propose a depth-based rendering algorithm for Dynamic Perspective Rendering.


Archive | 2002

Method and device for coding a mosaic

Paul Kerbiriou; Dominique Thorbeau; Gwenael Kervella; Edouard Francois


Archive | 2004

Method to transmit and receive font information in streaming systems

David Sahuc; Thierry Viellard; Paul Kerbiriou


Archive | 2009

Coding device for 3D video signals

Guillaume Boisson; Paul Kerbiriou; Patrick Lopez


Archive | 2009

Multistandard coding device for 3D video signals

Guillaume Boisson; Paul Kerbiriou; Patrick Lopez
