Publication


Featured research published by Guillaume Boisson.


Proceedings of SPIE | 2012

Video retargeting for stereoscopic content under 3D viewing constraints

Christel Chamaret; Guillaume Boisson; C. Chevance

The imminent deployment of new devices supporting stereoscopic display, such as TVs, tablets, and smartphones, creates a need for retargeting content. These devices bring their own aspect ratios and potentially small screen sizes. Aspect ratio conversion becomes mandatory, and an automatic solution is of high value, especially if it maximizes visual comfort. Some issues inherent to the 3D domain are considered in this paper: no vertical disparity, and no object with negative disparity (outward perception) on the border of the cropping window. A visual attention model is applied to each view and provides saliency maps identifying the most attractive pixels. A dedicated 3D retargeting step correlates the 2D attention maps of the two views, together with additional computed information, to determine the best cropping window. Constraints specific to the 3D experience influence the retargeted window through maps marking objects that must not be cropped. Compared with original 2.35:1 content displayed with black stripes, which provides a limited 3D experience on a TV screen, the automatic cropping and exploitation of the full screen yield a more immersive experience. The proposed system is fully automatic and ensures good final quality without losing parts that are fundamental to the global understanding of the scene. Eye-tracking data recorded on stereoscopic content were compared against the retargeted windows to verify that the most attractive areas remain inside the final video.
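
As a rough illustration of the cropping-window search described in this abstract (not the authors' implementation), the sketch below combines the saliency maps of the two views and scans candidate windows of the target size, rejecting windows whose border would cut through pixels flagged as protected. The function name, the simple sum of the two maps and the forbid_mask argument are illustrative assumptions.

import numpy as np

def best_crop_window(sal_left, sal_right, win_h, win_w, forbid_mask=None):
    """Pick the (win_h x win_w) crop that captures the most combined saliency.

    sal_left / sal_right : 2D saliency maps of the two views (same size).
    forbid_mask          : optional boolean map of pixels that must not lie on the
                           crop border (stand-in for the "no negative-disparity
                           object on the window edge" constraint).
    Returns (top, left) of the chosen window.
    """
    sal = sal_left + sal_right                      # correlate the two 2D maps (simple sum here)
    H, W = sal.shape
    # integral image for O(1) window sums
    ii = np.zeros((H + 1, W + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(sal, axis=0), axis=1)

    best, best_pos = -np.inf, (0, 0)
    for top in range(H - win_h + 1):
        for left in range(W - win_w + 1):
            s = (ii[top + win_h, left + win_w] - ii[top, left + win_w]
                 - ii[top + win_h, left] + ii[top, left])
            if forbid_mask is not None:
                border = np.concatenate([
                    forbid_mask[top, left:left + win_w],
                    forbid_mask[top + win_h - 1, left:left + win_w],
                    forbid_mask[top:top + win_h, left],
                    forbid_mask[top:top + win_h, left + win_w - 1]])
                if border.any():          # window edge would cut a protected object
                    continue
            if s > best:
                best, best_pos = s, (top, left)
    return best_pos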


international conference on image processing | 2004

Accuracy-scalable motion coding for efficient scalable video compression

Guillaume Boisson; Edouard Francois; Christine Guillemot

For a scalable video coder to remain efficient over a wide range of bit-rates, covering e.g. both mobile video streaming and TV broadcasting, some form of scalability must exist in the motion information. In this paper we propose a new (1+2D) wavelet-based spatio-SNR-temporal-scalable video codec, coupled with an accuracy-scalable motion codec. It allows decoding a reduced amount of motion information at subresolutions, taking advantage of the fact that motion compensation requires less and less accuracy at lower spatial resolutions. This new motion codec proves its efficiency in our fully scalable framework by significantly improving video quality at subresolutions without inducing any noticeable penalty at high bit-rates.
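
The following sketch only illustrates the general idea of accuracy-scalable motion information, assuming quarter-pel motion vectors stored as integers: at reduced spatial resolutions the decoder can subsample the field, halve the displacements and drop low-order accuracy bits. The function and parameter names are hypothetical and do not reflect the actual codec of the paper.

import numpy as np

def scale_motion_field(mv_qpel, level, drop_bits):
    """Derive a coarse-accuracy motion field for a reduced spatial resolution.

    mv_qpel   : int array (H, W, 2), motion vectors in quarter-pel units at full resolution.
    level     : number of dyadic spatial downsamplings (resolution divided by 2**level).
    drop_bits : accuracy bits discarded at this level (the idea behind accuracy
                scalability: coarser resolutions need less vector precision).
    """
    # subsample the field spatially and halve the displacements per level
    step = 2 ** level
    mv = mv_qpel[::step, ::step] // step
    # truncate low-order accuracy bits (e.g. quarter-pel -> half-pel for drop_bits=1)
    return (mv >> drop_bits) << drop_bits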


Proceedings of SPIE | 2014

Fusion of Kinect depth data with trifocal disparity estimation for near real-time high quality depth maps generation

Guillaume Boisson; Paul Kerbiriou; Valter Drazic; Olivier Bureller; Neus Sabater; Arno Schubert

Generating depth maps along with video streams is valuable for Cinema and Television production. Thanks to the improvements of depth acquisition systems, the challenge of fusing depth sensing and disparity estimation is widely investigated in computer vision. This paper presents a new framework for generating depth maps from a rig made of a professional camera with two satellite cameras and a Kinect device. A new disparity-based calibration method is proposed so that registered Kinect depth samples become perfectly consistent with disparities estimated between rectified views. In addition, a new hierarchical fusion approach is proposed for combining on-the-fly depth sensing and disparity estimation in order to circumvent their respective weaknesses. Depth is determined by minimizing a global energy criterion that takes into account the matching reliability and the consistency with the Kinect input. The resulting depth maps are relevant both in uniform and textured areas, without holes due to occlusions or structured-light shadows. Our GPU implementation reaches 20 fps when generating quarter-pel accurate HD720p depth maps along with the main view, which is close to real-time performance for video applications. The estimated depth is of high quality and suitable for 3D reconstruction or virtual view synthesis.
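
As a much-simplified stand-in for the fusion described above (the paper minimizes a global energy with a hierarchical approach; this sketch keeps only per-pixel data terms), the following code combines a stereo matching cost volume with a consistency term toward registered Kinect disparities. The lam weight and the reliability weighting scheme are assumptions for illustration.

import numpy as np

def fuse_depth(cost_volume, d_kinect, reliability, lam=0.1):
    """Pixel-wise stand-in for the global energy criterion of the paper.

    cost_volume : (D, H, W) matching costs between rectified views for each disparity d.
    d_kinect    : (H, W) Kinect disparities registered into the reference view
                  (np.nan where the sensor gave no sample).
    reliability : (H, W) confidence of the stereo matching in [0, 1].
    lam         : weight of the consistency-with-Kinect term (assumed value).
    """
    D, H, W = cost_volume.shape
    disparities = np.arange(D).reshape(D, 1, 1)
    # consistency with the Kinect input, ignored where no sample was registered
    consistency = np.where(np.isnan(d_kinect), 0.0, (disparities - d_kinect) ** 2)
    # low matching reliability downweights the stereo data term
    energy = reliability * cost_volume + lam * consistency
    return np.argmin(energy, axis=0)      # (H, W) fused disparity map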


Proceedings of SPIE | 2012

Space carving MVD sequences for modeling natural 3D scenes

Youssef Alj; Guillaume Boisson; Philippe Bordes; Muriel Pressigout; Luce Morin

This paper presents a 3D modeling system designed for Multi-view Video plus Depth (MVD) sequences. The aim is to remove the redundancy in both the texture and the depth information present in MVD data. To this end, a volumetric framework is employed in order to merge the input depth maps, and a variant of the Space Carving algorithm is proposed. Voxels are iteratively carved by ray-casting from each view until the 3D model is geometrically consistent with every input depth map. A surface mesh is then extracted from this volumetric representation using the Marching Cubes algorithm. Subsequently, to address the issue of texture modeling, a new algorithm for multi-texturing the resulting surface is presented. This algorithm selects, from the set of input images, the best texture candidate to map onto a given mesh triangle. The best texture is chosen according to a photo-consistency metric. Tests and results are provided using still images from standard MVD test sequences.
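
A minimal, single-pass sketch of carving voxels against input depth maps (the paper iterates until geometric consistency is reached; the pinhole camera model and the tolerance used here are assumptions) might look as follows.

import numpy as np

def carve(voxel_centers, cameras, depth_maps, tol=0.01):
    """Keep only voxels consistent with every input depth map.

    voxel_centers : (N, 3) world coordinates of voxel centers.
    cameras       : list of (K, R, t) with K 3x3 intrinsics, R 3x3 rotation, t 3-vector.
    depth_maps    : list of (H, W) depth maps aligned with the cameras.
    A voxel is carved when some view sees free space in front of its observed
    surface at the voxel's position (projected depth < measured depth - tol).
    """
    keep = np.ones(len(voxel_centers), dtype=bool)
    for (K, R, t), depth in zip(cameras, depth_maps):
        cam_pts = voxel_centers @ R.T + t          # world -> camera coordinates
        z = cam_pts[:, 2]
        in_front = z > 1e-6
        z_safe = np.where(in_front, z, 1.0)        # avoid dividing by zero behind the camera
        proj = cam_pts @ K.T
        u = np.round(proj[:, 0] / z_safe).astype(int)
        v = np.round(proj[:, 1] / z_safe).astype(int)
        H, W = depth.shape
        visible = in_front & (u >= 0) & (u < W) & (v >= 0) & (v < H)
        d_obs = np.where(visible, depth[np.clip(v, 0, H - 1), np.clip(u, 0, W - 1)], np.inf)
        keep &= ~(visible & (z < d_obs - tol))     # carve voxels lying in observed free space
    return voxel_centers[keep]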


Proceedings of SPIE | 2010

Looking for an adequate quality criterion for depth coding

Paul Kerbiriou; Guillaume Boisson

This paper deals with 3DTV, and more specifically with 3D content transmission using a disparity-based format. In 3DTV, the problem of measuring the stereoscopic quality of 3D content remains open. Depth signal degradations due to 3DTV transmission induce new types of artifacts in the final rendered views. Whereas we have some experience regarding texture coding, the consequences of depth coding are rather unknown. In this paper we focus on that particular issue. For that purpose we considered LDV (Layered Depth Video) content and performed various encodings of its depth information - i.e. depth maps plus depth occlusion layers - using MPEG-4 Part 10 AVC/H.264 MVC. We investigate the impact of depth coding artifacts on the quality of the final views. To this end, we compute the correlation between depth coding errors and the quality of the synthesized views. The criteria used for the synthesized views include MSE and structural criteria such as SSIM. The criteria used for the depth maps also include a topological measure in 3D space (the Hausdorff distance). Correlations between the two sets of criteria are presented. Trends as a function of quantization are also discussed.


international conference on 3d imaging | 2012

Multi-texturing 3D models: How to choose the best texture?

Youssef Alj; Guillaume Boisson; Philippe Bordes; Muriel Pressigout; Luce Morin

In this article, the impact of 2D-based approaches for multi-texturing 3D models using real images is studied. While conventional 3D-based approaches assign the best texture to each mesh triangle according to geometric criteria such as triangle orientation or triangle area, 2D-based approaches tend to minimize the distortion between the rendered views and the original ones. The two strategies are evaluated on real scenes for two image sequences, and results are provided using the PSNR metric. Moreover, an improvement of the image-based approach is proposed for texturing partially visible triangles.
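
The contrast between the two strategies can be sketched as follows; both functions are illustrative stand-ins (the actual geometric and photometric criteria of the paper may differ): the first picks a texture per triangle from 3D geometry only, the second from precomputed per-view rendering errors.

import numpy as np

def pick_texture_geometric(tri_vertices, tri_normal, camera_centers):
    """3D-based selection for one mesh triangle: prefer the camera whose viewing
    direction is most aligned with the triangle normal (a stand-in for the
    orientation / projected-area criteria mentioned above).

    tri_vertices   : (3, 3) triangle vertex coordinates.
    tri_normal     : (3,) unit normal of the triangle.
    camera_centers : (V, 3) optical centers of the input views.
    Returns the index of the selected view.
    """
    centroid = tri_vertices.mean(axis=0)
    view_dirs = camera_centers - centroid
    view_dirs = view_dirs / np.linalg.norm(view_dirs, axis=1, keepdims=True)
    return int(np.argmax(view_dirs @ tri_normal))   # most frontal view wins

def pick_texture_image_based(per_view_render_error):
    """2D-based selection: choose the view whose texture minimizes the distortion
    between the rendered triangle and the original image (errors computed elsewhere)."""
    return int(np.argmin(per_view_render_error))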


computer vision and pattern recognition | 2017

Dataset and Pipeline for Multi-view Light-Field Video

Neus Sabater; Guillaume Boisson; Benoit Vandame; Paul Kerbiriou; Frederic Babon; Matthieu Hog; Remy Gendrot; Tristan Langlois; Olivier Bureller; Arno Schubert; Valerie Allie

The quantity and diversity of data in Light-Field videos make this content valuable for many applications such as mixed and augmented reality or post-production in the movie industry. Some of these applications require a large parallax between the different views of the Light-Field, making multi-view capture a better option than plenoptic cameras. In this paper we propose a dataset and a complete pipeline for Light-Field video. The proposed algorithms are specially tailored to process sparse and wide-baseline multi-view videos captured with a camera rig. Our pipeline includes algorithms such as geometric calibration, color homogenization, view pseudo-rectification and depth estimation. Such elemental algorithms are well known in the state of the art, but they must achieve high accuracy to guarantee the success of other algorithms using our data. Along with this paper, we publish our Light-Field video dataset, which we believe may be of special interest for the community. We provide the original sequences, the calibration parameters and the pseudo-rectified views. Finally, we propose a depth-based rendering algorithm for Dynamic Perspective Rendering.
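
As an illustration of the depth-based rendering stage mentioned at the end of the abstract (not the published algorithm), the sketch below forward-warps one pseudo-rectified view to a virtual viewpoint along the baseline using its disparity map and a simple disparity z-buffer; hole filling and multi-view blending are omitted.

import numpy as np

def render_virtual_view(image, disparity, alpha):
    """Minimal depth-image-based rendering between two pseudo-rectified views.

    image     : (H, W, 3) reference view.
    disparity : (H, W) disparity of the reference view w.r.t. the neighboring view.
    alpha     : virtual camera position in [0, 1] along the baseline (0 = reference).
    Pixels are forward-warped by alpha * disparity; a disparity z-buffer keeps the
    foreground when several pixels land on the same target. Holes stay black here.
    """
    H, W, _ = image.shape
    out = np.zeros_like(image)
    zbuf = np.full((H, W), -np.inf)
    xs = np.arange(W)
    for y in range(H):
        x_target = np.round(xs - alpha * disparity[y]).astype(int)
        valid = (x_target >= 0) & (x_target < W)
        for x in xs[valid]:
            xt = x_target[x]
            if disparity[y, x] > zbuf[y, xt]:       # closer pixel (larger disparity) wins
                zbuf[y, xt] = disparity[y, x]
                out[y, xt] = image[y, x]
    return out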


international conference on image processing | 2008

Inter-view coding for stereoscopic Digital Cinema

Guillaume Boisson; Patrick Lopez

An inter-view coding scheme is proposed for stereoscopic video applications such as 3D Digital Cinema. It tackles inter-view redundancy with efficient inter-view disparity-compensated filtering. In the temporal dimension, intra-coding with JPEG2000 is used to meet Digital Cinema requirements in terms of latency and random access. Inter-view disparity, as well as uncovered areas, are efficiently handled within the inter-view Haar transform, and the resulting overhead information amounts to only a few percent of the bitstream. The proposed stereoscopic coding scheme delivers both left and right views at a high level of quality and yields significant improvements in terms of bitrate savings and image quality compared with simulcast.
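
A much-simplified, predict-only sketch of disparity-compensated inter-view lifting is given below to illustrate how uncovered areas can be left intra-coded and signalled as side information; the integer per-pixel disparity model and the omission of the Haar update step are simplifications for illustration, not the scheme of the paper.

import numpy as np

def interview_haar_predict(left, right, disparity, occlusion):
    """Simplified predict step of a disparity-compensated inter-view transform.

    left, right : (H, W) luma planes of the stereo pair.
    disparity   : (H, W) integer disparities mapping right-view pixels onto the left view.
    occlusion   : (H, W) boolean mask of uncovered areas in the right view, where no
                  left-view reference exists (kept intra, signalled as overhead).
    Returns (low, high): the low band is the left view, the high band is the
    disparity-compensated residual of the right view (intra values where occluded).
    """
    H, W = left.shape
    xs = np.arange(W)
    high = np.empty((H, W), dtype=np.float64)
    for y in range(H):
        x_ref = np.clip(xs + disparity[y], 0, W - 1)        # matching position in the left view
        pred = left[y, x_ref].astype(np.float64)
        r = right[y].astype(np.float64)
        high[y] = np.where(occlusion[y], r, r - pred)
    low = left.astype(np.float64)
    return low, high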


international conference on image processing | 2006

Removing Redundancy in Multi-Resolution Scalable Video Coding Schemes

Guillaume Boisson; Edouard Francois

Nowadays, standard technologies for spatially scalable video coding use Gaussian pyramidal approaches, which naturally lead to redundant descriptions after the temporal analysis. However, solutions have been proposed to preserve the critical-sampling criterion within a multi-resolution framework. This research area is well worth the interest, since suppressing the redundancy potentially improves compression efficiency. We examine here two different solutions, with a special focus on the spectral composition of the transmitted information. Finally, some results are presented and discussed.
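
The redundancy at stake can be illustrated with a small sample-count comparison between a Gaussian/Laplacian pyramid and a critically sampled wavelet decomposition; the numbers below assume a 1080p frame and two extra spatial levels, chosen only for illustration.

def sample_counts(width, height, levels):
    """Compare the number of samples per frame for a redundant Gaussian/Laplacian
    pyramid versus a critically sampled wavelet decomposition."""
    pyramid = sum((width >> l) * (height >> l) for l in range(levels + 1))
    wavelet = width * height                 # critical sampling: no extra samples
    return pyramid, wavelet

# e.g. 1920x1080 with 2 extra spatial levels:
# pyramid = 1920*1080 + 960*540 + 480*270 = 2 721 600 samples (~31% redundancy)
# wavelet = 1920*1080                     = 2 073 600 samples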


Archive | 2004

Scalable encoding and decoding of interlaced digital video data

Gwenaelle Marquant; Guillaume Boisson; Edouard Francois; Jerome Vieron; Philippe Robert; Christine Guillemot
