
Publication


Featured research published by Jean-Yves Guillemaut.


international conference on computer vision | 2009

Robust graph-cut scene segmentation and reconstruction for free-viewpoint video of complex dynamic scenes

Jean-Yves Guillemaut; Joe Kilner; Adrian Hilton

Current state-of-the-art image-based scene reconstruction techniques are capable of generating high-fidelity 3D models when used under controlled capture conditions. However, they are often inadequate when used in more challenging outdoor environments with moving cameras. In this case, algorithms must be able to cope with relatively large calibration and segmentation errors as well as input images separated by a wide baseline and possibly captured at different resolutions. In this paper, we propose a technique which, under these challenging conditions, is able to efficiently compute a high-quality scene representation via graph-cut optimisation of an energy function combining multiple image cues with strong priors. Robustness is achieved by jointly optimising scene segmentation and multiple view reconstruction in a view-dependent manner with respect to each input camera. Joint optimisation prevents propagation of errors from segmentation to reconstruction as is often the case with sequential approaches. View-dependent processing increases tolerance to errors in on-the-fly calibration compared to global approaches. We evaluate our technique in the case of challenging outdoor sports scenes captured with manually operated broadcast cameras and demonstrate its suitability for high-quality free-viewpoint video.
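As a toy illustration of the graph-cut machinery this line of work builds on (not the paper's actual energy, which combines colour, contrast, and prior cues over joint segmentation and depth labels), the sketch below solves a binary foreground/background labelling on a 1-D strip of pixels by max-flow, with invented intensities and an invented smoothness weight:

```python
import numpy as np
import networkx as nx

# Toy 1-D "image": intensities in [0, 1]; label each pixel fg (1) or bg (0).
pixels = np.array([0.9, 0.8, 0.85, 0.2, 0.1, 0.15])

G = nx.DiGraph()
src, snk = "fg", "bg"
lam = 0.5  # smoothness weight (illustrative value)

for i, v in enumerate(pixels):
    # Data terms: labelling a bright pixel bg cuts a costly edge, and vice versa.
    G.add_edge(src, i, capacity=float(v))        # paid if pixel labelled bg
    G.add_edge(i, snk, capacity=float(1.0 - v))  # paid if pixel labelled fg
for i in range(len(pixels) - 1):
    # Smoothness term: penalise neighbouring pixels taking different labels.
    G.add_edge(i, i + 1, capacity=lam)
    G.add_edge(i + 1, i, capacity=lam)

# The minimum s-t cut of this graph is the global optimum of the energy.
cut_value, (fg_side, bg_side) = nx.minimum_cut(G, src, snk)
labels = [1 if i in fg_side else 0 for i in range(len(pixels))]
print(labels)  # [1, 1, 1, 0, 0, 0]
```

Multi-label problems such as joint layer-plus-depth labelling are commonly handled with move-making algorithms (e.g. alpha-expansion) that reduce to a sequence of binary cuts with this same source/sink structure.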


interactive 3d graphics and games | 2012

4D parametric motion graphs for interactive animation

Dan Casas; Margara Tejera; Jean-Yves Guillemaut; Adrian Hilton

A 4D parametric motion graph representation is presented for interactive animation from actor performance capture in a multiple camera studio. The representation is based on a 4D model database of temporally aligned mesh sequence reconstructions for multiple motions. High-level movement controls such as speed and direction are achieved by blending multiple mesh sequences of related motions. A real-time mesh sequence blending approach is introduced which combines the realistic deformation of previous non-linear solutions with efficient online computation. Transitions between different parametric motion spaces are evaluated in real-time based on surface shape and motion similarity. 4D parametric motion graphs allow real-time interactive character animation while preserving the natural dynamics of the captured performance.
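A minimal sketch of the mesh-sequence blending idea, assuming sequences that are already temporally aligned and share vertex topology. The arrays and the plain linear blend are illustrative only; the paper's real-time scheme approximates non-linear blending rather than interpolating raw vertex positions:

```python
import numpy as np

# Two temporally aligned mesh sequences with shared topology, stored as
# arrays of shape (frames, vertices, 3). Synthetic stand-ins for, say,
# reconstructed walk and run motions.
rng = np.random.default_rng(0)
walk = rng.normal(size=(10, 500, 3))
run = walk + rng.normal(scale=0.1, size=(10, 500, 3))

def blend_sequences(seq_a, seq_b, w):
    """Per-vertex linear blend of two aligned mesh sequences.

    w = 0 reproduces seq_a, w = 1 reproduces seq_b; intermediate values
    give a parametric in-between motion, driven by a high-level control
    such as walking speed."""
    return (1.0 - w) * seq_a + w * seq_b

half_speed = blend_sequences(walk, run, 0.5)
print(half_speed.shape)  # (10, 500, 3)
```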


international conference on 3-d digital imaging and modeling | 2007

A Bayesian Framework for Simultaneous Matting and 3D Reconstruction

Jean-Yves Guillemaut; Adrian Hilton; Jonathan Starck; Joe Kilner; Oliver Grau

Conventional approaches to 3D scene reconstruction often treat matting and reconstruction as two separate problems, with matting a prerequisite to reconstruction. The problem with such an approach is that it requires taking irreversible decisions at the first stage, which may translate into reconstruction errors at the second stage. In this paper, we propose an approach which attempts to solve both problems jointly, thereby avoiding this limitation. A general Bayesian formulation for estimating opacity and depth with respect to a reference camera is developed. In addition, it is demonstrated that in the special case of binary opacity values (background/foreground) and discrete depth values, a global solution can be obtained via a single graph-cut computation. We demonstrate the application of the method to novel view synthesis in the case of a large-scale outdoor scene. An experimental comparison with a two-stage approach based on chroma-keying and shape-from-silhouette illustrates the advantages of the new method.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2005

Using points at infinity for parameter decoupling in camera calibration

Jean-Yves Guillemaut; Alberto S. Aguado; John Illingworth

The majority of camera calibration methods, including the gold standard algorithm, use point-based information and simultaneously estimate all calibration parameters. In contrast, we propose a novel calibration method that exploits line orientation information and decouples the problem into two simpler stages. We formulate the problem as minimization of the lateral displacement between single projected image lines and their vanishing points. Unlike previous vanishing point methods, parallel line pairs are not required. Additionally, the invariance properties of vanishing points mean that multiple images related by pure translation can be used to increase the calibration data set size without increasing the number of estimated parameters. We compare this method with vanishing point methods and the gold standard algorithm and demonstrate that it has comparable performance.
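The decoupling rests on a standard projective fact: the image of a 3-D direction d under a camera K[R|t] is the vanishing point v ≃ KRd, independent of the translation t, which is why views related by pure translation enlarge the data set without adding parameters. A small numpy check of this property (camera parameters and the direction are invented for illustration):

```python
import numpy as np

# Pinhole camera: intrinsics K, rotation R, and varying translations t.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
d = np.array([1.0, 0.5, 2.0])  # 3-D direction of a scene line
d = d / np.linalg.norm(d)

def project(K, R, t, X):
    """Project a 3-D point with the camera K[R|t]."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

# The vanishing point of direction d is the image of its point at infinity:
v = K @ R @ d
v = v[:2] / v[2]  # vanishing point in pixels

# Points far along the line converge to v for ANY camera translation --
# so purely translated views contribute calibration data "for free".
X0 = np.array([0.0, 0.0, 5.0])
for t in (np.zeros(3), np.array([1.0, -2.0, 0.5])):
    far = project(K, R, t, X0 + 1e8 * d)
    assert np.allclose(far, v, atol=1e-2)
print(v)
```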


Signal Processing-image Communication | 2009

Objective quality assessment in free-viewpoint video production

Joe Kilner; Jonathan Starck; Jean-Yves Guillemaut; Adrian Hilton

This paper addresses the problem of objectively measuring quality in free-viewpoint video production. The accuracy of scene reconstruction is typically limited and an evaluation of free-viewpoint video should explicitly consider the quality of image production. A simple objective measure of accuracy is presented in terms of structural registration error in view synthesis. This technique can be applied as a full-reference metric to measure the fidelity of view synthesis to a ground truth image or as a no-reference metric to measure the error in registering scene appearance in image-based rendering. The metric is applied to a data-set with known geometric accuracy and a comparison is also demonstrated between two free-viewpoint video techniques across two prototype production studios.
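The paper's metric is not reproduced here, but the idea of scoring *registration* rather than raw intensity difference can be sketched with a toy full-reference version: block-match the synthesised view against the reference and report the mean displacement. Block size, search radius, and the test images are all invented for illustration:

```python
import numpy as np

def block_registration_error(ref, test, block=8, radius=4):
    """Toy full-reference registration error.

    For each block of `test`, exhaustively search (within `radius` pixels)
    for the displacement minimising SSD against `ref`, and return the mean
    displacement magnitude in pixels."""
    rh, rw = ref.shape
    th, tw = test.shape
    disps = []
    for y in range(0, th - block + 1, block):
        for x in range(0, tw - block + 1, block):
            patch = test[y:y + block, x:x + block]
            best, best_d = np.inf, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > rh or xx + block > rw:
                        continue
                    ssd = np.sum((patch - ref[yy:yy + block, xx:xx + block]) ** 2)
                    if ssd < best:
                        best, best_d = ssd, float(np.hypot(dy, dx))
            disps.append(best_d)
    return float(np.mean(disps))

# Identical content offset by 2 pixels scores 2.0; perfect alignment scores 0.0,
# whereas an intensity metric like PSNR would punish both misalignment and blur.
rng = np.random.default_rng(1)
ref = rng.normal(size=(32, 40))
aligned = ref[:, 0:32]
shifted = ref[:, 2:34]
print(block_registration_error(ref, aligned))  # 0.0
print(block_registration_error(ref, shifted))  # 2.0
```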


international conference on computer vision | 2015

General Dynamic Scene Reconstruction from Multiple View Video

Armin Mustafa; Hansung Kim; Jean-Yves Guillemaut; Adrian Hilton

This paper introduces a general approach to dynamic scene reconstruction from multiple moving cameras without prior knowledge or limiting constraints on the scene structure, appearance, or illumination. Existing techniques for dynamic scene reconstruction from multiple wide-baseline camera views primarily focus on accurate reconstruction in controlled environments, where the cameras are fixed and calibrated and the background is known. These approaches are not robust for general dynamic scenes captured with sparse moving cameras. Previous approaches for outdoor dynamic scene reconstruction assume prior knowledge of the static background appearance and structure. The primary contributions of this paper are twofold: an automatic method for initial coarse dynamic scene segmentation and reconstruction without prior knowledge of background appearance or structure, and a general robust approach for joint segmentation refinement and dense reconstruction of dynamic scenes from multiple wide-baseline static or moving cameras. Evaluation is performed on a variety of indoor and outdoor scenes with cluttered backgrounds and multiple dynamic non-rigid objects such as people. Comparison with state-of-the-art approaches demonstrates improved accuracy in both multiple view segmentation and dense reconstruction. The proposed approach also eliminates the requirement for prior knowledge of scene structure and appearance.


IEEE Transactions on Circuits and Systems for Video Technology | 2012

Outdoor Dynamic 3-D Scene Reconstruction

Hansung Kim; Jean-Yves Guillemaut; Takeshi Takai; Muhammad Sarim; Adrian Hilton

Existing systems for 3-D reconstruction from multiple view video use controlled indoor environments with uniform illumination and backgrounds to allow accurate segmentation of dynamic foreground objects. In this paper, we present a portable system for 3-D reconstruction of dynamic outdoor scenes that require relatively large capture volumes with complex backgrounds and nonuniform illumination. This is motivated by the demand for 3-D reconstruction of natural outdoor scenes to support film and broadcast production. Limitations of existing multiple view 3-D reconstruction techniques for use in outdoor scenes are identified. Outdoor 3-D scene reconstruction is performed in three stages: 1) 3-D background scene modeling using spherical stereo image capture; 2) multiple view segmentation of dynamic foreground objects by simultaneous video matting across multiple views; and 3) robust 3-D foreground reconstruction and multiple view segmentation refinement in the presence of segmentation and calibration errors. Evaluation is performed on several outdoor productions with complex dynamic scenes including people and animals. Results demonstrate that the proposed approach overcomes limitations of previous indoor multiple view reconstruction approaches enabling high-quality free-viewpoint rendering and 3-D reference models for production.


international symposium on 3d data processing visualization and transmission | 2004

Helmholtz Stereopsis on rough and strongly textured surfaces

Jean-Yves Guillemaut; Ondrej Drbohlav; Radim Šára; John Illingworth

Helmholtz Stereopsis (HS) has recently been explored as a promising technique for capturing shape of objects with unknown reflectance. So far, it has been widely applied to objects of smooth geometry and piecewise uniform Bidirectional Reflectance Distribution Function (BRDF). Moreover, for nonconvex surfaces the inter-reflection effects have been completely neglected. We extend the method to surfaces which exhibit strong texture, nontrivial geometry and are possibly nonconvex. The problem associated with these surface features is that Helmholtz reciprocity is apparently violated when point-based measurements are used independently to establish the matching constraint as in the standard HS implementation. We argue that the problem is avoided by computing radiance measurements on image regions corresponding exactly to projections of the same surface point neighbourhood with appropriate scale. The experimental results demonstrate the success of the proposed method on real objects.
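The reciprocity constraint underlying Helmholtz stereopsis is linear in the surface normal: swapping camera and light leaves the BRDF factor unchanged, so each reciprocal image pair constrains the normal without any BRDF model. The sketch below synthesises point-based reciprocal measurements for a known normal and recovers it from the null space of the stacked constraints. The geometry and BRDF values are invented, and the paper's actual contribution, region-based radiance measurements, is not reproduced:

```python
import numpy as np

# Helmholtz reciprocity: for a reciprocal pair, the constraint
#     (i1 * v1_hat / r1**2 - i2 * v2_hat / r2**2) . n = 0
# is LINEAR in the surface normal n, with no BRDF model required.

rng = np.random.default_rng(2)
p = np.zeros(3)                     # surface point
n_true = np.array([0.0, 0.0, 1.0])  # ground-truth normal (to be recovered)

def reciprocal_pair(o1, o2, f):
    """Synthesise intensities for one reciprocal pair: image i1 has the
    camera at o1 and the light at o2; i2 swaps them. The BRDF value f is
    identical in both images by Helmholtz reciprocity."""
    v1, v2 = o1 - p, o2 - p
    r1, r2 = np.linalg.norm(v1), np.linalg.norm(v2)
    i1 = f * (n_true @ v2) / r2**3   # (n . v2_hat) / r2^2 foreshortening/falloff
    i2 = f * (n_true @ v1) / r1**3
    return i1, i2, v1 / r1, v2 / r2, r1, r2

rows = []
for _ in range(4):
    o1 = rng.normal(size=3) + np.array([0.0, 0.0, 5.0])
    o2 = rng.normal(size=3) + np.array([0.0, 0.0, 5.0])
    i1, i2, v1h, v2h, r1, r2 = reciprocal_pair(o1, o2, f=rng.uniform(0.5, 2.0))
    rows.append(i1 * v1h / r1**2 - i2 * v2h / r2**2)

# The normal is the null vector of the stacked constraint matrix.
_, _, Vt = np.linalg.svd(np.array(rows))
n_est = Vt[-1] * np.sign(Vt[-1][2])
print(np.round(n_est, 3))  # close to [0, 0, 1]
```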


IEEE Transactions on Visualization and Computer Graphics | 2013

Interactive Animation of 4D Performance Capture

Dan Casas; Margara Tejera; Jean-Yves Guillemaut; Adrian Hilton

A 4D parametric motion graph representation is presented for interactive animation from actor performance capture in a multiple camera studio. The representation is based on a 4D model database of temporally aligned mesh sequence reconstructions for multiple motions. High-level movement controls such as speed and direction are achieved by blending multiple mesh sequences of related motions. A real-time mesh sequence blending approach is introduced, which combines the realistic deformation of previous nonlinear solutions with efficient online computation. Transitions between different parametric motion spaces are evaluated in real time based on surface shape and motion similarity. Four-dimensional parametric motion graphs allow real-time interactive character animation while preserving the natural dynamics of the captured performance.


iberoamerican congress on pattern recognition | 2006

General pose face recognition using frontal face model

Jean-Yves Guillemaut; Josef Kittler; Mohammad T. Sadeghi; William J. Christmas

We present a face recognition system able to identify people from a single non-frontal image in an arbitrary pose. The key component of the system is a novel pose correction technique based on Active Appearance Models (AAMs), which is used to remap probe images into a frontal pose similar to that of gallery images. The method generalises previous pose correction algorithms based on AAMs to multiple axis head rotations. We show that such a model can be combined with image warping techniques to increase the textural content of the images synthesised. We also show that the bilateral symmetry of faces can be exploited to improve recognition. Experiments on a database of 570 non-frontal test images, which includes 148 different identities, show that the method produces a significant increase in the success rate (up to 77.4%) compared to conventional recognition techniques which do not consider pose correction.
