Publication


Featured research published by Jean Charles Bazin.


International Conference on Computer Graphics and Interactive Techniques | 2012

Gaze correction for home video conferencing

Claudia Kuster; Jean Charles Bazin; Craig Gotsman; Markus H. Gross

Effective communication using current video conferencing systems is severely hindered by the lack of eye contact caused by the disparity between the locations of the subject and the camera. While this problem has been partially solved for high-end, expensive video conferencing systems, it has not been convincingly solved for consumer-level setups. We present a gaze correction approach based on a single Kinect sensor that preserves both the integrity and expressiveness of the face and the fidelity of the scene as a whole, producing nearly artifact-free imagery. Our method is suitable for mainstream home video conferencing: it uses inexpensive consumer hardware, achieves real-time performance and requires just a simple and short setup. Our approach is based on the observation that, for our application, it is sufficient to synthesize only the corrected face. Thus we render a gaze-corrected 3D model of the scene and, with the aid of a face tracker, transfer the gaze-corrected facial portion seamlessly onto the original image.
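
The final compositing step can be approximated with off-the-shelf tools. The sketch below is a minimal illustration (not the authors' implementation): it assumes a gaze-corrected face rendering and a face mask are already produced by the 3D rendering and face-tracking stages, and uses OpenCV's Poisson blending to hide the seam.

    import cv2
    import numpy as np

    # Illustrative sketch, not the paper's code: inputs are assumed to come
    # from the rendering and face-tracking stages described in the abstract.
    def composite_corrected_face(original, corrected_render, face_mask):
        """Seamlessly blend the gaze-corrected face region into the frame.

        original         -- BGR color frame from the consumer camera
        corrected_render -- BGR rendering of the gaze-corrected 3D model
        face_mask        -- uint8 mask (255 inside the tracked face region)
        """
        # seamlessClone needs the center of the region being transferred.
        ys, xs = np.nonzero(face_mask)
        center = (int(xs.mean()), int(ys.mean()))
        # Poisson blending hides the seam between synthetic and real pixels.
        return cv2.seamlessClone(corrected_render, original, face_mask,
                                 center, cv2.NORMAL_CLONE)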


Computer Vision and Image Understanding | 2010

Motion estimation by decoupling rotation and translation in catadioptric vision

Jean Charles Bazin; Cédric Demonceaux; Pascal Vasseur; In So Kweon

Previous work has shown that catadioptric systems are particularly suited to egomotion estimation thanks to their large field of view, and numerous algorithms have already been proposed in the literature to estimate this motion. In this paper, we present a method for estimating six-degree-of-freedom camera motion from central catadioptric images in man-made environments. State-of-the-art methods can obtain very impressive results; however, our proposed system provides two strong advantages over existing methods: first, it can implicitly handle the difficulty of planar/non-planar scenes, and second, it is computationally much less expensive. The only assumption is the presence of parallel straight lines, which is reasonable in a man-made environment. More precisely, we estimate the motion by decoupling the rotation and the translation. The rotation is computed by an efficient algorithm based on the detection of dominant bundles of parallel catadioptric lines, and the translation is calculated by a robust 2-point algorithm. We also show that the line-based approach makes it possible to estimate the absolute attitude (roll and pitch angles) at each frame, without error accumulation. The efficiency of our approach has been validated by experiments in both indoor and outdoor environments, and by comparison with other existing methods.
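
The 2-point translation step follows directly from the epipolar constraint: once the rotation R is known, x2^T [t]x R x1 = 0 is linear in t, so two point correspondences determine t up to scale. A minimal numpy sketch under that assumption (calibrated ray directions on the unit sphere; names are ours, not the paper's):

    import numpy as np

    # Illustrative sketch of a 2-point translation solver, assuming the
    # rotation R has already been recovered from the parallel-line bundles.
    def translation_from_two_points(R, x1a, x2a, x1b, x2b):
        """Recover the translation direction t (up to scale) given R.

        Each correspondence (x1, x2) of unit ray directions yields one
        linear equation in t from the epipolar constraint
        x2^T [t]x R x1 = 0, i.e. ((R x1) x x2) . t = 0.
        """
        A = np.stack([np.cross(R @ x1a, x2a),
                      np.cross(R @ x1b, x2b)])
        # t spans the one-dimensional null space of the 2x3 matrix A.
        _, _, Vt = np.linalg.svd(A)
        t = Vt[-1]
        return t / np.linalg.norm(t)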


Computer Vision and Pattern Recognition | 2012

Globally optimal line clustering and vanishing point estimation in Manhattan world

Jean Charles Bazin; Yongduek Seo; Cédric Demonceaux; Pascal Vasseur; Katsushi Ikeuchi; In So Kweon; Marc Pollefeys

The projections of parallel world lines in an image intersect at a single point called the vanishing point (VP). VPs are a key ingredient for various vision tasks, including rotation estimation and 3D reconstruction. Urban environments generally exhibit some dominant orthogonal VPs. Given a set of lines extracted from a calibrated image, this paper aims to (1) determine the line clustering, i.e. find which line belongs to which VP, and (2) estimate the associated orthogonal VPs. None of the existing methods is fully satisfactory because of the inherent difficulties of the problem, such as local minima and its chicken-and-egg aspect: clustering the lines requires the VPs, and estimating the VPs requires the clustering. In this paper, we present a new algorithm that solves the problem in a mathematically guaranteed globally optimal manner and can inherently enforce VP orthogonality. Specifically, we formulate the task as a consensus set maximization problem over the rotation search space, and solve it efficiently by a branch-and-bound procedure based on interval analysis theory. Our algorithm has been validated successfully on sets of challenging real images as well as synthetic data sets.
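
The branch-and-bound structure can be illustrated on a deliberately simplified version of the search: boxes of the rotation space are explored best-first, the upper bound on each box relaxes the inlier threshold by the box radius, and a box is pruned when its bound cannot beat the best consensus found so far. The toy sketch below searches over a single rotation angle rather than the paper's full 3D rotation space with interval analysis, but it shows the same consensus-maximization skeleton:

    import heapq
    import numpy as np

    # Toy 1D analogue of rotation-space branch-and-bound consensus
    # maximization; the paper's method operates over 3D rotations.
    def bnb_max_consensus(angles, eps=0.02, tol=1e-4):
        """Find the angle theta maximizing the number of inliers.

        angles -- noisy observations of an unknown angle (with outliers)
        eps    -- inlier threshold on wrapped angular distance
        """
        def inliers(theta, slack):
            d = np.abs(np.angle(np.exp(1j * (angles - theta))))
            return int(np.sum(d <= eps + slack))

        best_theta, best_count = 0.0, -1
        # Priority queue of boxes: (-upper_bound, center, half_width).
        queue = [(-inliers(0.0, np.pi), 0.0, np.pi)]
        while queue:
            neg_ub, center, half = heapq.heappop(queue)
            if -neg_ub <= best_count:        # bound cannot beat incumbent
                continue
            count = inliers(center, 0.0)     # lower bound at box center
            if count > best_count:
                best_theta, best_count = center, count
            if half < tol:
                continue
            for c in (center - half / 2, center + half / 2):  # split box
                ub = inliers(c, half / 2)    # relaxed threshold = upper bound
                if ub > best_count:
                    heapq.heappush(queue, (-ub, c, half / 2))
        return best_theta, best_count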


The International Journal of Robotics Research | 2012

Rotation estimation and vanishing point extraction by omnidirectional vision in urban environment

Jean Charles Bazin; Cédric Demonceaux; Pascal Vasseur; In So Kweon

Rotation estimation is a fundamental step for various robotic applications such as automatic control of ground/aerial vehicles, motion estimation and 3D reconstruction. However, it is now well established that traditional navigation equipment, such as global positioning systems (GPS) or inertial measurement units (IMUs), suffers from several disadvantages. Hence, some vision-based methods have been proposed recently. While interesting results can be obtained, the existing methods have non-negligible limitations, such as difficult feature matching (e.g. repeated textures, blur or illumination changes) and high computational cost (e.g. analysis in the frequency domain). Moreover, most of them use conventional perspective cameras and thus have a limited field of view. To overcome these limitations, in this paper we present a novel rotation estimation approach based on the extraction of vanishing points in omnidirectional images. The first advantage is that our rotation estimation is decoupled from the translation computation, which accelerates the execution time and results in a better control solution. This is made possible by our complete framework dedicated to omnidirectional vision, whereas conventional vision suffers from a rotation/translation ambiguity. Second, we propose a top-down approach that maintains the important constraint of vanishing point orthogonality by inverting the problem: instead of performing a difficult preliminary line clustering step, we directly search for the orthogonal vanishing points. Finally, experimental results on various data sets for diverse robotic applications demonstrate that our novel framework is accurate, robust, maintains the orthogonality of the vanishing points and can run in real time.
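
The absence of error accumulation follows from the fact that the orthogonal VPs define an absolute orientation in every frame, so the rotation between any two frames, however distant, can be composed directly from the per-frame VP triplets. A minimal sketch of that composition under our own naming (V1, V2 are 3x3 matrices whose columns are the matched orthogonal VP directions):

    import numpy as np

    # Illustrative sketch: relative rotation from two per-frame VP triplets.
    def rotation_from_vp_triplets(V1, V2):
        """Rotation mapping frame-1 VP directions onto frame-2's.

        V1, V2 -- 3x3 matrices whose columns are matched, roughly
        orthonormal VP directions. Solving the orthogonal Procrustes
        problem R = argmin ||R V1 - V2||_F averages out small noise.
        """
        U, _, Vt = np.linalg.svd(V2 @ V1.T)
        R = U @ Vt
        if np.linalg.det(R) < 0:           # enforce a proper rotation
            U[:, -1] *= -1
            R = U @ Vt
        return R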


Intelligent Robots and Systems | 2012

3-line RANSAC for orthogonal vanishing point detection

Jean Charles Bazin; Marc Pollefeys

A wide range of robotic systems needs to estimate its rotation for diverse tasks such as automatic control and stabilization, among many others. Given the limitations of traditional navigation equipment (like GPS and inertial sensors), this paper follows a vision approach based on the observation of vanishing points (VPs). Urban environments (outdoor as well as indoor) generally contain orthogonal VPs, and this orthogonality constitutes an important constraint to enforce in order to correctly recover the structure of the scene. In contrast to existing VP-based techniques, our method inherently enforces the orthogonality of the VPs by directly incorporating the orthogonality constraint into the model estimation step of the RANSAC procedure, which allows real-time applications. The model is estimated from only 3 lines, which corresponds to the theoretical minimal sampling for rotation estimation and constitutes our 3-line RANSAC. We also propose a 1-line RANSAC for when the horizon plane is known. Our algorithm has been validated successfully on challenging real datasets.
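
The 3-line minimal sample can be made concrete: two lines assigned to the same VP intersect, on the Gaussian sphere, at v1 = n1 x n2 (the cross product of their interpretation-plane normals), and a third line assigned to an orthogonal VP pins down v2 = v1 x n3, since v2 must be orthogonal to both v1 and that line's plane normal. A hedged sketch of the resulting RANSAC loop (line representation and thresholds are illustrative assumptions, not the paper's exact implementation):

    import numpy as np

    # Illustrative sketch of a 3-line RANSAC for orthogonal VPs.
    def three_line_ransac(normals, iters=500, thresh=0.02, rng=None):
        """Estimate a rotation whose columns are orthogonal VPs.

        normals -- (N, 3) unit normals of the lines' interpretation
        planes; a line with normal n is consistent with a VP v when
        |n . v| is close to zero.
        """
        rng = rng or np.random.default_rng()
        best_R, best_inliers = None, -1
        for _ in range(iters):
            i, j, k = rng.choice(len(normals), size=3, replace=False)
            v1 = np.cross(normals[i], normals[j])   # VP of lines i and j
            if np.linalg.norm(v1) < 1e-8:
                continue
            v1 /= np.linalg.norm(v1)
            v2 = np.cross(v1, normals[k])           # orthogonal VP, line k
            if np.linalg.norm(v2) < 1e-8:
                continue
            v2 /= np.linalg.norm(v2)
            v3 = np.cross(v1, v2)                   # completes the frame
            R = np.stack([v1, v2, v3], axis=1)
            # A line is an inlier if consistent with any of the 3 VPs.
            residuals = np.abs(normals @ R).min(axis=1)
            inliers = int(np.sum(residuals < thresh))
            if inliers > best_inliers:
                best_R, best_inliers = R, inliers
        return best_R, best_inliers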


International Conference on Robotics and Automation | 2008

UAV Attitude estimation by vanishing points in catadioptric images

Jean Charles Bazin; In So Kweon; Cédric Demonceaux; Pascal Vasseur

Unmanned aerial vehicles (UAVs) are the subject of increasing interest in many applications, and a key requirement is the stabilization of the vehicle. Previous work has suggested using catadioptric vision, instead of traditional perspective cameras, in order to gather much more information from the environment and therefore improve the robustness of the UAV attitude estimation. This paper belongs to a series of recent publications from our research group concerning catadioptric vision for UAVs. Here, we focus on estimating the complete attitude of a UAV flying in an urban environment. In order to avoid the limitations of horizon-based approaches and the difficulties of traditional epipolar methods (such as the rotation/translation ambiguity, lack of features, and retrieving motion parameters from matrix decomposition), and to improve UAV dynamic control, we suggest computing the infinite homography. We show how catadioptric vision plays a key role in: first, extracting a large number of lines; second, robustly estimating the associated vanishing points; and third, tracking them even over long video sequences. It is therefore possible not only to estimate the relative rotation between consecutive frames but also to compute the absolute rotation between two distant frames without error accumulation. Finally, we present experimental results with ground-truth data to demonstrate the accuracy and robustness of our method.
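
The infinite homography relates the images of points at infinity, and hence the vanishing points, between two views through the rotation only: H_inf = K R K^-1 for a camera with intrinsics K, so tracked VPs constrain R without any dependence on translation. A small self-contained check of that relation (the numbers are illustrative, not from the paper):

    import numpy as np

    # Illustrative check: VP images transform by H_inf = K R inv(K).
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    def rotate(axis, angle):
        """Rotation matrix via Rodrigues' formula."""
        axis = np.asarray(axis, float) / np.linalg.norm(axis)
        S = np.array([[0.0, -axis[2], axis[1]],
                      [axis[2], 0.0, -axis[0]],
                      [-axis[1], axis[0], 0.0]])
        return np.eye(3) + np.sin(angle) * S + (1 - np.cos(angle)) * (S @ S)

    R = rotate([0.2, 1.0, 0.1], 0.3)
    H_inf = K @ R @ np.linalg.inv(K)
    d = np.array([1.0, 0.0, 0.0])     # a vanishing direction in 3D
    v1 = K @ d                        # its image in the first frame
    v2 = H_inf @ v1                   # its image after the rotation
    # H_inf v1 = K R inv(K) K d = K R d: the rotated direction's image.
    assert np.allclose(v2, K @ (R @ d))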


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2016

Partial Sum Minimization of Singular Values in Robust PCA: Algorithm and Applications

Tae-Hyun Oh; Yu-Wing Tai; Jean Charles Bazin; Hyeongwoo Kim; In So Kweon

Robust Principal Component Analysis (RPCA) via rank minimization is a powerful tool for recovering the underlying low-rank structure of clean data corrupted with sparse noise/outliers. In many low-level vision problems, not only is it known that the underlying structure of the clean data is low-rank, but the exact rank of the clean data is also known. Yet, when applying conventional rank minimization to these problems, the objective function is formulated in a way that does not fully utilize this a priori target rank information. This observation motivates us to investigate whether there is a better alternative when using rank minimization. In this paper, instead of minimizing the nuclear norm, we propose to minimize the partial sum of singular values, which implicitly encourages the target rank constraint. Our experimental analyses show that, when the number of samples is deficient, our approach leads to a higher success rate than conventional rank minimization, while the solutions obtained by the two approaches are almost identical when the number of samples is more than sufficient. We apply our approach to various low-level vision problems, e.g., high dynamic range imaging, motion edge detection, photometric stereo, and image alignment and recovery, and show that our results outperform those obtained by the conventional nuclear norm rank minimization method.
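
The proximal operator induced by this objective is a partial singular value thresholding: the leading N singular values (the known target rank) are preserved, and only the tail is soft-thresholded, in contrast to full nuclear-norm shrinkage. A minimal numpy sketch of that operator (variable names are ours):

    import numpy as np

    # Illustrative sketch of partial singular value thresholding (PSVT).
    def partial_svt(X, target_rank, tau):
        """Minimize 0.5*||X - Y||_F^2 + tau * sum_{i > target_rank} s_i(Y).

        The leading target_rank singular values are kept intact; only
        the residual tail is soft-thresholded, unlike nuclear-norm SVT,
        which shrinks every singular value by tau.
        """
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s_thresh = s.copy()
        s_thresh[target_rank:] = np.maximum(s[target_rank:] - tau, 0.0)
        return (U * s_thresh) @ Vt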


International Conference on Computer Vision | 2013

Partial Sum Minimization of Singular Values in RPCA for Low-Level Vision

Tae-Hyun Oh; Hyeongwoo Kim; Yu-Wing Tai; Jean Charles Bazin; In So Kweon

Robust Principal Component Analysis (RPCA) via rank minimization is a powerful tool for recovering the underlying low-rank structure of clean data corrupted with sparse noise/outliers. In many low-level vision problems, not only is it known that the underlying structure of the clean data is low-rank, but the exact rank of the clean data is also known. Yet, when applying conventional rank minimization to these problems, the objective function is formulated in a way that does not fully utilize this a priori target rank information. This observation motivates us to investigate whether there is a better alternative when using rank minimization. In this paper, instead of minimizing the nuclear norm, we propose to minimize the partial sum of singular values. The proposed objective function implicitly encourages the target rank constraint in rank minimization. Our experimental analyses show that our approach performs better than conventional rank minimization when the number of samples is deficient, while the solutions obtained by the two approaches are almost identical when the number of samples is more than sufficient. We apply our approach to various low-level vision problems, e.g., high dynamic range imaging, photometric stereo and image alignment, and show that our results outperform those obtained by the conventional nuclear norm rank minimization method.


IEEE Computer | 2014

Immersive 3D Telepresence

Henry Fuchs; Andrei State; Jean Charles Bazin

Cutting-edge work on 3D telepresence at a multinational research center provides insight into the technology's potential, as well as into its remaining challenges. Six Web extras accompany the article:

- http://youtu.be/r4SqJdXkOjQ describes FreeCam, a system capable of generating live free-viewpoint video by simulating the output of a virtual camera moving through a dynamic scene.
- http://youtu.be/Dw1glKUKs9A shows a system designed to capture the enhanced 3D structure of a room-sized dynamic scene with commodity depth cameras, such as Microsoft Kinects.
- http://youtu.be/G_VPzXRrmIw demonstrates a system that adapts to a wide variety of telepresence scenarios; by combining Kinect-based 3D scanning with optical see-through HMDs, the user can precisely control which parts of the scene are real and which are virtual or remote objects.
- http://youtu.be/n45N5AHsoCI demonstrates a method based on moving least squares surfaces that robustly and efficiently reconstructs dynamic scenes captured by a set of hybrid color+depth cameras; the reconstruction provides spatiotemporal consistency, seamlessly fuses color and geometric information, and compares favorably to state-of-the-art methods on a variety of real sequences.
- http://youtu.be/OSl3f2qZzKs demonstrates a 3D acquisition system capable of simultaneously capturing an entire room-sized volume with an array of commodity depth cameras and rendering it from a novel viewpoint in real time.
- http://youtu.be/zKWByH7evo0 demonstrates a gaze-correction approach based on a single Kinect sensor that preserves both the integrity and expressiveness of the face and the fidelity of the scene as a whole, producing nearly artifact-free imagery; the method is suitable for mainstream home video conferencing, as it uses inexpensive consumer hardware, achieves real-time performance, and requires just a simple and short setup.


International Conference on Computer Graphics and Interactive Techniques | 2013

Painting by feature: texture boundaries for example-based image creation

Michal Lukác; Jakub Fišer; Jean Charles Bazin; Ondrej Jamriska; Alexander Sorkine-Hornung; Daniel Sýkora

In this paper we propose a reinterpretation of the brush and fill tools for digital image painting. The core idea is to provide an intuitive approach that allows users to paint in the visual style of arbitrary example images. Rather than a static library of colors, brushes, or fill patterns, we offer users entire images as their palette, from which they can select arbitrary contours or textures as the brush or fill tool in their own creations. Compared to previous example-based techniques related to the painting-by-numbers paradigm, we propose a new strategy in which users generate salient texture boundaries with our randomized graph-traversal algorithm and apply a content-aware fill to transfer textures into the delimited regions. This workflow allows users of our system to intuitively create visually appealing images that better preserve the visual richness and fluidity of arbitrary example images. We demonstrate the potential of our approach in various applications, including interactive image creation, editing and vector image stylization.
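
The randomized graph-traversal idea can be illustrated on a much-simplified 1D analogue: treat overlapping windows of an example boundary signal as graph nodes, connect each window to its best-matching continuations, and synthesize a new boundary by a random walk among the top-k edges, which injects variation while preserving the example's local structure. A toy sketch under these assumptions (all names and parameters are illustrative, not the paper's algorithm):

    import numpy as np

    # Toy 1D analogue of randomized graph traversal for boundary synthesis.
    def synthesize_boundary(example, out_len, win=8, k=3, rng=None):
        """Random walk over overlapping windows of a 1D example signal.

        example -- 1D array sampled along an example texture boundary
        out_len -- length of the synthesized boundary signal
        """
        rng = rng or np.random.default_rng()
        # Nodes: all overlapping windows of the example signal.
        windows = np.stack([example[i:i + win]
                            for i in range(len(example) - win + 1)])
        out = list(windows[rng.integers(len(windows))])
        while len(out) < out_len:
            tail = np.array(out[-(win - 1):])
            # Edge costs: how well each window's prefix continues the tail.
            costs = np.sum((windows[:, :win - 1] - tail) ** 2, axis=1)
            # Randomized step among the k best continuations.
            choice = rng.choice(np.argsort(costs)[:k])
            out.append(windows[choice][-1])
        return np.array(out[:out_len])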

Collaboration


An overview of Jean Charles Bazin's collaborations.

Top Co-Authors


Cédric Demonceaux

Centre national de la recherche scientifique
