
Publication


Featured research published by Hyeongwoo Kim.


International Conference on Computer Vision | 2011

High quality depth map upsampling for 3D-TOF cameras

Jaesik Park; Hyeongwoo Kim; Yu-Wing Tai; Michael S. Brown; In So Kweon

This paper describes an application framework to perform high-quality upsampling on depth maps captured from a low-resolution and noisy 3D time-of-flight (3D-ToF) camera that has been coupled with a high-resolution RGB camera. Our framework is inspired by recent work that uses nonlocal means filtering to regularize depth maps in order to maintain fine detail and structure. We extend this regularization with an additional edge weighting scheme that draws on several image features computed from the high-resolution RGB input. Quantitative and qualitative results show that our method outperforms existing approaches for 3D-ToF upsampling. We describe the complete process for this system, including device calibration, scene warping for input alignment, and how the results can be further processed using simple user markup.
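
As a rough illustration of the guided-upsampling idea this builds on, here is a minimal joint bilateral upsampling sketch in NumPy; it is not the paper's nonlocal-means regularization or edge weighting scheme, and the function name and parameter values are illustrative assumptions.

    import numpy as np

    def joint_bilateral_upsample(depth_lr, rgb_hr, radius=3, sigma_s=2.0, sigma_r=0.1):
        # depth_lr: (h, w) low-res depth; rgb_hr: (H, W, 3) float guide image in [0, 1].
        H, W = rgb_hr.shape[:2]
        h, w = depth_lr.shape
        out = np.zeros((H, W))
        for y in range(H):
            for x in range(W):
                yl, xl = y * h / H, x * w / W            # position in the low-res grid
                num, den = 0.0, 1e-8
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        qy, qx = int(round(yl)) + dy, int(round(xl)) + dx
                        if 0 <= qy < h and 0 <= qx < w:
                            # spatial weight in low-res coordinates
                            ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                            # range weight from the high-res RGB guide
                            gy = min(qy * H // h, H - 1)
                            gx = min(qx * W // w, W - 1)
                            diff = rgb_hr[y, x] - rgb_hr[gy, gx]
                            wr = np.exp(-(diff @ diff) / (2 * sigma_r ** 2))
                            num += ws * wr * depth_lr[qy, qx]
                            den += ws * wr
                out[y, x] = num / den
        return out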


Computer Vision and Pattern Recognition | 2013

Specular Reflection Separation Using Dark Channel Prior

Hyeongwoo Kim; Hailin Jin; Sunil Hadap; In So Kweon

We present a novel method to separate specular reflection from a single image. Separating an image into diffuse and specular components is an ill-posed problem due to a lack of observations. Existing methods rely on a specular-free image to detect and estimate specularity; however, they may misclassify diffuse pixels that share the hue of specular pixels but differ in saturation. Our method is based on the novel observation that, for most natural images, the dark channel provides an approximate specular-free image. We also propose a maximum a posteriori formulation that robustly recovers the specular reflection and chromaticity despite the hue-saturation ambiguity. We demonstrate the effectiveness of the proposed algorithm on real and synthetic examples. Experimental results show that our method significantly outperforms state-of-the-art methods in separating specular reflection.
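
A minimal sketch of the dark channel computation, assuming a float RGB image with values in [0, 1]; the subtraction-based specular-free approximation shown here is an illustrative simplification, not the paper's maximum a posteriori formulation.

    import numpy as np
    from scipy.ndimage import minimum_filter

    def dark_channel(img, patch=3):
        # Per-pixel minimum over the RGB channels, followed by a local minimum
        # filter over a patch; img is (H, W, 3) with values in [0, 1].
        return minimum_filter(img.min(axis=2), size=patch)

    def pseudo_specular_free(img):
        # Rough illustration of the observation above: subtracting the dark
        # channel from every channel suppresses the (largely achromatic)
        # specular term, yielding an approximate specular-free image.
        return img - dark_channel(img, patch=1)[..., None]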


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2016

Partial Sum Minimization of Singular Values in Robust PCA: Algorithm and Applications

Tae-Hyun Oh; Yu-Wing Tai; Jean Charles Bazin; Hyeongwoo Kim; In So Kweon

Robust Principal Component Analysis (RPCA) via rank minimization is a powerful tool for recovering the underlying low-rank structure of clean data corrupted with sparse noise/outliers. In many low-level vision problems, not only is it known that the underlying structure of clean data is low-rank, but the exact rank of the clean data is also known. Yet, when conventional rank minimization is applied to those problems, the objective function is formulated in a way that does not fully utilize this a priori target rank information. This observation motivates us to investigate whether there is a better alternative when using rank minimization. In this paper, instead of minimizing the nuclear norm, we propose to minimize the partial sum of singular values, which implicitly encourages the target rank constraint. Our experimental analyses show that, when the number of samples is deficient, our approach leads to a higher success rate than conventional rank minimization, while the solutions obtained by the two approaches are almost identical when the number of samples is more than sufficient. We apply our approach to various low-level vision problems, e.g., high dynamic range imaging, motion edge detection, photometric stereo, and image alignment and recovery, and show that our results outperform those obtained by the conventional nuclear norm rank minimization method.
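
For reference, the partial-sum objective can be written as an RPCA-style program that penalizes only the singular values beyond the known target rank r (the notation here is assumed, not quoted from the paper):

    \min_{L,\,S}\ \sum_{i=r+1}^{\min(m,n)} \sigma_i(L) \;+\; \lambda \|S\|_1
    \quad \text{subject to} \quad D = L + S

where D is the observed data matrix, L the low-rank component, S the sparse corruption, and \sigma_i(L) the i-th largest singular value of L.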


International Conference on Computer Vision | 2017

MoFA: Model-Based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction

Ayush Tewari; Michael Zollhöfer; Hyeongwoo Kim; Pablo Garrido; Florian Bernard; Patrick Pérez; Christian Theobalt

In this work we propose a novel model-based deep convolutional autoencoder that addresses the highly challenging problem of reconstructing a 3D human face from a single in-the-wild color image. To this end, we combine a convolutional encoder network with an expert-designed generative model that serves as the decoder. The core innovation is the differentiable parametric decoder that encapsulates image formation analytically, based on a generative model. Our decoder takes as input a code vector with exactly defined semantic meaning that encodes detailed face pose, shape, expression, skin reflectance and scene illumination. Due to this new way of combining CNN-based and model-based face reconstruction, the CNN-based encoder learns to extract semantically meaningful parameters from a single monocular input image. For the first time, a CNN encoder and an expert-designed generative model can be trained end-to-end in an unsupervised manner, which renders training on very large (unlabeled) real-world data feasible. The obtained reconstructions compare favorably to current state-of-the-art approaches in terms of quality and richness of representation.
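
As a toy illustration of such a semantically interpretable decoder, the sketch below decodes only the geometry with a linear 3DMM-style model; the dimensions and random bases are placeholders, and the pose, reflectance, illumination and differentiable rendering parts of the actual decoder are omitted.

    import numpy as np

    # Illustrative linear geometry decoder: the code vector is split into shape
    # and expression coefficients that drive linear bases (placeholder data).
    num_vertices = 5000
    mean_shape = np.zeros(3 * num_vertices)
    shape_basis = 0.01 * np.random.randn(3 * num_vertices, 80)
    expr_basis = 0.01 * np.random.randn(3 * num_vertices, 64)

    def decode_geometry(code):
        alpha, delta = code[:80], code[80:144]     # shape / expression coefficients
        verts = mean_shape + shape_basis @ alpha + expr_basis @ delta
        return verts.reshape(num_vertices, 3)      # (N, 3) vertex positions

    vertices = decode_geometry(np.zeros(144))      # zero code -> mean (neutral) face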


International Conference on Computer Vision | 2013

Partial Sum Minimization of Singular Values in RPCA for Low-Level Vision

Tae-Hyun Oh; Hyeongwoo Kim; Yu-Wing Tai; Jean Charles Bazin; In So Kweon

Robust Principal Component Analysis (RPCA) via rank minimization is a powerful tool for recovering the underlying low-rank structure of clean data corrupted with sparse noise/outliers. In many low-level vision problems, not only is it known that the underlying structure of clean data is low-rank, but the exact rank of the clean data is also known. Yet, when conventional rank minimization is applied to those problems, the objective function is formulated in a way that does not fully utilize this a priori target rank information. This observation motivates us to investigate whether there is a better alternative when using rank minimization. In this paper, instead of minimizing the nuclear norm, we propose to minimize the partial sum of singular values. The proposed objective function implicitly encourages the target rank constraint in rank minimization. Our experimental analyses show that our approach performs better than conventional rank minimization when the number of samples is deficient, while the solutions obtained by the two approaches are almost identical when the number of samples is more than sufficient. We apply our approach to various low-level vision problems, e.g., high dynamic range imaging, photometric stereo and image alignment, and show that our results outperform those obtained by the conventional nuclear norm rank minimization method.
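
A minimal NumPy sketch of the proximal operator this objective induces (the naming is mine; the paper embeds such a step inside an augmented-Lagrangian solver that is not shown here):

    import numpy as np

    def partial_svt(X, target_rank, tau):
        # Partial singular value thresholding: keep the leading `target_rank`
        # singular values untouched and soft-threshold only the tail, unlike
        # full nuclear-norm shrinkage, which shrinks every singular value.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = s.copy()
        s[target_rank:] = np.maximum(s[target_rank:] - tau, 0.0)
        return (U * s) @ Vt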


IEEE Transactions on Image Processing | 2014

High-Quality Depth Map Upsampling and Completion for RGB-D Cameras

Jaesik Park; Hyeongwoo Kim; Yu-Wing Tai; Michael S. Brown; In So Kweon

This paper describes an application framework to perform high-quality upsampling and completion on noisy depth maps. Our framework targets a complementary system setup consisting of a depth camera coupled with an RGB camera. Inspired by recent work that uses nonlocal structure regularization, we regularize depth maps in order to maintain fine details and structures. We extend this regularization by combining the additional high-resolution RGB input when upsampling a low-resolution depth map with a weighting scheme that favors structure details. Our technique is also able to repair large holes in a depth map while respecting structures and discontinuities, utilizing edge information from the RGB input. Quantitative and qualitative results show that our method outperforms existing approaches for depth map upsampling and completion. We describe the complete process for this system, including device calibration, scene warping for input alignment, and how our framework can be extended to video depth-map completion with consideration of temporal coherence.
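
As a very rough sketch of RGB-guided hole completion (not the paper's structure-aware formulation), the snippet below fills missing depth values inward from hole boundaries using an RGB-similarity-weighted average of known 4-neighbours; all names and parameter values are illustrative.

    import numpy as np

    def fill_depth_holes(depth, rgb, valid, iters=100, sigma_r=0.1):
        # depth: (H, W) float; rgb: (H, W, 3) float in [0, 1]; valid: (H, W) bool.
        # Holes grow closed from their boundaries inward; np.roll wraps at the
        # image border, which is acceptable for a sketch.
        d = np.where(valid, depth, 0.0)
        known = valid.copy()
        for _ in range(iters):
            num = np.zeros_like(d)
            den = np.zeros_like(d)
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nd = np.roll(d, (dy, dx), axis=(0, 1))
                nk = np.roll(known, (dy, dx), axis=(0, 1))
                diff = rgb - np.roll(rgb, (dy, dx), axis=(0, 1))
                w = np.exp(-(diff ** 2).sum(axis=2) / (2 * sigma_r ** 2)) * nk
                num += w * nd
                den += w
            new = ~known & (den > 0)
            d[new] = num[new] / den[new]
            known |= new
        return np.where(valid, depth, d)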


Asian Conference on Computer Vision | 2007

Simultaneous plane extraction and 2D homography estimation using local feature transformations

Ouk Choi; Hyeongwoo Kim; In So Kweon

In this paper, we use local feature transformations estimated in the matching process as initial seeds for 2D homography estimation. The number of tested hypotheses is equal to the number of matches, naturally enabling a full search over the hypothesis space. Using this property, we develop an iterative algorithm that clusters matches consistent with a common 2D homography into one group, i.e., features on a common plane. Our clustering algorithm is less affected by the proportion of inliers, and as few as two features on a common plane can be clustered together; thus, the algorithm robustly detects multiple dominant scene planes. The knowledge of the dominant planes is then used for robust fundamental matrix computation in the presence of quasi-degenerate data.
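
A small sketch of the seeding-and-grouping idea, assuming a per-match local affine transform is available; the names and inlier threshold are illustrative, and the iterative clustering and fundamental-matrix stages are omitted.

    import numpy as np

    def homography_from_affine_match(A, x, x_prime):
        # One hypothesis per match: the local affine transform A estimated
        # around a correspondence x -> x_prime is lifted to a 3x3 homography
        # of affine form and used as a plane hypothesis seed.
        H = np.eye(3)
        H[:2, :2] = A
        H[:2, 2] = x_prime - A @ x
        return H

    def plane_inliers(H, pts, pts_prime, thresh=3.0):
        # Matches with small transfer error under H are grouped together,
        # i.e. treated as features lying on a common scene plane.
        homog = np.hstack([pts, np.ones((len(pts), 1))])   # (N, 3)
        proj = homog @ H.T
        proj = proj[:, :2] / proj[:, 2:3]
        return np.linalg.norm(proj - pts_prime, axis=1) < thresh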


International Conference on Image Processing | 2011

Two-phase approach for multi-view object extraction

Sungheum Kim; Yu-Wing Tai; Yunsu Bok; Hyeongwoo Kim; In So Kweon

In this paper, we propose an automatic method to extract a foreground object captured from multiple viewpoints. We assume that the foreground object lies within the visual hull defined by the cameras' fields of view. By exploring the multi-view geometric relationship and color measurements of the input images, we estimate the foreground segmentations as well as their fractional boundaries. To facilitate efficient computation and high-quality mattes, we adopt a two-phase approach. The first phase of our algorithm provides quick, rough binary segmentations of the foreground object using graph cuts; the second phase refines the segmentation boundaries using matting. The result is a set of high-quality alpha mattes of the foreground object that are consistent across all viewpoints. We demonstrate the effectiveness of our method on challenging examples.
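
A single-view sketch of the two-phase idea, using OpenCV's GrabCut as a stand-in for the graph-cut phase and a morphological trimap as the hand-off to a matting solver; the joint multi-view reasoning of the actual method is not shown, and the file name and bounding box are placeholders.

    import cv2
    import numpy as np

    # Phase 1: rough binary segmentation with a graph-cut method.
    img = cv2.imread("view.png")                              # placeholder file name
    mask = np.zeros(img.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    rect = (10, 10, img.shape[1] - 20, img.shape[0] - 20)     # rough object box
    cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    binary = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                      255, 0).astype(np.uint8)

    # Phase 2 input: an unknown band around the binary boundary; a matting
    # solver (not shown) would estimate fractional alpha values inside it.
    kernel = np.ones((9, 9), np.uint8)
    trimap = np.full_like(binary, 128)
    trimap[cv2.erode(binary, kernel) == 255] = 255            # definite foreground
    trimap[cv2.dilate(binary, kernel) == 0] = 0               # definite background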


International Conference on Computer Graphics and Interactive Techniques | 2018

Deep Video Portraits

Hyeongwoo Kim; Pablo Garrido; Ayush Tewari; Weipeng Xu; Justus Thies; Matthias Niessner; Patrick Pérez; Christian Richardt; Michael Zollhöfer; Christian Theobalt

We present a novel approach that enables photo-realistic re-animation of portrait videos using only an input video. In contrast to existing approaches that are restricted to manipulations of facial expressions only, we are the first to transfer the full 3D head position, head rotation, facial expression, eye gaze, and eye blinking from a source actor to a portrait video of a target actor. The core of our approach is a generative neural network with a novel space-time architecture. The network takes as input synthetic renderings of a parametric face model, based on which it predicts photo-realistic video frames for a given target actor. The realism in this rendering-to-video transfer is achieved by careful adversarial training, and as a result, we can create modified target videos that mimic the behavior of the synthetically created input. In order to enable source-to-target video re-animation, we render a synthetic target video with the reconstructed head animation parameters from a source video and feed it into the trained network, thus taking full control of the target. With the ability to freely recombine source and target parameters, we are able to demonstrate a large variety of video rewrite applications without explicitly modeling hair, body or background. For instance, we can reenact the full head using interactive user-controlled editing, and realize high-fidelity visual dubbing. To demonstrate the high quality of our output, we conduct an extensive series of experiments and evaluations, where for instance a user study shows that our video edits are hard to detect.
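
As a minimal sketch of the rendering-to-video idea, the snippet below runs one conditional adversarial training step with tiny placeholder networks and random tensors standing in for the synthetic conditioning renderings and the real target frames; it is not the paper's space-time architecture or loss weighting.

    import torch
    import torch.nn as nn

    gen = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())
    disc = nn.Sequential(nn.Conv2d(6, 16, 3, stride=2, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, stride=2, padding=1))
    opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    synthetic = torch.rand(1, 3, 64, 64)   # conditioning rendering (placeholder data)
    real = torch.rand(1, 3, 64, 64)        # ground-truth video frame (placeholder data)

    # Discriminator step: real vs. generated frames, conditioned on the rendering.
    fake = gen(synthetic).detach()
    d_real = disc(torch.cat([synthetic, real], 1))
    d_fake = disc(torch.cat([synthetic, fake], 1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: fool the discriminator and stay close to the real frame.
    fake = gen(synthetic)
    d_fake = disc(torch.cat([synthetic, fake], 1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + nn.functional.l1_loss(fake, real)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()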


International Conference on 3D Vision | 2016

Dense Wide-Baseline Scene Flow from Two Handheld Video Cameras

Christian Richardt; Hyeongwoo Kim; Levi Valgaerts; Christian Theobalt

We propose a new technique for computing dense scene flow from two handheld videos with wide camera baselines and different photometric properties due to different sensors or camera settings such as exposure and white balance. Our technique improves over existing methods in two ways: (1) it supports independently moving cameras, and (2) it computes dense scene flow for wide-baseline scenarios. We achieve this by combining state-of-the-art wide-baseline correspondence finding with a variational scene flow formulation. First, we compute dense, wide-baseline correspondences using DAISY descriptors for matching between cameras and over time. We then detect and replace occluded pixels in the correspondence fields using a novel edge-preserving Laplacian correspondence completion technique. We finally refine the computed correspondence fields in a variational scene flow formulation. We show dense scene flow results computed from challenging datasets with independently moving, handheld cameras of varying camera settings.
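
A sketch of only the correspondence step, assuming opencv-contrib-python (for cv2.xfeatures2d) and two same-sized frames with placeholder file names; the occlusion completion and variational refinement stages are omitted.

    import cv2

    # DAISY descriptors on a regular grid in both views, matched by nearest
    # neighbour, as a rough stand-in for dense wide-baseline correspondence.
    img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)
    daisy = cv2.xfeatures2d.DAISY_create()
    step = 8
    grid = [cv2.KeyPoint(float(x), float(y), float(step))
            for y in range(0, img1.shape[0], step)
            for x in range(0, img1.shape[1], step)]
    _, desc1 = daisy.compute(img1, grid)
    _, desc2 = daisy.compute(img2, grid)
    matches = cv2.BFMatcher(cv2.NORM_L2).match(desc1, desc2)   # one match per grid point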
