Donghyeon Cho
KAIST
Publication
Featured research published by Donghyeon Cho.
international conference on computer vision | 2013
Donghyeon Cho; Minhaeng Lee; Sunyeong Kim; Yu-Wing Tai
Light-field imaging systems have received much attention recently as the next-generation camera model. A light-field imaging system consists of three parts: data acquisition, manipulation, and application. Given an acquisition system, it is important to understand how a light-field camera converts its raw image into the resulting refocused image. In this paper, using the Lytro camera as an example, we describe step-by-step procedures to calibrate a raw light-field image. In particular, we are interested in the spatial and angular coordinates of the micro-lens array and the resampling process for image reconstruction. Since Lytro uses a hexagonal arrangement of micro-lens images, additional treatment is required during calibration. After calibration, we analyze and compare the performance of several resampling methods for image reconstruction with and without calibration. Finally, we propose a learning-based interpolation method that demonstrates higher-quality image reconstruction than previous interpolation methods, including the method used in the Lytro software.
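The resampling step can be illustrated with a minimal sketch: micro-lens centers on a hexagonal grid have every other row offset by half the lens pitch, and pixel values at the resulting fractional coordinates are recovered by interpolation. The helpers below are hypothetical, and bilinear interpolation stands in for the simplest baseline resampler, not Lytro's actual pipeline or the paper's learned method.

```python
import math

def hex_center(row, col, pitch):
    """Center of the micro-lens at (row, col) on a hexagonal grid:
    odd rows are shifted by half the pitch, and the row spacing is
    pitch * sqrt(3) / 2."""
    x = col * pitch + (pitch / 2 if row % 2 else 0.0)
    y = row * pitch * math.sqrt(3) / 2
    return x, y

def bilinear_sample(img, x, y):
    """Sample a 2-D image (list of rows) at fractional (x, y) by
    bilinear interpolation -- the baseline that learned interpolation
    methods aim to improve on."""
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, len(img[0]) - 1), min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

For example, sampling a 2x2 image exactly between all four pixels averages them.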
european conference on computer vision | 2016
Donghyeon Cho; Yu-Wing Tai; In So Kweon
We propose a deep convolutional neural network (CNN) method for natural image matting. Our method takes the results of closed-form matting, the results of KNN matting, and normalized RGB color images as inputs, and directly learns an end-to-end mapping from these inputs to reconstructed alpha mattes. We analyze the pros and cons of closed-form matting and KNN matting in terms of local and nonlocal principles, and show that they are complementary to each other. A major benefit of our method is that it can “recognize” different local image structures and then effectively combine the results of local (closed-form) and nonlocal (KNN) matting to achieve higher-quality alpha mattes than either of its inputs. Extensive experiments demonstrate that our proposed deep CNN matting produces visually and quantitatively high-quality alpha mattes. In addition, our method has achieved the highest ranking on the public alpha matting evaluation dataset in terms of sum of absolute differences, mean squared error, and gradient error.
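The input assembly described above can be sketched as follows. This is a minimal illustration with hypothetical names, and it does not reproduce the network itself: each pixel's feature is simply the normalized RGB color concatenated with the two alpha predictions the network learns to arbitrate between.

```python
def build_matting_input(rgb, alpha_cf, alpha_knn):
    """Stack normalized RGB (3 channels) with the closed-form and KNN
    alpha predictions into a 5-channel per-pixel input, as lists of
    rows of (r, g, b, alpha_cf, alpha_knn) tuples."""
    h, w = len(rgb), len(rgb[0])
    return [[(rgb[y][x][0] / 255.0, rgb[y][x][1] / 255.0, rgb[y][x][2] / 255.0,
              alpha_cf[y][x], alpha_knn[y][x])
             for x in range(w)] for y in range(h)]
```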
european conference on computer vision | 2014
Donghyeon Cho; Sunyeong Kim; Yu-Wing Tai
We present a new image matting algorithm to extract consistent alpha mattes across sub-images of a light field image. Instead of matting each sub-image individually, our approach utilizes the epipolar plane image (EPI) to construct comprehensive foreground and background sample sets across the sub-images without missing a true sample. The sample sets represent all color variation of foreground and background in a light field image, and the optimal alpha matte is obtained by choosing the best combination of foreground and background samples that minimizes the linear composite error subject to the EPI correspondence constraint. To further preserve consistency of the estimated alpha mattes across different sub-images, we impose a smoothness constraint along the EPI of alpha mattes. In experimental evaluations, we have created a dataset where the ground truth alpha mattes of light field images were obtained by using the blue screen technique. A variety of experiments show that our proposed algorithm produces both visually and quantitatively high-quality matting results for light field images.
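The per-pixel selection criterion can be sketched in code: for a candidate foreground/background color pair (F, B), the alpha that minimizes the linear composite error for I = αF + (1 − α)B has a closed form, and the residual error ranks candidate pairs. This is a simplified sketch that omits the EPI correspondence and smoothness constraints; the helper names are hypothetical.

```python
def alpha_from_samples(I, F, B):
    """Least-squares alpha for one pixel given candidate foreground F
    and background B colors, fitting I = alpha*F + (1 - alpha)*B."""
    num = sum((i - b) * (f - b) for i, f, b in zip(I, F, B))
    den = sum((f - b) ** 2 for f, b in zip(F, B))
    if den == 0:
        return 0.0
    return max(0.0, min(1.0, num / den))

def composite_error(I, F, B, alpha):
    """Linear composite error used to rank (F, B) sample pairs."""
    return sum((i - (alpha * f + (1 - alpha) * b)) ** 2
               for i, f, b in zip(I, F, B)) ** 0.5
```

A gray pixel halfway between pure white foreground and pure black background, for instance, yields alpha 0.5 with zero residual.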
european conference on computer vision | 2016
Donghyeon Cho; Yasuyuki Matsushita; Yu-Wing Tai; In So Kweon
This paper studies the effects of non-uniform light intensities and sensor exposures across observed images in photometric stereo. While conventional photometric stereo methods typically assume that light intensities are identical and sensor exposure is constant across the observed images taken under varying lightings, these assumptions easily break down in practical settings due to individual light bulbs' characteristics and limited control over sensors. Our method explicitly models these non-uniformities and accurately determines surface normals without being affected by these factors. In addition, we show that our method is advantageous in general photometric stereo settings where auto-exposure control is desirable. We compare our method with conventional least-squares and robust photometric stereo methods, and the experimental results show the superior accuracy of our method in these practical circumstances.
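For context, the conventional least-squares baseline being compared against solves the Lambertian system L n = I directly, which is only valid when every image shares the same light intensity and exposure; the point of the paper is that real observations behave like s_i (l_i · n) with unknown per-image scales s_i, which this baseline ignores. A pure-Python sketch of the baseline (hypothetical helper names):

```python
def solve3(A, b):
    """Solve a 3x3 linear system A x = b by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    return [det([[b[r] if c == k else A[r][c] for c in range(3)]
                 for r in range(3)]) / d for k in range(3)]

def ls_normal(lights, intensities):
    """Conventional Lambertian least squares: fit L n = I over all
    images and return the unit surface normal (albedo factored out).
    Assumes identical light intensity and exposure across images."""
    # Normal equations: (L^T L) n = L^T I
    LtL = [[sum(l[r] * l[c] for l in lights) for c in range(3)] for r in range(3)]
    LtI = [sum(l[r] * i for l, i in zip(lights, intensities)) for r in range(3)]
    n = solve3(LtL, LtI)
    norm = sum(v * v for v in n) ** 0.5
    return [v / norm for v in n]
```

With three orthogonal lights and a surface facing the third light, the recovered normal is the z-axis, as expected.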
computer vision and pattern recognition | 2017
Jinsun Park; Yu-Wing Tai; Donghyeon Cho; In So Kweon
In this paper, we introduce robust and synergetic hand-crafted features and a simple but efficient deep feature from a convolutional neural network (CNN) architecture for defocus estimation. This paper systematically analyzes the effectiveness of different features, and shows how each feature can compensate for the weaknesses of other features when they are concatenated. For a full defocus map estimation, we extract image patches on strong edges sparsely, after which we use them for deep and hand-crafted feature extraction. In order to reduce the degree of patch-scale dependency, we also propose a multi-scale patch extraction strategy. A sparse defocus map is generated using a neural network classifier followed by a probability-joint bilateral filter. The final defocus map is obtained from the sparse defocus map with guidance from an edge-preserving filtered input image. Experimental results show that our algorithm is superior to state-of-the-art algorithms in terms of defocus estimation. Our work can be used for applications such as segmentation, blur magnification, all-in-focus image generation, and 3-D estimation.
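The sparse, multi-scale patch extraction described above can be sketched as follows. This is a minimal illustration under simplifying assumptions (central-difference gradients for edge strength, edge-replication padding); the function names are hypothetical and the actual feature extraction and classifier are not reproduced.

```python
def strong_edge_points(img, thresh):
    """Return (x, y) of pixels whose gradient magnitude exceeds thresh;
    patches are extracted only at such sparse edge locations."""
    pts = []
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            if (gx * gx + gy * gy) ** 0.5 > thresh:
                pts.append((x, y))
    return pts

def multiscale_patches(img, x, y, sizes=(3, 5)):
    """Extract square patches of several sizes around (x, y), padding by
    edge replication -- the multi-scale strategy reduces the degree of
    patch-scale dependency."""
    h, w = len(img), len(img[0])
    return [[[img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
              for dx in range(-(s // 2), s // 2 + 1)]
             for dy in range(-(s // 2), s // 2 + 1)] for s in sizes]
```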
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2017
Donghyeon Cho; Sunyeong Kim; Yu-Wing Tai; In So Kweon
In this paper, we introduce an automatic approach to generate trimaps and consistent alpha mattes of foreground objects in a light-field image. Our method first performs binary segmentation to roughly segment a light-field image into foreground and background based on depth and color. Next, we estimate accurate trimaps by analyzing the color distribution along the segmentation boundary using a guided image filter and KL-divergence. In order to estimate consistent alpha mattes across sub-images, we utilize the epipolar plane image (EPI), where colors and alphas along the same epipolar line must be consistent. Since the EPIs of foreground and background are mixed in the matting area, we propagate the EPI from definite foreground/background regions to unknown regions by assuming that depth variations within unknown regions are spatially smooth. Using the EPI constraint, we derive two solutions to estimate alpha: one for when color samples along the epipolar line are known, and one for when they are unknown. To further enhance consistency, we refine the estimated alpha mattes by using the multi-image matting Laplacian with an additional EPI smoothness constraint. In experimental evaluations, we have created a dataset where the ground truth alpha mattes of light-field images were obtained by using the blue screen technique. A variety of experiments show that our proposed algorithm produces both visually and quantitatively high-quality alpha mattes for light-field images.
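The KL-divergence used in the trimap step compares discrete color histograms on either side of the segmentation boundary; where the divergence is large, colors are mixed and the pixel belongs in the trimap's unknown band. A minimal sketch (the histogram binning and thresholding details are assumptions, not the paper's exact procedure):

```python
import math

def kl_divergence(p, q, eps=1e-8):
    """KL divergence D(p || q) between two discrete color histograms;
    eps guards against empty bins."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))
```

Identical histograms give (near) zero divergence, while clearly different foreground/background distributions give a positive value.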
european conference on computer vision | 2018
Jin-Seok Park; Donghyeon Cho; Wonhyuk Ahn; Heung-Kyu Lee
Double JPEG detection is essential for detecting various image manipulations. This paper proposes a novel deep convolutional neural network for double JPEG detection using statistical histogram features from each block with a vectorized quantization table. In contrast to previous methods, the proposed approach handles mixed JPEG quality factors and is suitable for real-world situations. We collected real-world JPEG images from the image forensic service and generated a new double JPEG dataset with 1120 quantization tables to train the network. The proposed approach was verified experimentally to produce a state-of-the-art performance, successfully detecting various image manipulations.
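The per-block input described above pairs a statistical histogram of DCT coefficient values with the vectorized quantization table, letting the network generalize across quality factors. The sketch below is a hypothetical simplification (one shared histogram range for all coefficients), not the paper's exact feature layout:

```python
def coeff_histogram(coeffs, lo=-5, hi=5):
    """Normalized histogram of quantized DCT coefficient values clipped
    to [lo, hi]; double compression leaves periodic peaks and gaps in
    these histograms."""
    hist = [0] * (hi - lo + 1)
    for c in coeffs:
        if lo <= c <= hi:
            hist[c - lo] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

def block_feature(coeffs, qtable):
    """Per-block network input: the coefficient histogram concatenated
    with the flattened 8x8 quantization table (64 values)."""
    return coeff_histogram(coeffs) + [q for row in qtable for q in row]
```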
international conference on ubiquitous robots and ambient intelligence | 2016
Donghyeon Cho; Jinsun Park; Yu-Wing Tai; In So Kweon
In this paper, we design a new asymmetric stereo system with a catadioptric lens for intelligent robots. The main benefit of a catadioptric lens is its long focal length in a short, light lens body compared to common telephoto lenses. However, catadioptric lenses have two critical shortcomings: narrow depth of field (DOF) and ring bokeh. The narrow DOF is a property of telephoto lenses, and the ring bokeh stems from the lens's special inner structure. With the help of a conventional compound lens, we propose a method to overcome these limitations via defocus estimation using multi-scale patch matching. Experimental results demonstrate that our approach recovers all-in-focus images from the proposed asymmetric stereo system. We also show various applications based on the estimated defocus map and the recovered image. Our new system can be helpful for constructing robust vision systems for intelligent robots.
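The patch-matching core of such a defocus estimator can be sketched with sum-of-squared-differences matching: a patch from the narrow-DOF catadioptric view is compared against candidate patches (e.g. the conventional-lens view blurred to several scales), and the best-matching blur scale gives the defocus estimate. This is a generic SSD sketch with hypothetical names, not the paper's exact multi-scale matching scheme.

```python
def ssd(a, b):
    """Sum of squared differences between two equal-size 2-D patches."""
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def best_match(patch, candidates):
    """Index of the candidate patch with the lowest SSD against `patch`;
    the winning candidate's blur scale serves as the defocus estimate."""
    scores = [ssd(patch, c) for c in candidates]
    return min(range(len(scores)), key=scores.__getitem__)
```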
international conference on computer vision | 2017
Dahun Kim; Donghyeon Cho; Donggeun Yoo
international conference on computer vision | 2017
Donghyeon Cho; Jinsun Park; Tae-Hyun Oh; Yu-Wing Tai; In So Kweon