
Publication


Featured research published by Gyeongmin Choe.


Computer Vision and Pattern Recognition | 2015

Accurate depth map estimation from a lenslet light field camera

Hae-Gon Jeon; Jaesik Park; Gyeongmin Choe; Jinsun Park; Yunsu Bok; Yu-Wing Tai; In So Kweon

This paper introduces an algorithm that accurately estimates depth maps using a lenslet light field camera. The proposed algorithm estimates the multi-view stereo correspondences with sub-pixel accuracy using the cost volume. The foundation for constructing accurate costs is threefold. First, the sub-aperture images are displaced using the phase shift theorem. Second, the gradient costs are adaptively aggregated using the angular coordinates of the light field. Third, the feature correspondences between the sub-aperture images are used as additional constraints. With the cost volume, the multi-label optimization propagates and corrects the depth map in the weak texture regions. Finally, the local depth map is iteratively refined by fitting a local quadratic function to estimate a non-discrete depth map. Because micro-lens images contain unexpected distortions, a method is also proposed to correct this error. The effectiveness of the proposed algorithm is demonstrated through challenging real-world examples, including comparisons with advanced depth estimation algorithms.
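The phase-shift theorem mentioned in the abstract lets a sub-aperture image be displaced by a sub-pixel offset without spatial interpolation: a shift in the image domain is a linear phase ramp in the frequency domain. A minimal NumPy sketch of that operation (function and variable names are ours, not from the paper's code):

```python
import numpy as np

def phase_shift(image, dx, dy):
    """Shift a 2D image by a sub-pixel offset (dx, dy) using the
    Fourier phase-shift theorem: multiply the spectrum by a linear
    phase ramp, then invert the transform."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]  # vertical frequencies (cycles/pixel)
    fx = np.fft.fftfreq(w)[None, :]  # horizontal frequencies
    spectrum = np.fft.fft2(image)
    ramp = np.exp(-2j * np.pi * (fx * dx + fy * dy))
    return np.real(np.fft.ifft2(spectrum * ramp))

# Example: displace a sub-aperture view by a quarter pixel horizontally.
view = np.random.rand(128, 128)
shifted = phase_shift(view, dx=0.25, dy=0.0)
```

Note that this shift is circular (the image wraps around its borders), which is the standard behavior of the Fourier formulation.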


Computer Vision and Pattern Recognition | 2014

Exploiting Shading Cues in Kinect IR Images for Geometry Refinement

Gyeongmin Choe; Jaesik Park; Yu-Wing Tai; In So Kweon

In this paper, we propose a method to refine the geometry of 3D meshes from Kinect Fusion by exploiting shading cues captured by the infrared (IR) camera of the Kinect. A major benefit of using the Kinect IR camera instead of an RGB camera is that the IR images captured by the Kinect are narrow-band images that filter out most undesired ambient light, which makes our system robust to natural indoor illumination. We define a near-light IR shading model that describes the captured intensity as a function of surface normals, albedo, lighting direction, and the distance between the light source and surface points. To resolve the ambiguity in our model between normals and distance, we utilize an initial 3D mesh from Kinect Fusion and multi-view information to reliably estimate surface details that were not reconstructed by Kinect Fusion. Our approach operates directly on a 3D mesh model for geometry refinement. The effectiveness of our approach is demonstrated through several challenging real-world examples.
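The near-light shading model described here makes the observed IR intensity scale with the cosine between the surface normal and the lighting direction and fall off with the squared distance to the light source. A hedged sketch of that forward model (variable names are our own):

```python
import numpy as np

def near_light_intensity(point, normal, albedo, light_pos):
    """Predicted IR intensity under a near point light:
    I = albedo * max(n . l, 0) / r^2, where l is the unit direction
    from the surface point to the light and r is their distance."""
    to_light = light_pos - point
    r = np.linalg.norm(to_light)
    l = to_light / r
    return albedo * max(np.dot(normal, l), 0.0) / (r * r)
```

The normal/distance ambiguity the abstract mentions is visible in this expression: a brighter observation can be explained either by a normal turned toward the light or by a smaller distance r, which is why the initial Kinect Fusion mesh is needed as an anchor.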


International Conference on Computer Vision | 2015

High Quality Structure from Small Motion for Rolling Shutter Cameras

Sunghoon Im; Hyowon Ha; Gyeongmin Choe; Hae-Gon Jeon; Kyungdon Joo; In So Kweon

We present a practical 3D reconstruction method to obtain a high-quality dense depth map from narrow-baseline image sequences captured by commercial digital cameras, such as DSLRs or mobile phones. Depth estimation from small motion has gained interest as a means of enabling various photographic editing applications, but it faces important limitations in the form of depth uncertainty due to the narrow baseline and rolling shutter. To address these problems, we introduce a novel 3D reconstruction method for narrow-baseline image sequences that effectively handles the rolling-shutter effects present in most commercial digital cameras. Additionally, we present a depth propagation method that fills in the holes at unknown pixels based on our novel geometric guidance model. Both qualitative and quantitative experimental results show that our algorithm consistently generates better 3D depth maps than the state-of-the-art method.
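A rolling-shutter camera exposes each image row at a slightly different time, so each row effectively has its own camera pose. One common way to model this, and only a plausible reading of the method here rather than the paper's exact formulation, is to interpolate the pose linearly across scanlines, which is a reasonable approximation under small motion:

```python
import numpy as np

def rowwise_pose(rot0, trans0, rot1, trans1, row, num_rows):
    """Interpolate a per-row camera pose linearly between the poses of
    the first and last scanlines. Rotations are axis-angle vectors;
    for small motion, interpolating them linearly is a reasonable
    approximation."""
    t = row / float(num_rows - 1)
    return (1 - t) * rot0 + t * rot1, (1 - t) * trans0 + t * trans1

# Example: pose of the middle scanline of a 1080-row image.
rot_mid, trans_mid = rowwise_pose(np.zeros(3), np.zeros(3),
                                  np.array([0.0, 0.01, 0.0]),
                                  np.array([0.02, 0.0, 0.0]),
                                  row=540, num_rows=1080)
```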


Computer Vision and Pattern Recognition | 2017

A Taxonomy and Evaluation of Dense Light Field Depth Estimation Algorithms

Ole Johannsen; Katrin Honauer; Bastian Goldluecke; Anna Alperovich; Federica Battisti; Yunsu Bok; Michele Brizzi; Marco Carli; Gyeongmin Choe; Maximilian Diebold; Marcel Gutsche; Hae-Gon Jeon; In So Kweon; Jaesik Park; Jinsun Park; Hendrik Schilling; Hao Sheng; Lipeng Si; Michael Strecke; Antonin Sulc; Yu-Wing Tai; Qing Wang; Ting-Chun Wang; Sven Wanner; Zhang Xiong; Jingyi Yu; Shuo Zhang; Hao Zhu

This paper presents the results of the depth estimation challenge for dense light fields, which took place at the second workshop on Light Fields for Computer Vision (LF4CV) in conjunction with CVPR 2017. The challenge consisted of submissions to a recent benchmark [7], which allows a thorough performance analysis. While individual results are readily available on the benchmark web page http://www.lightfield-analysis.net, we take this opportunity to give a detailed overview of the current participants. Based on the algorithms submitted to our challenge, we develop a taxonomy of light field disparity estimation algorithms and report on the current state of the art. In addition, we include further comparative metrics and discuss the relative strengths and weaknesses of the algorithms. Thus, we obtain a snapshot of where light field algorithm development stands at the moment and identify aspects with potential for further improvement.


International Journal of Computer Vision | 2017

Refining Geometry from Depth Sensors using IR Shading Images

Gyeongmin Choe; Jaesik Park; Yu-Wing Tai; In So Kweon

We propose a method to refine the geometry of 3D meshes from a consumer-level depth camera, e.g. the Kinect, by exploiting shading cues captured by an infrared (IR) camera. A major benefit of using an IR camera instead of an RGB camera is that the captured IR images are narrow-band images that filter out most undesired ambient light, which makes our system robust against natural indoor illumination. Moreover, many natural objects with colorful textures in the visible spectrum appear to have a uniform albedo in the IR spectrum. Based on our analyses of the Kinect's IR projector light, we define a near-light-source IR shading model that describes the captured intensity as a function of surface normals, albedo, lighting direction, and the distance between the light source and surface points. To resolve the ambiguity in our model between normals and distances, we utilize an initial 3D mesh from Kinect Fusion and multi-view information to reliably estimate surface details that were not captured and reconstructed by Kinect Fusion. Our approach operates directly on the mesh model for geometry refinement. We ran experiments on geometries captured by both the Kinect I and the Kinect II, as depth acquisition in the Kinect I is based on a structured-light technique while that of the Kinect II is based on time-of-flight technology. The effectiveness of our approach is demonstrated through several challenging real-world examples. We have also performed a user study to evaluate the quality of the mesh models before and after our refinement.
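In symbols, a near-light-source shading model of the kind described can be written as follows (our transcription; the notation is ours, not taken from the paper):

```latex
I(\mathbf{p}) \;=\; \rho(\mathbf{p})\,
  \frac{\max\!\big(\mathbf{n}(\mathbf{p})\cdot\mathbf{l}(\mathbf{p}),\,0\big)}
       {\lVert \mathbf{s}-\mathbf{p}\rVert^{2}},
\qquad
\mathbf{l}(\mathbf{p}) \;=\; \frac{\mathbf{s}-\mathbf{p}}{\lVert\mathbf{s}-\mathbf{p}\rVert}
```

Here p is a surface point, ρ(p) its albedo, n(p) its unit normal, and s the position of the IR light source. The uniform-albedo behavior of many materials in the IR spectrum makes ρ nearly constant, which leaves the normal and the distance term as the quantities to disentangle.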


European Conference on Computer Vision | 2016

All-Around Depth from Small Motion with a Spherical Panoramic Camera

Sunghoon Im; Hyowon Ha; Francois Rameau; Hae-Gon Jeon; Gyeongmin Choe; In So Kweon

With the growing use of head-mounted displays for virtual reality (VR), generating 3D content for these devices has become an important topic in computer vision. For capturing full 360-degree panoramas in a single shot, the Spherical Panoramic Camera (SPC) is gaining in popularity. However, estimating depth from an SPC remains a challenging problem. In this paper, we propose a practical method that generates an all-around dense depth map using a narrow-baseline video clip captured by an SPC. While existing methods for depth from small motion rely on perspective cameras, we introduce a new bundle adjustment approach tailored to the SPC that minimizes the re-projection error directly on the unit sphere. This enables the estimation of approximate metric camera poses and 3D points. Additionally, we present a novel dense matching method called the sphere sweeping algorithm, which allows us to take advantage of the overlapping regions between the cameras. To validate the effectiveness of the proposed method, we evaluate our approach on both synthetic and real-world data. As an example application, we also present stereoscopic panorama images generated from our depth results.
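Minimizing the re-projection error directly on the unit sphere means comparing bearing vectors rather than pixel coordinates, which avoids the distortions of any planar projection of the spherical image. A minimal sketch of such a residual (an assumed formulation; names are ours):

```python
import numpy as np

def spherical_residual(point_3d, rotation, translation, observed_bearing):
    """Angular re-projection error on the unit sphere: transform the
    3D point into the camera frame, normalize it to a bearing vector,
    and measure the angle to the observed unit bearing vector."""
    p_cam = rotation @ point_3d + translation
    predicted = p_cam / np.linalg.norm(p_cam)
    cos_angle = np.clip(np.dot(predicted, observed_bearing), -1.0, 1.0)
    return np.arccos(cos_angle)  # radians; bundle adjustment drives this to zero
```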


Computer Vision and Pattern Recognition | 2016

Simultaneous Estimation of Near IR BRDF and Fine-Scale Surface Geometry

Gyeongmin Choe; Srinivasa G. Narasimhan; In So Kweon

Near-infrared (NIR) images of most materials exhibit less texture and albedo variation, making them beneficial for vision tasks such as intrinsic image decomposition and structured-light depth estimation. Understanding the reflectance properties (BRDF) of materials in the NIR wavelength range can be further useful for many photometric methods, including shape from shading and inverse rendering. However, even with less albedo variation, many materials, e.g. fabrics and leaves, exhibit complex fine-scale surface detail, making it hard to accurately estimate the BRDF. In this paper, we present an approach to simultaneously estimate the NIR BRDF and fine-scale surface details by imaging materials under different IR lighting and viewing directions. This is achieved by an iterative scheme that alternately estimates surface detail and the NIR BRDF of materials. Our setup does not require complicated gantries or calibration, and we present the first NIR dataset of 100 materials, including a variety of fabrics (knits, weaves, cotton, satin, leather), organic materials (skin, leaves, jute, trunk, fur), and inorganic materials (plastic, concrete, carpet). The NIR BRDFs measured from material samples are used with a shape-from-shading algorithm to demonstrate fine-scale reconstruction of objects from a single NIR image.
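The alternation pattern can be illustrated with a deliberately simplified stand-in: below, a Lambertian reflectance replaces the full NIR BRDF, and per-pixel albedo and normals are solved alternately by least squares from images under known light directions. This is a toy of the alternating structure only, not the paper's actual solvers:

```python
import numpy as np

def alternate_lambertian(intensities, light_dirs, init_normals, iters=5):
    """Toy alternation: intensities is (L, P) over L lights and P pixels,
    light_dirs is (L, 3), init_normals is (P, 3). Alternately solve
    albedo (normals fixed) and normals (albedo fixed)."""
    normals = init_normals.copy()
    for _ in range(iters):
        # Fix normals, fit per-pixel albedo a minimizing ||I - a * (L n)||.
        shading = light_dirs @ normals.T                        # (L, P)
        albedo = (shading * intensities).sum(0) / np.maximum(
            (shading ** 2).sum(0), 1e-8)                        # (P,)
        # Fix albedo, solve L n = I / a per pixel in least squares.
        rhs = intensities / np.maximum(albedo, 1e-8)            # (L, P)
        n_scaled, *_ = np.linalg.lstsq(light_dirs, rhs, rcond=None)
        normals = (n_scaled / np.maximum(
            np.linalg.norm(n_scaled, axis=0), 1e-8)).T          # (P, 3)
    return albedo, normals
```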


European Conference on Computer Vision | 2016

Fine-Scale Surface Normal Estimation Using a Single NIR Image

Youngjin Yoon; Gyeongmin Choe; Namil Kim; Joon-Young Lee; In So Kweon

We present surface normal estimation using a single near-infrared (NIR) image. We focus on reconstructing fine-scale surface geometry from an image captured with an uncalibrated light source. To tackle this ill-posed problem, we adopt a generative adversarial network, which is effective in recovering the sharp outputs essential for fine-scale surface normal estimation. We incorporate an angular error term and an integrability constraint into the objective function of the network so that the estimated normals respect physical characteristics. We train and validate our network on a recent NIR dataset, and also evaluate the generality of our trained model on new external datasets captured with a different camera under different environments.
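The two loss terms can be made concrete: the angular error penalizes the angle between predicted and ground-truth normals, and integrability requires the implied surface gradients to form a curl-free field. A NumPy sketch of both terms (our formulation of the standard definitions, not the paper's code):

```python
import numpy as np

def angular_error(pred, gt):
    """Mean angle (radians) between unit-normal maps of shape (H, W, 3)."""
    cos = np.clip((pred * gt).sum(-1), -1.0, 1.0)
    return np.arccos(cos).mean()

def integrability_penalty(normals, eps=1e-6):
    """A normal field n = (nx, ny, nz) is integrable when the surface
    gradients p = -nx/nz and q = -ny/nz satisfy dp/dy = dq/dx (zero curl)."""
    p = -normals[..., 0] / (normals[..., 2] + eps)
    q = -normals[..., 1] / (normals[..., 2] + eps)
    dp_dy = np.diff(p, axis=0)[:, :-1]
    dq_dx = np.diff(q, axis=1)[:-1, :]
    return np.abs(dp_dy - dq_dx).mean()
```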


International Conference on Image Processing | 2015

Depth from accidental motion using geometry prior

Sunghoon Im; Gyeongmin Choe; Hae-Gon Jeon; In So Kweon

We present a method to reconstruct dense 3D points from small camera motion. We begin by estimating sparse 3D points and camera poses with a Structure from Motion (SfM) method using homography decomposition. Although the estimated points are optimized via bundle adjustment and achieve reliable accuracy, the reconstruction is sparse because it depends heavily on the features extracted from the scene. To handle this, we propose a depth propagation method using both a color prior from the images and a geometry prior from the initial points. The major benefit of our method is that we can easily handle regions with similar colors but different depths by using the surface normals estimated from the initial points. We formulate depth propagation as a cost-minimization process; the cost function is linear, which makes our optimization tractable. We demonstrate the effectiveness of our approach by comparing it with a conventional method on various real-world examples.
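A linear cost of this kind leads to a sparse linear system: a data term anchoring pixels with known depth plus a smoothness term whose weights encode affinity between neighbors. A hedged miniature of that structure on a 1D signal (in the paper, the affinity weights would combine the color prior and the surface-normal prior; here they are simply passed in):

```python
import numpy as np
from scipy.sparse import diags, lil_matrix
from scipy.sparse.linalg import spsolve

def propagate_depth_1d(sparse_depth, known_mask, weights, lam=10.0):
    """Toy 1D depth propagation: solve (D + lam * W^T W) d = D d0,
    where D selects pixels with known depth and W penalizes weighted
    first differences between neighboring pixels."""
    n = len(sparse_depth)
    W = lil_matrix((n - 1, n))
    for i in range(n - 1):
        W[i, i], W[i, i + 1] = weights[i], -weights[i]
    D = diags(known_mask.astype(float))
    A = (D + lam * (W.T @ W)).tocsr()
    b = known_mask * sparse_depth
    return spsolve(A, b)
```

Because the system is linear and sparse, a single sparse solve propagates the depths, which is what makes the optimization tractable.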


International Conference on Image Processing | 2015

Reflection removal using disparity and gradient-sparsity via smoothing algorithm

Tharatch Sirinukulwattana; Gyeongmin Choe; In So Kweon

This paper introduces a new method for removing reflections from multi-view images taken through a transparent medium, such as a pane of glass. Our method uses an optimization approach based on the probabilistic relative-smoothness model, which exploits gradient values to separate an image into two sub-layers. As this algorithm has certain limitations in removing reflections, we improve upon it by imposing a gradient-sparsity constraint, which allows the type of reflection captured within a camera's focal length to be effectively removed. We also introduce a constraint on a disparity map that smooths specific areas of the reflection layer while preserving the sharpness of the main object. These two contributions prove sufficient for producing high-quality images. Our algorithm demonstrates good results compared with existing methods, removing most reflection spots, and our system's computation time is reasonably fast.
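The separation intuition pairs a data term (the two layers must sum to the observation) with a smoothness prior on the reflection layer and a gradient-sparsity prior on the background. A deliberately simplified 1D illustration of that trade-off (our toy, not the paper's optimization): large, sparse gradients are assigned to the sharp background and the smooth remainder to the reflection.

```python
import numpy as np

def separate_layers_1d(signal, thresh=0.05):
    """Toy 1D layer separation: keep gradients above a threshold for
    the sharp background layer; the smooth remainder becomes the
    reflection layer, so background + reflection = input."""
    grad = np.diff(signal)
    sharp = np.where(np.abs(grad) > thresh, grad, 0.0)
    background = np.concatenate(([signal[0]], signal[0] + np.cumsum(sharp)))
    reflection = signal - background
    return background, reflection
```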
