Sunghoon Im
KAIST
Publication
Featured research published by Sunghoon Im.
computer vision and pattern recognition | 2016
Hyowon Ha; Sunghoon Im; Jaesik Park; Hae-Gon Jeon; In So Kweon
We propose a novel approach that generates a high-quality depth map from a set of images captured with a small viewpoint variation, namely small motion clip. As opposed to prior methods that recover scene geometry and camera motions using pre-calibrated cameras, we introduce a self-calibrating bundle adjustment tailored for small motion. This allows our dense stereo algorithm to produce a high-quality depth map for the user without the need for camera calibration. In the dense matching, the distributions of intensity profiles are analyzed to leverage the benefit of having negligible intensity changes within the scene due to the minuscule variation in viewpoint. The depth maps obtained by the proposed framework show accurate and extremely fine structures that are unmatched by previous literature under the same small motion configuration.
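To convey the idea behind a self-calibrating bundle adjustment for small motion, the sketch below uses the first-order (small-angle) approximation of the rotation matrix and treats the focal length as a free variable in the reprojection residual. The function names and the simplified pinhole model are our own illustration, not the paper's implementation:

```python
import numpy as np

def small_angle_project(X, w, t, f, c):
    """Project a 3D point X using a small-angle rotation w (R ~ I + [w]x),
    translation t, focal length f, and principal point c."""
    wx, wy, wz = w
    # First-order approximation of the rotation matrix, valid for small motion
    R = np.array([[1.0, -wz,  wy],
                  [ wz, 1.0, -wx],
                  [-wy,  wx, 1.0]])
    Xc = R @ X + t
    return f * Xc[:2] / Xc[2] + c  # pinhole projection

def reprojection_residual(X, w, t, f, c, uv_observed):
    """Residual a self-calibrating bundle adjustment would minimize:
    the focal length f is optimized alongside pose and structure."""
    return small_angle_project(X, w, t, f, c) - uv_observed
```

In a full pipeline, a nonlinear least-squares solver would jointly refine poses, 3D points, and the focal length over all observations; this sketch only shows the per-observation residual.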
international conference on computer vision | 2015
Sunghoon Im; Hyowon Ha; Gyeongmin Choe; Hae-Gon Jeon; Kyungdon Joo; In So Kweon
We present a practical 3D reconstruction method to obtain a high-quality dense depth map from narrow-baseline image sequences captured by commercial digital cameras, such as DSLRs or mobile phones. Depth estimation from small motion has gained interest as a means of various photographic editing, but important limitations present themselves in the form of depth uncertainty due to a narrow baseline and rolling shutter. To address these problems, we introduce a novel 3D reconstruction method from narrow-baseline image sequences that effectively handles the effects of a rolling shutter that occur from most of commercial digital cameras. Additionally, we present a depth propagation method to fill in the holes associated with the unknown pixels based on our novel geometric guidance model. Both qualitative and quantitative experimental results show that our new algorithm consistently generates better 3D depth maps than those by the state-of-the-art method.
european conference on computer vision | 2016
Sunghoon Im; Hyowon Ha; Francois Rameau; Hae-Gon Jeon; Gyeongmin Choe; In So Kweon
With the growing use of head-mounted displays for virtual reality (VR), generating 3D contents for these devices becomes an important topic in computer vision. For capturing full 360 degree panoramas in a single shot, the Spherical Panoramic Camera (SPC) is gaining in popularity. However, estimating depth from a SPC remains a challenging problem. In this paper, we propose a practical method that generates an all-around dense depth map using a narrow-baseline video clip captured by a SPC. While existing methods for depth from small motion rely on perspective cameras, we introduce a new bundle adjustment approach tailored for SPC that minimizes the re-projection error directly on the unit sphere. This enables the estimation of approximate metric camera poses and 3D points. Additionally, we present a novel dense matching method called the sphere sweeping algorithm. This allows us to take advantage of the overlapping regions between the cameras. To validate the effectiveness of the proposed method, we evaluate our approach on both synthetic and real-world data. As an example application, we also present stereoscopic panorama images generated from our depth results.
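The bundle adjustment described above minimizes error on the unit sphere rather than on an image plane. A minimal sketch of such a spherical reprojection error follows, with hypothetical function names and no claim to match the paper's exact formulation:

```python
import numpy as np

def to_bearing(x):
    """Normalize a 3D vector onto the unit sphere (a bearing vector)."""
    return x / np.linalg.norm(x)

def sphere_reprojection_error(X, R, t, bearing_obs):
    """Angular error between the observed bearing and the bearing
    predicted from pose (R, t) and 3D point X, measured directly
    on the unit sphere instead of an image plane."""
    b_pred = to_bearing(R @ X + t)
    cos_ang = np.clip(np.dot(b_pred, bearing_obs), -1.0, 1.0)
    return np.arccos(cos_ang)
```

Minimizing this angular error avoids the distortion a planar projection would introduce for rays far from any single camera's optical axis.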
computer vision and pattern recognition | 2017
Jaeheung Surh; Hae-Gon Jeon; Yunwon Park; Sunghoon Im; Hyowon Ha; In So Kweon
Depth from focus (DfF) is a method of estimating depth of a scene by using the information acquired through the change of the focus of a camera. Within the framework of DfF, the focus measure (FM) forms the foundation on which the accuracy of the output is determined. With the result from the FM, the role of a DfF pipeline is to determine and recalculate unreliable measurements while enhancing those that are reliable. In this paper, we propose a new FM that more accurately and robustly measures focus, which we call the ring difference filter (RDF). FMs can usually be categorized as confident local methods or noise robust non-local methods. The RDF's unique ring-and-disk structure allows it to have the advantages of both local and non-local FMs. We then describe an efficient pipeline that utilizes the properties that the RDF brings. Our method is able to reproduce results that are on par with or even better than those of the state-of-the-art, while spending less time in computation.
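A toy version of a ring-and-disk focus measure can convey the intuition: compare the mean intensity of an inner disk against that of the surrounding ring, so that sharp (in-focus) regions produce large responses and smooth (defocused) regions produce small ones. The naive per-pixel loop below is for illustration only; the actual RDF and its efficient pipeline differ:

```python
import numpy as np

def ring_difference_focus(img, r_in=1, r_out=3):
    """Toy focus measure inspired by the ring-and-disk idea: for each
    pixel, the absolute difference between the mean over an inner disk
    and the mean over the surrounding ring."""
    h, w = img.shape
    ys, xs = np.mgrid[-r_out:r_out + 1, -r_out:r_out + 1]
    dist = np.sqrt(ys ** 2 + xs ** 2)
    disk = dist <= r_in                      # inner disk mask
    ring = (dist > r_in) & (dist <= r_out)   # surrounding ring mask
    pad = np.pad(img, r_out, mode='edge')
    out = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * r_out + 1, j:j + 2 * r_out + 1]
            out[i, j] = abs(patch[disk].mean() - patch[ring].mean())
    return out
```

A constant (fully defocused) region gives zero response, while a sharp edge gives a strong one, which is the property a DfF pipeline exploits when selecting the best-focused frame per pixel.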
computer vision and pattern recognition | 2016
Hae-Gon Jeon; Joon-Young Lee; Sunghoon Im; Hyowon Ha; In So Kweon
Consumer devices with stereo cameras have become popular because of their low-cost depth sensing capability. However, those systems usually suffer from low imaging quality and inaccurate depth acquisition under low-light conditions. To address the problem, we present a new stereo matching method with a color and monochrome camera pair. We focus on the fundamental trade-off that monochrome cameras have much better light-efficiency than color-filtered cameras. Our key ideas involve compensating for the radiometric difference between two cross-spectral images and taking full advantage of complementary data. Consequently, our method produces both an accurate depth map and high-quality images, which are applicable for various depth-aware image processing. Our method is evaluated using various datasets, and our depth estimation consistently outperforms state-of-the-art methods.
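As a rough illustration of compensating the radiometric difference between two cross-spectral images, the sketch below fits a global gain and offset that maps the color image's grayscale onto the monochrome image. This simple linear model is our own simplifying assumption for illustration; the paper's compensation is more sophisticated:

```python
import numpy as np

def radiometric_compensation(mono, color_gray):
    """Fit a global gain/offset (least squares) mapping the grayscale
    of the color image onto the monochrome image, then apply it.
    A crude stand-in for cross-spectral radiometric compensation."""
    x = color_gray.ravel()
    y = mono.ravel()
    A = np.vstack([x, np.ones_like(x)]).T
    gain, offset = np.linalg.lstsq(A, y, rcond=None)[0]
    return gain * color_gray + offset
```

After such alignment, intensity differences between the two images reflect scene structure rather than sensor response, which is what makes cross-spectral stereo matching feasible.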
international conference on image processing | 2015
Sunghoon Im; Gyeongmin Choe; Hae-Gon Jeon; In So Kweon
We present a method to reconstruct dense 3D points from small camera motion. We begin by estimating sparse 3D points and camera poses using a Structure from Motion (SfM) method with homography decomposition. Although the estimated points are optimized via bundle adjustment and achieve reliable accuracy, the reconstruction is sparse because it depends heavily on the features extracted from the scene. To handle this, we propose a depth propagation method using both a color prior from the images and a geometry prior from the initial points. The major benefit of our method is that we can easily handle regions with similar colors but different depths by using the surface normal estimated from the initial points. We formulate depth propagation as a cost minimization problem with a linear cost function, which makes the optimization tractable. We demonstrate the effectiveness of our approach by comparing with a conventional method using various real-world examples.
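The linear cost design can be illustrated on a single scanline: a quadratic data term at pixels with known sparse depth plus a color-weighted smoothness term between neighbors yields a linear system whose solution is the propagated depth. The weighting scheme and parameter names below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def propagate_depth_1d(sparse_depth, known, color, lam=10.0, sigma=0.1):
    """Toy depth propagation on a 1D scanline. The quadratic cost
    (data term at known pixels + color-weighted smoothness between
    neighbors) is minimized by solving a linear system A d = b."""
    n = len(color)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        if known[i]:                       # data term pins known depths
            A[i, i] += 1.0
            b[i] += sparse_depth[i]
        if i + 1 < n:                      # smoothness term between neighbors
            w = lam * np.exp(-((color[i] - color[i + 1]) ** 2) / sigma)
            A[i, i] += w
            A[i + 1, i + 1] += w
            A[i, i + 1] -= w
            A[i + 1, i] -= w
    return np.linalg.solve(A, b)
```

Because the smoothness weight drops across color edges, depth is propagated within similarly colored regions but not across them, which is the intuition behind color-guided propagation.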
international conference on ubiquitous robots and ambient intelligence | 2015
Sunghoon Im; Hae-Gon Jeon; Hyowon Ha; In So Kweon
We present a 3D reconstruction method to estimate depth maps using a light-field camera. The estimated depth is widely used for photographic editing. The proposed algorithm, as a feature-based method, provides a high-quality depth map. First, distinctive features are extracted from the reference image and their correspondences are tracked. Second, the depth value for each point is calculated via a small-angle-approximated bundle adjustment. Finally, the whole depth map is estimated by a simple depth propagation method. To show the effectiveness of our algorithm, we compare our results with the state-of-the-art algorithm.
IEEE Signal Processing Letters | 2017
Seunghak Shin; Sunghoon Im; Inwook Shim; Hae-Gon Jeon; In So Kweon
In this letter, we present an accurate Depth from Small Motion approach, which reconstructs three-dimensional (3-D) depth from image sequences with extremely narrow baselines. We start with estimating sparse 3-D points and camera poses via the structure from motion method. For dense depth reconstruction, we propose a novel depth propagation using a geometric guidance term that considers not only the geometric constraint from the surface normal, but also color consistency. In addition, we propose an accurate surface normal estimation method with a multiple range search so that the normal vector can guide the direction of the depth propagation precisely. The major benefit of our depth propagation method is that it obtains detailed structures of a scene without fronto-parallel bias. We validate our method using various indoor and outdoor datasets, and both qualitative and quantitative experimental results show that our new algorithm consistently generates better 3-D depth information than the results of existing state-of-the-art methods.
international conference on ubiquitous robots and ambient intelligence | 2015
Dong-Jin Kim; Donggeun Yoo; Sunghoon Im; Namil Kim; Tharatch Sirinukulwattana; In So Kweon
Our work is based on the idea of relative attributes, aiming to provide more descriptive information about images. We propose a model that integrates the relative-attribute framework with deep Convolutional Neural Networks (CNNs) to increase the accuracy of attribute comparison. In addition, we analyze the role of each network layer in the process. Our model uses features extracted from a CNN and is learned by the Rank SVM method with these feature vectors. As a result, our model outperforms the original relative attribute model with a significant improvement in accuracy.
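The Rank SVM step can be sketched as a linear SVM over feature-difference vectors: each ordered training pair contributes the constraint that the higher-ranked image should score higher. The subgradient-descent training below is a simplified stand-in for an actual Rank SVM solver, with hypothetical names:

```python
import numpy as np

def rank_svm_fit(feat_pairs, epochs=200, lr=0.01, C=1.0):
    """Toy Rank SVM via subgradient descent: each pair (x_hi, x_lo)
    says x_hi should score higher, which is a hinge-loss constraint
    w @ (x_hi - x_lo) >= 1 on the difference vector."""
    dim = feat_pairs[0][0].shape[0]
    w = np.zeros(dim)
    for _ in range(epochs):
        for x_hi, x_lo in feat_pairs:
            d = x_hi - x_lo
            grad = w.copy()            # subgradient of the regularizer
            if w @ d < 1.0:            # hinge loss is active
                grad -= C * d
            w -= lr * grad
    return w

def rank_score(w, x):
    """Higher score means stronger presence of the attribute."""
    return w @ x
```

In the paper's setting the inputs `x` would be CNN feature vectors rather than the raw toy vectors shown here.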
international conference on robotics and automation | 2018
Gyeongmin Choe; Seong-Heum Kim; Sunghoon Im; Joon-Young Lee; Srinivasa G. Narasimhan; In So Kweon