Publication


Featured research published by Hyowon Ha.


Intelligent Robots and Systems | 2011

A novel 2.5D pattern for extrinsic calibration of ToF and camera fusion system

Jiyoung Jung; Yekeun Jeong; Jaesik Park; Hyowon Ha; James D. K. Kim; In So Kweon

Recently, many researchers have made efforts toward accurate calibration of Time-of-Flight (ToF) cameras to fully utilize their depth measurements. Yet most previous works focus mainly on intrinsic calibration by modeling systematic errors and noise, while extrinsic calibration is also an important factor when constructing a sensor fusion system. In this paper, we present a calibration process that can correctly transfer the depth measurements onto the color image. We use a 2.5D pattern so that sufficient reprojection error can be considered for both the color and ToF cameras. The issues in obtaining correct correspondences for this pattern are discussed. In the optimization stage, a depth constraint is also employed to ensure that the depth measurements lie on the pattern plane. The strengths of the proposed method over previous approaches are evaluated in several robotic applications which require precise ToF and camera calibration.
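
The key optimization couples a standard color-camera reprojection term with a plane constraint on the ToF measurements. The sketch below illustrates that idea with hypothetical names and a simplified parameterization (a fixed rotation matrix rather than an incremental update); it is not the paper's implementation.

```python
import numpy as np

def calibration_residuals(R, t, K_color, pts_pattern_3d, pts_color_2d,
                          tof_points_3d, plane_n, plane_d, w_depth=1.0):
    """Residuals for ToF-to-color extrinsic calibration (illustrative).

    R, t             : rotation/translation from pattern frame to color camera
    K_color          : 3x3 color-camera intrinsics
    pts_pattern_3d   : Nx3 known 3D points on the 2.5D pattern
    pts_color_2d     : Nx2 detected correspondences in the color image
    tof_points_3d    : Mx3 ToF measurements, already transformed into the
                       pattern coordinate frame
    plane_n, plane_d : pattern plane as n.X + d = 0
    """
    # Reprojection error in the color image.
    cam = (R @ pts_pattern_3d.T + t.reshape(3, 1)).T
    proj = (K_color @ cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    r_reproj = (proj - pts_color_2d).ravel()

    # Depth constraint: ToF measurements should lie on the pattern plane.
    r_depth = w_depth * (tof_points_3d @ plane_n + plane_d)

    return np.concatenate([r_reproj, r_depth])
```

The stacked residual vector could then be minimized with a standard solver such as scipy.optimize.least_squares.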


Computer Vision and Pattern Recognition | 2016

High-Quality Depth from Uncalibrated Small Motion Clip

Hyowon Ha; Sunghoon Im; Jaesik Park; Hae-Gon Jeon; In So Kweon

We propose a novel approach that generates a high-quality depth map from a set of images captured with a small viewpoint variation, namely a small motion clip. As opposed to prior methods that recover scene geometry and camera motions using pre-calibrated cameras, we introduce a self-calibrating bundle adjustment tailored for small motion. This allows our dense stereo algorithm to produce a high-quality depth map for the user without the need for camera calibration. In the dense matching, the distributions of intensity profiles are analyzed to exploit the fact that intensity changes within the scene are negligible due to the minuscule variation in viewpoint. The depth maps obtained by the proposed framework show accurate and extremely fine structures that are unmatched by prior work under the same small-motion configuration.
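
A self-calibrating bundle adjustment for small motion typically linearizes the rotations and optimizes the focal length jointly with camera poses and per-point inverse depths. The following is a minimal sketch under those assumptions; the parameterization and variable names are illustrative, not the authors' code.

```python
import numpy as np
from scipy.optimize import least_squares

def small_rotation(r):
    """First-order rotation for small motion: R ~ I + [r]_x."""
    return np.array([[1.0, -r[2], r[1]],
                     [r[2], 1.0, -r[0]],
                     [-r[1], r[0], 1.0]])

def ba_residuals(params, ref_uv, obs_uv, cx, cy):
    """params = [f, 6 pose values per non-reference frame, inverse depth per point].

    ref_uv : (N, 2) pixel coordinates of tracked points in the reference frame
    obs_uv : (F, N, 2) coordinates of the same points in the other frames
    """
    n_frames, n_pts = obs_uv.shape[0], ref_uv.shape[0]
    f = params[0]
    poses = params[1:1 + 6 * n_frames].reshape(n_frames, 6)
    w = params[1 + 6 * n_frames:]            # inverse depths, one per point

    # Back-project reference pixels into unit-depth rays (f is being estimated).
    rays = np.stack([(ref_uv[:, 0] - cx) / f,
                     (ref_uv[:, 1] - cy) / f,
                     np.ones(n_pts)], axis=1)
    res = []
    for i in range(n_frames):
        R, t = small_rotation(poses[i, :3]), poses[i, 3:]
        # A point at depth 1/w projects identically to the scaled point R*ray + w*t.
        X = rays @ R.T + np.outer(w, t)
        uv = np.stack([f * X[:, 0] / X[:, 2] + cx,
                       f * X[:, 1] / X[:, 2] + cy], axis=1)
        res.append((uv - obs_uv[i]).ravel())
    return np.concatenate(res)

# Hypothetical usage:
# sol = least_squares(ba_residuals, x0, args=(ref_uv, obs_uv, cx, cy))
```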


International Conference on Computer Vision | 2015

High Quality Structure from Small Motion for Rolling Shutter Cameras

Sunghoon Im; Hyowon Ha; Gyeongmin Choe; Hae-Gon Jeon; Kyungdon Joo; In So Kweon

We present a practical 3D reconstruction method to obtain a high-quality dense depth map from narrow-baseline image sequences captured by commercial digital cameras, such as DSLRs or mobile phones. Depth estimation from small motion has gained interest as a means of enabling various photographic edits, but important limitations present themselves in the form of depth uncertainty due to the narrow baseline and the rolling shutter. To address these problems, we introduce a novel 3D reconstruction method for narrow-baseline image sequences that effectively handles the rolling-shutter effects present in most commercial digital cameras. Additionally, we present a depth propagation method to fill in the holes associated with unknown pixels based on our novel geometric guidance model. Both qualitative and quantitative experimental results show that our new algorithm consistently generates better 3D depth maps than the state-of-the-art method.
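
A common way to model a rolling shutter is to give each scanline its own camera pose, interpolated across the exposure of the frame. The snippet below sketches that generic idea under a linear, small-motion assumption; it illustrates the effect being corrected, not the paper's exact formulation.

```python
import numpy as np

def rolling_shutter_pose(row, n_rows, r0, t0, r1, t1):
    """Interpolate a per-row camera pose for a rolling-shutter image.

    Each scanline is exposed at a slightly different time, so the pose that
    observed row `row` is blended between the poses at the first (r0, t0)
    and last (r1, t1) scanlines. Under small motion, angle-axis rotation
    vectors can be interpolated linearly without noticeable error.
    """
    a = row / (n_rows - 1)
    return (1 - a) * np.asarray(r0) + a * np.asarray(r1), \
           (1 - a) * np.asarray(t0) + a * np.asarray(t1)
```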


Pattern Recognition Letters | 2016

Automated checkerboard detection and indexing using circular boundaries

Yunsu Bok; Hyowon Ha; In So Kweon

Highlights:
- Low probability of missing true corners due to user-defined parameters of feature extraction.
- Discarding of outliers using characteristics of checkerboard corners.
- Index extension using characteristics of neighboring checkerboard corners.
- Performance of the proposed method evaluated in terms of success ratio.
- Robustness of the proposed method against partial views, lens distortion, and image noise.

This paper presents a new algorithm for automated checkerboard detection and indexing. Automated checkerboard detection is essential for reducing user input in any camera calibration process. We adopt an iterative refinement algorithm to extract corner candidates. In order to utilize the characteristics of checkerboard corners, we extract a circular boundary from each candidate and find its sign-changing indices. We initialize an arbitrary point and its two neighboring points as seeds and assign world coordinates to the other points. The largest set of world-coordinate-assigned points is selected as the detected checkerboard. The performance of the proposed algorithm is evaluated using images of various sizes captured under a variety of conditions.
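
The circular-boundary test can be pictured simply: sample intensities on a ring around each corner candidate and count sign changes relative to the mean. A true checkerboard corner alternates dark and bright quadrants, giving exactly four sign changes, so other candidates can be discarded as outliers. The sketch below (with illustrative radius and sample count) shows only this test, not the authors' full pipeline.

```python
import numpy as np

def circular_sign_changes(img, x, y, radius=5, n_samples=32):
    """Count intensity sign changes on a circle around a corner candidate."""
    angles = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
    xs = np.clip(np.round(x + radius * np.cos(angles)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(y + radius * np.sin(angles)).astype(int), 0, img.shape[0] - 1)
    samples = img[ys, xs].astype(float)
    signs = np.sign(samples - samples.mean())
    # Compare each sample with its circular neighbor, including the wrap-around.
    return int(np.count_nonzero(signs != np.roll(signs, 1)))

# A candidate would be kept when circular_sign_changes(...) == 4.
```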


IEEE Transactions on Visualization and Computer Graphics | 2016

A Real-Time Augmented Reality System to See-Through Cars

Francois Rameau; Hyowon Ha; Kyungdon Joo; Jinsoo Choi; Kibaek Park; In So Kweon

One of the most hazardous driving scenarios is overtaking a slower vehicle: the front vehicle (being overtaken) can occlude an important part of the field of view of the rear vehicle's driver, and this lack of visibility is the most probable cause of accidents in this context. Recent research suggests that augmented reality applied to assisted driving can significantly reduce the risk of accidents. In this paper, we present a real-time marker-less system to see through cars. For this purpose, two cars are equipped with cameras and an appropriate wireless communication system. The stereo vision system mounted on the front car is used to create a sparse 3D map of the environment in which the rear car can be localized. Using this inter-car pose estimation, a synthetic image is generated to overcome the occlusion and to create a seamless see-through effect which preserves the structure of the scene.
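
The compositing step can be thought of as warping the front car's view into the rear camera and blending it over the occluding region. The paper localizes the rear car in a sparse 3D map; the sketch below replaces that machinery with a single plane-induced homography purely for illustration (all names and the plane assumption are hypothetical).

```python
import numpy as np
import cv2

def see_through_overlay(rear_img, front_img, K, R, t, n, d, occluder_mask):
    """Blend the front car's view over the occluding vehicle (illustrative).

    Assumes the visible scene is approximated by a plane n.X + d = 0 in the
    front-camera frame, with (R, t) mapping front-camera to rear-camera
    coordinates and a shared intrinsic matrix K.
    """
    # Plane-induced homography from the front image to the rear image.
    H = K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)
    warped = cv2.warpPerspective(front_img, H, rear_img.shape[1::-1])

    # Semi-transparent blend only where the front vehicle occludes the view.
    out = rear_img.copy()
    blended = cv2.addWeighted(rear_img, 0.3, warped, 0.7, 0)
    out[occluder_mask] = blended[occluder_mask]
    return out
```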


European Conference on Computer Vision | 2016

All-Around Depth from Small Motion with a Spherical Panoramic Camera

Sunghoon Im; Hyowon Ha; Francois Rameau; Hae-Gon Jeon; Gyeongmin Choe; In So Kweon

With the growing use of head-mounted displays for virtual reality (VR), generating 3D content for these devices has become an important topic in computer vision. For capturing full 360-degree panoramas in a single shot, the Spherical Panoramic Camera (SPC) is gaining in popularity. However, estimating depth from an SPC remains a challenging problem. In this paper, we propose a practical method that generates an all-around dense depth map using a narrow-baseline video clip captured by an SPC. While existing methods for depth from small motion rely on perspective cameras, we introduce a new bundle adjustment approach tailored for the SPC that minimizes the reprojection error directly on the unit sphere, enabling the estimation of approximately metric camera poses and 3D points. Additionally, we present a novel dense matching method called the sphere sweeping algorithm, which allows us to take advantage of the overlapping regions between the cameras. To validate the effectiveness of the proposed method, we evaluate our approach on both synthetic and real-world data. As an example application, we also present stereoscopic panorama images generated from our depth results.
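
Minimizing reprojection error on the unit sphere means comparing bearing vectors rather than image-plane coordinates, which sidesteps the perspective projection that breaks down for a 360-degree camera. A minimal residual of that form might look like the following (illustrative names; the paper's actual error term and parameterization may differ).

```python
import numpy as np

def sphere_residual(X, R, t, ray_obs):
    """Reprojection residual on the unit sphere (illustrative).

    X       : 3D point in world coordinates
    R, t    : camera pose (world -> camera)
    ray_obs : observed unit bearing vector for this camera
    """
    x = R @ X + t
    ray_pred = x / np.linalg.norm(x)   # normalize onto the unit sphere
    return ray_pred - ray_obs          # compare bearing vectors directly
```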


International Conference on Computer Vision | 2015

Accurate Camera Calibration Robust to Defocus Using a Smartphone

Hyowon Ha; Yunsu Bok; Kyungdon Joo; Jiyoung Jung; In So Kweon

We propose a novel camera calibration method for defocused images using a smartphone, under the assumption that the defocus blur is modeled as a convolution of a sharp image with a Gaussian point spread function (PSF). In contrast to existing calibration approaches, which require well-focused images, the proposed method achieves accurate camera calibration even with severely defocused images. This robustness to defocus is due to the proposed set of unidirectional binary patterns, which simplifies 2D Gaussian deconvolution to a 1D Gaussian deconvolution problem with multiple observations. By capturing the set of patterns consecutively displayed on a smartphone, we formulate the feature extraction as a deconvolution problem and estimate feature point locations with sub-pixel accuracy together with the blur kernel at each location. We also compensate for the error in camera parameters caused by refraction at the glass panel of the display device. We evaluate the performance of the proposed method on synthetic and real data. Even under severe defocus, our method produces accurate camera calibration results.
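
Because the patterns vary along only one direction, a blurred binary edge reduces to a 1D step convolved with a 1D Gaussian, which has a closed-form erf profile. The sketch below fits that profile to a single scanline to recover a sub-pixel edge location and the blur sigma; it is a toy illustration of the 1D model, not the paper's estimator.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

def blurred_step(x, edge, sigma, lo, hi):
    """A sharp binary edge convolved with a 1D Gaussian PSF is an erf
    profile; `edge` is the sub-pixel edge location."""
    return lo + (hi - lo) * 0.5 * (1 + erf((x - edge) / (np.sqrt(2) * sigma)))

def fit_edge(profile):
    """Fit sub-pixel edge position and blur sigma from one scanline across
    a unidirectional binary pattern (illustrative)."""
    x = np.arange(len(profile), dtype=float)
    p0 = [len(profile) / 2.0, 2.0, float(profile.min()), float(profile.max())]
    (edge, sigma, lo, hi), _ = curve_fit(blurred_step, x, profile, p0=p0)
    return edge, abs(sigma)
```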


International Conference on 3D Vision | 2015

Dense Depth and Albedo from a Single-Shot Structured Light

Hyowon Ha; Jaesik Park; In So Kweon

Single-shot structured light scanning has been actively investigated because it can recover accurate geometric shape even in a dynamic scene. While many single-shot approaches focus on improving depth accuracy, recovering intrinsic properties of the scene such as albedo and shading is also valuable. In this paper, we propose a novel method that reconstructs not only the metric depth but also the intrinsic properties from a single structured light image. We extend the conventional color structured light model to embrace the Lambertian shading model. By using a color phase-shifting pattern, we parameterize the captured image with only two variables, albedo and depth. For an initial solution, a simple but powerful method to decompose the sinusoids from the input image is presented. We formulate a non-linear cost function and jointly optimize albedo and depth efficiently by calculating the analytic Jacobian. We demonstrate that our algorithm works reasonably well on various real-world objects that exhibit challenging surface reflectance and albedo.
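
For a classic three-step phase-shifting pattern, the sinusoids can be decomposed in closed form: the wrapped phase encodes depth, while the modulation amplitude scales with albedo. The snippet below shows that standard decomposition as a plausible stand-in for the paper's initialization step (the paper's actual pattern and model may differ).

```python
import numpy as np

def decompose_three_step(I1, I2, I3):
    """Closed-form decomposition of three sinusoids phase-shifted by 2*pi/3:
    I_k = offset + amplitude * cos(phase + 2*pi*k/3), per pixel."""
    num = np.sqrt(3.0) * (I1 - I3)
    den = 2.0 * I2 - I1 - I3
    phase = np.arctan2(num, den)                        # wrapped phase (depth cue)
    amplitude = np.sqrt(3 * (I1 - I3)**2 + den**2) / 3  # proportional to albedo
    offset = (I1 + I2 + I3) / 3                         # ambient/DC component
    return phase, amplitude, offset
```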


Computer Vision and Pattern Recognition | 2017

Noise Robust Depth from Focus Using a Ring Difference Filter

Jaeheung Surh; Hae-Gon Jeon; Yunwon Park; Sunghoon Im; Hyowon Ha; In So Kweon

Depth from focus (DfF) is a method of estimating the depth of a scene by using the information acquired through changes in the focus of a camera. Within the DfF framework, the focus measure (FM) forms the foundation on which the accuracy of the output is determined. Given the result from the FM, the role of a DfF pipeline is to identify and recalculate unreliable measurements while enhancing those that are reliable. In this paper, we propose a new FM that measures focus more accurately and robustly, which we call the ring difference filter (RDF). FMs can usually be categorized as either confident local methods or noise-robust non-local methods; the RDF's unique ring-and-disk structure allows it to combine the advantages of both. We then describe an efficient pipeline that exploits the properties of the RDF. Our method produces results on par with or better than those of the state of the art, while requiring less computation time.
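
The ring-and-disk structure can be built directly as a convolution kernel: a positive central disk whose response is cancelled by a surrounding negative ring on smooth, defocused regions, but not at sharp, in-focus detail. The sketch below uses illustrative radii and demonstrates the filter's structure rather than the paper's tuned parameters or full pipeline.

```python
import numpy as np
from scipy.ndimage import convolve

def ring_difference_filter(inner_r=1, outer_r=3):
    """Build a ring-and-disk kernel: positive central disk minus a
    surrounding negative ring (radii here are illustrative)."""
    y, x = np.mgrid[-outer_r:outer_r + 1, -outer_r:outer_r + 1]
    d = np.sqrt(x**2 + y**2)
    disk = (d <= inner_r).astype(float)
    ring = ((d > inner_r) & (d <= outer_r)).astype(float)
    return disk / disk.sum() - ring / ring.sum()

def focus_measure(gray):
    """Per-pixel focus response: in-focus pixels differ sharply from their
    ring neighborhood, while defocused pixels yield near-zero response."""
    return np.abs(convolve(gray.astype(float), ring_difference_filter()))
```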


Computer Vision and Pattern Recognition | 2016

Stereo Matching with Color and Monochrome Cameras in Low-Light Conditions

Hae-Gon Jeon; Joon-Young Lee; Sunghoon Im; Hyowon Ha; In So Kweon

Consumer devices with stereo cameras have become popular because of their low-cost depth-sensing capability. However, such systems usually suffer from low imaging quality and inaccurate depth acquisition under low-light conditions. To address this problem, we present a new stereo matching method that uses a color and monochrome camera pair. We exploit the fundamental trade-off that monochrome cameras have much better light efficiency than color-filtered cameras. Our key ideas are to compensate for the radiometric difference between the two cross-spectral images and to take full advantage of their complementary data. Consequently, our method produces both an accurate depth map and high-quality images, which are applicable to various depth-aware image processing tasks. We evaluate our method on various datasets, and our depth estimation consistently outperforms state-of-the-art methods.
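
Before cross-spectral matching, the monochrome intensities must be brought into agreement with the color camera's luminance. Simple histogram matching, shown below, conveys the idea of such radiometric compensation; the paper's actual compensation model is likely more sophisticated.

```python
import numpy as np

def radiometric_transfer(src, ref, n_levels=256):
    """Map the intensity distribution of `src` (e.g. the monochrome image)
    onto that of `ref` (e.g. the color image's luminance) by matching
    cumulative histograms. Both inputs are assumed to be 8-bit images."""
    src_hist, _ = np.histogram(src.ravel(), n_levels, (0, n_levels))
    ref_hist, _ = np.histogram(ref.ravel(), n_levels, (0, n_levels))
    src_cdf = np.cumsum(src_hist) / src.size
    ref_cdf = np.cumsum(ref_hist) / ref.size
    # For each source level, find the reference level with a similar CDF value.
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, n_levels - 1)
    return lut[src.astype(np.uint8)].astype(np.uint8)
```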
