Publications


Featured research published by Yong Ho Hwang.


Advances in Multimedia | 2007

Using irradiance environment map on GPU for real-time composition

Jonghyub Kim; Yong Ho Hwang; Hyun-Ki Hong

For the seamless integration of synthetic objects into video images, generating consistent illumination is critical. This paper presents an interactive rendering system that uses a graphics processing unit (GPU)-based irradiance environment map. A camcorder with a fisheye lens captures environmental information and constructs the environment map in real time. A pre-filtering method, which approximates the irradiance of the scene with nine parameters, renders diffuse objects within real images. The proposed GPU-based interactive common illumination system generates photo-realistic images at 18 to 20 frames per second.
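The nine-parameter approximation mentioned above is, in the standard formulation, a quadratic-form irradiance map: nine spherical-harmonic coefficients of the environment define a 4x4 matrix M with E(n) = n^T M n for a unit normal n. A minimal sketch of that evaluation (the coefficient values below are hypothetical, not from the paper):

```python
import numpy as np

# Constants from the standard 9-coefficient irradiance approximation.
C1, C2, C3, C4, C5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708

def irradiance_matrix(L):
    """Build the 4x4 quadratic form M so that E(n) = n^T M n for a unit
    normal n, given the first 9 spherical-harmonic coefficients L of the
    environment map."""
    L00, L1m1, L10, L11, L2m2, L2m1, L20, L21, L22 = L
    return np.array([
        [C1 * L22,  C1 * L2m2, C1 * L21,  C2 * L11],
        [C1 * L2m2, -C1 * L22, C1 * L2m1, C2 * L1m1],
        [C1 * L21,  C1 * L2m1, C3 * L20,  C2 * L10],
        [C2 * L11,  C2 * L1m1, C2 * L10,  C4 * L00 - C5 * L20],
    ])

def irradiance(M, n):
    """Diffuse irradiance for a unit surface normal n = (x, y, z)."""
    nh = np.append(n, 1.0)           # homogeneous normal
    return float(nh @ M @ nh)

# Hypothetical coefficients: a constant (ambient-only) environment, for
# which every normal should receive the same irradiance.
L = [1.0, 0, 0, 0, 0, 0, 0, 0, 0]
M = irradiance_matrix(L)
print(round(irradiance(M, np.array([0.0, 0.0, 1.0])), 6))   # → 0.886227
```

Per pixel this is a single small matrix-vector product per normal, which is what makes the technique GPU-friendly.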


Mexican International Conference on Artificial Intelligence | 2004

An Improved ICP Algorithm Based on the Sensor Projection for Automatic 3D Registration

Sang-Hoon Kim; Yong Ho Hwang; Hyun-Ki Hong; Min-Hyung Choi

Three-dimensional (3D) registration is the process of aligning range data sets from different views in a common coordinate system. In order to generate a complete 3D model, the data sets must be refined after coarse registration. One of the most popular refinement techniques is the iterative closest point (ICP) algorithm, which starts with pre-estimated overlapping regions. This paper presents an improved ICP algorithm that can automatically register multiple 3D data sets from unknown viewpoints. The sensor projection, which represents the mapping of the 3D data into its associated range image, and a cross projection are used to determine the overlapping region of two range data sets. By combining the ICP algorithm with the sensor projection, multiple 3D data sets can be registered automatically, without error-prone pre-processing, mechanical positioning devices, or manual assistance. The experimental results demonstrated that the proposed method achieves more precise 3D registration of paired 3D data sets than previous methods.
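The refinement loop the abstract builds on can be sketched in a few lines. This is plain point-to-point ICP with a closed-form SVD update, not the paper's sensor-projection variant; the overlap region is simply assumed to be the whole cloud:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Closed-form least-squares rotation R and translation t mapping the
    points P onto Q (the SVD/Kabsch step inside each ICP iteration)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def icp(P, Q, iters=10):
    """Plain point-to-point ICP: alternate brute-force nearest-neighbour
    matching in Q with a closed-form rigid update. The overlap is assumed
    to be the whole cloud here, whereas the paper estimates it with the
    sensor and cross projections."""
    P = P.copy()
    for _ in range(iters):
        idx = np.argmin(((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1), axis=1)
        R, t = best_rigid_transform(P, Q[idx])
        P = P @ R.T + t
    return P

# A 3x3x3 grid and a slightly rotated and translated copy of it.
g = np.arange(3.0)
Q = np.array([[x, y, z] for x in g for y in g for z in g])
th = 0.05
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0, 0.0, 1.0]])
P = Q @ Rz.T + np.array([0.02, -0.01, 0.03])
aligned = icp(P, Q)
print(np.abs(aligned - Q).max() < 1e-8)   # → True
```

Because the perturbation is small relative to the grid spacing, the nearest-neighbour correspondences are correct from the first iteration and the closed-form update recovers the transform exactly.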


IEICE Transactions on Information and Systems | 2008

Key-Frame Selection and an LMedS-Based Approach to Structure and Motion Recovery

Yong Ho Hwang; Jung-Kak Seo; Hyun-Ki Hong

Auto-calibration for structure and motion recovery can be used for match move, where the goal is to insert synthetic 3D objects into real scenes and create views as if they were part of the real scene. However, most auto-calibration methods for multiple views rely on bundle adjustment with non-linear optimization, which requires a very good starting approximation. We propose a novel key-frame selection measure and an LMedS (Least Median of Squares)-based approach to estimate scene structure and motion from image sequences captured with a hand-held camera. First, we select key-frames considering the ratio of the number of correspondences to the number of feature points, the homography error, and the distribution of corresponding points in the image. Then, using LMedS, we reject erroneous frames among the key-frames during absolute quadric estimation. Simulation results demonstrated that the proposed method selects suitable key-frames efficiently and achieves more precise camera pose estimation without non-linear optimization.
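LMedS itself is simple to illustrate. The sketch below fits a 2D line rather than the absolute quadric, but the principle is the same: sample minimal models and keep the one whose median squared residual is smallest, which tolerates up to half the data being outliers.

```python
import random

def lmeds_line(points, trials=200, seed=1):
    """Least Median of Squares fit of a line y = a*x + b: sample minimal
    2-point models and keep the one with the smallest median squared
    residual. The paper applies the same idea to reject erroneous
    key-frames; a 2D line keeps the sketch self-contained."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                          # degenerate minimal sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        res = sorted((y - (a * x + b)) ** 2 for x, y in points)
        med = res[len(res) // 2]              # median squared residual
        if best is None or med < best[0]:
            best = (med, a, b)
    return best[1], best[2]

# Inliers on y = 2x + 1 plus a few gross outliers.
pts = [(x, 2 * x + 1) for x in range(10)] + [(2, 40), (5, -30), (7, 99)]
a, b = lmeds_line(pts)
print(round(a, 3), round(b, 3))   # → 2.0 1.0
```

Any sample drawn from two inliers yields a zero median residual here, so the gross outliers cannot pull the estimate, unlike a least-squares fit.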


Mexican International Conference on Artificial Intelligence | 2004

Structure and motion recovery using two step sampling for 3D match move

Jung-Kak Seo; Yong Ho Hwang; Hyun-Ki Hong

Camera pose and scene geometry estimation is a fundamental requirement for match move, which inserts synthetic 3D objects into real scenes. In order to automate this process, auto-calibration that estimates the camera motion without prior calibration information is needed. Most auto-calibration methods for multiple views contain a bundle adjustment or non-linear minimization step, which is a complex and difficult problem. This paper presents two methods for recovering structure and motion from hand-held image sequences: one is key-frame selection, and the other rejects frames with large errors among the key-frames during absolute quadric estimation by LMedS (Least Median of Squares). The experimental results showed that the proposed method achieves precise camera pose and scene geometry estimation without bundle adjustment.
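One hypothetical way to fold the key-frame selection criteria into a single score is sketched below. The weights, the sign conventions, and the exact combination are illustrative assumptions, not the paper's formula:

```python
def keyframe_score(n_matches, n_features, h_err, spread, w=(1.0, 1.0, 1.0)):
    """Hypothetical scalar key-frame selection measure: a high ratio of
    correspondences to detected features keeps tracking reliable, a large
    homography residual h_err indicates enough baseline between frames,
    and spread in [0, 1] rewards matches that cover the image. The weights
    and the combination are illustrative, not from the paper."""
    return w[0] * (n_matches / n_features) + w[1] * h_err + w[2] * spread

# A frame with good baseline and coverage outranks a nearly degenerate
# (homography-explained, poorly covered) one.
s_good = keyframe_score(450, 500, h_err=2.0, spread=0.8)
s_bad = keyframe_score(480, 500, h_err=0.1, spread=0.3)
print(s_good > s_bad)   # → True
```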


International Conference on Computational Science and Its Applications | 2004

View Morphing Based on Auto-calibration for Generation of In-between Views

Jin-Young Song; Yong Ho Hwang; Hyun-Ki Hong

Since image morphing methods do not account for changes in viewpoint or object pose, it may be difficult to express even simple 3D transformations. Although previous view-morphing methods can synthesize changes in viewpoint, they require known camera viewpoints for automatic generation of in-between views, as well as user-supplied control points for post-warping. This paper presents a new morphing algorithm that can generate in-between scenes automatically by using auto-calibration. Our method rectifies the two images based on the fundamental matrix, computes a morph by linear interpolation with a bilinear disparity map, and then generates in-between views. The proposed method has the advantage that knowledge of 3D shape and camera settings is unnecessary.
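The interpolation step is the heart of view morphing: once the two images are rectified, corresponding points can be blended linearly without distorting shape. A minimal sketch with hypothetical pixel coordinates:

```python
def inbetween(p0, p1, s):
    """Linearly interpolate a pair of corresponding points from the two
    rectified images; for rectified (parallel) views this interpolation
    is shape-preserving, which is the property view morphing relies on."""
    return tuple((1 - s) * a + s * b for a, b in zip(p0, p1))

# Hypothetical corresponding feature in the two rectified images: the
# y-coordinates agree because rectification aligns the scanlines.
print(inbetween((100.0, 40.0), (140.0, 40.0), 0.5))   # → (120.0, 40.0)
```

The per-pixel offsets come from the disparity map; the final in-between view is then un-rectified (post-warped) back to the desired image plane.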


International Conference on Pattern Recognition | 2004

Frame grouping measure for factorization-based projective reconstruction

Yoon-Yong Jung; Yong Ho Hwang; Hyun-Ki Hong

The factorization-based method generally suffers less from drift and error accumulation than merging-based methods. However, the factorization method assumes that all correspondences remain visible in all frames. In order to overcome this limitation, we present a new factorization-based projective reconstruction method for uncalibrated image sequences. The proposed method breaks the full sequence into sub-sequences based on a quantitative measure that considers the number of matching points between frames, the homography error, and the distribution of matching points in the image. All projective reconstructions of the sub-sequences are then registered into the same coordinate frame for a complete description of the scene. Experimental results showed that our algorithm recovers more precise 3D structure than the merging method.
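The factorization step itself is a rank-constrained decomposition of the measurement matrix via SVD. The sketch below assumes the projective depths are already absorbed into W; the paper's contribution is how the sequence is split into sub-sequences, not this step:

```python
import numpy as np

def factorize(W, rank=4):
    """Rank-constrained factorization W ≈ M @ S via SVD, the core step of
    factorization-based (here projective, rank-4) reconstruction. M holds
    the stacked camera (motion) rows, S the structure columns."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :rank] * np.sqrt(s[:rank])            # motion part
    S = np.sqrt(s[:rank])[:, None] * Vt[:rank]     # structure part
    return M, S

# Synthetic rank-4 measurement matrix: 6 views x 10 points, built from
# stacked 3x4 projective cameras acting on homogeneous 3D points.
rng = np.random.default_rng(3)
P_cams = rng.random((18, 4))                        # 6 stacked 3x4 cameras
X = np.vstack([rng.random((3, 10)), np.ones((1, 10))])
W = P_cams @ X
M, S = factorize(W)
print(np.allclose(M @ S, W))   # → True
```

With noisy real correspondences W is only approximately rank 4, and the truncated SVD gives the best rank-4 fit in the least-squares sense.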


International Symposium on Visual Computing | 2007

Contour matching in omnidirectional images

Yong Ho Hwang; Jaeman Lee; Hyun-Ki Hong

This paper presents a novel method for contour matching in architectural scenes captured by an omnidirectional camera. Since most line segments of man-made objects are projected to lines and contours, the contour matching problem is important for 3D analysis of an omnidirectional indoor scene. First, we compute an initial estimate of the camera parameters from corner points and correlation-based matching. Then, the edges obtained by the Canny detector are linked and divided into separate line segments. Using the minimum angular error between the endpoints of each contour, we establish the corresponding contours, and the initial parameters are refined iteratively from the correspondences. The simulation results demonstrate that the algorithm precisely estimates the extrinsic parameters of the camera by contour matching.
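The matching criterion reduces to comparing angles between viewing rays: each contour endpoint back-projects to a ray, and corresponding contours are the pairs whose rays are closest in angle. A minimal sketch with hypothetical rays:

```python
import math

def angular_error(v1, v2):
    """Angle in radians between two viewing rays; corresponding contours
    are the candidate pairs that minimise this error at their endpoints."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    # Clamp to guard against floating-point overshoot outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

# Hypothetical candidate rays: the nearly parallel one wins the match.
ray = (1.0, 0.0, 0.0)
candidates = [(0.0, 1.0, 0.0), (1.0, 0.01, 0.0)]
best = min(candidates, key=lambda v: angular_error(ray, v))
print(best)   # → (1.0, 0.01, 0.0)
```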


International Symposium on Visual Computing | 2006

Omnidirectional camera calibration and 3D reconstruction by contour matching

Yong Ho Hwang; Jaeman Lee; Hyun-Ki Hong

This paper presents a novel approach to both omnidirectional camera calibration and 3D reconstruction of the surrounding scene by contour matching in architectural scenes. By using a quantitative measure that considers the inlier distribution, we can estimate more precise camera model parameters and structure from motion. Since most line segments of man-made objects are projected to contours in omnidirectional images, the contour matching problem is important in the camera recovery process. We propose a novel 3D reconstruction method based on contour matching across three omnidirectional views. First, two points on the contour and their viewing vectors are used to determine an interpretation plane equation, and we obtain a contour intersecting both the plane and the estimated patch of the camera model. Then, a 3D line segment is calculated from two patches and projected to the contour in the third view, and these matching results are used to refine the camera recovery.
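The geometry of the interpretation planes is easy to sketch: each plane contains a camera centre and the viewing rays of the two contour endpoints, so its normal is the cross product of those rays, and the supporting 3D line lies in both planes, so its direction is the cross product of the two plane normals. The camera centres and rays below are hypothetical:

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Hypothetical setup: a 3D segment from A=(1,0,0) to B=(0,1,0) seen from
# camera centres C1=(0,0,0) and C2=(0,0,-1). Each interpretation plane
# normal is the cross product of the endpoint viewing rays.
n1 = cross((1.0, 0.0, 0.0), (0.0, 1.0, 0.0))    # rays A-C1, B-C1
n2 = cross((1.0, 0.0, 1.0), (0.0, 1.0, 1.0))    # rays A-C2, B-C2

# Direction of the supporting 3D line: cross product of the two plane
# normals, parallel to B - A = (-1, 1, 0).
d = cross(n1, n2)
print(d)   # → (1.0, -1.0, 0.0)
```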


RSCTC'06: Proceedings of the 5th International Conference on Rough Sets and Current Trends in Computing | 2006

Calibration of omnidirectional camera by considering inlier distribution

Yong Ho Hwang; Hyun-Ki Hong

This paper presents a new self-calibration algorithm for an omnidirectional camera from uncalibrated images that considers the inlier distribution. First, a parametric non-linear projection model of the omnidirectional camera is estimated with known rotation and translation parameters. After deriving the projection model, we compute an essential matrix of the camera under unknown motions and then determine the camera positions. The standard deviations are used as a quantitative measure to select a proper inlier set. The experimental results showed that a precise estimate of the omnidirectional camera model and the extrinsic parameters, including rotation and translation, can be achieved.
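Determining the camera pose from an essential matrix uses the standard SVD decomposition into four (R, t) candidates. This sketch follows the textbook construction (Hartley and Zisserman) rather than anything specific to the paper; the correct candidate is normally selected by a points-in-front (cheirality) test, omitted here:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def decompose_essential(E):
    """The four (R, t) candidates encoded by an essential matrix, via the
    standard SVD construction; t is recovered only up to scale and sign."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
    t = U[:, 2]
    return [(U @ W @ Vt, t), (U @ W @ Vt, -t),
            (U @ W.T @ Vt, t), (U @ W.T @ Vt, -t)]

# Synthetic motion: rotation about z plus a unit translation along z.
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.0, 0.0, 1.0])
E = skew(t_true) @ R_true
found = any(np.allclose(Rc, R_true) and np.allclose(abs(tc @ t_true), 1.0)
            for Rc, tc in decompose_essential(E))
print(found)   # → True
```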


Conference on Multimedia Modeling | 2007

Using camera calibration and radiosity on GPU for interactive common illumination

Yong Ho Hwang; Junhwan Kim; Hyun-Ki Hong

Global common illumination between real and virtual objects is the process of illuminating scenes and objects with images of light from the real world. After the virtual objects are included, the resulting scene should have a consistent shadow configuration. This paper presents a novel algorithm that integrates synthetic objects into real photographs by using radiosity on the graphics processing unit (GPU) and a high dynamic range (HDR) radiance map. In order to reconstruct the 3D illumination environment of the scene, we estimate the camera model and the extrinsic parameters from omnidirectional images. The simulation results showed that our method generates photo-realistic images.
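The radiosity computation the abstract refers to can be sketched as a gathering iteration B_i = E_i + rho_i * sum_j F_ij B_j, run until the patch radiosities stop changing. The two-patch scene and form factors below are made-up values to keep the sketch self-contained:

```python
def solve_radiosity(E, rho, F, iters=50):
    """Gathering iteration B_i = E_i + rho_i * sum_j F_ij * B_j.
    E is emission, rho is diffuse reflectance, F holds the form factors;
    the paper evaluates this kind of per-patch update on the GPU."""
    n = len(E)
    B = E[:]
    for _ in range(iters):
        B = [E[i] + rho[i] * sum(F[i][j] * B[j] for j in range(n))
             for i in range(n)]
    return B

# Two-patch toy scene: patch 0 emits, grey patch 1 only reflects.
B = solve_radiosity(E=[1.0, 0.0], rho=[0.0, 0.5],
                    F=[[0.0, 0.2], [0.4, 0.0]])
print(B)   # → [1.0, 0.2]
```

In the full system the emission term would come from the HDR radiance map, so the real scene's light drives the virtual patches.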
