Publication


Featured research published by Hwasup Lim.


Computer Vision and Pattern Recognition | 2015

Generalized Deformable Spatial Pyramid: Geometry-preserving dense correspondence estimation

Junhwa Hur; Hwasup Lim; Changsoo Park; Sang Chul Ahn

We present a Generalized Deformable Spatial Pyramid (GDSP) matching algorithm for calculating the dense correspondence between a pair of images with large appearance variations. The main challenges of the problem generally originate in appearance dissimilarities and geometric variations between images. To address these challenges, we improve the existing Deformable Spatial Pyramid (DSP) [10] model by generalizing the search space and redesigning the spatial smoothness term. The former extends the label space with rotations and scales, and the latter simultaneously considers dependencies between high-dimensional labels through the pyramid structure. Our spatial regularization in the high-dimensional space enables our model to effectively preserve the meaningful geometry of objects in the input images while allowing for a wide range of geometric variations such as perspective transforms and non-rigid deformations. Experimental results on public datasets and challenging scenarios show that our method outperforms the state-of-the-art methods both qualitatively and quantitatively.
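
To make the "generalized search space" concrete, here is a minimal sketch assuming a discretized label that couples translation with rotation and scale, plus a truncated smoothness penalty over pyramid neighbors; all names, values, and weights are illustrative, not taken from the paper.

import numpy as np
from itertools import product

# Hypothetical discretized GDSP-style label space: each pyramid node picks
# a translation (tx, ty) jointly with a rotation theta and a scale s.
translations = [(tx, ty) for tx in (-4, 0, 4) for ty in (-4, 0, 4)]
rotations = tuple(np.deg2rad((-20.0, 0.0, 20.0)))
scales = (0.5, 1.0, 2.0)
labels = list(product(translations, rotations, scales))

def smoothness(l1, l2, w_t=1.0, w_r=2.0, w_s=2.0, trunc=10.0):
    # Truncated penalty over all label dimensions, so neighboring pyramid
    # nodes prefer geometrically consistent transforms (illustrative weights).
    (t1, r1, s1), (t2, r2, s2) = l1, l2
    dt = np.hypot(t1[0] - t2[0], t1[1] - t2[1])
    return min(w_t * dt + w_r * abs(r1 - r2) + w_s * abs(s1 - s2), trunc)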


Optical Engineering | 2013

Interframe consistent multifrequency phase unwrapping for time-of-flight cameras

Ouk Choi; Seungkyu Lee; Hwasup Lim

Commercially available time-of-flight cameras illuminate the scene with amplitude-modulated infrared light signals and detect their reflections to provide per-pixel depth maps in real time. These cameras, however, suffer from an inherent problem called phase wrapping, which occurs due to the modular ambiguity in the phase delay measurement. As a result, the measured distance to a scene point becomes much shorter than its actual distance if the point is farther than a certain maximum range. Multifrequency phase unwrapping methods recover the actual distance values by exploiting the consistency of the disambiguated depth values across depth maps of the same scene acquired at different modulation frequencies. For robust and accurate estimation against noise, we build a cost function that evolves over time to enforce both interframe depth consistency and intraframe depth continuity. As demonstrated in experiments with real scenes, the proposed method correctly disambiguates the depth measurements, extending the maximum range restricted by the modulation frequency.
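
A worked example of the ambiguity the abstract describes: at modulation frequency f the unambiguous range is c/(2f), so the sensor reports the true distance modulo that range. The brute-force two-frequency search below is a minimal sketch of the disambiguation idea only, not the paper's time-evolving cost function.

C = 299_792_458.0  # speed of light in m/s

def unambiguous_range(freq_hz):
    # Maximum distance measurable without wrapping at this modulation frequency.
    return C / (2.0 * freq_hz)

def unwrap_two_freq(d1, d2, f1, f2, max_wraps=5):
    # Try all wrap-count pairs and keep the one whose disambiguated
    # depths agree best across the two frequencies.
    r1, r2 = unambiguous_range(f1), unambiguous_range(f2)
    best_err, best_depth = float('inf'), d1
    for n1 in range(max_wraps):
        for n2 in range(max_wraps):
            a, b = d1 + n1 * r1, d2 + n2 * r2
            if abs(a - b) < best_err:
                best_err, best_depth = abs(a - b), 0.5 * (a + b)
    return best_depth

# A point 9.0 m away: at 20 MHz (range ~7.49 m) it wraps to ~1.51 m,
# at 16 MHz (range ~9.37 m) it reads 9.0 m; the search recovers ~9.0 m.
print(unwrap_two_freq(9.0 % unambiguous_range(20e6), 9.0, 20e6, 16e6))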


International Symposium on Ubiquitous Virtual Reality | 2012

Putting Real-World Objects into Virtual World: Fast Automatic Creation of Animatable 3D Models with a Consumer Depth Camera

Hwasup Lim; Seong-Oh Lee; Jongho Lee; Min-Hyuk Sung; Young-Woon Cha; Hyoung-Gon Kim; Sang Chul Ahn

Consumer depth cameras such as the Kinect are gaining huge popularity due to their potential for real-time 3D surface reconstruction. The creation of tractable 3D object models from the reconstructed surface model, however, requires further processing for segmentation, mesh/texture generation, and skeletal rigging. Addressing all of these issues, this paper formulates a comprehensive framework for generating animatable 3D models of real-world objects using only a single depth camera, without the need for generic models or specific viewers. The proposed system enables even non-experts to create 3D models easily within a few minutes. As demonstrated in the experimental results, our system generates visually realistic 3D models and plausible animations of them.
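
Since the end product is an animatable rigged model, the snippet below sketches linear blend skinning, the standard way a skeleton-rigged mesh is deformed at animation time; this is the generic textbook formulation, not code from the paper.

import numpy as np

def linear_blend_skinning(vertices, bone_transforms, weights):
    # vertices: (V, 3); bone_transforms: (B, 4, 4) homogeneous bone matrices;
    # weights: (V, B) skinning weights, each row summing to 1.
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])   # (V, 4)
    per_bone = np.einsum('bij,vj->vbi', bone_transforms, homo)  # (V, B, 4)
    blended = np.einsum('vb,vbi->vi', weights, per_bone)        # (V, 4)
    return blended[:, :3]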


International Symposium on Visual Computing | 2014

3D Deformable Spatial Pyramid for Dense 3D Motion Flow of Deformable Object

Junhwa Hur; Hwasup Lim; Sang Chul Ahn

This paper presents an algorithm for finding the dense motion flow of deformable objects from RGB-D images. We introduce a 3D deformable spatial pyramid model by reformulating the previous 2D deformable spatial pyramid model [1] with depth information. Our algorithm recasts the problem of estimating 3D motion of deformable objects as a problem of estimating 2D motions of a set of grid cells where each pixel contains a viewpoint-invariant feature vector. These grid cells are controlled by a pyramid graph model. Our approach significantly reduces the computational cost through a 2D correspondence search and efficiently handles even large deformations with the pyramid graph model. As demonstrated in the experimental results, the proposed algorithm shows robustness in various deformation scenarios.
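
The cost saving comes from searching correspondences in 2D and only then lifting matches to 3D using the depth channel; the sketch below shows that lifting step under a standard pinhole camera model (the helper names and intrinsics are assumptions, not the paper's code).

import numpy as np

def backproject(u, v, z, fx, fy, cx, cy):
    # Standard pinhole back-projection of pixel (u, v) with depth z.
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def motion_flow_3d(uv_src, uv_dst, z_src, z_dst, intrinsics):
    # Turn a single 2D grid-cell match into a 3D motion vector.
    p = backproject(*uv_src, z_src, *intrinsics)
    q = backproject(*uv_dst, z_dst, *intrinsics)
    return q - p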


Intelligent Robots and Systems | 2014

RGB-D Fusion: Real-time Robust Tracking and Dense Mapping with RGB-D Data Fusion

Seong-Oh Lee; Hwasup Lim; Hyoung-Gon Kim; Sang Chul Ahn

We present RGB-D Fusion, a framework which robustly tracks and reconstructs dense textured surfaces of scenes and objects by integrating both color and depth images streamed from an RGB-D sensor into a global colored volume in real time. To handle failures of KinectFusion's ICP-based tracking when the scene lacks sufficient geometric information, we propose a novel approach that registers the input RGB-D image with the colored volume through photometric tracking and geometric alignment. We demonstrate the strengths of the proposed approach compared with the ICP-based approach and show superior performance of our algorithm on real-world data.
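
One way to read "photometric tracking and geometric alignment" is a joint residual that mixes an intensity term with a point-to-plane term; the weighting and exact form below are assumptions for illustration, not the paper's formulation.

import numpy as np

def joint_residual(i_src, i_dst, p_src, p_dst, n_dst, lam=0.1):
    # Geometric point-to-plane error keeps tracking stable where geometry
    # is rich; the photometric term takes over where it is not.
    geometric = float(np.dot(p_src - p_dst, n_dst)) ** 2
    photometric = (i_src - i_dst) ** 2
    return geometric + lam * photometric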


Intelligent Robots and Systems | 2016

Simultaneous segmentation, estimation and analysis of articulated motion from dense point cloud sequence

Youngji Kim; Hwasup Lim; Sang Chul Ahn; Ayoung Kim

In this paper, we present a unified approach for Expectation Maximization (EM) based motion segmentation, estimation, and analysis from dense point cloud data. When identifying an underlying motion, the literature mainly focuses on three related topics: motion segmentation, estimation, and analysis. These topics are, however, mostly considered separately, and integrated approaches are rare. Our approach specifically focuses on analyzing articulated motion from dense point cloud data by solving all three topics simultaneously in an integrated framework. No prior knowledge, such as background regions, number of segments, or correspondences, is required, since two iterative loops in the algorithm allow us to seamlessly integrate the three tasks. The first loop alternates between segmentation and estimation; the second alternates between motion estimation and analysis. For the first loop, we propose an EM-based subspace clustering algorithm. For the second, we fuse the motion analysis method from [1] into an iterative motion estimation algorithm. As a result, we can simultaneously extract the labels, correspondences, and motions of moving objects from a dense point cloud sequence. In experiments, we validate the performance of the proposed method on both synthetic and real-world data.
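
As a toy illustration of the EM alternation between segment assignment and per-segment motion refitting, the sketch below fits K rigid motions to corresponding point pairs. Unlike the paper it assumes correspondences are given, and every parameter here is illustrative.

import numpy as np

def fit_rigid(P, Q, w):
    # Weighted Kabsch/Procrustes: best rigid (R, t) mapping P onto Q.
    w = w / w.sum()
    mp, mq = w @ P, w @ Q
    C = (P - mp).T @ ((Q - mq) * w[:, None])
    U, _, Vt = np.linalg.svd(C)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mq - R @ mp

def em_segment(P, Q, K=2, iters=10, sigma=0.05):
    # Alternate E-step (soft assignment by residual) and M-step (refit motions).
    resp = np.random.dirichlet(np.ones(K), size=len(P))
    for _ in range(iters):
        motions = [fit_rigid(P, Q, resp[:, k]) for k in range(K)]
        err = np.stack([np.linalg.norm(Q - (P @ R.T + t), axis=1)
                        for R, t in motions], axis=1)
        resp = np.exp(-err ** 2 / (2 * sigma ** 2)) + 1e-12
        resp /= resp.sum(axis=1, keepdims=True)
    return resp.argmax(axis=1), motions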


International Conference on Ubiquitous Robots and Ambient Intelligence | 2014

Interactive retexturing from unordered images

Jongho Lee; Hwasup Lim; Seong-Oh Lee; Young-Woon Cha; Hyoung-Gon Kim; Sang Chul Ahn

The texture quality of 3D models, generated from either multi-view color images or color-depth images, mainly depends on the resolution of the images and the lighting conditions used in the 3D reconstruction stage. It is often necessary to increase the texture quality or remove unwanted artifacts such as seams or shading without reinitiating the entire process. We propose an interactive method to seamlessly replace the low-quality texture map of a 3D model using unordered high-resolution images taken later with different cameras. The camera pose of each new image is estimated using the nearest key-frame image with two-frame bundle adjustment. The new images with their estimated poses are then individually mapped onto the texture map. Finally, the user-preferred regions in specific images are seamlessly aligned with the other texture patches in a combinatorial optimization framework. As demonstrated in the results, our approach significantly improves the visual quality of textured 3D models while preserving the geometric details.
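
A common baseline behind per-region source selection is preferring the image that views a face most frontally; the snippet below shows that heuristic, while the paper resolves seams with a full combinatorial optimization (the function and argument names here are hypothetical).

import numpy as np

def best_source_image(face_normal, view_dirs):
    # Score each candidate image by how frontally it sees the face; the
    # paper goes further and trades this off against seam visibility.
    scores = [float(np.dot(face_normal, -d)) for d in view_dirs]
    return int(np.argmax(scores))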


Symposium on 3D User Interfaces | 2013

Poster: Real time hand pose recognition with depth sensors for mixed reality interfaces

Byungkyu Kang; Mathieu Rodrigue; Tobias Höllerer; Hwasup Lim

We present a method for predicting articulated hand poses in real time with a single depth camera, such as the Kinect or Xtion Pro, for the purpose of interaction in a Mixed Reality environment and for studying the effects of realistic and non-realistic articulated hand models in a Mixed Reality simulator. We demonstrate that employing a randomized decision forest for hand recognition benefits real-time applications by avoiding typical tracking pitfalls such as the need for reinitialization. This object recognition approach to predicting hand poses requires relatively little computation, achieves high prediction accuracy, and sets the groundwork needed to utilize articulated hand movements for 3D tasks in Mixed Reality workspaces.
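
Randomized decision forests for per-pixel pose recognition commonly split on depth-difference features normalized by the center depth, in the style popularized for body-pose estimation; the sketch below shows that feature construction as an assumption about this family of methods, not the poster's exact features.

import numpy as np

def depth_feature(D, px, py, u, v, background=10.0):
    # f = D(x + u / D(x)) - D(x + v / D(x)): normalizing probe offsets by
    # the center depth makes the feature roughly depth-invariant.
    d = D[py, px]
    def probe(off):
        ox, oy = int(px + off[0] / d), int(py + off[1] / d)
        if 0 <= oy < D.shape[0] and 0 <= ox < D.shape[1]:
            return D[oy, ox]
        return background  # out-of-image probes read as far background
    return probe(u) - probe(v)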


Signal Processing: Image Communication | 2016

Surface reflectance estimation and segmentation from single depth image of ToF camera

Seungkyu Lee; Jungjun Kim; Hwasup Lim; Sang Chul Ahn

Objects in the real world can be characterized not just by their shape and color but also by their material characteristics. Estimating the surface reflectance of an arbitrary object can play a critical role in object characterization, recognition, and realistic reconstruction. We propose a reflectance estimation and segmentation method from a single depth image taken by a consumer time-of-flight (ToF) depth camera. Surface points are grouped and segmented based on the similarity of their estimated reflectance. Experimental results show that our estimated surface reflectance provides better discrimination of different material surfaces. Moreover, the reconstructed 3D model of an object can be visualized better, with realistic interaction with the light source, based on the obtained reflectance.

Highlights:
- First practical reflectance estimation framework using a mixture-of-Gaussians model and a single consumer ToF depth camera.
- Semantic image understanding based on varying reflectance characteristics.
- Reflectance-aware 3D scene rendering for improved visualization of the 3D scene.
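
The highlights mention a mixture-of-Gaussians model; as an illustration of grouping points by a scalar reflectance estimate with such a mixture, here is a tiny 1D EM sketch (initialization, component count, and priors are all assumptions, not the paper's model).

import numpy as np

def gmm_1d_segment(x, K=2, iters=50):
    # x: (N,) per-point reflectance estimates; returns a component label per point.
    mu = np.quantile(x, np.linspace(0.2, 0.8, K))
    var = np.full(K, x.var() + 1e-6)
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: responsibility of each Gaussian for each point.
        r = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r /= r.sum(axis=1, keepdims=True) + 1e-12
        # M-step: re-estimate mixture weights, means, and variances.
        nk = r.sum(axis=0)
        pi, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return r.argmax(axis=1)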


International Conference on Image Processing | 2013

Image unprojection for 3D surface reconstruction: A triangulation-based approach

Min-Hyuk Sung; Hwasup Lim; Hyoung-Gon Kim; Sang Chul Ahn

We present a new framework that reconstructs a 3D surface by incorporating a color image into sparse depth points. Assuming that image intensity changes are highly correlated with the scene geometry, we first generate a planar mesh on the image using color variation, and then unproject the triangulated image to world space by integrating the sparse depth points. A quadric error metric-based mesh simplification method is employed for effective image triangulation, and a non-linear optimization is formulated to estimate the length of the projected ray from the camera center to each vertex of the triangulated image by minimizing the errors between the reconstructed surface and the depth points. Our approach achieves an accurate 3D surface with smooth planar regions and sharp edges at object boundaries, and it also outperforms other surface reconstruction methods (which use only depth points) in terms of accuracy.
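
The per-vertex unknown is the length of the projected ray from the camera center; for a single vertex scored against nearby depth points, the least-squares length has a closed form, shown below as a simplified stand-in for the paper's joint non-linear optimization over all vertices.

import numpy as np

def ray_length(cam_center, ray_dir, depth_points):
    # Minimize sum_i || c + t * r - p_i ||^2 over t; with unit r the optimum
    # is the mean projection of (p_i - c) onto the ray.
    r = ray_dir / np.linalg.norm(ray_dir)
    return float(np.mean((depth_points - cam_center) @ r))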

Collaboration


Dive into Hwasup Lim's collaborations.

Top Co-Authors

Sang Chul Ahn, Korea Institute of Science and Technology
Hyoung-Gon Kim, Korea Institute of Science and Technology
Young-Woon Cha, Korea Institute of Science and Technology
Seong-Oh Lee, Korea Institute of Science and Technology
Jongho Lee, Korea Institute of Science and Technology
Min-Hyuk Sung, Korea Institute of Science and Technology
Jaewon Kim, Korea Institute of Science and Technology