Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Inwoo Ha is active.

Publication


Featured research published by Inwoo Ha.


Interactive 3D Graphics and Games | 2011

Realtime human motion control with a small number of inertial sensors

Huajun Liu; Xiaolin K. Wei; Jinxiang Chai; Inwoo Ha; Taehyun Rhee

This paper introduces an approach to performance animation that employs a small number of motion sensors to create an easy-to-use system for interactive control of a full-body human character. Our key idea is to construct a series of online local dynamic models from a prerecorded motion database and utilize them to construct full-body human motion in a maximum a posteriori (MAP) framework. We have demonstrated the effectiveness of our system by controlling a variety of human actions, such as boxing, golf swinging, and table tennis, in real time. Given an appropriate motion capture database, the results are comparable in quality to those obtained from a commercial motion capture system with a full set of motion sensors (e.g., Xsens [2009]); however, our performance animation system is far less intrusive and expensive because it requires only a small number of motion sensors for full-body control. We have also evaluated the performance of our system by leave-one-out experiments and by comparing it with two baseline algorithms.
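The MAP idea above can be illustrated with a toy linear-Gaussian sketch. Everything here is a hypothetical stand-in (the matrices, sizes, and noise model are mine, not the paper's local dynamic models): sparse sensor readings act as a likelihood, a Gaussian prior stands in for the motion-database model, and the pose is the posterior maximizer.

```python
import numpy as np

def map_pose(z, C, mu, Sigma, R):
    """MAP estimate for a linear-Gaussian model:
    z = C x + noise(R), prior x ~ N(mu, Sigma).
    Solves (C^T R^-1 C + Sigma^-1) x = C^T R^-1 z + Sigma^-1 mu."""
    Ri = np.linalg.inv(R)
    Si = np.linalg.inv(Sigma)
    A = C.T @ Ri @ C + Si
    b = C.T @ Ri @ z + Si @ mu
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
x_true = rng.normal(size=10)   # full-body pose (toy 10-DoF vector)
C = rng.normal(size=(3, 10))   # only 3 sensor readings observe the pose
z = C @ x_true                 # noiseless toy observation
mu = np.zeros(10)              # prior mean from the "motion database"
x_map = map_pose(z, C, mu, np.eye(10), 0.01 * np.eye(3))
```

With far fewer sensors than degrees of freedom the problem is underdetermined; the prior resolves the ambiguity while the strong likelihood keeps the estimate consistent with the sensor readings.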


Computer Graphics Forum | 2011

Making Imperfect Shadow Maps View-Adaptive: High-Quality Global Illumination in Large Dynamic Scenes

Tobias Ritschel; Elmar Eisemann; Inwoo Ha; James Dokyoon Kim; Hans-Peter Seidel

We propose an algorithm to compute interactive indirect illumination in dynamic scenes containing millions of triangles. It makes use of virtual point lights (VPL) to compute bounced illumination and a point‐based scene representation to query indirect visibility, similar to Imperfect Shadow Maps (ISM). To ensure a high fidelity of indirect light and shadows, our solution is made view‐adaptive by means of two orthogonal improvements: First, the VPL distribution is chosen to provide more detail, that is, more dense VPL sampling, where these contribute most to the current view. Second, the scene representation for indirect visibility is adapted to ensure geometric detail where it affects indirect shadows in the current view.
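The first improvement, a view-adaptive VPL distribution, amounts to importance-resampling candidate lights by their estimated contribution to the current view. The sketch below is illustrative only (uniform candidates and a binary contribution estimate are my assumptions, not the paper's method): candidates that matter more on screen end up more densely sampled.

```python
import numpy as np

def resample_vpls(positions, view_contrib, n_out, rng):
    """Resample VPLs with probability proportional to their
    estimated contribution to the current view."""
    p = view_contrib / view_contrib.sum()
    idx = rng.choice(len(positions), size=n_out, p=p)
    return positions[idx]

rng = np.random.default_rng(1)
cand = rng.uniform(size=(1000, 3))               # candidate VPL positions
contrib = np.where(cand[:, 0] > 0.5, 10.0, 1.0)  # right half matters more to the view
vpls = resample_vpls(cand, contrib, 256, rng)
```

Most of the 256 surviving VPLs land in the half-space with the higher estimated view contribution, mirroring the "denser VPL sampling where these contribute most" idea.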


International Conference on Image Processing | 2010

A probabilistic approach to realistic face synthesis

Hyunjung Shim; Inwoo Ha; Taehyun Rhee; James D. K. Kim; Chang-Yeong Kim

This paper presents a novel approach to face modeling for realistic synthesis, powered by a probabilistic face diffuse model and a generic face specular map. We first construct a probabilistic face diffuse model for estimating the albedo and the normals of a face from an unknown input image. Then, we introduce a generic face specular map for estimating the specularity of the face. Using the estimated albedo, normal, and specular information, we can realistically synthesize the face under arbitrary lighting and viewing directions. Unlike many existing face modeling techniques, our approach can retain both the diffuse and specular properties of the face without involving an elaborate 3D matching procedure. Thanks to the compact representation and the effective inference scheme, our technique can be applied to many practical applications, such as face normalization, avatar creation, and de-identification.
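The relighting step can be sketched once the per-pixel albedo, normals, and specular maps are in hand (the estimation itself is the paper's contribution and is not reproduced here). This is a minimal assumption-laden stand-in: a Lambertian diffuse term plus a Phong-style specular lobe for one directional light.

```python
import numpy as np

def relight(albedo, normals, spec, light, view, shininess=32.0):
    """Shade pixels from estimated albedo/normal/specular maps:
    diffuse = albedo * max(n.l, 0), specular = spec * max(n.h, 0)^s."""
    l = light / np.linalg.norm(light)
    v = view / np.linalg.norm(view)
    n_dot_l = np.clip(normals @ l, 0.0, None)
    h = (l + v) / np.linalg.norm(l + v)      # Blinn-Phong half vector
    n_dot_h = np.clip(normals @ h, 0.0, None)
    return albedo * n_dot_l + spec * n_dot_h ** shininess

n = np.array([[0.0, 0.0, 1.0]])              # one pixel facing the camera
img = relight(np.array([0.8]), n, np.array([0.2]),
              light=np.array([0.0, 0.0, 1.0]),
              view=np.array([0.0, 0.0, 1.0]))
```

Changing `light` and `view` relights the same estimated maps under arbitrary directions, which is the payoff the abstract describes.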


IEEE Global Conference on Consumer Electronics | 2012

Smooth mesh generation from noisy depth image

Inwoo Ha; Taehyun Rhee; James D. K. Kim

This paper presents a plausible real-time smooth mesh generation method from real-scene images captured by a conventional color-and-depth camera. We perform statistical outlier removal to detect holes in the captured raw depth image and fill the detected holes with the weighted mean values of their neighbors. The 3D surface mesh of the real scene is then reconstructed. The initially generated noisy mesh is filtered with our novel multi-weight depth refinement algorithm. The entire pipeline is fully automated from capture to mesh generation and runs at interactive frame rates. Robust results are shown even for dynamic scenes captured by a moving camera.
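The hole-filling step can be sketched on a toy depth image. Assumptions are mine, not the paper's: holes are marked as zero depth, the weights are uniform over valid 4-neighbors, and the statistical outlier test that finds the holes is omitted.

```python
import numpy as np

def fill_holes(depth):
    """Fill zero-valued holes with the mean of valid 4-neighbours."""
    out = depth.astype(float).copy()
    h, w = depth.shape
    for y, x in np.argwhere(depth == 0):
        vals = [depth[ny, nx]
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                if 0 <= ny < h and 0 <= nx < w and depth[ny, nx] > 0]
        if vals:                      # leave the hole if no valid neighbour
            out[y, x] = np.mean(vals)
    return out

d = np.array([[5, 5, 5],
              [5, 0, 5],             # centre pixel is a hole
              [5, 5, 5]])
filled = fill_holes(d)
```

In practice the weights would fall off with distance or color similarity rather than being uniform, but the neighbor-averaging structure is the same.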


International Conference on Consumer Electronics | 2014

Real-time photorealistic rendering for mobile devices

Inwoo Ha; Minsu Ahn; Hyong-Euk Lee

We present a real-time system for photorealistic rendering on mobile devices. Mobile devices have limited memory bandwidth and computing power, so algorithms for them should be carefully designed to use these resources efficiently. Our approach combines spherical harmonics and ambient occlusion for efficient rendering under environment light. Spherical harmonics can efficiently represent low-frequency illumination and materials with a few coefficients; this especially benefits mobile devices with low memory bandwidth by reducing the transferred data size. Ambient occlusion computes the ratio of irradiance blocked by nearby geometry from the environment light. Combining both techniques, we render fully dynamic scenes under environment illumination on mobile devices in real time.
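A minimal sketch of the combination, under stated assumptions: the first two spherical-harmonics bands (standard constants) give the unoccluded irradiance for a surface normal, and a scalar per-point ambient-occlusion factor scales it down. Pairing them by simple multiplication is my simplification, not necessarily the system's exact formulation.

```python
import numpy as np

def sh_basis(n):
    """First two SH bands evaluated at unit normal n = (x, y, z)."""
    x, y, z = n
    return np.array([0.282095,          # Y_0^0
                     0.488603 * y,      # Y_1^-1
                     0.488603 * z,      # Y_1^0
                     0.488603 * x])     # Y_1^1

def shade(sh_coeffs, normal, ao):
    """Environment irradiance from SH coefficients, attenuated by AO."""
    return ao * max(sh_coeffs @ sh_basis(normal), 0.0)

coeffs = np.array([1.0, 0.0, 0.5, 0.0])      # ambient term + light from +z
open_pt = shade(coeffs, (0.0, 0.0, 1.0), ao=1.0)   # fully visible point
occluded = shade(coeffs, (0.0, 0.0, 1.0), ao=0.3)  # mostly blocked point
```

Only four coefficients travel to the shader per light, which is where the bandwidth saving the abstract mentions comes from.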


Proceedings of SPIE | 2014

Real-time global illumination on mobile device

Minsu Ahn; Inwoo Ha; Hyong-Euk Lee; James D. K. Kim

We propose a novel method for real-time global illumination on mobile devices. Our approach is based on instant radiosity, which uses a sequence of virtual point lights to represent the effect of indirect illumination. Our rendering process consists of three stages. With the primary light, the first stage generates local illumination with the shadow map on the GPU. The second stage uses the reflective shadow map on the GPU and generates the sequence of virtual point lights on the CPU. Finally, we use the splatting method of Dachsbacher et al. [1] and add the indirect illumination to the local illumination on the GPU. With the limited computing resources of mobile devices, only a small number of virtual point lights are allowed for real-time rendering. Our approach uses a multi-resolution sampling method over 3D geometry and attributes simultaneously to reduce the total number of virtual point lights. We also use a hybrid strategy that collaboratively combines the CPUs and GPUs available in a mobile SoC, again due to the limited computing resources of mobile devices. Experimental results demonstrate the global illumination performance of the proposed method.
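The core instant-radiosity accumulation can be sketched with toy data (the CPU/GPU split and the multi-resolution sampling are not reproduced; VPL positions and fluxes here are invented): each VPL contributes to a shading point with a Lambertian cosine term and a clamped inverse-square falloff.

```python
import numpy as np

def indirect_light(point, normal, vpl_pos, vpl_flux):
    """Sum VPL contributions at a shading point:
    flux * max(n.wi, 0) / r^2, with the falloff clamped near r = 0."""
    total = 0.0
    for p, flux in zip(vpl_pos, vpl_flux):
        d = p - point
        r2 = d @ d
        wi = d / np.sqrt(r2)                 # direction towards the VPL
        cos_t = max(normal @ wi, 0.0)
        total += flux * cos_t / (r2 + 1e-4)  # clamp avoids singularities
    return total

vpls = np.array([[0.0, 1.0, 0.0],           # directly above the point
                 [1.0, 1.0, 0.0]])          # off to the side
flux = np.array([1.0, 1.0])
L = indirect_light(np.zeros(3), np.array([0.0, 1.0, 0.0]), vpls, flux)
```

The clamp on the falloff is the usual instant-radiosity trick for suppressing the bright splotches that VPLs very close to a surface would otherwise cause.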


Proceedings of SPIE | 2014

Combining spherical harmonics and point lights for real-time photorealistic rendering

Inwoo Ha; Minsu Ahn; Hyungwook Lee; James D. K. Kim

Photorealistic rendering with all-frequency lights and materials in real time is a difficult problem. Environment lights and complex materials can be approximated with spherical harmonics defined in the spherical Fourier domain: the low-frequency components of complex environment lights and materials are projected onto just a few spherical harmonics bases, which makes real-time rendering possible in a low-dimensional space. However, high-frequency components, such as small bright lights and glossy reflections, are filtered out during the spherical harmonics projection. On the other hand, point lights are efficient at representing high-frequency lights, while they are inefficient for low-frequency lights, such as smooth area lights. By combining the spherical harmonics and point light approaches, we can render a scene in real time, preserving both low- and high-frequency effects.
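The split can be sketched as follows, with simplifications of my own (diffuse-only shading, standard two-band SH constants, and an invented light setup): a smooth environment term goes through the SH path while a small bright emitter is kept as an explicit point light, and the two paths are summed.

```python
import numpy as np

def shade(normal, sh_coeffs, point_lights):
    """Low-frequency environment via SH plus explicit point lights.
    point_lights is a list of (direction, intensity) pairs."""
    x, y, z = normal
    basis = np.array([0.282095, 0.488603 * y, 0.488603 * z, 0.488603 * x])
    low = max(sh_coeffs @ basis, 0.0)            # smooth environment part
    high = sum(inten * max(normal @ (np.asarray(d) / np.linalg.norm(d)), 0.0)
               for d, inten in point_lights)     # sharp emitters
    return low + high

n = np.array([0.0, 0.0, 1.0])
color = shade(n,
              np.array([0.6, 0.0, 0.0, 0.0]),    # dim ambient environment
              [((0.0, 0.0, 1.0), 2.0)])          # one small bright light
```

Each path handles the frequencies it represents cheaply, so neither the SH truncation nor a large point-light count becomes the bottleneck.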


Archive | 2011

PROCESSING APPARATUS AND METHOD FOR CREATING AVATAR

Taehyun Rhee; Inwoo Ha; Do-kyoon Kim; Xiaolin Wei; Jinxiang Chai; Huajun Liu


Archive | 2016

DEVICE AND METHOD TO DISPLAY OBJECT WITH VISUAL EFFECT

Keechang Lee; Minsu Ahn; Inwoo Ha; Seungin Park; Hyong Euk Lee; Heesae Lee


Archive | 2011

Data processing apparatus and method for motion synthesis

Jinxiang Chai; Inwoo Ha; Taehyun Rhee; Do-kyoon Kim; Huajun Liu; Xiaolin Wei

Collaboration


Dive into Inwoo Ha's collaboration.

Top Co-Authors


Taehyun Rhee

Victoria University of Wellington
