Taehyun Rhee
Victoria University of Wellington
Publications
Featured research published by Taehyun Rhee.
Computer Graphics Forum | 2006
Taehyun Rhee; John P. Lewis; Ulrich Neumann
WPSD (Weighted Pose Space Deformation) is an example based skinning method for articulated body animation. The per‐vertex computation required in WPSD can be parallelized in a SIMD (Single Instruction Multiple Data) manner and implemented on a GPU. While such vertex‐parallel computation is often done on the GPU vertex processors, further parallelism can potentially be obtained by using the fragment processors. In this paper, we develop a parallel deformation method using the GPU fragment processors. Joint weights for each vertex are automatically calculated from sample poses, thereby reducing manual effort and enhancing the quality of WPSD as well as SSD (Skeletal Subspace Deformation). We show sufficient speed‐up of SSD, PSD (Pose Space Deformation) and WPSD to make them suitable for real‐time applications.
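As context for the speed-ups reported above, the deformers being accelerated are compact linear models. Below is a minimal numpy sketch of SSD with a PSD-style corrective term; all inputs are hypothetical stand-ins, and the paper's GPU fragment-processor implementation is not reproduced here.

```python
import numpy as np

def ssd(rest_verts, joint_transforms, weights):
    """Skeletal Subspace Deformation (linear blend skinning).

    rest_verts:       (V, 3) rest-pose vertex positions
    joint_transforms: (J, 4, 4) per-joint rest-to-posed transforms
    weights:          (V, J) per-vertex joint weights, rows sum to 1
    """
    V = rest_verts.shape[0]
    homo = np.hstack([rest_verts, np.ones((V, 1))])                 # (V, 4)
    # Blend the transforms per vertex, then apply: sum_j w_ij * T_j * v_i
    blended = np.einsum('vj,jab->vab', weights, joint_transforms)   # (V, 4, 4)
    posed = np.einsum('vab,vb->va', blended, homo)
    return posed[:, :3]

def psd(rest_verts, joint_transforms, weights, pose, sample_poses, sample_deltas):
    """PSD adds a pose-space corrective offset interpolated over sample poses,
    here with normalized inverse-distance weights (an RBF would also work)."""
    d = np.linalg.norm(sample_poses - pose, axis=1) + 1e-8          # (S,)
    k = (1.0 / d) / np.sum(1.0 / d)                                 # (S,)
    delta = np.einsum('s,svc->vc', k, sample_deltas)                # (V, 3)
    # Corrections are applied in rest space, then skinned as usual
    return ssd(rest_verts + delta, joint_transforms, weights)
```

WPSD differs mainly in weighting the pose-space distance per vertex rather than globally; the interpolation structure is the same, which is what makes the per-vertex computation SIMD-friendly.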
Interactive 3D Graphics and Games | 2011
Huajun Liu; Xiaolin K. Wei; Jinxiang Chai; Inwoo Ha; Taehyun Rhee
This paper introduces an approach to performance animation that employs a small number of motion sensors to create an easy-to-use system for interactive control of a full-body human character. Our key idea is to construct a series of online local dynamic models from a prerecorded motion database and utilize them to reconstruct full-body human motion in a maximum a posteriori (MAP) framework. We have demonstrated the effectiveness of our system by controlling a variety of human actions, such as boxing, golf swinging, and table tennis, in real time. Given an appropriate motion capture database, the results are comparable in quality to those obtained from a commercial motion capture system with a full set of motion sensors (e.g., XSens [2009]); however, our performance animation system is far less intrusive and expensive because it requires only a small number of motion sensors for full-body control. We have also evaluated the performance of our system with leave-one-out experiments and by comparing it with two baseline algorithms.
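Under Gaussian assumptions, a MAP estimate of this form reduces to a regularized least-squares solve per frame. A minimal sketch follows; the measurement matrix and dynamics matrix are hypothetical stand-ins for the paper's learned local models.

```python
import numpy as np

def map_pose(z, C, x_prev, A, sigma_z=0.05, sigma_p=0.1):
    """One MAP step: argmax_x p(z | x) p(x | x_prev).

    z:      (m,)  sparse sensor readings
    C:      (m,n) linear measurement model (sensor = C @ full_pose)
    x_prev: (n,)  previous full-body pose
    A:      (n,n) local linear dynamics, learned from the motion database
    With Gaussian noise both terms are quadratic, so the MAP estimate is
    the minimizer of ||Cx - z||^2/s_z^2 + ||x - A x_prev||^2/s_p^2.
    """
    n = x_prev.shape[0]
    H = C.T @ C / sigma_z**2 + np.eye(n) / sigma_p**2
    b = C.T @ z / sigma_z**2 + (A @ x_prev) / sigma_p**2
    return np.linalg.solve(H, b)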
Interactive 3D Graphics and Games | 2006
Taehyun Rhee; Ulrich Neumann; John P. Lewis
The human hand is an important interface with complex shape and movement. In virtual reality and gaming applications, the use of an individualized rather than generic hand representation can increase the sense of immersion and in some cases may lead to more effortless and accurate interaction with the virtual world. We present a method for constructing a person-specific model from a single canonically posed palm image of the hand, without human guidance. Tensor voting is employed to extract the principal creases on the palmar surface. Joint locations are estimated using the extracted features and analysis of surface anatomy. The skin geometry of a generic 3D hand model is deformed using radial basis functions guided by correspondences to the extracted surface anatomy and hand contours. The result is a 3D model of an individual's hand, with similar joint locations, contours, and skin texture.
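The deformation step is standard scattered-data interpolation. A minimal sketch of an RBF warp driven by landmark correspondences is below; a Gaussian kernel is used for simplicity, and the paper's exact kernel and constraint setup may differ.

```python
import numpy as np

def rbf_warp(src_landmarks, dst_landmarks, points, sigma=0.1):
    """Deform `points` so that src_landmarks map onto dst_landmarks.

    src_landmarks, dst_landmarks: (n, 3) corresponding feature positions
    points:                       (m, 3) generic-model vertices to warp
    """
    D = np.linalg.norm(src_landmarks[:, None] - src_landmarks[None], axis=-1)
    K = np.exp(-(D / sigma) ** 2)                          # (n, n) kernel matrix
    # Solve for per-landmark displacement coefficients
    W = np.linalg.solve(K, dst_landmarks - src_landmarks)  # (n, 3)
    d = np.linalg.norm(points[:, None] - src_landmarks[None], axis=-1)
    return points + np.exp(-(d / sigma) ** 2) @ W
```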
Eurographics | 2014
John P. Lewis; Ken Anjyo; Taehyun Rhee; Mengjie Zhang; Frédéric H. Pighin; Zhigang Deng
“Blendshapes”, a simple linear model of facial expression, is the prevalent approach to realistic facial animation. It has driven animated characters in Hollywood films, and is a standard feature of commercial animation packages. The blendshape approach originated in industry, and became a subject of academic research relatively recently. This survey describes the published state of the art in this area, covering both literature from the graphics research community, and developments published in industry forums. We show that, despite the simplicity of the blendshape approach, there remain open problems associated with this fundamental technique.
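The model surveyed here is a single linear equation, f = b0 + Bw: the neutral face plus a weighted sum of expression deltas. A minimal numpy sketch:

```python
import numpy as np

def blendshape(neutral, deltas, weights):
    """Evaluate f = b0 + B w.

    neutral: (V, 3)    neutral face vertices (b0)
    deltas:  (K, V, 3) per-target offsets from the neutral (delta form of B)
    weights: (K,)      slider values, typically in [0, 1]
    """
    return neutral + np.einsum('k,kvc->vc', weights, deltas)
```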
Symposium on Computer Animation | 2011
Taehyun Rhee; Youngkyoo Hwang; James D. K. Kim; Changyeong Kim
This paper describes the complete pipeline of a practical system for producing real-time facial expressions of a 3D virtual avatar controlled by an actor's live performance. The system handles the practical challenges arising from markerless expression capture with a single conventional video camera. For robust tracking, a localized algorithm constrained by belief propagation is applied to the upper face, and an appearance-matching technique using a parameterized generic face model is exploited for lower-face and head-pose tracking. The captured expression features are then transferred to high-dimensional 3D animation controls using our facial expression space, a structure-preserving map between two algebraic structures. The transferred animation controls drive the facial animation of a 3D avatar while optimizing the smoothness of the face mesh. An example-based face deformation technique produces non-linear local detail deformations on the avatar that are not captured in the movement of the animation controls.
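The transfer from captured features to animation controls can be read as a constrained inverse problem. The sketch below is a deliberate simplification: a linear map with temporal smoothing, not the paper's structure-preserving expression space; the matrix M is a hypothetical stand-in learned from corresponding example pairs.

```python
import numpy as np

def retarget(features, M, w_prev, lam=0.5):
    """Map tracked expression features to avatar animation controls.

    features: (f,)   expression features captured from video
    M:        (f, k) linear map from control weights to features
    w_prev:   (k,)   controls from the previous frame
    lam:             temporal-smoothness strength
    Solves min_w ||M w - features||^2 + lam ||w - w_prev||^2.
    """
    k = M.shape[1]
    H = M.T @ M + lam * np.eye(k)
    return np.linalg.solve(H, M.T @ features + lam * w_prev)
```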
IEEE Computer Graphics and Applications | 2015
Kieran Carnegie; Taehyun Rhee
Although head-mounted displays (HMDs) are ideal devices for personal viewing of immersive stereoscopic content, exposure to VR applications on them causes significant discomfort for the majority of people, with symptoms including eye fatigue, headaches, nausea, and sweating. A conflict between accommodation and vergence depth cues on stereoscopic displays is a significant cause of this visual discomfort. This article describes an evaluation of the effectiveness of dynamic depth-of-field (DoF) blur in reducing the discomfort caused by exposure to stereoscopic content on HMDs. In a study using a commercial game engine implementation, participants reported a decrease in symptom severity on a simulator sickness questionnaire when DoF blurring was enabled, indicating that dynamic DoF can effectively reduce visual discomfort.
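Dynamic DoF blur of this kind keys the blur radius to a per-pixel circle of confusion from the thin-lens model; a minimal sketch, with illustrative (not the study's) parameter values:

```python
import numpy as np

def circle_of_confusion(depth, focus_dist, focal_len=0.05, aperture=0.02):
    """Per-pixel circle-of-confusion diameter from the thin-lens model.

    depth:      (H, W) scene depth per pixel, in meters
    focus_dist: depth of the point the user is assumed to be fixating
                (e.g. the object under the view center)
    """
    return (aperture * focal_len * np.abs(depth - focus_dist)
            / (depth * (focus_dist - focal_len)))
```

The resulting diameters can then drive a spatially varying blur, for instance a Gaussian whose radius is proportional to the CoC, leaving the fixated depth sharp.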
The Visual Computer | 2012
Hyunjung Shim; Rolf Adelsberger; James D. K. Kim; Seon-Min Rhee; Taehyun Rhee; Jae Young Sim; Markus H. Gross; Changyeong Kim
This paper presents a multi-view acquisition system using multi-modal sensors, composed of time-of-flight (ToF) range sensors and color cameras. Our system captures multiple pairs of color images and depth maps from multiple viewing directions. In order to ensure acceptable measurement accuracy, we compensate for errors in sensor measurement and calibrate the multi-modal devices. Through extensive experiments and analysis, we identify the major sources of systematic error in sensor measurement and construct an error model for compensation. As a result, we provide a practical solution for real-time error compensation of depth measurements. Moreover, we implement a calibration scheme for the multi-modal devices, unifying the spatial coordinates of the multi-modal sensors. The main contribution of this work is a thorough analysis of systematic error in sensor measurement, and thereby a reliable methodology for robust error compensation. The proposed system offers a real-time multi-modal sensor calibration method and is thus applicable to the 3D reconstruction of dynamic scenes.
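A common form of such compensation is to fit the systematic error as a smooth function of measured depth against ground-truth references, then subtract it at runtime. A minimal sketch, using a polynomial as a stand-in for the paper's error model:

```python
import numpy as np

def fit_depth_error_model(measured, reference, degree=3):
    """Fit a per-device systematic error model e(d) = measured - true,
    as a polynomial in measured depth, from calibration captures."""
    return np.polyfit(measured, measured - reference, degree)

def compensate(depth_map, coeffs):
    """Subtract the predicted systematic error from a raw ToF depth map;
    cheap enough to run per frame for real-time compensation."""
    return depth_map - np.polyval(coeffs, depth_map)
```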
IEEE Transactions on Visualization and Computer Graphics | 2017
Taehyun Rhee; Lohit Petikam; Benjamin Peter Allen; Andrew Chalmers
This paper presents a novel immersive system called MR360 that provides interactive mixed reality (MR) experiences using a conventional low dynamic range (LDR) 360° panoramic video (360-video) shown in head-mounted displays (HMDs). MR360 seamlessly composites 3D virtual objects into a live 360-video, using the input panoramic video as the lighting source to illuminate the virtual objects. Image based lighting (IBL) is perceptually optimized to provide fast and believable results from the LDR 360-video. The most salient light regions in the input panoramic video are detected to optimize the number of lights used to cast perceptible shadows, and the areas of the detected lights then set the penumbrae of those shadows to provide realistic soft shadows. Finally, our real-time differential rendering synthesizes the illumination of the virtual 3D objects into the 360-video. MR360 provides the illusion of interacting with objects in a video, which are actually 3D virtual objects seamlessly composited into the background of the 360-video. MR360 was implemented in a commercial game engine and tested using various 360-videos. Since our pipeline does not require any pre-computation, it can synthesize an interactive MR scene from a live 360-video stream while providing realistic, high-performance rendering suitable for HMDs.
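The light-detection stage can be approximated by thresholding luminance in the equirectangular frame and converting the brightest pixels to light directions; the crude sketch below keeps only the top-N brightest pixels, whereas the paper's region-based detector is more involved.

```python
import numpy as np

def detect_salient_lights(pano, n_lights=4, percentile=99.0):
    """Pick the brightest pixels of an LDR equirectangular frame to act
    as shadow-casting lights.

    pano: (H, W, 3) LDR 360 frame in [0, 1]
    Returns a list of (unit direction, pixel color) pairs.
    """
    H, W, _ = pano.shape
    lum = pano @ np.array([0.2126, 0.7152, 0.0722])   # Rec.709 luminance
    thresh = np.percentile(lum, percentile)
    ys, xs = np.nonzero(lum >= thresh)
    order = np.argsort(lum[ys, xs])[::-1][:n_lights]  # brightest first
    lights = []
    for y, x in zip(ys[order], xs[order]):
        # Equirectangular pixel -> spherical direction (y-up convention)
        theta = np.pi * (y + 0.5) / H                 # polar angle
        phi = 2.0 * np.pi * (x + 0.5) / W             # azimuth
        d = np.array([np.sin(theta) * np.cos(phi),
                      np.cos(theta),
                      np.sin(theta) * np.sin(phi)])
        lights.append((d, pano[y, x]))
    return lights
```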
IEEE Transactions on Visualization and Computer Graphics | 2011
Taehyun Rhee; John P. Lewis; Ulrich Neumann; Krishna S. Nayak
This paper describes a complete system to create anatomically accurate example-based volume deformation and animation of articulated body regions, starting from multiple in vivo volume scans of a specific individual. In order to solve the correspondence problem across volume scans, a template volume is registered to each sample. The wide range of pose variation is first approximated by volume blend deformation (VBD), providing proper initialization of the articulated subject in different poses. A novel registration method is presented to efficiently reduce the computational cost while avoiding the strong local minima inherent in complex articulated body volume registration. The algorithm tightly constrains the degrees of freedom and search space involved in the nonlinear optimization, using hierarchical volume structures and locally constrained deformation based on the biharmonic clamped spline. Our registration step establishes correspondence across scans, allowing a data-driven deformation approach in the volume domain. The results provide an occlusion-free person-specific 3D human body model, anatomically accurate inner tissue deformations, and realistic volume animation of articulated movements driven by standard joint controls estimated from the actual skeleton. Our approach also addresses the practical issues arising in using scans from living subjects. The robustness of our algorithms is tested by applying them to the hand, probably the most complex articulated region in the body, and the knee, a frequent subject area for medical imaging due to injuries.
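Locally constrained deformation keeps each optimization step small by attenuating displacements with a compactly supported falloff around a constraint. A minimal sketch, using a Wendland-style kernel as a stand-in for the biharmonic clamped spline named above:

```python
import numpy as np

def clamped_falloff(r, R):
    """Compactly supported falloff: 1 at the center, smoothly 0 at radius R,
    with zero slope at both ends (C^1), so the deformation blends out."""
    t = np.clip(r / R, 0.0, 1.0)
    return (1.0 - t) ** 2 * (1.0 + 2.0 * t)

def local_deform(voxels, center, offset, R):
    """Move voxel sample points by `offset`, attenuated by distance from
    `center`, keeping the deformation (and the search space) local."""
    r = np.linalg.norm(voxels - center, axis=1)
    return voxels + clamped_falloff(r, R)[:, None] * offset
```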
Pacific Conference on Computer Graphics and Applications | 2007
Taehyun Rhee; John P. Lewis; Ulrich Neumann; Krishna S. Nayak
Articulated body animation with smooth skin deformation is an important topic in computer graphics. This paper presents a pipeline that extends articulated body deformation to the volume graphics domain. The pipeline consists of in-vivo volume scans, kinematic joint estimation, volumetric joint weight computation, soft-tissue volume deformation, and direct volume rendering. The result is a fully articulated body volume driven by intuitive joint controls that respects the rigid deformation of the bone structures and produces smooth deformations of both the skin surface and the interior soft-tissue regions.

In this paper, we propose a fast global illumination solution for interactive lighting design. Using our method, light sources and the viewpoint are movable, and the characteristics of materials can be modified (assuming low-frequency BRDFs) during rendering. Our solution is based on particle tracing (a variation of photon mapping) and final gathering. We assume that the objects in the input scene are static, and pre-compute potential light paths for particle tracing and final gathering. To perform final gathering quickly, we propose an efficient technique called Hierarchical Histogram Estimation for the rapid estimation of radiances from the distribution of the particles. The rendering process can be fully implemented on the GPU, and our method achieves interactive frame rates even for scenes with more than 100,000 triangles.
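Histogram-based final gathering amounts to binning the flux of nearby particles over incoming directions and normalizing by bin solid angle. A minimal, flat (non-hierarchical) sketch of that estimator:

```python
import numpy as np

def histogram_radiance(dirs, powers, n_theta=8, n_phi=16):
    """Estimate incident light at a gather point by binning nearby
    particles into a direction histogram.

    dirs:   (N, 3) unit directions from the gather point to particle hits
    powers: (N,)   particle powers (flux)
    Returns per-bin, solid-angle-normalized estimates, (n_theta, n_phi).
    """
    theta = np.arccos(np.clip(dirs[:, 2], -1.0, 1.0))
    phi = np.arctan2(dirs[:, 1], dirs[:, 0]) % (2.0 * np.pi)
    ti = np.minimum((theta / np.pi * n_theta).astype(int), n_theta - 1)
    pj = np.minimum((phi / (2.0 * np.pi) * n_phi).astype(int), n_phi - 1)
    hist = np.zeros((n_theta, n_phi))
    np.add.at(hist, (ti, pj), powers)   # accumulate flux per direction bin
    # Normalize by bin solid angle to get a radiance-like density
    t_edges = np.linspace(0.0, np.pi, n_theta + 1)
    solid = (np.cos(t_edges[:-1]) - np.cos(t_edges[1:]))[:, None] \
            * (2.0 * np.pi / n_phi)
    return hist / solid
```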