Publications


Featured research published by Byungkuk Choi.


ACM Transactions on Graphics | 2012

Spacetime expression cloning for blendshapes

Yeongho Seol; John P. Lewis; Jaewoo Seo; Byungkuk Choi; Ken Anjyo; Junyong Noh

The goal of a practical facial animation retargeting system is to reproduce the character of a source animation on a target face while providing room for additional creative control by the animator. This article presents a novel spacetime facial animation retargeting method for blendshape face models. Our approach starts from the basic principle that the source and target movements should be similar. By interpreting movement as the derivative of position with respect to time, and adding suitable boundary conditions, we formulate the retargeting problem as a Poisson equation. Specified (e.g., neutral) expressions at the beginning and end of the animation as well as any user-specified constraints in the middle of the animation serve as boundary conditions. In addition, a model-specific prior is constructed to represent the plausible expression space of the target face during retargeting. A Bayesian formulation is then employed to produce target animation that is consistent with the source movements while satisfying the prior constraints. Since the preservation of temporal derivatives is the primary goal of the optimization, the retargeted motion preserves the rhythm and character of the source movement and is free of temporal jitter. More importantly, our approach provides spacetime editing for the popular blendshape representation of facial models, exhibiting smooth and controlled propagation of user edits across surrounding frames.
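As a rough illustration of the gradient-domain idea, the Python sketch below retargets a single 1D animation curve by matching temporal derivatives under fixed boundary values. The function name and discretization are our assumptions for illustration; the actual system operates on blendshape weights and adds a Bayesian prior.

```python
import numpy as np

def retarget_curve(source, x0, xT):
    """Minimal 1D sketch of gradient-domain (Poisson) retargeting: find a
    target curve whose temporal derivative matches the source's, subject to
    fixed values at the first and last frame (the boundary conditions)."""
    T = len(source)
    g = np.diff(source)                      # source temporal derivative, length T-1
    # Unknowns are x[1..T-2]; x[0] = x0 and x[T-1] = xT are fixed.
    # The normal equations of min ||diff(x) - g||^2 give a discrete Poisson
    # (tridiagonal) system: -x[i-1] + 2*x[i] - x[i+1] = g[i-1] - g[i].
    n = T - 2
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = g[:-1] - g[1:]                       # divergence of the guidance field
    b[0] += x0                               # fold boundary values into the RHS
    b[-1] += xT
    x_inner = np.linalg.solve(A, b)
    return np.concatenate([[x0], x_inner, [xT]])

# Example: move a sine curve to new endpoints. The rhythm of the motion
# (its derivative) is preserved while the boundary expressions change.
t = np.linspace(0, 2 * np.pi, 50)
tgt = retarget_curve(np.sin(t), 0.5, -0.5)
```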


ACM Transactions on Graphics | 2013

Data-driven control of flapping flight

Eunjung Ju; Jungdam Won; Jehee Lee; Byungkuk Choi; Junyong Noh; Min Gyu Choi

We present a physically based controller that simulates the flapping behavior of a bird in flight. We recorded the motion of a dove using marker-based optical motion capture and high-speed video cameras. The bird flight data thus acquired allow us to parameterize natural wingbeat cycles and provide the simulated bird with reference trajectories to track in physics simulation. Our controller simulates the articulated rigid bodies of a bird's skeleton and deformable feathers to reproduce the aerodynamics of bird flight. Motion capture from live birds is not as easy as human motion capture because of the lack of cooperation from the subjects. Therefore, the flight data we could acquire were limited. We developed a new method to learn wingbeat controllers even from sparse, biased observations of real bird flight. Our simulated bird imitates the life-like flapping of a flying bird while actively maintaining its balance. The bird's flight is interactively controllable and resilient to external disturbances.
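A hedged sketch of the tracking step only: the paper learns wingbeat controllers from data, but at its core a parameterized reference trajectory is tracked in physics simulation, e.g. with PD control. The parameterization, names, and gains below are illustrative assumptions, not the paper's.

```python
import numpy as np

def wingbeat_reference(t, amp, freq, phase):
    """Hypothetical parameterized wingbeat cycle: target angle and angular
    velocity for one wing joint, standing in for trajectories extracted
    from the motion-captured dove."""
    w = 2 * np.pi * freq
    theta = amp * np.sin(w * t + phase)
    dtheta = amp * w * np.cos(w * t + phase)
    return theta, dtheta

def pd_torque(theta, dtheta, theta_ref, dtheta_ref, kp=300.0, kd=30.0):
    """Standard PD tracking torque driving the simulated joint toward the
    reference; the learned controller modulates such targets over time."""
    return kp * (theta_ref - theta) + kd * (dtheta_ref - dtheta)
```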


Computer Animation and Virtual Worlds | 2011

Characteristic facial retargeting

Jaewon Song; Byungkuk Choi; Yeongho Seol; Junyong Noh

Facial motion retargeting has been developed mainly in the direction of representing high fidelity between a source and a target model. We present a novel facial motion retargeting method that properly accounts for the significant characteristics of the target face model. We focus on stylistic facial shapes and timings that best reveal the individuality of the target model after the retargeting process is finished. The method works with a range of expression pairs between the source and the target facial expressions and emotional sequence pairs of the source and the target facial motions. We first construct a prediction model to place semantically corresponding facial shapes. Our hybrid retargeting model, which combines radial basis function (RBF) and kernel canonical correlation analysis (kCCA)-based regression methods, copes well with new input source motions without visual artifacts. 1D Laplacian motion warping follows the shape retargeting process, replacing stylistically important emotional sequences and thus representing the characteristics of the target face.
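To make the regression half concrete, here is a minimal sketch of Gaussian RBF regression from source expression features to target shape parameters; the paper's actual model combines this with kCCA, and the feature and shape encodings here are hypothetical.

```python
import numpy as np

def rbf_fit(X, Y, eps=1.0):
    """Fit RBF weights mapping source expression features X (n x d) to
    target shape parameters Y (n x k): a sketch of the RBF half of the
    hybrid RBF + kCCA model."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    Phi = np.exp(-eps * d2)                          # Gaussian kernel matrix
    return np.linalg.solve(Phi + 1e-8 * np.eye(len(X)), Y)

def rbf_predict(X_train, W, x_new, eps=1.0):
    """Evaluate the fitted RBF model at a new source expression x_new."""
    d2 = np.sum((X_train - x_new) ** 2, axis=-1)
    return np.exp(-eps * d2) @ W
```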


International Symposium on Visual Computing | 2009

Cartoon Animation Style Rendering of Water

Mi You; Jinho Park; Byungkuk Choi; Junyong Noh

We present a cartoon animation style rendering method for water animation. In an effort to capture and represent the crucial features of water observed in traditional cartoon animation, we propose a Cartoon Water Shader. The proposed rendering method is a modified Phong illumination model augmented by the optical properties that ray tracing provides. We also devise a metric that automatically switches between refraction and reflection based on the angle between the normal vector of the water surface and the camera direction. An essential characteristic of cartoon water animation is the use of flow lines. We produce water flow regions with a Water Flow Shader. Assuming that the input to our system is the result of an existing fluid simulation, the input mesh contains the proper geometric properties. The water flow lines can be recovered by computing the curvature from the input geometry, through which ridges and valleys are easily identified.
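A toy sketch of the two shading ingredients described above: banded (toon) diffuse shading, and the refraction/reflection switch based on the angle between the surface normal and the camera direction. The band count and threshold are illustrative guesses, not the paper's metric; vectors are assumed normalized.

```python
import numpy as np

def toon_diffuse(normal, light_dir, bands=3):
    """Quantized diffuse term: the usual way a Phong-style model is pushed
    toward a cartoon look."""
    d = max(float(np.dot(normal, light_dir)), 0.0)
    return np.floor(d * bands) / bands

def water_optics(normal, view_dir, refl_color, refr_color, threshold=0.6):
    """Pick reflection or refraction from how directly the camera faces the
    surface: grazing views favor reflection, steep views show what is
    under the water."""
    facing = abs(float(np.dot(normal, view_dir)))    # 1 = looking straight down
    return refr_color if facing > threshold else refl_color
```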


International Conference on Computer Graphics and Interactive Techniques | 2016

SketchiMo: sketch-based motion editing for articulated characters

Byungkuk Choi; Roger Blanco i Ribera; John P. Lewis; Yeongho Seol; Seok-Pyo Hong; Haegwang Eom; Sunjin Jung; Junyong Noh

We present SketchiMo, a novel approach for the expressive editing of articulated character motion. SketchiMo solves for the motion given a set of projective constraints that relate the sketch inputs to the unknown 3D poses. We introduce the concept of sketch space, a contextual geometric representation of sketch targets---motion properties that are editable via sketch input---that enhances, right on the viewport, different aspects of the motion. The combination of the proposed sketch targets and space allows for seamless editing of a wide range of properties, from simple joint trajectories to local parent-child spatiotemporal relationships and more abstract properties such as coordinated motions. This is made possible by interpreting the user's input through a new sketch-based optimization engine in a uniform way. In addition, our view-dependent sketch space also serves the purpose of disambiguating the user inputs by visualizing their range of effect and transparently defining the necessary constraints to set the temporal boundaries for the optimization.
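As a much-simplified illustration of a projective constraint, the sketch below adjusts a 3D trajectory so its orthographic projection follows sketched 2D points, with a proximity regularizer keeping each point near its original position. SketchiMo's actual engine optimizes character poses over space and time; everything here is an assumption for illustration.

```python
import numpy as np

def edit_with_sketch(P, traj, sketch, lam=0.5):
    """Per-frame least squares under an orthographic projective constraint:
    for each frame, solve min ||P @ x - s||^2 + lam * ||x - x0||^2, where
    P is a 2x3 projection, traj is the original (T, 3) trajectory, and
    sketch is the (T, 2) sketched screen-space target."""
    A = P.T @ P + lam * np.eye(3)
    return np.array([np.linalg.solve(A, P.T @ s + lam * x0)
                     for x0, s in zip(traj, sketch)])
```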


Computer Graphics Forum | 2015

Interactive Rigging with Intuitive Tools

Seungbae Bang; Byungkuk Choi; Roger Blanco i Ribera; Meekyoung Kim; Sung-Hee Lee; Junyong Noh

Rigging is a core element in the process of bringing a 3D character to life. The rig defines and delimits the motions of the character and provides an interface through which an animator interacts with the 3D character. The quality of the rig has a key impact on the expressiveness of the character. Creating a usable, rich, production-ready rig is a laborious task requiring direct intervention by a trained professional because the goal is difficult to achieve with fully automatic methods. We propose a semi-automatic rig editing framework which eases the need for manual intervention while maintaining an important degree of control over the final rig. Starting from an automatically generated base rig, we provide interactive operations that efficiently configure the skeleton structure and mesh skinning.
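For context on the mesh-skinning part of the rig, here is standard linear blend skinning in Python; this is the textbook technique such rigs configure, not code from the paper.

```python
import numpy as np

def linear_blend_skinning(rest_verts, bone_mats, weights):
    """Linear blend skinning: each vertex is transformed by a weighted mix
    of bone transforms. rest_verts (V, 3), bone_mats (B, 4, 4) homogeneous
    transforms, weights (V, B) with rows summing to 1."""
    V = rest_verts.shape[0]
    homo = np.hstack([rest_verts, np.ones((V, 1))])          # (V, 4)
    per_bone = np.einsum('bij,vj->vbi', bone_mats, homo)     # each bone's result
    skinned = np.einsum('vb,vbi->vi', weights, per_bone)     # weighted blend
    return skinned[:, :3]
```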


Computer Graphics Forum | 2017

Sparse Rig Parameter Optimization for Character Animation

Jaewon Song; Roger Blanco i Ribera; Kyungmin Cho; Mi You; John P. Lewis; Byungkuk Choi; Junyong Noh

We propose a novel motion retargeting method that efficiently estimates artist-friendly rig-space parameters. Inspired by the workflow typically observed in keyframe animation, our approach transfers a source motion onto a production-friendly character rig by optimizing the rig-space parameters while balancing fidelity to the source motion against the ease of subsequent editing. We propose the use of an intermediate object to transfer both the skeletal motion and the mesh deformation. The target rig-space parameters are then optimized to minimize the error between the motion of the intermediate object and the target character. The optimization uses a set of artist-defined weights to modulate the effect of the different rig-space parameters over time. Sparsity-inducing regularizers and keyframe extraction streamline any additional editing processes. The results obtained with different types of character rigs demonstrate the versatility of our method and its effectiveness in simplifying any necessary manual editing within the production pipeline.
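A minimal sketch of how a sparsity-inducing regularizer keeps rig parameters editable: L1-penalized least squares over a linearized rig response, solved with ISTA (proximal gradient), so unused parameters are driven to exactly zero. The linearization and all names are our assumptions; the paper optimizes full nonlinear rig-space parameters.

```python
import numpy as np

def sparse_rig_solve(J, target, lam=0.1, steps=500):
    """Solve min ||J @ p - target||^2 / 2 + lam * ||p||_1 with ISTA, where
    J is a (linearized) rig Jacobian mapping rig parameters p to motion,
    and the L1 term zeroes out parameters the motion does not need."""
    p = np.zeros(J.shape[1])
    step = 1.0 / np.linalg.norm(J, 2) ** 2           # 1 / Lipschitz constant
    for _ in range(steps):
        grad = J.T @ (J @ p - target)                # gradient of the data term
        q = p - step * grad
        p = np.sign(q) * np.maximum(np.abs(q) - step * lam, 0.0)  # soft threshold
    return p
```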


International Conference on Computer Graphics and Interactive Techniques | 2011

Data-driven bird simulation

Eunjung Ju; Byungkuk Choi; Junyong Noh; Jehee Lee

The natural motion of living creatures such as humans and animals has generated widespread interest in the computer animation field. The film and game industries want to present these virtual creatures in their products, exhibiting motions that are as natural and realistic as possible. Among them, flying animals such as birds have received particular attention because of the special conditions of moving in flight. Because a bird moves through the sky with its wings and its motion is affected by subtle air flow, the principles for generating flying behavior have to be completely different from those for creating biped human or quadruped animal locomotion.


Computer Animation and Virtual Worlds | 2008

Extended spatial keyframing for complex character animation

Byungkuk Choi; Mi You; Junyong Noh


Korea Computer Graphics Society | 2012

Characteristic Facial Retargeting

Junyong Noh; Jaewon Song; Yeongho Seol; Soyeong Jeon; Byungkuk Choi

Collaboration


Dive into Byungkuk Choi's collaboration.

Top Co-Authors

John P. Lewis
University of Southern California

Eunjung Ju
Seoul National University

Jehee Lee
Seoul National University