Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yeongho Seol is active.

Publication


Featured research published by Yeongho Seol.


ACM Transactions on Graphics | 2012

Spacetime expression cloning for blendshapes

Yeongho Seol; John P. Lewis; Jaewoo Seo; Byungkuk Choi; Ken Anjyo; Junyong Noh

The goal of a practical facial animation retargeting system is to reproduce the character of a source animation on a target face while providing room for additional creative control by the animator. This article presents a novel spacetime facial animation retargeting method for blendshape face models. Our approach starts from the basic principle that the source and target movements should be similar. By interpreting movement as the derivative of position with time, and adding suitable boundary conditions, we formulate the retargeting problem as a Poisson equation. Specified (e.g., neutral) expressions at the beginning and end of the animation as well as any user-specified constraints in the middle of the animation serve as boundary conditions. In addition, a model-specific prior is constructed to represent the plausible expression space of the target face during retargeting. A Bayesian formulation is then employed to produce target animation that is consistent with the source movements while satisfying the prior constraints. Since the preservation of temporal derivatives is the primary goal of the optimization, the retargeted motion preserves the rhythm and character of the source movement and is free of temporal jitter. More importantly, our approach provides spacetime editing for the popular blendshape representation of facial models, exhibiting smooth and controlled propagation of user edits across surrounding frames.
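
As a rough illustration of the derivative-matching idea, the sketch below retargets a single blendshape weight curve by solving the discrete Poisson-style least-squares problem with boundary constraints; the model-specific prior and the full Bayesian formulation described in the abstract are omitted, and all names and data are hypothetical.

```python
import numpy as np

def retarget_channel(w_src, boundary, lam=1e3):
    """Minimal sketch: match the target curve's temporal derivatives to the
    source curve's, subject to (heavily weighted) boundary/keyframe values."""
    T = len(w_src)
    # First-order temporal difference operator: (D w)[t] = w[t+1] - w[t].
    D = np.zeros((T - 1, T))
    D[np.arange(T - 1), np.arange(T - 1)] = -1.0
    D[np.arange(T - 1), np.arange(1, T)] = 1.0

    rows, rhs = [D], [D @ w_src]
    for t, value in boundary.items():
        e = np.zeros(T)
        e[t] = lam                      # soft constraint row for frame t
        rows.append(e[None, :])
        rhs.append(np.array([lam * value]))
    A, b = np.vstack(rows), np.concatenate(rhs)
    w_tgt, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w_tgt

# Hypothetical usage: a smile channel constrained to start and end neutral.
src = np.sin(np.linspace(0, np.pi, 60)) ** 2
tgt = retarget_channel(src, boundary={0: 0.0, 59: 0.0})
```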


Symposium on Computer Animation | 2013

Creature features: online motion puppetry for non-human characters

Yeongho Seol; Carol O'Sullivan; Jehee Lee

We present a novel real-time motion puppetry system that drives the motion of non-human characters using human motion input. We aim to control a variety of creatures whose body structures and motion patterns can differ greatly from a human's. A combination of direct feature mapping and motion coupling enables the generation of natural creature motion, along with intuitive and expressive control for puppetry. First, in the design phase, direct feature mappings and motion classification can be efficiently and intuitively computed given crude motion mimicking as input. Later, during the puppetry phase, the user's body motions are used to control the target character in real time, using the combination of feature mappings generated in the design phase. We demonstrate the effectiveness of our approach with several examples of natural puppetry, where a variety of non-human creatures are controlled in real time using human motion input from a commodity motion sensing device.
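
A minimal sketch of the design-phase/puppetry-phase split described above, assuming synchronized recordings of human features and creature pose parameters from a crude mimicking session; a single global linear least-squares map stands in for the per-feature mappings, motion classification, and motion coupling of the actual system, and all data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
human_feats = rng.normal(size=(200, 12))                  # e.g. limb/torso features
creature_pose = human_feats @ rng.normal(size=(12, 30))   # e.g. creature joint angles

# Design phase: fit the mapping (with a bias term) once, offline.
X = np.hstack([human_feats, np.ones((len(human_feats), 1))])
M, *_ = np.linalg.lstsq(X, creature_pose, rcond=None)

# Puppetry phase: apply the mapping to each new frame of live input.
def drive_creature(live_human_feat):
    return np.append(live_human_feat, 1.0) @ M
```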


International Conference on Computer Graphics and Interactive Techniques | 2014

Interactive manipulation of large-scale crowd animation

Jongmin Kim; Yeongho Seol; Taesoo Kwon; Jehee Lee

Editing large-scale crowd animation is a daunting task due to the lack of efficient manipulation methods. This paper presents a novel cage-based editing method for large-scale crowd animation. The cage encloses the animated characters and supports convenient space/time manipulations that were unachievable with previous approaches. The proposed method is based on a combination of cage-based deformation and as-rigid-as-possible deformation, with a set of constraints integrated into the system to produce the desired results. Our system allows animators to edit existing crowd animations intuitively at real-time rates while maintaining complex interactions between individual characters. Our examples demonstrate how our cage-based user interfaces reduce the time and effort required to manipulate large crowd animations.
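
To make the cage idea concrete, here is a small sketch that binds agent root positions to a 2D cage with mean value coordinates and re-synthesizes them after the cage is edited; the as-rigid-as-possible term, the spacetime constraints, and the preservation of inter-character interactions are not modeled, and the cage and agent positions are hypothetical.

```python
import numpy as np

def mean_value_coords(p, cage):
    """2D mean value coordinates of a point p strictly inside a closed,
    counter-clockwise polygon `cage` (K x 2)."""
    d = cage - p
    r = np.linalg.norm(d, axis=1)
    K = len(cage)
    cross2 = lambda a, b: a[0] * b[1] - a[1] * b[0]
    w = np.zeros(K)
    for i in range(K):
        j, k = (i + 1) % K, (i - 1) % K
        a_i = np.arctan2(cross2(d[i], d[j]), np.dot(d[i], d[j]))   # angle (v_i, p, v_i+1)
        a_k = np.arctan2(cross2(d[k], d[i]), np.dot(d[k], d[i]))   # angle (v_i-1, p, v_i)
        w[i] = (np.tan(a_k / 2.0) + np.tan(a_i / 2.0)) / r[i]
    return w / w.sum()

# Bind once, then re-synthesize agent positions from the edited cage.
cage = np.array([[0, 0], [4, 0], [4, 3], [0, 3]], dtype=float)
agents = np.array([[1.0, 1.0], [3.0, 2.0]])
coords = np.array([mean_value_coords(p, cage) for p in agents])
edited_cage = cage + np.array([0.0, 0.5])      # e.g. drag the whole cage upward
new_agents = coords @ edited_cage
```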


Computer Animation and Virtual Worlds | 2011

Characteristic facial retargeting

Jaewon Song; Byungkuk Choi; Yeongho Seol; Junyong Noh

Facial motion retargeting has been developed mainly in the direction of achieving high fidelity between a source and a target model. We present a novel facial motion retargeting method that properly accounts for the significant characteristics of the target face model. We focus on stylistic facial shapes and timings that reveal the individuality of the target model even after the retargeting process is finished. The method works with a set of expression pairs between the source and target faces and with emotional sequence pairs of the source and target facial motions. We first construct a prediction model to place semantically corresponding facial shapes. Our hybrid retargeting model, which combines radial basis function (RBF) and kernel canonical correlation analysis (kCCA)-based regression, copes well with new input source motions without visual artifacts. 1D Laplacian motion warping follows the shape retargeting process, replacing stylistically important emotional sequences and thus representing the characteristics of the target face.
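
The sketch below illustrates only the shape-regression ingredient: a radial basis function interpolator maps source expression parameters to target shapes learned from corresponding expression pairs. The kernel-CCA part of the hybrid model and the 1D Laplacian motion warping are omitted, and all names and data are hypothetical.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
src_params = rng.normal(size=(20, 8))     # source expression parameters per pair
tgt_shapes = rng.normal(size=(20, 50))    # matching artist-made target shapes

rbf = RBFInterpolator(src_params, tgt_shapes, kernel='thin_plate_spline')

def retarget_frame(new_src_param):
    """Predict a target shape for an unseen source expression."""
    return rbf(new_src_param[None, :])[0]
```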


The Visual Computer | 2012

Weighted pose space editing for facial animation

Yeongho Seol; Jaewoo Seo; Paul Hyunjin Kim; J. P. Lewis; Junyong Noh

Blendshapes are the most commonly used approach to realistic facial animation in production. A blendshape model typically begins with a relatively small number of blendshape targets reflecting major muscles or expressions. However, the majority of the effort in constructing a production quality model occurs in the subsequent addition of targets needed to reproduce various subtle expressions and correct for the effects of various shapes in combination. To make this subsequent modeling process much more efficient, we present a novel editing method that removes the need for much of the iterative trial-and-error decomposition of an expression into targets. Isolated problematic frames of an animation are re-sculpted as desired and used as training for a nonparametric regression that associates these shapes with the underlying blendshape weights. Using this technique, the artist’s correction to a problematic expression is automatically applied to similar expressions in an entire sequence, and indeed to all future sequences. The extent and falloff of editing is controllable and the effect is continuously propagated to all similar expressions. In addition, we present a search scheme that allows effective reuse of pre-sculpted editing examples. Our system greatly reduces time and effort required by animators to create high quality facial animations.
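
A minimal sketch of the propagation idea under simplified assumptions: the re-sculpted frames are treated as training pairs from blendshape weights to corrective offsets, and a Gaussian radial basis regression (whose kernel width plays the role of the controllable extent and falloff) applies the correction to similar expressions. The search scheme for reusing pre-sculpted examples is not shown, and all data is synthetic.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
n_weights, n_coords = 40, 9000                     # hypothetical rig size
edited_weights = rng.uniform(0, 1, size=(5, n_weights))       # re-sculpted frames
sculpted_offsets = rng.normal(0, 0.01, size=(5, n_coords))    # artist corrections

falloff = 0.4                                      # extent of the edit in weight space
correction = RBFInterpolator(edited_weights, sculpted_offsets,
                             kernel='gaussian', epsilon=1.0 / falloff)

def corrected_mesh(weights, base_mesh, blend_deltas):
    mesh = base_mesh + weights @ blend_deltas      # ordinary blendshape evaluation
    return mesh + correction(weights[None, :])[0]  # learned pose-space correction
```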


Computer Animation and Virtual Worlds | 2013

Human motion reconstruction from sparse 3D motion sensors using kernel CCA-based regression

Jongmin Kim; Yeongho Seol; Jehee Lee

This paper presents a real‐time performance animation system that reproduces full‐body character animation based on sparse three‐dimensional (3D) motion sensors on a performer. Producing faithful character animation from this setting is a mathematically ill‐posed problem, because input data from the sensors are not sufficient to determine the full degrees of freedom of a character. Given the input data from 3D motion sensors, we select similar poses from a motion database and build an online local model that transforms the low‐dimensional input signal into a high‐dimensional character pose. A regression method based on kernel canonical correlation analysis (CCA) is employed, because it effectively handles a wide variety of motions. Examples show that various human motions are naturally reproduced by the proposed method.
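
As a compact stand-in for the regression step, the sketch below fits a canonical correlation analysis model on paired sensor readings and full-body poses and uses it to predict a pose for a new sensor frame; plain linear CCA from scikit-learn replaces both the kernelized variant and the online local model built from nearest database poses, and the data is synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(3)
pose_db = rng.normal(size=(500, 60))                             # full-body poses
sensor_db = pose_db[:, :12] + 0.05 * rng.normal(size=(500, 12))  # sparse sensor data

model = CCA(n_components=8).fit(sensor_db, pose_db)

def reconstruct_pose(sensor_frame):
    """Lift a low-dimensional sensor reading to a full-body pose."""
    return model.predict(sensor_frame[None, :])[0]
```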


Computer Animation and Virtual Worlds | 2010

Rigging transfer

Jaewoo Seo; Yeongho Seol; Daehyeon Wi; Younghui Kim; Junyong Noh

Realistic character animation requires elaborate rigging built on top of high quality 3D models. Sophisticated anatomically based rigs are often the choice of visual effects studios where life-like animation of CG characters is the primary objective. However, rigging a character with a muscular-skeletal system is a very involved and time-consuming process, even for professionals. Although there have been recent research efforts to automate either all or some parts of the rigging process, the complexity of anatomically based rigging nonetheless opens up new research challenges. We propose a new method to automate anatomically based rigging by transferring an existing rig of one character to another. The method is based on data interpolation in the surface and volume domains, where various rigging elements can be transferred between different models. As it only requires a small number of corresponding input feature points, users can produce highly detailed rigs for a variety of desired characters with ease.
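
A minimal sketch of the interpolation idea under stated assumptions: a handful of corresponding feature points on the source and target characters defines a smooth thin-plate-spline space warp, which is then applied to positional rig elements such as joint locations. Transferring the remaining rig data (skinning weights, muscles, constraints) is beyond this sketch, and all coordinates are hypothetical.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Corresponding feature points picked by the user on the two characters.
src_features = np.array([[0, 0, 0], [1, 0, 0], [0, 2, 0], [0, 0, 1],
                         [1, 2, 1], [0.5, 1, 0.5]], dtype=float)
tgt_features = src_features * [1.2, 0.9, 1.1] + [0.0, 0.1, 0.0]

warp = RBFInterpolator(src_features, tgt_features, kernel='thin_plate_spline')

src_joints = np.array([[0.5, 0.5, 0.2], [0.2, 1.5, 0.4]])
tgt_joints = warp(src_joints)     # joint positions mapped onto the target character
```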


Computer Graphics Forum | 2010

A Smoke Visualization Model for Capturing Surface-Like Features

Jinho Park; Yeongho Seol; Frederic Cordier; Junyong Noh

Incense, candle smoke, and cigarette smoke often exhibit smoke flows with a surface‐like appearance. Although delving into well‐known computational fluid dynamics may provide a solution to create such an appearance, we propose a much more efficient alternative that combines a low‐resolution fluid simulation with explicit geometry provided by NURBS surfaces. Among the wide spectrum of fluid simulation methods, our algorithm is specifically tailored to reproduce the semi‐transparent surface look and motion of the smoke. The main idea is that we follow the traces, called streaklines, created by particles advected by the simulation and reconstruct NURBS surfaces passing through them. Then, we render the surfaces by applying an opacity map to each surface, where the opacity map is created by utilizing the smoke density and the characteristics of the surface contour. Augmenting the results of low‐resolution simulations in such a way requires low computational cost and memory usage by design.
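
The sketch below illustrates only the streakline bookkeeping: particles released from a fixed seed are advected through a velocity field (analytic here, a low-resolution simulation in the paper), and a smooth parametric curve is fitted through the recorded positions. A cubic B-spline stands in for the NURBS surfaces, and the opacity-map rendering is omitted.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def velocity(p, t):
    """Hypothetical analytic velocity field standing in for a coarse simulation."""
    x, y, z = p
    return np.array([0.3 * np.sin(2.0 * t + z), 0.8, 0.3 * np.cos(2.0 * t)])

dt, steps = 0.05, 120
seed = np.array([0.0, 0.0, 0.0])
particles = []                                  # one particle released per step
for n in range(steps):
    particles.append(seed.copy())
    particles = [p + dt * velocity(p, n * dt) for p in particles]
streakline = np.array(particles)                # current positions of all particles

# Fit a smooth parametric curve through the streakline samples.
tck, u = splprep(streakline.T, s=1e-3)
smooth = np.array(splev(np.linspace(0, 1, 200), tck)).T
```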


International Conference on Computer Graphics and Interactive Techniques | 2016

SketchiMo: sketch-based motion editing for articulated characters

Byungkuk Choi; Roger Blanco i Ribera; John P. Lewis; Yeongho Seol; Seok-Pyo Hong; Haegwang Eom; Sunjin Jung; Junyong Noh

We present SketchiMo, a novel approach for the expressive editing of articulated character motion. SketchiMo solves for the motion given a set of projective constraints that relate the sketch inputs to the unknown 3D poses. We introduce the concept of sketch space, a contextual geometric representation of sketch targets---motion properties that are editable via sketch input---that enhances, right in the viewport, different aspects of the motion. The combination of the proposed sketch targets and space allows for seamless editing of a wide range of properties, from simple joint trajectories to local parent-child spatiotemporal relationships and more abstract properties such as coordinated motions. This is made possible by interpreting the user's input through a new sketch-based optimization engine in a uniform way. In addition, our view-dependent sketch space also serves the purpose of disambiguating the user inputs by visualizing their range of effect and transparently defining the necessary constraints to set the temporal boundaries for the optimization.
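
To ground the notion of projective constraints, the sketch below solves for a single 3D joint trajectory whose pinhole-camera projection follows a sketched 2D stroke while staying smooth and close to the original motion. The articulated-pose solve, the sketch-space visualization, and the various sketch targets of the actual system are not modeled, and the camera and data are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

f = 500.0                                        # hypothetical focal length (pixels)
def project(P):                                  # simple pinhole camera along +z
    return f * P[:, :2] / P[:, 2:3]

T = 30
orig = np.column_stack([np.linspace(-1, 1, T), np.zeros(T), np.full(T, 5.0)])
sketch2d = project(orig) + np.array([40.0, 20.0])          # the user's 2D stroke

def residuals(x, w_sketch=1.0, w_smooth=5.0, w_orig=0.1):
    P = x.reshape(T, 3)
    r_sketch = (project(P) - sketch2d).ravel()              # follow the stroke
    r_smooth = (P[2:] - 2 * P[1:-1] + P[:-2]).ravel()       # keep accelerations small
    r_orig = (P - orig).ravel()                             # stay near the original
    return np.concatenate([w_sketch * r_sketch,
                           w_smooth * r_smooth,
                           w_orig * r_orig])

edited = least_squares(residuals, orig.ravel()).x.reshape(T, 3)
```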


Motion in Games | 2012

Realtime Performance Animation Using Sparse 3D Motion Sensors

Jongmin Kim; Yeongho Seol; Jehee Lee

This paper presents a realtime performance animation system that reproduces full-body character animation based on sparse 3D motion sensors on the performer. Producing faithful character animation from this setting is a mathematically ill-posed problem because input data from the sensors is not sufficient to determine the full degrees of freedom of a character. Given the input data from 3D motion sensors, we pick similar poses from the motion database and build an online local model that transforms the low-dimensional input signal into a high-dimensional character pose. Kernel CCA (Canonical Correlation Analysis)-based regression is employed as the model, which effectively covers a wide range of motion. Examples show that various human motions are naturally reproduced by our method.
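
This abstract describes the same sensor-driven system as the journal article above, so the sketch here focuses on the complementary piece, the online local model: for each incoming sensor frame, the nearest database examples are gathered and a small model fitted on the fly lifts the sensor signal to a full-body pose. Ordinary least squares stands in for the kernel CCA regression used in the paper, and the data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
pose_db = rng.normal(size=(2000, 60))                             # motion database
sensor_db = pose_db[:, :12] + 0.05 * rng.normal(size=(2000, 12))  # paired sensor data

def reconstruct(sensor_frame, k=32):
    # 1. pick the k most similar database examples (the local neighborhood)
    idx = np.argsort(np.linalg.norm(sensor_db - sensor_frame, axis=1))[:k]
    # 2. fit a local linear lift from sensor space to pose space on the fly
    X = np.hstack([sensor_db[idx], np.ones((k, 1))])              # bias column
    W, *_ = np.linalg.lstsq(X, pose_db[idx], rcond=None)
    # 3. apply it to the current frame
    return np.append(sensor_frame, 1.0) @ W
```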

Collaboration


Dive into Yeongho Seol's collaborations.

Top Co-Authors

Jehee Lee
Seoul National University

John P. Lewis
University of Southern California

Jongmin Kim
Seoul National University