
Publications

Featured research published by Junyong Noh.


ACM Transactions on Graphics | 2012

Spacetime expression cloning for blendshapes

Yeongho Seol; John P. Lewis; Jaewoo Seo; Byungkuk Choi; Ken Anjyo; Junyong Noh

The goal of a practical facial animation retargeting system is to reproduce the character of a source animation on a target face while providing room for additional creative control by the animator. This article presents a novel spacetime facial animation retargeting method for blendshape face models. Our approach starts from the basic principle that the source and target movements should be similar. By interpreting movement as the derivative of position with time, and adding suitable boundary conditions, we formulate the retargeting problem as a Poisson equation. Specified (e.g., neutral) expressions at the beginning and end of the animation as well as any user-specified constraints in the middle of the animation serve as boundary conditions. In addition, a model-specific prior is constructed to represent the plausible expression space of the target face during retargeting. A Bayesian formulation is then employed to produce target animation that is consistent with the source movements while satisfying the prior constraints. Since the preservation of temporal derivatives is the primary goal of the optimization, the retargeted motion preserves the rhythm and character of the source movement and is free of temporal jitter. More importantly, our approach provides spacetime editing for the popular blendshape representation of facial models, exhibiting smooth and controlled propagation of user edits across surrounding frames.
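
The temporal Poisson formulation above can be illustrated in one dimension: solve for a target curve whose frame-to-frame second differences match the source's, subject to user-specified values at the first and last frames. This is a minimal sketch of the spacetime idea only, not the paper's full Bayesian formulation with expression priors.

```python
import numpy as np

def retarget_spacetime(source, x0, xN):
    """Find a target curve whose second differences (accelerations) match
    the source's, with Dirichlet boundary conditions at the first and
    last frames -- a 1D temporal Poisson problem."""
    n = len(source)
    # second differences of the source act as the Poisson right-hand side
    b = source[2:] - 2.0 * source[1:-1] + source[:-2]
    # tridiagonal Laplacian over the n-2 interior frames
    A = (np.diag(-2.0 * np.ones(n - 2))
         + np.diag(np.ones(n - 3), 1)
         + np.diag(np.ones(n - 3), -1))
    rhs = b.copy()
    rhs[0] -= x0          # fold known boundary values into the RHS
    rhs[-1] -= xN
    interior = np.linalg.solve(A, rhs)
    return np.concatenate([[x0], interior, [xN]])
```

Shifting both boundary values by a constant shifts the whole retargeted curve rigidly, which is exactly the "preserve the rhythm, change the pose" behavior the abstract describes.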


Computer Graphics Forum | 2010

A Hybrid Approach to Multiple Fluid Simulation using Volume Fractions

Nahyup Kang; Jinho Park; Junyong Noh; Sung Yong Shin

This paper presents a hybrid approach to multiple fluid simulation that can handle miscible and immiscible fluids simultaneously. We combine distance functions and volume fractions to capture not only the discontinuous interface between immiscible fluids but also the smooth transition between miscible fluids. Our approach consists of four steps: velocity field computation, volume fraction advection, miscible fluid diffusion, and visualization. By providing a scheme that combines volume fractions and level set functions, we are able to take advantage of both representations of fluids. From the system point of view, our work is the first approach to Eulerian grid-based multiple fluid simulation that includes both miscible and immiscible fluids. From the technical point of view, our approach addresses the issues arising from variable density and viscosity together with material diffusion. We show the effectiveness of our approach in handling multiple miscible and immiscible fluids through experiments.
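
The miscible-diffusion step can be sketched as explicit diffusion of a volume-fraction field; mass is conserved and fractions stay in [0, 1]. The grid, coefficient, and time step below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def diffuse_fractions(f, kappa, dt, dx, steps):
    """Explicit diffusion of a volume-fraction field on a periodic 1D grid,
    a minimal sketch of a miscible-mixing step.
    Stability requires kappa * dt / dx**2 <= 0.5."""
    f = f.astype(float).copy()
    r = kappa * dt / dx**2
    for _ in range(steps):
        # discrete Laplacian with periodic boundaries
        lap = np.roll(f, -1) - 2.0 * f + np.roll(f, 1)
        f += r * lap
    return f
```

Each update is a convex combination of neighboring cells, so the total fraction is conserved exactly while the sharp interface smears into the smooth transition expected between miscible fluids.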


International Conference on Computer Graphics and Interactive Techniques | 2011

Compression and direct manipulation of complex blendshape models

Jaewoo Seo; Geoffrey Irving; John P. Lewis; Junyong Noh

We present a method to compress complex blendshape models and thereby enable interactive, hardware-accelerated animation of these models. Facial blendshape models in production are typically large in terms of both the resolution of the model and the number of target shapes. They are represented by a single huge blendshape matrix, whose size presents a storage burden and prevents real-time processing. To address this problem, we present a new matrix compression scheme based on a hierarchically semi-separable (HSS) representation with matrix block reordering. The compressed data are also suitable for parallel processing. An efficient GPU implementation provides very fast feedback of the resulting animation. Compared with the original data, our technique leads to a huge improvement in both storage and processing efficiency without incurring any visual artifacts. As an application, we introduce an extended version of the direct manipulation method to control a large number of facial blendshapes efficiently and intuitively.
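
The storage/evaluation trade-off of factoring the blendshape matrix can be shown with a plain truncated SVD; this is a deliberately simplified stand-in for the paper's hierarchically semi-separable (HSS) representation, which additionally reorders matrix blocks and compresses hierarchically.

```python
import numpy as np

def compress_blendshapes(B, rank):
    """Factor the (vertices x targets) blendshape matrix B into two thin
    factors via truncated SVD, so B ~ U_r @ V_r. A stand-in illustration
    for the paper's HSS compression, not the actual scheme."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]     # fold singular values into U
    V_r = Vt[:rank, :]
    return U_r, V_r

def evaluate(U_r, V_r, weights):
    """Deformed vertex data = compressed matrix applied to blendshape
    weights; two thin products instead of one huge one."""
    return U_r @ (V_r @ weights)
```

For m vertices, n targets, and rank r, storage drops from m*n to r*(m+n), and per-frame evaluation cost drops accordingly; the same idea maps naturally to the GPU.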


International Conference on Computer Graphics and Interactive Techniques | 2016

Rich360: optimized spherical representation from structured panoramic camera arrays

Jungjin Lee; Bumki Kim; Kyehyun Kim; Young Hui Kim; Junyong Noh

This paper presents Rich360, a novel system for creating and viewing a 360° panoramic video obtained from multiple cameras placed on a structured rig. Rich360 provides an as-rich-as-possible 360° viewing experience by effectively resolving two issues that occur in the existing pipeline. First, a deformable spherical projection surface is utilized to minimize the parallax from multiple cameras. The surface is deformed spatio-temporally according to the depth constraints estimated from the overlapping video regions. This enables fast and efficient parallax-free stitching independent of the number of views. Next, a non-uniform spherical ray sampling is performed. The density of the sampling varies depending on the importance of the image region. Finally, for interactive viewing, the non-uniformly sampled video is mapped onto a uniform viewing sphere using a UV map. This approach can preserve the richness of the input videos when the resolution of the final 360° panoramic video is smaller than the overall resolution of the input videos, which is the case for most 360° panoramic videos. We show various results from Rich360 to demonstrate the richness of the output video and the advancement in the stitching results.
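
The non-uniform ray sampling can be illustrated in one dimension: place sample positions with density proportional to an importance map via inverse-CDF sampling. This is a hypothetical 1D reduction of the spherical sampling idea; the function name and parameters are assumptions.

```python
import numpy as np

def nonuniform_samples(importance, n):
    """Place n sample positions along [0, 1] with density proportional
    to an importance map, via inverse-CDF sampling: important regions
    receive more samples, preserving detail there."""
    cdf = np.cumsum(importance, dtype=float)
    cdf /= cdf[-1]                               # normalized CDF
    targets = (np.arange(n) + 0.5) / n           # equally spaced quantiles
    grid = np.linspace(0.0, 1.0, len(importance))
    return np.interp(targets, cdf, grid)         # invert the CDF
```

A uniform importance map yields evenly spaced samples; a peaked map concentrates samples near the peak, which is how the system keeps richness where the input imagery has it.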


ACM Transactions on Graphics | 2013

Data-driven control of flapping flight

Eunjung Ju; Jungdam Won; Jehee Lee; Byungkuk Choi; Junyong Noh; Min Gyu Choi

We present a physically based controller that simulates the flapping behavior of a bird in flight. We recorded the motion of a dove using marker-based optical motion capture and high-speed video cameras. The bird flight data thus acquired allow us to parameterize natural wingbeat cycles and provide the simulated bird with reference trajectories to track in physics simulation. Our controller simulates articulated rigid bodies of a bird's skeleton and deformable feathers to reproduce the aerodynamics of bird flight. Motion capture from live birds is not as easy as human motion capture because of the lack of cooperation from subjects. Therefore, the flight data we could acquire were limited. We developed a new method to learn wingbeat controllers even from sparse, biased observations of real bird flight. Our simulated bird imitates life-like flapping of a flying bird while actively maintaining its balance. The bird flight is interactively controllable and resilient to external disturbances.


Computer Animation and Virtual Worlds | 2011

Characteristic facial retargeting

Jaewon Song; Byungkuk Choi; Yeongho Seol; Junyong Noh

Facial motion retargeting has been developed mainly in the direction of representing high fidelity between a source and a target model. We present a novel facial motion retargeting method that properly regards the significant characteristics of the target face model. We focus on stylistic facial shapes and timings that reveal the individuality of the target model well after the retargeting process is finished. The method works with a range of expression pairs between the source and the target facial expressions and emotional sequence pairs of the source and the target facial motions. We first construct a prediction model to place semantically corresponding facial shapes. Our hybrid retargeting model, which combines radial basis function (RBF) and kernel canonical correlation analysis (kCCA)-based regression methods, copes well with new input source motions without visual artifacts. 1D Laplacian motion warping follows the shape retargeting process, replacing stylistically important emotional sequences and thus representing the characteristics of the target face.
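
The RBF half of the hybrid model can be sketched as Gaussian-RBF interpolation from source expression parameters to target shapes, one training pair per row. The kernel width `eps` is an assumed illustrative parameter, and the kCCA component is omitted.

```python
import numpy as np

def rbf_fit(X, Y, eps=1.0):
    """Fit a Gaussian-RBF mapping from source expression parameters X
    (rows) to target shapes Y. Solves K @ W = Y for the kernel weights,
    so the mapping interpolates every training pair exactly."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-eps * d2)
    W = np.linalg.solve(K, Y)
    return X, W, eps

def rbf_predict(model, x):
    """Evaluate the fitted mapping at a new source expression x."""
    Xc, W, eps = model
    d2 = ((x[None, :] - Xc) ** 2).sum(-1)
    return np.exp(-eps * d2) @ W
```

Because the mapping interpolates the training pairs, each hand-authored source/target expression pair is reproduced exactly, while new source expressions are mapped by smooth blending of nearby pairs.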


The Visual Computer | 2010

Multilevel vorticity confinement for water turbulence simulation

Taekwon Jang; Heeyoung Kim; Jinhyuk Bae; Jaewoo Seo; Junyong Noh

Physically based fluid simulation can provide realism, but simulating water turbulence remains challenging. Recently, there has been much work on gas turbulence, but these algorithms mostly rely on the Kolmogorov theory, which is not directly applicable to water turbulence simulation. This paper presents a novel technique for simulating water turbulence. We show that sub-grid turbulence can be created by employing a flow-scale separation technique. We adopted the multi-scale flow separation method to derive a special small-scale equation. Small-scale velocities are then generated and manipulated by the equation. To simulate the turbulence effect, this work employed the vorticity confinement method. By extending the original method to multiple levels, we effectively simulate energy cascading effects.
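
The single-level building block, Fedkiw-style vorticity confinement on a 2D grid, can be sketched as below; the paper's contribution is applying it across multiple levels. The confinement strength `eps` and the periodic central differences are illustrative assumptions.

```python
import numpy as np

def vorticity_confinement(u, v, dx, eps):
    """Single-level 2D vorticity confinement force: amplify existing
    swirl by pushing toward local vorticity maxima. Returns (fx, fy)."""
    def ddx(f): return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * dx)
    def ddy(f): return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * dx)
    w = ddx(v) - ddy(u)                        # scalar vorticity (z-component)
    gx, gy = ddx(np.abs(w)), ddy(np.abs(w))    # gradient of |w|
    mag = np.sqrt(gx**2 + gy**2) + 1e-12       # avoid division by zero
    nx, ny = gx / mag, gy / mag                # N: unit gradient direction
    # f = eps * dx * (N x w), restoring small-scale swirl lost to dissipation
    return eps * dx * ny * w, -eps * dx * nx * w
```

Running this on progressively coarser copies of the velocity field and summing the forces is, roughly, the multilevel extension the abstract describes: each level reinforces swirl at its own spatial scale.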


The Visual Computer | 2012

Weighted pose space editing for facial animation

Yeongho Seol; Jaewoo Seo; Paul Hyunjin Kim; J. P. Lewis; Junyong Noh

Blendshapes are the most commonly used approach to realistic facial animation in production. A blendshape model typically begins with a relatively small number of blendshape targets reflecting major muscles or expressions. However, the majority of the effort in constructing a production quality model occurs in the subsequent addition of targets needed to reproduce various subtle expressions and correct for the effects of various shapes in combination. To make this subsequent modeling process much more efficient, we present a novel editing method that removes the need for much of the iterative trial-and-error decomposition of an expression into targets. Isolated problematic frames of an animation are re-sculpted as desired and used as training for a nonparametric regression that associates these shapes with the underlying blendshape weights. Using this technique, the artist’s correction to a problematic expression is automatically applied to similar expressions in an entire sequence, and indeed to all future sequences. The extent and falloff of editing is controllable and the effect is continuously propagated to all similar expressions. In addition, we present a search scheme that allows effective reuse of pre-sculpted editing examples. Our system greatly reduces time and effort required by animators to create high quality facial animations.


Computer Animation and Virtual Worlds | 2013

A heterogeneous CPU–GPU parallel approach to a multigrid Poisson solver for incompressible fluid simulation

Hwi-ryong Jung; Sun-Tae Kim; Junyong Noh; Jeong-Mo Hong

One of the major obstacles in incompressible fluid simulations is the projection step that enforces zero divergence of the velocity field. We propose a novel heterogeneous CPU–GPU parallel multigrid Poisson solver that decomposes the high-frequency components of the residual field using a wavelet decomposition and conducts an additional smoothing process on them, using the CPU, while the GPU is performing projection at the coarsest level. In example animations of smoke and turbulent flow with thermal buoyancy, this additional smoothing improves the accuracy of the parallel multigrid Poisson solver in a single multigrid cycle and reduces the number of multigrid cycles required to reach a specified accuracy.
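
Two of the basic multigrid ingredients mentioned here, smoothing and the residual, can be sketched for a 1D Poisson problem (grid spacing 1, Dirichlet boundaries); the wavelet split and CPU–GPU scheduling of the paper are beyond this sketch.

```python
import numpy as np

def jacobi_smooth(p, rhs, iters):
    """Weighted-Jacobi smoothing for the 1D Poisson problem p'' = rhs,
    the basic smoother inside a multigrid V-cycle. Boundary values of p
    are held fixed; omega = 2/3 is the classic damping weight."""
    omega = 2.0 / 3.0
    for _ in range(iters):
        p_new = p.copy()
        p_new[1:-1] = 0.5 * (p[:-2] + p[2:] - rhs[1:-1])
        p = (1.0 - omega) * p + omega * p_new
    return p

def residual(p, rhs):
    """Residual rhs - A p of the discrete Poisson operator; multigrid
    restricts this to a coarser grid and solves for a correction."""
    r = np.zeros_like(p)
    r[1:-1] = rhs[1:-1] - (p[:-2] - 2.0 * p[1:-1] + p[2:])
    return r
```

The smoother damps high-frequency error quickly but low-frequency error slowly, which is precisely why the coarse-grid correction, and in the paper the extra CPU-side smoothing of high-frequency residual components, pays off.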


Computer Graphics Forum | 2012

Video Panorama for 2D to 3D Conversion

Roger Blanco i Ribera; Sungwoo Choi; Younghui Kim; Jungjin Lee; Junyong Noh

Accurate depth estimation is a challenging, yet essential step in the conversion of a 2D image sequence to a 3D stereo sequence. We present a novel approach to construct a temporally coherent depth map for each image in a sequence. The quality of the estimated depth is high enough for the purpose of 2D-to-3D stereo conversion. Our approach first combines the video sequence into a panoramic image. A user can scribble on this single panoramic image to specify depth information. The depth is then propagated to the remainder of the panoramic image. This depth map is then remapped to the original sequence and used as the initial guess for each individual depth map in the sequence. Our approach greatly simplifies the required user interaction during the assignment of the depth and allows for relatively free camera movement during the generation of a panoramic image. We demonstrate the effectiveness of our method by showing stereo converted sequences with various camera motions.
