Ji-yong Kwon
Yonsei University
Publication
Featured research published by Ji-yong Kwon.
The Visual Computer | 2008
Ji-yong Kwon; In-Kwon Lee
We propose a method to determine camera parameters for character motion that takes the motion itself into account. The basic idea is to approximately compute the area swept by the character's links when they are orthogonally projected onto the image plane, which we call the "motion area". Using the motion area, we can determine good fixed camera parameters and camera paths for a given character motion in off-line or real-time camera control. In our experimental results, we demonstrate that our camera path generation algorithms compute a smooth camera path while the camera effectively displays the dynamic features of the character motion. Our methods can easily be combined with methods for generating occlusion-free camera paths. We expect that our methods can also be used by general camera planning methods as one of the heuristics for measuring the visual quality of scenes that include dynamically moving characters.
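The paper does not include source code, but the core quantity is easy to prototype. The sketch below is our own illustration, not the authors' implementation: it approximates the motion area as the sum of screen-space quadrilaterals swept by each projected link between consecutive frames, and all function and parameter names are assumptions.

```python
import numpy as np

def project_orthographic(points_3d, view_matrix):
    """Orthographically project 3D joint positions onto the image (x, y) plane.
    `view_matrix` is a 3x3 rotation taking world coordinates into camera coordinates."""
    cam = points_3d @ view_matrix.T
    return cam[..., :2]  # drop the depth component

def quad_area(a0, b0, a1, b1):
    """Area of the quadrilateral swept by a link (a, b) between two frames,
    approximated with the shoelace formula on the polygon a0-b0-b1-a1."""
    poly = np.array([a0, b0, b1, a1])
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def motion_area(joint_positions, links, view_matrix):
    """Approximate 'motion area': the screen-space area swept by all links over
    the whole motion. `joint_positions` has shape (frames, joints, 3) and
    `links` is a list of (parent, child) joint-index pairs."""
    proj = project_orthographic(joint_positions, view_matrix)
    total = 0.0
    for f in range(len(proj) - 1):
        for (i, j) in links:
            total += quad_area(proj[f, i], proj[f, j], proj[f + 1, i], proj[f + 1, j])
    return total
```

A camera search could then evaluate this score for candidate view directions and prefer views that maximize the motion area, which is the kind of heuristic use the abstract describes.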
pacific conference on computer graphics and applications | 2007
Ji-yong Kwon; In-Kwon Lee
Efficient numerical techniques developed in the field of computer graphics can produce compellingly realistic simulations of interactions between solids and fluids. These techniques are less used for engineering applications because of errors inherent in their systematic approximations. The errors, which manifest as excessive damping, can change the nature of steady-state fields and invalidate typical engineering static flow analysis. On the other hand, near-body solid/fluid interactions are affected to a lesser degree, since errors originating from dissipation are most severe when accumulated over time. Although it is generally not possible to numerically validate unsteady, high-velocity vector fields, our investigations show that near-body solid/fluid dynamics converges with respect to parametric refinements. Though far from a proof of correctness, the numerical convergence of near-body quantities suggests that the method is applicable to certain classes of analysis and visualization where near-body characteristics are of greater concern. This article applies the semi-Lagrangian stable fluids method to biomechanical hydrodynamics visualization. The near-body surface dynamics provide meaningful information for rendering visuals that convey the streamlined flow characteristics surrounding the body. The techniques are applied to the visualization of active and passive resistive forces on the body in a video-based capture of an immersed dolphin kick.
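As a rough illustration of the semi-Lagrangian step at the heart of the stable fluids method, the sketch below performs one advection step on a 2D grid by tracing each cell backward along the velocity field and sampling the previous field there. This is a generic textbook version, not the article's implementation; the grid layout and names are assumptions.

```python
import numpy as np

def advect_semi_lagrangian(field, vel_x, vel_y, dt):
    """One unconditionally stable semi-Lagrangian advection step on a 2D grid.
    `field`, `vel_x`, `vel_y` are (ny, nx) arrays; velocities are in cells per unit time."""
    ny, nx = field.shape
    ys, xs = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")

    # Trace each cell centre backward along the velocity field.
    x_back = np.clip(xs - dt * vel_x, 0, nx - 1)
    y_back = np.clip(ys - dt * vel_y, 0, ny - 1)

    # Bilinear interpolation of the previous field at the back-traced positions.
    x0 = np.floor(x_back).astype(int); x1 = np.minimum(x0 + 1, nx - 1)
    y0 = np.floor(y_back).astype(int); y1 = np.minimum(y0 + 1, ny - 1)
    tx, ty = x_back - x0, y_back - y0

    top = (1 - tx) * field[y0, x0] + tx * field[y0, x1]
    bot = (1 - tx) * field[y1, x0] + tx * field[y1, x1]
    return (1 - ty) * top + ty * bot
```

The excessive numerical damping mentioned in the abstract is a known side effect of exactly this interpolation step, which is why the article focuses on near-body quantities where the accumulated dissipation matters less.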
Computer Graphics Forum | 2008
Ji-yong Kwon; In-Kwon Lee
Motion capture cannot generate cartoon-style animation directly. We emulate the rubber-like exaggerations common in traditional character animation as a means of converting motion capture data into cartoon-like movement. We achieve this using trajectory-based motion exaggeration while allowing the violation of link-length constraints. We extend this technique to obtain smooth, rubber-like motion by dividing the original links into shorter sub-links and computing the positions of joints using Bézier curve interpolation and a mass-spring simulation. This method is fast enough to be used in real time.
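To make the sub-link idea concrete, the following sketch shows one way a straight link could be subdivided so that its intermediate joints lie on a Bézier curve and the limb can bend like rubber. The quadratic curve, the control-point offset, and all names are our assumptions; the paper additionally uses a mass-spring simulation, which is omitted here.

```python
import numpy as np

def quadratic_bezier(p0, p1, p2, t):
    """Evaluate a quadratic Bezier curve at parameter t in [0, 1]."""
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

def bend_link(joint_a, joint_b, control_offset, n_sublinks):
    """Replace the straight link joint_a -> joint_b with n_sublinks segments whose
    intermediate joints lie on a Bezier curve, letting the limb bend flexibly.
    `control_offset` displaces the middle control point away from the straight link."""
    mid = 0.5 * (joint_a + joint_b) + control_offset
    ts = np.linspace(0.0, 1.0, n_sublinks + 1)
    return np.array([quadratic_bezier(joint_a, mid, joint_b, t) for t in ts])
```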
IEEE Transactions on Visualization and Computer Graphics | 2012
Ji-yong Kwon; In-Kwon Lee
Squash and stretch conveys the rigidity of a character and is one of the most important techniques in traditional cartoon animation. In this paper, we introduce a method that applies the squash-and-stretch effect to character motion. Our method exaggerates the motion by sequentially applying a spatial exaggeration technique and a temporal exaggeration technique. The spatial exaggeration technique globally deforms a pose into a squashed or stretched one by modeling the pose as a covariance matrix of joint positions. The temporal exaggeration technique then computes a time-warping function for each joint and applies it to the joint's position, allowing the character to stretch its links appropriately. The motion stylized by our method is a sequence of squashed and stretched poses with stretching limbs. A user survey shows that motion created with our method resembles that used in 2D cartoon animation and is perceived as funnier than the original motion by observers who are familiar with 2D cartoon animation.
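A minimal sketch of the covariance idea for the spatial step is given below: the pose is rescaled along the principal axes of its joint-position covariance matrix. The scale factors, the volume-preservation heuristic, and the names are our assumptions, not the paper's formulation.

```python
import numpy as np

def exaggerate_pose(joints, stretch=1.4, squash=None):
    """Globally squash or stretch a pose by rescaling it along the principal axes
    of the joint-position covariance matrix. `joints` has shape (n_joints, 3).
    Volume is roughly preserved when squash = 1 / sqrt(stretch)."""
    if squash is None:
        squash = 1.0 / np.sqrt(stretch)
    centroid = joints.mean(axis=0)
    centered = joints - centroid

    # Principal axes of the pose from the covariance of joint positions.
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    scales = np.array([squash, squash, stretch])  # stretch the dominant axis only

    local = centered @ eigvecs                    # coordinates in the principal frame
    return centroid + (local * scales) @ eigvecs.T
```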
human factors in computing systems | 2009
Hyunju Kim; Min-Joon Yoo; Ji-yong Kwon; In-Kwon Lee
In this paper, we discuss the generation of icons that represent the emotion expressed in a piece of music. We use an emotion plane to connect the music with the icon shape affectively. A model that projects arbitrary music onto the plane is introduced, using the results of a user survey and various features of the audio signal. Icon shapes are also placed on the plane based on the results of the user survey. The icon shape for an input piece of music is obtained by blending the neighboring icon shapes around the music's point on the emotion plane. Using this method, one can easily guess the emotion of a piece of music from the corresponding icon shape and find the music he or she wants.
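The blending step could look roughly like the sketch below, which weights the nearest icon shapes on the 2D emotion plane by inverse distance and averages their outline vertices. The choice of k nearest neighbors, the inverse-distance weights, and the shape representation are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def blend_icon_shape(music_point, icon_points, icon_shapes, k=3, eps=1e-6):
    """Blend the k icon shapes nearest to `music_point` on the 2D emotion plane,
    using inverse-distance weights. `icon_points` is (n, 2); `icon_shapes` is
    (n, m, 2), each icon being m outline vertices in corresponding order."""
    dists = np.linalg.norm(icon_points - music_point, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + eps)
    weights /= weights.sum()
    # Weighted average of the outline vertices of the neighbouring icons.
    return np.tensordot(weights, icon_shapes[nearest], axes=1)
```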
Graphical Models / Graphical Models and Image Processing / Computer Vision, Graphics, and Image Processing | 2011
Ji-yong Kwon; In-Kwon Lee
In this paper, we introduce a method that endows a given animation signal with slow-in and slow-out effects by using a bilateral filter scheme. By modifying the equation of the bilateral filter, the method reparameterizes the original animation trajectory. This holds the extreme poses of the original animation trajectory for longer, without distorting or losing any of the information in the original animation path. Our method successfully enhances slow-in and slow-out effects for several different types of animation data: keyframe and hand-drawn trajectory animation, motion capture data, and physically based animation produced by a rigid body simulation system.
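The paper's exact formulation is not reproduced here; the sketch below only illustrates how a bilateral-filter-style weighting could warp the timeline so that frames near slowly changing (extreme) poses are held longer. The kernel widths, the monotonicity fix-up, and the resampling step are our assumptions.

```python
import numpy as np

def slow_in_slow_out_warp(times, poses, sigma_t=0.2, sigma_r=0.5):
    """Bilateral-filter-style time reparameterization (illustrative sketch).
    `times` is (n,), `poses` is (n, d). Each frame's time is replaced by a
    bilateral average of its neighbours' times: near an extreme pose the range
    weights stay high over a wide window, so the warped times bunch up there."""
    n = len(times)
    warped = np.empty(n)
    for i in range(n):
        dt = times - times[i]
        dp = np.linalg.norm(poses - poses[i], axis=1)
        w = np.exp(-0.5 * (dt / sigma_t) ** 2) * np.exp(-0.5 * (dp / sigma_r) ** 2)
        warped[i] = np.dot(w, times) / w.sum()
    # Keep the warp monotone so frames never reorder.
    return np.maximum.accumulate(warped)

# Usage sketch: resample each channel of the original trajectory at the warped
# times. Because warped times bunch up where the pose changes slowly, the
# resampled motion lingers on extreme poses:
#   new_pose_d = np.interp(warped_times, times, poses[:, d])   # for each channel d
```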
international conference on computer graphics and interactive techniques | 2009
Ji-yong Kwon; In-Kwon Lee
Realistic motion data, such as motion capture data and physically based motion, are not well suited to cartoon animation, in which the narrative is emphasized by exaggerated physical motion. Recently, several methods have been developed for converting an input motion into a cartoon-style motion. Wang et al. [2006] proposed an innovative method that generates anticipation and follow-through effects through convolution with a Laplacian-of-Gaussian (LoG) kernel. Inspired by their work, we propose a new way of superimposing non-rigid squash-and-stretch effects [Lasseter 1987] on the motion of a rigid linkage. Our key idea is to apply a time-shift filter to the position data of each joint of a skeletal character individually, in order to allow the cartoon character to undergo squash and stretch.
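One way to picture the per-joint time shift is sketched below: each joint's position track is shifted in time, with joints deeper in the skeleton hierarchy lagging further behind, so limbs stretch when the body accelerates and squash when it stops. The lag schedule and all names are illustrative assumptions, not the paper's filter.

```python
import numpy as np

def time_shift_joint(positions, shift):
    """Shift one joint's position track in time by `shift` frames (may be fractional).
    `positions` has shape (n_frames, 3); the ends are clamped."""
    n = positions.shape[0]
    src = np.clip(np.arange(n) - shift, 0, n - 1)
    return np.stack([np.interp(src, np.arange(n), positions[:, d]) for d in range(3)], axis=1)

def apply_time_shift_filter(joint_positions, joint_depths, lag_per_level=1.5):
    """Apply a per-joint time shift: joints deeper in the hierarchy lag more.
    `joint_positions` is (n_frames, n_joints, 3); `joint_depths` gives each
    joint's depth from the root."""
    out = joint_positions.copy()
    for j, depth in enumerate(joint_depths):
        out[:, j] = time_shift_joint(joint_positions[:, j], depth * lag_per_level)
    return out
```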
international symposium on ubiquitous virtual reality | 2011
Ji-yong Kwon; In-Kwon Lee
Three different types of cartoon-like stylization methods are introduced. First, we introduce a rubber-like exaggeration method that allows a virtual character to stretch and bend its limbs flexibly by subdividing the joint hierarchy. Second, a slow-in and slow-out filter based on the time-shift filter is introduced, which gives a slow-in and slow-out effect to a character's motion. Finally, we briefly introduce an optimization-based method that exaggerates the motion both spatially and temporally in order to produce a good squash-and-stretch effect.
international conference on computer graphics and interactive techniques | 2011
Ji-yong Kwon; In-Kwon Lee
Video composition is an indispensable technique in the production of many types of videos; nevertheless, it remains a challenging problem when the source video is captured without a blue or green screen, especially when the object to be pasted between videos does not have a clear boundary (e.g. water, gas, and fire). We call such a region a secondary foreground, and we propose a simple composition method in which foreground, background, and secondary foreground weights are determined using a geodesic distance transform [Criminisi et al. 2010]. The method runs in real time, yet is able to produce convincing results with difficult secondary foregrounds.
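A slow but straightforward way to compute such weights is sketched below: a geodesic distance from each set of seed pixels, with path cost growing where the image changes colour, turned into normalized per-region weights. The cost function, the inverse-distance weighting, and the seed interface are our assumptions; a real-time version would use a fast raster-scan distance transform rather than Dijkstra.

```python
import heapq
import numpy as np

def geodesic_distance(image, seed_mask, gamma=10.0):
    """Geodesic distance from seed pixels, where path cost grows with colour change
    along the path (Dijkstra on the 4-connected pixel grid). `image` is (h, w, 3)
    float, `seed_mask` is a boolean (h, w) array of scribbled seeds."""
    h, w = seed_mask.shape
    dist = np.full((h, w), np.inf)
    heap = [(0.0, y, x) for y, x in zip(*np.nonzero(seed_mask))]
    for _, y, x in heap:
        dist[y, x] = 0.0
    heapq.heapify(heap)
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y, x]:
            continue
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                step = 1.0 + gamma * np.linalg.norm(image[ny, nx] - image[y, x])
                if d + step < dist[ny, nx]:
                    dist[ny, nx] = d + step
                    heapq.heappush(heap, (d + step, ny, nx))
    return dist

def soft_region_weights(image, fg_seeds, bg_seeds, sec_seeds):
    """Per-pixel weights for foreground, background and secondary foreground,
    derived from normalized geodesic affinities to each set of seeds."""
    dists = np.stack([geodesic_distance(image, s) for s in (fg_seeds, bg_seeds, sec_seeds)])
    affinity = 1.0 / (dists + 1e-6)
    return affinity / affinity.sum(axis=0)
```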
Science in China Series F: Information Sciences | 2011
Ji-yong Kwon; In-Kwon Lee
In this paper, we introduce an interface for motion editing that visualizes the editing procedure. For intuitive understanding and construction of motion editing procedures, our system represents each editing element that produces a modified motion from a given input motion as a graphical node, and enables the user to construct a whole motion editing procedure by connecting nodes with a few mouse interactions. The system provides both a 3D transform manipulator and a time-line slider for intuitive control of each node's parameters. User evaluation results show that our system is easy and intuitive for users to edit motion sequences with.
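The underlying structure can be thought of as a small node graph in which each node maps an input motion to a modified motion. The sketch below is only a data-structure illustration under our own assumptions (the paper describes an interactive interface, not an API); the node names and edit operations are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

# A motion is modelled here simply as a list of poses; each editing node wraps a
# function that maps an input motion to a modified motion.
Motion = List[list]

@dataclass
class EditNode:
    name: str
    operation: Callable[[Motion], Motion]
    upstream: Optional["EditNode"] = None  # node whose output feeds this node

    def evaluate(self, source: Motion) -> Motion:
        # Pull the motion through the upstream node first, then apply this edit.
        motion = self.upstream.evaluate(source) if self.upstream else source
        return self.operation(motion)

# Usage sketch: chain two hypothetical editing nodes and evaluate the procedure.
trim = EditNode("trim", lambda m: m[10:-10])
reverse = EditNode("reverse", lambda m: m[::-1], upstream=trim)
# edited = reverse.evaluate(raw_motion)
```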