Won-Sook Lee
University of Ottawa
Publication
Featured research published by Won-Sook Lee.
Image and Vision Computing | 2000
Won-Sook Lee; Nadia Magnenat-Thalmann
This paper describes an efficient method to make individual faces for animation from several possible inputs. We present a method to reconstruct a three-dimensional (3D) facial model for animation from two orthogonal pictures taken from the front and side views, or from range data obtained from any available resource. It is based on extracting features of a face in a semiautomatic way and modifying a generic model with the detected feature points; fine modifications follow if range data are available. Automatic texture mapping is employed using an image composed from the two pictures. The reconstructed 3D face can be animated immediately with given expression parameters. Several faces obtained by applying the one methodology to different input data to produce a final animatable face are illustrated.
Computer Graphics Forum | 2000
Won-Sook Lee; Jin Gu; Nadia Magnenat-Thalmann
We present an easy, practical and efficient full-body cloning methodology. The system uses photos taken from the front, side and back of a person in any imaging environment, without requiring a special background or controlled illumination. A seamless generic body specified in the VRML H-Anim 1.1 format is used to generate an individualized virtual human. The system is composed of two major components: face cloning and body cloning. The face-cloning component uses feature points on the front and side images and then applies DFFD for shape modification; a fully automatic seamless texture mapping is then generated for 360° coloring of the 3D polygonal model. The body-cloning component has two steps: (i) feature-point specification, which enables automatic silhouette detection against an arbitrary background, and (ii) two-stage body modification using the feature points and the body silhouette respectively. The final integrated human model has a photo-realistic, animatable face, hands, feet and body. The result can be visualized in any VRML-compliant browser.
Proceedings Computer Animation 1999 | 1999
Won-Sook Lee; Marc Escher; Gael Sannier; Nadia Magnenat-Thalmann
MPEG-4 is scheduled to become an international standard in March 1999. This paper demonstrates an experiment with a virtual cloning method and animation system compatible with the MPEG-4 facial object specification. Our method uses orthogonal photos (front and side views) as input and reconstructs a 3D facial model. The method is based on extracting MPEG-4 face definition parameters (FDPs) from the photos, initializing a custom face in a more capable interface, and deforming a generic model. Fully automatic texture mapping is employed using an image composed from the two orthogonal photos. A reconstructed head can be animated immediately inside our animation system, which conforms to the MPEG-4 specification of face animation parameters (FAPs). The result is integrated into our virtual human director (VHD) system.
Lecture Notes in Computer Science | 1998
Won-Sook Lee; Nadia Magnenat-Thalmann
This paper describes a combined method of facial reconstruction and morphing between two heads, showing the extensive use of feature points detected from pictures. We first present an efficient method to generate a 3D head for animation from picture data, and then a simple method for 3D shape interpolation and 2D morphing based on triangulation. The basic idea is to generate an individualized head by modifying a generic model using orthogonal picture input, then to perform automatic texture mapping, with the texture image generated by combining the orthogonal pictures and the texture coordinates generated by projecting the resulting head in front, right and left views, which yields a clean triangulation of the texture image. An intermediate shape can then be obtained by interpolating between two different persons. Morphing between 2D images is processed by generating an intermediate image and new texture coordinates. Texture coordinates are interpolated linearly, and the texture image is created using Barycentric coordinates for each pixel in each triangle taken from a 3D head. Various experiments with different ratios between shapes and images, and with various expressions, are illustrated.
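The per-pixel Barycentric interpolation step above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names and the per-channel color blend are my own:

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates (u, v, w) of 2D point p in triangle (a, b, c)."""
    def cross(o, x, y):
        # z-component of (x - o) x (y - o); twice the signed triangle area
        return (x[0] - o[0]) * (y[1] - o[1]) - (x[1] - o[1]) * (y[0] - o[0])
    area = cross(a, b, c)
    u = cross(p, b, c) / area   # weight of vertex a
    v = cross(p, c, a) / area   # weight of vertex b
    w = 1.0 - u - v             # weight of vertex c
    return u, v, w

def lerp_color(p, tri, colors):
    """Interpolate per-vertex colors (one tuple per vertex) at pixel p."""
    u, v, w = barycentric(p, *tri)
    return tuple(u * c0 + v * c1 + w * c2 for c0, c1, c2 in zip(*colors))
```

For a morph, the same weights computed in the source triangle are used to sample the corresponding destination triangle, which is what makes the triangulation shared between the two heads useful.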
international symposium on multimedia | 2009
Dan Yang; Won-Sook Lee
Very large online music databases have recently been created by vendors, but they generally lack content-based retrieval methods. One exception is Allmusic.com, which offers browsing by musical emotion, using human experts to classify several thousand songs into 183 moods. In this paper, machine learning techniques are used instead of human experts to extract emotions in music. The classification is based on a psychological model of emotion that is extended to 23 specific emotion categories. Our results for mining the lyrical text of songs for specific emotions are promising: they generate classification models that are human-comprehensible, and the results correspond to commonsense intuitions about specific emotions. The lyric mining in this paper is one aspect of research that combines different classifiers of musical emotion, such as acoustics and lyrical text.
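As a rough sketch of the kind of lyric-to-emotion classifier described above: the abstract does not name the learner or feature set, so this stand-in uses a plain multinomial naive Bayes over words with Laplace smoothing; the class name and example labels are illustrative only:

```python
import math
from collections import Counter, defaultdict

class NaiveBayesLyrics:
    """Tiny multinomial naive Bayes mapping lyric text to an emotion label."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # emotion -> word frequencies
        self.class_counts = Counter()            # emotion -> document count
        self.vocab = set()

    def train(self, lyrics, emotion):
        words = lyrics.lower().split()
        self.word_counts[emotion].update(words)
        self.class_counts[emotion] += 1
        self.vocab.update(words)

    def classify(self, lyrics):
        words = lyrics.lower().split()
        total_docs = sum(self.class_counts.values())
        best, best_lp = None, float("-inf")
        for emotion in self.class_counts:
            lp = math.log(self.class_counts[emotion] / total_docs)  # prior
            denom = sum(self.word_counts[emotion].values()) + len(self.vocab)
            for w in words:
                # add-one (Laplace) smoothed word likelihood
                lp += math.log((self.word_counts[emotion][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = emotion, lp
        return best
```

Such word-based models are one reason the resulting classifiers are human-comprehensible: the learned per-emotion word frequencies can be inspected directly.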
graphics interface | 2007
Pengcheng Xi; Won-Sook Lee; Chang Shu
Analysis of datasets of 3D scanned surfaces has presented problems because of incompleteness of the surfaces and because of variation in shape, size and pose. In this paper, a high-resolution generic model is aligned to data in the Civilian American and European Surface Anthropometry Resource (CAESAR) database in order to obtain a consistent parameterization. A Radial Basis Function (RBF) network is built for rough deformation using landmark information from the generic model, anatomical landmarks provided by the CAESAR dataset, and virtual landmarks created automatically for geometric deformation. Fine mapping then applies a weighted sum of errors on both the surface data and the smoothness of the deformation. Compared with previous methods, our approach achieves robust alignment with higher efficiency. The consistent parameterization also makes Principal Component Analysis (PCA) possible on the whole body as well as on human body segments. Our analysis of segmented bodies displays richer variation than that of the whole body, indicating that wider application of human body reconstruction from segments is possible in computer animation.
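The rough RBF deformation stage can be illustrated with a minimal interpolating RBF warp. This is a sketch under simplifying assumptions (a Gaussian kernel, no affine/polynomial term, and dense Gaussian elimination, which only suits small landmark sets); the paper's actual network is more elaborate:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(src, dst, sigma=1.0):
    """Fit weights so sum_j w_j * phi(|x - src_j|) maps each src_i to dst_i."""
    phi = lambda r: math.exp(-(r / sigma) ** 2)  # Gaussian kernel
    A = [[phi(math.dist(si, sj)) for sj in src] for si in src]
    dims = len(dst[0])
    weights = [solve(A, [d[k] for d in dst]) for k in range(dims)]
    return weights, phi

def rbf_warp(p, src, weights, phi):
    """Evaluate the fitted RBF mapping at an arbitrary point p."""
    return tuple(sum(w[j] * phi(math.dist(p, src[j])) for j in range(len(src)))
                 for w in weights)
```

By construction the warp reproduces every landmark correspondence exactly, while points between landmarks are deformed smoothly, which is the role the rough RBF stage plays before fine mapping.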
Signal Processing-image Communication | 2002
Taro Goto; Won-Sook Lee; Nadia Magnenat-Thalmann
There are two main steps in creating a 3D animatable facial model from photographs. The first is to extract features such as the eyes, nose, mouth and chin curves from the photographs. The second is to create a 3D individualized facial model using the extracted feature information. The final facial model is expected to have an individualized shape, photo-realistic skin color, and an animatable structure. Here, we describe our novel approach to detecting features automatically using a statistical analysis of facial information. We are interested not only in the location of the features but also in the shape of local features. How to create 3D models from the detected features is also explained, and several resulting 3D facial models are illustrated and discussed.
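The abstract does not spell out the statistical detector, so as a hypothetical stand-in, a template-matching search by normalized cross-correlation conveys the general flavor of locating a feature along an image row (all names and the 1D simplification are illustrative):

```python
import math

def ncc(patch, template):
    """Normalized cross-correlation between equal-length intensity patches."""
    n = len(patch)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    dp = math.sqrt(sum((p - mp) ** 2 for p in patch))
    dt = math.sqrt(sum((t - mt) ** 2 for t in template))
    return num / (dp * dt) if dp and dt else 0.0

def find_feature(row, template):
    """Slide a 1D template across an image row; return the best offset."""
    k = len(template)
    scores = [ncc(row[i:i + k], template) for i in range(len(row) - k + 1)]
    return max(range(len(scores)), key=scores.__getitem__)
```

A score of 1.0 means the patch matches the template up to brightness and contrast, which is why normalization matters for uncontrolled photographs.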
ieee virtual reality conference | 1999
Won-Sook Lee; Yin Wu; Nadia Magnenat-Thalmann
Face cloning and animation that account for wrinkle formation and aging are an ambitious goal and a challenging task. This paper describes a cloning method and an aging simulation within a family. We reconstruct the father, mother, son and daughter of one family and mix their shapes and textures in 3D to obtain virtual persons with some variation. The head reconstruction detects features from two orthogonal pictures, modifies a generic model with an animation structure, and uses an automatic texture mapping method. It is followed by a simple method for 3D shape interpolation and 2D morphing based on triangulation, used in experiments mixing 3D heads between family members. Finally, wrinkles for facial animation and aging are generated based on the detected feature points. Experiments generate aging wrinkles on the faces of the son and the daughter.
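Because every reconstructed head is a deformation of the same generic model, the 3D shape mixing between family members reduces to a per-vertex linear blend over meshes with identical topology. A minimal sketch (the function name is illustrative):

```python
def interpolate_shape(verts_a, verts_b, t):
    """Linear blend of two vertex lists sharing the same topology;
    t = 0 gives shape A, t = 1 gives shape B, intermediate t mixes them."""
    return [tuple((1.0 - t) * a + t * b for a, b in zip(va, vb))
            for va, vb in zip(verts_a, verts_b)]
```

The shared topology is what the generic model buys: without a one-to-one vertex correspondence, such a direct blend between two scanned heads would not be meaningful.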
IEEE Transactions on Instrumentation and Measurement | 2011
Vijaya Lakshmi Guruswamy; Jochen Lang; Won-Sook Lee
Haptic tactile feedback is a widely used and effective technique in virtual reality applications. When an object's surface is explored by stroking it with fingers, fingernails, or a tool, a vibration response is sensed. The vibrations convey information about the surface finish and patterns in the surface structure, and they may help identify the surface. We study characteristics of real-world physical objects based on actual measurements. We propose novel techniques for modeling haptic vibration textures using digital filters that can simulate both stochastic and patterned object textures. Modeling is based on a spatial distribution of infinite impulse response (IIR) filters that operate in the time domain. We match the impulse responses of the filters to acceleration profiles obtained from scanning real-world objects. The results show that our modeling efficiently represents the varying roughness characteristics of both regular-patterned and stochastic surfaces, unlike prior methods based on a parametric decaying-sinusoid model. Our experiments employ an existing handheld mobile scanning setup with a visually tracked probe, which provides acceleration and force profiles. Our simple capturing devices also remove any need for a robotic manipulator.
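The core idea can be sketched with a two-pole IIR resonator, whose impulse response is a decaying sinusoid, driven by an impulse train with one impulse per surface ridge crossed by the probe. This is a simplified stand-in for the paper's fitted filters; the frequency and decay values below are arbitrary illustrations, not measured parameters:

```python
import math

def biquad_coeffs(freq_hz, decay, fs):
    """Two-pole resonator whose impulse response is a decaying sinusoid
    at freq_hz, with per-second amplitude decay rate `decay`."""
    r = math.exp(-decay / fs)            # pole radius (per-sample decay)
    w = 2.0 * math.pi * freq_hz / fs     # pole angle (normalized frequency)
    return 1.0, -2.0 * r * math.cos(w), r * r   # b0, a1, a2

def iir_filter(b0, a1, a2, x):
    """Direct-form recursion: y[n] = b0*x[n] - a1*y[n-1] - a2*y[n-2]."""
    y, y1, y2 = [], 0.0, 0.0
    for xn in x:
        yn = b0 * xn - a1 * y1 - a2 * y2
        y.append(yn)
        y1, y2 = yn, y1
    return y

def vibration(ridge_times, fs, dur, freq_hz=250.0, decay=80.0):
    """Render an acceleration-like signal: one impulse per ridge crossing,
    shaped by the resonator's decaying-sinusoid impulse response."""
    x = [0.0] * int(dur * fs)
    for t in ridge_times:
        i = int(t * fs)
        if i < len(x):
            x[i] = 1.0
    return iir_filter(*biquad_coeffs(freq_hz, decay, fs), x)
```

Running the recursion in the time domain is what makes this suitable for haptic rendering: each output sample costs a few multiply-adds, regardless of how long the ringing lasts.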
ieee international workshop on haptic audio visual environments and games | 2009
Vijaya Lakshmi Guruswamy; Jochen Lang; Won-Sook Lee
Vibration feedback models are known to be effective at conveying tactile characteristics in virtual environments, and they can be rendered with existing haptic devices. In this paper we develop a novel texture model based on a spatial distribution of infinite impulse response (IIR) filters that operate in the time domain. We match the impulse responses of the filters to measured acceleration profiles obtained from scanning real-world objects. We report results on surfaces with varying roughness characteristics, including surfaces with stochastic variations and surfaces with regular features. Our novel use of IIR filters allows us to represent multiple frequencies of the response and to unify the haptic texture model across arbitrary surfaces, unlike the conventional rendering method for patterned textures based on a decaying sinusoid. We employ an existing hand-held mobile scanning set-up with a visually tracked probe, which provides acceleration and force profiles. Our simple capturing devices also remove any need for a robotic manipulator.