Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where I-Chen Lin is active.

Publication


Featured research published by I-Chen Lin.


The Visual Computer | 2005

Mirror MoCap: Automatic and efficient capture of dense 3D facial motion parameters from video

I-Chen Lin; Ming Ouhyoung

In this paper, we present an automatic and efficient approach to the capture of dense facial motion parameters, which extends our previous work on 3D reconstruction from mirror-reflected multiview video. To narrow the search space and rapidly generate 3D candidate position lists, we apply mirrored-epipolar bands. For automatic tracking, we utilize the spatial proximity of facial surfaces and temporal coherence to find the best trajectories and rectify cases of missing and false tracking. More than 300 markers on a subject’s face are tracked from video at a processing speed of 9.2 frames per second (fps) on a regular PC. The estimated 3D facial motion trajectories have been applied to our facial animation system and can be used for facial motion analysis.
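
To make the tracking step concrete, below is a minimal sketch (not the paper's exact algorithm) of frame-to-frame marker association under a temporal-coherence assumption: each marker's position is predicted with constant velocity and matched to the nearest unused 3D candidate within a threshold. The arrays `prev_pos`, `prev_vel`, `candidates`, and the `max_dist` threshold are hypothetical inputs.

```python
# Minimal sketch (not the paper's exact algorithm): greedy frame-to-frame
# marker association using temporal coherence. `prev_pos` and `prev_vel`
# are hypothetical (N, 3) arrays of tracked markers; `candidates` is an
# (M, 3) array of reconstructed 3D candidate positions for the new frame.
import numpy as np

def associate_markers(prev_pos, prev_vel, candidates, max_dist=5.0):
    """Return new positions (one row per tracked marker) and a missing flag.

    A marker whose prediction has no candidate within `max_dist`
    (units assumed, e.g. millimetres) is flagged as missing and kept
    at its predicted position.
    """
    predicted = prev_pos + prev_vel            # constant-velocity prediction
    new_pos = predicted.copy()
    missing = np.zeros(len(prev_pos), dtype=bool)
    used = np.zeros(len(candidates), dtype=bool)

    for i, p in enumerate(predicted):
        d = np.linalg.norm(candidates - p, axis=1)
        d[used] = np.inf                       # each candidate is used at most once
        j = int(np.argmin(d)) if len(d) else -1
        if j >= 0 and d[j] < max_dist:
            new_pos[i] = candidates[j]
            used[j] = True
        else:
            missing[i] = True                  # to be rectified later, e.g. interpolated
    return new_pos, missing
```

Markers flagged as missing would then be repaired afterwards, which is where the abstract's handling of missing and false tracking would come in.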


IEEE Transactions on Visualization and Computer Graphics | 2011

Adaptive Motion Data Representation with Repeated Motion Analysis

I-Chen Lin; Jen-Yu Peng; Chao-Chih Lin; Ming-Han Tsai

In this paper, we present a representation method for motion capture data by exploiting the nearly repeated characteristics and spatiotemporal coherence in human motion. We extract similar motion clips of variable lengths or speeds across the database. Since the coding costs between these matched clips are small, we propose the repeated motion analysis to extract the referred and repeated clip pairs with maximum compression gains. For further utilization of motion coherence, we approximate the subspace-projected clip motions or residuals by interpolated functions with range-aware adaptive quantization. Our experiments demonstrate that the proposed feature-aware method is of high computational efficiency. Furthermore, it also provides substantial compression gains with comparable reconstruction and perceptual errors.
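
As a rough illustration of the repeated-motion idea, the sketch below pairs similar fixed-length clips by a crude compression-gain estimate (baseline coding cost minus a distance-proportional residual cost). It is an assumption-laden toy: the paper handles variable clip lengths and speeds and uses a real coding model, whereas the `clips` array, bit costs, and `max_dist` threshold here are invented for the example.

```python
# Illustrative toy only: pairing similar fixed-length motion clips by a
# crude "compression gain". The real analysis in the paper is more general.
import numpy as np

def repeated_clip_pairs(clips, bits_per_value=16, max_dist=1.0):
    """clips: (K, T, D) array; returns (referred, repeated, gain) triples."""
    K, T, D = clips.shape
    baseline = T * D * bits_per_value               # cost of coding a clip directly
    flat = clips.reshape(K, -1)
    dist = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)
    pairs, coded = [], set()
    # visit candidate pairs from most to least similar
    for idx in np.argsort(dist, axis=None):
        i, j = np.unravel_index(idx, dist.shape)
        if i in coded or j in coded or dist[i, j] > max_dist:
            continue
        # toy model: the residual coding cost shrinks as the clips get more similar
        residual = baseline * dist[i, j] / max_dist
        gain = baseline - residual
        if gain > 0:
            pairs.append((int(i), int(j), float(gain)))
            coded.add(int(j))                       # clip j is coded relative to clip i
    return pairs
```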


Proceedings of Computer Animation 2001 (Fourteenth Conference on Computer Animation) | 2001

Realistic 3D facial animation parameters from mirror-reflected multi-view video

I-Chen Lin; Jeng-Sheng Yeh; Ming Ouhyoung

A robust, accurate and inexpensive approach to estimate 3D facial motion from multi-view video is proposed, where two mirrors located near one's cheeks reflect the side views of markers on one's face. Useful properties of mirrored images are utilized to simplify the proposed tracking algorithm significantly, while a Kalman filter is employed to reduce noise and to predict occluded marker positions. More than 50 markers on one's face are continuously tracked at 30 frames per second. The estimated 3D facial motion data has been practically applied to our facial animation system. In addition, the facial motion dataset can also be applied to the analysis of co-articulation effects and facial expressions, and to audio-visual hybrid recognition systems.
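
The abstract's use of a Kalman filter for noise reduction and occlusion prediction could look roughly like the constant-velocity sketch below; the state layout, noise parameters, and frame rate are assumptions rather than the paper's values.

```python
# Minimal constant-velocity Kalman filter for one marker, in the spirit of
# the abstract; noise parameters and 3D state layout are assumptions.
import numpy as np

class MarkerKalman:
    def __init__(self, dt=1/30, q=1e-2, r=1.0):
        # state: [x, y, z, vx, vy, vz]
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)                    # constant-velocity model
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position only
        self.Q = q * np.eye(6)                             # process noise
        self.R = r * np.eye(3)                             # measurement noise
        self.x = np.zeros(6)
        self.P = np.eye(6)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]                                  # predicted marker position

    def update(self, z):
        # z is None when the marker is occluded: keep the prediction
        if z is None:
            return self.x[:3]
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]
```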


Multimedia Tools and Applications | 2014

Human face aging with guided prediction and detail synthesis

Ming-Han Tsai; Yen-Kai Liao; I-Chen Lin

In this paper, we present an example-based method to estimate the aging process of a human face. To tackle the difficulty of collecting a large number of chronological photos of individuals, we utilize a two-layer strategy. Based on a sparse aging database, an EM-PCA-based algorithm with a personal guidance vector is first applied to conjecture the temporal variations of a target face. Since the subspace-based prediction may not preserve detailed creases, we propose synthesizing facial details with a separate texture dataset. Besides automatic simulation, the proposed framework can also incorporate further guidance, e.g., a parents' impact vector or users' indications of wrinkles. Our estimated results can improve feature point positions, and a user evaluation demonstrates that the two-layer approach provides more reasonable aging predictions.
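
A heavily simplified sketch of the subspace-prediction layer is shown below, using plain PCA instead of the paper's EM-PCA and treating the guidance term as an additive offset in coefficient space; the face vectors, aging direction, and guidance vector are hypothetical.

```python
# Illustrative sketch with plain PCA (the paper uses an EM-PCA-based
# algorithm with a personal guidance vector); shapes are assumptions.
import numpy as np

def fit_pca(faces, k=10):
    """faces: (N, D) vectorized face features from an aging database."""
    mean = faces.mean(axis=0)
    U, S, Vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, Vt[:k]                      # mean face and top-k basis (k, D)

def predict_aged_face(face, mean, basis, aging_direction, years, guidance=None):
    """Shift the face inside the subspace along an assumed aging direction."""
    coeff = basis @ (face - mean)            # project into the subspace
    coeff = coeff + years * aging_direction  # per-year displacement in coefficients
    if guidance is not None:                 # e.g. a personal or parental bias
        coeff = coeff + guidance
    return mean + basis.T @ coeff            # reconstruct the aged face
```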


Computer-Aided Design and Computer Graphics | 2011

Lattice-Based Skinning and Deformation for Real-Time Skeleton-Driven Animation

Cheng-Hao Chen; I-Chen Lin; Ming-Han Tsai; Pin-Hua Lu

In this paper, we present an efficient framework to deform polygonal models for skeleton-driven animation. Standard solutions for skeleton-driven animation, such as linear blend skinning, require intensive artist intervention and focus on primary deformations. The proposed approach can generate both low- and high-frequency surface motions, such as muscle deformation and vibrations, with little user intervention. Given a surface mesh, we construct a lattice of cubic cells embracing the mesh and apply lattice-based smooth skinning to drive the primary surface deformation with volume preservation. Lattice shape matching with dynamic particles is, in the meantime, utilized for secondary deformations. Due to the highly parallel lattice structure, the proposed method is amenable to GPU computation. Our results show that it is suitable for vivid real-time animation.
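
A minimal sketch of the lattice idea follows: lattice corners are deformed with standard linear blend skinning, and each mesh vertex is then moved by trilinear interpolation of its embedding cell's eight corners. The array shapes and weights are assumed for illustration, and the paper's volume preservation and shape matching are omitted.

```python
# Minimal sketch: LBS on lattice corners, then trilinear interpolation of
# mesh vertices from their embedding cells. Data layout is assumed.
import numpy as np

def blend_corners(rest_corners, bone_transforms, skin_weights):
    """rest_corners: (C, 3); bone_transforms: (B, 4, 4); skin_weights: (C, B)."""
    homo = np.hstack([rest_corners, np.ones((len(rest_corners), 1))])       # (C, 4)
    per_bone = np.einsum('bij,cj->cbi', bone_transforms, homo)[..., :3]     # (C, B, 3)
    return np.einsum('cb,cbi->ci', skin_weights, per_bone)                  # linear blend skinning

def deform_vertex(cell_corners, trilinear_w):
    """cell_corners: (8, 3) deformed corners of a cell; trilinear_w: (8,) weights."""
    return trilinear_w @ cell_corners
```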


Pacific Conference on Computer Graphics and Applications | 1999

A speech driven talking head system based on a single face image

I-Chen Lin; Cheng-Sheng Hung; Tzong-Jer Yang; Ming Ouhyoung

In this paper, a lifelike talking head system is proposed. The talking head, which is driven by speaker-independent speech recognition, requires only a single face image to synthesize lifelike facial expressions. The proposed system uses speech recognition engines to obtain utterances and corresponding time stamps in the speech data. Associated facial expressions can be fetched from an expression pool, and the synthetic facial expressions can then be synchronized with the speech. When applied to the Internet, our web-enabled talking head system can serve as a vivid merchandise narrator, and it requires only 50 Kbytes/minute plus an additional face image (about 40 Kbytes in CIF format, 24-bit color, JPEG compression). The system can synthesize facial animation at more than 30 frames/sec on a Pentium II 266 MHz PC.
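
The synchronization step described above can be pictured with the hedged sketch below: recognized utterances arrive with time stamps, and each rendered frame looks up the expression whose utterance spans the current audio time, falling back to a neutral face during silence. The expression pool and timing structures are hypothetical.

```python
# Hedged sketch of expression-to-speech synchronization; the data
# structures here are assumptions, not the system's actual ones.
import bisect

class ExpressionScheduler:
    def __init__(self, utterances, expression_pool, neutral):
        # utterances: list of (start_sec, end_sec, label) from the recognizer
        self.utterances = sorted(utterances)
        self.starts = [u[0] for u in self.utterances]
        self.pool = expression_pool            # label -> expression frame
        self.neutral = neutral

    def expression_at(self, audio_time_sec):
        i = bisect.bisect_right(self.starts, audio_time_sec) - 1
        if i >= 0:
            start, end, label = self.utterances[i]
            if start <= audio_time_sec <= end:
                return self.pool.get(label, self.neutral)
        return self.neutral                    # silence: neutral face
```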


Archive | 1999

Speech Driven Facial Animation

Tzong-Jer Yang; I-Chen Lin; Cheng-Sheng Hung; Chien-Feng Huang; Ming Ouhyoung

In this paper, we present an approach that animates facial expressions through speech analysis. An individualized 3D head model is first generated by modifying a generic head model on which a set of MPEG-4 Facial Definition Parameters (FDPs) has been pre-defined. To animate facial expressions of the 3D head model, a real-time speech analysis module is employed to obtain mouth shapes, which are converted to MPEG-4 Facial Animation Parameters (FAPs) that drive the 3D head model with corresponding facial expressions. The approach has been implemented as a real-time speech-driven facial animation system. On a PC with a single Pentium III 500 MHz CPU, the system runs at around 15–24 frames/sec with an image size of 120×150. The input is live audio, and the initial delay is within 4 seconds. An ongoing model-based visual communication system that integrates a 3D head motion estimation technique with this system is also described.
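
To illustrate how analyzed mouth shapes could become a per-frame parameter stream at the reported frame rates, here is a hedged sketch that linearly resamples key poses (e.g. vectors of animation parameters such as MPEG-4 FAPs) to a fixed rendering rate; the key times and parameter vectors are hypothetical inputs from a speech-analysis module.

```python
# Hedged sketch: resample mouth-shape key poses to a fixed frame rate by
# linear interpolation. Inputs are assumed, not the system's actual data.
import numpy as np

def resample_parameters(key_times, key_params, fps=15.0):
    """key_times: increasing (K,) seconds; key_params: (K, P) parameter rows."""
    key_times = np.asarray(key_times, dtype=float)
    key_params = np.asarray(key_params, dtype=float)
    frame_times = np.arange(key_times[0], key_times[-1], 1.0 / fps)
    # interpolate each parameter channel independently
    frames = np.stack(
        [np.interp(frame_times, key_times, key_params[:, p])
         for p in range(key_params.shape[1])], axis=1)
    return frame_times, frames                 # one parameter row per rendered frame
```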


Interactive 3D Graphics and Games | 2016

Augmented reality instruction for object assembly based on markerless tracking

Li-Chen Wu; I-Chen Lin; Ming-Han Tsai

Conventional object assembly instructions are usually written or illustrated in a paper manual, and users have to associate these static instructions with real objects in 3D space. In this paper, a novel augmented reality system is presented that lets a user interact with objects and instructions. While most related methods paste conspicuous markers onto objects for tracking and constrain their orientations or shapes, we adopt a markerless strategy for more intuitive interaction. Based on live information from an off-the-shelf RGB-D camera, the proposed tracking procedure identifies components in a scene, tracks their 3D positions and orientations, and evaluates whether components have been combined. According to the detected events and poses, our indication procedure then dynamically displays indication lines, circular arrows and other hints to guide a user in manipulating the components into correct poses. The experiment shows that the proposed system can robustly track the components and respond with intuitive instructions at an interactive rate. Most users in the evaluation were interested in and willing to use this technique for object assembly.
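
One of the events mentioned above, detecting that two components have been combined, could be checked roughly as in the sketch below, which compares the current relative pose of two tracked components against the assembled relative pose; the 4x4 pose matrices (from a hypothetical RGB-D tracker) and the tolerances are assumptions.

```python
# Hedged sketch of a "components combined" check from tracked rigid poses.
import numpy as np

def is_assembled(pose_a, pose_b, target_rel, pos_tol=0.01, ang_tol_deg=10.0):
    """pose_a, pose_b, target_rel: 4x4 rigid transforms (camera frame)."""
    rel = np.linalg.inv(pose_a) @ pose_b          # current B-relative-to-A pose
    diff = np.linalg.inv(target_rel) @ rel        # residual transform
    pos_err = np.linalg.norm(diff[:3, 3])
    # rotation angle from the trace of the residual rotation matrix
    cos_angle = np.clip((np.trace(diff[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    ang_err = np.degrees(np.arccos(cos_angle))
    return pos_err < pos_tol and ang_err < ang_tol_deg
```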


The Visual Computer | 2013

Skeleton-driven surface deformation through lattices for real-time character animation

Cheng-Hao Chen; Ming-Han Tsai; I-Chen Lin; Pin-Hua Lu

In this paper, an efficient deformation framework is presented for skeleton-driven polygonal characters. Standard solutions, such as linear blend skinning, focus on primary deformations and require intensive user adjustment. We propose constructing a lattice of cubic cells embracing the input surface mesh. Based on the lattice, our system automatically propagates smooth skinning weights from the bones to drive the primary surface deformation, and it rectifies over-compressed regions with volume preservation. The secondary deformation is, meanwhile, generated by lattice shape matching with dynamic particles. The proposed framework can generate both low- and high-frequency surface motions, such as muscle deformation and vibrations, with little user intervention. Our results demonstrate that the proposed lattice-based method is amenable to GPU computation and is suitable for real-time character animation.
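
The shape-matching ingredient can be sketched as follows: for a group of lattice particles, find the best rigid transform from the rest shape to the current shape via a polar decomposition and pull the particles toward the matched goal positions. This is a generic shape-matching step under assumed inputs, not the paper's full lattice scheme.

```python
# Minimal shape-matching step for one lattice region; inputs are assumed.
import numpy as np

def polar_rotation(A):
    """Rotation part of a 3x3 matrix via SVD (reflection-safe)."""
    U, _, Vt = np.linalg.svd(A)
    if np.linalg.det(U @ Vt) < 0:
        U[:, -1] *= -1
    return U @ Vt

def shape_match_step(rest, current, stiffness=0.5):
    """rest, current: (N, 3) particle positions of one lattice region."""
    c_rest, c_cur = rest.mean(axis=0), current.mean(axis=0)
    A = (current - c_cur).T @ (rest - c_rest)       # covariance of the region
    R = polar_rotation(A)                           # optimal rigid rotation
    goal = (rest - c_rest) @ R.T + c_cur            # rigidly matched goal shape
    return current + stiffness * (goal - current)   # pull particles toward the goal
```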


Journal of Computer Science and Technology | 2004

Surface detail capturing for realistic facial animation

Pei-Hsuan Tu; I-Chen Lin; Jeng-Sheng Yeh; Rung-Huei Liang; Ming Ouhyoung

In this paper, a facial animation system is proposed that simultaneously captures both the geometric information and the illumination changes of surface details, called expression details, from video clips; the captured data can be widely applied to different 2D face images and 3D face models. While tracking the geometric data, we record the expression details as ratio images. For 2D facial animation synthesis, these ratio images are used to generate dynamic textures. Because a ratio image is obtained by dividing the colors of an expressive face by those of a neutral face, pixels with a ratio value smaller than one are where a wrinkle or crease appears. Therefore, the gradients of the ratio value at each pixel are regarded as changes of the face surface, and the original surface normals can be adjusted according to these gradients. Based on this idea, we can convert the ratio images into a sequence of normal maps and then apply them to animated 3D model rendering. With expression detail mapping, the resulting facial animations are more lifelike and more expressive.
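
A hedged sketch of the ratio-image-to-normal-map conversion is given below: the expressive image is divided by the neutral one, the gradients of the ratio are treated as surface changes, and the base normals are tilted accordingly. The grayscale inputs and the scale factor are assumptions for illustration.

```python
# Hedged sketch of the ratio-image idea; inputs and scaling are assumed.
import numpy as np

def ratio_to_normal_map(expressive, neutral, base_normals, scale=1.0, eps=1e-3):
    """expressive, neutral: (H, W) grayscale; base_normals: (H, W, 3)."""
    ratio = expressive / np.maximum(neutral, eps)     # < 1 where creases darken the face
    gy, gx = np.gradient(ratio)                       # changes of the ratio value
    normals = base_normals.copy()
    normals[..., 0] -= scale * gx                     # tilt normals by the gradients
    normals[..., 1] -= scale * gy
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
    return normals
```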

Collaboration


Dive into I-Chen Lin's collaborations.

Top Co-Authors

Ming Ouhyoung (National Taiwan University)
Ming-Han Tsai (National Chiao Tung University)
Jen-Yu Peng (National Chiao Tung University)
Jeng-Sheng Yeh (National Taiwan University)
Wen-Chieh Lin (National Chiao Tung University)
Yu-Shuen Wang (National Chiao Tung University)
Pin-Hua Lu (National Chiao Tung University)
Chao-Chih Lin (National Chiao Tung University)
Cheng-Hao Chen (National Chiao Tung University)
Cheng-Sheng Hung (National Taiwan University)