Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Shuen-Huei Guan is active.

Publication


Featured research published by Shuen-Huei Guan.


International Conference on Computer Graphics and Interactive Techniques | 2011

Stereoscopic 3D experience optimization using cropping and warping

Hong-Shang Lin; Shuen-Huei Guan; Chu-Tien Lee; Ming Ouhyoung

This paper aims to optimize the stereoscopic 3D experience when users watch such content. We identify two factors in the stereoscopic experience: visual fatigue and depth perception. To optimize the experience, we propose five principles that reduce visual fatigue and enhance depth perception, and implement them through cropping and warping. Our methods avoid the time-consuming steps of existing view interpolation approaches, such as camera calibration, accurate dense depth-map estimation, and inpainting. We also design a GUI that lets users efficiently edit a stereoscopic video, optimize the stereoscopic experience, and preview the stereoscopic result. A user study shows that our method is successful in optimizing the stereoscopic experience.
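The cropping operation relies on a simple geometric fact worth spelling out: shifting one view horizontally, which asymmetric cropping achieves, changes every screen disparity by the same amount, so the whole depth range can be pulled back inside a comfort zone. Below is a minimal sketch of that mechanism, not the paper's implementation; the function names, pixel units, and the single comfort limit are assumptions.

def required_shift(max_disparity_px, comfort_limit_px):
    # Columns of horizontal shift needed so that the largest positive
    # (behind-screen) disparity falls back inside the comfort limit.
    return max(0, max_disparity_px - comfort_limit_px)

def shift_by_cropping(left, right, s):
    # left, right: NumPy image arrays of equal width. Cropping s columns
    # from the right edge of the left view and the left edge of the right
    # view changes every disparity d to d - s in the shared coordinates,
    # shifting the perceived depth range toward the viewer.
    if s == 0:
        return left, right
    return left[:, :-s], right[:, s:]

A uniform shift trades background comfort against foreground parallax, which is presumably part of why warping is used in addition to cropping.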


IEEE Transactions on Circuits and Systems for Video Technology | 2012

Animating Lip-Sync Characters With Dominated Animeme Models

Yu-Mei Chen; Fu-Chung Huang; Shuen-Huei Guan; Bing-Yu Chen

Character speech animation is traditionally considered important but tedious work, especially when lip synchronization (lip-sync) is taken into account. Although several methods have been proposed to ease the burden on artists creating facial and speech animation, few are both fast and efficient. In this paper, we introduce a framework for synthesizing lip-sync character speech animation in real time from a given speech sequence and its corresponding texts. We start by training dominated animeme models (DAMs) for each kind of phoneme, learning the character's animation control signals through an expectation-maximization (EM)-style optimization approach. The DAMs are further decomposed into polynomial-fitted animeme models and corresponding dominance functions while taking coarticulation into account. Finally, given a novel speech sequence and its corresponding texts, the animation control signal of the character can be synthesized in real time with the trained DAMs. The synthesized lip-sync animation can even preserve exaggerated characteristics of the character's facial geometry. Moreover, since our method runs in real time, it can be used for many applications, such as lip-sync animation prototyping, multilingual animation reproduction, avatar speech, and mass animation production. Furthermore, the synthesized animation control signal can be imported into 3-D packages for further adjustment, so our method can be easily integrated into an existing production pipeline.
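The decomposition into animeme shape functions and dominance functions implies a concrete synthesis rule: at each frame, every nearby phoneme contributes its fitted shape value, weighted by how dominant that phoneme is at that instant. Here is a minimal sketch of that rule, assuming Gaussian dominance functions and per-phoneme polynomial coefficients; the parameter layout is an illustrative assumption, not the authors' code.

import numpy as np

def dominance(t, center, width):
    # Gaussian dominance: how strongly a phoneme governs frame t.
    return np.exp(-0.5 * ((t - center) / width) ** 2)

def animeme(t, start, end, coeffs):
    # Polynomial animeme evaluated on the phoneme's normalized local time.
    s = np.clip((t - start) / (end - start), 0.0, 1.0)
    return np.polyval(coeffs, s)

def synthesize(t, phonemes):
    # Blend all phonemes into one control-signal value at frame t.
    # phonemes: list of dicts with keys start, end, coeffs, width.
    num = den = 0.0
    for p in phonemes:
        center = 0.5 * (p["start"] + p["end"])
        w = dominance(t, center, p["width"])
        num += w * animeme(t, p["start"], p["end"], p["coeffs"])
        den += w
    return num / den if den > 0 else 0.0

Because each frame is just a weighted blend of precomputed polynomials, evaluation is trivially real-time, which matches the applications listed above.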


International Conference on Computer Graphics and Interactive Techniques | 2009

Animating lip-sync speech faces by dominated animeme models

Fu-Chung Huang; Yu-Mei Chen; Tse-Hsien Wang; Bing-Yu Chen; Shuen-Huei Guan

Speech animation is traditionally considered important but tedious work for most applications, because facial muscles are complex and interact dynamically. In this paper, we introduce a framework for synthesizing a 3D lip-sync speech animation from a given speech sequence and its corresponding texts. We first identify, from a training video, the representative key lip shapes that are important for blend shapes, and use them to guide the artist in creating corresponding 3D key faces (lips). The training faces in the video are then cross-mapped to the crafted key faces to construct the Dominated Animeme Models (DAM) for each kind of phoneme. Taking into account the coarticulation effects in the animation control signals from the cross-mapped training faces, the DAM comprises two functions: polynomial-fitted animeme shape functions and corresponding dominance weighting functions. Finally, given a novel speech sequence and its corresponding texts, a lip-sync speech animation can be synthesized in a short time with the DAM.


International Conference on Computer Graphics and Interactive Techniques | 2008

Lips-sync 3D speech animation

Fu-Chung Huang; Bing-Yu Chen; Yung-Yu Chuang; Shuen-Huei Guan

Facial animation is traditionally considered an important but tedious task for many applications. Recently, the demand for lip-sync animation has been increasing, but there seem to be few fast and easy generation methods. In this talk, we present a system that synthesizes lip-sync speech animation given a novel utterance. Our system uses a nonlinear blend-shape method and derives key shapes using a novel automatic clustering algorithm. Finally, a Gaussian phoneme model is used to predict the motion dynamics for synthesizing a new speech animation.
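The Gaussian phoneme model mentioned here admits a compact reading: fit one Gaussian per phoneme over the key-shape blend weights observed in training, then drive a new utterance from the per-phoneme statistics. The sketch below is one plausible, deliberately simplified realization; the data layout and the boundary-smoothing step are assumptions, not the talk's actual method.

import numpy as np

def fit_phoneme_gaussians(samples_by_phoneme):
    # samples_by_phoneme: {phoneme: (N, K) array of key-shape weights}.
    # Returns {phoneme: (mean, covariance)} per phoneme.
    return {ph: (x.mean(axis=0), np.cov(x, rowvar=False))
            for ph, x in samples_by_phoneme.items()}

def predict_weights(phoneme_track, models):
    # phoneme_track: per-frame phoneme labels for the new utterance.
    # Uses only the means here; returns a (T, K) weight trajectory,
    # lightly smoothed to avoid jumps at phoneme boundaries.
    w = np.stack([models[ph][0] for ph in phoneme_track])
    kernel = np.array([0.25, 0.5, 0.25])
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, w)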


International Conference on Computer Graphics and Interactive Techniques | 2007

Digital restoration of moldy aged films

Chieh-Ju Tu; Shuen-Huei Guan; Yung-Yu Chuang; Jiann-Rong Wu; Bing-Yu Chen; Ming Ouhyoung

Film is regarded as an important art form, often reflecting the culture from which it stems. Films record our history, represent contemporary culture, and have great artistic value; they are precious cultural assets. Unfortunately, because of aging, improper storage conditions, and other causes, old films are threatened by defects from decay, dust, dirt, scratches, and mold. Consequently, digital film restoration, the repair of defects in films, has been recognized as an important issue by archives, content owners, and film companies. This paper proposes a learning-based defect detection method and a flow-based defect repairing algorithm that greatly reduce the manual effort required in film restoration. The main contributions are a novel example-based approach to defect detection and a restoration algorithm that can repair seriously damaged films.


IEEE Transactions on Circuits and Systems for Video Technology | 2016

A Tool for Stereoscopic Parameter Setting Based on Geometric Perceived Depth Percentage

Shuen-Huei Guan; Yu-Chi Lai; Kuo-Wei Chen; Hsuang-Ting Chou; Yung-Yu Chuang

During production, it is necessary but challenging for creative producers to anticipate the depth perception of the target audience watching a stereoscopic film in a cinema. This paper proposes a novel metric, geometric perceived depth percentage (GPDP), to quantify and depict the depth perception of a scene before rendering. In addition to the geometric relationship between object depth and focal distance, GPDP takes the screen width and viewing distance into account. As a result, it provides a more intuitive means of predicting stereoscopic perception and is universal across different viewing conditions. Based on GPDP, we design a practical tool to visualize stereoscopic perception without any 3-D device or special environment. The tool uses the stereoscopic comfort volume, GPDP-based shading schemes, depth perception markers, and GPDP histograms as visual cues so that animators can set stereoscopic parameters more easily. It is easily implemented in any modern rendering pipeline, including the interactive Autodesk Maya and the offline Pixar RenderMan renderers, and has been used in several production projects, including commercial ones. Finally, two user studies show that GPDP is a proper depth perception indicator and that the proposed tool makes the stereoscopic parameter setting process easier and more efficient.
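The geometry GPDP builds on is standard: screen parallax, eye separation, and viewing distance determine where the two eye rays converge. The sketch below shows that underlying calculation and one plausible percentage normalization; the paper's exact GPDP formula is not reproduced here, and the eye-separation default and sign convention are assumptions.

def perceived_depth(parallax_m, view_dist_m, eye_sep_m=0.065):
    # Depth relative to the screen plane by similar triangles: positive
    # (uncrossed) parallax places the point behind the screen. Parallax
    # must stay below the eye separation or the eyes would diverge.
    assert parallax_m < eye_sep_m, "divergent parallax"
    return view_dist_m * parallax_m / (eye_sep_m - parallax_m)

def gpdp_like(disparity_frac, screen_width_m, view_dist_m):
    # disparity_frac: disparity as a fraction of image width, so the same
    # shot maps to different physical parallax on different screens. The
    # result is perceived depth as a percentage of viewing distance.
    parallax_m = disparity_frac * screen_width_m
    return 100.0 * perceived_depth(parallax_m, view_dist_m) / view_dist_m

This makes the universality claim concrete: by folding screen width and viewing distance into the metric, the same value describes the same relative depth experience in a living room or a cinema.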


International Conference on Computer Graphics and Interactive Techniques | 2012

Lip-synced character speech animation with dominated animeme models

Shuen-Huei Guan; Yu-Mei Chen; Fu-Chung Huang; Bing-Yu Chen

One of the holy grails of computer graphics is the generation of photorealistic images with motion data. Regenerating convincing human animation may not be the most challenging part, but it is certainly one of the ultimate goals of computer graphics. Among full-body human animations, facial animation is the challenging part because of its subtlety and our familiarity with human faces. In this paper, we share our work on lip-sync animation, a part of facial animation: a framework for synthesizing lip-sync character speech animation in real time from a given speech sequence and its corresponding texts.


International Conference on Computer Graphics and Interactive Techniques | 2009

Animating character images in 3D space

Bing-Yu Chen; Shih-Chiang Dai; Shuen-Huei Guan; Tomoyuki Nishita

In this extended abstract, we present a system that allows the user to animate character images in 3D space by applying an existing 3D character model with motion data. A character model with a rigged skeleton is used as a template to fit the silhouette of the character image. After the user assigns some corresponding points between the character image and the template model, the system fits the model to the image and transfers the image's colors and patterns to the model as textures. Finally, the user can apply any motion data to animate the fitted 3D character model in 3D space.


International Conference on Multimedia and Expo | 2004

Feature refinement strategy for extended marching cubes: Handling on dynamic nature of real-time sculpting application

Chien-Chang Ho; Yan-Hong Lu; Hung-Te Lin; Shuen-Huei Guan; Sheng-Yao Cho; Rung-Huei Liang; Bing-Yu Chen; Ming Ouhyoung

Digital sculpting is a new trend in creating 3D models, but applying it to the manipulation of volumetric data raises several issues that need to be addressed. With the extended marching cubes algorithm (EMC), sharp features of 3D models are well preserved. Additionally, sculpting applications must deal with the dynamic nature of modifying models in real time: since the sampling of sharp features is implicit, direct modification of cell data will cause problems. A feature refinement strategy is proposed to keep the dynamically modified model correct, efficiently. Overall, the proposed methods provide an adaptive-resolution, feature-preserving sculpting system that handles dynamic behavior at real-time performance.
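For context on why the feature sampling is implicit, here is the feature-vertex step of standard EMC (Kobbelt et al. 2001) that the refinement strategy builds on, not the paper's own algorithm: inside a cell flagged as containing a sharp feature, a vertex is placed by a least-squares fit against the cell's edge-intersection points and normals. A minimal sketch, with the array layout as an assumption:

import numpy as np

def feature_vertex(points, normals):
    # points, normals: (M, 3) arrays of the surface samples on a cell's
    # edges. Solves min_x sum_i (normals[i] . (x - points[i]))^2, i.e.
    # the intersection of the sampled tangent planes.
    A = np.asarray(normals, dtype=float)
    b = np.einsum("ij,ij->i", A, np.asarray(points, dtype=float))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Real implementations regularize toward the cell centroid when the
    # system is under-determined (planar regions or straight edges).
    return x

Because this vertex placement depends on samples that change whenever a sculpting stroke edits cell data, naive local updates can leave stale feature vertices, which appears to be the problem the refinement strategy addresses.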


International Conference on Computer Graphics and Interactive Techniques | 2004

Enhanced 3D model retrieval system through characteristic views using orthogonal visual hull

Shuen-Huei Guan; Ming-Kei Hsieh; Chia-Chi Yeh; Bing-Yu Chen

The introduction of the Princeton Shape Benchmark database (PSB) [Shilane et al. 2004] has given rise to extensive research into ways to perform 3D matching accurately. Among the existing methods, the LightField Descriptor (LFD) [Chen et al. 2003] is currently the most accurate, but it suffers from slow on-line matching. The contribution of this research is twofold. First, a small number of characteristic views is used in place of a huge set of images to reduce on-line retrieval time. Second, depth information is employed to maintain retrieval accuracy.
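The retrieval structure this implies is straightforward: represent each model by descriptors of its few characteristic (depth) views, and score a pair of models by their best-matching views. A minimal sketch under those assumptions; the descriptor contents and the min-over-pairs rule are illustrative, not the paper's exact matching scheme.

import numpy as np

def model_distance(views_a, views_b):
    # views_*: (V, D) arrays, one D-dimensional descriptor per
    # characteristic view. Taking the best match over all view pairs keeps
    # the comparison tolerant to orientation while staying cheap, since V
    # is small compared with a full LFD view set.
    diffs = views_a[:, None, :] - views_b[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1)).min()

def retrieve(query_views, database):
    # database: {model_id: (V, D) descriptor array}. Returns model ids
    # ranked from most to least similar.
    scores = {mid: model_distance(query_views, views)
              for mid, views in database.items()}
    return sorted(scores, key=scores.get)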

Collaboration


Dive into Shuen-Huei Guan's collaborations.

Top Co-Authors

Bing-Yu Chen, National Taiwan University
Ming Ouhyoung, National Taiwan University
Yung-Yu Chuang, National Taiwan University
Fu-Chung Huang, University of California
Yu-Mei Chen, National Taiwan University
Rung-Huei Liang, National Taiwan University
Sheng-Yao Cho, National Taiwan University
Chieh-Ju Tu, National Taiwan University
Chien-Chang Ho, National Taiwan University