Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Fu-Chung Huang is active.

Publication


Featured research published by Fu-Chung Huang.


international conference on computer graphics and interactive techniques | 2009

Moving gradients: a path-based method for plausible image interpolation

Dhruv Mahajan; Fu-Chung Huang; Wojciech Matusik; Ravi Ramamoorthi; Peter N. Belhumeur

We describe a method for plausible interpolation of images, with a wide range of applications such as temporal up-sampling for smooth playback of lower frame rate video, smooth view interpolation, and animation of still images. The method is based on the intuitive idea that a given pixel in the interpolated frames traces out a path in the source images. Therefore, we simply move and copy pixel gradients from the input images along this path. A key innovation is to allow arbitrary (asymmetric) transition points, where the path moves from one image to the other. This flexible transition preserves the frequency content of the originals without ghosting or blurring, and maintains temporal coherence. Perhaps most importantly, our framework makes occlusion handling particularly simple. The transition points allow for matches away from the occluded regions, at any suitable point along the path. Indeed, occlusions do not need to be handled explicitly at all in our initial graph-cut optimization. Moreover, a simple comparison of computed path lengths after the optimization allows us to robustly identify occluded regions and compute the most plausible interpolation in those areas. Finally, we show that significant improvements are obtained by moving gradients and using Poisson reconstruction.
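The gradient-domain reconstruction at the end of this pipeline can be sketched in a few lines. This is a minimal illustration, not the paper's path-based optimization: it assumes periodic boundaries and plain Jacobi iteration, and recovers an image from a target gradient field (such as the gradients moved and copied along paths).

```python
import numpy as np

def poisson_reconstruct(gx, gy, iters=3000):
    """Recover an image (up to an additive constant) whose forward-difference
    gradients match (gx, gy), by Jacobi iteration on the Poisson equation
    laplacian(I) = div(gx, gy).  Periodic boundaries keep the sketch short;
    real systems use faster solvers (multigrid, FFT)."""
    # Divergence of the target field: periodic backward differences.
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    I = np.zeros_like(gx, dtype=float)
    for _ in range(iters):
        # Jacobi update: nb - 4*I = div  =>  I = (nb - div) / 4
        nb = (np.roll(I, 1, 0) + np.roll(I, -1, 0) +
              np.roll(I, 1, 1) + np.roll(I, -1, 1))
        I = (nb - div) / 4.0
    return I
```

Feeding it the gradients of a periodic test image reproduces that image up to a constant, which is how gradient-domain methods are typically sanity-checked.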


international conference on computer graphics and interactive techniques | 2014

Eyeglasses-free display: towards correcting visual aberrations with computational light field displays

Fu-Chung Huang; Gordon Wetzstein; Brian A. Barsky; Ramesh Raskar

Millions of people worldwide need glasses or contact lenses to see or read properly. We introduce a computational display technology that predistorts the presented content for an observer, so that the target image is perceived without the need for eyewear. By designing optics in concert with prefiltering algorithms, the proposed display architecture achieves significantly higher resolution and contrast than prior approaches to vision-correcting image display. We demonstrate that inexpensive light field displays driven by efficient implementations of 4D prefiltering algorithms can produce the desired vision-corrected imagery, even for higher-order aberrations that are difficult to correct with glasses. The proposed computational display architecture is evaluated in simulation and with a low-cost prototype device.
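The underlying prefiltering idea can be illustrated in 2D (the paper's contribution is lifting it to 4D light field prefiltering, which avoids the contrast loss of the 2D version). A sketch using a Wiener-regularized inverse filter; the Gaussian PSF and the `eps` value are illustrative assumptions, not the paper's eye model:

```python
import numpy as np

def wiener_prefilter(image, psf, eps=1e-2):
    """Predistort `image` so that, after blurring by `psf` (standing in for the
    eye's point spread function), the perceived result approximates the
    original.  Wiener-regularised inverse filter in the frequency domain."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    F = np.fft.fft2(image)
    # conj(H) / (|H|^2 + eps) damps frequencies the blur nearly destroys,
    # trading residual blur for bounded dynamic range.
    G = F * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(G))

def perceive(image, psf):
    """Simulate the aberrated eye: circular convolution with the PSF."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))
```

Viewing the prefiltered image through the simulated blur yields a sharper percept than viewing the original, which is the basic effect the display exploits.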


symposium on computer animation | 2006

Progressive deforming meshes based on deformation oriented decimation and dynamic connectivity updating

Fu-Chung Huang; Bing-Yu Chen; Yung-Yu Chuang

We present a method for progressive deforming meshes. Most existing mesh decimation methods focus on static meshes. However, animation data is increasingly common, and it is important to address the problem of simplifying deforming meshes. Our method is based on a deformation oriented decimation (DOD) error metric and a dynamic connectivity updating (DCU) algorithm. Deformation oriented decimation extends the deformation sensitivity decimation (DSD) error metric by augmenting an additional term to model the distortion introduced by deformation. This new metric preserves not only geometric features but also areas with large deformation. Using this metric, a static reference connectivity is extracted for the whole animation. The dynamic connectivity updating algorithm utilizes vertex trees to further reduce geometric distortion by allowing the connectivity to change. Temporal coherence in the dynamic connectivity between frames is achieved by penalizing large deviations from the reference connectivity. The combination of DOD and DCU demonstrates better simplification and triangulation performance than previous methods for deforming mesh simplification.
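The flavor of a deformation-aware collapse cost can be conveyed with a toy function. This is a loose sketch, not the paper's quadric-based DOD metric: it scores an edge collapse by reference-frame geometry plus how much the edge deforms over the animation, so strongly deforming regions survive decimation longer.

```python
import numpy as np

def collapse_cost(v_traj_a, v_traj_b, alpha=1.0):
    """Toy deformation-aware cost for collapsing the edge (a, b).
    v_traj_a, v_traj_b : (F, 3) vertex positions over F animation frames.
    Static term: edge length in the first (reference) frame.
    Deformation term: mean deviation of the per-frame edge vector from the
    reference frame, so rigidly moving edges are cheap to collapse."""
    edge = v_traj_b - v_traj_a                       # (F, 3) edge vector per frame
    static = np.linalg.norm(edge[0])                 # reference-frame geometry
    deform = np.linalg.norm(edge - edge[0], axis=1).mean()
    return static + alpha * deform
```

An edge that merely translates with the mesh has a zero deformation term, while a stretching edge accumulates extra cost, mirroring the intent of the added DOD term.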


international conference on computer graphics and interactive techniques | 2012

Correcting for optical aberrations using multilayer displays

Fu-Chung Huang; Douglas Lanman; Brian A. Barsky; Ramesh Raskar

Optical aberrations of the human eye are currently corrected using eyeglasses, contact lenses, or surgery. We describe a fourth option: modifying the composition of displayed content such that the perceived image appears in focus, after passing through an eye with known optical defects. Prior approaches synthesize pre-filtered images by deconvolving the content by the point spread function of the aberrated eye. Such methods have not led to practical applications, due to severely reduced contrast and ringing artifacts. We address these limitations by introducing multilayer pre-filtering, implemented using stacks of semi-transparent, light-emitting layers. By optimizing the layer positions and the partition of spatial frequencies between layers, contrast is improved and ringing artifacts are eliminated. We assess design constraints for multilayer displays; autostereoscopic light field displays are identified as a preferred, thin form factor architecture, allowing synthetic layers to be displaced in response to viewer movement and refractive errors. We assess the benefits of multilayer pre-filtering versus prior light field pre-distortion methods, showing pre-filtering works within the constraints of current display resolutions. We conclude by analyzing benefits and limitations using a prototype multilayer LCD.
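The partition of spatial frequencies between layers can be sketched with a fixed radial split in the Fourier domain. The cutoff here is an illustrative assumption (the paper's fractional frequency separation is optimized, not fixed); each band would then be pre-filtered on the layer whose depth preserves it best.

```python
import numpy as np

def partition_frequencies(image, cutoff):
    """Split an image into complementary low-pass and high-pass bands in the
    Fourier domain.  In multilayer pre-filtering, each band would be assigned
    to a different display layer, keeping each layer's inverse filter
    well-conditioned."""
    F = np.fft.fft2(image)
    fy = np.fft.fftfreq(image.shape[0])[:, None]   # cycles per pixel, vertical
    fx = np.fft.fftfreq(image.shape[1])[None, :]   # cycles per pixel, horizontal
    low_mask = np.sqrt(fx ** 2 + fy ** 2) <= cutoff
    low = np.real(np.fft.ifft2(F * low_mask))
    high = np.real(np.fft.ifft2(F * ~low_mask))
    return low, high
```

The two bands sum exactly back to the input, so no content is lost by the split; only its assignment to layers changes.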


eurographics | 2010

Sparsely precomputing the light transport matrix for real-time rendering

Fu-Chung Huang; Ravi Ramamoorthi

Precomputation-based methods have enabled real-time rendering with natural illumination, all-frequency shadows, and global illumination. However, a major bottleneck is the precomputation time, which can take hours to days. While the final real-time data structures are typically heavily compressed with clustered principal component analysis and/or wavelets, a full light transport matrix still needs to be precomputed for a synthetic scene, often by exhaustive sampling and raytracing. This is expensive and makes rapid prototyping of new scenes prohibitive. In this paper, we show that the precomputation can be made much more efficient by adaptive and sparse sampling of light transport. We first select a small subset of “dense vertices”, where we sample the angular dimensions more completely (but still adaptively). The remaining “sparse vertices” require only a few angular samples, isolating features of the light transport. They can then be interpolated from nearby dense vertices using locally low rank approximations. We demonstrate sparse sampling and precomputation 5× faster than previous methods.
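The locally low-rank interpolation step can be sketched as a small least-squares fit. The function and array shapes below are hypothetical; the real system also chooses sample locations adaptively and clusters vertices spatially.

```python
import numpy as np

def interpolate_sparse_row(dense_rows, sampled_idx, sampled_vals):
    """Reconstruct a sparse vertex's full light-transport row as a linear
    combination of nearby fully-sampled ('dense') rows, fitting the
    combination weights only on the few angular samples actually taken.
    dense_rows   : (n_dense, n_angular) fully sampled transport rows.
    sampled_idx  : indices of the angular samples taken at the sparse vertex.
    sampled_vals : measured transport values at those indices."""
    A = dense_rows[:, sampled_idx].T                 # (n_samples, n_dense)
    w, *_ = np.linalg.lstsq(A, sampled_vals, rcond=None)
    return w @ dense_rows                            # predicted full row
```

When the sparse vertex's transport genuinely lies near the span of its dense neighbours (the locally low rank assumption), a handful of samples suffices to recover the whole row.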


IEEE Transactions on Circuits and Systems for Video Technology | 2012

Animating Lip-Sync Characters With Dominated Animeme Models

Yu-Mei Chen; Fu-Chung Huang; Shuen-Huei Guan; Bing-Yu Chen

Character speech animation is traditionally considered important but tedious work, especially when taking lip synchronization (lip-sync) into consideration. Although some methods have been proposed to ease the burden on artists creating facial and speech animation, almost none is both fast and efficient. In this paper, we introduce a framework for synthesizing lip-sync character speech animation in real time from a given speech sequence and its corresponding texts. We start by training dominated animeme models (DAMs) for each kind of phoneme by learning the character's animation control signal through an expectation-maximization (EM)-style optimization approach. The DAMs are further decomposed into polynomial-fitted animeme models and corresponding dominance functions while taking coarticulation into account. Finally, given a novel speech sequence and its corresponding texts, the animation control signal of the character can be synthesized in real time with the trained DAMs. The synthesized lip-sync animation can even preserve exaggerated characteristics of the character's facial geometry. Moreover, since our method can perform in real time, it can be used for many applications, such as lip-sync animation prototyping, multilingual animation reproduction, avatar speech, and mass animation production. Furthermore, the synthesized animation control signal can be imported into 3-D packages for further adjustment, so our method can be easily integrated into the existing production pipeline.
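The synthesis step can be sketched by blending per-phoneme polynomial curves with dominance functions. The Gaussian dominance shape and the normalization below are illustrative assumptions in the spirit of dominance-based coarticulation models, not the paper's trained DAMs:

```python
import numpy as np

def synthesize_control(t, phonemes):
    """Blend per-phoneme animeme curves into one animation control signal.
    t        : (N,) time samples.
    phonemes : list of (center, width, poly_coeffs); each phoneme contributes
               a polynomial 'animeme' in local time, weighted by a Gaussian
               dominance function so neighbouring phonemes coarticulate."""
    num = np.zeros_like(t)
    den = np.zeros_like(t)
    for center, width, coeffs in phonemes:
        dom = np.exp(-0.5 * ((t - center) / width) ** 2)   # dominance weight
        val = np.polyval(coeffs, t - center)               # animeme shape
        num += dom * val
        den += dom
    # Normalised blend; the epsilon guards frames no phoneme dominates.
    return num / np.maximum(den, 1e-8)
```

Near each phoneme's center its own curve dominates, and between phonemes the output transitions smoothly, which is the coarticulation behaviour the dominance functions encode.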


international conference on computer graphics and interactive techniques | 2009

Animating lip-sync speech faces by dominated animeme models

Fu-Chung Huang; Yu-Mei Chen; Tse-Hsien Wang; Bing-Yu Chen; Shuen-Huei Guan

Speech animation is traditionally considered important but tedious work for most applications, because the muscles on the face are complex and dynamically interacting. In this paper, we introduce a framework for synthesizing a 3D lip-sync speech animation from a given speech sequence and its corresponding texts. We first identify the representative key-lip-shapes from a training video that are important for blend-shapes, and guide the artist in creating the corresponding 3D key-faces (lips). The training faces in the video are then cross-mapped to the crafted key-faces to construct the Dominated Animeme Models (DAM) for each kind of phoneme. Considering the coarticulation effects in animation control signals from the cross-mapped training faces, the DAM computes two functions: polynomial-fitted animeme shape functions and corresponding dominance weighting functions. Finally, given a novel speech sequence and its corresponding texts, a lip-sync speech animation can be synthesized in a short time with the DAM.


international conference on computer graphics and interactive techniques | 2013

Computational light field display for correcting visual aberrations

Fu-Chung Huang; Gordon Wetzstein; Brian A. Barsky; Ramesh Raskar

We create a computational light field display that corrects for visual aberrations. This new method enables better image resolution and higher image contrast. The prototype is built using readily available off-the-shelf components and has a thin form factor for mobile devices.


international conference on computer graphics and interactive techniques | 2008

Lips-sync 3D speech animation

Fu-Chung Huang; Bing-Yu Chen; Yung-Yu Chuang; Shuen-Huei Guan

Facial animation is traditionally considered an important but tedious task for many applications. Recently, the demand for lip-sync animation has been increasing, but there are few fast and easy generation methods. In this talk, a system to synthesize lip-sync speech animation given a novel utterance is presented. Our system uses a nonlinear blend-shape method and derives key-shapes using a novel automatic clustering algorithm. Finally, a Gaussian phoneme model is used to predict the proper motion dynamics for synthesizing a new speech animation.


european conference on computer vision | 2014

Vision Correcting Displays Based on Inverse Blurring and Aberration Compensation

Brian A. Barsky; Fu-Chung Huang; Douglas Lanman; Gordon Wetzstein; Ramesh Raskar

The concept of a vision correcting display involves digitally modifying the content of a display using measurements of the optical aberrations of the viewer’s eye so that the display can be seen in sharp focus by the user without requiring the use of eyeglasses or contact lenses. Our first approach inversely blurs the image content on a single layer. After identifying fundamental limitations of this approach, we propose the multilayer concept. We then develop a fractional frequency separation method to enhance the image contrast and build a multilayer prototype comprising transparent LCDs. Finally, we combine our viewer-adaptive inverse blurring with off-the-shelf lenslets or parallax barriers and demonstrate that the resulting vision-correcting computational display system facilitates significantly higher contrast and resolution as compared to previous solutions. We also demonstrate the capability to correct higher order aberrations.

Collaboration


Dive into Fu-Chung Huang's collaboration.

Top Co-Authors

Bing-Yu Chen, National Taiwan University
Ramesh Raskar, Massachusetts Institute of Technology
Shuen-Huei Guan, National Taiwan University
Yung-Yu Chuang, National Taiwan University
Yu-Mei Chen, National Taiwan University