Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hongzhi Wu is active.

Publication


Featured research published by Hongzhi Wu.


ACM Transactions on Graphics | 2007

Context-aware textures

Jianye Lu; Athinodoros S. Georghiades; Andreas Glaser; Hongzhi Wu; Li-Yi Wei; Baining Guo; Julie Dorsey; Holly E. Rushmeier

Interesting textures form on the surfaces of objects as the result of external chemical, mechanical, and biological agents. Simulating these textures is necessary to generate models for realistic image synthesis. The textures formed are progressively variant, with the variations depending on the global and local geometric context. We present a method for capturing progressively varying textures and the relevant context parameters that control them. By relating textures and context parameters, we are able to transfer the textures to novel synthetic objects. We present examples of capturing chemical effects, such as rusting; mechanical effects, such as paint cracking; and biological effects, such as the growth of mold on a surface. We demonstrate a user interface that provides a method for specifying where an object is exposed to external agents. We show the results of complex, geometry-dependent textures evolving on synthetic objects.
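The transfer step can be pictured with a small sketch: each captured texel carries its context parameters, and texels on a novel object are synthesized by matching contexts. Below is a minimal nearest-neighbor version, assuming a hypothetical two-parameter context (agent exposure and curvature); the paper's context vectors and synthesis are richer than this.

```python
# A minimal sketch of context-driven texture transfer. All names and the
# 2-parameter context are illustrative assumptions, not the paper's method.
import numpy as np

def transfer_texture(captured_contexts, captured_colors, target_contexts):
    """For each target point, copy the color of the captured sample whose
    context parameters are closest (nearest neighbor in context space)."""
    # captured_contexts: (N, C) context parameters of captured texels
    # captured_colors:   (N, 3) RGB values of captured texels
    # target_contexts:   (M, C) context parameters on the novel object
    d = np.linalg.norm(
        target_contexts[:, None, :] - captured_contexts[None, :, :], axis=-1)
    nearest = np.argmin(d, axis=1)           # (M,) index of best match
    return captured_colors[nearest]          # (M, 3) transferred colors

# Toy usage with a 2-D context (e.g., exposure and curvature).
rng = np.random.default_rng(0)
src_ctx = rng.random((500, 2))
src_rgb = rng.random((500, 3))
dst_ctx = rng.random((200, 2))
print(transfer_texture(src_ctx, src_rgb, dst_ctx).shape)  # (200, 3)
```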


International Conference on Computer Graphics and Interactive Techniques | 2016

Real-time facial animation with image-based dynamic avatars

Chen Cao; Hongzhi Wu; Yanlin Weng; Tianjia Shao; Kun Zhou

We present a novel image-based representation for dynamic 3D avatars, which allows effective handling of various hairstyles and headwear, and can generate expressive facial animations with fine-scale details in real-time. We develop algorithms for creating an image-based avatar from a set of sparsely captured images of a user, using an off-the-shelf web camera at home. An optimization method is proposed to construct a topologically consistent morphable model that approximates the dynamic hair geometry in the captured images. We also design a real-time algorithm for synthesizing novel views of an image-based avatar, so that the avatar follows the facial motions of an arbitrary actor. Compelling results from our pipeline are demonstrated on a variety of cases.
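A minimal sketch of the driving step, assuming a standard linear blendshape model; the paper's image-based representation also handles hair and view synthesis, which this toy omits.

```python
# The avatar's geometry is modeled here as a mean shape plus a weighted sum
# of blendshape offsets, with weights tracked from the actor. The linear
# blendshape form is a common choice and an assumption about this system.
import numpy as np

def animate(mean_shape, blendshapes, actor_weights):
    """Return the deformed avatar vertices for one frame.

    mean_shape    : (V, 3) neutral vertex positions
    blendshapes   : (K, V, 3) per-expression offsets from the mean
    actor_weights : (K,) expression weights tracked from the actor
    """
    return mean_shape + np.tensordot(actor_weights, blendshapes, axes=1)

# Toy usage: 1000 vertices, 20 expressions.
rng = np.random.default_rng(9)
frame = animate(rng.random((1000, 3)), rng.random((20, 1000, 3)),
                rng.random(20))
print(frame.shape)   # (1000, 3)
```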


Computer Graphics Forum | 2011

A Sparse Parametric Mixture Model for BTF Compression, Editing and Rendering

Hongzhi Wu; Julie Dorsey; Holly E. Rushmeier

Bidirectional texture functions (BTFs) represent the appearance of complex materials. Three major shortcomings of BTFs are their bulky storage, the difficulty of editing them and the lack of efficient rendering methods. To reduce storage, many compression techniques have been applied to BTFs, but the results are difficult to edit. To facilitate editing, analytical models have been fit, but at the cost of accuracy of representation for many materials. It becomes even more challenging if efficient rendering is also needed. We introduce a high-quality general representation that is at once compact, easily editable, and efficiently renderable. The representation is computed by adopting the stagewise Lasso algorithm to search for a sparse set of analytical functions, whose weighted sum approximates the input appearance data. We achieve compression rates comparable to a state-of-the-art BTF compression method. We also demonstrate results in BTF editing and rendering.
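A minimal sketch of forward-stagewise regression, the greedy loop at the heart of stagewise Lasso: repeatedly nudge the weight of the dictionary atom most correlated with the residual. The random dictionary and step size below are illustrative assumptions, not the paper's basis of analytical functions.

```python
# Forward-stagewise regression: greedily build a sparse weight vector w so
# that D @ w approximates y. Dictionary and parameters are illustrative.
import numpy as np

def stagewise_lasso(D, y, step=0.01, iters=2000):
    """D : (n_samples, n_atoms) dictionary, columns assumed unit-norm
    y : (n_samples,) target appearance measurements"""
    w = np.zeros(D.shape[1])
    r = y.copy()                                 # current residual
    for _ in range(iters):
        c = D.T @ r                              # correlations with residual
        j = np.argmax(np.abs(c))                 # most correlated atom
        delta = step * np.sign(c[j])
        w[j] += delta                            # tiny step toward the atom
        r -= delta * D[:, j]                     # update residual
    return w

# Toy usage: recover a 3-sparse signal from a random dictionary.
rng = np.random.default_rng(1)
D = rng.standard_normal((200, 50))
D /= np.linalg.norm(D, axis=0)                   # unit-norm columns
w_true = np.zeros(50); w_true[[3, 17, 42]] = [1.0, -0.5, 2.0]
y = D @ w_true
w = stagewise_lasso(D, y)
print(np.nonzero(np.abs(w) > 0.1)[0])            # mostly {3, 17, 42}
```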


International Conference on Computer Graphics and Interactive Techniques | 2011

Physically-based interactive bi-scale material design

Hongzhi Wu; Julie Dorsey; Holly E. Rushmeier

We present the first physically-based interactive system to facilitate consistent appearance design across different scales, through manipulations of both small-scale geometry and materials. The core of our system is a novel reflectance filtering algorithm, which rapidly computes the large-scale appearance from small-scale details, by exploiting the low-rank structures of the Bidirectional Visible Normal Distribution Function and pre-rotated BRDFs in the matrix formulation of our rendering problem. Our algorithm is three orders of magnitude faster than a ground-truth method. We demonstrate various editing results of different small-scale geometry with analytical and measured BRDFs. In addition, we show the applications of our system to physical realization of appearance, as well as modeling of real-world materials using very sparse measurements.
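The low-rank idea can be sketched as follows: if the matrix of pre-rotated BRDFs has (approximately) rank k, the filtering product collapses into two thin matrix multiplications. The shapes and the use of a plain truncated SVD are assumptions about the matrix formulation, for illustration only.

```python
# Approximate W @ B through a rank-k factorization of B: two thin products
# instead of one dense one. Shapes are illustrative assumptions.
import numpy as np

def lowrank_filter(W, B, k):
    """W : (n_patches, n_normals) per-patch visible-normal distribution weights
    B : (n_normals, n_brdf)    pre-rotated BRDF table, one row per normal"""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    Bk_left = U[:, :k] * s[:k]             # (n_normals, k)
    return (W @ Bk_left) @ Vt[:k]          # two thin products

# Toy usage on a synthetic low-rank BRDF table.
rng = np.random.default_rng(2)
B = rng.standard_normal((256, 8)) @ rng.standard_normal((8, 512))
W = rng.random((1000, 256))
approx = lowrank_filter(W, B, k=8)
print(np.allclose(approx, W @ B, atol=1e-6))   # True: rank 8 is exact here
```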


International Conference on Computer Graphics and Interactive Techniques | 2016

AutoHair: fully automatic hair modeling from a single image

Menglei Chai; Tianjia Shao; Hongzhi Wu; Yanlin Weng; Kun Zhou

We introduce AutoHair, the first fully automatic method for 3D hair modeling from a single portrait image, with no user interaction or parameter tuning. Our method efficiently generates complete and high-quality hair geometries, comparable to those generated by state-of-the-art methods that require user interaction. The core components of our method are: a novel hierarchical deep neural network for automatic hair segmentation and hair growth direction estimation, trained over an annotated hair image database; and an efficient and automatic data-driven hair matching and modeling algorithm, based on a large set of 3D hair exemplars. We demonstrate the efficacy and robustness of our method on Internet photos, resulting in a database of around 50K 3D hair models and a corresponding hairstyle space that covers a wide variety of real-world hairstyles. We also show novel applications enabled by our method, including 3D hairstyle space navigation and hair-aware image retrieval.
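A minimal sketch of the data-driven matching stage, assuming hypothetical inputs: exemplars are scored by silhouette overlap with the predicted hair mask plus agreement with the predicted growth directions. The scoring terms and their weighting are illustrative, not the paper's exact energy.

```python
# Score each 3-D hair exemplar by how well its projected silhouette and
# strand directions agree with the network's mask and direction map.
import numpy as np

def match_exemplars(mask, dirs, exemplar_masks, exemplar_dirs, w_dir=0.5):
    """Return exemplar indices sorted from best to worst match.

    mask           : (H, W) bool, predicted hair region
    dirs           : (H, W, 2) unit 2-D growth directions inside the mask
    exemplar_masks : (K, H, W) bool, projected exemplar silhouettes
    exemplar_dirs  : (K, H, W, 2) projected exemplar strand directions
    """
    inter = (exemplar_masks & mask).sum(axis=(1, 2))
    union = (exemplar_masks | mask).sum(axis=(1, 2))
    iou = inter / np.maximum(union, 1)                 # silhouette overlap
    agree = (exemplar_dirs * dirs).sum(-1)             # per-pixel cosine
    dir_score = (agree * mask).sum(axis=(1, 2)) / np.maximum(mask.sum(), 1)
    return np.argsort(-(iou + w_dir * dir_score))      # higher is better

# Toy usage with random data.
rng = np.random.default_rng(3)
m = rng.random((64, 64)) > 0.5
d = rng.standard_normal((64, 64, 2))
d /= np.linalg.norm(d, axis=-1, keepdims=True)
em = rng.random((10, 64, 64)) > 0.5
ed = rng.standard_normal((10, 64, 64, 2))
ed /= np.linalg.norm(ed, axis=-1, keepdims=True)
print(match_exemplars(m, d, em, ed)[:3])   # indices of the 3 best exemplars
```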


IEEE Transactions on Visualization and Computer Graphics | 2014

Effects of Approximate Filtering on the Appearance of Bidirectional Texture Functions

Adrian Jarabo; Hongzhi Wu; Julie Dorsey; Holly E. Rushmeier; Diego Gutierrez

The BTF data structure was a breakthrough for appearance modeling in computer graphics. More research is needed, though, to make BTFs practical in rendering applications. We present the first systematic study of the effects of approximate filtering on the appearance of BTFs, by exploring the spatial, angular and temporal domains over a varied set of stimuli. We perform our initial experiments on simple geometry and lighting, and verify our observations on more complex settings. We consider multi-dimensional filtering versus conventional mipmapping, and find that multi-dimensional filtering produces superior results. We examine the tradeoff between under- and oversampling, and find that different filtering strategies can be applied in each domain, while maintaining visual equivalence with respect to a ground truth. For example, we find that preserving contrast is more important in static than in dynamic images, indicating that greater levels of spatial filtering are possible for animations. We find that filtering can be performed more aggressively in the angular domain than in the spatial domain. Additionally, we find that high-level visual descriptors of the BTF are linked to the perceptual performance of pre-filtered approximations. In turn, some of these high-level descriptors correlate with low-level statistics of the BTF. We show six practical applications of our findings to improved filtering, rendering and compression strategies.
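The mipmapping-versus-multi-dimensional-filtering comparison can be made concrete with a toy tabulated BTF, assuming a simple 4-D layout (two spatial axes plus discretized view and light bins); conventional mipmapping averages only spatially, while the multi-dimensional variant also box-filters the angular axes.

```python
# Contrast spatial-only mipmapping with filtering that also averages the
# angular domain. The 4-D BTF layout here is an illustrative assumption.
import numpy as np

def mipmap_level(btf):
    """One spatial mip level: 2x2 average over x, y only."""
    # btf: (X, Y, V, L) with even X, Y
    return btf.reshape(btf.shape[0] // 2, 2,
                       btf.shape[1] // 2, 2, *btf.shape[2:]).mean(axis=(1, 3))

def multidim_filter(btf, angular_radius=1):
    """Spatial 2x2 average plus a box filter over view/light bins."""
    spatial = mipmap_level(btf)
    k = 2 * angular_radius + 1
    # box filter along the view and light axes (clamped at the borders)
    pad = np.pad(spatial, ((0, 0), (0, 0),
                           (angular_radius,) * 2, (angular_radius,) * 2),
                 mode='edge')
    out = np.zeros_like(spatial)
    for dv in range(k):
        for dl in range(k):
            out += pad[:, :, dv:dv + spatial.shape[2],
                       dl:dl + spatial.shape[3]]
    return out / (k * k)

btf = np.random.default_rng(4).random((8, 8, 6, 6))
print(mipmap_level(btf).shape, multidim_filter(btf).shape)  # (4, 4, 6, 6) x2
```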


Eurographics | 2009

Characteristic point maps

Hongzhi Wu; Julie Dorsey; Holly E. Rushmeier

Extremely dense spatial sampling is often needed to prevent aliasing when rendering objects with high frequency variations in geometry and reflectance. To accelerate the rendering process, we introduce characteristic point maps (CPMs), a hierarchy of view-independent points, which are chosen to preserve the appearance of the original model across different scales. In preprocessing, randomized matrix column sampling is used to reduce an initial dense sampling to a minimum number of characteristic points with associated weights. In rendering, the reflected radiance is computed using a weighted average of reflectances from characteristic points. Unlike existing techniques, our approach requires no restrictions on the original geometry or reflectance functions.
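A minimal sketch of randomized matrix column sampling, the preprocessing tool named above: columns are drawn with probability proportional to their squared norms and reweighted so the sampled subset preserves the Gram matrix in expectation. The exact scheme used in the paper is not specified here, so this standard variant is an assumption.

```python
# Reduce a dense set of samples (columns of A) to a small weighted subset
# that approximates A A^T in expectation. A standard sampling variant.
import numpy as np

def sample_columns(A, k, rng):
    """Pick k columns of A with probability ~ squared norm, rescaled so the
    sampled matrix approximates A in expectation."""
    p = (A ** 2).sum(axis=0)
    p /= p.sum()
    idx = rng.choice(A.shape[1], size=k, replace=True, p=p)
    weights = 1.0 / np.sqrt(k * p[idx])      # importance-sampling weights
    return A[:, idx] * weights, idx, weights

# Toy usage: 2000 dense samples reduced to 100 weighted characteristic ones.
rng = np.random.default_rng(5)
A = rng.standard_normal((64, 2000))
C, idx, w = sample_columns(A, 100, rng)
err = np.linalg.norm(C @ C.T - A @ A.T) / np.linalg.norm(A @ A.T)
print(round(err, 3))                          # small relative error
```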


international conference on computer graphics and interactive techniques | 2013

Inverse bi-scale material design

Hongzhi Wu; Julie Dorsey; Holly E. Rushmeier

One major shortcoming of existing bi-scale material design systems is the lack of support for inverse design: there is no way to directly edit the large-scale appearance and then rapidly solve for the small-scale details that approximate that look. Prior work is either too slow to provide quick feedback, or limited in the types of small-scale details that can be handled. We present a novel computational framework for inverse bi-scale material design. The key idea is to convert the challenging inverse appearance computation into efficient search in two large precomputed libraries: one of a wide range of measured and analytical materials, and the other of procedurally generated and height-map-based geometries. We demonstrate a variety of editing operations, including finding visually equivalent details that produce similar large-scale appearance, which can be useful in applications such as physical fabrication of materials.
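A minimal sketch of the search: precompute a descriptor of the large-scale appearance for every (material, geometry) pair, then answer an edit with a nearest-neighbor query against the target look. The flat feature vector and brute-force scan are illustrative assumptions; the paper's search is more structured.

```python
# Inverse design as a lookup in a precomputed appearance library.
import numpy as np

def build_library(materials, geometries, render_large_scale):
    """Precompute descriptors for all material/geometry combinations."""
    entries, feats = [], []
    for mi, m in enumerate(materials):
        for gi, g in enumerate(geometries):
            entries.append((mi, gi))
            feats.append(render_large_scale(m, g))
    return entries, np.stack(feats)

def inverse_design(target, entries, feats, top_k=5):
    """Return the top_k (material, geometry) pairs closest to the target."""
    d = np.linalg.norm(feats - target, axis=1)
    return [entries[i] for i in np.argsort(d)[:top_k]]

# Toy usage with a stand-in "renderer" that mixes material and geometry.
rng = np.random.default_rng(6)
mats = rng.random((20, 8)); geos = rng.random((30, 8))
fake_render = lambda m, g: np.concatenate([m, g]) + 0.01 * rng.random(16)
entries, feats = build_library(mats, geos, fake_render)
target = feats[123]
print(inverse_design(target, entries, feats)[0])   # ~ entries[123]
```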


IEEE Transactions on Visualization and Computer Graphics | 2016

Simultaneous Localization and Appearance Estimation with a Consumer RGB-D Camera

Hongzhi Wu; Zhaotian Wang; Kun Zhou

Acquiring general material appearance with hand-held consumer RGB-D cameras is difficult for casual users, due to the inaccuracy in reconstructed camera poses and geometry, as well as the unknown lighting that is coupled with materials in measured color images. To tackle these challenges, we present a novel technique for estimating the spatially varying isotropic surface reflectance, solely from color and depth images captured with an RGB-D camera under unknown environment illumination. The core of our approach is a joint optimization, which alternates among solving for plausible camera poses, materials, the environment lighting and normals. To refine camera poses, we exploit the rich spatial and view-dependent variations of materials, treating the object as a localization-self-calibrating model. To recover the unknown lighting, measured color images along with the current estimate of materials are used in a global optimization, efficiently solved by exploiting the sparsity in the wavelet domain. We demonstrate the substantially improved quality of estimated appearance on a variety of daily objects.
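The lighting step can be sketched as proximal-gradient descent with soft thresholding of wavelet coefficients, which is one standard way to exploit wavelet-domain sparsity. The identity data term and the single-level 1-D Haar transform below are stand-ins; the actual system couples lighting with materials, normals and poses in the alternating optimization.

```python
# One sparsity-promoting update of a lighting estimate: gradient step on a
# data term, then soft thresholding in a (single-level Haar) wavelet basis.
import numpy as np

def haar_fwd(x):
    """One level of the orthonormal 1-D Haar transform (len(x) even)."""
    return np.concatenate([(x[0::2] + x[1::2]) / np.sqrt(2),
                           (x[0::2] - x[1::2]) / np.sqrt(2)])

def haar_inv(c):
    n = len(c) // 2
    a, d = c[:n], c[n:]
    x = np.empty(2 * n)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def sparse_lighting_step(measured, predict, lighting, lam=0.05, lr=1.0):
    """Proximal-gradient step: data-term gradient, then soft thresholding
    of the wavelet coefficients to favor sparse lighting."""
    grad = predict(lighting) - measured              # stand-in data term
    c = haar_fwd(lighting - lr * grad)
    c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
    return haar_inv(c)

# Toy usage: denoise a lighting vector that is sparse under Haar.
rng = np.random.default_rng(7)
true = np.repeat(rng.random(8), 16)                  # piecewise-constant light
noisy = true + 0.1 * rng.standard_normal(128)
est = sparse_lighting_step(noisy, lambda l: l, np.zeros(128))
print(np.abs(est - true).mean() < np.abs(noisy - true).mean())  # True
```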


Computer Graphics Forum | 2015

AppFusion: Interactive Appearance Acquisition Using a Kinect Sensor

Hongzhi Wu; Kun Zhou

We present an interactive material acquisition system for average users to capture the spatially varying appearance of daily objects. While an object is being scanned, our system estimates its appearance on-the-fly and provides quick visual feedback. We build the system entirely on low-end, off-the-shelf components: a Kinect sensor, a mirror ball and printed markers. We exploit the Kinect infra-red emitter/receiver, originally designed for depth computation, as an active hand-held reflectometer, to segment the object into clusters of similar specular materials and estimate the roughness parameters of BRDFs simultaneously. Next, the diffuse albedo and specular intensity of the spatially varying materials are rapidly computed in an inverse rendering framework, using data from the Kinect RGB camera. We demonstrate captured results of a range of materials, and physically validate our system.
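The inverse-rendering step has a convenient structure: with roughness fixed per cluster, each observed intensity is linear in the diffuse albedo kd and the specular intensity ks, so both follow from least squares over a cluster's pixels. The precomputed shading terms below are assumptions for illustration.

```python
# Per-cluster fit of diffuse albedo and specular intensity by least squares.
import numpy as np

def fit_cluster(observed, diffuse_shading, specular_shading):
    """Solve observed ~= kd * diffuse_shading + ks * specular_shading.

    observed         : (N,) pixel intensities of one material cluster
    diffuse_shading  : (N,) irradiance term per pixel (lighting x normal)
    specular_shading : (N,) specular lobe term per pixel (fixed roughness)
    """
    A = np.stack([diffuse_shading, specular_shading], axis=1)   # (N, 2)
    (kd, ks), *_ = np.linalg.lstsq(A, observed, rcond=None)
    return kd, ks

# Toy usage: recover kd = 0.7, ks = 0.3 from noisy synthetic pixels.
rng = np.random.default_rng(8)
dif = rng.random(500); spec = rng.random(500) ** 8     # peaky specular term
obs = 0.7 * dif + 0.3 * spec + 0.005 * rng.standard_normal(500)
print(fit_cluster(obs, dif, spec))                     # ~ (0.7, 0.3)
```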

Collaboration


Dive into details of Hongzhi Wu's collaborations.
