Lvdi Wang
Microsoft
Publication
Featured research published by Lvdi Wang.
eurographics symposium on rendering techniques | 2007
Lvdi Wang; Li-Yi Wei; Kun Zhou; Baining Guo; Heung-Yeung Shum
We introduce high dynamic range image hallucination for adding high dynamic range details to the over-exposed and under-exposed regions of a low dynamic range image. Our method is based on a simple assumption: there exist high quality patches in the image with textures similar to those of the regions that are over- or under-exposed. Hence, we can add high dynamic range details to a region by simply transferring texture details from another patch that may be under a different illumination level. In our approach, a user only needs to annotate the image with a few strokes to indicate textures that can be applied to the corresponding under-exposed or over-exposed regions, and these regions are automatically hallucinated by our algorithm. Experiments demonstrate that our simple, yet effective approach is able to significantly increase the amount of texture details in a wide range of common scenarios, with a modest amount of user interaction.
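The core transfer idea, moving high-frequency detail from a well-exposed patch onto the smooth base of a clipped region, can be sketched as follows. This is an illustrative simplification (box blur in log-luminance on same-shape patches), not the paper's stroke-guided algorithm:

```python
import numpy as np

def transfer_detail(target, source, blur=3):
    """Graft high-frequency log-luminance detail from `source` onto the
    low-frequency base of `target` (2D luminance patches, equal shape)."""
    eps = 1e-4
    log_t = np.log(target + eps)
    log_s = np.log(source + eps)

    def box(img, r):
        # naive box blur with edge padding as the low-frequency base
        k = 2 * r + 1
        pad = np.pad(img, r, mode="edge")
        out = np.zeros_like(img)
        for dy in range(k):
            for dx in range(k):
                out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)

    base = box(log_t, blur)              # what survives in the clipped region
    detail = log_s - box(log_s, blur)    # texture detail from the donor patch
    return np.exp(base + detail) - eps
```

If the donor patch is flat, no detail is added and the target's base is returned unchanged; a textured donor injects its high-frequency variation.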
international conference on computer graphics and interactive techniques | 2012
Menglei Chai; Lvdi Wang; Yanlin Weng; Yizhou Yu; Baining Guo; Kun Zhou
Human hair is known to be very difficult to model or reconstruct. In this paper, we focus on applications related to portrait manipulation and take an application-driven approach to hair modeling. To enable an average user to achieve interesting portrait manipulation results, we develop a single-view hair modeling technique with modest user interaction to meet the unique requirements set by portrait manipulation. Our method relies on heuristics to generate a plausible high-resolution strand-based 3D hair model. This is made possible by an effective high-precision 2D strand tracing algorithm, which explicitly models uncertainty and local layering during tracing. The depth of the traced strands is solved through an optimization that simultaneously considers depth constraints, layering constraints, and regularization terms. Our single-view hair modeling enables a number of interesting applications that were previously challenging, including transferring the hairstyle of one subject to another in a potentially different pose, rendering the original portrait in a novel view, and image-space hair editing.
international conference on computer graphics and interactive techniques | 2009
Lvdi Wang; Yizhou Yu; Kun Zhou; Baining Guo
We present an example-based approach to hair modeling because creating hairstyles either manually or through image-based acquisition is a costly and time-consuming process. We introduce a hierarchical hair synthesis framework that views a hairstyle both as a 3D vector field and a 2D arrangement of hair strands on the scalp. Since hair forms wisps, a hierarchical hair clustering algorithm has been developed for detecting wisps in example hairstyles. The coarsest level of the output hairstyle is synthesized using traditional 2D texture synthesis techniques. Synthesizing finer levels of the hierarchy is based on cluster oriented detail transfer. Finally, we compute a discrete tangent vector field from the synthesized hair at every level of the hierarchy to remove undesired inconsistencies among hair trajectories. Improved hair trajectories can be extracted from the vector field. Based on our automatic hair synthesis method, we have also developed simple user-controlled synthesis and editing techniques including feature-preserving combing as well as detail transfer between different hairstyles.
international conference on computer graphics and interactive techniques | 2013
Menglei Chai; Lvdi Wang; Yanlin Weng; Xiaogang Jin; Kun Zhou
This paper presents a single-view hair modeling technique for generating visually and physically plausible 3D hair models with modest user interaction. By solving an unambiguous 3D vector field explicitly from the image and adopting an iterative hair generation algorithm, we can create hair models that not only visually match the original input very well but also possess physical plausibility (e.g., having strand roots fixed on the scalp and preserving the length and continuity of real strands in the image as much as possible). The latter property enables us to manipulate hair in many new ways that were previously very difficult with a single image, such as dynamic simulation or interactive hair shape editing. We further extend the modeling approach to handle simple video input, and generate dynamic 3D hair models. This allows users to manipulate hair in a video or transfer styles from images to videos.
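The iterative generation step, growing each strand from a fixed scalp root by following the solved 3D vector field, can be sketched as a simple streamline integration. The toy `field` callback below stands in for the field solved from the image:

```python
import numpy as np

def grow_strand(root, field, steps=50, h=0.5):
    """Grow one hair strand from a scalp `root` by Euler-stepping
    through a 3D direction field.  `field(p)` maps a position to a
    growth direction; the strand stops where the field vanishes."""
    pts = [np.asarray(root, float)]
    for _ in range(steps):
        d = np.asarray(field(pts[-1]), float)
        n = np.linalg.norm(d)
        if n < 1e-8:                     # field vanished: end the strand
            break
        pts.append(pts[-1] + h * d / n)  # fixed step preserves strand length
    return np.array(pts)
```

Keeping the root fixed and taking uniform steps is what gives the generated strands the physical plausibility (attached roots, controlled length) that the abstract emphasizes.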
international conference on computer graphics and interactive techniques | 2010
Lvdi Wang; Kun Zhou; Yizhou Yu; Baining Guo
In this paper, we introduce a compact random-access vector representation for solid textures made of intermixed regions with relatively smooth internal color variations. It is feature-preserving and resolution-independent. In this representation, a texture volume is divided into multiple regions. Region boundaries are implicitly defined using a signed distance function. Color variations within the regions are represented using compactly supported radial basis functions (RBFs). With a spatial indexing structure, such RBFs enable efficient color evaluation during real-time solid texture mapping. Effective techniques have been developed for generating such a vector representation from bitmap solid textures. Data structures and techniques have also been developed to compactly store region labels and distance values for efficient random access during boundary and color evaluation.
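Color evaluation inside a region can be sketched as a normalized blend of compactly supported RBFs. The sketch below uses a Wendland C2 kernel and omits the spatial index that, in the real system, restricts evaluation to nearby basis functions:

```python
import numpy as np

def wendland(r, h):
    """Wendland C2 kernel: compactly supported, zero for r >= h."""
    q = np.clip(r / h, 0.0, 1.0)
    return (1 - q) ** 4 * (4 * q + 1)

def eval_color(p, centers, colors, h):
    """Blend RBF colors at point `p`; only RBFs whose support
    contains `p` contribute, which is what makes random-access
    evaluation cheap."""
    r = np.linalg.norm(centers - p, axis=1)
    w = wendland(r, h)
    s = w.sum()
    if s < 1e-12:
        return np.zeros(colors.shape[1])
    return (w[:, None] * colors).sum(axis=0) / s
```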
interactive 3d graphics and games | 2007
Lvdi Wang; Xi Wang; Peter-Pike J. Sloan; Li-Yi Wei; Xin Tong; Baining Guo
High dynamic range (HDR) images are increasingly employed in games and interactive applications for accurate rendering and illumination. One disadvantage of HDR images is their large data size; unfortunately, even though solutions have been proposed for future hardware, commodity graphics hardware today does not provide any native compression for HDR textures. In this paper, we perform an extensive study of possible methods for supporting compressed HDR textures on commodity graphics hardware. A desirable solution must be implementable on DX9 generation hardware, as well as meet the following requirements. First, the data size should be small and the reconstruction quality must be good. Second, the decompression must be efficient; in particular, bilinear/trilinear/anisotropic texture filtering ought to be performed via native texture hardware instead of custom pixel shader filtering. We present a solution that optimally meets these requirements. Our basic idea is to convert an HDR texture to a custom LUVW space followed by an encoding into a pair of 8-bit DXT textures. Since the DXT format is supported on modern commodity graphics hardware, our approach has wide applicability. Our compression ratio is 3:1 for FP16 inputs, allowing applications to store 3 times the number of HDR texels in the same memory footprint. Our decompressor is efficient and can be implemented as a short pixel program. We leverage existing texturing hardware for fast decompression and native texture filtering, allowing HDR textures to be utilized just like traditional 8-bit DXT textures. Our reduced data size has a further advantage: it is even faster than rendering from uncompressed HDR textures due to our reduced texture memory access. Given the quality and efficiency, we believe our approach is suitable for games and interactive applications.
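The magnitude/direction split behind an LUVW-style encoding can be illustrated with a minimal round trip: an 8-bit log-magnitude channel plus an 8-bit normalized color triple. The exact transform below (log2 magnitude over ±16 stops) is illustrative, not the paper's LUVW mapping, and it omits the subsequent DXT block compression:

```python
import numpy as np

def encode(hdr, lmax=16.0):
    """Split HDR RGB texels into 8-bit log-magnitude + 8-bit direction."""
    mag = np.linalg.norm(hdr, axis=-1)
    L = np.clip(np.log2(np.maximum(mag, 2.0 ** -lmax)) / lmax * 0.5 + 0.5, 0, 1)
    dirs = hdr / np.maximum(mag, 1e-12)[..., None]   # unit "direction" color
    return np.round(L * 255).astype(np.uint8), np.round(dirs * 255).astype(np.uint8)

def decode(L8, dir8, lmax=16.0):
    """Invert the split: recover magnitude from the log channel,
    then rescale the stored direction."""
    mag = 2.0 ** ((L8 / 255.0 - 0.5) * 2 * lmax)
    return dir8 / 255.0 * mag[..., None]
```

Quantizing the log of the magnitude spreads the 256 levels evenly over the dynamic range in stops, which is why clipped highlights and deep shadows survive an 8-bit channel.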
international conference on computer graphics and interactive techniques | 2011
Lvdi Wang; Yizhou Yu; Kun Zhou; Baining Guo
We introduce multiscale vector volumes, a compact vector representation for volumetric objects with complex internal structures spanning a wide range of scales. With our representation, an object is decomposed into components and each component is modeled as an SDF tree, a novel data structure that uses multiple signed distance functions (SDFs) to further decompose the volumetric component into regions. Multiple signed distance functions collectively can represent non-manifold surfaces and deliver a powerful vector representation for complex volumetric features. We use multiscale embedding to combine object components at different scales into one complex volumetric object. As a result, regions with dramatically different scales and complexities can co-exist in an object. To facilitate volumetric object authoring and editing, we have also developed a scripting language and a GUI prototype. With the help of a recursively defined spatial indexing structure, our vector representation supports fast random access, and arbitrary cross sections of complex volumetric objects can be visualized in real time.
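How several signed distance functions jointly partition one component into regions can be sketched by classifying a point with the sign pattern of each SDF. This is only the flat partitioning idea; the SDF tree's hierarchy and spatial index are omitted:

```python
import numpy as np

def sphere_sdf(center, radius):
    """Signed distance to a sphere: negative inside, positive outside."""
    c = np.asarray(center, float)
    return lambda p: np.linalg.norm(np.asarray(p, float) - c) - radius

def region_label(p, sdfs):
    """Label a point by the sign pattern of every SDF; k SDFs can
    carve a component into up to 2**k regions, and the shared zero
    sets can represent non-manifold boundary surfaces."""
    return tuple(1 if f(p) >= 0 else 0 for f in sdfs)
```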
Computer Graphics Forum | 2013
Yanlin Weng; Lvdi Wang; Xiao Li; Menglei Chai; Kun Zhou
In this paper we study the problem of hair interpolation: given two 3D hair models, we want to generate a sequence of intermediate hair models that transforms one input into the other smoothly and in an aesthetically pleasing manner. We propose an automatic method that efficiently calculates a many-to-many strand correspondence between two or more given hair models, taking into account the multi-scale clustering structure of hair. Experiments demonstrate that hair interpolation can be used for producing more vivid portrait morphing effects and enabling a novel example-based hair styling methodology, where a user can interactively create new hairstyles by continuously exploring a "style space" spanning multiple input hair models.
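Once a strand correspondence is available, producing an intermediate strand reduces to resampling a matched pair to a common vertex count and blending. The per-pair sketch below assumes a given one-to-one match; the paper's contribution is computing the many-to-many correspondence itself:

```python
import numpy as np

def resample(strand, n):
    """Arc-length resample a polyline strand to n vertices."""
    d = np.linalg.norm(np.diff(strand, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(d)])
    t /= t[-1]
    u = np.linspace(0.0, 1.0, n)
    return np.stack([np.interp(u, t, strand[:, k])
                     for k in range(strand.shape[1])], axis=1)

def blend_strands(a, b, alpha, n=32):
    """Intermediate strand between corresponding strands a and b:
    resample both to n vertices, then blend linearly."""
    return (1 - alpha) * resample(a, n) + alpha * resample(b, n)
```

Sweeping `alpha` over [0, 1] for every matched pair yields the smooth sequence of intermediate hair models; exploring a simplex of such weights over several inputs is the "style space" idea.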
international conference on computer graphics and interactive techniques | 2014
Zexiang Xu; Hsiang-Tao Wu; Lvdi Wang; Changxi Zheng; Xin Tong; Yue Qi
International Journal of Computer Vision | 2002
Heung-Yeung Shum; Lvdi Wang; Jinxiang Chai; Xin Tong