Network


Latest external collaborations at the country level. Click a dot to dive into the details.

Hotspot


Dive into the research topics where William V. Baxter is active.

Publication


Featured research published by William V. Baxter.


Interactive 3D Graphics and Games | 2001

HLODs for faster display of large static and dynamic environments

Carl Erikson; Dinesh Manocha; William V. Baxter

We present an algorithm and a system for accelerated display of massive static and dynamic environments using hierarchical simplification. Given a geometric dataset, we represent it using a scene graph and compute levels of detail (LODs) for each node in the graph. We augment the LODs with automatically-generated hierarchical levels of detail (HLODs) that serve as higher fidelity drastic simplifications of entire branches of the scene graph. We extend the algorithm to handle a class of dynamic environments by incrementally recomputing a subset of the HLODs on the fly when objects move. We leverage the properties of the HLOD scene graph in our system, using them to render the environment in a specified image quality or target frame rate mode. The resulting algorithms have been implemented as part of a system named SHAPE. We demonstrate its performance on complex CAD environments composed of tens of millions of polygons. Overall, SHAPE is able to achieve considerable speedups in frame rate with little loss in image quality.
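As a hedged illustration of the HLOD traversal the abstract describes (a sketch, not the SHAPE system itself): each node stores its own LODs plus HLODs that approximate its entire subtree, and rendering draws a single HLOD for a whole branch whenever one meets the error bound, recursing only otherwise. The names `error_of` (a stand-in for a screen-space error metric) and the mesh ordering are assumptions here.

```python
class Node:
    def __init__(self, lods, hlods, children=()):
        self.lods = lods      # per-object levels of detail, coarse -> fine
        self.hlods = hlods    # simplifications of this node's entire subtree
        self.children = list(children)

def coarsest_within(meshes, error_of, bound):
    # pick the coarsest representation whose projected error is acceptable
    for mesh in meshes:  # assumed ordered coarse -> fine
        if error_of(mesh) <= bound:
            return mesh
    return None

def render(node, error_of, bound, draw):
    """Draw node's subtree, preferring a single HLOD over recursion."""
    hlod = coarsest_within(node.hlods, error_of, bound)
    if hlod is not None:
        draw(hlod)            # one simplified mesh replaces the whole branch
        return
    lod = coarsest_within(node.lods, error_of, bound)
    if lod is not None:
        draw(lod)
    for child in node.children:
        render(child, error_of, bound, draw)
```

With a loose error bound the root's HLOD is drawn alone; tightening the bound makes the traversal descend to the individual objects, which is how the system trades image quality against frame rate.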


Non-Photorealistic Animation and Rendering | 2004

IMPaSTo: a realistic, interactive model for paint

William V. Baxter; Jeremy D. Wendt; Ming C. Lin

We present a paint model for use in interactive painting systems that captures a wide range of styles similar to oils or acrylics. The model includes both a numerical simulation to recreate the physical flow of paint and an optical model to mimic the paint appearance. Our physical model for paint is based on a conservative advection scheme that simulates the basic dynamics of paint, augmented with heuristics that model the remaining key properties needed for painting. We allow one active wet layer and an unlimited number of dry layers, with each layer represented as a height field. We represent paintings in terms of paint pigments rather than RGB colors, allowing us to relight paintings under any full-spectrum illuminant. We also incorporate an interactive implementation of the Kubelka-Munk diffuse reflectance model, and use a novel eight-component color space for greater color accuracy. We have integrated our paint model into a prototype painting system, with both our physical simulation and rendering algorithms running as fragment programs on the graphics hardware. The system demonstrates the model's effectiveness in rendering a variety of painting styles, from semi-transparent glazes to scumbling to thick impasto.
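The Kubelka-Munk diffuse reflectance model mentioned above is a standard formula; the sketch below evaluates it for a single pigment at one spectral sample (the paper's eight-component color space and GPU implementation are not reproduced here).

```python
import math

def km_reflectance(K, S, d, Rg):
    """Kubelka-Munk reflectance of a pigment layer.

    K, S : absorption and scattering coefficients of the pigment
    d    : layer thickness
    Rg   : reflectance of the surface beneath the layer
    """
    a = 1.0 + K / S
    b = math.sqrt(a * a - 1.0)
    coth = 1.0 / math.tanh(b * S * d)
    return (1.0 - Rg * (a - b * coth)) / (a - Rg + b * coth)

def km_infinite(K, S):
    """Opaque-layer limit: R_inf = 1 + K/S - sqrt((K/S)^2 + 2*K/S)."""
    ratio = K / S
    return 1.0 + ratio - math.sqrt(ratio * ratio + 2.0 * ratio)
```

A thick layer converges to `km_infinite`, while a very thin layer passes the background reflectance `Rg` through almost unchanged, which is the behavior that lets glazes over dry paint render plausibly.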


Eurographics | 2002

GigaWalk: interactive walkthrough of complex environments

William V. Baxter; Avneesh Sud; Naga K. Govindaraju; Dinesh Manocha

We present a new parallel algorithm and a system, GigaWalk, for interactive walkthrough of complex, gigabyte-sized environments. Our approach combines occlusion culling and levels of detail and uses two graphics pipelines with one or more processors. GigaWalk uses a unified scene graph representation for multiple acceleration techniques, and performs spatial clustering of geometry, conservative occlusion culling, and load balancing between graphics pipelines and processors. GigaWalk has been used to render CAD environments composed of tens of millions of polygons at interactive rates on systems consisting of two graphics pipelines. Overall, our system's combination of levels of detail and occlusion culling results in significant improvements in frame rate over view-frustum culling alone or either technique by itself.
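As a hedged, single-threaded sketch of the culling pipeline the abstract combines (the real system runs occlusion culling and rendering in parallel on separate graphics pipelines), the helper below chains view-frustum culling, conservative occlusion culling, and LOD selection; `in_frustum`, `is_occluded`, and `pick_lod` stand in for the system's actual tests and are assumptions here.

```python
def visible_set(objects, in_frustum, is_occluded, pick_lod):
    """Return the (object, lod) pairs that survive both culling stages."""
    out = []
    for obj in objects:
        if not in_frustum(obj):
            continue               # view-frustum culling
        if is_occluded(obj):
            continue               # conservative occlusion culling
        out.append((obj, pick_lod(obj)))  # LOD chosen per visible object
    return out
```

The point of combining the stages is that each one discards geometry the next stage would otherwise pay for, which is where the reported frame-rate gains come from.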


International Conference on Computer Graphics and Interactive Techniques | 2007

Locally controllable stylized shading

Hideki Todo; Ken-ichi Anjyo; William V. Baxter; Takeo Igarashi

Recent progress in non-photorealistic rendering (NPR) has led to many stylized shading techniques that efficiently convey visual information about the objects depicted. Another crucial goal of NPR is to give artists simple and direct ways to express the abstract ideas born of their imaginations. In particular, the ability to add intentional, but often unrealistic, shading effects is indispensable for many applications. We propose a set of simple stylized shading algorithms that allow the user to freely add localized light and shade to a model in a manner that is consistent and seamlessly integrated with conventional lighting techniques. The algorithms provide an intuitive, direct manipulation method based on a paint-brush metaphor, to control and edit the light and shade locally as desired. Our prototype system demonstrates how our method can enhance both the quality and range of applicability of conventional stylized shading for offline animation and interactive applications.
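A minimal sketch of the core idea, assuming an artist-painted per-surface-point offset (the paper's actual paint-brush manipulation and blending are richer than this): the localized edit is added to the conventional Lambert term before toon quantization, so painted light and shade coexist with ordinary lighting. The threshold and the two tone values are illustrative assumptions.

```python
def toon_shade(normal, light_dir, offset, threshold=0.5):
    """Two-tone shading with an artist-painted local offset.

    normal, light_dir : unit 3-vectors as tuples
    offset            : painted light (+) or shade (-) at this point
    """
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    intensity = n_dot_l + offset          # the artist's local edit
    return 1.0 if intensity >= threshold else 0.3  # toon quantization
```

A positive offset pushes a region into the lit tone even where n.l alone would leave it dark, which is the kind of intentional, unrealistic shading effect the abstract describes.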


Non-Photorealistic Animation and Rendering | 2006

Tweakable light and shade for cartoon animation

Ken-ichi Anjyo; Shuhei Wemler; William V. Baxter

Light and shade in the context of non-photorealistic imaging, such as digital cel animation, are semantic notations rather than physical phenomena. Therefore, stylized light and shade should be intentionally animated instead of simulated. This paper proposes an intuitive, direct manipulation method for animating stylized light and shade in real time. Our method provides intuitive click-and-drag operations for translating and deforming the shaded areas, including rotation, directional scaling, splitting, and squaring of highlights, all without tedious parameter tuning. Our prototype system demonstrates the algorithms in our method, which are implemented along with a real-time cartoon shader on commodity graphics hardware. This system allows the interactive creation of stylized shading keyframes for animations, illustrating the effectiveness of the proposed techniques.
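One simple way to realize a click-and-drag highlight translation (a sketch, not necessarily the paper's exact operators) is to offset the half vector before thresholding: dragging changes `offset`, which slides the highlight across the surface without touching the light setup.

```python
import math

def _normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def highlight(n, h, offset=(0.0, 0.0, 0.0), threshold=0.9):
    """1.0 inside the stylized highlight, 0.0 outside.

    n      : unit surface normal
    h      : unit half vector of the nominal light/view setup
    offset : drag vector supplied by the artist (an assumption here)
    """
    h2 = _normalize(tuple(hi + oi for hi, oi in zip(h, offset)))
    return 1.0 if sum(a * b for a, b in zip(n, h2)) >= threshold else 0.0
```

Directional scaling, splitting, and squaring would each be further operators on `h2` in the same spirit; they are not sketched here.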


Non-Photorealistic Animation and Rendering | 2008

Rigid shape interpolation using normal equations

William V. Baxter; Pascal Barla; Ken-ichi Anjyo

In this paper we provide a new compact formulation of rigid shape interpolation in terms of normal equations, and propose several enhancements to previous techniques. Specifically, we propose 1) a way to improve mesh independence, making the interpolation result less influenced by variations in tessellation, 2) a faster way to make the interpolation symmetric, and 3) simple modifications to enable controllable interpolation. Finally we also identify 4) a failure mode related to large rotations that is easily triggered in practical use, and we present a solution for this as well.
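The per-element ingredient of rigid interpolation can be sketched as follows (a hedged 2D illustration, not the paper's method): a transform is factored by polar decomposition into rotation times stretch, the rotation angle and the stretch are blended separately, and the paper's contribution then assembles such per-triangle targets into a global least-squares system via normal equations, which is not reproduced here.

```python
import numpy as np

def polar_2d(A):
    """Polar decomposition A = R @ S with R a proper rotation, S symmetric."""
    U, sigma, Vt = np.linalg.svd(A)
    R = U @ Vt
    if np.linalg.det(R) < 0:          # flip a column to keep det(R) = +1
        U[:, -1] *= -1
        sigma = sigma.copy()
        sigma[-1] *= -1
        R = U @ Vt
    S = Vt.T @ np.diag(sigma) @ Vt
    return R, S

def rigid_interp(A, t):
    """Interpolate identity -> A rigidly: rotate by t*angle, blend stretch."""
    R, S = polar_2d(A)
    angle = t * np.arctan2(R[1, 0], R[0, 0])
    Rt = np.array([[np.cos(angle), -np.sin(angle)],
                   [np.sin(angle),  np.cos(angle)]])
    St = (1 - t) * np.eye(2) + t * S
    return Rt @ St
```

For a 90-degree rotation, the halfway frame is a clean 45-degree rotation; naive linear blending of the matrix entries would instead shrink the shape, which is exactly what rigid interpolation avoids.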


Pacific Conference on Computer Graphics and Applications | 2004

A versatile interactive 3D brush model

William V. Baxter; Ming C. Lin

We present a flexible modeling approach capable of realistically simulating many varieties of brushes commonly used in painting. Our geometric model of brush heads is a combination of subdivision surfaces and hundreds of individual bristles represented by thin polygonal strips. We exploit bristle-to-bristle coherence, simulating only a fraction of the bristles and using interpolation for the remainder. Our dynamic model incorporates realistic physically-based deformation, including anisotropic friction, brush plasticity, and tip spreading. We use an energy minimization framework with a novel geometric representation of the brush head to generate a wider variety of brushes. Finally, we have developed an improved haptic model that provides realistic force feedback, directly related to the results of the brush dynamic simulation. Using this model, we are able to simulate a wide range of brush styles and create an excellent variety of strokes such as the crisp, curvy strokes of Western decorative painting, or rough scratchy strokes like certain Oriental calligraphy. We have also developed an exporter for a popular free 3D modeling package that makes it easier for non-programmers to create any desired style of brush, real or fanciful.
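The bristle-to-bristle coherence the abstract exploits can be sketched as simple interpolation between a few simulated guide bristles (a hedged illustration; the actual model also handles deformation, friction, and spreading, none of which appear here).

```python
def interpolate_bristles(guides, n_total):
    """Fill in n_total bristle tips from a few simulated guide bristles.

    guides : list of at least two simulated tip positions (tuples), in
             order across the brush head. Only these are fully simulated;
             the rest are interpolated, exploiting bristle coherence.
    """
    tips = []
    m = len(guides)
    for i in range(n_total):
        t = i * (m - 1) / (n_total - 1)   # position along the guide span
        j = min(int(t), m - 2)
        f = t - j
        a, b = guides[j], guides[j + 1]
        tips.append(tuple((1 - f) * ax + f * bx for ax, bx in zip(a, b)))
    return tips
```

Simulating only the guides keeps the dynamics cheap enough for interactive rates while the interpolated bristles still give the stroke its full visual density.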


Eurographics | 2006

Latent Doodle Space

William V. Baxter; Ken-ichi Anjyo

We propose the concept of a latent doodle space, a low-dimensional space derived from a set of input doodles, or simple line drawings. The latent space provides a foundation for generating new drawings that are similar, but not identical, to the input examples. The two key components of this technique are 1) a heuristic algorithm for finding stroke correspondences between the drawings, and 2) the use of latent variable methods to automatically extract a low-dimensional latent doodle space from the inputs. We present two practical applications that demonstrate the utility of this idea: first, a randomized stamp tool that creates a different image on every usage; and second, "personalized probabilistic fonts," a handwriting synthesis technique that mimics the idiosyncrasies of one's own handwriting.
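As a hedged sketch of the latent-space component (PCA is chosen here as one simple latent variable method; the stroke-correspondence step is assumed already solved, so each doodle is a fixed-length vector):

```python
import numpy as np

def build_latent_space(doodles, k=2):
    """PCA sketch of a latent doodle space.

    doodles : (n, d) array-like, each row a drawing flattened to a
              fixed-length vector of corresponded stroke points.
    Returns the mean, a (d, k) basis, and latent coordinates of the inputs.
    """
    X = np.asarray(doodles, dtype=float)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:k].T                  # top-k principal directions
    latents = (X - mean) @ basis      # inputs projected into latent space
    return mean, basis, latents

def synthesize(mean, basis, z):
    """Map a latent point z back to a new doodle vector."""
    return mean + basis @ z
```

Sampling `z` near the inputs' latent coordinates yields drawings that are similar but not identical to the examples, which is the behavior the randomized stamp tool relies on.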


Graphical Models | 2007

Finite volume flow simulations on arbitrary domains

Jeremy D. Wendt; William V. Baxter; Ipek Oguz; Ming C. Lin

We present a novel method for solving the incompressible Navier-Stokes equations that more accurately handles arbitrary boundary conditions and sharp geometric features in the fluid domain. It uses a space-filling tetrahedral mesh, which can be created using many well-known methods, to represent the fluid domain. Examples of the method's strengths are illustrated by free-surface fluid simulations and smoke simulations of flows around objects with complex geometry.
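The paper works on 3D tetrahedral meshes; as a hedged illustration of the conservative finite-volume idea it builds on, here is first-order upwind advection on a 1D periodic grid. Each cell exchanges flux only through its faces, so the total advected quantity is conserved exactly.

```python
def fv_advect(q, u, dt, dx):
    """One upwind finite-volume step on a periodic 1D grid (u > 0 assumed).

    q : list of cell-averaged values
    Returns the updated cell averages; sum(q) is preserved exactly.
    """
    n = len(q)
    # upwind flux through the left face of cell i
    flux = [u * q[(i - 1) % n] for i in range(n)]
    # cell update: flux in through the left face minus flux out on the right
    return [q[i] + dt / dx * (flux[i] - flux[(i + 1) % n]) for i in range(n)]
```

Because every face flux appears once with each sign, mass cannot be created or destroyed, which is the property that makes finite-volume schemes attractive on the arbitrary domains the paper targets.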


Communications of the ACM | 2004

Physically based virtual painting

Ming C. Lin; William V. Baxter; Vincent Scheib; Jeremy D. Wendt

Tapping the compelling illusion of physical interaction with paints, brushes, surfaces, color, and light, users express the nuances of their visual and emotional imaginations.

Collaboration


Dive into William V. Baxter's collaborations.

Top Co-Authors

Ming C. Lin, University of North Carolina at Chapel Hill
Jeremy D. Wendt, University of North Carolina at Chapel Hill
Dinesh Manocha, University of North Carolina at Chapel Hill
Vincent Scheib, University of North Carolina at Chapel Hill
Yuanxin Liu, University of North Carolina at Chapel Hill
Anselmo Lastra, University of North Carolina at Chapel Hill
Avneesh Sud, University of North Carolina at Chapel Hill
Carl Erikson, University of North Carolina at Chapel Hill