
Publication


Featured research published by Brian Budge.


International Conference on Computer Graphics and Interactive Techniques | 2005

Shell maps

Serban D. Porumbescu; Brian Budge; Louis Feng; Kenneth I. Joy

A shell map is a bijective mapping between shell space and texture space that can be used to generate small-scale features on surfaces using a variety of modeling techniques. The method is based upon the generation of an offset surface and the construction of a tetrahedral mesh that fills the space between the base surface and its offset. By identifying a corresponding tetrahedral mesh in texture space, the shell map can be implemented through a straightforward barycentric-coordinate map between corresponding tetrahedra. The generality of shell maps allows texture space to contain geometric objects, procedural volume textures, scalar fields, or other shell-mapped objects.
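The barycentric-coordinate map between corresponding tetrahedra that the abstract describes can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: the function names and the example tetrahedra are mine, and it assumes corresponding shell-space and texture-space tetrahedra share the same vertex ordering.

```python
import numpy as np

def barycentric(p, tet):
    """Barycentric coordinates of point p in a tetrahedron (4x3 array)."""
    a, b, c, d = tet
    # Solve [b-a | c-a | d-a] w = p - a for the last three weights.
    T = np.column_stack((b - a, c - a, d - a))
    w = np.linalg.solve(T, p - a)
    return np.array([1.0 - w.sum(), *w])

def shell_map(p, shell_tet, tex_tet):
    """Map p from shell space to texture space via shared barycentric weights."""
    w = barycentric(p, shell_tet)
    return w @ tex_tet  # weighted sum of the texture-space vertices

# Unit tetrahedron in shell space; its texture-space counterpart is a
# translated copy, so the map should just translate the point.
shell = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
tex = shell + np.array([10.0, 0.0, 0.0])
q = shell_map(np.array([0.25, 0.25, 0.25]), shell, tex)  # -> [10.25, 0.25, 0.25]
```

Because barycentric coordinates are preserved by the map, the correspondence is bijective per tetrahedron pair, which is what lets texture-space geometry be pulled back onto the shell.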


IEEE Computer Graphics and Applications | 2003

An ocularist's approach to human iris synthesis

Aaron E. Lefohn; Brian Budge; Peter Shirley; Richard Caruso; Erik Reinhard

We have a particularly fortunate situation in iris synthesis: artificial eye makers (ocularists) have developed a procedure for physical iris synthesis that results in eyes with all the important appearance characteristics of real eyes. They have refined this procedure over decades, and the performance of their products in the real world completely validates the approach. Our approach lets users (other than trained ocularists) create a realistic-looking human eye, paying particular attention to the iris. We draw on domain knowledge provided by ocularists to provide a toolkit that composes a human iris by layering semitransparent textures. These textures look decidedly painted and unrealistic. The composited result, however, provides a sense of depth to the iris and takes on a level of realism that we believe others have not previously achieved. Prior work on rendering eyes has concentrated predominantly on producing geometry for facial animation or for medical applications. Some work has focused on accurately modeling the cornea. In contrast, the goal of our work is the easy creation of realistic-looking irises for both the ocular prosthetics and entertainment industries.
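Layering semitransparent textures, as the abstract describes, amounts to the standard front-to-back "over" compositing operator. A minimal sketch follows; the layer names and color values are hypothetical, not taken from the paper.

```python
import numpy as np

def composite_over(layers):
    """Front-to-back 'over' compositing of (rgb, alpha) layers.

    layers: list of (rgb, alpha) pairs ordered front (top paint layer)
    to back (base); rgb is 3 floats, alpha is opacity in [0, 1].
    """
    out_rgb = np.zeros(3)
    out_a = 0.0
    for rgb, a in layers:
        # Each layer contributes only through the transparency left above it.
        out_rgb += (1.0 - out_a) * a * np.asarray(rgb, float)
        out_a += (1.0 - out_a) * a
    return out_rgb, out_a

# Hypothetical iris layers: a semitransparent stroma wash over a dark base.
stroma = ([0.35, 0.55, 0.60], 0.5)  # bluish, half transparent
base = ([0.10, 0.07, 0.05], 1.0)    # opaque dark backing
rgb, alpha = composite_over([stroma, base])
```

Stacking several such washes is what gives the composited iris its sense of depth even though each individual texture looks flatly painted.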


Computer Graphics Forum | 2009

Out-of-core Data Management for Path Tracing on Hybrid Resources

Brian Budge; Tony Bernardin; Jeff A. Stuart; Shubhabrata Sengupta; Kenneth I. Joy; John D. Owens

We present a software system that enables path-traced rendering of complex scenes. The system consists of two primary components: an application layer that implements the basic rendering algorithm, and an out-of-core scheduling and data-management layer designed to assist the application layer in exploiting hybrid computational resources (e.g., CPUs and GPUs) simultaneously. We describe the basic system architecture, discuss design decisions of the system's data-management layer, and outline an efficient implementation of a path tracer application, where GPUs perform functions such as ray tracing, shadow tracing, importance-driven light sampling, and surface shading. The use of GPUs speeds up the runtime of these components by factors ranging from two to twenty, resulting in a substantial overall increase in rendering speed. The path tracer scales well with the number of CPUs, GPUs, and the memory per node, as well as with the number of nodes. The result is a system that can render large, complex scenes with strong performance and scalability.
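As a rough illustration of the hybrid design, and emphatically not the paper's actual scheduler (which also manages out-of-core data residency), here is a toy sketch in which worker threads for each resource class drain per-device work queues:

```python
import queue
import threading

def run_hybrid(tasks, n_cpu_workers=2, n_gpu_workers=1):
    """Toy hybrid scheduler: tasks are (device, fn) pairs routed to
    per-device queues and drained by that device's worker threads."""
    queues = {"cpu": queue.Queue(), "gpu": queue.Queue()}
    for dev, fn in tasks:
        queues[dev].put(fn)
    results, lock = [], threading.Lock()

    def worker(dev):
        q = queues[dev]
        while True:
            try:
                fn = q.get_nowait()
            except queue.Empty:
                return  # no work left for this device
            r = fn()  # e.g. trace a batch of rays
            with lock:
                results.append((dev, r))

    threads = [threading.Thread(target=worker, args=("cpu",))
               for _ in range(n_cpu_workers)]
    threads += [threading.Thread(target=worker, args=("gpu",))
                for _ in range(n_gpu_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

out = run_hybrid([("gpu", lambda: "shade"), ("cpu", lambda: "io")])
```

The real system batches work so that GPU kernels such as shading and shadow tracing run over many rays at once; the queue-per-resource structure above only conveys the shape of the division of labor.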


IEEE Transactions on Visualization and Computer Graphics | 2008

Geometric Texturing Using Level Sets

Anders Brodersen; Ken Museth; Serban D. Porumbescu; Brian Budge

We present techniques for warping and blending (or subtracting) geometric textures onto surfaces represented by high-resolution level sets. The geometric texture itself can be represented either explicitly as a polygonal mesh or implicitly as a level set. Unlike previous approaches, we can produce topologically connected surfaces with smooth blending and low distortion. Specifically, we offer two different solutions to the problem of adding fine-scale geometric detail to surfaces. Both solutions assume a level set representation of the base surface, which is easily achieved by means of a mesh-to-level-set scan conversion. To facilitate our mapping, we parameterize the embedding space of the base level set surface using fast particle advection. We can then warp explicit texture meshes onto this surface at nearly interactive speeds or blend level set representations of the texture to produce high-quality surfaces with smooth transitions.


IEEE VGTC Conference on Visualization | 2005

Dense geometric flow visualization

Sung W. Park; Brian Budge; Lars Linsen; Bernd Hamann; Kenneth I. Joy

We present a flow visualization technique based on rendering geometry in a dense, uniform distribution. Flow is integrated using particle advection. By adopting ideas from texture-based techniques and taking advantage of parallelism and programmability of contemporary graphics hardware, we generate streamlines and pathlines addressing both steady and unsteady flow. Pipelining is used to manage seeding, advection, and expiration of streamlines/pathlines with constant lifetime. We achieve high numerical accuracy by enforcing short particle lifetimes and employing a fourth-order integration method. The occlusion problem inherent to dense volumetric representations is addressed by applying multi-dimensional transfer functions (MDTFs), restricting particle attenuation to regions of certain physical behavior, or features. Geometry is rendered in graphics hardware using techniques such as depth sorting, illumination, haloing, flow orientation, and depth-based color attenuation to enhance visual perception. We achieve dense geometric three-dimensional flow visualization with interactive frame rates.
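The fourth-order integration the abstract mentions is presumably a classical Runge-Kutta step; a minimal particle-advection sketch under that assumption (the velocity field and step sizes are illustrative):

```python
import numpy as np

def rk4_advect(pos, velocity, dt):
    """One fourth-order Runge-Kutta step of particle advection through
    a steady velocity field given as a callable pos -> velocity."""
    k1 = velocity(pos)
    k2 = velocity(pos + 0.5 * dt * k1)
    k3 = velocity(pos + 0.5 * dt * k2)
    k4 = velocity(pos + dt * k3)
    return pos + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Circular flow: particles should orbit the origin at constant radius.
circ = lambda p: np.array([-p[1], p[0]])
p = np.array([1.0, 0.0])
for _ in range(100):  # short lifetime keeps accumulated error small
    p = rk4_advect(p, circ, 0.01)
# After t = 1, the exact position is (cos 1, sin 1).
```

The combination of small steps and a fourth-order scheme is what keeps the radius drift negligible here, mirroring the accuracy argument in the abstract.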


Pacific Conference on Computer Graphics and Applications | 2004

Multi-dimensional transfer functions for interactive 3D flow visualization

Sung W. Park; Brian Budge; Lars Linsen; Bernd Hamann; Kenneth I. Joy

Transfer functions are a standard technique used in volume rendering to assign color and opacity to a volume of a scalar field. Multidimensional transfer functions (MDTFs) have proven to be an effective way to extract specific features with subtle properties. As 3D texture-based methods gain widespread popularity for the visualization of steady and unsteady flow field data, there is a need to define and apply similar MDTFs to interactive 3D flow visualization. We exploit flow field properties such as velocity, gradient, curl, helicity, and divergence using vector calculus methods to define an MDTF that can be used to extract and track features in a flow field. We show how the defined MDTF can be applied to interactive 3D flow visualization by combining them with state-of-the-art texture-based flow visualization of steady and unsteady fields. We demonstrate that MDTFs can be used to help alleviate the problem of occlusion, which is one of the main inherent drawbacks of 3D texture-based flow visualization techniques. In our implementation, we make use of current graphics hardware to obtain interactive frame rates.
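A simple box-shaped MDTF over two of the listed quantities (velocity magnitude and curl magnitude) can be sketched as follows; the feature ranges are illustrative, not taken from the paper:

```python
import numpy as np

def mdtf_opacity(vel_mag, curl_mag, v_range=(0.5, 2.0), c_range=(1.0, 5.0)):
    """Box-shaped multi-dimensional transfer function: a sample is opaque
    only when BOTH its velocity magnitude and curl magnitude fall inside
    the feature's ranges. Ranges here are illustrative placeholders."""
    in_v = (vel_mag >= v_range[0]) & (vel_mag <= v_range[1])
    in_c = (curl_mag >= c_range[0]) & (curl_mag <= c_range[1])
    return np.where(in_v & in_c, 1.0, 0.0)

# Four samples of a flow field: only the third satisfies both criteria.
vm = np.array([0.1, 1.0, 1.5, 3.0])
cm = np.array([2.0, 0.5, 3.0, 3.0])
alpha = mdtf_opacity(vm, cm)  # -> [0, 0, 1, 0]
```

Because opacity is driven by several field properties at once rather than a single scalar, everything outside the feature of interest becomes transparent, which is how the MDTF alleviates the occlusion problem the abstract describes.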


Computer Graphics Forum | 2008

Caustic Forecasting: Unbiased Estimation of Caustic Lighting for Global Illumination

Brian Budge; John C. Anderson; Kenneth I. Joy

We present an unbiased method for generating caustic lighting using importance-sampled path tracing with Caustic Forecasting. Our technique is part of a straightforward rendering scheme which extends the Illumination by Weak Singularities method to allow for fully unbiased global illumination with rapid convergence. A photon shooting preprocess, similar to that used in Photon Mapping, generates photons that interact with specular geometry. These photons are then clustered, effectively dividing the scene into regions which will contribute similar amounts of caustic lighting to the image. Finally, the photons are stored into spatial data structures associated with each cluster, and the clusters themselves are organized into a spatial data structure for fast searching. During rendering we use clusters to decide the caustic energy importance of a region, and use the local photons to aid in importance sampling, effectively reducing the number of samples required to capture caustic lighting.


2008 IEEE Symposium on Interactive Ray Tracing | 2008

A straightforward CUDA implementation for interactive ray-tracing

Brian Budge; John C. Anderson; Christoph Garth; Kenneth I. Joy

In recent years, applying the powerful computational resources delivered by modern GPUs to ray tracing has resulted in a number of ray tracing implementations that allow rendering of moderately sized scenes at interactive speeds. In our poster, we present a fast implementation for ray tracing with CUDA. We describe an optimized GPU-based ray tracing approach within the CUDA framework that does not explicitly make use of ray coherency or architectural specifics and is therefore simple to implement, while still exceeding performance of previously presented approaches. Optimal performance is achieved by empirically tuning the ray tracing kernel to the executing hardware. We describe our implementation in detail and provide a performance analysis and comparison to prior work.


ACM Symposium on Software Visualization | 2008

Stacked-widget visualization of scheduling-based algorithms

Tony Bernardin; Brian Budge; Bernd Hamann

We present a visualization system to assist designers of scheduling-based multi-threaded out-of-core algorithms. Our system facilitates understanding and improving the algorithm through a stack of visual widgets that effectively correlate the out-of-core system state with scheduling decisions. The stack presents an increasing refinement in the scope of both time and abstraction level: at the top of the stack, the evolution of a derived efficiency measure is shown for the entire out-of-core system execution, and at the bottom the details of a single scheduling decision are displayed. The stack provides much more than a temporal zoom effect, as each widget presents a different view of the scheduling-decision data, presenting distinct aspects of the out-of-core system state as well as correlating them with the neighboring widgets in the stack. This approach allows designers to better understand and more effectively react to problems in scheduling or algorithm design. As a case study we consider a global illumination renderer and show how visualization of the scheduling behavior has led to key improvements in the renderer's performance.


Journal of Graphics Tools | 2002

Simple Nested Dielectrics in Ray Traced Images

Charles M. Schmidt; Brian Budge

This paper presents a simple method for modeling and rendering refractive objects that are nested within each other. The technique allows the use of simpler scene geometry and can even improve rendering time in some images. The algorithm can easily be added to an existing ray tracer and makes no assumptions about the drawing primitives that have been implemented.
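The published technique assigns a priority to each medium and tracks which media the ray is currently inside; a boundary only counts as a real optical interface when the hit medium takes precedence. The sketch below is my illustrative reconstruction (the priorities, names, and return convention are mine, not the paper's):

```python
AIR = (100, 1.0)  # (priority, index of refraction); lower number wins

def hit_interface(stack, medium, entering):
    """Priority-based nested-dielectric bookkeeping (illustrative sketch).

    stack:    list of (priority, ior) media the ray is currently inside
    medium:   the (priority, ior) medium whose boundary was just hit
    entering: True if the ray is entering that medium's geometry

    Returns (true_hit, n1, n2, new_stack): whether this boundary is a
    real optical interface, the refraction indices on either side, and
    the updated interior list. A boundary is skipped when a higher-
    priority medium surrounds the ray, e.g. a liquid surface modeled as
    slightly overlapping the glass wall that contains it.
    """
    current = min(stack, key=lambda m: m[0]) if stack else AIR
    if entering:
        true_hit = medium[0] < current[0]  # new medium takes precedence
        n1, n2 = current[1], medium[1]
        new_stack = stack + [medium]
    else:
        rest = list(stack)
        rest.remove(medium)
        outside = min(rest, key=lambda m: m[0]) if rest else AIR
        true_hit = medium[0] < outside[0]  # we were optically inside it
        n1, n2 = medium[1], outside[1]
        new_stack = rest
    return true_hit, n1, n2, new_stack

GLASS, WATER = (1, 1.5), (2, 1.33)
hit1 = hit_interface([], GLASS, True)       # air -> glass: real interface
hit2 = hit_interface([GLASS], WATER, True)  # water surface inside glass: skipped
```

Letting adjacent media overlap slightly and resolving the overlap by priority is what allows the simpler scene geometry the abstract mentions: modelers need not make touching surfaces coincide exactly.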

Collaboration


Dive into Brian Budge's collaborations.

Top Co-Authors

- Kenneth I. Joy (University of California)
- Bernd Hamann (University of California)
- Sung W. Park (University of California)
- Tony Bernardin (University of California)
- Lars Linsen (Jacobs University Bremen)