
Publication


Featured research published by Brent Burley.


Eurographics | 2008

Ptex: per-face texture mapping for production rendering

Brent Burley; Dylan Lacewell

Explicit parameterization of subdivision surfaces for texture mapping adds significant cost and complexity to film production. Most parameterization methods currently in use require setup effort, and none are completely general. We propose a new texture mapping method for Catmull‐Clark subdivision surfaces that requires no explicit parameterization. Our method, Ptex, stores a separate texture per quad face of the subdivision control mesh, along with a novel per‐face adjacency map, in a single texture file per surface. Ptex uses the adjacency data to perform seamless anisotropic filtering of multi‐resolution textures across surfaces of arbitrary topology. Just as importantly, Ptex requires no manual setup and scales to models of arbitrary mesh complexity and texture detail. Ptex has been successfully used to texture all of the models in an animated theatrical short and is currently being applied to an entire animated feature. Ptex has eliminated UV assignment from our studio and significantly increased the efficiency of our pipeline.
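A heavily simplified sketch of the core Ptex idea described above: one texture per quad face plus an adjacency map, consulted when a filter tap falls past a face edge so filtering can continue seamlessly into the neighbor. The class and function names are illustrative, not the actual Ptex API or file format, and edge-orientation handling is omitted.

```python
class PtexFace:
    def __init__(self, texels, adjfaces):
        self.texels = texels      # res x res grid of texel values
        self.adjfaces = adjfaces  # neighbor face id per edge 0..3, -1 at a boundary

def texel(faces, fid, u, v):
    """Fetch a texel; follow the adjacency map when (u, v) falls past
    the u = 1 edge of this face (other edges omitted for brevity)."""
    face = faces[fid]
    res = len(face.texels)
    iu = int(u * res)
    iv = min(int(v * res), res - 1)
    if iu < res:
        return face.texels[iv][iu]
    # Crossed the u = 1 edge: continue into the adjacent face
    # (edge-orientation matching omitted in this sketch).
    nfid = face.adjfaces[1]
    if nfid < 0:
        return face.texels[iv][res - 1]  # clamp at a mesh boundary
    return faces[nfid].texels[iv][0]     # enter the neighbor at u = 0
```

A real implementation also rotates texel coordinates according to which edge of the neighbor is shared, which is what the per-face adjacency map encodes.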


Eurographics | 2013

Sorted deferred shading for production path tracing

Christian Eisenacher; Gregory Nichols; Andrew Selle; Brent Burley

Ray‐traced global illumination (GI) is becoming widespread in production rendering but incoherent secondary ray traversal limits practical rendering to scenes that fit in memory. Incoherent shading also leads to intractable performance with production‐scale textures forcing renderers to resort to caching of irradiance, radiosity, and other values to amortize expensive shading. Unfortunately, such caching strategies complicate artist workflow, are difficult to parallelize effectively, and contend for precious memory. Worse, these caches involve approximations that compromise quality. In this paper, we introduce a novel path‐tracing framework that avoids these tradeoffs. We sort large, potentially out‐of‐core ray batches to ensure coherence of ray traversal. We then defer shading of ray hits until we have sorted them, achieving perfectly coherent shading and avoiding the need for shading caches.
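A toy sketch of the sort-and-defer pattern described above (not the paper's implementation): bucket a large ray batch by direction octant so traversal is coherent, record hits without shading them, then sort the deferred hits by material so each shader and its textures are touched in one coherent run. The dictionary keys are assumptions for illustration.

```python
def direction_octant(d):
    """Bucket key: the three sign bits of the ray direction (8 octants)."""
    return ((d[0] < 0) << 2) | ((d[1] < 0) << 1) | (d[2] < 0)

def trace_sorted(rays, intersect):
    # Phase 1: sort the batch for coherent traversal; defer all shading.
    rays = sorted(rays, key=lambda r: direction_octant(r["dir"]))
    hits = [h for h in map(intersect, rays) if h is not None]
    # Phase 2: sort deferred hits by material id so shading touches each
    # shader (and its textures) in one coherent run per batch.
    hits.sort(key=lambda h: h["material"])
    return hits
```

In the out-of-core setting the paper targets, each phase would stream sorted batches from disk rather than hold them in a Python list.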


2008 IEEE Symposium on Interactive Ray Tracing | 2008

Raytracing prefiltered occlusion for aggregate geometry

Dylan Lacewell; Brent Burley; Solomon Boulos; Peter Shirley

We prefilter occlusion of aggregate geometry, e.g., foliage or hair, storing local occlusion as a directional opacity in each node of a bounding volume hierarchy (BVH). During intersection, we terminate rays early at BVH nodes based on ray differential, and composite the stored opacities. This makes intersection cost independent of geometric complexity for rays with large differentials, and simultaneously reduces the variance of occlusion estimates. These two algorithmic improvements result in significant performance gains for soft shadows and ambient occlusion. The prefiltered opacity data depends only on geometry, not lights, and can be computed in linear time based on assumptions about the statistics of aggregate geometry.
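The two algorithmic ideas above can be sketched as follows, under simplifying assumptions: the stored opacity here is direction-averaged rather than directional, and ray-box intersection is elided, with the ray's filter footprint passed in directly. Names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class OcclusionNode:
    opacity: float   # prefiltered opacity stored at this BVH node
    size: float      # node extent, compared against the ray footprint
    children: list = field(default_factory=list)

def transmittance(node, footprint, trans=1.0):
    """Composite stored opacities front to back, terminating the descent
    once the ray differential's footprint covers the node."""
    if trans < 1e-3:
        return trans  # early out: the ray is effectively fully occluded
    if not node.children or footprint >= node.size:
        # Stop here and composite the node's prefiltered opacity instead
        # of intersecting the underlying aggregate geometry.
        return trans * (1.0 - node.opacity)
    for child in node.children:
        trans = transmittance(child, footprint, trans)
    return trans
```

Wide rays terminate near the root at a fixed cost, while narrow rays still descend to the leaves, which is how intersection cost becomes independent of geometric complexity for rays with large differentials.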


Journal of Graphics Tools | 2007

Exact Evaluation of Catmull-Clark Subdivision Surfaces Near B-Spline Boundaries

Dylan Lacewell; Brent Burley

We extend the eigenbasis method of Stam to evaluate Catmull-Clark subdivision surfaces near extraordinary vertices on B-spline boundaries. Source code is available online.


Computer Graphics Forum | 2016

A practical and controllable hair and fur model for production path tracing

Matt Jen-Yuan Chiang; Benedikt Bitterli; Chuck Tappan; Brent Burley

We present an energy‐conserving fiber shading model for hair and fur that is efficient enough for path tracing. Our model adopts a near‐field formulation to avoid the expensive integral across the fiber, accounts for all high order internal reflection events with a single lobe, and proposes a novel, closed‐form distribution for azimuthal roughness based on the logistic distribution. Additionally, we derive, through simulation, a parameterization that relates intuitive user controls such as multiple‐scattering albedo and isotropic cylinder roughness to the underlying physical parameters.
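The logistic distribution mentioned above is attractive for azimuthal roughness precisely because both its PDF and CDF are closed form, so it can be sampled exactly by CDF inversion. This sketch shows the plain (untrimmed) logistic; the paper normalizes it over a finite azimuthal interval.

```python
import math

def logistic_pdf(x, s):
    """Logistic density with scale s (variance (pi * s)**2 / 3)."""
    e = math.exp(-abs(x) / s)          # abs() for numerical stability
    return e / (s * (1.0 + e) ** 2)

def logistic_cdf(x, s):
    return 1.0 / (1.0 + math.exp(-x / s))

def sample_logistic(u, s):
    """Exact sampling by inverting the closed-form CDF, u in (0, 1)."""
    return s * math.log(u / (1.0 - u))
```

Because the inverse CDF is exact, no numerical inversion or rejection step is needed when importance-sampling the azimuthal lobe.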


International Conference on Computer Graphics and Interactive Techniques | 2017

Path tracing in production - part 1: production renderers

Luca Fascione; Johannes Hanika; Marcos Fajardo; Per H. Christensen; Brent Burley; Brian Green

The last few years have seen a decisive move of the movie-making industry towards rendering with physically based methods, mostly implemented in terms of path tracing. Increasing demands on the realism of lighting, rendering, and material modeling, paired with a working paradigm that naturally models the behaviour of light as in the real world, mean that more and more movies each year are created the physically based way. This shift has also been recognised by the Academy of Motion Picture Arts and Sciences, which in this year's SciTech ceremony awarded three ray tracing renderers for their crucial contribution to this move. While the language and toolkit available to technical directors get closer and closer to natural language, an understanding of the techniques and algorithms behind the renderer of choice is still of fundamental importance for making efficient use of the available resources, especially when hard-learned lessons and tricks from the previous world of rasterization-based rendering can introduce confusion and cause costly mistakes. In this course, the architectures and novel possibilities of the next generation of production renderers are introduced to a wide audience including technical directors, artists, and researchers. This is the first part of a two-part course: the first part focuses on architecture and implementation, while the second focuses on usage patterns and workflows.


International Conference on Computer Graphics and Interactive Techniques | 2010

Example-based texture synthesis on Disney's Tangled

Chuck Tappan; Brent Burley; Daniel Teece; Arthur Shek

Look development on Walt Disney's animated feature Tangled called for artists to paint hundreds of organic elements with high-resolution textures on a tight schedule. With our Ptex format [Burley and Lacewell 2008], we had the infrastructure to handle massive textures within our pipeline, but manually painting the patterned textures would still have involved tedious effort.


ACM Transactions on Graphics | 2018

The Design and Evolution of Disney’s Hyperion Renderer

Brent Burley; David Adler; Matt Jen-Yuan Chiang; Hank Driskill; Ralf Habel; Patrick Kelly; Peter Kutz; Yining Karl Li; Daniel Teece

Walt Disney Animation Studios has transitioned to path-traced global illumination as part of a progression of brute-force physically based rendering in the name of artist efficiency. To achieve this without compromising our geometric or shading complexity, we built our Hyperion renderer based on a novel architecture that extracts traversal and shading coherence from large, sorted ray batches. In this article, we describe our architecture and discuss our design decisions. We also explain how we are able to provide artistic control in a physically based renderer, and we demonstrate through case studies how we have benefited from having a proprietary renderer that can evolve with production needs.


International Conference on Computer Graphics and Interactive Techniques | 2018

Plausible iris caustics and limbal arc rendering

Matt Jen-Yuan Chiang; Brent Burley

In this paper, we apply anterior segment tomography measurements from contact lens research to photorealistic eye rendering. We improve on existing analytic rendering models by including a conical extension to the usual ellipsoidal corneal surface and we demonstrate the advantage of using a more accurate iris depth. We also introduce a practical method for automatically rendering the limbal arc as an intrinsic part of sclerotic scattering.


Computer Graphics Forum | 2018

Denoising Deep Monte Carlo Renderings

D. Vicini; David Adler; Jan Novák; Fabrice Rousselle; Brent Burley

We present a novel algorithm to denoise deep Monte Carlo renderings, in which pixels contain multiple colour values, each for a different range of depths. Deep images are a more expressive representation of the scene than conventional flat images. However, since each depth bin receives only a fraction of the flat pixel's samples, denoising the bins is harder due to the less accurate mean and variance estimates. Furthermore, deep images lack a regular structure in depth: the number of depth bins and their depth ranges vary across pixels. This prevents a straightforward application of the patch-based distance metrics frequently used to improve the robustness of existing denoising filters. We address these constraints by combining a flat image-space non-local means filter operating on pixel colours with a deep cross-bilateral filter operating on auxiliary features (albedo, normal, etc.). Our approach significantly reduces noise in deep images while preserving their structure. To the best of our knowledge, our algorithm is the first to enable efficient deep-compositing workflows with denoised Monte Carlo renderings. We demonstrate the performance of our filter on a range of scenes, highlighting the challenges and advantages of denoising deep images.
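A minimal sketch of a cross-bilateral weight on auxiliary features, the kind of kernel described above for filtering across depth bins. The Gaussian form, feature layout, and function name are illustrative assumptions, not the paper's exact filter.

```python
import math

def cross_bilateral_weight(feats_p, feats_q, sigmas):
    """Weight for averaging bin q into bin p: a Gaussian falloff in the
    distance of each auxiliary feature (albedo, normal, ...)."""
    w = 1.0
    for fp, fq, sigma in zip(feats_p, feats_q, sigmas):
        d2 = sum((a - b) ** 2 for a, b in zip(fp, fq))
        w *= math.exp(-d2 / (2.0 * sigma * sigma))
    return w
```

Bins with matching albedo and normal receive full weight; bins across a geometric or material edge are down-weighted, which is what preserves the deep image's structure while averaging away noise.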

Collaboration


An overview of Brent Burley's collaborations.

Top Co-Authors

Gregory Nichols, Walt Disney Animation Studios
Daniel Teece, Walt Disney Animation Studios
Matt Jen-Yuan Chiang, Walt Disney Animation Studios
David Adler, Walt Disney Animation Studios
Dylan Lacewell, Walt Disney Animation Studios
Marcos Fajardo, University of Southern California
Chuck Tappan, Walt Disney Animation Studios
Peter Kutz, Walt Disney Animation Studios
Ralf Habel, Walt Disney Animation Studios
Sean Jenkins, Walt Disney Animation Studios