Publication


Featured research published by Matt Pharr.


Physically Based Rendering (Second Edition): From Theory to Implementation | 2010

Light Transport I: Surface Reflection

Matt Pharr; Greg Humphreys

This chapter brings together the ray-tracing algorithms, radiometric concepts, and Monte Carlo sampling algorithms of the previous chapters to implement a set of integrators that compute scattered radiance from surfaces in the scene. Integrators are responsible for evaluating the integral equation that describes the equilibrium distribution of radiance in an environment (the light transport equation). As the SamplerRenderer uses the Camera to generate rays and then finds intersections with scene geometry, information about the intersections is passed to the SurfaceIntegrator and the VolumeIntegrator that the user selected; together these two classes are responsible for the shading and lighting computations that compute the radiance along the ray, accounting for light reflected from the first surface visible along the ray as well as light attenuated and scattered by participating media along the ray. This chapter describes SurfaceIntegrators, which compute reflected light from geometric surfaces. Because the light transport equation can be solved in closed form only for trivial scenes, it is necessary to apply a numerical integration technique to approximate its solution; many solution methods have been proposed for doing so in rendering. The chapter presents implementations of a number of integrators based on Monte Carlo integration that represent a selection of representative approaches to the problem, and concludes with the implementation of a Renderer that applies the Metropolis sampling approach introduced in Section 13.4 to the light transport problem.
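
The core Monte Carlo step such integrators build on can be sketched in a few lines. This is an illustrative example only, not pbrt's SurfaceIntegrator code; the Vec3 type, the uniform hemisphere sampler, and the fLi callback (standing in for the product of BSDF value and incident radiance) are the sketch's own assumptions:

```cpp
#include <cmath>
#include <functional>
#include <random>

// Sketch: estimate the scattering equation
//   Lo(wo) = Integral over the hemisphere of f(wo, wi) Li(wi) |cos(theta_i)| dwi
// with uniform hemisphere sampling, pdf = 1 / (2*pi).
const double kPi = 3.14159265358979323846;

struct Vec3 { double x, y, z; };

// Uniformly sample a direction on the hemisphere around +z.
// (Uniform in z is uniform in solid angle, by Archimedes' theorem.)
Vec3 SampleHemisphere(std::mt19937 &rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double z = u(rng);
    double r = std::sqrt(std::max(0.0, 1.0 - z * z));
    double phi = 2.0 * kPi * u(rng);
    return {r * std::cos(phi), r * std::sin(phi), z};
}

// fLi(wi) stands in for the product f(wo, wi) * Li(wi) for a fixed wo.
double EstimateLo(const std::function<double(Vec3)> &fLi,
                  int nSamples, std::mt19937 &rng) {
    double sum = 0.0;
    for (int i = 0; i < nSamples; ++i) {
        Vec3 wi = SampleHemisphere(rng);
        double cosTheta = wi.z;                  // surface normal is +z
        sum += fLi(wi) * cosTheta * (2.0 * kPi); // divide by pdf = 1/(2*pi)
    }
    return sum / nSamples;
}
```

With a Lambertian BRDF f = 1/pi under unit incident radiance from every direction, the estimate converges to Lo = 1, which makes a convenient sanity check.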


Physically Based Rendering (Second Edition): From Theory to Implementation | 2010

Chapter Eleven – Volume Scattering

Matt Pharr

Publisher Summary This chapter introduces the mathematics to describe how light is affected as it passes through participating media—particles distributed throughout a region of 3D space. Simulating the effect of participating media makes it possible to render images with atmospheric haze, beams of light through clouds, light passing through cloudy water, and subsurface scattering, where light exits a solid object at a different place than where it entered. The chapter begins by describing the basic physical processes that affect the radiance along rays passing through participating media. It then introduces the VolumeRegion base class, an interface for modeling different types of media. Like a BSDF, the volume description characterizes how light is scattered at individual points. In order to determine the global effect on the distribution of light in the scene, VolumeIntegrators are necessary. Furthermore, the chapter describes the abstraction that represents the subsurface scattering properties of objects, the BSSRDF, as well as a number of Materials for translucent objects.
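
The most basic of those physical processes, attenuation along a ray, reduces to a one-line formula in a homogeneous medium. A minimal sketch, assuming a constant extinction coefficient sigma_t (pbrt's VolumeRegion interface generalizes this to spatially varying media by integrating sigma_t along the ray):

```cpp
#include <cmath>

// Beer-Lambert attenuation in a homogeneous medium: the fraction of
// radiance surviving a ray segment of length t, with extinction
// coefficient sigma_t, is Tr = exp(-sigma_t * t).
double Transmittance(double sigma_t, double t) {
    return std::exp(-sigma_t * t);
}
```

A vacuum (sigma_t = 0) transmits everything; doubling either the density or the distance squares the surviving fraction.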


Physically Based Rendering (Second Edition): From Theory to Implementation | 2010

15 – Light Transport II: Volume Rendering

Matt Pharr; Wenzel Jakob; Greg Humphreys

Just as SurfaceIntegrators are the meeting point of scene geometry, materials, and lights, applying sophisticated algorithms to solve the light transport equation and determine the distribution of radiance in the scene, VolumeIntegrators are responsible for incorporating the effect of participating media (as described by VolumeRegions) into this process and determining how it affects the distribution of radiance. This chapter briefly introduces the equation of transfer, which describes how participating media change radiance along rays, and then describes the VolumeIntegrator interface as well as a few simple VolumeIntegrator implementations. Section 16.5 then describes the implementation of a surface integrator that accounts for the effect of subsurface scattering, where incident light travels some distance inside a surface before exiting. Although the approach is implemented as a SurfaceIntegrator, it is included in this chapter since its implementation is based on an approximate solution to light transport through participating media.
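
A simple volume integrator of the kind described here can be sketched as a ray march over the emission-only form of the equation of transfer. This is an illustrative example under stated assumptions (sigma_t and emitted radiance Le supplied as one-dimensional functions of ray distance, midpoint stepping), not the book's implementation:

```cpp
#include <cmath>
#include <functional>

// Ray-march the emission-only equation of transfer: accumulate emitted
// radiance Le(t) along the ray, attenuated by the transmittance
// exp(-optical depth) back to the ray origin.
double MarchEmissionOnly(const std::function<double(double)> &sigma_t,
                         const std::function<double(double)> &Le,
                         double tMax, int nSteps) {
    double dt = tMax / nSteps;
    double opticalDepth = 0.0, L = 0.0;
    for (int i = 0; i < nSteps; ++i) {
        double t = (i + 0.5) * dt;           // midpoint of this segment
        double Tr = std::exp(-opticalDepth); // transmittance to the segment
        L += Tr * Le(t) * dt;                // accumulate attenuated emission
        opticalDepth += sigma_t(t) * dt;     // extend the optical depth
    }
    return L;
}
```

For a homogeneous medium with sigma_t = 1 and Le = 1, the march converges to the analytic answer 1 - exp(-tMax) as the step count grows.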


Physically Based Rendering (Second Edition): From Theory to Implementation | 2010

Color and Radiometry

Matt Pharr; Greg Humphreys

In order to precisely describe how light is represented and sampled to compute images, we must first establish some background in radiometry—the study of the propagation of electromagnetic radiation in an environment. Of particular interest in rendering are the wavelengths (λ) of electromagnetic radiation between approximately 370 nm and 730 nm, which account for light visible to humans. The lower wavelengths (λ ≈ 400 nm) are the bluish colors, the middle wavelengths (λ ≈ 550 nm) are the greens, and the upper wavelengths (λ ≈ 650 nm) are the reds.


Physically Based Rendering (Third Edition): From Theory to Implementation | 2017

07 – Sampling and Reconstruction

Matt Pharr; Wenzel Jakob; Greg Humphreys

Although the final output of a renderer like pbrt is a two-dimensional grid of colored pixels, incident radiance is actually a continuous function defined over the film plane. The manner in which the discrete pixel values are computed from this continuous function can noticeably affect the quality of the final image generated by the renderer; if this process is not performed carefully, artifacts will be present. Fortunately, a relatively small amount of additional computation to this end can substantially improve the quality of the rendered images. This chapter introduces sampling theory—the theory of taking discrete sample values from functions defined over continuous domains and then using those samples to reconstruct new functions that are similar to the original. Building on principles of sampling theory, the Samplers in this chapter select sample points on the image plane at which incident radiance will be computed (recall that in the previous chapter, Cameras used Samples generated by a Sampler to construct their camera rays). Three Sampler implementations are described in this chapter, spanning a variety of approaches to the sampling problem. This chapter concludes with the Filter class. The Filter is used to determine how multiple samples near each pixel are blended together to compute the final pixel value.
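
The final filtering step can be illustrated concretely. A hypothetical one-dimensional sketch, with a triangle filter standing in for a Filter implementation (the names and the 1D simplification are the sketch's own; real reconstruction filters operate in 2D over the image plane):

```cpp
#include <cmath>
#include <vector>

// Each sample at offset x from the pixel center contributes with
// weight f(x); the pixel value is the weighted average of the samples.
struct Sample { double offset; double radiance; };

// Triangle filter: weight falls off linearly to zero at +/- radius.
double TriangleFilter(double x, double radius) {
    return std::max(0.0, radius - std::abs(x));
}

double FilterSamples(const std::vector<Sample> &samples, double radius) {
    double weighted = 0.0, weightSum = 0.0;
    for (const Sample &s : samples) {
        double w = TriangleFilter(s.offset, radius);
        weighted += w * s.radiance;
        weightSum += w;
    }
    return weightSum > 0.0 ? weighted / weightSum : 0.0;
}
```

Samples nearer the pixel center dominate the result; a box filter would instead weight every sample inside the radius equally.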


Physically Based Rendering (Second Edition): From Theory to Implementation | 2010

Retrospective and the Future

Matt Pharr; Greg Humphreys

This chapter looks back at some of the details of the complete pbrt system, discusses some design alternatives, and describes some potential major extensions to the system. pbrt represents a single point in the space of rendering system designs. The basic decisions in pbrt were that ray tracing would be the geometric visibility algorithm used, that physical correctness would be a cornerstone of the system, and that Monte Carlo would be the main approach used for numerical integration; all of these have pervasive implications for the system's design. One of the basic assumptions in pbrt's design was that the most interesting images to render are those with complex geometry and lighting. One result of these assumptions is that pbrt is relatively inefficient at rendering simple images. Another performance implication of this design approach is that finding the BSDF at a ray intersection is more computationally intensive than it is in renderers that do not expend as much effort filtering textures and computing ray differentials. Another instance where the chosen abstractions affect overall system efficiency is the range of geometric primitives that the renderer supports. While ray tracing's ability to handle a wide variety of shapes is elegant, this property is not as useful in practice as one might initially expect: few of the shapes commonly encountered in real-world scenes are described well by spheres and cones. An alternative approach is to design a ray tracer around a single low-level shape representation, such as triangles, and to operate only on this representation throughout much of the pipeline; this has several advantages that increase performance and remove complexity from the system.


Physically Based Rendering (Second Edition): From Theory to Implementation | 2010

Monte Carlo Integration II: Improving Efficiency

Matt Pharr; Greg Humphreys

This chapter develops the theory and practice of techniques for improving the efficiency of Monte Carlo integration without necessarily increasing the number of samples. Variance in Monte Carlo ray tracing manifests itself as noise in the image, and the battle against variance is the basis of most of the work in optimizing Monte Carlo. Monte Carlo's convergence rate means that it is necessary to quadruple the number of samples in order to reduce the error by half. Because the run time of the estimation procedure is proportional to the number of samples, the cost of reducing variance can be high. One of the techniques that has been most effective for improving efficiency for rendering problems is importance sampling: choosing a sampling distribution that is similar in shape to the integrand leads to reduced variance. The technique is so named because samples tend to be taken in "important" parts of the function's domain, where the function's value is relatively large. The chapter discusses importance sampling and a number of other techniques for improving the efficiency of Monte Carlo. Furthermore, the chapter derives techniques for generating samples according to the distributions of BSDFs, light sources, and functions related to volume scattering so that they can be used as sampling distributions for importance sampling.
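
The variance reduction from a well-chosen sampling distribution is easy to demonstrate on a toy integral. An illustrative example, not from the book's code: estimate I = ∫₀¹ 3x² dx = 1 two ways. Drawing x from the pdf p(x) = 3x² (by inversion, x = u^(1/3)) makes the ratio f(x)/p(x) constant, so that estimator has zero variance, while uniform sampling of the same integral does not:

```cpp
#include <cmath>
#include <random>

// Uniform sampling: p(x) = 1, estimator f(x)/p(x) = 3x^2, nonzero variance.
double EstimateUniform(int n, std::mt19937 &rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double x = u(rng);
        sum += 3.0 * x * x;
    }
    return sum / n;
}

// Importance sampling with p(x) = 3x^2 matching the integrand exactly.
double EstimateImportance(int n, std::mt19937 &rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double x = std::cbrt(u(rng));   // inversion method for p(x) = 3x^2
        double f = 3.0 * x * x, p = 3.0 * x * x;
        sum += (p > 0.0) ? f / p : 0.0; // ratio is 1 for every sample
    }
    return sum / n;
}
```

In rendering the integrand is never matched exactly, but sampling distributions proportional to the BSDF or to a light's emission get part of the way there, which is exactly what the sampling routines derived in this chapter provide.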


Physically Based Rendering (Second Edition): From Theory to Implementation | 2010

Light Transport III: Precomputed Light Transport

Matt Pharr; Greg Humphreys

The rendering algorithms in the preceding chapters generally take minutes to hours of computation to generate high-quality imagery for interesting scenes. This is in general the price to be paid for their robustness, flexibility, and generality. Of course, this computational cost has a number of disadvantages: artists modeling scenes generally desire quick feedback, and many applications require not just faster rendering but full interactivity.


Physically Based Rendering (Second Edition): From Theory to Implementation | 2010

Chapter Three – Shapes

Matt Pharr

Publisher Summary This chapter presents pbrt's abstraction for geometric primitives such as spheres and triangles. Careful abstraction of geometric shapes in a ray tracer is a key component of a clean system design, and shapes are the ideal candidates for an object-oriented approach. All geometric primitives implement a common interface, and the rest of the renderer can use this interface without needing any details about the underlying shape. This makes it possible to isolate the geometric and shading subsystems of pbrt; without this isolation, adding new shapes to the system would be unnecessarily difficult and error prone. pbrt hides details about its primitives behind a two-level abstraction comprising the Shape class and the Primitive class. The Shape class provides access to the raw geometric properties of the primitive, such as its surface area and bounding box, and provides a ray intersection routine. This chapter focuses on the geometry-only Shape class. The interface for Shapes is in the source file core/shape.h, and definitions of common Shape methods can be found in core/shape.cpp. The Shape class in pbrt is reference counted: pbrt keeps track of the number of outstanding pointers to a particular shape and automatically deletes the shape when that reference count goes to zero. All shapes are defined in object coordinate space; in order to place a sphere at another position in the scene, a transformation that describes the mapping from object space to world space must be provided. The Shape class stores both this transformation and its inverse. All Shapes in the system are given a unique 32-bit numeric id, stored in the shapeId member variable. This identifier has a variety of uses, among them the adaptive image sampling routines that take additional samples in areas of pixels that have multiple shapes overlapping them.
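
The shape abstraction described above can be sketched in miniature. The class and method names below are illustrative simplifications, not pbrt's actual declarations (which also handle bounding boxes, object-to-world transformations, and reference counting):

```cpp
#include <cmath>

struct Ray { double ox, oy, oz, dx, dy, dz; };

// A common geometric interface: the rest of a renderer can work purely
// through these methods without knowing the underlying shape.
class Shape {
public:
    virtual ~Shape() = default;
    virtual double Area() const = 0;
    virtual bool Intersect(const Ray &r, double *tHit) const = 0;
};

class Sphere : public Shape {
public:
    explicit Sphere(double radius) : radius_(radius) {}
    double Area() const override {
        return 4.0 * 3.14159265358979323846 * radius_ * radius_;
    }
    bool Intersect(const Ray &r, double *tHit) const override {
        // Sphere at the origin of object space: solve |o + t*d|^2 = r^2,
        // a quadratic in t.
        double a = r.dx * r.dx + r.dy * r.dy + r.dz * r.dz;
        double b = 2.0 * (r.ox * r.dx + r.oy * r.dy + r.oz * r.dz);
        double c = r.ox * r.ox + r.oy * r.oy + r.oz * r.oz - radius_ * radius_;
        double disc = b * b - 4.0 * a * c;
        if (disc < 0.0) return false;          // ray misses the sphere
        double t = (-b - std::sqrt(disc)) / (2.0 * a);
        if (t <= 0.0) t = (-b + std::sqrt(disc)) / (2.0 * a);
        if (t <= 0.0) return false;            // both hits behind the origin
        *tHit = t;
        return true;
    }
private:
    double radius_;
};
```

Adding a new shape then means implementing this interface, with no changes elsewhere in the renderer.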


Physically Based Rendering (Second Edition): From Theory to Implementation | 2010

Chapter Ten – Texture

Matt Pharr

Publisher Summary This chapter describes a set of interfaces and classes that allow incorporation of texture into material models. The materials are based on various parameters that describe their characteristics (diffuse reflectance, glossiness, etc.). Because real-world material properties typically vary over surfaces, it is necessary to be able to describe these patterns in some manner. In pbrt, because the texture abstractions are defined in a way that separates the pattern generation methods from the material implementations, it is easy to combine them in arbitrary ways, thereby making it easier to create a wide variety of appearances. In pbrt, a texture is an extremely general concept: it is a function that maps points in some domain (e.g., a surface's (u, v) parametric space or (x, y, z) object space) to values in some other domain. A wide variety of texture class implementations are available in the system. Textures may be a source of high-frequency variation in the final image, so the chapter begins by discussing the problem of texture aliasing and general approaches that can be implemented to solve it. It then describes the basic texture interface and illustrates its use with a few simple texture functions. Furthermore, the chapter presents a variety of more complex texture implementations, demonstrating the use of a number of different texture antialiasing techniques along the way.
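
The "texture as a function" idea can be illustrated directly. The names below (Constant, Checker, Scale) are illustrative, not pbrt's actual texture classes, and the sketch ignores antialiasing entirely:

```cpp
#include <cmath>
#include <functional>

// A texture is just a function from (u, v) to a value; because the
// representation is a function, implementations compose freely.
using Texture = std::function<double(double u, double v)>;

Texture Constant(double c) {
    return [c](double, double) { return c; };
}

// Alternate between two textures on an integer checkerboard in (u, v).
Texture Checker(Texture t1, Texture t2) {
    return [t1, t2](double u, double v) {
        int iu = (int)std::floor(u), iv = (int)std::floor(v);
        return ((iu + iv) % 2 == 0) ? t1(u, v) : t2(u, v);
    };
}

// Multiply another texture's value by a constant.
Texture Scale(Texture t, double s) {
    return [t, s](double u, double v) { return s * t(u, v); };
}
```

Because each combinator takes textures and returns a texture, patterns like a scaled checkerboard of two other patterns fall out of composition, which is the separation of pattern generation from material implementation described above.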

Collaboration


Matt Pharr's top co-authors.


John D. Owens

University of California


Carsten Dachsbacher

Karlsruhe Institute of Technology
