Steven J. Gortler
Harvard University
Publications
Featured research published by Steven J. Gortler.
international conference on computer graphics and interactive techniques | 1996
Steven J. Gortler; Radek Grzeszczuk; Richard Szeliski; Michael F. Cohen
This paper discusses a new method for capturing the complete appearance of both synthetic and real-world objects and scenes, representing this information, and then using this representation to render images of the object from new camera positions. Unlike the shape capture process traditionally used in computer vision and the rendering process traditionally used in computer graphics, our approach does not rely on geometric representations. Instead we sample and reconstruct a 4D function, which we call a Lumigraph. The Lumigraph is a subset of the complete plenoptic function that describes the flow of light at all positions in all directions. With the Lumigraph, new images of the object can be generated very quickly, independent of the geometric or illumination complexity of the scene or object. The paper describes a complete working system including the capture of samples, the construction of the Lumigraph, and the subsequent rendering of images from this new representation.
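The reconstruction step described above can be sketched as quadrilinear interpolation over a discrete 4D (s, t, u, v) sample grid; the nested-list grid layout and function name below are illustrative assumptions, not the paper's implementation:

```python
import math

def quadrilinear(grid, s, t, u, v):
    """Reconstruct radiance at a fractional (s, t, u, v) coordinate from a
    discrete 4D Lumigraph sample grid by quadrilinear interpolation (the
    4D analogue of bilinear interpolation over the 16 surrounding samples)."""
    coords = []
    for x in (s, t, u, v):
        i = int(math.floor(x))
        coords.append((i, x - i))          # integer cell and fractional offset
    out = 0.0
    for bits in range(16):                 # 2^4 corner combinations
        w = 1.0
        idx = []
        for axis in range(4):
            i, f = coords[axis]
            b = (bits >> axis) & 1
            w *= f if b else (1.0 - f)     # per-axis linear weight
            idx.append(i + b)
        out += w * grid[idx[0]][idx[1]][idx[2]][idx[3]]
    return out
```

Because the weights form a partition of unity, a radiance function that is linear in (s, t, u, v) is reconstructed exactly.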
international conference on computer graphics and interactive techniques | 1998
Jonathan Shade; Steven J. Gortler; Li-wei He; Richard Szeliski
In this paper we present a set of efficient image-based rendering methods capable of rendering multiple frames per second on a PC. The first method warps Sprites with Depth representing smooth surfaces without the gaps found in other techniques. A second method for more general scenes performs warping from an intermediate representation called a Layered Depth Image (LDI). An LDI is a view of the scene from a single input camera view, but with multiple pixels along each line of sight. The size of the representation grows only linearly with the observed depth complexity in the scene. Moreover, because the LDI data are represented in a single image coordinate system, McMillan’s warp ordering algorithm can be successfully adapted. As a result, pixels are drawn in the output image in back-to-front order. No z-buffer is required, so alpha compositing can be done efficiently without depth sorting. This makes splatting an efficient solution to the resampling problem.
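A minimal sketch of the LDI idea, assuming per-pixel sample lists and simple depth-sorted "over" compositing (the class and method names are hypothetical, and the paper's McMillan-style incremental warp ordering avoids even this per-pixel sort):

```python
class LayeredDepthImage:
    """Toy LDI: each pixel keeps multiple (depth, color, alpha) samples
    along its line of sight, rather than a single nearest surface."""
    def __init__(self, w, h):
        self.w, self.h = w, h
        self.pixels = [[[] for _ in range(w)] for _ in range(h)]

    def add_sample(self, x, y, depth, color, alpha):
        self.pixels[y][x].append((depth, color, alpha))

    def composite_pixel(self, x, y, background=0.0):
        # Visiting samples in decreasing depth lets us alpha-composite
        # back-to-front with no z-buffer, as in the paper.
        out = background
        for depth, color, alpha in sorted(self.pixels[y][x], reverse=True):
            out = alpha * color + (1.0 - alpha) * out
        return out
```

With grayscale colors, a half-transparent white sample in front of a half-transparent black one composites to 0.25 over a black background.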
international conference on computer graphics and interactive techniques | 2000
Wojciech Matusik; Chris Buehler; Ramesh Raskar; Steven J. Gortler; Leonard McMillan
In this paper, we describe an efficient image-based approach to computing and shading visual hulls from silhouette image data. Our algorithm takes advantage of epipolar geometry and incremental computation to achieve a constant rendering cost per rendered pixel. It does not suffer from the computation complexity, limited resolution, or quantization artifacts of previous volumetric approaches. We demonstrate the use of this algorithm in a real-time virtualized reality application running off a small number of video streams.
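The defining property of the visual hull can be stated as a point-membership test, sketched below under an assumed camera model (the paper itself never tests points; it renders the hull directly by intersecting silhouette cones along epipolar lines for constant cost per output pixel):

```python
def point_in_visual_hull(point, views):
    """A 3D point lies inside the visual hull iff its projection falls
    inside the silhouette in every input view. `views` is a list of
    (project, mask) pairs: `project` maps a 3D point to integer pixel
    coordinates and `mask` is a row-major boolean silhouette image.
    Hypothetical sketch, not the paper's image-based algorithm."""
    for project, mask in views:
        x, y = project(point)
        inside = 0 <= y < len(mask) and 0 <= x < len(mask[0]) and mask[y][x]
        if not inside:
            return False          # carved away by this silhouette cone
    return True
```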
international conference on computer graphics and interactive techniques | 2002
Xianfeng Gu; Steven J. Gortler; Hugues Hoppe
Surface geometry is often modeled with irregular triangle meshes. The process of remeshing refers to approximating such geometry using a mesh with (semi)-regular connectivity, which has advantages for many graphics applications. However, current techniques for remeshing arbitrary surfaces create only semi-regular meshes. The original mesh is typically decomposed into a set of disk-like charts, onto which the geometry is parametrized and sampled. In this paper, we propose to remesh an arbitrary surface onto a completely regular structure we call a geometry image. It captures geometry as a simple 2D array of quantized points. Surface signals like normals and colors are stored in similar 2D arrays using the same implicit surface parametrization --- texture coordinates are absent. To create a geometry image, we cut an arbitrary mesh along a network of edge paths, and parametrize the resulting single chart onto a square. Geometry images can be encoded using traditional image compression algorithms, such as wavelet-based coders.
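Because a geometry image stores connectivity implicitly in the 2D array layout, a mesh can be rebuilt from it with no stored index data beyond the grid itself; a minimal sketch (function name assumed):

```python
def geometry_image_to_mesh(grid):
    """Rebuild a triangle mesh from a geometry image: an n x n array of 3D
    points whose implicit (row, col) parametrization replaces explicit
    connectivity. Each grid cell is split into two triangles."""
    n = len(grid)
    verts = [p for row in grid for p in row]      # row-major vertex list
    tris = []
    for i in range(n - 1):
        for j in range(n - 1):
            a = i * n + j       # cell corners in the flattened array
            b = a + 1
            c = a + n
            d = c + 1
            tris.append((a, b, d))
            tris.append((a, d, c))
    return verts, tris
```

Normals and colors sampled into same-sized arrays index into the mesh the same way, which is why no texture coordinates are needed.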
international conference on computer graphics and interactive techniques | 2001
Chris Buehler; Michael Bosse; Leonard McMillan; Steven J. Gortler; Michael F. Cohen
We describe an image-based rendering approach that generalizes many current image-based rendering algorithms, including light field rendering and view-dependent texture mapping. In particular, it allows for lumigraph-style rendering from a set of input cameras in arbitrary configurations (i.e., not restricted to a plane or to any specific manifold). In the case of regular and planar input camera positions, our algorithm reduces to a typical lumigraph approach. When presented with fewer cameras and good approximate geometry, our algorithm behaves like view-dependent texture mapping. The algorithm achieves this flexibility because it is designed to meet a set of specific goals that we describe. We demonstrate this flexibility with a variety of examples.
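The key mechanism is a per-ray camera blending field; a simplified sketch using only an angular penalty, with the k-th best camera's weight driven to zero so cameras fade in and out smoothly (the paper's full penalty also accounts for resolution and field of view; names and the exact falloff are assumptions):

```python
import math

def blending_weights(ray_dir, cam_dirs, k=3):
    """Score each input camera by the angle between its viewing direction
    and the desired ray, keep the k best, and normalize to a partition of
    unity. The k-th camera's weight falls to zero at the threshold, giving
    continuous transitions as cameras enter and leave the blend set."""
    def angle(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return math.acos(max(-1.0, min(1.0, dot / (na * nb))))
    penalties = sorted((angle(ray_dir, d), i) for i, d in enumerate(cam_dirs))
    thresh = penalties[min(k, len(penalties)) - 1][0] + 1e-9
    raw = [(i, 1.0 - p / thresh) for p, i in penalties[:k]]
    total = sum(w for _, w in raw) or 1.0
    return {i: w / total for i, w in raw if w > 0}
```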
international conference on computer graphics and interactive techniques | 2001
Pedro V. Sander; John Snyder; Steven J. Gortler; Hugues Hoppe
Given an arbitrary mesh, we present a method to construct a progressive mesh (PM) such that all meshes in the PM sequence share a common texture parametrization. Our method considers two important goals simultaneously. It minimizes texture stretch (small texture distances mapped onto large surface distances) to balance sampling rates over all locations and directions on the surface. It also minimizes texture deviation (“slippage” error based on parametric correspondence) to obtain accurate textured mesh approximations. The method begins by partitioning the mesh into charts using planarity and compactness heuristics. It creates a stretch-minimizing parametrization within each chart, and resizes the charts based on the resulting stretch. Next, it simplifies the mesh while respecting the chart boundaries. The parametrization is re-optimized to reduce both stretch and deviation over the whole PM sequence. Finally, the charts are packed into a texture atlas. We demonstrate using such atlases to sample color and normal maps over several models.
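The L2 texture-stretch metric minimized above can be computed per triangle as the root-mean-square singular value of the Jacobian of the parametrization; a self-contained sketch using the standard affine-map derivative formulas (function name assumed):

```python
import math

def l2_stretch(p2d, p3d):
    """L2 stretch of one triangle: RMS singular value of the Jacobian of
    the affine map from 2D texture coordinates to the 3D surface.
    p2d holds three (s, t) points, p3d the corresponding (x, y, z) points."""
    (s0, t0), (s1, t1), (s2, t2) = p2d
    q0, q1, q2 = p3d
    A = ((s1 - s0) * (t2 - t0) - (s2 - s0) * (t1 - t0)) / 2.0  # signed 2D area
    Ss = [(q0[k] * (t1 - t2) + q1[k] * (t2 - t0) + q2[k] * (t0 - t1)) / (2 * A)
          for k in range(3)]   # d(surface)/ds
    St = [(q0[k] * (s2 - s1) + q1[k] * (s0 - s2) + q2[k] * (s1 - s0)) / (2 * A)
          for k in range(3)]   # d(surface)/dt
    a = sum(x * x for x in Ss)
    c = sum(x * x for x in St)
    return math.sqrt((a + c) / 2.0)   # sqrt((Gamma^2 + gamma^2) / 2)
```

An isometric flattening scores exactly 1; values above 1 mean small texture distances map to large surface distances, i.e. undersampling.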
international conference on computer graphics and interactive techniques | 2000
Aaron Isaksen; Leonard McMillan; Steven J. Gortler
This research further develops the light field and lumigraph image-based rendering methods and extends their utility. We present alternate parameterizations that permit 1) interactive rendering of moderately sampled light fields of scenes with significant, unknown depth variation and 2) low-cost, passive autostereoscopic viewing. Using a dynamic reparameterization, these techniques can be used to interactively render photographic effects such as variable focus and depth-of-field within a light field. The dynamic parameterization is independent of scene geometry and does not require actual or approximate geometry of the scene. We explore the frequency domain and ray-space aspects of dynamic reparameterization, and present an interactive rendering technique that takes advantage of today's commodity rendering hardware.
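The variable-focus effect can be sketched in a hypothetical "flatland" model: cameras sit on the line z = 0, a movable focal plane sits at z = F, and refocusing amounts to sampling each camera at the pixel direction that sees the chosen focal-plane point, then averaging (a synthetic aperture). All names and the camera model are assumptions for illustration:

```python
def refocus(cameras, focal_z, x):
    """Evaluate the dynamically reparameterized light field at point
    (x, focal_z) on a virtual focal plane. `cameras` is a list of
    (cx, image) pairs, where cx is the camera position on the z = 0 plane
    and image(u) returns radiance for pixel direction u. Points on the
    focal plane come back sharp; points off it are averaged over the
    aperture and appear defocused."""
    vals = [image((x - cx) / focal_z) for cx, image in cameras]
    return sum(vals) / len(vals)
```

With a point emitter at (0, 4), placing the focal plane at z = 4 reconstructs it sharply, while z = 2 blurs it across the aperture.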
symposium on geometry processing | 2003
Pedro V. Sander; Zoë J. Wood; Steven J. Gortler; John Snyder; Hugues Hoppe
We introduce multi-chart geometry images, a new representation for arbitrary surfaces. It is created by resampling a surface onto a regular 2D grid. Whereas the original scheme of Gu et al. maps the entire surface onto a single square, we use an atlas construction to map the surface piecewise onto charts of arbitrary shape. We demonstrate that this added flexibility reduces parametrization distortion and thus provides greater geometric fidelity, particularly for shapes with long extremities, high genus, or disconnected components. Traditional atlas constructions suffer from discontinuous reconstruction across chart boundaries, which in our context create unacceptable surface cracks. Our solution is a novel zippering algorithm that creates a watertight surface. In addition, we present a new atlas chartification scheme based on clustering optimization.
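The crack problem and its repair can be illustrated with a deliberately tiny sketch: two chart grids that sample the same cut edge are made watertight by snapping both copies of the shared boundary to a common position. This toy assumes the last row of one chart corresponds sample-for-sample to the first row of the other; the paper's zippering handles arbitrarily shaped chart boundaries with differing sample rates:

```python
def zipper_boundary(chart_a, chart_b):
    """Toy zippering: unify the duplicated boundary samples of two chart
    geometry-image grids by averaging, so reconstruction across the cut
    is crack-free. Hypothetical simplification of the paper's algorithm."""
    for j in range(len(chart_a[-1])):
        avg = tuple((a + b) / 2.0 for a, b in zip(chart_a[-1][j], chart_b[0][j]))
        chart_a[-1][j] = avg   # both charts now emit the identical vertex,
        chart_b[0][j] = avg    # so the reconstructed surface is watertight
    return chart_a, chart_b
```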
international conference on computer graphics and interactive techniques | 1993
Steven J. Gortler; Peter Schröder; Michael F. Cohen; Pat Hanrahan
Radiosity methods have been shown to be an effective means to solve the global illumination problem in Lambertian diffuse environments. These methods approximate the radiosity integral equation by projecting the unknown radiosity function into a set of basis functions with limited support, resulting in a set of n linear equations where n is the number of discrete elements in the scene. Classical radiosity methods required the evaluation of n² interaction coefficients. Efforts to reduce the number of required coefficients without compromising error bounds have focused on raising the order of the basis functions, meshing, accounting for discontinuities, and on developing hierarchical approaches, which have been shown to reduce the required interactions to O(n). In this paper we show that the hierarchical radiosity formulation is an instance of a more general set of methods based on wavelet theory. This general framework offers a unified view of both higher order element approaches to radiosity and the hierarchical radiosity methods. After a discussion of the relevant theory, we discuss a new set of linear time hierarchical algorithms based on wavelets such as the multiwavelet family and a flatlet basis which we introduce. Initial results of experimentation with these basis sets are demonstrated and discussed.
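The sparsity that makes these algorithms linear-time can be seen with the simplest wavelet of all: a smooth transport kernel, transformed into a Haar basis, concentrates its energy in few coefficients, and thresholding the rest leaves a sparse interaction matrix. A one-level Haar sketch (the paper's multiwavelet and flatlet bases are higher-order refinements of this idea):

```python
def haar_step(v):
    """One level of the (unnormalized) Haar transform: pairwise averages
    followed by pairwise differences."""
    h = len(v) // 2
    return [(v[2 * i] + v[2 * i + 1]) / 2 for i in range(h)] + \
           [(v[2 * i] - v[2 * i + 1]) / 2 for i in range(h)]

def haar_2d(mat):
    """Apply one Haar level to the rows, then the columns, of a matrix."""
    rows = [haar_step(r) for r in mat]
    cols = [haar_step(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def sparsify(mat, eps):
    """Drop coefficients below eps. For smooth kernels most difference
    coefficients are tiny, which is the source of the O(n) behavior."""
    return [[x if abs(x) > eps else 0.0 for x in row] for row in mat]
```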
symposium on geometry processing | 2008
Ligang Liu; Lei Zhang; Yin Xu; Craig Gotsman; Steven J. Gortler
We present a novel approach to parameterize a mesh with disk topology to the plane in a shape-preserving manner. Our key contribution is a local/global algorithm, which combines a local mapping of each 3D triangle to the plane, using transformations taken from a restricted set, with a global "stitch" operation of all triangles, involving a sparse linear system. The local transformations can be taken from a variety of families, e.g. similarities or rotations, generating different types of parameterizations. In the first case, the parameterization tries to force each 2D triangle to be an as-similar-as-possible version of its 3D counterpart. This is shown to yield results identical to those of the LSCM algorithm. In the second case, the parameterization tries to force each 2D triangle to be an as-rigid-as-possible version of its 3D counterpart. This approach preserves shape as much as possible. It is simple, effective, and fast, due to pre-factoring of the linear system involved in the global phase. Experimental results show that our approach provides almost isometric parameterizations and obtains more shape-preserving results than other state-of-the-art approaches.
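In the as-rigid-as-possible variant, the local step projects each triangle's 2x2 Jacobian onto the nearest rotation; the global step then solves the pre-factored sparse system that best stitches the rotated triangles together. A sketch of the local projection only, via the closed-form 2x2 polar decomposition (function name assumed; valid when det(M) > 0):

```python
import math

def nearest_rotation_2x2(M):
    """Closest rotation to a 2x2 matrix M = ((a, b), (c, d)) in the
    Frobenius norm, assuming det(M) > 0: the rotation part of the polar
    decomposition, here in closed form. This is the per-triangle local
    step of an as-rigid-as-possible local/global iteration."""
    (a, b), (c, d) = M
    x, y = a + d, c - b            # cos/sin components up to scale
    r = math.hypot(x, y)
    return ((x / r, -y / r), (y / r, x / r))
```

Feeding it a pure rotation returns that rotation unchanged; feeding it a stretch like diag(2, 1) returns the identity, which is exactly the "rigid target" the global stitch then tries to match.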