Publications


Featured research published by Peter Lindstrom.


International Conference on Computer Graphics and Interactive Techniques | 1996

Real-time, continuous level of detail rendering of height fields

Peter Lindstrom; David Koller; William Ribarsky; Larry F. Hodges; Nickolas L. Faust; Gregory A. Turner

We present an algorithm for real-time level of detail reduction and display of high-complexity polygonal surface data. The algorithm uses a compact and efficient regular grid representation, and employs a variable screen-space threshold to bound the maximum error of the projected image. A coarse level of simplification is performed to select discrete levels of detail for blocks of the surface mesh, followed by further simplification through repolygonalization in which individual mesh vertices are considered for removal. These steps compute and generate the appropriate level of detail dynamically in real-time, minimizing the number of rendered polygons and allowing for smooth changes in resolution across areas of the surface. The algorithm has been implemented for approximating and rendering digital terrain models and other height fields, and consistently performs at interactive frame rates with high image quality.
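
The heart of such a scheme is the screen-space error test: a world-space approximation error is acceptable when its projection onto the screen falls below a pixel threshold. Below is a minimal sketch of that test, assuming a symmetric perspective camera; the function names and simplified projection model are ours, not the paper's exact formulation.

```python
import math

def projected_error_pixels(delta, distance, fov_y, viewport_height):
    # Pixels covered by a world-space error `delta` viewed at `distance`
    # under a symmetric perspective projection with vertical FOV `fov_y`.
    pixels_per_unit = viewport_height / (2.0 * distance * math.tan(fov_y / 2.0))
    return delta * pixels_per_unit

def vertex_removable(delta, distance, tau=1.0,
                     fov_y=math.radians(60.0), viewport_height=1080):
    # A vertex may be removed when its projected error is under tau pixels.
    return projected_error_pixels(delta, distance, fov_y, viewport_height) < tau

# A 0.5-unit error seen from 2000 units away projects to a fraction of a pixel.
print(vertex_removable(0.5, 2000.0))  # True
```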


IEEE Visualization | 1998

Fast and memory efficient polygonal simplification

Peter Lindstrom; Greg Turk

Conventional wisdom says that in order to produce high-quality simplified polygonal models, one must retain and use information about the original model during the simplification process. We demonstrate that excellent simplified models can be produced without the need to compare against information from the original geometry while performing local changes to the model. We use edge collapses to perform simplification, as do a number of other methods. We select the position of the new vertex so that the original volume of the model is maintained, and we minimize the per-triangle change in volume of the tetrahedra swept out by those triangles that are moved. We also maintain surface area near boundaries and minimize the per-triangle area changes. Calculating the edge collapse priorities and the positions of the new vertices requires only the face connectivity and the vertex locations in the intermediate model. This approach is memory efficient, allowing the simplification of very large polygonal models, and it is also fast. Moreover, simplified models created using this technique compare favorably to a number of other published simplification methods in terms of mean geometric error.
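
The volume bookkeeping reduces to signed tetrahedron volumes: each triangle moved by an edge collapse sweeps out a tetrahedron whose apex is the new vertex, and volume preservation constrains the net swept volume to zero. A minimal sketch of that constraint, assuming NumPy; this is a simplified view of the paper's placement rule, not the full solver.

```python
import numpy as np

def signed_tet_volume(a, b, c, apex):
    # Signed volume of the tetrahedron spanned by triangle (a, b, c) and apex.
    return np.dot(np.cross(b - a, c - a), apex - a) / 6.0

def net_swept_volume(faces, v_new):
    # Net signed volume swept by the faces incident to a collapsing edge when
    # its endpoints move to v_new; a volume-preserving placement makes this
    # sum vanish, which is one linear constraint on v_new.
    return sum(signed_tet_volume(a, b, c, v_new) for a, b, c in faces)

# Toy check: moving a vertex within a triangle's own plane sweeps no volume.
tri = [[np.array(p, float) for p in [(0, 0, 0), (1, 0, 0), (0, 1, 0)]]]
print(net_swept_volume(tri, np.array([0.3, 0.3, 0.0])))  # 0.0
```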


International Conference on Computer Graphics and Interactive Techniques | 2000

Out-of-core simplification of large polygonal models

Peter Lindstrom

We present an algorithm for out-of-core simplification of large polygonal datasets that are too complex to fit in main memory. The algorithm extends the vertex clustering scheme of Rossignac and Borrel [13] by using error quadric information for the placement of each cluster's representative vertex, which better preserves fine details and results in a low mean geometric error. The use of quadrics instead of the vertex grading approach in [13] has the additional benefits of requiring less disk space and only a single pass over the model rather than two. The resulting linear-time algorithm allows simplification of datasets of arbitrary complexity. In order to handle degenerate quadrics associated with (near) flat regions and regions with zero Gaussian curvature, we present a robust method for solving the corresponding underconstrained least-squares problem. The algorithm is able to detect these degeneracies and handle them gracefully. Key features of the simplification method include a bounded Hausdorff error, low mean geometric error, high simplification speed (up to 100,000 triangles/second reduction), output (but not input) sensitive memory requirements, no disk space overhead, and a running time that is independent of the order in which vertices and triangles occur in the mesh.
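
In quadric-based clustering, every triangle adds the error quadric of its supporting plane to the grid cells of its three vertices, and each cell's representative minimizes the accumulated quadratic form. A compact sketch assuming NumPy; the plain least-squares fallback here merely stands in for the paper's more careful treatment of degenerate quadrics.

```python
import numpy as np
from collections import defaultdict

def plane_quadric(p0, p1, p2):
    # Quadric (A, b) of the triangle's plane: squared distance from a point x
    # to the plane equals x^T A x + 2 b^T x + const.
    n = np.cross(p1 - p0, p2 - p0)
    norm = np.linalg.norm(n)
    if norm == 0.0:                       # skip zero-area triangles
        return np.zeros((3, 3)), np.zeros(3)
    n /= norm
    d = -np.dot(n, p0)
    return np.outer(n, n), d * n

def cluster_representatives(triangles, cell_size):
    # Single pass: accumulate quadrics per grid cell, then place each
    # cluster's representative at the quadric minimizer.
    A = defaultdict(lambda: np.zeros((3, 3)))
    b = defaultdict(lambda: np.zeros(3))
    for tri in triangles:                 # tri: three 3D NumPy points
        Q, q = plane_quadric(*tri)
        for p in tri:
            cell = tuple(np.floor(p / cell_size).astype(int))
            A[cell] += Q
            b[cell] += q
    # lstsq tolerates rank-deficient (flat or ruled) cells gracefully.
    return {c: np.linalg.lstsq(A[c], -b[c], rcond=None)[0] for c in A}
```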


IEEE Transactions on Visualization and Computer Graphics | 2002

Terrain simplification simplified: a general framework for view-dependent out-of-core visualization

Peter Lindstrom; Valerio Pascucci

We describe a general framework for out-of-core rendering and management of massive terrain surfaces. The two key components of this framework are: view-dependent refinement of the terrain mesh and a simple scheme for organizing the terrain data to improve coherence and reduce the number of paging events from external storage to main memory. Similar to several previously proposed methods for view-dependent refinement, we recursively subdivide a triangle mesh defined over regularly gridded data using longest-edge bisection. As part of this single, per-frame refinement pass, we perform triangle stripping, view frustum culling, and smooth blending of geometry using geomorphing. Meanwhile, our refinement framework supports a large class of error metrics, is highly competitive in terms of rendering performance, and is surprisingly simple to implement. Independent of our refinement algorithm, we also describe several data layout techniques for providing coherent access to the terrain data. By reordering the data in a manner that is more consistent with our recursive access pattern, we show that visualization of gigabyte-size data sets can be realized even on low-end, commodity PCs without the need for complicated and explicit data paging techniques. Rather, by virtue of dramatic improvements in multilevel cache coherence, we rely on the built-in paging mechanisms of the operating system to perform this task. The end result is a straightforward, simple-to-implement, pointerless indexing scheme that dramatically improves the data locality and paging performance over conventional matrix-based layouts.
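
Longest-edge bisection is short to state in code: a right triangle either passes the error test and is emitted, or splits at the midpoint of its hypotenuse into two children. A minimal recursive sketch over an integer grid; the error callback and threshold are placeholders, and the paper interleaves culling, stripping, and geomorphing into this same pass, which the sketch omits.

```python
def refine(apex, left, right, error, tau, depth, out):
    # `apex` is the right-angle vertex; the hypotenuse runs from left to right.
    mid = ((left[0] + right[0]) // 2, (left[1] + right[1]) // 2)
    if depth == 0 or error(mid) < tau:
        out.append((apex, left, right))               # coarse enough: emit
        return
    refine(mid, apex, left, error, tau, depth - 1, out)   # first child
    refine(mid, right, apex, error, tau, depth - 1, out)  # second child

# Refine a square grid of (n+1) x (n+1) samples, split into two base
# triangles along its diagonal, using a toy error field.
n, tris = 8, []
error = lambda v: abs(v[0] - v[1])
refine((0, 0), (n, 0), (0, n), error, 2.0, 6, tris)
refine((n, n), (0, n), (n, 0), error, 2.0, 6, tris)
print(len(tris), "triangles")
```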


IEEE Visualization | 2001

Visualization of large terrains made easy

Peter Lindstrom; Valerio Pascucci

We present an elegant and simple-to-implement framework for performing out-of-core visualization and view-dependent refinement of large terrain surfaces. Contrary to the trend of increasingly elaborate algorithms for large-scale terrain visualization, our algorithms and data structures have been designed with the primary goal of simplicity and efficiency of implementation. Our approach to managing large terrain data also departs from more conventional strategies based on data tiling. Rather than emphasizing how to segment and efficiently bring data in and out of memory, we focus on the manner in which the data is laid out to achieve good memory coherency for data accesses made in a top-down (coarse-to-fine) refinement of the terrain. We present and compare the results of using several different data indexing schemes, and propose a simple-to-compute index that yields substantial improvements in locality and speed over more commonly used data layouts. Our second contribution is a new and simple, yet easy-to-generalize method for view-dependent refinement. Similar to several published methods in this area, we use longest-edge bisection in a top-down traversal of the mesh hierarchy to produce a continuous surface with subdivision connectivity. In tandem with the refinement, we perform view frustum culling and triangle stripping. These three components are done together in a single pass over the mesh. We show how this framework supports virtually any error metric, while still being highly memory and compute efficient.
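
One index with the coherence property sought here is the Z-order (Morton) curve, which interleaves the bits of the grid coordinates; the paper's own hierarchical index differs in detail, so treat this as an illustration of the idea rather than their scheme.

```python
def morton_index(x, y, bits=16):
    # Interleave the bits of (x, y) so that nearby grid samples land at
    # nearby file offsets; a coarse-to-fine traversal then touches far
    # fewer pages than it would under a row-major layout.
    index = 0
    for i in range(bits):
        index |= ((x >> i) & 1) << (2 * i)
        index |= ((y >> i) & 1) << (2 * i + 1)
    return index

# Reorder the height field once, offline; at run time a vertex fetch uses
# morton_index(x, y) in place of the row-major offset y * width + x.
print(morton_index(3, 5))  # 39
```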


ACM Transactions on Graphics | 2000

Image-driven simplification

Peter Lindstrom; Greg Turk

We introduce the notion of image-driven simplification, a framework that uses images to decide which portions of a model to simplify. This is a departure from approaches that make polygonal simplification decisions based on geometry. As with many methods, we use the edge collapse operator to make incremental changes to a model. Unique to our approach, however, is the use of comparisons between images of the original model and those of a simplified model to determine the cost of an edge collapse. We use common graphics rendering hardware to accelerate the creation of the required images. As expected, this method produces models that are close to the original model according to image differences. Perhaps more surprising, however, is that the method yields models that have high geometric fidelity as well. Our approach also solves the quandary of how to weight the geometric distance versus appearance properties such as normals, color, and texture. All of these trade-offs are balanced by the image metric. Benefits of this approach include high fidelity silhouettes, extreme simplification of hidden portions of a model, attention to shading interpolation effects, and simplification that is sensitive to the content of a texture. In order to better preserve the appearance of textured models, we introduce a novel technique for assigning texture coordinates to the new vertices of the mesh. This method is based on a geometric heuristic that can be integrated with any edge collapse algorithm to produce high quality textured surfaces.
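
The collapse cost boils down to a distance between renderings taken before and after a candidate edge collapse, summed over a set of camera views. A minimal sketch of such a metric, assuming NumPy; render() and collapse() in the outline are hypothetical stand-ins, not functions from the paper.

```python
import numpy as np

def image_cost(before, after):
    # Root-mean-square pixel difference between two renderings, supplied as
    # float arrays scaled to [0, 1].
    return float(np.sqrt(np.mean((before - after) ** 2)))

# Greedy outline with hypothetical helpers:
#   cost[e] = sum(image_cost(render(model, cam),
#                            render(collapse(model, e), cam))
#                 for cam in cameras)
# Apply the cheapest collapse, update the affected costs, and repeat.
print(image_cost(np.zeros((4, 4)), np.full((4, 4), 0.5)))  # 0.5
```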


IEEE Transactions on Visualization and Computer Graphics | 2006

Fast and Efficient Compression of Floating-Point Data

Peter Lindstrom; Martin Isenburg

Large scale scientific simulation codes typically run on a cluster of CPUs that write/read time steps to/from a single file system. As data sets are constantly growing in size, this increasingly leads to I/O bottlenecks. When the rate at which data is produced exceeds the available I/O bandwidth, the simulation stalls and the CPUs are idle. Data compression can alleviate this problem by using some CPU cycles to reduce the amount of data that needs to be transferred. Most compression schemes, however, are designed to operate offline and seek to maximize compression, not throughput. Furthermore, they often require quantizing floating-point values onto a uniform integer grid, which disqualifies their use in applications where exact values must be retained. We propose a simple scheme for lossless, online compression of floating-point data that transparently integrates into the I/O of many applications. A plug-in scheme for data-dependent prediction makes our scheme applicable to a wide variety of data used in visualization, such as unstructured meshes, point sets, images, and voxel grids. We achieve state-of-the-art compression rates and speeds, the latter in part due to an improved entropy coder. We demonstrate that this significantly accelerates I/O throughput in real simulation runs. Unlike previous schemes, our method also adapts well to variable-precision floating-point and integer data.
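
The flavor of such lossless coding is easy to demonstrate: map each float to its bit pattern, XOR it against a prediction, and store the residual compactly, since good predictions leave mostly zero high-order bytes. The byte-level sketch below is only an illustration of that idea; the paper uses data-dependent prediction and an entropy coder, not this scheme.

```python
import struct

def compress(values):
    out, prev = bytearray(), 0
    for v in values:
        bits = struct.unpack("<Q", struct.pack("<d", v))[0]
        residual = bits ^ prev            # predict with the previous value
        prev = bits
        payload = residual.to_bytes(8, "big").lstrip(b"\x00")
        out.append(len(payload))          # 1-byte length header
        out += payload
    return bytes(out)

def decompress(blob, count):
    values, prev, pos = [], 0, 0
    for _ in range(count):
        n = blob[pos]
        prev ^= int.from_bytes(blob[pos + 1:pos + 1 + n], "big")
        pos += 1 + n
        values.append(struct.unpack("<d", struct.pack("<Q", prev))[0])
    return values

data = [1.0, 1.0, 1.0, 1.5]
blob = compress(data)
assert decompress(blob, len(data)) == data
print(len(blob), "bytes vs", 8 * len(data), "raw")  # 19 bytes vs 32 raw
```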


International Conference on Computer Graphics and Interactive Techniques | 2005

Cache-oblivious mesh layouts

Sung-Eui Yoon; Peter Lindstrom; Valerio Pascucci; Dinesh Manocha

We present a novel method for computing cache-oblivious layouts of large meshes that improve the performance of interactive visualization and geometric processing algorithms. Given that the mesh is accessed in a reasonably coherent manner, we assume no particular data access patterns or cache parameters of the memory hierarchy involved in the computation. Furthermore, our formulation extends directly to computing layouts of multi-resolution and bounding volume hierarchies of large meshes. We develop a simple and practical cache-oblivious metric for estimating cache misses. Computing a coherent mesh layout is reduced to a combinatorial optimization problem. We designed and implemented an out-of-core multilevel minimization algorithm and tested its performance on unstructured meshes composed of tens to hundreds of millions of triangles. Our layouts can significantly reduce the number of cache misses. We have observed speedups of 2 to 20 times in view-dependent rendering, collision detection, and isocontour extraction without any modification of the algorithms or runtime applications.
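
A metric is cache-oblivious when it penalizes long jumps at every scale simultaneously; summing the logarithm of the layout-index gap across mesh edges has exactly that property. A toy sketch of such a cost, a simplified stand-in for the paper's probabilistic metric.

```python
import math

def layout_cost(edges, position):
    # position[v] is vertex v's rank in the layout; log2 of an edge's index
    # gap approximates its miss cost summed over all power-of-two block
    # sizes at once, hence "cache-oblivious".
    return sum(math.log2(abs(position[u] - position[v]) + 1.0)
               for u, v in edges)

# An in-order layout of a path beats a scrambled layout of the same mesh;
# an optimizer would permute `position` to minimize this cost.
edges = [(i, i + 1) for i in range(7)]
print(layout_cost(edges, list(range(8))))            # 7.0 (every gap is 1)
print(layout_cost(edges, [0, 7, 1, 6, 2, 5, 3, 4]))  # noticeably larger
```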


IEEE Transactions on Visualization and Computer Graphics | 1999

Evaluation of memoryless simplification

Peter Lindstrom; Greg Turk

We investigate the effectiveness of the memoryless simplification approach described by Lindstrom and Turk (1998). Like many polygon simplification methods, this approach reduces the number of triangles in a model by performing a sequence of edge collapses. It differs from most recent methods, however, in that it does not retain a history of the geometry of the original model during simplification. We present numerical comparisons showing that the memoryless method results in smaller mean distance measures than many published techniques that retain geometric history. We compare a number of different vertex placement schemes for an edge collapse in order to identify the aspects of the memoryless simplification that are responsible for its high level of fidelity. We also evaluate simplification of models with boundaries, and we show how the memoryless method may be tuned to trade off between manifold and boundary fidelity. We found that the memoryless approach yields consistently low mean errors when measured by the Metro mesh comparison tool. In addition to using complex models for the evaluations, we also perform comparisons using a sphere and portions of a sphere. The simplification behavior on these simple surfaces turns out to match what we observed for the more complex models.
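
The mean-error measurements reduce to point-to-surface distances. A crude but serviceable approximation samples both surfaces densely and takes nearest-neighbor distances; the sketch below assumes SciPy, whereas Metro computes true point-to-triangle distances.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_distance(samples_a, samples_b):
    # One-sided mean distance from points sampled on surface A to points
    # sampled (more densely) on surface B; averaging both directions gives
    # a symmetric figure comparable to a mean Metro error.
    d, _ = cKDTree(samples_b).query(samples_a)
    return float(d.mean())

# Toy check: a jittered copy of a point set sits roughly a jitter away.
rng = np.random.default_rng(0)
pts = rng.random((1000, 3))
print(mean_distance(pts + rng.normal(0.0, 0.01, pts.shape), pts))
```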


IEEE Visualization | 2003

Large mesh simplification using processing sequences

Martin Isenburg; Peter Lindstrom; Stefan Gumhold; Jack Snoeyink

In this paper we show how out-of-core mesh processing techniques can be adapted to perform their computations based on the new processing sequence paradigm (Isenburg, et al., 2003), using mesh simplification as an example. We believe that this processing concept will also prove useful for other tasks, such as parameterization, remeshing, or smoothing, for which currently only in-core solutions exist. A processing sequence represents a mesh as a particular interleaved ordering of indexed triangles and vertices. This representation allows streaming very large meshes through main memory while maintaining information about the visitation status of edges and vertices. At any time, only a small portion of the mesh is kept in-core, with the bulk of the mesh data residing on disk. Mesh access is restricted to a fixed traversal order, but full connectivity and geometry information is available for the active elements of the traversal. This provides seamless and highly efficient out-of-core access to very large meshes for algorithms that can adapt their computations to this fixed ordering. The two abstractions that are naturally supported by this representation are boundary-based and buffer-based processing. We illustrate both abstractions by adapting two different simplification methods to perform their computation using a prototype of our mesh processing sequence API. Both algorithms benefit from using processing sequences in terms of improved quality, more efficient execution, and smaller memory footprints.
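
The streaming idea is simple to demonstrate: vertices, triangles, and finalization tags arrive interleaved, and only unfinalized vertices stay in memory. A minimal sketch; the line-based format and tags below are our own invention, not the processing sequence API from the paper.

```python
def stream_mesh(lines):
    # 'v x y z' introduces a vertex, 'f i j k' references earlier vertices,
    # and 'x i' finalizes vertex i, promising no later face will use it.
    active, next_id = {}, 0        # only the unfinalized region is in-core
    for line in lines:
        tag, *rest = line.split()
        if tag == "v":
            active[next_id] = tuple(map(float, rest))
            next_id += 1
        elif tag == "f":
            i, j, k = map(int, rest)
            yield (active[i], active[j], active[k])
        elif tag == "x":
            del active[int(rest[0])]   # safe to evict: vertex is finalized

data = ["v 0 0 0", "v 1 0 0", "v 0 1 0", "f 0 1 2", "x 0", "x 1", "x 2"]
for triangle in stream_mesh(data):
    print(triangle)
```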

Collaboration


Peter Lindstrom's top co-authors and their affiliations:

Kenneth I. Joy, University of California
Mark A. Duchaineau, Lawrence Livermore National Laboratory
Martin Isenburg, University of North Carolina at Chapel Hill
Bernd Hamann, University of California
Jarek Rossignac, Georgia Institute of Technology