Publication


Featured research published by Greg Turk.


International Conference on Computer Graphics and Interactive Techniques | 1992

Re-tiling polygonal surfaces

Greg Turk

This paper presents an automatic method of creating surface models at several levels of detail from an original polygonal description of a given object. Representing models at various levels of detail is important for achieving high frame rates in interactive graphics applications and also for speeding up the off-line rendering of complex scenes. Unfortunately, generating these levels of detail is a time-consuming task usually left to a human modeler. This paper shows how a new set of vertices can be distributed over the surface of a model and connected to one another to create a re-tiling of a surface that is faithful to both the geometry and the topology of the original surface. The main contributions of this paper are: 1) a robust method of connecting together new vertices over a surface, 2) a way of using an estimate of surface curvature to distribute more new vertices at regions of higher curvature, and 3) a method of smoothly interpolating between models that represent the same object at different levels of detail. The key notion in the re-tiling procedure is the creation of an intermediate model called the mutual tessellation of a surface that contains both the vertices from the original model and the new points that are to become vertices in the re-tiled surface. The new model is then created by removing each original vertex and locally re-triangulating the surface in a way that matches the local connectedness of the initial surface. This technique for surface retessellation has been successfully applied to iso-surface models derived from volume data, Connolly surface molecular models and a tessellation of a minimal surface of interest to mathematicians.


International Conference on Computer Graphics and Interactive Techniques | 2003

Multi-level partition of unity implicits

Yutaka Ohtake; Alexander G. Belyaev; Marc Alexa; Greg Turk; Hans-Peter Seidel

We present a new shape representation, the multi-level partition of unity implicit surface, that allows us to construct surface models from very large sets of points. There are three key ingredients to our approach: 1) piecewise quadratic functions that capture the local shape of the surface, 2) weighting functions (the partitions of unity) that blend together these local shape functions, and 3) an octree subdivision method that adapts to variations in the complexity of the local shape. Our approach gives us considerable flexibility in the choice of local shape functions, and in particular we can accurately represent sharp features such as edges and corners by selecting appropriate shape functions. An error-controlled subdivision leads to an adaptive approximation whose time and memory consumption depends on the required accuracy. Due to the separation of local approximation and local blending, the representation is not global and can be created and evaluated rapidly. Because our surfaces are described using implicit functions, operations such as shape blending, offsets, deformations and CSG are simple to perform.
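
The first ingredient, a local quadratic shape function, is just a least-squares fit. The sketch below fits a trivariate quadratic to scattered samples; the partition-of-unity blending and octree subdivision from the paper are omitted, and the function names are illustrative.

```python
import numpy as np

def _quadric_basis(points):
    """Monomial basis of a trivariate quadratic at each point."""
    x, y, z = points.T
    return np.column_stack([x*x, y*y, z*z, x*y, y*z, x*z,
                            x, y, z, np.ones_like(x)])

def fit_quadric(points, values):
    """Least-squares fit of a quadratic f(x, y, z) to scattered samples:
    the kind of local shape function MPU blends with partition-of-unity
    weights (the blending itself is not shown here)."""
    coef, *_ = np.linalg.lstsq(_quadric_basis(points), values, rcond=None)
    return coef

def eval_quadric(coef, points):
    return _quadric_basis(points) @ coef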


International Conference on Computer Graphics and Interactive Techniques | 1996

Simplification envelopes

Jonathan D. Cohen; Amitabh Varshney; Dinesh Manocha; Greg Turk; Hans Weber; Pankaj K. Agarwal; Frederick P. Brooks; William Wright

We propose the idea of simplification envelopes for generating a hierarchy of level-of-detail approximations for a given polygonal model. Our approach guarantees that all points of an approximation are within a user-specifiable distance from the original model and that all points of the original model are within that distance from the approximation. Simplification envelopes provide a general framework within which a large collection of existing simplification algorithms can run. We demonstrate this technique in conjunction with two algorithms, one local, the other global. The local algorithm provides a fast method for generating approximations to large input meshes (at least hundreds of thousands of triangles). The global algorithm provides the opportunity to avoid local "minima" and possibly achieve better simplifications as a result. Each approximation attempts to minimize the total number of polygons required to satisfy the above constraint. The key advantages of our approach are: a general technique providing guaranteed error bounds for genus-preserving simplification; automation of both the simplification process and the selection of appropriate viewing distances; prevention of self-intersection; preservation of sharp features; and variation of approximation distance across different portions of a model.
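
The two-sided distance guarantee can be illustrated with a toy 2D check: densely sample both curves and verify each sample lies within eps of the other curve. This is a sampled Hausdorff-style test on polylines, not the paper's offset-surface envelope construction, and the function names are hypothetical.

```python
import numpy as np

def _resample(poly, m):
    """Linearly resample a polyline of shape (k, 2) to m points.
    Parameterized by segment index, not arc length (a simplification)."""
    poly = np.asarray(poly, float)
    t = np.linspace(0, len(poly) - 1, m)
    i = np.clip(t.astype(int), 0, len(poly) - 2)
    f = (t - i)[:, None]
    return poly[i] * (1 - f) + poly[i + 1] * f

def within_envelope(orig, simp, eps, m=200):
    """Crude 2D analogue of the envelope guarantee: every sampled point
    of each polyline must lie within eps of the other polyline."""
    a, b = _resample(orig, m), _resample(simp, m)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return bool(d.min(axis=1).max() <= eps and d.min(axis=0).max() <= eps)
```

The paper enforces this kind of bound constructively, by keeping the simplified surface between two offset envelopes, rather than testing it after the fact.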


International Conference on Computer Graphics and Interactive Techniques | 1999

Shape transformation using variational implicit functions

Greg Turk; James F. O'Brien

Traditionally, shape transformation using implicit functions is performed in two distinct steps: 1) creating two implicit functions, and 2) interpolating between these two functions. We present a new shape transformation method that combines these two tasks into a single step. We create a transformation between two N-dimensional objects by casting this as a scattered data interpolation problem in N + 1 dimensions. For the case of 2D shapes, we place all of our data constraints within two planes, one for each shape. These planes are placed parallel to one another in 3D. Zero-valued constraints specify the locations of shape boundaries and positive-valued constraints are placed along the normal direction in towards the center of the shape. We then invoke a variational interpolation technique (the 3D generalization of thin-plate interpolation), and this yields a single implicit function in 3D. Intermediate shapes are simply the zero-valued contours of 2D slices through this 3D function. Shape transformation between 3D shapes can be performed similarly by solving a 4D interpolation problem. To our knowledge, ours is the first shape transformation method to unify the tasks of implicit function creation and interpolation. The transformations produced by this method appear smooth and natural, even between objects of differing topologies. If desired, one or more additional shapes may be introduced that influence the intermediate shapes in a sequence. Our method can also reconstruct surfaces from multiple slices that are not restricted to being parallel to one another.


International Conference on Computer Graphics and Interactive Techniques | 1991

Generating textures on arbitrary surfaces using reaction-diffusion

Greg Turk

This paper describes a biologically motivated method of texture synthesis called reaction-diffusion and demonstrates how these textures can be generated in a manner that directly matches the geometry of a given surface. Reaction-diffusion is a process in which two or more chemicals diffuse at unequal rates over a surface and react with one another to form stable patterns such as spots and stripes. Biologists and mathematicians have explored the patterns made by several reaction-diffusion systems. We extend the range of textures that have previously been generated by using a cascade of multiple reaction-diffusion systems in which one system lays down an initial pattern and then one or more later systems refine the pattern. Examples of patterns generated by such a cascade process include the clusters of spots on leopards known as rosettes and the web-like patterns found on giraffes. In addition, this paper introduces a method by which reaction-diffusion textures are created to match the geometry of an arbitrary polyhedral surface. This is accomplished by creating a mesh over a given surface and then simulating the reaction-diffusion process directly on this mesh. This avoids the often difficult task of assigning texture coordinates to a complex surface. A mesh is generated by evenly distributing points over the model using relaxation and then determining which points are adjacent by constructing their Voronoi regions. Textures are rendered directly from the mesh by using a weighted sum of mesh values to compute surface color at a given position. Such textures can also be used as bump maps.
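
A reaction-diffusion system of this kind is easy to simulate on a regular grid. The sketch below uses the Gray-Scott model (one well-known spot/stripe-forming system; the paper simulates its systems on a surface mesh rather than a grid, and its equations and parameters may differ).

```python
import numpy as np

def gray_scott(n=64, steps=2000, Du=0.16, Dv=0.08, F=0.035, k=0.06):
    """Explicit-Euler Gray-Scott reaction-diffusion on a periodic n x n
    grid: chemicals U and V diffuse at unequal rates and react, and
    spot/stripe patterns emerge from a small seed."""
    U = np.ones((n, n))
    V = np.zeros((n, n))
    s = n // 8  # seed a small square of V to break symmetry
    U[n//2 - s:n//2 + s, n//2 - s:n//2 + s] = 0.5
    V[n//2 - s:n//2 + s, n//2 - s:n//2 + s] = 0.5

    def lap(Z):  # 5-point Laplacian with periodic boundaries
        return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
                np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

    for _ in range(steps):
        uvv = U * V * V
        U += Du * lap(U) - uvv + F * (1 - U)
        V += Dv * lap(V) + uvv - (F + k) * V
    return U, V
```

On a surface mesh, the grid Laplacian would be replaced by a weighted sum over each mesh point's Voronoi neighbors, which is exactly why the abstract's relaxation-plus-Voronoi mesh construction matters.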


International Conference on Computer Graphics and Interactive Techniques | 1999

LCIS: a boundary hierarchy for detail-preserving contrast reduction

Jack Tumblin; Greg Turk

High contrast scenes are difficult to depict on low contrast displays without loss of important fine details and textures. Skilled artists preserve these details by drawing scene contents in coarse-to-fine order using a hierarchy of scene boundaries and shadings. We build a similar hierarchy using multiple instances of a new low curvature image simplifier (LCIS), a partial differential equation inspired by anisotropic diffusion. Each LCIS reduces the scene to many smooth regions that are bounded by sharp gradient discontinuities, and a single parameter K chosen for each LCIS controls region size and boundary complexity. With a few chosen K values (K1 > K2 > K3 > ...), LCIS makes a set of progressively simpler images, and image differences form a hierarchy of increasingly important details, boundaries and large features. We construct a high detail, low contrast display image from this hierarchy by compressing only the large features, then adding back all small details. Unlike linear filter hierarchies such as wavelets, filter banks, or image pyramids, LCIS hierarchies do not smooth across scene boundaries, avoiding "halo" artifacts common to previous contrast reducing methods and some tone reproduction operators. We demonstrate LCIS effectiveness on several example images. CR Descriptors: I.3.3 [Computer Graphics]: Picture/Image Generation - Display algorithms; I.4.1 [Image Processing and Computer Vision]: Enhancement - Digitization and Image Capture
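
The "smooth within regions, stop at sharp boundaries" behavior comes from anisotropic diffusion. The sketch below is the classic Perona-Malik scheme that inspired LCIS, not the LCIS equation itself (which is a different, curvature-based PDE); parameters are illustrative.

```python
import numpy as np

def perona_malik(img, steps=20, K=0.1, dt=0.2):
    """Perona-Malik anisotropic diffusion: diffusion toward each of the
    four neighbours is scaled by g(d) = exp(-(d/K)^2), so smoothing is
    strong inside regions and nearly zero across large gradients."""
    u = np.asarray(img, float).copy()
    g = lambda d: np.exp(-(d / K) ** 2)  # edge-stopping function
    for _ in range(steps):
        dn = np.roll(u, -1, 0) - u  # differences to the four neighbours
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Running several such simplifiers with decreasing K values and differencing the results would give a boundary-preserving detail hierarchy in the spirit of the paper.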


International Conference on Computer Graphics and Interactive Techniques | 1989

Pixel-planes 5: a heterogeneous multiprocessor graphics system using processor-enhanced memories

Henry Fuchs; John W. Poulton; John G. Eyles; Trey Greer; Jack Goldfeather; David Ellsworth; Steven Molnar; Greg Turk; Brice Tebbs; Laura Israel

This paper introduces the architecture and initial algorithms for Pixel-Planes 5, a heterogeneous multi-computer designed both for high-speed polygon and sphere rendering (1M Phong-shaded triangles/second) and for supporting algorithm and application research in interactive 3D graphics. Techniques are described for volume rendering at multiple frames per second, font generation directly from conic spline descriptions, and rapid calculation of radiosity form-factors. The hardware consists of up to 32 math-oriented processors, up to 16 rendering units, and a conventional 1280 × 1024-pixel frame buffer, interconnected by a 5 gigabit ring network. Each rendering unit consists of a 128 × 128-pixel array of processors-with-memory with parallel quadratic expression evaluation for every pixel. Implemented on 1.6 micron CMOS chips designed to run at 40MHz, this array has 208 bits/pixel on-chip and is connected to a video RAM memory system that provides 4,096 bits of off-chip memory. Rendering units can be independently reassigned to any part of the screen or to non-screen-oriented computation. As of April 1989, both hardware and software are still under construction, with initial system operation scheduled for fall 1989.


International Conference on Computer Graphics and Interactive Techniques | 2001

Texture synthesis on surfaces

Greg Turk

Many natural and man-made surface patterns are created by interactions between texture elements and surface geometry. We believe that the best way to create such patterns is to synthesize a texture directly on the surface of the model. Given a texture sample in the form of an image, we create a similar texture over an irregular mesh hierarchy that has been placed on a given surface. Our method draws upon texture synthesis methods that use image pyramids, and we use a mesh hierarchy to serve in place of such pyramids. First, we create a hierarchy of points from low to high density over a given surface, and we connect these points to form a hierarchy of meshes. Next, the user specifies a vector field over the surface that indicates the orientation of the texture. The mesh vertices on the surface are then sorted in such a way that visiting the points in order will follow the vector field and will sweep across the surface from one end to the other. Each point is then visited in turn to determine its color. The color of a particular point is found by examining the color of neighboring points and finding the best match to a similar pixel neighborhood in the given texture sample. The color assignment is done in a coarse-to-fine manner using the mesh hierarchy. A texture created this way fits the surface naturally and seamlessly.
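
The per-point color assignment step reduces to neighborhood matching. The toy below does this on a regular image grid in scanline order; the paper instead visits mesh vertices in a vector-field-guided order over a mesh hierarchy, so this grid version (with hypothetical names, a square sample, and no hierarchy) only shows the matching idea.

```python
import numpy as np

def synthesize(sample, out_n=16, seed=0):
    """Toy per-pixel texture synthesis: visit output pixels in scanline
    order and copy the sample pixel whose causal (already-visited)
    neighbourhood best matches the output's. Assumes a square sample;
    border pixels keep their random initialization."""
    rng = np.random.default_rng(seed)
    n = sample.shape[0]
    out = sample[rng.integers(0, n, (out_n, out_n)),
                 rng.integers(0, n, (out_n, out_n))]
    # Causal neighbourhood: the row above plus the pixel to the left.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]
    for i in range(1, out_n):
        for j in range(1, out_n - 1):
            best, best_d = None, np.inf
            for si in range(1, n):
                for sj in range(1, n - 1):
                    d = sum((out[i + di, j + dj] - sample[si + di, sj + dj]) ** 2
                            for di, dj in offs)
                    if d < best_d:
                        best_d, best = d, (si, sj)
            out[i, j] = sample[best]
    return out
```

The brute-force inner search is quadratic in the sample size; practical systems accelerate it, and the paper's coarse-to-fine mesh hierarchy serves the role an image pyramid plays here.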


IEEE Visualization | 1998

Fast and memory efficient polygonal simplification

Peter Lindstrom; Greg Turk

Conventional wisdom says that in order to produce high-quality simplified polygonal models, one must retain and use information about the original model during the simplification process. We demonstrate that excellent simplified models can be produced without the need to compare against information from the original geometry while performing local changes to the model. We use edge collapses to perform simplification, as do a number of other methods. We select the position of the new vertex so that the original volume of the model is maintained, and we minimize the per-triangle change in volume of the tetrahedra swept out by those triangles that are moved. We also maintain surface area near boundaries and minimize the per-triangle area changes. Calculating the edge collapse priorities and the positions of the new vertices requires only the face connectivity and the vertex locations in the intermediate model. This approach is memory efficient, allowing the simplification of very large polygonal models, and it is also fast. Moreover, simplified models created using this technique compare favorably to a number of other published simplification methods in terms of mean geometric error.
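
The volume bookkeeping rests on one building block: the signed volume of a tetrahedron. Summing such volumes between each triangle around a collapsed edge and the candidate new vertex gives an expression that is linear in the vertex position, which is what makes volume preservation a solvable constraint. A minimal sketch (hypothetical helper names, not the paper's code):

```python
import numpy as np

def signed_tet_volume(a, b, c, d):
    """Signed volume of tetrahedron (a, b, c, d); positive when d lies on
    the side of triangle (a, b, c) given by its right-hand-rule normal."""
    return np.dot(np.cross(b - a, c - a), d - a) / 6.0

def swept_volume(triangles, v):
    """Sum of signed tetra volumes between each triangle (p, q, r) and a
    candidate vertex v. Setting this to zero over the triangles incident
    to a collapsing edge expresses the volume-preservation constraint,
    and it is linear in v."""
    return sum(signed_tet_volume(p, q, r, v) for p, q, r in triangles)
```

Because only local triangles and vertex positions enter these sums, no copy of the original model is needed, which is the source of the method's memory efficiency.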


International Conference on Computer Graphics and Interactive Techniques | 1996

Image-guided streamline placement

Greg Turk; David C. Banks

Accurate control of streamline density is key to producing several effective forms of visualization of two-dimensional vector fields. We introduce a technique that uses an energy function to guide the placement of streamlines at a specified density. This energy function uses a low-pass filtered version of the image to measure the difference between the current image and the desired visual density. We reduce the energy (and thereby improve the placement of streamlines) by (1) changing the positions and lengths of streamlines, (2) joining streamlines that nearly abut, and (3) creating new streamlines to fill sufficiently large gaps. The entire process is iterated to produce streamlines that are neither too crowded nor too sparse. The resulting streamlines manifest a more hand-placed appearance than do regularly or randomly placed streamlines. Arrows can be added to the streamlines to disambiguate flow direction, and flow magnitude can be represented by the thickness, density, or intensity of the lines. CR Categories: I.3.3 [Computer Graphics]: Picture/Image generation; I.4.3 [Image Processing]: Enhancement.
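
The energy term itself is simple to sketch: low-pass filter an image of the rasterized streamlines and compare it against the target density. The version below assumes a Gaussian filter applied via the FFT on a square periodic image; the filter choice and parameters are illustrative, not the paper's.

```python
import numpy as np

def streamline_energy(density_image, target_density, sigma=3.0):
    """Energy in the spirit of the paper: blur the streamline density
    image with a Gaussian low-pass filter (applied in the Fourier
    domain, periodic boundaries) and sum the squared deviation from
    the desired uniform density."""
    n = density_image.shape[0]  # assumes a square image
    f = np.fft.fftfreq(n)
    g = np.exp(-2.0 * (np.pi * sigma * f) ** 2)  # Gaussian transfer function
    kernel = np.outer(g, g)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(density_image) * kernel))
    return float(np.sum((blurred - target_density) ** 2))
```

The placement loop would then propose moves, joins, lengthenings, and new seeds, keeping those that lower this energy.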

Collaboration


Greg Turk's top co-authors and their affiliations:

C. Karen Liu (Georgia Institute of Technology)
Chris Wojtan (Institute of Science and Technology Austria)
Jie Tan (Georgia Institute of Technology)
Peter J. Mucha (University of North Carolina at Chapel Hill)
Wenhao Yu (Georgia Institute of Technology)
Huong Quynh Dinh (Georgia Institute of Technology)
Eugene Zhang (Oregon State University)
Gary Yngve (University of Washington)
Marc Stieglitz (Georgia Institute of Technology)