Andrew S. Glassner
PARC
Publications
Featured research published by Andrew S. Glassner.
Archive | 1995
Andrew S. Glassner
If you are indoors and reading this document on paper, then the page may be lit by a fluorescent light bulb. The gases inside the bulb absorb high-energy electrons, and then fluoresce, or re-radiate that absorbed energy at a different frequency. The particular gases in common fluorescent bulbs are chosen to be efficient at re-radiating this energy in the visible wavelengths. If you are reading this document on-line, then you’re probably reading it on a cathode-ray tube (CRT). The face of the CRT is lined with phosphors, which absorb the high-energy electrons directed at them, and gradually release that energy over time in the visible band. The two phenomena of fluorescence and phosphorescence are not as common as simple reflection and transmission, but do have an important part to play in the complete description of macroscopic physical behavior that should be modeled by image synthesis programs. This paper presents a mathematical model for global energy balancing which includes these phenomena.
Graphics Gems II | 1991
Andrew S. Glassner
Publisher Summary This chapter analyzes winged-edge models. The winged-edge data structure is a powerful mechanism for manipulating polyhedral models. The basic idea rests on an edge and its two adjacent polygons. The name comes from imagining the two polygons as a butterfly's wings, with the edge as the butterfly's body separating them. This simple concept provides a basis for implementing a variety of powerful tools for performing high-level operations on models. There are several fundamental operations that a winged-edge library must support, and each requires carefully moving and adjusting a variety of pointers. This chapter describes solutions to some tricky pointer-stitching problems that arise in implementing a winged-edge library. It suggests a general architecture and data structures, and gives recipes for performing those basic mechanisms that require some care to implement correctly. The architecture suggested by Pat Hanrahan is discussed; its general structure is that faces, edges, and vertices are each stored in a ring.
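The heart of such a library is the record layout itself. A minimal sketch in C is given below; the structure and field names (the wing pointers, the per-type rings) are illustrative assumptions rather than the chapter's exact layout.

    /* Minimal winged-edge record sketch (illustrative field names, not the
       chapter's exact layout).  Faces, edges, and vertices are each kept on a
       doubly linked ring, following the architecture attributed to Hanrahan. */

    typedef struct Vertex Vertex;
    typedef struct Edge   Edge;
    typedef struct Face   Face;

    struct Vertex {
        double  x, y, z;        /* position                          */
        Edge   *edge;           /* one edge incident to this vertex  */
        Vertex *next, *prev;    /* ring of all vertices in the model */
    };

    struct Face {
        Edge   *edge;           /* one edge bounding this face       */
        Face   *next, *prev;    /* ring of all faces in the model    */
    };

    struct Edge {
        Vertex *v0, *v1;        /* the edge's two endpoints ("the body")   */
        Face   *left, *right;   /* the two adjacent polygons ("the wings") */
        Edge   *leftPrev,  *leftNext;   /* neighbors around the left wing  */
        Edge   *rightPrev, *rightNext;  /* neighbors around the right wing */
        Edge   *next, *prev;    /* ring of all edges in the model          */
    };

High-level operations (splitting an edge, joining faces, and so on) then reduce to careful rewiring of these pointers while keeping every ring consistent.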
ACM Transactions on Graphics | 1995
Andrew S. Glassner; Kenneth P. Fishkin; David H. Marimont; Maureen C. Stone
Rendering systems can produce images that include the entire range of visible colors. Imaging hardware, however, can reproduce only a subset of these colors: the device gamut. An image can only be correctly displayed if all of its colors lie inside the gamut of the target device. Current solutions to this problem are either to correct the scene colors by hand, or to apply gamut mapping techniques to the final image. We propose a methodology called device-directed rendering that performs scene color adjustments automatically. Device-directed rendering applies classic minimization techniques to a symbolic representation of the image that describes the relationship of the scene lights and surfaces to the pixel colors. This representation can then be evaluated to produce an image that is guaranteed to be in gamut. Although our primary application has been correcting out-of-gamut colors, this methodology can be generally applied to the problem of adjusting a scene description to accommodate constraints on the output image pixel values.
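As a toy illustration of the scene-side approach (not the paper's symbolic machinery or minimization), suppose every pixel channel is a simple linear function of the light intensities; the lights themselves can then be adjusted until every reconstructed pixel lies in gamut. The names and data below are invented for the sketch.

    #include <stdio.h>

    /* Toy sketch of scene-side gamut correction (not the paper's formulation).
     * Assume each pixel channel is linear in the light intensities:
     *     p[j] = sum_i coeff[j][i] * light[i]
     * Instead of clamping p after rendering, scale the lights themselves so
     * that every reconstructed pixel lands inside the gamut [0, 1]. */

    #define NPIXELS 3
    #define NLIGHTS 2

    static double evalPixel(const double coeff[NLIGHTS], const double light[NLIGHTS])
    {
        double p = 0.0;
        for (int i = 0; i < NLIGHTS; i++)
            p += coeff[i] * light[i];
        return p;
    }

    int main(void)
    {
        /* symbolic coefficients tying each pixel to each light (made-up data) */
        double coeff[NPIXELS][NLIGHTS] = { {0.8, 0.6}, {0.2, 1.1}, {0.5, 0.5} };
        double light[NLIGHTS] = { 1.0, 1.0 };

        /* find the brightest pixel the current lights would produce */
        double maxp = 0.0;
        for (int j = 0; j < NPIXELS; j++) {
            double p = evalPixel(coeff[j], light);
            if (p > maxp) maxp = p;
        }

        /* uniformly dim the lights just enough to bring everything in gamut */
        double s = (maxp > 1.0) ? 1.0 / maxp : 1.0;
        for (int i = 0; i < NLIGHTS; i++)
            light[i] *= s;

        for (int j = 0; j < NPIXELS; j++)
            printf("pixel %d = %g\n", j, evalPixel(coeff[j], light));
        return 0;
    }

A uniform dimming factor is the crudest possible adjustment; the paper's contribution is finding a minimal, targeted change to the scene description rather than this blanket scale.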
Graphics gems | 1994
Andrew S. Glassner
Abstract Many polygonal models are used as piecewise-flat approximations of curved models, and are thus “smooth-shaded” when displayed. To apply Gouraud or Phong shading to a model one needs to compute a surface normal at every vertex; often this simply involves averaging the surface normal of each polygon sharing that vertex. This Gem provides a general-purpose procedure that computes vertex normals from any list of polygons. I assume that the polygons describe a simple manifold in 3D space, so that every local neighborhood is a flat sheet. I also assume that the structure is a mesh; that is, there are no “T” vertices, isolated vertices, or dangling edges. Except for the addition of normals at the vertices, the input model is unchanged. I infer the topology of the model by building a data structure that allows quick access to all the polygons that have a vertex in the same region of space. To find the normal for a selected vertex, one needs only search the region surrounding the vertex and then average the normals for all polygons that share that vertex.
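A condensed sketch of the averaging step appears below. It assumes the polygons already share vertex indices, which sidesteps the spatial data structure the Gem uses to discover sharing, and it uses Newell's method for the per-polygon normals; the function and type names are illustrative.

    #include <math.h>
    #include <string.h>

    /* Sketch of averaged vertex normals (simplified from the Gem: shared
     * vertex indices stand in for the spatial lookup over positions). */

    typedef struct { double x, y, z; } Vec3;

    /* Newell's method: robust plane normal for an arbitrary planar polygon. */
    static Vec3 polygonNormal(const Vec3 *v, int n)
    {
        Vec3 nrm = {0, 0, 0};
        for (int i = 0; i < n; i++) {
            const Vec3 *a = &v[i], *b = &v[(i + 1) % n];
            nrm.x += (a->y - b->y) * (a->z + b->z);
            nrm.y += (a->z - b->z) * (a->x + b->x);
            nrm.z += (a->x - b->x) * (a->y + b->y);
        }
        return nrm;
    }

    /* Average the normals of every polygon that shares each vertex.
     * verts: nv positions; polys: concatenated index lists; counts: verts per poly. */
    void vertexNormals(const Vec3 *verts, int nv,
                       const int *polys, const int *counts, int npolys,
                       Vec3 *out)
    {
        memset(out, 0, (size_t)nv * sizeof(Vec3));
        const int *p = polys;
        for (int k = 0; k < npolys; k++) {
            Vec3 pv[64];                       /* assumes <= 64 vertices per polygon */
            int n = counts[k];
            for (int i = 0; i < n; i++) pv[i] = verts[p[i]];
            Vec3 nrm = polygonNormal(pv, n);
            for (int i = 0; i < n; i++) {      /* accumulate at each vertex */
                out[p[i]].x += nrm.x;
                out[p[i]].y += nrm.y;
                out[p[i]].z += nrm.z;
            }
            p += n;
        }
        for (int i = 0; i < nv; i++) {         /* normalize the averages */
            double len = sqrt(out[i].x * out[i].x + out[i].y * out[i].y +
                              out[i].z * out[i].z);
            if (len > 0.0) { out[i].x /= len; out[i].y /= len; out[i].z /= len; }
        }
    }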
IEEE Computer Graphics and Applications | 1992
Andrew S. Glassner
A collection of rewrite rules that transform simple polyhedral models (and polyhedral control meshes) into richer, more visually expressive models is presented. All are based on simple transformations that can be easily implemented and extended. The focus is mostly on techniques of cell replacement, where a unit of the structure (typically a polygon or polyhedron) is replaced by one or more variant units. Such replacements are encoded in rewrite rules. The approach presented helps a modeler who, having created a simple low-detail model, wants to amplify that shape's complexity. Features of the substrate model (such as symmetry relationships and prominent surface features) are generally conserved and often echoed and enhanced in the amplified shape.
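One possible flavor of such a rule, assuming triangular cells: replace each triangle with three triangles fanned around a centroid that has been pushed out along the face normal. The particular rule and the names below are illustrative, not taken from the article.

    #include <math.h>

    /* Illustrative cell-replacement rewrite rule (not a rule from the article):
     * each triangle is replaced by three triangles fanned around its centroid,
     * with the centroid displaced along the face normal to add relief. */

    typedef struct { double x, y, z; } Vec3;
    typedef struct { Vec3 a, b, c; } Tri;

    static Vec3 sub(Vec3 p, Vec3 q) { Vec3 r = {p.x-q.x, p.y-q.y, p.z-q.z}; return r; }
    static Vec3 cross(Vec3 p, Vec3 q)
    {
        Vec3 r = { p.y*q.z - p.z*q.y, p.z*q.x - p.x*q.z, p.x*q.y - p.y*q.x };
        return r;
    }

    /* Apply the rule to one triangle; writes three triangles into out[0..2]. */
    void replaceCell(Tri t, double height, Tri out[3])
    {
        Vec3 c = { (t.a.x + t.b.x + t.c.x) / 3.0,
                   (t.a.y + t.b.y + t.c.y) / 3.0,
                   (t.a.z + t.b.z + t.c.z) / 3.0 };
        Vec3 n = cross(sub(t.b, t.a), sub(t.c, t.a));
        double len = sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
        if (len > 0.0) { n.x /= len; n.y /= len; n.z /= len; }
        c.x += n.x * height;  c.y += n.y * height;  c.z += n.z * height;

        out[0].a = t.a; out[0].b = t.b; out[0].c = c;
        out[1].a = t.b; out[1].b = t.c; out[1].c = c;
        out[2].a = t.c; out[2].b = t.a; out[2].c = c;
    }

Applying such a rule repeatedly to every cell of a low-detail model is what amplifies its complexity while echoing the structure of the substrate.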
Graphics Gems II | 1991
Andrew S. Glassner
Publisher Summary This chapter elaborates an adaptive run-length encoding method. Some compression techniques work in two dimensions, storing regions of the picture in a data structure such as a quadtree. High compression can be achieved this way for some images, but finding the best decomposition of the image can be difficult, and fast decompression of selected regions also can be hard. A less sophisticated but simpler image storage technique is known as run-length encoding. However, this scheme is not efficient for images where the colors change quickly. If each pixel in a scanline is different than the preceding pixel, then run-length encoding actually will double the size of the file relative to a straightforward dump, because each pixel will be preceded by a count byte with value 0. To handle this situation, the count byte is redefined as a signed 8-bit integer. If the value is zero or positive, it indicates a run of length one more than its value; but if the count is negative, it means that what follows is a dump. This is called adaptive run-length encoding. A straightforward application of this technique usually produces a smaller file than a raw dump, but not always.
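A decoder for the scheme as described might look like the following sketch. The convention that a negative count -n is followed by n literal pixels is an assumption for illustration, since formats differ on that detail.

    #include <stddef.h>

    /* Sketch of an adaptive run-length decoder, one byte per pixel.
     * Convention assumed here (formats vary on this detail):
     *   count >= 0 : the next byte is a pixel repeated (count + 1) times
     *   count <  0 : the next (-count) bytes are literal ("dumped") pixels
     * Returns the number of pixels written, or -1 on malformed/overflowing input. */
    long rleDecode(const signed char *src, size_t srcLen,
                   unsigned char *dst, size_t dstLen)
    {
        size_t si = 0, di = 0;
        while (si < srcLen) {
            int count = src[si++];
            if (count >= 0) {                       /* a run */
                size_t n = (size_t)count + 1;
                if (si >= srcLen || di + n > dstLen) return -1;
                unsigned char pixel = (unsigned char)src[si++];
                while (n--) dst[di++] = pixel;
            } else {                                /* a dump of literals */
                size_t n = (size_t)(-count);
                if (si + n > srcLen || di + n > dstLen) return -1;
                while (n--) dst[di++] = (unsigned char)src[si++];
            }
        }
        return (long)di;
    }

The corresponding encoder is where the "adaptive" decision lives: it chooses between emitting a run or a dump depending on whether the next pixels repeat.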
Graphics Gems II | 1991
Andrew S. Glassner
Publisher Summary This chapter describes a simple viewing geometry. Sometimes, it is handy to be able to set up a simple viewing geometry; for example, packages without matrix libraries, quick preview hacks, and simple ray tracers all can benefit from a simple viewing construction. This chapter describes one such simple viewing geometry. The input is a viewpoint E, a gaze direction and distance G, an up vector U, and viewing half-angles θ and φ. The output is a screen midpoint M and two vectors, H and V, which are used to sweep the screen. The chapter illustrates the setup for this viewing geometry. If the origin is assumed in the lower left, then any point S on the screen may be specified by (sx, sy), both numbers between 0 and 1.
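A sketch of the construction in C follows; the cross-product orientation and the scaling of H and V by the tangents of the half-angles are assumptions about the Gem's conventions.

    #include <math.h>

    /* Sketch of the simple viewing geometry.  Input: eye E, gaze vector G
     * (direction and distance), up vector U, half-angles theta (horizontal)
     * and phi (vertical).  Output: screen midpoint M and sweep vectors H, V.
     * The handedness and scaling conventions here are assumptions. */

    typedef struct { double x, y, z; } Vec3;

    static Vec3 add(Vec3 a, Vec3 b) { Vec3 r = {a.x+b.x, a.y+b.y, a.z+b.z}; return r; }
    static Vec3 scale(Vec3 a, double s) { Vec3 r = {a.x*s, a.y*s, a.z*s}; return r; }
    static double length(Vec3 a) { return sqrt(a.x*a.x + a.y*a.y + a.z*a.z); }
    static Vec3 norm(Vec3 a) { return scale(a, 1.0 / length(a)); }
    static Vec3 cross(Vec3 a, Vec3 b)
    {
        Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
        return r;
    }

    void viewSetup(Vec3 E, Vec3 G, Vec3 U, double theta, double phi,
                   Vec3 *M, Vec3 *H, Vec3 *V)
    {
        double d = length(G);                            /* gaze distance    */
        *M = add(E, G);                                  /* screen midpoint  */
        *H = scale(norm(cross(G, U)), d * tan(theta));   /* horizontal sweep */
        *V = scale(norm(cross(*H, G)), d * tan(phi));    /* vertical sweep   */
    }

    /* A point on the screen for (sx, sy) in [0,1]^2, origin at lower left:
     *     S = M + (2*sx - 1)*H + (2*sy - 1)*V                              */
    Vec3 screenPoint(Vec3 M, Vec3 H, Vec3 V, double sx, double sy)
    {
        return add(M, add(scale(H, 2.0*sx - 1.0), scale(V, 2.0*sy - 1.0)));
    }

With this setup, a simple ray tracer fires a ray from E through screenPoint(M, H, V, sx, sy) for each pixel.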
ACM Transactions on Graphics | 1990
Andrew S. Glassner
Many two-dimensional graphics programs provide a user with a rectangular screen window for viewing a two-dimensional image. Common examples of this underlying "world" image include text, line, or shaded pictures, and plots of one- or two-dimensional functions. Typically the screen image cannot display the world image with a 1:1 ratio between screen units (pixels) and the smallest resolvable world units. Thus the screen image is typically scaled and panned across the world image. The scaling is often differential, i.e., different scaling factors are applied in X and Y. Suppose a user is viewing a function y = f(x) at a ratio of 1:1. Increasing the scale factor in X will bring more data into view, while retaining vertical amplitude; increasing the scale factor in Y will provide a finer view of the values of the function, while retaining the range plotted. We have developed a compact control device which allows a user to continuously adjust the aspect ratio of the world data presented to the window. Our model is based on the projection of the window onto the untouched world data. If the screen window is narrow and tall in the world data, then the world data will be expanded horizontally and compressed vertically when displayed on the screen (note that the window itself never changes size on the screen). Accompanying the aspect ratio selection is a zoom multiplier, which can uniformly grow or shrink the screen window's image in the world. We also include variable-speed scrolling controls. Scrolling and uniform zooming are decoupled from differential scaling. The advantage of our technique is that the user need not independently scale X and Y while searching for the proper scaling of the data, although both may be adjusted individually, and may also be adjusted simultaneously with a coupled, single-position device. Thus a single-button input device (such as a mouse) is all that is needed to control any aspect of the display. The technique has the additional advantage of being nonmodal, so the user need not remember any state during operation.
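A rough sketch of how a single control position might map to the two scale factors is shown below; the exponential mapping and the names are illustrative assumptions, not the paper's actual device.

    #include <math.h>
    #include <stdio.h>

    /* Rough sketch of a single-position aspect-ratio control.  A cursor
     * offset (dx, dy) from the control's center picks the shape of the
     * window's projection in the world; a separate multiplier handles
     * uniform zoom.  The exponential mapping is an illustrative choice. */
    void aspectControl(double dx, double dy,      /* cursor offset, roughly -1..1 */
                       double zoom,               /* uniform zoom multiplier      */
                       double *scaleX, double *scaleY)
    {
        /* Pushing right brings more world data into view in X;
           pushing up brings more world data into view in Y. */
        *scaleX = zoom * exp(dx);
        *scaleY = zoom * exp(dy);
    }

    int main(void)
    {
        double sx, sy;
        aspectControl(0.5, -0.25, 1.0, &sx, &sy);
        printf("world units per screen unit: x = %g, y = %g\n", sx, sy);
        return 0;
    }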
Graphics Gems III (IBM Version) | 1992
Andrew S. Glassner
Publisher Summary This chapter presents an overview of darklights. Every image designer knows that appropriate illumination is an important part of an effective image. The play of light on the surfaces of the scene gives the image depth and mood, indicates position and character, reveals objective visual information, and suggests subjective emotional impressions. Even objective physical rendering, when used in fields such as scientific visualization, requires sensitive judgement for the placement and control of illumination. The chapter discusses a tool that is used subconsciously by many painters and quite explicitly by many graphics designers. Darklights can cause the final pixel values to dip below zero. The techniques that are applied to map pixel values greater than one into gamut should also be applied to pixel values that dip below zero. As with all lights, darklights must be designed and positioned with care to achieve an effect. They are useful for simulating fuzzy shadows, darkening the corners of rooms, and changing the relative brightness of objects without affecting the basic lighting scheme. They are naturally available in virtually all rendering systems. If used with care and sensitivity, darklights can extend one's expressive power with the medium of image synthesis.
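A sketch of the idea in a simple diffuse shading loop follows. The light structure is invented for the example, and the hard clamp at the end stands in for the gamut-mapping step the chapter actually recommends.

    /* Sketch of darklights in a simple diffuse shading loop.  A darklight is
     * an ordinary light with negative intensity; it subtracts illumination
     * where it "shines".  The hard clamp at the end is the crudest possible
     * stand-in for the gamut-mapping step the chapter recommends. */

    typedef struct {
        double dirX, dirY, dirZ;   /* direction toward the light (unit length) */
        double intensity;          /* negative for a darklight                 */
    } Light;

    static double clamp01(double v) { return v < 0.0 ? 0.0 : (v > 1.0 ? 1.0 : v); }

    double shadeDiffuse(double nx, double ny, double nz,    /* surface normal */
                        double albedo,
                        const Light *lights, int nlights)
    {
        double value = 0.0;
        for (int i = 0; i < nlights; i++) {
            double ndotl = nx * lights[i].dirX + ny * lights[i].dirY +
                           nz * lights[i].dirZ;
            if (ndotl > 0.0)                    /* light (or darklight) hits */
                value += albedo * ndotl * lights[i].intensity;
        }
        return clamp01(value);   /* may have dipped below zero: map into gamut */
    }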
Graphics Gems III | 1992
Andrew S. Glassner
Publisher Summary Much of graphics rendering today takes place in a square pixel grid. This is because the boundaries of the square grid are all vertical and horizontal lines, which eases the computational burden of sampling, filtering, and reconstruction. But it is well known that the square grid is not the most uniform in terms of sampling densities. In two dimensions, the triangular lattice is more isotropic. The triangular lattice permits two tilings by regular polygons: hexagonal and triangular. Both unit cells have been used for image processing. This chapter presents an inexpensive antialiasing technique for triangular pixels. The prefiltered contribution of the polygon can be parameterized for each element of the grid by building a look-up table based on the vertices of the polygon and the polygon's intersections with the edges of the triangle. A box filter simply returns the amount of area within the triangle; higher-order filters may apply a weighting mask first.
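For the box-filter case, the coverage of a triangular pixel is simply the area of the polygon clipped to the triangle; the sketch below computes that directly with Sutherland-Hodgman clipping instead of the chapter's look-up table, and its names are illustrative.

    /* Box-filter coverage of a triangular pixel: clip the polygon against the
     * triangle's three edges (Sutherland-Hodgman) and return the clipped area
     * as a fraction of the triangle's area.  The look-up-table acceleration
     * described in the chapter is omitted; this is the direct computation. */

    typedef struct { double x, y; } Pt;

    static double signedArea(const Pt *p, int n)
    {
        double a = 0.0;
        for (int i = 0; i < n; i++) {
            int j = (i + 1) % n;
            a += p[i].x * p[j].y - p[j].x * p[i].y;
        }
        return 0.5 * a;
    }

    /* Clip polygon "in" against the half-plane to the left of edge a->b. */
    static int clipEdge(const Pt *in, int n, Pt a, Pt b, Pt *out)
    {
        int m = 0;
        for (int i = 0; i < n; i++) {
            Pt p = in[i], q = in[(i + 1) % n];
            double sp = (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
            double sq = (b.x - a.x) * (q.y - a.y) - (b.y - a.y) * (q.x - a.x);
            if (sp >= 0.0) out[m++] = p;            /* keep inside vertices  */
            if ((sp >= 0.0) != (sq >= 0.0)) {       /* edge crosses the line */
                double t = sp / (sp - sq);
                out[m].x = p.x + t * (q.x - p.x);
                out[m].y = p.y + t * (q.y - p.y);
                m++;
            }
        }
        return m;
    }

    /* Fraction of triangle (t0,t1,t2), given counterclockwise, covered by the
     * counterclockwise polygon poly[0..n-1].  Assumes a small n (< 16 or so). */
    double triPixelCoverage(Pt t0, Pt t1, Pt t2, const Pt *poly, int n)
    {
        Pt bufA[64], bufB[64];
        int m = n;
        for (int i = 0; i < n; i++) bufA[i] = poly[i];
        m = clipEdge(bufA, m, t0, t1, bufB);
        m = clipEdge(bufB, m, t1, t2, bufA);
        m = clipEdge(bufA, m, t2, t0, bufB);
        if (m < 3) return 0.0;
        Pt tri[3] = { t0, t1, t2 };
        return signedArea(bufB, m) / signedArea(tri, 3);
    }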