Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Maryann Simmons is active.

Publication


Featured research published by Maryann Simmons.


International Conference on Computer Graphics and Interactive Techniques | 2005

JPEG-HDR: a backwards-compatible, high dynamic range extension to JPEG

Greg Ward; Maryann Simmons

The transition from traditional 24-bit RGB to high dynamic range (HDR) images is hindered by excessively large file formats with no backwards compatibility. In this paper, we demonstrate a simple approach to HDR encoding that parallels the evolution of color television from its grayscale beginnings. A tone-mapped version of each HDR original is accompanied by restorative information carried in a subband of a standard output-referred image. This subband contains a compressed ratio image, which when multiplied by the tone-mapped foreground, recovers the HDR original. The tone-mapped image data is also compressed, and the composite is delivered in a standard JPEG wrapper. To naive software, the image looks like any other, and displays as a tone-mapped version of the original. To HDR-enabled software, the foreground image is merely a tone-mapping suggestion, as the original pixel data are available by decoding the information in the subband. Our method further extends the color range to encompass the visible gamut, enabling a new generation of display devices that are just beginning to enter the market.
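The decoding step described above reduces to a per-pixel multiply. The following is a minimal sketch of that idea, with illustrative names and a toy tone-mapping operator rather than the published JPEG-HDR format: the HDR image is stored as a tone-mapped foreground plus a ratio image, and multiplying the two recovers the original.

```python
# Toy illustration of the subband ratio-image idea: HDR = foreground * ratio.
# The tone-mapping operator and names here are assumptions for illustration.

def tone_map(hdr, max_display=255.0):
    """Toy global operator: scale luminances into the displayable range."""
    peak = max(hdr)
    return [v / peak * max_display for v in hdr]

def encode(hdr):
    fg = tone_map(hdr)
    ratio = [h / f for h, f in zip(hdr, fg)]  # the subband ratio image
    return fg, ratio

def decode(fg, ratio):
    """Recover the HDR original by multiplying foreground by ratio."""
    return [f * r for f, r in zip(fg, ratio)]

hdr = [0.01, 1.0, 250.0, 10000.0]      # luminances spanning ~6 orders
fg, ratio = encode(hdr)
recovered = decode(fg, ratio)
```

Naive viewers would display only `fg`; HDR-aware decoders also read the ratio subband and reconstruct the full range.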


Applied Perception in Graphics and Visualization | 2004

Subband encoding of high dynamic range imagery

Greg Ward; Maryann Simmons

The transition from traditional 24-bit RGB to high dynamic range (HDR) images is hindered by excessively large file formats with no backwards compatibility. In this paper, we propose a simple approach to HDR encoding that parallels the evolution of color television from its grayscale beginnings. A tone-mapped version of each HDR original is accompanied by restorative information carried in a subband of a standard 24-bit RGB format. This subband contains a compressed ratio image, which when multiplied by the tone-mapped foreground, recovers the HDR original. The tone-mapped image data may be compressed, permitting the composite to be delivered in a standard JPEG wrapper. To naive software, the image looks like any other, and displays as a tone-mapped version of the original. To HDR-enabled software, the foreground image is merely a tone-mapping suggestion, as the original pixel data are available by decoding the information in the subband. We present specifics of the method and the results of encoding a series of synthetic and natural HDR images, using various published global and local tone-mapping operators to generate the foreground images. Errors are visible in only a very small percentage of the pixels after decoding, and the technique requires only a modest amount of additional space for the subband data, independent of image size.
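One reason the subband stays small is that the ratio image varies slowly and can be stored log-encoded in 8 bits. The toy quantizer below (an assumption-laden sketch, not the paper's codec) shows that round-tripping a ratio through an 8-bit log encoding keeps the relative error small.

```python
# Toy 8-bit log quantizer for ratio-image values; the range bounds
# r_min/r_max are illustrative assumptions, not the paper's parameters.
import math

def quantize_log_ratio(r, r_min=0.01, r_max=100.0):
    """Map a ratio onto an 8-bit code via log encoding."""
    span = math.log(r_max) - math.log(r_min)
    t = (math.log(r) - math.log(r_min)) / span
    return round(max(0.0, min(1.0, t)) * 255)

def dequantize(code, r_min=0.01, r_max=100.0):
    span = math.log(r_max) - math.log(r_min)
    return math.exp(math.log(r_min) + (code / 255.0) * span)

ratios = [0.02, 0.5, 1.0, 7.0, 90.0]
roundtrip = [dequantize(quantize_log_ratio(r)) for r in ratios]
# The log range ln(10000) ~ 9.21 split over 256 codes gives ~3.7% per
# code, so worst-case relative error stays below ~2%.
```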


Eurographics Symposium on Rendering Techniques | 2000

Tapestry: A Dynamic Mesh-based Display Representation for Interactive Rendering

Maryann Simmons; Carlo H. Séquin

This paper presents a new method for interactive viewing of dynamically sampled environments. We introduce a 3D mesh-based reconstruction called a tapestry that serves both as the display representation and as a cache that supports the re-use of samples across views. As the user navigates through the environment, the mesh continuously evolves to provide an appropriate image reconstruction for the current view. In addition, the reconstruction process provides feedback to the renderer to guide adaptive sampling.
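The sample-reuse idea can be sketched in miniature: cached samples keep their scene positions, so when the view moves they are re-projected, and only the uncovered pixels trigger new rendering requests. This is an illustrative 1D toy, not the paper's mesh-based reconstruction.

```python
# Illustrative sketch of view-independent sample caching: samples are
# keyed by scene position and re-projected for each new view; "gaps"
# are the pixels that still need fresh samples from the renderer.

def project(point, view_offset, width=8):
    """Toy orthographic projection of a 1D scene point onto a pixel row."""
    x = point - view_offset
    return int(x) if 0 <= x < width else None

def reconstruct(cache, view_offset, width=8):
    row = [None] * width
    for point, radiance in cache.items():
        px = project(point, view_offset, width)
        if px is not None:
            row[px] = radiance
    gaps = [i for i, v in enumerate(row) if v is None]
    return row, gaps            # gaps drive adaptive sampling requests

cache = {0.5: 'a', 2.5: 'b', 5.5: 'c'}      # scene point -> cached sample
row, gaps = reconstruct(cache, view_offset=0.0)
# After the user moves, surviving samples are reused; only new gaps render.
row2, gaps2 = reconstruct(cache, view_offset=1.0)
```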


ACM Transactions on Graphics | 1999

The holodeck ray cache: an interactive rendering system for global illumination in nondiffuse environments

Gregory J. Ward; Maryann Simmons

We present a new method for rendering complex environments using interactive, progressive, view-independent, parallel ray tracing. A four-dimensional holodeck data structure serves as a rendering target and caching mechanism for interactive walk-throughs of nondiffuse environments with full global illumination. Ray sample density varies locally according to need, and on-demand ray computation is supported in a parallel implementation. The holodeck file is stored on disk and cached in memory by a server using a least-recently-used (LRU) beam-replacement strategy. The holodeck server coordinates separate ray evaluation and display processes, optimizing disk and memory usage. Different display systems are supported by specialized drivers, which handle display rendering, user interaction, and input. The display driver creates an image from ray samples sent by the server and permits the manipulation of local objects, which are rendered dynamically using approximate lighting computed from holodeck samples. The overall method overcomes many of the conventional limits of interactive rendering in scenes with complex surface geometry and reflectance properties, through an effective combination of ray tracing, caching, and hardware rendering.
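The LRU beam-replacement strategy mentioned above can be sketched with a small in-memory cache. The names and capacity here are illustrative; the real server pages beams between disk and RAM.

```python
# Minimal sketch of an LRU beam cache: touching a beam marks it most
# recently used; inserting past capacity evicts the least recently used.
from collections import OrderedDict

class BeamCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.beams = OrderedDict()          # beam_id -> ray samples

    def get(self, beam_id, load_from_disk):
        if beam_id in self.beams:
            self.beams.move_to_end(beam_id)     # mark most recently used
        else:
            if len(self.beams) >= self.capacity:
                self.beams.popitem(last=False)  # evict the LRU beam
            self.beams[beam_id] = load_from_disk(beam_id)
        return self.beams[beam_id]

cache = BeamCache(capacity=2)
cache.get('beam-1', lambda b: [b])
cache.get('beam-2', lambda b: [b])
cache.get('beam-1', lambda b: [b])   # touch beam-1: now most recent
cache.get('beam-3', lambda b: [b])   # evicts beam-2, the LRU entry
```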


International Journal of Shape Modeling | 1998

2D SHAPE DECOMPOSITION AND THE AUTOMATIC GENERATION OF HIERARCHICAL REPRESENTATIONS

Maryann Simmons; Carlo H. Séquin

Many of the tasks that are performed with objects in a virtual environment, such as collision detection, rendering, and visibility culling, are based on the geometric structure of the objects. These operations are most efficient when the objects are represented with well-balanced trees of hierarchical components that reflect the geometric structure of the object at resolution levels appropriate for the particular task. This paper presents a framework for automatically generating hierarchical 2D object representations specialized for geometric tasks. The approach first constructs a multi-resolution representation that encapsulates the salient geometric features of an object, as well as its topological decomposition into parts. The main components of the representation are a Cell-Based Representation that provides spatial filtering at the desired feature resolution, and an Axial Shape Graph that captures local shape information as well as global information about the overall geometric structure of the object. Using the Axial Shape Graph, the task of shape decomposition is reduced to a graph partitioning problem whose solution results in a well-balanced part hierarchy. We show that this structure can be utilized to generate hierarchical representations specialized for the task of Collision Detection in 2D environments.
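The reduction of shape decomposition to graph partitioning can be illustrated with a toy recursive bisection over a weighted chain of axial nodes. The split heuristic below is a hedged stand-in, not the paper's partitioning algorithm; it shows only how balanced splits yield a part hierarchy.

```python
# Toy balanced hierarchy construction: treat the axial shape graph as a
# weighted chain and recursively split where the two sides' total
# weights are closest, producing a binary part tree.

def bisect(weights, lo, hi):
    """Find the split index minimizing |left total - right total|."""
    total = sum(weights[lo:hi])
    best, best_diff, running = lo + 1, total, 0
    for i in range(lo + 1, hi):
        running += weights[i - 1]
        diff = abs(total - 2 * running)
        if diff < best_diff:
            best, best_diff = i, diff
    return best

def build_hierarchy(weights, lo=0, hi=None):
    if hi is None:
        hi = len(weights)
    if hi - lo <= 1:
        return lo                        # leaf: a single axial node
    mid = bisect(weights, lo, hi)
    return (build_hierarchy(weights, lo, mid),
            build_hierarchy(weights, mid, hi))

tree = build_hierarchy([4, 1, 1, 2])     # node weights along the axis
```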


International Conference on Computer Graphics and Interactive Techniques | 2003

Per-pixel smooth shader level of detail

Maryann Simmons; Dave Shreiner

Programmable hardware fragment shaders are subject to resource limitations, restricting the complexity of the shader, or the number of shaded objects that can be rendered at interactive rates. In graphics, there is a rich tradition of Level of Detail (LOD) techniques for geometric models [Akenine-Moller and Haines 2002]. Similarly, shader LOD [Olano and Kuehne 2002] has recently been shown to improve performance when using shaders. Three important aspects illustrated by geometric LOD techniques are relevant: 1) the definition/generation of appropriate shaders for each level of detail, 2) the choice of LOD at run-time, and 3) smooth transitioning between levels to avoid distracting “popping” effects.

This sketch investigates per-pixel shader LOD for interactive applications. The focus here is on the latter two aspects presented above. A prototype solution is presented using the OpenGL Shader framework (which evolved from [Peercy et al. 2000]). In OpenGL Shader, shaders can be hand-crafted or generated automatically by the compiler to perform LOD with a series of if-else statements that switch between the levels based on a user-defined run-time parameter (usually distance from the viewer). There are two problems with this base technique: first, the same level is used for the entire object, instead of per-pixel level selection; and second, the transitions between levels are sharp, resulting in visual artifacts. In this sketch we present a solution that produces smooth transitions between levels, and finer-grain level selection. Figure 1 shows an example. If the shader has access to the sampling rate at each pixel, it can use this information to choose which LOD to use, and to blend smoothly between levels. The derivation of sampling rate for an arbitrary shader is difficult, and the shader may not have sufficient information to calculate derivatives.

If the shader, however, is structured as a procedural texture generator, we can assign texture coordinates to the vertices of the object being shaded, along with proxy texture(s) that approximate the sampling rate of the shader. We then utilize MIP-mapping hardware to determine the appropriate LOD and blend factor, all on a per-pixel basis. OpenGL Shader’s high-level language (ISL) supports conditional if-else blocks, as well as per-pixel framebuffer queries. We construct a proxy MIP texture where the base (most detailed) level contains the value 1.0 at each pixel, at a resolution that approximates the desired shader sampling rate. The subsequent MIP levels interpolate smoothly down to 0.0. The LOD parameter (e.g. distance value) used to choose the LOD level is replaced by a query of the proxy MIP texture. With MIP inter-level linear interpolation, the MIP value will vary smoothly between levels, and therefore can
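The proxy-texture selection step can be sketched on the CPU. Everything below is a hypothetical stand-in (the footprint-to-level mapping and the two "shader levels" are assumptions): the proxy MIP lookup yields a value that ramps smoothly from 1.0 down to 0.0 across levels, and that value blends between a detailed and a cheap shading result per pixel.

```python
# CPU stand-in for the proxy MIP texture trick: a pixel's texel
# footprint selects a continuous MIP level, whose ramp value blends
# smoothly between two hypothetical shader levels of detail.
import math

def proxy_mip_value(pixel_footprint, levels=4):
    """Ramp value the proxy texture would return: 1.0 at the base
    level, interpolating linearly down to 0.0 at the coarsest level."""
    lod = max(0.0, math.log2(max(pixel_footprint, 1.0)))
    return max(0.0, 1.0 - lod / levels)

def shade(pixel_footprint, detailed=1.0, cheap=0.25):
    t = proxy_mip_value(pixel_footprint)
    return t * detailed + (1.0 - t) * cheap   # smooth per-pixel blend

near = shade(pixel_footprint=1.0)    # one texel per pixel: full detail
far = shade(pixel_footprint=16.0)    # coarse footprint: cheap level
```

Because the blend factor comes from a per-pixel lookup, neighboring pixels on the same object can use different levels without the sharp transitions of a whole-object if-else switch.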


Virtual Systems and Multimedia | 2001

Citywalk: a second generation walkthrough system

Richard William Bukowski; Laura Michele Downs; Maryann Simmons; Carlo H. Séquin; Seth J. Teller

The architectural framework of an advanced virtual walkthrough environment is described and placed in perspective with first generation systems built during the last two decades. This framework integrates support for scalable, distributed, interactive models with plug-in physical simulation to provide a large and rich environment suitable for architectural evaluation and training applications. An outlook is also given to a possible third generation of virtual environment architectures that are capable of integrating different heterogeneous walkthrough models.


Color Imaging Conference | 2005

JPEG-HDR: A Backwards-Compatible, High Dynamic Range Extension to JPEG.

Greg Ward; Maryann Simmons


Archive | 2001

Tapestry: an efficient mesh-based display representation for interactive rendering

Maryann Simmons; Carlo H. Séquin


Archive | 2004

Decoding of high dynamic range (HDR) images

Gregory John Ward; Maryann Simmons

Collaboration


Dive into Maryann Simmons's collaborations.

Top Co-Authors

Greg Ward
Lawrence Berkeley National Laboratory

Gregory J. Ward
Lawrence Berkeley National Laboratory

Seth J. Teller
Massachusetts Institute of Technology