
Publications


Featured research published by Steven Molnar.


IEEE Computer Graphics and Applications | 1994

A sorting classification of parallel rendering

Steven Molnar; Michael Cox; David Ellsworth; Henry Fuchs

We describe a classification scheme that we believe provides a more structured framework for reasoning about parallel rendering. The scheme is based on where the sort from object coordinates to screen coordinates occurs, which we believe is fundamental whenever both geometry processing and rasterization are performed in parallel. This classification scheme supports the analysis of computational and communication costs, and encompasses the bulk of current and proposed highly parallel renderers - both hardware and software. We begin by reviewing the standard feed-forward rendering pipeline, showing how different ways of parallelizing it lead to three classes of rendering algorithms. Next, we consider each of these classes in detail, analyzing their aggregate processing and communication costs, possible variations, and constraints they may impose on rendering applications. Finally, we use these analyses to compare the classes and identify when each is likely to be preferable.
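
As a rough illustration of the cost reasoning the classification enables, the sketch below (our construction, not code or data from the paper; the traffic formulas and numbers are simplifying assumptions) contrasts what a sort-middle and a sort-last machine must communicate for the same toy scene.

```python
# Illustrative sketch of the sorting classification: each class is named for
# where the object-to-screen sort, i.e. the redistribution of work among
# processors, takes place.
#   sort-first : redistribute raw primitives before geometry processing
#   sort-middle: redistribute screen-space primitives before rasterization
#   sort-last  : redistribute pixels/fragments during image composition
# The traffic estimates are deliberately crude (one region per primitive,
# dense composition) and are our assumptions, not the paper's analysis.

def sort_middle_traffic(num_primitives: int, regions_overlapped: float = 1.0) -> float:
    """Primitive records sent from geometry processors to rasterizers."""
    return num_primitives * regions_overlapped

def sort_last_traffic(num_renderers: int, image_pixels: int) -> int:
    """Pixels moved through the composition network (dense composition)."""
    return num_renderers * image_pixels

primitives = 100_000
print("sort-middle traffic:", sort_middle_traffic(primitives))
print("sort-last traffic:  ", sort_last_traffic(num_renderers=8, image_pixels=1280 * 1024))
```

The crossover between the two, primitive count on one side and screen size times renderer count on the other, is the kind of trade-off the paper's per-class analysis makes precise.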


international conference on computer graphics and interactive techniques | 1989

Pixel-planes 5: a heterogeneous multiprocessor graphics system using processor-enhanced memories

Henry Fuchs; John W. Poulton; John G. Eyles; Trey Greer; Jack Goldfeather; David Ellsworth; Steven Molnar; Greg Turk; Brice Tebbs; Laura Israel

This paper introduces the architecture and initial algorithms for Pixel-Planes 5, a heterogeneous multi-computer designed both for high-speed polygon and sphere rendering (1M Phong-shaded triangles/second) and for supporting algorithm and application research in interactive 3D graphics. Techniques are described for volume rendering at multiple frames per second, font generation directly from conic spline descriptions, and rapid calculation of radiosity form-factors. The hardware consists of up to 32 math-oriented processors, up to 16 rendering units, and a conventional 1280 × 1024-pixel frame buffer, interconnected by a 5 gigabit ring network. Each rendering unit consists of a 128 × 128-pixel array of processors-with-memory with parallel quadratic expression evaluation for every pixel. Implemented on 1.6 micron CMOS chips designed to run at 40 MHz, this array has 208 bits/pixel on-chip and is connected to a video RAM memory system that provides 4,096 bits of off-chip memory. Rendering units can be independently reassigned to any part of the screen or to non-screen-oriented computation. As of April 1989, both hardware and software are still under construction, with initial system operation scheduled for fall 1989.
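
"Parallel quadratic expression evaluation for every pixel" is the core mechanism of the processor-enhanced memories. The fragment below is a minimal software sketch of that idea (our NumPy illustration with an arbitrarily chosen circle-coverage example, not Pixel-Planes 5 code): every pixel of a 128 × 128 tile evaluates the same expression A·x + B·y + C + D·x² + E·x·y + F·y² in parallel; linear expressions give triangle edge tests, and the quadratic terms enable sphere rendering.

```python
import numpy as np

SIZE = 128                                      # one 128 x 128 renderer tile
y, x = np.mgrid[0:SIZE, 0:SIZE].astype(float)   # per-pixel coordinates

def evaluate(A, B, C, D=0.0, E=0.0, F=0.0):
    """Evaluate A*x + B*y + C + D*x^2 + E*x*y + F*y^2 at every pixel at once."""
    return A * x + B * y + C + D * x * x + E * x * y + F * y * y

# Example: pixels inside a circle of radius 40 centered at (64, 64), i.e.
# (x - 64)^2 + (y - 64)^2 - 40^2 <= 0, expanded into quadratic coefficients.
inside = evaluate(A=-2 * 64, B=-2 * 64, C=64**2 + 64**2 - 40**2, D=1.0, F=1.0) <= 0
print("covered pixels:", int(inside.sum()))     # roughly pi * 40^2
```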


international conference on computer graphics and interactive techniques | 1992

PixelFlow: high-speed rendering using image composition

Steven Molnar; John G. Eyles; John W. Poulton

We describe PixelFlow, an architecture for high-speed image generation that overcomes the transformation and frame-buffer access bottlenecks of conventional hardware rendering architectures. PixelFlow uses the technique of image composition: it distributes the rendering task over an array of identical renderers, each of which computes a full-screen image of a fraction of the primitives. A high-performance image-composition network composites these images in real time to produce an image of the entire scene. Image-composition architectures offer performance that scales linearly with the number of renderers; there is no fundamental limit to the maximum performance achievable using this approach. A single PixelFlow renderer rasterizes up to 1.4 million triangles per second, and an n-renderer system can rasterize at up to n times this basic rate. PixelFlow performs antialiasing by supersampling. It supports deferred shading with separate hardware shaders that operate on composite images containing intermediate pixel data. PixelFlow shaders compute complex shading algorithms and procedural and image-based textures in real time. The shading rate is independent of scene complexity. A PixelFlow system can be coupled to a parallel supercomputer to serve as an immediate-mode graphics server, or it can maintain a display list for retained-mode rendering. The PixelFlow design has been simulated extensively at a high level. Custom chip design is underway. We anticipate a working system by late 1993.
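
A minimal software sketch of the composition step follows (our illustration with random stand-in images; it is not the PixelFlow hardware logic): each renderer produces a full-screen color/depth image of its share of the primitives, and the composition network keeps, per pixel, whichever sample is nearer.

```python
import numpy as np

W, H = 640, 512
rng = np.random.default_rng(0)

def partial_image():
    """Stand-in for one renderer's output: a color and a depth per pixel."""
    return rng.random((H, W, 3)), rng.random((H, W))

def composite(a, b):
    """One composition step: per pixel, keep whichever input is nearer."""
    (color_a, z_a), (color_b, z_b) = a, b
    nearer = z_a <= z_b
    return np.where(nearer[..., None], color_a, color_b), np.where(nearer, z_a, z_b)

renderers = [partial_image() for _ in range(4)]
final = renderers[0]
for image in renderers[1:]:
    final = composite(final, image)   # in hardware this runs as a pipelined network
print("composited image shape:", final[0].shape)
```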


international conference on computer graphics and interactive techniques | 1997

PixelFlow: the realization

John G. Eyles; Steven Molnar; John W. Poulton; Trey Greer; Anselmo Lastra; Nick England; Lee Westover

PixelFlow is an architecture for high-speed, highly realistic image generation, based on the techniques of object-parallelism and image composition. Its initial architecture was described in [MOLN92]. After development by the original team of researchers at the University of North Carolina, and co-development with industry partners, Division Ltd. and Hewlett-Packard, PixelFlow is now a much more capable system than initially conceived, and its hardware and software systems have evolved considerably. This paper describes the final realization of PixelFlow, along with hardware and software enhancements heretofore unpublished. CR Categories and Subject Descriptors: C.5.4 [Computer System Implementation]: VLSI Systems; I.3.1 [Computer Graphics]: Hardware Architecture; I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism.


interactive 3d graphics and games | 1995

Real-time programmable shading

Anselmo Lastra; Steven Molnar; Marc Olano; Yulan Wang

One of the main techniques used by software renderers to produce stunningly realistic images is programmable shading: executing an arbitrarily complex program to compute the color at each pixel. Thus far, programmable shading has only been available on software rendering systems that run on general-purpose computers. Rendering each image can take from minutes to hours.

Parallel rendering engines, on the other hand, have steadily increased in generality and in performance. We believe that they are nearing the point where they will be able to perform moderately complex shading at real-time rates. Some of the obstacles to this are imposed by hardware, such as limited amounts of frame-buffer memory and the enormous computational resources that are needed to shade in real time. Other obstacles are imposed by software. For example, users generally are not granted access to the hardware at the level required for programmable shading.

This paper first explores the capabilities that are needed to perform programmable shading in real time. We then describe the design issues and algorithms for a prototype shading architecture on PixelFlow, an experimental graphics engine under construction. We demonstrate through examples and simulation that PixelFlow will be able to perform high-quality programmable shading at real-time (30 to 60 Hz) rates. We hope that our experience will be useful to shading implementors on other hardware graphics systems.
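
To make the deferred, programmable-shading idea concrete, here is a small sketch of our own (not the PixelFlow shading interface): rasterization and composition leave intermediate per-pixel data, here an assumed surface normal and albedo, and an arbitrary user-supplied function is then run once per pixel, so its cost scales with pixel count rather than scene complexity.

```python
import numpy as np

H, W = 480, 640
rng = np.random.default_rng(1)

# Intermediate pixel data that rasterization/composition would have produced.
normals = rng.normal(size=(H, W, 3))
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
albedo = rng.random((H, W, 3))

def lambert_shader(normals, albedo, light_dir=(0.0, 0.0, 1.0)):
    """A user-programmable shading function: simple Lambertian lighting."""
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    n_dot_l = np.clip(normals @ l, 0.0, 1.0)
    return albedo * n_dot_l[..., None]

image = lambert_shader(normals, albedo)   # swap in any per-pixel program here
print("shaded image:", image.shape)
```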


eurographics | 1991

Combining Z-buffer Engines for Higher-Speed Rendering

Steven Molnar

Described is a hardware architecture for combining the outputs of a number of z-buffer rendering engines to achieve higher performance than is possible with a single renderer. It allows a combination of renderers to achieve the same price/performance ratio as the individual renderers that compose it, and can be extended to create systems with arbitrarily high performance.

The described architecture is based on a fusion of scan-line rendering and the conventional z-buffer algorithm. The frame buffers of several z-buffer engines are modified to scan out z-values as well as color values. Multiplexing devices combine the z/color streams from each pair of frame buffers. These z/color streams are then combined by further multiplexers, creating a binary tree that funnels the z/color information from the many conventional frame buffers into a single z/color stream. The color stream is then used to drive a standard display device.

The proposed architecture allows rendering rates of millions and even tens of millions of polygons per second. The basic architecture can be extended with additional hardware to perform antialiasing and texture-mapping.
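
The combining tree is easy to sketch in software (our toy model, not the proposed hardware): each frame buffer scans out a stream of (z, color) pairs, a multiplexer merges two streams by keeping the nearer sample per pixel, and a binary tree of such multiplexers reduces many streams to one.

```python
def mux(stream_a, stream_b):
    """Combine two z/color streams pixel by pixel, keeping the smaller z."""
    return [min(a, b, key=lambda sample: sample[0]) for a, b in zip(stream_a, stream_b)]

def combine_tree(streams):
    """Reduce a power-of-two number of streams with a binary tree of multiplexers."""
    while len(streams) > 1:
        streams = [mux(streams[i], streams[i + 1]) for i in range(0, len(streams), 2)]
    return streams[0]

# Four renderers, a four-pixel scanline each: (z, color) per pixel.
streams = [
    [(0.9, "red"),  (0.2, "red"),  (0.7, "red"),  (0.5, "red")],
    [(0.4, "blue"), (0.8, "blue"), (0.6, "blue"), (0.1, "blue")],
    [(0.3, "lime"), (0.9, "lime"), (0.2, "lime"), (0.9, "lime")],
    [(0.6, "gray"), (0.5, "gray"), (0.8, "gray"), (0.4, "gray")],
]
print(combine_tree(streams))   # the surviving color stream would drive the display
```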


IEEE Computer Graphics and Applications | 1992

Breaking the frame-buffer bottleneck with logic-enhanced memories

John W. Poulton; John G. Eyles; Steven Molnar; Henry Fuchs

Logic-enhanced memory chips that can remove the rasterizer/frame-buffer bottleneck which limits the performance of current image-generation architectures are discussed. Putting pixel memory on-chip with rasterizing processors provides the two to three orders of magnitude improvement in access rates needed to support realistic shading models and antialiasing in interactive systems. Current high-performance graphics systems and logic-enhanced memory architectural issues are reviewed. The design of the PixelFlow Enhanced Memory Chip (EMC), which exploits advances in semiconductor technology and circuit techniques to build compact, high-performance rasterizers, is described.
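
A back-of-the-envelope sketch (all numbers below are our illustrative assumptions, not figures from the paper) shows why the frame buffer becomes the bottleneck: even a modest interactive workload generates far more pixel accesses per second than a single external memory port can serve, and antialiasing and richer per-pixel shading data widen the gap by further orders of magnitude.

```python
# Assumed workload and memory figures, chosen only to illustrate the scaling.
triangles_per_second = 1_000_000      # target rendering rate
pixels_per_triangle  = 50             # average screen coverage
accesses_per_pixel   = 3              # read z, write z, write color
samples_per_pixel    = 8              # supersampled antialiasing

base = triangles_per_second * pixels_per_triangle * accesses_per_pixel
antialiased = base * samples_per_pixel
external_port = 50_000_000            # assumed off-chip pixel accesses/second

print(f"needed (aliased):     {base:,} accesses/s")
print(f"needed (antialiased): {antialiased:,} accesses/s")
print(f"one external port:    {external_port:,} accesses/s "
      f"-> roughly {antialiased // external_port}x short")
```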


eurographics conference on graphics hardware | 1995

The pixelflow texture and image subsystem

Steven Molnar

Texturing and imaging have become essential tasks for high-speed, high-quality rendering systems. They make possible effects such as photo-textures, environment maps, decals, modulated transparency, shadows, and bump maps, to name just a few.

These operations all require high-speed access to a large image memory closely connected to the rasterizer hardware. The design of such memory systems is challenging because there are many competing constraints: memory bandwidth, memory size, flexibility, and, of course, cost.

PixelFlow is an experimental hardware architecture designed to support new levels of geometric complexity and to incorporate realistic rendering effects such as programmable shading. This required an extremely flexible and high-performance texture/image subsystem. This paper describes the PixelFlow texture/image subsystem, the design decisions behind it, and its advantages and limitations. Future directions are also described.
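
As a hint of the bandwidth pressure involved, the sketch below (our illustration, not the PixelFlow design) shows that even a single bilinearly filtered texture lookup touches four texels, so every textured fragment multiplies the traffic to image memory; the fill rate used at the end is an assumed figure.

```python
import numpy as np

tex = np.random.default_rng(2).random((256, 256, 3))   # a toy RGB texture

def bilinear(tex, u, v):
    """Filtered lookup at normalized coordinates (u, v): four texel reads."""
    h, w, _ = tex.shape
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = tex[y0, x0] * (1 - fx) + tex[y0, x1] * fx
    bottom = tex[y1, x0] * (1 - fx) + tex[y1, x1] * fx
    return top * (1 - fy) + bottom * fy

print("sample color:", bilinear(tex, 0.37, 0.81))
fragments_per_second = 100_000_000                      # assumed fill rate
print("texel reads/s for bilinear filtering:", 4 * fragments_per_second)
```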


international symposium on microarchitecture | 1988

A microprogramming support tool for pipelined architectures

Steven Molnar; Mark C. Surles

We describe a software tool to aid the development of microcode for horizontal, pipelined architectures. The tool is a preprocessor for microcode source that allows the programmer full flexibility to optimize code, but removes many of the tedious and error-prone aspects of microprogramming. It automatically allocates floating-point registers, expands complex instructions, and analyzes code for pipeline-related errors. We have written a working version of the tool for the Weitek XL-8032 floating-point chip set, a horizontal architecture with pipelined sequencer and floating-point datapaths. Although the tool was designed for the XL architecture, the algorithms used are applicable to other parallel/pipelined architectures. This paper argues for the existence of such tools, summarizes the algorithms needed to analyze control and data flow in the presence of pipelining, and characterizes the tool's performance based on nine microcoded routines written for a real-time 3-D graphics system.
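
One of the pipeline-related checks such a preprocessor performs can be sketched briefly (our simplified model, not the tool's actual algorithm; the latency value and register names are made up): in a pipelined datapath a result is not available until several cycles after the instruction that produces it, so reading a register too early is an error worth flagging automatically.

```python
LATENCY = 3   # assumed: a result becomes readable 3 cycles after it is issued

# Each micro-instruction: (destination register or None, source registers).
microcode = [
    ("f1", ("f2", "f3")),    # f1 = f2 * f3
    ("f4", ("f1", "f5")),    # reads f1 too early: hazard
    (None, ("f6",)),
    ("f7", ("f1", "f4")),    # f1 is now safe, f4 still is not
]

def find_hazards(code, latency):
    """Report (cycle, register) pairs where a source is read before it is valid."""
    hazards = []
    valid_at = {}                                  # register -> cycle it becomes valid
    for cycle, (dest, sources) in enumerate(code):
        for src in sources:
            if valid_at.get(src, -1) > cycle:
                hazards.append((cycle, src))
        if dest is not None:
            valid_at[dest] = cycle + latency
    return hazards

print(find_hazards(microcode, LATENCY))   # [(1, 'f1'), (3, 'f4')]
```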


Archive | 1995

Architecture and apparatus for image generation utilizing enhanced memory devices

John W. Poulton; Steven Molnar; John G. Eyles

Collaboration


Dive into Steven Molnar's collaborations.

Top Co-Authors

John G. Eyles, University of North Carolina at Chapel Hill
Henry Fuchs, University of North Carolina at Chapel Hill
Anselmo Lastra, University of North Carolina at Chapel Hill
David Ellsworth, University of North Carolina at Chapel Hill
Yulan Wang, University of North Carolina at Chapel Hill