Publication


Featured research published by Bengt-Olaf Schneider.


International Conference on Computer Graphics and Interactive Techniques | 1992

Interactive inspection of solids: cross-sections and interferences

Jarek Rossignac; Abe Megahed; Bengt-Olaf Schneider

To reduce the cost of correcting design errors, assemblies of mechanical parts are modeled using CAD systems and verified electronically before the designs are sent to manufacturing. Shaded images are insufficient for examining the internal structures of assemblies and for detecting interferences. Thus, designers must rely on expensive numerical techniques that compute geometric representations of cross-sections and of intersections of solids. The solid-clipping approach presented here bypasses these geometric calculations and offers real-time rendering of cross-sections and interferences for solids represented by their faceted boundaries. In its simplest form, the technique is supported by contemporary high-end graphics workstations. Its variations, independently developed elsewhere, have already been demonstrated. Our implementation is based on the concept of a cut-volume interactively manipulated to remove obstructing portions of the assembly and reveal its internal structure. For clarity, faces of the cut-volume which intersect a single solid are hatched and shaded with the color of that solid. Interference areas between two or more solids are highlighted. Furthermore, to help users find the first occurrence of an interference along a search direction, we have developed an adaptive subdivision search based on a projective approach which guarantees a sufficient condition for object disjointness. The additional performance cost for solid-clipping and interference highlighting is comparable to the standard rendering cost. An efficient implementation of the disjointness test requires a minor extension of the graphics functions currently supported on commercial hardware.
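The projective approach mentioned in the abstract rests on a sufficient (not necessary) condition for disjointness: if the projections of two objects onto some direction occupy non-overlapping intervals, the objects cannot intersect; overlapping projections are inconclusive. A minimal sketch in Python, assuming solids are given as vertex sets (function names are illustrative, not taken from the paper):

```python
def project_interval(points, axis):
    """Project 3D points onto a direction and return the
    (min, max) extent of the projection along that axis."""
    dots = [sum(p[i] * axis[i] for i in range(3)) for p in points]
    return min(dots), max(dots)

def disjoint_along(axis, solid_a, solid_b):
    """Sufficient condition for object disjointness: if the projected
    intervals of the two vertex sets do not overlap, the solids
    cannot intersect. Overlapping intervals prove nothing, which is
    why the paper pairs this test with an adaptive subdivision search."""
    a_min, a_max = project_interval(solid_a, axis)
    b_min, b_max = project_interval(solid_b, axis)
    return a_max < b_min or b_max < a_min
```

Because the test is only sufficient, a search procedure would subdivide the search range and retry whenever the intervals overlap.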


Computers & Graphics | 1999

An adaptive framework for 3D graphics over networks

Bengt-Olaf Schneider; Ioana M. Martin

Access to and transmission of 3D models over networks is becoming increasingly popular. However, the performance and quality of access to remote 3D models strongly depends on system load conditions and the capabilities of the various system components, such as clients, servers, and interconnect. The network graphics framework (NGF) integrates various transmission methods for downloading 3D models in a client–server environment. The NGF automatically selects the optimal transmission method for a given pair of client and server, taking into account characteristics of the model to be transmitted, critical environment conditions, user preferences, and the capabilities of the client and the server. The NGF aims to provide constant quality of service across different clients and under varying environment conditions.


Archive | 1995

BRUSH as a Walkthrough System for Architectural Models

Bengt-Olaf Schneider; Paul Borrel; Jai Menon; Josh Mittleman; Jarek Rossignac

Brush provides an interactive environment for the real-time visualization and inspection of very large mechanical and architectural CAD databases. It supports immersive and non-immersive virtual reality walkthrough applications (for example, when validating or demonstrating to a customer an architectural concept) and detailed design reviews of complex mechanical assemblies such as engines, plants, airplanes, or ships.


Eurographics | 1991

PROOF: An Architecture for Rendering in Object Space

Bengt-Olaf Schneider; Ute Claussen

This paper gives a short introduction to the field of computer image generation in hardware. It discusses the two main approaches, namely partitioning in image space and in object space. Based on the object space partitioning approach we have defined the PROOF architecture. PROOF is a system that aims at high performance and high quality rendering of raster images. High performance means that up to 30 pictures are generated per second. The pictures are shaded and anti-aliased, giving the images a high degree of realism. The architecture comprises three stages which are responsible for hidden surface removal, shading, and filtering respectively. The first of these stages is a pipeline of object processors. Each of these processors stores and scan converts one object. Furthermore, it interpolates the depth and the normal vector across the object. Each object processor is able to handle objects of a certain primitive type. The specialization of an object processor to a certain primitive type is encapsulated in a single block called a primitive processor. The output of the object processor pipeline is the input to a stage for shading. The illumination model employed takes into account both diffuse and specular reflections. The paper reviews Gouraud and Phong shading with regard to their suitability for a hardware implementation. The final stage of the PROOF system is formed by a stage for filtering the colours of those objects that contribute to a pixel. This is done by constructing a subpixel mask and filtering across an area of 2×2 pixels. At the end the paper briefly reports on the current state of the project.
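The filtering stage described above resolves the contributions of several objects to one pixel via subpixel coverage masks. A minimal single-pixel illustration in Python (the actual PROOF filter additionally spans a 2×2 pixel neighbourhood; all names here are illustrative):

```python
def coverage(mask):
    """Fraction of covered samples in a binary subpixel mask (rows of 0/1)."""
    covered = sum(sum(row) for row in mask)
    return covered / (len(mask) * len(mask[0]))

def filter_pixel(contributions, background=(0.0, 0.0, 0.0)):
    """Blend the colours of all objects contributing to one pixel,
    weighting each by its subpixel coverage; any uncovered weight
    falls through to the background colour.
    `contributions` is a list of (rgb_color, subpixel_mask) pairs."""
    weights = [coverage(mask) for _, mask in contributions]
    remaining = max(0.0, 1.0 - sum(weights))
    pixel = [remaining * c for c in background]
    for (color, _), w in zip(contributions, weights):
        for i in range(3):
            pixel[i] += w * color[i]
    return tuple(pixel)
```

For example, an object covering half the subpixel samples contributes its colour at 50% intensity against a black background.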


IEEE Transactions on Visualization and Computer Graphics | 1998

Efficient polygon clipping for an SIMD graphics pipeline

Bengt-Olaf Schneider; J. van Welzen

SIMD processors have become popular architectures for multimedia. Though most of the 3D graphics pipeline can be implemented on such SIMD platforms in a straightforward manner, polygon clipping tends to cause clumsy and expensive interruptions to the SIMD pipeline. This paper describes a way to increase the efficiency of SIMD clipping without sacrificing the efficient flow of a SIMD graphics pipeline. In order to fully utilize the parallel execution units, we have developed two methods to avoid serialization of the execution stream: deferred clipping postpones polygon clipping and uses hardware assistance to buffer polygons that need to be clipped. SIMD clipping partitions the actual polygon clipping procedure between the SIMD engine and a conventional RISC processor. To increase the efficiency of SIMD clipping, we introduce the concepts of clip-plane pairs and edge batching. Clip-plane pairs allow clipping a polygon against two clip planes without introducing corner vertices. Edge batching reduces the communication and control overhead for starting clipping on the SIMD engine.


Eurographics Conference on Graphics Hardware | 1992

M-buffer: a flexible MISD architecture for advanced graphics

Bengt-Olaf Schneider; Jarek Rossignac

Contemporary graphics architectures are based on a hardware-supported geometric pipeline, a rasterizer, a z-buffer and two frame buffers. Additional pixel memory is used for alpha blending and for storing logical information. Although their functionality is growing, it is still limited because of the fixed use of pixel memory and the restricted set of operations provided by these architectures. A new class of graphics algorithms that considerably extends the current technology is based on a more flexible use of pixel memory, not supported by current architectures. The M-Buffer architecture described here divides pixel memory into general-purpose buffers, each associated with one processor. Pixel data is broadcast to all buffers simultaneously. Logical and numeric tests are performed by each processor and the results are broadcast and used by all buffers in parallel to evaluate logical expressions for the pixel update condition. The architecture is scalable by addition of buffer-processors, suitable for pixel parallelization, and permits the use of buffers for different purposes. The architecture, its functional description, and a powerful programming interface are described.
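The broadcast-and-combine scheme described above can be modelled in software: each buffer's processor evaluates a per-pixel test, and a programmable logical expression over all test results forms the shared update condition. A minimal sketch in Python, assuming only what the abstract states (class and parameter names are illustrative):

```python
class PixelBuffer:
    """One general-purpose buffer with an associated processor; the
    processor's test compares the stored value with the broadcast value."""
    def __init__(self, width, height, test, initial=0):
        self.data = [[initial] * width for _ in range(height)]
        self.test = test  # (stored_value, incoming_value) -> bool

    def evaluate(self, x, y, value):
        return self.test(self.data[y][x], value)

    def write(self, x, y, value):
        self.data[y][x] = value

class MBuffer:
    """Illustrative model: pixel values are broadcast to all buffers,
    every processor's test result is collected, and one logical
    expression over the results gates the update of all buffers."""
    def __init__(self, buffers, update_condition):
        self.buffers = buffers              # dict name -> PixelBuffer
        self.condition = update_condition   # dict of results -> bool

    def broadcast(self, x, y, values):
        results = {name: buf.evaluate(x, y, values[name])
                   for name, buf in self.buffers.items()}
        if self.condition(results):
            for name, buf in self.buffers.items():
                buf.write(x, y, values[name])
```

A classical z-buffer then falls out as the special case of one depth buffer with a less-than test gating a colour buffer.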


Eurographics | 1989

Towards a taxonomy for display processors

Bengt-Olaf Schneider

Image generation for raster displays proceeds in two main steps: geometry processing and pixel processing. The subsystem performing the pixel processing is called the display processor. In the paper a model for the display processor is developed that takes into account both function and timing properties. The model identifies scan conversion, hidden surface removal, shading and anti-aliasing as the key functions of the display processor. The timing model is expressed as an inequality that is fundamental for all display processor architectures. On the basis of that model a taxonomy is presented which classifies display processors according to four main criteria: function, partitioning, architecture and performance. The taxonomy is applied to five real display processors: Pixel-planes, SLAM, PROOF, the Ray-Casting Machine and the Structured Frame Store System. Investigation of existing display processor architectures on the basis of the developed taxonomy revealed a potential new architecture. This architecture partitions the image generation process in image space and employs a tree topology.


Wiley Encyclopedia of Electrical and Electronics Engineering | 1999

Raster Graphics Architectures

Bengt-Olaf Schneider

The sections in this article are: 1. History; 2. Basic Raster Graphics Architecture; 3. Parallel Raster Graphics Architectures; 4. Special Purpose Raster Graphics Architectures. Keywords: graphics pipeline; frame buffer memories; video memory; video processor; display processor; geometry and rasterization subsystems; parallel vs. pipelined subsystems; object and image parallelism; special-purpose architectures


Eurographics Conference on Graphics Hardware | 1992

Accelerating polygon clipping

Bengt-Olaf Schneider

Polygon clipping is a central part of image generation and image visualization systems. In spite of its algorithmic simplicity, it consumes a considerable amount of hardware or software resources. Polygon clipping performance is dominated by two processes: intersection calculations and data transfers. The paper analyzes the prevalent Sutherland-Hodgman algorithm for polygon clipping and identifies cases for which this algorithm performs inefficiently. Such cases are characterized by subsequent vertices in the input polygon that share a common region, e.g. a common halfspace. The paper presents new techniques that detect such constellations and simplify the input polygon such that the Sutherland-Hodgman algorithm runs more efficiently. Block diagrams and pseudo-code demonstrate that the new techniques are well suited for both hardware and software implementations. Finally, the paper discusses the results of a prototype implementation of the presented techniques. The analysis compares the performance of the new techniques to the traditional Sutherland-Hodgman algorithm for different test scenes. The new techniques reduce the number of data transfers by up to 90% and the number of intersection calculations by up to 60%.
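For reference, the Sutherland-Hodgman algorithm analyzed in the paper clips a polygon against one boundary at a time: it walks the vertex list and emits a vertex or an intersection point depending on which side of the boundary each edge endpoint lies. A minimal Python sketch of one such pass (the helper names and the axis-aligned example are illustrative, not from the paper):

```python
def clip_halfplane(polygon, inside, intersect):
    """One Sutherland-Hodgman pass: clip a polygon (vertex list) against
    a single half-plane. `inside(v)` tests containment; `intersect(a, b)`
    returns the point where edge a-b crosses the boundary."""
    output = []
    for i, current in enumerate(polygon):
        previous = polygon[i - 1]  # wraps to the last vertex for i == 0
        if inside(current):
            if not inside(previous):
                output.append(intersect(previous, current))  # entering
            output.append(current)
        elif inside(previous):
            output.append(intersect(previous, current))      # leaving
    return output

def clip_left(polygon, x_min):
    """Example: clip against the half-plane x >= x_min."""
    def inside(v):
        return v[0] >= x_min
    def intersect(a, b):
        t = (x_min - a[0]) / (b[0] - a[0])
        return (x_min, a[1] + t * (b[1] - a[1]))
    return clip_halfplane(polygon, inside, intersect)
```

The inner loop makes the paper's observation visible: a run of consecutive vertices on the same side of the boundary produces no intersection calculations, so collapsing such runs before clipping saves both transfers and intersections.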


Archive | 1998

System and method for optimizing computer software and hardware

Daniel Peter Dumarot; David Alan Stevenson; Nicholas R. Dono; James Randall Moulic; Clifford A. Pickover; Bengt-Olaf Schneider; Adelbert Smith
