
Publication


Featured research published by Christof Rezk-Salama.


International Conference on Computer Graphics and Interactive Techniques | 2000

Interactive volume rendering on standard PC graphics hardware using multi-textures and multi-stage rasterization

Christof Rezk-Salama; Klaus Engel; Michael Bauer; Günther Greiner; Thomas Ertl

Interactive direct volume rendering has so far been restricted to high-end graphics workstations and special-purpose hardware, due to the large number of trilinear interpolations that are necessary to obtain high image quality. Implementations that use the 2D-texture capabilities of standard PC hardware usually render object-aligned slices in order to substitute trilinear by bilinear interpolation. However, the resulting images often contain visual artifacts caused by the lack of spatial interpolation. In this paper we propose new rendering techniques that significantly improve both performance and image quality of the 2D-texture based approach. We show how the multi-texturing capabilities of modern consumer PC graphics boards are exploited to enable interactive high-quality volume visualization on low-cost hardware. Furthermore, we demonstrate how multi-stage rasterization hardware can be used to efficiently render shaded isosurfaces and to compute diffuse illumination for semi-transparent volume rendering at interactive frame rates.
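The core idea of substituting a trilinear fetch by two bilinear slice fetches blended in the multi-texture stage can be sketched as follows. This is a hypothetical CPU analogue, not the paper's shader setup; the hardware performs both bilinear lookups and the final blend in a single rasterization pass.

```python
import numpy as np

def bilinear(slice2d, x, y):
    """Bilinear lookup in one object-aligned slice (what a 2D texture unit does)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    s = slice2d
    return ((1 - fx) * (1 - fy) * s[y0, x0] + fx * (1 - fy) * s[y0, x0 + 1]
            + (1 - fx) * fy * s[y0 + 1, x0] + fx * fy * s[y0 + 1, x0 + 1])

def trilinear_from_slices(volume, x, y, z):
    """Blend bilinear samples from two adjacent object-aligned slices with the
    fractional slice weight fz -- the multi-texture trick for trilinear filtering."""
    z0 = int(np.floor(z))
    fz = z - z0
    return (1 - fz) * bilinear(volume[z0], x, y) + fz * bilinear(volume[z0 + 1], x, y)
```

On a volume that varies linearly in all three axes, the two-slice blend reproduces the exact trilinear result, which is why the intermediate slices it synthesizes remove the artifacts of plain object-aligned 2D slicing.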


SIGGRAPH/Eurographics Conference on Graphics Hardware | 2004

Hardware-based simulation and collision detection for large particle systems

Andreas Kolb; Lutz Latta; Christof Rezk-Salama

Particle systems have long been recognized as an essential building block for detail-rich and lively visual environments. Current implementations can handle up to 10,000 particles in real-time simulations and are mostly limited by the transfer of particle data from the main processor to the graphics hardware (GPU) for rendering. This paper introduces a full GPU implementation, using fragment shaders, of both the simulation and rendering of a dynamically growing particle system. Such an implementation can render up to 1 million particles in real time on recent hardware. The massively parallel simulation handles collision detection and reaction of particles with objects of arbitrary shape. The collision detection is based on depth maps that represent the outer shape of an object. The depth maps store distance values and normal vectors for collision reaction. Using a special texture-based indexing technique to represent normal vectors, standard 8-bit textures can be used to describe the complete depth map data. Alternatively, several depth maps can be stored in one floating-point texture. In addition, a GPU-based parallel sorting algorithm is introduced that can be used to perform a depth sorting of the particles for correct alpha blending.
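The depth-map collision test and reaction can be sketched per particle as below. This is a hypothetical CPU analogue of the fragment-shader pass, reduced to a single axis-aligned depth map; the function name and the push-back-to-surface response are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def collide(pos, vel, depth_map, normal_map):
    """Depth-map collision: depth_map[j, i] stores the object's surface distance
    along z, normal_map[j, i] the stored surface normal. A penetrating particle
    is pushed back to the surface and its velocity reflected about the normal."""
    i, j = int(pos[0]), int(pos[1])           # texel covering the particle
    surface = depth_map[j, i]
    if pos[2] < surface:                      # particle is inside the object
        n = normal_map[j, i]
        pos = np.array([pos[0], pos[1], surface])   # push back onto the surface
        vel = vel - 2.0 * np.dot(vel, n) * n        # reflect velocity: bounce
    return pos, vel
```

In the GPU version each particle is one fragment, the depth and normal data come from textures, and the 8-bit normal indexing mentioned in the abstract replaces the full-precision normal vector used here.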


IEEE Visualization | 1999

Interactive exploration of volume line integral convolution based on 3D-texture mapping

Christof Rezk-Salama; Peter Hastreiter; Christian Teitzel; Thomas Ertl

Line integral convolution (LIC) is an effective technique for visualizing vector fields. The application of LIC to 3D flow fields has so far been limited by the difficulty of efficiently displaying and animating the resulting 3D images. Texture-based volume rendering allows interactive visualization and manipulation of 3D-LIC textures. In order to ensure comprehensive and convenient exploration of flow fields, we suggest interactive functionality including transfer functions and different clipping mechanisms. Thereby, we efficiently substitute the calculation of LIC based on sparse noise textures and show convenient visual access to interior structures. Furthermore, we introduce two approaches for animating static 3D flow fields without the computational expense and the immense memory requirements of pre-computed 3D textures, and without loss of interactivity. This is achieved by using a single 3D-LIC texture and a set of time surfaces as clipping geometries. In our first approach we use the clipping geometry to pre-compute a special 3D-LIC texture that can be animated by time-dependent color tables. Our second approach uses time volumes to actually clip the 3D-LIC volume interactively during rasterization. Additionally, several examples demonstrate the value of our strategy in practice.
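The underlying LIC computation can be sketched in 2D as below: the noise texture is averaged along the streamline through each pixel. This is a minimal sketch assuming Euler streamline integration and a box filter kernel; the paper's contribution is the interactive 3D exploration and animation of such textures, not this basic step.

```python
import numpy as np

def lic_pixel(noise, field, x, y, steps=10, h=0.5):
    """LIC intensity at one pixel: average the noise texture along the streamline
    of `field` through (x, y), traced forward and backward with Euler steps."""
    total, count = 0.0, 0
    for direction in (+1.0, -1.0):
        px, py = x, y
        for _ in range(steps):
            iy = int(round(py)) % noise.shape[0]
            ix = int(round(px)) % noise.shape[1]
            total += noise[iy, ix]            # accumulate noise along the line
            count += 1
            v = field[iy, ix]
            norm = np.hypot(v[0], v[1])
            if norm == 0.0:                   # critical point: stop tracing
                break
            px += direction * h * v[0] / norm # step along the normalized field
            py += direction * h * v[1] / norm
    return total / count
```

Because the averaging smears noise only along streamlines, the output correlates strongly in the flow direction and stays noisy across it, which is what makes the field's structure visible.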


Computer Graphics Forum | 2006

Opacity Peeling for Direct Volume Rendering

Christof Rezk-Salama; Andreas Kolb

The most important technique to visualize 3D scalar data, as they arise e.g. in medicine from tomographic measurement, is direct volume rendering. A transfer function maps the scalar values to optical properties, which are used to solve the integral of light transport in participating media. Many medical data sets, especially MRI data, however, are difficult to visualize due to different tissue types being represented by the same scalar value. The main problem is that interesting structures will be occluded by less important structures because they share the same range of data values. Occlusion, however, is a view-dependent problem and cannot be solved easily by transfer function design. This paper proposes a new method to display different entities inside the volume data in a single rendering pass. The proposed opacity peeling technique reveals structures in the data set that cannot be visualized directly by one- or multi-dimensional transfer functions without explicit segmentation. We also demonstrate real-time implementations using texture mapping and multiple render targets.
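The peeling idea can be sketched along a single ray as below: composite front to back, and when the accumulated opacity has saturated while the current sample is nearly transparent, reset the accumulation to start the next layer. A minimal sketch, assuming a list of pre-classified (color, alpha) samples; the threshold names `t_high` and `t_low` are illustrative.

```python
def opacity_peel(samples, layer, t_high=0.95, t_low=0.1):
    """Front-to-back compositing with opacity peeling along one ray.
    `samples` is a list of (color, alpha) pairs ordered front to back;
    returns the composited color of the requested layer index."""
    color, alpha = 0.0, 0.0
    current = 0
    for c, a in samples:
        if alpha >= t_high and a < t_low:    # opacity saturated, empty sample:
            if current == layer:             # layer boundary -> peel
                return color
            current += 1
            color, alpha = 0.0, 0.0          # restart accumulation behind it
        color += (1.0 - alpha) * a * c       # standard front-to-back compositing
        alpha += (1.0 - alpha) * a
    return color if current == layer else 0.0
```

Requesting `layer=0` yields the ordinary volume rendering; higher layer indices reveal structures that the front-most opaque material would otherwise occlude, without any segmentation.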


International Conference on Computer Graphics and Interactive Techniques | 2001

Fast volumetric deformation on general purpose hardware

Christof Rezk-Salama; Michael Scheuering; Grzegorz Soza; Günther Greiner

High performance deformation of volumetric objects is a common problem in computer graphics that has not yet been handled sufficiently. As a supplement to 3D texture based volume rendering, a novel approach is presented, which adaptively subdivides the volume into piecewise linear patches. An appropriate mathematical model based on tri-linear interpolation and its approximations is proposed. New optimizations are introduced in this paper which are especially tailored to an efficient implementation using general purpose rasterization hardware, including new technologies, such as vertex programs and pixel shaders. Additionally, a high performance model for local illumination calculation is introduced, which meets the aesthetic requirements of visual arts and entertainment. The results demonstrate the significant performance benefit and allow for time-critical applications, such as computer assisted surgery.


Computer Graphics Forum | 2001

Real‐Time Volume Deformations

Rüdiger Westermann; Christof Rezk-Salama

Real‐time free‐form deformation tools are primarily based on surface or particle representations to allow for interactive modification and fast rendering of complex models. The efficient handling of volumetric representations, however, is still a challenge and has not yet been addressed sufficiently. Volumetric models, on the other hand, form an important class of representation in many applications. In this paper we present a novel approach to the real‐time deformation of scalar volume data sets taking advantage of hardware supported 3D texture mapping. In a prototype implementation a modeling environment has been designed that allows for interactive manipulation of arbitrary parts of volumetric objects. In this way, any desired shape can be modeled and used subsequently in various applications. The underlying algorithms have wide applicability and can be exploited effectively for volume morphing and medical data processing.


IEEE Pacific Visualization Symposium | 2010

Interactive volumetric lighting simulating scattering and shadowing

Timo Ropinski; Christian Döring; Christof Rezk-Salama

In this paper we present a volumetric lighting model which simulates scattering as well as shadowing in order to generate high-quality volume renderings. By approximating light transport in inhomogeneous participating media, we are able to come up with an efficient GPU implementation that achieves the desired effects at interactive frame rates. Moreover, in many cases the frame rates are even higher than those achieved with conventional gradient-based shading. To evaluate the impact of the proposed illumination model on the spatial comprehension of volumetric objects, we have conducted a user study in which the participants had to perform depth perception tasks. The results of this study show that depth perception is significantly improved when comparing our illumination model to conventional gradient-based volume shading. Additionally, since our volumetric illumination model is not based on gradient calculation, it is also less sensitive to noise and therefore applicable to imaging modalities with a higher degree of noise, such as magnetic resonance tomography or 3D ultrasound.
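A slice-sweep light propagation of this kind can be sketched as below: each slice receives the light left over after attenuation by the slices in front of it, and a small blur of the attenuation buffer stands in for forward scattering. This is a crude illustrative approximation under assumed conditions (head-on directional light, a 4-neighbour average as the scattering kernel), not the paper's actual model.

```python
import numpy as np

def light_volume(alpha, scatter=0.0):
    """Sweep slices away from a directional light along the z axis.
    alpha: (nz, ny, nx) per-voxel opacity. Returns per-voxel incident light."""
    light = np.ones_like(alpha)
    transmitted = np.ones(alpha.shape[1:])    # light reaching the current slice
    for z in range(alpha.shape[0]):
        light[z] = transmitted                # record incident light (shadowing)
        transmitted = transmitted * (1.0 - alpha[z])   # attenuate through slice
        if scatter > 0.0:                     # blur = crude forward scattering
            blurred = (np.roll(transmitted, 1, 0) + np.roll(transmitted, -1, 0)
                       + np.roll(transmitted, 1, 1) + np.roll(transmitted, -1, 1)) / 4.0
            transmitted = (1.0 - scatter) * transmitted + scatter * blurred
    return light
```

Note that no gradients appear anywhere in the sweep, which is the property the abstract highlights: the result degrades gracefully on noisy modalities where gradient-based shading breaks down.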


IEEE Transactions on Visualization and Computer Graphics | 2008

Interactive Volume Exploration for Feature Detection and Quantification in Industrial CT Data

Markus Hadwiger; Laura Fritz; Christof Rezk-Salama; Thomas Höllt; Georg Geier; Thomas Pabel

This paper presents a novel method for interactive exploration of industrial CT volumes such as cast metal parts, with the goal of interactively detecting, classifying, and quantifying features using a visualization-driven approach. The standard approach for defect detection builds on region growing, which requires manually tuning parameters such as target ranges for density and size, variance, as well as the specification of seed points. If the results are not satisfactory, region growing must be performed again with different parameters. In contrast, our method allows interactive exploration of the parameter space, completely separated from region growing in an unattended pre-processing stage. The pre-computed feature volume tracks a feature size curve for each voxel over time, which is identified with the main region growing parameter such as variance. A novel 3D transfer function domain over (density, feature size, time) allows for interactive exploration of feature classes. Features and feature size curves can also be explored individually, which helps with transfer function specification and allows coloring individual features and disabling features resulting from CT artifacts. Based on the classification obtained through exploration, the classified features can be quantified immediately.


Medical Image Computing and Computer-Assisted Intervention | 1998

Fast Analysis of Intracranial Aneurysms Based on Interactive Direct Volume Rendering and CTA

Peter Hastreiter; Christof Rezk-Salama; Bernd Tomandl; Knut E. W. Eberhardt; Thomas Ertl

The diagnosis of intracranial aneurysms and the planning of related interventions is effectively assisted by spiral CT angiography and interactive direct volume rendering. Based on 3D texture mapping, we suggest a hardware-accelerated approach which provides fast and meaningful visualization without time-consuming pre-processing. Interactive tools provide reliable measurement of distance and volume, allowing the size of vessels and aneurysms to be calculated directly within the 3D viewer. Thereby, the expensive material required for coiling procedures is estimated more precisely. Interactively calculated shaded isosurfaces, presented in [1], were evaluated with respect to enhanced perception of depth. Based on the integration into Open Inventor, global overview and simultaneous detail information are provided by communicating windows, allowing for intuitive and user-guided navigation. Since the complete medical analysis requires an average of 15–20 minutes, our approach is expected to be useful for clinical routine. Additional registration and simultaneous visualization of MR and CT angiography gives further anatomical orientation. Several examples demonstrate the potential of our approach.


Computers & Graphics | 2000

Registration techniques for the analysis of the brain shift in neurosurgery

Peter Hastreiter; Christof Rezk-Salama; Christopher Nimsky; Christoph Lürig; Günther Greiner; Thomas Ertl

Brain shift is a phenomenon that occurs during surgical operations on the opened head. It is a deformation of the brain which prohibits exact navigation with pre-operatively acquired tomographic scans, since the correlation between the image data and the actual anatomical situation becomes invalid quickly after opening the skull. In order to analyze the brain shift, nonlinear registration of two data sets is performed. Thereby, one data set is obtained before and the other during the operation with an open magnetic resonance scanner. Using registration based on deformable surfaces, models of the pre- and the intra-operative brain are obtained. After efficient distance calculation, color encoding of the models gives quantitative information. For further anatomical orientation, these models are integrated into a representation of the data produced with direct volume rendering. Additionally, we suggest a voxel-based approach based on maximizing mutual information. This accounts for deformations of deeper-lying structures by considering the whole volume. By adaptively subdividing the data into piecewise linear patches and using 3D texture mapping, fast evaluation of the non-linear deformation is achieved.
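The mutual information measure maximized by the voxel-based approach can be sketched as below: a joint histogram of the two images estimates the joint intensity distribution, and MI is the KL divergence between it and the product of the marginals. A minimal sketch; real registration pipelines add interpolation, normalization, and an optimizer around this measure.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information of two intensity images: high when the
    intensities of a and b are statistically dependent (i.e. well aligned)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                     # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)           # marginal of a
    py = pxy.sum(axis=0, keepdims=True)           # marginal of b
    nz = pxy > 0                                  # avoid log(0) on empty bins
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

Because the measure depends only on the statistical relationship between intensities, not on their absolute values, it works across modalities (e.g. pre- and intra-operative MR), which is why it is the standard choice for this kind of registration.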

Collaboration


Dive into Christof Rezk-Salama's collaborations.

Top Co-Authors

Peter Hastreiter
University of Erlangen-Nuremberg

Thomas Ertl
University of Stuttgart

Markus Hadwiger
King Abdullah University of Science and Technology

Bernd Tomandl
University of Erlangen-Nuremberg

Günther Greiner
University of Erlangen-Nuremberg

Knut E. W. Eberhardt
University of Erlangen-Nuremberg