Publication


Featured research published by Timo Ropinski.


International Conference on Computer Graphics and Interactive Techniques | 2008

Advanced illumination techniques for GPU-based volume raycasting

Markus Hadwiger; Patric Ljung; Christof Rezk Salama; Timo Ropinski

Volume raycasting techniques are important for both visual arts and visualization. They allow the efficient generation of visual effects and the visualization of scientific data obtained by tomography or numerical simulation. Thanks to their flexibility, experts agree that GPU-based raycasting is the state-of-the-art technique for interactive volume rendering, and it will most likely replace existing slice-based techniques in the near future. Volume rendering techniques are also effective for the direct rendering of implicit surfaces used for soft-body animation and constructive solid geometry. The lecture starts off with an in-depth introduction to the concepts behind GPU-based raycasting to provide a common base for the following parts. The focus of this course is on advanced illumination techniques which approximate physically based light transport more convincingly. Such techniques include interactive implementations of soft and hard shadows, ambient occlusion, and simple Monte Carlo based approaches to global illumination, including translucency and scattering. With the proposed techniques, users are able to interactively create convincing images from volumetric data whose visual quality goes far beyond traditional approaches. The optical properties in participating media are defined using the phase function. Many approximations to physically based light transport applied for rendering natural phenomena such as clouds or smoke assume a rather homogeneous phase function model. For rendering volumetric scans, on the other hand, different phase function models are required to account for both surface-like structures and fuzzy boundaries in the data. Using volume rendering techniques, artists who create medical visualizations for science magazines may now work on tomographic scans directly, without having to fall back to creating polygonal models of anatomical structures.
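
As an illustration of the emission-absorption compositing at the core of GPU-based raycasting, the following sketch integrates a single ray through a scalar volume with front-to-back alpha blending and early ray termination. It is a minimal CPU-side approximation written in Python/NumPy; the transfer function, step size, and nearest-neighbour sampling are placeholder assumptions, not taken from the course material.

```python
import numpy as np

def transfer_function(scalar):
    """Hypothetical 1D transfer function: maps a scalar in [0, 1]
    to (rgb, alpha). Placeholder, not from the course notes."""
    alpha = np.clip(scalar * 1.5 - 0.3, 0.0, 1.0)
    color = np.array([scalar, 0.5 * scalar, 1.0 - scalar])
    return color, alpha

def cast_ray(volume, origin, direction, step=0.5, max_steps=512):
    """Front-to-back emission-absorption compositing along one ray.
    `volume` is a 3D NumPy array of scalars in [0, 1]; samples use
    nearest-neighbour lookup for brevity."""
    accum_color = np.zeros(3)
    accum_alpha = 0.0
    pos = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)

    for _ in range(max_steps):
        idx = np.round(pos).astype(int)
        if np.any(idx < 0) or np.any(idx >= volume.shape):
            break  # ray left the volume
        color, alpha = transfer_function(volume[tuple(idx)])
        alpha = 1.0 - (1.0 - alpha) ** step          # opacity correction for step size
        accum_color += (1.0 - accum_alpha) * alpha * color   # front-to-back compositing
        accum_alpha += (1.0 - accum_alpha) * alpha
        if accum_alpha > 0.99:
            break  # early ray termination
        pos += direction * step

    return accum_color, accum_alpha

# Usage example with a random volume.
vol = np.random.rand(64, 64, 64)
rgb, a = cast_ray(vol, origin=(0, 32, 32), direction=(1, 0, 0))
```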


Computer Graphics Forum | 2008

Interactive Volume Rendering with Dynamic Ambient Occlusion and Color Bleeding

Timo Ropinski; Jennis Meyer-Spradow; Stefan Diepenbrock; Jörg Mensmann; Klaus H. Hinrichs

We propose a method for rendering volumetric data sets at interactive frame rates while supporting dynamic ambient occlusion as well as an approximation to color bleeding. In contrast to ambient occlusion approaches for polygonal data, techniques for volumetric data sets face additional challenges, since changing rendering parameters, such as the transfer function or thresholding, can alter the structure of the data set and thus the light interactions drastically. Therefore, during a preprocessing step that is independent of the rendering parameters, we capture light interactions for all combinations of structures extractable from a volumetric data set. To compute the light interactions between the different structures, we combine this preprocessed information during rendering based on the rendering parameters defined interactively by the user. Thus, our method supports interactive exploration of a volumetric data set while still giving the user control over the most important rendering parameters. For instance, if the user alters the transfer function to extract different structures from a volumetric data set, the light interactions between the extracted structures are captured in the rendering while still allowing interactive frame rates. Compared to known local illumination models for volume rendering, our method does not introduce any substantial rendering overhead and can be integrated easily into existing volume rendering applications. In this paper we explain our approach, discuss the implications for interactive volume rendering and present the achieved results.
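
One way to realize a transfer-function independent preprocessing of this kind is to store, per voxel, a local histogram of the scalar values in its neighbourhood and, at render time, weight that histogram by the current transfer function's opacities to estimate local occlusion. The sketch below is only my assumed simplification of such a scheme, not the authors' exact algorithm; all names and parameters are illustrative.

```python
import numpy as np

def precompute_local_histograms(volume, radius=2, bins=16):
    """Transfer-function independent preprocessing (assumed scheme):
    for every voxel, store a normalized histogram of the scalar values
    inside a cubic neighbourhood. Brute force; real implementations are faster."""
    hist = np.zeros(volume.shape + (bins,), dtype=np.float32)
    padded = np.pad(volume, radius, mode='edge')
    for x in range(volume.shape[0]):
        for y in range(volume.shape[1]):
            for z in range(volume.shape[2]):
                block = padded[x:x + 2 * radius + 1,
                               y:y + 2 * radius + 1,
                               z:z + 2 * radius + 1]
                h, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
                hist[x, y, z] = h / h.sum()
    return hist

def ambient_occlusion(hist, tf_alpha):
    """At render time, combine the precomputed histograms with the current
    transfer function's per-bin opacity (`tf_alpha`, length = bins) to estimate
    how strongly each voxel's neighbourhood occludes it."""
    occlusion = hist @ tf_alpha                     # expected neighbourhood opacity
    return 1.0 - np.clip(occlusion, 0.0, 1.0)       # ambient term in [0, 1]

# Usage: changing the transfer function only requires re-evaluating the cheap dot product.
vol = np.random.rand(32, 32, 32).astype(np.float32)
hist = precompute_local_histograms(vol)
ao = ambient_occlusion(hist, tf_alpha=np.linspace(0.0, 1.0, 16))
```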


Computers & Graphics | 2011

Visual Computing in Biology and Medicine: Survey of glyph-based visualization techniques for spatial multivariate medical data

Timo Ropinski; Steffen Oeltze; Bernhard Preim

In this survey article, we review glyph-based visualization techniques that have been exploited when visualizing spatial multivariate medical data. To classify these techniques, we derive a taxonomy of glyph properties that is based on classification concepts established in information visualization. Considering both the glyph visualization as well as the interaction techniques that are employed to generate or explore the glyph visualization, we are able to classify glyph techniques into two main groups: those supporting pre-attentive and those supporting attentive processing. With respect to this classification, we review glyph-based techniques described in the medical visualization literature. Based on the outcome of the literature review, we propose design guidelines for glyph visualizations in the medical domain.


IEEE Transactions on Visualization and Computer Graphics | 2010

Uncertainty-Aware Guided Volume Segmentation

Jörg-Stefan Praßni; Timo Ropinski; Klaus H. Hinrichs

Although direct volume rendering is established as a powerful tool for the visualization of volumetric data, efficient and reliable feature detection is still an open topic. Usually, a tradeoff between fast but imprecise classification schemes and accurate but time-consuming segmentation techniques has to be made. Furthermore, the issue of uncertainty introduced with the feature detection process is completely neglected by the majority of existing approaches. In this paper we propose a guided probabilistic volume segmentation approach that focuses on the minimization of uncertainty. In an iterative process, our system continuously assesses the uncertainty of a random walker-based segmentation in order to detect regions with high ambiguity, to which the user's attention is directed to support the correction of potential misclassifications. This reduces the risk of critical segmentation errors and ensures that information about the segmentation's reliability is conveyed to the user in a dependable way. In order to improve the efficiency of the segmentation process, our technique not only takes into account the volume data to be segmented, but also enables the user to incorporate classification information. An interactive workflow has been achieved by implementing the presented system on the GPU using the OpenCL API. Our results obtained for several medical data sets of different modalities, including brain MRI and abdominal CT, demonstrate the reliability and efficiency of our approach.
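
A CPU-side approximation of the core idea can be sketched with scikit-image's random walker, which can return per-label probabilities from which a per-voxel uncertainty (here, the entropy of the label distribution) is derived to locate ambiguous regions. The synthetic data, seed placement, and parameters are placeholders; this does not reproduce the paper's GPU/OpenCL workflow or its iterative guidance loop.

```python
import numpy as np
from skimage.segmentation import random_walker

# Synthetic volume: a bright blob on a noisy background (placeholder data).
rng = np.random.default_rng(0)
volume = rng.normal(0.2, 0.1, size=(40, 40, 40))
volume[15:25, 15:25, 15:25] += 0.6

# Sparse user seeds: label 1 = object, label 2 = background, 0 = unlabeled.
labels = np.zeros_like(volume, dtype=np.int32)
labels[20, 20, 20] = 1
labels[2, 2, 2] = 2

# Random walker segmentation, returning the full probability map
# (shape: n_labels x volume shape).
prob = random_walker(volume, labels, beta=130, mode='cg',
                     return_full_prob=True)

# Per-voxel uncertainty as the entropy of the label distribution;
# high values mark ambiguous regions that could be shown to the user.
eps = 1e-12
uncertainty = -np.sum(prob * np.log(prob + eps), axis=0)
segmentation = np.argmax(prob, axis=0) + 1
```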


IEEE Transactions on Visualization and Computer Graphics | 2011

About the Influence of Illumination Models on Image Comprehension in Direct Volume Rendering

Florian Lindemann; Timo Ropinski

In this paper, we present a user study in which we have investigated the influence of seven state-of-the-art volumetric illumination models on the spatial perception of volume rendered images. Within the study, we have compared gradient-based shading with half angle slicing, directional occlusion shading, multidirectional occlusion shading, shadow volume propagation, spherical harmonic lighting as well as dynamic ambient occlusion. To evaluate these models, users had to solve three tasks relying on correct depth as well as size perception. Our motivation for these three tasks was to find relations between the used illumination model, user accuracy and the elapsed time. In an additional task, users had to subjectively judge the output of the tested models. After first reviewing the models and their features, we will introduce the individual tasks and discuss their results. We discovered statistically significant differences in the testing performance of the techniques. Based on these findings, we have analyzed the models and extracted those features which are possibly relevant for the improved spatial comprehension in a relational task. We believe that a combination of these distinctive features could pave the way for a novel illumination model, which would be optimized based on our findings.


IEEE Pacific Visualization Symposium | 2010

Interactive volumetric lighting simulating scattering and shadowing

Timo Ropinski; Christian Döring; Christof Rezk-Salama

In this paper we present a volumetric lighting model which simulates scattering as well as shadowing in order to generate high-quality volume renderings. By approximating light transport in inhomogeneous participating media, we arrive at an efficient GPU implementation that achieves the desired effects at interactive frame rates. Moreover, in many cases the frame rates are even higher than those achieved with conventional gradient-based shading. To evaluate the impact of the proposed illumination model on the spatial comprehension of volumetric objects, we have conducted a user study in which the participants had to perform depth perception tasks. The results of this study show that depth perception is significantly improved when comparing our illumination model to conventional gradient-based volume shading. Additionally, since our volumetric illumination model is not based on gradient calculation, it is also less sensitive to noise and therefore applicable to imaging modalities incorporating a higher degree of noise, such as magnetic resonance tomography or 3D ultrasound.
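
A common way to approximate shadowing and forward scattering in participating media is to sweep through the volume slice by slice along the light direction, attenuating the incoming light by each slice's opacity and blurring it laterally. The sketch below assumes an axis-aligned light direction and placeholder extinction values; it illustrates the general class of technique, not the authors' exact model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def propagate_light(opacity, scatter_sigma=1.5):
    """Slice-by-slice light propagation along the first volume axis.
    `opacity` holds per-voxel extinction in [0, 1]. Each slice receives
    the light of the previous slice, attenuated by that slice's opacity
    and blurred laterally to approximate forward scattering."""
    light = np.ones_like(opacity)
    incoming = np.ones(opacity.shape[1:])        # light entering the first slice
    for z in range(opacity.shape[0]):
        light[z] = incoming
        incoming = incoming * (1.0 - opacity[z]) # attenuate by the current slice
        incoming = gaussian_filter(incoming, sigma=scatter_sigma)  # mimic scattering
    return light

# Usage: the per-voxel light intensity can replace gradient-based shading,
# which makes the result robust against noisy data (e.g. MRI, 3D ultrasound).
opacity = np.clip(np.random.rand(64, 64, 64) - 0.7, 0.0, None)
light_volume = propagate_light(opacity)
```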


Eurographics | 2008

Stroke-based transfer function design

Timo Ropinski; Jörg-Stefan Praßni; Frank Steinicke; Klaus H. Hinrichs

In this paper we propose a user interface for the design of 1D transfer functions. The user can select a feature of interest by drawing one or more strokes directly onto the volume rendering near its silhouette. Based on the stroke(s), our algorithm performs a histogram analysis in order to identify the desired feature in histogram space. Once the feature of interest has been identified, we automatically generate a component transfer function, which associates optical properties with the previously determined intervals in the domain of the data values. By supporting direct interaction techniques, which are performed in the image domain, the transfer function design becomes more intuitive compared to the specification performed in the histogram domain. To be able to modify and combine the previously generated component transfer functions conveniently, we propose a user interface, which has been inspired by the layer mechanism commonly found in image processing software. With this user interface, the optical properties assigned through a component function can be altered, and the component functions to be combined into a final transfer function can be selected.
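
The paper's histogram analysis is more involved, but the basic step of turning stroke samples into a component transfer function can be sketched as follows: collect the scalar values sampled along rays under the stroke, estimate an intensity interval from them, and build a tent-shaped opacity function over that interval. The interval heuristic (mean ± 2 standard deviations), bin count, and sample source below are illustrative assumptions.

```python
import numpy as np

def component_tf_from_stroke(samples, bins=256):
    """Build a 1D component transfer function from the scalar values
    sampled along rays cast through the user's stroke (simplified sketch:
    the interval of interest is mean +/- 2 std of the samples)."""
    samples = np.asarray(samples, dtype=float)
    center = samples.mean()
    width = 2.0 * samples.std() + 1e-6
    lo, hi = center - width, center + width

    # Tent-shaped opacity over the detected interval, zero elsewhere.
    x = np.linspace(0.0, 1.0, bins)
    alpha = np.clip(1.0 - np.abs(x - center) / width, 0.0, 1.0)
    alpha[(x < lo) | (x > hi)] = 0.0
    return x, alpha

# Usage: `stroke_samples` would come from ray casting near the silhouette
# under the drawn stroke; random values stand in here.
stroke_samples = np.random.normal(0.55, 0.05, size=500)
domain, alpha = component_tf_from_stroke(stroke_samples)
```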


IEEE Pacific Visualization Symposium | 2010

Shape-based transfer functions for volume visualization

Jörg-Stefan Praßni; Timo Ropinski; Jörg Mensmann; Klaus H. Hinrichs

We present a novel classification technique for volume visualization that takes the shape of volumetric features into account. The presented technique enables the user to distinguish features based on their 3D shape and to assign individual optical properties to them. Based on a rough pre-segmentation, which can be done by windowing, we exploit the curve-skeleton of each volumetric structure in order to derive a shape descriptor similar to those used in current shape recognition algorithms. The shape descriptor distinguishes three main shape classes: longitudinal, surface-like, and blobby shapes. In contrast to previous approaches, the classification is not performed on a per-voxel level but assigns a uniform shape descriptor to each feature and therefore allows a more intuitive user interface for the assignment of optical properties. Using the proposed technique, it becomes possible, for instance, to distinguish blobby heart structures filled with contrast agents from potentially occluding vessels and rib bones. After introducing the basic concepts, we show how the presented technique performs on real-world data, and we discuss current limitations.
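
The paper derives its descriptor from the curve-skeleton; as a much simpler stand-in, the three shape classes can be illustrated by classifying a pre-segmented feature from the eigenvalues of its voxel coordinate covariance matrix. This substitutes a principal-axes heuristic for the skeleton-based descriptor and is purely illustrative; the threshold and example data are assumptions.

```python
import numpy as np

def classify_shape(mask, ratio=4.0):
    """Classify a binary 3D feature mask as 'longitudinal', 'surface-like',
    or 'blobby' from the eigenvalues of its voxel coordinate covariance
    (stand-in for the curve-skeleton descriptor described in the paper)."""
    coords = np.argwhere(mask)                        # voxel positions of the feature
    cov = np.cov(coords.T)
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending eigenvalues
    if l1 > ratio * l2:
        return 'longitudinal'     # one dominant extent, e.g. vessels
    if l2 > ratio * l3:
        return 'surface-like'     # two dominant extents, e.g. thin sheets
    return 'blobby'               # roughly isotropic, e.g. heart chambers

# Usage: an elongated synthetic feature.
mask = np.zeros((64, 64, 64), dtype=bool)
mask[10:54, 30:33, 30:33] = True
print(classify_shape(mask))       # 'longitudinal'
```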


Computer Graphics Forum | 2014

A Survey of Volumetric Illumination Techniques for Interactive Volume Rendering

Daniel Jönsson; Erik Sundén; Anders Ynnerman; Timo Ropinski

Interactive volume rendering in its standard formulation has become an increasingly important tool in many application domains. In recent years several advanced volumetric illumination techniques to be used in interactive scenarios have been proposed. These techniques claim to have perceptual benefits as well as being capable of producing more realistic volume rendered images. Naturally, they cover a wide spectrum of illumination effects, including varying shading and scattering effects. In this survey, we review and classify the existing techniques for advanced volumetric illumination. The classification will be conducted based on their technical realization, their performance behaviour as well as their perceptual capabilities. Based on the limitations revealed in this review, we will define future challenges in the area of interactive advanced volumetric illumination.


Smart Graphics | 2006

Visually Supporting Depth Perception in Angiography Imaging

Timo Ropinski; Frank Steinicke; Klaus H. Hinrichs

In this paper we propose interactive visualization techniques which support the spatial comprehension of angiogram images by emphasizing depth information and introducing combined depth cues. In particular, we propose a depth-based color encoding, two variations of edge enhancement, and the application of a modified depth-of-field effect in order to enhance depth perception of complex blood vessel systems. All proposed techniques have been developed to improve human depth perception and have been adapted with special consideration of the spatial comprehension of blood vessel structures. To evaluate the presented techniques, we have conducted a user study in which users had to accomplish certain depth perception tasks.
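
As a minimal sketch of a depth-based color encoding, the snippet below maps per-pixel vessel depths to colors by interpolating from a warm near color to a cool far color. The color choices and normalization are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def depth_color_encoding(depth, near_color=(1.0, 0.3, 0.1),
                         far_color=(0.1, 0.3, 1.0)):
    """Map per-pixel depth values of rendered vessels to colors,
    interpolating from a warm near color to a cool far color
    (illustrative colors, not the paper's exact encoding)."""
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-12)  # normalize to [0, 1]
    near = np.asarray(near_color)
    far = np.asarray(far_color)
    return (1.0 - d)[..., None] * near + d[..., None] * far

# Usage with a synthetic depth buffer.
depth_buffer = np.random.rand(256, 256)
colored = depth_color_encoding(depth_buffer)   # shape (256, 256, 3)
```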
