Sören Grimm
Vienna University of Technology
Publications
Featured research published by Sören Grimm.
IEEE Transactions on Visualization and Computer Graphics | 2006
Stefan Bruckner; Sören Grimm; Armin Kanitsar; M. Eduard Gröller
In volume rendering, it is very difficult to simultaneously visualize interior and exterior structures while preserving clear shape cues. Highly transparent transfer functions produce cluttered images with many overlapping structures, while clipping techniques completely remove possibly important context information. In this paper, we present a new model for volume rendering, inspired by techniques from illustration. It provides a means of interactively inspecting the interior of a volumetric data set in a feature-driven way which retains context information. The context-preserving volume rendering model uses a function of shading intensity, gradient magnitude, distance to the eye point, and previously accumulated opacity to selectively reduce the opacity in less important data regions. It is controlled by two user-specified parameters. This new method represents an alternative to conventional clipping techniques: it shares their easy and intuitive user control but does not suffer from the drawback of missing context information.
IEEE VGTC Conference on Visualization | 2005
Stefan Bruckner; Sören Grimm; Armin Kanitsar; M. Eduard Gröller
In volume rendering, it is very difficult to simultaneously visualize interior and exterior structures while preserving clear shape cues. Very transparent transfer functions produce cluttered images with many overlapping structures, while clipping techniques completely remove possibly important context information. In this paper, we present a new model for volume rendering, inspired by techniques from illustration, that provides a means of interactively inspecting the interior of a volumetric data set in a feature-driven way which retains context information. The context-preserving volume rendering model uses a function of shading intensity, gradient magnitude, distance to the eye point, and previously accumulated opacity to selectively reduce the opacity in less important data regions. It is controlled by two user-specified parameters. This new method represents an alternative to conventional clipping techniques: it shares their easy and intuitive user control but does not suffer from the drawback of missing context information.
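The two entries above describe the same context-preserving model (journal and conference versions). As a rough illustration of the kind of opacity modulation they describe, the following Python sketch combines the four listed quantities and two user parameters into a single modulation factor. The function name, parameter names, and the exact combination of terms are assumptions for illustration only, not the equation defined in the papers.

```python
import numpy as np

def context_preserving_opacity(tf_opacity, shading, grad_mag, distance,
                               accumulated_opacity, kappa_t=2.0, kappa_s=0.5):
    """Selectively reduce the transfer-function opacity in 'less important'
    regions: brightly lit, weak-gradient samples that lie close to the eye
    and are not yet occluded become more transparent.

    All inputs are assumed to be normalized to [0, 1]; kappa_t and kappa_s
    stand in for the two user parameters mentioned in the abstracts. The
    exact formula is an illustrative assumption, not the published model.
    """
    # Large exponent where suppression should be strong: well lit, near the
    # eye, and little opacity accumulated along the ray so far.
    exponent = (kappa_t * shading * (1.0 - distance)
                * (1.0 - accumulated_opacity)) ** kappa_s
    # A weak gradient (homogeneous region) raised to a large exponent yields
    # a small factor, i.e. strong opacity reduction; strong gradients
    # (surface-like regions) keep most of their opacity.
    return tf_opacity * np.clip(grad_mag, 0.0, 1.0) ** exponent
```

In a raycaster, such a function would be evaluated per sample before compositing, so that homogeneous, well-lit, near, and still-unoccluded regions fade out while silhouettes and interior features remain visible.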
IEEE Symposium on Volume Visualization and Graphics, 2004 | 2005
Sören Grimm; Stefan Bruckner; Armin Kanitsar; Eduard Gröller
Most CPU-based volume raycasting approaches achieve high performance through advanced memory layouts, space subdivision, and extensive pre-computation. Such approaches typically need an enormous amount of memory. They are limited to sizes that cannot accommodate the medical data used in daily clinical routine. We present a new volume raycasting approach based on image-ordered raycasting with object-ordered processing, which is able to perform high-quality rendering of very large medical data in real time on commodity computers. For large medical data such as computed tomographic (CT) angiography run-offs (512 × 512 × 1202) we achieve frame rates of up to 2.5 fps on a commodity notebook. We achieve this by introducing a memory-efficient acceleration technique for on-the-fly gradient estimation and a memory-efficient hybrid technique for removing and skipping transparent regions. We employ quantized binary histograms, granular resolution octrees, and a cell invisibility cache. These acceleration structures require only a small storage overhead of approximately 10%.
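The acceleration structures named above (quantized binary histograms, granular resolution octrees, cell invisibility cache) are specific to the paper. The sketch below only illustrates the general ingredient such schemes build on: classifying fixed-size bricks of the volume against the transfer function so that fully transparent regions can be skipped during raycasting. The brick size, function names, and the min/max-based test are assumptions for illustration.

```python
import numpy as np

BRICK = 8  # assumed brick edge length in voxels

def build_brick_minmax(volume):
    """Record the minimum and maximum density of every BRICK^3 cell so that
    whole bricks can be classified before any ray touches their voxels.
    Assumes the volume dimensions are multiples of BRICK."""
    nz, ny, nx = (s // BRICK for s in volume.shape)
    mins = np.empty((nz, ny, nx), volume.dtype)
    maxs = np.empty((nz, ny, nx), volume.dtype)
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                cell = volume[z * BRICK:(z + 1) * BRICK,
                              y * BRICK:(y + 1) * BRICK,
                              x * BRICK:(x + 1) * BRICK]
                mins[z, y, x] = cell.min()
                maxs[z, y, x] = cell.max()
    return mins, maxs

def brick_is_transparent(mins, maxs, idx, opacity_lut):
    """A brick can be skipped entirely if no density inside its [min, max]
    range maps to a non-zero opacity in the transfer-function lookup table
    (indexed by integer density)."""
    lo, hi = int(mins[idx]), int(maxs[idx])
    return not np.any(opacity_lut[lo:hi + 1] > 0.0)
```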
IEEE Transactions on Visualization and Computer Graphics | 2008
David Williams; Sören Grimm; Ernesto Coto; Abdul V. Roudsari; Haralambos Hatzakis
Curved Planar Reformation (CPR) has proved to be a practical and widely used tool for the visualization of curved tubular structures within the human body. It has been useful in medical procedures involving the examination of blood vessels and the spine. However, it is more difficult to use for large tubular structures such as the trachea and the colon, because abnormalities may be smaller relative to the size of the structure and may not have such distinct density and shape characteristics. Our new approach improves on this situation by using volume rendering for hollow regions and standard CPR for the surrounding tissue. This effectively combines grayscale contextual information with detailed color information from the area of interest. The approach is successfully used with each of the standard CPR types, and the resulting images are promising as an alternative to virtual endoscopy. Because the CPR and the volume rendering are tightly coupled, the projection method used has a significant effect on properties of the volume renderer such as distortion and isometry. We describe and compare the different CPR projection methods and how they affect the volume rendering process. A version of the algorithm is also presented which makes use of importance-driven techniques; this ensures the user's attention is always focused on the area of interest and also improves the speed of the algorithm.
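As a hypothetical sketch of the basic resampling step behind CPR (not the paper's combination of CPR with volume rendering, and not any particular projection method it compares), the following Python fragment builds a straightened reformation image by sampling the volume along a fixed direction of interest through each centerline point. All names, and the use of scipy's map_coordinates for interpolation, are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates  # trilinear resampling of the volume

def straightened_cpr(volume, centerline, direction, half_width, spacing=1.0):
    """Build a straightened CPR image: one row per centerline point, sampled
    along a fixed 'direction of interest' through that point.

    `volume` is a 3D array indexed (z, y, x), `centerline` an (N, 3) array of
    voxel coordinates in the same axis order, `direction` a 3-vector.
    """
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    offsets = np.arange(-half_width, half_width + 1) * spacing
    # For every centerline point, a line of sample positions along `direction`.
    coords = (centerline[:, None, :]
              + offsets[None, :, None] * direction[None, None, :])
    # map_coordinates expects one coordinate array per volume axis.
    return map_coordinates(volume,
                           [coords[..., 0], coords[..., 1], coords[..., 2]],
                           order=1, mode='nearest')
```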
IEEE VGTC Conference on Visualization | 2006
Peter Rautek; Balázs Csébfalvi; Sören Grimm; Stefan Bruckner; M. Eduard Gröller
Volume rendering techniques are conventionally classified as either direct or indirect methods. Indirect methods require transforming the initial volumetric model into an intermediate geometrical model in order to visualize it efficiently. In contrast, direct volume rendering (DVR) methods can directly process the volumetric data. Modern CT scanners usually provide data as a set of samples on a rectilinear grid, which is computed from the measured projections by discrete tomographic reconstruction. Therefore, the rectilinear grid can already be considered an intermediate volume representation. In this paper we introduce direct direct volume rendering (D2VR). D2VR does not require a rectilinear grid, since it is based on immediate processing of the measured projections. Arbitrary samples for ray casting are reconstructed from the projections using the Filtered Back-Projection algorithm. Our method removes a lossy resampling step from the classical volume rendering pipeline and provides much higher accuracy than traditional grid-based resampling techniques. Furthermore, we present a novel high-quality gradient estimation scheme, which is also based on the Filtered Back-Projection algorithm.
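To make the core idea concrete, here is a minimal 2D parallel-beam sketch of evaluating a function value at an arbitrary position directly from filtered projections, i.e. Filtered Back-Projection without an intermediate grid. D2VR itself works with real CT projection data in 3D, so the geometry, the filtering details, and all names below are simplifying assumptions.

```python
import numpy as np

def ramp_filter_sinogram(sinogram):
    """Apply the ramp (Ram-Lak) filter to every projection row via the FFT.
    `sinogram` has shape (num_angles, num_detectors)."""
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

def fbp_value_at(point, filtered, angles, detector_spacing=1.0):
    """Reconstruct the function value at an arbitrary 2D point directly from
    filtered parallel-beam projections (no grid resampling): for each angle,
    project the point onto the detector, read the filtered projection with
    linear interpolation, and sum the contributions."""
    x, y = point
    n = filtered.shape[1]
    center = (n - 1) / 2.0
    value = 0.0
    for row, theta in zip(filtered, angles):
        t = (x * np.cos(theta) + y * np.sin(theta)) / detector_spacing + center
        i = int(np.floor(t))
        if 0 <= i < n - 1:
            frac = t - i
            value += (1 - frac) * row[i] + frac * row[i + 1]
    return value * np.pi / len(angles)
```

Ray casting would call such an evaluation for every sample position along a ray, which is what removes the lossy grid-resampling step from the pipeline.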
Eurographics | 2004
Sören Grimm; Stefan Bruckner; Armin Kanitsar; M. Eduard Gröller
We present Volume dots (Vots), a new primitive for volumetric data modelling, processing, and rendering. Vots are a point-based representation of volumetric data. An individual Vot is specified by the coefficients of a Taylor series expansion, i.e. the function value and higher-order derivatives at a specific point. A Vot does not merely represent a single sample point; it represents the underlying function within a region. With the Vots representation, we have a more intuitive and high-level description of the volume data. This allows direct analytical examination and manipulation of volumetric datasets. Vots enable the representation of the underlying scalar function with specified precision. User-centric importance sampling is also possible, i.e., unimportant volume parts are still present but are represented with just very few Vots. As a proof of concept, we show Maximum Intensity Projection based on Vots.
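Since a Vot stores Taylor coefficients, evaluating the underlying function near a Vot amounts to summing the expansion. The sketch below truncates it at second order (value, gradient, Hessian) purely for brevity; the function and argument names are assumptions.

```python
import numpy as np

def evaluate_vot(position, vot_center, value, gradient, hessian):
    """Evaluate the scalar function represented by one Vot at `position`,
    using its Taylor coefficients around the Vot's center p:
        f(x) ~= f(p) + g . (x - p) + 0.5 * (x - p)^T H (x - p)
    Higher-order coefficients would simply contribute further terms."""
    d = np.asarray(position, dtype=float) - np.asarray(vot_center, dtype=float)
    gradient = np.asarray(gradient, dtype=float)
    hessian = np.asarray(hessian, dtype=float)
    return value + gradient @ d + 0.5 * d @ hessian @ d
```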
Computers & Graphics | 2007
Ernesto Coto; Sören Grimm; David Williams
The watershed transform from markers is a very popular image segmentation operator. The image foresting transform (IFT) watershed is a common method for computing the watershed transform from markers using a priority queue, but it can consume too much memory when applied to three-dimensional medical datasets. This is a considerable limitation on the applicability of the IFT watershed, as the size of medical datasets keeps increasing at a faster pace than physical memory technologies develop. This paper presents the O-IFT watershed, a new type of IFT watershed based on the O-Buffer framework, and introduces an efficient data representation which considerably reduces the memory consumption of the algorithm. In addition, this paper introduces the O-Buckets, a new implementation of the priority queue which further reduces the memory consumption of the algorithm. The new O-IFT watershed with O-Buckets allows the application of the watershed transform from markers to large medical datasets.
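A minimal sketch of the general technique, assuming a 2D integer-valued gradient image (values below 256) and 4-connectivity: marker-based watershed flooding driven by a bucket queue, i.e. one FIFO per gray level instead of a heap. This illustrates why bucket-style priority queues are attractive for this transform; it is not the O-IFT algorithm or the O-Buckets data representation from the paper.

```python
from collections import deque
import numpy as np

def watershed_from_markers(gradient_img, markers, levels=256):
    """Flood a 2D integer gradient image from labeled markers. `markers`
    holds positive labels at seed pixels and 0 elsewhere; every other pixel
    receives the label of the marker that reaches it first when flooding in
    non-decreasing gray-level order."""
    labels = markers.copy()
    buckets = [deque() for _ in range(levels)]
    in_queue = np.zeros(gradient_img.shape, dtype=bool)

    # Seed the queue with every marker pixel at its own gradient level.
    for p in zip(*np.nonzero(markers)):
        buckets[int(gradient_img[p])].append(p)
        in_queue[p] = True

    for level in range(levels):
        queue = buckets[level]
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < labels.shape[0] and 0 <= nx < labels.shape[1]
                        and not in_queue[ny, nx]):
                    labels[ny, nx] = labels[y, x]
                    in_queue[ny, nx] = True
                    # Process a pixel at the larger of its own level and the
                    # level it was reached from, so flooding stays ordered.
                    buckets[max(level, int(gradient_img[ny, nx]))].append((ny, nx))
    return labels
```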
Archive | 2004
Sören Grimm; Stefan Bruckner; Armin Kanitsar; M. Eduard Gröller
Vision, Modeling, and Visualization | 2005
Ernesto Coto; Sören Grimm; Stefan Bruckner; M. Eduard Gröller; Armin Kanitsar; Omaira Rodríguez
Vision, Modeling, and Visualization | 2004
Sören Grimm; Stefan Bruckner; Armin Kanitsar; M. Eduard Gröller