M.E. Groller
Vienna University of Technology
Publications
Featured research published by M.E. Groller.
IEEE Visualization | 2004
Ivan Viola; Armin Kanitsar; M.E. Groller
This work introduces importance-driven volume rendering as a novel technique for automatic focus and context display of volumetric data. Our technique is a generalization of cut-away views, which, depending on the viewpoint, remove or suppress less important parts of a scene to reveal more important underlying information. We automate this idea and apply it to volumetric data. Each part of the volumetric data is assigned an object importance, which encodes visibility priority. This property determines which structures should be readily discernible and which structures are less important. In image regions where an object occludes more important structures, it is displayed more sparsely than in areas where no occlusion occurs. Thus the objects of interest are clearly visible. For each object, several representations, i.e., levels of sparseness, are specified. The display of an individual object may incorporate different levels of sparseness. The goal is to emphasize important structures and to maximize the information content in the final image. This work also discusses several possible schemes for specifying levels of sparseness and different ways in which object importance can be composited to determine the final appearance of a particular object.
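As a rough sketch of the occlusion-driven choice between levels of sparseness, consider a single ray with labeled samples. The object labels, importance values, and the reduction of levels of sparseness to simple opacity scales below are hypothetical; the sketch only illustrates the decision rule, not the paper's renderer.

```python
import numpy as np

# Hypothetical per-object importance (higher = more important) and
# "levels of sparseness", reduced here to simple opacity scales.
importance = {"skin": 1, "bone": 2, "vessel": 3}
dense_opacity = {"skin": 0.9, "bone": 0.8, "vessel": 1.0}
sparse_opacity = {"skin": 0.05, "bone": 0.2, "vessel": 1.0}

def composite_ray(samples):
    """samples: list of (object_id, color, base_alpha), ordered front to back.

    An object is rendered sparsely wherever a more important object lies
    behind it on the same ray, i.e. wherever it would occlude that object."""
    max_behind = [0.0] * len(samples)
    running = 0.0
    for i in range(len(samples) - 1, -1, -1):      # back-to-front pass
        max_behind[i] = running
        running = max(running, importance[samples[i][0]])

    color_acc, alpha_acc = np.zeros(3), 0.0
    for i, (obj, color, base_alpha) in enumerate(samples):   # front-to-back compositing
        scale = sparse_opacity[obj] if max_behind[i] > importance[obj] else dense_opacity[obj]
        a = base_alpha * scale
        color_acc = color_acc + (1.0 - alpha_acc) * a * np.asarray(color, float)
        alpha_acc = alpha_acc + (1.0 - alpha_acc) * a
    return color_acc, alpha_acc

# A ray passing through skin, then a vessel, then skin again:
print(composite_ray([("skin", (1.0, 0.8, 0.7), 0.6),
                     ("vessel", (0.8, 0.1, 0.1), 0.9),
                     ("skin", (1.0, 0.8, 0.7), 0.6)]))
```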
IEEE Visualization | 2005
Stefan Bruckner; M.E. Groller
Illustrations play a major role in the education process. Whether used to teach a surgical or radiologic procedure, to illustrate normal or aberrant anatomy, or to explain the functioning of a technical device, illustration significantly impacts learning. Although many specimens are readily available as volumetric data sets, particularly in medicine, illustrations are commonly produced manually as static images in a time-consuming process. Our goal is to create a fully dynamic three-dimensional illustration environment which directly operates on volume data. Single images have the aesthetic appeal of traditional illustrations, but can be interactively altered and explored. In this paper we present methods to realize such a system which combines artistic visual styles and expressive visualization techniques. We introduce a novel concept for direct multi-object volume visualization which allows control of the appearance of inter-penetrating objects via two-dimensional transfer functions. Furthermore, a unifying approach to efficiently integrate many non-photorealistic rendering models is presented. We discuss several illustrative concepts which can be realized by combining cutaways, ghosting, and selective deformation. Finally, we also propose a simple interface to specify objects of interest through three-dimensional volumetric painting. All presented methods are integrated into VolumeShop, an interactive hardware-accelerated application for direct volume illustration.
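The multi-object compositing idea can be illustrated with a toy two-dimensional transfer function lookup for regions where two objects inter-penetrate. The table contents, value ranges, and function names below are hypothetical; the sketch shows only the indexing scheme, not VolumeShop's actual implementation.

```python
import numpy as np

bins = 4
# Hypothetical 2D transfer function: rows are indexed by the background
# object's data value, columns by the embedded object's data value.
tf2d = np.zeros((bins, bins, 4))
for i in range(bins):
    for j in range(bins):
        tf2d[i, j] = [i / (bins - 1), j / (bins - 1), 0.5, 0.25 * max(i, j) / (bins - 1)]

def classify_intersection(bg_value, obj_value, data_max=255.0):
    """Return RGBA for a sample where the two objects inter-penetrate.

    Outside the intersection region, each object would simply use its own
    ordinary 1D transfer function."""
    i = int(np.clip(bg_value / data_max * (bins - 1), 0, bins - 1))
    j = int(np.clip(obj_value / data_max * (bins - 1), 0, bins - 1))
    return tf2d[i, j]

print(classify_intersection(200, 40))   # e.g. dense background, faint embedded object
```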
IEEE Transactions on Visualization and Computer Graphics | 2006
Ivan Viola; Miquel Feixas; Mateu Sbert; M.E. Groller
This paper introduces a concept for automatic focusing on features within a volumetric data set. The user selects a focus, i.e., an object of interest, from a set of pre-defined features. Our system automatically determines the most expressive view on this feature. A characteristic viewpoint is estimated by a novel information-theoretic framework which is based on the mutual information measure. When the focus switches from one feature to another, the viewpoint changes smoothly. This mechanism is controlled by changes in the importance distribution among features in the volume. The highest importance is assigned to the feature in focus. Apart from viewpoint selection, the focusing mechanism also steers visual emphasis by assigning a visually more prominent representation. To allow a clear view on features that are normally occluded by other parts of the volume, the focusing mechanism incorporates, for example, cut-away views.
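The information-theoretic view selection can be sketched as follows, assuming per-view visibility estimates for each feature are available. The visibility matrix and the uniform viewpoint prior below are hypothetical, and the paper additionally weights features by their importance; this is only the basic mutual-information bookkeeping.

```python
import numpy as np

# Hypothetical visibility matrix: vis[v, o] = how much of feature o is
# visible (e.g. unoccluded projected area) from candidate viewpoint v.
vis = np.array([[0.6, 0.1, 0.3],
                [0.2, 0.5, 0.3],
                [0.1, 0.1, 0.8]], dtype=float)

p_o_given_v = vis / vis.sum(axis=1, keepdims=True)   # per-view feature distribution
p_v = np.full(vis.shape[0], 1.0 / vis.shape[0])      # uniform prior over viewpoints
p_o = p_v @ p_o_given_v                              # marginal feature distribution

def view_mutual_information(v):
    """KL divergence between the view's feature distribution and the marginal,
    i.e. the per-view contribution to I(V; O)."""
    p = p_o_given_v[v]
    return float(np.sum(p * np.log2(p / p_o)))

print([round(view_mutual_information(v), 3) for v in range(vis.shape[0])])
```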
IEEE Transactions on Visualization and Computer Graphics | 2005
Ivan Viola; Armin Kanitsar; M.E. Groller
This paper presents importance-driven feature enhancement as a technique for the automatic generation of cut-away and ghosted views from volumetric data. The presented focus+context approach removes or suppresses less important parts of a scene to reveal more important underlying information. However, less important parts remain fully visible in regions where no important visual information is lost, i.e., where more relevant features are not occluded. Features within the volumetric data are first classified according to a new dimension, denoted as object importance. This property determines which structures should be readily discernible and which structures are less important. Next, for each feature, various representations (levels of sparseness) ranging from a dense to a sparse depiction are defined. Levels of sparseness define a spectrum of optical properties or rendering styles. The resulting image is generated by ray casting and by combining the intersected features proportionally to their importance (importance compositing). The paper includes an extended discussion of several possible schemes for specifying levels of sparseness. Furthermore, different approaches to importance compositing are treated.
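One simple reading of importance compositing is a per-ray blend of the intersected features with weights proportional to their importance; the paper discusses several alternative schemes, so the sketch below, with its hypothetical feature list and weights, is illustrative only.

```python
import numpy as np

# Hypothetical features intersected by one ray, each with an importance value
# and the color/opacity it would contribute when rendered densely.
features = [
    {"name": "skin",   "importance": 1.0, "color": np.array([1.0, 0.8, 0.7]), "alpha": 0.6},
    {"name": "bone",   "importance": 2.0, "color": np.array([0.9, 0.9, 0.8]), "alpha": 0.7},
    {"name": "vessel", "importance": 4.0, "color": np.array([0.8, 0.1, 0.1]), "alpha": 0.9},
]

def importance_composite(features):
    """Blend the features hit by a ray with weights proportional to their
    importance -- one simple importance-compositing scheme."""
    weights = np.array([f["importance"] for f in features])
    weights = weights / weights.sum()
    color = sum(w * f["alpha"] * f["color"] for w, f in zip(weights, features))
    alpha = float(sum(w * f["alpha"] for w, f in zip(weights, features)))
    return color, alpha

print(importance_composite(features))
```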
IEEE Transactions on Visualization and Computer Graphics | 2006
Stefan Bruckner; M.E. Groller
Exploded views are an illustration technique where an object is partitioned into several segments. These segments are displaced to reveal otherwise hidden detail. In this paper we apply the concept of exploded views to volumetric data in order to solve the general problem of occlusion. In many cases an object of interest is occluded by other structures. While transparency or cutaways can be used to reveal a focus object, these techniques remove parts of the context information. Exploded views, on the other hand, do not suffer from this drawback. Our approach employs a force-based model: the volume is divided into a part configuration controlled by a number of forces and constraints. The focus object exerts an explosion force causing the parts to arrange themselves according to the given constraints. We show that this novel and flexible approach allows for a wide variety of explosion-based visualizations, including view-dependent explosions. Furthermore, we present a high-quality GPU-based volume ray casting algorithm for exploded views which allows rendering and interaction at several frames per second.
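As a rough illustration of the force-based model, the following toy simulation pushes part centroids away from the focus object while a spring-like constraint pulls them back toward their rest positions. The constants, the point-mass setup, and the specific force terms are hypothetical and not taken from the paper.

```python
import numpy as np

# Hypothetical setup: part centroids arranged around a focus object at the origin.
focus = np.zeros(3)
rest_positions = np.array([[ 1.0,  0.0, 0.0],
                           [-1.0,  0.5, 0.0],
                           [ 0.0, -1.0, 0.5]])
positions = rest_positions.copy()
velocities = np.zeros_like(positions)

k_explosion, k_return, damping, dt = 2.0, 0.5, 0.8, 0.05

def step(positions, velocities):
    """One iteration of a toy force-based explosion: the focus pushes parts
    outward, a constraint force pulls them back toward their rest positions,
    and damping lets the system settle."""
    away = positions - focus
    dist = np.linalg.norm(away, axis=1, keepdims=True) + 1e-6
    f_explode = k_explosion * away / dist**2          # falls off with distance
    f_return = k_return * (rest_positions - positions)
    velocities = damping * (velocities + dt * (f_explode + f_return))
    return positions + dt * velocities, velocities

for _ in range(200):
    positions, velocities = step(positions, velocities)
print(positions)   # parts settle at displaced locations that free the focus
```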
Computer Graphics Forum | 2007
Stefan Bruckner; M.E. Groller
Illustrative volume visualization frequently employs non-photorealistic rendering techniques to enhance important features or to suppress unwanted details. However, it is difficult to integrate multiple non-photorealistic rendering approaches into a single framework due to great differences in the individual methods and their parameters. In this paper, we present the concept of style transfer functions. Our approach enables flexible data-driven illumination which goes beyond using the transfer function to just assign colors and opacities. An image-based lighting model uses sphere maps to represent non-photorealistic rendering styles. Style transfer functions allow us to combine a multitude of different shading styles in a single rendering. We extend this concept with a technique for curvature-controlled style contours and an illustrative transparency model. Our implementation of the presented methods allows interactive generation of high-quality volumetric illustrations.
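The sphere-map based shading can be sketched as follows: the eye-space normal indexes a lit-sphere image, and the sample's data value blends between styles. The two procedurally generated sphere maps and the linear blend are stand-ins for hand-painted styles and a full style transfer function over the data range.

```python
import numpy as np

res = 64
v = np.linspace(-1.0, 1.0, res)
x, y = np.meshgrid(v, v)

# Two hypothetical lit-sphere maps standing in for painted styles:
# a soft diffuse look and a hard two-tone cartoon look.
nz = np.sqrt(np.clip(1.0 - x**2 - y**2, 0.0, 1.0))
style_soft = np.clip(0.2 + 0.8 * nz, 0.0, 1.0)
style_toon = np.where(nz > 0.5, 1.0, 0.3)

def shade(normal, data_value, data_max=255.0):
    """Index the sphere maps with the eye-space normal's x/y components and
    blend the two styles by the sample's data value."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    i = int((n[1] * 0.5 + 0.5) * (res - 1))
    j = int((n[0] * 0.5 + 0.5) * (res - 1))
    t = np.clip(data_value / data_max, 0.0, 1.0)
    return (1.0 - t) * style_soft[i, j] + t * style_toon[i, j]

print(shade((0.3, 0.4, 0.87), 80))
```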
IEEE Transactions on Visualization and Computer Graphics | 2006
Stefan Bruckner; Sören Grimm; Armin Kanitsar; M.E. Groller
In volume rendering, it is very difficult to simultaneously visualize interior and exterior structures while preserving clear shape cues. Highly transparent transfer functions produce cluttered images with many overlapping structures, while clipping techniques completely remove possibly important context information. In this paper, we present a new model for volume rendering, inspired by techniques from illustration. It provides a means of interactively inspecting the interior of a volumetric data set in a feature-driven way which retains context information. The context-preserving volume rendering model uses a function of shading intensity, gradient magnitude, distance to the eye point, and previously accumulated opacity to selectively reduce the opacity in less important data regions. It is controlled by two user-specified parameters. This new method represents an alternative to conventional clipping techniques, sharing their easy and intuitive user control, but does not suffer from the drawback of missing context information.
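A sketch of how the four quantities and two parameters might be combined is given below. It follows the spirit of the model described above (flat, brightly lit, nearby samples in front of already-visible content become more transparent), but the precise formula is the one in the paper; the parameter names and normalizations here are assumptions.

```python
import numpy as np

def context_preserving_alpha(base_alpha, grad_mag, shading, depth, alpha_acc,
                             kappa_t=1.0, kappa_s=0.5):
    """Modulate a sample's opacity by the gradient magnitude, with an exponent
    driven by shading intensity, eye distance, and accumulated opacity.

    base_alpha : opacity from the transfer function, in [0, 1]
    grad_mag   : gradient magnitude, normalized to [0, 1]
    shading    : shading (lighting) intensity at the sample, in [0, 1]
    depth      : normalized distance from the eye point, in [0, 1]
    alpha_acc  : opacity accumulated so far along the ray, in [0, 1]
    kappa_t, kappa_s : the two user parameters controlling the effect."""
    exponent = (kappa_t * shading * (1.0 - depth) * (1.0 - alpha_acc)) ** kappa_s
    return base_alpha * np.clip(grad_mag, 1e-3, 1.0) ** exponent

# A flat, well-lit sample near the eye is strongly suppressed ...
print(context_preserving_alpha(0.8, grad_mag=0.05, shading=0.9, depth=0.1, alpha_acc=0.0))
# ... while a high-gradient (surface-like) sample keeps most of its opacity.
print(context_preserving_alpha(0.8, grad_mag=0.9, shading=0.9, depth=0.1, alpha_acc=0.0))
```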
IEEE Visualization | 2001
T. Theussl; Torsten Möller; M.E. Groller
The classification of volumetric data sets as well as their rendering algorithms are typically based on the representation of the underlying grid. Grid structures based on a Cartesian lattice are the de-facto standard for regular representations of volumetric data. In this paper we introduce a more general concept of regular grids for the representation of volumetric data. We demonstrate that a specific type of regular lattice, the so-called body-centered cubic lattice, is able to represent the same data set as a Cartesian grid to the same accuracy but with 29.3% fewer samples. This speeds up traditional volume rendering algorithms by the same ratio, which we demonstrate by adapting a splatting implementation to these new lattices. We investigate different filtering methods required for computing the normals on this lattice. The lattice representation also results in lossless compression ratios that are better than previously reported. Although other regular grid structures achieve the same sample efficiency, the body-centered cubic lattice is particularly easy to use. The only assumption necessary is that the underlying volume is isotropic and band-limited, an assumption that is valid for most practical data sets.
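The lattice construction and the sample-count arithmetic are easy to reproduce. The sketch below builds a body-centered cubic lattice as two interleaved Cartesian grids and prints the 1 - 1/sqrt(2) ≈ 29.3% saving that holds for isotropic, spherically band-limited data; the function name and block size are ours.

```python
import numpy as np

def bcc_points(n, spacing=1.0):
    """Generate a body-centered cubic lattice inside an n x n x n block:
    a Cartesian grid plus a second grid shifted by half the cell size."""
    g = np.arange(n) * spacing
    corners = np.stack(np.meshgrid(g, g, g, indexing="ij"), -1).reshape(-1, 3)
    centers = corners + 0.5 * spacing
    return np.concatenate([corners, centers])

# Sample-count argument: relative to a Cartesian grid sampling an isotropic,
# spherically band-limited signal at its Nyquist rate, optimal BCC sampling
# needs only 1/sqrt(2) of the samples.
saving = 1.0 - 1.0 / np.sqrt(2.0)
print(f"BCC uses {saving:.1%} fewer samples")   # ~29.3%, as stated above

print(bcc_points(2)[:4])   # a few of the 16 points of a 2x2x2 block
```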
IEEE Transactions on Visualization and Computer Graphics | 2007
Stefan Bruckner; M.E. Groller
Volumetric data commonly has high depth complexity, which makes it difficult to judge spatial relationships accurately. There are many different ways to enhance depth perception, such as shading, contours, and shadows. Artists and illustrators frequently employ halos for this purpose. In this technique, regions surrounding the edges of certain structures are darkened or brightened, which makes it easier to judge occlusion. Based on this concept, we present a flexible method for enhancing and highlighting structures of interest using GPU-based direct volume rendering. Our approach uses an interactively defined halo transfer function to classify structures of interest based on data value, direction, and position. A feature-preserving spreading algorithm is applied to distribute seed values to neighboring locations, generating a controllably smooth field of halo intensities. These halo intensities are then mapped to colors and opacities using a halo profile function. Our method can be used to annotate features at interactive frame rates.
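A drastically simplified version of the pipeline (seed, spread, map through a profile) could look as follows. The decayed-dilation spreading and the black darkening profile are stand-ins for the paper's feature-preserving spreading algorithm and halo profile function; the volume and seed placement are hypothetical.

```python
import numpy as np

def grow_halo(seeds, iterations=3, decay=0.6):
    """Spread seed intensities to neighboring voxels by repeated decayed
    dilation, yielding a smooth field of halo intensities."""
    field = seeds.astype(float)
    for _ in range(iterations):
        shifted = [np.roll(field, s, axis=a) for a in range(3) for s in (-1, 1)]
        field = np.maximum(field, decay * np.maximum.reduce(shifted))
    return field

def halo_profile(intensity, opacity_scale=0.7):
    """Map halo intensity to RGBA: a partly opaque darkening (black) halo."""
    return np.array([0.0, 0.0, 0.0, opacity_scale * intensity])

# Hypothetical 16^3 volume with a single seeded structure in the middle.
seeds = np.zeros((16, 16, 16))
seeds[8, 8, 8] = 1.0
halo = grow_halo(seeds)
print(halo[8, 8, 6:11])           # intensity falls off away from the structure
print(halo_profile(halo[8, 8, 7]))
```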
IEEE Visualization | 2003
Armin Kanitsar; Rainer Wegenkittl; Dominik Fleischmann; M.E. Groller
Traditional volume visualization techniques may provide incomplete clinical information needed for applications in medical visualization. In the area of vascular visualization, important features such as the lumen of a diseased vessel segment may not be visible. Curved planar reformation (CPR) has proven to be an acceptable practical solution. Existing CPR techniques, however, still have diagnostically relevant limitations. In this paper, we introduce two advanced methods for efficient vessel visualization, based on the concept of CPR. Both methods benefit from relaxing spatial coherence in favor of improved feature perception. We present a new technique to visualize the interior of a vessel in a single image: the vessel is resampled along a spiral around its central axis, and the resulting helical spiral depicts the vessel volume. Furthermore, a method to display an entire vascular tree without mutually occluding vessels is presented. Minimal rotations at the bifurcations avoid occlusions, so that for each viewing direction the entire vessel structure is visible.
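The helical resampling can be sketched as follows, assuming a given centerline and a simple nearest-neighbor lookup. The fixed in-plane basis, the synthetic volume, and the parameter choices are assumptions for illustration; a real implementation would build a rotation-minimizing frame along the centerline and interpolate properly.

```python
import numpy as np

def helical_cpr(volume, centerline, radius=5.0, turns=8, samples_per_turn=64):
    """Resample the volume along a helix winding around the vessel centerline,
    producing a small 2D image (one row per turn) that exposes the interior."""
    n_samples = turns * samples_per_turn
    angles = np.linspace(0.0, 2.0 * np.pi * turns, n_samples, endpoint=False)
    t = np.linspace(0, len(centerline) - 1, n_samples)
    centers = np.stack([np.interp(t, np.arange(len(centerline)), centerline[:, k])
                        for k in range(3)], axis=1)
    # Hypothetical fixed in-plane basis; works for a roughly z-aligned centerline.
    u, w = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
    pts = centers + radius * (np.cos(angles)[:, None] * u + np.sin(angles)[:, None] * w)
    idx = np.clip(np.round(pts).astype(int), 0, np.array(volume.shape) - 1)
    return volume[idx[:, 0], idx[:, 1], idx[:, 2]].reshape(turns, samples_per_turn)

# A synthetic volume and a straight centerline along the z axis:
vol = np.random.rand(64, 64, 64)
path = np.stack([np.full(32, 32.0), np.full(32, 32.0), np.linspace(8, 56, 32)], axis=1)
print(helical_cpr(vol, path).shape)   # (8, 64)
```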