Publication


Featured research published by Markus Hadwiger.


Computer Graphics Forum | 2005

Real-Time Ray-Casting and Advanced Shading of Discrete Isosurfaces

Markus Hadwiger; Christian Sigg; Henning Scharsach; Katja Bühler; Markus H. Gross

This paper presents a real-time rendering pipeline for implicit surfaces defined by a regular volumetric grid of samples. We use a ray-casting approach on current graphics hardware to perform a direct rendering of the isosurface. A two-level hierarchical representation of the regular grid is employed to allow object-order and image-order empty space skipping and circumvent memory limitations of graphics hardware. Adaptive sampling and iterative refinement lead to high-quality ray/surface intersections. All shading operations are deferred to image space, making their computational effort independent of the size of the input data. A continuous third-order reconstruction filter allows on-the-fly evaluation of smooth normals and extrinsic curvatures at any point on the surface without interpolating data computed at grid points. With these local shape descriptors, it is possible to perform advanced shading using high-quality lighting and non-photorealistic effects in real-time.
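
The adaptive sampling and iterative refinement of the ray/surface intersection can be sketched compactly. The following Python fragment is a minimal CPU illustration under assumed simplifications (a 1D sampling function along the ray and a fixed bisection count stand in for the paper's GPU implementation):

import numpy as np

def refine_intersection(sample, t0, t1, isovalue, iterations=4):
    """Bisect the interval [t0, t1] that brackets an isosurface crossing."""
    f0 = sample(t0) - isovalue
    for _ in range(iterations):
        tm = 0.5 * (t0 + t1)
        fm = sample(tm) - isovalue
        if f0 * fm <= 0.0:    # crossing lies in the first half
            t1 = tm
        else:                 # crossing lies in the second half
            t0, f0 = tm, fm
    return 0.5 * (t0 + t1)

def raycast_isosurface(sample, t_start, t_end, step, isovalue):
    """March along the ray; on a sign change, refine and return the hit."""
    t_prev, f_prev = t_start, sample(t_start) - isovalue
    t = t_start + step
    while t <= t_end:
        f = sample(t) - isovalue
        if f_prev * f <= 0.0:  # value crossed the isovalue
            return refine_intersection(sample, t_prev, t, isovalue)
        t_prev, f_prev = t, f
        t += step
    return None  # no intersection

# Toy example: a tent-shaped profile along the ray, crossing 0.5 at t = 1.5.
print(raycast_isosurface(lambda t: 1.0 - abs(t - 2.0), 0.0, 4.0, 0.3, 0.5))

Bisection halves the bracketing interval each iteration, so a handful of refinement steps already yields sub-step accuracy without shrinking the marching step size.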


IEEE Visualization | 2003

High-quality two-level volume rendering of segmented data sets on consumer graphics hardware

Markus Hadwiger; Christoph Berger; Helwig Hauser

One of the most important goals in volume rendering is to be able to visually separate and selectively enable specific objects of interest contained in a single volumetric data set, which can be approached by using explicit segmentation information. We show how segmented data sets can be rendered interactively on current consumer graphics hardware with high image quality and pixel-resolution filtering of object boundaries. In order to enhance object perception, we employ different levels of object distinction. First, each object can be assigned an individual transfer function, multiple of which can be applied in a single rendering pass. Second, different rendering modes such as direct volume rendering, iso-surfacing, and non-photorealistic techniques can be selected for each object. A minimal number of rendering passes is achieved by processing sets of objects that share the same rendering mode in a single pass. Third, local compositing modes such as alpha blending and MIP can be selected for each object in addition to a single global mode, thus enabling high-quality two-level volume rendering on GPUs.
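
The idea of per-object local compositing can be illustrated with a small sketch. The Python below is a hedged single-ray example (the object IDs, transfer functions, and the rule for combining the MIP result with the blended color are illustrative assumptions, not the paper's multi-pass GPU scheme):

import numpy as np

# Per-object local compositing mode: 'dvr' (front-to-back alpha) or 'mip'.
MODES = {0: 'dvr', 1: 'mip'}

def transfer(obj_id, value):
    """Illustrative per-object transfer function: value -> (color, opacity)."""
    if obj_id == 0:
        return np.array([1.0, 0.2, 0.2]) * value, 0.4 * value
    return np.array([0.2, 0.2, 1.0]) * value, value

def composite_ray(values, object_ids):
    color = np.zeros(3)
    alpha = 0.0
    mip_value = 0.0
    for v, obj in zip(values, object_ids):
        if MODES[obj] == 'mip':
            mip_value = max(mip_value, v)     # maximum intensity projection
        else:
            c, a = transfer(obj, v)
            color += (1.0 - alpha) * c * a    # front-to-back alpha blending
            alpha += (1.0 - alpha) * a
    # Blend the MIP object's contribution over the DVR result (one of several
    # possible ways to combine the two local compositing results globally).
    mip_c, mip_a = transfer(1, mip_value)
    return color * (1.0 - mip_a) + mip_c * mip_a

print(composite_ray([0.2, 0.8, 0.5, 0.9], [0, 1, 0, 1]))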


IEEE Transactions on Visualization and Computer Graphics | 2007

High-Quality Multimodal Volume Rendering for Preoperative Planning of Neurosurgical Interventions

Johanna Beyer; Markus Hadwiger; Stefan Wolfsberger; Katja Bühler

Surgical approaches tailored to an individual patient's anatomy and pathology have become standard in neurosurgery. Precise preoperative planning of these procedures, however, is necessary to achieve an optimal therapeutic effect. Therefore, multiple radiological imaging modalities are used prior to surgery to delineate the patient's anatomy, neurological function, and metabolic processes. Developing a three-dimensional perception of the surgical approach, however, is traditionally still done by mentally fusing multiple modalities. Concurrent 3D visualization of these datasets can, therefore, improve the planning process significantly. In this paper we introduce an application for planning of individual neurosurgical approaches with high-quality interactive multimodal volume rendering. The application consists of three main modules which allow the surgeon to (1) plan the optimal skin incision and opening of the skull tailored to the underlying pathology; (2) visualize superficial brain anatomy, function and metabolism; and (3) plan the patient-specific approach for surgery of deep-seated lesions. The visualization is based on direct multi-volume raycasting on graphics hardware, where multiple volumes from different modalities can be displayed concurrently at interactive frame rates. Graphics memory limitations are avoided by performing raycasting on bricked volumes. For preprocessing tasks such as registration or segmentation, the visualization modules are integrated into a larger framework, thus supporting the entire workflow of preoperative planning.
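
The core sampling step, fetching several registered modalities at the same ray position and combining them before compositing, can be sketched as follows. This Python fragment is an illustrative simplification (nearest-neighbour lookups and a fixed weighted blend stand in for the paper's GPU multi-volume raycaster on bricked volumes):

import numpy as np

def sample_nearest(volume, p):
    """Nearest-neighbour lookup at a point in voxel coordinates."""
    i = np.clip(np.round(p).astype(int), 0, np.array(volume.shape) - 1)
    return volume[tuple(i)]

def multimodal_raycast(vol_a, vol_b, origin, direction, steps, dt, w=0.5):
    """March one ray through two registered volumes, blending per sample."""
    color, alpha = 0.0, 0.0
    p = np.asarray(origin, float)
    d = np.asarray(direction, float) * dt
    for _ in range(steps):
        va = sample_nearest(vol_a, p)     # e.g. a CT sample
        vb = sample_nearest(vol_b, p)     # e.g. a registered MR sample
        v = w * va + (1.0 - w) * vb       # illustrative blending rule
        a = min(1.0, 0.1 * v)             # toy opacity mapping
        color += (1.0 - alpha) * v * a    # front-to-back compositing
        alpha += (1.0 - alpha) * a
        p += d
    return color, alpha

ct = np.random.rand(16, 16, 16)
mr = np.random.rand(16, 16, 16)
print(multimodal_raycast(ct, mr, (0, 0, 0), np.ones(3) / np.sqrt(3), 20, 0.7))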


International Conference on Computer Graphics and Interactive Techniques | 2008

Advanced illumination techniques for GPU-based volume raycasting

Markus Hadwiger; Patric Ljung; Christof Rezk Salama; Timo Ropinski

Volume raycasting techniques are important for both visual arts and visualization. They allow an efficient generation of visual effects and the visualization of scientific data obtained by tomography or numerical simulation. Thanks to their flexibility, experts agree that GPU-based raycasting is the state-of-the-art technique for interactive volume rendering. It will most likely replace existing slice-based techniques in the near future. Volume rendering techniques are also effective for the direct rendering of implicit surfaces used for soft body animation and constructive solid geometry. The lecture starts off with an in-depth introduction to the concepts behind GPU-based ray-casting to provide a common base for the following parts. The focus of this course is on advanced illumination techniques which approximate the physically-based light transport more convincingly. Such techniques include interactive implementations of soft and hard shadows, ambient occlusion, and simple Monte Carlo-based approaches to global illumination, including translucency and scattering. With the proposed techniques, users are able to interactively create convincing images from volumetric data whose visual quality goes far beyond traditional approaches. The optical properties in participating media are defined using the phase function. Many approximations to the physically based light transport applied for rendering natural phenomena such as clouds or smoke assume a rather homogeneous phase function model. For rendering volumetric scans, on the other hand, different phase function models are required to account for both surface-like structures and fuzzy boundaries in the data. Using volume rendering techniques, artists who create medical visualizations for science magazines can now work on tomographic scans directly, without having to fall back on creating polygonal models of anatomical structures.
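
The phase function mentioned above is commonly modelled with the Henyey-Greenstein function, which the following short Python sketch evaluates (my own illustration of the standard formula, not code from the course):

import numpy as np

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function. The anisotropy parameter g in
    (-1, 1) controls scattering: g > 0 forward, g < 0 backward, g = 0
    isotropic."""
    return (1.0 - g * g) / (
        4.0 * np.pi * (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5)

# Forward-scattering media (e.g. g near 0.9, as often used for clouds)
# peak sharply around cos_theta = 1; g = 0 is uniform over the sphere.
for g in (0.0, 0.5, 0.9):
    print(g, henyey_greenstein(1.0, g), henyey_greenstein(-1.0, g))

Varying g interpolates between backward, isotropic, and forward scattering, which is one reason a single phase function model rarely fits both surface-like structures and fuzzy boundaries in volumetric scans.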


International Conference on Computer Graphics and Interactive Techniques | 2006

GPU-accelerated deep shadow maps for direct volume rendering

Markus Hadwiger; Andrea Kratz; Christian Sigg; Katja Bühler

Deep shadow maps unify the computation of volumetric and geometric shadows. For each pixel in the shadow map, a fractional visibility function is sampled, pre-filtered, and compressed as a piecewise linear function. However, the original implementation targets software-based off-line rendering. Similar previous algorithms on GPUs focus on geometric shadows and lose many important benefits of the original concept. We focus on shadows for interactive direct volume rendering, where shadow algorithms currently either compute additional per-voxel shadow data, or employ half-angle slicing to generate shadows during rendering. We adapt the original concept of deep shadow maps to volume ray-casting on GPUs, and show that it can provide anti-aliased high-quality shadows at interactive rates. Ray-casting is used for both generation of the shadow map data structure and actual rendering. High frequencies in the visibility function are captured by a pre-computed lookup table for piecewise linear segments. Direct volume rendering is performed with an additional deep shadow map lookup for each sample. Overall, we achieve interactive high-quality volume ray-casting with accurate shadows. To conclude, we briefly describe how semi-transparent geometry such as hair could be integrated as well, provided that rasterization can write to arbitrary locations in a texture. This would be a major step toward full deep shadow map functionality.
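
The compression of the visibility function into piecewise linear segments follows the original deep shadow map idea; a simplified greedy variant can be sketched in Python as follows (the error criterion and the sampled visibility curve are illustrative assumptions):

import numpy as np

def compress_visibility(t, v, eps=0.01):
    """Greedily approximate samples (t, v) with piecewise linear segments:
    extend each segment while every skipped sample stays within eps of the
    chord from the segment start to the candidate endpoint."""
    knots = [0]
    i = 0
    while i < len(t) - 1:
        j = i + 1
        while j + 1 < len(t):
            slope = (v[j + 1] - v[i]) / (t[j + 1] - t[i])
            interp = v[i] + slope * (t[i + 1:j + 1] - t[i])
            if np.max(np.abs(interp - v[i + 1:j + 1])) > eps:
                break
            j += 1
        knots.append(j)
        i = j
    return [(t[k], v[k]) for k in knots]

# A visibility function falling off through a semi-transparent medium.
t = np.linspace(0.0, 1.0, 200)
v = np.exp(-3.0 * t)
segments = compress_visibility(t, v, eps=0.005)
print(len(segments), "knots instead of", len(t), "samples")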


IEEE Computer Graphics and Applications | 2010

Ssecrett and NeuroTrace: Interactive Visualization and Analysis Tools for Large-Scale Neuroscience Data Sets

Won-Ki Jeong; Johanna Beyer; Markus Hadwiger; Rusty Blue; Amelio Vázquez-Reina; R. Clay Reid; Jeff W. Lichtman; Hanspeter Pfister

Data sets imaged with modern electron microscopes can range from tens of terabytes to about one petabyte. Two new tools, Ssecrett and NeuroTrace, support interactive exploration and analysis of large-scale optical- and electron-microscopy images to help scientists reconstruct complex neural circuits of the mammalian nervous system.


IEEE Transactions on Visualization and Computer Graphics | 2009

Scalable and Interactive Segmentation and Visualization of Neural Processes in EM Datasets

Won Ki Jeong; Johanna Beyer; Markus Hadwiger; Amelio Vazquez; Hanspeter Pfister; Ross T. Whitaker

Recent advances in scanning technology provide high resolution EM (electron microscopy) datasets that allow neuroscientists to reconstruct complex neural connections in a nervous system. However, due to the enormous size and complexity of the resulting data, segmentation and visualization of neural processes in EM data is usually a difficult and very time-consuming task. In this paper, we present NeuroTrace, a novel EM volume segmentation and visualization system that consists of two parts: a semi-automatic multiphase level set segmentation with 3D tracking for reconstruction of neural processes, and a specialized volume rendering approach for visualization of EM volumes. It employs view-dependent on-demand filtering and evaluation of a local histogram edge metric, as well as on-the-fly interpolation and ray-casting of implicit surfaces for segmented neural structures. Both methods are implemented on the GPU for interactive performance. NeuroTrace is designed to be scalable to large datasets and data-parallel hardware architectures. A comparison of NeuroTrace with a commonly used manual EM segmentation tool shows that our interactive workflow is faster and easier to use for the reconstruction of complex neural processes.
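
A simplified 1D reconstruction of a local histogram edge metric is sketched below: comparing the intensity histograms of the two half-neighbourhoods around each sample yields a response that peaks at membrane-like boundaries. The neighbourhood radius, bin count, and L1 distance here are assumptions, not the paper's exact metric:

import numpy as np

def local_histogram_edge(signal, radius=8, bins=16):
    """For each position, compare histograms of the left and right
    half-neighbourhoods; large distances indicate an edge."""
    edges = np.zeros(len(signal))
    for i in range(radius, len(signal) - radius):
        h_l, _ = np.histogram(signal[i - radius:i], bins=bins,
                              range=(0.0, 1.0), density=True)
        h_r, _ = np.histogram(signal[i:i + radius], bins=bins,
                              range=(0.0, 1.0), density=True)
        edges[i] = 0.5 * np.abs(h_l - h_r).sum() / bins  # L1 distance
    return edges

# A step between two noisy regions produces a peak near the boundary.
x = np.concatenate([np.random.normal(0.3, 0.02, 100),
                    np.random.normal(0.7, 0.02, 100)]).clip(0.0, 1.0)
print(np.argmax(local_histogram_edge(x)))  # roughly index 100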


IEEE Transactions on Visualization and Computer Graphics | 2012

Interactive Volume Exploration of Petascale Microscopy Data Streams Using a Visualization-Driven Virtual Memory Approach

Markus Hadwiger; Johanna Beyer; Won-Ki Jeong; Hanspeter Pfister

This paper presents the first volume visualization system that scales to petascale volumes imaged as a continuous stream of high-resolution electron microscopy images. Our architecture scales to dense, anisotropic petascale volumes because it: (1) decouples construction of the 3D multi-resolution representation required for visualization from data acquisition, and (2) decouples sample access time during ray-casting from the size of the multi-resolution hierarchy. Our system is designed around a scalable multi-resolution virtual memory architecture that handles missing data naturally, does not pre-compute any 3D multi-resolution representation such as an octree, and can accept a constant stream of 2D image tiles from the microscopes. A novelty of our system design is that it is visualization-driven: we restrict most computations to the visible volume data. Leveraging the virtual memory architecture, missing data are detected during volume ray-casting as cache misses, which are propagated backwards for on-demand out-of-core processing. 3D blocks of volume data are only constructed from 2D microscope image tiles when they have actually been accessed during ray-casting. We extensively evaluate our system design choices with respect to scalability and performance, compare to previous best-of-breed systems, and illustrate the effectiveness of our system for real microscopy data from neuroscience.
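
The central mechanism, resolving a sample through a page table and recording a cache miss when the block is not resident, can be illustrated conceptually. The Python sketch below is a toy single-level version (block size, table layout, and the miss set are assumptions; the paper describes a multi-level, GPU-resident structure):

import numpy as np

BLOCK = 32  # voxels per block edge (illustrative)

class VirtualVolume:
    """Toy single-level page table mapping block coordinates to resident
    blocks; unresident accesses are logged as cache misses."""
    def __init__(self):
        self.resident = {}   # (bx, by, bz) -> np.ndarray block
        self.misses = set()  # blocks requested but not yet resident

    def sample(self, x, y, z):
        key = (x // BLOCK, y // BLOCK, z // BLOCK)
        block = self.resident.get(key)
        if block is None:
            self.misses.add(key)  # propagate miss for on-demand paging
            return 0.0            # substitute value until the data arrives
        return block[x % BLOCK, y % BLOCK, z % BLOCK]

    def page_in(self, key, data):
        """Called by the out-of-core layer once a block is constructed,
        e.g. from the 2D microscope image tiles that cover it."""
        self.resident[key] = data
        self.misses.discard(key)

vol = VirtualVolume()
print(vol.sample(40, 5, 70), vol.misses)       # miss on block (1, 0, 2)
vol.page_in((1, 0, 2), np.ones((BLOCK,) * 3))
print(vol.sample(40, 5, 70))                   # now resident: 1.0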


IEEE Transactions on Visualization and Computer Graphics | 2008

Interactive Volume Exploration for Feature Detection and Quantification in Industrial CT Data

Markus Hadwiger; Laura Fritz; Christof Rezk-Salama; Thomas Höllt; Georg Geier; Thomas Pabel

This paper presents a novel method for interactive exploration of industrial CT volumes such as cast metal parts, with the goal of interactively detecting, classifying, and quantifying features using a visualization-driven approach. The standard approach for defect detection builds on region growing, which requires manually tuning parameters such as target ranges for density and size, variance, as well as the specification of seed points. If the results are not satisfactory, region growing must be performed again with different parameters. In contrast, our method allows interactive exploration of the parameter space, completely separated from region growing in an unattended pre-processing stage. The pre-computed feature volume tracks a feature size curve for each voxel over time, which is identified with the main region growing parameter such as variance. A novel 3D transfer function domain over (density, feature size, time) allows for interactive exploration of feature classes. Features and feature size curves can also be explored individually, which helps with transfer function specification and allows coloring individual features and disabling features resulting from CT artifacts. Based on the classification obtained through exploration, the classified features can be quantified immediately.
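
Conceptually, classification then reduces to a lookup in the 3D transfer function domain over (density, feature size, time). The sketch below is an illustrative Python reduction (the box-shaped class regions and the per-voxel feature size curve are made-up stand-ins for the pre-computed feature volume):

# A feature class as an axis-aligned box in (density, feature size, time),
# one simple form a 3D transfer function region could take.
class FeatureClass:
    def __init__(self, density, size, time, color):
        self.density, self.size, self.time = density, size, time
        self.color = color

    def contains(self, d, s, t):
        return (self.density[0] <= d <= self.density[1] and
                self.size[0] <= s <= self.size[1] and
                self.time[0] <= t <= self.time[1])

def classify(density, size_curve, classes):
    """size_curve[t] is the voxel's feature size at region-growing 'time' t
    (the pre-computed feature volume); return the first matching class."""
    for t, s in enumerate(size_curve):
        for c in classes:
            if c.contains(density, s, t):
                return c.color
    return None  # unclassified, e.g. background or a CT artifact

pores = FeatureClass(density=(0.0, 0.3), size=(5, 50), time=(0, 10),
                     color='red')
print(classify(0.1, [2, 4, 8, 20, 60], [pores]))  # 'red' at t = 2 (size 8)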


IEEE VGTC Conference on Visualization | 2006

Perspective isosurface and direct volume rendering for virtual endoscopy applications

Henning Scharsach; Markus Hadwiger; André Neubauer; Stefan Wolfsberger; Katja Bühler

Virtual endoscopy has proven to be a very powerful tool in endoscopic surgery. However, most virtual endoscopy systems are restricted to rendering isosurfaces or require segmentation in order to visualize additional objects behind occluding tissue. This paper presents a system for real-time perspective direct volume and isosurface rendering, which makes it possible to simultaneously visualize both the tissue of interest and everything behind it. Large volume data can be viewed seamlessly from inside or outside the volume without any pre-computation or segmentation. Our system uses a novel ray-casting pipeline for GPUs that has been optimized for the requirements of virtual endoscopy and also allows easy incorporation of auxiliary geometry, e.g., for displaying parts of the endoscopic device, pointers, or grid lines for orientation purposes. We present three main applications of this system and the underlying ray-casting algorithm. Although our ray-casting approach is of general applicability, we have specifically applied it to virtual colonoscopy, virtual angioscopy, and virtual pituitary surgery.
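
The combination of first-hit isosurface shading with direct volume rendering of everything behind it along the same ray can be sketched as follows; this Python fragment is a deliberately simplified illustration (the opacity mapping and surface 'shading' are placeholders):

import numpy as np

def endoscopic_ray(sample, t0, t1, step, isovalue, surface_alpha=0.6):
    """First-hit isosurface shading blended with direct volume rendering
    of everything behind it along the same ray."""
    color, alpha = 0.0, 0.0
    hit = False
    t = t0
    while t <= t1 and alpha < 0.99:  # early ray termination when opaque
        v = sample(t)
        if not hit and v >= isovalue:
            hit = True                                    # first crossing
            color += (1.0 - alpha) * surface_alpha * 1.0  # 'shaded' surface
            alpha += (1.0 - alpha) * surface_alpha
        else:
            a = min(1.0, 0.05 * v)          # toy transfer function
            color += (1.0 - alpha) * a * v  # DVR in front of and behind it
            alpha += (1.0 - alpha) * a
        t += step
    return color

# Rays originate at the virtual endoscope's camera inside the volume.
print(endoscopic_ray(lambda t: np.sin(t) ** 2, 0.0, 6.0, 0.1, 0.8))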

Collaboration


Dive into Markus Hadwiger's collaborations.

Top Co-Authors

Thomas Höllt

King Abdullah University of Science and Technology

Ali K. Al-Awami

King Abdullah University of Science and Technology
