Armin Kanitsar
Agfa-Gevaert
Publications
Featured research published by Armin Kanitsar.
ieee pacific visualization symposium | 2010
Martin Haidacher; Daniel Patel; Stefan Bruckner; Armin Kanitsar; M. Eduard Gröller
It is a difficult task to design transfer functions for noisy data. In traditional transfer-function spaces, the data values of different materials overlap. In this paper we introduce a novel statistical transfer-function space which, in the presence of noise, separates different materials in volume data sets. Our method adaptively estimates statistical properties, i.e., the mean value and the standard deviation, of the data values in the neighborhood of each sample point. These properties are used to define a transfer-function space which enables the distinction of different materials. Additionally, we present a novel approach for interacting with our new transfer-function space which enables the design of transfer functions based on statistical properties. Furthermore, we demonstrate that statistical information can be applied to enhance visual appearance in the rendering process. We compare the new method with 1D, 2D, and LH transfer functions to demonstrate its usefulness.
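The per-sample statistics described above can be sketched in a few lines. The function name and the fixed cubic neighborhood below are illustrative assumptions; the paper estimates the neighborhood adaptively per sample:

```python
import numpy as np

def local_mean_std(volume, radius=1):
    """Per-voxel mean and standard deviation over a (2r+1)^3 neighborhood.

    Toy fixed-size estimator (the paper adapts the neighborhood per sample).
    np.roll wraps around at the borders, so edge voxels are only approximate.
    """
    vol = np.asarray(volume, dtype=np.float64)
    acc = np.zeros_like(vol)     # running sum of values
    acc_sq = np.zeros_like(vol)  # running sum of squared values
    n = 0
    for dz in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                shifted = np.roll(vol, (dz, dy, dx), axis=(0, 1, 2))
                acc += shifted
                acc_sq += shifted * shifted
                n += 1
    mean = acc / n
    var = np.maximum(acc_sq / n - mean * mean, 0.0)  # clamp rounding error
    return mean, np.sqrt(var)
```

A 2D histogram over (mean, standard deviation) then serves as the transfer-function domain: noisy samples of one material cluster around that material's true mean instead of spreading out along the value axis.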
computer assisted radiology and surgery | 2010
Florian Schulze; Katja Bühler; André Neubauer; Armin Kanitsar; Leslie Holton; Stefan Wolfsberger
Purpose: Virtual endoscopy has already proven its benefit for pre-operative planning of endoscopic pituitary surgery. The translation of such a system into the operating room is a logical consequence, but only a few general intra-operative image-guided systems providing virtual endoscopic images have been proposed so far. A discussion of the related visualization and interaction problems occurring during sinus and pituitary surgery is still missing.
Methods: This paper aims at filling this gap and proposes a system that integrates an existing virtual endoscopy system, originally designed for pre-operative planning of pituitary surgery, with a professional intra-operative navigation system. The visualization and interaction possibilities of the pre-operative planning system have been extended to fulfill the special requirements placed on the system when it is used for intra-operative navigation of endonasal transsphenoidal pituitary surgery.
Results: The feasibility of the system has been successfully tested on 1 cadaver and 12 patients. The virtual endoscopic images were found useful (1) during the endonasal transsphenoidal approach in cases of anatomic variations and for the individually tailored opening of the sellar floor, and (2) during tumor resection for respecting the internal carotid artery. The visualization of hidden anatomical structures behind the bony walls of the sphenoid sinus during the sellar phase of the surgery was found most beneficial.
Discussion: According to our data, intra-operative virtual endoscopy provides additional anatomical information to the surgeon. By depicting individual anatomical variations in advance, it may add to the safety of this frequent neurosurgical procedure.
IEEE Transactions on Visualization and Computer Graphics | 2007
Peter Kohlmann; Stefan Bruckner; M. Eduard Gröller; Armin Kanitsar
Although real-time interactive volume rendering is available even for very large data sets, this visualization method is used quite rarely in clinical practice. We suspect this is because it is complicated and time-consuming to adjust the parameters to achieve meaningful results. The clinician has to take care of the appropriate viewpoint, zooming, transfer-function setup, clipping planes, and other parameters. Because of this, most often only 2D slices of the data set are examined. Our work introduces LiveSync, a new concept to synchronize 2D slice views and volumetric views of medical data sets. Through intuitive picking actions on the slice, the users define the anatomical structures they are interested in. The 3D volumetric view is updated automatically with the goal that the users are provided with expressive result images. To achieve this live synchronization we use a minimal set of derived information without the need for segmented data sets or data-specific pre-computations. The components we consider are the picked point, slice view zoom, patient orientation, viewpoint history, local object shape, and visibility. We introduce deformed viewing spheres which encode the viewpoint quality for the components. A combination of these deformed viewing spheres is used to estimate a good viewpoint. Our system provides the physician with synchronized views which help to gain deeper insight into the medical data with minimal user interaction.
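The viewpoint-estimation step can be illustrated with a minimal sketch. Everything here is a simplified assumption, not the paper's implementation: candidate directions are sampled on a discretized viewing sphere, each component contributes one quality score per direction, and the scores are combined as a weighted product:

```python
import numpy as np

def fibonacci_sphere(n):
    """n roughly uniform candidate viewing directions on the unit sphere."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i        # golden-angle spiral
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def combine_viewing_spheres(quality_maps, weights):
    """Weighted multiplicative combination of per-component viewpoint
    qualities; returns the index of the best candidate direction and the
    combined quality per direction (the 'deformed' sphere radii)."""
    combined = None
    for name, q in quality_maps.items():
        term = np.asarray(q, dtype=float) ** weights.get(name, 1.0)
        combined = term if combined is None else combined * term
    return int(np.argmax(combined)), combined
```

A multiplicative combination means any single component scoring near zero (e.g. an occluded viewpoint) vetoes that direction, which matches the intuition that all quality criteria should be reasonably satisfied at once.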
visual computing for biomedicine | 2008
Martin Haidacher; Stefan Bruckner; Armin Kanitsar; M. Eduard Gröller
Transfer functions are an essential part of volume visualization. In multimodal visualization at least two values exist at every sample point. Additionally, other parameters, such as gradient magnitude, are often retrieved for each sample point. Finding a good transfer function for this large number of parameters is challenging because of the complexity of the task. In this paper we present a general information-based approach for transfer function design in multimodal visualization which is independent of the modality types used. Based on information theory, the complex multi-dimensional transfer-function space is fused to allow utilization of a well-known 2D transfer function with a single value and gradient magnitude as parameters. Additionally, a quantity is introduced which enables better separation of regions with complementary information. The benefit of the new method in contrast to other techniques is a transfer-function space which is easy to understand and which provides a better separation of different tissues. The usability of the new approach is demonstrated on examples from different modalities.
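A toy stand-in can convey the flavor of information-based fusion (the paper's method works on probabilities from the joint histogram and is more involved; the function below and its weighting rule are illustrative assumptions): at each sample, the modality whose value is rarer, i.e. carries higher self-information -log p, receives the larger weight in the fused value.

```python
import numpy as np

def fuse_modalities(a, b, bins=32):
    """Toy information-weighted fusion of two co-registered modality values.

    Self-information of each sample's value is estimated from a per-modality
    histogram; the fused value is a convex combination weighted by it.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)

    def self_info(x):
        hist, edges = np.histogram(x, bins=bins)
        p = hist / hist.sum()                      # per-bin probability
        idx = np.digitize(x, edges[1:-1])          # bin index 0..bins-1
        return -np.log(np.maximum(p[idx], 1e-12))  # -log p, always >= 0

    ia, ib = self_info(a), self_info(b)
    w = ia / (ia + ib + 1e-12)                     # weight in [0, 1]
    return w * a + (1.0 - w) * b
```

Because the weight lies in [0, 1], each fused sample stays between the two input values; the rarer (more informative) value simply dominates.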
ieee pacific visualization symposium | 2009
Peter Kohlmann; Stefan Bruckner; Armin Kanitsar; M. Eduard Gröller
This paper presents a novel method for the interactive identification of contextual interest points within volumetric data by picking on a direct volume rendered image. In clinical diagnostics the points of interest are often located in the center of anatomical structures. In order to derive the volumetric position which allows a convenient examination of the intended structure, the system automatically extracts contextual meta information from the DICOM (Digital Imaging and Communications in Medicine) images and the setup of the medical workstation. Along the viewing ray of a volumetric picking, the ray profile is analyzed for structures which are similar to predefined templates from a knowledge base. We demonstrate with our results that the obtained position in 3D can be utilized to highlight a structure in 2D slice views, to interactively calculate centerlines of tubular objects, or to place labels at contextually defined volumetric positions.
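The ray-profile analysis can be sketched as one-dimensional template matching. This is a simplified stand-in for the paper's knowledge-base templates; the function name, the normalized cross-correlation score, and the center-of-window convention are all illustrative assumptions:

```python
import numpy as np

def pick_along_ray(profile, template):
    """Slide a template over the sampled ray profile and return the sample
    index at the center of the best-matching window (normalized
    cross-correlation as the similarity score)."""
    profile = np.asarray(profile, dtype=float)
    template = np.asarray(template, dtype=float)
    t = template - template.mean()
    tn = np.linalg.norm(t) + 1e-12
    best_i, best_s = 0, -np.inf
    for i in range(len(profile) - len(template) + 1):
        w = profile[i:i + len(template)]
        wc = w - w.mean()
        s = float(wc @ t) / ((np.linalg.norm(wc) + 1e-12) * tn)
        if s > best_s:
            best_i, best_s = i, s
    # center of the best window ~ center of the structure along the ray
    return best_i + len(template) // 2
```

With a template shaped like, say, a vessel's density bump, the returned sample index lands near the structure's center along the ray, which is exactly the position needed to highlight the structure in slice views or to seed a centerline.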
international symposium on biomedical imaging | 2008
Stefan Bruckner; Peter Kohlmann; Armin Kanitsar; M. Eduard Gröller
One of the main obstacles in integrating 3D volume visualization in the clinical workflow is the time-consuming process of adjusting parameters such as viewpoint, transfer functions, and clipping planes required to generate a diagnostically relevant image. Current applications therefore make scarce use of volume rendering and instead primarily employ 2D views generated through standard techniques such as multi-planar reconstruction (MPR). However, in many cases 3D renditions can supply additional useful information. This paper discusses ongoing work which aims to improve the integration of 3D visualization into the diagnostic workflow by automatically generating meaningful renditions based on minimal user interaction. A method for automatically generating 3D views for structures in 2D slices based on a single picking interaction is presented.
Archive | 2007
Rainer Wegenkittl; Donald K. Dennison; John J. Potwarka; Lukas Mroz; Armin Kanitsar; Gunter Zeilinger
graphics interface | 2008
Peter Kohlmann; Stefan Bruckner; Armin Kanitsar; M. Eduard Gröller