Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Jürgen Wallner is active.

Publication


Featured research published by Jürgen Wallner.


PLOS ONE | 2017

HTC Vive MeVisLab integration via OpenVR for medical applications

Jan Egger; Markus Gall; Jürgen Wallner; Pedro Boechat; Alexander Hann; Xing Li; Xiaojun Chen; Dieter Schmalstieg

Virtual Reality, an immersive technology that replicates an environment via computer-simulated reality, gets a lot of attention in the entertainment industry. However, VR also has great potential in other areas, such as the medical domain. Examples are intervention planning, training and simulation. This is especially useful in medical operations where an aesthetic outcome is important, such as facial surgeries. Alas, importing medical data into Virtual Reality devices is not necessarily trivial, in particular when a direct connection to a proprietary application is desired. Moreover, most researchers do not build their medical applications from scratch, but rather leverage platforms like MeVisLab, MITK, OsiriX or 3D Slicer. These platforms have in common that they use libraries like ITK and VTK, and provide a convenient graphical interface. However, ITK and VTK do not support Virtual Reality directly. In this study, the usage of a Virtual Reality device for medical data under the MeVisLab platform is presented. The OpenVR library is integrated into the MeVisLab platform, allowing a direct and uncomplicated usage of the head-mounted display HTC Vive inside the MeVisLab platform. Medical data coming from other MeVisLab modules can be connected directly per drag-and-drop to the Virtual Reality module, rendering the data inside the HTC Vive for immersive virtual reality inspection.


Journal of Cranio-maxillofacial Surgery | 2014

Minimal invasive biopsy of intraconal expansion by PET/CT/MRI image-guided navigation: A new method

Knut Reinbacher; Mauro Pau; Jürgen Wallner; Wolfgang Zemann; Angelika Klein; Christian Gstettner; Reingard Aigner; M. Feichtinger

Intraorbital tumours are often undetected for a long period and may lead to compression of the optic nerve and loss of vision. Although CT, MRI and ultrasound can help in determining the probable diagnosis, most orbital tumours are only diagnosed by surgical biopsy. In intraconal lesions this may prove especially difficult, as the expansions are situated next to sensitive anatomical structures (eye bulb, optic nerve). In search of a minimally invasive access to the intraconal region, we describe a method of three-dimensional, image-guided biopsy of orbital tumours using a combined technique of hardware fusion between (18)F-FDG positron emission tomography ((18)F-FDG PET), magnetic resonance imaging (MRI) and computed tomography (CT). METHOD AND MATERIAL We present 6 patients with a total of 7 intraorbital lesions, all of them suffering from diplopia and/or exophthalmos. There were 3 female and 3 male patients, aged from 20 to 75 years. One patient showed beginning loss of vision; another had lesions in both orbits. The decision to obtain image-guided needle biopsies for treatment planning was discussed and made at an interdisciplinary board comprising several sub-specialities (ophthalmology, neurosurgery, maxillofacial surgery, ENT, plastic surgery). All patients underwent 3D imaging preoperatively ((18)F-FDG PET/CT or (18)F-FDG PET/CT plus MRI). Data were transferred to a 3D navigation system. Access to the lesions was planned preoperatively on a workstation monitor. Biopsy needles were then calibrated intraoperatively, and all patients underwent three-dimensional image-guided needle biopsies under general anaesthesia. RESULTS 7 biopsies were performed. The histologic subtype was idiopathic orbital inflammation in 2 lesions, lymphoma in 2, Merkel cell carcinoma in 1, hamartoma in 1 and malignant melanoma in 1. The different pathologies were subsequently treated according to the current state of the art. In cases where surgical removal of the lesion was performed, the histological diagnosis was confirmed in all cases. CONCLUSION There is a wide range of possible treatment modalities for orbital tumours, depending on the nature of the lesion. Histological diagnosis is mandatory to select the proper management and operation. The presented method allows minimally invasive biopsy even in deep intraconal lesions, enabling the surgeon to spare critical anatomical structures. Vascular lesions such as cavernous haemangioma, tumours of the lacrimal gland or dermoid cysts present a contraindication and have to be excluded.


Journal of Cranio-maxillofacial Surgery | 2012

Three dimensional comparative measurement of polyurethane milled skull models based on CT and MRI data sets.

Knut Reinbacher; Jürgen Wallner; H. Kärcher; Mauro Pau; Franz Quehenberger; Matthias Feichtinger

OBJECTIVE Due to the increase in the number and complexity of surgical procedures available to craniomaxillofacial surgeons, allied to the rapid progress of technological developments, the use and production of 3D models has become important, especially for planning complex cases. The radiation exposure of additional CT-based examinations is always subject to debate, so the feasibility of producing 3D models for surgical planning based on MRI imaging has been raised. MATERIAL AND METHODS 12 male and 3 female patients (n=15) between 47 and 84 years of age (mean age=65) were selected in a prospective study. Both magnetic resonance and computed tomography data sets of the facial bones were collected. Two milled models per patient were prepared: one based on the MRI scan and one based on the CT scan. The milled models were compared in a coordinative surveying procedure across 7 representative distances using a tentative measurement method. RESULTS Difference values between CT- and MRI-based models ranged from 0.1 mm to 5.9 mm. On average, MRI-based models were smaller by 0.381 mm (SD 1.176 mm) than the CT-based models. The accuracy of models based on MRI data was similar to that of models based on CT data. MRI-based three-dimensional milled models provide precise structural accuracy.
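The reported comparison (mean difference 0.381 mm, SD 1.176 mm) reduces to paired difference statistics over the measured distances. A minimal sketch in Python/NumPy, with the measurement values being illustrative placeholders rather than data from the study:

```python
import numpy as np

def paired_difference_stats(mri_mm, ct_mm):
    """Mean and sample standard deviation of MRI-minus-CT distance
    differences (in mm), as used to compare the two milled models.
    A negative mean indicates the MRI-based model is smaller."""
    d = np.asarray(mri_mm, dtype=float) - np.asarray(ct_mm, dtype=float)
    return d.mean(), d.std(ddof=1)
```

With `ddof=1` the function returns the sample SD, which is the convention for a small set of paired measurements like the 7 distances per patient.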


Journal of Cranio-maxillofacial Surgery | 2016

Use of a modified high submandibular approach to treat condylar base fractures: Experience with 44 consecutive cases treated in a single institution

Mauro Pau; Kawe Navisany; Knut Reinbacher; Tomislav Zrnc; Jürgen Wallner; Katja Schwenzer-Zimmerer

PURPOSE The aim of this article is to present our experience treating fractures of the condylar base with a modification of the high submandibular approach (HSA). MATERIALS AND METHODS Between June 2012 and April 2015, 44 fractures of the condylar base were treated in the Department of Oral and Maxillofacial Surgery of the Medical Hospital of Graz using the modified HSA. RESULTS We did not observe any damage (even transient) to the facial nerve or any complication related to violation of the parotid capsule (such as a salivary fistula, Frey syndrome, or a sialocele). CONCLUSIONS This approach provides good access to the condylar base, ensuring easier internal fixation, excellent protection of the facial nerve and parotid gland, and good cosmetic results.


International Journal of Oral and Maxillofacial Surgery | 2013

Surgically assisted rapid maxillary expansion: feasibility of not releasing the nasal septum

Knut Reinbacher; Jürgen Wallner; Mauro Pau; Matthias Feichtinger; H. Kärcher; Franz Quehenberger; Wolfgang Zemann

Surgically assisted rapid maxillary expansion (SARME) is commonly used to correct maxillary transverse deficiency. The aim of this study was to analyse the need for intraoperative liberation of the nasal septum during the procedure. SARME was performed in 25 patients by combining a lateral osteotomy with an inter-radicular maxillary osteotomy. The deviation of the nasal septum after SARME was evaluated by comparing measurements between radiologically defined landmarks on pre- and postoperative computed tomographic images. Two defined angles (angle I, between crista galli-symphysis mandibulae and crista galli-septum nasi; angle II, between maxillary plane and septum nasi) were measured based on four representative planes, and septal movement was analysed. The mean changes in angles I (0.03° ± 0.78°) and II (0.25° ± 1.04°) did not differ significantly from zero (p=0.87 and p=0.24, respectively). Observed variations and displacements were considered acceptable because they were not statistically significant. Intranasal airway function was also examined pre- and postoperatively to evaluate any loss of ventilation. The described surgical technique is a successful method of maxillary segment distraction. The authors found no compelling reason to release the nasal septum in the context of SARME.
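Both angle measurements reduce to the angle between two rays sharing a landmark apex (e.g. angle I, with the crista galli as apex and the symphysis and septum landmarks as ray endpoints). A small sketch, assuming 3D landmark coordinates have already been extracted from the CT images; the function name and coordinates are illustrative:

```python
import numpy as np

def angle_deg(apex, p1, p2):
    """Angle in degrees at `apex` between rays apex->p1 and apex->p2.
    Works for 2D or 3D landmark coordinates."""
    v1 = np.asarray(p1, dtype=float) - np.asarray(apex, dtype=float)
    v2 = np.asarray(p2, dtype=float) - np.asarray(apex, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip guards against rounding slightly outside [-1, 1] before arccos.
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

The pre/post change reported in the study would then be the difference of this angle computed on the pre- and postoperative landmark sets.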


Proceedings of SPIE | 2017

Integration of the HTC Vive into the medical platform MeVisLab

Jan Egger; Markus Gall; Jürgen Wallner; Pedro Boechat; Alexander Hann; Xing Li; Xiaojun Chen; Dieter Schmalstieg

Virtual Reality (VR) is an immersive technology that replicates an environment via computer-simulated reality. VR gets a lot of attention in computer games but also has great potential in other areas, like the medical domain. Examples are planning, simulations and training of medical interventions, such as facial surgeries where an aesthetic outcome is important. However, importing medical data into VR devices is not trivial, especially when a direct connection and visualization from your own application is needed. Furthermore, most researchers don't build their medical applications from scratch; rather, they use platforms like MeVisLab, Slicer or MITK. These platforms have in common that they build upon libraries like ITK and VTK, providing a more convenient graphical interface to them for the user. In this contribution, we demonstrate the usage of a VR device for medical data under MeVisLab. To this end, we integrated the OpenVR library into MeVisLab as a dedicated module. This enables the direct and uncomplicated usage of head-mounted displays, like the HTC Vive, under MeVisLab. In summary, medical data from other MeVisLab modules can be connected directly per drag-and-drop to our VR module and will be rendered inside the HTC Vive for immersive inspection.


Virchows Archiv | 2018

Non-sebaceous lymphadenoma of the lacrimal gland: first report of a new localization

Mauro Pau; Luka Brcic; Raja R. Seethala; Angelika Klein-Theyer; Marton Magyar; Knut Reinbacher; Michael Schweiger; Jürgen Wallner; Norbert Jakse

Tumors of the lacrimal gland are rare, with an incidence of less than 1 per 1,000,000 individuals per year [1]. They represent 6–12% of all orbital space-occupying lesions. Approximately 22–28% of these are primary epithelial tumors [2–4]. Since lacrimal gland tumors generally recapitulate the clinicopathologic features of their salivary gland counterparts, the World Health Organization (WHO) classification of salivary gland tumors can be applied to these tumors as well [5]. Fifty percent of primary epithelial tumors of the lacrimal gland are malignant. The most frequently encountered type is adenoid cystic carcinoma, which comprises approximately 20–30% of malignant neoplasms [6]. However, a variety of malignant tumor types that mirror their salivary gland counterparts have been described, including ductal carcinoma, acinic cell carcinoma, primary squamous cell carcinoma, mucoepidermoid carcinoma, oncocytic carcinoma, polymorphous low-grade adenocarcinoma, carcinoma ex pleomorphic adenoma, myoepithelial carcinoma, lymphoepithelial carcinoma, epithelial-myoepithelial carcinoma, cystadenocarcinoma, primary sebaceous adenocarcinoma, and basal cell adenocarcinoma. On the other hand, the most common benign tumor is pleomorphic adenoma, which comprises around 50% of all epithelial tumors [3, 4]. Other benign salivary-type tumors that have been described in the lacrimal gland are exceptionally rare [7] and include oncocytoma, cystadenoma, myoepithelioma, and Warthin tumor, also known as papillary cystadenoma lymphomatosum. Here, we present, to our knowledge, the first description of a case of non-sebaceous lymphadenoma of the lacrimal gland.


Proceedings of SPIE | 2018

Lower jawbone data generation for deep learning tools under MeVisLab

Birgit Pfarrkirchner; Christina Gsaxner; Lydia-Alice Lindner; Norbert Jakse; Jürgen Wallner; Dieter Schmalstieg; Jan Egger

In this contribution, the preparation of data for training deep learning networks that are used to segment the lower jawbone in computed tomography (CT) images is proposed. To train a neural network, we initially had only ten CT datasets of the head-neck region with a diverse number of image slices from the clinical routine of a maxillofacial surgery department. In these cases, facial surgeons segmented the lower jawbone in each image slice to generate the ground truth for the segmentation task. Since the number of available images was deemed insufficient to train a deep neural network efficiently, the data was augmented with geometric transformations and added noise. Flipping, rotating and scaling images, as well as the addition of various noise types (uniform, Gaussian and salt-and-pepper), were combined within a global macro module under MeVisLab. Our macro module can prepare data for general deep learning tasks in an automatic and flexible way. Augmentation methods for segmentation tasks can easily be incorporated.
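Outside MeVisLab, the listed augmentations (flipping, rotating, and the three noise types) can be sketched in a few lines of Python with NumPy and SciPy. Parameters such as the rotation range and noise amplitude are illustrative assumptions, not values from the paper, and scaling is omitted here because zooming changes the array shape and needs an extra crop/pad step:

```python
import numpy as np
from scipy import ndimage

def augment_pair(image, mask, rng):
    """Return augmented (image, mask) variants: flipped, rotated, and
    noisy copies. Geometric transforms are applied identically to the
    image and its ground-truth mask; noise is added to the image only."""
    out = []
    # Flipping (left-right mirror).
    out.append((np.fliplr(image), np.fliplr(mask)))
    # Rotating by a small random angle; order=0 keeps the mask binary.
    angle = rng.uniform(-15.0, 15.0)
    out.append((ndimage.rotate(image, angle, reshape=False, order=1),
                ndimage.rotate(mask, angle, reshape=False, order=0)))
    # Uniform noise.
    out.append((image + rng.uniform(-0.05, 0.05, image.shape), mask))
    # Gaussian noise.
    out.append((image + rng.normal(0.0, 0.05, image.shape), mask))
    # Salt-and-pepper noise: set ~2% of pixels to the min or max value.
    sp = image.copy()
    coords = rng.random(image.shape)
    sp[coords < 0.01] = image.min()
    sp[coords > 0.99] = image.max()
    out.append((sp, mask))
    return out
```

Each input slice thus yields five extra training pairs; chaining transforms (e.g. flip then rotate) multiplies the dataset further, which is how augmentation pipelines reach large enlargement factors.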


PLOS ONE | 2018

Clinical evaluation of semi-automatic open-source algorithmic software segmentation of the mandibular bone: Practical feasibility and assessment of a new course of action

Jürgen Wallner; Kerstin Hochegger; Xiaojun Chen; Irene Mischak; Knut Reinbacher; Mauro Pau; Tomislav Zrnc; Katja Schwenzer-Zimmerer; Wolfgang Zemann; Dieter Schmalstieg; Jan Egger

Introduction Computer-assisted technologies based on algorithmic software segmentation are a topic of increasing interest in complex surgical cases. However, due to functional instability, time-consuming software processes, personnel resources or license-based financial costs, many segmentation processes are outsourced from clinical centers to third parties and the industry. Therefore, the aim of this trial was to assess the practical feasibility of an easily available, functionally stable and license-free segmentation approach for use in clinical practice. Material and methods In this retrospective, randomized, controlled trial, the accuracy and accordance of the open-source segmentation algorithm GrowCut was assessed through comparison to the manually generated ground truth of the same anatomy, using 10 CT lower-jaw data sets from the clinical routine. Assessment parameters were the segmentation time, the volume, the voxel number, the Dice score and the Hausdorff distance. Results Overall semi-automatic GrowCut segmentation times were about one minute. Mean Dice score values of over 85% and Hausdorff distances below 33.5 voxels were achieved between the algorithmic GrowCut-based segmentations and the manually generated ground truth schemes. Statistical differences between the assessment parameters were not significant (p<0.05), and correlation coefficients were close to one (r > 0.94) for all comparisons made between the two groups. Discussion Functionally stable and time-saving segmentations with high accuracy and high positive correlation could be performed with the presented interactive open-source approach. In the cranio-maxillofacial complex, the method could represent an algorithmic alternative to image-based segmentation in clinical practice, e.g. for surgical treatment planning or visualization of postoperative results, and offers several advantages. Thanks to its open-source basis, the method can be further developed by other groups or specialists. Systematic comparisons to other segmentation approaches, and studies with larger amounts of data, are areas of future work.
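The two agreement metrics used in the trial, Dice score and Hausdorff distance, have compact definitions. A NumPy sketch with a brute-force Hausdorff computation, adequate for small illustrative masks; pipelines on clinical-size volumes would use a distance transform instead:

```python
import numpy as np

def dice_score(a, b):
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_voxels(a, b):
    """Symmetric Hausdorff distance in voxel units between two binary
    masks, via brute-force comparison of their foreground point sets."""
    pa = np.argwhere(a)
    pb = np.argwhere(b)
    # Pairwise Euclidean distances between all foreground voxels.
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    # Largest nearest-neighbour distance, taken in both directions.
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Dice measures bulk overlap, while the Hausdorff distance captures the worst-case boundary deviation, which is why studies like this one report both.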


Medical Imaging 2018: Biomedical Applications in Molecular, Structural, and Functional Imaging | 2018

Exploit 18F-FDG enhanced urinary bladder in PET data for deep learning ground truth generation in CT scans

Christina Gsaxner; Birgit Pfarrkirchner; Lydia-Alice Lindner; Norbert Jakse; Jürgen Wallner; Dieter Schmalstieg; Jan Egger

Accurate segmentation of medical images is a key step in medical image processing. As the amount of medical images obtained in diagnostics, clinical studies and treatment planning increases, automatic segmentation algorithms become increasingly important. Therefore, we plan to develop an automatic segmentation approach for the urinary bladder in computed tomography (CT) images using deep learning. For training such a neural network, a large amount of labeled training data is needed. However, public data sets of medical images with segmented ground truth are scarce. We overcome this problem by generating binary masks of images of the 18F-FDG enhanced urinary bladder obtained from a multi-modal scanner delivering registered CT and positron emission tomography (PET) image pairs. Since PET images offer good contrast, a simple thresholding algorithm suffices for segmentation. We apply data augmentation to these datasets to increase the amount of available training data. In this contribution, we present algorithms developed with the medical image processing and visualization platform MeVisLab to achieve our goals. With the proposed methods, accurate segmentation masks of the urinary bladder could be generated, and given datasets could be enlarged by a factor of up to 2500.
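The core trick, thresholding the bright 18F-FDG uptake in the PET channel and reusing the resulting mask as a label for the registered CT, fits in a few lines. A sketch with NumPy; the relative threshold of 40% of maximum uptake is an assumed heuristic, not a value reported by the authors:

```python
import numpy as np

def pet_ground_truth(pet_volume, threshold_ratio=0.4):
    """Generate a binary ground-truth mask from a registered PET volume.
    High 18F-FDG uptake makes the bladder bright in PET, so a simple
    relative threshold (a fraction of the maximum uptake) separates it
    from the background; the mask then labels the matching CT voxels."""
    threshold = threshold_ratio * pet_volume.max()
    return (pet_volume >= threshold).astype(np.uint8)
```

Because the CT and PET volumes come registered from the same scanner, no extra alignment step is needed before applying the PET-derived mask to the CT data.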

Collaboration


Dive into Jürgen Wallner's collaborations.

Top Co-Authors


Knut Reinbacher

Medical University of Graz


Jan Egger

Graz University of Technology


Dieter Schmalstieg

Graz University of Technology


Mauro Pau

Medical University of Graz


Tomislav Zrnc

Medical University of Graz


Xiaojun Chen

Shanghai Jiao Tong University


W. Zemann

Medical University of Graz
