Publication


Featured research published by Helko Lehmann.


Interface Focus | 2011

euHeart: personalized and integrated cardiac care using patient-specific cardiovascular modelling

Nic Smith; Adelaide de Vecchi; Matthew McCormick; David Nordsletten; Oscar Camara; Alejandro F. Frangi; Hervé Delingette; Maxime Sermesant; Jatin Relan; Nicholas Ayache; Martin W. Krueger; Walther H. W. Schulze; Rod Hose; Israel Valverde; Philipp Beerbaum; Cristina Staicu; Maria Siebes; Jos A. E. Spaan; Peter Hunter; Juergen Weese; Helko Lehmann; Dominique Chapelle; Reza Rezavi

The loss of cardiac pump function accounts for a significant increase in both mortality and morbidity in Western society, where there is currently a one in four lifetime risk, and costs associated with acute and long-term hospital treatments are accelerating. The significance of cardiac disease has motivated the application of state-of-the-art clinical imaging techniques and functional signal analysis to aid diagnosis and clinical planning. Measurements of cardiac function currently provide high-resolution datasets for characterizing cardiac patients. However, the clinical practice of using population-based metrics derived from separate image or signal-based datasets often indicates contradictory treatment plans owing to inter-individual variability in pathophysiology. To address this issue, the goal of our work, demonstrated in this study through four specific clinical applications, is to integrate multiple types of functional data into a consistent framework using multi-scale computational modelling.


Medical Image Computing and Computer-Assisted Intervention | 2012

Automatic multi-model-based segmentation of the left atrium in cardiac MRI scans

Dominik Kutra; Axel Saalbach; Helko Lehmann; Alexandra Groth; Sebastian Peter Michael Dries; Martin W. Krueger; Olaf Dössel; Jürgen Weese

Model-based segmentation approaches have been proven to produce very accurate segmentation results while simultaneously providing an anatomical labeling of the segmented structures. However, variations of the anatomy, as they are often encountered, e.g., in the drainage pattern of the pulmonary veins into the left atrium, cannot be represented by a single model. Automatic model selection extends the model-based segmentation approach to handle significant anatomical variations without user interaction. Using models for the three most common anatomical variants of the left atrium, we propose a method that estimates the local fit of the different models to select the best-fitting model automatically. Our approach employs a support vector machine for the automatic model selection. The method was evaluated on 42 very accurate segmentations of MRI scans using three different models. The correct model was chosen in 88.1% of the cases. In a second experiment, reflecting average segmentation results, the model corresponding to the clinical classification was automatically found in 78.0% of the cases.
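
As a rough illustration of the model-selection step described above (not the authors' implementation), the sketch below turns per-model local-fit measures into a feature vector and lets a support vector machine pick the anatomical variant whose model should be used; the variant names, feature layout and training data are invented for the example.

```python
# Hypothetical sketch of SVM-based model selection from local-fit features;
# variant names, feature layout and data are illustrative, not from the paper.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

VARIANTS = ["standard", "left_common_trunk", "right_middle_vein"]  # assumed classes
N_REGIONS = 16                                                     # assumed regional fit measures

def fit_features(local_errors_per_model):
    """Concatenate per-model regional surface-fit errors into one feature vector."""
    return np.concatenate([np.asarray(e, dtype=float) for e in local_errors_per_model])

# Training: X holds fit features of cases with a known anatomical variant y (toy data).
rng = np.random.default_rng(0)
X_train = rng.random((42, len(VARIANTS) * N_REGIONS))
y_train = rng.choice(VARIANTS, size=42)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)

# Inference: coarsely adapt all candidate models, measure their local fit, classify.
x_new = fit_features([rng.random(N_REGIONS) for _ in VARIANTS]).reshape(1, -1)
print("Selected left-atrium model:", clf.predict(x_new)[0])
```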


International Conference on Functional Imaging and Modeling of the Heart | 2009

Integrating Viability Information into a Cardiac Model for Interventional Guidance

Helko Lehmann; Reinhard Kneser; Mirja Neizel; Jochen Peters; Olivier Ecabert; Harald P. Kühl; Malte Kelm; Jürgen Weese

It has been demonstrated that 3D anatomical models can be used effectively as roadmaps in image-guided interventions. However, in addition to the anatomical information, an integrated display of functional information is also desirable. In particular, a number of procedures, such as the treatment of coronary artery disease by revascularization and myocardial repair by targeted cell delivery, require information about myocardial viability. In this paper we show how we can determine myocardial viability and integrate this information into a patient-specific cardiac 3D model. In contrast to other work, we associate the viability information directly with the 3D patient anatomy. Thus we ensure that the functional information can be visualized in a way suitable for interventional guidance. Furthermore, we propose a workflow that allows the nearly automatic generation of the patient-specific model. Our work is based on a previously published cardiac model that can be automatically adapted to images from different modalities such as CT and MR. To enable the integration of myocardial viability, we first define a new myocardium surface model that encloses the left ventricular cavity in a way that suits robust viability measurements. We modify the model-based segmentation method to allow accurate adaptation of this new model. Second, we extend the model and the segmentation method to incorporate volumetric tissue properties. We validate the accuracy of the segmentation of the left ventricular cavity systematically using clinical data and illustrate the complete method for integrating myocardial viability by an example.
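
The following minimal sketch illustrates, under assumptions of our own (a viability volume already registered to the anatomy, mesh vertices given in patient coordinates), how viability values could be attached directly to the vertices of a segmented myocardium mesh; it is not the workflow of the paper.

```python
# Illustrative sketch: sample a viability (e.g. delayed-enhancement) volume at
# the vertices of a segmented myocardium mesh. Names and data are assumptions.
import numpy as np
from scipy.ndimage import map_coordinates

def attach_viability(mesh_vertices_mm, viability_volume, voxel_spacing_mm, origin_mm):
    """Return one viability value per mesh vertex by trilinear sampling."""
    # Convert vertex positions from patient coordinates (mm) to voxel indices.
    idx = (mesh_vertices_mm - origin_mm) / voxel_spacing_mm
    # map_coordinates expects coordinates as (3, N) in (z, y, x) order.
    coords = idx[:, ::-1].T
    return map_coordinates(viability_volume, coords, order=1, mode="nearest")

# Toy data: a 64^3 volume and 1000 vertices inside it.
vol = np.random.rand(64, 64, 64)
verts = np.random.rand(1000, 3) * 60.0
per_vertex_viability = attach_viability(verts, vol, np.array([1.0, 1.0, 1.0]), np.zeros(3))
print(per_vertex_viability.shape)  # (1000,) values to display on the 3D model
```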


International Conference of the IEEE Engineering in Medicine and Biology Society | 2006

Fast maximum intensity projections of large medical data sets by exploiting hierarchical memory architectures

Gundolf Kiefer; Helko Lehmann; Jürgen Weese

Maximum intensity projections (MIPs) are an important visualization technique for angiographic data sets. Efficient data inspection requires frame rates of at least five frames per second at preserved image quality. Despite the advances in computer technology, this task remains a challenge. On the one hand, the sizes of computed tomography and magnetic resonance images are increasing rapidly. On the other hand, rendering algorithms do not automatically benefit from the advances in processor technology, especially for large data sets. This is due to the growing gap between processing power and memory access speed, which is bridged by hierarchical cache memory architectures. In this paper, we investigate memory access optimization methods and use them for generating MIPs on general-purpose central processing units (CPUs) and graphics processing units (GPUs), respectively. These methods can work on any level of the memory hierarchy, and we show that properly combined methods can optimize memory access on multiple levels of the hierarchy at the same time. We present performance measurements to compare different algorithm variants and illustrate the influence of the respective techniques. On current hardware, the efficient handling of the memory hierarchy improves the rendering performance on CPUs by a factor of 3 to 4. On GPUs, we observed that the effect is even larger, especially for large data sets. The methods can easily be adjusted to different hardware specifics, although their impact can vary considerably. They can also be used for rendering techniques other than MIPs, and their use for more general image processing tasks could be investigated in the future.
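
For illustration only, the sketch below shows the basic blocking idea behind such memory-access optimization: a maximum intensity projection computed brick by brick, so that each brick is read once and stays cache-resident while it contributes to its output tile. The brick size and data are assumptions; the paper's CPU and GPU implementations are considerably more involved.

```python
# Sketch of a brick-based maximum intensity projection (MIP) along the z axis.
# Brick size and data are illustrative; real implementations tune the brick
# size to the cache-line and cache sizes of the target CPU/GPU.
import numpy as np

def mip_bricked(volume, brick=32):
    """MIP along axis 0, processed brick by brick for better cache locality."""
    nz, ny, nx = volume.shape
    out = np.zeros((ny, nx), dtype=volume.dtype)
    for z0 in range(0, nz, brick):
        for y0 in range(0, ny, brick):
            for x0 in range(0, nx, brick):
                sub = volume[z0:z0 + brick, y0:y0 + brick, x0:x0 + brick]
                # Reduce the brick along the projection axis, then merge it
                # into the corresponding tile of the output image.
                tile = sub.max(axis=0)
                np.maximum(out[y0:y0 + brick, x0:x0 + brick], tile,
                           out=out[y0:y0 + brick, x0:x0 + brick])
    return out

vol = np.random.rand(128, 256, 256).astype(np.float32)
assert np.allclose(mip_bricked(vol), vol.max(axis=0))  # same image as the naive MIP
```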


Medical Image Computing and Computer-Assisted Intervention | 2010

The generation of patient-specific heart models for diagnosis and interventions

Jürgen Weese; Jochen Peters; Carsten Meyer; Irina Wächter; Reinhard Kneser; Helko Lehmann; Olivier Ecabert; Hans Barschdorf; Raghed Hanna; F. Weber; Olaf Dössel; Cristian Lorenz

A framework for the automatic extraction and generation of patient-specific organ models from different image modalities is presented. These models can be used to extract and represent diagnostic information about the heart and its function. Furthermore, the models can be used for treatment planning and an overlay of the models onto X-ray fluoroscopy images can support navigation when performing an intervention in the CathLab.
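
As a hedged sketch of the overlay step (assuming a calibrated pinhole projection for the C-arm, which is not detailed in the abstract), the example below projects the vertices of a heart model into the X-ray image plane so that its outline can be drawn on a fluoroscopy frame.

```python
# Illustrative pinhole projection of 3D heart-model vertices into an X-ray
# image plane for overlay; the 3x4 projection matrix P is assumed to come
# from C-arm calibration/registration, which is outside this sketch.
import numpy as np

def project_vertices(vertices_mm, P):
    """Project Nx3 world points to Nx2 pixel coordinates with a 3x4 matrix P."""
    homog = np.hstack([vertices_mm, np.ones((len(vertices_mm), 1))])   # N x 4
    uvw = homog @ P.T                                                  # N x 3
    return uvw[:, :2] / uvw[:, 2:3]                                    # divide by w

# Toy example: camera looking down the z axis, focal length 1000 px, center (512, 512).
P = np.array([[1000.0, 0.0,    512.0, 0.0],
              [0.0,    1000.0, 512.0, 0.0],
              [0.0,    0.0,    1.0,   0.0]])
verts = np.random.rand(500, 3) * 50.0 + np.array([0.0, 0.0, 800.0])
pixels = project_vertices(verts, P)
print(pixels.shape)  # (500, 2) pixel positions to draw over the fluoroscopy frame
```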


Proceedings of SPIE | 2012

Robust left ventricular myocardium segmentation for multi-protocol MR

Alexandra Groth; Jürgen Weese; Helko Lehmann

For a number of cardiac procedures, such as the treatment of ventricular tachycardia (VT), coronary artery disease (CAD) and heart failure (HF), both anatomical and vitality information about the left ventricular myocardium is required. To this end, two images for the anatomical and functional information, respectively, must be acquired and analyzed, e.g. using two different 3D MR protocols. To enable automatic analysis, a workflow has been proposed [1] which allows the vitality information extracted from the functional image data to be integrated into a patient-specific anatomical model generated from the anatomical image. However, in the proposed workflow the extraction of accurate vitality information from the functional image depends to a large extent on the accuracy of both the anatomical model and the mapping of the model to the functional image. In this paper we propose and evaluate methods for improving these two aspects. More specifically, on one hand we aim to improve the segmentation of the often low-contrast left ventricular epicardium in the anatomical 3D MR images by introducing a patient-specific shape bias. On the other hand, we introduce a registration approach that facilitates the mapping of the anatomical model to images acquired with different protocols and modalities, such as functional 3D MR. The new methods are evaluated on clinical MR data, for which considerable improvements can be achieved.
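
The sketch below is a toy interpretation of a patient-specific shape bias (not the paper's formulation): boundary adaptation is driven toward image-derived target points while deviations from a patient-specific reference shape are penalized, which keeps a low-contrast epicardium segmentation close to a plausible shape. The weights, step size and data are invented.

```python
# Toy illustration of a shape-biased surface adaptation step: vertices are
# pulled toward image-derived target points, but deviations from a
# patient-specific reference shape are penalized. All parameters are assumptions.
import numpy as np

def adapt_with_shape_bias(vertices, targets, reference,
                          w_image=1.0, w_shape=0.5, steps=50, lr=0.1):
    """Gradient steps on E = w_image*|v - target|^2 + w_shape*|v - reference|^2."""
    v = vertices.copy()
    for _ in range(steps):
        grad = 2.0 * w_image * (v - targets) + 2.0 * w_shape * (v - reference)
        v -= lr * grad
    return v

ref = np.random.rand(200, 3)                 # patient-specific epicardium reference shape
tgt = ref + 0.1 * np.random.randn(200, 3)    # noisy image-derived boundary candidates
adapted = adapt_with_shape_bias(ref, tgt, ref)
print(np.linalg.norm(adapted - ref, axis=1).mean())  # stays close to the reference
```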


Medical Imaging 2006: Visualization, Image-Guided Procedures, and Display | 2006

Visualizing the beating heart: interactive direct volume rendering of high-resolution CT time series using standard PC hardware

Helko Lehmann; Olivier Ecabert; Dieter Geller; Gundolf Kiefer; Jürgen Weese

Modern multi-slice CT (MSCT) scanners allow acquisitions of 3D data sets covering the complete heart at different phases of the cardiac cycle. This enables the physician to non-invasively study the dynamic behavior of the heart, such as wall motion. To this end, an interactive 4D visualization of the heart in motion is desirable. However, the application of well-known volume rendering algorithms enforces considerable sacrifices in terms of image quality to ensure interactive frame rates, even when accelerated by standard graphics processors (GPUs). The performance of pure CPU implementations of direct volume rendering algorithms is limited even for moderate volume sizes by both the number of required computations and the available memory bandwidth. Despite offering higher computational performance and more memory bandwidth, GPU-accelerated implementations cannot provide interactive visualizations of large 4D data sets either, since data sets that do not fit into the onboard graphics memory are often not handled efficiently. In this paper we present a software architecture for GPU-based direct volume rendering algorithms that allows the interactive, high-quality visualization of large medical time-series data sets. In contrast to other work, our architecture exploits the complete memory hierarchy for high cache and bandwidth efficiency. Additionally, several data-dependent techniques are incorporated to reduce the amount of volume data to be transferred and rendered. None of these techniques sacrifices image quality in order to improve speed. By applying the method to several multi-phase MSCT cardiac data sets we show that we can achieve interactive frame rates on currently available standard PC hardware.
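
Two of the data-dependent, quality-preserving reductions the abstract alludes to can be illustrated at brick granularity, as in the hypothetical sketch below: bricks that are fully transparent under the transfer function are never rendered, and between time frames only bricks whose voxels actually changed are re-uploaded. Brick size, threshold and data are assumptions.

```python
# Sketch of two lossless, data-dependent reductions at brick granularity:
#  1) skip bricks that are fully transparent under the transfer function,
#  2) between time frames, re-upload only bricks whose voxels changed.
import numpy as np

BRICK = 32
TRANSPARENT_BELOW = 0.2   # assumed: transfer function maps values < 0.2 to opacity 0

def brick_grid(vol):
    nz, ny, nx = vol.shape
    return [(z, y, x) for z in range(0, nz, BRICK)
                      for y in range(0, ny, BRICK)
                      for x in range(0, nx, BRICK)]

def bricks_to_upload(prev_vol, cur_vol):
    """Return brick origins that are visible and differ from the previous frame."""
    upload = []
    for z, y, x in brick_grid(cur_vol):
        cur = cur_vol[z:z + BRICK, y:y + BRICK, x:x + BRICK]
        if cur.max() < TRANSPARENT_BELOW:
            continue  # nothing visible in this brick, so it never needs rendering
        prev = prev_vol[z:z + BRICK, y:y + BRICK, x:x + BRICK]
        if not np.array_equal(prev, cur):
            upload.append((z, y, x))
    return upload

f0 = np.random.rand(64, 64, 64).astype(np.float32)
f1 = f0.copy()
f1[:32, :32, :32] += 0.05              # only one region moves between cardiac phases
print(len(bricks_to_upload(f0, f1)))   # far fewer bricks than the whole volume
```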


Proceedings of SPIE | 2009

Toward knowledge-enhanced viewing using encyclopedias and model-based segmentation

Reinhard Kneser; Helko Lehmann; Dieter Geller; Yue-Chen Qian; Jürgen Weese

To make accurate decisions based on imaging data, radiologists must associate the viewed imaging data with the corresponding anatomical structures. Furthermore, given a disease hypothesis, the possible image findings that would verify the hypothesis must be considered, along with where and how they are expressed in the viewed images. If rare anatomical variants, rare pathologies, unfamiliar protocols, or ambiguous findings are present, external knowledge sources such as medical encyclopedias are consulted. These sources are accessed using keywords, typically describing anatomical structures, image findings, or pathologies. In this paper we present our vision of how a patient's imaging data can be automatically enhanced with anatomical knowledge as well as knowledge about image findings. On one hand, we propose the automatic annotation of the images with labels from a standard anatomical ontology. These labels are used as keywords for a medical encyclopedia such as STATdx to access anatomical descriptions as well as information about pathologies and image findings. On the other hand, we envision encyclopedias containing links to region- and finding-specific image processing algorithms. A finding is then evaluated on an image by applying the respective algorithm in the associated anatomical region. Towards the realization of our vision, we present our method and results for the automatic annotation of anatomical structures in 3D MRI brain images. To this end, we develop a complex surface mesh model incorporating major structures of the brain, together with a model-based segmentation method. We demonstrate the validity of the approach by analyzing the results of several training and segmentation experiments with clinical data, focusing particularly on the visual pathway.
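
A minimal sketch of the label-to-keyword idea follows; the structure names, ontology identifiers and query format are placeholders rather than the paper's actual annotation scheme. Each automatically annotated structure contributes keywords that can be combined with a finding to query an encyclopedia.

```python
# Illustrative sketch of the label-to-keyword step: segmented structures carry
# ontology labels, which are then used as search keywords for an encyclopedia.
# Structure names, ontology IDs and the query format are placeholders.
from dataclasses import dataclass

@dataclass
class SegmentedStructure:
    name: str            # label produced by model-based segmentation
    ontology_id: str     # placeholder identifier from an anatomical ontology

STRUCTURES = [
    SegmentedStructure("optic nerve", "ANAT:0001"),
    SegmentedStructure("optic chiasm", "ANAT:0002"),
    SegmentedStructure("lateral ventricle", "ANAT:0003"),
]

def encyclopedia_keywords(structures, finding=None):
    """Build keyword queries combining anatomical labels with an optional finding."""
    keywords = []
    for s in structures:
        query = s.name if finding is None else f"{s.name} {finding}"
        keywords.append({"ontology_id": s.ontology_id, "query": query})
    return keywords

for entry in encyclopedia_keywords(STRUCTURES, finding="enhancement"):
    print(entry["ontology_id"], "->", entry["query"])
```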


Medical Imaging 2005: Visualization, Image-Guided Procedures, and Display | 2005

Visualization of large medical data sets using memory-optimized CPU and GPU algorithms

Gundolf Kiefer; Helko Lehmann; Juergen Weese

With the evolution of medical scanners towards higher spatial resolutions, the sizes of image data sets are increasing rapidly. To profit from the higher resolution in medical applications such as 3D angiography for a more efficient and precise diagnosis, high-performance visualization is essential. However, to make sure that the performance of a volume rendering algorithm scales with the performance of future computer architectures, technology trends need to be considered. The design of such scalable volume rendering algorithms remains challenging. One of the major trends in the development of computer architectures is the wider use of cache memory hierarchies to bridge the growing gap between the faster evolving processing power and the slower evolving memory access speed. In this paper we propose ways to exploit the standard PC's cache memories supporting the main processors (CPUs) and the graphics hardware (graphics processing unit, GPU), respectively, for computing maximum intensity projections (MIPs). To this end, we describe a generic and flexible way to improve the cache efficiency of software ray casting algorithms and show by means of cache simulations that it enables cache miss rates close to the theoretical optimum. For GPU-based rendering we propose a similar, brick-based technique to optimize the utilization of onboard caches and the transfer of data to the GPU on-board memory. All algorithms produce images of identical quality, which enables us to compare the performance of their implementations in a fair way without trading quality for speed. Our comparison indicates that the proposed methods are superior, in particular for large data sets.
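
A back-of-the-envelope working-set estimate (with assumed cache-line size, volume size and brick size, and assuming that a full ray column does not fit in cache) shows why this kind of brick-based traversal reduces cache misses:

```python
# Rough working-set arithmetic behind the bricking idea; all sizes are assumed
# (64-byte cache lines, a 512^3 volume with 2-byte voxels, 32^3 bricks), and a
# full ray column is assumed not to fit in cache, so naive rays get no reuse.
LINE_BYTES = 64
NZ = NY = NX = 512
VOXEL_BYTES = 2
BRICK = 32

# Naive ray casting along z: consecutive samples of one ray are NY*NX voxels
# apart in memory, so each of the NZ samples touches a different cache line.
lines_per_ray_naive = NZ

# Brick-based traversal: one 32^3 brick occupies 1024 cache lines and is shared
# by 32*32 rays; each ray crosses NZ/BRICK bricks along its path.
lines_per_brick = BRICK**3 * VOXEL_BYTES / LINE_BYTES
lines_per_ray_bricked = (NZ / BRICK) * lines_per_brick / (BRICK * BRICK)

print(f"naive:   ~{lines_per_ray_naive} cache lines read per ray")
print(f"bricked: ~{lines_per_ray_bricked:.0f} cache lines read per ray")  # ~32x fewer
```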


Archive | 2009

Automatic Multi-modal Image Segmentation for Applications in Cardiac Computational Physiology

Olivier Ecabert; Jochen Peters; Carsten Meyer; Reinhard Kneser; Helko Lehmann; Alexandra Groth; Jürgen Weese

The discipline of computational physiology has focused on the development of mathematical models integrating physiological behavior across a range of temporal and anatomical scales. However, the translation of this technology into clinical environments has not yet happened. This is mainly due to the challenge of efficiently customizing the model parameters such that the personalized model reflects the pathological conditions of a patient.
