Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Manfred Liebmann is active.

Publication


Featured research published by Manfred Liebmann.


IEEE Transactions on Biomedical Engineering | 2007

Algebraic Multigrid Preconditioner for the Cardiac Bidomain Model

Gernot Plank; Manfred Liebmann; R.W. dos Santos; Edward J. Vigmond; Gundolf Haase

The bidomain equations are considered to be one of the most complete descriptions of the electrical activity in cardiac tissue, but large scale simulations, as resulting from discretization of an entire heart, remain a computational challenge due to the elliptic portion of the problem, the part associated with solving the extracellular potential. In such cases, the use of iterative solvers and parallel computing environments is mandatory to make parameter studies feasible. The preconditioned conjugate gradient (PCG) method is a standard choice for this problem. Although robust, its efficiency greatly depends on the choice of preconditioner. On structured grids, it has been demonstrated that a geometric multigrid preconditioner performs significantly better than an incomplete LU (ILU) preconditioner. However, unstructured grids are often preferred to better represent organ boundaries and allow for coarser discretization in the bath far from cardiac surfaces. Under these circumstances, algebraic multigrid (AMG) methods are advantageous since they compute coarser levels directly from the system matrix itself, thus avoiding the complexity of explicitly generating coarser, geometric grids. In this paper, the performance of an AMG preconditioner (BoomerAMG) is compared with that of the standard ILU preconditioner and a direct solver. BoomerAMG is used in two different ways, as a preconditioner and as a standalone solver. Two 3-D simulation examples modeling the induction of arrhythmias in rabbit ventricles were used to measure performance in both sequential and parallel simulations. It is shown that the AMG preconditioner is very well suited for the solution of the bidomain equations, being clearly superior to ILU preconditioning in all regards, with speedups by factors in the range of 5.9-7.7.
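
The PCG loop at the heart of this approach can be sketched as follows. This is a generic textbook PCG, not the paper's code: a diagonal (Jacobi) preconditioner stands in for the AMG V-cycle that BoomerAMG would apply, and a small 1D Laplacian stands in for the discretized elliptic operator.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=1000):
    """Preconditioned conjugate gradient for a SPD matrix A.

    M_inv(r) applies the preconditioner to a residual; in the paper
    this role is played by one AMG cycle (BoomerAMG), here a simple
    diagonal (Jacobi) stand-in is assumed for illustration.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy SPD system: 1D Laplacian, an illustrative stand-in for the
# elliptic extracellular-potential operator.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
diag = np.diag(A)
x = pcg(A, b, lambda r: r / diag)
```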


IEEE International Conference on High Performance Computing, Data, and Analytics | 2009

A parallel algebraic multigrid solver on graphics processing units

Gundolf Haase; Manfred Liebmann; Craig C. Douglas; Gernot Plank

The paper presents a multi-GPU implementation of the preconditioned conjugate gradient algorithm with an algebraic multigrid preconditioner (PCG-AMG) for an elliptic model problem on a 3D unstructured grid. An efficient parallel sparse matrix-vector multiplication scheme underlying the PCG-AMG algorithm is presented for the many-core GPU architecture. A performance comparison of the parallel solver shows that a single Nvidia Tesla C1060 GPU board delivers the performance of a sixteen-node Infiniband cluster, and a multi-GPU configuration with eight GPUs is about 100 times faster than a typical server CPU core.
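
A serial reference for the kernel the paper parallelizes, the sparse matrix-vector product in CSR layout, can be sketched as below. On a GPU the outer loop over rows is what gets distributed across threads; the paper's actual scheme is more elaborate, and this sketch only illustrates the data layout.

```python
import numpy as np

def csr_matvec(indptr, indices, data, x):
    """Serial reference for a CSR sparse matrix-vector multiply.
    indptr[row]..indptr[row+1] delimits the nonzeros of each row;
    indices holds their column positions, data their values.
    """
    n = len(indptr) - 1
    y = np.zeros(n)
    for row in range(n):
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

# 3x3 example matrix [[4, 0, 1], [0, 3, 0], [1, 0, 2]]
indptr = np.array([0, 2, 3, 5])
indices = np.array([0, 2, 1, 0, 2])
data = np.array([4.0, 1.0, 3.0, 1.0, 2.0])
x = np.array([1.0, 2.0, 3.0])
y = csr_matvec(indptr, indices, data, x)  # [7. 6. 7.]
```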


International Conference on High Performance Computing and Simulation | 2009

Comparing CUDA and OpenGL implementations for a Jacobi iteration

Ronan M. Amorim; Gundolf Haase; Manfred Liebmann; Rodrigo Weber dos Santos

The use of the GPU as a general purpose processor is becoming more popular, and there are different approaches to this kind of programming. In this paper we present a comparison between different implementations of the OpenGL and CUDA approaches for solving our test case, a weighted Jacobi iteration with a structured matrix originating from a finite element discretization of the elliptic PDE part of the cardiac bidomain equations. The CUDA approach using textures proved to be the fastest, with a speedup of 31 over a CPU implementation using one core and SSE. CUDA proved to be an efficient and easy way of programming GPUs for general purpose problems, though it also makes it easy to write inefficient code.
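
The test case itself, a weighted Jacobi iteration, is a short update rule. A minimal dense NumPy sketch is given below for clarity; the paper's kernels operate on a structured finite element matrix, and the 1D Laplacian here is only an illustrative stand-in.

```python
import numpy as np

def weighted_jacobi(A, b, omega=2/3, iters=200):
    """Weighted Jacobi iteration: x <- x + omega * D^{-1} (b - A x),
    where D is the diagonal of A. Each component update is
    independent, which is what makes the method GPU-friendly.
    """
    d = np.diag(A)
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + omega * (b - A @ x) / d
    return x

# Toy SPD system: 1D Laplacian
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = weighted_jacobi(A, b, iters=5000)
```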


IEEE Transactions on Biomedical Engineering | 2012

Accelerating Cardiac Bidomain Simulations Using Graphics Processing Units

Aurel Neic; Manfred Liebmann; Elena Hoetzl; Lawrence Mitchell; Edward J. Vigmond; Gundolf Haase; Gernot Plank

Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally very demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are required to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations, where large sparse linear systems have to be solved in parallel with advanced numerical techniques, are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element method (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6-20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation, which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility.


International Journal of Parallel, Emergent and Distributed Systems | 2007

A Hilbert-order multiplication scheme for unstructured sparse matrices

Gundolf Haase; Manfred Liebmann; Gernot Plank

We investigate a new storage format for unstructured sparse matrices based on the space-filling Hilbert curve. Numerical tests with matrix-vector multiplication show the potential of the fractal storage (FS) format in comparison to the traditional compressed row storage (CRS) format. The FS format outperforms the CRS format by up to 50% for matrix-vector multiplications with multiple right hand sides.
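
The space-filling curve behind this ordering can be computed with the classic iterative xy2d algorithm, sketched below. Sorting the nonzeros of a sparse matrix by their Hilbert index gives the cache-friendly traversal the FS format exploits; the paper's exact storage layout is not reproduced here.

```python
def hilbert_d(n, x, y):
    """Index of cell (x, y) along the Hilbert curve filling an
    n-by-n grid (n a power of two); the classic xy2d algorithm.
    Consecutive indices always map to grid-adjacent cells, which is
    what improves locality when traversing matrix nonzeros.
    """
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate the quadrant so the sub-curve has standard orientation
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d

# Order the cells of a 4x4 grid along the curve (for a sparse
# matrix one would sort only the nonzero coordinates):
coords = [(x, y) for x in range(4) for y in range(4)]
curve_order = sorted(coords, key=lambda p: hilbert_d(4, *p))
```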


Concurrency and Computation: Practice and Experience | 2011

Accelerating cardiac excitation spread simulations using graphics processing units

Bernardo Martins Rocha; Fernando O. Campos; R. M. Amorim; Gernot Plank; R. W. dos Santos; Manfred Liebmann; Gundolf Haase

The modeling of the electrical activity of the heart is of great medical and scientific interest, because it provides a way to gain a better understanding of the related biophysical phenomena, allows the development of new techniques for diagnosis, and serves as a platform for drug tests. The cardiac electrophysiology may be simulated by solving a partial differential equation coupled to a system of ordinary differential equations describing the electrical behavior of the cell membrane. The numerical solution is, however, computationally demanding because of the fine temporal and spatial sampling required. The demand for real-time, high-definition 3D graphics has made the graphics processing unit (GPU) a highly parallel, multithreaded, many-core processor with tremendous computational horsepower, which makes GPUs a promising alternative for simulating the electrical activity in the heart. The aim of this work is to study the performance of GPUs for solving the equations underlying the electrical activity in a simple cardiac tissue. In tests on 2D cardiac tissues with different cell models it is shown that the GPU implementation runs 20 times faster than a parallel CPU implementation running with 4 threads on a quad-core machine; parts of the code are even accelerated by a factor of 180.
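
The PDE-ODE coupling described above can be illustrated with a minimal explicit scheme: a diffusion term on a 2D grid plus a per-cell reaction system. This toy sketch uses the FitzHugh-Nagumo model with assumed parameter values; the paper uses more detailed cell models and GPU kernels.

```python
import numpy as np

# Minimal explicit monodomain-style step on a 2D grid with the
# FitzHugh-Nagumo membrane model (all parameter values assumed
# for illustration only).
nx = ny = 64
dx, dt = 0.5, 0.01
D = 0.1                   # diffusion coefficient (assumed)
v = np.zeros((ny, nx))    # transmembrane potential
w = np.zeros((ny, nx))    # recovery variable
v[:8, :8] = 1.0           # stimulated corner

def laplacian(u):
    # 5-point stencil with no-flux (replicated edge) boundaries
    up = np.pad(u, 1, mode='edge')
    return (up[:-2, 1:-1] + up[2:, 1:-1] + up[1:-1, :-2]
            + up[1:-1, 2:] - 4 * u) / dx**2

a, b, eps = 0.1, 0.5, 0.01
for _ in range(1000):
    # reaction (ODE) part: one explicit Euler step per cell ...
    dv = v * (v - a) * (1 - v) - w
    dw = eps * (v - b * w)
    # ... plus the diffusion (PDE) part coupling neighboring cells
    v = v + dt * (dv + D * laplacian(v))
    w = w + dt * dw
```

The per-cell ODE updates are independent, which is exactly the structure that maps well onto GPU threads.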


Periodica Mathematica Hungarica | 2012

On the Davenport constant and on the structure of extremal zero-sum free sequences

Alfred Geroldinger; Manfred Liebmann; Andreas Philipp

Let G = C_{n_1} \oplus \cdots \oplus C_{n_r}
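
For context, the Davenport constant D(G) of a finite abelian group G, the paper's central object, is standardly defined as

```latex
\mathsf{D}(G) \;=\; \min\bigl\{\ell \in \mathbb{N} \;:\;
  \text{every sequence over } G \text{ of length } \ell
  \text{ contains a nonempty zero-sum subsequence}\bigr\},
```

and for G = C_{n_1} \oplus \cdots \oplus C_{n_r} with 1 < n_1 | \cdots | n_r one has the classical lower bound \mathsf{D}(G) \ge 1 + \sum_{i=1}^{r}(n_i - 1).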


Biomedical Optics Express | 2011

High-performance image reconstruction in fluorescence tomography on desktop computers and graphics hardware

Manuel Freiberger; Herbert Egger; Manfred Liebmann; Hermann Scharfetter


Computer Physics Communications | 2013

Examining the Analytic Structure of Green's Functions: Massive Parallel Complex Integration using GPUs

Andreas Windisch; Reinhard Alkofer; Gundolf Haase; Manfred Liebmann



Parallel Computing | 2010

Algebraic multigrid solver on clusters of CPUs and GPUs

Aurel Neic; Manfred Liebmann; Gundolf Haase; Gernot Plank

Collaboration


Top co-authors of Manfred Liebmann:

Aurel Neic (Medical University of Graz)

Zoltán Horváth (Széchenyi István University)