Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Martin Galgon is active.

Publication


Featured research published by Martin Galgon.


Journal of Computational and Applied Mathematics | 2013

Dissecting the FEAST algorithm for generalized eigenproblems

Lukas Krämer; Edoardo Di Napoli; Martin Galgon; Bruno Lang; Paolo Bientinesi

We analyze the FEAST method for computing selected eigenvalues and eigenvectors of large sparse matrix pencils. After establishing the close connection between FEAST and the well-known Rayleigh-Ritz method, we identify several critical issues that influence convergence and accuracy of the solver: the choice of the starting vector space, the stopping criterion, how the inner linear systems impact the quality of the solution, and the use of FEAST for computing eigenpairs from multiple intervals. We complement the study with numerical examples, and hint at possible improvements to overcome the existing problems.
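To make the connection to Rayleigh-Ritz concrete: the core of a FEAST-style solver is a quadrature approximation of the spectral projector onto the search interval, applied to a block of vectors, followed by a Rayleigh-Ritz step. The sketch below is illustrative only and is not taken from the paper; the function name, the dense `np.linalg.solve` calls, and all parameter choices are assumptions made for the example.

```python
import numpy as np
from scipy.linalg import eigh

def feast_sketch(A, B, emin, emax, m0=8, n_quad=8, n_iter=3, tol=1e-4, seed=0):
    """Toy FEAST-style iteration for a dense symmetric pencil (A, B), B SPD.

    Approximates the spectral projector for eigenvalues in (emin, emax) by
    quadrature of the resolvent on a circular contour, then performs a
    Rayleigh-Ritz step on the filtered subspace.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Y = rng.standard_normal((n, m0))                  # random starting subspace
    c, r = (emin + emax) / 2.0, (emax - emin) / 2.0   # contour center and radius
    for _ in range(n_iter):
        S = np.zeros((n, m0), dtype=complex)
        for k in range(n_quad):
            theta = np.pi * (2 * k + 1) / (2 * n_quad)  # nodes on the upper half-circle
            z = c + r * np.exp(1j * theta)
            # one shifted linear solve per quadrature node: (z B - A) X = B Y
            S += np.exp(1j * theta) * np.linalg.solve(z * B - A, B @ Y)
        U = (r / n_quad) * np.real(S)   # approximate spectral projector applied to Y
        Q, _ = np.linalg.qr(U)          # orthonormal basis of the filtered subspace
        lam, V = eigh(Q.T @ A @ Q, Q.T @ B @ Q)   # Rayleigh-Ritz step
        Y = Q @ V
    # keep only Ritz pairs inside the interval with a small residual
    res = np.linalg.norm(A @ Y - (B @ Y) * lam, axis=0)
    keep = (lam > emin) & (lam < emax) & (res < tol)
    return lam[keep], Y[:, keep]
```

In a real FEAST solver the shifted systems are large and sparse and are handled by sparse direct or iterative solvers; the dense solve here only keeps the sketch self-contained.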


International Journal of Parallel Programming | 2017

GHOST: Building Blocks for High Performance Sparse Linear Algebra on Heterogeneous Systems

Moritz Kreutzer; Jonas Thies; Melven Röhrig-Zöllner; Andreas Pieper; Faisal Shahzad; Martin Galgon; Achim Basermann; H. Fehske; Georg Hager; Gerhard Wellein

While many of the architectural details of future exascale-class high performance computer systems are still a matter of intense research, there appears to be a general consensus that they will be strongly heterogeneous, featuring “standard” as well as “accelerated” resources. Today, such resources are available as multicore processors, graphics processing units (GPUs), and other accelerators such as the Intel Xeon Phi. Any software infrastructure that claims usefulness for such environments must be able to meet their inherent challenges: massive multi-level parallelism, topology, asynchronicity, and abstraction. The “General, Hybrid, and Optimized Sparse Toolkit” (GHOST) is a collection of building blocks that targets algorithms dealing with sparse matrix representations on current and future large-scale systems. It implements the “MPI+X” paradigm, has a pure C interface, and provides hybrid-parallel numerical kernels, intelligent resource management, and truly heterogeneous parallelism for multicore CPUs, Nvidia GPUs, and the Intel Xeon Phi. We describe the details of its design with respect to the challenges posed by modern heterogeneous supercomputers and recent algorithmic developments. Implementation details which are indispensable for achieving high efficiency are pointed out and their necessity is justified by performance measurements or predictions based on performance models. We also provide instructions on how to make use of GHOST in existing software packages, together with a case study which demonstrates the applicability and performance of GHOST as a component within a larger software stack. The library code and several applications are available as open source.


Journal of Computational Physics | 2016

High-performance implementation of Chebyshev filter diagonalization for interior eigenvalue computations

Andreas Pieper; Moritz Kreutzer; Andreas Alvermann; Martin Galgon; H. Fehske; Georg Hager; Bruno Lang; Gerhard Wellein

We study Chebyshev filter diagonalization as a tool for the computation of many interior eigenvalues of very large sparse symmetric matrices. In this technique the subspace projection onto the target space of wanted eigenvectors is approximated with filter polynomials obtained from Chebyshev expansions of window functions. After the discussion of the conceptual foundations of Chebyshev filter diagonalization we analyze the impact of the choice of the damping kernel, search space size, and filter polynomial degree on the computational accuracy and effort, before we describe the necessary steps towards a parallel high-performance implementation. Because Chebyshev filter diagonalization avoids the need for matrix inversion it can deal with matrices and problem sizes that are presently not accessible with rational function methods based on direct or iterative linear solvers. To demonstrate the potential of Chebyshev filter diagonalization for large-scale problems of this kind we include as an example the computation of the 10^2 innermost eigenpairs of a topological insulator matrix with dimension 10^9 derived from quantum physics applications.
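To illustrate the technique: a polynomial filter for a window [a, b] can be built from the Chebyshev coefficients of the window's indicator function, damped with the Jackson kernel, and applied through the three-term recurrence so that only matrix-vector products with A are needed. This sketch is an illustration of the general idea, not the paper's implementation; all names and defaults are assumptions.

```python
import numpy as np

def chebyshev_window_filter(A, V, lam_min, lam_max, a, b, degree=200):
    """Apply a Chebyshev filter approximating the indicator of [a, b]
    to the block V, using only matrix-vector products with symmetric A.

    [lam_min, lam_max] must enclose the full spectrum of A; Jackson
    damping suppresses the Gibbs oscillations of the truncated series.
    """
    # linear map taking the spectrum onto [-1, 1]
    cc, ee = (lam_max + lam_min) / 2.0, (lam_max - lam_min) / 2.0
    H = lambda X: (A @ X - cc * X) / ee
    # Chebyshev coefficients of the window indicator on [a, b]
    ta, tb = np.arccos((b - cc) / ee), np.arccos((a - cc) / ee)
    k = np.arange(1, degree + 1)
    mu = np.empty(degree + 1)
    mu[0] = (tb - ta) / np.pi
    mu[1:] = 2.0 * (np.sin(k * tb) - np.sin(k * ta)) / (np.pi * k)
    # Jackson damping factors
    g = ((degree + 1 - k) * np.cos(np.pi * k / (degree + 1))
         + np.sin(np.pi * k / (degree + 1)) / np.tan(np.pi / (degree + 1))) / (degree + 1)
    mu[1:] *= g
    # three-term recurrence: T_{k+1}(H) V = 2 H T_k(H) V - T_{k-1}(H) V
    T_prev, T_cur = V, H(V)
    W = mu[0] * T_prev + mu[1] * T_cur
    for j in range(2, degree + 1):
        T_prev, T_cur = T_cur, 2.0 * H(T_cur) - T_prev
        W += mu[j] * T_cur
    return W
```

Applied to a block of random vectors, the filter damps components outside [a, b] while (approximately) preserving those inside, which is the subspace-projection step the abstract describes.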


Parallel Computing | 2015

On the parallel iterative solution of linear systems arising in the FEAST algorithm for computing inner eigenvalues

Martin Galgon; Lukas Krämer; Jonas Thies; Achim Basermann; Bruno Lang

Highlights: parallel iterative solution of the linear systems arising in the FEAST algorithm; a hybrid parallel implementation; a CG variant with a multi-coloring approach for better performance on hybrid systems.

Methods for the solution of sparse eigenvalue problems that are based on spectral projectors and contour integration have recently attracted more and more attention. Such methods require the solution of many shifted sparse linear systems of full size. In most of the literature concerning these eigenvalue solvers, only a few words are said on the solution of the linear systems, but they turn out to be very hard to solve by iterative linear solvers in practice. In this work we identify a row projection method for the solution of the inner linear systems encountered in the FEAST algorithm and introduce a novel hybrid parallel and fully iterative implementation of the eigenvalue solver. Our approach ultimately aims at achieving extreme parallelism by exploiting the algorithm's potential on several levels. We present numerical examples where graphene modeling is one of the target applications. In this application, several hundred or even thousands of eigenvalues from the interior of the spectrum are required, which is a big challenge for state-of-the-art numerical methods.
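The row projection idea can be illustrated with a plain cyclic Kaczmarz sweep; the paper's actual solver, with multi-coloring and CG acceleration, is considerably more elaborate, so treat this as a toy model with made-up names:

```python
import numpy as np

def kaczmarz_sweep(A, b, x, n_sweeps=100):
    """Cyclic Kaczmarz row projections for A x = b (dense toy version).

    Each step projects the iterate onto the hyperplane of one row:
        x <- x + (b_i - a_i . x) / ||a_i||^2 * a_i
    """
    norms2 = np.einsum('ij,ij->i', A, A)   # squared row norms
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            # project x onto the hyperplane a_i . x = b_i
            x = x + ((b[i] - A[i] @ x) / norms2[i]) * A[i]
    return x
```

Row projection methods converge for consistent full-rank systems regardless of definiteness, which is part of what makes them attractive for the highly indefinite shifted systems that arise inside FEAST.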


European Conference on Parallel Processing | 2014

ESSEX - Equipping Sparse Solvers for Exascale

Andreas Alvermann; Achim Basermann; H. Fehske; Martin Galgon; Georg Hager; Moritz Kreutzer; Lukas Krämer; Bruno Lang; Andreas Pieper; Melven Röhrig-Zöllner; Faisal Shahzad; Jonas Thies; Gerhard Wellein

The ESSEX project investigates computational issues arising at exascale for large-scale sparse eigenvalue problems and develops programming concepts and numerical methods for their solution. The project pursues a coherent co-design of all software layers where a holistic performance engineering process guides code development across the classic boundaries of application, numerical method, and basic kernel library. Within ESSEX the numerical methods cover widely applicable solvers such as classic Krylov, Jacobi-Davidson, or the recent FEAST methods, as well as domain-specific iterative schemes relevant for the ESSEX quantum physics application. This report introduces the project structure and presents selected results which demonstrate the potential impact of ESSEX for efficient sparse solvers on highly scalable heterogeneous supercomputers.


Software for Exascale Computing | 2016

Towards an Exascale Enabled Sparse Solver Repository

Jonas Thies; Martin Galgon; Faisal Shahzad; Andreas Alvermann; Moritz Kreutzer; Andreas Pieper; Melven Röhrig-Zöllner; Achim Basermann; H. Fehske; Georg Hager; Bruno Lang; Gerhard Wellein

As we approach the exascale computing era, disruptive changes in the software landscape are required to tackle the challenges posed by manycore CPUs and accelerators. We discuss the development of a new ‘exascale enabled’ sparse solver repository (the ESSR) that addresses these challenges—from fundamental design considerations and development processes to actual implementations of some prototypical iterative schemes for computing eigenvalues of sparse matrices. Key features of the ESSR include holistic performance engineering, tight integration between software layers and mechanisms to mitigate hardware failures.


Software for Exascale Computing | 2016

Performance Engineering and Energy Efficiency of Building Blocks for Large, Sparse Eigenvalue Computations on Heterogeneous Supercomputers

Moritz Kreutzer; Jonas Thies; Andreas Pieper; Andreas Alvermann; Martin Galgon; Melven Röhrig-Zöllner; Faisal Shahzad; Achim Basermann; A. R. Bishop; H. Fehske; Georg Hager; Bruno Lang; Gerhard Wellein

Numerous challenges have to be mastered as applications in scientific computing are being developed for post-petascale parallel systems. While ample parallelism is usually available in the numerical problems at hand, the efficient use of supercomputer resources requires not only good scalability but also a verifiably effective use of resources on the core, the processor, and the accelerator level. Furthermore, power dissipation and energy consumption are becoming further optimization targets besides time-to-solution. Performance Engineering (PE) is the pivotal strategy for developing effective parallel code on all levels of modern architectures. In this paper we report on the development and use of low-level parallel building blocks in the GHOST library (“General, Hybrid, and Optimized Sparse Toolkit”). We demonstrate the use of PE in optimizing a density of states computation using the Kernel Polynomial Method, and show that reduction of runtime and reduction of energy are literally the same goal in this case. We also give a brief overview of the capabilities of GHOST and the applications in which it is being used successfully.
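For reference, the Kernel Polynomial Method mentioned above estimates the density of states from stochastic traces of Chebyshev polynomials of the rescaled Hamiltonian. The following self-contained sketch is illustrative (not GHOST code) and assumes the spectrum has already been mapped into [-1, 1]:

```python
import numpy as np

def kpm_dos(H_mv, n, n_moments=100, n_random=20, seed=0):
    """Kernel Polynomial Method sketch for the density of states.

    H_mv applies the rescaled Hamiltonian (spectrum in [-1, 1]) to a
    vector; traces of T_k(H) are estimated with random probe vectors.
    """
    rng = np.random.default_rng(seed)
    mu = np.zeros(n_moments)
    for _ in range(n_random):
        v = rng.choice([-1.0, 1.0], size=n)        # Rademacher probe vector
        t_prev, t_cur = v, H_mv(v)
        mu[0] += v @ t_prev
        mu[1] += v @ t_cur
        for k in range(2, n_moments):              # three-term recurrence
            t_prev, t_cur = t_cur, 2.0 * H_mv(t_cur) - t_prev
            mu[k] += v @ t_cur
    mu /= n_random
    # Jackson damping and Chebyshev reconstruction on a grid
    N = n_moments
    k = np.arange(N)
    g = ((N - k + 1) * np.cos(np.pi * k / (N + 1))
         + np.sin(np.pi * k / (N + 1)) / np.tan(np.pi / (N + 1))) / (N + 1)
    x = np.linspace(-0.99, 0.99, 400)
    T = np.cos(np.outer(np.arccos(x), k))          # T_k(x) = cos(k arccos x)
    rho = (g[0] * mu[0] + 2.0 * (T[:, 1:] * (g[1:] * mu[1:])).sum(axis=1)) \
          / (np.pi * np.sqrt(1.0 - x ** 2))
    return x, rho
```

The dominant cost is the matrix-vector product inside the recurrence, which is exactly the sparse building block whose performance and energy behavior the paper engineers.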


International Workshop on Eigenvalue Problems: Algorithms, Software and Applications in Petascale Computing | 2015

Improved Coefficients for Polynomial Filtering in ESSEX

Martin Galgon; Lukas Krämer; Bruno Lang; Andreas Alvermann; H. Fehske; Andreas Pieper; Georg Hager; Moritz Kreutzer; Faisal Shahzad; Gerhard Wellein; Achim Basermann; Melven Röhrig-Zöllner; Jonas Thies

The ESSEX project is an ongoing effort to provide exascale-enabled sparse eigensolvers, especially for quantum physics and related application areas. In this paper we first briefly summarize some key achievements that have been made within this project.


PAMM | 2014

Improving robustness of the FEAST algorithm and solving eigenvalue problems from graphene nanoribbons

Martin Galgon; Lukas Krämer; Bruno Lang; Andreas Alvermann; H. Fehske; Andreas Pieper


PAMM | 2011

The FEAST algorithm for large eigenvalue problems

Martin Galgon; Lukas Krämer; Bruno Lang

Collaboration


Dive into Martin Galgon's collaborations.

Top Co-Authors

Bruno Lang (University of Wuppertal)
Andreas Pieper (University of Greifswald)
H. Fehske (University of Greifswald)
Georg Hager (University of Erlangen-Nuremberg)
Jonas Thies (German Aerospace Center)
Moritz Kreutzer (University of Erlangen-Nuremberg)
Faisal Shahzad (University of Erlangen-Nuremberg)