Hermann Lederer
Max Planck Society
Publication
Featured research published by Hermann Lederer.
Proceedings of the National Academy of Sciences of the United States of America | 2003
Claudia Baar; Mark Eppinger; Guenter Raddatz; Jörg Simon; Christa Lanz; Oliver Klimmek; Ramkumar Nandakumar; Roland Gross; Andrea Rosinus; Heike Keller; Pratik Jagtap; Burkhard Linke; Folker Meyer; Hermann Lederer; Stephan C. Schuster
To understand the origin and emergence of pathogenic bacteria, knowledge of the genetic inventory of their nonpathogenic relatives is a prerequisite. Therefore, the 2.11-megabase genome sequence of Wolinella succinogenes, which is closely related to the pathogenic bacteria Helicobacter pylori and Campylobacter jejuni, was determined. Despite being considered nonpathogenic to its bovine host, W. succinogenes holds an extensive repertoire of genes homologous to known bacterial virulence factors. Many of these genes have been acquired by lateral gene transfer, because part of the virulence plasmid pVir and an N-linked glycosylation gene cluster were found to be syntenic between C. jejuni and genomic islands of W. succinogenes. In contrast to other host-adapted bacteria, W. succinogenes harbors the highest density of bacterial sensor kinases found in any bacterial genome to date, together with an elaborate signaling circuitry of the GGDEF family of proteins. Because the analysis of the W. succinogenes genome also revealed genes related to soil- and plant-associated bacteria, such as the nif genes, W. succinogenes may represent a member of the epsilon proteobacteria with a life cycle outside its host.
Parallel Computing | 2011
Thomas Auckenthaler; Volker Blum; Hans-Joachim Bungartz; Thomas Huckle; Rainer Johanni; Lukas Krämer; Bruno Lang; Hermann Lederer; Paul R. Willems
The computation of selected eigenvalues and eigenvectors of a symmetric (Hermitian) matrix is an important subtask in many contexts, for example in electronic structure calculations. If a significant portion of the eigensystem is required then typically direct eigensolvers are used. The central three steps are: reduce the matrix to tridiagonal form, compute the eigenpairs of the tridiagonal matrix, and transform the eigenvectors back. To better utilize memory hierarchies, the reduction may be effected in two stages: full to banded, and banded to tridiagonal. Then the back transformation of the eigenvectors also involves two stages. For large problems, the eigensystem calculations can be the computational bottleneck, in particular with large numbers of processors. In this paper we discuss variants of the tridiagonal-to-banded back transformation, improving the parallel efficiency for large numbers of processors as well as the per-processor utilization. We also modify the divide-and-conquer algorithm for symmetric tridiagonal matrices such that it can compute a subset of the eigenpairs at reduced cost. The effectiveness of our modifications is demonstrated with numerical experiments.
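As a serial illustration of the three-step approach described above, the following sketch uses SciPy stand-ins for the parallel kernels the paper addresses: reduction to tridiagonal form, a subset eigensolve of the tridiagonal matrix, and the back transformation of the eigenvectors. The SciPy routines and the toy matrix are assumptions for illustration only; the paper's contribution concerns the distributed-memory versions of these steps.

```python
# Minimal serial sketch of the three-step dense symmetric eigensolver,
# computing only a subset of eigenpairs in the tridiagonal stage.
import numpy as np
from scipy.linalg import hessenberg, eigh_tridiagonal

rng = np.random.default_rng(0)
n, k = 500, 50                      # matrix size, number of wanted eigenpairs
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                   # make it symmetric

# Step 1: reduce to tridiagonal form, A = Q T Q^T
# (for a symmetric matrix, the Hessenberg form is tridiagonal).
T, Q = hessenberg(A, calc_q=True)
d, e = np.diag(T), np.diag(T, -1)

# Step 2: eigenpairs of the tridiagonal matrix; request only the
# k smallest, analogous to the subset divide-and-conquer variant.
w, V = eigh_tridiagonal(d, e, select='i', select_range=(0, k - 1))

# Step 3: back-transform the eigenvectors to those of A.
X = Q @ V

# Verify: A x = lambda x for each computed pair.
assert np.allclose(A @ X, X * w, atol=1e-6)
```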
Grid Computing | 2011
Wolfgang Gentzsch; Denis Girou; Alison Kennedy; Hermann Lederer; Johannes Reetz; Morris Riedel; Andreas Schott; Andrea Vanni; Mariano Vázquez; Jules Wolfrat
The paper presents an overview of the current research and achievements of the DEISA project, with a focus on the general concept of the infrastructure, the operational model, application projects and science communities, the DEISA Extreme Computing Initiative, user and application support, operations and technology, services, collaborations and interoperability, and the use of standards and policies. The paper concludes with a discussion about the long-term sustainability of the DEISA infrastructure.
Journal of Physics: Condensed Matter | 2014
Andreas Marek; Volker Blum; Rainer Johanni; Ville Havu; Bruno Lang; Thomas Auckenthaler; Alexander Heinecke; Hans-Joachim Bungartz; Hermann Lederer
Obtaining the eigenvalues and eigenvectors of large matrices is a key problem in electronic structure theory and many other areas of computational science. The computational effort formally scales as O(N^3) with the size of the investigated problem N (e.g. the electron count in electronic structure theory), and thus often defines the system-size limit that practical calculations cannot overcome. In many cases, more than just a small fraction of the possible eigenvalue/eigenvector pairs is needed, so that iterative solution strategies that focus only on a few eigenvalues become ineffective. Likewise, it is not always desirable or practical to circumvent the eigenvalue solution entirely. We here review some current developments regarding dense eigenvalue solvers and then focus on the Eigenvalue soLvers for Petascale Applications (ELPA) library, which facilitates the efficient algebraic solution of symmetric and Hermitian eigenvalue problems for dense matrices with real-valued and complex-valued entries, respectively, on parallel computer platforms. ELPA addresses standard as well as generalized eigenvalue problems, relying on the well-documented matrix layout of the Scalable Linear Algebra PACKage (ScaLAPACK) library but replacing all actual parallel solution steps with subroutines of its own. For these steps, ELPA significantly outperforms the corresponding ScaLAPACK routines and proprietary libraries that implement the ScaLAPACK interface (e.g. Intel's MKL). The most time-critical step is the reduction of the matrix to tridiagonal form and the corresponding back transformation of the eigenvectors. ELPA offers both a one-step tridiagonalization (successive Householder transformations) and a two-step transformation that is more efficient, especially for larger matrices and larger numbers of CPU cores. ELPA is based on the MPI standard, with an early hybrid MPI-OpenMP implementation available as well. Scalability beyond 10,000 CPU cores for problem sizes arising in the field of electronic structure theory is demonstrated for current high-performance computer architectures such as Cray or Intel/InfiniBand. For a matrix of dimension 260,000, scalability up to 295,000 CPU cores has been shown on BlueGene/P.
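As a hedged illustration of one piece of what such a library does, the sketch below reduces a generalized symmetric eigenproblem A x = λ B x to standard form via a Cholesky factorization of B, solves it, and back-substitutes the eigenvectors. This serial NumPy/SciPy version only shows the underlying algebra; it is not ELPA's API, and the matrices are toy examples.

```python
# Reduce A x = lambda B x (B symmetric positive definite) to a
# standard eigenproblem via B = L L^T, then back-substitute.
import numpy as np
from scipy.linalg import cholesky, eigh, solve_triangular

rng = np.random.default_rng(1)
n = 200
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)  # SPD

L = cholesky(B, lower=True)                     # B = L L^T
# Standard-form matrix: C = L^{-1} A L^{-T}
C = solve_triangular(L, A, lower=True)
C = solve_triangular(L, C.T, lower=True).T

w, Y = eigh(C)                                  # standard problem C y = lambda y
X = solve_triangular(L.T, Y, lower=False)       # back-substitute: x = L^{-T} y

# Check against SciPy's built-in generalized solver.
w_ref = eigh(A, B, eigvals_only=True)
assert np.allclose(w, w_ref, atol=1e-8)
```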
Nucleic Acids Research | 2006
Markus Rampp; Thomas Soddemann; Hermann Lederer
We describe a versatile and extensible integrated bioinformatics toolkit for the analysis of biological sequences over the Internet. The web portal offers convenient interactive access to a growing pool of chainable bioinformatics software tools and databases that are centrally installed and maintained by the RZG. Currently, supported tasks comprise sequence similarity searches in public or user-supplied databases, computation and validation of multiple sequence alignments, phylogenetic analysis, and protein structure prediction. Individual tools can be seamlessly chained into pipelines, allowing the user to process complex workflows conveniently without having to take care of any format conversions or tedious parsing of intermediate results. The toolkit is part of the Max-Planck Integrated Gene Analysis System (MIGenAS) of the Max Planck Society, reachable via the 'Start Toolkit' link on its web portal.
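To make the chaining idea concrete, here is a minimal sketch of composable pipeline stages in Python: each stage consumes and produces a common record format, so stages compose without manual format conversion. The stage names and record fields are hypothetical, invented for illustration; they are not the MIGenAS interface.

```python
# Toy pipeline abstraction: stages share one record format and chain freely.
from functools import reduce
from typing import Callable

Record = dict                      # e.g. {"id": ..., "sequence": ...}
Stage = Callable[[Record], Record]

def similarity_search(rec: Record) -> Record:      # hypothetical stage
    rec["hits"] = f"hits-for-{rec['id']}"
    return rec

def align(rec: Record) -> Record:                  # hypothetical stage
    rec["alignment"] = f"alignment-of-{rec.get('hits', rec['id'])}"
    return rec

def build_tree(rec: Record) -> Record:             # hypothetical stage
    rec["tree"] = f"tree-from-{rec['alignment']}"
    return rec

def pipeline(*stages: Stage) -> Stage:
    """Chain stages left to right; output of one is input of the next."""
    return lambda rec: reduce(lambda r, stage: stage(r), stages, rec)

workflow = pipeline(similarity_search, align, build_tree)
result = workflow({"id": "P12345", "sequence": "MKT..."})
print(result["tree"])
```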
Computer Physics Communications | 2002
Stefan Borowski; Stephan Thiel; Thorsten Klüner; Hans-Joachim Freund; Reinhard Tisma; Hermann Lederer
We present a massively parallel implementation to perform quantum dynamical wave packet calculations of molecules on surfaces. The algorithm propagates the wavefunction via the time-dependent Schrödinger equation within a finite basis representation, using split-operator and Chebyshev schemes. For the parallelization, a problem-adapted data decomposition in all dimensions is introduced that ensures optimal load balancing. A speedup analysis of the timing and scaling properties verifies the overall semi-linear scaling of the algorithm. The almost linear speedup up to 512 processing elements makes our implementation a powerful tool for high-dimensional calculations. The implementation is applied to laser-induced desorption of molecules from surfaces.
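For readers unfamiliar with the propagation schemes named above, the following serial 1D sketch shows split-operator propagation of the time-dependent Schrödinger equation with FFTs (ħ = m = 1). The grid, potential, and initial wave packet are illustrative assumptions; the paper's implementation parallelizes such steps across multiple dimensions.

```python
# 1D split-operator (Strang splitting) propagation of a wave packet.
import numpy as np

n, L, dt, steps = 1024, 40.0, 0.01, 500
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)        # momentum grid

V = 0.5 * x**2                                    # harmonic potential (example)
psi = np.exp(-(x + 5.0)**2)                       # displaced Gaussian wave packet
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * (L / n))  # normalize

expV = np.exp(-0.5j * dt * V)                     # half step in the potential
expT = np.exp(-1j * dt * k**2 / 2)                # full step in kinetic energy

for _ in range(steps):
    psi = expV * psi                              # V/2 step (position space)
    psi = np.fft.ifft(expT * np.fft.fft(psi))     # T step (momentum space)
    psi = expV * psi                              # V/2 step

# The propagation is unitary, so the norm should stay ~1.
print(np.sum(np.abs(psi)**2) * (L / n))
```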
Challenges of Large Applications in Distributed Environments | 2006
Phil Andrews; Maarten Buechli; Robert Harkness; R. Hatzky; Christopher T. Jordan; Hermann Lederer; Ralph Niederberger; Anthony Rimovsky; Andreas Schott; Thomas Soddemann; Volker Springel
A supercomputing hyper-grid spanning two continents was created as a step towards interoperability of leading grids. A dedicated network connection was established between DEISA, the leading European supercomputing grid, and TeraGrid, the leading American supercomputing grid. Both grids have adopted the approach of establishing a common, high-performance global file system, the wide-area version of IBM's GPFS. TeraGrid's approach is based on a single-site server solution under Linux, hosted by the San Diego Supercomputer Center, whereas DEISA's is a multi-site server solution, with servers currently in France, Germany, and Italy. These two grid-internal global file systems were interconnected over a dedicated, trusted network connection. During the Supercomputing Conference 2005, grand challenge applications were carried out both within DEISA and within TeraGrid, and results were written transparently to the combined global file system, whose underlying disk systems were physically distributed. Simulations were carried out in Europe and in America, results were written directly to the respective remote continent, accessible to all participating scientists on both continents, and were then directly processed further for visualization in a third location, the SC05 exhibition hall in Seattle. Grand challenge applications used for the demo included a protein structure prediction and a cosmological simulation carried out at the San Diego Supercomputer Center (SDSC), US (www.sdsc.edu), and a gyrokinetic turbulence simulation as well as a cosmological simulation carried out at the Garching Computing Centre of the Max Planck Society (RZG), Germany (www.rzg.mpg.de).
Biochimica et Biophysica Acta | 1987
Hermann Heumann; Hermann Lederer; Wolfgang Kammerer; Peter Palm; Willi Metzger; Gisela Baer
A procedure has been developed to isolate DNA fragments on a large scale. A DNA fragment of 130 base pairs containing the strong promoter A1 of the phage T7 was purified to homogeneity in amounts of 10 mg. The procedure includes the rapid purification of gram amounts of plasmid DNA, a new, simple method to separate small DNA fragments from the vector by a phenol/water partitioning system, and liquid-liquid PEG-dextran partition chromatography for the final purification of the fragment. The fragment was cloned in two vector systems: the vector pDS1, to1+ (1), which contains an efficient terminator downstream from the promoter integration site, gives high yields of 3-4 mg plasmid DNA per liter of medium. In the plasmid pWH802 (2), which is not specially designed for the amplification of a strong promoter, integration of the promoter was possible, but the yield decreased by a factor of about 50. The stability of the inserts was tested in both systems: monomeric inserts were stable in both plasmids, whereas multimeric inserts up to a tetramer were stable only in pWH802. Only one orientation of the fragment was found.
Challenges of Large Applications in Distributed Environments | 2007
Hermann Lederer; R. Hatzky; Reinhard Tisma; A. Bottino; F. Jenko
To support the world-wide ITER (International Thermonuclear Experimental Reactor) project [1], large-scale numerical simulations will be a necessity. Plasma turbulence simulations play a key role in the design, construction, and optimization of the necessary fusion devices. The simulations are so compute- and memory-intensive that applications must be able to use tens of thousands of processors efficiently; highly scalable applications are mandatory. In Europe, within the DEISA supercomputing grid project, the DEISA Extreme Computing Initiative supports application-enabling work and the use of Europe's most powerful supercomputers for challenging projects. With the support of the DEISA Extreme Computing Initiative and the DEISA Joint Research Activity in Plasma Physics, leading plasma turbulence simulations have been enabled for hyperscaling and for efficient, portable usage in the heterogeneous DEISA grid. Here we report on the large-scale applications GENE and ORB5, both of high relevance for ITER. GENE, a so-called Vlasov code, was parallelized to such a high degree that efficient usage on 32,768 processors could be demonstrated. The PIC code ORB5 was enabled for hyperscaling especially through application of the domain cloning concept; efficient usage of the ORB5 code could be demonstrated on 8,192 processors.
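A minimal sketch of the domain cloning concept mentioned above, under the assumption of a 1D grid and mpi4py: the grid is replicated on every process ("clone"), each clone deposits charge only from its own particle subset, and the partial charge densities are summed across clones. This illustrates the concept only and is not ORB5 code.

```python
# Domain cloning for a PIC step: replicated grid, partitioned particles.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, nclones = comm.Get_rank(), comm.Get_size()

ngrid, nparticles_total = 128, 1_000_000
rng = np.random.default_rng(seed=rank)

# Each clone holds the full grid but only 1/nclones of the particles.
my_particles = rng.uniform(0.0, 1.0, nparticles_total // nclones)

# Deposit charge from local particles onto the (replicated) grid.
local_rho, _ = np.histogram(my_particles, bins=ngrid, range=(0.0, 1.0))
local_rho = local_rho.astype(np.float64)

# Sum the partial charge densities across all clones.
rho = np.empty_like(local_rho)
comm.Allreduce(local_rho, rho, op=MPI.SUM)

# Every clone now has the total charge density, can solve the field
# equations redundantly, and then push its own particles.
if rank == 0:
    print("total deposited charge:", rho.sum())
```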
Parallel Computing | 2010
Wolfgang Gentzsch; Hermann Lederer
The DEISA Mini-Symposium held in conjunction with the ParCo 2009 Conference was devoted to "Extreme Computing in an Advanced Supercomputing Environment". For a representative overview of the application-related key areas involved, the Mini-Symposium was structured into three sessions, devoted to the DEISA Extreme Computing Initiative (DECI) and science community support, the application-oriented support services, and examples of scientific projects performed in the Distributed European Infrastructure for Supercomputing Applications (DEISA), drawn both from the DECI and from the fusion energy research community behind the EU FP7 project EUFORIA.