Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Eike Hermann Müller is active.

Publication


Featured research published by Eike Hermann Müller.


Physical Review D | 2012

The Upsilon spectrum and the determination of the lattice spacing from lattice QCD including charm quarks in the sea

R. J. Dowdall; Brian Colquhoun; J. O. Daldrop; C. T. H. Davies; I. D. Kendall; E. Follana; T. Hammant; R. R. Horgan; G. P. Lepage; Chris Monahan; Eike Hermann Müller

… field configurations from the MILC collaboration. Using the 2S-1S splitting to determine the lattice spacing, we are able to obtain the 1P-1S splitting to 1.4% and the 3S-1S splitting to 2.4%. Our improved result for M(Υ) - M(η_b) is 70(9) MeV and we predict M(Υ') - M(η_b') = 35(3) MeV. We also calculate π, K and η_s correlators using the Highly Improved Staggered Quark action and perform a chiral and continuum extrapolation to give values for M_{η_s} (0.6893(12) GeV) and f_{η_s} (0.1819(5) GeV) that allow us to tune the strange quark mass as well as providing an independent and consistent determination of the lattice spacing. Combining the NRQCD and HISQ analyses gives m_b/m_s = 54.7(2.5) and a value for the heavy quark potential parameter of r_1 = 0.3209(26) fm.
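
A minimal sketch of how a single splitting fixes the lattice spacing, assuming a made-up value for the dimensionless lattice splitting and approximate PDG masses for Υ(1S) and Υ(2S); this is an illustration, not the collaboration's analysis code.

```python
# Rough sketch, not the paper's analysis: fix the lattice spacing a from the
# Upsilon(2S)-Upsilon(1S) splitting. The dimensionless lattice splitting passed
# in below is a made-up placeholder; the experimental masses are approximate
# PDG values.
HBARC = 0.1973  # GeV*fm, conversion between inverse GeV and fm

def lattice_spacing_fm(a_deltaE_lat, deltaE_exp_gev=10.0233 - 9.4603):
    """Return a in fm from the dimensionless 2S-1S splitting measured on the lattice."""
    a_in_inverse_gev = a_deltaE_lat / deltaE_exp_gev
    return HBARC * a_in_inverse_gev

print(lattice_spacing_fm(0.25))  # placeholder input, prints the spacing in fm
```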


Physical Review D | 2011

Precise B, B_s, and B_c meson spectroscopy from full lattice QCD

Eric B. Gregory; Junko Shigemitsu; Kit Yan Wong; E. Follana; Heechang Na; C. T. H. Davies; E. Gámiz; Jonna Koponen; I. D. Kendall; G. Peter Lepage; Eike Hermann Müller

We give the first accurate results for B and B_s meson masses from lattice QCD including the effect of u, d, and s sea quarks, and we improve an earlier value for the B_c meson mass. By using the highly improved staggered quark (HISQ) action for u/d, s, and c quarks and NRQCD for the b quarks, we are able to achieve an accuracy in the masses of around 10 MeV. Our results are: m_B = 5.291(18) GeV, m_{B_s} = 5.363(11) GeV, and m_{B_c} = 6.280(10) GeV. Note that all QCD parameters here are tuned from other calculations, so these are parameter-free tests of QCD against experiment. We also give scalar (B*_{s0}) and axial-vector (B_{s1}) meson masses. We find these to be slightly below threshold for decay to BK and B*K, respectively.
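
The "parameter-free test" above can be pictured with a short comparison script; the experimental and kaon masses below are approximate PDG values added for illustration and do not come from the paper.

```python
# Sketch of the parameter-free test: compare the quoted lattice masses with
# rough experimental values. Only the lattice numbers come from the abstract;
# the experimental numbers are approximate PDG values added for illustration.
lattice = {"B": 5.291, "Bs": 5.363, "Bc": 6.280}      # GeV, from the abstract
experiment = {"B": 5.279, "Bs": 5.367, "Bc": 6.274}   # GeV, approximate PDG values

for meson, m_lat in lattice.items():
    diff_mev = 1000.0 * (m_lat - experiment[meson])
    print(f"{meson}: lattice {m_lat:.3f} GeV vs expt ~{experiment[meson]:.3f} GeV ({diff_mev:+.0f} MeV)")

# "Slightly below threshold": a B*_{s0} lighter than m_B + m_K cannot decay to B K.
m_K = 0.494  # GeV, approximate kaon mass
print("B K threshold ~", round(lattice["B"] + m_K, 3), "GeV")
```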


arXiv: Distributed, Parallel, and Cluster Computing | 2013

Massively parallel solvers for elliptic PDEs in Numerical Weather- and Climate Prediction

Eike Hermann Müller; Robert Scheichl

The demand for substantial increases in the spatial resolution of global weather and climate prediction models makes it necessary to use numerically efficient and highly scalable algorithms to solve the equations of large-scale atmospheric fluid dynamics. For stability and efficiency reasons, several of the operational forecasting centres, in particular the Met Office and the European Centre for Medium-Range Weather Forecasts (ECMWF) in the UK, use semi-implicit semi-Lagrangian time-stepping in the dynamical core of the model. The additional burden with this approach is that a three-dimensional elliptic partial differential equation (PDE) for the pressure correction has to be solved at every model time step, and this often constitutes a significant proportion of the time spent in the dynamical core. In global models, this PDE must be solved in a thin spherical shell. To run within tight operational time-scales, the solver has to be parallelized, and there seems to be a (perceived) misconception that elliptic solvers do not scale to large processor counts and hence implicit time-stepping cannot be used in very high-resolution global models. After reviewing several methods for solving the elliptic PDE for the pressure correction and their application in atmospheric models, we demonstrate the performance and very good scalability of Krylov subspace solvers and multigrid algorithms for a representative model equation with more than 10^10 unknowns on 65 536 cores of the High-End Computing Terascale Resource (HECToR), the UK's national supercomputer. For this, we tested and optimized solvers from two existing numerical libraries (the Distributed and Unified Numerics Environment (DUNE) and Parallel High Performance Preconditioners (hypre)) and implemented both a conjugate gradient solver and a geometric multigrid algorithm based on a tensor-product approach, which exploits the strong vertical anisotropy of the discretized equation. We study both weak and strong scalability and compare the absolute solution times for all methods; in contrast to one-level methods, the multigrid solver is robust with respect to parameter variations.
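
A hedged sketch of the kind of anisotropic model problem and Krylov solve discussed above, using SciPy rather than the DUNE or hypre libraries benchmarked in the paper; the grid sizes and anisotropy parameter are illustrative choices only.

```python
# Assemble a small anisotropic 3D model operator (strongly coupled vertical
# direction plus a zero-order term, a stand-in for the pressure-correction
# equation) and solve it with conjugate gradients from SciPy.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def laplace_1d(n):
    """Standard 1D second-difference operator with Dirichlet boundaries."""
    return sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

nx = ny = 16   # horizontal grid
nz = 32        # vertical grid
aniso = 100.0  # vertical couplings much stronger than horizontal ones
Ix, Iy, Iz = sp.identity(nx), sp.identity(ny), sp.identity(nz)

A = (sp.kron(laplace_1d(nx), sp.kron(Iy, Iz))
     + sp.kron(Ix, sp.kron(laplace_1d(ny), Iz))
     + aniso * sp.kron(Ix, sp.kron(Iy, laplace_1d(nz)))
     + sp.identity(nx * ny * nz))        # zero-order term

b = np.ones(nx * ny * nz)
u, info = cg(A, b)                       # unpreconditioned CG; info == 0 means converged
print("CG info:", info)
```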


Quarterly Journal of the Royal Meteorological Society | 2014

Massively parallel solvers for elliptic partial differential equations in numerical weather and climate prediction: scalability of elliptic solvers in NWP

Eike Hermann Müller; Robert Scheichl

The demand for substantial increases in the spatial resolution of global weather and climate prediction models makes it necessary to use numerically efficient and highly scalable algorithms to solve the equations of large-scale atmospheric fluid dynamics. For stability and efficiency reasons, several of the operational forecasting centres, in particular the Met Office and the European Centre for Medium-Range Weather Forecasts (ECMWF) in the UK, use semi-implicit semi-Lagrangian time-stepping in the dynamical core of the model. The additional burden with this approach is that a three-dimensional elliptic partial differential equation (PDE) for the pressure correction has to be solved at every model time step, and this often constitutes a significant proportion of the time spent in the dynamical core. In global models, this PDE must be solved in a thin spherical shell. To run within tight operational time-scales, the solver has to be parallelized, and there seems to be a (perceived) misconception that elliptic solvers do not scale to large processor counts and hence implicit time-stepping cannot be used in very high-resolution global models. After reviewing several methods for solving the elliptic PDE for the pressure correction and their application in atmospheric models, we demonstrate the performance and very good scalability of Krylov subspace solvers and multigrid algorithms for a representative model equation with more than 10^10 unknowns on 65 536 cores of the High-End Computing Terascale Resource (HECToR), the UK's national supercomputer. For this, we tested and optimized solvers from two existing numerical libraries (the Distributed and Unified Numerics Environment (DUNE) and Parallel High Performance Preconditioners (hypre)) and implemented both a conjugate gradient solver and a geometric multigrid algorithm based on a tensor-product approach, which exploits the strong vertical anisotropy of the discretized equation. We study both weak and strong scalability and compare the absolute solution times for all methods; in contrast to one-level methods, the multigrid solver is robust with respect to parameter variations.
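
The tensor-product idea can be illustrated with a column-wise tridiagonal solve used as a preconditioner; this is a simplified sketch under assumptions of my own, not the paper's multigrid implementation.

```python
# Because the vertical couplings dominate, a cheap preconditioner solves the
# tridiagonal problem in each vertical column exactly and ignores horizontal
# couplings. Wrapped as a LinearOperator, it can be passed as M to SciPy's
# preconditioned CG.
import numpy as np
from scipy.linalg import solve_banded
from scipy.sparse.linalg import LinearOperator

nx, ny, nz = 16, 16, 32
aniso = 100.0

# Banded storage of the vertical operator: aniso * (1D Laplacian) + identity.
band = np.zeros((3, nz))
band[0, 1:] = -aniso             # superdiagonal
band[1, :] = 2.0 * aniso + 1.0   # diagonal
band[2, :-1] = -aniso            # subdiagonal

def vertical_solve(r):
    """Apply the column-wise tridiagonal solve to a flattened grid vector."""
    r = r.reshape(nx * ny, nz)
    z = np.empty_like(r)
    for col in range(nx * ny):
        z[col] = solve_banded((1, 1), band, r[col])
    return z.ravel()

M = LinearOperator((nx * ny * nz, nx * ny * nz), matvec=vertical_solve)
# e.g. scipy.sparse.linalg.cg(A, b, M=M) with A the full anisotropic operator
```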


Computing and Visualization in Science | 2013

Matrix-free GPU implementation of a preconditioned conjugate gradient solver for anisotropic elliptic PDEs

Eike Hermann Müller; Xu Guo; Robert Scheichl; Sinan Shi

Many problems in geophysical and atmospheric modelling require the fast solution of elliptic partial differential equations (PDEs) in “flat” three dimensional geometries. In particular, an anisotropic elliptic PDE for the pressure correction has to be solved at every time step in the dynamical core of many numerical weather prediction (NWP) models, and equations of a very similar structure arise in global ocean models, subsurface flow simulations and gas and oil reservoir modelling. The elliptic solve is often the bottleneck of the forecast, and to meet operational requirements an algorithmically optimal method has to be used and implemented efficiently. Graphics Processing Units (GPUs) have been shown to be highly efficient (both in terms of absolute performance and power consumption) for a wide range of applications in scientific computing, and recently iterative solvers have been parallelised on these architectures. In this article we describe the GPU implementation and optimisation of a Preconditioned Conjugate Gradient (PCG) algorithm for the solution of a three dimensional anisotropic elliptic PDE for the pressure correction in NWP. Our implementation exploits the strong vertical anisotropy of the elliptic operator in the construction of a suitable preconditioner. As the algorithm is memory bound, performance can be improved significantly by reducing the amount of global memory access. We achieve this by using a matrix-free implementation which does not require explicit storage of the matrix and instead recalculates the local stencil. Global memory access can also be reduced by rewriting the PCG algorithm using loop fusion and we show that this further reduces the runtime on the GPU. We demonstrate the performance of our matrix-free GPU code by comparing it both to a sequential CPU implementation and to a matrix-explicit GPU code which uses existing CUDA libraries. The absolute performance of the algorithm for different problem sizes is quantified in terms of floating point throughput and global memory bandwidth.
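
A NumPy stand-in (not the paper's CUDA code) showing the two optimisations described above: a matrix-free stencil application that recomputes coefficients instead of storing a matrix, and a grouped update that a fused GPU kernel would perform in a single pass over memory. The stencil coefficients are illustrative.

```python
import numpy as np

def apply_operator(u, aniso=100.0):
    """y = A u for a 7-point stencil: horizontal + strongly coupled vertical Laplacian + identity."""
    y = (4.0 + 2.0 * aniso + 1.0) * u              # diagonal entry recomputed on the fly, not stored
    y[1:, :, :]  -= u[:-1, :, :]
    y[:-1, :, :] -= u[1:, :, :]                    # x-neighbours
    y[:, 1:, :]  -= u[:, :-1, :]
    y[:, :-1, :] -= u[:, 1:, :]                    # y-neighbours
    y[:, :, 1:]  -= aniso * u[:, :, :-1]
    y[:, :, :-1] -= aniso * u[:, :, 1:]            # strongly coupled z-neighbours
    return y

def fused_update(alpha, p, q, x, r):
    """x += alpha*p, r -= alpha*q, and return <r, r>; on a GPU these three
    operations would be fused into one kernel to cut global memory traffic."""
    x += alpha * p
    r -= alpha * q
    return float(np.vdot(r, r))

u = np.random.rand(16, 16, 32)
print(apply_operator(u).shape)
```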


Computer Physics Communications | 2009

Automated generation of lattice QCD Feynman rules

Alistair Hart; G. von Hippel; R. R. Horgan; Eike Hermann Müller

Abstract: The derivation of the Feynman rules for lattice perturbation theory from actions and operators is complicated, especially for highly improved actions such as HISQ. This task is, however, both important and particularly suitable for automation. We describe a suite of software to generate and evaluate Feynman rules for a wide range of lattice field theories with gluons and (relativistic and/or heavy) quarks. Our programs are capable of dealing with actions as complicated as (m)NRQCD and HISQ. Automated differentiation methods are used to calculate also the derivatives of Feynman diagrams.

Program summary
Program title: HiPPy, HPsrc
Catalogue identifier: AEDX_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDX_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPLv2 (see Additional comments below)
No. of lines in distributed program, including test data, etc.: 513 426
No. of bytes in distributed program, including test data, etc.: 4 893 707
Distribution format: tar.gz
Programming language: Python, Fortran95
Computer: HiPPy: single-processor workstations. HPsrc: single-processor workstations and MPI-enabled multi-processor systems
Operating system: HiPPy: any for which Python v2.5.x is available. HPsrc: any for which a standards-compliant Fortran95 compiler is available
Has the code been vectorised or parallelised?: Yes
RAM: Problem specific, typically less than 1 GB for either code
Classification: 4.4, 11.5
Nature of problem: Derivation and use of perturbative Feynman rules for complicated lattice QCD actions.
Solution method: An automated expansion method implemented in Python (HiPPy) and code to use the expansions to generate Feynman rules in Fortran95 (HPsrc).
Restrictions: No general restrictions. Specific restrictions are discussed in the text.
Additional comments: The HiPPy and HPsrc codes are released under the second version of the GNU General Public Licence (GPLv2). Therefore anyone is free to use or modify the code for their own calculations. As part of the licensing, we ask that any publications including results from the use of this code or of modifications of it cite Refs. [1,2] as well as this paper. Finally, we also ask that details of these publications, as well as of any bugs or required or useful improvements of this core code, be communicated to us.
Running time: Very problem specific, depending on the complexity of the Feynman rules and the number of integration points. Typically between a few minutes and several weeks. The installation tests provided with the program code take only a few seconds to run.
References:
[1] A. Hart, G.M. von Hippel, R.R. Horgan, L.C. Storoni, Automatically generating Feynman rules for improved lattice field theories, J. Comput. Phys. 209 (2005) 340-353, doi:10.1016/j.jcp.2005.03.010, arXiv:hep-lat/0411026.
[2] M. Lüscher, P. Weisz, Efficient numerical techniques for perturbative lattice gauge theory computations, Nucl. Phys. B 266 (1986) 309, doi:10.1016/0550-3213(86)90094-5.
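
A toy illustration of the automated expansion step, using SymPy rather than the actual HiPPy machinery: expand a link variable U = exp(i g a A) in the coupling and read off the monomials that feed the Feynman rules. The symbols g, a, A are schematic stand-ins, not identifiers from the package.

```python
import sympy as sp

g, a = sp.symbols("g a", positive=True)  # coupling and lattice spacing
A = sp.Symbol("A")                       # stands in for a gauge field component A_mu(x)

U = sp.exp(sp.I * g * a * A)             # lattice link variable
expansion = sp.series(U, g, 0, 4).removeO()  # expand through O(g^3)
for order in range(4):
    print(f"O(g^{order}):", sp.expand(expansion).coeff(g, order))
```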


Physical Review D | 2009

Moving nonrelativistic QCD for heavy-to-light form factors on the lattice

R. R. Horgan; L. Khomskii; Stefan Meinel; Matthew Wingate; Kerryann M. Foley; G. P. Lepage; G. von Hippel; Alistair Hart; Eike Hermann Müller; C. T. H. Davies; A. Dougall; Kaven Henry Yau Wong

We formulate nonrelativistic quantum chromodynamics (NRQCD) on a lattice which is boosted relative to the usual discretization frame. Moving NRQCD allows us to treat the momentum for the heavy quark arising from the frame choice exactly. We derive moving NRQCD through \mathcal{O}(1/m^2, v_{rel}^4), as accurate as the NRQCD action in present use, both in the continuum and on the lattice with \mathcal{O}(a^4) …
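
An illustrative sketch, under simplified assumptions of my own, of the momentum split that motivates moving NRQCD: the frame momentum γmv is treated exactly, and only the residual momentum k = p - γmv has to be resolved on the lattice. The numbers used are hypothetical.

```python
import math

def residual_momentum(p, m, v):
    """Split a heavy-quark momentum p (GeV) into the boosted-frame part and the residual part."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    frame_part = gamma * m * v        # carried by the boosted frame, treated exactly
    return p - frame_part             # small remainder resolved on the lattice

# Hypothetical numbers: a b quark of mass ~4.2 GeV in a frame with velocity v = 0.6
print(residual_momentum(p=3.5, m=4.2, v=0.6))
```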


arXiv: High Energy Physics - Lattice | 2009

Rare B decays with moving NRQCD and improved staggered quarks

Stefan Meinel; Eike Hermann Müller; Lew Khomskii; Alistair Hart; R. R. Horgan; Matthew Wingate



Physical Review D | 2004

Locality of the square-root method for improved staggered quarks

Alistair Hart; Eike Hermann Müller



European Physical Journal C | 2007

Aspects of radiative K^+_{e3} decays

Bastian Kubis; Eike Hermann Müller; J. Gasser; Martin Schmid


Collaboration


Dive into Eike Hermann Müller's collaborations.

Top Co-Authors

R. R. Horgan
University of Cambridge

Ron Horgan
University of Cambridge

B. K. Muite
University of Michigan