Michael S. Summers
Oak Ridge National Laboratory
Publications
Featured research published by Michael S. Summers.
Physical Review B | 2011
Emanuel Gull; Peter Staar; Sebastian Fuchs; Phani Kumar V. V. Nukala; Michael S. Summers; Thomas Pruschke; Thomas C. Schulthess; Thomas A. Maier
We present a sub-matrix update algorithm for the continuous-time auxiliary field method that allows the simulation of large lattice and impurity problems. The algorithm takes optimal advantage of modern CPU architectures by consistently using matrix instead of vector operations, resulting in a speedup of a factor of ≈ 8 and thereby allowing access to larger systems and lower temperatures. We illustrate the power of our algorithm with the example of a cluster dynamical mean field simulation of the Néel transition in the three-dimensional Hubbard model, where we show momentum-dependent self-energies for clusters with up to 100 sites.
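The core idea of the sub-matrix update is a linear-algebra reorganization: instead of applying each accepted Monte Carlo move as an immediate rank-1 (matrix-vector) correction to the Green's function matrix, several moves are accumulated and applied together as one rank-k (matrix-matrix) correction. The sketch below illustrates only that reorganization; the bookkeeping that keeps acceptance probabilities exact while updates are delayed is part of the published algorithm and is not shown. All names and the dense row-major layout are assumptions of this sketch, not DCA++ code.

```cpp
// Illustrative sketch: replacing k rank-1 Green's-function updates
// (BLAS-2, memory-bound) with one rank-k block update (BLAS-3, compute-bound).

#include <vector>
#include <cstddef>

using Matrix = std::vector<double>;   // row-major storage
using Vector = std::vector<double>;

// Naive approach: apply each accepted update immediately, G -= u * v^T.
void rank1_update(Matrix& G, const Vector& u, const Vector& v, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            G[i * n + j] -= u[i] * v[j];
}

// Sub-matrix style: collect k accepted updates in U (n x k) and V (n x k),
// then apply them together, G -= U * V^T.  The triple loop is what a single
// GEMM call would do; grouping the work this way is what lets the update run
// at matrix-matrix rather than matrix-vector speed.
void block_update(Matrix& G, const Matrix& U, const Matrix& V,
                  std::size_t n, std::size_t k) {
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j) {
            double acc = 0.0;
            for (std::size_t p = 0; p < k; ++p)
                acc += U[i * k + p] * V[j * k + p];
            G[i * n + j] -= acc;
        }
}
```

Grouping the work this way turns a memory-bound stream of outer products into a single GEMM-like kernel, which is the mechanism behind the reported speedup on modern CPUs.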
IEEE International Conference on High Performance Computing, Data, and Analytics | 2008
Gonzalo Alvarez; Michael S. Summers; Don Maxwell; Markus Eisenbach; Jeremy S. Meredith; Jeffrey M. Larkin; John M. Levesque; Thomas A. Maier; Paul R. C. Kent; Eduardo F. D'Azevedo; Thomas C. Schulthess
Staggering computational and algorithmic advances in recent years now make possible systematic Quantum Monte Carlo (QMC) simulations of high-temperature (high-Tc) superconductivity in a microscopic model, the two-dimensional (2D) Hubbard model, with parameters relevant to the cuprate materials. Here we report the algorithmic and computational advances that enable us to study the effect of disorder and nano-scale inhomogeneities on the pair formation and the superconducting transition temperature necessary to understand real materials. The simulation code is written with a generic and extensible approach and is tuned to perform well at scale. Significant algorithmic improvements have been made to make effective use of current supercomputing architectures. By implementing delayed Monte Carlo updates and a mixed single-/double-precision mode, we are able to dramatically increase the efficiency of the code. On the Cray XT4 systems of the Oak Ridge National Laboratory (ORNL), for example, we currently run production jobs on 31 thousand processors and thereby routinely achieve a sustained performance that exceeds 200 TFlop/s. On a system with 49 thousand processors we achieved a sustained performance of 409 TFlop/s. We present a study of how random disorder in the effective Coulomb interaction strength affects the superconducting transition temperature in the Hubbard model.
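The mixed single-/double-precision mode mentioned above can be pictured as follows: the hot update loop works on a single-precision copy of the Green's function matrix, and the matrix is periodically rebuilt in double precision to keep accumulated round-off under control. The sketch below is a hedged illustration of that pattern only; the names, the refresh interval, and the structure are assumptions and do not reproduce the production code.

```cpp
// Illustrative sketch (assumed structure, not the production code) of a
// mixed single-/double-precision update scheme: bulk updates in float,
// periodic double-precision refresh to bound round-off error.

#include <cstddef>
#include <vector>

struct MixedPrecisionG {
    std::vector<float> G;            // single-precision working copy (n x n, row-major)
    std::size_t n = 0;
    int updates_since_refresh = 0;

    // Hot path: apply one accepted rank-1 update entirely in single precision.
    void fast_update(const std::vector<float>& u, const std::vector<float>& v) {
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t j = 0; j < n; ++j)
                G[i * n + j] -= u[i] * v[j];
        ++updates_since_refresh;
    }

    // Cold path: every `interval` updates, overwrite the working copy with a
    // Green's function recomputed from scratch in double precision from the
    // current auxiliary-field configuration (the recomputation itself is not shown).
    void refresh(const std::vector<double>& G_recomputed_in_double, int interval = 128) {
        if (updates_since_refresh < interval) return;
        for (std::size_t i = 0; i < n * n; ++i)
            G[i] = static_cast<float>(G_recomputed_in_double[i]);
        updates_since_refresh = 0;
    }
};
```

Delayed updates combine naturally with this scheme: the accumulated rank-k corrections illustrated in the sub-matrix sketch above can themselves be applied in single precision, improving both bandwidth and arithmetic throughput.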
Physical Review Letters | 2010
Thomas A. Maier; Gonzalo Alvarez; Michael S. Summers; Thomas C. Schulthess
Using dynamic cluster quantum Monte Carlo simulations, we study the superconducting behavior of a 1/8-doped two-dimensional Hubbard model with an imposed unidirectional stripelike charge-density-wave modulation. We find a significant increase of the pairing correlations and critical temperature relative to the homogeneous system when the modulation length scale is sufficiently large. Using a separable form of the irreducible particle-particle vertex, we show that optimized superconductivity is obtained for a moderate modulation strength, due to a delicate balance between the modulation-enhanced pairing interaction and a concomitant suppression of the bare particle-particle excitations by a modulation-induced reduction of the quasiparticle weight.
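For orientation, the pairing analysis referred to here can be sketched in standard DCA notation (a paraphrase under conventional assumptions, not an equation quoted from the paper): the superconducting instability is signaled by the leading eigenvalue \(\lambda\) of the particle-particle Bethe-Salpeter equation approaching unity,

\[
-\frac{T}{N_c}\sum_{K'} \Gamma^{pp}(K,K')\,\chi_0^{pp}(K')\,\phi(K') \;=\; \lambda\,\phi(K),
\qquad
\chi_0^{pp}(K') = G(K')\,G(-K'),
\]

and a separable ansatz \(\Gamma^{pp}(K,K') \approx V\,g(K)\,g(K')\) factorizes \(\lambda\) into an interaction-strength piece and a bare particle-particle (\(\chi_0^{pp}\)) piece, which is what makes it possible to disentangle the modulation-enhanced pairing interaction from the simultaneous suppression of \(\chi_0^{pp}\) through the quasiparticle weight.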
IEEE International Conference on High Performance Computing, Data, and Analytics | 2013
Peter Staar; Thomas A. Maier; Michael S. Summers; Gilles Fourestey; Raffaele Solcà; Thomas C. Schulthess
We present a new quantum cluster algorithm to simulate models of high-Tc superconductors. This algorithm extends current methods with continuous lattice self-energies, thereby removing artificial long-range correlations. This cures the fermionic sign problem in the underlying quantum Monte Carlo solver for large clusters and realistic values of the Coulomb interaction in the entire temperature range of interest. We find that the new algorithm improves time-to-solution by nine orders of magnitude compared to current state-of-the-art quantum cluster simulations. An efficient implementation is given, which ports to multi-core as well as hybrid CPU-GPU systems. Running on 18,600 nodes of ORNL's Titan supercomputer enables us to compute a converged value of Tc/t = 0.053 ± 0.0014 for a 28-site cluster in the 2D Hubbard model with U/t = 7 at 10% hole doping. Typical simulations on Titan sustain between 9.2 and 15.4 petaflops (double precision, measured over the full run), depending on the configuration and parameters used.
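The "continuous lattice self-energies" extension can be summarized by the coarse-graining constraint that is standard in the DCA literature (our paraphrase under conventional notation, not text from the paper): the lattice self-energy \(\Sigma(\mathbf{k}, i\omega_n)\) is taken to be a smooth function of momentum whose average over each coarse-graining patch reproduces the cluster self-energy,

\[
\Sigma_c(\mathbf{K}, i\omega_n) \;=\; \frac{N_c}{N}\sum_{\tilde{\mathbf{k}}} \Sigma(\mathbf{K}+\tilde{\mathbf{k}}, i\omega_n),
\]

so that solving this constraint for a smooth \(\Sigma(\mathbf{k})\), rather than using a self-energy that is piecewise constant over the patches, is what removes the artificial long-range correlations mentioned above.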
Physical Review B | 2016
Feng Bao; Guannan Zhang; Clayton G. Webster; Yanfei Tang; Vito Scarola; Michael S. Summers; Thomas A. Maier
The analytic continuation of imaginary-time quantum Monte Carlo data to extract real-frequency spectra remains a key problem in connecting theory with experiment. Here we present a fast and efficient stochastic optimization method (FESOM) as a more accessible variant of the stochastic optimization method introduced by Mishchenko et al. and benchmark the resulting spectra against those obtained by the standard Maximum Entropy method for three representative test cases, including data taken from studies of the two-dimensional Hubbard model. We generally find that our FESOM approach gives spectra similar to the Maximum Entropy results. In particular, while the Maximum Entropy method gives superior results when the quality of the data is high, we find that FESOM is able to resolve fine structure with more detail when the quality of the data is poor. In addition, because of its stochastic nature, the method provides detailed information on the frequency-dependent uncertainty of the resulting spectra, while the Maximum Entropy method does so only for the spectral weight integrated over a finite frequency region. We therefore believe that this variant of the stochastic optimization approach provides a viable alternative to the routinely used Maximum Entropy method, especially for data of poor quality.
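The problem being solved can be stated compactly (standard notation, added here for context): given QMC data for the imaginary-time Green's function, one seeks the spectral function \(A(\omega)\) that satisfies

\[
G(\tau) \;=\; \int_{-\infty}^{\infty} d\omega\; \frac{e^{-\tau\omega}}{1+e^{-\beta\omega}}\, A(\omega),
\qquad 0 \le \tau < \beta,
\]

an inversion that is severely ill-conditioned because the kernel washes out fine structure in \(A(\omega)\). Stochastic optimization approaches of the kind described above minimize the misfit \(\chi^2 = \sum_i \bigl[G(\tau_i) - \bar{G}(\tau_i)\bigr]^2/\sigma_i^2\) over many independently randomized runs and average the resulting spectra; the run-to-run spread is what provides the frequency-resolved error estimate mentioned in the abstract.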
Journal of Physics: Conference Series | 2009
Michael S. Summers; Gonzalo Alvarez; Jeremy S. Meredith; Thomas A. Maier; Thomas C. Schulthess
The DCA++ code was one of the early science applications that ran on Jaguar at the National Center for Computational Sciences, and the first application code to sustain a petaflop/s under production conditions on a general-purpose supercomputer. The code implements a quantum cluster method with a Quantum Monte Carlo kernel to solve the 2D Hubbard model for high-temperature superconductivity. It is implemented in C++, making heavy use of the generic programming model. In this paper, we discuss how this code was developed, reaching scalability and high efficiency on the world's fastest supercomputer in only a few years. We show how the use of generic concepts combined with systematic refactoring of code is a better strategy for computational sciences than a comprehensive upfront design.
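The "generic programming model" mentioned above refers to building the application out of C++ templates, so that algorithmic drivers are written once against abstract concepts and specialized at compile time. The toy example below is purely illustrative of that style (hypothetical names, not DCA++ source): the same QMC-like driver accepts any type that models a minimal walker concept.

```cpp
// Purely illustrative sketch of generic programming in a QMC-style code:
// the Monte Carlo driver is a template over a "walker" concept, so different
// kernels (CPU, GPU, mixed precision) could be swapped in without touching
// the driver.  All names are hypothetical.

#include <cstddef>

template <class Walker>
double run_qmc(Walker walker, std::size_t sweeps) {
    double acc = 0.0;
    for (std::size_t s = 0; s < sweeps; ++s) {
        walker.update();          // propose/accept moves
        acc += walker.measure();  // accumulate a scalar estimator
    }
    return acc / static_cast<double>(sweeps);
}

// One possible model of the walker concept, with placeholder dynamics.
struct ToyWalker {
    double state = 0.0;
    void update() { state = 0.5 * state + 1.0; }
    double measure() const { return state; }
};

int main() {
    ToyWalker w;
    double mean = run_qmc(w, 1000);  // the same driver works for any conforming walker
    return mean > 0.0 ? 0 : 1;
}
```

In practice this is the mechanism that lets, for example, different precisions or hardware-specific kernels be exchanged without rewriting the driver and measurement layers.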
Physical Review B | 2009
Phani Kumar V. V. Nukala; Thomas A. Maier; Michael S. Summers; Gonzalo Alvarez; Thomas C. Schulthess
Bulletin of the American Physical Society | 2017
Urs Haehner; Raffaele Solcà; Peter Staar; Gonzalo Alvarez; Thomas Maier; Michael S. Summers; Thomas C. Schulthess
Journal of Superconductivity and Novel Magnetism | 2013
Luis G. G. V. Dias da Silva; Gonzalo Alvarez; Michael S. Summers; Elbio Dagotto
Bulletin of the American Physical Society | 2012
Shi-Quan Su; Michael S. Summers; Thomas A. Maier