Frederick H. Streitz
Lawrence Livermore National Laboratory
Publications
Featured research published by Frederick H. Streitz.
Journal of Physics: Condensed Matter | 2002
John A. Moriarty; James F. Belak; Robert E. Rudd; Per Söderlind; Frederick H. Streitz; L. H. Yang
We present an overview of recent work on quantum-based atomistic simulation of materials properties in transition metals performed in the Metals and Alloys Group at Lawrence Livermore National Laboratory. Central to much of this effort has been the development, from fundamental quantum mechanics, of robust many-body interatomic potentials for bcc transition metals via model generalized pseudopotential theory (MGPT), providing close linkage between ab initio electronic-structure calculations and large-scale static and dynamic atomistic simulations. In the case of tantalum (Ta), accurate MGPT potentials have been obtained that are applicable to structural, thermodynamic, defect, and mechanical properties over wide ranges of pressure and temperature. Successful application areas discussed include structural phase stability, equation of state, melting, rapid resolidification, high-pressure elastic moduli, ideal shear strength, vacancy and self-interstitial formation and migration, grain-boundary atomic structure, and dislocation core structure and mobility. A number of the simulated properties allow detailed validation of the Ta potentials through comparisons with experiment and/or parallel electronic-structure calculations. Elastic and dislocation properties provide direct input into higher-length-scale multiscale simulations of plasticity and strength. A corresponding effort has also been initiated on multiscale materials modelling of fracture and failure. Here, large-scale atomistic simulations and novel real-time characterization techniques are being used to study void nucleation, growth, interaction, and coalescence in series-end fcc transition metals. We have investigated the microscopic mechanisms of void nucleation in polycrystalline copper (Cu), and void growth in single-crystal and polycrystalline Cu, undergoing triaxial expansion at a large, constant strain rate, a process central to the initial phase of dynamic fracture. The influence of pre-existing microstructure has been characterized both for nucleation and for growth, and these processes are found to be in agreement with the general features of void distributions observed in experiment. We have also examined some of the microscopic mechanisms of plasticity associated with void growth.
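MGPT itself is a many-body, quantum-derived potential and is not reproduced here, but the role any interatomic potential plays in large-scale atomistic simulation can be shown with a minimal sketch. The Python below evaluates forces from a generic Lennard-Jones pair potential inside a velocity-Verlet loop; the potential choice, parameter values, and function names are illustrative assumptions, not part of MGPT.

```python
# Minimal classical MD sketch: Lennard-Jones pair potential plus a
# velocity-Verlet integrator. MGPT is far more elaborate; this only
# illustrates where a potential enters an atomistic simulation.
# All values are assumed, in reduced units.
EPS, SIGMA, DT = 1.0, 1.0, 0.001

def forces(pos):
    """Brute-force O(N^2) pair forces and total potential energy."""
    f = [[0.0, 0.0, 0.0] for _ in pos]
    energy = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            d = [pos[i][k] - pos[j][k] for k in range(3)]
            r2 = sum(x * x for x in d)
            inv6 = (SIGMA * SIGMA / r2) ** 3
            energy += 4.0 * EPS * inv6 * (inv6 - 1.0)
            fmag = 24.0 * EPS * inv6 * (2.0 * inv6 - 1.0) / r2  # -(dU/dr)/r
            for k in range(3):
                f[i][k] += fmag * d[k]
                f[j][k] -= fmag * d[k]
    return f, energy

def velocity_verlet(pos, vel, steps):
    """Advance positions and velocities (unit masses) by `steps` time steps."""
    f, energy = forces(pos)
    for _ in range(steps):
        for i in range(len(pos)):
            for k in range(3):
                vel[i][k] += 0.5 * DT * f[i][k]   # half kick
                pos[i][k] += DT * vel[i][k]       # drift
        f, energy = forces(pos)
        for i in range(len(pos)):
            for k in range(3):
                vel[i][k] += 0.5 * DT * f[i][k]   # second half kick
    return energy

# 64 atoms on a simple cubic lattice, initially at rest.
positions = [[1.2 * i, 1.2 * j, 1.2 * k]
             for i in range(4) for j in range(4) for k in range(4)]
velocities = [[0.0, 0.0, 0.0] for _ in positions]
print(velocity_verlet(positions, velocities, steps=20))
```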
conference on high performance computing (supercomputing) | 2007
James N. Glosli; David F. Richards; Kyle Caspersen; Robert E. Rudd; John A. Gunnels; Frederick H. Streitz
We report the computational advances that have enabled the first micron-scale simulation of a Kelvin-Helmholtz (KH) instability using molecular dynamics (MD). The advances are in three key areas for massively parallel computation such as on BlueGene/L (BG/L): fault tolerance, application kernel optimization, and highly efficient parallel I/O. In particular, we have developed novel capabilities for handling hardware parity errors and improving the speed of interatomic force calculations, while achieving near optimal I/O speeds on BG/L, allowing us to achieve excellent scalability and improve overall application performance. As a result we have successfully conducted a 2-billion atom KH simulation amounting to 2.8 CPU-millennia of run time, including a single, continuous simulation run in excess of 1.5 CPU-millennia. We have also conducted 9-billion and 62.5-billion atom KH simulations. The current optimized ddcMD code is benchmarked at 115.1 TFlop/s in our scaling study and 103.9 TFlop/s in a sustained science run, with additional improvements ongoing. These improvements enabled us to run the first MD simulations of micron-scale systems developing the KH instability.
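The abstract describes, but does not detail, the parity-error handling in ddcMD. Purely as an illustration of the general idea, the sketch below rolls a simulation back to an in-memory snapshot when a (hypothetical) recoverable error is signalled and redoes the lost steps; the names ParityError, advance, and run_with_rollback are invented for this sketch.

```python
import copy

class ParityError(Exception):
    """Stand-in for a recoverable hardware error surfaced to the application."""

def run_with_rollback(state, advance, n_steps, checkpoint_every=10):
    """Advance a simulation, keeping an in-memory snapshot to retry from.

    `state` is any copyable simulation state and `advance(state)` performs one
    time step in place, possibly raising ParityError. Purely illustrative.
    """
    snapshot = copy.deepcopy(state)
    step = 0
    while step < n_steps:
        try:
            advance(state)
            step += 1
            if step % checkpoint_every == 0:
                snapshot = copy.deepcopy(state)   # cheap in-memory checkpoint
        except ParityError:
            # Roll back to the last good snapshot and redo the lost steps.
            state = copy.deepcopy(snapshot)
            step = (step // checkpoint_every) * checkpoint_every
    return state
```

In the real code, restart files written at high I/O rates would serve a similar role across full job failures; the in-memory scheme above only illustrates recovery from an error caught within a run.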
Physical Review E | 2008
Jim Glosli; Frank Graziani; Richard M. More; Michael S. Murillo; Frederick H. Streitz; Michael P. Surh; Lorin X. Benedict; Stefan P. Hau-Riege; A. B. Langdon; Richard A. London
The temperature equilibration rate between electrons and protons in dense hydrogen has been calculated with molecular dynamics simulations for temperatures between 10 and 600 eV and densities between 10^20 cm^-3 and 10^24 cm^-3. Careful attention has been devoted to convergence of the simulations, including the role of semiclassical potentials. We find that for Coulomb logarithms L ≳ 1, a model by Gericke-Murillo-Schlanges (GMS) [D. O. Gericke, Phys. Rev. E 65, 036418 (2002)] based on a T-matrix method and the approach by Brown-Preston-Singleton [L. S. Brown, Phys. Rep. 410, 237 (2005)] agree with the simulation data to within the error bars of the simulation. For smaller Coulomb logarithms, the GMS model is consistent with the simulation results. Landau-Spitzer models are consistent with the simulation data for L > 4.
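The quantity being compared here is an electron-ion energy exchange rate. As a minimal illustration of how such a rate enters a two-temperature description, the sketch below integrates dTe/dt = nu*(Ti - Te) and dTi/dt = nu*(Te - Ti) (equal electron and ion densities, as in hydrogen) with the rate nu treated as an input; the numerical values are placeholders and no coefficients from the Landau-Spitzer, GMS, or BPS models are reproduced.

```python
def relax_two_temperature(te, ti, nu, dt, n_steps):
    """Integrate dTe/dt = nu*(Ti - Te), dTi/dt = nu*(Te - Ti) (equal densities).

    `nu` [1/s] is the electron-ion energy exchange rate, which in practice
    would come from a model such as Landau-Spitzer, GMS, or BPS, or be fit
    to MD data. Explicit Euler is adequate for a sketch.
    """
    history = [(0.0, te, ti)]
    for step in range(1, n_steps + 1):
        dte = nu * (ti - te) * dt
        dti = nu * (te - ti) * dt
        te, ti = te + dte, ti + dti
        history.append((step * dt, te, ti))
    return history

# Hypothetical numbers: 600 eV electrons, 100 eV protons, nu = 1e12 1/s.
trace = relax_two_temperature(te=600.0, ti=100.0, nu=1e12, dt=1e-15, n_steps=5000)
print(trace[-1])   # both temperatures approach the common value of 350 eV
```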
Presented at: SciDAC 2006, Denver, CO, United States, Jun 25 - Jun 29, 2006 | 2006
Frederick H. Streitz; James N. Glosli; Mehul Patel; Bor Chan; Robert Kim Yates; Bronis R. de Supinski; James C. Sexton; John A. Gunnels
We investigate solidification in metal systems ranging in size from 64,000 to 524,288,000 atoms on the IBM BlueGene/L computer at LLNL. Using the newly developed ddcMD code, we achieve performance rates as high as 103 TFlop/s, with a performance of 101.7 TFlop/s sustained over a 7-hour run on 131,072 CPUs. We demonstrate superb strong and weak scaling. Our calculations are significant as they represent the first atomic-scale model of metal solidification to proceed, without finite-size effects, from spontaneous nucleation and growth of solid out of the liquid, through the coalescence phase, and into the onset of coarsening. Thus, our simulations represent the first step towards an atomistic model of nucleation and growth that can directly link atomistic to mesoscopic length scales.
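Strong and weak scaling are quoted without definitions; the small sketch below computes the usual efficiencies (fixed problem size for strong scaling, problem size proportional to processor count for weak scaling) from hypothetical timings, which are not the paper's measurements.

```python
def strong_scaling_efficiency(base_procs, base_time, procs, time):
    """Strong scaling: fixed total problem size; ideal time falls as 1/p."""
    return (base_time * base_procs) / (time * procs)

def weak_scaling_efficiency(base_time, time):
    """Weak scaling: problem size grows with p; ideal time stays constant."""
    return base_time / time

# Hypothetical timings, not the paper's measurements.
print(strong_scaling_efficiency(4096, 120.0, 131072, 4.1))   # ~0.91
print(weak_scaling_efficiency(118.0, 124.0))                 # ~0.95
```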
ieee international conference on high performance computing data and analytics | 2009
David F. Richards; James N. Glosli; Bor Chan; M. Dorr; Erik W. Draeger; Jean-Luc Fattebert; William D. Krauss; Thomas E. Spelce; Frederick H. Streitz; Mike Surh; John A. Gunnels
With supercomputers anticipated to expand from thousands to millions of cores, one of the challenges facing scientists is how to effectively utilize this ever-increasing number. We report here an approach that creates a heterogeneous decomposition by partitioning effort according to the scaling properties of the component algorithms. We demonstrate our strategy by developing a capability to model hot dense plasma. We have performed benchmark calculations ranging from millions to billions of charged particles, including a 2.8 billion particle simulation that achieved 259.9 TFlop/s (26% of peak performance) on the 294,912-CPU JUGENE computer at the Jülich Supercomputing Centre in Germany. With this unprecedented simulation capability we have begun an investigation of plasma fusion physics under conditions where both theory and experiment are lacking: in the strongly coupled regime as the plasma begins to burn. Our strategy is applicable to other problems involving long-range forces (e.g., biological or astrophysical simulations). We believe that the flexible heterogeneous decomposition approach demonstrated here will allow many problems to scale across current and next-generation machines.
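The heterogeneous decomposition assigns each component algorithm a share of the machine matched to how it scales. The sketch below balances an assumed short-range particle cost model against an assumed mesh/long-range cost model by choosing the rank split that minimizes the slower of the two; the cost models and constants are illustrative assumptions, not the actual ddcMD cost model.

```python
import math

def particle_time(n_particles, p, cost_per_particle=1.0e-7):
    """Assumed short-range (particle-particle) cost model: ~N/p."""
    return cost_per_particle * n_particles / p

def mesh_time(n_mesh, p, cost_per_point=2.0e-7, comm_per_rank=5.0e-4):
    """Assumed long-range (mesh/FFT) cost model: compute shrinks with p,
    but collective communication grows with the rank count."""
    return cost_per_point * n_mesh * math.log2(n_mesh) / p + comm_per_rank * p

def best_split(total_ranks, n_particles, n_mesh):
    """Choose how many ranks each component gets so the slower one is fastest."""
    best = None
    for p_particle in range(1, total_ranks):
        p_mesh = total_ranks - p_particle
        step_time = max(particle_time(n_particles, p_particle),
                        mesh_time(n_mesh, p_mesh))
        if best is None or step_time < best[0]:
            best = (step_time, p_particle, p_mesh)
    return best

# Hypothetical sizes: 10 million particles, 128^3 mesh points, 4096 ranks.
print(best_split(4096, 10_000_000, 128**3))
```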
Journal of Applied Physics | 2006
Jeffrey H. Nguyen; Daniel Orlikowski; Frederick H. Streitz; John A. Moriarty; Neil C. Holmes
We have recently carried out exploratory dynamic experiments where the samples were subjected to prescribed thermodynamic paths. In typical dynamic compression experiments, the samples are thermodynamically limited to the principal Hugoniot or quasi-isentrope. With recent developments in a functionally graded material impactor, we can prescribe and shape the applied pressure profile with a similarly shaped, nonmonotonic impedance profile in the impactor. Previously inaccessible thermodynamic states beyond the quasi-isentropes and Hugoniot can now be reached in dynamic experiments with these impactors. In the light-gas-gun experiments on copper reported here, we recorded the particle velocities of the Cu–LiF interfaces and employed hydrodynamic simulations to relate them to the thermodynamic phase diagram. Peak pressures for these experiments are on the order of megabars, and the time scales range from nanoseconds to several microseconds. The strain rates of these quasi-isentropic experiments are approx...
SHOCK COMPRESSION OF CONDENSED MATTER - 2003: Proceedings of the Conference of the American Physical Society Topical Group on Shock Compression of Condensed Matter | 2004
Jeffrey H. Nguyen; Daniel Orlikowski; Frederick H. Streitz; Neil C. Holmes; John A. Moriarty
We describe here a series of dynamic compression experiments using impactors with specifically prescribed density profiles. Building upon previous impactor designs, we compose our functionally graded density impactors of materials whose densities vary from about 0.1 g/cc to more than 15 g/cc. These impactors, whose density profiles are not restricted to be monotonic, can be used to generate prescribed thermodynamic paths in the targets. These paths include quasi-isentropes as well as combinations of shock, rarefaction, and quasi-isentropic compression waves. The time scale of these experiments ranges from nanoseconds to several microseconds. Strain rates in the quasi-isentropic compression experiments vary from approximately 10^4 s^-1 to 10^6 s^-1. We applied this quasi-isentropic compression technique to resolidify water where the ice is at a higher temperature than the initial water sample. The particle velocity of quasi-isentropically compressed water exhibits a two-wave structure, and the scaling with sample thickness is consistent with the water-ice phase transition time. Experiments on resolidification of molten bismuth are also promising.
Journal of Physics A | 2009
Jim Glosli; Frank Graziani; Richard M. More; Michael S. Murillo; Frederick H. Streitz; Michael P. Surh
Hot dense radiative (HDR) plasmas common to inertial confinement fusion (ICF) and stellar interiors have high temperature (a few hundred eV to tens of keV), high density (tens to hundreds of g/cc) and high pressure (hundreds of megabars to thousands of gigabars). Typically, such plasmas undergo collisional, radiative, atomic and possibly thermonuclear processes. In order to describe HDR plasmas, computational physicists in ICF and astrophysics use atomic-scale microphysical models implemented in various simulation codes. Experimental validations of the models used for describing HDR plasmas are difficult to perform. Direct numerical simulation (DNS) of the many-body interactions of plasmas is a promising approach to model validation, but previous work either relies on the collisionless approximation or ignores radiation. We present a first attempt at a new numerical simulation technique to address a currently unsolved problem: the extension of molecular dynamics to collisional plasmas including emission and absorption of radiation. The new technique passes a key test: it relaxes to a blackbody spectrum for a plasma in local thermodynamic equilibrium. This new tool also provides a method for assessing the accuracy of energy and momentum exchange models in hot dense plasmas. As an example, we simulate the evolution of non-equilibrium electron, ion, and radiation temperatures for a hydrogen plasma using the new molecular dynamics simulation capability.
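The blackbody test mentioned above amounts to comparing a binned radiation spectrum against Planck's law. The sketch below evaluates the Planck spectral energy density u(nu, T) = (8*pi*h*nu^3/c^3)/(exp(h*nu/(k*T)) - 1) and measures the relative deviation of a synthetic spectrum from it; the synthetic data and the comparison metric are placeholders.

```python
import math

H = 6.62607015e-34      # Planck constant [J s]
C = 2.99792458e8        # speed of light [m/s]
KB = 1.380649e-23       # Boltzmann constant [J/K]

def planck_u(nu, temperature):
    """Planck spectral energy density u(nu, T) in J s / m^3."""
    x = H * nu / (KB * temperature)
    return (8.0 * math.pi * H * nu**3 / C**3) / math.expm1(x)

def relative_deviation(freqs, binned_u, temperature):
    """Mean relative deviation of a binned spectrum from the Planck curve."""
    devs = [abs(u - planck_u(nu, temperature)) / planck_u(nu, temperature)
            for nu, u in zip(freqs, binned_u)]
    return sum(devs) / len(devs)

# Synthetic 'simulation' spectrum: the Planck curve with a flat 2% offset,
# purely to exercise the comparison. T chosen to correspond to ~1 keV.
T = 1.0e3 * 1.602176634e-19 / KB          # 1 keV expressed in kelvin
freqs = [10**e for e in [16.5, 17.0, 17.5, 18.0, 18.5]]
fake_spectrum = [1.02 * planck_u(nu, T) for nu in freqs]
print(relative_deviation(freqs, fake_spectrum, T))   # ~0.02
```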
ieee international conference on high performance computing data and analytics | 2008
Bronis R. de Supinski; Martin Schulz; Vasily V. Bulatov; William H. Cabot; Bor Chan; Andrew W. Cook; Erik W. Draeger; James N. Glosli; Jeffrey Greenough; Keith Henderson; Alison Kubota; Steve Louis; Brian Miller; Mehul Patel; Thomas E. Spelce; Frederick H. Streitz; Peter L. Williams; Robert Kim Yates; Andy Yoo; George S. Almasi; Gyan Bhanot; Alan Gara; John A. Gunnels; Manish Gupta; José E. Moreira; James C. Sexton; Bob Walkup; Charles J. Archer; Francois Gygi; Timothy C. Germann
BlueGene/L (BG/L), developed through a partnership between IBM and Lawrence Livermore National Laboratory (LLNL), is currently the world's largest system both in terms of scale, with 131,072 processors, and absolute performance, with a peak rate of 367 TFlop/s. BG/L has led the last four Top500 lists with a Linpack rate of 280.6 TFlop/s for the full machine installed at LLNL and is expected to remain the fastest computer for the next few editions. However, the real value of a machine such as BG/L derives from the scientific breakthroughs that real applications can produce by successfully using its unprecedented scale and computational power. In this paper, we describe our experiences with eight large-scale applications on BG/L from several application domains, ranging from molecular dynamics to dislocation dynamics and from turbulence simulations to searches in semantic graphs. We also discuss the challenges we faced when scaling these codes and present several successful optimization techniques. All applications show excellent scaling behavior, even at very large processor counts, with one code even achieving a sustained performance of more than 100 TFlop/s, clearly demonstrating the real success of the BG/L design.
international conference on supercomputing | 2005
George S. Almasi; Gyan Bhanot; Alan Gara; Manish Gupta; James C. Sexton; Bob Walkup; Vasily V. Bulatov; Andrew W. Cook; Bronis R. de Supinski; James N. Glosli; Jeffrey Greenough; Francois Gygi; Alison Kubota; Steve Louis; Thomas E. Spelce; Frederick H. Streitz; Peter L. Williams; Robert Kim Yates; Charles J. Archer; José E. Moreira; Charles A. Rendleman
Blue Gene/L represents a new way to build supercomputers, using a large number of low-power processors together with multiple integrated interconnection networks. Whether real applications can scale to tens of thousands of processors (on a machine like Blue Gene/L) has been an open question. In this paper, we describe early experience with several physics and materials science applications on a 32,768-node Blue Gene/L system recently installed at Lawrence Livermore National Laboratory. Our study reveals some problems in the applications and in the current software implementation but, overall, excellent scaling of these applications to 32K nodes on the current Blue Gene/L system. While there is clearly room for improvement, these results represent the first proof point that MPI applications can effectively scale to over ten thousand processors. They also validate the scalability of the hardware and software architecture of Blue Gene/L.