Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where David M. Beazley is active.

Publication


Featured research published by David M. Beazley.


parallel computing | 1994

Message-passing multi-cell molecular dynamics on the Connection Machine 5

David M. Beazley; Peter S. Lomdahl

We present a new scalable algorithm for short-range molecular dynamics simulations on distributed-memory MIMD multicomputers, based on a message-passing multi-cell approach. We have implemented the algorithm on the Connection Machine 5 (CM-5) and demonstrate that meso-scale molecular dynamics with more than 10^8 particles is now possible on massively parallel MIMD computers. Typical runs show single-particle update times of 0.15 μs in two dimensions (2D) and approximately 1 μs in three dimensions (3D) on a 1024-node CM-5 without vector units, corresponding to more than 1.8 Gflops overall performance. We also present a scaling equation that agrees well with the observed timings.
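
The multi-cell approach mentioned above bins particles into spatial cells no smaller than the interaction cutoff, so only a cell and its immediate neighbours have to be searched for interacting pairs. A minimal serial Python sketch of that idea follows (the paper's implementation is a message-passing C code for the CM-5; the function and variable names here are illustrative only):

import numpy as np

def cell_pairs(positions, box, cutoff):
    """Return index pairs (i, j) with |r_i - r_j| < cutoff, found via cells."""
    ncell = int(box // cutoff)                # cells per dimension (assume >= 3)
    size = box / ncell
    cells = {}                                # (ix, iy) -> list of particle indices
    for i, (x, y) in enumerate(positions):
        key = (int(x / size) % ncell, int(y / size) % ncell)
        cells.setdefault(key, []).append(i)

    pairs = []
    for (ix, iy), members in cells.items():
        for dx in (-1, 0, 1):                 # this cell plus its 8 neighbours
            for dy in (-1, 0, 1):
                neigh = cells.get(((ix + dx) % ncell, (iy + dy) % ncell), [])
                for i in members:
                    for j in neigh:
                        if i < j:
                            d = positions[i] - positions[j]
                            d -= box * np.round(d / box)    # periodic minimum image
                            if np.dot(d, d) < cutoff * cutoff:
                                pairs.append((i, j))
    return pairs

rng = np.random.default_rng(0)
pos = rng.random((200, 2)) * 10.0             # 200 particles in a 10 x 10 box
print(len(cell_pairs(pos, box=10.0, cutoff=1.0)), "interacting pairs")

Searching only neighbouring cells keeps the pair search proportional to the number of particles at fixed density, which is what lets the method scale to 10^8 particles once each processor owns its own block of cells.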


computational science and engineering | 1997

Computational steering. Software systems and strategies

Steven G. Parker; Christopher R. Johnson; David M. Beazley

With today's large and complex applications, scientists have increasing difficulty analyzing and visualizing vast amounts of data. Computational steering is an emerging technology that addresses this problem, providing a mechanism for integrating simulation, data analysis, visualization, and postprocessing.
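
In practice, computational steering means the simulation advances in batches of timesteps while the user can inspect state, run analysis, and adjust parameters between batches instead of restarting the run. A toy Python sketch of that control loop (all names are hypothetical, not the authors' system):

class ToySimulation:
    """Stand-in for a running simulation exposed to a steering script."""
    def __init__(self, temperature):
        self.temperature = temperature
        self.step = 0
        self.energy = 0.0

    def advance(self, nsteps):
        for _ in range(nsteps):
            self.step += 1
            self.energy = 0.5 * self.temperature * self.step   # placeholder physics

sim = ToySimulation(temperature=300.0)
for _ in range(3):
    sim.advance(100)                                    # run a batch of timesteps
    print(f"step {sim.step}: energy = {sim.energy:.1f}")  # analysis/visualization hook
    sim.temperature *= 0.9                              # steer: change a parameter mid-run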


conference on high performance computing (supercomputing) | 1996

Lightweight Computational Steering of Very Large Scale Molecular Dynamics Simulations

David M. Beazley; Peter S. Lomdahl

We present a computational steering approach for controlling, analyzing, and visualizing very large-scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy-to-use tool for building extensions and modules. The system is easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations over standard Internet connections. We demonstrate how we have been able to explore data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to integrate common scripting languages (including Python, Tcl/Tk, and Perl), simulation code, user extensions, and commercial data analysis packages.
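
The "works with existing C code" point is that a scripting language can call routines from already-compiled C libraries directly, so the production simulation code does not need to be rewritten. As a stand-in illustration (the paper used a dedicated extension-building tool, not ctypes), Python's ctypes can call the standard C math library on a Unix-like system:

import ctypes
import ctypes.util

# Load an existing compiled C library; libm stands in for a simulation library.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))   # 1.0, computed by compiled C code called from Python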


conference on high performance computing (supercomputing) | 1993

50 GFlops molecular dynamics on the Connection Machine-5

Peter S. Lomdahl; Pablo Tamayo; Niels Grønbech-Jensen; David M. Beazley

The authors present timings and performance numbers for a new short-range, three-dimensional (3D) molecular dynamics (MD) code, SPaSM, on the Connection Machine-5 (CM-5). They demonstrate that runs with more than 10^8 particles are now possible on massively parallel MIMD computers. To the best of their knowledge, this is at least an order of magnitude more particles than has previously been reported. Typical production runs show sustained performance (including communication) in the range of 47-50 GFlops on a 1024-node CM-5 with vector units (VUs). The speed of the code scales linearly with the number of processors and with the number of particles, and shows 95% parallel efficiency in the speedup.
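
For reference, the 95% parallel efficiency quoted here is the measured speedup divided by the ideal linear speedup. A short check in Python with placeholder timings (not the paper's raw data):

def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, nprocs):
    return speedup(t_serial, t_parallel) / nprocs

t1, tp, p = 1000.0, 1.03, 1024     # hypothetical times on 1 node vs. 1024 nodes
print(f"speedup    = {speedup(t1, tp):.0f}x")
print(f"efficiency = {efficiency(t1, tp, p):.0%}")   # roughly 95%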


International Journal of Modern Physics C | 1993

Multi-Million Particle Molecular Dynamics On The Cm-5

Peter S. Lomdahl; David M. Beazley; Pablo Tamayo; Niels Grønbech-Jensen

We outline a recently developed short-range molecular dynamics (MD) algorithm for message-passing MIMD computers. Timings and performance numbers are presented for a code, SPaSM, which implements the algorithm on the Connection Machine-5 (CM-5). We demonstrate that runs with more than 10^8 particles are now possible on massively parallel MIMD computers. The speed of the code scales linearly with the number of processors and with the number of particles, and shows 95% parallel efficiency in the speedup. Recent results from 2D simulations of fracture dynamics are also presented.


international parallel processing symposium | 1997

Extensible message passing application development and debugging with Python

David M. Beazley; Peter S. Lomdahl

The authors describe how they have parallelized Python, an interpreted, object-oriented scripting language, and used it to build an extensible message-passing molecular dynamics application for the CM-5, Cray T3D, and Sun multiprocessor servers running MPI. This allows one to interact with large-scale message-passing applications, rapidly prototype new features, and perform application-specific debugging. It is even possible to write message-passing programs in Python itself. They describe some of the tools they have developed to extend Python and the results of this approach.
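
The flavour of writing message-passing programs in Python itself can be shown with the modern mpi4py package, used here as a present-day stand-in for the authors' parallel Python of 1997. A minimal ring-passing sketch, launched with something like mpirun -n 4 python ring.py:

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()          # assumes at least 2 ranks

# Pass a counter around a ring of processes and report it back at rank 0.
if rank == 0:
    comm.send(0, dest=(rank + 1) % size)
    token = comm.recv(source=size - 1)
    print(f"token returned to rank 0 after visiting {token + 1} ranks")
else:
    token = comm.recv(source=rank - 1)
    comm.send(token + 1, dest=(rank + 1) % size)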


international parallel processing symposium | 1994

A high performance communications and memory caching scheme for molecular dynamics on the CM-5

David M. Beazley; Peter S. Lomdahl; Niels Grønbech-Jensen; Pablo Tamayo

We present several techniques that we have used to optimize the performance of a message-passing C code for molecular dynamics on the CM-5. We describe our use of the CM-5 vector units and a parallel memory caching scheme that we have developed to speed up the code by more than 50%. A modification that decreases communication time by 35% is also presented, along with a discussion of how we have been able to take advantage of the CM-5 hardware without significantly compromising code portability. We have been able to speed up our original code by a factor of ten, and we feel that our modifications may be useful in optimizing the performance of other message-passing C applications on the CM-5.


parallel computing | 1995

Multi-Million Particle Molecular Dynamics on MPPs

Peter S. Lomdahl; David M. Beazley

We discuss the computational difficulties associated with performing large-scale molecular dynamics simulations involving more than 100 million atoms on modern massively parallel supercomputers. We describe various performance and memory optimization strategies, along with the method we have used to write a highly portable parallel application. Finally, we discuss some recent work addressing the problems associated with analyzing and visualizing the data generated from multi-million particle MD simulations.


Journal of Computer-aided Materials Design | 1996

Large-scale molecular dynamics simulations of fracture and deformation

S. J. Zhou; David M. Beazley; Peter S. Lomdahl; Brad Lee Holian

We discuss the prospects of applying massively parallel molecular dynamics simulation to investigate brittle versus ductile fracture behavior and dislocation intersection. The idea is illustrated by simulating dislocation emission from a three-dimensional crack. For the first time, dislocation loops emitted from the crack fronts have been observed. It is found that the dislocation-emission modes, jogging or blunting, are very sensitive to boundary conditions and interatomic potentials. These 3D phenomena can be effectively visualized and analyzed by a new technique: plotting only those atoms within certain ranges of local potential energy.
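
The visualization technique mentioned at the end, plotting only atoms whose local potential energy falls within a chosen window so that dislocations and crack surfaces stand out against the bulk, amounts to a simple per-atom filter. A NumPy sketch on synthetic data (the energy values and window below are made up):

import numpy as np

rng = np.random.default_rng(1)
positions = rng.random((100_000, 3)) * 100.0          # synthetic atom positions
pot_energy = rng.normal(-3.5, 0.2, size=100_000)      # synthetic per-atom energies

e_min, e_max = -3.2, -2.8                   # window above the bulk energy
mask = (pot_energy > e_min) & (pot_energy < e_max)
defect_atoms = positions[mask]              # only these atoms get plotted

print(f"plotting {mask.sum()} of {len(positions)} atoms")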


Radiation Effects and Defects in Solids | 1997

Molecular dynamics of very large systems

Peter S. Lomdahl; David M. Beazley; S. J. Zhou; Brad Lee Holian

We briefly present recent results obtained with our parallel molecular dynamics (MD) code, SPaSM, from large-scale multi-million-atom simulations used to study crack blunting and dislocation generation in copper. We also discuss some recent work addressing the problems associated with analyzing and visualizing the data generated from multi-million particle MD simulations.

Collaboration


Dive into David M. Beazley's collaborations.

Top Co-Authors

Peter S. Lomdahl (Los Alamos National Laboratory)
S. J. Zhou (Los Alamos National Laboratory)
Brad Lee Holian (Los Alamos National Laboratory)
Pablo Tamayo (Los Alamos National Laboratory)
Ramon Ravelo (University of Texas at El Paso)