Publication


Featured research published by William Paul Dannevik.


Physics of Fluids | 2002

Three-dimensional simulation of a Richtmyer-Meshkov instability with a two-scale initial perturbation

R.H. Cohen; William Paul Dannevik; Andris M. Dimits; Donald Eliason; Arthur A. Mirin; Ye Zhou; David H. Porter; Paul R. Woodward

Three-dimensional high-resolution simulations (up to 8 billion zones) have been performed for a Richtmyer–Meshkov instability produced by passing a shock through a contact discontinuity with a two-scale initial perturbation. The setup approximates shock-tube experiments with a membrane pushed through a wire mesh. The simulation produces mixing-layer widths similar to those observed experimentally. Comparison of runs at various resolutions suggests a mixing transition from unstable to turbulent flow as the numerical Reynolds number is increased. At the highest resolutions, the spectrum exhibits a region of power-law decay in which the spectral flux is approximately constant, suggestive of an inertial range, but with a weaker wave-number dependence than Kolmogorov scaling, about k^(-6/5). Analysis of structure functions at the end of the simulation indicates the persistence of structures with velocities largest in the streamwise direction. Comparison of three-dimensional and two-dimensional runs illustrates th...
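
As an illustration of the kind of spectral diagnostic described above (not code from the paper), the following Python/NumPy sketch computes a shell-averaged kinetic-energy spectrum for a hypothetical periodic velocity field and fits a power-law exponent over an assumed inertial range, which is how a slope such as k^(-6/5) or the Kolmogorov k^(-5/3) would be estimated. The grid size, the field values, and the fit range are assumptions made for the example.

```python
# Illustrative sketch only: shell-averaged kinetic-energy spectrum and
# power-law fit for a periodic velocity field on a cubic grid.
# Not the paper's diagnostics; grid size and fit range are assumptions.
import numpy as np

def energy_spectrum(u, v, w):
    """Return shell-averaged kinetic-energy spectrum E(k) for a periodic box."""
    n = u.shape[0]
    uh, vh, wh = (np.fft.fftn(f) / f.size for f in (u, v, w))
    ke = 0.5 * (np.abs(uh)**2 + np.abs(vh)**2 + np.abs(wh)**2)

    k1d = np.fft.fftfreq(n, d=1.0 / n)           # integer wavenumbers
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)

    kbins = np.arange(0.5, n // 2, 1.0)          # spherical shell edges
    kcent = 0.5 * (kbins[:-1] + kbins[1:])
    which = np.digitize(kmag.ravel(), kbins)
    E = np.bincount(which, weights=ke.ravel(),
                    minlength=kbins.size + 1)[1:kbins.size]
    return kcent, E

def powerlaw_slope(k, E, kmin, kmax):
    """Least-squares slope of log E vs log k over an assumed inertial range."""
    sel = (k >= kmin) & (k <= kmax) & (E > 0)
    return np.polyfit(np.log(k[sel]), np.log(E[sel]), 1)[0]

if __name__ == "__main__":
    n = 64                                       # toy resolution, far below 8e9 zones
    rng = np.random.default_rng(0)
    u, v, w = (rng.standard_normal((n, n, n)) for _ in range(3))
    k, E = energy_spectrum(u, v, w)
    print("fitted exponent:", powerlaw_slope(k, E, kmin=4, kmax=16))
```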


Parallel Computing | 1995

Performance of a distributed memory finite difference atmospheric general circulation model

Michael F. Wehner; Arthur A. Mirin; Peter G. Eltgroth; William Paul Dannevik; Carlos R. Mechoso; John D. Farrara; Joseph A. Spahr

A new version of the UCLA atmospheric general circulation model suitable for massively parallel computer architectures has been developed. This paper presents the principles behind the code's design and examines performance on a variety of distributed memory computers. A two-dimensional domain decomposition strategy is used to achieve parallelism and is implemented by message passing. This parallel algorithm is shown to scale favorably as the number of processors is increased. In the fastest configuration, performance roughly equivalent to that of multitasking vector supercomputers is achieved.
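
A minimal sketch of what such a two-dimensional latitude/longitude domain decomposition can look like is given below, assuming an MPI Cartesian communicator via mpi4py. This is not the model's actual code; the global grid size, the periodicity in longitude, and the variable names are assumptions made for the example.

```python
# Illustrative sketch of a 2-D latitude/longitude domain decomposition with
# mpi4py. Not the AGCM's code; grid size and periodicity are assumptions.
from mpi4py import MPI
import numpy as np

NLAT, NLON = 180, 360                                 # hypothetical global grid

comm = MPI.COMM_WORLD
dims = MPI.Compute_dims(comm.Get_size(), 2)           # e.g. 4 ranks -> [2, 2]
cart = comm.Create_cart(dims, periods=[False, True], reorder=True)
plat, plon = cart.Get_coords(cart.Get_rank())         # this rank's patch position

def local_range(nglobal, nparts, part):
    """Split nglobal points as evenly as possible; return this part's [start, end)."""
    counts = [nglobal // nparts + (i < nglobal % nparts) for i in range(nparts)]
    start = sum(counts[:part])
    return start, start + counts[part]

lat0, lat1 = local_range(NLAT, dims[0], plat)
lon0, lon1 = local_range(NLON, dims[1], plon)

# Each rank allocates only its own sub-domain of the prognostic fields.
temperature = np.zeros((lat1 - lat0, lon1 - lon0))

print(f"rank {cart.Get_rank()}: lats [{lat0},{lat1}), lons [{lon0},{lon1})")
```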


Proceedings of the Fourth UNAM Supercomputing Conference | 2001

Very High Resolution Simulations of Compressible, Turbulent Flows

Paul R. Woodward; David H. Porter; Igor Sytine; Steve Anderson; Arthur A. Mirin; B. C. Curtis; R.H. Cohen; William Paul Dannevik; Andris M. Dimits; Donald Eliason; Karl-Heinz Winkler; Stephen W. Hodson

The steadily increasing power of supercomputing systems is enabling very high resolution simulations of compressible, turbulent flows in the high Reynolds number limit, which is of interest in astrophysics as well as in several other fluid dynamical applications. This paper discusses two such simulations, using grids of up to 8 billion cells. In each type of flow, convergence in a statistical sense is observed as the mesh is refined. The behavior of the convergent sequences indicates how a subgrid-scale model of turbulence could improve the treatment of these flows by high-resolution Euler schemes like PPM. The best resolved case, a simulation of a Richtmyer-Meshkov mixing layer in a shock tube experiment, also points the way toward such a subgrid-scale model. Analysis of the results of that simulation indicates a proportionality relationship between the energy transfer rate from large to small motions and the determinant of the deviatoric symmetric strain as well as the divergence of the velocity for the large-scale field.
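
The final sentence refers to two pointwise quantities of the large-scale velocity field: the determinant of the deviatoric (trace-free) symmetric strain and the velocity divergence. The sketch below, which is not the paper's analysis code, shows one way these quantities could be computed from a gridded velocity field with NumPy; the uniform unit grid spacing and the random test field are assumptions.

```python
# Illustrative sketch: pointwise deviatoric strain determinant and velocity
# divergence for a gridded velocity field. Not the paper's analysis code;
# uniform unit grid spacing is an assumption.
import numpy as np

def strain_diagnostics(u, v, w, dx=1.0):
    """Return (determinant of deviatoric symmetric strain, divergence) on the grid."""
    # Velocity-gradient tensor via centered differences: grads[i][j] = d u_i / d x_j.
    grads = [np.gradient(f, dx) for f in (u, v, w)]
    div = grads[0][0] + grads[1][1] + grads[2][2]

    # Symmetric strain S_ij = 0.5 (du_i/dx_j + du_j/dx_i), then remove the trace.
    S = np.empty(u.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            S[..., i, j] = 0.5 * (grads[i][j] + grads[j][i])
    for i in range(3):
        S[..., i, i] -= div / 3.0

    return np.linalg.det(S), div

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    u, v, w = (rng.standard_normal((32, 32, 32)) for _ in range(3))
    det_S, div = strain_diagnostics(u, v, w)
    print("mean det(deviatoric S):", det_S.mean(), " mean divergence:", div.mean())
```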


High Performance Distributed Computing | 1993

Toward a high performance distributed memory climate model

Michael F. Wehner; J. J. Ambrosiano; J.C. Brown; William Paul Dannevik; Peter G. Eltgroth; Arthur A. Mirin; John D. Farrara; Chung-Chun Ma; Carlos R. Mechoso; Joseph A. Spahr

As part of a long-range plan to develop a comprehensive climate systems modeling capability, the authors have taken the atmospheric general circulation model originally developed by Arakawa and collaborators at UCLA and have recast it in a portable, parallel form. The code uses an explicit time-advance procedure on a staggered three-dimensional Eulerian mesh. They have implemented a two-dimensional latitude/longitude domain decomposition message-passing strategy. Both dynamic memory management and interprocess communication are handled with macro constructs that are preprocessed prior to compilation. The code can be moved across a variety of platforms, including massively parallel processors, workstation clusters, and vector processors, with a change of only three parameters. Performance on the various platforms, as well as issues associated with coupling different models for major components of the climate system, are discussed.
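
For an explicit finite-difference update on a decomposed mesh, each sub-domain must exchange ghost-cell (halo) data with its neighbors every time step. The mpi4py sketch below illustrates that idea in the latitude direction only; it is not the model's macro-based communication code, and the one-cell halo width, the patch size, and the naming are assumptions made for the example.

```python
# Illustrative sketch of a ghost-cell (halo) exchange between neighboring
# sub-domains, the kind of nearest-neighbor message passing an explicit
# finite-difference update requires. Not the model's macro-based code;
# the halo width and local patch size are assumptions.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
dims = MPI.Compute_dims(comm.Get_size(), 2)
cart = comm.Create_cart(dims, periods=[False, True], reorder=False)

nloc = 16                                                  # hypothetical patch size
field = np.full((nloc + 2, nloc + 2), float(cart.Get_rank()))   # 1-cell halo

def exchange_lat_halos(f):
    """Swap the first/last interior rows with the south/north neighbors."""
    south, north = cart.Shift(0, 1)                        # MPI.PROC_NULL at the poles
    # Send the top interior row north, receive the bottom halo row from the south.
    cart.Sendrecv(np.ascontiguousarray(f[-2, :]), dest=north, sendtag=0,
                  recvbuf=f[0, :], source=south, recvtag=0)
    # Send the bottom interior row south, receive the top halo row from the north.
    cart.Sendrecv(np.ascontiguousarray(f[1, :]), dest=south, sendtag=1,
                  recvbuf=f[-1, :], source=north, recvtag=1)
    # The longitude direction is handled analogously (with column copies).

exchange_lat_halos(field)
```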


Computer Physics Communications | 1994

Climate system modeling using a domain and task decomposition message-passing approach

Arthur A. Mirin; John Ambrosiano; J.H. Bolstad; A.J. Bourgeois; J.C. Brown; B. Chan; William Paul Dannevik; P.B. Duffy; Peter G. Eltgroth; C. Matarazzo; Michael F. Wehner

We have developed a Climate System Modeling Framework (CSMF) for high-performance computing systems, designed to schedule and couple multiple physics simulation packages in a flexible and transportable manner. Some of the major packages in the CSMF include models of atmospheric and oceanic circulation and chemistry, land surface and sea ice processes, and trace gas biogeochemistry. Parallelism is achieved through both domain decomposition and process-level concurrency, with data transfer and synchronization accomplished through message passing. Both machine transportability and architecture-dependent optimization are handled through libraries and conditional compile directives. Preliminary experiments with the CSMF have been executed on a number of high-performance platforms, including the Intel Paragon, the TMC CM-5, and the Meiko CS-2, and we are in the very early stages of optimization. Progress to date is presented.
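
As a rough illustration of combining task-level (process-level) concurrency with message passing, and not the CSMF implementation itself, the sketch below splits the world communicator into two component groups, labeled atmosphere and ocean, which run concurrently and exchange a coupling field. The component names, the rank split, and the exchanged field are assumptions made for the example.

```python
# Illustrative sketch of task-level concurrency: the world communicator is
# split into two component groups that run concurrently and exchange coupling
# data by message passing. Not the CSMF code; component names, group sizes,
# and the exchanged field are assumptions.
from mpi4py import MPI
import numpy as np

world = MPI.COMM_WORLD
rank, size = world.Get_rank(), world.Get_size()
assert size >= 2, "run with at least two MPI ranks"

# First half of the ranks model the atmosphere, the rest the ocean.
color = 0 if rank < size // 2 else 1
comp = world.Split(color, key=rank)                # intra-component communicator

sst = np.zeros(64)                                 # hypothetical coupling field

if color == 1 and comp.Get_rank() == 0:            # ocean root sends SST...
    sst[:] = 290.0
    world.Send(sst, dest=0, tag=42)                # ...to the atmosphere root
elif color == 0 and rank == 0:
    world.Recv(sst, source=size // 2, tag=42)

# Within each component, further parallelism would come from that component's
# own domain decomposition over the `comp` communicator.
```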


High Performance Distributed Computing | 1992

Distributing a climate model across gigabit networks

Carlos R. Mechoso; Chung-Chun Ma; John D. Farrara; Joseph A. Spahr; Reagan Moore; William Paul Dannevik; Michael F. Wehner; Peter G. Eltgroth; Arthur A. Mirin

The authors investigate the distribution of a climate model across homogeneous and heterogeneous computer environments with nodes that can reside at geographically different locations. The application consists of an atmospheric general circulation model (AGCM) coupled to an oceanic general circulation model (OGCM). Three levels of code decomposition are considered to achieve a high degree of parallelism and to mask communication with computation. First, the domains of both the grid-point AGCM and OGCM are divided into subdomains for which calculations are carried out concurrently (domain decomposition). Second, the model is decomposed based on the diversity of tasks performed by its major components (task decomposition). Last, computation and communication are organized in such a way that the exchange of data between different tasks is carried out in subdomains of the model domain (I/O decomposition). In a dedicated computer/network environment, the wall-clock time required by the resulting distributed application is reduced to that for the AGCM/Physics, with the other two model components and interprocessor communications running in parallel.
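
Masking communication with computation typically means starting a non-blocking data exchange, updating the parts of the sub-domain that do not depend on neighbor data, and only then waiting for the messages before finishing the boundary points. The mpi4py sketch below illustrates this pattern on a one-dimensional ring of ranks; it is not the coupled model's code, and the array sizes and the update formula are placeholders.

```python
# Illustrative sketch of masking communication with computation: boundary
# data is exchanged with non-blocking messages while the interior of the
# sub-domain, which does not depend on it, is updated. Not the coupled
# model's code; sizes and the update formula are placeholders.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size    # 1-D ring of neighbors

n = 1024
u = np.full(n + 2, float(rank))                       # field with 1-cell halos
recv_lo, recv_hi = np.empty(1), np.empty(1)

# 1. Start the halo exchange without waiting for it to finish.
reqs = [comm.Irecv(recv_lo, source=left,  tag=0),
        comm.Irecv(recv_hi, source=right, tag=1),
        comm.Isend(u[1:2],   dest=left,   tag=1),
        comm.Isend(u[-2:-1], dest=right,  tag=0)]

# 2. Update the interior, which needs no neighbor data; this computation
#    hides the communication latency.
unew = u.copy()
unew[2:-2] = 0.5 * (u[1:-3] + u[3:-1])

# 3. Complete the exchange, then finish the boundary points.
MPI.Request.Waitall(reqs)
u[0], u[-1] = recv_lo[0], recv_hi[0]
unew[1]  = 0.5 * (u[0] + u[2])
unew[-2] = 0.5 * (u[-3] + u[-1])
```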


The Journal of Supercomputing | 1993

Porting a global ocean model onto a shared-memory multiprocessor: observations and guidelines

Richard J. Procassini; Scott Whitman; William Paul Dannevik

A three-dimensional global ocean circulation model has been modified to run on the BBN TC2000 multiple instruction stream/multiple data stream (MIMD) parallel computer. Two shared-memory parallel programming models have been used to implement the global ocean model on the TC2000: the TCF (TC2000 Fortran) fork-join model and the PFP (Parallel Fortran Preprocessor) split-join model. The method chosen for the parallelization of this global ocean model on a shared-memory MIMD machine is discussed. The performance of each version of the code has been measured by varying the processor count for a fixed-resolution test case. The statically scheduled PFP version of the code achieves a higher parallel computing efficiency than does the dynamically scheduled TCF version of the code. The observed differences in the performance of the TCF and PFP versions of the code are discussed. The parallel computing performance of the shared-memory implementation of the global ocean model is limited by several factors, most notably load imbalance and network contention. The experience gained while porting this large, “real world” application onto a shared-memory multiprocessor is also presented to provide insight to the reader who may be contemplating such an undertaking.
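
The contrast between the statically scheduled PFP version and the dynamically scheduled TCF version can be pictured with a simple analogy, sketched below with Python's multiprocessing module rather than the Fortran shared-memory dialects used in the paper: static scheduling hands each worker one fixed block of iterations up front, while dynamic scheduling lets workers pull small tasks as they finish, trading scheduling overhead for load balance. The work function and the problem sizes are placeholders.

```python
# Illustrative analogy only: static vs. dynamic scheduling of loop iterations
# over worker processes, loosely mirroring the PFP (static) vs. TCF (dynamic)
# contrast described above. Not the ocean model's code; the per-column work
# function and sizes are placeholders.
from multiprocessing import Pool
import math

def column_work(j):
    """Placeholder for the per-column ocean-model work."""
    return sum(math.sin(j * k) for k in range(10_000))

if __name__ == "__main__":
    columns = list(range(512))
    with Pool(processes=4) as pool:
        # Static scheduling: each worker gets one large, fixed block up front.
        static = pool.map(column_work, columns,
                          chunksize=len(columns) // 4)
        # Dynamic scheduling: workers pull one small task at a time, which
        # balances uneven work at the cost of more scheduling overhead.
        dynamic = list(pool.imap_unordered(column_work, columns, chunksize=1))
```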


Conference on High Performance Computing (Supercomputing) | 1991

Computing climate change: can we beat nature?

Robert C. Malone; Robert M. Chervin; Richard D. Smith; William Paul Dannevik; John B. Drake

No abstract available


Conference on High Performance Computing (Supercomputing) | 1991

Climate modeling in a MIMD environment

William Paul Dannevik

No abstract available


Archive | 1990

Comparison of simulations and theory of low-frequency plasma turbulence

L. L. Lodestro; Bruce I. Cohen; R.H. Cohen; Andris M. Dimits; Yoshio Matsuda; W. M. Nevins; William A. Newcomb; Timothy J. Williams; Alice Evelyn Koniges; William Paul Dannevik

Collaboration


Dive into William Paul Dannevik's collaborations.

Top Co-Authors

Arthur A. Mirin, Lawrence Livermore National Laboratory
Andris M. Dimits, Lawrence Livermore National Laboratory
R.H. Cohen, Lawrence Livermore National Laboratory
Donald Eliason, Lawrence Livermore National Laboratory
Michael F. Wehner, Lawrence Livermore National Laboratory
Peter G. Eltgroth, Lawrence Livermore National Laboratory
Steven A. Orszag, Massachusetts Institute of Technology
B. C. Curtis, Lawrence Livermore National Laboratory