John M. Dennis
National Center for Atmospheric Research
Publications
Featured research published by John M. Dennis.
IEEE International Conference on High Performance Computing, Data, and Analytics | 2012
John M. Dennis; Jim Edwards; Katherine J. Evans; Oksana Guba; Peter H. Lauritzen; Arthur A. Mirin; Amik St-Cyr; Mark A. Taylor; Patrick H. Worley
The Community Atmosphere Model (CAM) version 5 includes a spectral element dynamical core option from NCAR’s High-Order Method Modeling Environment. It is a continuous Galerkin spectral finite-element method designed for fully unstructured quadrilateral meshes. The current configurations in CAM are based on the cubed-sphere grid. The main motivation for including a spectral element dynamical core is to improve the scalability of CAM by allowing quasi-uniform grids for the sphere that do not require polar filters. In addition, the approach provides other state-of-the-art capabilities such as improved conservation properties. Spectral elements are used for the horizontal discretization, while most other aspects of the dynamical core are a hybrid of well-tested techniques from CAM’s finite volume and global spectral dynamical core options. Here we first give an overview of the spectral element dynamical core as used in CAM. We then give scalability and performance results from CAM running with three different dynamical core options within the Community Earth System Model, using a pre-industrial time-slice configuration. We focus on high-resolution simulations, using 1/4 degree, 1/8 degree, and T341 spectral truncation horizontal grids.
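The cubed-sphere construction the abstract refers to can be made concrete in a few lines. Below is a minimal sketch of the standard equiangular gnomonic mapping from one cube face to the unit sphere; the function name and the face numbering/orientation conventions are illustrative assumptions, not CAM-SE's internal ones.

```python
import numpy as np

def cubed_sphere_face(n, face=0):
    """Map an n x n equiangular grid on one cube face to unit-sphere points.

    Standard equiangular gnomonic projection: each face is parameterized
    by angles (alpha, beta) in [-pi/4, pi/4], with gnomonic coordinates
    X = tan(alpha), Y = tan(beta). The face numbering and orientation
    below are an illustrative convention, not CAM-SE's.
    """
    ab = np.linspace(-np.pi / 4, np.pi / 4, n)
    A, B = np.meshgrid(ab, ab)
    X, Y = np.tan(A), np.tan(B)
    r = np.sqrt(1.0 + X**2 + Y**2)        # distance from the cube center
    if face < 4:                          # four equatorial faces
        lon0 = face * np.pi / 2
        x = (np.cos(lon0) - np.sin(lon0) * X) / r
        y = (np.sin(lon0) + np.cos(lon0) * X) / r
        z = Y / r
    elif face == 4:                       # north polar face
        x, y, z = -Y / r, X / r, 1.0 / r
    else:                                 # south polar face
        x, y, z = Y / r, X / r, -1.0 / r
    return x, y, z
```

Because the six faces tile the sphere with nearly equal-area quadrilaterals, grid points never cluster at the poles, which is why the quasi-uniform grid needs no polar filter.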
Journal of Climate | 2010
Frank O. Bryan; Robert A. Tomas; John M. Dennis; Dudley B. Chelton; Norman G. Loeb; Julie L. McClean
The emerging picture of frontal scale air–sea interaction derived from high-resolution satellite observations of surface winds and sea surface temperature (SST) provides a unique opportunity to test the fidelity of high-resolution coupled climate simulations. Initial analysis of the output of a suite of Community Climate System Model (CCSM) experiments indicates that characteristics of frontal scale ocean–atmosphere interaction, such as the positive correlation between SST and surface wind stress, are realistically captured only when the ocean component is eddy resolving. The strength of the coupling between SST and surface stress is weaker than observed, however, as has been found previously for numerical weather prediction models and other coupled climate models. The results are similar when the atmospheric component model grid resolution is doubled from 0.5° to 0.25°, an indication that shortcomings in the representation of subgrid scale atmospheric planetary boundary layer processes, rather than ...
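The coupling strength discussed here is typically quantified as the regression slope of spatially high-pass filtered wind-stress anomalies on the corresponding SST anomalies. Below is a one-dimensional sketch of that diagnostic, assuming a simple boxcar high-pass filter; it is illustrative only, not the authors' analysis code.

```python
import numpy as np

def coupling_coefficient(sst, stress, window=21):
    """Regression slope of wind-stress anomalies on SST anomalies.

    Both fields are spatially high-pass filtered (field minus a boxcar
    running mean) so that only frontal-scale perturbations remain.
    A 1-D illustrative sketch; real analyses work on 2-D satellite or
    model fields and filter in both directions.
    """
    kernel = np.ones(window) / window
    sst_hp = sst - np.convolve(sst, kernel, mode="same")
    str_hp = stress - np.convolve(stress, kernel, mode="same")
    sst_hp -= sst_hp.mean()
    str_hp -= str_hp.mean()
    return (sst_hp @ str_hp) / (sst_hp @ sst_hp)   # least-squares slope
```

A positive slope means stronger stress over warm frontal-scale SST anomalies, the signature the abstract reports appearing only with an eddy-resolving ocean.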
Monthly Weather Review | 2008
Amik St-Cyr; Christiane Jablonowski; John M. Dennis; Henry M. Tufo; Stephen J. Thomas
In an effort to study the applicability of adaptive mesh refinement (AMR) techniques to atmospheric models, an interpolation-based spectral element shallow-water model on a cubed-sphere grid is compared to a block-structured finite-volume method in latitude–longitude geometry. Both models utilize a nonconforming adaptation approach that doubles the resolution at fine–coarse mesh interfaces. The underlying AMR libraries are quad-tree based and ensure that neighboring regions can only differ by one refinement level. The models are compared via selected test cases from a standard test suite for the shallow-water equations, and via a barotropic instability test. These tests comprise the passive advection of a cosine bell and slotted cylinder, a steady-state geostrophic flow, a flow over an idealized mountain, a Rossby–Haurwitz wave, and the evolution of a growing barotropic wave. Both static and dynamic adaptations are evaluated, which reveal the strengths and weaknesses of the AMR techniques. Overall ...
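The "differ by one refinement level" constraint both models enforce is the classic 2:1 balance rule of quad-tree AMR. Here is a minimal sketch of that bookkeeping, assuming leaf cells are stored as (level, i, j) tuples in a set; it is neither model's actual library.

```python
def refine(cells, cell):
    """Refine leaf `cell` = (level, i, j), enforcing the 2:1 balance rule:
    face neighbors may differ by at most one refinement level.
    """
    assert cell in cells, "refine expects a current leaf cell"
    level, i, j = cell
    # Before splitting, make sure no face neighbor is more than one level
    # coarser; if one is, refine it first (ripple refinement).
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        coarse = (level - 1, (i + di) // 2, (j + dj) // 2)
        if coarse in cells:               # neighbor is one level coarser
            refine(cells, coarse)
    cells.discard(cell)
    for ci in (2 * i, 2 * i + 1):         # replace with four children
        for cj in (2 * j, 2 * j + 1):
            cells.add((level + 1, ci, cj))

cells = {(2, i, j) for i in range(4) for j in range(4)}   # uniform start
refine(cells, (2, 1, 1))        # one level deeper in the middle
refine(cells, (3, 2, 2))        # ripples: coarser neighbors refine first
```

The recursion guarantees that no 2-level jump ever appears at a fine–coarse interface, which is what keeps the nonconforming interpolation at mesh interfaces well defined.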
IEEE International Conference on High Performance Computing, Data, and Analytics | 2005
John M. Dennis; Aimé Fournier; William F. Spotz; Amik St-Cyr; Mark A. Taylor; Stephen J. Thomas; Henry M. Tufo
We first demonstrate the parallel performance of the dynamical core of a spectral element atmospheric model. The model uses continuous Galerkin spectral elements to discretize the surface of the Earth, coupled with finite differences in the radial direction. Results are presented from two distributed memory, mesh interconnect supercomputers (ASCI Red and BlueGene/L), using a two-dimensional space filling curve domain decomposition. Better than 80% parallel efficiency is obtained for fixed grids on up to 8938 processors. These runs represent the largest processor counts ever achieved for a geophysical application. They show that the upcoming Red Storm and BlueGene/L supercomputers are well suited for performing global atmospheric simulations with a 10 km average grid spacing. We then demonstrate the accuracy of the method by performing a full three-dimensional mesh refinement convergence study, using the primitive equations to model breaking Rossby waves on the polar vortex. Due to the excellent parallel performance, the model is run at several resolutions up to 36 km with 200 levels using only modest computing resources. Isosurfaces of scaled potential vorticity exhibit complex dynamical features, e.g. a primary potential vorticity tongue, and a secondary instability causing roll-up into a ring of five smaller subvortices. As the resolution is increased, these features are shown to converge while potential vorticity gradients steepen.
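The space-filling curve decomposition mentioned above orders the elements along a Hilbert curve and cuts that ordering into contiguous, nearly equal pieces, so each processor receives a compact 2-D patch. A sketch of the idea, assuming a square 2^k x 2^k grid and ignoring the cubed-sphere face topology the paper must also handle:

```python
def hilbert_index(n, x, y):
    """Distance of cell (x, y) along the Hilbert curve of an n x n grid
    (n a power of two). Classic bit-twiddling formulation.
    """
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                       # rotate quadrant into place
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

def decompose(n, nprocs):
    """Assign each cell of an n x n grid to one of `nprocs` ranks by
    cutting the Hilbert ordering into nearly equal contiguous chunks.
    """
    cells = sorted(((x, y) for x in range(n) for y in range(n)),
                   key=lambda c: hilbert_index(n, *c))
    chunk = max(1, len(cells) // nprocs)
    return {cells[k]: min(k // chunk, nprocs - 1) for k in range(len(cells))}
```

Because the Hilbert curve preserves locality, contiguous index ranges map to compact subdomains with short boundaries, which keeps nearest-neighbor communication low as the processor count grows.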
Conference on High Performance Computing (Supercomputing) | 2001
Richard D. Loft; Stephen J. Thomas; John M. Dennis
Climate modeling is a grand challenge problem where scientific progress is measured not in terms of the largest problem that can be solved but by the highest achievable integration rate. These models have been notably absent in previous Gordon Bell competitions due to their inability to scale to large processor counts. A scalable and efficient spectral element atmospheric model is presented. A new semi-implicit time stepping scheme accelerates the integration rate relative to an explicit model by a factor of two, achieving 130 years per day at T63L30 equivalent resolution. Execution rates are reported for the standard shallow water and Held-Suarez climate benchmarks on IBM SP clusters. The explicit T170 equivalent multi-layer shallow water model sustains 343 Gflops at NERSC, 206 Gflops at NPACI (SDSC) and 127 Gflops at NCAR. An explicit Held-Suarez integration sustains 369 Gflops on 128 16-way IBM nodes at NERSC.
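The semi-implicit scheme gains its factor of two by treating the fast, linear gravity-wave terms implicitly while keeping the slow nonlinear terms explicit, so the time step is limited by the advective rather than the gravity-wave CFL condition. A generic sketch of one such step follows (a trapezoidal/forward-Euler splitting; not necessarily the paper's exact scheme, and the function names are illustrative):

```python
import numpy as np

def semi_implicit_step(u, dt, L, N):
    """One semi-implicit step for du/dt = L u + N(u).

    The stiff linear operator L (e.g. the fast gravity-wave terms) is
    treated implicitly with the trapezoidal rule; the slow nonlinear
    terms N(u) are explicit. Solving the linear system each step buys a
    time step set by the slow dynamics.
    """
    I = np.eye(len(u))
    rhs = u + dt * (0.5 * (L @ u) + N(u))
    return np.linalg.solve(I - 0.5 * dt * L, rhs)
```

With the gravity-wave terms handled implicitly, dt can be chosen from the slower advective speeds, which is where the reported doubling of the integration rate over the explicit model comes from.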
IEEE International Conference on High Performance Computing, Data, and Analytics | 2012
John M. Dennis; Mariana Vertenstein; Patrick H. Worley; Arthur A. Mirin; Anthony P. Craig; Robert L. Jacob; Sheri A. Mickelson
With the fourth release of the Community Climate System Model, it is now possible to perform ultra-high-resolution climate simulations, enabling eddy-resolving ocean and sea-ice models to be coupled to a finite-volume atmosphere model for a range of atmospheric resolutions. This capability was made possible by enabling the model to use large-scale parallelism, which required a significant refactoring of the software infrastructure. We describe the scalability of two ultra-high-resolution coupled configurations on leadership-class computing platforms. We demonstrate the ability to utilize over 30,000 processor cores on a Cray XT5 system and over 60,000 cores on an IBM Blue Gene/P system to obtain climatologically relevant simulation rates for these configurations.
SIAM Journal on Scientific Computing | 2005
Allison H. Baker; John M. Dennis; Elizabeth R. Jessup
The increasing gap between processor performance and memory access time warrants the re-examination of data movement in iterative linear solver algorithms. For this reason, we explore and establish the feasibility of modifying a standard iterative linear solver algorithm in a manner that reduces the movement of data through memory. In particular, we present an alternative to the restarted GMRES algorithm for solving a single right-hand side linear system Ax = b based on solving the block linear system AX = B.
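The data-movement argument is that a block of vectors can be multiplied by A in one pass through the matrix, so building a block Krylov space span{B, AB, A^2 B, ...} streams A through memory once per step regardless of the number of columns. A minimal sketch of that kernel, assuming dense NumPy matrices (not the paper's implementation):

```python
import numpy as np

def block_krylov_basis(A, B, m):
    """Orthonormal basis of the block Krylov space
    span{B, A B, ..., A^(m-1) B}, built with block Gram-Schmidt.

    Each step performs one pass over A for all k columns at once, which
    is the data-reuse advantage over k single-vector matvecs.
    """
    Q, _ = np.linalg.qr(B)
    basis = [Q]
    for _ in range(m - 1):
        W = A @ basis[-1]                 # one pass over A, k vectors
        for Qj in basis:                  # orthogonalize against prior blocks
            W -= Qj @ (Qj.T @ W)
        Q, _ = np.linalg.qr(W)
        basis.append(Q)
    return np.hstack(basis)
```

In the abstract's block system AX = B, the right-hand side b is one column of B; how the remaining columns are chosen so that the block iteration accelerates the single solve is the subject of the paper.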
High Performance Distributed Computing | 2014
Allison H. Baker; Haiying Xu; John M. Dennis; Michael Nathan Levy; Doug Nychka; Sheri Mickelson; Jim Edwards; Mariana Vertenstein; Al Wegener
IEEE International Conference on High Performance Computing, Data, and Analytics | 2012
John M. Dennis; Jim Edwards; Raymond M. Loy; Robert L. Jacob; Arthur A. Mirin; Anthony P. Craig; Mariana Vertenstein
Parallel Computing | 1995
Steven W. Hammond; Richard D. Loft; John M. Dennis; Richard K. Sato