Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where James B. White is active.

Publication


Featured research published by James B. White.


Journal of Climate | 2012

Climate system response to external forcings and climate change projections in CCSM4

Gerald A. Meehl; Warren M. Washington; Julie M. Arblaster; Aixue Hu; Haiyan Teng; Claudia Tebaldi; Benjamin M. Sanderson; Jean-Francois Lamarque; Andrew Conley; Warren G. Strand; James B. White

Results are presented from experiments performed with the Community Climate System Model, version 4 (CCSM4) for the Coupled Model Intercomparison Project phase 5 (CMIP5). These include multiple ensemble members of twentieth-century climate with anthropogenic and natural forcings as well as single-forcing runs, sensitivity experiments with sulfate aerosol forcing, twenty-first-century representative concentration pathway (RCP) mitigation scenarios, and extensions of those scenarios beyond 2100 to 2300. Equilibrium climate sensitivity of CCSM4 is 3.20°C, and the transient climate response is 1.73°C. Global surface temperatures averaged for the last 20 years of the twenty-first century compared to the 1986–2005 reference period for six-member ensembles from CCSM4 are +0.85°, +1.64°, +2.09°, and +3.53°C for RCP2.6, RCP4.5, RCP6.0, and RCP8.5, respectively. The ocean meridional overturning circulation (MOC) in the Atlantic, which weakens during the twentieth century in the model, nearly recovers to early...
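
The anomalies quoted above are 20-year means taken against a 1986–2005 reference period. Purely as an illustration of that bookkeeping (synthetic temperature series with an invented trend and noise, not CCSM4 output), a minimal sketch:

```python
# Minimal sketch (not the CCSM4 post-processing): compute an end-of-century
# anomaly as a 20-year mean minus a 1986-2005 reference-period mean.
import numpy as np

years = np.arange(1850, 2101)
rng = np.random.default_rng(0)
# Synthetic global-mean surface temperature series (illustrative only).
tas = 13.8 + 0.008 * (years - 1850) + rng.normal(0.0, 0.1, years.size)

ref = tas[(years >= 1986) & (years <= 2005)].mean()   # reference period
end = tas[(years >= 2081) & (years <= 2100)].mean()   # last 20 years of the century
print(f"projected anomaly: {end - ref:+.2f} K")
```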


Conference on High Performance Computing (Supercomputing) | 2003

Early Evaluation of the Cray X1

Thomas H. Dunigan; Mark R. Fahey; James B. White; Patrick H. Worley

Oak Ridge National Laboratory installed a 32-processor Cray X1 in March 2003 and will have a 256-processor system installed by October 2003. In this paper we describe our initial evaluation of the X1 architecture, focusing on microbenchmarks, kernels, and application codes that highlight the performance characteristics of the X1 architecture and indicate how to use the system most efficiently.
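
The evaluation leans on microbenchmarks that expose memory bandwidth and kernel behavior. As a hedged illustration of what such a bandwidth microbenchmark measures (a STREAM-triad-style kernel in NumPy, not the benchmark suite used in the paper):

```python
# Sketch of a STREAM-triad-style bandwidth microbenchmark (illustrative only,
# not the kernels used in the paper).
import time
import numpy as np

n = 20_000_000
a = np.empty(n)
b = np.random.rand(n)
c = np.random.rand(n)
scalar = 3.0

t0 = time.perf_counter()
np.add(b, scalar * c, out=a)          # triad: a = b + scalar * c
dt = time.perf_counter() - t0

# Nominal triad traffic: read b, read c, write a (NumPy's temporary from
# scalar * c adds extra traffic in practice).
bytes_moved = 3 * n * a.itemsize
print(f"triad bandwidth ~ {bytes_moved / dt / 1e9:.1f} GB/s")
```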


Concurrency and Computation: Practice and Experience | 2005

Practical performance portability in the Parallel Ocean Program (POP)

Philip W. Jones; Patrick H. Worley; Yoshikatsu Yoshida; James B. White; John M. Levesque

The design of the Parallel Ocean Program (POP) is described with an emphasis on portability. Performance of POP is presented on a wide variety of computational architectures, including vector architectures and commodity clusters. Analysis of POP performance across machines is used to characterize performance and identify improvements while maintaining portability. A new design of the POP model, including a cache blocking and land point elimination scheme, is described with some preliminary performance results. Published in 2005 by John Wiley & Sons, Ltd.
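
The cache blocking and land-point elimination scheme mentioned above splits the horizontal grid into small blocks and simply discards blocks that contain no ocean points. A minimal sketch of that idea (invented grid and block sizes, not the POP implementation):

```python
# Sketch of cache blocking with land-block elimination, illustrating the idea
# described in the paper rather than the POP code.
import numpy as np

ny, nx = 384, 320
bs = 32                                   # assumed block size
mask = np.ones((ny, nx), dtype=bool)      # True = ocean point
mask[100:200, 50:150] = False             # pretend continent

blocks = []
for j0 in range(0, ny, bs):
    for i0 in range(0, nx, bs):
        block = mask[j0:j0 + bs, i0:i0 + bs]
        if block.any():                   # keep only blocks with ocean points
            blocks.append((j0, i0))

total = (ny // bs) * (nx // bs)
print(f"kept {len(blocks)} of {total} blocks after land elimination")
```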


High Performance Interconnects | 2004

Performance evaluation of the Cray X1 distributed shared memory architecture

Thomas H. Dunigan; Jeffrey S. Vetter; James B. White; Patrick H. Worley

The Cray X1 supercomputer is a distributed shared memory vector multiprocessor, scalable to 4096 processors and up to 65 terabytes of memory. The X1's hierarchical design uses the basic building block of the multi-streaming processor (MSP), which is capable of 12.8 GF/s for 64-bit operations. The distributed shared memory (DSM) of the X1 presents a 64-bit global address space that is directly addressable from every MSP, with an interconnect bandwidth per computation rate of one byte per floating-point operation. Our results show that this high bandwidth and low latency for remote memory accesses translate into improved application performance on important applications, such as an Eulerian gyrokinetic-Maxwell solver. Furthermore, this architecture naturally supports programming models like the Cray shmem API, Unified Parallel C (UPC), and Co-Array Fortran (CAF), and it is imperative to select the appropriate models to exploit these features, as our benchmarks demonstrate.
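
The quoted machine balance of one byte of interconnect bandwidth per flop, combined with the 12.8 GF/s MSP peak, implies roughly 12.8 GB/s of remote bandwidth per MSP. A back-of-envelope check of that arithmetic:

```python
# Back-of-envelope check of the machine balance quoted in the abstract:
# 12.8 GF/s per MSP at one byte of interconnect bandwidth per flop.
peak_flops = 12.8e9          # 64-bit flop/s per MSP (from the abstract)
bytes_per_flop = 1.0         # quoted bandwidth-to-compute balance
print(f"remote bandwidth per MSP ~ {peak_flops * bytes_per_flop / 1e9:.1f} GB/s")
```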


International Conference on Computational Science | 2009

A Scalable and Adaptable Solution Framework within Components of the Community Climate System Model

Katherine J. Evans; Damian W. I. Rouson; Andrew G. Salinger; Mark A. Taylor; Wilbert Weijer; James B. White

A framework for a fully implicit solution method is implemented into (1) the High Order Methods Modeling Environment (HOMME), which is a spectral element dynamical core option in the Community Atmosphere Model (CAM), and (2) the Parallel Ocean Program (POP) model of the global ocean. Both of these models are components of the Community Climate System Model (CCSM). HOMME is a development version of CAM and provides a scalable alternative when run with an explicit time integrator. However, it suffers from the typical time step size limit required to maintain stability. POP uses a time-split semi-implicit time integrator that allows larger time steps but less accuracy when used with scale-interacting physics. A fully implicit solution framework allows larger time step sizes and additional climate analysis capability, such as model steady state and spin-up efficiency gains, without a loss in scalability. This framework is implemented into HOMME and POP using a new Fortran interface to the Trilinos solver library, ForTrilinos, which leverages several new capabilities in the current Fortran standard to maximize robustness and speed. The ForTrilinos solution template was also designed for interchangeability; other solution methods and capability improvements can be more easily implemented into the models as they are developed, without extensive changes to the existing code structure. The utility of this approach is illustrated with a test case for each of the climate component models.
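
The fully implicit approach amounts to solving a nonlinear system at every time step, which the paper does through a Fortran interface (ForTrilinos) to the Trilinos solvers. Purely as an illustration of that idea, here is a backward-Euler step solved with a Jacobian-free Newton-Krylov method from SciPy, on a toy right-hand side rather than the HOMME or POP equations:

```python
# Sketch of one fully implicit (backward Euler) step solved with a
# Jacobian-free Newton-Krylov method, standing in for the Trilinos/ForTrilinos
# machinery described in the paper. Toy problem only.
import numpy as np
from scipy.optimize import newton_krylov

def rhs(u):
    # Invented stiff term; the real models evaluate dynamical-core tendencies.
    return -50.0 * (u - np.cos(u))

def backward_euler_step(u_old, dt):
    # Residual of the implicit update: u_new - u_old - dt * f(u_new) = 0.
    residual = lambda u_new: u_new - u_old - dt * rhs(u_new)
    return newton_krylov(residual, u_old)

u = np.array([1.0, 2.0, 3.0])
u = backward_euler_step(u, dt=0.5)    # a step far larger than the explicit limit
print(u)
```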


IEEE International Conference on High Performance Computing, Data, and Analytics | 2005

Vectorizing the Community Land Model

Forrest M. Hoffman; Mariana Vertenstein; Hideyuki Kitabata; James B. White

In this paper we describe our extensive efforts to rewrite the Community Land Model (CLM) so that it provides good vector performance on the Earth Simulator in Japan and the Cray X1 at Oak Ridge National Laboratory. We present the technical details of the old and new internal data structures, the required code reorganization, and the resulting performance improvements. We describe and compare the performance and scaling of the final CLM Version 3.0 (CLM3.0) on the IBM Power4, the Earth Simulator, and the Cray X1. At 64 processors, the performance of the model is similar on the IBM Power4, the Earth Simulator, and the Cray X1. However, the Cray X1 offers the best performance of all three platforms tested from 4 to 64 processors when OpenMP is used. Moreover, at low processor counts (16 or fewer), the model performs significantly better on the Cray X1 than on the other platforms. The vectorized version of CLM was publicly released by the National Center for Atmospheric Research as the standalone CLM3.0, as a part of the new Community Atmosphere Model Version 3.0 (CAM3.0), and as a component of the Community Climate System Model Version 3.0 (CCSM3.0) on June 23, 2004.
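
The core of the rewrite described above is a data-structure change: per-point traversals are replaced by contiguous arrays that a vector unit can sweep in long inner loops. A small sketch of the same structure-of-arrays idea, with NumPy standing in for the vector hardware and invented field names rather than CLM's:

```python
# Sketch of the data-structure idea behind a vectorization rewrite: replace a
# per-point loop over heterogeneous records with contiguous arrays swept in a
# single vectorized pass. Field names and formula are illustrative only.
import numpy as np

n = 100_000
records = [{"tsoil": 280.0 + 0.01 * i, "swe": 0.1} for i in range(n)]

# "Before": scalar loop over a list of records (poorly vectorizable).
flux_loop = [0.5 * (r["tsoil"] - 273.15) * (1.0 - r["swe"]) for r in records]

# "After": structure of arrays, one vectorized expression over all points.
tsoil = np.array([r["tsoil"] for r in records])
swe = np.full(n, 0.1)
flux_vec = 0.5 * (tsoil - 273.15) * (1.0 - swe)

print(np.allclose(flux_loop, flux_vec))   # same answer, vector-friendly layout
```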


Monthly Weather Review | 2011

Multiwavelet Discontinuous Galerkin-Accelerated Exact Linear Part (ELP) Method for the Shallow-Water Equations on the Cubed Sphere

Rick Archibald; Katherine J. Evans; John B. Drake; James B. White

In this paper a new approach is presented to increase the time-step size for an explicit discontinuous Galerkin numerical method. The attributes of this approach are demonstrated on standard tests for the shallow-water equations on the sphere. The addition of multiwavelets to the discontinuous Galerkin method, which has the benefit of being scalable, flexible, and conservative, provides a hierarchical scale structure that can be exploited to improve computational efficiency in both the spatial and temporal dimensions. This paper explains how combining a multiwavelet discontinuous Galerkin method with exact-linear-part time evolution schemes, which can remain stable for implicit-sized time steps, can help increase the time-step size for shallow-water equations on the sphere.
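
In an exact-linear-part scheme the stiff linear operator of the semi-discrete system is advanced with its exact propagator, so the step size is not limited by the linear terms. As a toy illustration of that flavor of method (a Lawson-type exponential Euler step on an invented two-variable system, not the ELP scheme or the shallow-water discretization used in the paper):

```python
# Sketch of an exponential-integrator step for du/dt = L u + N(u): the linear
# part is advanced exactly with exp(dt*L), the nonlinear part with a simple
# explicit increment (Lawson-type exponential Euler). Toy problem only.
import numpy as np
from scipy.linalg import expm

L = np.array([[0.0, -10.0],
              [10.0,  0.0]])             # stiff linear part (fast oscillation)

def N(u):                                # weak, invented nonlinear term
    return 0.1 * np.array([-u[0] * u[1], u[0] ** 2])

def elp_like_step(u, dt):
    E = expm(dt * L)                     # exact propagator for the linear part
    return E @ (u + dt * N(u))           # first-order exponential (Lawson) Euler

u = np.array([1.0, 0.0])
for _ in range(100):
    u = elp_like_step(u, dt=0.1)
print(u)
```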


Conference on High Performance Computing (Supercomputing) | 2002

Early Evaluation of the IBM p690

Patrick H. Worley; Thomas H. Dunigan; Mark R. Fahey; James B. White; Arthur S. Bland

Oak Ridge National Laboratory recently received twenty-seven 32-way IBM pSeries 690 SMP nodes. In this paper, we describe our initial evaluation of the p690 architecture, focusing on the performance of benchmarks and applications that are representative of the expected production workload.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2012

A modern solver interface to manage solution algorithms in the Community Earth System Model

Katherine J. Evans; Andrew G. Salinger; Patrick H. Worley; Stephen Price; William H. Lipscomb; Jeffrey A. Nichols; James B. White; Mauro Perego; Mariana Vertenstein; James Edwards; Jean-François Lemieux

Global Earth System Models (ESMs) can now produce simulations that resolve ~50 km features and include finer-scale, interacting physical processes. However, the current explicit algorithms that dominate production ESMs require ever-decreasing time steps in order to achieve these fine-resolution solutions, which limits time to solution even when the spatial parallelism is exploited efficiently. Solution methods that overcome these bottlenecks can be quite intricate, and there is no single set of algorithms that performs well across the range of problems of interest. This creates significant implementation challenges, which are further compounded by the complexity of ESMs. Therefore, prototyping and evaluating new algorithms in these models requires a software interface that is flexible, extensible, and easily introduced into the existing software. We describe our efforts to create a parallel solver interface that links the Trilinos collection of solver libraries to the Glimmer Community Ice Sheet Model (Glimmer-CISM), a continental ice-sheet model used in the Community Earth System Model (CESM). We demonstrate this interface within both current and developmental versions of Glimmer-CISM and provide strategies for its integration into the rest of the CESM.
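
The value of such an interface is that the model only ever sees an abstract "solve" call, so solver algorithms can be swapped without touching model code. A toy sketch of that separation in Python (class and function names are invented; this is not the CESM or Trilinos API):

```python
# Toy sketch of a pluggable solver interface: the model code calls solve(A, b)
# through an abstract interface, and concrete backends can be exchanged freely,
# which is the role the paper's Trilinos interface plays. Names are invented.
from typing import Protocol
import numpy as np
from scipy.sparse.linalg import gmres

class LinearSolver(Protocol):
    def solve(self, A, b): ...

class DirectSolver:
    def solve(self, A, b):
        return np.linalg.solve(A, b)          # dense direct solve

class KrylovSolver:
    def solve(self, A, b):
        x, info = gmres(A, b, atol=1e-10)     # iterative Krylov solve
        return x

def model_step(solver: LinearSolver, A, b):
    return solver.solve(A, b)                 # model code is backend-agnostic

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(model_step(DirectSolver(), A, b), model_step(KrylovSolver(), A, b))
```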


International Conference on Computational Science | 2009

Time Acceleration Methods for Advection on the Cubed Sphere

Rick Archibald; Katherine J. Evans; John B. Drake; James B. White

Climate simulation will not grow to the ultrascale without new algorithms to overcome the scalability barriers blocking existing implementations. Until recently, climate simulations concentrated on the question of whether the climate is changing. The emphasis is now shifting to impact assessments, mitigation and adaptation strategies, and regional details. Such studies will require significant increases in spatial resolution and model complexity while maintaining adequate throughput. The barrier to progress is the resulting decrease in time step, which can no longer be offset by increases in single-thread performance. In this paper we demonstrate how to overcome this time barrier for the first standard test defined for the shallow-water equations on a sphere. This paper explains how combining a multiwavelet discontinuous Galerkin method with exact-linear-part time-evolution schemes can overcome the time barrier for advection equations on a sphere. The discontinuous Galerkin method is a high-order method that is conservative, flexible, and scalable. The addition of multiwavelets to discontinuous Galerkin provides a hierarchical scale structure that can be exploited to improve computational efficiency in both the spatial and temporal dimensions. Exact-linear-part time-evolution schemes are explicit schemes that remain stable for implicit-size time steps.
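
The "time barrier" follows from the CFL-type restriction on explicit advection, dt <= C dx / u: every refinement of the grid shrinks the admissible explicit step, while exact-linear-part schemes are not bound by it. A short sketch of that scaling (illustrative numbers only):

```python
# Sketch of the explicit "time barrier": a CFL-type limit dt <= C * dx / u
# means each grid refinement shrinks the allowed explicit time step. All
# numbers below are illustrative assumptions, not values from the paper.
earth_circumference = 4.0e7              # m
u_max = 100.0                            # m/s, fast advective/wave speed
cfl = 0.5                                # assumed Courant number

for n_cells in (100, 200, 400, 800):     # cells around a great circle
    dx = earth_circumference / n_cells
    dt = cfl * dx / u_max
    print(f"{n_cells:4d} cells: dx = {dx / 1000:6.0f} km, dt <= {dt:6.0f} s")
```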

Collaboration


Dive into James B. White's collaborations.

Top Co-Authors

Patrick H. Worley | Oak Ridge National Laboratory
Katherine J. Evans | Oak Ridge National Laboratory
Mark R. Fahey | Oak Ridge National Laboratory
Thomas H. Dunigan | Oak Ridge National Laboratory
John B. Drake | Oak Ridge National Laboratory
Aixue Hu | National Center for Atmospheric Research
Andrew G. Salinger | Sandia National Laboratories
Benjamin M. Sanderson | National Center for Atmospheric Research
Claudia Tebaldi | National Center for Atmospheric Research
Damian W. I. Rouson | Sandia National Laboratories