
Publications


Featured research published by Mark W. Govett.


Grid Computing | 2010

Running the NIM Next-Generation Weather Model on GPUs

Mark W. Govett; Jacques Middlecoff; Tom Henderson

We are using GPUs to run a new weather model being developed at NOAA’s Earth System Research Laboratory (ESRL). The parallelization approach is to run the entire model on the GPU and only rely on the CPU for model initialization, I/O, and inter-processor communications. We have written a compiler to convert Fortran into CUDA, and used it to parallelize the dynamics portion of the model. Dynamics, the most computationally intensive part of the model, is currently running 34 times faster on a single GPU than the CPU. We also describe our approach and progress to date in running NIM on multiple GPUs.
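The Fortran-to-CUDA translation mentioned above can be illustrated with a toy source-to-source sketch. This is not the authors' compiler (which handles the full language, memory movement, and multi-dimensional loop nests); it is a minimal Python illustration of the core idea, mapping a simple 1-D Fortran DO loop onto a CUDA thread index with a bounds guard. All names here are hypothetical.

```python
import re

def fortran_loop_to_cuda(fortran_src: str, kernel_name: str) -> str:
    """Toy sketch of Fortran-to-CUDA source translation: each iteration
    of a 1-D DO loop becomes one CUDA thread, and the loop bounds
    become a guard condition inside the kernel. Illustrative only."""
    m = re.search(r"do\s+(\w+)\s*=\s*(\w+)\s*,\s*(\w+)\n(.*?)end do",
                  fortran_src, re.DOTALL | re.IGNORECASE)
    if not m:
        raise ValueError("no simple DO loop found")
    var, lo, hi, body = m.groups()
    return (f"__global__ void {kernel_name}(float *a, float *b, int {lo}, int {hi}) {{\n"
            f"  int {var} = blockIdx.x * blockDim.x + threadIdx.x + {lo};\n"
            f"  if ({var} <= {hi}) {{\n"
            f"    // translated loop body:\n"
            f"    // {body.strip()}\n"
            f"  }}\n"
            f"}}\n")

src = """do i = ilo, ihi
  a(i) = a(i) + b(i)
end do"""
print(fortran_loop_to_cuda(src, "add_kernel"))
```

The guard (`if (i <= ihi)`) is what lets the kernel launch more threads than loop iterations, a standard pattern in generated CUDA code.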


IEEE International Conference on High Performance Computing, Data, and Analytics | 2011

Experience Applying Fortran GPU Compilers to Numerical Weather Prediction

Tom Henderson; J. Middlecoff; J. Rosinski; Mark W. Govett; P. Madden

Graphics Processing Units (GPUs) have enabled significant improvements in computational performance compared to traditional CPUs in several application domains. Until recently, GPUs have been programmed using C/C++ based methods such as CUDA (NVIDIA) and OpenCL (NVIDIA and AMD). Using these approaches, Fortran Numerical Weather Prediction (NWP) codes would have to be completely re-written to take full advantage of GPU performance gains. Emerging commercial Fortran compilers allow NWP codes to take advantage of GPU processing power with much less software development effort. The Non-hydrostatic Icosahedral Model (NIM) is a prototype dynamical core for global NWP. We use NIM to examine Fortran directive-based GPU compilers, evaluating code porting effort and computational performance.
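The appeal of directive-based compilers is that the annotated source remains valid serial code: a compiler that ignores the directives produces the original program, and one that honors them produces an accelerated program with identical results. A minimal Python analogy of that contract (the physics kernel and its constants are illustrative, not from the paper):

```python
import math
from concurrent.futures import ThreadPoolExecutor

def saturation_vapor_pressure(t_kelvin):
    # Toy pointwise "physics" kernel (a Bolton-style saturation vapor
    # pressure formula; constants are illustrative, not from the paper).
    return 611.2 * math.exp(17.67 * (t_kelvin - 273.15) / (t_kelvin - 29.65))

def run(temps, accelerated=False):
    """One computation behind two execution paths, standing in for a
    single Fortran source compiled with or without GPU directives.
    Both paths must produce identical results."""
    if accelerated:
        with ThreadPoolExecutor() as pool:  # stand-in for the accelerator
            return list(pool.map(saturation_vapor_pressure, temps))
    return [saturation_vapor_pressure(t) for t in temps]  # serial path

temps = [260.0 + i for i in range(8)]
assert run(temps) == run(temps, accelerated=True)
```

Bit-for-bit agreement between paths is the easy case here; real GPU ports of NWP codes typically have to tolerate small floating-point differences from reordered arithmetic.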


Parallel Computing | 2003

The scalable modeling system: directive-based code parallelization for distributed and shared memory computers

Mark W. Govett; Leslie B. Hart; Tom Henderson; Jacques Middlecoff; D. Schaffer

A directive-based parallelization tool called the Scalable Modeling System (SMS) is described. The user inserts directives in the form of comments into existing Fortran code. SMS translates the code and directives into a parallel version that runs efficiently on shared and distributed memory high-performance computing platforms including the SGI Origin, IBM SP2, Cray T3E, Sun, and Alpha and Intel clusters. Twenty directives are available to support operations including array re-declarations, inter-process communications, loop translations, and parallel I/O operations. SMS also provides tools to support incremental parallelization and debugging that significantly reduce code parallelization time from months to weeks of effort. SMS is intended for applications using regular structured grids that are solved using finite difference approximation or spectral methods. It has been used to parallelize 10 atmospheric and oceanic models, but the tool is sufficiently general that it can be applied to other structured-grid codes. Recent performance comparisons demonstrate that the Eta, Hybrid Coordinate Ocean Model, and Regional Ocean Modeling System models, parallelized using SMS, perform as well as or better than their OpenMP or Message Passing Interface counterparts.
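The central transformation behind a tool like SMS is decomposing a Fortran loop over a structured grid across processes. A minimal sketch of that step, assuming a simple 1-D block decomposition (the directive semantics implied here are illustrative, not SMS's actual specification):

```python
def decompose(n, nprocs):
    """Block decomposition of a 1-based Fortran loop 1..n over nprocs
    ranks, the kind of loop translation a parallelization directive
    implies. Returns an inclusive (start, end) pair per rank; any
    remainder iterations go to the lowest-numbered ranks."""
    base, rem = divmod(n, nprocs)
    bounds = []
    start = 1
    for r in range(nprocs):
        count = base + (1 if r < rem else 0)
        bounds.append((start, start + count - 1))
        start += count
    return bounds

# e.g. a loop "do i = 1, 10" split over 3 ranks:
print(decompose(10, 3))  # [(1, 4), (5, 7), (8, 10)]
```

The translated code on each rank then runs the original loop body over its local (start, end) range, with the tool generating the inter-process communication for any off-rank references.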


Bulletin of the American Meteorological Society | 2017

Parallelization and Performance of the NIM Weather Model on CPU, GPU and MIC Processors

Mark W. Govett; Jim Rosinski; Jacques Middlecoff; Tom Henderson; Jin Lee; Alexander E. MacDonald; Ning Wang; Paul Madden; Julie Schramm; Antonio Duarte

The design and performance of the Non-hydrostatic Icosahedral Model (NIM) global weather prediction model is described. NIM is a dynamical core designed to run on central processing unit (CPU), graphics processing unit (GPU), and Many Integrated Core (MIC) processors. It demonstrates efficient parallel performance and scalability to tens of thousands of compute nodes and has been an effective way to make comparisons between traditional CPU and emerging fine-grain processors. The design of NIM also serves as a useful guide in the fine-grain parallelization of the finite volume cubed (FV3) model recently chosen by the National Weather Service (NWS) to become its next operational global weather prediction model. This paper describes the code structure and parallelization of NIM using standards-compliant Open Multi-Processing (OpenMP) and Open Accelerator (OpenACC) directives. NIM uses the directives to support a single, performance-portable code that runs on CPU, GPU, and MIC systems. […]
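Scalability to tens of thousands of compute nodes, as reported here, requires that the serial and non-overlapped fractions of the code stay vanishingly small. A standard back-of-the-envelope model (Amdahl's law; the numbers below are illustrative, not measurements from the paper) makes the constraint concrete:

```python
def amdahl_speedup(serial_fraction, n):
    # Idealized strong-scaling speedup on n processors for a code
    # whose given fraction cannot be parallelized (Amdahl's law).
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

# At 10,000 processors, a 1% serial fraction caps speedup near 100x,
# while 0.01% still allows roughly 5,000x:
for f in (0.01, 0.001, 0.0001):
    print(f, round(amdahl_speedup(f, 10_000), 1))
```

Real weather-model scaling is further limited by halo-communication costs, which this simple model omits.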


Proceedings of the Eleventh ECMWF Workshop | 2005

THE GRID: AN IT INFRASTRUCTURE FOR NOAA IN THE 21ST CENTURY

Mark W. Govett; Mike Doney; Paul Hyder

This paper highlights the need to build a grid infrastructure to meet the challenges facing NOAA in the 21st century. Given the enormous expected demands for data and the increased size and density of observational systems, current systems will not be scalable for future needs without incurring enormous costs. NOAA’s IT infrastructure is currently a set of independent systems that have been built up over time to support its programs and requirements. NOAA needs integrated systems capable of handling a huge increase in data volumes from the expected launches of GOES-R and NPOESS and from new observing systems being proposed or developed, and of meeting the requirements of the Integrated Earth Observation System. Further, NOAA must continue moving toward integrated compute resources to reduce costs, to improve systems utilization, to support new scientific challenges, and to run and verify increasingly complex models using next-generation high-density data streams. Finally, NOAA needs a fast, well-managed network capable of meeting the needs of the organization: to efficiently distribute data to users, to provide secure access to IT resources, and to be sufficiently adaptable and scalable to meet unanticipated needs in the future.


Proceedings of the Twelfth ECMWF Workshop | 2007

HPC ACTIVITIES IN THE EARTH SYSTEM RESEARCH LABORATORY

Mark W. Govett; J. Middlecoff; D. Schaffer; J. Smith

The mission of ESRL’s Advanced Computing Section is to enable new advances in atmospheric and oceanic science by making modern high-performance computers easier for researchers to use. Active areas of research include (1) the development of software to simplify the programming, portability, and performance of atmospheric and oceanic models that run in distributed- or shared-memory environments, (2) the development of software that explores and utilizes grid computing, and (3) collaboration with researchers on the continued development of next-generation weather forecast models for use in scientific studies or operational environments. This paper describes two activities our group is engaged in: the integration of parallel debugging capabilities into the Weather Research and Forecasting (WRF) model, and the development of a modeling portal called WRF Portal.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2001

THE SCALABLE MODELING SYSTEM: A HIGH-LEVEL ALTERNATIVE TO MPI

Mark W. Govett; Jacques Middlecoff; Leslie B. Hart; Tom Henderson; D. Schaffer

A directive-based parallelization tool called the Scalable Modeling System (SMS) is described. The user inserts directives in the form of comments into existing Fortran code. SMS translates the code and directives into a parallel version that runs efficiently on both shared and distributed memory high-performance computing platforms. SMS provides tools to support partial parallelization and debugging that significantly decreases code parallelization time. The performance of an SMS parallelized version of the Eta model is compared to the operational version running at the National Centers for Environmental Prediction (NCEP).
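The "high-level alternative to MPI" idea can be sketched in miniature: a single exchange call fills halo (ghost) cells from neighboring subdomains, hiding the paired sends and receives that hand-written MPI code would spell out. This Python sketch simulates the ranks within one process; the function name and data layout are illustrative, not SMS's actual interface.

```python
def exchange_halos(local_arrays):
    """One high-level call standing in for explicit MPI send/recv pairs:
    fill each rank's one-cell halo from its neighbors' edge values.
    Each local array has the layout [halo_lo, interior..., halo_hi]."""
    for r, arr in enumerate(local_arrays):
        if r > 0:
            # left halo <- left neighbor's last interior cell
            arr[0] = local_arrays[r - 1][-2]
        if r < len(local_arrays) - 1:
            # right halo <- right neighbor's first interior cell
            arr[-1] = local_arrays[r + 1][1]

# Two ranks, each with 3 interior cells and empty halo slots at the ends:
ranks = [[None, 1, 2, 3, None], [None, 4, 5, 6, None]]
exchange_halos(ranks)
print(ranks)  # halos on the shared boundary are now filled
```

Hiding the communication behind one call is what allows a tool to support incremental parallelization: the serial loop body is untouched, and only halo updates are inserted where the decomposition requires them.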


International Conference on Information Fusion | 2018

Machine Learning: Defining Worldwide Cyclone Labels for Training

Christina Bonfanti; Lidija Trailovic; Jebb Stewart; Mark W. Govett


98th American Meteorological Society Annual Meeting | 2018

An Update on the Parallelization of the FV3 Model for CPU, GPU, and MIC Processors

Mark W. Govett


Archive | 2017

Position paper on high performance computing needs in Earth system prediction.

Jessie C. Carman; Thomas Clune; Francis X. Giraldo; Mark W. Govett; Anke Kamrath; Tsengdar Lee; D. McCarren; John Michalakes; Scott Sandgathe; Tim Whitcomb

Collaboration


Mark W. Govett's top co-authors:

Tom Henderson, National Oceanic and Atmospheric Administration
Leslie B. Hart, National Oceanic and Atmospheric Administration
John Michalakes, National Center for Atmospheric Research
Alex Reinecke, United States Naval Research Laboratory
Alexander E. MacDonald, National Oceanic and Atmospheric Administration
Christina Bonfanti, University of Colorado Boulder
J. Middlecoff, National Oceanic and Atmospheric Administration
J. Rosinski, National Oceanic and Atmospheric Administration
Jebb Stewart, Colorado State University