
Publication


Featured research published by Richard A. Gerber.


High Performance Distributed Computing | 2015

HPC System Lifetime Story: Workload Characterization and Evolutionary Analyses on NERSC Systems

Gonzalo Pedro Rodrigo Álvarez; Per-Olov Östberg; Erik Elmroth; Katie Antypas; Richard A. Gerber; Lavanya Ramakrishnan

High performance computing centers have traditionally served monolithic MPI applications. However, in recent years, many large scientific computations have included high-throughput and data-intensive jobs. HPC systems have mostly used batch queue schedulers to schedule these workloads on appropriate resources. There is a need to understand future scheduling scenarios that can support the diverse scientific workloads in HPC centers. In this paper, we analyze the workloads on two systems (Hopper, Carver) at the National Energy Research Scientific Computing (NERSC) Center. Specifically, we present a trend analysis towards understanding the evolution of the workload over the lifetime of the two systems.
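
As a rough illustration of the kind of workload trend analysis described above, the sketch below aggregates scheduler accounting records by submission year. It is a minimal sketch only: the CSV input and the column names ("submit_time", "num_cores", "wall_seconds") are assumptions for illustration, not the data format actually used at NERSC.

    # Illustrative sketch only: the paper's actual analysis pipeline is not shown here.
    # Assumes a CSV export of scheduler accounting records with hypothetical columns
    # "submit_time" (ISO date), "num_cores", and "wall_seconds".
    import csv
    from collections import defaultdict

    def yearly_workload_trends(path):
        """Aggregate job records by submission year to expose workload evolution."""
        jobs_per_year = defaultdict(int)
        core_hours_per_year = defaultdict(float)
        sizes_per_year = defaultdict(list)

        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                year = row["submit_time"][:4]
                cores = int(row["num_cores"])
                hours = float(row["wall_seconds"]) / 3600.0
                jobs_per_year[year] += 1
                core_hours_per_year[year] += cores * hours
                sizes_per_year[year].append(cores)

        for year in sorted(jobs_per_year):
            sizes = sorted(sizes_per_year[year])
            median_size = sizes[len(sizes) // 2]
            print(f"{year}: {jobs_per_year[year]} jobs, "
                  f"{core_hours_per_year[year]:.0f} core-hours, "
                  f"median job size {median_size} cores")

    if __name__ == "__main__":
        yearly_workload_trends("hopper_jobs.csv")  # hypothetical file name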


Cluster Computing and the Grid | 2016

Towards Understanding Job Heterogeneity in HPC: A NERSC Case Study

Gonzalo Pedro Rodrigo Álvarez; Per-Olov Östberg; Erik Elmroth; Katie Antypas; Richard A. Gerber; Lavanya Ramakrishnan

The high performance computing (HPC) scheduling landscape is changing. Increasingly, large scientific computations include high-throughput, data-intensive, and stream-processing compute models. These jobs increase workload heterogeneity, which presents challenges for classical HPC schedulers oriented toward tightly coupled MPI jobs. Thus, it is important to define new analysis methods to understand the heterogeneity of the workload and its possible effect on the performance of current systems. In this paper, we present a methodology to assess job heterogeneity in workloads and scheduling queues. We apply the method to the 2014 workloads of three current National Energy Research Scientific Computing Center (NERSC) systems. Finally, we present the results of this analysis, observing that heterogeneity may reduce the predictability of job wait times.
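
One simple way to make "job heterogeneity" concrete is to bin jobs by size and runtime and measure the spread of the resulting distribution. The sketch below uses normalized Shannon entropy for this purpose; it is an illustrative proxy, not the specific metric defined in the paper.

    # A minimal sketch of one way to quantify workload heterogeneity: the normalized
    # Shannon entropy of jobs binned by size and runtime.
    import math
    from collections import Counter

    def heterogeneity(jobs):
        """jobs: iterable of (num_cores, runtime_seconds) tuples."""
        # Bin each job roughly by order of magnitude in both dimensions so that,
        # e.g., a 32-core/2-hour job and a 4096-core/10-minute job differ.
        bins = Counter(
            (len(str(int(cores))), len(str(max(int(runtime), 1))))
            for cores, runtime in jobs
        )
        total = sum(bins.values())
        probs = [count / total for count in bins.values()]
        entropy = -sum(p * math.log2(p) for p in probs)
        max_entropy = math.log2(len(bins)) if len(bins) > 1 else 1.0
        return entropy / max_entropy  # 0 = uniform workload, 1 = maximally mixed

    # Example: a homogeneous MPI workload vs. a mixed MPI/high-throughput workload.
    mpi_only = [(2048, 7200)] * 100
    mixed = [(2048, 7200)] * 50 + [(1, 600)] * 40 + [(64, 86400)] * 10
    print(f"MPI-only workload heterogeneity: {heterogeneity(mpi_only):.2f}")
    print(f"Mixed workload heterogeneity:    {heterogeneity(mixed):.2f}")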


Journal of Parallel and Distributed Computing | 2018

Towards understanding HPC users and systems: A NERSC case study

Gonzalo P. Rodrigo; Per-Olov Östberg; Erik Elmroth; Katie Antypas; Richard A. Gerber; Lavanya Ramakrishnan

The high performance computing (HPC) scheduling landscape currently faces new challenges due to changes in the workload. Previously, HPC centers were dominated by tightly coupled MPI jobs. HPC work ...


Lawrence Berkeley National Laboratory | 2005

SciDAC advances and applications in computational beam dynamics

Robert D. Ryne; D. Abell; A. Adelmann; J. Amundson; Courtlandt L. Bohn; John R. Cary; Phillip Colella; D. Dechow; V. Decyk; Alex J. Dragt; Richard A. Gerber; S. Habib; D. Higdon; T. Katsouleas; Kwan-Liu Ma; Peter McCorquodale; D. Mihalcea; C. Mitchell; W. B. Mori; C.T. Mottershead; F. Neri; Ilya V. Pogorelov; Ji Qiang; R. Samulyak; D. B. Serafini; John Shalf; C. Siegerist; Panagiotis Spentzouris; P. Stoltz; Balsa Terzic

SciDAC has had a major impact on computational beam dynamics and the design of particle accelerators. Particle accelerators -- which account for half of the facilities in the DOE Office of Science Facilities for the Future of Science 20 Year Outlook -- are crucial for US scientific, industrial, and economic competitiveness. Thanks to SciDAC, accelerator design calculations that were once thought impossible are now carried out routinely, and new, challenging, and important calculations are within reach. SciDAC accelerator modeling codes are being used to get the most science out of existing facilities, to produce optimal designs for future facilities, and to explore advanced accelerator concepts that may hold the key to qualitatively new ways of accelerating charged particle beams. In this poster we present highlights from the SciDAC Accelerator Science and Technology (AST) project Beam Dynamics focus area in regard to algorithm development, software development, and applications.


Archive | 2014

DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

Richard A. Gerber; William Allcock; Chris Beggio; Stuart Campbell; Andrew Cherry; Shreyas Cholia; Eli Dart; Clay England; Tim J. Fahey; Fernanda Foertter; Robin J. Goldstone; Jason Hick; David Karelitz; Kaki Kelly; Laura Monroe; Prabhat; David Skinner; Julia White

U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014, representatives from six DOE HPC centers met in Oakland, CA, at the DOE High Performance Computing Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. This report contains findings from that review.


Concurrency and Computation: Practice and Experience | 2018

Preparing NERSC users for Cori, a Cray XC40 system with Intel many integrated cores

Yun He; Brandon Cook; Jack Deslippe; Brian Friesen; Richard A. Gerber; Rebecca Hartman-Baker; Alice Koniges; Thorsten Kurth; Stephen Leak; Woo-Sun Yang; Zhengji Zhao; E. Baron; Peter H. Hauschildt

The newest NERSC supercomputer, Cori, is a Cray XC40 system consisting of 2,388 Intel Xeon Haswell nodes and 9,688 Intel Xeon Phi “Knights Landing” (KNL) nodes. Compared to the Xeon-based clusters NERSC users are familiar with, optimal performance on Cori requires consideration of KNL mode settings; process, thread, and memory affinity; fine-grain parallelization; vectorization; and use of the high-bandwidth MCDRAM memory. This paper describes our efforts preparing NERSC users for KNL through the NERSC Exascale Science Application Program, Web documentation, and user training. We discuss how we configured the Cori system for usability and productivity, addressing programming concerns, batch system configurations, and default KNL cluster and memory modes. System usage data, job completion analysis, issues encountered in programming and running jobs, and a few successful user stories on KNL are presented.
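
Much of the user guidance described above comes down to affinity arithmetic: choosing MPI ranks per node, OpenMP threads per rank, and matching Slurm and OpenMP settings. The sketch below works through that arithmetic for a 68-core KNL node with 4 hardware threads per core. The srun options and OMP_* variables shown are standard Slurm/OpenMP settings and the binary name is a placeholder; the exact values recommended for Cori (including KNL cluster and memory modes) should be taken from NERSC's documentation rather than from this sketch.

    # A minimal sketch of KNL affinity arithmetic, assuming 68 physical cores and
    # 4 hardware threads per core on a Cori KNL node.
    CORES_PER_NODE = 68
    HW_THREADS_PER_CORE = 4

    def knl_launch_settings(ranks_per_node, threads_per_rank):
        """Suggest srun and OpenMP affinity settings for one KNL node."""
        cores_per_rank = CORES_PER_NODE // ranks_per_node
        if cores_per_rank == 0:
            raise ValueError("more MPI ranks than physical cores on the node")
        if threads_per_rank > cores_per_rank * HW_THREADS_PER_CORE:
            raise ValueError("more OpenMP threads than hardware threads per rank")
        # Slurm's -c (--cpus-per-task) counts hardware threads, not physical cores,
        # so reserving whole cores per rank means multiplying by 4 on KNL.
        cpus_per_task = cores_per_rank * HW_THREADS_PER_CORE
        env = {
            "OMP_NUM_THREADS": str(threads_per_rank),
            "OMP_PLACES": "threads",
            "OMP_PROC_BIND": "spread",
        }
        srun = (f"srun -n {ranks_per_node} -c {cpus_per_task} "
                f"--cpu-bind=cores ./my_app")  # ./my_app is a placeholder binary
        return env, srun

    env, srun = knl_launch_settings(ranks_per_node=16, threads_per_rank=4)
    for key, value in env.items():
        print(f"export {key}={value}")
    print(srun)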


Archive | 2012

Large Scale Computing and Storage Requirements for Basic Energy Sciences Research

Richard A. Gerber; Harvey Wasserman

Large Scale Computing and Storage Requirements for Basic Energy Sciences: Report of the NERSC / BES / ASCR Requirements Workshop held February 9 and 10, 2010.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2017

IXPUG: Experiences on Intel Knights Landing at the One Year Mark

Estela Suarez; Michael Lysaght; Simon J. Pennycook; Richard A. Gerber

One year after the launch of the 2nd generation Knights Landing (KNL) Intel Xeon Phi platform, a significant amount of application experience has been gathered by the user community. This provided IXPUG (the Intel Xeon Phi User Group) with a timely opportunity to share insights on how best to exploit this new many-core processor and, in particular, on how to achieve high performance on current and upcoming large-scale KNL-based systems.


Computing in Science and Engineering | 2015

The National Energy Research Scientific Computing Center: Forty Years of Supercomputing Leadership

Harvey Wasserman; Richard A. Gerber

The oil embargo of the early 1970s stalled cars but began a supercomputing story that continues to this day. The National Energy Research Scientific Computing Center, the state-of-the-art national facility that serves government, industry, and academic users today, celebrated its 40th anniversary in 2014. The guest editors of this special issue document that history and describe the articles they selected to highlight it.


Archive | 2014

Hopper Workload Analysis

Brian Austin; Tina Butler; Richard A. Gerber; Cary Whitney; Nicholas J. Wright; Woo-Sun Yang; Zhengji Zhao

The National Energy Research Scientific Computing (NERSC) Center is the primary computing facility for the United States Department of Energy, Office of Science. With over 5,000 users and over 600 different applications utilizing NERSC systems, it is critically important to examine the workload running on NERSC's large supercomputers in order to procure systems that perform well for a broad workload. In this paper we show the breakdown of the NERSC workload by science area, algorithm, memory and thread usage, and more. We also describe the methods used to collect data from NERSC's Hopper (Cray XE6) system.
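
A sketch of the kind of breakdown reported above is shown below: it computes each science area's share of delivered core-hours from job accounting records. It is illustrative only; the file name and column names ("science_area", "num_cores", "wall_seconds") are hypothetical stand-ins for the accounting data actually collected on Hopper.

    # Illustrative sketch only: assumes a CSV of job records with hypothetical
    # columns "science_area", "num_cores", and "wall_seconds".
    import csv
    from collections import defaultdict

    def core_hours_by_science_area(path):
        """Print each science area's share of delivered core-hours."""
        totals = defaultdict(float)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                hours = float(row["wall_seconds"]) / 3600.0
                totals[row["science_area"]] += int(row["num_cores"]) * hours
        grand_total = sum(totals.values())
        for area, hours in sorted(totals.items(), key=lambda kv: -kv[1]):
            share = 100 * hours / grand_total
            print(f"{area:24s} {hours:14.0f} core-hours ({share:5.1f}%)")

    if __name__ == "__main__":
        core_hours_by_science_area("hopper_accounting.csv")  # hypothetical file name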

Collaboration


Dive into Richard A. Gerber's collaborations.

Top Co-Authors

Harvey Wasserman, Lawrence Berkeley National Laboratory

Katie Antypas, Lawrence Berkeley National Laboratory

Katherine Riley, Argonne National Laboratory

Lavanya Ramakrishnan, Lawrence Berkeley National Laboratory

Sudip S. Dosanjh, Lawrence Berkeley National Laboratory

Tjerk Straatsma, Oak Ridge National Laboratory

David Skinner, Lawrence Berkeley National Laboratory

Eli Dart, Lawrence Berkeley National Laboratory