Katie Antypas
Lawrence Berkeley National Laboratory
Publications
Featured research published by Katie Antypas.
Parallel Computing | 2009
Anshu Dubey; Katie Antypas; Murali K. Ganapathy; Lynn B. Reid; Katherine Riley; Daniel J. Sheeler; Andrew R. Siegel; Klaus Weide
FLASH is a publicly available high-performance application code that has evolved from a collection of unconnected legacy codes into a modular, extensible software system. FLASH has been successful because its capabilities have been driven by the needs of scientific applications, without compromising maintainability, performance, or usability. In its newest incarnation, FLASH3 consists of interoperable modules that can be combined to generate different applications. The FLASH architecture allows arbitrarily many alternative implementations of its components to coexist and interchange with each other, resulting in greater flexibility. Further, a simple and elegant mechanism exists for customizing code functionality without modifying the core implementation of the source. A built-in unit test framework providing verifiability, combined with a rigorous software maintenance process, allows the code to operate simultaneously in the dual mode of production and development. In this paper we describe the FLASH3 architecture, with emphasis on solutions to the more challenging conflicts arising from solver complexity, portable performance requirements, and legacy codes. We also include results from user surveys conducted in 2005 and 2007, which highlight the success of the code.
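The abstract's central architectural idea, arbitrarily many alternative implementations of a component that can be swapped without touching client code, can be sketched in Python. FLASH itself is Fortran and selects implementations with a setup tool rather than a runtime registry, and all names below (`SplitHydro`, `UnsplitHydro`, `build_application`) are invented for illustration:

```python
# Sketch of the "alternative implementations" idea. FLASH selects among
# implementations at setup time; here a registry plays that role.

SOLVERS = {}

def register(name):
    """Record a class as one selectable implementation of a unit."""
    def wrap(cls):
        SOLVERS[name] = cls
        return cls
    return wrap

@register("split")
class SplitHydro:
    def advance(self, state, dt):
        # Placeholder physics: stands in for a directionally split update.
        return [s + dt for s in state]

@register("unsplit")
class UnsplitHydro:
    def advance(self, state, dt):
        # Placeholder physics: stands in for an unsplit update.
        return [s + 0.5 * dt for s in state]

def build_application(solver_name):
    """Mimic the setup step: choose one implementation of the unit."""
    return SOLVERS[solver_name]()

print(build_application("unsplit").advance([1.0, 2.0], 0.2))  # → [1.1, 2.1]
```

Because both classes expose the same `advance` interface, the rest of the application is indifferent to which implementation was chosen, which is the flexibility the abstract describes.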
IEEE International Conference on High Performance Computing, Data, and Analytics | 2008
Hongzhang Shan; Katie Antypas; John Shalf
The unprecedented parallelism of new supercomputing platforms poses tremendous challenges to achieving scalable performance for I/O-intensive applications. Performance assessments using traditional I/O system and component benchmarks are difficult to relate back to application I/O requirements. However, the complexity of full applications motivates the development of simpler synthetic I/O benchmarks as proxies for the full application. In this paper we examine the I/O requirements of a range of HPC applications and describe how the LLNL IOR synthetic benchmark was chosen as a suitable proxy for the diverse workload. We show a procedure for selecting IOR parameters to match the I/O patterns of the selected applications and show that it can accurately predict the I/O performance of the full applications. We conclude that IOR is an effective replacement for full-application I/O benchmarks and can bridge the gap of understanding that typically exists between stand-alone benchmarks and the full applications they intend to model.
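The parameter-selection step the abstract describes can be illustrated with a small sketch. The flags (`-a`, `-b`, `-t`, `-s`, `-F`) come from IOR's standard command line, but the mapping heuristic and the function name are assumptions made for this example, not the procedure from the paper:

```python
def ior_args(total_mib_per_proc, write_kib, file_per_proc=True):
    """Map a simple per-process I/O pattern onto IOR parameters.

    IOR writes `segments` blocks of `blockSize` bytes, each moved in
    `transferSize` chunks; here every application write becomes one
    block written in a single transfer. Illustrative heuristic only.
    """
    block = write_kib * 1024
    segments = (total_mib_per_proc * 1024 * 1024) // block
    args = ["-a", "POSIX",          # use the POSIX I/O interface
            "-b", str(block),       # bytes per block
            "-t", str(block),       # bytes per transfer
            "-s", str(segments),    # blocks per process
            "-w", "-r"]             # measure both write and read
    if file_per_proc:
        args.append("-F")           # one file per process
    return args

print(" ".join(ior_args(1, 256)))
```

For example, an application in which each process writes 1 MiB in 256 KiB chunks maps to four 256 KiB segments per process under this heuristic.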
Lawrence Berkeley National Laboratory | 2008
Katie Antypas; John Shalf; Harvey Wasserman
This report describes efforts carried out during early 2008 to determine some of the science drivers for the NERSC-6 next-generation high-performance computing system acquisition. Although the starting point was existing Greenbooks from DOE and the NERSC User Group, the main contribution of this work is an analysis of the current NERSC computational workload combined with requirements information elicited from key users and other scientists about expected needs in the 2009-2011 timeframe. The NERSC workload is described in terms of science areas, computer codes supporting research within those areas, and description of key algorithms that comprise the codes. This work was carried out in large part to help select a small set of benchmark programs that accurately capture the science and algorithmic characteristics of the workload. The report concludes with a description of the codes selected and some preliminary performance data for them on several important systems.
High Performance Distributed Computing | 2015
Gonzalo Pedro Rodrigo Álvarez; Per-Olov Östberg; Erik Elmroth; Katie Antypas; Richard A. Gerber; Lavanya Ramakrishnan
High performance computing centers have traditionally served monolithic MPI applications. However, in recent years, many large scientific computations have included high-throughput and data-intensive jobs. HPC systems have mostly used batch queue schedulers to schedule these workloads on appropriate resources. There is a need to understand future scheduling scenarios that can support the diverse scientific workloads in HPC centers. In this paper, we analyze the workloads on two systems (Hopper and Carver) at the National Energy Research Scientific Computing Center (NERSC). Specifically, we present a trend analysis aimed at understanding the evolution of the workload over the lifetime of the two systems.
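The kind of trend analysis the abstract describes starts from scheduler accounting records. As a minimal sketch, assuming hypothetical job records of the form (year, cores, hours) rather than the actual Hopper/Carver logs:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical job records (year, cores, hours); a real analysis would
# parse the batch scheduler's accounting logs instead.
jobs = [
    (2011, 24, 2.0), (2011, 4096, 6.0),
    (2012, 48, 1.5), (2012, 24, 0.5), (2012, 8192, 12.0),
]

def yearly_trend(records):
    """Per-year mean job size (cores) and total core-hours consumed."""
    by_year = defaultdict(list)
    for year, cores, hours in records:
        by_year[year].append((cores, hours))
    return {y: {"mean_cores": mean(c for c, _ in v),
                "core_hours": sum(c * h for c, h in v)}
            for y, v in sorted(by_year.items())}

print(yearly_trend(jobs))
```

Tracking such per-year summaries over a system's lifetime is one way to surface the workload shift from a few large tightly coupled jobs toward many smaller high-throughput ones.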
IEEE International Conference on High Performance Computing, Data, and Analytics | 2014
Anshu Dubey; Katie Antypas; Alan Clark Calder; Christopher S. Daley; Bruce Fryxell; Brad Gallagher; Donald Q. Lamb; Dongwook Lee; Kevin Olson; Lynn B. Reid; Paul Rich; Paul M. Ricker; Katherine Riley; R. Rosner; Andrew R. Siegel; Noel T. Taylor; Klaus Weide; Francis Xavier Timmes; Natasha Vladimirova; John A. ZuHone
The FLASH code has evolved into a modular and extensible scientific simulation software system over the decade of its existence. During this time it has been cumulatively used by over a thousand researchers to investigate problems in astrophysics, cosmology, and in some areas of basic physics, such as turbulence. Recently, many new capabilities have been added to the code to enable it to simulate problems in high-energy density physics. Enhancements to these capabilities continue, along with enhancements enabling simulations of problems in fluid-structure interactions. The code started its life as an amalgamation of already existing software packages and sections of codes developed independently by various participating members of the team for other purposes. The code has evolved through a mixture of incremental and deep infrastructural changes. In the process, it has undergone four major revisions, three of which involved a significant architectural advancement. Along the way, a software process evolved that addresses the issues of code verification, maintainability, and support for the expanding user base. The software process also resolves the conflicts arising out of being in development and production simultaneously with multiple research projects, and between performance and portability. This paper describes the process of code evolution with emphasis on the design decisions and software management policies that have been instrumental in the success of the code. The paper also makes the case for a symbiotic relationship between scientific research and good software engineering of the simulation software.
Journal of Parallel and Distributed Computing | 2018
Gonzalo P. Rodrigo; Per-Olov Östberg; Erik Elmroth; Katie Antypas; Richard A. Gerber; Lavanya Ramakrishnan
The high performance computing (HPC) scheduling landscape currently faces new challenges due to changes in the workload. Previously, HPC centers were dominated by tightly coupled MPI jobs. HPC work ...
Computational Science and Engineering | 2013
Anshu Dubey; Katie Antypas; Alan Clark Calder; Bruce Fryxell; D. Q. Lamb; Paul M. Ricker; Lynn B. Reid; Katherine Riley; R. Rosner; Andrew R. Siegel; F. X. Timmes; Natalia Vladimirova; Klaus Weide
The FLASH code has evolved into a modular and extensible scientific simulation software system over the decade of its existence. During this time it has been cumulatively used by over a thousand researchers in several scientific communities (e.g., astrophysics, cosmology, high-energy density physics, turbulence, and fluid-structure interactions) to obtain research results. The code started its life as an amalgamation of two already existing software packages and sections of other codes developed independently by various participating members of the team for other purposes. In the evolution process it has undergone four major revisions, three of which involved a significant architectural advancement. A corresponding evolution of the software process and policies for maintenance occurred simultaneously. The code is currently in its 4.x release with a substantial user community. Recently there has been an upsurge in contributions by external users, some of which provide significant new capabilities. This paper outlines the software development and evolution processes that have contributed to the success of the FLASH code.
CUG2016 Proceedings | 2016
Wahid Bhimji; Debbie Bard; Melissa Romanus; D Paul; Andrey Ovsyannikov; B Friesen; M Bryson; J Correa; Glenn K. Lockwood; Tsulaia; S Byna; Steve Farrell; D Gursoy; C Daley; Beckner; B Van Straalen; David Trebotich; C Tull; Gunther H. Weber; Nicholas J. Wright; Katie Antypas; Prabhat
Parallel Computing | 2011
Anshu Dubey; Katie Antypas; Christopher S. Daley
Cluster Computing and the Grid | 2016
Gonzalo Pedro Rodrigo Álvarez; Per-Olov Östberg; Erik Elmroth; Katie Antypas; Richard A. Gerber; Lavanya Ramakrishnan