Publication


Featured research published by Harvey Wasserman.


IEEE International Conference on Cloud Computing Technology and Science | 2010

Performance Analysis of High Performance Computing Applications on the Amazon Web Services Cloud

Keith Jackson; Lavanya Ramakrishnan; Krishna Muriki; Shane Canon; Shreyas Cholia; John Shalf; Harvey Wasserman; Nicholas J. Wright

Cloud computing has seen tremendous growth, particularly for commercial web applications. The on-demand, pay-as-you-go model creates a flexible and cost-effective means to access compute resources. For these reasons, the scientific computing community has shown increasing interest in exploring cloud computing. However, the underlying implementation and performance of clouds are very different from those at traditional supercomputing centers. It is therefore critical to evaluate the performance of HPC applications in today’s cloud environments to understand the tradeoffs inherent in migrating to the cloud. This work represents the most comprehensive evaluation to date comparing conventional HPC platforms to Amazon EC2, using real applications representative of the workload at a typical supercomputing center. Overall results indicate that EC2 is six times slower than a typical mid-range Linux cluster, and twenty times slower than a modern HPC system. The interconnect on the EC2 cloud platform severely limits performance and causes significant variability.
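The headline slowdown figures are, in essence, ratios of per-application wall-clock times on each platform, typically summarized with a geometric mean. As an illustration only, the Python sketch below computes such a summary from hypothetical runtimes; the application names and numbers are placeholders, not data from the paper.

# Illustrative only: summarizing relative performance of two platforms as a
# geometric-mean slowdown over per-application runtimes. The applications
# and timings below are hypothetical placeholders, not results from the paper.
from math import prod

runtimes = {            # app -> (seconds on HPC cluster, seconds on cloud)
    "app_a": (120.0, 780.0),
    "app_b": (300.0, 1500.0),
    "app_c": (95.0, 600.0),
}

ratios = [cloud / hpc for hpc, cloud in runtimes.values()]
slowdown = prod(ratios) ** (1.0 / len(ratios))
print(f"geometric-mean slowdown: {slowdown:.1f}x")

A geometric mean is the conventional summary here because it weights each application's ratio equally, regardless of absolute runtime.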


Workshop on Software and Performance | 1998

POEMS: End-to-End Performance Design of Large Parallel Adaptive Computational Systems

Ewa Deelman; Aditya Dube; Adolfy Hoisie; Yong Luo; Richard L. Oliver; David Sundaram-Stukel; Harvey Wasserman; Vikram S. Adve; Rajive L. Bagrodia; James C. Browne; Elias N. Houstis; Olaf M. Lubeck; John R. Rice; Patricia J. Teller; Mary K. Vernon

The POEMS project is creating an environment for end-to-end performance modeling of complex parallel and distributed systems, spanning the domains of application software, runtime and operating system software, and hardware architecture. Toward this end, the POEMS framework supports composition of component models from these different domains into an end-to-end system model. This composition can be specified using a generalized graph model of a parallel system, together with interface specifications that carry information about component behaviors and evaluation methods. The POEMS Specification Language compiler, under development, will generate an end-to-end system model automatically from such a specification. The components of the target system may be modeled using different modeling paradigms (analysis, simulation, or direct measurement) and may be modeled at various levels of detail. As a result, evaluation of a POEMS end-to-end system model may require a variety of evaluation tools including specialized equation solvers, queuing network solvers, and discrete-event simulators. A single application representation based on static and dynamic task graphs serves as a common workload representation for all these modeling approaches. Sophisticated parallelizing compiler techniques allow this representation to be generated automatically for a given parallel program. POEMS includes a library of predefined analytical and simulation component models of the different domains and a knowledge base that describes performance properties of widely used algorithms. This paper provides an overview of the POEMS methodology and illustrates several of its key components. The methodology and modeling capabilities are demonstrated by predicting the performance of alternative configurations of Sweep3D, a complex benchmark for evaluating wavefront application technologies and high-performance parallel architectures.

Index Terms: Performance modeling, parallel system, message passing, analytical modeling, parallel simulation, processor simulation, task graph, parallelizing compiler, compositional modeling, recommender system.
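Because the common workload representation in POEMS is a task graph, a small sketch may help make the idea concrete. The Python fragment below builds a static task graph with assumed per-task compute costs and per-edge communication costs and reports its critical path as a simple lower bound on runtime; the task names and costs are invented for illustration and are not taken from POEMS or Sweep3D.

# Sketch: a static task graph with compute costs on nodes and communication
# costs on edges; the critical (longest) path gives a simple lower bound on
# parallel runtime. Task names and costs are illustrative, not from POEMS.
from functools import lru_cache

compute = {"read": 1.0, "sweep_x": 4.0, "sweep_y": 4.0, "reduce": 0.5}
edges = {                      # (src, dst): communication cost
    ("read", "sweep_x"): 0.2,
    ("read", "sweep_y"): 0.2,
    ("sweep_x", "reduce"): 0.3,
    ("sweep_y", "reduce"): 0.3,
}
successors = {}
for (src, dst), cost in edges.items():
    successors.setdefault(src, []).append((dst, cost))

@lru_cache(maxsize=None)
def longest_finish(task):
    """Compute cost of `task` plus the most expensive downstream chain."""
    tail = max((c + longest_finish(d) for d, c in successors.get(task, [])),
               default=0.0)
    return compute[task] + tail

print(f"critical-path estimate: {longest_finish('read'):.1f} time units")

In the full methodology, such a graph would be generated automatically by the parallelizing compiler and evaluated by whichever analytical or simulation component model is attached to it.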


Lawrence Berkeley National Laboratory | 2008

NERSC-6 Workload Analysis and Benchmark Selection Process

Katie Antypas; John Shalf; Harvey Wasserman

This report describes efforts carried out during early 2008 to determine some of the science drivers for the NERSC-6 next-generation high-performance computing system acquisition. Although the starting point was existing Greenbooks from DOE and the NERSC User Group, the main contribution of this work is an analysis of the current NERSC computational workload combined with requirements information elicited from key users and other scientists about expected needs in the 2009-2011 timeframe. The NERSC workload is described in terms of science areas, the computer codes supporting research within those areas, and the key algorithms that comprise those codes. This work was carried out in large part to help select a small set of benchmark programs that accurately capture the science and algorithmic characteristics of the workload. The report concludes with a description of the codes selected and some preliminary performance data for them on several important systems.
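At its core, the benchmark-selection step is an exercise in aggregating machine usage by code and algorithm class so that a small benchmark set covers most of the workload. A minimal sketch of that bookkeeping is shown below; the codes, algorithm classes, and hours are entirely hypothetical, not NERSC workload data.

# Sketch: aggregating allocation usage by algorithm class to see which
# classes a small benchmark set must cover. Codes, classes, and hours
# below are hypothetical placeholders.
from collections import defaultdict

usage = [  # (code, algorithm class, machine-hours)
    ("code_a", "dense linear algebra", 1.2e6),
    ("code_b", "spectral / FFT",       0.9e6),
    ("code_c", "particle methods",     0.7e6),
    ("code_d", "dense linear algebra", 0.4e6),
]

by_class = defaultdict(float)
for _, algo_class, hours in usage:
    by_class[algo_class] += hours

total = sum(by_class.values())
for algo_class, hours in sorted(by_class.items(), key=lambda kv: -kv[1]):
    print(f"{algo_class:24s} {100 * hours / total:5.1f}% of hours")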


Archive | 2007

Understanding and Mitigating Multicore Performance Issues on the AMD Opteron Architecture

John M. Levesque; Jeff Larkin; Martyn Foster; Joe Glenski; Garry Geissler; Stephen Whalen; Brian Waldecker; Jonathan Carter; David Skinner; Helen He; Harvey Wasserman; John Shalf; Hongzhang Shan; Erich Strohmaier

Over the past 15 years, microprocessor performance has doubled approximately every 18 months through increased clock rates and processing efficiency. In the past few years, clock frequency growth has stalled, and microprocessor manufacturers such as AMD have moved towards doubling the number of cores every 18 months in order to maintain historical growth rates in chip performance. This document investigates the ramifications of multicore processor technology on the new Cray XT4 systems based on AMD processor technology. We begin by walking through the AMD single-core, dual-core, and upcoming quad-core processor architectures. This is followed by a discussion of methods for collecting performance counter data to understand code performance on the Cray XT3 and XT4 systems. We then use the performance counter data to analyze the impact of multicore processors on the performance of microbenchmarks such as STREAM, application kernels such as the NAS Parallel Benchmarks, and full application codes that comprise the NERSC-5 SSP benchmark suite. We explore compiler options and software optimization techniques that can mitigate the memory bandwidth contention that reduces computing efficiency on multicore processors. The last section provides a case study of applying the dual-core optimizations to the NAS Parallel Benchmarks to dramatically improve their performance.
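The memory bandwidth contention discussed above is what STREAM-style microbenchmarks are designed to expose: sustained bandwidth per core drops as more cores share a socket's memory interface. As a rough, single-process illustration of the measurement style (not the paper's hardware-counter methodology), a numpy version of the STREAM "add" kernel might look like the sketch below; the array size and repetition count are arbitrary.

# Sketch: a STREAM-style "add" kernel (a = b + c) measuring effective memory
# bandwidth with numpy. Illustrative only; not the paper's methodology.
import time
import numpy as np

n = 10_000_000                      # ~80 MB per float64 array, well beyond cache
b = np.random.rand(n)
c = np.random.rand(n)
a = np.empty_like(b)

reps = 10
start = time.perf_counter()
for _ in range(reps):
    np.add(b, c, out=a)             # a = b + c with no temporary array
elapsed = time.perf_counter() - start

# STREAM-style accounting: read b, read c, write a -> 3 arrays per repetition.
bytes_moved = 3 * n * 8 * reps
print(f"effective bandwidth: {bytes_moved / elapsed / 1e9:.1f} GB/s")

Running several copies of such a kernel concurrently, one per core, is the usual way to observe the aggregate bandwidth ceiling that drives the contention effects described above.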


IEEE Computer | 2011

Codesign for Systems and Applications: Charting the Path to Exascale Computing

Vladimir Getov; Adolfy Hoisie; Harvey Wasserman

Computational science has become a vital tool in the 21st century, central to progress at the frontiers of nearly every scientific and engineering discipline, including many areas with significant societal impact. A persistent need for more computing power has provided an impetus for the high-performance computing (HPC) community to embark upon the path to exascale computing. The challenges associated with achieving efficient, highly effective exascale computing are extraordinary. Past growth in HPC has been driven by performance and has relied on a combination of faster clock speeds and increasingly larger systems. Achieving exascale performance under reliability and power constraints, and in the presence of levels of parallelism increased by orders of magnitude, will change the path of system and application development. A recent DARPA study showed that even if it were technically feasible, exascale systems built following the current trajectory would require power in the hundreds-of-megawatts range and would have reliability characteristics that render them impractical. Thus, the clock speed benefits of Moore's law have ended, and researchers must codesign future exascale HPC systems and applications concurrently, in an integrated manner, to achieve higher performance under stringent power and reliability constraints.
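The power constraint can be made concrete with simple arithmetic: a fixed facility power budget divided by 10^18 operations per second gives the energy allowed per operation. The sketch below assumes a 20 MW cap, a commonly cited exascale target that is not a figure taken from this article, and contrasts it with systems drawing hundreds of megawatts.

# Sketch: energy-per-operation budget implied by an exaflop system under a
# fixed power cap. The 20 MW cap is a commonly cited target, assumed here;
# it is not a figure from the article.
EXAFLOP = 1e18            # operations per second
power_cap_watts = 20e6    # 20 MW facility power budget (assumption)

joules_per_op = power_cap_watts / EXAFLOP
print(f"allowed energy per operation: {joules_per_op * 1e12:.0f} pJ")

# For comparison, systems drawing hundreds of megawatts at the same rate:
for megawatts in (200, 500):
    print(f"{megawatts} MW system: {megawatts * 1e6 / EXAFLOP * 1e12:.0f} pJ per op")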


Archive | 2012

Large Scale Computing and Storage Requirements for Basic Energy Sciences Research

Richard A. Gerber; Harvey Wasserman

Large Scale Computing and Storage Requirements for Basic Energy Sciences: report of the NERSC / BES / ASCR Requirements Workshop held February 9-10, 2010.


Computing in Science and Engineering | 2015

The National Energy Research Scientific Computing Center: Forty Years of Supercomputing Leadership

Harvey Wasserman; Richard A. Gerber

The oil embargo of the early 1970s stalled cars but began a supercomputing story that continues to this day. The National Energy Research Scientific Computing Center, the state-of-the-art national facility that serves government, industry, and academic users today, celebrated its 40th anniversary in 2014. The guest editors of this special issue document that history and describe the articles they selected to highlight it.


Archive | 2014

High Performance Computing and Storage Requirements for Nuclear Physics: Target 2017

Richard A. Gerber; Harvey Wasserman

In April 2014, NERSC, ASCR, and the DOE Office of Nuclear Physics (NP) held a review to characterize high performance computing (HPC) and storage requirements for NP research through 2017. This review is the 12th in a series of reviews held by NERSC and the Office of Science program offices that began in 2009. It is the second for NP and the final review in the second round, which covered all six Office of Science program offices. This report is the result of that review.


Lawrence Berkeley National Laboratory | 2011

Large Scale Computing and Storage Requirements for High Energy Physics

Richard A. Gerber; Harvey Wasserman

Large Scale Computing and Storage Requirements for High Energy Physics: report of the NERSC / HEP / ASCR Requirements Workshop held November 12-13, 2009.


International Parallel and Distributed Processing Symposium | 2008

Workshop 22 introduction: Workshop on Large-Scale Parallel Processing - LSPP

Darren J. Kerbyson; Ram Rajamony; Charles C. Weems; Johnnie W. Baker; Howard Jay Siegel; George Almasi; Taisuke Boku; Barbara M. Chapman; Hank G. Dietz; Daniel S. Katz; John M. Levesque; John Michalakes; Celso L. Mendes; Bernd Mohr; Stathis Papaefstathiou; Michael Scherger; Robert A. Walker; Harvey Wasserman; Gerhard Wellein; Pat Worley

The Workshop on Large-Scale Parallel Processing is a forum that focuses on computer systems that utilize thousands of processors and beyond. This is a very active area, given the worldwide goals of enhancing science-by-simulation by installing large-scale petaflop systems at the start of the next decade. Large-scale systems, referred to by some as extreme-scale or ultra-scale, have many important research aspects that need detailed examination for their effective design, deployment, and utilization. These include handling the substantial increase in cores per chip, along with the ensuing interconnection hierarchy, communication, and synchronization mechanisms. The workshop aims to bring together researchers from different communities working on challenging problems in this area for a dynamic exchange of ideas. Work at early stages of development, as well as work that has been demonstrated in practice, is equally welcome.

Collaboration


Dive into Harvey Wasserman's collaborations.

Top Co-Authors

Adolfy Hoisie, Pacific Northwest National Laboratory
Richard A. Gerber, Lawrence Berkeley National Laboratory
John Shalf, Lawrence Berkeley National Laboratory
Olaf M. Lubeck, Los Alamos National Laboratory
Darren J. Kerbyson, Pacific Northwest National Laboratory
Yong Luo, Los Alamos National Laboratory
Erich Strohmaier, Lawrence Berkeley National Laboratory
Federico Bassetti, Los Alamos National Laboratory