Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sharon Brunett is active.

Publication


Featured research published by Sharon Brunett.


Journal of Computational Chemistry | 1997

Molecular dynamics for very large systems on massively parallel computers: The MPSim program

Kian-Tat Lim; Sharon Brunett; Mihail Iotov; Richard B. McClurg; Nagarajan Vaidehi; Siddharth Dasgupta; Stephen Taylor; William A. Goddard

We describe the implementation of the cell multipole method (CMM) in a complete molecular dynamics (MD) simulation program (MPSim) for massively parallel supercomputers. Tests are made of how the program scales with size (linearly) and with number of CPUs (nearly linearly) in applications involving up to 10^7 particles and up to 500 CPUs. Applications include estimating the surface tension of Ar and calculating the structure of rhinovirus 14 without requiring icosahedral symmetry.
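The core idea of the cell multipole method, replacing the pairwise contributions of a distant group of particles with a single aggregate term, can be sketched in a toy form. This is an illustrative 1D monopole-only example with invented names, not the MPSim implementation.

```python
# Illustrative 1D toy of the far-field idea behind the cell multipole
# method (CMM): particles are binned into cells, and sufficiently distant
# cells contribute a single monopole term (total mass at the center of
# mass) instead of one term per particle. Names are invented for the
# example; this is not the MPSim code.
import math

def exact_potential(x, particles):
    """Direct O(N) sum of -m / |x - xi| over all particles."""
    return sum(-m / abs(x - xi) for xi, m in particles)

def cell_monopole_potential(x, particles, cell_size=1.0, near=2.0):
    """Nearby cells summed exactly; far cells approximated by a monopole."""
    cells = {}
    for xi, m in particles:
        cells.setdefault(math.floor(xi / cell_size), []).append((xi, m))
    total = 0.0
    for members in cells.values():
        mass = sum(m for _, m in members)
        com = sum(xi * m for xi, m in members) / mass  # center of mass
        if abs(x - com) > near:
            total += -mass / abs(x - com)      # far cell: one monopole term
        else:
            total += sum(-m / abs(x - xi) for xi, m in members)  # near: exact
    return total

# 40 unit-mass particles spaced 0.25 apart, evaluated at the origin.
particles = [(0.3 + 0.25 * i, 1.0) for i in range(40)]
exact = exact_potential(0.0, particles)
approx = cell_monopole_potential(0.0, particles)
```

The monopole term costs O(1) per far cell instead of O(members), which is the source of the near-linear scaling the abstract reports; higher multipole moments tighten the approximation.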


Proceedings Seventh Heterogeneous Computing Workshop (HCW'98) | 1998

Implementing distributed synthetic forces simulations in metacomputing environments

Sharon Brunett; Dan M. Davis; Thomas D. Gottschalk; Paul C. Messina; Carl Kesselman

A distributed, parallel implementation of the widely used Modular Semi-Automated Forces (ModSAF) Distributed Interactive Simulation (DIS) is presented, with scalable parallel processors (SPPs) used to simulate more than 50,000 individual vehicles. The single-SPP code is portable and has been used on a variety of different SPP architectures for simulations with up to 15,000 vehicles. A general metacomputing framework for DIS on multiple SPPs is discussed and results are presented for an initial system using explicit Gateway processes to manage communications among the SPPs. These 50K-vehicle simulations utilized 1,904 processors at six sites across seven time zones, including platforms from three manufacturers. Ongoing activities to both simplify and enhance the metacomputing system using Globus are described.


Advances in Engineering Software | 2000

A test suite for high-performance parallel Java

Jochem Hauser; Thorsten Ludewig; Roy Williams; Ralf Winkelmann; Torsten Gollnick; Sharon Brunett; Jean Muylaert

The Java programming language has a number of features that make it attractive for writing high-quality, portable parallel programs. A pure object formulation, strong typing and the exception model make programs easier to create, debug, and maintain. The elegant threading provides a simple route to parallelism on shared-memory machines. Anticipating great improvements in numerical performance, this paper presents a suite of simple programs that indicate how a pure Java Navier-Stokes solver might perform. The suite includes a parallel Euler solver. We present results from a 32-processor Hewlett-Packard machine and a 4-processor Sun server. While speedup is excellent on both machines, indicating a high-quality thread scheduler, the single-processor performance needs much improvement.
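The thread-per-chunk parallelism such a suite exercises can be mirrored in a short sketch. The paper's suite is written in Java; this Python analogue, with invented function names, only illustrates splitting a data-parallel relaxation sweep across a thread pool.

```python
# A Python analogue of the thread-per-chunk parallelism the Java test
# suite exercises: one Jacobi relaxation sweep over a 1D grid, split
# across a thread pool. Function and variable names are invented for
# this sketch; the actual suite is written in Java.
from concurrent.futures import ThreadPoolExecutor

def jacobi_chunk(u, lo, hi):
    """New values for interior points lo..hi-1, reading the old grid u."""
    return [0.5 * (u[i - 1] + u[i + 1]) for i in range(lo, hi)]

def parallel_sweep(u, workers=4):
    n = len(u)
    # Split the interior points 1..n-2 into `workers` contiguous chunks.
    bounds = [(1 + k * (n - 2) // workers, 1 + (k + 1) * (n - 2) // workers)
              for k in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        chunks = pool.map(lambda b: jacobi_chunk(u, *b), bounds)
    out = u[:]                      # boundary values are kept fixed
    pos = 1
    for chunk in chunks:
        out[pos:pos + len(chunk)] = chunk
        pos += len(chunk)
    return out

u = parallel_sweep([0.0] * 9 + [1.0])   # boundaries 0.0 and 1.0
```

Because every chunk reads the old grid and writes into a fresh copy, the chunks are independent and the result is deterministic regardless of thread scheduling.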


Parallel Computing | 1998

A large-scale metacomputing framework for the ModSAF real-time simulation

Sharon Brunett; Thomas D. Gottschalk

A distributed, parallel implementation of the widely-used Modular Semi-Automated Forces (ModSAF) Distributed Interactive Simulation (DIS) is presented, with Scalable Parallel Processors (SPPs) used to simulate more than 50,000 individual vehicles. The single-SPP version is described and shown to be scalable. This code is portable and has been run on a variety of different SPP architectures. Results for simulations with up to 15,000 vehicles are presented for a number of distinct SPP architectures. The initial multi-SPP (metacomputing) run used explicit Gateway communication processes to exchange data among several SPPs simulating separate portions of the full battle space. The 50K-vehicle simulations utilized 1904 processors on SPPs at six sites across seven time zones, including platforms from three computer manufacturers. (Four of the SPP sites in the large run used the single-SPP code described in this work, with a somewhat different single-SPP ModSAF implementation used at the other two sites.) Particular attention is given to analyses of inter-SPP data rates and Gateway performance in the multi-SPP runs. An alternative, next-generation implementation based on Globus is presented, including discussions of initial experiments, comparisons to the Gateway model, and planned near-term extensions. Finally, comparisons are made between this work and ongoing mainstream DIS activities.
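The explicit-Gateway pattern described here, a process relaying entity updates among SPPs that simulate separate portions of the battle space, can be sketched in miniature. This in-process sketch uses invented names and queues in place of real inter-machine communication; it is not the ModSAF/Gateway code.

```python
# Minimal in-process sketch of the explicit-Gateway pattern: each SPP
# registers an inbox with the Gateway, and updates published by one SPP
# are fanned out to every other SPP's inbox. All names are invented;
# a real Gateway process relays traffic between machines, not queues.
from collections import defaultdict
from queue import Queue

class Gateway:
    def __init__(self):
        self.inboxes = defaultdict(Queue)   # one inbox per registered SPP

    def register(self, spp):
        return self.inboxes[spp]            # creates the inbox on first use

    def publish(self, sender, update):
        """Relay an update to every SPP except the one that sent it."""
        for spp, inbox in self.inboxes.items():
            if spp != sender:
                inbox.put((sender, update))

gw = Gateway()
a, b, c = (gw.register(n) for n in ("sppA", "sppB", "sppC"))
gw.publish("sppA", {"vehicle": 42, "pos": (1.0, 2.0)})
# sppB and sppC now hold the update; sppA's own inbox stays empty.
```

Concentrating inter-SPP traffic in a Gateway keeps each simulator's communication pattern simple, at the cost of making the Gateway's data rate the figure to watch, which is exactly the inter-SPP analysis the paper emphasizes.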


Conference on High Performance Computing (Supercomputing) | 1998

An Initial Evaluation of the Tera Multithreaded Architecture and Programming System Using the C3I Parallel Benchmark Suite

Sharon Brunett; John Thornley; Marrq Ellenbecker

The Tera Multithreaded Architecture (MTA) is a radical new architecture intended to revolutionize high-performance computing in both the scientific and commercial marketplaces. Each processor supports 128 threads in hardware. Extremely fast thread switching is used to mask latency in a uniform-access memory system without caching. It is claimed that these hardware characteristics allow compilers to easily transform sequential programs into efficient multithreaded programs for the Tera MTA. In this paper, we attempt to provide an objective initial evaluation of the performance of the Tera multithreaded architecture and programming system for general-purpose applications. The basis of our investigation is two programs from the C3I Parallel Benchmark Suite (C3IPBS). Both these programs have previously been shown to have the potential for large-scale parallelization. We compare the performance of these programs on (i) a fast uniprocessor, (ii) two conventional shared-memory multiprocessors, and (iii) the first installed Tera MTA (at the San Diego Supercomputer Center). On these platforms, we compare the effectiveness of both automatic and manual parallelization.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2003

Performance Analysis of Blue Gene/L Using Parallel Discrete Event Simulation

Ed Upchurch; Paul L. Springer; Maciej Brodowicz; Sharon Brunett; Thomas D. Gottschalk

High-performance computers currently under construction, such as IBM's Blue Gene/L, consist of large numbers (64K) of low-cost processing elements with relatively small local memories (256 MB) connected via relatively low-bandwidth (0.0625 bytes/FLOP), low-cost interconnection networks; they promise exceptional cost-performance for some scientific applications. Due to the large number of processing elements and adaptive routing networks in such systems, performance analysis of meaningful application kernels requires innovative methods. This paper describes a method that combines application analysis, tracing, and parallel discrete event simulation to provide early performance prediction. Specifically, results of performance analysis of a Lennard-Jones Spatial (LJS) Decomposition molecular dynamics benchmark code for Blue Gene/L are given.
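Discrete event simulation of the kind the method builds on rests on a time-ordered event queue in which handling one event may schedule future events. The sketch below shows only that sequential core loop, with an invented toy model; a parallel DES layers per-processor queues and conservative or optimistic synchronization on top of it.

```python
# Sequential core of a discrete event simulation: a time-ordered event
# queue whose handler may schedule further events. A parallel DES layers
# per-processor queues and synchronization on top of this loop. The toy
# model and all names are invented for the sketch.
import heapq
import itertools

def simulate(initial_events, handler, until=float("inf")):
    """Pop events in timestamp order; handler returns new (time, kind) events."""
    counter = itertools.count()             # tie-breaker for equal timestamps
    queue = [(t, next(counter), kind) for t, kind in initial_events]
    heapq.heapify(queue)
    log = []
    while queue:
        t, _, kind = heapq.heappop(queue)
        if t > until:
            break
        log.append((t, kind))
        for nt, nkind in handler(t, kind):
            heapq.heappush(queue, (nt, next(counter), nkind))
    return log

# Toy model: a message hops onward with 1.5 time units of latency
# until the simulated clock passes 5.
def handler(t, kind):
    return [(t + 1.5, "hop")] if t < 5 else []

log = simulate([(0.0, "send")], handler)
```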


Lecture Notes in Computer Science | 1998

Balancing the Load in Large-Scale Distributed Entity-Level Simulations

Sharon Brunett

A distributed, parallel implementation of the widely used Modular Semi-Automated Forces (ModSAF) Distributed Interactive Simulation (DIS) is presented, using networked high-performance resources to simulate large-scale entity-level exercises. Processing, communication, and I/O demands increase dramatically as the simulation grows in size or complexity. A general framework for functional decomposition and a scalable communications architecture are presented, along with an analysis of the communications load within a single computer and between computers. Ongoing activities to address communication limitations and processing load more dynamically using Globus are discussed.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2008

TeraGrid: Analysis of organization, system architecture, and middleware enabling new types of applications

Charlie Catlett; William E. Allcock; Phil Andrews; Ruth A. Aydt; Ray Bair; Natasha Balac; Bryan Banister; Trish Barker; Mark Bartelt; Peter H. Beckman; Francine Berman; Gary R. Bertoline; Alan Blatecky; Jay Boisseau; Jim Bottum; Sharon Brunett; J. Bunn; Michelle Butler; David Carver; John W Cobb; Tim Cockerill; Peter Couvares; Maytal Dahan; Diana Diehl; Thom H. Dunning; Ian T. Foster; Kelly P. Gaither; Dennis Gannon; Sebastien Goasguen; Michael Grobe


High Performance Distributed Computing | 1998

Application experiences with the Globus toolkit

Sharon Brunett; Karl Czajkowski; Steven Fitzgerald; Ian T. Foster; Andrew E. Johnson; Carl Kesselman; Jason Leigh; Steven Tuecke


Proceedings Sixth Heterogeneous Computing Workshop (HCW'97) | 1997

Distributed interactive simulation for synthetic forces

Paul C. Messina; Sharon Brunett; Dan M. Davis; Thomas D. Gottschalk; D. Curkendall; L. Ekroot; Howard Jay Siegel

Collaboration


Dive into Sharon Brunett's collaborations.

Top Co-Authors

Thomas D. Gottschalk, California Institute of Technology
Ed Upchurch, California Institute of Technology
Maciej Brodowicz, Indiana University Bloomington
Paul L. Springer, California Institute of Technology
Roy Williams, California Institute of Technology
Carl Kesselman, University of Southern California
Dan M. Davis, University of Southern California
Ian T. Foster, Argonne National Laboratory
Paul C. Messina, California Institute of Technology