Publication


Featured research published by Barbara M. Chapman.


IEEE International Conference on High Performance Computing, Data and Analytics | 2011

The International Exascale Software Project roadmap

Jack J. Dongarra; Pete Beckman; Terry Moore; Patrick Aerts; Giovanni Aloisio; Jean Claude Andre; David Barkai; Jean Yves Berthou; Taisuke Boku; Bertrand Braunschweig; Franck Cappello; Barbara M. Chapman; Xuebin Chi; Alok N. Choudhary; Sudip S. Dosanjh; Thom H. Dunning; Sandro Fiore; Al Geist; Bill Gropp; Robert J. Harrison; Mark Hereld; Michael A. Heroux; Adolfy Hoisie; Koh Hotta; Zhong Jin; Yutaka Ishikawa; Fred Johnson; Sanjay Kale; R.D. Kenway; David E. Keyes

Over the last 20 years, the open-source community has provided more and more software on which the world’s high-performance computing systems depend for performance and productivity. The community has invested millions of dollars and years of effort to build key components. However, although the investments in these separate software elements have been tremendously valuable, a great deal of productivity has also been lost because of the lack of planning, coordination, and key integration of technologies necessary to make them work together smoothly and efficiently, both within individual petascale systems and between different systems. It seems clear that this completely uncoordinated development model will not provide the software needed to support the unprecedented parallelism required for peta/exascale computation on millions of cores, or the flexibility required to exploit new hardware models and features, such as transactional memory, speculative execution, and graphics processing units. This report describes the work of the community to prepare for the challenges of exascale computing, ultimately combining their efforts in a coordinated International Exascale Software Project.


Scientific Programming | 1992

Programming in Vienna Fortran

Barbara M. Chapman; Piyush Mehrotra; Hans P. Zima

Exploiting the full performance potential of distributed memory machines requires a careful distribution of data across the processors. Vienna Fortran is a language extension of Fortran which provides the user with a wide range of facilities for such mapping of data structures. In contrast to current programming practice, programs in Vienna Fortran are written using global data references. Thus, the user has the advantages of a shared memory programming paradigm while explicitly controlling the data distribution. In this paper, we present the language features of Vienna Fortran for FORTRAN 77, together with examples illustrating the use of these features.
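
Without such language support, the distribution bookkeeping is coded by hand. As a point of contrast only, here is a minimal hand-written sketch in C with MPI (not Vienna Fortran; the array size and block scheme are illustrative assumptions) of the kind of ownership computation that the language's distribution facilities are meant to hide:

    /* Hand-coded block distribution of a global array across MPI ranks:
       the bookkeeping that Vienna Fortran's distribution annotations hide.
       Illustrative sketch only; N and the block scheme are assumptions. */
    #include <mpi.h>
    #include <stdlib.h>

    #define N 1000000  /* global problem size (illustrative) */

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank owns one contiguous block of the global index space. */
        int base = N / size, rem = N % size;
        int local_n = base + (rank < rem ? 1 : 0);
        int lo = rank * base + (rank < rem ? rank : rem);  /* global start index */

        double *a = malloc(local_n * sizeof(double));
        for (int i = 0; i < local_n; i++)
            a[i] = (double)(lo + i);   /* initialize via the global index */

        free(a);
        MPI_Finalize();
        return 0;
    }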


Proceedings of the IEEE | 1993

Compiling for distributed-memory systems

Hans P. Zima; Barbara M. Chapman

Compilation techniques for the source-to-source translation of programs in an extended FORTRAN 77 to equivalent parallel message-passing programs are discussed. A machine-independent language extension to FORTRAN 77, Data Parallel FORTRAN (DPF), is introduced. It allows the user to write programs for distributed-memory multiprocessing systems (DMMPS) using global addresses, and to specify the distribution of data across the processors of the machine. Message-Passing FORTRAN (MPF), a FORTRAN extension that allows the formulation of explicitly parallel programs that communicate via explicit message passing, is also introduced. Procedures and optimization techniques for both languages are discussed. Additional optimization methods and advanced parallelization techniques, including run-time analysis, are also addressed. An extensive overview of related work is given.
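
A translator of this kind turns references to non-local array elements into explicit messages. A rough sketch in C with MPI (not the paper's DPF/MPF notation; the 1-D block distribution and single ghost cell per side are assumptions) of the boundary exchange such a translation might emit for a stencil loop:

    /* Sketch of the ghost-cell exchange a source-to-source translator might
       generate for a 1-D block-distributed stencil.  Layout: u[0] and
       u[local_n + 1] are ghost cells, u[1..local_n] is owned data. */
    #include <mpi.h>

    static void exchange_ghosts(double *u, int local_n, MPI_Comm comm) {
        int rank, size;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);

        int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
        int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

        /* Send first owned cell left, receive right ghost from the right. */
        MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                     &u[local_n + 1], 1, MPI_DOUBLE, right, 0,
                     comm, MPI_STATUS_IGNORE);
        /* Send last owned cell right, receive left ghost from the left. */
        MPI_Sendrecv(&u[local_n], 1, MPI_DOUBLE, right, 1,
                     &u[0], 1, MPI_DOUBLE, left, 1,
                     comm, MPI_STATUS_IGNORE);
    }

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        enum { LOCAL_N = 4 };
        double u[LOCAL_N + 2] = { 0 };   /* owned cells plus two ghost cells */
        exchange_ghosts(u, LOCAL_N, MPI_COMM_WORLD);
        MPI_Finalize();
        return 0;
    }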


Proceedings of the Fourth Conference on Partitioned Global Address Space Programming Model | 2010

Introducing OpenSHMEM: SHMEM for the PGAS community

Barbara M. Chapman; Tony Curtis; Swaroop Pophale; Stephen W. Poole; Jeffery A. Kuehn; Chuck Koelbel; Lauren Smith

The OpenSHMEM community would like to announce a new effort to standardize SHMEM, a communications library that uses one-sided communication and utilizes a partitioned global address space. OpenSHMEM is an effort to bring together a variety of SHMEM and SHMEM-like implementations into an open standard using a community-driven model. By creating an open-source specification and reference implementation of OpenSHMEM, there will be a wider availability of a PGAS library model on current and future architectures. In addition, the availability of an OpenSHMEM model will enable the development of performance and validation tools. We propose an OpenSHMEM specification to help tie together a number of divergent implementations of SHMEM that are currently available. To support an existing and growing user community, we will develop the OpenSHMEM web presence, including a community wiki and training material, and face-to-face interaction, including workshops and conference participation.
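
A minimal sketch of the library model in C, with routine names following later OpenSHMEM specifications rather than the exact 2010 interface (the neighbour pattern is illustrative): each PE performs a one-sided put into symmetric memory on the next PE.

    /* Minimal OpenSHMEM sketch: every PE puts its ID into a symmetric
       variable on its right neighbour, then all PEs synchronize.
       Routine names follow OpenSHMEM 1.2+; the pattern is illustrative. */
    #include <shmem.h>
    #include <stdio.h>

    int main(void) {
        shmem_init();

        int me   = shmem_my_pe();
        int npes = shmem_n_pes();

        /* Symmetric allocation: the same remotely accessible buffer on every PE. */
        int *dest = shmem_malloc(sizeof(int));
        *dest = -1;
        shmem_barrier_all();

        int target = (me + 1) % npes;
        shmem_int_p(dest, me, target);   /* one-sided put of a single int */
        shmem_barrier_all();

        printf("PE %d received %d\n", me, *dest);

        shmem_free(dest);
        shmem_finalize();
        return 0;
    }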


Concurrency and Computation: Practice and Experience | 2007

OpenUH: an optimizing, portable OpenMP compiler

Chunhua Liao; Oscar R. Hernandez; Barbara M. Chapman; Wenguang Chen; Weimin Zheng

OpenMP has gained wide popularity as an API for parallel programming on shared memory and distributed shared memory platforms. Despite its broad availability, there remains a need for a portable, robust, open source, optimizing OpenMP compiler for C/C++/Fortran 90, especially for teaching and research, for example into its use on new target architectures, such as SMPs with chip multi-threading, as well as learning how to translate for clusters of SMPs. In this paper, we present our efforts to design and implement such an OpenMP compiler on top of Open64, an open source compiler framework, by extending its existing analysis and optimization and adopting a source-to-source translator approach where a native back end is not available. The compilation strategy we have adopted and the corresponding runtime support are described. The OpenMP validation suite is used to determine the correctness of the translation. The compiler's behavior is evaluated using benchmark tests from the EPCC microbenchmarks and the NAS Parallel Benchmarks.
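
The core of such a source-to-source translation is outlining: the body of a parallel region becomes a separate function that is handed to the OpenMP runtime. A conceptual C sketch of that idea follows; the runtime call (rt_fork) is a hypothetical stand-in, defined serially here so the example is self-contained, and is not OpenUH's actual runtime interface.

    /* Conceptual sketch of OpenMP translation by outlining.  rt_fork is a
       hypothetical placeholder for the runtime's fork call (here a serial
       stub), not the actual OpenUH runtime interface. */
    #include <stdio.h>

    typedef void (*outlined_fn)(int tid, void *shared);

    /* A real runtime would run fn on every thread of the team; this serial
       stand-in runs it once as "thread 0". */
    static void rt_fork(outlined_fn fn, void *shared) {
        fn(0, shared);
    }

    /* Compiler-generated outlined function holding the parallel-region body. */
    static void outlined_region(int tid, void *shared) {
        int *counter = shared;          /* shared variables passed by reference */
        printf("thread %d sees counter = %d\n", tid, *counter);
    }

    /* Original user code:
         int counter = 42;
         #pragma omp parallel
         { ... use counter ... }
       becomes roughly: */
    int main(void) {
        int counter = 42;
        rt_fork(outlined_region, &counter);   /* replaces the parallel directive */
        return 0;
    }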


Parallel Computing | 2011

High performance computing using MPI and OpenMP on multi-core parallel systems

Haoqiang Jin; Dennis C. Jespersen; Piyush Mehrotra; Rupak Biswas; Lei Huang; Barbara M. Chapman

The rapidly increasing number of cores in modern microprocessors is pushing the current high performance computing (HPC) systems into the petascale and exascale era. The hybrid nature of these systems - distributed memory across nodes and shared memory with non-uniform memory access within each node - poses a challenge to application developers. In this paper, we study a hybrid approach to programming such systems - a combination of two traditional programming models, MPI and OpenMP. We present the performance of standard benchmarks from the multi-zone NAS Parallel Benchmarks and two full applications using this approach on several multi-core based systems including an SGI Altix 4700, an IBM p575+ and an SGI Altix ICE 8200EX. We also present new data locality extensions to OpenMP to better match the hierarchical memory structure of multi-core architectures.
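
A minimal sketch of the hybrid model the paper studies: MPI ranks across nodes, OpenMP threads within each rank. The kernel, problem size, and thread-support level are illustrative assumptions.

    /* Minimal hybrid MPI + OpenMP sketch: OpenMP threads share the loop
       within each rank, MPI combines the per-rank partial sums.
       The kernel and sizes are illustrative. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int provided;
        /* FUNNELED: only the master thread makes MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const long n = 10000000;
        long chunk = n / size;
        long lo = rank * chunk;
        long hi = (rank == size - 1) ? n : lo + chunk;

        double local = 0.0;
        /* Shared-memory parallelism within the node. */
        #pragma omp parallel for reduction(+:local)
        for (long i = lo; i < hi; i++)
            local += 1.0 / (double)(i + 1);

        double global = 0.0;
        /* Distributed-memory combination across ranks. */
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum = %f\n", global);

        MPI_Finalize();
        return 0;
    }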


IEEE Parallel & Distributed Technology: Systems & Applications | 1994

Extending HPF for Advanced Data-Parallel Applications

Barbara M. Chapman; Hans P. Zima; Piyush Mehrotra

High Performance Fortran can support regular numerical algorithms, but it cannot adequately express advanced applications such as particle-in-cell codes or unstructured mesh solvers. This article addresses this problem and outlines possible development paths.


Parallel Computing | 1992

Vienna Fortran—a Fortran language extension for distributed memory multiprocessors

Barbara M. Chapman; Piyush Mehrotra; Hans P. Zima

Exploiting the full performance potential of distributed memory machines requires a careful distribution of data across the processors. Vienna Fortran is a language extension of Fortran which provides the user with a wide range of facilities for such mapping of data structures. However, programs in Vienna Fortran are written using global data references. Thus, the user has the advantages of a shared memory programming paradigm while explicitly controlling the placement of data. In this paper, we present the basic features of Vienna Fortran along with a set of examples illustrating the use of these features.


Scientific Programming | 1997

Opus: A Coordination Language for Multidisciplinary Applications

Barbara M. Chapman; Matthew Haines; Piyush Mehrotra; Hans P. Zima; John Van Rosendale

Data parallel languages, such as High Performance Fortran, can be successfully applied to a wide range of numerical applications. However, many advanced scientific and engineering applications are multidisciplinary and heterogeneous in nature, and thus do not fit well into the data parallel paradigm. In this paper we present Opus, a language designed to fill this gap. The central concept of Opus is a mechanism called ShareD Abstractions (SDA). An SDA can be used as a computation server, i.e., a locus of computational activity, or as a data repository for sharing data between asynchronous tasks. SDAs can be internally data parallel, providing support for the integration of data and task parallelism as well as nested task parallelism. They can thus be used to express multidisciplinary applications in a natural and efficient way. In this paper we describe the features of the language through a series of examples and give an overview of the runtime support required to implement these concepts in parallel and distributed environments.
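
Loosely, an SDA behaves like a monitor: a shared object whose methods execute with exclusive access to its state, which asynchronous tasks can use as a data repository. A very rough analogy in C with pthreads (not Opus syntax; all names here are illustrative):

    /* Loose C/pthreads analogy for an SDA used as a data repository:
       tasks call methods that run with exclusive access to shared state.
       Names are illustrative and not taken from Opus. */
    #include <pthread.h>
    #include <stdio.h>

    typedef struct {
        pthread_mutex_t lock;
        double value;                 /* the shared state held by the "SDA" */
    } repository_t;

    static void repo_put(repository_t *r, double v) {
        pthread_mutex_lock(&r->lock); /* one method body active at a time */
        r->value = v;
        pthread_mutex_unlock(&r->lock);
    }

    static double repo_get(repository_t *r) {
        pthread_mutex_lock(&r->lock);
        double v = r->value;
        pthread_mutex_unlock(&r->lock);
        return v;
    }

    int main(void) {
        repository_t repo = { PTHREAD_MUTEX_INITIALIZER, 0.0 };
        repo_put(&repo, 3.14);            /* a producer task would call this */
        printf("%f\n", repo_get(&repo));  /* a consumer task reads it back */
        return 0;
    }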


Archive | 2009

Evolving OpenMP in an Age of Extreme Parallelism

Matthias S. Müller; Bronis R. de Supinski; Barbara M. Chapman

Proceedings of the Fifth International Workshop on OpenMP (IWOMP 2009). Contents: Parallel Simulation of Bevel Gear Cutting Processes with OpenMP Tasks; Evaluation of Multicore Processors for Embedded Systems by Parallel Benchmark Program Using OpenMP; Extending Automatic Parallelization to Optimize High-Level Abstractions for Multicore; Scalability Evaluation of Barrier Algorithms for OpenMP; Use of Cluster OpenMP with the Gaussian Quantum Chemistry Code: A Preliminary Performance Analysis; Evaluating OpenMP 3.0 Run Time Systems on Unbalanced Task Graphs; Dynamic Task and Data Placement over NUMA Architectures: An OpenMP Runtime Perspective; Scalability of Gaussian 03 on SGI Altix: The Importance of Data Locality on CC-NUMA Architecture; Providing Observability for OpenMP 3.0 Applications; A Microbenchmark Suite for Mixed-Mode OpenMP/MPI; Performance Profiling for OpenMP Tasks; Tile Reduction: The First Step towards Tile Aware Parallelization in OpenMP; A Proposal to Extend the OpenMP Tasking Model for Heterogeneous Architectures; Identifying Inter-task Communication in Shared Memory Programming Models.

Collaboration


Dive into Barbara M. Chapman's collaborations.

Top Co-Authors

Hans P. Zima, California Institute of Technology
Oscar R. Hernandez, Oak Ridge National Laboratory
Lei Huang, University of Houston
Matthias S. Müller, Dresden University of Technology
Rengan Xu, University of Houston
Bronis R. de Supinski, Lawrence Livermore National Laboratory