Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Bryan Carpenter is active.

Publication


Featured research published by Bryan Carpenter.


international parallel processing symposium | 1999

ARMCI: A Portable Remote Memory Copy Library for Distributed Array Libraries and Compiler Run-Time Systems

Jarek Nieplocha; Bryan Carpenter

This paper introduces a new portable communication library called ARMCI. ARMCI provides one-sided communication capabilities for distributed array libraries and compiler run-time systems. It supports remote memory copy, accumulate, and synchronization operations optimized for non-contiguous data transfers including strided and generalized UNIX I/O vector interfaces. The library has been employed in the Global Arrays shared memory programming toolkit and Adlib, a Parallel Compiler Run-time Consortium run-time system.
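The non-contiguous transfers that ARMCI optimizes follow a strided pattern: a fixed number of contiguous blocks separated by a constant stride, as when moving a column section of a row-major matrix. Below is a minimal Java sketch of that access pattern; the method name and signature are invented for illustration (the real ARMCI API is a C library).

```java
// Toy illustration of the strided-section transfer pattern that ARMCI
// optimizes. The names here are hypothetical; ARMCI itself exposes a
// C interface for one-sided strided puts and gets.
public class StridedCopy {
    // Copy numBlocks blocks of blockLen contiguous elements each,
    // advancing by srcStride/dstStride between blocks (e.g. gathering
    // a column of a row-major 2-D array into a dense vector).
    static void copyStrided(double[] src, int srcOff, int srcStride,
                            double[] dst, int dstOff, int dstStride,
                            int blockLen, int numBlocks) {
        for (int b = 0; b < numBlocks; b++) {
            System.arraycopy(src, srcOff + b * srcStride,
                             dst, dstOff + b * dstStride, blockLen);
        }
    }

    public static void main(String[] args) {
        // 4x4 row-major matrix; gather its second column.
        double[] m = new double[16];
        for (int i = 0; i < 16; i++) m[i] = i;
        double[] col = new double[4];
        copyStrided(m, 1, 4, col, 0, 1, 1, 4);
        System.out.println(java.util.Arrays.toString(col));  // [1.0, 5.0, 9.0, 13.0]
    }
}
```

A remote-memory-copy library batches such a pattern into one operation instead of issuing one message per block, which is the key to efficiency for distributed array sections.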


Concurrency and Computation: Practice and Experience | 2000

MPJ: MPI‐like message passing for Java

Bryan Carpenter; Vladimir Getov; Glenn Judd; Anthony Skjellum; Geoffrey C. Fox

Recently, there has been a lot of interest in using Java for parallel programming. Efforts have been hindered by lack of standard Java parallel programming APIs. To alleviate this problem, various groups started projects to develop Java message passing systems modelled on the successful Message Passing Interface (MPI). Official MPI bindings are currently defined only for C, Fortran, and C++, so early MPI-like environments for Java have been divergent. This paper relates an effort undertaken by a working group of the Java Grande Forum, seeking a consensus on an MPI-like API, to enhance the viability of parallel programming using Java.
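The programming style such an MPI-like API targets is blocking, rank-addressed message passing. The following is a toy, in-process illustration of that model only; it is not the MPJ or mpiJava API, and the class and method names are invented for this sketch.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A toy, in-process illustration of the blocking send/recv model that
// MPI-like Java APIs expose. NOT the actual MPJ/mpiJava interface;
// real implementations address remote processes, not local threads.
public class ToyComm {
    private final BlockingQueue<int[]>[] mailbox;

    @SuppressWarnings("unchecked")
    ToyComm(int size) {
        mailbox = new BlockingQueue[size];
        for (int i = 0; i < size; i++)
            mailbox[i] = new ArrayBlockingQueue<>(16);
    }

    // Blocking send: deliver a copy of buf to the destination rank.
    void send(int[] buf, int dest) throws InterruptedException {
        mailbox[dest].put(buf.clone());
    }

    // Blocking receive: wait for a message addressed to this rank.
    int[] recv(int me) throws InterruptedException {
        return mailbox[me].take();
    }

    public static void main(String[] args) throws Exception {
        ToyComm comm = new ToyComm(2);
        Thread sender = new Thread(() -> {
            try { comm.send(new int[]{1, 2, 3}, 1); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        sender.start();
        int[] got = comm.recv(1);   // "rank 1" receives the message
        sender.join();
        System.out.println(java.util.Arrays.toString(got));  // [1, 2, 3]
    }
}
```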


international parallel processing symposium | 1999

mpiJava: An Object-Oriented Java Interface to MPI

Mark Baker; Bryan Carpenter; Geoffrey C. Fox; Sung Hoon Ko; Sang Lim

A basic prerequisite for parallel programming is a good communication API. The recent interest in using Java for scientific and engineering applications has led to several international efforts to produce a message passing interface to support parallel computation. In this paper we describe and then discuss the syntax, functionality and performance of one such interface, mpiJava, an object-oriented Java interface to MPI. We first discuss the design of the mpiJava API and the issues associated with its development. We then move on to briefly outline the steps necessary to ‘port’ mpiJava onto a range of operating systems, including Windows NT, Linux and Solaris. In the second part of the paper we present and discuss performance measurements of communication bandwidth and latency, comparing mpiJava on these systems. Finally, we summarise our experiences and then briefly mention work that we plan to undertake.


Proceedings of the ACM 1999 conference on Java Grande | 1999

Object serialization for marshalling data in a Java interface to MPI

Bryan Carpenter; Geoffrey C. Fox; Sung Hoon Ko; Sang Lim

Several Java bindings to Message Passing Interface (MPI) software have been developed recently. Message buffers have usually been restricted to arrays with elements of primitive type. We discuss adoption of the Java object serialization model for marshalling general communication data in MPI-like APIs. This approach is compared with a Java transcription of the standard MPI derived datatype mechanism. We describe an implementation of the mpiJava interface to MPI that incorporates automatic object serialization. Benchmark results confirm that current JDK implementations of serialization are not fast enough for high performance messaging applications. Means of solving this problem are discussed, and benchmarks for greatly improved schemes are presented.
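The marshalling approach the paper studies rests on standard JDK object serialization: any `Serializable` object can be flattened to a byte buffer that a message-passing layer transmits, then reconstructed on the receiving side. A minimal self-contained sketch of that round trip (the class and method names here are illustrative, not mpiJava's):

```java
import java.io.*;

// Sketch of serialization-based marshalling: flatten a Serializable
// object to bytes ("send" side), then restore it ("receive" side),
// using only the standard JDK serialization machinery.
public class SerialMarshal {
    static byte[] marshal(Serializable obj) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);
        }
        return bytes.toByteArray();
    }

    static Object unmarshal(byte[] buf)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(buf))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        int[][] matrix = { {1, 2}, {3, 4} };       // a non-primitive payload
        byte[] wire = marshal(matrix);             // bytes to put on the wire
        int[][] copy = (int[][]) unmarshal(wire);  // reconstructed object
        System.out.println(copy[1][0]);            // prints 3
    }
}
```

The convenience is clear: arbitrary object graphs travel without hand-written packing code. The cost, which the paper's benchmarks quantify, is that this generic mechanism is much slower than moving primitive arrays directly.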


international parallel and distributed processing symposium | 2000

MPJ: A Proposed Java Message Passing API and Environment for High Performance Computing

Mark Baker; Bryan Carpenter

In this paper we sketch out a proposed reference implementation for message passing in Java (MPJ), an MPI-like API from the Message-Passing Working Group of the Java Grande Forum [1,2]. The proposal relies heavily on RMI and Jini for finding computational resources, creating slave processes, and handling failures. User-level communication is implemented efficiently directly on top of Java sockets.


european conference on parallel processing | 1998

Towards a Java Environment for SPMD Programming

Bryan Carpenter; Guansong Zhang; Geoffrey C. Fox; Xiaoming Li; Xinying Li; Yuhong Wen

As a relatively straightforward object-oriented language, Java is a plausible basis for a scientific parallel programming language. We outline a conservative set of language extensions to support this kind of programming. The programming style advocated is Single Program Multiple Data (SPMD), with parallel arrays added as language primitives. Communications involving distributed arrays are handled through a standard library of collective operations. Because the underlying programming model is SPMD programming, direct calls to other communication packages are also possible from this language.
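The parallel arrays the extensions add are distributed by blocks: each of P processes owns one contiguous slice of a global array. A minimal sketch of that ownership computation (the names are illustrative, not HPJava syntax):

```java
// Block distribution underlying SPMD distributed arrays: each of
// `procs` processes owns one contiguous slice of a global array of
// `n` elements, with any remainder spread over the first ranks.
public class BlockDist {
    // Global index range [lo, hi) owned by process `rank`.
    static int[] localRange(int n, int procs, int rank) {
        int base = n / procs, rem = n % procs;
        int lo = rank * base + Math.min(rank, rem);
        int hi = lo + base + (rank < rem ? 1 : 0);
        return new int[]{lo, hi};
    }

    public static void main(String[] args) {
        // Distribute 10 elements over 3 processes: slice sizes 4, 3, 3.
        for (int r = 0; r < 3; r++) {
            int[] range = localRange(10, 3, r);
            System.out.println("rank " + r + ": [" + range[0]
                               + ", " + range[1] + ")");
        }
    }
}
```

In the SPMD style, every process runs this same program and uses its own range to decide which elements it updates; the collective library handles any communication between slices.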


languages and compilers for parallel computing | 1997

PCRC-based HPF Compilation

Guansong Zhang; Bryan Carpenter; Geoffrey C. Fox; Xiaoming Li; Xinying Li; Yuhong Wen

This paper describes an ongoing effort supported by the ARPA PCRC (Parallel Compiler Runtime Consortium) project. In particular, we discuss the design and implementation of an HPF compilation system based on the PCRC runtime. The approaches to issues such as directive analysis and communication detection are discussed in detail. The discussion includes fragments of code generated by the compiler.


languages and compilers for parallel computing | 1997

Java as a Language for Scientific Parallel Programming

Bryan Carpenter; Yuh-Jye Chang; Geoffrey C. Fox; Xiaoming Li

Java may be a natural language for portable parallel programming. We discuss the basis of this claim in general terms, then illustrate the use of Java for message-passing and data-parallel programming through a series of case studies. In the process we introduce some proposals for a Java binding of MPI, and describe the use of a Java class library to implement HPF-style distributed data. Prospects for future Java-based parallel programming environments are discussed.


languages and compilers for parallel computing | 1998

Considerations in HPJava Language Design and Implementation

Guansong Zhang; Bryan Carpenter; Geoffrey C. Fox; Xinying Li; Yuhong Wen

This paper discusses some design and implementation issues in the HPJava language. The language is briefly reviewed, then the class library that forms the foundation of the translation scheme is described. Through example codes, we illustrate how HPJava source codes can be translated straightforwardly to ordinary SPMD Java programs calling this library. This is followed by a discussion of the rationale for introducing the language in the first place, and of how various language features have been designed to facilitate efficient implementation.


high level parallel programming models and supportive environments | 1998

Language bindings for a data-parallel runtime

Bryan Carpenter; Geoffrey C. Fox; Donald Leskiw; Xiaoming Li; Yuhong Wen; Guansong Zhang

The NPAC kernel runtime, developed in the PCRC (Parallel Compiler Runtime Consortium) project, is a runtime library with special support for the High Performance Fortran data model. It provides array descriptors for a generalized class of HPF-like distributed arrays, support for parallel access to their elements, and a rich library of collective communication and arithmetic operations for manipulating these arrays. The library has been successfully used as a component in experimental HPF translation systems. With prospects for the early appearance of fully featured, efficient HPF compilers looking questionable, we discuss a class of more easily implementable data parallel language extensions that preserve many of the attractive features of HPF, while providing the programmer with direct access to runtime libraries such as the NPAC PCRC kernel.
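At minimum, an array descriptor of the kind mentioned above must record enough metadata to map a global index to its owning process and local offset. A sketch for a simple block distribution follows; the field and method names are illustrative only, and the NPAC kernel's actual descriptors are richer (multi-dimensional, with strides and ghost regions).

```java
// Sketch of an HPF-style block-distributed array descriptor: metadata
// mapping a global index to (owning process, local offset). Names are
// hypothetical; real descriptors carry much more structure.
public class ArrayDescriptor {
    final int globalSize, procs, blockSize;

    ArrayDescriptor(int globalSize, int procs) {
        this.globalSize = globalSize;
        this.procs = procs;
        this.blockSize = (globalSize + procs - 1) / procs;  // ceiling division
    }

    int owner(int g) { return g / blockSize; }  // process holding element g
    int local(int g) { return g % blockSize; }  // offset within that block

    public static void main(String[] args) {
        ArrayDescriptor d = new ArrayDescriptor(100, 4);      // blocks of 25
        System.out.println(d.owner(60) + " " + d.local(60));  // prints "2 10"
    }
}
```

Collective operations in such a library consult this mapping to decide which elements each process touches locally and which require communication.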

Collaboration


Dive into Bryan Carpenter's collaborations.

Top Co-Authors

Geoffrey C. Fox

Indiana University Bloomington


Sang Boem Lim

Florida State University


Sung Hoon Ko

Indiana University Bloomington
