
Publications


Featured research published by Karen H. Warren.


Scientific Programming | 1992

The Parallel C Preprocessor

Eugene D. Brooks; Brent C. Gorda; Karen H. Warren

We describe a parallel extension of the C programming language designed for multiprocessors that provide a facility for sharing memory between processors. The programming model was initially developed on conventional shared memory machines with small processor counts such as the Sequent Balance and Alliant FX/8, but has more recently been used on a scalable massively parallel machine, the BBN TC2000. The programming model is split-join rather than fork-join. Concurrency is exploited to use a fixed number of processors more efficiently rather than to exploit more processors as in the fork-join model. Team splitting, a mechanism to split the team of processors executing a code into subteams to handle parallel subtasks, is used to provide an efficient mechanism to exploit nested concurrency. We have found the split-join programming model to have an inherent implementation advantage, compared to the fork-join model, when the number of processors in a machine becomes large.


International Workshop on OpenMP | 2003

DMPL: an OpenMP DLL debugging interface

James Cownie; John DelSignore; Bronis R. de Supinski; Karen H. Warren

OpenMP is a widely adopted standard for threading directives across compiler implementations. The standard is very successful since it provides application writers with a simple, portable programming model for introducing shared memory parallelism into their codes. However, the standard does not address key issues for supporting that programming model in development tools such as debuggers. In this paper, we present DMPL, an OpenMP debugger interface that can be implemented as a dynamically loaded library. DMPL is currently being considered by the OpenMP Tools Committee as a mechanism to bridge the development tool gap in the OpenMP standard.


Conference on High Performance Computing (Supercomputing) | 1997

A Study of Performance on SMP and Distributed Memory Architectures Using a Shared Memory Programming Model

Eugene D. Brooks; Karen H. Warren

We examine the use of a shared memory programming model to address the problem of portability between distributed memory and shared memory architectures. We conduct this evaluation by extending an existing programming model, the Parallel C Preprocessor, with a type qualifier interpretation of the data sharing keywords borrowed from the Split-C and AC compilers. We evaluate the performance of the resulting programming model on a wide range of shared memory and distributed memory computing platforms using several numerical algorithms as benchmarks. We find the type-qualifier-based programming model capable of efficient execution on distributed memory and shared memory architectures.


COMPCON Spring '91 Digest of Papers | 1991

Gauss elimination: a case study on parallel machines

Karen H. Warren; Eugene D. Brooks

The authors report their experiences with the Gauss elimination algorithm on several parallel machines. Several different software designs are demonstrated, ranging from a simple shared memory implementation to the use of a message passing programming model. It is found that the efficient use of local memory is critical to obtaining good performance on scalable machines. Machines with large coherent caches appear to require the least software effort in order to obtain effective performance.


COMPCON Spring '91 Digest of Papers | 1991

BBN TC2000 architecture and programming models

Eugene D. Brooks; Brent C. Gorda; Karen H. Warren; Tammy S. Welcome

The BBN TC2000 is a scalable general-purpose parallel architecture capable of efficiently supporting both shared memory and message passing programming paradigms. The TC2000 machine architecture and the programming models that have been implemented on it are described. In particular, the split-join model, its memory model, and the message passing model are described. Specifics on how the implementations of these models take advantage of the architecture are included. The synchronization primitives offered in PCP (parallel C preprocessor) and PFP (parallel Fortran preprocessor) are discussed, and the debugging and performance monitoring abilities within the models are considered. The time and space scheduling mechanism used on the machine is described.


Scientific Programming | 1996

PDDP, a data parallel programming model

Karen H. Warren

PDDP, the parallel data distribution preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements high-performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates codes that are portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.


Archive | 2004

A White Paper Prepared for the OpenMP Architectural Review Board on DMPL: An OpenMP DLL Debugging Interface

James Cownie; B R de Supinski; Karen H. Warren

OpenMP is a widely adopted standard for threading directives across compiler implementations. The standard is very successful since it provides application writers with a simple, portable programming model for introducing shared memory parallelism into their codes. However, the standard does not address key issues for supporting that programming model in development tools such as debuggers. In this paper, we present DMPL, an OpenMP debugger interface that can be implemented as a dynamically loaded library. DMPL is currently being considered by the OpenMP Tools Committee as a mechanism to bridge the development tool gap in the OpenMP standard.


Proceedings of the First International ACPC Conference on Parallel Computation | 1991

The PCP/PFP Programming Models on the BBN TC2000

Eugene D. Brooks; Brent C. Gorda; Karen H. Warren

We describe the PCP/PFP programming models which we are using on the BBN TC2000. The parallel programming models are implemented in a portable manner and will be useful on the scalable shared memory machines we expect to see in the future. We then describe the TC2000 machine architecture, which is a scalable general-purpose parallel architecture capable of efficiently supporting both shared memory and message passing programming paradigms. We also briefly describe a PCP implementation of the Gauss elimination algorithm which exploits the large local memories on the TC2000.


Archive | 2000

Introduction to UPC and Language Specification

William Carlson; Jesse M. Draper; David E. Culler; Katherine A. Yelick; Eugene D. Brooks; Karen H. Warren


International Conference on Parallel Processing | 1991

Split-Join and Message Passing Programming Models on the BBN TC2000

Eugene D. Brooks; Brent C. Gorda; Karen H. Warren; Tammy S. Welcome

Collaboration


Dive into Karen H. Warren's collaboration.

Top Co-Authors

Eugene D. Brooks (Lawrence Livermore National Laboratory)
Brent C. Gorda (Lawrence Livermore National Laboratory)
Tammy S. Welcome (Lawrence Livermore National Laboratory)
Bronis R. de Supinski (Lawrence Livermore National Laboratory)
Katherine A. Yelick (Lawrence Berkeley National Laboratory)