Publication


Featured research published by Robert P. Weaver.


Journal of Parallel and Distributed Computing | 1991

The DINO parallel programming language

Matthew Rosing; Robert B. Schnabel; Robert P. Weaver

DINO (DIstributed Numerically Oriented language) is a language for writing parallel programs for distributed memory (MIMD) multiprocessors. It is oriented toward expressing data parallel algorithms, which predominate in parallel numerical computation. Its goal is to make programming such algorithms natural and easy, without hindering their run-time efficiency. DINO consists of standard C augmented by several high-level parallel constructs that are intended to allow the parallel program to conform to the way an algorithm designer naturally thinks about parallel algorithms. The key constructs are the ability to declare a virtual parallel computer that is best suited to the parallel computation, the ability to map distributed data structures onto this virtual machine, and the ability to define procedures that will run on each processor of the virtual machine concurrently. Most of the remaining details of distributed parallel computation, including process management and interprocessor communication, result implicitly from these high-level constructs and are handled automatically by the compiler. This paper describes the syntax and semantics of the DINO language, gives examples of DINO programs, presents a critique of the DINO language features, and discusses the performance of code generated by the DINO compiler.
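
The constructs described above map onto an explicit message-passing substrate roughly as follows. The sketch below is a plain C/MPI analogue, not DINO syntax; the process count, block size, and stencil update are illustrative assumptions. The set of MPI processes stands in for the virtual parallel machine, each process holds one block of a distributed vector, and a procedure run concurrently on every process performs the neighbour exchange that a DINO compiler would generate implicitly from accesses to the distributed data.

/* Illustrative C/MPI analogue of the DINO concepts described above:
 * a virtual machine of processes, a block-distributed array, and a
 * per-process procedure.  This is NOT DINO syntax; in DINO the halo
 * exchange below is generated by the compiler from accesses to the
 * distributed data. */
#include <mpi.h>
#include <stdio.h>

#define N_LOCAL 4            /* assumed block size owned by each process */

/* Runs concurrently on every process of the virtual machine. */
static void smooth(double local[N_LOCAL + 2], int rank, int nprocs)
{
    int left  = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < nprocs - 1) ? rank + 1 : MPI_PROC_NULL;

    /* Exchange boundary values with the neighbouring blocks. */
    MPI_Sendrecv(&local[1], 1, MPI_DOUBLE, left,  0,
                 &local[N_LOCAL + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&local[N_LOCAL], 1, MPI_DOUBLE, right, 1,
                 &local[0], 1, MPI_DOUBLE, left,  1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* Local update using the exchanged halo values. */
    for (int i = 1; i <= N_LOCAL; i++)
        local[i] = (local[i - 1] + local[i] + local[i + 1]) / 3.0;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Each process holds one block of the distributed vector,
     * plus one halo cell on each side. */
    double local[N_LOCAL + 2];
    for (int i = 0; i < N_LOCAL + 2; i++)
        local[i] = (double)rank;

    smooth(local, rank, nprocs);

    printf("rank %d: local[1] = %g\n", rank, local[1]);
    MPI_Finalize();
    return 0;
}

Compiled with mpicc and run with, for example, mpirun -np 4, each rank prints its updated first interior element. The point of the analogy is that in DINO the two exchange calls are not written by the programmer but produced by the compiler from the distributed-data accesses.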


Hypercube Concurrent Computers and Applications | 1988

Dino: summary and examples

Matthew Rosing; Robert B. Schnabel; Robert P. Weaver

Dino is a new language, consisting of high-level modifications to C, for writing numerical programs on distributed memory multiprocessors. Our intent is to raise interprocess communication and process control to a higher and more natural level than using messages. We achieve this by allowing the user to define a virtual machine onto which data structures can be distributed. Interprocess communication is implicitly invoked by reading and writing the distributed data. Parallelism is achieved by making concurrent procedure calls. This paper provides a summary of the syntax and semantics of Dino, and illustrates its features through several sample programs. We also briefly discuss a prototype of the language we have developed using C++.
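
As an illustration of communication being "implicitly invoked by reading and writing the distributed data", the following C/MPI fragment (an assumed two-process setup, not Dino code) shows what a single remote read of a distributed element can compile down to: a receive on the reading processor matched by a send on the owner.

/* Sketch of what an implicit remote read of distributed data can
 * compile down to: the owning process sends the value, the reading
 * process receives it.  Assumes exactly two processes; not Dino syntax. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double x = 10.0 * (rank + 1);   /* each process owns one element */
    double remote = 0.0;

    if (rank == 0) {
        /* Conceptually: read the element owned by process 1.
         * Generated code on the reader: a matching receive. */
        MPI_Recv(&remote, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("process 0 read %g from process 1\n", remote);
    } else if (rank == 1) {
        /* Generated code on the owner: a send of the requested value. */
        MPI_Send(&x, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

In Dino the pairing of the send and the receive is derived from the program structure rather than written out; the fragment only shows the message traffic such a remote read ultimately requires.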


Distributed Memory Computing Conference | 1990

Mapping Data to Processors in Distributed Memory Computations

Matthew Rosing; Robert P. Weaver

The authors present a structured scheme for allowing a programmer to specify the mapping of data to distributed memory multiprocessors. This scheme lets the programmer specify information about communication patterns as well as information about distributing data structures onto processors (including partitioning with replication). This mapping scheme allows the user to map arrays of data to arrays of processors. The user specifies how each axis of the data structure is mapped onto an axis of the processor structure. This mapping may either be one to one or one to many depending on the parallelism, load balancing, and communication requirements. The authors discuss the basics of how this scheme is implemented in the DINO language, the areas in which it has worked well, the few areas in which there were significant problems, and some ideas for future improvements.
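
A minimal, self-contained sketch (plain C; the array and processor-grid sizes and the block rule are assumed for illustration, not taken from the DINO implementation) of the axis-to-axis mapping the paper describes: each axis of a 2-D data array is mapped block-wise onto an axis of a 2-D processor array, and one function answers which processor owns element (i, j).

/* Illustration of mapping data-array axes onto processor-array axes.
 * Sizes and the block rule are assumed for the example. */
#include <stdio.h>

#define NI 8          /* data array is NI x NJ      */
#define NJ 8
#define PI 2          /* processor array is PI x PJ */
#define PJ 4

/* Block mapping of one data axis of length n onto a processor axis of
 * length p: element index i belongs to processor i / ceil(n/p). */
static int block_owner(int i, int n, int p)
{
    int block = (n + p - 1) / p;      /* ceiling division */
    return i / block;
}

int main(void)
{
    /* One-to-one mapping: data axis 0 -> processor axis 0,
     * data axis 1 -> processor axis 1. */
    for (int i = 0; i < NI; i++) {
        for (int j = 0; j < NJ; j++) {
            int pi = block_owner(i, NI, PI);
            int pj = block_owner(j, NJ, PJ);
            printf("(%d,%d)->P[%d][%d]  ", i, j, pi, pj);
        }
        printf("\n");
    }
    /* A one-to-many mapping would instead replicate a data axis across
     * a whole processor axis, trading memory for reduced communication. */
    return 0;
}

Other mappings (for example cyclic, or partitioning with replication as mentioned in the abstract) change only the ownership rule; the rest of the program is unaffected, which is the attraction of specifying the mapping separately from the computation.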


Parallel Computing | 1992

Scientific programming languages for distributed memory multiprocessors: paradigms and research issues

Matthew Rosing; Robert B. Schnabel; Robert P. Weaver

This paper attempts to identify some of the central concepts, issues, and challenges that are emerging in the development of imperative, data parallel programming languages for distributed memory multiprocessors. It first describes a common paradigm for such languages that appears to be emerging. The key elements of this paradigm are the specification of distributed data structures, the specification of a virtual parallel computer, and the use of some model of parallel computation and communication. The paper illustrates these concepts briefly with the DINO programming language. Then it discusses some key research issues associated with each element of the paradigm. The most interesting aspect is the model of parallel computation and communication, where there is a considerable diversity of approaches. The paper proposes a new categorization for these approaches, and discusses the relative advantages or disadvantages of the different models.


SIGPLAN Notices | 1993

A programmable preprocessor approach to efficient parallel language design

Matthew Rosing; Robert B. Schnabel; Robert P. Weaver

Summary. This paper briefly describes the design and philosophy of the new stage of our parallel language research, which we are just now beginning. The basic goals for this new language are to support a very broad range of parallel computations, to provide ways to obtain efficient code over this entire range, and to provide ease of programming as far as this is consistent with the first two goals. To achieve these goals, we are designing an explicitly parallel language that allows the expression of fundamental parallel constructs, including synchronization, communication, and data distribution, at either low or high levels, with a well-structured progression between levels. The multi-level approach to parallel language constructs should allow the language to meet the needs of a broad variety of sophisticated and unsophisticated users, as well as providing expressiveness and efficiency for a broad range of applications. This approach will also allow us to develop a fully expressive prototype of our language quickly.

Goals of the new language. The goal of the language is to be able to express a very broad range of scientific computations in ways that lead to nearly optimal utilization of the parallel computer, and to provide convenient programming as far as is consistent with the desired expressiveness and efficiency. This leads to the following strongly interrelated subgoals. Broad expressiveness and applicability: the language should be capable of expressing a very broad range of parallel computations. To do this it will need to support a variety of user-programmable high-level paradigms, including, for example, support for parallel operations on large dense arrays and support for irregular data structures. It will also need to provide clean support for multi-level and multi-phase parallel algorithms. Execution efficiency: the language should be capable of delivering nearly the same efficiency that is obtainable by programming using the low-level, vendor-supplied primitives. This implies that the language needs to support access to low-level synchronization, communication, and storage constructs. Ease of use: the language should allow the programmer to express parallel algorithms using convenient, high-level constructs where appropriate and low-level constructs when needed. It is also important that the language have as few constructs as possible so as not to become overly baroque and difficult to understand.

Some related work. A variety of current research projects and language design efforts, including Fortran D [FHK+90], Vienna Fortran [CMZ92], and High Performance Fortran, are exploring the approach of obtaining parallel programs by annotating a standard sequential program with a rich variety of data distribution statements, and possibly with parallel …
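
To make the low-level/high-level progression concrete, here is a small C/MPI sketch (an analogue built on standard MPI primitives, not the proposed language) of the same global sum written twice: once with a single high-level collective, and once with the explicit point-to-point operations a programmer would drop down to when the high-level form is not sufficient.

/* The same global sum at two levels of abstraction -- a stand-in for
 * the "high-level constructs where appropriate, low-level constructs
 * when needed" goal described above.  C with MPI, not the proposed
 * language. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    double local = rank + 1.0;

    /* High level: one collective expresses the whole reduction. */
    double sum_high = 0.0;
    MPI_Allreduce(&local, &sum_high, 1, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    /* Low level: explicit sends and receives spelling out a simple
     * (non-optimal) reduction to rank 0 followed by a broadcast. */
    double sum_low = local;
    if (rank == 0) {
        for (int src = 1; src < nprocs; src++) {
            double v;
            MPI_Recv(&v, 1, MPI_DOUBLE, src, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            sum_low += v;
        }
    } else {
        MPI_Send(&local, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    }
    MPI_Bcast(&sum_low, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("high-level sum = %g, low-level sum = %g\n",
               sum_high, sum_low);

    MPI_Finalize();
    return 0;
}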


Archive | 1989

Expressing Complex Parallel Algorithms in DINO

Matthew Rosing; Robert B. Schnabel; Robert P. Weaver


Archive | 1990

Massive Parallelism and Process Contraction in Dino

Matthew Rosing; Robert B. Schnabel; Robert P. Weaver


IEEE International Conference on High Performance Computing, Data and Analytics | 1992

Automatic mapping and load balancing of pointer-based dynamic data structures on distributed memory machines

Robert P. Weaver; Robert B. Schnabel


Archive | 1991

Scientific Programming Languages for Distributed Memory Multiprocessors: Paradigms and Research Issues; CU-CS-537-91

Matthew Rosing; Robert B. Schnabel; Robert P. Weaver


Archive | 1990

The DINO Parallel Programming Language; CU-CS-457-90

Matthew Rosing; Robert B. Schnabel; Robert P. Weaver

Collaboration


Dive into Robert P. Weaver's collaborations.

Top Co-Authors

Matthew Rosing

University of Colorado Boulder

Robert B. Schnabel

University of Colorado Boulder
