
Publication


Featured research published by John Van Rosendale.


Scientific Programming | 1997

Opus: A Coordination Language for Multidisciplinary Applications

Barbara M. Chapman; Matthew Haines; Piyush Mehrotra; Hans P. Zima; John Van Rosendale

Data parallel languages, such as High Performance Fortran, can be successfully applied to a wide range of numerical applications. However, many advanced scientific and engineering applications are multidisciplinary and heterogeneous in nature, and thus do not fit well into the data parallel paradigm. In this paper we present Opus, a language designed to fill this gap. The central concept of Opus is a mechanism called ShareD Abstractions (SDA). An SDA can be used as a computation server, i.e., a locus of computational activity, or as a data repository for sharing data between asynchronous tasks. SDAs can be internally data parallel, providing support for the integration of data and task parallelism as well as nested task parallelism. They can thus be used to express multidisciplinary applications in a natural and efficient way. In this paper we describe the features of the language through a series of examples and give an overview of the runtime support required to implement these concepts in parallel and distributed environments.


Joint International Conference on Vector and Parallel Processing | 1994

A Software Architecture for Multidisciplinary Applications: Integrating Task and Data Parallelism

Barbara M. Chapman; Piyush Mehrotra; John Van Rosendale; Hans P. Zima

Data parallel languages such as Vienna Fortran and HPF can be successfully applied to a wide range of numerical applications. However, many advanced scientific and engineering applications are of a multidisciplinary and heterogeneous nature and thus do not fit well into the data parallel paradigm. In this paper we present new Fortran 90 language extensions to fill this gap. Tasks can be spawned as asynchronous activities in a homogeneous or heterogeneous computing environment; they interact by sharing access to Shared Data Abstractions (SDAs). SDAs are an extension of Fortran 90 modules, representing a pool of common data, together with a set of methods for controlled access to these data and a mechanism for providing persistent storage. Our language supports the integration of data and task parallelism as well as nested task parallelism and thus can be used to express multidisciplinary applications in a natural and efficient way.
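
The extended syntax itself is not reproduced in the abstract. As a rough illustration, the sketch below is a plain Fortran 90 module, with hypothetical names, of the "pool of common data plus controlled access methods" pattern that SDAs build on; the asynchronous task spawning and persistent storage described in the paper are not shown.

```fortran
! Illustrative only: a plain Fortran 90 module with hypothetical names,
! sketching the "pool of common data plus controlled access methods"
! pattern that SDAs build on. The actual extended syntax, asynchronous
! task spawning, and persistent storage are not shown.
module coupling_data
  implicit none
  private
  real, allocatable :: pressure(:)       ! the shared data pool
  logical :: pressure_valid = .false.
  public :: put_pressure, get_pressure

contains

  ! Called by the producer code (e.g. a flow solver) to deposit data.
  subroutine put_pressure(p)
    real, intent(in) :: p(:)
    if (allocated(pressure)) deallocate(pressure)
    allocate(pressure(size(p)))
    pressure = p
    pressure_valid = .true.
  end subroutine put_pressure

  ! Called by a consumer code (e.g. a structural solver) to read it back.
  subroutine get_pressure(p, ok)
    real, intent(out) :: p(:)
    logical, intent(out) :: ok
    ok = .false.
    if (pressure_valid) ok = (size(p) == size(pressure))
    if (ok) p = pressure
  end subroutine get_pressure

end module coupling_data
```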


International Journal of Parallel Programming | 1987

Semi-automatic process partitioning for parallel computation

Charles Koelbel; Piyush Mehrotra; John Van Rosendale

Automatic process partitioning is the operation of automatically rewriting an algorithm as a collection of tasks, each operating primarily on its own portion of the data, to carry out the computation in parallel. Hybrid shared memory systems provide a hierarchy of globally accessible memories. To achieve high performance on such machines one must carefully distribute the work and the data so as to keep the workload balanced while optimizing the access to nonlocal data. In this paper we consider a semi-automatic approach to process partitioning in which the compiler, guided by advice from the user, automatically transforms programs into such an interacting set of tasks. This approach is illustrated with a picture processing example written in BLAZE, which is transformed by the compiler into a task system maximizing locality of memory reference.
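
BLAZE syntax is not reproduced in the abstract. As a hedged illustration of the same division of labour, the sketch below uses later HPF-style directives with hypothetical names: the directives play the role of the user's advice about data layout, and the compiler is left to derive the per-processor tasks and the accesses to nonlocal data.

```fortran
! Not BLAZE syntax (which the abstract does not reproduce). A later
! HPF-style sketch of the same idea, with hypothetical names: the
! directives are the user's advice about data layout, and the compiler
! derives the parallel tasks and the nonlocal accesses from them.
subroutine smooth(image, n)
  integer, intent(in) :: n
  real, intent(inout) :: image(n, n)
  real :: tmp(n, n)
  integer :: i, j
!HPF$ DISTRIBUTE image(BLOCK, BLOCK)
!HPF$ ALIGN tmp(i, j) WITH image(i, j)

  ! Each processor updates mostly its own block of the picture; only the
  ! block boundaries touch nonlocal data, which the compiler must fetch.
  forall (i = 2:n-1, j = 2:n-1)
    tmp(i, j) = 0.25 * (image(i-1, j) + image(i+1, j) &
                      + image(i, j-1) + image(i, j+1))
  end forall
  image(2:n-1, 2:n-1) = tmp(2:n-1, 2:n-1)
end subroutine smooth
```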


Parallel Computing | 1998

High Performance Fortran: history, status and future

Piyush Mehrotra; John Van Rosendale; Hans P. Zima

High Performance Fortran (HPF) is a data-parallel language that was designed to provide the user with a high-level interface for programming scientific applications, while delegating to the compiler the task of generating an explicitly parallel message-passing program. The main objective of this paper is to study the expressivity of the language and related performance issues. After giving an outline of the developments that led to HPF and briefly explaining its major features, we discuss in detail a variety of approaches for solving multiblock problems and applications dealing with unstructured meshes. We argue that the efficient solution of these problems requires not only the full range of the HPF Approved Extensions, but also additional features such as the explicit control of communication schedules and support for value-based alignment. The final part of the paper points out some classes of problems that are difficult to deal with efficiently within the HPF paradigm.
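
As a rough illustration of the unstructured-mesh difficulty discussed here (hypothetical names, not code from the paper): with indirect addressing, which remote values each processor needs is known only at run time, so the communication schedule cannot be derived statically and is worth computing once and reusing.

```fortran
! Hypothetical sketch, not code from the paper: an edge-based loop over
! an unstructured mesh. The indirection arrays left/right are read in at
! run time, so which entries of u each processor must gather (the
! communication schedule) cannot be determined at compile time.
subroutine edge_fluxes(u, left, right, flux, nnodes, nedges)
  integer, intent(in) :: nnodes, nedges
  real, intent(in)    :: u(nnodes)
  integer, intent(in) :: left(nedges), right(nedges)   ! edge -> node maps
  real, intent(out)   :: flux(nedges)
  integer :: i
!HPF$ DISTRIBUTE u(BLOCK)
!HPF$ DISTRIBUTE flux(BLOCK)
!HPF$ ALIGN left(i) WITH flux(i)
!HPF$ ALIGN right(i) WITH flux(i)

  forall (i = 1:nedges)
    flux(i) = u(right(i)) - u(left(i))
  end forall
end subroutine edge_fluxes
```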


Conference on Object-Oriented Programming, Systems, Languages, and Applications | 1995

SmartFiles: an OO approach to data file interoperability

Matthew Haines; Piyush Mehrotra; John Van Rosendale

Data files for scientific and engineering codes typically consist of a series of raw data values whose description is buried in the programs that interact with these files. In this situation, making even minor changes in the file structure or sharing files between programs (interoperability) can only be done after careful examination of the data files and the I/O statements of the programs interacting with this file. In short, scientific data files lack self-description, and other self-describing data techniques are not always appropriate or useful for scientific data files. By applying an object-oriented methodology to data files, we can add the intelligence required to improve data interoperability and provide an elegant mechanism for supporting complex, evolving, or multidisciplinary applications, while still supporting legacy codes. As a result, scientists and engineers should be able to share datasets with far greater ease, simplifying multidisciplinary applications and greatly facilitating remote collaboration between scientists.
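
The SmartFiles design itself is not detailed in the abstract. The sketch below is only a minimal plain-Fortran illustration, with hypothetical names, of what self-description means for a scientific data file: each field is written with its name, type, and shape ahead of the raw values.

```fortran
! Not the SmartFiles design: a minimal sketch, with hypothetical names,
! of a self-describing scientific data file. The field's name, type tag,
! and shape are recorded ahead of the raw values.
program write_self_describing
  implicit none
  real :: temperature(10, 20)
  temperature = 300.0

  open(unit=11, file='field.dat', form='unformatted')
  write(11) 'temperature'          ! field name
  write(11) 'real', 4              ! element type and size in bytes
  write(11) 2, 10, 20              ! rank and extents
  write(11) temperature            ! the raw values themselves
  close(11)
end program write_self_describing
```

A reading program can then recover the field's name and shape from the header records instead of hard-coding them, which is what makes format changes and file sharing between programs less fragile.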


Applied Mathematics and Computation | 1983

Algorithms and data structures for adaptive multigrid elliptic solvers

John Van Rosendale

With the advent of multigrid iteration, the large linear systems arising in numerical treatment of elliptic boundary value problems can be solved quickly and reliably. This frees the researcher to focus on the other issues involved in numerical solution of elliptic problems: adaptive refinement, error estimation and control, and grid generation. Progress is being made on each of these issues and the technology now seems almost at hand to put together general purpose elliptic software having reliability and efficiency comparable to that of library software for ordinary differential equations. This paper looks at the components required in such general elliptic solvers and suggests new approaches to some of the issues involved. One of these issues is adaptive refinement and the complicated data structures required to support it. These data structures must be carefully tuned, especially in three dimensions where the time and storage requirements of algorithms are crucial. Another major issue is grid generation. The options available seem to be curvilinear fitted grids, constructed on interactive graphics systems, and unfitted Cartesian grids, which can be constructed automatically. On several grounds, including storage requirements, the second option seems preferable for the well-behaved scalar elliptic problems considered here. A variety of techniques for treatment of boundary conditions on such grids have been described previously and are reviewed here. A new approach, which may overcome some of the difficulties encountered with previous approaches, is also presented.
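
As a rough illustration of the data-structure issue raised here, the sketch below shows one common representation of adaptive refinement on an unfitted Cartesian grid (not necessarily the paper's): a tree of cells, each either a leaf or subdivided into eight children in three dimensions, which is exactly where the storage pressure mentioned above comes from. Names are hypothetical.

```fortran
! One common representation (not necessarily the paper's) of adaptive
! refinement on an unfitted Cartesian grid: a tree of cells, each either
! a leaf or subdivided into 8 children in 3-D. Names are hypothetical.
module adaptive_grid
  implicit none

  type :: cell
    real :: xlo(3), xhi(3)                     ! bounding box of this cell
    real :: u                                  ! cell-centred unknown
    logical :: refined = .false.
    type(cell), pointer :: child(:) => null()  ! 8 children once refined
  end type cell

contains

  subroutine refine(c)
    type(cell), intent(inout) :: c
    integer :: k
    if (c%refined) return
    allocate(c%child(8))              ! eight children per refined cell:
    do k = 1, 8                       ! this is where 3-D storage grows fast
      c%child(k)%u = c%u              ! children start from the parent value
    end do                            ! (subdivision of the bounding box is
    c%refined = .true.                !  omitted for brevity)
  end subroutine refine

end module adaptive_grid
```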


Parallel Computing | 1998

High Performance Fortran: Status and Prospects

Piyush Mehrotra; John Van Rosendale; Hans P. Zima

High Performance Fortran (HPF) is a data-parallel language that was designed to provide the user with a high-level interface for programming scientific applications, while delegating to the compiler the task of generating an explicitly parallel message-passing program. In this paper, we give an outline of the developments that led to HPF, briefly explain its major features, and illustrate its use for irregular applications. The final part of the paper points out some classes of problems that are difficult to deal with efficiently within the HPF paradigm.


Languages and Compilers for Parallel Computing | 1990

Programming Distributed Memory Architectures Using Kali

Piyush Mehrotra; John Van Rosendale


International Conference on Supercomputing | 1998

High-level management of communication schedules in HPF-like languages

Siegfried Benkner; Piyush Mehrotra; John Van Rosendale; Hans P. Zima


PPEALS | 1990

Supporting shared data structures on distributed memory machines

Charles Koelbel; Piyush Mehrotra; John Van Rosendale

Collaboration


Dive into John Van Rosendale's collaborations.

Top Co-Authors

Hans P. Zima

California Institute of Technology
