
Publication


Featured research published by Matthew Haines.


Scientific Programming | 1997

Opus: A Coordination Language for Multidisciplinary Applications

Barbara M. Chapman; Matthew Haines; Piyush Mehrotra; Hans P. Zima; John Van Rosendale

Data parallel languages, such as High Performance Fortran, can be successfully applied to a wide range of numerical applications. However, many advanced scientific and engineering applications are multidisciplinary and heterogeneous in nature, and thus do not fit well into the data parallel paradigm. In this paper we present Opus, a language designed to fill this gap. The central concept of Opus is a mechanism called ShareD Abstractions (SDA). An SDA can be used as a computation server, i.e., a locus of computational activity, or as a data repository for sharing data between asynchronous tasks. SDAs can be internally data parallel, providing support for the integration of data and task parallelism as well as nested task parallelism. They can thus be used to express multidisciplinary applications in a natural and efficient way. In this paper we describe the features of the language through a series of examples and give an overview of the runtime support required to implement these concepts in parallel and distributed environments.
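
An SDA is essentially a monitor-style shared object: asynchronous tasks invoke its methods, and the SDA serializes access to the data it guards. Below is a minimal sketch of that idea in C with POSIX threads; it is illustrative only (Opus is a language-level construct with its own syntax and runtime, and all names here are invented):

```c
/* Monitor-style "shared abstraction": asynchronous tasks deposit and
 * fetch data through synchronized methods, much as an SDA acting as a
 * data repository would. Hypothetical sketch, not the Opus runtime. */
#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  nonempty;
    double          value;
    int             ready;
} sda_t;

void sda_init(sda_t *s) {
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonempty, NULL);
    s->ready = 0;
}

/* "Method" invoked by a producer task. */
void sda_put(sda_t *s, double v) {
    pthread_mutex_lock(&s->lock);
    s->value = v;
    s->ready = 1;
    pthread_cond_signal(&s->nonempty);
    pthread_mutex_unlock(&s->lock);
}

/* "Method" invoked by a consumer task; blocks until data arrives. */
double sda_get(sda_t *s) {
    pthread_mutex_lock(&s->lock);
    while (!s->ready)
        pthread_cond_wait(&s->nonempty, &s->lock);
    s->ready = 0;
    double v = s->value;
    pthread_mutex_unlock(&s->lock);
    return v;
}
```

A computation-server SDA would additionally run its own thread to execute requested methods; the sketch shows only the data-repository role.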


Hawaii International Conference on System Sciences | 1997

Thread migration in the presence of pointers

David Cronk; Matthew Haines; Piyush Mehrotra

Dynamic migration of lightweight threads supports both data locality and load balancing. However, migrating threads that contain pointers referencing data in both the stack and heap remains an open problem. We describe a technique by which threads with pointers referencing both stack and non-shared heap data can be migrated such that the pointers remain valid after migration. As a result, threads containing pointers can now be migrated between processors in a homogeneous distributed memory environment.
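
The abstract leaves the mechanism implicit, but the core difficulty is easy to demonstrate: once a thread's stack and heap segments are copied to different addresses on the destination processor, every pointer into those segments is stale unless translated by the displacement between the old and new copies. A toy C sketch of such a fix-up pass (hypothetical, not the paper's actual migration protocol):

```c
/* Toy pointer fix-up after relocating a memory region: the core problem
 * in migrating a thread whose stack or heap contains pointers. This is
 * a hypothetical sketch, not the paper's migration mechanism. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* If p points into [old_base, old_base + size), rebase it so that it
 * points at the same offset within the new copy of the region. */
static void *rebase(void *p, char *old_base, char *new_base, size_t size) {
    uintptr_t a  = (uintptr_t)p;
    uintptr_t lo = (uintptr_t)old_base;
    if (a >= lo && a - lo < size)
        return new_base + (a - lo);   /* preserve the offset */
    return p;                         /* pointer targets unmoved memory */
}

int main(void) {
    size_t size = 64;
    char *old_region = malloc(size);

    int *x = (int *)(old_region + 16);     /* a value in the region */
    *x = 42;
    int **px = (int **)(old_region + 32);  /* a pointer stored in the region */
    *px = x;

    /* "Migrate": copy the region, then translate the interior pointer. */
    char *new_region = malloc(size);
    memcpy(new_region, old_region, size);
    int **new_px = (int **)(new_region + 32);
    *new_px = rebase(*new_px, old_region, new_region, size);

    printf("%d\n", **new_px);              /* prints 42 from the new copy */
    free(old_region);
    free(new_region);
    return 0;
}
```

The hard part the paper addresses is finding those interior pointers in the first place; the sketch assumes their locations are known.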


Conference on Object-Oriented Programming, Systems, Languages, and Applications | 1995

SmartFiles: an OO approach to data file interoperability

Matthew Haines; Piyush Mehrotra; John Van Rosendale

Data files for scientific and engineering codes typically consist of a series of raw data values whose description is buried in the programs that interact with these files. In this situation, making even minor changes in the file structure or sharing files between programs (interoperability) can only be done after careful examination of the data files and the I/O statements of the programs interacting with this file. In short, scientific data files lack self-description, and other self-describing data techniques are not always appropriate or useful for scientific data files. By applying an object-oriented methodology to data files, we can add the intelligence required to improve data interoperability and provide an elegant mechanism for supporting complex, evolving, or multidisciplinary applications, while still supporting legacy codes. As a result, scientists and engineers should be able to share datasets with far greater ease, simplifying multidisciplinary applications and greatly facilitating remote collaboration between scientists.
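
The underlying idea is to store the description with the data rather than bury it in each program's I/O statements. As a rough illustration (an invented format, not the SmartFiles design), the C sketch below writes a small header naming each field, its type, and its length ahead of the raw values, so a reader can discover the layout instead of hard-coding it:

```c
/* Minimal self-describing data file: a text header names each field and
 * its type and count, followed by the raw values. Hypothetical sketch of
 * the self-description idea, not the SmartFiles design. */
#include <stdio.h>

int main(void) {
    /* Writer: emit the description alongside the data. */
    FILE *f = fopen("grid.dat", "w");
    fprintf(f, "fields 2\n");
    fprintf(f, "pressure double 3\n");
    fprintf(f, "temperature double 3\n");
    double p[3] = {1.0, 1.5, 2.0}, t[3] = {300.0, 310.0, 320.0};
    for (int i = 0; i < 3; i++) fprintf(f, "%f ", p[i]);
    for (int i = 0; i < 3; i++) fprintf(f, "%f ", t[i]);
    fclose(f);

    /* Reader: learn the layout from the header, not from hard-coded I/O. */
    f = fopen("grid.dat", "r");
    int nfields;
    fscanf(f, "fields %d", &nfields);
    for (int i = 0; i < nfields; i++) {
        char name[64], type[16];
        int count;
        fscanf(f, "%63s %15s %d", name, type, &count);
        printf("field %s: %d %s values\n", name, count, type);
    }
    fclose(f);
    return 0;
}
```

An object-oriented version would additionally attach behavior (unit conversion, format evolution) to the file object, which is where the interoperability gains come from.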


Concurrency and Computation: Practice and Experience | 2000

On the Implementation of the Opus Coordination Language

Erwin Laure; Matthew Haines; Piyush Mehrotra; Hans P. Zima

Opus is a new programming language designed to assist in coordinating the execution of multiple, independent program modules. With the help of Opus, coarse-grained task parallelism between data parallel modules can be expressed in a clean and structured way. In this paper we address the problem of how to build a compilation and runtime support system that can efficiently implement the Opus constructs. Our design considers the often-conflicting goals of efficiency and modular construction through software re-use. In particular, we present the system requirements for an efficient Opus implementation and the Opus runtime system, and describe how they work together to provide the underlying services that the Opus compiler needs for a broad class of machines.


ACM Symposium on Applied Computing | 1999

MARS: Runtime Support for Coordinated Applications

Neal Sample; Carl Bartlett; Matthew Haines

Expert knowledge from many disciplines is frequently embodied in stand-alone codes used to solve particular problems. Codes from various disciplines can be composed into cooperative ensembles that can answer questions larger than any solitary code can. These multi-code compositions are called multidisciplinary applications and are a growing area of research. To support the integration of existing codes into multidisciplinary applications, we have constructed the Multidisciplinary Application Runtime System (MARS). MARS supports legacy modules, heterogeneous execution environments, conditional execution flows, dynamic module invocation and realignment, runtime binding of output data paths, and a simple specification language to script module actions.
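
The flavor of such a runtime can be suggested with a toy coordinator: stand-alone codes are wrapped behind a uniform signature, bound into a pipeline at runtime, and invoked under a conditional execution flow. This is a hypothetical C sketch, not the MARS interface:

```c
/* Toy coordination of stand-alone "modules": each module is wrapped in
 * a uniform signature, and the coordinator routes one module's output
 * to the next, choosing the flow conditionally. Hypothetical sketch of
 * the coordination idea; not the MARS API. */
#include <stdio.h>

typedef double (*module_fn)(double);   /* uniform wrapper signature */

double flow_solver(double x)  { return x * 2.0; }
double refine_mesh(double x)  { return x + 1.0; }
double postprocess(double x)  { return x / 10.0; }

int main(void) {
    double data = 3.0;

    /* The "script" of module actions, bound at runtime. */
    module_fn pipeline[] = { flow_solver, postprocess };
    int n = sizeof pipeline / sizeof pipeline[0];

    for (int i = 0; i < n; i++) {
        data = pipeline[i](data);
        /* Conditional execution flow: refine only when triggered. */
        if (i == 0 && data > 5.0)
            data = refine_mesh(data);
    }
    printf("ensemble result: %f\n", data);
    return 0;
}
```

MARS generalizes this picture to legacy executables on heterogeneous machines, with data paths bound at runtime rather than function pointers in one process.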


Journal of Parallel and Distributed Computing | 1996

Special Issue on Multithreading for Multiprocessors

Matthew Haines; Piyush Mehrotra

Hardware multithreading divides a computation into a group of threads that are scheduled for execution based on the availability of the data needed for their computations. Thread size is limited to the number of consecutive instructions that can be performed without needing to reference a memory location. Examples of multithreading hardware systems include the HEP [25], the J-Machine [8], Monsoon [21], *T [20], and Tera [1]. Unfortunately, because multithreaded architectures require special-purpose chip designs, they have difficulty competing with the ever-increasing performance of commodity components, particularly microprocessors. Thus, of the machines mentioned above, only the Tera is still commercially viable.

Software multithreading is achieved by multiplexing several threads atop a single processor. From this definition it might appear that the standard Unix operating system [4] exhibits software multithreading, in that it multiplexes a number of processes atop a single processor. However, the difference between Unix and software multithreading lies in the definitions of processes and threads. A Unix process is defined to include both a sequential unit of computation and an address space in which the computation takes place. Each process therefore maintains its own set of address mapping tables, and a context switch between two processes involves changing these tables as well as the context of the computation. A thread, on the other hand, is defined only as a sequential unit of computation; there is no notion of an address space within a thread. Rather, one or more threads execute within an address space provided by the operating system. A context switch between threads involves only changing the context of the computations, and is therefore termed lightweight with respect to a context switch between two processes. In terms of speed, a context switch between two lightweight threads is often two orders of magnitude faster than a context switch between two Unix processes.

Another design question for software multithreading is whether the threads are implemented by the operating system in kernel space (kernel-level threads) or in user space (user-level threads). Kernel threads have the advantage of being tightly integrated with the scheduling and interrupt-handling facilities, but thread operations must cross the kernel boundary, making them more expensive than their user-level counterparts.
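
The lightweight-switch argument is easy to see concretely: a user-level thread switch saves and restores register state without invoking the kernel's scheduler. Below is a minimal sketch using the legacy POSIX ucontext API; it is illustrative only, not any of the threads packages surveyed in the issue:

```c
/* Two user-level threads multiplexed on one process, switching contexts
 * in user space via the (legacy) POSIX ucontext API. Illustrative
 * sketch of lightweight switching; not a full threads package. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, thread_ctx;
static char stack[64 * 1024];            /* the thread's own stack */

static void worker(void) {
    printf("worker: running in user-level thread\n");
    swapcontext(&thread_ctx, &main_ctx); /* yield: no kernel scheduler involved */
    printf("worker: resumed, finishing\n");
}

int main(void) {
    getcontext(&thread_ctx);
    thread_ctx.uc_stack.ss_sp = stack;
    thread_ctx.uc_stack.ss_size = sizeof stack;
    thread_ctx.uc_link = &main_ctx;      /* return here when worker ends */
    makecontext(&thread_ctx, worker, 0);

    printf("main: switching to worker\n");
    swapcontext(&main_ctx, &thread_ctx); /* lightweight context switch */
    printf("main: back, resuming worker\n");
    swapcontext(&main_ctx, &thread_ctx);
    printf("main: done\n");
    return 0;
}
```

A kernel-level thread switch performs the analogous save/restore inside the operating system, which is what makes it costlier per operation.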


European Conference on Parallel Processing | 1999

Compiling Data Parallel Tasks for Coordinated Execution

Erwin Laure; Matthew Haines; Piyush Mehrotra; Hans P. Zima

Many advanced scientific applications are heterogeneous and multidisciplinary in nature, consisting of multiple, independent modules. Such applications require efficient means of coordination for their program units. The programming language Opus was designed recently to assist in coordinating the execution of multiple, independent program modules. In this paper we address the problem of how to compile an Opus program such that it can be efficiently executed on a broad class of machines.


USENIX Annual Technical Conference | 1997

On Designing Lightweight Threads for Substrate Software

Matthew Haines


Archive | 2001

Optimizing Search Strategies in k-d Trees

Neal Sample; Matthew Haines; Mark Arnold; Timothy Purcell


Parallel and Distributed Processing Techniques and Applications | 1999

Pipeline Expansion in Coordinated Applications

Carl Bartlett; Neal Sample; Matthew Haines

Collaboration


Matthew Haines's most frequent co-authors and their affiliations.


Hans P. Zima

California Institute of Technology


Erwin Laure

Royal Institute of Technology


Gregory D. Benson

University of San Francisco
