
Publication


Featured research published by Hans P. Zima.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2007

Parallel Programmability and the Chapel Language

Bradford L. Chamberlain; David Callahan; Hans P. Zima

In this paper we consider productivity challenges for parallel programmers and explore ways that parallel language design might help improve end-user productivity. We offer a candidate list of desirable qualities for a parallel programming language, and describe how these qualities are addressed in the design of the Chapel language. In doing so, we provide an overview of Chapel's features and how they help address parallel productivity. We also survey current techniques for parallel programming and describe ways in which we consider them to fall short of our idealized productive programming model.


Parallel Computing | 1988

SUPERB: A tool for semi-automatic MIMD/SIMD parallelization

Hans P. Zima; Heinz-J. Bast; Michael Gerndt

This paper describes the design of an interactive system for the semi-automatic transformation of FORTRAN 77 programs into parallel programs for the SUPRENUM machine. The system is characterized by a powerful analysis component, a catalog of MIMD and SIMD parallelization transformations, and a flexible dialog facility. It contains specific knowledge about the parallelization of an important class of numerical algorithms.


Scientific Programming | 1992

Programming in Vienna Fortran

Barbara M. Chapman; Piyush Mehrotra; Hans P. Zima

Exploiting the full performance potential of distributed memory machines requires a careful distribution of data across the processors. Vienna Fortran is a language extension of Fortran which provides the user with a wide range of facilities for such mapping of data structures. In contrast to current programming practice, programs in Vienna Fortran are written using global data references. Thus, the user has the advantages of a shared memory programming paradigm while explicitly controlling the data distribution. In this paper, we present the language features of Vienna Fortran for FORTRAN 77, together with examples illustrating the use of these features.
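To give a rough feel for the underlying idea, the sketch below shows in plain Python (not Vienna Fortran syntax; the sizes and names are invented for illustration) the kind of block mapping from global indices to processors that a Vienna Fortran distribution annotation describes declaratively:

def block_owner(i, n, p):
    """Processor owning global index i of an n-element array
    distributed blockwise over p processors."""
    block = (n + p - 1) // p   # ceiling(n / p) elements per block
    return i // block

def local_index(i, n, p):
    """Offset of global index i within its owner's local block."""
    block = (n + p - 1) // p
    return i % block

n, p = 16, 4
for i in range(n):
    print(f"global {i} -> processor {block_owner(i, n, p)}, local {local_index(i, n, p)}")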


High-Level Parallel Programming Models and Supportive Environments | 2004

The Cascade High Productivity Language

David Callahan; Bradford L. Chamberlain; Hans P. Zima

The strong focus of recent high end computing efforts on performance has resulted in a low-level parallel programming paradigm characterized by explicit control over message-passing in the framework of a fragmented programming model. In such a model, object code performance is achieved at the expense of productivity, conciseness, and clarity. This paper describes the design of Chapel, the cascade high productivity language, which is being developed in the DARPA-funded HPCS project Cascade led by Cray Inc. Chapel pushes the state-of-the-art in languages for HEC system programming by focusing on productivity, in particular by combining the goal of highest possible object code performance with that of programmability offered by a high-level user interface. The design of Chapel is guided by four key areas of language technology: multithreading, locality-awareness, object-orientation, and generic programming. The Cascade architecture, which is being developed in parallel with the language, provides key architectural support for its efficient implementation.
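To make the contrast between the fragmented and the global-view model concrete, here is a small Python sketch (an invented toy example, not Chapel code): the first version states the computation once over the whole index space, while the second forces the programmer to reason per process.

n, p = 12, 3
a = [float(i) for i in range(n)]

# Global-view style: one whole-array statement.
b_global = [x * 2.0 for x in a]

# Fragmented style: per-process local chunks, stitched back together
# by hand (the message passing a real code would need is omitted).
size = n // p
b_frag = []
for rank in range(p):
    local = a[rank * size:(rank + 1) * size]
    b_frag.extend(x * 2.0 for x in local)

assert b_global == b_frag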


Proceedings of the IEEE | 1993

Compiling for distributed-memory systems

Hans P. Zima; Barbara M. Chapman

Compilation techniques for the source-to-source translation of programs in an extended FORTRAN 77 to equivalent parallel message-passing programs are discussed. A machine-independent language extension to FORTRAN 77, Data Parallel FORTRAN (DPF), is introduced. It allows the user to write programs for distributed-memory multiprocessing systems (DMMPS) using global addresses, and to specify the distribution of data across the processors of the machine. Message-Passing FORTRAN (MPF), a FORTRAN extension that allows the formulation of explicitly parallel programs that communicate via explicit message passing, is also introduced. Procedures and optimization techniques for both languages are discussed. Additional optimization methods and advanced parallelization techniques, including run-time analysis, are also addressed. An extensive overview of related work is given.
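A central idea behind such translations is the owner-computes rule: each processor executes only the assignments whose left-hand sides it owns. The Python sketch below is a toy simulation of that rule (an invented example, not the DPF/MPF notation; the compiler-inserted sends and receives are glossed over):

n, p = 8, 2
a = list(range(n))
new = a[:]

def owner(i):
    # Block distribution: the first n/p elements on processor 0, etc.
    return i * p // n

for rank in range(p):              # simulate each processor in turn
    for i in range(1, n - 1):
        if owner(i) != rank:
            continue               # owned elsewhere: skipped on this processor
        # a[i-1] or a[i+1] may live on a neighbor; a real compiler
        # would materialize them locally via message passing.
        new[i] = a[i - 1] + a[i + 1]

print(new)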


International Conference on Supercomputing | 1993

A static parameter based performance prediction tool for parallel programs

Thomas Fahringer; Hans P. Zima

This paper presents a Parameter based Performance Prediction Tool (PPPT) which is part of the Vienna Fortran Compilation System (VFCS), a compiler that automatically translates Fortran programs into message passing programs for massively parallel architectures. The PPPT is applied to an explicitly parallel program generated by the VFCS, which may contain synchronous as well as asynchronous communication and is attributed with parameters computed in a previous profiling run. It statically computes a set of optional parameters that characterize the behavior of the parallel program. This includes work distribution, the number of data transfers, the amount of data transferred, transfer times, network contention, and the number of cache misses. These parameters can be selectively determined for statements, loops, procedures, and the entire program; furthermore, their effect with respect to individual processors can be examined. The tool plays an important role in the VFCS by providing the system as well as the user with vital performance information about the program. In particular, it supports automatic data distribution generation and the intelligent selection of transformation strategies, based on properties of the algorithm and characteristics of the target architecture. The tool has been implemented. Experiments show a strong correlation between the statically computed parameters and actual measurements; furthermore, it turns out that the predicted parameter values allow a realistic ranking of different program versions with respect to the actual runtime.
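As a flavor of the kind of quantity such a tool derives statically, the following Python sketch estimates message counts and communication volume for a halo exchange over a 1-D block-distributed array. The formulas are deliberately crude and invented for illustration; they are not the PPPT's actual model:

def halo_exchange_estimate(p, halo, elem_bytes=8):
    """Each interior processor exchanges `halo` boundary elements with
    two neighbors; the two end processors with only one."""
    msgs = sum(2 if 0 < r < p - 1 else 1 for r in range(p))
    return msgs, msgs * halo * elem_bytes

messages, volume = halo_exchange_estimate(p=64, halo=1)
print(messages, "messages,", volume, "bytes per exchange step")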


IEEE Parallel & Distributed Technology: Systems & Applications | 1994

Extending HPF for Advanced Data-Parallel Applications

Barbara M. Chapman; Hans P. Zima; Piyush Mehrotra

High Performance Fortran can support regular numerical algorithms, but it cannot adequately express advanced applications such as particle-in-cell codes or unstructured mesh solvers. This article addresses this problem and outlines possible development paths.
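The root of the difficulty is that such codes address data through indirection arrays whose contents are known only at run time, so a purely static distribution cannot predict which elements a processor will touch. A minimal Python illustration (with invented data):

# Unstructured-mesh-style indirect addressing: the pattern x[edge[i]]
# depends on connectivity data read at run time.
edge = [3, 0, 7, 2, 5, 1]          # e.g., read from a mesh file
x = [float(i) for i in range(8)]
flux = [0.5 * x[e] for e in edge]
print(flux)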


Parallel Computing | 1992

Vienna Fortran—a Fortran language extension for distributed memory multiprocessors

Barbara M. Chapman; Piyush Mehrotra; Hans P. Zima

Exploiting the full performance potential of distributed memory machines requires a careful distribution of data across the processors. Vienna Fortran is a language extension of Fortran which provides the user with a wide range of facilities for such mapping of data structures. However, programs in Vienna Fortran are written using global data references. Thus, the user has the advantages of a shared memory programming paradigm while explicitly controlling the placement of data. In this paper, we present the basic features of Vienna Fortran along with a set of examples illustrating the use of these features.


Scientific Programming | 1997

Opus: A Coordination Language for Multidisciplinary Applications

Barbara M. Chapman; Matthew Haines; Piyush Mehrotra; Hans P. Zima; John Van Rosendale

Data parallel languages, such as High Performance Fortran, can be successfully applied to a wide range of numerical applications. However, many advanced scientific and engineering applications are multidisciplinary and heterogeneous in nature, and thus do not fit well into the data parallel paradigm. In this paper we present Opus, a language designed to fill this gap. The central concept of Opus is a mechanism called Shared Data Abstractions (SDA). An SDA can be used as a computation server, i.e., a locus of computational activity, or as a data repository for sharing data between asynchronous tasks. SDAs can be internally data parallel, providing support for the integration of data and task parallelism as well as nested task parallelism. They can thus be used to express multidisciplinary applications in a natural and efficient way. In this paper we describe the features of the language through a series of examples and give an overview of the runtime support required to implement these concepts in parallel and distributed environments.
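In rough terms, an SDA couples shared state with monitor-like synchronized access by asynchronous tasks. The Python sketch below is only an analogy in threading terms (names invented, not Opus syntax): a shared object acting as a data repository between producer and consumer tasks.

import threading

class Repository:
    """Monitor-style shared object, loosely analogous to an SDA used
    as a data repository between asynchronous tasks."""
    def __init__(self):
        self._items = []
        self._lock = threading.Lock()
        self._not_empty = threading.Condition(self._lock)

    def put(self, item):
        with self._not_empty:
            self._items.append(item)
            self._not_empty.notify()

    def get(self):
        with self._not_empty:
            while not self._items:
                self._not_empty.wait()
            return self._items.pop(0)

repo = Repository()
producer = threading.Thread(target=lambda: [repo.put(i) for i in range(3)])
producer.start()
print([repo.get() for _ in range(3)])
producer.join()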


Conference on High Performance Computing (Supercomputing) | 2002

Gilgamesh: A Multithreaded Processor-In-Memory Architecture for Petaflops Computing

Thomas L. Sterling; Hans P. Zima

Processor-in-Memory (PIM) architectures avoid the von Neumann bottleneck in conventional machines by integrating high-density DRAM and CMOS logic on the same chip. Parallel systems based on this new technology are expected to provide higher scalability, adaptability, robustness, fault tolerance and lower power consumption than current MPPs or commodity clusters. In this paper we describe the design of Gilgamesh, a PIM-based massively parallel architecture, and elements of its execution model. Gilgamesh extends existing PIM capabilities by incorporating advanced mechanisms for virtualizing tasks and data and providing adaptive resource management for load balancing and latency tolerance. The Gilgamesh execution model is based on macroservers, a middleware layer which supports object-based runtime management of data and threads, allowing explicit and dynamic control of locality and load balancing. The paper concludes with a discussion of related research activities and an outlook on future work.

Collaboration


Dive into Hans P. Zima's collaborations.

Top Co-Authors

Mark James

California Institute of Technology


Paul L. Springer

California Institute of Technology


Michael Gerndt

Technische Universität München
