
Publication


Featured research published by Seema Hiranandani.


Conference on High Performance Computing (Supercomputing) | 1991

Compiler optimizations for Fortran D on MIMD distributed-memory machines

Seema Hiranandani; Ken Kennedy; Chau-Wen Tseng

No abstract available


Languages and Compilers for Parallel Computing | 1991

An Overview of the Fortran D Programming System

Seema Hiranandani; Ken Kennedy; Charles Koelbel; Ulrich Kremer; Chau-Wen Tseng

The success of large-scale parallel architectures is limited by the difficulty of developing machine-independent parallel programs. We have developed Fortran D, a version of Fortran extended with data decomposition specifications, to provide a portable data-parallel programming model. This paper presents the design of two key components of the Fortran D programming system: a prototype compiler and an environment to assist automatic data decomposition. The Fortran D compiler addresses program partitioning, communication generation and optimization, data decomposition analysis, run-time support for unstructured computations, and storage management. The Fortran D programming environment provides a static performance estimator and an automatic data partitioner. We believe that the Fortran D programming system will significantly ease the task of writing machine-independent data-parallel programs.
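The data decomposition specifications described above determine which processor owns each array element. As a minimal sketch of what a one-dimensional BLOCK distribution implies for ownership (helper names are ours, not code from the paper):

```python
# Sketch of the ownership mapping implied by a BLOCK distribution
# (hypothetical helpers; assumes a 0-based, one-dimensional array).

def block_owner(i, n, p):
    """Processor owning global index i when n elements are
    distributed BLOCK-wise over p processors."""
    block = -(-n // p)          # ceiling(n / p): elements per processor
    return i // block

def block_local(i, n, p):
    """Local index of global index i on its owning processor."""
    block = -(-n // p)
    return i % block

# With 100 elements on 4 processors, each holds a block of 25:
print(block_owner(0, 100, 4))    # 0
print(block_owner(99, 100, 4))   # 3
print(block_local(30, 100, 4))   # 5
```

The compiler derives this kind of mapping from the decomposition directives and uses it to partition loops and generate communication.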


Parallel Computing | 1992

Computer support for machine-independent parallel programming in Fortran D

Seema Hiranandani; Ken Kennedy; Chau-Wen Tseng

Because of the complexity and variety of parallel architectures, an efficient machine-independent parallel programming model is needed to make parallel computing truly usable for scientific programmers. We believe that Fortran D, a version of Fortran enhanced with data decomposition specifications, can provide such a programming model. This paper presents the design of a prototype Fortran D compiler for the iPSC/860, a MIMD distributed-memory machine. Issues addressed include data decomposition analysis, guard introduction, communications generation and optimization, program transformations, and storage assignment. A test suite of scientific programs will be used to evaluate the effectiveness of both the compiler technology and programming model for the Fortran D compiler.
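The guard introduction mentioned in the abstract can be sketched with the owner-computes rule: every processor executes the same loop bounds, but a compiler-inserted guard restricts each assignment to the processor that owns the left-hand side. A simplified single-process simulation (hypothetical names, assuming a BLOCK distribution):

```python
# Sketch of compiler-inserted guards under the owner-computes rule
# (hypothetical helper; simulates each processor's SPMD execution).

def spmd_step(my_rank, n, p, a, b):
    """Run the full loop bounds, but guard each assignment so only
    the owner of a[i] under a BLOCK distribution performs it."""
    block = -(-n // p)                  # ceiling(n / p)
    for i in range(n):
        if i // block == my_rank:       # guard inserted by the compiler
            a[i] = 2 * b[i]
    return a

# Running every "processor" over a shared array covers all elements:
a, b = [0] * 8, list(range(8))
for rank in range(2):
    spmd_step(rank, 8, 2, a, b)
print(a)   # [0, 2, 4, 6, 8, 10, 12, 14]
```

In a real compilation the guards are then folded into reduced loop bounds where possible, so each processor iterates only over the indices it owns.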


International Conference on Supercomputing | 1992

Evaluation of compiler optimizations for Fortran D on MIMD distributed memory machines

Seema Hiranandani; Ken Kennedy; Chau-Wen Tseng

The Fortran D compiler uses data decomposition specifications to automatically translate Fortran programs for execution on MIMD distributed-memory machines. This paper introduces and classifies a number of advanced optimizations needed to achieve acceptable performance; they are analyzed and empirically evaluated for stencil computations. Profitability formulas are derived for each optimization. Results show that exploiting parallelism for pipelined computations, reductions, and scans is vital. Message vectorization, collective communication, and efficient coarse-grain pipelining also significantly affect performance.
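Message vectorization, one of the optimizations evaluated here, can be illustrated with a toy cost model (illustrative constants of our own, not the paper's measurements): sending n elements in one combined message pays the startup latency once instead of n times.

```python
# Schematic cost model for message vectorization (illustrative
# constants, not measurements from the paper).

def unvectorized_cost(n, startup, per_word):
    """n element-wise messages, one per loop iteration."""
    return n * (startup + per_word)

def vectorized_cost(n, startup, per_word):
    """One message carrying all n elements."""
    return startup + n * per_word

# When startup latency dominates the per-word cost, vectorization
# wins by roughly a factor of n for element-wise communication:
print(unvectorized_cost(1000, 100.0, 1.0))  # 101000.0
print(vectorized_cost(1000, 100.0, 1.0))    # 1100.0
```

This is why hoisting communication out of loops matters so much on distributed-memory machines, where per-message startup costs are large.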


Conference on High Performance Computing (Supercomputing) | 1992

Interprocedural compilation of Fortran D for MIMD distributed-memory machines

Mary W. Hall; Seema Hiranandani; Ken Kennedy; Chau-Wen Tseng

Algorithms exist for compiling Fortran D for MIMD (multiple-instruction multiple-data) distributed-memory machines, but they are significantly restricted in the presence of procedure calls. The authors present interprocedural analysis, optimization, and code generation algorithms for Fortran D that limit compilation to only one pass over each procedure. This is accomplished by collecting summary information after edits, and then compiling procedures in reverse topological order to propagate necessary information. Delaying instantiation of the computation partition, communication, and dynamic data decomposition is key to enabling interprocedural optimization. Recompilation analysis preserves the benefits of separate compilation. Empirical results show that interprocedural optimization is crucial in achieving acceptable performance for a common application.
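The reverse topological compilation order mentioned above amounts to a post-order traversal of an acyclic call graph, so each procedure is visited after the procedures it calls and callee summary information is available when the caller is compiled. A minimal sketch (the example call graph is ours):

```python
# Sketch of visiting a call graph in reverse topological order,
# so each procedure is compiled after its callees (assumes the
# call graph is acyclic; hypothetical example graph).

def reverse_topological(callgraph, root):
    """Post-order DFS: callees appear before their callers."""
    order, seen = [], set()
    def visit(proc):
        if proc in seen:
            return
        seen.add(proc)
        for callee in callgraph.get(proc, []):
            visit(callee)
        order.append(proc)
    visit(root)
    return order

calls = {"main": ["solve", "io"], "solve": ["stencil"], "io": []}
print(reverse_topological(calls, "main"))
# ['stencil', 'solve', 'io', 'main']
```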


Conference on High Performance Computing (Supercomputing) | 1993

Preliminary experiences with the Fortran D compiler

Chau-Wen Tseng; Seema Hiranandani; Ken Kennedy

Fortran D is a version of Fortran enhanced with data decomposition specifications. Case studies illustrate strengths and weaknesses of the prototype Fortran D compiler when compiling linear algebra codes and whole programs. Statement groups, execution conditions, inter-loop communication optimizations, multi-reductions, and array kills for replicated arrays are identified as new compilation issues. On the Intel iPSC/860, the output of the prototype Fortran D compiler approaches the performance of hand-optimized code for parallel computations, but needs improvement for linear algebra and pipelined codes. The Fortran D compiler outperforms the CM Fortran compiler (2.1 beta) by a factor of four or more on the TMC CM-5 when not using vector units. Better analysis, run-time support, and flexibility are required for the prototype compiler to be useful for a wider range of programs.


Journal of Parallel and Distributed Computing | 1994

Evaluating compiler optimizations for Fortran D

Seema Hiranandani; Ken Kennedy; Chau-Wen Tseng

The Fortran D compiler uses data decomposition specifications to automatically translate Fortran programs for execution on MIMD distributed-memory machines. This paper introduces and classifies a number of advanced optimizations needed to achieve acceptable performance; they are analyzed and empirically evaluated for stencil computations. Communication optimizations reduce communication overhead by decreasing the number of messages and hide communication overhead by overlapping the cost of remaining messages with local computation. Parallelism optimizations exploit parallel and pipelined computations and may need to restructure the computation to increase parallelism. Profitability formulas are derived for each optimization. Empirical results show that exploiting parallelism for pipelined computations, reductions, and scans is vital. Message vectorization, collective communication, and efficient coarse-grain pipelining also significantly affect performance. Scalability of communication and parallelism optimizations are analyzed. The effectiveness of communication optimizations is dictated by the ratio of communication to computation in the program. An optimization strategy is developed based on these analyses.
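The trade-off behind coarse-grain pipelining can be shown with an assumed, simplified cost model (our assumptions, not the paper's profitability formulas): larger strips amortize message startup, but delay the pipeline fill, so an intermediate strip size often wins.

```python
# Simplified cost model for coarse-grain pipelining (illustrative
# constants and model of our own): p pipeline stages process n rows
# in strips of g rows, passing one g-word message per slot.

def pipeline_time(n, p, g, compute, startup, per_word):
    """Total time: (stages + strips - 1) pipeline slots, where each
    slot computes g rows and sends one g-word message."""
    strips = n // g
    slot = g * compute + startup + g * per_word
    return (p + strips - 1) * slot

args = dict(n=100, p=4, compute=1.0, startup=50.0, per_word=1.0)
print(pipeline_time(g=1, **args))     # 5356.0  fine-grain: startup-bound
print(pipeline_time(g=100, **args))   # 1000.0  one strip: no overlap
print(pipeline_time(g=10, **args))    # 910.0   intermediate g wins
```

Picking the strip size is exactly the kind of profitability decision the paper's formulas are meant to automate.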


International Conference on Supercomputing | 1994

Compilation techniques for block-cyclic distributions

Seema Hiranandani; Ken Kennedy; John M. Mellor-Crummey; Ajay Sethi

Compilers for data-parallel languages such as Fortran D and High Performance Fortran use data alignment and distribution specifications as the basis for translating programs for execution on MIMD distributed-memory machines. This paper describes techniques for generating efficient code for programs that use block-cyclic distributions. These techniques can be applied to programs with symbolic loop bounds, symbolic array dimensions, and loops with non-unit strides. We present algorithms for computing the data elements that need to be communicated among processors for loops with both unit and non-unit strides, a linear-time algorithm for computing the memory access sequence for loops with non-unit strides, and experimental results for a hand-compiled test case using block-cyclic distributions.
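The block-cyclic index mapping these techniques build on is the standard one: blocks of size b are dealt to processors round-robin. A minimal sketch (helper names are ours):

```python
# Standard index mapping for a BLOCK-CYCLIC(b) distribution over
# p processors (hypothetical helper names).

def cyclic_owner(i, b, p):
    """Processor owning global index i under block size b."""
    return (i // b) % p

def cyclic_local(i, b, p):
    """Local index of global index i on its owning processor."""
    return (i // (b * p)) * b + (i % b)

# Block size 2 over 3 processors: indices 0,1 -> P0; 2,3 -> P1;
# 4,5 -> P2; then 6,7 wrap around to P0, stored after its first block.
print(cyclic_owner(6, 2, 3))   # 0
print(cyclic_local(7, 2, 3))   # 3
```

The hard part the paper addresses is composing this mapping with symbolic bounds and non-unit loop strides, where the set of locally owned iterations is no longer a simple contiguous range.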


Computing Systems in Engineering | 1992

Software support for irregular and loosely synchronous problems

Alok N. Choudhary; Geoffrey C. Fox; Seema Hiranandani; Ken Kennedy; Charles Koelbel; Sanjay Ranka; Joel H. Saltz

A large class of scientific and engineering applications may be classified as irregular and loosely synchronous from the perspective of parallel processing. We present a partial classification of such problems. This classification has motivated us to enhance Fortran D to provide language support for irregular, loosely synchronous problems. We present techniques for parallelization of such problems in the context of Fortran D.


Conference on High Performance Computing (Supercomputing) | 1994

The D Editor: a new interactive parallel programming tool

Seema Hiranandani; Ken Kennedy; Chau-Wen Tseng; Scott K. Warren

Fortran D and High Performance Fortran are languages designed to support efficient data-parallel programming on a variety of parallel architectures. The goal of the D Editor is to provide a tool that allows scientists to use these languages effectively. The D Editor combines analyses for shared-memory machines and compiler optimizations for distributed-memory machines. By cooperating with the underlying compiler, it can provide novel information on partitioning, parallelism, and communication based on compile-time analysis at the level of the original Fortran program. The D Editor uses color coding and a collection of graphical displays to help the user zoom in on portions of the program containing sequentialized code or expensive communication. The prototype implementation is useful for interactively displaying the results of compile-time analysis; however, it has a number of shortcomings that must be addressed. Future enhancements will provide additional advice and transformation capabilities. We believe the D Editor is representative of a new generation of tools that will be needed to help scientists fully exploit languages such as High Performance Fortran.

Collaboration


Frequent co-authors of Seema Hiranandani include Geoffrey C. Fox (Indiana University Bloomington).