
Publication


Featured research published by Mahadevan Ganapathi.


ACM Transactions on Programming Languages and Systems | 1989

Code generation using tree matching and dynamic programming

Alfred V. Aho; Mahadevan Ganapathi; Steven W. K. Tjiang

Compiler-component generators, such as lexical analyzer generators and parser generators, have long been used to facilitate the construction of compilers. A tree-manipulation language called twig has been developed to help construct efficient code generators. Twig transforms a tree-translation scheme into a code generator that combines a fast top-down tree-pattern matching algorithm with dynamic programming. Twig has been used to specify and construct code generators for several experimental compilers targeted for different machines.
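
A minimal sketch of the twig idea, assuming an invented pattern set and cost model: instruction patterns are matched top-down against an expression tree and dynamic programming keeps the cheapest cover of each subtree. None of the node names, rules, or costs below come from twig itself.

    # Minimum-cost tiling of an expression tree with instruction patterns,
    # combining tree-pattern matching and dynamic programming as in twig.
    # All patterns, costs and instruction names are invented for illustration.

    # An expression tree: (operator, child, ...) tuples; leaves have no children.
    tree = ('plus', ('reg',), ('mul', ('const',), ('reg',)))

    # Each rule: (pattern, cost, instruction). '_' matches any subtree, whose
    # own best cover is then added to the cost.
    RULES = [
        (('plus', '_', '_'),                1, 'ADD r,r'),
        (('mul', '_', '_'),                 4, 'MUL r,r'),
        (('plus', '_', ('mul', '_', '_')),  5, 'MADD r,r,r'),
        (('reg',),                          0, 'use register'),
        (('const',),                        1, 'LOAD #imm'),
    ]

    def matches(pat, node, holes):
        """Top-down structural match; '_' captures a subtree into holes."""
        if pat == '_':
            holes.append(node)
            return True
        if pat[0] != node[0] or len(pat) != len(node):
            return False
        return all(matches(p, n, holes) for p, n in zip(pat[1:], node[1:]))

    def best_cover(node):
        """Return (cost, instructions) for the cheapest cover of this subtree."""
        candidates = []
        for pat, cost, insn in RULES:
            holes = []
            if matches(pat, node, holes):
                subs = [best_cover(h) for h in holes]
                candidates.append((cost + sum(c for c, _ in subs),
                                   [i for _, seq in subs for i in seq] + [insn]))
        return min(candidates)      # dynamic programming: keep the cheapest match

    print(best_cover(tree))

On this toy tree the ADD/MUL cover and the single MADD cover both cost 6, so either is a valid minimum; a real machine description would break the tie with more precise costs.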


ACM Transactions on Programming Languages and Systems | 1985

Affix grammar driven code generation

Mahadevan Ganapathi; Charles N. Fischer

Affix grammars are used to describe the instruction set of a target architecture for purposes of compiler code generation. A code generator is obtained automatically for a compiler using attributed parsing techniques. A compiler built on this model can automatically perform most popular machine-dependent optimizations, including peephole optimizations. Code generators based on this model demonstrate retargetability for the VAX-11, iAPX-86, Z-8000, PDP-11, MC-68000, NS32032, FOM, and IBM-370 architectures.
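
A small sketch of the attributed-parsing idea under invented names: each code-generation rule carries a predicate over operand attributes (addressing mode, constant range), so machine-dependent choices such as preferring a clear or a short-immediate form fall out of rule selection. Nothing below is taken from the paper's grammars.

    # Attribute-guided instruction selection: each rule has a predicate over
    # operand attributes, standing in for affix/attribute conditions on
    # productions. Attribute names, predicates and opcodes are invented.

    from dataclasses import dataclass

    @dataclass
    class Operand:
        mode: str           # 'reg', 'imm' or 'mem'
        value: int = 0      # meaningful only for 'imm'
        name: str = ''      # register or memory name for 'reg'/'mem'

    # (predicate over the source operand, instruction template); order encodes
    # preference, so the most specific machine-dependent form is tried first.
    RULES = [
        (lambda s: s.mode == 'imm' and s.value == 0,           'CLR {d}'),
        (lambda s: s.mode == 'imm' and -128 <= s.value < 128,  'MOVB {d},#{v}'),
        (lambda s: s.mode == 'imm',                            'MOV {d},#{v}'),
        (lambda s: s.mode == 'reg',                            'MOV {d},{s}'),
        (lambda s: s.mode == 'mem',                            'LOAD {d},{s}'),
    ]

    def select(dst, src):
        """Emit the first rule whose attribute predicate holds."""
        for pred, template in RULES:
            if pred(src):
                return template.format(d=dst, s=src.name, v=src.value)
        raise ValueError('no rule applies')

    print(select('r0', Operand('imm', value=0)))      # CLR r0
    print(select('r0', Operand('imm', value=40)))     # MOVB r0,#40
    print(select('r0', Operand('reg', name='r1')))    # MOV r0,r1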


Software - Practice and Experience | 1989

Interprocedural optimization: experimental results

Stephen Richardson; Mahadevan Ganapathi

The problem of tracking data flow across procedure boundaries has a long history of theoretical study by people who believed that such information would be useful for code optimization. Building upon previous work, an algorithm for interprocedural data flow analysis has been implemented. The algorithm produces three flow-insensitive summary sets: MOD, USE and ALIASES. The utility of the resulting information was investigated using an optimizing Pascal compiler. Over a sampling of 27 benchmarks, new optimizations performed as a result of interprocedural summary information contributed almost nothing to program execution speed. Finally, related optimization techniques of possibly greater potential are discussed.
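
A minimal sketch of the flow-insensitive summary computation, assuming an invented call graph and variable sets: each procedure's directly modified and used variables are propagated over its callees until a fixed point, giving MOD and USE summaries usable at call sites (ALIASES is omitted here).

    # Flow-insensitive interprocedural MOD/USE summaries (illustrative only).
    # Direct per-procedure effects plus a call graph; iterate to a fixed point
    # so each procedure's summary also includes the effects of its callees.

    direct_mod = {'main': {'i'},    'update': {'count'}, 'log': set()}
    direct_use = {'main': {'argv'}, 'update': {'count'}, 'log': {'count'}}
    calls      = {'main': {'update', 'log'}, 'update': set(), 'log': set()}

    def summarize(direct, calls):
        summary = {p: set(s) for p, s in direct.items()}
        changed = True
        while changed:                      # fixed-point iteration
            changed = False
            for caller, callees in calls.items():
                for callee in callees:
                    before = len(summary[caller])
                    summary[caller] |= summary[callee]
                    changed |= len(summary[caller]) != before
        return summary

    MOD, USE = summarize(direct_mod, calls), summarize(direct_use, calls)
    print(MOD['main'])   # {'i', 'count'}: main may modify these, directly or via calls
    print(USE['main'])   # {'argv', 'count'}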


Information Processing Letters | 1989

Interprocedural analysis vs. procedure integration

Stephen Richardson; Mahadevan Ganapathi

A set of experimental results shows the exact run-time improvement due to both procedure integration and the use of interprocedural data-flow information, as well as the relative impact on compilation time and object code size.


IEEE Computer | 1989

Code optimization across procedures

Stephen Richardson; Mahadevan Ganapathi

Procedure calls can be a major obstacle to the analysis of computer programs, preventing significant improvements in program speed. A broad range of techniques, each of which is in some sense interprocedural by nature, is considered to overcome this obstacle. Some techniques rely on interprocedural dataflow in their analysis. Others require interprocedural information in the form of detailed profile data or information concerning the scope of a given procedure in relation to other procedures. These include procedure integration, interprocedural register allocation, pointer and alias tracking, and dependency analysis.
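
As a toy illustration of one listed technique, procedure integration: the call is replaced by the callee's body so that later intraprocedural passes can optimize across the old boundary. The three-address-style "IR" and the procedure are invented, and parameter renaming is omitted.

    # Toy procedure integration (inlining): splice the callee's body in place
    # of the call. Real integration also renames parameters and locals; that
    # bookkeeping is omitted here.

    procedures = {
        'scale': ['t = a * factor', 'result = t + bias'],
    }

    caller = ['x = read()', 'call scale', 'y = result + 1', 'print y']

    def integrate(body, procedures):
        out = []
        for insn in body:
            if insn.startswith('call '):
                out.extend(procedures[insn.split()[1]])   # inline the callee
            else:
                out.append(insn)
        return out

    print(integrate(caller, procedures))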


Symposium on Principles of Programming Languages | 1985

Efficient tree pattern matching (extended abstract): an aid to code generation

Alfred V. Aho; Mahadevan Ganapathi

We show that tree pattern matching has significant advantages in the specification and implementation of efficient code generators. We present a top-down tree-matching algorithm that is particularly well suited to code generation applications. Finally, we present a new back-end language that incorporates tree pattern matching with dynamic programming into a uniform framework for the specification and implementation of efficient code generators.


Software - Practice and Experience | 1984

Attributed linear intermediate representations for retargetable code generators

Mahadevan Ganapathi; Charles N. Fischer

This paper illustrates the usefulness of an attributed prefix linear intermediate representation for compiler code generation. In separating the machine‐independent and machine‐dependent aspects of a compiler, we discuss the advantages and disadvantages of an attributed linear intermediate representation with respect to tree‐structured intermediate representations. Some of these issues are relevant to fundamental questions of compiler structure with particular emphasis on retargetability. We discuss our implementation experience using this linear intermediate representation with a table‐driven code generation scheme for a variety of target architectures.
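
A short illustration of the prefix (Polish) linear form discussed here, with an invented operator set: operators precede their operands, so a single left-to-right scan with known arities rebuilds the tree, which is what lets a linear stream drive a table-driven code generator.

    # A prefix linear IR and the tree it denotes. Walking the token stream
    # left to right with known arities reconstructs the tree in one pass.
    # Operators, arities and the example statement are invented.

    ARITY = {'assign': 2, 'add': 2, 'deref': 1, 'var': 0, 'const': 0}

    # prefix form of  x := *p + 4  (attributes such as types would ride along)
    prefix = ['assign', ('var', 'x'), 'add', 'deref', ('var', 'p'), ('const', 4)]

    def parse(tokens):
        """Consume tokens from the front and rebuild the expression tree."""
        tok = tokens.pop(0)
        op = tok if isinstance(tok, str) else tok[0]
        kids = [parse(tokens) for _ in range(ARITY[op])]
        return (tok, *kids) if kids else tok

    print(parse(list(prefix)))
    # ('assign', ('var', 'x'), ('add', ('deref', ('var', 'p')), ('const', 4)))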


IEEE Computer | 1989

Issues in Ada compiler technology

Mahadevan Ganapathi; Geoffrey O. Mendal

The key technical issues involved in producing high-quality Ada compilers and related support tools are discussed. These include real-time issues, programming tools and environments, and code optimization. Also addressed are some important problems that compiler designers face, for example, determining which deficiencies of existing Ada systems can be attributed to the language and which are simply hard-to-implement features or unresolved issues in Ada compiler technology.


Software - Practice and Experience | 1993

Compile-time copy elimination

Peter Schnorf; Mahadevan Ganapathi; John L. Hennessy

Single‐assignment and functional languages have value semantics that do not permit side‐effects. This lack of side‐effects makes automatic detection of parallelism and optimization for data locality in programs much easier. However, the same property poses a challenge in implementing these languages efficiently. This paper describes an optimizing compiler system that solves the key problem of aggregate copy elimination. The methods developed rely exclusively on compile‐time algorithms, including interprocedural analysis, that are applied to an intermediate data flow representation. By dividing the problem into update‐in‐place and build‐in‐place analysis, a small set of relatively simple techniques—edge substitution, graph pattern matching, substructure sharing and substructure targeting—was found to be very powerful. If combined properly and implemented carefully, the algorithms eliminate unnecessary copy operations to a very high degree. No run‐time overhead is imposed on the compiled programs.
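
A rough sketch of the update-in-place side of the analysis, on an invented straight-line example: a functional aggregate update may reuse its argument's storage when that argument has no other consumers, so the copy disappears. The real system does this interprocedurally over a data flow representation; the check below is deliberately crude.

    # Crude compile-time copy-elimination decision: a2 = update(a1, ...) may
    # overwrite a1's storage when a1 has exactly one consuming statement.
    # The tiny "program" and operation names are invented for illustration.

    program = [
        ('a1', 'build',  []),        # a1 = new aggregate
        ('a2', 'update', ['a1']),    # a2 = a1 with one element replaced
        ('s',  'sum',    ['a2']),    # only a2 is consumed afterwards
    ]

    def consumers(var, stmts):
        return [target for target, _, args in stmts if var in args]

    for target, op, args in program:
        if op == 'update':
            if len(consumers(args[0], program)) == 1:
                print(f'{target}: update {args[0]} in place (copy eliminated)')
            else:
                print(f'{target}: must copy {args[0]} before updating')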


Acta Informatica | 1988

Integrating code generation and peephole optimization

Mahadevan Ganapathi; Charles N. Fischer

Peephole optimization, when integrated with automatic code generation into a uniform framework, has significant advantages in the specification and implementation of efficient compiler back-ends. Attribute grammars provide a framework for expression of machine-specific code optimizations. We present a grammar-driven peephole optimization algorithm that is particularly well suited to attributed-parsing code generators. Integration via semantic attributes corrects interrelated phase-ordering problems and produces a faster and smaller compiler back-end.
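
A toy version of the integration point, with invented instructions and rules: as each instruction is emitted, it is checked against the previous one and the pair may be rewritten, so code generation and peephole optimization happen in a single pass rather than as separate phases.

    # Peephole rewriting folded into emission: each newly generated instruction
    # is matched against its predecessor and the pair may be replaced.
    # The assembly-like syntax and both rules are invented for illustration.

    import re

    def peephole_pair(prev, cur):
        """Return a replacement list for (prev, cur), or None to keep both."""
        # a store immediately followed by a reload of the same location
        m = re.match(r'ST (\S+),(\S+)$', prev or '')
        if m and cur == f'LD {m.group(1)},{m.group(2)}':
            return [prev]                       # drop the redundant load
        # adding the constant zero is a no-op
        if re.match(r'ADD \S+,#0$', cur):
            return [prev] if prev else []
        return None

    def emit(instructions):
        out = []
        for insn in instructions:
            prev = out[-1] if out else None
            repl = peephole_pair(prev, insn)
            if repl is None:
                out.append(insn)                # no rule fires: keep it
            else:
                if prev is not None:
                    out.pop()                   # the rule consumed prev as well
                out.extend(repl)
        return out

    print(emit(['ST r1,x', 'LD r1,x', 'ADD r2,#0', 'MUL r2,r1']))
    # ['ST r1,x', 'MUL r2,r1']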

Collaboration


Dive into Mahadevan Ganapathi's collaboration.

Top Co-Authors


Charles N. Fischer

University of Wisconsin-Madison


J. L. Hennessy

Stanford University
