Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where G. Ramalingam is active.

Publication


Featured research published by G. Ramalingam.


Journal of Algorithms | 1996

An Incremental Algorithm for a Generalization of the Shortest-Path Problem

G. Ramalingam; Thomas W. Reps

The grammar problem, a generalization of the single-source shortest-path problem introduced by D. E. Knuth (Inform. Process. Lett. 6(1) (1977), 1–5), is to compute the minimum-cost derivation of a terminal string from each nonterminal of a given context-free grammar, with the cost of a derivation being suitably defined. This problem also subsumes the problem of finding optimal hyperpaths in directed hypergraphs (under varying optimization criteria) that has received attention recently. In this paper we present an incremental algorithm for a version of the grammar problem. As a special case of this algorithm we obtain an efficient incremental algorithm for the single-source shortest-path problem with positive edge lengths. The aspect of our work that distinguishes it from other work on the dynamic shortest-path problem is its ability to handle “multiple heterogeneous modifications”: between updates, the input graph is allowed to be restructured by an arbitrary mixture of edge insertions, edge deletions, and edge-length changes.
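
The incremental shortest-path special case can be pictured as a worklist computation that revisits only vertices whose distance may have changed. The Python sketch below follows the spirit of that update scheme (each vertex keeps the best distance implied by its incoming edges, and inconsistent vertices are processed in key order); the class and method names are illustrative, not the paper's presentation, and positive edge lengths are assumed as in the paper.

```python
import heapq
from collections import defaultdict

class IncrementalSSSP:
    """Single-source shortest paths with positive edge lengths, updated after
    an arbitrary mixture of edge insertions, deletions, and length changes.
    A minimal sketch in the spirit of the worklist approach described above;
    the API and names are illustrative, not the paper's."""

    def __init__(self, source):
        self.source = source
        self.succ = defaultdict(dict)   # succ[u][v] = length of edge u -> v
        self.pred = defaultdict(dict)   # pred[v][u] = length of edge u -> v
        self.dist = defaultdict(lambda: float("inf"))
        self.dist[source] = 0.0
        self._touched = set()           # vertices whose distance may be stale

    # -- modifications; any mixture may be applied before calling update() --
    def insert_edge(self, u, v, length):
        self.succ[u][v] = length
        self.pred[v][u] = length
        self._touched.add(v)

    def delete_edge(self, u, v):
        self.succ[u].pop(v, None)
        self.pred[v].pop(u, None)
        self._touched.add(v)

    def change_length(self, u, v, length):
        self.insert_edge(u, v, length)  # overwriting the length suffices

    # -- best distance implied by v's incoming edges --
    def _rhs(self, v):
        if v == self.source:
            return 0.0
        return min((self.dist[u] + w for u, w in self.pred[v].items()),
                   default=float("inf"))

    def _schedule(self, v, heap):
        if self.dist[v] != self._rhs(v):              # v is inconsistent
            heapq.heappush(heap, (min(self.dist[v], self._rhs(v)), v))

    def update(self):
        """Restore correct distances, revisiting (roughly) only the part of
        the graph affected by the recorded modifications."""
        heap = []
        for v in self._touched:
            self._schedule(v, heap)
        self._touched.clear()
        while heap:
            key, v = heapq.heappop(heap)
            rhs = self._rhs(v)
            if key != min(self.dist[v], rhs) or self.dist[v] == rhs:
                continue                               # stale heap entry
            if self.dist[v] > rhs:                     # distance decreased
                self.dist[v] = rhs
            else:                                      # distance increased
                self.dist[v] = float("inf")
                self._schedule(v, heap)
            for w in self.succ[v]:                     # successors may be affected
                self._schedule(w, heap)
```

A batch of insert_edge / delete_edge / change_length calls only records the directly affected vertices; a single update() call then propagates from them, which is what lets the cost track the size of the change rather than the size of the whole graph.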


ACM Transactions on Programming Languages and Systems | 2000

Context-sensitive synchronization-sensitive analysis is undecidable

G. Ramalingam

Static program analysis is concerned with the computation of approximations of the runtime behavior of programs. Precise information about a program's runtime behavior is, in general, uncomputable for various reasons, and each reason may necessitate making certain approximations in the information computed. This article illustrates one source of difficulty in static analysis of concurrent programs. Specifically, the article shows that an analysis that is simultaneously both context-sensitive and synchronization-sensitive (that is, a context-sensitive analysis that precisely takes into account the constraints on execution order imposed by the synchronization statements in the program) is impossible even for the simplest of analysis problems.


ACM Transactions on Programming Languages and Systems | 1994

The undecidability of aliasing

G. Ramalingam

Alias analysis is a prerequisite for performing most of the common program analyses such as reaching-definitions analysis or live-variables analysis. Landi [1992] recently established that it is impossible to compute statically precise alias information—either may-alias or must-alias—in languages with if statements, loops, dynamic storage, and recursive data structures: more precisely, he showed that the may-alias relation is not recursive, while the must-alias relation is not even recursively enumerable. This article presents simpler proofs of the same results.


Theoretical Computer Science | 1996

On the computational complexity of dynamic graph problems

G. Ramalingam; Thomas W. Reps

A common way to evaluate the time complexity of an algorithm is to use asymptotic worst-case analysis and to express the cost of the computation as a function of the size of the input. However, for an incremental algorithm this kind of analysis is sometimes not very informative. (By an “incremental algorithm”, we mean an algorithm for a dynamic problem.) When the cost of the computation is expressed as a function of the size of the (current) input, several incremental algorithms that have been proposed run in time asymptotically no better, in the worst case, than the time required to perform the computation from scratch. Unfortunately, this kind of information is not very helpful if one wishes to compare different incremental algorithms for a given problem. This paper explores a different way to analyze incremental algorithms. Rather than express the cost of an incremental computation as a function of the size of the current input, we measure the cost in terms of the sum of the sizes of the changes in the input and the output. The change in approach allows us to develop a more informative theory of computational complexity for dynamic problems. An incremental algorithm is said to be bounded if the time taken by the algorithm to perform an update can be bounded by some function of the sum of the sizes of the changes in the input and the output. A dynamic problem is said to be unbounded with respect to a model of computation if it has no bounded incremental algorithm within that model of computation. The paper presents new upper-bound results as well as new lower-bound results with respect to a class of algorithms called the locally persistent algorithms. Our results, together with some previously known ones, shed light on the organization of the complexity hierarchy that exists when dynamic problems are classified according to their incremental complexity with respect to locally persistent algorithms. In particular, these results separate the classes of polynomially bounded problems, inherently exponentially bounded problems, and unbounded problems.
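
The boundedness criterion described above can be written compactly; the notation below is assumed for presentation and may differ from the paper's own.

```latex
% A change \delta applied to input x is measured by the size of the change in
% the input plus the size of the change it induces in the output:
\|\delta\| \;=\; |\Delta x| \,+\, |\Delta f(x)|
% An incremental algorithm A is bounded if there is a function g such that the
% time to process any change is bounded independently of the size of x itself:
T_A(x, \delta) \;\le\; g(\|\delta\|)
```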


international symposium on software testing and analysis | 2006

Effective typestate verification in the presence of aliasing

Stephen J. Fink; Eran Yahav; Nurit Dor; G. Ramalingam; Emmanuel Geay

This paper addresses the challenge of sound typestate verification, with acceptable precision, for real-world Java programs. We present a novel framework for verification of typestate properties, including several new techniques to precisely treat aliases without undue performance costs. In particular, we present a flow-sensitive, context-sensitive, integrated verifier that utilizes a parametric abstract domain combining typestate and aliasing information. To scale to real programs without compromising precision, we present a staged verification system in which faster verifiers run as early stages which reduce the workload for later, more precise, stages. We have evaluated our framework on a number of real Java programs, checking correct API usage for various Java standard libraries. The results show that our approach scales to hundreds of thousands of lines of code, and verifies correctness for 93% of the potential points of failure.
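
For readers unfamiliar with typestate, the sketch below only illustrates the kind of property being verified: a finite-state machine over API events in which some sequences are errors. The File-like protocol is made up for the example; the paper's contribution is verifying such properties statically, over all executions, while reasoning precisely about aliases.

```python
# A tiny typestate property: reading after close (or before open) is an error.
ERROR = "error"

TYPESTATE = {                      # state -> {event -> next state}
    "closed": {"open": "open"},
    "open":   {"read": "open", "close": "closed"},
}

def run(events, state="closed"):
    """Drive the automaton over a sequence of API events; 'error' means the
    property is violated."""
    for event in events:
        state = TYPESTATE.get(state, {}).get(event, ERROR)
        if state == ERROR:
            return ERROR
    return state

assert run(["open", "read", "close"]) == "closed"   # correct usage
assert run(["open", "close", "read"]) == ERROR      # read after close
```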


symposium on principles of programming languages | 1993

A categorized bibliography on incremental computation

G. Ramalingam; Thomas W. Reps

In many kinds of computational contexts, modifications of the input data are to be processed at once so as to have immediate effect on the output. Because small changes in the input to a computation often cause only small changes in the output, the challenge is to compute the new output incrementally by updating parts of the old output, rather than by recomputing the entire output from scratch (as a “batch computation”). Put another way, the goal is to make use of the solution to one problem instance to find the solution to a “nearby” problem instance. The abstract problem of incremental computation can be phrased as follows: the goal is to compute a function f on the user's “input” data x—where x is often some data structure, such as a tree, graph, or matrix—and to keep the output f(x) updated as the input undergoes changes. An incremental algorithm for computing f takes as input the “batch input” x, the “batch output” f(x), possibly some auxiliary information, and the change in the “batch input”, Δx. The algorithm computes the new “batch output” f(x + Δx), where x + Δx denotes the modified input, and updates the auxiliary information as necessary. From the standpoint of the programming-languages community, interest in incremental computation stems from the following four research topics:
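
The interface described above (batch input x, batch output f(x), optional auxiliary information, and a change Δx) can be made concrete with a deliberately tiny example; the names below are illustrative only.

```python
def f(x):
    """The batch computation: here, just the sum of a list."""
    return sum(x)

def incremental_f(x, fx, change):
    """Given the input x, the old output fx = f(x), and a change
    (index, new_value), return the new output without re-reading all of x."""
    index, new_value = change
    fx = fx - x[index] + new_value     # O(1): cost depends on the change only
    x[index] = new_value               # apply the change to the input in place
    return fx

x = [3, 1, 4, 1, 5]
fx = f(x)                              # batch output
fx = incremental_f(x, fx, (2, 9))      # change x[2] from 4 to 9
assert fx == f(x) == 19                # agrees with recomputing from scratch
```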


symposium on principles of programming languages | 1995

Parametric program slicing

John Field; G. Ramalingam; Frank Tip

Program slicing is a technique for isolating computational threads in programs. In this paper, we show how to mechanically extract a family of practical algorithms for computing slices directly from semantic specifications. These algorithms are based on combining the notion of dynamic dependence tracking in term rewriting systems with a program representation whose behavior is defined via an equational logic. Our approach is distinguished by the fact that changes to the behavior of the slicing algorithm can be accomplished through simple changes in rewriting rules that define the semantics of the program representation. Thus, e.g., different notions of dependence may be specified, properties of language-specific datatypes can be exploited, and various time, space, and precision tradeoffs may be made. This flexibility enables us to generalize the traditional notions of static and dynamic slices to that of a constrained slice, where any subset of the inputs of a program may be supplied.
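
As background for readers new to slicing, a backward slice keeps only the statements that a chosen value depends on. The sketch below shows the classical notion the paper generalizes, on a toy straight-line program with made-up statements; it is not the paper's rewriting-based method.

```python
# Each statement is (defined_var, used_vars); comments show the intent.
program = [
    ("a", set()),      # a = input()
    ("b", {"a"}),      # b = a + 1
    ("c", set()),      # c = input()
    ("d", {"b"}),      # d = b * 2
    ("e", {"c"}),      # e = c - 1
]

def backward_slice(program, criterion_var):
    """Indices of the statements the final value of criterion_var depends on."""
    needed, sliced = {criterion_var}, []
    for i in range(len(program) - 1, -1, -1):
        var, uses = program[i]
        if var in needed:
            sliced.append(i)
            needed.discard(var)
            needed |= uses
    return sorted(sliced)

assert backward_slice(program, "d") == [0, 1, 3]   # a, b, d; c and e drop out
```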


programming language design and implementation | 1996

Data flow frequency analysis

G. Ramalingam

Conventional dataflow analysis computes information about what facts may or will not hold during the execution of a program. Sometimes it is useful, for program optimization, to know how often or with what probability a fact holds true during program execution. In this paper, we provide a precise formulation of this problem for a large class of dataflow problems --- the class of finite bi-distributive subset problems. We show how it can be reduced to a generalization of the standard dataflow analysis problem, one that requires a sum-over-all-paths quantity instead of the usual meet-over-all-paths quantity. We show that Kildall's result expressing the meet-over-all-paths value as a maximal fixed point carries over to the generalized setting. We then outline ways to adapt the standard dataflow analysis algorithms to solve this generalized problem, both in the intraprocedural and the interprocedural case.
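
To make the "sum over all paths" flavor concrete, here is a toy frequency computation on an acyclic control-flow graph with made-up branch probabilities: node frequencies accumulate contributions from every path, rather than being combined with a meet as in conventional dataflow analysis. This is only an illustration, not the paper's general formulation for finite bi-distributive subset problems.

```python
from collections import defaultdict

edges = {                        # node -> [(successor, branch probability)]
    "entry": [("then", 0.3), ("else", 0.7)],
    "then":  [("join", 1.0)],
    "else":  [("join", 1.0)],
    "join":  [("exit", 1.0)],
    "exit":  [],
}
topo_order = ["entry", "then", "else", "join", "exit"]

freq = defaultdict(float)
freq["entry"] = 1.0              # the procedure is entered once
for node in topo_order:
    for succ, p in edges[node]:
        freq[succ] += freq[node] * p   # sum over all paths reaching succ

assert abs(freq["join"] - 1.0) < 1e-9  # both branches rejoin
```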


programming language design and implementation | 2010

Safe programmable speculative parallelism

Prakash Prabhu; G. Ramalingam; Kapil Vaswani

Execution order constraints imposed by dependences can serialize computation, preventing parallelization of code and algorithms. Speculating on the value(s) carried by dependences is one way to break such critical dependences. Value speculation has been used effectively at a low level, by compilers and hardware. In this paper, we focus on the use of speculation by programmers as an algorithmic paradigm to parallelize seemingly sequential code. We propose two new language constructs, speculative composition and speculative iteration. These constructs enable programmers to declaratively express speculative parallelism in programs: to indicate when and how to speculate, increasing the parallelism in the program, without concerning themselves with mundane implementation details. We present a core language with speculation constructs and mutable state and present a formal operational semantics for the language. We use the semantics to define the notion of a correct speculative execution as one that is equivalent to a non-speculative execution. In general, speculation requires a runtime mechanism to undo the effects of speculative computation in the case of mispredictions. We describe a set of conditions under which such rollback can be avoided. We present a static analysis that checks if a given program satisfies these conditions. This allows us to implement speculation efficiently, without the overhead required for rollbacks. We have implemented the speculation constructs as a C# library, along with the static checker for safety. We present an empirical evaluation of the efficacy of this approach to parallelization.
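
The flavor of speculative composition can be sketched in a few lines, here in Python with threads rather than the paper's C# library, with made-up function names: guess the value a producer will deliver, start the consumer on the guess in parallel, and redo the consumer only on a misprediction. Because the consumer in this sketch has no side effects, no rollback machinery is needed, which is in the spirit of the conditions under which the paper avoids rollback.

```python
from concurrent.futures import ThreadPoolExecutor

def speculative_compose(f, g, x, predict):
    """Compute g(f(x)) with some parallelism by speculating on f's result."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        guess = predict(x)                   # predicted value of f(x)
        f_future = pool.submit(f, x)         # the real producer
        g_future = pool.submit(g, guess)     # consumer runs speculatively
        actual = f_future.result()
        if actual == guess:                  # prediction correct: reuse the work
            return g_future.result()
        return g(actual)                     # misprediction: redo the consumer

# Example: predict that a slow computation returns 0 (a common default).
result = speculative_compose(
    f=lambda x: x % 7,                       # stand-in for expensive work
    g=lambda v: v + 100,
    x=14,
    predict=lambda x: 0,
)
assert result == 100
```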


computer aided verification | 2008

Thread Quantification for Concurrent Shape Analysis

Josh Berdine; Tal Lev-Ami; Roman Manevich; G. Ramalingam; Mooly Sagiv

In this paper we address the problem of shape analysis for concurrent programs. We present new algorithms, based on abstract interpretation, for automatically verifying properties of programs with an unbounded number of threads manipulating an unbounded shared heap. Our algorithms are based on a new abstract domain whose elements represent thread-quantified invariants: i.e., invariants satisfied by all threads. We exploit existing abstractions to represent the invariants. Thus, our technique lifts existing abstractions by wrapping universal quantification around elements of the base abstract domain. Such abstractions are effective because they are thread modular: e.g., they can capture correlations between the local variables of the same thread as well as correlations between the local variables of a thread and global variables, but forget correlations between the states of distinct threads. (The exact nature of the abstraction, of course, depends on the base abstraction lifted in this style.) We present techniques for computing sound transformers for the new abstraction by using transformers of the base abstract domain. We illustrate our technique in this paper by instantiating it to the Boolean Heap abstraction, producing a Quantified Boolean Heap abstraction. We have implemented an instantiation of our technique with Canonical Abstraction as the base abstraction and used it to successfully verify linearizability of data structures in the presence of an unbounded number of threads.
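
The "wrap universal quantification around a base domain" idea can be illustrated with a deliberately simplified base domain: sets of made-up predicates about one thread's view of the state stand in for the heap abstractions (Boolean Heaps, Canonical Abstraction) used in the paper. A lifted element is a set of base elements, and it describes a concrete state exactly when every thread's view is covered by some base element.

```python
# Base element: predicates guaranteed to hold for one thread's view
# (its locals plus the globals). Lifted element: a set of base elements,
# read as "every thread's view satisfies at least one of these".
BaseElement = frozenset

def lifted_join(a, b):
    # The join forgets which threads contributed which views: plain union.
    return a | b

def describes(lifted, thread_views):
    """thread_views: for each thread, the set of predicates true in its view.
    The quantified invariant holds iff every view is covered by some element."""
    return all(any(base <= view for base in lifted) for view in thread_views)

inv = {BaseElement({"holds_lock"}), BaseElement({"waiting"})}
assert describes(inv, [{"holds_lock", "in_critical_section"}, {"waiting"}])
assert not describes(inv, [{"in_critical_section"}])   # some thread not covered
```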

Collaboration


Dive into G. Ramalingam's collaborations.

Top Co-Authors


Eran Yahav

Technion – Israel Institute of Technology


Thomas W. Reps

University of Wisconsin-Madison


Raghavan Komondoor

Indian Institute of Science
