
Publications


Featured research published by Raghavan Komondoor.


Static Analysis Symposium | 2001

Using Slicing to Identify Duplication in Source Code

Raghavan Komondoor; Susan Horwitz

Programs often have a lot of duplicated code, which makes both understanding and maintenance more difficult. This problem can be alleviated by detecting duplicated code, extracting it into a separate new procedure, and replacing all the clones (the instances of the duplicated code) by calls to the new procedure. This paper describes the design and initial implementation of a tool that finds clones and displays them to the programmer. The novel aspect of our approach is the use of program dependence graphs (PDGs) and program slicing to find isomorphic PDG subgraphs that represent clones. The key benefits of this approach are that our tool can find non-contiguous clones (clones whose components do not occur as contiguous text in the program), clones in which matching statements have been reordered, and clones that are intertwined with each other. Furthermore, the clones that are found are likely to be meaningful computations, and thus good candidates for extraction.
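The core idea of finding isomorphic PDG subgraphs can be illustrated with a toy sketch (the class and function names here are invented, and real PDGs distinguish control from data dependence edges and match statements by syntactic structure, not plain labels): starting from a matching seed pair of nodes, two clone candidates are grown backward along dependence edges, pairing predecessors whose labels match.

```python
# Minimal sketch of dependence-based clone-pair growth, a simplification
# of slicing-based matching over program dependence graphs. Statements
# are nodes labelled by a syntactic "shape"; a clone pair is grown
# backward from a matching seed by pairing predecessors with equal labels.

from collections import defaultdict

class PDG:
    def __init__(self):
        self.label = {}                    # node -> syntactic shape of the statement
        self.preds = defaultdict(set)      # node -> dependence predecessors

    def add(self, node, label, deps=()):
        self.label[node] = label
        self.preds[node] = set(deps)

def grow_clone_pair(pdg, a, b):
    """Grow a pair of isomorphic subgraphs backward from seed nodes a, b."""
    if pdg.label[a] != pdg.label[b]:
        return set(), set()
    left, right = {a}, {b}
    work = [(a, b)]
    while work:
        x, y = work.pop()
        for px in pdg.preds[x]:
            for py in pdg.preds[y]:
                if (pdg.label[px] == pdg.label[py]
                        and px not in left and py not in right):
                    left.add(px)
                    right.add(py)
                    work.append((px, py))
                    break                  # pair px with the first match
    return left, right
```

Because growth follows dependence edges rather than textual order, the two returned node sets need not be contiguous in the source, which is the property the paper highlights for its approach.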


International Conference on Management of Data | 1999

Update propagation protocols for replicated databases

Yuri Breitbart; Raghavan Komondoor; Rajeev Rastogi; S. Seshadri; Abraham Silberschatz

Replication is often used in many distributed systems to provide a higher level of performance, reliability and availability. Lazy replica update protocols, which propagate updates to replicas through independent transactions after the original transaction commits, have become popular with database vendors due to their superior performance characteristics. However, if lazy protocols are used indiscriminately, they can result in non-serializable executions. In this paper, we propose two new lazy update protocols that guarantee serializability but impose a much weaker requirement on data placement than earlier protocols. Further, many naturally occurring distributed systems, like distributed data warehouses, satisfy this requirement. We also extend our lazy update protocols to eliminate all requirements on data placement. The extension is a hybrid protocol that propagates as many updates as possible in a lazy fashion. We implemented our protocols on the Datablitz database system product developed at Bell Labs. We also conducted an extensive performance study which shows that our protocols outperform existing protocols over a wide range of workloads.
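The lazy-propagation idea itself can be shown in a toy model (illustrative only; this sketch implements neither the paper's serializability guarantees nor its data-placement requirements, and all class names are invented): the original transaction commits at the primary immediately, and updates reach the secondary replicas later through independent propagation steps.

```python
# Toy model of lazy update propagation: the primary commits locally and
# defers replica updates to later, independent "propagation transactions".

from collections import deque

class Replica:
    def __init__(self):
        self.data = {}

class LazyReplicated:
    def __init__(self, n_secondaries):
        self.primary = Replica()
        self.secondaries = [Replica() for _ in range(n_secondaries)]
        self.pending = deque()             # committed-but-unpropagated writes

    def commit(self, key, value):
        self.primary.data[key] = value     # original transaction commits now
        self.pending.append((key, value))  # propagation is deferred

    def propagate(self):
        while self.pending:                # one independent txn per update
            key, value = self.pending.popleft()
            for r in self.secondaries:
                r.data[key] = value
```

The window between `commit` and `propagate`, during which secondaries serve stale data, is exactly why indiscriminate lazy protocols can produce non-serializable executions, and why the paper constrains data placement.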


Symposium on Principles of Programming Languages | 2000

Semantics-preserving procedure extraction

Raghavan Komondoor; Susan Horwitz

Procedure extraction is an important program transformation that can be used to make programs easier to understand and maintain, to facilitate code reuse, and to convert “monolithic” code to modular or object-oriented code. Procedure extraction involves the following steps: (1) the statements to be extracted are identified (by the programmer or by a programming tool); (2) if the statements are not contiguous, they are moved together so that they form a sequence that can be extracted into a procedure, and so that the semantics of the original code is preserved; (3) the statements are extracted into a new procedure and replaced with an appropriate call. This paper addresses step 2: in particular, the conditions under which it is possible to move a set of selected statements together so that they become “extractable”, while preserving semantics. Since semantic equivalence is, in general, undecidable, we identify sufficient conditions based on control and data dependences, and define an algorithm that moves the selected statements together when the conditions hold. We also include an outline of a proof that our algorithm is semantics-preserving. While there has been considerable previous work on procedure extraction, we believe that this is the first paper to provide an algorithm for semantics-preserving procedure extraction given an arbitrary set of selected statements in an arbitrary control-flow graph.
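A dependence-based sufficient condition of this flavor can be sketched for straight-line code (a deliberate simplification; the paper's algorithm handles arbitrary control-flow graphs, and this function and its representation are invented for illustration): selected statements can be grouped into one contiguous block only if no unselected statement is "trapped" between them by data dependences.

```python
def can_group(n, deps, selected):
    """Sufficient check that the statements in `selected` can be made
    contiguous. deps is a set of pairs (i, j) meaning statement j depends
    on statement i, so j must stay after i. Grouping fails if some
    unselected statement u both depends on a selected statement and is
    depended on by a selected statement: u cannot be moved out of the way.
    """
    # Transitive closure of the dependence relation over statements 0..n-1.
    reach = [[False] * n for _ in range(n)]
    for i, j in deps:
        reach[i][j] = True
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if reach[i][k] and reach[k][j]:
                    reach[i][j] = True

    for u in range(n):
        if u in selected:
            continue
        after_some_sel = any(reach[s][u] for s in selected)
        before_some_sel = any(reach[u][s] for s in selected)
        if after_some_sel and before_some_sel:
            return False                   # u is trapped between selected stmts
    return True
```

For example, with dependences 0 → 1 → 2 and selection {0, 2}, statement 1 must sit between the two selected statements, so they cannot be made contiguous.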


Tools and Algorithms for Construction and Analysis of Systems | 2005

Dependent types for program understanding

Raghavan Komondoor; G. Ramalingam; Satish Chandra; John Field

Weakly-typed languages such as Cobol often force programmers to represent distinct data abstractions using the same low-level physical type. In this paper, we describe a technique to recover implicitly-defined data abstractions from programs using type inference. We present a novel system of dependent types which we call guarded types, a path-sensitive algorithm for inferring guarded types for Cobol programs, and a semantic characterization of correct guarded typings. The results of our inference technique can be used to enhance program understanding for legacy applications, and to enable a number of type-based program transformations.
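The shape of a guarded type can be illustrated with a small hypothetical example (the names `GuardedType`, `payment_field`, and the record layout are all invented; the paper's types guard COBOL physical representations, not Python dictionaries): one physical field is given different abstract types depending on a predicate over another field, such as a record's tag byte.

```python
# Hypothetical illustration of a guarded type: a list of (guard, type)
# cases, where the guard is a predicate over the record that selects
# which abstract type the shared physical field carries.

from dataclasses import dataclass

@dataclass
class GuardedType:
    cases: list                            # list of (guard, type-name) pairs

    def type_of(self, record):
        for guard, ty in self.cases:
            if guard(record):
                return ty
        return "unknown"

# One physical "number" field holds a card number or a cheque number,
# discriminated by a tag field -- two abstractions, one representation.
payment_field = GuardedType(cases=[
    (lambda r: r["kind"] == "C", "CardNumber"),
    (lambda r: r["kind"] == "Q", "ChequeNumber"),
])
```

Path sensitivity matters for inferring such types because the guard (here, the test on `kind`) typically appears as a branch condition in the program, and only code on the matching branch treats the field as the corresponding abstraction.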


European Symposium on Programming | 2001

Tool Demonstration: Finding Duplicated Code Using Program Dependences

Raghavan Komondoor; Susan Horwitz

The results of several studies [1,7,8] indicate that 7-23% of the source code for large programs is duplicated code. Duplication makes programs harder to maintain because when enhancements or bug fixes are made in one instance of the duplicated code, it is necessary to search for the other instances in order to perform the corresponding modification.


Working Conference on Reverse Engineering | 2007

Recovering Data Models via Guarded Dependences

Raghavan Komondoor; G. Ramalingam

This paper presents an algorithm for reverse engineering semantically sound object-oriented data models from programs written in weakly-typed languages like COBOL. Our inference is based on a novel form of guarded transitive data dependence, and improves upon prior semantics-based model inference algorithms by producing simpler, easier-to-understand models, and by inferring them more efficiently.


International Conference on Software Engineering | 2006

Semantics-based reverse engineering of object-oriented data models

G. Ramalingam; Raghavan Komondoor; John Field; Saurabh Sinha

We present an algorithm for reverse engineering object-oriented (OO) data models from programs written in weakly-typed languages like Cobol. These models, similar to UML class diagrams, can facilitate a variety of program maintenance and migration activities. Our algorithm is based on a semantic analysis of the program's code, and we provide a bisimulation-based formalization of what it means for an OO data model to be correct for a program.


Conference on Object-Oriented Programming Systems, Languages, and Applications | 2011

Null dereference verification via over-approximated weakest pre-conditions analysis

Ravichandhran Madhavan; Raghavan Komondoor

Null dereferences are a bane of programming in languages such as Java. In this paper we propose a sound, demand-driven, inter-procedurally context-sensitive dataflow analysis technique to verify a given dereference as safe or potentially unsafe. Our analysis uses an abstract lattice of formulas to find a pre-condition at the entry of the program such that a null dereference can occur only if the initial state of the program satisfies this pre-condition. We use a simplified domain of formulas, abstracting out integer arithmetic, as well as unbounded access paths due to recursive data structures. For the sake of precision we model aliasing relationships explicitly in our abstract lattice, enable strong updates, and use a limited notion of path sensitivity. For the sake of scalability we prune formulas continually as they get propagated, replacing with true those conjuncts that are less likely to be useful in validating or invalidating the formula. We have implemented our approach, and present an evaluation of it on a set of ten real Java programs. Our results show that the set of design features we have incorporated enables the analysis to (a) explore long, inter-procedural paths to verify each dereference, with (b) reasonable accuracy, and (c) very quick response time per dereference, making it suitable for use in desktop development environments.
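The backward flavor of such an analysis can be sketched in miniature (a drastic simplification: intraprocedural, straight-line, no aliasing or path sensitivity, with an invented representation where a "formula" is just the set of variables that must be null at entry for the dereference to fail):

```python
# Simplified backward analysis for a null dereference: process statements
# in reverse from the dereference, maintaining the set of variables whose
# null-ness at that point would make the dereference fail. Allocation
# discharges the obligation; assignment substitutes rhs for lhs (the
# weakest-precondition rule for x := y).

def backward_null_check(stmts, deref_var):
    """stmts: list of ('assign', lhs, rhs) or ('new', lhs), in program
    order, followed by a dereference of deref_var. Returns the set of
    entry variables whose null-ness could trigger the failure; an empty
    set means the dereference is verified safe."""
    maybe_null = {deref_var}               # condition just before the deref
    for stmt in reversed(stmts):
        if stmt[0] == 'new':
            maybe_null.discard(stmt[1])    # lhs is non-null after allocation
        elif stmt[0] == 'assign':
            lhs, rhs = stmt[1], stmt[2]
            if lhs in maybe_null:
                maybe_null.discard(lhs)
                maybe_null.add(rhs)
    return maybe_null
```

For `t = new; x = t; x.f`, walking backward rewrites the obligation "x is null" to "t is null" and then discharges it at the allocation, verifying the dereference; for `x = p; x.f`, the obligation reaches entry as "p is null", so the dereference is only potentially unsafe.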


International Conference on Software Maintenance | 2010

A case study in matching service descriptions to implementations in an existing system

Hari Shanker Gupta; Deepak D'Souza; Raghavan Komondoor; Girish Maskeri Rama

A number of companies are trying to migrate large monolithic software systems to Service Oriented Architectures. A common approach to do this is to first identify and describe desired services (i.e., create a model), and then to locate portions of code within the existing system that implement the described services. In this paper we describe a detailed case study we undertook to match a model to an open-source business application. We describe the systematic methodology we used, the results of the exercise, as well as several observations that throw light on the nature of this problem. We also suggest and validate heuristics that are likely to be useful in partially automating the process of matching service descriptions to implementations.


Working Conference on Reverse Engineering | 2007

Parametric Process Model Inference

Saurabh Sinha; G. Ramalingam; Raghavan Komondoor

Legacy applications can be difficult and time-consuming to understand and update due to the lack of modern abstraction mechanisms in legacy languages, as well as the gradual deterioration of code due to repeated maintenance activities. We present an approach for reverse engineering process model abstractions from legacy code. Such a process model can provide a quick initial understanding of an application, and can be a useful starting point for further program exploration. Our approach takes as input a user specification of interesting events, and creates a representation (i.e., a process model) that concisely depicts the occurrences of the events and the possible control-flow among them. The key features of our approach are the use of a logical data model of the program for specifying the events, and graph-projection techniques for creating the process model.
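The graph-projection step can be sketched as follows (a simplification with invented names; the paper's models are built from a logical data model of the program, not a bare edge list): keep only the event nodes of the control-flow graph, and connect two events whenever the original graph has a path between them passing only through non-event nodes.

```python
# Sketch of graph projection onto "interesting event" nodes: for each
# event, find every event reachable through non-event nodes only, giving
# the possible control-flow among event occurrences.

from collections import defaultdict

def project(edges, events):
    succ = defaultdict(set)
    for u, v in edges:
        succ[u].add(v)

    def events_reachable(start):
        seen, out, stack = set(), set(), [start]
        while stack:
            n = stack.pop()
            for m in succ[n]:
                if m in events:
                    out.add(m)             # stop at events: edge in the model
                elif m not in seen:
                    seen.add(m)            # pass through non-event nodes
                    stack.append(m)
        return out

    return {e: events_reachable(e) for e in events}
```

The resulting map is a concise process model: all non-event control flow is collapsed, so the model depicts only the specified events and the order in which they can occur.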

Collaboration


Dive into Raghavan Komondoor's collaborations.

Top Co-Authors

Susan Horwitz
University of Wisconsin-Madison

Deepak D'Souza
Indian Institute of Science

K. Vasanta Lakshmi
Indian Institute of Science

S. Narendran
Indian Institute of Science

Sudha Balodia
Indian Institute of Science