
Publication


Featured research published by Murali Krishna Ramanathan.


sensor, mesh and ad hoc communications and networks | 2005

Redundant reader elimination in RFID systems

Bogdan Carbunar; Murali Krishna Ramanathan; Mehmet Koyutürk; Christoph M. Hoffmann

While recent technological advances have motivated large-scale deployment of RFID systems, a number of critical design issues remain unresolved. In this paper we deal with detecting redundant RFID readers (the redundant reader problem). The underlying difficulty associated with this problem arises from the lack of collision detection mechanisms, the potential inability of RFID readers to relay packets generated by other readers, and severe resource constraints on RFID tags. We prove that finding an optimal solution to the redundant reader problem is NP-hard and propose a randomized, distributed, and localized approximation algorithm, RRE. We provide a detailed probabilistic analysis of the accuracy and time complexity of RRE and conduct elaborate simulations to demonstrate its correctness and efficiency.
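
The abstract describes RRE only at a high level; the core step is that each reader writes the number of tags it covers onto those tags, each tag retains the highest such bid as its holder, and readers left holding no tag can be switched off. The Python sketch below simulates that write-and-compare step under simplified assumptions; the geometry, function names, and parameters are illustrative, not the paper's implementation.

```python
import random

# Minimal sketch of the RRE idea: each reader "bids" on the tags it covers with
# its covered-tag count; a tag is held by the highest bidder; readers holding no
# tag are reported as redundant. Coverage model and names are assumptions.

def simulate_rre(num_readers=10, num_tags=40, radius=0.3, seed=1):
    rng = random.Random(seed)
    readers = [(rng.random(), rng.random()) for _ in range(num_readers)]
    tags = [(rng.random(), rng.random()) for _ in range(num_tags)]

    def covers(r, t):
        return (r[0] - t[0]) ** 2 + (r[1] - t[1]) ** 2 <= radius ** 2

    coverage = {i: [j for j, t in enumerate(tags) if covers(r, t)]
                for i, r in enumerate(readers)}

    # Each tag remembers the reader with the largest covered-tag count (ties by id).
    holder = {}
    for i, covered in coverage.items():
        bid = (len(covered), i)
        for j in covered:
            if j not in holder or bid > holder[j]:
                holder[j] = bid

    holders = {bid[1] for bid in holder.values()}
    # Readers holding no tag detect nothing that others do not; switch them off.
    redundant = sorted(set(coverage) - holders)
    return redundant

if __name__ == "__main__":
    print("redundant readers:", simulate_rre())
```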


international conference on software engineering | 2007

Path-Sensitive Inference of Function Precedence Protocols

Murali Krishna Ramanathan; Suresh Jagannathan

Function precedence protocols define ordering relations among function calls in a program. In some instances, precedence protocols are well-understood (e.g., a call to pthread_mutex_init must always be present on all program paths before a call to pthread_mutex_lock). Oftentimes, however, these protocols are neither well-documented nor easily derived. As a result, protocol violations can lead to subtle errors that are difficult to identify and correct. In this paper, we present CHRONICLER, a tool that applies scalable inter-procedural path-sensitive static analysis to automatically infer accurate function precedence protocols. CHRONICLER computes precedence relations based on a program's control-flow structure, integrates these relations into a repository, and analyzes them using sequence mining techniques to generate a collection of feasible precedence protocols. Deviations from these protocols found in the program are tagged as violations, and represent potential sources of bugs. We demonstrate CHRONICLER's effectiveness by deriving protocols for a collection of benchmarks ranging in size from 66K to 2M lines of code. Our results not only confirm the existence of bugs in these programs due to precedence protocol violations, but also highlight the importance of path sensitivity for accuracy and scalability.
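
To make the mining step concrete: once the analysis has collected, for every call site of a target function, the set of calls that precede it along feasible paths, frequently occurring predecessors can be reported as the inferred protocol and rare omissions flagged as potential violations. A minimal sketch of that last step follows; the call-site data, threshold, and function names are made-up examples, not CHRONICLER's actual inter-procedural analysis.

```python
from collections import Counter

# Toy precedence-protocol miner: report predecessors whose support across call
# sites meets a threshold, then flag call sites missing them.

def mine_precedence(predecessor_sets, min_support=0.8):
    counts = Counter()
    for preds in predecessor_sets:
        counts.update(set(preds))
    n = len(predecessor_sets)
    return {f for f, c in counts.items() if c / n >= min_support}

# Hypothetical predecessor sets observed before calls to pthread_mutex_lock.
call_sites_of_lock = [
    {"pthread_mutex_init", "open_log"},
    {"pthread_mutex_init", "parse_args"},
    {"pthread_mutex_init"},
    {"open_log"},                      # a likely protocol violation
    {"pthread_mutex_init", "open_log"},
]

protocol = mine_precedence(call_sites_of_lock)
print("inferred predecessors of pthread_mutex_lock:", protocol)
violations = [i for i, s in enumerate(call_sites_of_lock) if not protocol <= s]
print("call sites violating the protocol:", violations)
```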


international conference on peer-to-peer computing | 2005

Search with probabilistic guarantees in unstructured peer-to-peer networks

Ronaldo A. Ferreira; Murali Krishna Ramanathan; Asad Awan; Suresh Jagannathan

Search is a fundamental service in peer-to-peer (P2P) networks. However, despite numerous research efforts, efficient algorithms for guaranteed location of shared content in unstructured P2P networks are yet to be devised. In this paper, we present a simple but highly effective protocol for object location that gives probabilistic guarantees of finding even rare objects independently of the network topology. The protocol relies on randomized techniques for replication of objects (or their references) and for query propagation. We prove analytically, and demonstrate experimentally, that this scheme provides high probabilistic guarantees of success while incurring minimal overhead. We quantify the performance of the scheme in terms of network messages, probability of success, and response time, and evaluate the robustness of the protocol in the presence of node failures (departures). Using simulation, we show that the scheme performs no worse than the best known access-frequency-based protocols, without compromising access to rare objects.
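
The probabilistic guarantee rests on a birthday-paradox style argument: if references to an object are placed on roughly sqrt(n) random nodes and a query probes roughly sqrt(n) random nodes, the two sets intersect with constant, topology-independent probability. The small simulation below checks that intuition empirically; the parameters and uniform-sampling model are illustrative assumptions, not the paper's exact protocol.

```python
import math
import random

# Empirically estimate the chance that sqrt(n) random replicas and sqrt(n)
# random probes intersect, and compare against the 1 - 1/e intuition.

def hit_probability(n=10_000, trials=2_000, seed=7):
    rng = random.Random(seed)
    k = int(math.sqrt(n))
    hits = 0
    for _ in range(trials):
        replicas = set(rng.sample(range(n), k))   # nodes holding a reference
        probes = rng.sample(range(n), k)          # nodes visited by the query
        hits += any(p in replicas for p in probes)
    return hits / trials

if __name__ == "__main__":
    print("empirical success rate:", hit_probability())
    print("1 - 1/e ≈", 1 - math.exp(-1))
```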


programming language design and implementation | 2007

Static specification inference using predicate mining

Murali Krishna Ramanathan; Suresh Jagannathan

The reliability and correctness of complex software systems can be significantly enhanced through well-defined specifications that dictate the use of various units of abstraction (e.g., modules or procedures). Oftentimes, however, specifications are either missing, imprecise, or simply too complex to encode within a signature, necessitating specification inference. The process of inferring specifications from complex software systems forms the focus of this paper. We describe a static inference mechanism for identifying the preconditions that must hold whenever a procedure is called. These preconditions may reflect both data-flow properties (e.g., whenever p is called, variable x must be non-null) as well as control-flow properties (e.g., every call to p must be preceded by a call to q). We derive these preconditions using an inter-procedural path-sensitive dataflow analysis that gathers predicates at each program point. We apply mining techniques to these predicates to make specification inference robust to errors. This technique also allows us to derive higher-level specifications that abstract structural similarities among predicates (e.g., procedure p is called immediately after a conditional test that checks whether some variable v is non-null). We describe an implementation of these techniques, and validate the effectiveness of the approach on a number of large open-source benchmarks. Experimental results confirm that our mining algorithms are efficient, and that the specifications derived are both precise and useful: the implementation discovers several critical, yet previously undocumented, preconditions for well-tested libraries.
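
The two ingredients highlighted above, error-tolerant mining over gathered predicates and abstraction of structurally similar predicates, can be illustrated with a very small example: predicates observed at call sites are first normalized so that "x != NULL" and "buf != NULL" share one shape, and then shapes holding at most call sites are reported as inferred preconditions. The predicate data and the abstraction rule below are illustrative assumptions, not the paper's analysis.

```python
from collections import Counter

# Hedged sketch of precondition mining with error tolerance and a crude
# structural abstraction (variable names replaced by a placeholder).

def abstract(pred):
    # Replace the variable on the left-hand side with a placeholder.
    lhs, op, rhs = pred.split(maxsplit=2)
    return f"<v> {op} {rhs}"

def infer_preconditions(sites, min_confidence=0.75):
    counts = Counter()
    for preds in sites:
        counts.update({abstract(p) for p in preds})
    n = len(sites)
    return {p: c / n for p, c in counts.items() if c / n >= min_confidence}

# Hypothetical predicates observed at four call sites of a procedure p.
call_sites_of_p = [
    {"x != NULL", "len > 0"},
    {"buf != NULL", "len > 0"},
    {"x != NULL"},
    {"ptr != NULL", "len > 0"},
]

print(infer_preconditions(call_sites_of_p))
# e.g. {'<v> != NULL': 1.0, '<v> > 0': 0.75}
```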


Journal of Parallel and Distributed Computing | 2009

Efficient tag detection in RFID systems

Bogdan Carbunar; Murali Krishna Ramanathan; Mehmet Koyutürk; Suresh Jagannathan

Recent technological advances have motivated large-scale deployment of RFID systems. However, a number of critical design issues relating to efficient detection of tags remain unresolved. In this paper, we address three important problems associated with tag detection in RFID systems: (i) accurately detecting RFID tags in the presence of reader interference (reader collision avoidance problem); (ii) eliminating redundant tag reports by multiple readers (optimal tag reporting problem); and (iii) minimizing redundant reports from multiple readers by identifying a minimal set of readers that cover all tags present in the system (optimal tag coverage problem). The underlying difficulties associated with these problems arise from the lack of collision detection mechanisms, the potential inability of RFID readers to relay packets generated by other readers, and severe resource constraints on RFID tags. In this paper we present a randomized, distributed and localized Reader Collision Avoidance (RCA) algorithm and provide detailed probabilistic analysis to establish the accuracy and the efficiency of this algorithm. Then, we prove that the optimal tag coverage problem is NP-hard even with global knowledge of reader and tag locations. We develop a distributed and localized Redundant Reader Elimination (RRE) algorithm that efficiently identifies redundant readers and avoids redundant reporting by multiple readers. In addition to rigorous analysis of performance and accuracy, we provide results from elaborate simulations for a wide range of system parameters, demonstrating the correctness and efficiency of the proposed algorithms under various scenarios.
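
The abstract does not spell out how RCA works, so the sketch below only illustrates the general flavor of randomized, localized collision avoidance: in each round, an unscheduled reader picks a random slot and keeps it only if no interfering neighbor claimed the same slot. This is a generic sketch under assumed names and an assumed interference graph, not the RCA algorithm from the paper.

```python
import random

# Generic randomized slot assignment over an interference graph: readers that
# keep distinct slots from all interfering neighbors can interrogate tags
# without reader-to-reader collisions.

def schedule(readers, interferes, num_slots=4, rounds=20, seed=3):
    rng = random.Random(seed)
    slot = {}
    for _ in range(rounds):
        pending = [r for r in readers if r not in slot]
        if not pending:
            break
        choice = {r: rng.randrange(num_slots) for r in pending}
        for r in pending:
            clash = any(
                choice.get(nb) == choice[r] or slot.get(nb) == choice[r]
                for nb in interferes[r]
            )
            if not clash:
                slot[r] = choice[r]
    return slot

readers = ["R1", "R2", "R3", "R4"]
interferes = {"R1": ["R2"], "R2": ["R1", "R3"], "R3": ["R2", "R4"], "R4": ["R3"]}
print(schedule(readers, interferes))
```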


automated software engineering | 2006

Sieve: A Tool for Automatically Detecting Variations Across Program Versions

Murali Krishna Ramanathan; Suresh Jagannathan

Software systems often undergo many revisions during their lifetime as new features are added, bugs repaired, abstractions simplified and refactored, and performance improved. When a revision, even a minor one, does occur, the changes it induces must be tested to ensure that invariants assumed in the original version are not violated unintentionally. In order to avoid testing components that are unchanged across revisions, impact analysis is often used to identify code blocks or functions that are affected by a change. In this paper, we present a novel solution to this general problem that uses dynamic programming on instrumented traces of different program binaries to identify longest common subsequences in strings generated by these traces. Our formulation allows us to perform impact analysis and also to detect the smallest set of locations within the functions where the effect of the changes actually manifests itself. Sieve is a tool that incorporates these ideas. Sieve is unobtrusive, requiring no programmer or compiler intervention to guide its behavior. Our experiments on multiple versions of open-source C programs show that Sieve is an effective and scalable tool to identify impact sets and can locate regions in the affected functions where the changes manifest. These results lead us to conclude that Sieve can play a beneficial role in program testing and software maintenance.
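
Since the abstract names the mechanism (longest common subsequence over instrumented traces), the sketch below shows that step on toy data: the dynamic-programming LCS of two event traces is computed, and events of the new trace that fall outside the common subsequence mark where the versions diverge. The trace format and event names are illustrative assumptions, not Sieve's instrumentation.

```python
# Compute the LCS of two traces and report the new-version events outside it,
# i.e. the points where behaviour diverges between versions.

def divergent_events(a, b):
    m, n = len(a), len(b)
    # dp[i][j] = length of LCS of a[i:] and b[j:]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m - 1, -1, -1):
        for j in range(n - 1, -1, -1):
            dp[i][j] = dp[i + 1][j + 1] + 1 if a[i] == b[j] else max(dp[i + 1][j], dp[i][j + 1])
    # Recover one LCS and collect the positions in b that are not matched.
    i = j = 0
    matched_in_b = set()
    while i < m and j < n:
        if a[i] == b[j]:
            matched_in_b.add(j)
            i += 1
            j += 1
        elif dp[i + 1][j] >= dp[i][j + 1]:
            i += 1
        else:
            j += 1
    return [(j, b[j]) for j in range(n) if j not in matched_in_b]

old = ["enter:f", "load:x", "call:g", "ret:g", "store:y", "exit:f"]
new = ["enter:f", "load:x", "call:h", "ret:h", "store:y", "store:z", "exit:f"]
print("divergent events in new version:", divergent_events(old, new))
```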


acm symposium on applied computing | 2008

PHALANX: a graph-theoretic framework for test case prioritization

Murali Krishna Ramanathan; Mehmet Koyutürk; Suresh Jagannathan

Test case prioritization for regression testing can be performed using different metrics (e.g., statement coverage, path coverage) depending on the application context. Employing different metrics requires different prioritization schemes (e.g., maximum coverage, dissimilar paths covered). This results in significant algorithmic and implementation complexity in the testing process associated with various metrics and prioritization schemes. In this paper, we present a novel approach to the test case prioritization problem that addresses this limitation. We devise a framework, Phalanx, that identifies two distinct aspects of the problem. The first relates to metrics that define ordering relations among test cases; the second defines mechanisms that implement these metrics on test suites. We abstract the information into a test-case dissimilarity graph -- a weighted graph in which nodes specify test cases and weighted edges specify user-defined proximity measures between test cases. We argue that a declustered linearization of nodes in the graph results in a desirable prioritization of test cases, since it ensures that dissimilar test cases are applied first. We explore two mechanisms for declustering the test case dissimilarity graph -- Fiedler (spectral) ordering and a greedy approach. We implement these orderings in Phalanx, a highly flexible and customizable testbed, and demonstrate excellent performance for test-case prioritization. Our experiments on test suites available from the Subject Infrastructure Repository (SIR) show that a variety of user-defined metrics can be easily incorporated in Phalanx.
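
Of the two declustering mechanisms mentioned, the greedy one is the easier to illustrate: given a pairwise dissimilarity matrix over test cases, repeatedly pick the test whose minimum dissimilarity to the already-ordered tests is largest, so that dissimilar tests run first. The matrix below is a made-up example and the spectral (Fiedler) ordering is not shown; this is a sketch of the idea, not Phalanx's implementation.

```python
# Greedy "declustered" linearization of a test-case dissimilarity matrix.

def decluster_order(dissim):
    n = len(dissim)
    # Start from the test that is most dissimilar to everything on average.
    order = [max(range(n), key=lambda i: sum(dissim[i]))]
    remaining = set(range(n)) - set(order)
    while remaining:
        nxt = max(remaining, key=lambda i: min(dissim[i][j] for j in order))
        order.append(nxt)
        remaining.remove(nxt)
    return order

# dissim[i][j]: e.g. fraction of statements covered by exactly one of tests i, j.
dissim = [
    [0.0, 0.1, 0.9, 0.8],
    [0.1, 0.0, 0.8, 0.9],
    [0.9, 0.8, 0.0, 0.2],
    [0.8, 0.9, 0.2, 0.0],
]
print("prioritized order:", decluster_order(dissim))
```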


Distributed Computing | 2007

Randomized Leader Election

Murali Krishna Ramanathan; Ronaldo A. Ferreira; Suresh Jagannathan; Wojciech Szpankowski

We present an efficient randomized algorithm for leader election in large-scale distributed systems. The proposed algorithm is optimal in message complexity (O(n) for a set of n total processes), has round complexity logarithmic in the number of processes in the system, and provides high probabilistic guarantees on the election of a unique leader. The algorithm relies on a balls and bins abstraction and works in two phases. The main novelty of the work is in the first phase where the number of contending processes is reduced in a controlled manner. Probabilistic quorums are used to determine a winner in the second phase. We discuss, in detail, the synchronous version of the algorithm, provide extensions to an asynchronous version and examine the impact of failures.
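
The two-phase shape described above can be illustrated, in heavily simplified form, as follows: phase 1 thins the contender set in a controlled way (here, each process draws a geometric "level" and only maximum-level processes survive, leaving a small expected number of contenders), and phase 2 breaks the remaining tie (here with plain random draws; the paper uses probabilistic quorums for this step). The sketch is an illustration of the structure, not the paper's algorithm.

```python
import random

# Two-phase toy election: controlled contender reduction, then a tie-break.

def elect(process_ids, seed=11):
    rng = random.Random(seed)

    def level():
        # Number of consecutive "heads"; the maximum over n processes is ~log2(n).
        l = 0
        while rng.random() < 0.5:
            l += 1
        return l

    levels = {p: level() for p in process_ids}
    top = max(levels.values())
    survivors = [p for p in process_ids if levels[p] == top]

    # Phase 2 (simplified stand-in for the quorum step): random draws decide.
    draws = {p: rng.random() for p in survivors}
    return max(draws, key=draws.get), len(survivors)

leader, contenders = elect(range(1000))
print(f"leader: {leader}, contenders after phase 1: {contenders}")
```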


international conference on peer-to-peer computing | 2005

Randomized protocols for duplicate elimination in peer-to-peer storage systems

Ronaldo A. Ferreira; Murali Krishna Ramanathan; Suresh Jagannathan

Distributed peer-to-peer storage systems rely on voluntary participation of peers to effectively manage a storage pool. Files are generally replicated on several sites to provide acceptable levels of availability. If disk space on these peers is not carefully monitored and provisioned, the system may not be able to provide availability for certain files. In particular, identification and elimination of redundant data are important problems that may arise in long-lived systems. Scalability and availability are competing goals in these networks: scalability concerns would dictate aggressive elimination of replicas, while availability considerations would argue conversely. In this paper, we provide a novel and efficient solution that addresses both these goals with respect to management of redundant data. Specifically, we address the problem of duplicate elimination in the context of systems connected over an unstructured peer-to-peer network in which there is no a priori binding between an object and its location. We propose new randomized protocols that solve this problem in a scalable and decentralized fashion without compromising the availability requirements of the application. Performance results using both large-scale simulations and a prototype built on PlanetLab demonstrate that the protocols provide high probabilistic guarantees of success while incurring minimal administrative overheads.
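
The scalability-versus-availability tension framed above can be made concrete with a back-of-the-envelope calculation that is not part of the paper: under the simplifying assumption that peers are up independently with probability p, the number of replicas needed to reach a target availability is small, and copies beyond that number are candidates for elimination.

```python
import math

# P(at least one of k replicas is up) = 1 - (1 - p)^k >= target
# => k >= log(1 - target) / log(1 - p)

def replicas_needed(peer_availability, target_availability):
    p, t = peer_availability, target_availability
    return math.ceil(math.log(1 - t) / math.log(1 - p))

for p in (0.3, 0.5, 0.7):
    k = replicas_needed(p, 0.999)
    print(f"peer availability {p:.1f}: keep {k} replicas, delete the rest")
```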


fundamental approaches to software engineering | 2006

Trace-based memory aliasing across program versions

Murali Krishna Ramanathan; Suresh Jagannathan

One of the major costs of software development is associated with testing and validation of successive versions of software systems. An important problem encountered in testing and validation is memory aliasing, which involves correlation of variables across program versions. This is useful to ensure that existing invariants are preserved in newer versions and to match program execution histories. Recent work in this area has focused on trace-based techniques to better isolate affected regions. A variation of this general approach considers memory operations to generate more refined impact sets. The utility of such an approach eventually relies on the ability to effectively recognize aliases.

In this paper, we address the general memory aliasing problem and present a probabilistic trace-based technique for correlating memory locations across execution traces, and associated variables in program versions. Our approach is based on computing the log-odds ratio, which defines the affinity of locations based on observed patterns. As part of the aliasing process, the traces for initial test inputs are aligned without considering aliasing. From the aligned traces, the log-odds ratio of the memory locations is computed. Subsequently, aliasing is used for alignment of successive traces. Our technique can easily be extended to other applications where detecting aliasing is necessary. As a case study, we implement and use our approach in dynamic impact analysis for detecting variations across program versions. Using detailed experiments on real versions of software systems, we observe significant improvements in detection of affected regions when aliasing occurs.
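
The affinity computation can be illustrated on toy data: from already-aligned trace events, count how often an address in the old version co-occurs with an address in the new version, and score each candidate pairing by a log-odds-style ratio of observed co-occurrence to what independence would predict. The aligned pairs below are made up, and the paper's exact statistic and alignment procedure may differ from this sketch.

```python
import math
from collections import Counter

# Score old-version/new-version address pairings by how much their observed
# co-occurrence exceeds the independence baseline.

def alias_scores(aligned_pairs):
    joint = Counter(aligned_pairs)
    old_c = Counter(x for x, _ in aligned_pairs)
    new_c = Counter(y for _, y in aligned_pairs)
    n = len(aligned_pairs)
    return {
        (x, y): math.log((c / n) / ((old_c[x] / n) * (new_c[y] / n)))
        for (x, y), c in joint.items()
    }

aligned = [("0xA0", "0xB4"), ("0xA0", "0xB4"), ("0xA0", "0xC8"),
           ("0xA8", "0xC8"), ("0xA8", "0xC8"), ("0xA0", "0xB4")]
for pair, s in sorted(alias_scores(aligned).items(), key=lambda kv: -kv[1]):
    print(pair, round(s, 2))
```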

Collaboration


Dive into Murali Krishna Ramanathan's collaborations.

Top Co-Authors

Bogdan Carbunar, Florida International University

Mehmet Koyutürk, Case Western Reserve University

Koushik Sen, University of California