Rahul Purandare
University of Nebraska–Lincoln
Publications
Featured research published by Rahul Purandare.
International Conference on Software Engineering | 2007
Matthew B. Dwyer; Sebastian G. Elbaum; Suzette Person; Rahul Purandare
Model checkers search the space of possible program behaviors to detect errors and to demonstrate their absence. Despite major advances in reduction and optimization techniques, state-space search can still become cost-prohibitive as program size and complexity increase. In this paper, we present a technique for dramatically improving the cost-effectiveness of state-space search techniques for error detection using parallelism. Our approach can be composed with all of the reduction and optimization techniques we are aware of to amplify their benefits. It was developed based on insights gained from performing a large empirical study of the cost-effectiveness of randomization techniques in state-space analysis. We explain those insights and our technique, and then show through a focused empirical study that our technique speeds up analysis by factors ranging from 2 to over 1000 as compared to traditional modes of state-space search, and does so with relatively small numbers of parallel processors.
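As a rough illustration of the idea, the sketch below runs several randomized depth-first searches over a state space in parallel and stops as soon as any worker reaches an error state. The `State` interface and its methods are assumptions made for illustration, not the paper's model-checker API.

```java
import java.util.*;
import java.util.concurrent.*;

// A minimal sketch, not the paper's implementation: each worker runs a
// depth-first search with a differently seeded random successor order,
// and the first error state found by any worker is returned.
interface State {
    boolean isError();
    List<State> successors();
}

final class ParallelRandomSearch {
    static Optional<State> findError(State initial, int workers) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        CompletionService<State> results = new ExecutorCompletionService<>(pool);
        for (int i = 0; i < workers; i++) {
            long seed = i;                              // different random order per worker
            results.submit(() -> search(initial, new Random(seed)));
        }
        try {
            for (int i = 0; i < workers; i++) {
                State error = results.take().get();     // workers report in completion order
                if (error != null) return Optional.of(error);
            }
            return Optional.empty();                    // every worker exhausted its search
        } finally {
            pool.shutdownNow();                         // cancel workers that are still running
        }
    }

    private static State search(State start, Random rnd) {
        Deque<State> stack = new ArrayDeque<>();
        Set<State> seen = new HashSet<>();
        stack.push(start);
        while (!stack.isEmpty() && !Thread.currentThread().isInterrupted()) {
            State s = stack.pop();
            if (!seen.add(s)) continue;                 // skip already visited states
            if (s.isError()) return s;
            List<State> next = new ArrayList<>(s.successors());
            Collections.shuffle(next, rnd);             // randomized exploration order
            next.forEach(stack::push);
        }
        return null;                                    // no error found on this worker
    }
}
```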
Automated Software Engineering | 2007
Matthew B. Dwyer; Rahul Purandare
Programmers using complex libraries and frameworks are faced with the difficult task of ensuring that their implementations comply with complex and informally described rules for proper sequencing of API calls. Recent advances in static and dynamic techniques for checking explicit specifications of program typestate properties have shown promise in addressing this challenge. Unfortunately, static typestate analyses are limited in their scalability and dynamic analyses can suffer from significant run-time overhead. In this paper, we present an approach that exploits information calculated by flow-sensitive static typestate analyses to reformulate the original analysis problem as a residual dynamic typestate analysis. We demonstrate that residual analyses retain the error reporting of unoptimized dynamic analysis while offering the potential for significantly reducing analysis cost.
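A minimal sketch of what a residual monitor could look like for the familiar "call hasNext() before next()" typestate property: the static phase (not shown) would discharge most call sites, leaving only the residual ones wrapped with runtime checks. The class and its fields are illustrative assumptions, not the paper's implementation.

```java
import java.util.Iterator;

// Hypothetical residual monitor: only call sites the static analysis could
// not prove safe go through this wrapper; all other sites stay uninstrumented.
final class ResidualIteratorMonitor<T> {
    private enum S { UNSAFE, SAFE }                    // SAFE after hasNext() returned true
    private S state = S.UNSAFE;
    private final Iterator<T> delegate;

    ResidualIteratorMonitor(Iterator<T> delegate) { this.delegate = delegate; }

    boolean hasNext() {
        boolean more = delegate.hasNext();
        state = more ? S.SAFE : S.UNSAFE;
        return more;
    }

    T next() {
        if (state != S.SAFE) {
            throw new IllegalStateException("typestate violation: next() without hasNext()");
        }
        state = S.UNSAFE;                              // one next() consumes the permission
        return delegate.next();
    }
}
```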
Conference on Object-Oriented Programming Systems, Languages, and Applications | 2010
Rahul Purandare; Matthew B. Dwyer; Sebastian G. Elbaum
There has been significant interest in equipping programs with runtime checks aimed at detecting errors to improve fault detection during testing and in the field. Recent work in this area has studied methods for efficiently monitoring a program execution's conformance to path property specifications, e.g., those captured by a finite state automaton. These techniques show great promise, but their broad applicability is hampered by the fact that for certain combinations of programs and properties the overhead of checking can slow the program down by up to 3500%. We have observed that, in many cases, the overhead of runtime monitoring is due to the behavior of program loops. We present a general framework for optimizing the monitoring of loops relative to a property. This framework allows monitors to process a loop in constant time rather than time that is proportional to the number of loop iterations. We present the results of an empirical study demonstrating the significant overhead reduction that can be achieved by applying the framework to monitor properties of several large Java programs.
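The sketch below illustrates a simplified version of the loop idea under a strong assumption: if every iteration emits the same event sequence, then once one iteration leaves the monitor state unchanged, further iterations cannot change it either, so monitoring can be switched off for the rest of the loop. `Monitor`, `Event`, and the driver are hypothetical names, not the framework's API.

```java
import java.util.List;

// Sketch under the assumption that each iteration emits the same events:
// once an iteration leaves the monitor state unchanged, repeating that
// iteration cannot change the state either, so monitoring can stop.
interface Event { }

interface Monitor {
    int state();                 // current automaton state
    void step(Event e);          // advance on one event
}

final class LoopMonitorDriver {
    // Feeds one iteration's events to the monitor and reports whether the
    // monitor state changed; the caller keeps monitoring only while it does.
    static boolean iterationChangedState(Monitor m, List<Event> iterationEvents) {
        int before = m.state();
        for (Event e : iterationEvents) m.step(e);
        return m.state() != before;
    }
}

// Usage sketch inside an instrumented loop:
//   boolean monitoring = true;
//   while (hasWork()) {
//       if (monitoring) monitoring = LoopMonitorDriver.iterationChangedState(monitor, events);
//       ...original loop body...
//   }
```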
Runtime Verification | 2010
Matthew B. Dwyer; Rahul Purandare; Suzette Person
Runtime verification has primarily been developed and evaluated as a means of enriching the software testing process. While many researchers have pointed to its potential applicability in online approaches to software fault tolerance, there has been a dearth of work exploring the details of how that might be accomplished. In this paper, we describe how a component-oriented approach to software health management exposes the connections between program execution, error detection, fault diagnosis, and recovery. We identify both research challenges and opportunities in exploiting those connections. Specifically, we describe how recent approaches to reducing the overhead of runtime monitoring aimed at error detection might be adapted to reduce the overhead and improve the effectiveness of fault diagnosis.
Runtime Verification | 2011
Rahul Purandare; Matthew B. Dwyer; Sebastian G. Elbaum
Monitoring complex applications to detect violations of specified properties is a promising field that has seen the development of many novel techniques and tools in the last decade. In spite of this effort, limiting, understanding, and predicting the cost of monitoring have remained a challenge. Existing techniques primarily target the overhead caused by the large number of monitor instances to be maintained and the large number of property-related events generated by the program. However, other factors, in particular the algorithm used to process the sequence of events, can significantly influence runtime overhead. In this work, we describe three basic algorithmic approaches to finite-state monitoring and distill some of their relative strengths by conducting preliminary studies. The results of the studies reveal non-trivial differences in runtime overhead when using different monitoring algorithms, which can inform future work.
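For context, the compared approaches share a finite-state core like the sketch below: a transition table driven by program events, with a trap state that flags a violation. The property encoded here (no read after close) and the class are illustrative, not the paper's benchmark monitors.

```java
import java.util.Map;

// Minimal finite-state monitor sketch: events drive a transition table,
// and reaching the ERROR trap state reports a property violation.
final class FsmMonitor {
    enum State { OPEN, CLOSED, ERROR }
    enum Event { READ, CLOSE }

    private State state = State.OPEN;
    private static final Map<State, Map<Event, State>> DELTA = Map.of(
        State.OPEN,   Map.of(Event.READ, State.OPEN,  Event.CLOSE, State.CLOSED),
        State.CLOSED, Map.of(Event.READ, State.ERROR, Event.CLOSE, State.CLOSED)
    );

    void on(Event e) {
        if (state == State.ERROR) return;              // stay in the trap state
        state = DELTA.get(state).get(e);
        if (state == State.ERROR) {
            System.err.println("property violation: read after close");
        }
    }
}
```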
International Symposium on Software Testing and Analysis | 2013
Rahul Purandare; Matthew B. Dwyer; Sebastian G. Elbaum
Runtime monitoring has proven effective in detecting property violations, but it can incur high overhead when monitoring even a single property, particularly when the property relates multiple objects. In practice, developers will likely monitor multiple properties in the same execution, which leads to even higher overhead. This paper presents the first study of overhead arising during the simultaneous monitoring of multiple properties. We present two techniques for mitigating overhead in such cases that exploit reductions in cost arising from the sharing of information between property monitors. This sharing permits a single monitor to function in place of many monitors. We evaluate these techniques on the DaCapo benchmarks and 8 temporal correctness properties and find that they offer significant overhead reductions, as high as 57%, as the number of monitored properties increases.
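A minimal sketch of the sharing intuition: rather than paying per-event bookkeeping once per property, a single combined monitor dispatches each event once and lets all interested property automata advance together. The interfaces below are assumptions for illustration, not the techniques evaluated in the paper.

```java
import java.util.List;

// One combined monitor standing in for many per-property monitors: the
// event lookup and dispatch cost is paid once per event, not once per property.
interface PropertyMonitor {
    void on(String event);       // advances this property's state machine
    boolean violated();
}

final class CombinedMonitor {
    private final List<PropertyMonitor> parts;
    CombinedMonitor(List<PropertyMonitor> parts) { this.parts = parts; }

    void on(String event) {
        for (PropertyMonitor m : parts) m.on(event);   // shared dispatch
    }

    boolean anyViolation() {
        return parts.stream().anyMatch(PropertyMonitor::violated);
    }
}
```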
Partial Evaluation and Semantics-Based Program Manipulation | 2015
Venkatesh Vinayakarao; Rahul Purandare; Aditya V. Nori
Software developers rarely write code from scratch. With the existence of Wikipedia, discussion forums, books, and blogs, it is hard to imagine a software developer not consulting these sources for sample code while building any non-trivial software system. While researchers have proposed approaches to retrieve relevant posts and code snippets, the need for finding variant implementations of functionally similar code snippets has been ignored. In this work, we propose an approach to automatically create a repository of structurally heterogeneous but functionally similar source code examples from unstructured sources. We evaluate the approach on Stack Overflow, a discussion forum with approximately 19 million posts. The results of our evaluation indicate that the approach extracts structurally different snippets with a precision of 83%. A repository of such heterogeneous source code examples will be useful to programmers learning different implementation strategies and to researchers working on problems such as program comprehension, semantic clones, and code search.
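An illustrative pair of the kind of snippets such a repository would group together: structurally different, yet functionally similar (both reverse a string). The examples are made up, not taken from Stack Overflow.

```java
// Two structurally different implementations of the same functionality.
final class ReverseExamples {
    // Variant 1: explicit character loop.
    static String reverseLoop(String s) {
        StringBuilder out = new StringBuilder();
        for (int i = s.length() - 1; i >= 0; i--) {
            out.append(s.charAt(i));
        }
        return out.toString();
    }

    // Variant 2: library call.
    static String reverseBuilder(String s) {
        return new StringBuilder(s).reverse().toString();
    }
}
```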
Intelligent Robots and Systems | 2012
Rahul Purandare; Javier Darsie; Sebastian G. Elbaum; Matthew B. Dwyer
Modern robotics systems rely on distributed event-based frameworks to facilitate the assembly of software out of collections of reusable components. These frameworks express component dependencies in data that encode event publish-subscribe relations. This loosely coupled architecture makes it difficult for developers to understand the dependencies and to predict the impacts of a change to a component as the components grow in number and complexity. Moreover, this encoding of dependencies renders traditional techniques for analyzing component dependencies inapplicable, because the dependencies are bound by communication channels rather than data. In this work, we present a program analysis technique that automatically extracts a model of component dependencies from distributed system source code. This model identifies not only the temporal dependencies among components, but also the conditions under which those dependencies are realized. We have implemented the analysis and applied it to systems developed in ROS. The resulting models are succinct and precise, which suggests that programmers will find them comprehensible, and they can be used to document important global dependencies in a system, to compare different versions to identify the impacts of component changes, and to help locate errors.
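A hypothetical sketch of the shape such an extracted model could take: which component publishes or subscribes to which topic, and under what condition a publication fires. The record names and the example instance are assumptions, not the tool's actual output format.

```java
import java.util.List;

// Hypothetical model elements, not the tool's output format.
record TopicDependency(String publisher, String topic, String subscriber, String condition) { }

record ComponentModel(String name, List<String> subscribes, List<String> publishes) { }

final class ModelExample {
    // Example instance: a driver publishes /scan, which a planner consumes,
    // but only while the driver's "enabled" flag holds.
    static final TopicDependency SCAN_DEPENDENCY =
            new TopicDependency("laser_driver", "/scan", "planner", "enabled == true");

    static final ComponentModel PLANNER =
            new ComponentModel("planner", List.of("/scan"), List.of("/cmd_vel"));
}
```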
Foundations of Software Engineering | 2015
Aritra Dhar; Rahul Purandare; Mohan Dhawan; Suresh Rangaswamy
Software is susceptible to malformed data originating from untrusted sources. Occasionally, the programming logic or constructs used are inappropriate to handle the varied constraints imposed by legal and well-formed data. Consequently, software may produce unexpected results or even crash. In this paper, we present CLOTHO, a novel hybrid approach that saves such software from crashing when failures originate from malformed strings or inappropriate handling of strings. CLOTHO statically analyses a program to identify statements that are vulnerable to failures related to associated string data. CLOTHO then generates patches that are likely to satisfy constraints on the data and, in case of failure, produce program behavior close to the expected behavior. The precision of the patches is improved with the help of a dynamic analysis. We have implemented CLOTHO for the Java String API, and our evaluation based on several popular open-source libraries shows that CLOTHO generates patches that are semantically similar to the patches written by the programmers in later versions. Additionally, these patches are activated only when a failure is detected; thus CLOTHO incurs no runtime overhead during normal execution and negligible overhead in case of failure.
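The sketch below shows, in spirit only, the kind of guarded repair the approach aims for on a string-related failure: the original statement runs unchanged, and an alternative takes effect only when the failure would occur, so normal executions pay nothing. The method and the fallback policy are illustrative assumptions; the paper's patches are synthesized from data constraints rather than hand-written try/catch blocks.

```java
// Illustrative only: a hand-written stand-in for the style of patch the
// approach targets, where the repair activates only when the failure occurs.
final class GuardedStringExample {
    static String localPartOf(String email) {
        try {
            return email.substring(0, email.indexOf('@'));   // original: fails when '@' is absent
        } catch (StringIndexOutOfBoundsException e) {
            return email;   // fallback close to expected behavior; no cost on normal runs
        }
    }
}
```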
Proceedings of the 1st International Conference on Mobile Software Engineering and Systems | 2014
Samit Anwer; Aniya Aggarwal; Rahul Purandare; Vinayak Naik
Each Android application runs in its own virtual machine, with its own Linux user account and corresponding permissions. Although this ensures that permissions are granted as per each application's requirements, each permission itself is still broad enough to allow possible exploitation. Such exploitation may result in overconsumption of the phone's resources in terms of processing, battery, and communication bandwidth. In this paper, we propose a tool, called Chiromancer, for developers and phone users to control an application's permissions at a fine granularity and to tune its resource consumption to their satisfaction. The framework is based on static code analysis and code injection. It takes in compiled code and therefore does not require access to the application's source code. As a case study, we passed publicly available applications from Google Play through Chiromancer to fine-tune their performance. We compared the energy and data consumed by these applications before and after the code injection to corroborate our claims of improved performance. We observed substantial improvements.
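As a rough, hypothetical example of the kind of behavior injected code could redirect to: a wrapper that rate-limits repeated network fetches to cut data and battery use. The class and policy are invented for illustration and are not Chiromancer's actual injected code.

```java
import java.util.function.Supplier;

// Hypothetical wrapper an injected call site could be redirected to:
// at most one real fetch per interval, otherwise serve the cached result.
final class ThrottledFetcher {
    private final long minIntervalMillis;
    private long lastFetch = 0L;
    private byte[] cached = new byte[0];

    ThrottledFetcher(long minIntervalMillis) { this.minIntervalMillis = minIntervalMillis; }

    byte[] fetch(Supplier<byte[]> realFetch) {
        long now = System.currentTimeMillis();
        if (now - lastFetch >= minIntervalMillis) {    // allow a real fetch only after the interval
            cached = realFetch.get();
            lastFetch = now;
        }
        return cached;                                 // otherwise reuse the cached response
    }
}
```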