
Publications


Featured research published by Amitabha Sanyal.


ACM Transactions on Programming Languages and Systems | 2007

Heap reference analysis using access graphs

Uday P. Khedker; Amitabha Sanyal; Amey Karkare

Despite significant progress in the theory and practice of program analysis, analyzing properties of heap data has not reached the same level of maturity as the analysis of static and stack data. The spatial and temporal structure of stack and static data is well understood, while that of heap data seems arbitrary and is unbounded. We devise bounded representations that summarize properties of the heap data. This summarization is based on the structure of the program that manipulates the heap. The resulting summary representations are certain kinds of graphs called access graphs. The boundedness of these representations and the monotonicity of the operations to manipulate them make it possible to compute them through data flow analysis. An important application that benefits from heap reference analysis is garbage collection, where currently liveness is conservatively approximated by reachability from program variables. As a consequence, current garbage collectors leave a lot of garbage uncollected, a fact that has been confirmed by several empirical studies. We propose the first ever end-to-end static analysis to distinguish live objects from reachable objects. We use this information to make dead objects unreachable by modifying the program. This application is interesting because it requires discovering data flow information representing complex semantics. In particular, we formulate the following new analyses for heap data: liveness, availability, and anticipability, and propose solution methods for them. Together, they cover various combinations of directions of analysis (i.e., forward and backward) and confluence of information (i.e., union and intersection). Our analysis can also be used for plugging memory leaks in C/C++ programs.
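
The gap between liveness and reachability that the paper exploits can be seen in a small example. The following Python sketch is invented for illustration (the paper analyzes imperative programs via access graphs): a list tail stays reachable but dead, and a reference-nullifying statement of the kind the analysis would insert lets an ordinary reachability-based collector reclaim it.

import gc

class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def head_only(lst):
    # From here on the program reads only lst.data, so every node hanging
    # off lst.next is dead even though it remains reachable from lst.
    lst.next = None      # statement of the kind inserted by the analysis
    gc.collect()         # a reachability-based GC can now reclaim the tail
    return lst.data

lst = Node(1, Node(2, Node(3)))
print(head_only(lst))    # prints 1; nodes 2 and 3 are collectible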


Software Engineering and Formal Methods | 2006

A PVS Based Framework for Validating Compiler Optimizations

Aditya Kanade; Amitabha Sanyal; Uday P. Khedker

An optimization can be specified as a sequential composition of predefined transformation primitives. For each primitive, we can define soundness conditions which guarantee that the transformation is semantics preserving. An optimization of a program preserves semantics if all applications of the primitives in the optimization satisfy their respective soundness conditions on the versions of the input program on which they are applied. This scheme does not directly check semantic equivalence of the input and the optimized programs and is therefore amenable to automation. Automating this scheme, however, requires a trusted framework for simulating transformation primitives and checking their soundness conditions. In this paper, we present the design of such a framework based on PVS. We have used it for specifying and validating several optimizations, including common subexpression elimination, optimal code placement, lazy code motion, loop-invariant code motion, and full and partial dead code elimination.
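
As a rough, invented Python analogue of the scheme (the actual framework is built in PVS and works on real intermediate representations), each primitive pairs a transformation with a soundness condition, and validation checks each condition on the program version the primitive is applied to, never comparing input and output programs directly:

from dataclasses import dataclass
from typing import Callable, List

Program = List[str]   # toy representation: a program as a list of statements

@dataclass
class Primitive:
    name: str
    apply: Callable[[Program], Program]   # the transformation itself
    sound: Callable[[Program], bool]      # condition checked on the version
                                          # the primitive is applied to

def validate(program: Program, optimization: List[Primitive]) -> Program:
    # An optimization is a sequential composition of primitives; it is
    # accepted if every primitive's condition holds when it is applied.
    for prim in optimization:
        assert prim.sound(program), f"{prim.name}: soundness condition failed"
        program = prim.apply(program)
    return program

# Toy primitive: drop the last statement, with a crude condition standing
# in for "the statement is dead code".
drop_last = Primitive(
    name="delete dead statement",
    apply=lambda p: p[:-1],
    sound=lambda p: bool(p) and p[-1].startswith("t =")
                    and not any("t" in s for s in p[:-1]),
)

print(validate(["x = 1", "y = x + 1", "t = 0"], [drop_last]))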


Electronic Notes in Theoretical Computer Science | 2007

Structuring Optimizing Transformations and Proving Them Sound

Aditya Kanade; Amitabha Sanyal; Uday P. Khedker

A compiler optimization is sound if the optimized program that it produces is semantically equivalent to the input program. The proofs of semantic equivalence are usually tedious. To reduce the effort required, we identify a set of common transformation primitives that can be composed sequentially to obtain specifications of optimizing transformations. We also identify the conditions under which the transformation primitives preserve semantics and prove their sufficiency. Consequently, proving the soundness of an optimization reduces to showing that the soundness conditions of the underlying transformation primitives are satisfied. The program analysis required for optimization is defined over the input program, whereas the soundness conditions of a transformation primitive need to be shown on the version of the program on which it is applied. We express both in a temporal logic. We also develop a logic called temporal transformation logic to correlate temporal properties over a program (seen as a Kripke structure) and its transformation. An interesting possibility created by this approach is a novel scheme for validating optimizer implementations. An optimizer can be instrumented to generate a trace of its transformations in terms of the transformation primitives. Conformance of the trace with the optimizer's output can be checked through simulation. If the soundness conditions of the underlying primitives are satisfied by the trace, then the optimization preserves semantics.
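
The validation scheme in the last few sentences might look roughly like this in Python (an invented sketch; trace entries here are plain (transform, condition) function pairs rather than temporal-logic specifications):

def check_trace(input_prog, optimized_prog, trace):
    # Simulate the optimizer's logged primitives one by one, checking each
    # soundness condition on the program version it actually ran on, then
    # confirm the simulation reproduces the optimizer's output.
    prog = input_prog
    for apply_fn, sound_fn in trace:
        if not sound_fn(prog):
            return False               # a soundness condition failed
        prog = apply_fn(prog)          # simulate the primitive
    return prog == optimized_prog      # conformance with the actual output

# Toy trace: one primitive renaming x to t, with the condition that t is
# not already in use in the version being transformed.
trace = [(lambda p: [s.replace("x", "t") for s in p],
          lambda p: all("t" not in s for s in p))]
print(check_trace(["x = 1", "y = x"], ["t = 1", "y = t"], trace))   # True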


Asian Symposium on Programming Languages and Systems | 2005

Heterogeneous fixed points with application to points-to analysis

Aditya Kanade; Uday P. Khedker; Amitabha Sanyal

Many situations can be modeled as solutions of systems of simultaneous equations. If the functions of these equations monotonically increase in all bound variables, then the existence of extremal fixed point solutions for the equations is guaranteed. Among all solutions, these fixed points uniformly take least or greatest values for all bound variables. Hence, we call them homogeneous fixed points. However, there are systems of equations whose functions monotonically increase in some variables and decrease in others. The existence of solutions of such equations cannot be guaranteed using classical fixed point theory. In this paper, we define general conditions to guarantee the existence and computability of fixed point solutions of such equations. In contrast to homogeneous fixed points, these fixed points take least values for some variables and greatest values for others. Hence, we call them heterogeneous fixed points. We illustrate heterogeneous fixed point theory through points-to analysis.
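
A minimal Python sketch of the idea, with invented equations over a finite powerset lattice: variables for which a least value is sought start at the empty set and only grow, while the others start at the full universe and only shrink, echoing the interplay of may- and must-information in points-to analysis. The paper supplies the general conditions under which such iteration is guaranteed to converge.

UNIVERSE = frozenset({1, 2, 3, 4})

def solve(equations, seek_least):
    # "least" variables start at the empty set; the rest start at the
    # full universe. Iterate round-robin until nothing changes.
    env = {v: (frozenset() if v in seek_least else UNIVERSE) for v in equations}
    changed = True
    while changed:
        changed = False
        for v, fn in equations.items():
            new = fn(env)
            if new != env[v]:
                env[v], changed = new, True
    return env

eqs = {
    # Invented equations: "may" grows as "must" shrinks and vice versa,
    # yet the iteration converges to a heterogeneous fixed point.
    "may":  lambda e: frozenset({1}) | (UNIVERSE - e["must"]),
    "must": lambda e: frozenset({2, 3, 4}) & (UNIVERSE - e["may"]),
}
print(solve(eqs, seek_least={"may"}))
# -> {'may': frozenset({1}), 'must': frozenset({2, 3, 4})}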


Compiler Construction | 2014

Liveness-Based Garbage Collection

Rahul Asati; Amitabha Sanyal; Amey Karkare; Alan Mycroft

Current garbage collectors leave much heap-allocated data uncollected because they preserve data reachable from a root set. However, only live data—a subset of reachable data—need be preserved.
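
A toy contrast between the two notions in Python (both the heap and the liveness table standing in for the paper's static analysis are invented here):

HEAP = {
    "a": {"left": "b", "right": "c"},
    "b": {"left": None, "right": None},
    "c": {"left": None, "right": None},
}
# Stand-in for the analysis result: the fields the program may still read.
LIVE_FIELDS = {"a": {"left"}, "b": set(), "c": set()}

def mark(root, follow_live_only):
    marked, stack = set(), [root]
    while stack:
        obj = stack.pop()
        if obj is None or obj in marked:
            continue
        marked.add(obj)
        for field, target in HEAP[obj].items():
            if not follow_live_only or field in LIVE_FIELDS[obj]:
                stack.append(target)
    return marked

print(mark("a", follow_live_only=False))   # {'a', 'b', 'c'}: all reachable
print(mark("a", follow_live_only=True))    # {'a', 'b'}: only live data kept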


Tools and Algorithms for the Construction and Analysis of Systems | 2018

Property Checking Array Programs Using Loop Shrinking

Shrawan Kumar; Amitabha Sanyal; R. Venkatesh; Punit Shah

Most verification tools find it difficult to prove properties of programs containing loops that process arrays of large or unknown size. These tools either fail to abstract the array at the right granularity, and are therefore limited in precision or scalability, or they attempt to synthesize an appropriate invariant that is quantified over the elements of the array, a task known to be difficult. In this paper, we present a different approach based on a notion called loop shrinkability, in which an array processing loop is transformed to a loop of much smaller bound that processes only a few non-deterministically chosen elements of the array. The result is a finite state program with a drastically reduced state space that can be analyzed by bounded model checkers. We show that the proposed transformation is an over-approximation, i.e., if the transformed program is correct, so is the original. In addition, when applicable, the method is insensitive to the size of the array, or even to whether a bound on it exists. As an assessment of usefulness, we tested a tool based on our method on the ArraysReach category of the SV-COMP 2017 benchmarks. After excluding programs with features not handled by our tool, we could successfully verify 87 of the 93 remaining programs.
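
The shape of the transformation, rendered as invented Python (in the actual method the chosen indices are symbolic values explored exhaustively by a bounded model checker, not a random sample):

import random

def original(a):
    # Loop bound grows with the array, which defeats bounded model checking.
    for i in range(len(a)):
        a[i] = i % 3
    for i in range(len(a)):
        assert a[i] <= 2        # property over every element

def shrunk(a, k=2):
    # Process only k non-deterministically chosen indices. For a shrinkable
    # loop this over-approximates the original, so proving the assertion
    # for all choices of indices proves it for the original program.
    idxs = sorted(random.sample(range(len(a)), k))   # stand-in for nondet
    for i in idxs:
        a[i] = i % 3
    for i in idxs:
        assert a[i] <= 2

shrunk([0] * 1000000)   # a checker now sees a loop of bound k, not 10^6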


International Symposium on Memory Management | 2016

Liveness-based garbage collection for lazy languages

K Prasanna Kumar; Amitabha Sanyal; Amey Karkare

We consider the problem of reducing the memory required to run lazy first-order functional programs. Our approach is to analyze programs for liveness of heap-allocated data. The result of the analysis is used to preserve only live data—a subset of reachable data—during garbage collection. The result is an increase in the garbage reclaimed and a reduction in the peak memory requirement of programs. Whereas this technique has already been shown to yield benefits for eager first-order languages, the lack of a statically determinable execution order and the presence of closures pose new challenges for lazy languages. These require changes both in the liveness analysis itself and in the design of the garbage collector. To show the effectiveness of our method, we implemented a copying collector that uses the results of the liveness analysis to preserve live objects, both evaluated values and closures. Our experiments confirm that for programs running with a liveness-based garbage collector, there is a significant decrease in peak memory requirements. In addition, a sizable reduction in the number of collections ensures that, in spite of using a more complex garbage collector, the execution times of programs running with liveness- and reachability-based collectors remain comparable.
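
A toy Python picture of why closures complicate matters (all names invented): a thunk keeps its entire captured environment reachable, but liveness can show that only part of that environment can ever be demanded, so a liveness-based collector is free to drop the rest.

class Thunk:
    # A suspended computation together with its captured environment.
    def __init__(self, fn, env):
        self.fn, self.env, self.value = fn, env, None
    def force(self):
        if self.value is None:
            self.value = self.fn(self.env)
        return self.value

env = {"x": 42, "big": list(range(10**6))}   # 'big' captured, never demanded
t = Thunk(lambda e: e["x"] + 1, env)
del env

# A liveness-based collector that knows only 'x' can be demanded need not
# preserve the dead slot; simulated here by pruning the environment.
t.env = {k: v for k, v in t.env.items() if k in {"x"}}   # liveness: {'x'}
print(t.force())   # 43; the million-element list is now collectible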


Tools and Algorithms for the Construction and Analysis of Systems | 2015

Value Slice: A New Slicing Concept for Scalable Property Checking

Shrawan Kumar; Amitabha Sanyal; Uday P. Khedker

A backward slice is a commonly used preprocessing step for scaling property checking. For large programs, though, even the reduced slice may be too large for verifiers to handle. We propose an aggressive slicing method that, apart from slicing out the same statements as a backward slice, also eliminates computations that only decide whether the point of property assertion is reachable. However, for precision, we carefully identify and retain all computations that influence the values of the variables in the property. The resulting slice, called a value slice, is smaller than a backward slice and scales better for property checking. We carried out experiments on property checking of industry-strength programs using three comparable slicing techniques: backward slice, value slice, and an even more aggressive technique called thin slice that retains only those statements on which the variables in the property are data dependent. While backward slicing gives the highest precision and thin slicing scales best, value-slice-based property checking comes close to the best in both scalability and precision. This makes the value slice a good compromise between backward and thin slicing for property checking.
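
A small invented example makes the contrast concrete. For the assertion at the end, the comments mark which statements each slice keeps (B = backward slice, V = value slice, T = thin slice):

x = int(input())     # B V T  the asserted value is data-dependent on x
c = int(input())     # B V    decides WHICH value y gets, so the value
if c > 0:            # B V    slice keeps the test and both branches
    y = x + 1        # B V T
else:
    y = x - 1        # B V T
k = int(input())     # B      only decides WHETHER the assertion is
if k > 0:            # B      reached; the value slice replaces this test
                     #        with a non-deterministic choice, and the
                     #        thin slice drops it entirely
    assert y >= 0    # B V T  the property being checked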


Archive | 2009

Data Flow Analysis: Theory and Practice

Uday P. Khedker; Amitabha Sanyal; Bageshri Karkare


arXiv: Programming Languages | 2006

Effectiveness of Garbage Collection in MIT/GNU Scheme

Amey Karkare; Amitabha Sanyal; Uday P. Khedker

Collaboration


Dive into Amitabha Sanyal's collaborations.

Top Co-Authors

Uday P. Khedker, Indian Institute of Technology Bombay
Amey Karkare, Indian Institute of Technology Kanpur
Aditya Kanade, Indian Institute of Technology Bombay
Shrawan Kumar, Tata Consultancy Services
K Prasanna Kumar, Indian Institute of Technology Bombay
Bageshri Karkare, Indian Institute of Technology Bombay
Punit Shah, Tata Consultancy Services
R. Venkatesh, Tata Consultancy Services
Rahul Asati, Indian Institute of Technology Bombay
Alan Mycroft, University of Cambridge