Publication


Featured research published by Aditya V. Nori.


foundations of software engineering | 2006

SYNERGY: a new algorithm for property checking

Bhargav S. Gulavani; Thomas A. Henzinger; Yamini Kannan; Aditya V. Nori; Sriram K. Rajamani

We consider the problem of checking whether a given program satisfies a specified safety property. Interesting programs have infinite state spaces, with inputs ranging over infinite domains, and for these programs the property checking problem is undecidable. Two broad approaches to property checking are testing and verification. Testing tries to find inputs and executions which demonstrate violations of the property. Verification tries to construct a formal proof which shows that all executions of the program satisfy the property. Testing works best when errors are easy to find, but it is often difficult to achieve sufficient coverage for correct programs. On the other hand, verification methods are most successful when proofs are easy to find, but they are often inefficient at discovering errors. We propose a new algorithm, Synergy, which combines testing and verification. Synergy unifies several ideas from the literature, including counterexample-guided model checking, directed testing, and partition refinement. This paper presents a description of the Synergy algorithm, its theoretical properties, a comparison with related algorithms, and a prototype implementation called Yogi.
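
The alternation the abstract describes can be pictured as one loop that grows a test suite and refines a state partition until either side wins. The Python skeleton below is a hedged sketch of that loop, not the Yogi implementation; the `program` and `abstraction` objects and their methods (`execute`, `abstract_error_path`, `direct_test_along`, `refine_at`) are hypothetical stand-ins.

```python
# Hedged sketch of a SYNERGY-style alternation between testing and
# verification. All objects and method names are illustrative placeholders,
# not the paper's or Yogi's actual API.

def synergy_loop(program, abstraction, violates_property, max_iters=1000):
    tests = [program.random_input()]          # seed test suite

    for _ in range(max_iters):
        # Testing side: any concrete violating run is a real bug.
        for t in tests:
            trace = program.execute(t)
            if violates_property(trace):
                return ("BUG", t)

        # Verification side: no abstract path to the error means a proof.
        path = abstraction.abstract_error_path()
        if path is None:
            return ("PROOF", abstraction)

        # Directed testing: try to drive a new concrete test along the path.
        new_test = program.direct_test_along(path, tests)
        if new_test is not None:
            tests.append(new_test)
        else:
            # Test generation got stuck at some frontier region of the path;
            # refine the partition there so this abstract path is eliminated.
            abstraction.refine_at(path.frontier(tests))

    return ("UNKNOWN", None)
```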


international conference on software engineering | 2009

HOLMES: Effective statistical debugging via efficient path profiling

Trishul M. Chilimbi; Ben Liblit; Krishna Kumar Mehra; Aditya V. Nori; Kapil Vaswani

Statistical debugging aims to automate the process of isolating bugs by profiling several runs of the program and using statistical analysis to pinpoint the likely causes of failure. In this paper, we investigate the impact of using richer program profiles such as path profiles on the effectiveness of bug isolation. We describe a statistical debugging tool called HOLMES that isolates bugs by finding paths that correlate with failure. We also present an adaptive version of HOLMES that uses iterative, bug-directed profiling to lower execution time and space overheads. We evaluate HOLMES using programs from the SIR benchmark suite and some large, real-world applications. Our results indicate that path profiles can help isolate bugs more precisely by providing more information about the context in which bugs occur. Moreover, bug-directed profiling can efficiently isolate bugs with low overheads, providing a scalable and accurate alternative to sparse random sampling.
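
As a rough, self-contained illustration of path-based statistical bug isolation, the sketch below ranks each path by how much executing it raises the probability of failure over the baseline failure rate. The scoring formula and the toy profiles are illustrative assumptions, not HOLMES's exact metric or profiling infrastructure.

```python
# Self-contained toy of path-based statistical bug isolation: rank paths by
# how much executing them raises the probability of failure over the baseline
# failure rate across all runs.

from collections import defaultdict

def score_paths(runs):
    """runs: list of (paths_executed: set[str], failed: bool) pairs."""
    fail_count = defaultdict(int)   # failing runs that executed the path
    pass_count = defaultdict(int)   # passing runs that executed the path
    total_fail = sum(1 for _, failed in runs if failed)
    baseline = total_fail / len(runs)

    for paths, failed in runs:
        for p in paths:
            (fail_count if failed else pass_count)[p] += 1

    scores = {}
    for p in set(fail_count) | set(pass_count):
        f, s = fail_count[p], pass_count[p]
        # P(fail | path executed) minus the baseline failure rate.
        scores[p] = f / (f + s) - baseline
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy profiles: the path "b->d" appears only in failing runs, so it ranks first.
runs = [
    ({"a->b", "b->c"}, False),
    ({"a->b", "b->d"}, True),
    ({"a->b", "b->c"}, False),
    ({"a->b", "b->d"}, True),
]
print(score_paths(runs)[:2])   # [('b->d', 0.5), ('a->b', 0.0)]
```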


symposium on principles of programming languages | 2010

Compositional may-must program analysis: unleashing the power of alternation

Patrice Godefroid; Aditya V. Nori; Sriram K. Rajamani; Sai Deep Tetali

Program analysis tools typically compute two types of information: (1) may information that is true of all program executions and is used to prove the absence of bugs in the program, and (2) must information that is true of some program executions and is used to prove the existence of bugs in the program. In this paper, we propose a new algorithm, dubbed SMASH, which computes both may and must information compositionally. At each procedure boundary, may and must information is represented and stored as may and must summaries, respectively. These summaries are computed in a demand-driven manner, possibly using summaries of the opposite type. We have implemented SMASH using predicate abstraction (as in SLAM) for the may part and using dynamic test generation (as in DART) for the must part. Results of experiments with 69 Microsoft Windows 7 device drivers show that SMASH can significantly outperform may-only, must-only and non-compositional may-must algorithms. Indeed, our empirical results indicate that most complex code fragments in large programs are often either easy to prove irrelevant to the specific property of interest using may analysis or easy to traverse using directed testing. The fine-grained coupling and alternation of may (universal) and must (existential) summaries allows SMASH to navigate through these code fragments easily, whereas traditional may-only, must-only or non-compositional may-must algorithms get stuck in their specific analyses.
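
One way to picture the compositional may/must machinery is as a memoizing engine over queries of the form "can some state in A reach some state in B inside procedure f?". The sketch below is a heavily simplified, hypothetical rendering: the two oracle callables stand in for the predicate-abstraction ("may") and directed-testing ("must") engines, state sets are assumed to be hashable descriptions, and the real algorithm decomposes queries along callees rather than giving up.

```python
# Hypothetical sketch of SMASH-style may/must summaries. `prove_unreachable`
# and `find_witness` stand in for the predicate-abstraction ("may") and
# directed-testing ("must") engines.

from enum import Enum

class Answer(Enum):
    NO_PATH = "may summary: no execution of proc goes from A to B"
    WITNESS = "must summary: some execution of proc goes from A to B"

def make_query_engine(prove_unreachable, find_witness):
    summaries = {}   # (proc, A, B) -> Answer, reused across callers

    def query(proc, A, B):
        key = (proc, A, B)
        if key not in summaries:
            if find_witness(proc, A, B):          # "must" side first
                summaries[key] = Answer.WITNESS
            elif prove_unreachable(proc, A, B):   # then the "may" side
                summaries[key] = Answer.NO_PATH
            else:
                # The real algorithm would decompose the query along callees
                # and alternate further; this toy simply reports "don't know".
                summaries[key] = None
        return summaries[key]

    return query

# Stub oracles, purely for illustration.
query = make_query_engine(
    prove_unreachable=lambda proc, A, B: True,
    find_witness=lambda proc, A, B: False,
)
print(query("f", "x == 0", "x < 0"))   # Answer.NO_PATH
```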


IEEE Software | 2008

Automating Software Testing Using Program Analysis

Patrice Godefroid; P. de Halleux; Aditya V. Nori; Sriram K. Rajamani; Wolfram Schulte; Nikolai Tillmann; M.Y. Levin

During the last 10 years, code inspection for standard programming errors has largely been automated with static code analysis. During the next 10 years, we expect to see similar progress in automating testing, and specifically test generation, thanks to advances in program analysis, efficient constraint solvers, and powerful computers. Three new tools from Microsoft combine techniques from static program analysis, dynamic analysis, model checking, and automated constraint solving while targeting different application domains.


international symposium on software testing and analysis | 2008

Proofs from tests

Nels E. Beckman; Aditya V. Nori; Sriram K. Rajamani; Robert J. Simmons

We present an algorithm, DASH, to check whether a program P satisfies a safety property φ. The unique feature of this algorithm is that it uses only test generation operations, and it refines and maintains a sound program abstraction as a consequence of failed test generation operations. Thus, each iteration of the algorithm is inexpensive and can be implemented without any global may-alias information. In particular, we introduce a new refinement operator WPα that uses only the alias information obtained by symbolically executing a test to refine abstractions in a sound manner. We present a full exposition of the DASH algorithm and its theoretical properties. We have implemented DASH in a tool called YOGI that plugs into Microsoft's Static Driver Verifier framework. We have used this framework to run YOGI on 69 Windows Vista drivers with 85 properties and find that YOGI scales much better than SLAM, the current engine driving Microsoft's Static Driver Verifier.


programming language design and implementation | 2009

Merlin: specification inference for explicit information flow problems

V. Benjamin Livshits; Aditya V. Nori; Sriram K. Rajamani; Anindya Banerjee

The last several years have seen a proliferation of static and runtime analysis tools for finding security violations that are caused by explicit information flow in programs. Much of this interest has been caused by the increase in the number of vulnerabilities such as cross-site scripting and SQL injection. In fact, these explicit information flow vulnerabilities commonly found in Web applications now outnumber vulnerabilities such as buffer overruns common in type-unsafe languages such as C and C++. Tools checking for these vulnerabilities require a specification to operate. In most cases the task of providing such a specification is delegated to the user. Moreover, the efficacy of these tools is only as good as the specification. Unfortunately, writing a comprehensive specification presents a major challenge: parts of the specification are easy to miss, leading to missed vulnerabilities; similarly, incorrect specifications may lead to false positives. This paper proposes Merlin, a new approach for automatically inferring explicit information flow specifications from program code. Such specifications greatly reduce manual labor and enhance the quality of results when using tools that check for security violations caused by explicit information flow. Beginning with a data propagation graph, which represents interprocedural flow of information in the program, Merlin aims to automatically infer an information flow specification. Merlin models information flow paths in the propagation graph using probabilistic constraints. A naive modeling requires an exponential number of constraints, one per path in the propagation graph. For scalability, we approximate these path constraints using constraints on chosen triples of nodes, resulting in a cubic number of constraints. We characterize this approximation as a probabilistic abstraction, using the theory of probabilistic refinement developed by McIver and Morgan. We solve the resulting system of probabilistic constraints using factor graphs, which are a well-known structure for performing probabilistic inference. We experimentally validate the Merlin approach by applying it to 10 large business-critical Web applications that have been analyzed with CAT.NET, a state-of-the-art static analysis tool for .NET. We find a total of 167 new confirmed specifications, which result in a total of 322 additional vulnerabilities across the 10 benchmarks. More accurate specifications also reduce the false positive rate: in our experiments, Merlin-inferred specifications result in 13 false positives being removed; this constitutes a 15% reduction in the CAT.NET false positive rate on these 10 programs. The final false positive rate for CAT.NET after applying Merlin in our experiments drops to under 1%.
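
The cubic approximation is the part easiest to show in miniature: instead of one constraint per source-to-sink path, Merlin works with triples of nodes that lie on a common path. The sketch below only enumerates such triples on a toy propagation graph; the graph, the names, and the informal reading of a triple are illustrative assumptions, and the actual probabilistic constraints and factor-graph solving are not modeled here.

```python
# Toy sketch of the "triple" approximation: emit one constraint per ordered
# triple of distinct nodes (n1, n2, n3) that lie on a common path in the
# propagation graph, rather than one constraint per path.

from itertools import product

def reachable(graph, a, b, seen=None):
    """Simple DFS reachability in a dict-of-sets propagation graph."""
    if a == b:
        return True
    seen = seen or set()
    seen.add(a)
    return any(reachable(graph, n, b, seen)
               for n in graph.get(a, ()) if n not in seen)

def triple_constraints(graph):
    nodes = list(graph)
    constraints = []
    for n1, n2, n3 in product(nodes, repeat=3):
        if len({n1, n2, n3}) == 3 and reachable(graph, n1, n2) and reachable(graph, n2, n3):
            # Informal reading: "if n1 is a source and n3 is a sink, then n2 is
            # probably a sanitizer" (otherwise the path n1 -> n2 -> n3 is suspect).
            constraints.append((n1, n2, n3))
    return constraints

g = {"ReadInput": {"Normalize"}, "Normalize": {"WriteHtml"}, "WriteHtml": set()}
print(triple_constraints(g))   # [('ReadInput', 'Normalize', 'WriteHtml')]
```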


tools and algorithms for construction and analysis of systems | 2008

Automatically refining abstract interpretations

Bhargav S. Gulavani; Supratik Chakraborty; Aditya V. Nori; Sriram K. Rajamani

Abstract interpretation techniques prove properties of programs by computing abstract fixpoints. All such analyses suffer from the possibility of false errors. We present three techniques to automatically refine such abstract interpretations to reduce false errors: (1) a new operator called interpolated widen, which automatically recovers precision lost due to widen, (2) a new way to handle disjunctions that arise due to refinement, and (3) a new refinement algorithm, which refines abstract interpretations that use the join operator to merge abstract states at join points. We have implemented our techniques in a tool called Dagger. Our experimental results show that our techniques are effective and that their combination is even more effective than any one of them in isolation. We also show that Dagger is able to prove properties of C programs that are beyond current abstraction-refinement tools, such as Slam [4], Blast [15], Armc [19], and our earlier tool [12].
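
The interpolated widen idea can be illustrated on the interval domain: where a plain widening would jump an upper bound to infinity, one can instead stop at a bound that still separates the current abstract value from the error states, i.e. at an interpolant. The toy below is only a sketch under that reading, with the error region passed in as a hard-coded threshold; Dagger works over general abstract domains and derives the separating predicate from the property and the abstract counterexample.

```python
# Toy interval-domain sketch of "interpolated widen": widen as usual, except
# that an exploding upper bound is capped at an interpolant separating the
# current iterate from the error states {x >= bad_lo}. This illustrates the
# idea only; it is not Dagger's actual operator.

def interpolated_widen(prev, curr, bad_lo):
    """prev, curr: (lo, hi) intervals from successive fixpoint iterates."""
    lo = prev[0] if prev[0] <= curr[0] else float("-inf")
    if curr[1] <= prev[1]:
        hi = prev[1]            # upper bound is stable, no widening needed
    elif curr[1] < bad_lo:
        hi = bad_lo - 1         # interpolant: contains curr, excludes the error
    else:
        hi = float("inf")       # cannot separate any more, widen to top
    return (lo, hi)

# Growing loop counter with error states {x >= 60}: a plain widen would jump
# to (0, inf) and risk a false alarm; the interpolated widen keeps (0, 59).
print(interpolated_widen((0, 0), (0, 1), bad_lo=60))   # (0, 59)
```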


tools and algorithms for construction and analysis of systems | 2009

The Yogi Project: Software Property Checking via Static Analysis and Testing

Aditya V. Nori; Sriram K. Rajamani; Sai Deep Tetali; Aditya V. Thakur

We present Yogi, a tool that checks properties of C programs by combining static analysis and testing. Yogi implements the Dash algorithm, which performs verification by combining directed testing and abstraction. We have engineered Yogi in such a way that it plugs into Microsoft's Static Driver Verifier framework. We have used this framework to run Yogi on 69 Windows Vista drivers with 85 properties. We find that the new algorithm enables Yogi to scale much better than Slam, which is the current engine driving Microsoft's Static Driver Verifier.


computer aided verification | 2012

Interpolants as classifiers

Rahul Sharma; Aditya V. Nori; Alex Aiken

We show how interpolants can be viewed as classifiers in supervised machine learning. This view has several advantages: First, we are able to use off-the-shelf classification techniques, in particular support vector machines (SVMs), for interpolation. Second, we show that SVMs can find relevant predicates for a number of benchmarks. Since classification algorithms are predictive, the interpolants computed via classification are likely to be invariants. Finally, the machine learning view also enables us to handle superficial non-linearities. Even if the underlying problem structure is linear, the symbolic constraints can give an impression that we are solving a non-linear problem. Since learning algorithms try to mine the underlying structure directly, we can discover the linear structure for such problems. We demonstrate the feasibility of our approach via experiments over benchmarks from various papers on program verification.
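
A tiny end-to-end illustration of that view: label samples of "good" states (reachable from the initial states) +1 and samples of "bad" states (able to reach the error) -1, then ask a linear SVM for a separating half-space and read the hyperplane off as a candidate interpolant. The program, samples, and constant below are made up, and the example assumes numpy and scikit-learn are available; a real use would still have to check that the candidate actually separates the underlying constraints, for example with a solver.

```python
# Interpolation as binary classification: a linear SVM's separating hyperplane
# over sampled states serves as a candidate interpolant/invariant.

import numpy as np
from sklearn.svm import SVC

# Toy 2-D state space (x, y). "Good" samples satisfy x <= y; "bad" samples
# (states that can reach the error) satisfy x > y + 1.
good = np.array([[0, 1], [1, 3], [2, 2], [3, 5]])
bad = np.array([[3, 1], [5, 2], [4, 0], [6, 3]])

X = np.vstack([good, bad])
y = np.array([1] * len(good) + [-1] * len(bad))

clf = SVC(kernel="linear", C=1e6)   # large C approximates hard-margin separation
clf.fit(X, y)

w = clf.coef_[0]
b = clf.intercept_[0]
# Candidate interpolant: the half-space w.s + b >= 0, which contains the good
# samples and excludes the bad ones.
print(f"candidate interpolant: {w[0]:.2f}*x + {w[1]:.2f}*y + {b:.2f} >= 0")
```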


symposium on principles of programming languages | 2007

Preferential path profiling: compactly numbering interesting paths

Kapil Vaswani; Aditya V. Nori; Trishul M. Chilimbi

Path profiles provide a more accurate characterization of a program's dynamic behavior than basic block or edge profiles, but are relatively more expensive to collect. This has limited their use in practice despite demonstrations of their advantages over edge profiles for a wide variety of applications. We present a new algorithm called preferential path profiling (PPP) that reduces the overhead of path profiling. PPP leverages the observation that most consumers of path profiles are only interested in a subset of all program paths. PPP achieves low overhead by separating interesting paths from other paths and assigning a set of unique and compact numbers to these interesting paths. We draw a parallel between arithmetic coding and path numbering, and use this connection to prove an optimality result for the compactness of the path numbering produced by PPP. This compact path numbering enables our PPP implementation to record path information in an array instead of a hash table. Our experimental results indicate that PPP reduces the runtime overhead of profiling paths exercised by the largest (ref) inputs of the SPEC CPU2000 benchmarks from 50% on average (maximum of 132%) to 15% on average (maximum of 26%) as compared to a state-of-the-art path profiler.
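
The payoff described above, an array instead of a hash table, comes from giving the interesting paths a dense numbering. The sketch below captures only that storage idea on whole-path strings; the actual PPP derives the compact numbers from per-edge increments in the control-flow graph, in the spirit of Ball-Larus path numbering, so the class and names here are purely illustrative.

```python
# Toy sketch of preferential path profiling's storage idea: interesting paths
# get a dense 0..k-1 numbering so their counts live in a flat array, while
# everything else falls back to a hash table.

class PreferentialProfiler:
    def __init__(self, interesting_paths):
        # Dense, compact numbering of the paths we actually care about.
        self.index = {path: i for i, path in enumerate(interesting_paths)}
        self.counts = [0] * len(interesting_paths)   # array, not a hash table
        self.other = {}                              # cold paths, rarely touched

    def record(self, path):
        i = self.index.get(path)
        if i is not None:
            self.counts[i] += 1                      # fast, cache-friendly case
        else:
            self.other[path] = self.other.get(path, 0) + 1

profiler = PreferentialProfiler(["a-b-d", "a-c-d"])
for p in ["a-b-d", "a-b-d", "a-c-e", "a-c-d"]:
    profiler.record(p)
print(profiler.counts, profiler.other)   # [2, 1] {'a-c-e': 1}
```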

Collaboration


Dive into Aditya V. Nori's collaborations.

Top Co-Authors

Priti Shankar
Indian Institute of Science

Mayur Naik
Georgia Institute of Technology

Ravi Mangal
Georgia Institute of Technology

Xin Zhang
Georgia Institute of Technology