Publications


Featured research published by Amit M. Paradkar.


international symposium on software testing and analysis | 2008

Finding bugs in dynamic web applications

Shay Artzi; Adam Kiezun; Julian Dolby; Frank Tip; Danny Dig; Amit M. Paradkar; Michael D. Ernst

Web script crashes and malformed dynamically-generated Web pages are common errors, and they seriously impact the usability of Web applications. Current tools for Web-page validation cannot handle the dynamically-generated pages that are ubiquitous on today's Internet. In this work, we apply a dynamic test generation technique, based on combined concrete and symbolic execution, to the domain of dynamic Web applications. The technique generates tests automatically, uses the tests to detect failures, and minimizes the conditions on the inputs exposing each failure, so that the resulting bug reports are small and useful in finding and fixing the underlying faults. Our tool Apollo implements the technique for PHP. Apollo generates test inputs for the Web application, monitors the application for crashes, and validates that the output conforms to the HTML specification. This paper presents Apollo's algorithms and implementation, and an experimental evaluation that revealed 214 faults in 4 PHP Web applications.
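
To make the combined concrete and symbolic execution more concrete, here is a minimal sketch of such a test generation loop. It is not Apollo's implementation: execute_symbolic and solve are hypothetical stand-ins for an instrumented PHP interpreter that records path constraints and for a constraint solver, and the result and conjunct interfaces (crashed, html_is_valid, path_constraint, negated) are invented for illustration.

```python
# Minimal sketch of a concolic test-generation loop (not Apollo's actual code).
# execute_symbolic(inputs) stands in for running the application on concrete
# inputs while recording the symbolic path constraint; solve(constraint) stands
# in for a constraint solver returning inputs that satisfy the constraint.

def generate_tests(initial_input, execute_symbolic, solve, max_iterations=100):
    worklist, failures = [initial_input], []
    for _ in range(max_iterations):
        if not worklist:
            break
        inputs = worklist.pop()
        result = execute_symbolic(inputs)         # run app, record path constraint
        if result.crashed or not result.html_is_valid:
            failures.append((inputs, result))     # crash or malformed HTML output
        # Systematically negate prefixes of the path constraint to steer
        # execution down branches not yet covered, and solve for new inputs.
        for i, conjunct in enumerate(result.path_constraint):
            flipped = result.path_constraint[:i] + [conjunct.negated()]
            new_inputs = solve(flipped)
            if new_inputs is not None:
                worklist.append(new_inputs)
    return failures
```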


IEEE Transactions on Software Engineering | 2010

Finding Bugs in Web Applications Using Dynamic Test Generation and Explicit-State Model Checking

Shay Artzi; Adam Kiezun; Julian Dolby; Frank Tip; Danny Dig; Amit M. Paradkar; Michael D. Ernst

Web script crashes and malformed dynamically generated webpages are common errors, and they seriously impact the usability of Web applications. Current tools for webpage validation cannot handle the dynamically generated pages that are ubiquitous on today's Internet. We present a dynamic test generation technique for the domain of dynamic Web applications. The technique utilizes both combined concrete and symbolic execution and explicit-state model checking. The technique generates tests automatically, runs the tests capturing logical constraints on inputs, and minimizes the conditions on the inputs to failing tests so that the resulting bug reports are small and useful in finding and fixing the underlying faults. Our tool Apollo implements the technique for the PHP programming language. Apollo generates test inputs for a Web application, monitors the application for crashes, and validates that the output conforms to the HTML specification. This paper presents Apollo's algorithms and implementation, and an experimental evaluation that revealed 673 faults in six PHP Web applications.
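
Both variants of the technique minimize the conditions on inputs that expose a failure. A minimal sketch of that idea, assuming a hypothetical reproduces_failure oracle that re-solves the reduced constraint and re-runs the application, could look like the following one-at-a-time reduction; this illustrates the general idea, not the paper's exact minimization algorithm.

```python
# Sketch of one-at-a-time minimization of a failure-inducing constraint set.
# reproduces_failure(conjuncts) is a hypothetical oracle that solves the
# reduced constraint and re-executes the application to check the failure.

def minimize_constraints(conjuncts, reproduces_failure):
    minimized = list(conjuncts)
    changed = True
    while changed:
        changed = False
        for c in list(minimized):
            candidate = [x for x in minimized if x is not c]
            if reproduces_failure(candidate):   # failure persists without this conjunct
                minimized = candidate           # so the conjunct is not needed
                changed = True
    return minimized
```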


international conference on software engineering | 2012

Inferring method specifications from natural language API descriptions

Rahul Pandita; Xusheng Xiao; Hao Zhong; Tao Xie; Stephen Oney; Amit M. Paradkar

Application Programming Interface (API) documents are a typical way of describing legal usage of reusable software libraries, thus facilitating software reuse. However, even with such documents, developers often overlook some documents and build software systems that are inconsistent with the legal usage of those libraries. Existing software verification tools require formal specifications (such as code contracts), and therefore cannot directly verify the legal usage described in natural language text in API documents against code using that library. However, in practice, most libraries do not come with formal specifications, thus hindering tool-based verification. To address this issue, we propose a novel approach to infer formal specifications from natural language text of API documents. Our evaluation results show that our approach achieves an average of 92% precision and 93% recall in identifying sentences that describe code contracts from more than 2500 sentences of API documents. Furthermore, our results show that our approach has an average 83% accuracy in inferring specifications from over 1600 sentences describing code contracts.
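
As a rough illustration of the kind of inference involved, the toy sketch below maps a single natural-language API sentence to a contract-like record using one regular-expression pattern. The pattern and the output structure are invented for illustration; the paper's actual approach analyzes sentences linguistically rather than with a fixed regex.

```python
import re

# Toy sketch: map an API sentence to a contract-like specification.
# Both the pattern and the returned dictionary are illustrative inventions.

CONTRACT_PATTERN = re.compile(
    r"throws\s+(?P<exception>\w+)\s+if\s+(?P<subject>[\w\s]+?)\s+is\s+(?P<state>null|empty|negative)",
    re.IGNORECASE,
)

def infer_contract(sentence):
    match = CONTRACT_PATTERN.search(sentence)
    if match is None:
        return None   # sentence does not describe a recognizable contract
    return {
        "precondition": f"{match.group('subject').strip()} is not {match.group('state')}",
        "exception": match.group("exception"),
    }

print(infer_contract("Throws NullPointerException if the specified key is null."))
# {'precondition': 'the specified key is not null', 'exception': 'NullPointerException'}
```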


foundations of software engineering | 2012

Automated extraction of security policies from natural-language software documents

Xusheng Xiao; Amit M. Paradkar; Suresh Thummalapenta; Tao Xie

Access Control Policies (ACPs) specify which principals, such as users, have access to which resources. Ensuring the correctness and consistency of ACPs is crucial to prevent security vulnerabilities. However, in practice, ACPs are commonly written in Natural Language (NL) and buried in large documents such as requirements documents, which are not amenable to automated techniques for checking correctness and consistency. It is tedious to manually extract ACPs from these NL documents and to validate NL functional requirements, such as use cases, against ACPs to detect inconsistencies. To address these issues, we propose an approach, called Text2Policy, to automatically extract ACPs from NL software documents and resource-access information from NL scenario-based functional requirements. We conducted three evaluations on ACP sentences collected from publicly available sources, along with use cases from both open source and proprietary projects. The results show that Text2Policy effectively identifies ACP sentences with a precision of 88.7% and a recall of 89.4%, extracts ACP rules with an accuracy of 86.3%, and extracts action steps with an accuracy of 81.9%.
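
The extraction step can be pictured with a deliberately simplified sketch: one pattern that pulls a subject, action, and resource out of a policy-like sentence. Text2Policy itself relies on linguistic, semantic-pattern analysis rather than a single regex; the pattern, actions, and example sentence below are illustrative assumptions.

```python
import re

# Toy sketch of extracting an access-control rule (subject, action, resource)
# from a policy-like sentence. The pattern is a simplified stand-in for the
# semantic patterns an NL-based extractor would use.

ACP_PATTERN = re.compile(
    r"(?:Only\s+)?(?P<subject>[\w\s]+?)\s+(?:can|may|is allowed to)\s+"
    r"(?P<action>view|edit|update|delete|create)\s+(?P<resource>[\w\s]+)",
    re.IGNORECASE,
)

def extract_acp(sentence):
    match = ACP_PATTERN.search(sentence)
    if match is None:
        return None
    return {
        "subject": match.group("subject").strip(),
        "action": match.group("action").lower(),
        "resource": match.group("resource").strip(),
        "effect": "allow",
    }

print(extract_acp("Only an administrator can delete user accounts."))
# {'subject': 'an administrator', 'action': 'delete', 'resource': 'user accounts', 'effect': 'allow'}
```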


international conference on software testing, verification, and validation | 2010

Text2Test: Automated Inspection of Natural Language Use Cases

Avik Sinha; Stanley M. Sutton; Amit M. Paradkar

The modularity and customer-centric approach of use cases make them a preferred method for requirements elicitation, especially in iterative software development processes such as agile programming. Numerous guidelines exist for use case style and content, but enforcing compliance with such guidelines in industry currently requires specialized training and a strongly managed requirements elicitation process. However, often due to aggressive development schedules, organizations shy away from such extensive processes and end up capturing use cases in an ad hoc fashion with little guidance. This results in poor-quality use cases that are seldom fit for any downstream software activities. We have developed an approach for automated, “edit-time” inspection of use cases based on the construction and analysis of models of use cases. Our models contain linguistic properties of the use case text along with the functional properties of the system under discussion. In this paper, we present a suite of model analysis techniques that leverage such models to validate use cases simultaneously for their style and content. Such model analysis techniques can be combined with robust NLP techniques to develop integrated development environments for use case authoring, as we do in Text2Test. When used in an industrial setting, Text2Test resulted in better compliance of use cases and enhanced productivity.
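
As an illustration of what an edit-time inspection rule might look like, here is a toy check that flags use case steps that do not begin with a known actor or that appear to be written in passive voice. The actor list and the passive-voice heuristic are invented for this sketch and are not Text2Test's actual model analyses.

```python
# Toy sketch of an "edit-time" style check on use case steps.
# KNOWN_ACTORS and the passive-voice heuristic are illustrative assumptions.

KNOWN_ACTORS = {"user", "system", "administrator"}

def check_step(step_text):
    issues = []
    words = step_text.lower().strip(" .").split()
    if not words or words[0] not in KNOWN_ACTORS:
        issues.append("step should start with a known actor")
    if "is" in words and any(w.endswith("ed") for w in words):
        issues.append("step may be written in passive voice")
    return issues

for step in ["User enters the account number.", "The record is updated."]:
    print(step, "->", check_step(step))
```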


international symposium on software reliability engineering | 1995

Test generation for Boolean expressions

Amit M. Paradkar; Kuo-Chung Tai

We propose a new strategy for generating test cases for Boolean expressions. In the past, we reported the BOR (Boolean Operator) strategy for generating test cases for predicates that are singular, i.e., contain only one occurrence of each constituent Boolean variable. We also reported results of empirical studies carried out to evaluate the effectiveness of the strategy, but the BOR algorithm did not work well with non-singular expressions, i.e., those with multiple occurrences of constituent Boolean variables. The solution we propose is a combination of the original BOR strategy and the MI (Meaningful Impact) strategy reported elsewhere. Our approach is to divide a Boolean expression into components that do not have common variables, apply the MI strategy to non-singular components and the BOR strategy to singular components, and then apply the BOR strategy to combine the test sets generated for all components. Our empirical results indicate that our hybrid approach produces fewer tests for a Boolean expression than the MI strategy. The fault detection capability of our proposed approach has also been found to be comparable to that of the MI strategy. Our test generation strategy can be used to improve the reliability and safety of a program.
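
The criterion behind such strategies can be illustrated by checking which operator-fault mutants of an expression a candidate test set distinguishes from the original. The expression, mutants, and small test set below are illustrative; this sketch shows the fault-detection criterion being targeted, not the BOR or MI generation algorithms themselves.

```python
from itertools import product

# Sketch of the fault-detection criterion: a test set "kills" an operator-fault
# mutant if some test makes the mutant's output differ from the original
# expression. The expression and mutants are illustrative examples.

original = lambda a, b, c: (a and b) or c
mutants = {
    "and -> or": lambda a, b, c: (a or b) or c,      # Boolean operator fault
    "or -> and": lambda a, b, c: (a and b) and c,
    "negate a":  lambda a, b, c: ((not a) and b) or c,
}

def kills(test_set, mutant):
    return any(original(*t) != mutant(*t) for t in test_set)

exhaustive = list(product([False, True], repeat=3))            # all 8 tests
small_set = [(True, True, False), (True, False, False), (False, True, False)]

for name, m in mutants.items():
    print(name, "| killed by small set:", kills(small_set, m),
          "| killed exhaustively:", kills(exhaustive, m))
```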


dependable systems and networks | 2009

A linguistic analysis engine for natural language use case description and its application to dependability analysis in industrial use cases

Avik Sinha; Amit M. Paradkar; Palani Kumanan; Branimir Boguraev

We present 1) a novel linguistic engine composed of configurable linguistic components for understanding natural language use case specifications, and 2) results of a first-of-its-kind large-scale experiment applying linguistic techniques to industrial use cases. Requirement defects are well known to have adverse effects on the dependability of software systems. While formal techniques are often cited as a remedy for specification errors, natural language remains the predominant mode for specifying requirements. Therefore, for dependable system development, a natural language processing technique is required that can translate natural language textual requirements into validation-ready computer models. In this paper, we present the implementation details of such a technique and the results of applying a prototype implementation of our technique to 80 industrial and academic use case descriptions. We report on the accuracy and effectiveness of our technique. The results of our experiment are very encouraging.


ACM SIGSOFT Software Engineering Notes | 2005

A software flaw taxonomy: aiming tools at security

Sam Weber; Paul A. Karger; Amit M. Paradkar

Although proposals were made three decades ago to build static analysis tools to either assist software security evaluations or find security flaws, it is only recently that static analysis and model checking technology has reached the point where such tooling has become feasible. In order to target their technology on a rational basis, it would be useful for tool builders to have available a taxonomy of software security flaws organizing the problem space. Unfortunately, the only existing suitable taxonomies are sadly out of date and do not adequately represent security flaws found in modern software. In our work, we have coalesced previous efforts to categorize security problems, as well as incident reports, in order to create a security flaw taxonomy. We correlate this taxonomy with available information about current high-priority security threats and make observations regarding the results. We suggest that this taxonomy is suitable for guiding tool developers and for outlining possible areas of future research.


international symposium on software testing and analysis | 2006

Model-based functional conformance testing of web services operating on persistent data

Avik Sinha; Amit M. Paradkar

We propose a model-based approach to functional conformance test generation for web services that operate in the presence of persistent data. Typically, web services are described in a standard notation called the Web Services Description Language (WSDL). Unfortunately, the WSDL standard does not allow behavioral specification (such as pre- and postconditions) of web services in the presence of persistent data. New standards that remedy this situation, such as WSDL-S, are being proposed. In this paper, we propose the use of existing test generation techniques based on Extended Finite State Machine (EFSM) specifications to address the generation of functional conformance tests for web services that operate on persistent data. The novel contribution of this paper is an algorithm that translates a WSDL-S behavioral specification of the operations of a web service into an equivalent EFSM representation, which can be exploited to generate an effective set of test cases.
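
A minimal sketch of the target representation is shown below: a toy EFSM whose transitions carry an operation name, a guard derived from a precondition, and an update derived from a postcondition over persistent state. The operation, state variables, and predicates are invented for illustration and are not WSDL-S syntax or the paper's translation algorithm.

```python
from dataclasses import dataclass, field

# Toy EFSM built from operation pre/postconditions, in the spirit of translating
# a behavioral web-service specification into a testable model. All names are
# illustrative assumptions.

@dataclass
class Transition:
    operation: str
    guard: callable          # precondition over persistent state and inputs
    update: callable         # postcondition applied as a state update

@dataclass
class EFSM:
    state: dict
    transitions: list = field(default_factory=list)

    def fire(self, operation, **inputs):
        for t in self.transitions:
            if t.operation == operation and t.guard(self.state, inputs):
                t.update(self.state, inputs)
                return True
        return False          # no enabled transition: the call is rejected

# Example: a "createAccount" operation guarded by "account does not already exist".
efsm = EFSM(state={"accounts": set()})
efsm.transitions.append(Transition(
    operation="createAccount",
    guard=lambda s, i: i["account_id"] not in s["accounts"],
    update=lambda s, i: s["accounts"].add(i["account_id"]),
))

print(efsm.fire("createAccount", account_id="A-1"))   # True: precondition holds
print(efsm.fire("createAccount", account_id="A-1"))   # False: duplicate rejected
```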


international symposium on software reliability engineering | 1996

Automatic test generation for predicates

Amit M. Paradkar; Kuo-Chung Tai; Mladen A. Vouk

We propose a new technique for automatic generation of test cases for predicates. Earlier we proposed an efficient and effective test generation strategy for Boolean expressions. We now extend this strategy to predicates. Our new strategy addresses a number of issues, including: analysis of dependencies between relational expressions in a predicate P; generation of test constraints for P based on the detection of Boolean and relational operator faults in P; and generation of actual tests according to the generated test constraints for P. We propose the use of constraint logic programming (CLP) to automate test data generation for a predicate. Furthermore, we propose an incremental approach to apply CLP techniques to solve a constraint system. Since our technique is specification-based, it can facilitate generation of expected outputs for actual tests.
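
As a rough illustration of constraint-based test data generation for a predicate, the sketch below encodes a few fault-targeting test constraints for (x > y) and (z == 0) and satisfies each one by exhaustive search over a small integer domain. The exhaustive search is a stand-in for the constraint logic programming solver the paper proposes, and the predicate and constraints are invented for illustration.

```python
from itertools import product

# Sketch of constraint-based test generation for the predicate (x > y) and (z == 0).
# Each test constraint pins the relational expressions to values that expose a
# particular operator-fault class; a tiny exhaustive search over a small integer
# domain stands in for a real constraint solver.

predicate = lambda x, y, z: (x > y) and (z == 0)

test_constraints = [
    lambda x, y, z: x == y and z == 0,        # boundary point: distinguishes ">" from ">="
    lambda x, y, z: x == y + 1 and z == 0,    # point just above the boundary
    lambda x, y, z: x > y and z != 0,         # distinguishes "z == 0" from "z != 0"
]

def solve(constraint, domain=range(-2, 3)):
    for x, y, z in product(domain, repeat=3):
        if constraint(x, y, z):
            return (x, y, z)
    return None

for c in test_constraints:
    test = solve(c)
    print("test:", test, "-> predicate value:", predicate(*test))
```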
