Ulka Shrotri
Tata Consultancy Services
Publication
Featured research published by Ulka Shrotri.
secure software integration and reliability improvement | 2009
Prasad Bokil; Priyanka Darke; Ulka Shrotri; R. Venkatesh
Preparation of test data that adequately tests a given piece of code is expensive and effort-intensive. This paper presents a tool, AutoGen, that reduces this cost and effort by automatically generating test data for C code. AutoGen takes the C code and a criterion such as statement coverage, decision coverage, or Modified Condition/Decision Coverage (MCDC) and generates non-redundant test data that satisfies the specified criterion. This paper also presents our experience in using this tool to generate MCDC test data for three embedded reactive-system applications. The effort required using the tool was one-third of the manual effort. The main contributions of this paper are a tool that can generate data for various kinds of coverage, including MCDC, and the experience of running this tool on real applications.
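As a minimal sketch of what the MC/DC criterion demands (not of AutoGen itself, whose internals the abstract does not describe), the following Python snippet takes a hypothetical three-condition decision and finds, for each condition, a pair of tests that differ only in that condition and flip the decision's outcome:

```python
from itertools import product

def decision(a, b, c):
    # Hypothetical guard, standing in for a decision in C code under test
    return (a and b) or c

def mcdc_pairs(decision, n_conds):
    # For each condition i, find one pair of test vectors that differ only
    # in condition i and produce different decision outcomes: this is the
    # "independent influence" that MC/DC requires.
    pairs = {}
    tests = list(product([False, True], repeat=n_conds))
    for i in range(n_conds):
        for t in tests:
            t2 = list(t)
            t2[i] = not t2[i]
            t2 = tuple(t2)
            if decision(*t) != decision(*t2):
                pairs.setdefault(i, (t, t2))
    return pairs

pairs = mcdc_pairs(decision, 3)   # one independence pair per condition
```

A real generator would of course work symbolically on the C source rather than enumerating truth vectors, but the pairs found here are exactly the obligations an MCDC test suite must discharge.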
software visualization | 2005
Deepali Kholkar; G. Murali Krishna; Ulka Shrotri; R. Venkatesh
The Unified Modelling Language (UML) is popular mainly due to the various visual notations it provides for specifying large systems. In UML the details of a use case are specified in natural language using standard templates [Cockburn 2000]. This is a critical gap that leads to the detailed requirements of an application being specified in natural language. As a result, inadequate analysis of business requirements is a source of many defects in software application development. Here we propose to bridge this gap by extending the set of UML diagrams with three new diagrams that enable rigorous specification, analysis and simulation of requirements. The above is achieved by modelling business policies as global invariants and operational tasks as user interactions. We propose visual notations to specify both the global invariants and the user interactions. The two specifications are checked for consistency using the model checker SAL. Inconsistencies detected by the model checker are then presented back to the analyst in the form of easy-to-understand diagrams. These inconsistencies help detect incompleteness in the functional specification of an application and help make the functional specification rigorous and detailed. This simplifies the task of designers and implementers. SAL is also used to simulate the system and generate some sample runs. These sample runs are presented back to the developer in visual form, enabling a better understanding of the behaviour of the system. The advantages of this approach are demonstrated through our experiences with a case study as well as a project executed at Tata Consultancy Services (TCS).
international conference on industrial technology | 2012
R. Venkatesh; Ulka Shrotri; Priyanka Darke; Prasad Bokil
Modeling tools such as Statemate, Simulink and Stateflow are widely used in the automotive industry to specify low-level requirements and design. Systematic testing of models to achieve structural coverage such as state coverage, transition coverage or modified condition/decision coverage (MCDC) helps in early defect detection. Automatic generation of test data can help reduce the cost and improve the quality of systematic testing. Test data can be automatically generated either 1) directly from the models or 2) from the code generated from these models. In this paper we argue for and recommend the second approach. We propose generating test data from C, a formalism-independent intermediate language, as it is widely used in the embedded domain and most modeling tools have C code generators. Floating-point types can be represented accurately in C (C being the representation in the final executable), and various analysis tools are available for C. A major challenge in using code to generate test data is scalability. To overcome this problem, we built a tool that combines available static slicing and model-checking techniques to generate test data. We conducted experiments to check whether this tool can generate test data for large, complex models from the automotive domain. To demonstrate formalism independence and scalability we chose industry-size Statemate as well as Simulink/Stateflow models. The setup and the findings of these experiments are also presented in this paper. We successfully generated test data for code as large as 50 KLOC and detected several bugs in four already-tested industry models, thus demonstrating the benefits of this approach.
asia-pacific software engineering conference | 2012
Priyanka Darke; Mayur Khanzode; Arun Nair; Ulka Shrotri; R. Venkatesh
Static analysis of code is very effective in finding common programmer errors but it comes at a price - a large number of false positives. Model checking, on the other hand, is very precise but does not scale up. We have developed a tool that combines both techniques and also implements a novel loop abstraction. The tool was run on 2 million lines of embedded code to analyze for two properties - division by zero and array index out of bounds. In other experiments we compared the precision of our tool to that achieved by tools implementing abstract interpretation. This paper presents details of the tool and the results of evaluations that we have carried out to measure the scalability and to compare the precision of our method on industry code against other static analysis tools.
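The division of labour described above can be illustrated with a toy two-stage pipeline; this is a hypothetical sketch of the idea, not the tool itself. A cheap interval check classifies each division site as safe or undecided, and only the undecided sites are passed to a precise exhaustive check that stands in for the model checker:

```python
def cheap_check(den_interval):
    # Static-analysis stage: if the denominator's interval excludes zero,
    # the site is proved safe without any expensive reasoning.
    lo, hi = den_interval
    return "SAFE" if lo > 0 or hi < 0 else "MAYBE"

def precise_check(den_fn, inputs):
    # Model-checking stage (stand-in): exhaustively explore a small input
    # space to confirm or refute the candidate warning.
    return "BUG" if any(den_fn(x) == 0 for x in inputs) else "SAFE"

# Site 1: denominator x + 1 for x in [0, 9]  -> interval [1, 10]
site1 = cheap_check((1, 10))                      # proved cheaply
# Site 2: denominator x - 5 for x in [0, 9]  -> interval [-5, 4]
site2 = cheap_check((-5, 4))                      # undecided: escalate
result2 = precise_check(lambda x: x - 5, range(10))
```

The point of the combination is exactly what the abstract claims: the cheap stage discharges the bulk of the sites, so the expensive, precise stage only sees the few candidates that would otherwise become false positives.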
design, automation, and test in europe | 2015
Priyanka Darke; Bharti Chimdyalwar; R. Venkatesh; Ulka Shrotri; Ravindra Metta
Bounded Model Checkers (BMCs) are widely used to detect violations of program properties up to a bounded execution length of the program. However, when it comes to proving the properties, BMCs are unable to provide a sound result for programs with loops of large or unknown bounds. To address this limitation, we developed a new loop over-approximation technique, LA. LA replaces a given loop in a program with an abstract loop having a smaller known bound by combining the techniques of output abstraction and a novel abstract acceleration, suitably augmented with a new application of induction. The resulting transformed program can then be fed to any bounded model checker to provide a sound proof of the desired properties. We call this approach, LA followed by BMC, LABMC. We evaluated the effectiveness of LABMC on some of the SV-COMP14 loop benchmarks, each with a property encoded into it. Well-known BMCs failed to prove most of these properties due to loops of large, infinite or unknown bounds, while LABMC obtained promising results. We also performed experiments on a real-world automotive application on which the well-known BMCs were able to prove only one of the 186 array accesses to be within array bounds. LABMC successfully proved 131 of those array accesses to be within array bounds.
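The inductive ingredient of such loop over-approximation can be sketched as follows. This is an illustrative check under simplifying assumptions (exhaustive enumeration of small havoc states replaces the symbolic reasoning of a real BMC), not the LA algorithm itself: instead of unrolling a loop of unknown bound, the loop state is set to an arbitrary value satisfying an invariant and the property is checked after one arbitrary iteration.

```python
def inductive_check(invariant, step, prop, states):
    # For every havoc state satisfying the invariant, one loop iteration
    # must preserve the invariant and the property; if so, the property
    # holds at every iteration regardless of the loop's actual bound.
    for s in states:
        if invariant(s):
            s2 = step(s)
            if not invariant(s2) or not prop(s2):
                return False
    return True

# Loop under analysis: while i < n: i += 1   (state is the pair (i, n))
inv  = lambda st: st[0] <= st[1]
step = lambda st: (st[0] + 1, st[1]) if st[0] < st[1] else st

states = [(i, n) for i in range(-3, 8) for n in range(-3, 8)]
ok_good = inductive_check(inv, step, lambda st: st[0] <= st[1], states)  # provable
ok_bad  = inductive_check(inv, step, lambda st: st[0] < st[1], states)   # fails at i == n
```

A plain BMC would need n unrollings to reach the loop exit; the inductive reformulation has a fixed small bound, which is what lets a bounded checker return a sound verdict.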
software engineering and formal methods | 2003
Ulka Shrotri; Purandar Bhaduri; R. Venkatesh
Visual notations like class diagrams and use case diagrams are very popular with practitioners for capturing requirements of software applications. These notations unfortunately have little or no semantics, and hence cannot be analyzed by tools. Formal notations, on the other hand, have associated tools that check specifications for stated properties but are difficult to integrate with software development processes in use. The strengths of both approaches can be exploited by giving formal semantics to popular notations. Here we propose a novel usage of UML object diagrams for specifying pre- and post-conditions for use cases and capturing global system properties as class invariants. A translation is defined from object diagrams to the formal notation TLA+. The TLA+ specification is then formally verified using the model checker TLC. The proposed notation is intuitive, expressive and formal. We present a small case study to illustrate its strengths.
foundations of software engineering | 2013
Shrawan Kumar; Bharti Chimdyalwar; Ulka Shrotri
Abstract interpretation is widely used to perform static code analysis with non-relational (interval) as well as relational (difference-bound matrices, polyhedra) domains. Analysis using non-relational domains is highly scalable but delivers imprecise results, whereas the use of relational domains produces precise results but does not scale up. We have developed a tool that implements K-limited path-sensitive interval domain analysis to get precise results without losing scalability. The tool was able to successfully analyse 10 million lines of embedded code for different properties such as division by zero, array index out of bounds (AIOB), overflow-underflow and so on. This paper presents details of the tool and the results of our experiments for detecting the AIOB property. A comparison with existing tools in the market demonstrates that our tool is more precise and scales better.
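A small illustration of why path sensitivity pays off in the interval domain (a hypothetical example, not the tool's implementation): joining branch intervals at a merge point loses the correlation between which branch was taken and how the value is later used, so a provably safe array access is flagged; keeping the paths apart (K >= 2) retains the proof.

```python
def join(a, b):
    # Interval join: the smallest interval containing both arguments
    return (min(a[0], b[0]), max(a[1], b[1]))

def safe(idx, n):
    # AIOB check: the whole index interval lies within [0, n)
    return 0 <= idx[0] and idx[1] < n

N = 10
# Program: if cond: i = 0 else: i = 5; ...; if cond: a[i + 9] else: a[i]
then_i, else_i = (0, 0), (5, 5)

# Path-insensitive: join at the merge, losing the branch correlation.
i_joined = join(then_i, else_i)                    # (0, 5)
access_joined = (i_joined[0] + 9, i_joined[1] + 9)  # (9, 14): spurious warning

# K-limited path-sensitive: analyze the two paths separately.
access_then = (then_i[0] + 9, then_i[1] + 9)        # (9, 9): safe
access_else = else_i                                # (5, 5): safe
```

Bounding the number of tracked paths at K is what keeps this from degenerating into exponential path enumeration while still recovering most of the precision.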
design, automation, and test in europe | 2014
R. Venkatesh; Ulka Shrotri; G. Murali Krishna; Supriya Agrawal
Requirements of reactive systems express the relationship between sensors and actuators and are usually described in a natural language and a mix of state-based and stream-based paradigms. Translating these into a formal language is an important prerequisite to automating the verification of requirements. The analysis effort required for the translation is a prime hurdle to formalization gaining acceptance among software engineers and testers. We present Expressive Decision Tables (EDT), a novel formal notation designed to reduce the translation effort for both state-based and stream-based informal requirements. We have also built a tool, EDTTool, to generate test data and expected output from EDT specifications. In a case study consisting of more than 200 informal requirements of a real-life automotive application, translation of the informal requirements into EDT needed 43% less time than their translation into Statecharts. Further, we tested the Statecharts using test data generated by EDTTool from the corresponding EDT specifications. This testing detected one bug in a mature feature and exposed several missing requirements in another. The paper presents the EDT notation, a comparison to other similar notations and the details of the case study.
Archive | 2007
Aniket Kulkarni; Ravindra Metta; Ulka Shrotri; R. Venkatesh
A typical formal development method includes specification of the functionality, formal analysis of the specification and finally code generation on to a platform. Often formal analysis is done using model-checking, and scalability of model-checking is an area of concern. In this paper we describe our work on integrating two specific tools, Statemate and SAL, to scale up model-checking. More specifically, we highlight the benefits, in terms of scalability, that can be obtained by exploiting peculiar usage patterns in the specifications under consideration. The paper briefly introduces the tools and their respective notations, describes a translation strategy as a means to integrate the notations, and presents how we achieved improved scalability of verification using SAL by exploiting peculiar usage of language constructs in the Statecharts of interest. We also present the results of using our tool on some randomly selected Statecharts, demonstrating the scalability of our approach.
TAIC PART'10 Proceedings of the 5th international academic and industrial conference on Testing - practice and research techniques | 2010
P. Vijay Suman; Tukaram Muske; Prasad Bokil; Ulka Shrotri; R. Venkatesh
Boundary value testing in the white-box setting tests relational expressions with boundary values. These relational expressions are often part of larger conditional expressions or decisions. For effective testing it is therefore important that the outcome of a relational expression independently influences the outcome of the expression or decision in which it is embedded. Extending MC/DC to boundary value testing was proposed in the literature as a technique to achieve this independence. Based on this idea, in this paper we formally define a new coverage criterion, masking boundary value coverage (MBVC), an adaptation of masking of conditions to boundary value testing. We give a formal argument justifying why test data for MBVC is more effective than that for BVC in detecting relational mutants, and we performed an experiment to evaluate the effectiveness and efficiency of MBVC test data relative to that for BVC. Firstly, mutation adequacy of the test set for MBVC was higher than that for BVC in 56% of cases, and never lower. Secondly, the test data for MBVC killed 80.7% of the total number of mutants generated, whereas the test data for BVC killed only 70.3% of them. A further refined analysis revealed that some mutants cannot be killed. We selected a small set of mutants randomly to estimate the percentage of such mutants; the extrapolated mutation adequacies were then 92.75% and 80.8% respectively. We summarize the effect of masking on efficiency. Details of the experiment, the tools developed for automation and the analysis of the results are also provided in this paper.
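The masking problem can be made concrete with a small sketch (a hypothetical decision, not drawn from the paper's experiment). For the decision `(a < b) and c`, the boundary values for the relational expression are `a = b-1, b, b+1`; if the other condition masks it (`c = False` under `and`), the boundary tests exercise nothing, whereas MBVC fixes `c` so the relational outcome propagates to the decision:

```python
def decision(a, b, c):
    # Hypothetical decision embedding the relational expression a < b
    return (a < b) and c

b = 10
boundary_as = [b - 1, b, b + 1]   # boundary values for a < b

# Plain BVC allows any value of the other condition; with c = False the
# decision is constantly False and the boundary of a < b has no influence.
bvc_masked = [decision(a, b, False) for a in boundary_as]

# MBVC additionally requires c = True, so the decision's outcome tracks
# the relational expression at each boundary value.
mbvc = [decision(a, b, True) for a in boundary_as]
```

This is why a relational mutant (say `a <= b` in place of `a < b`) can survive a BVC-adequate suite but is killed under MBVC: only when the relational outcome reaches the decision can the mutant change observable behaviour.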