Tukaram Muske
Tata Consultancy Services
Publications
Featured research published by Tukaram Muske.
Source Code Analysis and Manipulation | 2013
Tukaram Muske; Ankit Baid; Tushar Rohidas Sanas
Static analysis has been successfully employed in software verification; however, the number of generated warnings and the cost incurred in their manual review are a major concern. In this paper we present a novel idea to reduce manual review effort by identifying redundancy in the review process. We propose two partitioning techniques to identify redundant warnings: 1) partitioning the warnings so that each partition has one leader warning, where if the leader is a false positive, so are all the other warnings in its partition, which therefore need not be reviewed; and 2) further partitioning the leader warnings based on the similarity of the modification points of the variables referred to in their expressions. The second technique speeds up the review process by identifying further redundancies, and it also makes reviewing a warning easier because of the associated information about modification points. Empirical results obtained with these grouping techniques indicate that, on average, 60% of warnings are redundant in the review context, and skipping their review would reduce manual review effort by 50-60%.
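As an illustration of the first partitioning technique described in this abstract, the following is a minimal Python sketch. The data model (warnings as line/expression pairs, leader chosen as the earliest warning for a shared expression) is a deliberate simplification for illustration, not the paper's actual algorithm:

```python
from collections import defaultdict

def partition_warnings(warnings):
    """Group warnings that share the same checked expression; the earliest
    occurrence is treated as the leader of its partition. If the leader is
    shown to be a false positive, its followers need not be reviewed.
    Each warning is a (line, expression) pair -- a simplified model."""
    groups = defaultdict(list)
    for line, expr in sorted(warnings):
        groups[expr].append(line)
    # leader = first warning in each partition; the rest are followers
    return {expr: {"leader": lines[0], "followers": lines[1:]}
            for expr, lines in groups.items()}

warnings = [(10, "arr[i]"), (25, "arr[i]"), (40, "p != NULL"), (12, "p != NULL")]
parts = partition_warnings(warnings)
```

With this toy input, only the two leader warnings (lines 10 and 12) would need manual review.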
Source Code Analysis and Manipulation | 2016
Tukaram Muske; Alexander Serebrenik
Static analysis tools have demonstrated their importance and usefulness in the automated detection of code anomalies and defects. However, the large number of alarms reported and the cost incurred in their manual inspection have been major concerns with the use of such tools. Existing studies addressing these concerns differ greatly in their approaches to handling alarms, ranging from automatic postprocessing of alarms and supporting tool users during manual inspection, to the design of lightweight static analysis tools. A comprehensive study of approaches for handling alarms is, however, lacking. In this paper, we review 79 alarm-handling studies collected through a systematic literature search and classify the proposed approaches into seven categories. The literature search combines a keyword-based database search with snowballing. Our review is intended to provide an overview of the various alarm-handling approaches, their merits and shortcomings, and the different techniques used in their implementations. Among our findings is that the categorized alarm-handling approaches are complementary and can be combined in different ways. The categorized approaches, and the techniques employed in them, can help designers and developers of static analysis tools make informed choices.
Hawaii International Conference on System Sciences | 2012
Padmanabhan Krishnan; R. Venkatesh; Prasad Bokil; Tukaram Muske; P. Vijay Suman
Embedded systems like those used in automobiles have two distinguishing attributes: they are reactive systems, where each reaction is influenced by the current state of the system, and their inputs come from small domains. We hypothesise that, because inputs come from small domains, random testing is likely to cover all values in a domain and hence to be as effective as other techniques. We also hypothesise that, because of the reactive nature of these systems, long sequences of interactions will be important for testing effectiveness. To test these hypotheses we conducted three experiments on three pieces of code selected from an automotive application. The first two experiments were designed to compare the effectiveness of randomly generated test cases against test cases that achieve modified condition/decision coverage (MC/DC), and to evaluate the impact of test-case length on effectiveness. The third experiment compares the effectiveness of handwritten test cases against randomly generated test cases of similar length. Our objective is to help practitioners choose an effective technique for testing their systems. Our findings from these limited experiments indicate that random test-case generation is as effective as manual test generation at the system level. For unit testing, however, test-case generation to achieve MC/DC coverage is more effective than random generation. Combining unit test cases with system-level testing increases effectiveness. Our final observation is that increasing test-case length improves the effectiveness of a test suite at both the unit and system level.
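The random-testing setup this abstract describes, where inputs are drawn from small domains and test-case length (the number of reaction steps) is varied, can be sketched as follows. This is an illustrative sketch only; the domains and parameters are hypothetical, not the paper's experimental tooling:

```python
import random

def random_test_suite(domains, seq_len, n_tests, seed=0):
    """Generate random test sequences for a reactive system whose inputs
    come from small domains. Each test case is a sequence of input tuples,
    so a larger seq_len exercises more of the system's state space."""
    rng = random.Random(seed)  # seeded for reproducible suites
    return [[tuple(rng.choice(d) for d in domains) for _ in range(seq_len)]
            for _ in range(n_tests)]

# two inputs drawn from small domains, sequences of 5 reactions each
suite = random_test_suite([[0, 1], ["low", "high"]], seq_len=5, n_tests=10)
```

Because each domain is tiny, even a modest number of random sequences is likely to cover every individual input value, which is the paper's first hypothesis.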
IEEE International Conference on Software Analysis, Evolution and Reengineering | 2015
Tukaram Muske; Prasad Bokil
Static analysis tools are widely used in practice because they can detect defects early in the software development life-cycle while also proving the absence of defects of certain patterns. A large number of such tools exist, and they vary in several tool characteristics such as analysis techniques, programming languages supported, verification checks performed, scalability, and performance. Many studies of these tools and their variations have been performed to improve analysis results or to identify the best tool among a set of available static analysis tools. We observe that such studies consider and compare only the aforementioned tool characteristics, while other implementational variations are usually ignored. In this paper, we study the implementational variations occurring among static analysis tools and experimentally demonstrate their impact on tool characteristics and other analysis-related attributes. The aim of this paper is twofold: a) to present the studied implementational variations as design choices, along with their pros and cons, to the designers and developers of static analysis tools, and b) to provide educational material for tool users so that analysis results are better understood.
TAIC PART'10: Proceedings of the 5th International Academic and Industrial Conference on Testing - Practice and Research Techniques | 2010
P. Vijay Suman; Tukaram Muske; Prasad Bokil; Ulka Shrotri; R. Venkatesh
Boundary value testing in the white-box setting tests relational expressions with boundary values. These relational expressions are often part of larger conditional expressions or decisions. For effective testing, it is therefore important that the outcome of a relational expression independently influences the outcome of the expression or decision in which it is embedded. Extending MC/DC to boundary value testing was proposed in the literature as a technique to achieve this independence. Based on this idea, we formally define a new coverage criterion: masking boundary value coverage (MBVC), an adaptation of masking of conditions to boundary value testing. We give a formal argument for why test data satisfying MBVC is more effective than test data satisfying BVC in detecting relational mutants, and we performed an experiment to evaluate the relative effectiveness and efficiency of the two. Firstly, the mutation adequacy of the test set for MBVC was higher than that for BVC in 56% of cases, and never lower. Secondly, the test data for MBVC killed 80.7% of the total number of generated mutants, whereas the test data for BVC killed only 70.3% of them. A refined analysis revealed that some mutants cannot be killed; we randomly selected a small set of mutants to estimate the percentage of such mutants, and the extrapolated mutation adequacies were 92.75% and 80.8% respectively. We summarize the effect of masking on efficiency, and provide details of the experiment, the tools developed for automation, and the analysis of the results.
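The core idea of MBVC, pairing boundary values of a relational expression with values of the other conditions chosen so that the relation's outcome masks through to the whole decision, can be sketched minimally. The decision shape `(x < k) and other` and the helper names below are hypothetical examples, not the paper's formal definition:

```python
def boundary_values(k):
    """Boundary test points for a relational comparison against constant k."""
    return [k - 1, k, k + 1]

def masking_bvc_tests(k, fixed_other=True):
    """For a decision of the hypothetical form (x < k) and other, fix
    'other' = True so the relational outcome independently determines
    the decision's outcome (masking), then pair that with each boundary
    value of x. A minimal sketch of the MBVC idea."""
    return [{"x": v, "other": fixed_other} for v in boundary_values(k)]

tests = masking_bvc_tests(10)
# with 'other' fixed, each decision outcome depends solely on x < 10
outcomes = [(t["x"] < 10) and t["other"] for t in tests]
```

Plain BVC would use the same boundary values of `x` but would not constrain `other`, so a false `other` could mask a relational mutant that MBVC exposes.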
International Symposium on Software Testing and Analysis | 2018
Tukaram Muske; Rohith Talluri; Alexander Serebrenik
The large number of alarms reported by static analysis tools is often recognized as one of the major obstacles to industrial adoption of such tools. We present repositioning of alarms, a novel automatic postprocessing technique intended to reduce the number of reported alarms without affecting the errors uncovered by them. The reduction is achieved by moving groups of related alarms along the control flow to a program point where they can be replaced by a single alarm. Since the location of a repositioned alarm differs from the locations of the errors it uncovers, we also maintain traceability links between a repositioned alarm and its corresponding original alarm(s). The technique is tool-agnostic and orthogonal to many other techniques available for postprocessing alarms. To evaluate it, we applied the technique as a postprocessing step to alarms generated for 4 verification properties on 16 open-source and 4 industry applications. The results indicate that alarm repositioning reduces the alarm count by up to 20% over state-of-the-art alarm-grouping techniques, with a median reduction of 7.25%.
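The merge-with-traceability step this abstract describes can be sketched as follows. The alarm model and the externally supplied merge point are hypothetical; a real tool would compute a sound merge point via control-flow analysis, which this sketch omits:

```python
def reposition_alarms(alarms, merge_point):
    """Replace a group of related alarms with a single alarm at a common
    program point, keeping traceability links back to the originals.
    Alarms are modeled as (line, message) pairs; 'origins' records the
    lines of the original alarms the merged alarm stands for."""
    return {"line": merge_point,
            "message": alarms[0][1],
            "origins": [line for line, _ in alarms]}

group = [(12, "possible division by zero: x"),
         (30, "possible division by zero: x")]
repositioned = reposition_alarms(group, merge_point=10)
```

The `origins` list is what lets a reviewer navigate from the single repositioned alarm back to the two original reporting locations.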
Archive | 2011
Vijay Suman Pasupuleti; Tukaram Muske; Prasad Bokil; Ulka Shrotri; Venkatesh Ramanathan; Priyanka Darke
VALID 2013, The Fifth International Conference on Advances in System Testing and Validation Lifecycle | 2013
Tukaram Muske; Advaita Datar; Mayur Khanzode; Kumar Madhukar
Archive | 2014
Tukaram Muske; Ankit Baid; Tushar Rohidas Sanas
Archive | 2017
Tukaram Muske