Sachin Patel
Tata Consultancy Services
Publications
Featured research published by Sachin Patel.
4th International Workshop on Product LinE Approaches in Software Engineering (PLEASE) | 2013
Sachin Patel; Priya Gupta; Vipul Shah
Testing variability-intensive systems is a formidable task due to the combinatorial explosion of feature interactions that results from all the variations. We developed and validated an approach of combinatorial test generation using Multi-Perspective Feature Models (MPFM). MPFMs are a set of feature models created to achieve Separation of Concerns within the model. This approach improves test coverage of variability. Results from an experiment on a real-life case show that up to 37% of the test effort could be reduced and up to 79% of the defects from the live system could be detected. We discuss the lessons learned from this experiment and the potential for further research in testing variability-intensive systems.
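To give a concrete flavour of combinatorial test generation over feature options, here is a minimal pairwise (2-way) generation sketch in Python; the feature names and values are hypothetical placeholders, and the sketch covers only the generation step over a flat feature set, not the multi-perspective partitioning of the MPFM approach.

```python
# Minimal sketch of pairwise (2-way) combinatorial test generation over
# feature options. Feature names and values are hypothetical placeholders.
from itertools import combinations, product

features = {
    "payment":  ["card", "wallet", "invoice"],
    "platform": ["web", "mobile"],
    "locale":   ["en", "de"],
}

names = list(features)
# All feature-value pairs that a 2-way test suite must cover at least once.
uncovered = {
    ((f1, v1), (f2, v2))
    for f1, f2 in combinations(names, 2)
    for v1 in features[f1]
    for v2 in features[f2]
}

tests = []
# Greedy: repeatedly pick the full combination covering the most uncovered pairs.
while uncovered:
    best, best_pairs = None, set()
    for combo in product(*(features[n] for n in names)):
        assignment = dict(zip(names, combo))
        covered = {
            ((f1, assignment[f1]), (f2, assignment[f2]))
            for f1, f2 in combinations(names, 2)
        } & uncovered
        if len(covered) > len(best_pairs):
            best, best_pairs = assignment, covered
    tests.append(best)
    uncovered -= best_pairs

for t in tests:
    print(t)
```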
International Conference on Software Testing, Verification and Validation | 2014
Sachin Patel; Ramesh Kumar Kollana
Organizations are making substantial investments in Enterprise Software Implementation (ESI). IT service providers (ITSPs) execute a large number of ESI projects, and these projects tend to have severe cost and schedule constraints. Tata Consultancy Services (TCS) is an ITSP that has developed a reusable test case repository with the objective of reducing the cost of testing in ESI projects. We conducted a study of the reuse program to analyze its impact. A set of metrics was defined and relevant data was gathered from users of the repository. In this paper, we document the findings of the study. Users of this repository have reported cost savings of up to 14.5% in their test design efforts. We describe the repository structure, the reuse metrics and the experience gained through reuse. We motivate the need for further research on test notations/meta-models for transaction-oriented business applications and on variability management within these models.
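As an illustration only, here is a small sketch of the kind of test-design reuse metric such a study might compute; the field names, adaptation factor and formula below are assumptions made for illustration, not the metrics defined in the paper.

```python
# Hypothetical sketch of a test-design reuse/savings metric; the field names
# and formula are illustrative assumptions, not the paper's metric definitions.
from dataclasses import dataclass

@dataclass
class ProjectReuseData:
    designed_from_scratch: int      # test cases written new for the project
    reused_as_is: int               # repository test cases used without change
    reused_with_changes: int        # repository test cases adapted
    adaptation_factor: float = 0.5  # assumed fraction of effort to adapt a case

def design_effort_saved(p: ProjectReuseData) -> float:
    """Fraction of test-design effort saved through reuse (0..1)."""
    total = p.designed_from_scratch + p.reused_as_is + p.reused_with_changes
    saved = p.reused_as_is + (1.0 - p.adaptation_factor) * p.reused_with_changes
    return saved / total if total else 0.0

print(f"{design_effort_saved(ProjectReuseData(400, 80, 100)):.1%} effort saved")
```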
International Conference on Software Testing, Verification and Validation Workshops | 2013
Sachin Patel; Priya Gupta; Vipul Shah
Testing product lines and similar software involves the important task of testing feature interactions. The challenge is to test those feature interactions in a way that covers all variations across all dimensions of variation. In this context, we propose the use of combinatorial test generation, with Multi-Perspective Feature Models (MPFM) as the input model. MPFMs are a set of feature models created to achieve Separation of Concerns within the model. We believe that the MPFM is useful as an input model for combinatorial testing and that it is easy to create and understand. This approach helps achieve better coverage of variability in the product line. Results from an experiment on a real-life case show that up to 37% of the test effort could be reduced and up to 79% of the defects from the live system could be detected.
Software Product Lines | 2015
Sachin Patel; Vipul Shah
The benefits offered by cloud technologies have compelled enterprises to adopt the Software-as-a-Service (SaaS) model for their enterprise software needs. A SaaS offering has to be configured or customized to suit the specific requirements of every enterprise that subscribes to it. IT service providers have to deal with the problem of testing the many such configurations created for different enterprises. The software is upgraded periodically, and the configurations need to be tested on an ongoing basis to ensure business continuity. To run the testing organization efficiently, it is imperative that the test cycle be automated. Developing automated test scripts for a large number of configurations is a non-trivial task because the differences across them may range from a few user interface changes to business process level changes. We propose an approach that combines the benefits of model-driven engineering and variability modeling to address this issue. The approach is built around the Enterprise Software Test Modeling Language, which is used to model the test cases. We use the Common Variability Language to model variability in the test cases and apply model transformations on a base model to generate a test model for each configuration. These models are used to generate automated test scripts for all the configurations. We describe the test modeling language and an experiment showing that the approach can be used to automatically generate variations in automated test scripts.
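The core idea of resolving variation points in a base test model to obtain configuration-specific scripts can be sketched as follows; the model structure, step names and resolution format here are simplified, hypothetical stand-ins rather than the paper's test modeling language or CVL.

```python
# Hypothetical sketch: resolve variation points in a base test model to produce
# per-configuration test steps. The structures below are simplified stand-ins
# for the paper's test modeling language and CVL-based variability models.
BASE_TEST_MODEL = [
    {"step": "login",        "screen": "Login"},
    {"step": "create_order", "screen": "OrderEntry", "varpoint": "order_flow"},
    {"step": "approve",      "screen": "Approval",   "varpoint": "approval"},
    {"step": "logout",       "screen": "Login"},
]

# One variability resolution per customer configuration.
CONFIGURATIONS = {
    "customer_A": {"order_flow": {"screen": "QuickOrder"},
                   "approval":   None},                          # optional step removed
    "customer_B": {"order_flow": {"screen": "OrderEntryV2"},
                   "approval":   {"screen": "TwoStepApproval"}},
}

def resolve(base, resolution):
    """Apply substitutions/removals at the variation points of the base model."""
    resolved = []
    for step in base:
        vp = step.get("varpoint")
        if vp is None:
            resolved.append(dict(step))
        elif resolution.get(vp) is not None:
            resolved.append({**step, **resolution[vp]})
        # a None resolution removes the optional step entirely
    return resolved

def to_script(steps):
    """Emit a trivial automated-test script from the resolved model."""
    return "\n".join(f'driver.open("{s["screen"]}")  # {s["step"]}' for s in steps)

for name, res in CONFIGURATIONS.items():
    print(f"--- {name} ---")
    print(to_script(resolve(BASE_TEST_MODEL, res)))
```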
Variability Modelling of Software-Intensive Systems | 2014
Sachin Patel
There is a growing trend towards using commercial off-the-shelf (COTS) software within enterprises, as opposed to developing custom-built software. IT service providers, who specialize in executing COTS implementation projects, have to deal with the problem of managing variability within the implementations at different customer enterprises. Customer-specific implementations have variations across dimensions such as the product used, industry vertical, business processes, navigational flows, user interface, technology platform and so on. In this paper, we describe the practical problems faced by service providers in managing variability within test cases for COTS implementations. We draw upon the experience shared with us by practitioners from COTS implementation testing teams and from teams that have been developing reusable test cases for various COTS products. We motivate the need for further research on test notations/meta-models for business applications and variability management within these models.
Multimedia Signal Processing | 2017
Gauri Deshpande; Venkata Subramanian Viraraghavan; Mayuri Duggirala; V. Ramu Reddy; Sachin Patel
Emotion recognition is important at the workplace because it impacts a multitude of outcomes, such as performance, engagement and well-being. Emotion recognition from audio is an attractive option due to its non-obtrusive nature and availability of microphones in devices at the workplace. We describe building a classifier that analyzes the para-linguistic features of audio streams to classify them into positive, neutral and negative affect. Since speech at the workplace is different from acted speech, and because it is important that the training data be situated in the right context, we designed and executed an emotion induction procedure to generate a corpus of non-acted speech data of 33 speakers. The corpus was used to train a set of classification models and a comparative analysis of these models was used to choose the feature parameters. Bootstrap aggregation (bagging) was then used on the best combination of algorithm (Random Forest) and features (60 millisecond window size). The resulting classification accuracy of 73% is on par with, or exceeds, accuracies reported in the current literature for non-acted speech for a speaker-dependent test set. For reference, we also report the speaker-dependent recognition accuracy (95%) of the same classifier trained and tested on acted speech for three emotions in the Emo-DB database.
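A minimal sketch of the classification stage, assuming frame-level paralinguistic feature vectors have already been extracted; the synthetic features, dimensions and hyperparameters are placeholders rather than the paper's configuration, and on random data the accuracy will be near chance.

```python
# Sketch of bagging over Random Forests for 3-class affect classification.
# Features are random placeholders standing in for paralinguistic feature
# vectors extracted over short windows; labels: 0=negative, 1=neutral, 2=positive.
import numpy as np
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 40))      # 3000 frames x 40 hypothetical features
y = rng.integers(0, 3, size=3000)    # placeholder affect labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# Bagging with Random Forest base learners (first positional arg = base estimator).
clf = BaggingClassifier(RandomForestClassifier(n_estimators=50, random_state=0),
                        n_estimators=10, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```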
winter simulation conference | 2016
Meghendra Singh; Mayuri Duggirala; Harshal Hayatnagarkar; Sachin Patel; Vivek Balaraman
Agent-based simulation modelers have found it difficult to build grounded, fine-grained simulation models of human behavior. By grounded we mean that the model elements must rest on valid observations of the real world; by fine-grained we mean the ability to factor in multiple dimensions of behavior, such as personality, affect and stress. In this paper, we present a set of guidelines for building such models that use fragments of behavior mined from past literature in the social sciences as well as from behavioral studies conducted in the field. The behavior fragments serve as building blocks for composing grounded, fine-grained behavior models. The models can be used in simulations to study the dynamics of any set of behavioral dimensions in a situation of interest. These guidelines are a result of our experience with creating a fine-grained simulation model of a support services organization.
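As one way to picture composing behavior fragments into a fine-grained agent model, a toy sketch follows; the fragment names, numeric effects and update rules are invented for illustration and are not drawn from the paper or its sources.

```python
# Hypothetical sketch: an agent whose behavior is composed from small
# "behavior fragments", each updating one or more behavioral dimensions
# (e.g. stress, affect) per simulation tick. All rules and values are illustrative.
import random

def workload_stress_fragment(agent, env):
    # e.g. a fragment grounded in a field study relating workload to stress
    agent["stress"] = min(1.0, agent["stress"] + 0.05 * env["queue_length"])

def recovery_fragment(agent, env):
    # e.g. a fragment from literature on recovery during idle periods
    if env["queue_length"] == 0:
        agent["stress"] = max(0.0, agent["stress"] - 0.1)

def affect_fragment(agent, env):
    # affect drifts down as stress rises, moderated by a personality trait
    drop = 0.02 * agent["stress"] * (1.0 - agent["resilience"])
    agent["affect"] = max(-1.0, agent["affect"] - drop)

def make_agent():
    return {"stress": 0.2, "affect": 0.5, "resilience": random.random()}

FRAGMENTS = [workload_stress_fragment, recovery_fragment, affect_fragment]

def simulate(ticks=50, n_agents=10):
    agents = [make_agent() for _ in range(n_agents)]
    for _ in range(ticks):
        env = {"queue_length": random.randint(0, 5)}   # toy support-queue load
        for agent in agents:
            for fragment in FRAGMENTS:
                fragment(agent, env)
    return agents

print(simulate()[0])
```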
Software Engineering and Knowledge Engineering | 2010
Sachin Patel; Priya Gupta; Prafullakumar Surve
Summer Computer Simulation Conference | 2016
Mayuri Duggirala; Meghendra Singh; Harshal Hayatnagarkar; Sachin Patel; Vivek Balaraman
Archive | 2014
Sachin Patel; Priya Gupta; Vipul Arvind Shah; Sampatkumar N. Dixit