Publication


Featured research published by Anurag Dwarakanath.


Proceedings of the Second International Workshop on CrowdSourcing in Software Engineering | 2015

Crowd Build: A Methodology for Enterprise Software Development Using Crowdsourcing

Anurag Dwarakanath; Upendra Chintala; N. C. Shrikanth; Gurdeep Virdi; Alex Kass; Anitha Chandran; Shubhashis Sengupta; Sanjoy Paul

We present and evaluate a software development methodology that addresses key challenges in applying crowdsourcing to enterprise application development. Our methodology provides a mechanism to systematically break the overall business application into small tasks that the crowd can complete independently and in parallel, and it supports automated testing and automatic integration. We evaluate the methodology by developing a web application through crowdsourcing under two models: one based on contests and the other on hiring freelancers. We report various metrics from the crowdsourcing experiment and compare them against estimates for a traditional software development methodology.
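
To make the task decomposition concrete, here is a minimal sketch assuming a hypothetical task representation; the CrowdTask fields, names, and checks below are invented for illustration and are not the paper's artifacts. Each task carries an interface contract and an automated acceptance test that gates integration, so tasks without mutual dependencies can be farmed out to the crowd in parallel.

    # Hypothetical sketch of independent crowd tasks with automated acceptance
    # tests; names and contracts are invented, not taken from the paper.
    import types
    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class CrowdTask:
        name: str
        interface: str                        # contract the submission must honour
        acceptance: Callable[[object], bool]  # automated test gating integration
        depends_on: list = field(default_factory=list)

    tasks = [
        CrowdTask("login-page", "render(form) -> html",
                  acceptance=lambda mod: callable(getattr(mod, "render", None))),
        CrowdTask("session-api", "create_session(user) -> token",
                  acceptance=lambda mod: callable(getattr(mod, "create_session", None))),
    ]

    # Tasks with no mutual dependencies can be crowdsourced in parallel.
    print([t.name for t in tasks if not t.depends_on])

    # A submission is integrated only if its acceptance test passes.
    submission = types.ModuleType("submission")
    submission.render = lambda form: "<html>...</html>"
    print(tasks[0].acceptance(submission))    # True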


International Conference on Software Engineering | 2016

Trustworthiness in enterprise crowdsourcing: a taxonomy & evidence from data

Anurag Dwarakanath; N. C. Shrikanth; Kumar Abhinav; Alex Kass

In this paper, we study the trustworthiness of the crowd for crowdsourced software development. Through a study of the literature from various domains, we present the risks that impact trustworthiness in an enterprise context, and we survey known techniques to mitigate these risks. We also analyze key metrics from multiple years of empirical data on actual crowdsourced software development tasks from two leading vendors, presenting metrics around untrustworthy behavior and the performance of certain mitigation techniques. Our study and results can serve as guidelines for crowdsourced enterprise software development.


International Conference on Testing Software and Systems | 2014

Minimum Number of Test Paths for Prime Path and Other Structural Coverage Criteria

Anurag Dwarakanath; Aruna Jankiti

The software system under test can be modeled as a graph comprising a set of vertices, V, and a set of edges, E. Test Cases are Test Paths over the graph meeting a particular test criterion. In this paper, we present a method to achieve the minimum number of Test Paths needed to cover different structural coverage criteria. Our method can accommodate Prime Path, Edge-Pair, Simple and Complete Round Trip, Edge, and Node coverage criteria. It obtains the optimal solution by transforming the graph into a flow graph and solving the minimum flow problem. We present an algorithm for the minimum flow problem that matches the best known solution complexity of O(|V||E|). Our method is evaluated through two sets of tests: in the first, we test against graphs representing actual software; in the second, we create random graphs of varying complexity. In each test we measure the number of Test Paths, the length of Test Paths, the lower bound on the minimum number of Test Paths, and the execution time.
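
To make the coverage criterion concrete, here is a minimal brute-force sketch that enumerates prime paths (maximal simple paths) from the standard definition; it is not the paper's minimum-flow algorithm, and the example graph is invented.

    # Brute-force prime path enumeration for a small graph; illustrative only.
    def extend(path, edges):
        # A simple path may not revisit a vertex, except that it may end where
        # it started; a path that has closed a cycle cannot be extended.
        if len(path) > 1 and path[0] == path[-1]:
            return []
        return [path + (b,) for (a, b) in edges
                if a == path[-1] and (b not in path or b == path[0])]

    def prime_paths(vertices, edges):
        # Enumerate every simple path, then keep only the maximal ones.
        paths, frontier = set(), [(v,) for v in vertices]
        while frontier:
            paths.update(frontier)
            frontier = [q for p in frontier for q in extend(p, edges)]
        def is_subpath(p, q):
            return any(q[i:i + len(p)] == p for i in range(len(q) - len(p) + 1))
        return [p for p in paths
                if not any(p != q and is_subpath(p, q) for q in paths)]

    # A control-flow graph with a single loop (vertex names are invented).
    V = ["entry", "loop", "body", "exit"]
    E = [("entry", "loop"), ("loop", "body"), ("body", "loop"), ("loop", "exit")]
    for pp in sorted(prime_paths(V, E)):
        print(" -> ".join(pp))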


Applications of Natural Language to Data Bases | 2012

Litmus: generation of test cases from functional requirements in natural language

Anurag Dwarakanath; Shubhashis Sengupta

Generating Test Cases from natural language requirements poses a formidable challenge, as requirements often do not follow a defined structure. In this paper, we present a tool that generates Test Cases from a functional requirement document; no restriction is imposed on the structure of the sentences. The tool works on each requirement sentence and generates one or more Test Cases through a five-step process: 1) the sentence is analyzed by a syntactic parser to identify whether it is testable; 2) a compound or complex testable sentence is split into individual simple sentences; 3) Test Intents are generated from each simple sentence (Test Intents map to the aspects on which the requirement is to be tested); 4) the Test Intents are grouped and sequenced in temporal order to generate Positive Test Cases, which verify the affirmative action of the system; 5) wherever applicable, Boundary Value Analysis and other techniques are used to generate Negative Test Cases, which verify the behavior of the system under exception conditions. The automated generation of Test Cases has been implemented in a tool called Litmus. We provide experimental results of our tool on actual requirement documents across domains and discuss the advantages and shortcomings of our approach.
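
A minimal sketch of the five-step flow is shown below. The regular-expression heuristics merely stand in for the syntactic parsing that Litmus performs; every helper name and rule here is hypothetical.

    # Hypothetical skeleton of the five-step flow described in the abstract.
    import re

    MODALS = ("shall", "must", "should", "will")

    def is_testable(sentence):
        # Step 1: crude stand-in for the parser-based testability check.
        return any(m in sentence.lower().split() for m in MODALS)

    def split_simple(sentence):
        # Step 2: split a compound sentence into simple sentences.
        return [s.strip() for s in re.split(r"\band\b|\bor\b", sentence) if s.strip()]

    def test_intents(simple):
        # Step 3: derive Test Intents (one per simple sentence in this toy).
        return ["Verify that " + simple.rstrip(".")]

    def positive_case(intents):
        # Step 4: group and sequence intents into a Positive Test Case.
        return {"type": "positive", "steps": intents}

    def negative_cases(simple):
        # Step 5: boundary-value stand-in; fires only on numeric limits.
        return [{"type": "negative", "steps": ["Retry with a value outside " + b]}
                for b in re.findall(r"\d+", simple)]

    req = "The user shall enter a PIN of 4 digits and the system shall grant access."
    if is_testable(req):
        simples = split_simple(req)
        intents = [i for s in simples for i in test_intents(s)]
        cases = [positive_case(intents)] + [c for s in simples for c in negative_cases(s)]
        print(cases)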


IEEE International Conference on Requirements Engineering | 2017

Detecting Vague Words & Phrases in Requirements Documents in a Multilingual Environment

Breno Dantas Cruz; Bargav Jayaraman; Anurag Dwarakanath; Collin McMillan

Vagueness in software requirements documents can lead to several maintenance problems, especially when the customer and the development team do not share the same language. Currently, companies rely on human translators to maintain communication and limit vagueness by translating the requirement documents by hand. In this paper, we describe two approaches that automatically identify vagueness in requirements documents in a multilingual environment. We perform two studies for calibration purposes under strict industrial limitations and describe the tool that we ultimately deployed. In the first study, six participants, two native Portuguese speakers and four native Spanish speakers, evaluated both approaches. We then conducted a field study to test the performance of the best approach in real-world environments at two companies. We describe several lessons learned for research and industrial deployment.
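
As a rough illustration, one plausible baseline for such detection is a per-language lexicon lookup; the sketch below assumes that approach, and its word lists are invented rather than taken from the paper.

    # Hypothetical lexicon-based vagueness detector; lists are illustrative.
    VAGUE = {
        "en": {"appropriate", "efficient", "user-friendly", "as needed"},
        "es": {"adecuado", "eficiente", "amigable"},
        "pt": {"adequado", "eficiente", "amigável"},
    }

    def vague_phrases(sentence, lang):
        # Substring match so multi-word entries like "as needed" are found.
        text = " ".join(sentence.lower().replace(",", " ").split())
        return [term for term in VAGUE.get(lang, set()) if term in text]

    print(vague_phrases("The system shall respond in an efficient manner.", "en"))
    # -> ['efficient']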


International Symposium on Software Testing and Analysis | 2018

Identifying implementation bugs in machine learning based image classifiers using metamorphic testing

Anurag Dwarakanath; Manish Ahuja; Samarth Sikand; Raghotham M. Rao; R. P. Jagadeesh Chandra Bose; Neville Dubash; Sanjay Podder

We have recently witnessed tremendous success of Machine Learning (ML) in practical applications: computer vision, speech recognition, and language translation have all seen near-human-level performance. We expect that, in the near future, most business applications will include some form of ML. However, testing such applications is extremely challenging and would be very expensive if we followed today's methodologies. In this work, we present an articulation of the challenges in testing ML-based applications. We then present our solution approach, based on the concept of Metamorphic Testing, which aims to identify implementation bugs in ML-based image classifiers. We have developed metamorphic relations for an application based on a Support Vector Machine and for a Deep Learning based application. Empirical validation showed that our approach was able to catch 71% of the implementation bugs in the ML applications.
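
To illustrate the idea, the sketch below checks one metamorphic relation of the kind used for ML classifiers: permuting the order of the training data should not change a deterministic SVM's predictions, so any disagreement signals an implementation bug rather than poor model quality. The data and the specific relation are illustrative; the paper's exact relations are not reproduced here.

    # One illustrative metamorphic relation for an SVM-based classifier.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic, well-separated labels
    X_test = rng.normal(size=(50, 5))

    baseline = SVC(kernel="linear").fit(X, y).predict(X_test)

    perm = rng.permutation(len(X))            # the metamorphic transformation
    followup = SVC(kernel="linear").fit(X[perm], y[perm]).predict(X_test)

    # The relation acts as a test oracle: source and follow-up predictions
    # must agree, with no need to know the "correct" label of any test point.
    print("violations:", int((baseline != followup).sum()))  # expect 0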


International Conference on Software Testing, Verification and Validation | 2017

Accelerating Test Automation through a Domain Specific Language

Anurag Dwarakanath; Dipin Era; Aditya Priyadarshi; Neville Dubash; Sanjay Podder

Test automation involves the automatic execution of test scripts instead of running them manually, which significantly reduces the manual effort needed and is therefore of great interest to the software testing industry. There are two key problems with existing tools and methods for test automation: a) creating an automation test script is essentially a code development task, which most testers are not trained in, and b) the automation test script is seldom readable, making maintenance an effort-intensive process. We present the Accelerating Test Automation Platform (ATAP), which is aimed at making test automation accessible to non-programmers. ATAP allows an automation test script to be created in a domain specific language based on English, and the English-like test scripts are automatically converted to machine-executable code using Selenium WebDriver. ATAP's English-like test scripts are easy for non-programmers to author, and the functional flow of a script is easy to understand, which makes maintenance simpler (you can still follow the flow of a test script when you revisit it many months later). ATAP has been built around the Eclipse ecosystem and has been used in a real-life testing project. We present the details of the implementation of ATAP and the results from its usage in practice.
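
A minimal sketch of the English-like-DSL-to-Selenium translation follows. The three-rule grammar and the generated WebDriver calls are invented for illustration; ATAP's actual grammar is not reproduced here.

    # Hypothetical English-like DSL compiled to Selenium WebDriver code.
    import re

    RULES = [
        (r'open "(?P<url>[^"]+)"',
         'driver.get("{url}")'),
        (r'type "(?P<text>[^"]+)" into (?P<field>\w+)',
         'driver.find_element(By.NAME, "{field}").send_keys("{text}")'),
        (r'click (?P<button>\w+)',
         'driver.find_element(By.ID, "{button}").click()'),
    ]

    def compile_script(dsl):
        out = ["from selenium import webdriver",
               "from selenium.webdriver.common.by import By",
               "driver = webdriver.Chrome()"]
        for line in filter(None, (l.strip() for l in dsl.splitlines())):
            for pattern, template in RULES:
                match = re.fullmatch(pattern, line)
                if match:
                    out.append(template.format(**match.groupdict()))
                    break
            else:
                raise SyntaxError("unrecognised step: " + line)
        return "\n".join(out)

    print(compile_script('''
    open "https://example.com/login"
    type "alice" into username
    click submit
    '''))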


Archive | 2012

Analysis system for test artifact generation

Shubhashis Sengupta; Anurag Dwarakanath; Roshni R. Ramnani


Archive | 2012

System for generating test scenarios and test conditions and expected results

David E. Ingram; Brian Ahern; Shubhashis Sengupta; Anurag Dwarakanath; Kapil Singi; Anitha Chandran


IEEE International Conference on Requirements Engineering | 2013

Automatic extraction of glossary terms from natural language requirements

Anurag Dwarakanath; Roshni R. Ramnani; Shubhashis Sengupta

Collaboration


Dive into Anurag Dwarakanath's collaborations.

Top Co-Authors

Kumar Abhinav

Indraprastha Institute of Information Technology
