

Publication


Featured research published by Debdoot Mukherjee.


international conference on service oriented computing | 2008

Determining QoS of WS-BPEL Compositions

Debdoot Mukherjee; Pankaj Jalote; Mangala Gowri Nanda

With a large number of web services offering the same functionality, the Quality of Service (QoS) rendered by a web service becomes a key differentiator. WS-BPEL has emerged as the de facto industry standard for composing web services. Thus, determining the QoS of a composite web service expressed in BPEL can be extremely beneficial. While there has been much work on QoS computation of structured workflows, there exists no tool to ascertain QoS for BPEL processes, which are semantically richer than conventional workflows. We propose a model for estimating three key QoS parameters - Response Time, Cost and Reliability - of an executable BPEL process from the QoS information of its partner services and certain control flow parameters. We have built a tool to compute QoS of a WS-BPEL process that accounts for most workflow patterns that may be expressed by standard WS-BPEL. Another feature of our QoS approach and the tool is that it allows a designer to explore the impact on QoS of using different software fault-tolerance techniques, such as recovery blocks and N-version programming, thereby enabling QoS computation for mission-critical applications that may employ these techniques to achieve high reliability and/or performance.
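The flavor of such QoS composition can be conveyed with a minimal sketch. This is an illustration under textbook assumptions, not the paper's actual model: it composes Response Time, Cost and Reliability for a sequence, a parallel flow, and a recovery block, using made-up service QoS values.

```python
import math

def qos_sequence(services):
    """<sequence>: response times and costs add; every step must succeed."""
    return {
        "time": sum(s["time"] for s in services),
        "cost": sum(s["cost"] for s in services),
        "rel": math.prod(s["rel"] for s in services),
    }

def qos_flow(services):
    """<flow>: branches run in parallel, so response time is the slowest
    branch; every branch is still invoked, so costs add and all must succeed."""
    return {
        "time": max(s["time"] for s in services),
        "cost": sum(s["cost"] for s in services),
        "rel": math.prod(s["rel"] for s in services),
    }

def qos_recovery_block(primary, backup):
    """Recovery block: the backup runs only when the primary fails, so the
    expected time/cost add the backup's share weighted by failure odds."""
    p_fail = 1.0 - primary["rel"]
    return {
        "time": primary["time"] + p_fail * backup["time"],
        "cost": primary["cost"] + p_fail * backup["cost"],
        "rel": 1.0 - p_fail * (1.0 - backup["rel"]),
    }

# Hypothetical partner services (time in ms, cost per call, reliability).
a = {"time": 120.0, "cost": 2.0, "rel": 0.99}
b = {"time": 300.0, "cost": 5.0, "rel": 0.95}

seq = qos_sequence([a, b])      # time 420.0, cost 7.0, rel 0.9405
par = qos_flow([a, b])          # time 300.0 (slowest branch)
rb = qos_recovery_block(a, b)   # rel 0.9995, far above either service alone
```

The recovery-block case shows why fault-tolerance patterns matter for QoS estimation: redundancy trades a small expected time and cost increase for a large reliability gain.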


international conference on web services | 2009

Efficient Testing of Service-Oriented Applications Using Semantic Service Stubs

Senthil Mani; Vibha Singhal Sinha; Saurabh Sinha; Pankaj Dhoolia; Debdoot Mukherjee; Soham Chakraborty

Service-oriented applications can be expensive to test because services are hosted remotely, are potentially shared among many users, and may have costs associated with their invocation. In this paper, we present an approach for reducing the costs of testing such applications. The key observation underlying our approach is that certain aspects of an application can be tested using locally deployed semantic service stubs, instead of actual remote services. A semantic service stub incorporates some of the service functionality, such as verifying preconditions and generating output messages based on postconditions. We illustrate how semantic stubs can enable the client test suite to be partitioned into subsets, some of which need not be executed using remote services. We also present a case study that demonstrates the feasibility of the approach, and potential cost savings for testing. The main benefits of our approach are that it can (1) reduce the number of test cases that need to be run to invoke remote services, and (2) ensure that certain aspects of application functionality are well-tested before service integration occurs.
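A semantic service stub of the kind described above can be sketched as follows. The service, its contract, and all names here are hypothetical, invented for illustration: the stub enforces the real service's precondition locally and fabricates an output satisfying the postcondition, so client logic can be exercised without any remote invocation.

```python
class PreconditionViolation(Exception):
    """Raised when a client call breaks the service's stated contract."""

def convert_currency_stub(amount, rate):
    """Local stand-in for a (hypothetical) remote currency-conversion service.

    Verifies the precondition and generates an output message that satisfies
    the postcondition (converted == amount * rate), with no network call.
    """
    # Precondition check, as the real service's contract would state it.
    if amount < 0 or rate <= 0:
        raise PreconditionViolation("amount must be >= 0 and rate > 0")
    converted = amount * rate
    # Postcondition holds by construction.
    assert converted >= 0
    return {"converted": converted}

# Client-side tests that only exercise input validation and message handling
# can run against this stub; only integration tests need the remote service.
ok = convert_currency_stub(100.0, 1.5)["converted"]   # 150.0
```

This is what makes the test-suite partitioning possible: any test case whose verdict depends only on contract checking and message shape can be routed to the stub subset.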


conference on object-oriented programming systems, languages, and applications | 2009

Consultant assistant: a tool for collaborative requirements gathering and business process documentation

Pietro Mazzoleni; SweeFen Goh; Richard Goodwin; Manisha D. Bhandar; ShyhKwei Chen; Juhnyoung Lee; Vibha Singhal Sinha; Senthil Mani; Debdoot Mukherjee; Biplav Srivastava; Pankaj Dhoolia; Elad Fein; Natalia Razinkov

In this paper we present Consultant Assistant (CA), a tool to assist business consultants in collaborative requirements gathering and business process documentation. CA is a web tool that uses a model-based approach to capture requirements. CA allows users to select relevant components of industry-specific process hierarchies, reuse documents from past engagements, collaboratively author requirements, and publish these requirements in a document-based format. These documents can further be published to an asset repository for future reuse.


business process management | 2010

From informal process diagrams to formal process models

Debdoot Mukherjee; Pankaj Dhoolia; Saurabh Sinha; Aubrey J. Rembert; Mangala Gowri Nanda

Process modeling is an important activity in business transformation projects. Free-form diagramming tools, such as PowerPoint and Visio, are the preferred tools for creating process models. However, the designs created using such tools are informal sketches, which are not amenable to automated analysis. Formal models, although desirable, are rarely created (during early design) because of the usability problems associated with formal-modeling tools. In this paper, we present an approach for automatically inferring formal process models from informal business process diagrams, so that the strengths of both types of tools can be leveraged. We discuss different sources of structural and semantic ambiguities, commonly present in informal diagrams, which pose challenges for automated inference. Our approach consists of two phases. First, it performs structural inference to identify the set of nodes and edges that constitute a process model. Then, it performs semantic interpretation, using a classifier that mimics human reasoning to associate modeling semantics with the nodes and edges. We discuss both supervised and unsupervised techniques for training such a classifier. Finally, we report results of empirical studies, conducted using flow diagrams from real projects, which illustrate the effectiveness of our approach.


ieee international conference on services computing | 2010

AHA: Asset Harvester Assistant

Debdoot Mukherjee; Senthil Mani; Vibha Singhal Sinha; Rema Ananthanarayanan; Biplav Srivastava; Pankaj Dhoolia; Prahlad Chowdhury

Information assets in service enterprises are typically available as unstructured documents. There is an increasing need for unraveling information from these documents into a structured and semantic format. Structured data can be more effectively queried, which increases information reuse from asset repositories. This paper addresses the problem of extracting XML models, which follow a given target schema, from enterprise documents. We discuss why existing approaches for information extraction do not suffice for the enterprise documents created during service delivery. To address this limitation, we present the Asset Harvester Assistant (AHA), a tool that automatically extracts structured models from MS-Word documents, and supports manual refinement of the extracted models within an interactive environment. We present the results of empirical studies conducted using business-process documents from real service-delivery engagements. Our results indicate that the AHA approach can be effective in extracting accurate models from unstructured documents and improving user productivity.


mining software repositories | 2013

Bug resolution catalysts: Identifying essential non-committers from bug repositories

Senthil Mani; Seema Nagar; Debdoot Mukherjee; Ramasuri Narayanam; Vibha Singhal Sinha; Amit Anil Nanavati

Bugs are inevitable in software projects. Resolving bugs is the primary activity in software maintenance. Developers, who fix bugs through code changes, are naturally important participants in bug resolution. However, there are other participants in these projects who do not perform any code commits. They can be reporters who report bugs; people with deep technical know-how of the software who provide valuable insights into how to solve a bug; or bug-tossers who reassign bugs to the right set of developers. Even though all of them act on bugs by tossing and commenting, not all of them may be crucial for bug resolution. In this paper, we formally define essential non-committers and try to identify these bug resolution catalysts. We empirically study 98,304 bug reports across 11 open source and 5 commercial software projects to validate the existence of such catalysts. We propose a network-analysis-based approach to construct a Minimal Essential Graph that identifies such people in a project. Finally, we suggest ways of leveraging this information for bug triaging and bug report summarization.


international conference on software engineering | 2014

API as a social glue

Rohan Padhye; Debdoot Mukherjee; Vibha Singhal Sinha

The rapid growth of social platforms such as Facebook, Twitter and LinkedIn underscores the need for people to connect to existing and new contacts for recreational and professional purposes. A parallel of this phenomenon exists in the software development arena as well. Open-source code sharing platforms such as GitHub provide the ability to follow people and projects of interest. However, users must manually identify projects or other users whom they might be interested in following. We observe that most software projects use third-party libraries and that developers who contribute to multiple projects often use the same library APIs across projects. Thus, the library APIs seem to be a good fingerprint of their skill set. Hence, we argue that library APIs can form the social glue to connect people and projects having similar interests. We propose APINet, a system that mines API usage profiles from source code version management systems and creates a social network of people, projects and libraries. We describe our initial implementation that uses data from 568 open-source projects hosted on GitHub. Our system recommends to a user new projects and people that they may be interested in, suggests communities of people who use related libraries, and finds experts for a given topic who are closest in a user's social graph.
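The "API usage as fingerprint" idea can be sketched in a few lines. This is a simplified illustration, not APINet itself: developer names, API profiles, and the choice of Jaccard similarity are all assumptions made for the example.

```python
def jaccard(a, b):
    """Overlap between two sets of library APIs, in [0, 1]."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical API usage profiles mined from commit histories.
profiles = {
    "alice": {"org.apache.http", "com.google.gson", "org.slf4j"},
    "bob": {"org.apache.http", "org.slf4j", "junit"},
    "carol": {"javax.swing", "java.awt"},
}

def recommend_peers(user, profiles, k=2):
    """Suggest developers whose API fingerprints overlap with the user's."""
    others = sorted(
        ((jaccard(profiles[user], apis), name)
         for name, apis in profiles.items() if name != user),
        reverse=True,
    )
    return [name for score, name in others[:k] if score > 0]

peers = recommend_peers("alice", profiles)   # ["bob"]: shared http + slf4j
```

In a real system the same similarity scores, computed between people, projects and libraries, would induce the edges of the social network from which communities and experts are derived.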


mining software repositories | 2013

Which work-item updates need your response?

Debdoot Mukherjee; Malika Garg

Work-item notifications alert the team collaborating on a work-item about any update to the work-item (e.g., addition of comments, change in status). However, as software professionals get involved with multiple tasks in project(s), they are inundated by too many notifications from the work-item tool. Users are upset that they often miss the notifications that solicit their response in the crowd of mostly useless ones. We investigate the severity of this problem by studying the work-item repositories of two large collaborative projects and conducting a user study with one of the project teams. We find that, on average, only 1 out of every 5 notifications received by a user requires a response. We propose TWINY, a machine-learning-based approach to predict whether a notification will prompt any action from its recipient. Such a prediction can help to suitably mark up notifications and to decide whether a notification needs to be sent out immediately or be bundled in a message digest. We conduct empirical studies to evaluate the efficacy of different classification techniques in this setting. We find that incremental learning algorithms are ideally suited, and ensemble methods appear to give the best results in terms of prediction accuracy.
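Why incremental learners fit this setting can be shown with a toy sketch: labelled notifications arrive as a stream, and the model takes one gradient step per example instead of retraining from scratch. The two features and the plain online logistic regression below are assumptions for illustration; they are not TWINY's actual feature set or learner.

```python
import math

class OnlineLogistic:
    """Minimal online (incremental) logistic regression."""

    def __init__(self, n_features, lr=0.5):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, y):
        # One gradient step per arriving notification; the model keeps
        # adapting as new labelled examples stream in.
        err = self.predict_proba(x) - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

# Hypothetical features: [recipient is @-mentioned, recipient owns the item].
# Label: 1 if the recipient acted on the notification, 0 otherwise.
model = OnlineLogistic(n_features=2)
stream = [([1, 1], 1), ([0, 0], 0), ([1, 0], 1), ([0, 1], 0)] * 50
for x, y in stream:
    model.update(x, y)

needs_reply = model.predict_proba([1, 0]) > 0.5   # mentioned: flag it
ignorable = model.predict_proba([0, 1]) > 0.5     # owner only: digest it
```

A prediction above the threshold would mark the notification for immediate delivery; the rest can wait for the digest.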


acm conference on systems, programming, languages and applications: software for humanity | 2012

Is text search an effective approach for fault localization: a practitioner's perspective

Vibha Singhal Sinha; Senthil Mani; Debdoot Mukherjee

There has been widespread interest in both academia and industry around techniques to help in fault localization. Much of this work leverages static or dynamic code analysis and hence is constrained by the programming language used or presence of test cases. In order to provide more generically applicable techniques, recent work has focused on devising text search based approaches that recommend source files which a developer can modify to fix a bug. Text search may be used for fault localization in either of the following ways. We can search a repository of past bugs with the bug description to find similar bugs and recommend the source files that were modified to fix those bugs. Alternately, we can directly search the code repository to find source files that share words with the bug report text. A few interesting questions come to mind when we consider applying these text-based search techniques in real projects. For example, would searching on past fixed bugs yield better results than searching on code? What is the accuracy one can expect? Would giving preference to code words in the bug report improve the search results? In this paper, we apply variants of text search on four open source projects and compare the impact of different design considerations on search efficacy.
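The "search the code repository" variant can be sketched as classic TF-IDF retrieval: rank source files by how strongly they share informative words with the bug report. The file names, contents, and scoring details below are made up for illustration and are not the paper's evaluated implementation.

```python
import math
import re
from collections import Counter

def tokens(text):
    """Crude tokenizer: lowercase alphabetic runs."""
    return re.findall(r"[a-z]+", text.lower())

def tfidf_rank(query, docs):
    """Rank documents against a query by TF-IDF dot product,
    normalized by document vector length."""
    n = len(docs)
    df = Counter()
    for body in docs.values():
        df.update(set(tokens(body)))

    def vec(text):
        tf = Counter(tokens(text))
        # Smoothed inverse document frequency: rare words weigh more.
        return {t: c * math.log((1 + n) / (1 + df[t])) for t, c in tf.items()}

    q = vec(query)
    scores = {}
    for name, body in docs.items():
        d = vec(body)
        dot = sum(q[t] * d.get(t, 0.0) for t in q)
        norm = math.sqrt(sum(v * v for v in d.values())) or 1.0
        scores[name] = dot / norm
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical repository and bug report.
docs = {
    "LoginServlet.java": "login session authenticate user password timeout",
    "ReportChart.java": "chart render axis legend color",
}
bug = "user cannot login after session timeout"
ranking = tfidf_rank(bug, docs)   # LoginServlet.java ranks first
```

The design questions the abstract raises map directly onto this sketch: searching past bugs instead of code swaps the document collection, and boosting code words amounts to reweighting the query vector.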


international conference on software engineering | 2011

Using MATCON to generate CASE tools that guide deployment of pre-packaged applications

Elad Fein; Natalia Razinkov; Shlomit Shachor; Pietro Mazzoleni; SweeFen Goh; Richard Goodwin; Manisha Bhand; Shyh-Kwei Chen; Juhnyoung Lee; Vibha Singhal Sinha; Senthil Mani; Debdoot Mukherjee; Biplav Srivastava; Pankaj Dhoolia

The complex process of adapting pre-packaged applications, such as Oracle or SAP, to an organization's needs is full of challenges. Although detailed, structured, and well-documented methods govern this process, the consulting team implementing the method must spend a huge amount of manual effort to make sure the guidelines of the method are followed as intended by the method author. MATCON breaks down the method content, documents, templates, and work products into reusable objects, and enables them to be cataloged and indexed so these objects can be easily found and reused on subsequent projects. By modeling and meta-modeling the reusable methods, we automatically produce a CASE tool to apply these methods, thereby guiding consultants through this complex process. The resulting tool helps consultants create the method deliverables for the initial phases of large customization projects. Our MATCON output, referred to as Consultant Assistant, has shown significant savings in training costs, a 20-30% improvement in productivity, and positive results in large Oracle and SAP implementations.
