Publication


Featured research published by Gargi Dasgupta.


Communications of the ACM | 2011

Workload management for power efficiency in virtualized data centers

Gargi Dasgupta; Amit Sharma; Akshat Verma; Anindya Neogi; Ravi Kothari

Power-aware dynamic application placement can address underutilization of servers as well as the rising energy costs in a data center.
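One common way to realize power-aware placement is to consolidate workloads onto as few servers as possible so the rest can be powered down. The sketch below is a minimal illustration of that idea using first-fit-decreasing bin packing; the function name, demands, and capacity are invented for illustration and are not from the paper.

```python
def place_workloads(demands, server_capacity):
    """Assign CPU demands to servers via first-fit decreasing,
    minimizing the number of active servers (a toy consolidation)."""
    servers = []     # remaining capacity of each active server
    placement = {}   # workload id -> server index
    for wid, demand in sorted(demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(servers):
            if demand <= free:
                servers[i] -= demand
                placement[wid] = i
                break
        else:
            # no existing server fits: power on a new one
            servers.append(server_capacity - demand)
            placement[wid] = len(servers) - 1
    return placement, len(servers)

placement, active = place_workloads(
    {"w1": 0.6, "w2": 0.3, "w3": 0.5, "w4": 0.4}, server_capacity=1.0)
```

Here four workloads fit on two servers instead of four, so two machines can be idled; the real system would additionally handle migration cost and dynamic demand.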


Winter Simulation Conference | 2011

Simulation-based evaluation of dispatching policies in service systems

Dipyaman Banerjee; Gargi Dasgupta; Nirmit Desai

A service system is an organization of resources and processes that interacts with customers and produces service outcomes. Since most service systems are labor-intensive, the main resources are the service workers. Designing such systems is nontrivial due to the large number of parameters and variations, yet crucial for business decisions such as labor staffing. The most important design point of a service system is how and when service requests are assigned to service workers, i.e., the dispatching policy. This paper presents a framework for evaluating dispatching policies in service systems. A discrete-event simulation model of a service system in the data-center management domain is presented. We evaluate four dispatching policies on five real-life service systems and observe that the simulation-based approach captures the intricacies of service systems and allows comparative analysis of dispatching policies, leading to more accurate labor-staffing decisions.
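The core of such an evaluation is a discrete-event loop in which arriving requests are handed to a worker chosen by a pluggable dispatching policy. The sketch below is a heavily simplified stand-in for the paper's simulator (rates, policies, and function names are assumptions for illustration), but it shows how two policies can be compared on the same metric.

```python
import random

def simulate(policy, n_workers=3, n_requests=200, seed=7):
    """Toy discrete-event simulation: requests arrive over time, `policy`
    picks a worker for each; returns the mean waiting time."""
    rng = random.Random(seed)
    free_at = [0.0] * n_workers         # time each worker becomes free
    t, total_wait = 0.0, 0.0
    for _ in range(n_requests):
        t += rng.expovariate(1.0)       # exponential inter-arrival times
        service = rng.expovariate(0.5)  # exponential service durations
        w = policy(free_at, rng)
        start = max(t, free_at[w])      # wait if the worker is busy
        total_wait += start - t
        free_at[w] = start + service
    return total_wait / n_requests

# Two illustrative dispatching policies:
least_loaded = lambda free_at, rng: free_at.index(min(free_at))
random_pick = lambda free_at, rng: rng.randrange(len(free_at))
```

Running both, the least-loaded policy yields a much lower mean wait than random assignment at this load, which is the kind of comparative conclusion the framework is built to support.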


International Conference on Service Oriented Computing | 2006

DECO: data replication and execution CO-scheduling for utility grids

Vikas Agarwal; Gargi Dasgupta; Koustuv Dasgupta; Amit Purohit; Balaji Viswanathan

Grids and peer-to-peer (P2P) networks have emerged as popular platforms for next-generation parallel and distributed computing. In these environments, resources are geographically distributed, managed and owned by various organizations with differing policies, and interconnected by wide-area networks or the Internet. This introduces a number of resource management and application scheduling challenges in the domains of security, resource and policy heterogeneity, fault tolerance, and dynamic resource conditions. In such dynamic distributed computing environments, it is challenging to carry out resource management design studies in a repeatable and controlled manner, as resources and users are autonomous and distributed across multiple organizations with their own policies. Simulation has therefore emerged as the most feasible technique for analyzing resource allocation policies. This paper presents emerging trends in distributed computing and their promise for revolutionizing the computing field, and identifies distinct characteristics of and challenges in building such systems. We motivate opportunities for the modeling and simulation communities and present our discrete-event grid simulation toolkit, GridSim, used by researchers worldwide to investigate the design of utility-oriented computing systems such as data centers and Grids. We present case studies on the use of GridSim in modeling and simulation of business Grids, parallel application scheduling, workflow scheduling, and service pricing and revenue management.


Proceedings of the 15th ACM Mardi Gras conference on From lightweight mash-ups to lambda grids: Understanding the spectrum of distributed computing requirements, applications, tools, infrastructures, interoperability, and the incremental adoption of key capabilities | 2008

Transparent grid enablement of weather research and forecasting

S. Masoud Sadjadi; Liana Fong; Rosa M. Badia; Javier Figueroa; Javier Delgado; Xabriel J. Collazo-Mojica; Khalid Saleem; Raju Rangaswami; Shu Shimizu; Héctor A Durán Limón; Pat Welsh; S. Pattnaik; Anthony Paul Praino; David Villegas; Selim Kalayci; Gargi Dasgupta; Onyeka Ezenwoye; Juan Carlos Martinez; Ivan Rodero; Shuyi S. Chen; Javier Muñoz; Diego Ruiz López; Julita Corbalan; Hugh E. Willoughby; Michael McFail; Christine L. Lisetti; Malek Adjouadi

The impact of hurricanes is so devastating throughout different levels of society that there is a pressing need to provide a range of users with accurate and timely information that can enable effective planning for, and response to, potential hurricane landfalls. The Weather Research and Forecasting (WRF) code is the latest numerical model adopted by meteorological services worldwide. The current version of WRF was not designed to scale beyond a single organization's local computing resources. However, the high resource requirements of WRF for fine-resolution and ensemble forecasting demand a large number of computing nodes, which typically cannot be found within one organization. There is therefore a pressing need for Grid enablement of the WRF code so that it can utilize resources available in partner organizations. In this paper, we present our research on Grid enablement of WRF by leveraging our work in transparent shaping, GRID superscalar, profiling, code inspection, code modeling, meta-scheduling, and job flow management.


Knowledge Discovery and Data Mining | 2016

Anomaly Detection Using Program Control Flow Graph Mining From Execution Logs

Animesh Nandi; Atri Mandal; Shubham Atreja; Gargi Dasgupta; Subhrajit Bhattacharya

We focus on the problem of detecting anomalous run-time behavior of distributed applications from their execution logs. Specifically, we mine templates and template sequences from logs to form a control flow graph (CFG) spanning distributed components. This CFG represents the baseline healthy system state and is used to flag deviations from the expected behavior of runtime logs. The novelty in our work stems from the new techniques employed to: (1) overcome the instrumentation requirements or application-specific assumptions made in prior log mining approaches, (2) improve the accuracy of mined templates and the CFG in the presence of long parameters and heavy interleaving, respectively, and (3) improve by orders of magnitude the scalability of the CFG mining process in terms of the volume of log data that can be processed per day. We evaluate our approach using (a) synthetic log traces and (b) multiple real-world log datasets collected at different layers of the application stack. Results demonstrate that our template mining, CFG mining, and anomaly detection algorithms have high accuracy. The distributed implementation of our pipeline is highly scalable, processing more than 500 GB/day of log data even on a cluster of 10 low-end VMs (Spark + Hadoop). We also demonstrate the efficacy of our end-to-end system using a case study with the OpenStack VM provisioning system.
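The pipeline idea (templates from raw lines, a graph of observed transitions, deviations flagged as anomalies) can be sketched in miniature. This is a crude illustration only: the masking regex, log lines, and functions are invented, and the paper's mining algorithms are far more sophisticated.

```python
import re
from collections import defaultdict

def template(line):
    """Crude template extraction: mask numbers and hex ids with <*>."""
    return re.sub(r"\b(0x[0-9a-f]+|\d+)\b", "<*>", line)

def build_cfg(log_lines):
    """Edges between consecutive templates form a toy control-flow graph."""
    edges = defaultdict(set)
    templates = [template(l) for l in log_lines]
    for a, b in zip(templates, templates[1:]):
        edges[a].add(b)
    return edges

def anomalies(cfg, log_lines):
    """Flag transitions never seen in the healthy baseline."""
    t = [template(l) for l in log_lines]
    return [(a, b) for a, b in zip(t, t[1:]) if b not in cfg.get(a, set())]

healthy = ["start job 1", "alloc vm 42", "job 1 done",
           "start job 2", "alloc vm 7", "job 2 done"]
cfg = build_cfg(healthy)
bad = ["start job 3", "job 3 done"]  # skips the allocation step
```

On the `bad` trace, the transition from "start job" directly to "job done" is absent from the baseline CFG and gets flagged, mirroring how deviations from expected runtime behavior are detected.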


International Conference on Service Oriented Computing | 2008

Design and Implementation of a Fault Tolerant Job Flow Manager Using Job Flow Patterns and Recovery Policies

Selim Kalayci; Onyeka Ezenwoye; Balaji Viswanathan; Gargi Dasgupta; S. Masoud Sadjadi; Liana Fong

Currently, many grid applications are developed as job flows composed of multiple jobs. The execution of job flows requires the support of a job flow manager and a job scheduler. Due to the long-running nature of job flows, support for fault tolerance and recovery policies is especially important. This support is inherently complicated by the sequencing and dependency of jobs within a flow, and by the required coordination between workflow engines and job schedulers. In this paper, we describe the design and implementation of a job flow manager that supports fault tolerance. First, we identify and label job flow patterns within a job flow at deployment time. Then, at runtime, we introduce a proxy that intercepts and resolves faults using job flow patterns and their corresponding fault-recovery policies. Our design has the advantages of separating job flow and fault-handling logic, requiring no manipulation at modeling time, and providing flexibility in fault resolution at runtime. We validate our design with a prototype implementation based on the ActiveBPEL workflow engine and the GridWay meta-scheduler, with the Montage application as the case study.
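The intercept-and-recover idea can be sketched as a proxy that runs each job and consults a recovery policy on failure. This is a minimal hypothetical illustration, not the paper's design: the function names, the "retry"/"abort" actions, and the policy are all invented, and the real system operates on BPEL job flow patterns rather than Python callables.

```python
def run_with_recovery(job, policy, max_retries=3):
    """Execute `job` (a callable); on each failure, ask `policy`
    whether to retry or propagate the fault to the flow manager."""
    for attempt in range(max_retries + 1):
        try:
            return job()
        except Exception as err:
            if policy(err, attempt) == "retry":
                continue
            raise  # "abort": let the flow manager handle it
    raise RuntimeError("retries exhausted")

def transient_retry(err, attempt):
    # Retry transient faults a couple of times; abort on anything else.
    return "retry" if isinstance(err, TimeoutError) and attempt < 2 else "abort"

# Usage: a job that fails once with a transient fault, then succeeds.
calls = {"n": 0}
def flaky_job():
    calls["n"] += 1
    if calls["n"] < 2:
        raise TimeoutError("transient grid fault")
    return "ok"
```

The key design point mirrored here is the separation of concerns: the job flow logic (`flaky_job`) knows nothing about fault handling, which lives entirely in the interposed proxy and its policy.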


International Workshop on Quality of Service | 2006

QoS-GRAF: A Framework for QoS based Grid Resource Allocation with Failure provisioning

Gargi Dasgupta; Koustuv Dasgupta; Amit Purohit; Balaji Viswanathan

This paper describes QoS-GRAF, a framework for revenue maximization in a utility computing grid where jobs have multiple resource dependencies and differentiated QoS pricing. The revenue maximization problem is solved with two linear-relaxation-based algorithms, MRPA and MLBA, which achieve performance within 1-5% of the optimal solution and significantly outperform alternative approaches. Both yield better revenue across small, medium, and large jobs, with efficient resource utilization. As part of ongoing work, backup algorithms for multiple failures are being developed, and scheduling algorithms are being incorporated to produce maximally profitable schedules that respect job deadlines.
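To give a flavor of the revenue-maximization objective, the sketch below admits jobs greedily by revenue per unit of resource under a capacity constraint. This is NOT the paper's MRPA or MLBA (those are linear-relaxation-based and near-optimal); it is a simple hypothetical stand-in, with invented job names, demands, and prices.

```python
def greedy_admit(jobs, capacity):
    """Greedy stand-in for revenue maximization: admit jobs in
    decreasing order of revenue density (price / demand) until
    capacity is exhausted."""
    chosen, used, revenue = [], 0, 0.0
    for name, demand, price in sorted(jobs, key=lambda j: -j[2] / j[1]):
        if used + demand <= capacity:
            chosen.append(name)
            used += demand
            revenue += price
    return chosen, revenue

# (name, resource demand, price under its QoS class) -- illustrative only
jobs = [("gold", 4, 40.0), ("silver", 3, 18.0), ("bronze", 2, 6.0)]
```

With capacity 6, the greedy pass admits the gold and bronze jobs for revenue 46.0; an LP-relaxation approach like the paper's would additionally handle multiple resource dependencies and failure provisioning.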


Operating Systems Review | 2008

A distributed job scheduling and flow management system

Norman Bobroff; Gargi Dasgupta; Liana Fong; Yanbin Liu; Balaji Viswanathan; Fabio Benedetti; Jonathan Mark Wagner

Grid computing, as a specific model of distributed systems, has sparked recent interest in managing job execution among distributed resource domains. The introduction of the meta-scheduler is a key feature in grid evolution, and the next step is to achieve collaborative interactions between meta-schedulers within and across organizational boundaries, providing scalability, balanced resource utilization, and location transparency to job submitters. This paper details a distributed system design consisting of a collaborative meta-scheduling framework and an expanded resource model with schedulers and data as resources. With this framework, we also explore job scheduling and data management issues, and investigate job flow and meta-scheduling interactions for new applications that require job execution beyond simple sequential and conditional control.


International Conference on Service Oriented Computing | 2013

Behavioral Analysis of Service Delivery Models

Gargi Dasgupta; Renuka Sindhgatta; Shivali Agarwal

Enterprises and IT service providers are increasingly challenged with the goal of improving quality of service while reducing the cost of delivery. Effective distribution of complex customer workloads among delivery teams staffed by diverse personnel under strict service agreements is a serious management challenge. The challenges become more pronounced when organizations adopt ad hoc measures to reduce operational costs and mandate unscientific transformations. This paper simulates different delivery models in the face of complex customer workloads, stringent service contracts, and evolving skills, with the goal of scientifically deriving design principles for delivery organizations. Results show that while Collaborative models are beneficial for the highest-priority work, Integrated models work best for volume-intensive work by up-skilling the population with additional skills. In repetitive work environments where expertise can be gained, these training costs are compensated by higher throughput. This return on investment is highest when people have at most two skills. Decoupled models work well for simple workloads and relaxed service contracts.


International Conference on Service Oriented Computing | 2013

Does One-Size-Fit-All Suffice for Service Delivery Clients?

Shivali Agarwal; Renuka Sindhgatta; Gargi Dasgupta

The traditional mode of delivering IT services has been through customer-specific teams. A dedicated team is assigned to address all and only those requirements that are specific to the customer. However, this way of organizing service delivery leads to inefficiencies due to the inability to flexibly use expertise and available resources across teams. To address some of these challenges, there has recently been interest in shared delivery of services, where instead of customer-specific teams working in silos, there are cross-customer teams, i.e., shared resource pools that can potentially serve more than one customer. This, however, raises the question of the best way to group shared resources across customers. Especially given the large variations in the technical and domain skills required to address customer requirements, what should the service delivery model for diverse customer workloads be? Should it be customer focused, business-domain focused, or technology focused? This paper simulates different delivery models in the face of complex customer workloads, diverse customer profiles, stringent service contracts, and evolving skills, with the goal of scientifically deriving principles for choosing a suitable delivery model. Results show that workload arrival pattern, customer work-profile combinations, and domain skills all play a significant role in the choice of delivery model. Specifically, the complementary nature of work arrivals and the degree of overlapping skill requirements among customers play a crucial role in the choice of models. Interestingly, the impact of the skill expertise level of resources is overshadowed by these two factors.
