
Publication


Featured research published by Ruchi Mahindru.


Conference on Information and Knowledge Management | 2009

Characteristics of document similarity measures for compliance analysis

Asad B. Sayeed; Soumitra Sarkar; Yu Deng; Rafah A. Hosn; Ruchi Mahindru; Nithya Rajamani

Due to increased competition in the IT Services business, improving quality, reducing costs, and shortening schedules have become extremely important. A key strategy being adopted for achieving these goals is the use of an asset-based approach to service delivery, where standard reusable components developed by domain experts are minimally modified for each customer instead of creating custom solutions. One example of this approach is the use of contract templates, one for each type of service offered. A compliance checking system that measures how well actual contracts adhere to standard templates is critical for ensuring the success of such an approach. This paper describes the use of document similarity measures - cosine similarity and Latent Semantic Indexing - to identify the top candidate templates on which a more detailed (and expensive) compliance analysis can be performed. A comparison of the results obtained with the different methods is presented.
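The first-stage ranking described in this abstract can be illustrated with a minimal sketch of cosine similarity over term-frequency vectors. The contract and template texts below are invented for illustration, and the paper's actual pipeline also evaluates Latent Semantic Indexing, which this sketch omits:

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words term-frequency vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def top_candidate_templates(contract: str, templates: dict, k: int = 2):
    """Return the k template names most similar to the contract text."""
    ranked = sorted(templates,
                    key=lambda name: cosine_similarity(contract, templates[name]),
                    reverse=True)
    return ranked[:k]

templates = {  # hypothetical contract templates
    "managed-backup": "provider performs nightly backup and restore of customer data",
    "server-hosting": "provider hosts and patches customer servers in the data center",
}
contract = "the provider shall perform nightly backup and restore of all customer data"
print(top_candidate_templates(contract, templates, k=1))  # -> ['managed-backup']
```

Only the top-ranked candidates would then be passed to the more detailed, and more expensive, compliance analysis.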


Integrated Network Management | 2007

Automatic Structuring of IT Problem Ticket Data for Enhanced Problem Resolution

Xing Wei; Anca Sailer; Ruchi Mahindru; Gautam Kar

In this paper we propose a novel technique to automatically structure problem tickets consisting of free form, heterogeneous textual data, so that IT problem isolation and resolution can be performed rapidly. The originality of our technique consists in applying the conditional random fields (CRFs) supervised learning process to automatically identify individual units of information in the raw data. The CRFs have been shown to be effective on real-world tasks in various fields. We apply our technique to identify structural patterns specific to the problem ticket data used in call centers to enhance the problem resolution system used by remote technical assistance personnel. Most of the existing ticketing data is not explicitly structured, is highly noisy, and very heterogeneous in content, making it hard to effectively apply common data mining techniques to analyze and search the raw data. An example of such an analysis is the detection of the units of information containing the steps taken by the technical people to resolve a particular customer issue. We present a study of the accuracy of our results.
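As a rough illustration of the setup, not the authors' implementation: a CRF sequence labeler consumes per-token feature dictionaries extracted from the raw ticket text, and the trained model assigns labels that mark units of information. The ticket text and label names below are invented, and training the CRF itself is outside this sketch:

```python
def token_features(tokens, i):
    """Features for token i of the kind typically fed to a CRF labeler."""
    w = tokens[i]
    return {
        "word.lower": w.lower(),
        "word.isupper": w.isupper(),    # e.g. host names like PRD01
        "word.isdigit": w.isdigit(),
        "prev.lower": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next.lower": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

# hypothetical free-form ticket line
ticket = "Rebooted server PRD01 and verified service restored".split()
features = [token_features(ticket, i) for i in range(len(ticket))]
# a trained CRF would map this feature sequence to labels such as
# B-RESOLUTION / I-RESOLUTION, delimiting the resolution-step unit
print(features[2])
```

The supervised learning step then learns which feature patterns predict the boundaries of each unit, such as the resolution steps mentioned in the abstract.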


IEEE International Conference on Services Computing | 2008

Enhanced Maintenance Services with Automatic Structuring of IT Problem Ticket Data

Xing Wei; Anca Sailer; Ruchi Mahindru

We propose a novel technique to enhance IT maintenance services for problem isolation and resolution by automatically structuring problem tickets consisting of free-form, heterogeneous textual data. The originality of our technique consists in applying the Conditional Random Fields (CRFs) supervised learning process to automatically identify individual units of information in the raw data. We apply our technique to identify structural patterns specific to the problem ticket data used in call centers, since this data is not explicitly structured, is highly noisy, and is very heterogeneous in content, making it hard for remote technical support personnel to analyze and search. We present a study of the accuracy of our experimental results.


International Conference on Cloud Computing | 2016

Disaster Recovery for Cloud-Hosted Enterprise Applications

Long Wang; Richard E. Harper; Ruchi Mahindru; HariGovind V. Ramasamy

We describe disaster protection and recovery of cloud-hosted enterprise applications both at the cloud infrastructure level and at the application level. We explore scenarios which favor one option over the other, and scenarios where a combination of both is required for effective, end-to-end protection. Through case studies grounded in the experience of implementing disaster recovery for IBM's Cloud Managed Services (CMS) platform, we highlight the complexities of protecting enterprise applications on the cloud. For recovery planning and execution, we present a scheduling algorithm that recovers machines hosted on the cloud by taking into account application-level logical dependencies and the business criticalities of the applications.
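The scheduling idea in the last sentence can be sketched as a criticality-weighted topological sort: recover a machine only after its prerequisites, and among the machines that are ready, recover the most business-critical first. The machine names, dependency structure, and criticality scores below are invented, and the paper's actual algorithm may differ:

```python
import heapq

def recovery_order(machines, deps, criticality):
    """Order machines for recovery: dependencies first, then higher criticality.
    deps[m] is the set of machines that must be recovered before m."""
    indegree = {m: len(deps.get(m, ())) for m in machines}
    dependents = {m: [] for m in machines}
    for m, prereqs in deps.items():
        for p in prereqs:
            dependents[p].append(m)
    # max-heap on criticality via negated keys
    ready = [(-criticality[m], m) for m in machines if indegree[m] == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, m = heapq.heappop(ready)
        order.append(m)
        for d in dependents[m]:
            indegree[d] -= 1
            if indegree[d] == 0:
                heapq.heappush(ready, (-criticality[d], d))
    return order

machines = ["db", "app", "web", "batch"]
deps = {"app": {"db"}, "web": {"app"}}           # logical dependencies
criticality = {"db": 3, "app": 3, "web": 3, "batch": 1}
print(recovery_order(machines, deps, criticality))  # -> ['db', 'app', 'web', 'batch']
```

Here the low-criticality batch machine waits even though it has no dependencies, because the critical database/app/web chain is recovered first.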


IEEE International Conference on Cloud Engineering | 2016

Enabling Enterprise-Class Workloads in the Cloud

Valentina Salapura; Ruchi Mahindru

Enterprise-level workloads - such as SAP and Oracle workloads - require infrastructure with high availability, clustering, or physical server appliances, features which are often not part of a cloud offering. As a result, businesses are forced to run enterprise workloads in their legacy environments and cannot take advantage of the cloud's flexibility, elasticity, and low cost. The IBM Cloud Managed Services (CMS) cloud implements shared storage, clustering support, and private networks. These features effectively enable a large number of SAP and Oracle workloads to run in both virtualized and non-virtualized cloud environments. In this paper, we discuss a diverse set of enterprise applications implemented in the IBM CMS cloud.


IBM Journal of Research and Development | 2016

Enabling enterprise-level workloads in the enterprise-class cloud

Valentina Salapura; Ruchi Mahindru

Enterprise-level workloads, such as Systems, Applications, and Products (SAP) workloads, require infrastructure with high availability, clustering, or physical server appliances, features that are often not a part of a typical cloud offering. Thus, businesses are often forced to run enterprise workloads in their legacy environments and cannot take advantage of cloud computing's flexibility, elasticity, and low cost. To enable enterprise customers to use these workloads in a cloud, we enabled a large number of SAP enterprise-level workloads in the IBM Cloud Managed Services (CMS) cloud for virtualized and non-virtualized cloud environments. In this high-level paper, we discuss various general challenges and lessons learned, using a diverse set of platforms implemented in the IBM CMS cloud offering.


International Conference on Cloud Computing and Services Science | 2017

Business Resiliency Framework for Enterprise Workloads in the Cloud.

Valentina Salapura; Ruchi Mahindru; Richard E. Harper

Businesses with enterprise-level workloads such as Systems, Applications, and Products (SAP) workloads require business-level resiliency, including high availability, clustering, or physical server appliances. To enable businesses to use enterprise workloads in a cloud, the IBM Cloud Managed Services (CMS) cloud offers many SAP enterprise-level workloads for both virtualized and non-virtualized cloud environments. Based on our experience with enabling resiliency for enterprise-level workloads like SAP and Oracle, we realize that the end-to-end process is quite cumbersome, complex, and expensive. It would therefore be highly beneficial for customers and cloud providers to have a systematic business resiliency framework in place, one that fits the cloud model with an appropriate level of abstraction and automation while delivering the desired cost benefits. In this paper, we introduce an end-to-end business resiliency framework and resiliency life cycle. We further introduce an algorithm to determine the optimal resiliency pattern for enterprise applications using a diverse set of platforms in the IBM CMS cloud.


International Conference on Cloud Computing and Services Science | 2016

Availability Considerations for Mission Critical Applications in the Cloud

Valentina Salapura; Ruchi Mahindru

Cloud environments offer flexibility, elasticity, and low-cost compute infrastructure. Enterprise-level workloads - such as SAP and Oracle workloads - require infrastructure with high availability, clustering, or physical server appliances. These features are often not part of a typical cloud offering, and as a result, businesses are forced to run enterprise workloads in their legacy environments. To enable enterprise customers to use these workloads in a cloud, we enabled a large number of SAP and Oracle workloads in the IBM Cloud Managed Services (CMS) for both virtualized and non-virtualized cloud environments. In this paper, we discuss the challenges in enabling enterprise-class applications in the cloud based on our experience providing a diverse set of platforms implemented in the IBM CMS offering.


Dependable Systems and Networks | 2016

Activating Protection and Exercising Recovery Against Large-Scale Outages on the Cloud

Long Wang; HariGovind V. Ramasamy; Ruchi Mahindru; Richard E. Harper

Cloud computing provides rapid provisioning, convenient deployment, and simplified management of computing resources and applications with pay-as-you-go pricing models [1]. As more and more workloads are created on the cloud or migrated to the cloud for economic and flexibility reasons, it is important for developers, users, and service providers alike to understand the challenges, opportunities, complexities, and benefits of building dependable systems and applications on the cloud. In this tutorial, we give participants first-hand experience in protecting and recovering against large-scale cloud outages, e.g., failure of an entire cloud site. The tutorial is organized as a full-day activity and is designed to be hands-on. The content is targeted towards a broad audience of users, developers, practitioners, and researchers in the area of cloud computing. The content level is beginner-to-intermediate, suited for anyone with an undergraduate background in computer science (or equivalent) and basic programming skills. In the theory part of the tutorial, we introduce terminology, concepts, and metrics for providing resiliency on a cloud platform. We catalog factors that make building resilient applications on the cloud easy in some cases and particularly complicated in other cases. We present a reference architecture and a standard set of use cases for resiliency on the cloud. The bulk of the tutorial focuses on educating the audience with a series of hands-on exercises. We use an example set of cloud requirements as the starting point, and then guide the participants through the process of creating a protection and recovery plan. The plan covers details such as how to prioritize different workloads based on their criticality during recovery, what protection and recovery technologies should be used, and whether they should be used at the server level or application level. 
During the hands-on exercises, participants form teams or work individually to access a pre-created cloud virtual infrastructure and applications hosted on the IBM SoftLayer cloud [2], which is geographically distributed across multiple continents. Replication and recovery orchestration form the backbone of many cloud resiliency solutions. We guide the participants through the entire life cycle of a cloud resiliency solution: 1) activation of protection on a set of workloads, 2) recovery of protected workloads upon a large-scale outage, 3) failback of protected workloads from the recovery site to the original site upon restoration of the original site, and 4) test of the implemented protection and recovery solution to ensure the implementation conforms to the requirements. Using a real-world orchestration technology, participants activate protection against outages at multiple levels of the cloud stack, orchestrate the recovery procedure for a simulated site-level outage, and orchestrate failback to the primary cloud site (simulating the reconstruction of that site). We perform exercises for protecting and recovering both servers and applications using different types of replication technologies. The hands-on exercises are tailored to enable audience members to gain a strong grasp of the practical challenges involved in cloud resiliency, e.g., determining recovery priorities based on business criticality, recovery groups, and coordinated recovery across multiple virtual machines constituting a business application. Through the exercises, we reinforce core design principles and design elements for building resilient cloud applications. We expose the participants to a wide range of protection and recovery options for achieving resiliency against site-level cloud outages. Cloud users can use this experience to make more informed decisions on protecting their workloads against large-scale failures.
After the hands-on exercises, we conclude with a survey of commercial and academic solutions, emerging areas, and future research challenges in the area of cloud resiliency.


IEEE International Conference on Services Computing | 2013

An Ontology-Based Framework for Model-Driven Analysis of Situations in Data Centers

Yu Deng; Ronnie Sarkar; HariGovind V. Ramasamy; Rafah A. Hosn; Ruchi Mahindru

The capability to analyze systems and applications is commonly needed in data centers to address diverse problems such as root cause analysis of performance problems and failures, investigation of security attack propagation, and problem determination for predictive maintenance. Such analysis is typically facilitated by a hodgepodge of procedural code and scripts representing heuristics to be applied, and configuration databases representing state. As entities in the data center and relationships among them change, it is a challenge to keep the analysis tools up-to-date. We describe a framework that is based primarily on the principle of interpreting declarative representations of knowledge rather than capturing such knowledge in procedural code, and a variety of techniques for facilitating the continuous update of knowledge and state. A metamodel representing data center-specific domain knowledge forms the foundation for the framework. A model of the data center topological elements is an instantiation of the metamodel. Using the framework, we present a methodology for conducting a variety of analyses as a model-driven topology subgraph traversal, governed by knowledge embedded in the corresponding metamodel nodes. We apply the methodology to perform root cause analysis of performance problems in the domains of 3-tier Web and InfoSphere Streams applications.
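A toy rendering of the framework's central principle, with element types, check names, and topology invented for illustration: the analysis knowledge lives in a declarative metamodel keyed by element type, and a generic traversal applies it to the topology model, so keeping the analysis up-to-date means editing data rather than procedural code:

```python
# model: topology elements, each an instance of a metamodel type
topology = {
    "web1": {"type": "WebServer", "deps": ["app1"]},
    "app1": {"type": "AppServer", "deps": ["db1"]},
    "db1":  {"type": "Database",  "deps": []},
}
# metamodel: declarative per-type knowledge (which diagnostic to run)
metamodel = {
    "WebServer": {"check": "http_latency"},
    "AppServer": {"check": "heap_usage"},
    "Database":  {"check": "lock_waits"},
}

def analyze(start, topology, metamodel):
    """Depth-first subgraph traversal, applying the metamodel check at each node."""
    plan, seen = [], set()
    def visit(node):
        if node in seen:
            return
        seen.add(node)
        plan.append((node, metamodel[topology[node]["type"]]["check"]))
        for dep in topology[node]["deps"]:
            visit(dep)
    visit(start)
    return plan

print(analyze("web1", topology, metamodel))
# -> [('web1', 'http_latency'), ('app1', 'heap_usage'), ('db1', 'lock_waits')]
```

Root cause analysis of, say, a slow 3-tier Web application then becomes a traversal from the symptomatic element down its dependency subgraph, with each step governed by the knowledge attached to the corresponding metamodel node.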
