
Publication


Featured research published by Nagapramod Mandagere.


IBM Journal of Research and Development | 2014

Efficient and agile storage management in software defined environments

Alfredo Alba; Gabriel Alatorre; Christian Bolik; Ann Corrao; Thomas Keith Clark; Sandeep Gopisetty; Robert Haas; Ronen I. Kat; Bryan Langston; Nagapramod Mandagere; Dietmar Noll; Sumant Padbidri; Ramani R. Routray; Yang Song; Chung-Hao Tan; Avishay Traeger

The IT industry is experiencing a disruptive trend for which the entire data center infrastructure is becoming software defined and programmable. IT resources are provisioned and optimized continuously according to a declarative and expressive specification of the workload requirements. The software defined environments facilitate agile IT deployment and responsive data center configurations that enable rapid creation and optimization of value-added services for clients. However, this fundamental shift introduces new challenges to existing data center management solutions. In this paper, we focus on the storage aspect of the IT infrastructure and investigate its unique challenges as well as opportunities in the emerging software defined environments. Current state-of-the-art software defined storage (SDS) solutions are discussed, followed by our novel framework to advance the existing SDS solutions. In addition, we study the interactions among SDS, software defined compute (SDC), and software defined networking (SDN) to demonstrate the necessity of a holistic orchestration and to show that joint optimization can significantly improve the effectiveness and efficiency of the overall software defined environments.
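To make the provisioning idea concrete, here is a minimal sketch (our illustration, not the paper's framework) of matching a declarative workload specification to storage pools. The pool names, requirement fields, and the greedy cheapest-fit rule are all assumptions for illustration.

```python
# Illustrative sketch: place each workload on the cheapest storage pool
# that satisfies its declaratively specified requirements (capacity, IOPS).
# Field names and the greedy policy are hypothetical, not the paper's design.

def provision(workloads, pools):
    """Greedily place the most demanding workloads first; a workload with
    no feasible pool is mapped to None (unsatisfiable as specified)."""
    placement = {}
    for w in sorted(workloads, key=lambda w: -w["iops"]):
        candidates = [p for p in pools
                      if p["free_gb"] >= w["gb"] and p["iops"] >= w["iops"]]
        if not candidates:
            placement[w["name"]] = None
            continue
        best = min(candidates, key=lambda p: p["cost_per_gb"])
        best["free_gb"] -= w["gb"]     # reserve capacity on the chosen pool
        placement[w["name"]] = best["name"]
    return placement

pools = [
    {"name": "ssd-tier", "free_gb": 500, "iops": 50000, "cost_per_gb": 0.50},
    {"name": "hdd-tier", "free_gb": 5000, "iops": 2000, "cost_per_gb": 0.05},
]
workloads = [
    {"name": "oltp-db", "gb": 200, "iops": 20000},
    {"name": "archive", "gb": 1000, "iops": 100},
]
print(provision(workloads, pools))  # the database lands on SSD, the archive on HDD
```

A joint SDS/SDC/SDN optimizer, as the paper argues, would extend this kind of matching with compute and network constraints rather than deciding storage placement in isolation.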


International Congress on Big Data | 2013

Storage Mining: Where IT Management Meets Big Data Analytics

Yang Song; Gabriel Alatorre; Nagapramod Mandagere; Aameek Singh

The emerging paradigm shift to cloud-based data center infrastructures imposes remarkable challenges on IT management operations, e.g., due to virtualization techniques and more stringent cost and efficiency requirements. On one hand, the voluminous data generated by daily IT operations, such as logs and performance measurements, contains abundant information and insights that can be leveraged to assist IT management. On the other hand, traditional IT management solutions cannot consume and exploit this rich information due to its daunting volume, velocity, and variety, as well as the lack of scalable data mining and machine learning frameworks to extract insights from such raw data. In this paper, we present our ongoing research thrust of designing novel IT management solutions that leverage big data analytics frameworks. As an example, we introduce our Storage Mining project, which exploits big data analytics techniques to facilitate storage cloud management. We discuss the challenges and present our proof-of-concept big data analytics framework.
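A toy example of the "storage mining" idea, mining routine performance measurements for operational insight. The metric, sample values, and simple z-score rule are our assumptions, not the paper's actual analytics pipeline.

```python
# Illustrative sketch: flag storage volumes whose latency deviates strongly
# from the fleet-wide mean, a tiny instance of mining IT operations data.
from statistics import mean, stdev

def flag_outliers(latency_ms, k=1.5):
    """Flag volumes whose latency exceeds the fleet mean by more than
    k standard deviations (a deliberately simple anomaly rule)."""
    mu = mean(latency_ms.values())
    sigma = stdev(latency_ms.values())
    return sorted(v for v, lat in latency_ms.items() if lat > mu + k * sigma)

samples = {"vol-a": 4.1, "vol-b": 3.9, "vol-c": 4.3, "vol-d": 4.0, "vol-e": 25.0}
print(flag_outliers(samples))  # the one anomalous volume stands out
```

A production system would of course run such rules at scale over streaming logs, which is where the big data frameworks the paper discusses come in.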


International Conference on Cloud Computing | 2014

Improving Hadoop Service Provisioning in a Geographically Distributed Cloud

Qi Zhang; Ling Liu; Kisung Lee; Yang Zhou; Aameek Singh; Nagapramod Mandagere; Sandeep Gopisetty; Gabriel Alatorre

With more data generated and collected in a geographically distributed manner, combined with the increased computational requirements of large-scale data-intensive analysis, we have witnessed growing demand for geographically distributed Cloud datacenters and hybrid Cloud service provisioning, which enable organizations to meet instantaneous demand for additional computational resources and to sustain peak service demands by extending in-house resources with cloud resources. A key challenge for running applications in such a geographically distributed computing environment is how to efficiently schedule and perform analysis over data that is spread across multiple datacenters. In this paper, we first compare multi-datacenter Hadoop deployment with single-datacenter Hadoop deployment to identify the performance issues inherent in a geographically distributed cloud. We also generalize the problem characterization in the context of geographically distributed cloud datacenters and discuss general optimization strategies. We then describe the design and implementation of a suite of system-level optimizations for improving the performance of Hadoop service provisioning in a geo-distributed cloud, including prediction-based job localization, configurable HDFS data placement, and data prefetching. Our experimental evaluation shows that our prediction-based localization has a very low error ratio, smaller than 5%, and that our optimizations can improve the execution time of the Reduce phase by 48.6%.
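The job-localization intuition can be sketched as follows. This is our simplified reading, not the paper's predictor: run a job in the datacenter that already holds most of its input blocks, and prefetch the rest over the WAN. Block and datacenter names are invented.

```python
# Hypothetical sketch of locality-aware job placement across datacenters:
# choose the site holding the largest share of the job's input blocks.

def localize(job_blocks, block_locations):
    """Return (chosen datacenter, blocks that must be prefetched there).
    block_locations maps each block to the set of sites storing a replica."""
    tally = {}
    for block in job_blocks:
        for dc in block_locations[block]:
            tally[dc] = tally.get(dc, 0) + 1
    best = max(tally, key=lambda dc: tally[dc])
    remote = [b for b in job_blocks if best not in block_locations[b]]
    return best, remote

locations = {"b1": {"us"}, "b2": {"us", "eu"}, "b3": {"us"}, "b4": {"eu"}}
site, prefetch = localize(["b1", "b2", "b3", "b4"], locations)
print(site, prefetch)  # place the job where 3 of 4 blocks already live
```

The paper's actual scheme additionally predicts where a job's input will be and overlaps the prefetch with execution, which this static sketch omits.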


Symposium on Reliable Distributed Systems | 2011

Modeling the Fault Tolerance Consequences of Deduplication

Eric Rozier; William H. Sanders; Pin Zhou; Nagapramod Mandagere; Sandeep M. Uttamchandani; Mark L. Yakushev

Modern storage systems are employing data deduplication with increasing frequency. Often the storage systems on which these techniques are deployed contain important data and utilize fault-tolerant hardware and software to improve reliability and reduce data loss. We suggest that data deduplication introduces inter-file relationships that may have a negative impact on the fault tolerance of such systems by creating dependencies that can increase the severity of data loss events. We present a framework, composed of data analysis methods and a model of data deduplication, for studying the reliability impact of deduplication; the framework can be used to determine a deduplication strategy that is estimated to satisfy a set of reliability constraints supplied by a user.
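The severity-amplification effect can be seen in a toy model (ours, not the paper's framework): losing one physical chunk loses every file that references it, so a highly deduplicated chunk turns a single loss event into a multi-file loss.

```python
# Toy model of deduplication-induced failure coupling: count how many
# files become unreadable if a given shared chunk is lost.

def loss_severity(file_chunks):
    """Map each chunk to the number of files referencing it, i.e. the
    number of files lost if that single chunk is corrupted."""
    refs = {}
    for f, chunks in file_chunks.items():
        for c in set(chunks):
            refs.setdefault(c, set()).add(f)
    return {c: len(fs) for c, fs in refs.items()}

files = {
    "a.doc": ["c1", "c2"],
    "b.doc": ["c2", "c3"],
    "c.doc": ["c2", "c4"],
}
sev = loss_severity(files)
print(sev["c2"], sev["c1"])  # the shared chunk c2 takes three files with it
```

A reliability model along the paper's lines would weight these reference counts by per-chunk failure probabilities to estimate expected data loss under a given deduplication strategy.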


International Conference on Service-Oriented Computing | 2015

rSLA: Monitoring SLAs in Dynamic Service Environments

Heiko Ludwig; Katerina Stamou; Mohamed Mohamed; Nagapramod Mandagere; Bryan Langston; Gabriel Alatorre; Hiroaki Nakamura; Obinna Anya; Alexander Keller

Today’s application environments combine Cloud and on-premise infrastructure, as well as platforms and services from different providers, to enable quick development and delivery of solutions to their intended users. The ability to use Cloud platforms to stand up applications in a short time frame, the wide availability of Web services, and the adoption of a continuous deployment model have led to very dynamic application environments, in which managing quality of service has become more important. The more external service vendors are involved, the less control an application owner has and the more it must rely on Service Level Agreements (SLAs). However, SLA management is becoming more difficult: services from different vendors expose different instrumentation, and the increasing dynamism of application environments means that the speed of setting up SLA monitoring must match the speed of changes to the application environment.


Annual SRII Global Conference | 2014

Intelligent Information Lifecycle Management in Virtualized Storage Environments

Gabriel Alatorre; Aameek Singh; Nagapramod Mandagere; Eric K. Butler; Sandeep Gopisetty; Yang Song

Data or information lifecycle management (ILM) is the process of managing data over its lifecycle in a manner that balances cost and performance. The task is made difficult by data's continuously changing business value. If done well, ILM can lower costs through the increased use of cost-effective storage, but it also runs the risk of degrading performance if data is inadvertently placed on the wrong device (e.g., low-performance or over-utilized storage). To address this challenge, we designed and developed the Intelligent Storage Tiering Manager (ISTM), an analytics-driven storage tiering tool that automates the process of load balancing data across and within different storage tiers in virtualized storage environments. Using administrator-generated policies, ISTM finds data with the specified performance profiles and automatically moves it to the appropriate storage tier. Application impact is minimized by limiting overall migration load and keeping data accessible during migration. Automation results in significantly less labor and fewer errors while reducing task completion time from several days (and in some cases weeks) to a few hours. In this paper, we provide an overview of information lifecycle management, discuss existing solutions, and finally focus on the design and deployment of our ILM solution, ISTM, within a production data center.
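A policy-driven tiering decision in this spirit can be sketched in a few lines. The tier names, IOPS thresholds, and promote/demote rule are invented for illustration; a real system such as ISTM would also throttle migration load and keep data online during moves, as the abstract notes.

```python
# Illustrative sketch of an administrator-policy tiering pass: promote
# busy volumes to SSD, demote idle ones to HDD, leave the rest in place.

def plan_moves(volumes, hot_iops=1000, cold_iops=50):
    """Return (volume, source tier, target tier) tuples for each volume
    whose activity no longer matches its current tier."""
    moves = []
    for v in volumes:
        if v["iops"] >= hot_iops and v["tier"] != "ssd":
            moves.append((v["name"], v["tier"], "ssd"))
        elif v["iops"] <= cold_iops and v["tier"] != "hdd":
            moves.append((v["name"], v["tier"], "hdd"))
    return moves

vols = [
    {"name": "v1", "tier": "hdd", "iops": 4000},  # hot data on slow disk
    {"name": "v2", "tier": "ssd", "iops": 10},    # cold data wasting SSD
    {"name": "v3", "tier": "ssd", "iops": 6000},  # already well placed
]
print(plan_moves(vols))
```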


International Conference on Management of Data | 2011

Warding off the dangers of data corruption with amulet

Nedyalko Borisov; Shivnath Babu; Nagapramod Mandagere; Sandeep M. Uttamchandani

Occasional corruption of stored data is an unfortunate byproduct of the complexity of modern systems. Hardware errors, software bugs, and mistakes by human administrators can corrupt important sources of data. The dominant practice to deal with data corruption today involves administrators writing ad hoc scripts that run data-integrity tests at the application, database, file-system, and storage levels. This manual approach is tedious, error-prone, and provides no understanding of the potential system unavailability and data loss if a corruption were to occur. We introduce the Amulet system, which addresses the problem of verifying the correctness of stored data proactively and continuously. To our knowledge, Amulet is the first system that: (i) gives administrators a declarative language to specify their objectives regarding the detection and repair of data corruption; (ii) contains optimization and execution algorithms to ensure that the administrators' objectives are met robustly and at the least cost, e.g., using pay-as-you-go cloud resources; and (iii) provides timely notification when corruption is detected, allowing proactive repair of corruption before it impacts users and applications. We describe the implementation and a comprehensive evaluation of Amulet for a database software stack deployed on an infrastructure-as-a-service cloud provider.
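The proactive-verification idea can be illustrated with a minimal checker. This is our sketch, not Amulet's declarative language: objects register an expected checksum, and a periodic pass reports any object whose current checksum no longer matches.

```python
# Minimal illustration of continuous integrity checking: record a checksum
# per object, then scan and report mismatches as detected corruption.
import hashlib

def register(checks, name, data_fn, expected_digest):
    """Declare an integrity check: name, a way to read the data, and
    the digest it is expected to have."""
    checks[name] = (data_fn, expected_digest)

def run_checks(checks):
    """Return names of objects whose current SHA-256 digest no longer
    matches the recorded one."""
    corrupted = []
    for name, (data_fn, expected) in checks.items():
        if hashlib.sha256(data_fn()).hexdigest() != expected:
            corrupted.append(name)
    return corrupted

good = b"hello"
digest = hashlib.sha256(good).hexdigest()
checks = {}
register(checks, "table:orders", lambda: good, digest)
register(checks, "file:log", lambda: b"hellp", digest)  # one flipped byte
print(run_checks(checks))  # only the corrupted object is reported
```

Amulet's contribution, per the abstract, is scheduling such checks across stack levels to meet declared objectives at least cost; this sketch shows only the detection primitive.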


International Conference on Cloud Computing | 2016

rSLA: A Service Level Agreement Language for Cloud Services

Samir Tata; Mohamed Mohamed; Takashi Sakairi; Nagapramod Mandagere; Obinna Anya; Heiko Ludwig

The quality of Cloud services is a key determinant of the overall service level a provider offers to its customers. Service Level Agreements (SLAs) are crucial for Cloud customers to ensure that promised levels of services are met, and an important sales instrument and a differentiating factor for providers. Cloud providers offer services at different levels of abstraction, from infrastructure to applications. Also, Cloud providers and services are often selected more dynamically than in traditional IT services, and as a result, SLAs need to be set up and monitoring implemented to match this speed. This paper presents the rSLA language for specifying and enforcing SLAs for Cloud services, allowing for dynamic instrumentation of heterogeneous Cloud services and instantaneous deployment of SLA monitoring. This is predicated on formal representations of SLAs in the language. We describe how the rSLA language and its supporting framework as well as underlying SLA execution model enable the fast deployment of custom SLAs in heterogeneous and hybrid Cloud environments.
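As a hedged sketch of the kind of agreement an SLA language expresses, consider a metric, an objective over it, and an evaluation yielding compliance. The percentile objective, record layout, and thresholds below are our assumptions, not rSLA syntax.

```python
# Illustrative SLO evaluation: "the p-th percentile latency over the
# observation window must stay at or below threshold_ms".
import math

def percentile(values, p):
    """Nearest-rank percentile over a non-empty sample."""
    vs = sorted(values)
    idx = max(0, math.ceil(p / 100 * len(vs)) - 1)
    return vs[idx]

def evaluate_slo(samples, p=95, threshold_ms=200):
    observed = percentile(samples, p)
    return {"observed_ms": observed, "compliant": observed <= threshold_ms}

window = [120, 130, 110, 150, 500, 140, 125, 135, 145, 115, 155]
print(evaluate_slo(window))  # the tail sample pushes the window out of compliance
```

In rSLA, per the paper, such objectives are formal SLA representations that the framework interprets and binds to heterogeneous service instrumentation, rather than hand-written evaluation code like this.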


IEEE International Conference on Services Computing | 2016

The rSLA Framework: Monitoring and Enforcement of Service Level Agreements for Cloud Services

Mohamed Mohamed; Obinna Anya; Takashi Sakairi; Samir Tata; Nagapramod Mandagere; Heiko Ludwig

Managing service quality in heterogeneous Cloud environments is complex: different Cloud providers expose different management interfaces. To manage Service Level Agreements (SLAs) in this context, we have developed the rSLA framework, which enables fast setup of SLA monitoring in dynamic and heterogeneous Cloud environments. The rSLA framework is made up of three main components: the rSLA language to formally represent SLAs; the rSLA Service, which interprets the SLAs and implements the behavior specified in them; and a set of Xlets, lightweight, dynamically bound adapters to monitoring and control interfaces. In this paper, we present the rSLA framework and describe how it enables the monitoring and enforcement of service level agreements for heterogeneous Cloud services.


Cluster Computing and the Grid | 2014

MapReduce Analysis for Cloud-Archived Data

Balaji Palanisamy; Aameek Singh; Nagapramod Mandagere; Gabriel Alatorre; Ling Liu

Public storage clouds have become a popular choice for archiving certain classes of enterprise data, for example, application and infrastructure logs. These logs contain sensitive information such as IP addresses or user logins, so regulatory and security requirements often require data to be encrypted before being moved to the cloud. To derive any business value from such data, analytics systems (e.g., Hadoop/MapReduce) first download it from these public clouds, decrypt it, and then process it at the secure enterprise site. We propose VNcache: an efficient solution for MapReduce analysis of such cloud-archived log data that does not require an a priori data transfer and load into the local Hadoop cluster. VNcache dynamically integrates cloud-archived data into a virtual namespace at the enterprise Hadoop cluster. Through a seamless data streaming and prefetching model, Hadoop jobs can begin execution as soon as they are launched, without any a priori downloading. With VNcache's accurate prefetching and caching, jobs often run on a locally cached copy of the data blocks, significantly improving performance. When no longer needed, data is safely evicted from the enterprise cluster, reducing the total storage footprint. Uniquely, VNcache is implemented with no changes to the Hadoop application stack.
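The prefetching idea behind VNcache, as we read it, can be sketched with a small block cache: expose remote blocks through one interface and warm the cache ahead of a sequential scan, so most reads hit locally. The prefetch depth and the fetch callback are made-up knobs, not VNcache's actual design.

```python
# Illustrative sketch: a read-through block cache that prefetches the next
# few blocks, so a sequential job mostly hits the local cache.

class BlockCache:
    def __init__(self, fetch, prefetch_depth=2):
        self.fetch = fetch            # pulls one block from the remote store
        self.depth = prefetch_depth
        self.cache = {}
        self.hits = self.misses = 0

    def read(self, i):
        if i in self.cache:
            self.hits += 1
        else:
            self.misses += 1
            self.cache[i] = self.fetch(i)
        # warm the cache for the blocks a sequential scan will need next
        for j in range(i + 1, i + 1 + self.depth):
            if j not in self.cache:
                self.cache[j] = self.fetch(j)
        return self.cache[i]

bc = BlockCache(fetch=lambda i: b"block-%d" % i)
for i in range(6):                    # a sequential scan over six blocks
    bc.read(i)
print(bc.hits, bc.misses)             # only the very first read misses
```

With depth 2 and a purely sequential scan, every read after the first finds its block already staged, which is the effect the paper reports for jobs running on locally cached copies.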
