Publications


Featured research published by Ramani R. Routray.


IBM Journal of Research and Development | 2014

Efficient and agile storage management in software defined environments

Alfredo Alba; Gabriel Alatorre; Christian Bolik; Ann Corrao; Thomas Keith Clark; Sandeep Gopisetty; Robert Haas; Ronen I. Kat; Bryan Langston; Nagapramod Mandagere; Dietmar Noll; Sumant Padbidri; Ramani R. Routray; Yang Song; Chung-Hao Tan; Avishay Traeger

The IT industry is experiencing a disruptive trend in which the entire data center infrastructure is becoming software defined and programmable. IT resources are provisioned and optimized continuously according to a declarative and expressive specification of the workload requirements. Software defined environments facilitate agile IT deployment and responsive data center configurations that enable rapid creation and optimization of value-added services for clients. However, this fundamental shift introduces new challenges to existing data center management solutions. In this paper, we focus on the storage aspect of the IT infrastructure and investigate its unique challenges as well as opportunities in the emerging software defined environments. Current state-of-the-art software defined storage (SDS) solutions are discussed, followed by our novel framework to advance the existing SDS solutions. In addition, we study the interactions among SDS, software defined compute (SDC), and software defined networking (SDN) to demonstrate the necessity of holistic orchestration and to show that joint optimization can significantly improve the effectiveness and efficiency of the overall software defined environment.
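
Below is a minimal Python sketch of the declarative, requirement-driven provisioning idea this abstract describes: a workload states its requirements and a controller picks a satisfying storage pool. The paper publishes no code; the class names, fields, numbers, and the cheapest-fit policy are all invented for illustration.

```python
# Illustrative sketch only: the paper describes declarative, workload-aware
# storage provisioning in software defined environments, but publishes no
# code. All names, fields, and numbers here are hypothetical.
from dataclasses import dataclass

@dataclass
class StoragePool:
    name: str
    free_gb: int         # remaining capacity
    max_iops: int        # remaining IOPS headroom
    latency_ms: float    # typical read latency
    cost_per_gb: float   # relative cost unit

@dataclass
class WorkloadSpec:
    name: str
    capacity_gb: int
    iops: int
    max_latency_ms: float

def provision(spec: WorkloadSpec, pools: list[StoragePool]) -> StoragePool | None:
    """Pick the cheapest pool that satisfies the declarative spec."""
    candidates = [p for p in pools
                  if p.free_gb >= spec.capacity_gb
                  and p.max_iops >= spec.iops
                  and p.latency_ms <= spec.max_latency_ms]
    if not candidates:
        return None  # a real SDS controller would trigger re-optimization here
    best = min(candidates, key=lambda p: p.cost_per_gb)
    best.free_gb -= spec.capacity_gb   # reserve resources for this workload
    best.max_iops -= spec.iops
    return best

pools = [StoragePool("flash-tier", 2_000, 50_000, 0.5, 10.0),
         StoragePool("hybrid-tier", 10_000, 20_000, 2.0, 3.0)]
spec = WorkloadSpec("oltp-db", capacity_gb=500, iops=15_000, max_latency_ms=1.0)
print(provision(spec, pools).name)   # -> flash-tier
```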


IBM Journal of Research and Development | 2008

Evolution of storage management: transforming raw data into information

Sandeep Gopisetty; Sandip Agarwala; Eric K. Butler; Divyesh Jadav; Stefan Jaquet; Madhukar R. Korupolu; Ramani R. Routray; Prasenjit Sarkar; Aameek Singh; Miriam Sivan-Zimet; Chung-Hao Tan; Sandeep M. Uttamchandani; David Merbach; Sumant Padbidri; Andreas Dieberger; Eben M. Haber; Eser Kandogan; Cheryl A. Kieliszewski; Dakshi Agrawal; Murthy V. Devarakonda; Kang-Won Lee; Kostas Magoutis; Dinesh C. Verma; Norbert G. Vogl

Exponential growth in storage requirements and an increasing number of heterogeneous devices and application policies are making enterprise storage management a nightmare for administrators. Back-of-the-envelope calculations, rules of thumb, and manual correlation of individual device data are too error prone for the day-to-day administrative tasks of resource provisioning, problem determination, performance management, and impact analysis. Storage management tools have evolved over the past several years from standardizing the data reported by storage subsystems to providing intelligent planners. In this paper, we describe that evolution in the context of the IBM TotalStorage® Productivity Center (TPC), a suite of tools to assist administrators in the day-to-day tasks of monitoring, configuring, provisioning, managing change, analyzing configuration, managing performance, and determining problems. We describe our ongoing research into ways to simplify and automate these tasks by applying advanced analytics to the performance statistics and raw configuration and event data collected by TPC using the popular Storage Management Initiative-Specification (SMI-S). In addition, we provide details of SMART (storage management analytics and reasoning technology), a library that provides a collection of data-aggregation functions and optimization algorithms.
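
As a rough illustration of the kind of data-aggregation function such a library might expose, here is a hypothetical Python helper that rolls raw per-device samples up into per-device, per-metric summaries. The sample format and names are assumptions, not the actual SMART API.

```python
# Hypothetical sketch of a SMART-style aggregation helper. The actual SMART
# library is not public; the names and sample format here are invented.
from collections import defaultdict
from statistics import mean

# Each sample: (device_id, metric, value), as a monitor might collect via SMI-S.
samples = [
    ("subsystem-A", "read_iops", 1200), ("subsystem-A", "read_iops", 1800),
    ("subsystem-B", "read_iops", 400),  ("subsystem-A", "latency_ms", 2.5),
]

def aggregate(samples, reducer=mean):
    """Roll raw per-device samples up into one value per (device, metric)."""
    buckets = defaultdict(list)
    for device, metric, value in samples:
        buckets[(device, metric)].append(value)
    return {key: reducer(values) for key, values in buckets.items()}

print(aggregate(samples))
# {('subsystem-A', 'read_iops'): 1500, ('subsystem-B', 'read_iops'): 400,
#  ('subsystem-A', 'latency_ms'): 2.5}
```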


Network Operations and Management Symposium | 2008

ChargeView: An integrated tool for implementing chargeback in IT systems

Sandip Agarwala; Ramani R. Routray; Sandeep M. Uttamchandani

Most organizations are becoming increasingly reliant on IT products and services to manage their daily operations. The total cost of ownership (TCO), which includes hardware and software purchase costs, management costs, etc., has increased significantly and forms one of the major portions of a company's total expenditure. CIOs have been struggling to justify the increased costs while still fulfilling the IT needs of their organizations. For businesses to be successful, these costs need to be carefully accounted for and attributed to the specific processes or user groups/departments responsible for the consumption of IT resources. This process is called IT chargeback and, although desirable, is hard to implement because of the increased consolidation of IT resources via technologies like virtualization. Current IT chargeback methods are either too complex or too ad hoc, often lead to unnecessary tension between IT and business departments, and fail to achieve the goal for which chargeback was implemented. This paper presents a new tool called ChargeView that automates the process of IT costing and chargeback. First, it provides a flexible hierarchical framework that encapsulates the cost of IT operations at different levels of granularity. Second, it provides an easy way to account for different kinds of hardware and management costs. Third, it permits the implementation of multiple chargeback policies that fit an organization's goals and establishes the relationship between cost and usage by different users and departments within the organization. Finally, its advanced analytics functions can track usage and cost trends, measure unused resources, and aid in determining service pricing. We discuss the prototype implementation of ChargeView and show how it has been used for managing complex systems and storage networks.
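
To make the hierarchical cost framework and policy idea concrete, here is a toy Python sketch: a cost-center tree that rolls costs upward, plus one usage-proportional chargeback policy. ChargeView is not open source; every name and number below is invented.

```python
# Illustrative sketch of the kind of hierarchical cost model the abstract
# describes. This structure is invented, not ChargeView's implementation.
from dataclasses import dataclass, field

@dataclass
class CostNode:
    name: str
    direct_cost: float = 0.0            # e.g. hardware owned at this level
    children: list["CostNode"] = field(default_factory=list)

    def total_cost(self) -> float:
        """Cost at this node plus everything rolled up from below."""
        return self.direct_cost + sum(c.total_cost() for c in self.children)

def chargeback(shared_cost: float, usage: dict[str, float]) -> dict[str, float]:
    """One simple policy: split a shared cost in proportion to measured usage."""
    total = sum(usage.values())
    return {dept: shared_cost * u / total for dept, u in usage.items()}

datacenter = CostNode("datacenter", 0.0, [
    CostNode("storage", 50_000.0),
    CostNode("compute", 80_000.0, [CostNode("vm-farm", 20_000.0)]),
])
print(datacenter.total_cost())                           # 150000.0
print(chargeback(50_000.0, {"sales": 3.0, "eng": 7.0}))  # sales: 15000, eng: 35000
```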


IBM Journal of Research and Development | 2008

Automated planners for storage provisioning and disaster recovery

Sandeep Gopisetty; Eric K. Butler; Stefan Jaquet; Madhukar R. Korupolu; Tapan Kumar Nayak; Ramani R. Routray; Mark James Seaman; Aameek Singh; Chung-Hao Tan; Sandeep M. Uttamchandani; Akshat Verma

Introducing an application into a data center involves complex interrelated decision-making for the placement of data (where to store it) and resiliency in the event of a disaster (how to protect it). Automated planners can assist administrators in making intelligent placement and resiliency decisions when provisioning for both new and existing applications. Such planners take advantage of recent improvements in storage resource management and provide guided recommendations based on monitored performance data and storage models. For example, the IBM Provisioning Planner provides intelligent decision-making for the steps involved in allocating and assigning storage for workloads. It involves planning for the number, size, and location of volumes on the basis of workload performance requirements and hierarchical constraints, planning for the appropriate number of paths, and enabling access to volumes using zoning, masking, and mapping. The IBM Disaster Recovery (DR) Planner enables administrators to choose and deploy appropriate replication technologies spanning servers, the network, and storage volumes to provide resiliency to the provisioned application. The DR Planner begins with a list of high-level application DR requirements and creates an integrated plan that is optimized on criteria such as cost and solution homogeneity. The Planner deploys the selected plan using orchestrators that are responsible for failover and failback.
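
The volume-placement step lends itself to a small worked example. The following Python sketch uses a greedy first-fit-decreasing heuristic to assign requested volumes to pools under capacity constraints; this is an illustrative stand-in, not the IBM Provisioning Planner's actual algorithm.

```python
# A minimal sketch of the placement sub-problem a provisioning planner
# solves: assign requested volumes to pools without violating capacity.
def place_volumes(volumes: dict[str, int], pools: dict[str, int]) -> dict[str, str]:
    """volumes: name -> size_gb; pools: name -> free_gb. Returns volume -> pool."""
    placement = {}
    free = dict(pools)
    # Place the largest volumes first: they are the hardest to fit.
    for vol, size in sorted(volumes.items(), key=lambda kv: -kv[1]):
        pool = next((p for p, f in free.items() if f >= size), None)
        if pool is None:
            raise RuntimeError(f"no pool can host {vol} ({size} GB)")
        placement[vol] = pool
        free[pool] -= size
    return placement

print(place_volumes({"db-log": 100, "db-data": 800, "scratch": 300},
                    {"pool-1": 1000, "pool-2": 500}))
# {'db-data': 'pool-1', 'scratch': 'pool-2', 'db-log': 'pool-1'}
```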


International Conference on Networking, Architecture, and Storage | 2007

iSAN: Storage Area Network Management Modeling Simulation

Ramani R. Routray; Sandeep Gopisetty; Pallavi Galgali; Amit Modi; Shripad Nadgowda

Storage management plays an important role in ensuring the service level agreements (availability, reliability, performance, etc.) that are critical to the operation of a resilient business IT infrastructure. Storage resource management (SRM) is also becoming the largest component in the overall cost of ownership of any large enterprise IT environment. Considering these functional requirements and business opportunities, several SRM suites have appeared in the marketplace to provide uniform and interoperable management. However, developing and testing these suites requires access to a huge set of heterogeneous, multi-vendor storage area network (SAN) devices such as Fibre Channel switches, storage subsystems, tape libraries, and servers. It is almost impractical for an SRM suite manufacturer to own and manage such a large variety of devices. With the emergence of CIM and SMI-S, the management modules of SAN devices have become logically independent components. In this paper, we propose a framework and implementation named iSAN (imitation storage area network) that models the management module of these devices. Our tool can be used to perform (a) simulation of management modules, ranging from an individual device to a large-scale multi-vendor heterogeneous enterprise SAN, and (b) what-if analysis of an enterprise IT environment by modeling changes before they are made. The tool makes the development and testing of SRM suites and the planning of IT environments efficient and cost-effective by removing their dependence on high-cost SAN hardware. We have implemented iSAN, and our experimental results show that it attains the above functional objectives and brings significant productivity to enterprise IT environment management.
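
The core idea, answering management queries from an in-memory model instead of real hardware, can be sketched briefly. The Python below imitates a Fibre Channel switch's CIM/SMI-S-style management interface; the class and property names are simplified inventions, not actual SMI-S schema.

```python
# Hypothetical sketch of the idea behind iSAN: stand in for a real device's
# management module so SRM software can be tested without hardware.
class SimulatedSwitch:
    """Answers management queries the way a Fibre Channel switch's
    CIM/SMI-S agent would, but backed by an in-memory model."""
    def __init__(self, name: str, port_count: int):
        self.name = name
        self.ports = {i: {"state": "enabled", "attached_wwpn": None}
                      for i in range(port_count)}

    def enumerate_instances(self, cim_class: str):
        if cim_class == "FCPort":
            return [{"DeviceID": f"{self.name}:port{i}", **p}
                    for i, p in self.ports.items()]
        raise KeyError(f"unsupported class: {cim_class}")

    def modify_instance(self, port: int, state: str):
        self.ports[port]["state"] = state   # what-if: disable a port, re-run SRM

switch = SimulatedSwitch("sim-fc-switch-1", port_count=4)
switch.modify_instance(2, "disabled")
print(switch.enumerate_instances("FCPort")[2])
# {'DeviceID': 'sim-fc-switch-1:port2', 'state': 'disabled', 'attached_wwpn': None}
```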


International Conference on Cloud Computing | 2011

IO Tetris: Deep Storage Consolidation for the Cloud via Fine-Grained Workload Analysis

Rui Zhang; Ramani R. Routray; David M. Eyers; David D. Chambliss; Prasenjit Sarkar; Douglas Willcocks; Peter R. Pietzuch

Intelligent workload consolidation in storage systems leads to better return on investment (ROI), in terms of more efficient use of data center resources, better quality of service (QoS), and lower power consumption. This is particularly significant, yet challenging, in a cloud environment, in which a large set of different workloads multiplex on a shared, heterogeneous infrastructure. However, the increasing availability of fine-grained workload logging facilities allows better insights to be gained from workload profiles. As a consequence, consolidation can be done more deeply, according to a detailed understanding of how well given workloads mix. We describe IO Tetris, which takes a first look at fine-grained consolidation in large-scale storage systems by leveraging temporal patterns found in real-world I/O traces gathered from enterprise storage environments. The core functionality of IO Tetris consists of two stages. A grouping stage performs hierarchical grouping of storage workloads to find complementary groupings that consolidate well together over time and conflicting ones that do not. A migration stage then examines the discovered groupings to determine how to maximize resource utilization efficiency while minimizing migration costs. Experiments based on customer I/O traces from a high-end enterprise-class IBM storage controller show that a non-trivial number of IO Tetris groupings exist in real-world storage workloads, and that these groupings can be leveraged to achieve better storage consolidation in a cloud setting.
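
As a toy illustration of the grouping stage's central question (do these workloads mix well over time?), the Python sketch below scores workload pairs by the correlation of their IOPS time series, treating negative correlation as complementary. This scoring rule and the traces are invented simplifications, not the paper's algorithm.

```python
# Illustrative sketch of the "do these workloads mix well?" question at the
# heart of IO Tetris. Scoring by correlation of IOPS time series is a
# simplification for illustration only.
from statistics import correlation  # Python 3.10+

# Hourly IOPS profiles for three workloads (invented numbers).
traces = {
    "batch-etl":    [100, 120, 900, 950, 880, 110],  # busy mid-window
    "web-frontend": [800, 850, 150, 120, 140, 820],  # busy at the edges
    "reporting":    [90, 100, 870, 900, 860, 95],    # mirrors batch-etl
}

def mix_score(a: list[float], b: list[float]) -> float:
    """Lower = more complementary: peaks of one align with troughs of the other."""
    return correlation(a, b)

pairs = [(x, y) for i, x in enumerate(traces) for y in list(traces)[i + 1:]]
for x, y in sorted(pairs, key=lambda p: mix_score(traces[p[0]], traces[p[1]])):
    print(f"{x} + {y}: {mix_score(traces[x], traces[y]):+.2f}")
# Prints pairs from most complementary (most negative) to most conflicting;
# a grouping stage would merge the complementary pairs first.
```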


IEEE International Conference on Services Computing | 2007

BRAHMA: Planning Tool for Providing Storage Management as a Service

Sandeep M. Uttamchandani; Kaladhar Voruganti; Ramani R. Routray; Li Yin; Aameek Singh; Benji Yolken

Storage management is becoming the largest component in the overall cost of storage ownership. To contain management costs, most organizations are trying to either consolidate their storage management operations or outsource them to a storage service provider (SSP). Currently, no planning tools exist that help clients and SSPs figure out the best outsourcing option. In this paper we present a planning tool, BRAHMA, that addresses this problem; BRAHMA can provide solutions in which the management tasks are split between the client and the SSP at a fine granularity. Our tool is unique because: (a) in addition to hardware/software resources, it also takes human skill sets as input; (b) it takes the planning time window as input, because plans that are optimal for a given time period (e.g., a month) might not be optimal for a different time period (e.g., a year); (c) it can be used separately by both the client and the SSP to do their respective planning; and (d) it allows the client and the SSP to propose alternative solutions if certain input service level agreements can be relaxed. We have implemented BRAHMA, and our experimental results show that there are definite cost benefits to be attained with a tool that has the above functional properties.
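
Point (b), that the optimal plan depends on the planning window, is easy to see in a toy model. The Python sketch below compares in-house versus outsourced costs per management task over a window; the task names and costs are invented, and this is not BRAHMA's actual formulation.

```python
# A toy sketch of the core trade-off: for each management task, compare
# doing it in-house against outsourcing it, over a chosen planning window.
TASKS = {
    # task: (in-house monthly cost incl. staff, SSP monthly fee, SSP setup fee)
    "backup-management":     (9_000, 6_000, 10_000),
    "capacity-planning":     (4_000, 5_000, 2_000),
    "problem-determination": (7_000, 6_500, 8_000),
}

def plan(window_months: int) -> dict[str, str]:
    """Per task, pick the cheaper option over the window. The answer can
    flip as the window grows, because one-time setup fees amortize."""
    decisions = {}
    for task, (inhouse, ssp_fee, setup) in TASKS.items():
        in_cost = inhouse * window_months
        out_cost = ssp_fee * window_months + setup
        decisions[task] = "outsource" if out_cost < in_cost else "in-house"
    return decisions

print(plan(1))    # short window: setup fees dominate; everything stays in-house
print(plan(12))   # long window: outsource backup-management, where monthly
                  # savings outweigh the setup fee; the others stay in-house
```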


Network Operations and Management Symposium | 2010

End-to-end disaster recovery planning: From art to science

Tapan Kumar Nayak; Ramani R. Routray; Aameek Singh; Sandeep M. Uttamchandani; Akshat Verma

We present the design and implementation of ENDEAVOUR, a framework for integrated end-to-end disaster recovery (DR) planning. Unlike existing research that provides DR planning within a single layer of the IT stack (e.g., storage-controller-based replication), ENDEAVOUR can choose technologies, and compositions of technologies, across multiple layers such as virtual machines, databases, and storage controllers. ENDEAVOUR uses a canonical model of the available replication technologies at all layers, explores strategies to compose them, and performs a novel map-search-reduce heuristic to identify the best DR plans for given administrator requirements. We present a detailed analysis of ENDEAVOUR, including an empirical characterization of various DR technologies, their composition, and an end-to-end case study.
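
A compact sketch of the plan-search idea, composing one replication technology per layer, filtering compositions against a recovery point objective (RPO), and ranking survivors by cost, is given below in Python. The technologies, numbers, and the min-across-layers RPO model are invented simplifications of ENDEAVOUR's richer canonical model.

```python
# Sketch of cross-layer DR plan search. Not ENDEAVOUR's actual model:
# technologies, RPO figures, and costs are invented for illustration.
from itertools import product

# Per layer: (technology, rpo_seconds, relative_cost). The composition's RPO
# is taken as the minimum across layers, since the best-protected layer's
# replica bounds data loss -- a deliberate simplification.
LAYERS = {
    "virtual-machine": [("vm-snapshots", 3600, 1), ("none", float("inf"), 0)],
    "database":        [("log-shipping", 300, 2), ("none", float("inf"), 0)],
    "storage":         [("sync-mirror", 0, 5), ("async-mirror", 60, 3)],
}

def best_plan(required_rpo_s: float):
    feasible = []
    for combo in product(*LAYERS.values()):                  # compose ("map")
        rpo = min(tech[1] for tech in combo)
        if rpo <= required_rpo_s:                            # filter ("search")
            feasible.append((sum(t[2] for t in combo), combo))
    return min(feasible)[1] if feasible else None            # rank ("reduce")

for label, rpo in [("near-zero loss", 0), ("one-minute loss", 60)]:
    chosen = best_plan(rpo)
    print(label, "->", [t[0] for t in chosen] if chosen else "infeasible")
# near-zero loss -> ['none', 'none', 'sync-mirror']
# one-minute loss -> ['none', 'none', 'async-mirror']
```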


Proceedings of the 7th International Workshop on Middleware for Grids, Clouds and e-Science | 2009

Towards a middleware for configuring large-scale storage infrastructures

David M. Eyers; Ramani R. Routray; Rui Zhang; Douglas Willcocks; Peter R. Pietzuch

The rapid proliferation of cloud and service-oriented computing infrastructure is creating an ever-increasing thirst for storage within data centers. Ideally, management applications in cloud deployments should operate in terms of high-level goals, and not present specific implementation details to administrators. Cloud providers often employ Storage Area Networks (SANs) to gain storage scalability. SAN configurations have a vast parameter space, which makes them one of the most difficult components to configure and manage in a cloud storage offering. As a step towards a general cloud storage configuration platform, this paper introduces a SAN configuration middleware that aids management applications in their task of updating and troubleshooting heterogeneous SAN deployments. The middleware acts as a proxy between management applications and a central repository of SAN configurations. The central repository is designed to validate SAN configurations against a knowledge base of best-practice rules across cloud deployments. Management applications contribute local SAN configurations to the repository, and also subscribe to proactive notifications about configurations that are no longer considered safe.
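
The proxy-repository pattern the paper describes can be sketched in a few lines: configurations are submitted, validated against best-practice rules, and subscribers are notified when a stored configuration violates a rule (including a newly learned one). The rule content and configuration format below are invented for illustration.

```python
# Minimal sketch of the middleware pattern in the abstract: a central
# repository validates submitted SAN configurations against best-practice
# rules and notifies subscribers when a stored configuration turns bad.
from typing import Callable

Rule = Callable[[dict], str | None]   # returns a violation message, or None

def single_fabric_rule(config: dict) -> str | None:
    if len(config.get("fabrics", [])) < 2:
        return "host has no redundant fabric path"
    return None

class ConfigRepository:
    def __init__(self, rules: list[Rule]):
        self.rules = rules
        self.configs: dict[str, dict] = {}
        self.subscribers: list[Callable[[str, str], None]] = []

    def submit(self, site: str, config: dict):
        self.configs[site] = config
        self._revalidate(site)

    def add_rule(self, rule: Rule):
        """New best practice learned from the field: re-check every site."""
        self.rules.append(rule)
        for site in self.configs:
            self._revalidate(site)

    def _revalidate(self, site: str):
        for rule in self.rules:
            msg = rule(self.configs[site])
            if msg:
                for notify in self.subscribers:
                    notify(site, msg)

repo = ConfigRepository([single_fabric_rule])
repo.subscribers.append(lambda site, msg: print(f"[{site}] {msg}"))
repo.submit("datacenter-east", {"fabrics": ["fab-a"]})   # triggers notification
```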


IEEE International Symposium on Policies for Distributed Systems and Networks | 2010

Policy Generation Framework for Large-Scale Storage Infrastructures

Ramani R. Routray; Rui Zhang; David M. Eyers; Douglas Willcocks; Peter R. Pietzuch; Prasenjit Sarkar

Cloud computing is gaining acceptance among mainstream technology users. Storage cloud providers often employ Storage Area Networks (SANs) to provide elasticity, rapid adaptability to changing demands, and policy-based automation. As storage capacity grows, the storage environment becomes heterogeneous, increasingly complex, harder to manage, and more expensive to operate. This paper presents PGML (Policy Generation for large-scale storage infrastructure configuration using Machine Learning), an automated, supervised machine learning framework for the generation of best practices for SAN configuration that can potentially reduce configuration errors by up to 70% in a data center. A best practice, or policy, is a technique, guideline, or methodology that, through experience and research, has proven to lead reliably to a better storage configuration. Given a standards-based representation of SAN management information, PGML builds on the machine learning constructs of inductive logic programming (ILP) to create a transparent mapping of hierarchical, object-oriented management information into multi-dimensional predicate descriptions. Our initial evaluation of PGML shows that, given an input of SAN problem reports, it is able to generate best practices by analyzing these reports. Our simulation results, based on extrapolated real-world problem scenarios, demonstrate that ILP is an appropriate choice of machine learning technique for this problem.
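
The mapping step, flattening hierarchical, object-oriented management information into predicate descriptions an ILP learner can consume, can be illustrated with a short Python sketch. The entity and predicate names are invented, not PGML's actual representation.

```python
# Toy sketch of the representation step the abstract mentions: flattening
# object-oriented SAN management records into ground facts of the kind an
# ILP system consumes. Entity and predicate names are invented.
def to_predicates(san: dict) -> list[str]:
    """Map a nested SAN description to flat ground facts."""
    facts = []
    for host in san["hosts"]:
        facts.append(f"host({host['name']}).")
        for hba in host["hbas"]:
            facts.append(f"has_hba({host['name']}, {hba['model']}).")
            facts.append(f"connected({hba['model']}, {hba['switch']}).")
    for report in san.get("problem_reports", []):
        facts.append(f"misconfigured({report}).")
    return facts

san = {
    "hosts": [{"name": "hostA",
               "hbas": [{"model": "hba_x", "switch": "switch1"}]}],
    "problem_reports": ["hostA"],
}
print("\n".join(to_predicates(san)))
# An ILP learner could then induce a rule such as:
#   misconfigured(H) :- has_hba(H, hba_x), connected(hba_x, switch1).
```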
