
Publication


Featured research published by Sandeep Gopisetty.


IBM Journal of Research and Development | 2014

Efficient and agile storage management in software defined environments

Alfredo Alba; Gabriel Alatorre; Christian Bolik; Ann Corrao; Thomas Keith Clark; Sandeep Gopisetty; Robert Haas; Ronen I. Kat; Bryan Langston; Nagapramod Mandagere; Dietmar Noll; Sumant Padbidri; Ramani R. Routray; Yang Song; Chung-Hao Tan; Avishay Traeger

The IT industry is experiencing a disruptive trend for which the entire data center infrastructure is becoming software defined and programmable. IT resources are provisioned and optimized continuously according to a declarative and expressive specification of the workload requirements. The software defined environments facilitate agile IT deployment and responsive data center configurations that enable rapid creation and optimization of value-added services for clients. However, this fundamental shift introduces new challenges to existing data center management solutions. In this paper, we focus on the storage aspect of the IT infrastructure and investigate its unique challenges as well as opportunities in the emerging software defined environments. Current state-of-the-art software defined storage (SDS) solutions are discussed, followed by our novel framework to advance the existing SDS solutions. In addition, we study the interactions among SDS, software defined compute (SDC), and software defined networking (SDN) to demonstrate the necessity of a holistic orchestration and to show that joint optimization can significantly improve the effectiveness and efficiency of the overall software defined environments.


IBM Journal of Research and Development | 1996

Automated forms-processing software and services

Sandeep Gopisetty; Raymond A. Lorie; Jianchang Mao; K. Moidin Mohiuddin; Alexander Sorin; Eyal Yair

While document-image systems for the management of collections of documents, such as forms, offer significant productivity improvements, the entry of information from documents remains a labor-intensive and costly task for most organizations. In this paper, we describe a software system for the machine reading of forms data from their scanned images. We describe its major components: form recognition and “dropout,” intelligent character recognition (ICR), and contextual checking. Finally, we describe applications for which our automated forms reader has been successfully used.


IBM Journal of Research and Development | 2008

Evolution of storage management: transforming raw data into information

Sandeep Gopisetty; Sandip Agarwala; Eric K. Butler; Divyesh Jadav; Stefan Jaquet; Madhukar R. Korupolu; Ramani R. Routray; Prasenjit Sarkar; Aameek Singh; Miriam Sivan-Zimet; Chung-Hao Tan; Sandeep M. Uttamchandani; David Merbach; Sumant Padbidri; Andreas Dieberger; Eben M. Haber; Eser Kandogan; Cheryl A. Kieliszewski; Dakshi Agrawal; Murthy V. Devarakonda; Kang-Won Lee; Kostas Magoutis; Dinesh C. Verma; Norbert G. Vogl

Exponential growth in storage requirements and an increasing number of heterogeneous devices and application policies are making enterprise storage management a nightmare for administrators. Back-of-the-envelope calculations, rules of thumb, and manual correlation of individual device data are too error prone for the day-to-day administrative tasks of resource provisioning, problem determination, performance management, and impact analysis. Storage management tools have evolved over the past several years from standardizing the data reported by storage subsystems to providing intelligent planners. In this paper, we describe that evolution in the context of the IBM TotalStorage® Productivity Center (TPC), a suite of tools to assist administrators in the day-to-day tasks of monitoring, configuring, provisioning, managing change, analyzing configuration, managing performance, and determining problems. We describe our ongoing research to develop ways to simplify and automate these tasks by applying advanced analytics to the performance statistics and raw configuration and event data collected by TPC using the popular Storage Management Initiative-Specification (SMI-S). In addition, we provide details of SMART (storage management analytics and reasoning technology), a library that provides a collection of data-aggregation functions and optimization algorithms.


IBM Journal of Research and Development | 2008

Automated planners for storage provisioning and disaster recovery

Sandeep Gopisetty; Eric K. Butler; Stefan Jaquet; Madhukar R. Korupolu; Tapan Kumar Nayak; Ramani R. Routray; Mark James Seaman; Aameek Singh; Chung-Hao Tan; Sandeep M. Uttamchandani; Akshat Verma

Introducing an application into a data center involves complex interrelated decision-making for the placement of data (where to store it) and resiliency in the event of a disaster (how to protect it). Automated planners can assist administrators in making intelligent placement and resiliency decisions when provisioning for both new and existing applications. Such planners take advantage of recent improvements in storage resource management and provide guided recommendations based on monitored performance data and storage models. For example, the IBM Provisioning Planner provides intelligent decision-making for the steps involved in allocating and assigning storage for workloads. It involves planning for the number, size, and location of volumes on the basis of workload performance requirements and hierarchical constraints, planning for the appropriate number of paths, and enabling access to volumes using zoning, masking, and mapping. The IBM Disaster Recovery (DR) Planner enables administrators to choose and deploy appropriate replication technologies spanning servers, the network, and storage volumes to provide resiliency to the provisioned application. The DR Planner begins with a list of high-level application DR requirements and creates an integrated plan that is optimized on criteria such as cost and solution homogeneity. The Planner deploys the selected plan using orchestrators that are responsible for failover and failback.


International Conference on Networking, Architecture, and Storage | 2007

iSAN: Storage Area Network Management Modeling Simulation

Ramani R. Routray; Sandeep Gopisetty; Pallavi Galgali; Amit Modi; Shripad Nadgowda

Storage management plays an important role in ensuring the service-level agreements (availability, reliability, performance, etc.) that are critical to the operation of a resilient business IT infrastructure. Storage resource management (SRM) is also becoming the largest component in the overall cost of ownership of any large enterprise IT environment. Given these functional requirements and business opportunities, several SRM suites have appeared in the marketplace to provide uniform and interoperable management. However, developing and testing these suites requires access to a huge set of heterogeneous, multi-vendor storage area network (SAN) devices such as Fibre Channel switches, storage subsystems, tape libraries, and servers. It is almost impractical for an SRM suite manufacturer to own and manage such a variety of devices. With the emergence of CIM and SMI-S, the management modules of SAN devices have become logically independent components. In this paper, we propose a framework and implementation named iSAN (imitation storage area network) that models the management modules of these devices. Our tool can be used to perform (a) simulation of management modules ranging from an individual device to a large-scale, multi-vendor, heterogeneous enterprise SAN, and (b) what-if analysis of an enterprise IT environment before modeling the changes. The tool makes development and testing of SRM suites and planning of IT environments efficient and cost-effective by removing their dependence on high-cost SAN hardware. We have implemented iSAN, and our experimental results show that it attains the above functional objectives and brings significant productivity to enterprise IT environment management.


International Conference on Cloud Computing | 2014

Improving Hadoop Service Provisioning in a Geographically Distributed Cloud

Qi Zhang; Ling Liu; Kisung Lee; Yang Zhou; Aameek Singh; Nagapramod Mandagere; Sandeep Gopisetty; Gabriel Alatorre

With more data generated and collected in a geographically distributed manner, combined with increased computational requirements for large-scale data-intensive analysis, we have witnessed a growing demand for geographically distributed cloud datacenters and hybrid cloud service provisioning, enabling organizations to meet instantaneous demand for additional computational resources and to expand in-house resources for peak service demands by utilizing cloud resources. A key challenge for running applications in such a geographically distributed computing environment is how to efficiently schedule and perform analysis over data that is distributed across multiple datacenters. In this paper, we first compare multi-datacenter Hadoop deployment with single-datacenter Hadoop deployment to identify the performance issues inherent in a geographically distributed cloud. We also generalize the problem characterization in the context of geographically distributed cloud datacenters and discuss general optimization strategies. We then describe the design and implementation of a suite of system-level optimizations for improving the performance of Hadoop service provisioning in a geo-distributed cloud, including prediction-based job localization, configurable HDFS data placement, and data prefetching. Our experimental evaluation shows that our prediction-based localization has a very low error ratio, smaller than 5%, and that our optimizations can improve the execution time of the Reduce phase by 48.6%.
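The intuition behind prediction-based job localization can be sketched roughly as follows: run each job in the datacenter predicted to hold the largest share of its input, so that cross-datacenter transfers are minimized. This is a hypothetical illustration, not the paper's implementation; all names and the byte-counting cost model are assumptions.

```python
# Hypothetical sketch: pick the datacenter holding the most input bytes
# for a job. input_blocks maps block id -> size; block_locations maps
# block id -> list of datacenters holding a replica.

def predict_best_datacenter(input_blocks, block_locations):
    local_bytes = {}
    for block, size in input_blocks.items():
        for dc in block_locations.get(block, []):
            local_bytes[dc] = local_bytes.get(dc, 0) + size
    # The datacenter with the most local input needs the least remote I/O.
    return max(local_bytes, key=local_bytes.get)

# Example: block b1 (128 MB) lives only in dc-east; b2 (64 MB) in both sites.
blocks = {"b1": 128, "b2": 64}
locations = {"b1": ["dc-east"], "b2": ["dc-east", "dc-west"]}
print(predict_best_datacenter(blocks, locations))  # dc-east
```

A real scheduler would also weigh link bandwidth and datacenter load, but the replica-locality signal above is the core of the localization idea.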


IEEE Conference on Mass Storage Systems and Technologies | 2005

Security vs performance: tradeoffs using a trust framework

Aameek Singh; Kaladhar Voruganti; Sandeep Gopisetty; David Pease; Linda Marie Duyanovich; Ling Liu

We present the architecture of a trust framework that can be used to intelligently trade off between security and performance in a SAN file system. The primary idea is to differentiate between the various clients in the system based on their trustworthiness and to provide them with differing levels of security and performance. A client's trustworthiness reflects its expected behavior and is evaluated in an online fashion using a customizable trust model. We also describe the interface of the trust framework with an example block-level security solution for an out-of-band virtualization-based SAN file system (SAN FS). The proposed framework can easily be extended to provide differential treatment based on data sensitivity, using a configurable parameter of the trust model. This allows associating stringent security requirements with more sensitive data while trading off security for better performance on less critical data, a tradeoff regularly desired in an enterprise.
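The security-versus-performance tradeoff described above can be pictured with a minimal sketch: combine a client's online trust score with a data-sensitivity parameter into a risk value, and let the risk select the protection level. The thresholds, level names, and linear risk formula below are illustrative assumptions, not the paper's actual trust model.

```python
# Illustrative sketch (not the paper's model): map client trust and data
# sensitivity to a protection level. Both inputs are assumed to lie in [0, 1].

def security_level(trust_score, sensitivity):
    risk = (1.0 - trust_score) * sensitivity  # low trust + sensitive data = high risk
    if risk > 0.5:
        return "encrypt_and_verify"   # untrusted client on sensitive data
    if risk > 0.2:
        return "verify_only"          # integrity checks, no encryption cost
    return "fast_path"                # trusted client: skip crypto overhead

print(security_level(0.9, 0.5))  # fast_path
print(security_level(0.1, 0.9))  # encrypt_and_verify
```

The point of the design is that the common case (trusted clients) avoids cryptographic overhead entirely, while misbehaving or unknown clients pay for stronger checks.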


Annual SRII Global Conference | 2014

Intelligent Information Lifecycle Management in Virtualized Storage Environments

Gabriel Alatorre; Aameek Singh; Nagapramod Mandagere; Eric K. Butler; Sandeep Gopisetty; Yang Song

Data or information lifecycle management (ILM) is the process of managing data over its lifecycle in a manner that balances cost and performance. The task is made difficult by data's continuously changing business value. Done well, ILM can lower costs through increased use of cost-effective storage, but it also runs the risk of negatively impacting performance if data is inadvertently placed on the wrong device (e.g., low-performance storage or an over-utilized storage device). To address this challenge, we designed and developed the Intelligent Storage Tiering Manager (ISTM), an analytics-driven storage tiering tool that automates the process of load-balancing data across and within different storage tiers in virtualized storage environments. Using administrator-generated policies, ISTM finds data with the specified performance profiles and automatically moves it to the appropriate storage tier. Application impact is minimized by limiting the overall migration load and keeping data accessible during migration. Automation results in significantly less labor and fewer errors while reducing task completion time from several days (and in some cases weeks) to a few hours. In this paper, we provide an overview of information lifecycle management, discuss existing solutions, and finally focus on the design and deployment of our ILM solution, ISTM, within a production data center.
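The policy-driven tiering loop described above can be sketched in a few lines: flag volumes whose observed activity does not match their tier, and queue migrations up to a load budget so applications are not swamped. Everything here (tier names, IOPS thresholds, the budget) is a hypothetical illustration in the spirit of ISTM, not its actual algorithm.

```python
# Hypothetical sketch of policy-driven tiering: hot volumes on slow tiers
# move up, cold volumes on fast tiers move down, capped by a migration
# budget (GB) that limits application impact.

def plan_migrations(volumes, hot_iops, cold_iops, budget_gb):
    """Return (volume_name, target_tier) moves within the migration budget."""
    moves, used = [], 0
    # Hottest mismatches first, so the budget is spent where it matters most.
    for vol in sorted(volumes, key=lambda v: v["iops"], reverse=True):
        target = None
        if vol["iops"] >= hot_iops and vol["tier"] != "ssd":
            target = "ssd"
        elif vol["iops"] <= cold_iops and vol["tier"] != "nearline":
            target = "nearline"
        if target and used + vol["size_gb"] <= budget_gb:
            moves.append((vol["name"], target))
            used += vol["size_gb"]
    return moves

vols = [
    {"name": "db1", "tier": "nearline", "iops": 900, "size_gb": 200},
    {"name": "logs", "tier": "ssd", "iops": 5, "size_gb": 100},
]
print(plan_migrations(vols, hot_iops=500, cold_iops=50, budget_gb=250))
# [('db1', 'ssd')]  -- 'logs' is cold but exceeds the remaining budget
```

A production tool would additionally track tier capacity and keep data online during the move, as the abstract notes.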


IEEE Conference on Mass Storage Systems and Technologies | 2005

A hybrid access model for storage area networks

Aameek Singh; Kaladhar Voruganti; Sandeep Gopisetty; David Pease; Ling Liu

We present HSAN, a hybrid storage area network, which uses both the in-band (like NFS [R. Sandberg et al., 1985]) and out-of-band virtualization (like SAN FS [J. Menon et al., 2003]) access models. HSAN uses hybrid servers that can serve as both metadata and NAS servers to intelligently decide the access model for each request, based on the characteristics of the requested data. This is in contrast to existing efforts that merely provide concurrent support for both models and do not exploit model appropriateness for the requested data. The HSAN hybrid model is implemented using low-overhead cache-admission and cache-replacement schemes and aims to improve overall response times for a wide variety of workloads. Preliminary analysis of the hybrid model indicates performance improvements over both individual models.
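The per-request decision at the heart of the hybrid model can be pictured with a small sketch: small or cache-resident data is cheapest to serve in-band through the hybrid server, while large transfers benefit from out-of-band direct block access. The size threshold and function names are illustrative assumptions, not HSAN's actual admission scheme.

```python
# Illustrative per-request access-model choice in the spirit of HSAN.
# In-band: one round trip, served from the hybrid server's cache (NFS-style).
# Out-of-band: client fetches the layout from the metadata server, then does
# direct block I/O (SAN FS-style). The 1 MB cutoff is an assumption.

IN_BAND_MAX_BYTES = 1 << 20

def choose_access_model(file_size, in_server_cache):
    if in_server_cache or file_size <= IN_BAND_MAX_BYTES:
        return "in-band"
    return "out-of-band"

print(choose_access_model(512, in_server_cache=False))        # in-band
print(choose_access_model(10 << 20, in_server_cache=False))   # out-of-band
```

The interesting engineering, per the abstract, lies in the cache-admission and cache-replacement policies that keep this decision cheap per request.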


IEEE International Conference on Services Computing | 2017

Living in the Cloud or on the Edge: Opportunities and Challenges of IoT Application Architecture

Samir Tata; Rakesh Jain; Heiko Ludwig; Sandeep Gopisetty

The Internet of Things (IoT) refers to the network of objects, devices, machines, vehicles, buildings, and other physical systems with embedded sensing, computing, and communication capabilities that sense, share, and act on real-time information about the physical world. These objects, through standard communication protocols and unique addressing schemes, provide services to end users or systems. With the rise of low-cost, low-power single-board computers, it is possible to perform certain business logic at the edge of the network using such computers. In this way, an IoT application is distributed across many devices, some running at or near the edge of the network in different locations and some running in a public or private cloud. This brings new research and development challenges across the lifecycle of IoT applications, including modeling, deployment, and support of non-functional requirements such as security, privacy, performance, and provenance. In this paper, we outline the challenges related to modeling and deploying IoT applications and potential research directions for resolving these challenges.
