
Publications


Featured research published by Vibhore Kumar.


International Conference on Autonomic Computing | 2009

vManage: loosely coupled platform and virtualization management in data centers

Sanjay Kumar; Vanish Talwar; Vibhore Kumar; Parthasarathy Ranganathan; Karsten Schwan

Management is an important challenge for future enterprises. Previous work has addressed platform management (e.g., power and thermal management) separately from virtualization management (e.g., virtual machine (VM) provisioning, application performance). Coordinating the actions taken by these different management layers is important and beneficial, for reasons of performance, stability, and efficiency. Such coordination, in addition to working well with existing multi-vendor solutions, also needs to be extensible to support future management solutions potentially operating on different sensors and actuators. In response to these requirements, this paper proposes vManage, a solution that loosely couples platform and virtualization management and facilitates coordination between them in data centers. Our solution comprises registry and proxy mechanisms that provide unified monitoring and actuation across platform and virtualization domains, and coordinators that provide policy execution for better VM placement and runtime management, including a formal approach to ensuring that management actions do not destabilize the system. The solution is instantiated in a Xen environment through a platform-aware virtualization manager at a cluster management node, and a virtualization-aware platform manager on each server. Experimental evaluations using enterprise benchmarks show that, compared to traditional solutions, vManage can achieve additional power savings (10% lower power) with significantly improved service-level guarantees (71% fewer violations) and stability (54% fewer VM migrations), at low overhead.
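
The coordination idea can be illustrated with a minimal sketch, assuming a simplified registry of per-host sensor readings; the HostState fields, thresholds, and choose_host function below are hypothetical stand-ins, not the vManage API.

```python
# Hypothetical sketch of coordinated VM placement: a coordinator consults both
# platform-side sensors (power, temperature) and virtualization-side sensors
# (free CPU) before approving a placement. Names and limits are illustrative.
from dataclasses import dataclass

@dataclass
class HostState:
    name: str
    power_watts: float      # platform-side sensor reading
    temp_celsius: float     # platform-side sensor reading
    free_cpu: float         # virtualization-side sensor reading (cores)

def choose_host(hosts, vm_cpu_demand, power_cap=350.0, temp_cap=75.0):
    """Pick a host satisfying both platform and VM-level constraints,
    preferring the lowest-power candidate (a stand-in for power savings)."""
    candidates = [
        h for h in hosts
        if h.free_cpu >= vm_cpu_demand
        and h.power_watts < power_cap
        and h.temp_celsius < temp_cap
    ]
    return min(candidates, key=lambda h: h.power_watts, default=None)

hosts = [
    HostState("node1", power_watts=320, temp_celsius=70, free_cpu=2.0),
    HostState("node2", power_watts=280, temp_celsius=60, free_cpu=4.0),
]
print(choose_host(hosts, vm_cpu_demand=3.0))  # -> node2
```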


International Conference on Autonomic Computing | 2005

Distributed Stream Management using Utility-Driven Self-Adaptive Middleware

Vibhore Kumar; Brian F. Cooper; Karsten Schwan

We consider pervasive computing applications that process and aggregate data streams emanating from highly distributed data sources to produce a stream of updates that have an implicit business value. Middleware that enables such aggregation of data streams must support scalable and efficient self-management to deal with changes in operating conditions, and should have an embedded business sense. In this paper, we present a novel self-adaptation algorithm that is designed to scale efficiently to thousands of streams and aims to maximize the overall business utility attained from running middleware-based applications. The outcome is that the middleware not only deals with changing network conditions or resource requirements, but also responds appropriately to changes in business policies. An important feature of the algorithm is a hierarchical node-partitioning scheme that decentralizes reconfiguration and suitably localizes its impact. Extensive simulation experiments and benchmarks obtained with actual enterprise operational data corroborate this paper's claims.
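
A minimal sketch of utility-driven reconfiguration, under assumed names and numbers of our own (the utility model below is illustrative, not the paper's): reconfigure only when the estimated utility gain of an alternative deployment outweighs the cost of switching to it.

```python
# Hypothetical utility-driven reconfiguration check; the utility function,
# penalty weight, and migration cost are illustrative assumptions.
def utility(throughput_updates_per_s, end_to_end_delay_s, penalty_per_s=100.0):
    """Business utility: reward throughput, penalize delivery delay."""
    return throughput_updates_per_s - penalty_per_s * end_to_end_delay_s

def should_reconfigure(current, candidate, migration_cost=50.0):
    """Switch deployments only if the utility gain exceeds the switching cost."""
    gain = utility(*candidate) - utility(*current)
    return gain > migration_cost

current = (1000.0, 0.8)    # (updates/s, end-to-end delay in seconds)
candidate = (1000.0, 0.2)  # same throughput, lower delay after moving operators
print(should_reconfigure(current, candidate))  # True: gain of 60 > cost of 50
```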


International Conference on Autonomic Computing | 2006

Implementing Diverse Messaging Models with Self-Managing Properties using IFLOW

Vibhore Kumar; Zhongtang Cai; Brian F. Cooper; Greg Eisenhauer; Karsten Schwan; Mohamed S. Mansour; Balasubramanian Seshasayee; Patrick M. Widener

Implementing self-management is hard, especially when building large-scale distributed systems. Publish/subscribe middleware, scientific visualization and collaboration tools, and corporate operational information systems are examples of one class of systems, distributed information flow infrastructures, that could benefit from self-management. This paper presents IFLOW, an autonomic middleware for implementing these different distributed systems in a self-managing way. IFLOW reduces different messaging models to a common information flow abstraction, creates a self-managing implementation of that abstraction, and then provides a substrate for building diverse information flow systems. We describe the design and implementation of IFLOW and present case studies of implementing different messaging models as self-managing systems.
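
The common information flow abstraction can be pictured roughly as below; FlowGraph and its stage/edge API are hypothetical illustrations, not IFLOW's interfaces. The example maps a publish/subscribe model onto a generic source-operator-sink flow.

```python
# Illustrative reduction of publish/subscribe to a generic information flow:
# named stages connected by edges, where each stage transforms or drops events.
from collections import defaultdict

class FlowGraph:
    """A common information-flow abstraction: named stages connected by edges."""
    def __init__(self):
        self.stages = {}                 # name -> callable(event) -> event or None
        self.edges = defaultdict(list)   # name -> downstream stage names

    def add_stage(self, name, fn):
        self.stages[name] = fn

    def connect(self, upstream, downstream):
        self.edges[upstream].append(downstream)

    def push(self, stage, event):
        out = self.stages[stage](event)
        if out is not None:
            for nxt in self.edges[stage]:
                self.push(nxt, out)

# Publish/subscribe expressed on the flow abstraction: the "broker" stage
# filters by topic, the subscriber is a sink.
g = FlowGraph()
g.add_stage("publisher", lambda e: e)
g.add_stage("broker", lambda e: e if e["topic"] == "alerts" else None)
g.add_stage("subscriber", lambda e: print("delivered:", e) or None)
g.connect("publisher", "broker")
g.connect("broker", "subscriber")
g.push("publisher", {"topic": "alerts", "body": "disk nearly full"})
```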


International Conference on Extending Database Technology | 2010

DEDUCE: at the intersection of MapReduce and stream processing

Vibhore Kumar; Henrique Andrade; Bugra Gedik; Kun-Lung Wu

MapReduce and stream processing are two emerging, but different, paradigms for analyzing, processing, and making sense of large volumes of modern data. While MapReduce offers the capability to analyze several terabytes of stored data, stream processing solutions offer the ability to process, possibly, a few million updates every second. However, there is an increasing number of data processing applications that need a solution which effectively and efficiently combines the benefits of MapReduce and stream processing to address their data processing needs. For example, in the automated stock trading domain, applications usually require periodic analysis of large amounts of stored data to generate a model using MapReduce, which is then used to process a stream of incoming updates using a stream processing system. This paper presents DEDUCE, which extends IBM's System S stream processing middleware with support for MapReduce by providing (1) language and runtime support for easily specifying and embedding MapReduce jobs as elements of a larger data flow, (2) the capability to describe reusable modules that can be used as map and reduce tasks, and (3) configuration parameters that can be tweaked to control and manage the usage of shared resources by the MapReduce and stream processing components. We describe the motivation for DEDUCE and the design and implementation of the MapReduce extensions for System S, and then present experimental results.
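
The usage pattern described above, a batch step that periodically rebuilds a model which the streaming side then applies to each incoming update, can be sketched as follows in plain Python; the word-count-style batch job and the rare-symbol rule are illustrative assumptions, not DEDUCE or SPADE code.

```python
# Hypothetical sketch of the batch-plus-stream pattern: a MapReduce-style job
# over stored partitions builds a model, then a streaming step scores updates.
from collections import Counter
from itertools import chain

def map_phase(records):
    # emit (key, 1) pairs, as a classic MapReduce mapper would
    return ((symbol, 1) for symbol, _price in records)

def reduce_phase(pairs):
    # aggregate counts per key
    counts = Counter()
    for key, value in pairs:
        counts[key] += value
    return counts

def build_model(stored_partitions):
    """Batch step: run map over every stored partition, then reduce."""
    return reduce_phase(chain.from_iterable(map_phase(p) for p in stored_partitions))

def score_stream(model, updates):
    """Streaming step: flag symbols that were rare in the historical data."""
    for symbol, price in updates:
        if model.get(symbol, 0) < 2:
            yield ("rare-symbol", symbol, price)

stored = [[("IBM", 120.0), ("IBM", 121.0)], [("XYZ", 9.5)]]
model = build_model(stored)
print(list(score_stream(model, [("XYZ", 9.7), ("IBM", 122.0)])))
```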


ACM/IFIP/USENIX International Conference on Middleware | 2010

FLEX: a slot allocation scheduling optimizer for MapReduce workloads

Joel L. Wolf; Deepak Rajan; Kirsten Hildrum; Rohit Khandekar; Vibhore Kumar; Sujay Parekh; Kun-Lung Wu; Andrey Balmin

Originally, MapReduce implementations such as Hadoop employed First In First Out (FIFO) scheduling, but such simple schemes cause job starvation. The Hadoop Fair Scheduler (HFS) is a slot-based MapReduce scheme designed to ensure a degree of fairness among the jobs, by guaranteeing each job at least some minimum number of allocated slots. Our prime contribution in this paper is a different, flexible scheduling allocation scheme, known as FLEX. Our goal is to optimize any of a variety of standard scheduling theory metrics (response time, stretch, makespan, and Service Level Agreements (SLAs), among others) while ensuring the same minimum job slot guarantees as in HFS, and maximum job slot guarantees as well. The FLEX allocation scheduler can be regarded as an add-on module that works synergistically with HFS. We describe the mathematical basis for FLEX, and compare it with FIFO and HFS in a variety of experiments.
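
A toy allocator conveys the slot-guarantee idea: honor each job's minimum, cap at its maximum, and spend the leftover slots according to a chosen metric (here, crudely, shortest remaining work first). This is a sketch under assumed job attributes, not the FLEX optimizer.

```python
# Toy slot allocator: guarantee per-job minimums and maximums, then hand the
# remaining slots to the jobs with the least remaining work (a crude proxy
# for optimizing average response time).
def allocate_slots(jobs, total_slots):
    """jobs: list of dicts with 'name', 'min', 'max', 'remaining_work'."""
    alloc = {j["name"]: j["min"] for j in jobs}          # guaranteed minimums
    spare = total_slots - sum(alloc.values())
    assert spare >= 0, "minimum guarantees exceed cluster capacity"
    # Give spare slots to jobs with the least remaining work first.
    for job in sorted(jobs, key=lambda j: j["remaining_work"]):
        extra = min(spare, job["max"] - alloc[job["name"]])
        alloc[job["name"]] += extra
        spare -= extra
    return alloc

jobs = [
    {"name": "etl",    "min": 2, "max": 10, "remaining_work": 500},
    {"name": "report", "min": 1, "max": 4,  "remaining_work": 50},
]
print(allocate_slots(jobs, total_slots=8))  # {'etl': 4, 'report': 4}
```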


Network Operations and Management Symposium | 2008

A state-space approach to SLA based management

Vibhore Kumar; Karsten Schwan; Subu Iyer; Yuan Chen; Akhil Sahai

Large complex systems (such as enterprise systems) are often composed of several interacting, independent components. In many such systems, although the behavior of the constituent components is well characterized, the behavior that results from interaction between such components is more or less intractable, making it hard for administrators to efficiently manage the system in conformance with service level agreements (SLAs). This paper presents an approach for deriving component-level objectives from system-level objectives or agreements, which, if conformed to, imply conformance to the higher-level SLA. Our approach partitions the system's state-space into homogeneous sub-spaces, creates micro-models for such sub-spaces, and then uses these micro-models to translate the higher-level objectives into component-level objectives. We have implemented a system, termed Pranaali, for evaluating our approach in realistic settings.
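
The micro-model idea can be sketched as follows, assuming a single homogeneous subspace and a linear relation between one component's latency and the end-to-end latency; the data, model form, and function names are illustrative, not Pranaali's.

```python
# Illustrative micro-model: fit a per-partition linear model of end-to-end
# latency vs. database latency, then invert it to derive the component-level
# latency budget implied by a system-level SLA bound.
import statistics

def fit_micro_model(samples):
    """samples: list of (db_latency_ms, end_to_end_ms). Least-squares line."""
    xs, ys = zip(*samples)
    mx, my = statistics.mean(xs), statistics.mean(ys)
    slope = sum((x - mx) * (y - my) for x, y in samples) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx              # (slope, intercept)

def component_budget(model, sla_end_to_end_ms):
    """Translate the system-level latency bound into a db-latency bound."""
    slope, intercept = model
    return (sla_end_to_end_ms - intercept) / slope

low_load = [(10, 110), (20, 130), (30, 150)]   # one homogeneous subspace
model = fit_micro_model(low_load)
print(component_budget(model, sla_end_to_end_ms=140))  # 25.0 ms db budget
```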


ACM/IFIP/USENIX International Conference on Middleware | 2007

iManage: policy-driven self-management for enterprise-scale systems

Vibhore Kumar; Brian F. Cooper; Greg Eisenhauer; Karsten Schwan

It is obvious that big, complex enterprise systems are hard to manage. What is not obvious is how to make them more manageable. Although there is a growing body of research into system self-management, many techniques are either too narrow, focusing on a single component rather than the entire system, or not robust enough, failing to scale or respond to the full range of an administrator's needs. In our iManage system we have developed a policy-driven system modeling framework that aims to bridge the gap between manageable components and manageable systems. In particular, iManage provides: (1) system state-space partitioning, which divides a large system state-space into partitions that are more amenable to constructing system models and developing policies, (2) online model and policy adaptation, to allow the self-management infrastructure to deal gracefully with changes in operating environment, system configuration, and workload, and (3) tractability and trust, where tractability allows an administrator to understand why the system chose a particular policy and also influence that decision, and trust allows an administrator to understand the system's confidence in a proposed, automated action. Simulations driven by scenarios given to us by our industrial collaborators demonstrate that iManage is effective both at constructing useful system models and at using those models to drive automated system management.
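
The trust aspect can be illustrated with a small, hypothetical decision rule: automate an action only when the model's confidence clears a threshold, otherwise surface the proposal and its explanation to the administrator. The threshold and message format below are assumptions, not iManage's.

```python
# Hypothetical confidence-gated automation: high-confidence actions are
# applied automatically, low-confidence ones are escalated with an explanation.
def decide(action, confidence, explanation, auto_threshold=0.9):
    if confidence >= auto_threshold:
        return f"AUTO-APPLY {action} (confidence {confidence:.2f}): {explanation}"
    return f"ESCALATE {action} to admin (confidence {confidence:.2f}): {explanation}"

print(decide("add 2 app-server replicas", 0.95,
             "partition 'high load' predicts SLA violation within 10 min"))
print(decide("restart database node", 0.60,
             "model has few samples for the current workload partition"))
```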


European Symposium on Programming | 2010

A universal calculus for stream processing languages

Robert Soulé; Martin Hirzel; Robert Grimm; Bugra Gedik; Henrique Andrade; Vibhore Kumar; Kun-Lung Wu

Stream processing applications such as algorithmic trading, MPEG processing, and web content analysis are ubiquitous and essential to business and entertainment. Language designers have developed numerous domain-specific languages that are both tailored to the needs of their applications and optimized for performance on their particular target platforms. Unfortunately, the goals of generality and performance are frequently at odds, and prior work on the formal semantics of stream processing languages does not capture the details necessary for reasoning about implementations. This paper presents Brooklet, a core calculus for stream processing that allows us to reason about how to map languages to platforms and how to optimize stream programs. We translate from three representative languages, CQL, StreamIt, and Sawzall, to Brooklet, and show that the translations are correct. We formalize three popular and vital optimizations, data-parallel computation, operator fusion, and operator re-ordering, and show under which conditions they are correct. Language designers can use Brooklet to specify exactly how new features or languages behave. Language implementors can use Brooklet to show exactly under which circumstances new optimizations are correct. In ongoing work, we are developing an intermediate language for streaming that is based on Brooklet. We are implementing our intermediate language on System S, IBM's high-performance streaming middleware.
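
In the spirit of a core calculus where operators consume from and append to named queues, a tiny interpreter might look like the sketch below; this is a didactic approximation, not Brooklet's formal semantics or its System S implementation.

```python
# A tiny queue-and-operator interpreter: each operator reads from one named
# queue and writes to another; execution fires operators until no data moves.
from collections import deque

def run(queues, operators):
    """operators: list of (in_queue, fn, out_queue); fire while data exists."""
    fired = True
    while fired:
        fired = False
        for in_q, fn, out_q in operators:
            if queues[in_q]:
                item = queues[in_q].popleft()
                result = fn(item)
                if result is not None and out_q is not None:
                    queues[out_q].append(result)
                fired = True

queues = {"src": deque([1, 2, 3, 4]), "mid": deque(), "out": deque()}
operators = [
    ("src", lambda x: x * x, "mid"),                     # square
    ("mid", lambda x: x if x % 2 == 0 else None, "out"), # keep even values
]
run(queues, operators)
print(list(queues["out"]))  # [4, 16]
```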


International Conference on Data Engineering | 2006

Optimizing Multiple Queries in Distributed Data Stream Systems

Sangeetha Seshadri; Vibhore Kumar; Brian F. Cooper

We consider the problem of query optimization in distributed stream-based systems where multiple continuous queries may be executing simultaneously. In such systems, distribution adds degrees of freedom to an already complex optimization problem. Thousands of network nodes may need to be considered for operator placement in order to support in-network processing, which is overwhelming even from the perspective of distributed query optimization. Added to this complexity is the potential for significant savings from combining query plans in order to reuse streams of intermediate results. These issues force us to develop new techniques for query optimization. We present a formal definition of the multi-query optimization problem in such systems and propose some initial directions.
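
One source of the savings mentioned above, reusing intermediate result streams across queries, can be illustrated by detecting a shared plan prefix; the plan representation and queries below are hypothetical, not the paper's algorithm.

```python
# Illustrative detection of a shareable subplan between two continuous
# queries: if both apply the same filter to the same source, the filtered
# intermediate stream can be computed once and fanned out.
def plan(query):
    """A query is a tuple (source, filter_expr, aggregate_expr)."""
    source, filt, agg = query
    return [("scan", source), ("filter", filt), ("aggregate", agg)]

def shared_prefix(plan_a, plan_b):
    prefix = []
    for step_a, step_b in zip(plan_a, plan_b):
        if step_a != step_b:
            break
        prefix.append(step_a)
    return prefix

q1 = ("sensor_stream", "temp > 30", "avg per 1 min")
q2 = ("sensor_stream", "temp > 30", "max per 5 min")
print(shared_prefix(plan(q1), plan(q2)))
# [('scan', 'sensor_stream'), ('filter', 'temp > 30')] -> compute once, reuse
```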


Data Management for Sensor Networks | 2004

Predictive filtering: a learning-based approach to data stream filtering

Vibhore Kumar; Brian F. Cooper; Shamkant B. Navathe

Recent years have witnessed an increasing interest in the filtering of distributed data streams, such as those produced by networked sensors. The focus is to conserve bandwidth and sensor battery power by limiting the number of updates sent from the source while maintaining an acceptable approximation of the value at the sink. We propose a novel technique called Predictive Filtering: matching predictors at the source and the sink simultaneously predict the next update, and the update is streamed only when the difference between the actual and the predicted value at the source exceeds a threshold. Different predictors can be plugged into our framework, and we present a comparison of the effectiveness of various predictors. Through experiments performed on a bee-motion tracking log, we demonstrate the effectiveness of our algorithm in limiting the number of updates while maintaining a good approximation of the streamed data at the sink.
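
The core loop can be sketched as follows, assuming a simple linear (last value plus last delta) predictor shared by source and sink; the predictor choice, threshold, and readings are illustrative, since the paper compares several predictors.

```python
# Sketch of predictive filtering: the source transmits a reading only when the
# shared predictor misses by more than the threshold; in between, the sink
# relies on the same predictor, so both sides stay in sync.
def predict(history):
    if len(history) < 2:
        return history[-1] if history else 0.0
    return history[-1] + (history[-1] - history[-2])   # linear extrapolation

def source_side(readings, threshold):
    """Return the (time, value) updates the source actually sends."""
    sent, history = [], []
    for t, value in enumerate(readings):
        if not history or abs(value - predict(history)) > threshold:
            sent.append((t, value))           # transmit to the sink
            history.append(value)             # both sides see the true value
        else:
            history.append(predict(history))  # sink's view: the prediction
    return sent

readings = [10.0, 12.0, 14.0, 16.0, 25.0, 34.0]
print(source_side(readings, threshold=1.0))  # updates sent at t=0, 1, and 4
```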

Collaboration


Dive into Vibhore Kumar's collaborations.

Top Co-Authors

Karsten Schwan (Georgia Institute of Technology)

Zhongtang Cai (Georgia Institute of Technology)

Greg Eisenhauer (Georgia Institute of Technology)