
Publication


Featured research published by Ludmila Cherkasova.


ACM/IFIP/USENIX International Conference on Middleware | 2006

Enforcing performance isolation across virtual machines in Xen

Diwaker Gupta; Ludmila Cherkasova; Robert C. Gardner; Amin Vahdat

Virtual machines (VMs) have recently emerged as the basis for allocating resources in enterprise settings and hosting centers. One benefit of VMs in these environments is the ability to multiplex several operating systems on hardware based on dynamically changing system characteristics. However, such multiplexing must often be done while observing per-VM performance guarantees or service level agreements. Thus, one important requirement in this environment is effective performance isolation among VMs. In this paper, we address performance isolation across virtual machines in Xen [1]. For instance, while Xen can allocate fixed shares of CPU among competing VMs, it does not currently account for work done on behalf of individual VMs in device drivers. Thus, the behavior of one VM can negatively impact resources available to other VMs even if appropriate per-VM resource limits are in place. In this paper, we present the design and evaluation of a set of primitives implemented in Xen to address this issue. First, XenMon accurately measures per-VM resource consumption, including work done on behalf of a particular VM in Xen's driver domains. Next, our SEDF-DC scheduler accounts for aggregate VM resource consumption in allocating CPU. Finally, ShareGuard limits the total amount of resources consumed in privileged and driver domains based on administrator-specified limits. Our performance evaluation indicates that our mechanisms effectively enforce performance isolation for a variety of workloads and configurations.
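The accounting idea above can be sketched in a few lines: CPU time that a driver domain spends on behalf of a guest VM is charged back to that VM, so an administrator-specified cap covers both direct and indirect usage. This is a minimal illustrative model, not Xen's actual XenMon/SEDF-DC/ShareGuard code; all class names and numbers are invented.

```python
# Sketch: charge driver-domain work back to the guest VM that caused it,
# then enforce the cap against the VM's aggregate (direct + charged) usage.
class VM:
    def __init__(self, name, cpu_cap_ms):
        self.name = name
        self.cpu_cap_ms = cpu_cap_ms   # administrator-specified limit
        self.direct_ms = 0.0           # CPU used by the VM itself
        self.charged_ms = 0.0          # driver-domain work done on its behalf

    def total_usage(self):
        return self.direct_ms + self.charged_ms

    def over_cap(self):
        return self.total_usage() > self.cpu_cap_ms

def charge_driver_work(vm, ms):
    """Attribute driver-domain CPU time (e.g. network I/O handling) to vm."""
    vm.charged_ms += ms

vm = VM("guest1", cpu_cap_ms=100)
vm.direct_ms = 60
charge_driver_work(vm, 50)   # heavy I/O processing in the driver domain
print(vm.over_cap())         # True: aggregate usage of 110 ms exceeds the 100 ms cap
```

Without the charge-back step, this VM would appear to use only 60 ms and stay under its cap while still degrading other VMs through the driver domain.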


Measurement and Modeling of Computer Systems | 2007

Comparison of the three CPU schedulers in Xen

Ludmila Cherkasova; Diwaker Gupta; Amin Vahdat

The primary motivation for enterprises to adopt virtualization technologies is to create a more agile and dynamic IT infrastructure -- with server consolidation, high resource utilization, the ability to quickly add and adjust capacity on demand -- while lowering total cost of ownership and responding more effectively to changing business conditions. However, effective management of virtualized IT environments introduces new and unique requirements, such as dynamically resizing and migrating virtual machines (VMs) in response to changing application demands. Such capacity management methods should work in conjunction with the underlying resource management mechanisms. In general, resource multiplexing and scheduling among virtual machines is poorly understood. CPU scheduling for virtual machines, for instance, has largely been borrowed from the process scheduling research in operating systems. However, it is not clear whether a straightforward port of process schedulers to VM schedulers would perform just as well. We use the open source Xen virtual machine monitor to perform a comparative evaluation of three different CPU schedulers for virtual machines. We analyze the impact of the choice of scheduler and its parameters on application performance, and discuss challenges in estimating the application resource requirements in virtualized environments.
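The schedulers compared in the paper are, at heart, weight-based proportional-share policies. The following is a minimal sketch of that policy idea only; it is not the implementation of any of the three Xen schedulers, and the weight values are illustrative.

```python
# Sketch of weight-based proportional CPU sharing: each VM receives a
# fraction of total CPU proportional to its configured weight.
def allocate_cpu(weights, total_cpu=100.0):
    """Split total_cpu (in percent) among VMs in proportion to their weights."""
    total_w = sum(weights.values())
    return {vm: total_cpu * w / total_w for vm, w in weights.items()}

shares = allocate_cpu({"vm1": 256, "vm2": 256, "vm3": 512})
print(shares)  # vm3 gets half the CPU; vm1 and vm2 a quarter each
```

The interesting scheduler differences the paper studies (work-conserving vs. non-work-conserving modes, I/O handling, parameter sensitivity) sit on top of this basic proportionality.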


IEEE Transactions on Computers | 2002

Session-based admission control: a mechanism for peak load management of commercial Web sites

Ludmila Cherkasova; Peter Phaal

We consider a new, session-based workload for measuring web server performance. We define a session as a sequence of a client's individual requests. Using a simulation model, we show that an overloaded web server can experience a severe loss of throughput measured as the number of completed sessions, compared against the server throughput measured in requests per second. Moreover, statistical analysis of completed sessions reveals that the overloaded web server discriminates against longer sessions. For e-commerce retail sites, longer sessions are typically the ones that would result in purchases, so they are precisely the ones for which the companies want to guarantee completion. To improve Web QoS for commercial Web servers, we introduce a session-based admission control (SBAC) to prevent a web server from becoming overloaded and to ensure that longer sessions can be completed. We show that a Web server augmented with the admission control mechanism is able to provide a fair guarantee of completion, for any accepted session, independent of session length. This provides a predictable and controllable platform for web applications and is a critical requirement for any e-business. Additionally, we propose two new adaptive admission control strategies, hybrid and predictive, aiming to optimize the performance of the SBAC mechanism. These new adaptive strategies are based on a self-tunable admission control function, which adjusts itself according to variations in traffic loads.
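The core SBAC decision can be sketched as follows: new sessions are rejected when measured utilization exceeds a threshold, but requests belonging to already-admitted sessions are always served, so accepted sessions can complete regardless of length. This is an illustrative reduction of the mechanism; the threshold value and class names are assumptions, and the paper's adaptive (hybrid/predictive) strategies would tune the threshold dynamically.

```python
# Sketch of session-based admission control (SBAC): admit-or-reject at
# session granularity, never mid-session.
class SessionAdmissionControl:
    def __init__(self, threshold=0.85):
        self.threshold = threshold   # illustrative utilization threshold
        self.admitted = set()

    def handle(self, session_id, utilization):
        """Return True if the request is served, False if rejected."""
        if session_id in self.admitted:
            return True                    # never abort an accepted session
        if utilization < self.threshold:
            self.admitted.add(session_id)  # admit the new session
            return True
        return False                       # overloaded: reject new sessions only

ac = SessionAdmissionControl()
print(ac.handle("s1", utilization=0.60))  # True: admitted under light load
print(ac.handle("s2", utilization=0.95))  # False: new session rejected
print(ac.handle("s1", utilization=0.95))  # True: s1 was already admitted
```

This is what yields the fairness property in the abstract: once admitted, a long session is never starved out by later overload.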


International Conference on Autonomic Computing | 2007

A Regression-Based Analytic Model for Dynamic Resource Provisioning of Multi-Tier Applications

Qi Zhang; Ludmila Cherkasova; Evgenia Smirni

The multi-tier implementation has become the industry standard for developing scalable client-server enterprise applications. Since these applications are performance sensitive, effective models for dynamic resource provisioning and for delivering quality of service to these applications become critical. Workloads in such environments are characterized by client sessions of interdependent requests with changing transaction mix and load over time, making model adaptivity to the observed workload changes a critical requirement for model effectiveness. In this work, we apply a regression-based approximation of the CPU demand of client transactions on given hardware. Then we use this approximation in an analytic model of a simple network of queues, each queue representing a tier, and show the approximation's effectiveness for modeling diverse workloads with a changing transaction mix over time. Using the TPC-W benchmark and its three different transaction mixes, we investigate factors that impact the efficiency and accuracy of the proposed performance prediction models. Experimental results show that this regression-based approach provides a simple and powerful solution for efficient capacity planning and resource provisioning of multi-tier applications under changing workload conditions.
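The regression step can be illustrated concretely: given per-interval transaction counts per type and the measured CPU utilization in each interval, least squares recovers an approximate CPU cost per transaction type. This sketch uses two transaction types and the normal equations (solved by Cramer's rule to stay dependency-free); the counts and utilization numbers are made up for illustration.

```python
# Sketch: estimate per-transaction-type CPU cost from aggregate observations.
def fit_cpu_costs(counts, util):
    """counts: list of (n1, n2) per interval; util: measured CPU per interval.
    Solves the 2-unknown least-squares problem via normal equations."""
    s11 = sum(n1 * n1 for n1, _ in counts)
    s12 = sum(n1 * n2 for n1, n2 in counts)
    s22 = sum(n2 * n2 for _, n2 in counts)
    b1 = sum(n1 * u for (n1, _), u in zip(counts, util))
    b2 = sum(n2 * u for (_, n2), u in zip(counts, util))
    det = s11 * s22 - s12 * s12
    return ((b1 * s22 - b2 * s12) / det, (s11 * b2 - s12 * b1) / det)

counts = [(100, 50), (200, 20), (50, 120)]   # transactions per interval
util = [12.5, 21.0, 11.0]                    # generated with costs 0.1 and 0.05
c1, c2 = fit_cpu_costs(counts, util)
print(round(c1, 3), round(c2, 3))  # → 0.1 0.05
```

The fitted per-type costs then feed the queueing model as service demands at each tier.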


IEEE International Symposium on Workload Characterization | 2007

Workload Analysis and Demand Prediction of Enterprise Data Center Applications

Daniel Gmach; Jerry Rolia; Ludmila Cherkasova; Alfons Kemper

Advances in virtualization technology are enabling the creation of resource pools of servers that permit multiple application workloads to share each server in the pool. Understanding the nature of enterprise workloads is crucial to properly designing and provisioning current and future services in such pools. This paper considers issues of workload analysis, performance modeling, and capacity planning. Our goal is to automate the efficient use of resource pools when hosting large numbers of enterprise services. We use a trace-based approach for capacity management that relies on i) the characterization of workload demand patterns, ii) the generation of synthetic workloads that predict future demands based on the patterns, and iii) a workload placement recommendation service. The accuracy of capacity planning predictions depends on our ability to characterize workload demand patterns, to recognize trends for expected changes in future demands, and to reflect business forecasts for otherwise unexpected changes in future demands. A workload analysis demonstrates the burstiness and repetitive nature of enterprise workloads. Workloads are automatically classified according to their periodic behavior. The similarity among repeated occurrences of patterns is evaluated. Synthetic workloads are generated from the patterns in a manner that maintains the periodic nature, burstiness, and trending behavior of the workloads. A case study involving six months of data for 139 enterprise applications is used to apply and evaluate the enterprise workload analysis and related capacity planning methods. The results show that when consolidating to 8-processor systems, we predicted future per-server required capacity to within one processor 95% of the time. The accuracy of predictions for required capacity suggests that such resource savings can be achieved with little risk.
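Classifying a workload by its periodic behavior can be illustrated with a simple autocorrelation scan: the lag with the strongest self-similarity reveals the repeating period. This is an illustrative stand-in for the paper's pattern characterization, not its actual method; the demand trace and lag range are invented.

```python
# Sketch: find the dominant period of a demand trace via autocorrelation.
def autocorr(x, lag):
    """Normalized autocorrelation of sequence x at the given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    return cov / var

demand = [10, 50, 80, 50, 10, 5] * 8          # repetitive pattern, period 6
best = max(range(2, 12), key=lambda lag: autocorr(demand, lag))
print(best)  # → 6: the trace repeats every 6 samples
```

A real enterprise trace would be noisier, so the paper also evaluates how similar the repeated occurrences of a pattern actually are before trusting the classification.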


Measurement and Modeling of Computer Systems | 2000

Evaluating content management techniques for Web proxy caches

Martin F. Arlitt; Ludmila Cherkasova; John Dilley; Richard J. Friedrich; Tai Jin

The continued growth of the World-Wide Web and the emergence of new end-user technologies such as cable modems necessitate the use of proxy caches to reduce latency, network traffic and Web server loads. Current Web proxy caches utilize simple replacement policies to determine which files to retain in the cache. We utilize a trace of client requests to a busy Web proxy in an ISP environment to evaluate the performance of several existing replacement policies and of two new, parameterless replacement policies that we introduce in this paper. Finally, we introduce Virtual Caches, an approach for improving the performance of the cache for multiple metrics simultaneously.
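One family of policies evaluated in this line of work is Greedy-Dual-Size with frequency (GDSF): each cached file gets priority L + frequency / size, the lowest-priority file is evicted first, and the aging term L rises to each evicted priority so stale files cannot linger. The sketch below illustrates that policy family under those assumptions; it is not the paper's exact implementation, and the file names and sizes are invented.

```python
# Sketch of a Greedy-Dual-Size-Frequency style proxy cache: favors small,
# frequently accessed files, with an aging term L to retire stale entries.
class GDSFCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.L = 0.0
        self.files = {}   # name -> (priority, size, freq)

    def access(self, name, size):
        if name in self.files:
            _, size, freq = self.files[name]
            freq += 1                                # frequency boost on a hit
        else:
            freq = 1
            while self.used + size > self.capacity and self.files:
                victim = min(self.files, key=lambda f: self.files[f][0])
                self.L = self.files[victim][0]       # age the cache
                self.used -= self.files[victim][1]
                del self.files[victim]
            self.used += size
        self.files[name] = (self.L + freq / size, size, freq)

cache = GDSFCache(capacity=100)
cache.access("big.iso", 80)
cache.access("page.html", 10)
cache.access("page.html", 10)    # second hit raises page.html's priority
cache.access("logo.png", 30)     # forces an eviction
print("big.iso" in cache.files, "page.html" in cache.files)  # → False True
```

The size term is what lifts hit ratio (many small files retained), while the frequency term protects genuinely popular content, which is why such policies score well on multiple metrics at once.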


International Middleware Conference | 2008

Profiling and Modeling Resource Usage of Virtualized Applications

Timothy Wood; Ludmila Cherkasova; Kivanc M. Ozonat; Prashant J. Shenoy

Next Generation Data Centers are transforming labor-intensive, hard-coded systems into shared, virtualized, automated, and fully managed adaptive infrastructures. Virtualization technologies promise great opportunities for reducing energy and hardware costs through server consolidation. However, to safely transition an application running natively on real hardware to a virtualized environment, one needs to estimate the additional resource requirements incurred by virtualization overheads. In this work, we design a general approach for estimating the resource requirements of applications when they are transferred to a virtual environment. Our approach has two key components: a set of microbenchmarks to profile the different types of virtualization overhead on a given platform, and a regression-based model that maps the native system usage profile into a virtualized one. This derived model can be used for estimating resource requirements of any application to be virtualized on a given platform. Our approach aims to eliminate error-prone manual processes and presents a fully automated solution. We illustrate the effectiveness of our methodology using the Xen virtual machine monitor. Our evaluation shows that our automated model generation procedure effectively characterizes the different virtualization overheads of two diverse hardware platforms and that the models have a median prediction error of less than 5% for both the RUBiS and TPC-W benchmarks.
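Once the microbenchmarks have produced model coefficients for a platform, applying the model is just a linear map from the native usage profile to a predicted virtualized CPU requirement. The coefficients, intercept, and metric names below are invented for illustration, not measured on any platform.

```python
# Sketch: apply a (hypothetical) pre-fitted linear model that maps a native
# usage profile (CPU %, network and disk rates) to virtualized CPU demand.
COEFFS = {"cpu": 1.25, "net_kpps": 0.5, "disk_kiops": 0.5}  # invented values
INTERCEPT = 2.0                                             # invented value

def predict_virtual_cpu(native_profile):
    """Linear model: virtual CPU% = intercept + sum(coef * native metric)."""
    return INTERCEPT + sum(COEFFS[k] * v for k, v in native_profile.items())

native = {"cpu": 40.0, "net_kpps": 5.0, "disk_kiops": 2.0}
print(predict_virtual_cpu(native))  # → 55.5
```

The I/O terms are the interesting part: in paravirtualized systems, network and disk activity consume extra CPU in the hypervisor and driver domain, which a naive "copy the native CPU number" estimate would miss.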


Computer Networks | 2009

Resource pool management: Reactive versus proactive or let's be friends

Daniel Gmach; Jerry Rolia; Ludmila Cherkasova; Alfons Kemper

The consolidation of multiple workloads and servers enables the efficient use of server and power resources in shared resource pools. We employ a trace-based workload placement controller that uses historical information to periodically and proactively reassign workloads to servers subject to their quality of service objectives. A reactive migration controller is introduced that detects server overload and underload conditions. It initiates the migration of workloads when the demand for resources exceeds supply. Furthermore, it dynamically adds and removes servers to maintain a balance of supply and demand for capacity while minimizing power usage. A host load simulation environment is used to evaluate several different management policies for the controllers in a time effective manner. A case study involving three months of data for 138 SAP applications compares three integrated controller approaches with the use of each controller separately. The study considers trade-offs between: (i) required capacity and power usage, (ii) resource access quality of service for CPU and memory resources, and (iii) the number of migrations. Our study sheds light on the question of whether a reactive controller or proactive workload placement controller alone is adequate for resource pool management. The results show that the most tightly integrated controller approach offers the best results in terms of capacity and quality but requires more migrations per hour than the other strategies.
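The reactive controller's core decision can be sketched as an overload test plus a migration pick. The threshold and the pick-largest-workload heuristic below are illustrative assumptions, not the paper's tuned policy.

```python
# Sketch of a reactive migration controller: flag a server whose demand
# exceeds a utilization threshold and nominate a workload to migrate away.
def find_migration(servers, overload=0.85):
    """servers: {name: {'capacity': c, 'workloads': {wl: demand}}}.
    Return (workload, source_server) to migrate, or None if none is overloaded."""
    for name, s in servers.items():
        util = sum(s["workloads"].values()) / s["capacity"]
        if util > overload:
            # heuristic: move the largest workload to relieve the most pressure
            victim = max(s["workloads"], key=s["workloads"].get)
            return victim, name
    return None

servers = {
    "srv1": {"capacity": 100, "workloads": {"sap_a": 60, "sap_b": 35}},
    "srv2": {"capacity": 100, "workloads": {"sap_c": 30}},
}
print(find_migration(servers))  # → ('sap_a', 'srv1'): srv1 is at 95% utilization
```

The proactive placement controller works on a longer horizon from historical traces; the study's finding is that integrating both (plus reactive server add/remove) beats either alone, at the cost of more migrations per hour.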


Network and Operating System Support for Digital Audio and Video | 2002

Characterizing locality, evolution, and life span of accesses in enterprise media server workloads

Ludmila Cherkasova; Minaxi Gupta

The main issue we address in this paper is the workload analysis of today's enterprise media servers. This analysis aims to establish a set of properties specific to enterprise media server workloads and to compare them with well-known related observations about web server workloads. We propose two new metrics to characterize the dynamics and evolution of the accesses, and the rate of change in the site access pattern, and illustrate them with the analysis of two different enterprise media server workloads collected over a significant period of time. Another goal of our workload analysis study is to develop a media server log analysis tool, called MediaMetrics, that produces a media server traffic access profile and its system resource usage in a way useful to service providers.
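A rate-of-change measure in the spirit of the paper can be sketched as the fraction of one interval's accesses that go to files not seen in the previous interval. The exact definition here is an assumption for illustration; the paper's two metrics are defined over its measured traces.

```python
# Sketch: fraction of the current interval's accesses hitting "new" content,
# i.e. files absent from the previous interval's access log.
def new_access_fraction(prev_interval, cur_interval):
    seen = set(prev_interval)
    new = sum(1 for f in cur_interval if f not in seen)
    return new / len(cur_interval)

day1 = ["a.mp4", "b.mp4", "a.mp4"]
day2 = ["a.mp4", "c.mp4", "d.mp4", "c.mp4"]
print(new_access_fraction(day1, day2))  # → 0.75
```

A high value signals a fast-churning site where caching yesterday's popular files helps little; a low value signals a stable access pattern.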


Network and Operating System Support for Digital Audio and Video | 2003

MediSyn: a synthetic streaming media service workload generator

Wenting Tang; Yun Fu; Ludmila Cherkasova; Amin Vahdat

Currently, Internet hosting centers and content distribution networks leverage statistical multiplexing to meet the performance requirements of a number of competing hosted network services. Developing efficient resource allocation mechanisms for such services requires an understanding of both the short-term and long-term behavior of client access patterns to these competing services. At the same time, streaming media services are becoming increasingly popular, presenting new challenges for designers of shared hosting services. These new challenges result from fundamentally new characteristics of streaming media relative to traditional web objects, principally different client access patterns and significantly larger computational and bandwidth overhead associated with a streaming request. To understand the characteristics of these new workloads we use two long-term traces of streaming media services to develop MediSyn, a publicly available streaming media workload generator. In summary, this paper makes the following contributions: i) we model the long-term behavior of network services capturing the process of file introduction and changing file popularity, ii) we present a novel generalized Zipf-like distribution that captures recently-observed popularity of both web objects and streaming media not captured by existing Zipf-like distributions, and iii) we capture a number of characteristics unique to streaming media services, including file duration, encoding bit rate, session duration and non-stationary popularity of media accesses.
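The Zipf-like family that the paper generalizes assigns popularity p_i ∝ 1/i^alpha to the file of rank i. The sketch below shows that baseline only (the paper's contribution is a generalized form that better fits observed media popularity); alpha and the catalog size are made-up parameters.

```python
# Sketch of a Zipf-like popularity distribution over a ranked file catalog.
def zipf_popularity(n_files, alpha=1.0):
    """Return access probabilities p_i proportional to 1 / rank^alpha."""
    weights = [1.0 / (i ** alpha) for i in range(1, n_files + 1)]
    total = sum(weights)
    return [w / total for w in weights]

probs = zipf_popularity(4)
print(round(probs[0] / probs[1], 2))  # → 2.0: rank 1 is twice as popular as rank 2
```

With alpha = 1, each step down in rank roughly divides popularity by the rank ratio; a workload generator like MediSyn then layers file introduction and non-stationary popularity on top of such a base distribution.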

Collaboration


An overview of Ludmila Cherkasova's collaborations.

Top Co-Authors

Zhuoyao Zhang, University of Pennsylvania
Boon Thau Loo, University of Pennsylvania
Daniel Gmach, Technische Universität München