Network

External collaborations at the country level.

Hotspot

Research topics where Deepal Jayasinghe is active.

Publication


Featured research published by Deepal Jayasinghe.


IEEE International Conference on Services Computing | 2011

Improving Performance and Availability of Services Hosted on IaaS Clouds with Structural Constraint-Aware Virtual Machine Placement

Deepal Jayasinghe; Calton Pu; Tamar Eilam; Malgorzata Steinder; Ian Whalley; Ed C. Snible

The increasing popularity of modern virtualization-based datacenters continues to motivate both industry and academia to provide answers to a large variety of new and challenging questions. In this paper we focus on one such question: how to improve the performance and availability of services hosted on IaaS clouds. Our system, structural constraint-aware virtual machine placement (SCAVP), supports three types of constraints: demand, communication, and availability. We formulate SCAVP as an optimization problem and show its hardness. We design a hierarchical placement approach with four approximation algorithms that efficiently solves the SCAVP problem for large problem sizes. We provide a formal model for the application (to better understand structural constraints) and the datacenter (to effectively capture capabilities), and use the two models as inputs to the placement problem. We evaluate SCAVP in a simulated environment to illustrate the efficiency and importance of the proposed approach.
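The interplay of the constraint types can be made concrete with a toy example. The sketch below is illustrative only and is not SCAVP's hierarchical algorithm: a naive greedy heuristic that honors a demand (capacity) constraint and an availability (anti-colocation) constraint; every name and number is hypothetical.

```python
# Illustrative sketch only, NOT the paper's algorithm: greedy VM placement
# that enforces a demand constraint (host capacity) and an availability
# constraint (replicas in one anti-affinity group go on distinct hosts).

def greedy_place(vms, hosts):
    """vms: list of (vm_id, cpu_demand, anti_affinity_group).
    hosts: dict host_id -> free CPU capacity (mutated in place).
    Returns {vm_id: host_id}; raises ValueError if no feasible host exists."""
    placement = {}
    used_groups = {}  # host_id -> set of anti-affinity groups already placed
    for vm_id, demand, group in sorted(vms, key=lambda v: -v[1]):  # largest first
        for host, free in sorted(hosts.items(), key=lambda h: -h[1]):
            # demand: host must have capacity left;
            # availability: no two replicas of a group on the same host
            if free >= demand and group not in used_groups.get(host, set()):
                placement[vm_id] = host
                hosts[host] -= demand
                used_groups.setdefault(host, set()).add(group)
                break
        else:
            raise ValueError(f"no feasible host for {vm_id}")
    return placement
```

Real placement is far harder (hence the paper's hardness result and approximation algorithms), but the sketch shows why availability constraints interact with capacity: spreading replicas consumes headroom on more hosts.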


International Conference on Distributed Computing Systems | 2011

Economical and Robust Provisioning of N-Tier Cloud Workloads: A Multi-level Control Approach

Pengcheng Xiong; Zhikui Wang; Simon Malkowski; Qingyang Wang; Deepal Jayasinghe; Calton Pu

Resource provisioning for N-tier web applications in Clouds is non-trivial for at least two reasons. First, there is an inherent optimization conflict between cost of resources and Service Level Agreement (SLA) compliance. Second, the resource demands of the multiple tiers can differ from each other and vary over time. Resources have to be allocated to multiple (virtual) containers to minimize the total amount of resources while meeting the end-to-end performance requirements for the application. In this paper we address these two challenges through the combination of resource controllers on both the application and container levels. On the application level, a decision maker (i.e., an adaptive feedback controller) determines the total budget of the resources that are required for the application to meet SLA requirements as the workload varies. On the container level, a second controller partitions the total resource budget among the components of the application to optimize the application performance (i.e., to minimize the round trip time). We evaluated our method with three different workload models -- open, closed, and semi-open -- that were implemented in the RUBiS web application benchmark. Our evaluation indicates two major advantages of our method in comparison to previous approaches. First, fewer resources are provisioned to the applications to achieve the same performance. Second, our approach is robust enough to address various types of workloads with time-varying resource demand without reconfiguration.
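The two-level structure can be sketched in a few lines. This is a minimal illustration under assumed names and gains, not the paper's adaptive controller: the application level nudges the total budget toward SLA compliance, and the container level splits that budget across tiers in proportion to measured demand.

```python
# Hypothetical sketch of the two control levels; the gain value and
# proportional partitioning rule are assumptions for illustration only.

def adjust_budget(budget, measured_rtt, sla_rtt, gain=0.5):
    """Application level: grow the total resource budget when response
    time violates the SLA, shrink it when there is headroom."""
    error = (measured_rtt - sla_rtt) / sla_rtt  # relative SLA violation
    return max(0.0, budget * (1.0 + gain * error))

def partition(budget, tier_demands):
    """Container level: split the budget across tiers in proportion
    to each tier's measured resource demand."""
    total = sum(tier_demands.values())
    return {tier: budget * d / total for tier, d in tier_demands.items()}
```

For example, a 20% response-time violation with gain 0.5 grows a budget of 100 units to 110, which the second level then divides among the web, application, and database tiers by demand.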


International Conference on Cloud Computing | 2011

Variations in Performance and Scalability When Migrating n-Tier Applications to Different Clouds

Deepal Jayasinghe; Simon Malkowski; Qingyang Wang; Jack Li; Pengcheng Xiong; Calton Pu

The increasing popularity of computing clouds continues to drive both industry and research to provide answers to a large variety of new and challenging questions. We aim to answer some of these questions by evaluating performance and scalability when an n-tier application is migrated from a traditional datacenter environment to an IaaS cloud. We used a representative n-tier macro-benchmark (RUBBoS) and compared its performance and scalability in three different test beds: Amazon EC2, Open Cirrus (an open scientific research cloud), and Emulab (academic research test bed). Interestingly, we found that the best-performing configuration in Emulab can become the worst-performing configuration in EC2. Subsequently, we identified the bottleneck components, high context switch overhead and network driver processing overhead, to be at the system level. These overhead problems were confirmed at a finer granularity through micro-benchmark experiments that measure component performance directly. We describe concrete alternative approaches as practical solutions for resolving these problems.


International Parallel and Distributed Processing Symposium | 2011

The Impact of Soft Resource Allocation on n-Tier Application Scalability

Qingyang Wang; Simon Malkowski; Yasuhiko Kanemasa; Deepal Jayasinghe; Pengcheng Xiong; Calton Pu; Motoyuki Kawaba; Lilian Harada

Good performance and efficiency, in terms of high quality of service and resource utilization for example, are important goals in a cloud environment. Through extensive measurements of an n-tier application benchmark (RUBBoS), we show that overall system performance is surprisingly sensitive to appropriate allocation of soft resources (e.g., server thread pool size). Inappropriate soft resource allocation can quickly and significantly degrade overall application performance. Concretely, both under-allocation and over-allocation of the thread pool can lead to bottlenecks in other resources because of non-trivial dependencies. We have observed some non-obvious phenomena due to these correlated bottlenecks. For instance, the number of threads in the Apache web server can limit the total useful throughput, causing the CPU utilization of the C-JDBC clustering middleware to decrease as the workload increases. We provide a practical iterative solution approach to this challenge through an algorithmic combination of operational queuing laws and measurement data. Our results show that soft resource allocation plays a central role in the performance scalability of complex systems such as n-tier applications in cloud environments.
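One of the operational laws involved is easy to state. The sketch below is not the paper's iterative algorithm, only the classic Little's law step (N = X * R) one might use to estimate how many threads a pool needs to sustain a measured throughput; the numbers in the usage example are hypothetical.

```python
import math

# Little's law: average concurrency N = throughput X times residence time R.
# A thread pool smaller than N throttles throughput; a far larger pool
# wastes memory and adds contention, matching the paper's observation that
# both under- and over-allocation hurt.

def threads_needed(throughput_rps, avg_residence_time_s):
    """Minimum thread-pool size to sustain the given request rate when
    each request occupies a thread for avg_residence_time_s seconds."""
    return math.ceil(throughput_rps * avg_residence_time_s)
```

For instance, sustaining 20 requests/s when each request holds a thread for 0.5 s requires at least 10 threads; measurement data then refines such estimates iteratively.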


International Congress on Big Data | 2013

Performance Overhead among Three Hypervisors: An Experimental Study Using Hadoop Benchmarks

Jack Li; Qingyang Wang; Deepal Jayasinghe; Junhee Park; Tao Zhu; Calton Pu

Hypervisors are widely used in cloud environments, and their impact on application performance has been a topic of significant research and practical interest. We conducted experimental measurements of several benchmarks using Hadoop MapReduce to evaluate and compare the performance impact of three popular hypervisors: a commercial hypervisor (CVM), Xen, and KVM. We found that differences in workload type (CPU- or I/O-intensive), workload size, and VM placement yielded significant performance differences among the hypervisors. In our study, we used the three hypervisors to run several MapReduce benchmarks, such as Word Count, TestDFSIO, and TeraSort, and further validated our observations using micro-benchmarks. For the CPU-bound benchmark, the performance difference between the three hypervisors was negligible; however, significant performance variations were seen for I/O-bound benchmarks. Moreover, adding more virtual machines on the same physical host degraded the performance on all three hypervisors, yet we observed different degradation trends amongst them. Concretely, the commercial hypervisor is 46% faster at TestDFSIO Write than KVM, but 49% slower in the TeraSort benchmark. In addition, increasing the workload size for TeraSort yielded completion times for CVM that were two times those of Xen and KVM. The performance differences between the hypervisors suggest that further analysis and consideration of hypervisors are needed when deploying applications to cloud environments.


International Conference on Cloud Computing | 2012

Expertus: A Generator Approach to Automate Performance Testing in IaaS Clouds

Deepal Jayasinghe; Galen S. Swint; Simon Malkowski; Jack Li; Qingyang Wang; Junhee Park; Calton Pu

Cloud computing is an emerging technology paradigm that revolutionizes the computing landscape by providing on-demand delivery of software, platform, and infrastructure over the Internet. Yet, architecting, deploying, and configuring enterprise applications to run well on modern clouds remains a challenge due to associated complexities and non-trivial implications. The natural and presumably unbiased approach to these questions is thorough testing before moving applications to production settings. However, thorough testing of enterprise applications on modern clouds is cumbersome and error-prone due to the large number of relevant scenarios and difficulties in the testing process. We address some of these challenges through Expertus---a flexible code generation framework for automated performance testing of distributed applications in Infrastructure as a Service (IaaS) clouds. Expertus uses a multi-pass compiler approach and leverages template-driven code generation to modularly incorporate different software applications on IaaS clouds. Expertus automatically handles complex configuration dependencies of software applications and significantly reduces human errors associated with manual approaches to software configuration and testing. To date, Expertus has been used to study three distributed applications on five IaaS clouds with over 10,000 different hardware, software, and virtualization configurations. The flexibility and extensibility of Expertus and our own experience using it show that new clouds, applications, and software packages can easily be incorporated.
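The core substitution step of template-driven generation is easy to illustrate. The toy below uses Python's stdlib `string.Template` and is not Expertus's multi-pass generator; the template text and field names are hypothetical.

```python
# Toy illustration of template-driven config generation (hypothetical
# template and fields; Expertus itself performs multiple compiler-style
# passes and resolves inter-package configuration dependencies).
from string import Template

deploy_tmpl = Template(
    "host=$host\nthreads=$threads\nbenchmark=$benchmark\n")

def render_configs(hosts, thread_counts, benchmark):
    """Expand one template into a concrete config per (host, thread-count)
    pair, the way a generator fans one spec out into many test configs."""
    return [deploy_tmpl.substitute(host=h, threads=t, benchmark=benchmark)
            for h in hosts for t in thread_counts]
```

Even this toy shows the payoff: one spec fans out into every host/parameter combination mechanically, which is how thousands of configurations become tractable without manual editing.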


ACM Symposium on Applied Computing | 2010

CloudXplor: a tool for configuration planning in clouds based on empirical data

Simon Malkowski; Markus Hedwig; Deepal Jayasinghe; Calton Pu; Dirk Neumann

Configuration planning for modern information systems is a highly challenging task due to the implications of various factors such as the cloud paradigm, multi-bottleneck workloads, and Green IT efforts. Nonetheless, there is currently little or no support to help decision makers find sustainable configurations that are systematically designed according to economic principles (e.g., profit maximization). This paper explicitly addresses this shortcoming and presents a novel approach to configuration planning in clouds based on empirical data. The main contribution of this paper is our unique approach to configuration planning based on an iterative and interactive data refinement process. More concretely, our methodology correlates economic goals with sound technical data to derive intuitive domain insights. We have implemented our methodology as the CloudXplor tool to provide a proof of concept and exemplify a concrete use case. CloudXplor, which can be modularly embedded in generic resource management frameworks, illustrates the benefits of empirical configuration planning. In general, this paper is a working example of how to navigate large quantities of technical data to provide a solid foundation for economic decisions.


International Conference on Cloud Computing | 2012

Challenges and Opportunities in Consolidation at High Resource Utilization: Non-monotonic Response Time Variations in n-Tier Applications

Simon Malkowski; Yasuhiko Kanemasa; Hanwei Chen; Masao Yamamoto; Qingyang Wang; Deepal Jayasinghe; Calton Pu; Motoyuki Kawaba

A central goal of cloud computing is high resource utilization through hardware sharing; however, utilization often remains modest in practice due to the challenges in predicting consolidated application performance accurately. We present a thorough experimental study of consolidated n-tier application performance at high utilization to address this issue through reproducible measurements. Our experimental method illustrates opportunities for increasing operational efficiency by making consolidated application performance more predictable in high utilization scenarios. The main focus of this paper is the non-trivial dependencies between SLA-critical response time degradation effects and software configurations (i.e., readily available tuning knobs). Methodologically, we directly measure and analyze the resource utilizations, request rates, and performance of two consolidated n-tier application benchmark systems (RUBBoS) in an enterprise-level computer virtualization environment. We find that monotonically increasing the workload of an n-tier application system may unexpectedly spike the overall response time of another co-located system by 300 percent despite stable throughput. Based on these findings, we derive a software configuration best-practice to mitigate such non-monotonic response time variations by enabling higher request-processing concurrency (e.g., more threads) in all tiers. More generally, this experimental study increases our quantitative understanding of the challenges and opportunities in the widely used (but seldom supported, quantified, or even mentioned) hypothesis that applications consolidate with linear performance in cloud environments.


International Conference on Distributed Computing Systems | 2013

Detecting Transient Bottlenecks in n-Tier Applications through Fine-Grained Analysis

Qingyang Wang; Yasuhiko Kanemasa; Jack Li; Deepal Jayasinghe; Toshihiro Shimizu; Masazumi Matsubara; Motoyuki Kawaba; Calton Pu

Identifying the location of performance bottlenecks is a non-trivial challenge when scaling n-tier applications in computing clouds. Specifically, we observed that an n-tier application may experience significant performance loss when there are transient bottlenecks in component servers. Such transient bottlenecks arise frequently at high resource utilization and often result from transient events (e.g., JVM garbage collection) in an n-tier system and bursty workloads. Because of their short lifespan (e.g., milliseconds), these transient bottlenecks are difficult to detect using current system monitoring tools with sampling at intervals of seconds or minutes. We describe a novel transient bottleneck detection method that correlates throughput (i.e., request service rate) and load (i.e., number of concurrent requests) of each server in an n-tier system at fine time granularity. Both throughput and load can be measured through passive network tracing at millisecond-level time granularity. Using correlation analysis, we can identify the transient bottlenecks at time granularities as short as 50ms. We validate our method experimentally through two case studies on transient bottlenecks caused by factors at the system software layer (e.g., JVM garbage collection) and architecture layer (e.g., Intel SpeedStep).
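The detection idea of correlating load and throughput per window can be sketched simply. This is a hedged illustration, not the paper's exact correlation analysis: a window is flagged when the server's throughput is pinned near its ceiling while its load has climbed past a knee, i.e., extra concurrent requests no longer buy extra throughput. The knee, ceiling, and slack values are hypothetical calibration parameters.

```python
# Hedged sketch of fine-grained bottleneck flagging (hypothetical
# parameters; the paper derives the signal via correlation analysis of
# passively traced load and throughput at millisecond granularity).

def flag_transient_bottlenecks(windows, load_knee, tput_ceiling, slack=0.95):
    """windows: list of (load, throughput) pairs, one per 50 ms interval.
    Flags windows where load exceeds the knee while throughput sits
    within `slack` of the saturation ceiling, i.e., the server is
    momentarily saturated. Returns the flagged window indices."""
    return [i for i, (load, tput) in enumerate(windows)
            if load >= load_knee and tput >= slack * tput_ceiling]
```

Because each window covers only tens of milliseconds, a bottleneck lasting a few hundred milliseconds (e.g., one JVM garbage-collection pause) shows up as a short run of flagged windows that second-granularity monitoring would average away.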


IEEE Transactions on Services Computing | 2014

Variations in Performance and Scalability: An Experimental Study in IaaS Clouds Using Multi-Tier Workloads

Deepal Jayasinghe; Simon Malkowski; Jack Li; Qingyang Wang; Zhikui Wang; Calton Pu

The increasing popularity of clouds drives researchers to find answers to a large variety of new and challenging questions. Through extensive experimental measurements, we show variance in the performance and scalability of clouds for two non-trivial scenarios. In the first scenario, we target public Infrastructure as a Service (IaaS) clouds, and study the case when a multi-tier application is migrated from a traditional datacenter to one of three IaaS clouds. To validate our findings in the first scenario, we conducted a similar study with three private clouds built using three mainstream hypervisors. We used the RUBBoS benchmark application and compared its performance and scalability when hosted in Amazon EC2, Open Cirrus, and Emulab. Our results show that a best-performing configuration in one cloud can become the worst-performing configuration in another cloud. Subsequently, we identified several system-level bottlenecks, such as high context switching and network driver processing overheads, that degraded the performance. We experimentally evaluate concrete alternative approaches as practical solutions to address these problems. We then built the three private clouds using a commercial hypervisor (CVM), Xen, and KVM, respectively, and evaluated performance characteristics using both the RUBBoS and Cloudstone benchmark applications. The three clouds show significant performance variations; for instance, Xen outperforms CVM by 75 percent on the read-write RUBBoS workload and CVM outperforms Xen by over 10 percent on the Cloudstone workload. These observed problems were confirmed at a finer granularity through micro-benchmark experiments that measure component performance directly.

Collaboration


Deepal Jayasinghe's top co-authors, all affiliated with the Georgia Institute of Technology:

- Calton Pu
- Qingyang Wang
- Jack Li
- Simon Malkowski
- Junhee Park
- Pengcheng Xiong
- Tao Zhu