
Publication


Featured research published by Chee Shin Yeo.


High Performance Computing and Communications | 2008

Market-Oriented Cloud Computing: Vision, Hype, and Reality for Delivering IT Services as Computing Utilities

Rajkumar Buyya; Chee Shin Yeo; Srikumar Venugopal

This keynote paper presents a 21st century vision of computing and identifies various computing paradigms promising to deliver the vision of computing utilities. It defines Cloud computing and provides the architecture for creating market-oriented Clouds by leveraging technologies such as virtual machines (VMs). It offers thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation. It presents some representative Cloud platforms, especially those developed in industry, along with our current work towards realising market-oriented resource allocation of Clouds by leveraging the third-generation Aneka enterprise Grid technology. It reveals our early thoughts on interconnecting Clouds for dynamically creating an atmospheric computing environment, along with pointers to future community research, and concludes with the need for convergence of competing IT paradigms for delivering our 21st century vision.


Journal of Parallel and Distributed Computing | 2011

Environment-conscious scheduling of HPC applications on distributed Cloud-oriented data centers

Saurabh Kumar Garg; Chee Shin Yeo; Arun Anandasivam; Rajkumar Buyya

The use of High Performance Computing (HPC) in commercial and consumer IT applications is becoming popular. HPC users need the ability to gain rapid and scalable access to high-end computing capabilities. Cloud computing promises to deliver such a computing infrastructure using data centers, so that HPC users can access applications and data from a Cloud anywhere in the world on demand and pay only for what they use. However, the growing demand drastically increases the energy consumption of data centers, which has become a critical issue. High energy consumption not only translates to high energy cost, which reduces the profit margin of Cloud providers, but also to high carbon emissions, which are not environmentally sustainable. Hence, there is an urgent need for energy-efficient solutions that address this increase in energy consumption from the perspective of both the Cloud provider and the environment. To address this issue, we propose near-optimal scheduling policies that exploit heterogeneity across multiple data centers for a Cloud provider. We consider a number of energy efficiency factors (such as energy cost, carbon emission rate, workload, and CPU power efficiency) which change across data centers depending on their location, architectural design, and management system. Our carbon/energy based scheduling policies achieve on average up to 25% energy savings compared to profit based scheduling policies, leading to higher profit and lower carbon emissions.
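
The following is a minimal sketch, not the paper's actual policies, of the kind of carbon/energy-aware placement the abstract describes: each data center is characterised by a few efficiency factors and jobs are greedily sent to the site that emits the least carbon per unit of useful work. All class names, parameters, and figures are hypothetical.

```python
# Illustrative sketch (not the paper's exact policy): rank data centers by the
# carbon emitted per unit of useful work and greedily place each HPC job on the
# greenest data center that still has capacity. All numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    carbon_rate: float     # kg CO2 per kWh at this site (hypothetical)
    power_per_core: float  # kW drawn per active core
    cpu_efficiency: float  # useful work units delivered per core-hour
    free_cores: int

    def carbon_per_work_unit(self) -> float:
        # kg CO2 emitted for each unit of useful work completed here
        return (self.carbon_rate * self.power_per_core) / self.cpu_efficiency

def schedule(jobs, data_centers):
    """Greedy carbon-aware placement: greenest feasible data center first."""
    placement = {}
    for job_id, cores_needed in jobs:
        candidates = [dc for dc in data_centers if dc.free_cores >= cores_needed]
        if not candidates:
            placement[job_id] = None  # rejected or queued
            continue
        best = min(candidates, key=DataCenter.carbon_per_work_unit)
        best.free_cores -= cores_needed
        placement[job_id] = best.name
    return placement

dcs = [DataCenter("US-East", 0.50, 0.10, 1.0, 512),
       DataCenter("Iceland", 0.05, 0.12, 0.9, 128)]
print(schedule([("job-1", 64), ("job-2", 256)], dcs))
```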


International Conference on Cluster Computing | 2005

Service Level Agreement based Allocation of Cluster Resources: Handling Penalty to Enhance Utility

Chee Shin Yeo; Rajkumar Buyya

Jobs submitted to a cluster have varying requirements depending on user-specific needs and expectations. Therefore, in utility-driven cluster computing, cluster resource management systems (RMSs) need to be aware of these requirements in order to allocate resources effectively. Service level agreements (SLAs) can be used to differentiate the value of different jobs, as they define the service conditions that the cluster RMS agrees to provide for each job. The SLA acts as a contract between a user and the cluster whereby the user is entitled to compensation whenever the cluster RMS fails to deliver the required service. In this paper, we present a proportional share allocation technique called LibraSLA that takes into account the utility of accepting new jobs into the cluster based on their SLAs. We study how LibraSLA performs with respect to several SLA requirements: (i) deadline type, i.e. whether the job can be delayed; (ii) deadline, by which the job needs to be finished; (iii) budget to be spent on finishing the job; and (iv) penalty rate for compensating the user when the deadline is missed.
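
A minimal sketch of the SLA utility idea the abstract enumerates, assuming a simple linear penalty model; this is not the LibraSLA algorithm itself, and all names and numbers are hypothetical.

```python
# A job yields its budget if it meets its deadline; a soft-deadline job loses
# value at its penalty rate once the deadline is passed; a hard-deadline job
# earns nothing when late. Admission hinges on whether the new job adds more
# utility than it destroys by delaying existing jobs.
from dataclasses import dataclass

@dataclass
class SLA:
    hard_deadline: bool   # (i) deadline type: can the job be delayed at all?
    deadline: float       # (ii) time by which the job should finish (hours)
    budget: float         # (iii) amount the user will pay on completion
    penalty_rate: float   # (iv) compensation per hour of lateness

def utility(sla: SLA, finish_time: float) -> float:
    if finish_time <= sla.deadline:
        return sla.budget
    if sla.hard_deadline:
        return 0.0                      # late hard-deadline job earns nothing
    lateness = finish_time - sla.deadline
    return sla.budget - sla.penalty_rate * lateness   # may go negative

def should_accept(new_job_utility: float, utility_lost_by_delaying_others: float) -> bool:
    # Accept only if the cluster's total expected utility does not drop.
    return new_job_utility > utility_lost_by_delaying_others

print(utility(SLA(False, 10.0, 100.0, 5.0), finish_time=14.0))  # 80.0
```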


Software: Practice and Experience | 2004

A taxonomy of computer-based simulations and its mapping to parallel and distributed systems simulation tools

Anthony Sulistio; Chee Shin Yeo; Rajkumar Buyya

In recent years, extensive research has been conducted in the area of simulation to model large complex systems and understand their behavior, especially in parallel and distributed systems. At the same time, a variety of design principles and approaches for computer-based simulation have evolved. As a result, an increasing number of simulation tools have been designed and developed. Therefore, the aim of this paper is to develop a comprehensive taxonomy for the design of computer-based simulations, and to apply this taxonomy to categorize and analyze various simulation tools for parallel and distributed systems.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2007

Pricing for Utility-Driven Resource Management and Allocation in Clusters

Chee Shin Yeo; Rajkumar Buyya

Users perceive varying levels of utility for each job completed by the cluster. Therefore, existing cluster resource management systems (RMSs) need to provide a means for users to express their perceived utility during job submission. The cluster RMS can then obtain and consider these user-centric needs, such as Quality-of-Service requirements, in order to achieve utility-driven resource management and allocation. We advocate the use of computational economy for this purpose. In this paper, we describe an architectural framework for a utility-driven cluster RMS. We present a user-level job submission specification for soliciting user-centric information that is used by the cluster RMS to make better resource allocation decisions. In addition, we propose a dynamic pricing function that the cluster owner can use to determine the level of sharing within a cluster. Finally, we define two user-centric performance evaluation metrics, Job QoS Satisfaction and Cluster Profitability, for measuring the effectiveness of the proposed pricing function in realizing utility-driven resource management and allocation.
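
One plausible shape for a dynamic pricing function of the kind the abstract mentions is sketched below: the charge per CPU-hour rises with cluster utilisation, so a busier cluster is shared only with users willing to pay more. The formula and parameters are illustrative assumptions, not the paper's actual function.

```python
# Hypothetical dynamic pricing: price grows without bound as the cluster
# approaches full utilisation, throttling demand from low-value jobs.
def dynamic_price(base_price: float, utilisation: float, alpha: float = 1.0) -> float:
    """Price per CPU-hour.

    base_price  -- static price the owner charges on an idle cluster
    utilisation -- fraction of the cluster currently allocated, in [0, 1)
    alpha       -- how aggressively the price grows as the cluster fills up
    """
    assert 0.0 <= utilisation < 1.0
    return base_price * (1.0 + alpha * utilisation / (1.0 - utilisation))

for u in (0.0, 0.5, 0.9):
    print(f"utilisation {u:.0%}: {dynamic_price(2.0, u):.2f} per CPU-hour")
```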


Future Generation Computer Systems | 2010

Autonomic metered pricing for a utility computing service

Chee Shin Yeo; Srikumar Venugopal; Xingchen Chu; Rajkumar Buyya

An increasing number of providers are offering utility computing services which require users to pay only when they use them. Most of these providers currently charge users for metered usage based on fixed prices. In this paper, we analyze the pros and cons of charging fixed prices as compared to variable prices. In particular, charging fixed prices does not differentiate pricing based on different user requirements. Hence, we highlight the importance of deploying an autonomic pricing mechanism that self-adjusts pricing parameters to consider both the application and service requirements of users. Performance results observed in an actual implementation of an enterprise Cloud show that the autonomic pricing mechanism is able to achieve higher revenue than various other common fixed and variable pricing mechanisms.
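
The contrast between fixed and variable metered pricing can be illustrated with the hypothetical sketch below; it is not the paper's mechanism. The autonomic idea is that a parameter like the urgency surcharge would be self-adjusted from observed demand rather than set by hand.

```python
# Fixed pricing charges the same rate regardless of user requirements, while a
# variable price can differentiate: tighter deadlines pay a surcharge, relaxed
# deadlines pay close to the base rate. All rates below are made up.
def fixed_price(cpu_hours: float, rate: float = 0.10) -> float:
    return cpu_hours * rate

def variable_price(cpu_hours: float, deadline_hours: float,
                   base_rate: float = 0.08, urgency_weight: float = 1.0) -> float:
    urgency = 1.0 / max(deadline_hours, 1.0)   # shorter deadline -> higher urgency
    return cpu_hours * base_rate * (1.0 + urgency_weight * urgency)

print(fixed_price(100))                         # same charge for every user
print(variable_price(100, deadline_hours=2))    # urgent request pays more
print(variable_price(100, deadline_hours=48))   # relaxed request pays less
```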


Future Generation Computer Systems | 2011

Optimizing the makespan and reliability for workflow applications with reputation and a look-ahead genetic algorithm

Xiaofeng Wang; Chee Shin Yeo; Rajkumar Buyya; Jinshu Su

For applications in large-scale distributed systems, it is becoming increasingly important to provide reliable scheduling by evaluating the reliability of resources. However, most existing reputation models used for reliability evaluation ignore the critical influence of task runtime. In addition, most previous work uses list heuristics to optimize the makespan and reliability of workflow applications instead of genetic algorithms (GAs), which can provide several satisfactory solutions to choose from. Hence, in this paper, we first propose the reliability-driven (RD) reputation, which is time dependent and can be used to effectively evaluate the reliability of a resource in widely distributed systems. We then propose a look-ahead genetic algorithm (LAGA) which utilizes the RD reputation to optimize both the makespan and the reliability of a workflow application. The LAGA uses a novel evolution and evaluation mechanism: (i) the evolution operators evolve the task-resource mapping of a scheduling solution and (ii) the evaluation step determines the task order of solutions by using our proposed max-min strategy, which is the first two-phase strategy that can work with GAs. Our experiments show that the RD reputation improves the reliability of an application with more accurate reputations, while the LAGA provides better solutions than existing list heuristics and evolves to better solutions more quickly than a traditional GA.
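
Below is a much-simplified sketch of a genetic algorithm over task-to-resource mappings in the spirit of, but not equivalent to, LAGA: fitness rewards both a short makespan and a high reliability estimated from exponential failure rates. The look-ahead/max-min task-ordering step and the RD reputation model are omitted, and all task and resource figures are hypothetical.

```python
import math, random

task_lengths = [4.0, 2.0, 6.0, 3.0]      # work units per task (hypothetical)
resource_speed = [1.0, 2.0]              # work units per hour
resource_fail_rate = [0.02, 0.05]        # failures per hour (exponential model)

def makespan(mapping):
    # Total time until the most loaded resource finishes its assigned tasks.
    load = [0.0] * len(resource_speed)
    for task, res in enumerate(mapping):
        load[res] += task_lengths[task] / resource_speed[res]
    return max(load)

def reliability(mapping):
    # Probability that no task run fails, assuming independent exponential failures.
    return math.prod(math.exp(-resource_fail_rate[r] * task_lengths[t] / resource_speed[r])
                     for t, r in enumerate(mapping))

def fitness(mapping):
    # Bi-objective scalarisation: shorter makespan and higher reliability are better.
    return reliability(mapping) / makespan(mapping)

def evolve(generations=200, pop_size=30, mutation_rate=0.1):
    pop = [[random.randrange(len(resource_speed)) for _ in task_lengths]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(task_lengths))
            child = a[:cut] + b[cut:]                       # one-point crossover
            if random.random() < mutation_rate:             # mutate one gene
                child[random.randrange(len(child))] = random.randrange(len(resource_speed))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, makespan(best), reliability(best))
```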


International Conference on Parallel Processing | 2011

Green cloud framework for improving carbon efficiency of clouds

Saurabh Kumar Garg; Chee Shin Yeo; Rajkumar Buyya

The energy efficiency of ICT has become a major issue with the growing demand for Cloud Computing. More and more companies are investing in building large datacenters to host Cloud services. These datacenters not only consume huge amounts of energy but are also very complex in their infrastructure. Many studies have proposed making these datacenters energy-efficient using technologies such as virtualization and consolidation. Still, these solutions are mostly cost-driven and thus do not directly address the critical impact on environmental sustainability in terms of CO2 emissions. Hence, in this work, we propose a user-oriented Cloud architectural framework, the Carbon Aware Green Cloud Architecture, which addresses this environmental problem from the perspective of the overall usage of Cloud Computing resources. We also present a case study on IaaS providers. Finally, we present future research directions to improve the carbon efficiency of Cloud Computing as a whole.
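
A back-of-the-envelope sketch of the carbon accounting behind "carbon efficiency" may help: the CO2 attributable to a workload is the energy it draws, including cooling overhead via the data center's PUE, multiplied by the carbon intensity of the site's electricity. The figures below are hypothetical and not taken from the paper's case study.

```python
# CO2 emitted by a workload = IT energy * PUE * grid carbon intensity.
def workload_co2_kg(cpu_hours: float, watts_per_core: float,
                    pue: float, grid_carbon_kg_per_kwh: float) -> float:
    energy_kwh = cpu_hours * watts_per_core / 1000.0 * pue
    return energy_kwh * grid_carbon_kg_per_kwh

# Same workload on a coal-heavy grid vs. an efficient, low-carbon site:
print(workload_co2_kg(1000, 20, pue=1.8, grid_carbon_kg_per_kwh=0.9))   # ~32.4 kg
print(workload_co2_kg(1000, 20, pue=1.2, grid_carbon_kg_per_kwh=0.05))  # ~1.2 kg
```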


International Parallel and Distributed Processing Symposium | 2007

Integrated Risk Analysis for a Commercial Computing Service

Chee Shin Yeo; Rajkumar Buyya

Utility computing has been anticipated as the next generation of computing usage. Users have the freedom to easily switch to any commercial computing service to complete jobs whenever the need arises, paying only for usage without any investment costs. A commercial computing service, however, has certain objectives or goals that it aims to achieve. In this paper, we identify three essential objectives for a commercial computing service: (i) meet SLAs, (ii) maintain reliability, and (iii) earn profit. This leads to the problem of whether a resource management policy implemented in the commercial computing service is able to meet the required objectives. We therefore develop two simple and intuitive evaluation methods, (i) separate and (ii) integrated risk analysis, to analyze the effectiveness of resource management policies in achieving the required objectives. Evaluation results based on five policies demonstrate the applicability of separate and integrated risk analysis for assessing policies in terms of the required objectives.
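
The separate-versus-integrated distinction can be illustrated with the heavily hedged sketch below, which is not the paper's formulation: a policy is scored on SLA compliance, reliability, and profit first objective by objective, and then jointly per job. All job outcomes are invented.

```python
# Hypothetical outcomes for jobs completed under one resource management policy.
jobs = [
    {"sla_met": True,  "no_failure": True,  "profit": 12.0},
    {"sla_met": False, "no_failure": True,  "profit": -3.0},
    {"sla_met": True,  "no_failure": False, "profit": 5.0},
]

def separate_analysis(jobs):
    # One risk figure per objective, each computed in isolation.
    n = len(jobs)
    return {
        "sla_violation_rate": sum(not j["sla_met"] for j in jobs) / n,
        "failure_rate": sum(not j["no_failure"] for j in jobs) / n,
        "loss_rate": sum(j["profit"] < 0 for j in jobs) / n,
    }

def integrated_analysis(jobs):
    # Risk of a job falling short on *any* of the three objectives at once.
    bad = sum(not (j["sla_met"] and j["no_failure"] and j["profit"] > 0) for j in jobs)
    return bad / len(jobs)

print(separate_analysis(jobs))    # per-objective risks
print(integrated_analysis(jobs))  # single combined risk figure
```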


International Conference on Computational Science | 2003

Visual modeler for grid modeling and simulation (GridSim) toolkit

Anthony Sulistio; Chee Shin Yeo; Rajkumar Buyya

The Grid Modeling and Simulation (GridSim) toolkit provides a comprehensive facility for simulating application scheduling in different Grid computing environments. However, using the GridSim toolkit to create a Grid simulation model can be a challenging task, especially for users with no prior experience with the toolkit. This paper presents a Java-based Graphical User Interface (GUI) tool called Visual Modeler (VM), developed as an additional component on top of the GridSim toolkit. It aims to reduce the learning curve for users and enable fast creation of simulation models. The usefulness of VM is illustrated by a case study on simulating a Grid computing environment similar to that of the World-Wide Grid (WWG) testbed [1].

Collaboration


Dive into Chee Shin Yeo's collaborations.

Top Co-Authors

Arun Anandasivam
Karlsruhe Institute of Technology

Jinshu Su
National University of Defense Technology

Xiaofeng Wang
National University of Defense Technology

Jia Yu
University of Melbourne