Publications


Featured research published by Christian Vecchiola.


International Symposium on Pervasive Systems, Algorithms, and Networks | 2009

High-Performance Cloud Computing: A View of Scientific Applications

Christian Vecchiola; Suraj Pandey; Rajkumar Buyya

Scientific computing often requires the availability of a massive number of computers for performing large-scale experiments. Traditionally, these needs have been addressed with high-performance computing solutions and installed facilities such as clusters and supercomputers, which are difficult to set up, maintain, and operate. Cloud computing provides scientists with a completely new model for utilizing the computing infrastructure. Compute resources, storage resources, and applications can be dynamically provisioned (and integrated within the existing infrastructure) on a pay-per-use basis, and released when they are no longer needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensures the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers the desired QoS to users. Its flexible, service-based infrastructure supports multiple programming paradigms that let Aneka address a variety of scenarios, from finance applications to computational science. As examples of scientific computing in the Cloud, we present a preliminary case study on using Aneka for the classification of gene expression data and the execution of an fMRI brain imaging workflow.


International Conference on Cloud Computing | 2009

Cloudbus Toolkit for Market-Oriented Cloud Computing

Rajkumar Buyya; Suraj Pandey; Christian Vecchiola

This keynote paper: (1) presents the 21st century vision of computing and identifies various IT paradigms promising to deliver computing as a utility; (2) defines the architecture for creating market-oriented Clouds and computing atmosphere by leveraging technologies such as virtual machines; (3) provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; (4) presents the work carried out as part of our new Cloud Computing initiative, called Cloudbus: (i) Aneka, a Platform as a Service software system containing an SDK (Software Development Kit) for constructing Cloud applications and deploying them on private or public Clouds, in addition to supporting market-oriented resource management; (ii) internetworking of Clouds for dynamic creation of federated computing environments for scaling of elastic applications; (iii) creation of third-party Cloud brokering services for building content delivery networks and e-Science applications and their deployment on capabilities of IaaS providers such as Amazon along with Grid mashups; (iv) CloudSim, supporting modelling and simulation of Clouds for performance studies; (v) energy-efficient resource allocation mechanisms and techniques for creation and management of Green Clouds; and (vi) pathways for future research.


Future Generation Computer Systems | 2012

The Aneka platform and QoS-driven resource provisioning for elastic applications on hybrid Clouds

Rodrigo N. Calheiros; Christian Vecchiola; Dileban Karunamoorthy; Rajkumar Buyya

Cloud computing alters the way traditional software systems are built and run by introducing a utility-based model for delivering IT infrastructure, platforms, applications, and services. The consolidation of this new paradigm in both enterprises and academia demanded a reconsideration of the way IT resources are used, so that Cloud computing can be used together with the resources already available. One case for using Clouds to increase the capacity of computing infrastructures is Desktop Grids: these infrastructures typically provide best-effort execution of high-throughput jobs and other workloads that fit the model of the platform. By enhancing Desktop Grid infrastructures with Cloud resources, it is possible to offer QoS to users, motivating the adoption of Desktop Grids as a viable platform for application execution. In this paper, we describe how Aneka, a platform for developing scalable applications on the Cloud, supports such a vision by provisioning resources from different sources and supporting different application models. We highlight the key concepts and features of Aneka that support the integration between Desktop Grids and Clouds and present an experiment showing the performance of this integration.
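The abstract describes Aneka provisioning resources from several sources (Desktop Grids, private Clouds, public Clouds) to offer QoS. Below is a minimal sketch of such a multi-source provisioning step, acquiring nodes from the cheapest source first; the names (ResourceSource, provision) and cost figures are illustrative assumptions, not Aneka's actual .NET API.

    # Illustrative sketch only: acquire compute nodes from several sources
    # (desktop grid, private cloud, public cloud) in cost order. The class
    # and function names are hypothetical, not Aneka's API.
    from dataclasses import dataclass

    @dataclass
    class ResourceSource:
        name: str
        available: int            # nodes currently free in this source
        cost_per_node_hour: float

    def provision(sources, nodes_needed):
        """Greedily acquire nodes from the cheapest sources first."""
        plan = {}
        for src in sorted(sources, key=lambda s: s.cost_per_node_hour):
            if nodes_needed == 0:
                break
            take = min(src.available, nodes_needed)
            if take:
                plan[src.name] = take
                nodes_needed -= take
        return plan, nodes_needed   # leftover > 0 means demand cannot be met

    pool = [
        ResourceSource("desktop-grid", available=20, cost_per_node_hour=0.0),
        ResourceSource("private-cloud", available=10, cost_per_node_hour=0.02),
        ResourceSource("public-cloud", available=100, cost_per_node_hour=0.10),
    ]
    print(provision(pool, nodes_needed=40))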


Future Generation Computer Systems | 2012

Deadline-driven provisioning of resources for scientific applications in hybrid clouds with Aneka

Christian Vecchiola; Rodrigo N. Calheiros; Dileban Karunamoorthy; Rajkumar Buyya

Scientific applications require a large amount of computing power, traditionally exceeding what is available within the premises of a single institution; clouds can therefore be used to provide extra resources whenever required. Achieving this vision, however, requires both policies defining when and how cloud resources are allocated to applications and a platform implementing not only these policies but also the whole software stack supporting management of applications and resources. Aneka is a cloud application platform capable of provisioning resources obtained from a variety of sources, including private and public clouds, clusters, grids, and desktop grids. In this paper, we present Aneka's deadline-driven provisioning mechanism, which is responsible for supporting quality of service (QoS)-aware execution of scientific applications in hybrid clouds composed of resources obtained from a variety of sources. Experimental results evaluating this mechanism show that Aneka is able to efficiently allocate resources from different sources in order to reduce application execution times.
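A deadline-driven mechanism of this kind must decide how many extra resources to request so that the remaining tasks finish before the deadline. The sketch below shows one back-of-the-envelope version of that decision under a simple uniform-task-runtime assumption; it is not Aneka's actual provisioning algorithm.

    # Rough sketch of a deadline-driven provisioning decision, assuming every
    # task takes roughly the same time. Not Aneka's actual policy.
    import math

    def extra_nodes_needed(tasks_remaining, avg_task_runtime_s,
                           seconds_to_deadline, nodes_allocated):
        """Return how many additional nodes must be provisioned (0 if none)."""
        if seconds_to_deadline <= 0:
            raise ValueError("deadline already passed")
        # How many tasks a single node can finish before the deadline.
        tasks_per_node = max(1, math.floor(seconds_to_deadline / avg_task_runtime_s))
        nodes_required = math.ceil(tasks_remaining / tasks_per_node)
        return max(0, nodes_required - nodes_allocated)

    # Example: 500 tasks of ~2 minutes each, 1 hour to the deadline, 10 nodes in use.
    print(extra_nodes_needed(500, 120, 3600, 10))   # -> 7 additional nodes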


IEEE International Conference on eScience | 2008

MRPGA: An Extension of MapReduce for Parallelizing Genetic Algorithms

Chao Jin; Christian Vecchiola; Rajkumar Buyya

The MapReduce programming model allows users to easily develop distributed applications in data centers. However, many applications cannot be expressed exactly with MapReduce because of their specific characteristics. For instance, genetic algorithms (GAs) naturally fit an iterative style that does not follow the two-phase pattern of MapReduce. This paper presents an extension to the MapReduce model featuring a hierarchical reduction phase. The model, called MRPGA (MapReduce for Parallel GAs), can automatically parallelize GAs. We describe the design and implementation of the extended MapReduce model on a .NET-based enterprise grid system in detail and present an evaluation of the model and its runtime system using example applications.
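The hierarchical reduction idea can be summarised as: map evaluates fitness, a first-level reduce performs local selection per group, and a second-level reduce merges the group winners into the next generation. The toy, single-process sketch below mirrors that structure for the OneMax problem; it is an illustration of the pattern, not the paper's .NET implementation.

    # Toy sketch of the MRPGA structure on OneMax: map = fitness evaluation,
    # reduce level 1 = per-group selection, reduce level 2 = global merge.
    # Single-process illustration only.
    import random

    def map_phase(individual):
        # Fitness of a bit string: number of ones (OneMax).
        return sum(individual), individual

    def reduce_level1(group, keep):
        # Local selection inside one group of evaluated individuals.
        return sorted(group, key=lambda fi: fi[0], reverse=True)[:keep]

    def reduce_level2(groups, population_size):
        # Global selection over all group winners, then refill by mutation.
        survivors = [ind for g in groups for _, ind in g]
        next_gen = list(survivors)
        while len(next_gen) < population_size:
            parent = random.choice(survivors)
            child = [bit ^ (random.random() < 0.05) for bit in parent]
            next_gen.append(child)
        return next_gen

    population = [[random.randint(0, 1) for _ in range(32)] for _ in range(40)]
    for generation in range(20):
        evaluated = [map_phase(ind) for ind in population]
        groups = [reduce_level1(evaluated[i:i + 10], keep=2) for i in range(0, 40, 10)]
        population = reduce_level2(groups, population_size=40)
    print(max(sum(ind) for ind in population))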


High Performance Computing and Communications | 2010

Managing Peak Loads by Leasing Cloud Infrastructure Services from a Spot Market

Michael Mattess; Christian Vecchiola; Rajkumar Buyya

Dedicated computing clusters are typically sized for an expected average workload over a period of years rather than for peak workloads, which might last only weeks or months. Recent work has proposed temporarily adding capacity to dedicated clusters during peak periods by purchasing additional resources from Infrastructure as a Service (IaaS) providers such as Amazon's EC2. In this paper, we consider the economics of purchasing such resources by taking advantage of new opportunities for renting virtual infrastructure, such as the spot pricing model introduced by Amazon. Furthermore, we define different provisioning policies and investigate the use of spot instances compared to normal instances in terms of cost savings and the total breach time of tasks in the queue.
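A provisioning policy of the kind compared here bursts to external instances only when the local queue risks breaching its target wait time, and prefers spot capacity when the market price is below the bid. The sketch below shows that decision in its simplest form; the prices, thresholds, and function name are illustrative assumptions, not real EC2 figures or the paper's exact policies.

    # Simplified sketch of a peak-load provisioning policy: burst to spot
    # instances when they are cheap enough, otherwise fall back to on-demand.
    # Prices and thresholds are made-up illustrations, not real EC2 figures.
    ON_DEMAND_PRICE = 0.10     # $/hour, assumed
    SPOT_BID        = 0.04     # $/hour we are willing to pay, assumed

    def choose_instances(queue_wait_hours, max_wait_hours, current_spot_price):
        """Decide whether the next burst of nodes should use spot or on-demand."""
        if queue_wait_hours <= max_wait_hours:
            return "none"                      # local cluster can absorb the load
        if current_spot_price <= SPOT_BID:
            return "spot"                      # cheaper, but may be reclaimed
        return "on-demand"                     # pay more for guaranteed capacity

    print(choose_instances(queue_wait_hours=6, max_wait_hours=2, current_spot_price=0.03))
    print(choose_instances(queue_wait_hours=6, max_wait_hours=2, current_spot_price=0.08))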


Future Generation Computer Systems | 2012

A coordinator for scaling elastic applications across multiple clouds

Rodrigo N. Calheiros; Adel Nadjaran Toosi; Christian Vecchiola; Rajkumar Buyya

Cloud computing allows customers to dynamically scale their applications, software platforms, and hardware infrastructures according to negotiated Service Level Agreements (SLAs). However, the resources available in a single Cloud data center are limited, so if a large demand for an elastic application arises at a given time, a Cloud provider will not be able to deliver uniform Quality of Service (QoS) to handle it and SLAs may be violated. One approach to avoiding such a scenario is to enable further growth of the application by scaling it across multiple, independent Cloud data centers, following market-based trading and negotiation of resources. This approach, as envisioned in the InterCloud project, is realized by agents called Cloud Coordinators and allows for an increase in the performance, reliability, and scalability of elastic applications. In this paper, we propose both an architecture for such a Cloud Coordinator and an extensible design that allows its adoption in different public and private Clouds. An evaluation of the Cloud Coordinator prototype running in a small-scale scenario shows the effectiveness of the proposed approach and its impact on elastic applications.
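The "extensible design" claim amounts to a coordinator that talks to provider-specific adapters and spreads scale-out requests across whichever clouds have spare capacity. The interface sketch below illustrates that shape only; all class and method names are invented for this example and are not the InterCloud project's actual code.

    # Minimal interface sketch for a coordinator that scales an application
    # across several clouds. All names are invented for illustration; they
    # are not the InterCloud Cloud Coordinator API.
    from abc import ABC, abstractmethod

    class CloudAdapter(ABC):
        """Provider-specific driver the coordinator talks to."""
        @abstractmethod
        def spare_capacity(self): ...
        @abstractmethod
        def start_instances(self, count): ...

    class StaticAdapter(CloudAdapter):
        def __init__(self, name, capacity):
            self.name, self.capacity = name, capacity
        def spare_capacity(self):
            return self.capacity
        def start_instances(self, count):
            self.capacity -= count
            print(f"{self.name}: started {count} instances")

    class CloudCoordinator:
        def __init__(self, adapters):
            self.adapters = adapters

        def scale_out(self, instances_needed):
            """Spread the request over clouds with spare capacity; return shortfall."""
            for adapter in self.adapters:
                if instances_needed == 0:
                    break
                grant = min(adapter.spare_capacity(), instances_needed)
                if grant:
                    adapter.start_instances(grant)
                    instances_needed -= grant
            return instances_needed

    coordinator = CloudCoordinator([StaticAdapter("cloud-A", 5), StaticAdapter("cloud-B", 20)])
    print("unmet:", coordinator.scale_out(12))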


The Journal of Supercomputing | 2013

Mandi: a market exchange for trading utility and cloud computing services

Saurabh Kumar Garg; Christian Vecchiola; Rajkumar Buyya

Recent developments in Cloud computing have enabled the realization of delivering computing as a utility. Many companies, such as Amazon and Google, have started offering Cloud services on a "pay as you go" basis. These advances have led to the evolution of market infrastructure in the form of a Market Exchange (ME) that facilitates trading between consumers and Cloud providers. Such a market environment eases the trading process by aggregating IT services from a variety of sources and allowing consumers to easily select among them. In this paper, we propose a lightweight and platform-independent ME framework called "Mandi", which allows consumers and providers to trade computing resources according to their requirements. The novelty of Mandi is that it not only gives its users flexibility in the choice of negotiation protocol, but also allows multiple trading negotiations to coexist simultaneously. We first present the requirements that motivated our design and discuss how these facilitate the trading of compute resources using multiple market models (also called negotiation protocols). Finally, we evaluate the performance of the first prototype of Mandi in terms of its scalability.
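The key idea of coexisting market models can be pictured as a set of pluggable matchers behind a common interface. The toy sketch below uses a plain double-auction matcher as one such model; the interface, the matching rule, and every name in it are illustrative assumptions, not Mandi's actual design.

    # Toy sketch of pluggable market models in a compute-resource exchange.
    # The MarketModel interface and the double-auction matcher are
    # illustrative assumptions, not Mandi's implementation.
    from abc import ABC, abstractmethod

    class MarketModel(ABC):
        @abstractmethod
        def match(self, bids, asks):
            """Return a list of (buyer, seller, price) trades."""

    class DoubleAuction(MarketModel):
        def match(self, bids, asks):
            trades = []
            bids = sorted(bids, key=lambda b: b[1], reverse=True)  # highest bid first
            asks = sorted(asks, key=lambda a: a[1])                # lowest ask first
            while bids and asks and bids[0][1] >= asks[0][1]:
                (buyer, bid), (seller, ask) = bids.pop(0), asks.pop(0)
                trades.append((buyer, seller, (bid + ask) / 2))    # split the surplus
            return trades

    exchange_models = {"double-auction": DoubleAuction()}          # more models can coexist
    print(exchange_models["double-auction"].match(
        bids=[("consumerA", 0.12), ("consumerB", 0.05)],
        asks=[("providerX", 0.08), ("providerY", 0.10)]))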


Future Generation Computer Systems | 2013

Task granularity policies for deploying bag-of-task applications on global grids

Nithiapidary Muthuvelu; Christian Vecchiola; Ian Chai; Eswaran Chikkannan; Rajkumar Buyya

Deploying lightweight tasks individually on grid resources leads to a situation in which communication overhead dominates the overall application processing time. This overhead can be reduced by grouping the lightweight tasks at the meta-scheduler before deployment. However, the number of tasks in a group must be limited in order to utilise the resources and the interconnecting network in an optimal manner. In this paper, we propose policies and approaches for deciding the granularity of a task group that obeys the task processing requirements and resource-network utilisation constraints while satisfying the user's QoS requirements. Experiments on bag-of-task applications reveal that the proposed policies and approaches lead towards an economical and efficient way of utilising the grid. Highlights: (1) Grouping fine-grain grid tasks greatly reduces the application processing time. (2) QoS and resource-network utilisation constraints affect the size of a task group. (3) We present batch resizing policies and techniques for creating the task groups. (4) Our strategies support both parametric and non-parametric sweep applications. (5) Our task group deployment increases resource utilisation.
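The grouping decision described above trades per-task dispatch overhead against resource and network utilisation limits. The sketch below shows one possible batch-resizing rule under simple per-task CPU-time and input-size estimates; the thresholds and the function itself are illustrative, not the paper's tuned policies.

    # Illustrative sketch: group fine-grain tasks into batches large enough to
    # amortise dispatch overhead while respecting a per-resource CPU-time cap
    # and a network-transfer cap. Thresholds are assumptions, not the paper's.
    def group_tasks(tasks, max_cpu_s=600, max_transfer_mb=100):
        """tasks: list of (task_id, cpu_seconds, input_mb). Returns list of batches."""
        batches, current, cpu, mb = [], [], 0.0, 0.0
        for task_id, cpu_s, input_mb in tasks:
            if current and (cpu + cpu_s > max_cpu_s or mb + input_mb > max_transfer_mb):
                batches.append(current)
                current, cpu, mb = [], 0.0, 0.0
            current.append(task_id)
            cpu += cpu_s
            mb += input_mb
        if current:
            batches.append(current)
        return batches

    # 2000 lightweight tasks of ~5 CPU-seconds and ~1 MB input each.
    tasks = [(i, 5.0, 1.0) for i in range(2000)]
    print(len(group_tasks(tasks)))   # far fewer dispatch operations than 2000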


Simulated Evolution and Learning | 2008

Performance Evaluation of an Adaptive Ant Colony Optimization Applied to Single Machine Scheduling

Davide Anghinolfi; Antonio Boccalatte; Massimo Paolucci; Christian Vecchiola

We propose a self-adaptive Ant Colony Optimization (AD-ACO) approach that exploits a parameter adaptation mechanism to reduce the need for preliminary parameter tuning. The proposed AD-ACO is based on an ACO algorithm adopting a pheromone model with a new global pheromone update mechanism. We applied this algorithm to the single machine total weighted tardiness scheduling problem with sequence-dependent setup times and executed an experimental campaign on a benchmark available in the literature. The results, compared with those produced by the ACO algorithm without the adaptation mechanism and with those obtained by recently proposed metaheuristic algorithms for the same problem, highlight the quality of the proposed approach.
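The two ingredients named here are a global pheromone update and a parameter adaptation step. The sketch below illustrates generic versions of both (evaporation plus reinforcement of the best-so-far job sequence, and a naive adjustment of the evaporation rate); they are stand-ins to convey the idea, not the exact update rules of the paper.

    # Compact sketch of two AD-ACO ingredients: a global pheromone update that
    # evaporates all trails and reinforces the best-so-far job sequence, and a
    # naive self-adaptation of the evaporation rate. Both rules are illustrative
    # stand-ins for the paper's actual mechanisms.
    def global_pheromone_update(tau, best_sequence, best_cost, rho):
        """tau[i][j]: desirability of scheduling job j right after job i."""
        n = len(tau)
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)                  # evaporation
        for i, j in zip(best_sequence, best_sequence[1:]):
            tau[i][j] += rho / (1.0 + best_cost)          # reinforce best-so-far sequence
        return tau

    def adapt_rho(rho, improved, step=0.02, lo=0.01, hi=0.5):
        """Explore more (higher evaporation) when stagnating, less when improving."""
        rho = rho - step if improved else rho + step
        return min(hi, max(lo, rho))

    # Example with 4 jobs: reinforce the sequence 2 -> 0 -> 3 -> 1.
    tau = [[1.0] * 4 for _ in range(4)]
    tau = global_pheromone_update(tau, best_sequence=[2, 0, 3, 1], best_cost=42.0, rho=0.1)
    print(adapt_rho(0.1, improved=False))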
