Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Vincent C. Emeakaroha is active.

Publication


Featured research published by Vincent C. Emeakaroha.


International Conference on High Performance Computing and Simulation | 2010

Low level Metrics to High level SLAs - LoM2HiS framework: Bridging the gap between monitored metrics and SLA parameters in cloud environments

Vincent C. Emeakaroha; Ivona Brandic; Michael Maurer; Schahram Dustdar

Cloud computing represents a novel on-demand computing approach where resources are provided in compliance with a set of predefined non-functional properties specified and negotiated by means of Service Level Agreements (SLAs). In order to avoid costly SLA violations and to react in a timely manner to failures and environmental changes, advanced SLA enactment strategies are necessary, which include appropriate resource-monitoring concepts. Currently, Cloud providers tend to adopt existing monitoring tools, for example those from Grid environments. However, those tools are usually restricted to the locality and homogeneity of monitored objects, are not scalable, and do not support mapping of low-level resource metrics (e.g., system uptime and downtime) to high-level, application-specific SLA parameters (e.g., system availability). In this paper we present a novel framework for managing the mappings of Low-level resource Metrics to High-level SLAs (the LoM2HiS framework). The LoM2HiS framework is embedded into the FoSII infrastructure, which facilitates autonomic SLA management and enforcement. Thus, the LoM2HiS framework detects future SLA violation threats and can notify the enactor component to act so as to avert the threats. We discuss the conceptual model of the LoM2HiS framework, followed by the implementation details. Finally, we present the first experimental results and a proof of concept of the LoM2HiS framework.
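The core idea of the abstract above is a set of mapping rules that turn low-level resource metrics into the high-level SLA parameters agreed with the customer. The following is a minimal illustrative sketch of that kind of mapping, not the authors' implementation; the metric names and the response-time rule are assumptions for illustration only.

```python
# Illustrative sketch (not the LoM2HiS code): mapping rules that translate
# low-level resource metrics into high-level SLA parameters.

def availability(uptime_s: float, downtime_s: float) -> float:
    """System uptime/downtime  ->  SLA parameter 'availability'."""
    total = uptime_s + downtime_s
    return uptime_s / total if total > 0 else 1.0

def response_time(queue_delay_ms: float, processing_ms: float, network_ms: float) -> float:
    """Several low-level timings  ->  SLA parameter 'response time' (hypothetical rule)."""
    return queue_delay_ms + processing_ms + network_ms

# Hypothetical monitored values delivered by host monitors.
measured = {"uptime_s": 86_000, "downtime_s": 400,
            "queue_delay_ms": 12.0, "processing_ms": 85.0, "network_ms": 30.0}

sla_view = {
    "availability": availability(measured["uptime_s"], measured["downtime_s"]),
    "response_time_ms": response_time(measured["queue_delay_ms"],
                                      measured["processing_ms"],
                                      measured["network_ms"]),
}
print(sla_view)  # high-level view that can be compared against the agreed SLA
```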


Journal of Parallel and Distributed Computing | 2014

A survey of Cloud monitoring tools: Taxonomy, capabilities and objectives

Kaniz Fatema; Vincent C. Emeakaroha; Philip D. Healy; John P. Morrison; Theo Lynn

The efficient management of Cloud infrastructure and deployments is a topic that is currently attracting significant interest. Complex Cloud deployments can result in an intricate layered structure. Understanding the behaviour of these hierarchical systems and how to manage them optimally are challenging tasks that can be facilitated by pervasive monitoring. Monitoring tools and techniques have an important role to play in this area by gathering the information required to make informed decisions. A broad variety of monitoring tools are available, from general-purpose infrastructure monitoring tools that predate Cloud computing to high-level application monitoring services that are themselves hosted in the Cloud. Surveying the capabilities of monitoring tools can reveal how well these tools serve particular objectives. Monitoring tools are essential components for addressing the various objectives of both Cloud providers and consumers in different Cloud operational areas. We have identified the practical capabilities that an ideal monitoring tool should possess to serve the objectives in these operational areas. Based on these identified capabilities, we present a taxonomy and analyse the monitoring tools to determine their strengths and weaknesses. In conclusion, we present our reflections on the analysis, discuss challenges and identify future research trends in the area of Cloud monitoring.


Computer Software and Applications Conference | 2011

SLA-Aware Application Deployment and Resource Allocation in Clouds

Vincent C. Emeakaroha; Ivona Brandic; Michael Maurer; Ivan Breskovic

Provisioning resources as a service in a scalable, on-demand manner is a basic feature of Cloud computing technology. Service provisioning in Clouds is based on Service Level Agreements (SLAs), contracts signed between the customer and the service provider that state the terms of the agreement, including non-functional requirements of the service specified as Quality of Service (QoS), obligations, and penalties in case of agreement violations. On the one hand, SLA violations should be prevented to avoid costly penalties; on the other hand, providers have to utilize resources efficiently to minimize the cost of service provisioning. Thus, scheduling strategies considering multiple SLA parameters and efficient allocation of resources are necessary. Recent work considers various strategies with single SLA parameters. However, those approaches are limited to simple workflows and single-task applications. Scheduling and deploying service requests considering multiple SLA parameters, such as the amount of CPU required, network bandwidth, memory, and storage, is still an open research challenge. In this paper, we present a novel scheduling heuristic considering multiple SLA parameters for deploying applications in Clouds. We discuss the heuristic design and implementation in detail and finally present detailed evaluations as a proof of concept, emphasizing the performance of our approach.
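To make the idea of a multi-parameter placement decision concrete, here is a small hypothetical sketch in the spirit of the abstract above; it is not the published heuristic. The host and request fields (CPU, memory, storage, bandwidth) mirror the SLA parameters the paper names, and the tight-fit scoring rule is an assumption.

```python
# Hypothetical multi-parameter placement sketch (not the paper's algorithm):
# choose the host whose remaining capacity best fits a request across
# CPU, memory, storage, and bandwidth.

REQUIREMENTS = ("cpu", "memory_gb", "storage_gb", "bandwidth_mbps")

def fits(host, request):
    return all(host[r] >= request[r] for r in REQUIREMENTS)

def leftover_score(host, request):
    # Smaller normalized leftover = tighter fit = less wasted capacity.
    return sum((host[r] - request[r]) / host[r] for r in REQUIREMENTS)

def place(request, hosts):
    candidates = [h for h in hosts if fits(h, request)]
    if not candidates:
        return None  # would trigger scale-out or request queuing
    best = min(candidates, key=lambda h: leftover_score(h, request))
    for r in REQUIREMENTS:
        best[r] -= request[r]
    return best["name"]

hosts = [
    {"name": "host-1", "cpu": 8, "memory_gb": 32, "storage_gb": 500, "bandwidth_mbps": 1000},
    {"name": "host-2", "cpu": 4, "memory_gb": 16, "storage_gb": 250, "bandwidth_mbps": 500},
]
request = {"cpu": 2, "memory_gb": 8, "storage_gb": 100, "bandwidth_mbps": 200}
print(place(request, hosts))  # "host-2" (the tighter fit)
```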


Computer Software and Applications Conference | 2012

CASViD: Application Level Monitoring for SLA Violation Detection in Clouds

Vincent C. Emeakaroha; Tiago C. Ferreto; Marco Aurelio Stelmar Netto; Ivona Brandic; César A. F. De Rose

Cloud resources and services are offered based on Service Level Agreements (SLAs) that state usage terms and penalties in case of violations. Although there is a large body of work in the area of SLA provisioning and monitoring at the infrastructure and platform layers, SLAs are usually assumed to be guaranteed at the application layer. However, application monitoring is a challenging task because monitored metrics of the platform or infrastructure layer cannot be easily mapped to the required metrics at the application layer. Sophisticated SLA monitoring across these layers to avoid costly SLA penalties and maximize provider profit is still an open research challenge. This paper proposes an application monitoring architecture named CASViD, which stands for Cloud Application SLA Violation Detection architecture. The CASViD architecture monitors and detects SLA violations at the application layer and includes tools for resource allocation, scheduling, and deployment. Unlike most existing monitoring architectures, CASViD focuses on application-level monitoring, which is relevant when multiple customers share the same resources in a Cloud environment. We evaluate our architecture in a real Cloud testbed using applications that exhibit heterogeneous behaviors in order to investigate the effective measurement intervals for efficient monitoring of different application types. The achieved results show that our architecture, with a low intrusion level, is able to monitor, detect SLA violations, and suggest effective measurement intervals for various workloads.
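The measurement-interval trade-off mentioned above can be illustrated with a toy experiment. The sketch below is an assumption-laden illustration, not CASViD itself: a synthetic response-time trace, a hypothetical 500 ms SLA, and sampling at different intervals to show how coarser intervals lower monitoring intrusion but miss violations.

```python
# Illustrative sketch (not the CASViD implementation): how the choice of
# measurement interval trades monitoring overhead against the number of
# SLA violations actually detected.

def detected_violations(response_times_ms, sla_ms, interval):
    """Sample the application metric every `interval` time steps and count
    samples that violate the SLA threshold."""
    samples = response_times_ms[::interval]
    return sum(1 for rt in samples if rt > sla_ms)

# Synthetic per-second response times with occasional spikes above the SLA.
trace = [120] * 60
for spike_at in (5, 17, 42):
    trace[spike_at] = 900          # violations of a 500 ms response-time SLA

for interval in (1, 5, 15):
    hits = detected_violations(trace, sla_ms=500, interval=interval)
    cost = len(trace) // interval  # number of measurements = intrusion proxy
    print(f"interval={interval:2d}s  detected={hits}  measurements={cost}")
```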


Future Generation Computer Systems | 2012

Cost-benefit analysis of an SLA mapping approach for defining standardized Cloud computing goods

Michael Maurer; Vincent C. Emeakaroha; Ivona Brandic; Jörn Altmann

Due to the large variety in computing resources and, consequently, the large number of different types of service level agreements (SLAs), computing resource markets face the problem of low market liquidity. Restricting the number of different resource types to a small set of standardized computing resources seems to be an appropriate solution to counteract this problem. Standardized computing resources are defined through an SLA template. An SLA template defines the structure of an SLA, the service attributes, the names of the service attributes, and the service attribute values. However, since existing research results have so far only introduced static SLA templates, the SLA templates cannot reflect changes in user needs and market structures. To address this shortcoming, we present a novel approach of adaptive SLA matching. This approach adapts SLA templates based on the SLA mappings of users. It allows Cloud users to define mappings between a public SLA template, which is available in the Cloud market, and their private SLA templates, which are used for various in-house business processes. Besides showing how public SLA templates are adapted to the demand of Cloud users, we also analyze the costs and benefits of this approach. Costs are incurred every time a user has to define a new SLA mapping to a public SLA template due to its adaptation. In particular, we investigate how the costs differ with respect to the public SLA template adaptation method. The simulation results show that the use of heuristics within adaptation methods allows balancing the costs and benefits of the SLA mapping approach.
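The notion of an SLA mapping between a private and a public template can be pictured as a simple attribute translation. The sketch below uses invented attribute names and values purely for illustration; it is not the paper's data model.

```python
# Hedged sketch (hypothetical names, not the paper's implementation): an SLA
# mapping translates a user's private SLA template attributes into the
# vocabulary of the public SLA template offered on the Cloud market.

public_template = {"Availability": "99.0%", "ResponseTime": "2s", "Storage": "100GB"}

# The user's in-house template uses different attribute names and units.
private_template = {"Uptime": "99.0%", "Latency": "2000ms", "DiskSpace": "0.1TB"}

# An SLA mapping recorded by the user: private attribute -> public attribute.
sla_mapping = {"Uptime": "Availability", "Latency": "ResponseTime", "DiskSpace": "Storage"}

def translate(private_sla: dict, mapping: dict) -> dict:
    """Rewrite a private SLA into the attribute names of the public template."""
    return {mapping.get(attr, attr): value for attr, value in private_sla.items()}

print(translate(private_template, sla_mapping))
# {'Availability': '99.0%', 'ResponseTime': '2000ms', 'Storage': '0.1TB'}
```

Every time the public template is adapted, mappings like the one above may have to be redefined, which is the cost the paper weighs against the benefit of templates that track user demand.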


World Congress on Services | 2010

Towards Knowledge Management in Self-Adaptable Clouds

Michael Maurer; Ivona Brandic; Vincent C. Emeakaroha; Schahram Dustdar

Cloud computing represents a promising computing paradigm where resources have to be dynamically allocated to software that needs to be executed. Self-manageable Cloud infrastructures are required to achieve that level of flexibility on the one hand, and to comply with users' requirements specified by means of Service Level Agreements (SLAs) on the other. Such infrastructures should automatically respond to changing component, workload, and environmental conditions, minimizing user interactions with the system and preventing violations of SLAs. However, identification of system states where reactive actions are necessary for the prevention of SLA violations is far from trivial. In this paper we investigate how current knowledge management systems can be used for the prevention of SLA violations in Clouds. First, we define a typical SLA use case and formulate the expected behavior of the knowledge management system in order to prevent possible SLA violations. Second, we investigate different methods for knowledge management, e.g., situation calculus and case-based reasoning (CBR). We discuss how these methods match the expected behavior for SLA violation prevention. In particular we examine the CBR method and devise several approaches for knowledge management in Clouds based on CBR. Finally, we evaluate our approach based on the presented use case.
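For readers unfamiliar with case-based reasoning, the following is a minimal sketch of the retrieve-and-reuse step under stated assumptions (invented metrics and actions); it is not the knowledge management system described in the paper.

```python
# Minimal case-based reasoning (CBR) sketch: retrieve the stored case whose
# measured system state is closest to the current one and reuse its action.
# Metric names, actions, and the distance measure are assumptions.

import math

# Each case: (system state, action that previously prevented an SLA violation).
case_base = [
    ({"cpu_load": 0.95, "mem_load": 0.60}, "add_vm"),
    ({"cpu_load": 0.40, "mem_load": 0.92}, "increase_memory"),
    ({"cpu_load": 0.30, "mem_load": 0.35}, "do_nothing"),
]

def distance(a: dict, b: dict) -> float:
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def recommend_action(state: dict) -> str:
    """Retrieve + reuse: the nearest stored case decides the action; full CBR
    would later revise the outcome and retain it as a new case."""
    _, action = min(case_base, key=lambda case: distance(case[0], state))
    return action

print(recommend_action({"cpu_load": 0.90, "mem_load": 0.55}))  # -> "add_vm"
```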


International Symposium on Computers and Communications | 2011

Revealing the MAPE loop for the autonomic management of Cloud infrastructures

Michael Maurer; Ivan Breskovic; Vincent C. Emeakaroha; Ivona Brandic

Cloud computing is the result of the convergence of several concepts, ranging from virtualization and distributed application design to Grid computing and enterprise IT management. Efficient management of Cloud computing infrastructures faces contradicting goals such as unlimited scalability, provision of Service Level Agreements (SLAs), extensive use of virtualization, energy efficiency, and minimization of the administration overhead by humans. Thus, autonomic computing seems to be one of the most promising paradigms for implementing management infrastructures for Clouds. However, currently available autonomic systems do not consider the characteristics of Clouds, e.g., the virtualization layer, and thus are not easily applicable to Cloud infrastructures. In this paper we discuss first steps towards revealing the MAPE (Monitoring, Analysis, Planning, Execution) loop for application to Cloud infrastructures. We present novel techniques for the adequate monitoring of Clouds, discuss our approach to knowledge management, and present our solutions for facilitating SLA generation and management.
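The MAPE loop named above is a generic autonomic-computing control cycle. The sketch below is a schematic rendering of that cycle with invented metric names and actions; it is an illustration of the general pattern, not the specific autonomic manager proposed in the paper.

```python
# Schematic MAPE (Monitoring, Analysis, Planning, Execution) loop for a Cloud
# host; metric names, thresholds, and actions are illustrative assumptions.

import random
import time

def monitor():
    # A real system would query the hypervisor or monitoring agents here.
    return {"cpu_utilization": random.uniform(0.2, 1.0)}

def analyze(metrics, high=0.85, low=0.30):
    if metrics["cpu_utilization"] > high:
        return "overloaded"
    if metrics["cpu_utilization"] < low:
        return "underutilized"
    return "ok"

def plan(symptom):
    return {"overloaded": "migrate_vm_away",
            "underutilized": "consolidate_vms",
            "ok": None}[symptom]

def execute(action):
    if action:
        print(f"executing: {action}")

if __name__ == "__main__":
    for _ in range(3):                 # three control iterations
        metrics = monitor()
        symptom = analyze(metrics)     # a knowledge base would inform the thresholds
        execute(plan(symptom))
        time.sleep(0.1)
```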


Concurrency and Computation: Practice and Experience | 2013

Cloud resource provisioning and SLA enforcement via LoM2HiS framework

Vincent C. Emeakaroha; Ivona Brandic; Michael Maurer; Schahram Dustdar

Cloud computing represents a novel on-demand computing technology where resources are provisioned in compliance with a set of predefined non-functional properties specified and negotiated by means of service level agreements (SLAs). Currently, cloud providers strive to achieve efficient SLA enforcement strategies to avoid costly SLA violations during application provisioning and to react in a timely manner to failures and environmental changes. These strategies include advanced application deployment mechanisms and appropriate resource monitoring concepts. In terms of cloud resource monitoring, providers tend to adopt existing monitoring tools, such as those from grid environments. However, those tools are usually restricted to the locality and homogeneity of monitored objects, are not scalable, and do not support mapping of low-level resource metrics (e.g., system uptime and downtime) to high-level application-specific SLA parameters (e.g., system availability). In this paper, we present a novel low-level metrics to high-level SLA (LoM2HiS) framework for managing the monitoring of low-level resource metrics and mapping them to high-level SLAs, together with an application deployment mechanism for scheduling and provisioning applications in clouds. The LoM2HiS framework provides the application deployment mechanism with monitored information and SLA violation prevention techniques, thereby ensuring the performance of the applications and thus increasing the revenue of the cloud provider by avoiding SLA violation penalty costs. This framework is a building block of the Foundations of Self-governing ICT Infrastructures (FoSII) project, which intends to facilitate autonomic SLA management and enforcement. Thus, the LoM2HiS framework detects future SLA violation threats and can notify the knowledge component to act so as to avert the threats. We discuss in detail the conceptual design of the LoM2HiS framework and the application deployment mechanism, including their implementations. Finally, we present our evaluation results based on a use-case scenario demonstrating the usage of the LoM2HiS framework in a real cloud environment.
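Complementing the mapping sketch shown earlier for the 2010 LoM2HiS paper, the fragment below illustrates only the early-warning idea from this abstract: comparing a mapped SLA value against a threshold stricter than the agreed SLA and notifying a knowledge component. The threshold values, event format, and component name are hypothetical.

```python
# Hedged sketch of the violation-threat notification idea (hypothetical names
# and values, not the LoM2HiS code): a threat threshold stricter than the
# agreed SLA triggers an early notification to a knowledge component.

SLA = {"availability": 0.99}       # agreed SLA value
THREAT = {"availability": 0.992}   # stricter threshold used to react early

def knowledge_component(event: dict) -> None:
    # Placeholder for the component that decides on a preventive action.
    print("knowledge component notified:", event)

def evaluate(parameter: str, value: float) -> None:
    if value < SLA[parameter]:
        knowledge_component({"type": "violation", "parameter": parameter, "value": value})
    elif value < THREAT[parameter]:
        knowledge_component({"type": "threat", "parameter": parameter, "value": value})

evaluate("availability", 0.9915)   # below the threat threshold -> early warning
```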


Grid Computing | 2013

Managing and Optimizing Bioinformatics Workflows for Data Analysis in Clouds

Vincent C. Emeakaroha; Michael Maurer; Patrick Stern; Paweł P. Łabaj; Ivona Brandic; David P. Kreil

The rapid advancements of high-throughput technologies in the life sciences in recent years are facilitating the generation and storage of huge amounts of data in different databases. Despite significant developments in computing capacity and performance, an analysis of these large-scale data in a search for biomedically relevant patterns remains a challenging task. Scientific workflow applications are deemed to support data mining in more complex scenarios that include many data sources and computational tools, as commonly found in bioinformatics. A scientific workflow application is a holistic unit that defines, executes, and manages scientific applications using different software tools. Existing workflow applications are process- or data-oriented rather than resource-oriented. Thus, they lack efficient computational resource management capabilities, such as those provided by Cloud computing environments. Insufficient computational resources disrupt the execution of workflow applications, wasting time and money. To address this issue, advanced resource monitoring and management strategies are required to determine the resource consumption behaviours of workflow applications and to enable dynamic allocation and deallocation of resources. In this paper, we present a novel Cloud management infrastructure consisting of resource-level and application-level monitoring techniques and a knowledge management strategy to manage computational resources for supporting workflow application executions in order to guarantee their performance goals and their successful completion. We present the design description of these techniques, demonstrate how they can be applied to scientific workflow applications, and present detailed evaluation results as a proof of concept.
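The dynamic allocation and deallocation driven by observed resource-consumption behaviour can be pictured as resizing the provisioned resources between workflow steps. The sketch below uses invented task names and demand figures as assumptions; it is an illustration of the idea, not the paper's infrastructure.

```python
# Sketch under assumptions (hypothetical task profiles, not the paper's system):
# use per-task resource-consumption behaviour to allocate and deallocate Cloud
# resources between the steps of a bioinformatics workflow.

# Estimated resource demand per workflow task (e.g. learned from monitoring earlier runs).
workflow = [
    ("quality_filtering",  {"vcpus": 2,  "memory_gb": 4}),
    ("sequence_alignment", {"vcpus": 16, "memory_gb": 64}),
    ("variant_calling",    {"vcpus": 8,  "memory_gb": 32}),
]

allocated = {"vcpus": 0, "memory_gb": 0}

def resize(demand: dict) -> None:
    """Grow or shrink the allocation to match the next task's demand."""
    for resource, needed in demand.items():
        delta = needed - allocated[resource]
        if delta:
            verb = "allocate" if delta > 0 else "release"
            print(f"{verb} {abs(delta)} {resource}")
            allocated[resource] = needed

for task, demand in workflow:
    resize(demand)
    print(f"running {task} with {allocated}")
```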


Workflows in Support of Large-Scale Science | 2011

Optimizing bioinformatics workflows for data analysis using cloud management techniques

Vincent C. Emeakaroha; Paweł P. Łabaj; Michael Maurer; Ivona Brandic; David P. Kreil

With the rapid development in recent years of high-throughput technologies in the life sciences, huge amounts of data are being generated and stored in databases. Despite significant advances in computing capacity and performance, an analysis of these large-scale data in a search for biomedically relevant patterns remains a challenging task. Scientific workflow applications support data mining in more complex scenarios that include many data sources and computational tools, as commonly found in bioinformatics. A scientific workflow application is a holistic unit that defines, executes, and manages scientific applications using different software tools. Existing workflow applications are process- or data-oriented rather than resource-oriented. Thus, they lack efficient computational resource management capabilities, such as those provided by Cloud computing environments. Insufficient computational resources disrupt the execution of workflow applications, wasting time and money. To address this issue, advanced resource monitoring and management strategies are required to determine the resource consumption behaviours of workflow applications for dynamic allocation and deallocation of resources. In this paper, we present a novel Cloud resource monitoring technique and a knowledge management strategy to manage computational resources for workflow applications in order to guarantee their performance goals and their successful completion. We present the design description of these techniques, demonstrate how they can be applied to scientific workflow applications, and present initial evaluation results as a proof of concept.

Collaboration


Dive into Vincent C. Emeakaroha's collaborations.

Top Co-Authors

Ivona Brandic
Vienna University of Technology

Michael Maurer
Vienna University of Technology

Theo Lynn
Dublin City University

Schahram Dustdar
Vienna University of Technology

Kaniz Fatema
University College Cork

Ivan Breskovic
Vienna University of Technology