Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Enda Howley is active.

Publication


Featured research published by Enda Howley.


Concurrency and Computation: Practice and Experience | 2013

Applying reinforcement learning towards automating resource allocation and application scalability in the cloud

Enda Barrett; Enda Howley; Jim Duggan

Public Infrastructure as a Service (IaaS) clouds such as Amazon, GoGrid and Rackspace deliver computational resources by means of virtualisation technologies. These technologies allow multiple independent virtual machines to reside in apparent isolation on the same physical host. Dynamically scaling applications running on IaaS clouds can lead to varied and unpredictable results because of the performance interference effects associated with co-located virtual machines. Determining appropriate scaling policies in a dynamic non-stationary environment is non-trivial. One principal advantage exhibited by IaaS clouds over their traditional hosting counterparts is the ability to scale resources on demand. However, a problem arises concerning resource allocation as to which resources should be added and removed when the underlying performance of the resource is in a constant state of flux. Decision-theoretic frameworks such as Markov Decision Processes are particularly suited to decision making under uncertainty. By applying a temporal difference reinforcement learning algorithm known as Q-learning, optimal scaling policies can be determined. Additionally, reinforcement learning techniques typically suffer from the curse of dimensionality, where the state space grows exponentially with each additional state variable. To address this challenge, we also present a novel parallel Q-learning approach aimed at reducing the time taken to determine optimal policies whilst learning online.
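
A minimal sketch of the tabular Q-learning update the abstract refers to, applied to a toy auto-scaling decision. The state labels, action names, and reward shape are illustrative assumptions, not the authors' implementation.

    import random

    # Assumed framing (not the paper's): state = coarse load label, actions
    # add/remove a VM or do nothing, reward trades off SLA against cost.
    ACTIONS = ["scale_up", "scale_down", "no_op"]
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

    q_table = {}  # (state, action) -> estimated long-run value

    def choose_action(state):
        """Epsilon-greedy selection over the scaling actions."""
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: q_table.get((state, a), 0.0))

    def q_update(state, action, reward, next_state):
        """Standard temporal-difference (Q-learning) update."""
        best_next = max(q_table.get((next_state, a), 0.0) for a in ACTIONS)
        old = q_table.get((state, action), 0.0)
        q_table[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

    q_update("high_load", "scale_up", reward=1.0, next_state="normal_load")
    print(choose_action("high_load"))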


European Conference on Web Services | 2011

A Learning Architecture for Scheduling Workflow Applications in the Cloud

Enda Barrett; Enda Howley; Jim Duggan

The scheduling of workflow applications involves the mapping of individual workflow tasks to computational resources, based on a range of functional and non-functional quality of service requirements. Workflow applications such as scientific workflows often require extensive computational processing and generate significant amounts of experimental data. The emergence of cloud computing has introduced a utility-type market model, where computational resources of varying capacities can be procured on demand, in a pay-per-use fashion. In workflow-based applications, dependencies exist amongst tasks, which requires the generation of schedules in accordance with defined precedence constraints. These constraints pose a difficult planning problem, where tasks must be scheduled for execution only once all their parent tasks have completed. In general, the two most important objectives of workflow schedulers are the minimisation of both cost and makespan. The cost of workflow execution consists of both computational costs incurred from processing individual tasks and data transmission costs. With scientific workflows, potentially large amounts of data must be transferred between compute and storage sites. This paper proposes a novel cloud workflow scheduling approach which employs a Markov Decision Process to optimally guide the workflow execution process depending on environmental state. In addition, the system employs a genetic algorithm to evolve workflow schedules. The overall architecture is presented, and initial results indicate the potential of this approach for developing viable workflow schedules on the cloud.
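
A toy sketch of how a candidate schedule (a task-to-VM mapping) might be scored on the two objectives the abstract names, cost and makespan, while respecting precedence constraints. The DAG, runtimes and prices are made-up assumptions, and the MDP/genetic-algorithm machinery itself is omitted; runtimes are also assumed independent of VM type to keep the sketch short.

    # Hypothetical workflow: task -> list of parent tasks (precedence constraints).
    DAG = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
    RUNTIME = {"a": 2.0, "b": 4.0, "c": 3.0, "d": 1.0}   # hours, assumed
    PRICE = {"small": 0.1, "large": 0.4}                  # $/hour, assumed

    def evaluate(assignment):
        """Return (cost, makespan) for a task -> VM-type assignment,
        assuming each task starts as soon as all its parents finish."""
        finish = {}
        cost = 0.0
        for task in ("a", "b", "c", "d"):                 # a topological order of DAG
            start = max((finish[p] for p in DAG[task]), default=0.0)
            finish[task] = start + RUNTIME[task]
            cost += RUNTIME[task] * PRICE[assignment[task]]
        return cost, max(finish.values())

    print(evaluate({"a": "large", "b": "small", "c": "small", "d": "large"}))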


Congress on Evolutionary Computation | 2005

The emergence of cooperation among agents using simple fixed bias tagging

Enda Howley; Colm O'Riordan

Cooperation influences our everyday lives, and the conflict between individual and collective rationality can be modelled through the use of social dilemmas such as the prisoner's dilemma. Reflecting the reality that real-world autonomous agents are not chosen at random to interact, we acknowledge the role some structuring mechanisms can play in increasing cooperation. This paper examines one simple structuring technique which has been shown to increase cooperation among agents. Tagging mechanisms structure a population into subgroups and as a result reflect many aspects which are relevant to the domains of kin selection and trust. We outline some simulations involving a simple tagging system and identify the main factors which are vital to increasing cooperation.
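
A minimal sketch of a tag-matching interaction rule of the kind the abstract describes. The payoff matrix is the standard prisoner's dilemma, but the tag tolerance and population settings are illustrative assumptions rather than the paper's fixed-bias model.

    import random

    # Standard prisoner's dilemma payoffs keyed by (my_move, their_move).
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
    TAG_TOLERANCE = 0.1   # assumed: cooperate only with sufficiently similar tags

    def make_agent():
        return {"tag": random.random(), "score": 0}

    def play(a, b):
        """Both agents cooperate when their tags are close, otherwise defect."""
        similar = abs(a["tag"] - b["tag"]) < TAG_TOLERANCE
        move_a = "C" if similar else "D"
        move_b = "C" if similar else "D"
        a["score"] += PAYOFF[(move_a, move_b)]
        b["score"] += PAYOFF[(move_b, move_a)]

    population = [make_agent() for _ in range(100)]
    for _ in range(1000):
        play(*random.sample(population, 2))
    print(sum(a["score"] for a in population) / len(population))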


Archive | 2016

An Experimental Review of Reinforcement Learning Algorithms for Adaptive Traffic Signal Control

Patrick Mannion; Jim Duggan; Enda Howley

Urban traffic congestion has become a serious issue, and improving the flow of traffic through cities is critical for environmental, social and economic reasons. Improvements in Adaptive Traffic Signal Control (ATSC) have a pivotal role to play in the future development of Smart Cities and in the alleviation of traffic congestion. Here we describe an autonomic method for ATSC, namely, reinforcement learning (RL). This chapter presents a comprehensive review of the applications of RL to the traffic control problem to date, along with a case study that showcases our developing multi-agent traffic control architecture. Three different RL algorithms are presented and evaluated experimentally. We also look towards the future and discuss some important challenges that still need to be addressed in this field.
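
A small sketch of how a traffic-signal agent's state and reward might be encoded for RL, assuming queue lengths per approach are observable. The discretisation and reward are illustrative assumptions, not the chapter's exact formulation.

    # Assumed observation: queue length (vehicles) on each of four approaches.
    def encode_state(queues, bucket=5):
        """Discretise queue lengths into coarse buckets so the state space stays small."""
        return tuple(min(q // bucket, 3) for q in queues)

    def reward(queues_before, queues_after):
        """Reward the agent for reducing the total number of queued vehicles."""
        return sum(queues_before) - sum(queues_after)

    print(encode_state([2, 7, 12, 30]))             # -> (0, 1, 2, 3)
    print(reward([2, 7, 12, 30], [1, 3, 10, 25]))   # -> 12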


Computational and Mathematical Organization Theory | 2011

The influence of random interactions and decision heuristics on norm evolution in social networks

Declan Mungovan; Enda Howley; Jim Duggan

In this paper we explore the effect that random social interactions have on the emergence and evolution of social norms in a simulated population of agents. In our model, agents observe the behaviour of others and update their norms based on these observations. An agent's norm is influenced by both their own fixed social network and a second random network composed of a subset of the remaining population. Random interactions are based on a weighted selection algorithm that uses an individual's path distance on the network to determine their chance of meeting a stranger. This means that friends-of-friends are more likely to randomly interact with one another than agents with a higher degree of separation. We then contrast the case where agents make rational, highest-utility decisions about which norm to adopt against one using a Markov Decision Process that associates a weight with the best choice. Finally, we examine the effect that these random interactions have on the evolution of a more complex social norm as it propagates throughout the population. We discover that increasing the frequency and weighting of random interactions results in higher levels of norm convergence, and in faster convergence when agents have the choice between two competing alternatives. This can be attributed to more information passing through the population, thereby allowing for quicker convergence. When the norm is allowed to evolve, we observe both global consensus formation and group splintering depending on the cognitive agent model used.
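
A sketch of the weighted "stranger" selection described in the abstract, where closer agents (friends-of-friends) are more likely to be met. The decay function and the example graph are assumptions chosen for illustration.

    import random
    from collections import deque

    def path_distances(graph, source):
        """Breadth-first search distances from source over an adjacency-list graph."""
        dist = {source: 0}
        queue = deque([source])
        while queue:
            node = queue.popleft()
            for neighbour in graph[node]:
                if neighbour not in dist:
                    dist[neighbour] = dist[node] + 1
                    queue.append(neighbour)
        return dist

    def pick_stranger(graph, agent):
        """Pick a random interaction partner, weighted towards short path distances."""
        dist = path_distances(graph, agent)
        candidates = [n for n, d in dist.items() if d >= 2]   # exclude self and direct friends
        weights = [1.0 / dist[n] ** 2 for n in candidates]    # assumed decay: 1 / distance^2
        return random.choices(candidates, weights=weights, k=1)[0]

    graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
    print(pick_stranger(graph, 0))   # node 3 (distance 2) is more likely than node 4 (distance 3)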


Applied Soft Computing | 2018

A meta optimisation analysis of particle swarm optimisation velocity update equations for watershed management learning

Karl Mason; Jim Duggan; Enda Howley

Particle swarm optimisation (PSO) is a general-purpose optimisation algorithm used to address hard optimisation problems. The algorithm operates as a result of a number of particles converging on what is hoped to be the best solution. How the particles move through the problem space is therefore critical to the success of the algorithm. This study utilises meta optimisation to compare a number of velocity update equations to determine which features of each are of benefit to the algorithm. A number of hybrid velocity update equations are proposed based on other high-performing velocity update equations. This research also presents a novel application of PSO to train a neural network function approximator to address the watershed management problem. It is found that the standard PSO with a linearly changing inertia, the proposed hybrid Attractive Repulsive PSO with avoidance of worst locations (AR PSOAWL) and the Adaptive Velocity PSO (AV PSO) provide the best performance overall. The results presented in this paper also reveal that commonly used PSO parameters do not provide the best performance; increasing and negative inertia values were found to perform better.
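
For reference, the canonical inertia-weight velocity update that the compared variants build on. The coefficient values below are common textbook defaults, not the meta-optimised settings reported in the paper.

    import random

    W, C1, C2 = 0.729, 1.494, 1.494   # common default inertia and acceleration coefficients

    def update_velocity(velocity, position, personal_best, global_best):
        """Standard PSO velocity update: inertia term + cognitive pull + social pull."""
        new_v = []
        for v, x, pb, gb in zip(velocity, position, personal_best, global_best):
            cognitive = C1 * random.random() * (pb - x)
            social = C2 * random.random() * (gb - x)
            new_v.append(W * v + cognitive + social)
        return new_v

    print(update_velocity([0.1, -0.2], [1.0, 2.0], [0.5, 1.5], [0.0, 1.0]))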


Neurocomputing | 2017

Multi-objective dynamic economic emission dispatch using particle swarm optimisation variants

Karl Mason; Jim Duggan; Enda Howley

Particle swarm optimisation (PSO) is a bio-inspired, swarm-based approach to solving optimisation problems. The algorithm functions as a result of particles traversing and evaluating the problem space, eventually converging on the optimum solution. This paper applies a number of PSO variants to the dynamic economic emission dispatch (DEED) problem. The DEED problem is a multi-objective optimisation problem in which the goal is to optimise two conflicting objectives: cost and emissions. The PSO variants tested include the standard PSO (SPSO), the PSO with avoidance of worst locations (PSO AWL), and a selection of different topologies including the PSO with a gradually increasing directed neighbourhood (PSO GIDN). The aim of the paper is to test the performance of different variants of the PSO AWL against variants of the SPSO on the DEED problem. The results show that the PSO AWL outperforms the SPSO for every topology implemented. The results are also compared to a state-of-the-art genetic algorithm (NSGA-II) and multi-agent reinforcement learning (MARL). This paper then examines the performance of each PSO algorithm when the power demand is modified to form a triangle wave. The purpose of this experiment was to analyse the performance of different PSO variants on an increasingly constrained problem.
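
A sketch of the Pareto-dominance test typically used when comparing candidate dispatch schedules on two conflicting objectives such as cost and emissions. A minimisation convention and toy numbers are assumed; this is not the paper's evaluation code.

    def dominates(a, b):
        """True if solution a is at least as good as b on every objective
        (both minimised) and strictly better on at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    # Toy (cost, emissions) pairs for three candidate dispatch schedules.
    candidates = [(2.50e6, 3.1e5), (2.45e6, 3.3e5), (2.60e6, 3.4e5)]
    front = [c for c in candidates
             if not any(dominates(other, c) for other in candidates if other is not c)]
    print(front)   # the first two candidates are mutually non-dominated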


Artificial Intelligence Review | 2006

The effects of viscosity in choice and refusal IPD environments

Enda Howley; Colm O'Riordan

An objective of multi-agent systems is to build robust intelligent systems capable of existing in complex environments. These systems are often characterised as being uncertain and open to change, which makes them far more difficult to design and understand. Some of this uncertainty and change occurs in open agent environments, where agents can freely enter and exit the system. In this paper we examine this form of population change in a game-theoretic setting. The simulations involve studying population change through a number of alternative viscosity models and examine two possible trust models. All simulations use a simple choice and refusal game environment within which agents may freely choose with which of their peers to interact.
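
A minimal sketch of a choice-and-refusal rule in this spirit: an agent tracks how often each peer has cooperated with it and refuses offers from peers below a tolerance threshold. The threshold and history representation are assumptions for illustration, not the trust models studied in the paper.

    TOLERANCE = 0.4   # assumed minimum acceptable cooperation rate

    def accepts(history, peer):
        """Accept a game offer unless the peer's observed cooperation rate is too low."""
        plays = history.get(peer, [])
        if not plays:                      # no history yet: give the stranger a chance
            return True
        return sum(plays) / len(plays) >= TOLERANCE

    history = {"agent_7": [1, 0, 0, 0], "agent_9": [1, 1, 0, 1]}
    print(accepts(history, "agent_7"))   # False: cooperated only 25% of the time
    print(accepts(history, "agent_9"))   # True: cooperated 75% of the time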


Cluster Computing | 2017

A network aware approach for the scheduling of virtual machine migration during peak loads

Martin Duggan; Jim Duggan; Enda Howley; Enda Barrett

Live virtual machine migration can have a major impact on how a cloud system performs, as it consumes significant amounts of network resources such as bandwidth. Migration contributes to an increase in consumption of network resources which leads to longer migration times and ultimately has a detrimental effect on the performance of a cloud computing system. Most industrial approaches use ad-hoc manual policies to migrate virtual machines. In this paper, we propose an autonomous network aware live migration strategy that observes the current demand level of a network and performs appropriate actions based on what it is experiencing. The Artificial Intelligence technique known as Reinforcement Learning acts as a decision support system, enabling an agent to learn optimal scheduling times for live migration while analysing current network traffic demand. We demonstrate that an autonomous agent can learn to utilise available resources when peak loads saturate the cloud network.
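
A back-of-the-envelope sketch of why migration timing matters: estimated live-migration time as VM memory divided by the bandwidth left over by current traffic. The single-pass model, link capacity and utilisation figures are assumptions; the pre-copy dirty-page behaviour and the learning agent itself are not reproduced here.

    def migration_time(vm_memory_gb, link_gbps, current_utilisation):
        """Rough single-pass estimate: memory to copy divided by spare bandwidth.
        Real live migration iterates over dirtied pages, so this is a lower bound."""
        spare_gbps = link_gbps * (1.0 - current_utilisation)
        if spare_gbps <= 0:
            return float("inf")
        return (vm_memory_gb * 8) / spare_gbps          # seconds (GB -> Gbit)

    print(migration_time(4, 10, 0.30))   # off-peak: roughly 4.6 s
    print(migration_time(4, 10, 0.95))   # peak load: 64 s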


Procedia Computer Science | 2015

Parallel Reinforcement Learning for Traffic Signal Control

Patrick Mannion; Jim Duggan; Enda Howley

Developing Adaptive Traffic Signal Control strategies for efficient urban traffic management is a challenging problem. Reinforcement Learning (RL) has been shown to be a promising approach when applied to traffic signal control (TSC) problems. When using RL agents for TSC, difficulties may arise with respect to convergence times and performance. This is especially pronounced on complex intersections with many different phases, due to the increased size of the state-action space. Parallel Learning is an emerging technique in the RL literature which allows several learning agents to pool their experiences while learning concurrently on the same problem. Here we present an extension to a leading published work on RL for TSC, which leverages the benefits of Parallel Learning to increase exploration and reduce delay times and queue lengths.
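
A minimal sketch of the experience-pooling idea behind parallel learning: agents append (state, action, reward, next state) transitions to a shared buffer, and each agent replays the pool into its own Q-table. The TSC state labels and the exact pooling scheme from the paper are assumptions for illustration.

    ACTIONS = ["north_south_green", "east_west_green"]
    ALPHA, GAMMA = 0.1, 0.9

    shared_experience = []   # transitions pooled across all parallel agents

    def record(state, action, reward, next_state):
        shared_experience.append((state, action, reward, next_state))

    def update_from_pool(q_table):
        """Replay pooled transitions into one agent's Q-table, so exploration
        done by any agent benefits the others."""
        for state, action, reward, next_state in shared_experience:
            best_next = max(q_table.get((next_state, a), 0.0) for a in ACTIONS)
            old = q_table.get((state, action), 0.0)
            q_table[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

    record(("low", "high"), "east_west_green", 12, ("low", "low"))
    agent_q = {}
    update_from_pool(agent_q)
    print(agent_q)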

Collaboration


Dive into Enda Howley's collaborations.

Top Co-Authors

Jim Duggan (National University of Ireland)
Karl Mason (National University of Ireland)
Enda Barrett (National University of Ireland)
Patrick Mannion (Galway-Mayo Institute of Technology)
Colm O'Riordan (National University of Ireland)
Martin Duggan (National University of Ireland)
Hongliang Liu (National University of Ireland)
Declan Mungovan (National University of Ireland)
Michael Schukat (National University of Ireland)