David A. Bacigalupo
University of Warwick
Publications
Featured research published by David A. Bacigalupo.
Grid Computing | 2010
Victor Chang; David A. Bacigalupo; Gary Wills; David De Roure
This paper reviews current cloud computing business models and presents proposals on how organisations can achieve sustainability by adopting appropriate models. We classify cloud computing business models into eight types: (1) Service Provider and Service Orientation; (2) Support and Services Contracts; (3) In-House Private Clouds; (4) All-In-One Enterprise Cloud; (5) One-Stop Resources and Services; (6) Government funding; (7) Venture Capitals; and (8) Entertainment and Social Networking. Using the Jericho Forum’s ‘Cloud Cube Model’ (CCM), the paper presents a summary of the eight business models. We discuss how the CCM fits into each business model, and then based on this discuss each business model’s strengths and weaknesses. We hope adopting an appropriate cloud computing business model will help organisations investing in this technology to stand firm in the economic downturn.
The Journal of Supercomputing | 2005
David A. Bacigalupo; Stephen A. Jarvis; Ligang He; Daniel P. Spooner; Donna N. Dillenberger; Graham R. Nudd
Response time predictions for workload on new server architectures can enhance Service Level Agreement–based resource management. This paper evaluates two performance prediction methods using a distributed enterprise application benchmark. The historical method makes predictions by extrapolating from previously gathered performance data, while the layered queuing method makes predictions by solving layered queuing networks. The methods are evaluated in terms of: the systems that can be modelled; the metrics that can be predicted; the ease with which the models can be created and the level of expertise required; the overheads of recalibrating a model; and the delay when evaluating a prediction. The paper also investigates how a prediction-enhanced resource management algorithm can be tuned so as to compensate for predictive inaccuracy and balance the costs of SLA violations and server usage.
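A minimal sketch of how a historical prediction technique of this kind might work follows; the function names, the polynomial fit and the benchmark-derived speed ratio are illustrative assumptions, not the paper's implementation:

```python
# Illustrative sketch only: a simple "historical" response time predictor that
# extrapolates from previously gathered measurements on a reference server.
import numpy as np

def fit_history(loads, response_times, degree=2):
    """Fit a polynomial to (load, response time) points measured on a reference server."""
    return np.polyfit(loads, response_times, degree)

def predict_response_time(model, load, speed_ratio=1.0):
    """Predict response time at a given load, scaled by the new server's
    relative processing speed (obtained from a small benchmark run)."""
    return np.polyval(model, load) / speed_ratio

# Example: historical data from the reference server, then a what-if prediction
# for a new architecture benchmarked at 1.4x the reference processing speed.
loads = np.array([10, 20, 40, 60, 80])           # requests per second
times = np.array([0.12, 0.15, 0.24, 0.41, 0.78]) # mean response time (seconds)
model = fit_history(loads, times)
print(predict_response_time(model, 50, speed_ratio=1.4))
```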
Simulation Modelling Practice and Theory | 2011
David A. Bacigalupo; Jano van Hemert; Xiaoyu Chen; Asif Usmani; Adam P. Chester; Ligang He; Donna N. Dillenberger; Gary Wills; Lester Gilbert; Stephen A. Jarvis
The automatic allocation of enterprise workload to resources can be enhanced by being able to make 'what-if' response time predictions whilst different allocations are being considered. We experimentally investigate an historical and a layered queuing performance model and show how they can provide a good level of support for a dynamic-urgent cloud environment. Using this we define, implement and experimentally investigate the effectiveness of a prediction-based cloud workload and resource management algorithm. Based on these experimental analyses we: (i) comparatively evaluate the layered queuing and historical techniques; (ii) evaluate the effectiveness of the management algorithm in different operating scenarios; and (iii) provide guidance on using prediction-based workload and resource management.
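A minimal sketch of a prediction-based allocation decision in the spirit of the algorithm investigated here; the predictor interface, penalty values and cost model are assumptions made for illustration:

```python
# Illustrative sketch only: choose among candidate allocations by balancing the
# predicted cost of SLA violations against the cost of server usage.
from dataclasses import dataclass

@dataclass
class Allocation:
    servers: int
    predicted_response_time: float  # from a historical or layered queuing model

def choose_allocation(candidates, sla_target, violation_penalty, server_cost):
    """Pick the candidate that minimises the combined cost of predicted SLA
    violations and server usage."""
    def cost(a):
        violation = max(0.0, a.predicted_response_time - sla_target)
        return violation * violation_penalty + a.servers * server_cost
    return min(candidates, key=cost)

candidates = [Allocation(2, 0.9), Allocation(3, 0.55), Allocation(4, 0.4)]
best = choose_allocation(candidates, sla_target=0.6,
                         violation_penalty=100.0, server_cost=5.0)
print(best)
```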
IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum | 2010
David A. Bacigalupo; Jano van Hemert; Asif Usmani; Donna N. Dillenberger; Gary Wills; Stephen A. Jarvis
The automatic allocation of enterprise workload to resources can be enhanced by being able to make ‘what-if’ response time predictions whilst different allocations are being considered. It is important to quantitatively compare the effectiveness of different prediction techniques for use in cloud infrastructures. To make the comparison relevant to a wide range of possible cloud environments, it is useful to consider: (1) urgent cloud customers, such as the emergency services, that can demand cloud resources at short notice (e.g. for our FireGrid emergency response software); (2) dynamic enterprise systems that must rapidly adapt to frequent changes in workload, system configuration and/or available cloud servers; (3) the coordinated use of the predictions by both the cloud infrastructure and cloud customer management systems; and (4) a broad range of criteria for evaluating each technique. However, there have been no previous comparisons meeting these requirements. This paper quantitatively compares the layered queuing and (“HYDRA”) historical techniques against these requirements, including our initial thoughts on how they could be combined. Supporting results and experiments include: (i) defining, investigating and hence providing guidelines on the use of a historical and a layered queuing model; (ii) using these guidelines to show that both techniques can make low-overhead and typically over 70% accurate predictions for new server architectures for which only a small number of benchmarks have been run; and (iii) defining and investigating the tuning of a prediction-based cloud workload and resource management algorithm.
International Conference on Advanced Learning Technologies | 2010
David A. Bacigalupo; W. I. Warburton; E.A. Draffan; Pei Zhang; Lester Gilbert; Gary Wills
Formative eAssessment can be very helpful in providing high-quality higher education assignments. However, there are obstacles restricting the uptake of formative eAssessment in higher education, including both cultural and technical issues. When a university is encouraging the uptake of formative eAssessment internally, it is useful to have case studies from academic schools detailing how academics enthusiastic about formative eAssessment have used it in their modules. It is particularly helpful if these case studies document: (i) the principal obstacles that these champions had to deal with; (ii) a cooperative design process through which these obstacles have been addressed by the champions (with assistance from, e.g., learning technologists); and (iii) an evaluation of the effectiveness of the resulting formative eAssessments. However, there is a shortage of such real-world, long-term case studies. This paper helps fill this gap in the literature by describing the case of a Modern Languages module within a Russell Group university (Southampton). The formative eAssessment solution resulting from the case study utilises our QTI, mobile QTI, accessibility and web 2.0 tools and can be positioned at the cutting edge of formative eAssessment practice. We have evaluated this with undergraduate student volunteers from Spanish modules and received positive feedback.
IEEE International Conference on Services Computing | 2004
Ligang He; Stephen A. Jarvis; David A. Bacigalupo; Daniel P. Spooner; Xinuo Chen; Graham R. Nudd
This paper addresses workload allocation techniques for clusters of computers. The workload in question is either homogeneous or heterogeneous: homogeneous workload contains only QoS-demanding jobs (QDJ) or only non-QoS jobs (NQJ), while heterogeneous workload is a mix of QDJs and NQJs. The processing platform used is a single cluster or multiple clusters of computers. Two workload allocation strategies (called ORT and OMR) are developed for homogeneous workloads by establishing and numerically solving sets of optimisation equations. The ORT strategy achieves the optimised mean response time for homogeneous NQJ workload, while the OMR strategy achieves the optimised mean miss rate for homogeneous QDJ workload. Based on ORT and OMR, a heterogeneous workload allocation strategy is developed that dynamically partitions the clusters into two parts, each managed by ORT or OMR to exclusively process NQJs or QDJs. This judicious partitioning achieves an optimised comprehensive performance metric that combines the mean response time and the mean miss rate. The effectiveness of these workload allocation techniques is demonstrated through queueing-theoretical analysis as well as through experimental studies. These techniques can be applied to e-business workload management to improve the distribution of different types of requests in clusters of servers.
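A minimal sketch of an ORT-style allocation under an assumed M/M/1 approximation per cluster; the paper's exact optimisation equation sets are not reproduced here, and all parameter values are illustrative:

```python
# Illustrative sketch only: split a total arrival rate across clusters so that
# the mean response time of non-QoS jobs is minimised, with each cluster
# approximated as an M/M/1 queue.
import numpy as np
from scipy.optimize import minimize

service_rates = np.array([50.0, 30.0, 20.0])  # jobs/sec each cluster can process
total_arrival = 70.0                           # total job arrival rate (jobs/sec)

def mean_response_time(lambdas):
    # Arrival-rate-weighted mean of per-cluster M/M/1 times: T_i = 1 / (mu_i - lambda_i)
    times = 1.0 / (service_rates - lambdas)
    return float(np.dot(lambdas, times) / total_arrival)

constraints = [{"type": "eq", "fun": lambda x: np.sum(x) - total_arrival}]
bounds = [(1e-6, mu - 1e-6) for mu in service_rates]  # keep each cluster stable
x0 = total_arrival * service_rates / service_rates.sum()

result = minimize(mean_response_time, x0, bounds=bounds, constraints=constraints)
print("allocation per cluster:", result.x)
```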
International Parallel and Distributed Processing Symposium | 2007
David A. Bacigalupo; James Xue; Simon D. Hammond; Stephen A. Jarvis; Donna N. Dillenberger; Graham R. Nudd
Container-managed persistence is an essential technology as it dramatically simplifies the implementation of enterprise data access. However it can also impose a significant overhead on the performance of the application at runtime. This paper presents a layered queuing performance model for predicting the effect of adding or removing container-managed persistence to a distributed enterprise application, in terms of response time and throughput performance metrics. Predictions can then be made for new server architectures - that is, server architectures for which only a small number of measurements have been made (e.g. to determine request processing speed). An experimental analysis of the model is conducted on a popular enterprise computing architecture based on IBM Websphere, using Enterprise Java Bean-based container-managed persistence as the middleware functionality. The results provide strong experimental evidence for the effectiveness of the model in terms of the accuracy of predictions, the speed with which predictions can be made and the low overhead at which the model can be rapidly parameterised.
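A minimal sketch of the kind of what-if comparison such a model supports, using a simple open tandem queue of M/M/1 stages as a stand-in for the layered queuing model; all service demands are hypothetical example values:

```python
# Illustrative sketch only: estimate the response time impact of adding a
# container-managed persistence tier by comparing two tandem queue configurations.
def tandem_response_time(arrival_rate, service_rates):
    """Sum of per-stage M/M/1 response times for an open tandem queue."""
    total = 0.0
    for mu in service_rates:
        if arrival_rate >= mu:
            raise ValueError("stage saturated: arrival rate exceeds service rate")
        total += 1.0 / (mu - arrival_rate)
    return total

arrival_rate = 40.0              # requests/sec
without_cmp = [120.0, 90.0]      # web tier, application tier (requests/sec)
with_cmp = [120.0, 80.0, 100.0]  # slower app tier plus a persistence tier

print("without CMP:", tandem_response_time(arrival_rate, without_cmp))
print("with CMP:   ", tandem_response_time(arrival_rate, with_cmp))
```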
International Symposium on Parallel and Distributed Processing and Applications | 2004
Ligang He; Stephen A. Jarvis; David A. Bacigalupo; Daniel P. Spooner; Graham R. Nudd
In a multicluster architecture, where jobs can be submitted through each constituent cluster, the job arrival rates in individual clusters may be uneven and the load therefore needs to be balanced among clusters. In this paper we investigate load balancing for two types of jobs, namely non-QoS and QoS-demanding jobs, and as a result two performance-specific load balancing strategies (called ORT and OMR) are developed. The ORT strategy is used to obtain the optimised mean response time for non-QoS jobs, and the OMR strategy is used to achieve the optimised mean miss rate for QoS-demanding jobs. The ORT and OMR strategies are mathematically modelled using queuing network theory to establish sets of optimisation equations. Numerical solutions are developed to solve these optimisation equations, and a so-called fair workload level is determined for each cluster. When the current workload in a cluster reaches this pre-calculated fair workload level, the jobs subsequently submitted to the cluster are transferred to other clusters for execution. The effectiveness of both strategies is demonstrated through theoretical analysis and experimental verification. The results show that the proposed load balancing mechanisms bring about considerable performance gains for both job types, while the job transfer frequency among clusters is considerably reduced. This has a number of advantages, in particular where scheduling jobs to remote resources involves the transfer of large executable and data files.
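A minimal sketch of the fair-workload-level transfer rule described above, with the pre-calculated levels treated as given inputs; the function names and the least-loaded tie-breaking rule are assumptions for illustration:

```python
# Illustrative sketch only: route a newly submitted job locally while the
# submitting cluster is below its fair workload level, otherwise transfer it
# to the cluster with the most spare capacity relative to its own level.
def route_job(cluster_id, current_load, fair_level):
    """current_load: dict cluster_id -> jobs currently queued or executing
    fair_level:   dict cluster_id -> pre-calculated fair workload level"""
    if current_load[cluster_id] < fair_level[cluster_id]:
        return cluster_id  # the submitting cluster still has headroom
    return max(current_load, key=lambda c: fair_level[c] - current_load[c])

current_load = {"A": 12, "B": 4, "C": 9}
fair_level = {"A": 10, "B": 8, "C": 9}
print(route_job("A", current_load, fair_level))  # -> "B"
```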
Cluster Computing and the Grid | 2005
Ligang He; Stephen A. Jarvis; Daniel P. Spooner; David A. Bacigalupo; Guang Tan; Graham R. Nudd
International Parallel and Distributed Processing Symposium | 2004
David A. Bacigalupo; Stephen A. Jarvis; Ligang He; Graham R. Nudd