Blesson Varghese
University of St Andrews
Publications
Featured research published by Blesson Varghese.
Future Generation Computer Systems | 2018
Blesson Varghese; Rajkumar Buyya
The landscape of cloud computing has significantly changed over the last decade. Not only have more providers and service offerings crowded the space, but cloud infrastructure that was traditionally limited to single-provider data centers is now evolving. In this paper, we first discuss the changing cloud infrastructure and consider the use of infrastructure from multiple providers and the benefits of decentralising computing away from data centers. These trends have resulted in the need for a variety of new computing architectures that will be offered by future cloud infrastructure. These architectures are anticipated to impact areas such as connecting people and devices, data-intensive computing, the service space and self-learning systems. Finally, we lay out a roadmap of challenges that will need to be addressed to realise the potential of next-generation cloud systems. Highlights: Distributed cloud infrastructure will make use of the network edge in the future. Two-tier applications will be replaced by new multi-tier cloud architectures. Next-generation cloud computing impacts both societal and scientific avenues. A new marketplace will need to be developed for resources at the network edge. Security and sustainability are key to architecting future cloud systems.
arXiv: Distributed, Parallel, and Cluster Computing | 2016
Blesson Varghese; Nan Wang; Sakil Barbhuiya; Peter Kilpatrick; Dimitrios S. Nikolopoulos
Many cloud-based applications employ data centers as central servers to process data that is generated by edge devices, such as smartphones, tablets and wearables. This model places ever-increasing demands on communication and computational infrastructure, with inevitable adverse effects on Quality-of-Service and Experience. The concept of Edge Computing is predicated on moving some of this computational load towards the edge of the network to harness computational capabilities that are currently untapped in edge nodes, such as base stations, routers and switches. This position paper considers the challenges and opportunities that arise out of this new direction in the computing landscape.
IEEE International Conference on High Performance Computing, Data, and Analytics | 2012
Aman Kumar Bahl; Oliver Baltzer; Andrew Rau-Chaplin; Blesson Varghese
At the heart of the analytical pipeline of a modern quantitative insurance/reinsurance company is a stochastic simulation technique for portfolio risk analysis and pricing referred to as Aggregate Analysis. It supports the computation of risk measures, including Probable Maximum Loss (PML) and Tail Value at Risk (TVaR), for a variety of complex property catastrophe insurance contracts, including Cat eXcess of Loss (XL), or Per-Occurrence XL, and Aggregate XL, as well as contracts that combine these. In this paper, we explore parallel methods for aggregate risk analysis. A parallel aggregate risk analysis algorithm and an engine based on it are proposed. The engine is implemented in C and OpenMP for multi-core CPUs and in C and CUDA for many-core GPUs. Performance analysis indicates that GPUs offer a cost-effective HPC solution for aggregate risk analysis. The optimised algorithm on the GPU performs a 1 million trial aggregate simulation with 1,000 catastrophic events per trial on a typical exposure set and contract structure in just over 20 seconds, approximately 15x faster than the sequential counterpart. This is sufficient to support the real-time pricing scenario in which an underwriter analyses different contractual terms and pricing while discussing a deal with a client over the phone.
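The structure of the aggregate simulation described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's engine: the heavy-tailed loss model, the limit values and the quantile-based definitions of PML and TVaR here are all assumptions made for the example.

```python
import random

def aggregate_analysis(trials, events_per_trial, occ_limit, agg_limit, seed=0):
    """Toy aggregate simulation: each trial (a simulated year) draws
    heavy-tailed event losses, applies a Per-Occurrence XL cap to each
    event and an Aggregate XL cap to the yearly total.
    All figures are illustrative, not the paper's data."""
    rng = random.Random(seed)
    yearly = []
    for _ in range(trials):
        total = 0.0
        for _ in range(events_per_trial):
            loss = rng.paretovariate(2.0) * 1e6    # hypothetical event loss
            total += min(loss, occ_limit)          # Per-Occurrence XL cap
        yearly.append(min(total, agg_limit))       # Aggregate XL cap
    return sorted(yearly)

def pml(yearly_sorted, return_period):
    """Probable Maximum Loss: the (1 - 1/T) quantile of yearly losses."""
    idx = min(len(yearly_sorted) - 1,
              int((1.0 - 1.0 / return_period) * len(yearly_sorted)))
    return yearly_sorted[idx]

def tvar(yearly_sorted, return_period):
    """Tail Value at Risk: mean yearly loss at or beyond the PML."""
    threshold = pml(yearly_sorted, return_period)
    tail = [y for y in yearly_sorted if y >= threshold]
    return sum(tail) / len(tail)
```

Because every trial is independent, the outer loop is what the paper parallelises across CPU cores (OpenMP) or GPU threads (CUDA).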
International Conference on Big Data | 2013
Vu Dung Nguyen; Blesson Varghese; Adam Barker
Analysis of information retrieved from microblogging services such as Twitter can provide valuable insight into public sentiment in a geographic region. This insight can be enriched by visualising information in its geographic context. Two underlying approaches for sentiment analysis are dictionary-based and machine learning. The former is popular for public sentiment analysis, and the latter has found limited use for aggregating public sentiment from Twitter data. The research presented in this paper aims to extend the machine learning approach for aggregating public sentiment. To this end, a framework for analysing and visualising public sentiment from a Twitter corpus is developed. A dictionary-based approach and a machine learning approach are implemented within the framework and compared using one UK case study, namely the royal birth of 2013. The case study validates the feasibility of the framework for analysis and rapid visualisation. One observation is that there is good correlation between the results produced by the popular dictionary-based approach and the machine learning approach when large volumes of tweets are analysed. However, for rapid analysis to be possible, faster methods need to be developed using big data techniques and parallel methods.
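The dictionary-based approach mentioned above, the simpler of the two compared in the paper, can be sketched in a few lines. The lexicon entries and scoring scheme here are illustrative assumptions, not the paper's actual dictionary.

```python
# Hypothetical sentiment lexicon: word -> polarity score.
LEXICON = {"good": 1, "great": 2, "happy": 1, "bad": -1, "awful": -2, "sad": -1}

def sentiment(tweet):
    """Score a tweet as the sum of the lexicon values of its words;
    words absent from the lexicon contribute zero."""
    return sum(LEXICON.get(w, 0) for w in tweet.lower().split())

def aggregate(tweets):
    """Aggregate public sentiment over a corpus as the mean per-tweet score."""
    scores = [sentiment(t) for t in tweets]
    return sum(scores) / len(scores) if scores else 0.0
```

The machine learning alternative would replace `sentiment` with a trained classifier; the aggregation step stays the same, which is why the two approaches correlate well over large corpora.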
IEEE Transactions on Services Computing | 2017
Nan Wang; Blesson Varghese; Michail Matthaiou; Dimitrios S. Nikolopoulos
Current computing techniques using the cloud as a centralised server will become untenable as billions of devices get connected to the Internet. This raises the need for fog computing, which leverages computing at the edge of the network on nodes, such as routers, base stations and switches, along with the cloud. However, to realise fog computing, the challenge of managing edge nodes will need to be addressed. This paper is motivated to address the resource management challenge. We develop the first framework to manage edge nodes, namely the Edge NOde Resource Management (ENORM) framework. Mechanisms for provisioning and auto-scaling edge node resources are proposed. The feasibility of the framework is demonstrated on a Pokémon Go-like online game use-case. The benefits of using ENORM include a 20-80 percent reduction in application latency and up to a 95 percent reduction in data transfer and communication frequency between the edge node and the cloud. These results highlight the potential of fog computing for improving the quality of service and experience.
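The provisioning and auto-scaling mechanisms mentioned above can be illustrated with a simple threshold-driven sketch. The `EdgeNode` class, the latency thresholds and the core-based resource model below are assumptions for illustration; they are not ENORM's actual interface.

```python
class EdgeNode:
    """Toy model of an edge node that hosts application servers
    offloaded from the cloud, managed by core count."""

    def __init__(self, total_cores):
        self.total_cores = total_cores
        self.allocated = {}  # application -> cores currently granted

    def provision(self, app, cores):
        """Admit an application on the edge node if capacity allows;
        otherwise the workload stays on the cloud."""
        free = self.total_cores - sum(self.allocated.values())
        if cores <= free:
            self.allocated[app] = cores
            return True
        return False

    def autoscale(self, app, latency_ms, target_ms=50, step=1):
        """Grow an app's allocation while it misses its latency target,
        and shrink it when latency is comfortably below target."""
        free = self.total_cores - sum(self.allocated.values())
        if latency_ms > target_ms and free >= step:
            self.allocated[app] += step
        elif latency_ms < target_ms / 2 and self.allocated[app] > step:
            self.allocated[app] -= step
```

Reclaimed cores can then be offered to other applications, which is what keeps edge capacity, a much scarcer resource than the cloud, usable by more than one tenant.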
Robotics and Autonomous Systems | 2010
Blesson Varghese; Gerard T. McKee
The work reported in this paper is motivated towards the development of a mathematical model for swarm systems based on macroscopic primitives. A pattern formation and transformation model is proposed. The pattern transformation model comprises two general methods for pattern transformation, namely a macroscopic transformation method and a mathematical transformation method. The problem of transformation is formally expressed and four special cases of transformation are considered. Simulations to confirm the feasibility of the proposed models and transformation methods are presented. Comparison between the two transformation methods is also reported.
International Conference on Cloud Computing | 2015
Adam Barker; Blesson Varghese; Long Thanh Thai
A Cloud Services Brokerage (CSB) acts as an intermediary between cloud service providers (e.g., Amazon and Google) and cloud service end users, providing a number of value-adding services. CSBs as a research topic are in their infancy. The goal of this paper is to provide a concise survey of existing CSB technologies in a variety of areas and to highlight a roadmap detailing five future opportunities for research.
IEEE International Conference on Cloud Computing Technology and Science | 2014
Blesson Varghese; Özgür Akgün; Ian Miguel; Long Thai; Adam Barker
How can applications be deployed on the cloud to achieve maximum performance? This question has become significant and challenging with the availability of a wide variety of Virtual Machines (VMs) with different performance capabilities in the cloud. We address it by proposing a six-step benchmarking methodology in which a user provides a set of four weights indicating how important each of the groups memory, processor, computation and storage is to the application to be executed on the cloud. The weights, along with cloud benchmarking data, are used to generate a ranking of VMs that can maximise the performance of the application. The rankings are validated through an empirical analysis using two case study applications, a financial risk application and a molecular dynamics simulation, both representative of workloads that can benefit from execution on the cloud. Both case studies validate the feasibility of the methodology and highlight that maximum performance can be achieved on the cloud by selecting the top-ranked VMs produced by the methodology.
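The ranking step of the methodology, combining user weights with benchmark scores, can be sketched as below. The VM names, benchmark figures and min-max normalisation are assumptions made for the example, not the paper's benchmark data.

```python
def rank_vms(benchmarks, weights):
    """benchmarks: {vm: {group: score}} with higher raw scores better;
    weights: {group: user-supplied importance}. Each group is normalised
    to [0, 1] across VMs, then VMs are ordered by weighted score."""
    groups = list(weights)
    maxima = {g: max(b[g] for b in benchmarks.values()) for g in groups}

    def score(vm):
        return sum(weights[g] * benchmarks[vm][g] / maxima[g] for g in groups)

    return sorted(benchmarks, key=score, reverse=True)

# Illustrative benchmark data for two hypothetical VM types.
benchmarks = {
    "vm.small": {"memory": 40, "processor": 30, "computation": 25, "storage": 50},
    "vm.large": {"memory": 90, "processor": 85, "computation": 80, "storage": 70},
}
# User weights: this application is processor- and computation-bound.
weights = {"memory": 2, "processor": 4, "computation": 3, "storage": 1}
print(rank_vms(benchmarks, weights))  # vm.large ranks first here
```

With real benchmark data the top-ranked VM need not be the largest one; a memory-bound workload with a high memory weight could rank a memory-optimised instance above a compute-optimised one.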
IEEE International Conference on Cloud Computing Technology and Science | 2014
Long Thai; Blesson Varghese; Adam Barker
Bags of Distributed Tasks (BoDT) can benefit from decentralised execution on the cloud. However, there is a trade-off between the performance that can be achieved by employing a large number of cloud VMs for the tasks and the monetary constraints that are often placed by a user. The research reported in this paper investigates this trade-off so that an optimal plan for deploying BoDT applications on the cloud can be generated. A heuristic algorithm that considers the user's preferences for performance and cost is proposed and implemented. The feasibility of the algorithm is demonstrated by generating execution plans for a sample application. The key result is that the algorithm generates optimal execution plans for the application over 91% of the time.
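The shape of such a performance/cost heuristic can be illustrated with a toy version. The hourly billing model, the utility function and the `alpha` preference parameter below are assumptions for the example, not the paper's algorithm.

```python
import math

def choose_vms(num_tasks, task_secs, price_per_hour, budget, alpha):
    """Pick the number of identical VMs for a bag of tasks.
    alpha in [0, 1]: 1 favours performance, 0 favours cost.
    Returns the VM count with the best weighted utility within budget."""
    best_n, best_util = 1, float("-inf")
    for n in range(1, num_tasks + 1):
        # Tasks are divided evenly; makespan is set by the busiest VM.
        makespan = math.ceil(num_tasks / n) * task_secs
        # Hypothetical billing model: per VM, rounded up to whole hours.
        cost = n * price_per_hour * math.ceil(makespan / 3600)
        if cost > budget:
            continue
        # Shorter makespan and lower cost both raise utility.
        util = alpha * (1 / makespan) + (1 - alpha) * (1 / cost)
        if util > best_util:
            best_n, best_util = n, util
    return best_n
```

Sweeping `alpha` from 0 to 1 traces the trade-off the abstract describes: at one extreme the plan uses the cheapest viable VM count, at the other as many VMs as the budget permits.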
International Journal of Intelligent Computing and Cybernetics | 2009
Blesson Varghese; Gerard T. McKee
Purpose – The purpose of this paper is to address a classic problem identified by researchers in the area of swarm robotic systems – pattern formation – and is also motivated by the need for mathematical foundations in swarm systems.
Design/methodology/approach – The work is separated out as inspirations, applications, definitions, challenges and classifications of pattern formation in swarm systems based on recent literature. Further, the work proposes a mathematical model for swarm pattern formation and transformation.
Findings – A swarm pattern formation model based on mathematical foundations and macroscopic primitives is proposed. A formal definition for swarm pattern transformation and four special cases of transformation are introduced. Two general methods for transforming patterns are investigated and a comparison of the two methods is presented. The validity of the proposed models and the feasibility of the methods investigated are confirmed on the Traer Physics and Processing environment.
Original...