Salman Taherizadeh
University of Ljubljana
Publications
Featured research published by Salman Taherizadeh.
Journal of Systems and Software | 2018
Salman Taherizadeh; Andrew Clifford Jones; Ian J. Taylor; Zhiming Zhao; Vlado Stankovski
Abstract: Recently, a promising trend has evolved from previously centralized computation towards decentralized edge computing in the proximity of end-users to provide cloud applications. To ensure the Quality of Service (QoS) of such applications and the Quality of Experience (QoE) of end-users, it is necessary to employ a comprehensive monitoring approach. Requirement analysis is a key software-engineering task throughout the whole lifecycle of applications; however, the requirements for monitoring systems in edge computing scenarios are not yet fully established. The goal of the present survey is therefore threefold: to identify the main challenges in monitoring edge computing applications that are not yet fully solved; to present a new taxonomy of monitoring requirements for adaptive applications orchestrated on edge computing frameworks; and to discuss and compare widely used cloud monitoring technologies for assuring the performance of these applications. Our analysis shows that none of the existing widely used cloud monitoring tools yet provides an integrated monitoring solution for edge computing frameworks, and some monitoring requirements are not thoroughly met by any of them.
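As a rough illustration of the kind of lightweight, push-based agent that edge monitoring approaches of this type consider (this is not code from the paper; the collector URL and reporting interval are assumptions), the following Python sketch periodically samples host metrics with psutil and pushes them to a hypothetical collector endpoint.

```python
# Minimal edge-node monitoring agent (illustrative sketch only).
# Assumes a hypothetical HTTP collector at COLLECTOR_URL; not from the paper.
import time
import socket

import psutil    # third-party: pip install psutil
import requests  # third-party: pip install requests

COLLECTOR_URL = "http://monitoring.example.org/metrics"  # hypothetical endpoint
INTERVAL_S = 10

def sample_metrics() -> dict:
    """Collect a small set of host-level metrics for one reporting interval."""
    net = psutil.net_io_counters()
    return {
        "node": socket.gethostname(),
        "timestamp": time.time(),
        "cpu_percent": psutil.cpu_percent(interval=1),
        "mem_percent": psutil.virtual_memory().percent,
        "net_bytes_sent": net.bytes_sent,
        "net_bytes_recv": net.bytes_recv,
    }

if __name__ == "__main__":
    while True:
        try:
            requests.post(COLLECTOR_URL, json=sample_metrics(), timeout=5)
        except requests.RequestException:
            pass  # edge links are unreliable; drop the sample and retry next round
        time.sleep(INTERVAL_S)
```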
Grid Economics and Business Models | 2016
Salman Taherizadeh; Ian J. Taylor; Andrew Clifford Jones; Zhiming Zhao; Vlado Stankovski
Renting very high-bandwidth or special connection links is neither affordable nor economical for service providers. Consequently, ensuring that data streaming systems can guarantee the service quality experienced by users is challenging, because the network performance of Internet communications changes in real time. This paper presents a network monitoring approach that is broadly applicable to the adaptation of real-time services running on network edge computing platforms. The approach identifies runtime variations in the network quality of links between application servers and end-users. It is shown that, by identifying critical conditions, it is possible to continuously adapt the deployed service for optimal performance. Adaptation possibilities include reconfiguration by dynamically changing paths between clients and servers, vertical scaling such as re-allocation of bandwidth to specific links, horizontal scaling of application servers, and even live migration of application components from one edge server to another to improve application performance.
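To make the monitor-then-adapt loop concrete, here is a minimal Python sketch, an assumption-laden illustration rather than the authors' implementation, that estimates link quality via TCP connect latency and invokes a placeholder adaptation action when a threshold is exceeded. The endpoints, threshold and adapt() hook are all invented.

```python
# Illustrative network-quality probe with a threshold-triggered adaptation hook.
# The endpoints, thresholds and the adapt() action are hypothetical placeholders.
import socket
import time

SERVERS = [("edge-a.example.org", 443), ("edge-b.example.org", 443)]  # hypothetical
LATENCY_THRESHOLD_S = 0.15  # treat links slower than 150 ms as critical
PROBE_INTERVAL_S = 30

def connect_latency(host: str, port: int, timeout: float = 2.0) -> float:
    """Return the TCP connect time to (host, port), or +inf if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")

def adapt(host: str, latency: float) -> None:
    """Placeholder for an adaptation action (re-routing, scaling, migration)."""
    print(f"critical condition on {host}: {latency:.3f}s -> trigger adaptation")

if __name__ == "__main__":
    while True:
        for host, port in SERVERS:
            observed = connect_latency(host, port)
            if observed > LATENCY_THRESHOLD_S:
                adapt(host, observed)
        time.sleep(PROBE_INTERVAL_S)
```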
Workflows in Support of Large-Scale Science | 2015
Kieran Evans; Andrew Clifford Jones; Alun David Preece; Francisco Quevedo; David Rogers; Irena Spasic; Ian J. Taylor; Vlado Stankovski; Salman Taherizadeh; Jernej Trnkoczy; George Suciu; Victor Suciu; Paul Martin; Junchao Wang; Zhiming Zhao
Cloud-based applications that depend on time-critical data processing or network throughput require the capability to reconfigure their infrastructure on demand as conditions change. Although the ability to apply quality-of-service constraints to current Cloud offerings is limited, there are ongoing efforts to change this. One such effort is the European-funded SWITCH project, which aims to provide a programming model and toolkit that help programmers specify quality-of-service and quality-of-experience metrics for their distributed applications, together with the reconfiguration actions that can be taken to maintain these requirements. In this paper, we present an approach to application reconfiguration that applies a workflow methodology to implement a prototype involving multiple reconfiguration scenarios of a distributed real-time social media analysis application called Sentinel. We show that, by using a lightweight RPC-based workflow approach, we can monitor a live application in real time and spawn dependency-based workflows to reconfigure the underlying Docker containers that implement the distributed components of the application. We propose to use this prototype as the basis for part of the SWITCH workbench, which will support more advanced programmable infrastructures.
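The reconfiguration idea can be pictured as a small, dependency-ordered set of steps acting on Docker containers. The sketch below is a hypothetical illustration using the Docker SDK for Python; it is not the SWITCH/Sentinel workflow code, and the container names and step graph are invented.

```python
# Toy dependency-ordered reconfiguration "workflow" over Docker containers.
# Container names and the step graph are invented for illustration; this is
# not the SWITCH/Sentinel implementation described in the paper.
from graphlib import TopologicalSorter  # Python 3.9+

import docker  # third-party: pip install docker

client = docker.from_env()

def restart(name: str) -> None:
    """Restart one containerized component of the application."""
    client.containers.get(name).restart()

# Steps and their dependencies: a step runs only after the steps it depends on.
STEPS = {
    "restart-broker": [],
    "restart-analyzer": ["restart-broker"],
    "restart-frontend": ["restart-analyzer"],
}
ACTIONS = {
    "restart-broker": lambda: restart("sentinel-broker"),
    "restart-analyzer": lambda: restart("sentinel-analyzer"),
    "restart-frontend": lambda: restart("sentinel-frontend"),
}

def run_workflow() -> None:
    for step in TopologicalSorter(STEPS).static_order():
        print(f"running workflow step: {step}")
        ACTIONS[step]()

if __name__ == "__main__":
    run_workflow()
```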
IEEE/ACM International Conference on Utility and Cloud Computing | 2015
Vlado Stankovski; Salman Taherizadeh; Ian Taylor; Andrew Clifford Jones; Bruce Becker; Carlo Mastroianni; Heru Suhartanto
This paper presents a design study of an environment that would provide resilience, high availability, reproducibility and reliability for Cloud-based applications. The approach involves the use of a resilient container overlay, which provides tools for tracking and optimizing container placement during the course of a scientific experiment's execution. The system is designed to detect failures and current performance bottlenecks, and to be capable of migrating running containers on the fly to servers better suited to their execution. This work is in the design phase; therefore, in this paper we outline the proposed architecture of the system and identify existing container management and migration tools that can be used in the implementation, where appropriate.
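As a much-simplified illustration of the failure-detection and migration idea (not the design in the paper, and a stop/re-create approximation rather than a true live migration), the Python sketch below watches a container on one Docker host and re-creates it on another when it is no longer running. The host URLs, image and container names are assumptions.

```python
# Simplified "migration" sketch: if a container on the source host is no longer
# running, re-create it from the same image on the target host. This is a
# stop/re-create approximation, not live migration, and all names are invented.
import time

import docker  # third-party: pip install docker

SOURCE = docker.DockerClient(base_url="tcp://edge-1.example.org:2375")  # hypothetical
TARGET = docker.DockerClient(base_url="tcp://edge-2.example.org:2375")  # hypothetical
CONTAINER_NAME = "experiment-worker"
IMAGE = "example/worker:latest"

def is_running(client: docker.DockerClient, name: str) -> bool:
    try:
        return client.containers.get(name).status == "running"
    except docker.errors.NotFound:
        return False

def migrate() -> None:
    """Remove the container on the source host (if present) and start it on the target."""
    try:
        SOURCE.containers.get(CONTAINER_NAME).remove(force=True)
    except docker.errors.NotFound:
        pass
    TARGET.containers.run(IMAGE, name=CONTAINER_NAME, detach=True)

if __name__ == "__main__":
    while True:
        if not is_running(SOURCE, CONTAINER_NAME) and not is_running(TARGET, CONTAINER_NAME):
            print("failure detected; re-creating container on target host")
            migrate()
        time.sleep(15)
```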
International Conference on Big Data | 2017
Salman Taherizadeh; Vlado Stankovski
Online services such as Internet of Things (IoT) applications have evolved considerably in recent years: they are becoming increasingly time-sensitive, are maintained at decentralized locations, and are easily affected by workload intensity that changes at runtime. As a consequence, an up-and-coming trend has been emerging from previously centralized computation towards distributed edge computing in order to address these new concerns. The goal of the present paper is therefore twofold: first, to analyze modern types of edge computing applications and their auto-scaling challenges in delivering the desired performance under dynamically changing workloads; and second, to present a new taxonomy of auto-scaling applications. This taxonomy thoroughly considers the edge computing paradigm and its complementary technologies, such as container-based virtualization.
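One of the simplest auto-scaling policies covered by such taxonomies is threshold-based horizontal scaling. The sketch below is an illustrative assumption, not taken from the paper: it derives a desired replica count from an average CPU-utilization target, bounded between a minimum and maximum.

```python
# Threshold-based horizontal auto-scaling decision (illustrative only).
# The metric source, target and replica bounds are assumptions, not from the paper.
import math

MIN_REPLICAS = 1
MAX_REPLICAS = 10
TARGET_CPU_PERCENT = 60.0  # desired average utilization per replica

def desired_replicas(current_replicas: int, avg_cpu_percent: float) -> int:
    """Scale the replica count proportionally to observed vs. target utilization."""
    if avg_cpu_percent <= 0:
        return MIN_REPLICAS
    wanted = math.ceil(current_replicas * avg_cpu_percent / TARGET_CPU_PERCENT)
    return max(MIN_REPLICAS, min(MAX_REPLICAS, wanted))

# Example: 3 replicas at 90% average CPU -> scale out to 5 replicas.
print(desired_replicas(current_replicas=3, avg_cpu_percent=90.0))
```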
Advanced Information Networking and Applications | 2016
Paul Martin; A. Taal; Francisco Quevedo; David Mckensrick Rogers; Kieran Evans; Andrew Clifford Jones; Vlado Stankovski; Salman Taherizadeh; Jernej Trnkoczy; George Suciu; Zhiming Zhao
Cloud environments can provide elastic, controllable, on-demand services for supporting complex distributed applications. However, the engineering methods and software tools used for developing, deploying and executing classical time-critical applications do not yet account for the programmability and controllability that clouds can provide, and so time-critical applications do not yet benefit from the full potential of virtualisation technologies. A software workbench for developing, deploying and controlling time-critical applications in cloud environments can address this, but it needs to interoperate with existing cloud standards and services in a fashion that can still adapt to the continuing evolution of the field. Semantic linking can enhance interoperability by creating mappings between different vocabularies and specifications, allowing different technologies to be plugged together; these mappings can then be used to build such a workbench in a flexible manner. A semantic linking framework is presented that uses a multiple-viewpoint model of a cloud application workbench as a means to relate different cloud and quality-of-service standards in order to aid the development of time-critical applications. The foundations of such a model, developed as part of the H2020 project SWITCH, are also presented.
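At its core, semantic linking of this kind amounts to asserting mappings between terms of different vocabularies. The Python/rdflib sketch below is a hypothetical illustration (the namespaces, URIs and terms are invented, and this is not the SWITCH framework itself) of declaring that two specifications' notions of a compute node and of a memory property are equivalent.

```python
# Hypothetical vocabulary-mapping sketch with rdflib; namespaces and terms are
# invented and do not correspond to the actual SWITCH semantic framework.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL

TOSCA = Namespace("http://example.org/tosca#")   # hypothetical vocabulary A
OCCI = Namespace("http://example.org/occi#")     # hypothetical vocabulary B

g = Graph()
g.bind("tosca", TOSCA)
g.bind("occi", OCCI)
g.bind("owl", OWL)

# Map "the same concept expressed in two specifications" onto OWL equivalences.
g.add((TOSCA.ComputeNode, OWL.equivalentClass, OCCI.Compute))
g.add((TOSCA.memSizeMB, OWL.equivalentProperty, OCCI.memory))

print(g.serialize(format="turtle"))
```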
Sensors | 2018
Salman Taherizadeh; Vlado Stankovski; Marko Grobelnik
The adoption of advanced Internet of Things (IoT) technologies has increased impressively in recent years, with such services increasingly placed at the extreme Edge of the network. There are, however, specific Quality of Service (QoS) trade-offs that must be considered, particularly in situations where workloads vary over time or IoT devices dynamically change their geographic position. This article proposes an innovative capillary computing architecture, which benefits from mainstream Fog and Cloud computing approaches and relies on a set of new services, including an Edge/Fog/Cloud Monitoring System and a Capillary Container Orchestrator. All necessary microservices are implemented as Docker containers, and their orchestration is performed from the Edge computing nodes up to Fog and Cloud servers in the geographic vicinity of moving IoT devices. A car equipped with a Motorhome Artificial Intelligence Communication Hardware (MACH) system as an Edge node, connected to several Fog and Cloud computing servers, was used for testing. Compared to using a fixed centralized Cloud provider, the service response time provided by our proposed capillary computing architecture was almost four times faster according to the 99th-percentile value, with a significantly smaller standard deviation, which indicates a high QoS.
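One ingredient of such Edge/Fog/Cloud orchestration is selecting, for a moving device, the reachable node with the lowest latency, falling back from Edge to Fog to Cloud. The Python sketch below is a simplified illustration under assumed host names; it is not the Capillary Container Orchestrator described in the article.

```python
# Latency-based tier selection sketch (Edge first, then Fog, then Cloud).
# Host names are assumptions; this is not the article's orchestrator.
import socket
import time

TIERS = {
    "edge":  [("edge-1.example.org", 443)],
    "fog":   [("fog-1.example.org", 443)],
    "cloud": [("cloud-1.example.org", 443)],
}

def latency(host: str, port: int, timeout: float = 1.0) -> float:
    """TCP connect time as a cheap proximity estimate; +inf if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")

def choose_target() -> tuple:
    """Prefer the closest tier whose best node is reachable."""
    for tier in ("edge", "fog", "cloud"):
        candidates = [(latency(h, p), h) for h, p in TIERS[tier]]
        best_latency, best_host = min(candidates)
        if best_latency < float("inf"):
            return tier, best_host
    raise RuntimeError("no reachable node in any tier")

if __name__ == "__main__":
    tier, host = choose_target()
    print(f"deploying microservice containers to {tier} node {host}")
```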
International Conference on Advanced Applied Informatics | 2017
Salman Taherizadeh; Vlado Stankovski
Various Internet of Things (IoT) applications, such as home automation and disaster early-warning systems, are being introduced in many areas of human life and business. Today, a common method of delivering such applications is via component-based software engineering built on cloud computing technologies such as containers. However, numerous technological challenges remain to be solved, particularly with regard to the time-critical Quality of Service (QoS) aspects of such applications. Runtime variations in workload intensity, i.e. the amount of service tasks to be processed, may radically affect the application performance perceived by end-users or lead to the underutilization of resources. In order to assure the QoS of these containerized applications, monitoring is required at both the container and the application level, yet such multi-level monitoring systems are largely lacking. In this study, we present the architecture and implementation of a multi-level monitoring framework that ensures system health and adapts an IoT application in response to the varying quantity, size and computational requirements of arriving requests. In this work, the cloud application adaptation possibility consists of horizontal scaling of container-based application instances.
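A minimal flavour of combining container-level and application-level signals for a horizontal-scaling decision is sketched below using the Docker SDK for Python. The label, image and the application-level queue metric are assumptions; this is not the framework implemented in the study.

```python
# Multi-level scaling sketch: combine container CPU usage (container level) with
# a request-queue length (application level) to decide whether to add an instance.
# The label, image and queue-length source are assumptions, not the paper's code.
import docker  # third-party: pip install docker

client = docker.from_env()
APP_LABEL = "app=iot-service"        # hypothetical label on application containers
IMAGE = "example/iot-service:latest"
CPU_HIGH_PERCENT = 80.0
QUEUE_HIGH = 100                     # pending requests considered "too many"

def container_cpu_percent(container) -> float:
    """Approximate CPU% from one non-streaming docker stats sample."""
    s = container.stats(stream=False)
    cpu_delta = (s["cpu_stats"]["cpu_usage"]["total_usage"]
                 - s["precpu_stats"]["cpu_usage"]["total_usage"])
    sys_delta = (s["cpu_stats"]["system_cpu_usage"]
                 - s["precpu_stats"].get("system_cpu_usage", 0))
    cpus = s["cpu_stats"].get("online_cpus", 1)
    return (cpu_delta / sys_delta) * cpus * 100.0 if sys_delta > 0 else 0.0

def pending_requests() -> int:
    """Application-level metric; in reality this would be read from the app itself."""
    return 0  # placeholder

def scale_out_if_needed() -> None:
    instances = client.containers.list(filters={"label": APP_LABEL})
    busy = [c for c in instances if container_cpu_percent(c) > CPU_HIGH_PERCENT]
    if instances and (len(busy) == len(instances) or pending_requests() > QUEUE_HIGH):
        client.containers.run(IMAGE, detach=True, labels={"app": "iot-service"})
        print("scaled out: started one more application container")

if __name__ == "__main__":
    scale_out_if_needed()
```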
Information Integration and Web-based Applications & Services | 2016
Vlado Stankovski; Jernej Trnkoczy; Salman Taherizadeh; Matej Cigale
Computer Software and Applications Conference | 2018
Salman Taherizadeh; Blaz Novak; Marija Komatar; Marko Grobelnik