Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Paul Martin is active.

Publication


Featured research published by Paul Martin.


Procedia Computer Science | 2015

Developing and Operating Time Critical Applications in Clouds: The State of the Art and the SWITCH Approach

Zhiming Zhao; Paul Martin; Junchao Wang; Ari Taal; Andrew Clifford Jones; Ian Taylor; Vlado Stankovski; Ignacio Garcia Vega; George Suciu; Alexandre Ulisses; Cees de Laat

Cloud environments can provide virtualized, elastic, controllable and high quality on-demand services for supporting complex distributed applications. However, the engineering methods and software tools used for developing, deploying and executing classical time critical applications do not, as yet, account for the programmability and controllability provided by clouds, and so time critical applications cannot yet benefit from the full potential of cloud technology. This paper reviews the state of the art of technologies involved in developing time critical cloud applications, and presents the approach of a recently funded EU H2020 project: the Software Workbench for Interactive, Time Critical and Highly self-adaptive cloud applications (SWITCH). SWITCH aims to improve the existing development and execution model of time critical applications by introducing a novel conceptual model, the application-infrastructure co-programming and control model, in which application QoS and QoE, together with the programmability and controllability of cloud environments, are included in the complete application lifecycle.


ACM Computing Surveys | 2017

Scientific Workflows: Moving Across Paradigms

Chee Sun Liew; Malcolm P. Atkinson; Michelle Galea; Tan Fong Ang; Paul Martin; Jano van Hemert

Modern scientific collaborations have opened up the opportunity to solve complex problems that require both multidisciplinary expertise and large-scale computational experiments. These experiments typically consist of a sequence of processing steps that need to be executed on selected computing platforms. Execution poses a challenge, however, due to (1) the complexity and diversity of applications, (2) the diversity of analysis goals, (3) the heterogeneity of computing platforms, and (4) the volume and distribution of data. A common strategy to make these in silico experiments more manageable is to model them as workflows and to use a workflow management system to organize their execution. This article looks at the overall challenge posed by a new order of scientific experiments and the systems they need to be run on, and examines how this challenge can be addressed by workflows and workflow management systems. It proposes a taxonomy of workflow management system (WMS) characteristics, including aspects previously overlooked. This frames a review of prevalent WMSs used by the scientific community, elucidates their evolution to handle the challenges arising with the emergence of the “fourth paradigm,” and identifies research needed to maintain progress in this area.


Future Generation Computer Systems | 2017

Planning virtual infrastructures for time critical applications with multiple deadline constraints

Junchao Wang; A. Taal; Paul Martin; Yang Hu; Huan Zhou; Jianmin Pang; Cees de Laat; Zhiming Zhao

Executing time critical applications within cloud environments while satisfying execution deadlines and response time requirements is challenging due to the difficulty of securing guaranteed performance from the underlying virtual infrastructure. Cost-effective solutions for hosting such applications in the Cloud require careful selection of cloud resources and efficient scheduling of individual tasks. Existing solutions for provisioning infrastructures for time constrained applications are typically based on a single global deadline. Many time critical applications however have multiple internal time constraints when responding to new input. In this paper we propose a cloud infrastructure planning algorithm that accounts for multiple overlapping internal deadlines on sets of tasks within an application workflow. In order to better compare with existing work, we adapted the IC-PCP algorithm and then compared it with our own algorithm using a large set of workflows generated at different scales with different execution profiles and deadlines. Our results show that the proposed algorithm can satisfy all overlapping deadline constraints where possible given the resources available, and do so with consistently lower host cost in comparison with IC-PCP.
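To make the notion of multiple overlapping internal deadlines concrete, the following minimal Python sketch (not the planning algorithm from the paper) computes earliest finish times over a workflow DAG and reports which deadline constraints, each covering its own subset of tasks, would be missed; the task graph, runtimes and deadlines are invented for illustration.

    # Illustrative sketch only, not the planning algorithm from the paper.
    # A workflow is a DAG of tasks with estimated runtimes; each deadline
    # constraint covers its own (possibly overlapping) subset of tasks.

    def earliest_finish_times(deps, runtime):
        """deps: task -> set of predecessors; runtime: task -> estimated seconds."""
        finish = {}
        def eft(task):
            if task not in finish:
                start = max((eft(p) for p in deps.get(task, ())), default=0.0)
                finish[task] = start + runtime[task]
            return finish[task]
        for task in runtime:
            eft(task)
        return finish

    def violated_constraints(finish, constraints):
        """constraints: list of (task_set, deadline_seconds); returns the missed ones."""
        return [(tasks, d) for tasks, d in constraints
                if max(finish[t] for t in tasks) > d]

    # Hypothetical workflow: two overlapping deadlines over a four-task DAG.
    deps = {"B": {"A"}, "C": {"A"}, "D": {"B", "C"}}
    runtime = {"A": 2.0, "B": 3.0, "C": 1.0, "D": 2.0}
    constraints = [({"A", "B"}, 6.0), ({"B", "C", "D"}, 6.5)]
    print(violated_constraints(earliest_finish_times(deps, runtime), constraints))

A planner along the lines described in the abstract would then revise the instance selection or task placement until no constraint remains in this violation list.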


International Conference on Cloud Computing | 2016

Fast Resource Co-provisioning for Time Critical Applications Based on Networked Infrastructures

Huan Zhou; Junchao Wang; Yang Hu; Jinshu Su; Paul Martin; Cees de Laat; Zhiming Zhao

Resource provisioning is a key step in the deployment of applications onto clouds. When a datacenter becomes inaccessible or part of the infrastructure crashes, a fast provisioning mechanism is essential for applications, especially time critical applications, to recover quickly from such failures. However, most current solutions focus on the cloud provider's hardware to achieve fast provisioning of cloud resources. This paper proposes a co-provisioning mechanism to partition a customer's cloud resource requests while preserving their connectivity. This mechanism uses a brokering approach that is completely transparent to both the customer and the cloud provider, and specifically considers the network topology. We carry out experiments on an NIaaS (networked infrastructure-as-a-service) platform called ExoGENI. Experimental results and data analysis show that this mechanism is feasible and can dramatically improve the speed of resource provisioning.
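As a rough illustration of the connectivity-preserving partitioning idea (a sketch under invented assumptions, not the broker mechanism evaluated on ExoGENI), the snippet below splits a request graph of virtual machines into connected parts of bounded size by growing each part with a breadth-first search; the VM names and capacity limit are hypothetical.

    # Illustrative sketch only, not the ExoGENI-based mechanism from the paper.
    # Split a virtual-infrastructure request graph into connected parts of at
    # most `capacity` nodes each, so every part can be provisioned as one piece.

    from collections import deque

    def connected_partitions(graph, capacity):
        """graph: node -> set of neighbours; returns a list of connected node sets."""
        unassigned = set(graph)
        parts = []
        while unassigned:
            seed = next(iter(unassigned))
            part, queue = set(), deque([seed])
            while queue and len(part) < capacity:
                node = queue.popleft()
                if node in unassigned:
                    unassigned.discard(node)
                    part.add(node)
                    queue.extend(n for n in graph[node] if n in unassigned)
            parts.append(part)
        return parts

    # Hypothetical request: five VMs and the links between them.
    request = {"vm1": {"vm2", "vm3"}, "vm2": {"vm1"}, "vm3": {"vm1", "vm4"},
               "vm4": {"vm3", "vm5"}, "vm5": {"vm4"}}
    print(connected_partitions(request, capacity=3))

Because every node other than a seed is added as a neighbour of a node already in the same part, each part stays internally connected and could be requested from a single datacenter as one slice.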


Workflows in Support of Large-Scale Science | 2015

Dynamically reconfigurable workflows for time-critical applications

Kieran Evans; Andrew Clifford Jones; Alun David Preece; Francisco Quevedo; David Rogers; Irena Spasic; Ian J. Taylor; Vlado Stankovski; Salman Taherizadeh; Jernej Trnkoczy; George Suciu; Victor Suciu; Paul Martin; Junchao Wang; Zhiming Zhao

Cloud-based applications that depend on time-critical data processing or network throughput require the capability of reconfiguring their infrastructure on demand as and when conditions change. Although the ability to apply quality of service constraints on the current Cloud offering is limited, there are ongoing efforts to change this. One such effort is the European-funded SWITCH project, which aims to provide a programming model and toolkit to help programmers specify quality of service and quality of experience metrics of their distributed application and to provide the means to specify the reconfiguration actions which can be taken to maintain these requirements. In this paper, we present an approach to application reconfiguration by applying a workflow methodology to implement a prototype involving multiple reconfiguration scenarios of a distributed real-time social media analysis application, called Sentinel. We show that by using a lightweight RPC-based workflow approach, we can monitor a live application in real time and spawn dependency-based workflows to reconfigure the underlying Docker containers that implement the distributed components of the application. We propose to use this prototype as the basis for part of the SWITCH workbench, which will support more advanced programmable infrastructures.
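The trigger-and-reconfigure pattern described above can be sketched as follows. This is an illustrative toy, not the SWITCH/Sentinel implementation: the metric, threshold and step names are made up, and the steps merely print the container operations they stand in for, executed in dependency order.

    # Illustrative sketch only, not the SWITCH/Sentinel prototype.
    # A reconfiguration "workflow" is a set of named steps with dependencies;
    # when a monitored metric crosses its threshold, steps run in dependency order.

    def run_workflow(steps, deps):
        """steps: name -> callable; deps: name -> set of prerequisite step names."""
        done = set()
        def run(name):
            if name in done:
                return
            for pre in deps.get(name, ()):
                run(pre)
            steps[name]()
            done.add(name)
        for name in steps:
            run(name)

    # Hypothetical reconfiguration: start a new analysis worker, reroute, retire the old one.
    steps = {
        "start_new_container": lambda: print("start new analysis worker container"),
        "update_load_balancer": lambda: print("point load balancer at new worker"),
        "stop_old_container":   lambda: print("stop old worker container"),
    }
    deps = {"update_load_balancer": {"start_new_container"},
            "stop_old_container": {"update_load_balancer"}}

    cpu_load = 0.93                    # pretend this came from the monitoring system
    if cpu_load > 0.9:                 # threshold crossed: trigger reconfiguration
        run_workflow(steps, deps)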


European Conference on Parallel Processing | 2017

Deadline-Aware Deployment for Time Critical Applications in Clouds

Yang Hu; Junchao Wang; Huan Zhou; Paul Martin; A. Taal; Cees de Laat; Zhiming Zhao

Time critical applications are appealing to deploy in clouds due to the elasticity of cloud resources and their on-demand nature. However, support for deploying application components under strict deployment deadlines is lacking in current cloud providers. This is particularly important for adaptive applications that must automatically and seamlessly scale, migrate, or recover swiftly from failures. A common deployment procedure is to transmit application packages from the application provider to the cloud and install the application there; users must therefore deploy their applications into clouds manually, step by step, with no guarantee regarding deadlines. In this work, we propose a Deadline-aware Deployment System (DDS) for time critical applications in clouds. DDS enables users to automatically deploy applications into clouds. We design bandwidth-aware EDF scheduling algorithms in DDS that minimize the number of deployments that miss their deadlines and maximize the utilization of network bandwidth. In the evaluation, we show that DDS makes full use of the available network bandwidth and significantly reduces the number of missed deadlines during deployment.
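A toy reading of the bandwidth-aware EDF idea (not the DDS scheduler itself) is sketched below: pending deployments are considered in earliest-deadline-first order and admitted while the outgoing link still has enough spare bandwidth to finish each transfer before its deadline; the package sizes, deadlines and link capacity are invented.

    # Toy sketch only, not the DDS scheduler from the paper.
    # Consider pending deployments in EDF order and admit transfers as long
    # as the outgoing link has spare bandwidth to meet each deadline.

    def admit_deployments(requests, link_bandwidth):
        """requests: list of (deadline_s, size_mb, name); returns admitted names."""
        admitted, used = [], 0.0
        for deadline, size, name in sorted(requests):      # EDF order by deadline
            rate_needed = size / deadline                   # MB/s needed to finish in time
            if used + rate_needed <= link_bandwidth:
                used += rate_needed
                admitted.append(name)
        return admitted

    # Hypothetical deployment requests: (deadline seconds, package size MB, name).
    reqs = [(30, 600, "stream-analyser"), (10, 200, "alert-service"), (20, 900, "dashboard")]
    print(admit_deployments(reqs, link_bandwidth=60.0))     # 60 MB/s available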


International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing | 2016

Fast and Dynamic Resource Provisioning for Quality Critical Cloud Applications

Huan Zhou; Yang Hu; Junchao Wang; Paul Martin; Cees de Laat; Zhiming Zhao

As many quality critical applications are migrating to clouds, Quality of Service (QoS) and Quality of Experience (QoE) have become vital properties for cloud applications. A provisioning mechanism that lets the virtual infrastructure recover quickly from sudden failures or adapt to the dynamic properties of applications is therefore essential. However, most current provisioning mechanisms focus on the cloud provider and are developed for specific hardware. This paper proposes a mechanism to partition a customer's cloud resource requests efficiently across multiple domains or clouds, while ensuring that the partitions remain connected with each other. This mechanism exploits networked infrastructure to make dynamic cloud resource provisioning as fast as possible. It works using a broker-based model that is transparent both to the customer and to the cloud provider. It is easy for customers to use and does not force providers to make any changes to their services. Moreover, its dynamic nature makes the provisioned infrastructure better able to recover from failures quickly. We implement the mechanism and carry out experiments on ExoGENI, a networked infrastructure-as-a-service (NIaaS) platform. Comprehensive experimental results and theoretical analysis demonstrate that the mechanism we propose is feasible and can dramatically improve the speed of resource provisioning.


Workflows in Support of Large-Scale Science | 2015

Contemporary challenges for data-intensive scientific workflow management systems

Ryan Mork; Paul Martin; Zhiming Zhao

Data-intensive sciences now represent the forefront of current scientific computing. To handle this Big Data focus, scientists demand enabling technologies that can adapt to the increasingly distributed, collaborative, and exploratory scientific milieu. However, how these challenges have changed the design requirements of scientific workflow management systems (SWMSs) has not been assessed. First, how scientists currently use SWMSs was determined through a comprehensive usage survey examining 1455 research publications from 2013 to July 31st, 2015. To understand how data-intensive scientists are producing impactful research, we further examined usage of two major research clouds, the Open Science Data Cloud (OSDC) and Cornell's Red Cloud. Here, we present a road map for SWMS development for data-intensive sciences. SWMSs are now needed that interconnect diverse software packages while enabling data exploration and multi-user interaction across distributed software and hardware environments.


Networking, Architecture and Storage | 2017

Automatic Collector for Dynamic Cloud Performance Information

Olaf Elzinga; Spiros Koulouzis; A. Taal; Junchao Wang; Yang Hu; Huan Zhou; Paul Martin; Cees de Laat; Zhiming Zhao

When deploying an application in the cloud, a developer often wants to know which of the wide variety of cloud resources is best to use. Most cloud providers only provide static information about different cloud resources, which is often not enough because static information does not take into account the hardware and software being used or the policy applied by the cloud provider. Therefore, dynamic benchmarking of cloud resources is needed to find out how a certain workload is going to behave on a certain instance. However, benchmarking various cloud resources is a time-consuming process, so a tool which automatically benchmarks various cloud resources is of great use. To maximize the effectiveness of such a tool, it is helpful to maintain an up-to-date cloud information catalogue, so that users can share their benchmark results and compare them with those of other users. In this paper, we present the Cloud Performance Collector, a modular cloud benchmarking tool that aims to automatically benchmark a wide variety of applications. To demonstrate the benefit of the tool, we carried out three experiments with three synthetic benchmark applications and one real-world application using the ExoGENI testbed.
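A minimal sketch of the collect-and-catalogue idea (not the Cloud Performance Collector itself) could time a benchmark command on the current instance and append the measurement to a shared JSON catalogue; the instance label, workload command and file name below are placeholders.

    # Illustrative sketch only, not the Cloud Performance Collector itself.
    # Time a benchmark command on the current instance and append the result
    # to a JSON catalogue that users could share and compare.

    import json, subprocess, time
    from pathlib import Path

    def run_benchmark(instance_type, command, catalogue="catalogue.json"):
        start = time.time()
        subprocess.run(command, shell=True, check=True)    # the benchmark workload
        elapsed = time.time() - start
        path = Path(catalogue)
        records = json.loads(path.read_text()) if path.exists() else []
        records.append({"instance_type": instance_type,
                        "command": command,
                        "seconds": round(elapsed, 3)})
        path.write_text(json.dumps(records, indent=2))
        return elapsed

    # Placeholder values: the instance label and workload are invented for illustration.
    run_benchmark("m1.medium", "python -c 'sum(range(10**7))'")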


European Conference on Service-Oriented and Cloud Computing | 2017

Developing, Provisioning and Controlling Time Critical Applications in Cloud

Zhiming Zhao; Paul Martin; Andrew Clifford Jones; Ian J. Taylor; Vlado Stankovski; Guadalupe Flores Salado; George Suciu; Alexandre Ulisses; Cees de Laat

Quality constraints on time critical applications require high-performance supporting infrastructure and sophisticated optimisation mechanisms for developing and integrating system components. The lack of software development tools, and in particular of cloud-oriented programming and control models, makes the development and operation of time critical cloud applications difficult and costly. The SWITCH project (Software Workbench for Interactive, Time Critical and Highly self-adaptive Cloud applications) addresses the urgent industrial need for developing and executing time critical applications in Clouds. The primary users of SWITCH are Cloud application developers who wish to design and develop elastic, time-critical applications for the federated Cloud. By using SWITCH and its services they can discover appropriate infrastructures, choreograph their applications and QoS/QoE dependencies, and configure their applications for execution. They can choose where to deploy these applications using a specific target infrastructure (e.g. an appropriately selected Cloud provider). They can also manage and monitor their running applications so that they are always running optimally.

Collaboration


Dive into Paul Martin's collaborations.

Top Co-Authors

Zhiming Zhao, University of Amsterdam
Cees de Laat, University of Amsterdam
Junchao Wang, University of Amsterdam
Huan Zhou, University of Amsterdam
Yang Hu, University of Amsterdam
A. Taal, University of Amsterdam
George Suciu, Politehnica University of Bucharest
Wenchao Jiang, Guangdong University of Technology
Paola Grosso, University of Amsterdam