
Publication


Featured research published by Javier Diaz-Montes.


IEEE Cloud Computing | 2017

Mobility-Aware Application Scheduling in Fog Computing

Luiz F. Bittencourt; Javier Diaz-Montes; Rajkumar Buyya; Omer Farooq Rana; Manish Parashar

Fog computing provides a distributed infrastructure at the edges of the network, resulting in low-latency access and faster response to application requests when compared to centralized clouds. With this new level of computing capacity introduced between users and the data center-based clouds, new forms of resource allocation and management can be developed to take advantage of the Fog infrastructure. A wide range of applications with different requirements run on end-user devices, and with the popularity of cloud computing many of them rely on remote processing or storage. As clouds are primarily delivered through centralized data centers, such remote processing/storage usually takes place at a single location that hosts user applications and data. The distributed capacity provided by Fog computing allows execution and storage to be performed at different locations. The combination of distributed capacity, the range and types of user applications, and the mobility of smart devices requires resource management and scheduling strategies that take all of these factors into account. We analyze the scheduling problem in Fog computing, focusing on how user mobility can influence application performance and how three different scheduling policies, namely concurrent, FCFS, and delay-priority, can be used to improve execution based on application characteristics.
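The three policies named in the abstract can be pictured as different orderings of a fog node's request queue. The sketch below is a minimal illustration, assuming a toy Task record with an arrival time and a delay-tolerance field; the field names and tie-breaking rules are illustrative assumptions, not the paper's model.

```python
from dataclasses import dataclass

@dataclass
class Task:
    app_id: str
    arrival: float          # time the request reached the fog node
    delay_tolerance: float  # how long the application can tolerate waiting

def fcfs(tasks):
    """First-come-first-served: order strictly by arrival time."""
    return sorted(tasks, key=lambda t: t.arrival)

def delay_priority(tasks):
    """Delay-priority: serve the most delay-sensitive applications first,
    breaking ties by arrival time."""
    return sorted(tasks, key=lambda t: (t.delay_tolerance, t.arrival))

def concurrent(tasks):
    """Concurrent: admit every request immediately and let them share the
    fog node's capacity, so no queueing order is imposed."""
    return list(tasks)

queue = [
    Task("backup-sync",     arrival=0.0, delay_tolerance=30.0),
    Task("video-analytics", arrival=0.1, delay_tolerance=0.5),
    Task("ar-overlay",      arrival=0.2, delay_tolerance=0.1),
]
print([t.app_id for t in fcfs(queue)])            # arrival order
print([t.app_id for t in delay_priority(queue)])  # most latency-sensitive first
```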


IEEE/ACM International Conference on Utility and Cloud Computing | 2015

Docker containers across multiple clouds and data centers

Moustafa AbdelBaky; Javier Diaz-Montes; Manish Parashar; Merve Unuvar; Malgorzata Steinder

Emerging lightweight cloud technologies, such as Docker containers, are gaining wide traction in IT because they allow users to deploy applications in any environment faster and more efficiently than using virtual machines. However, current Docker-based container deployment solutions are aimed at managing containers at a single site, which limits their capabilities. As more users look to adopt Docker containers in dynamic, heterogeneous environments, the ability to deploy and effectively manage containers across multiple clouds and data centers becomes of utmost importance. In this paper, we propose a prototype framework, called C-Ports, that enables the deployment and management of Docker containers across multiple hybrid clouds and traditional clusters while taking into consideration user and resource provider objectives and constraints. The framework leverages a constraint-programming model for resource selection and uses CometCloud to allocate/deallocate resources as well as to deploy containers on top of these resources. Our prototype has been effectively used to deploy and manage containers in a dynamic federation composed of five clouds and two clusters.
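As a rough illustration of constraint-driven placement, the sketch below filters a hypothetical site catalogue against a container's resource constraints and picks the cheapest feasible site. The paper's framework uses a constraint-programming model over CometCloud; this brute-force filter merely stands in for that idea, and every site name and field is made up.

```python
# Hypothetical site catalogue; every name and field is an illustrative assumption.
SITES = [
    {"name": "cloud-a",   "free_cpus": 16,  "free_mem_gb": 64,  "cost": 0.09, "region": "us-east"},
    {"name": "cloud-b",   "free_cpus": 4,   "free_mem_gb": 8,   "cost": 0.04, "region": "eu-west"},
    {"name": "cluster-1", "free_cpus": 128, "free_mem_gb": 512, "cost": 0.00, "region": "us-east"},
]

def select_site(cpus, mem_gb, allowed_regions=None):
    """Return the cheapest site that satisfies the container's constraints."""
    feasible = [
        s for s in SITES
        if s["free_cpus"] >= cpus
        and s["free_mem_gb"] >= mem_gb
        and (allowed_regions is None or s["region"] in allowed_regions)
    ]
    if not feasible:
        raise RuntimeError("no site satisfies the constraints")
    return min(feasible, key=lambda s: s["cost"])

print(select_site(cpus=8, mem_gb=16, allowed_regions={"us-east"})["name"])  # cluster-1
```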


BMC Bioinformatics | 2014

Content-based histopathology image retrieval using CometCloud

Xin Qi; Daihou Wang; Ivan Rodero; Javier Diaz-Montes; Rebekah H. Gensure; Fuyong Xing; Hua Zhong; Lauri Goodell; Manish Parashar; David J. Foran; Lin Yang

Background: The development of digital imaging technology is creating extraordinary levels of accuracy that provide support for improved reliability in different aspects of the image analysis, such as content-based image retrieval, image segmentation, and classification. This has dramatically increased the volume and rate at which data are generated. Together these facts make querying and sharing non-trivial and render centralized solutions unfeasible. Moreover, in many cases this data is often distributed and must be shared across multiple institutions, requiring decentralized solutions. In this context, a new generation of data/information driven applications must be developed to take advantage of the national advanced cyber-infrastructure (ACI), which enables investigators to seamlessly and securely interact with information/data that is distributed across geographically disparate resources. This paper presents the development and evaluation of a novel content-based image retrieval (CBIR) framework. The methods were tested extensively using both peripheral blood smears and renal glomeruli specimens. The datasets and performance were evaluated by two pathologists to determine the concordance.

Results: The CBIR algorithms that were developed can reliably retrieve the candidate image patches exhibiting intensity and morphological characteristics that are most similar to a given query image. The methods described in this paper are able to reliably discriminate among subtle staining differences and spatial pattern distributions. By integrating a newly developed dual-similarity relevance feedback module into the CBIR framework, the CBIR results were improved substantially. By aggregating the computational power of high performance computing (HPC) and cloud resources, we demonstrated that the method can be successfully executed in minutes on the Cloud compared to weeks using standard computers.

Conclusions: In this paper, we present a set of newly developed CBIR algorithms and validate them using two different pathology applications, which are regularly evaluated in the practice of pathology. Comparative experimental results demonstrate excellent performance throughout the course of a set of systematic studies. Additionally, we present and evaluate a framework to enable the execution of these algorithms across distributed resources. We show how parallel searching of content-wise similar images in the dataset significantly reduces the overall computational time to ensure the practical utility of the proposed CBIR algorithms.
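The partition-and-merge pattern behind the parallel search can be sketched as follows, assuming image patches have already been reduced to feature vectors. The similarity measure, data layout, and use of local process workers are placeholders for illustration, not the paper's CBIR algorithms or its CometCloud deployment.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def similarity(query, vec):
    """Toy similarity: negative Euclidean distance between feature vectors."""
    return -float(np.linalg.norm(query - vec))

def search_partition(args):
    query, partition, k = args            # partition: list of (patch_id, vector)
    scored = sorted(((similarity(query, v), pid) for pid, v in partition), reverse=True)
    return scored[:k]

def federated_top_k(query, partitions, k=5):
    """Score each partition in parallel, then merge the local top-k lists."""
    with ProcessPoolExecutor() as pool:
        local = pool.map(search_partition, [(query, p, k) for p in partitions])
    return sorted((hit for part in local for hit in part), reverse=True)[:k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    parts = [[(f"site{i}-patch{j}", rng.random(64)) for j in range(200)] for i in range(4)]
    print(federated_top_k(rng.random(64), parts, k=3))
```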


IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing | 2015

Integrating Software Defined Networks within a Cloud Federation

Ioan Petri; Mengsong Zou; Ali Reza Zamani; Javier Diaz-Montes; Omer Farooq Rana; Manish Parashar

Cloud computing has generally involved the use of specialist data centres to support computation and data storage at a central site (or a limited number of sites). The motivation for this has come from the need to provide economies of scale (and subsequent reduction in cost) for supporting large scale computation for multiple user applications over (generally) a shared, multi-tenancy infrastructure. The use of such infrastructures requires moving data to a central location (data may be pre-staged to such a location prior to processing using terrestrial delivery channels and does not always require the use of a network-based transfer), undertaking processing on the data, and subsequently enabling users to download results of analysis. We extend this model using software defined networks (SDNs), whereby capability within the network can be used to support in-transit processing while data is in movement from source to destination. Using a smart building infrastructure scenario, consisting of sensors and actuators embedded within a built environment, we describe how an SDN-based architecture can be used to support real time data processing. This significantly influences the processing times to support energy optimisation of the building and reduces costs. We describe an architecture for such a distributed, multi-layered Cloud system and discuss a prototype that has been implemented using the CometCloud system, deployed across three sites in the UK and the US. We validate the prototype using data from sensors within a sports facility, making use of EnergyPlus.


IEEE Transactions on Services Computing | 2017

Deadline constrained video analysis via in-transit computational environments

Ali Reza Zamani; Mengsong Zou; Javier Diaz-Montes; Ioan Petri; Omer Farooq Rana; Ashiq Anjum; Manish Parashar

Combining edge processing (at the data capture site) with analysis carried out while data is en route from the capture site to a data center offers a variety of different processing models. Such in-transit nodes include network data centers that have generally been used to support content distribution (providing support for data multicast and caching), but have recently started to offer user-defined programmability through Software Defined Networks (SDN) capability (e.g., OpenFlow) and Network Function Virtualization (NFV). We demonstrate how this multi-site computational capability can be aggregated to support video analytics, with Quality of Service and cost constraints (e.g., latency-bound analysis). The use of SDN technology enables separation of the data path from the control path, enabling in-network processing capabilities to be supported as data is migrated across the network. We propose to leverage SDN capability to gain control over the data transport service with the purpose of dynamically establishing data routes such that we can opportunistically exploit the latent computational capabilities located along the network path. Using a number of scenarios, we demonstrate the benefits and limitations of this approach for video analysis, comparing this with the baseline scenario of undertaking all such analysis at a data center located at the core of the infrastructure.
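A minimal sketch of the underlying route-selection trade-off, under the simplifying assumption that any work completed in transit no longer has to run at the core data center. The candidate paths, rates, and deadline are hypothetical numbers for illustration only, not the paper's scheduling model.

```python
# Hypothetical paths from the capture site to the core data centre; the latency
# and in-transit capacity figures are illustrative assumptions.
PATHS = [
    {"name": "direct",      "transfer_s": 2.0, "transit_ops": 0},
    {"name": "via-edge-dc", "transfer_s": 3.5, "transit_ops": 400},
    {"name": "via-two-dcs", "transfer_s": 5.0, "transit_ops": 900},
]

def pick_path(total_ops, core_ops_per_s, deadline_s):
    """Choose the path with the earliest finish time that meets the deadline,
    assuming work done in transit no longer needs to run at the core."""
    best = None
    for p in PATHS:
        remaining = max(total_ops - p["transit_ops"], 0)
        finish = p["transfer_s"] + remaining / core_ops_per_s
        if finish <= deadline_s and (best is None or finish < best[0]):
            best = (finish, p["name"])
    return best  # None means no route can meet the deadline

print(pick_path(total_ops=1000, core_ops_per_s=200, deadline_s=8.0))  # (5.5, 'via-two-dcs')
```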


International Conference on Cloud and Autonomic Computing | 2014

Extending CometCloud to Process Dynamic Data Streams on Heterogeneous Infrastructures

Rafael Tolosana-Calasanz; Javier Diaz-Montes; Omer Farooq Rana; Manish Parashar

Coordination of multiple concurrent data stream processing, carried out through a distributed Cloud infrastructure, is described. The coordination (control) is carried out through the use of a Reference net (a particular type of Petri net) based interpreter, implemented alongside the CometCloud system. One of the benefits of this approach is that the model can also be executed directly to support the coordination action. The proposed approach supports the simultaneous processing of data streams and enables dynamic scale-up of heterogeneous computational resources on demand, while meeting the particular quality of service requirements (throughput) for each data stream. We assume that the processing to be applied to each data stream is known a priori. The workflow interpreter monitors the arrival rate and throughput of each data stream, as a consequence of carrying out the execution using CometCloud. We demonstrate the use of the control strategy using two key actions, allocating and deallocating resources dynamically based on the number of tasks waiting to be executed (using a predefined threshold). However, a variety of other control actions can also be supported and are described in this work. Evaluation is carried out using a distributed CometCloud deployment, where the allocation of new resources can be based on a number of different criteria, such as: (i) differences between sites, i.e., based on the types of resources supported (e.g., GPU vs. CPU only, FPGAs, etc.); (ii) cost of execution; (iii) failure rate and likely resilience; etc.
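The threshold-based allocate/deallocate action described above can be pictured as a small control function over the queue of waiting tasks. The thresholds and the per-worker normalisation below are illustrative assumptions, not the rule encoded in the paper's Reference net.

```python
def scale_decision(waiting_tasks, active_workers,
                   scale_up_threshold=20, scale_down_threshold=5,
                   min_workers=1, max_workers=64):
    """Return +1 to allocate a resource, -1 to deallocate one, or 0 to do
    nothing, based on how many tasks are waiting per active worker."""
    per_worker = waiting_tasks / max(active_workers, 1)
    if per_worker > scale_up_threshold and active_workers < max_workers:
        return 1
    if per_worker < scale_down_threshold and active_workers > min_workers:
        return -1
    return 0

print(scale_decision(waiting_tasks=120, active_workers=4))  #  1 -> allocate
print(scale_decision(waiting_tasks=3,   active_workers=4))  # -1 -> deallocate
```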


IEEE International Conference on Cloud Computing Technology and Science | 2015

Market Models for Federated Clouds

Ioan Petri; Javier Diaz-Montes; Mengsong Zou; Tom Beach; Omer Farooq Rana; Manish Parashar

Multi-cloud systems have enabled resource and service providers to co-exist in a market where the relationship between clients and services depends on the nature of an application and can be subject to a variety of different Quality of Service (QoS) constraints. Deciding whether a cloud provider should host (or finds it profitable to host) a service in the long term would be influenced by parameters such as the service price, the QoS guarantees required by customers, the deployment cost (taking into account both the cost of resource provisioning and operational expenditure, e.g., energy costs) and the constraints over which these guarantees should be met. In a federated cloud system, users can combine specialist capabilities offered by a limited number of providers at particular cost bands, such as availability of specialist co-processors and software libraries. In addition, federation also enables applications to be scaled on demand and limits lock-in to the capabilities of a particular provider. We devise a market model to support federated clouds and investigate its efficiency in two real application scenarios: (i) energy optimisation in built environments and (ii) cancer image processing, both requiring significant computational resources to execute simulations. We describe and evaluate the establishment of such an application-based federation and identify a cost-decision based mechanism to determine when tasks should be outsourced to external sites in the federation. The following contributions are provided: (i) understanding the criteria for accessing sites within a federated cloud dynamically, taking into account factors such as performance, cost, user perceived value, and specific application requirements; (ii) developing and deploying a cost-based federated cloud framework for supporting real applications over three federated sites at Cardiff (UK), Rutgers and Indiana (USA); and (iii) a performance analysis of the application scenarios to determine how task submission could be supported across these three sites, subject to particular revenue targets.
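The cost-decision mechanism for outsourcing can be sketched as a simple comparison of deadline feasibility and margin across federation sites. The site names echo the three-site deployment described above, but the prices, rates, and decision rule below are invented for illustration and are not the paper's market model.

```python
# The site names follow the federation described above (Cardiff, Rutgers,
# Indiana); all prices, rates, and the decision rule are illustrative assumptions.
SITES = {
    "cardiff": {"price_per_task": 0.00, "tasks_per_hour": 50},   # local site
    "rutgers": {"price_per_task": 0.02, "tasks_per_hour": 120},
    "indiana": {"price_per_task": 0.05, "tasks_per_hour": 200},
}

def choose_site(tasks, revenue_per_task, deadline_hours, local="cardiff"):
    """Stay local if the deadline can be met locally; otherwise outsource to the
    cheapest remote site that meets the deadline with a positive margin."""
    if tasks / SITES[local]["tasks_per_hour"] <= deadline_hours:
        return local
    candidates = [
        (s["price_per_task"], name) for name, s in SITES.items()
        if name != local
        and tasks / s["tasks_per_hour"] <= deadline_hours
        and s["price_per_task"] < revenue_per_task
    ]
    return min(candidates)[1] if candidates else None

print(choose_site(tasks=600, revenue_per_task=0.06, deadline_hours=4))  # indiana
```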


Proceedings of the ACM Cloud and Autonomic Computing Conference | 2013

Enabling autonomic computing on federated advanced cyberinfrastructures

Javier Diaz-Montes; Mengsong Zou; Ivan Rodero; Manish Parashar

We present a federation model to support the dynamic federation of resources, along with autonomic management mechanisms that coordinate multiple workflows to use resources based on their objectives. We illustrate the effectiveness of the proposed framework and autonomic mechanisms through the discussion of representative use case application scenarios, and from these experiences we argue that such a federation model can support new types of application formulations.


IEEE International Conference on Cloud Computing Technology and Science | 2018

Supporting Data-Intensive Workflows in Software-Defined Federated Multi-Clouds

Javier Diaz-Montes; Manuel Diaz-Granados; Mengsong Zou; Shu Tao; Manish Parashar

Cloud computing is emerging as a viable platform for scientific exploration. Elastic and on-demand access to resources (and other services), the abstraction of “unlimited” resources, and attractive pricing models provide incentives for scientists to move their workflows into clouds. Generalizing these concepts beyond a single virtualized datacenter, it is possible to create federated marketplaces where different types of resources (e.g., clouds, HPC grids, supercomputers) that may be geographically distributed are collectively exposed as a single elastic infrastructure. This presents opportunities for optimizing the execution of application workflows with heterogeneous and dynamic requirements, and for tackling larger scale problems. In this paper, we introduce a framework to manage the end-to-end execution of data-intensive application workflows in a dynamic software-defined resource federation. This framework enables the autonomic execution of workflows by elastically provisioning an appropriate set of resources that meet application requirements, and by adapting this set of resources at runtime as the requirements change. It also allows users to customize the scheduling policies that drive the way resources are federated and used. To demonstrate the benefits of our approach, we study the execution of two different data-intensive scientific workflows in a multi-cloud federation using different policies and objective functions.


International Journal of High Performance Computing Applications | 2018

Software-defined environments for science and engineering

Moustafa AbdelBaky; Javier Diaz-Montes; Manish Parashar

Service-based access models coupled with recent advances in application deployment technologies are enabling opportunities for realizing highly customized software-defined environments that can achieve new levels of efficiency and can support emerging dynamic and data-driven applications. However, achieving this vision requires new models that can support dynamic (and opportunistic) compositions of infrastructure services, which can adapt to evolving application needs and the state of resources. In this article, we present a programmable dynamic infrastructure service composition approach that uses software-defined environment concepts to control the composition process. The resulting software-defined infrastructure service composition adapts to meet objectives and constraints set by the users, applications, and/or resource providers. We present and compare two different approaches for programming resources and controlling the service composition, one based on a rule engine and another that leverages a constraint programming model for resource description. We present the design and prototype implementation of such a software-defined service composition and demonstrate its operation through a use case where multiple views of heterogeneous, geographically distributed services are aggregated on demand based on user and resource provider specifications. The resulting compositions are used to run different bioinformatics workloads, which are encapsulated inside Docker containers. Each view independently adapts to various constraints and events that are imposed on the system while minimizing the workload completion time.
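To make the rule-engine flavour of the comparison concrete, the sketch below maps hypothetical bioinformatics workload labels to required resource attributes and derives the "view" of the federation each workload may use. Every rule, attribute, and resource name is an assumption for illustration; the constraint-programming alternative would instead express the same requirements declaratively to a solver.

```python
# A toy rule engine: each rule maps a workload label to the resource attributes
# it requires. All labels, attributes, and site names are hypothetical.
RULES = [
    {"workload": "alignment",  "require": {"accelerator": "gpu"}},
    {"workload": "assembly",   "require": {"min_mem_gb": 256}},
    {"workload": "annotation", "require": {}},   # any resource will do
]

def satisfies(resource, require):
    if "accelerator" in require and resource.get("accelerator") != require["accelerator"]:
        return False
    if "min_mem_gb" in require and resource.get("mem_gb", 0) < require["min_mem_gb"]:
        return False
    return True

def compose_view(workload, resources):
    """Return the slice of the federation this workload is allowed to see."""
    for rule in RULES:
        if rule["workload"] == workload:
            return [r for r in resources if satisfies(r, rule["require"])]
    return []

pool = [
    {"name": "site-a", "accelerator": "gpu", "mem_gb": 128},
    {"name": "site-b", "mem_gb": 512},
]
print([r["name"] for r in compose_view("assembly", pool)])  # ['site-b']
```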

Collaboration


An overview of Javier Diaz-Montes's collaborations.

Top Co-Authors

Luiz F. Bittencourt

State University of Campinas
