Rafael Tolosana-Calasanz
University of Zaragoza
Publications
Featured research published by Rafael Tolosana-Calasanz.
Journal of Computer and System Sciences | 2012
Rafael Tolosana-Calasanz; José Ángel Bañares; Congduc Pham; Omer Farooq Rana
The ability to support Quality of Service (QoS) constraints is an important requirement in some scientific applications. With the increasing use of Cloud computing infrastructures, where access to resources is shared, dynamic and provisioned on-demand, identifying how QoS constraints can be supported becomes an important challenge. However, access to dedicated resources is often not possible in existing Cloud deployments and limited QoS guarantees are provided by many commercial providers (often restricted to error rate and availability, rather than particular QoS metrics such as latency or access time). We propose a workflow system architecture which enforces QoS for the simultaneous execution of multiple scientific workflows over a shared infrastructure (such as a Cloud environment). Our approach involves multiple pipeline workflow instances, with each instance having its own QoS requirements. These workflows are composed of a number of stages, with each stage being mapped to one or more physical resources. A stage involves a combination of data access, computation and data transfer capability. A token bucket-based data throttling framework is embedded into the workflow system architecture. Each workflow instance stage regulates the amount of data that is injected into the shared resources, allowing for bursts of data to be injected while at the same time providing isolation of workflow streams. We demonstrate our approach by using the Montage workflow, and develop a Reference net model of the workflow.
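The token bucket-based throttling described above can be sketched in a few lines. This is a minimal illustration of the general token bucket technique, not the paper's implementation; the class and parameter names are invented for the example, and each workflow stream would hold its own bucket so that bursts from one stream do not affect another.

```python
import time

class TokenBucket:
    """Token bucket regulating how much data a workflow stage may inject."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens (e.g. data items) refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def try_consume(self, n):
        """Return True if n tokens are available, i.e. data may be injected now."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

# One bucket per workflow stream isolates its bursts from other streams.
bucket = TokenBucket(rate=100, capacity=50)
assert bucket.try_consume(50)      # a full burst passes immediately
assert not bucket.try_consume(50)  # further data is throttled until refill
```

A stage that fails `try_consume` would buffer or delay its data, which is what provides isolation between concurrently executing workflow instances.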
international conference on developments in esystems engineering | 2013
Thar Baker; Yanik Ngoko; Rafael Tolosana-Calasanz; Omer Farooq Rana; Martin Randles
The ever-increasing density of cloud computing users, services, and data centres has led to significant increases in network traffic and in the energy consumed by the supporting infrastructure, e.g. extra servers, switches and routers, which is required to respond quickly and effectively to users' requests. Transferring data via a high-bandwidth connection between data centres and cloud users consumes even larger amounts of energy than processing and storing the data in a cloud data centre, and hence produces high carbon dioxide emissions. This power consumption is especially significant when transferring data into a data centre located relatively far from the user's geographical location. Thus, it becomes highly necessary to find the lowest-energy route between the user and the designated data centre, while ensuring that the user's requirements, e.g. response time, are met. This paper proposes a high-end autonomic meta-director framework that finds the most energy-efficient route to the green data centre by utilising a linear programming approach. The framework is first formalised in the situation calculus, and then evaluated against a shortest-path algorithm that minimises the number of nodes traversed.
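The baseline idea of routing by energy cost rather than hop count can be sketched with a standard shortest-path computation over per-link energy weights. The topology, node names, and costs below are hypothetical, and the paper's actual framework uses linear programming and the situation calculus rather than this plain Dijkstra sketch.

```python
import heapq

def lowest_energy_route(graph, src, dst):
    """Dijkstra over per-link energy costs (e.g. joules per transferred unit)."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, energy in graph.get(u, []):
            nd = d + energy
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the route from the predecessor map.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Hypothetical topology: user -> intermediate routers -> data centre
graph = {
    "user": [("r1", 3.0), ("r2", 1.0)],
    "r1":   [("dc", 1.0)],
    "r2":   [("dc", 4.0)],
}
path, energy = lowest_energy_route(graph, "user", "dc")
# The route via r1 costs 4.0 units; the fewer-energy path wins even
# though the first hop to r2 is cheaper.
```

A response-time constraint, as the abstract requires, would be checked against each candidate route before accepting it.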
Concurrency and Computation: Practice and Experience | 2011
Rafael Tolosana-Calasanz; José Ángel Bañares; Omer Farooq Rana
Interest in data streaming within scientific workflows has increased significantly over recent years, mainly due to the emergence of data-driven applications. Such applications can include data streaming from sensors and data coupling between scientific simulations. To support resource management for enacting such streaming-based workflows, autonomic computing techniques for transmission have been combined with in-transit processing, so that data elements may be processed in advance, en route, prior to arrival at the destination. We propose the integration of an autonomic data streaming service (ADSS) with in-transit processing into a workflow specification. This integration may imply that the associated runtime resource allocation depends on environmental conditions and can change across different enactments of the same workflow. In our proposal, workflow specifications are independent of the constraints imposed by resource allocation. We express our solutions in terms of Reference nets. We also implement an ADSS utilizing a timed Reference net simulation for predicting future states of the ADSS. There are two advantages: first, the Reference net that implements the ADSS and the timed model coincide; and second, the token distribution obtained from the Petri net implementation can be utilized to better understand the demand for particular types of resources in the system.
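The idea of reading resource demand off a net's token distribution can be illustrated with a minimal place/transition net. This is only the plain Petri net core; Reference nets, which the paper actually uses, extend this with nested nets and synchronous channels. All names and the two-stage structure are invented for the example.

```python
class PetriNet:
    """Minimal place/transition net: marking maps places to token counts."""

    def __init__(self, marking):
        self.marking = dict(marking)
        self.transitions = []  # list of (inputs, outputs) dicts: place -> weight

    def add_transition(self, inputs, outputs):
        self.transitions.append((inputs, outputs))

    def enabled(self, t):
        inputs, _ = self.transitions[t]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, t):
        inputs, outputs = self.transitions[t]
        assert self.enabled(t)
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# A two-stage stream: buffered data is processed in transit, then delivered.
net = PetriNet({"buffer": 3, "processed": 0, "delivered": 0})
net.add_transition({"buffer": 1}, {"processed": 1})     # t0: in-transit step
net.add_transition({"processed": 1}, {"delivered": 1})  # t1: final delivery
while net.enabled(0) or net.enabled(1):
    net.fire(0) if net.enabled(0) else net.fire(1)
# All three tokens end up in "delivered"; at any intermediate point, the
# marking shows where demand (buffered vs. in-processing data) sits.
```

In the paper's setting, a timed version of such a net is simulated forward to predict future states of the ADSS.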
Future Generation Computer Systems | 2016
Rafael Tolosana-Calasanz; José Ángel Bañares; Congduc Pham; Omer Farooq Rana
The number of applications that need to process data continuously over long periods of time has increased significantly over recent years. The emerging Internet of Things and Smart Cities scenarios also confirm the requirement for real-time, large-scale data processing. When data from multiple sources are processed over a shared distributed computing infrastructure, it is necessary to provide some Quality of Service (QoS) guarantees for each data stream, specified in a Service Level Agreement (SLA). SLAs identify the price that a user must pay to achieve the required QoS, and the penalty that the provider will pay the user in case of QoS violation. Assuming maximization of revenue as a Cloud provider's objective, the provider must decide which streams to accept for storage and analysis, and how many resources to allocate to each stream. When real-time requirements demand a rapid reaction, dynamic resource provisioning policies and mechanisms may not be useful, since the delays and overheads incurred might be too high. Alternatively, idle resources that were initially allocated to other streams could be re-allocated, avoiding subsequent penalties. In this paper, we propose a system architecture for supporting QoS for concurrent data streams, composed of self-regulating nodes. Each node features an envelope process for regulating and controlling data access, and a resource manager that enables resource allocation and selective SLA violations while maximizing revenue. Our resource manager, based on a shared token bucket, enables: (i) the re-distribution of unused resources amongst data streams; and (ii) a dynamic re-allocation of resources to streams likely to generate greater profit for the provider. We extend previous work by providing a Petri net-based model of system components, and we evaluate our approach on an OpenNebula-based Cloud infrastructure.
We provide a system for simultaneous bursty data streams on shared Clouds. We enforce QoS based on a profit-based resource management model. We provide real experiments within an OpenNebula-based data centre.
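The profit-driven re-allocation of a shared token pool can be sketched as a greedy policy: serve the most profitable streams first, and let the least profitable ones absorb any shortfall (the "selective SLA violations" the abstract mentions). The stream names, demands, and per-token profits below are hypothetical, and the paper's resource manager is richer than this one-shot sketch.

```python
def reallocate(capacity, demands, profits):
    """Greedy split of a shared token pool: streams are served in
    decreasing order of per-token profit; leftover demand is unmet."""
    order = sorted(demands, key=lambda s: profits[s], reverse=True)
    allocation = {}
    remaining = capacity
    for stream in order:
        allocation[stream] = min(demands[stream], remaining)
        remaining -= allocation[stream]
    return allocation

# Hypothetical per-interval token demands and per-token profits
demands = {"s1": 40, "s2": 30, "s3": 50}
profits = {"s1": 0.5, "s2": 1.2, "s3": 0.8}
alloc = reallocate(capacity=80, demands=demands, profits=profits)
# s2 and s3 (the more profitable streams) are fully served;
# s1's SLA is selectively violated because the pool is exhausted.
```

Re-running this each control interval with updated demands captures the re-distribution of unused resources between streams.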
ieee acm international conference utility and cloud computing | 2014
Marcio R. M. Assis; Luiz F. Bittencourt; Rafael Tolosana-Calasanz
This position paper brings to discussion the next step in the evolution of Cloud Computing: the organisation of multiple clouds. We summarise concepts related to sets of clouds, called the Inter-Cloud, and its main elements (hybrid clouds, multi-clouds, etc.). We explore in more detail the Cloud Federation, which has stood out as a well-behaved and voluntary organisation of clouds, focusing on its key features, its advantages over other types of cloud organisation, and existing models. Finally, we present a conceptual model of a Cloud Federation organised in layers, based on the relationships among the main segments identified in this kind of organisation.
grid economics and business models | 2013
Thar Baker; Omer Farooq Rana; Radu Calinescu; Rafael Tolosana-Calasanz; José Ángel Bañares
In recent years, the rise and rapid adoption of cloud computing has acted as a catalyst for research in related fields: virtualization, distributed computing and service-oriented computing, to name but a few. Whilst cloud computing technology is rapidly maturing, many of the associated long-standing socio-technical challenges, including the dependability of cloud-based service composition, service manageability and interoperability, remain unsolved. These can be argued to slow down the migration of serious business-critical applications to the cloud model. This paper reports on progress towards the development of a method to generate cloud-based service compositions from requirements metadata. The paper presents a formal approach that uses Situation Calculus to translate service requirements into an Intention Workflow Model (IWM). This IWM is then used to generate autonomic cloud service compositions. The Petshop benchmark is used to illustrate and evaluate the proposed method.
european conference on research and advanced technology for digital libraries | 2006
Rafael Tolosana-Calasanz; José Antonio Álvarez-Robles; Javier Lacasta; Javier Nogueras-Iso; Pedro R. Muro-Medrano; F. Javier Zarazaga-Soria
Geographic metadata quality is one of the most important factors in the performance of Geographic Digital Libraries. After reviewing previous attempts outside the geographic domain, this paper presents early results from a series of experiments towards the development of a quantitative method for quality assessment. The methodology is developed in two phases. First, a list of geographic quality criteria is compiled from several experts in the area. Second, a statistical analysis (a Principal Component Analysis) of a selection of geographic metadata record sets is performed in order to discover the features that correlate with good geographic metadata.
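The second phase, a Principal Component Analysis over records scored against expert criteria, can be sketched with a standard SVD-based PCA. The score matrix below is invented for illustration (the paper's actual record sets and criteria are not reproduced here).

```python
import numpy as np

def principal_components(records, k=2):
    """PCA via SVD: rows are metadata records, columns are expert quality
    criteria; returns the k strongest directions of variation and the
    fraction of variance each one explains."""
    X = records - records.mean(axis=0)      # centre each criterion
    _, s, vt = np.linalg.svd(X, full_matrices=False)
    explained = (s ** 2) / np.sum(s ** 2)   # variance ratio per component
    return vt[:k], explained[:k]

# Hypothetical scores: 5 metadata records x 3 quality criteria
scores = np.array([
    [4.0, 3.5, 2.0],
    [4.5, 3.8, 2.1],
    [1.0, 1.2, 4.0],
    [1.5, 1.0, 3.8],
    [3.0, 2.5, 3.0],
])
components, ratios = principal_components(scores, k=1)
# With strongly correlated criteria like these, the first component
# captures most of the variance and its loadings show which criteria
# move together across records.
```

High-loading criteria on the leading components are the candidates for "features which correlate with good geographic metadata".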
computational science and engineering | 2010
Marco Lackovic; Domenico Talia; Rafael Tolosana-Calasanz; José Ángel Bañares; Omer Farooq Rana
Scientific workflows generally involve the distribution of tasks to distributed resources, which may exist in different administrative domains. The use of distributed resources in this way may lead to faults, and detecting them, identifying them and subsequently correcting them remains an important research challenge. We introduce a fault taxonomy for scientific workflows that may help in conducting a systematic analysis of faults, so that the potential faults that may arise at execution time can be corrected (recovered from). The presented taxonomy is motivated by previous work [4], but has a particular focus on workflow environments (compared to previous work which focused on Grid-based resource management) and demonstrated through its use in Weka4WS.
international conference on move to meaningful internet systems | 2007
Rafael Tolosana-Calasanz; José Ángel Bañares; Pedro Álvarez; Joaquín Ezpeleta
Because of the nature of the Grid, Grid application systems built on traditional software development techniques can only interoperate with Grid services in an ad hoc manner that requires substantial human intervention. In this paper, we introduce Vega, a pure service-oriented Grid workflow system which consists of a set of loosely coupled services co-operating with each other to solve problems. In Vega, the execution flow of its services is isolated from their interactions, and these interactions are explicitly modelled and can be dynamically interpreted at run time.
International Journal of Metadata, Semantics and Ontologies | 2006
Rafael Tolosana-Calasanz; Javier Nogueras-Iso; Rubén Béjar; Pedro R. Muro-Medrano; Francisco Javier Zarazaga-Soria
The tendency of current cataloguing systems is to interchange metadata in XML according to the specific standard required by each user on demand. Furthermore, metadata schemas from different domains are not usually semantically distinct, but overlap and relate to each other in complex ways. As a consequence, semantic interoperability has to deal with the equivalences between those descriptions. There are two main approaches to tackling this problem: solutions based on the use of ontologies, and solutions based on the creation of specific crosswalks for one-to-one mapping. This paper proposes a hierarchical one-to-one mapping solution for improving semantic interoperability.
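The crosswalk idea, mapping each element of one metadata schema onto its counterpart in another, can be sketched as a simple element-name translation. The element names below are hypothetical and only hint at a Dublin Core to ISO 19115-style mapping; the paper's contribution, the hierarchical organisation of such mappings, is not modelled in this flat sketch.

```python
# Hypothetical crosswalk: a one-to-one mapping between element names of
# two schemas (real crosswalks map complete standards, with conditions).
DC_TO_ISO = {
    "title":   "citation.title",
    "creator": "citation.responsibleParty",
    "subject": "descriptiveKeywords",
}

def crosswalk(record, mapping):
    """Translate element names via the mapping; values pass through as-is.
    Elements with no equivalent in the target schema are dropped."""
    return {mapping[k]: v for k, v in record.items() if k in mapping}

dc_record = {"title": "Zaragoza road map", "creator": "IGN", "format": "SHP"}
iso_record = crosswalk(dc_record, DC_TO_ISO)
# "format" has no entry in this crosswalk, so it is lost in translation,
# which is exactly the information-loss problem crosswalk design must manage.
```

A hierarchical scheme would chain such mappings through a common intermediate schema instead of maintaining one crosswalk per pair of standards.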