Joaquín Entrialgo
University of Oviedo
Publications
Featured research published by Joaquín Entrialgo.
Real-Time Systems | 2008
José M. López; José Luis Díaz; Joaquín Entrialgo; Daniel F. García
Exact stochastic analysis of most real-time systems under preemptive priority-driven scheduling is unaffordable in current practice. Even assuming a periodic and independent task model, the exact calculation of the response time distribution of tasks is not possible except for simple task sets. Furthermore, in practice, tasks introduce complexities such as release jitter, blocking in shared resources, etc., which cannot be handled by the periodic independent task set model. In order to solve these problems, exact analysis must be abandoned in favour of an approximated analysis. However, in the real-time field, approximations must not be optimistic, i.e. the deadline miss ratios predicted by the approximated analysis must be greater than or equal to the exact ones. In order to achieve this goal, the concept of pessimism needs to be mathematically defined in the stochastic context, and the pessimistic properties of the analysis carefully derived. This paper provides a mathematical framework for reasoning about stochastic pessimism and for obtaining mathematical properties of the analysis and its approximations. This framework allows us to prove the safety of several proposed approximations and extensions. We analyze and solve some practical problems in the implementation of the stochastic analysis, such as finite-precision arithmetic or the truncation of the probability functions. In addition, we extend the basic model in several ways, such as the inclusion of shared resources, release jitter or non-preemptive sections.
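As a rough, illustrative sketch of the kind of computation the paper reasons about (not the paper's algorithm; the toy task set and function names are assumptions), the fragment below convolves discrete execution-time distributions into a response-time distribution and truncates it while keeping the discarded tail mass, so that the estimated deadline-miss probability can only grow and the approximation stays pessimistic:

```python
import numpy as np

def convolve(a, b):
    """Exact PMF of the sum of two independent discrete random variables
    (array index = discrete time units)."""
    return np.convolve(a, b)

def truncate(pmf, max_len):
    """Keep only the first max_len points of a PMF and return the discarded
    tail mass separately; counting that mass as deadline misses keeps the
    analysis pessimistic (never optimistic)."""
    if len(pmf) <= max_len:
        return pmf, 0.0
    return pmf[:max_len], pmf[max_len:].sum()

def miss_prob(pmf, lost_mass, deadline):
    """Pessimistic estimate of P(response time > deadline)."""
    return pmf[deadline + 1:].sum() + lost_mass

# Toy job: its response time is its own execution time plus the execution
# times of two interfering higher-priority jobs (assumed independent).
own = np.array([0.0, 0.5, 0.5])   # 1 or 2 time units, equally likely
hp1 = np.array([0.0, 0.8, 0.2])
hp2 = np.array([0.0, 0.7, 0.3])

resp = convolve(convolve(own, hp1), hp2)
resp, lost = truncate(resp, max_len=6)
print("pessimistic P(miss, D = 4):", miss_prob(resp, lost, deadline=4))
```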
IEEE Transactions on Services Computing | 2009
Daniel F. García; Javier García; Joaquín Entrialgo; Manuel García; Pablo Valledor; Rodrigo Álvarez García; Antonio M. Campos
Nowadays, enterprises providing services through the Internet often require online services supplied by other enterprises. This entails the cooperation of enterprise servers using Web services technology. The service exchange between enterprises must be carried out with a determined level of quality, which is usually established in a service level agreement (SLA). However, fulfilling SLAs is not an easy task and requires equipping the servers with special mechanisms that control the quality of the services supplied. The first contribution of this research work is the analysis and definition of the main requirements that these control mechanisms must fulfil. The second contribution is the design of a control mechanism that fulfils these requirements and overcomes numerous deficiencies of previous mechanisms. The designed mechanism provides differentiation between distinct categories of service consumers as well as protection against server overloads. Furthermore, it scales in a cluster and does not require any modification to the system software of the host server, or to its application logic.
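As a purely hypothetical illustration of the two properties the abstract highlights, consumer-class differentiation and overload protection, the sketch below caps concurrent requests and reserves part of the capacity for a premium class; the class names, thresholds and API are assumptions, not the mechanism designed in the paper:

```python
class AdmissionGate:
    """Illustrative admission gate: protects the server from overload by
    capping concurrent requests, and differentiates consumer categories by
    reserving part of the capacity for the premium class."""

    def __init__(self, capacity, premium_reserve):
        self.capacity = capacity                 # max concurrent requests
        self.premium_reserve = premium_reserve   # slots only premium may use
        self.in_service = 0

    def try_admit(self, consumer_class):
        free = self.capacity - self.in_service
        # A standard request must leave the reserved slots untouched.
        needed = 1 if consumer_class == "premium" else 1 + self.premium_reserve
        if free >= needed:
            self.in_service += 1
            return True
        return False     # reject early: shed load before the server overloads

    def release(self):
        self.in_service -= 1

gate = AdmissionGate(capacity=10, premium_reserve=2)
print(gate.try_admit("standard"), gate.try_admit("premium"))
```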
Journal of Systems and Software | 2011
Joaquín Entrialgo; Daniel F. García; Javier García; Manuel García; Pablo Valledor; Mohammad S. Obaidat
In transactional systems, quality-of-service objectives are often specified by Service Level Objectives (SLOs) that stipulate a response time to be achieved for a percentile of the transactions. Usually, there are different client classes with different SLOs. In this paper, we extend a technique that enforces the fulfilment of the SLOs using admission control. The admission control of new user sessions is based on a response-time model. The technique proposed in this paper dynamically adapts the model to changes in workload characteristics and system configuration, so that the system can work autonomically, without human intervention. The technique requires no knowledge about the internals of the system; thus, it is easy to use and can be applied to many systems. Its utility is demonstrated by a set of experiments on a system that implements the TPC-App benchmark. The experiments show that the model adaptation works correctly in very different situations that include large and small changes in response times, increasing and decreasing response times, and different patterns of workload injection. In all these scenarios, the technique updates the model progressively until it adjusts to the new situation, and at intermediate points the model never exhibits abnormal behaviour that could cause the admission control component to fail.
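The sketch below illustrates the general idea under stated assumptions (a linear response-time model and exponential smoothing; the paper's actual model and adaptation law may differ): a new session is admitted only if the model predicts that the SLO percentile will still be met, and the model is adapted progressively from measured percentiles:

```python
class AdaptiveAdmission:
    """Sketch of SLO-driven session admission with online model adaptation.
    Names and the linear response-time model are illustrative assumptions."""

    def __init__(self, slo_ms, alpha=0.2):
        self.slo_ms = slo_ms        # response-time SLO for the percentile
        self.alpha = alpha          # smoothing factor for model adaptation
        self.ms_per_session = 1.0   # model: predicted percentile response
                                    # time grows linearly with active sessions
        self.active_sessions = 0

    def observe(self, measured_percentile_ms):
        """Adapt the model progressively from measured percentiles."""
        if self.active_sessions > 0:
            sample = measured_percentile_ms / self.active_sessions
            self.ms_per_session += self.alpha * (sample - self.ms_per_session)

    def admit_new_session(self):
        predicted = self.ms_per_session * (self.active_sessions + 1)
        if predicted <= self.slo_ms:
            self.active_sessions += 1
            return True
        return False

ac = AdaptiveAdmission(slo_ms=500)
ac.active_sessions = 40
ac.observe(measured_percentile_ms=320)   # model adapts to the new workload
print(ac.admit_new_session())
```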
Future Generation Computer Systems | 2017
José Luis Díaz; Joaquín Entrialgo; Manuel García; Javier García; Daniel F. García
In the Cloud Computing market, a significant number of cloud providers offer Infrastructure as a Service (IaaS), including the capability of deploying virtual machines of many different types. The deployment of a service in a public provider generates a cost derived from the rental of the allocated virtual machines. In this paper we present LLOOVIA (Load Level based OptimizatiOn for VIrtual machine Allocation), an optimization technique designed for the optimal allocation of the virtual machines required by a service, in order to minimize its cost while guaranteeing the required level of performance. LLOOVIA considers virtual machine types, different kinds of limits imposed by providers, and two pricing schemes for virtual machines: reserved and on-demand. LLOOVIA, which can be used in multi-cloud environments, provides two types of solutions: (1) the optimal solution and (2) an approximated solution based on a novel approach that applies binning to histograms of load levels. An extensive set of experiments has shown that when the size of the problem is huge, the approximated solution is calculated in a much shorter time and is very close to the optimal one. The technique presented has been applied to a set of case studies based on the Wikipedia workload. These cases demonstrate that LLOOVIA can handle problems in which hundreds of virtual machines of many different types, multiple providers, and different kinds of limits are used.
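The following minimal sketch illustrates the histogram-binning idea on a toy problem; the VM catalogue, the greedy allocation that stands in for the optimal (ILP-based) solution, and the omission of reserved instances and provider limits are all simplifying assumptions:

```python
from collections import Counter
import math

# Hypothetical VM catalogue: (name, requests-per-second capacity, $/hour).
VM_TYPES = [("small", 100, 0.05), ("large", 450, 0.20)]

def bin_load_levels(hourly_load, n_bins):
    """Histogram-style reduction of load levels: each observed level is
    rounded up to the upper edge of its bin, so far fewer distinct levels
    must be solved for while performance is never under-provisioned."""
    width = math.ceil(max(hourly_load) / n_bins)
    binned = [math.ceil(l / width) * width for l in hourly_load]
    return Counter(binned)          # level -> number of hours at that level

def cheapest_allocation(level):
    """Greedy stand-in for the optimal allocation: cover `level` requests/s
    with the VM mix of lowest hourly cost."""
    best = None
    for n_large in range(level // 450 + 2):
        remaining = max(0, level - n_large * 450)
        n_small = math.ceil(remaining / 100)
        cost = n_large * 0.20 + n_small * 0.05
        if best is None or cost < best[0]:
            best = (cost, n_small, n_large)
    return best

hourly_load = [120, 380, 900, 950, 200, 760]     # toy workload prediction
for level, hours in sorted(bin_load_levels(hourly_load, n_bins=3).items()):
    cost, n_small, n_large = cheapest_allocation(level)
    print(f"load {level:4d} req/s x {hours} h -> "
          f"{n_small} small + {n_large} large, ${cost * hours:.2f}")
```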
IEEE Industry Applications Society Annual Meeting | 2015
Rubén Usamentiaga; Julio Molleda; Daniel F. García; Francisco G. Bulnes; Joaquín Entrialgo; Carlos Álvarez
Flatness measurement and control is a requirement in the production of high-quality rolled steel products. One of the most popular measurement methods is based on laser light-sectioning sensors: a laser stripe is projected onto the steel strip while it moves forward along a roll path. The extraction of the laser stripes provides information about the lengths of the strip fibers, which are used to compute flatness. This flatness measurement method is easy to maintain, provides accurate results, and can be built using low-cost components. However, it has a major disadvantage: it is severely affected by vibrations. Thus, when the movement of the strips is affected by vibrations, the resulting flatness measurement is inaccurate. This paper proposes a solution: using a second laser stripe and taking advantage of this redundant information to estimate and remove vibrations. The paper presents the procedure required to combine the laser stripes and produce a vibration-free result. In order to test the proposed approach, a mechanical prototype is built and used to produce vibrations. Results show excellent performance and indicate that the proposed approach presents a far more efficient solution than traditional methods using a single laser stripe.
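The toy sketch below illustrates, under simplifying assumptions (purely vertical rigid-body vibration, stripes one sampling period apart, synthetic data), why the redundant second stripe helps: the downstream stripe re-observes material already measured upstream, so the difference between the two readings isolates the vibration increment, which can then be integrated and subtracted. This is not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
t = np.arange(n)
profile = np.cumsum(rng.normal(0.0, 0.02, n))   # true strip shape per material sample (mm)
w = 0.4 * np.sin(0.15 * t)                      # vertical vibration over time (mm)

# The two stripes are one sampling period apart: at time t the downstream
# stripe re-observes the material the upstream stripe saw at t - 1.
up = profile + w
down = np.empty(n)
down[0] = np.nan
down[1:] = profile[:-1] + w[1:]

# Redundancy isolates the vibration increment: down[t] - up[t-1] = w[t] - w[t-1].
incr = down[1:] - up[:-1]
w_est = np.concatenate(([0.0], np.cumsum(incr)))   # integrate, assuming w(0) = 0

corrected = up - w_est                             # vibration-free profile estimate
print("max reconstruction error (mm):", np.max(np.abs(corrected - profile)))
```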
Computer Applications in Engineering Education | 2015
Javier García; Joaquín Entrialgo
This article presents a lab approach for teaching storage area networks (SANs). Physical SANs are complex and costly infrastructures that require significant space. Both the cost and size are serious difficulties in setting up labs for teaching SANs. The lab approach presented in this article solves both problems by using computer virtualization, as well as software tools to emulate storage systems, so that lab activities can be carried out in a single physical computer. The article discusses the selection of a virtualization platform suitable for the teaching of SANs. Using the selected platform (the Hyper‐V hypervisor), a set of learning modules is designed. These modules teach students fundamental concepts and skills about SAN architecture, configuration and operation. The article provides brief descriptions of the modules. Then, the configuration schema of the practice platform used to support the SAN lab is explained. This configuration schema is designed for the cohabitation of multiple hypervisors (required to support the SAN lab activities) and a client operating system in the same physical computer. Thus, the SAN lab can be taught in a general purpose computer laboratory and coexist with other courses in the same physical lab. The proposed approach has been successfully implemented and verified by teaching the SAN lab in the context of an information technology degree in 2013 and 2014.
Journal of Systems Architecture | 2003
Javier García; Joaquín Entrialgo; Daniel F. García; José Luis Díaz; Francisco J. Suárez
Since the late 1980s a great deal of research has been dedicated to the development of software monitoring and visualization toolkits for parallel systems. These toolkits have traditionally been oriented to measuring and analyzing parallel scientific applications. However, nowadays, other types of parallel applications, using signal-processing or image-processing techniques, are becoming increasingly important in the field of embedded computing. Such applications are executed on special parallel computers, normally known as embedded multiprocessors, and they exhibit structural and behavioral characteristics very different from those of scientific applications. Because of this, monitoring tools with specific characteristics are required for measuring and analyzing parallel embedded applications. In this paper we present the Performance Execution Tracer (PET), a monitoring and visualization toolkit specifically developed to measure and analyze this type of application. Special emphasis is placed on explaining how PET deals with the particular characteristics of these applications. In addition, in order to demonstrate the capabilities of PET, the measurement and analysis of a real application using this tool are described.
Euromicro Workshop on Parallel and Distributed Processing | 1998
Francisco J. Suárez; J. Garcia; Joaquín Entrialgo; D.F. Garcia; S. Graffa; P. de Miguel
This paper describes a toolset whose objective is to provide full support for the analysis and testing of temporal behavior in the development of parallel real-time systems. The development approach supported by the toolset is based on an incremental prototyping technique combined with successive analyses and tests of the temporal behavior of prototypes carried out along the development cycle. The toolset is composed of three main tools: a prototyper, a monitor, and a visualizer/analyzer. Most current monitoring and analysis tools are oriented to measuring and visualizing the behavior of software entities (mainly processes) and the utilization of hardware resources, that is, the elements of the system design. In addition to this, the toolset incorporates a graph-based model that provides a functional view of the behavior, expressed in terms of the sequences of activities executed by the system in response to the main external environment events (the reactive aspect of real-time systems). Moreover, the toolset provides a powerful means to design and test the system under development in an incremental manner.
International Symposium on Performance Evaluation of Computer and Telecommunication Systems | 2016
Víctor Peláez; Antonio M. Campos; Daniel F. García; Joaquín Entrialgo
The use of Hybrid Cloud technologies in large-scale applications allows organizations to complement on-premises infrastructure with infrastructure hired from Public Cloud providers. The main difficulty is using the hired resources efficiently to provide the expected quality of service while dealing with the heterogeneity and uncertainty of Public Clouds. This work presents a scheduler able to deal with deadline-constrained bags of tasks in Hybrid Clouds. The main contribution of this scheduler is that task runtime estimations are not necessary as inputs: the scheduler includes a runtime estimator based on sampled data to generate the estimation autonomously. A discrete event simulator was developed in order to validate the proposed scheduler in different scenarios. Results show that an estimator based on Chebyshev's inequality obtains very good results in terms of deadlines met and cost.
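A minimal sketch of a Chebyshev-based runtime budget (the sample values are invented and the scheduler's exact estimator may differ): from sampled runtimes, the budget μ + kσ is chosen with k set so that Chebyshev's inequality bounds the probability of a task exceeding it:

```python
import math
import statistics

def chebyshev_runtime_estimate(samples, exceed_prob=0.05):
    """Conservative task-runtime estimate from sampled runtimes: by
    Chebyshev's inequality P(X >= mu + k*sigma) <= 1/k**2, so choosing
    k = 1/sqrt(exceed_prob) bounds the chance of under-estimation."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    k = 1.0 / math.sqrt(exceed_prob)
    return mu + k * sigma

sampled_runtimes = [41.2, 38.7, 44.9, 40.1, 43.5, 39.8]   # seconds, from probe tasks
estimate = chebyshev_runtime_estimate(sampled_runtimes, exceed_prob=0.05)
print(f"runtime budget per task: {estimate:.1f} s")
# A scheduler can then decide how many cloud VMs to hire so that
# (tasks remaining) * estimate fits within the bag's deadline.
```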
Network Computing and Applications | 2012
Ramón Medrano Llamas; Daniel F. García; Joaquín Entrialgo
Cluster management has become a multi-objective task that involves many disciplines, such as power optimization, fault tolerance, dependability, and online system operation analysis. Efficient and secure operation of these clusters is a key objective of any data center policy. In addition, the service provided by these servers must fulfill a level of quality of service (QoS) to the customers. Applying self-management techniques to these clusters would simplify and automate their operation. Current self-management techniques that take into account service level agreements (SLAs) do not simultaneously cover the two most important sides of cluster operation: self-optimization, for efficient and profitable operation, and self-healing, for secure operation and a high level of quality of service perceived by users. This work integrates a self-optimization strategy for Internet server clusters that optimizes power consumption, using dynamic provisioning of servers, with a self-healing strategy that improves the reaction of the cluster to a server failure by using the spare capacity of the cluster intelligently. The self-management technique is based on empirical response-time and power-consumption models of the servers, which simplify its operation. Additionally, the technique presented in this paper guarantees the fulfillment of the SLA.
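As a rough sketch of how the two strategies can be combined (the M/M/1-style response-time model and all parameters are illustrative assumptions, not the paper's empirical server models): power on the smallest number of servers that meets the SLA response time, plus spare capacity that can absorb a single server failure while a replacement is provisioned:

```python
import math

def servers_needed(arrival_rate, service_rate, sla_resp_time):
    """Smallest n for which a simple per-server M/M/1 response-time estimate
    meets the SLA (an illustrative stand-in for empirical models)."""
    n = math.ceil(arrival_rate / service_rate) + 1
    while True:
        per_server = arrival_rate / n
        resp = 1.0 / (service_rate - per_server)   # M/M/1 response time
        if resp <= sla_resp_time:
            return n
        n += 1

def provision(arrival_rate, service_rate, sla_resp_time, spare=1):
    """Self-optimization: power on just enough servers for the SLA.
    Self-healing: keep `spare` extra servers so a single failure can be
    absorbed without violating the SLA while a replacement boots."""
    return servers_needed(arrival_rate, service_rate, sla_resp_time) + spare

# 900 req/s offered load, 80 req/s per server, 0.2 s SLA response time
print("servers to power on:", provision(900.0, 80.0, 0.2, spare=1))
```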