Manuel García
University of Oviedo
Publications
Featured research published by Manuel García.
Euromicro Conference on Real-Time Systems | 2000
José M. López; Manuel García; José Luis Díaz; Daniel F. García
Presents the utilization bound for earliest deadline first (EDF) scheduling on homogeneous multiprocessor systems with partitioning strategies. Assuming that tasks are pre-emptively scheduled on each processor according to the EDF algorithm, and allocated according to the first-fit (FF) heuristic, we prove that the worst-case achievable utilization is 0.5(n+1), where n is the number of processors. This bound is valid for arbitrary utilization factors. Moreover, if all the tasks have utilization factors under a value α, the previous bound is raised, and the new utilization bound considering α is calculated. In addition, we prove that no uniprocessor scheduling algorithm/allocation algorithm pair can provide a higher worst-case achievable utilization than that of EDF-FF. Finally, simulation provides the average-case achievable utilization for EDF-FF.
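The partitioning scheme the abstract describes can be sketched in a few lines: tasks, given by their utilization factors, are packed onto processors by first-fit, and each processor accepts a task only while its total utilization stays at or below 1, which is the exact EDF feasibility condition for independent periodic tasks on a uniprocessor. This is an illustrative sketch of the EDF-FF strategy, not the paper's code.

```python
def first_fit_edf(utilizations, n_processors):
    """Partitioned EDF with first-fit allocation (illustrative sketch).

    Each task is described only by its utilization factor. A processor
    can accept a task while its accumulated utilization stays <= 1,
    the exact EDF schedulability condition on a uniprocessor.
    Returns allocation[i] = processor index of task i, or None if the
    task could not be placed on any processor.
    """
    loads = [0.0] * n_processors
    allocation = []
    for u in utilizations:
        for p in range(n_processors):
            if loads[p] + u <= 1.0:
                loads[p] += u
                allocation.append(p)
                break
        else:
            allocation.append(None)  # no processor had enough spare capacity
    return allocation
```

For example, with n = 2 processors the worst-case achievable utilization is 0.5(n+1) = 1.5: three tasks of utilization 0.5 (total 1.5) fit, while three tasks of utilization 0.6 (total 1.8) leave the last task unplaced.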
Real-Time Systems | 2003
José M. López; Manuel García; José Luis Díaz; Daniel F. García
In this paper, we extend Liu and Layland's utilization bound for fixed priority scheduling on uniprocessors to homogeneous multiprocessor systems under a partitioning strategy. Assuming that tasks are pre-emptively scheduled on each processor according to fixed priorities assigned by the Rate-Monotonic policy, and allocated to processors by the First Fit algorithm, we prove that the utilization bound is (n−1)(2^(1/2)−1) + (m−n+1)(2^(1/(m−n+1))−1), where m and n are the number of tasks and processors, respectively. This bound is valid for arbitrary utilization factors. Moreover, if all the tasks have utilization factors under a value α, the previous bound is raised and the new utilization bound considering α is calculated. Finally, simulation provides the average-case behavior.
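The bound stated above is straightforward to evaluate; the sketch below computes it directly for m tasks on n processors (assuming m ≥ n, as the formula requires). Note that for n = 1 it reduces to Liu and Layland's classic uniprocessor bound m(2^(1/m)−1).

```python
import math


def rm_ff_bound(m, n):
    """Utilization bound for Rate-Monotonic scheduling with First-Fit
    allocation, as stated in the abstract:

        (n-1)(2^(1/2) - 1) + (m-n+1)(2^(1/(m-n+1)) - 1)

    m = number of tasks, n = number of processors, assuming m >= n.
    """
    k = m - n + 1
    return (n - 1) * (math.sqrt(2) - 1) + k * (2 ** (1.0 / k) - 1)
```

As a sanity check, rm_ff_bound(1, 1) gives 1.0 (a single task may fully utilize its processor), and rm_ff_bound(2, 2) gives √2 ≈ 1.414.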
IEEE Transactions on Services Computing | 2009
Daniel F. García; Javier García; Joaquín Entrialgo; Manuel García; Pablo Valledor; Rodrigo Álvarez García; Antonio M. Campos
Nowadays, enterprises providing services through Internet often require online services supplied by other enterprises. This entails the cooperation of enterprise servers using Web services technology. The service exchange between enterprises must be carried out with a determined level of quality, which is usually established in a service level agreement (SLA). However, the fulfilment of SLAs is not an easy task and requires equipping the servers with special control mechanisms which manage the quality of the services supplied. The first contribution of this research work is the analysis and definition of the main requirements that these control mechanisms must fulfil. The second contribution is the design of a control mechanism which fulfils these requirements and overcomes numerous deficiencies present in previous mechanisms. The designed mechanism provides differentiation between distinct categories of service consumers as well as protection against server overloads. Furthermore, it scales in a cluster and does not require any modification to the system software of the host server, or to its application logic.
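The two key properties named in the abstract, consumer-class differentiation and overload protection, can be illustrated with a minimal admission rule: under rising load, lower-priority classes are shed first. The class names and thresholds below are illustrative assumptions, not the mechanism described in the paper.

```python
def admit(request_class, current_load, capacity, class_thresholds):
    """Admission sketch with class differentiation (illustrative only).

    class_thresholds maps a consumer class to the fraction of server
    capacity above which that class is no longer admitted; classes with
    higher thresholds survive deeper into an overload.
    """
    return current_load < capacity * class_thresholds[request_class]


# Hypothetical classes: "gold" is admitted up to full capacity,
# "bronze" is shed once load passes 70% of capacity.
thresholds = {"gold": 1.0, "bronze": 0.7}
```

At 80% load, a gold request is still admitted while a bronze request is rejected, which is the differentiation behaviour the abstract describes.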
Real-Time Systems Symposium | 2004
José Luis Díaz; José M. López; Manuel García; Antonio M. Campos; Kanghee Kim; Lucia Lo Bello
The exact stochastic analysis of most real-time systems is becoming unaffordable in current practice. On the one hand, the exact calculation of the response time distribution of the tasks is not possible except for simple periodic and independent task sets. On the other hand, in practice, tasks introduce complexities like release jitter, blocking in shared resources, stochastic dependencies, etc., which cannot be handled by the periodic and independent task set model. This paper introduces the concept of pessimism in the stochastic analysis of real-time systems in the following sense: the exact probability of missing any deadline is always lower than that derived from the pessimistic analysis. Therefore, if real-time constraints are expressed as probabilities of missing deadlines, the pessimistic stochastic analysis provides safe results. Some applications of the pessimism concept are presented. Firstly, the practical problems that arise in the stochastic analysis of periodic and independent task sets are addressed. Secondly, we extend to the stochastic case some well-known techniques of the deterministic analysis, such as the blocking in shared resources, and the task priority assignment.
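The basic building block of this kind of stochastic analysis is the convolution of discrete execution-time distributions, from which a deadline-miss probability can be read off. The sketch below shows that building block only; it is an illustration of the underlying operations, not the paper's analysis algorithm.

```python
from collections import defaultdict


def convolve(d1, d2):
    """Convolution of two discrete distributions given as {value: prob}.

    The result is the distribution of the sum of the two independent
    random variables, e.g. the accumulated demand of two jobs.
    """
    out = defaultdict(float)
    for v1, p1 in d1.items():
        for v2, p2 in d2.items():
            out[v1 + v2] += p1 * p2
    return dict(out)


def miss_probability(response_time_dist, deadline):
    """Probability mass of response times strictly beyond the deadline."""
    return sum(p for t, p in response_time_dist.items() if t > deadline)
```

A pessimistic analysis in the paper's sense replaces a distribution with one that shifts probability toward larger values, so the miss probability it yields upper-bounds the exact one.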
IEEE Journal on Selected Areas in Communications | 2004
Manuel García; Daniel F. García; Víctor G. García; Ricardo Bonis
This paper presents the study carried out on the data traffic collected on the network of a cable operator based on hybrid fiber-coax technology, and the subsequent simulation model developed to predict the bandwidth requirements of the network channels. The paper starts with the analysis of the traffic measurements, taken over two periods of time in one year, on all the network's channels. This analysis identifies the main characteristics of the traffic, as well as some relationships between network parameters and their persistence over time. The paper proceeds to present the development of a simulation model, which represents the cable network. This model is built from the results of the traffic analysis and network parameters. The major challenge of this model is to predict the traffic on each channel of the cable network, related to parameters of the network configuration, such as the number of subscribers assigned to the channel and the time of day. To reach this goal, the simulation model has been developed with a modular structure, which gives it flexibility to adapt to changes in the network. The process followed involves the establishment of a traffic model, a system model, and finally, the validation of the results.
International Conference on Communications | 2003
Xabiel G. Pañeda; David Melendi; Manuel García; Víctor G. García; Roberto García; Enrique Riesgo
This paper describes the tool developed in order to analyse a video-on-demand streaming service. The aim of this work is to provide a powerful system to help both the service providers and the communication operators to configure this sort of service. Distributing the contents, developing redistribution routes, creating new contents in the most popular subjects and increasing or decreasing the length of information depending on the subscribers' behaviour can improve the quality of the service. However, these decisions must be taken based on service performance. This tool tries to fill the gap in this field and provide the necessary analysis to configure these services. The quality of the tool has been evaluated by www.lne.es, one of the most successful digital news sites in Spain. Its multimedia section offers a large number of videos on demand with several subjects, lengths and qualities. Over recent months, an analysis process has been carried out to improve the service by using this analysis tool. This work is included in a project on the analysis, modelling and configuration of interactive multimedia services.
Instrumentation and Measurement Technology Conference | 2000
Daniel F. García; Manuel García; Faustino Obeso; Valentin Fernández
This paper describes the development of a flatness measurement system, integrated in the control process of a hot strip mill in the steel industry. The objective of the system is to calculate flatness indexes for every rolled strip, comparing the length of its lateral profiles with the central length. The reconstruction of the profiles is based on a non-linear triangulation technique. Images of the steel strip, at high temperature and high speed, are sampled every two milliseconds at five different points and are processed on-line in order to calculate height displacement values of the strip, which allows the calculation of final flatness indexes for the steel strip. The measurement method developed introduces an innovative geometry in the disposition of the optical elements which increases the measurement range of the system without reducing its precision. It also includes a tracking system to compensate for the effects of lateral displacements of the strip. The flatness measurement system has been implemented using a heterogeneous distributed computer system.
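The flatness index the abstract refers to compares the length of a lateral profile with the central length. A common steel-industry convention expresses this relative elongation in I-units (the relative length difference scaled by 10^5); the formula below follows that convention as an illustration, and is not necessarily the exact index definition used by the system described.

```python
def flatness_index(lateral_length, central_length):
    """Flatness in I-units (illustrative, standard industry convention):

        I = (L_lateral - L_central) / L_central * 1e5

    A perfectly flat strip has equal fibre lengths and an index of 0;
    a lateral fibre 0.01% longer than the central one gives 10 I-units.
    """
    return (lateral_length - central_length) / central_length * 1e5
```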
Journal of Systems and Software | 2011
Joaquín Entrialgo; Daniel F. García; Javier García; Manuel García; Pablo Valledor; Mohammad S. Obaidat
In transactional systems, quality-of-service objectives are often specified by Service Level Objectives (SLOs) that stipulate a response time to be achieved for a percentile of the transactions. Usually, there are different client classes with different SLOs. In this paper, we extend a technique that enforces the fulfilment of the SLOs using admission control. The admission control of new user sessions is based on a response-time model. The technique proposed in this paper dynamically adapts the model to changes in workload characteristics and system configuration, so that the system can work autonomically, without human intervention. The technique requires no knowledge about the internals of the system; thus, it is easy to use and can be applied to many systems. Its utility is demonstrated by a set of experiments on a system that implements the TPC-App benchmark. The experiments show that the model adaptation works correctly in very different situations that include large and small changes in response times, increasing and decreasing response times, and different patterns of workload injection. In all these scenarios, the technique updates the model progressively until it adjusts to the new situation; in intermediate situations, the model never exhibits abnormal behaviour that could lead to a failure in the admission control component.
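An SLO of the kind described, a response-time target for a percentile of the transactions, can be checked against observed response times as sketched below. The nearest-rank percentile method used here is one standard choice, assumed for illustration; the paper's admission-control model is not reproduced.

```python
import math


def slo_satisfied(response_times, percentile, target):
    """True if the given percentile of observed response times meets the
    SLO target, e.g. "95% of transactions within 2.0 seconds".

    Uses the nearest-rank percentile: the smallest observation that
    covers at least `percentile` percent of the sample.
    """
    ordered = sorted(response_times)
    rank = math.ceil(len(ordered) * percentile / 100)
    return ordered[rank - 1] <= target
```

For example, with ten transactions, a single 3-second outlier already violates a "95th percentile under 2 s" objective, which is why such SLOs are enforced by shedding sessions before the tail degrades.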
Real-time Imaging | 1999
Daniel F. García; Manuel García; Faustino Obeso; Valentin Fernández
This paper describes the development of a flatness inspection system, integrated in the control process of a hot strip mill in the steel industry. The objective of the system is to calculate flatness indexes for every strip, comparing the length of its lateral profiles with the central length. The reconstruction of the profiles is based on a nonlinear triangulation technique. Images of the steel strip, at high temperature and high speed, are sampled every 2 ms at five different points and are processed on-line in order to calculate height displacement values of the strip, which allows the calculation of final flatness indexes for the steel strip. The measurement method developed introduces an innovative geometry in the disposition of the optical elements which increases the measurement range without reducing precision. It also includes a tracking system to compensate for the effects of lateral displacements of the strip. The flatness inspection system has been implemented using a heterogeneous distributed computer system.
Future Generation Computer Systems | 2017
José Luis Díaz; Joaquín Entrialgo; Manuel García; Javier García; Daniel F. García
In the Cloud Computing market, a significant number of cloud providers offer Infrastructure as a Service (IaaS), including the capability of deploying virtual machines of many different types. The deployment of a service in a public provider generates a cost derived from the rental of the allocated virtual machines. In this paper we present LLOOVIA (Load Level based OptimizatiOn for VIrtual machine Allocation), an optimization technique designed for the optimal allocation of the virtual machines required by a service, in order to minimize its cost, while guaranteeing the required level of performance. LLOOVIA considers virtual machine types, different kinds of limits imposed by providers, and two pricing schemes for virtual machines: reserved and on-demand. LLOOVIA, which can be used with multi-cloud environments, provides two types of solutions: (1) the optimal solution and (2) the approximated solution based on a novel approach that uses binning applied on histograms of load levels. An extensive set of experiments has shown that when the size of the problem is huge, the approximated solution is calculated in a much shorter time and is very close to the optimal one. The technique presented has been applied to a set of case studies, based on the Wikipedia workload. These cases demonstrate that LLOOVIA can handle problems in which hundreds of virtual machines of many different types, multiple providers, and different kinds of limits are used.
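The approximated solution's key idea, binning a long load trace into a histogram of load levels, can be sketched as follows: each load value is rounded up to the top of its bin (a conservative choice, so an allocation sized for the level can serve any load inside the bin), and the allocation problem then needs to be solved only once per distinct level instead of once per time slot. The bin size and the rounding direction here are illustrative assumptions, not LLOOVIA's exact discretization.

```python
from collections import Counter


def load_histogram(load_trace, bin_size):
    """Compress a per-period load trace into a histogram of load levels.

    Each observed load is rounded UP to the next multiple of bin_size,
    so sizing for a level is safe for every load mapped to it. Returns
    a Counter mapping level -> number of periods at that level.
    """
    levels = ((load + bin_size - 1) // bin_size * bin_size
              for load in load_trace)
    return Counter(levels)
```

A trace of thousands of hourly loads typically collapses into a few dozen levels, which is why the approximated solution is obtained in a much shorter time while staying close to the optimum.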