Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Josep L. Lérida is active.

Publication


Featured research published by Josep L. Lérida.


International Conference on Parallel Processing | 2012

MIP model scheduling for multi-clusters

Hector Blanco; Fernando Guirado; Josep L. Lérida; Víctor M. Albornoz

Multi-cluster environments are composed of multiple clusters that act collaboratively, thus allowing computational problems that require more resources than those available in a single cluster to be treated. However, the degree of complexity of the scheduling process is greatly increased by resource heterogeneity and the co-allocation process, which distributes the tasks of parallel jobs across cluster boundaries. In this paper, the authors propose a new MIP model that determines the best schedule for all the jobs in the queue, identifying their resource allocation and execution order so as to minimize the overall makespan. The results show that the proposed technique produces a highly compact schedule, yielding better resource utilization and a lower overall makespan. This makes it especially useful for environments dealing with limited resources and large applications.
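
To make the formulation concrete, here is a minimal MIP sketch in Python with the PuLP library. The jobs, capacities and runtimes are hypothetical, each cluster simply runs its assigned jobs back-to-back, and co-allocation across clusters is omitted, so this illustrates the flavour of the model rather than the paper's actual formulation.

```python
# Toy MIP: place each job on exactly one cluster; jobs on a cluster run
# back-to-back; minimize the latest cluster finish time (the makespan).
import pulp

jobs = {"j1": 4, "j2": 2, "j3": 3}            # job -> processors required
clusters = {"c1": 6, "c2": 4}                 # cluster -> processors available
runtime = {("j1", "c1"): 10, ("j1", "c2"): 14,
           ("j2", "c1"): 5,  ("j2", "c2"): 6,
           ("j3", "c1"): 8,  ("j3", "c2"): 9}

prob = pulp.LpProblem("multicluster_scheduling", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", list(runtime), cat="Binary")  # x[j, c] = 1 iff j on c
makespan = pulp.LpVariable("makespan", lowBound=0)
prob += makespan                               # objective: minimize makespan

for j in jobs:                                 # each job is placed exactly once
    prob += pulp.lpSum(x[j, c] for c in clusters) == 1
for (j, c) in runtime:                         # a job may only go where it fits
    if jobs[j] > clusters[c]:
        prob += x[j, c] == 0
for c in clusters:                             # cluster finish time bounds makespan
    prob += pulp.lpSum(runtime[j, c] * x[j, c] for j in jobs) <= makespan

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("makespan:", makespan.value())
for (j, c) in runtime:
    if x[j, c].value() == 1:
        print(j, "->", c)
```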


Journal of Network and Computer Applications | 2013

Analyzing locality over a P2P computing architecture

Damià Castellà; Francesc Giné; Francesc Solsona; Josep L. Lérida

A characteristic of Peer-to-Peer (P2P) computing networks is their huge number of different computational resources scattered across the Internet. Gathering peers into markets according to their multi-attribute computational resources makes it easier to manage these environments. This solution is known as a market overlay. In this context, the closeness of markets with similar resources, known as locality, is a key feature for ensuring good P2P resource management: locality over a market overlay allows a lack of resources in a given market to be compensated quickly by any other market with similar resources, whenever these are close to each other. Consequently, locality becomes an essential challenge. This paper addresses the analysis of the locality of P2P market overlays. A new procedure for measuring locality is applied, together with an extensive analysis of some well-known structured P2P overlays. Based on this analysis, a new P2P computing architecture oriented towards optimizing locality, named DisCoP, is proposed. Our proposal gathers the peers into markets according to their computational resources. A Hilbert function is used to arrange the multi-attribute markets in an ordered, mono-dimensional space, and the markets are linked by means of a de Bruijn graph. In order to maintain DisCoP's locality whenever the overlay is incomplete, a solution based on the virtualization of markets is also proposed. Finally, DisCoP's locality is tested together with the proposed virtualization method for approximate searches over incomplete overlays. The simulation results show that approximate searches exploit DisCoP's locality efficiently.
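
The Hilbert-ordering step can be illustrated with the standard two-dimensional mapping below. The attributes (discretized CPU and memory classes) and the grid size are assumptions for the example; DisCoP's markets are multi-attribute and are further linked by a de Bruijn graph.

```python
# Standard 2-D Hilbert mapping (the classic xy2d routine): converts a point
# on an n x n grid (n a power of two) into its one-dimensional Hilbert index.
def xy2d(n, x, y):
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                  # rotate the quadrant so the curve connects
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d

# Hypothetical markets keyed by discretized (cpu_class, mem_class) attributes.
for cpu, mem in [(2, 3), (2, 4), (12, 1)]:
    print((cpu, mem), "->", xy2d(16, cpu, mem))
# Similar attribute vectors tend to get nearby 1-D keys; the overlay can then
# link the ordered markets (DisCoP uses a de Bruijn graph on these keys).
```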


The Journal of Supercomputing | 2011

Multiple job co-allocation strategy for heterogeneous multi-cluster systems based on linear programming

Hector Blanco; Josep L. Lérida; Fernando Cores; Fernando Guirado

Multi-cluster environments are composed of multiple clusters of computers that act collaboratively, thus allowing computational problems to be treated that require more resources than those available in a single cluster. However, the degree of complexity of the scheduling process is greatly increased by the heterogeneity of resources and the co-allocation process, which distributes the tasks of parallel jobs across cluster boundaries. This work presents a new scheduling strategy that allocates multiple jobs from the system queue simultaneously on a heterogeneous multi-cluster, applying co-allocation when necessary. Our strategy is composed of a job-selection function and a linear programming model that finds the best allocation for the multiple jobs. The proposed scheduling technique is shown to reduce the execution times of the parallel jobs and the overall response times by 38% compared with other scheduling techniques in the literature.
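
A minimal sketch of the batch-building step, assuming a simple FCFS-with-fit selection function (the paper's actual selection function may differ): jobs are taken from the queue while their combined processor demand fits the free resources, and the resulting batch would then be handed to the linear programming allocator.

```python
# Hypothetical selection step: walk the queue in arrival order and batch the
# jobs whose combined processor demand still fits the free resources; the
# batch is then passed to the LP allocation model.
def select_batch(queue, free_processors):
    """queue: list of (job_id, processors) tuples in arrival order."""
    batch, used = [], 0
    for job_id, procs in queue:
        if used + procs <= free_processors:
            batch.append(job_id)
            used += procs
    return batch

queue = [("j1", 8), ("j2", 4), ("j3", 16), ("j4", 2)]
print(select_batch(queue, 16))                # ['j1', 'j2', 'j4']; j3 must wait
```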


Journal of Parallel and Distributed Computing | 2013

State-based predictions with self-correction on Enterprise Desktop Grid environments

Josep L. Lérida; Francesc Solsona; Porfidio Hernández; Francesc Giné; Mauricio Hanzich; Josep Conde

The abundant computing resources in current organizations provide new opportunities for executing parallel scientific applications and for exploiting otherwise idle resources. The Enterprise Desktop Grid Computing (EDGC) paradigm addresses the potential for harvesting the idle computing resources of an organization's desktop PCs to support the execution of the company's large-scale applications. In these environments, the accuracy of response-time predictions is essential for effective metascheduling that maximizes resource usage without harming the performance of the parallel and local applications. However, this accuracy is a major challenge due to the heterogeneity and non-dedicated nature of EDGC resources. In this paper, two new prediction techniques are presented based on the state of the resources. A thorough analysis by linear regression demonstrated that the proposed techniques capture the real behavior of the parallel applications better than other common techniques in the literature. Moreover, it is possible to reduce deviations by properly modeling the prediction errors; thus, a Self-adjustable Correction method (SAC) for detecting and correcting prediction deviations was proposed, with the ability to adapt to changes in load conditions. An extensive evaluation in a real environment was conducted to validate the SAC method. The results show that the use of SAC increases the accuracy of response-time predictions by 35%. The cost of predictions with self-correction and their accuracy in a real environment were analyzed using a combination of the proposed techniques. The results demonstrate that the cost of the predictions is negligible and that the combined use of the prediction techniques is preferable.
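
The self-correction idea can be sketched as follows, under the assumption (ours, not the paper's exact formulation) that the correction is an exponentially smoothed ratio of observed to predicted times; the smoothing factor alpha controls how quickly the correction adapts to load changes.

```python
# Sketch of self-correcting prediction: exponentially smooth the ratio of
# observed to predicted times and apply it to the next raw prediction. The
# ratio form and alpha value are our assumptions, not SAC's exact definition.
class SelfCorrectingPredictor:
    def __init__(self, base_predictor, alpha=0.3):
        self.base_predictor = base_predictor  # callable: job -> predicted seconds
        self.alpha = alpha                    # weight of the newest error sample
        self.error_ratio = 1.0                # running observed/predicted ratio

    def predict(self, job):
        return self.base_predictor(job) * self.error_ratio

    def observe(self, job, actual_time):
        raw = self.base_predictor(job)
        ratio = actual_time / raw if raw > 0 else 1.0
        # smoothing lets the correction adapt when load conditions change
        self.error_ratio = (1 - self.alpha) * self.error_ratio + self.alpha * ratio

predictor = SelfCorrectingPredictor(lambda job: job["base_time"])
job = {"base_time": 100.0}
predictor.observe(job, actual_time=130.0)     # run was slower than predicted
print(predictor.predict(job))                 # ~109: drifts towards the observation
```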


Journal of Simulation | 2015

Multi-criteria genetic algorithm applied to scheduling in multi-cluster environments

Eloi Gabaldon; Josep L. Lérida; Fernando Guirado; Jordi Planes

Scheduling and resource allocation to optimize performance criteria in multi-cluster heterogeneous environments is known to be an NP-hard problem, not only because of resource heterogeneity but also because of the possibility of applying co-allocation to take advantage of idle resources across clusters. A common practice is to use basic heuristics that attempt to optimize some performance criterion by treating the jobs in the waiting queue individually. More recent works have proposed optimization strategies based on Linear Programming techniques that deal with the scheduling of multiple jobs simultaneously. However, the time cost of these techniques makes them impractical for large-scale environments. Population-based meta-heuristics have proved their effectiveness for finding optimal schedules in large-scale distributed environments with high resource diversification and large numbers of jobs in the batches. The algorithm proposed in the present work packages the jobs in the batch to obtain better optimization opportunities. It includes a multi-objective function that optimizes not only the makespan of the batches but also the flowtime, thus ensuring a certain level of QoS from the users’ point of view. The algorithm also incorporates heterogeneity and bandwidth awareness, making it useful for scheduling jobs in large-scale heterogeneous environments. The proposed meta-heuristic was evaluated with a real workload trace. The results show the effectiveness of the proposed method, providing solutions that improve performance with respect to other well-known techniques in the literature.
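
A rough sketch of the multi-objective fitness: a chromosome assigns each job in the batch to a cluster, and the fitness blends makespan and flowtime with a weight. The decoding (back-to-back execution per cluster), the weight and the runtimes are illustrative assumptions, not the paper's exact GA.

```python
# Toy decoding + fitness for the multi-criteria GA idea: chromosome[i] is the
# cluster chosen for job i; jobs on a cluster run back-to-back.
import random

def fitness(chromosome, runtimes, weight=0.5):
    finish = {}                                # cluster -> current finish time
    flowtime = 0.0
    for job, cluster in enumerate(chromosome):
        end = finish.get(cluster, 0.0) + runtimes[job][cluster]
        finish[cluster] = end
        flowtime += end                        # sum of job completion times
    return weight * max(finish.values()) + (1 - weight) * flowtime

runtimes = [{0: 10, 1: 14}, {0: 5, 1: 6}, {0: 8, 1: 9}]  # job -> {cluster: time}
population = [[random.randrange(2) for _ in runtimes] for _ in range(20)]
best = min(population, key=lambda ch: fitness(ch, runtimes))
print(best, fitness(best, runtimes))           # selection would drive a real GA
```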


High Performance Computing for Computational Science (Vector and Parallel Processing) | 2008

Resource Matching in Non-dedicated Multicluster Environments

Josep L. Lérida; Francesc Solsona; Francesc Giné; Jose Ramon García; Porfidio Hernández

We are interested in making use of multiclusters to execute parallel applications. The present work is developed within the M-CISNE project. M-CISNE is a non-dedicated, heterogeneous multicluster environment that includes MetaLoRaS, a two-level metascheduler that manages appropriate job allocation to the available resources. In this paper, we present a new resource-matching model for MetaLoRaS, aimed at mitigating the degraded turnaround time of co-allocated jobs caused by contention on shared inter-cluster links. The model is based on linear programming and considers the availability of computational resources and the contention on shared inter- and intra-cluster links. Its goal is to minimize the average turnaround time of the parallel applications, without disturbing the local applications excessively, while maximizing prediction accuracy. We also present a parallel job model that takes both computation and communication characterizations into account, obtaining greater accuracy than models focused on only one of these characteristics. Our preliminary performance results indicate that the linear programming model for on-line resource matching is efficient in speed and accuracy and can be successfully applied to co-allocate jobs across different clusters.


The Journal of Supercomputing | 2017

Blacklist multi-objective genetic algorithm for energy saving in heterogeneous environments

Eloi Gabaldon; Josep L. Lérida; Fernando Guirado; Jordi Planes

Reducing energy consumption in large-scale computing facilities has become a major concern in recent years. Most techniques have focused on determining the computing requirements from load predictions and then turning unnecessary nodes on and off. Nevertheless, once the available resources have been configured, new opportunities arise for reducing energy consumption by optimally matching parallel applications to the available computing nodes. Current research in scheduling has concentrated on optimizing not only the energy consumed by the processors but also the makespan, i.e., the job completion time. The large number of heterogeneous computing nodes and the variability of the application tasks make the scheduling an NP-hard problem. In this paper, we propose a multi-objective genetic algorithm based on a weighted blacklist that is able to generate scheduling decisions that globally optimize both energy consumption and makespan.
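
A sketch of how such a fitness function might combine the objectives, assuming a per-node power rating and a weighted blacklist that penalizes time spent on nodes flagged as inefficient; the power figures, penalty scheme and weights below are hypothetical, not the paper's data.

```python
# Illustrative GA fitness: assignment[i] is the node for task i. Energy is
# power x time; a weighted blacklist adds a penalty for time spent on nodes
# flagged as inefficient.
def fitness(assignment, runtimes, power, blacklist_weight, alpha=0.5):
    finish = {}
    energy = 0.0
    penalty = 0.0
    for i, n in enumerate(assignment):
        t = runtimes[i][n]
        finish[n] = finish.get(n, 0.0) + t    # tasks on a node run back-to-back
        energy += power[n] * t                # watts x seconds
        penalty += blacklist_weight[n] * t    # discourages blacklisted nodes
    return alpha * max(finish.values()) + (1 - alpha) * energy + penalty

runtimes = [{0: 10, 1: 12}, {0: 7, 1: 5}]     # task -> {node: seconds}
power = {0: 90, 1: 140}                       # node -> watts
blacklist = {0: 0.0, 1: 0.8}                  # node 1 is heavily penalized
print(fitness([0, 0], runtimes, power, blacklist))
```

In practice the makespan and energy terms live on very different scales and would be normalized before being combined; the paper's exact weighting scheme is not reproduced here.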


Cluster Computing | 2014

PSysCal: a parallel tool for calibration of ecosystem models

Josep L. Lérida; Albert Agraz; Francesc Solsona; M. Angels Colomer

The methods used for ecosystem modelling are generally based on differential equations. Nowadays, new computational models based on the concurrent processing of multiple agents (multi-agents) or on the simulation of biological processes with Population Dynamic P-System (PDP) models are gaining importance. These models have significant advantages over traditional models, such as high computational efficiency, modularity and the ability to model the interaction between different biological processes that operate concurrently. This makes them useful for simulating complex dynamic ecosystems that are untreatable with classical techniques.

On the other hand, the main drawback of P-System models is the need for calibration. The model parameters represent field measurements taken by experts. However, the exact values of some of these parameters are unknown, and the experts define a numerical interval of possible values. It is therefore necessary to perform a calibration process to fit the best value within each interval. When the number of unknown parameters increases, the calibration process becomes computationally complex and the storage requirements increase significantly.

In this paper, we present a parallel tool (PSysCal) for calibrating these next-generation PDP models. The results show that the calibration time is reduced exponentially with the amount of computational resources. However, the complexity of the calibration process and the limited number of available computational resources make calibration intractable for large models. To solve this, we propose a heuristic technique (PSysCal+H). The results show that this technique significantly reduces the computational cost, making it practical to solve large model instances even with limited computational resources.
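
A minimal sketch of interval calibration, assuming a random-sampling strategy and a placeholder error function (PSysCal's actual procedure is more elaborate): candidate parameter vectors are drawn from the expert intervals and evaluated in parallel, and the best-fitting candidate is kept.

```python
# Sketch: sample candidates from the expert intervals, evaluate them in
# parallel, keep the best fit. Intervals, parameters and the error function
# are hypothetical placeholders, not PSysCal's actual model.
import random
from multiprocessing import Pool

INTERVALS = {"birth_rate": (0.1, 0.4), "mortality": (0.05, 0.2)}

def model_error(params):
    # placeholder for: run the PDP model, compare with field observations
    return abs(params["birth_rate"] - 0.27) + abs(params["mortality"] - 0.11)

def random_candidate():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in INTERVALS.items()}

if __name__ == "__main__":
    candidates = [random_candidate() for _ in range(1000)]
    with Pool() as pool:                      # evaluate candidates in parallel
        errors = pool.map(model_error, candidates)
    best_error, best_params = min(zip(errors, candidates), key=lambda t: t[0])
    print(best_error, best_params)
```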


2012 Seventh International Conference on P2P, Parallel, Grid, Cloud and Internet Computing | 2012

MIP Model Scheduling for BSP Parallel Applications on Multi-cluster Environments

Hector Blanco; Fernando Guirado; Josep L. Lérida; Víctor M. Albornoz

Multi-cluster environments are composed of multiple clusters that act collaboratively, thus allowing computational problems that require more resources than those available in a single cluster to be treated. However, the degree of complexity of the scheduling process is greatly increased by resource heterogeneity and the co-allocation process, which distributes the tasks of parallel jobs across cluster boundaries. In this paper, we use the Bulk-Synchronous Parallel model, in which jobs are composed of a fixed number of tasks that act in a collaborative manner. We propose a new MIP model that determines the best allocation for all the jobs in the queue, identifying their execution order and minimizing the overall makespan. The results show that the proposed technique produces a highly compact schedule of the jobs, achieving better resource utilization and reducing the overall makespan. This makes the OAS technique especially useful for environments dealing with limited resources and large applications.


European Conference on Parallel Processing | 2008

Enhancing Prediction on Non-dedicated Clusters

Josep L. Lérida; Francesc Solsona; Francesc Giné; Jose Ramon García; Mauricio Hanzich; Porfidio Hernández

In this paper, we present a scheduling scheme to estimate the turnaround time of parallel jobs on a heterogeneous, non-dedicated cluster or NoW (Network of Workstations). The scheme is based on an analytical prediction model that establishes the processing and communication slowdown of the jobs' execution times from the computing power and occupancy of the cluster nodes and links. Preserving the responsiveness of the local applications is also a goal. We address the impact of inaccuracies in these estimates on the overall system performance, and we demonstrate that job scheduling benefits from their accuracy. The applicability of our proposal was demonstrated by comparing the predicted deviations of the parallel jobs in a real environment with those of the most representative methods in the literature. The additional cost of obtaining these estimates was also evaluated and compared. The present work is implemented within the CISNE project, a previously developed scheduling framework for non-dedicated, heterogeneous cluster environments.
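
A toy version of a slowdown-based estimate, assuming (our illustrative choice, not the paper's exact analytical model) that the dedicated-cluster runtime is scaled by the worst node or link occupancy among the resources the job would use:

```python
# Toy slowdown estimate: scale the dedicated-cluster runtime by the worst
# occupancy among the chosen nodes and links. The max() coupling of CPU and
# network slowdown is an illustrative simplification.
def predicted_turnaround(base_time, node_loads, link_loads):
    cpu_slowdown = max(1.0 / (1.0 - load) for load in node_loads)
    net_slowdown = max(1.0 / (1.0 - load) for load in link_loads)
    return base_time * max(cpu_slowdown, net_slowdown)

print(predicted_turnaround(100.0, node_loads=[0.2, 0.5], link_loads=[0.1]))
# -> 200.0: the half-occupied node dominates the prediction
```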

Collaboration


Dive into Josep L. Lérida's collaboration.

Top Co-Authors

Porfidio Hernández

Autonomous University of Barcelona


Mauricio Hanzich

Barcelona Supercomputing Center


Jose Ramon García

Autonomous University of Barcelona


Emilio Luque

Autonomous University of Barcelona
