Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Josep Jorba is active.

Publication


Featured research published by Josep Jorba.


Journal of the Operational Research Society | 2011

On the use of Monte Carlo simulation, cache and splitting techniques to improve the Clarke and Wright savings heuristics

Angel A. Juan; Javier Faulin; Josep Jorba; Daniel Riera; David Masip; Barry B. Barrios

This paper presents the SR-GCWS-CS probabilistic algorithm, which combines Monte Carlo simulation with splitting techniques and the Clarke and Wright savings heuristic to find competitive quasi-optimal solutions to the Capacitated Vehicle Routing Problem (CVRP) in reasonable response times. The algorithm, which does not require complex fine-tuning processes, can be used as an alternative to other metaheuristics, such as Simulated Annealing, Tabu Search, Genetic Algorithms, Ant Colony Optimization or GRASP, which might be more difficult to implement and might require non-trivial fine-tuning, when solving CVRP instances. As discussed in the paper, the probabilistic approach presented here aims to provide a relatively simple yet flexible algorithm that benefits from: (a) the use of the geometric distribution to guide the random search process, and (b) efficient cache and splitting techniques that significantly reduce computational times. The algorithm is validated on a set of standard CVRP benchmarks, obtaining competitive results in all tested cases. Future work on the use of parallel programming to efficiently solve large-scale CVRP instances is discussed. Finally, note that some of the principles of the approach presented here might serve as a basis for developing similar algorithms for other routing and scheduling combinatorial problems.
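The geometric-distribution-guided search in (a) can be sketched as a biased pick from the sorted savings list; the function name, the beta value, and the modulo truncation below are illustrative assumptions for this sketch, not the paper's exact scheme:

```python
import math
import random

def biased_pick(sorted_savings, rng, beta=0.3):
    """Pick an entry from a descending savings list using a truncated
    quasi-geometric distribution: low positions (the best savings) are
    strongly favored, but any entry can still be chosen."""
    n = len(sorted_savings)
    # Sample k ~ Geometric(beta) via inverse transform, folded into range.
    k = int(math.log(rng.random()) / math.log(1.0 - beta)) % n
    return sorted_savings.pop(k)

# Toy savings values (descending), not from any real CVRP instance.
savings = [9.5, 7.2, 6.8, 4.1, 2.0]
edge = biased_pick(savings, random.Random(42))
```

Repeating this pick while rebuilding routes yields the randomized variant of the savings heuristic that the multi-start search iterates over.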


Simulation Modelling Practice and Theory | 2014

A simheuristic algorithm for solving the permutation flow shop problem with stochastic processing times

Angel A. Juan; Barry B. Barrios; Eva Vallada; Daniel Riera; Josep Jorba

This paper describes a simulation-optimization algorithm for the Permutation Flow Shop Problem with Stochastic Processing Times (PFSPST). The proposed algorithm combines Monte Carlo simulation with an Iterated Local Search metaheuristic to deal with the stochastic behavior of the problem. Using the expected makespan as the initial minimization criterion, our simheuristic approach is based on the assumption that high-quality solutions (permutations of jobs) for the deterministic version of the problem are likely to be high-quality solutions for the stochastic version, i.e., a correlation will exist between both sets of solutions, at least for moderate levels of variability in the stochastic processing times. No particular assumption is made about the probability distributions modeling the job-machine processing times. Our approach is able to solve, in just a few minutes or even less, PFSPST instances with hundreds of jobs and dozens of machines. The paper also proposes the use of reliability analysis techniques to analyze simulation outcomes, or historical observations, on the random variable representing the makespan associated with a given solution. This way, criteria other than the expected makespan can be considered by the decision maker when comparing alternative solutions. A set of classical benchmarks for the deterministic version of the problem is adapted and tested under several scenarios, each characterized by a different level of uncertainty, i.e., the variance level of the job-machine processing times.
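The expected-makespan evaluation at the core of a simheuristic can be sketched as follows; the exponential distribution and all names here are illustrative choices for the sketch (the paper makes no distributional assumption):

```python
import random

def makespan(perm, proc_times):
    """Deterministic flow-shop makespan for a job permutation;
    proc_times[j][m] is the time of job j on machine m."""
    n_machines = len(proc_times[0])
    finish = [0.0] * n_machines  # finish[m]: completion of last job on machine m
    for j in perm:
        for m in range(n_machines):
            start = max(finish[m], finish[m - 1] if m else 0.0)
            finish[m] = start + proc_times[j][m]
    return finish[-1]

def expected_makespan(perm, mean_times, n_runs=1000, rng=None):
    """Monte Carlo estimate of the expected makespan when each
    job-machine time is random (sampled here as exponential around
    its mean, purely as an example distribution)."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(n_runs):
        sample = [[rng.expovariate(1.0 / t) for t in row] for row in mean_times]
        total += makespan(perm, sample)
    return total / n_runs

mean_times = [[2.0, 3.0], [4.0, 1.0], [3.0, 2.0]]  # 3 jobs x 2 machines (toy data)
det = makespan([0, 1, 2], mean_times)
est = expected_makespan([0, 1, 2], mean_times)
```

An Iterated Local Search would perturb the permutation, keep the deterministic makespan as a fast filter, and reserve this Monte Carlo evaluation for promising candidates.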


IEEE Transactions on Learning Technologies | 2014

Experiences in Digital Circuit Design Courses: A Self-Study Platform for Learning Support

David Baneres; Robert Clarisó; Josep Jorba; Montse Serra

The synthesis of digital circuits is a basic skill in all bachelor programmes in the ICT area of knowledge, such as Computer Science, Telecommunication Engineering or Electrical Engineering. An important hindrance in learning this skill is that existing educational tools for circuit design do not allow students to validate whether their design satisfies the specification. Furthermore, automatic feedback is essential to help students fix incorrect designs. In this paper, we propose an online platform where students can design and verify their circuits with individual, automatic feedback. The technical aspects of the platform and the verification tool designed for it are presented. The impact of the platform on the students' learning process is illustrated by analyzing student performance in the course where the platform has been used. Results on the utilization of the platform versus the success rate and marks in the final exam are presented and compared with previous semesters.
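The abstract does not specify how the verification tool works internally; for small combinational circuits, one simple approach is an exhaustive equivalence check that returns a counterexample as feedback. All names below are hypothetical, and this is a simplification, not the platform's actual tool:

```python
from itertools import product

def equivalent(spec, design, n_inputs):
    """Exhaustively compare two combinational circuits (given as
    boolean functions) over all 2**n_inputs input vectors; return
    the first failing input vector, or None if they always agree."""
    for bits in product([0, 1], repeat=n_inputs):
        if spec(*bits) != design(*bits):
            return bits  # feedback for the student: a failing input
    return None

# Spec: a 2-input XOR. Student design: OR, which is wrong on (1, 1).
spec = lambda a, b: a ^ b
design = lambda a, b: a | b
counterexample = equivalent(spec, design, 2)  # (1, 1)
```

Returning a concrete failing input vector, rather than a bare pass/fail, is what makes the feedback actionable for the student.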


International Journal of Parallel Programming | 2014

Improving Performance on Data-Intensive Applications Using a Load Balancing Methodology Based on Divisible Load Theory

Claudia Rosas; Anna Sikora; Josep Jorba; Andreu Moreno; Eduardo César

Data-intensive applications are those that explore, query, analyze, and, in general, process very large data sets. Generally, these applications can be naturally implemented in parallel but, in many cases, these implementations show severe performance problems, mainly due to load imbalance, inefficient use of available resources, and improper data partition policies. It is worth noting that the problem becomes more complex when the conditions causing these problems change at run time. This paper proposes a methodology for dynamically improving the performance of certain data-intensive applications, based on adapting the size and number of data partitions, and the number of processing nodes, to the current application conditions in homogeneous clusters. To this end, the processing of each exploration is monitored, and the gathered data is used to dynamically tune the performance of the application. The tuning parameters included in the methodology are: (i) the partition factor of the data set, (ii) the distribution of the data chunks, and (iii) the number of processing nodes to be used. The methodology assumes that a single execution includes multiple related explorations on the same partitioned data set, and that data chunks are ordered according to their processing times during the application execution so that the most time-consuming partitions are assigned first. The methodology has been validated using the well-known bioinformatics tool BLAST and through extensive experimentation using simulation. Reported results are encouraging in terms of reducing the total execution time of the application (up to 40% in some cases).
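Assigning the most time-consuming partitions first, as the methodology describes, resembles classic longest-processing-time scheduling; a minimal sketch under that assumption (chunk names, times, and the function name are invented for illustration):

```python
import heapq

def assign_chunks(chunk_times, n_nodes):
    """Greedy longest-processing-time assignment: sort chunks by
    measured processing time (descending) and always give the next
    chunk to the currently least-loaded node."""
    heap = [(0.0, node, []) for node in range(n_nodes)]
    heapq.heapify(heap)
    for chunk, t in sorted(chunk_times.items(), key=lambda kv: -kv[1]):
        load, node, chunks = heapq.heappop(heap)  # least-loaded node
        chunks.append(chunk)
        heapq.heappush(heap, (load + t, node, chunks))
    return {node: (load, chunks) for load, node, chunks in heap}

# Toy per-chunk processing times measured in a previous exploration.
times = {"c0": 8.0, "c1": 7.0, "c2": 3.0, "c3": 2.0, "c4": 2.0}
plan = assign_chunks(times, 2)
```

In the methodology, the measured times would come from monitoring earlier explorations, and the partition factor itself would also be re-tuned between explorations.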


Symposium on Computer Architecture and High Performance Computing | 2011

Workload Balancing Methodology for Data-Intensive Applications with Divisible Load

Claudia Rosas; Anna Morajko; Josep Jorba; Eduardo César

Data-intensive applications are those that explore, query, analyze, and, in general, process very large data sets. In High Performance Computing (HPC), the main performance problems associated with these applications are load imbalance and inefficient resource utilization. This paper proposes a methodology for improving the performance of data-intensive applications, based on performing multiple data partitions prior to execution and ordering the data chunks according to their processing times during the application execution. As a first step, we consider that a single execution includes multiple related explorations on the same data set. Consequently, we propose to monitor the processing of each exploration and use the gathered data to dynamically tune the performance of the application. The tuning parameters included in the methodology are the partition factor of the data set, the distribution of the data chunks, and the number of processing nodes to be used by the application. The methodology has been initially tested using the well-known bioinformatics tool BLAST, obtaining encouraging results (up to a 40% improvement).


International Conference on Computational Science | 2003

Dynamic performance tuning of distributed programming libraries

Anna Morajko; Oleg Morajko; Josep Jorba; Tomàs Margalef; Emilio Luque

The use of distributed programming libraries is very common in the development of scientific and engineering applications. These libraries, from message passing libraries to numerical libraries, are designed in a very general way to be useful for a wide range of applications. Therefore, there are several policies that must be adapted to the particular application, system and input data to provide the expected performance. Our objective is to develop an environment for tuning the use of a distributed library on the fly, according to the dynamic behavior of the applications. In this paper, we present as an example a tuning environment for PVM-based applications. We show potential bottlenecks when using PVM, and we include tuning scenarios that describe the evaluation of the application behavior and the solutions that can improve performance.


ACM Computing Surveys | 2013

Decentralized resource discovery mechanisms for distributed computing in peer-to-peer environments

Daniel Lázaro; Joan Manuel Marquès; Josep Jorba; Xavier Vilajosana

Resource discovery is an important part of distributed computing and resource sharing systems, such as grids and utility computing. Because of the increasing importance of decentralized and peer-to-peer environments, characterized by high dynamism and churn, a number of resource discovery mechanisms, mainly based on peer-to-peer techniques, have been presented recently. We present and classify these mechanisms according to criteria such as their topology and the degree to which they meet common requirements of the targeted environments, and we compare their reported performance. These classifications are intended to provide an intuitive view of the strengths and weaknesses of each system.


Lecture Notes in Computer Science | 2005

Automatic performance analysis of message passing applications using the KappaPI 2 tool

Josep Jorba; Tomàs Margalef; Emilio Luque

Message passing libraries offer the programmer a set of primitives that are not available in sequential programming. Developing applications using these primitives, as well as tuning application performance, are complex tasks for non-expert users. Therefore, automatic performance analysis tools that help the user in the performance analysis and tuning phases are necessary. KappaPI 2 is a performance analysis tool with an open design that makes it easy to incorporate knowledge about parallel performance bottlenecks. The tool is able to detect and analyze performance bottlenecks and then make suggestions to the user to improve the application behavior.
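As an illustration of trace-based bottleneck detection in message passing programs, here is a simplified "late sender" check; the trace format and every name below are assumptions for this sketch, not KappaPI 2's actual internals:

```python
def late_sender_waits(events):
    """Scan a simplified message trace and report 'late sender'
    inefficiencies: a receive that starts before the matching send,
    leaving the receiver blocked. Each event is (msg_id, kind, time)
    with kind either 'send' or 'recv_start'."""
    sends, recvs, waits = {}, {}, {}
    for msg_id, kind, t in events:
        if kind == "send":
            sends[msg_id] = t
        else:
            recvs[msg_id] = t
    for msg_id, recv_t in recvs.items():
        send_t = sends.get(msg_id)
        if send_t is not None and send_t > recv_t:
            waits[msg_id] = send_t - recv_t  # time the receiver was blocked
    return waits

trace = [("m1", "recv_start", 2.0), ("m1", "send", 5.0),
         ("m2", "send", 1.0), ("m2", "recv_start", 3.0)]
blocked = late_sender_waits(trace)  # {"m1": 3.0}
```

A tool like this would then attribute the blocked time to a source location and suggest, for example, reordering computation on the sender side.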


Complex, Intelligent and Software Intensive Systems | 2008

An Architecture for Decentralized Service Deployment

Daniel Lázaro; Joan Manuel Marquès; Josep Jorba

In this paper we propose an architecture for a system that allows the deployment of services on a group of computers connected in a peer-to-peer fashion. The architecture is divided into layers, each containing components that offer specific functions. Together, they yield a system with desirable characteristics such as scalability, decentralization, the ability to deal with heterogeneity, fault tolerance, load balancing, and self-* properties.


Parallel Computing | 2006

Search of performance inefficiencies in message passing applications with KappaPI 2 tool

Josep Jorba; Tomàs Margalef; Emilio Luque

Performance is a crucial issue for parallel/distributed applications. Automatic performance analysis tools are useful in this context because they help developers in several phases of the performance tuning process. KappaPI 2 is an automatic performance analysis tool with an open, extensible knowledge base of typical inefficiencies in message passing applications; it detects and analyzes these inefficiencies and then makes suggestions to the developer on how to improve application behavior.

Collaboration


Dive into Josep Jorba's collaborations.

Top Co-Authors

Tomàs Margalef (Autonomous University of Barcelona)
Emilio Luque (Autonomous University of Barcelona)
Anna Sikora (Autonomous University of Barcelona)
Angel A. Juan (Open University of Catalonia)
Eduardo César (Autonomous University of Barcelona)
Anna Morajko (Autonomous University of Barcelona)
Joan Manuel Marquès (Open University of Catalonia)
Thanasis Daradoumis (Open University of Catalonia)
Claudia Rosas (Autonomous University of Barcelona)