Publication


Featured research published by Héctor Cancela.


Applied Soft Computing | 2011

A survey on parallel ant colony optimization

Martín Pedemonte; Sergio Nesmachnow; Héctor Cancela

Abstract: Ant colony optimization (ACO) is a well-known swarm intelligence method, inspired by the social behavior of ant colonies, for solving optimization problems. When facing large and complex problem instances, parallel computing techniques are usually applied to improve efficiency, allowing ACO algorithms to achieve high-quality results in reasonable execution times, even when tackling hard-to-solve optimization problems. This work introduces a new taxonomy for classifying software-based parallel ACO algorithms and also presents a systematic and comprehensive survey of the current state of the art in parallel ACO implementations. Each parallel model reviewed is categorized in the proposed taxonomy, and an insight into trends and perspectives in the field of parallel ACO implementations is provided.
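As a concrete illustration of the kind of sequential ACO algorithm the survey classifies parallel variants of, here is a minimal sketch on a toy 4-city TSP. The distances, parameter values, and update rule are illustrative assumptions, not taken from the paper:

```python
import random

random.seed(7)

# Toy symmetric 4-city TSP; distances are made up for illustration.
DIST = {(0, 1): 2, (0, 2): 9, (0, 3): 10, (1, 2): 6, (1, 3): 4, (2, 3): 3}
N, ANTS, ITERS, RHO, Q = 4, 8, 50, 0.5, 1.0

def edge(a, b):
    return (min(a, b), max(a, b))

def d(a, b):
    return DIST[edge(a, b)]

tau = {e: 1.0 for e in DIST}          # pheromone trail per edge

def build_tour():
    # Each ant builds a tour; next city chosen with probability
    # proportional to pheromone * heuristic (1/distance).
    tour, unvisited = [0], {1, 2, 3}
    while unvisited:
        cur = tour[-1]
        cand = sorted(unvisited)
        weights = [tau[edge(cur, j)] * (1.0 / d(cur, j)) for j in cand]
        nxt = random.choices(cand, weights=weights)[0]
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def length(t):
    return sum(d(t[i], t[(i + 1) % N]) for i in range(N))

best = None
for _ in range(ITERS):
    tours = [build_tour() for _ in range(ANTS)]
    for e in tau:                      # evaporation
        tau[e] *= (1 - RHO)
    for t in tours:                    # deposit proportional to 1/length
        L = length(t)
        if best is None or L < length(best):
            best = t
        for i in range(N):
            tau[edge(t[i], t[(i + 1) % N])] += Q / L
```

The survey's taxonomy concerns how loops like the one above (colony construction, evaporation, deposit) are distributed across processors, e.g. one colony per node versus parallel ant construction within one colony.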


Applied Soft Computing | 2012

A parallel micro evolutionary algorithm for heterogeneous computing and grid scheduling

Sergio Nesmachnow; Héctor Cancela; Enrique Alba

This work presents a novel parallel micro evolutionary algorithm for scheduling tasks in distributed heterogeneous computing and grid environments. The scheduling problem in heterogeneous environments is NP-hard, so a significant effort has been made in order to develop an efficient method to provide good schedules in reduced execution times. The parallel micro evolutionary algorithm is implemented using MALLBA, a general-purpose library for combinatorial optimization. Efficient numerical results are reported in the experimental analysis performed on both well-known problem instances and large instances that model medium-sized grid environments. The comparative study of traditional methods and evolutionary algorithms shows that the parallel micro evolutionary algorithm achieves a high problem solving efficacy, outperforming previous results already reported in the related literature, and also showing a good scalability behavior when facing high dimension problem instances.
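The underlying scheduling model (tasks with machine-dependent execution times, minimizing makespan) can be sketched with a deliberately simple steady-state EA. The instance, operators, and parameters below are illustrative assumptions, not the paper's parallel micro-EA or its MALLBA implementation:

```python
import random

random.seed(1)

# Toy heterogeneous instance: ETC[t][m] = estimated time to compute
# task t on machine m (values are illustrative).
TASKS, MACHINES = 8, 3
ETC = [[random.uniform(1, 10) for _ in range(MACHINES)] for _ in range(TASKS)]

def makespan(assign):
    # assign[t] = machine executing task t; makespan = max machine load
    load = [0.0] * MACHINES
    for t, m in enumerate(assign):
        load[m] += ETC[t][m]
    return max(load)

def mutate(assign):
    # move one random task to a random machine
    child = assign[:]
    child[random.randrange(TASKS)] = random.randrange(MACHINES)
    return child

# Steady-state EA: tournament selection, mutation, replace-worst.
pop = [[random.randrange(MACHINES) for _ in range(TASKS)] for _ in range(10)]
for _ in range(500):
    parent = min(random.sample(pop, 3), key=makespan)
    child = mutate(parent)
    worst = max(range(len(pop)), key=lambda i: makespan(pop[i]))
    if makespan(child) < makespan(pop[worst]):
        pop[worst] = child

best = min(pop, key=makespan)
```

A micro EA as in the paper keeps the population very small and compensates with parallel subpopulations; this sketch only shows the representation and fitness function such methods operate on.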


IEEE Transactions on Reliability | 2003

The recursive variance-reduction simulation algorithm for network reliability evaluation

Héctor Cancela; M. El Khadiri

This paper proposes a new formulation of the recursive variance-reduction Monte Carlo estimator of the K-terminal unreliability parameter of communication systems. This formulation allows a significant reduction in the simulation execution time, as demonstrated by experimental results.


IEEE Transactions on Reliability | 1995

A recursive variance-reduction algorithm for estimating communication-network reliability

Héctor Cancela; M. El Khadiri

In evaluating the capacity of a communication network architecture to resist possible faults of some of its components, several reliability metrics are used. This paper considers the K-terminal unreliability measure. The exact evaluation of this parameter is, in general, very costly, since it is in the NP-hard family. An alternative to exact evaluation is to estimate it using Monte Carlo simulation. For highly reliable networks, the crude Monte Carlo technique is prohibitively expensive; thus variance-reduction techniques must be used. We propose a recursive variance-reduction Monte Carlo scheme (RVR-MC) specifically designed for this problem. RVR-MC is recursive, changing the original problem into the unreliability evaluation problem for smaller networks. The process terminates when all resulting systems are either up or down independently of the components' state. Simulation results are given for a well-known test topology. The speedups obtained by RVR-MC with respect to crude Monte Carlo are calculated for various values of component unreliability. These results are compared to previously published results for five other methods (bounds, sequential construction, dagger sampling, failure sets, and merge process), showing the value of RVR-MC.
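The crude Monte Carlo baseline that RVR-MC improves upon can be sketched as follows for 2-terminal unreliability on a toy bridge network; the graph, failure probability, and sample size are illustrative assumptions, not from the paper:

```python
import random

# Bridge network: terminals 0 and 3, intermediate nodes 1 and 2.
EDGES = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
Q = 0.1            # independent edge failure probability
K = {0, 3}         # terminal set

def terminals_connected(up_edges):
    # Simple reachability from node 0 over the surviving edges.
    reach = {0}
    changed = True
    while changed:
        changed = False
        for a, b in up_edges:
            if a in reach and b not in reach:
                reach.add(b); changed = True
            elif b in reach and a not in reach:
                reach.add(a); changed = True
    return K <= reach

def crude_mc(n_samples, seed=0):
    # Sample full network states; estimate P(terminals disconnected).
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        up = [e for e in EDGES if rng.random() > Q]
        if not terminals_connected(up):
            failures += 1
    return failures / n_samples

est = crude_mc(20000)
```

The weakness the abstract points out is visible here: as Q shrinks, disconnection becomes a rare event and the number of samples needed for a usable estimate explodes, which is what motivates variance-reduction schemes such as RVR-MC.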


Soft Computing | 2010

Heterogeneous computing scheduling with evolutionary algorithms

Sergio Nesmachnow; Héctor Cancela; Enrique Alba

This work presents sequential and parallel evolutionary algorithms (EAs) applied to the scheduling problem in heterogeneous computing environments, an NP-hard problem of capital relevance in distributed computing. These methods have been specifically designed to provide accurate and efficient solutions by using simple operators that allow them to be later extended for solving realistic problem instances arising in distributed heterogeneous computing (HC) and grid systems. The EAs were implemented using MALLBA, a general-purpose library for combinatorial optimization. Efficient numerical results are reported in the experimental analysis performed on well-known problem instances. The comparative study of scheduling methods shows that the parallel versions of the implemented evolutionary algorithms achieve high problem-solving efficacy, outperforming traditional scheduling heuristics and also improving over previous results reported in the related literature.


IEEE Transactions on Reliability | 1998

Series-parallel reductions in Monte Carlo network-reliability evaluation

Héctor Cancela; M. El Khadiri

Monte Carlo simulation appears to be very useful in the evaluation of the K-terminal reliability of large communication systems, because the exact algorithms are extremely time consuming. This paper shows that the well-known series-parallel reductions can be incorporated in the recursive variance-reduction simulation method, leading to a more efficient estimator, as demonstrated by experimental results.
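The series-parallel reductions themselves are elementary identities on edge failure probabilities; a minimal sketch with illustrative values:

```python
# Series-parallel reductions on independent edge failure probabilities.
# Values below are illustrative, not from the paper.

def series_q(q1, q2):
    # Two edges in series fail if either edge fails.
    return 1.0 - (1.0 - q1) * (1.0 - q2)

def parallel_q(q1, q2):
    # Two parallel edges fail only if both edges fail.
    return q1 * q2

# Chain a-b-c with failure probability 0.1 per edge, reduced to one
# edge, then merged with a direct a-c edge of failure probability 0.2:
q_chain = series_q(0.1, 0.1)        # ≈ 0.19
q_total = parallel_q(q_chain, 0.2)  # ≈ 0.038
```

Applying such reductions shrinks the network before (and during) simulation, which is how they make the recursive variance-reduction estimator cheaper per sample.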


IEEE Transactions on Reliability | 2011

Polynomial-Time Topological Reductions That Preserve the Diameter Constrained Reliability of a Communication Network

Héctor Cancela; M. El Khadiri; Louis Petingi

We propose a polynomial-time algorithm for detecting and deleting classes of network edges which are irrelevant in the evaluation of the source-to-terminal diameter-constrained network reliability parameter. As evaluating this parameter is known to be an NP-hard problem, the proposed procedure may lead to important computational gains when combined with an exact method to calculate the reliability. For illustration, we integrate this algorithm within an exact recursive factorization approach based upon Moskowitz's edge decomposition. Experiments conducted on different real-world topologies confirmed a substantial computational gain, except when highly dense graphs were tested.


International Transactions in Operational Research | 2013

On computing the 2-diameter-constrained K-reliability of networks

Eduardo Alberto Canale; Héctor Cancela; Franco Robledo; Gerardo Rubino; Pablo Sartor

This article considers a communication network modeled by a graph and a distinguished set K of terminal nodes. We assume that the nodes never fail, but the edges fail randomly and independently with known probabilities. The classical K-reliability problem computes the probability that the subnetwork composed only of the surviving edges is such that all terminals communicate with each other. The d-diameter-constrained K-reliability generalization also imposes the constraint that each pair of terminals must be the extremes of a surviving path of length no greater than d. It allows modeling communication network situations in which limits exist on the acceptable delay times or on the number of hops that packets can undergo. Both problems have been shown to be NP-hard, yet the complexity of certain subproblems remains undetermined. In particular, when d = 2, it was an open question whether the instances with a fixed number of terminals were solvable in polynomial time. In this paper, we prove that when d = 2 and the number of terminals is a fixed parameter (i.e. not an input) the problem turns out to be polynomial in the number of nodes of the network (in fact linear). We also introduce an algorithm to compute these cases in such time and provide two numerical examples.
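For intuition, the d = 2 diameter-constrained K-reliability of a small network can be evaluated by brute-force state enumeration, which is exponential in the number of edges (unlike the linear-time algorithm of the paper). The graph, probabilities, and terminal set below are illustrative assumptions:

```python
from itertools import combinations, product

# Toy network: each edge operates independently with probability P.
NODES = [0, 1, 2, 3]
EDGES = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
P = 0.9
K = [0, 3]          # terminal set

def pair_ok(u, v, up):
    # Terminals u and v must be joined by a surviving path of length <= 2.
    up = set(up)
    if (min(u, v), max(u, v)) in up:                 # length-1 path
        return True
    for w in NODES:                                  # length-2 path via w
        if w in (u, v):
            continue
        if (min(u, w), max(u, w)) in up and (min(w, v), max(w, v)) in up:
            return True
    return False

rel = 0.0
for states in product([0, 1], repeat=len(EDGES)):    # all edge states
    up = [e for e, s in zip(EDGES, states) if s]
    prob = 1.0
    for s in states:
        prob *= P if s else (1 - P)
    if all(pair_ok(u, v, up) for u, v in combinations(K, 2)):
        rel += prob
```

Note that in this instance the edge (1, 2) never lies on a path of length at most 2 between the terminals, so it is irrelevant to the result, the kind of structure the polynomial-time reductions in the related work above exploit.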


Archive | 2010

Analysis and Improvements of Path-based Methods for Monte Carlo Reliability Evaluation of Static Models

Héctor Cancela; Pierre L’Ecuyer; Matias David Lee; Gerardo Rubino; Bruno Tuffin

Many dependability analyses are performed using static models, that is, models where time is not an explicit variable. In these models, the system and its components are considered at a fixed point in time, and the word “static” means that the past or future behavior is not relevant for the analysis. Examples of such models are reliability diagrams, or fault trees. The main difficulty when evaluating the dependability of these systems is the combinatorial explosion associated with exact solution techniques. For large and complex models, one may turn to Monte Carlo methods, but these methods have to be modified or adapted in the presence of rare important events, which are commonplace in reliability and dependability systems. This chapter examines a recently proposed method designed to deal with the problem of estimating reliability metrics for highly dependable systems where the failure of the whole system is a rare event. We focus on the robustness properties of estimators. We also propose improvements to the original technique, including its combination with randomized quasi-Monte Carlo, for which we prove that the variance converges at a faster rate (asymptotically) than for standard Monte Carlo.


global communications conference | 2007

Perceptual Quality in P2P Multi-Source Video Streaming Policies

Héctor Cancela; Pablo Rodríguez-Bocca; Gerardo Rubino

This paper explores a key aspect of the problem of sending real-time video over the Internet using a P2P architecture. The main difficulty with such a system is the high dynamics of the P2P topology, because of the frequent moves of the nodes leaving and entering the network. We consider a multi-source approach where the stream is decomposed into several flows sent by different peers to each client. Using the recently proposed PSQA technology for evaluating automatically and accurately the perceived quality at the client side, the paper focuses on the consequences of the way the stream is decomposed on the resulting quality. Our main contribution is to provide a global methodology that can be used to design such a system, illustrated by looking at three extreme cases. Our approach allows the design to be driven by the ultimate target, the perceived quality (or Quality of Experience), instead of standard but indirect metrics such as loss rates, delays, reliability, etc. We also propose an improved version of PSQA obtained by considering the video sequences at frame level, instead of the packet-level approach of previous works.

Collaboration


Top co-authors of Héctor Cancela:

Franco Robledo (University of the Republic)

Pablo Romero (University of the Republic)

Sergio Nesmachnow (University of the Republic)

Pablo Sartor (Universidad de Montevideo)

Martín Pedemonte (University of the Republic)

Gastón Notte (University of the Republic)