Marco Antonio Montes de Oca
University of Delaware
Publications
Featured research published by Marco Antonio Montes de Oca.
IEEE Transactions on Evolutionary Computation | 2009
Marco Antonio Montes de Oca; Thomas Stützle; Mauro Birattari; Marco Dorigo
During the last decade, many variants of the original particle swarm optimization (PSO) algorithm have been proposed. In many cases, the difference between two variants can be seen as an algorithmic component being present in one variant but not in the other. In the first part of the paper, we present the results and insights obtained from a detailed empirical study of several PSO variants from a component-difference point of view. In the second part of the paper, we propose a new PSO algorithm that combines a number of algorithmic components that showed distinct advantages in the experimental study concerning optimization speed and reliability. We call this composite algorithm Frankenstein's PSO, in analogy to the character of Mary Shelley's novel. The performance evaluation of Frankenstein's PSO shows that effective optimizers can be designed by integrating components in novel ways.
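For readers unfamiliar with the baseline that such variants extend, the sketch below shows a canonical global-best PSO update on a toy objective. It is not Frankenstein's PSO itself; the inertia weight, acceleration coefficients, and the `sphere` objective are illustrative assumptions only.

```python
import numpy as np

def sphere(x):
    """Toy objective: minimize the sum of squares."""
    return float(np.sum(x ** 2))

def basic_pso(obj, dim=10, n_particles=30, iters=200,
              w=0.729, c1=1.49445, c2=1.49445, bound=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-bound, bound, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))                      # velocities
    pbest = x.copy()                                      # personal bests
    pbest_f = np.array([obj(p) for p in pbest])
    gbest = pbest[np.argmin(pbest_f)].copy()              # global best

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Standard velocity update: inertia + cognitive + social terms.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, -bound, bound)
        f = np.array([obj(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

best_x, best_f = basic_pso(sphere)
print(best_f)
```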
IEEE Transactions on Systems, Man, and Cybernetics, Part B | 2011
Marco Antonio Montes de Oca; Thomas Stützle; Ken Van den Enden; Marco Dorigo
Incremental social learning (ISL) was proposed as a way to improve the scalability of systems composed of multiple learning agents. In this paper, we show that ISL can be very useful for improving the performance of population-based optimization algorithms. Our study focuses on two particle swarm optimization (PSO) algorithms: a) the incremental particle swarm optimizer (IPSO), a PSO algorithm with a growing population size in which the initial position of new particles is biased toward the best-so-far solution, and b) the incremental particle swarm optimizer with local search (IPSOLS), in which solutions are further improved through a local search procedure. We first derive analytically the probability density function induced by the proposed initialization rule applied to new particles. Then, we compare the performance of IPSO and IPSOLS on a set of benchmark functions with that of other PSO algorithms (with and without local search) and a random restart local search algorithm. Finally, we measure the benefits of using incremental social learning in PSO algorithms by running IPSO and IPSOLS on problems with different fitness-distance correlations.
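The initialization rule is the core of IPSO's incremental growth. A minimal sketch of one plausible form is below: a uniformly random point is moved a random fraction of the way toward the best-so-far solution. This specific rule is an assumption for illustration; the paper derives the exact probability density induced by its own rule.

```python
import numpy as np

def biased_init(best_so_far, lower, upper, rng):
    """Initialize a new particle: draw a uniform random point, then move it
    a random fraction of the way toward the best-so-far solution.
    (Illustrative rule; see the paper for the exact formulation.)"""
    x_rand = rng.uniform(lower, upper, size=best_so_far.shape)
    b = rng.random()                      # bias strength in [0, 1]
    return x_rand + b * (best_so_far - x_rand)

rng = np.random.default_rng(42)
best = np.array([1.0, -2.0, 0.5])
new_particle = biased_init(best, lower=-5.0, upper=5.0, rng=rng)
print(new_particle)
```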
IEEE Transactions on Evolutionary Computation | 2014
Tianjun Liao; Krzysztof Socha; Marco Antonio Montes de Oca; Thomas Stützle; Marco Dorigo
In this paper, we introduce ACOMV: an ant colony optimization (ACO) algorithm that extends the ACOR algorithm for continuous optimization to tackle mixed-variable optimization problems. In ACOMV, the decision variables of an optimization problem can be explicitly declared as continuous, ordinal, or categorical, which allows the algorithm to treat them adequately. ACOMV includes three solution generation mechanisms: a continuous optimization mechanism (ACOR), a continuous relaxation mechanism (ACOMV-o) for ordinal variables, and a categorical optimization mechanism (ACOMV-c) for categorical variables. Together, these mechanisms allow ACOMV to tackle mixed-variable optimization problems. We also define a novel procedure to generate artificial mixed-variable benchmark functions, and we use it to automatically tune ACOMV's parameters. The tuned ACOMV is tested on various real-world continuous and mixed-variable engineering optimization problems. Comparisons with results from the literature demonstrate the effectiveness and robustness of ACOMV on mixed-variable optimization problems.
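A rough sketch of how a mixed-variable solver can treat the three variable types differently is shown below. It mimics the division of labor the abstract describes (Gaussian sampling for continuous variables, continuous relaxation plus rounding for ordinal ones, weighted sampling for categorical ones) but is not the ACOMV implementation; all distributions, weights, and the example variables are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_continuous(mean, sigma):
    # Continuous variable: sample around a guiding (archive) solution.
    return rng.normal(mean, sigma)

def sample_ordinal(mean_index, sigma, n_levels):
    # Ordinal variable: relax to a continuous index, sample, then round back.
    raw = rng.normal(mean_index, sigma)
    return int(np.clip(round(raw), 0, n_levels - 1))

def sample_categorical(weights):
    # Categorical variable: weighted choice; no ordering is assumed.
    w = np.asarray(weights, dtype=float)
    return int(rng.choice(len(w), p=w / w.sum()))

# Hypothetical mixed-variable candidate: (diameter, thickness level, material)
candidate = (
    sample_continuous(mean=2.5, sigma=0.3),
    sample_ordinal(mean_index=2, sigma=1.0, n_levels=5),
    sample_categorical(weights=[0.5, 0.3, 0.2]),
)
print(candidate)
```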
Autonomous Search | 2011
Thomas Stützle; Manuel López-Ibáñez; Paola Pellegrini; Michael Maur; Marco Antonio Montes de Oca; Mauro Birattari; Marco Dorigo
This chapter reviews the approaches that have been studied for the online adaptation of the parameters of ant colony optimization (ACO) algorithms, that is, the variation of parameter settings while solving an instance of a problem. We classify these approaches according to the main classes of online parameter-adaptation techniques. One conclusion of this review is that the available approaches do not exploit an in-depth understanding of the effect of individual parameters on the behavior of ACO algorithms. Therefore, this chapter also presents results of an empirical study of the solution quality over computation time for Ant Colony System and MAX-MIN Ant System, two well-known ACO algorithms. The first part of this study provides insights into the behavior of the algorithms as a function of fixed parameter settings. One conclusion is that the best fixed parameter settings of MAX-MIN Ant System depend strongly on the available computation time. The second part of the study uses these insights to propose simple, pre-scheduled parameter variations. Our experimental results show that such pre-scheduled parameter variations can dramatically improve the anytime performance of MAX-MIN Ant System.
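Pre-scheduled variation amounts to changing a parameter value at fixed points of the run rather than adapting it from search feedback. The snippet below sketches that idea for a hypothetical heuristic-weight parameter; the breakpoints and values are illustrative assumptions, not the schedules reported in the study.

```python
def scheduled_beta(iteration, max_iterations):
    """Pre-scheduled parameter variation: the value depends only on elapsed
    computation (here, the iteration count), not on search feedback.
    Breakpoints and values are illustrative assumptions."""
    progress = iteration / max_iterations
    if progress < 0.25:
        return 5.0   # emphasize heuristic information early in the run
    elif progress < 0.75:
        return 2.0
    else:
        return 1.0   # rely more on pheromone information late in the run

# Example: inspect the schedule over a 1000-iteration run.
for it in (0, 250, 750, 999):
    print(it, scheduled_beta(it, 1000))
```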
Genetic and Evolutionary Computation Conference | 2011
Tianjun Liao; Marco Antonio Montes de Oca; Doğan Aydın; Thomas Stützle; Marco Dorigo
ACOR is one of the most popular ant colony optimization algorithms for tackling continuous optimization problems. In this paper, we propose IACOR-LS, a variant of ACOR that uses local search and features a growing solution archive. We experiment with Powell's conjugate direction set, Powell's BOBYQA, and Lin-Yu Tseng's Mtsls1 methods as local search procedures. Automatic parameter tuning results show that IACOR-LS with Mtsls1 (IACOR-Mtsls1) is not only a significant improvement over ACOR, but is also competitive with the state-of-the-art algorithms described in a recent special issue of the Soft Computing journal. Further experimentation with IACOR-Mtsls1 on an extended benchmark function suite, which includes functions from both the Soft Computing special issue and the IEEE 2005 Congress on Evolutionary Computation, demonstrates its good performance on continuous optimization problems.
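A compressed sketch of the hybrid structure described here, interleaving archive-based sampling with a local search call, appears below. It uses SciPy's Powell method as a stand-in for the local search procedures named in the abstract; the archive growth rule, Gaussian sampling, and all parameter values are illustrative assumptions rather than the IACOR-LS settings.

```python
import numpy as np
from scipy.optimize import minimize

def sphere(x):
    return float(np.sum(x ** 2))

rng = np.random.default_rng(1)
dim, bound = 10, 5.0
archive = [rng.uniform(-bound, bound, dim) for _ in range(4)]  # initial archive

for iteration in range(50):
    archive.sort(key=sphere)                  # best solution first
    # Sample a new candidate as a Gaussian perturbation of an archive member.
    guide = archive[rng.integers(len(archive))]
    candidate = guide + rng.normal(0.0, 0.5, dim)
    # Refine the candidate with a budget-limited local search (Powell's method).
    candidate = minimize(sphere, candidate, method="Powell",
                         options={"maxfev": 200}).x
    # Replace the worst archive member if the candidate improves on it.
    if sphere(candidate) < sphere(archive[-1]):
        archive[-1] = candidate
    # Grow the archive periodically, seeding new entries near the current best.
    if iteration % 10 == 9:
        archive.append(archive[0] + rng.normal(0.0, 0.1, dim))

print(sphere(min(archive, key=sphere)))
```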
European Journal of Operational Research | 2014
Tianjun Liao; Thomas Stützle; Marco Antonio Montes de Oca; Marco Dorigo
In this article, we propose UACOR, a unified ant colony optimization (ACO) algorithm for continuous optimization. UACOR includes algorithmic components from ACOR, DACOR, and IACOR-LS, three ACO algorithms for continuous optimization that have been proposed previously. Thus, it can be used to instantiate each of these three earlier algorithms; in addition, new continuous ACO algorithms that have not been considered before in the literature can be generated from UACOR. In fact, UACOR allows the use of automatic algorithm configuration techniques to derive new ACO algorithms automatically. To show the benefits of UACOR's flexibility, we automatically configure two new ACO algorithms, UACOR-s and UACOR-c, and evaluate them on two sets of benchmark functions from a recent special issue of the Soft Computing (SOCO) journal and the IEEE 2005 Congress on Evolutionary Computation (CEC'05), respectively. We show that UACOR-s is competitive with the best of the 19 algorithms benchmarked on the SOCO benchmark set, and that UACOR-c performs better than IPOP-CMA-ES and statistically significantly better than five other algorithms benchmarked on the CEC'05 set. These results show the high potential ACO algorithms have for continuous optimization and suggest that automatic algorithm configuration is a viable approach for designing state-of-the-art continuous optimizers.
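The key design idea is that a single parameterized template can instantiate several distinct algorithms, so a configurator can search over configurations instead of hand-designing variants. The toy sketch below shows that pattern with a configuration dictionary selecting among components; the component names, options, and search loop are hypothetical placeholders, not UACOR's actual configuration space.

```python
import numpy as np

# Toy "unified algorithm" pattern: one template, configuration selects components.
# Component names and options here are hypothetical placeholders.

def make_sampler(kind):
    if kind == "gaussian":
        return lambda rng, guide, sigma: rng.normal(guide, sigma)
    if kind == "cauchy":
        return lambda rng, guide, sigma: guide + sigma * rng.standard_cauchy(guide.shape)
    raise ValueError(f"unknown sampler: {kind}")

def run(config, obj, dim=5, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    sample = make_sampler(config["sampler"])
    best = rng.uniform(-5, 5, dim)
    sigma = config["sigma"]
    for _ in range(iters):
        cand = sample(rng, best, sigma)
        if obj(cand) < obj(best):
            best = cand
        sigma *= config["sigma_decay"]          # another tunable component
    return best, obj(best)

sphere = lambda x: float(np.sum(x ** 2))
# Two different "algorithms" from the same template, obtained by configuration alone.
print(run({"sampler": "gaussian", "sigma": 1.0, "sigma_decay": 0.99}, sphere)[1])
print(run({"sampler": "cauchy", "sigma": 0.5, "sigma_decay": 0.995}, sphere)[1])
```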
Swarm Intelligence | 2011
Marco Antonio Montes de Oca; Eliseo Ferrante; Alexander Scheidler; Carlo Pinciroli; Mauro Birattari; Marco Dorigo
Collective decision-making is a process whereby the members of a group decide on a course of action by consensus. In this paper, we propose a collective decision-making mechanism for robot swarms deployed in scenarios in which robots can choose between two actions that have the same effects but that have different execution times. The proposed mechanism allows a swarm composed of robots with no explicit knowledge about the difference in execution times between the two actions to choose the one with the shorter execution time. We use an opinion formation model that captures important elements of the scenarios in which the proposed mechanism can be used in order to predict the system’s behavior. The model predicts that when the two actions have different average execution times, the swarm chooses with high probability the action with the shorter average execution time. We validate the model’s predictions through a swarm robotics experiment in which robot teams must choose one of two paths of different length that connect two locations. Thanks to the proposed mechanism, a swarm made of robot teams that do not measure time or distance is able to choose the shorter path.
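The mechanism can be illustrated with a small simulation: agents hold one of two opinions, newly formed teams adopt their internal majority, and teams executing the faster action return to the decision pool sooner, so that opinion is over-represented when new teams form. The simulation below is a simplified caricature of this differential-latency effect; team size, population size, and action durations are illustrative assumptions, not the robot experiment's parameters.

```python
import random

random.seed(3)

N_AGENTS, TEAM_SIZE, STEPS = 30, 3, 20000
DURATION = {"A": 80, "B": 120}           # action A is faster (shorter path)

# Each agent starts with a random opinion; all agents start in the pool.
opinions = {i: random.choice("AB") for i in range(N_AGENTS)}
pool = list(range(N_AGENTS))
busy = []                                 # list of (finish_time, team_members)

for t in range(STEPS):
    # Teams that finished their action return to the decision pool.
    done = [team for team in busy if team[0] <= t]
    busy = [team for team in busy if team[0] > t]
    for _, members in done:
        pool.extend(members)
    # Form new teams from the pool; each team adopts its internal majority opinion.
    random.shuffle(pool)
    while len(pool) >= TEAM_SIZE:
        members = [pool.pop() for _ in range(TEAM_SIZE)]
        majority = max("AB", key=lambda o: sum(opinions[m] == o for m in members))
        for m in members:
            opinions[m] = majority
        # Execution time depends on the chosen action (plus a little noise).
        busy.append((t + DURATION[majority] + random.randint(0, 20), members))

share_A = sum(o == "A" for o in opinions.values()) / N_AGENTS
print(f"final share holding the faster opinion A: {share_A:.2f}")
```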
IEEE Congress on Evolutionary Computation | 2009
Marco Antonio Montes de Oca; Jorge Peña; Thomas Stützle; Carlo Pinciroli; Marco Dorigo
Particle swarm optimization (PSO) is a swarm intelligence technique originally inspired by models of flocking and of social influence that assumed homogeneous individuals. During its evolution to become a practical optimization tool, some heterogeneous variants have been proposed. However, heterogeneity in PSO algorithms has never been explicitly studied and some of its potential effects have therefore been overlooked. In this paper, we identify some of the most relevant types of heterogeneity that can be ascribed to particle swarms. A number of particle swarms are classified according to the type of heterogeneity they exhibit, which allows us to identify some gaps in current knowledge about heterogeneity in PSO algorithms. Motivated by these observations, we carry out an experimental study of two heterogeneous particle swarms each of which is composed of two kinds of particles. Directions for future developments on heterogeneous particle swarms are outlined.
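One simple way to make a swarm heterogeneous in the sense discussed here is to give different particles different update parameters. The fragment below sketches a swarm in which half of the particles are "explorers" and half are "exploiters", distinguished only by their acceleration coefficients; the split and the values are illustrative assumptions, not the configurations studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n, dim = 20, 5
x = rng.uniform(-5, 5, (n, dim))
v = np.zeros((n, dim))
pbest, gbest = x.copy(), x[0].copy()

# Heterogeneity by parameterization: two particle kinds with different
# acceleration coefficients (values are illustrative assumptions).
kind = np.array([0] * (n // 2) + [1] * (n - n // 2))   # 0 = explorer, 1 = exploiter
c1 = np.where(kind == 0, 2.5, 0.5)[:, None]            # cognitive coefficient
c2 = np.where(kind == 0, 0.5, 2.5)[:, None]            # social coefficient
w = 0.729

# A single heterogeneous velocity/position update step.
r1, r2 = rng.random((n, dim)), rng.random((n, dim))
v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
x = x + v
print(x.shape)
```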
Soft Computing | 2011
Marco Antonio Montes de Oca; Doğan Aydın; Thomas Stützle
The development cycle of high-performance optimization algorithms requires the algorithm designer to make several design decisions. These decisions range from implementation details to the setting of parameter values for testing intermediate designs. Proper parameter setting can be crucial for the effective assessment of algorithmic components because a bad parameter setting can make a good algorithmic component perform poorly. This situation may lead the designer to discard promising components that just happened to be tested with bad parameter settings. Automatic parameter tuning techniques are being used by practitioners to obtain peak performance from already designed algorithms. However, automatic parameter tuning also plays a crucial role during the development cycle of optimization algorithms. In this paper, we present a case study of a tuning-in-the-loop approach for redesigning a particle swarm-based optimization algorithm for tackling large-scale continuous optimization problems. Rather than just presenting the final algorithm, we describe the whole redesign process. Finally, we study the scalability behavior of the final algorithm in the context of this special issue.
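The tuning-in-the-loop idea is that each intermediate design is evaluated only after its parameters have been tuned, so that design decisions are not biased by accidental parameter choices. The sketch below shows the skeleton of such a loop using plain random search as the tuner; the tuner, parameter ranges, candidate designs, and the simulated evaluation are hypothetical stand-ins (the paper uses a dedicated tuning procedure).

```python
import random

random.seed(0)

def evaluate(design, params, n_runs=10):
    """Hypothetical stand-in: run the candidate design with the given parameters
    on training instances and return mean solution quality (lower is better).
    Here the evaluation is simulated with a noisy score."""
    base = {"design_A": 1.0, "design_B": 0.8}[design]
    penalty = abs(params["w"] - 0.7) + abs(params["c"] - 1.5)
    return sum(base + penalty + random.gauss(0, 0.05) for _ in range(n_runs)) / n_runs

def tune(design, budget=50):
    """Plain random-search tuner: each candidate design gets its parameters
    tuned before the designs are compared (tuning in the loop)."""
    best_p, best_q = None, float("inf")
    for _ in range(budget):
        params = {"w": random.uniform(0.0, 1.0), "c": random.uniform(0.0, 3.0)}
        q = evaluate(design, params)
        if q < best_q:
            best_p, best_q = params, q
    return best_p, best_q

# Compare candidate designs only after tuning each one.
for design in ("design_A", "design_B"):
    params, quality = tune(design)
    print(design, round(quality, 3), params)
```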
Genetic and Evolutionary Computation Conference | 2008
Marco Antonio Montes de Oca; Thomas Stützle
The fully informed particle swarm optimization algorithm (FIPS) is very sensitive to changes in the population topology. The velocity update rule used in FIPS considers all the neighbors of a particle when updating its velocity, instead of just the best one as is done in most other variants. It has been argued that this rule induces random behavior of the particle swarm when a fully connected topology is used, which could explain the often observed poor performance of the algorithm under that circumstance. In this paper, we study experimentally the convergence behavior of the particles in FIPS when using topologies with different levels of connectivity. We show that the particles tend to search a region whose size decreases as the connectivity of the population topology increases. We therefore put forward the idea that spatial convergence, and not random behavior, is the cause of the poor performance of FIPS with a fully connected topology. The practical implications of this result are explored.
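In a commonly cited form, the FIPS rule can be written as v_i ← χ(v_i + Σ_{k∈N_i} U(0, φ/|N_i|) ⊗ (p_k − x_i)), where the sum runs over all of particle i's neighbors rather than only the best one. The sketch below implements that form for a single particle; the constriction factor χ and total acceleration φ use the values usually quoted in the literature, and the toy topology and data are illustrative assumptions.

```python
import numpy as np

def fips_velocity_update(v_i, x_i, neighbor_pbests, chi=0.7298, phi=4.1, rng=None):
    """FIPS-style velocity update (common formulation): every neighbor's
    personal best contributes, with the total acceleration phi split
    equally among the |N_i| neighbors."""
    if rng is None:
        rng = np.random.default_rng()
    n_nbrs = len(neighbor_pbests)
    total = np.zeros_like(v_i)
    for p_k in neighbor_pbests:
        coeff = rng.uniform(0.0, phi / n_nbrs, size=x_i.shape)
        total += coeff * (p_k - x_i)
    return chi * (v_i + total)

rng = np.random.default_rng(5)
dim = 4
x_i, v_i = rng.uniform(-1, 1, dim), np.zeros(dim)
neighbors = [rng.uniform(-1, 1, dim) for _ in range(6)]   # a 6-neighbor topology
v_new = fips_velocity_update(v_i, x_i, neighbors, rng=rng)
print(v_new)
```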