Cláudio F. Lima
University of the Algarve
Publications
Featured research published by Cláudio F. Lima.
Archive | 2007
Fernando G. Lobo; Cláudio F. Lima; Zbigniew Michalewicz
One of the main difficulties of applying an evolutionary algorithm (or, as a matter of fact, any heuristic method) to a given problem is to decide on an appropriate set of parameter values. Typically these are specified before the algorithm is run and include population size, selection rate, operator probabilities, not to mention the representation and the operators themselves. This book gives the reader a solid perspective on the different approaches that have been proposed to automate the control of these parameters, as well as an understanding of their interactions. The book covers a broad area of evolutionary computation, including genetic algorithms, evolution strategies, genetic programming, and estimation of distribution algorithms, and also discusses the issues of specific parameters used in parallel implementations, multi-objective evolutionary algorithms, and practical considerations for real-world applications. It is a recommended read for researchers and practitioners of evolutionary computation and heuristic methods.
genetic and evolutionary computation conference | 2005
Fernando G. Lobo; Cláudio F. Lima
This paper reviews the topic of population sizing in genetic algorithms. It starts by revisiting theoretical models which rely on a facetwise decomposition of genetic algorithms, and then moves on to various self-adjusting population sizing schemes that have been proposed in the literature. The paper ends with recommendations for those who design and compare adaptive population sizing schemes for genetic algorithms.
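A minimal, self-contained sketch of the flavor of self-adjusting scheme surveyed in the paper, loosely following the parameter-less GA idea of maintaining populations of doubling size: the toy OneMax fitness, the crude generation step, and the simplified schedule and bookkeeping below are illustrative stand-ins, not any of the schemes reviewed by the authors.

    import random

    def onemax(x):                                     # toy fitness: number of ones
        return sum(x)

    def random_population(size, n_bits):
        return [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(size)]

    def one_generation(pop):
        # Crude GA step: two binary tournaments per child plus uniform crossover.
        children = []
        for _ in range(len(pop)):
            p1 = max(random.sample(pop, 2), key=onemax)
            p2 = max(random.sample(pop, 2), key=onemax)
            children.append([random.choice(pair) for pair in zip(p1, p2)])
        return children

    def parameterless_ga(base_size=8, n_bits=20, outer_steps=60):
        pops = [random_population(base_size, n_bits)]
        for step in range(1, outer_steps + 1):
            for i in range(len(pops)):
                if step % (2 ** i) == 0:               # pop i runs twice as often as pop i+1
                    pops[i] = one_generation(pops[i])
            if step % (2 ** len(pops)) == 0:           # occasionally start a larger population
                pops.append(random_population(base_size * 2 ** len(pops), n_bits))
            avgs = [sum(onemax(x) for x in p) / len(p) for p in pops]
            pops = [p for i, p in enumerate(pops)      # drop populations overtaken by a larger one
                    if not any(a > avgs[i] for a in avgs[i + 1:])]
        return max((x for p in pops for x in p), key=onemax)

    print(onemax(parameterless_ga()))                  # best OneMax value found by the surviving populations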
electronic commerce | 2009
Tian-Li Yu; David E. Goldberg; Kumara Sastry; Cláudio F. Lima; Martin Pelikan
In many different fields, researchers are often confronted by problems arising from complex systems. Simple heuristics or even enumeration works quite well on small and easy problems; however, to efficiently solve large and difficult problems, proper decomposition is the key. In this paper, interactions between the components of complex systems are investigated and analyzed to shed light on problem decomposition. By recognizing three bare-bones interactions (modularity, hierarchy, and overlap), facet-wise models are developed to dissect and inspect problem decomposition in the context of genetic algorithms. The proposed genetic algorithm design utilizes a matrix representation of an interaction graph to analyze and explicitly decompose the problem. The results from this paper should benefit research both technically and scientifically. Technically, this paper develops an automated dependency structure matrix clustering technique and utilizes it to design a model-building genetic algorithm that learns and delivers the problem structure. Scientifically, the explicit interaction model describes the problem structure very well and helps researchers gain important insights through the explicitness of the procedure.
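As an illustration of the matrix representation mentioned above (not the paper's actual model-building procedure, which derives the matrix from population statistics and clusters it with a dedicated metric), a dependency structure matrix can be filled with pairwise non-linearity estimates and then grouped to suggest a decomposition; the perturbation test and greedy grouping below are simplified stand-ins.

    import itertools

    def pairwise_dsm(fitness, x):
        # Entry (i, j) estimates how strongly variables i and j interact: it is
        # non-zero when their joint effect on the fitness of the reference
        # solution x differs from the sum of their individual effects.
        n = len(x)
        dsm = [[0.0] * n for _ in range(n)]
        f0 = fitness(x)
        for i, j in itertools.combinations(range(n), 2):
            def flipped(*idx):
                y = list(x)
                for k in idx:
                    y[k] = 1 - y[k]
                return fitness(y)
            delta_i = flipped(i) - f0
            delta_j = flipped(j) - f0
            delta_ij = flipped(i, j) - f0
            dsm[i][j] = dsm[j][i] = abs(delta_ij - delta_i - delta_j)
        return dsm

    def greedy_clusters(dsm, threshold=1e-9):
        # Crude grouping: put a variable together with every still-unassigned
        # variable it interacts with above the threshold.
        unassigned, clusters = set(range(len(dsm))), []
        while unassigned:
            seed = unassigned.pop()
            cluster = {seed} | {j for j in unassigned if dsm[seed][j] > threshold}
            unassigned -= cluster
            clusters.append(sorted(cluster))
        return clusters

    # Example: two independent 2-bit XOR blocks yield two clusters.
    f = lambda x: (x[0] ^ x[1]) + (x[2] ^ x[3])
    print(greedy_clusters(pairwise_dsm(f, [0, 0, 0, 0])))   # expected: [[0, 1], [2, 3]]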
Parameter Setting in Evolutionary Algorithms | 2007
Fernando G. Lobo; Cláudio F. Lima
Summary. This chapter presents a review of adaptive population sizing schemes used in genetic algorithms. We start by briefly revisiting theoretical models which rely on a facetwise design decomposition, and then move on to various self-adjusting population sizing schemes that have been proposed in the literature. For each method, the major advantages and disadvantages are discussed. The chapter ends with recommendations for those who design and compare self-adjusting population sizing mechanisms for genetic and evolutionary algorithms.
congress on evolutionary computation | 2007
Cláudio F. Lima; Martin Pelikan; David E. Goldberg; Fernando G. Lobo; Kumara Sastry; Mark Hauschild
The Bayesian optimization algorithm (BOA) uses Bayesian networks to learn linkages between the decision variables of an optimization problem. This paper studies the influence of different selection and replacement methods on the accuracy of linkage learning in BOA. Results on concatenated m-k deceptive trap functions show that the model accuracy depends to a large extent on the choice of selection method and to a lesser extent on the replacement strategy used. Specifically, it is shown that linkage learning in BOA is more accurate with truncation selection than with tournament selection. The choice of replacement strategy is important when tournament selection is used, but it is not relevant when using truncation selection. On the other hand, if performance is our main concern, tournament selection and restricted tournament replacement should be preferred. These results aim to provide practitioners with useful information about the best way to tune BOA with respect to structural model accuracy and overall performance.
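The concatenated m-k deceptive traps used as test functions are standard: the string is split into m non-overlapping k-bit blocks and each block is scored by a trap of unitation. The sketch below uses a common textbook form of the trap (full reward for all ones, deceptive slope toward all zeros), which may differ in constants from the exact functions used in the paper.

    def trap_k(block):
        k, u = len(block), sum(block)        # u = number of ones (unitation)
        return k if u == k else k - 1 - u    # optimum at all ones, deceptive slope toward all zeros

    def concatenated_trap(x, k=5):
        assert len(x) % k == 0, "string length must be a multiple of k"
        return sum(trap_k(x[i:i + k]) for i in range(0, len(x), k))

    # A 15-bit string is treated as m = 3 independent blocks of k = 5 bits.
    print(concatenated_trap([1] * 15))   # 15: the global optimum (all ones)
    print(concatenated_trap([0] * 15))   # 12: the deceptive attractor (all zeros)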
genetic and evolutionary computation conference | 2006
Kumara Sastry; Cláudio F. Lima; David E. Goldberg
The paper presents an evaluation-relaxation scheme in which a fitness surrogate automatically adapts to the problem structure, and the partial contributions of subsolutions to the fitness of an individual are estimated efficiently and accurately. In particular, the probabilistic model built by the extended compact genetic algorithm is used to infer the structural form of the surrogate, and a least-squares method is used to estimate the coefficients of the surrogate. Using the surrogate avoids the need for expensive fitness evaluation for some of the solutions, and thereby yields significant efficiency enhancement. Results show that a surrogate which automatically adapts to problem knowledge mined from probabilistic models yields substantial speedups (1.75 to 3.1) on a class of boundedly difficult additively decomposable problems with and without additive Gaussian noise. The speedup provided by the surrogate increases with the number of substructures, substructure complexity, and noise-to-signal ratio.
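A hedged sketch of the surrogate idea, assuming the variable partition is given (in the paper it comes from the eCGA model): the fitness is approximated as a sum of per-group contributions, one coefficient per configuration of each group, estimated by least squares from already-evaluated individuals. The partition, toy fitness, and function names below are illustrative only.

    import itertools
    import numpy as np

    def build_surrogate(pop, fitnesses, groups):
        # One feature per (group, configuration) pair; its coefficient plays the
        # role of that subsolution's partial fitness contribution.
        features = [(g, cfg) for g in groups
                    for cfg in itertools.product((0, 1), repeat=len(g))]
        def encode(x):
            return [1.0 if tuple(x[i] for i in g) == cfg else 0.0 for g, cfg in features]
        A = np.array([encode(x) for x in pop])
        coeffs, *_ = np.linalg.lstsq(A, np.array(fitnesses, dtype=float), rcond=None)
        return lambda x: float(np.dot(encode(x), coeffs))

    # Toy usage: the true fitness is additively decomposable over the two groups,
    # so the surrogate recovers it from the evaluated strings.
    groups = [(0, 1), (2, 3)]
    true_f = lambda x: (x[0] & x[1]) * 3 + (x[2] | x[3])
    pop = [list(bits) for bits in itertools.product((0, 1), repeat=4)]
    surrogate = build_surrogate(pop, [true_f(x) for x in pop], groups)
    print(round(surrogate([1, 1, 0, 1]), 2))   # matches true_f([1, 1, 0, 1]) == 4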
parallel problem solving from nature | 2006
Cláudio F. Lima; Martin Pelikan; Kumara Sastry; Martin V. Butz; David E. Goldberg; Fernando G. Lobo
This paper studies the utility of using substructural neighborhoods for local search in the Bayesian optimization algorithm (BOA). The probabilistic model of BOA, which automatically identifies important problem substructures, is used to define the structure of the neighborhoods used in local search. Additionally, a surrogate fitness model is considered to evaluate the improvement of the local search steps. The results show that performing substructural local search in BOA significantly reduces the number of generations necessary to converge to optimal solutions and thus provides substantial speedups.
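A sketch of the neighborhood structure only (the paper's version operates inside BOA and can score moves with the surrogate instead of the true fitness): a substructural local-search step enumerates all configurations of one group of linked variables at a time, keeping the rest of the string fixed, and commits the best one before moving to the next group.

    import itertools

    def substructural_local_search(x, fitness, groups):
        x = list(x)
        for g in groups:
            best_cfg, best_val = None, float("-inf")
            for cfg in itertools.product((0, 1), repeat=len(g)):
                y = list(x)
                for pos, bit in zip(g, cfg):
                    y[pos] = bit
                val = fitness(y)                 # a surrogate could be used here instead
                if val > best_val:
                    best_cfg, best_val = cfg, val
            for pos, bit in zip(g, best_cfg):    # commit the best configuration
                x[pos] = bit
        return x

    # Example on two concatenated 5-bit traps with the correct decomposition:
    def trap5(block):
        u = sum(block)
        return 5 if u == 5 else 4 - u

    f = lambda x: trap5(x[0:5]) + trap5(x[5:10])
    print(substructural_local_search([0] * 10, f, [range(0, 5), range(5, 10)]))
    # reaches the all-ones global optimum, which single bit-flip local search cannot do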
genetic and evolutionary computation conference | 2007
Mark Hauschild; Martin Pelikan; Cláudio F. Lima; Kumara Sastry
The hierarchical Bayesian optimization algorithm (hBOA) can solve nearly decomposable and hierarchical problems of bounded difficulty in a robust and scalable manner by building and sampling probabilistic models of promising solutions. This paper analyzes probabilistic models in hBOA on two common test problems: concatenated traps and 2D Ising spin glasses with periodic boundary conditions. We argue that although Bayesian networks with local structures can encode complex probability distributions, analyzing these models in hBOA is relatively straightforward and the results of such analyses may provide practitioners with useful information about their problems. The results show that the probabilistic models in hBOA closely correspond to the structure of the underlying optimization problem, the models do not change significantly in subsequent iterations of BOA, and creating adequate probabilistic models by hand is not straightforward even with complete knowledge of the optimization problem.
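For reference, the second test problem can be stated compactly: a 2D Ising spin glass with periodic boundary conditions assigns each lattice site a spin of +1 or -1, couples it to its right and down neighbours (wrapping around the edges), and asks for a spin configuration of minimum energy. The sketch below draws random couplings purely for illustration; it is not the problem instances used in the paper.

    import random

    def random_couplings(n, seed=0):
        # couplings[(i, j, d)] is the bond from site (i, j) to its neighbour in
        # direction d, where d = 0 means "right" and d = 1 means "down".
        rng = random.Random(seed)
        return {(i, j, d): rng.choice((-1, 1))
                for i in range(n) for j in range(n) for d in (0, 1)}

    def energy(spins, couplings):
        # spins is an n x n grid of +1/-1 values; periodic boundary via modulo.
        n = len(spins)
        e = 0
        for i in range(n):
            for j in range(n):
                e -= couplings[(i, j, 0)] * spins[i][j] * spins[i][(j + 1) % n]
                e -= couplings[(i, j, 1)] * spins[i][j] * spins[(i + 1) % n][j]
        return e

    n = 4
    J = random_couplings(n)
    spins = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(n)]
    print(energy(spins, J))   # the optimization task is to minimise this energy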
genetic and evolutionary computation conference | 2005
Cláudio F. Lima; Kumara Sastry; David E. Goldberg; Fernando G. Lobo
This paper presents an approach to combine competent crossover and mutation operators via probabilistic model building. Both operators are based on the probabilistic model building procedure of the extended compact genetic algorithm (eCGA). The model sampling procedure of eCGA, which mimics the behavior of an idealized recombination---where the building blocks (BBs) are exchanged without disruption---is used as the competent crossover operator. On the other hand, a recently proposed BB-wise mutation operator---which uses the BB partition information to perform local search in the BB space---is used as the competent mutation operator. The resulting algorithm, called hybrid extended compact genetic algorithm (heCGA), makes use of the problem decomposition information for (1) effective recombination of BBs and (2) effective local search in the BB neighborhood. The proposed approach is tested on different problems that combine the core of three well known problem difficulty dimensions: deception, scaling, and noise. The results show that, in the absence of domain knowledge, the hybrid approach is more robust than either single-operator-based approach.
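A hedged sketch of the "competent crossover" half of the approach: in eCGA's marginal product model the variables are partitioned into linkage groups, and offspring are created by drawing each group's configuration independently according to its frequency in the selected population. Model building (finding the partition) and the BB-wise mutation half are omitted; the names and data below are illustrative.

    import random

    def sample_from_mpm(selected, groups, rng=random):
        # Drawing a random selected individual and copying its configuration for
        # a group is equivalent to sampling that group's configuration with
        # probability equal to its frequency in the selected population.
        child = [None] * len(selected[0])
        for g in groups:
            donor = rng.choice(selected)
            for pos in g:
                child[pos] = donor[pos]
        return child

    # Example: with groups (0, 1) and (2, 3), every child is assembled from whole
    # sub-blocks of the selected parents; a block is never split by crossover.
    selected = [[1, 1, 0, 0], [0, 0, 1, 1], [1, 1, 1, 1]]
    print(sample_from_mpm(selected, [(0, 1), (2, 3)]))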
IEEE Transactions on Evolutionary Computation | 2009
Mark Hauschild; Martin Pelikan; Kumara Sastry; Cláudio F. Lima
The hierarchical Bayesian optimization algorithm (hBOA) can solve nearly decomposable and hierarchical problems of bounded difficulty in a robust and scalable manner by building and sampling probabilistic models of promising solutions. This paper analyzes probabilistic models in hBOA on four important classes of test problems: concatenated traps, random additively decomposable problems, hierarchical traps, and two-dimensional Ising spin glasses with periodic boundary conditions. We argue that although the probabilistic models in hBOA can encode complex probability distributions, analyzing these models is relatively straightforward and the results of such analyses may provide practitioners with useful information about their problems. The results show that the probabilistic models in hBOA closely correspond to the structure of the underlying optimization problem, the models do not change significantly in subsequent iterations of BOA, and creating adequate probabilistic models by hand is not straightforward even with complete knowledge of the optimization problem.