Martin Pelikan
University of Missouri
Publications
Featured research published by Martin Pelikan.
Computational Optimization and Applications | 2002
Martin Pelikan; David E. Goldberg; Fernando G. Lobo
This paper summarizes the research on population-based probabilistic search algorithms that model promising solutions by estimating their probability distribution and use the constructed model to guide the exploration of the search space. It situates these algorithms in the field of genetic and evolutionary computation, where they originated, and classifies them into a few classes according to the complexity of the models they use. Algorithms within each class are briefly described and their strengths and weaknesses are discussed.
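As an illustration of the simplest model class such surveys cover, the sketch below (not code from the paper; the OneMax test problem and all parameters are this note's choices) runs a univariate-marginal EDA, where fitness is simply the number of ones in a bit string:

```python
import random

def eda_onemax(n_bits=20, pop_size=100, generations=30, seed=0):
    """Minimal univariate EDA (UMDA-style) on the OneMax problem."""
    rng = random.Random(seed)
    # initialize the population uniformly at random
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # truncation selection: keep the better half by fitness (number of ones)
        pop.sort(key=sum, reverse=True)
        selected = pop[: pop_size // 2]
        # model: estimate the marginal probability of a 1 at each position
        probs = [sum(ind[i] for ind in selected) / len(selected)
                 for i in range(n_bits)]
        # sample a new population from the model
        pop = [[1 if rng.random() < p else 0 for p in probs]
               for _ in range(pop_size)]
    return max(pop, key=sum)

best = eda_onemax()
```

More complex model classes in the survey's taxonomy replace only the "model" and "sample" steps; the select-model-sample loop itself stays the same.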
Archive | 1999
Martin Pelikan; Heinz Muehlenbein
The paper deals with the Bivariate Marginal Distribution Algorithm (BMDA), an extension of the Univariate Marginal Distribution Algorithm (UMDA). It uses pairwise gene dependencies to improve on algorithms that use only simple univariate marginal distributions. BMDA is a special case of the Factorization Distribution Algorithm, but one that requires no problem-specific knowledge in the initial stage; the dependencies are discovered during the optimization process itself. The paper describes BMDA in detail and compares it to several algorithms, including the simple genetic algorithm with different crossover methods and UMDA. For some fitness functions, the relationship between problem size and the number of fitness evaluations until convergence is shown.
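BMDA discovers pair dependencies with a chi-square statistic computed over the selected population. A minimal sketch of such a dependency test on binary strings (an illustrative reconstruction, not the paper's code) is:

```python
from collections import Counter

def pairwise_chi_square(population, i, j):
    """Pearson chi-square statistic between binary gene positions i and j,
    as a BMDA-style dependency test might compute it."""
    n = len(population)
    joint = Counter((ind[i], ind[j]) for ind in population)
    pi = sum(ind[i] for ind in population) / n  # P(x_i = 1)
    pj = sum(ind[j] for ind in population) / n  # P(x_j = 1)
    chi2 = 0.0
    for a in (0, 1):
        for b in (0, 1):
            # expected count under the independence assumption
            expected = n * (pi if a else 1 - pi) * (pj if b else 1 - pj)
            observed = joint[(a, b)]
            if expected > 0:
                chi2 += (observed - expected) ** 2 / expected
    return chi2

# dependent pair: gene 1 copies gene 0; gene 2 is independent of gene 0
pop = [[0, 0, 1], [0, 0, 0], [1, 1, 1], [1, 1, 0]] * 10
strong = pairwise_chi_square(pop, 0, 1)
weak = pairwise_chi_square(pop, 0, 2)
```

Large values of the statistic flag gene pairs whose joint distribution deviates from the product of their marginals, which is what justifies modeling them together.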
Electronic Commerce | 2000
Martin Pelikan; David E. Goldberg; Erick Cantú-Paz
This paper proposes an algorithm that uses an estimate of the joint distribution of promising solutions to generate new candidate solutions. The algorithm is situated in the context of genetic and evolutionary computation and of algorithms based on the estimation of distributions. The proposed algorithm is called the Bayesian Optimization Algorithm (BOA). To estimate the distribution of promising solutions, techniques for modeling multivariate data with Bayesian networks are used. BOA identifies, reproduces, and mixes building blocks up to a specified order, and it is independent of the ordering of the variables in the strings representing the solutions. Moreover, prior information about the problem can be incorporated into the algorithm, but it is not essential. Initial experiments were performed on additively decomposable problems with both nonoverlapping and overlapping building blocks. The proposed algorithm is able to solve all but one of the tested problems in linear or close-to-linear time with respect to the problem size. Except for the maximal order of interactions to be covered, the algorithm does not use any prior knowledge about the problem. BOA represents a step toward alleviating the problem of identifying and mixing building blocks correctly to obtain good solutions for problems with very limited domain information.
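The core mechanism BOA adds over univariate models is fitting a Bayesian network to the selected solutions and then sampling new solutions from it ancestrally. The toy sketch below (this note's simplification, not the paper's algorithm) replaces the learned network structure with a fixed chain x0 → x1 → ... and only estimates the conditional probability tables, which is enough to show the fit-then-sample mechanism:

```python
import random

def fit_chain_model(selected):
    """Estimate P(x0) and P(x_i | x_{i-1}) from selected binary solutions.
    A fixed-chain stand-in for the Bayesian network BOA would learn."""
    n = len(selected[0])
    p0 = sum(s[0] for s in selected) / len(selected)
    cond = []  # cond[i-1][v] = P(x_i = 1 | x_{i-1} = v), Laplace-smoothed
    for i in range(1, n):
        table = []
        for v in (0, 1):
            match = [s for s in selected if s[i - 1] == v]
            ones = sum(s[i] for s in match)
            table.append((ones + 1) / (len(match) + 2))
        cond.append(tuple(table))
    return p0, cond

def sample_chain(p0, cond, rng):
    """Ancestral sampling: draw x0, then each x_i given its parent x_{i-1}."""
    x = [1 if rng.random() < p0 else 0]
    for table in cond:
        x.append(1 if rng.random() < table[x[-1]] else 0)
    return x

rng = random.Random(1)
# selected solutions in which adjacent bits tend to agree
selected = [[0, 0, 0, 0], [1, 1, 1, 1], [1, 1, 1, 0], [0, 0, 1, 1]] * 5
p0, cond = fit_chain_model(selected)
sample = sample_chain(p0, cond, rng)
```

In BOA proper, the parent sets are learned from the data with a network-scoring metric rather than fixed in advance, which is what lets it capture building blocks of higher order.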
Swarm and Evolutionary Computation | 2011
Mark Hauschild; Martin Pelikan
Estimation of distribution algorithms (EDAs) are stochastic optimization techniques that explore the space of potential solutions by building and sampling explicit probabilistic models of promising candidate solutions. This explicit use of probabilistic models in optimization offers some significant advantages over other types of metaheuristics. This paper discusses these advantages and outlines many of the different types of EDAs. In addition, some of the most powerful efficiency enhancement techniques applied to EDAs are discussed and some of the key theoretical results relevant to EDAs are outlined.
Journal of Biological Chemistry | 2010
Michal Hammel; Yaping Yu; Brandi L. Mahaney; Brandon Cai; Ruiqiong Ye; Barry M. Phipps; Robert P. Rambo; Greg L. Hura; Martin Pelikan; Sairei So; Ramin M. Abolfath; David J. Chen; Susan P. Lees-Miller; John A. Tainer
DNA double strand break (DSB) repair by non-homologous end joining (NHEJ) is initiated by DSB detection by Ku70/80 (Ku) and DNA-dependent protein kinase catalytic subunit (DNA-PKcs) recruitment, which promotes pathway progression through poorly defined mechanisms. Here, Ku and DNA-PKcs solution structures alone and in complex with DNA, defined by x-ray scattering, reveal major structural reorganizations that choreograph NHEJ initiation. The Ku80 C-terminal region forms a flexible arm that extends from the DNA-binding core to recruit and retain DNA-PKcs at DSBs. Furthermore, Ku- and DNA-promoted assembly of a DNA-PKcs dimer facilitates trans-autophosphorylation at the DSB. The resulting site-specific autophosphorylation induces a large conformational change that opens DNA-PKcs and promotes its release from DNA ends. These results show how protein and DNA interactions initiate large Ku and DNA-PKcs rearrangements to control DNA-PK biological functions as a macromolecular machine orchestrating assembly and disassembly of the initial NHEJ complex on DNA.
International Journal of Approximate Reasoning | 2002
Martin Pelikan; Kumara Sastry; David E. Goldberg
To solve a wide range of problems, research in black-box optimization faces several important challenges. One of the most important is the design of methods capable of automatically discovering and exploiting problem regularities to ensure an efficient and reliable search for the optimum. This paper discusses the Bayesian optimization algorithm (BOA), which uses Bayesian networks to model promising solutions and to sample new candidate solutions. Using Bayesian networks in combination with population-based genetic and evolutionary search allows BOA to discover and exploit regularities in the form of a problem decomposition. The paper analyzes the applicability of methods for learning Bayesian networks in the context of genetic and evolutionary search and concludes that the combination of the two approaches yields a robust, efficient, and accurate search.
Archive | 2006
Martin Pelikan; Kumara Sastry; Erick Cantú-Paz
Scalable Optimization via Probabilistic Modeling
Parallel Problem Solving from Nature | 2000
Martin Pelikan; David E. Goldberg
This paper introduces clustering as a tool to improve the effects of recombination and to incorporate niching in evolutionary algorithms. Instead of processing the entire set of parent solutions at once, the set is first clustered and the solutions in each cluster are processed separately. This alleviates the problem of symmetry, which is often a major difficulty for evolutionary algorithms in combinatorial optimization. Furthermore, it incorporates niching into genetic algorithms and, for the first time, into probabilistic model-building genetic algorithms. The dynamics and performance of the proposed method are illustrated on example problems.
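The idea can be sketched as follows (an illustrative reconstruction, not the paper's code): cluster the parents with a naive k-means (deterministic, evenly spaced initial centers here, purely for reproducibility), then apply recombination only within each cluster, so symmetric niches are never mixed:

```python
import random

def kmeans(points, k, iters=10):
    """Tiny k-means; returns a list of clusters (lists of points)."""
    # deterministic, evenly spaced initial centers (a simplification)
    centers = [list(points[i * len(points) // k]) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(p, centers[c])))
            clusters[idx].append(p)
        # recompute centroids; keep the old center for an empty cluster
        centers = [[sum(col) / len(cl) for col in zip(*cl)] if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return clusters

def recombine_within_clusters(parents, k, rng):
    """Cluster parents first, then apply uniform crossover only inside
    each cluster, so symmetric solutions are never mixed across niches."""
    offspring = []
    for cluster in kmeans(parents, k):
        for _ in range(len(cluster)):
            a, b = rng.choice(cluster), rng.choice(cluster)
            offspring.append([x if rng.random() < 0.5 else y
                              for x, y in zip(a, b)])
    return offspring

rng = random.Random(2)
# two symmetric niches: all-zeros and all-ones
parents = [[0, 0, 0, 0]] * 10 + [[1, 1, 1, 1]] * 10
children = recombine_within_clusters(parents, 2, rng)
```

On this symmetric example, plain uniform crossover over the whole parent set would produce mixed, low-quality children; clustering first keeps every child inside one of the two niches.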
Genetic and Evolutionary Computation Conference | 2005
Martin Pelikan; Kumara Sastry; David E. Goldberg
This paper describes a scalable algorithm for solving multiobjective decomposable problems by combining the hierarchical Bayesian optimization algorithm (hBOA) with the nondominated sorting genetic algorithm (NSGA-II) and clustering in the objective space. It is first argued that for good scalability, clustering or some other form of niching in the objective space is necessary and that each niche should be of approximately equal size. The multiobjective hBOA (mohBOA), which combines hBOA, NSGA-II, and clustering in the objective space, is then described. mohBOA differs from the multiobjective variants of BOA and hBOA proposed in the past by including clustering in the objective space and allocating an approximately equally sized portion of the population to each cluster. mohBOA is shown to scale up well on a number of problems on which standard multiobjective evolutionary algorithms perform poorly.
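The equal-allocation step can be sketched in isolation (a minimal sketch assuming the objective-space clusters have already been formed, e.g. by k-means; the labeled strings stand in for candidate solutions and are purely illustrative):

```python
import random

def equal_niche_allocation(clusters, pop_size, rng):
    """Allocate an approximately equal share of the next population to each
    objective-space cluster: clusters larger than their share are truncated,
    smaller ones are topped up by resampling their own members."""
    share = pop_size // len(clusters)
    next_pop = []
    for cluster in clusters:
        if len(cluster) >= share:
            next_pop.extend(cluster[:share])  # truncate large clusters
        else:
            next_pop.extend(cluster)          # keep all members, then resample
            next_pop.extend(rng.choice(cluster)
                            for _ in range(share - len(cluster)))
    return next_pop

rng = random.Random(3)
# three uneven objective-space clusters of candidate solutions
clusters = [["a1", "a2", "a3", "a4", "a5"], ["b1"], ["c1", "c2", "c3"]]
pop = equal_niche_allocation(clusters, 9, rng)
```

Whatever the cluster sizes coming in, each niche leaves with the same share of the population, which is the property the paper argues is needed for good scalability.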