Kumara Sastry
University of Illinois at Urbana–Champaign
Publications
Featured research published by Kumara Sastry.
Archive | 2003
Kumara Sastry; David E. Goldberg
This paper describes a probabilistic model building genetic programming (PMBGP) method developed on the basis of the extended compact genetic algorithm (eCGA). Unlike traditional genetic programming, which uses fixed recombination operators, the proposed PMBGP adapts linkage. The proposed algorithm, called extended compact genetic programming (eCGP), adaptively identifies and exchanges non-overlapping building blocks by constructing and sampling probabilistic models of promising solutions. The results show that eCGP scales up polynomially with the problem size (the number of functionals and terminals) on both a GP-easy problem and a boundedly difficult GP-hard problem.
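The marginal product model that eCGA-style methods build and sample can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function name `sample_from_mpm`, the pre-given linkage `groups`, and the binary encoding are all assumptions, and the MDL-driven search that actually discovers the partition is omitted.

```python
import random
from collections import Counter

def sample_from_mpm(population, groups, n_samples, rng=random):
    """Sample new candidates from a marginal product model (MPM):
    each linkage group's genes are drawn jointly from the empirical
    joint distribution observed in the selected population."""
    # Empirical joint frequencies per linkage group.
    marginals = []
    for group in groups:
        counts = Counter(tuple(ind[i] for i in group) for ind in population)
        settings = list(counts)
        weights = [counts[s] for s in settings]
        marginals.append((settings, weights))

    n_genes = len(population[0])
    samples = []
    for _ in range(n_samples):
        child = [None] * n_genes
        for group, (settings, weights) in zip(groups, marginals):
            chosen = rng.choices(settings, weights=weights)[0]
            for pos, allele in zip(group, chosen):
                child[pos] = allele
        samples.append(child)
    return samples
```

Because each group is sampled as a unit, building blocks observed in the selected population are exchanged whole rather than disrupted gene by gene, which is the point of linkage adaptation.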
International Journal of Approximate Reasoning | 2002
Martin Pelikan; Kumara Sastry; David E. Goldberg
To solve a wide range of different problems, the research in black-box optimization faces several important challenges. One of the most important challenges is the design of methods capable of automatic discovery and exploitation of problem regularities to ensure efficient and reliable search for the optimum. This paper discusses the Bayesian optimization algorithm (BOA), which uses Bayesian networks to model promising solutions and sample new candidate solutions. Using Bayesian networks in combination with population-based genetic and evolutionary search allows BOA to discover and exploit regularities in the form of a problem decomposition. The paper analyzes the applicability of the methods for learning Bayesian networks in the context of genetic and evolutionary search and concludes that the combination of the two approaches yields robust, efficient, and accurate search.
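The model-then-sample loop described above can be sketched in miniature. This assumes binary genes and a *fixed* network structure whose parents precede each variable in index order; the structure search, which is the hard part of BOA, is omitted, and the name `sample_bayes_net` is illustrative.

```python
import random
from collections import defaultdict

def sample_bayes_net(population, parents, n_samples, rng=random):
    """Ancestral sampling from a fixed-structure Bayesian network over
    binary genes. parents[i] lists the parent positions of gene i and is
    assumed to contain only indices smaller than i. Conditional
    frequencies are estimated from the selected population with Laplace
    smoothing, then new candidates are drawn in index order."""
    n = len(population[0])
    ones = defaultdict(int)    # (gene, parent_setting) -> count of gene == 1
    totals = defaultdict(int)  # (gene, parent_setting) -> total count
    for ind in population:
        for i in range(n):
            ctx = tuple(ind[p] for p in parents[i])
            totals[(i, ctx)] += 1
            ones[(i, ctx)] += ind[i]

    samples = []
    for _ in range(n_samples):
        child = [0] * n
        for i in range(n):  # parents[i] precede i, so ctx is already drawn
            ctx = tuple(child[p] for p in parents[i])
            p1 = (ones[(i, ctx)] + 1) / (totals[(i, ctx)] + 2)  # Laplace
            child[i] = 1 if rng.random() < p1 else 0
        samples.append(child)
    return samples
```

In BOA proper, the structure itself is learned from the selected population (scored, e.g., by a Bayesian metric), which is how problem regularities are discovered rather than assumed.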
genetic and evolutionary computation conference | 2005
Xavier Llorà; Kumara Sastry; David E. Goldberg; Abhimanyu Gupta; Lalitha Lakshmi
One of the daunting challenges of interactive genetic algorithms (iGAs)---genetic algorithms in which the fitness measure of a solution is provided by a human rather than by a fitness function, model, or computation---is user fatigue, which leads to sub-optimal solutions. This paper proposes a method to combat user fatigue by augmenting user evaluations with a synthetic fitness function. The proposed method combines partial-ordering concepts, the notion of non-domination from multiobjective optimization, and support vector machines to synthesize a fitness model based on user evaluations. The proposed method is used in an iGA on a simple test problem, and the results demonstrate that the method actively combats user fatigue, requiring 3--7 times fewer user evaluations than a simple iGA.
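The core idea of learning a surrogate from user-supplied pairwise preferences can be sketched as follows; note this uses plain perceptron-style updates on a linear model as a stand-in for the paper's support vector machine, and all names are illustrative.

```python
def train_rank_surrogate(pairs, n_features, epochs=100, lr=0.1):
    """Learn weights w so that w . better > w . worse for each
    user-ranked pair, via perceptron-style updates on violated pairs.
    A linear model stands in here for the paper's SVM."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for better, worse in pairs:
            margin = sum(wi * (b - c) for wi, b, c in zip(w, better, worse))
            if margin <= 0:  # pair violated: nudge w toward the better one
                for i in range(n_features):
                    w[i] += lr * (better[i] - worse[i])
    return w

def synthetic_fitness(w, x):
    """Score a candidate with the learned surrogate."""
    return sum(wi * xi for wi, xi in zip(w, x))
```

Once trained on the partial order extracted from a handful of user evaluations, the surrogate can rank the rest of the population, which is what lets the iGA ask the user far fewer questions.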
Scalable Optimization via Probabilistic Modeling | 2006
Georges R. Harik; Fernando G. Lobo; Kumara Sastry
For a long time, genetic algorithms (GAs) were not very successful in automatically identifying and exchanging structures consisting of several correlated genes. This problem, referred to in the literature as the linkage-learning problem, has been the subject of extensive research for many years. This chapter explores the relationship between the linkage-learning problem and that of learning probability distributions over multi-variate spaces. Herein, it is argued that these problems are equivalent. Using a simple but effective approach to learning distributions, and by implication linkage, this chapter reveals the existence of GA-like algorithms that are potentially orders of magnitude faster and more accurate than the simple GA.
Archive | 2006
Martin Pelikan; Kumara Sastry; Erick Cantú-Paz
Scalable Optimization via Probabilistic Modeling
genetic and evolutionary computation conference | 2005
Martin Pelikan; Kumara Sastry; David E. Goldberg
This paper describes a scalable algorithm for solving multiobjective decomposable problems by combining the hierarchical Bayesian optimization algorithm (hBOA) with the nondominated sorting genetic algorithm (NSGA-II) and clustering in the objective space. It is first argued that for good scalability, clustering or some other form of niching in the objective space is necessary and the size of each niche should be approximately equal. Multiobjective hBOA (mohBOA) is then described that combines hBOA, NSGA-II and clustering in the objective space. The algorithm mohBOA differs from the multiobjective variants of BOA and hBOA proposed in the past by including clustering in the objective space and allocating an approximately equally sized portion of the population to each cluster. The algorithm mohBOA is shown to scale up well on a number of problems on which standard multiobjective evolutionary algorithms perform poorly.
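The two ingredients argued for above, clustering in the objective space and approximately equal niche sizes, can be sketched as follows. This is a toy stand-in, not mohBOA itself: `kmeans_objective_space` uses plain Lloyd's k-means with deterministic initialization, and the function names are assumptions.

```python
def kmeans_objective_space(points, k, iters=20):
    """Lloyd's k-means over objective vectors, standing in for the
    clustering step mohBOA performs in the objective space. Initial
    centroids are the first k points, so the result is deterministic."""
    centroids = [list(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(iters):
        for idx, p in enumerate(points):
            labels[idx] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels, centroids

def allocate_equally(labels, k, budget):
    """Split an offspring budget into approximately equal portions per
    cluster; any remainder goes to the first clusters."""
    base, extra = divmod(budget, k)
    return [base + (1 if i < extra else 0) for i in range(k)]
```

Equal allocation is the part that keeps niches from collapsing: without it, one region of the front can absorb the whole population and scalability is lost.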
genetic and evolutionary computation conference | 2004
Martin Pelikan; Kumara Sastry
This paper describes how fitness inheritance can be used to estimate fitness for a proportion of newly sampled candidate solutions in the Bayesian optimization algorithm (BOA). The goal of estimating fitness for some candidate solutions is to reduce the number of fitness evaluations for problems where fitness evaluation is expensive. Bayesian networks used in BOA to model promising solutions and generate the new ones are extended to allow not only for modeling and sampling candidate solutions, but also for estimating their fitness. The results indicate that fitness inheritance is a promising concept in BOA, because population-sizing requirements for building appropriate models of promising solutions lead to good fitness estimates even if only a small proportion of candidate solutions is evaluated using the actual fitness function. This can lead to a reduction of the number of actual fitness evaluations by a factor of 30 or more.
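The evaluation-saving mechanism can be sketched generically: only a proportion of new candidates get a true evaluation, and the rest inherit a model-based estimate. The estimator here is an arbitrary callable and the names are illustrative; in the paper it is derived from the Bayesian network's local structures.

```python
import random

def evaluate_with_inheritance(candidates, fitness_fn, model_estimate,
                              p_eval, rng=random):
    """Evaluate only a proportion p_eval of candidates with the true
    (expensive) fitness function; the rest receive the model-based
    estimate. Returns the fitness list and the true-evaluation count."""
    fitnesses = []
    true_evals = 0
    for x in candidates:
        if rng.random() < p_eval:
            fitnesses.append(fitness_fn(x))
            true_evals += 1
        else:
            fitnesses.append(model_estimate(x))
    return fitnesses, true_evals
```

With p_eval small, the number of expensive evaluations drops roughly in proportion, which is where the factor-of-30 savings quoted above comes from, provided the estimates stay accurate enough to guide selection.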
genetic and evolutionary computation conference | 2004
Kumara Sastry; David E. Goldberg
This paper presents a competent selectomutative genetic algorithm (GA) that adapts linkage and solves hard problems quickly, reliably, and accurately. A probabilistic model building process is used to automatically identify the key building blocks (BBs) of the search problem. The mutation operator uses the probabilistic model of linkage groups to find the best among competing building blocks. The competent selectomutative GA successfully solves additively separable problems of bounded difficulty, requiring only a subquadratic number of function evaluations. The results show that for additively separable problems the probabilistic model building BB-wise mutation scales as \({\mathcal{O}}(2^km^{1.5})\), and requires \({\mathcal{O}}(\sqrt{k}\log m)\) fewer function evaluations than its selectorecombinative counterpart, confirming theoretical results reported elsewhere [1].
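The BB-wise mutation step, "find the best among competing building blocks", can be sketched as follows. This assumes the linkage groups have already been identified by the model-building phase; enumerating all 2^k settings per group is also where the \(2^k\) term in the scaling above comes from. The name `bbwise_mutation` is an assumption.

```python
from itertools import product

def bbwise_mutation(individual, groups, fitness_fn, alphabet=(0, 1)):
    """Greedy building-block-wise mutation: for each linkage group in
    turn, try every joint setting of that group's genes and keep the
    best one, holding the rest of the individual fixed."""
    best = list(individual)
    for group in groups:
        best_setting = tuple(best[i] for i in group)
        best_fit = fitness_fn(best)
        for setting in product(alphabet, repeat=len(group)):
            trial = list(best)
            for pos, allele in zip(group, setting):
                trial[pos] = allele
            f = fitness_fn(trial)
            if f > best_fit:
                best_fit, best_setting = f, setting
        for pos, allele in zip(group, best_setting):
            best[pos] = allele
    return best
```

For an additively separable problem with correct linkage groups, one pass over the m groups is enough, since optimizing one block cannot undo another.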
electronic commerce | 2009
Tian-Li Yu; David E. Goldberg; Kumara Sastry; Cláudio F. Lima; Martin Pelikan
In many different fields, researchers are often confronted by problems arising from complex systems. Simple heuristics or even enumeration work quite well on small and easy problems; however, to efficiently solve large and difficult problems, proper decomposition is the key. In this paper, investigating and analyzing interactions between components of complex systems sheds light on problem decomposition. By recognizing three bare-bones interactions: modularity, hierarchy, and overlap, facet-wise models are developed to dissect and inspect problem decomposition in the context of genetic algorithms. The proposed genetic algorithm design utilizes a matrix representation of an interaction graph to analyze and explicitly decompose the problem. The results from this paper should benefit research both technically and scientifically. Technically, this paper develops an automated dependency structure matrix clustering technique and utilizes it to design a model-building genetic algorithm that learns and delivers the problem structure. Scientifically, the explicit interaction model describes the problem structure very well and helps researchers gain important insights through the explicitness of the procedure.
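The idea of recovering modules from a matrix of pairwise interactions can be sketched as follows; a simple threshold-plus-union-find pass stands in here for the paper's actual MDL-based DSM clustering, and the names are illustrative.

```python
def dsm_modules(dsm, threshold):
    """Group variables into non-overlapping modules from a dependency
    structure matrix: variables whose pairwise interaction strength
    meets the threshold are merged into the same module (union-find).
    A toy stand-in for MDL-based DSM clustering."""
    n = len(dsm)
    parent = list(range(n))

    def find(i):
        # Path-halving union-find lookup.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if dsm[i][j] >= threshold:
                parent[find(j)] = find(i)

    modules = {}
    for i in range(n):
        modules.setdefault(find(i), []).append(i)
    return sorted(modules.values())
```

In the paper the threshold is not hand-picked; a model-selection criterion decides which interactions are signal, which is what makes the decomposition automatic.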
congress on evolutionary computation | 2004
Kumara Sastry; Martin Pelikan; David E. Goldberg
This paper studies fitness inheritance as an efficiency enhancement technique for a class of competent genetic algorithms called estimation of distribution algorithms. Probabilistic models of important sub-solutions are developed to estimate the fitness of a proportion of individuals in the population, thereby avoiding computationally expensive function evaluations. The effect of fitness inheritance on the convergence time and population sizing is modeled, and the speed-up obtained through inheritance is predicted. The results show that a fitness-inheritance mechanism that utilizes information on building-block fitnesses provides significant efficiency enhancement. For additively separable problems, fitness inheritance reduces the number of function evaluations to about half and yields a speed-up of about 1.75--2.25.