Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sevan G. Ficici is active.

Publication


Featured research published by Sevan G. Ficici.


Robotics and Autonomous Systems | 2002

Embodied Evolution: Distributing an evolutionary algorithm in a population of robots

Richard A. Watson; Sevan G. Ficici; Jordan B. Pollack

We introduce Embodied Evolution (EE) as a new methodology for evolutionary robotics (ER). EE uses a population of physical robots that autonomously reproduce with one another while situated in their task environment. This constitutes a fully distributed evolutionary algorithm embodied in physical robots. Several issues identified by researchers in the evolutionary robotics community as problematic for the development of ER are alleviated by the use of a large number of robots being evaluated in parallel. Particularly, EE avoids the pitfalls of the simulate-and-transfer method and allows the speed-up of evaluation time by utilizing parallelism. The more novel features of EE are that the evolutionary algorithm is entirely decentralized, which makes it inherently scalable to large numbers of robots, and that it uses many robots in a shared task environment, which makes it an interesting platform for future work in collective robotics and Artificial Life. We have built a population of eight robots and successfully implemented the first example of Embodied Evolution by designing a fully decentralized, asynchronous evolutionary algorithm. Controllers evolved by EE outperform a hand-designed controller in a simple application. We introduce our approach and its motivations, detail our implementation and initial results, and discuss the advantages and limitations of EE.
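The fully decentralized, asynchronous evolutionary algorithm described above can be illustrated with a toy sketch. The scheme below is an assumption for illustration only (it resembles pairwise "infection" between agents, not the paper's actual implementation): agents meet at random, the fitter agent of each pair overwrites some of the other's genes, and rare mutation preserves variation. The bit-counting task and all rates are invented.

```python
import random

# Toy decentralized EA: no central selection step; fitness spreads via
# local pairwise encounters. All parameters are illustrative assumptions.
random.seed(0)
N_AGENTS, GENOME_LEN, ROUNDS = 8, 20, 300

def fitness(genome):
    # Toy task: maximize the number of 1-bits.
    return sum(genome)

genomes = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(N_AGENTS)]
mean_before = sum(fitness(g) for g in genomes) / N_AGENTS

for _ in range(ROUNDS):
    a, b = random.sample(range(N_AGENTS), 2)  # a chance encounter
    if fitness(genomes[a]) < fitness(genomes[b]):
        a, b = b, a                           # a is now the fitter agent
    for locus in range(GENOME_LEN):
        if random.random() < 0.5:             # b absorbs ~half of a's genes
            genomes[b][locus] = genomes[a][locus]
        if random.random() < 0.01:            # rare mutation on b
            genomes[b][locus] ^= 1

mean_after = sum(fitness(g) for g in genomes) / N_AGENTS
print(mean_before, mean_after)
```

Even this crude scheme improves mean population fitness without any global coordinator, which is the structural point of EE: selection and variation emerge entirely from local interactions.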


European Conference on Artificial Life | 2001

Pareto Optimality in Coevolutionary Learning

Sevan G. Ficici; Jordan B. Pollack

We develop a novel coevolutionary algorithm based upon the concept of Pareto optimality. The Pareto criterion is core to conventional multi-objective optimization (MOO) algorithms. We can think of agents in a coevolutionary system as performing MOO, as well: An agent interacts with many other agents, each of which can be regarded as an objective for optimization. We adapt the Pareto concept to allow agents to follow gradient and create gradient for others to follow, such that co-evolutionary learning succeeds. We demonstrate our Pareto coevolution methodology with the majority function, a density classification task for cellular automata.
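The core move of Pareto coevolution is to treat each opponent as a separate objective and compare learners' outcome vectors component-wise. A minimal sketch, with invented names and toy outcome data (not from the paper):

```python
# Each learner's outcomes against all tests form an objective vector;
# the Pareto front keeps the non-dominated learners.

def dominates(a, b):
    """True if outcome vector `a` Pareto-dominates `b`:
    at least as good on every test, strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(outcomes):
    """Keep the learners whose outcome vectors are non-dominated."""
    return [a for a in outcomes
            if not any(dominates(b, a) for b in outcomes if b != a)]

# Three learners scored (win=1 / lose=0) against three tests:
outcomes = [(1, 0, 1), (1, 1, 1), (0, 0, 1)]
print(pareto_front(outcomes))  # only (1, 1, 1) survives
```

Keeping non-dominated learners is what lets an agent "follow gradient" on some tests without being punished for losses on others, and keeping informative tests creates gradient for the learners in turn.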


Parallel Problem Solving from Nature | 2000

A Game-Theoretic Approach to the Simple Coevolutionary Algorithm

Sevan G. Ficici; Jordan B. Pollack

The fundamental distinction between ordinary evolutionary algorithms (EA) and co-evolutionary algorithms lies in the interaction between coevolving entities. We believe that this property is essentially game-theoretic in nature. Using game theory, we describe extensions that allow familiar mixing-matrix and Markov-chain models of EAs to address coevolutionary algorithm dynamics. We then employ concepts from evolutionary game theory to examine design aspects of conventional coevolutionary algorithms that are poorly understood.


Congress on Evolutionary Computation | 2000

A game-theoretic investigation of selection methods used in evolutionary algorithms

Sevan G. Ficici; Ofer Melnik; Jordan B. Pollack

The replicator equation used in evolutionary game theory (EGT) assumes that strategies reproduce in direct proportion to their payoffs; this is akin to the use of fitness-proportionate selection in an evolutionary algorithm (EA). In this paper, we investigate how various other selection methods commonly used in EAs can affect the discrete-time dynamics of EGT. In particular, we show that the existence of evolutionary stable strategies (ESS) is sensitive to the selection method used. Rather than maintain the dynamics and equilibria of EGT, the selection methods we test either impose a fixed-point dynamic virtually unrelated to the payoffs of the game matrix, or they give limit cycles or induce chaos. These results are significant to the field of evolutionary computation because EGT can be understood as a coevolutionary algorithm operating under ideal conditions: an infinite population, noiseless payoffs and complete knowledge of the phenotype space. Thus, certain selection methods, which may operate effectively in simple evolution, are pathological in an ideal-world coevolutionary algorithm, and therefore dubious under real-world conditions.
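The discrete-time replicator equation the abstract refers to can be sketched in a few lines: each strategy's share grows in proportion to its expected payoff, which is exactly fitness-proportionate selection. The Hawk-Dove payoff matrix below is a standard textbook illustration, not data from the paper.

```python
# Discrete-time replicator dynamics: x_i' = x_i * f_i / f_bar,
# where f_i is strategy i's expected payoff against the population mix.

def replicator_step(x, A):
    """One discrete-time replicator update for shares x and payoff matrix A."""
    f = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]
    avg = sum(xi * fi for xi, fi in zip(x, f))
    return [xi * fi / avg for xi, fi in zip(x, f)]

# Hawk-Dove game (V=2, C=4, payoffs shifted by +2 to stay positive):
A = [[1, 4],   # Hawk vs Hawk, Hawk vs Dove
     [2, 3]]   # Dove vs Hawk, Dove vs Dove

x = [0.9, 0.1]
for _ in range(200):
    x = replicator_step(x, A)
print(x)  # approaches the mixed equilibrium [0.5, 0.5]
```

Under proportional selection the shares converge to the mixed ESS; the paper's point is that substituting rank, truncation, or Boltzmann-style selection into this update can destroy exactly this equilibrium structure.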


Genetic and Evolutionary Computation Conference | 2005

Monotonic solution concepts in coevolution

Sevan G. Ficici

Assume a coevolutionary algorithm capable of storing and utilizing all phenotypes discovered during its operation, for as long as it operates on a problem; that is, assume an algorithm with a monotonically increasing knowledge of the search space. We ask: If such an algorithm were to periodically report, over the course of its operation, the best solution found so far, would the quality of the solution reported by the algorithm improve monotonically over time? To answer this question, we construct a simple preference relation to reason about the goodness of different individual and composite phenotypic behaviors. We then show that whether the solutions reported by the coevolutionary algorithm improve monotonically with respect to this preference relation depends upon the solution concept implemented by the algorithm. We show that the solution concept implemented by the conventional coevolutionary algorithm does not guarantee monotonic improvement; in contrast, the game-theoretic solution concept of Nash equilibrium does guarantee monotonic improvement. Thus, this paper considers 1) whether global and objective metrics of goodness can be applied to coevolutionary problem domains (possibly with open-ended search spaces), and 2) whether coevolutionary algorithms can, in principle, optimize with respect to such metrics and find solutions to games of strategy.
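The failure mode the paper analyzes can be shown with a tiny invented game (the candidates, opponents, and scores below are illustrative, not from the paper): even when knowledge of the opponent space grows monotonically, reporting the candidate that scores best against the opponents discovered so far can regress in true quality.

```python
# Who each candidate beats, over the full (here, finite) opponent space:
beats = {
    "A": {"o1", "o2", "o3"},   # true quality: beats 3 opponents
    "B": {"o4", "o5"},         # true quality: beats 2 opponents
}

def best_so_far(known):
    """Naive solution concept: score candidates only against the
    opponents discovered so far, report the top scorer."""
    return max(beats, key=lambda c: len(beats[c] & known))

report_1 = best_so_far({"o1"})               # early knowledge: one opponent
report_2 = best_so_far({"o1", "o4", "o5"})   # knowledge has grown

print(report_1, len(beats[report_1]))  # A 3
print(report_2, len(beats[report_2]))  # B 2 -- reported quality dropped
```

Knowledge only increased between the two reports, yet the reported solution got worse against the full opponent space; this is the non-monotonicity of the naive solution concept, which the paper contrasts with Nash equilibrium.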


Genetic and Evolutionary Computation Conference | 2007

Advanced tutorial on coevolution

Sevan G. Ficici; Anthony Bucci

The advanced tutorial on coevolution continues the topics covered in the introductory coevolution tutorial with a view towards research conducted in the last eight years. We will explore two themes which have recently been identified: interaction, which is where the context-sensitive nature of evaluation manifests itself; and elaboration, which represents the goal of accumulating capabilities. Evolutionary game theory and the order theory used in Pareto coevolution will be covered as examples of the study of interaction. NeuroEvolution of Augmenting Topologies (NEAT) in particular and monotonic solution concepts more generally treat questions of elaboration. Further topics to be covered include: EGT and dynamical systems studies of cooperative coevolutionary algorithms; archive methods; dimension extraction; and the estimation-exploration algorithm.


Archive | 2008

Multiobjective Optimization and Coevolution

Sevan G. Ficici

This chapter reviews a line of coevolutionary algorithm research that reframes coevolution as a form of multiobjective optimization. This shift in perspective has led researchers to new algorithmic and analytical formulations that have advanced the state of the art in coevolutionary algorithm research. We review relevant literature and discuss the basic concepts and issues involved in the application of multiobjective optimization to coevolutionary algorithms.


Genetic and Evolutionary Computation Conference | 2006

A game-theoretic investigation of selection methods in two-population coevolution

Sevan G. Ficici

We examine the dynamical and game-theoretic properties of several selection methods in the context of two-population coevolution. The methods we examine are fitness-proportional, linear rank, truncation, and (μ,λ)-ES selection. We use simple symmetric variable-sum games in an evolutionary game-theoretic framework. Our results indicate that linear rank, truncation, and (μ,λ)-ES selection are somewhat better-behaved in a two-population setting than in the one-population case analyzed by Ficici et al. [4]. These alternative selection methods maintain the Nash-equilibrium attractors found in proportional selection, but also add non-Nash attractors as well as regions of phase-space that lead to cyclic dynamics. Thus, these alternative selection methods do not properly implement the Nash-equilibrium solution concept.


Archive | 2004

Selection in Coevolutionary Algorithms and the Inverse Problem

Sevan G. Ficici; Ofer Melnik; Jordan B. Pollack

The inverse problem in the collective intelligence framework concerns how the private utility functions of agents can be engineered so that their selfish behaviors collectively give rise to a desired world state. In this chapter we examine several selection and fitness-sharing methods used in coevolution and consider their operation with respect to the inverse problem. The methods we test are truncation and linear-rank selection and competitive and similarity-based fitness sharing. Using evolutionary game theory to establish the desired world state, our analyses show that variable-sum games with polymorphic Nash are problematic for these methods. Rather than converge to polymorphic Nash, the methods we test produce cyclic behavior, chaos, or attractors that lack game-theoretic justification and therefore fail to solve the inverse problem. The private utilities of the evolving agents may thus be viewed as poorly factored: improved private utility does not correspond to improved world utility.


Congress on Evolutionary Computation | 2012

Evolutionary algorithms for supertree search

Sevan G. Ficici; Enoch S. Liu; Gary B. Fogel

Phylogenetic inference of the history of life on Earth continues to be a major effort of evolutionary biology. Such inference can be accomplished through the use of individual genes, sets of genes, or complete genomes. While the latter may provide the most robust description of the true phylogenetic history, the computational demands of complete genome comparison and phylogenetic construction are daunting. Thus, most researchers are left using sets of conserved genes for the resolution of a common phylogeny (what is termed a "supertree" search). However, as the number of taxa increases, or as the number of source trees used in construction of a supertree increases, the number of possible supertree solutions increases tremendously. This requires consideration of alternate methods to search this space efficiently, such as those that use stochastic methods. Here, for the first time, we present a method for supertree search using evolutionary algorithms and evaluate its utility on a set of derived supertree problems with 50 taxa. The results indicate the utility of this approach and offer opportunities for future refinement.

Collaboration


Dive into Sevan G. Ficici's collaborations.

Top Co-Authors

Avi Pfeffer

Charles River Laboratories