
Publication


Featured research published by Sean Luke.


Autonomous Agents and Multi-Agent Systems | 2005

Cooperative Multi-Agent Learning: The State of the Art

Liviu Panait; Sean Luke

Cooperative multi-agent systems (MAS) are ones in which several agents attempt, through their interaction, to jointly solve tasks or to maximize utility. Due to the interactions among the agents, multi-agent problem complexity can rise rapidly with the number of agents or their behavioral sophistication. The challenge this presents to the task of programming solutions to MAS problems has spawned increasing interest in machine learning techniques to automate the search and optimization process. We provide a broad survey of the cooperative multi-agent learning literature. Previous surveys of this area have largely focused on issues common to specific subareas (for example, reinforcement learning (RL) or robotics). In this survey we attempt to draw from multi-agent learning work in a spectrum of areas, including RL, evolutionary computation, game theory, complex systems, agent modeling, and robotics. We find that this broad view leads to a division of the work into two categories, each with its own special issues: applying a single learner to discover joint solutions to multi-agent problems (team learning), or using multiple simultaneous learners, often one per agent (concurrent learning). Additionally, we discuss direct and indirect communication in connection with learning, plus open issues in task decomposition, scalability, and adaptive dynamics. We conclude with a presentation of multi-agent learning problem domains, and a list of multi-agent learning resources.


Simulation | 2005

MASON: A Multiagent Simulation Environment

Sean Luke; Claudio Cioffi-Revilla; Liviu Panait; Keith Sullivan; Gabriel Catalin Balan

MASON is a fast, easily extensible, discrete-event multi-agent simulation toolkit in Java, designed to serve as the basis for a wide range of multi-agent simulation tasks ranging from swarm robotics to machine learning to social complexity environments. MASON carefully delineates between model and visualization, allowing models to be dynamically detached from or attached to visualizers, and to change platforms mid-run. This paper describes the MASON system, its motivation, and its basic architectural design. It then compares MASON to related multi-agent libraries in the public domain, and discusses six applications of the system built over the past year which suggest its breadth of utility.
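The model/visualization decoupling described above can be illustrated with a minimal observer pattern. This is a Python sketch of the design idea only, not MASON's actual Java API; all class and method names here are invented for illustration:

```python
class Model:
    """Sketch of a simulation model that never depends on its visualizers.

    Visualizers can be attached and detached at any point in the run;
    with none attached, the model runs headless.
    """

    def __init__(self):
        self.step_count = 0
        self.visualizers = []  # may be empty: the model runs headless

    def attach(self, viz):
        """Hook a visualizer onto the running model."""
        self.visualizers.append(viz)

    def detach(self, viz):
        """Unhook a visualizer without disturbing the model state."""
        self.visualizers.remove(viz)

    def step(self):
        """Advance the simulation one step, then notify any observers."""
        self.step_count += 1
        for viz in self.visualizers:  # visualization is purely passive
            viz.update(self)
```

Because the model never depends on its visualizers, it can run headless and later be reattached to a display, which is the kind of property that supports detaching a model mid-run.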


Adaptive Agents and Multi-Agent Systems | 1997

Ontology-based Web agents

Sean Luke; Lee Spector; David Rager; James A. Hendler

This paper describes SHOE, a set of Simple HTML Ontology Extensions which allow World-Wide Web authors to annotate their pages with semantic knowledge such as “I am a graduate student” or “This person is my graduate advisor”. These annotations are expressed in terms of ontological knowledge which can be generated by using or extending standard ontologies available on the Web. This makes it possible to ask Web agent queries such as “Find me all graduate students in Maryland who are working on a project funded by DoD initiative 123-4567”, instead of simplistic keyword searches enabled by current search engines. We have also developed a web-crawling agent, Exposé, which interns SHOE knowledge from web documents, making these kinds of queries a reality.


Robot Soccer World Cup | 1998

Co-evolving Soccer Softbot Team Coordination with Genetic Programming

Sean Luke; Charles Hohn; Jonathan Farris; Gary Jackson; James A. Hendler

In this paper we explain how we applied genetic programming to behavior-based team coordination in the RoboCup Soccer Server domain. Genetic programming is a promising new method for automatically generating functions and algorithms through natural selection. In contrast to other learning methods, genetic programming's automatic programming makes it a natural approach for developing algorithmic robot behaviors. The RoboCup Soccer Server was a very challenging domain for genetic programming, but we were pleased with the results. At the end, genetic programming had produced teams of soccer softbots which had learned to cooperate to play a good game of simulator soccer.


Genetic and Evolutionary Computation Conference | 2012

Genetic programming needs better benchmarks

James McDermott; David White; Sean Luke; Luca Manzoni; Mauro Castelli; Leonardo Vanneschi; Wojciech Jaskowski; Krzysztof Krawiec; Robin Harper; Kenneth A. De Jong; Una-May O'Reilly

Genetic programming (GP) is not a field noted for the rigor of its benchmarking. Some of its benchmark problems are popular purely through historical contingency, and they can be criticized as too easy or as providing misleading information concerning real-world performance, but they persist largely because of inertia and the lack of good alternatives. Even where the problems themselves are impeccable, comparisons between studies are made more difficult by the lack of standardization. We argue that the definition of standard benchmarks is an essential step in the maturation of the field. We make several contributions towards this goal. We motivate the development of a benchmark suite and define its goals; we survey existing practice; we enumerate many candidate benchmarks; we report progress on reference implementations; and we set out a concrete plan for gathering feedback from the GP community that would, if adopted, lead to a standard set of benchmarks.


Evolutionary Computation | 2006

A comparison of bloat control methods for genetic programming

Sean Luke; Liviu Panait

Genetic programming has highlighted the problem of bloat, the uncontrolled growth of the average size of an individual in the population. The most common approach to dealing with bloat in tree-based genetic programming individuals is to limit their maximal allowed depth. An alternative to depth limiting is to punish individuals in some way based on excess size, and our experiments have shown that the combination of depth limiting with such a punitive method is generally more effective than either alone. Which such combinations are most effective at reducing bloat? In this article we augment depth limiting with nine bloat control methods and compare them with one another. These methods are chosen from past literature and from techniques of our own devising. Testing with four genetic programming problems, we identify where each bloat control method performs well on a per-problem basis, and under what settings various methods are effective independent of problem. We report on the results of these tests, and discover an unexpected winner in the cross-platform category.
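Koza-style depth limiting, the baseline that the nine methods are combined with, can be sketched in a few lines. This is a minimal Python illustration assuming GP trees encoded as nested tuples (operator first, children after); the limit of 17 is Koza's conventional default, used here as an assumption:

```python
DEPTH_LIMIT = 17  # Koza's classic default; an assumption for illustration

def depth(tree):
    """Depth of a nested-tuple GP tree; any non-tuple value is a leaf."""
    if not isinstance(tree, tuple) or len(tree) == 1:
        return 1
    return 1 + max(depth(child) for child in tree[1:])

def depth_limited(parent, child, limit=DEPTH_LIMIT):
    """Koza-style depth limiting: if crossover or mutation produces a
    child deeper than the limit, discard it and keep the parent."""
    return child if depth(child) <= limit else parent
```

The punitive alternatives discussed above would instead let the oversized child survive but penalize its fitness or selection chances based on its excess size.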


Genetic Programming and Evolvable Machines | 2013

Better GP benchmarks: community survey results and proposals

David White; James McDermott; Mauro Castelli; Luca Manzoni; Brian W. Goldman; Gabriel Kronberger; Wojciech Jaśkowski; Una-May O'Reilly; Sean Luke

We present the results of a community survey regarding genetic programming benchmark practices. Analysis shows broad consensus that improvement is needed in problem selection and experimental rigor. While views expressed in the survey dissuade us from proposing a large-scale benchmark suite, we find community support for creating a “blacklist” of problems which are in common use but have important flaws, and whose use should therefore be discouraged. We propose a set of possible replacement problems.


Adaptive Agents and Multi-Agent Systems | 2004

A Pheromone-Based Utility Model for Collaborative Foraging

Liviu Panait; Sean Luke

Multi-agent research often borrows from biology, where remarkable examples of collective intelligence may be found. One interesting example is ant colonies' use of pheromones as a joint communication mechanism. In this paper we propose two pheromone-based algorithms for artificial agent foraging, trail-creation, and other tasks. Whereas practically all previous work in this area has focused on biologically-plausible but ad-hoc single pheromone models, we have developed a formalism which uses multiple pheromones to guide cooperative tasks. This model bears some similarity to reinforcement learning. However, our model takes advantage of symmetries common to foraging environments which enables it to achieve much faster reward propagation than reinforcement learning does. Using this approach we demonstrate cooperative behaviors well beyond the previous ant-foraging work, including the ability to create optimal foraging paths in the presence of obstacles, to cope with dynamic environments, and to follow tours with multiple waypoints. We believe that this model may be used for more complex problems still.
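The reward-propagation idea can be sketched as a value-iteration-style sweep over a grid, with pheromone pinned at source cells and decaying per step of distance. This is an illustrative Python sketch, not the paper's exact formalism; GAMMA, the 4-neighborhood, and the sweep count are all assumptions:

```python
GAMMA = 0.9  # per-step decay; an illustrative parameter, not from the paper

def propagate(width, height, sources, obstacles=frozenset(), sweeps=50):
    """Value-iteration-style pheromone propagation: source cells are
    pinned at 1.0, and every other free cell repeatedly takes GAMMA
    times its best neighbor, so pheromone flows around obstacles."""
    p = {(x, y): 0.0 for x in range(width) for y in range(height)
         if (x, y) not in obstacles}
    for cell in sources:
        p[cell] = 1.0
    for _ in range(sweeps):
        for (x, y) in p:
            if (x, y) in sources:
                continue
            neighbors = [p[n] for n in ((x+1, y), (x-1, y), (x, y+1), (x, y-1))
                         if n in p]
            if neighbors:
                p[(x, y)] = GAMMA * max(neighbors)
    return p

def greedy_step(p, cell):
    """An ant climbs the pheromone gradient toward the source."""
    x, y = cell
    candidates = [n for n in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)) if n in p]
    return max(candidates, key=lambda n: p[n])
```

With one such field per pheromone (e.g. one marking food, one marking home), ants can climb one gradient while depositing into another, which is the flavor of cooperative behavior the multiple-pheromone formalism supports.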


IEEE Transactions on Evolutionary Computation | 2006

Biasing Coevolutionary Search for Optimal Multiagent Behaviors

Liviu Panait; Sean Luke; R. P. Wiegand

Cooperative coevolutionary algorithms (CEAs) offer great potential for concurrent multiagent learning domains and are of special utility to domains involving teams of multiple agents. Unfortunately, they also exhibit pathologies resulting from their game-theoretic nature, and these pathologies interfere with finding solutions that correspond to optimal collaborations of interacting agents. We address this problem by biasing a cooperative CEA in such a way that the fitness of an individual is based partly on the result of interactions with other individuals (as is usual), and partly on an estimate of the best possible reward for that individual if partnered with its optimal collaborator. We justify this idea using existing theoretical models of a relevant subclass of CEAs, demonstrate how to apply biasing in a way that is robust with respect to parameterization, and provide some experimental evidence to validate the biasing approach. We show that it is possible to bias coevolutionary methods to better search for optimal multiagent behaviors.
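The biasing scheme can be sketched as a weighted mix of the usual interaction reward and an estimate of the optimal-collaborator reward. In this Python sketch the estimate is a running maximum over rewards seen so far, which is one simple stand-in for the paper's estimate; delta is an assumed weight, and evaluate is a hypothetical joint-reward function:

```python
DELTA = 0.5  # bias weight; an illustrative parameter, not from the paper

def biased_fitness(individual, partners, best_seen, evaluate, delta=DELTA):
    """Biased cooperative-coevolution fitness: mix the reward from
    interacting with current partners with a running estimate of the
    best reward this individual has ever achieved (a crude stand-in
    for the reward with its optimal collaborator)."""
    interaction = max(evaluate(individual, p) for p in partners)
    estimate = max(best_seen.get(individual, float('-inf')), interaction)
    best_seen[individual] = estimate  # remember for later generations
    return (1 - delta) * interaction + delta * estimate
```

With delta = 0 this collapses to ordinary partner-based fitness; increasing delta pulls selection away from merely compatible partners and toward individuals with high optimal-collaboration potential.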


Parallel Problem Solving from Nature | 2002

Fighting Bloat with Nonparametric Parsimony Pressure

Sean Luke; Liviu Panait

Many forms of parsimony pressure are parametric; that is, the final fitness is a parametric model of the actual size and raw fitness values. The problem with parametric techniques is that they are hard to tune to prevent size from dominating fitness late in the evolutionary run, or to compensate for problem-dependent nonlinearities in the raw fitness function. In this paper we briefly discuss existing bloat-control techniques, then introduce two new kinds of non-parametric parsimony pressure, Direct and Proportional Tournament. As their names suggest, these techniques are based on simple modifications of tournament selection to consider both size and fitness, but not together as a combined parametric equation. We compare the techniques against, and in combination with, the most popular genetic programming bloat-control technique, Koza-style depth limiting, and show that they are effective in limiting size while still maintaining good best-fitness-of-run results.
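One reading of the Proportional Tournament idea is that some fraction of tournaments select on size alone while the rest select on fitness alone, so the two criteria are never folded into a single parametric formula. This is a hedged Python sketch under that reading; p_size and the tournament size k are assumed parameters, and the paper's Direct Tournament variant differs in detail:

```python
import random

P_SIZE = 0.3  # fraction of tournaments decided by size; illustrative value

def proportional_tournament(pop, fitness, size, k=7, p_size=P_SIZE, rng=random):
    """Nonparametric parsimony pressure via tournament selection: with
    probability p_size the smallest entrant wins, otherwise the fittest
    wins, so size and fitness are never combined in one equation."""
    entrants = [rng.choice(pop) for _ in range(k)]
    if rng.random() < p_size:
        return min(entrants, key=size)   # size tournament: smaller wins
    return max(entrants, key=fitness)    # ordinary fitness tournament
```

Because selection only ever compares sizes against sizes or fitnesses against fitnesses, no scaling constant has to be tuned against problem-dependent nonlinearities in the raw fitness function.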

Collaboration


Dive into Sean Luke's collaboration.

Top Co-Authors

Liviu Panait

George Mason University


James A. Hendler

Rensselaer Polytechnic Institute


R. Paul Wiegand

University of Central Florida


Drew Wicke

George Mason University


Ermo Wei

George Mason University
