Mitchell A. Potter
United States Naval Research Laboratory
Publications
Featured research published by Mitchell A. Potter.
Evolutionary Computation | 2000
Mitchell A. Potter; Kenneth A. De Jong
To successfully apply evolutionary algorithms to the solution of increasingly complex problems, we must develop effective techniques for evolving solutions in the form of interacting coadapted subcomponents. One of the major difficulties is finding computational extensions to our current evolutionary paradigms that will enable such subcomponents to emerge rather than being hand-designed. In this paper, we describe an architecture for evolving such subcomponents as a collection of cooperating species. Given a simple string-matching task, we show that evolutionary pressure to increase the overall fitness of the ecosystem can provide the needed stimulus for the emergence of an appropriate number of interdependent subcomponents that cover multiple niches, evolve to an appropriate level of generality, and adapt as the number and roles of their fellow subcomponents change over time. We then explore these issues within the context of a more complicated domain through a case study involving the evolution of artificial neural networks.
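The ecosystem described above can be sketched as a minimal cooperative-coevolution loop on a toy string-matching task. The target string, population size, and one-character mutation are illustrative choices, not the paper's setup; each species evolves one substring and is credited by its fitness in collaboration with the current best representative of the other species:

```python
import random

TARGET = "cooperate"   # hypothetical string-matching target
HALF = len(TARGET) // 2
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def random_piece(n):
    return "".join(random.choice(ALPHABET) for _ in range(n))

def fitness(left, right):
    # Ecosystem fitness: characters of the assembled string matching the target.
    return sum(a == b for a, b in zip(left + right, TARGET))

def evolve(generations=200, pop_size=20, seed=0):
    random.seed(seed)
    # One species per subcomponent; each species only evolves its own piece.
    species = [
        [random_piece(HALF) for _ in range(pop_size)],
        [random_piece(len(TARGET) - HALF) for _ in range(pop_size)],
    ]
    best = [species[0][0], species[1][0]]
    for _ in range(generations):
        for s in (0, 1):
            def score(ind):
                # Credit assignment: evaluate in collaboration with the
                # current best representative of the other species.
                pair = (ind, best[1]) if s == 0 else (best[0], ind)
                return fitness(*pair)
            species[s].sort(key=score, reverse=True)
            best[s] = species[s][0]
            # Replace the bottom half with mutated copies of the top half.
            half = pop_size // 2
            for i in range(half, pop_size):
                parent = list(species[s][i - half])
                parent[random.randrange(len(parent))] = random.choice(ALPHABET)
                species[s][i] = "".join(parent)
    return best, fitness(*best)

best, score = evolve()
```

Because neither species sees the whole target, the decomposition into subcomponents emerges from the joint fitness pressure rather than being imposed on any single genome.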
Parallel Problem Solving from Nature | 1998
Mitchell A. Potter; Kenneth A. De Jong
We present a novel approach to concept learning in which a coevolutionary genetic algorithm is applied to the construction of an immune system whose antibodies can discriminate between examples and counter-examples of a given concept. This approach is more general than traditional symbolic approaches to concept learning and can be applied in situations where preclassified training examples are not necessarily available. An experimental study is described in which a coevolutionary immune system adapts itself to one of the standard machine learning data sets. The resulting immune system concept description and a description produced by a traditional symbolic concept learner are compared and contrasted.
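A minimal sketch of the idea, assuming 1-D interval "antibodies" and a toy data set (all values hypothetical): each antibody is improved in the context of the current best repertoire, in the cooperative-coevolutionary spirit of the paper, and the repertoire is rewarded for binding examples while avoiding counter-examples:

```python
import random

# Toy 1-D concept: positives cluster in [2, 4] and [7, 8]; negatives elsewhere.
POS = [2.1, 2.5, 3.0, 3.8, 7.2, 7.9]
NEG = [0.5, 1.0, 5.0, 6.0, 9.0]

def covers(antibody, x):
    center, radius = antibody
    return abs(x - center) <= radius

def repertoire_fitness(repertoire):
    # A point is classified as positive if any antibody binds it.
    tp = sum(any(covers(a, x) for a in repertoire) for x in POS)
    fp = sum(any(covers(a, x) for a in repertoire) for x in NEG)
    return tp - fp

def evolve(n_antibodies=2, trials=30, gens=150, seed=1):
    random.seed(seed)
    # One "species" per antibody; together they form the repertoire.
    best = [(random.uniform(0, 10), random.uniform(0.1, 2))
            for _ in range(n_antibodies)]
    for _ in range(gens):
        for s in range(n_antibodies):
            for _ in range(trials):
                c, r = best[s]
                cand = (c + random.gauss(0, 0.5),
                        max(0.05, r + random.gauss(0, 0.2)))
                trial = best[:s] + [cand] + best[s + 1:]
                if repertoire_fitness(trial) >= repertoire_fitness(best):
                    best[s] = cand
    return best, repertoire_fitness(best)

rep, score = evolve()
```

Note that no antibody needs to cover the whole concept: the two intervals specialize to the two positive regions, which is the niching effect the abstract describes.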
International Conference on Automation, Robotics and Applications | 2000
Mitchell A. Potter
Robot swarms are capable of performing tasks with robustness and flexibility using only local interactions between the agents. Such a system can lead to emergent behavior that is often desirable, but difficult to control and manipulate post-design. These properties make the real-time control of swarms by a human operator challenging—a problem that has not been adequately addressed in the literature. In this paper we present preliminary work on two possible forms of control: top-down control of global swarm characteristics and bottom-up control by influencing a subset of the swarm members. We present learning methods to address each of these. The first method uses instance-based learning to produce a generalized model from a sampling of the parameter space and global characteristics for specific situations. The second method uses evolutionary learning to learn placement and parameterization of virtual agents that can influence the robots in the swarm. Finally we show how these methods generalize and can be used by a human operator to dynamically control a swarm in real time.
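The first (top-down) method can be illustrated with a k-nearest-neighbour model over sampled instances. The `dispersion` function below is a hypothetical stand-in for an expensive swarm simulation, not the paper's actual global characteristic:

```python
import random

def dispersion(cohesion, repulsion):
    # Hypothetical stand-in for a swarm simulation: maps two control
    # parameters to a measured global swarm characteristic.
    return 10.0 * repulsion / (cohesion + repulsion)

# Sample the parameter space offline, as an instance-based method would.
random.seed(0)
samples = []
for _ in range(200):
    c = random.uniform(0.1, 5.0)
    r = random.uniform(0.1, 5.0)
    samples.append(((c, r), dispersion(c, r)))

def predict(c, r, k=5):
    # k-nearest-neighbour generalization over the stored instances.
    nearest = sorted(samples,
                     key=lambda s: (s[0][0] - c) ** 2 + (s[0][1] - r) ** 2)[:k]
    return sum(value for _, value in nearest) / k

def control_for(target, k=5):
    # Top-down control: choose the parameters whose predicted global
    # characteristic best matches the operator's target.
    return min(samples, key=lambda s: abs(predict(*s[0], k) - target))[0]

params = control_for(target=7.0)
```

In use, the operator specifies a desired global characteristic and the learned model is inverted to suggest parameter settings, avoiding repeated live simulation.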
Genetic and Evolutionary Computation Conference | 2006
R. Paul Wiegand; Mitchell A. Potter
Though recent analysis of traditional cooperative coevolutionary algorithms (CCEAs) casts doubt on their suitability for static optimization tasks, our experience is that the algorithms perform quite well in multiagent learning settings. This is due in part to the fact that many CCEAs are well suited to finding behaviors for team members that result in good (though not necessarily optimal) performance but are also robust to changes in other team members. Given this, there are two main goals of this paper. First, we describe a general framework for clearly defining robustness, offering a specific definition for our studies. Second, we examine the hypothesis that CCEAs exploit this robustness property during their search. We use an existing theoretical model to gain intuition about the kind of problem properties that attract populations in the system, then provide a simple empirical study justifying this intuition in a practical setting. The results are the first steps toward a constructive view of CCEAs as optimizers of robustness.
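The distinction between an optimal team and a robust team member can be made concrete with a toy two-player payoff table (all payoffs hypothetical): the jointly optimal action is fragile to partner changes, while a slightly worse action has a higher expected payoff over a distribution of partners:

```python
ACTIONS = ["a", "b"]
# Joint reward for (my action, partner's action); hypothetical values chosen
# so that the optimal pair ("a", "a") is fragile.
PAYOFF = {("a", "a"): 10, ("a", "b"): 0,
          ("b", "a"): 6,  ("b", "b"): 7}

def robustness(x, partner_dist=None):
    # One possible robustness definition: expected joint payoff when the
    # partner's action is drawn from a distribution rather than fixed.
    partner_dist = partner_dist or {y: 1 / len(ACTIONS) for y in ACTIONS}
    return sum(p * PAYOFF[(x, y)] for y, p in partner_dist.items())

best_joint = max(PAYOFF, key=PAYOFF.get)    # ("a", "a"): the optimal team
most_robust = max(ACTIONS, key=robustness)  # "b": the robust team member
```

A static optimizer should prefer "a"; a learner whose partners keep changing tends toward "b", which is the attraction property the paper studies.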
Parallel Problem Solving from Nature | 2006
R. Paul Wiegand; Mitchell A. Potter; Donald A. Sofge; William M. Spears
We present two key components of a principled method for constructing modular, heterogeneous swarms. First, we generalize a well-known technique for representing swarm behaviors to extend the power of multiagent systems by specializing agents and their interactions. Second, a novel graph-based method is introduced for designing swarm-based behaviors for multiagent teams. This method includes engineer-provided knowledge through explicit design decisions pertaining to specialization, heterogeneity, and modularity. We show the representational power of our generalized representation can be used to evolve a solution to a challenging multiagent resource protection problem. We also construct a modular design by hand, resulting in a scalable and intuitive heterogeneous solution for the resource protection problem.
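A sketch of the graph-based idea under assumed agent classes ("guard", "scout") and force-law parameters, none of which come from the paper: nodes are agent classes, and each directed edge carries its own interaction law, so heterogeneity and specialization become explicit design decisions rather than emergent accidents:

```python
# Interaction graph for a hypothetical resource-protection team: one node per
# agent class, one force law per directed edge (all parameters illustrative).
INTERACTIONS = {
    ("guard", "guard"):    {"gain": -1.0, "range": 2.0},  # spread guards out
    ("guard", "resource"): {"gain":  2.0, "range": 8.0},  # orbit the resource
    ("scout", "scout"):    {"gain": -0.5, "range": 4.0},  # disperse scouts
    ("scout", "guard"):    {"gain": -0.3, "range": 1.0},  # stay clear of guards
}

def pairwise_force(kind_a, kind_b, distance):
    # Magnitude of the force that class kind_b exerts on class kind_a;
    # zero when the classes do not interact or are out of range.
    law = INTERACTIONS.get((kind_a, kind_b))
    if law is None or distance > law["range"]:
        return 0.0
    return law["gain"] / max(distance, 1e-6)
```

Adding a new specialist is then a local edit, namely a new node plus the edges it participates in, which is what makes the design modular and scalable.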
Parallel Problem Solving from Nature | 2010
Mitchell A. Potter; Christine Couldrey
A challenge in partitional clustering is determining the number of clusters that best characterize a set of observations. In this paper, we present a novel approach for determining both an optimal number of clusters and partitioning of the data set. Our new algorithm is based on cooperative coevolution and inspired by the natural process of sympatric speciation. We have evaluated our algorithm on a number of synthetic and real data sets from the pattern recognition literature and on a recently collected set of epigenetic data consisting of DNA methylation levels. In a comparison with a state-of-the-art algorithm that uses a variable string-length GA for clustering, our algorithm demonstrated a significant performance advantage, both in terms of determining an appropriate number of clusters and in the quality of the cluster assignments as reflected by the misclassification rate.
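This is not the paper's coevolutionary algorithm, but a minimal illustration of the model-selection problem it solves: jointly choosing the number of clusters and a partitioning by scoring candidate partitions with a cluster-validity criterion (here a plain k-means and the mean silhouette width on toy 1-D data):

```python
import random

POINTS = [0.0, 0.1, 0.2, 0.3, 10.0, 10.1, 10.2, 10.3]  # two obvious clusters

def kmeans(points, k, iters=25, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: (p - centers[j]) ** 2)
                  for p in points]
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = sum(members) / len(members)
    return labels

def silhouette(points, labels):
    # Mean silhouette width: higher means tighter, better-separated clusters.
    n = len(points)
    total = 0.0
    for i in range(n):
        own = [j for j in range(n) if labels[j] == labels[i] and j != i]
        if not own:
            continue  # singleton cluster contributes 0 by convention
        a = sum(abs(points[i] - points[j]) for j in own) / len(own)
        b = min(
            sum(abs(points[i] - points[j]) for j in range(n) if labels[j] == lab)
            / [labels[j] for j in range(n)].count(lab)
            for lab in set(labels) if lab != labels[i]
        )
        total += (b - a) / max(a, b)
    return total / n

# Search over candidate cluster counts; the validity index selects k.
best_k = max(range(2, 5),
             key=lambda k: silhouette(POINTS, kmeans(POINTS, k)))
```

The paper's algorithm replaces this exhaustive sweep with coevolving species (one per cluster) whose number adapts during the run, but the selection criterion plays the same role.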
Genetic and Evolutionary Computation Conference | 2004
Jeffrey K. Bassett; Mitchell A. Potter; Kenneth A. De Jong
In this paper we show how tools based on extensions of Price’s equation allow us to look inside production-level EAs to see how selection, representation, and reproductive operators interact with each other, and how these interactions affect EA performance. With such tools it is possible to understand at a deeper level how existing EAs work as well as provide support for making better design decisions involving new EC applications.
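The Price's-equation machinery behind these tools can be shown with toy numbers (all values below are illustrative): the change in a population's mean trait decomposes into a selection term, the covariance between fitness and trait, and a transmission term, the fitness-weighted change from parent to offspring:

```python
# Hypothetical one-generation data: each parent i has trait z[i], produces
# w[i] offspring, whose mean trait is zo[i].
z  = [1.0, 2.0, 3.0, 4.0]   # parent trait values
w  = [1, 2, 3, 2]           # offspring counts (fitness)
zo = [1.2, 2.0, 2.9, 4.1]   # mean trait of each parent's offspring

n = len(z)
w_bar = sum(w) / n
z_bar = sum(z) / n

# Selection term: covariance between fitness and trait, scaled by mean fitness.
cov_wz = sum((wi - w_bar) * (zi - z_bar) for wi, zi in zip(w, z)) / n
selection = cov_wz / w_bar

# Transmission term: fitness-weighted mean parent-to-offspring change, which
# in an EA captures the effect of the reproductive operators.
transmission = sum(wi * (zi_o - zi)
                   for wi, zi, zi_o in zip(w, z, zo)) / (n * w_bar)

# Price's equation: total change in the mean trait across one generation.
delta_z_bar = selection + transmission
```

Computing the two terms separately is what lets these tools attribute observed progress to selection versus the reproductive operators.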
Genetic and Evolutionary Computation Conference | 2005
Jeffrey K. Bassett; Mitchell A. Potter; Kenneth A. De Jong
Several researchers have used Price's equation (from the biology literature) to analyze the various components of an evolutionary algorithm (EA) while it is running, giving insights into the components' contributions and interactions. While their results are interesting, they are also limited by the fact that Price's equation was designed to work with averages of population fitness. The EA practitioner, on the other hand, is typically interested in the best individuals in the population, not the average. In this paper we introduce an approach to using Price's equation that instead calculates the upper tails of population distributions. By applying Price's equation to EAs that use survival selection instead of parent selection, this information is calculated automatically.
Archive | 2011
Thomas Apker; Mitchell A. Potter
Physicomimetics is a simple and scalable means of controlling multiple agents, provided the agents can perform the maneuvers required by the forces applied to them. For most physical agents, such as wheeled vehicles and fixed-wing aircraft, physical constraints such as motor power and stall speed limit the ability of the agents to respond to physicomimetic inputs. We identified four factors (maximum turn rate, controller time resolution, maximum speed, and minimum speed) that must be accounted for in the design of the agent model in order to allow good swarming behavior. To address them, we developed an extended body agent model consisting of two particles, one in front of the vehicle's rotation center and one behind. This allowed us to explicitly determine the agent's direction of motion and, combined with nonlinear checks to avoid unachievable commands, allowed us to develop agent models whose behavior was still intuitively controllable and analyzable but which respected the constraints of our physical robots. We also defined a dynamic friction term that penalized speed in cluttered environments and excessive or unstable oscillations to address the fact that under asynchronous distributed control no single set of friction parameters worked in all cases.
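The extended-body model can be sketched as follows; the constraint values and particle offset are hypothetical, and the dynamic friction term is omitted for brevity. The front and rear particles make the heading explicit, and the step routine clips commands that exceed the turn-rate and speed limits:

```python
import math

# Hypothetical constraint values for a small wheeled robot.
MAX_SPEED, MIN_SPEED = 2.0, 0.2    # m/s (minimum speed models a stall floor)
MAX_TURN = math.radians(30)        # max heading change per control step
DT = 0.1                           # controller time resolution (s)

class ExtendedBodyAgent:
    # Two particles, one ahead of and one behind the rotation center, make
    # the agent's direction of motion explicit.
    def __init__(self, x, y, heading, offset=0.3):
        self.x, self.y, self.heading, self.offset = x, y, heading, offset
        self.speed = MIN_SPEED

    def front(self):
        return (self.x + self.offset * math.cos(self.heading),
                self.y + self.offset * math.sin(self.heading))

    def rear(self):
        return (self.x - self.offset * math.cos(self.heading),
                self.y - self.offset * math.sin(self.heading))

    def step(self, fx, fy):
        # Turn toward the physicomimetic force vector, with nonlinear checks
        # that clip unachievable turn-rate and speed commands.
        desired = math.atan2(fy, fx)
        turn = (desired - self.heading + math.pi) % (2 * math.pi) - math.pi
        turn = max(-MAX_TURN, min(MAX_TURN, turn))       # turn-rate limit
        self.heading += turn
        self.speed = max(MIN_SPEED,                      # stall-speed floor
                         min(MAX_SPEED, math.hypot(fx, fy)))
        self.x += self.speed * DT * math.cos(self.heading)
        self.y += self.speed * DT * math.sin(self.heading)

agent = ExtendedBodyAgent(0.0, 0.0, heading=0.0)
agent.step(fx=0.0, fy=5.0)  # force perpendicular to the current heading
```

A point-mass agent would snap instantly to the commanded direction; here the sideways force only turns the agent by the clipped 30 degrees per step, which is the constraint-respecting behavior the chapter argues for.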
Congress on Evolutionary Computation | 2005
Mitchell A. Potter; R.P. Wiegand; H.J. Blumenthal; Donald A. Sofge
Seeding the population of an evolutionary algorithm with solutions from previous runs has proved to be useful when learning control strategies for agents operating in a complex, changing environment. It has generally been assumed that initializing a learning algorithm with previously learned solutions will be helpful if the new problem is similar to the old. We will show that this assumption sometimes does not hold for many reasonable similarity metrics. Using a more traditional machine learning perspective, we explain why seeding is sometimes not helpful by looking at the learning-experience bias produced by the previously evolved solutions.
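The seeding idea can be sketched with a toy learner (a simple hill climber standing in for the evolutionary run, with hypothetical problem and parameter choices): in this toy setup, where the new optimum is near the old one, starting from a previously learned solution outpaces a random start within the same budget:

```python
import random

def sphere(x, optimum):
    # Hypothetical task: minimize squared distance to a (moved) optimum.
    return sum((xi - oi) ** 2 for xi, oi in zip(x, optimum))

def hill_climb(start, optimum, steps=100, rng=None):
    # A deliberately simple learner standing in for the evolutionary run.
    rng = rng or random.Random(0)
    best = list(start)
    for _ in range(steps):
        cand = [xi + rng.gauss(0, 0.1) for xi in best]
        if sphere(cand, optimum) < sphere(best, optimum):
            best = cand
    return sphere(best, optimum)

rng = random.Random(1)
old_solution = [0.05] * 5                  # evolved for the old optimum near 0
random_start = [rng.uniform(-5, 5) for _ in range(5)]
new_optimum = [0.5] * 5                    # new problem similar to the old one

seeded = hill_climb(old_solution, new_optimum)    # seeded initialization
unseeded = hill_climb(random_start, new_optimum)  # random initialization
```

The paper's point is that this advantage is not guaranteed: the seeds also bias the learner's early experience, so for some similarity metrics a "similar" new problem can still make seeding unhelpful.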