Publications


Featured research published by David E. Moriarty.


Machine Learning | 1996

Efficient reinforcement learning through symbiotic evolution

David E. Moriarty; Risto Miikkulainen

This article presents a new reinforcement learning method called SANE (Symbiotic, Adaptive Neuro-Evolution), which evolves a population of neurons through genetic algorithms to form a neural network capable of performing a task. Symbiotic evolution promotes both cooperation and specialization, which results in a fast, efficient genetic search and discourages convergence to suboptimal solutions. In the inverted pendulum problem, SANE formed effective networks 9 to 16 times faster than the Adaptive Heuristic Critic and 2 times faster than Q-learning and the GENITOR neuro-evolution approach, without loss of generalization. Such efficient learning, combined with few domain assumptions, makes SANE a promising approach to a broad range of reinforcement learning problems, including many real-world applications.
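
The neuron-level search described above can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the XOR task, the ReLU hidden units, the population sizes, and the simple elitist crossover are all our own assumptions.

```python
import random

random.seed(0)

# Toy SANE-style symbiotic neuro-evolution (illustrative, not the authors'
# implementation).  Each individual is a single hidden neuron: two input
# weights, a bias, and an output weight.  Complete networks are formed by
# sampling subsets of neurons, and every neuron is credited with the
# fitness of each network it participated in.

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def random_neuron():
    return [random.uniform(-1, 1) for _ in range(4)]  # w1, w2, bias, w_out

def network_error(neurons):
    """Sum of squared errors of the assembled network on XOR."""
    err = 0.0
    for (x1, x2), target in XOR:
        out = sum(wo * max(0.0, w1 * x1 + w2 * x2 + b)
                  for w1, w2, b, wo in neurons)
        err += (out - target) ** 2
    return err

def evolve(pop_size=40, net_size=5, trials=100, generations=40):
    pop = [random_neuron() for _ in range(pop_size)]
    first_best = None
    best_err = float("inf")
    for _ in range(generations):
        credit = [0.0] * pop_size
        counts = [0] * pop_size
        for _ in range(trials):
            idx = random.sample(range(pop_size), net_size)
            err = network_error([pop[i] for i in idx])
            best_err = min(best_err, err)
            for i in idx:
                credit[i] -= err       # lower error -> higher credit
                counts[i] += 1
        if first_best is None:
            first_best = best_err      # best network seen in generation 0
        avg = [c / n if n else float("-inf") for c, n in zip(credit, counts)]
        order = sorted(range(pop_size), key=lambda i: avg[i], reverse=True)
        elite = [pop[i] for i in order[: pop_size // 4]]
        pop = elite[:]                 # refill with mutated crossovers
        while len(pop) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, 4)
            pop.append([w + random.gauss(0, 0.2) for w in a[:cut] + b[cut:]])
    return first_best, best_err
```

Note where credit is assigned: a neuron's fitness is the error averaged over the networks it happened to join, so neurons that combine well with many partners survive, which is what drives the cooperation and specialization the article describes.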


Evolutionary Computation | 1997

Forming neural networks through efficient and adaptive coevolution

David E. Moriarty; Risto Miikkulainen

This article demonstrates the advantages of a cooperative, coevolutionary search in difficult control problems. The symbiotic adaptive neuroevolution (SANE) system coevolves a population of neurons that cooperate to form a functioning neural network. In this process, neurons assume different but overlapping roles, resulting in a robust encoding of control behavior. SANE is shown to be more efficient and more adaptive and to maintain higher levels of diversity than the more common network-based population approaches. Further empirical studies illustrate the emergent neuron specializations and the different roles the neurons assume in the population.


Journal of Artificial Intelligence Research | 1999

Evolutionary algorithms for reinforcement learning

David E. Moriarty; Alan C. Schultz; John J. Grefenstette

There are two distinct approaches to solving reinforcement learning problems, namely, searching in value function space and searching in policy space. Temporal difference methods and evolutionary algorithms are well-known examples of these approaches. Kaelbling, Littman and Moore recently provided an informative survey of temporal difference methods. This article focuses on the application of evolutionary algorithms to the reinforcement learning problem, emphasizing alternative policy representations, credit assignment methods, and problem-specific genetic operators. Strengths and weaknesses of the evolutionary approach to reinforcement learning are presented, along with a survey of representative applications.
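
The policy-space approach the article surveys can be sketched on a toy task. Everything concrete here (the corridor environment, the tabular policy encoding, the (mu + lambda) scheme) is our own illustrative choice, not from the article: the point is that fitness is the whole episode's outcome, so credit is assigned to complete policies rather than to individual state values as in temporal-difference methods.

```python
import random

random.seed(1)

# Illustrative policy-space evolutionary search on a 1-D corridor task.
# A policy is a table of left/right choices, one per state; its fitness
# is the furthest state it reaches from state 0, i.e. credit assignment
# happens at the level of whole policies.

N = 10                 # corridor states 0..9; the goal is state 9
ACTIONS = (-1, 1)      # move left or move right

def fitness(policy, max_steps=30):
    """Furthest state reached from state 0 under the policy."""
    s = furthest = 0
    for _ in range(max_steps):
        s = min(N - 1, max(0, s + ACTIONS[policy[s]]))
        furthest = max(furthest, s)
        if s == N - 1:
            break
    return furthest

def mutate(policy, rate=0.2):
    # problem-specific genetic operator: flip each action with prob. rate
    return [1 - a if random.random() < rate else a for a in policy]

def evolve(mu=5, lam=20, generations=30):
    pop = [[random.randrange(2) for _ in range(N)] for _ in range(mu)]
    initial_best = max(fitness(p) for p in pop)
    for _ in range(generations):
        offspring = [mutate(random.choice(pop)) for _ in range(lam)]
        # elitist (mu + lambda) selection: parents compete with offspring
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:mu]
    return initial_best, fitness(pop[0])
```

Because selection is elitist, the best policy found never regresses; the trade-off the article discusses is that nothing inside an episode tells the algorithm *which* actions were responsible for the return.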


Applied Intelligence | 1998

Evolving Neural Networks to Play Go

Norman Richards; David E. Moriarty; Risto Miikkulainen

Go is a difficult game for computers to master, and the best go programs are still weaker than the average human player. Since traditional game-playing techniques have proven inadequate, new approaches to computer go need to be studied. This paper presents a new approach to learning to play go. The SANE (Symbiotic, Adaptive Neuro-Evolution) method was used to evolve networks capable of playing go on small boards with no pre-programmed go knowledge. On a 9 × 9 go board, networks able to defeat a simple computer opponent were evolved within a few hundred generations. Most significantly, the networks exhibited several aspects of general go playing, which suggests the approach could scale up well.


Connection Science | 1995

Discovering Complex Othello Strategies Through Evolutionary Neural Networks

David E. Moriarty; Risto Miikkulainen

An approach to develop new game-playing strategies based on artificial evolution of neural networks is presented. Evolution was directed to discover strategies in Othello against a random-moving opponent and later against an α-β search program. The networks discovered first a standard positional strategy, and subsequently a mobility strategy, an advanced strategy rarely seen outside of tournaments. The latter discovery demonstrates how evolutionary neural networks can develop novel solutions by turning an initial disadvantage into an advantage in a changed environment.


IEEE International Conference on Evolutionary Computation | 1998

Hierarchical evolution of neural networks

David E. Moriarty; Risto Miikkulainen

In most applications of neuro-evolution, each individual in the population represents a complete neural network. Recent work on the SANE system, however, has demonstrated that evolving individual neurons often produces a more efficient genetic search. This paper demonstrates that while SANE can solve easy tasks very quickly, it often stalls in larger problems. A hierarchical approach to neuro-evolution is presented that overcomes SANE's difficulties by integrating both a neuron-level exploratory search and a network-level exploitative search. In a robot arm manipulation task, the hierarchical approach outperforms both a neuron-based search and a network-based search.


World Congress on Computational Intelligence | 1994

Improving game-tree search with evolutionary neural networks

David E. Moriarty; Risto Miikkulainen

Neural networks were evolved to constrain minimax search in the game of Othello. At each level of the search tree, such focus networks decide which moves are to be explored. Based on the evolved knowledge of the minimax algorithm's advantages and limitations, the networks hide problematic nodes from minimax. Focus networks were encoded in marker-based chromosomes and evolved against a full-width minimax opponent using the same heuristic board evaluation function. The focus network was able to guide the minimax search away from poor information, resulting in stronger play while examining far fewer nodes.
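
The focus idea can be sketched on a toy take-away game (take 1 to 3 objects, last take wins). Here a handcrafted scoring function stands in for the evolved focus network, and the game itself is our illustrative choice; only the selective-expansion structure mirrors the paper: at each node, minimax expands only the `width` moves the focus function ranks highest.

```python
# Sketch of focus-guided selective minimax.  The focus_score function is
# a handcrafted stand-in for the evolved focus network of the paper; the
# take-away game is likewise only an illustration.

def legal_moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def focus_score(pile, move):
    # stand-in for the evolved focus network: prefer leaving the
    # opponent a multiple of four, the known winning reply in this game
    return 1.0 if (pile - move) % 4 == 0 else 0.0

def minimax(pile, maximizing, width, counter):
    counter[0] += 1                      # count every node visited
    if pile == 0:
        # the previous player took the last object and won
        return -1 if maximizing else 1
    moves = legal_moves(pile)
    if width is not None:
        # focus step: expand only the top-ranked `width` moves
        moves = sorted(moves, key=lambda m: focus_score(pile, m),
                       reverse=True)[:width]
    values = [minimax(pile - m, not maximizing, width, counter)
              for m in moves]
    return max(values) if maximizing else min(values)

def search(pile, width=None):
    """Return (game value for the player to move, nodes visited)."""
    counter = [0]
    return minimax(pile, True, width, counter), counter[0]
```

In this position, `search(13)` (full width) and `search(13, width=1)` return the same game value, while the focused search visits only a single chain of nodes instead of the full tree.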


Archive | 1997

Symbiotic Evolution of Neural Networks in Sequential Decision Tasks

David E. Moriarty


National Conference on Artificial Intelligence | 1994

Evolving neural networks to focus minimax search

David E. Moriarty; Risto Miikkulainen

Collaboration


Dive into David E. Moriarty's collaborations.

Top Co-Authors

Risto Miikkulainen, University of Texas at Austin
Alan C. Schultz, United States Naval Research Laboratory
Norman Richards, University of Texas at Austin
Christoph Adami, Michigan State University
David M. Bryson, Michigan State University
Fred C. Dyer, Michigan State University
Jeff Clune, Michigan State University
Joel Lehman, University of Texas at Austin