
Publication


Featured research published by Matt Parker.


Computational Intelligence and Games | 2007

The Evolution of Multi-Layer Neural Networks for the Control of Xpilot Agents

Matt Parker; Gary B. Parker

Learning controllers for the space combat game Xpilot is a difficult problem. Using evolutionary computation to evolve the weights for a neural network can create an effective, adaptive controller that does not require extensive programmer input. Previous attempts have been successful in that the controlled agents were transformed from aimless wanderers into interactive agents, but these methods have not resulted in controllers that are competitive with those learned using other methods. In this paper, we present a neural network learning method that uses a genetic algorithm to select the network inputs and node thresholds, along with the connection weights, to evolve competitive Xpilot agents.
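
The genome described in the abstract could be sketched as follows. This is an illustrative assumption of the encoding, not the paper's actual implementation: for each neuron, the genetic algorithm evolves a mask selecting which inputs are used, a firing threshold, and the connection weights. Sizes and the fixed input vector are hypothetical.

```python
import random

# Hypothetical genome layout: per-neuron input-selection masks,
# thresholds, and connection weights, all under evolutionary control.
N_INPUTS = 6
N_NEURONS = 3

def random_genome(rng):
    return {
        "masks": [[rng.random() < 0.5 for _ in range(N_INPUTS)]
                  for _ in range(N_NEURONS)],
        "thresholds": [rng.uniform(0, 1) for _ in range(N_NEURONS)],
        "weights": [[rng.uniform(-1, 1) for _ in range(N_INPUTS)]
                    for _ in range(N_NEURONS)],
    }

def activate(genome, inputs):
    # A neuron fires only if the weighted sum of its *selected*
    # inputs exceeds its evolved threshold.
    out = []
    for mask, theta, w in zip(genome["masks"],
                              genome["thresholds"],
                              genome["weights"]):
        s = sum(wi * xi for use, wi, xi in zip(mask, w, inputs) if use)
        out.append(1 if s > theta else 0)
    return out

rng = random.Random(42)
g = random_genome(rng)
outputs = activate(g, [0.5] * N_INPUTS)
```

In a GA run, such a genome would be flattened into a chromosome, with fitness coming from trials in the game itself.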


Computational Intelligence and Games | 2007

Evolving Parameters for Xpilot Combat Agents

Gary B. Parker; Matt Parker

In this paper we present a new method for evolving autonomous agents that are competitive in the space combat game Xpilot. A genetic algorithm is used to evolve the parameters related to the sensitivity of the agent to input stimuli and the agent's level of reaction to these stimuli. The resultant controllers are comparable to the best hand-programmed artificial Xpilot bots, are competitive with human players, and display interesting behaviors that resemble human strategies.


IEEE International Conference on Evolutionary Computation | 2006

Using a Queue Genetic Algorithm to Evolve Xpilot Control Strategies on a Distributed System

Matt Parker; Gary B. Parker

In this paper, we describe a distributed learning system used to evolve a control program for an agent operating in the network game Xpilot. This system, which we refer to as a queue genetic algorithm, is a steady state genetic algorithm that uses stochastic selection and first-in-first-out replacement. We employ it to distribute fitness evaluations over a local network of dissimilar computers. The system made full use of our available computers while evolving successful controller solutions that were comparable to those evolved using a regular generational genetic algorithm.
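
The core mechanism named in the abstract — a steady-state GA with stochastic selection and first-in-first-out replacement — can be sketched minimally as below. All parameters and the toy fitness function are illustrative assumptions; in the paper, each fitness evaluation would be an Xpilot trial dispatched to a machine on the network, and FIFO replacement is what lets evaluations return asynchronously without stalling the population.

```python
import random
from collections import deque

QUEUE_SIZE = 20   # assumed population/queue size
GENOME_LEN = 8    # assumed genome length

def fitness(genome):
    # Placeholder fitness; in the paper this would be a game trial.
    return sum(genome)

def select(queue):
    # Stochastic, fitness-proportional selection from the queue.
    total = sum(f for _, f in queue)
    r = random.uniform(0, total)
    for genome, f in queue:
        r -= f
        if r <= 0:
            return genome
    return queue[-1][0]

def crossover_mutate(a, b):
    point = random.randrange(1, GENOME_LEN)
    child = a[:point] + b[point:]
    child[random.randrange(GENOME_LEN)] = random.random()
    return child

random.seed(0)
queue = deque(maxlen=QUEUE_SIZE)  # maxlen gives FIFO eviction
for _ in range(QUEUE_SIZE):
    g = [random.random() for _ in range(GENOME_LEN)]
    queue.append((g, fitness(g)))

for _ in range(200):
    child = crossover_mutate(select(queue), select(queue))
    queue.append((child, fitness(child)))  # oldest member is replaced

best = max(f for _, f in queue)
```

Because each append simply evicts the oldest individual, evaluated children can be inserted in whatever order the worker machines finish them.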


Congress on Evolutionary Computation | 2005

Evolving autonomous agent control in the Xpilot environment

Gary B. Parker; Matt Parker; Steven D. Johnson

Interactive combat games are useful as test-beds for learning systems employing evolutionary computation. Of particular value are games that can be modified to accommodate differing levels of complexity. In this paper, the authors present Xpilot as a learning environment that can be used to evolve primitive reactive behaviors, yet is complex enough to require combat strategies and team cooperation. In addition, the environment is used with a genetic algorithm to learn the weights for an artificial neural network controller that provides both offensive and defensive reactive control for an autonomous agent.


Computational Intelligence and Games | 2008

Visual control in Quake II with a cyclic controller

Matt Parker; Bobby D. Bryant

A cyclic controller is evolved in the first-person shooter Quake II to learn to attack a randomly moving enemy in a simple room using only visual inputs. The chromosome of a genetic algorithm represents a cyclical controller that reads grayscale information from the gameplay screen to determine how far to jump forward in the program and which actions to perform. The cyclic controller learns to find and shoot the enemy effectively, and outperforms our previously published neural network solution for the same problem.
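
The control scheme in the abstract can be illustrated with a toy interpreter. This is a hypothetical sketch, not the paper's encoding: each gene pairs an action with a grayscale threshold and two jump offsets, the program counter wraps around the chromosome (hence "cyclic"), and the screen reading decides which jump is taken.

```python
ACTIONS = ["turn_left", "turn_right", "forward", "shoot"]  # assumed action set

# Each gene: (grayscale threshold, jump-if-below, jump-if-at-or-above, action)
chromosome = [
    (0.3, 1, 2, "turn_left"),
    (0.6, 2, 1, "forward"),
    (0.5, 1, 3, "shoot"),
    (0.4, 3, 1, "turn_right"),
]

def step(pc, gray):
    # Emit the current gene's action, then jump forward by an amount
    # chosen by comparing the screen reading against the threshold.
    threshold, jump_lo, jump_hi, action = chromosome[pc]
    jump = jump_lo if gray < threshold else jump_hi
    return (pc + jump) % len(chromosome), action   # wrap: cyclic program

pc = 0
trace = []
for gray in [0.1, 0.9, 0.5, 0.2]:   # stand-in grayscale screen readings
    pc, action = step(pc, gray)
    trace.append(action)
```

In the evolved setting, the thresholds, jumps, and actions would all come from the GA chromosome, and `gray` from sampled screen pixels.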


International Symposium on Neural Networks | 2008

Neuro-visual control in the Quake II game engine

Matt Parker; Bobby D. Bryant

The first-person shooter Quake II is used as a platform to test neuro-visual control and retina input layouts. Agents are trained to shoot a moving enemy as quickly as possible in a visually simple environment, using a neural network controller with evolved weights. Two retina layouts are tested, each with the same number of inputs: first, a graduated density retina which focuses near the center of the screen and blurs outward; second, a uniform retina which focuses evenly across the screen. Results show that the graduated density retina learns more successfully than the uniform retina.
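
The two retina layouts compared in the abstract can be sketched as mappings from retina cells to screen columns. Screen width, cell count, and the cubic warp are illustrative assumptions; the point is that both layouts use the same number of inputs, but one spaces them evenly while the other concentrates them at the screen center.

```python
SCREEN_W = 320   # assumed screen width in pixels
N_CELLS = 16     # same number of retina inputs for both layouts

def uniform_retina():
    # Evenly spaced sample columns across the screen.
    return [int((i + 0.5) * SCREEN_W / N_CELLS) for i in range(N_CELLS)]

def graduated_retina():
    # Denser near the center, sparser ("blurred") toward the edges:
    # a cubic warp has near-zero slope at the center, packing cells there.
    cols = []
    for i in range(N_CELLS):
        t = (i + 0.5) / N_CELLS                 # 0..1 across the retina
        warped = 0.5 + 4 * (t - 0.5) ** 3       # center-dense warp
        cols.append(int(warped * SCREEN_W))
    return cols

uniform = uniform_retina()
graduated = graduated_retina()
```

With these numbers the two central graduated cells land almost adjacent while the outermost pair are tens of pixels apart, whereas the uniform layout keeps constant spacing.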


Congress on Evolutionary Computation | 2005

Evolution and prioritization of survival strategies for a simulated robot in Xpilot

Gary B. Parker; Timothy S. Doherty; Matt Parker

Simulated evolution by means of genetic algorithms (GAs) is presented as the solution to a two-faceted problem: the challenge for an autonomous agent to learn the reactive component of multiple survival strategies, while simultaneously determining the relative importance of these strategies as the agent encounters changing multivariate obstacles. The agent's ultimate purpose is to prolong its survival; it must learn to navigate its space, avoiding obstacles, while engaged in combat with an opposing agent. The GA-learned rule-based controller significantly improved the agent's survivability in the hostile Xpilot environment.


Congress on Evolutionary Computation | 2009

Lamarckian neuroevolution for visual control in the Quake II environment

Matt Parker; Bobby D. Bryant

A combination of backpropagation and neuroevolution is used to train a neural network visual controller for agents in the Quake II environment. The agents must learn to shoot an enemy opponent in a semi-visually complex environment using only raw visual inputs. A comparison is made between using normal neuroevolution and using neuroevolution combined with backpropagation for Lamarckian adaptation. The supervised backpropagation imitates a hand-coded controller that uses non-visual inputs. Results show that using backpropagation in combination with neuroevolution trains the visual neural network controller much faster and more successfully.
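
The Lamarckian combination described in the abstract can be sketched on a toy task. Everything here is an illustrative assumption (a linear "network", a known linear supervisor, small population): the essential point is that each genome's lifetime backpropagation toward the supervisor's outputs is written back into the genome before selection, rather than discarded as in Darwinian neuroevolution.

```python
import random

def forward(w, x):
    # Toy one-neuron linear "network".
    return sum(wi * xi for wi, xi in zip(w, x))

def backprop_step(w, x, target, lr=0.1):
    # One gradient step on squared error toward the supervisor's output.
    err = forward(w, x) - target
    return [wi - lr * err * xi for wi, xi in zip(w, x)]

def supervisor(x):
    # Stand-in for the supervising (non-visual) controller.
    return 2.0 * x[0] - 1.0 * x[1]

random.seed(1)
data = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(50)]
pop = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(10)]

for generation in range(30):
    # Lamarckian step: lifetime learning is written back into each genome.
    for i, w in enumerate(pop):
        for x in data:
            w = backprop_step(w, x, supervisor(x))
        pop[i] = w
    # Evolution step: keep the fitter half, refill with mutated copies.
    pop.sort(key=lambda w: sum((forward(w, x) - supervisor(x)) ** 2
                               for x in data))
    half = pop[:5]
    pop = half + [[wi + random.gauss(0, 0.1) for wi in w] for w in half]

best = pop[0]   # converges toward the supervisor's weights [2.0, -1.0]
```

In the paper's setting, the supervisor drives a visual network toward a policy, while evolution still selects on actual game fitness.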


Computational Intelligence and Games | 2009

Backpropagation without human supervision for visual control in Quake II

Matt Parker; Bobby D. Bryant

Backpropagation and neuroevolution are used in a Lamarckian evolution process to train a neural network visual controller for agents in the Quake II environment. In previous work, we hand-coded a non-visual controller to serve as the supervisor for backpropagation, but hand-coding is only possible for problems with known solutions. In this research, the agent's problem is to attack a moving enemy in a visually complex room with a large central pillar. Because we did not know a solution to the problem, we could not hand-code a supervising controller; instead, we evolve a non-visual neural network as supervisor to the visual controller. This setup creates controllers that learn much faster and achieve greater fitness than those trained by neuroevolution alone on the same problem in the same amount of time.


IEEE International Conference on Evolutionary Computation | 2006

The Incremental Evolution of Attack Agents in Xpilot

Gary B. Parker; Matt Parker

In the research presented in this paper, we use incremental evolution to learn multifaceted neural network (NN) controllers for agents operating in the space game Xpilot. Behavioral components specific to the accomplishment of specific tasks, such as bullet-dodging, shooting, and closing on an enemy, are learned in the first increment. These behavioral components are used in the second increment to evolve a NN that prioritizes the output of a two-layer NN depending on the agent's current situation.

Collaboration


Dive into Matt Parker's collaborations.

Top Co-Authors


Bryce Himebaugh

Indiana University Bloomington


Jonathan W. Mills

Indiana University Bloomington


Brian Kopecky

Indiana University Bloomington


Chen Zhang

Indiana University Bloomington


Chris Weilemann

Indiana University Bloomington


Craig A. Shue

Worcester Polytechnic Institute
