
Publications


Featured research published by Mark Edgington.


International Conference on Machine Learning and Applications | 2007

Performance evaluation of EANT in the robocup keepaway benchmark

Jan Hendrik Metzen; Mark Edgington; Yohannes Kassahun; Frank Kirchner

Several methods have been proposed for solving reinforcement learning (RL) problems. In addition to temporal difference (TD) methods, evolutionary algorithms (EAs) are among the most promising approaches. The relative performance of these approaches in certain subdomains of the general RL problem remains an open question at this time. In addition to theoretical analysis, benchmarks are one of the most important tools for comparing different RL methods in certain problem domains. A recently proposed RL benchmark problem is the Keepaway benchmark, which is based on the RoboCup Soccer Simulator. This benchmark is one of the most challenging multiagent learning problems because its state space is continuous and high-dimensional, and both the sensors and actuators are noisy. In this paper we analyze the performance of the neuroevolutionary approach called Evolutionary Acquisition of Neural Topologies (EANT) in the Keepaway benchmark, and compare the results obtained using EANT with the results of other algorithms tested on the same benchmark.


Genetic and Evolutionary Computation Conference | 2007

A common genetic encoding for both direct and indirect encodings of networks

Yohannes Kassahun; Mark Edgington; Jan Hendrik Metzen; Gerald Sommer; Frank Kirchner

In this paper we present a Common Genetic Encoding (CGE) for networks that can be applied to both direct and indirect encoding methods. As a direct encoding method, CGE allows the implicit evaluation of an encoded phenotype without the need to decode the phenotype from the genotype. On the other hand, one can easily decode the structure of a phenotype network, since its topology is implicitly encoded in the genotype's gene order. Furthermore, we illustrate how CGE can be used for the indirect encoding of networks. CGE has useful properties that make it suitable for evolving neural networks. A formal definition of the encoding is given, and some of the important properties of the encoding are proven, such as its closure under mutation operators, its completeness in representing any phenotype network, and the existence of an algorithm that can evaluate any given phenotype without running into an infinite loop.
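The implicit-evaluation idea described above can be sketched in a few lines. This is not the authors' CGE implementation; the gene tuples, the tanh activation, and the prefix gene ordering are assumptions made for illustration. The point is that a network stored as a flat linear genome can be evaluated with a stack, without ever decoding it into an explicit graph:

```python
# Minimal sketch: evaluate a network stored as a linear genome.
# Each gene is either ("in", value) for an input terminal, or
# ("neuron", weight_list) whose operands are the genes that follow it.
import math

def evaluate(genome):
    """Evaluate a prefix-ordered linear genome with a stack,
    scanning from the last gene to the first."""
    stack = []
    for kind, payload in reversed(genome):
        if kind == "in":          # input terminal: push its value
            stack.append(payload)
        else:                     # neuron: pop one operand per weight
            s = sum(w * stack.pop() for w in payload)
            stack.append(math.tanh(s))  # squashed neuron output
    return stack.pop()            # output of the root neuron

# Genome for y = tanh(0.5*x1 - 1.0*tanh(2.0*x2)), with x1=0.3, x2=0.1:
genome = [("neuron", [0.5, -1.0]),
          ("in", 0.3),
          ("neuron", [2.0]),
          ("in", 0.1)]
print(evaluate(genome))
```

Because the topology is implicit in the gene order, mutation operators can splice new neuron genes into the list and the same evaluator still works unchanged.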


Genetic and Evolutionary Computation Conference | 2008

Accelerating neuroevolutionary methods using a Kalman filter

Yohannes Kassahun; Jose de Gea; Mark Edgington; Jan Hendrik Metzen; Frank Kirchner

In recent years, neuroevolutionary methods have shown great promise in solving learning tasks, especially in domains that are stochastic, partially observable, and noisy. In this paper, we show how the Kalman filter can be exploited (1) to efficiently find an optimal solution (i.e., reducing the number of evaluations needed to find the solution), (2) to find solutions that are robust against noise, and (3) to recover or reconstruct missing state variables, a task traditionally known as state estimation in the control engineering community. Our algorithm has been tested on the double pole balancing without velocities benchmark, and has achieved significantly better results on this benchmark than the published results of other algorithms to date.
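The state-estimation role of the Kalman filter mentioned in point (3) can be illustrated with a toy example. This is not the paper's setup; the 1-D constant-velocity model and all noise parameters below are assumptions. It shows a filter reconstructing an unobserved velocity from noisy position readings, analogous to the missing velocities in the pole-balancing benchmark:

```python
# Sketch: a 1-D constant-velocity Kalman filter that estimates
# [position, velocity] while observing only noisy positions.
import numpy as np

def kalman_track(positions, dt=0.1, q=1e-3, r=0.05):
    F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition
    H = np.array([[1.0, 0.0]])             # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.zeros((2, 1))                    # state: [position, velocity]
    P = np.eye(2)                           # state covariance
    estimates = []
    for z in positions:
        # predict step
        x = F @ x
        P = F @ P @ F.T + Q
        # update step
        y = np.array([[z]]) - H @ x         # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.ravel().copy())
    return np.array(estimates)

# Noisy position readings of an object moving at 2 units/s:
rng = np.random.default_rng(0)
true_pos = 2.0 * 0.1 * np.arange(100)
obs = true_pos + rng.normal(0.0, 0.05, size=100)
est = kalman_track(obs)
print(est[-1])  # final [position, velocity]; velocity should be near 2
```

The velocity component of the state is never measured, yet the filter recovers it from the position sequence, which is exactly the kind of hidden variable a neuroevolved controller would otherwise have to infer on its own.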


KI '07 Proceedings of the 30th Annual German Conference on Advances in Artificial Intelligence | 2007

A General Framework for Encoding and Evolving Neural Networks

Yohannes Kassahun; Jan Hendrik Metzen; Jose de Gea; Mark Edgington; Frank Kirchner

In this paper we present a novel general framework for encoding and evolving networks called Common Genetic Encoding (CGE) that can be applied to both direct and indirect encoding methods. The encoding has important properties that make it suitable for evolving neural networks: (1) It is complete, in that it is able to represent all types of valid phenotype networks. (2) It is closed, i.e., every valid genotype represents a valid phenotype. Similarly, the encoding is closed under genetic operators such as structural mutation and crossover that act upon the genotype. Moreover, the encoding's genotype can be seen as a composition of several subgenomes, which makes it inherently support the evolution of modular networks in both the direct and indirect encoding cases. To demonstrate our encoding, we present an experiment where direct encoding is used to learn the dynamic model of a two-link arm robot. We also provide an illustration of how the indirect-encoding features of CGE can be used in the area of artificial embryogeny.


Intelligent Robots and Systems | 2009

Dynamic motion modelling for legged robots

Mark Edgington; Yohannes Kassahun; Frank Kirchner

An accurate motion model is an important component in modern-day robotic systems, but building such a model for a complex system often requires an appreciable amount of manual effort. In this paper we present a motion model representation, the Dynamic Gaussian Mixture Model (DGMM), that alleviates the need to manually design the form of a motion model, and provides a direct means of incorporating auxiliary sensory data into the model. This representation and its accompanying algorithms are validated experimentally using an 8-legged kinematically complex robot, as well as a standard benchmark dataset. The presented method not only learns the robot's motion model, but also improves the model's accuracy by incorporating information about the terrain surrounding the robot.
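The core mechanism behind Gaussian-mixture motion models can be sketched as follows. This is not the paper's DGMM, and the two components below use made-up numbers: a joint mixture over (command, displacement) pairs is conditioned on a motor command to predict the resulting motion:

```python
# Sketch of Gaussian-mixture regression: condition a joint mixture
# over (command, displacement) on a command to predict displacement.
import numpy as np

# Two hand-set components, e.g. a slow gait and a fast gait
# (hypothetical numbers, not learned from a robot).
weights = np.array([0.5, 0.5])
means = np.array([[0.2, 0.05],   # [command, displacement]
                  [0.8, 0.30]])
covs = np.array([[[0.01, 0.002], [0.002, 0.001]],
                 [[0.02, 0.004], [0.004, 0.003]]])

def predict_displacement(command):
    """E[displacement | command] under the joint Gaussian mixture."""
    resp, cond_means = [], []
    for w, mu, S in zip(weights, means, covs):
        var_x = S[0, 0]
        # responsibility: w_k * N(command; mu_x, S_xx)
        lik = np.exp(-0.5 * (command - mu[0])**2 / var_x) \
              / np.sqrt(2.0 * np.pi * var_x)
        resp.append(w * lik)
        # conditional mean: mu_y + S_yx / S_xx * (command - mu_x)
        cond_means.append(mu[1] + S[1, 0] / var_x * (command - mu[0]))
    resp = np.array(resp) / np.sum(resp)
    return float(resp @ np.array(cond_means))

print(predict_displacement(0.2))  # dominated by the slow component
print(predict_displacement(0.8))  # dominated by the fast component
```

Auxiliary sensory data, such as terrain information, would simply become extra dimensions of the joint density, conditioned on in the same way.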


Archive | 2009

Incremental Acquisition of Neural Structures through Evolution

Yohannes Kassahun; Jan Hendrik Metzen; Mark Edgington; Frank Kirchner

In this chapter we present a novel method, called Evolutionary Acquisition of Neural Topologies (EANT), for evolving the structures and weights of neural networks. The method uses an efficient and compact genetic encoding of a neural network into a linear genome that enables a network's outputs to be computed without the network being decoded. Furthermore, it uses a nature-inspired meta-level evolutionary process where new structures are explored at a larger timescale, and existing structures are exploited at a smaller timescale. Because of this, the method is able to find minimal neural structures for solving a given learning task.


Parallel Problem Solving from Nature | 2008

Learning Walking Patterns for Kinematically Complex Robots Using Evolution Strategies

Malte Römmermann; Mark Edgington; Jan Hendrik Metzen; Jose de Gea; Yohannes Kassahun; Frank Kirchner

Manually developing walking patterns for kinematically complex robots can be a challenging and time-consuming task. In order to automate this design process, a learning system that generates, tests, and optimizes different walking patterns is needed, as well as the ability to accurately simulate a robot and its environment. In this work, we describe a learning system that uses the CMA-ES method from evolutionary computation to learn walking patterns for a complex legged robot. The robot's limbs are controlled using parametrized distorted sine waves, and the evolutionary algorithm optimizes the parameters of these waveforms, testing the walking patterns in a physical simulation. The best solution evolved by this system has been transferred to and tested on a real robot, resulting in a gait superior to those previously designed by hand.
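The generate-test-optimize loop described above can be sketched with a simplified (mu, lambda) evolution strategy standing in for full CMA-ES. The gait parameters and the fitness function here are invented surrogates (a quadratic stand-in for "distance walked in simulation"), not the paper's simulator:

```python
# Toy sketch: an evolution strategy tunes the [amplitude, frequency,
# phase] parameters of a sine-wave joint trajectory to maximize a
# surrogate walking-distance score.
import numpy as np

rng = np.random.default_rng(1)
TARGET = np.array([0.6, 2.0, 0.3])  # hypothetical ideal gait parameters

def fitness(params):
    """Surrogate for simulated walking distance; peaks at TARGET."""
    return -np.sum((params - TARGET) ** 2)

def evolve(generations=60, lam=20, mu=5, sigma=0.5):
    mean = np.zeros(3)               # start from a neutral gait
    for _ in range(generations):
        # generate: sample lam candidate gaits around the current mean
        pop = mean + sigma * rng.normal(size=(lam, 3))
        # test: score each candidate in the (surrogate) simulation
        scores = np.array([fitness(p) for p in pop])
        # optimize: recombine the best mu candidates
        elite = pop[np.argsort(scores)[-mu:]]
        mean = elite.mean(axis=0)
        sigma *= 0.95                 # simple step-size decay
    return mean

best = evolve()
print(best)  # should approach TARGET = [0.6, 2.0, 0.3]
```

CMA-ES differs from this sketch mainly in that it also adapts a full covariance matrix for the sampling distribution, which matters when the gait parameters interact strongly.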


Parallel Problem Solving from Nature | 2008

Evolving Neural Networks for Online Reinforcement Learning

Jan Hendrik Metzen; Mark Edgington; Yohannes Kassahun; Frank Kirchner

For many complex Reinforcement Learning problems with large and continuous state spaces, neuroevolution (the evolution of artificial neural networks) has achieved promising results. This is especially true when there is noise in sensor and/or actuator signals. These results have mainly been obtained in offline learning settings, where the training and evaluation phases of the system are separated. In contrast, in online Reinforcement Learning tasks, where the actual performance of the system during its learning phase matters, the results of neuroevolution are significantly impaired by its purely exploratory nature, meaning that it does not use (i.e., exploit) its knowledge of the performance of single individuals in order to improve its performance during learning. In this paper we describe modifications which significantly improve the online performance of the neuroevolutionary method Evolutionary Acquisition of Neural Topologies (EANT) and discuss the results obtained on two benchmark problems.


Genetic and Evolutionary Computation Conference | 2008

Towards efficient online reinforcement learning using neuroevolution

Jan Hendrik Metzen; Frank Kirchner; Mark Edgington; Yohannes Kassahun

For many complex Reinforcement Learning (RL) problems with large and continuous state spaces, neuroevolution has achieved promising results. This is especially true when there is noise in sensor and/or actuator signals. These results have mainly been obtained in offline learning settings, where the training and the evaluation phases of the system are separated. In contrast, for online RL tasks, the actual performance of a system matters during its learning phase. In these tasks, neuroevolutionary systems are often impaired by their purely exploratory nature, meaning that they usually do not use (i.e., exploit) their knowledge of a single individual's performance to improve performance during learning. In this paper we describe modifications that significantly improve the online performance of the neuroevolutionary method Evolutionary Acquisition of Neural Topologies and discuss the results obtained in the Mountain Car benchmark.


KI '08 Proceedings of the 31st Annual German Conference on Advances in Artificial Intelligence | 2008

EANT+KALMAN: An Efficient Reinforcement Learning Method for Continuous State Partially Observable Domains

Yohannes Kassahun; Jose de Gea; Jan Hendrik Metzen; Mark Edgington; Frank Kirchner

In this contribution we present an extension of a neuroevolutionary method called Evolutionary Acquisition of Neural Topologies (EANT) [11] that allows the evolution of solutions taking the form of a Partially Observable Markov Decision Process (POMDP) agent [8]. The solution we propose involves cascading a Kalman filter [10] (state estimator) and a feed-forward neural network. The extension (EANT+KALMAN) has been tested on the double pole balancing without velocity benchmark, achieving significantly better results than the previously published results of other algorithms.
