Yohannes Kassahun
University of Bremen
Publications
Featured research published by Yohannes Kassahun.
international conference on machine learning and applications | 2007
Jan Hendrik Metzen; Mark Edgington; Yohannes Kassahun; Frank Kirchner
Several methods have been proposed for solving reinforcement learning (RL) problems. In addition to temporal difference (TD) methods, evolutionary algorithms (EA) are among the most promising approaches. The relative performance of these approaches in certain subdomains of the general RL problem remains an open question at this time. In addition to theoretical analysis, benchmarks are one of the most important tools for comparing different RL methods in certain problem domains. A recently proposed RL benchmark problem is the Keepaway benchmark, which is based on the RoboCup Soccer Simulator. This benchmark is one of the most challenging multiagent learning problems because its state space is continuous and high-dimensional, and both the sensors and actuators are noisy. In this paper we analyze the performance of the neuroevolutionary approach called evolutionary acquisition of neural topologies (EANT) in the Keepaway benchmark, and compare the results obtained using EANT with the results of other algorithms tested on the same benchmark.
genetic and evolutionary computation conference | 2007
Yohannes Kassahun; Mark Edgington; Jan Hendrik Metzen; Gerald Sommer; Frank Kirchner
In this paper we present a Common Genetic Encoding (CGE) for networks that can be applied to both direct and indirect encoding methods. As a direct encoding method, CGE allows the implicit evaluation of an encoded phenotype without the need to decode the phenotype from the genotype. On the other hand, one can easily decode the structure of a phenotype network, since its topology is implicitly encoded in the genotype's gene order. Furthermore, we illustrate how CGE can be used for the indirect encoding of networks. CGE has useful properties that make it suitable for evolving neural networks. A formal definition of the encoding is given, and some of its important properties are proven, such as its closure under mutation operators, its completeness in representing any phenotype network, and the existence of an algorithm that can evaluate any given phenotype without running into an infinite loop.
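The key idea of implicit evaluation can be sketched as follows. In a linear genome stored in prefix order, the gene order itself encodes the topology, so a stack-based scan computes the network's outputs without ever building an explicit graph. The gene layout below is a hypothetical simplification, not CGE's exact format:

```python
import math

# Hypothetical gene layout: ("neuron", arity, [weights]) or ("input", index).
# Genes are stored in prefix order, so a right-to-left scan with a stack
# evaluates the encoded network without decoding it into an explicit graph.
def evaluate(genome, inputs):
    stack = []
    for gene in reversed(genome):
        if gene[0] == "input":
            stack.append(inputs[gene[1]])
        else:  # neuron gene: pop its operands, apply a weighted sum + tanh
            _, arity, weights = gene
            operands = [stack.pop() for _ in range(arity)]
            stack.append(math.tanh(sum(w * x for w, x in zip(weights, operands))))
    return stack  # values left on the stack are the network outputs

# A tiny network: one output neuron reading two weighted inputs.
genome = [("neuron", 2, [0.5, -0.5]), ("input", 0), ("input", 1)]
out = evaluate(genome, [1.0, 1.0])
```

Structural mutation then amounts to splicing gene subsequences into the list, which is one reason closure under such operators matters.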
international conference hybrid intelligent systems | 2006
Nils T. Siebel; Yohannes Kassahun
In this article we introduce a method to learn neural networks that solve a visual servoing task. Our method, called EANT, Evolutionary Acquisition of Neural Topologies, starts from a minimal network structure and gradually develops it further using evolutionary reinforcement learning. We have improved EANT by combining it with an optimisation technique called CMA-ES, Covariance Matrix Adaptation Evolution Strategy. Results from experiments with a 3 DOF visual servoing task show that the new CMA-ES-based EANT develops very good networks for visual servoing. Their performance is significantly better than that of networks developed by the original EANT and by traditional visual servoing approaches.
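To give a flavour of evolution-strategy weight optimisation: CMA-ES additionally adapts a full covariance matrix, but a minimal (1+1)-ES with the 1/5th success rule already shows the core loop of mutating a fixed topology's weight vector and adapting the step size. The sphere objective below is a made-up stand-in for a servoing error measure:

```python
import random

# Minimal (1+1)-ES with the 1/5th success rule -- a simplified stand-in for
# how an evolution strategy optimises a network's weight vector (CMA-ES
# further adapts a full covariance matrix over the mutation distribution).
def es_optimize(fitness, weights, sigma=1.0, iterations=200, seed=0):
    rng = random.Random(seed)
    best = fitness(weights)
    for _ in range(iterations):
        candidate = [w + sigma * rng.gauss(0, 1) for w in weights]
        f = fitness(candidate)
        if f < best:            # minimisation: keep successful mutations
            weights, best = candidate, f
            sigma *= 1.22       # widen the search after a success
        else:
            sigma *= 0.82       # narrow it after a failure
    return weights, best

# Toy objective: a sphere function standing in for a servoing error.
w, err = es_optimize(lambda ws: sum(x * x for x in ws), [3.0, -2.0, 1.0])
```

The success-rule constants (1.22, 0.82) are conventional illustrative choices, not values from the paper.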
Artificial Intelligence in Medicine | 2014
Yohannes Kassahun; Roberta Perrone; Elena De Momi; Elmar Berghöfer; Laura Tassi; Maria Paola Canevini; Roberto Spreafico; Giancarlo Ferrigno; Frank Kirchner
OBJECTIVES In the presurgical analysis for drug-resistant focal epilepsies, the definition of the epileptogenic zone, which is the cortical area where ictal discharges originate, is usually carried out by using clinical, electrophysiological and neuroimaging data analysis. Clinical evaluation is based on the visual detection of symptoms during epileptic seizures. This work aims at developing a fully automatic classifier of epileptic types and their localization using ictal symptoms and machine learning methods. METHODS We present the results achieved by using two machine learning methods. The first is an ontology-based classification that can directly incorporate human knowledge, while the second is a genetics-based data mining algorithm that learns or extracts the domain knowledge from medical data in implicit form. RESULTS The developed methods are tested on a clinical dataset of 129 patients. The performance of the methods is measured against the performance of seven clinicians, whose level of expertise is high or very high, in classifying two epilepsy types: temporal lobe epilepsy and extra-temporal lobe epilepsy. When comparing the performance of the algorithms with that of a single clinician (one of the seven), the algorithms show a slightly better performance than the clinician on three test sets generated randomly from 99 of the 129 patients. The accuracy obtained for the two methods and the clinician is as follows: first test set, 65.6% and 75% for the methods and 56.3% for the clinician; second test set, 66.7% and 76.2% for the methods and 61.9% for the clinician; and third test set, 77.8% for both the methods and the clinician. When compared with the performance of the whole population of clinicians on the remaining 30 of the 129 patients, where the patients were selected by the clinicians themselves, the mean accuracy of the methods (60%) is slightly worse than the mean accuracy of the clinicians (61.6%).
Results show that the methods perform at the level of experienced clinicians, when both the methods and the clinicians use the same information. CONCLUSION Our results demonstrate that the developed methods form important ingredients for realizing a fully automatic classification of epilepsy types and can contribute to the definition of signs that are most important for the classification.
Neural Networks | 2013
Alexander Fabisch; Yohannes Kassahun; Hendrik Wöhrle; Frank Kirchner
We examine two methods which are used to deal with complex machine learning problems: compressed sensing and model compression. We discuss both methods in the context of feed-forward artificial neural networks and develop the backpropagation method in compressed parameter space. We further show that compressing the weights of a layer of a multilayer perceptron is equivalent to compressing the input of the layer. Based on this theoretical framework, we use orthogonal functions, and especially random projections, for compression, and perform experiments in supervised and reinforcement learning to demonstrate that the presented methods reduce training time significantly.
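The weight/input equivalence for a single linear unit can be verified in a few lines. With weights w = Φα, where Φ is a fixed random projection and α the few compressed parameters, the response w·x equals α·(Φᵀx), i.e. the compressed parameters acting on a compressed input. All numbers below are arbitrary illustrations:

```python
import random

# Weight-compression / input-compression equivalence for one linear unit:
# if w = Phi @ alpha (Phi a fixed random projection, alpha the compressed
# parameters), then w . x == alpha . (Phi^T x), so gradients w.r.t. alpha
# can be obtained by backpropagating through the compressed input instead.
def matvec(m, v):
    return [sum(mi * vi for mi, vi in zip(row, v)) for row in m]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

rng = random.Random(42)
n, k = 6, 2                                  # 6 inputs, 2 compressed parameters
phi = [[rng.gauss(0, 1) for _ in range(k)] for _ in range(n)]  # n x k projection
alpha = [0.7, -1.3]                          # compressed parameters
x = [rng.gauss(0, 1) for _ in range(n)]      # an input vector

w = matvec(phi, alpha)                       # decompressed weights, length n
phi_t_x = matvec(list(zip(*phi)), x)         # compressed input, length k
lhs, rhs = dot(w, x), dot(alpha, phi_t_x)    # identical activations
```

Training then only ever touches the k entries of alpha, which is where the reduction in training time comes from.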
genetic and evolutionary computation conference | 2008
Yohannes Kassahun; Jose de Gea; Mark Edgington; Jan Hendrik Metzen; Frank Kirchner
In recent years, neuroevolutionary methods have shown great promise in solving learning tasks, especially in domains that are stochastic, partially observable, and noisy. In this paper, we show how the Kalman filter can be exploited (1) to efficiently find an optimal solution (i.e., reducing the number of evaluations needed to find the solution), (2) to find solutions that are robust against noise, and (3) to recover or reconstruct missing state variables, traditionally known as state estimation in the control engineering community. Our algorithm has been tested on the double pole balancing without velocities benchmark, and has achieved significantly better results on this benchmark than the published results of other algorithms to date.
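As a hypothetical illustration of point (2), a scalar Kalman filter can fuse repeated noisy fitness evaluations of a candidate into a low-variance estimate, so fewer evaluations are wasted on noise (the paper integrates filtering into the neuroevolutionary loop itself; the numbers here are invented):

```python
# Scalar Kalman filter estimating a candidate's true fitness from repeated
# noisy evaluations. Each update blends the current estimate with a new
# measurement, weighted by their relative uncertainties.
def kalman_update(estimate, variance, measurement, meas_noise):
    gain = variance / (variance + meas_noise)
    new_estimate = estimate + gain * (measurement - estimate)
    new_variance = (1.0 - gain) * variance
    return new_estimate, new_variance

estimate, variance = 0.0, 1e6        # near-uninformative prior
for noisy_fitness in [9.2, 10.6, 10.1, 9.8, 10.3]:
    estimate, variance = kalman_update(estimate, variance, noisy_fitness, 0.5)
```

With an uninformative prior and constant measurement noise, the filter reduces to a running average whose variance shrinks as roughly noise/n, which is exactly the behaviour a noisy-fitness selection scheme needs.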
KI '07 Proceedings of the 30th annual German conference on Advances in Artificial Intelligence | 2007
Yohannes Kassahun; Jan Hendrik Metzen; Jose de Gea; Mark Edgington; Frank Kirchner
In this paper we present a novel general framework for encoding and evolving networks called Common Genetic Encoding (CGE) that can be applied to both direct and indirect encoding methods. The encoding has important properties that make it suitable for evolving neural networks: (1) It is complete, in that it is able to represent all types of valid phenotype networks. (2) It is closed, i.e., every valid genotype represents a valid phenotype. Similarly, the encoding is closed under genetic operators such as structural mutation and crossover that act upon the genotype. Moreover, the encoding's genotype can be seen as a composition of several subgenomes, which makes it inherently support the evolution of modular networks in both direct and indirect encoding cases. To demonstrate our encoding, we present an experiment where direct encoding is used to learn the dynamic model of a two-link arm robot. We also provide an illustration of how the indirect-encoding features of CGE can be used in the area of artificial embryogeny.
intelligent robots and systems | 2009
Mark Edgington; Yohannes Kassahun; Frank Kirchner
An accurate motion model is an important component in modern-day robotic systems, but building such a model for a complex system often requires an appreciable amount of manual effort. In this paper we present a motion model representation, the Dynamic Gaussian Mixture Model (DGMM), that alleviates the need to manually design the form of a motion model, and provides a direct means of incorporating auxiliary sensory data into the model. This representation and its accompanying algorithms are validated experimentally using an 8-legged kinematically complex robot, as well as a standard benchmark dataset. The presented method not only learns the robot's motion model, but also improves the model's accuracy by incorporating information about the terrain surrounding the robot.
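The core prediction step of a mixture-based motion model can be sketched in simplified 1-D form: each component holds a joint mean over (command, displacement), and the predicted displacement weights the component means by how well each component explains the observed command. This is a generic Gaussian-mixture conditional, not the paper's DGMM update, and all component values are made up:

```python
import math

# Simplified mixture-model motion prediction: weight each component's
# displacement mean by the component's likelihood of the observed command.
def predict(components, command):
    weights, preds = [], []
    for prior, cmd_mean, cmd_var, disp_mean in components:
        likelihood = math.exp(-((command - cmd_mean) ** 2) / (2 * cmd_var))
        weights.append(prior * likelihood)
        preds.append(disp_mean)
    total = sum(weights)
    return sum(wt * p for wt, p in zip(weights, preds)) / total

# Two hypothetical regimes, e.g. flat ground vs. rough terrain:
# (prior, command mean, command variance, displacement mean).
components = [(0.5, 1.0, 0.1, 0.9), (0.5, 2.0, 0.1, 1.4)]
```

Auxiliary sensory data (such as terrain readings) would enter by extending the conditioning variables of each component, which is the role terrain information plays in the paper's model.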
international conference on artificial neural networks | 2012
Yohannes Kassahun; Hendrik Wöhrle; Alexander Fabisch; Marc Tabie
We present a novel method of reducing the training time by learning the parameters of a model at hand in compressed parameter space. In compressed parameter space the parameters of the model are represented by fewer parameters, and hence training can be faster. After training, the parameters of the model can be generated from the parameters in compressed parameter space. We show that for supervised learning, learning the parameters of a model in compressed parameter space is equivalent to learning the parameters of the model in compressed input space. We have applied our method to a supervised learning domain and show that a solution can be obtained much faster than by learning in uncompressed parameter space. For reinforcement learning, we show empirically that directly searching the parameters of a policy in compressed parameter space accelerates learning.
Archive | 2009
Yohannes Kassahun; Jan Hendrik Metzen; Mark Edgington; Frank Kirchner
In this chapter we present a novel method, called Evolutionary Acquisition of Neural Topologies (EANT), for evolving the structures and weights of neural networks. The method uses an efficient and compact genetic encoding of a neural network into a linear genome that enables a network's outputs to be computed without the network being decoded. Furthermore, it uses a nature-inspired meta-level evolutionary process where new structures are explored at a larger timescale, and existing structures are exploited at a smaller timescale. Because of this, the method is able to find minimal neural structures for solving a given learning task.