Nils T. Siebel
University of Kiel
Publications
Featured research published by Nils T. Siebel.
Hybrid Intelligent Systems | 2007
Nils T. Siebel; Gerald Sommer
In this article we describe EANT, Evolutionary Acquisition of Neural Topologies, a method that creates neural networks by evolutionary reinforcement learning. The structure of the networks is developed using mutation operators, starting from a minimal structure. Their parameters are optimised using CMA-ES, Covariance Matrix Adaptation Evolution Strategy, a derandomised variant of evolution strategies. EANT can create neural networks that are very specialised; they achieve very good performance while being relatively small. This can be seen in experiments where our method competes with a different one, called NEAT, NeuroEvolution of Augmenting Topologies, to create networks that control a robot in a visual servoing scenario.
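The parameter-optimisation step described above can be illustrated with a much simpler stand-in: a (1+1) evolution strategy tuning the weights of a small fixed-topology network. The actual method uses CMA-ES and evolves the topology as well; the network, the regression target and all numbers below are purely illustrative:

```python
import math
import random

random.seed(0)

def forward(weights, x):
    # Minimal fixed-topology network: 2 inputs, 2 tanh hidden units, 1 output.
    w = weights
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return w[6] * h1 + w[7] * h2 + w[8]

def loss(weights, samples):
    # Mean squared error over the training samples.
    return sum((forward(weights, x) - y) ** 2 for x, y in samples) / len(samples)

def evolve(samples, n_params=9, generations=2000):
    # (1+1) evolution strategy with a simple step-size adaptation rule --
    # a stand-in for the CMA-ES optimiser used in the paper.
    parent = [random.uniform(-1, 1) for _ in range(n_params)]
    best = loss(parent, samples)
    sigma = 0.5
    for _ in range(generations):
        child = [p + sigma * random.gauss(0, 1) for p in parent]
        f = loss(child, samples)
        if f < best:
            parent, best = child, f
            sigma *= 1.5   # success: widen the search
        else:
            sigma *= 0.9   # failure: narrow the search
    return parent, best

# A simple regression target stands in for the visual servoing fitness.
samples = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0), ((1.0, 0.0), 1.0), ((1.0, 1.0), 2.0)]
weights, err = evolve(samples)
print(err)  # final error, far below the error of an untrained network
```

The elitist scheme guarantees that the best loss never increases; CMA-ES improves on this by also adapting the full covariance of the mutation distribution rather than a single step size.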
International Conference on Hybrid Intelligent Systems | 2006
Nils T. Siebel; Yohannes Kassahun
In this article we introduce a method to learn neural networks that solve a visual servoing task. Our method, called EANT, Evolutionary Acquisition of Neural Topologies, starts from a minimal network structure and gradually develops it further using evolutionary reinforcement learning. We have improved EANT by combining it with an optimisation technique called CMA-ES, Covariance Matrix Adaptation Evolution Strategy. Results from experiments with a 3 DOF visual servoing task show that the new CMA-ES-based EANT develops very good networks for visual servoing. Their performance is significantly better than that of networks developed by the original EANT and of traditional visual servoing approaches.
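The visual servoing setting can be sketched as a proportional control loop in image space. Everything below (the single feature point, the idealised plant response, the gain) is an illustrative assumption, not the paper's learned controller:

```python
# Hypothetical sketch of image-based visual servoing: a proportional
# controller drives the observed image feature towards the target feature.
def servo_step(feature, target, gain=0.5):
    # Control law: command a motion proportional to the image-space error.
    ex = target[0] - feature[0]
    ey = target[1] - feature[1]
    return gain * ex, gain * ey

feature = (100.0, 40.0)   # current image position of the tracked point
target = (160.0, 120.0)   # desired image position
for _ in range(20):
    dx, dy = servo_step(feature, target)
    feature = (feature[0] + dx, feature[1] + dy)  # idealised plant response
print(feature)  # converges towards the target position
```

A learned network replaces the fixed control law with a mapping trained by reinforcement, which is what the comparison in the abstract evaluates.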
International Conference on Robotics and Automation | 2009
Andreas Jordt; Nils T. Siebel; Gerald Sommer
In this article a new method is presented to obtain a full and precise calibration of camera-robot systems with eye-in-hand cameras. It achieves a simultaneous and numerically stable calibration of intrinsic and extrinsic camera parameters by analysing the image coordinates of a single point marker placed in the environment of the robot. The method works by first determining a rough initial estimate of the camera pose in the tool coordinate frame. This estimate is then used to generate a set of uniformly distributed calibration poses from which the object is visible. The measurements obtained in these poses are then used to obtain the exact parameters with CMA-ES (Covariance Matrix Adaptation Evolution Strategy), a derandomised variant of an evolution strategy optimiser. Minimal demands on the surrounding area and flexible handling of environmental and kinematic limitations make this method applicable to a range of robot setups and camera models. The algorithm runs autonomously without supervision and does not need manual adjustments. Our problem formulation is directly in 3D space, which helps in minimising the resulting calibration errors in the robot's task space. Both simulations and experiments with a real robot show very good convergence and high repeatability of calibration results without requiring user-supplied initial estimates of the calibration parameters.
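The core of the second stage, minimising reprojection error with a derivative-free optimiser, can be sketched in one dimension. The pinhole model, the poses and the simple step-halving search below are illustrative stand-ins for the full parameter set and the CMA-ES optimiser:

```python
# Toy version of the calibration objective: recover a single camera
# parameter (the focal length f) by minimising the reprojection error of a
# point marker observed from several poses. All numbers are illustrative.
def project(f, point):
    # Pinhole projection of a 3-D point given in the camera frame.
    X, Y, Z = point
    return f * X / Z, f * Y / Z

poses = [(0.1, 0.2, 1.0), (0.3, -0.1, 1.5), (-0.2, 0.4, 2.0)]
true_f = 800.0
observations = [project(true_f, p) for p in poses]

def residual(f):
    # Sum of squared reprojection errors over all calibration poses.
    return sum((u - u0) ** 2 + (v - v0) ** 2
               for p, (u0, v0) in zip(poses, observations)
               for u, v in [project(f, p)])

# Derivative-free step-halving search as a stand-in for CMA-ES.
f, step = 500.0, 100.0
while step > 1e-6:
    improved = False
    for cand in (f - step, f + step):
        if residual(cand) < residual(f):
            f, improved = cand, True
    if not improved:
        step *= 0.5
print(f)  # recovers the true focal length used to generate the observations
```

The real problem optimises many coupled intrinsic and extrinsic parameters at once, which is why a covariance-adapting strategy is used instead of a coordinate search.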
DAGM Conference on Pattern Recognition | 2007
Nils T. Siebel; Jochen Krause; Gerald Sommer
In this article we present EANT, a method that creates neural networks (NNs) by evolutionary reinforcement learning. The structure of the NNs is developed using mutation operators, starting from a minimal structure. Their parameters are optimised using CMA-ES. EANT can create NNs that are very specialised; they achieve very good performance while being relatively small. This can be seen in experiments where our method competes with a different one, called NEAT, to create networks that control a robot in a visual servoing scenario.
World Congress on Computational Intelligence | 2008
Nils T. Siebel; Gerald Sommer
This article presents results from experiments where a detector for defects in visual inspection images was learned from scratch by EANT2, a method for evolutionary reinforcement learning. The detector is constructed as a neural network that takes as input statistical data on the responses of a bank of image filters applied to an image region. Training is done on example images with weakly labelled defects. The experiments show that EANT2 achieves good results in an application area where evolutionary methods are rare.
International Symposium on Neural Networks | 2009
Nils T. Siebel; Jonas Botel; Gerald Sommer
In this article we present a new method for the pruning of unnecessary connections from neural networks created by an evolutionary algorithm (neuro-evolution). Pruning not only decreases the complexity of the network but also improves the numerical stability of the parameter optimisation process. We show results from experiments where connection pruning is incorporated into EANT2, an evolutionary reinforcement learning algorithm for both the topology and parameters of neural networks. By analysing data from the evolutionary optimisation process that determines the network's parameters, candidate connections for removal are identified without the need for extensive additional calculations.
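One way to identify removal candidates from data the optimiser already produces, as the abstract describes, is to look for connections whose weights stay near zero with little spread across sampled candidate solutions. The statistics and threshold below are an illustrative guess, not the paper's exact criterion:

```python
# Rank connections using weight samples collected during optimisation:
# a connection whose weight is consistently near zero looks removable.
def prune_candidates(population, threshold=0.1):
    # population: list of weight vectors sampled during optimisation.
    n = len(population[0])
    candidates = []
    for i in range(n):
        ws = [ind[i] for ind in population]
        mean = sum(ws) / len(ws)
        var = sum((w - mean) ** 2 for w in ws) / len(ws)
        if abs(mean) < threshold and var < threshold:
            candidates.append(i)  # connection i is a pruning candidate
    return candidates

# Illustrative samples: connections 0 and 2 carry signal, 1 and 3 do not.
population = [
    [0.9, 0.02, -1.1, 0.01],
    [1.1, -0.03, -0.9, 0.00],
    [1.0, 0.01, -1.0, -0.02],
]
print(prune_candidates(population))  # → [1, 3]
```

Because the samples are a by-product of the parameter optimisation, this analysis adds essentially no extra computation, which is the point the abstract makes.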
World Congress on Computational Intelligence | 2008
Nils T. Siebel; Sven Grunewald; Gerald Sommer
In this article we present results from experiments where an edge detector was learned from scratch by EANT2, a method for evolutionary reinforcement learning. The detector is constructed as a neural network that takes as input the pixel values from a given image region, in the same way that standard edge detectors do. However, it does not have any per-image parameters. A comparison between the evolved neural networks and two standard algorithms, the Sobel and Canny edge detectors, shows very good results.
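The Sobel baseline from the comparison can be reproduced directly. The kernels below are the standard Sobel operators; the test image is illustrative:

```python
import math

# Standard Sobel kernels for horizontal and vertical gradients.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(image, y, x):
    # Gradient magnitude at one interior pixel of a 2-D intensity image.
    gx = sum(image[y + j - 1][x + i - 1] * SOBEL_X[j][i]
             for j in range(3) for i in range(3))
    gy = sum(image[y + j - 1][x + i - 1] * SOBEL_Y[j][i]
             for j in range(3) for i in range(3))
    return math.sqrt(gx * gx + gy * gy)

# A vertical step edge: strong response at the columns along the boundary.
image = [[0, 0, 255, 255]] * 4
print(sobel_magnitude(image, 1, 1), sobel_magnitude(image, 1, 2))
```

The evolved detector consumes the same raw pixel window as input, so such a baseline gives a like-for-like comparison.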
Computational Intelligence in Medical Informatics | 2008
Nils T. Siebel; Gerald Sommer; Yohannes Kassahun
Artificial neural networks are computer constructs inspired by the neural structure of the brain. The aim is to approximate the vast learning and signal processing power of the human brain by mimicking its structure and mechanisms. In an artificial neural network (often simply called a “neural network”), interconnected neural nodes allow the flow of signals from special input nodes to designated output nodes. With this very general concept, neural networks are capable of modelling complex mappings between the inputs and outputs of a system to arbitrary precision [13, 21]. This allows neural networks to be applied to problems in the sciences, engineering and even economics [4, 15, 25, 26, 30]. A further advantage of neural networks is that learning strategies exist that enable them to adapt to a problem.
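A minimal sketch of such a network, with two input nodes feeding two hidden nodes feeding one output node; the weights are hand-picked for illustration only:

```python
import math

# A minimal feed-forward network as described: signals flow from input
# nodes through hidden nodes to a single output node.
def neuron(inputs, weights, bias):
    # Weighted sum of the incoming signals, passed through a sigmoid.
    s = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))

def network(x):
    h1 = neuron(x, [1.0, -1.0], 0.0)   # hidden node 1
    h2 = neuron(x, [-1.0, 1.0], 0.0)   # hidden node 2
    return neuron([h1, h2], [2.0, 2.0], -2.0)  # output node

out = network([0.3, 0.8])
print(out)  # a single scalar output in (0, 1)
```

Learning strategies such as backpropagation or, as in the surrounding chapters, evolutionary methods adjust exactly these weights and biases to fit a given mapping.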
International Conference on Software Maintenance | 2004
Manoranjan Satpathy; Nils T. Siebel; Daniel Rodríguez
Assertions had their origin in program verification. For the systems developed in industry, constructing assertions and using them to show program correctness is a near-impossible task. However, they can be used to show that some key properties are satisfied during program execution. We first present a survey of the special roles that assertions can play in object-oriented software construction. We then analyse such assertions by relating them to the case study of an automatic surveillance system. In particular, we address the following two issues: What types of assertions can be used most effectively in the context of object-oriented software? How can they be discovered, and where should they be placed? During maintenance, both the design and the software are continuously changed. These changes can mean that the original assertions, if present, are no longer valid for the new software. Can we automatically derive assertions for the changed software?
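A small, hypothetical illustration of the assertion roles discussed (precondition, class invariant, postcondition) in an object-oriented class; the class name, bounds and surveillance framing are invented for illustration:

```python
# Hypothetical tracked-object class showing where assertions can be placed
# in object-oriented code: preconditions on inputs, a class invariant
# checked after every state change.
class TrackedObject:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self._check_invariant()

    def _check_invariant(self):
        # Class invariant: positions stay inside an assumed 640x480 image.
        assert 0 <= self.x < 640 and 0 <= self.y < 480, "position outside image"

    def move(self, dx, dy):
        # Precondition: single-frame motion is assumed to be bounded.
        assert abs(dx) < 50 and abs(dy) < 50, "implausible motion"
        self.x += dx
        self.y += dy
        self._check_invariant()  # postcondition: the invariant still holds

obj = TrackedObject(100, 100)
obj.move(20, -5)
print(obj.x, obj.y)  # 120 95
```

When the design changes during maintenance, it is exactly such embedded checks that may silently become stale, which motivates the paper's question of deriving them automatically.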
International Symposium on Neural Networks | 2010
Nils T. Siebel; Andreas Jordt; Gerald Sommer
Any neuro-evolutionary algorithm that solves complex problems needs to deal with the issue of computational complexity. We show how a neural network (feed-forward, recurrent or RBF) can be transformed and then compiled in order to achieve fast execution speeds without requiring dedicated hardware like FPGAs. The compiled network uses a simple external data structure, a vector, for its parameters. This allows the weights of the neural network to be optimised by the evolutionary process without the need to re-compile the structure. In an experimental comparison our method achieves a speedup by a factor of 5–10 compared to the standard method of evaluation (i.e., traversing a data structure with optimised C++ code).
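The key idea, a fixed network structure reading all of its weights from one flat external vector, can be sketched as follows. A closure stands in for the actual compilation step, and the topology and parameter layout are illustrative:

```python
import math

# Sketch of the compiled-network idea: the structure is fixed once, and all
# weights live in a single flat vector that the evolutionary optimiser can
# mutate without rebuilding the structure.
def make_network(n_in, n_hidden):
    # Returns an evaluator closed over the topology; weights come from `params`.
    def evaluate(params, x):
        idx = 0
        hidden = []
        for _ in range(n_hidden):
            s = sum(params[idx + i] * x[i] for i in range(n_in)) + params[idx + n_in]
            idx += n_in + 1
            hidden.append(math.tanh(s))
        return sum(params[idx + i] * hidden[i] for i in range(n_hidden)) + params[idx + n_hidden]
    return evaluate

net = make_network(n_in=2, n_hidden=2)
params = [0.5, -0.5, 0.0,   # hidden neuron 1: w1, w2, bias
          -0.5, 0.5, 0.0,   # hidden neuron 2: w1, w2, bias
          1.0, 1.0, 0.1]    # output neuron: w1, w2, bias
out = net(params, [1.0, 1.0])
print(out)  # → 0.1 (both hidden sums are 0, so only the output bias remains)
```

Mutating an entry of `params` and calling `net` again evaluates the new candidate with no rebuild; the paper's compiled version additionally removes the interpretive overhead of traversing a network data structure.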