
Publication


Featured research published by Jan Koutník.


Computational Intelligence and Games | 2009

Super Mario evolution

Julian Togelius; Sergey Karakovskiy; Jan Koutník; Jürgen Schmidhuber

We introduce a new reinforcement learning benchmark based on the classic platform game Super Mario Bros. The benchmark has a high-dimensional input space, and achieving a good score requires sophisticated and varied strategies. However, it has tunable difficulty, and at the lowest difficulty setting a decent score can be achieved using rudimentary strategies and a small fraction of the input space. To investigate the properties of the benchmark, we evolve neural network-based controllers using different network architectures and input spaces. We show that it is relatively easy to learn basic strategies capable of clearing individual levels of low difficulty, but that these controllers have problems with generalization to unseen levels and with taking larger parts of the input space into account. A number of directions worth exploring for learning better-performing strategies are discussed.


International Conference on Acoustics, Speech, and Signal Processing | 2016

Lipreading with long short-term memory

Michael Wand; Jan Koutník; Jürgen Schmidhuber

Lipreading, i.e. speech recognition from visual-only recordings of a speaker's face, can be achieved with a processing pipeline based solely on neural networks, yielding significantly better accuracy than conventional methods. Feedforward and recurrent neural network layers (namely Long Short-Term Memory; LSTM) are stacked to form a single structure which is trained by back-propagating error gradients through all the layers. The performance of such a stacked network was experimentally evaluated and compared to a standard Support Vector Machine classifier using conventional computer vision features (Eigenlips and Histograms of Oriented Gradients). The evaluation was performed on data from 19 speakers of the publicly available GRID corpus. With 51 different words to classify, we report a best word accuracy on held-out evaluation speakers of 79.6% using the end-to-end neural network-based solution (11.6% improvement over the best feature-based solution evaluated).


Neural Networks | 2010

2010 Special Issue: Meta-learning approach to neural network optimization

Pavel Kordík; Jan Koutník; Jan Drchal; Oleg Kovářík; Miroslav Čepek; Miroslav Snorek

Optimization of neural network topology, weights and neuron transfer functions for a given data set and problem is not an easy task. In this article, we focus primarily on building an optimal feed-forward neural network classifier for i.i.d. data sets. We apply meta-learning principles to neural network structure and function optimization. We show that diversity promotion, ensembling, self-organization and induction are beneficial for the problem. We combine several different neuron types trained by various optimization algorithms to build a supervised feed-forward neural network called Group of Adaptive Models Evolution (GAME). The approach was tested on a large number of benchmark data sets. The experiments show that the combination of different optimization algorithms in the network is the best choice when the performance is averaged over several real-world problems.


Congress on Evolutionary Computation | 2009

HyperNEAT controlled robots learn how to drive on roads in simulated environment

Jan Drchal; Jan Koutník; Miroslav Snorek

In this paper we describe simulation of autonomous robots controlled by recurrent neural networks, which are evolved through indirect encoding using the HyperNEAT algorithm. The robots utilize a 180-degree-wide sensor array. Thanks to the scalability of the neural network generated by HyperNEAT, the sensor array can have various resolutions, which would allow using a camera as an input for a neural network controller in a real robot. The robots were simulated in a software simulation environment. In the experiments the robots were trained to drive with maximum average speed. Such a fitness function forces them to learn how to drive on roads and avoid collisions. The evolved neural networks show excellent scalability: scaling the sensory input degrades the robots' performance, but it can be regained by re-training the robot with the new sensory input resolution.


International Conference on Adaptive and Natural Computing Algorithms | 2009

NEAT in HyperNEAT substituted with genetic programming

Zdeněk Buk; Jan Koutník; Miroslav Snorek

In this paper we present an application of genetic programming (GP) [1] to the evolution of an indirect encoding of neural network weights. We compare the original HyperNEAT algorithm with our implementation, in which we replaced the underlying NEAT with genetic programming. The algorithm was named HyperGP. The evolved neural networks were used as controllers of autonomous mobile agents (robots) in simulation. The agents were trained to drive with maximum average speed. This forces them to learn how to drive on roads and avoid collisions. Genetic programming, lacking NEAT's complexification property, shows better exploration ability and tends to generate more complex solutions in fewer generations. On the other hand, the basic genetic programming generates quite complex functions for weight generation. Both approaches generate neural controllers with similar abilities.
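The indirect encoding that both HyperNEAT and HyperGP evolve can be pictured as a function over substrate coordinates: the evolved expression maps the positions of two neurons to the weight of the connection between them. A minimal sketch, where the particular expression is an illustrative stand-in for an evolved one:

```python
import math

# Hypothetical stand-in for an evolved weight-generating expression:
# it maps the substrate coordinates of two neurons to a connection weight.
def evolved_weight_fn(x1, y1, x2, y2):
    return math.sin(x1 * x2) - 0.5 * (y1 - y2)

def build_weight_matrix(coords, fn):
    # One weight per ordered pair of substrate positions.
    return [[fn(*a, *b) for b in coords] for a in coords]

# A small 3x3 grid of neuron positions on the substrate.
coords = [(x / 2.0, y / 2.0) for x in range(3) for y in range(3)]
weights = build_weight_matrix(coords, evolved_weight_fn)
```

Because the weights are generated from coordinates rather than stored directly, the same evolved function yields a full weight matrix for any substrate resolution.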


Artificial General Intelligence | 2013

Resource-bounded machines are motivated to be effective, efficient, and curious

Bastiaan Steunebrink; Jan Koutník; Kristinn R. Thórisson; Eric Nivel; Jürgen Schmidhuber

Resource-boundedness must be carefully considered when designing and implementing artificial general intelligence (AGI) algorithms and architectures that have to deal with the real world. Not only must resources be represented explicitly throughout an agent's design; the agent must also take their usage and the associated costs into account during reasoning and acting. Moreover, the agent must be intrinsically motivated to become progressively better at utilizing resources. This drive then naturally leads to effectiveness, efficiency, and curiosity. We propose a practical operational framework that explicitly takes resource constraints into account: activities are organized to maximally utilize an agent's bounded resources as well as the availability of a teacher, and to drive the agent to become progressively better at utilizing its resources. We show how an existing AGI architecture called AERA can function inside this framework. In short, the capability of AERA to perform self-compilation can be used to motivate the system not only to accumulate knowledge and skills faster, but also to achieve goals using fewer resources, becoming progressively more effective and efficient.


Parallel Problem Solving from Nature | 2012

Compressed network complexity search

Faustino J. Gomez; Jan Koutník; Jürgen Schmidhuber

Indirect encoding schemes for neural network phenotypes can represent large networks compactly. In previous work, we presented a new approach where networks are encoded indirectly as a set of Fourier-type coefficients that decorrelate weight matrices such that they can often be represented by a small number of genes, effectively reducing the search space dimensionality and speeding up search. Up to now, the complexity of networks using this encoding was fixed a priori, both in terms of (1) the number of free parameters (topology) and (2) the number of coefficients. In this paper, we introduce a method, called Compressed Network Complexity Search (CNCS), for automatically determining network complexity that favors parsimonious solutions. CNCS maintains a probability distribution over complexity classes that it uses to select which class to optimize. Class probabilities are adapted based on their expected fitness. Starting with a prior biased toward the simplest networks, the distribution grows gradually until a solution is found. Experiments on two benchmark control problems, including a challenging non-linear version of the helicopter hovering task, demonstrate that the method consistently finds simple solutions.
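The class-probability adaptation at CNCS's core can be sketched as a simple loop: sample a complexity class from the current distribution, then shift probability mass toward classes with higher expected fitness. The update rule and names below are illustrative assumptions, not the paper's exact formulation:

```python
import random

def pick_class(probs):
    # Sample a complexity class according to the current distribution.
    r, acc = random.random(), 0.0
    for c, p in probs.items():
        acc += p
        if r < acc:
            return c
    return c  # guard against floating-point round-off

def update(probs, expected_fitness, lr=0.1):
    # Move probability mass toward classes with higher expected fitness,
    # then renormalize so the values remain a valid distribution.
    total = sum(expected_fitness.values())
    for c in probs:
        target = expected_fitness[c] / total if total else 1.0 / len(probs)
        probs[c] += lr * (target - probs[c])
    z = sum(probs.values())
    for c in probs:
        probs[c] /= z

# Prior biased toward the simplest networks, as described above.
probs = {"small": 0.7, "medium": 0.2, "large": 0.1}
update(probs, {"small": 0.1, "medium": 0.5, "large": 0.4})
```

With this kind of update, mass drains away from the simple prior only when larger classes actually earn better expected fitness, which matches the "grow gradually until a solution is found" behavior described above.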


Simulation of Adaptive Behavior | 2014

Online Evolution of Deep Convolutional Network for Vision-Based Reinforcement Learning

Jan Koutník; Jürgen Schmidhuber; Faustino J. Gomez

Dealing with high-dimensional input spaces, like visual input, is a challenging task for reinforcement learning (RL). Neuroevolution (NE), used for continuous RL problems, has to either reduce the problem dimensionality by (1) compressing the representation of the neural network controllers or (2) employing a pre-processor (compressor) that transforms the high-dimensional raw inputs into low-dimensional features. In this paper we extend the approach in [16]. The Max-Pooling Convolutional Neural Network (MPCNN) compressor is evolved online, maximizing the distances between normalized feature vectors computed from the images collected by the recurrent neural network (RNN) controllers during their evaluation in the environment. These two interleaved evolutionary searches are used to find MPCNN compressors and RNN controllers that drive a race car in the TORCS racing simulator using only visual input.
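The compressor fitness described above — maximizing distances between normalized feature vectors — reduces to a simple score. A hedged sketch, with function names of our choosing:

```python
import math

def normalize(v):
    # Scale a feature vector to unit length (leave zero vectors alone).
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def diversity_fitness(features):
    # Mean pairwise Euclidean distance between normalized feature
    # vectors: higher means the compressor separates observations better.
    vs = [normalize(v) for v in features]
    dists = [
        math.dist(a, b)
        for i, a in enumerate(vs)
        for b in vs[i + 1:]
    ]
    return sum(dists) / len(dists) if dists else 0.0

# Illustrative feature vectors, e.g. MPCNN outputs for three frames.
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
score = diversity_fitness(feats)
```

A compressor that maps all frames to the same feature vector scores zero, so evolution under this fitness favors compressors whose features discriminate between the images the controllers encounter.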


Genetic and Evolutionary Computation Conference | 2016

A Wavelet-based Encoding for Neuroevolution

Sjoerd van Steenkiste; Jan Koutník; Kurt Driessens; Jürgen Schmidhuber

A new indirect scheme for encoding neural network connection weights as sets of wavelet-domain coefficients is proposed in this paper. It exploits spatial regularities in the weight-space to reduce the gene-space dimension by considering the low-frequency wavelet coefficients only. The wavelet-based encoding builds on top of a frequency-domain encoding, but unlike when using a Fourier-type transform, it offers gene locality while preserving continuity of the genotype-phenotype mapping. We argue that this added property allows for more efficient evolutionary search and demonstrate this on the octopus-arm control task, where superior solutions were found in fewer generations. The scalability of the wavelet-based encoding is shown by evolving networks with many parameters to control game-playing agents in the Arcade Learning Environment.
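The core trick — keeping only low-frequency coefficients as genes and zeroing the rest before the inverse transform — can be sketched with an unnormalized Haar synthesis. This is a simplified stand-in for the actual wavelet family and normalization used:

```python
def inverse_haar(coeffs):
    # Unnormalized inverse Haar transform: rebuild the signal level by
    # level from one approximation coefficient plus detail coefficients.
    # Assumes len(coeffs) is a power of two.
    out = [coeffs[0]]
    k = 1
    while k < len(coeffs):
        detail = coeffs[k:2 * k]
        nxt = []
        for a, d in zip(out, detail):
            nxt += [a + d, a - d]
        out = nxt
        k *= 2
    return out

def decode_weights(genes, n):
    # Genotype-to-phenotype mapping: the gene vector holds only the
    # low-frequency coefficients; high frequencies are zero-padded.
    padded = genes + [0.0] * (n - len(genes))
    return inverse_haar(padded)

# Two genes expand into eight smooth weights.
weights = decode_weights([1.0, 0.5], 8)
```

Because each Haar coefficient influences only a contiguous region of the output, a small change to one gene perturbs only part of the weight vector — the gene locality property the abstract contrasts with Fourier-type encodings.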


International Conference on Artificial Neural Networks | 2009

Combining Multiple Inputs in HyperNEAT Mobile Agent Controller

Jan Drchal; Ondrej Kapral; Jan Koutník; Miroslav Snorek

In this paper we present neuro-evolution of neural network controllers for mobile agents in a simulated environment. The controller is obtained through evolution of hypercube-encoded weights of recurrent neural networks (HyperNEAT). The simulated agent's goal is to find a target in the shortest time. The generated neural network processes three different inputs --- surface quality, obstacles and distance to the target. The behavior that emerged in the agents features the ability to drive on roads and avoid obstacles, and provides an efficient way of searching for the target.

Collaboration


Jan Koutník's most frequent co-authors and their affiliations.

Top Co-Authors

Jürgen Schmidhuber

Dalle Molle Institute for Artificial Intelligence Research

Miroslav Snorek

Czech Technical University in Prague

Faustino J. Gomez

Dalle Molle Institute for Artificial Intelligence Research

Klaus Greff

Dalle Molle Institute for Artificial Intelligence Research

Jan Drchal

Czech Technical University in Prague

Roman Neruda

Academy of Sciences of the Czech Republic

Rupesh Kumar Srivastava

Dalle Molle Institute for Artificial Intelligence Research

Celestine Preetham Lawrence

MESA+ Institute for Nanotechnology

Haitze J. Broersma

MESA+ Institute for Nanotechnology
