
Publications


Featured research published by Leo Pape.


Frontiers in Neurorobotics | 2012

Learning tactile skills through curious exploration

Leo Pape; Calogero Maria Oddo; Marco Controzzi; Christian Cipriani; Alexander Förster; Maria Chiara Carrozza; Jürgen Schmidhuber

We present curiosity-driven, autonomous acquisition of tactile exploratory skills on a biomimetic robot finger equipped with an array of microelectromechanical touch sensors. Instead of building tailored algorithms for solving a specific tactile task, we employ a more general curiosity-driven reinforcement learning approach that autonomously learns a set of motor skills in the absence of an explicit teacher signal. In this approach, the acquisition of skills is driven by the information content of the sensory input signals relative to a learner that aims at representing sensory inputs using fewer and fewer computational resources. We show that, from initially random exploration of its environment, the robotic system autonomously develops a small set of basic motor skills that lead to different kinds of tactile input. Next, the system learns how to exploit the learned motor skills to solve supervised texture classification tasks. Our approach demonstrates the feasibility of autonomous acquisition of tactile skills on physical robotic platforms through curiosity-driven reinforcement learning, overcomes typical difficulties of engineered solutions for active tactile exploration and underactuated control, and provides a basis for studying developmental learning through intrinsic motivation in robots.
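As an illustration of the compression-progress idea in this abstract, the following is a minimal sketch (my own illustration, not the authors' implementation): a toy learner represents each sensory input with a small linear code, and the intrinsic reward is how much its reconstruction of that input improves after one training step. The names `Learner` and `intrinsic_reward` are hypothetical.

```python
import numpy as np

class Learner:
    """Toy learner that represents inputs with a small linear code
    (a stand-in for the paper's resource-limited learner)."""

    def __init__(self, dim, code_dim, lr=0.01):
        rng = np.random.default_rng(0)
        self.W = rng.normal(scale=0.01, size=(code_dim, dim))
        self.lr = lr

    def loss(self, x):
        # Reconstruction error of x through the compact code.
        recon = self.W.T @ (self.W @ x)
        return float(np.sum((x - recon) ** 2))

    def train_step(self, x):
        # One gradient step on the reconstruction error.
        code = self.W @ x
        err = self.W.T @ code - x
        grad = np.outer(code, err) + np.outer(self.W @ err, x)
        self.W -= self.lr * grad


def intrinsic_reward(learner, x):
    """Curiosity reward: how much the learner's representation of x
    improves after training on it (compression progress)."""
    before = learner.loss(x)
    learner.train_step(x)
    after = learner.loss(x)
    return before - after
```

Inputs the learner can still improve on yield a positive reward, while already well-represented or incompressible inputs yield little, which is the qualitative behavior the abstract describes.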


International Conference on Development and Learning | 2012

Autonomous learning of abstractions using Curiosity-Driven Modular Incremental Slow Feature Analysis

Varun Raj Kompella; Matthew D. Luciw; Marijn F. Stollenga; Leo Pape; Jürgen Schmidhuber

To autonomously learn behaviors in complex environments, vision-based agents need to develop useful sensory abstractions from high-dimensional video. We propose a modular, curiosity-driven learning system that autonomously learns multiple abstract representations. The policy for building the library of abstractions is adapted through reinforcement learning, and the corresponding abstractions are learned through incremental slow-feature analysis (IncSFA). IncSFA learns each abstraction based on how the inputs change over time, directly from unprocessed visual data. Modularity is induced via a gating system, which also prevents abstraction duplication. The system is driven by a curiosity signal that is based on the learnability of the inputs by the current adaptive module. After learning completes, the result is multiple slow-feature modules serving as distinct behavior-specific abstractions. Experiments with a simulated iCub humanoid robot show how the proposed method effectively learns a set of abstractions from raw, unpreprocessed video; to our knowledge, this is the first curious learning agent to demonstrate this ability.
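For readers unfamiliar with slow feature analysis, here is a minimal batch sketch of the underlying idea (the paper uses the incremental, curiosity-driven variant IncSFA; this simplified batch version is my own illustration): whiten the observations, then keep the directions along which the whitened signal changes most slowly over time.

```python
import numpy as np

def slow_features(X, n_features=2):
    """Minimal batch slow feature analysis.
    X: time series of shape (T, D). Returns the n_features signals
    that vary most slowly over time, shape (T, n_features)."""
    X = X - X.mean(axis=0)
    # Whiten: rotate and rescale so every direction has unit variance.
    eigval, eigvec = np.linalg.eigh(np.cov(X, rowvar=False))
    W_white = eigvec / np.sqrt(np.maximum(eigval, 1e-8))
    Z = X @ W_white
    # Slowness: smallest-variance directions of the temporal differences.
    dval, dvec = np.linalg.eigh(np.cov(np.diff(Z, axis=0), rowvar=False))
    return Z @ dvec[:, :n_features]   # eigh sorts eigenvalues ascending
```

On a toy signal where a slowly varying latent drives many fast, noisy observation channels, the first returned feature tracks that latent.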


Artificial General Intelligence | 2011

Coherence progress: a measure of interestingness based on fixed compressors

Tom Schaul; Leo Pape; Tobias Glasmachers; Vincent Graziano; Jürgen Schmidhuber

The ability to identify novel patterns in observations is an essential aspect of intelligence. In a computational framework, the notion of a pattern can be formalized as a program, called a compressor, that uses regularities in observations to store them in a compact form. The search for interesting patterns can then be stated as a search to better compress the history of observations. This paper introduces coherence progress, a novel, general measure of interestingness that is independent of its use in a particular agent and of the ability of the compressor to learn from observations. Coherence progress considers the increase in coherence obtained by any compressor when adding an observation to the history of observations thus far. Because of its applicability to any type of compressor, the measure allows for an easy, quick, and domain-specific implementation. We demonstrate the capability of coherence progress to satisfy the requirements for qualitatively measuring interestingness on a Wikipedia dataset.
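One plausible way to instantiate this with an off-the-shelf, fixed compressor (a sketch under my own assumptions, not the paper's exact definition): measure coherence as the bytes saved by compressing the history jointly rather than separately, and coherence progress as the increase in that saving when a new observation is added.

```python
import zlib

def coherence(history):
    """Bytes saved by compressing the whole history together
    rather than each observation separately."""
    separate = sum(len(zlib.compress(obs)) for obs in history)
    joint = len(zlib.compress(b"".join(history)))
    return separate - joint

def coherence_progress(history, new_obs):
    """Increase in coherence obtained by adding new_obs to the history."""
    return coherence(history + [new_obs]) - coherence(history)

# An observation that shares structure with the history scores higher
# than unrelated noise under this toy instantiation.
history = [b"the quick brown fox", b"the quick brown dog"]
print(coherence_progress(history, b"the quick brown cat"))
print(coherence_progress(history, b"zx9#qp!7vv@@kmd"))
```

The measure needs only calls to a fixed compressor, which is what makes it independent of any learning ability of the compressor.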


IEEE-RAS International Conference on Humanoid Robots | 2011

AutoIncSFA and vision-based developmental learning for humanoid robots

Varun Raj Kompella; Leo Pape; Jonathan Masci; Mikhail Frank; Jürgen Schmidhuber

Humanoids have to deal with novel, unsupervised high-dimensional visual input streams. Our new method AutoIncSFA learns to compactly represent such complex sensory input sequences with very few meaningful features corresponding to high-level spatio-temporal abstractions, such as "a person is approaching me" or "an object was toppled". We explain the advantages of AutoIncSFA over previous related methods, and show that the compact codes greatly facilitate the task of a reinforcement learner driving the humanoid to actively explore its world like a playing baby, maximizing intrinsic curiosity reward signals for reaching states corresponding to previously unpredicted AutoIncSFA features.
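A hypothetical sketch of the last point, rewarding the agent for reaching feature codes that a simple online predictor failed to predict; the function name and the linear predictor are my own assumptions, not the AutoIncSFA implementation.

```python
import numpy as np

def curiosity_rewards(codes, lr=0.1):
    """Reward per step for reaching feature codes that a simple online
    linear predictor failed to predict (illustrative only).
    codes: array of shape (T, K) of feature vectors over time."""
    T, K = codes.shape
    A = np.zeros((K, K))                       # predicts code[t+1] from code[t]
    rewards = []
    for t in range(T - 1):
        pred = A @ codes[t]
        err = codes[t + 1] - pred
        rewards.append(float(err @ err))       # large error => surprising => rewarded
        A += lr * np.outer(err, codes[t])      # simple LMS update of the predictor
    return rewards
```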


International Symposium on Neural Networks | 2011

Modular deep belief networks that do not forget

Leo Pape; Faustino J. Gomez; Mark B. Ring; Jürgen Schmidhuber

Deep belief networks (DBNs) are popular for learning compact representations of high-dimensional data. However, most approaches so far rely on having a single, complete training set. If the distribution of relevant features changes during subsequent training stages, the features learned in earlier stages are gradually forgotten. Often it is desirable for learning algorithms to retain what they have previously learned, even if the input distribution temporarily changes. This paper introduces the M-DBN, an unsupervised modular DBN that addresses the forgetting problem. M-DBNs are composed of a number of modules that are trained only on samples they best reconstruct. While modularization by itself does not prevent forgetting, the M-DBN additionally uses a learning method that adjusts each module's learning rate proportionally to the fraction of best reconstructed samples. On the MNIST handwritten digit dataset, module specialization largely corresponds to the digits discerned by humans. Furthermore, in several learning tasks with changing MNIST digits, M-DBNs retain learned features even after those features are removed from the training data, while monolithic DBNs of comparable size forget feature mappings learned before.
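A minimal sketch of the routing and learning-rate rule described above, using a deliberately simplified stand-in for a DBN module (a prototype vector rather than an RBM stack); all names are hypothetical and this is an illustration of the idea, not the authors' code.

```python
import numpy as np

class PrototypeModule:
    """Stand-in for a DBN module: 'reconstructs' a sample as its
    prototype vector (purely illustrative)."""

    def __init__(self, dim, rng):
        self.prototype = rng.normal(size=dim)

    def reconstruction_error(self, x):
        return float(np.sum((x - self.prototype) ** 2))

    def train(self, samples, lr):
        # Move the prototype toward the mean of the samples it won.
        self.prototype += lr * (np.mean(samples, axis=0) - self.prototype)


def modular_update(modules, batch, base_lr=0.5):
    """Route each sample to the module that reconstructs it best, and
    scale each module's learning rate by the fraction of samples it won,
    so rarely-winning modules are barely overwritten (the anti-forgetting idea)."""
    errors = np.array([[m.reconstruction_error(x) for m in modules] for x in batch])
    winners = errors.argmin(axis=1)
    for i, module in enumerate(modules):
        won = [x for x, w in zip(batch, winners) if w == i]
        if won:
            module.train(won, lr=base_lr * len(won) / len(batch))
```

When the input distribution shifts, modules that stop winning samples stop being updated, which is how specialization protects previously learned features.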


Studies in Computational Intelligence | 2008

Democratic Liquid State Machines for Music Recognition

Leo Pape; Jornt de Gruijl; Marco Wiering

The liquid state machine (LSM) is a relatively new recurrent neural network (RNN) architecture for dealing with time-series classification problems. The LSM has some attractive properties, such as fast training compared with more traditional RNNs, biological plausibility, and the ability to deal with highly nonlinear dynamics. This paper presents the democratic LSM, an extension of the basic LSM that combines majority voting along two dimensions. First, instead of only giving the classification at the end of the time-series, multiple classifications after different time-periods are combined. Second, instead of using a single LSM, the classifications of multiple LSMs in an ensemble are combined. The results show that the democratic LSM significantly outperforms the basic LSM and other methods on two music composer classification tasks where the goal is to separate Haydn/Mozart and Beethoven/Bach, and a music instrument classification problem where the goal is to distinguish between a flute and a bass guitar.
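A minimal sketch of the two-dimensional majority vote (my own illustration, with hypothetical names): each ensemble member produces a classification after each time period, and the final label is the overall majority across both dimensions.

```python
from collections import Counter

def democratic_vote(predictions):
    """predictions[e][t] is the class predicted by ensemble member e
    after time period t; the final label is the overall majority."""
    votes = [label for member in predictions for label in member]
    return Counter(votes).most_common(1)[0][0]

# Example: three LSM readouts, each classifying after three time periods.
print(democratic_vote([
    ["flute", "bass", "flute"],
    ["flute", "flute", "flute"],
    ["bass",  "flute", "bass"],
]))  # -> "flute"
```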


Artificial General Intelligence | 2011

Real-world limits to algorithmic intelligence

Leo Pape; Arthur Kok

Recent theories of universal algorithmic intelligence, combined with the view that the world can be completely specified in mathematical terms, have led to claims about intelligence in any agent, including human beings. We discuss the validity of assumptions and claims made by theories of universally optimal intelligence in relation to their application in actual robots and intelligence tests. Our argument is based on an exposition of the requirements for knowledge of the world through observations. In particular, we will argue that the world can only be known through the application of rules to observations, and that beyond these rules no knowledge can be obtained about the origin of our observations. Furthermore, we expose a contradiction in the assumption that it is possible to fully formalize the world, as for example is done in digital physics, which can therefore not serve as the basis for any argument or proof about algorithmic intelligence that interacts with the world.


Continental Shelf Research | 2009

Daily to interannual cross-shore sandbar migration: Observations from a multiple sandbar system

B.G. Ruessink; Leo Pape; Ian L. Turner


Neural Networks | 2007

Recurrent neural network modeling of nearshore sandbar behavior

Leo Pape; B.G. Ruessink; Marco Wiering; Ian L. Turner


Journal of Geophysical Research | 2010

On cross-shore migration and equilibrium states of nearshore sandbars

Leo Pape; Nathaniel G. Plant; B.G. Ruessink

Collaboration


Dive into Leo Pape's collaborations.

Top Co-Authors

Jürgen Schmidhuber | Dalle Molle Institute for Artificial Intelligence Research
Ian L. Turner | University of New South Wales
Alexander Förster | Dalle Molle Institute for Artificial Intelligence Research
Marijn F. Stollenga | Dalle Molle Institute for Artificial Intelligence Research
Mikhail Frank | Dalle Molle Institute for Artificial Intelligence Research
Varun Raj Kompella | Dalle Molle Institute for Artificial Intelligence Research