Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Andre Lemme is active.

Publication


Featured research published by Andre Lemme.


Neural Networks | 2012

Online learning and generalization of parts-based image representations by non-negative sparse autoencoders

Andre Lemme; René Felix Reinhart; Jochen J. Steil

We present an efficient online learning scheme for non-negative sparse coding in autoencoder neural networks. It comprises a novel synaptic decay rule that ensures non-negative weights in combination with an intrinsic self-adaptation rule that optimizes sparseness of the non-negative encoding. We show that non-negativity constrains the space of solutions such that overfitting is prevented and very similar encodings are found irrespective of the network initialization and size. We benchmark the novel method on real-world datasets of handwritten digits and faces. The autoencoder yields higher sparseness and lower reconstruction errors than related offline algorithms based on matrix factorization. It generalizes to new inputs both accurately and without costly computations, which is fundamentally different from the classical matrix factorization approaches.
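The online scheme can be illustrated with a minimal sketch. The decay-plus-clip step below is a simplified stand-in for the paper's synaptic decay rule, and all sizes, rates, and data are toy choices, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def nn_sae_step(W, x, lr=0.01, decay=1e-3):
    """One online update of a tied-weight autoencoder with non-negative
    weights (decay-plus-clip stands in for the paper's decay rule)."""
    h = np.maximum(W @ x, 0.0)        # non-negative encoding
    x_hat = W.T @ h                   # linear decoding, tied weights
    err = x - x_hat                   # reconstruction error
    W += lr * np.outer(h, err)        # Hebbian-like reconstruction update
    W -= decay * W                    # weight decay toward zero
    np.clip(W, 0.0, None, out=W)      # keep every weight non-negative
    return W, float(np.mean(err ** 2))

# toy online run over a small set of non-negative patterns
X = rng.uniform(0.0, 1.0, size=(10, 8))
W = rng.uniform(0.0, 0.1, size=(16, 8))
errs = []
for t in range(2000):
    W, e = nn_sae_step(W, X[t % len(X)])
    errs.append(e)
```

Because both the data and the weights are non-negative, the encoding stays parts-based by construction, and the error decreases as the online updates accumulate.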


International Conference on Robotics and Automation (ICRA) | 2012

Teaching nullspace constraints in physical human-robot interaction using Reservoir Computing

Arne Nordmann; Christian Emmerich; Stefan Ruether; Andre Lemme; Sebastian Wrede; Jochen J. Steil

A major goal of current robotics research is to enable robots to become co-workers that collaborate with humans efficiently and adapt to changing environments or workflows. We present an approach utilizing the physical interaction capabilities of compliant robots with data-driven and model-free learning in a coherent system in order to make fast reconfiguration of redundant robots feasible. Users with no particular robotics knowledge can perform this task in physical interaction with the compliant robot, for example to reconfigure a work cell due to changes in the environment. For fast and efficient learning of the respective null-space constraints, a reservoir neural network is employed. It is embedded in the motion controller of the system, hence allowing for execution of arbitrary motions in task space. We describe the training, exploration, and control architecture of the system, and present an evaluation on the KUKA Light-Weight Robot. Our results show that the learned model solves the redundancy resolution problem under the given constraints with sufficient accuracy and generalizes to generate valid joint-space trajectories even in untrained areas of the workspace.
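The reservoir idea itself is compact: a fixed random recurrent network provides a rich state, and only a linear readout is trained. The sketch below shows this on a toy delayed-signal task; the sizes, spectral radius, ridge parameter, and task are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Echo-state reservoir: fixed random recurrent weights, trained linear readout.
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W_res = rng.normal(0, 1, (n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # spectral radius < 1

def run_reservoir(u_seq):
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W_res @ x)
        states.append(x.copy())
    return np.array(states)

# toy task: reproduce a delayed copy of the input signal
u = np.sin(np.linspace(0, 20, 400))
target = np.roll(u, 3)
X = run_reservoir(u)[50:]          # drop initial washout states
Y = target[50:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)
mse = float(np.mean((X @ W_out - Y) ** 2))
```

Training reduces to one ridge-regression solve, which is what makes reservoir approaches fast enough for interactive reconfiguration.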


Frontiers in Neurorobotics | 2013

Rare neural correlations implement robotic conditioning with delayed rewards and disturbances

Andrea Soltoggio; Andre Lemme; Felix Reinhart; Jochen J. Steil

Neural conditioning associates cues and actions with following rewards. The environments in which robots operate, however, are pervaded by a variety of disturbing stimuli and uncertain timing. In particular, variable reward delays make it difficult to reconstruct which previous actions are responsible for subsequent rewards. Such uncertainty is handled by biological neural networks, but represents a challenge for computational models, suggesting the lack of a satisfactory theory for robotic neural conditioning. The present study demonstrates the use of rare neural correlations in making correct associations between rewards and previous cues or actions. Rare correlations are functional in selecting sparse synapses to be eligible for later weight updates if a reward occurs. The repetition of this process singles out the associating and reward-triggering pathways, and thereby copes with distal rewards. The neural network displays macro-level classical and operant conditioning, which is demonstrated in an interactive real-life human-robot interaction. The proposed mechanism models realistic conditioning in humans and animals and implements similar behaviors in neuro-robotic platforms.
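The core mechanism can be sketched with eligibility traces: a coincidence of pre- and postsynaptic activity flags a synapse, the flag decays, and a delayed reward converts surviving flags into weight changes. The thresholded-coincidence filter below is a simplified stand-in for the paper's rare-correlation criterion, and all constants are toy choices:

```python
import numpy as np

rng = np.random.default_rng(2)

n = 20
w = np.zeros(n)
elig = np.zeros(n)
decay = 0.95
theta = 0.9              # high threshold: only rare, strong coincidences qualify
cue_synapse = 3          # the synapse actually responsible for the reward

for t in range(200):
    pre = rng.random(n)
    if t % 10 == 0:      # cue step: strong pre/post coincidence at one synapse
        pre[cue_synapse] = 0.95
        post = 1.0
    else:
        post = rng.random()
    corr = pre * post
    elig = decay * elig + (corr > theta) * corr   # flag rare coincidences only
    if t % 10 == 5:       # reward arrives five steps after the cue
        w += 0.1 * elig
```

Despite the five-step reward delay and the distracting random activity, the eligible-synapse mechanism singles out the cue-carrying pathway.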


IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2013

Neural learning of stable dynamical systems based on data-driven Lyapunov candidates

Klaus Neumann; Andre Lemme; Jochen J. Steil

Nonlinear dynamical systems are a promising representation to learn complex robot movements. Besides their undoubted modeling power, it is of major importance that such systems work in a stable manner. We therefore present a neural learning scheme that estimates stable dynamical systems from demonstrations based on a two-stage process: first, a data-driven Lyapunov function candidate is estimated. Second, stability is incorporated by means of a novel method to respect local constraints in the neural learning. We show in two experiments that this method is capable of learning stable dynamics while simultaneously sustaining the accuracy of the estimate and robustly generating complex movements.
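The stability idea in the second stage can be sketched as a point-wise correction: given a Lyapunov candidate V, any predicted velocity that would increase V is projected back onto the descent half-space. The fixed quadratic candidate and example state below are toy choices (the paper estimates the candidate from data):

```python
import numpy as np

# Quadratic Lyapunov candidate V(x) = (x - goal) . P . (x - goal)
P = np.diag([1.0, 2.0])
goal = np.zeros(2)

def grad_V(x):
    return 2.0 * P @ (x - goal)

def stabilize(x, v, margin=1e-6):
    """Project a predicted velocity v so that dV/dt < 0 at state x."""
    g = grad_V(x)
    slope = g @ v
    if slope >= 0.0:                        # prediction violates descent
        v = v - (slope + margin) * g / (g @ g)
    return v

x = np.array([1.0, -0.5])
v_bad = np.array([0.5, 0.2])                # points away from the goal
v_ok = stabilize(x, v_bad)
```

The projection changes the prediction only where it conflicts with stability, which is why accuracy on the demonstrations can be sustained.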


Neurocomputing | 2014

Neural learning of vector fields for encoding stable dynamical systems

Andre Lemme; Klaus Neumann; René Felix Reinhart; Jochen J. Steil

The data-driven approximation of vector fields that encode dynamical systems is a persistently hard task in machine learning. If data is sparse and given in the form of velocities derived from few trajectories only, state-space regions exist, where no information on the vector field and its induced dynamics is available. Generalization towards such regions is meaningful only if strong biases are introduced, for instance assumptions on global stability properties of the to-be-learned dynamics. We address this issue in a novel learning scheme that represents vector fields by means of neural networks, where asymptotic stability of the induced dynamics is explicitly enforced through utilizing knowledge from Lyapunov's stability theory, in a predefined workspace. The learning of vector fields is constrained through point-wise conditions, derived from a suitable Lyapunov function candidate, which is first adjusted towards the training data. We point out the significance of optimized Lyapunov function candidates and analyze the approach in a scenario where trajectories are learned and generalized from human handwriting motions. In addition, we demonstrate that learning from robotic data obtained by kinesthetic teaching of the humanoid robot iCub leads to robust motion generation.
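The candidate-adjustment step can be illustrated in miniature: among a small family of quadratic candidates, pick the one whose point-wise decrease condition is violated least often on sampled states of the demonstrated dynamics. The candidate family and the linear toy dynamics below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(5)

A = np.array([[-1.0, -4.0], [0.2, -1.0]])      # toy demonstrated dynamics
X = rng.uniform(-1, 1, (200, 2))                # sampled states

def V_dot(p, x, xdot):
    """dV/dt for V(x) = x . diag(p) . x, i.e. grad V = 2 p * x."""
    return (2.0 * p * x) @ xdot

# small family of diagonal quadratic Lyapunov candidates
candidates = [np.array([1.0, 1.0]), np.array([1.0, 4.0]), np.array([4.0, 1.0])]

def violations(p):
    """Count sampled states where the decrease condition fails."""
    return sum(V_dot(p, x, A @ x) >= 0 for x in X)

best = min(candidates, key=violations)
```

For these toy dynamics only the candidate weighted toward the second coordinate satisfies the decrease condition everywhere, which is exactly why optimizing the candidate before constraining the learner matters.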


IEEE-RAS International Conference on Humanoid Robots | 2012

Representation and generalization of bi-manual skills from kinesthetic teaching

René Felix Reinhart; Andre Lemme; Jochen J. Steil

The paper presents a modular architecture for bi-manual skill acquisition from kinesthetic teaching. Skills are learned and embedded over several representational levels comprising a compact movement representation by means of movement primitives, a task space description of the bi-manual tool constraint, and the particular redundancy resolution of the inverse kinematics. A comparative evaluation of different architectural configurations identifies a specific modulation scheme for skill execution to achieve optimal spatial generalization from few training samples. Based on this architectural layout together with a novel stabilization approach for dynamical movement primitives, the robust teaching and execution of complex skill sequences is demonstrated on the humanoid robot iCub.
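The movement-primitive layer can be illustrated with a standard one-dimensional dynamic movement primitive: a damped spring toward the goal plus a phase-gated forcing term. The gains and forcing function below are illustrative; the paper's stabilization approach adds further machinery on top of this baseline:

```python
import numpy as np

def dmp_rollout(y0, goal, forcing, tau=1.0, dt=0.01, steps=300,
                alpha=25.0, beta=6.25, alpha_s=4.0):
    """Integrate a minimal 1-D dynamic movement primitive."""
    y, dy, s = y0, 0.0, 1.0                     # position, velocity, phase
    traj = []
    for _ in range(steps):
        f = forcing(s) * s * (goal - y0)        # forcing fades with the phase
        ddy = (alpha * (beta * (goal - y) - dy) + f) / tau
        dy += ddy * dt
        y += dy * dt
        s += -alpha_s * s * dt / tau            # canonical system decay
        traj.append(y)
    return np.array(traj)

# toy forcing term shapes the transient; the spring guarantees convergence
traj = dmp_rollout(y0=0.0, goal=1.0, forcing=lambda s: 10.0 * np.sin(6 * s))
```

Because the forcing term is gated by the decaying phase variable, the primitive always settles at the goal regardless of the learned shape, which is the property skill sequencing relies on.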


Neurocomputing | 2013

Kinesthetic teaching of visuomotor coordination for pointing by the humanoid robot iCub

Andre Lemme; Ananda L. Freire; Guilherme A. Barreto; Jochen J. Steil

Pointing at something refers to orienting the hand, the arm, the head or the body in the direction of an object or an event. This skill constitutes a basic communicative ability for cognitive agents such as humanoid robots. The goal of this study is to show that approximate and, in particular, precise pointing can be learned as a direct mapping from an object's pixel coordinates in the visual field to hand positions or to joint angles. This highly nonlinear mapping defines the pose and orientation of the robot's arm. The study underlines that this is possible without explicitly calculating the object's depth and 3D position, since only the direction is required. To this aim, three state-of-the-art neural network paradigms (multilayer perceptron, extreme learning machine and reservoir computing) are evaluated on real-world data gathered from the humanoid robot iCub. Training data are interactively generated and recorded from kinesthetic teaching for the case of precise pointing. Successful generalization is verified on the iCub using a laser pointer attached to its hand.
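Of the three paradigms, the extreme learning machine is the simplest to sketch: a fixed random hidden layer with output weights solved in closed form. The pixel-to-angle data below is synthetic stand-in data; the study trains on recordings from the iCub:

```python
import numpy as np

rng = np.random.default_rng(3)

# random fixed hidden layer: 2 inputs (pixel coordinates) -> 200 features
n_hidden = 200
W_h = rng.normal(0, 1, (n_hidden, 2))
b_h = rng.normal(0, 1, n_hidden)

def hidden(X):
    return np.tanh(X @ W_h.T + b_h)

# synthetic "pixel coordinates -> joint angles" pairs (illustrative only)
X = rng.uniform(-1, 1, (500, 2))
Y = np.column_stack([np.sin(2 * X[:, 0]), X[:, 0] * X[:, 1]])

# only the output weights are trained, via one least-squares solve
H = hidden(X)
W_out, *_ = np.linalg.lstsq(H, Y, rcond=None)
mse = float(np.mean((H @ W_out - Y) ** 2))
```

The single closed-form solve is what makes ELMs attractive for interactively recorded kinesthetic data: retraining after new demonstrations is essentially instantaneous.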


Paladyn: Journal of Behavioral Robotics | 2015

Open-source benchmarking for learned reaching motion generation in robotics

Andre Lemme; Yaron Meirovitch; Seyed Mohammad Khansari-Zadeh; Tamar Flash; Aude Billard; Jochen J. Steil

This paper introduces a benchmark framework to evaluate the performance of reaching motion generation approaches that learn from demonstrated examples. The system implements ten different performance measures for typical generalization tasks in robotics using open-source MATLAB software. Systematic comparisons are based on a default training data set of human motions, which specifies the respective ground truth. In technical terms, an evaluated motion generation method needs to compute velocities, given a state provided by the simulation system. The framework, however, is agnostic to how the method computes them or how it learns from the provided demonstrations. The framework focuses on robustness, which is tested statistically by sampling from a set of perturbation scenarios. These perturbations interfere with motion generation and challenge its generalization ability. The benchmark thus helps to identify the strengths and weaknesses of competing approaches, while allowing the user to configure the weightings between different measures.
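The evaluation loop can be sketched generically: roll out a motion-generation policy under sampled perturbations and combine several measures with user-chosen weights. The toy policy, measure names, and weights below are illustrative, not the framework's actual MATLAB API:

```python
import numpy as np

rng = np.random.default_rng(4)

def policy(x, goal):
    """Toy motion generator: velocities from a linear attractor."""
    return goal - x

def rollout(x0, goal, noise, dt=0.05, steps=100):
    """Integrate the policy under additive velocity perturbations."""
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(steps):
        v = policy(x, goal) + noise * rng.normal(size=2)
        x = x + dt * v
        path.append(x.copy())
    return np.array(path)

def score(path, goal, weights=None):
    """Weighted combination of (illustrative) performance measures."""
    weights = weights or {"final_error": 0.7, "path_length": 0.3}
    measures = {
        "final_error": float(np.linalg.norm(path[-1] - goal)),
        "path_length": float(np.sum(
            np.linalg.norm(np.diff(path, axis=0), axis=1))),
    }
    return sum(weights[k] * measures[k] for k in weights), measures

goal = np.array([1.0, 1.0])
totals = [score(rollout([0, 0], goal, noise=n), goal)[0]
          for n in (0.0, 0.2, 0.5)]
```

Sampling increasingly strong perturbations and re-scoring, as above, is the statistical robustness test the benchmark is built around; weighted scores let the user trade accuracy against efficiency.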


IEEE International Conference on Robotics and Biomimetics (ROBIO) | 2012

Using movement primitives in interpreting and decomposing complex trajectories in learning-by-doing

Andrea Soltoggio; Andre Lemme; Jochen J. Steil

Learning and reproducing complex movements is an important skill for robots. However, while humans can learn and generalise new complex trajectories, robots are often programmed to execute point-by-point precise but fixed patterns. This study proposes a method for decomposing new complex trajectories into a set of known robot-based primitives. Instead of accurately reproducing an observed trajectory, the robot interprets it as a composition of its own previously acquired primitive movements. The method initially attempts a rough approximation with the idea of capturing the most essential features of the movement. Observing the discrepancy between the demonstrated and reproduced trajectories, the process then proceeds with incremental decompositions. The method is tested on both geometric and human-generated trajectories. The shift from a data-centred view to an agent-centred view in learning trajectories results in generalisation properties like the abstraction to primitives and noise suppression. This study suggests a novel approach to learning complex robot motor patterns that builds upon existing motor skills. Applications include drawing, writing, movement generation and object manipulation in a variety of tasks.
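The decomposition can be sketched as a greedy, matching-pursuit-style loop: repeatedly add the known primitive that best explains the current residual. The primitive set and scaling rule below are toy choices, not the paper's primitives:

```python
import numpy as np

t = np.linspace(0, 1, 100)
# a small library of known primitive shapes (illustrative)
primitives = {
    "ramp": t,
    "bump": np.exp(-((t - 0.5) ** 2) / 0.02),
    "wave": np.sin(2 * np.pi * t),
}

def decompose(target, primitives, max_terms=3):
    """Greedily explain the target as scaled primitives plus a residual."""
    residual = target.copy()
    used = []
    for _ in range(max_terms):
        # pick the primitive most correlated with the current residual
        best = max(primitives,
                   key=lambda k: abs(primitives[k] @ residual)
                   / np.linalg.norm(primitives[k]))
        p = primitives[best]
        coef = (p @ residual) / (p @ p)     # least-squares scaling
        residual = residual - coef * p
        used.append((best, coef))
    return used, residual

# demonstrated trajectory composed of two known primitives
target = 2.0 * primitives["ramp"] + 0.5 * primitives["wave"]
used, residual = decompose(target, primitives)
```

The first, coarse term captures the dominant feature of the movement; subsequent terms refine the residual, mirroring the rough-then-incremental strategy described above.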


Proceedings of the 5th Workshop on Cognitive Aspects of Computational Language Learning (CogACLL) | 2014

A multimodal corpus for the evaluation of computational models for (grounded) language acquisition

Judith Gaspers; Maximilian Panzner; Andre Lemme; Philipp Cimiano; Katharina J. Rohlfing; Sebastian Wrede

This paper describes the design and acquisition of a German multimodal corpus for the development and evaluation of computational models for (grounded) language acquisition and algorithms enabling corresponding capabilities in robots. The corpus contains parallel data from multiple speakers/actors, including speech, visual data from different perspectives and body posture data. The corpus is designed to support the development and evaluation of models learning rather complex grounded linguistic structures, e.g. syntactic patterns, from sub-symbolic input. Moreover, it provides a valuable resource for evaluating algorithms addressing several other learning processes, e.g. concept formation or acquisition of manipulation skills. The corpus will be made available to the public.

Collaboration


Dive into Andre Lemme's collaborations.

Top Co-Authors

Jochen J. Steil
Braunschweig University of Technology

Aude Billard
École Polytechnique Fédérale de Lausanne

Ananda L. Freire
Federal University of Ceará

Tamar Flash
Weizmann Institute of Science

Yaron Meirovitch
Weizmann Institute of Science