Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Jacques Kaiser is active.

Publication


Featured research published by Jacques Kaiser.


Frontiers in Neurorobotics | 2017

Connecting Artificial Brains to Robots in a Comprehensive Simulation Framework: The Neurorobotics Platform

Egidio Falotico; Lorenzo Vannucci; Alessandro Ambrosano; Ugo Albanese; Stefan Ulbrich; Juan Camilo Vasquez Tieck; Georg Hinkel; Jacques Kaiser; Igor Peric; Oliver Denninger; Nino Cauli; Murat Kirtay; Arne Roennau; Gudrun Klinker; Axel Von Arnim; Luc Guyot; Daniel Peppicelli; Pablo Martínez-Cañada; Eduardo Ros; Patrick Maier; Sandro Weber; Manuel J. Huber; David A. Plecher; Florian Röhrbein; Stefan Deser; Alina Roitberg; Patrick van der Smagt; Rüdiger Dillmann; Paul Levi; Cecilia Laschi

Combined efforts in the fields of neuroscience, computer science, and biology have made it possible to design biologically realistic models of the brain based on spiking neural networks. To properly validate these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Because of the complexity of these brain models, which at the current stage cannot deal with real-time constraints, it is not possible to embed them in a real-world task; the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there is so far no tool that makes it easy to establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure that lets them connect brain models to detailed simulations of robot bodies and environments, and use the resulting neurorobotic systems for in silico experimentation. To simplify the workflow and reduce the level of programming skill required, the platform provides editors for specifying experimental sequences and conditions, environments, robots, and brain–body connectors. In addition, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform, developed in subproject 10 "Neurorobotics" of the Human Brain Project (HBP). In its current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models. We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking task embedding a retina model on the iCub humanoid robot. These use cases allow an assessment of the applicability of the Neurorobotics Platform to robotic tasks as well as to neuroscientific experiments.
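
The core mechanism described above, coupling a brain model to a simulated body, can be illustrated with a toy closed loop in which transfer functions translate sensor readings into neuron input and spikes into motor commands. The sketch below is illustrative only and does not use the actual Neurorobotics Platform API; brain_step, sensor_to_current, and spikes_to_motor are hypothetical names, and the one-line "physics" stands in for a full robot simulator.

```python
# Toy closed loop in the spirit of a brain-body connector. All names
# are hypothetical; this is not the Neurorobotics Platform API.
import numpy as np

N = 20                       # toy brain: N leaky integrate-and-fire neurons
v = np.zeros(N)              # membrane potentials
TAU, V_TH = 0.9, 1.0         # leak factor and spike threshold

def brain_step(input_current):
    """Advance the toy spiking model one tick; return a boolean spike vector."""
    global v
    v = TAU * v + input_current
    spikes = v >= V_TH
    v[spikes] = 0.0          # reset neurons that fired
    return spikes

def sensor_to_current(distance):
    """Transfer function (robot -> brain): nearer obstacle, stronger drive."""
    return np.full(N, 1.2 / max(distance, 0.1))

def spikes_to_motor(spikes):
    """Transfer function (brain -> robot): spike count sets turn speed."""
    return 0.05 * spikes.sum()

distance = 2.0               # simulated obstacle distance
for t in range(50):          # the closed simulation loop
    spikes = brain_step(sensor_to_current(distance))
    turn = spikes_to_motor(spikes)
    # Crude stand-in for physics: the obstacle closes in unless the
    # robot turns away from it.
    distance = max(0.1, distance - 0.05 + turn)
```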


International Conference on Artificial Neural Networks | 2017

Spiking Convolutional Deep Belief Networks

Jacques Kaiser; David Zimmerer; J. Camilo Vasquez Tieck; Stefan Ulbrich; Arne Roennau; Rüdiger Dillmann

Understanding visual input as perceived by humans is a challenging task for machines. Today, most successful methods work by learning features from static images. Based on classical artificial neural networks, those methods are not adapted to process the event streams provided by the Dynamic Vision Sensor (DVS). Recently, an unsupervised learning rule to train Spiking Restricted Boltzmann Machines has been presented [9]. Relying on synaptic plasticity, it can learn features directly from event streams. In this paper, we extend this method by adding convolutions, lateral inhibition, and multiple layers. We evaluate our method on a self-recorded DVS dataset as well as the Poker-DVS dataset. Our results show that our convolutional method performs better and needs fewer parameters. It also achieves results comparable to previous event-based classification methods while learning features in an unsupervised fashion.
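
A minimal sketch of the kind of spike-timing-dependent plasticity (STDP) update behind such unsupervised, event-driven feature learning is shown below. The pair-based form and all constants are illustrative assumptions, not parameters taken from the paper.

```python
# Pair-based STDP: potentiate when the presynaptic spike precedes the
# postsynaptic one, depress otherwise. Constants are illustrative.
import numpy as np

A_PLUS, A_MINUS, TAU_MS = 0.01, 0.012, 20.0   # learning rates, time constant

def stdp_update(w, t_pre_ms, t_post_ms):
    """Return the updated weight for one pre/post spike pair."""
    dt = t_post_ms - t_pre_ms
    if dt > 0:
        w += A_PLUS * np.exp(-dt / TAU_MS)    # causal pair: potentiation
    else:
        w -= A_MINUS * np.exp(dt / TAU_MS)    # anti-causal pair: depression
    return float(np.clip(w, 0.0, 1.0))        # keep the weight bounded

w = stdp_update(0.5, t_pre_ms=3.0, t_post_ms=8.0)  # -> slightly above 0.5
```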


Conference on Biomimetic and Biohybrid Systems | 2016

Retina Color-Opponency Based Pursuit Implemented Through Spiking Neural Networks in the Neurorobotics Platform

Alessandro Ambrosano; Lorenzo Vannucci; Ugo Albanese; Murat Kirtay; Egidio Falotico; Pablo Martínez-Cañada; Georg Hinkel; Jacques Kaiser; Stefan Ulbrich; Paul Levi; Christian A. Morillas; Alois Knoll; Marc-Oliver Gewaltig; Cecilia Laschi

The ‘red-green’ pathway of the retina is classically recognized as one of the retinal mechanisms allowing humans to gather color information from light, by combining information from L-cones and M-cones in an opponent way. The precise retinal circuitry that allows the opponency process to occur is still uncertain, but it is known that signals from L-cones and M-cones, having a widely overlapping spectral response, contribute with opposite signs. In this paper, we simulate the red-green opponency process using a retina model based on linear-nonlinear analysis to characterize context adaptation and exploiting an image-processing approach to simulate the neural responses in order to track a moving target. Moreover, we integrate this model within a visual pursuit controller implemented as a spiking neural network to guide eye movements in a humanoid robot. Tests conducted in the Neurorobotics Platform confirm the effectiveness of the whole model. This work is the first step towards a bio-inspired smooth pursuit model embedding a retina model using spiking neural networks.
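
The opponency signal itself is simple to sketch: responses standing in for L-cones and M-cones are combined with opposite signs, and the signed map localizes a reddish target. Approximating the cone responses by the R and G channels of an RGB frame, as below, is a deliberate simplification; the paper's retina model uses linear-nonlinear filtering with realistic spectral sensitivities.

```python
# Crude red-green opponency over an RGB frame. Channel-based cone
# proxies and the threshold are illustrative simplifications.
import numpy as np

rgb = np.random.rand(64, 64, 3)     # stand-in for a camera frame
l_cone = rgb[..., 0]                # L-cone proxy: red channel
m_cone = rgb[..., 1]                # M-cone proxy: green channel

opponent = l_cone - m_cone          # positive: reddish, negative: greenish
target = opponent > 0.3             # threshold picks out a red target
ys, xs = np.nonzero(target)
if xs.size:                         # centroid of the red blob, for pursuit
    cx, cy = xs.mean(), ys.mean()
```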


Simulation, Modeling, and Programming for Autonomous Robots | 2016

Towards a framework for end-to-end control of a simulated vehicle with spiking neural networks

Jacques Kaiser; J. Camilo Vasquez Tieck; Christian Hubschneider; Peter Wolf; Michael Weber; Michael Hoff; Alexander Friedrich; Konrad Wojtasik; Arne Roennau; Ralf Kohlhaas; Rüdiger Dillmann; J. Marius Zöllner

Spiking neural networks are in theory more computationally powerful than the rate-based neural networks often used in deep learning architectures. However, unlike for rate-based neural networks, it is still unclear how to train spiking networks to solve complex problems. There are still no standard training algorithms, which prevents roboticists from using spiking networks and has led to a lack of neurorobotics applications. The contribution of this paper is twofold. First, we present a modular framework to evaluate neural self-driving vehicle applications. It provides a visual encoder from camera images to spikes inspired by the silicon retina (DVS), and a steering-wheel decoder based on an agonist-antagonist muscle model. Second, using this framework, we demonstrate a spiking neural network that controls a vehicle end-to-end for lane-following behavior. The network is feed-forward and relies on hand-crafted feature detectors. In future work, this framework could be used to design more complex networks and to use the evaluation metrics for learning.
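
In the spirit of the two interfaces the framework provides, the sketch below shows a DVS-inspired encoder that converts brightness changes between consecutive camera frames into signed events, and an agonist-antagonist decoder that maps the firing rates of two opposing motor populations to a steering angle. Thresholds, gains, and function names are illustrative assumptions, not the framework's API.

```python
# DVS-style frame-difference encoder and agonist-antagonist steering
# decoder. All constants and names are illustrative.
import numpy as np

THRESH = 0.1                         # log-intensity change threshold

def frames_to_events(prev_gray, gray):
    """Emit +1/-1 'events' where log intensity changed enough, 0 elsewhere."""
    diff = np.log(gray + 1e-3) - np.log(prev_gray + 1e-3)
    return np.sign(diff) * (np.abs(diff) > THRESH)

def decode_steering(rate_left, rate_right, gain=0.5):
    """Two populations pull against each other; their normalized
    difference sets the steering angle."""
    total = rate_left + rate_right
    return 0.0 if total == 0 else gain * (rate_right - rate_left) / total

prev = np.random.rand(32, 32)
cur = np.clip(prev + 0.05 * np.random.randn(32, 32), 1e-3, 1.0)
events = frames_to_events(prev, cur)
angle = decode_steering(rate_left=12.0, rate_right=20.0)   # steer right
```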


Bioinspiration & Biomimetics | 2017

Scaling up liquid state machines to predict over address events from dynamic vision sensors

Jacques Kaiser; Rainer Stal; Anand Subramoney; Arne Roennau; Rüdiger Dillmann

Short-term visual prediction is important both in biology and robotics. It allows us to anticipate upcoming states of the environment and therefore to plan more efficiently. In theoretical neuroscience, liquid state machines have been proposed as a biologically inspired method to perform asynchronous prediction without a model. However, they have so far only been demonstrated in simulation or on small, pre-processed camera images. In this paper, we use a liquid state machine to predict over the whole 128 × 128 event stream provided by a real dynamic vision sensor (DVS, or silicon retina). Thanks to the event-based nature of the DVS, the liquid is constantly fed with data when an object is in motion, fully embracing the asynchronicity of spiking neural networks. We propose a smooth continuous representation of the event stream for the short-term visual prediction task. Moreover, compared to previous works (2002 Neural Comput. 2525 282-93 and Burgsteiner H et al 2007 Appl. Intell. 26 99-109), we scale the input dimensionality that the liquid operates on by two orders of magnitude. We also expose the current limits of our method by running experiments in a challenging environment where multiple objects are in motion. This paper is a step towards integrating biologically inspired algorithms derived in theoretical neuroscience into real-world robotic setups. We believe that liquid state machines could complement current prediction algorithms used in robotics, especially when dealing with asynchronous sensors.
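
One plausible reading of the smooth continuous representation mentioned above is an exponentially decaying per-pixel trace that incoming events push up (ON) or down (OFF); the liquid is then driven by the flattened trace. The decay constant, resolution, and event format below are assumptions, not the paper's exact parameters.

```python
# Exponentially decaying event trace as a dense input to the liquid.
import numpy as np

H = W = 128                           # assumed DVS resolution
DECAY = 0.95                          # per-millisecond trace decay

trace = np.zeros((H, W))

def integrate_events(trace, events, dt_ms=1.0):
    """Decay the trace, then add signed event polarities at (x, y)."""
    trace *= DECAY ** dt_ms
    for x, y, polarity in events:     # events: iterable of (x, y, +1/-1)
        trace[y, x] += polarity
    return trace

trace = integrate_events(trace, [(64, 64, +1), (65, 64, +1)])
liquid_input = trace.ravel()          # flattened trace drives the liquid
```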


International Conference on Artificial Neural Networks | 2018

Learning Continuous Muscle Control for a Multi-joint Arm by Extending Proximal Policy Optimization with a Liquid State Machine

Juan Camilo Vasquez Tieck; Marin Vlastelica Pogančić; Jacques Kaiser; Arne Roennau; Marc-Oliver Gewaltig; Rüdiger Dillmann

There have been many advances in the field of reinforcement learning for continuous control problems. Usually, these approaches use deep learning with artificial neural networks to approximate policies and value functions. In addition, there have been interesting advances in spiking neural networks, towards a more biologically plausible model of neurons and learning mechanisms. We present an approach to learn continuous muscle control of a multi-joint arm. We use reinforcement learning for a target-reaching task, which can be modeled as a partially observable Markov decision process. We extend proximal policy optimization with a liquid state machine (LSM) for state representation to achieve better performance in the target-reaching task. The results show that we are able to learn to control the arm after training the readout of the LSM with reinforcement learning. The input-current encoding used for the state is sufficient to obtain a good projection into the higher-dimensional space of the LSM. The results also show that the arm can be controlled with a linear readout, which is equivalent to a one-layer neural network. We show that there are clear benefits to training the readout of an LSM with reinforcement learning. These results point towards the benefits of using an LSM as a drop-in state transformation in general.
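
The role of the LSM as a drop-in state transformation can be sketched with a fixed random reservoir feeding a linear readout, where only the readout weights would be trained (here, by the reinforcement learner). The rate-based, echo-state-style reservoir below only illustrates the projection into a higher-dimensional space; the paper's liquid is genuinely spiking, and all sizes and constants are assumptions.

```python
# Fixed random reservoir as a state transformation; only W_out is trained.
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, RES_DIM, ACT_DIM = 10, 200, 4          # observation, liquid, muscles

W_in = rng.normal(0.0, 0.5, (RES_DIM, OBS_DIM))         # fixed input weights
W_rec = rng.normal(0.0, 1.0 / np.sqrt(RES_DIM), (RES_DIM, RES_DIM))
W_out = np.zeros((ACT_DIM, RES_DIM))            # linear readout (trained)
state = np.zeros(RES_DIM)

def reservoir_step(obs):
    """Leaky update of the reservoir; returns the high-dimensional state."""
    global state
    state = 0.9 * state + 0.1 * np.tanh(W_in @ obs + W_rec @ state)
    return state

obs = rng.normal(size=OBS_DIM)                  # stand-in for arm/target state
action = W_out @ reservoir_step(obs)            # muscle activations
```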


International Conference on Artificial Neural Networks | 2018

Microsaccades for Neuromorphic Stereo Vision

Jacques Kaiser; Jakob Weinland; Philip Keller; Lea Steffen; J. Camilo Vasquez Tieck; Daniel Reichard; Arne Roennau; Jörg Conradt; Rüdiger Dillmann

Depth perception through stereo vision is an important feature of biological and artificial vision systems. While biological systems can compute disparities effortlessly, doing so requires intensive processing in artificial vision systems. The computational complexity resides in solving the correspondence problem: finding matching pairs of points in the two eyes. Inspired by the retina, event-based vision sensors provide a new constraint to solve the correspondence problem: time. Relying on precise spike times, spiking neural networks can take advantage of this constraint. However, since event-based vision sensors only report local changes in light intensity, disparities can only be computed in dynamic environments. In this paper, we show how microsaccadic eye movements can be used to compute disparities in static environments. To this end, we built a robotic head supporting two Dynamic Vision Sensors (DVS) capable of independent panning and simultaneous tilting. We evaluate the method on both static and dynamic scenes perceived through microsaccades. This paper demonstrates the complementarity of event-based vision sensors and active perception, leading to more biologically inspired robots.
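
The timing constraint at the heart of the approach can be sketched as a coincidence test: a left-sensor event is matched to a right-sensor event on the same row whose timestamp falls within a small window, and the horizontal offset of matched pairs gives the disparity. The window size, greedy matching, and event format are illustrative assumptions.

```python
# Stereo correspondence by spike-time coincidence between two DVS streams.

COINCIDENCE_US = 500          # assumed coincidence window, microseconds

def match_events(left, right):
    """left/right: lists of (t_us, x, y) events.
    Returns (x_left, x_right, y) triples; disparity is x_left - x_right."""
    matches = []
    for tl, xl, yl in left:
        for tr, xr, yr in right:
            if yl == yr and abs(tl - tr) <= COINCIDENCE_US:
                matches.append((xl, xr, yl))
                break         # greedy: take the first coincident partner
    return matches

left = [(1000, 40, 12), (2000, 41, 12)]
right = [(1100, 30, 12), (2050, 31, 12)]
pairs = match_events(left, right)     # both pairs have disparity 10
```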


International Conference on Artificial Neural Networks | 2017

Towards Grasping with Spiking Neural Networks for Anthropomorphic Robot Hands

J. Camilo Vasquez Tieck; Heiko Donat; Jacques Kaiser; Igor Peric; Stefan Ulbrich; Arne Roennau; Marius Zöllner; Rüdiger Dillmann

Representation and execution of movement in biology is an active field of research relevant to neurorobotics. Humans can remember grasp motions and modify them during execution based on an object's shape and the intended interaction with it. We present a hierarchical spiking neural network with a biologically inspired architecture for representing different grasp motions. We demonstrate the ability of our network to learn from human demonstration using synaptic plasticity on two different exemplary grasp types (pinch and cylinder). We evaluate the performance of the network in simulation and on a real anthropomorphic robotic hand. The network learns finger coordination and synergies between joints that can be used for grasping.
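
The hierarchical idea, a high-level grasp-type selection gating lower-level joint synergies learned from demonstration, can be sketched without any spiking dynamics; the synergy vectors below are illustrative stand-ins for what the network would learn from human demonstration.

```python
# Grasp-type selection gating joint synergies. All values are illustrative.
import numpy as np

JOINTS = 9                                       # toy anthropomorphic hand
synergies = {
    "pinch": np.linspace(0.0, 0.4, JOINTS),      # stand-in learned pattern
    "cylinder": np.full(JOINTS, 0.7),
}

def execute_grasp(grasp_type, phase):
    """Scale the selected synergy by the motion phase in [0, 1]."""
    return phase * synergies[grasp_type]

targets = execute_grasp("pinch", phase=0.5)      # mid-motion joint targets
```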


IEEE International Conference on Cognitive Informatics and Cognitive Computing | 2018

Controlling a Robot Arm for Target Reaching without Planning Using Spiking Neurons

J. Camilo Vasquez Tieck; Lea Steffen; Jacques Kaiser; Arne Roennau; Rüdiger Dillmann


IEEE International Conference on Biomedical Robotics and Biomechatronics | 2018

Learning to Reproduce Visually Similar Movements by Minimizing Event-Based Prediction Error

Jacques Kaiser; Svenja Melbaum; J. Camilo Vasquez Tieck; Arne Roennau; Martin V. Butz; Rüdiger Dillmann

Collaboration


Dive into Jacques Kaiser's collaboration.

Top Co-Authors

Arne Roennau
Center for Information Technology

Rüdiger Dillmann
Center for Information Technology

J. Camilo Vasquez Tieck
Center for Information Technology

Stefan Ulbrich
Karlsruhe Institute of Technology

Igor Peric
Center for Information Technology

Lea Steffen
Center for Information Technology

Georg Hinkel
Center for Information Technology

Juan Camilo Vasquez Tieck
Center for Information Technology

Paul Levi
Center for Information Technology

Alessandro Ambrosano
Sant'Anna School of Advanced Studies