Eric Aislan Antonelo
Ghent University
Publication
Featured research published by Eric Aislan Antonelo.
Neural Networks | 2008
Eric Aislan Antonelo; Benjamin Schrauwen; Dirk Stroobandt
Reservoir Computing (RC) techniques use a fixed (usually randomly created) recurrent neural network, or more generally any dynamic system operating at the edge of stability, of which only a linear static readout layer is trained by standard linear regression. In this work, RC is used for detecting complex events in autonomous robot navigation. The approach extends to robot localization tasks based solely on a few short-range, noisy distance sensors. After learning, the robot holds an implicit map of the environment that allows efficient localization simply by processing the incoming stream of distance readings. These techniques are demonstrated both in a simple simulation environment and in the physically realistic Webots simulation of the commercially available e-puck robot, using several complex and even dynamic environments.
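The fixed-reservoir-plus-linear-readout recipe described above is compact enough to sketch in code. Below is a minimal echo state network in Python/NumPy; the reservoir size, spectral radius, input dimensionality, and the synthetic sensor/event data are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

def make_reservoir(n_in, n_res=400, rho=0.9, seed=0):
    """Build a fixed random reservoir; returns a function mapping an input
    sequence of shape (T, n_in) to reservoir states of shape (T, n_res)."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.1, 0.1, (n_res, n_in))   # fixed input weights
    W = rng.normal(size=(n_res, n_res))            # fixed recurrent weights
    W *= rho / max(abs(np.linalg.eigvals(W)))      # scale spectral radius below 1
    def run(U):
        x, states = np.zeros(n_res), []
        for u in U:
            x = np.tanh(W_in @ u + W @ x)          # leak rate and bias omitted
            states.append(x.copy())
        return np.array(states)
    return run

def train_readout(X, Y, ridge=1e-6):
    """The only trained part: (regularized) linear regression from states to targets."""
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)

# Synthetic stand-ins: an 8-sensor stream and one-hot labels for 3 event classes.
rng = np.random.default_rng(1)
run_reservoir = make_reservoir(n_in=8)
U_train, Y_train = rng.random((1000, 8)), np.eye(3)[rng.integers(0, 3, 1000)]
W_out = train_readout(run_reservoir(U_train), Y_train)
event_scores = run_reservoir(rng.random((200, 8))) @ W_out   # per-step detections
```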
Neural Processing Letters | 2007
Eric Aislan Antonelo; Benjamin Schrauwen; Jan Van Campenhout
Autonomous mobile robots form an important research topic in the field of robotics due to their near-term applicability in the real world as domestic service robots. These robots must be designed so that they can be trained efficiently from example sequences. They need to be aware of their position in the environment and also need to create models of it for deliberative planning. These tasks have to be performed using a limited number of sensors with low accuracy and a restricted amount of computational power. In this contribution we show that the recently emerged paradigm of Reservoir Computing (RC) is very well suited to solve all of the above mentioned problems, namely learning by example, robot localization, and map and path generation. Reservoir Computing is a technique which enables a system to learn any time-invariant filter of the input by training a simple linear regressor that acts on the states of a high-dimensional but random dynamic system excited by the inputs. In addition, RC is a simple technique featuring ease of training and low computational and memory demands.
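Since localization in this setting is just another linear readout on the reservoir states, a compact sketch is possible. It reuses make_reservoir() and train_readout() from the code block above; the sensor stream and the supervising pose signal are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
U_train = rng.random((1000, 8))      # noisy distance-sensor stream (placeholder)
P_train = rng.random((1000, 2))      # (x, y) poses logged during training runs

run = make_reservoir(n_in=8, seed=3)
X = run(U_train)                     # the reservoir acts as the implicit map
W_pos = train_readout(X, P_train)    # linear map: reservoir state -> pose
est_xy = run(rng.random((200, 8))) @ W_pos   # localization without odometry
```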
IEEE Transactions on Neural Networks | 2015
Eric Aislan Antonelo; Benjamin Schrauwen
This paper proposes a general reservoir computing (RC) learning framework that can be used to learn navigation behaviors for mobile robots in simple and complex unknown partially observable environments. RC provides an efficient way to train recurrent neural networks by letting the recurrent part of the network (called reservoir) be fixed while only a linear readout output layer is trained. The proposed RC framework builds upon the notion of navigation attractor or behavior that can be embedded in the high-dimensional space of the reservoir after learning. The learning of multiple behaviors is possible because the dynamic robot behavior, consisting of a sensory-motor sequence, can be linearly discriminated in the high-dimensional nonlinear space of the dynamic reservoir. Three learning approaches for navigation behaviors are shown in this paper. The first approach learns multiple behaviors based on the examples of navigation behaviors generated by a supervisor, while the second approach learns goal-directed navigation behaviors based only on rewards. The third approach learns complex goal-directed behaviors, in a supervised way, using a hierarchical architecture whose internal predictions of contextual switches guide the sequence of basic navigation behaviors toward the goal.
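One way to picture the supervised variant is that each demonstrated behavior becomes its own linear readout on a shared, fixed reservoir; the reward-based and hierarchical approaches are not reproduced here, and the paper's exact wiring may differ. The sketch reuses make_reservoir() and train_readout() from the first code block; behavior names and data are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(4)
T, n_sens, n_motor = 500, 8, 2

# One demonstrated (sensor, motor-command) sequence per behavior; random stand-ins.
demos = {
    "obstacle_avoidance": (rng.random((T, n_sens)), rng.random((T, n_motor))),
    "goal_directed":      (rng.random((T, n_sens)), rng.random((T, n_motor))),
}

run = make_reservoir(n_sens, seed=5)          # one shared fixed reservoir
readouts = {name: train_readout(run(U), Y) for name, (U, Y) in demos.items()}

# At run time, the selected readout maps the current reservoir state to motor commands.
x_now = run(rng.random((50, n_sens)))[-1]
motor_cmd = x_now @ readouts["goal_directed"]
```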
international conference on robotics and automation | 2008
Eric Aislan Antonelo; Benjamin Schrauwen; Dirk Stroobandt
In this work we tackle the road sign problem with reservoir computing (RC) networks. The T-maze task (a particular form of the road sign problem) consists of a robot in a T-shaped environment that must reach the correct goal (left or right arm of the T-maze) depending on a previously received input sign. It is a control task in which the delay between the received sign and the required response (e.g., turn right or left) is a crucial factor. Delayed response tasks like this one form a temporal problem that can be handled very well by RC networks. Reservoir computing is a biologically plausible technique which overcomes the problems of previous algorithms such as backpropagation through time, which exhibits slow (or no) convergence during training. RC is a recent concept that comes with a fast and efficient training algorithm. We show that this simple approach can solve the T-maze task efficiently.
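The delayed-response structure of the task can be illustrated with a toy, non-robotic version of the road sign problem: a brief ±1 cue must be reproduced only after a silent delay, which works as long as the reservoir's fading memory spans that delay. The sketch reuses make_reservoir() and train_readout() from the first code block; the cue encoding, delay length, and trial counts are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
run = make_reservoir(n_in=1, seed=7)

def toy_trial(delay=20):
    """Toy road-sign trial: a brief +/-1 cue, then silence for `delay` steps;
    the correct response (reproduce the cue's sign) is required only at the end."""
    sign = rng.choice([-1.0, 1.0])
    U = np.zeros((delay + 3, 1))
    U[:3, 0] = sign                                  # cue shown only at the start
    return U, sign

trials = [toy_trial() for _ in range(200)]
X_end = np.vstack([run(U)[-1:] for U, _ in trials])  # reservoir state at decision time
Y_end = np.array([[s] for _, s in trials])
W_out = train_readout(X_end, Y_end)

U_test, s_test = toy_trial()
decision = run(U_test)[-1] @ W_out   # its sign should match s_test while the delay
                                     # stays within the reservoir's fading memory
```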
international joint conference on neural networks | 2006
Eric Aislan Antonelo; Albert-Jan Baerveldt; Thorsteinn Rögnvaldsson; Mauricio Figueiredo
Classical reinforcement learning mechanisms and a modular neural network are unified for conceiving an intelligent autonomous system for mobile robot navigation. The design aims at inhibiting two common navigation deficiencies: generation of unsuitable cyclic trajectories and ineffectiveness in risky configurations. Distinct design mechanisms are considered for tackling these difficulties, for instance: 1) a neuron parameter for memorizing neuron activities (also functioning as a learning factor), 2) reinforcement learning mechanisms for adjusting neuron parameters (not only synaptic weights), and 3) an inner-triggered reinforcement signal. Simulation results show that the proposed system circumvents difficulties caused by specific environment configurations, improving the trade-off between collisions and target captures.
Neural Networks | 2012
Eric Aislan Antonelo; Benjamin Schrauwen
This work proposes a hierarchical biologically inspired architecture for learning sensor-based spatial representations of a robot environment in an unsupervised way. The first layer is a fixed, randomly generated recurrent neural network, the reservoir, which projects the input into a high-dimensional, dynamic space. The second layer extracts slowly varying signals from the reservoir states using Slow Feature Analysis (SFA), whereas the third layer learns a sparse coding on the SFA outputs using Independent Component Analysis (ICA). While the SFA layer generates non-localized activations in space, the ICA layer shows high place selectivity, forming localized spatial activations characteristic of the place cells found in the hippocampus of the rodent brain. We show that, using a limited number of noisy short-range distance sensors as input, the proposed system learns a spatial representation of the environment which can be used to predict the actual location of simulated and real robots, without the use of odometry. The results confirm that the reservoir layer is essential for learning spatial representations from low-dimensional input such as distance sensors: because the reservoir state reflects the recent history of the input stream, this fading memory is essential for distinguishing locations, especially when locations are ambiguous and characterized by similar sensor readings.
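The reservoir → SFA → ICA pipeline can be sketched with a plain linear SFA step and scikit-learn's FastICA standing in for the ICA layer. Component counts, the absence of any nonlinear expansion, and the random input data are assumptions rather than the paper's configuration; make_reservoir() comes from the first code block.

```python
import numpy as np
from sklearn.decomposition import FastICA   # stand-in for the ICA layer

def linear_sfa(X, n_slow=8):
    """Linear Slow Feature Analysis: whiten the data, then keep the directions
    whose temporal derivative has minimal variance (the slowest signals)."""
    Xc = X - X.mean(axis=0)
    d, E = np.linalg.eigh(np.cov(Xc.T))
    keep = d > 1e-9                                  # drop near-singular directions
    S = Xc @ E[:, keep] / np.sqrt(d[keep])           # whitened signals
    d2, E2 = np.linalg.eigh(np.cov(np.diff(S, axis=0).T))
    return S @ E2[:, :n_slow]                        # slowest n_slow features

# Layer 1: reservoir states driven by a (placeholder) noisy sensor stream.
rng = np.random.default_rng(8)
run = make_reservoir(n_in=8, seed=9)
states = run(rng.random((2000, 8)))

# Layer 2: slowly varying signals; Layer 3: sparse, place-cell-like units.
slow = linear_sfa(states, n_slow=8)
place_units = FastICA(n_components=8, random_state=0).fit_transform(slow)
```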
computational intelligence in robotics and automation | 2009
Tim Waegeman; Eric Aislan Antonelo; Francis wyffels; Benjamin Schrauwen
Autonomous mobile robots must accomplish tasks in unknown and noisy environments. In this context, learning robot behaviors through imitation is desirable from the perspective of service robotics as well as of learning robots. In this work, we use Reservoir Computing (RC) for learning robot behaviors by demonstration. In RC, a randomly generated recurrent neural network, the reservoir, projects the input into a dynamic temporal space. The reservoir states are mapped into a readout output layer, which is the only part trained, using standard linear regression. In this paper, we use a two-layer modular structure, where the first layer comprises two RC networks, each learning a primitive behavior, namely obstacle avoidance and target seeking. The second layer is composed of one RC network for behavior combination and coordination. The hierarchical RC network learns from examples given by simple controllers which implement the primitive behaviors. We use a simulation model of the e-puck robot, whose distance sensors and camera serve as input for our system. The experiments show that, after training, the robot learns to coordinate the Goal Seeking (GS) and Obstacle Avoidance (OA) behaviors in unknown environments, being able to capture targets and navigate efficiently.
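The two-layer structure can be sketched as two primitive RC modules whose outputs feed a third, coordinating reservoir. The teacher signals below are random placeholders for the simple hand-coded controllers, the camera input is omitted, and the exact wiring may differ from the paper; make_reservoir() and train_readout() come from the first code block.

```python
import numpy as np

rng = np.random.default_rng(10)
T, n_sens, n_motor = 800, 8, 2

# Placeholder teachers: motor targets from simple controllers for each primitive,
# plus a coordination teacher for the combined behavior.
U      = rng.random((T, n_sens))
Y_oa   = rng.random((T, n_motor))            # obstacle-avoidance demonstrations
Y_gs   = rng.random((T, n_motor))            # target-seeking demonstrations
Y_comb = rng.random((T, n_motor))            # coordinated-behavior demonstrations

# Layer 1: two primitive RC modules (separate reservoirs, one readout each).
run_oa, run_gs = make_reservoir(n_sens, seed=11), make_reservoir(n_sens, seed=12)
W_oa = train_readout(run_oa(U), Y_oa)
W_gs = train_readout(run_gs(U), Y_gs)

# Layer 2: a third reservoir reads the primitives' outputs and learns to combine them.
U2 = np.hstack([run_oa(U) @ W_oa, run_gs(U) @ W_gs])
run_top = make_reservoir(U2.shape[1], seed=13)
W_comb = train_readout(run_top(U2), Y_comb)
```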
systems, man and cybernetics | 2008
Eric Aislan Antonelo; Benjamin Schrauwen; Dirk Stroobandt
Reservoir computing (RC) uses a randomly created recurrent neural network as a reservoir of rich dynamics which projects the input to a high-dimensional space. These projections are mapped to the desired output using a linear output layer, which is the only part trained, by standard linear regression. In this work, RC is used for imitation learning of multiple behaviors generated by the different controllers of an intelligent navigation system for mobile robots previously published in the literature. Target seeking and exploration are conflicting behaviors that are modeled with a single RC network. The switching between the learned behaviors is implemented by an extra input which is able to change the dynamics of the reservoir and, in this way, change the behavior of the system. Experiments show the capabilities of Reservoir Computing for modeling multiple behaviors and behavior switching.
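The extra switching input can be illustrated by appending one channel to the sensor vector and training a single readout on demonstrations of both behaviors. The data below are random stand-ins for the cited navigation system's controllers, and make_reservoir()/train_readout() come from the first code block.

```python
import numpy as np

rng = np.random.default_rng(14)
T, n_sens, n_motor = 600, 8, 2

# One extra input channel selects the behavior (0 = exploration, 1 = target seeking);
# the same reservoir and the same readout serve both behaviors.
U_explore = np.hstack([rng.random((T, n_sens)), np.zeros((T, 1))])
U_seek    = np.hstack([rng.random((T, n_sens)), np.ones((T, 1))])
Y_explore, Y_seek = rng.random((T, n_motor)), rng.random((T, n_motor))   # teachers

run = make_reservoir(n_in=n_sens + 1, seed=15)
X = np.vstack([run(U_explore), run(U_seek)])
Y = np.vstack([Y_explore, Y_seek])
W_out = train_readout(X, Y)     # one readout; flipping the switch input at run time
                                # shifts the reservoir dynamics and thus the behavior
```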
computational intelligence in robotics and automation | 2005
Eric Aislan Antonelo; Mauricio Figueiredo; Albert Jan Baerveldt; Rodrigo Calvo
An autonomous system able to construct its own navigation strategy for mobile robots is proposed. The navigation strategy is molded from navigation experiences (gathered as the robot navigates) according to a classical reinforcement learning procedure. The autonomous system is based on modular hierarchical neural networks. Initially, the navigation performance is poor (many collisions occur). Computer simulations show that, after a period of learning, the autonomous system generates efficient obstacle avoidance and target seeking behaviors. The experiments also support the conclusion that the autonomous system develops object discrimination capabilities as well as spatial concepts.
international conference on robotics and automation | 2010
Eric Aislan Antonelo; Benjamin Schrauwen
In this work we propose a hierarchical architecture which constructs internal models of a robot environment for goal-oriented navigation through an imitation learning process. The proposed architecture is based on the Reservoir Computing paradigm for training Recurrent Neural Networks (RNNs). It is composed of two randomly generated RNNs (called reservoirs), one for modeling the localization capability and one for learning the navigation skill. The localization module is trained to detect the current and previously visited robot rooms based only on 8 noisy infra-red distance sensors. These predictions, together with the distance sensors and the desired goal location, are used by the navigation network to steer the robot through the environment in a goal-oriented manner. The training of this architecture is performed in a supervised way (with examples of trajectories created by a supervisor) using linear regression on the reservoir states. The reservoir thus acts as a temporal kernel, projecting the inputs into a rich feature space whose states are linearly combined to generate the desired outputs. Experimental results on a simulated robot show that the trained system can localize itself within both simple and large unknown environments and navigate successfully to desired goals.
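The two-reservoir cascade can be sketched as a localization readout whose room predictions are concatenated with the sensors and a goal code to drive a second, navigation reservoir. Room counts, goal encoding, and all training signals below are hypothetical placeholders; make_reservoir() and train_readout() come from the first code block.

```python
import numpy as np

rng = np.random.default_rng(16)
T, n_sens, n_rooms, n_motor = 1000, 8, 4, 2

U_sens = rng.random((T, n_sens))                       # noisy infra-red distances
Y_room = np.eye(n_rooms)[rng.integers(0, n_rooms, T)]  # supervisor's room labels
goal   = np.eye(n_rooms)[rng.integers(0, n_rooms, T)]  # desired goal room (one-hot)
Y_mot  = rng.random((T, n_motor))                      # supervisor's steering commands

# Localization module: first reservoir + readout predicts the current room.
run_loc = make_reservoir(n_sens, seed=17)
W_loc = train_readout(run_loc(U_sens), Y_room)
room_pred = run_loc(U_sens) @ W_loc

# Navigation module: second reservoir sees sensors, predicted rooms, and the goal,
# and its readout is trained on the supervisor's steering commands.
U_nav = np.hstack([U_sens, room_pred, goal])
run_nav = make_reservoir(U_nav.shape[1], seed=18)
W_nav = train_readout(run_nav(U_nav), Y_mot)
```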