Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where David Verstraeten is active.

Publication


Featured research published by David Verstraeten.


Neural Networks | 2007

2007 Special Issue: An experimental unification of reservoir computing methods

David Verstraeten; Benjamin Schrauwen; Michiel D'Haene; Dirk Stroobandt

Three different uses of a recurrent neural network (RNN) as a reservoir that is not trained but instead read out by a simple external classification layer have been described in the literature: Liquid State Machines (LSMs), Echo State Networks (ESNs) and the Backpropagation Decorrelation (BPDC) learning rule. Individual descriptions of these techniques exist, but an overview is still lacking. Here, we present a series of experimental results that compare all three implementations, and draw conclusions about the relation between a broad range of reservoir parameters and network dynamics, memory, node complexity and performance on a variety of benchmark tests with different characteristics. Next, we introduce a new measure for the reservoir dynamics based on Lyapunov exponents. Unlike previous measures in the literature, this measure depends on the dynamics of the reservoir in response to the inputs, and in the cases we tried, it indicates an optimal value for the global scaling of the weight matrix, irrespective of the standard measures. We also describe the Reservoir Computing Toolbox that was used for these experiments, which implements all three types of reservoir computing and allows easy simulation of a wide range of reservoir topologies on a number of benchmarks.
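
As a minimal illustration of the reservoir idea this abstract surveys (a fixed random recurrent network read out by a trained linear layer), here is a toy Echo State Network sketch in NumPy. The network sizes, the spectral-radius scaling, and the one-step memory task are illustrative assumptions, not taken from the paper or its Reservoir Computing Toolbox.

```python
# Toy Echo State Network: a random, untrained reservoir plus a trained linear
# readout. All sizes, scalings, and the task itself are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # global scaling of the weight matrix

def run_reservoir(u):
    """Drive the reservoir with inputs u (T x n_in); return the states (T x n_res)."""
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        x = np.tanh(W @ x + W_in @ u_t)
        states[t] = x
    return states

# Toy task: reproduce the input delayed by one step. Only the readout is trained.
u = rng.uniform(-1, 1, (2000, n_in))
target = np.roll(u, 1, axis=0)
X, Y = run_reservoir(u)[200:], target[200:]        # discard the initial transient
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ Y)
print("train MSE:", np.mean((X @ W_out - Y) ** 2))
```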


Neurocomputing | 2008

Improving reservoirs using intrinsic plasticity

Benjamin Schrauwen; Marion Wardermann; David Verstraeten; Jochen J. Steil; Dirk Stroobandt

The benefits of using intrinsic plasticity (IP), an unsupervised, local, biologically inspired adaptation rule that tunes the probability density of a neuron's output towards an exponential distribution, thereby realizing information maximization, have already been demonstrated. In this work, we extend the ideas of this adaptation method to a more commonly used non-linearity and a Gaussian output distribution. After deriving the learning rules, we show the effects of the bounded output of the transfer function on the moments of the actual output distribution. This allows us to show that the rule converges to the expected distributions, even in random recurrent networks. The IP rule is evaluated in a reservoir computing setting, a temporal processing technique that uses random, untrained recurrent networks as excitable media, where the network's state is fed to a linear regressor used to calculate the desired output. We present an experimental comparison of the different IP rules on three benchmark tasks with different characteristics. Furthermore, we show that this unsupervised reservoir adaptation is able to adapt networks with very constrained topologies, such as a 1D lattice, which generally shows quite unsuitable dynamic behavior, into reservoirs that can be used to solve complex tasks. We clearly demonstrate that IP makes reservoir computing more robust: the internal dynamics can autonomously tune themselves, irrespective of initial weights or input scaling, to the dynamic regime that is optimal for a given task.
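
For concreteness, a single-neuron sketch of the IP idea follows, using the tanh non-linearity and Gaussian target distribution discussed above. The update equations follow the form commonly given for this case; the learning rate, target moments, and input statistics are illustrative assumptions, not the paper's experimental settings.

```python
# Intrinsic plasticity sketch for y = tanh(a*x + b): the gain a and bias b of a
# single neuron are adapted online, without supervision, so that the output
# distribution approaches a Gaussian N(mu, sigma^2). Constants are illustrative.
import numpy as np

rng = np.random.default_rng(1)
eta, mu, sigma = 1e-3, 0.0, 0.2   # IP learning rate and target N(mu, sigma^2)
a, b = 1.0, 0.0                    # gain and bias, both adapted by the rule

for _ in range(50_000):
    x = rng.normal(0.0, 1.0)       # stand-in for the neuron's net input
    y = np.tanh(a * x + b)
    # Local update pushing the output density toward the Gaussian target.
    db = -eta * (-mu / sigma**2 + (y / sigma**2) * (2 * sigma**2 + 1 - y**2 + mu * y))
    da = eta / a + db * x
    a, b = a + da, b + db

print(f"adapted gain a={a:.3f}, bias b={b:.3f}")  # gain shrinks toward the target spread
```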


Nature Communications | 2014

Experimental demonstration of reservoir computing on a silicon photonics chip

Kristof Vandoorne; Pauline Mechet; Thomas Van Vaerenbergh; Martin Fiers; Geert Morthier; David Verstraeten; Benjamin Schrauwen; Joni Dambre; Peter Bienstman

In today's age, companies employ machine learning to extract information from large quantities of data. One of those techniques, reservoir computing (RC), is a decade old and has achieved state-of-the-art performance for processing sequential data. Dedicated hardware realizations of RC could enable speed gains and power savings. Here we propose the first integrated passive silicon photonics reservoir. We demonstrate experimentally and through simulations that, thanks to the RC paradigm, this generic chip can be used to perform arbitrary Boolean logic operations with memory as well as 5-bit header recognition up to 12.5 Gbit/s, without power consumption in the reservoir. It can also perform isolated spoken digit recognition. Our realization exploits optical phase for computing. It is scalable to larger networks and much higher bitrates, up to speeds above 100 Gbit/s. These results pave the way for integrated photonic RC across a wide range of applications.


Optics Express | 2008

Toward optical signal processing using photonic reservoir computing.

Kristof Vandoorne; Wouter Dierckx; Benjamin Schrauwen; David Verstraeten; Roel Baets; Peter Bienstman; Jan Van Campenhout

We propose photonic reservoir computing as a new approach to optical signal processing in the context of large-scale pattern recognition problems. Photonic reservoir computing is a photonic implementation of the recently proposed reservoir computing concept, where the dynamics of a network of nonlinear elements are exploited to perform general signal processing tasks. In our proposed photonic implementation, we employ a network of coupled Semiconductor Optical Amplifiers (SOAs) as the basic building blocks for the reservoir. Although they differ in many key respects from traditional software-based hyperbolic tangent reservoirs, we show using simulations that such a photonic reservoir can outperform traditional reservoirs on a benchmark classification task. Moreover, a photonic implementation offers the promise of massively parallel information processing with low power and high speed.


International Joint Conference on Neural Networks | 2006

Reservoir-based techniques for speech recognition

David Verstraeten; Benjamin Schrauwen; Dirk Stroobandt

A solution for the slow convergence of most learning rules for Recurrent Neural Networks (RNNs) has been proposed under the names Liquid State Machines (LSMs) and Echo State Networks (ESNs). These methods use an RNN as a reservoir that is not trained. In this article we build upon previous work, where we used reservoir-based techniques to solve the task of isolated digit recognition. We present a straightforward improvement of our previous LSM-based implementation that allows it to outperform a state-of-the-art Hidden Markov Model (HMM) based recognizer. We also apply the Echo State approach to the problem, which allows us to investigate the impact of several interconnection parameters on the performance of our speech recognizer.
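
The readout scheme typical of this line of work can be sketched compactly: train a linear readout against one-hot class labels repeated at every time step of an utterance, then classify by winner-take-all on the time-averaged outputs. The sketch below assumes the reservoir state sequences have already been computed; all data, sizes, and names are placeholders, not the paper's actual setup.

```python
# Isolated-word classification from reservoir states: linear readout trained on
# per-time-step one-hot labels, winner-take-all on the time-averaged outputs.
import numpy as np

rng = np.random.default_rng(6)
n_classes, n_res = 10, 100

# Placeholder "reservoir state" sequences: one (T_i x n_res) array per utterance.
train_states = [rng.normal(size=(rng.integers(50, 80), n_res)) for _ in range(200)]
train_labels = rng.integers(0, n_classes, len(train_states))

# Stack all states; repeat each utterance's one-hot label at every time step.
X = np.vstack(train_states)
Y = np.vstack([np.tile(np.eye(n_classes)[y], (len(s), 1))
               for s, y in zip(train_states, train_labels)])
W_out = np.linalg.solve(X.T @ X + 1e-2 * np.eye(n_res), X.T @ Y)  # ridge readout

def classify(states):
    """Winner-take-all on the time-averaged readout outputs of one utterance."""
    return np.argmax((states @ W_out).mean(axis=0))

print("predicted class:", classify(train_states[0]))
```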


Neural Networks | 2013

Reservoir computing and extreme learning machines for non-linear time-series data analysis

John B. Butcher; David Verstraeten; Benjamin Schrauwen; Charles R. Day; P.W. Haycock

Random projection architectures such as Echo State Networks (ESNs) and Extreme Learning Machines (ELMs) use a network containing a randomly connected hidden layer and train only the output weights, overcoming the problems associated with the complex and computationally demanding training algorithms traditionally used to train neural networks, particularly recurrent ones. In this study an ESN is shown to exhibit an antagonistic trade-off between the amount of non-linear mapping and the short-term memory it can provide when applied to highly non-linear time-series data. To overcome this trade-off a new architecture, the Reservoir with Random Static Projections (R²SP), is investigated and shown to offer a significant improvement in performance. A similar approach using an ELM whose input is presented through a time delay (TD-ELM) is shown to enhance performance further: it significantly outperformed the ESN and R²SP, as well as other architectures, when applied to a novel task which allows the short-term memory and non-linearity to be varied. The hard-limiting memory of the TD-ELM appears to be best suited for the data investigated in this study, although ESN-based approaches may offer improved performance when processing data which require a longer fading memory.
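
A minimal sketch of the TD-ELM idea follows, assuming a tap-delay input line, a fixed random hidden layer, and a ridge-regression readout; the toy target and all sizes are illustrative assumptions, not the paper's benchmarks.

```python
# Time-delay ELM sketch: feed the input through a tap delay line, expand it
# with a fixed random hidden layer, and train only the linear readout.
import numpy as np

rng = np.random.default_rng(2)
T, n_taps, n_hidden = 5000, 5, 200

u = rng.uniform(0, 0.5, T)
target_full = np.tanh(u * np.roll(u, 2))   # toy target mixing memory and non-linearity

# Delay embedding: row t holds u(t), u(t-1), ..., u(t-n_taps+1).
U = np.column_stack([np.roll(u, k) for k in range(n_taps)])[n_taps:]
target = target_full[n_taps:]

# Fixed random hidden layer: the "extreme learning machine" part.
W_h = rng.normal(0, 1, (n_taps, n_hidden))
b_h = rng.normal(0, 1, n_hidden)
H = np.tanh(U @ W_h + b_h)

# Train only the readout with ridge regression.
w_out = np.linalg.solve(H.T @ H + 1e-6 * np.eye(n_hidden), H.T @ target)
print("train MSE:", np.mean((H @ w_out - target) ** 2))
```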


Scientific Reports | 2012

Information Processing Capacity of Dynamical Systems

Joni Dambre; David Verstraeten; Benjamin Schrauwen; Serge Massar

Many dynamical systems, both natural and artificial, are stimulated by time-dependent external signals, somehow processing the information contained therein. We demonstrate how to quantify the different modes in which information can be processed by such systems and combine them to define the computational capacity of a dynamical system. This capacity is bounded by the number of linearly independent state variables of the dynamical system, equaling it if the system obeys the fading memory condition. It can be interpreted as the total number of linearly independent functions of its stimuli the system can compute. Our theory combines concepts from machine learning (reservoir computing), system modeling, stochastic processes, and functional analysis. We illustrate our theory by numerical simulations of the logistic map, a recurrent neural network, and a two-dimensional reaction-diffusion system, uncovering universal trade-offs between the non-linearity of the computation and the system's short-term memory.
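
The capacity measure itself is easy to sketch: for a zero-mean target z(t), the capacity of a system with state trajectory X is C[z] = 1 - min_w ||Xw - z||^2 / ||z||^2, and the total capacity sums this over an orthogonal family of targets such as Legendre polynomials of delayed inputs. The sketch below uses a small random ESN as the dynamical system, which is an illustrative choice; all sizes and constants are arbitrary.

```python
# Computational capacity sketch: fraction of each target's variance that an
# optimal linear readout of the system state can reconstruct, summed over
# Legendre-polynomial targets of increasing degree (non-linearity) and delay
# (memory). The summed capacity is bounded by the number of state variables.
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(3)
T, n_res, warm = 20_000, 50, 500
u = rng.uniform(-1, 1, T)                 # i.i.d. input, as the theory requires

# Drive a small random ESN to obtain the state trajectory.
W_in = rng.uniform(-0.1, 0.1, n_res)
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
X = np.empty((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    X[t] = x
Xc = X[warm:] - X[warm:].mean(axis=0)     # centered states after a transient

def capacity(z):
    """C[z] = 1 - min_w ||Xc w - z||^2 / ||z||^2 for a zero-mean target z."""
    w = np.linalg.lstsq(Xc, z, rcond=None)[0]
    return 1.0 - np.sum((Xc @ w - z) ** 2) / np.sum(z ** 2)

total = 0.0
for d in (1, 2, 3):                        # polynomial degree: non-linearity
    for k in range(10):                    # delay: short-term memory
        z = legendre.legval(np.roll(u, k), [0] * d + [1])[warm:]
        total += capacity(z - z.mean())
print(f"summed capacity: {total:.1f} (bounded by {n_res} state variables)")
```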


IEEE Transactions on Neural Networks | 2011

Parallel Reservoir Computing Using Optical Amplifiers

Kristof Vandoorne; Joni Dambre; David Verstraeten; Benjamin Schrauwen; Peter Bienstman

Reservoir computing (RC), a computational paradigm inspired by neural systems, has become increasingly popular in recent years for solving a variety of complex recognition and classification problems. Thus far, most implementations have been software-based, limiting their speed and power efficiency. Integrated photonics offers the potential for a fast, power-efficient and massively parallel hardware implementation. We have previously proposed a network of coupled semiconductor optical amplifiers as an interesting test case for such a hardware implementation. In this paper, we investigate the important design parameters and the consequences of process variations through simulations. We use an isolated word recognition task with babble noise to evaluate the performance of the photonic reservoirs with respect to traditional software reservoir implementations, which are based on leaky hyperbolic tangent functions. Our results show that the use of coherent light in a well-tuned reservoir architecture offers significant performance benefits. The most important design parameters are the delay and the phase shift in the system's physical connections. With optimized values for these parameters, coherent semiconductor optical amplifier (SOA) reservoirs can achieve better results than traditional simulated reservoirs. We also show that process variations hardly degrade the performance, but amplifier noise can be detrimental. This effect must therefore be taken into account when designing SOA-based RC implementations.


PLOS ONE | 2012

A Bayesian Model for Exploiting Application Constraints to Enable Unsupervised Training of a P300-based BCI

Pieter-Jan Kindermans; David Verstraeten; Benjamin Schrauwen

This work introduces a novel classifier for a P300-based speller which, contrary to common methods, can be trained entirely without supervision using an Expectation-Maximization approach, eliminating the need for costly dataset collection or tedious calibration sessions. We use publicly available datasets to validate our method and show that our unsupervised classifier performs competitively with supervised state-of-the-art spellers. Finally, we demonstrate the added value of our method in different experimental settings which reflect realistic usage situations of increasing difficulty and which would be difficult or impossible to tackle with existing supervised or adaptive methods.


Neural Networks | 2008

2008 Special Issue: Compact hardware liquid state machines on FPGA for real-time speech recognition

Benjamin Schrauwen; Michiel D'Haene; David Verstraeten; Jan Van Campenhout

Hardware implementations of Spiking Neural Networks are numerous because these networks are well suited to digital and analog hardware and can outperform classic neural networks. This work presents an application-driven digital hardware exploration in which we implement real-time, isolated-digit speech recognition using a Liquid State Machine: a recurrent neural network of spiking neurons in which only the output layer is trained. First we test two existing hardware architectures, which we improve and extend, but these turn out to be too fast, and thus too area-consuming, for this application. Next, we present a scalable, serialized architecture that allows a very compact implementation of spiking neural networks that is still fast enough for real-time processing. All architectures support leaky integrate-and-fire membranes with exponential synaptic models. This work shows that there is a large design space of Spiking Neural Network hardware to be explored; existing architectures have spanned only part of it.
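
The supported neuron model can be sketched in a few lines: a leaky integrate-and-fire membrane driven by an exponentially decaying synaptic current. The floating-point Euler simulation below is only a behavioral sketch; the hardware uses fixed-point, serialized arithmetic, and all constants here are illustrative.

```python
# Behavioral sketch of a leaky integrate-and-fire neuron with an exponential
# synapse, the neuron model the architectures above support.
import numpy as np

dt = 1e-4                        # 0.1 ms simulation step
tau_m, tau_s = 20e-3, 5e-3       # membrane and synaptic time constants (s)
v_thresh, v_reset = 1.0, 0.0
weight = 30.0                    # synaptic weight (illustrative)

rng = np.random.default_rng(5)
steps = 10_000
input_spikes = rng.random(steps) < 0.05   # random input spike train

v, i_syn, out = 0.0, 0.0, []
for t in range(steps):
    i_syn *= 1.0 - dt / tau_s             # exponential synaptic decay
    if input_spikes[t]:
        i_syn += weight                   # jump on each input spike
    v += dt * (-v / tau_m + i_syn)        # leaky membrane integration
    if v >= v_thresh:                     # threshold crossing: emit a spike
        out.append(t)
        v = v_reset
print(f"{len(out)} output spikes in {steps * dt * 1000:.0f} ms")
```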

Collaboration


Dive into David Verstraeten's collaborations.
