Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Timothée Masquelier is active.

Publication


Featured research published by Timothée Masquelier.


Frontiers in Neuroscience | 2011

On spike-timing-dependent-plasticity, memristive devices, and building a self-learning visual cortex.

Carlos Zamarreño-Ramos; Luis A. Camuñas-Mesa; José Antonio Pérez-Carrasco; Timothée Masquelier; Teresa Serrano-Gotarredona; Bernabé Linares-Barranco

In this paper we present a very exciting overlap between emergent nanotechnology and neuroscience, discovered by neuromorphic engineers. Specifically, we link one type of memristor nanotechnology device to the biological synaptic update rule known as spike-timing-dependent plasticity (STDP), found in real biological synapses. Understanding this link allows neuromorphic engineers to develop circuit architectures that use this type of memristor to artificially emulate parts of the visual cortex. We concentrate on the type of memristor referred to as voltage- or flux-driven, and base our discussion on a behavioral macro-model for such devices. The implementations result in fully asynchronous architectures with neurons sending their action potentials not only forward but also backward. One critical aspect is to use neurons that generate spikes of specific shapes. We will see how, by changing the shapes of the neuron action potential spikes, we can tune and manipulate the STDP learning rules for both excitatory and inhibitory synapses. We will see how neurons and memristors can be interconnected to achieve large-scale spiking learning systems that follow a type of multiplicative STDP learning rule. We will briefly extend the architectures to use three-terminal transistors with similar memristive behavior. We will illustrate how a V1 visual cortex layer can be assembled and how it is capable of learning to extract orientations from visual data coming from a real artificial CMOS spiking retina observing real-life scenes. Finally, we will discuss limitations of currently available memristors. The results presented are based on behavioral simulations and do not take into account non-idealities of devices and interconnects. The aim of this paper is to present, in a tutorial manner, an initial framework for the possible development of fully asynchronous STDP learning neuromorphic architectures exploiting two- or three-terminal memristive-type devices. All files used for the simulations are made available through the journal web site.
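The STDP rule these memristor architectures emulate can be summarized, in its simplest pair-based form, by an exponential weight-update curve: pre-before-post spiking potentiates a synapse, post-before-pre depresses it. The following is a minimal illustrative sketch; the function name and all parameter values are assumptions, and the papers' actual rules are additionally shaped by the neuron spike waveforms:

```python
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre (ms).

    Pre-before-post (dt >= 0) potentiates the synapse; post-before-pre
    (dt < 0) depresses it, with exponentially decaying magnitude.
    All parameter values here are illustrative defaults, not from the paper.
    """
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)
```

The closer in time the two spikes are, the larger the weight change, in either direction.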


Frontiers in Neuroscience | 2013

STDP and STDP variations with memristors for spiking neuromorphic learning systems

Teresa Serrano-Gotarredona; Timothée Masquelier; Themistoklis Prodromakis; Giacomo Indiveri; Bernabé Linares-Barranco

In this paper we review several ways of realizing asynchronous Spike-Timing-Dependent-Plasticity (STDP) using memristors as synapses. Our focus is on how to use individual memristors to implement synaptic weight multiplications in such a way that it is not necessary to (a) introduce global synchronization or (b) separate memristor learning phases from memristor performing phases. In the approaches described, neurons fire spikes asynchronously whenever they wish, and memristive synapses perform computation and learn at their own pace, as happens in biological neural systems. We distinguish between two types of memristor physics, depending on whether the devices follow the original “moving wall” model or the “filament creation and annihilation” model. Independent of the memristor physics, we discuss two different types of STDP rules that can be implemented with memristors: either a pure timing-based rule that takes into account the arrival times of the spikes from the pre- and post-synaptic neurons, or a hybrid rule that takes into account only the timing of pre-synaptic spikes together with the membrane potential and other state variables of the post-synaptic neuron. We show how to implement these rules in cross-bar architectures that comprise massive arrays of memristors, and we discuss applications for artificial vision.


Neurocomputing | 2016

Bio-inspired unsupervised learning of visual features leads to robust invariant object recognition

Saeed Reza Kheradpisheh; Mohammad Ganjtabesh; Timothée Masquelier

The retinal image of surrounding objects varies tremendously due to changes in position, size, pose, illumination conditions, background context, occlusion, noise, and non-rigid deformations. Despite these huge variations, our visual system is able to invariantly recognize any object in just a fraction of a second. To date, various computational models have been proposed to mimic the hierarchical processing of the ventral visual pathway, with limited success. Here, we show that combining a biologically inspired network architecture with a biologically inspired learning rule significantly improves the model's performance on challenging invariant object recognition problems. Our model is an asynchronous feedforward spiking neural network. When the network is presented with natural images, the neurons in the entry layers detect edges, and the most activated ones fire first, while neurons in higher layers are equipped with spike-timing-dependent plasticity. These neurons progressively become selective to intermediate-complexity visual features appropriate for object categorization. The model is evaluated on the 3D-Object and ETH-80 datasets, two benchmarks for invariant object recognition, and is shown to outperform state-of-the-art models, including DeepConvNet and HMAX. This demonstrates its ability to accurately recognize different instances of multiple object classes even under various appearance conditions (different views, scales, tilts, and backgrounds). Several statistical analysis techniques are used to show that our model extracts class-specific and highly informative features.
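The entry-layer coding described above, where the most activated neurons fire first, is an intensity-to-latency conversion. A minimal sketch, with an assumed linear mapping and hypothetical names (the paper does not prescribe this exact formula):

```python
import numpy as np

def intensity_to_latency(activations, t_max=100.0):
    """Map activation strengths to first-spike latencies (ms):
    the most activated unit fires first (latency 0), weaker units fire
    later, and units with zero activation never fire (latency = inf).
    The linear mapping and t_max are illustrative choices."""
    a = np.asarray(activations, dtype=float)
    latencies = np.full(a.shape, np.inf)
    nz = a > 0
    if nz.any():
        latencies[nz] = t_max * (1.0 - a[nz] / a[nz].max())
    return latencies
```

For example, activations `[1.0, 0.5, 0.0]` map to latencies `[0, 50, inf]`: the spike order carries the activation ranking.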


Scientific Reports | 2016

Deep Networks Can Resemble Human Feed-forward Vision in Invariant Object Recognition.

Saeed Reza Kheradpisheh; Masoud Ghodrati; Mohammad Ganjtabesh; Timothée Masquelier

Deep convolutional neural networks (DCNNs) have attracted much attention recently, and have been shown to recognize thousands of object categories in natural image databases. Their architecture is somewhat similar to that of the human visual system: both use restricted receptive fields and a hierarchy of layers which progressively extract more and more abstracted features. Yet it is unknown whether DCNNs match human performance at the task of view-invariant object recognition, whether they make similar errors and use similar representations for this task, and whether the answers depend on the magnitude of the viewpoint variations. To investigate these issues, we benchmarked eight state-of-the-art DCNNs, the HMAX model, and a baseline shallow model, and compared their results to those of humans with backward masking. Unlike all previous DCNN studies, we carefully controlled the magnitude of the viewpoint variations, demonstrating that shallow nets can outperform deep nets and humans when variations are weak. When facing larger variations, however, more layers were needed to match human performance and error distributions, and to obtain representations consistent with human behavior. A very deep net with 18 layers even outperformed humans at the highest variation level, using the most human-like representations.


Neural Networks | 2018

STDP-based spiking deep convolutional neural networks for object recognition

Saeed Reza Kheradpisheh; Mohammad Ganjtabesh; Simon J. Thorpe; Timothée Masquelier

Previous studies have shown that spike-timing-dependent plasticity (STDP) can be used in spiking neural networks (SNNs) to extract visual features of low or intermediate complexity in an unsupervised manner. These studies, however, used relatively shallow architectures, and only one layer was trainable. Another line of research has demonstrated - using rate-based neural networks trained with back-propagation - that having many layers increases the recognition robustness, an approach known as deep learning. We thus designed a deep SNN, comprising several convolutional (trainable with STDP) and pooling layers. We used a temporal coding scheme where the most strongly activated neurons fire first, and less activated neurons fire later or not at all. The network was exposed to natural images. Thanks to STDP, neurons progressively learned features corresponding to prototypical patterns that were both salient and frequent. Only a few tens of examples per category were required and no labels were needed. After learning, the complexity of the extracted features increased along the hierarchy, from edge detectors in the first layer to object prototypes in the last layer. Coding was very sparse, with only a few thousand spikes per image, and in some cases the object category could be reasonably well inferred from the activity of a single higher-order neuron. More generally, the activity of a few hundred such neurons contained robust category information, as demonstrated using a classifier on the Caltech 101, ETH-80, and MNIST databases. We also demonstrate the superiority of STDP over other unsupervised techniques such as random crops (HMAX) or auto-encoders. Taken together, our results suggest that the combination of STDP with latency coding may be key to understanding how the primate visual system learns, as well as its remarkable processing speed and low energy consumption. These mechanisms are also interesting for artificial vision systems, particularly for hardware solutions.


eNeuro | 2016

Rank order coding: a retinal information decoding strategy revealed by large-scale multielectrode array retinal recordings

Geoffrey Portelli; John Martin Barrett; Gerrit Hilgen; Timothée Masquelier; Alessandro Maccione; Stefano Di Marco; Luca Berdondini; Pierre Kornprobst; Evelyne Sernagor

How a population of retinal ganglion cells (RGCs) encodes the visual scene remains an open question. Going beyond individual RGC coding strategies, results in salamander suggest that the relative latencies of an RGC pair encode spatial information. Thus, a population code based on this concerted spiking could be a powerful mechanism to transmit visual information rapidly and efficiently. Here, we tested this hypothesis in mouse by recording simultaneous light-evoked responses from hundreds of RGCs, at pan-retinal level, using a new generation of large-scale, high-density multielectrode array consisting of 4096 electrodes. Interestingly, we did not find any RGCs exhibiting a clear latency tuning to the stimuli, suggesting that in mouse, individual RGC pairs may not provide sufficient information. We show that a significant amount of information is encoded synergistically in the concerted spiking of large RGC populations. Thus, the RGC population response described with relative activities, or ranks, provides more relevant information than classical independent spike-count- or latency-based codes. In particular, we report for the first time that when considering the relative activities across the whole population, the wave of first stimulus-evoked spikes is an accurate indicator of stimulus content. We show that this coding strategy coexists with classical neural codes, and that it is more efficient and faster. Overall, these novel observations suggest that already at the level of the retina, concerted spiking provides a reliable and fast strategy to transmit new visual scenes.
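The rank-based population code discussed above replaces absolute spike counts or latencies with the order in which cells first fire. A minimal sketch of such a rank-order readout; the function name and tie-breaking convention are illustrative assumptions, not the paper's analysis pipeline:

```python
import numpy as np

def rank_order_code(first_spike_times):
    """Assign each cell its firing rank across the population:
    0 for the earliest first spike, 1 for the next, and so on.
    Cells that never spike (time = inf) receive the last ranks;
    ties are broken by cell index (an illustrative convention)."""
    t = np.asarray(first_spike_times, dtype=float)
    order = np.argsort(t, kind="stable")   # indices sorted by spike time
    ranks = np.empty(len(t), dtype=int)
    ranks[order] = np.arange(len(t))       # invert the permutation
    return ranks
```

The resulting rank vector is invariant to a global shift or rescaling of latencies, which is what makes it robust to overall response-onset variability.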


Scientific Reports | 2016

Microsaccades enable efficient synchrony-based coding in the retina: a simulation study

Timothée Masquelier; Geoffrey Portelli; Pierre Kornprobst

It is now reasonably well established that microsaccades (MSs) enhance visual perception, although the underlying neuronal mechanisms are unclear. Here, using numerical simulations, we show that MSs enable efficient synchrony-based coding among primate retinal ganglion cells (RGCs). First, using a jerking contrast edge as stimulus, we demonstrate a qualitative change in the RGC responses: synchronous firing, with a precision in the 10 ms range, only occurs at high speed and high contrast. MSs appear to be sufficiently fast to reach the synchronous regime. Conversely, the other kinds of fixational eye movements, known as tremor and drift, both hardly synchronize RGCs, because their amplitude is too weak and their speed too slow, respectively. Then, under natural image stimulation, we find that each MS causes certain RGCs to fire synchronously, namely those whose receptive fields contain contrast edges after the MS. The emitted synchronous spike volley thus rapidly transmits the most salient edges of the stimulus, which often constitute the most crucial information. We demonstrate that the readout could be done rapidly by simple coincidence-detector neurons without knowledge of the MS landing time, and that the required connectivity could emerge spontaneously with spike-timing-dependent plasticity.


International Symposium on Circuits and Systems | 2017

Hardware implementation of convolutional STDP for on-line visual feature learning

Amirreza Yousefzadeh; Timothée Masquelier; Teresa Serrano-Gotarredona; Bernabé Linares-Barranco

We present a highly hardware-friendly STDP (Spike Timing Dependent Plasticity) learning rule for training spiking convolutional cores in unsupervised mode and fully connected classifiers in supervised mode. Examples are given for a two-layer spiking neural system which learns features in real time from visual scenes obtained with spiking DVS (Dynamic Vision Sensor) cameras.


Neuroscience | 2017

STDP allows close-to-optimal spatiotemporal spike pattern detection by single coincidence detector neurons

Timothée Masquelier

Highlights
• Repeating spike patterns exist and are informative. Can a single cell do the readout?
• We show how a leaky integrate-and-fire (LIF) neuron can do this readout optimally.
• The optimal membrane time constant is short, possibly much shorter than the pattern.
• Spike-timing-dependent plasticity (STDP) can turn a neuron into an optimal detector.
• These results may explain how humans can learn repeating visual or auditory sequences.
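The intuition behind the short optimal membrane time constant can be illustrated with a toy leaky integrator: near-coincident input spikes sum to a high peak, while the same spikes spread out in time leak away between arrivals. A minimal sketch with assumed, illustrative parameters (no reset or threshold, unlike a full LIF model):

```python
import math

def lif_peak(spike_times, tau=5.0, dt=0.1, t_end=60.0, w=1.0):
    """Peak membrane potential of a leaky integrator (no reset, no
    threshold) driven by unit-weight input spikes. With a short time
    constant tau, only near-coincident spikes sum to a high peak.
    All parameter values are illustrative, not from the paper."""
    spikes = sorted(spike_times)
    v, peak, i = 0.0, 0.0, 0
    for step in range(int(t_end / dt)):
        t = step * dt
        v *= math.exp(-dt / tau)                 # passive leak
        while i < len(spikes) and spikes[i] <= t:
            v += w                               # instantaneous synaptic kick
            i += 1
        peak = max(peak, v)
    return peak
```

With tau = 5 ms, three coincident spikes reach a peak of 3, while the same three spikes spread over 30 ms barely exceed 1; a threshold between those two values turns the neuron into a coincidence detector.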


Archive | 2014

Spike-Timing-Dependent-Plasticity with Memristors

Teresa Serrano-Gotarredona; Timothée Masquelier; Bernabé Linares-Barranco

(This chapter reprints material from Zamarreño-Ramos et al., Front. Neurosci. 5:26, 2011, and Serrano-Gotarredona et al., Front. Neurosci. 7:02, 2013, with permission.) Here we present a very exciting overlap between emergent nanotechnology and neuroscience, discovered by neuromorphic engineers. Specifically, we link one type of memristor nanotechnology device to the biological synaptic update rule known as Spike-Timing-Dependent Plasticity (STDP), found in real biological synapses. Understanding this link allows neuromorphic engineers to develop circuit architectures that use this type of memristor to artificially emulate parts of the visual cortex. We concentrate on the type of memristor referred to as voltage- or flux-driven, and base our discussion on behavioral macro-models for such devices. The implementations result in fully asynchronous architectures with neurons sending their action potentials not only forwards but also backwards. One critical aspect is to use neurons that generate spikes of specific shapes. We will see how, by changing the shapes of the neuron action potential spikes, we can tune and manipulate the STDP learning rules for both excitatory and inhibitory synapses. We will see how neurons and memristors can be interconnected to achieve large-scale spiking learning systems that follow a type of multiplicative STDP learning rule. We will briefly extend the architectures to use three-terminal transistors with similar memristive behavior. We will illustrate how a V1 visual cortex layer can be assembled and how it is capable of learning to extract orientations from visual data coming from a real artificial CMOS spiking retina observing real-life scenes. Finally, we will discuss limitations of currently available memristors. The results presented are based on behavioral simulations and do not take into account non-idealities of devices and interconnects. The aim here is to present, in a tutorial manner, an initial framework for the possible development of fully asynchronous STDP learning neuromorphic architectures exploiting two- or three-terminal memristive-type devices. (A supplemental material compressed zip file containing all files used for the simulations can be downloaded from http://www.frontiersin.org/neuromorphic_engineering/10.3389/fnins.2011.00026/abstract.)

Collaboration


Dive into Timothée Masquelier's collaborations.

Top Co-Authors

Bernabé Linares-Barranco (Spanish National Research Council)
Teresa Serrano-Gotarredona (Spanish National Research Council)
Amirreza Yousefzadeh (Spanish National Research Council)
Luca Berdondini (Istituto Italiano di Tecnologia)
Carlos Zamarreño-Ramos (Spanish National Research Council)
Luis A. Camuñas-Mesa (Spanish National Research Council)