Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Luis A. Camuñas-Mesa is active.

Publication


Featured research published by Luis A. Camuñas-Mesa.


Frontiers in Neuroscience | 2011

On spike-timing-dependent-plasticity, memristive devices, and building a self-learning visual cortex.

Carlos Zamarreño-Ramos; Luis A. Camuñas-Mesa; José Antonio Pérez-Carrasco; Timothée Masquelier; Teresa Serrano-Gotarredona; Bernabé Linares-Barranco

In this paper we present an exciting overlap between emergent nanotechnology and neuroscience, discovered by neuromorphic engineers. Specifically, we link one type of memristor nanotechnology device to the biological synaptic update rule known as spike-time-dependent plasticity (STDP) found in real biological synapses. Understanding this link allows neuromorphic engineers to develop circuit architectures that use this type of memristor to artificially emulate parts of the visual cortex. We focus on the type of memristors referred to as voltage- or flux-driven memristors, and center our discussion on a behavioral macro-model for such devices. The implementations result in fully asynchronous architectures with neurons sending their action potentials not only forward but also backward. One critical aspect is to use neurons that generate spikes of specific shapes. We will see how, by changing the shapes of the neuron action potential spikes, we can tune and manipulate the STDP learning rules for both excitatory and inhibitory synapses. We will see how neurons and memristors can be interconnected to achieve large-scale spiking learning systems that follow a type of multiplicative STDP learning rule. We briefly extend the architectures to use three-terminal transistors with similar memristive behavior. We illustrate how a V1 visual cortex layer can be assembled and how it is capable of learning to extract orientations from visual data coming from a real artificial CMOS spiking retina observing real-life scenes. Finally, we discuss limitations of currently available memristors. The results presented are based on behavioral simulations and do not take into account non-idealities of devices and interconnects. The aim of this paper is to present, in a tutorial manner, an initial framework for the possible development of fully asynchronous STDP learning neuromorphic architectures exploiting two- or three-terminal memristive devices. All files used for the simulations are made available through the journal web site.
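
A minimal Python sketch of the mechanism described above (waveform shape, threshold, and scaling constants are illustrative assumptions, not taken from the paper's simulation files): each neuron emits an action potential with a fixed shape, the voltage across a memristive synapse is the post-minus-pre waveform difference, and the device only changes state while that voltage exceeds its threshold, so the spike shape directly determines the resulting STDP curve.

    import numpy as np

    def spike(t):
        """Hypothetical action-potential waveform: a short positive peak
        followed by a slowly decaying negative tail (illustrative values)."""
        out = np.zeros_like(t)
        peak = (t >= 0) & (t < 1e-4)
        tail = t >= 1e-4
        out[peak] = 1.0                                    # 1 V for 0.1 ms
        out[tail] = -0.3 * np.exp(-(t[tail] - 1e-4) / 3e-4)
        return out

    def stdp_dw(dt, v_th=1.1, k=1e-3):
        """Net weight change for a pre/post spike pair separated by dt
        (post minus pre). The memristor only changes state while the
        voltage across it exceeds v_th, so the spike shape sets the curve."""
        t = np.linspace(-2e-3, 3e-3, 5001)
        v = spike(t - dt) - spike(t)                       # post minus pre
        drive = np.sign(v) * np.clip(np.abs(v) - v_th, 0.0, None)
        return k * np.sum(drive) * (t[1] - t[0])

    for dt in (-1e-3, -2e-4, 2e-4, 1e-3):
        print(f"dt = {dt * 1e3:+.1f} ms -> dw = {stdp_dw(dt):+.3e}")

Small positive dt (post just after pre) yields potentiation, small negative dt yields depression, and the effect decays to zero once the waveforms no longer overlap above threshold.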


IEEE Transactions on Neural Networks | 2009

CAVIAR: A 45k Neuron, 5M Synapse, 12G Connects/s AER Hardware Sensory–Processing–Learning–Actuating System for High-Speed Visual Object Recognition and Tracking

Rafael Serrano-Gotarredona; Matthias Oster; Patrick Lichtsteiner; Alejandro Linares-Barranco; Rafael Paz-Vicente; Francisco Gomez-Rodriguez; Luis A. Camuñas-Mesa; Raphael Berner; Manuel Rivas-Perez; Tobi Delbruck; Shih-Chii Liu; Rodney J. Douglas; Philipp Häfliger; Gabriel Jiménez-Moreno; Anton Civit Ballcels; Teresa Serrano-Gotarredona; Antonio Acosta-Jimenez; Bernabé Linares-Barranco

This paper describes CAVIAR, a massively parallel hardware implementation of a spike-based sensing-processing-learning-actuating system inspired by the physiology of the nervous system. CAVIAR uses the asynchronous address-event representation (AER) communication framework and was developed in the context of a European Union-funded project. It comprises four custom mixed-signal AER chips and five custom digital AER interface components, hosts 45 k neurons (spiking cells) and up to 5 M synapses, performs 12 G synaptic operations per second, and achieves millisecond object recognition and tracking latencies.
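
For readers unfamiliar with AER, the sketch below shows the core idea: a spike is communicated as the address of the neuron that fired, so a stream of address words fully describes the array's activity. The field widths are an assumption for illustration; CAVIAR's actual bus format is defined in the paper.

    # Pack/unpack an (x, y, polarity) spike into a single AER address word.
    # Field widths are illustrative; real AER buses define their own format.
    X_BITS, Y_BITS = 7, 7            # enough for a 128 x 128 pixel array

    def encode(x, y, pol):
        return (pol << (X_BITS + Y_BITS)) | (y << X_BITS) | x

    def decode(addr):
        x = addr & ((1 << X_BITS) - 1)
        y = (addr >> X_BITS) & ((1 << Y_BITS) - 1)
        pol = addr >> (X_BITS + Y_BITS)
        return x, y, pol

    event = (3, 100, 1)
    assert decode(encode(*event)) == event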


IEEE Journal of Solid-state Circuits | 2012

An Event-Driven Multi-Kernel Convolution Processor Module for Event-Driven Vision Sensors

Luis A. Camuñas-Mesa; Carlos Zamarreño-Ramos; Alejandro Linares-Barranco; Antonio Acosta-Jimenez; Teresa Serrano-Gotarredona; Bernabé Linares-Barranco

Event-driven vision sensing is a new way of sensing visual reality in a frame-free manner. That is, the vision sensor (camera) does not capture a sequence of still frames, as in conventional video and computer vision systems. In event-driven sensors each pixel autonomously and asynchronously decides when to send its address out, so the sensor output is a continuous stream of address events that represents reality dynamically and continuously, without being constrained to frames. In this paper we present an event-driven convolution module for computing 2D convolutions on such event streams. The convolution module has been designed so that many of them can be assembled to build modular and hierarchical convolutional neural networks for robust shape- and pose-invariant object recognition. The convolution module has multi-kernel capability: it selects the convolution kernel depending on the origin of each event. A proof-of-concept test prototype has been fabricated in a 0.35 μm CMOS process and extensive experimental results are provided. The convolution processor has also been combined with an event-driven Dynamic Vision Sensor (DVS) for high-speed recognition examples. The chip can discriminate propellers rotating at 2 k revolutions per second, detect symbols on a 52-card deck when browsing all cards in 410 ms, or detect and follow the center of a phosphor oscilloscope trace rotating at 5 kHz.
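
A behavioral sketch of event-driven convolution with multi-kernel selection, as described above (array size, kernels, and threshold are made-up toy values): each input event adds the kernel selected by the event's source around the event's coordinates in an array of integrators, and any integrator crossing threshold emits an output event and resets.

    import numpy as np

    class EventConvModule:
        """Toy event-driven convolution: one integrator per pixel."""
        def __init__(self, size, kernels, threshold):
            self.state = np.zeros((size, size))
            self.kernels = kernels          # source id -> 2-D kernel array
            self.threshold = threshold

        def on_event(self, x, y, source):
            k = self.kernels[source]        # multi-kernel: select by origin
            r = k.shape[0] // 2
            n = self.state.shape[0]
            # Add the kernel around (x, y), clipped at the array borders.
            x0, x1 = max(x - r, 0), min(x + r + 1, n)
            y0, y1 = max(y - r, 0), min(y + r + 1, n)
            self.state[x0:x1, y0:y1] += k[x0 - (x - r):x1 - (x - r),
                                          y0 - (y - r):y1 - (y - r)]
            # Integrators crossing threshold emit an output event and reset.
            fired = np.argwhere(self.state >= self.threshold)
            for i, j in fired:
                self.state[i, j] = 0.0
            return [(int(i), int(j)) for i, j in fired]

    mod = EventConvModule(32, {"dvs": np.ones((3, 3))}, threshold=1.5)
    print(mod.on_event(10, 10, "dvs"))   # [] - nothing crosses threshold yet
    print(mod.on_event(10, 11, "dvs"))   # overlapping pixels reach 2 and fire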


IEEE Transactions on Circuits and Systems | 2011

A 32 × 32 Pixel Convolution Processor Chip for Address Event Vision Sensors With 155 ns Event Latency and 20 Meps Throughput

Luis A. Camuñas-Mesa; Antonio Acosta-Jimenez; Carlos Zamarreño-Ramos; Teresa Serrano-Gotarredona; Bernabé Linares-Barranco

This paper describes a convolution chip for event-driven vision sensing and processing systems. As opposed to conventional frame-constrained vision systems, in event-driven vision there is no need for frames: information is represented by a continuous flow of self-timed asynchronous events. Such events can be processed on the fly by event-based convolution chips, providing at their output a continuous event flow representing the 2-D filtered version of the input flow. In this paper we present a 32 × 32 pixel 2-D convolution event processor whose kernel can have arbitrary shape and size up to 32 × 32. Arrays of such chips can be assembled to process larger pixel arrays. Event latency between input and output event flows can be as low as 155 ns. Input event throughput can reach 20 Meps (mega events per second), and the output peak event rate can reach 45 Meps. The chip can be configured to discriminate between two simulated propeller-like shapes rotating simultaneously in the field of view at speeds as high as 9400 rps (revolutions per second). Achieving this with a frame-constrained system would require a sensing and processing capability of about 100 K frames per second. The prototype chip has been built in 0.35 μm CMOS technology, occupies 4.3 × 5.4 mm², and consumes a peak power of 200 mW at maximum kernel size and maximum input event rate.
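
The ~100 K frames-per-second equivalence quoted above follows from simple sampling arithmetic; a rough check, where the ~10 samples per revolution needed to resolve the rotating shape is our assumption:

    rps = 9400                     # propeller speed, revolutions per second
    samples_per_rev = 10           # assumed samples needed per revolution
    print(rps * samples_per_rev)   # 94000 -> on the order of 100 K frames/s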


IEEE Transactions on Neural Networks | 2010

Fast Vision Through Frameless Event-Based Sensing and Convolutional Processing: Application to Texture Recognition

José Antonio Pérez-Carrasco; Begoña Acha; Carmen Serrano; Luis A. Camuñas-Mesa; Teresa Serrano-Gotarredona; Bernabé Linares-Barranco

Address-event representation (AER) is an emergent hardware technology with high potential to provide, in the near future, a solid technological substrate for emulating brain-like processing structures. When used for vision, AER sensors and processors are not restricted to capturing and processing still image frames, as in commercial frame-based video technology, but sense and process visual information in a pixel-level, event-based, frameless manner. As a result, vision processing is practically simultaneous with vision sensing, since there is no need to wait for full frames to be sensed. Also, only meaningful information is sensed, communicated, and processed. Of special interest for brain-like vision processing are some already reported AER convolution chips, which have revealed a very high computational throughput as well as the possibility of assembling large convolutional neural networks in a modular fashion. It is expected that in the near future we may witness the appearance of large-scale convolutional neural networks with hundreds or thousands of individual modules. In the meantime, some research is needed to investigate how to assemble and configure such large-scale convolutional networks for specific applications. In this paper, we analyze AER spiking convolutional neural networks for texture recognition hardware applications. Based on the performance figures of already available individual AER convolution chips, we emulate large-scale networks using a custom-made event-based behavioral simulator. We have developed a new event-based processing architecture that emulates with AER hardware Manjunath's frame-based feature recognition software algorithm, and have analyzed its performance using our behavioral simulator. Recognition rate performance is not degraded. However, regarding speed, we show that recognition can be achieved before an equivalent frame is fully sensed and transmitted.
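
A minimal sketch of the kind of event-based behavioral simulation used here (a toy version, not the authors' custom simulator): pending events sit in a global priority queue ordered by timestamp, and each module consumes input events and schedules its outputs after its characteristic latency. The module names and the single-stage topology are assumptions for illustration.

    import heapq

    def simulate(initial_events, modules):
        """Minimal event-driven simulation loop.
        initial_events: (time_s, module_id, payload) tuples.
        modules: module_id -> callable(payload) returning a list of
                 (latency_s, destination_id, payload) tuples."""
        queue = list(initial_events)
        heapq.heapify(queue)
        while queue:
            t, dst, payload = heapq.heappop(queue)
            if dst not in modules:                   # event left the network
                print(f"t = {t * 1e6:8.3f} us  out: {payload}")
                continue
            for latency, nxt, out in modules[dst](payload):
                heapq.heappush(queue, (t + latency, nxt, out))

    # Toy single-stage network with a 155 ns per-event latency.
    modules = {"conv": lambda ev: [(155e-9, "sink", ("filtered", ev))]}
    simulate([(0.0, "conv", (3, 5)), (1e-6, "conv", (7, 2))], modules)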


Neural Computation | 2013

A Detailed and Fast Model of Extracellular Recordings

Luis A. Camuñas-Mesa; Rodrigo Quian Quiroga

We present a novel method to generate realistic simulations of extracellular recordings. The simulations were obtained by superimposing the activity of neurons placed randomly in a cube of brain tissue. Detailed models of individual neurons were used to reproduce the extracellular action potentials of close-by neurons. To reduce the computational load, the contributions of neurons farther away were simulated using previously recorded spikes with their amplitudes normalized by the distance to the recording electrode. To make the simulations more realistic, we also considered a model of a finite-size electrode, averaging the potential along the electrode surface and modeling the electrode-tissue interface with a capacitive filter. This model allowed us to study the effect of the electrode diameter on the quality of the recordings and how it affects the number of neurons identified after spike sorting. Given that not all neurons are active at any given time, we also generated simulations with different ratios of active neurons and estimated the ratio that matches the signal-to-noise values observed in real data. Finally, we used the model to simulate tetrode recordings.
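
A compressed sketch of the simulation recipe for the distant-neuron contribution (the spike template, neuron counts, firing rates, and cutoff frequency are made-up illustrative values; the detailed compartmental models used for close-by neurons are omitted): template spikes are scaled by one over distance, summed, and passed through a first-order capacitive (high-pass) filter standing in for the electrode-tissue interface.

    import numpy as np

    rng = np.random.default_rng(0)
    fs = 24000                                    # sampling rate, Hz
    trace = np.zeros(fs)                          # 1 s of signal
    # Generic spike template (difference of exponentials, illustrative).
    template = np.exp(-np.arange(48) / 6.0) - np.exp(-np.arange(48) / 2.0)

    # 200 distant neurons placed randomly in a 1 mm cube around the
    # electrode; each contributes spikes scaled by 1 / distance.
    positions = rng.uniform(-0.5, 0.5, size=(200, 3))
    for d in np.linalg.norm(positions, axis=1):
        for t0 in rng.integers(0, len(trace) - len(template), size=30):
            trace[t0:t0 + len(template)] += template / d

    # Electrode-tissue interface modeled as a first-order capacitive
    # (high-pass) filter with an assumed 300 Hz cutoff.
    fc = 300.0
    alpha = 1.0 / (1.0 + 2.0 * np.pi * fc / fs)
    filtered = np.zeros_like(trace)
    for i in range(1, len(trace)):
        filtered[i] = alpha * (filtered[i - 1] + trace[i] - trace[i - 1])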


Frontiers in Neuroscience | 2014

On the use of orientation filters for 3D reconstruction in event-driven stereo vision

Luis A. Camuñas-Mesa; Teresa Serrano-Gotarredona; Sio Hoi Ieng; Ryad Benosman; Bernabé Linares-Barranco

The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, thereby increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction.
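
A sketch of the orientation-consistency idea (kernel parameters, patch size, and thresholds are toy assumptions; real implementations filter the event streams themselves): each event is assigned the orientation of the strongest Gabor response around it, and a left/right pair is only accepted as a match if, besides the usual epipolar and timing constraints, the orientations agree.

    import numpy as np

    def gabor(theta, size=9, lam=4.0, sigma=2.0):
        """Real Gabor kernel at orientation theta (illustrative parameters)."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        return (np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
                * np.cos(2 * np.pi * xr / lam))

    BANK = [gabor(t) for t in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]

    def orientation(patch):
        """Index of the strongest-responding orientation for a 9x9 patch
        of recent event counts around an event."""
        return int(np.argmax([abs(np.sum(k * patch)) for k in BANK]))

    def compatible(ev_left, ev_right, patch_left, patch_right, dt_max=1e-3):
        """Accept a candidate stereo match only if epipolar and timing
        constraints hold AND the edge orientations agree."""
        return (ev_left["y"] == ev_right["y"]          # epipolar (rectified)
                and abs(ev_left["t"] - ev_right["t"]) < dt_max
                and orientation(patch_left) == orientation(patch_right))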


international symposium on circuits and systems | 2008

Fully digital AER convolution chip for vision processing

Luis A. Camuñas-Mesa; Antonio Acosta-Jimenez; Teresa Serrano-Gotarredona; Bernabé Linares-Barranco

We present a neuromorphic fully digital convolution microchip for address event representation (AER) spike-based processing systems. This microchip computes 2-D convolutions with a programmable kernel in real time. It operates on a pixel array of size 32 × 32, and the kernel is programmable and can be of arbitrary shape and size up to 32 × 32 pixels. The chip receives and generates data in AER format, which is asynchronous and digital. The paper describes the architecture of the chip, the test setup, and experimental results obtained from a fabricated prototype.
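
AER links like this chip's are asynchronous: sender and receiver synchronize each address word with a request/acknowledge handshake. The sketch below writes out the classic four-phase protocol as an ordered log; it is a software illustration of the signal sequence, not the chip's actual asynchronous logic.

    # The four-phase REQ/ACK handshake that moves one address across an
    # asynchronous AER link, written out as an ordered event log.
    def send_event(address):
        steps = [
            f"sender drives address 0x{address:03X} on the bus",
            "1. sender asserts REQ",
            "2. receiver latches the address and asserts ACK",
            "3. sender releases REQ (and the address lines)",
            "4. receiver releases ACK - link ready for the next event",
        ]
        for step in steps:
            print(step)

    send_event(0x2A7)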


international symposium on circuits and systems | 2010

On scalable spiking ConvNet hardware for cortex-like visual sensory processing systems

Luis A. Camuñas-Mesa; José Antonio Pérez-Carrasco; Carlos Zamarreño-Ramos; Teresa Serrano-Gotarredona; Bernabé Linares-Barranco

This paper summarizes how convolutional neural networks (ConvNets) can be implemented in hardware using spiking-neural-network address-event representation (AER) technology for sophisticated pattern and object recognition tasks operating with millisecond delays. Although such hardware would require hundreds of individual convolutional modules and is therefore not yet available, we discuss methods and technologies for implementing it in the near future. In the meantime, we provide precise behavioral simulations of large-scale spiking AER convolutional hardware and evaluate its performance, using performance figures of already available AER convolution chips fed with real sensory data obtained from physically available AER motion retina chips. We provide simulation results of systems trained for people recognition, showing recognition delays of a few milliseconds from stimulus onset. ConvNets show good scaling behavior and possibilities for being implemented efficiently with new nanoscale hybrid CMOS/non-CMOS technologies.
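
The millisecond recognition delays reported here stem from the fact that a spiking classifier can commit as soon as enough output events accumulate, instead of waiting for a full frame. A toy sketch of such an any-time decision rule (the margin value, labels, and event stream are made-up assumptions):

    from collections import Counter

    def decide(output_events, margin=5):
        """output_events: time-ordered (time_s, class_label) spikes from the
        network's output layer. Returns (label, time) as soon as one class
        leads all others by `margin` events - no need to wait for a frame."""
        counts = Counter()
        for t, label in output_events:
            counts[label] += 1
            best = counts.most_common(2)
            lead = best[0][1] - (best[1][1] if len(best) > 1 else 0)
            if lead >= margin:
                return best[0][0], t
        return None, None

    # Toy output stream: 'person' spikes arrive twice as often as noise.
    events = [(i * 1e-4, "person" if i % 3 else "background")
              for i in range(40)]
    print(decide(events))   # ('person', ~0.0014) -> decision ~1.4 ms in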


biomedical circuits and systems conference | 2014

Event-driven sensing and processing for high-speed robotic vision

Luis A. Camuñas-Mesa; Teresa Serrano-Gotarredona; Bernabé Linares-Barranco

We present an overview of a new vision paradigm in which sensors and processors use visual information that is not represented by sequences of frames. Event-driven vision is inherently frame-free, as happens in biological systems. We use an event-driven sensor chip (called Dynamic Vision Sensor, or DVS) together with event-driven convolution module arrays implemented on high-end FPGAs. Experimental results demonstrate the application of this paradigm to implement Gabor filters and 3D stereo reconstruction systems. This architecture can be applied to real systems that need efficient and high-speed visual perception, such as automatic vehicle driving, robotic applications in non-structured environments, or intelligent surveillance in security systems.

Collaboration


Dive into Luis A. Camuñas-Mesa's collaborations.

Top Co-Authors

Bernabé Linares-Barranco (Spanish National Research Council)
Teresa Serrano-Gotarredona (Spanish National Research Council)
Antonio Acosta-Jimenez (Spanish National Research Council)
Carlos Zamarreño-Ramos (Spanish National Research Council)
Rafael Serrano-Gotarredona (Spanish National Research Council)