Publication


Featured research published by José Antonio Pérez-Carrasco.


Frontiers in Neuroscience | 2011

On spike-timing-dependent-plasticity, memristive devices, and building a self-learning visual cortex.

Carlos Zamarreño-Ramos; Luis A. Camuñas-Mesa; José Antonio Pérez-Carrasco; Timothée Masquelier; Teresa Serrano-Gotarredona; Bernabé Linares-Barranco

In this paper we present a very exciting overlap between emergent nanotechnology and neuroscience, discovered by neuromorphic engineers. Specifically, we link one type of memristor nanotechnology device to the biological synaptic update rule known as spike-timing-dependent plasticity (STDP) found in real biological synapses. Understanding this link allows neuromorphic engineers to develop circuit architectures that use this type of memristor to artificially emulate parts of the visual cortex. We focus on the type of memristor referred to as voltage- or flux-driven, and base our discussion on a behavioral macro-model for such devices. The implementations result in fully asynchronous architectures with neurons sending their action potentials not only forward but also backward. One critical aspect is to use neurons that generate spikes of specific shapes. We show how, by changing the shapes of the neurons' action potentials, we can tune and manipulate the STDP learning rules for both excitatory and inhibitory synapses. We show how neurons and memristors can be interconnected to achieve large-scale spiking learning systems that follow a type of multiplicative STDP learning rule, and we briefly extend the architectures to three-terminal transistors with similar memristive behavior. We illustrate how a V1 visual cortex layer can be assembled and how it is capable of learning to extract orientations from visual data coming from a real artificial CMOS spiking retina observing real-life scenes. Finally, we discuss limitations of currently available memristors. The results presented are based on behavioral simulations and do not take into account non-idealities of devices and interconnects. The aim of this paper is to present, in a tutorial manner, an initial framework for the possible development of fully asynchronous STDP-learning neuromorphic architectures exploiting two- or three-terminal memristive devices. All files used for the simulations are made available through the journal web site.
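
To make the spike-shape/STDP link concrete, here is a minimal Python sketch (an illustration, not the authors' circuit model). It assumes a stylised action potential with a short positive peak followed by a slow negative tail, and a voltage-threshold memristor whose conductance changes only while the voltage across it exceeds the threshold; all amplitudes, time constants, and the threshold value are arbitrary illustrative choices. Scanning the relative pre/post spike timing then traces out an STDP-like update curve.

```python
# Minimal STDP-from-spike-shapes sketch (illustrative values, not device data).
import numpy as np

def spike(t, t_spike, amp_pos=1.0, amp_neg=0.4, t_pos=1.0, t_neg=10.0):
    """Stylised action potential: short positive peak, then a slow negative tail."""
    dt = t - t_spike
    v = np.zeros_like(t)
    v[(dt >= 0) & (dt < t_pos)] = amp_pos
    tail = (dt >= t_pos) & (dt < t_pos + t_neg)
    v[tail] = -amp_neg * (1.0 - (dt[tail] - t_pos) / t_neg)
    return v

def conductance_change_rate(v, v_th=1.1, k=0.01):
    """Threshold-type memristor: conductance moves only while |v| exceeds v_th."""
    over = np.clip(np.abs(v) - v_th, 0.0, None)
    return k * np.sign(v) * over

t = np.linspace(-20.0, 40.0, 6000)
step = t[1] - t[0]
for delta_t in (-8, -4, -1, 1, 4, 8):              # post-minus-pre spike timing
    v_mem = spike(t, delta_t) - spike(t, 0.0)      # voltage across the synapse
    dw = conductance_change_rate(v_mem).sum() * step
    print(f"dt = {delta_t:+3d}  ->  dw = {dw:+.5f}")
```

In this toy model a single spike never crosses the device threshold on its own, so only overlapping pre- and post-synaptic waveforms modify the weight; reshaping the action potential therefore reshapes the resulting learning rule, which is the tuning knob the paper exploits.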


IEEE Transactions on Neural Networks | 2008

On Real-Time AER 2-D Convolutions Hardware for Neuromorphic Spike-Based Cortical Processing

Rafael Serrano-Gotarredona; Teresa Serrano-Gotarredona; Antonio Acosta-Jimenez; Clara Serrano-Gotarredona; José Antonio Pérez-Carrasco; Bernabé Linares-Barranco; Alejandro Linares-Barranco; Gabriel Jiménez-Moreno; Antón Civit-Ballcels

In this paper, a chip that performs real-time image convolutions with programmable kernels of arbitrary shape is presented. The chip is a first experimental prototype of reduced size to validate the implemented circuits and system-level techniques. The convolution processing is based on the address-event-representation (AER) technique, which is a spike-based, biologically inspired image and video representation technique that favors communication bandwidth for pixels with more information. As a first test prototype, a pixel array of 16×16 has been implemented with a programmable kernel size of up to 16×16. The chip has been fabricated in a standard 0.35 μm complementary metal-oxide-semiconductor (CMOS) process. The technique also allows larger images to be processed by assembling 2D arrays of such chips. Pixel operation exploits low-power mixed analog-digital circuit techniques. Because of the low currents involved (down to nanoamperes or even picoamperes), a significant amount of pixel area is devoted to mismatch calibration. The rest of the chip uses digital circuit techniques, both synchronous and asynchronous. The fabricated chip has been thoroughly tested, both at the pixel level and at the system level. Specific computer interfaces have been developed for generating AER streams from conventional computers and feeding them as inputs to the convolution chip, and for grabbing AER streams coming out of the convolution chip and storing and analyzing them on computers. Extensive experimental results are provided. At the end of this paper, we provide discussions and results on scaling up the approach for larger pixel arrays and multilayer cortical AER systems.
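
As a rough illustration of the event-driven processing style described above, the following Python sketch mimics the chip's behavior at a purely functional level (it is not the mixed-signal circuitry): input events arrive as (x, y) addresses, each event projects a programmable kernel onto an array of integrate-and-fire pixel accumulators, and a pixel that crosses its threshold emits an output event. The 16x16 array size matches the prototype, but the kernel, threshold, and reset-by-subtraction scheme are arbitrary choices for the example.

```python
# Functional sketch of AER event-driven 2-D convolution (toy parameters).
import numpy as np

ARRAY = 16                      # pixel array size (the prototype chip is 16x16)
THRESH = 3.0                    # firing threshold of each pixel accumulator

kernel = np.array([[0, 1, 0],   # example 3x3 kernel (the chip allows up to 16x16)
                   [1, 2, 1],
                   [0, 1, 0]], dtype=float)
kh, kw = kernel.shape
state = np.zeros((ARRAY, ARRAY))            # accumulated "charge" per pixel

def process_event(x, y):
    """Project the kernel, centred on the incoming event address, onto the
    accumulators; pixels that cross threshold emit an output event."""
    out_events = []
    for dy in range(kh):
        for dx in range(kw):
            px, py = x + dx - kw // 2, y + dy - kh // 2
            if 0 <= px < ARRAY and 0 <= py < ARRAY:
                state[py, px] += kernel[dy, dx]
                if state[py, px] >= THRESH:
                    state[py, px] -= THRESH  # reset-by-subtraction on firing
                    out_events.append((px, py))
    return out_events

# Feed a short input event stream and collect the output spikes.
for ev in [(8, 8), (8, 8), (7, 8), (8, 7)]:
    print(ev, "->", process_event(*ev))
```

Because work is done only when events arrive, pixels carrying more activity naturally receive more of the processing and communication bandwidth, which is the property the AER representation is designed to exploit.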


International Symposium on Circuits and Systems | 2010

On neuromorphic spiking architectures for asynchronous STDP memristive systems

José Antonio Pérez-Carrasco; Carlos Zamarreño-Ramos; Teresa Serrano-Gotarredona; Bernabé Linares-Barranco

Neuromorphic circuit and system techniques have great potential for exploiting novel nanotechnology devices, which suffer from large parametric spread and high defect rates. In this paper we explore some potential ways of building neural network systems for sophisticated pattern recognition tasks using memristors. We focus on spiking signal coding because of its energy and information-coding efficiency, and concentrate on Convolutional Neural Networks because of their good scaling behavior, both in terms of number of synapses and temporal processing delay. We propose asynchronous architectures that exploit memristive synapses with specially designed neurons, allowing for arbitrary scalability as well as STDP learning. We present behavioral simulation results for small neural arrays using electrical circuit simulators, and system-level spike-processing results on human detection using a custom-made event-based simulator.


IEEE Transactions on Neural Networks | 2010

Fast Vision Through Frameless Event-Based Sensing and Convolutional Processing: Application to Texture Recognition

José Antonio Pérez-Carrasco; Begoña Acha; Carmen Serrano; Luis A. Camuñas-Mesa; Teresa Serrano-Gotarredona; Bernabé Linares-Barranco

Address-event representation (AER) is an emergent hardware technology with high potential for providing, in the near future, a solid technological substrate for emulating brain-like processing structures. When used for vision, AER sensors and processors are not restricted to capturing and processing still image frames, as in commercial frame-based video technology, but sense and process visual information in a pixel-level, event-based, frameless manner. As a result, vision processing is practically simultaneous with vision sensing, since there is no need to wait for full frames to be sensed. Also, only meaningful information is sensed, communicated, and processed. Of special interest for brain-like vision processing are some already reported AER convolution chips, which have revealed very high computational throughput as well as the possibility of assembling large convolutional neural networks in a modular fashion. It is expected that in the near future we may witness the appearance of large-scale convolutional neural networks with hundreds or thousands of individual modules. In the meantime, some research is needed to investigate how to assemble and configure such large-scale convolutional networks for specific applications. In this paper, we analyze AER spiking convolutional neural networks for texture recognition hardware applications. Based on the performance figures of already available individual AER convolution chips, we emulate large-scale networks using a custom-made event-based behavioral simulator. We have developed a new event-based processing architecture that emulates with AER hardware Manjunath's frame-based feature recognition software algorithm, and have analyzed its performance using our behavioral simulator. Recognition rate performance is not degraded. However, regarding speed, we show that recognition can be achieved before an equivalent frame is fully sensed and transmitted.
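
The claim that recognition can complete before an equivalent frame has even been sensed is easy to see with a back-of-the-envelope timing sketch. Every number below (frame time, sensor event rate, per-stage delay, events needed for a decision) is an illustrative assumption, not a measurement from the paper.

```python
# Illustrative latency arithmetic for frameless event-driven recognition.
FRAME_TIME_US = 30_000        # time a frame-based sensor needs per frame (assumed)
N_INPUT_EVENTS = 10_000       # events produced by the AER sensor in that window (assumed)
STAGE_DELAY_US = 1.0          # per-event delay of one convolution stage (assumed)
N_STAGES = 4                  # depth of the event-driven processing chain (assumed)
EVENTS_FOR_DECISION = 500     # events needed before the classifier fires (assumed)

inter_event_us = FRAME_TIME_US / N_INPUT_EVENTS
first_decision_us = EVENTS_FOR_DECISION * inter_event_us + N_STAGES * STAGE_DELAY_US
print(f"first recognition spike after ~{first_decision_us:.0f} us "
      f"(the frame is not fully sensed until {FRAME_TIME_US} us)")
```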


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012

Efficient Feedforward Categorization of Objects and Human Postures with Address-Event Image Sensors

Shoushun Chen; Polina Akselrod; Bo Zhao; José Antonio Pérez-Carrasco; Bernabé Linares-Barranco; Eugenio Culurciello

This paper proposes an algorithm for feedforward categorization of objects and, in particular, human postures in real-time video sequences from address-event temporal-difference image sensors. The system employs an innovative combination of event-based hardware and bio-inspired software architecture. An event-based temporal-difference image sensor is used to provide input video sequences, while a software module extracts size- and position-invariant line features inspired by models of the primate visual cortex. The detected line features are organized into vectorial segments. After feature extraction, a modified line-segment Hausdorff-distance classifier, combined with on-the-fly cluster-based size- and position-invariant categorization, performs the final classification. The system can achieve about 90 percent average success rate in the categorization of human postures, while using only a small number of training samples. Compared to state-of-the-art bio-inspired categorization methods, the proposed algorithm requires fewer hardware resources, reduces the computational complexity by at least five times, and is an ideal candidate for hardware implementation with event-based circuits.
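
For readers unfamiliar with segment-based Hausdorff matching, the Python sketch below conveys the idea in a heavily simplified form: postures are sets of 2-D line segments, a per-segment distance mixes midpoint distance with orientation mismatch, and the symmetric Hausdorff distance over the two segment sets picks the closest template. The distance definition, the weighting, and the toy "standing"/"lying" templates are hypothetical simplifications, not the modified line-segment Hausdorff distance actually used in the paper.

```python
# Simplified line-segment Hausdorff-distance classifier (toy distance and data).
import math

def seg_distance(a, b, angle_weight=5.0):
    (ax1, ay1, ax2, ay2), (bx1, by1, bx2, by2) = a, b
    d_mid = math.dist(((ax1 + ax2) / 2, (ay1 + ay2) / 2),
                      ((bx1 + bx2) / 2, (by1 + by2) / 2))
    ang_a = math.atan2(ay2 - ay1, ax2 - ax1)
    ang_b = math.atan2(by2 - by1, bx2 - bx1)
    d_ang = abs(math.sin(ang_a - ang_b))          # orientation mismatch in [0, 1]
    return d_mid + angle_weight * d_ang

def directed_hausdorff(A, B):
    return max(min(seg_distance(a, b) for b in B) for a in A)

def segment_hausdorff(A, B):
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

def classify(segments, templates):
    """Pick the template posture whose segment set is closest to the observation."""
    return min(templates, key=lambda label: segment_hausdorff(segments, templates[label]))

# Tiny toy example: two hypothetical posture templates and one observation.
templates = {
    "standing": [(5, 0, 5, 10), (3, 0, 3, 4)],
    "lying":    [(0, 2, 10, 2), (2, 4, 6, 4)],
}
observation = [(5.5, 0, 5.2, 9), (3.1, 0, 3.0, 4.5)]
print(classify(observation, templates))           # -> "standing"
```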


Frontiers in Neuroscience | 2012

Comparison between Frame-Constrained Fix-Pixel-Value and Frame-Free Spiking-Dynamic-Pixel ConvNets for Visual Processing

Clément Farabet; Rafael Paz; José Antonio Pérez-Carrasco; Carlos Zamarreño-Ramos; Alejandro Linares-Barranco; Yann LeCun; Eugenio Culurciello; Teresa Serrano-Gotarredona; Bernabé Linares-Barranco

Most scene segmentation and categorization architectures for the extraction of features in images and patches make exhaustive use of 2D convolution operations for template matching, template search, and denoising. Convolutional Neural Networks (ConvNets) are one example of such architectures that can implement general-purpose bio-inspired vision systems. In standard digital computers 2D convolutions are usually expensive in terms of resource consumption and impose severe limitations for efficient real-time applications. Nevertheless, neuro-cortex-inspired solutions, such as dedicated frame-based or frame-free spiking ConvNet convolution processors, are advancing real-time visual processing. These two approaches share the neural inspiration, but each solves the problem in a different way. Frame-based ConvNets process video information frame by frame in a very robust and fast way that requires using and sharing the available hardware resources (such as multipliers and adders). Hardware resources are fixed and time-multiplexed by fetching data in and out, so memory bandwidth and size are important for good performance. On the other hand, spike-based convolution processors are a frame-free alternative able to perform convolution of a spike-based source of visual information with very low latency, which makes them ideal for very high-speed applications. However, hardware resources need to be available all the time and cannot be time-multiplexed, so the hardware should be modular, reconfigurable, and expandable. Hardware implementations in both VLSI custom integrated circuits (digital and analog) and FPGAs have already been used to demonstrate the performance of these systems. In this paper we present a comparison study of these two neuro-inspired solutions, with a brief description of both systems and a discussion of their differences, pros, and cons.
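
The resource trade-off sketched in this abstract (time-multiplexed, bandwidth-bound frame processing versus dedicated, always-on event-driven modules) can be made concrete with a rough comparison. Every number in the sketch below is an illustrative assumption rather than a figure from the paper.

```python
# Back-of-the-envelope comparison of the two ConvNet hardware styles (assumed numbers).
FRAME_W, FRAME_H, FPS = 128, 128, 100     # assumed input resolution and frame rate
N_LAYERS, BYTES_PER_PIXEL = 6, 2          # assumed network depth and word size
EVENTS_PER_SECOND = 200_000               # assumed sensor event rate
KERNEL_OPS_PER_EVENT = 11 * 11            # assumed kernel size applied per event

# Frame-based: feature maps are fetched in and out of shared, time-multiplexed hardware.
frame_traffic = FRAME_W * FRAME_H * FPS * N_LAYERS * BYTES_PER_PIXEL
# Frame-free: each layer is a dedicated module that only works when an event arrives.
event_updates = EVENTS_PER_SECOND * KERNEL_OPS_PER_EVENT * N_LAYERS

print(f"frame-based : ~{frame_traffic / 1e6:.1f} MB/s of feature-map memory traffic")
print(f"event-driven: ~{event_updates / 1e6:.1f} M kernel updates/s spread over "
      f"{N_LAYERS} dedicated modules")
```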


International Conference on Pattern Recognition | 2010

Spike-Based Convolutional Network for Real-Time Processing

José Antonio Pérez-Carrasco; Carmen Serrano; Begoña Acha; Teresa Serrano-Gotarredona; Bernabé Linares-Barranco

In this paper we propose the first bio-inspired, non-frame-based, six-layer convolutional network (ConvNet) that can be implemented with already physically available spike-based electronic devices. The system was designed to recognize people in three different positions: standing, lying, or upside down. The inputs were spikes obtained with a motion retina chip. We provide simulation results showing recognition delays of 16 milliseconds from stimulus onset (time to first spike) with a recognition rate of 94%. The weight-sharing property of ConvNets and the use of the AER protocol allow a great reduction in the number of both trainable parameters and connections: only 748 trainable parameters and 123 connections in our AER system, out of 506,998 connections that would be required in a frame-based implementation.
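
A short sketch of why weight sharing keeps the parameter count so low: in a ConvNet every (input map, output map) pair shares one small kernel, whereas a fully connected alternative needs one weight per input-output unit pair. The layer sizes below are hypothetical; the abstract only reports the totals (748 trainable parameters, versus 506,998 connections for a frame-based implementation).

```python
# Parameter-count arithmetic for weight sharing (hypothetical layer sizes).
def conv_layer_params(n_in_maps, n_out_maps, k):
    """Shared kernels: one k x k kernel per (input map, output map) pair, plus biases."""
    return n_in_maps * n_out_maps * k * k + n_out_maps

def dense_connections(n_in, n_out):
    """Fully connected alternative: every input unit wired to every output unit."""
    return n_in * n_out

# Hypothetical first layer: 1 input map, 4 output maps, 5x5 kernels.
print("shared weights:", conv_layer_params(1, 4, 5))                      # 104
print("fully connected 32x32 -> 4x28x28:", dense_connections(32 * 32, 4 * 28 * 28))
```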


International Symposium on Circuits and Systems | 2010

On scalable spiking convnet hardware for cortex-like visual sensory processing systems

Luis A. Camuñas-Mesa; José Antonio Pérez-Carrasco; Carlos Zamarreño-Ramos; Teresa Serrano-Gotarredona; Bernabé Linares-Barranco

This paper summarizes how Convolutional Neural Networks (ConvNets) can be implemented in hardware using spiking neural network Address-Event-Representation (AER) technology for sophisticated pattern and object recognition tasks operating at millisecond-scale delays. Although such hardware would require hundreds of individual convolutional modules and is therefore not yet available, we discuss methods and technologies for implementing it in the near future. In the meantime, we provide precise behavioral simulations of large-scale spiking AER convolutional hardware and evaluate its performance, using performance figures of already available AER convolution chips fed with real sensory data obtained from physically available AER motion retina chips. We provide simulation results of systems trained for people recognition, showing recognition delays of a few milliseconds from stimulus onset. ConvNets show good scaling-up behavior and possibilities for being implemented efficiently with new nanoscale hybrid CMOS/non-CMOS technologies.


International Symposium on Circuits and Systems | 2008

High-speed character recognition system based on a complex hierarchical AER architecture

José Antonio Pérez-Carrasco; Teresa Serrano-Gotarredona; Carmen Serrano-Gotarredona; Begoña Acha; Bernabé Linares-Barranco

In this paper we briefly summarize the fundamental properties of spike processing applied to artificial vision systems. This sensing and processing technology is capable of very high-speed throughput, because it does not rely on sensing and processing sequences of frames, and because it allows for complex, hierarchically structured, cortical-like layers for sophisticated processing. The paper briefly describes cortex-like spiking vision processing principles and the AER (address-event representation) technique used in hardware spiking systems. Afterwards an example application is described, which is a simplification of Fukushima's Neocognitron. Realistic behavioral simulations based on existing AER hardware characteristics reveal that the simplified Neocognitron, although it processes 52 large-kernel convolutions, is capable of performing recognition in less than 10 μs.


Advanced Concepts for Intelligent Vision Systems | 2009

Advanced Vision Processing Systems: Spike-Based Simulation and Processing

José Antonio Pérez-Carrasco; Carmen Serrano-Gotarredona; Begoña Acha-Piñero; Teresa Serrano-Gotarredona; Bernabé Linares-Barranco

In this paper we briefly summarize the fundamental properties of spike-event processing applied to artificial vision systems. This sensing and processing technology is capable of very high-speed throughput, because it does not rely on sensing and processing sequences of frames, and because it allows for complex, hierarchically structured, neuro-cortical-like layers for sophisticated processing. The paper briefly describes cortex-like spike-event vision processing principles and the AER (Address Event Representation) technique used in hardware spiking systems. We also present an AER simulation tool that we have developed entirely in Visual C++ 6.0. We have validated it using real AER stimuli and comparing the outputs with real outputs obtained from AER-based devices. With this tool we can predict the eventual performance of AER-based systems before the technology becomes mature enough to allow such large systems.
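
To give a flavor of what such an event-based behavioral simulator involves, here is a minimal Python sketch. The original tool was written in Visual C++ 6.0; this is an independent illustration, not its code. It assumes AER events are (timestamp, x, y) tuples, that each processing module consumes events and may emit new ones after a fixed latency, and that a priority queue keeps global timestamp order; the ThresholdModule, its parameters, and the latencies are hypothetical.

```python
# Minimal event-driven simulation loop for a chain of AER processing modules.
import heapq

class ThresholdModule:
    """Toy AER module: emits one output event for every n input events at a pixel."""
    def __init__(self, n=2, latency_us=1.0):
        self.counts, self.n, self.latency_us = {}, n, latency_us

    def process(self, t_us, x, y):
        c = self.counts.get((x, y), 0) + 1
        self.counts[(x, y)] = c % self.n
        return [(t_us + self.latency_us, x, y)] if c == self.n else []

def simulate(input_events, modules):
    """Push sensor events through a chain of modules in global timestamp order."""
    queue = [(t, 0, x, y) for (t, x, y) in input_events]   # (time, stage, x, y)
    heapq.heapify(queue)
    outputs = []
    while queue:
        t, stage, x, y = heapq.heappop(queue)
        if stage == len(modules):                 # event left the last module
            outputs.append((t, x, y))
            continue
        for t2, x2, y2 in modules[stage].process(t, x, y):
            heapq.heappush(queue, (t2, stage + 1, x2, y2))
    return outputs

sensor_events = [(0.0, 3, 3), (5.0, 3, 3), (7.0, 3, 3), (9.0, 3, 3)]
print(simulate(sensor_events, [ThresholdModule(), ThresholdModule()]))
```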

Collaboration

Top co-authors of José Antonio Pérez-Carrasco:

Bernabé Linares-Barranco (Spanish National Research Council)
Teresa Serrano-Gotarredona (Spanish National Research Council)
Carlos Zamarreño-Ramos (Spanish National Research Council)
Luis A. Camuñas-Mesa (Spanish National Research Council)