
Publication


Featured research published by Antonio Acosta-Jimenez.


IEEE Transactions on Neural Networks | 2009

CAVIAR: A 45k Neuron, 5M Synapse, 12G Connects/s AER Hardware Sensory–Processing–Learning–Actuating System for High-Speed Visual Object Recognition and Tracking

Rafael Serrano-Gotarredona; Matthias Oster; Patrick Lichtsteiner; Alejandro Linares-Barranco; Rafael Paz-Vicente; Francisco Gomez-Rodriguez; Luis A. Camuñas-Mesa; Raphael Berner; Manuel Rivas-Perez; Tobi Delbruck; Shih-Chii Liu; Rodney J. Douglas; Philipp Häfliger; Gabriel Jiménez-Moreno; Anton Civit Balcells; Teresa Serrano-Gotarredona; Antonio Acosta-Jimenez; Bernabé Linares-Barranco

This paper describes CAVIAR, a massively parallel hardware implementation of a spike-based sensing-processing-learning-actuating system inspired by the physiology of the nervous system. CAVIAR uses the asynchronous address-event representation (AER) communication framework and was developed in the context of a European Union funded project. It comprises four custom mixed-signal AER chips and five custom digital AER interface components, implements 45k neurons (spiking cells) and up to 5M synapses, performs 12G synaptic operations per second, and achieves millisecond object recognition and tracking latencies.
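For readers unfamiliar with AER, the sketch below illustrates the idea in software: a spike travels between chips simply as the address of the neuron that fired, and an arbiter merges asynchronous streams onto a shared bus. The field names and the sorted-merge arbiter are illustrative assumptions, not details of the CAVIAR hardware.

```python
from dataclasses import dataclass
from typing import Iterable, List

# Illustrative AER model: a spike is just the address of the neuron that
# fired, plus a timestamp. Field names are hypothetical, not from CAVIAR.
@dataclass
class AddressEvent:
    x: int     # column address of the firing neuron/pixel
    y: int     # row address of the firing neuron/pixel
    t_us: int  # timestamp in microseconds

def merge_streams(streams: Iterable[List[AddressEvent]]) -> List[AddressEvent]:
    """Merge asynchronous event streams into one time-ordered stream,
    roughly what an AER arbiter/router does between chips."""
    return sorted((e for s in streams for e in s), key=lambda e: e.t_us)

# Example: two sensors sharing one AER bus.
bus = merge_streams([
    [AddressEvent(3, 7, 10), AddressEvent(3, 8, 42)],
    [AddressEvent(0, 1, 25)],
])
```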


IEEE Transactions on Circuits and Systems | 2006

A Neuromorphic Cortical-Layer Microchip for Spike-Based Event Processing Vision Systems

Rafael Serrano-Gotarredona; Teresa Serrano-Gotarredona; Antonio Acosta-Jimenez; Bernabé Linares-Barranco

We present a neuromorphic cortical-layer processing microchip for address-event representation (AER) spike-based processing systems. The microchip computes 2-D convolutions of video information represented in AER format in real time. AER, as opposed to conventional frame-based video representation, describes visual information as a sequence of events or spikes in a way similar to biological brains. This format allows for fast information identification and processing, without waiting for complete image frames. The microchip presented in this paper computes convolutions of programmable kernels over the AER visual input flow. It not only computes convolutions but also allows for a programmable forgetting rate, which in turn enables bio-inspired coincidence-detection processing. Kernels are programmable and can be of arbitrary shape and of arbitrary size up to 32 × 32 pixels. The convolution processor operates on a pixel array of size 32 × 32 but can process an input space of up to 128 × 128 pixels. Larger pixel arrays can be processed directly by tiling arrays of chips. The chip receives and generates data in AER format, which is asynchronous and digital; its internal operation, however, is based on analog low-current circuit techniques. The paper describes the architecture of the chip and the circuits used for the pixels, including calibration techniques to overcome mismatch. Extensive experimental results are provided, covering pixel operation and calibration, convolution processing with and without forgetting, and high-speed recognition experiments such as discriminating propellers of different shape rotating at up to 5000 revolutions per second.
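As a rough illustration of the event-driven convolution with a forgetting rate that this abstract describes, the following sketch models it in software. The class name, the linear leak, and the subtract-on-fire reset are simplifying assumptions, not the chip's analog circuit behavior.

```python
import numpy as np

class EventConvolver:
    """Hypothetical software model of event-driven 2-D convolution with a
    forgetting (leak) rate; not the chip's actual implementation."""

    def __init__(self, kernel, size=32, threshold=1.0, leak_per_us=0.001):
        self.acc = np.zeros((size, size))              # per-pixel accumulators
        self.kernel = np.asarray(kernel, dtype=float)  # programmable kernel
        self.threshold = threshold
        self.leak_per_us = leak_per_us                 # the "forgetting rate"
        self.last_t = 0.0                              # time of previous event

    def on_event(self, x, y, t_us):
        # Forgetting: leak all accumulators toward zero for the time elapsed
        # since the last event; this is what enables coincidence detection.
        decay = self.leak_per_us * (t_us - self.last_t)
        self.acc = np.sign(self.acc) * np.maximum(np.abs(self.acc) - decay, 0.0)
        self.last_t = t_us

        # Add the kernel, centered on the incoming event's address; any pixel
        # whose accumulator crosses the threshold emits an output AER event.
        kh, kw = self.kernel.shape
        out = []
        for dy in range(kh):
            for dx in range(kw):
                px, py = x + dx - kw // 2, y + dy - kh // 2
                if 0 <= px < self.acc.shape[1] and 0 <= py < self.acc.shape[0]:
                    self.acc[py, px] += self.kernel[dy, dx]
                    if self.acc[py, px] >= self.threshold:
                        self.acc[py, px] -= self.threshold  # reset on firing
                        out.append((px, py, t_us))
        return out
```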


IEEE Transactions on Neural Networks | 2008

On Real-Time AER 2-D Convolutions Hardware for Neuromorphic Spike-Based Cortical Processing

Rafael Serrano-Gotarredona; Teresa Serrano-Gotarredona; Antonio Acosta-Jimenez; Clara Serrano-Gotarredona; José Antonio Pérez-Carrasco; Bernabé Linares-Barranco; Alejandro Linares-Barranco; Gabriel Jiménez-Moreno; Antón Civit-Balcells

In this paper, a chip that performs real-time image convolutions with programmable kernels of arbitrary shape is presented. The chip is a first experimental prototype of reduced size, intended to validate the implemented circuits and system-level techniques. The convolution processing is based on the address-event representation (AER) technique, a spike-based, biologically inspired image and video representation that favors communication bandwidth for pixels carrying more information. As a first test prototype, a 16 × 16 pixel array has been implemented, with a programmable kernel size of up to 16 × 16. The chip has been fabricated in a standard 0.35 μm complementary metal-oxide-semiconductor (CMOS) process. The technique also allows larger images to be processed by assembling 2-D arrays of such chips. Pixel operation exploits low-power mixed analog-digital circuit techniques. Because of the low currents involved (down to nanoamperes or even picoamperes), a significant amount of pixel area is devoted to mismatch calibration. The rest of the chip uses digital circuit techniques, both synchronous and asynchronous. The fabricated chip has been thoroughly tested, both at the pixel level and at the system level. Specific computer interfaces have been developed for generating AER streams from conventional computers and feeding them as inputs to the convolution chip, and for grabbing AER streams coming out of the convolution chip and storing and analyzing them on computers. Extensive experimental results are provided. At the end of this paper, we provide discussions and results on scaling up the approach to larger pixel arrays and multilayer cortical AER systems.
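The computer interfaces mentioned at the end of this abstract must synthesize AER streams from ordinary frames. A minimal rate-coded conversion could look like the sketch below; the intensity-proportional sampling is an assumption for illustration, not the authors' actual interface algorithm.

```python
import numpy as np

def frame_to_aer(frame, window_us=10_000, n_events=5_000, rng=None):
    """Rate-coded conversion of a grayscale frame into (x, y, t_us) address
    events: brighter pixels fire proportionally more events."""
    rng = rng or np.random.default_rng()
    p = frame.astype(float).ravel()
    p /= p.sum()                        # per-pixel firing probability
    idx = rng.choice(p.size, size=n_events, p=p)
    ys, xs = np.unravel_index(idx, frame.shape)
    ts = np.sort(rng.integers(0, window_us, size=n_events))
    return list(zip(xs.tolist(), ys.tolist(), ts.tolist()))

# Example: a toy 4 x 4 frame whose single bright pixel dominates the stream.
frame = np.ones((4, 4))
frame[1, 2] = 255.0
events = frame_to_aer(frame, window_us=1_000, n_events=100)
```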


IEEE Journal of Solid-state Circuits | 2012

An Event-Driven Multi-Kernel Convolution Processor Module for Event-Driven Vision Sensors

Luis A. Camuñas-Mesa; Carlos Zamarreño-Ramos; Alejandro Linares-Barranco; Antonio Acosta-Jimenez; Teresa Serrano-Gotarredona; Bernabé Linares-Barranco

Event-driven vision sensing is a new way of sensing visual reality in a frame-free manner. That is, the vision sensor (camera) does not capture a sequence of still frames, as conventional video and computer vision systems do. In event-driven sensors, each pixel autonomously and asynchronously decides when to send its address out, so the sensor output is a continuous stream of address events that represents reality dynamically and continuously, without being constrained to frames. In this paper we present an event-driven convolution module for computing 2-D convolutions on such event streams. The convolution module has been designed so that many modules can be assembled into modular and hierarchical convolutional neural networks for robust shape- and pose-invariant object recognition. The module has multi-kernel capability: it selects the convolution kernel depending on the origin of each event. A proof-of-concept test prototype has been fabricated in a 0.35 μm CMOS process, and extensive experimental results are provided. The convolution processor has also been combined with an event-driven dynamic vision sensor (DVS) for high-speed recognition examples. The chip can discriminate propellers rotating at 2000 revolutions per second, detect symbols on a 52-card deck while browsing all cards in 410 ms, or detect and follow the center of a phosphor oscilloscope trace rotating at 5 kHz.
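The multi-kernel capability, selecting a kernel according to each event's source, can be pictured as a simple dispatch table. The sketch below builds on the hypothetical EventConvolver from the earlier sketch; the source ids and dictionary routing are assumptions, not the chip's configuration registers.

```python
class MultiKernelConvolver:
    """Sketch of multi-kernel dispatch: the kernel applied to an event is
    chosen by the event's origin."""

    def __init__(self, convolvers_by_source: dict):
        # e.g. {0: EventConvolver(kernel=k_edges), 1: EventConvolver(kernel=k_blobs)}
        self.convolvers = convolvers_by_source

    def on_event(self, source_id: int, x: int, y: int, t_us: float):
        # Select the kernel according to where the event came from, then
        # process the event with that kernel's convolver state.
        return self.convolvers[source_id].on_event(x, y, t_us)
```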


IEEE Transactions on Circuits and Systems | 2011

A 32 × 32 Pixel Convolution Processor Chip for Address Event Vision Sensors With 155 ns Event Latency and 20 Meps Throughput

Luis A. Camuñas-Mesa; Antonio Acosta-Jimenez; Carlos Zamarreño-Ramos; Teresa Serrano-Gotarredona; Bernabé Linares-Barranco

This paper describes a convolution chip for event-driven vision sensing and processing systems. As opposed to conventional frame-constrained vision systems, event-driven vision needs no frames: in frame-free event-based vision, information is represented by a continuous flow of self-timed asynchronous events. Such events can be processed on the fly by event-based convolution chips, which provide at their output a continuous event flow representing the 2-D filtered version of the input flow. In this paper we present a 32 × 32 pixel 2-D convolution event processor whose kernel can have arbitrary shape and size up to 32 × 32. Arrays of such chips can be assembled to process larger pixel arrays. Event latency between input and output event flows can be as low as 155 ns. Input event throughput can reach 20 Meps (mega events per second), and the output peak event rate can reach 45 Meps. The chip can be configured to discriminate between two simulated propeller-like shapes rotating simultaneously in the field of view at speeds as high as 9400 rps (revolutions per second). Achieving this with a frame-constrained system would require a sensing and processing capability of about 100 k frames per second. The prototype chip has been built in 0.35 μm CMOS technology, occupies 4.3 × 5.4 mm², and consumes a peak power of 200 mW at maximum kernel size and maximum input event rate.
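A quick back-of-the-envelope check of the frame-rate equivalence quoted above; the ten-samples-per-revolution figure is our assumption for illustration.

```python
# Resolving a propeller spinning at 9400 rps with a frame-based system:
revolutions_per_second = 9400
frames_per_revolution = 10   # assumed samples needed to track blade position
frames_per_second = revolutions_per_second * frames_per_revolution
print(frames_per_second)     # 94000, i.e. roughly 100 k frames per second
```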


IEEE International Symposium on Circuits and Systems | 2008


Luis A. Camuñas-Mesa; Antonio Acosta-Jimenez; Teresa Serrano-Gotarredona; Bernabé Linares-Barranco

We present a neuromorphic fully digital convolution microchip for address-event representation (AER) spike-based processing systems. This microchip computes 2-D convolutions with a programmable kernel in real time. It operates on a pixel array of size 32 × 32, and the kernel is programmable and can be of arbitrary shape and size up to 32 × 32 pixels. The chip receives and generates data in AER format, which is asynchronous and digital. The paper describes the architecture of the chip, the test setup, and experimental results obtained from a fabricated prototype.


IEEE International Symposium on Circuits and Systems | 2006


Rafael Serrano-Gotarredona; Bernabé Linares-Barranco; Teresa Serrano-Gotarredona; Antonio Acosta-Jimenez; Alejandro Linares-Barranco; Rafael Paz-Vicente; Francisco Gomez-Rodriguez

A high-speed sample image-processing application using AER-based components is presented. The objective of the setup is to distinguish between two propellers of different shape rotating at high speed (around 1000 revolutions per second), demonstrating the capabilities of event-based systems in high-speed applications. Event-based schemes allow the most relevant information to propagate faster through the system layers, so image processing is sped up: a rough result may already be available when only a small part of the input has arrived. This setup is much faster than conventional frame-based image-processing systems, which would need to process more than 10 kframes/s to perform the same task, whereas only a few events are required with the event-based technique.


IEEE International Symposium on Circuits and Systems | 2007

Fully digital AER convolution chip for vision processing

Rafael Serrano-Gotarredona; Teresa Serrano-Gotarredona; Antonio Acosta-Jimenez; Alejandro Linares-Barranco; Gabriel Jiménez-Moreno; A. Civit-Balcells; Bernabé Linares-Barranco

In this paper we briefly summarize the fundamental properties of spike-event processing applied to artificial vision systems. This sensing and processing technology is capable of very high-speed throughput, because it does not rely on sensing and processing sequences of frames, and because it allows for complex, hierarchically structured, cortical-like layers for sophisticated processing. The paper includes a few examples that have demonstrated the potential of this technology for high-speed vision processing, such as a multilayer event-processing network of five sequential cortical-like layers, and a recognition system capable of discriminating propellers of different shape rotating at 5000 revolutions per second (300,000 revolutions per minute).


IEEE International Conference on Biomedical Robotics and Biomechatronics | 2006

High-speed image processing with AER-based components

Rafael Serrano-Gotarredona; Teresa Serrano-Gotarredona; Antonio Acosta-Jimenez; Bernabé Linares-Barranco; Luis A. Camuñas-Mesa

AER (address-event representation) is an emergent bio-inspired protocol intended to interconnect chips containing many processing units, called neurons or pixels. It exploits the advantages of communicating the activation state of a neuron as pulses, as done in the human brain. The information is sent out sorted, beginning with the most relevant. This feature, together with the parallel processing of the information, allows very fast image processing. In this paper, we explain how AER is suitable for real-time image processing and, as an example, we present results from some AER-based convolution chips that are able to perform convolutions in real time.


Proceedings of SPIE | 2005

Spike Events Processing for Vision Systems

Luis A. Camuñas-Mesa; Antonio Acosta-Jimenez; Teresa Serrano-Gotarredona; Bernabé Linares-Barranco

Address-event representation (AER) is an emergent neuromorphic interchip communication protocol that allows real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timing), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timing) are sampled at low frequencies. Also, neurons generate events according to their information levels: neurons with more information (activity, derivative of activity, contrast, motion, edges, ...) generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. AER technology has been used and reported for the implementation of various types of image sensors or retinae: luminance with local AGC, contrast retinae, motion retinae, and so on. There has also been a proposal for realizing programmable-kernel image convolution chips. Such convolution chips would contain an array of pixels that perform weighted addition of events; once a pixel has accumulated sufficient event contributions to reach a fixed threshold, it fires an event, which is then routed out of the chip for further processing. Such convolution chips have been proposed to be implemented using pulsed current-mode mixed analog and digital circuit techniques. In this paper we present a fully digital pixel implementation to perform the weighted additions and fire the events. This way, for a given technology, there is a fully digital reference implementation against which to compare the mixed-signal implementations. We have designed, implemented, and tested a fully digital AER convolution pixel. This pixel will be used to implement a full AER convolution chip for programmable-kernel image convolution processing.
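The weighted-addition-and-fire pixel described here reduces, in software terms, to a signed counter with a fixed threshold. The sketch below is a minimal model under that reading; the register behavior and the subtract-on-fire reset are assumptions.

```python
class DigitalPixel:
    """Minimal model of a fully digital AER convolution pixel: a counter
    accumulates kernel weights and fires when it reaches a threshold."""

    def __init__(self, threshold: int):
        self.counter = 0
        self.threshold = threshold

    def add_weight(self, w: int) -> bool:
        """Accumulate one weighted event contribution; return True if this
        contribution makes the pixel fire (the counter is then reset)."""
        self.counter += w
        if self.counter >= self.threshold:
            self.counter -= self.threshold
            return True
        return False

# Example: three +3 contributions against a threshold of 8 fire once.
px = DigitalPixel(threshold=8)
fired = [px.add_weight(3) for _ in range(3)]  # [False, False, True]
```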

Collaboration


Dive into Antonio Acosta-Jimenez's collaborations.

Top Co-Authors

Bernabé Linares-Barranco (Spanish National Research Council)

Teresa Serrano-Gotarredona (Spanish National Research Council)

Rafael Serrano-Gotarredona (Spanish National Research Council)

Luis A. Camuñas-Mesa (Spanish National Research Council)

Carlos Zamarreño-Ramos (Spanish National Research Council)