Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Elena Cerezuela-Escudero is active.

Publication


Featured research published by Elena Cerezuela-Escudero.


international conference on artificial neural networks | 2011

On the designing of spikes band-pass filters for FPGA

M. Domínguez-Morales; Angel Jiménez-Fernandez; Elena Cerezuela-Escudero; Rafael Paz-Vicente; Alejandro Linares-Barranco; Gabriel Jiménez

In this paper we present two implementations of spike-based band-pass filters, which are able to reject out-of-band frequency components in the spike domain. The first is based on previously designed spike-based low-pass filters; with this architecture the quality factor, Q, is lower than 0.5. The second implementation is inspired by the analog multi-feedback (MFB) filter topology; it provides a Q factor higher than 1 that ideally tends to infinity. These filters have been written in VHDL and synthesized for FPGA. The two spike-based band-pass filters presented take advantage of the spike-rate-coded representation to perform massively parallel processing without complex hardware units, such as floating-point arithmetic units or large memories. These low hardware requirements allow a high number of filters to be integrated inside an FPGA, making it possible to process several spike-coded signals fully in parallel.
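The first architecture the abstract describes, building a band-pass response out of low-pass filters, can be sketched in conventional (non-spiking) terms: subtract a low-cutoff low-pass output from a high-cutoff one, leaving only the band in between. The following is an illustrative Python sketch of that idea on a sampled signal, not the authors' spike-domain VHDL implementation; the smoothing coefficients are arbitrary.

```python
import math

def low_pass(signal, alpha):
    """First-order low-pass filter over a sampled signal."""
    out, y = [], 0.0
    for x in signal:
        y += alpha * (x - y)          # exponential smoothing step
        out.append(y)
    return out

def band_pass(signal, alpha_hi, alpha_lo):
    """Band-pass response as the difference of two low-pass filters."""
    hi = low_pass(signal, alpha_hi)   # keeps everything below the upper cutoff
    lo = low_pass(signal, alpha_lo)   # keeps everything below the lower cutoff
    return [h - l for h, l in zip(hi, lo)]

# A slow drift plus a mid-band tone: the band-pass keeps the tone
# and attenuates the drift.
fs = 1000.0
sig = [math.sin(2 * math.pi * 2 * n / fs) + math.sin(2 * math.pi * 50 * n / fs)
       for n in range(2000)]
out = band_pass(sig, alpha_hi=0.5, alpha_lo=0.01)
```

Consistent with the paper's observation, this difference-of-two-first-order-sections structure yields a wide, low-Q passband; the MFB-inspired architecture exists precisely to reach higher Q.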


IEEE Transactions on Neural Networks | 2017

A Binaural Neuromorphic Auditory Sensor for FPGA: A Spike Signal Processing Approach

Angel Jiménez-Fernandez; Elena Cerezuela-Escudero; Lourdes Miro-Amarante; Manuel Jesus Dominguez-Morales; Francisco Gomez-Rodriguez; Alejandro Linares-Barranco; Gabriel Jiménez-Moreno

This paper presents a new architecture, design flow, and field-programmable gate array (FPGA) implementation analysis of a neuromorphic binaural auditory sensor, designed completely in the spike domain. Unlike digital cochleae that decompose audio signals using classical digital signal processing techniques, the model presented in this paper processes information directly encoded as spikes using pulse frequency modulation and provides a set of frequency-decomposed audio information through an address-event representation interface. In this case, a systematic approach to design led to a generic process for building, tuning, and implementing audio frequency decomposers with different features, facilitating synthesis with custom features. This allows researchers to implement their own parameterized neuromorphic auditory systems in a low-cost FPGA in order to study the audio processing and learning activity that takes place in the brain. In this paper, we present a 64-channel binaural neuromorphic auditory system implemented in a Virtex-5 FPGA using a commercial development board. The system was excited with a diverse set of audio signals in order to analyze its response and characterize its features. The neuromorphic auditory system response times and frequencies are reported. The experimental results of the proposed 64-channel stereo implementation are: a frequency range between 9.6 Hz and 14.6 kHz (adjustable), a maximum output event rate of 2.19 Mevents/s, a power consumption of 29.7 mW, a requirement of 11,141 slices, and a system clock frequency of 27 MHz.


international symposium on neural networks | 2015

Musical notes classification with neuromorphic auditory system using FPGA and a convolutional spiking network

Elena Cerezuela-Escudero; Angel Jiménez-Fernandez; Rafael Paz-Vicente; M. Domínguez-Morales; Alejandro Linares-Barranco; Gabriel Jiménez-Moreno

In this paper, we explore the capabilities of a sound classification system that combines both a novel FPGA cochlear model implementation and a bio-inspired technique based on a trained convolutional spiking network. The neuromorphic auditory system used in this work produces a form of representation that is analogous to the spike outputs of the biological cochlea. The auditory system has been developed using a set of spike-based processing building blocks in the frequency domain. They form a set of band-pass filters in the spike domain that splits the audio information into 128 frequency channels, 64 for each of two audio sources. Address-Event Representation (AER) is used to communicate between the auditory system and the convolutional spiking network. A convolutional spiking network layer is developed and trained on a computer, with the ability to detect two kinds of sound: artificial pure tones in the presence of white noise, and electronic musical notes. After the training process, the presented system is able to distinguish the different sounds in real time, even in the presence of white noise.


international conference on neural information processing | 2011

An approach to distance estimation with stereo vision using address-event-representation

M. Domínguez-Morales; Angel Jiménez-Fernandez; R. Paz; M. R. López-Torres; Elena Cerezuela-Escudero; Alejandro Linares-Barranco; Gabriel Jiménez-Moreno; A. Morgado

Image processing in digital computer systems usually considers the visual information as a sequence of frames. These frames come from cameras that capture the scene for a short period of time, and are renewed and transmitted at a rate of 25-30 fps (a typical real-time scenario). Digital video processing has to process each frame in order to obtain a result or detect a feature. In stereo vision, existing algorithms for distance estimation take frames from two digital cameras and process them pixel by pixel to obtain the similarities and differences between both frames; then, depending on the scene and the features extracted, an estimate of the distance to the different objects in the scene is calculated. Spike-based processing is a relatively new approach that manipulates spikes one by one at the time they are transmitted, as the human brain does. The mammalian nervous system is able to solve much more complex problems, such as visual recognition, by manipulating neuron spikes. The spike-based philosophy for visual information processing, based on the neuro-inspired Address-Event Representation (AER), is nowadays achieving very high performance. In this work we propose a two-DVS-retina system, composed of several elements in a chain, which allows us to obtain a distance estimate for moving objects in a close environment. We analyze each element of this chain and propose a Multi Hold&Fire algorithm that obtains the differences between both retinas.


distributed computing and artificial intelligence | 2016

Performance Evaluation of Neural Networks for Animal Behaviors Classification: Horse Gaits Case Study

Elena Cerezuela-Escudero; Antonio Rios-Navarro; Juan Pedro Dominguez-Morales; Ricardo Tapiador-Morales; Daniel Gutierrez-Galan; C. Martín-Cañal; Alejandro Linares-Barranco

The study and monitoring of wildlife has always been a subject of great interest. Studying the behavior of wild animals is a very complex task due to the difficulty of tracking them and classifying their behaviors from the collected sensory information. Novel technology allows designing low-cost systems that facilitate these tasks. There are currently some commercial solutions to this problem; however, it is not possible to obtain a highly accurate classification due to the lack of gathered information. In this work, we propose an animal behavior recognition, classification and monitoring system based on a smart collar device equipped with inertial sensors and a feed-forward neural network, or multi-layer perceptron (MLP), to classify the possible animal behaviors from the collected sensory information. Experimental results on a horse-gaits case study show that the recognition system achieves an accuracy of up to 95.6%.


international conference on artificial neural networks | 2013

Spikes monitors for FPGAs, an experimental comparative study

Elena Cerezuela-Escudero; M. Domínguez-Morales; Angel Jiménez-Fernandez; Rafael Paz-Vicente; Alejandro Linares-Barranco; Gabriel Jiménez-Moreno

In this paper we present and analyze two VHDL components for monitoring the internal activity of spikes fired by silicon neurons inside FPGAs. These spike monitors encode each spike according to the Address-Event Representation, sending them through a time-multiplexed digital bus as discrete events, using different strategies. In order to study and analyze their behavior, we have designed an experimental scenario where diverse AER systems are used to stimulate the spike monitors and collect the output AER events for later analysis. We have applied a battery of tests to both monitors in order to measure diverse features, such as the maximum spike load and the AER event loss due to collisions.
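The monitoring scheme described in the abstract, encoding each spike as an address on a shared time-multiplexed bus, can be illustrated with a minimal software model. This is an idealized Python sketch under assumed representations (per-neuron lists of spike times, a lossless bus), not the VHDL monitors themselves:

```python
def aer_encode(spike_trains):
    """Merge per-neuron spike-time lists into a single time-ordered AER stream.

    spike_trains[i] holds the spike times of the neuron at address i.
    Returns a list of (address, time) events, i.e. the time-multiplexed bus.
    """
    events = [(t, addr)
              for addr, times in enumerate(spike_trains)
              for t in times]
    events.sort()                       # multiplex all spikes onto one bus by time
    return [(addr, t) for t, addr in events]

# Neuron 0 fires at t=1 and t=3; neuron 1 fires at t=2.
stream = aer_encode([[3, 1], [2]])
```

In the real hardware, two neurons firing in the same cycle contend for the bus, which is exactly the collision-induced event loss the paper measures; this sketch idealizes that contention away.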


international conference on artificial neural networks | 2016

Multilayer Spiking Neural Network for Audio Samples Classification Using SpiNNaker

Juan Pedro Dominguez-Morales; Angel Jiménez-Fernandez; Antonio Rios-Navarro; Elena Cerezuela-Escudero; Daniel Gutierrez-Galan; M. Domínguez-Morales; Gabriel Jiménez-Moreno

Audio classification has always been an interesting subject of research in the neuromorphic engineering field. Tools like Nengo or Brian, and hardware platforms like the SpiNNaker board, are rapidly increasing in popularity in the neuromorphic community due to the ease of modelling spiking neural networks with them. In this manuscript, a multilayer spiking neural network for audio sample classification using SpiNNaker is presented. The network consists of different leaky integrate-and-fire neuron layers. The connections between them are trained using novel firing-rate-based algorithms and tested using sets of pure tones with frequencies ranging from 130.813 to 1396.91 Hz. The hit-rate percentage values are obtained after adding a random noise signal to the original pure-tone signal. The results show very good classification performance (above 85% hit rate) for each class when the signal-to-noise ratio is above 3 decibels, validating the robustness of the network configuration and the training step.
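The leaky integrate-and-fire model named in the abstract can be sketched in a few lines: the membrane potential leaks toward rest, integrates its input, and emits a spike (then resets) when it crosses a threshold. This is a generic, illustrative Python sketch with arbitrary parameters, not the actual neuron configuration of the SpiNNaker network:

```python
def lif_spikes(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate one leaky integrate-and-fire neuron; return its spike times."""
    v, spikes = 0.0, []
    for step, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and integrates the input.
        v += dt * (-v / tau + i_in)
        if v >= v_thresh:          # threshold crossing -> emit a spike
            spikes.append(step)
            v = v_reset            # reset after firing
    return spikes

# A stronger constant input reaches threshold sooner, so it fires faster.
fast = lif_spikes([0.2] * 200)
slow = lif_spikes([0.06] * 200)
```

The stronger the drive, the higher the firing rate; this monotonic rate code is what the firing-rate-based training described in the abstract operates on.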


Neurocomputing | 2018

Embedded neural network for real-time animal behavior classification

Daniel Gutierrez-Galan; Juan Pedro Dominguez-Morales; Elena Cerezuela-Escudero; Antonio Rios-Navarro; Ricardo Tapiador-Morales; Manuel Rivas-Perez; M. Domínguez-Morales; Angel Jiménez-Fernandez; Alejandro Linares-Barranco

Recent biological studies have focused on understanding animal interactions and welfare. To help biologists obtain animal behavior information, resources like wireless sensor networks are needed. Moreover, large amounts of collected data have to be processed off-line in order to classify different behaviors. Recent research projects have focused on designing monitoring systems capable of measuring some animal parameters in order to recognize and monitor their gaits or behaviors. However, network unreliability and high power consumption have limited their applicability. In this work, we present an animal behavior recognition, classification and monitoring system based on a wireless sensor network and a smart collar device, equipped with inertial sensors and an embedded multi-layer perceptron-based feed-forward neural network, to classify the different gaits or behaviors from the collected information. In similar works, classification mechanisms are implemented in a server (or base station). The main novelty of this work is the full implementation of a reconfigurable neural network embedded in the animal's collar, which allows real-time behavior classification and enables local storage in SD memory. Moreover, this approach reduces the amount of data transmitted to the base station (and its periodicity), significantly improving battery life. The system has been simulated and tested in a real scenario for three different horse gaits, using different heuristics and sensors to improve the accuracy of behavior recognition, achieving a maximum of 81%.
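The embedded classifier is a multi-layer perceptron, i.e. a feed-forward pass of weighted sums and nonlinear activations, layer by layer. The sketch below is a hypothetical minimal Python forward pass (sigmoid activations, made-up weights, two features and two classes) just to show the structure; the paper's network runs on the collar with its own trained weights and sensor features.

```python
import math

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a single-hidden-layer perceptron (sigmoid units)."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    # Hidden layer: weighted sum of the inputs plus a bias, then activation.
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
              for w, b in zip(w_hidden, b_hidden)]
    # Output layer: one unit per behavior class; pick the strongest.
    scores = [sigmoid(sum(wi * hi for wi, hi in zip(w, hidden)) + b)
              for w, b in zip(w_out, b_out)]
    return scores.index(max(scores))   # index of the predicted class

# Hypothetical 2-feature, 2-class example with made-up weights.
w_hidden = [[1.0, 0.0], [0.0, 1.0]]
b_hidden = [0.0, 0.0]
w_out = [[5.0, -5.0], [-5.0, 5.0]]
b_out = [0.0, 0.0]
pred = mlp_forward([2.0, -2.0], w_hidden, b_hidden, w_out, b_out)
```

Because the forward pass is only multiplies, adds, and a lookup-friendly activation, it fits comfortably on a low-power microcontroller, which is what makes on-collar classification feasible.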


international conference on event based control communication and signal processing | 2015

Real-time motor rotation frequency detection with event-based visual and spike-based auditory AER sensory integration for FPGA

Antonio Rios-Navarro; Elena Cerezuela-Escudero; M. Domínguez-Morales; Angel Jiménez-Fernandez; Gabriel Jiménez-Moreno; Alejandro Linares-Barranco

Multisensory integration is commonly used in various robotic areas to collect more environmental information using different and complementary types of sensors. Neuromorphic engineers mimic the behavior of biological systems to improve system performance in solving engineering problems with low power consumption. This work presents a neuromorphic sensory integration scenario for measuring the rotation frequency of a motor using an AER DVS128 retina chip (Dynamic Vision Sensor) and a completely event-based stereo auditory system on an FPGA. Both of them transmit information with Address-Event Representation (AER). This integration system uses a new AER monitor hardware interface, based on a Spartan-6 FPGA, that allows two operational modes: real-time (up to 5 Mevps through USB2.0) and data-logger mode (up to 20 Mevps, with 33.5 Mev stored in onboard DDR RAM). The sensory integration reduces the prediction error of the motor's rotation speed, since the audio processing offers a concrete range of rpm while the DVS can be much more accurate.


international symposium on circuits and systems | 2015

Live demonstration: Real-time motor rotation frequency detection by spike-based visual and auditory AER sensory integration for FPGA

Antonio Rios-Navarro; Elena Cerezuela-Escudero; M. Domínguez-Morales; Angel Jiménez-Fernandez; Gabriel Jiménez-Moreno; Alejandro Linares-Barranco

Multisensory integration is commonly used in various robotic areas to collect much more information from an environment using different and complementary types of sensors. This demonstration presents a scenario where the motor rotation frequency is obtained using an AER DVS128 retina chip (Dynamic Vision Sensor) and a frequency-decomposer auditory system on an FPGA that mimics a biological cochlea. Both are spike-based sensors with Address-Event Representation (AER) outputs. A new AER monitor hardware interface, based on a Spartan-6 FPGA, allows two operational modes: real-time (up to 5 Mevps through USB2.0) and off-line mode (up to 20 Mevps, with 33.5 Mev stored in DDR RAM). The sensory integration allows the bio-inspired cochlea to provide a coarse range of rpm, which is then refined by the silicon retina.

Collaboration


Dive into Elena Cerezuela-Escudero's collaborations.
