Juan Pedro Dominguez-Morales
University of Seville
Publications
Featured research published by Juan Pedro Dominguez-Morales.
Distributed Computing and Artificial Intelligence | 2016
Elena Cerezuela-Escudero; Antonio Rios-Navarro; Juan Pedro Dominguez-Morales; Ricardo Tapiador-Morales; Daniel Gutierrez-Galan; C. Martín-Cañal; Alejandro Linares-Barranco
The study and monitoring of wildlife has always been a subject of great interest. Studying the behavior of wild animals is a very complex task due to the difficulty of tracking them and classifying their behaviors from the collected sensory information. Novel technology allows the design of low-cost systems that facilitate these tasks. There are currently some commercial solutions to this problem; however, they cannot provide a highly accurate classification due to the limited information they gather. In this work, we propose an animal behavior recognition, classification and monitoring system based on a smart collar device equipped with inertial sensors and a feed-forward neural network, or Multi-Layer Perceptron (MLP), that classifies the animal's behavior from the collected sensory information. Experimental results on a horse-gait case study show that the recognition system achieves an accuracy of up to 95.6%.
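A minimal sketch of how such an MLP gait classifier could be trained, assuming inertial features have already been extracted into fixed-length windows; the feature dimensions, class labels and network size below are illustrative, not the paper's actual pipeline.

```python
# Illustrative MLP gait classifier on synthetic windowed inertial features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))        # e.g. mean/variance of 3-axis acceleration per window
y = rng.integers(0, 3, size=500)     # 0 = walk, 1 = trot, 2 = gallop (hypothetical labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
mlp.fit(X_train, y_train)
print("test accuracy:", mlp.score(X_test, y_test))
```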
International Conference on Computer Information and Telecommunication Systems | 2015
Ricardo Tapiador-Morales; Antonio Rios-Navarro; Angel Jiménez-Fernandez; Juan Pedro Dominguez-Morales; Alejandro Linares-Barranco
A sensor network integrates multiple sensors into a system that collects information about different environmental variables. Monitoring systems allow us to determine the current state of a subject, to understand its behavior and, in some cases, to predict what is going to happen. This work presents a monitoring system for semi-wild animals that captures their actions using an IMU (inertial measurement unit) and a sensor fusion algorithm. Based on an ARM Cortex-M4 microcontroller, the system sends the data from the different sensor axes using ZigBee technology in two different operating modes: RAW (logging all information to an SD card) or RT (real-time operation). The sensor fusion algorithm improves precision and reduces noise interference.
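The paper does not detail its fusion algorithm here, so the following is only a minimal complementary-filter sketch of the general idea of fusing accelerometer and gyroscope readings into an orientation estimate; the blending factor, sample values and update rate are assumptions.

```python
# Illustrative complementary filter: blend integrated gyro rate with the
# accelerometer's gravity-derived pitch angle.
import math

def fuse_pitch(pitch_prev, gyro_rate_dps, ax, ay, az, dt, alpha=0.98):
    pitch_gyro = pitch_prev + gyro_rate_dps * dt                       # integrate angular rate
    pitch_acc = math.degrees(math.atan2(ax, math.sqrt(ay**2 + az**2))) # gravity-based angle
    return alpha * pitch_gyro + (1.0 - alpha) * pitch_acc

pitch = 0.0
for ax, ay, az, gyro in [(0.10, 0.00, 0.99, 1.5), (0.12, 0.01, 0.98, 1.4)]:
    pitch = fuse_pitch(pitch, gyro, ax, ay, az, dt=0.01)
print("estimated pitch (deg):", pitch)
```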
International Conference on Artificial Neural Networks | 2016
Juan Pedro Dominguez-Morales; Angel Jiménez-Fernandez; Antonio Rios-Navarro; Elena Cerezuela-Escudero; Daniel Gutierrez-Galan; M. Domínguez-Morales; Gabriel Jiménez-Moreno
Audio classification has always been an interesting subject of research in the neuromorphic engineering field. Tools like Nengo or Brian, and hardware platforms like the SpiNNaker board, are rapidly increasing in popularity in the neuromorphic community due to the ease of modelling spiking neural networks with them. In this manuscript, a multilayer spiking neural network for audio sample classification using SpiNNaker is presented. The network consists of different leaky integrate-and-fire neuron layers. The connections between them are trained using novel firing-rate-based algorithms and tested using sets of pure tones with frequencies ranging from 130.813 to 1396.91 Hz. The hit-rate percentages are obtained after adding a random noise signal to the original pure-tone signal. The results show very good classification performance (above an 85% hit rate) for each class when the signal-to-noise ratio is above 3 decibels, validating the robustness of the network configuration and the training step.
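A minimal PyNN sketch of a two-layer leaky integrate-and-fire network of the kind described above, assuming the sPyNNaker toolchain is installed; population sizes, weights and the input spike times are illustrative placeholders, not the trained values from the paper.

```python
# Illustrative two-layer LIF network described in PyNN and run on SpiNNaker.
import pyNN.spiNNaker as sim

sim.setup(timestep=1.0)

stimulus = sim.Population(16, sim.SpikeSourceArray(spike_times=[[10, 20, 30]] * 16))
hidden = sim.Population(32, sim.IF_curr_exp())
output = sim.Population(4, sim.IF_curr_exp())

sim.Projection(stimulus, hidden, sim.AllToAllConnector(),
               synapse_type=sim.StaticSynapse(weight=0.5, delay=1.0))
sim.Projection(hidden, output, sim.AllToAllConnector(),
               synapse_type=sim.StaticSynapse(weight=0.5, delay=1.0))

output.record("spikes")
sim.run(100)
spikes = output.get_data("spikes")   # winning class could be read from output spike counts
sim.end()
```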
Neurocomputing | 2018
Daniel Gutierrez-Galan; Juan Pedro Dominguez-Morales; Elena Cerezuela-Escudero; Antonio Rios-Navarro; Ricardo Tapiador-Morales; Manuel Rivas-Perez; M. Domínguez-Morales; Angel Jiménez-Fernandez; Alejandro Linares-Barranco
Recent biological studies have focused on understanding animal interactions and welfare. To help biologists obtain information about animal behavior, resources like wireless sensor networks are needed. Moreover, large amounts of collected data have to be processed off-line in order to classify the different behaviors. Recent research projects have focused on designing monitoring systems capable of measuring some animal parameters in order to recognize and monitor their gaits or behaviors. However, network unreliability and high power consumption have limited their applicability. In this work, we present an animal behavior recognition, classification and monitoring system based on a wireless sensor network and a smart collar device, provided with inertial sensors and an embedded multi-layer perceptron-based feed-forward neural network, to classify the different gaits or behaviors from the collected information. In similar works, the classification mechanisms are implemented on a server (or base station). The main novelty of this work is the full implementation of a reconfigurable neural network embedded in the animal's collar, which allows real-time behavior classification and enables local storage on an SD card. Moreover, this approach reduces the amount of data transmitted to the base station (and its periodicity), significantly improving battery life. The system has been simulated and tested in a real scenario for three different horse gaits, using different heuristics and sensors to improve the accuracy of behavior recognition, achieving a maximum accuracy of 81%.
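A minimal sketch of the kind of forward pass an embedded feed-forward network could run on the collar to classify a window of sensor data locally; the weight values, layer sizes and feature vector below are hypothetical, not the reconfigurable network reported in the paper.

```python
# Illustrative single-hidden-layer MLP inference step, as could run on-device.
import numpy as np

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    h = 1.0 / (1.0 + np.exp(-(x @ w_hidden + b_hidden)))  # sigmoid hidden activations
    scores = h @ w_out + b_out                            # one score per behavior class
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
w_h, b_h = rng.normal(size=(6, 8)), np.zeros(8)
w_o, b_o = rng.normal(size=(8, 3)), np.zeros(3)
features = rng.normal(size=6)          # e.g. windowed inertial-sensor statistics
print("predicted gait class:", mlp_forward(features, w_h, b_h, w_o, b_o))
```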
International Conference on Artificial Neural Networks | 2016
Antonio Rios-Navarro; Juan Pedro Dominguez-Morales; Ricardo Tapiador-Morales; M. Domínguez-Morales; Angel Jiménez-Fernandez; Alejandro Linares-Barranco
The study and monitoring of wildlife behavior has always been a subject of great interest. Although many systems can track animal positions using GPS, behavior classification is not a common task. For this work, a multi-sensory wearable device has been designed and implemented to control and monitor wild and semi-wild animals in Doñana National Park. The data obtained with these sensors are processed using a Spiking Neural Network (SNN) with Address-Event Representation (AER) coding and classified into a fixed set of activity behaviors. This work presents the full infrastructure deployed in Doñana to collect the data, the wearable device, the SNN implementation on SpiNNaker and the classification results.
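Before an SNN can process sensor data, the analog samples must be turned into spikes. The paper's encoder is not described here, so the following is only a sketch of one common option, rate coding, where each channel's magnitude drives a Poisson spike train; the rate ceiling and window length are assumptions.

```python
# Illustrative rate coding of sensor samples into per-channel spike trains.
import numpy as np

def rate_encode(samples, max_rate_hz=100.0, window_s=1.0, seed=0):
    rng = np.random.default_rng(seed)
    spike_trains = []
    for value in samples:
        rate = max_rate_hz * min(abs(value), 1.0)       # clip magnitude to [0, 1]
        n_spikes = rng.poisson(rate * window_s)
        spike_trains.append(np.sort(rng.uniform(0, window_s, n_spikes)))
    return spike_trains

print(rate_encode([0.2, 0.8, 0.05]))
```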
International Work-Conference on Artificial and Natural Neural Networks | 2017
Brayan Cuevas-Arteaga; Juan Pedro Dominguez-Morales; Horacio Rostro-Gonzalez; Andrés Espinal; Angel Jiménez-Fernandez; Francisco Gomez-Rodriguez; Alejandro Linares-Barranco
In this paper, we present the numerical results of the implementation of a Spiking Central Pattern Generator (SCPG) on a SpiNNaker board. The SCPG is a network of current-based leaky integrate-and-fire (LIF) neurons, which generates periodic spike trains that correspond to different locomotion gaits (i.e., walk, trot, run). To generate such patterns, the SCPG has been configured with different topologies, and its parameters have been experimentally estimated. To validate our designs, we have implemented them on the SpiNNaker board using PyNN and embedded the board on a hexapod robot. The system includes a Dynamic Vision Sensor (DVS) able to command a pattern to the robot depending on the frequency of the fired events: the more activity the DVS produces, the faster the commanded pattern.
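A minimal sketch of the event-rate-to-gait mapping described above: the busier the DVS output, the faster the commanded gait. The rate thresholds are hypothetical values for illustration, not the ones used on the robot.

```python
# Illustrative mapping from DVS activity (events per second) to a commanded gait.
def select_gait(events_per_second):
    if events_per_second < 1_000:
        return "walk"
    if events_per_second < 10_000:
        return "trot"
    return "run"

for rate in (500, 5_000, 50_000):
    print(rate, "ev/s ->", select_gait(rate))
```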
International Conference on Event-Based Control, Communication and Signal Processing | 2016
Antonio Rios-Navarro; Juan Pedro Dominguez-Morales; Ricardo Tapiador-Morales; Daniel Gutierrez-Galan; Angel Jiménez-Fernandez; Alejandro Linares-Barranco
Neuromorphic systems are engineering solutions that take inspiration from biological neural systems. They use spike- or event-based representation and codification of the information. This codification allows complex computations, filtering, classification and learning to be performed in a pseudo-simultaneous way. A small amount of incremental processing is done per event, which yields useful results with very low latencies. Therefore, developing this kind of system requires specialized tools for debugging and testing these flows of events. This paper presents a set of logic implementations for FPGA that assist in the development and debugging of event-based systems. Address-Event Representation (AER) is a communication protocol for transferring events/spikes between bio-inspired chips/systems. Real-time monitoring and sequencing, logging and playing back long sequences of events/spikes to and from memory, and several merging and splitting ports are the main requirements when developing these systems. These functionalities and implementations are explained and tested in this work. The logic has been evaluated on an Opal Kelly XEM6010 acting as a daughter board for the AER-Node platform. It achieves a peak rate of 20 Mevps when logging and a total of 32 Mev of logging capacity on DDR memory when debugging an AER system on the AER-Node or on a set of them connected in a daisy chain.
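As a software-level illustration of one of the operations named above, the sketch below merges two timestamped AER event streams into a single stream ordered by timestamp, which is conceptually what an FPGA merger port does; the (timestamp, address) tuples are made up for the example.

```python
# Illustrative AER merge: combine two (timestamp, address) streams by timestamp.
import heapq

def merge_aer(stream_a, stream_b):
    return list(heapq.merge(stream_a, stream_b, key=lambda ev: ev[0]))

a = [(10, 0x01), (30, 0x02)]
b = [(15, 0x81), (25, 0x82)]
print(merge_aer(a, b))
```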
Neurocomputing | 2017
Juan Pedro Dominguez-Morales; Angel Jiménez-Fernandez; M. Domínguez-Morales; Gabriel Jiménez-Moreno
This software presents diverse utilities to develop the first post-processing layer using the information provided by neuromorphic auditory sensors (NAS). The NAS used implements a cascade-filter architecture on FPGA, imitating the behavior of the basilar membrane and inner hair cells and working with the sound information decomposed into its frequency components as spike streams. The neuromorphic hardware interface Address-Event Representation (AER) is used to propagate auditory information out of the NAS, emulating the auditory vestibular nerve. Using the packetized information (aedat files) generated with the jAER software plus an AER-to-USB computer interface, NAVIS implements a set of graphs that allows the auditory information to be represented as cochleograms, histograms, sonograms, etc. It can also split the auditory information into different sets depending on the activity level of the spike streams. The main contribution of this software tool is its capability to apply complex audio post-processing treatments and representations, which is a novelty for spike-based systems in the neuromorphic community. This software will help neuromorphic engineers build training sets for spiking neural networks (SNNs).
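A minimal sketch of a cochleogram-style representation of NAS output, assuming the AER events have already been decoded from the aedat file into parallel arrays of timestamps (in microseconds) and channel addresses; the channel count and bin width are assumptions, and this is not NAVIS's actual implementation.

```python
# Illustrative cochleogram: per-channel spike counts over fixed time bins.
import numpy as np

def cochleogram(timestamps_us, addresses, n_channels=64, bin_us=10_000):
    n_bins = int(np.max(timestamps_us) // bin_us) + 1
    counts = np.zeros((n_channels, n_bins))
    for t, addr in zip(timestamps_us, addresses):
        counts[addr % n_channels, int(t // bin_us)] += 1
    return counts

ts = np.array([1_000, 2_500, 12_000, 15_500])
addrs = np.array([3, 3, 10, 3])
print(cochleogram(ts, addrs).shape)    # (channels, time bins)
```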
IEEE Transactions on Biomedical Circuits and Systems | 2017
Juan Pedro Dominguez-Morales; Angel Jiménez-Fernandez; M. Domínguez-Morales; Gabriel Jiménez-Moreno
Auscultation is one of the most widely used techniques for detecting cardiovascular diseases, which are among the main causes of death in the world. Heart murmurs are the most common abnormal finding when a patient visits the physician for auscultation. These heart sounds can either be innocent, which are harmless, or abnormal, which may be a sign of a more serious heart condition. However, the accuracy of primary care physicians and expert cardiologists when auscultating is not good enough to avoid most type-I errors (healthy patients sent for an echocardiogram) and type-II errors (pathological patients sent home without medication or treatment). In this paper, the authors present a novel convolutional neural network-based tool for distinguishing between healthy people and pathological patients using a neuromorphic auditory sensor for FPGA that is able to decompose the audio into frequency bands in real time. For this purpose, different networks have been trained with the heart murmur information contained in heart sound recordings obtained from nine different heart sound databases sourced from multiple research groups. These samples are segmented and preprocessed using the neuromorphic auditory sensor to decompose their audio information into frequency bands and, after that, sonogram images of the same size are generated. These images have been used to train and test different convolutional neural network architectures. The best results have been obtained with a modified version of the AlexNet model, achieving 97% accuracy (specificity: 95.12%, sensitivity: 93.20%, PhysioNet/CinC Challenge 2016 score: 0.9416). This tool could aid cardiologists and primary care physicians in the auscultation process, improving the decision-making task and reducing type-I and type-II errors.
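A minimal Keras sketch of a CNN that classifies fixed-size sonogram images as healthy vs. pathological; this small network and its assumed 64x64 grayscale input are illustrative only, not the modified AlexNet reported in the paper.

```python
# Illustrative binary CNN classifier for sonogram images.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),        # sonogram as a grayscale image
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability of "pathological"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```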
International Conference on Artificial Neural Networks | 2016
Elena Cerezuela-Escudero; Angel Jiménez-Fernandez; Rafael Paz-Vicente; Juan Pedro Dominguez-Morales; M. Domínguez-Morales; Alejandro Linares-Barranco
In this paper, we explore the capabilities of a sound classification system that combines a Neuromorphic Auditory System for feature extraction with an artificial neural network for classification. Two neural network models have been used: a Multilayer Perceptron Neural Network and a Spiking Neural Network. To compare their accuracies, both networks have been developed and trained to recognize pure tones in the presence of white noise. The spiking neural network has been implemented on an FPGA device. The neuromorphic auditory system used in this work produces a form of representation that is analogous to the spike outputs of the biological cochlea. Both systems are able to distinguish the different sounds even in the presence of white noise. The recognition system based on the spiking neural network has better accuracy, above 91%, even when the added white noise has the same power as the signal.
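A minimal sketch of how a noisy test tone for this kind of experiment could be generated: a pure tone plus white noise scaled to a target signal-to-noise ratio. The sampling rate, tone frequency and SNR value are assumptions for illustration.

```python
# Illustrative generation of a pure tone corrupted by white noise at a given SNR.
import numpy as np

def noisy_tone(freq_hz, snr_db, fs=16_000, duration_s=0.5, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * duration_s)) / fs
    tone = np.sin(2 * np.pi * freq_hz * t)
    noise = rng.normal(size=t.size)
    # scale noise so that 10*log10(P_signal / P_noise) equals snr_db
    noise *= np.sqrt(tone.var() / (noise.var() * 10 ** (snr_db / 10)))
    return tone + noise

signal = noisy_tone(261.63, snr_db=0)   # 0 dB SNR: noise power equals signal power
print(signal.shape)
```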