
Publications


Featured research published by Francisco Javier Díaz-Pernas.


Computer-aided Civil and Infrastructure Engineering | 2010

Wavelet‐Based Denoising for Traffic Volume Time Series Forecasting with Self‐Organizing Neural Networks

D. Boto-Giralda; Francisco Javier Díaz-Pernas; D. González-Ortega; J. F. Díez-Higuera; M. Antón-Rodríguez; Mario Martínez-Zarzuela

In their goal to effectively manage the use of existing infrastructures, intelligent transportation systems require precise forecasting of near-term traffic volumes to feed real-time analytical models and traffic surveillance tools that alert when network links are reaching capacity. This article proposes a new methodological approach for short-term predictions of time series of volume data at isolated cross sections. The originality of the computational modeling stems from the fitting of the threshold values used in the stationary wavelet-based denoising process applied to the time series, and from the determination of patterns that characterize the evolution of its samples over a fixed prediction horizon. A self-organizing fuzzy neural network is optimized in its configuration parameters for learning and recognition of these patterns. Four real-world data sets from three interstate roads are considered for evaluating the performance of the proposed model. A quantitative comparison with the results obtained by four other relevant prediction models shows a favorable outcome.
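As a rough illustration of the denoising stage mentioned in this abstract, the sketch below applies a stationary wavelet transform with soft thresholding to a 1-D series using PyWavelets; the wavelet, decomposition level and universal-threshold rule are assumptions for illustration, not the fitted thresholds from the paper.

```python
import numpy as np
import pywt

def swt_denoise(series, wavelet="db4", level=3):
    """Denoise a 1-D volume series with the stationary wavelet transform (SWT).

    Note: pywt.swt requires len(series) to be divisible by 2**level.
    """
    coeffs = pywt.swt(series, wavelet, level=level)       # list of (cA, cD) pairs
    sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745     # noise estimate from finest details
    thr = sigma * np.sqrt(2 * np.log(len(series)))        # universal threshold (assumed rule)
    denoised = [(cA, pywt.threshold(cD, thr, mode="soft")) for cA, cD in coeffs]
    return pywt.iswt(denoised, wavelet)
```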


PLOS ONE | 2014

Efficient Transfer Entropy Analysis of Non-Stationary Neural Time Series

Patricia Wollstadt; Mario Martínez-Zarzuela; Raul Vicente; Francisco Javier Díaz-Pernas; Michael Wibral

Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. Especially the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate the associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of the processes to allow pooling of observations over time. This assumption, however, is a major obstacle to the application of these estimators in neuroscience, as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues theoretically showed that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble of realizations is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation for a graphics processing unit to handle the most computationally demanding aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data. While we mainly evaluate the proposed method on neuroscience data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and artificial systems.
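For readers unfamiliar with the quantity being estimated, here is a minimal plug-in estimate of transfer entropy from X to Y with history length 1 over binned observations. This is a didactic sketch only; it is not the ensemble or GPU-based estimator developed in the paper, and the binning scheme is an assumption.

```python
import numpy as np

def transfer_entropy(x, y, bins=8):
    """Plug-in estimate of TE(X -> Y) = I(Y_t ; X_{t-1} | Y_{t-1}) from binned samples."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins))
    yd = np.digitize(y, np.histogram_bin_edges(y, bins))
    y_t, y_past, x_past = yd[1:], yd[:-1], xd[:-1]

    def joint_entropy(*variables):
        """Shannon entropy (bits) of the joint distribution of discrete variables."""
        _, counts = np.unique(np.column_stack(variables), axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    # TE = H(Y_t, Y_past) + H(Y_past, X_past) - H(Y_past) - H(Y_t, Y_past, X_past)
    return (joint_entropy(y_t, y_past) + joint_entropy(y_past, x_past)
            - joint_entropy(y_past) - joint_entropy(y_t, y_past, x_past))
```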


Sensors | 2017

Comparative Study of Neural Network Frameworks for the Next Generation of Adaptive Optics Systems

Carlos González-Gutiérrez; Jesús Santos; Mario Martínez-Zarzuela; A. G. Basden; James Osborn; Francisco Javier Díaz-Pernas; Francisco Javier de Cos Juez

Many of the next generation of adaptive optics systems on large and extremely large telescopes require tomographic techniques in order to correct for atmospheric turbulence over a large field of view. Multi-object adaptive optics is one such technique. In this paper, different implementations of a tomographic reconstructor based on a machine learning architecture named “CARMEN” are presented. Basic concepts of adaptive optics are introduced first, with a short explanation of three different control systems used on real telescopes and the sensors utilised. The operation of the reconstructor, along with the three neural network frameworks used, and the developed CUDA code are detailed. Changes to the size of the reconstructor influence the training and execution time of the neural network. The native CUDA code turns out to be the best choice for all the systems, although some of the other frameworks offer good performance under certain circumstances.
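As a hedged illustration of the kind of reconstructor being benchmarked, the sketch below defines a single-hidden-layer fully connected network that maps wavefront-sensor slopes to reconstructed slopes, written in PyTorch; the framework choice, layer sizes and placeholder input are assumptions for illustration and not necessarily among the frameworks compared in the paper.

```python
import torch
import torch.nn as nn

class Reconstructor(nn.Module):
    """A single-hidden-layer, fully connected tomographic reconstructor (illustrative sizes)."""

    def __init__(self, n_inputs=288, n_hidden=288, n_outputs=72):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, n_hidden),   # off-axis wavefront-sensor slopes -> hidden layer
            nn.Sigmoid(),
            nn.Linear(n_hidden, n_outputs),  # hidden layer -> reconstructed on-axis slopes
        )

    def forward(self, slopes):
        return self.net(slopes)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = Reconstructor().to(device)
prediction = model(torch.randn(1, 288, device=device))  # one batch of placeholder slopes
```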


Applied Soft Computing | 2014

Double recurrent interaction V1–V2–V4 based neural architecture for color natural scene boundary detection and surface perception

Francisco Javier Díaz-Pernas; Mario Martínez-Zarzuela; M. Antón-Rodríguez; D. González-Ortega

In this paper, a new neural model for bio-inspired processing of color images, called the dPREEN (double recurrent Perceptual boundaRy dEtection Neural) model, is presented. The dPREEN model includes a double feedback loop among the V1, V2 and V4 cortical areas, simple and double color-opponent processes, orientation filtering using Gabor kernels, surround suppression in complex cells, top-down and bottom-up information fusion, and chromatic diffusion, to generate contours of perceptual significance in color natural scenes. The outputs of the model are a boundary map of the scene and surface perception images. The paper includes a comparative analysis of the proposed model against two other contour extraction methods on the Berkeley Segmentation Dataset and Benchmark, with results favorable to the dPREEN model. Additionally, the paper describes two parallel implementations of the model for execution on Graphics Processing Units.
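To make the orientation-filtering stage concrete, here is a minimal sketch of a Gabor filter bank applied to a grayscale image with OpenCV; the kernel parameters and number of orientations are illustrative assumptions, not the values used by the dPREEN model.

```python
import cv2
import numpy as np

def gabor_bank(image, n_orientations=4, ksize=21, sigma=4.0, lambd=10.0, gamma=0.5):
    """Apply a bank of Gabor kernels at evenly spaced orientations to a grayscale image."""
    responses = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        # (ksize, ksize), sigma, theta, wavelength, aspect ratio, phase offset
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, 0)
        responses.append(cv2.filter2D(image.astype(np.float32), cv2.CV_32F, kernel))
    return np.stack(responses)  # one response map per orientation
```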


Computer Methods and Programs in Biomedicine | 2013

Cross-Approximate Entropy parallel computation on GPUs for biomedical signal analysis. Application to MEG recordings

Mario Martínez-Zarzuela; Carlos Gómez; Francisco Javier Díaz-Pernas; Alberto Fernández; Roberto Hornero

Cross-Approximate Entropy (Cross-ApEn) is a useful measure to quantify the statistical dissimilarity of two time series. In spite of the advantage of Cross-ApEn over its one-dimensional counterpart (Approximate Entropy), only a few studies have applied it to biomedical signals, mainly due to its high computational cost. In this paper, we propose a fast GPU-based implementation of Cross-ApEn that makes its use over a large amount of multidimensional data feasible. The scheme followed is fully scalable, and thus maximizes the use of the GPU regardless of the number of neural signals being processed. The approach consists in processing many trials or epochs simultaneously, regardless of their origin; in the case of MEG data, these trials can come from different input channels or subjects. The proposed implementation achieves an average speedup greater than 250× over a parallel CPU version running on a six-core processor. A dataset of 30 subjects containing 148 MEG channels (49 epochs of 1024 samples per channel) can be analyzed with our development in about 30 min. The same processing takes 5 days on six cores and 15 days on a single core. The speedup is much larger when compared to a basic sequential Matlab® implementation, which would need 58 days per subject. To our knowledge, this is the first contribution of Cross-ApEn computation using GPUs. This study demonstrates that this hardware is, to date, the best option for the signal processing of biomedical data with Cross-ApEn.
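For reference, the sketch below computes Cross-ApEn(m, r) for two standardised series directly from its definition in plain NumPy; the paper's contribution is a massively parallel GPU version of this computation, which is not reproduced here.

```python
import numpy as np

def cross_apen(u, v, m=1, r=0.2):
    """Cross-Approximate Entropy between two equal-length series (r in units of the SD)."""
    u = (u - u.mean()) / u.std()
    v = (v - v.mean()) / v.std()

    def phi(m):
        n = len(u) - m + 1
        U = np.array([u[i:i + m] for i in range(n)])   # template vectors from u
        V = np.array([v[j:j + m] for j in range(n)])   # template vectors from v
        # Chebyshev distance between every u-template and every v-template
        d = np.max(np.abs(U[:, None, :] - V[None, :, :]), axis=2)
        C = np.mean(d <= r, axis=1)                    # matching fraction per u-template
        return np.mean(np.log(C))                      # assumes at least one match per template

    return phi(m) - phi(m + 1)
```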


Global Engineering Education Conference | 2010

Approach to teaching communications systems by collaborative learning. Student perceptions in the application of problem-based learning: Analysis of results

B. Sainz-de Abajo; I. de la Torre-Díez; Miguel López-Coronado; Francisco Javier Díaz-Pernas; J. F. Díez-Higuera; M. Antón-Rodríguez; E. García-Salcines; C. de Castro-Lozano

This paper presents the results of a study carried out at the end of the course, which show how the application of problem-based learning and collaborative learning helps students assimilate the study material in the most appropriate way. An analysis of the results of surveys carried out among students of Communication Systems in Industrial Technical Engineering was undertaken, in order to evaluate whether applying the Problem-Based Learning (PBL) methodology together with Collaborative Learning (CL) was likely to improve the development of abilities indispensable in today's business world, and to achieve the objectives set out in the course. The large majority of students showed a very positive attitude towards this methodology, and their objections were very limited. A comparison was carried out with the results of courses of a similar nature at the University of Cordoba, as was the attitude of those students towards this new methodology, with very comparable results.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2012

Synchrony analysis of spontaneous MEG activity in Alzheimer's disease patients

Carlos Gómez; Mario Martínez-Zarzuela; Jesús Poza; Francisco Javier Díaz-Pernas; Alberto Fernández; Roberto Hornero

The aim of this study was to analyze the magnetoencephalography (MEG) background activity in Alzheimer's disease (AD) patients using cross-approximate entropy (Cross-ApEn). Cross-ApEn is a nonlinear measure of asynchrony between time series. Five minutes of recording were acquired with a 148-channel whole-head magnetometer in 12 AD patients and 12 age-matched control subjects. We found significantly higher synchrony between MEG signals from AD patients compared with control subjects. Additionally, we evaluated the ability of Cross-ApEn to discriminate between these two groups using receiver operating characteristic (ROC) curves with a leave-one-out cross-validation procedure. We obtained an accuracy of 70.83% (66.67% sensitivity, 75% specificity) and an area under the ROC curve of 0.83. These results provide evidence of disconnection problems in AD. Our findings show the usefulness of Cross-ApEn for detecting brain dysfunction in AD.
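As a hedged illustration of the evaluation step (not the authors' exact pipeline), the following scores a single per-subject feature with leave-one-out cross-validation and an ROC analysis using scikit-learn; the logistic-regression classifier and the random placeholder data are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(24, 1))                               # placeholder: one Cross-ApEn value per subject
y = np.r_[np.ones(12, dtype=int), np.zeros(12, dtype=int)]  # 12 AD patients, 12 controls

# Leave-one-out: each subject is classified by a model trained on the other 23.
probs = cross_val_predict(LogisticRegression(), X, y,
                          cv=LeaveOneOut(), method="predict_proba")[:, 1]
print("AUC:", roc_auc_score(y, probs), "accuracy:", accuracy_score(y, probs > 0.5))
```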


Euro American Conference on Telematics and Information Systems | 2012

Analysis of the benefits and constraints for the implementation of cloud computing over an EHRs system

Francisco Javier Díaz-Pernas; Gonzalo Fernández; M. Antón-Rodríguez; Mario Martínez-Zarzuela; D. González-Ortega; D. Boto-Giralda

The Cloud Computing paradigm represents a radical change in IT. This shift offers many benefits in terms of e-services, and Cloud Computing provides a new way to implement electronic management systems in a wide variety of fields, e-health among them. Even though Cloud Computing is still under development, there are many opportunities to apply it to e-health services. In this paper we discuss the viability of implementing this new model for an Electronic Health Records (EHRs) system, and to address this question we analyze the benefits and constraints that arise in this kind of system.


Journal of Medical Systems | 2012

Comparison of Response Times of a Mobile-Web EHRs System Using PHP and JSP Languages

M. Antón-Rodríguez; Francisco Javier Díaz-Pernas; Freddy José Perozo-Rondón

Performance evaluation is highly important in Electronic Health Records (EHRs) system implementation, and measuring response times is one way to carry out that evaluation. In the e-health field, once EHRs are made available through different platforms such as the Web and/or mobile devices, a performance evaluation is necessary to ensure the system operates correctly. In this paper, a comparison of the response times of the MEHRmobile system is presented. The first version uses the PHP language with a MySQL database and the second uses JSP with an eXist database. Both versions have the same functionality; beyond the technological aspects, a significant difference is the way the information is stored. The main goal of this paper is to choose the version that offers better response times, and a new benchmark was created to measure them. Better results were obtained for the PHP version, which is currently being used by specialists from Fundación Intras, Spain.
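The benchmark itself is not described in the abstract; as a rough sketch of the idea, the snippet below times repeated HTTP requests against an endpoint with Python's requests library. The URL, request count and endpoint name are placeholders, not the paper's benchmark.

```python
import statistics
import requests

def benchmark(url, n_requests=50):
    """Return mean and standard deviation of response times (seconds) for a URL."""
    times = []
    for _ in range(n_requests):
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        times.append(response.elapsed.total_seconds())  # measured round-trip time
    return statistics.mean(times), statistics.stdev(times)

# Example (hypothetical endpoint): benchmark("http://localhost/mehrmobile/records", n_requests=100)
```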


Global Engineering Education Conference | 2010

Adapting the Telecommunication Engineering curriculum to the EEES: A project based learning tied to several subjects

J. F. Díez-Higuera; M. Antón-Rodríguez; Francisco Javier Díaz-Pernas; Mario Martínez-Zarzuela; D. González-Ortega; D. Boto-Giralda; Miguel López-Coronado; B. Sainz-de Abajo; I. de la Torre-Díez

This paper describes the process of adapting several subjects aimed at Information and Communication Technologies (ICT) learning to the European Credit Transfer System requirements. Specifically, these subjects belong to the Telecommunications Engineering degree taught at the University of Valladolid. In a first step, two first-year subjects have been adapted, while in a second and final step, coinciding with the start of the new degrees, the adaptation will be extended to five subjects placed in consecutive semesters. The overall programming work has been divided into several subprojects of growing complexity, developed in subjects located in different and successive semesters of the degree, following a pathway that leads to the development of a global project throughout four years. The whole learning process is ICT-supported, with tools offered to overcome distance and scheduling barriers. In particular, the Moodle platform is used, enhanced with self-evaluation and co-evaluation tools developed by the teaching group. The main innovation with respect to the classical approach is a computer programming subject focused on student learning and based on a detailed specification of the activities students have to perform in and out of the classroom in order to achieve the educational objectives of each subject. The educational strategies used to accomplish these objectives are based on cooperative learning, on teamwork developing a programming project (Project-Based Learning, PBL), and on discovery learning.

Collaboration


Dive into Francisco Javier Díaz-Pernas's collaborations.

Top Co-Authors

Alberto Fernández (Complutense University of Madrid)

Carlos Gómez (University of Valladolid)