Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Eduardo Morgado is active.

Publication


Featured research published by Eduardo Morgado.


IEEE Transactions on Wireless Communications | 2010

End-to-End Average BER in Multihop Wireless Networks over Fading Channels

Eduardo Morgado; Inmaculada Mora-Jiménez; Juan J. Vinagre; Javier Ramos; Antonio J. Caamaño

This paper addresses the problem of finding an analytical expression for the end-to-end Average Bit Error Rate (ABER) in multihop Decode-and-Forward (DAF) routes within the context of wireless networks. We provide an analytical recursive expression for the most generic case of any number of hops and any single-hop ABER for every hop in the route. We then solve the recursive relationship in two scenarios to obtain simple expressions for the end-to-end ABER, namely: (a) the simplest case, where all the relay channels have identical statistical behaviour; and (b) the most general case, where every relay channel has a different statistical behaviour. Along with the theoretical proofs, we test our results against simulations. We then use the previous results to obtain closed analytical expressions for the end-to-end ABER considering DAF relays over Nakagami-m fading channels with various modulation schemes. We compare these results with the corresponding expressions for Amplify-and-Forward (AAF) and, after corroborating the theoretical results with simulations, we conclude that the DAF strategy is more advantageous than AAF over Nakagami-m fading channels as both the number of relays and the m-index increase.
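
The paper's recursive expression covers arbitrary modulations and arbitrary per-hop ABERs. As a simple illustration of the underlying idea only (an assumption, not the paper's exact formula), the sketch below computes the end-to-end ABER of a DAF route for binary modulation with statistically independent decoding errors per hop, where an end-to-end bit error occurs when an odd number of hops flip the bit.

```python
# Minimal sketch: end-to-end ABER of a decode-and-forward route, assuming
# binary modulation and independent decoding errors per hop (illustrative,
# not the paper's general expression).

def end_to_end_aber(hop_abers):
    """hop_abers: per-hop average bit error rates, one entry per hop."""
    p = 0.0  # ABER after zero hops
    for p_k in hop_abers:
        # the bit is wrong after hop k if it was wrong and relayed correctly,
        # or it was right and hop k introduces an error
        p = p * (1.0 - p_k) + (1.0 - p) * p_k
    return p

# Example: three hops with identical per-hop ABER of 1e-2
print(end_to_end_aber([1e-2, 1e-2, 1e-2]))  # ~0.0294
```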


IEEE Transactions on Biomedical Engineering | 2014

Detection of Life-Threatening Arrhythmias Using Feature Selection and Support Vector Machines

Felipe Alonso-Atienza; Eduardo Morgado; Lorena Fernandez-Martinez; Arcadi García-Alberola; José Luis Rojo-Álvarez

Early detection of ventricular fibrillation (VF) and rapid ventricular tachycardia (VT) is crucial for the success of defibrillation therapy. A wide variety of detection algorithms have been proposed based on temporal, spectral, or complexity parameters extracted from the ECG. However, these algorithms are mostly constructed by considering each parameter individually. In this study, we present a novel life-threatening arrhythmia detection algorithm that combines a number of previously proposed ECG parameters by using support vector machine classifiers. A total of 13 parameters were computed, accounting for temporal (morphological), spectral, and complexity features of the ECG signal. A filter-type feature selection (FS) procedure was proposed to analyze the relevance of the computed parameters and how they affect detection performance. The proposed methodology was evaluated in two different binary detection scenarios: shockable (VF plus VT) versus nonshockable arrhythmias, and VF versus non-VF rhythms, using the information contained in the MIT-BIH database, the Creighton University ventricular tachycardia database, and the ventricular arrhythmia database. Sensitivity (SE) and specificity (SP) analysis on the out-of-sample test data showed values of SE=95%, SP=99% and SE=92%, SP=97% in the shockable and VF scenarios, respectively. Our algorithm was benchmarked against individual detection schemes, significantly improving on their performance. Our results demonstrate that combining ECG parameters using statistical learning algorithms improves the detection of life-threatening arrhythmias.
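
As a rough illustration of this kind of pipeline (an assumed stand-in, not the authors' exact feature set, FS filter, or tuning), the sketch below chains a filter-type feature selector with an SVM classifier in scikit-learn; the data arrays are random placeholders for the 13 ECG-derived parameters and the binary labels.

```python
# Illustrative sketch: filter-type feature selection feeding an SVM for
# shockable vs. nonshockable rhythm detection (placeholder data).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 13))      # placeholder for the 13 ECG parameters
y = rng.integers(0, 2, size=500)    # placeholder shockable / nonshockable labels

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=8),    # filter-type feature selection (assumed filter)
    SVC(kernel="rbf", C=1.0, gamma="scale"),
)
print(cross_val_score(clf, X, y, cv=5).mean())
```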


Biomedical Engineering Online | 2015

Quality estimation of the electrocardiogram using cross-correlation among leads

Eduardo Morgado; Felipe Alonso-Atienza; Ricardo Santiago-Mozos; Óscar Barquero-Pérez; Ikaro Silva; Javier Ramos; Roger G. Mark

Background: Fast and accurate quality estimation of the electrocardiogram (ECG) signal is a relevant research topic that has attracted considerable interest in the scientific community, particularly due to its impact on telemedicine monitoring systems, where the ECG is collected by untrained technicians. In recent years, a number of studies have addressed this topic, showing poor performance in discriminating between clinically acceptable and unacceptable ECG records.
Methods: This paper presents a novel, simple and accurate algorithm to estimate the quality of the 12-lead ECG by exploiting the structure of the cross-covariance matrix among different leads. Ideally, ECG signals from different leads should be highly correlated since they capture the same electrical activation process of the heart; however, in the presence of noise or artifacts the covariance among these signals is affected. The eigenvalues of the ECG signal covariance matrix are fed into three different supervised binary classifiers.
Results and conclusions: The performance of these classifiers was evaluated using PhysioNet/CinC Challenge 2011 data. Our best quality classifier achieved an accuracy of 0.898 on the test set, with a computational complexity well below that of the Challenge contestants' entries, making it suitable for implementation on current cellular devices.
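
The core feature described in the abstract can be sketched directly: compute the covariance among the 12 leads and use its eigenvalue spectrum as the input to a binary quality classifier. The baseline removal and normalization below are assumptions for illustration, not the paper's exact preprocessing.

```python
# Sketch of the quality feature: eigenvalues of the inter-lead covariance
# matrix of a 12-lead ECG record. Clean records are strongly correlated across
# leads (energy concentrated in a few eigenvalues); noisy records spread
# energy across the spectrum.
import numpy as np

def lead_covariance_eigenvalues(ecg):
    """ecg: array of shape (12, n_samples), one row per lead."""
    ecg = ecg - ecg.mean(axis=1, keepdims=True)   # remove per-lead offset
    cov = np.cov(ecg)                              # 12 x 12 inter-lead covariance
    eigvals = np.linalg.eigvalsh(cov)[::-1]        # sorted, largest first
    return eigvals / eigvals.sum()                 # normalized spectrum (assumption)

# These eigenvalues would then be fed to a supervised binary classifier
# (the paper evaluates three different ones).
```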


IEEE Transactions on Wireless Communications | 2015

Scalable Data-Coupled Clustering for Large Scale WSN

Mihaela I. Chidean; Eduardo Morgado; Eduardo del Arco; Julio Ramiro-Bargueño; Antonio J. Caamaño

Self-organizing algorithms (SOAs) for wireless sensor networks (WSNs) usually seek to increase the lifetime, to minimize unnecessary transmissions, or to maximize the transport capacity. The goal left out in the design of this type of algorithm is the capability of the WSN to ensure an accurate reconstruction of the sensed field while maintaining self-organization. In this work, we formulate a general framework in which the data from the resulting clusters ensure the well-posedness of the signal processing problem in each cluster. We develop the second-order data-coupled clustering (SODCC) algorithm and the distributed compressive-projections principal component analysis (D-CPPCA) algorithm, which use second-order statistics. The condition to form a cluster is that D-CPPCA is able to resolve the principal components in the cluster. We show that SODCC is scalable and has message complexity similar to or better than that of other well-known SOAs. We validate these results with extensive computer simulations using data from an actual large-scale WSN. We also show that, compared with other state-of-the-art SOAs, the performance of SODCC is better at any compression rate and requires no prior adjustment of any parameter. Finally, we show that SODCC compares well with other energy-efficient clustering algorithms in terms of energy consumption while excelling in average data-reconstruction SNR.
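
As a loose illustration only: the sketch below uses a simple eigengap test on a cluster's sample covariance as a stand-in for the cluster-formation condition mentioned above. The actual D-CPPCA resolvability criterion is different and more elaborate; this proxy is an assumption.

```python
# Illustrative proxy (assumption, not the SODCC/D-CPPCA criterion): accept a
# candidate cluster only if the leading principal components of its members'
# data are clearly separated from the rest of the spectrum.
import numpy as np

def principal_components_resolvable(cluster_data, n_components, min_gap=0.1):
    """cluster_data: shape (n_nodes, n_samples). Returns True if the top
    n_components eigenvalues are separated from the rest by at least min_gap."""
    cov = np.cov(cluster_data)                      # node-by-node covariance
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    if n_components >= len(eigvals):
        return False
    return (eigvals[n_components - 1] - eigvals[n_components]) >= min_gap
```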


IEEE Sensors Journal | 2016

Energy Efficiency and Quality of Data Reconstruction Through Data-Coupled Clustering for Self-Organized Large-Scale WSNs

Mihaela I. Chidean; Eduardo Morgado; Margarita Sanromán-Junquera; Julio Ramiro-Bargueño; Javier Ramos; Antonio J. Caamaño

Energy efficiency has been a leading issue in Wireless Sensor Networks (WSNs) and has produced a vast amount of research. Although the classic tradeoff has been between the quality of the gathered data and the lifetime of the network, most works gave preference to an increased network lifetime at the expense of data quality. A common approach to energy efficiency is partitioning the network into clusters with correlated data, where representative nodes simply transmit or average measurements inside the cluster. In this paper, we explore the joint use of in-network processing techniques and clustering algorithms. This approach seeks high data quality with a controlled number of transmissions (through an aggregation function) and an energy-efficient network partition, respectively. The aim of this combination is to increase energy efficiency without sacrificing data quality. We compare the performance of the Second-Order Data-Coupled Clustering (SODCC) and Compressive-Projections Principal Component Analysis (CPPCA) algorithm combination, in terms of both energy consumption and the quality of the data reconstruction, to other combinations of state-of-the-art clustering algorithms and in-network processing techniques. Among all the considered cases, the SODCC + CPPCA combination showed the best balance between data quality, energy expenditure, and ease of network management. The main conclusion of this paper is that the design of WSN algorithms must be processing-oriented rather than transmission-oriented, i.e., investing energy in both the clustering and in-network processing algorithms ensures both energy efficiency and data quality.


Personal, Indoor and Mobile Radio Communications | 2013

Self-Organized Distributed Compressive Projection in Large Scale Wireless Sensor Networks

Mihaela I. Chidean; Eduardo Morgado; Julio Ramiro; Antonio J. Caamaño

The optimal configuration for a Large Scale Wireless Sensor Network (LS-WSN) is the one that minimizes the sampling rate, the CPU time, and the channel accesses (thus maximizing the network lifetime), with a controlled distortion in the recovered data. Initial deployments of LS-WSNs are usually not able to adapt to changing environments and rarely take into account either the spatial or the temporal nature of the sensed variables, both of which can be exploited to optimize network operation. In this work we propose the use of Self-Organized Distributed Compressive Projection (SODCP) to let the nodes form clusters in a distributed and data-driven way, exploiting the spatial correlation of the sensed data. We compare the performance of this technique, using actual data from the LUCE LS-WSN, with two different baselines: Centralized Compressive Projection (CCP) and Distributed Compressive Projection (DCP). The former uses no clustering, whereas the latter makes use of an a priori clustering that favors proximity and balances the number of nodes in each cluster. We show that SODCP outperforms DCP in terms of signal-to-noise ratio versus compression rate. We also show that the performance of SODCP converges to that of CCP for relatively high compression rates of around 55%.


Personal, Indoor and Mobile Radio Communications | 2013

Wireless Sensor Network for low-complexity entropy determination of human gait

Mihaela I. Chidean; Giancarlo Pastor; Eduardo Morgado; Julio Ramiro-Bargueño; Antonio J. Caamaño

In this work we present a Wireless Sensor Network (WSN) system designed for the on-board determination of human gait entropy. The use of nonlinear entropy-based metrics has proven to be a useful tool for analyzing the complexity of biological systems. The final goal of entropy calculation in this type of biological system is to identify possible causes of future injuries (relevant to healthy aging) and to enable early injury detection (ideal for elite athletes). Existing systems for human gait analysis are limited to traditional data gathering, e.g. continuous measurement and wireless transmission to a Data Fusion Center (DFC), due to the computational burden of entropy calculation. In addition, current systems are likely to interfere with natural movement due to their cumbersome nature. The WSN presented here uses four sensor nodes, located at both ankles and both sides of the hip, each equipped with a triaxial accelerometer. We propose the use of low-complexity algorithms to perform on-board entropy determination prior to wireless transmission. The proposed system can be used to reliably determine long-term human gait entropy.
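
Sample entropy is a common nonlinear complexity metric for gait acceleration traces; the paper's exact low-complexity formulation may differ, so the function below is an illustrative sketch of the kind of on-node computation described, not the authors' algorithm.

```python
# Hedged sketch of an on-node gait entropy metric: sample entropy of a 1-D
# acceleration trace (illustrative; the paper's low-complexity variant may differ).
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy of signal x: embedding dimension m, tolerance r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n = len(x)

    def count_matches(dim):
        # count template pairs of length `dim` within Chebyshev distance r
        templates = np.array([x[i:i + dim] for i in range(n - dim)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```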


IEEE Sensors Journal | 2017

Ambulatory Gait Measurement System for Natural Environments

Mihaela I. Chidean; Eduardo del Arco; Eduardo Morgado; Julio Ramiro-Bargueño; Antonio J. Caamaño

In this paper, we propose a novel ambulatory gait measurement system based on a wireless sensor network consisting of four sensor nodes that measure the acceleration at specific spots on the lower limbs. The main novelty of the proposed system is that it can be used outside specialized laboratories, in outdoor environments where natural and spontaneous gait is favored. The system measures tri-axial ±8 g acceleration, allowing the full characterization of the complete gait cycle of both lower limbs. For the deployment, we opted for a commercial solution and selected the SunSPOT devices, which are the most appropriate from both technical and cost points of view. We also performed multiple experiments to technically validate the proposed measuring system, including battery duration and memory stress tests, to show its sufficiency in terms of energy and storage resources. In conclusion, with this paper we show that it is possible to measure gait physiological signals in a natural environment using commercial products, prioritizing the comfort and the natural motion of the person being measured.


IEEE Transactions on Intelligent Transportation Systems | 2015

Sparse Vehicular Sensor Networks for Traffic Dynamics Reconstruction

Eduardo del Arco; Eduardo Morgado; Mihaela I. Chidean; Julio Ramiro-Bargueño; Inmaculada Mora-Jiménez; Antonio J. Caamaño

In this paper, we propose the use of an ad hoc wireless network formed by a fraction of the passing vehicles (sensor vehicles) to periodically recover their positions and speeds. A static roadside unit (RSU) gathers data from passing sensor vehicles, and the speed/position information, or space-time velocity (STV) field, is then reconstructed in a data fusion center with simple interpolation techniques. We use widely accepted theoretical traffic models (i.e., car-following, multilane, and overtake-enabled models) to replicate the nonlinear characteristics of the STV field in representative situations (congested, free, and transitional traffic). To obtain realistic packet losses, we simulate the multihop ad hoc wireless network with an IEEE 802.11p PHY layer. We conclude that: 1) for relevant configurations of both sensor-vehicle and RSU densities, the wireless multihop channel performance does not critically affect the STV reconstruction error; 2) the system performance is marginally affected by transmission errors under realistic traffic conditions; 3) the STV field can be recovered with minimal mean absolute error for a very small fraction of sensor vehicles (FSV) of approximately 9%; and 4) for that FSV value, the probability that at least one sensor vehicle transits the spatiotemporal regions that contribute the most to reducing the STV reconstruction error tends sharply to 1. Thus, a random and sparse selection of wireless sensor vehicles, under realistic traffic conditions, is sufficient to obtain an accurate reconstruction of the STV field.
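
A minimal sketch of the reconstruction stage, assuming scipy's griddata as the "simple interpolation technique"; the sampling pattern and speed profile below are synthetic placeholders, not the paper's traffic models.

```python
# Sketch: the fusion center rebuilds the space-time velocity (STV) field from
# sparse (position, time, speed) reports gathered by the roadside unit.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
n_reports = 400                                 # reports from the sensor vehicles
pos = rng.uniform(0.0, 5.0, n_reports)          # km along the road
t = rng.uniform(0.0, 600.0, n_reports)          # seconds
speed = 30.0 - 10.0 * np.exp(-((pos - 2.5) ** 2) / 0.5)   # toy congestion dip, m/s

# dense space-time grid on which the STV field is reconstructed
grid_pos, grid_t = np.meshgrid(np.linspace(0, 5, 100), np.linspace(0, 600, 120))
stv = griddata((pos, t), speed, (grid_pos, grid_t), method="linear")
```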


International Conference on Intelligent Transportation Systems | 2011

Vehicular Sensor Networks in congested traffic: Linking STV field reconstruction and communications channel

E. del Arco; Eduardo Morgado; Julio Ramiro-Bargueño; Inmaculada Mora-Jiménez; Antonio J. Caamaño

It has been proposed that the use of speed and position information from a subset of vehicles in the traffic (probe vehicles) can provide accurate traffic information. Furthermore, it can be seen as an economic and scalable alternative to the use of inductive loop detectors, cameras, and radars. However, the impact of the communications channel performance on the estimation of traffic states has been insufficiently studied. In this work we propose the use of the Wireless Sensor Network (WSN) paradigm to develop a Vehicular Sensor Network (VSN) in order to obtain accurate traffic information from a few probe vehicles. The problems that plague WSNs are that 1) the deployment has to cope with the variations of the measured field, here the Space-Time-Velocity (STV) field, and 2) the communications channel influences data collection reliability. To assess these two issues, we perform 1) accurate simulation of discrete vehicular traffic to obtain spatiotemporal patterns that closely mimic traffic congestion, and 2) accurate simulation of Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) wireless communications. We obtain experimental evidence that, with a Fraction of Sensor Vehicles (FSV) as low as 1%, we are able to obtain accurate measurements of traffic congestion, accounting for packet loss from Rayleigh fading, Doppler spreading, and multihop relaying. We present results for different FSVs and roadside unit (RSU) densities and show that, for FSVs of 10% and RSU densities half the usual detector densities, the reconstructed STV fields are virtually indistinguishable from the ground truth.

Collaboration


Dive into Eduardo Morgado's collaborations.

Top Co-Authors

Julio Ramiro-Bargueño, Complutense University of Madrid
Mihaela I. Chidean, Complutense University of Madrid
Javier Ramos, King Juan Carlos University
Carlos Figuera, King Juan Carlos University
Eduardo del Arco, Complutense University of Madrid
Antonio G. Marques, King Juan Carlos University