Carlos Figuera
Rey Juan Carlos University
Publications
Featured research published by Carlos Figuera.
IEEE Transactions on Mobile Computing | 2011
Carlos Figuera; José Luis Rojo-Álvarez; Inmaculada Mora-Jiménez; Alicia Guerrero-Curieses; Mark Richard Wilby; Javier Ramos-López
Indoor location systems based on IEEE 802.11b (WiFi) mobile devices often rely on the received signal strength indicator to estimate the user position. Two key characteristics of these systems have not yet been fully analyzed, namely, the temporal and spatial sampling process required to adequately describe the distribution of the electromagnetic field in indoor scenarios, and the device calibration necessary for supporting different mobile devices within the same system. Using a previously proposed nonparametric methodology for system comparison, we first analyzed the time-space sampling requirements for WiFi indoor location systems in terms of conventional sampling theory and system performance. We also proposed and benchmarked three new algorithms for device calibration, with increasing levels of complexity and performance. We conclude that feasible time and space sampling rates can be used, and that calibration algorithms make it possible to handle previously unknown mobile devices in the system.
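The calibration algorithms themselves are not detailed in the abstract; as a rough illustration of the simplest kind of device calibration, a linear mapping between paired RSS readings of a reference and a new device could look like the following sketch (synthetic data, hypothetical variable names):

```python
import numpy as np

# Hypothetical example: linear RSS calibration between two devices.
# rss_ref and rss_new are paired readings (dBm) taken at the same
# locations with the reference and the uncalibrated device.
rng = np.random.default_rng(0)
rss_ref = rng.uniform(-90, -40, size=200)
rss_new = 0.9 * rss_ref - 4.0 + rng.normal(0, 2, size=200)

# Least-squares fit of rss_ref ~ a * rss_new + b.
a, b = np.polyfit(rss_new, rss_ref, deg=1)

def calibrate(rss):
    """Map readings from the new device onto the reference scale."""
    return a * rss + b

print(f"gain={a:.2f}, offset={b:.2f} dB")
```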
Signal Processing | 2012
Carlos Figuera; José Luis Rojo-Álvarez; Mark Richard Wilby; Inmaculada Mora-Jiménez; Antonio J. Caamaño
Due to the proliferation of ubiquitous computing services, locating a device in indoor scenarios has received special attention in recent years. A variety of algorithms are based on Wi-Fi measurements of the received signal strength and estimate the relation between signal strength and position using previous measurements at known locations. This problem fits naturally with learning algorithms such as neural networks or support vector machines (SVM). However, existing machine learning techniques do not significantly outperform simpler techniques such as k-nearest neighbors (k-NN), mainly because these solutions do not include significant a priori information. In this paper, we propose a technique to enhance these algorithms by including certain a priori information within the learning machine, using the spectral information of the training set, and a complex output to take advantage of the cross information between the two dimensions of the location. Specifically, we modify an SVM algorithm to obtain three advanced methods incorporating this information: one using an autocorrelation kernel, another using a complex output, and a third combining both. These algorithms are compared to k-NN and an SVM with Gaussian kernel, showing that including the a priori information improves the location performance.
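The paper's SVM formulations are not reproduced here, but the complex-output idea can be sketched with kernel ridge regression as a stand-in for the SVM: the 2-D position is encoded as x + jy so both coordinates are estimated jointly (synthetic data; the Gaussian kernel below replaces the paper's autocorrelation kernel):

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): kernel ridge
# regression with a complex-valued output encoding the 2-D position.
# X: (n, d) RSS fingerprints; pos: (n, 2) known locations.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))        # synthetic RSS training set
pos = rng.uniform(0, 10, size=(100, 2))
y = pos[:, 0] + 1j * pos[:, 1]       # complex target: x + jy

def gaussian_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

K = gaussian_kernel(X, X)
alpha = np.linalg.solve(K + 0.1 * np.eye(len(X)), y)  # ridge weights

def locate(x_new):
    """Estimate the position for a new RSS fingerprint."""
    k = gaussian_kernel(x_new[None, :], X)[0]
    z = k @ alpha                    # complex position estimate
    return z.real, z.imag
```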
IEEE Transactions on Mobile Computing | 2009
Carlos Figuera; Inmaculada Mora-Jiménez; Alicia Guerrero-Curieses; José Luis Rojo-Álvarez; Estrella Everss; Mark Richard Wilby; Javier Ramos-López
Indoor location (IL) using received signal strength (RSS) is receiving much attention, mainly due to its ease of use in deployed IEEE 802.11b (Wi-Fi) wireless networks. Fingerprinting is the most widely used technique. It consists of estimating position by comparing a set of RSS measurements, made by the mobile device, with a database of RSS measurements whose locations are known. However, the most convenient data structure to use and the actual performance of the proposed fingerprinting algorithms are still controversial. In addition, the statistical distribution of indoor RSS is not easy to characterize. Therefore, we propose here the use of nonparametric statistical procedures for diagnosis of the fingerprinting model, specifically: 1) a nonparametric statistical test, based on paired bootstrap resampling, for comparison of different fingerprinting models; and 2) new accuracy measurements (the uncertainty area and its bias) which take into account the complex nature of the fingerprinting output. The bootstrap comparison test and the accuracy measurements are used for RSS-IL in our Wi-Fi network, providing relevant information about the different fingerprinting schemes that can be used.
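A minimal sketch of the paired bootstrap comparison idea, assuming two fingerprinting models evaluated on the same test points (all data synthetic):

```python
import numpy as np

# err_a and err_b are location errors of two models on the same test
# points; we resample the paired differences to test whether the mean
# difference is significantly different from zero.
rng = np.random.default_rng(2)
err_a = rng.gamma(2.0, 1.5, size=300)      # synthetic errors, model A
err_b = err_a + rng.normal(0.3, 1.0, 300)  # model B, slightly worse

diff = err_b - err_a
boot = np.array([
    rng.choice(diff, size=diff.size, replace=True).mean()
    for _ in range(5000)
])
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"95% CI of mean error difference: [{ci_low:.2f}, {ci_high:.2f}]")
# If the interval excludes zero, the models' accuracies differ.
```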
Signal Processing | 2014
Carlos Figuera; Óscar Barquero-Pérez; José Luis Rojo-Álvarez; Manel Martínez-Ramón; Alicia Guerrero-Curieses; Antonio J. Caamaño
Interpolation of nonuniformly sampled signals in the presence of noise is a widely analyzed problem in signal processing applications. Interpolators based on Support Vector Machines (SVM) with Gaussian and sinc Mercer kernels have been previously proposed, obtaining good performance in terms of regularization and sparseness. In this paper, inspired by the classical spectral interpretation of the Wiener filter, we explore the impact of adapting the spectrum of the SVM kernel to that of the observed signal. We provide a theoretical foundation for this approach based on a continuous-time equivalent system for interpolation. We study several kernels with different degrees of spectral adaptation to band-pass signals, namely, modulated kernels and autocorrelation kernels. The proposed algorithms are evaluated through extensive simulations with synthetic signals and an application example with real data. Our approach is compared with SVM with Gaussian and sinc kernels and with other well-known interpolators. The SVM with autocorrelation kernel provides the highest performance in terms of signal-to-error ratio in several scenarios. We conclude that the estimated (or actual, if known) autocorrelation of the observed sequence can be straightforwardly used as a spectrally adapted kernel, outperforming the classic SVM with low-pass kernels for nonuniform interpolation.
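As an illustration of a spectrally adapted kernel, the sketch below uses kernel ridge regression (a stand-in for the paper's SVM) with the known autocorrelation of an ideal band-pass process as the kernel; the signal, band parameters, and regularization value are assumptions for the example:

```python
import numpy as np

# Nonuniform samples of a band-pass signal, interpolated with a kernel
# equal to the process autocorrelation (a modulated sinc).
rng = np.random.default_rng(3)
f0, B = 5.0, 2.0                       # assumed center freq / bandwidth
t = np.sort(rng.uniform(0, 4, 80))     # nonuniform sampling instants
x = np.cos(2 * np.pi * f0 * t) + 0.1 * rng.normal(size=t.size)

def r(tau):
    # Autocorrelation of an ideal band-pass process.
    return np.sinc(B * tau) * np.cos(2 * np.pi * f0 * tau)

K = r(t[:, None] - t[None, :])                 # Gram matrix
alpha = np.linalg.solve(K + 1e-2 * np.eye(t.size), x)

t_grid = np.linspace(0, 4, 1000)               # dense reconstruction grid
x_hat = r(t_grid[:, None] - t[None, :]) @ alpha
```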
IEEE Transactions on Wireless Communications | 2013
Antonio G. Marques; Carlos Figuera; Carlos Rey-Moreno; Javier Simo-Reigadas
Convex optimization and dual decomposition have been successfully used to design cross-layer resource allocation algorithms for cellular access networks. However, less effort has been devoted to designing optimal algorithms for systems equipped with relay stations. The presence of relay stations renders the design of the access schemes more difficult and requires consideration of additional constraints. The present paper relies on a sum-utility constrained maximization framework to design cross-layer algorithms that guarantee diverse quality of service (QoS) and consider different forwarding strategies at the relay stations. One of the main challenges in the design is the joint consideration of both long-term (elastic) and short-term (real-time) constraints, which account for diverse delay QoS requirements and relay forwarding strategies. A two-step methodology is proposed to deal efficiently with this challenge. Specifically, at each time instant it applies: a) an approximate online method to estimate the multipliers for the long-term constraints and the corresponding primal variables (resources), and b) a classical iterative method to calculate the multipliers for the short-term constraints and the corresponding primal variables. Our approach incurs an arbitrarily small loss of optimality and can accommodate both static and fading channels.
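The paper's two-step method is not reproduced here; the toy sketch below only illustrates the general dual-subgradient machinery behind such designs, with an online multiplier update for a single long-term (average power) constraint and hypothetical parameter values:

```python
import numpy as np

# Toy sketch: allocate power p_k across K users to maximize
# sum log(1 + g_k * p_k) subject to a long-term average-power
# constraint, estimating the multiplier online.
rng = np.random.default_rng(4)
K, P_max, step = 4, 10.0, 0.05
lam, avg_p = 1.0, 0.0
for t in range(2000):
    g = rng.exponential(1.0, size=K)          # fading gains this slot
    # Primal step: water-filling given the current multiplier.
    p = np.maximum(1.0 / lam - 1.0 / g, 0.0)
    # Dual step: subgradient of the long-term power constraint.
    lam = max(lam + step * (p.sum() - P_max), 1e-6)
    avg_p += (p.sum() - avg_p) / (t + 1)      # running average power
print(f"multiplier ~ {lam:.3f}, average power ~ {avg_p:.2f}")
```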
Frontiers in Physiology | 2016
Carlos Figuera; Víctor Suárez-Gutiérrez; Ismael Hernandez-Romero; Miguel Rodrigo; Alejandro Liberos; Felipe Atienza; Maria S. Guillem; Óscar Barquero-Pérez; Andreu M. Climent; Felipe Alonso-Atienza
The inverse problem of electrocardiography is usually analyzed during stationary rhythms. However, the performance of the regularization methods under fibrillatory conditions has not been fully studied. In this work, we assessed different regularization techniques during atrial fibrillation (AF) for estimating four target parameters, namely, epicardial potentials, dominant frequency (DF), phase maps, and singularity point (SP) location. We used a realistic mathematical model of atria and torso anatomy with three different electrical activity patterns (i.e., sinus rhythm, simple AF, and complex AF). Body surface potentials (BSP) were simulated using the Boundary Element Method and corrupted with white Gaussian noise of different powers. Noisy BSPs were used to obtain the epicardial potentials on the atrial surface, using 14 different regularization techniques. DF, phase maps, and SP location were computed from the estimated epicardial potentials. Inverse solutions were evaluated using a set of performance metrics adapted to each clinical target. For the case of SP location, an assessment methodology based on the spatial mass function of the SP location and four spatial error metrics was proposed. The role of the regularization parameter for Tikhonov-based methods, and the effect of noise level and of imperfections in the knowledge of the transfer matrix, were also addressed. Results showed that the Bayes maximum-a-posteriori method clearly outperforms the rest of the techniques but requires a priori information about the epicardial potentials. Among the purely non-invasive techniques, Tikhonov-based methods performed as well as more complex techniques in realistic fibrillatory conditions, with a slight gain between 0.02 and 0.2 in terms of the correlation coefficient. Also, the use of a constant regularization parameter may be advisable, since the performance was similar to that obtained with a variable parameter (indeed, there was no difference for the zero-order Tikhonov method in complex fibrillatory conditions). Regarding the different targets, DF and SP location estimation were more robust with respect to pattern complexity and noise, and most algorithms provided a reasonable estimation of these parameters even when the epicardial potential estimation was inaccurate. Finally, the proposed evaluation procedure and metrics represent a suitable framework for benchmarking techniques and provide useful insights for clinical practice.
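Of the 14 techniques compared, zero-order Tikhonov regularization is the easiest to sketch; the toy example below uses random matrices in place of a realistic transfer matrix and reports the correlation coefficient metric mentioned in the abstract:

```python
import numpy as np

# Minimal sketch: estimate epicardial potentials x from body surface
# potentials b = A x + noise, with A the (assumed known) transfer
# matrix. Dimensions here are illustrative only.
rng = np.random.default_rng(5)
A = rng.normal(size=(64, 128))            # torso-to-atria transfer matrix
x_true = rng.normal(size=128)             # "true" epicardial potentials
b = A @ x_true + 0.1 * rng.normal(size=64)

lam = 1e-1                                # regularization parameter
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(128), A.T @ b)

corr = np.corrcoef(x_true, x_hat)[0, 1]   # correlation metric
print(f"correlation coefficient: {corr:.2f}")
```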
IEEE Transactions on Biomedical Engineering | 2017
Óscar Barquero-Pérez; Carlos Figuera; Rebeca Goya-Esteban; Inmaculada Mora-Jiménez; Francisco Javier Gimeno-Blanes; Pablo Laguna; Juan Pablo Martínez; Eduardo Gil; Leif Sörnmo; Arcadio García-Alberola; José Luis Rojo-Álvarez
OBJECTIVE: Heart rate turbulence (HRT) has been successfully explored for cardiac risk stratification. While HRT is known to be influenced by the heart rate (HR) and the coupling interval (CI), nonconcordant results have been reported on how the CI influences HRT. The purpose of this study is to investigate HRT changes in terms of CI and HR by means of a specially designed protocol. METHODS: A dataset was acquired from 11 patients with structurally normal hearts, for whom CI was altered by different pacing trains and HR by isoproterenol during an electrophysiological study (EPS). The protocol was designed so that, first, the effect of HR changes on HRT and, second, the combined effect of HR and CI could be explored. As a complement to the EPS dataset, a database of 24-h Holter recordings from 61 acute myocardial infarction (AMI) patients was studied for the purpose of risk assessment. Data analysis was performed using different nonlinear ridge regression models, and the relevance of the model variables was assessed using resampling methods. The EPS subjects, with and without isoproterenol, were analyzed separately. RESULTS: The proposed nonlinear regression models were found to account for the influence of HR and CI on HRT, both in patients undergoing EPS without isoproterenol and in low-risk AMI patients, whereas this influence was absent in high-risk AMI patients. Moreover, model coefficients related to CI were not statistically significant (p > 0.05) in EPS subjects with isoproterenol. CONCLUSION: The observed relationship between CI and HRT, in agreement with the baroreflex hypothesis, was statistically significant when decoupling the effect of HR and normalizing the CI by the HR. SIGNIFICANCE: The results of this study can help to provide new risk indicators that take into account physiological influences on HRT, as well as to model how these influences change in different cardiac conditions.
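As a purely illustrative sketch (synthetic data, not the study's models), a nonlinear ridge regression of an HRT index on HR and CI could be set up with polynomial features:

```python
import numpy as np

# Illustrative only: synthetic stand-ins for the study's measurements.
rng = np.random.default_rng(6)
n = 200
hr = rng.uniform(50, 110, n)                   # heart rate (bpm)
ci = rng.uniform(300, 600, n)                  # coupling interval (ms)
hrt = 0.02 * ci / hr + rng.normal(0, 0.05, n)  # toy HRT index

# Second-order polynomial design matrix with an intercept term.
Phi = np.column_stack([np.ones(n), hr, ci, hr * ci, hr**2, ci**2])
lam = 1.0                                      # ridge penalty
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ hrt)
hrt_hat = Phi @ w                              # fitted HRT values
```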
Asilomar Conference on Signals, Systems and Computers | 2012
Antonio G. Marques; Javier Ramos; Carlos Figuera; Eduardo Morgado
We design adaptive resource allocation schemes for cognitive radios so that the sum-rate of secondary users is optimized while the damage (interference) to the primary users is kept under control. Secondary users transmit orthogonally and adhere to limits on: a) the long-term interfering power at each primary receiver, and b) the long-term capacity loss inflicted on each primary receiver. We first analyze the single-antenna case and then consider the case where the secondary users implement adaptive beamforming. The focus of the paper is on scenarios where users can implement only a finite number of power levels and beamforming vectors. Although b) renders the resulting optimization problem non-convex, the problem has zero duality gap and, due to the favorable structure in the dual domain, can be solved in polynomial time. Specifically, the computational complexity required to obtain the optimum resource allocation for a given fading realization is proportional to the number of secondary users, primary users (channels), power levels, and beamforming vectors.
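A toy illustration of why the per-realization complexity scales linearly with users and power levels: once the multipliers are fixed, each secondary user evaluates its Lagrangian at every discrete power level and keeps the best one (all values hypothetical):

```python
import numpy as np

# Hypothetical setup: discrete power levels, one primary receiver.
rng = np.random.default_rng(7)
levels = np.array([0.0, 0.5, 1.0, 2.0])   # allowed power levels
g_sec = rng.exponential(1.0, size=3)      # secondary-link gains
g_int = rng.exponential(0.2, size=3)      # gains to the primary receiver
lam_int = 0.8                             # interference multiplier

for k, (gs, gi) in enumerate(zip(g_sec, g_int)):
    # Per-user Lagrangian: rate minus priced interference.
    score = np.log2(1 + gs * levels) - lam_int * gi * levels
    p_best = levels[np.argmax(score)]
    print(f"user {k}: power {p_best}")
```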
International Conference on Intelligent Transportation Systems | 2011
Carlos Figuera; J. M. Lillo; Inmaculada Mora-Jiménez; José Luis Rojo-Álvarez; Antonio J. Caamaño
According to previous studies, traffic accident data exhibit a spatial dependence which should be taken into account in the analysis. For this purpose, a proper spatial segmentation of accidents should be carried out so that subsequent spatial analysis can provide significant results. In this work, we propose a method for spatial clustering of multiple variables in order to produce a new spatial characterization of the different road stretches and then assign them to a small set of typical accident classes according to their risk profile. First, every road is segmented according to an estimate of the corresponding spatial accident density. Then, each segment is characterized by a numerical vector representing accident attributes. The spatial clustering is performed in the third stage by applying a k-means clustering algorithm. Traffic accident data from the Comunidad Valenciana, in Spain, have been used to test our method. Results show that our approach is a flexible and intuitive way to spatially characterize the roads of the region under study, and even to find relationships between values of the analyzed risk factors.
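A minimal sketch of the third stage, assuming hypothetical segment features (the actual attributes used in the paper are not listed in the abstract):

```python
import numpy as np
from sklearn.cluster import KMeans

# Each road segment is described by a vector of accident attributes;
# k-means groups segments with similar risk profiles. Columns could
# be, e.g., accident density, share of night-time accidents, share
# involving bad weather (illustrative only).
rng = np.random.default_rng(8)
segments = rng.uniform(size=(500, 3))      # synthetic feature vectors

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(segments)
labels = km.labels_            # typical-risk-profile class per segment
centers = km.cluster_centers_  # representative profile of each class
```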
International Journal of Human Capital and Information Technology Professionals | 2011
Carlos Figuera; Eduardo Morgado; David Gutiérrez-Pérez; Felipe Alonso-Atienza; Eduardo del Arco-Fernández-Cano; Antonio J. Caamaño; Javier Ramos-López; Julio Ramiro-Bargueño; Jesús Requena-Carrión
The Telecommunications Engineering degree covers the study of a wide range of knowledge areas, such as signal theory and communications, computer networks, and radio propagation. This diversity makes it hard for students to integrate different concepts, which is essential for tackling real, practical problems that involve different subjects. In response to this need for integration, a group of professors at Rey Juan Carlos University carried out an educational project based on Problem-Based Learning (PBL), called the Wireless4x4 Project. In this project, groups of students build a complete system to autonomously drive a radio-controlled car, involving different technologies such as wireless communications, positioning systems, power management, and system integration. The results show that the participating students improve not only their specific knowledge of the topics involved, but also their ability to integrate different subjects of the degree and their skills for autonomous learning.