Ricardo Santiago-Mozos
Instituto de Salud Carlos III
Publications
Featured research published by Ricardo Santiago-Mozos.
Systems, Man and Cybernetics | 2004
Sancho Salcedo-Sanz; Ricardo Santiago-Mozos; Carlos Bousoño-Calzón
A hybrid Hopfield network-simulated annealing algorithm (HopSA) is presented for the frequency assignment problem (FAP) in satellite communications. The goal of this NP-complete problem is to minimize the cochannel interference between satellite communication systems by rearranging the frequency assignment, so that the systems can accommodate increasing demands. The HopSA algorithm consists of a fast digital Hopfield neural network, which manages the problem constraints, hybridized with simulated annealing, which improves the quality of the solutions obtained. We analyze the problem and its formulation, describe and discuss the HopSA algorithm, and solve a set of benchmark problems. The results obtained are compared with other existing approaches to show the performance of the HopSA approach.
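For readers unfamiliar with the hybrid metaheuristic idea, the following Python sketch shows a plain simulated-annealing loop for a toy frequency-assignment cost. It is an illustration under assumed data (a random interference matrix, a swap neighbourhood, a geometric cooling schedule) and does not reproduce the paper's Hopfield network for constraint handling.

    # Illustrative sketch only: simulated annealing over a toy frequency assignment.
    # The interference matrix, cooling schedule and neighbourhood move are hypothetical.
    import math
    import random

    def simulated_annealing_fap(interference, n_iter=20000, t0=1.0, alpha=0.9995, seed=0):
        """interference[i][f]: cochannel interference if carrier i uses frequency f."""
        rng = random.Random(seed)
        n = len(interference)
        assign = list(range(n))                  # one distinct frequency per carrier
        rng.shuffle(assign)
        cost = lambda a: sum(interference[i][a[i]] for i in range(n))
        cur_cost = cost(assign)
        best, best_cost, t = assign[:], cur_cost, t0
        for _ in range(n_iter):
            i, j = rng.sample(range(n), 2)
            assign[i], assign[j] = assign[j], assign[i]      # swap two frequencies
            new_cost = cost(assign)
            if new_cost < cur_cost or rng.random() < math.exp((cur_cost - new_cost) / t):
                cur_cost = new_cost
                if cur_cost < best_cost:
                    best, best_cost = assign[:], cur_cost
            else:
                assign[i], assign[j] = assign[j], assign[i]  # undo the move
            t *= alpha
        return best, best_cost

    if __name__ == "__main__":
        rng = random.Random(1)
        M = [[rng.random() for _ in range(6)] for _ in range(6)]
        print(simulated_annealing_fap(M))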
Acta Psychiatrica Scandinavica | 2007
Enrique Baca-García; Maria M. Perez-Rodriguez; Ignacio Basurte-Villamor; Jorge Lopez-Castroman; A. Fernández del Moral; J. L. González de Rivera; Jerónimo Saiz-Ruiz; José M. Leiva-Murillo; M. De Prado‐Cumplido; Ricardo Santiago-Mozos; Antonio Artés-Rodríguez; Maria A. Oquendo; J. de Leon
Objective: To evaluate the long‐term stability of International Classification of Diseases‐10th revision bipolar affective disorder (BD) in multiple settings.
Computers & Operations Research | 2005
Ricardo Santiago-Mozos; Sancho Salcedo-Sanz; Mario DePrado-Cumplido; Carlos Bousoño-Calzón
This paper presents, as a case study, the application of a two-phase heuristic evolutionary algorithm to obtain personalized timetables at a Spanish university. The algorithm consists of a two-phase heuristic which, starting from an initial ordering of the students, allocates students into groups, taking the students' preferences into account as the primary factor for the assignment. An evolutionary algorithm is then used to select the ordering of students that provides the best assignment. The algorithm has been tested on a real problem, the timetable of the Telecommunication Engineering School at the Universidade de Vigo (Spain), and has shown good performance in terms of the number of constraints fulfilled and groups assigned to students.
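A minimal sketch of the two-phase idea, under made-up group capacities and preference lists: a greedy allocator assigns students to groups in a given order, and a simple evolutionary loop searches over orderings. This is not the paper's algorithm or data.

    # Sketch only: greedy allocation of students to groups (phase 1) inside a
    # toy evolutionary search over student orderings (phase 2). Capacities,
    # preferences and the fitness definition are hypothetical.
    import random

    def allocate(order, prefs, capacity):
        """Give each student, in the given order, the most preferred group
        that still has room; return the total satisfied preference."""
        load = {g: 0 for g in capacity}
        score = 0
        for s in order:
            for rank, g in enumerate(prefs[s]):
                if load[g] < capacity[g]:
                    load[g] += 1
                    score += len(prefs[s]) - rank    # higher score for higher-ranked group
                    break
        return score

    def evolve_ordering(prefs, capacity, pop_size=30, generations=200, seed=0):
        rng = random.Random(seed)
        students = list(prefs)
        pop = [rng.sample(students, len(students)) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda o: allocate(o, prefs, capacity), reverse=True)
            survivors = pop[: pop_size // 2]
            children = []
            for parent in survivors:
                child = parent[:]
                i, j = rng.sample(range(len(child)), 2)   # swap mutation
                child[i], child[j] = child[j], child[i]
                children.append(child)
            pop = survivors + children
        best = max(pop, key=lambda o: allocate(o, prefs, capacity))
        return best, allocate(best, prefs, capacity)

    if __name__ == "__main__":
        prefs = {s: random.sample(["A", "B", "C"], 3) for s in range(12)}
        capacity = {"A": 4, "B": 4, "C": 4}
        print(evolve_ordering(prefs, capacity))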
Advanced Video and Signal Based Surveillance | 2003
Ricardo Santiago-Mozos; José M. Leiva-Murillo; Fernando Pérez-Cruz; Antonio Artés-Rodríguez
We tackle the problem of detecting sources of combustion in high-definition multispectral medium-wavelength infrared (MWIR) (3-5 μm) images. We present a novel approach to this problem consisting of processing the images block-wise using a new technique that we call supervised principal component analysis (SPCA) to obtain the components of these blocks. This outperforms state-of-the-art methods with a significant reduction in the complexity of the whole scheme. As a classifier, we propose the use of a support vector machine (SVM), comparing the results from both its novelty-detection and binary non-linear versions. High performance is achieved from a small set of components.
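The block-wise project-then-classify pipeline can be sketched as follows. Ordinary scikit-learn PCA stands in for the paper's SPCA, and random arrays stand in for MWIR image blocks, so this is an illustration only.

    # Rough sketch under stated assumptions: ordinary PCA replaces the paper's
    # supervised PCA, and synthetic arrays replace real MWIR frames. Only the
    # block-wise "project then classify with an SVM" structure is shown.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC

    def extract_blocks(image, block=8):
        """Split a 2-D image into non-overlapping block x block patches, flattened."""
        h, w = image.shape
        patches = [image[i:i + block, j:j + block].ravel()
                   for i in range(0, h - block + 1, block)
                   for j in range(0, w - block + 1, block)]
        return np.array(patches)

    rng = np.random.default_rng(0)
    background = rng.normal(0.0, 1.0, size=(64, 64))      # stand-in for a cold scene
    combustion = rng.normal(3.0, 1.0, size=(64, 64))      # stand-in for a hot source

    X = np.vstack([extract_blocks(background), extract_blocks(combustion)])
    y = np.array([0] * (len(X) // 2) + [1] * (len(X) // 2))

    components = PCA(n_components=4).fit_transform(X)     # a few components per block
    clf = SVC(kernel="rbf").fit(components, y)
    print("training accuracy:", clf.score(components, y))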
IEEE Journal of Biomedical and Health Informatics | 2014
Ricardo Santiago-Mozos; Fernando Pérez-Cruz; Michael G. Madden; Antonio Artés-Rodríguez
Automated screening systems are commonly used to detect some agent in a sample and make a global decision about the subject (e.g., ill/healthy) based on these detections. We propose a Bayesian methodology for making decisions in (sequential) screening systems that takes into account the false alarm rate of the detector. Our approach assesses the quality of its decisions and provides lower bounds on the achievable performance of the screening system from the training data. In addition, we develop a complete screening system for sputum smears in tuberculosis diagnosis and show, using a real-world database, the advantages of the proposed framework compared with the commonly used approach of counting detections and thresholding.
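A hedged toy example of the underlying idea (a Bayesian decision that accounts for the detector's false-alarm rate), with assumed per-field hit and false-alarm probabilities, an assumed prior, and a simple binomial model; it is not the paper's model.

    # Toy Bayesian decision for a screening system whose detector has a known
    # false-alarm rate. All rates, priors and the binomial model are assumptions.
    from math import comb

    def posterior_ill(k, n, p_fa=0.02, p_hit=0.10, prior_ill=0.10):
        """Posterior probability that the subject is ill given k positive
        detections in n examined fields, under a simple binomial model."""
        like_ill = comb(n, k) * p_hit ** k * (1 - p_hit) ** (n - k)
        like_healthy = comb(n, k) * p_fa ** k * (1 - p_fa) ** (n - k)
        num = like_ill * prior_ill
        return num / (num + like_healthy * (1 - prior_ill))

    for k in range(6):
        p = posterior_ill(k, n=50)
        print(f"{k} detections -> P(ill) = {p:.3f}, decide {'ill' if p > 0.5 else 'healthy'}")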
American Journal of Medical Genetics | 2009
Enrique Baca-Garcia; Concepción Vaquero-Lorenzo; M. Mercedes Perez-Rodriguez; Mònica Gratacòs; Mònica Bayés; Ricardo Santiago-Mozos; José M. Leiva-Murillo; Mario de Prado-Cumplido; Antonio Artés-Rodríguez; Antonio Ceverino; Carmen Diaz-Sastre; Pablo Fernández-Navarro; Javier Costas; J. Fernández-Piqueras; Montserrat Diaz-Hernandez; Jose de Leon; Enrique Baca-Baldomero; Jerónimo Saiz-Ruiz; J. John Mann; Ramin V. Parsey; Angel Carracedo; Xavier Estivill; Maria A. Oquendo
Despite marked morbidity and mortality associated with suicidal behavior, accurate identification of individuals at risk remains elusive. The goal of this study is to identify a model based on single nucleotide polymorphisms (SNPs) that discriminates between suicide attempters and non‐attempters using data mining strategies. We examined functional SNPs (n = 840) of 312 brain function and development genes using data mining techniques. Two hundred seventy‐seven male psychiatric patients aged 18 years or older were recruited at a University hospital psychiatric emergency room or psychiatric short stay unit. The main outcome measure was history of suicide attempts. Three SNPs of three genes (rs10944288, HTR1E; hCV8953491, GABRP; and rs707216, ACTN2) correctly classified 67% of male suicide attempters and non‐attempters (0.50 sensitivity, 0.82 specificity, positive likelihood ratio = 2.80, negative likelihood ratio = 1.64). The OR for the combined three SNPs was 4.60 (95% CI: 1.31–16.10). The model's accuracy suggests that, in the future, similar methodologies may generate simple genetic tests with diagnostic utility for identifying suicide attempters. This strategy may uncover new pathophysiological pathways regarding the neurobiology of suicidal acts.
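For readers unfamiliar with the summary statistics quoted above, the snippet below derives sensitivity, specificity, likelihood ratios and the odds ratio from a hypothetical 2x2 confusion matrix using their textbook definitions (note that the negative likelihood ratio is sometimes reported as its reciprocal). The counts are made up and are not the study's data.

    # Illustrative arithmetic only, with made-up 2x2 counts
    # (rows: attempter / non-attempter; columns: classified positive / negative).
    tp, fn, fp, tn = 70, 70, 25, 112   # hypothetical counts

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_positive = sensitivity / (1 - specificity)
    lr_negative = (1 - sensitivity) / specificity
    odds_ratio = (tp * tn) / (fp * fn)

    print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
    print(f"LR+={lr_positive:.2f} LR-={lr_negative:.2f} OR={odds_ratio:.2f}")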
IEEE Journal of Biomedical and Health Informatics | 2015
J. M. Lillo-Castellano; Inmaculada Mora-Jiménez; Ricardo Santiago-Mozos; Fernando Chavarría-Asso; A. Cano-Gonzalez; Arcadio García-Alberola; José Luis Rojo-Álvarez
The current development of cloud computing is completely changing the paradigm of knowledge extraction from huge databases. An example of this technology in the cardiac arrhythmia field is the SCOOP platform, a national-level scientific cloud-based big data service for implantable cardioverter defibrillators. In this scenario, we propose a new methodology for automatic classification of intracardiac electrograms (EGMs) in a cloud computing system, designed for minimal signal preprocessing. A new compression-based similarity measure (CSM), the so-called weighted fast compression distance, is created for low computational burden and provides better performance than other CSMs in the literature. Using simple machine learning techniques, a set of 6848 EGMs extracted from the SCOOP platform were classified into seven cardiac arrhythmia classes and one noise class, reaching nearly 90% accuracy when previous patient arrhythmia information was available and 63% otherwise, in all cases outperforming the classification provided by the majority class. Results show that this methodology can be used as a high-quality cloud computing service, providing support to physicians in improving patient diagnosis.
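As a rough illustration of compression-based similarity (not the paper's weighted fast compression distance), the sketch below uses the standard normalized compression distance over quantized synthetic signals with a 1-nearest-neighbour rule.

    # Sketch of the general idea only: standard normalized compression distance
    # (NCD) with zlib, on quantized toy signals rather than real EGMs.
    import zlib
    import numpy as np

    def to_bytes(signal):
        """Quantize a 1-D signal to 8 bits so it can be fed to a byte compressor."""
        s = np.asarray(signal, dtype=float)
        s = (s - s.min()) / (s.max() - s.min() + 1e-12)
        return (s * 255).astype(np.uint8).tobytes()

    def ncd(a, b):
        ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
        cab = len(zlib.compress(a + b))
        return (cab - min(ca, cb)) / max(ca, cb)

    def classify(query, references):
        """references: list of (label, byte string). Return label of the nearest reference."""
        return min(references, key=lambda r: ncd(query, r[1]))[0]

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 512)
    refs = [("regular", to_bytes(np.sin(2 * np.pi * 8 * t))),
            ("noise", to_bytes(rng.normal(size=t.size)))]
    query = to_bytes(np.sin(2 * np.pi * 8 * t + 0.3))   # phase-shifted regular signal
    print("predicted class:", classify(query, refs))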
International Symposium on Biomedical Imaging | 2008
Ricardo Santiago-Mozos; R. Fernández-Lorenzana; Fernando Pérez-Cruz; Antonio Artés-Rodríguez
We consider the problem of sequential hypothesis testing when the exact pdfs are not known but instead a set of i.i.d. samples is used to describe each hypothesis. We modify the classical test by introducing a likelihood ratio interval which accommodates the uncertainty in the pdfs. The test finishes when the whole likelihood ratio interval crosses one of the thresholds, and it reduces to the classical test as the number of samples used to describe the hypotheses tends to infinity. We illustrate the performance of this test in a medical image application related to tuberculosis diagnosis. We show in this example how the test confidence level can be accurately determined.
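A toy sketch of the sequential-testing idea, assuming simple Gaussian hypotheses and a hypothetical fixed-width uncertainty band around the log-likelihood ratio; the paper's interval construction from training samples is not reproduced.

    # Wald's sequential test on Gaussian hypotheses, stopping only when the whole
    # (hypothetical) uncertainty band around the log-likelihood ratio is outside
    # the thresholds, to mimic the "likelihood ratio interval" idea.
    import math
    import random

    def sequential_test(samples, mu0=0.0, mu1=1.0, sigma=1.0,
                        alpha=0.05, beta=0.05, band=0.3):
        a = math.log(beta / (1 - alpha))        # lower stopping threshold
        b = math.log((1 - beta) / alpha)        # upper stopping threshold
        llr = 0.0
        for n, x in enumerate(samples, start=1):
            llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
            if llr - band > b:
                return "accept H1", n
            if llr + band < a:
                return "accept H0", n
        return "undecided", len(samples)

    rng = random.Random(0)
    data = [rng.gauss(1.0, 1.0) for _ in range(200)]   # data actually drawn under H1
    print(sequential_test(data))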
International Journal of Knowledge Discovery in Bioinformatics | 2010
Ricardo Santiago-Mozos; Imtiaz A. Khan; Michael G. Madden
In this paper, the authors identify the strategies that resistant subpopulations of cancer cells undertake to overcome the effect of the anticancer drug Topotecan. For the analysis of cell lineage data encoded from time-lapse microscopy, data mining tools are chosen that generate interpretable models of the data and address their statistical significance. By interpreting the short-term and long-term cytotoxic effect of Topotecan through these data models, the authors reveal the strategies that resistant subpopulations of cells undertake to maximize their clonal expansion potential. In this context, the paper identifies a pattern of cell death independent of cytotoxic effect. Finally, it is observed that cells exposed to Topotecan show higher movement over time, indicating a putative relationship between cytotoxic effect and cell motility.
IEEE Transactions on Neural Networks | 2011
Ricardo Santiago-Mozos; Fernando Pérez-Cruz; Antonio Artés-Rodríguez
In some applications, the probability of error of a given classifier is too high for its practical application, but we are allowed to gather more independent test samples from the same class to reduce the probability of error of the final decision. From the point of view of hypothesis testing, the solution is given by the Neyman-Pearson lemma. However, there is no equivalent result to the Neyman-Pearson lemma when the likelihoods are unknown, and we are given a training dataset. In this brief, we explore two alternatives. First, we combine the soft (probabilistic) outputs of a given classifier to produce a consensus labeling for test samples. In the second approach, we build a new classifier that directly computes the label for test samples. For this second approach, we need to define an extended input space training set and incorporate the known symmetries in the classifier. This latter approach gives more accurate results, as it only requires an accurate classification boundary, while the former needs an accurate posterior probability estimate for the whole input space. We illustrate our results with well-known databases.
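The first (consensus) approach can be sketched as follows, with a synthetic dataset and scikit-learn logistic regression standing in for "a given classifier"; the extended-input-space classifier of the second approach is not shown.

    # Minimal sketch of combining soft classifier outputs over a batch of test
    # samples known to share the same (unknown) label. Data and classifier are
    # stand-ins; combining by summing log-odds assumes equal priors and
    # conditional independence across samples.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_train = np.vstack([rng.normal(0.0, 1.0, (200, 2)), rng.normal(1.5, 1.0, (200, 2))])
    y_train = np.array([0] * 200 + [1] * 200)
    clf = LogisticRegression().fit(X_train, y_train)

    X_batch = rng.normal(1.5, 1.0, (10, 2))             # batch truly drawn from class 1

    p1 = clf.predict_proba(X_batch)[:, 1]
    log_odds = np.log(p1 / (1 - p1))                    # per-sample soft outputs
    consensus = int(log_odds.sum() > 0)                 # combine by summing log-odds
    print("per-sample labels:", clf.predict(X_batch))
    print("consensus label:  ", consensus)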