Elsa Fernandez
University of the Basque Country
Publications
Featured research published by Elsa Fernandez.
Image and Vision Computing | 2010
Manuel Graña; Alexandre Savio; Maite García-Sebastián; Elsa Fernandez
We introduce an approach to fMRI analysis based on the Endmember Induction Heuristic Algorithm (EIHA). This algorithm uses the Lattice Associative Memory (LAM) to detect lattice independent vectors, which can be assumed to be affine independent and are therefore candidates to be the endmembers of the data. Induced endmembers are used to compute the activation levels of voxels as the result of an unmixing process. The endmembers correspond to diverse activation patterns, one of which corresponds to the resting state of the neuronal tissue. The on-line operation of the algorithm requires neither a prior training process nor a priori models of the data. Results on a case study are compared with those given by the state-of-the-art SPM software.
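The unmixing step the abstract mentions can be illustrated independently of the lattice machinery: once endmembers are induced, per-voxel activation levels are the mixing coefficients that best reconstruct the voxel's time series. A minimal sketch (least-squares unmixing; function names are illustrative, not the authors' implementation):

```python
import numpy as np

def unmix(voxel_ts, endmembers):
    """Least-squares unmixing: activation (abundance) of each endmember
    in a voxel's time series. Rows of `endmembers` are induced endmembers."""
    # Solve voxel_ts ~= endmembers.T @ a for the abundance vector a.
    a, *_ = np.linalg.lstsq(endmembers.T, voxel_ts, rcond=None)
    return a
```

Applied voxel-wise, the abundance of each non-resting endmember plays the role of an activation map.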
Pattern Recognition Letters | 2007
Maite García-Sebastián; Elsa Fernandez; Manuel Graña; Francisco Javier Torrealdea
Given an appropriate imaging resolution, a common Magnetic Resonance Imaging (MRI) model assumes that the object under study is composed of homogeneous tissues whose imaging intensity is constant, so that MRI produces piecewise constant images. The intensity inhomogeneity (IIH) is modeled by a multiplicative inhomogeneity field. It is due to the spatial inhomogeneity of the excitatory Radio Frequency (RF) signal and other effects. It has been acknowledged as a greater source of error for automatic segmentation algorithms than additive noise. We propose a parametric IIH correction algorithm for MRI that consists of the gradient descent of an error function related to the classification error of the IIH-corrected image. The inhomogeneity field is modeled as a linear combination of 3D products of Legendre polynomials. In this letter we test both the image restoration capabilities and the classification accuracy of the algorithm. In restoration processes the adaptive algorithm is used only to estimate the inhomogeneity field. Test images to be restored are IIH-corrupted versions of the BrainWeb site simulations. The algorithm's image restoration is evaluated by the correlation of the restored image with the known clean image. In classification processes the algorithm is used to estimate both the inhomogeneity field and the intensity class means. The algorithm's classification accuracy is tested over the images from the IBSR site. The proposed algorithm is compared with Maximum A Posteriori (MAP) and Fuzzy Clustering algorithms.
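The core idea — a Legendre-polynomial bias field fitted by gradient descent on a classification error — can be sketched in 1D. This is a toy sketch, not the paper's 3D implementation: the bias is a Legendre series, the error is the squared distance of each corrected intensity to its nearest class mean, and the gradient is taken numerically (all hyperparameters below are illustrative assumptions):

```python
import numpy as np
from numpy.polynomial import legendre

def bias_field(coeffs, x):
    # Bias modelled as a linear combination of Legendre polynomials on [-1, 1].
    return legendre.legval(x, coeffs)

def class_error(coeffs, image, x, means):
    corrected = image / bias_field(coeffs, x)
    # Distance of each corrected intensity to its nearest class mean.
    d = np.min(np.abs(corrected[:, None] - means[None, :]), axis=1)
    return np.sum(d ** 2)

def fit_bias(image, x, means, steps=500, lr=1e-3, order=2):
    coeffs = np.zeros(order + 1)
    coeffs[0] = 1.0                        # start from a flat (unit) field
    for _ in range(steps):
        grad = np.zeros_like(coeffs)
        for i in range(coeffs.size):       # central-difference gradient
            e = np.zeros_like(coeffs)
            e[i] = 1e-5
            grad[i] = (class_error(coeffs + e, image, x, means)
                       - class_error(coeffs - e, image, x, means)) / 2e-5
        coeffs -= lr * grad
    return coeffs
```

Dividing the image by the fitted field restores the piecewise constant intensities, mirroring the restoration experiment described above.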
Statistics in Medicine | 1996
C. Sánchez Sellero; E. Vázquez Fernández; W. González Manteiga; X. L. Otero; X. Hervada; Elsa Fernandez; X. A. Taboada
To correct for the effect of reporting delay on incidence data relating to AIDS, three methods of estimation have been analysed: Poisson log-linear; log-linear logistic mixed regression (log-logit), and truncation. The first two methods transform the data into a contingency table. The difference between them is the hypothesis of delay stationarity, which is only assumed by the former. A correction is proposed for the first method to improve its asymptotic properties. The truncation method is based on the product-limit estimator. A simulation study was carried out to examine the behaviour (means, variances and mean squared errors) of the three methods. All were applied to data from the National Commission on AIDS (Spain), showing an improvement in reporting efficiency.
Congress on Evolutionary Computation | 2004
Elsa Fernandez; Manuel Graña; J.R. Cabello
Memetic algorithms are hybrid evolutionary algorithms that combine local optimization with evolutionary search operators. In this paper we describe an instance of this paradigm designed for the correction of illumination inhomogeneities in images. The algorithm uses the gradient information of an error function embedded in the mutation operator. Moreover, the algorithm is a single-solution population algorithm, which makes it computationally light. The fitness function is defined assuming that the image intensity is piecewise constant and that the illumination bias may be approximated by a linear combination of 2D Legendre polynomials. We call the algorithm instantaneous memetic illumination correction (IMIC).
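The single-solution, gradient-in-the-mutation scheme described above can be sketched as a generic (1+1) memetic loop. This is a minimal sketch of the paradigm on an arbitrary fitness function, not the IMIC illumination-correction code itself (step sizes and noise scale are illustrative assumptions):

```python
import numpy as np

def memetic_search(fitness, grad, x0, steps=200, lr=0.05, sigma=0.01, rng=None):
    """Single-solution (1+1) memetic loop: the mutation is a gradient step
    plus small random noise; the offspring replaces the parent only if fitter."""
    rng = np.random.default_rng(0) if rng is None else rng
    x, fx = x0, fitness(x0)
    for _ in range(steps):
        cand = x - lr * grad(x) + sigma * rng.standard_normal(x.shape)
        fc = fitness(cand)
        if fc < fx:                        # elitist acceptance
            x, fx = cand, fc
    return x, fx
```

Keeping a single individual is what makes the method computationally light: each iteration costs one gradient and one fitness evaluation.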
International Work-Conference on the Interplay Between Natural and Artificial Computation | 2011
Elsa Fernandez; Manuel Graña; Jorge Villanúa
Dynamic velocity-encoded phase-contrast MRI (PC-MRI) techniques are being used increasingly to quantify pulsatile flows in a variety of clinical flow applications. A method for high-resolution segmentation of cerebrospinal fluid (CSF) velocity is described. The method works on PC-MRI with high temporal and spatial resolution. It is applied in this paper to the CSF flow at the Aqueduct of Sylvius (AS). The approach first selects the regions with high flow by applying a threshold on the coefficient of variation of the image pixels' velocity profiles. The AS corresponds to the most central detected region. We perform a lattice independent component analysis (LICA) on this small region, so that the image abundances provide the high-resolution segmentation of the CSF flow at the AS. The long-term goal of our work is to use this detection and segmentation to take measurements and evaluate the changes in patients with suspected idiopathic Normal Pressure Hydrocephalus (iNPH).
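The first step — flagging pulsatile pixels by thresholding the temporal coefficient of variation of each velocity profile — can be sketched directly. A minimal sketch assuming a (time, height, width) velocity array; the threshold value is an illustrative assumption, not the paper's setting:

```python
import numpy as np

def high_flow_mask(velocity, thresh=1.0):
    """velocity: (T, H, W) per-pixel velocity profiles over the cardiac cycle.
    Pixels whose temporal coefficient of variation exceeds `thresh` are
    flagged as candidate high-flow (pulsatile) regions."""
    mean = velocity.mean(axis=0)
    std = velocity.std(axis=0)
    cv = std / (np.abs(mean) + 1e-9)   # coefficient of variation per pixel
    return cv > thresh
```

Pulsatile CSF oscillates around a small mean velocity, so its coefficient of variation is large, while static tissue with a steady profile falls below the threshold.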
International Conference on Artificial Neural Networks | 2005
M. Manzano García; Elsa Fernandez; Manuel Graña; Francisco Javier Torrealdea
Magnetic Resonance Images (MRI) are piecewise constant functions that can be corrupted by an inhomogeneous illumination field. We propose a gradient descent parametric illumination correction algorithm for MRI. The illumination bias is modelled as a linear combination of 2D products of Legendre polynomials. The error function is related to the classification error in the bias-corrected image. In this work the intensity classes are given beforehand, so the adaptive algorithm is used only to estimate the bias field. We test our algorithm against Maximum A Posteriori algorithms over some images from the IBSR public domain database.
International Symposium on Neural Networks | 2000
Elsa Fernandez; Imanol Echave; Manuel Graña
To increase the robustness of visual processing in the context of mobile robotics, we introduce an image filtering process based on the codebooks computed by the Self-Organizing Map (SOM). The SOM and Simple Competitive Learning are used to adaptively compute the vector quantizers of color image sequences. The codebook computed for each image in the sequence is then used as a smoothing filter, the VQ Bayesian Filter (VQ-BF), for the preprocessing of the images in the sequence. This filter is applied to the computation of optical flow at the single-pixel level.
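The filtering step itself is vector quantization: every pixel is replaced by its nearest codebook color, which smooths away colors the codebook does not represent. A minimal sketch of that step, assuming the codebook has already been trained (the SOM/competitive-learning training is omitted):

```python
import numpy as np

def vq_filter(image, codebook):
    """Replace every pixel by its nearest codebook vector (VQ smoothing).
    image: (H, W, 3) float array; codebook: (K, 3) color prototypes."""
    flat = image.reshape(-1, 3)
    # Squared distance from each pixel to each codebook color.
    d = ((flat[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return codebook[d.argmin(1)].reshape(image.shape)
```

Because consecutive frames are quantized against adaptively updated codebooks, small illumination and sensor fluctuations collapse onto the same prototypes, stabilizing the per-pixel optical flow computation downstream.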
Archive | 2013
M. Termenon; Elsa Fernandez; Manuel Graña; Alfonso Barrós-Loscertales; Juan Carlos Bustamante; César Ávila
Due to the high dimensionality of neuroimaging data, it is common to select a subset of relevant information from the whole dataset. Including information from the complete dataset during that selection can introduce bias into the results, often leading to over-optimistic conclusions. In this study, we show the differences between results obtained from an experiment free of circularity and from a repetition of the process that includes a circularity effect (double-dipping, DD). Discriminant features (based on voxel intensity values) are obtained from structural Magnetic Resonance Imaging (MRI) to train and test classifiers that discriminate cocaine-dependent patients from healthy subjects. Feature selection is done by computing Pearson's correlation between voxel values across subjects, with the subject class as the control variable. Several machine learning techniques are used as classifiers: k-Nearest Neighbor (k-NN), Support Vector Machines (SVM), Extreme Learning Machines (ELM) and Learning Vector Quantization (LVQ). The feature selection process with DD obtains, in general, higher accuracy, sensitivity and specificity values.
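The circularity at issue is whether the correlation-based feature ranking sees the test subjects. A minimal sketch of the two ingredients — Pearson-correlation feature ranking and a simple classifier to score it (1-NN stands in here for the classifiers listed above; double-dipping corresponds to calling `select_features` on all subjects rather than on the training split only):

```python
import numpy as np

def select_features(X, y, k):
    """Rank voxels by |Pearson correlation| with the class label.
    X: (subjects, voxels); y: (subjects,) class labels."""
    Xc = X - X.mean(0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(0) / (
        np.sqrt((Xc ** 2).sum(0) * (yc ** 2).sum()) + 1e-12)
    return np.argsort(-np.abs(r))[:k]

def accuracy(train_X, train_y, test_X, test_y, feats):
    # 1-NN classifier restricted to the selected voxels.
    d = ((test_X[:, None, feats] - train_X[None, :, feats]) ** 2).sum(-1)
    return (train_y[d.argmin(1)] == test_y).mean()
```

Running the ranking on train+test before splitting leaks label information into the selected voxels, which is exactly why the DD condition reports inflated accuracy, sensitivity and specificity.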
Archive | 2012
Juan Manuel Górriz; Elmar Wolfgang Lang; Javier Ramírez; M. Arzoz; Florian Blöchl; Pietro Bonizzi; Matthias Böhm; Alexander Brawanski; R. Chaves; Darya Chyzhyk; Francesco Ciompi; Josep Comet; Marteen De Vos; Lieven De Lathauwer; Deniz Erdogmus; Rupert Faltermeier; Elsa Fernandez; Esther Fernández; Volker Fischer; Glenn Fung; Carlos García Puntonet; Maite García-Sebastián; Carlo Gatta; Pedro Gómez Vilda; J. M. Górriz-Sáez; Manuel Graña; Albert Gubern-Mérida; Daniela Herold; Kenneth E. Hild; Roberto Hornero
Archive | 2002
Elsa Fernandez; Esteve Fernández; Anna Schiaffino; Josep M. Borràs