Gonzalo Safont
Polytechnic University of Valencia
Publications
Featured research published by Gonzalo Safont.
International Carnahan Conference on Security Technology | 2012
Gonzalo Safont; Addisson Salazar; A. Soriano; Luis Vergara
The distinct structure of each human brain produces spontaneous electroencephalographic (EEG) records that can be used to identify subjects. This paper presents a method for biometric authentication and identification based on EEG signals. The hardware uses a simple configuration of two signal electrodes and a reference electrode, positioned so as to be as unobtrusive as possible for the tested subject. Multiple features are extracted from the EEG signals and processed by different classifiers. The system evaluates all possible combinations of classifiers and features, fusing the best results. The fused decision improves classification performance even for a small number of observation vectors. Results were obtained from a population of 50 subjects and 20 intruders, in both authentication and identification tasks. The system obtains an Equal Error Rate (EER) of 2.4% with only a few seconds of test data. The obtained performance measures improve on the results of current EEG-based systems.
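As a rough illustration of the combination scheme described above, the following sketch scores every (feature set, classifier) pair and keeps the best pairs for fusion. The feature sets, classifier pool, and data are hypothetical stand-ins, not the paper's exact choices:

```python
# Sketch of scoring all (feature set, classifier) combinations before fusion.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical stand-ins for per-epoch EEG feature matrices.
feature_sets = {
    "spectral": rng.normal(size=(200, 8)),
    "ar_coeffs": rng.normal(size=(200, 6)),
}
labels = rng.integers(0, 2, size=200)  # 1 = enrolled subject, 0 = other

classifiers = {
    "lda": LinearDiscriminantAnalysis(),
    "knn": KNeighborsClassifier(n_neighbors=5),
}

# Score every (feature set, classifier) pair; keep the best-performing ones.
scores = {}
for fname, X in feature_sets.items():
    for cname, clf in classifiers.items():
        scores[(fname, cname)] = cross_val_score(clf, X, labels, cv=5).mean()

best = sorted(scores, key=scores.get, reverse=True)[:2]
print("best combinations:", best)
# A full system would fuse the posterior outputs of these best pairs
# (e.g., by averaging) before thresholding at the operating point.
```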
Sensors | 2015
Jorge Igual; Addisson Salazar; Gonzalo Safont; Luis Vergara
The detection and identification of internal defects in a material require a technology that translates hidden interior damage into observable signals with distinct signature-defect correspondences. We apply impact-echo techniques for this purpose. The materials are classified according to their defect status (homogeneous, one defect or multiple defects) and kind of defect (hole or crack, passing through or not). Every specimen is impacted by a hammer, and the spectrum of the propagated wave is recorded. This spectrum is the input to a Bayesian classifier based on modeling the conditional probabilities with a mixture of Gaussians. The parameters of the Gaussian mixtures and the class probabilities are estimated using an extended expectation-maximization algorithm. The advantage of our proposal is its flexibility: it obtains good results for a wide range of models even under little supervision; e.g., it achieves a harmonic mean of precision and recall (F1-score) of 92.38% given only a 10% supervision ratio. We test the method with real specimens made of aluminum alloy, and the results show that the algorithm performs well. This technique could be applied to many industrial problems, such as the optimization of the marble cutting process.
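The classifier described here maps to a compact scikit-learn sketch: one Gaussian mixture per class fitted by EM, with class priors estimated from label counts. The paper's extended EM also handles partial supervision, which this fully supervised sketch omits:

```python
# A minimal sketch of a Bayes classifier with per-class Gaussian mixtures
# fitted by EM. Fully supervised for simplicity; the paper's extended EM
# also exploits unlabeled spectra.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_class_mixtures(X, y, n_components=3, seed=0):
    """Fit one GMM per class; estimate class priors from label counts."""
    models, priors = {}, {}
    for c in np.unique(y):
        Xc = X[y == c]
        models[c] = GaussianMixture(n_components=n_components,
                                    random_state=seed).fit(Xc)
        priors[c] = len(Xc) / len(X)
    return models, priors

def predict(models, priors, X):
    """Pick the class maximizing log p(x | c) + log p(c)."""
    classes = sorted(models)
    log_post = np.column_stack(
        [models[c].score_samples(X) + np.log(priors[c]) for c in classes])
    return np.array(classes)[np.argmax(log_post, axis=1)]
```

Here X would hold the recorded impact-echo spectra (one row per specimen) and y the defect classes.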
International Carnahan Conference on Security Technology | 2012
Addisson Salazar; Gonzalo Safont; A. Soriano; Luis Vergara
Fraud detection is a critical problem for large financial companies, and it has grown with the volume of credit card transactions. This paper presents a new method for automatic detection of fraud in credit card transactions based on non-linear signal processing. The proposed method consists of the following stages: feature extraction, training and classification, decision fusion, and result presentation. Discriminant-based classifiers and an advanced non-Gaussian mixture classification method are employed to distinguish between legitimate and fraudulent transactions. The posterior probabilities produced by the classifiers are fused by means of order statistic digital filters. Results from data mining a large database of real transactions are presented. The feasibility of the proposed method is demonstrated for several datasets using parameters derived from receiver operating characteristic (ROC) analysis and key performance indicators of the business.
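The decision-fusion stage can be sketched in a few lines: order statistic filtering of the per-transaction posteriors reduces, per sample, to selecting the k-th ranked classifier output. The posteriors and choice of k below are illustrative, not the paper's tuned setting:

```python
# Fusing classifier posteriors with an order statistic filter (sketch).
import numpy as np

def order_statistic_fusion(posteriors, k):
    """posteriors: (n_samples, n_classifiers) fraud probabilities.
    Returns the k-th smallest posterior per sample: k=0 is the minimum,
    k=n-1 the maximum, and a middle k gives a median-like robust fusion."""
    return np.sort(posteriors, axis=1)[:, k]

# Three hypothetical classifiers scoring five transactions:
p = np.array([[0.10, 0.20, 0.90],
              [0.80, 0.85, 0.95],
              [0.05, 0.10, 0.15],
              [0.40, 0.60, 0.55],
              [0.02, 0.90, 0.05]])
fused = order_statistic_fusion(p, k=1)  # middle order statistic
alarms = fused > 0.5                    # illustrative decision threshold
```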
Remote Sensing | 2014
Gonzalo Safont; Addisson Salazar; Alberto Rodriguez; Luis Vergara
Missing traces in ground penetrating radar (GPR) B-scans (radargrams) may appear because of limited scanning resolution, failures during the acquisition process, or the lack of accessibility to some areas under test. Four statistical interpolation methods for recovering these missing traces are compared in this paper: Kriging, Wiener structures, splines, and the expectation assuming an independent component analyzers mixture model (E-ICAMM). Kriging is an adaptation of the linear least mean squared error estimator to the spatial context. Wiener structures improve the linear estimator by including a nonlinear scalar function. Splines, a commonly used method to interpolate GPR traces, are piecewise-defined polynomial curves that are smooth at the connections (or knots) between pieces. E-ICAMM, a new method proposed in this paper, computes the optimum nonlinear estimator (the conditional mean) assuming a non-Gaussian mixture model for the joint probability density in the observation space. The methods were tested on a set of simulated data and a set of real data, and four performance indicators were computed. Real data were obtained by GPR inspection of two replicas of historical walls. Results show the superiority of E-ICAMM over the other three methods for reconstructing incomplete B-scans.
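Of the four methods, the spline baseline is the simplest to illustrate. The sketch below interpolates each time row of a B-scan across trace positions with a cubic spline; the array shapes and missing-trace indices are hypothetical:

```python
# Recovering missing GPR traces with the spline baseline (sketch).
import numpy as np
from scipy.interpolate import CubicSpline

def interpolate_missing_traces(bscan, missing_cols):
    """bscan: (time samples x trace positions). Fill missing columns
    by fitting a cubic spline along each time row."""
    filled = bscan.copy()
    cols = np.arange(bscan.shape[1])
    known = np.setdiff1d(cols, missing_cols)  # sorted known positions
    for row in range(bscan.shape[0]):
        cs = CubicSpline(known, bscan[row, known])
        filled[row, missing_cols] = cs(missing_cols)
    return filled

bscan = np.random.default_rng(0).normal(size=(512, 100))
recovered = interpolate_missing_traces(bscan, missing_cols=[30, 31, 57])
```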
IEEE Transactions on Neural Networks | 2018
Gonzalo Safont; Addisson Salazar; Luis Vergara; Enriqueta Gomez; Vicente Villanueva
Independent component analysis (ICA) is a blind source separation technique in which data are modeled as linear combinations of several independent non-Gaussian sources. The independence and linearity restrictions are relaxed using several ICA mixture models (ICAMMs), yielding a two-layer artificial neural network structure. This allows for dependence between sources of different classes, and thus a myriad of multidimensional probability density functions can be accurately modeled. This paper proposes a new probabilistic distance (PDI) between the parameters learned for two ICAMMs. The PDI is computed explicitly, unlike the popular Kullback–Leibler divergence (KLD) and other similar metrics, removing the need for numerical integration. Furthermore, the PDI is symmetric and bounded between 0 and 1, which enables its use as a posterior probability in fusion approaches. In this paper, the PDI is employed for change detection by measuring the distance between two ICAMMs learned in consecutive time windows. The changes may be associated with relevant states of a process under analysis that are explicitly reflected in the learned ICAMM parameters. The proposed distance was tested in two challenging applications using simulated and real data: 1) detecting flaws in materials using ultrasound and 2) detecting changes in electroencephalography signals from humans performing neuropsychological tests. The results demonstrate that the PDI outperforms the KLD in change-detection capability.
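The closed form of the PDI is not given in the abstract, so it is not reproduced here. As a conceptual stand-in only, the sketch below estimates a different distance that shares the PDI's stated properties (symmetric, bounded between 0 and 1): a Jensen-Shannon divergence between two fitted mixture models, normalized by log 2. Unlike the PDI, it requires sampling rather than an explicit computation:

```python
# NOT the paper's PDI: a sampled Jensen-Shannon stand-in that is likewise
# symmetric and bounded in [0, 1], used here to illustrate window-to-window
# change detection between two fitted mixture models.
import numpy as np
from sklearn.mixture import GaussianMixture

def js_distance(model_a, model_b, n=20000, seed=0):
    """Monte Carlo Jensen-Shannon divergence, scaled to [0, 1]."""
    def kl_to_mixture(sampler, other):
        x, _ = sampler.sample(n)
        la, lb = sampler.score_samples(x), other.score_samples(x)
        lm = np.logaddexp(la, lb) - np.log(2.0)  # log of the 50/50 mixture
        return np.mean(la - lm)
    np.random.seed(seed)  # GaussianMixture.sample draws from the global state
    return 0.5 * (kl_to_mixture(model_a, model_b)
                  + kl_to_mixture(model_b, model_a)) / np.log(2.0)

rng = np.random.default_rng(0)
w1 = GaussianMixture(3).fit(rng.normal(size=(500, 4)))           # window t
w2 = GaussianMixture(3).fit(rng.normal(loc=0.5, size=(500, 4)))  # window t+1
print(js_distance(w1, w2))  # larger values flag a change between windows
```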
International Carnahan Conference on Security Technology | 2014
Addisson Salazar; Gonzalo Safont; Luis Vergara
Banks collect large amounts of historical records corresponding to millions of credit card operations but, unfortunately, only a small portion, if any, is open access. This is because, for example, the records include confidential customer data, and banks are wary of providing public quantitative evidence of existing fraud operations. This paper tackles the problem through the application of surrogate techniques to generate new synthetic credit card data. The quality of the surrogate multivariate data is guaranteed by constraining them to have the same covariance, marginal distributions, and joint distributions as the original multivariate data. The performance of fraud detection algorithms (in terms of receiver operating characteristic (ROC) curves) is tested using a varying proportion of real and surrogate data. We demonstrate the feasibility of surrogates in a real scenario with very low false alarm rates and a high disproportion between legitimate and fraudulent operations.
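One common surrogate recipe consistent with the stated constraints is sketched below: draw Gaussian data with the original covariance, then rank-remap each column onto the original marginals. This is an illustrative technique choice, not necessarily the paper's exact construction, and the remapping preserves the covariance only approximately:

```python
# Surrogate generation sketch: Gaussian draw with the original covariance,
# then rank-remapping of each column onto the original marginals.
import numpy as np

def surrogate(data, seed=0):
    """data: (n_records, n_fields) real transaction features."""
    rng = np.random.default_rng(seed)
    n, d = data.shape
    cov = np.cov(data, rowvar=False)
    g = rng.multivariate_normal(np.zeros(d), cov, size=n)
    out = np.empty_like(data, dtype=float)
    for j in range(d):
        ranks = np.argsort(np.argsort(g[:, j]))  # rank of each surrogate value
        out[:, j] = np.sort(data[:, j])[ranks]   # impose original marginal
    return out
```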
Computational Intelligence, Communication Systems and Networks | 2013
Gonzalo Safont; Addisson Salazar; Luis Vergara; Alberto Rodriguez
Independent Component Analysis (ICA) is a blind source separation method that has proven popular in many fields of application. ICA can be improved by incorporating temporal dependencies, yielding dynamic ICA methods, and by defining subspaces with multiple ICAs. One such dynamic ICA method is the Sequential Independent Component Analysis Mixture Model (SICAMM). This method is proposed for two new EEG signal processing applications: detection of arousals in apnea patients and classification of brain hemisphere activity during a memory task. SICAMM is compared with a non-dynamic ICAMM model and a Dynamic Bayesian Network (DBN). Results show that SICAMM performs better than the DBN, and both dynamic methods achieve a higher classification rate than the stationary ICAMM model. Furthermore, the structure of the SICAMM parameters makes it suitable for extracting significant clinical information.
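The dynamic element SICAMM adds over a static ICAMM can be pictured as a forward recursion: per-window class likelihoods are propagated through a class transition matrix, so past windows inform the current decision. The sketch below shows only this recursion, with hypothetical likelihoods; the actual SICAMM likelihoods come from the underlying ICA mixture model:

```python
# Forward recursion over class posteriors (conceptual sketch, not the full
# SICAMM): combine the current window's likelihoods with the propagated
# posterior from the previous window.
import numpy as np

def dynamic_classify(likelihoods, transition, prior):
    """likelihoods: (n_windows, n_classes) values of p(x_t | class);
    transition[i, j] = p(class_t = j | class_{t-1} = i)."""
    alpha = prior * likelihoods[0]
    alpha /= alpha.sum()
    states = [int(np.argmax(alpha))]
    for lik in likelihoods[1:]:
        alpha = lik * (transition.T @ alpha)  # predict, then update
        alpha /= alpha.sum()
        states.append(int(np.argmax(alpha)))
    return np.array(states)
```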
International Conference on Computational Science | 2015
Addisson Salazar; Jorge Igual; Gonzalo Safont; Luis Vergara; Antonio M. Vidal
We present two image processing applications of an agglomerative clustering method based on mixtures of non-Gaussian distributions. The method merges, pairwise, the mixture models estimated for every cluster, building a pyramidal or hierarchical structure using the Kullback-Leibler divergence. This process can be related to the feedforward process of abstraction carried out by the brain. The applications consist of grouping images based on their content similarity and segmenting the regions of an image into similar areas. The capability of the method to distinguish between natural and artificial images is also demonstrated.
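The agglomerative step can be sketched compactly: repeatedly merge the pair of cluster models with the smallest symmetric Kullback-Leibler divergence. For brevity, each cluster below is summarized by a single Gaussian, for which the KLD has a closed form, rather than by the paper's non-Gaussian mixtures:

```python
# Finding the closest pair of cluster models by symmetric KL divergence
# (single-Gaussian simplification of the agglomerative step).
import numpy as np

def kld_gauss(m0, S0, m1, S1):
    """Closed-form KL(N0 || N1) for multivariate Gaussians."""
    d = len(m0)
    iS1 = np.linalg.inv(S1)
    diff = m1 - m0
    return 0.5 * (np.trace(iS1 @ S0) + diff @ iS1 @ diff - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def closest_pair(params):
    """params: list of (mean, cov) per cluster; return the closest pair."""
    best, pair = np.inf, None
    for i in range(len(params)):
        for j in range(i + 1, len(params)):
            (m0, S0), (m1, S1) = params[i], params[j]
            dist = kld_gauss(m0, S0, m1, S1) + kld_gauss(m1, S1, m0, S0)
            if dist < best:
                best, pair = dist, (i, j)
    return pair
```

Merging the returned pair and repeating yields the pyramidal structure described above.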
International Workshop on Machine Learning for Signal Processing | 2012
A. Soriano; Luis Vergara; Gonzalo Safont; Addisson Salazar
We consider a detection problem in which we have a set of two different types of measurements, or modalities, of one event. The optimal rule for combining both modalities in one detector requires knowledge of the joint statistics of the modalities. In many cases these joint statistics are unknown, and it is usual to assume independence between modalities to implement a suboptimal soft fusion rule. Another suboptimal, less common alternative is hard fusion: thresholding every modality to obtain a set of binary decisions that are then fused into one final decision. In some situations, hard fusion can obtain better results than soft fusion under the independence assumption. The goal of this paper is to show that this is generally true.
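The comparison is easy to reproduce empirically. The following Monte Carlo harness contrasts soft fusion under an (incorrect) independence assumption with hard fusion on correlated Gaussian modalities; the signal model and thresholds are hypothetical, and which rule wins depends on the correlation structure:

```python
# Monte Carlo harness: soft fusion assuming independence vs. hard fusion.
import numpy as np

rng = np.random.default_rng(0)
n, rho, mu = 100_000, 0.8, 1.0
cov = np.array([[1.0, rho], [rho, 1.0]])
h0 = rng.multivariate_normal([0.0, 0.0], cov, size=n)  # event absent
h1 = rng.multivariate_normal([mu, mu], cov, size=n)    # event present

# Soft fusion under independence: for equal-variance Gaussian modalities
# this reduces to thresholding the sum of the two scores.
soft = lambda x, t: x.sum(axis=1) > t
# Hard fusion: threshold each modality, then combine with an AND rule.
hard = lambda x, t: (x > t).all(axis=1)

for name, rule, t in [("soft", soft, mu), ("hard", hard, mu / 2)]:
    pfa, pd = rule(h0, t).mean(), rule(h1, t).mean()
    print(f"{name}: Pfa={pfa:.3f}, Pd={pd:.3f}")
# A fair comparison sweeps the thresholds to equalize Pfa before comparing Pd.
```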
IOP Conference Series: Materials Science and Engineering | 2012
Addisson Salazar; Alberto Rodriguez; Gonzalo Safont; Luis Vergara
This paper presents a prospective analysis of non-destructive testing (NDT) based on ultrasound for archaeology applications. Classical applications of ultrasound techniques are reviewed, including ocean exploration to detect wrecks, imaging of archaeological sites, and cleaning archaeological objects. The potential of prospective applications is discussed from the perspective of signal processing, with emphasis on the area of linear time-variant models. Thus, the use of ultrasound NDT is proposed for new ceramic cataloguing and restoration methods.