Joaquin Gonzalez-Rodriguez
Technical University of Madrid
Publications
Featured research published by Joaquin Gonzalez-Rodriguez.
Lecture Notes in Computer Science | 2003
Julian Fierrez-Aguilar; Javier Ortega-Garcia; Daniel Garcia-Romero; Joaquin Gonzalez-Rodriguez
The aim of this paper on multimodal biometric verification is twofold: first, score fusion strategies reported in the literature are reviewed; second, a selection of them is compared experimentally on the MCYT multimodal database, using three monomodal baseline experts: i) our face verification system based on a global face appearance representation scheme, ii) our minutiae-based fingerprint verification system, and iii) our on-line signature verification system based on HMM modeling of temporal functions. A new strategy is also proposed and discussed that generates a combined multimodal score by means of Support Vector Machine (SVM) classifiers, from which user-independent and user-dependent fusion schemes are derived and evaluated.
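The classical fixed fusion rules that such reviews typically cover (sum, product, and max rules over normalized expert scores) can be sketched as follows; the score values, per-expert ranges, and the choice of min-max normalization are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical matched scores from three monomodal experts
# (face, fingerprint, signature), one row per verification attempt.
scores = np.array([
    [0.80, 0.60, 0.90],   # genuine attempt
    [0.20, 0.35, 0.10],   # impostor attempt
])

# Min-max normalize each expert's score to [0, 1] before fusing,
# using assumed per-expert score ranges.
lo = np.array([0.0, 0.0, 0.0])
hi = np.array([1.0, 1.0, 1.0])
norm = (scores - lo) / (hi - lo)

# Classical fixed fusion rules from the score-fusion literature.
sum_rule = norm.mean(axis=1)    # average of the expert scores
prod_rule = norm.prod(axis=1)   # product of the expert scores
max_rule = norm.max(axis=1)     # most confident expert wins
```

A trained fusion scheme (such as the SVM-based one the paper proposes) replaces these fixed rules with a classifier learned on genuine and impostor score vectors.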
Speech Communication | 2000
Javier Ortega-Garcia; Joaquin Gonzalez-Rodriguez; Victoria Marrero-Aguiar
Speaker recognition is an emerging task in both commercial and forensic applications. Nevertheless, while in certain applications we can estimate, adapt to, or hypothesize about the working conditions, most commercial applications and almost all forensic approaches to speaker recognition remain open problems, for several reasons: environmental conditions are usually rapidly changing or highly degraded, acquisition processes are not always under control, and suspects may exhibit a low degree of cooperativeness, all of which induce a wide range of variability sources in speech utterances. Realistic approaches to speaker identification must therefore take all these variability factors into account. In order to isolate, analyze, and measure the effect of some of the main variability sources found in real commercial and forensic applications, and their influence on automatic recognition systems, a specific large speech database in Castilian Spanish called AHUMADA (/aumada/) has been designed and acquired under controlled conditions. In this paper, together with a detailed description of the database, experimental results covering different speech variability factors are also presented.
International Conference on Spoken Language Processing | 1996
Javier Ortega-Garcia; Joaquin Gonzalez-Rodriguez
Real-world conditions differ from ideal or laboratory conditions, causing mismatch between training and testing phases and, consequently, performance degradation in automatic speaker recognition systems. Many strategies have been adopted to cope with acoustical degradation; some speaker identification applications require a clean sample of speech prior to the recognition stage. This has justified the use of procedures that reduce the impact of acoustical noise on the desired signal, giving rise to noisy-speech enhancement techniques. A comparative performance analysis is presented of single-channel (classical spectral subtraction and derived alternatives), dual-channel (adaptive noise cancelling), and multi-channel (microphone-array) speech enhancement techniques, with different types of noise at different SNRs, as a pre-processing stage to an ergodic HMM-based speaker recognizer.
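The single-channel baseline, classical magnitude spectral subtraction, can be sketched on one analysis frame as follows; the synthetic signal, frame length, and flat noise-magnitude estimate are assumptions for illustration (a real system averages the noise spectrum over many speech-free frames).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: a clean 440 Hz tone degraded by additive white noise.
fs = 8000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.3 * rng.standard_normal(fs)

# Windowed analysis frame and its short-time spectrum.
frame = noisy[:256] * np.hanning(256)
spec = np.fft.rfft(frame)
mag, phase = np.abs(spec), np.angle(spec)

# Crude flat noise-magnitude estimate (assumption for this sketch).
noise_mag = np.full_like(mag, 2.4)

# Magnitude spectral subtraction with half-wave rectification to avoid
# negative magnitudes; the noisy phase is kept for resynthesis.
enhanced_mag = np.maximum(mag - noise_mag, 0.0)
enhanced = np.fft.irfft(enhanced_mag * np.exp(1j * phase))
```

The rectification step is also the source of the well-known "musical noise" artifact that the derived alternatives mentioned in the abstract try to mitigate.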
Lecture Notes in Computer Science | 2003
Javier Ortega-Garcia; Julian Fierrez-Aguilar; J. Martin-Rello; Joaquin Gonzalez-Rodriguez
In this contribution, a function-based approach to on-line signature verification is presented. An initial set of 8 time sequences is used; first and second time derivatives of each function are then computed, so 24 time sequences are considered simultaneously. A valuable function normalization is applied as a preliminary stage to a continuous-density HMM-based complete signal modeling scheme over these 24 functions, so no derived statistical features are employed, fully exploiting the HMM's capability to model the inherent time structure of the dynamic process. In the verification stage, scores are treated not as absolute values but as values relative to a reference population, permitting the use of a best-reference score-normalization technique. Results on the MCYT_Signature sub-corpus with 50 clients are presented, attaining an outstanding best figure of 0.35% EER for skilled forgeries when signer-dependent thresholds are considered.
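The derivative-augmentation step (8 captured functions expanded to 24 simultaneous sequences, then normalized per sequence) can be sketched as follows; the sequence length, the random placeholder data, and zero-mean/unit-variance normalization are assumptions, standing in for the actual tablet functions and the paper's normalization.

```python
import numpy as np

# Hypothetical on-line signature sample: T time steps of 8 captured
# functions (e.g. position, pressure, pen angles on a real tablet).
T = 100
rng = np.random.default_rng(1)
funcs = rng.standard_normal((T, 8))

# First and second discrete time derivatives of every function,
# stacked with the originals: 8 -> 24 simultaneous sequences.
d1 = np.gradient(funcs, axis=0)
d2 = np.gradient(d1, axis=0)
features = np.concatenate([funcs, d1, d2], axis=1)

# Per-sequence normalization (zero mean, unit variance assumed here)
# before feeding 24-dimensional observation vectors to a continuous HMM.
features = (features - features.mean(axis=0)) / features.std(axis=0)
```

Each row of `features` is one observation vector for the continuous-density HMM, so the model sees the raw dynamics and their velocities/accelerations rather than derived statistical features.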
International Conference on Image Processing | 2001
Danilo Simon-Zorita; Javier Ortega-Garcia; Santiago Cruz-Llanas; Joaquin Gonzalez-Rodriguez
A complete minutiae extraction scheme for automatic fingerprint recognition systems is presented. The proposed method introduces improved alternatives for the image enhancement process, leading to increased reliability in the minutiae extraction task. In the first stages, image normalization and the orientation field of the fingerprint are computed. The local orientation of the ridges serves as a parameter for the subsequent processing stages. Details of the adaptive morphological filtering used for ridge extraction and background noise elimination are described. Evaluation results are obtained from both inked and scanned fingerprints. Conclusions in terms of the Goodness Index (GI), which compares automatically extracted minutiae with manually extracted ones, are provided to assess the global performance of this approach.
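The orientation-field stage is commonly computed with the standard gradient method, sketched below on a synthetic ridge pattern; the patch, block size (one block covering the whole patch), and ridge frequency are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

# Synthetic "fingerprint" patch: parallel ridges oriented at 135 degrees
# (gradient direction at 45 degrees), with a period of 8 pixels.
x, y = np.meshgrid(np.arange(64), np.arange(64))
img = np.sin(2 * np.pi * (x + y) / 8.0)

# Pixel gradients (np.gradient returns the axis-0 then axis-1 gradient).
gy, gx = np.gradient(img.astype(float))

# Block-wise least-squares orientation estimate (gradient method):
# average the doubled-angle gradient tensor over the block, then take
# half the angle of the resulting vector.
gxx = np.sum(gx * gx)
gyy = np.sum(gy * gy)
gxy = np.sum(gx * gy)
theta = 0.5 * np.arctan2(2 * gxy, gxx - gyy)   # dominant gradient direction
ridge_orientation = theta + np.pi / 2          # ridges run perpendicular
```

The doubled-angle averaging is what makes opposite gradient directions (either side of a ridge) reinforce rather than cancel, which is why this estimator is robust enough to drive the later adaptive filtering stages.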
IEEE Aerospace and Electronic Systems Magazine | 2006
Marcos Faundez-Zanuy; Julian Fierrez-Aguilar; Javier Ortega-Garcia; Joaquin Gonzalez-Rodriguez
Interest in biometric recognition systems for person authentication has grown considerably in the last decade. One of the key factors in this success is the availability of biometric databases, which are of utmost importance for defining common benchmarks that enable consistent comparison of competing recognition strategies. The design, acquisition, and collection of these databases is one of the most time- and resource-consuming tasks for the research community, especially in the case of multimodal databases including multiple biometric traits and acquisition sessions. In this paper, the most important publicly available multimodal biometric databases are summarized, and the contents of some new multimodal databases under development are outlined.
Archive | 2002
Javier Ortega-Garcia; Joaquin Gonzalez-Rodriguez; Danilo Simon-Zorita; Santiago Cruz-Llanas
In this chapter, several biometric recognition systems, based on voice, fingerprint, face, and signature, are presented. The state-of-the-art technologies for these biometric characteristics are described in depth: minutiae-based fingerprint matching, GMM-based speaker verification, on-line HMM-based signature verification, and PCA- or LDA-based face recognition. We also address multimodality and data fusion in biometric systems; finally, some application strategies and real-world demos are described.
IEEE Aerospace and Electronic Systems Magazine | 2000
Javier Ortega-Garcia; Joaquin Gonzalez-Rodriguez; Santiago Cruz-Llanas
Speaker recognition is a major task when security applications with speech input are needed. Nevertheless, speech variability is a main degradation factor in speaker recognition tasks: both intra-speaker and external variability sources produce mismatch between training and testing phases. In this contribution, channel and inter-session variability are explored in order to enable practical automatic systems for both commercial and forensic speaker recognition. Results are presented using AHUMADA, a subset of the large speaker-recognition-oriented GAUDI database in Spanish.
Lecture Notes in Computer Science | 2004
Julian Fierrez-Aguilar; Javier Ortega-Garcia; Joaquin Gonzalez-Rodriguez
Score normalization methods in biometric verification, which encompass the more traditional user-dependent decision thresholding techniques, are reviewed from a hypothesis-testing point of view and classified into test-dependent and target-dependent methods. The focus of the paper is on target-dependent score normalization techniques, which are further classified into impostor-centric, target-centric, and target-impostor methods. These are applied to an on-line signature verification system on signature data from the First International Signature Verification Competition (SVC 2004). In particular, a target-centric technique based on a cross-validation procedure provides the best relative performance improvement for both skilled (19%) and random forgeries (53%), as compared to the raw verification performance without score normalization (7.14% and 1.06% Equal Error Rate for skilled and random forgeries, respectively).
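As a minimal illustration of the impostor-centric family, a Z-norm-style normalization rescales each raw score by the claimed target's impostor score statistics; the score values below are invented for the sketch and do not come from the SVC 2004 data.

```python
import numpy as np

# Raw similarity scores against one claimed target (hypothetical values):
# a cohort of known-impostor scores and one test score to normalize.
impostor_scores = np.array([0.12, 0.20, 0.15, 0.18, 0.10])
test_score = 0.45

# Impostor-centric (Z-norm flavour) normalization: express the test
# score relative to the target's impostor score distribution, so that
# a single global threshold behaves like a user-dependent one.
mu, sigma = impostor_scores.mean(), impostor_scores.std()
z_score = (test_score - mu) / sigma
```

Target-centric and target-impostor methods follow the same pattern but estimate the statistics from genuine scores, or from both genuine and impostor scores, of the claimed target.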
International Carnahan Conference on Security Technology | 2000
Santiago Cruz-Llanas; Javier Ortega-Garcia; E. Martinez-Torrico; Joaquin Gonzalez-Rodriguez
The paper analyzes the performance of two different state-of-the-art automatic face recognition systems. One of the key issues in face recognition is the selection of suitable features for representing identity in facial images; multivariate analysis and Gabor analysis are alternative methods for this feature extraction stage. Consequently, two different approaches to the face recognition problem are proposed, one based on multivariate analysis and the other on Gabor analysis. A brief review of the theoretical foundations of both systems, together with comparative tests, is provided.
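The multivariate-analysis route is typified by PCA ("eigenfaces"), sketched below; the training-set size, image resolution, number of retained components, and random placeholder data are assumptions for illustration, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical training set: 20 face images of 16x16 pixels, flattened.
faces = rng.standard_normal((20, 256))

# PCA ("eigenfaces"): principal axes of the mean-centred training set.
mean_face = faces.mean(axis=0)
centred = faces - mean_face
# The right singular vectors of the data matrix are the eigenvectors
# of the sample covariance matrix.
_, _, vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = vt[:10]                 # keep the 10 leading components

# Identity is represented by the projection coefficients; matching
# compares these low-dimensional vectors (e.g. Euclidean distance).
probe = rng.standard_normal(256)
coeffs = eigenfaces @ (probe - mean_face)
```

The Gabor-analysis alternative instead builds the feature vector from responses of a bank of oriented band-pass Gabor filters at selected facial locations, trading the global appearance representation for a local, multi-scale one.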