Dijana Petrovska-Delacrétaz
Telecom & Management SudParis
Publications
Featured research published by Dijana Petrovska-Delacrétaz.
International Conference on Biometrics: Theory, Applications and Systems | 2009
Sanjay Ganesh Kanade; Dijana Petrovska-Delacrétaz; Bernadette Dorizzi
Biometrics lack revocability and privacy, while cryptography cannot verify the user's identity. By obtaining cryptographic keys using biometrics, one can achieve properties such as revocability, assurance of the user's identity, and privacy. In this paper, we propose a multi-biometric based cryptographic key regeneration scheme. Since the left and right irises of a person are uncorrelated, we treat them as two independent biometrics and combine them in our system. We propose a novel idea for feature-level fusion through weighted error correction to obtain a multi-biometric feature vector, which is used to generate a secure template. A shuffling key, protected by a password, is used to shuffle the error-correcting-code data. The password helps improve the revocability, privacy, and security of the system. We succeed in generating 147-bit-long keys with as much entropy, at 0% FAR and 0.18% FRR on the NIST-ICE database.
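The password-protected shuffling idea in this abstract can be illustrated with a toy sketch: a permutation (the "shuffling key") is derived from a password and applied to blocks of error-correcting-code data. The hash-seeded PRNG and block granularity below are illustrative assumptions, not the paper's actual construction.

```python
import hashlib
import random

def shuffling_key(password: str, n: int) -> list:
    # Derive a permutation of n positions from the password.
    # (A hash-seeded PRNG is an illustrative assumption here.)
    seed = int.from_bytes(hashlib.sha256(password.encode()).digest(), "big")
    perm = list(range(n))
    random.Random(seed).shuffle(perm)
    return perm

def shuffle(blocks: list, perm: list) -> list:
    # Reorder the ECC data blocks according to the permutation.
    return [blocks[i] for i in perm]

def unshuffle(blocks: list, perm: list) -> list:
    # Invert the permutation; only feasible with the correct password.
    out = [None] * len(perm)
    for pos, i in enumerate(perm):
        out[i] = blocks[pos]
    return out
```

A wrong password yields a different permutation, so the unshuffled data no longer decodes to the enrolled key, which is what gives the scheme its revocability.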
Multimodal Signals: Cognitive and Algorithmic Issues | 2009
Gérard Chollet; Anna Esposito; Annie Gentes; Patrick Horain; Walid Karam; Zhenbo Li; Catherine Pelachaud; Patrick Perrot; Dijana Petrovska-Delacrétaz; Dianle Zhou; Leila Zouari
Virtual worlds are developing rapidly over the Internet. They are visited by avatars and staffed with Embodied Conversational Agents (ECAs). An avatar is a representation of a physical person. Each person controls one or several avatars and usually receives feedback from the virtual world on an audio-visual display. Ideally, all senses should be used to feel fully embedded in a virtual world. Sound, vision and sometimes touch are the available modalities. This paper reviews the technological developments which enable audio-visual interactions in virtual and augmented reality worlds. Emphasis is placed on speech and gesture interfaces, including talking face analysis and synthesis.
Advanced Concepts for Intelligent Vision Systems | 2009
M. Anouar Mellakh; Anis Chaari; Souhila Guerfi; Johan D'Hose; Joseph Colineau; Sylvie Lelandais; Dijana Petrovska-Delacrétaz; Bernadette Dorizzi
In this paper, the first evaluation campaign on 2D face images using the multimodal IV2 database is presented. The five appearance-based algorithms in competition are evaluated on four experimental protocols, including experiments with challenging illumination and pose variability. The results confirm the advantages of Linear Discriminant Analysis (LDA) and the importance of the training set for Principal Component Analysis (PCA) based approaches. The experiments show the robustness of the Gabor-based approach combined with LDA in coping with challenging face recognition conditions. This evaluation demonstrates the value and richness of the IV2 multimodal database.
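The PCA-then-LDA pipeline compared in this campaign can be sketched minimally as follows. This is illustrative only: the synthetic clusters stand in for face images, the two-class Fisher direction stands in for the campaign's multi-class LDA, and no Gabor features are reproduced.

```python
import numpy as np

def pca(X: np.ndarray, k: int):
    # Center the data and take the top-k right singular vectors
    # as the (eigenface-style) projection basis.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def fisher_direction(X0: np.ndarray, X1: np.ndarray) -> np.ndarray:
    # Two-class LDA: w = Sw^{-1} (m1 - m0), with a small ridge
    # term for numerical stability.
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov(X0.T, bias=True) * len(X0)
          + np.cov(X1.T, bias=True) * len(X1))
    return np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
```

The abstract's point about the PCA training set shows up directly here: the basis returned by `pca` depends entirely on which images `X` contains.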
Archive | 1999
Gérard Chollet; Jan Cernocký; Guillaume Gravier; Jean Hennebert; Dijana Petrovska-Delacrétaz; François Yvon
Automatic Speech Processing (Speech Recognition, Coding, Synthesis, Language Identification, Speaker Verification, Interpreting Telephony, etc.) has progressed to a level which allows its integration in the context of Interactive Voice Servers (IVS). The description of a personal telephone attendant ('Majordome') focuses on some of the issues in the development of IVS. In particular, users should be allowed to dialogue with automatic systems over the telephone in their native language. To achieve this goal, we propose an approach called ALISP (Automatic Language Independent Speech Processing). The need for ALISP is justified and some of the corresponding tools are described. Applications to very low bit-rate coders, automatic speech recognition and speaker verification illustrate our proposal.
2007 IEEE Workshop on Automatic Identification Advanced Technologies | 2007
A. El Hannani; Dijana Petrovska-Delacrétaz
Various studies have shown that high-level features, such as linguistic content, pronunciation and idiolectal word usage, convey additional speaker information and can be added to the low-level features in order to increase the robustness of the system. Usually, these features are extracted by analyzing streams produced by phonetic speech recognition systems. Two of the major problems that arise when phone-based systems are being developed are the possible mismatches between the development and evaluation data and the lack of transcribed databases. We propose in this paper to replace the phone-based approaches by data-driven segmentation methodologies. Our data-driven high-level systems do not use transcribed data and can easily be applied to development data, minimizing the mismatches. These systems were fused with a state-of-the-art acoustic Gaussian mixture model (GMM) system. Results obtained on the NIST 2006 speaker recognition evaluation data show that the data-driven features provide complementary information, and the resulting fused system reduced the error rate in comparison to the GMM baseline system.
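The fusion of the acoustic GMM baseline with the data-driven high-level streams can be illustrated as a simple weighted sum at the score level. The fixed weight and threshold below are assumptions for illustration; the paper's actual fusion was trained on development data.

```python
def fuse_scores(gmm_scores: list, high_level_scores: list, w: float = 0.7) -> list:
    # Weighted-sum score-level fusion: w balances the acoustic GMM
    # baseline against the data-driven high-level stream.
    return [w * g + (1.0 - w) * h
            for g, h in zip(gmm_scores, high_level_scores)]

def decide(scores: list, threshold: float = 0.5) -> list:
    # Accept a verification trial when the fused score clears the threshold.
    return [s >= threshold for s in scores]
```

The complementarity claim in the abstract is exactly what makes such a fusion pay off: when the two streams err on different trials, the weighted sum separates genuine and impostor scores better than either stream alone.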
Security and Privacy in Biometrics | 2013
Sanjay Ganesh Kanade; Dijana Petrovska-Delacrétaz; Bernadette Dorizzi
Multi-biometric systems have several advantages over uni-biometric systems, such as better verification accuracy, a larger feature space to accommodate more subjects, and higher security against spoofing. Unfortunately, as in the case of uni-biometric systems, multi-biometric systems also face the problems of non-revocability, lack of template diversity, and possible privacy compromise. A combination of biometrics and cryptography is a good solution to eliminate these limitations. In this chapter we present a multi-biometric cryptosystem based on the fuzzy commitment scheme, in which a crypto-biometric key is derived from multi-biometric data. An idea recently proposed by the authors, denoted FeaLingECc (Feature Level Fusion through Weighted Error Correction), is used for the multi-biometric fusion. FeaLingECc allows fusion of biometric modalities with different performances (e.g., face + iris). This scheme is adapted to a multi-unit system based on two irises and to a multi-modal system combining iris and face. The difficulty of obtaining the crypto-biometric key locked in the system (and in turn the reference biometric data) through a brute-force attack is 189 bits for the two-iris system and 183 bits for the iris-face system. In addition to strong keys, these systems possess revocability and template diversity and protect user privacy.
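The fuzzy commitment principle underlying this cryptosystem can be sketched with a toy repetition code: a codeword carrying the key is XOR-locked with the biometric bits, and a fresh, slightly noisy sample recovers the key through error correction. Everything below is a simplified assumption; the real system uses stronger codes and the FeaLingECc fusion.

```python
import hashlib

def rep_encode(bits: list, r: int = 3) -> list:
    # Repetition code: each key bit becomes r identical code bits.
    return [b for b in bits for _ in range(r)]

def rep_decode(code: list, r: int = 3) -> list:
    # Majority vote per block corrects up to (r - 1) // 2 flips per block.
    return [int(sum(code[i:i + r]) > r // 2) for i in range(0, len(code), r)]

def commit(key_bits: list, bio_bits: list):
    # Lock the codeword with the enrolled biometric bits via XOR; only the
    # helper data and a hash of the key are stored, not the biometrics.
    helper = [c ^ b for c, b in zip(rep_encode(key_bits), bio_bits)]
    return helper, hashlib.sha256(bytes(key_bits)).hexdigest()

def regenerate(helper: list, bio_bits: list) -> list:
    # A fresh, slightly noisy biometric sample unlocks the same key.
    return rep_decode([h ^ b for h, b in zip(helper, bio_bits)])
```

Revocability falls out of the construction: committing the same biometric to a new random key yields entirely new helper data, so a compromised template can simply be reissued.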
Open Source Systems | 2008
Aurélien Mayoue; Dijana Petrovska-Delacrétaz
This paper focuses on the common evaluation framework which was developed by the BioSecure Network of Excellence during the European FP6 project BioSecure (Biometrics for Secure authentication). This framework, composed of open-source reference systems, publicly available databases, assessment protocols and benchmarking results, introduces a new experimental methodology for conducting, reporting and comparing experiments on biometric systems, and contributes to standardisation efforts. Its use makes it possible to link different published works, and it provides the tools needed to ensure the reproducibility of benchmarking experiments. This framework can be considered a reliable and innovative way to evaluate the progress of research in the field of biometrics.
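The benchmarking results such a framework reports ultimately reduce to error rates computed from genuine and impostor score lists. A minimal sketch of that computation (not BioSecure's actual tooling):

```python
def far_frr(genuine_scores: list, impostor_scores: list, threshold: float):
    # FAR: fraction of impostor trials wrongly accepted; FRR: fraction of
    # genuine trials wrongly rejected, at a given decision threshold.
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr
```

Fixing the score lists, the protocol, and this computation is what makes results from different laboratories directly comparable, which is the framework's stated goal.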
International Conference on Biometrics: Theory, Applications and Systems | 2007
A. El Hannani; Dijana Petrovska-Delacrétaz
Recognition of speaker identity based on modeling the streams produced by phonetic decoders (phonetic speaker recognition) has gained popularity during the past few years. Two of the major problems that arise when phone-based systems are being developed are the possible mismatches between the development and evaluation data and the lack of transcribed databases. Data-driven segmentation techniques provide a potential solution to these problems because they do not use transcribed data and can easily be applied to development data, minimizing the mismatches. In this paper we compare speaker recognition results using phonetic and data-driven decoders. To this end, we have compared the results obtained with two sets of speaker verification systems: the first one based on data-driven units and the second one on phonetic units. Results obtained on the NIST 2006 Speaker Recognition Evaluation data show that the data-driven approach is comparable to the phonetic one and that further improvements can be achieved by combining both approaches.
Progress in Nonlinear Speech Processing | 2007
Dijana Petrovska-Delacrétaz; Asmaa El Hannani; Gérard Chollet
Odyssey | 2001
Jamal Kharroubi; Dijana Petrovska-Delacrétaz; Gérard Chollet