Andreas Humm
University of Fribourg
Publications
Featured research published by Andreas Humm.
Systems, Man, and Cybernetics | 2009
Andreas Humm; Jean Hennebert; Rolf Ingold
In this paper, we report on the development of an efficient user authentication system based on the combined acquisition of online pen and speech signals. The novelty of our approach is the simultaneous recording of these two modalities, simply asking the user to utter what she/he is writing. The main benefit of this multimodal approach is better accuracy at no extra cost in terms of access time or inconvenience. Another benefit comes from the increased difficulty for forgers attempting imitation attacks, as two signals need to be reproduced. We compare two potential scenarios of use. The first, called spoken signatures, has the user sign and say the content of the signature. The second, spoken handwriting, prompts the user to write and read the content of sentences randomly extracted from a text. Data for these two scenarios have been recorded from a set of 70 users. In the first part of this paper, we describe the acquisition procedure and comment on the viability and usability of such simultaneous recordings. Our conclusions are supported by a short survey performed with the users. In the second part, we present the authentication systems that we have developed for both scenarios. More specifically, our strategy was to model both streams of data independently and to perform fusion at the score level. Starting from a state-of-the-art modeling algorithm based on Gaussian Mixture Models trained with an Expectation-Maximization procedure, we report on several significant improvements. As a general observation, using both modalities significantly outperforms either modality used alone.
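As a minimal sketch of the general recipe the abstract describes (not the authors' code), one GMM per modality trained with EM and a weighted-sum fusion at the score level might look like the following; the feature dimensions, component counts, and weight are illustrative assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in features: rows are frames, columns are hypothetical pen / voice features.
pen_train = rng.normal(0.0, 1.0, size=(200, 4))
voice_train = rng.normal(0.0, 1.0, size=(200, 12))

# One GMM per modality; scikit-learn fits these with the EM algorithm.
pen_gmm = GaussianMixture(n_components=8, random_state=0).fit(pen_train)
voice_gmm = GaussianMixture(n_components=8, random_state=0).fit(voice_train)

def fused_score(pen_test, voice_test, w=0.5):
    """Score each modality independently, then fuse at the score level
    with a simple weighted sum (w is an assumed fusion weight)."""
    s_pen = pen_gmm.score(pen_test)      # mean log-likelihood per frame
    s_voice = voice_gmm.score(voice_test)
    return w * s_pen + (1.0 - w) * s_voice

print(fused_score(pen_train[:50], voice_train[:50]))
```

In a real verifier the fused score would be compared against a decision threshold calibrated on development data; the weight `w` is typically tuned there as well.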
International Conference on Biometrics | 2007
Jean Hennebert; Renato Loeffel; Andreas Humm; Rolf Ingold
We present in this paper a new forgery scenario for dynamic signature verification systems. In this scenario, we assume that the forger has access to a static version of the genuine signature, uses dedicated software to automatically recover the dynamics of the signature, and uses these regained signatures to break the verification system. We also show that automated procedures can be built to regain signature dynamics by making some simple assumptions on how signatures are performed. We finally report on the evaluation of these procedures on the MCYT-100 signature database, on which regained versions of the signatures are generated. This set of regained signatures is used to evaluate the rejection performance of a baseline dynamic signature verification system. Results show that the regained forgeries generate many more false acceptances than the random and low-force forgeries available in the MCYT-100 database. These results clearly show that this kind of forgery attack can represent a critical security breach for signature verification systems.
International Conference on Machine Learning | 2006
Andreas Humm; Jean Hennebert; Rolf Ingold
In this paper we report on the first experimental results of a novel multimodal user authentication system based on the combined acquisition of online handwritten signature and speech modalities. In our project, the so-called CHASM signatures are recorded by asking the user to utter what he is writing. CHASM stands for Combined Handwriting and Speech Modalities, where the pen and voice signals are simultaneously recorded. We have built a baseline CHASM signature verification system for which we have conducted a complete experimental evaluation. This baseline system is composed of two Gaussian Mixture Model sub-systems that independently model the pen and voice signals. A simple fusion of both sub-systems is performed at the score level. The evaluation of the verification system is conducted on CHASM signatures taken from the MyIDea multimodal database, according to the protocols provided with the database. This allows us to draw our first conclusions with regard to the impact of time variability, to skilled versus unskilled forgery attacks, and to some training parameters. Results are also reported for the two sub-systems evaluated separately and for the global system.
ACM Multimedia | 2006
Alain Wahl; Jean Hennebert; Andreas Humm; Rolf Ingold
We present a procedure to create brute-force signature forgeries. The procedure is supported by Sign4J, a dynamic signature imitation training software tool that was built specifically to help people learn to imitate the dynamics of signatures. The main novelty of the procedure lies in a feedback mechanism that lets the user know how good the imitation is and on which part of the signature the user still has to improve. The procedure and the software are used to generate a set of brute-force signatures on the MCYT-100 database. This set of forged signatures is used to evaluate the rejection performance of a baseline dynamic signature verification system. As expected, the brute-force forgeries generate more false acceptances than the random and low-force forgeries available in the MCYT-100 database.
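A feedback mechanism of this kind needs some distance between the forger's dynamics and the genuine dynamics. As a hedged illustration (not Sign4J's actual metric), a dynamic-time-warping distance over a single dynamic trace such as pen pressure could serve as the "how good is the imitation" signal; the signals below are synthetic stand-ins:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 1-D dynamic traces
    (e.g. pen pressure over time); lower means a closer imitation."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

genuine = np.sin(np.linspace(0, 3, 50))         # stand-in genuine dynamics
attempt = np.sin(np.linspace(0, 3, 60)) * 0.9   # a forger's attempt, slightly off
print(dtw_distance(genuine, attempt))
```

Per-segment DTW costs along the warping path would additionally indicate *where* in the signature the imitation deviates most, which is the kind of localized feedback the abstract describes.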
International Conference on Acoustics, Speech, and Signal Processing | 2007
Jean Hennebert; Andreas Humm; Rolf Ingold
We report on our developments towards building a novel user authentication system using the combined acquisition of online handwritten signature and speech modalities. In our approach, signatures are recorded by asking the user to say what she/he is writing, leading to so-called spoken signatures. We have built a verification system composed of two Gaussian mixture model (GMM) sub-systems that independently model the pen and voice signals. We report on results obtained with two algorithms used for training the GMMs: expectation maximization and maximum a posteriori (MAP) adaptation. Different algorithms are also compared for fusing the scores of each modality. The evaluations are conducted on spoken signatures taken from the MyIDea multimodal database, according to the protocols provided with the database. Results are in favor of using MAP adaptation with a simple weighted-sum fusion. Results also clearly show the impact of time variability and of skilled versus unskilled forgery attacks.
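The MAP-adaptation alternative to EM mentioned here can be sketched in its classic relevance-factor form: a background ("world") model is trained on pooled data, and only the Gaussian means are pulled toward each user's enrollment data. This is a generic illustration under assumed dimensions and relevance factor, not the paper's configuration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Background GMM trained with EM on pooled data from many users.
world = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
world.fit(rng.normal(0.0, 1.0, size=(500, 3)))

def map_adapt_means(ubm, user_data, relevance=16.0):
    """MAP-adapt the component means toward the user's data.
    `relevance` controls how strongly sparse data falls back to the world model."""
    resp = ubm.predict_proba(user_data)            # (frames, components) occupancies
    n_k = resp.sum(axis=0)                         # soft frame counts per component
    # Posterior-weighted mean of the user data for each component.
    ex_k = resp.T @ user_data / np.maximum(n_k[:, None], 1e-10)
    alpha = n_k / (n_k + relevance)                # per-component adaptation weight
    return alpha[:, None] * ex_k + (1.0 - alpha[:, None]) * ubm.means_

user_frames = rng.normal(0.5, 1.0, size=(40, 3))
adapted_means = map_adapt_means(world, user_frames)
print(adapted_means.shape)  # one adapted mean vector per component
```

Components that see many user frames move almost fully to the user statistics, while rarely occupied components stay near the world model, which is what makes MAP attractive for the small enrollment sets typical of signature data.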
International Conference on Document Analysis and Recognition | 2007
Andreas Humm; Rolf Ingold; Jean Hennebert
We propose a novel and efficient user authentication system using the combined acquisition of online handwriting and speech signals. In our approach, signals are recorded by asking the user to say what she or he is simultaneously writing. This methodology has the clear advantage of acquiring two sources of biometric information at no extra cost in terms of time or inconvenience. We have built a straightforward verification system to model these signals using statistical models. It is composed of two Gaussian mixture model (GMM) sub-systems that take as input features extracted from the pen and voice signals. The system is evaluated on MyIDea, a realistic multimodal biometric database. Results show that using both speech and handwriting modalities significantly outperforms these modalities used alone. We also report on the evaluation of different training algorithms and fusion strategies.
International Conference on Biometrics: Theory, Applications and Systems | 2007
Andreas Humm; Jean Hennebert; Rolf Ingold
In this paper we report on the development of an efficient user authentication system using the combined acquisition of online signature and speech modalities. In our project, these two modalities are simultaneously recorded by asking the user to utter what she/he is writing. The main benefit of this multimodal approach is better accuracy at no extra cost in terms of access time or inconvenience. More specifically, we report on significant improvements to our initial system, which was based on Gaussian Mixture Models (GMMs) applied independently to the pen and voice signals. We show that the GMMs can be advantageously replaced by Hidden Markov Models (HMMs), provided that the number of states used for the topology is optimized and that the model parameters are trained with a Maximum a Posteriori (MAP) adaptation procedure instead of the classically used Expectation Maximization (EM). The evaluations are conducted on spoken signatures taken from the MyIDea multimodal database. Consistently with our previous evaluation of the GMM system, we observe for the HMM system that using both speech and handwriting modalities significantly outperforms these modalities used alone. We also report on the evaluation of different score fusion strategies.
Proceedings of the 6th International Symposium on Image and Signal Processing and Analysis | 2009
Andreas Humm; Rolf Ingold; Jean Hennebert
We report on results obtained with a new user authentication system based on the combined acquisition of online pen and speech signals. In our approach, the two modalities are recorded by simply asking the user to say what she or he is simultaneously writing. The main benefit of this methodology lies in the simultaneous acquisition of two sources of biometric information, giving better accuracy at no extra cost in terms of time or inconvenience. Another benefit comes from the increased difficulty for forgers attempting imitation attacks, as two signals need to be reproduced. Our first strategy was to model both streams of data independently and to perform fusion at the score level using state-of-the-art modelling tools and training algorithms. We report here on a second strategy, complementing the first, that aims at modelling both streams of data jointly. This approach uses a recognition system to compute the forced alignment of Hidden Markov Models (HMMs). The system then tries to determine synchronization patterns from the two alignments of handwriting and speech and computes a new score according to these patterns. In this paper, we present these authentication systems with a focus on the joint modelling. The evaluation is performed on MyIDea, a realistic multimodal biometric database. Results show that a combination of the different modelling strategies (independent and joint) can improve system performance on spoken handwriting data.
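One simple way to turn two forced alignments into a synchronization score, sketched here purely as an illustration (the alignment tuples and labels are hypothetical, and the paper's actual scoring is more elaborate), is to measure the fraction of time both modalities are aligned to the same unit:

```python
# Hypothetical forced alignments: (unit_label, start_time, end_time).
pen_align = [("hello", 0.0, 0.8), ("world", 0.8, 1.6)]
speech_align = [("hello", 0.1, 0.9), ("world", 0.9, 1.7)]

def overlap_ratio(a, b):
    """Fraction of covered time during which the two alignments
    agree on the active unit label."""
    events = sorted({t for _, s, e in a + b for t in (s, e)})
    def label_at(align, t):
        return next((lab for lab, s, e in align if s <= t < e), None)
    agree = total = 0.0
    for t0, t1 in zip(events, events[1:]):
        mid = (t0 + t1) / 2
        la, lb = label_at(a, mid), label_at(b, mid)
        if la is not None or lb is not None:
            total += t1 - t0
            if la == lb:
                agree += t1 - t0
    return agree / total if total else 0.0

print(overlap_ratio(pen_align, speech_align))
```

A genuine user who speaks while writing should produce highly overlapping alignments, whereas a forger reproducing the two signals separately would tend to break these synchronization patterns.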
Journal of Electronic Imaging | 2008
Andreas Humm; Jean Hennebert; Rolf Ingold
We propose a new user authentication system based on spoken signatures, where online signature and speech signals are acquired simultaneously. The main benefit of this multimodal approach is better accuracy at no extra cost for the user in terms of access time or inconvenience. Another benefit lies in better robustness against intentional forgeries, due to the extra difficulty for the forger of producing both signals. We set up an experimental framework to measure these benefits on MyIDea, a realistic multimodal biometric database that is publicly available. More specifically, we evaluate the performance of state-of-the-art modeling systems based on Gaussian mixture models (GMMs) and hidden Markov models (HMMs) applied independently to the pen and voice signals, where a simple rule-based score fusion procedure is used. We conclude that the best performance is achieved by the HMMs, provided that their topology is optimized on a per-user basis. Furthermore, we show that more precise models can be obtained through the use of maximum a posteriori (MAP) training instead of the classically used expectation maximization (EM). We also measure the impact of multisession versus monosession scenarios, and the impact of skilled versus unskilled signature forgery attacks.
International Conference on Biometrics | 2007
Andreas Humm; Jean Hennebert; Rolf Ingold
We report on consolidated results obtained with a new user authentication system based on the combined acquisition of online handwriting and speech signals. In our approach, signals are recorded by asking the user to say what she or he is simultaneously writing. This methodology has the clear advantage of acquiring two sources of biometric information at no extra cost in terms of time or inconvenience. We propose two scenarios of use: spoken signatures, where the user signs and speaks at the same time, and spoken handwriting, where the user writes and says what is written. These two scenarios are implemented and fully evaluated using a verification system based on Gaussian Mixture Models (GMMs). The evaluation is performed on MyIDea, a realistic multimodal biometric database. Results show that using both speech and handwriting modalities significantly outperforms these modalities used alone, for both scenarios. Comparisons between the spoken signature and spoken handwriting scenarios are also drawn.