
Publication


Featured research papers published by Girija Chetty.


Biometrics Symposium: Special Session on Research at the Biometric Consortium Conference | 2006

Multi-Level Liveness Verification for Face-Voice Biometric Authentication

Girija Chetty; Michael Wagner

In this paper we present the details of the multi-level liveness verification (MLLV) framework proposed for realizing a secure face-voice biometric authentication system that can thwart different types of audio and video replay attacks. The proposed MLLV framework, based on novel feature extraction and multimodal fusion approaches, uncovers the static and dynamic relationships between voice and face information in speaking faces, and allows multiple levels of security. Experiments with three different speaking-face corpora, VidTIMIT, UCBN and AVOZES, show a significant improvement in system performance in terms of DET curves and equal error rates (EER) for different types of replay and synthesis attacks.


International Symposium on Neural Networks | 2012

A novel image watermarking scheme using Extreme Learning Machine

Anurag Mishra; Amita Goel; Ram Pal Singh; Girija Chetty; Lavneet Singh

In this paper, a novel digital image watermarking algorithm based on a fast neural network known as the Extreme Learning Machine (ELM) is proposed for two grayscale images. The ELM algorithm is very fast and completes its training in milliseconds, unlike counterparts such as backpropagation networks (BPN). The proposed watermarking algorithm trains the ELM using the low-frequency coefficients of the grayscale host image in the transform domain. The trained ELM produces as output a sequence of 1024 real numbers, normalized as per N(0, 1). This sequence is used as the watermark and is embedded within the host image using Cox's formula to obtain the signed image. The visual quality of the signed images is evaluated by PSNR; high PSNR values indicate that the quality of the signed images is quite good. The computed high value of SIM(X, X*) establishes that the extraction process is quite successful and that, overall, the algorithm finds good practical applications, especially in situations that warrant meeting time constraints.
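As a rough illustration of the embedding and quality-evaluation steps described above, the sketch below applies Cox et al.'s multiplicative spread-spectrum rule, v_i' = v_i * (1 + alpha * x_i), and measures PSNR. The coefficient values, watermark strength alpha and use of uniform random numbers as stand-in transform coefficients are illustrative assumptions, not the paper's actual data:

```python
import math
import random

def cox_embed(coeffs, watermark, alpha=0.05):
    """Cox et al.'s multiplicative spread-spectrum rule: v' = v * (1 + alpha * x)."""
    return [v * (1 + alpha * x) for v, x in zip(coeffs, watermark)]

def psnr(original, modified, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-length sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(original, modified)) / len(original)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

random.seed(0)
host = [random.uniform(50, 200) for _ in range(1024)]  # stand-in transform coefficients
mark = [random.gauss(0, 1) for _ in range(1024)]       # N(0, 1) watermark sequence
signed = cox_embed(host, mark)
print(f"PSNR of signed vs. host: {psnr(host, signed):.1f} dB")
```

A larger alpha makes the watermark more robust to attack but lowers the PSNR of the signed image.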


Image and Vision Computing | 2008

Robust face-voice based speaker identity verification using multilevel fusion

Girija Chetty; Michael Wagner

In this paper, we propose a robust multilevel fusion strategy involving cascaded multimodal fusion of audio-lip-face motion, correlation and depth features for biometric person authentication. The proposed approach combines the information from different audio-video based modules, namely an audio-lip motion module, an audio-lip correlation module and a 2D+3D motion-depth fusion module, and performs a hybrid cascaded fusion in an automatic, unsupervised and adaptive manner, adapting to the local performance of each module. This is done by taking the output-score based reliability estimates (confidence measures) of each module into account. The module weightings are determined automatically such that the reliability measure of the combined scores is maximised. To test the robustness of the proposed approach, the audio and visual speech (mouth) modalities are degraded to emulate various levels of train/test mismatch, employing additive white Gaussian noise for the audio and JPEG compression for the video signals. The results show improved fusion performance for a range of tested levels of audio and video degradation, compared to the individual module performances. Experiments on the 3D stereovision database AVOZES show that, at severe levels of audio and video mismatch, the audio, mouth, 3D-face and tri-module (audio-lip motion, correlation and depth) fusion EERs were 42.9%, 32%, 15% and 7.3%, respectively, for the biometric person authentication task.
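The adaptive weighting idea above (modules with higher output-score reliability contribute more to the fused score) can be sketched as a normalised weighted sum. The module scores and reliability values below are hypothetical, not figures from the paper:

```python
def fuse_scores(scores, reliabilities):
    """Weighted-sum fusion: each module's score is weighted by its
    reliability estimate, with the weights normalised to sum to 1."""
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    return sum(w * s for w, s in zip(weights, scores))

# Hypothetical scores and confidences for three modules:
# audio-lip motion, audio-lip correlation, 2D+3D motion-depth.
scores = [0.42, 0.80, 0.91]
reliab = [0.20, 0.50, 0.90]  # e.g. a noisy audio channel earns a low weight
print(round(fuse_scores(scores, reliab), 3))  # prints 0.814
```

A degraded module thus pulls the fused score toward the more trustworthy modalities instead of dragging the whole system down.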


IEEE International Conference on Fuzzy Systems | 2010

Biometric liveness checking using multimodal fuzzy fusion

Girija Chetty

In this paper we propose a novel fusion protocol based on fuzzy fusion of face and voice features for checking liveness in secure identity authentication systems based on face and voice biometrics. Liveness checking can detect fraudulent impostor attacks on security systems, and ensure that biometric cues are acquired from a live person who is actually present at the time of capture for authenticating the identity. The proposed fuzzy fusion of audio-visual features is based on mutual dependency models which extract the spatio-temporal correlation between face and voice dynamics during speech production. Performance evaluation in terms of DET (Detection Error Trade-off) curves and EERs (Equal Error Rates) on publicly available audiovisual speech databases shows a significant improvement in the performance of the proposed fuzzy fusion of face-voice features based on mutual dependency models over conventional fusion techniques.
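For readers unfamiliar with the EER metric used above: it is the operating point on the DET curve where the false-accept rate equals the false-reject rate. A minimal sketch, using made-up match scores rather than any of the paper's data:

```python
def eer(genuine, impostor):
    """Equal error rate: sweep thresholds and return the point where the
    false-accept rate (impostors accepted) and the false-reject rate
    (genuine users rejected) are closest."""
    best_gap, best_rate = 1.0, 1.0
    for t in sorted(set(genuine + impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)
        frr = sum(s < t for s in genuine) / len(genuine)
        if abs(far - frr) < best_gap:
            best_gap, best_rate = abs(far - frr), (far + frr) / 2
    return best_rate

genuine  = [0.90, 0.80, 0.75, 0.70, 0.60, 0.55]  # hypothetical match scores
impostor = [0.65, 0.50, 0.40, 0.35, 0.30, 0.20]
print(f"EER = {eer(genuine, impostor):.1%}")  # prints "EER = 16.7%"
```

A lower EER means the genuine and impostor score distributions overlap less, i.e. a better verifier.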


International Conference on Neural Information Processing | 2011

Multimodal Identity Verification Based on Learning Face and Gait Cues

Emdad Hossain; Girija Chetty

In this paper we propose a novel multimodal Bayesian approach based on PCA-LDA processing for person identification from low-resolution surveillance video, with cues extracted from gait and face biometrics. The experimental evaluation of the proposed scheme on a publicly available database [2] showed that the combined PCA-LDA face and gait features can lead to powerful identity verification, capture the inherent multimodality in walking gait patterns, and discriminate identities in low-resolution surveillance videos.
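As background on the LDA half of the PCA-LDA pipeline, here is a minimal two-class Fisher discriminant for 2-D features, w = Sw^-1 (m_a - m_b). In the paper the inputs would be PCA-reduced face/gait features; the points below are toy data chosen only to show the separation:

```python
def mean_vec(vecs):
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]

def fisher_direction(class_a, class_b):
    """Two-class Fisher LDA for 2-D features: w = Sw^-1 (m_a - m_b),
    where Sw is the pooled within-class scatter matrix."""
    ma, mb = mean_vec(class_a), mean_vec(class_b)
    s = [[0.0, 0.0], [0.0, 0.0]]
    for cls, m in ((class_a, ma), (class_b, mb)):
        for v in cls:
            d0, d1 = v[0] - m[0], v[1] - m[1]
            s[0][0] += d0 * d0; s[0][1] += d0 * d1
            s[1][0] += d1 * d0; s[1][1] += d1 * d1
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    # Multiply dm by the inverse of the 2x2 scatter matrix.
    return [( s[1][1] * dm[0] - s[0][1] * dm[1]) / det,
            (-s[1][0] * dm[0] + s[0][0] * dm[1]) / det]

# Toy 2-D feature points for two identities.
a = [[1, 2], [2, 3], [3, 3]]
b = [[6, 5], [7, 8], [8, 8]]
w = fisher_direction(a, b)
proj = lambda v: w[0] * v[0] + w[1] * v[1]
print([round(proj(v), 2) for v in a], [round(proj(v), 2) for v in b])
```

Projecting every point onto w yields 1-D scores in which the two classes do not overlap, which is exactly the discriminative property LDA adds on top of PCA's variance-preserving reduction.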


Network and System Security | 2010

Face Gender Recognition Based on 2D Principal Component Analysis and Support Vector Machine

Len Bui; Dat Tran; Xu Huang; Girija Chetty

This paper presents a novel method for solving the face gender recognition problem. The method employs 2D Principal Component Analysis (2DPCA), one of the prominent methods for extracting feature vectors, and the Support Vector Machine (SVM), a powerful discriminative method for classification. Experiments for the proposed approach have been conducted on the FERET data set, and the results show that the proposed method can improve classification rates.
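A minimal sketch of the 2DPCA feature-extraction step under its usual formulation (Yang et al.): build the image scatter matrix G = E[(A - mean_A)^T (A - mean_A)] and project each image's rows onto its dominant eigenvector, Y = A x. The tiny two-column "images" are toy data, and the closed-form eigenvector covers only the 2x2 case, so this is an illustration of the technique rather than the paper's implementation:

```python
import math

def image_scatter(images):
    """2DPCA image scatter matrix G = mean((A - mean_A)^T (A - mean_A))
    over images given as lists of rows (here with n = 2 columns)."""
    k, m, n = len(images), len(images[0]), len(images[0][0])
    mean_img = [[sum(img[r][c] for img in images) / k for c in range(n)]
                for r in range(m)]
    g = [[0.0] * n for _ in range(n)]
    for img in images:
        d = [[img[r][c] - mean_img[r][c] for c in range(n)] for r in range(m)]
        for i in range(n):
            for j in range(n):
                g[i][j] += sum(d[r][i] * d[r][j] for r in range(m)) / k
    return g

def leading_eigvec_2x2(g):
    """Closed-form dominant (unit) eigenvector of a symmetric 2x2 matrix."""
    a, b, c = g[0][0], g[0][1], g[1][1]
    lam = (a + c + math.sqrt((a - c) ** 2 + 4 * b * b)) / 2
    v = (b, lam - a) if abs(b) > 1e-12 else ((1.0, 0.0) if a >= c else (0.0, 1.0))
    norm = math.hypot(v[0], v[1])
    return (v[0] / norm, v[1] / norm)

# Toy "images": scaled copies of one 2x2 pattern, so all variation
# lies along a single column direction.
images = [[[s * 1, s * 2], [s * 3, s * 4]] for s in (1, 2, 3)]
x = leading_eigvec_2x2(image_scatter(images))
features = [[row[0] * x[0] + row[1] * x[1] for row in img] for img in images]  # Y = A x
print([round(c, 3) for c in x])
```

Unlike classical PCA, 2DPCA never flattens the image into a long vector, so the scatter matrix stays small (n x n for n image columns) and cheap to eigendecompose; the projected features would then feed the SVM classifier.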


Information Sciences, Signal Processing and their Applications | 2005

Investigating feature-level fusion for checking liveness in face-voice authentication

Girija Chetty; Michael Wagner

In this paper we propose a feature-level fusion approach for checking liveness in face-voice person authentication. Liveness verification experiments conducted on two audiovisual databases, VidTIMIT and UCBN, show that feature-level fusion is indeed a powerful technique for checking liveness in systems that are vulnerable to replay attacks, as it preserves synchronisation between closely coupled modalities, such as voice and face, through the various stages of authentication. An improvement in error rate of the order of 25-40% is achieved in replay attack experiments by using feature-level fusion of acoustic and visual feature vectors from the lip region, compared to the classical late fusion approach.
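The distinction drawn above can be made concrete: feature-level fusion joins the synchronised per-frame acoustic and lip-region vectors before classification, while late fusion combines only the two classifiers' final scores. The feature dimensions and values below are hypothetical:

```python
def feature_level_fusion(acoustic, visual):
    """Concatenate synchronised per-frame acoustic and visual (lip-region)
    feature vectors before classification, preserving their coupling."""
    return [a + v for a, v in zip(acoustic, visual)]

def late_fusion(audio_score, visual_score, w=0.5):
    """Classical late fusion: combine only the two classifiers' output scores."""
    return w * audio_score + (1 - w) * visual_score

# Hypothetical per-frame features: 3 acoustic coefficients + 2 lip-shape parameters.
acoustic = [[0.12, -0.30, 0.05], [0.10, -0.28, 0.07]]
visual   = [[1.4, 0.9], [1.5, 0.8]]
fused = feature_level_fusion(acoustic, visual)
print(fused[0])  # one joint 5-dimensional audio-visual vector per frame
```

A replayed audio track paired with a still photo breaks the frame-by-frame coupling in the concatenated vectors, which is why the feature-level variant catches replay attacks that score-level fusion misses.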


International Conference on Neural Information Processing | 2013

Multimodal Feature Learning for Gait Biometric Based Human Identity Recognition

Emdad Hossain; Girija Chetty

In this paper we propose a novel multimodal feature learning technique, based on deep learning, for a gait-biometric-based human identification scheme using surveillance videos. Experimental evaluation of the proposed deep learning features and standard PCA/LDA features, in combination with NN/MLP/SVM/SMO classifiers, on different datasets from two gait databases (the publicly available CASIA multiview multispectral database and the UCMG multiview database) shows a significant improvement in recognition accuracy with the proposed fused deep learning features.


Digital Image Computing: Techniques and Applications | 2011

Blind Video Tamper Detection Based on Fusion of Source Features

Julian Goodwin; Girija Chetty

In this paper, we propose novel algorithmic models based on information fusion and feature transformation in a cross-modal subspace for different types of residue features, extracted from several intra-frame and inter-frame pixel sub-blocks in video sequences, for detecting digital video tampering or forgery. An evaluation of the proposed residue features (the noise residue features and the quantization features), their transformation in the cross-modal subspace, and their multimodal fusion, for an emulated copy-move tamper scenario, shows a significant improvement in tamper detection accuracy compared to single-mode features without transformation in the cross-modal subspace.


International Journal of Biometrics | 2009

Biometric person authentication with liveness detection based on audio-visual fusion

Girija Chetty; Michael Wagner

In this paper, we propose two new approaches for extracting mouth features for authenticating a person's identity with liveness checks. The novel correlated audio-lip features and tensor lip-motion features allow liveness checks to be included in person identity authentication systems, and ensure that the biometric cues are acquired from a live person who is actually present at the time of capture. Incorporating liveness check functionality in identity authentication systems can guard the system against advanced spoofing attempts such as manufactured or replayed videos.

Collaboration


Dive into Girija Chetty's collaborations.

Top Co-Authors

Dat Tran

University of Canberra
