Emdad Hossain
University of Canberra
Publications
Featured research published by Emdad Hossain.
international conference on neural information processing | 2011
Emdad Hossain; Girija Chetty
In this paper we propose a novel multimodal Bayesian approach based on PCA-LDA processing for person identification from low resolution surveillance video with cues extracted from gait and face biometrics. The experimental evaluation of the proposed scheme on a publicly available database [2] showed that the combined PCA-LDA face and gait features can lead to powerful identity verification and can capture the inherent multimodality in walking gait patterns and discriminate the identity from low resolution surveillance videos.
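The combined PCA-LDA feature idea above can be sketched in a few lines; this is a minimal illustration, not the paper's pipeline, and the synthetic "face" and "gait" arrays below are illustrative stand-ins for real surveillance features:

```python
# Sketch of PCA-LDA multimodal fusion: concatenate face and gait feature
# vectors, reduce dimensionality with PCA, then let LDA separate identities.
# All data are synthetic stand-ins, not the paper's surveillance database.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
y = np.repeat(np.arange(5), 40)                       # 5 identities, 40 clips each
face = rng.normal(size=(200, 60)) + y[:, None] * 0.5  # toy face features
gait = rng.normal(size=(200, 80)) + y[:, None] * 0.4  # toy gait features
fused = np.hstack([face, gait])                       # feature-level fusion

model = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
acc = cross_val_score(model, fused, y, cv=5).mean()
print(f"fused PCA-LDA accuracy: {acc:.2f}")
```

On these well-separated toy classes the fused PCA-LDA pipeline classifies nearly perfectly; the point is only to show the shape of the processing chain.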
international conference on neural information processing | 2013
Emdad Hossain; Girija Chetty
In this paper we propose a novel multimodal feature learning technique based on deep learning for a gait-biometric-based human identification scheme from surveillance videos. Experimental evaluation of the proposed deep learning features and standard PCA/LDA features, in combination with NN/MLP/SVM/SMO classifiers, on datasets from two gait databases (the publicly available CASIA multiview multispectral database and the UCMG multiview database) shows a significant improvement in recognition accuracies with the proposed fused deep learning features.
fuzzy systems and knowledge discovery | 2012
Emdad Hossain; Girija Chetty
In this paper, we propose a novel approach for establishing person identity from gait cues in surveillance videos using simple feature extraction and classifier methods. Person identity verification is an exigent task: robust identification depends on selecting both a suitable biometric trait and a robust method. Since the beginning of automated identification, the classifier and the specific trait have been the main concerns, because the classifier is the tool that enables a person to be identified or classified from the provided input, while the biometric trait must be unique, reliable, and have the expected applicability. We used classifier approaches based on two different classifiers, NaiveBayes and C4.5 [1].
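The two classifiers named above can be sketched with scikit-learn stand-ins: GaussianNB for Naive Bayes, and DecisionTreeClassifier for C4.5 (note that scikit-learn implements CART, a close relative of C4.5, not C4.5 itself). The gait-like feature vectors are synthetic:

```python
# NaiveBayes vs. decision tree on synthetic gait-like vectors.
# DecisionTreeClassifier is a CART stand-in for C4.5, which sklearn lacks.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
y = np.repeat(np.arange(3), 60)                      # 3 identities, 60 clips each
X = rng.normal(size=(180, 30)) + y[:, None] * 0.6    # toy gait features

results = {}
for name, clf in [("NaiveBayes", GaussianNB()),
                  ("DecisionTree", DecisionTreeClassifier(random_state=0))]:
    results[name] = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {results[name]:.2f}")
```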
MPRSS'12 Proceedings of the First international conference on Multimodal Pattern Recognition of Social Signals in Human-Computer-Interaction | 2012
Emdad Hossain; Girija Chetty; Roland Goecke
In this paper we propose a novel human identification scheme based on long-range gait profiles in surveillance videos. We investigate the role of multi-view gait images acquired from multiple cameras, the importance of infrared and visible range images in ascertaining identity, the impact of multimodal fusion, efficient subspace features and classifier methods, and the role of soft/secondary biometrics (walking style) in enhancing the accuracy and robustness of identification systems. Experimental evaluation of several subspace-based gait feature extraction approaches (PCA/LDA) and learning classifier methods (NB/MLP/SVM/SMO) on different datasets from the publicly available CASIA gait database shows significant improvement in recognition accuracies with multimodal fusion of multi-view gait images from visible and infrared cameras acquired in video surveillance scenarios.
embedded and ubiquitous computing | 2011
Emdad Hossain; Girija Chetty
In this paper we propose a novel multimodal fusion approach based on PCA-LDA processing for person identification from low-resolution surveillance video, with cues extracted from gait and face biometrics. The experimental evaluation of the proposed scheme on a publicly available database [2] showed that the combined PCA-LDA face and gait features, when fused in either hierarchical or holistic fashion, can lead to powerful identity verification that captures the inherent multimodality in walking gait patterns and ascertains identity from low-resolution surveillance videos.
international conference on signal processing and communication systems | 2012
Emdad Hossain; Girija Chetty
In this paper we propose a novel human identification scheme based on long-range gait profiles in surveillance videos. We investigate the role of multi-view gait images acquired from multiple cameras, the importance of infrared and visible range images in ascertaining identity, and the role of soft/secondary biometrics (walking style) in enhancing the accuracy and robustness of identification systems. Experimental evaluation of several subspace-based gait feature extraction approaches (PCA/LDA) and learning classifier methods (MLP/SMO) on different datasets from the publicly available CASIA gait database shows that large-scale human identity recognition from gait information captured from multiple viewpoints is possible, with multiple cameras and with the use of subtle soft/secondary biometric information.
machine learning and data mining in pattern recognition | 2012
Emdad Hossain; Girija Chetty
In this paper we propose a novel person identification scheme based on gait biometric information in surveillance videos using simple PCA-LDA features and RBF-MLP and SMO-SVM classifiers. The experimental evaluation on low-resolution surveillance video images from a publicly available database [1] showed that the combined PCA-MLP and LDA-MLP techniques turn out to be a powerful method for capturing identity-specific information from walking gait patterns.
international conference on algorithms and architectures for parallel processing | 2012
Emdad Hossain; Girija Chetty
In this paper we propose a novel multi-view feature fusion of gait biometric information in surveillance videos for large scale human identification. The experimental evaluation on low resolution surveillance video images from a publicly available database showed that the combined LDA-MLP technique turns out to be a powerful method for capturing identity specific information from walking gait patterns. The multi-view fusion at feature level allows complementarity of multiple camera views in surveillance scenarios to be exploited for improvement of identity recognition performance.
advanced concepts for intelligent vision systems | 2012
Emdad Hossain; Girija Chetty
In this paper we propose a novel multi-view feature fusion of gait biometric information in surveillance videos for large scale human identification. The experimental evaluation on low resolution surveillance video images from a publicly available database [1] showed that the combined LDA-MLP technique turns out to be a powerful method for capturing identity specific information from walking gait patterns. The multi-view fusion at feature level allows complementarity of multiple camera views in surveillance scenarios to be exploited for improvement of identity recognition performance.
Archive | 2011
Girija Chetty; Emdad Hossain
Most currently deployed biometric identity authentication systems model a person's identity from unimodal information, i.e. face, voice, or fingerprint features. In addition, many current interactive civilian remote human-computer interaction applications are based on speech-based voice features, which achieve significantly lower performance in operating environments with low signal-to-noise ratios (SNR). For a long time, the use of acoustic information alone has been a great success for several automatic speech processing applications such as automatic speech transcription or speaker authentication, while face identification systems based on visual information alone have proved equally successful. However, in adverse operating environments, the performance of either of these systems can be suboptimal. Use of both visual and audio information can lead to better robustness, as they can provide complementary secondary clues that help in the analysis of the primary biometric signals (Potamianos et al (2004)). The joint analysis of acoustic and visual speech can improve the robustness of automatic speech recognition systems (Liu et al (2002), Gurbuz et al (2002)). Several systems have been proposed that use joint face-voice information to improve the performance of current identity authentication systems. However, most of these state-of-the-art authentication approaches process the voice and face information independently and then fuse the scores, i.e. score fusion (Chibelushi et al (2002), Pan et al (2000), Chaudari et al (2003)). A major weakness of these systems is that they do not take fraudulent replay attack scenarios into consideration, leaving them vulnerable to spoofing by recording the voice of the target in advance and replaying it in front of the microphone, or by simply placing a still picture of the target's face in front of the camera.
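The score-fusion scheme described above (and criticized for its replay vulnerability) can be illustrated in miniature: two classifiers are trained independently on each modality and their per-class posterior scores are averaged. Everything here is a synthetic stand-in:

```python
# Toy score fusion: independent face and voice classifiers, averaged posteriors.
# Synthetic features; LogisticRegression is an illustrative per-modality model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
y = np.repeat(np.arange(4), 40)                        # 4 identities, 40 samples each
face = rng.normal(size=(160, 30)) + y[:, None] * 0.35  # toy face features
voice = rng.normal(size=(160, 30)) + y[:, None] * 0.35 # toy voice features

idx_tr, idx_te = train_test_split(np.arange(160), stratify=y, random_state=0)
face_clf = LogisticRegression(max_iter=1000).fit(face[idx_tr], y[idx_tr])
voice_clf = LogisticRegression(max_iter=1000).fit(voice[idx_tr], y[idx_tr])

# Score fusion: average the posterior probabilities from each modality
fused_scores = (face_clf.predict_proba(face[idx_te]) +
                voice_clf.predict_proba(voice[idx_te])) / 2
acc = (fused_scores.argmax(axis=1) == y[idx_te]).mean()
print(f"score-fusion accuracy: {acc:.2f}")
```

Note that nothing in this pipeline checks whether the inputs came from a live person, which is exactly the weakness the chapter goes on to address with liveness verification.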
This problem can be addressed with liveness verification, which ensures that biometric cues are acquired from a live person who is actually present at the time of capture for authenticating the identity. With the diffusion of Internet-based authentication systems for day-to-day civilian scenarios at an astronomical pace (Chetty and Wagner (2008)), it is high time to think about the vulnerability of traditional biometric authentication approaches and to consider the inclusion of liveness checks in next-generation biometric systems. Though there is some work on fingerprint-based liveness checking techniques (Goecke and Millar (2003), Molhom et al (2002)), there is hardly any work on liveness checks based on user-