David Ngo Chek Ling
Multimedia University
Publication
Featured research published by David Ngo Chek Ling.
Pattern Recognition | 2004
Andrew Teoh Beng Jin; David Ngo Chek Ling; Alwyn Goh
Human authentication is the security task of limiting access to physical locations or computer networks to those with authorisation. This is done by equipping authorised users with passwords, tokens, or using their biometrics. Unfortunately, the first two suffer from a lack of security as they are easily forgotten or stolen; biometrics also suffers from inherent limitations and specific security threats. A more practical approach is to combine two or more authentication factors to reap benefits in security, convenience, or both. This paper proposes a novel two-factor authenticator based on iterated inner products between a tokenised pseudo-random number and a user-specific fingerprint feature generated from an integrated wavelet and Fourier–Mellin transform, producing a set of user-specific compact codes coined BioHashing. BioHashing is highly tolerant of data-capture offsets, with the same user's fingerprint data resulting in highly correlated bitstrings. Moreover, there is no deterministic way to obtain the user-specific code without having both the token with the random data and the user's fingerprint feature. This protects us, for instance, against biometric fabrication: changing the user-specific credential is as simple as changing the token containing the random data. BioHashing has significant functional advantages over biometrics alone, i.e. a zero equal-error-rate point and a clean separation of the genuine and imposter populations, thereby allowing elimination of false accept rates without suffering from an increased occurrence of false reject rates.
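A minimal sketch of the BioHashing idea described in the abstract: project a biometric feature vector onto token-derived pseudo-random orthonormal directions, then threshold each inner product into a bit. The function name `biohash`, the seeded generator, and the threshold `tau` are illustrative assumptions, not the authors' implementation (which uses wavelet/Fourier–Mellin fingerprint features).

```python
import numpy as np

def biohash(feature, token_seed, n_bits=8, tau=0.0):
    """Hedged sketch of BioHashing (illustrative, not the authors' code).
    Assumes n_bits <= feature.size. The token supplies the RNG seed."""
    rng = np.random.default_rng(token_seed)
    R = rng.standard_normal((n_bits, feature.size))  # pseudo-random vectors
    Q, _ = np.linalg.qr(R.T)                         # orthonormalise them
    projections = Q.T[:n_bits] @ feature             # iterated inner products
    return (projections > tau).astype(np.uint8)      # discretise to bits

# Same feature + same token -> identical code (highly correlated bitstrings);
# replacing the token revokes the credential by changing the code.
f = np.array([0.9, -0.2, 0.4, 0.1, -0.7, 0.3, 0.8, -0.5])
code_a = biohash(f, token_seed=42)
code_b = biohash(f, token_seed=42)
```

Without both the token seed and the feature vector, there is no deterministic way to regenerate the code, which is the two-factor property the abstract describes.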
Image and Vision Computing | 2005
Tee Connie; Andrew Teoh Beng Jin; Michael Goh Kah Ong; David Ngo Chek Ling
Recently, the palmprint biometric has received wide attention from researchers. It is well known for several advantages such as stable line features, low-resolution imaging, low-cost capturing devices, and user-friendliness. In this paper, an automated scanner-based palmprint recognition system is proposed. The system automatically captures and aligns the palmprint images for further processing. Several linear subspace projection techniques have been tested and compared; specifically, we focus on principal component analysis (PCA), Fisher discriminant analysis (FDA) and independent component analysis (ICA). In order to analyse the palmprint images in a multi-resolution, multi-frequency representation, the wavelet transform is also adopted. The images are decomposed into different frequency subbands and the best-performing subband is selected for further processing. Experimental results show that applying FDA on the wavelet subband yields an FAR and FRR as low as 1.356% and 1.492% on our palmprint database.
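A minimal sketch of the linear subspace projection step the abstract compares (shown here for PCA via SVD; FDA and ICA would swap in a different projection matrix). The synthetic 64-dimensional vectors stand in for palmprint subband features; all names are illustrative assumptions.

```python
import numpy as np

def pca_fit(X, n_components):
    """Fit PCA on X of shape (n_samples, n_features).
    Returns the mean and a (n_features, n_components) projection matrix."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components].T        # columns = principal directions

def pca_project(X, mu, W):
    """Project centred samples into the low-dimensional subspace."""
    return (X - mu) @ W

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 64))         # 20 synthetic "palmprint" vectors
mu, W = pca_fit(X, n_components=5)
Z = pca_project(X, mu, W)                 # compact features for matching
```

Matching would then compare the projected vectors `Z`, e.g. by nearest-neighbour distance, which is the usual use of such subspace features.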
Proceedings of the 2003 ACM SIGMM workshop on Biometrics methods and applications | 2003
Michael Goh Kah Ong; Tee Connie; Andrew Teoh Beng Jin; David Ngo Chek Ling
Several contributions have shown that fusing the decisions or scores obtained from various single-modal biometric verification systems often enhances overall system performance. A recent approach to multimodal biometric systems using a single sensor has received significant attention among researchers. In this paper, a combined hand-geometry and palmprint verification system is developed. The system uses a scanner as the sole sensor to obtain hand images. First, the hand-geometry verification system performs feature extraction to obtain the geometrical information of the fingers and palm. Second, the region of interest (ROI) is detected and cropped by the palmprint verification system. This ROI acts as the base for palmprint feature extraction using Linear Discriminant Analysis (LDA). Lastly, the matching scores of the two individual classifiers are fused by several fusion algorithms, namely the sum rule, weighted sum rule and Support Vector Machine (SVM). The results of the fusion algorithms are compared with the outcomes of the individual palm and hand-geometry classifiers. We show that fusion using an SVM with a Radial Basis Function (RBF) kernel outperforms the other combined and individual classifiers.
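The two rule-based fusion schemes the abstract names can be sketched in a few lines; the SVM variant would replace the fixed rule with a classifier trained on score pairs. The weight value and decision threshold below are illustrative assumptions, e.g. as if tuned on a validation set.

```python
def sum_rule(s_palm, s_hand):
    """Sum rule: average the two matcher scores (assumed in [0, 1])."""
    return (s_palm + s_hand) / 2.0

def weighted_sum_rule(s_palm, s_hand, w_palm=0.6):
    """Weighted sum rule; w_palm is an illustrative weight."""
    return w_palm * s_palm + (1.0 - w_palm) * s_hand

# Fuse one hypothetical score pair and apply a hypothetical threshold.
fused = weighted_sum_rule(0.8, 0.5)   # -> 0.6*0.8 + 0.4*0.5 = 0.68
accept = fused >= 0.6                 # decision threshold (assumed)
```

Score fusion operates after both matchers have run, so it needs no change to either underlying verification system.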
computer graphics, imaging and visualization | 2004
Neo Han Foon; Ying Han Pang; Andrew Teoh Beng Jin; David Ngo Chek Ling
This paper presents a method of combining wavelet transforms (WT) and Zernike moments (ZM) as a feature vector for face recognition. The wavelet transform, with its approximation decomposition, is used to reduce noise and produce a representation in the low-frequency domain, making the facial images insensitive to facial expression and small occlusions. Zernike moments, on the other hand, are selected as the feature extractor due to their robustness to image noise, geometrical invariance and orthogonality. Simulation results on the Essex database indicate that a higher-order WT combined with ZM achieves a better recognition rate than using WT or ZM alone. The optimum result is obtained for ZM of order 10 with a Daubechies orthonormal wavelet filter of order 7 at the first decomposition level, achieving a verification rate of 94.26%.
australasian joint conference on artificial intelligence | 2004
Neo Han Foon; Andrew Teoh Beng Jin; David Ngo Chek Ling
This paper demonstrates a novel subspace projection technique via Non-Negative Matrix Factorization (NMF) to represent human facial images in the low-frequency subband, which is realised through the wavelet transform. The wavelet transform (WT) is used to reduce noise and produce a representation in the low-frequency domain, making the facial images insensitive to facial expression and small occlusions. After wavelet decomposition, NMF is performed to produce region- or part-based representations of the images. Non-negativity is a useful constraint for generating expressiveness in the reconstruction of faces. Simulation results on the Essex and ORL databases show that the hybrid of NMF and the best wavelet filter yields a better verification rate and shorter training time. Optimum results of 98.5% and 95.5% are obtained on the Essex and ORL databases, respectively. These results are compared with our baseline method, Principal Component Analysis (PCA).
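A minimal sketch of NMF with multiplicative updates (the classic Lee–Seung style rules), the factorisation the abstract applies after wavelet decomposition. The synthetic non-negative matrix stands in for the wavelet-approximation face images; the rank `r` and iteration count are illustrative assumptions.

```python
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9):
    """Factorise non-negative V (n x m) as W @ H with W (n x r), H (r x m),
    both non-negative, via multiplicative updates. Illustrative sketch."""
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis ("parts")
    return W, H

V = np.random.default_rng(1).random((30, 16))  # synthetic non-negative data
W, H = nmf(V, r=4)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because the updates are purely multiplicative on non-negative initial values, `W` and `H` stay non-negative throughout, which is what yields the additive, part-based face representations the abstract describes.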
australasian joint conference on artificial intelligence | 2004
Ying Han Pang; Andrew Teoh Beng Jin; David Ngo Chek Ling
This paper presents a novel two-factor authenticator which hashes tokenized random data and moment-based palmprint features to produce a set of private binary strings, coined Discrete-Hashing codes. This technique requires two-factor (random number + authorised biometrics) credentials in order to access the authentication system; the absence of either factor blocks the progress of authentication. Besides that, Discrete-Hashing also possesses high discriminatory power, with highly correlated bit strings for intra-class data. Experimental results show that this two-factor authenticator surpasses the classic biometric authenticator in terms of verification rate. Our proposed approach provides a clear separation between the genuine and imposter population distributions. This implies that the Discrete-Hashing technique allows a zero False Accept Rate (FAR) to be achieved without jeopardising the False Reject Rate (FRR) performance, which is hardly possible for conventional biometric systems.
computational intelligence and security | 2005
Ying Han Pang; Andrew Teoh Beng Jin; David Ngo Chek Ling
This paper proposes a novel revocable two-factor authentication approach which combines a user-specific tokenized pseudo-random bit sequence with biometric data via a logic operation. Through this process, a distinct binary code per person, coined bio-Bit, is formed. There is no deterministic way to acquire the bio-Bit without having both factors. This feature offers an extra protection layer against biometric fabrication, since the bio-Bit authenticator is replaceable via token replacement. The proposed method also presents the functional advantages of a zero equal error rate and a clean separation between the genuine and imposter populations; thereby, the false accept rate can be eradicated without suffering from an increased occurrence of false reject rate.
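One simple logic operation that fits the abstract's description is an XOR between a token-derived pseudo-random bit sequence and a binarised biometric bit string; the function name `bio_bit` and the seeded token generation below are illustrative assumptions, since the paper does not spell out its exact operation here.

```python
import numpy as np

def bio_bit(biometric_bits, token_seed):
    """Hedged sketch of a two-factor logic-operation combiner:
    XOR token bits (derived from the seed) with biometric bits."""
    rng = np.random.default_rng(token_seed)
    token_bits = rng.integers(0, 2, size=biometric_bits.size, dtype=np.uint8)
    return biometric_bits ^ token_bits     # revocable: reissue a new token

b = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # binarised biometric
code_old = bio_bit(b, token_seed=1)
code_new = bio_bit(b, token_seed=2)        # token replaced -> code revoked
```

XOR makes the two-factor property concrete: without the token seed the code is uniformly random, and applying the same token bits again inverts the operation, so both factors are needed to reproduce or verify the code.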
australasian joint conference on artificial intelligence | 2004
Neo Han Foon; Andrew Teoh Beng Jin; David Ngo Chek Ling
With the wonders of the Internet and the promises of the worldwide information infrastructure, a highly secure authentication system is desirable. Biometrics have been deployed for this purpose as they are unique identifiers. However, biometrics also suffer from inherent limitations and specific security threats such as biometric fabrication. To alleviate these liabilities, a combination of token and biometric for user authentication and verification is introduced. All user data is kept in the token, relieving users of the task of remembering passwords. The proposed framework is named Bio-Discretization. Bio-Discretization is performed on face image features, generated from Non-Negative Matrix Factorization (NMF) in the wavelet domain, to produce a set of unique compact bitstrings via iterated inner products between a set of pseudo-random numbers and the face images. Bio-Discretization possesses high data-capture offset tolerance, with highly correlated bitstrings for intra-class data. This approach is highly desirable in a secure environment and outperforms the classic authentication scheme.
australasian joint conference on artificial intelligence | 2005
Ying Han Pang; Andrew Teoh Beng Jin; David Ngo Chek Ling
This paper proposes a robust face recognition system that provides both strong discriminative power and a cancelable mechanism for biometric data. Fisher's Linear Discriminant is applied to pseudo-Zernike moments to derive an enhanced feature subset. The revocation capability, on the other hand, is formed by combining tokenized pseudo-random data with the enhanced template. The inner product of these factors generates a user-specific binary code, the face-Hash. This two-factor basis offers an extra protection layer against biometric fabrication, since the face-Hash authenticator is replaceable via token replacement.
Computer Vision and Image Understanding | 2006
Chong Siew Chin; Andrew Teoh Beng Jin; David Ngo Chek Ling