Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ramzi Abiantun is active.

Publication


Featured research published by Ramzi Abiantun.


International Conference on Biometrics: Theory, Applications and Systems | 2010

Robust local binary pattern feature sets for periocular biometric identification

Juefei Xu; Miriam Cha; Joseph L. Heyman; Shreyas Venugopalan; Ramzi Abiantun; Marios Savvides

In this paper, we perform a detailed investigation of various features that can be extracted from the periocular region of human faces for biometric identification. The emphasis of this study is to explore the best feature extraction approach used in stand-alone mode without any generative or discriminative subspace training. Simple distance measures are used to determine the verification rate (VR) on a very large dataset. Several filter-based techniques and local feature extraction methods are explored in this study, where we show an increase of 15% in verification performance at 0.1% false accept rate (FAR) compared to raw pixels with the proposed Local Walsh-Transform Binary Pattern encoding. Additionally, when fusing our best feature extraction method with Kernel Correlation Feature Analysis (KCFA) [36], we were able to obtain a VR of 61.2%. Our experiments are carried out on the large validation set of the NIST FRGC database [6], which contains facial images from environments with uncontrolled illumination. Verification experiments based on a pure 1-to-1 similarity matrix of 16,028×8,014 (~128 million comparisons) are carried out on the entire database, where we find that we can achieve a raw VR of 17.0% at 0.1% FAR using our proposed Local Walsh-Transform Binary Pattern approach. This result, while it may seem low, exceeds the NIST-reported baseline VR on the same dataset (12% at 0.1% FAR), where PCA was trained on entire facial images for recognition [6].
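The Local Walsh-Transform Binary Pattern encoding itself is not specified in the abstract, but the plain local binary pattern it builds on is standard; a minimal NumPy sketch of basic 8-neighbor LBP encoding with a histogram feature vector (function names are illustrative, not from the paper) might look like:

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbor local binary pattern code for each interior
    pixel of a 2-D grayscale image."""
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    # Offsets of the 8 neighbors, clockwise from top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(img):
    """Feature vector: normalized 256-bin histogram of LBP codes."""
    codes = lbp_image(img)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

In a stand-alone setup like the one described, such histograms from gallery and probe images would be compared directly with a simple distance measure, with no subspace training.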


Computer Vision and Pattern Recognition | 2006

Partial and Holistic Face Recognition on FRGC-II Data Using Support Vector Machines

Marios Savvides; Ramzi Abiantun; Jingu Heo; Sung Won Park; Chunyan Xie; B. V. K. Vijayakumar

In this paper we investigate how to perform face recognition on the hardest experiment (Exp4) in Face Recognition Grand Challenge (FRGC) phase-II data, which deals with subjects captured under uncontrolled conditions such as harsh overhead illumination, some pose variations, and facial expressions in both indoor and outdoor environments. Other variations include the presence and absence of eye-glasses. The database consists of a generic dataset of 12,776 images for training a generic face subspace; a target set of 16,028 images and a query set of 8,014 images are given for matching. We propose to use our novel face recognition algorithm, Kernel Correlation Feature Analysis for dimensionality reduction (222 features), coupled with Support Vector Machine discriminative training on the target KCFA feature set to provide a similarity distance measure of the probe to each target subject. We show that this algorithm configuration yields the best verification rate at 0.1% FAR (87.5%) compared to PCA+SVM, GSLDA+SVM, SVM+SVM, and KDA+SVM. We then explore which facial regions provide the best discrimination ability with our proposed algorithm, analyzing partial face recognition using the eye, nose, and mouth regions. We empirically find that the eye region is the most discriminative feature of the faces in FRGC data and yields a verification rate closest to that of holistic face recognition (83.5% vs. 87.5% at 0.1% FAR). We use Support Vector Machines to fuse these two and further boost performance at 0.1% FAR on a large-scale face database such as the FRGC dataset.
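The KCFA feature extraction itself is beyond a short sketch, but the discriminative SVM stage, and the fusion of partial and holistic scores, can be illustrated with a toy linear SVM trained by subgradient descent on the hinge loss. This is a generic stand-in, not the authors' solver:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=1000):
    """Tiny linear SVM trained by subgradient descent on the
    regularized hinge loss; a stand-in for the discriminative stage
    that fuses eye-region and holistic similarity scores.
    X: (n_samples, n_features); y: labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                       # margin violators
        # Subgradient of lam/2*||w||^2 + mean hinge loss.
        grad_w = lam * w - (y[viol] @ X[viol]) / n
        grad_b = -y[viol].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

For score-level fusion as described, each row of X would hold the eye-region and holistic similarity scores for one comparison, with y marking genuine versus impostor pairs.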


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2014

Sparse Feature Extraction for Pose-Tolerant Face Recognition

Ramzi Abiantun; Utsav Prabhu; Marios Savvides

Automatic face recognition performance has been steadily improving over years of research; however, it remains significantly affected by factors such as illumination, pose, expression, and resolution that can impact matching scores. The focus of this paper is the pose problem, which remains largely overlooked in most real-world applications. Specifically, we focus on one-to-one matching scenarios where a query face image of arbitrary pose is matched against a set of gallery images. We propose a method that relies on two fundamental components: (a) a 3D modeling step to geometrically correct the viewpoint of the face. For this purpose, we extend a recent technique for efficient synthesis of 3D face models called the 3D Generic Elastic Model. (b) A sparse feature extraction step using subspace modeling and ℓ1-minimization to induce pose tolerance in coefficient space. This in turn enables the synthesis of an equivalent frontal-looking face, which can be used for recognition. We show significant improvements in verification rates compared to commercial matchers, and also demonstrate the resilience of the proposed method with respect to degrading input quality. We find that the proposed technique is able to match non-frontal images to other non-frontal images of varying angles.
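The ℓ1-minimization step can be sketched with iterative soft-thresholding (ISTA); this is a generic solver, not necessarily the one used in the paper, and `B` stands for an assumed subspace basis matrix:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(B, y, lam=0.1, n_iter=500):
    """Minimize 0.5*||B a - y||^2 + lam*||a||_1 by iterative
    soft-thresholding (ISTA); returns the sparse coefficients a."""
    L = np.linalg.norm(B, 2) ** 2    # Lipschitz constant of the gradient
    a = np.zeros(B.shape[1])
    for _ in range(n_iter):
        grad = B.T @ (B @ a - y)
        a = soft_threshold(a - grad / L, lam / L)
    return a
```

In the spirit of the method described, the recovered coefficients of an observed (pose-corrected) face would then be used to synthesize a frontal-looking equivalent, here simply `B @ a`.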


International Conference on Acoustics, Speech, and Signal Processing | 2006

Class Dependent Kernel Discrete Cosine Transform Features for Enhanced Holistic Face Recognition in FRGC-II

Marios Savvides; Jingu Heo; Ramzi Abiantun; Chunyan Xie; Bhagavatula Vijaya Kumar

Face recognition is one of the least intrusive biometric modalities that can be used to identify individuals from surveillance video. In such scenarios the users are under the least co-operative conditions, and thus the ability to perform robust face recognition is very challenging. In this paper we focus on improving face recognition performance on a large database with over 36,000 facial images from the Face Recognition Grand Challenge phase-II data collected by the University of Notre Dame. We particularly focus on Experiment 4, which is the most challenging and was captured under uncontrolled conditions, where the baseline PCA algorithm yields a 12% verification rate at 0.1% FAR. We propose a novel approach using class-dependent kernel discrete cosine transform features which improves performance significantly, yielding a 91.33% verification rate at 0.1% FAR. We also show that working in the DCT transform domain to obtain nonlinear features is more effective than working in the original spatial-pixel domain, which yields a verification rate of only 85% at 0.1% FAR. Thus our proposed method outperforms the baseline by 79.33% in verification rate at 0.1% false acceptance rate.
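The kernel and class-dependent stages aside, the underlying 2-D DCT feature extraction can be sketched in NumPy; the `keep` low-frequency block size is an illustrative parameter, not taken from the paper:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def dct2_features(img, keep=8):
    """2-D DCT of a square grayscale image; keeps the top-left
    keep x keep block of low-frequency coefficients as the feature."""
    n = img.shape[0]
    C = dct_matrix(n)
    coeffs = C @ img @ C.T
    return coeffs[:keep, :keep].ravel()
```

Working in this transform domain, as the abstract argues, compacts most of the face's energy into a few low-frequency coefficients before any nonlinear feature extraction is applied.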


Applied Imagery Pattern Recognition Workshop | 2008

Tear-duct detector for identifying left versus right iris images

Ramzi Abiantun; Marios Savvides

In this paper, we present different pattern recognition approaches for automatically detecting tear ducts in eye images acquired for iris recognition, with the aim of enhancing iris recognition and detecting mislabeling in datasets. Detecting the tear duct in an image tells an iris recognition system whether the presented eye image is that of a left or a right eye. This enables the iris matcher to match the enrolled image against images in the database belonging to the same side, thus reducing error rates by eliminating the chance of matching a left iris to a right iris or vice versa. This is a major problem in many single-iris image acquisition devices currently deployed in the field, where the recorded data is mislabeled due to human error. We present several techniques for detecting tear ducts, including boosted Haar features, support vector machines (SVM), and more traditional approaches like PCA and LDA. Finally, we show that tear-duct detection improves left/right iris classification over previous approaches.
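As a sketch of the "more traditional approaches like PCA" mentioned above, a toy left/right classifier using PCA projection and a nearest class mean (synthetic data and illustrative function names, not the paper's pipeline) could look like:

```python
import numpy as np

def fit_pca(X, k):
    """PCA via SVD: returns the mean and the top-k principal axes.
    X: (n_samples, n_pixels) of vectorized eye-region patches."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def nearest_mean_classifier(Xl, Xr, k=5):
    """Train on left-eye (Xl) and right-eye (Xr) patches; classify a
    new patch by the closer class mean in PCA space."""
    X = np.vstack([Xl, Xr])
    mu, V = fit_pca(X, k)
    pl = (Xl - mu) @ V.T
    pr = (Xr - mu) @ V.T
    ml, mr = pl.mean(axis=0), pr.mean(axis=0)
    def classify(x):
        p = (x - mu) @ V.T
        left = np.linalg.norm(p - ml) <= np.linalg.norm(p - mr)
        return "left" if left else "right"
    return classify
```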


2006 Biometrics Symposium: Special Session on Research at the Biometric Consortium Conference | 2006

How Low Can You Go? Low Resolution Face Recognition Study Using Kernel Correlation Feature Analysis on the FRGCv2 dataset

Ramzi Abiantun; Marios Savvides; B. V. K. Vijaya Kumar

In this paper we investigate the effect of image resolution on the kernel class-dependence feature analysis (KCFA) method using the Face Recognition Grand Challenge (FRGC) dataset. Good performance on low-resolution image data is important for any face recognition system using low-resolution imagery, such as surveillance footage. We show that KCFA works reliably even at very low resolutions on FRGC Experiment 4 using the one-to-one matching protocol (greater than 70% verification rate (VR) at 0.1% false accept rate (FAR)). We observe reasonable performance at resolutions as low as 16×16. Below this resolution, performance of KCFA degrades significantly, but it still outperforms the PCA baseline algorithm, which achieves 12% VR at 0.1% FAR.
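A resolution sweep like the one described can be reproduced by block-averaging images down to, say, 16×16 before feature extraction; a minimal sketch (a simple stand-in for whatever downsampling the study used):

```python
import numpy as np

def downsample(img, factor):
    """Simulate a low-resolution capture by block-averaging, e.g. a
    64x64 face at factor 4 becomes a 16x16 face."""
    h, w = img.shape
    h2, w2 = h - h % factor, w - w % factor   # crop to a multiple of factor
    img = img[:h2, :w2]
    return img.reshape(h2 // factor, factor,
                       w2 // factor, factor).mean(axis=(1, 3))
```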


Applied Imagery Pattern Recognition Workshop | 2008

Boosted multi image features for improved face detection

Ramzi Abiantun; Marios Savvides

In this paper, we present novel approaches for automatically detecting human faces in images, which is extremely important for any face recognition system. This paper expands on the traditional Viola-Jones approach by boosting a plethora of mixed feature sets for face detection; we do this by adding non-Haar-like elements to a large pool of mixed features in an AdaBoost framework. We show how to generate discriminative support vector machine (SVM) type features and Gabor-type features (at various orientations, frequencies, and central locations) and use this whole pool as candidate discriminative feature sets in modeling the patterns of a frontal-view human face. This general, high-diversity pool of features is used to build a boosted strong classifier, and we show that we can improve the generalization performance of the AdaBoost approach, thereby improving the robustness of the face detector. We report performance on the MIT+CMU face database and compare the result with other published face detection algorithms. We also discuss processing times and speed-up methods to offset the increase in complexity in order to achieve face detection in real time.
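The boosting framework referred to above can be illustrated with a minimal discrete AdaBoost over single-feature threshold stumps, standing in for the mixed Haar/Gabor/SVM feature pool (an illustrative sketch, not the authors' implementation):

```python
import numpy as np

def train_adaboost(X, y, n_rounds=20):
    """Minimal discrete AdaBoost over single-feature threshold stumps.
    X: (n_samples, n_features); y: labels in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                    # sample weights
    stumps = []
    for _ in range(n_rounds):
        best = None
        # Exhaustively pick the stump minimizing the weighted error.
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] >= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = max(err, 1e-12)                  # avoid log(0) on perfect stumps
        alpha = 0.5 * np.log((1 - err) / err)
        pred = sign * np.where(X[:, j] >= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)         # up-weight mistakes
        w /= w.sum()
        stumps.append((alpha, j, thr, sign))
    return stumps

def adaboost_predict(stumps, X):
    score = sum(a * s * np.where(X[:, j] >= t, 1, -1)
                for a, j, t, s in stumps)
    return np.sign(score)
```

In a detector along the lines described, each column of X would be one feature response (Haar-like, Gabor, or SVM-derived) over the training windows, and the boosted sum would form the strong classifier.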


International Conference on Acoustics, Speech, and Signal Processing | 2006

Face Recognition with Kernel Correlation Filters on a Large Scale Database

Jingu Heo; Marios Savvides; Ramzi Abiantun; Chunyan Xie; B. V. K. Vijayakumar

Recently, direct linear discriminant analysis (D-LDA) and Gram-Schmidt LDA methods have been proposed for face recognition. By also utilizing some of the null-space of the within-class scatter matrix, they exhibit better performance compared to Fisherfaces and eigenfaces. However, these linear subspace methods may not discriminate faces well due to large nonlinear distortions in the face images. The redundant class-dependence feature analysis (CFA) method exhibits superior performance compared to other methods by representing nonlinear features well. We show that with a proper choice of kernel parameters used with the proposed kernel correlation filters within the CFA framework, the overall face recognition performance is significantly improved. We present results of this proposed approach on a large-scale database from the Face Recognition Grand Challenge (FRGC), which contains over 36,000 images.


Archive | 2007

Frequency Domain Face Recognition

Marios Savvides; Ramamurthy Bhagavatula; Yung-Hui Li; Ramzi Abiantun

In the ever-expanding field of biometrics, the choice of which biometric modality or modalities to use is a difficult one. While a particular biometric modality might offer superior discriminative properties (or be more stable over a longer period of time) compared to another, its acquisition might be considerably more difficult. As such, the use of the human face as a biometric modality presents the attractive qualities of significant discrimination with the least amount of intrusiveness. In this sense, the majority of biometric systems whose primary modality is the face emphasize analysis of the spatial representation of the face, i.e., the intensity image. While varying and significant levels of performance have been achieved through the use of spatial 2-D data, there is significant theoretical work and empirical evidence supporting the use of a frequency-domain representation to achieve greater face recognition performance. The Fourier transform allows us to quickly and easily obtain raw frequency data which is significantly more discriminative (after appropriate data manipulation) than the raw spatial data from which it was derived. We can further increase discrimination through additional signal transforms and specific feature extraction algorithms intended for use in the frequency domain, achieving significantly improved performance and distortion tolerance compared to their spatial-domain counterparts. In this chapter we review, outline, and present theory and results that elaborate on frequency-domain processing and representations for enhanced face recognition. The second section is a brief literature review of various face recognition algorithms.
The third section focuses on two points: a review of commonly used algorithms such as Principal Component Analysis (PCA) (Turk and Pentland, 1991) and Fisher Linear Discriminant Analysis (FLDA) (Belhumeur et al., 1997), and their novel use in conjunction with frequency-domain processed data to enhance the face recognition ability of these algorithms. A comparison of performance using spatial versus processed and un-processed frequency-domain data is presented. The fourth section is a thorough analysis and derivation of a family of advanced frequency-domain matching algorithms collectively known as Advanced Correlation Filters (ACFs). It is in this section that the most significant discussion occurs, as ACFs represent the latest advances in frequency-domain facial recognition algorithms with specifically built-in distortion tolerance. In the fifth section we present results of more recent research involving ACFs and face recognition.
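The chapter's two core operations, obtaining a frequency-domain representation via the Fourier transform and matching via correlation, can be sketched as follows. The correlation here is a generic matched correlation, not a specific ACF design:

```python
import numpy as np

def fourier_features(img):
    """Raw frequency-domain representation: magnitude and phase of the
    2-D Fourier transform, with the DC term shifted to the center."""
    F = np.fft.fftshift(np.fft.fft2(img))
    return np.abs(F), np.angle(F)

def correlate(scene, template):
    """Frequency-domain cross-correlation, the core operation behind
    correlation-filter matching: a sharp peak marks the match location."""
    F = np.fft.fft2(scene)
    T = np.fft.fft2(template, s=scene.shape)
    return np.real(np.fft.ifft2(F * np.conj(T)))
```

Advanced correlation filters replace the simple conjugate template here with a filter designed over many training images to sharpen the correlation peak and build in distortion tolerance.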


Fourth IEEE Workshop on Automatic Identification Advanced Technologies (AutoID'05) | 2005

Automatic eye-level height system for face and iris recognition systems

Ramzi Abiantun; Marios Savvides; Pradeep K. Khosla

In this paper we present a fully automated mobile camera device platform that automatically adjusts to the eye level of persons in front of or approaching the system. This system serves as a front-end to aid face recognition and iris recognition systems during both enrollment and verification of people of varying heights. Currently, most systems are positioned at fixed heights and require subjects who are very tall or short to adjust themselves for enrollment. In some cases this requires the enrollment system operator (or officer) to adjust the camera system according to the user's height. This leads to many failure-to-acquire errors. In the current US-VISIT system, immigration officers manually move camera goose-necks to adjust to the heights of visitors for photographing. The system presented in this paper fully automates this process by automatically adjusting the height of the camera (or other biometric device, such as an iris acquisition system) to provide a good enrollment/verification photo image for matching, using automated face detection to drive the biometric sensors to the appropriate height level. Such a system can help improve the results reported by the Independent Testing of Iris Recognition Technology (ITIRT) and face recognition systems while requiring minimal user co-operation.

Collaboration


Dive into Ramzi Abiantun's collaborations.

Top Co-Authors

Marios Savvides, Carnegie Mellon University
Jingu Heo, Carnegie Mellon University
Chunyan Xie, Carnegie Mellon University
Utsav Prabhu, Carnegie Mellon University
Brett E. Bagwell, Sandia National Laboratories
David V. Wick, Sandia National Laboratories
Grant Soehnel, Sandia National Laboratories