Seyed Mehdi Lajevardi
RMIT University
Publications
Featured research published by Seyed Mehdi Lajevardi.
Signal, Image and Video Processing | 2012
Seyed Mehdi Lajevardi; Zahir M. Hussain
In this paper, we investigate feature extraction, feature selection, and classification methods for an automatic facial expression recognition (FER) system. The FER system is fully automatic and consists of the following modules: face detection, feature extraction, selection of optimal features, and classification. Face detection is based on the AdaBoost algorithm and is followed by the extraction of the frames with the maximum intensity of emotion using the inter-frame mutual information criterion. The selected frames are then processed to generate characteristic features using different methods, including Gabor filters, log-Gabor filters, the local binary pattern (LBP) operator, higher-order local autocorrelation (HLAC), and a recently proposed method called HLAC-like features (HLACLF). The most informative features are selected using both wrapper and filter feature selection methods. Experiments on several facial expression databases compare the different methods.
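As a rough illustration of the feature-extraction and filter-selection stages described above, the Python sketch below computes uniform LBP histograms for face crops and keeps the most informative bins via a mutual-information filter. Face detection, frame selection, the Gabor/HLAC variants and the wrapper selection step are omitted; the random face crops, labels and parameter values are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.feature_selection import SelectKBest, mutual_info_classif

def lbp_histogram(gray_face, n_points=8, radius=1):
    """Uniform LBP histogram for a single grayscale face crop."""
    codes = local_binary_pattern(gray_face, n_points, radius, method="uniform")
    n_bins = n_points + 2  # uniform patterns plus one non-uniform bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Toy data: 200 random 64x64 "face crops" with 7 expression labels.
rng = np.random.default_rng(0)
faces = (rng.random((200, 64, 64)) * 255).astype(np.uint8)
labels = rng.integers(0, 7, size=200)

X = np.stack([lbp_histogram(f) for f in faces])
# Filter-style selection: keep the 5 bins with the highest mutual information.
X_selected = SelectKBest(mutual_info_classif, k=5).fit_transform(X, labels)
print(X_selected.shape)  # (200, 5)
```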
IEEE Transactions on Image Processing | 2012
Seyed Mehdi Lajevardi; Hong Ren Wu
This paper introduces a tensor perceptual color framework (TPCF) for facial expression recognition (FER), which is based on information contained in color facial images. The TPCF enables multilinear image analysis in different color spaces and demonstrates that color components provide additional information for robust FER. Using this framework, the components (in either RGB, YCbCr, CIELab, or CIELuv space) of color images are unfolded to 2-D tensors based on multilinear algebra and tensor concepts, from which the features are extracted by Log-Gabor filters. The mutual information quotient method is employed for feature selection. These features are classified using a multiclass linear discriminant analysis classifier. The effectiveness of color information for FER is assessed using low-resolution images and facial expression images with illumination variations. Experimental results demonstrate that color information has significant potential to improve emotion recognition performance due to the complementary characteristics of image textures. Furthermore, the perceptual color spaces (CIELab and CIELuv) are better overall for FER than the other color spaces, providing more efficient and robust performance on facial images with illumination variation.
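The following minimal sketch shows only the tensor unfolding step described above: a color face image is treated as a third-order tensor (height x width x channel) and flattened into a 2-D matrix along each mode, here after conversion to the perceptual CIELab space. Log-Gabor filtering, mutual-information-quotient selection and the LDA classifier are not shown; the image size and random content are assumptions for illustration.

```python
import numpy as np
from skimage.color import rgb2lab

def unfold(tensor, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

rgb = np.random.default_rng(1).random((64, 64, 3))  # stand-in color face image
lab = rgb2lab(rgb)                                   # perceptual color space

for mode in range(3):
    print(f"mode-{mode} unfolding:", unfold(lab, mode).shape)
# mode-0: (64, 192), mode-1: (64, 192), mode-2: (3, 4096)
```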
Digital Signal Processing | 2010
Seyed Mehdi Lajevardi; Zahir M. Hussain
Automatic facial expression recognition (FER) is a sub-area of face analysis research that draws heavily on methods of computer vision, machine learning, and image processing. This study proposes a rotation- and noise-invariant FER system using orthogonal invariant moments, namely Zernike moments (ZM), as the feature extractor and a Naive Bayesian (NB) classifier. The system is fully automatic and can recognize seven different expressions. Changes in illumination, pose, rotation, and noise in the image make this a challenging pattern recognition task. Simulation results on different databases indicate that higher-order ZM features are robust in images affected by noise and rotation, while the computational cost of feature extraction is lower than that of other methods.
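A hedged sketch of the feature and classification stages named above: Zernike moment magnitudes (which are invariant to in-plane rotation) are extracted from grayscale face crops and fed to a Gaussian Naive Bayes classifier. The use of the mahotas implementation, the radius and degree values, and the synthetic data are assumptions, not the paper's configuration.

```python
import numpy as np
import mahotas
from sklearn.naive_bayes import GaussianNB

def zernike_features(gray_face, radius=32, degree=8):
    # Magnitudes of Zernike moments; invariant to in-plane rotation of the face.
    return mahotas.features.zernike_moments(gray_face, radius, degree=degree)

# Toy data: 50 random 64x64 "face crops" with 7 expression labels.
rng = np.random.default_rng(2)
faces = rng.random((50, 64, 64))
labels = rng.integers(0, 7, size=50)

X = np.stack([zernike_features(f) for f in faces])
clf = GaussianNB().fit(X, labels)
print(clf.predict(X[:3]))
```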
Digital Image Computing: Techniques and Applications | 2008
Seyed Mehdi Lajevardi; Margaret Lech
An efficient automatic facial expression recognition method is proposed. The method uses a set of characteristic features obtained by averaging the outputs of a Gabor filter bank with 5 frequencies and 8 different orientations, and then further reduces the dimensionality by means of principal component analysis. The performance of the proposed system was compared with the full Gabor filter bank method. The classification tasks were performed using the K-nearest neighbor (K-NN) classifier. The training and testing images were selected from the publicly available JAFFE database. The classification results show that the average Gabor filter (AGF) provides very high computational efficiency at the cost of a relatively small decrease in performance compared to the full Gabor filter features.
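The sketch below loosely mirrors the average Gabor filter (AGF) idea: the magnitude responses of a 5-frequency, 8-orientation Gabor bank are averaged into a single map per image, reduced by PCA and classified with K-NN. The frequencies, image sizes and random data are illustrative assumptions rather than the settings used in the paper.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def averaged_gabor(gray_face, frequencies=(0.05, 0.1, 0.2, 0.3, 0.4), n_orient=8):
    """Average the magnitude responses of a 5-frequency, 8-orientation Gabor bank."""
    responses = []
    for f in frequencies:
        for k in range(n_orient):
            real, imag = gabor(gray_face, frequency=f, theta=k * np.pi / n_orient)
            responses.append(np.hypot(real, imag))
    return np.mean(responses, axis=0).ravel()

# Toy data: 40 random 32x32 "face crops" with 6 expression labels.
rng = np.random.default_rng(3)
faces = rng.random((40, 32, 32))
labels = rng.integers(0, 6, size=40)

X = np.stack([averaged_gabor(f) for f in faces])
X_pca = PCA(n_components=10).fit_transform(X)  # further dimensionality reduction
knn = KNeighborsClassifier(n_neighbors=3).fit(X_pca, labels)
print(knn.score(X_pca, labels))
```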
IEEE Transactions on Image Processing | 2013
Seyed Mehdi Lajevardi; Arathi Arakala; Stephen A. Davis; Kathy J. Horadam
This paper presents an automatic retina verification framework based on the biometric graph matching (BGM) algorithm. The retinal vasculature is extracted using a family of matched filters in the frequency domain and morphological operators. Retinal templates are then defined as formal spatial graphs derived from the retinal vasculature. The BGM algorithm, a noisy graph matching algorithm that is robust to translation, non-linear distortion, and small rotations, is used to compare retinal templates. The BGM algorithm uses graph topology to define three distance measures between a pair of graphs, two of which are new. A support vector machine (SVM) classifier is used to distinguish between genuine and imposter comparisons. Using single as well as multiple graph measures, the classifier achieves complete separation on a training set of images from the VARIA database (60% of the data), equaling the state of the art for retina verification. Because the available data set is small, kernel density estimation (KDE) of the genuine and imposter score distributions of the training set is used to measure the performance of the BGM algorithm. In the one-dimensional case, the KDE model is validated with the testing set. An EER of 0 on testing shows that the KDE model is a good fit for the empirical distribution. For the multiple graph measures, a novel combination of the SVM boundary and the KDE model is used to obtain a fair comparison with the KDE model for the single measure. A clear benefit of using multiple graph measures over a single measure to distinguish genuine and imposter comparisons is demonstrated by a drop in theoretical error of between 60% and more than two orders of magnitude.
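The snippet below illustrates only the performance-modelling step mentioned above: kernel density estimates are fitted to genuine and imposter distance scores, and the theoretical error (FRR vs. FAR) is read off the two densities to locate an approximate EER threshold. The Gaussian score samples are synthetic stand-ins, not VARIA results, and the graph-matching and SVM stages are omitted.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
genuine = rng.normal(loc=0.2, scale=0.10, size=60)    # small graph distances
imposter = rng.normal(loc=0.8, scale=0.15, size=600)  # large graph distances

kde_gen = gaussian_kde(genuine)
kde_imp = gaussian_kde(imposter)

def theoretical_error(threshold):
    """FRR and FAR predicted by the KDE model at a given distance threshold."""
    frr = kde_gen.integrate_box_1d(threshold, np.inf)   # genuine pairs rejected
    far = kde_imp.integrate_box_1d(-np.inf, threshold)  # imposter pairs accepted
    return frr, far

thresholds = np.linspace(0.0, 1.0, 101)
eer_threshold = min(thresholds, key=lambda t: abs(np.subtract(*theoretical_error(t))))
print("approximate EER threshold:", round(float(eer_threshold), 2))
```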
Image and Vision Computing New Zealand | 2008
Seyed Mehdi Lajevardi; Margaret Lech
A novel method for facial expression recognition from sequences of image frames is described and tested. The expression recognition system is fully automatic and consists of the following modules: face detection, maximum arousal detection, feature extraction, selection of optimal features, and facial expression recognition. Face detection is based on the AdaBoost algorithm and is followed by the extraction of the frames with the maximum arousal (intensity) of emotion using the inter-frame mutual information criterion. The selected frames are then processed to generate characteristic features based on the log-Gabor filter method combined with an optimal feature selection process, which uses the MIFS algorithm. The system can automatically recognize six expressions: anger, disgust, fear, happiness, sadness, and surprise. The selected features were classified using the Naive Bayesian (NB) classifier. The system was tested using image sequences from the Cohn-Kanade database. The percentage of correct classification increased from 68.9% for the non-optimized features to 79.5% for the optimized set of features.
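One plausible reading of the inter-frame mutual information criterion is sketched below: each frame's mutual information with the first (neutral) frame is computed from binned intensities, and the frame least similar to neutral is taken as the peak-arousal frame. The binning, the use of the first frame as the reference and the toy sequence are assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def frame_mi(frame_a, frame_b, bins=32):
    """Mutual information between the binned intensities of two frames."""
    a = np.digitize(frame_a.ravel(), np.linspace(0, 1, bins))
    b = np.digitize(frame_b.ravel(), np.linspace(0, 1, bins))
    return mutual_info_score(a, b)

# Toy sequence: 10 frames drifting progressively away from the first (neutral) frame.
rng = np.random.default_rng(5)
neutral = rng.random((48, 48))
sequence = [np.clip(neutral + 0.05 * i * rng.random((48, 48)), 0, 1) for i in range(10)]

mi_to_neutral = [frame_mi(neutral, frame) for frame in sequence]
peak_frame = int(np.argmin(mi_to_neutral))  # frame least similar to neutral
print("selected frame index:", peak_frame)
```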
Digital Image Computing: Techniques and Applications | 2008
Seyed Mehdi Lajevardi; Margaret Lech
This study proposes a classification-based facial expression recognition method using a bank of multilayer perceptron neural networks. Six different facial expressions were considered. Firstly, logarithmic Gabor filters were applied to extract the features. Optimal subsets of features were then selected for each expression, down-sampled, and further reduced in size via principal component analysis (PCA). The arrays of eigenvectors were multiplied by the original log-Gabor features to form feature arrays concatenated into six data tensors, representing training sets for the different emotions. Each tensor was then used to train one of the six parallel neural networks, making each network most sensitive to a different emotion. The classification efficiency of the proposed method was tested on static images from the Cohn-Kanade database. The results were compared with those obtained using the full set of log-Gabor features. The average percentage of correct classifications varied across expressions from 31% to 85% for the optimised sub-set of log-Gabor features and from 23% to 67% for the full set of features. The average correct classification rate increased from 52% for the full set of log-Gabor features to 70% for the optimised sub-set.
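A minimal sketch of the parallel-network idea, assuming each network is trained one-vs-rest on a single emotion and the final label is taken from the most confident network; the feature vectors here stand in for the PCA-reduced log-Gabor features, and the network sizes are arbitrary.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(6)
X = rng.random((120, 40))         # stand-in for PCA-reduced log-Gabor features
y = rng.integers(0, 6, size=120)  # six expression labels

# One small MLP per expression, trained one-vs-rest so that each network
# becomes most sensitive to a single emotion.
bank = []
for emotion in range(6):
    net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=emotion)
    net.fit(X, (y == emotion).astype(int))
    bank.append(net)

def predict(sample):
    """Label = index of the network reporting the highest positive-class probability."""
    scores = [net.predict_proba(sample.reshape(1, -1))[0, 1] for net in bank]
    return int(np.argmax(scores))

print(predict(X[0]), y[0])
```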
IET Biometrics | 2014
Seyed Mehdi Lajevardi; Arathi Arakala; Stephen Davis; Kathy J. Horadam
This study proposes an automatic dorsal hand vein verification system using a novel algorithm called biometric graph matching (BGM). The dorsal hand vein image is segmented using the K-means technique, the region of interest is extracted based on morphological analysis operators, and the image is normalised using adaptive histogram equalisation. Veins are extracted using a maximum curvature algorithm. The locations of, and vascular connections between, crossovers, bifurcations, and terminations in a hand vein pattern define a hand vein graph. The matching performance of BGM for hand vein graphs is tested with two cost functions and compared with the matching performance of two standard point pattern matching algorithms, iterative closest point (ICP) and modified Hausdorff distance. Experiments are conducted on two public databases captured using far infrared and near infrared (NIR) cameras. BGM's matching performance is competitive with state-of-the-art algorithms on both databases despite using small and concise templates. For both databases, BGM performed at least as well as ICP. For the small graphs from the NIR database, BGM significantly outperformed point pattern matching. The size of the common subgraph of a pair of graphs is the most significant measure for discriminating between genuine and imposter comparisons.
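For context, the sketch below implements one of the point-pattern baselines named above, the modified Hausdorff distance, on toy vein feature-point sets; it is not the BGM algorithm itself, and the point counts and noise levels are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import cdist

def modified_hausdorff(points_a, points_b):
    """Modified Hausdorff distance between two 2-D point patterns."""
    d = cdist(points_a, points_b)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())

rng = np.random.default_rng(7)
template = rng.random((30, 2)) * 100                    # enrolled feature-point locations
probe = template + rng.normal(scale=1.5, size=(30, 2))  # same hand, slight distortion
imposter = rng.random((30, 2)) * 100                    # different hand

print("genuine :", round(modified_hausdorff(template, probe), 2))
print("imposter:", round(modified_hausdorff(template, imposter), 2))
```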
International Conference on Acoustics, Speech, and Signal Processing | 2010
Seyed Mehdi Lajevardi; Zahir M. Hussain
This paper presents a novel classification method based on perceptual image quality metrics for facial expression recognition. The features are extracted from contourlet sub-bands. Then, the optimum features are selected using the minimum redundancy maximum relevance (MRMR) algorithm. The selected features are classified using a structural similarity metric in the contourlet domain. The proposed method has been extensively assessed using two different databases: the Cohn-Kanade database and the JAFFE database. A series of experiments has been carried out, and a comparative study demonstrates the efficiency of the proposed method in enhancing the classification rates of a number of known algorithms.
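A simplified sketch of the classification idea, assuming a nearest-template rule under the structural similarity (SSIM) metric; for brevity it is applied directly in the image domain rather than on contourlet sub-bands, and the templates, image sizes and noise level are stand-ins.

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(8)
expressions = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
templates = {name: rng.random((48, 48)) for name in expressions}  # stand-in class templates

def classify(face):
    """Assign the label of the template with the highest SSIM score."""
    scores = {name: structural_similarity(face, tmpl, data_range=1.0)
              for name, tmpl in templates.items()}
    return max(scores, key=scores.get)

probe = np.clip(templates["happiness"] + rng.normal(scale=0.05, size=(48, 48)), 0, 1)
print(classify(probe))  # typically "happiness"
```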
Image and Vision Computing New Zealand | 2010
Seyed Mehdi Lajevardi; Zahir M. Hussain
A novel approach is proposed for emotion recognition from low-resolution color facial images. The 3-D color images are unfolded into 2-D matrices based on multilinear algebra. The features are then extracted from them by Log-Gabor filters. The optimum features are selected based on mutual information, and these features are classified using a linear discriminant analysis (LDA) classifier. Experimental results show that the proposed method yields a large improvement in emotion recognition for 3-D color facial images. Furthermore, the color subspace has a great impact on the rate of emotion recognition from low-resolution images.
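The final selection and classification stages might look roughly like the sketch below, assuming mutual-information feature ranking followed by an LDA classifier (the unfolding step itself is sketched earlier after the TPCF abstract); the feature matrix stands in for Log-Gabor responses of unfolded color images, and all sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(9)
X = rng.random((150, 200))        # stand-in for Log-Gabor features of unfolded color images
y = rng.integers(0, 6, size=150)  # six emotion labels

X_selected = SelectKBest(mutual_info_classif, k=20).fit_transform(X, y)  # MI-based selection
lda = LinearDiscriminantAnalysis().fit(X_selected, y)
print(lda.score(X_selected, y))
```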