Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hossein Moeini is active.

Publication


Featured research published by Hossein Moeini.


Image and Vision Computing | 2015

Unrestricted pose-invariant face recognition by sparse dictionary matrix

Ali Moeini; Hossein Moeini; Karim Faez

In this paper, a novel method is proposed for real-world pose-invariant face recognition from only a single image in a gallery. A 3D Facial Expression Generic Elastic Model (3D FE-GEM) is proposed to reconstruct a 3D model of each human face using only a single 2D frontal image. Then, for each person in the database, a Sparse Dictionary Matrix (SDM) is created from all face poses by rotating the 3D reconstructed models and extracting features from the rotated faces. Each SDM is subsequently rendered based on triplet angles of face poses. Before matching to the SDM, an initial estimate of the triplet angles of the face pose is obtained from the probe face image using an automatic head pose estimation approach. Then, an array of the SDM is selected for each subject based on the estimated triplet angles. Finally, the selected arrays from the SDMs are compared with the probe image by sparse representation classification. Convincing results were obtained in handling pose changes on the FERET, CMU PIE, LFW and video face databases compared to several state-of-the-art methods in pose-invariant face recognition.
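
The abstract does not detail the sparse matching step, so the following is a minimal, hypothetical sketch of sparse representation classification over a dictionary of gallery feature columns, using scikit-learn's Lasso as the l1 solver. Names such as `gallery_dict` and `class_labels` are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of sparse representation classification (SRC): code the probe
# over the gallery dictionary and assign the class with the smallest residual.
# This is an illustrative stand-in, not the authors' exact SDM pipeline.
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(gallery_dict, class_labels, probe, alpha=0.01):
    """gallery_dict: (d, n) matrix of gallery feature columns.
    class_labels: length-n array of subject ids.
    probe: length-d probe feature vector."""
    # Solve an l1-regularised least-squares problem for the sparse code.
    solver = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    solver.fit(gallery_dict, probe)
    code = solver.coef_
    # Classify by the smallest class-wise reconstruction residual.
    best_label, best_residual = None, np.inf
    for label in np.unique(class_labels):
        mask = (class_labels == label)
        residual = np.linalg.norm(probe - gallery_dict[:, mask] @ code[mask])
        if residual < best_residual:
            best_label, best_residual = label, residual
    return best_label
```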


IEEE Transactions on Information Forensics and Security | 2015

Real-World and Rapid Face Recognition Toward Pose and Expression Variations via Feature Library Matrix

Ali Moeini; Hossein Moeini

In this paper, a novel method for face recognition under pose and expression variations is proposed from only a single image in the gallery. A 3D probabilistic facial expression recognition generic elastic model is proposed to reconstruct a 3D model of a real-world human face using only a single 2D frontal image, with or without facial expressions. Then, a feature library matrix (FLM) is generated for each subject in the gallery from all face poses by rotating the 3D reconstructed models and extracting features from the rotated face poses. Each FLM is subsequently rendered for each subject in the gallery based on triplet angles of face poses. Before matching to the FLM, an initial estimate of the triplet angles is obtained from the face pose in the probe image using an automatic head pose estimation approach. Then, an array of the FLM is selected for each subject based on the estimated triplet angles. Finally, the selected arrays from the FLMs are compared with features extracted from the probe image by iterative scoring classification using a support vector machine. Convincing results are obtained in handling pose and expression changes on the Bosphorus, Face Recognition Technology (FERET), Carnegie Mellon University Pose, Illumination, and Expression (CMU-PIE), and Labeled Faces in the Wild (LFW) face databases compared with several state-of-the-art methods in pose-invariant face recognition. The proposed method not only achieves high accuracy on all four databases but also outperforms other approaches under realistic conditions.
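The pose-driven selection from the feature library can be pictured as a nearest-pose lookup. Below is a minimal sketch, assuming each FLM stores the (yaw, pitch, roll) triplet used to render every entry; cosine similarity is used here as a simple stand-in for the paper's SVM-based iterative scoring, and all variable names are hypothetical.

```python
# Minimal sketch of selecting an FLM entry by the estimated (yaw, pitch, roll)
# triplet, then scoring the probe against each subject's selected entry.
import numpy as np

def select_flm_entry(flm_angles, flm_features, estimated_triplet):
    """flm_angles: (k, 3) array of (yaw, pitch, roll) used to render the FLM.
    flm_features: (k, d) array of the corresponding feature vectors.
    estimated_triplet: (3,) pose estimate for the probe image."""
    # Pick the rendered pose closest to the estimate in angle space.
    idx = np.argmin(np.linalg.norm(flm_angles - estimated_triplet, axis=1))
    return flm_features[idx]

def rank_subjects(gallery_flms, estimated_triplet, probe_feature):
    """gallery_flms: dict subject_id -> (flm_angles, flm_features)."""
    scores = {}
    for subject, (angles, feats) in gallery_flms.items():
        selected = select_flm_entry(angles, feats, estimated_triplet)
        # Cosine similarity as a stand-in for the SVM-based scoring.
        scores[subject] = float(
            probe_feature @ selected /
            (np.linalg.norm(probe_feature) * np.linalg.norm(selected) + 1e-12)
        )
    return max(scores, key=scores.get)
```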


International Conference on Pattern Recognition | 2014

Real-Time Pose-Invariant Face Recognition by Triplet Pose Sparse Matrix from Only a Single Image

Ali Moeini; Hossein Moeini; Karim Faez

In this paper, a novel method for real-time pose-invariant face recognition is proposed from only a single image in a gallery, including any facial expressions. A 3D Facial Expression Generic Elastic Model (3D FE-GEM) is proposed to reconstruct a 3D model of each human face in the database using only a single 2D frontal image. Then, for each person in the database, a Triplet Pose Sparse Matrix (TPSM) is created from all face poses by rotating the 3D reconstructed models and extracting features from the rotated faces. Each TPSM is subsequently rendered based on triplet angles of face poses. Before matching to the TPSM, an initial estimate of the triplet angles of the face pose is obtained from the test face image or video using an automatic head pose estimation approach. Then, an array of the TPSM is selected for each subject based on the estimated triplet angles. Finally, the selected arrays from the TPSMs are compared with the target image by joint dynamic sparse representation classification. Favorable results were obtained in handling pose and expression changes on the available image and video databases compared to several state-of-the-art methods in pose-invariant face recognition.
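
Synthesising the rotated poses from a reconstructed model comes down to applying rotation matrices built from the (yaw, pitch, roll) triplet. The sketch below is a generic illustration under an assumed Euler-angle convention, not the paper's rendering code.

```python
# Minimal sketch of rotating a reconstructed 3D face point cloud by a
# (yaw, pitch, roll) triplet and projecting it orthographically.
import numpy as np

def rotation_from_triplet(yaw, pitch, roll):
    """Build R = Rz(roll) @ Rx(pitch) @ Ry(yaw); angles given in degrees."""
    y, p, r = np.radians([yaw, pitch, roll])
    Ry = np.array([[np.cos(y), 0, np.sin(y)],
                   [0, 1, 0],
                   [-np.sin(y), 0, np.cos(y)]])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(p), -np.sin(p)],
                   [0, np.sin(p), np.cos(p)]])
    Rz = np.array([[np.cos(r), -np.sin(r), 0],
                   [np.sin(r), np.cos(r), 0],
                   [0, 0, 1]])
    return Rz @ Rx @ Ry

def render_pose(points_3d, yaw, pitch, roll):
    """points_3d: (n, 3) reconstructed face vertices.
    Returns the 2D vertex positions of the rotated pose."""
    rotated = points_3d @ rotation_from_triplet(yaw, pitch, roll).T
    return rotated[:, :2]  # drop depth for an orthographic projection
```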


International Conference on Pattern Recognition | 2014

Pose-Invariant Facial Expression Recognition Based on 3D Face Reconstruction and Synthesis from a Single 2D Image

Ali Moeini; Hossein Moeini; Karim Faez

In this paper, a novel method is proposed for person-independent, pose-invariant facial expression recognition based on 3D face reconstruction from only 2D frontal images in a training set. A 3D Facial Expression Generic Elastic Model (3D FE-GEM) is proposed to reconstruct an expression-invariant 3D model of each human face in the database using only a single 2D frontal image, with or without facial expressions. Then, for each of the seven facial expression classes in the database, a Feature Library Matrix (FLM) is created from yaw face poses by rotating the 3D reconstructed models and extracting features from the rotated faces. Each FLM is subsequently rendered based on yaw angles of face poses. Before matching to the FLM, an initial estimate of the yaw angle of the face pose is obtained from the test face image using an automatic head pose estimation approach. Then, an array of the FLM is selected for each class of facial expressions based on the estimated yaw angle. Finally, the selected arrays from the FLMs are compared with the target image features by Support Vector Machine (SVM) classification. Favorable results were obtained in handling pose in facial expression recognition on the available image database compared to several state-of-the-art methods in pose-invariant facial expression recognition.


International Conference on Pattern Recognition | 2014

Makeup-Invariant Face Recognition by 3D Face: Modeling and Dual-Tree Complex Wavelet Transform from Women's 2D Real-World Images

Ali Moeini; Hossein Moeini; Fazael Ayatollahi; Karim Faez

In this paper, a novel feature extraction method is proposed to handle facial makeup in face recognition. To develop a face recognition method robust to facial makeup, features are extracted from the face depth, which is not affected by facial makeup; these depth features are then combined with face texture features. Accordingly, a 3D face is reconstructed from only a single 2D frontal image, with or without facial expressions. Then, the texture and depth of the face are extracted from the reconstructed model. Afterwards, the Dual-Tree Complex Wavelet Transform (DT-CWT) is applied to both the texture and the reconstructed depth of the face to extract feature vectors from both images. Finally, by combining the 2D and 3D feature vectors, the final feature vectors are generated and classified by a Support Vector Machine (SVM). Promising results were achieved for makeup-invariant face recognition on the available image database compared to several state-of-the-art methods.
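
The texture-plus-depth fusion pipeline can be sketched in a few lines. The snippet below uses PyWavelets' real 2D wavelet decomposition as a stand-in for the DT-CWT, and the pooled subband statistics and SVM settings are assumptions rather than the paper's configuration.

```python
# Minimal sketch: wavelet-based features from texture and reconstructed depth,
# concatenated (2D + 3D fusion) and classified with an SVM.
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_features(image, wavelet="db2", level=3):
    """Pool mean and standard deviation of each subband into a feature vector."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    feats = []
    for c in coeffs:
        bands = c if isinstance(c, tuple) else (c,)
        for band in bands:
            feats.extend([np.mean(np.abs(band)), np.std(band)])
    return np.asarray(feats)

def fused_feature(texture, depth):
    # Concatenate texture and reconstructed-depth features.
    return np.concatenate([wavelet_features(texture), wavelet_features(depth)])

# Usage sketch (hypothetical data):
# X = np.stack([fused_feature(t, d) for t, d in training_pairs])
# clf = SVC(kernel="rbf").fit(X, subject_labels)
```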


Journal of Visual Communication and Image Representation | 2016

2D facial expression recognition via 3D reconstruction and feature fusion

Ali Moeini; Karim Faez; Hamid Sadeghi; Hossein Moeini

Highlights: this paper proposes a method for facial expression recognition in which facial depth is added to facial texture for feature extraction; we demonstrate that adding facial depth to the feature extraction is effective; the 3DH-LLBP is proposed for feature extraction from facial depth images; and 2D facial expression recognition is performed by combining 2D and 3D features through feature fusion.

In this paper, a novel feature extraction method is proposed for facial expression recognition by extracting features from facial depth and 3D mesh alongside texture. Accordingly, the 3D Facial Expression Generic Elastic Model (3D FE-GEM) method is used to reconstruct an expression-invariant 3D model of the human face. Then, the texture, depth and mesh are extracted from the reconstructed face model. Afterwards, the Local Binary Pattern (LBP), the proposed 3D High-Low Local Binary Pattern (3DH-LLBP) and Local Normal Binary Patterns (LNBPs) are applied to the texture, depth and mesh of the face, respectively, to extract features from the 2D images. Finally, the final feature vectors are generated through feature fusion and are classified by a Support Vector Machine (SVM). Convincing results are obtained for facial expression recognition on the CK+, CK, JAFFE and Bosphorus image databases compared to several state-of-the-art methods.
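
As a rough illustration of the descriptor-fusion idea, the sketch below computes uniform LBP histograms from texture and depth images, concatenates them, and trains an SVM. The uniform LBP stands in for the paper's LBP / 3DH-LLBP / LNBP descriptors, whose exact definitions are not reproduced here.

```python
# Minimal sketch of LBP-histogram features from texture and depth, fused by
# concatenation and classified with an SVM.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(image, points=8, radius=1):
    codes = local_binary_pattern(image, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2,
                           range=(0, points + 2), density=True)
    return hist

def expression_feature(texture, depth):
    # 2D (texture) and 3D (reconstructed depth) descriptors, fused by concatenation.
    return np.concatenate([lbp_histogram(texture), lbp_histogram(depth)])

# Usage sketch (hypothetical data):
# X = np.stack([expression_feature(t, d) for t, d in samples])
# clf = SVC(kernel="linear").fit(X, expression_labels)
```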


Pattern Recognition Letters | 2015

Unconstrained pose-invariant face recognition by a triplet collaborative dictionary matrix

Ali Moeini; Karim Faez; Hossein Moeini

Highlights: we propose a novel method for real-world pose-invariant face recognition; the proposed method uses only a single gallery image with any facial expression; we generate a collaborative dictionary matrix for each person; and promising results were obtained in handling pose on the FERET, LFW and video databases.

In this paper, a novel method is proposed for unconstrained pose-invariant face recognition from only a single image in a gallery. A 3D face is initially reconstructed using only a 2D frontal image. Then, for each person in the gallery, a Triplet Collaborative Dictionary Matrix (TCDM) is created from all face poses by rotating the 3D reconstructed models and extracting features from the rotated faces. Each TCDM is subsequently rendered based on triplet angles of face poses. Finally, classification is performed by Collaborative Representation Classification (CRC) with Regularized Least Squares (RLS). Promising results were obtained in handling pose changes on the FERET, LFW and video face databases compared to state-of-the-art methods in pose-invariant face recognition.
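
CRC with regularised least squares has a closed-form coding step, which the sketch below illustrates in a generic form; the dictionary layout and the residual-based decision rule are the standard CRC-RLS formulation, not necessarily the paper's exact variant.

```python
# Minimal sketch of collaborative representation classification with
# regularised least squares (CRC-RLS): code the probe over the whole gallery
# with a ridge solution, then pick the class with the smallest scaled residual.
import numpy as np

def crc_rls_classify(dictionary, labels, probe, lam=1e-3):
    """dictionary: (d, n) gallery feature columns; labels: (n,) subject ids."""
    d, n = dictionary.shape
    # Closed-form ridge solution: (D^T D + lam * I)^-1 D^T y
    code = np.linalg.solve(dictionary.T @ dictionary + lam * np.eye(n),
                           dictionary.T @ probe)
    best, best_score = None, np.inf
    for label in np.unique(labels):
        mask = labels == label
        residual = np.linalg.norm(probe - dictionary[:, mask] @ code[mask])
        score = residual / (np.linalg.norm(code[mask]) + 1e-12)
        if score < best_score:
            best, best_score = label, score
    return best
```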


IET Image Processing | 2015

Real-world gender classification via local Gabor binary pattern and three-dimensional face reconstruction by generic elastic model

Ali Moeini; Karim Faez; Hossein Moeini

In this study, a novel method is proposed for gender classification by adding facial depth features to texture features. Accordingly, the three-dimensional (3D) generic elastic model is used to reconstruct a 3D model of the human face using only a single 2D frontal image. Then, the texture and depth are extracted from the reconstructed face model. Afterwards, the local Gabor binary pattern (LGBP) is applied to both the facial texture and the reconstructed depth to extract feature vectors from both images. Finally, by combining the 2D and 3D feature vectors, the final LGBP histogram bins are generated and classified by a support vector machine. Favourable results are obtained for gender classification on the Labelled Faces in the Wild and FERET databases compared to several state-of-the-art methods in gender classification.
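
A generic LGBP descriptor combines Gabor magnitude responses with LBP coding. The sketch below shows one common way to compute it; the chosen frequencies, orientations, and histogram layout are assumptions rather than the paper's settings.

```python
# Minimal sketch of local Gabor binary pattern (LGBP) features: Gabor
# magnitude responses at several frequencies/orientations, each encoded with
# uniform LBP and pooled into histograms.
import numpy as np
from skimage.filters import gabor
from skimage.feature import local_binary_pattern

def lgbp_features(image, frequencies=(0.1, 0.2), n_orientations=4,
                  points=8, radius=1):
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            real, imag = gabor(image, frequency=f,
                               theta=np.pi * k / n_orientations)
            magnitude = np.hypot(real, imag)
            codes = local_binary_pattern(magnitude, points, radius,
                                         method="uniform")
            hist, _ = np.histogram(codes, bins=points + 2,
                                   range=(0, points + 2), density=True)
            feats.append(hist)
    return np.concatenate(feats)

# The same descriptor can be computed on the reconstructed depth image and
# concatenated with the texture descriptor before SVM classification.
```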


International Conference on Pattern Recognition | 2014

Expression-Invariant Face Recognition via 3D Face Reconstruction Using Gabor Filter Bank from a 2D Single Image

Ali Moeini; Hossein Moeini; Karim Faez

In this paper, a novel method for expression-insensitive face recognition is proposed from only a single 2D image in a gallery, including any facial expressions. A 3D Generic Elastic Model (3D GEM) is used to reconstruct a 3D model of each human face in the database using only a single 2D frontal image, with or without facial expressions. Then, the rigid parts of the face are extracted from both the texture and the reconstructed depth based on 2D facial landmarks. Afterwards, a Gabor filter bank is applied to the extracted rigid part of the face to extract feature vectors from both the texture and the reconstructed depth images. Finally, by combining the 2D and 3D feature vectors, the final feature vectors are generated and classified by a Support Vector Machine (SVM). Favorable results were obtained in handling expression changes on the available image database compared to several state-of-the-art methods in expression-insensitive face recognition.


International Conference on Distributed Smart Cameras | 2013

Deformable Generic Elastic Models from a single 2D image for facial expression and large pose face together synthesis and recognition

Ali Moeini; Karim Faez; Hossein Moeini

In this paper, we propose an efficient method to reconstruct the 3D model of a human face from a single 2D face image, robust to a variety of facial expressions, using the Deformable Generic Elastic Model (D-GEM). We extend the Generic Elastic Model (GEM) approach by combining it with statistical information about the human face and by deforming generic depth models according to the distance measured around the face lips. In particular, we demonstrate that D-GEM approximates the 3D shape of the input face image more accurately, achieving higher-quality 3D face modeling and reconstruction under a variety of facial expressions compared to the original GEM and the Gender and Ethnicity GEM (GE-GEM) approach. The method has been tested on an available 2D face database, synthesizing new facial expressions and large pose changes together from gallery images. We obtained promising results in handling pose and expression changes compared to the GEM and GE-GEM.
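
The abstract only says that the generic depth model is deformed according to a lip-distance measurement. Purely as an illustration of how such a deformation could be parameterised, the sketch below blends two generic depth maps with a weight derived from the measured lip gap; the blending scheme and normalisation are assumptions, not the paper's formulation.

```python
# Illustrative sketch: drive a linear blend of generic depth models by the
# measured distance between the lips. Not the authors' D-GEM equations.
import numpy as np

def deform_generic_depth(neutral_depth, open_mouth_depth, lip_gap, max_gap):
    """neutral_depth, open_mouth_depth: (h, w) generic depth maps.
    lip_gap: measured distance between upper- and lower-lip landmarks.
    max_gap: lip gap corresponding to a fully open mouth."""
    w = np.clip(lip_gap / max_gap, 0.0, 1.0)  # expression weight from lip distance
    # Linear blend of the generic depth models driven by the lip measurement.
    return (1.0 - w) * neutral_depth + w * open_mouth_depth
```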

Collaboration


Dive into Hossein Moeini's collaborations.
