Publication


Featured research published by Mohammed Bennamoun.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2010

Linear Regression for Face Recognition

Imran Naseem; Roberto Togneri; Mohammed Bennamoun

In this paper, we present a novel approach to face identification by formulating the pattern recognition problem in terms of linear regression. Using a fundamental concept that patterns from a single-object class lie on a linear subspace, we develop a linear model representing a probe image as a linear combination of class-specific galleries. The inverse problem is solved using the least-squares method and the decision is ruled in favor of the class with the minimum reconstruction error. The proposed Linear Regression Classification (LRC) algorithm falls in the category of nearest subspace classification. The algorithm is extensively evaluated on several standard databases under a number of exemplary evaluation protocols reported in the face recognition literature. A comparative study with state-of-the-art algorithms clearly reflects the efficacy of the proposed approach. For the problem of contiguous occlusion, we propose a Modular LRC approach, introducing a novel Distance-based Evidence Fusion (DEF) algorithm. The proposed methodology achieves the best results ever reported for the challenging problem of scarf occlusion.
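The LRC decision rule described above can be sketched in a few lines of NumPy. This is a generic illustration, not the authors' implementation; the gallery matrices, probe vector, and class labels below are synthetic:

```python
import numpy as np

def lrc_classify(probe, galleries):
    """Linear Regression Classification: represent the probe as a
    linear combination of each class's gallery images and pick the
    class with the minimum reconstruction error."""
    best_class, best_err = None, np.inf
    for label, X in galleries.items():
        # X: (d, n_i) matrix whose columns are class-i gallery images.
        # Least-squares solution of X @ beta ~= probe.
        beta, *_ = np.linalg.lstsq(X, probe, rcond=None)
        err = np.linalg.norm(probe - X @ beta)
        if err < best_err:
            best_class, best_err = label, err
    return best_class

# Toy example: two synthetic "classes" of two gallery images each.
rng = np.random.default_rng(0)
base_a, base_b = rng.normal(size=16), rng.normal(size=16)
galleries = {
    "A": np.column_stack([base_a + 0.05 * rng.normal(size=16) for _ in range(2)]),
    "B": np.column_stack([base_b + 0.05 * rng.normal(size=16) for _ in range(2)]),
}
probe = base_a + 0.05 * rng.normal(size=16)
print(lrc_classify(probe, galleries))
```

Since the probe lies close to class A's subspace, its reconstruction error there is smallest and "A" is returned.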


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2007

An Efficient Multimodal 2D-3D Hybrid Approach to Automatic Face Recognition

Ajmal S. Mian; Mohammed Bennamoun; Robyn A. Owens

We present a fully automatic face recognition algorithm and demonstrate its performance on the FRGC v2.0 data. Our algorithm is multimodal (2D and 3D) and performs hybrid (feature based and holistic) matching in order to achieve efficiency and robustness to facial expressions. The pose of a 3D face along with its texture is automatically corrected using a novel approach based on a single automatically detected point and the Hotelling transform. A novel 3D spherical face representation (SFR) is used in conjunction with the scale-invariant feature transform (SIFT) descriptor to form a rejection classifier, which quickly eliminates a large number of candidate faces at an early stage for efficient recognition in case of large galleries. The remaining faces are then verified using a novel region-based matching approach, which is robust to facial expressions. This approach automatically segments the eyes-forehead and nose regions, which are relatively less sensitive to expressions, and matches them separately using a modified iterative closest point (ICP) algorithm. The results of all the matching engines are fused at the metric level to achieve higher accuracy. We use the FRGC benchmark to compare our results to other algorithms that used the same database. Our multimodal hybrid algorithm performed better than others by achieving 99.74 percent and 98.31 percent verification rates at a 0.001 false acceptance rate (FAR) and identification rates of 99.02 percent and 95.37 percent for probes with a neutral and a nonneutral expression, respectively.
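The region matching above relies on a modified ICP. A bare-bones point-to-point ICP iteration, the textbook variant rather than the paper's modified one, looks like this (the point sets below are synthetic):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst
    (Kabsch/Procrustes solution via SVD)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=20):
    """Classic point-to-point ICP: alternate closest-point pairing
    with the optimal rigid alignment of the current pairing."""
    cur = src.copy()
    for _ in range(iters):
        # Brute-force closest point in dst for each point of cur.
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

# Recover a known small rotation + translation of a 3D point set.
rng = np.random.default_rng(1)
pts = rng.normal(size=(30, 3))
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
aligned = icp(pts @ Rz.T + 0.05, pts)
print(np.abs(aligned - pts).max())   # residual after alignment
```

Real 3D face scans have noise, partial overlap, and no one-to-one correspondence, which is why production variants add outlier rejection and point-to-plane error terms.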


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2006

Three-Dimensional Model-Based Object Recognition and Segmentation in Cluttered Scenes

Ajmal S. Mian; Mohammed Bennamoun; Robyn A. Owens

Viewpoint independent recognition of free-form objects and their segmentation in the presence of clutter and occlusions is a challenging task. We present a novel 3D model-based algorithm which performs this task automatically and efficiently. A 3D model of an object is automatically constructed offline from its multiple unordered range images (views). These views are converted into multidimensional table representations (which we refer to as tensors). Correspondences are automatically established between these views by simultaneously matching the tensors of a view with those of the remaining views using a hash table-based voting scheme. This results in a graph of relative transformations used to register the views before they are integrated into a seamless 3D model. These models and their tensor representations constitute the model library. During online recognition, a tensor from the scene is simultaneously matched with those in the library by casting votes. Similarity measures are calculated for the model tensors which receive the most votes. The model with the highest similarity is transformed to the scene and, if it aligns accurately with an object in the scene, that object is declared as recognized and is segmented. This process is repeated until the scene is completely segmented. Experiments were performed on real and synthetic data comprising 55 models and 610 scenes, and an overall recognition rate of 95 percent was achieved. Comparison with the spin images revealed that our algorithm is superior in terms of recognition rate and efficiency.
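The hash table-based voting idea can be illustrated schematically: library tensors are indexed by a coarse key, and each scene tensor votes for every model sharing its bin. Everything below is a hypothetical toy, including the `quantize` key function; the paper's actual tensors and hashing differ:

```python
from collections import defaultdict

def quantize(tensor, step=0.5):
    # Hypothetical coarse key: tensor values quantized to a grid.
    return tuple(round(v / step) for v in tensor)

def build_hash_table(library):
    """Index (model_id, tensor) entries by their coarse hash key."""
    table = defaultdict(list)
    for model_id, tensor in library:
        table[quantize(tensor)].append(model_id)
    return table

def vote(table, scene_tensors):
    """Each scene tensor votes for every model in its hash bin;
    models are ranked by total vote count."""
    votes = defaultdict(int)
    for tensor in scene_tensors:
        for model_id in table.get(quantize(tensor), []):
            votes[model_id] += 1
    return sorted(votes, key=votes.get, reverse=True)

library = [("cup", (0.1, 0.9)), ("cup", (0.2, 1.1)), ("shoe", (2.0, 0.1))]
table = build_hash_table(library)
print(vote(table, [(0.15, 0.95), (1.9, 0.2)]))
```

The top-voted candidates are then verified by actually aligning the model to the scene, as the abstract describes; voting only prunes the search.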


International Journal of Computer Vision | 2010

On the Repeatability and Quality of Keypoints for Local Feature-based 3D Object Retrieval from Cluttered Scenes

Ajmal S. Mian; Mohammed Bennamoun; Robyn A. Owens

3D object recognition from local features is robust to occlusions and clutter. However, local features must be extracted from a small set of feature rich keypoints to avoid computational complexity and ambiguous features. We present an algorithm for the detection of such keypoints on 3D models and partial views of objects. The keypoints are highly repeatable between partial views of an object and its complete 3D model. We also propose a quality measure to rank the keypoints and select the best ones for extracting local features. Keypoints are identified at locations where a unique local 3D coordinate basis can be derived from the underlying surface in order to extract invariant features. We also propose an automatic scale selection technique for extracting multi-scale and scale invariant features to match objects at different unknown scales. Features are projected to a PCA subspace and matched to find correspondences between a database and query object. Each pair of matching features gives a transformation that aligns the query and database object. These transformations are clustered and the biggest cluster is used to identify the query object. Experiments on a public database revealed that the proposed quality measure relates correctly to the repeatability of keypoints and the multi-scale features have a recognition rate of over 95% for up to 80% occluded objects.
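The step of projecting features to a PCA subspace before nearest-neighbor matching can be sketched as follows. This is a generic NumPy illustration with synthetic feature matrices, not the paper's pipeline:

```python
import numpy as np

def pca_basis(features, k):
    """Mean and top-k principal directions of an (n, d) feature matrix."""
    mean = features.mean(0)
    _, _, Vt = np.linalg.svd(features - mean, full_matrices=False)
    return mean, Vt[:k]

def match_features(db_feats, query_feats, k=8):
    """Project both feature sets onto the database PCA subspace and
    pair each query feature with its nearest database feature."""
    mean, basis = pca_basis(db_feats, k)
    db_p = (db_feats - mean) @ basis.T
    q_p = (query_feats - mean) @ basis.T
    d = np.linalg.norm(q_p[:, None] - db_p[None, :], axis=2)
    return d.argmin(axis=1)          # index of matched db feature

rng = np.random.default_rng(2)
db = rng.normal(size=(50, 32))                       # 50 database features
query = db[[3, 17, 41]] + 0.01 * rng.normal(size=(3, 32))  # noisy copies
print(match_features(db, query))
```

Each matched pair would then propose an aligning transformation, and clustering those transformations (as the abstract describes) filters out the inevitable false matches.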


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2014

3D Object Recognition in Cluttered Scenes with Local Surface Features: A Survey

Yulan Guo; Mohammed Bennamoun; Ferdous Ahmed Sohel; Min Lu; Jianwei Wan

3D object recognition in cluttered scenes is a rapidly growing research area. Based on the types of features used, 3D object recognition methods can broadly be divided into two categories: global or local feature based methods. Intensive research has been done on local surface feature based methods as they are more robust to the occlusion and clutter frequently present in real-world scenes. This paper presents a comprehensive survey of existing local surface feature based 3D object recognition methods. These methods generally comprise three phases: 3D keypoint detection, local surface feature description, and surface matching. This paper covers an extensive literature survey of each phase of the process. It also lists a number of popular and contemporary databases together with their relevant attributes.


ACM Computing Surveys | 2012

Ontology learning from text: A look back and into the future

Wilson Wong; Wei Liu; Mohammed Bennamoun

Ontologies are often viewed as the answer to the need for interoperable semantics in modern information systems. The explosion of textual information on the Read/Write Web, coupled with the increasing demand for ontologies to power the Semantic Web, has made (semi-)automatic ontology learning from text a very promising research area. This, together with the advanced state of related areas such as natural language processing, has fueled research into ontology learning over the past decade. This survey looks at how far we have come since the turn of the millennium and discusses the remaining challenges that will define the research directions in this area in the near future.


International Journal of Computer Vision | 2008

Keypoint Detection and Local Feature Matching for Textured 3D Face Recognition

Ajmal S. Mian; Mohammed Bennamoun; Robyn A. Owens

Holistic face recognition algorithms are sensitive to expressions, illumination, pose, occlusions and makeup. On the other hand, feature-based algorithms are robust to such variations. In this paper, we present a feature-based algorithm for the recognition of textured 3D faces. A novel keypoint detection technique is proposed which can repeatably identify keypoints at locations where shape variation is high in 3D faces. Moreover, a unique 3D coordinate basis can be defined locally at each keypoint, facilitating the extraction of highly descriptive pose invariant features. A 3D feature is extracted by fitting a surface to the neighborhood of a keypoint and sampling it on a uniform grid. Features from a probe and gallery face are projected to the PCA subspace and matched. The set of matching features is used to construct two graphs. The similarity between two faces is measured as the similarity between their graphs. In the 2D domain, we employed the SIFT features and performed fusion of the 2D and 3D features at the feature and score-level. The proposed algorithm achieved a 96.1% identification rate and a 98.6% verification rate on the complete FRGC v2 data set.


International Journal of Computer Vision | 2006

A Novel Representation and Feature Matching Algorithm for Automatic Pairwise Registration of Range Images

Ajmal S. Mian; Mohammed Bennamoun; Robyn A. Owens

Automatic registration of range images is a fundamental problem in 3D modeling of free-form objects. Various feature matching algorithms have been proposed for this purpose. However, these algorithms suffer from various limitations mainly related to their applicability, efficiency, robustness to resolution, and the discriminating capability of the used feature representation. We present a novel feature matching algorithm for automatic pairwise registration of range images which overcomes these limitations. Our algorithm uses a novel tensor representation which represents semi-local 3D surface patches of a range image by third order tensors. Multiple tensors are used to represent each range image. Tensors of two range images are matched to identify correspondences between them. Correspondences are verified and then used for pairwise registration of the range images. Experimental results show that our algorithm is accurate and efficient. Moreover, it is robust to the resolution of the range images, the number of tensors per view, the required amount of overlap, and noise. Comparisons with the spin image representation revealed that our representation has more discriminating capabilities and performs better at a low resolution of the range images.


International Journal of Computer Vision | 2016

A Comprehensive Performance Evaluation of 3D Local Feature Descriptors

Yulan Guo; Mohammed Bennamoun; Ferdous Ahmed Sohel; Min Lu; Jianwei Wan; Ngai Ming Kwok

A number of 3D local feature descriptors have been proposed in the literature. It is, however, unclear which descriptors are more appropriate for a particular application. A good descriptor should be descriptive, compact, and robust to a set of nuisances. This paper compares ten popular local feature descriptors in the contexts of 3D object recognition, 3D shape retrieval, and 3D modeling. We first evaluate the descriptiveness of these descriptors on eight popular datasets which were acquired using different techniques. We then analyze their compactness using the recall of feature matching per float value in the descriptor. We also test the robustness of the selected descriptors with respect to support radius variations, Gaussian noise, shot noise, varying mesh resolution, distance to the mesh boundary, keypoint localization error, occlusion, clutter, and dataset size. Moreover, we present the performance results of these descriptors when combined with different 3D keypoint detection methods. We finally analyze the computational efficiency of generating each descriptor.


International Journal of Computer Vision | 2009

An Expression Deformation Approach to Non-rigid 3D Face Recognition

Faisal R. Al-Osaimi; Mohammed Bennamoun; Ajmal S. Mian

The accuracy of non-rigid 3D face recognition approaches is highly influenced by their capacity to differentiate the deformations caused by facial expressions from the distinctive geometric attributes that uniquely characterize a 3D face (interpersonal disparities). We present an automatic 3D face recognition approach which can accurately differentiate between expression deformations and interpersonal disparities and hence recognize faces under any facial expression. The patterns of expression deformations are first learnt from training data in PCA eigenvectors. These patterns are then used to morph out the expression deformations. Similarity measures are extracted by matching the morphed 3D faces. PCA is performed in such a way that it models only the facial expressions, leaving out the interpersonal disparities. The approach was applied to the FRGC v2.0 dataset and superior recognition performance was achieved. The verification rates at 0.001 FAR were 98.35% and 97.73% for scans under neutral and non-neutral expressions, respectively.
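The idea of learning expression deformations with PCA and then morphing them out can be illustrated schematically. The data below is synthetic (two made-up "expression modes" shared across subjects), and the helper names are hypothetical, not the paper's:

```python
import numpy as np

def expression_basis(deformations, k):
    """PCA eigenvectors of expression deformations, e.g. expressive
    scans minus the same subjects' neutral scans."""
    centered = deformations - deformations.mean(0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return Vt[:k]

def morph_out(face, basis):
    """Remove the component of the face lying in the learned
    expression subspace, keeping the rest of the shape."""
    expr = (face @ basis.T) @ basis   # projection onto expression modes
    return face - expr

rng = np.random.default_rng(3)
modes = rng.normal(size=(2, 64))                 # shared expression modes
train_defs = rng.normal(size=(40, 2)) @ modes    # training deformations
basis = expression_basis(train_defs, k=2)

identity = rng.normal(size=64)                   # a subject's neutral face
expressive = identity + np.array([1.5, -0.7]) @ modes
recovered = morph_out(expressive, basis)
# recovered is much closer to the neutral face than the expressive input.
print(np.linalg.norm(recovered - identity), np.linalg.norm(expressive - identity))
```

Note that the projection also discards whatever part of the identity happens to lie in the expression subspace, which is why the paper fits PCA so that it models only the expressions.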

Collaboration


Dive into Mohammed Bennamoun's collaboration.

Top Co-Authors

Roberto Togneri (University of Western Australia)
Ajmal S. Mian (University of Western Australia)
Farid Boussaid (University of Western Australia)
Wei Liu (University of Western Australia)
Robyn A. Owens (University of Western Australia)
Senjian An (University of Western Australia)
Syed Afaq Ali Shah (University of Western Australia)
Imran Naseem (Karachi Institute of Economics and Technology)