Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where M. Saquib Sarfraz is active.

Publication


Featured research published by M. Saquib Sarfraz.


Image and Vision Computing | 2010

Probabilistic learning for fully automatic face recognition across pose

M. Saquib Sarfraz; Olaf Hellwich

Recent pose-invariant methods try to model the subject-specific appearance change across pose. For this, however, almost all existing methods require a perfect alignment between a gallery and a probe image. In this paper we present a pose-invariant face recognition method that does not require facial landmarks to be detected and works with only a single training image per subject. We propose novel extensions: a more robust feature description is used in place of pixel-based appearance, and using these features the non-frontal views are synthesized to frontal. Furthermore, we suggest deriving the prior models with local kernel density estimation instead of the commonly used normal density assumption. Our method does not require any strict alignment between gallery and probe images, which makes it particularly attractive compared to existing state-of-the-art methods. Improved recognition across a wide range of poses has been achieved using these extensions.
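To make the density-estimation idea concrete, here is a minimal sketch of how a kernel density estimate can replace the usual normal-density assumption when deriving a prior from training data; the 1-D `train_deltas` values and the SciPy calls are illustrative stand-ins, not the paper's actual features or implementation.

```python
import numpy as np
from scipy.stats import gaussian_kde, multivariate_normal

# Illustrative 1-D stand-in for appearance-change values observed between
# frontal and non-frontal views of training subjects (not the paper's data).
rng = np.random.default_rng(0)
train_deltas = np.concatenate([rng.normal(-0.5, 0.2, 300),
                               rng.normal(0.8, 0.3, 200)])  # clearly non-Gaussian

# Commonly used alternative: a single normal density fitted to the data.
normal_prior = multivariate_normal(mean=train_deltas.mean(), cov=train_deltas.var())

# Kernel density estimate: no parametric shape is imposed on the prior.
kde_prior = gaussian_kde(train_deltas)

query = np.linspace(-1.5, 2.0, 5)
print("normal prior:", normal_prior.pdf(query))
print("KDE prior   :", kde_prior(query))
```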


British Machine Vision Conference | 2015

Deep Perceptual Mapping for Thermal to Visible Face Recognition.

M. Saquib Sarfraz; Rainer Stiefelhagen

Cross-modal face matching between the thermal and visible spectrum is a much desired capability for night-time surveillance and security applications. Due to a very large modality gap, thermal-to-visible face recognition is one of the most challenging face matching problems. In this paper, we present an approach that bridges this modality gap by a significant margin. Our approach captures the highly non-linear relationship between the two modalities by using a deep neural network. Our model attempts to learn a non-linear mapping from the visible to the thermal spectrum while preserving the identity information. We show substantive performance improvement on a difficult thermal-visible face dataset. The presented approach improves the state of the art by more than 10% in terms of Rank-1 identification and bridges the drop in performance due to the modality gap by more than 40%.
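A minimal sketch of the kind of non-linear cross-modal mapping described above, written in PyTorch; the `PerceptualMapping` name, layer sizes and plain MSE objective are assumptions for illustration and do not reproduce the paper's architecture or training procedure.

```python
import torch
import torch.nn as nn

class PerceptualMapping(nn.Module):
    """Small MLP that maps visible-spectrum features into the thermal
    feature space; layer sizes and depth are illustrative only."""
    def __init__(self, dim_in=128, dim_hidden=256, dim_out=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_in, dim_hidden), nn.Tanh(),
            nn.Linear(dim_hidden, dim_hidden), nn.Tanh(),
            nn.Linear(dim_hidden, dim_out),
        )

    def forward(self, x):
        return self.net(x)

# Toy training loop on random stand-in features; each row of `visible` and
# `thermal` is assumed to belong to the same identity.
visible = torch.randn(64, 128)
thermal = torch.randn(64, 128)
model = PerceptualMapping()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(visible), thermal)  # identity-preserving proxy
    loss.backward()
    opt.step()
```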


International Geoscience and Remote Sensing Symposium | 2012

Automatic registration of SAR and optical images based on mutual information assisted Monte Carlo

Muhammad Adnan Siddique; M. Saquib Sarfraz; David Bornemann; Olaf Hellwich

The development of Geographical Information Systems applications involving fusion of data from different space-borne imaging sensors inevitably requires a preliminary registration of the images. In the case of Synthetic Aperture Radar (SAR) and optical sensors, the registration is particularly challenging due to the vast radiometric differences in the data. In this paper, we present a novel method to register SAR and optical images automatically. It provides an accurate registration despite the radiometric differences in the images. Moreover, this paper introduces a Monte Carlo formulation of the image registration problem.
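As a rough illustration of mutual-information-driven registration combined with Monte Carlo sampling, the sketch below scores randomly sampled integer translations by the mutual information of the overlapped images; the `mutual_information` and `monte_carlo_register` helpers, the translation-only transform model and the bin count are simplifying assumptions, not the method proposed in the paper.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two equally sized images from their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def monte_carlo_register(sar, optical, trials=2000, max_shift=10, seed=0):
    """Randomly sample integer translations and keep the one maximizing MI."""
    rng = np.random.default_rng(seed)
    best, best_mi = (0, 0), -np.inf
    for _ in range(trials):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        shifted = np.roll(np.roll(sar, dy, axis=0), dx, axis=1)
        mi = mutual_information(shifted, optical)
        if mi > best_mi:
            best_mi, best = mi, (int(dy), int(dx))
    return best, best_mi

# Stand-in images: the "optical" image is a shifted, noisy copy of the "SAR" one,
# so the search should recover a translation close to (5, -3).
sar = np.random.rand(128, 128)
optical = np.roll(sar, (5, -3), axis=(0, 1)) + 0.05 * np.random.rand(128, 128)
print(monte_carlo_register(sar, optical))
```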


Archive | 2010

Feature Extraction and Representation for Face Recognition

M. Saquib Sarfraz; Olaf Hellwich; Zahid Riaz

Over the past two decades several attempts have been made to address the problem of face recognition and a voluminous literature has been produced. Current face recognition systems are able to perform very well in controlled environments, e.g. frontal face recognition, where face images are acquired under frontal pose with strict constraints as defined in related face recognition standards. However, in unconstrained situations, where a face may be captured in outdoor environments under arbitrary illumination and large pose variations, these systems fail to work. With the current focus of research on dealing with these problems, much attention has been devoted to the facial feature extraction stage. Facial feature extraction is the most important step in face recognition. Several studies have been made to answer questions such as what features to use and how to describe them, and several feature extraction techniques have been proposed. While many comprehensive literature reviews exist for face recognition, a complete reference for different feature extraction techniques and their advantages/disadvantages with regard to a typical face recognition task in unconstrained scenarios is much needed. In this chapter we present a comprehensive review of the most relevant feature extraction techniques used in 2D face recognition and introduce a new feature extraction technique, termed the Face-GLOH-signature, to be used in face recognition for the first time (Sarfraz and Hellwich, 2008), which has a number of advantages over the commonly used feature descriptions in the context of unconstrained face recognition. The goal of feature extraction is to find a specific representation of the data that can highlight relevant information. This representation can be found by maximizing a criterion or can be a pre-defined representation. Usually, a face image is represented by a high-dimensional vector containing pixel values (holistic representation) or a set of vectors where each vector summarizes the underlying content of a local region by using a high-level description.
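The holistic-versus-local distinction drawn at the end of the abstract can be illustrated with a short sketch; the block-histogram descriptor below is a generic local description used only for contrast with the raw pixel vector, and is not the Face-GLOH-signature itself.

```python
import numpy as np

def holistic_representation(face):
    """Holistic description: all pixel values stacked into one long vector."""
    return face.astype(np.float32).ravel()

def local_representation(face, grid=4, bins=16):
    """Local description: the face is split into a grid of regions and each
    region is summarized by a small intensity histogram; the concatenated
    histograms form the feature vector."""
    h, w = face.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            block = face[i * h // grid:(i + 1) * h // grid,
                         j * w // grid:(j + 1) * w // grid]
            hist, _ = np.histogram(block, bins=bins, range=(0, 256), density=True)
            feats.append(hist)
    return np.concatenate(feats)

face = np.random.randint(0, 256, size=(64, 64))  # stand-in for an aligned face crop
print(holistic_representation(face).shape)  # (4096,)
print(local_representation(face).shape)     # (256,)
```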


International Conference on Computer Vision | 2008

On Head Pose Estimation in Face Recognition

M. Saquib Sarfraz; Olaf Hellwich

We present a robust front-end pose classification/estimation procedure to be used in face recognition scenarios. A novel discriminative feature description is proposed that encodes the underlying shape well and is insensitive to illumination and other common variations in facial appearance, such as skin color. Using such features we generate a pose similarity feature space (PSFS) that turns the multi-class problem into a two-class one by using inter-pose and intra-pose similarities. A new classification procedure is laid down which models this feature space and copes well with discriminating between the nearest poses. For a test image it outputs a measure of confidence, or so-called posterior probability, for all poses without explicitly estimating the underlying densities. The pose estimation system is evaluated using the CMU Pose, Illumination and Expression (PIE) database.
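A minimal sketch of recasting pose estimation as a two-class problem over pair similarities: every image pair becomes a training example labelled intra-pose or inter-pose, and a binary classifier then yields a posterior for a query pair. The random stand-in features, the simple negative-distance similarity and the logistic-regression classifier are illustrative assumptions, not the discriminative features or classification procedure of the paper.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression

# Random stand-in features for 5 poses, 20 images per pose (not the paper's data).
rng = np.random.default_rng(0)
n_poses, per_pose, dim = 5, 20, 32
pose_means = rng.normal(scale=3.0, size=(n_poses, dim))
feats = rng.normal(size=(n_poses * per_pose, dim)) + np.repeat(pose_means, per_pose, axis=0)
poses = np.repeat(np.arange(n_poses), per_pose)

# Pose similarity feature space: every image pair becomes one training example,
# labelled intra-pose (1) or inter-pose (0).
pairs, labels = [], []
for i, j in combinations(range(len(feats)), 2):
    sim = -np.linalg.norm(feats[i] - feats[j])  # simple similarity score
    pairs.append([sim])
    labels.append(int(poses[i] == poses[j]))

clf = LogisticRegression().fit(np.array(pairs), np.array(labels))
# For a query pair, the classifier outputs a posterior for "same pose".
print(clf.predict_proba([[-2.0]])[0, 1])
```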


Frontiers of Information Technology | 2009

Multi-feature fusion in advanced robotics applications

Zahid Riaz; Christoph Mayer; Michael Beetz; Bernd Radig; M. Saquib Sarfraz

This paper describes a feature extraction technique for human face image sequences using a model-based approach. We study two different models with our proposed approach towards multi-feature extraction. These features are efficiently used to extract facial information for different applications. The approach consists of fitting a model to the face image using a robust objective function and extracting textural and temporal features for three major applications, namely 1) face recognition, 2) facial expression recognition and 3) gender classification. For experimentation and a comparative study of our multi-features over the two models, we use the same set of features with two different classifiers, generating promising results which show that the extracted features are strong enough to be used for face image analysis. The goodness of the features has been investigated on the Cohn-Kanade Facial Expressions Database (CKFED). The proposed multi-feature approach is automatic and runs in real time.
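For illustration only, the sketch below mirrors the described setup of feeding one feature set to two different classifiers for different tasks; the random stand-in features, the SVM/k-NN choices and the label sets are assumptions, not the models or data used in the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# Random stand-in for the extracted textural/temporal feature vectors.
rng = np.random.default_rng(0)
features = rng.normal(size=(300, 64))
identity_labels = rng.integers(0, 10, 300)    # face recognition task
expression_labels = rng.integers(0, 6, 300)   # facial expression task

# The same feature set is fed to two different classifiers for two of the
# three applications; classifier choices here are purely illustrative.
svm = SVC().fit(features, identity_labels)
knn = KNeighborsClassifier(n_neighbors=5).fit(features, expression_labels)
print(svm.predict(features[:3]), knn.predict(features[:3]))
```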


British Machine Vision Conference | 2013

RPM: Random Points Matching for Pairwise Face-Similarity.

M. Saquib Sarfraz; Muhammad Adnan Siddique; Rainer Stiefelhagen

Matching face image pairs based on global features, or on local analysis of points found using a key-point or fiducial-point detector, becomes prohibitively difficult in realistic images when there are large pose, lighting, expression and imaging differences. We develop a new approach that automatically and reliably finds well-matched and useful corresponding points, referred to as homologous points, from randomly initialized points on the two probe images under unrestricted image variations. The procedure obviates the need for a key-point or fiducial-point detector and the overly restrictive requirement of image alignment. We then propose a new pair-wise similarity metric that combines the strength of the useful parameters found during the random point matching and the similarity computed using a local descriptor around the homologous points. Our results in a face verification setting on two challenging datasets (‘Labelled Faces in the Wild’ and FacePix) under large pose, expression and imaging variations show improved performance over the state-of-the-art methods for pair-wise similarity.
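A simplified sketch of the random-point-matching idea: points are sampled at random on one image, a small local search finds the best-matching ("homologous") location on the other image, and the final score combines how many points matched well with how well they matched. The NCC patch descriptor, search radius and thresholds are illustrative placeholders rather than the similarity metric proposed in the paper.

```python
import numpy as np

def patch(img, y, x, r=8):
    """(2r x 2r) patch centred near (y, x), clipped to stay inside the image."""
    y = int(np.clip(y, r, img.shape[0] - r))
    x = int(np.clip(x, r, img.shape[1] - r))
    return img[y - r:y + r, x - r:x + r].astype(np.float32)

def ncc(a, b):
    """Normalised cross-correlation of two equally sized patches."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def best_match(img_a, img_b, y, x, search=6):
    """Best correlation for the patch around (y, x) within a small search
    window on the second image (a crude stand-in for finding the homologous point)."""
    ref = patch(img_a, y, x)
    return max(ncc(ref, patch(img_b, y + dy, x + dx))
               for dy in range(-search, search + 1)
               for dx in range(-search, search + 1))

def rpm_similarity(img_a, img_b, n_points=100, keep_thresh=0.5, seed=0):
    """Combine how many random points matched well with how well they matched."""
    rng = np.random.default_rng(seed)
    ys = rng.integers(0, img_a.shape[0], n_points)
    xs = rng.integers(0, img_a.shape[1], n_points)
    scores = [best_match(img_a, img_b, int(y), int(x)) for y, x in zip(ys, xs)]
    kept = [s for s in scores if s > keep_thresh]
    return 0.0 if not kept else (len(kept) / n_points) * float(np.mean(kept))
```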


Archive | 2011

Towards Unconstrained Face Recognition Using 3D Face Model

Zahid Riaz; M. Saquib Sarfraz; Michael Beetz

Over the last couple of decades, many commercial systems have become available to identify human faces. However, face recognition is still an outstanding challenge against different kinds of real-world variations, especially facial poses, non-uniform lighting and facial expressions. Meanwhile, face recognition technology has extended its role from biometrics and security applications to human-robot interaction (HRI). Person identity is one of the key tasks while interacting with intelligent machines/robots, exploiting non-intrusive system security and authentication of the human interacting with the system. This capability further helps machines to learn person-dependent traits and interaction behavior and to utilize this knowledge for task manipulation. In such scenarios the acquired face images contain large variations, which demands an unconstrained face recognition system.


International Conference on Computer Vision Theory and Applications | 2008

Head Pose Estimation in Face Recognition Across Pose Scenarios

M. Saquib Sarfraz; Olaf Hellwich


Frontiers of Information Technology | 2009

Bayesian prior models for vehicle make and model recognition

M. Saquib Sarfraz; Ahmed Saeed; M. Haris Khan; Zahid Riaz

Collaboration


Dive into M. Saquib Sarfraz's collaborations.

Top Co-Authors

Olaf Hellwich, Technical University of Berlin
Rainer Stiefelhagen, Karlsruhe Institute of Technology
M. Haris Khan, COMSATS Institute of Information Technology
Muhammad Adnan Siddique, COMSATS Institute of Information Technology
Muhammad Fraz, COMSATS Institute of Information Technology
Ahmed Saeed, COMSATS Institute of Information Technology
Azeem Shahzad, COMSATS Institute of Information Technology
Arne Schumann, Karlsruhe Institute of Technology
David Bornemann, Technical University of Berlin