

Publications


Featured research published by Usman Tariq.


International Conference on Computer Vision | 2012

Multi-view facial expression recognition analysis with generic sparse coding feature

Usman Tariq; Jianchao Yang; Thomas S. Huang

Expression recognition from non-frontal faces is a challenging research area with growing interest. This paper employs a generic sparse coding feature, inspired by object recognition, for multi-view facial expression recognition. Our extensive experiments on face images with seven pan angles and five tilt angles, rendered from the BU-3DFE database, achieve state-of-the-art results. We achieve a recognition rate of 69.1% on all images with four expression intensity levels, and 76.1% on images with the strongest expression intensity. We also present a detailed analysis of how expression recognition performance varies across pose changes.
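
Below is a minimal sketch of a generic sparse coding feature pipeline of the kind described above, assuming grayscale face images, a learned over-complete patch dictionary, and max pooling; the dictionary size, patch size, pooling rule, and function names are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def build_sparse_coding_encoder(train_images, n_atoms=256, patch_size=(8, 8)):
    # Sample local patches from the training face images and learn an
    # over-complete dictionary for L1-regularised sparse coding.
    patches = np.vstack([
        extract_patches_2d(img, patch_size, max_patches=200)
        .reshape(-1, patch_size[0] * patch_size[1])
        for img in train_images
    ]).astype(np.float64)
    patches -= patches.mean(axis=1, keepdims=True)        # remove per-patch DC component
    learner = MiniBatchDictionaryLearning(
        n_components=n_atoms, alpha=1.0, transform_algorithm="lasso_lars"
    ).fit(patches)

    def encode(img):
        # Sparse-code every patch of one face image, then max-pool the absolute
        # codes over patches to obtain a single image-level feature vector.
        p = extract_patches_2d(img, patch_size).reshape(-1, patch_size[0] * patch_size[1])
        p = p.astype(np.float64)
        p -= p.mean(axis=1, keepdims=True)
        return np.abs(learner.transform(p)).max(axis=0)

    return encode
```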


International Conference on Image Processing | 2009

Gender and ethnicity identification from silhouetted face profiles

Usman Tariq; Yuxiao Hu; Thomas S. Huang

This paper presents, to the best of our knowledge, the first attempt at gender and ethnicity identification from silhouetted face profiles using computer vision techniques. The results, obtained on 441 test images, show that silhouetted face profiles carry considerable information, particularly for ethnicity identification. Shape-context-based matching [1] was employed for classification. The test samples were multi-ethnic. Average accuracy was 71.20% for gender and 71.66% for ethnicity; however, accuracy was significantly higher for some classes, such as 83.41% for females (gender identification) and 80.37% for East and South East Asians (ethnicity identification).
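
The following is a rough sketch of shape-context-style matching for silhouetted profiles, assuming each profile has already been sampled into 2D contour points; the bin counts, chi-squared matching cost, Hungarian assignment, and function names are standard shape-context ingredients chosen here for illustration, not the paper's exact configuration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def shape_context(points, n_r=5, n_theta=12):
    # points: (N, 2) samples along one silhouetted profile contour.
    # For each point, build a log-polar histogram of the other points' offsets.
    diff = points[None, :, :] - points[:, None, :]
    dist = np.linalg.norm(diff, axis=-1)
    ang = np.arctan2(diff[..., 1], diff[..., 0])
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1) * dist[dist > 0].mean()
    r_bin = np.clip(np.digitize(dist, r_edges) - 1, 0, n_r - 1)
    t_bin = ((ang + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
    hists = np.zeros((len(points), n_r * n_theta))
    for i in range(len(points)):
        mask = np.arange(len(points)) != i
        np.add.at(hists[i], r_bin[i, mask] * n_theta + t_bin[i, mask], 1)
    return hists / hists.sum(axis=1, keepdims=True)

def profile_match_cost(points_a, points_b):
    # Chi-squared cost between all descriptor pairs, then an optimal one-to-one
    # assignment; a lower mean cost means more similar silhouettes.
    ha, hb = shape_context(points_a), shape_context(points_b)
    cost = 0.5 * ((ha[:, None] - hb[None]) ** 2 / (ha[:, None] + hb[None] + 1e-9)).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()
```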


Pattern Recognition Letters | 2014

Supervised super-vector encoding for facial expression recognition

Usman Tariq; Jianchao Yang; Thomas S. Huang

Expression recognition from faces with varying pose and illumination conditions is a challenging research area with growing interest. In this paper, we develop a novel supervised super-vector encoding framework to learn discriminative image feature representations. The framework is validated on the Multi-PIE and BU-3DFE databases for multi-view facial expression recognition. Extensive experiments show that our supervised framework gives a significant improvement over its unsupervised counterpart and outperforms the state of the art.
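
As a rough illustration of the kind of representation the framework builds on, here is an unsupervised super-vector style encoding over a GMM codebook; the supervised codebook learning described in the paper is not shown, and the scaling constant and function name are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def supervector_encode(local_features, gmm, s=1.0):
    # Encode one image's local features over a fitted GaussianMixture codebook.
    # Each component contributes a constant term (its average responsibility)
    # and a responsibility-weighted mean residual against the component mean.
    post = gmm.predict_proba(local_features)              # (N, K) soft assignments
    parts = []
    for k in range(gmm.n_components):
        pk = post[:, k:k + 1]                              # (N, 1) responsibilities
        mass = pk.mean()
        resid = (pk * (local_features - gmm.means_[k])).mean(axis=0)
        parts.append(np.concatenate([[s * mass], np.sqrt(mass + 1e-9) * resid]))
    return np.concatenate(parts)                           # length K * (d + 1)
```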


IEEE International Conference on Automatic Face and Gesture Recognition | 2013

Maximum margin GMM learning for facial expression recognition

Usman Tariq; Jianchao Yang; Thomas S. Huang

Expression recognition from non-frontal faces is a challenging research area with growing interest. In this paper, we explore discriminative learning of Gaussian Mixture Models for multi-view facial expression recognition. Adopting the bag-of-words (BoW) model from image categorization, we compute image descriptors using Soft Vector Quantization based on a Gaussian Mixture Model. We conduct extensive experiments on recognizing the six universal facial expressions from face images spanning seven pan angles (-45° to +45°) and five tilt angles (-30° to +30°), generated from the BU-3DFE facial expression database. Our results show that, when combined with Spatial Pyramid Matching, our approach not only significantly improves the classification rate over unsupervised training but also outperforms the published state-of-the-art results.
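
A minimal sketch of Soft Vector Quantization over a GMM codebook, as described above: local features are soft-assigned to mixture components and the posteriors are pooled into an image-level descriptor. The codebook size, diagonal covariances, mean pooling, and function names are illustrative assumptions; a spatial pyramid would apply this per spatial cell and concatenate the results.

```python
from sklearn.mixture import GaussianMixture

def fit_codebook(train_local_features, n_words=128):
    # GMM codebook learned over local features pooled from all training images.
    return GaussianMixture(n_components=n_words, covariance_type="diag").fit(train_local_features)

def svq_descriptor(local_features, gmm):
    # Soft Vector Quantization: soft-assign each local feature to every mixture
    # component, then average-pool the posteriors into one image-level histogram.
    posteriors = gmm.predict_proba(local_features)   # shape (N, n_words)
    return posteriors.mean(axis=0)                   # shape (n_words,)
```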


Computer Vision and Pattern Recognition | 2012

Features and fusion for expression recognition — A comparative analysis

Usman Tariq; Thomas S. Huang

This paper examines various low-level features, such as the Local Binary Pattern (LBP), Local Phase Quantization (LPQ), Scale Invariant Feature Transform (SIFT) and Discrete Cosine Transform (DCT), and compares their performance in a subject-independent facial expression recognition setting. We use Soft Vector Quantization (SVQ) to compute image-level descriptors and also compare various pooling methodologies within SVQ. We then perform classification using logistic regression and fuse the likelihoods from the classifiers trained on the various features to reach joint decisions. Our analysis on BU-3DFE shows that SIFT features and mean pooling outperform the other features and pooling strategies, and that classifier fusion improves recognition performance.
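
A hedged sketch of the fusion step: one logistic-regression classifier per feature type, with the per-feature class likelihoods summed into a joint decision. Summing (equivalent to averaging) the likelihoods is an illustrative fusion rule, and the function and variable names are hypothetical.

```python
from sklearn.linear_model import LogisticRegression

def fuse_feature_classifiers(train_sets, y_train, test_sets):
    # train_sets / test_sets: dicts mapping a feature name (e.g. "SIFT", "LBP")
    # to the image-level descriptor matrix of the same images for that feature.
    fused = None
    for name, X_train in train_sets.items():
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        probs = clf.predict_proba(test_sets[name])         # per-feature class likelihoods
        fused = probs if fused is None else fused + probs  # summing ~ average-rule fusion
    return fused.argmax(axis=1)                            # joint decision per test image
```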


Conference on Multimedia Modeling | 2010

Subjective experiments on gender and ethnicity recognition from different face representations

Yuxiao Hu; Yun Fu; Usman Tariq; Thomas S. Huang

The design of image-based soft-biometric systems depends heavily on human-factor analysis. How well can humans recognize gender and ethnicity by looking at faces in different representations? How do humans recognize gender and ethnicity? What factors affect the accuracy of such recognition? The answers to these questions may inspire the design of computer-based automatic gender/ethnicity recognition algorithms. In this work, several subjective experiments are conducted to test human capability in gender/ethnicity recognition from different face representations, including 1D face silhouettes, 2D face images, and 3D face models. Our experimental results provide baselines and useful insights for designing computer-based gender/ethnicity recognition algorithms.


Archive | 2011

Gender and Race Identification by Man and Machine

Usman Tariq; Yuxiao Hu; Thomas S. Huang

This work details a comprehensive study on gender and race identification from different facial representations. Its major contributions are: a comparison of human and machine performance, a qualitative analysis of the use of color for race identification, the combination of different facial views for gender identification, and extensive human experiments on both gender and race recognition from four different facial representations.


IEEE MultiMedia | 2014

Visual Media: History and Perspectives

Thomas S. Huang; Vuong Le; Thomas Paine; Pooya Khorrami; Usman Tariq

In the early days of multimedia research, the first image dataset collected consisted of only four still grayscale images captured by a drum scanner. At the time, digital imaging was only available in laboratories, and digital video barely existed. Half a century later, the amount of visual data has exploded at an unprecedented rate. Images and videos are now created, stored, and used by the majority of the population. In this historical overview, the authors follow the great journey that visual media research has embarked upon by looking at the fundamental scientific and engineering inventions. Through this lens, they show that all three aspects of media (capturing, delivery, and understanding) have developed around interaction with humans, making visual data processing a particularly human-centric field of computing.


Archive | 2016

Face Processing and Applications to Distance Learning

Vuong Le; Usman Tariq; Hao Tang

This special compendium provides a concise and unified vision of facial image processing. It addresses a collection of state-of-the-art techniques, covering the most important areas of facial biometrics and behavior analysis. These techniques also converge to serve an emerging practical application: interactive distance learning. Readers will get a broad picture of the fundamental science of the field along with the technical details that make the research interesting. Moreover, the intellectual investigation motivated by the demands of real-life applications makes this volume an inspiring read for current and prospective researchers and engineers in the fields of computer vision, machine learning, and image processing.


Face and Gesture 2011 | 2011

Emotion recognition from an ensemble of features

Usman Tariq; Kai-Hsiang Lin; Zhen Li; Xi Zhou; Zhaowen Wang; Vuong Le; Thomas S. Huang; Xutao Lv; Tony X. Han

Collaboration


Top co-authors of Usman Tariq and their affiliations.

Tony X. Han, University of Missouri
Xutao Lv, University of Missouri
Xi Zhou, Chinese Academy of Sciences
Yun Fu, Northeastern University