Publication


Featured research published by Muhammad Hameed Siddiqi.


IEEE Transactions on Image Processing | 2015

Human Facial Expression Recognition Using Stepwise Linear Discriminant Analysis and Hidden Conditional Random Fields

Muhammad Hameed Siddiqi; Rahman Ali; Adil Mehmood Khan; Young-Tack Park; Sungyoung Lee

This paper introduces an accurate and robust facial expression recognition (FER) system. For feature extraction, the proposed FER system employs stepwise linear discriminant analysis (SWLDA). SWLDA selects localized features from the expression frames using partial F-test values, thereby reducing the within-class variance and increasing the between-class variance among different expression classes. For recognition, the hidden conditional random fields (HCRFs) model is utilized. HCRF is capable of approximating a complex distribution using a mixture of Gaussian density functions. To achieve optimum results, the system employs a hierarchical recognition strategy. Under these settings, expressions are divided into three categories based on the parts of the face that contribute most toward an expression. During recognition, at the first level, SWLDA and HCRF are employed to recognize the expression category; at the second level, the label for the expression within the recognized category is determined using a separate set of SWLDA and HCRF models trained just for that category. To validate the system, four publicly available data sets were used, and a total of four experiments were performed. The weighted average recognition rate for the proposed FER approach was 96.37% across the four data sets, a significant improvement over existing FER methods.
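The partial F-test at the heart of SWLDA builds on the one-way ANOVA F-statistic, which scores a single feature by the ratio of its between-class to within-class variance. The sketch below illustrates only that scoring step, with invented toy data; it is not the authors' implementation.

```python
def anova_f(feature_values, labels):
    """F = (between-class variance) / (within-class variance) for one feature."""
    groups = {}
    for x, y in zip(feature_values, labels):
        groups.setdefault(y, []).append(x)
    n, k = len(feature_values), len(groups)
    grand_mean = sum(feature_values) / n
    # Between-class sum of squares (k - 1 degrees of freedom)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups.values())
    # Within-class sum of squares (n - k degrees of freedom)
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups.values() for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# A feature that separates two expression classes cleanly scores high;
# a feature whose values overlap across classes scores low.
well_separated = anova_f([1.0, 1.1, 0.9, 5.0, 5.1, 4.9], [0, 0, 0, 1, 1, 1])
overlapping = anova_f([1.0, 5.0, 3.0, 1.1, 4.9, 3.1], [0, 0, 0, 1, 1, 1])
```

In a stepwise scheme, features would then be added (or removed) one at a time depending on whether their partial F value exceeds an entry (or exit) threshold.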


Multimedia Systems | 2015

Facial expression recognition using active contour-based face detection, facial movement-based feature extraction, and non-linear feature selection

Muhammad Hameed Siddiqi; Rahman Ali; Adil Mehmood Khan; Eun Soo Kim; Gerard Junghyun Kim; Sungyoung Lee

Knowledge about people's emotions can serve as an important context for automatic service delivery in context-aware systems. Hence, human facial expression recognition (FER) has emerged as an important research area over the last two decades. To accurately recognize expressions, FER systems require automatic face detection followed by the extraction of robust features from important facial parts. Furthermore, the process should be less susceptible to noise, such as different lighting conditions and variations in the facial characteristics of subjects. Accordingly, this work implements a robust FER system capable of providing high recognition accuracy even in the presence of the aforementioned variations. The system uses an unsupervised technique based on an active contour model for automatic face detection and extraction. In this model, a combination of two energy functions, the Chan–Vese energy and the Bhattacharyya distance, is employed to minimize the dissimilarities within a face and maximize the distance between the face and the background. Next, noise reduction is achieved by means of wavelet decomposition, followed by the extraction of facial movement features using optical flow. These features reflect facial muscle movements that signify the static, dynamic, geometric, and appearance characteristics of facial expressions. After feature extraction, feature selection is performed using stepwise linear discriminant analysis, which is more robust than previously employed feature selection methods for FER. Finally, expressions are recognized using trained HMMs. To show the robustness of the proposed system, unlike most previous works, which were evaluated on a single dataset, its performance is assessed in a large-scale experiment using five different publicly available datasets. The weighted average recognition rate across these datasets indicates the success of employing the proposed system for FER.
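As an illustration of one ingredient of the segmentation energy, the sketch below computes the Bhattacharyya distance between two normalized intensity histograms; maximizing this distance is what separates the face region from the background. The epsilon clamp and the toy histograms are my additions, not from the paper.

```python
import math

def bhattacharyya_distance(p, q):
    """Distance between two normalized histograms; 0 means identical."""
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))  # Bhattacharyya coefficient
    return -math.log(max(bc, 1e-12))  # clamp avoids log(0) for disjoint histograms

# Identical intensity histograms give distance 0; histograms with little
# overlap (face vs. background) give a large distance.
same = bhattacharyya_distance([0.25, 0.25, 0.25, 0.25], [0.25, 0.25, 0.25, 0.25])
separated = bhattacharyya_distance([0.6, 0.4, 0.0, 0.0], [0.0, 0.0, 0.4, 0.6])
```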


Sensors | 2013

Hierarchical Recognition Scheme for Human Facial Expression Recognition Systems

Muhammad Hameed Siddiqi; Sungyoung Lee; Young-Koo Lee; Adil Mehmood Khan; Phan Tran Ho Truc

Over the last decade, human facial expression recognition (FER) has emerged as an important research area. Several factors make FER a challenging research problem. These include varying light conditions in training and test images; the need for automatic and accurate face detection before feature extraction; and high similarity among different expressions, which makes it difficult to distinguish them with high accuracy. This work implements a hierarchical linear discriminant analysis-based facial expression recognition (HL-FER) system to tackle these problems. Unlike previous systems, the HL-FER uses a pre-processing step to eliminate lighting effects, incorporates a new automatic face detection scheme, employs methods to extract both global and local features, and utilizes a hierarchical recognition scheme to overcome the problem of high similarity among different expressions. Unlike most previous works, which were evaluated using a single dataset, the performance of the HL-FER is assessed using three publicly available datasets under three different experimental settings: n-fold cross-validation based on subjects for each dataset separately; n-fold cross-validation across datasets; and, finally, a last set of experiments to assess the effectiveness of each module of the HL-FER separately. A weighted average recognition accuracy of 98.7% across the three datasets, using three classifiers, indicates the success of employing the HL-FER for human FER.
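The two-level recognition idea can be sketched as follows; the category names, features, and threshold "classifiers" are invented stand-ins for illustration, not the paper's trained models.

```python
# Level 1 picks a coarse expression category; level 2 applies a classifier
# trained only for that category to pick the final expression label.

def hierarchical_classify(features, category_clf, per_category_clfs):
    category = category_clf(features)             # level 1: coarse category
    return per_category_clfs[category](features)  # level 2: final expression

def category_clf(f):
    return "lip-based" if f["mouth_motion"] > f["eye_motion"] else "eye-based"

per_category_clfs = {
    "lip-based": lambda f: "happy" if f["mouth_corner_up"] else "surprise",
    "eye-based": lambda f: "fear" if f["eye_opening"] > 0.5 else "sad",
}

smile = hierarchical_classify(
    {"mouth_motion": 0.8, "eye_motion": 0.2, "mouth_corner_up": True, "eye_opening": 0.4},
    category_clf, per_category_clfs)
frown = hierarchical_classify(
    {"mouth_motion": 0.1, "eye_motion": 0.9, "mouth_corner_up": False, "eye_opening": 0.3},
    category_clf, per_category_clfs)
```

The benefit of the hierarchy is that each second-level classifier only has to separate the few, highly similar expressions within its own category.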


Computers in Biology and Medicine | 2016

Multimodal hybrid reasoning methodology for personalized wellbeing services

Rahman Ali; Muhammad Afzal; Maqbool Hussain; Maqbool Ali; Muhammad Hameed Siddiqi; Sungyoung Lee; Byeong Ho Kang

A wellness system provides wellbeing recommendations to support experts in promoting a healthier lifestyle and inducing individuals to adopt healthy habits. Physical activity is an effective way to promote a healthier lifestyle, and a physical activity recommendation system assists users in adopting healthy daily routines. Traditional physical activity recommendation systems focus on general recommendations applicable to a community of users rather than to specific individuals. Such recommendations fit the community at a certain level, but they are not relevant to every individual's specific requirements and personal interests. To cover this aspect, we propose a multimodal hybrid reasoning methodology (HRM) that generates personalized physical activity recommendations according to the user's specific needs and personal interests. The methodology integrates rule-based reasoning (RBR), case-based reasoning (CBR), and preference-based reasoning (PBR) in a linear combination that enables personalization of recommendations. RBR uses explicit knowledge rules from physical activity guidelines, CBR uses implicit knowledge from experts' past experiences, and PBR uses users' personal interests and preferences. To validate the methodology, a weight management scenario is considered and experimented with. The RBR part of the methodology generates goal, weight status, and plan recommendations; the CBR part suggests the top three relevant physical activities for executing the recommended plan; and the PBR part filters out irrelevant recommendations using the user's personal preferences and interests. To evaluate the methodology, a baseline-RBR system is developed, which is improved first using ranged rules and ultimately using a hybrid-CBR. A comparison of the results shows that hybrid-CBR outperforms the modified-RBR and baseline-RBR systems, yielding a recall of 0.94, a precision of 0.97, an f-score of 0.95, and low Type I and Type II errors.


Sensors | 2015

H2RM: A Hybrid Rough Set Reasoning Model for Prediction and Management of Diabetes Mellitus

Rahman Ali; Jamil Hussain; Muhammad Hameed Siddiqi; Maqbool Hussain; Sungyoung Lee

Diabetes is a chronic disease characterized by a high blood glucose level that results either from a deficiency of insulin produced by the body or from the body's resistance to the effects of insulin. Accurate and precise reasoning and prediction models greatly help physicians improve the diagnosis, prognosis, and treatment of different diseases. Though numerous models have been proposed to address the diagnosis and management of diabetes, they have the following drawbacks: (1) they are restricted to one type of diabetes; (2) they lack understandability and explanatory power in their techniques and decisions; (3) they are limited either to prediction or to management over structured contents; and (4) they cannot cope with the dimensionality and vagueness of patient data. To overcome these issues, this paper proposes a novel hybrid rough set reasoning model (H2RM) that addresses inaccurate prediction and management of type-1 diabetes mellitus (T1DM) and type-2 diabetes mellitus (T2DM). To verify the proposed model, experimental data from fifty patients, acquired from a local hospital in semi-structured format, is used. First, the data is transformed into structured format and then used for mining prediction rules. Rough set theory (RST) based techniques and algorithms are used to mine the prediction rules. During the online execution phase of the model, these rules are used to predict T1DM and T2DM for new patients. Furthermore, the proposed model assists physicians in managing diabetes using knowledge extracted from online diabetes guidelines. Correlation-based trend analysis techniques are used to manage diabetic observations. Experimental results demonstrate that the proposed model outperforms existing methods with 95.9% average and balanced accuracies.
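A minimal sketch of the online rule-application step, assuming rules of the attribute-condition form that RST mining typically produces. The attributes, thresholds, and labels below are invented for illustration and are NOT the clinically mined rules from the paper.

```python
# Each rule is a set of per-attribute conditions plus a decision label;
# prediction returns the label of the first rule that fully matches.
RULES = [
    ({"fasting_glucose": lambda v: v >= 126, "age_at_onset": lambda v: v < 30}, "T1DM"),
    ({"fasting_glucose": lambda v: v >= 126, "bmi": lambda v: v >= 25}, "T2DM"),
]

def predict(patient):
    """Return the label of the first rule whose conditions all hold."""
    for conditions, label in RULES:
        if all(check(patient[attr]) for attr, check in conditions.items()):
            return label
    return "no-prediction"
```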


Sensors | 2014

Video-Based Human Activity Recognition Using Multilevel Wavelet Decomposition and Stepwise Linear Discriminant Analysis

Muhammad Hameed Siddiqi; Rahman Ali; Md. Sohel Rana; Een-Kee Hong; Eun Soo Kim; Sungyoung Lee

Video-based human activity recognition (HAR) is the analysis of human motions and behaviors from low-level sensor data. Over the last decade, automatic HAR has been an active research area and is considered a significant concern in the fields of computer vision and pattern recognition. In this paper, we present a robust and accurate activity recognition system, called WS-HAR, that consists of a wavelet transform coupled with stepwise linear discriminant analysis (SWLDA), followed by a hidden Markov model (HMM). A symlet wavelet is employed to extract features from the activity frames. The most prominent features are selected by SWLDA, which selects localized features from the activity frames and discriminates their classes based on regression values (i.e., partial F-test values). Finally, a well-known sequential classifier, the hidden Markov model (HMM), assigns the appropriate labels to the activities. To validate the performance of WS-HAR, we utilized two publicly available standard datasets under an n-fold cross-validation scheme based on subjects, and a set of experiments was performed to show the effectiveness of each approach. The weighted average recognition rate for WS-HAR was 97% across the two datasets, a significant improvement in classification accuracy compared with existing well-known statistical and state-of-the-art methods.
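The multilevel wavelet decomposition can be illustrated with the simplest wavelet, the Haar wavelet (the paper uses a symlet, but the recursive structure is the same): each level splits the signal into averaged approximation coefficients and half-difference detail coefficients, then recurses on the approximation.

```python
def haar_step(signal):
    """One decomposition level: pairwise averages (approximation) and
    pairwise half-differences (detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def multilevel_haar(signal, levels):
    """Repeatedly decompose the approximation; collect details per level."""
    details = []
    for _ in range(levels):
        signal, d = haar_step(signal)
        details.append(d)
    return signal, details

# Two-level decomposition of a toy 8-sample "activity" signal.
approx, details = multilevel_haar([4, 6, 10, 12, 8, 8, 2, 0], 2)
```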


Multimedia Tools and Applications | 2016

Human facial expression recognition using curvelet feature extraction and normalized mutual information feature selection

Muhammad Hameed Siddiqi; Rahman Ali; Muhammad Idris; Adil Mehmood Khan; Eun Soo Kim; Mincheol Whang; Sungyoung Lee

To recognize expressions accurately, facial expression recognition systems require robust feature extraction and feature selection methods. In this paper, a normalized mutual information based feature selection technique is proposed for FER systems. The technique is derived from an existing method, the max-relevance and min-redundancy (mRMR) method. We propose, however, to normalize the mutual information used in this method so that the domination of either the relevance or the redundancy term is eliminated. For feature extraction, the curvelet transform is used. After feature extraction and selection, the feature space is reduced by employing linear discriminant analysis (LDA). Finally, a hidden Markov model (HMM) is used to recognize the expressions. The proposed FER system (CNF-FER) is validated using four publicly available standard datasets. For each dataset, a 10-fold cross-validation scheme is utilized. CNF-FER outperformed existing well-known statistical and state-of-the-art methods, achieving a weighted average recognition rate of 99% across all the datasets.
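A minimal sketch of normalized mutual information for discrete features: mutual information is divided by an entropy term so that scores fall in [0, 1] and the relevance and redundancy terms of an mRMR-style criterion become comparable. The exact normalization in CNF-FER may differ; dividing by the larger marginal entropy, as below, is one common choice.

```python
import math
from collections import Counter

def entropy(xs):
    n = len(xs)
    return -sum((c / n) * math.log(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def normalized_mi(xs, ys):
    """MI scaled into [0, 1] by the larger marginal entropy."""
    h = max(entropy(xs), entropy(ys))
    return mutual_information(xs, ys) / h if h > 0 else 0.0

# A feature identical to the class label is maximally relevant (NMI = 1);
# an independent feature carries no information about it (NMI = 0).
redundant = normalized_mi([0, 0, 1, 1], [0, 0, 1, 1])
independent = normalized_mi([0, 0, 1, 1], [0, 1, 0, 1])
```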


Artificial Intelligence Review | 2015

Rough set-based approaches for discretization: a compact review

Rahman Ali; Muhammad Hameed Siddiqi; Sungyoung Lee

The extraction of knowledge from a huge volume of data using rough set methods requires the transformation of continuous-value attributes into discrete intervals. This paper presents a systematic study of the rough set-based discretization (RSBD) techniques found in the literature and categorizes them into a taxonomy. No existing review is devoted solely to RSBD: only a few rough set discretizers have been studied, while many new developments have been overlooked and need to be highlighted. Therefore, this study presents a formal taxonomy that provides a useful roadmap for new researchers in the area of RSBD. The review also elaborates the process of RSBD with the help of a case study. The study of the existing literature focuses on the technique adopted in each article, its comparison with other similar approaches, the number of discrete intervals produced as output, the effects on classification, and the application of the technique in a domain. The techniques adopted in each article form the foundation of the taxonomy. Moreover, a detailed analysis of the existing discretization techniques has been conducted while keeping RSBD applications in mind. The findings are summarized and presented in this paper.
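Most rough set discretizers start from candidate cut points placed between adjacent attribute values that belong to different decision classes; a reduct-style search then keeps a minimal subset of cuts that preserves discernibility. The sketch below shows only the candidate-generation step, with invented data.

```python
def candidate_cuts(values, labels):
    """Midpoints between adjacent sorted values whose decision classes differ."""
    pairs = sorted(zip(values, labels))
    return [(v1 + v2) / 2
            for (v1, c1), (v2, c2) in zip(pairs, pairs[1:])
            if c1 != c2 and v1 != v2]

# One class boundary in a toy continuous attribute yields one cut;
# two boundaries yield two cuts.
cuts_one = candidate_cuts([1, 2, 3, 10, 11], [0, 0, 0, 1, 1])
cuts_two = candidate_cuts([1, 2, 5, 6, 9], [0, 1, 1, 0, 0])
```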


Journal of Information Science and Engineering | 2014

Weed Image Classification using Wavelet Transform, Stepwise Linear Discriminant Analysis, and Support Vector Machines for an Automatic Spray Control System

Muhammad Hameed Siddiqi; Seok-Won Lee; Adil Mehmood Khan

We tested and validated the accuracy of the wavelet transform, together with stepwise linear discriminant analysis (SWLDA) and support vector machines (SVMs), for crop/weed classification in real-time selective herbicide systems. Unlike previous systems, the proposed algorithm involves a pre-processing step that helps eliminate lighting effects to ensure high accuracy in real-life scenarios. We tested a large group of wavelets (46) and decomposed them up to four levels to classify weed images into broad-leaf and narrow-leaf classes. SWLDA was then employed to reduce the feature space by extracting only the most meaningful features. Finally, the features provided by SWLDA were fed to the SVMs for classification. The proposed method was tested on a database of 1200 samples, a much larger database than those studied previously (200-400 samples). Using confusion matrices, the crop/weed classification results obtained using different wavelets at different decomposition levels were compared, and the approach was also compared with existing techniques that use statistical and structural approaches. The overall classification accuracy obtained using the symlet wavelet family was 98.1%. These results represent an improvement of 14% in performance compared with existing techniques.


Iete Technical Review | 2014

Depth Camera-Based Facial Expression Recognition System Using Multilayer Scheme

Muhammad Hameed Siddiqi; Rahman Ali; Abdul Sattar; Adil Mehmood Khan; Sungyoung Lee

The analysis of facial expressions in telemedicine and healthcare plays a significant role in providing information about patients, such as stroke and cardiac patients, by monitoring their expressions for better management of their diseases. Facial expression recognition (FER) improves the level of interaction in human-to-human communication. The human face contributes heavily to such communication; its lips, eyes, and forehead are considered the most informative regions for FER. Several factors make FER a challenging task, including the high similarity among different expressions, which makes them difficult to distinguish with high accuracy. Moreover, most previous works used existing standard datasets, all of which are pose-based, and these raise privacy issues because they were collected with video (RGB) cameras. Accordingly, this work presents a multilayer scheme for FER to handle these issues. In the proposed FER system, we utilized a depth camera to address the privacy concerns; the accuracy of this camera is not affected by environmental parameters. The depth camera also automatically detects and extracts faces based on the distance between the camera and the subject. For global and local feature extraction, principal component analysis (PCA) and independent component analysis (ICA) were used. A hierarchical classifier was used, where the expression category is recognized at the first level, followed by the actual expression recognition at the second level. For all experiments, an n-fold cross-validation scheme (based on subjects) was employed. The proposed FER system achieved a significant improvement in accuracy (98.0%) against existing methods.

Collaboration


Dive into Muhammad Hameed Siddiqi's collaborations.

Top Co-Authors


Sungyoung Lee

Seoul National University
