Naimul Mefraz Khan
Ryerson University
Publication
Featured research published by Naimul Mefraz Khan.
Pattern Recognition | 2012
Naimul Mefraz Khan; Riadh Ksantini; Imran Shafiq Ahmad; Boubakeur Boufama
Support vector machine (SVM) is a powerful classification methodology in which the support vectors fully describe the decision surface by incorporating local information. Nonparametric discriminant analysis (NDA), on the other hand, is an improvement over linear discriminant analysis (LDA) in which the normality assumption is relaxed; NDA also detects the dominant normal directions to the decision plane. This paper introduces a novel SVM+NDA model which can be viewed as an extension to the SVM that incorporates partially global information, specifically discriminatory information in the normal direction to the decision boundary. It can equally be viewed as an extension to the NDA in which the support vectors improve the choice of k-nearest neighbors on the decision boundary by incorporating local information. Being an extension to both SVM and NDA, it can deal with heteroscedastic and non-normal data, and it avoids the small sample size problem. Moreover, it can be reduced to the classical SVM model, so that existing SVM software can be reused. A kernel extension of the model, called KSVM+KNDA, is also proposed to deal with nonlinear problems. We have carried out an extensive comparison of SVM+NDA against LDA, SVM, heteroscedastic LDA (HLDA), NDA and a combined SVM and LDA on artificial, real and face recognition data sets. Results for KSVM+KNDA are also presented. These comparisons demonstrate the advantages of the proposed model.
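The abstract notes that SVM+NDA can be reduced to the classical SVM model so that existing software can be reused. The sketch below is only a rough illustration of that idea, not the paper's formulation: it estimates a nonparametric between-class scatter from k-nearest-neighbor class means (the NDA ingredient) and feeds a correspondingly rescaled version of the data to a standard scikit-learn SVM. The function names (nda_between_scatter, fit_svm_plus_nda) and the blending parameter alpha are illustrative assumptions.

```python
# Rough illustration only (not the paper's SVM+NDA formulation): estimate a
# nonparametric between-class scatter from k-NN class means, rescale the data
# along its eigen-directions, and reuse an off-the-shelf SVM solver.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

def nda_between_scatter(X, y, k=5):
    """Nonparametric between-class scatter: each sample is compared against the
    mean of its k nearest neighbors from the other class instead of a global mean."""
    Sb = np.zeros((X.shape[1], X.shape[1]))
    for cls in np.unique(y):
        own, other = X[y == cls], X[y != cls]
        k_eff = min(k, len(other))
        nn = NearestNeighbors(n_neighbors=k_eff).fit(other)
        _, idx = nn.kneighbors(own)
        diffs = own - other[idx].mean(axis=1)   # deviation from local k-NN means
        Sb += diffs.T @ diffs
    return Sb / len(X)

def fit_svm_plus_nda(X, y, k=5, alpha=0.5, C=1.0):
    # Emphasize the dominant discriminant (normal) directions, then train a
    # standard linear SVM on the rescaled data.
    Sb = nda_between_scatter(X, y, k)
    evals, evecs = np.linalg.eigh(Sb)
    W = evecs @ np.diag(1.0 + alpha * evals / (evals.max() + 1e-12)) @ evecs.T
    return SVC(kernel="linear", C=C).fit(X @ W, y), W

# usage: clf, W = fit_svm_plus_nda(X_train, y_train); y_pred = clf.predict(X_test @ W)
```

The eigen-decomposition simply stretches the input along the estimated discriminant directions before the off-the-shelf solver runs, which is the spirit of the abstract's point about reusing existing SVM software.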
Pattern Recognition | 2014
Naimul Mefraz Khan; Riadh Ksantini; Imran Shafiq Ahmad; Ling Guan
In one-class classification, the low variance directions in the training data carry crucial information for building a good model of the target class. Boundary-based methods like the One-Class Support Vector Machine (OSVM) preferentially separate the data from outliers along the large variance directions. On the other hand, retaining only the low variance directions can sacrifice some properties of the original data and is not desirable, especially in the case of limited training samples. This paper introduces a Covariance-guided One-Class Support Vector Machine (COSVM) classification method which emphasizes the low variance projectional directions of the training data without compromising any important characteristics. COSVM improves upon the OSVM method by controlling the direction of the separating hyperplane through incorporation of the covariance matrix estimated from the training data. The proposed method is a convex optimization problem with a single global optimum that can be solved efficiently with existing numerical methods. The method also keeps the principal structure of the OSVM method intact and can be implemented easily with existing OSVM libraries. Comparative experimental results with contemporary one-class classifiers on numerous artificial and benchmark datasets demonstrate that our method yields significantly better classification performance.
Highlights: The low-variance directions are crucial for one-class classification (OCC). A new OCC method emphasizing the low-variance directions is proposed. The method incorporates covariance information into a convex optimization problem. It can be implemented and solved efficiently with existing software. Comparative experiments with contemporary classifiers show positive results.
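As a hedged approximation of the covariance-guided idea (the paper folds the covariance matrix into the OSVM optimization itself, which is not reproduced here), the sketch below whitens the training data with the inverse square root of its estimated covariance, which stretches the low-variance directions, and then trains a standard scikit-learn OneClassSVM on the transformed data. The class name CovarianceGuidedOSVM and the regularizer reg are assumptions for illustration.

```python
# Approximation of the covariance-guided idea (not the paper's COSVM model):
# whitening with the inverse covariance square root stretches low-variance
# directions, so a standard one-class SVM trained on the transformed data is
# pushed to separate the target class along them.
import numpy as np
from sklearn.svm import OneClassSVM

class CovarianceGuidedOSVM:
    def __init__(self, nu=0.1, gamma="scale", reg=1e-6):
        self.nu, self.gamma, self.reg = nu, gamma, reg

    def fit(self, X):
        self.mean_ = X.mean(axis=0)
        cov = np.cov(X - self.mean_, rowvar=False) + self.reg * np.eye(X.shape[1])
        evals, evecs = np.linalg.eigh(cov)
        self.W_ = evecs @ np.diag(evals ** -0.5) @ evecs.T   # inverse sqrt of covariance
        self.osvm_ = OneClassSVM(kernel="rbf", nu=self.nu, gamma=self.gamma)
        self.osvm_.fit((X - self.mean_) @ self.W_)
        return self

    def predict(self, X):
        return self.osvm_.predict((X - self.mean_) @ self.W_)

# usage: model = CovarianceGuidedOSVM(nu=0.05).fit(X_target)
#        labels = model.predict(X_test)   # +1 inlier, -1 outlier
```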
International Symposium on Multimedia | 2014
Naimul Mefraz Khan; Stephen Lin; Ling Guan; Baining Guo
We propose a novel method for in-home physical rehabilitation, where a user can visually evaluate his or her performance compared to that of an expert. Normalized joint coordinates extracted from the Kinect skeleton are used as features. A novel Incremental Dynamic Time Warping (IDTW) algorithm is used to align the user and expert sequences. IDTW extends classic DTW by providing accurate comparison between incomplete (the user's) and complete (the expert's) sequences while significantly reducing the computational time. Instead of providing a single measurement, the proposed method maps the IDTW measurements to a color-coded skeleton frame, where different colors on the limbs provide the user with an easy-to-interpret evaluation of how he or she is performing. Preliminary analysis involving different users and exercises, together with comparisons against the classic DTW algorithm, shows the effectiveness of the proposed method.
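The paper's exact IDTW algorithm is not given in this summary, so the following is only a minimal open-end DTW sketch of the same idea: each incoming user frame appends one row to the cumulative-cost matrix in time linear in the expert sequence length, and the running score is taken over the best-matching prefix of the complete expert sequence. The class name IncrementalDTW and the per-frame Euclidean cost are illustrative assumptions.

```python
# Minimal open-end DTW sketch inspired by the abstract's IDTW idea (not the
# authors' exact algorithm): each new user frame adds a single row to the
# cumulative-cost matrix, and the partial user sequence is matched against
# the best-fitting prefix of the complete expert sequence.
import numpy as np

class IncrementalDTW:
    def __init__(self, expert):                  # expert: (T_expert, n_features) array
        self.expert = np.asarray(expert, dtype=float)
        self.prev = None                         # last cumulative-cost row

    def add_frame(self, user_frame):
        """Feed one new user frame; return the current alignment cost."""
        cost = np.linalg.norm(self.expert - np.asarray(user_frame, dtype=float), axis=1)
        row = np.empty_like(cost)
        if self.prev is None:
            row[:] = np.cumsum(cost)             # first frame aligns to an expert prefix
        else:
            row[0] = self.prev[0] + cost[0]
            for j in range(1, len(cost)):
                row[j] = cost[j] + min(row[j - 1], self.prev[j], self.prev[j - 1])
        self.prev = row
        return row.min()                         # open end: best expert prefix so far

# usage: idtw = IncrementalDTW(expert_joints)
#        for frame in live_user_joints:
#            score = idtw.add_frame(frame)       # lower = closer to the expert so far
```

In the spirit of the color-coded skeleton, the same routine could be run separately per limb so that each running score maps to a color, though that part is omitted here.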
Signal, Image and Video Processing | 2014
Naimul Mefraz Khan; Riadh Ksantini; Imran Shafiq Ahmad; Ling Guan
This paper introduces a novel sparse nonparametric support vector machine classifier (SN-SVM) which combines data distribution information from two state-of-the-art kernel-based classifiers, namely, the kernel support vector machine (KSVM) and the kernel nonparametric discriminant (KND). The proposed model incorporates some near-global variations of the data provided by the KND and, hence, may be viewed as an extension to the KSVM. Similarly, since the support vectors improve the choice of κ-nearest neighbors (κ-NN) …
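The abstract's remark that support vectors improve the choice of κ-nearest neighbors can be illustrated loosely (this is not the SN-SVM model itself) by letting a first-pass kernel SVM pick out boundary points and then building the nonparametric scatter from κ-nearest neighbors drawn from those support vectors. The helper name knd_scatter_from_support_vectors and the default kappa are assumptions; the sketch assumes binary labels stored in NumPy arrays.

```python
# Hedged illustration only (not the SN-SVM formulation): a first-pass kernel
# SVM identifies support vectors near the decision boundary, and the
# nonparametric scatter is then built from kappa-nearest neighbors drawn from
# those support vectors, echoing the abstract's kappa-NN remark.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import NearestNeighbors

def knd_scatter_from_support_vectors(X, y, kappa=3, C=1.0, gamma="scale"):
    svm = SVC(kernel="rbf", C=C, gamma=gamma).fit(X, y)
    sv, sv_y = svm.support_vectors_, y[svm.support_]
    Sb = np.zeros((X.shape[1], X.shape[1]))
    for cls in np.unique(y):
        own = X[y == cls]
        other_sv = sv[sv_y != cls]               # boundary points of the opposite class
        k = min(kappa, len(other_sv))
        nn = NearestNeighbors(n_neighbors=k).fit(other_sv)
        _, idx = nn.kneighbors(own)
        diffs = own - other_sv[idx].mean(axis=1)  # deviation from local boundary means
        Sb += diffs.T @ diffs
    return Sb / len(X)

# usage: Sb = knd_scatter_from_support_vectors(X_train, y_train, kappa=5)
```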
WSOM | 2013
Naimul Mefraz Khan; Matthew J. Kyan; Ling Guan
IEEE International Conference on Automatic Face and Gesture Recognition | 2015
Naimul Mefraz Khan; Xiaoming Nan; Azhar Quddus; Edward Rosales; Ling Guan
Neurocomputing | 2015
Naimul Mefraz Khan; Matthew J. Kyan; Ling Guan
International Conference on Artificial Neural Networks | 2012
Naimul Mefraz Khan; Riadh Ksantini; Imran Shafiq Ahmad; Ling Guan
Advances in Mobile Multimedia | 2009
Naimul Mefraz Khan; Imran Shafiq Ahmad
International Conference on Multimedia and Expo | 2016
Naimul Mefraz Khan; Xiaoming Nan; Nan Dong; Yifeng He; Matthew Kyan; Jennifer James; Ling Guan; Charles H. Davis