Riadh Ksantini
University of Windsor
Publications
Featured research published by Riadh Ksantini.
Pattern Recognition | 2012
Naimul Mefraz Khan; Riadh Ksantini; Imran Shafiq Ahmad; Boubakeur Boufama
Support vector machine (SVM) is a powerful classification methodology in which the support vectors fully describe the decision surface by incorporating local information. On the other hand, nonparametric discriminant analysis (NDA) is an improvement over linear discriminant analysis (LDA) in which the normality assumption is relaxed. NDA also detects the dominant normal directions to the decision plane. This paper introduces a novel SVM+NDA model, which can be viewed as an extension of the SVM that incorporates partially global information, specifically the discriminatory information in the normal direction to the decision boundary. It can also be considered an extension of NDA in which the support vectors improve the choice of the k-nearest neighbors on the decision boundary by incorporating local information. Being an extension of both SVM and NDA, it can deal with heteroscedastic and non-normal data, and it avoids the small sample size problem. Moreover, it can be reduced to the classical SVM model, so that existing software can be used. A kernel extension of the model, called KSVM+KNDA, is also proposed to deal with nonlinear problems. We have carried out an extensive comparison of SVM+NDA to LDA, SVM, heteroscedastic LDA (HLDA), NDA, and the combined SVM and LDA on artificial, real, and face recognition data sets. Results for KSVM+KNDA are also presented. These comparisons demonstrate the advantages and superiority of our proposed model.
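As a rough illustration of the reduction to a classical SVM mentioned in the abstract, the sketch below folds a quadratic "spread" regularizer into a standard linear SVM via a change of variables. The scatter matrix built here from cross-class k-NN differences is a simplified stand-in for the NDA normal-direction information, not the authors' exact construction, and the objective weights `lam` and `C` are assumed for illustration.

```python
# Hedged sketch: solving min_w (1-lam)/2 ||w||^2 + lam/2 w^T S w + soft-margin loss
# with an off-the-shelf linear SVM, by whitening the data with M = (1-lam)I + lam*S.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.neighbors import NearestNeighbors

def nda_like_scatter(X, y, k=5):
    """Crude scatter built from k-NN differences across classes (stand-in for NDA)."""
    d = X.shape[1]
    S = np.zeros((d, d))
    for c in np.unique(y):
        own, other = X[y == c], X[y != c]
        nn = NearestNeighbors(n_neighbors=k).fit(other)
        _, idx = nn.kneighbors(own)
        diffs = (own[:, None, :] - other[idx]).reshape(-1, d)
        S += diffs.T @ diffs / len(diffs)
    return S / np.trace(S)                              # normalize overall scale

def fit_svm_plus_scatter(X, y, lam=0.5, C=1.0):
    S = nda_like_scatter(X, y)
    M = (1 - lam) * np.eye(X.shape[1]) + lam * S         # positive definite for lam < 1
    L = np.linalg.cholesky(M)                            # M = L L^T
    Z = X @ np.linalg.inv(L).T                           # z = L^{-1} x, so w^T x = v^T z
    clf = LinearSVC(C=C).fit(Z, y)                       # standard SVM in whitened space
    w = np.linalg.solve(L.T, clf.coef_.ravel())          # map back: w = L^{-T} v
    return w, clf.intercept_[0]
```

With this change of variables the transformed problem is an ordinary soft-margin SVM, which is the sense in which existing SVM software can be reused.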
Pattern Recognition | 2014
Naimul Mefraz Khan; Riadh Ksantini; Imran Shafiq Ahmad; Ling Guan
In one-class classification, the low-variance directions in the training data carry crucial information for building a good model of the target class. Boundary-based methods like the One-Class Support Vector Machine (OSVM) preferentially separate the data from outliers along the large-variance directions. On the other hand, retaining only the low-variance directions can sacrifice some properties of the original data and is not desirable, especially in the case of limited training samples. This paper introduces a Covariance-guided One-Class Support Vector Machine (COSVM) classification method which emphasizes the low-variance projection directions of the training data without compromising any important characteristics. COSVM improves upon the OSVM method by controlling the direction of the separating hyperplane through incorporation of the covariance matrix estimated from the training data. Our proposed method is a convex optimization problem with a single global optimum that can be solved efficiently with existing numerical methods. The method also keeps the principal structure of the OSVM method intact and can be implemented easily with existing OSVM libraries. Comparative experimental results with contemporary one-class classifiers on numerous artificial and benchmark datasets demonstrate that our method yields significantly better classification performance.
Highlights: The low-variance directions are crucial for one-class classification (OCC). A new method of OCC emphasizing the low-variance directions is proposed. The method incorporates covariance information into a convex optimization problem. It can be implemented and solved efficiently with existing software. Comparative experiments with contemporary classifiers show positive results.
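The intuition behind emphasizing low-variance directions can be mimicked, very roughly, by rescaling the data with the inverse square root of the training covariance before a standard OSVM, as in the sketch below. This is only an approximation of the idea, under assumed parameters `nu`, `gamma`, and `reg`; the actual COSVM embeds the covariance directly into the OSVM optimization problem rather than preprocessing the data.

```python
# Hedged sketch: stretch low-variance directions via covariance whitening, then apply
# a standard one-class SVM. Not the COSVM objective itself, only its intuition.
import numpy as np
from sklearn.svm import OneClassSVM

class CovarianceScaledOSVM:
    def __init__(self, nu=0.1, gamma="scale", reg=1e-3):
        self.nu, self.gamma, self.reg = nu, gamma, reg

    def fit(self, X):
        self.mean_ = X.mean(axis=0)
        cov = np.cov(X - self.mean_, rowvar=False) + self.reg * np.eye(X.shape[1])
        vals, vecs = np.linalg.eigh(cov)
        self.W_ = vecs @ np.diag(vals ** -0.5) @ vecs.T   # inverse square root of covariance
        self.osvm_ = OneClassSVM(nu=self.nu, gamma=self.gamma)
        self.osvm_.fit((X - self.mean_) @ self.W_)
        return self

    def predict(self, X):
        # Returns +1 for target-class points, -1 for outliers, as in OneClassSVM.
        return self.osvm_.predict((X - self.mean_) @ self.W_)
```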
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2008
Riadh Ksantini; Djemel Ziou; Bernard Colin; François Dubeau
In this paper, we investigate the effectiveness of a Bayesian logistic regression model for computing the weights of a pseudometric in order to improve its discriminatory capacity and thereby increase image retrieval accuracy. In the proposed Bayesian model, prior knowledge of the observations is incorporated and the posterior distribution is approximated by a tractable Gaussian form using a variational transformation and Jensen's inequality, which allows a fast and straightforward computation of the weights. The pseudometric makes use of compressed and quantized versions of wavelet-decomposed feature vectors; in our previous work, the weights were adjusted by the classical logistic regression model. A comparative evaluation of the Bayesian and classical logistic regression models is performed for content-based image retrieval, as well as for other classification tasks, in a decontextualized evaluation framework. In this same framework, we compare the Bayesian logistic regression model to some relevant state-of-the-art classification algorithms. Experimental results show that the Bayesian logistic regression model outperforms these linear classification algorithms and is a significantly better tool than the classical logistic regression model for computing the pseudometric weights and improving retrieval and classification performance. Finally, we perform a comparison with results obtained by other retrieval methods.
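In standard treatments, the variational transformation referred to here leads to closed-form Gaussian updates for the posterior over the weights of a binary logistic model. The sketch below implements those generic updates (Jaakkola–Jordan style) with an assumed isotropic Gaussian prior and a fixed iteration count; it illustrates the mechanism, not the paper's exact pseudometric weighting.

```python
# Hedged sketch: variational Bayesian logistic regression with a tractable Gaussian
# posterior approximation. Prior N(0, alpha^-1 I) and n_iter are assumptions.
import numpy as np

def vb_logistic(X, y, alpha=1.0, n_iter=50):
    """X: (n, d) features, y: (n,) labels in {0, 1}. Returns posterior mean and covariance."""
    n, d = X.shape
    xi = np.ones(n)                                   # variational parameters, one per sample
    S0_inv = alpha * np.eye(d)                        # prior precision
    for _ in range(n_iter):
        lam = np.tanh(xi / 2.0) / (4.0 * xi)          # lambda(xi) from the sigmoid bound
        SN_inv = S0_inv + 2.0 * (X.T * lam) @ X       # posterior precision
        SN = np.linalg.inv(SN_inv)
        mN = SN @ (X.T @ (y - 0.5))                   # posterior mean (zero prior mean)
        # Re-estimate xi from the second moment of the current Gaussian posterior.
        xi = np.sqrt(np.einsum("ij,jk,ik->i", X, SN + np.outer(mN, mN), X))
    return mN, SN
```

A simple use of the result is to score new samples with the posterior mean, e.g. p(y=1|x) ≈ 1 / (1 + exp(-x @ mN)), or to moderate the prediction with the posterior covariance.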
Eurasip Journal on Image and Video Processing | 2013
Riadh Ksantini; Boubakeur Boufama; Sara Memar
In this paper, we propose a fast and effective polarity-based active contour for salient object detection in grey-level and color images. The adopted variational level-set formulation forces the level-set function to remain close to a signed distance function, which completely eliminates the need for the re-initialization procedure and speeds up the curve evolution. Moreover, instead of the classical and widely used gradient-based stopping function, which depends on the image gradient, we use a polarity-based stopping function to halt the curve evolution. Compared to gradient information, polarity information more accurately distinguishes the boundaries or edges of the salient objects in images. Another benefit of using polarity information is that the ad hoc manual and local initializations of the evolving curves inside and outside the image objects can be avoided; a single trivial and global initialization of the evolving curve suffices to detect salient image objects. We also investigate multi-spectral polarity information to generalize the proposed active contour to color images. Experiments are performed on several grey-level and color images to show the advantages and effectiveness of our new active contour model.
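The sketch below shows one explicit evolution step of a distance-regularized variational level-set model of this general kind, with the stopping function `g` left as an input. The coefficients `mu`, `lam`, `nu`, and `eps` are assumed for illustration, and the paper's polarity-based stopping function is not reproduced; a classical gradient-based `g` (e.g. 1 / (1 + |grad of smoothed image|^2)) could be plugged in as a stand-in.

```python
# Hedged sketch: one step of a variational level-set evolution with a distance-
# regularization term (which removes re-initialization) and a generic stopping
# function g. This is not the paper's polarity-based model, only its skeleton.
import numpy as np

def div(fx, fy):
    """Divergence of the 2-D vector field (fx, fy)."""
    return np.gradient(fx, axis=1) + np.gradient(fy, axis=0)

def evolve_step(phi, g, dt=1.0, mu=0.2, lam=5.0, nu=1.5, eps=1.5):
    gy, gx = np.gradient(phi)
    mag = np.sqrt(gx**2 + gy**2) + 1e-10
    nx, ny = gx / mag, gy / mag
    curvature = div(nx, ny)
    # Smoothed Dirac delta: restricts the edge/area forces to a band around the zero level set.
    delta = (eps / np.pi) / (eps**2 + phi**2)
    # Distance regularization: keeps phi close to a signed distance function.
    dist_reg = div(gx, gy) - curvature
    edge_term = delta * div(g * nx, g * ny)
    area_term = g * delta
    return phi + dt * (mu * dist_reg + lam * edge_term + nu * area_term)
```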
international conference on image analysis and recognition | 2010
Abdunnaser Diaf; Riadh Ksantini; Boubakeur Boufama; Rachid Benlamri
This paper proposes a novel and robust appearance-based method for human motion recognition based on the eigenspace technique. The method has three main advantages over existing appearance-based methods. First, Linear Discriminant Analysis (LDA) is used for dimensionality reduction and eigenspace generation, preserving maximum separability between classes. Second, by combining a novel centering technique with an incremental procedure, the motion data becomes more concise, more expressive, and less ambiguous. Third, storage efficiency is greatly improved by using a directed acyclic graph (DAG) structure based on the Euclidean distance between projected data. The method is rigorously trained and tested on the KTH dataset, which contains a large number of motion videos partitioned into six human motions. The experimental results are very promising, yielding an average recognition rate of 94.17%.
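As a minimal sketch of the eigenspace idea only, the snippet below projects precomputed appearance features with LDA and matches them by nearest neighbor. The feature extraction, centering technique, incremental update, and DAG-based storage from the paper are not reproduced; `X` and `y` are assumed to be per-video feature vectors and KTH action labels.

```python
# Hedged sketch: LDA projection of appearance features followed by nearest-neighbor
# matching. Only the eigenspace + matching step is illustrated.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def train_motion_recognizer(X, y, n_components=5):
    # For six action classes, LDA yields at most five discriminant directions.
    model = make_pipeline(
        LinearDiscriminantAnalysis(n_components=n_components),
        KNeighborsClassifier(n_neighbors=1, metric="euclidean"),
    )
    return model.fit(X, y)
```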
Pattern Recognition | 2010
Riadh Ksantini; Boubakeur Boufama; Djemel Ziou; Bernard Colin
Linear discriminant analysis (LDA) is a linear classifier that has proven powerful and competitive compared to the main state-of-the-art classifiers. However, the LDA algorithm assumes that the sample vectors of each class are generated from underlying multivariate normal distributions with a common covariance matrix but different means (i.e., homoscedastic data). This assumption has considerably restricted the use of LDA. Over the years, authors have defined several extensions to the basic formulation of LDA. One such method is heteroscedastic LDA (HLDA), which was proposed to address the heteroscedasticity problem. Another is nonparametric discriminant analysis (NDA), where the normality assumption is relaxed. In this paper, we propose a novel Bayesian logistic discriminant (BLD) model which addresses both the normality and heteroscedasticity problems. The normality assumption is relaxed by approximating the underlying distribution of each class with a mixture of Gaussians. Hence, the proposed BLD provides more flexibility and better classification performance than LDA, HLDA, and NDA. Subclass and multinomial versions of the BLD are also proposed. The posterior distribution of the BLD model is elegantly approximated by a tractable Gaussian form using a variational transformation and Jensen's inequality, allowing a straightforward computation of the weights. An extensive comparison of the BLD to LDA, the support vector machine (SVM), HLDA, NDA, and subclass discriminant analysis (SDA), performed on artificial and real data sets, has shown the advantages and superiority of our proposed method. In particular, the experiments on face recognition have clearly shown a significant improvement of the proposed BLD over LDA.
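In standard expositions, the variational transformation used to obtain a Gaussian posterior for logistic models is the following lower bound on the logistic sigmoid (notation assumed here, not necessarily the paper's):

```latex
% Standard variational lower bound on the logistic sigmoid:
\sigma(a) \;\ge\; \sigma(\xi)\,
\exp\!\Big(\tfrac{a-\xi}{2} \;-\; \lambda(\xi)\,\big(a^{2}-\xi^{2}\big)\Big),
\qquad
\lambda(\xi) \;=\; \frac{1}{4\xi}\tanh\!\Big(\frac{\xi}{2}\Big).
```

Because the bound is a Gaussian function of the linear score a = w^T x, multiplying it against a Gaussian prior over the weights yields a Gaussian posterior approximation with closed-form mean and covariance updates, which is what makes the weight computation straightforward.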
International Journal of Machine Learning and Cybernetics | 2012
Riadh Ksantini; Boubakeur Boufama
The Support Vector Machine (SVM) has achieved promising classification performance. However, since it is based only on local information (the support vectors), it is sensitive to directions with large data spread. On the other hand, Nonparametric Discriminant Analysis (NDA) is an improvement over the more general Linear Discriminant Analysis (LDA), in which the normality assumption of LDA is relaxed. Furthermore, NDA incorporates partially global information to detect the dominant normal directions to the decision surface, which represent the true data spread. However, NDA relies on the choice of the κ-nearest neighbors (κ-NNs) on the decision boundary. This paper introduces a novel Combined SVM and NDA (CSVMNDA) model which controls the spread of the data while maximizing a relative margin separating the data classes. The model can be considered an improvement to SVM that incorporates the data spread information represented by the dominant normal directions to the decision boundary. It can also be viewed as an extension to NDA in which the support vectors improve the choice of the κ-nearest neighbors on the decision boundary by incorporating local information. Since our model is an extension of both SVM and NDA, it can deal with heteroscedastic and non-normal data, and it avoids the small sample size problem. Interestingly, the proposed improvements only require a rigorous yet simple combination of the NDA and SVM objective functions, and they preserve the computational efficiency of SVM. Through the optimization of the CSVMNDA objective function, surprising performance gains were achieved on real-world problems. In particular, the experiments on face recognition have clearly shown the superiority of CSVMNDA over other state-of-the-art classification methods, especially SVM and NDA.
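A plausible form of such a combined objective, written here only for illustration (the paper's exact formulation, trade-off parameter, and scatter definition may differ), augments the SVM margin term with a quadratic penalty built from the NDA scatter:

```latex
% Assumed illustrative form of a combined SVM/NDA objective:
\min_{\mathbf{w},\,b,\,\boldsymbol{\xi}}\;
\frac{1-\lambda}{2}\,\|\mathbf{w}\|^{2}
\;+\;\frac{\lambda}{2}\,\mathbf{w}^{\top} S_{N}\,\mathbf{w}
\;+\;C\sum_{i}\xi_{i}
\quad\text{s.t.}\quad
y_{i}\big(\mathbf{w}^{\top}\mathbf{x}_{i}+b\big) \ge 1-\xi_{i},\;\; \xi_{i}\ge 0,
```

where S_N denotes a nonparametric scatter matrix built from κ-NN differences near the decision boundary. An objective of this shape stays convex and quadratic in w, which is consistent with the claim that the combination preserves the computational efficiency of the SVM.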
International Journal of Wavelets, Multiresolution and Information Processing | 2006
Riadh Ksantini; Djemel Ziou; François Dubeau
In this paper, a simple and fast querying method for content-based image retrieval is presented. Using the multispectral gradient, a color image is split into two disjoint parts: the homogeneous color regions and the edge regions. The homogeneous regions are represented by traditional color histograms, and the edge regions are represented by histograms of the mean multispectral gradient magnitude. To measure the similarity between two color images both quickly and effectively, we use a one-dimensional pseudo-metric that makes use of a one-dimensional Daubechies decomposition and compression of the extracted histograms. Our querying method is invariant to translations of the query image objects and to changes in color intensity. Experimental results are reported on a collection of 10,000 LAB color images.
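The sketch below illustrates the histogram-compression part of such a pipeline: a one-dimensional Daubechies decomposition of a histogram, crude compression by keeping the largest coefficients, and a weighted distance on the result. The region splitting, quantization scheme, and learned pseudo-metric weights from the paper are not reproduced; the wavelet name, level, and number of kept coefficients are assumptions.

```python
# Hedged sketch: Daubechies compression of a 1-D histogram and a weighted distance
# on the compressed coefficients, as a stand-in for the paper's pseudo-metric.
import numpy as np
import pywt

def compress_histogram(hist, wavelet="db4", level=3, keep=32):
    coeffs = pywt.wavedec(np.asarray(hist, dtype=float), wavelet, level=level)
    flat = np.concatenate(coeffs)
    idx = np.argsort(np.abs(flat))[-keep:]     # keep the largest-magnitude coefficients
    compressed = np.zeros_like(flat)
    compressed[idx] = flat[idx]
    return compressed

def weighted_distance(c1, c2, weights=None):
    d = np.abs(c1 - c2)
    return d.sum() if weights is None else float(np.dot(weights, d))
```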
Signal, Image and Video Processing | 2014
Naimul Mefraz Khan; Riadh Ksantini; Imran Shafiq Ahmad; Ling Guan
This paper introduces a novel sparse nonparametric support vector machine classifier (SN-SVM) which combines data distribution information from two state-of-the-art kernel-based classifiers, namely the kernel support vector machine (KSVM) and the kernel nonparametric discriminant (KND). The proposed model incorporates some near-global variations of the data provided by the KND and hence may be viewed as an extension to the KSVM. Similarly, since the support vectors improve the choice of nearest neighbors on the decision boundary by incorporating local information, it may also be viewed as an extension to the KND.
canadian conference on computer and robot vision | 2009
Riadh Ksantini; Farnaz Shariat; Boubakeur Boufama