Applied Soft Computing | 2021

k-relevance vectors: Considering relevancy beside nearness

Abstract

This study combines two different learning paradigms: the k-nearest neighbor (k-NN) rule, a memory-based learning paradigm, and the relevance vector machine (RVM), a statistical learning paradigm. The aim is to improve the performance of the k-NN rule by selecting important features with a sparse Bayesian learning method. The combination is performed in kernel space and is called the k-relevance vector (k-RV) model. The proposed model significantly prunes irrelevant features. Combining k-NN and RVM yields a new similarity measure for the k-NN rule, which we call k-relevancy; it considers "relevancy" in the feature space alongside "nearness" in the input space. We also introduce a new parameter, responsible for early stopping of the RVM iterations, that further improves classification accuracy. Extensive experiments are conducted on several classification datasets from the University of California Irvine (UCI) repository and on two real datasets from the computer vision domain. In terms of classification accuracy, k-RV is highly competitive with several state-of-the-art methods.
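The abstract does not spell out the algorithm, so the following is only a minimal, hypothetical Python sketch of the idea it describes: a sparse Bayesian (ARD) step assigns each feature a relevance weight, and k-NN then votes under a relevance-weighted distance ("k-relevancy"). The function names (fit_ard_relevance, k_rv_predict), the choice to run ARD directly on input features rather than in kernel space as in the paper, and the use of n_iter as the early-stopping parameter are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_ard_relevance(X, t, n_iter=50, beta=10.0, prune=1e6):
    """Sparse Bayesian (ARD) linear model; returns a relevance weight per feature.

    Simplified stand-in for RVM training: per-feature precisions alpha are
    re-estimated with MacKay-style updates, and features whose precision
    diverges are pruned as irrelevant. n_iter plays the role of the
    early-stopping parameter mentioned in the abstract (an assumption).
    """
    N, D = X.shape
    alpha = np.ones(D)               # per-feature precision (large => irrelevant)
    active = np.ones(D, dtype=bool)
    mu = np.zeros(D)
    for _ in range(n_iter):
        Xa = X[:, active]
        Sigma = np.linalg.inv(beta * Xa.T @ Xa + np.diag(alpha[active]))
        mu_a = beta * Sigma @ Xa.T @ t                    # posterior mean weights
        gamma = np.clip(1.0 - alpha[active] * np.diag(Sigma), 1e-12, None)
        alpha[active] = gamma / (mu_a ** 2 + 1e-12)
        mu = np.zeros(D)
        mu[active] = mu_a
        active = alpha < prune                            # keep relevant features only
        if not active.any():
            break
    return np.abs(mu)

def k_rv_predict(X_train, y_train, X_test, relevance, k=3):
    """k-NN majority vote under a relevance-weighted ('k-relevancy') distance."""
    w = relevance / (relevance.sum() + 1e-12)
    preds = []
    for x in X_test:
        d = np.sqrt((w * (X_train - x) ** 2).sum(axis=1))
        nn = np.argsort(d)[:k]
        labels, counts = np.unique(y_train[nn], return_counts=True)
        preds.append(labels[np.argmax(counts)])
    return np.array(preds)

# Toy usage: only features 0 and 1 carry signal; the other 8 should be pruned.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
r = fit_ard_relevance(X[:150], y[:150].astype(float))
acc = (k_rv_predict(X[:150], y[:150], X[150:], r, k=5) == y[150:]).mean()
print("relevance:", np.round(r, 2), "accuracy:", acc)
```

The weighting step is the point of the sketch: features that ARD prunes get zero weight, so they simply drop out of the k-NN distance, which is one plausible reading of how "relevancy" supplements "nearness".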

Volume 112
Article 107762
DOI 10.1016/j.asoc.2021.107762
Language English
Journal Applied Soft Computing
