
Publication


Featured research published by Weite Li.


International Symposium on Neural Networks | 2015

Geometric approach of quasi-linear kernel composition for support vector machine

Weite Li; Jinglu Hu

This paper proposes a geometric way to construct a quasi-linear kernel, with which a quasi-linear support vector machine (SVM) is performed. A quasi-linear SVM is an SVM with a quasi-linear kernel, in which the nonlinear separation boundary is approximated by multiple local linear boundaries with interpolation. However, extracting the local linearity used to compose the quasi-linear kernel remains an open problem. In this paper, drawing on geometric theory, a method based on a piecewise linear classifier is proposed to extract the local linearity more precisely and efficiently. We first construct a set of linear functions, each of which reflects one part of the linearity of the whole nonlinear separation boundary. The obtained local linearity is then added as prior information into the composition of the quasi-linear kernel. Experimental results on synthetic and real-world data sets show that the proposed method effectively improves classification performance.
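A minimal, hypothetical sketch of the quasi-linear kernel idea behind this line of work: Gaussian gating functions centred on k-means partitions stand in for the piecewise-linear local-linearity extraction proposed in the paper, and the composed kernel is passed to a standard SVM as a precomputed Gram matrix. All data and parameter values below are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def gating(X, centers, gamma=1.0):
    # Soft membership of each sample to each local region (Gaussian gating).
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    g = np.exp(-gamma * d2)
    return g / g.sum(axis=1, keepdims=True)

def quasi_linear_gram(Xa, Xb, centers, gamma=1.0):
    # K(x, x') = sum_i g_i(x) g_i(x') * (x . x' + 1): an interpolation of
    # local linear kernels weighted by the gating functions.
    Ga, Gb = gating(Xa, centers, gamma), gating(Xb, centers, gamma)
    return (Ga @ Gb.T) * (Xa @ Xb.T + 1.0)

# Usage sketch on random placeholder data.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 5)), rng.integers(0, 2, 200)
centers = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X).cluster_centers_
K = quasi_linear_gram(X, X, centers)
clf = SVC(kernel="precomputed").fit(K, y)
```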


International Symposium on Neural Networks | 2016

A kernel level composition of multiple local classifiers for nonlinear classification

Weite Li; Bo Zhou; Jinglu Hu

Kernel-based machine learning algorithms have been extensively studied over the past decades, with successful applications in a variety of real-world tasks. In this paper, we formulate a kernel-level composition method that embeds multiple local classifiers (kernels) into one kernel function, so as to obtain a more flexible data-dependent kernel. Since such composite kernels are composed of multiple local classifiers interpolated by several localizing gating functions, a specific learning process is also introduced to pre-determine their parameters. Experimental results are provided to validate two major points of this paper. First, the introduced learning process effectively detects local information, which is essential for pre-determining the parameters of the localizing gating functions. Second, the proposed composite kernel is able to improve classification performance.
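The composition can also be read as a gated sum of local linear classifiers, f(x) = Σ_i g_i(x)(w_i·x + b_i). The sketch below is a hedged illustration only: k-means stands in for the parameter pre-determination step described in the paper, and gate-weighted logistic regressions stand in for the embedded local classifiers.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = rng.normal(size=(300, 5)), rng.integers(0, 2, 300)

# Localizing gating functions centred on k-means partitions (illustrative only).
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
g = np.exp(-1.0 * ((X[:, None, :] - km.cluster_centers_[None]) ** 2).sum(-1))
g /= g.sum(axis=1, keepdims=True)

# Local linear classifiers, each fitted with its gate values as sample weights.
locals_ = [LogisticRegression(max_iter=1000).fit(X, y, sample_weight=g[:, i])
           for i in range(4)]

# Gated combination f(x) = sum_i g_i(x) * (w_i . x + b_i).
f = sum(g[:, i] * (X @ m.coef_.ravel() + m.intercept_[0])
        for i, m in enumerate(locals_))
pred = (f > 0).astype(int)
```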


International Symposium on Neural Networks | 2016

A deep quasi-linear kernel composition method for support vector machines

Weite Li; Jinglu Hu; Benhui Chen

In this paper, we introduce a data-dependent kernel called the deep quasi-linear kernel, which can directly benefit from a pre-trained feed-forward deep network. First, a multi-layer gated bilinear classifier is formulated to mimic the functionality of a feed-forward neural network; the only difference between them is that the activation values of the hidden units in the multi-layer gated bilinear classifier depend on a pre-trained neural network rather than a pre-defined activation function. Second, we demonstrate the equivalence between the multi-layer gated bilinear classifier and an SVM with a deep quasi-linear kernel. By deriving a kernel composition function, traditional optimization algorithms for a kernel SVM can be applied directly to implicitly optimize the parameters of the multi-layer gated bilinear classifier. Experimental results on different data sets show that the proposed classifier can outperform both an SVM with an RBF kernel and the pre-trained feed-forward deep network.
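For a single hidden ReLU layer, relu(Wx) = g(x) ⊙ (Wx) with gate vector g(x) = 1[Wx > 0], so fixing the gates makes the classifier linear in its remaining parameters. A hedged one-layer sketch of the resulting kernel is shown below; the random matrix W is only a stand-in for a pre-trained network, and the multi-layer composition of the paper is not reproduced.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = rng.normal(size=(250, 6)), rng.integers(0, 2, 250)
W = rng.normal(size=(6, 32))           # stand-in for pre-trained weights

G = (X @ W > 0).astype(float)          # gate signals taken from the network
K = (G @ G.T) * (X @ X.T)              # one-layer quasi-linear kernel sketch
clf = SVC(kernel="precomputed").fit(K, y)
```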


International Symposium on Neural Networks | 2017

Large-scale image classification using fast SVM with deep quasi-linear kernel

Peifeng Liang; Weite Li; Donghang Liu; Jinglu Hu

In this paper, a novel fast support vector machine (SVM) method combined with deep quasi-linear kernel (DQLK) learning is proposed for large-scale image classification. The method can train an SVM on large-scale datasets quickly, using less memory and less training time. Since SVM classifiers are constructed from support vectors (SVs) that lie close to the separation boundary, removing samples that are not relevant to the SVs has no effect on the separation boundary; in other words, only the boundary samples that are likely to be SVs need to be retained. The proposed method uses an approximate separating classifier, obtained by training on a small randomly selected subset of the training data, as a reference to detect and remove irrelevant samples whose normalized algebraic distance to the reference classification boundary exceeds a threshold. The method is implemented in feature space, so with a suitable kernel it can handle high-dimensional data and image data. The DQLK method is used to construct the kernel matrix. Experimental results on different datasets, including extended very-large-scale datasets, show that the proposed method deals effectively with very-large-scale image classification.
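A minimal sketch of the sample-reduction step, with an RBF kernel standing in for the deep quasi-linear kernel: a reference SVM trained on a small random subset flags samples far from its decision boundary, and only near-boundary samples are kept for the final training run. The threshold and data are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = rng.normal(size=(5000, 20)), rng.integers(0, 2, 5000)

# Reference classifier trained on a small random subset of the data.
subset = rng.choice(len(X), size=500, replace=False)
ref = SVC(kernel="rbf", gamma="scale").fit(X[subset], y[subset])

# Keep only samples close to the reference boundary (likely support vectors).
dist = np.abs(ref.decision_function(X))
keep = dist <= np.quantile(dist, 0.3)   # threshold is a free parameter

final = SVC(kernel="rbf", gamma="scale").fit(X[keep], y[keep])
```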


International Symposium on Neural Networks | 2017

Non-local information for a mixture of multiple linear classifiers

Weite Li; Peifeng Liang; Xin Yuan; Jinglu Hu

For many problems in machine learning, the data are nonlinearly distributed. One popular way to tackle such data is to train a local kernel machine or a mixture of several locally linear models. However, both approaches rely heavily on local information, such as the neighbor relations of each data sample, to capture the underlying data distribution. In this paper, we show that non-local information is more efficient for data representation. Using a winner-take-all autoencoder, several non-local templates are trained to trace the data distribution and to represent each sample in different subspaces with suitable weights. By training a linear model for each subspace in a divide-and-conquer manner, a single support vector machine can be formulated to solve nonlinear classification problems. Experimental results demonstrate that a mixture of multiple linear classifiers based on non-local information performs better than, or at least competitively with, state-of-the-art mixtures of locally linear models.
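A hedged sketch of the divide-and-conquer construction: k-means templates stand in for the winner-take-all autoencoder, each sample is weighted against every template, and the weighted copies are concatenated so that one linear SVM realizes the mixture of linear classifiers. Data and hyperparameters are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X, y = rng.normal(size=(400, 8)), rng.integers(0, 2, 400)

# Non-local templates (k-means centres as a stand-in for autoencoder templates).
templates = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X).cluster_centers_

# Per-template weights for every sample.
w = np.exp(-0.5 * ((X[:, None, :] - templates[None]) ** 2).sum(-1))
w /= w.sum(axis=1, keepdims=True)

# Concatenate weighted copies of x: one linear SVM = mixture of linear models.
phi = (w[:, :, None] * X[:, None, :]).reshape(len(X), -1)
clf = LinearSVC(C=1.0, max_iter=5000).fit(phi, y)
```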


International Symposium on Neural Networks | 2017

A mixture of multiple linear classifiers with sample weight and manifold regularization

Weite Li; Benhui Chen; Bo Zhou; Jinglu Hu

A mixture of multiple linear classifiers is known for its efficiency and effectiveness in tackling nonlinear classification problems. Each classifier consists of a linear function multiplied by a gating function, which restricts the classifier to a local region. Previous research has mainly focused on partitioning the local regions, since its quality directly determines the performance of the mixture model. However, in real-world data sets, imbalanced data and insufficient labeled data are two frequently encountered problems; they also strongly influence the performance of the learned classifiers but are seldom considered in the context of mixture models. In this paper, these missing components are introduced into the original formulation of mixture models: a sample weighting scheme for imbalanced data distributions and a manifold regularization term to leverage unlabeled data. Two closed-form solutions are then provided for parameter optimization. Experimental results demonstrate the significance of the added components. As a result, a mixture of multiple linear classifiers can be extended to imbalanced and semi-supervised learning problems.
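A hedged sketch of the two added components for a single linear classifier (the gating/mixture structure is omitted): inverse-class-frequency sample weights handle imbalance, and a graph-Laplacian term over labeled plus unlabeled data provides the manifold regularization, giving a closed-form ridge-style solution under a squared loss. Names and values are illustrative.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

# Closed form for f(x) = w . x with sample weights C and Laplacian L:
#   w = (X_l^T C X_l + la*I + lm*X^T L X)^(-1) X_l^T C y
rng = np.random.default_rng(0)
X_l = rng.normal(size=(60, 4)); y = np.sign(rng.normal(size=60))   # labeled
X_u = rng.normal(size=(200, 4))                                     # unlabeled
X = np.vstack([X_l, X_u])

# Sample weights: inverse class frequency (imbalance handling).
counts = {c: (y == c).sum() for c in (-1.0, 1.0)}
c = np.array([len(y) / (2 * counts[v]) for v in y])
C = np.diag(c)

# Graph Laplacian over labeled + unlabeled samples (manifold regularization).
W = kneighbors_graph(X, n_neighbors=8, mode="connectivity", include_self=False)
W = 0.5 * (W + W.T).toarray()
L = np.diag(W.sum(1)) - W

la, lm = 1e-2, 1e-1
A = X_l.T @ C @ X_l + la * np.eye(X.shape[1]) + lm * X.T @ L @ X
w = np.linalg.solve(A, X_l.T @ C @ y)
```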


International Symposium on Neural Networks | 2017

A multilayer gated bilinear classifier: From optimizing a deep rectified network to a support vector machine

Weite Li; Jinglu Hu

A deep neural network (DNN) is called a deep rectified network (DRN) if it uses Rectified Linear Units (ReLUs) as its activation function. In this paper, we show that its parameters can be seen to play two roles simultaneously: determining the subnetworks corresponding to the inputs, and serving as the parameters of those subnetworks. This observation leads us to propose a method that combines a DNN and an SVM into a deep classifier. For a DRN trained by a common tuning algorithm, a multilayer gated bilinear classifier is designed to mimic its functionality. Its parameter set is duplicated into two independent sets that play different roles. One set is used to generate gate signals that determine the subnetworks corresponding to the inputs, and is kept fixed while optimizing the classifier. The other set serves as the parameters of the subnetworks, which are linear classifiers, so these parameters can be optimized implicitly by applying SVM optimization. Since the DRN is used only to generate gate signals, we show in experiments that it can be trained with supervised or unsupervised learning, and even by transfer learning.
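A hedged single-hidden-layer sketch of the duplication idea: one copy of the trained network's first-layer parameters only produces the gate pattern g(x) = 1[W1·x + b1 > 0]; with the gates fixed, the output is linear in the other copy, so an SVM can refit it on the explicit feature map g(x) ⊗ [x, 1]. The data and hyperparameters are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X, y = rng.normal(size=(300, 6)), rng.integers(0, 2, 300)

# A small ReLU network standing in for the trained DRN.
drn = MLPClassifier(hidden_layer_sizes=(16,), activation="relu",
                    max_iter=500, random_state=0).fit(X, y)
W1, b1 = drn.coefs_[0], drn.intercepts_[0]

# Gate signals from the fixed copy of the parameters.
gates = ((X @ W1 + b1) > 0).astype(float)

# Explicit feature map g(x) (outer) [x, 1]; the SVM fits the free copy.
Xb = np.hstack([X, np.ones((len(X), 1))])
phi = (gates[:, :, None] * Xb[:, None, :]).reshape(len(X), -1)
svm = LinearSVC(C=1.0, max_iter=5000).fit(phi, y)
```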


International Symposium on Neural Networks | 2016

Enhancing multi-label classification based on local label constraints and classifier chains

Benhui Chen; Weite Li; Yuqing Zhang; Jinglu Hu

In multi-label classification, implicit constraints and dependencies often exist among labels. Exploring the correlations among different labels is important for many applications: it not only enhances classifier performance but also helps interpret the classification results for specific applications. This paper presents an improved multi-label classification method based on local label constraints and classifier chains for solving multi-label tasks with a large number of labels. First, to exploit local label constraints in problems with many labels, a clustering approach is used to segment the training label set into several subsets. Second, for each label subset, local tree-structured constraints among labels are mined based on a mutual-information metric. Third, based on these mined local tree-structured label constraints, a variant of the classifier-chain strategy is applied to enhance the multi-label learning system. Experimental results on five multi-label benchmark datasets show that the proposed method is competitive for multi-label classification tasks with a large number of labels.
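A hedged sketch of the overall pipeline: labels are clustered by their co-occurrence patterns and one classifier chain is trained per label subset. The mutual-information tree-structured constraints from the paper are not reproduced; a default chain order is used, and the toy data are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10))
Y = (rng.random((400, 12)) < 0.2).astype(int)      # 12 binary labels (toy data)

# Step 1: cluster labels using their columns (co-occurrence patterns) as features.
label_clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Y.T)

# Steps 2-3 (simplified): one classifier chain per label subset.
chains = {}
for c in np.unique(label_clusters):
    idx = np.where(label_clusters == c)[0]
    chains[c] = (idx, ClassifierChain(LogisticRegression(max_iter=1000),
                                      random_state=0).fit(X, Y[:, idx]))

# Prediction: stitch per-cluster predictions back into the full label vector.
Y_hat = np.zeros_like(Y)
for idx, chain in chains.values():
    Y_hat[:, idx] = chain.predict(X)
```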


IEEJ Transactions on Electrical and Electronic Engineering | 2017

A geometry-based two-step method for nonlinear classification using quasi-linear support vector machine

Weite Li; Bo Zhou; Benhui Chen; Jinglu Hu


IEEJ Transactions on Electrical and Electronic Engineering | 2018

One-class classification using a support vector machine with a quasi-linear kernel

Peifeng Liang; Weite Li; Hao Tian; Jinglu Hu

Collaboration


Dive into Weite Li's collaborations.

Top Co-Authors

Feng Zheng

Shandong University of Science and Technology


Yuqing Zhang

Chinese Academy of Sciences
