
Publication


Featured research published by Jun-Bao Li.


Information Sciences | 2008

Kernel class-wise locality preserving projection

Jun-Bao Li; Jeng-Shyang Pan; Shu-Chuan Chu

In recent years, the pattern recognition community has paid increasing attention to manifold learning, a family of feature extraction methods that project the original data into a lower-dimensional feature space while preserving the local neighborhood structure. Among them, locality preserving projection (LPP) is one of the most promising feature extraction techniques. However, when LPP is applied to classification tasks, it shows some limitations, such as ignoring label information. In this paper, we propose a novel local-structure-based feature extraction method, called class-wise locality preserving projection (CLPP). CLPP utilizes class information to guide the feature extraction procedure: the local structure of the original data is constructed according to a similarity between data points that takes into account both local information and class information. A kernel version of CLPP, namely Kernel CLPP (KCLPP), is also developed by applying the kernel trick to CLPP to improve its performance on nonlinear feature extraction. Experiments on the ORL and Yale face databases are performed to evaluate the proposed algorithms.
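As an illustration of the class-wise idea, the sketch below builds an affinity matrix that assigns a Gaussian weight only to same-class pairs and then solves the standard LPP generalized eigenproblem. This is a minimal numpy sketch, not the paper's exact formulation: the weighting scheme, the heat-kernel parameter `t`, and the regularization term are illustrative choices.

```python
import numpy as np

def clpp_affinity(X, y, t=1.0):
    """Class-wise affinity: Gaussian weight for same-class pairs, 0 otherwise.
    (Illustrative; the paper's exact weighting may differ.)"""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / t) * (y[:, None] == y[None, :])
    np.fill_diagonal(W, 0.0)
    return W

def clpp_projection(X, y, dim=2, t=1.0):
    """Solve the LPP generalized eigenproblem X^T L X a = lam X^T D X a,
    keeping the eigenvectors with the smallest eigenvalues."""
    W = clpp_affinity(X, y, t)
    D = np.diag(W.sum(1))
    L = D - W
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])  # regularize for stability
    vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
    order = np.argsort(vals.real)
    return vecs[:, order[:dim]].real

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = np.repeat([0, 1], 10)
P = clpp_projection(X, y, dim=2)
Z = X @ P  # low-dimensional embedding
```

Because the affinity zeroes out cross-class pairs, the eigenproblem only tries to keep same-class neighbors close, which is how label information enters an otherwise unsupervised LPP.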


Neural Computing and Applications | 2009

Kernel optimization-based discriminant analysis for face recognition

Jun-Bao Li; Jeng-Shyang Pan; Zhe-Ming Lu

The choice of kernel function and its parameters influences the performance of a kernel learning machine: different kernels and parameter settings induce different geometric structures in the empirical feature space. Traditionally, only the kernel parameters are tuned, which does not change the data distribution in the empirical feature space and is therefore insufficient to improve kernel learning performance. This paper applies kernel optimization to enhance kernel discriminant analysis and proposes Kernel Optimization-based Discriminant Analysis (KODA) for face recognition. KODA consists of two steps: kernel optimization and projection. It automatically adjusts the kernel parameters according to the input samples, improving feature extraction performance for face recognition. Simulations on the Yale and ORL face databases demonstrate the feasibility of enhancing KDA with kernel optimization.


Neurocomputing | 2008

Adaptive quasiconformal kernel discriminant analysis

Jeng-Shyang Pan; Jun-Bao Li; Zhe-Ming Lu

Kernel discriminant analysis (KDA) effectively extracts nonlinear discriminative features from input samples using the kernel trick. However, conventional KDA suffers from the kernel selection problem, which has a significant impact on its performance. To overcome this limitation, a novel nonlinear feature extraction method called adaptive quasiconformal kernel discriminant analysis (AQKDA) is proposed in this paper. AQKDA maps the data from the original input space to a high-dimensional kernel space using a quasiconformal kernel. The adaptive parameters of the quasiconformal kernel are calculated automatically by optimizing an objective function that measures the class separability of the data in the feature space. Consequently, the nonlinear features extracted by AQKDA have larger class separability than those of KDA. Experimental results on two real-world datasets demonstrate the effectiveness of the proposed method.
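The quasiconformal construction scales a base kernel by a positive factor function, k~(x, y) = c(x)c(y)k(x, y); AQKDA learns that factor from the data. Below is a minimal sketch of the construction itself, with the factor values `c` filled in arbitrarily rather than learned as in the paper.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    """Base RBF Gram matrix over the rows of X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def quasiconformal_kernel(K, c):
    """Quasiconformal scaling of a base kernel matrix K by a positive
    per-sample factor: K_tilde[i, j] = c[i] * c[j] * K[i, j]."""
    return np.outer(c, c) * K

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 3))
K = rbf_kernel(X)
c = 1.0 + rng.random(10)  # stand-in for the learned factor values
Kt = quasiconformal_kernel(K, c)
# Kt = diag(c) @ K @ diag(c), so it stays symmetric positive semidefinite.
```

Since the scaling is a congruence transform with a positive diagonal matrix, the result remains a valid kernel matrix while the relative geometry of the samples changes, which is the lever the adaptive optimization pulls on.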


Neural Computing and Applications | 2009

Face recognition using Gabor-based complete Kernel Fisher Discriminant analysis with fractional power polynomial models

Jun-Bao Li; Jeng-Shyang Pan; Zhe-Ming Lu

This paper presents a novel face recognition method that integrates the Gabor wavelet representation of face images with an enhanced discriminator, complete Kernel Fisher Discriminant (CKFD) analysis with fractional power polynomial (FPP) models. The novelty of this paper is threefold: (1) Gabor wavelets are employed to extract facial features characterized by spatial frequency, spatial locality and orientation selectivity, which cope with variations in illumination and facial expression and improve recognition performance; (2) a recently proposed powerful discriminator, CKFD, which enhances discriminating ability by using two kinds of discriminant information (regular and irregular), is employed to classify the Gabor features; (3) FPP models are applied to CKFD analysis to further enhance its discriminating ability. Compared with existing principal component analysis, linear discriminant analysis, kernel principal component analysis, KFD and CKFD methods, the proposed method gives superior results on the ORL, Yale and UMIST face databases.
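The FPP model evaluates a polynomial kernel with a fractional exponent, k(x, y) = (x . y)^d with 0 < d < 1. The sketch below shows both ingredients, an FPP kernel matrix and a single Gabor filter, with illustrative parameter values (`d`, filter size, frequency, sigma) that are not taken from the paper.

```python
import numpy as np

def fpp_kernel(X, d=0.8):
    """Fractional power polynomial model: k(x, y) = (x . y)^d, 0 < d < 1.
    Inner products are clipped at 0 so the fractional power stays real."""
    G = np.clip(X @ X.T, 0.0, None)
    return G ** d

def gabor_filter(size=7, theta=0.0, freq=0.25, sigma=2.0):
    """Real part of a 2-D Gabor filter at one orientation and frequency."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)

rng = np.random.default_rng(3)
X = rng.random((12, 6))           # nonnegative features, e.g. Gabor magnitudes
K = fpp_kernel(X, d=0.8)
g = gabor_filter(theta=np.pi / 4)  # one filter of a typical multi-orientation bank
```

In the full pipeline a bank of such filters at several orientations and scales would be convolved with each face image, and the resulting magnitude features fed into CKFD with the FPP kernel.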


Neurocomputing | 2011

Kernel Self-optimized Locality Preserving Discriminant Analysis for feature extraction and recognition

Jun-Bao Li; Jeng-Shyang Pan; Shyi-Ming Chen

We propose Kernel Self-optimized Locality Preserving Discriminant Analysis (KSLPDA) for feature extraction and recognition. KSLPDA proceeds in two stages: the first solves for the optimal expansion of a data-dependent kernel with the proposed kernel self-optimization method, and the second seeks the optimal projection matrix for dimensionality reduction. Since the optimal parameters of the data-dependent kernel are obtained automatically by solving a constrained optimization equation, based on the maximum margin criterion and the Fisher criterion in the empirical feature space, KSLPDA works well for feature extraction and classification. Comparative experiments show that KSLPDA outperforms PCA, LDA, LPP, supervised LPP and kernel supervised LPP.


IEEE Sensors Journal | 2016

RSSI-Based Localization Through Uncertain Data Mapping for Wireless Sensor Networks

Qinghua Luo; Yu Peng; Jun-Bao Li; Xiyuan Peng

When localizing an unknown node in a wireless sensor network, the received signal strength indicator (RSSI) value is usually assumed to follow a fixed attenuation model that maps RSSI to communication distance. However, due to interfering factors in the actual localization environment, this relationship often does not hold, which leads to considerable localization error. We therefore present an improved RSSI-based localization method using uncertain data mapping. Starting from refined RSSI measurements, the distributions of the RSSI data tuples are determined and expressed as interval data. A data-tuple pattern matching strategy is then applied to the RSSI data vector during localization. Experimental results in three representative wireless environments show the feasibility and effectiveness of the proposed approach.
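A toy sketch of the interval idea: repeated RSSI readings at each reference point are summarized as per-anchor intervals, and a query vector is matched to the reference point whose intervals it violates least. The scoring rule here is an illustrative assumption, not the paper's exact matching strategy.

```python
import numpy as np

def build_interval_map(samples):
    """Summarize repeated RSSI readings at each reference point as intervals.
    samples: dict mapping position -> array of shape (n_readings, n_anchors)."""
    return {pos: (r.min(0), r.max(0)) for pos, r in samples.items()}

def match_position(query, interval_map):
    """Score each reference point by how far the query RSSI vector falls
    outside its stored intervals; return the best-matching position."""
    def outside(q, lo, hi):
        return np.maximum(lo - q, 0.0) + np.maximum(q - hi, 0.0)
    scores = {pos: outside(query, lo, hi).sum()
              for pos, (lo, hi) in interval_map.items()}
    return min(scores, key=scores.get)

# Toy fingerprint database: two reference points, three anchor nodes (dBm).
samples = {
    (0, 0): np.array([[-40., -60., -70.], [-42., -58., -72.]]),
    (5, 5): np.array([[-70., -45., -55.], [-68., -47., -53.]]),
}
imap = build_interval_map(samples)
pos = match_position(np.array([-41., -59., -71.]), imap)
```

Representing each fingerprint as an interval rather than a single mean value absorbs the measurement uncertainty that breaks fixed attenuation models.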


Neural Computing and Applications | 2012

Sparse data-dependent kernel principal component analysis based on least squares support vector machine for feature extraction and recognition

Jun-Bao Li; Huijun Gao

Kernel learning is widely used in many areas, and many methods have been developed. Kernel principal component analysis (KPCA), a well-known kernel learning method, suffers from two problems in practical applications. First, all training samples must be stored in order to compute the kernel matrix during kernel learning. Second, the kernel and its parameters heavily influence the performance of kernel learning. To solve these problems, we present a novel kernel learning method, sparse data-dependent kernel principal component analysis, which reduces the training sample set with a sparse-learning-based least squares support vector machine and adaptively self-optimizes the kernel structure according to the input training samples. Experimental results on UCI datasets, the ORL and Yale face databases, and the Wisconsin Breast Cancer database show that the method improves KPCA in terms of storage requirements and kernel structure optimization.
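For context, the KPCA core that the paper builds on can be sketched as follows. This shows only plain KPCA (a centered Gram matrix plus an eigendecomposition), not the LS-SVM sparsification or the kernel self-optimization that the paper adds; all parameter values are illustrative.

```python
import numpy as np

def kpca(X, n_components=2, gamma=0.5):
    """Plain kernel PCA with an RBF kernel: double-center the Gram matrix
    and project onto its leading eigenvectors."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                # centered Gram matrix
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]   # top eigenpairs
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                            # embedded coordinates

rng = np.random.default_rng(2)
Z = kpca(rng.normal(size=(15, 4)))
```

The first problem the abstract names is visible here: the Gram matrix is n x n over all training samples, which is exactly the storage cost that sparsifying the sample set reduces.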


Information Sciences | 2014

Kernel self-optimization learning for kernel-based feature extraction and recognition

Jun-Bao Li; Yun-Heng Wang; Shu-Chuan Chu; John F. Roddick

Kernel learning is becoming an important research topic in the area of machine learning, and it has wide applications in pattern recognition, computer vision, and image and signal processing. Kernel learning provides a promising solution to nonlinear problems, including nonlinear feature extraction, classification and clustering. However, in kernel-based systems, the problem of choosing the kernel function and its parameters remains to be solved. Methods of choosing parameters from a discrete set of values have been presented in previous studies, but these methods do not change the data distribution structure in the kernel-based mapping space, so performance is not improved. To address this problem, this paper presents a uniform framework for kernel self-optimization with the ability to adjust the data structure. The data-dependent kernel is extended and applied to kernel learning, and optimization equations with two criteria for measuring data discrimination are used to solve for the optimal parameter values. Experiments are performed to evaluate the performance of popular kernel learning methods, including kernel principal component analysis (KPCA), kernel discriminant analysis (KDA) and kernel locality-preserving projection (KLPP). These evaluations show that the kernel self-optimization framework is feasible for enhancing kernel-based learning methods.
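The data-dependent kernel underlying this framework has the form k'(x, y) = q(x)q(y)k0(x, y), where the factor q is expanded over a set of vectors with learnable coefficients. A minimal sketch with fixed (not optimized) coefficients; the function names, the RBF choices for both kernels, and the parameter values are illustrative assumptions.

```python
import numpy as np

def data_dependent_kernel(X, expansion_vecs, a, gamma0=0.5, gamma1=0.1):
    """Data-dependent kernel k'(x, y) = q(x) q(y) k0(x, y), where
    q(x) = a[0] + sum_i a[i+1] * k1(x, z_i) is expanded over vectors z_i."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K0 = np.exp(-gamma0 * d2)                       # base kernel matrix
    e2 = ((X[:, None, :] - expansion_vecs[None, :, :]) ** 2).sum(-1)
    K1 = np.exp(-gamma1 * e2)                       # expansion kernel values
    q = a[0] + K1 @ a[1:]                           # factor at each sample
    return np.outer(q, q) * K0

rng = np.random.default_rng(4)
X = rng.normal(size=(12, 3))
Z = X[:4]            # expansion vectors (illustrative choice)
a = np.ones(5)       # combination coefficients; the paper optimizes these
Kdd = data_dependent_kernel(X, Z, a)
```

Optimizing the coefficient vector `a` against a discrimination criterion is what actually reshapes the data distribution in the mapping space, which fixed-parameter grid search cannot do.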


Neural Computing and Applications | 2013

3D model classification based on nonparametric discriminant analysis with kernels

Jun-Bao Li; Wen-He Sun; Yun-Heng Wang; Lin-Lin Tang

3D model classification has many applications in CAD, 3D object retrieval, and other areas. Describing a 3D model is crucial but difficult, which makes classification challenging, and traditional classifiers have limitations when applied to 3D model descriptions. In this paper, we present a 3D model classification method based on nonparametric discriminant analysis with kernels, combined with a geometry-projection-based histogram model for invariant feature extraction. First, we present nonparametric discriminant analysis with kernels; second, we propose the invariant feature extraction method based on the geometry-projection histogram model; third, we present the 3D model classification framework that combines the two. Finally, we verify the feasibility and performance of the proposed algorithm on 3D model classification. Experimental results on public datasets show that the proposed scheme is feasible and effective.
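One simple way to realize a geometry-projection histogram descriptor is to project the centered, scaled point cloud onto each coordinate axis and concatenate the per-axis histograms. This sketch is an illustrative reading of the idea, not the paper's exact model.

```python
import numpy as np

def projection_histogram(points, bins=8):
    """Geometry-projection histogram descriptor: project a 3-D point cloud
    onto each coordinate axis and concatenate normalized histograms.
    Centering and scaling give translation/scale invariance; rotation
    invariance would need an extra alignment step, omitted here."""
    P = points - points.mean(0)          # translation invariance
    P = P / np.abs(P).max()              # scale invariance
    hists = [np.histogram(P[:, k], bins=bins, range=(-1, 1))[0]
             for k in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()                   # normalized descriptor, length 3*bins

rng = np.random.default_rng(5)
pts = rng.normal(size=(200, 3))
h = projection_histogram(pts)
```

A fixed-length descriptor like this is what makes a discriminant-analysis classifier applicable to point clouds of varying size.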


Information Sciences | 2013

Quasiconformal kernel common locality discriminant analysis with application to breast cancer diagnosis

Jun-Bao Li; Yu Peng; Datong Liu

Dimensionality reduction (DR) is widely used for recognition and classification in many areas, such as facial and medical imaging. In this paper, we propose a novel supervised DR method, Quasiconformal Kernel Common Locality Discriminant Analysis (QKCLDA). QKCLDA preserves the local and discriminative relationships of the data. Moreover, it adjusts the kernel structure according to the distribution of the input data and thus has a classification advantage over traditional kernel-based methods. In QKCLDA, the parameters of the quasiconformal kernel are calculated automatically by optimizing an objective function that maximizes class discriminative ability. QKCLDA is applied to breast cancer diagnosis, and experiments on the Wisconsin Diagnostic Breast Cancer (WDBC) and mini-MIAS databases confirm its feasibility and performance.

Collaboration


Dive into Jun-Bao Li's collaborations.

Top Co-Authors

Jeng-Shyang Pan

Fujian University of Technology

Shouda Jiang

Harbin Institute of Technology

Yu Peng

Harbin Institute of Technology

Jingli Yang

Harbin Institute of Technology

Lianlei Lin

Harbin Institute of Technology

Qinghua Luo

Harbin Institute of Technology

Xiyuan Peng

Harbin Institute of Technology

Zheng Cui

Harbin Institute of Technology
