
Publications


Featured research published by Chunxia Zhao.


Neurocomputing | 2010

Robust face recognition based on illumination invariant in nonsubsampled contourlet transform domain

Yong Cheng; Yingkun Hou; Chunxia Zhao; Zuoyong Li; Yong Hu; Cailing Wang

To alleviate the effect of illumination variations on face recognition, a novel face recognition algorithm based on an illumination invariant in the nonsubsampled contourlet transform (NSCT) domain is proposed. The algorithm first performs a logarithm transform on original face images under various illumination conditions, which converts the multiplicative illumination model into an additive one. NSCT is then used to decompose the logarithm-transformed images. After that, adaptive NormalShrink is applied to each directional subband of the NSCT for illumination-invariant extraction. Experimental results on the Yale B, Extended Yale and CMU PIE face databases show that the proposed algorithm effectively alleviates the effect of illumination on face recognition.
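
NSCT implementations are not part of the common Python scientific stack, so the sketch below substitutes a 2-D wavelet decomposition (PyWavelets) to illustrate the same pipeline: logarithm transform, multiscale decomposition, and adaptive shrinkage of the detail subbands. The function name and the exact threshold rule are illustrative assumptions, not the paper's NormalShrink formulation.

    # Hedged sketch: illumination-invariant extraction in a transform domain.
    # A wavelet decomposition stands in for NSCT (not the paper's transform).
    import numpy as np
    import pywt

    def illumination_invariant(image, wavelet="db4", levels=3):
        log_img = np.log1p(image.astype(np.float64))       # multiplicative -> additive model
        coeffs = pywt.wavedec2(log_img, wavelet, level=levels)
        shrunk = [coeffs[0] * 0.0]                         # drop low-frequency (illumination) part
        for level in coeffs[1:]:
            bands = []
            for band in level:                             # (cH, cV, cD) detail subbands
                sigma = np.median(np.abs(band)) / 0.6745   # robust noise estimate
                thr = sigma ** 2 / (band.std() + 1e-8)     # NormalShrink-style threshold (assumed form)
                bands.append(pywt.threshold(band, thr, mode="soft"))
            shrunk.append(tuple(bands))
        return pywt.waverec2(shrunk, wavelet)              # high-frequency invariant component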


Neurocomputing | 2011

Localized twin SVM via convex minimization

Qiaolin Ye; Chunxia Zhao; Ning Ye; Xiaobo Chen

Multisurface proximal support vector machine via generalized eigenvalues (GEPSVM), an effective classification tool for supervised learning, seeks two nonparallel planes determined by solving two generalized eigenvalue problems (GEPs). The GEPs may lead to unstable classification performance due to matrix singularity. Proximal support vector machine using local information (LIPSVM), a variant of GEPSVM, attempts to avoid this shortcoming by adopting a formulation similar to the Maximum Margin Criterion (MMC). The solution to an LIPSVM follows directly from solving two standard eigenvalue problems. An LIPSVM can be viewed as a reduced algorithm, because it uses selectively generated points to train the classifier. A major advantage of an LIPSVM is that it is resistant to outliers. In this paper, following the geometric intuition of an LIPSVM, a novel multi-plane learning approach called Localized Twin SVM via Convex Minimization (LCTSVM) is proposed. This approach determines two nonparallel planes by solving two newly formed SVM-type problems. In addition to keeping the superior characteristics of an LIPSVM, an LCTSVM has further advantages: (1) it has similar or better classification capability compared to LIPSVM, TWSVM and LSTSVM; (2) each plane is generated from a quadratic programming problem (QPP) instead of the special convex difference optimization arising in an LIPSVM; (3) the solution can be reduced to solving two systems of linear equations, resulting in considerably lower computational cost; and (4) it can find the global minimum. Experiments carried out on both toy and real-world problems demonstrate the effectiveness of an LCTSVM.
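
The abstract's point (3), reducing each plane to a system of linear equations, can be made concrete. Below is a minimal numpy sketch of the generic least-squares twin-plane solution in the spirit of LSTSVM; it is not the authors' exact LCTSVM formulation (which additionally localizes the training points), and all names are illustrative.

    # Hedged sketch: two nonparallel planes from two linear systems,
    # LSTSVM-style; NOT the authors' exact LCTSVM objective.
    import numpy as np

    def twin_planes(A, B, c1=1.0, c2=1.0, reg=1e-6):
        """A: samples of class +1 (rows); B: samples of class -1 (rows)."""
        e_a = np.ones((A.shape[0], 1))
        e_b = np.ones((B.shape[0], 1))
        G = np.hstack([A, e_a])                        # augmented [w; b] coordinates
        H = np.hstack([B, e_b])
        I = reg * np.eye(G.shape[1])                   # tiny ridge against singularity
        # Plane 1: close to class +1, far from class -1.
        z1 = -np.linalg.solve(G.T @ G / c1 + H.T @ H + I, H.T @ e_b)
        # Plane 2: close to class -1, far from class +1.
        z2 = np.linalg.solve(H.T @ H / c2 + G.T @ G + I, G.T @ e_a)
        return (z1[:-1, 0], z1[-1, 0]), (z2[:-1, 0], z2[-1, 0])

    def classify(x, plane1, plane2):
        d1 = abs(x @ plane1[0] + plane1[1]) / np.linalg.norm(plane1[0])
        d2 = abs(x @ plane2[0] + plane2[1]) / np.linalg.norm(plane2[0])
        return 1 if d1 <= d2 else -1                   # nearer plane wins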


IEEE Transactions on Image Processing | 2011

Comments on "Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering

Yingkun Hou; Chunxia Zhao; Deyun Yang; Yong Cheng

To resolve the sharp drop in denoising performance when the noise standard deviation reaches 40, the authors of the original paper proposed replacing the wavelet transform with the DCT. In this comment, we argue that this replacement is unnecessary and that the problem can be solved by adjusting some numerical parameters; we present this parameter modification approach here. Experimental results demonstrate that the proposed modification achieves better results, in terms of both peak signal-to-noise ratio (PSNR) and subjective visual quality, than the original method under strong noise.
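
The comparison above is reported in PSNR; for completeness, a minimal PSNR helper (assuming 8-bit images with a peak value of 255):

    # Minimal PSNR helper for comparing a denoised image against its
    # clean reference; peak=255 assumes 8-bit images.
    import numpy as np

    def psnr(reference, denoised, peak=255.0):
        mse = np.mean((reference.astype(np.float64) - denoised.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)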


Optimization Methods & Software | 2012

A feature selection method for nonparallel plane support vector machine classification

Qiaolin Ye; Chunxia Zhao; Ning Ye; Hao Zheng; Xiaobo Chen

Over the past decades, algorithms based on 1-norm techniques have been widely used to suppress input features. Quite different from the traditional 1-norm support vector machine (SVM), direct 1-norm optimization based on the primal problems of nonparallel plane classifiers, such as the generalized proximal support vector machine, the twin support vector machine (TWSVM) and the least squares twin support vector machine (LSTSVM), is not capable of generating very sparse solutions, which are vital for classification and make classifiers easier to store and faster to compute. To address this issue, we develop a feature selection method for LSTSVM, called a feature selection method for nonparallel plane support vector machine classification (FLSTSVM), which is specially designed for strong feature suppression. We incorporate a Tikhonov regularization term into the objective of LSTSVM and then minimize its 1-norm measure. The solution of FLSTSVM follows directly from solving two smaller quadratic programming problems (QPPs) arising from the two primal QPPs, as opposed to the two dual ones in TWSVM. FLSTSVM is capable of generating very sparse solutions, which means it can reduce input features in the linear case; when a nonlinear classifier is used, only a few kernel functions determine the classifier. In addition to strong feature suppression, our method has the advantage of faster computing time compared to TWSVM, the Newton Method for Linear Programming SVM (NLPSVM) and LPNewton. Finally, the algorithm is evaluated on public datasets as well as an Exclusive Or (XOR) example.
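
The core mechanism, a 1-norm penalty driving weights to exactly zero so that the corresponding features drop out, can be illustrated with a generic L1-penalized linear SVM from scikit-learn. This stand-in is not the FLSTSVM solver; the dataset and parameters are arbitrary.

    # Hedged sketch: 1-norm (L1) regularization as a feature-suppression
    # device; a generic stand-in, not the FLSTSVM formulation.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import LinearSVC

    X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                               random_state=0)
    clf = LinearSVC(penalty="l1", dual=False, C=0.1).fit(X, y)  # 1-norm objective
    kept = np.flatnonzero(clf.coef_)             # non-zero weights = surviving features
    print(f"{kept.size} of {X.shape[1]} features survive the 1-norm penalty")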


Neurocomputing | 2014

Feature selection for least squares projection twin support vector machine

Jianhui Guo; Ping Yi; Ruili Wang; Qiaolin Ye; Chunxia Zhao

In this paper, we propose a new feature selection approach for the recently proposed Least Squares Projection Twin Support Vector Machine (LSPTSVM) for binary classification. The 1-norm is used in our feature selection objective, so that only the non-zero elements of the weight vectors are chosen as selected features. A Tikhonov regularization term is also incorporated into the objective to reduce the singularity problems of the Quadratic Programming Problems (QPPs), and its 1-norm measure is then minimized. This yields a strong feature suppression capability; we call the approach Feature Selection for Least Squares Projection Twin Support Vector Machine (FLSPTSVM). The solutions of FLSPTSVM are obtained by solving two smaller QPPs arising from the two primal QPPs, as opposed to the two dual ones in the Twin Support Vector Machine (TWSVM). Thus, FLSPTSVM is capable of generating sparse solutions, which means it can reduce the number of input features in the linear case. Our linear FLSPTSVM can also be extended to the nonlinear case with the kernel trick; when a nonlinear classifier is used, the number of kernel functions required for the classifier is reduced. Our experiments on publicly available datasets demonstrate that FLSPTSVM has classification accuracy comparable to that of LSPTSVM while obtaining sparse solutions.
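
The Tikhonov term mentioned above guards against singular systems. A tiny numpy illustration of the effect (generic, not the FLSPTSVM objective): adding eps*I to a rank-deficient Gram matrix makes the least-squares system solvable.

    # Hedged illustration: a Tikhonov term eps*I stabilizes a least-squares
    # system whose Gram matrix X^T X is singular. Generic, not FLSPTSVM.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((30, 2))
    X = np.hstack([X, X[:, :1]])               # duplicated column -> singular X^T X
    y = rng.standard_normal(30)
    eps = 1e-3                                 # Tikhonov regularization strength
    w = np.linalg.solve(X.T @ X + eps * np.eye(3), X.T @ y)   # always solvable
    print("regularized solution:", w)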


Optimization Methods & Software | 2012

Least squares twin support vector machine classification via maximum one-class within class variance

Qiaolin Ye; Chunxia Zhao; Ning Ye

A twin support vector machine (TWSVM), an effective classification tool, finds two non-parallel planes by solving two quadratic programming problems (QPPs); the QPPs lead to higher computational costs. The least squares twin SVM (LSTSVM), a variant of TWSVM, avoids this deficiency and obtains the two non-parallel planes directly by solving two systems of linear equations. Both TWSVM and LSTSVM operate directly on patterns using two constrained optimizations and, respectively, use the constraints to estimate the distance between each plane for its own class and the patterns of the other class. However, such approaches weaken the geometric interpretation of the generalized proximal SVM (GEPSVM), so that on many Exclusive Or (XOR) examples with different distributions they may yield worse classification performance. Moreover, in addition to failing to discover the local geometry inside the samples, they are sensitive to outliers. In this paper, inspired by several geometrically motivated learning algorithms and the advantages of LSTSVM, we first propose a new classifier, called LSTSVM classification via maximum one-class within-class variance (MWSVM), which is specially designed to avoid the aforementioned deficiencies while keeping the advantages of LSTSVM. The new method directly incorporates the one-class within-class variance into the classifier, so that the genuine geometric interpretation of GEPSVM is expected to be kept in LSTSVM. Like LSTSVM, however, MWSVM may still perform worse in many cases, especially when outliers are present. Therefore, a localized version (LMWSVM) of MWSVM is further proposed to remove the outliers effectively. Another advantage of LMWSVM is that it takes the nearby points that are closest to each other as the training set, so that the MWSVM classifier is determined by a smaller set of training samples than LSTSVM. Naturally, this reduces the storage requirements of LSTSVM, especially when extended to nonlinear cases. Experiments carried out on both toy and real-world problems demonstrate the effectiveness of both MWSVM and LMWSVM.
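
Two ingredients named in the abstract, a one-class within-class scatter and a localized choice of training points, can be sketched as follows. Both helpers use assumed definitions for illustration; in particular, the distance-to-mean rule is only a stand-in for the paper's nearest-neighbor selection.

    # Hedged stand-ins for two ingredients named in the abstract; the
    # paper's exact localization rule is not reproduced.
    import numpy as np

    def within_class_scatter(A):
        centered = A - A.mean(axis=0)          # deviations inside one class
        return centered.T @ centered / A.shape[0]

    def localized_subset(A, keep=0.8):
        d = np.linalg.norm(A - A.mean(axis=0), axis=1)
        k = max(1, int(keep * A.shape[0]))
        return A[np.argsort(d)[:k]]            # drop farthest points (likely outliers)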


Neurocomputing | 2016

Local and global regularized sparse coding for data representation

Zhenqiu Shu; Pu Huang; Xun Yu; Zhangjing Yang; Chunxia Zhao

Recently, sparse coding has been widely adopted for data representation in real-world applications. In order to consider the geometric structure of data, we propose a novel method, local and global regularized sparse coding (LGSC), for data representation. LGSC not only models the global geometric structure by a global regression regularizer, but also takes into account the manifold structure using a local regression regularizer. Compared with traditional sparse coding methods, the proposed method can preserve both global and local geometric structures of the original high-dimensional data in a new representation space. Experimental results on benchmark datasets show that the proposed method can improve the performance of clustering.
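
The abstract does not spell out the regularizers' exact form; the sketch below shows the general shape of such an objective, with sparse codes fitted by ISTA and a graph-Laplacian smoothness gradient standing in for the local term. The Laplacian-based term is an assumption for illustration, not the authors' regression-based regularizers.

    # Hedged sketch: sparse coding via ISTA with a graph-Laplacian
    # smoothness term as a stand-in for a "local" regularizer.
    import numpy as np

    def regularized_sparse_codes(X, D, L, lam=0.1, alpha=0.05, iters=200):
        """X: d x n data; D: d x k dictionary; L: symmetric n x n Laplacian."""
        S = np.zeros((D.shape[1], X.shape[1]))
        step = 1.0 / (np.linalg.norm(D, 2) ** 2 + alpha * np.linalg.norm(L, 2) + 1e-8)
        for _ in range(iters):
            grad = D.T @ (D @ S - X) + alpha * S @ L            # fit + smoothness
            S = S - step * grad
            S = np.sign(S) * np.maximum(np.abs(S) - step * lam, 0.0)  # soft threshold
        return S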


Expert Systems With Applications | 2011

Distance difference and linear programming nonparallel plane classifier

Qiaolin Ye; Chunxia Zhao; Haofeng Zhang; Ning Ye

We first propose Distance Difference GEPSVM (DGEPSVM), a binary classifier that obtains two nonparallel planes by solving two standard eigenvalue problems. Compared with GEPSVM, this algorithm need not worry about the singularity that occurs in GEPSVM, yet achieves better classification accuracy. The formulation is capable of dealing with XOR problems with different distributions while keeping the genuine geometrical interpretation of the primal GEPSVM. Moreover, the proposed algorithm gives classification accuracy comparable to that of LSTSVM and TWSVM, but with fewer parameters. We then incorporate regularization techniques into TWSVM. With the help of the regularized formulation, a linear programming formulation for TWSVM, called FETSVM, is proposed to improve TWSVM sparsity and thereby suppress input features. This means FETSVM is capable of reducing the number of input features in the linear case; when a nonlinear classifier is used, only a few kernel functions determine the classifier. Finally, the algorithms are evaluated on artificial and public datasets. To further illustrate their effectiveness, we also apply them to USPS handwritten digits.
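
The abstract says each DGEPSVM plane comes from a standard (rather than generalized) eigenvalue problem. One natural, MMC-style reading of "distance difference" is sketched below; this is a plausible reconstruction, not guaranteed to match the authors' exact objective.

    # Hedged sketch: a plane from a standard eigenvalue problem on a
    # difference of scatter matrices; an assumed reading, not the exact paper.
    import numpy as np

    def distance_difference_plane(A, B, delta=1.0):
        """Plane close to class A, far from class B; rows are samples."""
        G = np.hstack([A, np.ones((A.shape[0], 1))])   # augmented [w; b] coordinates
        H = np.hstack([B, np.ones((B.shape[0], 1))])
        M = G.T @ G - delta * H.T @ H                  # near-distance minus far-distance
        vals, vecs = np.linalg.eigh(M)                 # standard, not generalized, EVP
        z = vecs[:, 0]                                 # eigenvector of smallest eigenvalue
        return z[:-1], z[-1]                           # w, b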


Neurocomputing | 2010

AUC maximization linear classifier based on active learning and its application

Guang Han; Chunxia Zhao

To address the labeling and ranking difficulties caused by the large number and uneven distribution of samples in outdoor obstacle detection for autonomous mobile robots, this paper proposes an AUC maximization linear classifier method based on active learning. The method first uses a dynamic clustering algorithm to select representative samples, labels them, and puts them into the training set. A linear classifier is then trained on the training set using the AUC maximization method. This process is repeated until the AUC converges. Experiments are performed on a database of real outdoor environment images. The results show that the proposed method obtains very good detection results with only 120 samples. More importantly, the proposed method significantly reduces both the labeling workload and the size of the sample set, and the proposed AUC maximization also outperforms existing methods.
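
The loop the abstract describes can be sketched with scikit-learn pieces. KMeans stands in for the dynamic clustering step, LogisticRegression for the AUC-maximizing linear classifier, and the true labels play the human labeler; all three are stand-ins, not the paper's components.

    # Hedged sketch of the abstract's loop: cluster, label representatives,
    # train a linear classifier, stop when AUC converges.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    labeled = np.zeros(len(X), dtype=bool)
    prev_auc = 0.0
    for round_ in range(10):
        pool = np.flatnonzero(~labeled)
        km = KMeans(n_clusters=12, n_init=10, random_state=round_).fit(X[pool])
        for c in km.cluster_centers_:                  # label one sample per cluster
            labeled[pool[np.argmin(np.linalg.norm(X[pool] - c, axis=1))]] = True
        clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
        auc = roc_auc_score(y, clf.decision_function(X))
        if abs(auc - prev_auc) < 1e-3:                 # AUC has converged
            break
        prev_auc = auc
    print(f"labeled {labeled.sum()} samples, AUC {auc:.3f}")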


International Conference on Image Processing | 2016

Structured Discriminative Nonnegative Matrix Factorization for hyperspectral unmixing

Xue Li; Lei Tong; Xun Yu; Jianhui Guo; Chunxia Zhao

Hyperspectral unmixing is an important technique for identifying the constituent spectra and estimating their corresponding fractions in an image. Nonnegative Matrix Factorization (NMF) has recently been widely used for hyperspectral unmixing. However, due to the complex distribution of hyperspectral data, most existing NMF algorithms cannot adequately reflect the intrinsic relationship of the data. In this paper, we propose a novel method, Structured Discriminative Nonnegative Matrix Factorization (SDNMF), to preserve the structural information of hyperspectral data. This is achieved by introducing structured discriminative regularization terms to model both local affinity and distant repulsion of observed spectral responses. Moreover, considering that the abundances of most materials are sparse, a sparseness constraint is also introduced into SDNMF. Experimental results on both synthetic and real data have validated the effectiveness of the proposed method which achieves better unmixing performance than several alternative approaches.
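
The sparse-NMF backbone mentioned at the end of the abstract (an L1 sparseness penalty on the abundances) can be written with standard multiplicative updates. SDNMF's structured discriminative regularizers are omitted here, so this is only the base model.

    # Hedged sketch: NMF with an L1 sparseness penalty on abundances H,
    # via multiplicative updates; SDNMF's structured terms are omitted.
    import numpy as np

    def sparse_nmf(V, rank, lam=0.1, iters=500, eps=1e-9, seed=0):
        """V: nonnegative bands x pixels matrix; W: endmembers, H: abundances."""
        rng = np.random.default_rng(seed)
        W = rng.random((V.shape[0], rank)) + eps
        H = rng.random((rank, V.shape[1])) + eps
        for _ in range(iters):
            H *= (W.T @ V) / (W.T @ W @ H + lam + eps)   # L1 adds lam to denominator
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H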

Collaboration


Dive into Chunxia Zhao's collaborations.

Top Co-Authors

Qiaolin Ye
Nanjing University of Science and Technology

Ning Ye
Nanjing Forestry University

Zhenqiu Shu
Nanjing University of Science and Technology

Jianhui Guo
Nanjing University of Science and Technology

Xiaobo Chen
Nanjing University of Science and Technology

Xue Li
Nanjing University of Science and Technology

Yingkun Hou
Nanjing University of Science and Technology

Yong Cheng
Nanjing University of Science and Technology

Xun Yu
Griffith University