Jiancheng Lv
Sichuan University
Publication
Featured research published by Jiancheng Lv.
IEEE Transactions on Neural Networks | 2005
Zhang Yi; Mao Ye; Jiancheng Lv; Kok Kiong Tan
The convergence of minor-component analysis (MCA) algorithms is an important issue bearing on the use of these methods in practical applications. This correspondence studies the convergence of Feng's MCA learning algorithm via a corresponding deterministic discrete-time (DDT) system. Some sufficient convergence conditions are obtained for Feng's MCA learning algorithm with a constant learning rate. Simulations are carried out to illustrate the theory.
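Feng's exact update rule is not reproduced in the abstract. As an illustrative stand-in, the sketch below runs a generic minor-component DDT iteration, projected gradient descent on the Rayleigh quotient with a constant learning rate, which converges to the eigenvector of the smallest eigenvalue; the data, learning rate, and iteration count are assumptions.

    import numpy as np

    # Illustrative DDT minor-component iteration (not Feng's exact rule):
    # w <- w - eta * (C w - (w^T C w) w), renormalized each step.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((500, 3)) @ np.diag([3.0, 1.0, 0.2])
    C = A.T @ A / len(A)                       # sample covariance (3 x 3)

    eta = 0.1                                  # constant learning rate
    w = rng.standard_normal(3)
    w /= np.linalg.norm(w)

    for _ in range(2000):
        w -= eta * (C @ w - (w @ C @ w) * w)   # Rayleigh-quotient gradient step
        w /= np.linalg.norm(w)                 # stay on the unit sphere

    minor = np.linalg.eigh(C)[1][:, 0]         # eigenvector of smallest eigenvalue
    print(abs(w @ minor))                      # ~1.0: w aligns with the minor component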
IEEE Transactions on Fuzzy Systems | 2010
Jiancheng Lv; Kok Kiong Tan; Zhang Yi; Sunan Huang
In this paper, we analyze Xu and Yuille's robust principal component analysis (RPCA) learning algorithms by means of a distance measure in the data space. Based on this analysis, a family of fuzzy RPCA learning algorithms that is robust against outliers is proposed. Although Xu and Yuille's algorithms were derived from a statistical physics approach, the proposed algorithms can be understood explicitly from the viewpoint of fuzzy set theory. In the proposed algorithms, an adaptive learning procedure overcomes the difficulty of selecting learning parameters in Xu and Yuille's algorithms. Furthermore, the robustness of the proposed algorithms is investigated using the theory of influence functions. Simulations are carried out to illustrate the robustness of these algorithms.
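The paper's exact energy function and membership definition are not given in the abstract; the sketch below only illustrates the general idea under that reading: a one-unit PCA update in which each sample is weighted by a soft membership that decays with its reconstruction error, so outliers contribute little. The membership function, its width sigma, and all constants are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((300, 5)) @ rng.standard_normal((5, 5))
    X[:10] += 50 * rng.standard_normal((10, 5))    # inject gross outliers

    w = rng.standard_normal(5)
    w /= np.linalg.norm(w)
    eta, sigma = 0.01, 5.0                         # illustrative constants

    for _ in range(50):
        for x in X:
            y = w @ x
            e = np.linalg.norm(x - y * w)          # reconstruction error of the sample
            u = 1.0 / (1.0 + (e / sigma) ** 2)     # fuzzy membership: outliers get u ~ 0
            w += eta * u * y * (x - y * w)         # membership-weighted Oja update
            w /= np.linalg.norm(w)

    top = np.linalg.eigh(np.cov(X[10:].T))[1][:, -1]
    print(abs(w @ top))                            # ~1.0: outliers barely affect the fit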
IEEE Transactions on Neural Networks | 2007
Jiancheng Lv; Zhang Yi; Kok Kiong Tan
Adaptively determining an appropriate number of principal directions for principal component analysis (PCA) neural networks is an important problem to address when one uses PCA neural networks for online feature extraction. In this letter, inspired by biological neural networks, a single-layer neural network model with lateral connections is proposed that uses an improved generalized Hebbian algorithm (GHA) to address this problem. In the proposed model, the number of principal directions can be adaptively determined to approximate the intrinsic dimensionality of the given data set, so that the dimensionality of the data set can be reduced to approach the intrinsic dimensionality to any required precision through the network.
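The abstract does not spell out the lateral-connection mechanism, so the sketch below shows only the standard GHA core that the model builds on, followed by a variance-threshold check as a stand-in for the adaptive choice of the number of directions; the threshold and all constants are assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.standard_normal((1000, 4)) @ np.diag([3.0, 2.0, 0.3, 0.1])

    m, eta = 4, 0.005
    W = 0.1 * rng.standard_normal((m, 4))    # rows approximate principal directions

    for _ in range(20):
        for x in X:
            y = W @ x
            # GHA: Hebbian term minus a lower-triangular (Gram-Schmidt-like) correction
            W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

    # Stand-in for the adaptive mechanism (not the paper's exact rule): keep the
    # directions whose share of the captured variance exceeds a small threshold.
    var = np.array([np.var(X @ w) for w in W])
    print(int((var / var.sum() > 0.02).sum()), "directions retained")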
IEEE Transactions on Signal Processing | 2009
Jiancheng Lv; Kok Kiong Tan; Zhang Yi; Sunan Huang
The convergence of a class of Hyvärinen-Oja's independent component analysis (ICA) learning algorithms with constant learning rates is investigated by analyzing the original stochastic discrete-time (SDT) algorithms and the corresponding deterministic discrete-time (DDT) algorithms. Most existing learning rates for ICA learning algorithms are required to approach zero as the learning step increases, which is not a reasonable requirement in many practical applications; constant learning rates overcome this shortcoming. First, the original algorithms, described as SDT algorithms, are studied directly. Invariant sets of these algorithms are obtained so that nondivergence of the algorithms is guaranteed in a stochastic environment. Within the invariant sets, the local convergence of the original algorithms is analyzed by indirectly studying the convergence of the corresponding DDT algorithms. It is rigorously proven that trajectories of the DDT algorithms starting from the invariant sets converge to an independent component direction with positive or negative kurtosis. These convergence results shed light on the dynamical behavior of the original SDT algorithms. Furthermore, the corresponding DDT algorithms are extended to block versions of the original SDT algorithms. The block algorithms not only establish a relationship between the SDT algorithms and the corresponding DDT algorithms, but also achieve good convergence speed and accuracy in practice. Simulation examples are carried out to illustrate the theory.
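As a concrete illustration, the sketch below runs a one-unit kurtosis-based ICA update with a constant learning rate on whitened data, normalizing the weight after each step so the iterate stays in a bounded, invariant-set-like region; the specific nonlinearity, learning rate, and data are assumptions, not the paper's exact algorithm.

    import numpy as np

    rng = np.random.default_rng(3)
    S = rng.uniform(-1, 1, (2, 5000))          # two sub-Gaussian (uniform) sources
    X = rng.standard_normal((2, 2)) @ S        # observed mixtures

    # Whitening, the usual ICA preprocessing step.
    X -= X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    Z = E @ np.diag(d ** -0.5) @ E.T @ X

    eta = 0.5                                  # constant learning rate
    w = rng.standard_normal(2)
    w /= np.linalg.norm(w)

    for _ in range(200):
        y = w @ Z
        k = np.mean(y ** 4) - 3                # empirical kurtosis of the estimate
        grad = (Z @ y ** 3) / Z.shape[1] - 3 * w
        w += eta * np.sign(k) * grad           # kurtosis-gradient step
        w /= np.linalg.norm(w)                 # keeps the iterate bounded

    print(np.corrcoef(w @ Z, S)[0, 1:])        # one entry ~ +/-1: a source recovered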
IEEE Transactions on Neural Networks | 2007
Jiancheng Lv; Zhang Yi; Kok Kiong Tan
The generalized Hebbian algorithm (GHA) is one of the most widely used principal component analysis (PCA) neural network (NN) learning algorithms. The learning rates of GHA play an important role in the convergence of the algorithm in applications. Traditionally, the learning rates of GHA are required to converge to zero so that its convergence can be analyzed by studying the corresponding deterministic continuous-time (DCT) equations. However, requiring learning rates to approach zero is not practical in applications, due to computational roundoff limitations and tracking requirements. In this paper, nonzero-approaching adaptive learning rates are proposed to overcome this problem. The proposed adaptive learning rates converge to positive constants, which not only speeds up the algorithm's evolution considerably, but also guarantees global convergence of the GHA algorithm. The convergence is studied in detail by analyzing the corresponding deterministic discrete-time (DDT) equations. Extensive simulations are carried out to illustrate the theory.
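The paper's adaptive rate is data-driven and its exact formula is not in the abstract. The sketch below substitutes a simple schedule that decays to a positive constant rather than to zero, applied to a one-unit Oja/GHA update; the schedule and its parameters are assumptions.

    import numpy as np

    rng = np.random.default_rng(4)
    X = rng.standard_normal((2000, 3)) @ np.diag([3.0, 1.0, 0.5])

    w = rng.standard_normal(3)
    w /= np.linalg.norm(w)
    eta0, eta_inf, rho = 0.05, 0.01, 0.999     # schedule parameters (assumed)

    for k, x in enumerate(X):
        eta = eta_inf + (eta0 - eta_inf) * rho ** k   # decays to eta_inf > 0, not to 0
        y = w @ x
        w += eta * y * (x - y * w)                    # one-unit Oja/GHA update

    top = np.linalg.eigh(X.T @ X / len(X))[1][:, -1]
    print(abs(w @ top))                               # ~1.0: first principal direction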
IEEE Transactions on Neural Networks | 2015
Jiancheng Lv; Zhang Yi; Yunxia Li
Learning algorithms play an important role in the practical application of neural networks based on principal component analysis, often determining the success, or otherwise, of these applications. These algorithms must not diverge, but it is very difficult to study their convergence properties directly, because they are stochastic discrete-time (SDT) algorithms. This brief analyzes the original SDT algorithms directly and derives invariant sets that guarantee the nondivergence of these algorithms in a stochastic environment, provided the learning parameters are selected appropriately. The theoretical results are verified by a series of simulation examples.
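The invariant sets themselves are derived analytically in the paper; as a purely numerical stand-in, the sketch below runs a one-unit SDT PCA update on bounded inputs with a small constant learning rate and checks that the weight norm never leaves a fixed ball. The bounds and constants are assumptions.

    import numpy as np

    rng = np.random.default_rng(5)
    X = np.clip(rng.standard_normal((5000, 3)), -3, 3)   # bounded inputs (assumption)

    eta = 0.02
    w = rng.standard_normal(3)
    norms = []
    for x in X:
        y = w @ x
        w += eta * y * (x - y * w)        # stochastic (SDT) one-unit PCA update
        norms.append(np.linalg.norm(w))

    # With bounded inputs and a small constant learning rate, the iterates
    # remain in a fixed ball, i.e., the algorithm does not diverge.
    print(min(norms), max(norms))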
Knowledge Based Systems | 2018
Shudong Huang; Zenglin Xu; Jiancheng Lv
The goal of document co-clustering is to partition textual data sets into groups by exploiting the duality between documents (i.e., data points) and words (i.e., features): documents can be grouped based on their distribution over words, while words can be grouped based on their distribution over documents. However, traditional co-clustering methods are usually sensitive to the input affinity matrix, since they partition the data based on a fixed data graph. To address this limitation, we propose a new co-clustering framework with adaptive local structure learning, based on nonnegative matrix tri-factorization. The proposed unified learning framework performs intrinsic structure learning and tri-factorization (i.e., 3-factor factorization) simultaneously. The intrinsic structure is adaptively learned from the results of the tri-factorization, and the factors are reformulated to preserve the refined local structure of the textual data; in this way, local structure learning and factorization mutually improve each other. Furthermore, exploiting the duality between documents and words, the proposed framework explores the adaptive local structure of both the data space and the feature space. To solve the optimization problem, an efficient iterative updating algorithm with guaranteed convergence is proposed. Experiments on benchmark textual data sets demonstrate the effectiveness of the proposed method.
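The core factorization X ~ F S G^T can be sketched with the standard multiplicative updates for plain nonnegative tri-factorization; the paper's adaptive local-structure terms are omitted here, and the matrix sizes and iteration count are assumptions.

    import numpy as np

    rng = np.random.default_rng(6)
    X = np.abs(rng.standard_normal((40, 60)))   # stand-in document-word matrix
    r, c, eps = 3, 4, 1e-9                      # row clusters, column clusters

    F = np.abs(rng.standard_normal((40, r)))    # document cluster indicators
    S = np.abs(rng.standard_normal((r, c)))     # cluster association matrix
    G = np.abs(rng.standard_normal((60, c)))    # word cluster indicators

    for _ in range(200):
        # Multiplicative updates for ||X - F S G^T||_F^2, keeping all factors
        # nonnegative (plain tri-factorization, without the adaptive graph).
        F *= (X @ G @ S.T) / (F @ S @ G.T @ G @ S.T + eps)
        G *= (X.T @ F @ S) / (G @ S.T @ F.T @ F @ S + eps)
        S *= (F.T @ X @ G) / (F.T @ F @ S @ G.T @ G + eps)

    print(np.linalg.norm(X - F @ S @ G.T))      # reconstruction error after fitting
    docs, words = F.argmax(axis=1), G.argmax(axis=1)   # co-cluster assignments
    print(np.bincount(docs, minlength=r), np.bincount(words, minlength=c))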
Knowledge Based Systems | 2016
Yongsheng Sang; Jiancheng Lv; Hong Qu; Zhang Yi
Finding shortest paths is an important problem in transportation and communication networks. This paper develops a Pulse-Coupled Neural Network (PCNN) model to efficiently compute a single-pair shortest path. Unlike most existing PCNN models, the proposed model is endowed with a special mechanism, called on-forward/off-backward: if a neuron fires, its neighboring neurons in a certain forward region are excited, whereas the neurons in a backward region are inhibited. As a result, the model produces a restricted autowave that propagates at different speeds in different directions, in contrast to completely nondeterministic PCNN models. Compared with some traditional methods, the proposed PCNN model significantly reduces the computational cost of searching for the shortest path. Experimental results further confirm the efficiency and effectiveness of the proposed model.
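A time-driven reading of PCNN path finding is that a neuron's first firing time equals its shortest-path distance from the source. The sketch below simulates such an autowave with an event queue; it is not the paper's on-forward/off-backward model, and the example graph is an assumption.

    import heapq

    def autowave_shortest_path(graph, src, dst):
        # A neuron fires when the wave from an already-fired neighbor reaches it;
        # its first firing time equals the shortest-path distance from src.
        fire_time = {src: 0.0}
        parent = {src: None}
        events = [(0.0, src)]                 # (arrival time, neuron)
        while events:
            t, u = heapq.heappop(events)
            if u == dst:
                break
            if t > fire_time.get(u, float("inf")):
                continue                      # stale event: u already fired earlier
            for v, w in graph[u]:
                if t + w < fire_time.get(v, float("inf")):
                    fire_time[v] = t + w      # the wave reaches v sooner via u
                    parent[v] = u
                    heapq.heappush(events, (t + w, v))
        path, node = [], dst
        while node is not None:
            path.append(node)
            node = parent[node]
        return fire_time[dst], path[::-1]

    g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2), ("d", 6)],
         "c": [("d", 3)], "d": []}
    print(autowave_shortest_path(g, "a", "d"))   # (6.0, ['a', 'b', 'c', 'd'])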
IEEE Transactions on Neural Networks | 2018
Xiaojie Li; Jiancheng Lv; Zhang Yi
Detecting boundary points (including outliers) is often more interesting than detecting normal observations, since such points represent valid, interesting, and potentially valuable patterns. Because a data representation can uncover the intrinsic data structure, we present an efficient representation-based method for detecting such points, which are generally located around the margin of densely distributed data, such as a cluster. For each point, the negative components in its representation generally correspond to boundary points among the points in its affine combination. In the presented method, the reverse unreachability of a point is proposed to evaluate the degree to which the point is a boundary point; it is calculated by counting the number of zero and negative components in the representation. The reverse unreachability explicitly takes the global data structure into account and reveals the disconnectivity between a data point and the other points. Points in lower-density regions receive higher reverse-unreachability scores, and the score of an outlier is greater than that of a boundary point. The top-
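A minimal sketch of this scoring, assuming an LLE-style solver for the affine (sum-to-one) representation over all other points and a small ridge term for numerical stability; the paper's exact solver, regularization, and tolerance may differ.

    import numpy as np

    def reverse_unreachability(X, reg=1e-3, tol=1e-8):
        # For each point, solve the affine least-squares representation over all
        # other points and count its zero/negative coefficients.
        n = len(X)
        scores = np.empty(n)
        for i in range(n):
            D = np.delete(X, i, axis=0) - X[i]      # other points, shifted to x_i
            G = D @ D.T                             # local Gram matrix
            G += reg * np.trace(G) * np.eye(n - 1)  # ridge term for stability
            c = np.linalg.solve(G, np.ones(n - 1))  # minimizes c^T G c s.t. sum(c)=1
            c /= c.sum()                            # enforce the affine constraint
            scores[i] = np.sum(c <= tol)            # zero and negative components
        return scores

    rng = np.random.default_rng(7)
    X = rng.standard_normal((150, 2))
    s = reverse_unreachability(X)
    print(X[np.argsort(-s)[:5]])                    # candidate boundary/outlier points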
Archive | 2015
Xiaojie Li; Jiancheng Lv; Dongdong Cheng