
Publications


Featured research published by Feilong Cao.


Neurocomputing | 2011

A study on effectiveness of extreme learning machine

Yuguang Wang; Feilong Cao; Yubo Yuan

Extreme learning machine (ELM), proposed by Huang et al., has been shown to be a promising learning algorithm for single-hidden-layer feedforward neural networks (SLFNs). Nevertheless, because of the random choice of input weights and biases, the ELM algorithm sometimes produces a hidden-layer output matrix H of the SLFN that is not of full column rank, which lowers the effectiveness of ELM. This paper discusses the effectiveness of ELM and proposes an improved algorithm, called EELM, that properly selects the input weights and biases before calculating the output weights, which guarantees full column rank of H in theory. This improves to some extent the learning performance (testing accuracy, prediction accuracy, learning time) and the robustness of the networks. Experimental results on both benchmark function approximation and real-world classification and regression problems show the good performance of EELM.
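The ELM training step the abstract refers to, random hidden parameters followed by a least-squares solve for the output weights, can be sketched in a few lines (a minimal illustration of standard ELM, not the authors' EELM; function names and the toy data are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=50):
    """ELM-style training of a single-hidden-layer network:
    random input weights/biases, least-squares output weights."""
    n_features = X.shape[1]
    W = rng.normal(size=(n_features, n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer output matrix
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # pseudo-inverse solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Fit a noisy sine curve
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * rng.normal(size=200)
W, b, beta = elm_train(X, y)
pred = elm_predict(X, W, b, beta)
```

The rank issue the paper targets shows up exactly in the `lstsq` step: if two hidden units happen to compute nearly identical functions, H loses column rank and the solution degrades.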


Neurocomputing | 2008

The estimate for approximation error of neural networks: A constructive approach

Feilong Cao; Tingfan Xie; Zongben Xu

Neural networks are widely used in many applications, including astrophysics, image processing, recognition, robotics, and automated target tracking. Their ability to approximate arbitrary functions is the main reason for this popularity. The main result of this paper is a constructive proof of a formula for the upper bound of the approximation error of feedforward neural networks with one hidden layer of sigmoidal units and a linear output. The result can also be used to estimate the complexity of the maximum-error network. An example demonstrating the theoretical result is given.


Neurocomputing | 2013

Image classification based on effective extreme learning machine

Feilong Cao; Bo Liu; Dong Sun Park

In this work, a new image classification method is proposed based on extreme k-means (EKM) and the effective extreme learning machine (EELM). The proposed method decomposes images with the curvelet transform, reduces dimensionality with discriminative locality alignment (DLA), generates a set of distinctive features with EKM, and classifies with EELM. Since EKM clusters better than k-means and EELM is more accurate than ELM, the proposed EKM-EELM algorithm yields a significant improvement in classification rate. Extensive experiments are performed on challenging databases and the results are compared against state-of-the-art techniques. The experimental results show that the proposed method achieves a superior classification rate compared with traditional methods for image classification.
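The codebook stage of such a pipeline can be sketched with plain k-means as a stand-in for EKM (our simplification; the curvelet and DLA stages are omitted, and the toy "patch descriptors" are random data):

```python
import numpy as np

rng = np.random.default_rng(7)

def kmeans(X, k, n_iter=50):
    """Plain k-means (Lloyd's algorithm) to learn a codebook."""
    C = X[rng.choice(len(X), k, replace=False)]      # random init
    for _ in range(n_iter):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        a = d.argmin(1)                               # assign to nearest center
        for j in range(k):
            if np.any(a == j):
                C[j] = X[a == j].mean(0)              # update centers
    return C

def encode(X, C):
    """Hard-assignment histogram feature over the codebook."""
    d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    a = d.argmin(1)
    return np.bincount(a, minlength=len(C)) / len(X)

patches = rng.random((500, 8))   # toy patch descriptors for one image
C = kmeans(patches, k=10)
feat = encode(patches, C)        # 10-dim image feature fed to the classifier
```

The resulting histogram is the per-image feature vector that the final classifier (EELM in the paper) would consume.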


Information Sciences | 2015

A probabilistic learning algorithm for robust modeling using neural networks with random weights

Feilong Cao; Hailiang Ye; Dianhui Wang

Robust modeling approaches have received considerable attention due to their practical value in dealing with outliers in data. This paper proposes a probabilistic robust learning algorithm for neural networks with random weights (NNRWs) to improve modeling performance. The robust NNRW model is trained by optimizing a hybrid regularization loss function motivated by the sparsity of outliers and compressive sensing theory. The well-known expectation-maximization (EM) algorithm is employed to implement the proposed algorithm under certain assumptions on the noise distribution. Experimental results on function approximation and on UCI data sets for regression and classification demonstrate that the proposed algorithm is promising, with good potential for real-world applications.
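The paper's EM-based scheme is more involved; as a rough illustration of robust output-weight estimation in a random-weights network, one can iteratively reweight samples so that large residuals (likely outliers) count less. This is iteratively reweighted least squares for an L1-type loss, our stand-in for the EM step, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def nnrw_robust(X, y, n_hidden=15, n_iter=10, eps=1e-6):
    """Random-weights network with IRLS fitting of the output
    weights: samples with large residuals are downweighted."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    w = np.ones(len(y))                       # per-sample weights
    beta = np.zeros(n_hidden)
    for _ in range(n_iter):
        Hw = H * w[:, None]
        # weighted normal equations with a tiny ridge for stability
        beta = np.linalg.solve(Hw.T @ H + 1e-8 * np.eye(n_hidden),
                               Hw.T @ y)
        r = np.abs(y - H @ beta)
        w = 1.0 / np.maximum(r, eps)          # L1-style reweighting
    return W, b, beta

X = np.linspace(-1, 1, 100).reshape(-1, 1)
y = X.ravel() ** 2
y[::10] += 3.0                                # inject gross outliers
W, b, beta = nnrw_robust(X, y)
pred = np.tanh(X @ W + b) @ beta
```

On the clean points the fit tracks the underlying curve even though every tenth sample is corrupted by a large spike.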


Neurocomputing | 2014

Extended feed forward neural networks with random weights for face recognition

Jing Lu; Jianwei Zhao; Feilong Cao

Face recognition is a perennially active topic in pattern recognition and computer vision. Generally, images or features are converted into vectors in the recognition process; this vectorization usually distorts the correlative information among the elements of an image matrix. This paper designs a classifier, called the two-dimensional neural network with random weights (2D-NNRW), which takes matrix data as direct input and thereby preserves the image matrix structure. Specifically, the proposed classifier employs left and right projecting vectors in place of the usual high-dimensional input weights in the hidden layer to keep the correlative information among the elements, and adopts the idea of the neural network with random weights (NNRW) to learn all the parameters. Experiments on several well-known databases validate that the proposed 2D-NNRW classifier embodies the structural character of face images and performs well for face recognition.


Knowledge Based Systems | 2015

A local learning algorithm for random weights networks

Jianwei Zhao; Zhihui Wang; Feilong Cao; Dianhui Wang

Robust modelling is important for dealing with complex systems subject to uncertainty. This paper develops a novel learning algorithm for training regularized local random weights networks (RWNs). The learner model, termed RL-RWN, is built on the regularized moving least squares method and generalizes the solution obtained from the standard least squares technique. Simulations are carried out on two benchmark datasets, the Auto-MPG data and surface reconstruction data. The results demonstrate that the proposed RL-RWN outperforms the original RWN and radial basis function networks.
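The moving least squares idea, fitting a separate weighted least-squares model around each query point, can be sketched as follows (a generic regularized MLS illustration, not the authors' RL-RWN; bandwidth and ridge values are ours):

```python
import numpy as np

def mls_predict(Xtr, ytr, xq, h=0.1, ridge=1e-6):
    """Moving least squares: at query xq, fit a local linear model
    with Gaussian weights of bandwidth h (ridge-regularized)."""
    w = np.exp(-np.sum((Xtr - xq) ** 2, axis=1) / (2 * h ** 2))
    A = np.hstack([np.ones((len(Xtr), 1)), Xtr])      # [1, x] design
    Aw = A * w[:, None]
    coef = np.linalg.solve(Aw.T @ A + ridge * np.eye(A.shape[1]),
                           Aw.T @ ytr)
    return coef[0] + coef[1:] @ xq                    # local prediction

Xtr = np.linspace(-1, 1, 200).reshape(-1, 1)
ytr = np.sin(3 * Xtr).ravel()
pred = np.array([mls_predict(Xtr, ytr, xq) for xq in Xtr])
```

Because a fresh local model is solved per query, the method adapts to local structure at the cost of per-prediction work, the trade-off local learning algorithms of this kind accept.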


Neurocomputing | 2010

Approximation capability of interpolation neural networks

Feilong Cao; Shaobo Lin; Zongben Xu

It is well known that single-hidden-layer feed-forward neural networks (SLFNs) with at most n hidden neurons can learn n distinct samples with zero error, and that the weights connecting the input neurons to the hidden neurons and the hidden-node thresholds can be chosen randomly. That is, for n distinct samples there exist SLFNs with n hidden neurons that interpolate them; such networks are called exact interpolation networks for the samples. However, for some target functions (such as continuous or integrable functions), not all exact interpolation networks approximate the target well. This paper, using a functional approach, rigorously proves that for given distinct samples there exists an SLFN that not only exactly interpolates the samples but also nearly best approximates the target function.
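The starting fact, that n random hidden units can interpolate n distinct samples exactly, is easy to check numerically: with n hidden units the hidden output matrix H is square and almost surely invertible, so solving for the output weights gives zero training error (a sketch under that almost-sure full-rank assumption; the toy data are ours):

```python
import numpy as np

rng = np.random.default_rng(3)

n = 15                                       # number of distinct samples
X = rng.uniform(-1, 1, size=(n, 2))
y = rng.normal(size=n)

# SLFN with exactly n hidden sigmoidal units, random weights/thresholds
W = rng.normal(size=(2, n))
b = rng.normal(size=n)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))       # n x n hidden output matrix

# H is almost surely invertible, so beta = H^{-1} y interpolates exactly
beta = np.linalg.solve(H, y)
pred = H @ beta                              # equals y up to round-off
```

The paper's contribution is the further step this sketch cannot show: among all such interpolants, one can be chosen that also nearly best approximates the target function.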


Knowledge Based Systems | 2016

Pose and illumination variable face recognition via sparse representation and illumination dictionary

Feilong Cao; Heping Hu; Jing Lu; Jianwei Zhao; Zhenghua Zhou; Jiao Wu

This paper addresses the problem of face recognition under pose and illumination variations and proposes a novel algorithm inspired by the idea of sparse representation (SR). To make SR suitable for the case of pose variation, a multi-pose weighted sparse representation (MW-SR) algorithm is proposed that emphasizes the contributions of similar poses in the representation of the test image. Furthermore, when illumination variations are also present, it is more reasonable to build on the pose-variant recognition results than to follow the traditional SR approach of adding images of all pose and illumination variations to the training dictionary. A novel idea of the proposed algorithms is to append a general illumination dictionary to the training dictionary; once this illumination dictionary is constructed, it can be reused across face databases. Extensive experiments illustrate that the proposed algorithms outperform several existing methods for face recognition under pose and illumination variations.
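The underlying SR classification scheme codes a test image sparsely over the whole training dictionary and assigns the class whose atoms explain it best. A minimal sketch using ISTA for the sparse code (our generic SRC illustration with synthetic atoms; the paper's pose weighting and illumination dictionary are not modeled):

```python
import numpy as np

rng = np.random.default_rng(4)

def ista_l1(A, y, lam=0.05, n_iter=500):
    """Solve min_x 0.5*||y - Ax||^2 + lam*||x||_1 by ISTA."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + A.T @ (y - A @ x) / L        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink
    return x

def src_classify(A, labels, y):
    """Sparse-representation classification: code y over the whole
    dictionary, then pick the class with the smallest residual."""
    x = ista_l1(A, y)
    best, best_res = None, np.inf
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)   # keep only class-c coefficients
        res = np.linalg.norm(y - A @ xc)
        if res < best_res:
            best, best_res = c, res
    return best

# Toy dictionary: 5 unit-norm atoms per class
A = np.hstack([rng.normal(size=(30, 5)), rng.normal(size=(30, 5))])
A /= np.linalg.norm(A, axis=0)
labels = np.array([0] * 5 + [1] * 5)
y = A[:, 2] + 0.01 * rng.normal(size=30)     # noisy copy of a class-0 atom
```

The class-wise residual rule is what the MW-SR weighting and the appended illumination dictionary refine in the paper.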


Neural Networks | 2011

Essential rate for approximation by spherical neural networks

Shaobo Lin; Feilong Cao; Zongben Xu

We consider the optimal rate of approximation by single-hidden-layer feed-forward neural networks on the unit sphere. It is proved that there exists a neural network with n neurons, with an analytic, strictly increasing, sigmoidal activation function, such that the deviation of the Sobolev class W_2^{2r}(S^d) from the class of neural networks Φ_n(φ) behaves asymptotically as n^{-2r/(d-1)}. Namely, we prove that the essential rate of approximation by spherical neural networks is n^{-2r/(d-1)}.


Neural Networks | 2017

Recovering low-rank and sparse matrix based on the truncated nuclear norm

Feilong Cao; Jiaying Chen; Hailiang Ye; Jianwei Zhao; Zhenghua Zhou

Recovering the low-rank and sparse components of a given matrix is a challenging problem that arises in many real applications. Existing approaches to this problem are usually recast as a general approximation problem for a low-rank matrix; they are based on the nuclear norm of the matrix, so in practice the rank may not be well approximated. This paper presents a new approach based on a new matrix norm, the truncated nuclear norm (TNN). An efficient iterative scheme developed in the linearized alternating direction method of multipliers framework is proposed, in which two novel iterative algorithms are designed to recover the sparse and low-rank components of a matrix. More importantly, the convergence of the linearized alternating direction method of multipliers on our matrix recovery model is discussed and proved mathematically. To validate the effectiveness of the proposed methods, a series of comparative trials are performed on a variety of synthetic data sets. More specifically, the new methods are applied to background subtraction (foreground object detection) and to removing shadows and specularities from face images. The experimental results illustrate that the new frameworks are more effective and accurate than other methods.

Collaboration


Dive into Feilong Cao's collaborations.

Top Co-Authors

Jianwei Zhao (China Jiliang University)
Zongben Xu (Xi'an Jiaotong University)
Yubo Yuan (East China University of Science and Technology)
Shaobo Lin (China Jiliang University)
Zhenghua Zhou (China Jiliang University)
Yongquan Zhang (Xi'an Jiaotong University)
Chunmei Ding (China Jiliang University)
Yuguang Wang (University of New South Wales)
Bo Liu (China Jiliang University)