Yuan Lan
Nanyang Technological University
Publication
Featured research published by Yuan Lan.
Neurocomputing | 2009
Yuan Lan; Yeng Chai Soh; Guang-Bin Huang
Liang et al. [A fast and accurate online sequential learning algorithm for feedforward networks, IEEE Transactions on Neural Networks 17 (6) (2006) 1411-1423] proposed an online sequential learning algorithm called the online sequential extreme learning machine (OS-ELM), which can learn data one-by-one or chunk-by-chunk with fixed or varying chunk size. The same work showed that OS-ELM runs much faster and provides better generalization performance than other popular sequential learning algorithms. However, we find that the stability of OS-ELM can be further improved. In this paper, we propose an ensemble of online sequential extreme learning machines (EOS-ELM) based on OS-ELM. The results show that EOS-ELM is more stable and accurate than the original OS-ELM.
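The recursive step at the heart of OS-ELM, and the simple output averaging behind an EOS-ELM-style ensemble, can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; tanh additive nodes and the class and function names are assumptions.

```python
import numpy as np

class OSELM:
    """Minimal OS-ELM: a fixed random hidden layer with recursive
    least-squares updating of the output weights."""
    def __init__(self, n_inputs, n_hidden, rng):
        self.W = rng.standard_normal((n_inputs, n_hidden))  # fixed random input weights
        self.b = rng.standard_normal(n_hidden)              # fixed random biases

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def init_fit(self, X0, T0):
        # Initialization phase: needs at least n_hidden samples so H0'H0 is invertible.
        H0 = self._hidden(X0)
        self.P = np.linalg.inv(H0.T @ H0)
        self.beta = self.P @ H0.T @ T0

    def partial_fit(self, X, T):
        # Sequential phase: recursive least-squares update per chunk (any chunk size).
        H = self._hidden(X)
        K = np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P -= self.P @ H.T @ K @ H @ self.P
        self.beta += self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta

def eos_elm_predict(models, X):
    # EOS-ELM idea: average the outputs of independently initialized OS-ELMs.
    return np.mean([m.predict(X) for m in models], axis=0)
```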
IEEE Transactions on Neural Networks | 2012
Rui Zhang; Yuan Lan; Guang-Bin Huang; Zongben Xu
Extreme learning machines (ELMs) have been proposed for generalized single-hidden-layer feedforward networks which need not be neuron-like and perform well in both regression and classification applications. In this brief, we propose an ELM with adaptive growth of hidden nodes (AG-ELM), which provides a new approach to the automated design of networks. Unlike other incremental ELMs (I-ELMs), whose existing hidden nodes are frozen when new hidden nodes are added one by one, AG-ELM determines the number of hidden nodes adaptively, in the sense that an existing network may be replaced by a newly generated network that has fewer hidden nodes and better generalization performance. We then prove that such an AG-ELM using Lebesgue p-integrable hidden activation functions can approximate any Lebesgue p-integrable function on a compact input set. Simulation results verify that this new approach can achieve a more compact network architecture than the I-ELM.
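The growth-by-replacement principle can be loosely sketched as below, assuming tanh additive nodes and training error as the comparison criterion; the paper's actual growth rule and stopping condition are more refined than this illustration.

```python
import numpy as np

def train_elm(X, T, n_hidden, rng):
    # One random ELM: fixed tanh hidden layer, least-squares output weights.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ T
    return (W, b, beta), np.linalg.norm(H @ beta - T)

def ag_elm_sketch(X, T, steps, rng):
    # Growth by replacement: a freshly generated candidate network replaces
    # the current one whenever it achieves lower training error, even if it
    # has fewer hidden nodes; otherwise the current network is kept.
    size = 1
    net, err = train_elm(X, T, size, rng)
    for _ in range(steps):
        cand_size = max(1, size + int(rng.integers(-1, 2)))  # shrink, keep, or grow
        cand, cand_err = train_elm(X, T, cand_size, rng)
        if cand_err < err:
            net, err, size = cand, cand_err, cand_size
    return net, size, err
```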
Neurocomputing | 2010
Yuan Lan; Yeng Chai Soh; Guang-Bin Huang
Extreme learning machine (ELM), proposed by Huang et al., was developed for generalized single-hidden-layer feedforward networks (SLFNs) with a wide variety of hidden nodes. It has proven to be very fast and effective, especially for function approximation problems with a predetermined network structure. However, determining the network structure of the preliminary ELM can be tedious and may not lead to a parsimonious solution. In this paper, a systematic two-stage algorithm (named TS-ELM) is introduced to handle this problem. In the first stage, a forward recursive algorithm selects hidden nodes from candidates randomly generated in each step and adds them to the network until the stopping criterion reaches its minimum. In the second stage, the significance of each hidden node is reviewed and the insignificant ones are removed from the network, which drastically reduces the network complexity. The effectiveness of TS-ELM is verified by the empirical studies in this paper.
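A compact sketch of the two stages follows, under the assumptions of additive tanh nodes and squared training error as both the selection and the significance criterion; the paper's actual stopping rule differs.

```python
import numpy as np

def ts_elm_sketch(X, T, max_nodes, n_candidates, rng, tol=1e-4):
    """Stage 1: greedy forward selection among random candidate nodes.
    Stage 2: remove nodes whose deletion barely increases the error."""
    def fit_err(Hm):
        return np.linalg.norm(Hm @ (np.linalg.pinv(Hm) @ T) - T)

    H, params = np.empty((len(X), 0)), []
    for _ in range(max_nodes):
        best = None
        for _ in range(n_candidates):
            w, b = rng.standard_normal(X.shape[1]), rng.standard_normal()
            err = fit_err(np.column_stack([H, np.tanh(X @ w + b)]))
            if best is None or err < best[0]:
                best = (err, w, b)
        _, w, b = best
        H = np.column_stack([H, np.tanh(X @ w + b)])
        params.append((w, b))

    base = fit_err(H)
    keep = list(range(H.shape[1]))
    for j in reversed(range(H.shape[1])):
        trial = [k for k in keep if k != j]
        if trial and fit_err(H[:, trial]) - base < tol:
            keep = trial
    beta = np.linalg.pinv(H[:, keep]) @ T
    return [params[k] for k in keep], beta
```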
IEEE Transactions on Systems, Man, and Cybernetics | 2013
Rui Zhang; Yuan Lan; Guang-Bin Huang; Zongben Xu; Yeng Chai Soh
Extreme learning machines (ELMs) have been proposed for generalized single-hidden-layer feedforward networks which need not be neuron-like and perform well in both regression and classification applications. Determining a suitable network architecture is recognized to be crucial to the successful application of ELMs. This paper first proposes a dynamic ELM (D-ELM) in which hidden nodes can be recruited or deleted dynamically according to their significance to network performance, so that not only the parameters can be adjusted but also the architecture can be self-adapted simultaneously. The paper then proves in theory that such a D-ELM using Lebesgue p-integrable hidden activation functions can approximate any Lebesgue p-integrable function on a compact input set. Simulation results obtained over various test problems verify that the proposed D-ELM reduces the network size while preserving good generalization performance.
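The recruit-or-delete loop can be sketched as follows, with an illustrative significance measure (the norm of each node's weighted output contribution); the paper's actual significance criterion and thresholds are not reproduced here.

```python
import numpy as np

def d_elm_step(X, T, W, b, rng, sig_frac=0.05, err_target=0.1):
    """One dynamic step: delete insignificant hidden nodes, then recruit a
    random node if the training error is still above target."""
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ T
    # Illustrative significance of node j: norm of its contribution H[:, j] * beta[j].
    sig = np.array([np.linalg.norm(np.outer(H[:, j], beta[j]))
                    for j in range(H.shape[1])])
    keep = sig >= sig_frac * sig.max()         # always retains the strongest node
    W, b = W[:, keep], b[keep]
    Hk = np.tanh(X @ W + b)
    err = np.linalg.norm(Hk @ (np.linalg.pinv(Hk) @ T) - T)
    if err > err_target:                       # recruit one new random node
        W = np.column_stack([W, rng.standard_normal(X.shape[1])])
        b = np.append(b, rng.standard_normal())
    return W, b
```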
Neural Computing and Applications | 2013
Yuan Lan; Zongjiang Hu; Yeng Chai Soh; Guang-Bin Huang
Over the last two decades, automatic speaker recognition has been an interesting and challenging problem for speech researchers. It can be divided into two categories: speaker identification and speaker verification. In this paper, a new classifier, the extreme learning machine (ELM), is examined on the text-independent speaker verification task and compared with the SVM classifier. ELM classifiers have been proposed for generalized single-hidden-layer feedforward networks with a wide variety of hidden nodes. They are extremely fast to train and perform well on many artificial and real regression and classification applications. The ELM and SVM classifiers are evaluated on the ELSDSR corpus, with Mel-frequency cepstral coefficients (MFCCs) extracted as the classifier input. Empirical studies show that the ELM classifier and its variants can outperform SVM classifiers on this dataset while requiring less training time.
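As a rough illustration of the pipeline (not the paper's setup), the sketch below extracts mean MFCC vectors with librosa and trains a basic ELM alongside an RBF-kernel SVM baseline; the mean-pooling choice, the one-vs-all target coding, and the framing as closed-set classification are all assumptions made for brevity.

```python
import numpy as np
import librosa                                  # assumed available for MFCC extraction
from sklearn.svm import SVC

def mfcc_features(wav_path, n_mfcc=13):
    # Mean MFCC vector per utterance (simple pooling; the paper may use
    # frame-level features instead).
    y, sr = librosa.load(wav_path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

def elm_train(X, y, n_hidden, rng):
    # Basic ELM; y holds integer speaker labels, targets coded as +1/-1 one-vs-all.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    T = np.where(y[:, None] == np.arange(y.max() + 1), 1.0, -1.0)
    return W, b, np.linalg.pinv(H) @ T

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

svm = SVC(kernel="rbf")                         # SVM baseline for comparison
```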
international symposium on neural networks | 2008
Yuan Lan; Yeng Chai Soh; Guang-Bin Huang
In this paper, the extreme learning machine (ELM) is introduced to predict the subcellular localization of proteins based on frequent subsequences. ELM has been shown to be extremely fast while providing good generalization performance. We evaluated the performance of ELM on four localization sites with frequent subsequences as the feature space. A new parameter, called Comparesup, was introduced to aid feature selection. The performance of ELM was tested on data with different numbers of frequent subsequences, determined by different ranges of Comparesup. The results demonstrate that ELM performed better than previously reported results for all four localization sites.
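A minimal sketch of how frequent subsequences can be turned into classifier inputs, treating them as contiguous substrings for simplicity; the Comparesup-based selection of which subsequences to keep is not reproduced here, and the example sequences are hypothetical.

```python
import numpy as np

def subsequence_features(sequences, frequent_subseqs):
    # Binary presence features: one column per frequent subsequence.
    return np.array([[1.0 if s in seq else 0.0 for s in frequent_subseqs]
                     for seq in sequences])

# Hypothetical example: subsequences mined from a training set.
X = subsequence_features(["MKVLAA", "MKGGSA"], ["MK", "GGS", "LAA"])
```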
international symposium on neural networks | 2009
Yuan Lan; Yeng Chai Soh; Guang-Bin Huang
The online sequential extreme learning machine (OS-ELM) proposed by Liang et al. [1] is a faster and more accurate online sequential learning algorithm than other current sequential algorithms. It can learn data one-by-one or chunk-by-chunk with fixed or varying chunk size. However, one remaining challenge for OS-ELM is that it cannot determine the optimal network structure automatically. In this paper, we propose a constructive enhancement for OS-ELM (CEOS-ELM), which can add random hidden nodes one-by-one or group-by-group with fixed or varying group size. CEOS-ELM searches for the optimal network architecture during the sequential learning process, and it can handle both additive and radial basis function (RBF) hidden nodes. The optimal number of hidden nodes is obtained automatically after training. Simulation results show that with CEOS-ELM the network achieves generalization performance comparable to OS-ELM with a more compact network structure.
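The group-by-group growth can be sketched as below, assuming tanh additive nodes; for brevity this version refits on the data at hand and omits the chunk-wise recursive bookkeeping that CEOS-ELM develops for true sequential learning.

```python
import numpy as np

def grow_group(X, T, W, b, rng, group_size, tol=1e-4):
    # Append random hidden nodes group-by-group until the training error
    # stops improving by more than tol.
    def err(Wc, bc):
        H = np.tanh(X @ Wc + bc)
        return np.linalg.norm(H @ (np.linalg.pinv(H) @ T) - T)

    current = err(W, b)
    while True:
        Wg = np.column_stack([W, rng.standard_normal((X.shape[1], group_size))])
        bg = np.append(b, rng.standard_normal(group_size))
        new = err(Wg, bg)
        if current - new < tol:
            return W, b
        W, b, current = Wg, bg, new
```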
systems, man and cybernetics | 2012
Hongming Zhou; Yuan Lan; Yeng Chai Soh; Guang-Bin Huang; Rui Zhang
Credit risk evaluation has become an increasingly important field in financial risk management for financial institutions, especially for banks and credit card companies. Many data mining and statistical methods have been applied to this field. The extreme learning machine (ELM) classifier, a type of generalized single-hidden-layer feedforward network, has been used in many applications and achieves good classification accuracy. In this paper, we therefore use kernel-based ELM as a classification tool for credit risk evaluation. Simulations are run on two credit risk evaluation datasets with three different kernel functions. The results show that kernel-based ELM is more suitable for credit risk evaluation than the widely used support vector machines (SVMs) when overall, good, and bad accuracies are all considered.
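Kernel-based ELM has a closed-form solution in which the random hidden layer is replaced by a kernel matrix. A minimal sketch with a Gaussian kernel follows; C is the regularization parameter, and +1/-1 target coding is assumed for the binary good/bad credit decision.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between row-sample matrices A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_elm_fit(X, T, C=1.0, gamma=1.0):
    # Closed-form kernel ELM: solve (I/C + K) alpha = T.
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(np.eye(len(X)) / C + K, T)

def kernel_elm_predict(Xtr, alpha, Xte, gamma=1.0):
    # Sign of the output gives the predicted class for +1/-1 targets.
    return rbf_kernel(Xte, Xtr, gamma) @ alpha
```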
autonomous and intelligent systems | 2011
Rui Zhang; Yuan Lan; Guang-Bin Huang; Yeng Chai Soh
Extreme learning machines (ELMs) have been proposed for generalized single-hidden-layer feedforward networks (SLFNs) which need not be neuron-like and perform well in both regression and classification applications. An active topic in ELM research is how to automatically determine the network architecture for a given application. In this paper, we propose an extreme learning machine with adaptive growth of hidden nodes and incremental updating of output weights by an error-minimization-based method (AIE-ELM). AIE-ELM grows the randomly generated hidden nodes in an adaptive way, in the sense that existing hidden nodes may be replaced by newly generated hidden nodes with better performance, rather than always being retained as in other incremental ELMs. The output weights are updated incrementally in the same way as in the error-minimized ELM (EM-ELM). Simulation results verify that our new approach achieves a more compact network architecture than EM-ELM with better generalization performance.
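The EM-ELM-style incremental update that AIE-ELM reuses can be sketched with the block pseudoinverse formula: when new hidden-node columns dH are appended to H, the pseudoinverse and output weights are updated rather than recomputed from scratch. The sketch below assumes the new columns are linearly independent of the existing ones.

```python
import numpy as np

def append_nodes(H, H_pinv, dH, T):
    """Append new hidden-node columns dH and update the output weights
    incrementally, in the spirit of EM-ELM's recursive formula."""
    N = H.shape[0]
    D = np.linalg.pinv((np.eye(N) - H @ H_pinv) @ dH)   # pinv of the novel part of dH
    U = H_pinv - H_pinv @ dH @ D                        # corrected rows for the old nodes
    H_new = np.column_stack([H, dH])
    H_pinv_new = np.vstack([U, D])
    return H_new, H_pinv_new, H_pinv_new @ T            # new H, its pinv, and beta
```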
Neurocomputing | 2010
Yuan Lan; Yeng Chai Soh; Guang-Bin Huang