Publication


Featured research published by Zhiping Lin.


Information Sciences | 2012

Voting based extreme learning machine

Jiuwen Cao; Zhiping Lin; Guang-Bin Huang; Nan Liu

This paper proposes an improved learning algorithm for classification, referred to as the voting based extreme learning machine. The proposed method incorporates a voting scheme into the popular extreme learning machine (ELM) for classification applications. Simulations on many real-world classification datasets have demonstrated that this algorithm generally outperforms the original ELM algorithm as well as several recent classification algorithms.
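
To illustrate the idea, the sketch below trains several independent ELMs, each with its own random hidden layer, and classifies by majority vote. This is a minimal illustration rather than the authors' implementation; the sigmoid hidden layer, hidden-node count and toy data are assumptions.

```python
import numpy as np

class SimpleELM:
    """Minimal single-hidden-layer ELM classifier (illustrative sketch)."""
    def __init__(self, n_hidden=30, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Random input weights and biases stay fixed after training (ELM principle).
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y, n_classes):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        T = np.eye(n_classes)[y]                 # one-hot targets
        self.beta = np.linalg.pinv(H) @ T        # Moore-Penrose output weights
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

def voting_elm_predict(models, X):
    """Majority vote over independently trained ELMs."""
    votes = np.stack([m.predict(X) for m in models])       # shape (k, n_samples)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

# Usage: train k independent ELMs on the same data, then vote.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
models = [SimpleELM(n_hidden=30, seed=k).fit(X, y, n_classes=2) for k in range(7)]
pred = voting_elm_predict(models, X)
```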


Systems & Control Letters | 2004

Robust stability and stabilisation of 2D discrete state-delayed systems

Wojciech Paszke; James Lam; Krzysztof Galkowski; Shengyuan Xu; Zhiping Lin

In this paper, we first present sufficient stability and robust stability conditions for discrete linear state-delayed 2D systems in terms of linear matrix inequalities. All results are obtained for the Fornasini–Marchesini delay model, but an appropriate transformation to the corresponding Roesser form is provided as well. A generalisation to the multiple state-delayed case is also given. Then the stabilisation and robust stabilisation problems using static state feedback are studied. Stabilising feedback gain matrices are constructed from the solutions of certain linear matrix inequalities. A numerical example is used to illustrate the effectiveness of the approach.
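
The flavour of these LMI conditions can be sketched in a much simplified delay-free setting. For the Fornasini–Marchesini second model x(i+1, j+1) = A1 x(i, j+1) + A2 x(i+1, j), a commonly used sufficient stability condition asks for P1, P2 ≻ 0 with [A1 A2]ᵀ(P1 + P2)[A1 A2] − diag(P1, P2) ≺ 0. The CVXPY feasibility check below is an illustration under that assumption; it does not reproduce the paper's delay-dependent or robust synthesis conditions.

```python
import numpy as np
import cvxpy as cp

# Example Fornasini-Marchesini (second model) matrices, chosen arbitrarily.
A1 = np.array([[0.2, 0.1],
               [0.0, 0.3]])
A2 = np.array([[0.1, 0.0],
               [0.2, 0.2]])
n = A1.shape[0]
A = np.hstack([A1, A2])                      # [A1 A2]
eps = 1e-6

P1 = cp.Variable((n, n), symmetric=True)
P2 = cp.Variable((n, n), symmetric=True)
zero = np.zeros((n, n))

constraints = [
    P1 >> eps * np.eye(n),
    P2 >> eps * np.eye(n),
    # Sufficient 2-D stability LMI (delay-free case):
    # [A1 A2]^T (P1 + P2) [A1 A2] - diag(P1, P2) < 0
    A.T @ (P1 + P2) @ A
        - cp.bmat([[P1, zero], [zero, P2]]) << -eps * np.eye(2 * n),
]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("LMI feasible:", prob.status == cp.OPTIMAL)
```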


Neural Processing Letters | 2012

Self-Adaptive Evolutionary Extreme Learning Machine

Jiuwen Cao; Zhiping Lin; Guang-Bin Huang

In this paper, we propose an improved learning algorithm named self-adaptive evolutionary extreme learning machine (SaE-ELM) for single hidden layer feedforward networks (SLFNs). In SaE-ELM, the network hidden node parameters are optimized by the self-adaptive differential evolution (DE) algorithm, whose trial vector generation strategies and their associated control parameters are self-adapted in a strategy pool by learning from their previous experiences in generating promising solutions, while the network output weights are calculated using the Moore–Penrose generalized inverse. SaE-ELM generally outperforms the evolutionary extreme learning machine (E-ELM) and the differential evolution Levenberg–Marquardt method, as it can self-adaptively determine suitable control parameters and generation strategies for DE. Simulations have shown that SaE-ELM not only performs better than E-ELM with several manually chosen generation strategies and control parameters but also obtains better generalization performance than several related methods.
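
A rough sketch of the training loop, substituting SciPy's standard differential evolution for the self-adaptive variant described above: the hidden-layer weights and biases form the decision vector, the fitness is a validation error, and for each candidate the output weights are obtained in closed form via the pseudoinverse. The toy data, bounds and validation split are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]              # toy regression target
X_tr, y_tr, X_va, y_va = X[:200], y[:200], X[200:], y[200:]

n_in, n_hidden = X.shape[1], 10

def unpack(theta):
    W = theta[: n_in * n_hidden].reshape(n_in, n_hidden)
    b = theta[n_in * n_hidden:]
    return W, b

def hidden(X, W, b):
    return np.tanh(X @ W + b)

def fitness(theta):
    # Candidate hidden-node parameters -> closed-form output weights (ELM step),
    # scored by validation RMSE, which DE then minimizes.
    W, b = unpack(theta)
    beta = np.linalg.pinv(hidden(X_tr, W, b)) @ y_tr
    resid = hidden(X_va, W, b) @ beta - y_va
    return np.sqrt(np.mean(resid ** 2))

dim = n_in * n_hidden + n_hidden
result = differential_evolution(fitness, bounds=[(-1.0, 1.0)] * dim,
                                maxiter=30, seed=0, polish=False)
W_best, b_best = unpack(result.x)
beta_best = np.linalg.pinv(hidden(X_tr, W_best, b_best)) @ y_tr
```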


IEEE Transactions on Circuits and Systems I: Regular Papers | 2002

Positive real control for uncertain two-dimensional systems

Shengyuan Xu; James Lam; Zhiping Lin; Krzysztof Galkowski

This brief deals with the problem of positive real control for uncertain two-dimensional (2-D) discrete systems described by the Fornasini–Marchesini local state-space model. The parameter uncertainty is time-invariant and norm-bounded. The problem we address is the design of a state feedback controller that robustly stabilizes the uncertain system and achieves extended strict positive realness of the resulting closed-loop system for all admissible uncertainties. A version of positive realness for 2-D discrete systems is established. Based on this, a condition for the solvability of the positive real control problem is derived in terms of a linear matrix inequality. Furthermore, the solution of a desired state feedback controller is also given. Finally, we provide a numerical example to demonstrate the applicability of the proposed approach.
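
For reference, the frequency-domain condition behind extended strict positive realness in the 2-D setting is commonly stated as below; this is a paraphrase of the standard definition used in the positive real control literature, included as background rather than quoted from the paper.

```latex
% Extended strict positive realness (ESPR) of a stable 2-D transfer matrix
% G(z_1, z_2): positivity on the distinguished boundary of the unit bidisc
% (standard-literature paraphrase, stated here as an assumption).
G\!\left(e^{j\omega_1}, e^{j\omega_2}\right)
  + G^{T}\!\left(e^{-j\omega_1}, e^{-j\omega_2}\right) > 0
\quad \text{for all } \omega_1, \omega_2 \in [0, 2\pi).
```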


Neurocomputing | 2010

Composite function wavelet neural networks with extreme learning machine

Jiuwen Cao; Zhiping Lin; Guang-Bin Huang

A new structure of wavelet neural networks (WNN) with extreme learning machine (ELM) is introduced in this paper. In the proposed wavelet neural networks, composite functions are applied at the hidden nodes and the learning is done using ELM. The input information is first processed by wavelet functions and then passed through a bounded nonconstant piecewise continuous activation function g: R -> R. A selection method that takes into account the domain of the input space where the wavelets are not zero is used to initialize the translation and dilation parameters. The formed wavelet neural network is then trained with the computationally efficient ELM algorithm. Experimental results on the regression of some nonlinear functions and real-world data, the prediction of a chaotic signal and classification on several benchmark real-world datasets show that the proposed neural networks achieve better performance in most cases than some relevant neural networks and learn much faster than neural networks trained with the traditional back-propagation (BP) algorithm.
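
The hidden-node construction can be sketched as follows; the Mexican-hat mother wavelet, the sigmoid used as the bounded activation g, and the random translation/dilation parameters (in place of the paper's domain-aware initialization) are all assumptions made for illustration.

```python
import numpy as np

def mexican_hat(t):
    # Mexican-hat (Ricker) mother wavelet.
    return (1.0 - t ** 2) * np.exp(-0.5 * t ** 2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def composite_hidden(X, W, translations, dilations):
    # Each hidden node: project the input, translate and dilate it, apply the
    # wavelet, then pass the result through a bounded activation g: R -> R.
    Z = (X @ W - translations) / dilations
    return sigmoid(mexican_hat(Z))

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])            # toy regression target

n_hidden = 40
W = rng.standard_normal((X.shape[1], n_hidden))
translations = rng.uniform(-1.0, 1.0, n_hidden)
dilations = rng.uniform(0.5, 2.0, n_hidden)

H = composite_hidden(X, W, translations, dilations)
beta = np.linalg.pinv(H) @ y                     # ELM: output weights in one step
y_hat = H @ beta
```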


Neurocomputing | 2015

Ensemble of subset online sequential extreme learning machine for class imbalance and concept drift

Bilal Mirza; Zhiping Lin; Nan Liu

In this paper, a computationally efficient framework, referred to as ensemble of subset online sequential extreme learning machine (ESOS-ELM), is proposed for class imbalance learning from a concept-drifting data stream. The proposed framework comprises a main ensemble representing short-term memory, an information storage module representing long-term memory and a change detection mechanism to promptly detect concept drifts. In the main ensemble of ESOS-ELM, each OS-ELM network is trained with a balanced subset of the data stream. Using ELM theory, a computationally efficient storage scheme is proposed to leverage the prior knowledge of recurring concepts. A distinctive feature of ESOS-ELM is that it can learn from new samples sequentially in both the chunk-by-chunk and one-by-one modes. ESOS-ELM can also be effectively applied to imbalanced data without concept drift. On most of the datasets used in our experiments, ESOS-ELM performs better than the state-of-the-art methods for both stationary and non-stationary environments.
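
The building block of the main ensemble is the OS-ELM recursive update; the sketch below shows one such network plus a simple balanced-subset step (undersampling the majority class). These are assumptions standing in for the paper's full ensemble, change-detection and long-term storage machinery.

```python
import numpy as np

class OSELM:
    """Minimal online sequential ELM with the standard recursive update."""
    def __init__(self, n_in, n_hidden, n_classes, seed=0):
        r = np.random.default_rng(seed)
        self.W = r.standard_normal((n_in, n_hidden))
        self.b = r.standard_normal(n_hidden)
        self.n_classes = n_classes

    def _H(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def init_fit(self, X, y):
        # Initial batch; a small ridge term keeps the inverse well conditioned.
        H, T = self._H(X), np.eye(self.n_classes)[y]
        self.P = np.linalg.inv(H.T @ H + 1e-3 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ T

    def partial_fit(self, X, y):
        # OS-ELM recursive least-squares update for a new chunk (X, y).
        H, T = self._H(X), np.eye(self.n_classes)[y]
        K = np.linalg.inv(np.eye(H.shape[0]) + H @ self.P @ H.T)
        self.P = self.P - self.P @ H.T @ K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return np.argmax(self._H(X) @ self.beta, axis=1)

def balanced_subset(X, y, rng):
    # Undersample the majority class so the network sees a balanced chunk.
    idx0, idx1 = np.where(y == 0)[0], np.where(y == 1)[0]
    k = min(len(idx0), len(idx1))
    keep = np.concatenate([rng.choice(idx0, k, replace=False),
                           rng.choice(idx1, k, replace=False)])
    return X[keep], y[keep]

# Usage sketch: initial batch, then sequential balanced chunks.
rng = np.random.default_rng(1)
X = rng.standard_normal((600, 5))
y = (X[:, 0] > 0.8).astype(int)                  # imbalanced toy labels
net = OSELM(n_in=5, n_hidden=40, n_classes=2)
Xb, yb = balanced_subset(X[:200], y[:200], rng)
net.init_fit(Xb, yb)
for start in range(200, 600, 100):
    Xb, yb = balanced_subset(X[start:start + 100], y[start:start + 100], rng)
    net.partial_fit(Xb, yb)
```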


Multidimensional Systems and Signal Processing | 1998

Feedback Stabilizability of MIMO n-D Linear Systems

Zhiping Lin

The problem of output feedback stabilizability of multi-input multi-output (MIMO) multidimensional (n-D) linear systems is investigated using n-D polynomial matrix theory. A simple necessary and sufficient condition for output feedback stabilizability of a given MIMO n-D linear system is derived in terms of the generating polynomials associated with any matrix fraction descriptions of the system. When a given unstable plant is feedback stabilizable, a constructive method is provided for obtaining a stabilizing compensator. Moreover, a strictly causal compensator can always be constructed for a causal (not necessarily strictly causal) plant. A non-trivial example is given for illustration.


Information Sciences | 2014

Bayesian signal detection with compressed measurements

Jiuwen Cao; Zhiping Lin

This paper proposes a Bayesian approach to signal detection in compressed sensing (CS) using the compressed measurements directly. A general expression for the probability of error is obtained, where the prior probabilities of the hypotheses may be equal or unequal and the additive noise is assumed to be uncorrelated Gaussian noise with possibly unequal variances. Upper and lower bounds on the probability of error are also derived using the restricted isometry property (RIP) constant and then the more computationally feasible mutual coherence of a given sampling matrix in CS. When the difference between the prior probabilities is sufficiently small and the signal-to-noise ratio is relatively large, an approximate but simpler expression for the probability of error is obtained. Furthermore, a new bound on the probability of error is derived in terms of a piecewise function. Numerical simulations are also provided to illustrate the new theoretical results.
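
The detection setup can be sketched as a binary Bayesian test on the compressed measurement, deciding between noise only and signal plus noise, with the probability of error estimated by Monte Carlo. The measurement model, sampling matrix, priors and noise level below are illustrative assumptions; the paper's analytical expressions and RIP/coherence-based bounds are not reproduced.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n, m = 100, 30                                   # signal length, compressed measurements
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sampling matrix
s = np.zeros(n)
s[rng.choice(n, 5, replace=False)] = 1.0         # sparse signal
sigma, p1 = 0.3, 0.4                             # noise std, prior P(H1)

cov = sigma ** 2 * (Phi @ Phi.T)                 # covariance of Phi @ noise
f0 = multivariate_normal(mean=np.zeros(m), cov=cov)    # H0: y = Phi n
f1 = multivariate_normal(mean=Phi @ s, cov=cov)        # H1: y = Phi (s + n)

def bayes_decide(y):
    # Decide H1 when its posterior probability exceeds that of H0.
    return p1 * f1.pdf(y) > (1 - p1) * f0.pdf(y)

# Monte Carlo estimate of the probability of error.
trials, errors = 5000, 0
for _ in range(trials):
    h1 = rng.random() < p1
    noise = sigma * rng.standard_normal(n)
    y = Phi @ (s + noise) if h1 else Phi @ noise
    errors += bayes_decide(y) != h1
print("Estimated probability of error:", errors / trials)
```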


International Conference on Computational Science | 2001

Wavelet Packet Multi-layer Perceptron for Chaotic Time Series Prediction: Effects of Weight Initialization

Kok Keong Teo; Lipo Wang; Zhiping Lin

We train the wavelet packet multi-layer perceptron neural network (WP-MLP) by backpropagation for time series prediction. Weights in the backpropagation algorithm are usually initialized with small random values. If the random initial weights happen to be far from a good solution or near a poor local optimum, training may take a long time or get trapped in the local optimum. Proper weight initialization places the weights close to a good solution, reducing training time and increasing the possibility of reaching a good solution. In this paper, we investigate the effect of weight initialization on WP-MLP using two clustering algorithms. We test the initialization methods on WP-MLP with the sunspots and Mackey-Glass benchmark time series. We show that with proper weight initialization, better prediction performance can be attained.
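
One clustering-based initialization can be sketched as below: cluster the network inputs with k-means and use the cluster centres to seed the first-layer weights before backpropagation. The wavelet packet preprocessing is omitted, and the particular mapping from centres to weights and biases is an assumption made for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 8))        # stand-in for wavelet-packet features
n_hidden = 12

# Cluster the inputs; one hidden unit per cluster centre.
centres = KMeans(n_clusters=n_hidden, n_init=10, random_state=0).fit(X).cluster_centers_

# Point each hidden unit's weight vector at a centre (unit norm), with a bias
# that places the unit's activation region near that centre.
W0 = centres / np.linalg.norm(centres, axis=1, keepdims=True)   # (n_hidden, n_in)
b0 = -np.sum(W0 * centres, axis=1)

# W0 and b0 would replace the small random values normally used to
# initialize the first layer before backpropagation training.
```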


Neural Processing Letters | 2013

Weighted Online Sequential Extreme Learning Machine for Class Imbalance Learning

Bilal Mirza; Zhiping Lin; Kar-Ann Toh

Most of the existing sequential learning methods for class imbalance learn data in chunks. In this paper, we propose a weighted online sequential extreme learning machine (WOS-ELM) algorithm for class imbalance learning (CIL). WOS-ELM is a general online learning method that alleviates the class imbalance problem in both chunk-by-chunk and one-by-one learning. One of the new features of WOS-ELM is that an appropriate weight setting for CIL is selected in a computationally efficient manner. In one-by-one learning of WOS-ELM, a new sample can update the classification model without waiting for a chunk to be completed. Extensive empirical evaluations on 15 imbalanced datasets show that WOS-ELM obtains comparable or better classification performance than competing methods. The computational time of WOS-ELM is also found to be lower than that of the competing CIL methods.
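
The batch building block behind this method, a weighted regularized ELM in which minority-class samples receive larger weights, can be sketched as below; the inverse class-frequency weighting and the regularization constant C are common choices assumed for illustration, and the online (sequential) update is omitted.

```python
import numpy as np

def weighted_elm_fit(X, y, n_hidden=50, C=1.0, seed=0):
    """Batch weighted ELM: beta = (H^T W H + I/C)^(-1) H^T W T."""
    r = np.random.default_rng(seed)
    W_in = r.standard_normal((X.shape[1], n_hidden))
    b = r.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))

    classes, counts = np.unique(y, return_counts=True)
    # Inverse class-frequency weights: minority-class samples count more.
    idx = np.searchsorted(classes, y)
    sample_w = 1.0 / counts[idx]
    T = np.eye(len(classes))[idx]                 # one-hot targets

    HtW = H.T * sample_w                          # H^T W with W = diag(sample_w)
    beta = np.linalg.solve(HtW @ H + np.eye(n_hidden) / C, HtW @ T)
    return W_in, b, beta

def elm_predict(X, W_in, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))
    return np.argmax(H @ beta, axis=1)
```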

Collaboration


Dive into Zhiping Lin's collaborations.

Top Co-Authors

Guang-Bin Huang
Nanyang Technological University

Xiaoping Lai
Hangzhou Dianzi University

Lei Sun
Beijing Institute of Technology

Wee Ser
Nanyang Technological University

Raimund J. Ober
University of Texas at Austin

Nan Liu
National University of Singapore

Jiuwen Cao
Hangzhou Dianzi University

Yongzhi Liu
Nanyang Technological University