Jian Xun Peng
Queen's University Belfast
Publications
Featured research published by Jian Xun Peng.
IEEE Transactions on Automatic Control | 2005
Kang Li; Jian Xun Peng; George W. Irwin
The identification of nonlinear dynamic systems using linear-in-the-parameters models is studied. A fast recursive algorithm (FRA) is proposed both to select the model structure and to estimate the model parameters. Unlike the orthogonal least squares (OLS) method, the FRA solves the least-squares problem recursively over the model order without requiring matrix decomposition. The computational complexity of both algorithms is analyzed, along with their numerical stability. The new method is shown to require much less computational effort and to be numerically more stable than OLS.
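The forward model-structure selection the abstract describes can be illustrated with a minimal sketch. This is not the paper's FRA, which avoids refitting by recursive updates over the model order; it is a plain greedy selection that refits by least squares at each step, and the function name `forward_select` is illustrative only:

```python
import numpy as np

def forward_select(Phi, y, n_terms):
    """Greedy forward selection of columns of the regressor matrix Phi for a
    linear-in-the-parameters model y ~ Phi[:, S] @ theta. At each model order
    the candidate term giving the lowest residual SSE is added."""
    selected, remaining = [], list(range(Phi.shape[1]))
    for _ in range(n_terms):
        best_j, best_sse = None, np.inf
        for j in remaining:
            cols = Phi[:, selected + [j]]
            theta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            sse = np.sum((y - cols @ theta) ** 2)
            if sse < best_sse:
                best_j, best_sse = j, sse
        selected.append(best_j)
        remaining.remove(best_j)
    theta, *_ = np.linalg.lstsq(Phi[:, selected], y, rcond=None)
    return selected, theta
```

The FRA's point is that these repeated `lstsq` calls can be replaced by recursive updates, avoiding the matrix decomposition that OLS performs.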
Automatica | 2006
Kang Li; Jian Xun Peng; Er-Wei Bai
This paper investigates a two-stage stepwise identification method for a class of nonlinear dynamic systems that can be described by linear-in-the-parameters models, where the model must be built from a very large pool of basis functions or model terms. The main objective is to improve the compactness of the model obtained by forward stepwise methods while retaining their computational efficiency. The proposed algorithm first generates an initial model using a forward stepwise procedure. The significance of each selected term is then reviewed at the second stage, and all insignificant ones are replaced, resulting in an optimised compact model with significantly improved performance. The main contribution of this paper is that these two stages are performed within a well-defined regression context, leading to significantly reduced computational complexity. The efficiency of the algorithm is confirmed by the computational complexity analysis, and its effectiveness is demonstrated by simulation results.
IEEE Transactions on Neural Networks | 2006
Jian Xun Peng; Kang Li; De-Shuang Huang
This paper proposes a novel hybrid forward algorithm (HFA) for the construction of radial basis function (RBF) neural networks with tunable nodes. The main objective is to efficiently and effectively produce a parsimonious RBF neural network that generalizes well. This is achieved through simultaneous network structure determination and parameter optimization over the continuous parameter space. This is a mixed-integer hard problem, and the proposed HFA tackles it within an integrated analytic framework, leading to significantly improved network performance and reduced memory usage during network construction. The computational complexity analysis confirms the efficiency of the proposed algorithm, and the simulation results demonstrate its effectiveness.
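The model class involved can be sketched briefly: a Gaussian RBF hidden layer whose nodes have tunable centers and widths (the continuous parameters), with output weights that follow linearly by least squares once the nodes are fixed. This shows the structure of the problem only, not the HFA itself; function names are illustrative:

```python
import numpy as np

def rbf_design(X, centers, widths):
    """Gaussian RBF hidden-layer outputs for inputs X of shape
    (n_samples, n_dims); one column per hidden node."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * widths ** 2))

def fit_output_weights(X, y, centers, widths):
    """With the nonlinear node parameters fixed, the output weights reduce
    to a linear least-squares problem -- the linear half of the mixed
    discrete-continuous problem the HFA addresses."""
    H = rbf_design(X, centers, widths)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return w
```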
IEEE Transactions on Automatic Control | 2007
Jian Xun Peng; Kang Li; George W. Irwin
A continuous forward algorithm (CFA) is proposed for nonlinear modelling and identification using radial basis function (RBF) neural networks. The problem considered here is simultaneous network construction and parameter optimization, well known to be a mixed-integer hard problem. The proposed algorithm performs these two tasks within an integrated analytic framework and offers two important advantages. First, the model performance can be significantly improved through continuous parameter optimization. Second, the neural representation can be built without generating and storing all candidate regressors, leading to significantly reduced memory usage and computational complexity. Computational complexity analysis and simulation results confirm the effectiveness of the approach.
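The memory-saving idea of not storing all candidate regressors can be illustrated as follows: candidate RBF nodes are scored one at a time against the current residual, so only a single regressor column ever exists in memory. This is a simplified stand-in for the CFA (1-D centers, fixed width), with illustrative names:

```python
import numpy as np

def best_candidate(X, residual, candidate_centers, width):
    """Stream over candidate centers, building each regressor column on the
    fly; the score is the SSE reduction a single column would achieve when
    regressed against the current residual."""
    best_c, best_score = None, -np.inf
    for c in candidate_centers:  # one column at a time, O(n) memory
        phi = np.exp(-((X - c) ** 2).sum(-1) / (2.0 * width ** 2))
        score = (phi @ residual) ** 2 / (phi @ phi)
        if score > best_score:
            best_c, best_score = c, score
    return best_c, best_score
```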
IEEE Transactions on Neural Networks | 2008
Jian Xun Peng; Kang Li; George W. Irwin
This paper investigates the learning of a wide class of single-hidden-layer feedforward neural networks (SLFNs) with two sets of adjustable parameters, i.e., the nonlinear parameters in the hidden nodes and the linear output weights. The main objective is both to speed up the convergence of second-order learning algorithms such as Levenberg-Marquardt (LM) and to improve the network performance. This is achieved here by reducing the dimension of the solution space and by introducing a new Jacobian matrix. Unlike conventional supervised learning methods, which optimize these two sets of parameters simultaneously, the linear output weights are first converted into dependent parameters, thereby removing the need for their explicit computation. Consequently, the neural network (NN) learning is performed over a solution space of reduced dimension. A new Jacobian matrix is then proposed for use with the popular second-order learning methods in order to achieve a more accurate approximation of the cost function. The efficacy of the proposed method is shown through an analysis of the computational complexity and by presenting simulation results from four different examples.
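Converting the linear output weights into dependent parameters can be sketched in a variable-projection style: for any fixed set of hidden-node parameters, the optimal output weights follow from least squares, so the training cost becomes a function of the hidden parameters alone. This shows the reduced solution space only; the paper's contribution, the new Jacobian for second-order methods, is not reproduced here. Names are illustrative:

```python
import numpy as np

def projected_cost(hidden_params, X, y, hidden_fn):
    """Cost over the reduced solution space: `hidden_fn` maps (X, params) to
    the hidden-layer output matrix H, the output weights are eliminated by
    least squares, and the cost is half the residual SSE."""
    H = hidden_fn(X, hidden_params)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    r = y - H @ w
    return 0.5 * float(r @ r), w
```

A second-order method such as LM would then iterate on `hidden_params` only, using the cost (and the paper's Jacobian) over this reduced space.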
Neurocomputing | 2007
Kang Li; Jian Xun Peng
Neural input selection is an important stage in neural network configuration. For neural modeling and control of nonlinear dynamic systems, the inputs to the neural networks may include any system variable of interest with various time lags. Choosing a set of significant inputs is a combinatorial problem, and the selection procedure can be very time-consuming. In this paper, a model-based neural input selection method is proposed. Essentially, the neural input selection is transformed into the problem of identifying the significant terms of a linear-in-the-parameters model. A fast method is then proposed to identify significant nonlinear terms or functions, from which the neural inputs are grouped and selected. Both theoretical analysis and simulation examples demonstrate the effectiveness and efficiency of the proposed model-based approach.
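The setting can be sketched for the simplest case of a single series: build a candidate pool of lagged values and rank each lag by the SSE reduction of a one-term linear fit. This is a crude stand-in for identifying significant linear-in-the-parameters terms; the paper's pool would also contain nonlinear terms, and all names here are illustrative:

```python
import numpy as np

def lagged_pool(series, max_lag):
    """Candidate inputs y(t-1)..y(t-max_lag) for a single series, aligned
    with the prediction target y(t)."""
    n = len(series) - max_lag
    cols = {f"lag{k}": series[max_lag - k : max_lag - k + n]
            for k in range(1, max_lag + 1)}
    target = series[max_lag:]
    return cols, target

def rank_inputs(cols, target):
    """Rank candidates by (x.y)^2 / (x.x), the SSE reduction achieved by a
    single-term least-squares fit of the target on that candidate."""
    scores = {name: (x @ target) ** 2 / (x @ x) for name, x in cols.items()}
    return sorted(scores, key=scores.get, reverse=True)
```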
IEEE Transactions on Circuits and Systems | 2009
Kang Li; Jian Xun Peng; Er-Wei Bai
The identification of nonlinear dynamic systems using radial basis function (RBF) neural models is studied in this paper. Given a model selection criterion, the main objective is to effectively and efficiently build a parsimonious compact neural model that generalizes well over unseen data. This is achieved by simultaneous model structure selection and optimization of the parameters over the continuous parameter space. It is a mixed-integer hard problem, and a unified analytic framework is proposed to enable an effective and efficient two-stage mixed discrete-continuous identification procedure. This novel framework combines the advantages of an iterative discrete two-stage subset selection technique for model structure determination and the calculus-based continuous optimization of the model parameters. Computational complexity analysis and simulation studies confirm the efficacy of the proposed algorithm.
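The continuous half of the mixed discrete-continuous procedure can be sketched as follows: with a structure (set of RBF centers) fixed, the centers are refined by gradient descent on the SSE, with the output weights eliminated by least squares at every evaluation. A finite-difference gradient is used here for brevity; the paper's calculus-based optimization is analytic. Inputs are 1-D and all names are illustrative:

```python
import numpy as np

def sse_for_centers(centers, X, y, width):
    """Projected SSE: output weights are solved by least squares for the
    given centers, so the cost depends on the centers alone."""
    H = np.exp(-((X[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    r = y - H @ w
    return float(r @ r)

def refine_centers(centers, X, y, width, lr=0.005, steps=50, eps=1e-5):
    """Continuous stage: central-difference gradient descent on the centers."""
    c = np.array(centers, dtype=float)
    for _ in range(steps):
        g = np.zeros_like(c)
        for i in range(len(c)):
            cp, cm = c.copy(), c.copy()
            cp[i] += eps
            cm[i] -= eps
            g[i] = (sse_for_centers(cp, X, y, width)
                    - sse_for_centers(cm, X, y, width)) / (2 * eps)
        c -= lr * g
    return c
```

In the full two-stage procedure this continuous refinement alternates with the discrete subset-selection stage that decides which nodes the model keeps.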
Neurocomputing | 2011
Jian Xun Peng; Stuart Ferguson; Karen Rafferty; Paul Kelly
This paper presents a feature selection method for data classification, which combines a model-based variable selection technique and a fast two-stage subset selection algorithm. The relationship between a specified (and complete) set of candidate features and the class label is modeled using a non-linear full regression model which is linear-in-the-parameters. The performance of a sub-model, measured by the sum of squared errors (SSE), is used to score the informativeness of the subset of features involved in that sub-model. The two-stage subset selection algorithm converges to a solution sub-model whose SSE is locally minimized. The features involved in the solution sub-model are selected as inputs to support vector machines (SVMs) for classification. The memory requirement of this algorithm is independent of the number of training patterns, a property that makes the method suitable for applications executed on mobile devices where physical RAM is very limited. An application was developed for activity recognition, implementing the proposed feature selection algorithm and an SVM training procedure. Experiments are carried out with the application running on a PDA, recognizing human activities from accelerometer data. A comparison with an information gain-based feature selection method demonstrates the effectiveness and efficiency of the proposed algorithm.
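The memory property can be illustrated with a sketch: if the sufficient statistics Phi^T Phi, Phi^T y, and y^T y are accumulated over streamed training patterns, the SSE of any feature subset can be computed from those statistics alone, so memory is O(p^2) in the number of features regardless of the number of patterns. This is an illustration of the principle, not the paper's algorithm; all names are illustrative:

```python
import numpy as np

def accumulate(stream):
    """Accumulate A = Phi^T Phi, b = Phi^T y, and y^T y over streamed
    (features, label) pairs -- one pattern in memory at a time."""
    A = b = yy = None
    for x, y in stream:
        if A is None:
            p = len(x)
            A, b, yy = np.zeros((p, p)), np.zeros(p), 0.0
        A += np.outer(x, x)
        b += y * x
        yy += y * y
    return A, b, yy

def subset_sse(A, b, yy, idx):
    """SSE of the least-squares sub-model restricted to feature indices idx,
    computed from the accumulated statistics only: yy - b_i^T theta."""
    Ai, bi = A[np.ix_(idx, idx)], b[idx]
    theta = np.linalg.solve(Ai, bi)
    return float(yy - bi @ theta)
```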
Cognition, Technology & Work | 2003
Kang Li; Stephen Thompson; Peter A. Wieringa; Jian Xun Peng; G.R. Duan
Artificial neural networks and genetic algorithms are two intelligent approaches initially intended to model human information processing and the natural evolutionary process, with the aim of using the models in problem solving. During the last decade, these two intelligent approaches have been widely applied to a variety of social, economic and engineering systems. In this paper, they are employed as modelling tools to support human supervisory control in reducing fossil fuel power plant emissions, particularly NOx emissions. Human supervisory control of fossil fuel power generation plants has been studied, and the need for an advisory system for operator support is emphasized. Plant modelling is an important block in such an advisory system and is the key issue of this study. In particular, three artificial neural network models and a genetic algorithm-based grey-box model have been built to model and predict the NOx emissions in a coal-fired power plant. In non-linear dynamic system modelling, training data are always limited and cannot cover all system dynamics; therefore, the generalization performance of the resultant model over unseen data is the focus of this study. These models will then be used in the advisory system to support human operators on aspects such as task analysis, condition monitoring and operation optimization, with the aim of improving thermal efficiency, reducing pollutant emissions and ensuring that the power system runs safely.
Pervasive Computing and Communications | 2012
Victoria Stewart; Stuart Ferguson; Jian Xun Peng; Karen Rafferty
This paper describes a simple application for mobile devices that automatically recognizes different physical activities. The application could be used to log exercise sessions to aid weight management or improve sporting performance.