
Publication


Featured research published by George W. Irwin.


IEEE Transactions on Automatic Control | 2005

A fast nonlinear model identification method

Kang Li; Jian Xun Peng; George W. Irwin

The identification of nonlinear dynamic systems using linear-in-the-parameters models is studied. A fast recursive algorithm (FRA) is proposed both to select the model structure and to estimate the model parameters. Unlike the orthogonal least squares (OLS) method, FRA solves the least-squares problem recursively over the model order without requiring matrix decomposition. The computational complexity of both algorithms is analyzed, along with their numerical stability. The new method is shown to require much less computational effort and is also numerically more stable than OLS.
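As a rough illustration of the linear-in-the-parameters setting, the sketch below fits a small nonlinear ARX model by ordinary least squares and selects terms by greedy forward selection. It is a generic illustration, not the FRA itself: the system, candidate terms and data are invented for the example, and this naive version re-solves the least-squares problem from scratch for every candidate, which is exactly the cost the FRA avoids.

```python
# Sketch: linear-in-the-parameters identification with greedy forward
# selection of regressor terms.  Generic illustration only, not the FRA;
# the system, candidate terms and data below are invented for the example.
import numpy as np

rng = np.random.default_rng(0)

# Simulate a simple nonlinear ARX system:
# y[k] = 0.5*y[k-1] + u[k-1] - 0.3*y[k-1]^2 + noise
N = 400
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = 0.5 * y[k - 1] + u[k - 1] - 0.3 * y[k - 1] ** 2 + 0.01 * rng.standard_normal()

# Candidate regressors (columns), all linear in the unknown parameters.
names = ["y[k-1]", "u[k-1]", "y[k-1]^2", "u[k-1]^2", "y[k-1]*u[k-1]"]
X = np.column_stack([y[:-1], u[:-1], y[:-1] ** 2, u[:-1] ** 2, y[:-1] * u[:-1]])
t = y[1:]                              # one-step-ahead target

# Greedy forward selection: add the term giving the largest drop in SSE.
selected, remaining = [], list(range(X.shape[1]))
for _ in range(3):                     # pick a 3-term model for illustration
    best_j, best_sse = None, np.inf
    for j in remaining:
        cols = selected + [j]
        theta, *_ = np.linalg.lstsq(X[:, cols], t, rcond=None)
        sse = np.sum((t - X[:, cols] @ theta) ** 2)
        if sse < best_sse:
            best_j, best_sse = j, sse
    selected.append(best_j)
    remaining.remove(best_j)
    print(f"added {names[best_j]:12s}  SSE = {best_sse:.4f}")

theta, *_ = np.linalg.lstsq(X[:, selected], t, rcond=None)
print("selected terms:", [names[j] for j in selected], "parameters:", np.round(theta, 3))
```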


IEEE Transactions on Aerospace and Electronic Systems | 2000

Multiple model bootstrap filter for maneuvering target tracking

Shaun McGinnity; George W. Irwin

The extension of the bootstrap filter to the multiple model target tracking problem is considered. Bayesian bootstrap filtering is a very powerful technique since it represents the required probability densities by sets of random samples and is therefore not restricted to linear, Gaussian systems, making it ideal for the multiple model problem where very complex densities can be generated.
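For readers unfamiliar with the bootstrap filter, the minimal single-model sketch below shows the predict/weight/resample cycle on a scalar random-walk target. The paper's multiple model filter additionally carries a discrete model index per particle; the dynamics and noise levels here are assumptions chosen only for illustration.

```python
# Minimal bootstrap (particle) filter sketch for a scalar target.
# Single-motion-model version; dynamics and noise levels are assumed.
import numpy as np

rng = np.random.default_rng(1)
T, n_particles = 50, 500
q, r = 0.5, 1.0                       # process / measurement noise std

# Simulate a true trajectory and noisy measurements.
x_true = np.cumsum(q * rng.standard_normal(T))
z = x_true + r * rng.standard_normal(T)

particles = rng.standard_normal(n_particles)      # initial particle cloud
estimates = []
for k in range(T):
    # Predict: propagate each particle through the motion model.
    particles = particles + q * rng.standard_normal(n_particles)
    # Weight: likelihood of the measurement under each particle.
    w = np.exp(-0.5 * ((z[k] - particles) / r) ** 2)
    w /= w.sum()
    estimates.append(np.sum(w * particles))        # MMSE estimate
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(n_particles, size=n_particles, p=w)
    particles = particles[idx]

rmse = np.sqrt(np.mean((np.array(estimates) - x_true) ** 2))
print(f"RMSE over {T} steps: {rmse:.3f}")
```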


IEEE Transactions on Neural Networks | 1992

A neural network regulator for turbogenerators

Q. H. Wu; B. W. Hogg; George W. Irwin

A neural network (NN) based regulator for nonlinear, multivariable turbogenerator control is presented. A hierarchical NN architecture is proposed for regulator design, consisting of two subnetworks used for input-output (I-O) mapping and control, respectively, both based on the back-propagation (BP) algorithm. The regulator has the flexibility to accept additional sensory information to cater to multi-input, multi-output systems. Its operation requires neither a reference model nor an inverse system model, and it can produce more acceptable control signals than are obtained by using only the sign of the plant errors during training. I-O mapping of turbogenerator systems using NNs has been investigated, and the regulator has been implemented on a complex turbogenerator system model. Simulation results show satisfactory control performance and illustrate the potential of the NN regulator in comparison with an existing adaptive controller.


IEEE Transactions on Neural Networks | 1997

Nonlinear control structures based on embedded neural system models

Gordon Lightbody; George W. Irwin

This paper investigates in detail the possible application of neural networks to the modeling and adaptive control of nonlinear systems. Nonlinear neural-network-based plant modeling is first discussed, based on the approximation capabilities of the multilayer perceptron. A structure is then proposed to utilize feedforward networks within a direct model reference adaptive control strategy. The difficulties involved in training this network, embedded within the closed loop, are discussed, and a novel neural-network-based sensitivity modeling approach is proposed to allow the backpropagation of errors through the plant to the neural controller. Finally, a novel nonlinear internal model control (IMC) strategy is suggested that utilizes a nonlinear neural model of the plant to generate parameter estimates over the nonlinear operating region for an adaptive linear internal model, without the problems associated with recursive parameter identification algorithms. Unlike other neural IMC approaches, the linear control law can then be readily designed. A continuous stirred tank reactor was chosen as a realistic nonlinear case study for the techniques discussed in the paper.
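The IMC arrangement described above, where the neural model ultimately supplies a linear internal model, reduces to the familiar internal model control loop. The sketch below wires up that loop for a fixed first-order linear plant and model; it is a structural illustration only, with the neural parameter estimation and the CSTR case study of the paper omitted, and all numbers are assumptions chosen to make the loop run.

```python
# Sketch of the internal model control (IMC) loop structure, with a fixed
# first-order linear plant and internal model standing in for the paper's
# neural-network-derived model.  All values are illustrative assumptions.
import numpy as np

a_p, b_p = 0.90, 0.10     # "true" plant:  y[k+1] = a_p*y[k] + b_p*u[k]
a_m, b_m = 0.88, 0.11     # internal model (deliberately slightly wrong)
alpha = 0.7               # IMC filter pole: larger alpha -> slower, more robust

T = 80
r = np.ones(T)            # unit step reference
y = ym = u_prev = e_prev = 0.0
y_hist = []

for k in range(T):
    # Feedback signal is the plant/model mismatch, not the raw output.
    e = r[k] - (y - ym)
    # IMC controller Q(z) = filter * model inverse (made causal by one delay):
    # u[k] = alpha*u[k-1] + (1-alpha)/b_m * (e[k] - a_m*e[k-1])
    u = alpha * u_prev + (1 - alpha) / b_m * (e - a_m * e_prev)
    u_prev, e_prev = u, e
    # Update plant and internal model in parallel with the same input.
    y = a_p * y + b_p * u
    ym = a_m * ym + b_m * u
    y_hist.append(y)

print("output after 80 steps:", round(y_hist[-1], 3))   # settles near 1.0 despite model error
```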


International Journal of Systems Science | 2008

Model selection approaches for non-linear system identification: a review

Xia Hong; Richard Mitchell; Sheng Chen; Chris J. Harris; Kang Li; George W. Irwin

The identification of non-linear systems using only observed finite datasets has become a mature research area over the last two decades. A class of linear-in-the-parameter models with universal approximation capabilities has been intensively studied and widely used due to the availability of many linear learning algorithms and their inherent convergence conditions. This article presents a systematic overview of basic research on model selection approaches for linear-in-the-parameter models. One of the fundamental problems in non-linear system identification is to find the minimal model with the best generalisation performance from observational data alone. The important concepts used in various non-linear system-identification algorithms to achieve good model generalisation are first reviewed, including Bayesian parameter regularisation and model selection criteria based on cross-validation and experimental design. A significant advance in machine learning has been the development of the support vector machine as a means of identifying kernel models based on the structural risk minimisation principle. Developments in convex optimisation-based model construction algorithms, including support vector regression, are outlined. Input selection algorithms and on-line system identification algorithms are also included in this review. Finally, some industrial applications of non-linear models are discussed.
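As a small concrete example of one of the ideas surveyed here, selection by cross-validation, the sketch below picks a polynomial model order by k-fold cross-validation. The data and setup are synthetic and invented for this illustration; they are not taken from the review.

```python
# Illustration of model selection by k-fold cross-validation rather than
# training error alone.  Synthetic data and polynomial regressors only.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 120)
y = np.sin(2.5 * x) + 0.1 * rng.standard_normal(x.size)

def cv_mse(order, k=5):
    """Mean held-out MSE of a degree-`order` polynomial over k folds."""
    idx = rng.permutation(x.size)
    folds = np.array_split(idx, k)
    errs = []
    for f in folds:
        train = np.setdiff1d(idx, f)
        coef = np.polyfit(x[train], y[train], order)
        errs.append(np.mean((np.polyval(coef, x[f]) - y[f]) ** 2))
    return np.mean(errs)

for order in range(1, 10):
    print(f"degree {order}: CV MSE = {cv_mse(order):.4f}")
# The minimum typically occurs at a moderate degree; training error alone
# keeps decreasing with degree and so favours over-fitted models.
```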


IEEE Transactions on Neural Networks | 1998

A hybrid linear/nonlinear training algorithm for feedforward neural networks

Seán McLoone; M. D. Brown; George W. Irwin; G. Lightbody

This paper presents a new hybrid optimization strategy for training feedforward neural networks. The algorithm combines gradient-based optimization of nonlinear weights with singular value decomposition (SVD) computation of linear weights in one integrated routine. It is described for the multilayer perceptron (MLP) and radial basis function (RBF) networks and then extended to the local model network (LMN), a new feedforward structure in which a global nonlinear model is constructed from a set of locally valid submodels. Simulation results are presented demonstrating the superiority of the new hybrid training scheme compared to second-order gradient methods. It is particularly effective for the LMN architecture where the linear to nonlinear parameter ratio is large.
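A minimal sketch of the hybrid idea for a one-dimensional Gaussian RBF network is given below: at each iteration the linear output weights are obtained from an SVD-based least-squares solve while the nonlinear centre positions take a gradient step (widths are held fixed for brevity). The data, network size and learning rate are assumptions; the full algorithm and its local model network extension are in the paper.

```python
# Hybrid linear/nonlinear training sketch for a 1-D Gaussian RBF network:
# output weights from an SVD-based least-squares solve, centres by gradient
# descent.  Toy data, network size and learning rate are assumptions.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(-3, 3, 200)
y = np.sinc(x) + 0.05 * rng.standard_normal(x.size)

n_centres, width, lr = 8, 0.8, 0.02
c = np.linspace(-3, 3, n_centres)                 # initial centres

def design(x, c):
    """Gaussian RBF design matrix, one column per centre."""
    return np.exp(-((x[:, None] - c[None, :]) ** 2) / (2 * width ** 2))

for it in range(200):
    Phi = design(x, c)
    # Linear step: output weights by least squares (numpy solves this via an SVD).
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    resid = Phi @ w - y
    # Nonlinear step: gradient of 0.5*sum(resid^2) w.r.t. each centre.
    grad_c = np.array([
        np.sum(resid * w[j] * Phi[:, j] * (x - c[j]) / width ** 2)
        for j in range(n_centres)
    ])
    c -= lr * grad_c
    if it % 50 == 0:
        print(f"iter {it:3d}  MSE = {np.mean(resid ** 2):.5f}")
```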


Annual Reviews in Control | 2012

A review on improving the autonomy of unmanned surface vehicles through intelligent collision avoidance manoeuvres

Sable Campbell; Wasif Naeem; George W. Irwin

In recent years unmanned vehicles have grown in popularity, with an ever-increasing number of applications in industry, the military and research across the air, ground and marine domains. In particular, the challenges that unmanned marine vehicles must overcome in order to increase their level of autonomy include automatic obstacle avoidance and conformance with the Rules of the Road when navigating in the presence of other maritime traffic. The USV Master Plan, established for the US Navy, outlines a list of objectives for improving autonomy in order to increase mission diversity and reduce the amount of supervisory intervention. This paper addresses the specific development needs based on notable research carried out to date, primarily with regard to navigation, guidance, control and motion planning. The integration of the International Regulations for Preventing Collisions at Sea within obstacle avoidance protocols seeks to prevent maritime accidents attributed to human error. The addition of these critical safety measures may be key to a future growth in demand for USVs, as they serve to pave the way for establishing legal policies for unmanned vessels.


Neurocomputing | 2001

Improving neural network training solutions using regularisation

Seán McLoone; George W. Irwin

This paper describes the application of regularisation to the training of feedforward neural networks, as a means of improving the quality of the solutions obtained. The basic principles of regularisation theory are outlined for both linear and nonlinear training and then extended to cover a new hybrid training algorithm for feedforward neural networks recently proposed by the authors. The concept of functional regularisation is also introduced and discussed in relation to MLP and RBF networks. The tendency of the hybrid training algorithm, and of many linear optimisation strategies, to generate large-magnitude weight solutions when applied to ill-conditioned neural paradigms is illustrated graphically and reasoned analytically. While such weight solutions do not generally result in poor fits, it is argued that they could be subject to numerical instability and are therefore undesirable. Using an illustrative example it is shown that, as well as being beneficial from a generalisation perspective, regularisation also provides a means for controlling the magnitude of solutions.
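As a compact numerical illustration of the weight-magnitude argument, the sketch below solves an ill-conditioned linear least-squares problem with and without a ridge (weight-decay) penalty. The problem is contrived for the example and is not taken from the paper.

```python
# Ill-conditioned linear least-squares fit with and without ridge
# (weight-decay) regularisation.  The nearly collinear design matrix is
# contrived so that the unregularised weights become large and cancelling.
import numpy as np

rng = np.random.default_rng(4)
n = 100
x1 = rng.standard_normal(n)
x2 = x1 + 1e-4 * rng.standard_normal(n)       # almost collinear with x1
X = np.column_stack([x1, x2])
y = x1 + 0.05 * rng.standard_normal(n)        # target depends on x1 only

# Unregularised least squares: large, mutually cancelling weights.
w_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Ridge: minimise ||Xw - y||^2 + lam*||w||^2  =>  (X'X + lam*I) w = X'y
lam = 1e-2
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)

for name, w in [("OLS", w_ols), ("ridge", w_ridge)]:
    mse = np.mean((X @ w - y) ** 2)
    print(f"{name:5s} weights = {np.round(w, 2)}  |w| = {np.linalg.norm(w):8.2f}  MSE = {mse:.4f}")
# Both fits achieve a similar MSE, but only the regularised solution keeps
# the weights at a sensible magnitude.
```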


IEEE Transactions on Control Systems Technology | 2008

Nonlinear PCA With the Local Approach for Diesel Engine Fault Detection and Diagnosis

Xun Wang; Uwe Kruger; George W. Irwin; Geoffrey McCullough; Neil McDowell

This brief examines the application of nonlinear statistical process control to the detection and diagnosis of faults in automotive engines. In this statistical framework, the computed score variables may have a complicated nonparametric distribution function, which hampers statistical inference, notably for fault detection and diagnosis. This brief shows that introducing the statistical local approach into nonlinear statistical process control produces statistics that follow a normal distribution, thereby enabling a simple statistical inference for fault detection. Further, for fault diagnosis, this brief introduces a compensation scheme that approximates the fault condition signature. Experimental results from a Volkswagen 1.9-L turbo-charged diesel engine are included.
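The statistical local approach itself is beyond a short snippet, but the surrounding monitoring framework can be sketched: fit a principal component model to healthy data, then flag new samples whose Hotelling T² or squared prediction error (SPE) exceeds a control limit. Plain linear PCA is used below as a stand-in for the paper's nonlinear model, and the data, dimensions and thresholds are all assumptions made for illustration.

```python
# Sketch of PCA-based statistical process monitoring: train on "healthy"
# data, then flag samples whose Hotelling T^2 or squared prediction error
# (SPE) exceeds an empirical limit.  Linear PCA stands in for the paper's
# nonlinear model, and all data here is synthetic.
import numpy as np

rng = np.random.default_rng(5)

# Healthy training data: 5 sensors driven by 2 latent factors plus noise.
n, n_sensors, n_comp = 500, 5, 2
latent = rng.standard_normal((n, n_comp))
mixing = rng.standard_normal((n_comp, n_sensors))
X = latent @ mixing + 0.1 * rng.standard_normal((n, n_sensors))

mu = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
P = Vt[:n_comp].T                       # loading matrix (sensors x comps)
var = (s[:n_comp] ** 2) / (n - 1)       # variance captured by each component

def monitor(x):
    """Return (T2, SPE) for one sample vector x."""
    t = (x - mu) @ P                    # scores
    t2 = np.sum(t ** 2 / var)           # Hotelling T^2
    resid = (x - mu) - t @ P.T          # part not explained by the model
    return t2, np.sum(resid ** 2)

# Empirical 99% control limits from the training data.
stats = np.array([monitor(x) for x in X])
t2_lim, spe_lim = np.percentile(stats, 99, axis=0)

# A faulty sample: bias on sensor 3.
fault = X[0].copy()
fault[3] += 2.0
t2, spe = monitor(fault)
print(f"T2 = {t2:.1f} (limit {t2_lim:.1f}),  SPE = {spe:.3f} (limit {spe_lim:.3f})")
```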


IEEE Transactions on Neural Networks | 1999

RBF principal manifolds for process monitoring

David Wilson; George W. Irwin; Gordon Lightbody

This paper describes a novel means for creating a nonlinear extension of principal component analysis (PCA) using radial basis function (RBF) networks. This algorithm comprises two distinct stages: projection and self-consistency. The projection stage contains a single network, trained to project data from a high- to a low-dimensional space. Training requires solution of a generalized eigenvector equation. The second stage, trained using a novel hybrid nonlinear optimization algorithm, then performs the inverse transformation. Issues relating to the practical implementation of the procedure are discussed, and the algorithm is demonstrated on a nonlinear test problem. An example of the application of the algorithm to data from a benchmark simulation of an industrial overheads condenser and reflux drum rig is also included. This shows the usefulness of the procedure in detecting and isolating both sensor and process faults. Pointers for future research in this area are also given.
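The two-stage structure can be mirrored in a very small sketch: stage one projects the data to a single score (plain linear PCA is used here as a stand-in for the paper's projection network), and stage two fits an RBF map from the score back to data space, the inverse transformation, so the reconstruction error can be checked. The curved data set and RBF settings are assumptions for the example.

```python
# Two-stage sketch mirroring the projection / self-consistency split:
# stage 1 projects to one dimension (linear PCA as a stand-in), stage 2
# fits an RBF map from the score back to data space.  Synthetic data.
import numpy as np

rng = np.random.default_rng(6)
s = rng.uniform(-1, 1, 300)
data = np.column_stack([s, s ** 2]) + 0.02 * rng.standard_normal((300, 2))

# Stage 1: projection to a single score (first principal component).
mu = data.mean(axis=0)
_, _, Vt = np.linalg.svd(data - mu, full_matrices=False)
score = (data - mu) @ Vt[0]

# Stage 2: RBF network mapping score -> data (the inverse transformation).
centres = np.linspace(score.min(), score.max(), 10)
width = (score.max() - score.min()) / 10
Phi = np.column_stack([
    np.ones_like(score),                                  # bias column
    np.exp(-((score[:, None] - centres[None, :]) ** 2) / (2 * width ** 2)),
])
W, *_ = np.linalg.lstsq(Phi, data, rcond=None)            # linear output weights

recon = Phi @ W
lin_recon = mu + np.outer(score, Vt[0])                   # 1-component linear PCA
print("RBF manifold reconstruction MSE:", np.mean((recon - data) ** 2).round(5))
print("linear 1-PC reconstruction MSE :", np.mean((lin_recon - data) ** 2).round(5))
# The RBF inverse map captures the curvature that a single linear
# component cannot, so its reconstruction error is much lower.
```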

Collaboration


Dive into George W. Irwin's collaborations.

Top Co-Authors

Kang Li (Queen's University Belfast)
Seán McLoone (Queen's University Belfast)
Robert Kee (Queen's University Belfast)
E. Swidenbank (Queen's University Belfast)
Gordon Dodds (Queen's University Belfast)
Sanjay Sharma (Plymouth State University)