Publication


Featured research published by T.J. Gordon.


Vehicle System Dynamics | 2000

An Extended Adaptive Kalman Filter for Real-time State Estimation of Vehicle Handling Dynamics

Matt C. Best; T.J. Gordon; P.J. Dixon

This paper considers a method for estimating vehicle handling dynamic states in real-time, using a reduced sensor set; the information is essential for vehicle handling stability control and is also valuable in chassis design evaluation. An extended (nonlinear) Kalman filter is designed to estimate the rapidly varying handling state vector. This employs a low order (4 DOF) handling model which is augmented to include adaptive states (cornering stiffnesses) to compensate for tyre force nonlinearities. The adaptation is driven by steer-induced variations in the longitudinal vehicle acceleration. The observer is compared with an equivalent linear, model-invariant Kalman filter. Both filters are designed and tested against data from a high order source model which simulates six degrees of freedom for the vehicle body, and employs a combined-slip Pacejka tyre model. A performance comparison is presented, which shows promising results for the extended filter, given a sensor set comprising three accelerometers only. The study also presents an insight into the effect of correlated error sources in this application, and it concludes with a discussion of the new observer's practical viability.
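
A minimal sketch of the extended Kalman filter structure this abstract describes, assuming a generic state-space form. The model functions, Jacobians and noise covariances below are placeholders for illustration, not the authors' 4-DOF handling model; the only structural point carried over is that the state vector is augmented with slowly varying adaptive parameters (cornering stiffnesses).

```python
import numpy as np

# Hypothetical EKF step for an augmented state vector: handling states plus
# adaptive cornering-stiffness parameters modelled as a random walk.
# f, h, F_jac, H_jac, Q and R are assumed placeholders, not the paper's model.
def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One predict/update cycle of an extended Kalman filter."""
    # Predict: propagate the nonlinear model and linearise about the estimate.
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q

    # Update: correct the prediction with the (accelerometer) measurements z.
    H = H_jac(x_pred)
    y = z - h(x_pred)                      # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```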


Vehicle System Dynamics | 1991

A Comparison of Adaptive LQG and Nonlinear Controllers for Vehicle Suspension Systems

T.J. Gordon; C. Marsh; M.G. Milsted

A design feature of many computer-controlled suspension systems is their ability to adapt control law parameters to suit prevailing road conditions. Here, for systems employing high bandwidth actuators and state variable feedback control, the benefits of such adaptation are shown to be at best marginal. An optimal adaptive LQG system is compared with a fixed structure nonlinear feedback controller in the context of a simple quarter-vehicle suspension model. Performance comparisons are made, and trends considered under more realistic conditions. In consequence, the overall usefulness of this type of adaptation is called into question.
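
For context, a minimal quarter-vehicle LQR gain computation of the kind an adaptive LQG scheme would repeat as its road-condition estimate changes. The masses, stiffnesses and weighting matrices are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative quarter-car parameters (sprung/unsprung mass, suspension and
# tyre stiffness, damping); all numbers are assumed for the example.
ms, mu, ks, kt, cs = 300.0, 40.0, 16000.0, 160000.0, 1000.0

# States: [sprung disp, sprung vel, unsprung disp, unsprung vel]; input: actuator force.
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [-ks/ms, -cs/ms, ks/ms, cs/ms],
              [0.0, 0.0, 0.0, 1.0],
              [ks/mu, cs/mu, -(ks + kt)/mu, -cs/mu]])
B = np.array([[0.0], [1.0/ms], [0.0], [-1.0/mu]])

Q = np.diag([1e4, 1.0, 1e3, 1.0])   # state weighting (assumed ride/travel trade-off)
R = np.array([[1e-6]])              # control effort weighting (assumed)

# Solve the continuous algebraic Riccati equation and form the LQR gain.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)     # state feedback law u = -K x

print("LQR feedback gain:", K)
```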


Mechatronics | 1997

Continuous Action Reinforcement Learning Applied To Vehicle Suspension Control

M. N. Howell; G.P. Frost; T.J. Gordon; Q. H. Wu

A new reinforcement learning algorithm is introduced which can be applied over a continuous range of actions. The learning algorithm is reward-inaction based, with a set of probability density functions being used to determine the action set. An experimental study is presented, based on the control of a semi-active suspension system on a road-going, four-wheeled passenger vehicle. The control objective is to minimise the mean square acceleration of the vehicle body, thus improving the ride isolation qualities of the vehicle. This represents a difficult class of learning problems, owing to the stochastic nature of the road input disturbance together with unknown high order dynamics, sensor noise and the non-linear (semi-active) control actuators. The learning algorithm described here operates over a bounded continuous action set, is robust to high levels of noise and is ideally suited to operating in a parallel computing environment.
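
A minimal sketch of a continuous-action reinforcement learning automaton in the spirit described above: a probability density over a bounded action range is held on a grid, an action is sampled, and the density is reinforced around actions that earn reward (reward-inaction: zero reward leaves it unchanged). The Gaussian neighbourhood, learning rate and reward function are assumptions for illustration, not the published scheme.

```python
import numpy as np

class ContinuousActionAutomaton:
    """Hypothetical CARLA-style automaton over a bounded continuous action set."""
    def __init__(self, a_min, a_max, n=200, spread=0.05, rate=0.2):
        self.grid = np.linspace(a_min, a_max, n)
        self.pdf = np.full(n, 1.0 / (a_max - a_min))   # uniform initial density
        self.spread = spread * (a_max - a_min)
        self.rate = rate

    def sample(self):
        # Sample an action from the current density.
        p = self.pdf / self.pdf.sum()
        return np.random.choice(self.grid, p=p)

    def reinforce(self, action, reward):
        # Reward-inaction: only positive reward modifies the density.
        if reward <= 0.0:
            return
        bump = np.exp(-0.5 * ((self.grid - action) / self.spread) ** 2)
        self.pdf += self.rate * reward * bump
        self.pdf /= np.trapz(self.pdf, self.grid)      # renormalise to unit area

# Example: learn a damper gain that minimises a noisy body-acceleration cost
# (the cost function is a stand-in for the measured mean-square acceleration).
automaton = ContinuousActionAutomaton(0.0, 2.0)
for _ in range(500):
    gain = automaton.sample()
    cost = (gain - 1.3) ** 2 + 0.05 * np.random.randn()
    automaton.reinforce(gain, reward=max(0.0, 1.0 - cost))
```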


Engineering Applications of Artificial Intelligence | 2001

Continuous action reinforcement learning automata and their application to adaptive digital filter design

M. N. Howell; T.J. Gordon

In the design of adaptive IIR filters, the multi-modal nature of the error surfaces can limit the use of gradient-based and other iterative search methods. Stochastic learning automata have previously been shown to have global optimisation properties making them suitable for the optimisation of filter coefficients. Continuous action reinforcement learning automata are presented as an extension to the standard automata, which operate over discrete parameter sets. Global convergence is claimed, and demonstrations are carried out via a number of computer simulations.
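
To make the optimisation setting concrete, a small sketch of the kind of cost function a learning automaton would minimise when fitting IIR filter coefficients to match an unknown system. The reference filter and coefficient parameterisation are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import lfilter

# Hypothetical adaptive IIR identification cost: match an unknown reference
# filter's output on a common excitation. Error surfaces of this form can be
# multi-modal in the denominator coefficients, which motivates a global
# (learning-automata) search rather than gradient descent.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)                  # white-noise excitation
d = lfilter([0.05, 0.4], [1.0, -0.7, 0.3], x)  # assumed "unknown" reference IIR filter

def mse(candidate):
    """Mean-squared output error for candidate coefficients [b0, b1, a1, a2]."""
    b0, b1, a1, a2 = candidate
    y = lfilter([b0, b1], [1.0, a1, a2], x)
    return float(np.mean((d - y) ** 2))

# A learning automaton (one per coefficient) would sample candidates from its
# action densities, receive a reward derived from -mse, and reinforce accordingly.
print(mse([0.05, 0.4, -0.7, 0.3]))  # ~0 at the true coefficients
```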


Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering | 2005

A comparison of braking and differential control of road vehicle yaw-sideslip dynamics

Matthew Hancock; R.A. Williams; T.J. Gordon; Matt C. Best

Two actuation mechanisms are considered for the comparison of performance capabilities in improving the yaw-sideslip handling characteristics of a road vehicle. Yaw moments are generated either by the use of single-wheel braking or via driveline torque distribution using an overdriven active rear differential. For consistency, a fixed reference vehicle system is used, and the two controllers are synthesized via a single design methodology. Performance measures relate to both open-loop and closed-loop driving demands, and include both on-centre and limit handling manoeuvres.
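
A minimal sketch of how the two actuation routes can be compared at the control-allocation level: a demanded corrective yaw moment is converted either into a single-wheel brake force or into a left/right torque transfer across the rear axle. The geometry, sign convention and numbers are illustrative assumptions, not the controllers from the paper.

```python
# Hypothetical yaw-moment allocation comparing the two actuators discussed above.
TRACK_WIDTH = 1.5      # m, left-to-right wheel spacing (assumed)
WHEEL_RADIUS = 0.3     # m, rolling radius (assumed)

def brake_allocation(yaw_moment_demand):
    """Single-wheel braking: brake one wheel to create the corrective moment."""
    # A brake force F at half-track offset gives a yaw moment F * (track / 2).
    force = abs(yaw_moment_demand) / (TRACK_WIDTH / 2.0)
    wheel = "rear-left" if yaw_moment_demand > 0.0 else "rear-right"
    return wheel, force                 # N of longitudinal brake force at one wheel

def differential_allocation(yaw_moment_demand):
    """Active rear differential: transfer drive torque between the rear wheels."""
    # A torque transfer dT produces opposing wheel forces of +-dT/r, i.e. a
    # moment of (dT / r) * track about the centre of gravity.
    return yaw_moment_demand * WHEEL_RADIUS / TRACK_WIDTH   # Nm transferred

demand = 1200.0                         # Nm corrective yaw moment (example value)
print(brake_allocation(demand))
print(differential_allocation(demand))
```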


IEEE Transactions on Systems, Man, and Cybernetics | 2002

Genetic learning automata for function optimization

M. N. Howell; T.J. Gordon; F. V. Brandao

Stochastic learning automata and genetic algorithms (GAs) have previously been shown to have valuable global optimization properties. Learning automata have, however, been criticized for having a relatively slow rate of convergence. In this paper, these two techniques are combined to provide an increase in the rate of convergence for the learning automata and also to improve the chances of escaping local optima. The technique separates the genotype and phenotype properties of the GA and has the advantage that the degree of convergence can be quickly ascertained. It also provides the GA with a stopping rule. If the technique is applied to real-valued function optimization problems, then bounds on the range of the values within which the global optimum is expected can be determined throughout the search process. The technique is demonstrated through a number of bit-based and real-valued function optimization examples.
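
A small sketch in the general spirit of combining learning automata with a GA on a bit-string problem: a probability per bit plays the role of the genotype, sampled bit strings are the phenotypes, and the probabilities are nudged toward the best individual, with their convergence toward 0 or 1 giving a natural stopping rule. The update rule and parameters here are assumptions for illustration (closer to a probability-vector scheme such as PBIL), not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bits, pop_size, rate = 20, 30, 0.1
prob = np.full(n_bits, 0.5)                 # per-bit probabilities (genotype)

def fitness(bits):
    return bits.sum()                       # one-max: count of ones (toy objective)

for generation in range(200):
    # Phenotypes: a population sampled from the current bit probabilities.
    population = (rng.random((pop_size, n_bits)) < prob).astype(int)
    best = population[np.argmax([fitness(ind) for ind in population])]
    # Reward-style update: move each bit probability toward the best individual.
    prob = (1.0 - rate) * prob + rate * best
    # Stopping rule: probabilities have converged close to 0 or 1.
    if np.all((prob < 0.05) | (prob > 0.95)):
        break

print(generation, prob.round(2))
```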


Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering | 2002

An Automated Driver Based on Convergent Vector Fields

T.J. Gordon; Matt C. Best; P.J. Dixon

This paper describes a new general framework for the action of an automated driver (or driver model) to provide the control of longitudinal and lateral dynamics of a road vehicle. The context of the problem is assumed to be in high-speed competitive driving, as in motor racing, where the requirement is for maximum possible speed along a track, making use of a reference path (racing line) but with the capacity for obstacle avoidance and recovery from large excursions. While not necessarily representative of a human driver, the analysis provides worthwhile insight into the nature of the driving task and offers a new approach for vehicle lateral and longitudinal control; it also has uses in less demanding applications such as Advanced Cruise Control systems. As is common in the literature, the driving task is broken down into two distinct subtasks: path planning and local feedback control. In the first of these tasks, an essentially geometric approach is taken, which makes use of a vector field analysis. At each location x the automated driver is to prescribe a vector w for the desired vehicle mass centre velocity; the spatial distribution and global properties of w(x) provide essential information for stability analysis, as well as control reference. The resulting vector field is considered in the context of limited friction and limited mass centre accelerations, leading to constraints on ∇w. Provided such constraints are satisfied, and using suitable adaptation of w(x) when required, it is shown that feedback control can be applied to guarantee stable asymptotic tracking of a reference path, even under limit handling conditions. A specific implementation of the method is included, using dual non-linear SISO (single-input single-output) controllers.
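
A minimal sketch of the vector-field idea, assuming a straight reference path along the x-axis: the desired velocity w(x) combines progress along the path with a component that decays the lateral offset, and a simple acceleration law tracks w subject to a friction-like limit. The field definition, gains and limits are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

V_REF = 30.0      # desired path speed, m/s (assumed)
K_LAT = 0.4       # 1/s, rate at which lateral offset is removed (assumed)
A_MAX = 8.0       # m/s^2, acceleration (friction) limit (assumed)

def desired_velocity(pos):
    """Desired mass-centre velocity w at position pos = (x, y)."""
    _, y = pos
    w = np.array([V_REF, -K_LAT * y])                 # path progress + offset decay
    return w * min(1.0, V_REF / np.linalg.norm(w))    # keep |w| bounded by V_REF

def tracking_accel(vel, pos, gain=1.0):
    """Acceleration demand steering the actual velocity toward w(pos)."""
    a = gain * (desired_velocity(pos) - vel)
    n = np.linalg.norm(a)
    return a if n <= A_MAX else a * (A_MAX / n)       # respect the friction circle

# Short simulation starting from a 5 m lateral excursion.
pos, vel, dt = np.array([0.0, 5.0]), np.array([25.0, 0.0]), 0.01
for _ in range(2000):
    vel = vel + tracking_accel(vel, pos) * dt
    pos = pos + vel * dt
print(pos.round(2))   # lateral offset should have decayed toward zero
```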


International Journal of Vehicle Design | 2006

On the synthesis of driver inputs for the simulation of closed-loop handling manoeuvres

T.J. Gordon; Matt C. Best

This paper concerns a new Dual Model methodology for the synthesis of steering, throttle and braking inputs for the closed-loop simulation of linear or non-linear vehicle handling dynamics. The method provides near-optimal driver control inputs that are both insensitive to driver model assumptions and feasible for use with complex non-linear vehicle handling models. The paper describes the Dual Model technique, and evaluates its effectiveness, in the context of a low-order non-linear handling model, via comparison with independently derived optimal control inputs. A test case of an obstacle avoidance manoeuvre is considered. The methodology is particularly applicable to the design and development of future chassis control systems.


Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering | 1993

Stochastic optimal control of active vehicle suspensions using learning automata

T.J. Gordon; C Marsh; Q. H. Wu

This paper is concerned with the application of reinforcement learning to the stochastic optimal control of an idealized active vehicle suspension system. The use of learning automata in optimal control is a new application of this machine learning technique, and the principal aim of this work is to define and demonstrate the method in a relatively simple context, as well as to compare performance against results obtained from standard linear optimal control theory. The most distinctive feature of the approach is that no formal modelling is involved in the control system design; once implemented, learning takes place on-line, and the automaton improves its control performance with respect to a predefined cost function. An important new feature of the method is the use of subset actions, which enables the automaton to reduce the size of its action set at any particular instant, without imposing any global restrictions on the controller that is eventually learnt. The results, though based on simulation studies, suggest that there is great potential for implementing learning control in active vehicle suspensions, as well as for many other systems.
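
A minimal sketch of a discrete linear reward-inaction automaton with the subset-action idea described above: at each instant only a window of actions around the currently most probable one is made eligible, without permanently removing any action from the full set. The cost signal, window size and learning rate are illustrative assumptions, not the paper's controller.

```python
import numpy as np

rng = np.random.default_rng(2)
actions = np.linspace(0.0, 4000.0, 41)       # e.g. candidate actuator forces, N (assumed)
p = np.full(len(actions), 1.0 / len(actions))
LEARN_RATE, WINDOW = 0.05, 7

def reward(force):
    # Stand-in for the measured cost: higher reward near an assumed "good" force.
    return max(0.0, 1.0 - ((force - 2500.0) / 2500.0) ** 2 + 0.02 * rng.standard_normal())

for _ in range(3000):
    # Subset action: only a window centred on the current most probable action is eligible.
    centre = int(np.argmax(p))
    lo, hi = max(0, centre - WINDOW), min(len(actions), centre + WINDOW + 1)
    q = np.zeros_like(p)
    q[lo:hi] = p[lo:hi]
    q /= q.sum()
    i = rng.choice(len(actions), p=q)

    # Linear reward-inaction update over the FULL probability vector.
    beta = reward(actions[i])
    p = p + LEARN_RATE * beta * (np.eye(len(actions))[i] - p)

print(actions[np.argmax(p)])                 # should settle near the rewarded region
```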


Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering | 1996

Moderated Reinforcement Learning of Active and Semi-Active Vehicle Suspension Control Laws

G.P. Frost; T.J. Gordon; M. N. Howell; Q. H. Wu

This paper is concerned with the application of reinforcement learning to the dynamic ride control of an active vehicle suspension system. The study makes key extensions to earlier simulation work to enable on-line implementation of the learning automaton methodology using an actual vehicle. Extensions to the methodology allow safe and continuous learning to take place on the road, using a limited instrumentation set. An important new feature is the use of a moderator to set physical limits on the vehicle states. It is shown that the addition of the moderator has little direct effect on the system's ability to learn, and allows learning to take place continuously even when there are unstable controllers present. The study concludes with the results of an experimental trial using vehicle hardware, where the successful synthesis of a semi-active ride controller is demonstrated.
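
A minimal sketch of the moderator idea: the learned control action passes through a supervisory check that substitutes a conservative fallback whenever monitored states leave their safe band, so even an unstable candidate controller cannot drive the hardware out of its envelope. The state names, limits and fallback law are assumptions for illustration.

```python
# Hypothetical moderator wrapping a learned suspension control law.
STATE_LIMITS = {"suspension_travel_m": 0.08, "body_velocity_mps": 1.5}   # assumed bounds
ACTUATOR_LIMIT_N = 3000.0                                                # assumed actuator range

def moderate(learned_force, states, fallback_gain=2000.0):
    """Return a safe actuator command given the learned command and current states."""
    for name, limit in STATE_LIMITS.items():
        if abs(states[name]) > limit:
            # Override: a simple damping-style fallback opposing body motion.
            return -fallback_gain * states["body_velocity_mps"]
    # Within limits: let learning proceed, but respect the actuator range.
    return max(-ACTUATOR_LIMIT_N, min(ACTUATOR_LIMIT_N, learned_force))

# Example: a wildly wrong learned command is rendered harmless in both regimes.
print(moderate(9000.0, {"suspension_travel_m": 0.02, "body_velocity_mps": 0.3}))  # clipped
print(moderate(9000.0, {"suspension_travel_m": 0.10, "body_velocity_mps": 0.3}))  # overridden
```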

Collaboration


Dive into T.J. Gordon's collaborations.

Top Co-Authors

Matt C. Best (Loughborough University)

M. N. Howell (Loughborough University)

Q. H. Wu (South China University of Technology)

G.P. Frost (Loughborough University)

C. Marsh (Loughborough University)

D. N. Hunt (Loughborough University)