Andrew I. Hanna
University of East Anglia
Publications
Featured research published by Andrew I. Hanna.
PLOS Biology | 2010
Amelia A. Green; J. Richard Kennaway; Andrew I. Hanna; J. Andrew Bangham; Enrico Coen
A combination of experimental analysis and mathematical modelling shows how the genetic control of tissue polarity plays a fundamental role in the development and evolution of form.
IEEE Signal Processing Letters | 2001
Danilo P. Mandic; Andrew I. Hanna; Moe Razaz
A fully adaptive normalized nonlinear gradient descent (FANNGD) algorithm for online adaptation of nonlinear neural filters is proposed. An adaptive stepsize that minimizes the instantaneous output error of the filter is derived using a linearization performed by a Taylor series expansion of the output error. For rigor, the remainder of the truncated Taylor series expansion within the expression for the adaptive learning rate is made adaptive and is updated using gradient descent. The FANNGD algorithm is shown to converge faster than previously introduced algorithms of this kind.
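The update the abstract describes can be sketched in a few lines. A minimal NumPy sketch, assuming a tanh nonlinearity, a one-step-ahead prediction setting, and a GNGD-style gradient estimate for the adaptive remainder term; parameter names and initial values are illustrative rather than the paper's:

```python
import numpy as np

def fanngd(signal, order=4, rho=0.15, eps0=1.0):
    """One-step-ahead prediction with a nonlinear FIR neuron trained by a
    FANNGD-style update. The tanh nonlinearity and the form of the
    remainder-term gradient estimate are assumptions."""
    w = np.zeros(order)                    # FIR weights
    eps = eps0                             # adaptive Taylor-remainder term
    x_prev = np.zeros(order)
    e_prev, phi_prev, denom_prev = 0.0, 0.0, 1.0
    preds = np.zeros(len(signal))
    for k in range(order, len(signal)):
        x = signal[k - order:k][::-1]      # tap-delay input vector
        y = np.tanh(w @ x)                 # nonlinear neuron output
        preds[k] = y
        e = signal[k] - y                  # instantaneous output error
        phi = 1.0 - y ** 2                 # tanh'(net)
        denom = eps + phi ** 2 * (x @ x)
        w += (e * phi / denom) * x         # normalized nonlinear GD step
        # remainder term updated by gradient descent on the squared error
        eps -= rho * e * e_prev * phi * phi_prev * (x @ x_prev) / denom_prev ** 2
        x_prev, e_prev, phi_prev, denom_prev = x, e, phi, denom
    return preds
```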
HFSP Journal | 2008
Sandra Bensmihen; Andrew I. Hanna; Nicolas B. Langlade; José Luis Micol; Andrew Bangham; Enrico Coen
A key approach to understanding how genes control growth and form is to analyze mutants in which shape and size have been perturbed. Although many mutants of this kind have been described in plants and animals, a general quantitative framework for describing them has yet to be established. Here we describe an approach based on Principal Component Analysis of organ landmarks and outlines. Applying this method to a collection of leaf shape mutants in Arabidopsis and Antirrhinum allows low‐dimensional spaces to be constructed that capture the key variations in shape and size. Mutant phenotypes can be represented as vectors in these allometric spaces, allowing additive gene interactions to be readily described. The principal axis of each allometric space reflects size variation and an associated shape change. The shape change is similar to that observed during the later stages of normal development, suggesting that many phenotypic differences involve modulations in the timing of growth arrest. Comparison between allometric mutant spaces from different species reveals a similar range of phenotypic possibilities. The spaces therefore provide a general quantitative framework for exploring and comparing the development and evolution of form.
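The construction reduces to a few lines of linear algebra. A minimal sketch, assuming the landmark data arrive as flattened (x, y) coordinate vectors; the array layout, component count, and names are illustrative:

```python
import numpy as np

def allometric_space(landmarks, n_components=3):
    """Build a low-dimensional shape space from leaf landmark data.
    `landmarks` is an (n_leaves, 2 * n_points) array of flattened
    (x, y) coordinates per leaf (an assumed layout)."""
    mean_shape = landmarks.mean(axis=0)
    centred = landmarks - mean_shape                  # remove the mean leaf
    # principal axes via SVD of the centred data matrix
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    axes = vt[:n_components]                          # rows span the space
    scores = centred @ axes.T                         # each leaf as a vector
    return mean_shape, axes, scores

# A mutant's phenotype vector is the displacement of its mean score from
# the wild-type mean score in this space; additive gene interactions then
# appear, approximately, as vector addition of single-mutant displacements.
```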
Neural Networks | 2003
Andrew I. Hanna; Danilo P. Mandic
A complex-valued nonlinear gradient descent (CNGD) learning algorithm for a simple finite impulse response (FIR) nonlinear neural adaptive filter with an adaptive amplitude of the complex activation function is proposed. This way the amplitude of the complex-valued analytic nonlinear activation function of a neuron in the learning algorithm is made gradient adaptive to give the complex-valued adaptive amplitude nonlinear gradient descent (CAANGD). Such an algorithm is beneficial when dealing with signals that have rich dynamical behavior. Simulations on the prediction of complex-valued coloured and nonlinear input signals show the gradient adaptive amplitude, CAANGD, outperforming the standard CNGD algorithm.
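A minimal sketch of the amplitude-adaptive idea, assuming a complex tanh activation and illustrative step sizes. The amplitude update follows from differentiating the squared error with respect to the (real) amplitude:

```python
import numpy as np

def caangd(signal, order=4, eta=0.01, rho=0.01):
    """Complex nonlinear gradient descent with a gradient-adaptive
    amplitude `lam` of the activation, as the abstract describes.
    The tanh activation and step sizes are assumptions."""
    w = np.zeros(order, dtype=complex)
    lam = 1.0                                  # adaptive (real) amplitude
    preds = np.zeros(len(signal), dtype=complex)
    for k in range(order, len(signal)):
        x = signal[k - order:k][::-1]
        t = np.tanh(w @ x)                     # analytic nonlinearity
        y = lam * t
        preds[k] = y
        e = signal[k] - y
        # CNGD weight step (gradient with respect to conj(w))
        w += eta * lam * e * np.conj(1 - t ** 2) * np.conj(x)
        # amplitude made gradient adaptive: dJ/dlam = -2 Re(e conj(t))
        lam += rho * np.real(e * np.conj(t))
    return preds
```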
IEEE Transactions on Signal Processing | 2003
Andrew I. Hanna; Danilo P. Mandic
A fully adaptive normalized nonlinear complex-valued gradient descent (FANNCGD) learning algorithm for training nonlinear (neural) adaptive finite impulse response (FIR) filters is derived. First, a normalized nonlinear complex-valued gradient descent (NNCGD) algorithm is introduced. For rigour, the remainder of the Taylor series expansion of the instantaneous output error in the derivation of NNCGD is made adaptive at every discrete time instant using a gradient-based approach. This results in the fully adaptive normalized nonlinear complex-valued gradient descent learning algorithm that is suitable for nonlinear complex adaptive filtering with a general holomorphic activation function and is robust to the initial conditions. Convergence analysis of the proposed algorithm is provided both analytically and experimentally. Experimental results on the prediction of colored and nonlinear inputs show the FANNCGD outperforming other algorithms of this kind.
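A complex-valued counterpart of the real-valued sketch above, assuming a holomorphic tanh activation; the exact form of the adaptive remainder's gradient estimate is an assumption patterned on the real-valued case:

```python
import numpy as np

def fanncgd(signal, order=4, rho=0.15, eps0=1.0):
    """Complex-valued normalized nonlinear GD with the step size
    normalized by eps + |Phi'(net)|^2 ||x||^2, where eps is itself
    adapted by gradient descent. Activation and the eps gradient
    estimate are assumptions."""
    w = np.zeros(order, dtype=complex)
    eps = eps0
    x_prev = np.zeros(order, dtype=complex)
    e_prev, dphi_prev, denom_prev = 0j, 0j, 1.0
    preds = np.zeros(len(signal), dtype=complex)
    for k in range(order, len(signal)):
        x = signal[k - order:k][::-1]
        y = np.tanh(w @ x)                       # holomorphic activation
        preds[k] = y
        e = signal[k] - y
        dphi = 1 - y ** 2                        # tanh'(net)
        denom = eps + np.abs(dphi) ** 2 * np.real(x @ np.conj(x))
        w += (e / denom) * np.conj(dphi) * np.conj(x)
        # adapt the remainder term from consecutive errors (GNGD-style)
        eps -= rho * np.real(e * np.conj(e_prev) * dphi * np.conj(dphi_prev)
                             * (x @ np.conj(x_prev))) / denom_prev ** 2
        x_prev, e_prev, dphi_prev, denom_prev = x, e, dphi, denom
    return preds
```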
International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2002
Andrew I. Hanna; Danilo P. Mandic
A backpropagation-based algorithm for training nonlinear complex-valued feed-forward neural networks employed as nonlinear adaptive filters is derived. The proposed normalised complex backpropagation (NCBP) algorithm improves on the complex backpropagation (CBP) algorithm by including an adaptive normalised learning rate. This is achieved by minimising the complex-valued instantaneous output error, expanded via a Taylor series. The proposed algorithm is applicable to any complex-valued nonlinear architecture. Experiments on complex coloured and nonlinear signals confirm that the NCBP algorithm outperforms the standard CBP algorithm.
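A sketch of one training step under stated assumptions: a single hidden layer, a complex tanh activation, and a constant C guarding the normalization, all illustrative rather than the paper's exact construction:

```python
import numpy as np

def ncbp_step(W, v, x, d, C=1.0):
    """One normalized complex backpropagation step for a single-hidden-layer
    complex network y = tanh(v . tanh(W x)). Network shape, activation, and
    the constant C are assumptions."""
    h = np.tanh(W @ x)                        # hidden activations
    y = np.tanh(v @ h)                        # network output
    e = d - y                                 # complex output error
    s_o = 1 - y ** 2                          # tanh'(output net)
    s_h = 1 - h ** 2                          # tanh'(hidden nets)
    # partial derivatives of conj(e) with respect to conjugate weights
    p_v = np.conj(s_o) * np.conj(h)
    p_W = np.conj(s_o) * np.outer(np.conj(v) * np.conj(s_h), np.conj(x))
    # normalized learning rate from the first-order Taylor term of the error
    eta = 1.0 / (C + np.sum(np.abs(p_v) ** 2) + np.sum(np.abs(p_W) ** 2))
    v = v + eta * e * p_v                     # gradient steps on both layers
    W = W + eta * e * p_W
    return W, v, y
```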
Neural Processing Letters | 2003
Andrew I. Hanna; Danilo P. Mandic
A complex-valued data-reusing nonlinear gradient descent (CDRNGD) learning algorithm for a class of complex-valued nonlinear neural adaptive filters is introduced and the affinity between the family of data-reusing algorithms and the class of normalised gradient descent algorithms is examined. Error bounds on the class of complex data-reusing algorithms are established and indicate the stability of such algorithms. Experiments on nonlinear inputs show the class of complex data-reusing algorithms outperforming the standard complex nonlinear gradient descent algorithms and converging to the normalised complex nonlinear gradient descent algorithm without experiencing the stability problems commonly encountered with normalised gradient descent algorithms.
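The data-reusing idea reduces to an inner loop over the same input-target pair. A minimal sketch, assuming a single complex tanh neuron and illustrative parameters:

```python
import numpy as np

def cdrngd_step(w, x, d, eta=0.1, reuses=5):
    """Data-reusing complex nonlinear gradient descent: the same (x, d)
    pair is reused several times within one sample period. Activation
    and parameter values are assumptions."""
    for _ in range(reuses):                 # L reuses of one data pair
        y = np.tanh(w @ x)
        e = d - y                           # error recomputed every reuse
        w = w + eta * e * np.conj(1 - y ** 2) * np.conj(x)
    return w
```

As the number of reuses grows, the accumulated step approaches the normalized update, which is the affinity the abstract refers to.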
Journal of the Franklin Institute: Engineering and Applied Mathematics | 2003
Andrew I. Hanna; Danilo P. Mandic
Nonlinear system identification and prediction is a complex task, and non-parametric models such as neural networks are often used in place of intricate mathematics. To that end, an improved approach to nonlinear system identification using neural networks was recently presented in Gupta and Sinha (J. Franklin Inst. 336 (1999) 721). Therein, a learning algorithm was proposed in which both the slope of the activation function at a neuron, β, and the learning rate, η, were made adaptive. The proposed algorithm assumes that η and β are independent variables. Here, we show that the slope and the learning rate are not independent in a general dynamical neural network, and that this should be taken into account when designing a learning algorithm. Further, relationships between η and β are developed which help reduce the number of degrees of freedom and the computational complexity of the optimisation task of training a fully adaptive neural network. Simulation results based on Gupta and Sinha (1999) and the proposed approach support the analysis.
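The dependence between η and β can be demonstrated numerically: for a tanh neuron, absorbing the slope into the weights and rescaling the learning rate by β² reproduces the same output trajectory. A small demonstration with illustrative data and parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((200, 4))                      # input vectors
d = np.tanh(x @ np.array([0.5, -0.3, 0.2, 0.1]))       # teaching signal

def train(beta, eta, w0):
    """Single tanh neuron with activation slope beta, trained by
    gradient descent with learning rate eta."""
    w = w0.copy()
    outs = []
    for xi, di in zip(x, d):
        y = np.tanh(beta * (w @ xi))
        e = di - y
        w += eta * beta * (1 - y ** 2) * e * xi        # gradient step
        outs.append(y)
    return np.array(outs)

beta, eta = 2.0, 0.05
w0 = rng.standard_normal(4) * 0.1
y1 = train(beta, eta, w0)
y2 = train(1.0, eta * beta ** 2, beta * w0)   # slope absorbed into eta, w
print(np.max(np.abs(y1 - y2)))                # ~0: the trajectories coincide
```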
International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2002
Danilo P. Mandic; Andrew I. Hanna; Dai I. Kim
An algorithm for training nonlinear adaptive finite impulse response (FIR) filters employed for nonlinear prediction and system identification is introduced. This general adaptive normalised nonlinear gradient descent (ANNGD) algorithm is fully gradient adaptive, unlike previously proposed algorithms of this kind. It is derived based upon the Taylor series expansion of the instantaneous output error of the filter. For rigour, the remainder of the Taylor series expansion in the derivation of the algorithm is made adaptive thus providing an adaptive learning rate. Experiments on coloured and nonlinear signals confirm that the ANNGD outperforms the other algorithms of this kind.
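The "coloured and nonlinear" test signals used in experiments of this kind are typically synthetic benchmarks. A sketch of two such generators; the AR(4) coefficients and the nonlinear recursion are one common choice, assumed here rather than taken from the paper:

```python
import numpy as np

def coloured_signal(n, seed=0):
    """Stable AR(4) process driven by white noise, a common 'coloured'
    benchmark input (coefficients are an assumed standard choice)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n)
    z = np.zeros(n)
    for k in range(4, n):
        z[k] = (1.79 * z[k-1] - 1.85 * z[k-2]
                + 1.27 * z[k-3] - 0.41 * z[k-4] + w[k])
    return z

def nonlinear_signal(n, seed=0):
    """Benchmark nonlinear time series driven by white noise."""
    rng = np.random.default_rng(seed)
    r = rng.standard_normal(n)
    z = np.zeros(n)
    for k in range(1, n):
        z[k] = z[k-1] / (1.0 + z[k-1] ** 2) + r[k] ** 3
    return z
```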
IEEE Workshop on Neural Networks for Signal Processing | 2002
Woon Chong Siaw; Su Lee Goh; Andrew I. Hanna; Christos Boukis; Danilo P. Mandic
A class of algorithms for training neural adaptive filters employed for nonlinear adaptive filtering is introduced. Sign variants of the fully adaptive normalised nonlinear gradient descent (SFANNGD), normalised nonlinear gradient descent (SNNGD) and nonlinear gradient descent (SNGD) algorithms are proposed. The SFANNGD, SNNGD and SNGD algorithms are derived based upon the principle of the sign algorithm used in least mean square (LMS) filters. Experiments on nonlinear signals confirm that the SFANNGD, SNNGD and SNGD algorithms perform on par with their underlying algorithms, while the sign operation decreases the overall computational complexity of the adaptive filter.
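The sign modification replaces the error in the weight update by its sign. A minimal sketch for the SNGD case, assuming a tanh nonlinearity; the same substitution applied to the normalized and fully adaptive updates yields SNNGD and SFANNGD:

```python
import numpy as np

def sngd_step(w, x, d, eta=0.01):
    """One sign nonlinear gradient descent (SNGD) step: sign(e)
    replaces e in the update, lowering the per-sample cost.
    The tanh nonlinearity and step size are assumptions."""
    y = np.tanh(w @ x)
    e = d - y
    w = w + eta * np.sign(e) * (1 - y ** 2) * x   # sign(e) replaces e
    return w, y
```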