Adrian Haith
University of Edinburgh
Publications
Featured research published by Adrian Haith.
Journal of Computational Neuroscience | 2008
Liam Paninski; Adrian Haith; Gabor Szirtes
We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.
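The likelihood described above is a first-passage-time density: the probability that the model voltage first crosses threshold at time t. As a simple illustration of the quantity being computed (a Monte Carlo sketch, not the paper's integral-equation method; all parameter values here are illustrative assumptions):

```python
import numpy as np

# Monte Carlo estimate of the first-passage-time (spike-time) density of a
# leaky integrate-and-fire neuron driven by constant current plus white noise.
# The histogram of threshold-crossing times approximates the spike likelihood
# p(t) that the integral-equation method computes directly.

rng = np.random.default_rng(0)

tau, v_th, v_reset = 20.0, 1.0, 0.0    # membrane time constant (ms), threshold, reset
I, sigma = 0.06, 0.12                  # input current and noise amplitude (assumed)
dt, T, n_trials = 0.1, 200.0, 20000    # time step (ms), horizon (ms), sample paths

n_steps = int(T / dt)
v = np.full(n_trials, v_reset)         # every trial starts at the reset potential
first_passage = np.full(n_trials, np.nan)

for step in range(n_steps):
    alive = np.isnan(first_passage)    # trials that have not yet crossed threshold
    # Euler-Maruyama step of dV = (-V/tau + I) dt + sigma dW
    noise = rng.normal(0.0, np.sqrt(dt), alive.sum())
    v[alive] += (-v[alive] / tau + I) * dt + sigma * noise
    crossed = alive.copy()
    crossed[alive] = v[alive] >= v_th
    first_passage[crossed] = (step + 1) * dt

times = first_passage[~np.isnan(first_passage)]
density, edges = np.histogram(times, bins=50, range=(0, T), density=True)
print(f"fraction of trials that spiked: {times.size / n_trials:.2f}")
```

Sampling like this is slow and noisy in the tails, which is one motivation for the deterministic integral-equation solution and the large-deviations approximations mentioned in the abstract.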
international conference on development and learning | 2007
Adrian Haith; Sethu Vijayakumar
Many computational models of vestibulo-ocular reflex (VOR) adaptation have been proposed; however, none of these models has explicitly highlighted the distinction between adaptation to dynamics transformations, in which the intrinsic properties of the oculomotor plant change, and kinematic transformations, in which the extrinsic relationship between head velocity and desired eye velocity changes (most VOR adaptation experiments use kinematic transformations to manipulate the desired response). We show that whether a transformation is kinematic or dynamic in nature has a strong impact upon the speed and stability of learning for different control architectures. Specifically, models based on a purely feedforward control architecture, as is commonly used in feedback-error learning (FEL), are guaranteed to be stable under kinematic transformations, but are susceptible to slow convergence and instability under dynamics transformations. On the other hand, models based on a recurrent cerebellar architecture [7] perform well under dynamics but not kinematics transformations. We apply this insight to derive a new model of the VOR/OKR system which is stable against transformations of both the plant dynamics and the task kinematics.
Biological Cybernetics | 2009
Adrian Haith; Sethu Vijayakumar
The exact role of the cerebellum in motor control and learning is not yet fully understood. The structure, connectivity and plasticity within cerebellar cortex have been extensively studied, but the patterns of connectivity and interaction with other brain structures, and the computational significance of these patterns, are less well known and a matter of debate. Two contrasting models of the role of the cerebellum in motor adaptation have previously been proposed. Most commonly, the cerebellum is employed in a purely feedforward pathway, with its output contributing directly to the outgoing motor command. The cerebellum must then learn an inverse model of the motor apparatus in order to achieve accurate control. More recently, Porrill et al. (Proc Biol Sci 271(1541):789–796, 2004; PLoS Comput Biol 3:1935–1950, 2007a; Neural Comput 19(1):170–193, 2007b) have highlighted the potential importance of recurrent connections by proposing an alternative architecture in which the cerebellum is embedded in a recurrent loop with brainstem control circuitry. In this framework, the feedforward connections are not necessary at all. The cerebellum must learn a forward model of the motor apparatus for accurate motor commands to be generated. We show here how these two models exhibit contrasting yet complementary learning capabilities. Central to the differences in performance between architectures is that there are two distinct kinds of disturbance to which a motor system may need to adapt: (1) changes in the relationship between the motor command and the observed outcome, and (2) changes in the relationship between the stimulus and the desired outcome. The computational distinction between these two kinds of transformation is subtle and has therefore often been overlooked.
However, the implications for learning turn out to be significant: learning with a feedforward architecture is robust following changes in the stimulus-desired outcome mapping but not necessarily the motor command-outcome mapping, while learning with a recurrent architecture is robust under changes in the motor command-outcome mapping but not necessarily the stimulus-desired outcome mapping. We first analyse these differences theoretically and through simulations in the vestibulo-ocular reflex (VOR), then illustrate how these same concepts apply more generally with a model of reaching movements.
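The feedforward half of this dissociation can be seen in a minimal scalar simulation (an illustration constructed for this listing, not the paper's full model; all gains and learning rates are assumptions). Here the stimulus is fixed at s = 1, the desired outcome is d = k·s (the stimulus-desired outcome mapping), and the plant produces y = p·u (the motor command-outcome mapping):

```python
# Scalar sketch of feedback-error learning (FEL) with a purely feedforward
# controller: robust when the task kinematics k change, but prone to
# divergence when the plant gain p changes sign.

def run_fel(k, p, w0=0.0, eta=0.1, steps=200):
    """Train a feedforward weight w on feedback error e = d - y.

    Returns the final weight and the final absolute error |k - p*w|.
    """
    w = w0
    for _ in range(steps):
        u = w              # feedforward motor command for stimulus s = 1
        e = k - p * u      # feedback error: desired minus actual outcome
        w = w + eta * e    # FEL update, driven directly by the feedback error
        if abs(e) > 1e6:   # stop once the loop has clearly diverged
            break
    return w, abs(k - p * w)

w, err = run_fel(k=1.0, p=1.0)             # baseline: w converges toward k/p
_, err_kin = run_fel(k=2.0, p=1.0, w0=w)   # kinematic change: re-converges
_, err_dyn = run_fel(k=1.0, p=-1.0, w0=w)  # plant sign flip: error grows

print(f"baseline {err:.2e}, kinematic change {err_kin:.2e}, plant change {err_dyn:.2e}")
```

After the kinematic change the error update remains a contraction (the error shrinks by a factor 1 − η·p each step), so learning re-converges; flipping the sign of p turns the same update into an expansion and the error grows without bound, matching the feedforward case described above. The recurrent architecture shows the mirror-image pattern.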
neural information processing systems | 2008
Adrian Haith; Carl P. T. Jackson; Rowland Miall; Sethu Vijayakumar
Archive | 2008
Adrian Haith; Carl P. T. Jackson; Chris Miall; Sethu Vijayakumar
Archive | 2011
Sethu Vijayakumar; Timothy M. Hospedales; Adrian Haith
Archive | 2009
Djordje Mitrovic; Stefan Klanke; Sethu Vijayakumar; Adrian Haith
Archive | 2008
Adrian Haith; Sethu Vijayakumar