Publications


Featured research published by Timothy P. Lillicrap.


Nature Communications | 2016

Random synaptic feedback weights support error backpropagation for deep learning

Timothy P. Lillicrap; Daniel Cownden; Douglas Tweed; Colin J. Akerman

The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron's axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights. This mechanism can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks. Our results help reopen questions about how the brain could use error signals and dispel long-held assumptions about algorithmic constraints on learning.
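
A minimal NumPy sketch of the feedback-alignment idea in this abstract: the error is sent back through a fixed random matrix B instead of the transpose of the forward weights. The layer sizes, learning rate, and linear target task below are illustrative assumptions, not details taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 30, 20, 10

W1 = rng.normal(0, 0.1, (n_hid, n_in))          # forward weights, layer 1
W2 = rng.normal(0, 0.1, (n_out, n_hid))         # forward weights, layer 2
B = rng.normal(0, 0.1, (n_hid, n_out))          # fixed random feedback weights
                                                # (backprop would use W2.T here)
T = rng.normal(0, 1, (n_out, n_in)) / np.sqrt(n_in)   # linear target map to learn
lr = 0.01

for step in range(5001):
    x = rng.normal(0, 1, (n_in, 1))
    y_target = T @ x
    h = np.tanh(W1 @ x)                         # forward pass
    y = W2 @ h
    e = y_target - y                            # output error
    delta = (B @ e) * (1 - h ** 2)              # error reaches layer 1 via B, not W2.T
    W2 += lr * e @ h.T                          # simple error-driven updates
    W1 += lr * delta @ x.T
    if step % 1000 == 0:
        print(step, "mse:", float(np.mean(e ** 2)))

Although B is random and never updated, the forward weights tend to align with it over training, which is why the randomly routed error signal remains useful.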


Neuron | 2013

Preference Distributions of Primary Motor Cortex Neurons Reflect Control Solutions Optimized for Limb Biomechanics

Timothy P. Lillicrap; Stephen H. Scott

Neurons in monkey primary motor cortex (M1) tend to be most active for certain directions of hand movement and joint-torque loads applied to the limb. The origin and function of these biases in preference distribution are unclear but may be key to understanding the causal role of M1 in limb control. We demonstrate that these distributions arise naturally in a network model that commands muscle activity and is optimized to control movements and counter applied forces. In the model, movement and load preference distributions matching those observed empirically are only evident when particular features of the musculoskeletal system are included: limb geometry, intersegmental dynamics, and the force-length/velocity properties of muscle are dominant factors in dictating movement preferences, and the presence of biarticular muscles dictates load preferences. Our results suggest a general principle: neural activity in M1 commands muscle activity and is optimized for the physics of the motor effector.
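
The "preferences" here are preferred directions measured from tuning curves. As a point of reference, this is a hedged sketch of the standard cosine-tuning fit used to recover a single neuron's preferred direction; the synthetic firing rates and the 135-degree "true" preference are assumptions for illustration, not data from the paper.

import numpy as np

rng = np.random.default_rng(1)
thetas = np.linspace(0, 2 * np.pi, 16, endpoint=False)    # tested movement directions
true_pd = np.deg2rad(135)                                 # assumed "true" preference
rates = 20 + 10 * np.cos(thetas - true_pd) + rng.normal(0, 1, thetas.size)

# Least-squares fit of r(theta) = b0 + b1*cos(theta) + b2*sin(theta);
# the fitted phase of the cosine is the neuron's preferred direction.
X = np.column_stack([np.ones_like(thetas), np.cos(thetas), np.sin(thetas)])
b0, b1, b2 = np.linalg.lstsq(X, rates, rcond=None)[0]
pd = np.arctan2(b2, b1)
print("estimated preferred direction (deg):", np.rad2deg(pd) % 360)

The preference distributions discussed in the abstract are histograms of such preferred directions across a recorded population.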


eLife | 2017

Towards deep learning with segregated dendrites

Jordan Guerguiev; Timothy P. Lillicrap; Blake A. Richards

Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the neocortex optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. Thanks to this segregation, neurons in different layers of the network can coordinate synaptic weight updates. As a result, the network learns to categorize images better than a single layer network. Furthermore, we show that our algorithm takes advantage of multilayer architectures to identify useful higher-order representations—the hallmark of deep learning. This work demonstrates that deep learning can be achieved using segregated dendritic compartments, which may help to explain the morphology of neocortical pyramidal neurons.
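
A heavily simplified sketch of the compartmental idea, not the paper's full algorithm (which uses spiking dynamics and dendritic plateau potentials): each hidden unit keeps feedforward drive in a "basal" compartment and top-down feedback in a separate "apical" compartment, and the change in apical input when the output is nudged toward a target serves as that unit's learning signal. All sizes, gains, and the toy task are assumptions.

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

rng = np.random.default_rng(2)
n_in, n_hid, n_out = 20, 15, 5
W1 = rng.normal(0, 0.1, (n_hid, n_in))     # feedforward weights onto basal dendrites
W2 = rng.normal(0, 0.1, (n_out, n_hid))
Y = rng.normal(0, 0.5, (n_hid, n_out))     # fixed top-down weights onto apical dendrites
T = rng.normal(0, 1, (n_out, n_in)) / np.sqrt(n_in)   # toy target mapping (assumed)
lr = 0.1

for step in range(5001):
    x = rng.normal(0, 1, (n_in, 1))
    t = sigmoid(T @ x)                     # desired output for this input
    h = sigmoid(W1 @ x)                    # basal/somatic activity, forward phase
    y = sigmoid(W2 @ h)
    apical_fwd = Y @ y                     # apical input without a teaching signal
    apical_tgt = Y @ t                     # apical input when the output is nudged
    e_out = t - y
    W2 += lr * e_out @ h.T
    delta_h = (apical_tgt - apical_fwd) * h * (1 - h)   # apical difference = hidden error
    W1 += lr * delta_h @ x.T
    if step % 1000 == 0:
        print(step, "mse:", float(np.mean(e_out ** 2)))

Because the feedforward and feedback pathways arrive at separate compartments, the hidden layer can compute this difference without the feedback contaminating its forward response.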


Evolution | 2011

Inclusive fitness analysis on mathematical groups.

Peter D. Taylor; Timothy P. Lillicrap; Daniel Cownden

Recent work on the evolution of behaviour is set in a structured population, providing a systematic way to describe gene flow and behavioural interactions. To obtain analytical results one needs a structure with considerable regularity. Our results apply to such “homogeneous” structures (e.g., lattices, cycles, and island models). This regularity has been formally described by a “node‐transitivity” condition but in mathematics, such internal symmetry is powerfully described by the theory of mathematical groups. Here, this theory provides elegant direct arguments for a more general version of a number of existing results. Our main result is that in large “group‐structured” populations, primary fitness effects on others play no role in the evolution of the behaviour. The competitive effects of such a trait cancel the primary effects, and the inclusive fitness effect is given by the direct effect of the actor on its own fitness. This result is conditional on a number of assumptions such as (1) whether generations overlap, (2) whether offspring dispersal is symmetric, (3) whether the trait affects fecundity or survival, and (4) whether the underlying group is abelian. We formulate a number of results of this type in finite and infinite populations for both Moran and Wright–Fisher demographies.
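
In symbols (our notation, not necessarily the paper's), the main result reads as a cancellation in the inclusive-fitness sum:

\[
W_{\mathrm{IF}} \;=\; -c \;+\; \sum_j b_j R_j
\qquad\Longrightarrow\qquad
W_{\mathrm{IF}} \;=\; -c,
\]

where c is the fitness cost to the actor, b_j the primary benefit to recipient j, and R_j relatedness: in a large homogeneous group-structured population, the competitive side effects of each benefit b_j exactly offset its primary effect, leaving only the actor's direct effect on its own fitness.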


Neural Computation | 2008

Sensitivity derivatives for flexible sensorimotor learning

Mohamed N Abdelghani; Timothy P. Lillicrap; Douglas B. Tweed

To learn effectively, an adaptive controller needs to know its sensitivity derivatives: the variables that quantify how system performance depends on the commands from the controller. In the case of biological sensorimotor control, no one has explained how those derivatives themselves might be learned, and some authors suggest they are not learned at all but are known innately. Here we show that this knowledge cannot be solely innate, given the adaptive flexibility of neural systems. And we show how it could be learned using forms of information transport that are available in the brain. The mechanism, which we call implicit supervision, helps explain the flexibility and speed of sensorimotor learning and our ability to cope with high-dimensional work spaces and tools.
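
A toy illustration of the role of a sensitivity derivative (a simplification, not the paper's "implicit supervision" mechanism): the plant is y = g * u with an unknown gain g, the controller descends the squared error using an estimate s of dy/du, and it refines s online from observed (command change, output change) pairs. The gains and learning rates are arbitrary choices.

import numpy as np

g = -2.0            # true plant gain; its sign is unknown to the controller
s = 1.0             # controller's estimate of the sensitivity derivative dy/du
u, target = 0.0, 1.0
lr_u, lr_s = 0.1, 0.3
prev_u = prev_y = None

for step in range(60):
    y = g * u
    if prev_u is not None and abs(u - prev_u) > 1e-8:
        s += lr_s * ((y - prev_y) / (u - prev_u) - s)   # refine dy/du estimate
    prev_u, prev_y = u, y
    error = y - target
    u -= lr_u * error * s        # gradient step using the *estimated* derivative
    if step % 10 == 0:
        print(step, "y:", round(y, 3), "s:", round(s, 3))

With the initial (wrong-signed) estimate s = 1, the first commands push the output the wrong way; once the finite differences correct s toward g, the controller converges on the target.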


Experimental Brain Research | 2013

Adapting to inversion of the visual field: a new twist on an old problem.

Timothy P. Lillicrap; Pablo Moreno-Briseño; Rosalinda Díaz; Douglas Tweed; Nikolaus F. Troje; Juan Fernandez-Ruiz

While sensorimotor adaptation to prisms that displace the visual field takes minutes, adapting to an inversion of the visual field takes weeks. In spite of a long history of study, the basis of this profound difference remains poorly understood. Here, we describe the computational issue that underpins this phenomenon and present experiments designed to explore the mechanisms involved. We show that displacements can be mastered without altering the update rule used to adjust the motor commands. In contrast, inversions flip the sign of crucial variables called sensitivity derivatives—variables that capture how changes in motor commands affect task error—and therefore require an update of the feedback learning rule itself. Models of sensorimotor learning that assume internal estimates of these variables are known and fixed predict that when the sign of a sensitivity derivative is flipped, adaptations should become increasingly counterproductive. In contrast, models that relearn these derivatives predict that performance should initially worsen, but then improve smoothly and remain stable once the estimate of the new sensitivity derivative has been corrected. Here, we evaluated these predictions by looking at human performance on a set of pointing tasks with vision perturbed by displacing and inverting prisms. Our experimental data corroborate the classic observation that subjects reduce their motor errors under inverted vision. Subjects' accuracy initially worsened and then improved. However, improvement was jagged rather than smooth and performance remained unstable even after 8 days of continually inverted vision, suggesting that subjects improve via an unknown mechanism, perhaps a combination of cognitive and implicit strategies. These results offer a new perspective on classic work with inverted vision.
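
A sketch of the two model classes the abstract contrasts, on the same kind of toy plant as in the previous entry (illustrative, not the paper's simulations): when the plant's sign flips mid-run, a controller with a fixed sensitivity estimate drives error up indefinitely, while one that relearns the estimate worsens briefly and then recovers.

import numpy as np

def run(relearn, steps=80):
    g, s, u, target = 1.0, 1.0, 0.0, 1.0   # start with a correct estimate s = g
    lr_u, lr_s = 0.1, 0.3
    prev_u = prev_y = None
    errors = []
    for step in range(steps):
        if step == steps // 2:
            g = -g                          # the "inversion": plant sign flips
        y = g * u
        if relearn and prev_u is not None and abs(u - prev_u) > 1e-8:
            ratio = np.clip((y - prev_y) / (u - prev_u), -5, 5)  # tame the
            s += lr_s * (ratio - s)                              # cross-flip transient
        prev_u, prev_y = u, y
        error = y - target
        errors.append(abs(error))
        u -= lr_u * error * s
    return errors

fixed, adaptive = run(relearn=False), run(relearn=True)
print("final |error|, fixed s:    ", round(fixed[-1], 3))    # keeps growing
print("final |error|, relearned s:", round(adaptive[-1], 3)) # recovers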


Journal of Neurophysiology | 2015

Temporal evolution of both premotor and motor cortical tuning properties reflect changes in limb biomechanics.

Aaron J. Suminski; Philip Mardoum; Timothy P. Lillicrap; Nicholas G. Hatsopoulos

A prevailing theory in the cortical control of limb movement posits that premotor cortex initiates a high-level motor plan that is transformed by the primary motor cortex (M1) into a low-level motor command to be executed. This theory implies that the premotor cortex is shielded from the motor periphery, and therefore, its activity should not represent the low-level features of movement. Contrary to this theory, we show that both dorsal (PMd) and ventral (PMv) premotor cortices exhibit population-level tuning properties that reflect the biomechanical properties of the periphery, similar to those observed in M1. We recorded single-unit activity from M1, PMd, and PMv and characterized their tuning properties while six rhesus macaques performed a reaching task in the horizontal plane. Each area exhibited a bimodal distribution of preferred directions during execution, consistent with the known biomechanical anisotropies of the muscles and limb segments. Moreover, these distributions varied in orientation or shape from planning to execution. A network model shows that such population dynamics are linked to a change in the biomechanics of the limb as the monkey begins to move, specifically to the state-dependent properties of muscles. We suggest that, like M1, neural populations in PMd and PMv are more directly linked with the motor periphery than previously thought.


Journal of Neurophysiology | 2010

Complex Spatiotemporal Tuning in Human Upper-Limb Muscles

J. Andrew Pruszynski; Timothy P. Lillicrap; Stephen H. Scott

Correlations between neural activity in primary motor cortex (M1) and arm kinematics have recently been shown to be temporally extensive and spatially complex. These results provide a sophisticated account of M1 processing and suggest that M1 neurons encode high-level movement trajectories, termed pathlets. However, interpreting pathlets is difficult because the mapping between M1 activity and arm kinematics is indirect: M1 activity can generate movement only via spinal circuitry and the substantial complexities of the musculoskeletal system. We hypothesized that filter-like complexities of the musculoskeletal system are sufficient to generate temporally extensive and spatially complex correlations between motor commands and arm kinematics. To test this hypothesis, we extended the computational and experimental method proposed for extracting pathlets from M1 activity to extract pathlets from muscle activity. Unlike M1 activity, it is clear that muscle activity does not encode arm kinematics. Accordingly, any spatiotemporal correlations in muscle pathlets can be attributed to musculoskeletal complexities rather than explicit higher-order representations. Our results demonstrate that extracting muscle pathlets is a robust and repeatable process. Pathlets extracted from the same muscle but different subjects or from the same muscle on different days were remarkably similar and roughly appropriate for that muscle's mechanical action. Critically, muscle pathlets included extensive spatiotemporal complexity, including kinematic features before and after the present muscle activity, similar to that reported for M1 neurons. These results suggest the possibility that M1 pathlets at least partly reflect the filter-like complexities of the periphery rather than high-level representations.
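
A simplified linear sketch of the extraction procedure (the papers use more elaborate models and real recordings): regress muscle activity onto hand velocity over a window of leads and lags, so the fitted filter over time, the "pathlet", describes the spatiotemporal kinematic pattern associated with that muscle's activity. The data here are synthetic, generated from a known filter so recovery can be checked.

import numpy as np

rng = np.random.default_rng(4)
T, lags = 5000, np.arange(-10, 11)        # samples; leads/lags in time bins

vel = rng.normal(0, 1, (T, 2))            # synthetic hand velocity (vx, vy)
true_filt = rng.normal(0, 1, (lags.size, 2))
emg = np.zeros(T)
for i, lag in enumerate(lags):            # synthetic EMG from the known filter
    emg += np.roll(vel[:, 0], -lag) * true_filt[i, 0]
    emg += np.roll(vel[:, 1], -lag) * true_filt[i, 1]
emg += rng.normal(0, 0.5, T)

# Design matrix of lagged velocities, fit with ridge regression.
X = np.column_stack([np.roll(vel, -lag, axis=0) for lag in lags])  # (T, 2*len(lags))
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ emg)
pathlet = w.reshape(lags.size, 2)         # the filter over time = the "pathlet"

print("filter recovery corr:", np.corrcoef(pathlet.ravel(), true_filt.ravel())[0, 1])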


Journal of Logic and Computation | 2012

Relevance Realization and the Emerging Framework in Cognitive Science

John Vervaeke; Timothy P. Lillicrap; Blake A Richards

We argue that an explanation of relevance realization is a pervasive problem within cognitive science, and that it is becoming the criterion of the cognitive in terms of which a new framework for doing cognitive science is emerging. We articulate that framework and then make use of it to provide the beginnings of a theory of relevance realization that incorporates many existing insights implicit within the contributing disciplines of cognitive science. We also introduce some theoretical and potentially technical innovations motivated by the articulation of those insights. Finally, we show how the explication of the framework and development of the theory help to clear up some important incompleteness and confusion within both Montague's work and Sperber and Wilson's theory of relevance.


Journal of Neurophysiology | 2016

Primary motor cortex neurons classified in a postural task predict muscle activation patterns in a reaching task

Ethan A. Heming; Timothy P. Lillicrap; Mohsen Omrani; Troy M. Herter; J. Andrew Pruszynski; Stephen H. Scott

Primary motor cortex (M1) activity correlates with many motor variables, making it difficult to demonstrate how it participates in motor control. We developed a two-stage process that separates classifying the motor field of M1 neurons from predicting the spatiotemporal patterns of that motor field during reaching. We tested our approach with a neural network model that controlled a two-joint arm, showing the statistical relationship between network connectivity and neural activity across different motor tasks. In rhesus monkeys, M1 neurons classified by this method showed preferred reaching directions similar to those of their associated muscle groups. Importantly, the neural population signals predicted the spatiotemporal dynamics of their associated muscle groups, although a subgroup of atypical neurons reversed their directional preference, suggesting a selective role in antagonist control. These results highlight that M1 provides important details on the spatiotemporal patterns of muscle activity during motor skills such as reaching.
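
A schematic of the two-stage logic on synthetic data (an illustration of the analysis structure, not the paper's pipeline): stage one classifies each neuron's motor field by its correlation with muscle activity in a "postural" dataset; stage two asks whether each class's population signal predicts its muscle in a separate "reaching" dataset. All signals and parameters are assumptions.

import numpy as np

rng = np.random.default_rng(5)
n_neurons, T = 40, 1000

def muscles(T):
    t = np.linspace(0, 20, T)
    return np.column_stack([np.sin(t), np.cos(1.7 * t)])   # two muscle groups

gain = rng.normal(0, 1, n_neurons)
field = rng.integers(0, 2, n_neurons)                      # each neuron's true motor field

def activity(m):                                           # neurons driven by one muscle
    return gain[:, None] * m[:, field].T + rng.normal(0, 0.8, (n_neurons, m.shape[0]))

m_post, m_reach = muscles(T), muscles(T) * 1.5             # two "tasks"
a_post, a_reach = activity(m_post), activity(m_reach)

# Stage 1: classify motor fields from the postural data alone.
corr = np.array([[np.corrcoef(a_post[i], m_post[:, j])[0, 1]
                  for j in range(2)] for i in range(n_neurons)])
assigned = np.abs(corr).argmax(axis=1)

# Stage 2: does each class's sign-corrected population mean track its muscle
# in the held-out reaching data?
for j in range(2):
    idx = assigned == j
    signs = np.sign(corr[idx, j])
    pop = (signs[:, None] * a_reach[idx]).mean(axis=0)
    print(f"class {j}: reach-task corr =",
          round(np.corrcoef(pop, m_reach[:, j])[0, 1], 3))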

Collaboration


Dive into Timothy P. Lillicrap's collaborations.

Top Co-Authors

J. Andrew Pruszynski

University of Western Ontario


Isaac Kurtzer

New York Institute of Technology College of Osteopathic Medicine
