Publication


Featured research published by Ian S. Howard.


Experimental Brain Research | 2008

The statistics of natural hand movements

James N. Ingram; Konrad P. Körding; Ian S. Howard; Daniel M. Wolpert

Humans constantly use their hands to interact with the environment and they engage spontaneously in a wide variety of manual activities during everyday life. In contrast, laboratory-based studies of hand function have used a limited range of predefined tasks. The natural movements made by the hand during everyday life have thus received little attention. Here, we developed a portable recording device that can be worn by subjects to track movements of their right hand as they go about their daily routine outside of a laboratory setting. We analyse the kinematic data using various statistical methods. Principal component analysis of the joint angular velocities showed that the first two components were highly conserved across subjects, explained 60% of the variance and were qualitatively similar to those reported in previous studies of reach-to-grasp movements. To examine the independence of the digits, we developed a measure based on the degree to which the movements of each digit could be linearly predicted from the movements of the other four digits. Our independence measure was highly correlated with results from previous studies of the hand, including the estimated size of the digit representations in primary motor cortex and other laboratory measures of digit individuation. Specifically, the thumb was found to be the most independent of the digits and the index finger was the most independent of the fingers. These results support and extend laboratory-based studies of the human hand.
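
As a rough illustration of the two analyses described above, the sketch below runs a principal component analysis on joint angular velocities and computes a per-digit independence score from a linear prediction of each digit's joints by the other digits. The array shapes, the joint-to-digit mapping, and the use of 1 − R² as the independence score are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch (not the authors' code) of the two analyses described above,
# assuming recordings are available as a (samples x joints) array of joint
# angular velocities and a per-digit grouping of the joint columns.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
velocities = rng.standard_normal((10_000, 20))                        # placeholder data: 20 joint angular velocities
digit_joints = {d: list(range(d * 4, d * 4 + 4)) for d in range(5)}   # hypothetical joint-to-digit map

# 1) Principal component analysis of the joint angular velocities.
pca = PCA(n_components=2)
pca.fit(velocities)
print("variance explained by first two PCs:", pca.explained_variance_ratio_.sum())

# 2) Digit-independence measure: how well can each digit's joint velocities be
#    linearly predicted from the joints of the other four digits?
#    A lower R^2 (higher 1 - R^2) implies a more independent digit.
for digit, cols in digit_joints.items():
    others = [c for d, js in digit_joints.items() if d != digit for c in js]
    model = LinearRegression().fit(velocities[:, others], velocities[:, cols])
    r2 = model.score(velocities[:, others], velocities[:, cols])
    print(f"digit {digit}: independence = {1 - r2:.2f}")
```

With real recordings, the thumb would be expected to show the highest independence score and the index finger the highest among the fingers, as reported in the abstract.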


Journal of Neuroscience Methods | 2009

A modular planar robotic manipulandum with end-point torque control

Ian S. Howard; James N. Ingram; Daniel M. Wolpert

Robotic manipulanda are extensively used in the investigation of the motor control of human arm movements. They permit the application of translational forces to the arm based on its state and can be used to probe issues ranging from mechanisms of neural control to biomechanics. However, most current designs are optimized for studying either motor learning or stiffness, and few include end-point torque control, which is important for the simulation of objects and the study of tool use. Here we describe a modular, general-purpose, two-dimensional planar manipulandum (vBOT) primarily optimized for dynamic learning paradigms. It employs a carbon-fibre arm arranged as a parallelogram, which is driven by motors via timing pulleys. The design minimizes the intrinsic dynamics of the manipulandum without active compensation. A novel variant of the design (WristBOT) can apply torques at the handle using an add-on cable-drive mechanism. In a second variant (StiffBOT), a more rigid arm and zero-backlash belts can be substituted, making the StiffBOT more suitable for the study of stiffness. The three variants can be used with custom-built display rigs, mounting, and air tables. We investigated the performance of the vBOT and its variants in terms of effective end-point mass, viscosity, and stiffness. Finally, we present an object manipulation task using the WristBOT, demonstrating that subjects can perceive the orientation of the principal axis of an object based on haptic feedback arising from its rotational dynamics.
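
The end-point properties mentioned above (effective mass, viscosity, and stiffness) are commonly summarized with a second-order impedance model; the sketch below is a generic characterization, not necessarily the exact identification procedure used for the vBOT.

```latex
% Second-order impedance model of the handle (a common characterization,
% not necessarily the exact procedure used for the vBOT):
%   F(t) : force measured at the handle
%   x(t) : end-point displacement from rest
\[
  F(t) \;=\; M\,\ddot{x}(t) \;+\; B\,\dot{x}(t) \;+\; K\,x(t)
\]
% where M, B and K are the effective end-point mass, viscosity and stiffness.
% Each can be estimated by regressing measured handle force against recorded
% end-point acceleration, velocity and position.
```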


PLOS Biology | 2004

A Neuroeconomics Approach to Inferring Utility Functions in Sensorimotor Control

Konrad P. Körding; Izumi Fukunaga; Ian S. Howard; James N. Ingram; Daniel M. Wolpert

Making choices is a fundamental aspect of human life. For over a century experimental economists have characterized the decisions people make based on the concept of a utility function. This function increases with increasing desirability of the outcome, and people are assumed to make decisions so as to maximize utility. When utility depends on several variables, indifference curves arise that represent outcomes with identical utility that are therefore equally desirable. Whereas in economics utility is studied in terms of goods and services, the sensorimotor system may also have utility functions defining the desirability of various outcomes. Here, we investigate the indifference curves when subjects experience forces of varying magnitude and duration. Using a two-alternative forced-choice paradigm, in which subjects chose between different magnitude–duration profiles, we inferred the indifference curves and the utility function. Such a utility function defines, for example, whether subjects prefer to lift a 4-kg weight for 30 s or a 1-kg weight for a minute. The measured utility function depends nonlinearly on the force magnitude and duration and was remarkably conserved across subjects. This suggests that the utility function, a central concept in economics, may be applicable to the study of sensorimotor control.
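
To make the indifference-curve idea concrete, the block below writes utility as a function of force magnitude F and duration T and works through the 4-kg/30-s versus 1-kg/one-minute example. The power-law form is purely illustrative; the paper reports a nonlinear utility measured from choices, not this formula.

```latex
% Illustrative only: the study measures a nonlinear utility of force magnitude F
% and duration T; the power-law form below is an assumption, not the fitted one.
\[
  U(F, T) \;=\; -\,F^{\alpha}\, T^{\beta}, \qquad \alpha, \beta > 0
\]
% An indifference curve is a level set U(F, T) = c: all (F, T) pairs on it are
% equally desirable.  With alpha = beta = 1, lifting 4 kg for 30 s (cost 120 kg.s)
% would be exactly as aversive as 1 kg for 120 s, and worse than 1 kg for 60 s
% (cost 60 kg.s); the nonlinearity measured in the study shifts where these
% curves actually lie.
```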


Current Biology | 2010

Multiple Grasp-Specific Representations of Tool Dynamics Mediate Skillful Manipulation

James N. Ingram; Ian S. Howard; J. Randall Flanagan; Daniel M. Wolpert

Skillful tool use requires knowledge of the dynamic properties of tools in order to specify the mapping between applied force and tool motion [1–3]. Importantly, this mapping depends on the orientation of the tool in the hand. Here we investigate the representation of dynamics during skillful manipulation of a tool that can be grasped at different orientations. We ask whether the motor system uses a single general representation of dynamics for all grasp contexts or whether it uses multiple grasp-specific representations. Using a novel robotic interface [4], subjects rotated a virtual tool whose orientation relative to the hand could be varied. Subjects could immediately anticipate the force direction for each orientation of the tool based on its visual geometry, and, with experience, they learned to parameterize the force magnitude. Surprisingly, this parameterization of force magnitude showed limited generalization when the orientation of the tool changed. Had subjects parameterized a single general representation, full generalization would be expected. Thus, our results suggest that object dynamics are captured by multiple representations, each of which encodes the mapping associated with a specific grasp context. We suggest that the concept of grasp-specific representations may provide a unifying framework for interpreting previous results related to dynamics learning.


The Journal of Neuroscience | 2012

Gone in 0.6 Seconds: The Encoding of Motor Memories Depends on Recent Sensorimotor States

Ian S. Howard; James N. Ingram; David W. Franklin; Daniel M. Wolpert

Real-world tasks often require movements that depend on a previous action or on changes in the state of the world. Here we investigate whether motor memories encode the current action in a manner that depends on previous sensorimotor states. Human subjects performed trials in which they made movements in a randomly selected clockwise or counterclockwise velocity-dependent curl force field. Movements during this adaptation phase were preceded by a contextual phase that determined which of the two fields would be experienced on any given trial. As expected from previous research, when static visual cues were presented in the contextual phase, strong interference (resulting in an inability to learn either field) was observed. In contrast, when the contextual phase involved subjects making a movement that was continuous with the adaptation-phase movement, a substantial reduction in interference was seen. As the time between the contextual and adaptation movement increased, so did the interference, reaching a level similar to that seen for static visual cues for delays >600 ms. This contextual effect generalized to purely visual motion, active movement without vision, passive movement, and isometric force generation. Our results show that sensorimotor states that differ in their recent temporal history can engage distinct representations in motor memory, but this effect decays progressively over time and is abolished by ∼600 ms. This suggests that motor memories are encoded not simply as a mapping from current state to motor command but are encoded in terms of the recent history of sensorimotor states.
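
The velocity-dependent curl field referred to here is commonly written as a force proportional and perpendicular to the hand velocity; the gain below is illustrative, as the summary does not give the value used in the study.

```latex
% Common form of a velocity-dependent curl force field (the gain b is
% illustrative; the value used in the study is not stated in this summary):
\[
  \mathbf{F} \;=\; b
  \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}
  \dot{\mathbf{x}}
\]
% where \dot{x} is the hand velocity.  The force is always perpendicular to the
% direction of movement; flipping the sign of b switches between the clockwise
% and counterclockwise fields that subjects experienced on random trials.
```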


Journal of Neurophysiology | 2011

Separate representations of dynamics in rhythmic and discrete movements: evidence from motor learning

Ian S. Howard; James N. Ingram; Daniel M. Wolpert

Rhythmic and discrete arm movements occur ubiquitously in everyday life, and there is a debate as to whether these two classes of movements arise from the same or different underlying neural mechanisms. Here we examine interference in a motor-learning paradigm to test whether rhythmic and discrete movements employ at least partially separate neural representations. Subjects were required to make circular movements of their right hand while they were exposed to a velocity-dependent force field that perturbed the circularity of the movement path. The direction of the force-field perturbation reversed at the end of each block of 20 revolutions. When subjects made only rhythmic or only discrete circular movements, interference was observed when switching between the two opposing force fields. However, when subjects alternated between blocks of rhythmic and discrete movements, such that each was uniquely associated with one of the perturbation directions, interference was significantly reduced. Only in this case did subjects learn to corepresent the two opposing perturbations, suggesting that different neural resources were employed for the two movement types. Our results provide further evidence that rhythmic and discrete movements employ at least partially separate control mechanisms in the motor system.


Journal of Neurophysiology | 2010

Context-Dependent Partitioning of Motor Learning in Bimanual Movements

Ian S. Howard; James N. Ingram; Daniel M. Wolpert

Human subjects easily adapt to single dynamic or visuomotor perturbations. In contrast, when two opposing dynamic or visuomotor perturbations are presented sequentially, interference is often observed. We examined the effect of bimanual movement context on interference between opposing perturbations using pairs of contexts, in which the relative direction of movement between the two arms was different across the pair. When each perturbation direction was associated with a different bimanual context, such as movement of the arms in the same direction versus movement in the opposite direction, interference was dramatically reduced. This occurred over a short period of training and was seen for both dynamic and visuomotor perturbations, suggesting a partitioning of motor learning for the different bimanual contexts. Further support for this was found in a series of transfer experiments. Having learned a single dynamic or visuomotor perturbation in one bimanual context, subjects showed incomplete transfer of this learning when the context changed, even though the perturbation remained the same. In addition, we examined a bimanual context in which one arm was moved passively and show that the reduction in interference requires active movement. The sensory consequences of movement are thus insufficient to allow opposing perturbations to be co-represented. Our results suggest different bimanual movement contexts engage at least partially separate representations of dynamics and kinematics in the motor system.


Journal of Neurophysiology | 2013

The effect of contextual cues on the encoding of motor memories

Ian S. Howard; Daniel M. Wolpert; David W. Franklin

Several studies have shown that sensory contextual cues can reduce the interference observed during learning of opposing force fields. However, because each study examined a small set of cues, often in a unique paradigm, the relative efficacy of different sensory contextual cues is unclear. In the present study we quantify how seven contextual cues, some investigated previously and some novel, affect the formation and recall of motor memories. Subjects made movements in a velocity-dependent curl field, with direction varying randomly from trial to trial but always associated with a unique contextual cue. Linking field direction to the cursor or background color, or to peripheral visual motion cues, did not reduce interference. In contrast, the orientation of a visual object attached to the hand cursor significantly reduced interference, albeit by a small amount. When the fields were associated with movement in different locations in the workspace, a substantial reduction in interference was observed. We tested whether this reduction in interference was due to the different locations of the visual feedback (targets and cursor) or the movements (proprioceptive). When the fields were associated only with changes in visual display location (movements always made centrally) or only with changes in the movement location (visual feedback always displayed centrally), a substantial reduction in interference was observed. These results show that although some visual cues can lead to the formation and recall of distinct representations in motor memory, changes in spatial visual and proprioceptive states of the movement are far more effective than changes in simple visual contextual cues.


The Journal of Neuroscience | 2008

Composition and decomposition in bimanual dynamic learning

Ian S. Howard; James N. Ingram; Daniel M. Wolpert

Our ability to skillfully manipulate an object often involves the motor system learning to compensate for the dynamics of the object. When the two arms learn to manipulate a single object they can act cooperatively, whereas when they manipulate separate objects they control each object independently. We examined how learning transfers between these two bimanual contexts by applying force fields to the arms. In a coupled context, a single dynamic is shared between the arms, and in an uncoupled context separate dynamics are experienced independently by each arm. In a composition experiment, we found that when subjects had learned uncoupled force fields they were able to transfer to a coupled field that was the sum of the two fields. However, the contribution of each arm repartitioned over time so that, when they returned to the uncoupled fields, the error initially increased but rapidly reverted to the previous level. In a decomposition experiment, after subjects learned a coupled field, their error increased when exposed to uncoupled fields that were orthogonal components of the coupled field. However, when the coupled field was reintroduced, subjects rapidly readapted. These results suggest that the representations of dynamics for uncoupled and coupled contexts are partially independent. We found additional support for this hypothesis by showing significant learning of opposing curl fields when the context, coupled versus uncoupled, was alternated with the curl field direction. These results suggest that the motor system is able to use partially separate representations for dynamics of the two arms acting on a single object and two arms acting on separate objects.


PLOS Computational Biology | 2011

A Single-Rate Context-Dependent Learning Process Underlies Rapid Adaptation to Familiar Object Dynamics

James N. Ingram; Ian S. Howard; J. Randall Flanagan; Daniel M. Wolpert

Motor learning has been extensively studied using dynamic (force-field) perturbations. These induce movement errors that result in adaptive changes to the motor commands. Several state-space models have been developed to explain how trial-by-trial errors drive the progressive adaptation observed in such studies. These models have been applied to adaptation involving novel dynamics, which typically occurs over tens to hundreds of trials, and which appears to be mediated by a dual-rate adaptation process. In contrast, when manipulating objects with familiar dynamics, subjects adapt rapidly within a few trials. Here, we apply state-space models to familiar dynamics, asking whether adaptation is mediated by a single-rate or dual-rate process. Previously, we reported a task in which subjects rotate an object with known dynamics. By presenting the object at different visual orientations, adaptation was shown to be context-specific, with limited generalization to novel orientations. Here we show that a multiple-context state-space model, with a generalization function tuned to visual object orientation, can reproduce the time-course of adaptation and de-adaptation as well as the observed context-dependent behavior. In contrast to the dual-rate process associated with novel dynamics, we show that a single-rate process mediates adaptation to familiar object dynamics. The model predicts that during exposure to the object across multiple orientations, there will be a degree of independence for adaptation and de-adaptation within each context, and that the states associated with all contexts will slowly de-adapt during exposure in one particular context. We confirm these predictions in two new experiments. Results of the current study thus highlight similarities and differences in the processes engaged during exposure to novel versus familiar dynamics. In both cases, adaptation is mediated by multiple context-specific representations. In the case of familiar object dynamics, however, the representations can be engaged based on visual context, and are updated by a single-rate process.
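
A minimal sketch of the model class described above: a single-rate, multiple-context state-space learner whose credit assignment is tuned to visual object orientation. The Gaussian generalization kernel, the retention and learning rates, and the orientation set are illustrative assumptions rather than the fitted model.

```python
# Single-rate, multiple-context state-space learner (illustrative sketch,
# not the authors' implementation).
import numpy as np

orientations = np.deg2rad([0, 90, 180, 270])   # hypothetical object orientations (contexts)
x = np.zeros(len(orientations))                # adaptation state for each context
A, B = 0.99, 0.2                               # retention factor and single learning rate (assumed)
sigma = np.deg2rad(45)                         # assumed width of generalization across orientation

def generalization(theta, contexts, sigma):
    """Gaussian tuning of credit assignment to each context,
    based on circular distance from the viewed orientation theta."""
    d = np.angle(np.exp(1j * (contexts - theta)))
    return np.exp(-0.5 * (d / sigma) ** 2)

def trial(x, theta, perturbation):
    """One trial: output is the generalization-weighted state; the error
    updates every context in proportion to its weight (single rate)."""
    g = generalization(theta, orientations, sigma)
    output = g @ x
    error = perturbation - output
    x = A * x + B * error * g
    return x, error

# Adapt at 0 degrees, then probe transfer to 90 degrees.
for _ in range(50):
    x, err = trial(x, orientations[0], perturbation=1.0)
print("state per context after training at 0 deg:", np.round(x, 2))
_, probe_error = trial(x, orientations[1], perturbation=1.0)
print("error on first trial at 90 deg:", round(float(probe_error), 2))
```

Because all contexts share the retention term, states associated with untrained orientations slowly decay during exposure in one context, consistent with the prediction described in the abstract.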

Collaboration


Dive into Ian S. Howard's collaborations.

Top Co-Authors

Mark Huckvale

University College London

Piers Messum

University College London
