Liana E. Brown
Trent University
Publications
Featured research published by Liana E. Brown.
Journal of Cognitive Neuroscience | 2009
Liana E. Brown; Elizabeth T. Wilson; Paul L. Gribble
Neural representations of novel motor skills can be acquired through visual observation. We used repetitive transcranial magnetic stimulation (rTMS) to test the idea that this “motor learning by observing” is based on engagement of neural processes for learning in the primary motor cortex (M1). Human subjects who observed another person learning to reach in a novel force environment imposed by a robot arm performed better when later tested in the same environment than subjects who observed movements in a different environment. rTMS applied to M1 after observation reduced the beneficial effect of observing congruent forces, and eliminated the detrimental effect of observing incongruent forces. Stimulation of a control site in the frontal cortex had no effect on reaching. Our findings represent the first direct evidence that neural representations of motor skills in M1, a cortical region whose role has been firmly established for active motor learning, also underlie motor learning by observing.
Experimental Brain Research | 2005
Liana E. Brown; Brooke A. Halpert; Melvyn A. Goodale
Anatomical and physiological evidence suggests that vision-for-perception and vision-for-action may be differently sensitive to increasingly peripheral stimuli, and to stimuli in the upper and lower visual fields (VF). We asked participants to fixate one of 24 randomly presented LEDs arranged radially in eight directions and at three eccentricities around a central target location. One of two (small, large) target objects was presented briefly, and participants responded in two ways. For the action task, they reached for and grasped the target. For the perception task, they estimated target height by adjusting thumb-finger separation. In a final set of trials for each task, participants knew that target size would remain constant. We found that peak aperture increased with eccentricity for grasping, but not for perceptual estimations of size. In addition, peak grip aperture, but not size-estimation aperture, was more variable when targets were viewed in the upper as opposed to the lower VF. A second experiment demonstrated that prior knowledge about object size significantly reduced the variability of perceptual estimates, but had no effect on the variability of grip aperture. Overall, these results support the claim that peripheral VF stimuli are processed differently for perception and action. Moreover, they support the idea that the lower VF is specialized for the control of manual prehension. Finally, the effect of prior knowledge about target size on performance substantiates claims that perception is more tightly linked to memory systems than action.
Experimental Brain Research | 2003
Liana E. Brown; David A. Rosenbaum; Robert L. Sainburg
Previous research has shown that even when limb position drifts considerably during continuous blind performance, the topological and metrical properties of generated hand paths remain remarkably invariant. We tested two possible accounts of this intriguing effect. According to one hypothesis, position drift is due to degradation of limb-position information. This hypothesis predicted that drift of static hand positions at movement reversals should not depend on movement speed. According to the other hypothesis, position drift is due to degradation of movement information. This hypothesis predicted that drift of static hand positions at movement reversals should vary with movement speed. We tested these two hypotheses by varying the required movement speed when normal human adults performed back-and-forth manual positioning movements in the absence of visual feedback. Movement distance and direction were well preserved even though hand positions between movements drifted considerably. In accord with the movement error hypothesis, but not in accord with the position hypothesis, the rate at which hand positions drifted depended on movement speed. The data are consistent with the idea that hand position, which defines the origin of the trajectory control coordinate system, and movement trajectory are controlled by distinct neural mechanisms.
The Journal of Neuroscience | 2007
Liana E. Brown; Elizabeth T. Wilson; Melvyn A. Goodale; Paul L. Gribble
There are reciprocal connections between visual and motor areas of the cerebral cortex. Although recent studies have provided intriguing new insights, in comparison with the volume of research on the visual control of movement, relatively little is known about how movement influences vision. The motor system is well suited to learn about environmental forces. Does environmental force information, learned by the motor system, influence visual processing? Here, we show that learning to compensate for a force applied to the hand influenced how participants predicted target motion for interception. Subjects trained in one of three constant force fields by making reaching movements while holding a robotic manipulandum. The robot applied forces in a null [null force field (NFF)], leftward [leftward force field (LFF)], or rightward [rightward force field (RFF)] direction. Training was followed immediately by an interception task. The target accelerated from left to right, and the subjects' task was to stab it. When viewing time was optimal for prediction, the RFF group initiated their responses earlier and hit more targets, and the LFF group initiated their responses later and hit fewer targets, than the NFF group. In follow-up experiments, we show that motor learning is necessary, and we rule out the possibility that explicit force-direction information drives how subjects altered their predictions of visual motion. Environmental force information, acquired by motor learning, influenced how the motion of nearby visual targets was predicted.
Neuropsychologia | 2009
Liana E. Brown; Brendan F. Morrissey; Melvyn A. Goodale
Here we show that pointing movements made to visual targets projected onto the palm of the hand are more precise and accurate than those made to targets projected onto the back of the hand. This advantage may be related to the fact that the number of cortical bimodal neurons coding both visual and tactile stimuli increases with tactile receptor density, which is known to be higher in glabrous than in hairy skin.
Journal of Neurophysiology | 2010
Liana E. Brown; Elizabeth T. Wilson; Sukhvinder S. Obhi; Paul L. Gribble
Watching an actor make reaching movements in a perturbing force field provides the observer with information about how to compensate for that force field. Here we asked two questions about the nature of the information provided to the observer. Is it important that the observer learn the difference between errant (curved) movements and goal (straight) movements by watching the actor progress in a relatively orderly fashion from highly curved to straight movements over a series of trials? Or is it sufficient that the observer sees only reaching errors in the force field (FF)? In the first experiment, we found that observers performed better if they observed reaches in a FF that was congruent, rather than incongruent, with the FF used in a later test. Observation-trial order had no effect on performance, suggesting that observers understood the goal in advance and perhaps learned about the force field by observing movement curvature. Next we asked whether observers learn optimally by observing the actor's mistakes (high-error trials), if they learn by watching the actor perform with expertise in the FF (low-error trials), or if they need to see a contrast between errant and goal behavior (a mixture of both high- and low-error trials). We found that observers who watched high-error trials were most affected by observation but that significant learning also occurred if observers watched only some high-error trials. This result suggests that observers learn to adapt their reaching to an unpredictable FF best when they see the actor making mistakes.
PLOS ONE | 2011
Liana E. Brown; Robert Doole; Nicole Malfait
Some visual-tactile (bimodal) cells have visual receptive fields (vRFs) that overlap and extend moderately beyond the skin of the hand. Neurophysiological evidence suggests, however, that a vRF will grow to encompass a hand-held tool following active tool use but not after passive holding. Why does active tool use, and not passive holding, lead to spatial adaptation near a tool? We asked whether spatial adaptation could be the result of motor or visual experience with the tool, and we distinguished between these alternatives by isolating motor from visual experience with the tool. Participants learned to use a novel, weighted tool. The active training group received both motor and visual experience with the tool, the passive training group received visual experience with the tool, but no motor experience, and finally, a no-training control group received neither visual nor motor experience using the tool. After training, we used a cueing paradigm to measure how quickly participants detected targets, varying whether the tool was placed near or far from the target display. Only the active training group detected targets more quickly when the tool was placed near, rather than far, from the target display. This effect of tool location was not present for either the passive-training or control groups. These results suggest that motor learning influences how visual space around the tool is represented.
Journal of Experimental Psychology: Human Perception and Performance | 2002
Liana E. Brown; Cathleen M. Moore; David A. Rosenbaum
Does visual processing differ for action and recognition? To address this question, the authors capitalized on research showing that color is preferred over binocular disparity in the ventral (recognition) stream, whereas disparity is preferred over color in the dorsal (action) stream. Participants searched for oblique targets among vertical distractors in displays defined only by color or disparity. Action-task participants stamped the target with a handheld block, whereas recognition-task participants lifted the block through a target-compatible gap. Analyses of reaction time and time-varying hand orientation showed that disparity and color displays were processed equally efficiently during action, but disparity was processed less efficiently than color during recognition. The results suggest that visual processing differs for action and recognition.
Frontiers in Psychology | 2013
Liana E. Brown; Melvyn A. Goodale
Research suggests that, just as targets near the hand are processed differently (near-hand effects), visual targets appearing near the tip of a hand-held real or virtual tool are treated differently than other targets. This paper reviews neurological and behavioral evidence relevant to near-tool effects and describes how the effect varies with the functional properties of the tool and the knowledge of the participant. In particular, the paper proposes that motor knowledge plays a key role in the appearance of near-tool effects.
Frontiers in Psychology | 2013
Robin M. Langerak; Carina L. La Mantia; Liana E. Brown
Visual targets can be processed more quickly and reliably when a hand is placed near the target. Both unimodal and bimodal representations of hands are largely lateralized to the contralateral hemisphere, and since each hemisphere demonstrates specialized cognitive processing, it is possible that targets appearing near the left hand may be processed differently than targets appearing near the right hand. The purpose of this study was to determine whether visual processing near the left and right hands interacts with hemispheric specialization. We presented hierarchical-letter stimuli (e.g., small characters used as local elements to compose large characters at the global level) near the left or right hands separately and instructed participants to discriminate the presence of target letters (X and O) from non-target letters (T and U) at either the global or local levels as quickly as possible. Targets appeared at either the global or local level of the display, at both levels, or were absent from the display; participants made foot-press responses. When discriminating target presence at the global level, participants responded more quickly to stimuli presented near the left hand than near either the right hand or in the no-hand condition. Hand presence did not influence target discrimination at the local level. Our interpretation is that left-hand presence may help participants discriminate global information, a right hemisphere (RH) process, and that the left hand may influence visual processing in a way that is distinct from the right hand.