Publication


Featured research published by Daniel K. Wood.


Psychological Science | 2011

One to Four, and Nothing More: Nonconscious Parallel Individuation of Objects During Action Planning

Jason P. Gallivan; Craig S. Chapman; Daniel K. Wood; Jennifer L. Milne; Daniel Ansari; Jody C. Culham; Melvyn A. Goodale

Much of the current understanding about the capacity limits on the number of objects that can be simultaneously processed comes from studies of visual short-term memory, attention, and numerical cognition. Consistent reports suggest that, despite large variability in the perceptual tasks administered (e.g., object tracking, counting), a limit of three to four visual items can be independently processed in parallel. In the research reported here, we asked whether this limit also extends to the domain of action planning. Using a unique rapid visuomotor task and a novel analysis of reach trajectories, we demonstrated an upper limit to the number of targets that can be simultaneously encoded for action, a capacity limit that also turns out to be no more than three to four. Our findings suggest that conscious perceptual processing and nonconscious movement planning are constrained by a common underlying mechanism limited by the number of items that can be simultaneously represented.


Journal of Vision | 2011

Visual salience dominates early visuomotor competition in reaching behavior

Daniel K. Wood; Jason P. Gallivan; Craig S. Chapman; Jennifer L. Milne; Jody C. Culham; Melvyn A. Goodale

In this study, we investigated whether visual salience influences the competition between potential targets during reach planning. Participants initiated rapid pointing movements toward multiple potential targets, with the final target being cued only after the reach was initiated. We manipulated visual salience by varying the luminance of potential targets. Across two separate experiments, we demonstrate that initial reach trajectories are directed toward more salient targets, even when there are twice as many targets (and therefore twice the likelihood of the final target appearing) on the opposite side of space. We also show that this salience bias is time-dependent, as evidenced by the return of spatially averaged reach trajectories when participants were given an additional 500-ms preview of the target display prior to the cue to move. This study shows both when and to what extent task-irrelevant luminance differences affect the planning of reaches to multiple potential targets.


Behavioural Brain Research | 2010

Short-term motor plasticity revealed in a visuomotor decision-making task

Craig S. Chapman; Jason P. Gallivan; Daniel K. Wood; Jennifer L. Milne; Jody C. Culham; Melvyn A. Goodale

Selecting and executing an action toward only one object in our complex environments presents the visuomotor system with a significant challenge. To overcome this problem, the motor system is thought to simultaneously encode multiple motor plans, which then compete for selection. The decision between motor plans is influenced both by incoming sensory information and by previous experience, which itself comprises long-term (e.g., weeks, months) and recent (e.g., seconds, minutes, hours) information. In this study, we were interested in how recent trial-to-trial visuomotor experience is factored into upcoming movement decisions made between competing potential targets. To this end, we used a unique rapid reaching task to investigate how reach trajectories would be spatially influenced by previous decisions. Our task required subjects to initiate speeded reaches toward multiple potential targets before one was cued in-flight. A novel statistical analysis of the reach trajectories revealed that, in cases of target uncertainty, subjects initiated a spatially averaged trajectory toward the midpoint of the potential target locations before correcting to the selected target location. Interestingly, when the same target location was consecutively cued, reaches were biased toward that location on the next trial, and this effect accumulated across trials. Beyond providing supporting evidence that potential reach locations are encoded and compete in parallel, our results strongly suggest that this motor competition is biased by recent trial history.


Psychological Science | 2013

Connecting the Dots: Object Connectedness Deceives Perception but Not Movement Planning

Jennifer L. Milne; Craig S. Chapman; Jason P. Gallivan; Daniel K. Wood; Jody C. Culham; Melvyn A. Goodale

The perceptual system parses complex scenes into discrete objects. Parsing is also required for planning visually guided movements when more than one potential target is present. To examine whether visual perception and motor planning use the same or different parsing strategies, we used the connectedness illusion, in which observers typically report seeing fewer targets if pairs of targets are connected by short lines. We found that despite this illusion, when observers are asked to make speeded reaches toward targets in such displays, their reaches are unaffected by the presence of the connecting lines. Instead, their movement plans, as revealed by their movement trajectories, are influenced by the number of potential targets irrespective of whether connecting lines are present or not. This suggests that scene parsing for perception depends on mechanisms that are distinct from those that allow observers to plan rapid and efficient target-directed movements in situations with multiple potential targets.


European Journal of Neuroscience | 2015

Transient visual responses reset the phase of low-frequency oscillations in the skeletomotor periphery

Daniel K. Wood; Chao Gu; Brian D. Corneil; Paul L. Gribble; Melvyn A. Goodale

We recorded muscle activity from an upper limb muscle while human subjects reached towards peripheral targets. We tested the hypothesis that the transient visual response sweeps not only through the central nervous system, but also through the peripheral nervous system. Like the transient visual response in the central nervous system, stimulus‐locked muscle responses (< 100 ms) were sensitive to stimulus contrast, and were temporally and spatially dissociable from voluntary orienting activity. Also, the arrival of visual responses reduced the variability of muscle activity by resetting the phase of ongoing low‐frequency oscillations. This latter finding critically extends the emerging evidence that the feedforward visual sweep reduces neural variability via phase resetting. We conclude that, when sensory information is relevant to a particular effector, detailed information about the sensorimotor transformation, even from the earliest stages, is found in the peripheral nervous system.


Journal of Neurophysiology | 2016

Feature-based attention and spatial selection in frontal eye fields during natural scene search

Pavan Ramkumar; Patrick N. Lawlor; Joshua I. Glaser; Daniel K. Wood; Adam N. Phillips; Mark A. Segraves; Konrad P. Körding

When we search for visual objects, the features of those objects bias our attention across the visual landscape (feature-based attention). The brain uses these top-down cues to select eye movement targets (spatial selection). The frontal eye field (FEF) is a prefrontal brain region implicated in selecting eye movements and is thought to reflect feature-based attention and spatial selection. Here, we study how FEF facilitates attention and selection in complex natural scenes. We ask whether FEF neurons facilitate feature-based attention by representing search-relevant visual features or whether they are primarily involved in selecting eye movement targets in space. We show that search-relevant visual features are weakly predictive of gaze in natural scenes and additionally have no significant influence on FEF activity. Instead, FEF activity appears to primarily correlate with the direction of the upcoming eye movement. Our result demonstrates a concrete need for better models of natural scene search and suggests that FEF activity during natural scene search is explained primarily by spatial selection.


Experimental Brain Research | 2011

Selection of wrist posture in conditions of motor ambiguity

Daniel K. Wood; Melvyn A. Goodale

In our everyday motor interactions with objects, we often encounter situations where the features of an object are determinate (i.e., not perceptually ambiguous), but the mapping between those features and appropriate movement patterns is indeterminate, resulting in a lack of any clear preference for one posture over another. We call this indeterminacy in stimulus-response mapping ‘motor ambiguity’. Here, we use a grasping task to investigate the decision mechanisms that mediate the basic behavior of selecting one wrist posture over another in conditions of motor ambiguity. Using one of two possible wrist postures, participants grasped a dowel that was presented at various orientations. At most orientations, there was a clear preference for one wrist posture over the other. Within a small range of orientations, however, participants were variable in their posture selection due to the fact that the dowel was ambiguous with respect to the hand posture it afforded. We observed longer reaction times (RT) during ‘ambiguous’ trials than during the ‘unambiguous’ trials. In two subsequent experiments, we explored the effects of foreknowledge and trial history on the selection of wrist posture. We found that foreknowledge led to shorter RT unless the previous trial involved selecting a posture in the ambiguous region, in which case foreknowledge gave no RT advantage. These results are discussed within the context of existing models of sensorimotor decision making.


Journal of Neurophysiology | 2016

Role of expected reward in frontal eye field during natural scene search

Joshua I. Glaser; Daniel K. Wood; Patrick N. Lawlor; Pavan Ramkumar; Konrad P. Körding; Mark A. Segraves

When a saccade is expected to result in a reward, both neural activity in oculomotor areas and the saccade itself (e.g., its vigor and latency) are altered (compared with when no reward is expected). As such, it is unclear whether the correlations of neural activity with reward indicate a representation of reward beyond a movement representation; the modulated neural activity may simply represent the differences in motor output due to expected reward. Here, to distinguish between these possibilities, we trained monkeys to perform a natural scene search task while we recorded from the frontal eye field (FEF). Indeed, when reward was expected (i.e., saccades to the target), FEF neurons showed enhanced responses. Moreover, when monkeys accidentally made eye movements to the target, firing rates were lower than when they purposively moved to the target. Thus, neurons were modulated by expected reward rather than simply the presence of the target. We then fit a model that simultaneously included components related to expected reward and saccade parameters. While expected reward led to shorter latency and higher velocity saccades, these behavioral changes could not fully explain the increased FEF firing rates. Thus, FEF neurons appear to encode motivational factors such as reward expectation, above and beyond the kinematic and behavioral consequences of imminent reward.


The Journal of Neuroscience | 2016

A Trial-by-Trial Window into Sensorimotor Transformations in the Human Motor Periphery

Chao Gu; Daniel K. Wood; Paul L. Gribble; Brian D. Corneil

The appearance of a novel visual stimulus generates a rapid stimulus-locked response (SLR) in the motor periphery within 100 ms of stimulus onset. Here, we recorded SLRs from an upper limb muscle while humans reached toward (pro-reach) or away (anti-reach) from a visual stimulus. The SLR on anti-reaches encoded the location of the visual stimulus rather than the movement goal. Further, SLR magnitude was attenuated when subjects reached away from rather than toward the visual stimulus. Remarkably, SLR magnitudes also correlated with reaction times on both pro-reaches and anti-reaches, but did so in opposite ways: larger SLRs preceded shorter latency pro-reaches but longer latency anti-reaches. Although converging evidence suggests that the SLR is relayed via a tectoreticulospinal pathway, our results show that task-related signals modulate visual signals feeding into this pathway. The SLR therefore provides a trial-by-trial window into how visual information is integrated with cognitive control in humans.

SIGNIFICANCE STATEMENT: The presentation of a visual stimulus elicits a trial-by-trial stimulus-locked response (SLR) on the human limb within 100 ms. Here, we show that the SLR continues to reflect stimulus location even when subjects move in the opposite direction (an anti-reach). Remarkably, the attenuation of SLR magnitude reflected the cognitive control required to generate a correct anti-reach, with greater degrees of attenuation preceding shorter-latency anti-reaches and no attenuation preceding error trials. Our results are strikingly similar to neurophysiological recordings in the superior colliculus of nonhuman primates generating anti-saccades, implicating the tectoreticulospinal pathway. Measuring SLR magnitude therefore provides an unprecedented trial-by-trial opportunity to assess the influence of cognitive control on the initial processing of a visual stimulus in humans.


The Journal of Neuroscience | 2009

Simultaneous Encoding of Potential Grasping Movements in Macaque Anterior Intraparietal Area

Jason P. Gallivan; Daniel K. Wood

The remarkable ability of the visuomotor system to rapidly and flexibly plan goal-directed actions relies on neural mechanisms that remain poorly understood. For instance, we are only beginning to understand how the brain selects one movement plan when many others could also accomplish the same

Collaboration


Dive into Daniel K. Wood's collaborations.

Top Co-Authors

Melvyn A. Goodale

University of Western Ontario

Jennifer L. Milne

University of Western Ontario

Jody C. Culham

University of Western Ontario

Jason P. Gallivan

University of Western Ontario

Brian D. Corneil

University of Western Ontario
