Jason D. Connolly
University of Western Ontario
Publications
Featured research published by Jason D. Connolly.
Experimental Brain Research | 2003
Jason D. Connolly; Richard A. Andersen; Melvyn A. Goodale
Event-related functional magnetic resonance imaging was used to examine activation in the posterior parietal cortex when subjects made pointing movements or saccades to the same spatial location. One region, well positioned to be homologous to the monkey parietal reach region (PRR), responded preferentially during memory-delay trials in which the subject planned to point to a specific location as compared to trials in which the subject planned to make a saccade to that same location. We therefore conclude that activation in this region is related to specific motor intent; i.e., it encodes information related to the subject's intention to make a specific movement to a particular spatial location.
Nature Neuroscience | 2002
Jason D. Connolly; Melvyn A. Goodale; Ravi S. Menon; Douglas P. Munoz
We used functional magnetic resonance imaging (fMRI) to study readiness and intention signals in frontal and parietal areas that have been implicated in planning saccadic eye movements—the frontal eye fields (FEF) and intraparietal sulcus (IPS). To track fMRI signal changes correlated with readiness to act, we used an event-related design with variable gap periods between disappearance of the fixation point and appearance of the target. To track changes associated with intention, subjects were instructed before the gap period to make either a pro-saccade (look at target) or an anti-saccade (look away from target). FEF activation increased during the gap period and was higher for anti- than for pro-saccade trials. No signal increases were observed during the gap period in the IPS. Our findings suggest that within the frontoparietal networks that control saccade generation, the human FEF, but not the IPS, is critically involved in preparatory set, coding both the readiness and intention to perform a particular movement.
Experimental Brain Research | 1999
Jason D. Connolly; Melvyn A. Goodale
Although it is obvious that vision plays a primary role in reaching and grasping objects, the sources of the visual information used in programming and controlling various aspects of these movements are still being investigated. One source of visual information is feedback relating to the characteristics of the reach itself – for example, the speed and trajectory of the moving limb and the change in the posture of the hand and fingers. The present study selectively eliminated this source of visual information by blocking the subject’s view of the reaching limb with an opaque barrier while still enabling subjects to view the goal object. Thus, a direct comparison was made between standard (closed-loop) and object-only (open-loop) visual-feedback conditions in a situation in which the light levels and contrast between an object and its surroundings were equivalent in both viewing conditions. Reach duration was longer, with proportionate increases in both the acceleration and deceleration phases, when visual feedback of the reaching limb was prevented. Maximum grip aperture and the proportion of movement time at which it occurred were the same in both conditions. Thus, in contrast to previous studies that did not employ constant light levels across closed- and open-loop reaching conditions, a dissociation was found between the spatial and temporal dimensions of grip formation. It appears that the posture of the hand can be programmed without visual feedback of the hand – presumably via a combination of visual information about the goal object and proprioceptive feedback (and/or efference copy). Nevertheless, maximum grip aperture (like the kinematic markers examined in the transport component) was also delayed when visual feedback of the reaching limb was selectively prevented. In other words, the relative timing of kinematic events was essentially unchanged, reflecting perhaps a tight coupling between the transport and grip components.
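The grip-aperture measures contrasted in this abstract are typically derived from motion-tracking markers placed on the thumb and index finger. The sketch below shows a minimal version of that computation; the function names and the synthetic trajectory are illustrative, not taken from the study itself:

```python
import math

def grip_aperture(thumb, index):
    """Euclidean distance between thumb and index fingertip markers (3-D points)."""
    return math.dist(thumb, index)

def max_grip_aperture(thumb_traj, index_traj):
    """Return (maximum grip aperture, proportion of movement time at which it occurs).

    thumb_traj / index_traj: equal-length sequences of (x, y, z) samples
    recorded at a constant sampling rate over the movement.
    """
    apertures = [grip_aperture(t, i) for t, i in zip(thumb_traj, index_traj)]
    peak = max(range(len(apertures)), key=apertures.__getitem__)
    # Proportion of movement time = peak sample index / last sample index
    return apertures[peak], peak / (len(apertures) - 1)

# Synthetic example: the hand opens and then closes around mid-movement.
thumb = [(0.0, 0.0, 0.0)] * 5
index = [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0), (3.0, 0.0, 0.0),
         (2.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
mga, when = max_grip_aperture(thumb, index)  # peak of 3.0 at 50% of movement time
```

Comparing both the peak value and its relative timing across feedback conditions is what allows the spatial/temporal dissociation described above.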
Proceedings of the Royal Society of London B: Biological Sciences | 2012
David C. Lyon; Jason D. Connolly
The visual system in primates is represented by a remarkably large expanse of the cerebral cortex. While more precise investigative studies that can be performed in non-human primates contribute towards understanding the organization of the human brain, there are several issues of visual cortex organization in monkey species that remain unresolved. In all, more than 20 areas comprise the primate visual cortex, yet there is little agreement as to the exact number, size and visual field representation of all but three. A case in point is the third visual area, V3. It is found relatively early in the visual system hierarchy, yet over the last 40 years its organization and even its very existence have been a matter of debate among prominent neuroscientists. In this review, we discuss a large body of recent work that provides straightforward evidence for the existence of V3. In light of this, we then re-examine results from several seminal reports and provide parsimonious re-interpretations in favour of V3. We conclude with analysis of human and monkey functional magnetic resonance imaging literature to make the case that a complete V3 is an organizational feature of all primate species and may play a greater role in the dorsal stream of visual processing.
Frontiers in Human Neuroscience | 2013
Cristiana Cavina-Pratesi; Jason D. Connolly; Arthur David Milner
Optic ataxia is a neuropsychological disorder that affects the ability to interact with objects presented in the visual modality following either unilateral or bilateral lesions of the posterior parietal cortex (PPC). Patients with optic ataxia fail to reach accurately for objects, particularly when they are presented in peripheral vision. The present review will focus on a series of experiments performed on patient M.H. Following a lesion restricted largely to the left PPC, he developed mis-reaching behavior when using his contralesional right arm for movements directed toward the contralesional (right) visual half-field. Given the clear-cut specificity of this patient's deficit, whereby reaching actions are essentially spared when executed toward his ipsilateral space or when using his left arm, M.H. provides a valuable “experiment of nature” for investigating the role of the PPC in performing different visually guided actions. In order to address this, we used kinematic measurement techniques to investigate M.H.'s reaching and grasping behavior in various tasks. Our experiments support the idea that optic ataxia is highly function-specific: it affects a specific sub-category of visually guided actions (reaching but not grasping), regardless of their specific end goal (both reaching toward an object and reaching to avoid an obstacle); and finally, is independent of the limb used to perform the action (whether the arm or the leg). Critically, these results are congruent with recent functional MRI experiments in neurologically intact subjects which suggest that the PPC is organized in a function-specific, rather than effector-specific, manner with different sub-portions of its mantle devoted to guiding actions according to their specific end-goal (reaching, grasping, or looking), rather than according to the effector used to perform them (leg, arm, hand, or eyes).
The Journal of Comparative Neurology | 2012
Jason D. Connolly; Maziar Hashemi-Nezhad; David C. Lyon
The visual cortex of cats is highly evolved. Analogously to the brains of primates, large numbers of visual areas are arranged hierarchically and can be parsed into separate dorsal and ventral streams for object recognition and visuospatial representation. Within early primate visual areas, V1 and V2, and to a lesser extent V3, the two streams are relatively segregated and relayed in parallel to higher order cortex, although there is some evidence suggesting an alignment of V2 and V3 to one stream over the other. For cats, there is no evidence of anatomical segregation in areas 18 and 19, the analogs to V2 and V3. However, previous work was only qualitative in nature. Here we re‐examined the feedback connectivity patterns of areas 18/19 in quantitative detail. To accomplish this, we used a genetically modified rabies virus that acts as a retrograde tracer and fills neurons with fluorescent protein. After injections into area 19, many more neurons were labeled in putative ventral stream area 21a than in putative dorsal stream region posterolateral suprasylvian complex of areas (PLS), and the dendrites of neurons in 21a were significantly more complex. Conversely, area 18 injections labeled more neurons in PLS, and these were more complex than neurons in 21a. We infer from our results that area 19 in cat is more aligned to the ventral stream and area 18 to the dorsal stream. Based on the success of our approach, we suggest that this method could be applied to resolve similar issues related to primate V3. J. Comp. Neurol. 520:988–1004, 2012.
Cerebral Cortex | 2015
Jason D. Connolly; Quoc C. Vuong; Alexander Thiele
The brain must convert retinal coordinates into those required for directing an effector. One prominent theory holds that, through a combination of visual and motor/proprioceptive information, head-/body-centered representations are computed within the posterior parietal cortex (PPC). An alternative theory, supported by recent visual and saccade functional magnetic resonance imaging (fMRI) topographic mapping studies, suggests that PPC neurons provide a retinal/eye-centered coordinate system, in which the coding of a visual stimulus location and/or intended saccade endpoints should remain unaffected by changes in gaze position. To distinguish between a retinal/eye-centered and a head-/body-centered coordinate system, we measured how gaze direction affected the representation of visual space in the parietal cortex using fMRI. Subjects performed memory-guided saccades from a central starting point to locations “around the clock.” Starting points varied between left, central, and right gaze relative to the head-/body midline. We found that memory-guided saccadotopic maps throughout the PPC showed spatial reorganization with very subtle changes in starting gaze position, despite constant retinal input and eye movement metrics. Such a systematic shift is inconsistent with models arguing for a retinal/eye-centered coordinate system in the PPC, but it is consistent with head-/body-centered coordinate representations.
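The two coordinate schemes contrasted in this abstract can be made concrete with a toy transform: under an eye-centered code, a target's representation depends only on its retinal location, whereas a head-centered code adds the current gaze direction, so identical retinal input maps to different head-centered locations as gaze shifts. The function below is an illustrative sketch of this distinction, not the authors' analysis:

```python
def to_head_centered(target_eye, gaze):
    """Shift an eye-centered (retinal) target location into head-centered
    coordinates by adding the current gaze direction.

    Both arguments are (azimuth, elevation) pairs in degrees of visual angle.
    """
    return (target_eye[0] + gaze[0], target_eye[1] + gaze[1])

# Same retinal input (10 deg right of fixation) at two gaze positions:
# an eye-centered map sees no change, but the head-centered location shifts,
# which is why gaze-dependent map reorganization favors head-centered coding.
retinal_target = (10.0, 0.0)
at_left_gaze = to_head_centered(retinal_target, (-10.0, 0.0))   # straight ahead
at_right_gaze = to_head_centered(retinal_target, (10.0, 0.0))   # 20 deg right
```

In the experiment, the finding that saccadotopic maps reorganized when only the starting gaze position changed is the signature of the second (head-/body-centered) scheme.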
Experimental Brain Research | 2016
Jason D. Connolly; Robert W. Kentridge; Cristiana Cavina-Pratesi
There has been concentrated debate over four decades as to whether or not the nonhuman primate parietal cortex codes for intention or attention. In nonhuman primates, certain studies report results consistent with an intentional role, whereas others provide support for coding of visual-spatial attention. Until now, no one has directly contrasted an established motor “intention” paradigm with a verified “attention” paradigm within the same protocol. This debate has continued in both the nonhuman primate and healthy human brain and is therefore timely. We incorporated both paradigms across two distinct temporal epochs within a whole-parietal slow event-related human functional magnetic resonance imaging experiment. This enabled us to examine whether or not one paradigm proves more effective at driving the neural response across three intraparietal areas. As participants performed saccadic eye and/or pointing tasks, discrete event-related components with dissociable responses were elicited in distinct sub-regions of human parietal cortex. Critically, the posterior intraparietal area showed robust activity consistent with attention (no intention planning). The most contentious area in the literature, the middle intraparietal area, produced activation patterns that further reinforce attention coding in human parietal cortex. Finally, the anterior intraparietal area showed the same pattern. Therefore, distributed coding of attention is relatively more pronounced across the two computations within human parietal cortex.
Cortex | 2016
Freya Copley-Mills; Jason D. Connolly; Cristiana Cavina-Pratesi
Comparison between real and pantomimed actions is used in neuroscience to dissociate stimulus-driven (real) from internally driven (pantomimed) visuomotor transformations, with the goal of testing models of vision (Milner & Goodale, 1995) and diagnosing neuropsychological deficits (apraxia syndrome). Real actions refer to an overt movement directed toward a visible target, whereas pantomimed actions refer to an overt movement directed toward an object that is no longer available. Although similar, real and pantomimed actions differ in their kinematic parameters and in their neural substrates. Pantomimed reach-to-grasp actions show reduced reaching velocities, higher wrist movements, and reduced grip apertures. In addition, seminal neuropsychological studies and recent neuroimaging findings confirmed that real and pantomimed actions are underpinned by separate brain networks. Although previous literature suggests differences in the praxis system between males and females, no research to date has investigated whether or not gender differences exist in the context of real versus pantomimed reach-to-grasp actions. We asked ten male and ten female participants to perform real and pantomimed reach-to-grasp actions toward objects of different sizes, either with or without visual feedback. During pantomimed actions participants were required to pick up an imaginary object slightly offset relative to the location of the real one (which was in turn the target of the real reach-to-grasp actions). Results demonstrate a significant difference between the kinematic parameters recorded in male and female participants performing pantomimed, but not real, reach-to-grasp tasks, depending on the availability of visual feedback. With no feedback, both males and females showed smaller grip apertures, slower movement velocities, and lower reach heights. Crucially, these same differences were abolished when visual feedback was available in male, but not in female, participants.
Our results suggest that male and female participants should be evaluated separately in the clinical environment and in future research in the field.
Experimental Brain Research | 2015
Christopher J. Worssam; Lewis C. Meade; Jason D. Connolly
It has been demonstrated that both visual feedback and the presence of certain types of non-target objects in the workspace can affect kinematic measures and the trajectory path of the moving hand during reach-to-grasp movements. Yet no study to date has examined whether providing non-obstructing three-dimensional (3D) depth cues within the workspace, with consistent retinal inputs, alters manual prehension movements. Participants performed a series of reach-to-grasp movements in both open-loop (without visual feedback) and closed-loop (with visual feedback) conditions in the presence of one of three possible 3D depth cues. Here, it is reported that both the availability of online visual feedback and the presence of a particular depth cue had a profound effect on kinematic measures for both the reaching and grasping components of manual prehension – despite the fact that the 3D depth cues did not act as a physical obstruction at any point. The depth cues modulated the trajectory of the reaching hand when the target block was located on the left side of the workspace but not on the right. These results are discussed in relation to previous reports, and implications for brain–computer interface decoding algorithms are provided.