Steven L. Prime
York University
Publications
Featured research published by Steven L. Prime.
The Journal of Neuroscience | 2010
Michael Vesia; Steven L. Prime; Xiaogang Yan; Lauren E. Sergio; J. Douglas Crawford
Single-unit recordings in macaque monkeys have identified effector-specific regions in posterior parietal cortex (PPC), but functional neuroimaging in the human has yielded controversial results. Here we used on-line repetitive transcranial magnetic stimulation (rTMS) to determine saccade and reach specificity in human PPC. A short train of three TMS pulses (separated by an interval of 100 ms) was delivered to superior parieto-occipital cortex (SPOC), a region over the midposterior intraparietal sulcus (mIPS), and a site close to caudal IPS situated over the angular gyrus (AG) during a brief memory interval while subjects planned either a saccade or reach with the left or right hand. Behavioral measures were then compared to controls without rTMS. Stimulation of mIPS and AG produced similar patterns: increased end-point variability for reaches and decreased saccade accuracy for contralateral targets. In contrast, stimulation of SPOC deviated reach end points toward visual fixation and had no effect on saccades. Contralateral-limb specificity was highest for AG and lowest for SPOC. Visual feedback of the hand negated rTMS-induced disruptions of the reach plan for mIPS and AG, but not SPOC. These results suggest that human SPOC is specialized for encoding retinally peripheral reach goals, whereas more anterior-lateral regions (mIPS and AG) along the IPS possess overlapping maps for saccade and reach planning and are more closely involved in motor details (i.e., planning the reach vector for a specific hand). This work provides the first causal evidence for functional specificity of these parietal regions in healthy humans.
Human Factors | 2002
Christian Richard; Richard D. Wright; Cheryl Ee; Steven L. Prime; Yujiro Shimizu; John Vavrik
The effect of a concurrent auditory task on visual search was investigated using an image-flicker technique. Participants were undergraduate university students with normal or corrected-to-normal vision who searched for changes in images of driving scenes that involved either driving-related (e.g., traffic light) or driving-unrelated (e.g., mailbox) scene elements. The results indicated that response times were significantly slower if the search was accompanied by a concurrent auditory task. In addition, slower overall responses to scenes involving driving-unrelated changes suggest that the underlying process affected by the concurrent auditory task is strategic in nature. These results were interpreted in terms of their implications for using a cellular telephone while driving. Actual or potential applications of this research include the development of safer in-vehicle communication devices.
The Journal of Neuroscience | 2008
Steven L. Prime; Michael Vesia; J. Douglas Crawford
The posterior parietal cortex (PPC) plays a role in spatial updating of goals for eye and arm movements across saccades, but less is known about its role in updating perceptual memory. We reported previously that transsaccadic memory has a capacity for storing the orientations of three to four Gabor patches either within a single fixation (fixation task) or between separate fixations (saccade task). Here, we tested the role of the PPC in transsaccadic memory in eight subjects by simultaneously applying single-pulse transcranial magnetic stimulation (TMS) over the right and left PPC, over several control sites, and comparing these to behavioral controls with no TMS. In TMS trials, we randomly delivered pulses at one of three different time intervals around the time of the saccade, or at an equivalent time in the fixation task. Controls confirmed that subjects could normally retain at least three visual features. TMS over the left PPC and a control site had no significant effect on this performance. However, TMS over the right PPC disrupted memory performance in both tasks. This TMS-induced effect was most disruptive in the saccade task, in particular when stimulation coincided more closely with saccade timing. Here, the capacity to compare presaccadic and postsaccadic features was reduced to one object, as expected if the spatial aspect of memory was disrupted. This finding suggests that right PPC plays a role in the spatial processing involved in transsaccadic memory of visual features. We propose that this process uses saccade-related feedback signals similar to those observed in spatial updating.
Experimental Brain Research | 2006
Steven L. Prime; Matthias Niemeier; J. D. Crawford
Transsaccadic integration (TSI) refers to the perceptual integration of visual information collected across separate gaze fixations. Current theories of TSI disagree on whether it relies solely on visual algorithms or also uses extra-retinal signals. We designed a task in which subjects had to rely on internal oculomotor signals to synthesize remembered stimulus features presented within separate fixations. Using a mouse-controlled pointer, subjects estimated the intersection point of two successively presented bars, in the dark, under two conditions: Saccade task (bars viewed in separate fixations) and Fixation task (bars viewed in one fixation). Small but systematic biases were observed in both intersection tasks, including position-dependent vertical undershoots and order-dependent horizontal biases. However, the magnitude of these errors was statistically indistinguishable in the Saccade and Fixation tasks. Moreover, part of the errors in the Saccade task were dependent on saccade metrics, showing that egocentric oculomotor signals were used to fuse remembered location and orientation features across saccades. We hypothesize that these extra-retinal signals are normally used to reduce the computational load of calculating visual correspondence between fixations. We further hypothesize that TSI may be implemented within dynamically updated recurrent feedback loops that interconnect a common eye-centered map in occipital cortex with both the “dorsal” and “ventral” streams of visual analysis.
Philosophical Transactions of the Royal Society B | 2011
Steven L. Prime; Michael Vesia; J. Douglas Crawford
Constructing an internal representation of the world from successive visual fixations, i.e. separated by saccadic eye movements, is known as trans-saccadic perception. Research on trans-saccadic perception (TSP) has been traditionally aimed at resolving the problems of memory capacity and visual integration across saccades. In this paper, we review this literature on TSP with a focus on research showing that egocentric measures of the saccadic eye movement can be used to integrate simple object features across saccades, and that the memory capacity for items retained across saccades, like visual working memory, is restricted to about three to four items. We also review recent transcranial magnetic stimulation experiments which suggest that the right parietal eye field and frontal eye fields play a key functional role in spatial updating of objects in TSP. We conclude by speculating on possible cortical mechanisms for governing egocentric spatial updating of multiple objects in TSP.
Cerebral Cortex | 2010
Steven L. Prime; Michael Vesia; J. Douglas Crawford
We recently showed that transcranial magnetic stimulation (TMS) over the right parietal eye fields disrupts memory of object features and locations across saccades. We applied TMS over the frontal eye fields (FEF) as subjects compared the feature details of visual targets presented either within a single eye fixation (Fixation Task) or across a saccade (Saccade Task). TMS pulses were randomly delivered at one of 3 time intervals around the time of the saccade, or at equivalent times in the Fixation Task. A No-TMS control confirmed that subjects could normally retain approximately 3 visual features. TMS in the Fixation Task had no effect compared with No-TMS, but differences among TMS times were found during right FEF stimulation. TMS over either the right or left FEF disrupted memory performance in the Saccade Task when stimulation coincided most closely with the saccade. The capacity to compare pre- and postsaccadic features was reduced to 1-2 objects, as expected if the spatial aspect of memory was disrupted. These findings suggest that the FEF plays a role in the spatial processing involved in trans-saccadic memory of visual features. We propose that this process employs saccade-related feedback signals similar to those observed in spatial updating.
Experimental Brain Research | 2015
Melissa C. Bulloch; Steven L. Prime; Jonathan J. Marotta
Grasping moving objects involves both spatial and temporal predictions. The hand is aimed at a location where it will meet the object, rather than the position at which the object is seen when the reach is initiated. Previous eye–hand coordination research from our laboratory, utilizing stationary objects, has shown that participants’ initial gaze tends to be directed towards the eventual location of the index finger when making a precision grasp. This experiment examined how the speed and direction of a computer-generated block’s movement affect gaze and selection of grasp points. Results showed that when the target first appeared, participants anticipated the target’s eventual movement by fixating well ahead of its leading edge in the direction of eventual motion. Once target movement began, participants shifted their fixation to the leading edge of the target. Upon reach initiation, participants then fixated towards the top edge of the target. As seen in our previous work with stationary objects, final fixations tended towards the final index finger contact point on the target. Moreover, gaze and kinematic analyses revealed that it was direction that most influenced fixation locations and grasp points. Interestingly, participants fixated further ahead of the target’s leading edge when the direction of motion was leftward, particularly at the slower speed—possibly the result of mechanical constraints of intercepting leftward-moving targets with one’s right hand.
Behavior Research Methods | 2011
Jane Lawrence; Kamyar Abhari; Steven L. Prime; Benjamin P. Meek; Loni Desanghere; Lee A. Baugh; Jonathan J. Marotta
The development of noninvasive neuroimaging techniques, such as fMRI, has rapidly advanced our understanding of the neural systems underlying the integration of visual and motor information. However, the fMRI experimental design is restricted by several environmental elements, such as the presence of the magnetic field and the restricted view of the participant, making it difficult to monitor and measure behaviour. The present article describes a novel, specialized software package developed in our laboratory called Biometric Integration Recording and Analysis (BIRA). BIRA integrates video with kinematic data derived from the hand and eye, acquired using MRI-compatible equipment. The present article demonstrates the acquisition and analysis of eye and hand data using BIRA in a mock (0 Tesla) scanner. A method for collecting and integrating gaze and kinematic data in fMRI studies on visuomotor behaviour has several advantages: Specifically, it will allow for more sophisticated, behaviourally driven analyses and eliminate potential confounds of gaze or kinematic data.
Experimental Brain Research | 2013
Steven L. Prime; Jonathan J. Marotta
Vision plays a crucial role in guiding motor actions. But sometimes we cannot use vision and must rely on our memory to guide action—e.g. remembering where we placed our eyeglasses on the bedside table when reaching for them with the lights off. Recent studies show subjects look towards the index finger grasp position during visually-guided precision grasping. But, where do people look during memory-guided grasping? Here, we explored the gaze behaviour of subjects as they grasped a centrally placed symmetrical block under open- and closed-loop conditions. In Experiment 1, subjects performed grasps in either a visually-guided task or memory-guided task. The results show that during visually-guided grasping, gaze was first directed towards the index finger’s grasp point on the block, suggesting gaze targets future grasp points during the planning of the grasp. Gaze during memory-guided grasping was aimed closer to the block’s centre of mass from block presentation to the completion of the grasp. In Experiment 2, subjects performed an ‘immediate grasping’ task in which vision of the block was removed immediately at the onset of the reach. Similar to the visually-guided results from Experiment 1, gaze was primarily directed towards the index finger location. These results support the two-stream theory of vision in that motor planning with visual feedback at the onset of the movement is driven primarily by real-time visuomotor computations of the dorsal stream, whereas grasping remembered objects without visual feedback is driven primarily by the perceptual memory representations mediated by the ventral stream.
Experimental Brain Research | 2007
Steven L. Prime; Lia Tsotsos; Gerald P. Keith; J. Douglas Crawford