Publication


Featured research published by Gunnar Blohm.


Journal of Vision | 2007

Computations for geometrically accurate visually guided reaching in 3-D space.

Gunnar Blohm; J. Douglas Crawford

A fundamental question in neuroscience is how the brain transforms visual signals into accurate three-dimensional (3-D) reach commands, but surprisingly this has never been formally modeled. Here, we developed such a model and tested its predictions experimentally in humans. Our visuomotor transformation model used visual information about current hand and desired target positions to compute the visual (gaze-centered) desired movement vector. It then transformed these eye-centered plans into shoulder-centered motor plans using extraretinal eye and head position signals, accounting for the complete 3-D eye-in-head and head-on-shoulder geometry (i.e., translation and rotation). We compared actual memory-guided reaching performance to the predictions of the model. By removing extraretinal signals (i.e., eye-head rotations and the offset between the centers of rotation of the eye and head) from the model, we developed a compensation index describing how accurately the brain performs the 3-D visuomotor transformation for different head-restrained and head-unrestrained gaze positions as well as for eye and head roll. Overall, subjects did not show the errors predicted when extraretinal signals were ignored. Their reaching performance was accurate, and the compensation index revealed that subjects accounted for the 3-D visuomotor transformation geometry. This was also the case for the initial portion of the movement (before proprioceptive feedback), indicating that the desired reach plan is computed in a feed-forward fashion. These findings show that the visuomotor transformation for reaching implements an internal model of the complete eye-to-shoulder linkage geometry and does not rely solely on feedback control mechanisms. We discuss the relevance of this model in predicting reaching behavior in several patient groups.
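
The core computation described here, mapping eye-centered positions into shoulder-centered coordinates through a rotation-and-translation linkage, can be sketched compactly. The snippet below is a minimal illustration of that geometry only, not the paper's actual model: the function names, offset values, and toy angles are our own assumptions.

```python
import numpy as np

def rot(axis, deg):
    """Rotation matrix about the x, y, or z axis by `deg` degrees."""
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    return {'x': np.array([[1, 0, 0], [0, c, -s], [0, s, c]]),
            'y': np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]]),
            'z': np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])}[axis]

def to_shoulder(p_eye, R_eye_in_head, R_head_on_shoulder,
                eye_center_in_head, head_center_on_shoulder):
    """Map an eye-centered position into shoulder-centered coordinates,
    applying both rotations and the translations between the centers of
    rotation of the eye, head, and shoulder (illustrative geometry)."""
    p_head = eye_center_in_head + R_eye_in_head @ p_eye
    return head_center_on_shoulder + R_head_on_shoulder @ p_head

# Toy configuration: eye rotated 20 deg in the head, head rolled 10 deg.
R_eye, R_head = rot('y', 20), rot('z', 10)
eye_in_head = np.array([0.00, 0.07, 0.10])       # assumed offsets (m)
head_on_shoulder = np.array([0.00, 0.25, -0.15])

target_eye = np.array([0.10, 0.00, 0.50])   # seen target, eye coordinates
hand_eye = np.array([-0.10, -0.20, 0.40])   # seen hand, eye coordinates

# Transform each position into shoulder coordinates, then take their
# difference to obtain the desired movement vector for the reach plan.
target_sh = to_shoulder(target_eye, R_eye, R_head, eye_in_head, head_on_shoulder)
hand_sh = to_shoulder(hand_eye, R_eye, R_head, eye_in_head, head_on_shoulder)
movement_vector = target_sh - hand_sh       # shoulder-centered reach plan
```

Setting the two rotations to identity in such a sketch mimics, in spirit, the paper's comparison model in which extraretinal signals are ignored.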


Journal of Vision | 2007

Influence of initial hand and target position on reach errors in optic ataxic and normal subjects

Aarlenne Z. Khan; J. Douglas Crawford; Gunnar Blohm; Christian Urquizar; Yves Rossetti; Laure Pisella

Recent neurophysiological studies suggest that reach planning areas in the posterior parietal cortex (PPC) encode both target and initial hand position in gaze-centered coordinates, which could be used to calculate a desired movement vector. We tested how varying gaze, target position, and initial hand position affected reach errors in two left unilateral optic ataxia patients with right PPC damage and seven neurologically intact controls. Both controls' and patients' reaching errors revealed an influence of target position in gaze-centered coordinates; however, both patients' mean errors were offset toward the left, with greater errors when the target was in their left visual field, consistent with the damage to the right PPC. Control subjects also showed a large, quasi-independent shoulder-centered influence of target position. This effect was much weaker in patient C.F., who had more medial damage to the PPC. In contrast, for patient O.K., who had more lateral PPC damage, the shoulder-centered effect was larger and interacted with the gaze-centered influence of target position. All subjects' errors also revealed a shoulder-centered influence of the initial hand position, with larger influences on the patients' reaching errors. Both patients also showed an interactive influence of the shoulder-centered and gaze-centered initial hand positions. These results suggest that the target and the hand are compared at more than one level of the visuomotor pathway, in multiple reference frames, and that these comparisons are then integrated. Depending on the location of the damage within the PPC, these comparisons are disrupted, changing the relative influence of hand and target position in different reference frames on the final reaching movement.


Journal of Computational Neuroscience | 2006

A model that integrates eye velocity commands to keep track of smooth eye displacements.

Gunnar Blohm; Lance M. Optican; Philippe Lefèvre

Previous studies have reported conflicting findings on the oculomotor system's ability to keep track of smooth eye movements in darkness. Whereas some results indicate that saccades cannot compensate for smooth eye displacements, others report that memory-guided saccades during smooth pursuit are spatially correct. Recently, it was shown that saccade latency made the difference: short-latency saccades were retinotopically coded, whereas long-latency saccades were spatially coded. Here, we propose a model of the saccadic system that can explain the available experimental data. The novel part of this model consists of a delayed integration of efferent smooth eye velocity commands. Two alternative physiologically realistic neural mechanisms for this integration stage are proposed. Model simulations accurately reproduced prior findings. Because it involves a slow integration process, this model thus reconciles the earlier contradictory reports in the literature about compensation for smooth eye movements before saccades.
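
The model's central idea, slow (delayed) integration of efferent velocity commands, can be sketched in a few lines: the internal displacement estimate lags the true smooth eye displacement, so early saccades see little compensation (retinotopic behavior) while late saccades see nearly full compensation (spatial behavior). The first-order dynamics, time constant, and readout below are our illustrative assumptions, not the paper's fitted mechanism.

```python
import numpy as np

dt, tau, T = 0.001, 0.15, 1.0  # step (s), assumed lag time constant (s), duration (s)
t = np.arange(0.0, T, dt)

eye_velocity = np.full_like(t, 10.0)       # efferent command: 10 deg/s smooth movement
true_disp = np.cumsum(eye_velocity) * dt   # actual accumulated eye displacement

# Slow integration: the internal estimate relaxes toward the true
# accumulated displacement with time constant tau (first-order lag).
estimate = np.zeros_like(t)
for i in range(1, len(t)):
    estimate[i] = estimate[i - 1] + dt * (true_disp[i - 1] - estimate[i - 1]) / tau

def compensation(latency):
    """Fraction of the smooth displacement compensated by a saccade
    triggered `latency` seconds into the movement."""
    i = int(latency / dt)
    return estimate[i] / true_disp[i]

print(f"short latency (100 ms): {compensation(0.10):.0%} compensated")  # ~retinotopic
print(f"long latency (800 ms): {compensation(0.80):.0%} compensated")   # ~spatial
```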


Journal of Vision | 2003

Smooth anticipatory eye movements alter the memorized position of flashed targets.

Gunnar Blohm; Marcus Missal; Philippe Lefèvre

Briefly flashed visual stimuli presented during smooth object- or self-motion are systematically mislocalized. This phenomenon is called the flash-lag effect (Nijhawan, 1994). All previous studies had one common characteristic: the subjects' sense of motion. Here we asked whether motion perception is a necessary condition for the flash-lag effect to occur. In our first experiment, we briefly flashed a target during smooth anticipatory eye movements in darkness, and subjects had to orient their gaze toward the perceived flash position. Subjects reported having no sense of eye motion during anticipatory movements. In our second experiment, subjects had to adjust a cursor to the perceived position of the flash. We show that gaze orientation reflects the actual perceived flash position. Furthermore, a flash-lag effect is present despite the absence of motion perception. Moreover, the time course of gaze orientation shows that the flash-lag effect appeared immediately after the egocentric-to-allocentric reference frame transformation.


Experimental Brain Research | 2007

Comparing limb proprioception and oculomotor signals during hand-guided saccades

L. Ren; Gunnar Blohm; J. D. Crawford

We previously showed that saccades tend to overshoot briefly flashed targets that were manually displaced in the dark (Ren et al. 2006). However, it was not clear whether the overshoot originated from a sensory error in measuring hand displacement or from a premotor error in saccade programming, because gaze and hand position started at the same central position. Here, we tested between these hypotheses by dissociating the initial eye and hand positions. Five hand/target positions (center, far, near, right, left) on a frontally placed horizontal surface were used in four paradigms: Center or Peripheral Eye-hand Association (CA or PA; both gaze and the right hand started from the center or the same peripheral location) and Hand or Eye Dissociation (HD or ED; the hand or gaze started from one of three non-target peripheral locations). Subjects never received any visual feedback about the final target location or their own hand displacement. In the CA paradigm, subjects showed the same overshoot that we reported previously. However, changing both initial eye and hand positions relative to the final target (PA) affected the pattern, significantly altering the directions of overshoots. Changing only the initial hand position (HD) did not have this effect, whereas changing only the initial eye position (ED) had the same effect as the PA condition (CA ≈ HD, PA ≈ ED). Furthermore, multiple regression analysis showed that the direction of the ideal saccade, not the direction of the hand path, contributed significantly to the endpoint direction error. These results suggest that these errors do not primarily arise from misestimates of the hand trajectory, but rather from a process that compares the initial eye position with the limb proprioceptive signal during saccade programming.
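
The regression logic reported here can be illustrated with synthetic data: regress the endpoint direction error simultaneously on the direction of the ideal saccade and the direction of the hand path, then inspect the coefficients. Everything below is fabricated for illustration; it mimics the style of the analysis, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical per-trial predictors (degrees).
ideal_saccade_dir = rng.uniform(-90, 90, n)
hand_path_dir = rng.uniform(-90, 90, n)

# Simulated endpoint direction errors driven by the ideal saccade
# direction only, echoing the reported pattern of results.
error_dir = 0.3 * ideal_saccade_dir + rng.normal(0.0, 5.0, n)

# Multiple regression: error ~ b0 + b1 * ideal + b2 * hand
X = np.column_stack([np.ones(n), ideal_saccade_dir, hand_path_dir])
b, *_ = np.linalg.lstsq(X, error_dir, rcond=None)
print(f"intercept={b[0]:.2f}, ideal={b[1]:.2f}, hand={b[2]:.2f}")
# Expect b[1] near 0.3 and b[2] near 0.
```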


Reference Module in Neuroscience and Biobehavioral Psychology: Encyclopedia of Neuroscience | 2009

Spatial Transformations for Eye–Hand Coordination

Gunnar Blohm; Aarlenne Z. Khan; J. D. Crawford

Eye-hand coordination is complex because it involves the visual guidance of both the eyes and hands, while simultaneously using eye movements to optimize vision. Since only hand motion directly affects the external world, eye movements are the slave in this system. This eye-hand visuomotor system incorporates closed-loop visual feedback, but here we focus on early feedforward mechanisms that allow primates to make spatially accurate reaches. First, we consider how the parietal cortex might store and update gaze-centered representations of reach targets during a sequence of gaze shifts and fixations. Recent evidence suggests that such representations might be compared with hand position signals within this early gaze-centered frame. However, the resulting motor error commands cannot be treated independently of their frame of origin or the frame of their destined motor command. Behavioral experiments show that the brain deals with the nonlinear aspects of such reference frame transformations, and incorporates internal models of the complex linkage geometry of the eye-head-shoulder system. These transformations are modeled as a series of vector displacement commands, rotated by eye and head orientation, and implemented between parietal and frontal cortex through efficient parallel neuronal architectures. Finally, we consider how this reach system might interact with the visually guided grasp system through both parallel and coordinated neural algorithms.
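
One concrete piece of this pipeline, the gaze-centered updating step, can be sketched as counter-shifting a stored target representation by each gaze shift. The snippet deliberately uses a simplified translational remapping; as the chapter emphasizes, the real transformation is rotational and nonlinear, so treat this as a schematic assumption rather than the described mechanism.

```python
import numpy as np

def remap(target_gaze, gaze_shift):
    """Simplified spatial updating: after a gaze shift, counter-shift the
    stored gaze-centered target so it still points to the same place in
    space. A full model would rotate in 3-D; this is a linear schematic."""
    return target_gaze - gaze_shift

stored = np.array([5.0, -2.0])                  # deg right (+x) and up (+y) of gaze
updated = remap(stored, np.array([10.0, 0.0]))  # after a 10 deg rightward saccade
# updated -> array([-5., -2.]): now 5 deg left of the new gaze direction
```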


Journal of Neurophysiology | 2002

What Triggers Catch-Up Saccades During Visual Tracking?

Sophie de Brouwer; Demet Yüksel; Gunnar Blohm; Marcus Missal; Philippe Lefèvre


Journal of Neurophysiology | 2006

Proprioceptive guidance of saccades in eye-hand coordination.

L. Ren; Aarlenne Z. Khan; Gunnar Blohm; Denise Y. P. Henriques; Lauren E. Sergio; J. D. Crawford


Journal of Vision | 2010

Egocentric distance estimation requires eye-head position signals

Gunnar Blohm; J. Douglas Crawford


Archive | 2018

Predicted tracking error triggers catch-up saccades during smooth pursuit

Omri Nachmani; Jonathan D Coutinho; Aarlenne Z. Khan; Philippe Lefèvre; Gunnar Blohm

Collaboration


Top co-authors of Gunnar Blohm:

Philippe Lefèvre, Université catholique de Louvain
Guillaume Leclercq, Université catholique de Louvain
Marcus Missal, Université catholique de Louvain
Aarlenne Z. Khan, Smith-Kettlewell Institute
W. Pieter Medendorp, Canadian Institutes of Health Research