Publications


Featured research published by Paul Dassonville.


Visual Neuroscience | 1992

Oculomotor localization relies on a damped representation of saccadic eye displacement in human and nonhuman primates

Paul Dassonville; John Schlag; Madeleine Schlag-Rey

The oculomotor system has long been thought to rely on an accurate representation of eye displacement or position in a successful attempt to reconcile a stationary target's retinal instability (caused by motion of the eyes) with its corresponding spatial invariance. This is in stark contrast to perceptual localization, which has been shown to rely on a sluggish representation of eye displacement, achieving only partial compensation for the retinal displacement caused by saccadic eye movements. Recent studies, however, have begun to cast doubt on the belief that the oculomotor system possesses a signal of eye displacement superior to that of the perceptual system. To verify this, five humans and one monkey (Macaca nemestrina) served as subjects in this study of oculomotor localization abilities. Subjects were instructed to make eye movements, as accurately as possible, to the locations of three successive visual stimuli. Presentation of the third stimulus (2-ms duration) was timed so that it fell before, during, or after the subject's saccade from the first stimulus to the second. Localization errors in each subject (human and nonhuman) were consistent with the hypothesis that the oculomotor system has access to only a damped representation of eye displacement, a representation similar to that found in perceptual localization studies.
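The damped-representation idea can be made concrete with a small simulation. The sketch below assumes the internal eye-position signal is a first-order low-pass filtered copy of the true eye trace; the saccade profile, amplitude, and 100-ms time constant are illustrative assumptions rather than values from the study, and the predicted mislocalization of a brief flash is taken as the gap between the two signals at flash time.

```python
# Minimal sketch of a damped eye-displacement signal (not the authors' model).
# Assumption: the internal signal is a first-order low-pass filtered copy of
# the true eye position; predicted flash mislocalization = internal - actual.
import numpy as np

dt = 0.001                     # 1-ms time step
t = np.arange(-0.1, 0.3, dt)   # time relative to saccade onset (s)

# True eye position: a stylized 10-deg, ~50-ms saccade (sigmoid profile).
amplitude = 10.0
eye = amplitude / (1.0 + np.exp(-(t - 0.025) / 0.008))

# Damped internal representation: exponential low-pass with an assumed
# 100-ms time constant, chosen only for illustration.
tau = 0.100
internal = np.zeros_like(eye)
for i in range(1, len(t)):
    internal[i] = internal[i - 1] + (dt / tau) * (eye[i - 1] - internal[i - 1])

# Predicted mislocalization of a flash presented at time t_flash.
for t_flash in (-0.05, 0.0, 0.025, 0.1, 0.25):
    i = np.searchsorted(t, t_flash)
    err = internal[i] - eye[i]
    print(f"flash at {t_flash*1000:+5.0f} ms -> predicted error {err:+.1f} deg")
```

With these assumed parameters the error is largest for flashes presented around saccade onset and decays afterward, the qualitative pattern the damped-representation hypothesis predicts.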


Experimental Brain Research | 1997

Manual interception of moving targets I. Performance and movement initiation

Nicholas L. Port; Daeyeol Lee; Paul Dassonville; Apostolos P. Georgopoulos

We investigated the capacities of human subjects to intercept moving targets in a two-dimensional (2D) space. Subjects were instructed to intercept moving targets on a computer screen using a cursor controlled by an articulated 2D manipulandum. A target was presented in 1 of 18 combinations of three acceleration types (constant acceleration, constant deceleration, and constant velocity) and six target motion times, from 0.5 to 2.0 s. First, subjects held the cursor in a start zone located at the bottom of the screen along the vertical meridian. After a pseudorandom hold period, the target appeared in the lower left or right corner of the screen and traveled at 45° toward an interception zone located on the vertical meridian 12.5 cm above the start zone. For a trial to be considered successful, the subject's cursor had to enter the interception zone within 100 ms of the target's arrival at the center of the interception zone and stay inside a slightly larger hold zone. Trials in which the cursor arrived more than 100 ms before the target were classified as "early errors," whereas trials in which the cursor arrived more than 100 ms after the target were classified as "late errors." Given the criteria above, the task proved to be difficult for the subjects. Only 41.3% (1080 out of 2614) of the movements were successful, whereas the remaining 58.7% were temporal (i.e., early or late) errors. A large majority of the early errors occurred in trials with decelerating targets, and their percentage tended to increase with longer target motion times. In contrast, late errors occurred in relation to all three target acceleration types, and their percentage tended to decrease with longer target motion times. Three models of movement initiation were investigated. First, the threshold-distance model, originally proposed for optokinetic eye movements to constant-velocity visual stimuli, maintains that response time is composed of two parts: a constant processing time and the time required for the stimulus to travel a threshold distance. This model only partially fit our data. Second, the threshold-τ model, originally proposed as a strategy for movement initiation, assumes that the subject uses the first-order estimate of time-to-contact (τ) to determine when to initiate the interception movement. Similar to the threshold-distance model, the threshold-τ model only partially fit the data. Finally, a dual-strategy model was developed which allowed for the adoption of either of the two strategies for movement initiation; namely, a strategy based on the threshold-distance model ("reactive" strategy) and another based on the threshold-τ model ("predictive" strategy). This model provided a good fit to the data. In fact, individual subjects preferred to use one or the other strategy. This preference was evident at long target motion times, whereas shorter target motion times (i.e., 0.5 and 0.8 s) forced the subjects to use only the reactive strategy.
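The two single-strategy initiation rules can be summarized with a short sketch. The code below implements a threshold-distance rule and a threshold-τ rule for a constant-velocity target; the thresholds, processing time, and path length are assumptions chosen for illustration, not the paper's fitted parameters.

```python
# Hedged sketch of the two movement-initiation rules described above
# (threshold-distance vs. threshold-tau); all parameter values are
# illustrative assumptions, not fitted values from the study.
import numpy as np

def target_kinematics(accel, v0, total_path, dt=0.001):
    """Position/velocity of a target moving with constant acceleration."""
    t = np.arange(0.0, 5.0, dt)
    pos = v0 * t + 0.5 * accel * t**2
    vel = v0 + accel * t
    valid = pos <= total_path
    return t[valid], pos[valid], vel[valid]

def rt_threshold_distance(t, pos, d_thresh=2.0, t_process=0.15):
    """Initiate after the target has covered d_thresh cm, plus processing time."""
    idx = np.argmax(pos >= d_thresh)
    return t[idx] + t_process

def rt_threshold_tau(t, pos, vel, total_path, tau_thresh=0.6):
    """Initiate when the first-order time-to-contact (distance-to-go / speed)
    first drops below tau_thresh seconds."""
    tau = (total_path - pos) / np.maximum(vel, 1e-6)
    idx = np.argmax(tau <= tau_thresh)
    return t[idx]

# Example: a constant-velocity target covering an assumed 17.7-cm path in ~1.5 s.
path = 17.7
t, pos, vel = target_kinematics(accel=0.0, v0=path / 1.5, total_path=path)
print("threshold-distance RT:", rt_threshold_distance(t, pos))
print("threshold-tau RT     :", rt_threshold_tau(t, pos, vel, path))
```

A dual-strategy model in this spirit would simply select one of the two rules per subject (or per trial), which is why a reactive/predictive mixture can fit data that neither rule fits alone.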


NeuroImage | 2001

The Effect of Stimulus-Response Compatibility on Cortical Motor Activation

Paul Dassonville; Scott M. Lewis; Xiao Hong Zhu; Kamil Ugurbil; Seong Gi Kim; James Ashe

Stimulus-response compatibility (SRC) is a general term describing the relationship between a triggering stimulus and its associated motor response. The relationship between stimulus and response can be manipulated at the level of the set of stimulus and response characteristics (set-level) or at the level of the mapping between the individual elements of the stimulus and response sets (element-level). We used functional magnetic resonance imaging (fMRI) to investigate the effects of SRC on functional activation in cortical motor areas. Using behavioral tasks to separately evaluate set- and element-level compatibility, and their interaction, we measured the volume of functional activation in 11 cortical motor areas, in the anterior frontal cortex, and in the superior temporal lobe. Element-level compatibility effects were associated with significant activation in the pre-supplementary motor area (preSMA), the dorsal (PMd) and ventral (PMv) premotor areas, and the parietal areas (inferior, superior, intraparietal sulcus, precuneus). The activation was lateralized to the right hemisphere for most of the areas. Set-level compatibility effects resulted in significant activation in the inferior frontal gyri, anterior cingulate and cingulate motor areas, the PMd, PMv, preSMA, the parietal areas (inferior, superior, intraparietal sulcus, precuneus), and in the superior temporal lobe. Activation in the majority of these areas was lateralized to the left hemisphere. Finally, there was an interaction between set- and element-level compatibility in the middle and superior frontal gyri, in an area co-extensive with the dorsolateral prefrontal cortex, suggesting that this area provided the neural substrate for common processing stages, such as working memory and attention, which are engaged when both levels of SRC are manipulated at once.


Vision Research | 1995

The Use of Egocentric and Exocentric Location Cues in Saccadic Programming

Paul Dassonville; John Schlag; Madeleine Schlag-Rey

Theoretically, the location of a visual target can be encoded with respect to the locations of other stimuli in the visual image (exocentric cues), or with respect to the observer (egocentric cues). Egocentric localization in the oculomotor system has been shown to rely on an internal representation of eye position that inaccurately encodes the time-course of saccadic eye movements, resulting in the mislocalization of visual targets presented near the time of a saccade. In the present investigation, subjects were instructed to localize perisaccadic stimuli in the presence or absence of a visual stimulus that could provide exocentric location information. Saccadic localization was more accurate in the presence of the exocentric cue, suggesting that localization is based on a combination of exocentric and egocentric cues. These findings indicate the need to reassess previously reported neurophysiological studies of spatial accuracy and current models of oculomotor control, which have focused almost exclusively on the egocentric localization abilities of the brain.
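The suggestion that localization reflects a combination of cue types can be illustrated with a minimal weighted-average sketch. The weight and error values below are assumptions for illustration, not estimates from the study; the only point carried over from the abstract is that adding an accurate exocentric cue pulls the combined estimate toward the true location.

```python
# Minimal cue-combination sketch (illustrative assumptions only):
# perceived target location = weighted mix of an error-prone egocentric
# estimate and an exocentric estimate anchored to a visual landmark.
def combined_localization(ego_estimate, exo_estimate, w_exo=0.5):
    """Weighted combination of egocentric and exocentric location estimates."""
    return w_exo * exo_estimate + (1.0 - w_exo) * ego_estimate

true_target  = 10.0          # deg
ego_error    = -3.0          # assumed perisaccadic mislocalization (damped eye signal)
ego_estimate = true_target + ego_error
exo_estimate = true_target   # landmark-relative estimate assumed accurate here

print("egocentric only :", ego_estimate)
print("with exocentric :", combined_localization(ego_estimate, exo_estimate))
```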


Vision Research | 2004

The induced Roelofs effect: two visual systems or the shift of a single reference frame?

Paul Dassonville; Bruce Bridgeman; Jagdeep Kaur Bala; Paul Thiem; Anthony Chad Sampanes

Cognitive judgments about an object's location are distorted by the presence of a large frame offset left or right of an observer's midline. Sensorimotor responses, however, seem immune to this induced Roelofs illusion, with observers able to accurately point to the target's location. These findings have traditionally been used as evidence for a dissociation of the visual processing required for cognitive judgments and sensorimotor responses. However, a recent alternative hypothesis suggests that the behavioral dissociation is expected if the visual system uses a single frame of reference whose origin (the apparent midline) is biased toward the offset frame. The two theories make qualitatively distinct predictions in a paradigm in which observers are asked to indicate the direction symmetrically opposite the target's position. The collaborative findings of two laboratories clearly support the biased-midline hypothesis.
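One way to see why the mirror-symmetric task separates the two accounts is to write out their predicted endpoints. In the hedged sketch below, delta is the assumed shift of the apparent midline induced by the frame; the target position and shift magnitude are made-up illustrative values.

```python
# Sketch of the qualitative predictions for the mirror-symmetric pointing task.
# Assumption: the frame shifts the apparent midline by `delta` (deg).
def predictions(target_x, delta):
    """Predicted physical endpoints when indicating the mirror-opposite location."""
    # Biased-midline (single reference frame): all spatial codes share the
    # shifted midline. Perceived target = target_x - delta (re: apparent midline);
    # its mirror image about the apparent midline lands at -(target_x - delta) + delta.
    biased_midline = -(target_x - delta) + delta
    # Two-visual-systems: the sensorimotor response is immune to the frame,
    # so the mirror response is simply -target_x.
    two_systems = -target_x
    return biased_midline, two_systems

bm, ts = predictions(target_x=5.0, delta=1.5)
print("veridical mirror position:", -5.0)
print("biased-midline prediction:", bm)   # misses by 2 * delta
print("two-systems prediction   :", ts)   # no error
```

Under these assumptions the biased-midline account predicts a sensorimotor error of twice the midline shift in the mirror task, whereas the two-systems account predicts none, which is what makes the paradigm diagnostic.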


Cognitive Psychology | 2004

Evidence against a Central Bottleneck during the Attentional Blink: Multiple Channels for Configural and Featural Processing.

Edward Awh; John T. Serences; Paul Laurey; Harpreet Dhaliwal; Thomas van der Jagt; Paul Dassonville

When a visual target is identified, there is a period of several hundred milliseconds when the processing of subsequent targets is impaired, a phenomenon labeled the attentional blink (AB). The emerging consensus is that the identification of a visual target temporarily occupies a limited attentional resource that is essential for all visual perception. The present results challenge this view. With the same digit discrimination task that impaired subsequent letter discrimination for several hundred milliseconds, we found no disruption of subsequent face discrimination. These results suggest that not all stimuli compete for access to a single resource for visual perception. We propose a multi-channel account of interference in the AB paradigm.


Experimental Brain Research | 1995

Haptic localization and the internal representation of the hand in space

Paul Dassonville

As the hand actively explores the environment, contact with an object leads to neuronal activity in the topographic maps of somatosensory cortex. However, the brain must combine this somatotopically encoded tactile information with an internal representation of the hand's location in space if it is to determine the position of the object in three-dimensional space (3-D haptic localization). To investigate the fidelity of this internal representation in human subjects, a small tactual stimulator, light enough to be worn on the subject's hand, was used to present a brief mechanical pulse (6-ms duration) to the right index finger before, during, or after a fast, visually evoked movement of the right hand. In experiment 1, subjects responded by pointing to the perceived location of the mechanical stimulus in 3-D space. Stimuli presented shortly before or during the visually evoked movement were systematically mislocalized, with the reported location of the stimulus approximately equal to the location occupied by the hand 90 ms after stimulus onset. This pattern of errors indicates a representation of the movement that fails to account for the change in the hand's location during somatosensory delays and, in some subjects, inaccurately depicts the velocity of the actual movement. In experiment 2, subjects were instructed to verbally indicate the perceived temporal relationship of the stimulus and the visually evoked movement (i.e., by reporting whether the stimulus was presented “before,” “during,” or “after” the movement). On average, stimuli presented in the 38-ms period before movement onset were more likely to be perceived as having occurred during rather than before the movement. Similarly, stimuli in the 145-ms period before movement termination were more likely to be perceived as having occurred after rather than during the movement. The analogous findings of experiments 1 and 2 indicate that the same inaccurate representation of dynamic hand position is used to both localize tactual stimuli in 3-D space and construct the perception of arm movement.
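The reported error pattern can be illustrated with a simple trajectory sketch: the reported stimulus location is taken as the hand's position roughly 90 ms after stimulus onset. The minimum-jerk reach and the parameter values below are assumptions for illustration; the only element taken from the abstract is the ~90-ms lag.

```python
# Illustrative sketch (not the authors' analysis): reported location of a
# tactile stimulus ~= hand position 90 ms after stimulus onset, evaluated
# along an assumed minimum-jerk reach trajectory.
import numpy as np

def min_jerk(t, T, amplitude):
    """Minimum-jerk position profile over movement duration T (clipped outside)."""
    s = np.clip(t / T, 0.0, 1.0)
    return amplitude * (10 * s**3 - 15 * s**4 + 6 * s**5)

T, amp, lag = 0.4, 30.0, 0.090   # assumed 400-ms, 30-cm reach; 90-ms lag
for t_stim in (-0.05, 0.0, 0.1, 0.2, 0.35):
    actual   = min_jerk(t_stim, T, amp)
    reported = min_jerk(t_stim + lag, T, amp)   # hand position 90 ms later
    print(f"stimulus at {t_stim*1000:+5.0f} ms: actual {actual:5.1f} cm, "
          f"reported ~{reported:5.1f} cm, error {reported - actual:+5.1f} cm")
```

As in the study's description, stimuli delivered just before or during the movement are mislocalized in the direction of the reach, while stimuli well before onset or after the movement are localized accurately.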


Experimental Brain Research | 1992

The frontal eye field provides the goal of saccadic eye movement

Paul Dassonville; John Schlag; Madeleine Schlag-Rey

Microstimulation of oculomotor regions in primate cortex normally evokes saccadic eye movements of stereotypic directions and amplitudes. The fixed-vector nature of the evoked movements is compatible with the creation of either an artificial retinal or motor error signal. However, when microstimulation is applied during an ongoing natural saccade, the starting eye position of the evoked movement differs from the eye position at stimulation onset (due to the latency of the evoked saccade). An analysis of the effect of this eye position discrepancy on the trajectory of the eventual evoked saccade can clarify the oculomotor role of the structure stimulated. The colliding saccade paradigm of microstimulation was used in the present study to investigate the type of signals conveyed by visual, visuomovement, and movement unit activities in the primate frontal eye field. Colliding saccades elicited from all sites were found to compensate for the portion of the initial movement occurring between stimulation and evoked movement onset, plus a portion of the initial movement occurring before stimulation. This finding suggests that activity in the frontal eye field encodes a retinotopic goal that is converted by a downstream structure into the vector of the eventual saccade.
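The logic of the colliding-saccade analysis can be made explicit with a toy calculation. The sketch below contrasts the endpoint predicted if the stimulated site encodes a retinotopic goal (compensating for eye displacement occurring between stimulation and evoked-saccade onset) with the endpoint predicted by a fixed motor vector; all positions and vectors are made-up illustrative values, and the same comparison applies to the superior-colliculus study summarized below.

```python
# Toy version of the colliding-saccade comparison (illustrative values only).
import numpy as np

goal_vector = np.array([10.0, 0.0])         # vector specified at stimulation (deg)
eye_at_stim = np.array([2.0, 1.0])          # eye position at stimulation onset
eye_at_evoked_onset = np.array([6.0, 3.0])  # eye position when evoked saccade starts
intervening = eye_at_evoked_onset - eye_at_stim

# Retinotopic-goal hypothesis: the evoked saccade is re-aimed at the original
# goal location, so the intervening displacement is subtracted out.
endpoint_goal = eye_at_evoked_onset + (goal_vector - intervening)

# Fixed-vector (motor error) hypothesis: the stereotyped vector is executed
# from wherever the eye happens to be at evoked-saccade onset.
endpoint_vector = eye_at_evoked_onset + goal_vector

print("retinotopic-goal endpoint:", endpoint_goal)    # compensated
print("fixed-vector endpoint    :", endpoint_vector)  # no compensation
```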


Experimental Brain Research | 1989

Interactions between natural and electrically evoked saccades

John Schlag; Madeleine Schlag-Rey; Paul Dassonville

Fixed-vector saccades evoked by electrical stimulation may result from the elicitation of a retinal error signal directing the eyes toward a goal, or from the elicitation of a motor error signal determining the vector itself. Theoretically, the two mechanisms can be differentiated by delivering the stimulation while the eyes are already in motion (colliding saccade paradigm), thereby changing the eye position from which the evoked saccade starts. Only in the first case is the trajectory of the evoked saccade expected to be modified to compensate for part of the ongoing eye movement. An attempt was made to distinguish retinal vs. motor error mechanisms by applying the colliding saccade paradigm of stimulation to 29 sites throughout the superior colliculus (SC) of two trained monkeys. Compensatory evoked saccades, as predicted by the retinal error hypothesis, were obtained consistently in the superficial layers and at deeper sites where visual unit responses could be recorded. Conversely, in deep layers where only presaccadic activity was found, evoked saccades either were not affected by collision or summed their vectors with that of the ongoing movement. These last observations are both consistent with the hypothesis that the signal produced from deep sites was an initial motor error. A second observation was incidentally made: when stimulation was applied to the most superficial SC region, it definitively erased the goal of the ongoing saccade, and the latter did not resume its interrupted course. The colliding saccade paradigm may be useful in clarifying the role of structures involved in oculomotor function.


Journal of Autism and Developmental Disorders | 2009

A Specific Autistic Trait that Modulates Visuospatial Illusion Susceptibility

Elizabeth Walter; Paul Dassonville; Tiana M. Bochsler

Although several accounts of autism have predicted that the disorder should be associated with a decreased susceptibility to visual illusions, previous experimental results have been mixed. This study examined whether a link between autism and illusion susceptibility can be more convincingly demonstrated by assessing the relationships between susceptibility and the extent to which several individual autistic traits are exhibited as a continuum in a population of college students. A significant relationship was observed between the systemizing trait and susceptibility to a subset of the tested illusions (the rod-and-frame, Roelofs, Ponzo and Poggendorff illusions). These results provide support for the idea that autism involves an imbalance between the processing of local and global cues, more heavily weighted toward local features than in the typically developed individual.

Collaboration


Dive into Paul Dassonville's collaborations.

Top Co-Authors

John Schlag, University of California

Scott M. Reed, University of Colorado Denver

Apostolos P. Georgopoulos, United States Department of Veterans Affairs