
Publications


Featured research published by Philip Servos.


Vision Research | 1992

The role of binocular vision in prehension: a kinematic analysis

Philip Servos; Melvyn A. Goodale; Lorna S. Jakobson

This study examined the contribution of binocular vision to the control of human prehension. Subjects reached out and grasped oblong blocks under conditions of either monocular or binocular vision. Kinematic analyses revealed that prehensile movements made under monocular viewing differed substantially from those performed under binocular conditions. In particular, grasping movements made under monocular viewing conditions showed longer movement times, lower peak velocities, proportionately longer deceleration phases, and smaller grip apertures than movements made under binocular viewing. In short, subjects appeared to be underestimating the distance of objects (and as a consequence, their size) under monocular viewing. It is argued that the differences in performance between the two viewing conditions were largely a reflection of differences in estimates of the target's size and distance obtained prior to movement onset. This study provides the first clear kinematic evidence that binocular vision (stereopsis and possibly vergence) makes a significant contribution to the accurate programming of prehensile movements in humans.


Perception | 1994

The Role of Surface Information in Object Recognition: Studies of a Visual Form Agnosic and Normal Subjects

G. Keith Humphrey; Melvyn A. Goodale; Lorna S. Jakobson; Philip Servos

Three experiments were conducted to explore the role of colour and other surface properties in object recognition. The effects of manipulating the availability of surface-based information on object naming in a patient with visual form agnosia and in two age-matched control subjects were examined in experiment 1. The objects were presented under seven different viewing conditions ranging from a full view of the actual objects to line drawings of those same objects. The presence of colour and other surface properties aided the recognition of natural objects such as fruits and vegetables in both the patient and the control subjects. Experiment 2 was focused on four of the critical viewing conditions used in experiment 1 but with a large sample of normal subjects. As in experiment 1, it was found that surface properties, particularly colour, aided the naming of natural objects. The presence of colour did not facilitate the naming of manufactured objects. Experiment 3 was focused on possible ways by which colour could assist in the recognition of natural objects and it was found that object naming was facilitated only if the objects were presented in their usual colour. The results of the experiments show that colour does improve recognition for some types of objects and that the improvement occurs at a high level of visual analysis.


Experimental Brain Research | 1994

Binocular vision and the on-line control of human prehension

Philip Servos; Melvyn A. Goodale

The contribution of binocular visual feedback to the kinematics of human prehension was studied in two related experiments. In both experiments, the field of view of each eye was independently controlled by means of goggles fitted with liquid-crystal shutters. While wearing these goggles, which permitted either a binocular or a monocular view of the world, subjects were required to reach out and grasp a target object, which varied in size and position from trial to trial. In experiment 1, two viewing conditions were used. In one condition, binocular vision was available throughout the entire trial; in the second condition, the initial binocular view was replaced by a monocular view after the reaching movement had been initiated. When only monocular feedback was available, subjects showed a prolonged deceleration phase, although the time they spent in contact with the object was the same in both conditions. In experiment 2, monocular vision was available throughout a given trial in one condition and was replaced by binocular vision upon movement initiation in the second condition. Subjects in this experiment also displayed a prolonged deceleration phase in the monocular feedback condition relative to their performance in the binocular feedback condition. Unlike experiment 1, however, allowing only monocular feedback resulted in an increase in the amount of time subjects spent in contact with the object. Moreover, the object contact phases under the two conditions of experiment 2 were much longer than those observed in experiment 1, in which subjects received initial binocular views of the object. This latter finding suggests that an initial binocular view provides better information about the size and location of the object: information that allows subjects to form their final grasp more efficiently.
In summary, these findings make it clear that binocular vision makes important contributions to both the planning and the on-line control of skilled, visually guided reaching and grasping movements.


Journal of Cognitive Neuroscience | 2003

Perceiving Biological Motion: Dissociating Visible Speech from Walking

Andrea Santi; Philip Servos; Eric Vatikiotis-Bateson; Takaaki Kuratate; Kevin G. Munhall

Neuropsychological research suggests that the neural system underlying the perception of visible speech from kinematics is distinct from the systems underlying the perception of static images of the face and the identification of whole-body actions from kinematics alone. Functional magnetic resonance imaging was used to identify the neural systems underlying point-light visible speech, as well as perception of a walking/jumping point-light body, to determine if they are independent. Although both point-light stimuli produced overlapping activation in the right middle occipital gyrus encompassing area KO and the right inferior temporal gyrus, they also activated distinct areas. Perception of walking biological motion activated a medial occipital area along the lingual gyrus close to the cuneus border, and the ventromedial frontal cortex, neither of which was activated by visible speech biological motion. In contrast, perception of visible speech biological motion activated right V5 and a network of motor-related areas (Broca's area, PM, M1, and supplementary motor area (SMA)), none of which were activated by walking biological motion. Many of the areas activated by seeing visible speech biological motion are similar to those activated while speechreading from an actual face, with the exception of M1 and medial SMA. The motor-related areas found to be active during point-light visible speech are consistent with recent work characterizing the human mirror system (Rizzolatti, Fadiga, Gallese, & Fogassi, 1996).


NeuroImage | 2004

Distributed digit somatotopy in primary somatosensory cortex

Simon A. Overduin; Philip Servos

We obtained high-resolution somatotopic maps of the human digits using 4.0 T functional magnetic resonance imaging (fMRI). In separate experiments, the volar surface of either the right thumb, index, or ring finger was stimulated in a sliding-window fashion in both distal-to-proximal and proximal-to-distal directions using a custom-built pneumatic apparatus. Analysis of the functional images was restricted to Brodmann's areas 3b and 1 and control areas 4 and 3a, as well as a randomized simulation of the functional data in each of these areas. Using in-house algorithms, we detected discrete regions of cortical activation showing phase reversal coinciding with alternation in stimulation direction. Most stimulation-related phase maps of the digits were obtained in areas 3b and 1, rather than areas 3a or 4, despite the somatic input to the latter two areas. The area 3b and 1 representations thus appear to be relatively discrete and somatotopic compared to other somatic processing regions. Our results within areas 3b and 1 confirm the nonlinear mapping of the body surface suggested by recordings in nonhuman primates in terms of phase band topography, scaling, and frequency relative to the actual digit surfaces. The scaling and frequency nonlinearities were more evident within area 3b than area 1, suggesting a functional differentiation of these regions as has previously been observed only in more invasive recordings. Specifically, the area 1 representations were larger overall than those observed in area 3b, and the frequencies of area 3b phase bands and voxels were related disproportionately to thumb and index finger stimulation and to particular areas on the digit surface, suggesting a weighting based in part on receptor distribution.


Experimental Brain Research | 2002

Grasping two-dimensional images and three-dimensional objects in visual-form agnosia

David A. Westwood; James Danckert; Philip Servos; Melvyn A. Goodale

Visually guided prehension is controlled by a specialized visuomotor system in the posterior parietal cortex. It is not clear how this system responds to visual stimuli that lack three-dimensional (3D) structure, such as two-dimensional (2D) images of objects. We asked a neurological patient with visual-form agnosia (patient D.F.) to grasp 3D objects and 2D images of the same objects and to estimate their sizes manually. D.F.’s grip aperture was scaled to the sizes of the 2D and 3D target stimuli, but her manual estimates were poorly correlated with object size. Control participants demonstrated appropriate size-scaling in both the grasping and manual size-estimation tasks, but tended to use a smaller peak aperture when reaching to grasp 2D images. We conclude that: (1) the dorsal stream grasping system does not discriminate in a fundamental way between 2D and 3D objects, and (2) neurologically normal participants might adopt a different visuomotor strategy for target objects that are recognized to be ungraspable. These findings are consistent with the view that the dorsal grasping system accesses a pragmatic, spatial representation of the target object, whereas the ventral system accesses a more comprehensive, volumetric description of the object.


Experimental Brain Research | 2000

Distance estimation in the visual and visuomotor systems

Philip Servos

Previous work has demonstrated that monocular vision affects the kinematics of skilled visually guided reaching movements in humans. In these experiments, prior to movement onset, subjects appeared to be underestimating the distance of objects (and as a consequence, their size) under monocular viewing relative to their reaches made under binocular control. The present series of experiments was conducted to assess whether this underestimation was a consequence of a purely visual distance underestimation under monocular viewing or whether it was due to some implicit inaccuracy in calibrating the reach by a visuomotor system normally under binocular control. In a purely perceptual task, a group of subjects made similar explicit distance estimations of the objects used in the prehension task under monocular and binocular viewing conditions, with no time constraints. A second group of subjects made these explicit distance estimations with only 500-ms views of the objects. No differences were found between monocular and binocular viewing in either of these explicit distance-estimation tasks. The limited-views subjects also performed a visually guided reaching task under monocular and binocular conditions and showed the previously demonstrated monocular underestimation (in that their monocular grasping movements showed lower peak velocities and smaller grip apertures). A distance underestimation of 4.1 cm in the monocular condition was computed by taking the y intercepts of the monocular and binocular peak velocity functions and dividing them by a common slope that minimised the sum of squares error. This distance underestimation was then used to predict the corresponding underestimation of size that should have been observed in the monocular reaches – a value closely approximating the observed value of 0.61 cm.
Taken together, these results suggest that the monocular underestimation in the prehension task is not a consequence of a purely perceptual bias but rather it is visuomotor in nature – a monocular input to a system that normally calibrates motor output on the basis of binocular vision.
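The intercept-and-common-slope computation described in this abstract can be sketched as follows. This is a minimal illustration, not the study's analysis code, and the peak-velocity values below are invented for the example; only the estimator (fit a shared slope across viewing conditions, then divide the intercept difference by that slope) follows the description above.

```python
# Sketch of the shared-slope regression used to derive a distance
# underestimation from peak-velocity-vs-distance functions.
# Data are illustrative, not the study's measurements.

def pooled_slope_fit(groups):
    """Fit v = a_g + b*d with one slope b shared across groups.

    groups: dict mapping condition label -> list of (distance, velocity)
    pairs. Returns (intercepts dict, common slope b) minimising the
    total sum of squared errors (the pooled within-group estimator).
    """
    num = den = 0.0
    means = {}
    for g, pts in groups.items():
        d_mean = sum(d for d, _ in pts) / len(pts)
        v_mean = sum(v for _, v in pts) / len(pts)
        means[g] = (d_mean, v_mean)
        num += sum((d - d_mean) * (v - v_mean) for d, v in pts)
        den += sum((d - d_mean) ** 2 for d, _ in pts)
    b = num / den
    intercepts = {g: vm - b * dm for g, (dm, vm) in means.items()}
    return intercepts, b

# Hypothetical peak velocities (mm/s) at three object distances (cm),
# with a uniformly lower monocular function, as the abstract describes.
data = {
    "binocular": [(20, 900.0), (30, 1100.0), (40, 1300.0)],
    "monocular": [(20, 820.0), (30, 1020.0), (40, 1220.0)],
}

a, b = pooled_slope_fit(data)
# Apparent distance shift: intercept difference over the shared slope.
underestimation = (a["binocular"] - a["monocular"]) / b
print(round(underestimation, 2))  # -> 4.0 (cm, for these made-up data)
```

The logic is that if monocular viewing shifts perceived distance by a constant amount while leaving the velocity-distance slope intact, the two fitted lines are parallel and their vertical offset, rescaled by the slope, recovers that shift in distance units.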


Neuroreport | 1999

fMRI evidence for an inverted face representation in human somatosensory cortex

Philip Servos; Stephen A. Engel; Joseph S. Gati; Ravi S. Menon

We provide evidence that the face component of the somatosensory homunculus is actually upside down rather than right-side up along the central sulcus of the human brain. We pneumatically stimulated the forehead or chin of neurologically intact humans while acquiring fMRI images of somatosensory cortex. During forehead stimulation cortical regions along relatively inferior portions of the postcentral gyrus were most active whereas during chin stimulation relatively superior regions were most active. These data are consistent with an inverted face representation along the central sulcus of the human brain.


Neuroreport | 1998

Somatotopy of the human arm using fMRI.

Philip Servos; Jeffrey M. Zacks; David E. Rumelhart; Gary H. Glover

We describe a technique for mapping out human somatosensory cortex using functional magnetic resonance imaging (fMRI). To produce cortical activation, a pneumatic apparatus presented subjects with a periodic series of air puffs in which a sliding window of five locations moved along the ventral surface of the left arm in a proximal-to-distal or distal-to-proximal direction. This approach, in which the phase-delay of the stimulus can be used to produce somatotopic maps of somatosensory cortex, is based on a method used to generate retinotopic maps of visual cortex. Functional images were acquired using an echoplanar 1.5T scanner and a T2*-weighted spiral acquisition pulse sequence. The periodic series of air puffs created phase-related activation in two cortical regions of the contralateral parietal lobe, the posterior bank of the central sulcus and a more posterior and lateral region.
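The phase-delay idea behind this mapping method can be sketched in a few lines: for a stimulus that travels periodically along the skin, each voxel's response phase at the stimulus frequency indicates which location along the arm drives it. The sketch below uses synthetic time series with an assumed 20-sample stimulus period; it is not the authors' analysis pipeline, only an illustration of phase estimation via the Fourier coefficient at the stimulus frequency.

```python
# Minimal sketch of phase-delay (traveling-wave) mapping: estimate each
# voxel's response phase at the stimulus frequency. Synthetic data; the
# period and "voxel" responses are illustrative assumptions.
import math

def response_phase(timeseries, period_samples):
    """Phase (radians) of the component at the stimulus frequency,
    from the discrete Fourier coefficient at that frequency."""
    w = 2 * math.pi / period_samples
    re = sum(x * math.cos(w * t) for t, x in enumerate(timeseries))
    im = sum(x * math.sin(w * t) for t, x in enumerate(timeseries))
    return math.atan2(im, re)

# Two synthetic voxels responding to a 20-sample stimulus cycle with
# different delays (i.e. preferring different locations along the arm).
period = 20
early = [math.cos(2 * math.pi * t / period) for t in range(100)]
late = [math.cos(2 * math.pi * (t - 5) / period) for t in range(100)]

lag = response_phase(late, period) - response_phase(early, period)
# Convert the phase difference back into samples of stimulus delay.
print(round(lag % (2 * math.pi) / (2 * math.pi) * period, 1))  # -> 5.0
```

Sorting voxels by this phase reproduces the ordering of their preferred skin locations, which is what makes the phase-delay of a periodic stimulus usable as a somatotopic (or retinotopic) coordinate.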


Brain and Cognition | 2005

Haptic face identification activates ventral occipital and temporal areas: An fMRI study

Andrea R. Kilgour; Ryo Kitada; Philip Servos; Thomas W. James; Susan J. Lederman

Many studies in visual face recognition have supported a special role for the right fusiform gyrus. Despite the fact that faces can also be recognized haptically, little is known about the neural correlates of haptic face recognition. In the current fMRI study, neurologically intact participants were intensively trained to identify specific facemasks (molded from live faces) and specific control objects. When these stimuli were presented in the scanner, facemasks activated left fusiform and right hippocampal/parahippocampal areas (and other regions) more than control objects, whereas the latter produced no activity greater than the facemasks. We conclude that these ventral occipital and temporal areas may play an important role in the haptic identification of faces at the subordinate level. We further speculate that left fusiform gyrus may be recruited more for facemasks than for control objects because of the increased need for sequential processing by the haptic system.

Collaboration


Top co-authors of Philip Servos:

Melvyn A. Goodale (University of Western Ontario)
Tyler D. Bancroft (Wilfrid Laurier University)
Peggy J. Planetta (Wilfrid Laurier University)
G. Keith Humphrey (University of Western Ontario)
Joseph S. Gati (Robarts Research Institute)
Ravi S. Menon (University of Western Ontario)