Michael T Swanston
University of Dundee
Publications
Featured research published by Michael T Swanston.
Perception | 1992
Michael T Swanston; Nicholas J. Wade
The motion aftereffect (MAE) was measured with retinally moving vertical gratings positioned above and below (flanking) a retinally stationary central grating (experiments 1 and 2). Motion over the retina was produced by leftward motion of the flanking gratings relative to the stationary eyes, and by rightward eye or head movements tracking the moving (but retinally stationary) central grating relative to the stationary (but retinally moving) surround gratings. In experiment 1 the motion occurred within a fixed boundary on the screen, and oppositely directed MAEs were produced in the central and flanking gratings with static fixation; but with eye or head tracking MAEs were reported only in the central grating. In experiment 2 motion over the retina was equated for the static and tracking conditions by moving blocks of grating without any dynamic occlusion and disclosure at the boundaries. Both conditions yielded equivalent leftward MAEs of the central grating in the same direction as the prior flanking motion, ie an MAE was consistently produced in the region that had remained retinally stationary. No MAE was recorded in the flanking gratings, even though they moved over the retina during adaptation. When just two gratings were presented, MAEs were produced in both, but in opposite directions (experiments 3 and 4). It is concluded that the MAE is a consequence of adapting signals for the relative motion between elements of a display.
Attention Perception & Psychophysics | 1988
Michael T Swanston; Nicholas J. Wade
If physical movements are to be seen veridically, it is necessary to distinguish between displacements over the retina due to self-motion and those due to object motion. When target motion is in a different direction from that of a pursuit eye movement, the perceived motion of the target is known to be shifted in direction toward the retinal path, indicating a partial failure of compensation for eye movements (Becklen, Wallach, & Nitzberg, 1984). The experiments reported here compared the perception of target motion when the head and/or eyes were moving in a direction different from that of the target. In three experiments, target motion was varied in direction, phase, and extent with respect to pursuit movements. In all cases, the compensation was less effective for head than for eye movements, although this difference was least when the extent of the tracked and target motions was the same. Compensation for pursuit eye movements was better than that reported in previous studies.
Perception | 1987
Michael T Swanston; Nicholas J. Wade; Ross H Day
For veridical detection of object motion any moving detecting system must allocate motion appropriately between itself and objects in space. A model for such allocation is developed for simplified situations (points of light in uniform motion in a frontoparallel plane). It is proposed that motion of objects is registered and represented successively at four levels within frames of reference that are defined by the detectors themselves or by their movements. The four levels are referred to as retinocentric, orbitocentric, egocentric, and geocentric. Thus the retinocentric signal is combined with that for eye rotation to give an orbitocentric signal, and the left and right orbitocentric signals are combined to give an egocentric representation. Up to the egocentric level, motion representation is angular rather than three-dimensional. The egocentric signal is combined with signals for head and body movement and for egocentric distance to give a geocentric representation. It is argued that although motion perception is always geocentric, relevant registrations also occur at the three earlier levels. The model is applied to various veridical and nonveridical motion phenomena.
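The successive combination of signals described above can be sketched numerically. This is an illustrative reading of the four-level scheme, not the authors' notation: all function names, the simple additive combination, and the example values are assumptions.

```python
import math

def orbitocentric(retinocentric, eye_rotation):
    """Retinocentric motion signal combined with the eye-rotation signal
    (both in deg/s; positive = rightward)."""
    return retinocentric + eye_rotation

def egocentric(orbito_left, orbito_right):
    """Left and right orbitocentric signals combined (averaged here)."""
    return (orbito_left + orbito_right) / 2.0

def geocentric(egocentric_signal, head_body_motion, egocentric_distance):
    """Angular egocentric motion scaled by egocentric distance (m) and
    combined with head/body movement to give motion in external units."""
    linear = math.radians(egocentric_signal) * egocentric_distance
    return linear + head_body_motion

# Pursuit of a target moving 5 deg/s rightward with a matching eye movement:
# the retinocentric signal is zero, but the eye-rotation signal restores
# the motion at the orbitocentric level.
orb_l = orbitocentric(0.0, 5.0)
orb_r = orbitocentric(0.0, 5.0)
ego = egocentric(orb_l, orb_r)
geo = geocentric(ego, 0.0, 1.0)  # linear velocity at 1 m viewing distance
```

Note how the model recovers veridical motion during pursuit: the eye-rotation signal compensates for the cancelled retinal displacement, and distance information is only needed at the final, geocentric stage.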
Attention Perception & Psychophysics | 1984
Nicholas J. Wade; Charles M. M. de Weert; Michael T Swanston
Binocular rivalry between a horizontal and a vertical grating was examined in six experiments. The gratings could be presented in a static form or dynamically so that either one or both gratings moved. The motion consisted of a symmetrical transformation of the gratings about their centers, so that the lines moved outwards or inwards. During rivalry, a moving pattern was visible for about 50% longer than an equivalently oriented static pattern (Experiments 1, 2, and 4). When both gratings were in motion (Experiments 3 and 5), the course of rivalry was similar to that found for two static gratings. The duration of dominance of the moving grating was influenced by its velocity (Experiment 6). The results are interpreted in terms of the stimulus strengths of the static and dynamic patterns.
Perception | 1993
Nicholas J. Wade; Michael T Swanston; Charles M. M. de Weert
A brief history of quantitative assessments of interocular transfer (IOT) of the motion aftereffect (MAE) is presented. Recent research indicates that the MAE occurs as a consequence of adapting detectors for relative rather than retinal motion. When gratings above and below a stationary, fixated grating are moved in an otherwise dark field the central, retinally stationary grating appears to move in the opposite direction; when tested with stationary gratings an MAE is almost entirely confined to the central grating. The IOT of such an MAE was measured in experiment 1: the display was presented to one eye with a black field in the other. The IOT was about 30% of the monocular MAE. Similar values were found in experiment 2, in which the contralateral eye received an equivalent central stationary grating during adaptation and test. The dichoptic interaction of the processes involved in the MAE was examined by presenting the central gratings to both eyes and a single flanking grating above in one eye and below in the other (experiment 3). The MAE was tested with either the same or the contralateral pairing. Oppositely directed MAEs were found for the central and flanking gratings, but they were confined mainly to the conditions in which the configurations presented during adaptation were present in the same eyes during test. In experiment 4, the surround MAEs were compared after adaptation with two moving gratings in one eye or with a similar dichoptic configuration, and they were of similar duration. In a final experiment the MAE was tested either monocularly or binocularly after alternating adaptation of the left and right eyes and was found to be of the same duration. It is concluded that the MAE is a consequence of adapting relational-motion detectors, which are either monocular or of the binocular OR class.
Perception | 1987
Nicholas J. Wade; Michael T Swanston
Induced motion occurs when there is a misallocation of nonuniform motion. Theories of induced motion are reviewed with respect to the model for uniform motion recently proposed by Swanston, Wade, and Day. Theories based on single processes operating at one of the retinocentric, orbitocentric, egocentric, or geocentric levels are not able to account for all aspects of the phenomenon. It is therefore suggested that induced motion is a consequence of combining two different types of motion signals: one provides information by registering the motion with respect to the retina, orbit, and egocentre; the other provides information only on the relational motions between the pattern elements. Simple rules are given for defining a frame of reference for the relational motion process, which can result in a reallocation of the motion signals. It is proposed that the two signals in combination are weighted differentially, with the greater influence coming from the relational signals. Procedures for determining the weighting factors are described, and predictions from the model are examined.
Attention Perception & Psychophysics | 1992
Michael T Swanston; Nicholas J. Wade; Hiroshi Ono; Koichi Shibuta
A horizontally moving target was followed by rotation of the eyes alone or by a lateral movement of the head. These movements resulted in the retinal displacement of a vertically moving target from its perceived path, the amplitude of which was determined by the phase and amplitude of the object motion and of the eye or head movements. In two experiments, we tested the prediction from our model of spatial motion (Swanston, Wade, & Day, 1987) that perceived distance interacts with compensation for head movements, but not with compensation for eye movements with respect to a stationary head. In both experiments, when the vertically moving target was seen at a distance different from its physical distance, its perceived path was displaced relative to that seen when there was no error in perceived distance, or when it was pursued by eye movements alone. In a third experiment, simultaneous measurements of eye and head position during lateral head movements showed that errors in fixation were not sufficient to require modification of the retinal paths determined by the geometry of the observation conditions in Experiments 1 and 2.
Perception | 1990
Michael T Swanston; Nicholas J. Wade; Hiroshi Ono
In the model of motion perception proposed by Swanston, Wade, and Day (1987, Perception, 16, 143–159) it was suggested that retinocentric motion and eye movement information are combined independently for each eye, to give left and right orbitocentric representations of movement. The weighted orbitocentric values are then added, to give a single egocentric representation. It is shown that for a physical motion observed without pursuit eye movements this formulation predicts a reduction in the perceived extent of motion with monocular as opposed to binocular viewing. This prediction was tested, and shown to be incorrect. Accordingly, a modification of the model is proposed, in which the left and right retinocentric signals are weighted according to the presence or absence of stimulation, and combined to give a binocular retinocentric representation. In a similar way left-eye and right-eye position signals are combined to give a single binocular eye movement signal for version. This is then added to the binocular retinocentric signal to give the egocentric representation. This modification provides a unified account of both static visual direction and movement perception.
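The revised combination scheme can be sketched as follows. The weighting by presence or absence of stimulation, the averaging, and all names are illustrative assumptions rather than the authors' formulation.

```python
def binocular_retinocentric(retino_left, retino_right, w_left, w_right):
    """Weight each eye's retinocentric signal (w = 0 for an unstimulated
    eye) and normalise to a single binocular retinocentric value."""
    total = w_left + w_right
    return (w_left * retino_left + w_right * retino_right) / total

def version_signal(eye_left, eye_right):
    """Combine left- and right-eye position signals into one version signal."""
    return (eye_left + eye_right) / 2.0

def egocentric(retino_left, retino_right, eye_left, eye_right,
               w_left=1.0, w_right=1.0):
    """Binocular retinocentric signal plus the version signal."""
    return (binocular_retinocentric(retino_left, retino_right,
                                    w_left, w_right)
            + version_signal(eye_left, eye_right))

# A 3 deg retinal displacement with stationary eyes yields the same
# egocentric value whether both eyes are stimulated or only the left
# (w_right = 0), consistent with the finding that perceived extent is
# not reduced under monocular viewing.
both = egocentric(3.0, 3.0, 0.0, 0.0)
mono = egocentric(3.0, 0.0, 0.0, 0.0, w_left=1.0, w_right=0.0)
```

The normalisation step is what distinguishes this revision from the original formulation: merging the retinocentric signals before adding eye-movement information avoids predicting a halved extent under monocular viewing.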
Attention Perception & Psychophysics | 1986
Michael T Swanston; Walter C. Gogel
The research investigated the perceived motion in depth resulting from the optical expansion or contraction of objects. A theoretical analysis of this cue was made in terms of the size-distance invariance hypothesis. For presenting stimuli, a computer simulation was developed which simulated the physical motion in depth of a constant-sized object at a constant velocity. A series of experiments showed that the extent of perceived motion in depth did not relate to the change in perceived stimulus size as predicted by the size-distance invariance hypothesis. Instead, substantial perceptions of depth motion occurred even though the ratio of the terminal perceived sizes was similar to the ratio of the terminal visual angles. Extending past research, a theoretical account based on the existence of two distinct processes involved in responding to size and distance was applied successfully. One process expressed by the size-distance invariance hypothesis determines the response to immediate, sensorily specified information. The second process involves the effect of size remembered from a previous perception (off-sized judgments) upon the response to distance. As determined by measurement obtained from using the head motion procedure, this remembered (representational) size, as it occurs in successive instants of the optical expansion pattern, can be translated by the visual system into a robust perception of distance.
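The size-distance invariance hypothesis against which these results were tested relates perceived size S', perceived distance D', and visual angle theta by S' = D' tan(theta). A minimal sketch of the prediction, with illustrative numbers:

```python
import math

def perceived_distance(perceived_size_m, visual_angle_deg):
    """Distance implied by a perceived size and visual angle under the
    size-distance invariance hypothesis: D' = S' / tan(theta)."""
    return perceived_size_m / math.tan(math.radians(visual_angle_deg))

# An object of constant perceived size 0.2 m whose visual angle grows
# from 1 deg to 2 deg should, under the hypothesis, appear to approach
# to roughly half its starting distance.
d_start = perceived_distance(0.2, 1.0)
d_end = perceived_distance(0.2, 2.0)
```

The experiments reported above found that perceived motion in depth did not follow this coupling of perceived size and distance, motivating the two-process account.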
Vision Research | 1996
Nicholas J. Wade; Lothar Spillmann; Michael T Swanston
The visual motion aftereffect (MAE) typically occurs when stationary contours are presented to a retinal region that has previously been exposed to motion. It can also be generated following observation of a stationary grating when two gratings (above and below it) move laterally: the surrounding gratings induce motion in the opposite direction in the central one. Following adaptation, the centre appears to move in the direction opposite to the previously induced motion, but little or no MAE is visible in the surround gratings [Swanston & Wade (1992) Perception, 21, 569-582]. The stimulus conditions that generate the MAE from induced motion were examined in five experiments. It was found that: the central MAE occurs when tested with stationary centre and surround gratings following adaptation to surround motion alone (Expt 1); no MAEs in either the centre or surround can be measured when the test stimulus is the centre alone or the surround alone (Expt 2); the maximum MAE in the central grating occurs when the same surround region is adapted and tested (Expt 3); the duration of the MAE is dependent upon the spatial frequency of the surround but not the centre (Expt 4); MAEs can be observed in the surround gratings when they are themselves surrounded by stationary gratings during test (Expt 5). It is concluded that the linear MAE occurs as a consequence of adapting restricted retinal regions to motion but it can only be expressed when nonadapted regions are also tested.