Gerrit W. Maus
Nanyang Technological University
Publications
Featured research published by Gerrit W. Maus.
Cerebral Cortex | 2013
Gerrit W. Maus; Jamie Ward; Romi Nijhawan; David Whitney
How does the visual system assign the perceived position of a moving object? This question is surprisingly complex, since sluggish responses of photoreceptors and transmission delays along the visual pathway mean that visual cortex does not have immediate information about a moving object's position. In the flash-lag effect (FLE), a moving object is perceived ahead of an aligned flash. Psychophysical work on this illusion has inspired models for visual localization of moving objects. However, little is known about the underlying neural mechanisms. Here, we investigated the role of neural activity in areas MT+ and V1/V2 in localizing moving objects. Using short trains of repetitive transcranial magnetic stimulation (TMS) or single pulses at different time points, we measured the influence of TMS on the perceived location of a moving object. We found that TMS delivered to MT+ significantly reduced the FLE; single-pulse timings revealed a broad temporal tuning, with the maximum effect for TMS pulses delivered 200 ms after the flash. Stimulation of V1/V2 did not significantly influence perceived position. Our results demonstrate that area MT+ contributes to the perceptual localization of moving objects and is involved in the integration of position information over a long time window.
Journal of Experimental Psychology: Human Perception and Performance | 2009
Gerrit W. Maus; Romi Nijhawan
When a moving object abruptly disappears, this profoundly influences its localization by the visual system. In Experiment 1, 2 aligned objects moved across the screen, and 1 of them abruptly disappeared. Observers reported seeing the objects misaligned at the time of the offset, with the continuing object leading. Experiment 2 showed that the perceived forward displacement of the moving object depended on speed and that offsets were localized accurately. Two competing representations of position for moving objects are proposed: 1 based on a spatially extrapolated internal model, and the other based on transient signals elicited by sudden changes in the object trajectory that can correct the forward-shifted position. Experiment 3 measured forward displacements for moving objects that disappeared only for a short time or abruptly reduced contrast by various amounts. Manipulating the relative strength of the 2 position representations in this way resulted in intermediate positions being perceived, with weaker motion signals or stronger transients leading to less forward displacement. This 2-process mechanism is advantageous because it uses available information about object position to maximally reduce spatio-temporal localization errors.
PLOS ONE | 2011
Gerrit W. Maus; Jason Fischer; David Whitney
Crowding is a fundamental bottleneck in object recognition. In crowding, an object in the periphery becomes unrecognizable when surrounded by clutter or distractor objects. Crowding depends on the positions of target and distractors, both their eccentricity and their relative spacing. In all previous studies, position has been expressed in terms of retinal position. However, in a number of situations retinal and perceived positions can be dissociated. Does retinal or perceived position determine the magnitude of crowding? Here observers performed an orientation judgment on a target Gabor patch surrounded by distractors that drifted toward or away from the target, causing an illusory motion-induced position shift. Distractors in identical physical positions led to worse performance when they drifted towards the target (appearing closer) versus away from the target (appearing further). This difference in crowding corresponded to the difference in perceived positions. Further, the perceptual mislocalization was necessary for the change in crowding, and both the mislocalization and crowding scaled with drift speed. The results show that crowding occurs after perceived positions have been assigned by the visual system. Crowding does not operate in a purely retinal coordinate system; perceived positions need to be taken into account.
Frontiers in Psychology | 2010
Gerrit W. Maus; Sarah Weigelt; Romi Nijhawan; Lars Muckli
A gradually fading moving object is perceived to disappear at positions beyond its luminance detection threshold, whereas abrupt offsets are usually localized accurately. What role does retinotopic activity in visual cortex play in this motion-induced mislocalization of the endpoint of fading objects? Using functional magnetic resonance imaging (fMRI), we localized regions of interest (ROIs) in retinotopic maps abutting the trajectory endpoint of a bar moving either toward or away from this position while gradually decreasing or increasing in luminance. Area V3A showed predictive activity, with stronger fMRI responses for motion toward versus away from the ROI. This effect was independent of the change in luminance. In Area V1 we found higher activity for high-contrast onsets and offsets near the ROI, but no significant differences between motion directions. We suggest that perceived final positions of moving objects are based on an interplay of predictive position representations in higher motion-sensitive retinotopic areas and offset transients in primary visual cortex.
Psychological Science | 2008
Gerrit W. Maus; Romi Nijhawan
The flash-lag effect, in which a moving object is perceived ahead of a colocalized flash, has led to keen empirical and theoretical debates. To test the proposal that a predictive mechanism overcomes neural delays in vision by shifting objects spatially, we asked observers to judge the final position of a bar moving into the retinal blind spot. The bar was perceived to disappear in positions well inside the unstimulated area. Given that photoreceptors are absent in the blind spot, the perceived shift must be based on the history of the moving object. Such predictive overshoots are suppressed when a moving object disappears abruptly from the retina, triggering retinal transient signals. No such transient-driven suppression occurs when the object disappears by virtue of moving into the blind spot. The extrapolated position of the moving bar revealed in this manner provides converging support for visual prediction.
Journal of Vision | 2011
Anna Kosovicheva; Gerrit W. Maus; Stuart Anstis; Patrick Cavanagh; Peter U. Tse; David Whitney
Motion can bias the perceived location of a stationary stimulus (Whitney & Cavanagh, 2000), but whether this occurs at a high level of representation or at early, retinotopic stages of visual processing remains an open question. As coding of orientation emerges early in visual processing, we tested whether motion could influence the spatial location at which orientation adaptation is seen. Specifically, we examined whether the tilt aftereffect (TAE) depends on the perceived or the retinal location of the adapting stimulus, or both. We used the flash-drag effect (FDE) to produce a shift in the perceived position of the adaptor away from its retinal location. Subjects viewed a patterned disk that oscillated clockwise and counterclockwise while adapting to a small disk containing a tilted linear grating that was flashed briefly at the moment of the rotation reversals. The FDE biased the perceived location of the grating in the direction of the disk's motion immediately following the flash, allowing dissociation between the retinal and perceived location of the adaptor. Brief test gratings were subsequently presented at one of three locations: the retinal location of the adaptor, its perceived location, or an equidistant control location (antiperceived location). Measurements of the TAE at each location demonstrated that the TAE was strongest at the retinal location, and was larger at the perceived compared to the antiperceived location. This indicates a skew in the spatial tuning of the TAE consistent with the FDE. Together, our findings suggest that motion can bias the location of low-level adaptation.
Current Biology | 2013
Gerrit W. Maus; Wesley Chaney; Alina Liberman; David Whitney
Adaptation is one of the longest-studied phenomena in perception and neuroscience. Adaptation generally results in negative perceptual aftereffects: after prolonged exposure to a specific feature, perception of a neutral stimulus is biased in the opposite direction [1,2]. A recent paper in Current Biology [3] challenged this view by proposing that, additionally, adaptation biases perception in the same direction as features observed over relatively long timescales in the past. This finding challenges dominant theories of visual adaptation; however, here we argue that these long-term positive correlations are not due to neural or perceptual processes but arise from short-term negative aftereffects. Thus, existing models of adaptation remain unchallenged, and critical evaluations of how adaptation could predictively aid perception are still needed.
PLOS ONE | 2016
Gerrit W. Maus; David Whitney
We usually do not notice the blind spot, a receptor-free region on the retina. Stimuli extending through the blind spot appear filled in. However, if an object does not reach through but ends in the blind spot, it is perceived as “cut off” at the boundary. Here we show that even when there is no corresponding stimulation at opposing edges of the blind spot, well-known motion-induced position shifts also extend into the blind spot and elicit a dynamic filling-in process that allows spatial structure to be extrapolated into the blind spot. We presented observers with sinusoidal gratings that drifted into or out of the blind spot, or flickered in counterphase. Gratings moving into the blind spot were perceived to be longer than those moving out of the blind spot or flickering, revealing motion-dependent filling-in. Further, observers could perceive more of a grating’s spatial structure inside the blind spot than would be predicted from simple filling-in of luminance information from the blind spot edge. This is evidence for a dynamic filling-in process that uses spatiotemporal information from the motion system to extrapolate visual percepts into the scotoma of the blind spot. Our findings also provide further support for the notion that an explicit spatial shift of topographic representations contributes to motion-induced position illusions.
Journal of Vision | 2017
Zhimin Chen; Gerrit W. Maus; David Whitney; Rachel Denison
During perceptual rivalry, an observer's perceptual experience alternates over time despite constant sensory stimulation. Perceptual alternations are thought to be driven by conflicting or ambiguous retinal image features at a particular spatial location and modulated by global context from surrounding locations. However, rivalry can also occur between two illusory stimuli—such as two filled-in stimuli within the retinal blind spot. In this “filling-in rivalry,” what observers perceive in the blind spot changes in the absence of local stimulation. It remains unclear whether filling-in rivalry shares common mechanisms with other types of rivalry. We measured the dynamics of rivalry between filled-in percepts in the blind spot, finding a high degree of exclusivity (perceptual dominance of one filled-in percept, rather than a perception of transparency), alternation rates that were highly consistent for individual observers, and dynamics that closely resembled other forms of perceptual rivalry. The results suggest that mechanisms common to a wide range of rivalry situations need not rely on conflicting retinal image signals.
Scientific Reports | 2018
Zhimin Chen; Rachel Denison; David Whitney; Gerrit W. Maus
When occlusion and binocular disparity cues conflict, what visual features determine how they combine? Sensory cues, such as T-junctions, have been suggested to be necessary for occlusion to influence stereoscopic depth perception. Here we show that illusory occlusion, with no retinal sensory cues, interacts with binocular disparity when perceiving depth. We generated illusory occlusion using stimuli filled in across the retinal blind spot. Observers viewed two bars forming a cross with the intersection positioned within the blind spot. One of the bars was presented binocularly with a disparity signal; the other was presented monocularly, extending through the blind spot, with no defined disparity. When the monocular bar was perceived as filled in through the blind spot, it was perceived as occluding the binocular bar, generating illusory occlusion. We found that this illusory occlusion influenced perceived stereoscopic depth: depth estimates were biased to be closer or farther, depending on whether a bar was perceived as in front of or behind the other bar, respectively. Therefore, the perceived relative depth position, based on filling-in cues, set boundaries for interpreting metric stereoscopic depth cues. This suggests that filling-in can produce opaque surface representations that can trump other depth cues such as disparity.