Publication


Featured research published by John Cass.


Current Biology | 2010

Visual sensitivity underlying changes in visual consciousness

David Alais; John Cass; Robert P. O'Shea; Randolph Blake

When viewing a different stimulus with each eye, we experience the remarkable phenomenon of binocular rivalry: alternations in consciousness between the stimuli [1, 2]. According to a popular theory first proposed in 1901, neurons encoding the two stimuli engage in reciprocal inhibition [3-8] so that those processing one stimulus inhibit those processing the other, yielding consciousness of one dominant stimulus at any moment and suppressing the other. Also according to the theory, neurons encoding the dominant stimulus adapt, weakening their activity and the inhibition they can exert, whereas neurons encoding the suppressed stimulus recover from adaptation until the balance of activity reverses, triggering an alternation in consciousness. Despite its popularity, this theory has one glaring inconsistency with data: during an episode of suppression, visual sensitivity to brief probe stimuli in the dominant eye should decrease over time and should increase in the suppressed eye, yet sensitivity appears to be constant [9, 10]. Using more appropriate probe stimuli (experiment 1) in conjunction with a new method (experiment 2), we found that sensitivities in dominance and suppression do show the predicted complementary changes.
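The reciprocal-inhibition-with-adaptation account summarised above is often illustrated with a simple two-population rate model. The sketch below is a minimal illustration of that generic scheme only, not the authors' model; all parameter values (drive, inhibition weight, adaptation gain, time constants) are arbitrary assumptions chosen so that the simulation alternates.

```python
import numpy as np

def rivalry_sim(T=60.0, dt=0.001, drive=1.0, inhibition=2.5,
                adapt_gain=2.0, tau=0.02, tau_a=2.0):
    """Two-population rate model: mutual inhibition plus slow self-adaptation.
    Parameter values are illustrative assumptions, not fitted to data."""
    n = int(T / dt)
    t = np.arange(n) * dt
    e1 = np.zeros(n); e2 = np.zeros(n)   # responses of the two monocular populations
    a1 = np.zeros(n); a2 = np.zeros(n)   # slow adaptation states
    e1[0], e2[0] = 0.6, 0.4              # small initial asymmetry
    relu = lambda x: max(x, 0.0)
    for i in range(1, n):
        e1[i] = e1[i-1] + dt * (-e1[i-1] + relu(drive - inhibition * e2[i-1]
                                                - adapt_gain * a1[i-1])) / tau
        e2[i] = e2[i-1] + dt * (-e2[i-1] + relu(drive - inhibition * e1[i-1]
                                                - adapt_gain * a2[i-1])) / tau
        a1[i] = a1[i-1] + dt * (e1[i-1] - a1[i-1]) / tau_a
        a2[i] = a2[i-1] + dt * (e2[i-1] - a2[i-1]) / tau_a
    return t, e1, e2

t, e1, e2 = rivalry_sim()
dominant = e1 > e2                                   # which population currently "wins"
switches = np.flatnonzero(np.diff(dominant.astype(int)))
print(f"{switches.size} perceptual alternations in {t[-1]:.0f} s")
```

In this toy model the dominant population's adaptation slowly weakens the inhibition it exerts, until the suppressed population escapes, which is exactly the dynamic whose sensitivity consequences the experiments above were designed to probe.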


Journal of Vision | 2010

Breaking camouflage: Binocular disparity reduces contrast masking in natural images

Susan G. Wardle; John Cass; Kevin R. Brooks; David Alais

To study the effect of blur adaptation on accommodative variability, accommodative responses and pupil diameters in myopes (n = 22) and emmetropes (n = 19) were continuously measured before, during, and after exposure to defocus blur. Accommodative and pupillary response measurements were made by an autorefractor during a monocular reading exercise. The text was presented on a computer screen at 33 cm viewing distance with a rapid serial visual presentation paradigm. After baseline testing and a 5-min rest, blur was induced by wearing either an optimally refractive lens, or a +1.0 DS or a +3.0 DS defocus lens. Responses were continuously measured during a 5-min period of adaptation. The lens was then removed, and measurements were again made during a 5-min post-adaptation period. After a second 5-min rest, a final post-adaptation period was measured. No significant change in baseline accommodative responses was found after the 5-min period of adaptation to the blurring lenses (p > 0.05). Compared to the pre-adaptation level, both refractive groups showed similar and significant increases in accommodative variability immediately after blur adaptation to both defocus lenses. After the second rest period, the accommodative variability in both groups returned to the pre-adaptation level. The results indicate that blur adaptation has a short-term effect on the accommodative system, elevating instability of the accommodative response. Mechanisms underlying the increase in accommodative variability induced by blur adaptation, and possible influences of accommodative stability on myopia development, are discussed.


The Journal of Neuroscience | 2013

Rapid Recalibration to Audiovisual Asynchrony

Erik Van der Burg; David Alais; John Cass

To combine information from different sensory modalities, the brain must deal with considerable temporal uncertainty. In natural environments, an external event may produce simultaneous auditory and visual signals, yet they will invariably activate the brain asynchronously due to different propagation speeds for light and sound, and different neural response latencies once the signals reach the receptors. One strategy the brain uses to deal with audiovisual timing variation is to adapt to a prevailing asynchrony to help realign the signals. Here, using psychophysical methods in human subjects, we investigate audiovisual recalibration and show that it takes place extremely rapidly without explicit periods of adaptation. Our results demonstrate that exposure to a single, brief asynchrony is sufficient to produce strong recalibration effects. Recalibration occurs regardless of whether the preceding trial was perceived as synchronous, and regardless of whether a response was required. We propose that this rapid recalibration is a fast-acting sensory effect, rather than a higher-level cognitive process. An account in terms of response bias is unlikely, given a strong asymmetry whereby vision-leading stimuli produce larger recalibrations than audition-leading stimuli. A fast-acting recalibration mechanism provides a means for overcoming inevitable audiovisual timing variation and serves to rapidly realign signals at onset to maximize the perceptual benefits of audiovisual integration.


Vision Research | 2006

Evidence for two interacting temporal channels in human visual processing.

John Cass; David Alais

Previous studies have generally concluded that two independent channels underlie human temporal vision: one broad and low-pass, the other band-pass and tuned to higher temporal frequencies. We confirm this with iso-oriented targets and masks. With orthogonal masks, the same high-frequency channel emerges but no low-pass channel is observed, indicating that the high-frequency channel is orientation invariant and possibly pre-cortical in origin. In contrast, the orientation dependence of the low-frequency channel suggests a cortical origin. Subsequent masking experiments using unoriented spatiotemporally filtered noise demonstrated that high-frequency masks (>8 Hz) suppress low-frequency targets (1 and 4 Hz), but low frequencies do not suppress high frequencies. This asymmetry challenges the traditional assumption of channel independence. To explain this, we propose a two-channel model in which a non-orientation-selective high-frequency channel suppresses an orientation-tuned low-frequency channel. This asymmetry may: (i) equalise the over-representation of low temporal-frequency energy in natural stimuli (1/f power spectrum); (ii) contribute to motion deblurring.
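The proposed two-channel interaction can be sketched computationally: a sustained (low-pass) channel that is divisively suppressed by a transient (band-pass) channel, with no suppression in the reverse direction. The snippet below is a minimal sketch of that asymmetry only; the channel shapes, corner and peak frequencies, and the suppression weight w are illustrative assumptions, not the fitted model from the paper.

```python
import numpy as np

def low_pass(f, fc=4.0, order=2):
    """Assumed sustained (low-pass) temporal channel; corner frequency is arbitrary."""
    return 1.0 / (1.0 + (f / fc) ** order)

def band_pass(f, peak=8.0, bw=0.6):
    """Assumed transient (band-pass) temporal channel, log-Gaussian around ~8 Hz."""
    return np.exp(-0.5 * (np.log2(f / peak) / bw) ** 2)

def sensitivity(f_target, f_mask=None, mask_contrast=0.0, w=2.0):
    """Asymmetric interaction: the transient channel divisively suppresses the
    sustained channel, but the sustained channel does not suppress the transient one."""
    suppression = band_pass(f_mask) * mask_contrast if f_mask is not None else 0.0
    r_sustained = low_pass(f_target) / (1.0 + w * suppression)
    r_transient = band_pass(f_target)        # unaffected by low-frequency masks
    return max(r_sustained, r_transient)     # detection via the more sensitive channel

# A 12 Hz mask substantially elevates the threshold for a 1 Hz target (~2.2x here) ...
print(round(sensitivity(1.0) / sensitivity(1.0, f_mask=12.0, mask_contrast=1.0), 2))
# ... but a 1 Hz mask barely affects a 15 Hz target (elevation ~1.0)
print(round(sensitivity(15.0) / sensitivity(15.0, f_mask=1.0, mask_contrast=1.0), 2))
```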


PLOS ONE | 2010

Efficient visual search from synchronized auditory signals requires transient audiovisual events.

Erik Van der Burg; John Cass; Christian N. L. Olivers; Jan Theeuwes; David Alais

Background: A prevailing view is that audiovisual integration requires temporally coincident signals. However, a recent study failed to find any evidence for audiovisual integration in visual search even when using synchronized audiovisual events. An important question is what information is critical to observe audiovisual integration. Methodology/Principal Findings: Here we demonstrate that temporal coincidence (i.e., synchrony) of auditory and visual components can trigger audiovisual interaction in cluttered displays and consequently produce very fast and efficient target identification. In visual search experiments, subjects found a modulating visual target vastly more efficiently when it was paired with a synchronous auditory signal. By manipulating the kind of temporal modulation (sine wave vs. square wave vs. difference wave; harmonic sine-wave synthesis; gradient of onset/offset ramps) we show that abrupt visual events are required for this search efficiency to occur, and that sinusoidal audiovisual modulations do not support efficient search. Conclusions/Significance: Thus, audiovisual temporal alignment will only lead to benefits in visual search if the changes in the component signals are both synchronized and transient. We propose that transient signals are necessary in synchrony-driven binding to avoid spurious interactions with unrelated signals when these occur close together in time.
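The critical manipulation contrasts smooth (sinusoidal) with abrupt (square-wave) temporal modulation at the same rate. The short sketch below merely illustrates what distinguishes the two waveforms; the sample rate, duration and modulation frequency are arbitrary assumptions, and this is not the stimulus code used in the study.

```python
import numpy as np

fs, dur, f_mod = 1000, 1.0, 1.0      # assumed sample rate (Hz), duration (s), modulation rate (Hz)
t = np.arange(0, dur, 1 / fs)
sine = np.sin(2 * np.pi * f_mod * t)             # smooth modulation, no abrupt events
square = np.sign(np.sin(2 * np.pi * f_mod * t))  # same rate, but with abrupt transitions

# The square wave's abrupt transitions are the "transient" events referred to above.
max_sine_step = np.max(np.abs(np.diff(sine)))
max_square_step = np.max(np.abs(np.diff(square)))
print(f"largest sample-to-sample change: sine {max_sine_step:.3f}, square {max_square_step:.3f}")
```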


Vision Research | 2005

Dynamics of collinear contrast facilitation are consistent with long-range horizontal striate transmission

John Cass; Branka Spehar

It is well established that the activity of striate neurons may be either facilitated or suppressed by visual stimuli presented outside their classical receptive field (CRF) limits. Whilst two general mechanisms have been identified as candidates for these contextual effects (those based on extra-striate feedback and those based on long-range horizontal striate connections), the physiological data supporting these models are both ambiguous and inconsistent. Here we investigate psychophysically the phenomenon of collinear facilitation, in which contrast detection thresholds for foveally presented Gabor stimuli are reduced by the concurrent presentation of remote collinear flankers. Using backward noise masking, we demonstrate that the minimum exposure duration required to induce facilitation increases monotonically with greater target-flanker separation. The inferred cortical propagation velocities of this process (0.10-0.23 m s⁻¹) closely correspond with depolarising activity observed to travel across the striate cortex of several species. These dynamics strongly suggest that contrast facilitation is mediated by long-range horizontal striate connections. This conclusion complements a recent suggestion that collinearly induced long-range suppressive dynamics depend on extra-striate feedback.
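The velocity estimate follows from dividing the extra cortical distance implied by a larger target-flanker separation by the extra exposure duration needed for facilitation. The back-of-the-envelope sketch below uses made-up example numbers and an assumed foveal cortical magnification factor, purely to show the form of the calculation; it does not reproduce the paper's data.

```python
# Hypothetical illustration of the velocity estimate: if facilitation needs
# ~40 ms more exposure when the target-flanker separation grows by 2 deg,
# and ~4 mm of V1 represents 1 deg near the fovea (assumed magnification),
# the implied horizontal propagation speed is:
extra_separation_deg = 2.0        # assumed increase in target-flanker gap
extra_duration_s = 0.040          # assumed extra exposure time required
magnification_mm_per_deg = 4.0    # assumed foveal cortical magnification
cortical_distance_m = extra_separation_deg * magnification_mm_per_deg * 1e-3
velocity = cortical_distance_m / extra_duration_s
print(f"implied propagation speed: {velocity:.2f} m/s")   # 0.20 m/s, within the reported 0.10-0.23 range
```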


PLOS ONE | 2010

Multisensory Perceptual Learning of Temporal Order: Audiovisual Learning Transfers to Vision but Not Audition

David Alais; John Cass

Background: An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or by multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Methodology/Principal Findings: Three groups were trained daily for 10 sessions on an auditory, a visual, or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning so that transfer of any learning from the trained task could be measured by post-testing other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, nor did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. Conclusions/Significance: The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally, the patterns of featural transfer suggest that perceptual learning of temporal order may be optimised to object-centered rather than viewer-centered constraints.


Journal of Vision | 2009

Temporal whitening: transient noise perceptually equalizes the 1/f temporal amplitude spectrum.

John Cass; David Alais; Branka Spehar; Peter J. Bex

Naturally occurring luminance distributions are approximately 1/f in their spatial and temporal amplitude spectra. By systematically varying the spatio-temporal profile of broadband noise stimuli, we demonstrate that humans invariably overestimate the proportion of high spatial and temporal frequency energy. Critically, we find that the strength of this bias is of a magnitude that predicts a perceptually equalized response to the spatio-temporal fall-off in the natural amplitude spectrum. This interpretation is supported by our finding that the magnitude of this transient response bias, while evident across a broad range of narrowband spatial frequencies (0.25-8 cycles/deg), decreases above 2 cycles/deg, which itself compensates for the increase in temporal frequency energy previously observed at high spatial frequencies as a consequence of small fixational eye movements (M. Rucci, R. Iovin, M. Poletti, & F. Santini, 2007). Additional temporal masking and adaptation experiments reveal a transiently biased asymmetry: whereas temporal frequencies >4 Hz mask and adapt 1- and 15-Hz targets, lower masking and adaptation frequencies have much less effect on sensitivity to 15-Hz than to 1-Hz targets. These results imply that the visual system over-represents its transient input to an extent that predicts an equalized temporal channel response to the low-frequency-biased structure of natural scenes.
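The whitening claim can be illustrated numerically: if the input has a 1/f temporal amplitude spectrum and the response applies a gain that grows in proportion to temporal frequency (the transient bias), the resulting response spectrum is approximately flat. The sketch below demonstrates only this arithmetic; the sample rate, duration and the exactly linear gain are simplifying assumptions, not the channel model fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 120.0, 8.0                       # assumed sample rate (Hz) and duration (s)
n = int(fs * dur)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

# Synthesize a temporal luminance signal with a 1/f amplitude spectrum
amp = np.zeros_like(freqs)
amp[1:] = 1.0 / freqs[1:]
spectrum = amp * np.exp(1j * rng.uniform(0, 2 * np.pi, freqs.size))
signal = np.fft.irfft(spectrum, n=n)

# Assumed transient bias: response gain proportional to temporal frequency.
# Applied to the 1/f input amplitudes, the response spectrum becomes ~flat (whitened).
input_amp = np.abs(np.fft.rfft(signal))[1:-1]
gain = freqs[1:-1]
response_amp = input_amp * gain
print("relative spread of the whitened spectrum:",
      round(float(response_amp.std() / response_amp.mean()), 3))   # ~0, i.e., flat
```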


Journal of Vision | 2009

Orientation-tuned suppression in binocular rivalry reveals general and specific components of rivalry suppression

Sjoerd Stuit; John Cass; Chris L. E. Paffen; David Alais

During binocular rivalry (BR), conflicting monocular images are alternately suppressed from awareness. During suppression of an image, contrast sensitivity for probes is reduced by approximately 0.3-0.5 log units relative to when the image is in perceptual dominance. Previous studies on rivalry suppression have led to controversies concerning the nature and extent of suppression during BR. We tested for feature-specific suppression using orthogonal rivaling gratings and measuring contrast sensitivity to small grating probes at a range of orientations in a 2AFC orientation discrimination task. Results indicate that suppression is not uniform across orientations: suppression was much greater for orientations close to that of the suppressed grating. The higher suppression was specific to a narrow range around the suppressed rival grating, with a tuning similar to V1 orientation bandwidths. A similar experiment tested for spatial frequency tuning and found that suppression was stronger for frequencies close to that of the suppressed grating. Interestingly, no tuned suppression was observed when a flicker-and-swap paradigm was used, suggesting that tuned suppression occurs only for lower-level, interocular rivalry. Together, the results suggest there are two components to rivalry suppression: a general feature-invariant component and an additional component specifically tuned to the rivaling features.


Journal of Vision | 2009

Orientation bandwidths are invariant across spatiotemporal frequency after isotropic components are removed

John Cass; Sjoerd Stuit; Peter J. Bex; David Alais

It is well established that mammalian visual cortex possesses a large proportion of orientation-selective neurons. Attempts to measure the bandwidth of these mechanisms psychophysically have yielded highly variable results (approximately 6°-180°). Two stimulus factors have been proposed to account for this variability: spatial and temporal frequency, with several studies indicating broader bandwidths at low spatial and high temporal frequencies. We estimated orientation bandwidths using a classic overlay masking paradigm across a range of spatiotemporal frequencies (0.5, 2, and 8 c.p.d.; 1.6 and 12.5 Hz) with target and mask presented either monoptically or dichoptically. A standard three-parameter Gaussian model (amplitude and width, mean fixed at 0°) confirms that bandwidths generally increase at low spatial and high temporal frequencies. When incorporating an additional orientation-untuned (isotropic) amplitude component, however, we find that not only are the amplitudes of the isotropic and orientation-tuned components highly dependent upon stimulus spatiotemporal frequency, but orientation bandwidths are highly invariant (approximately 30° half-width at half-amplitude). These results suggest that previously reported spatiotemporally contingent bandwidth effects may have confounded bandwidth with isotropic (so-called cross-orientation) masking. Interestingly, the magnitudes of all monoptically derived parameter estimates were found to transfer dichoptically, suggesting a cortical locus for both isotropic and orientation-tuned masking.
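The analysis described, a Gaussian tuned component (mean fixed at 0°) plus an orientation-untuned isotropic component, can be sketched as a simple curve fit. The example below uses synthetic threshold-elevation data with assumed parameter values; it is illustrative only and is not the authors' analysis code or data.

```python
import numpy as np
from scipy.optimize import curve_fit

def masking_model(theta, tuned_amp, sigma, isotropic):
    """Orientation-tuned Gaussian (mean fixed at 0 deg) plus an
    orientation-untuned (isotropic) masking component."""
    return isotropic + tuned_amp * np.exp(-theta**2 / (2 * sigma**2))

# Synthetic threshold-elevation data (illustrative only, not the paper's data)
theta = np.array([0., 7.5, 15., 22.5, 30., 45., 60., 90.])
rng = np.random.default_rng(1)
observed = masking_model(theta, tuned_amp=0.45, sigma=25.0, isotropic=0.20)
observed += rng.normal(0, 0.02, theta.size)

(tuned_amp, sigma, isotropic), _ = curve_fit(
    masking_model, theta, observed, p0=[0.3, 20.0, 0.1])

hwha = sigma * np.sqrt(2 * np.log(2))   # half-width at half-amplitude of the tuned part
print(f"tuned amplitude {tuned_amp:.2f}, isotropic {isotropic:.2f}, "
      f"bandwidth ~{hwha:.0f} deg HWHA")
```

Separating the isotropic term from the tuned term is the step that, in the abstract above, renders the fitted bandwidths invariant across spatiotemporal frequency.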

Collaboration


Dive into John Cass's collaborations.

Top Co-Authors

Peter J. Bex, Northeastern University

Branka Spehar, University of New South Wales

Jan Theeuwes, VU University Amsterdam

Deborah Apthorp, Australian National University

Steven C. Dakin, University College London

Marina Zannoli, Paris Descartes University