Publication


Featured research published by Michelle To.


Vision Research | 2011

Vision out of the corner of the eye

Michelle To; B. C. Regan; Dora Wood; J. D. Mollon

The margin of the temporal visual field lies more than 90° from the line of sight and is critical for detecting incoming threats and for balance and locomotor control. We show that (i) contrast sensitivity beyond 70° is higher for moving stimuli than for stationary stimuli, and in the outermost region, only moving stimuli are visible; (ii) sensitivity is highest for motion in directions near the vertical and horizontal axes and is higher for forward than for backward directions; (iii) the former anisotropy arises early in the visual pathway; (iv) thresholds for discriminating direction are lowest for upward and downward motion.


Proceedings of the Royal Society of London B: Biological Sciences | 2011

A general rule for sensory cue summation: evidence from photographic, musical, phonetic and cross-modal stimuli

Michelle To; Roland Baddeley; Tom Troscianko; David J. Tolhurst

The Euclidean and MAX metrics have been widely used to model cue summation psychophysically and computationally. Both rules happen to be special cases of a more general Minkowski summation rule, (cue1^m + cue2^m)^(1/m), where m = 2 and ∞, respectively. In vision research, Minkowski summation with power m = 3–4 has been shown to be a superior model of how subthreshold components sum to give an overall detection threshold. We have recently reported that Minkowski summation with power m = 2.84 accurately models summation of suprathreshold visual cues in photographs. In four suprathreshold discrimination experiments, we confirm the previous findings with new visual stimuli and extend the applicability of this rule to cue combination in auditory stimuli (musical sequences and phonetic utterances, where m = 2.95 and 2.54, respectively) and cross-modal stimuli (m = 2.56). In all cases, Minkowski summation with power m = 2.5–3 outperforms the Euclidean and MAX operator models. We propose that this reflects the summation of neuronal responses that are not entirely independent but which show some correlation in their magnitudes. Our findings are consistent with electrophysiological research demonstrating signal correlations (r = 0.1–0.2) between sensory neurons presented with natural stimuli.
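
For a concrete sense of the rule, here is a minimal sketch (illustrative only, not the authors' code) that combines two hypothetical cue magnitudes with the Minkowski rule for several exponents, showing how m = 2 reproduces Euclidean summation and a very large m approaches the MAX rule:

```python
import numpy as np

def minkowski_sum(cues, m):
    """Combine cue magnitudes with the Minkowski rule: (sum_i cue_i^m)^(1/m)."""
    cues = np.asarray(cues, dtype=float)
    return np.sum(cues ** m) ** (1.0 / m)

# Two hypothetical cue magnitudes (arbitrary units, not data from the paper).
cues = [1.0, 0.8]

for m in (2.0, 2.84, 4.0, 64.0):
    print(f"m = {m:5.2f}: combined response = {minkowski_sum(cues, m):.3f}")

# m = 2 gives the Euclidean rule, sqrt(1.0**2 + 0.8**2) ~ 1.281, while a very
# large m approaches the MAX rule, max(1.0, 0.8) = 1.0; the paper's best-fitting
# exponents lie in between, around m = 2.5-3.
```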


ACM Transactions on Applied Perception | 2009

Perception of differences in natural-image stimuli: Why is peripheral viewing poorer than foveal?

Michelle To; Iain D. Gilchrist; Tom Troscianko; J. S. B. Kho; David J. Tolhurst

Visual Difference Predictor (VDP) models have played a key role in digital image applications such as the development of image quality metrics. However, little attention has been paid to their applicability to peripheral vision. Central (i.e., foveal) vision is extremely sensitive for the contrast detection of simple stimuli such as sinusoidal gratings, but peripheral vision is less sensitive. Furthermore, crowding is a well-documented phenomenon whereby differences in suprathreshold peripherally viewed target objects (such as individual letters or patches of sinusoidal grating) become more difficult to discriminate when surrounded by other objects (flankers). We examine three factors that might influence the degree of crowding with natural-scene stimuli (cropped from photographs of natural scenes): (1) location in the visual field, (2) distance between target and flankers, and (3) flanker-target similarity. We ask how these factors affect crowding in a suprathreshold discrimination experiment where observers rate the perceived differences between two sequentially presented target patches of natural images. The targets might differ in the shape, size, arrangement, or color of items in the scenes. Changes in uncrowded peripheral targets are perceived to be less than for the same changes viewed foveally. Consistent with previous research on simple stimuli, we find that crowding in the periphery (but not in the fovea) reduces the magnitudes of perceived changes even further, especially when the flankers are closer and more similar to the target. We have tested VDP models based on the response behavior of neurons in visual cortex and the inhibitory interactions between them. The models do not explain the lower ratings for peripherally viewed changes even when the lower peripheral contrast sensitivity was accounted for; nor could they explain the effects of crowding, which others have suggested might arise from errors in the spatial localization of features in the peripheral image. This suggests that conventional VDP models do not port well to peripheral vision.
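
For context on the critical spacing mentioned above, the short sketch below applies Bouma's rule of thumb, which holds that crowding is expected when flankers fall within roughly half the target's eccentricity; this is general background rather than the model or parameters used in the paper, and the example eccentricities are made up:

```python
def crowding_expected(target_eccentricity_deg, flanker_distance_deg, bouma_fraction=0.5):
    """Bouma's rule of thumb: flankers crowd a peripheral target when their
    distance from it is less than about half the target's eccentricity."""
    return flanker_distance_deg < bouma_fraction * target_eccentricity_deg

# Hypothetical configurations (degrees of visual angle).
print(crowding_expected(10.0, 3.0))   # True  -> flankers inside the critical spacing
print(crowding_expected(10.0, 6.0))   # False -> flankers outside the critical spacing
print(crowding_expected(0.0, 0.5))    # False -> foveal targets show little crowding
```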


Proceedings of SPIE, the International Society for Optical Engineering | 2006

Use of a vision model to quantify the significance of factors affecting target conspicuity

M. A. Gilmore; C. K. Jones; A. W. Haynes; David J. Tolhurst; Michelle To; Tom Troscianko; Paul G. Lovell; C. A. Párraga; K. Pickavance

When designing camouflage it is important to understand how the human visual system processes the information to discriminate the target from the background scene. A vision model has been developed to compare two images and detect differences in local contrast in each spatial frequency channel. Observer experiments are being undertaken to validate this vision model so that the model can be used to quantify the relative significance of different factors affecting target conspicuity. Synthetic imagery can be used to design improved camouflage systems. The vision model is being used to compare different synthetic images to understand what features in the image are important to reproduce accurately and to identify the optimum way to render synthetic imagery for camouflage effectiveness assessment. This paper will describe the vision model and summarise the results obtained from the initial validation tests. The paper will also show how the model is being used to compare different synthetic images and discuss future work plans.
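
As a rough illustration of the kind of comparison described (differences in local contrast within each spatial-frequency channel), the sketch below filters two images with a small difference-of-Gaussians filter bank and pools the per-channel differences with a Minkowski sum; the filter scales, pooling exponent, and test images are placeholders, not the actual model's parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bandpass_responses(image, sigmas=(1, 2, 4, 8)):
    """Crude spatial-frequency channels: difference-of-Gaussians at several scales."""
    image = image.astype(float)
    return [gaussian_filter(image, s) - gaussian_filter(image, 2 * s) for s in sigmas]

def predicted_difference(image_a, image_b, m=2.84):
    """Pool per-channel local contrast differences into a single predicted difference."""
    diffs = []
    for resp_a, resp_b in zip(bandpass_responses(image_a), bandpass_responses(image_b)):
        diffs.append(np.abs(resp_a - resp_b).ravel())
    diffs = np.concatenate(diffs)
    return np.sum(diffs ** m) ** (1.0 / m)

# Example with two random 64x64 arrays standing in for a background scene
# with and without a (hypothetical) target patch.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
scene_with_target = scene.copy()
scene_with_target[28:36, 28:36] += 0.3
print(predicted_difference(scene, scene_with_target))
```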


Journal of Vision | 2017

Modeling grating contrast discrimination dippers: The role of surround suppression

Michelle To; Mazviita Chirimuuta; David J. Tolhurst

We consider the role of nonlinear inhibition in physiologically realistic multineuronal models of V1 to predict the dipper functions from contrast discrimination experiments with sinusoidal gratings of different geometries. The dip in dipper functions has been attributed to an expansive transducer function, which itself is attributed to two nonlinear inhibitory mechanisms: contrast normalization and surround suppression. We ran five contrast discrimination experiments, with targets and masks of different sizes and configurations: small Gabor target/small mask, small target/large mask, large target/large mask, small target/in-phase annular mask, and small target/out-of-phase annular mask. Our V1 modeling shows that the results for the small Gabor target/small mask, small target/large mask, and large target/large mask configurations are easily explained only if the model includes surround suppression. This is compatible with the finding that the in-phase annular mask generated little threshold elevation while the out-of-phase mask was more effective. Surrounding mask gratings cannot be equated with surround suppression at the receptive-field level. We examine whether normalization and surround suppression occur simultaneously (parallel model) or sequentially (a better reflection of neurophysiology). The Akaike criterion difference showed that the sequential model was better than the parallel one, but the difference was small. The large target/large mask dipper experiment was not well fit by our models, and we suggest that this may reflect selective attention to its uniquely larger test stimulus. The best-fit model replicates some behaviors of single V1 neurons, such as the decrease in receptive-field size with increasing contrast.
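
As background on how an expansive transducer with divisive inhibition produces the characteristic dip, the sketch below uses a Foley-style transducer with a surround-suppression term added to the denominator and numerically finds the contrast increment needed for a fixed response difference at each pedestal contrast; all constants are illustrative and are not the fitted values from this study:

```python
import numpy as np

def response(c, c_surround=0.0, p=2.4, q=2.0, z=0.01, w_surround=0.5):
    """Foley-style transducer: expansive numerator, divisive normalization in the
    denominator, plus a weighted surround-suppression term (illustrative constants)."""
    return c ** p / (z + c ** q + w_surround * c_surround ** q)

def increment_threshold(pedestal, delta_r=0.05, c_surround=0.0):
    """Smallest contrast increment whose response exceeds the pedestal response
    by delta_r (a stand-in for one just-noticeable response difference)."""
    increments = np.linspace(1e-4, 1.0, 20000)
    above = response(pedestal + increments, c_surround) - response(pedestal, c_surround) >= delta_r
    return increments[above][0] if above.any() else np.nan

for c in (0.0, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.4):
    print(f"pedestal {c:5.3f}: threshold {increment_threshold(c):.4f} "
          f"(with surround mask: {increment_threshold(c, c_surround=0.3):.4f})")

# Thresholds first dip below the detection threshold and then rise with pedestal
# contrast; in this toy model the surround term raises thresholds most at low
# pedestal contrasts.
```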


Vision Research | 2011

Discrimination of natural scenes in central and peripheral vision

Michelle To; Iain D. Gilchrist; Tom Troscianko; David J. Tolhurst


Proceedings of the Royal Society of London B: Biological Sciences | 2008

Summation of perceptual cues in natural visual scenes.

Michelle To; Paul G. Lovell; Tom Troscianko; David J. Tolhurst


Seeing and Perceiving | 2010

Magnitude of perceived change in natural images may be linearly proportional to differences in neuronal firing rates

David J. Tolhurst; Michelle To; Mazviita Chirimuuta; Tom Troscianko; Pei-Ying Chua; P. George Lovell


Journal of Vision | 2015

Perception of differences in naturalistic dynamic scenes, and a V1-based model.

Michelle To; Iain D. Gilchrist; David J. Tolhurst


Journal of Vision | 2010

Minkowski summation of cues in complex visual discriminations using natural scene stimuli

Michelle To; P. George Lovell; Tom Troscianko; David J. Tolhurst

Collaboration


Dive into Michelle To's collaborations.

Top Co-Authors

B. C. Regan

University of Cambridge
