
Publications


Featured research published by Matthew F. Tang.


Psychonomic Bulletin & Review | 2014

Training and the attentional blink: Limits overcome or expectations raised?

Matthew F. Tang; David R. Badcock; Troy A. W. Visser

The attentional blink (AB) refers to a deficit in reporting the second of two sequentially presented targets when they are separated by less than 500 ms. Two decades of research has suggested that the AB is a robust phenomenon that is likely attributable to a fundamental limit in sequential object processing. This assumption, however, has recently been undermined by a demonstration that the AB can be eliminated after only a few hundred training trials (Choi, Chang, Shibata, Sasaki, & Watanabe in Proceedings of the National Academy of Sciences 109:12242–12247, 2012). In the present work, we examined whether this training benefited performance directly, by eliminating processing limitations as claimed, or indirectly, by creating expectations about when targets would appear. Consistent with the latter option, when temporal expectations were reduced, training-related improvements declined significantly. This suggests that whereas training may ameliorate the AB indirectly, the processing limits evidenced in the AB cannot be directly eliminated by brief exposure to the task.


Journal of Vision | 2015

The broad orientation dependence of the motion streak aftereffect reveals interactions between form and motion neurons.

Matthew F. Tang; J. Edwin Dickinson; Troy A. W. Visser; David R. Badcock

The extended integration time of visual neurons can lead to the production of the neural equivalent of an orientation cue along the axis of motion in response to fast-moving objects. The dominant model argues that these motion streaks resolve the inherent directional uncertainty arising from the small size of receptive fields in V1, by combining spatial orientation with motion signals in V1. This model was tested in humans using visual aftereffects, in which adapting to a static grating causes the perceived direction of a subsequently presented motion stimulus to be tilted away from the adapting orientation. We found that a much broader range of orientations produced aftereffects than predicted by the current model, suggesting that these orientation cues influence motion perception at a later stage than V1. We also found that varying the spatial frequency of the adaptor changed the aftereffect from repulsive to attractive for motion-test but not form-test stimuli. Finally, manipulations of V1 excitability, using transcranial stimulation, reduced the aftereffect, suggesting that the orientation cue is dependent on V1. These results can be accounted for if the orientation information from the motion streak, gathered in V1, enters the motion system at a later stage of motion processing, most likely V5. A computational model of motion direction is presented incorporating gain modifications of broadly tuned motion-selective neurons by narrowly tuned orientation-selective cells in V1, which successfully accounts for the extant data. These results reinforce the suggestion that orientation places strong constraints on motion processing but in a previously undescribed manner.
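
The final sentence above describes a gain-modulation account in which narrowly tuned V1 orientation-selective cells scale the responses of broadly tuned motion-selective neurons. The sketch below illustrates that idea only in spirit: the tuning widths, adaptation rule, and population-vector readout are assumptions chosen for illustration, not the published model.

```python
import numpy as np

prefs = np.arange(0.0, 360.0, 1.0)   # preferred directions of motion-selective neurons (deg)

def circ_diff(a, b):
    """Smallest signed difference between two directions (360-deg periodic)."""
    return (a - b + 180.0) % 360.0 - 180.0

def axial_diff(a, b):
    """Smallest difference between two orientations (180-deg periodic)."""
    return (a - b + 90.0) % 180.0 - 90.0

def decode(test_dir, adapt_ori=None, motion_sigma=40.0,
           form_sigma=15.0, adapt_strength=0.5):
    """Population-vector estimate of perceived direction under orientation gain."""
    # Broadly tuned responses of motion neurons to the test direction
    resp = np.exp(-circ_diff(test_dir, prefs) ** 2 / (2 * motion_sigma ** 2))

    if adapt_ori is not None:
        # Adapted V1 orientation cells lose sensitivity, so the multiplicative
        # gain they supply to motion neurons with a matching axis is reduced
        gain = 1.0 - adapt_strength * np.exp(
            -axial_diff(prefs, adapt_ori) ** 2 / (2 * form_sigma ** 2))
        resp = resp * gain

    # Read out the perceived direction as the population-vector angle
    vec = np.sum(resp * np.exp(1j * np.deg2rad(prefs)))
    return np.rad2deg(np.angle(vec)) % 360.0

baseline = decode(20.0)                  # no adaptation
adapted = decode(20.0, adapt_ori=0.0)    # adapt to a static grating at 0 deg
print(f"decoded direction shifts by {adapted - baseline:+.1f} deg "
      f"(repulsion from the adapted orientation)")
```

With these illustrative parameters, adapting at 0 deg shifts the decoded direction of a 20 deg test away from the adapted orientation, reproducing the repulsive form of the aftereffect described above.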


European Journal of Neuroscience | 2013

Anodal transcranial direct current stimulation over auditory cortex degrades frequency discrimination by affecting temporal, but not place, coding

Matthew F. Tang; Geoffrey R. Hammond

We report three studies of the effects of anodal transcranial direct current stimulation (tDCS) over auditory cortex on audition in humans. Experiment 1 examined whether tDCS enhances rapid frequency discrimination learning. Human subjects were trained on a frequency discrimination task for 2 days, with anodal tDCS applied during the first day and the second day used to assess effects of stimulation on retention. This revealed that tDCS did not affect learning but did degrade frequency discrimination on both days. Follow-up testing 2–3 months after stimulation showed no long-term effects. Following these unexpected results, two additional experiments examined the effects of tDCS on the underlying mechanisms of frequency discrimination, place and temporal coding. Place coding underlies frequency selectivity and was measured using psychophysical tuning curves, with broader curves indicating poorer frequency selectivity. Temporal coding was assessed by measuring the ability to discriminate sounds with different fine temporal structure. We found that tDCS did not broaden frequency selectivity but instead degraded the ability to discriminate tones with different fine temporal structure. The overall results suggest that anodal tDCS applied over auditory cortex degrades frequency discrimination by affecting temporal, but not place, coding mechanisms.


PLOS ONE | 2016

Are Participants Aware of the Type and Intensity of Transcranial Direct Current Stimulation?

Matthew F. Tang; Geoffrey R. Hammond; David R. Badcock

Transcranial direct current stimulation (tDCS) is commonly used to alter cortical excitability, but no experimental study has yet determined whether human participants are able to distinguish between the different types (anodal, cathodal, and sham) of stimulation. If they can, then they are not blind to experimental conditions. We determined whether participants could identify different types of stimulation (anodal, cathodal, and sham) and current strengths after experiencing the sensations of stimulation during current onset and offset (which are associated with the most intense sensations) in Experiment 1, and also with a prolonged period of stimulation in Experiment 2. We first familiarized participants with anodal, cathodal, and sham stimulation at both 1 and 2 mA over either primary motor or visual cortex while their sensitivity to small changes in visual stimuli was assessed. The different stimulation types were then applied for a short (Experiment 1) or extended (Experiment 2) period, with participants indicating the type and strength of the stimulation on the basis of the evoked sensations. Participants were able to identify the intensity of stimulation at better-than-chance levels with shorter, but not longer, periods of stimulation, whereas identification of the different stimulation types was at chance levels. This result suggests that even after exposing participants to stimulation, and ensuring they are fully aware of the existence of a sham condition, they are unable to identify the type of stimulation from transient changes in stimulation intensity or from more prolonged stimulation. Thus participants are able to identify the intensity of stimulation but not its type.


Attention Perception & Psychophysics | 2014

Temporal cues and the attentional blink: A further examination of the role of expectancy in sequential object perception

Troy A. W. Visser; Matthew F. Tang; David R. Badcock; James T. Enns

Although perception is typically constrained by limits in available processing resources, these constraints can be overcome if information about environmental properties, such as the spatial location or expected onset time of an object, can be used to direct resources to particular sensory inputs. In this work, we examined these temporal expectancy effects in greater detail in the context of the attentional blink (AB), in which identification of the second of two targets is impaired when the targets are separated by less than about half a second. We replicated previous results showing that presenting information about the expected onset time of the second target can overcome the AB. Uniquely, we also showed that information about expected onset (a) reduces susceptibility to distraction, (b) can be derived from salient temporal consistencies in intertarget intervals across exposures, and (c) is more effective when presented consistently rather than intermittently, along with trials that do not contain expectancy information. These results imply that temporal expectancy can benefit object processing at perceptual and postperceptual stages, and that participants are capable of flexibly encoding consistent timing information about environmental events in order to aid perception.


Journal of Vision | 2013

The shape of motion perception: Global pooling of transformational apparent motion

Matthew F. Tang; J. Edwin Dickinson; Troy A. W. Visser; Mark Edwards; David R. Badcock

Transformational apparent motion (TAM) is a visual phenomenon highlighting the utility of form information in motion processing. In TAM, smooth apparent motion is perceived when shapes in certain spatiotemporal arrangements change. It has been argued that TAM relies on a separate high-level form-motion system. Few studies have, however, systematically examined how TAM relates to conventional low-level motion-energy systems. To this end, we report a series of experiments showing that, like conventional motion stimuli, multiple TAM signals can combine into a global motion percept. We show that, contrary to previous claims, TAM does not require selective attention, and instead, multiple TAM signals can be simultaneously combined with coherence thresholds reflecting integration across the entire stimulus area. This system is relatively weak, less tolerant to noise, and easily overridden when motion energy cues are sufficiently strong. We conclude that TAM arises from high-level form-motion information that enters the motion system by, at least, the stage of global motion pooling.


Journal of Vision | 2015

Role of form information in motion pooling and segmentation.

Matthew F. Tang; J. Edwin Dickinson; Troy A. W. Visser; Mark Edwards; David R. Badcock

Traditional theories of visual perception have focused on either form or motion processing, implying a functional separation. However, increasing evidence indicates that these features interact at early stages of visual processing. The current study examined a well-known form-motion interaction, where a shape translates along a circular path behind opaque apertures, giving the impression of either independently translating lines (segmentation) or a globally coherent, translating shape. The purpose was to systematically examine how low-level motion information and form information interact to determine which percept is reported. To this end, we used stimuli whose boundaries comprised multiple, spatially-separated Gabor patches, arranged into shapes with three to eight sides. Results showed that shapes with four or fewer sides appeared to move in a segmented manner, whereas those with more sides were integrated as a solid shape. The separation between directions, rather than the total number of sides, caused this switch between integrated and segmented percepts. We conclude that the change between integration and segmentation depends on whether local motion directions can be independently resolved. We also reconcile previous results on the influence of shape closure on motion integration: shapes that form open contours cause segmentation, but there is no corresponding enhanced sensitivity for shapes forming closed contours. Overall, our results suggest that the resolution of the local motion signal determines whether motion segmentation or integration is perceived, with only a small overall influence of form.


PLOS Computational Biology | 2018

Stochastic resonance enhances the rate of evidence accumulation during combined brain stimulation and perceptual decision-making

Onno van der Groen; Matthew F. Tang; Nicole Wenderoth; Jason B. Mattingley

Perceptual decision-making relies on the gradual accumulation of noisy sensory evidence. It is often assumed that such decisions are degraded by adding noise to a stimulus, or to the neural systems involved in the decision making process itself. But it has been suggested that adding an optimal amount of noise can, under appropriate conditions, enhance the quality of subthreshold signals in nonlinear systems, a phenomenon known as stochastic resonance. Here we asked whether perceptual decisions made by human observers obey these stochastic resonance principles, by adding noise directly to the visual cortex using transcranial random noise stimulation (tRNS) while participants judged the direction of coherent motion in random-dot kinematograms presented at the fovea. We found that adding tRNS bilaterally to visual cortex enhanced decision-making when stimuli were just below perceptual threshold, but not when they were well below or above threshold. We modelled the data under a drift diffusion framework, and showed that bilateral tRNS selectively increased the drift rate parameter, which indexes the rate of evidence accumulation. Our study is the first to provide causal evidence that perceptual decision-making is susceptible to a stochastic resonance effect induced by tRNS, and to show that this effect arises from selective enhancement of the rate of evidence accumulation for sub-threshold sensory events.
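
The abstract refers to modelling the behavioural data under a drift diffusion framework, in which evidence accumulates noisily toward a decision boundary and the drift rate indexes the rate of accumulation. Below is a minimal simulation of that framework; all parameter values and the two condition labels are illustrative assumptions, not the fitted estimates from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(drift, boundary=1.0, noise=1.0, dt=0.001, ndt=0.3, n_trials=2000):
    """Simulate a two-boundary diffusion process; return accuracy and mean RT."""
    correct = np.zeros(n_trials, dtype=bool)
    rts = np.zeros(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            # Evidence accumulates with mean rate `drift` plus Gaussian noise
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        correct[i] = x >= boundary   # upper boundary taken as the correct response
        rts[i] = t + ndt             # add non-decision time
    return correct.mean(), rts.mean()

# Increasing only the drift rate (the change attributed to optimal tRNS above)
# produces faster and more accurate decisions for the same boundary setting.
for label, drift in [("lower drift", 0.8), ("higher drift", 1.2)]:
    acc, rt = simulate_ddm(drift)
    print(f"{label}: accuracy = {acc:.2f}, mean RT = {rt:.2f} s")
```

The comparison simply reproduces the qualitative signature reported above: raising the drift rate alone yields faster and more accurate decisions while the boundary is unchanged.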


bioRxiv | 2017

Prediction Error and Repetition Suppression Have Distinct Effects on Neural Representations of Visual Information

Matthew F. Tang; Cooper A. Smout; Ehsan Arabzadeh; Jason B. Mattingley

Predictive coding theories argue that recent experience establishes expectations in the brain which generate prediction errors when violated. Prediction errors provide a possible explanation for repetition suppression, in which evoked neural activity is attenuated across repeated presentations of the same stimulus. According to the predictive coding account, repetition suppression arises because repeated stimuli are expected, whereas non-repeated stimuli are unexpected and thus elicit larger neural responses. Here we employed electroencephalography in humans to test the predictive coding account of repetition suppression by presenting sequences of gratings with orientations that were expected either to repeat or to change in separate blocks. We applied multivariate forward modelling to determine how orientation selectivity was affected by repetition and prediction. Unexpected stimuli were associated with significantly enhanced orientation selectivity, whereas there was no such influence on selectivity for repeated stimuli. Our results suggest that repetition suppression and expectation have separable effects on neural representations of visual feature information.
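
The multivariate forward modelling mentioned above is commonly implemented as an inverted encoding model: hypothetical orientation channels are regressed onto the sensor data, and the estimated weights are then inverted to recover channel responses on held-out trials. The toy simulation below sketches that pipeline under assumed settings (channel count, basis shape, simulated sensors); it is not the study's analysis code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_sensors, n_trials = 6, 32, 200
centres = np.arange(n_channels) * 180.0 / n_channels    # channel centres in degrees

def channel_responses(oris):
    """Half-wave-rectified cosine basis, 180-deg periodic (channels x trials)."""
    d = np.deg2rad(oris[None, :] - centres[:, None]) * 2.0   # doubled for axial data
    return np.maximum(np.cos(d), 0.0) ** 5

# Simulate sensor data as a fixed random mixing of the "true" channel responses
oris = rng.uniform(0.0, 180.0, n_trials)
C = channel_responses(oris)                                  # channels x trials
mixing = rng.standard_normal((n_sensors, n_channels))
B = mixing @ C + 0.5 * rng.standard_normal((n_sensors, n_trials))  # sensors x trials

train, test = slice(0, 150), slice(150, None)

# Forward model: least-squares sensor weights for each hypothetical channel
W = B[:, train] @ np.linalg.pinv(C[:, train])                # sensors x channels

# Inversion: reconstruct channel responses for held-out trials
C_hat = np.linalg.pinv(W) @ B[:, test]                       # channels x trials

# Orientation selectivity: the reconstructed peak should track the presented orientation
err = np.abs(centres[np.argmax(C_hat, axis=0)] - oris[test])
err = np.minimum(err, 180.0 - err)
print(f"median decoding error: {np.median(err):.1f} deg")
```

In this kind of analysis, larger or sharper reconstructed channel responses in one condition are read as enhanced orientation selectivity, which is the comparison reported above for unexpected versus repeated stimuli.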


Journal of Vision | 2017

Separate banks of information channels encode size and aspect ratio

J. Edwin Dickinson; Sarah K. Morgan; Matthew F. Tang; David R. Badcock

Size and aspect ratio are ecologically important visual attributes. Relative size confers depth, and aspect ratio is a size-invariant cue to object identity. The mechanisms of their analysis by the visual system are uncertain. In a series of three psychophysical experiments we show that adaptation causes perceptual repulsion in these properties. Experiment 1 shows that adaptation to a square causes a subsequently viewed smaller (larger) test square to appear smaller (larger) still. Experiment 2 reveals that a test rectangle with an aspect ratio (height/width) of two appears more slender after adaptation to rectangles with aspect ratios less than two, while the same test stimulus appears more squat after adaptation to a rectangle with an aspect ratio greater than two. Significantly, aftereffect magnitudes peak and then decline as the sizes or aspect ratios of adaptor and test diverge. Experiment 3 uses the results of Experiments 1 and 2 to show that the changes in perceived aspect ratio are due to adaptation to aspect ratio rather than adaptation to the height and width of the stimuli. The results are consistent with the operation of distinct banks of information channels tuned for different values of each property. The necessary channels have log-Gaussian sensitivity profiles, have equal widths when expressed as ratios, are labeled with their preferred magnitudes, and are distributed at exponentially increasing intervals. If an adapting stimulus reduces each channel's sensitivity in proportion to its activation, then the displacement of the centroid of activity due to a subsequently experienced test stimulus predicts the measured size or aspect ratio aftereffect.
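
The channel model in the final sentences lends itself to a direct numerical illustration. The sketch below implements a bank of log-Gaussian channels with exponentially spaced preferred aspect ratios, applies adaptation that scales each channel's sensitivity by its own (normalised) activation, and reads out the perceived value as the centroid of labelled activity. The channel count, bandwidth, and adaptation strength are arbitrary illustrative choices, not the fitted values.

```python
import numpy as np

prefs = 2.0 ** np.linspace(-3, 3, 25)   # preferred aspect ratios, exponentially spaced labels
sigma = 0.35                            # common bandwidth in log units (equal width as a ratio)

def activation(stim, sensitivity=1.0):
    """Log-Gaussian response of every channel to a stimulus aspect ratio."""
    return sensitivity * np.exp(-(np.log(stim) - np.log(prefs)) ** 2 / (2 * sigma ** 2))

def perceived(stim, sensitivity=1.0):
    """Centroid of labelled channel activity, read out on the log axis."""
    r = activation(stim, sensitivity)
    return np.exp(np.sum(r * np.log(prefs)) / np.sum(r))

# Adaptation reduces each channel's sensitivity in proportion to its own
# (normalised) activation by the adapting stimulus
adapt_ratio, strength = 1.5, 0.5
adapt_act = activation(adapt_ratio)
sens_after = 1.0 - strength * adapt_act / adapt_act.max()

test_ratio = 2.0
print(f"perceived before adaptation: {perceived(test_ratio):.2f}")
print(f"perceived after adapting to {adapt_ratio}: {perceived(test_ratio, sens_after):.2f}")
```

Consistent with Experiment 2, the centroid readout for a test aspect ratio of two shifts upward (more slender) after adapting to a lower aspect ratio.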

Collaboration


Dive into Matthew F. Tang's collaborations.

Top Co-Authors

David R. Badcock (University of Western Australia)
Troy A. W. Visser (University of Western Australia)
J. Edwin Dickinson (University of Western Australia)
Mark Edwards (Australian National University)
Geoffrey R. Hammond (University of Western Australia)
Ehsan Arabzadeh (Australian National University)