Publication


Featured research published by James M. G. Tsui.


The Journal of Neuroscience | 2009

Pattern Motion Selectivity of Spiking Outputs and Local Field Potentials in Macaque Visual Cortex

Farhan A. Khawaja; James M. G. Tsui; Christopher C. Pack

The dorsal pathway of the primate visual cortex is involved in the processing of motion signals that are useful for perception and behavior. Along this pathway, motion information is first measured by the primary visual cortex (V1), which sends specialized projections to extrastriate regions such as the middle temporal area (MT). Previous work with plaid stimuli has shown that most V1 neurons respond to the individual components of moving stimuli, whereas some MT neurons are capable of estimating the global motion of the pattern. In this work, we show that the majority of neurons in the medial superior temporal area (MST), which receives input from MT, have this pattern-selective property. Interestingly, the local field potentials (LFPs) measured simultaneously with the spikes often exhibit properties similar to that of the presumptive feedforward input to each area: in the high-gamma frequency band, the LFPs in MST are as component selective as the spiking outputs of MT, and MT LFPs have plaid responses that are similar to the spiking outputs of V1. In the lower LFP frequency bands (beta and low gamma), component selectivity is very common, and pattern selectivity is almost entirely absent in both MT and MST. Together, these results suggest a surprisingly strong link between the sensory tuning of cortical LFPs and afferent inputs, with important implications for the interpretation of imaging studies and for models of cortical function.
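Pattern versus component selectivity in this literature is conventionally quantified with a partial-correlation analysis: the cell's plaid direction tuning is compared against "pattern" and "component" predictions built from its grating tuning. The sketch below illustrates that standard computation under assumed inputs; the paper's exact fitting and classification criteria may differ, and all names and parameter values here are illustrative.

```python
import numpy as np

def partial_correlations(plaid_resp, grating_resp, plaid_angle=120):
    """Compare plaid direction tuning against 'pattern' and 'component'
    predictions derived from grating tuning, via partial correlations
    and a Fisher z-transform."""
    plaid_resp = np.asarray(plaid_resp, dtype=float)
    grating_resp = np.asarray(grating_resp, dtype=float)
    n = len(grating_resp)
    shift = int(round((plaid_angle / 2) / (360 / n)))   # half-separation in tuning-curve bins

    comp_pred = np.roll(grating_resp, shift) + np.roll(grating_resp, -shift)
    patt_pred = grating_resp                            # a pattern cell tracks the plaid's global motion

    r_p = np.corrcoef(plaid_resp, patt_pred)[0, 1]
    r_c = np.corrcoef(plaid_resp, comp_pred)[0, 1]
    r_pc = np.corrcoef(patt_pred, comp_pred)[0, 1]

    # Partial correlations discount the correlation between the two predictions.
    R_p = (r_p - r_c * r_pc) / np.sqrt((1 - r_c**2) * (1 - r_pc**2))
    R_c = (r_c - r_p * r_pc) / np.sqrt((1 - r_p**2) * (1 - r_pc**2))

    # Fisher z-transform so the two values can be compared on a common scale.
    fisher_z = lambda r: np.arctanh(r) * np.sqrt(n - 3)
    return fisher_z(R_p), fisher_z(R_c)

# Synthetic example: 12 directions spaced 30 deg apart, noisy "pattern-like" plaid tuning.
dirs = np.arange(0, 360, 30)
grating = np.exp(2 * (np.cos(np.deg2rad(dirs)) - 1))
rng = np.random.default_rng(0)
plaid = grating + 0.05 * rng.standard_normal(len(dirs))
Zp, Zc = partial_correlations(plaid, grating)
print(f"Z_pattern = {Zp:.2f}, Z_component = {Zc:.2f}")  # pattern cells: Zp >> Zc
```

Cells are then commonly classified as pattern selective when Z_pattern exceeds Z_component by a criterion amount (often 1.28, the 90% confidence bound), and as component selective in the converse case.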


Journal of Neurophysiology | 2010

The Role of V1 Surround Suppression in MT Motion Integration

James M. G. Tsui; J. Nicholas Hunter; Richard T. Born; Christopher C. Pack

Neurons in the primate extrastriate cortex are highly selective for complex stimulus features such as faces, objects, and motion patterns. One explanation for this selectivity is that neurons in these areas carry out sophisticated computations on the outputs of lower-level areas such as primary visual cortex (V1), where neuronal selectivity is often modeled in terms of linear spatiotemporal filters. However, it has long been known that such simple V1 models are incomplete because they fail to capture important nonlinearities that can substantially alter neuronal selectivity for specific stimulus features. Thus a key step in understanding the function of higher cortical areas is the development of realistic models of their V1 inputs. We have addressed this issue by constructing a computational model of the V1 neurons that provide the strongest input to extrastriate cortical middle temporal (MT) area. We find that a modest elaboration to the standard model of V1 direction selectivity generates model neurons with strong end-stopping, a property that is also found in the V1 layers that provide input to MT. With this computational feature in place, the seemingly complex properties of MT neurons can be simulated by assuming that they perform a simple nonlinear summation of their inputs. The resulting model, which has a very small number of free parameters, can simulate many of the diverse properties of MT neurons. In particular, we simulate the invariance of MT tuning curves to the orientation and length of tilted bar stimuli, as well as the accompanying temporal dynamics. We also show how this property relates to the continuum from component to pattern selectivity observed when MT neurons are tested with plaids. Finally, we confirm several key predictions of the model by recording from MT neurons in the alert macaque monkey. Overall our results demonstrate that many of the seemingly complex computations carried out by high-level cortical neurons can in principle be understood by examining the properties of their inputs.
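The two ingredients named in the abstract, end-stopped direction-selective V1 units and a simple nonlinear summation in MT, can be caricatured in a few lines. The sketch below is a toy illustration under assumed parameter values, not the authors' implementation.

```python
import numpy as np

def v1_end_stopped_response(bar_length, rf_sigma=1.0, end_sigma=2.0, k_suppress=4.0):
    """Toy end-stopped V1 unit: excitatory drive from a Gaussian center is
    divided by drive falling in flanking 'end zones', so the response first
    grows and then shrinks as the bar gets longer. Parameters are illustrative."""
    x = np.linspace(-10, 10, 2001)
    dx = x[1] - x[0]
    bar = (np.abs(x) <= bar_length / 2).astype(float)            # 1-D bar profile
    center = np.exp(-x**2 / (2 * rf_sigma**2))                   # excitatory center
    end_zones = np.clip(np.exp(-x**2 / (2 * end_sigma**2)) - center, 0, None)
    drive = np.sum(bar * center) * dx
    suppression = np.sum(bar * end_zones) * dx
    return drive / (1.0 + k_suppress * suppression)

def mt_response(v1_inputs, exponent=2.0):
    """Toy MT stage: rectify and sum the V1 afferents, then apply an
    expansive output nonlinearity (the 'simple nonlinear summation' above)."""
    return np.sum(np.clip(v1_inputs, 0, None)) ** exponent

bar_lengths = [0.5, 1, 2, 4, 8]
v1_pool = np.array([v1_end_stopped_response(L) for L in bar_lengths])
print("End-stopped V1 responses vs. bar length:", np.round(v1_pool, 3))
# Treat these responses simply as a stand-in vector of afferent inputs to MT.
print("MT response to this pool of afferents:  ", round(mt_response(v1_pool), 3))
```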


Proceedings of the National Academy of Sciences of the United States of America | 2011

Perceptual and neural consequences of rapid motion adaptation

Davis M. Glasser; James M. G. Tsui; Christopher C. Pack; Duje Tadin

Nervous systems adapt to the prevailing sensory environment, and the consequences of this adaptation can be observed in the responses of single neurons and in perception. Given the variety of timescales underlying events in the natural world, determining the temporal characteristics of adaptation is important to understanding how perception adjusts to its sensory environment. Previous work has shown that neural adaptation can occur on a timescale of milliseconds, but perceptual adaptation has generally been studied over relatively long timescales, typically on the order of seconds. This disparity raises important questions. Can perceptual adaptation be observed at brief, functionally relevant timescales? And if so, how do its properties relate to the rapid adaptation seen in cortical neurons? We address these questions in the context of visual motion processing, a perceptual modality characterized by rapid temporal dynamics. We demonstrate objectively that 25 ms of motion adaptation is sufficient to generate a motion aftereffect, an illusory sensation of movement experienced when a moving stimulus is replaced by a stationary pattern. This rapid adaptation occurs regardless of whether the adapting motion is perceived. In neurophysiological recordings from the middle temporal area of primate visual cortex, we find that brief motion adaptation evokes direction-selective responses to subsequently presented stationary stimuli. A simple model shows that these neural responses can explain the consequences of rapid perceptual adaptation. Overall, we show that the motion aftereffect is not merely an intriguing perceptual illusion, but rather a reflection of rapid neural and perceptual processes that can occur essentially every time we experience motion.
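As a rough illustration of how brief, gain-reducing adaptation in direction-tuned neurons can produce an aftereffect to a stationary test, here is a toy population model. The gain rule, tuning widths, and vector-average readout are assumptions chosen for illustration, not the model fitted in the paper.

```python
import numpy as np

def direction_population(n=16):
    """Preferred directions of a toy MT-like population (degrees)."""
    return np.arange(n) * (360.0 / n)

def adapt_gains(prefs, adapt_dir, strength=0.5, kappa=2.0):
    """Brief adaptation reduces the gain of units tuned near the adapting
    direction (a simple, assumed gain model)."""
    dtheta = np.deg2rad(prefs - adapt_dir)
    tuning = np.exp(kappa * (np.cos(dtheta) - 1))   # von Mises-like direction tuning
    return 1.0 - strength * tuning

def readout_direction(prefs, responses):
    """Vector-average readout of the population response."""
    th = np.deg2rad(prefs)
    return np.rad2deg(np.arctan2(np.sum(responses * np.sin(th)),
                                 np.sum(responses * np.cos(th)))) % 360

prefs = direction_population()
gains = adapt_gains(prefs, adapt_dir=0.0)    # adapt briefly to rightward motion
static_test = np.ones_like(prefs)            # a stationary test drives all units equally
print("Perceived direction of the static test:",
      round(readout_direction(prefs, gains * static_test), 1), "deg")  # ~180: an aftereffect
```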


Current Biology | 2008

Brief motion stimuli preferentially activate surround-suppressed neurons in macaque visual area MT

Jan Churan; Farhan A. Khawaja; James M. G. Tsui; Christopher C. Pack

Intuitively one might think that larger objects should be easier to see, and indeed performance on visual tasks generally improves with increasing stimulus size [1,2]. Recently, a remarkable exception to this rule was reported [3]: when a high-contrast, moving stimulus is presented very briefly, motion perception deteriorates as stimulus size increases. This psychophysical surround suppression has been interpreted as a correlate of the neuronal surround suppression that is commonly found in the visual cortex [3–5]. However, many visual cortical neurons lack surround suppression, and so one might expect that the brain would simply use their outputs to discriminate the motion of large stimuli. Indeed previous work has generally found that observers rely on whichever neurons are most informative about the stimulus to perform similar psychophysical tasks [6]. Here we show that the responses of neurons in the middle temporal (MT) area of macaque monkeys provide a simple resolution to this paradox. We find that surround-suppressed MT neurons integrate motion signals relatively quickly, so that by comparison non-suppressed neurons respond poorly to brief stimuli. Thus, psychophysical surround suppression for brief stimuli can be viewed as a consequence of a strategy that weights neuronal responses according to how informative they are about a given stimulus. If this interpretation is correct, then it follows that any psychophysical experiment that uses brief motion stimuli will effectively probe the responses of MT neurons that have strong surround suppression.
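The weighting-by-informativeness argument can be made concrete with a toy two-pool readout: surround-suppressed cells are given a short integration time constant, non-suppressed cells a longer one, and each pool is weighted by how strongly it responds at a given stimulus duration. The time constants and weighting rule below are assumptions for illustration only.

```python
import numpy as np

# Two toy MT pools: surround-suppressed cells integrate motion quickly,
# non-suppressed cells more slowly. Time constants (ms) are assumed values.
pools = {"surround-suppressed": 20.0, "non-suppressed": 80.0}

def pool_response(duration_ms, tau, r_max=100.0):
    """Mean response to a stimulus of the given duration (exponential build-up)."""
    return r_max * (1.0 - np.exp(-duration_ms / tau))

def readout_weights(duration_ms):
    """Weight each pool in proportion to its response at this duration,
    a simple stand-in for relying on whichever pool is most informative."""
    resp = {name: pool_response(duration_ms, tau) for name, tau in pools.items()}
    total = sum(resp.values())
    return {name: r / total for name, r in resp.items()}

for dur in (25, 500):   # a brief and a long motion stimulus
    weights = {k: round(v, 2) for k, v in readout_weights(dur).items()}
    print(f"{dur:3d} ms stimulus -> readout weights: {weights}")
```

With these assumed time constants, the surround-suppressed pool dominates the readout for the 25 ms stimulus, while the two pools contribute about equally for the long stimulus.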


Journal of Neurophysiology | 2011

Contrast sensitivity of MT receptive field centers and surrounds.

James M. G. Tsui; Christopher C. Pack

Neurons throughout the visual system have receptive fields with both excitatory and suppressive components. The latter are responsible for a phenomenon known as surround suppression, in which responses decrease as a stimulus is extended beyond a certain size. Previous work has shown that surround suppression in the primary visual cortex depends strongly on stimulus contrast. Such complex center-surround interactions are thought to relate to a variety of functions, although little is known about how they affect responses in the extrastriate visual cortex. We have therefore examined the interaction of center and surround in the middle temporal (MT) area of the macaque (Macaca mulatta) extrastriate cortex by recording neuronal responses to stimuli of different sizes and contrasts. Our findings indicate that surround suppression in MT is highly contrast dependent, with the strongest suppression emerging unexpectedly at intermediate stimulus contrasts. These results can be explained by a simple model that takes into account the nonlinear contrast sensitivity of the neurons that provide input to MT. The model also provides a qualitative link to previous reports of a topographic organization of area MT based on clusters of neurons with differing surround suppression strength. We show that this organization can be detected in the gamma-band local field potentials (LFPs) and that the model parameters can predict the contrast sensitivity of these LFP responses. Overall our results show that surround suppression in area MT is far more common than previously suspected, highlighting the potential functional importance of the accumulation of nonlinearities along the dorsal visual pathway.
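One simple way to obtain suppression that peaks at intermediate contrasts, in the spirit of the account above, is to let the center and surround inherit different saturating (Naka-Rushton) contrast-response functions from their inputs, with the surround more contrast sensitive and steeper than the center. The sketch below uses made-up parameter values and is not the fitted model from the paper.

```python
import numpy as np

def naka_rushton(c, c50, n, r_max=1.0):
    """Saturating contrast-response function of the V1-like inputs."""
    return r_max * c**n / (c**n + c50**n)

def mt_response_for_size(contrast, size, center_c50=0.4, center_n=2.0,
                         surround_c50=0.2, surround_n=4.0, k=0.3):
    """Toy MT response: center drive minus surround drive, each with its own
    contrast nonlinearity. The surround is only engaged by large stimuli.
    All parameter values are illustrative assumptions."""
    center = naka_rushton(contrast, center_c50, center_n)
    surround = naka_rushton(contrast, surround_c50, surround_n) if size == "large" else 0.0
    return max(center - k * surround, 0.0)

for c in (0.05, 0.1, 0.2, 0.4, 0.8):
    r_small = mt_response_for_size(c, "small")
    r_large = mt_response_for_size(c, "large")
    si = 1.0 - r_large / r_small if r_small > 0 else 0.0
    print(f"contrast {c:.2f}: suppression index = {si:.2f}")
```

With these illustrative parameters the suppression index rises with contrast, peaks around 20% contrast, and declines again at high contrast, i.e. the non-monotonic pattern described above.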


Archive | 2009

Temporal Dynamics of Motion Integration

Richard T. Born; James M. G. Tsui; Christopher C. Pack

In order to correctly determine the velocity of moving objects, the brain must integrate information derived from a large number of local detectors. The geometry of objects, the presence of occluding surfaces and the restricted receptive fields of early motion detectors conspire to render many of these measurements unreliable. One possible solution to this problem, often referred to as the “aperture problem,” involves differential weighting of local cues according to their fidelity: measurements made near two-dimensional object features called “terminators” are selectively integrated, whereas one-dimensional motion signals emanating from object contours are given less weight. A large number of experiments have assessed the integration of these different kinds of motion cues using perceptual reports, eye movements and neuronal activity. All of the results show striking qualitative similarities in the temporal sequence of integration: the earliest responses reveal a non-selective integration which becomes progressively selective over a period of time. In this chapter we propose a simple mechanistic model based on end-stopped, direction-selective neurons in V1 of the macaque, and use it to account for the dynamics observed in perception, eye movements, and neural responses in MT.
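The dynamics described here, early non-selective integration that becomes progressively selective, can be sketched as a time-varying weighting of ambiguous contour (1-D) signals against unambiguous terminator (2-D) signals. The time constant and cue geometry below are assumptions chosen for illustration, not values from the chapter.

```python
import numpy as np

def integrated_direction(t_ms, true_dir_deg, contour_normal_deg, tau=60.0):
    """Toy account of the temporal dynamics of motion integration: early
    estimates are dominated by ambiguous contour signals pointing normal to
    the edge; terminator signals are weighted in over time with an assumed
    time constant tau."""
    w_terminator = 1.0 - np.exp(-t_ms / tau)        # weight of unambiguous 2-D cues
    dirs = np.deg2rad([contour_normal_deg, true_dir_deg])
    weights = np.array([1.0 - w_terminator, w_terminator])
    x = np.sum(weights * np.cos(dirs))              # combine cues as weighted unit vectors
    y = np.sum(weights * np.sin(dirs))
    return np.rad2deg(np.arctan2(y, x)) % 360

# A bar tilted 45 deg relative to its motion: the contour-normal cue points at
# 45 deg while the true direction is 0 deg. The estimate starts biased toward
# the contour normal and converges to the true direction over time.
for t in (20, 50, 100, 200):
    est = integrated_direction(t, true_dir_deg=0, contour_normal_deg=45)
    print(f"t = {t:3d} ms: estimated direction = {est:.1f} deg")
```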


Journal of Vision | 2010

Effects of onset-transients on the perception of visual motion

Jan Churan; Farhan A. Khawaja; James M. G. Tsui; Alby Richard; Christopher C. Pack


Journal of Vision | 2010

Neuronal and psychophysical responses to brief motion stimuli

Jan Churan; Farhan A. Khawaja; James M. G. Tsui; Christopher C. Pack


Archive | 2015

Temporal Dynamics of Direction Tuning in Motion-Sensitive Macaque Area MT

A. van Wezel; Bart G. Borghuis; Roger J. E. Bours; M.J.M. Lankheet; Davis M. Glasser; James M. G. Tsui; Christopher C. Pack; Duje Tadin; Nicholas S. C. Price; Danielle L. Prescott; Romesh D. Kumbhani; Yasmine El-Shamayleh; J. Anthony Movshon


Archive | 2015

Behavioral Time Course of Microstimulation in

Erik P. Cook; Davis M. Glasser; James M. G. Tsui; Christopher C. Pack; Duje Tadin

Collaboration


Dive into James M. G. Tsui's collaborations.

Top Co-Authors

Christopher C. Pack
Montreal Neurological Institute and Hospital

Farhan A. Khawaja
Montreal Neurological Institute and Hospital

Duje Tadin
University of Rochester

Jan Churan
Montreal Neurological Institute and Hospital

Alby Richard
Montreal Neurological Institute and Hospital

Adam Kohn
Albert Einstein College of Medicine