
Publications


Featured research published by Geoffrey M. Ghose.


Visual Neuroscience | 1999

Visual response latencies of magnocellular and parvocellular LGN neurons in macaque monkeys.

John H. R. Maunsell; Geoffrey M. Ghose; John A. Assad; Carrie J. McAdams; C. E. Boudreau; Brett D. Noerager

Signals relayed through the magnocellular layers of the LGN travel on axons with faster conduction speeds than those relayed through the parvocellular layers. As a result, magnocellular signals might reach cerebral cortex appreciably before parvocellular signals. The relative speed of these two channels cannot be accurately predicted based solely on axon conduction speeds, however. Other factors, such as different degrees of convergence in the magnocellular and parvocellular channels and the retinal circuits that feed them, can affect the time it takes for magnocellular and parvocellular signals to activate cortical neurons. We have investigated the relative timing of visual responses mediated by the magnocellular and parvocellular channels. We recorded individually from 78 magnocellular and 80 parvocellular neurons in the LGN of two anesthetized monkeys. Visual response latencies were measured for small spots of light of various intensities. Over a wide range of stimulus intensities the fastest magnocellular response latencies preceded the fastest parvocellular response latencies by about 10 ms. Because parvocellular neurons are far more numerous than magnocellular neurons, convergence in cortex could reduce the magnocellular advantage by allowing parvocellular signals to generate detectable responses sooner than expected based on the responses of individual parvocellular neurons. An analysis based on a simple model using neurophysiological data collected from the LGN shows that convergence in cortex could eliminate or reverse the magnocellular advantage. This observation calls into question inferences that have been made about ordinal relationships of neurons based on timing of responses.
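
To make the convergence argument concrete, here is a minimal numerical sketch (in Python, with purely illustrative latency distributions and convergence ratios rather than the paper's measured values) of how pooling across many slower parvocellular afferents could offset the roughly 10 ms advantage of the faster but less numerous magnocellular afferents.

```python
# Hypothetical sketch: pooled detection latency when a downstream cell needs a
# few near-coincident input spikes. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def pooled_latency(mean_ms, sd_ms, n_inputs, n_needed, n_trials=10_000):
    """Average time at which the n_needed-th fastest of n_inputs afferents responds."""
    latencies = rng.normal(mean_ms, sd_ms, size=(n_trials, n_inputs))
    return np.sort(latencies, axis=1)[:, n_needed - 1].mean()

# Assumed single-cell latencies: magnocellular ~30 ms, parvocellular ~40 ms,
# but with roughly tenfold more parvocellular afferents available for pooling.
magno = pooled_latency(mean_ms=30.0, sd_ms=5.0, n_inputs=5, n_needed=3)
parvo = pooled_latency(mean_ms=40.0, sd_ms=5.0, n_inputs=50, n_needed=3)

print(f"pooled magnocellular latency: {magno:.1f} ms")
print(f"pooled parvocellular latency: {parvo:.1f} ms")
# With enough convergence, the pooled parvocellular signal reaches criterion at
# roughly the same time as the magnocellular signal, shrinking or erasing the
# single-cell latency advantage.
```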


Nature | 2002

Attentional modulation in visual cortex depends on task timing

Geoffrey M. Ghose; John H. R. Maunsell

Paying attention to a stimulus selectively increases the ability to process it. For example, when subjects attend to a specific region of a visual scene, their sensitivity to changes at that location increases. A large number of studies describe the behavioural consequences and neurophysiological correlates of attending to spatial locations. There has, in contrast, been little study of the allocation of attention over time. Because subjects can anticipate predictable events with great temporal precision, it seems probable that they might dynamically shift their attention when performing a familiar perceptual task whose constraints changed over time. We trained monkeys to respond to a stimulus change where the probability of occurrence changed over time. Recording from area V4 of the visual cortex in these animals, we found that the modulation of neuronal responses changed according to the probability of the change occurring at that instant. Thus, we show that the attentional modulation of sensory neurons reflects a subject's anticipation of the timing of behaviourally relevant events.
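
The time-varying event probability referred to here is naturally expressed as a hazard rate, the probability that the change occurs at time t given that it has not occurred yet. A minimal sketch (with an arbitrary illustrative change-time distribution, not the schedule used in the study):

```python
# Hazard rate h(t) = p(t) / (1 - P(t)) for a hypothetical change-time density p(t).
import numpy as np

t = np.linspace(0.0, 4.0, 401)                   # time within a trial (s)
p = np.exp(-0.5 * ((t - 2.0) / 0.6) ** 2)        # illustrative change-time density
dt = t[1] - t[0]
p /= p.sum() * dt                                # normalize to a probability density

cdf = np.cumsum(p) * dt
hazard = p / np.clip(1.0 - cdf, 1e-9, None)

print("hazard at t = 1.0 s:", round(float(np.interp(1.0, t, hazard)), 3))
print("hazard at t = 2.5 s:", round(float(np.interp(2.5, t, hazard)), 3))
# The finding described above is that attentional modulation of V4 responses
# tracks a quantity like this, rather than staying constant through the trial.
```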


Neuron | 1999

Specialized Representations in Visual Cortex: A Role for Binding?

Geoffrey M. Ghose; John H. R. Maunsell

Seeing is deceptively simple. We perceive objects, symbols, movements, and other aspects of the visual scene without effort or awareness of the mechanisms that process visual information. But our continuous and seamless visual perceptions depend on the activity of billions of individual neurons. The first step in seeing an object is the generation of a pattern of activity that is distributed across hundreds, if not millions, of photoreceptors. These in turn activate many other neurons in more central structures.

How the activity of widely distributed neurons can lead to unitary percepts has been a key question in neuroscience dating to the origins of the modern concept of neurons. The apparent continuity of perception was one of the major philosophical arguments against Cajal's neuron doctrine. How could discrete anatomical units be responsible? It has been an enduring subject of inquiry since then. For example, Köhler and Held (1949) suggested that unitary visual perception depends on currents that flow through the cerebral cortex as if it were a volume conductor, a concept which led to experiments that tested perceptual capability after embedding wires or insulators into cortex to disrupt the hypothesized …

… of neurons that represent elements of a given object, and that this correlated firing labels the neuronal activity associated with one object. Different frequencies or phases of modulation could be used to simultaneously label different objects. With this approach, neurons distributed across visual cortex, or across different sensory and motor systems (Engel et al., 1997), could be dynamically and selectively bound whenever their activity was associated with a single object.

The temporal correlation hypothesis is based on many neurophysiological studies that have shown that the spike rate of cortical neurons sometimes oscillates at frequencies between 30 and 70 Hz, and that neurons driven by a single stimulus sometimes oscillate in synchrony (reviewed by Engel et al., 1992b, 1997; Singer and Gray, 1995). This synchronization has been observed between neurons in different visual areas (Eckhorn et al., 1988; Engel et al., 1991b; Roelfsema et al., 1997), between sites in different cerebral hemispheres (Engel et al., 1991a; Nowak et al., 1995b), and between sensory and motor regions (Bressler et al., 1993; Murthy and Fetz, 1996a, 1996b).

The scope of the temporal correlation hypothesis is limited. It does not address how a particular group of neurons is segmented from other active cells, or how synchrony is achieved. Nor does it attempt to explain …
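
As a hedged illustration of the basic measurement behind the temporal correlation hypothesis discussed in this piece (not an analysis from the review itself), the sketch below generates two spike trains that share a common ~40 Hz rate modulation and shows the oscillatory structure in their cross-correlogram:

```python
# Two Bernoulli-sampled spike trains driven by the same 40 Hz modulated rate;
# their cross-correlogram shows a central peak with side lobes at ~25 ms
# (one oscillation period), the signature of synchronous oscillation.
import numpy as np

rng = np.random.default_rng(1)
dt, T, f = 0.001, 50.0, 40.0                           # 1 ms bins, 50 s, 40 Hz
t = np.arange(0.0, T, dt)
rate = 30.0 * (1.0 + 0.8 * np.sin(2 * np.pi * f * t))  # shared oscillatory rate (sp/s)

a = rng.random(t.size) < rate * dt                     # spike train of cell A
b = rng.random(t.size) < rate * dt                     # spike train of cell B
n = t.size

def ccg(lag_ms):
    """Coincidence count between a(t) and b(t + lag)."""
    k = lag_ms
    return int(np.sum(a[max(0, -k):n - max(0, k)] & b[max(0, k):n - max(0, -k)]))

print("coincidences at lag  0 ms:", ccg(0))    # peak
print("coincidences at lag 12 ms:", ccg(12))   # trough (half a 40 Hz cycle)
print("coincidences at lag 25 ms:", ccg(25))   # side lobe (one full cycle)
```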


The Journal of Neuroscience | 2008

Spatial Summation Can Explain the Attentional Modulation of Neuronal Responses to Multiple Stimuli in Area V4

Geoffrey M. Ghose; John H. R. Maunsell

Although many studies have shown that the activity of individual neurons in a variety of visual areas is modulated by attention, a fundamental question remains unresolved: can attention alter the visual representations of individual neurons? One set of studies, primarily relying on the attentional modulations observed when a single stimulus is presented within the receptive field of a neuron, suggests that neuronal selectivities, such as orientation or direction tuning, are not fundamentally altered by attention (Salinas and Abbott, 1997; McAdams and Maunsell, 1999; Treue and Martinez Trujillo, 1999). Another set of studies, relying on modulations observed when multiple stimuli are presented within a receptive field, suggests that attention can alter the weighting of sensory inputs (Moran and Desimone, 1985; Luck et al., 1997; Reynolds et al., 1999; Chelazzi et al., 2001). In these studies, when preferred and nonpreferred stimuli are simultaneously presented, responses are much stronger when attention is directed to the preferred stimulus than when it is directed to the nonpreferred stimulus. In this study, we recorded neuronal responses from individual neurons in visual cortical area V4 to both single and paired stimuli with a variety of attentional allocations and stimulus combinations. For each neuron studied, we constructed a quantitative model of input summation and then tested various models of attention. In many neurons, we are able to explain neuronal responses across the entire range of stimuli and attentional allocations tested. Specifically, we are able to reconcile seemingly inconsistent observations of single and paired stimuli attentional modulation with a new model in which attention can facilitate or suppress specific inputs to a neuron but does not fundamentally alter the integration of these inputs.
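
The kind of model being described, fixed sensory summation and a fixed output nonlinearity with attention acting only as a multiplicative gain on individual inputs, can be sketched in a few lines. The weights, gains, and nonlinearity below are illustrative assumptions, not the fitted models from the paper.

```python
# Flexible input gain with fixed summation: attention scales individual inputs
# (facilitating the attended one, suppressing the unattended one) but does not
# change the summation weights or the output nonlinearity.
def response(stim_pref, stim_null, g_pref=1.0, g_null=1.0,
             w_pref=1.0, w_null=0.4, r=40.0):
    drive = g_pref * w_pref * r * stim_pref + g_null * w_null * r * stim_null
    return 100.0 * drive / (drive + 50.0)          # fixed saturating output stage

print("preferred stimulus alone:   %.1f" % response(1, 0))
print("pair, attend preferred:     %.1f" % response(1, 1, g_pref=1.5, g_null=0.7))
print("pair, attend nonpreferred:  %.1f" % response(1, 1, g_pref=0.7, g_null=1.5))
# The paired response is larger when attention is directed to the preferred
# stimulus than to the nonpreferred one, even though the summation rule itself
# never changes; this is the pattern the paper reconciles with the
# single-stimulus attentional modulations.
```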


Current Opinion in Neurobiology | 2004

Learning in mammalian sensory cortex.

Geoffrey M. Ghose

The improvement of perceptual capabilities with training can offer important insight into the physiological basis of learning in the cerebral cortex. The rapid time course and ease with which some perceptual capabilities can improve suggest that learning is an integral part of normal perception. Electrophysiological and neuroimaging studies of sensory systems in the cortex suggest that the changes underlying perceptual learning can occur in a variety of areas and are likely to involve multiple mechanisms. In particular, recent psychological and physiological studies suggest that perceptual learning might often involve the task-specific suppression of signals that interfere with performance.


The Journal of Neuroscience | 2007

Spatial Attention Does Not Strongly Modulate Neuronal Responses in Early Human Visual Cortex

Daniel Yoshor; Geoffrey M. Ghose; William H. Bosking; Ping Sun; John H. R. Maunsell

Attention can dramatically enhance behavioral performance based on a visual stimulus, but the degree to which attention modulates activity in early visual cortex is unclear. Whereas single-unit studies of spatial attention in monkeys have repeatedly revealed relatively modest attentional modulations in V1, human functional magnetic resonance imaging studies demonstrate a large attentional enhancement of the blood oxygen level-dependent (BOLD) signal in V1. To explore this discrepancy, we used intracranial electrodes to directly measure the effect of spatial attention on the responses of neurons near the human occipital pole. We found that spatial attention does not robustly modulate stimulus-driven local field potentials in early human visual cortex, but instead produces modest modulations that are consistent with those seen in monkey neurophysiology experiments. This finding suggests that the neuronal activity that underlies visual attention in humans is similar to that found in other primates and that behavioral state may alter the linear relationship between neuronal activity and BOLD.


Journal of Neurophysiology | 2009

Attentional Modulation of Visual Responses by Flexible Input Gain

Geoffrey M. Ghose

Although it is clear that sensory responses in the cortex can be strongly modulated by stimuli outside of classical receptive fields as well as by extraretinal signals such as attention and anticipation, the exact rules governing the neuronal integration of sensory and behavioral signals remain unclear. For example, most experiments studying sensory interactions have not explored attention, while most studies of attention have relied on the responses to relatively limited sets of stimuli. However, a recent study of V4 responses, in which location, orientation, and spatial attention were systematically varied, suggests that attention can both facilitate and suppress specific sensory inputs to a neuron according to behavioral relevance. To explore the implications of such input gain, we modeled the effects of a center-surround organization of attentional modulation using existing receptive field models of sensory integration. The model is consistent with behavioral measurements of a suppressive effect that surrounds the facilitatory locus of spatial attention. When this center-surround modulation is incorporated into realistic models of sensory integration, it is able to explain seemingly disparate observations of attentional effects in the neurophysiological literature, including spatial shifts in receptive field position and the preferential modulation of low contrast stimuli. The model is also consistent with recent formulations of attention to features in which gain is variably applied among cells with different receptive field properties. Consistent with functional imaging results, the model predicts that spatial attention effects will vary between different visual areas and suggests that attention may act through a common mechanism of selective and flexible gain throughout the visual system.
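
The center-surround attentional gain described here can be sketched as a facilitatory Gaussian at the attended location plus a broader suppressive Gaussian, applied multiplicatively to an otherwise fixed receptive field. All spatial scales and gain amplitudes below are illustrative assumptions, not the model's fitted parameters; the sketch simply shows one predicted consequence, an apparent shift of the receptive field toward the attended location.

```python
# Center-surround attentional input gain applied to a fixed sensory receptive field.
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)                  # visual space (deg)

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

rf = gaussian(x, 0.0, 2.0)                           # fixed sensory receptive field
x_att = 3.0                                          # attended location (deg)
gain = 1.0 + 1.0 * gaussian(x, x_att, 1.5) - 0.4 * gaussian(x, x_att, 4.0)
rf_att = rf * gain                                   # input gain modulates the RF

def centroid(w):
    return float(np.sum(x * w) / np.sum(w))

print(f"RF centroid, attention elsewhere: {centroid(rf):+.2f} deg")
print(f"RF centroid, attention at +3 deg: {centroid(rf_att):+.2f} deg")
# The measured receptive field appears to shift toward the attended location even
# though the underlying sensory weights are unchanged.
```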


Visual Neuroscience | 1997

Intracortical connections are not required for oscillatory activity in the visual cortex

Geoffrey M. Ghose; Ralph D. Freeman

Synchronized oscillatory discharge in the visual cortex has been proposed to underlie the linking of retinotopically disparate features into perceptually coherent objects. These proposals have largely relied on the premise that the oscillations arise from intracortical circuitry. However, strong oscillations within both the retina and the lateral geniculate nucleus (LGN) have been reported recently. To evaluate the possibility that cortical oscillations arise from peripheral pathways, we have developed two plausible models of single cell oscillatory discharge that specifically exclude intracortical networks. In the first model, cortical oscillatory discharge near 50 Hz in frequency arises from the integration of signals from strongly oscillatory cells within the LGN. The model also predicts the incidence of 50-Hz oscillatory cells within the cortex. Oscillatory discharge around 30 Hz is explained in a second model by the presence of intrinsically oscillatory cells within cortical layer 5. Both models generate spike trains whose power spectra and mean firing rates are in close agreement with experimental observations of simple and complex cells. Considered together, the two models can largely account for the nature and incidence of oscillatory discharge in the cat's visual cortex. The validity of these models is consistent with the possibility that oscillations are generated independently of intracortical interactions. Because these models rely on intrinsic stimulus-independent oscillators within the retina and cortex, the results further suggest that oscillatory activity within the cortex is not necessarily associated with the processing of high-order visual information.
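
A minimal sketch of the premise behind the first model (with made-up rates and thresholds, not the parameters fitted in the paper): a cortical cell that simply integrates spikes from a handful of LGN afferents whose firing is modulated near 50 Hz will itself show ~50 Hz structure in its output, with no intracortical circuitry involved.

```python
# Feedforward-only origin of cortical oscillations: threshold-sum of oscillatory
# LGN inputs, simulated as inhomogeneous Bernoulli spike trains.
import numpy as np

rng = np.random.default_rng(2)
dt, T, f_osc = 0.001, 20.0, 50.0                     # 1 ms bins, 20 s, 50 Hz
t = np.arange(0.0, T, dt)

n_lgn = 8
lgn_rate = 40.0 * (1.0 + 0.6 * np.sin(2 * np.pi * f_osc * t))   # oscillatory LGN rate (sp/s)
lgn_spikes = rng.random((n_lgn, t.size)) < lgn_rate * dt

drive = lgn_spikes.sum(axis=0)                       # summed feedforward input per bin
cortical = (drive >= 3).astype(float)                # crude coincidence-threshold "cortical cell"

spec = np.abs(np.fft.rfft(cortical - cortical.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, dt)
band = (freqs > 5) & (freqs < 150)
print(f"spectral peak of the cortical spike train: {freqs[band][np.argmax(spec[band])]:.1f} Hz")
# The output spectrum peaks near 50 Hz, inherited entirely from the LGN inputs.
```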


Vision Research | 2010

Attention directed by expectations enhances receptive fields in cortical area MT.

Geoffrey M. Ghose; David W. Bearl

Expectations, especially those formed on the basis of extensive training, can substantially enhance visual performance. However, it is not clear that the physiological mechanisms underlying this enhancement are identical to those examined by experiments in which attention is directed by explicit instructions rather than strong expectations. To study the changes in visual representations associated with strong expectations, we trained animals to detect a brief motion pulse that was embedded in noise. Because the nature of the pulse and the statistics of its appearance were well known to the animals, they formed strong expectations which determined their behavioral performance. We used white-noise methods to infer the receptive field structure of single neurons in area MT while they were performing this task. Incorporating non-linearities, we compared receptive fields during periods of time when the animals were expecting the motion pulse with periods of time when they were not. We found receptive field changes consistent with an increased reliability in signaling pulse occurrence. Moreover, these changes were not consistent with a simple gain modulation. The results suggest that strong expectations can create very specific changes in the visual representations at a cellular level to enhance performance.
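
The white-noise approach mentioned here can be illustrated with a spike-triggered average, the simplest linear version of such receptive field estimation. The simulated cell, its temporal kernel, and all parameters below are assumptions for illustration, not data or analysis code from the study.

```python
# Spike-triggered averaging: with a Gaussian white-noise stimulus and a
# linear-nonlinear-Poisson cell, the average stimulus history preceding spikes
# recovers the cell's linear temporal kernel (up to scale).
import numpy as np

rng = np.random.default_rng(3)
dt, n = 0.01, 60_000                                  # 10 ms frames, 10 minutes
stim = rng.standard_normal(n)                         # white-noise stimulus

lags = np.arange(15)
kernel = np.exp(-lags / 5.0) - 0.5 * np.exp(-lags / 10.0)   # assumed biphasic kernel

drive = np.convolve(stim, kernel, mode="full")[:n]    # linear stage
rate = 10.0 * np.log1p(np.exp(drive))                 # static nonlinearity (sp/s)
spikes = rng.poisson(rate * dt)                       # spike counts per frame

# Average stimulus value k frames before each spike.
sta = np.array([np.sum(spikes[14:] * stim[14 - k : n - k]) for k in lags]) / spikes[14:].sum()

print("true kernel   (normalized):", np.round(kernel / np.abs(kernel).max(), 2))
print("recovered STA (normalized):", np.round(sta / np.abs(sta).max(), 2))
```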


Journal of Vision | 2006

Strategies optimize the detection of motion transients

Geoffrey M. Ghose

Strategies are implicitly formed when a task is consistent and can be used to improve performance. To investigate how strategies can alter perceptual performance, I trained animals in a reaction time (RT) detection task in which the probability of a fixed duration motion pulse appearing varied over time in a consistent manner. Consistent with previous studies suggesting the implicit representation of task timing, I found that RTs were inversely related to the probability of the pulse appearing and decreased with training. I then inferred the sensory integration underlying responses using behavioral reverse correlation analysis. This analysis revealed that training and anticipation optimized detection by improving the correlation between sensory integration and the spatiotemporal extent of the motion pulse. Moreover, I found that these improvements in sensory integration could largely explain observed changes in the distribution of RT with training and anticipation. These results suggest that training can increase detection performance by optimizing sensory integration according to implicitly formed representations of the likelihood and nature of the stimulus.
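
The behavioral reverse correlation referred to here can be sketched as follows: average the stimulus noise on trials that produced a response and compare it with the average on trials that did not, which reveals the temporal window the observer actually integrated. The simulated observer and all numbers are illustrative assumptions, not the task or analysis from the paper.

```python
# Behavioral reverse correlation on a simulated observer that integrates a fixed
# window of noisy motion-energy frames and responds when the integral is large.
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_frames = 4000, 60
noise = rng.standard_normal((n_trials, n_frames))     # per-frame motion-energy noise

template = np.zeros(n_frames)                         # the observer's true integration
template[30:36] = 1.0                                 # window: frames 30-35 (assumed)

decision = noise @ template + 0.5 * rng.standard_normal(n_trials)   # internal noise
said_yes = decision > 1.5                                           # response criterion

# Behavioral kernel: mean noise on "yes" trials minus mean noise on "no" trials.
kernel = noise[said_yes].mean(axis=0) - noise[~said_yes].mean(axis=0)

print("frames with the largest kernel weights:", np.sort(np.argsort(kernel)[-6:]))
# These cluster around frames 30-35, recovering the integration window from the
# observer's choices alone, without access to any internal signal.
```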

Collaboration


Dive into Geoffrey M. Ghose's collaborations.

Top Co-Authors

Daniel Yoshor (Baylor College of Medicine)

William H. Bosking (University of Texas at Austin)

Brett D. Noerager (Baylor College of Medicine)