
Publications


Featured research published by Jack L. Gallant.


Nature | 2008

Identifying natural images from human brain activity

Kendrick Kay; Thomas Naselaris; Ryan J. Prenger; Jack L. Gallant

A challenging goal in neuroscience is to be able to read out, or decode, mental content from brain activity. Recent functional magnetic resonance imaging (fMRI) studies have decoded orientation, position and object category from activity in visual cortex. However, these studies typically used relatively simple stimuli (for example, gratings) or images drawn from fixed categories (for example, faces, houses), and decoding was based on previous measurements of brain activity evoked by those same stimuli or categories. To overcome these limitations, here we develop a decoding method based on quantitative receptive-field models that characterize the relationship between visual stimuli and fMRI activity in early visual areas. These models describe the tuning of individual voxels for space, orientation and spatial frequency, and are estimated directly from responses evoked by natural images. We show that these receptive-field models make it possible to identify, from a large set of completely novel natural images, which specific image was seen by an observer. Identification is not a mere consequence of the retinotopic organization of visual areas; simpler receptive-field models that describe only spatial tuning yield much poorer identification performance. Our results suggest that it may soon be possible to reconstruct a picture of a person’s visual experience from measurements of brain activity alone.
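The identification scheme the abstract describes can be illustrated compactly. The sketch below is a minimal stand-in, not the authors' pipeline: random synthetic features replace the paper's Gabor-wavelet image features, and ridge regression replaces their fitting procedure. The structure is the same, though: fit a linear model per voxel, predict the voxel pattern for every candidate image, and identify the image whose prediction best correlates with the measured pattern.

```python
# Illustrative sketch of model-based image identification (assumptions:
# random synthetic "image features" and ridge regression stand in for the
# paper's Gabor-wavelet features and fitting procedure).
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_feat, n_vox = 500, 100, 50, 200

# Ground-truth linear receptive-field weights per voxel (unknown in practice).
W_true = rng.normal(size=(n_feat, n_vox))

X_train = rng.normal(size=(n_train, n_feat))          # training image features
Y_train = X_train @ W_true + rng.normal(scale=2.0, size=(n_train, n_vox))

# Fit one ridge-regularized linear model per voxel (shared closed form).
lam = 10.0
W_hat = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_feat),
                        X_train.T @ Y_train)

# Identification: which of the novel images evoked each measured response?
X_test = rng.normal(size=(n_test, n_feat))            # novel candidate images
Y_test = X_test @ W_true + rng.normal(scale=2.0, size=(n_test, n_vox))
Y_pred = X_test @ W_hat                               # predicted voxel patterns

# Correlate each measured pattern with every predicted pattern; pick the best.
Yp = (Y_pred - Y_pred.mean(1, keepdims=True)) / Y_pred.std(1, keepdims=True)
Ym = (Y_test - Y_test.mean(1, keepdims=True)) / Y_test.std(1, keepdims=True)
corr = Ym @ Yp.T / n_vox                              # measured x predicted
accuracy = np.mean(corr.argmax(axis=1) == np.arange(n_test))
print(f"identification accuracy: {accuracy:.2f} (chance = {1/n_test:.2f})")
```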


The Journal of Neuroscience | 2005

Do We Know What the Early Visual System Does?

Matteo Carandini; Jonathan B. Demb; Valerio Mante; David J. Tolhurst; Yang Dan; Bruno A. Olshausen; Jack L. Gallant; Nicole C. Rust

We can claim that we know what the visual system does once we can predict neural responses to arbitrary stimuli, including those seen in nature. In the early visual system, models based on one or more linear receptive fields hold promise to achieve this goal as long as the models include nonlinear mechanisms that control responsiveness, based on stimulus context and history, and take into account the nonlinearity of spike generation. These linear and nonlinear mechanisms might be the only essential determinants of the response, or alternatively, there may be additional fundamental determinants yet to be identified. Research is progressing with the goals of defining a single “standard model” for each stage of the visual pathway and testing the predictive power of these models on the responses to movies of natural scenes. These predictive models represent, at a given stage of the visual pathway, a compact description of visual computation. They would be an invaluable guide for understanding the underlying biophysical and anatomical mechanisms and relating neural responses to visual perception.
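A linear-nonlinear (LN) cascade of the kind the abstract refers to can be sketched in a few lines. Everything concrete here (the Gabor receptive field, the squaring nonlinearity, the rate scaling) is an illustrative assumption, not a fitted "standard model".

```python
# Minimal linear-nonlinear (LN) cascade: a linear receptive field, a static
# output nonlinearity, and Poisson spike generation. Parameters are
# illustrative choices, not fitted values.
import numpy as np

rng = np.random.default_rng(1)

def gabor(size=16, freq=0.2, theta=0.0):
    """An oriented Gabor patch as a stand-in linear receptive field."""
    x = np.arange(size) - size / 2
    X, Y = np.meshgrid(x, x)
    Xr = X * np.cos(theta) + Y * np.sin(theta)
    envelope = np.exp(-(X**2 + Y**2) / (2 * (size / 4) ** 2))
    return envelope * np.cos(2 * np.pi * freq * Xr)

rf = gabor()
stimuli = rng.normal(size=(1000, 16, 16))       # white-noise stimulus frames

# Linear stage: project each frame onto the receptive field.
drive = stimuli.reshape(1000, -1) @ rf.ravel()

# Nonlinear stage: halfwave rectification raised to a power (spike-rate shape).
rate = np.maximum(drive, 0.0) ** 2

# Spike generation: Poisson counts per frame at the given rate.
spikes = rng.poisson(rate * 0.1)
print("mean spike count per frame:", spikes.mean())
```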


Current Biology | 2011

Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies

Shinji Nishimoto; An T. Vu; Thomas Naselaris; Yuval Benjamini; Bin Yu; Jack L. Gallant

Quantitative modeling of human brain activity can provide crucial insights about cortical representations [1, 2] and can form the basis for brain decoding devices [3-5]. Recent functional magnetic resonance imaging (fMRI) studies have modeled brain activity elicited by static visual patterns and have reconstructed these patterns from brain activity [6-8]. However, blood oxygen level-dependent (BOLD) signals measured via fMRI are very slow [9], so it has been difficult to model brain activity elicited by dynamic stimuli such as natural movies. Here we present a new motion-energy [10, 11] encoding model that largely overcomes this limitation. The model describes fast visual information and slow hemodynamics by separate components. We recorded BOLD signals in occipitotemporal visual cortex of human subjects who watched natural movies and fit the model separately to individual voxels. Visualization of the fit models reveals how early visual areas represent the information in movies. To demonstrate the power of our approach, we also constructed a Bayesian decoder [8] by combining estimated encoding models with a sampled natural movie prior. The decoder provides remarkable reconstructions of the viewed movies. These results demonstrate that dynamic brain activity measured under naturalistic conditions can be decoded using current fMRI technology.
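The separation of fast visual information from slow hemodynamics can be illustrated as follows. This is a toy sketch: a single made-up motion-energy channel stands in for the paper's full motion-energy filter bank, and a conventional double-gamma HRF stands in for the fitted hemodynamic component.

```python
# Sketch of the two-component idea: a fast stimulus-driven signal convolved
# with a slow hemodynamic response to predict BOLD. The double-gamma HRF
# parameters are conventional illustrative values, not the paper's fits.
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(2)
tr, n_vols = 1.0, 300                      # repetition time (s), volumes

# Canonical-style double-gamma HRF sampled at the TR.
t = np.arange(0, 32, tr)
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)
hrf /= hrf.sum()

# Fast component: motion energy fluctuates frame to frame.
motion_energy = np.abs(rng.normal(size=n_vols))

# Slow component: hemodynamics smear the fast signal over tens of seconds.
bold_pred = np.convolve(motion_energy, hrf)[:n_vols]

# A measured voxel = scaled prediction + noise; fitting recovers the scale.
bold_meas = 1.8 * bold_pred + rng.normal(scale=0.5, size=n_vols)
beta = (bold_pred @ bold_meas) / (bold_pred @ bold_pred)
print(f"recovered response gain: {beta:.2f} (true 1.8)")
```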


Neuron | 2012

A continuous semantic space describes the representation of thousands of object and action categories across the human brain

Alexander G. Huth; Shinji Nishimoto; An T. Vu; Jack L. Gallant

Humans can see and name thousands of distinct object and action categories, so it is unlikely that each category is represented in a distinct brain area. A more efficient scheme would be to represent categories as locations in a continuous semantic space mapped smoothly across the cortical surface. To search for such a space, we used fMRI to measure human brain activity evoked by natural movies. We then used voxelwise models to examine the cortical representation of 1,705 object and action categories. The first few dimensions of the underlying semantic space were recovered from the fit models by principal components analysis. Projection of the recovered semantic space onto cortical flat maps shows that semantic selectivity is organized into smooth gradients that cover much of visual and nonvisual cortex. Furthermore, both the recovered semantic space and the cortical organization of the space are shared across different individuals.
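The recovery step, PCA applied to voxelwise category weights, can be sketched on synthetic data. The planted two-dimensional structure and the category and voxel counts below are illustrative assumptions, not the paper's 1,705-category feature space.

```python
# Sketch of recovering a low-dimensional "semantic space" from voxelwise
# category weights by PCA. The planted structure is invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
n_cats, n_vox, n_dims = 100, 500, 2

# Plant a low-dimensional structure: each voxel's tuning across categories
# is a mixture of a few shared semantic dimensions plus noise.
semantic_axes = rng.normal(size=(n_dims, n_cats))     # shared dimensions
voxel_loadings = rng.normal(size=(n_vox, n_dims))
W = voxel_loadings @ semantic_axes + 0.3 * rng.normal(size=(n_vox, n_cats))

# PCA on the voxels-by-categories weight matrix via the SVD.
W_centered = W - W.mean(axis=0)
_, s, _ = np.linalg.svd(W_centered, full_matrices=False)
var_explained = s**2 / np.sum(s**2)
print("variance explained by first 4 PCs:", np.round(var_explained[:4], 2))
```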


The Journal of Neuroscience | 2004

Natural stimulus statistics alter the receptive field structure of V1 neurons

Stephen V. David; William E. Vinje; Jack L. Gallant

Studies of the primary visual cortex (V1) have produced models that account for neuronal responses to synthetic stimuli such as sinusoidal gratings. Little is known about how these models generalize to activity during natural vision. We recorded neural responses in area V1 of awake macaques to a stimulus with natural spatiotemporal statistics and to a dynamic grating sequence stimulus. We fit nonlinear receptive field models using each of these data sets and compared how well they predicted time-varying responses to a novel natural visual stimulus. On average, the model fit using the natural stimulus predicted natural visual responses more than twice as accurately as the model fit to the synthetic stimulus. The natural vision model produced better predictions in >75% of the neurons studied. This large difference in predictive power suggests that natural spatiotemporal stimulus statistics activate nonlinear response properties in a different manner than the grating stimulus. To characterize this modulation, we compared the temporal and spatial response properties of the model fits. During natural stimulation, temporal responses often showed a stronger late inhibitory component, indicating an effect of nonlinear temporal summation during natural vision. In addition, spatial tuning underwent complex shifts, primarily in the inhibitory, rather than excitatory, elements of the response profile. These differences in late and spatially tuned inhibition accounted fully for the difference in predictive power between the two models. Both the spatial and temporal statistics of the natural stimulus contributed to the modulatory effects.
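The model-comparison logic can be sketched with a simulated neuron whose effective receptive field differs between stimulus regimes (an assumed stand-in for the nonlinear context effects the paper reports): fit the same linear model to each regime's data, then score both fits on held-out natural responses.

```python
# Sketch of cross-regime model comparison: fit to natural vs. synthetic
# responses, then test both fits on novel natural responses. The simulated
# cell and the size of its tuning shift are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_feat = 30

def fit_ridge(X, y, lam=1.0):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

w_natural = rng.normal(size=n_feat)                     # RF during natural vision
w_grating = w_natural + 0.8 * rng.normal(size=n_feat)   # shifted RF under gratings

X_nat = rng.laplace(size=(400, n_feat))      # heavy-tailed natural statistics
X_grt = rng.normal(size=(400, n_feat))       # synthetic stimulus ensemble
y_nat = X_nat @ w_natural + rng.normal(size=400)
y_grt = X_grt @ w_grating + rng.normal(size=400)

w_from_nat, w_from_grt = fit_ridge(X_nat, y_nat), fit_ridge(X_grt, y_grt)

# Score both fits on a novel natural stimulus.
X_test = rng.laplace(size=(400, n_feat))
y_test = X_test @ w_natural + rng.normal(size=400)
for name, w in [("natural fit", w_from_nat), ("grating fit", w_from_grt)]:
    r = np.corrcoef(X_test @ w, y_test)[0, 1]
    print(f"{name}: prediction r = {r:.2f}")
```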


Nature | 2016

Natural speech reveals the semantic maps that tile human cerebral cortex

Alexander G. Huth; Wendy A. de Heer; Thomas L. Griffiths; Frédéric E. Theunissen; Jack L. Gallant

The meaning of language is represented in regions of the cerebral cortex collectively known as the ‘semantic system’. However, little of the semantic system has been mapped comprehensively, and the semantic selectivity of most regions is unknown. Here we systematically map semantic selectivity across the cortex using voxel-wise modelling of functional MRI (fMRI) data collected while subjects listened to hours of narrative stories. We show that the semantic system is organized into intricate patterns that seem to be consistent across individuals. We then use a novel generative model to create a detailed semantic atlas. Our results suggest that most areas within the semantic system represent information about specific semantic domains, or groups of related concepts, and our atlas shows which domains are represented in each area. This study demonstrates that data-driven methods—commonplace in studies of human neuroanatomy and functional connectivity—provide a powerful and efficient means for mapping functional representations in the brain.
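The voxel-wise modelling step can be sketched as follows, under loud assumptions: a small random "embedding" replaces the paper's word co-occurrence features, and a plain ridge fit replaces the full modelling pipeline with its hemodynamic handling.

```python
# Sketch of voxel-wise semantic modelling of listening data: represent each
# time point by features of the words heard, then fit a regularized linear
# model per voxel. The random 10-dimensional "embedding" is a placeholder.
import numpy as np

rng = np.random.default_rng(5)
n_trs, n_dims, n_vox = 600, 10, 50

vocab = [f"word{i}" for i in range(50)]                # hypothetical lexicon
embedding = {w: rng.normal(size=n_dims) for w in vocab}
words_per_tr = rng.choice(vocab, size=n_trs)
X = np.stack([embedding[w] for w in words_per_tr])     # TR-by-feature matrix

W_true = rng.normal(size=(n_dims, n_vox))              # voxel semantic tuning
Y = X @ W_true + rng.normal(scale=1.5, size=(n_trs, n_vox))

lam = 5.0
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_dims), X.T @ Y)
print("weight recovery correlation:",
      round(float(np.corrcoef(W_hat.ravel(), W_true.ravel())[0, 1]), 2))
```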


Neuron | 2003

Goal-related activity in V4 during free viewing visual search: Evidence for a ventral stream visual salience map

James A. Mazer; Jack L. Gallant

Natural exploration of complex visual scenes depends on saccadic eye movements toward important locations. Saccade targeting is thought to be mediated by a retinotopic map that represents the locations of salient features. In this report, we demonstrate that extrastriate ventral area V4 contains a retinotopic salience map that guides exploratory eye movements during a naturalistic free viewing visual search task. In more than half of recorded cells, visually driven activity is enhanced prior to saccades that move the fovea toward the location previously occupied by a neuron's spatial receptive field. This correlation suggests that bottom-up processing in V4 influences the oculomotor planning process. Half of the neurons also exhibit top-down modulation of visual responses that depends on search target identity but not visual stimulation. Convergence of bottom-up and top-down processing streams in area V4 results in an adaptive, dynamic map of salience that guides oculomotor planning during natural vision.
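The paper's conclusion, that bottom-up and top-down signals converge on a salience map that guides saccades, can be caricatured in a few lines. The maps, weights, and winner-take-all readout below are invented for illustration and are not a model fit to the V4 data.

```python
# Toy salience map: a bottom-up term (local feature contrast) plus a
# top-down term (similarity to the search target); the saccade goes to the
# peak. All values are invented for illustration.
import numpy as np

rng = np.random.default_rng(6)
features = rng.random((8, 8))                 # feature strength per location
target_similarity = rng.random((8, 8))        # match to the search target

bottom_up = np.abs(features - features.mean())   # contrast with the scene
salience = bottom_up + 0.5 * target_similarity   # convergence of both streams
next_saccade = np.unravel_index(salience.argmax(), salience.shape)
print("next saccade target (row, col):", next_saccade)
```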


Proceedings of the National Academy of Sciences of the United States of America | 2002

Spatial frequency and orientation tuning dynamics in area V1

James A. Mazer; William E. Vinje; Josh D. McDermott; Peter H. Schiller; Jack L. Gallant

Spatial frequency (SF) and orientation tuning are intrinsic properties of neurons in primary visual cortex (area V1). To investigate the neural mechanisms mediating selectivity in the awake animal, we measured the temporal dynamics of SF and orientation tuning. We adapted a high-speed reverse-correlation method previously used to characterize orientation tuning dynamics in anesthetized animals to estimate efficiently the complete spatiotemporal receptive fields in area V1 of behaving macaques. We found that SF and orientation tuning are largely separable over time in single neurons. However, spatiotemporal receptive fields also contain a small nonseparable component that reflects a significant difference in response latency for low and high SF stimuli. The observed relationship between stimulus SF and latency represents a dynamic shift in SF tuning, and suggests that single V1 neurons might receive convergent input from the magno- and parvocellular processing streams. Although previous studies with anesthetized animals suggested that orientation tuning could change dramatically over time, we find no substantial evidence of dynamic changes in orientation tuning.
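The reverse-correlation logic is easy to sketch: present a rapid random stimulus sequence, then average the stimuli that preceded each spike at a range of lags to recover a spatiotemporal receptive field. The simulated cell below (a fixed spatial profile acting at a two-frame lag) is an illustrative assumption, not the macaque data.

```python
# Sketch of reverse correlation via the spike-triggered average (STA).
import numpy as np

rng = np.random.default_rng(7)
n_frames, size, n_lags = 20000, 8, 5

stim = rng.choice([-1.0, 1.0], size=(n_frames, size, size))  # binary noise
rf = np.zeros((size, size))                                  # true spatial RF
rf[3:5, 2:6] = 1.0

# Simulated responses: the cell fires to the stimulus two frames back.
drive = np.tensordot(stim, rf, axes=([1, 2], [0, 1]))
rate = np.maximum(np.roll(drive, 2), 0.0)
spikes = rng.poisson(0.2 * rate)

# STA at each lag recovers the spatiotemporal receptive field.
sta = np.zeros((n_lags, size, size))
for lag in range(n_lags):
    weights = spikes[lag:]                       # spikes occurring after lag
    sta[lag] = np.tensordot(weights, stim[:n_frames - lag],
                            axes=(0, 0)) / weights.sum()
print("lag of peak RF energy:", int(np.argmax((sta**2).sum(axis=(1, 2)))))
```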


Neuron | 2000

A Human Extrastriate Area Functionally Homologous to Macaque V4

Jack L. Gallant; Rachel E. Shoup; James A. Mazer

Extrastriate area V4 is crucial for intermediate form vision and visual attention in nonhuman primates. Human neuroimaging suggests that an area in the lingual sulcus/fusiform gyrus may correspond to ventral V4 (V4v). We studied a human neurological patient, AR, with a putative V4v lesion. The lesion does not affect early visual processing (luminance, orientation, and motion perception). However, it does impair hue perception, intermediate form vision, and visual attention in the upper contralateral visual field. Form deficits occur during discrimination of illusory borders, Glass patterns, curvature, and non-Cartesian patterns. Attention deficits occur during discrimination of the relative positions of object parts, detection of low-salience targets, and orientation discrimination in the presence of distractors. This pattern of deficits is consistent with the known properties of area V4 in nonhuman primates, indicating that AR's lesion affects a cortical region functionally homologous to macaque V4.


Nature Neuroscience | 2013

Attention during natural vision warps semantic representation across the human brain

Tolga Çukur; Shinji Nishimoto; Alexander G. Huth; Jack L. Gallant

Little is known about how attention changes the cortical representation of sensory information in humans. On the basis of neurophysiological evidence, we hypothesized that attention causes tuning changes to expand the representation of attended stimuli at the cost of unattended stimuli. To investigate this issue, we used functional magnetic resonance imaging to measure how semantic representation changed during visual search for different object categories in natural movies. We found that many voxels across occipito-temporal and fronto-parietal cortex shifted their tuning toward the attended category. These tuning shifts expanded the representation of the attended category and of semantically related, but unattended, categories, and compressed the representation of categories that were semantically dissimilar to the target. Attentional warping of semantic representation occurred even when the attended category was not present in the movie; thus, the effect was not a target-detection artifact. These results suggest that attention dynamically alters visual representation to optimize processing of behaviorally relevant objects during natural vision.
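One way to make "tuning shift toward the attended category" concrete is to compare a voxel's fitted category-tuning vector across attention conditions and measure its similarity to the attended category's direction. The sketch below does this on simulated tuning vectors; the shift magnitude is an arbitrary assumption.

```python
# Sketch of quantifying an attentional tuning shift in a category space.
import numpy as np

rng = np.random.default_rng(8)
n_cats = 50
attended = np.zeros(n_cats)
attended[7] = 1.0                                  # one-hot attended category

baseline = rng.normal(size=n_cats)                 # tuning in condition A
shifted = baseline + 1.5 * attended                # tuning while attending B

def cos(a, b):
    """Cosine similarity between two tuning vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print("similarity to attended direction, before:", round(cos(baseline, attended), 2))
print("similarity to attended direction, after: ", round(cos(shifted, attended), 2))
```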

Collaboration


Dive into Jack L. Gallant's collaborations.

Top Co-Authors

Thomas Naselaris (Medical University of South Carolina)
Kendrick Kay (University of Minnesota)
James A. Mazer (University of California)
An T. Vu (University of California)
David C. Van Essen (Washington University in St. Louis)