
Publications


Featured research published by Kendrick Kay.


Nature | 2008

Identifying natural images from human brain activity

Kendrick Kay; Thomas Naselaris; Ryan J. Prenger; Jack L. Gallant

A challenging goal in neuroscience is to be able to read out, or decode, mental content from brain activity. Recent functional magnetic resonance imaging (fMRI) studies have decoded orientation, position and object category from activity in visual cortex. However, these studies typically used relatively simple stimuli (for example, gratings) or images drawn from fixed categories (for example, faces, houses), and decoding was based on previous measurements of brain activity evoked by those same stimuli or categories. To overcome these limitations, here we develop a decoding method based on quantitative receptive-field models that characterize the relationship between visual stimuli and fMRI activity in early visual areas. These models describe the tuning of individual voxels for space, orientation and spatial frequency, and are estimated directly from responses evoked by natural images. We show that these receptive-field models make it possible to identify, from a large set of completely novel natural images, which specific image was seen by an observer. Identification is not a mere consequence of the retinotopic organization of visual areas; simpler receptive-field models that describe only spatial tuning yield much poorer identification performance. Our results suggest that it may soon be possible to reconstruct a picture of a person’s visual experience from measurements of brain activity alone.
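The identification scheme described in this abstract can be illustrated with a toy sketch. The code below uses simulated data and a generic correlation-based matcher, not the paper's actual receptive-field models: model-predicted voxel responses are computed for every candidate image, and the identified image is the one whose prediction best correlates with the observed response.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_images = 100, 50

# Hypothetical model predictions: one predicted response vector per
# candidate image (voxels x images). In the paper these would come
# from receptive-field models fit to responses to natural images.
predicted = rng.normal(size=(n_voxels, n_images))

# Simulate an observed response: the true image's prediction plus noise.
true_image = 7
observed = predicted[:, true_image] + 0.3 * rng.normal(size=n_voxels)

def identify(observed, predicted):
    """Pick the candidate image whose predicted activity pattern best
    matches the observed activity (Pearson correlation)."""
    corrs = [np.corrcoef(observed, predicted[:, j])[0, 1]
             for j in range(predicted.shape[1])]
    return int(np.argmax(corrs))

print(identify(observed, predicted))  # recovers index 7 here
```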


NeuroImage | 2011

Encoding and decoding in fMRI.

Thomas Naselaris; Kendrick Kay; Shinji Nishimoto; Jack L. Gallant

Over the past decade fMRI researchers have developed increasingly sensitive techniques for analyzing the information represented in BOLD activity. The most popular of these techniques is linear classification, a simple technique for decoding information about experimental stimuli or tasks from patterns of activity across an array of voxels. A more recent development is the voxel-based encoding model, which describes the information about the stimulus or task that is represented in the activity of single voxels. Encoding and decoding are complementary operations: encoding uses stimuli to predict activity while decoding uses activity to predict information about the stimuli. However, in practice these two operations are often confused, and their respective strengths and weaknesses have not been made clear. Here we use the concept of a linearizing feature space to clarify the relationship between encoding and decoding. We show that encoding and decoding operations can both be used to investigate some of the most common questions about how information is represented in the brain. However, focusing on encoding models offers two important advantages over decoding. First, an encoding model can in principle provide a complete functional description of a region of interest, while a decoding model can provide only a partial description. Second, while it is straightforward to derive an optimal decoding model from an encoding model it is much more difficult to derive an encoding model from a decoding model. We propose a systematic modeling approach that begins by estimating an encoding model for every voxel in a scan and ends by using the estimated encoding models to perform decoding.
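The claim that a decoder can be derived from an encoding model can be sketched concretely. The toy example below (simulated data, a made-up linear feature space, and an assumed Gaussian noise level — none of it from the paper) builds a decoder that scores each candidate stimulus by the likelihood of the observed activity under the encoding model's prediction.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_features, n_candidates = 60, 8, 20

# Hypothetical linearizing feature space: each candidate stimulus is a
# feature vector; the encoding model maps features to voxels linearly.
features = rng.normal(size=(n_candidates, n_features))
W = rng.normal(size=(n_voxels, n_features))   # encoding weights
sigma = 0.4                                   # assumed noise level

def encode(feat):
    """Encoding model: predict voxel activity from stimulus features."""
    return W @ feat

def decode(activity):
    """Decoder derived from the encoding model: choose the candidate
    whose predicted activity has the highest Gaussian log-likelihood
    given the observed activity."""
    log_liks = [-np.sum((activity - encode(f)) ** 2) / (2 * sigma ** 2)
                for f in features]
    return int(np.argmax(log_liks))

true_idx = 3
activity = encode(features[true_idx]) + sigma * rng.normal(size=n_voxels)
print(decode(activity))  # recovers index 3 here
```

Note the asymmetry the abstract describes: `decode` is built entirely from `encode`, but nothing in a trained decoder would let you reconstruct `W`.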


Neuron | 2009

Bayesian Reconstruction of Natural Images from Human Brain Activity

Thomas Naselaris; Ryan J. Prenger; Kendrick Kay; Michael Oliver; Jack L. Gallant

Recent studies have used fMRI signals from early visual areas to reconstruct simple geometric patterns. Here, we demonstrate a new Bayesian decoder that uses fMRI signals from early and anterior visual areas to reconstruct complex natural images. Our decoder combines three elements: a structural encoding model that characterizes responses in early visual areas, a semantic encoding model that characterizes responses in anterior visual areas, and prior information about the structure and semantic content of natural images. By combining all these elements, the decoder produces reconstructions that accurately reflect both the spatial structure and semantic category of the objects contained in the observed natural image. Our results show that prior information has a substantial effect on the quality of natural image reconstructions. We also demonstrate that much of the variance in the responses of anterior visual areas to complex natural images is explained by the semantic category of the image alone.


The Journal of Neuroscience | 2007

Topographic organization in and near human visual area V4

Kathleen A. Hansen; Kendrick Kay; Jack L. Gallant

The existence and location of a human counterpart of macaque visual area V4 are disputed. To resolve this issue, we used functional magnetic resonance imaging to obtain topographic maps from human subjects, using visual stimuli and tasks designed to maximize accuracy of topographic maps of the fovea and parafovea and to measure the effects of attention on topographic maps. We identified multiple topographic transitions, each clearly visible in ≥75% of the maps, that we interpret as boundaries of distinct cortical regions. We call two of these regions dorsal V4 and ventral V4 (together comprising human area V4) because they share several defining characteristics with the macaque regions V4d and V4v (which together comprise macaque area V4). Ventral V4 is adjacent to V3v, and dorsal V4 is adjacent to parafoveal V3d. Ventral V4 and dorsal V4 meet in the foveal confluence shared by V1, V2, and V3. Ventral V4 and dorsal V4 represent complementary regions of the visual field, because ventral V4 represents the upper field and a subregion of the lower field, whereas dorsal V4 represents lower-field locations that are not represented by ventral V4. Finally, attentional modulation of spatial tuning is similar across dorsal and ventral V4, but attention has a smaller effect in V3d and V3v and a larger effect in a neighboring lateral occipital region.


Nature Medicine | 2013

Quantifying the local tissue volume and composition in individual brains with magnetic resonance imaging

Aviv Mezer; Jason D. Yeatman; Nikola Stikov; Kendrick Kay; Nam-Joon Cho; Robert F. Dougherty; Michael L. Perry; Josef Parvizi; Le H. Hua; Kim Butts-Pauly; Brian A. Wandell

Here, we describe a quantitative neuroimaging method to estimate the macromolecular tissue volume (MTV), a fundamental measure of brain anatomy. By making measurements over a range of field strengths and scan parameters, we tested the key assumptions and the robustness of the method. The measurements confirm that a consistent quantitative estimate of MTV can be obtained across a range of scanners. MTV estimates are sufficiently precise to enable a comparison between data obtained from an individual subject with control population data. We describe two applications. First, we show that MTV estimates can be combined with T1 and diffusion measurements to augment our understanding of the tissue properties. Second, we show that MTV provides a sensitive measure of disease status in individual patients with multiple sclerosis. The MTV maps are obtained using short clinically appropriate scans that can reveal how tissue changes influence behavior and cognition.


Nature Methods | 2014

Evaluation and statistical inference for human connectomes

Franco Pestilli; Jason D. Yeatman; Ariel Rokem; Kendrick Kay; Brian A. Wandell

Diffusion-weighted imaging coupled with tractography is currently the only method for in vivo mapping of human white-matter fascicles. Tractography takes diffusion measurements as input and produces the connectome, a large collection of white-matter fascicles, as output. We introduce a method to evaluate the evidence supporting connectomes. Linear fascicle evaluation (LiFE) takes any connectome as input and predicts diffusion measurements as output, using the difference between the measured and predicted diffusion signals to quantify the prediction error. We use the prediction error to evaluate the evidence that supports the properties of the connectome, to compare tractography algorithms and to test hypotheses about tracts and connections.
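The core idea — scoring a connectome by how well it predicts the diffusion measurements — can be sketched as follows. This is a simplified stand-in, not LiFE itself: fascicle signatures and weights are simulated, and plain least squares is used where LiFE constrains weights to be nonnegative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_meas, n_fascicles = 200, 10

# Hypothetical setup in the spirit of LiFE: each fascicle contributes
# a fixed signature to the diffusion measurements, and the connectome's
# prediction is a weighted sum of those signatures.
signatures = rng.random(size=(n_meas, n_fascicles))
true_weights = rng.random(n_fascicles)
measured = signatures @ true_weights + 0.01 * rng.normal(size=n_meas)

# Fit fascicle weights, predict the diffusion signal, and score the
# connectome by its root-mean-squared prediction error. A lower error
# means the data lend more support to this set of fascicles.
weights, *_ = np.linalg.lstsq(signatures, measured, rcond=None)
predicted = signatures @ weights
rmse = np.sqrt(np.mean((measured - predicted) ** 2))
print(rmse < 0.05)  # True: the model explains the signal well
```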


Journal of Neurophysiology | 2013

Compressive spatial summation in human visual cortex

Kendrick Kay; Jonathan Winawer; Aviv Mezer; Brian A. Wandell

Neurons within a small (a few cubic millimeters) region of visual cortex respond to stimuli within a restricted region of the visual field. Previous studies have characterized the population response of such neurons using a model that sums contrast linearly across the visual field. In this study, we tested linear spatial summation of population responses using blood oxygenation level-dependent (BOLD) functional MRI. We measured BOLD responses to a systematic set of contrast patterns and discovered systematic deviation from linearity: the data are more accurately explained by a model in which a compressive static nonlinearity is applied after linear spatial summation. We found that the nonlinearity is present in early visual areas (e.g., V1, V2) and grows more pronounced in relatively anterior extrastriate areas (e.g., LO-2, VO-2). We then analyzed the effect of compressive spatial summation in terms of changes in the position and size of a viewed object. Compressive spatial summation is consistent with tolerance to changes in position and size, an important characteristic of object representation.
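The model described here — linear spatial summation followed by a compressive static nonlinearity — can be written down directly. The sketch below is a minimal illustration with a made-up Gaussian population receptive field and a binary stimulus; parameter values are illustrative, not from the paper.

```python
import numpy as np

def css_response(stim, prf, n=0.5, gain=1.0):
    """Compressive spatial summation: sum contrast linearly over the
    visual field, then apply a static power-law nonlinearity
    (exponent n < 1 compresses the response)."""
    linear = np.sum(prf * stim)
    return gain * linear ** n

# Hypothetical Gaussian population receptive field on a small grid.
x, y = np.meshgrid(np.linspace(-5, 5, 64), np.linspace(-5, 5, 64))
prf = np.exp(-((x - 1) ** 2 + (y + 0.5) ** 2) / (2 * 1.5 ** 2))
prf /= prf.sum()

# Doubling the stimulus drive less than doubles the response: the
# power law with n = 0.5 scales it by sqrt(2), not 2.
stim = (x ** 2 + y ** 2 < 9).astype(float)   # binary disk stimulus
r1 = css_response(stim, prf, n=0.5)
r2 = css_response(2 * stim, prf, n=0.5)
print(r2 / r1)  # sqrt(2) ~= 1.414
```

Setting `n = 1` recovers the purely linear model that the data reject; smaller exponents correspond to the stronger compression found in anterior extrastriate areas.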


Frontiers in Neuroscience | 2013

GLMdenoise: a fast, automated technique for denoising task-based fMRI data

Kendrick Kay; Ariel Rokem; Jonathan Winawer; Robert F. Dougherty; Brian A. Wandell

In task-based functional magnetic resonance imaging (fMRI), researchers seek to measure fMRI signals related to a given task or condition. In many circumstances, measuring this signal of interest is limited by noise. In this study, we present GLMdenoise, a technique that improves signal-to-noise ratio (SNR) by entering noise regressors into a general linear model (GLM) analysis of fMRI data. The noise regressors are derived by conducting an initial model fit to determine voxels unrelated to the experimental paradigm, performing principal components analysis (PCA) on the time-series of these voxels, and using cross-validation to select the optimal number of principal components to use as noise regressors. Due to the use of data resampling, GLMdenoise requires and is best suited for datasets involving multiple runs (where conditions repeat across runs). We show that GLMdenoise consistently improves cross-validation accuracy of GLM estimates on a variety of event-related experimental datasets and is accompanied by substantial gains in SNR. To promote practical application of methods, we provide MATLAB code implementing GLMdenoise. Furthermore, to help compare GLMdenoise to other denoising methods, we present the Denoise Benchmark (DNB), a public database and architecture for evaluating denoising methods. The DNB consists of the datasets described in this paper, a code framework that enables automatic evaluation of a denoising method, and implementations of several denoising methods, including GLMdenoise, the use of motion parameters as noise regressors, ICA-based denoising, and RETROICOR/RVHRCOR. Using the DNB, we find that GLMdenoise performs best out of all of the denoising methods we tested.
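The GLMdenoise pipeline outlined above can be sketched in a few lines. This is a simplified illustration on simulated data, not the MATLAB implementation the paper provides: it fits an initial GLM, defines a noise pool from voxels the task model fails to explain, and takes principal components of their time series as noise regressors. The R² threshold and number of components are fixed here, whereas GLMdenoise selects the number of components by cross-validation across runs.

```python
import numpy as np

def derive_noise_regressors(data, design, n_pcs=3, r2_thresh=0.1):
    """Sketch of the GLMdenoise idea: initial GLM fit, noise-pool
    selection by low R^2, then PCA on the noise-pool time series."""
    # Initial GLM fit: least-squares betas and per-voxel R^2.
    betas, *_ = np.linalg.lstsq(design, data, rcond=None)
    resid = data - design @ betas
    ss_res = np.sum(resid ** 2, axis=0)
    ss_tot = np.sum((data - data.mean(axis=0)) ** 2, axis=0)
    r2 = 1 - ss_res / ss_tot

    # Noise pool: voxels unrelated to the experimental paradigm.
    noise_pool = data[:, r2 <= r2_thresh]

    # PCA on the noise-pool time series (via SVD of centered data);
    # the top components serve as noise regressors for a refit GLM.
    centered = noise_pool - noise_pool.mean(axis=0)
    u, s, _ = np.linalg.svd(centered, full_matrices=False)
    return u[:, :n_pcs]   # time points x n_pcs

# Toy data: 120 time points, 30 voxels (half task-driven, half noise).
rng = np.random.default_rng(3)
design = rng.random(size=(120, 4))
signal = design @ rng.random(size=(4, 15))
data = np.hstack([signal + 0.1 * rng.normal(size=(120, 15)),
                  rng.normal(size=(120, 15))])
noise_regs = derive_noise_regressors(data, design)
print(noise_regs.shape)  # (120, 3)
```

In the full method, these regressors would be appended to `design` and the GLM refit, with cross-validation across runs deciding how many components actually help.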


Nature Neuroscience | 2009

I can see what you see

Kendrick Kay; Jack L. Gallant

Previous studies have attempted to decode functional imaging data to infer the perceptual state of an observer, but the level of detail has been limited. A new decoding study reconstructs accurate pictures of what an observer has seen.


Trends in Cognitive Sciences | 2015

Resolving Ambiguities of MVPA Using Explicit Models of Representation

Thomas Naselaris; Kendrick Kay

We advocate a shift in emphasis within cognitive neuroscience from multivariate pattern analysis (MVPA) to the design and testing of explicit models of neural representation. With such models, it becomes possible to identify the specific representations encoded in patterns of brain activity and to map them across the brain.

Collaboration


Dive into Kendrick Kay's collaborations.

Top Co-Authors

Thomas Naselaris

Medical University of South Carolina

Nikolaus Kriegeskorte

Cognition and Brain Sciences Unit
