Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Christopher Baldassano is active.

Publication


Featured research published by Christopher Baldassano.


NeuroImage | 2013

Differential connectivity within the Parahippocampal Place Area.

Christopher Baldassano; Diane M. Beck; Li Fei-Fei

The Parahippocampal Place Area (PPA) has traditionally been considered a homogeneous region of interest, but recent evidence from both human studies and animal models has suggested that PPA may be composed of functionally distinct subunits. To investigate this hypothesis, we utilize a functional connectivity measure for fMRI that can estimate connectivity differences at the voxel level. Applying this method to whole-brain data from two experiments, we provide the first direct evidence that anterior and posterior PPA exhibit distinct connectivity patterns, with anterior PPA more strongly connected to regions in the default mode network (including the parieto-medial temporal pathway) and posterior PPA more strongly connected to occipital visual regions. We show that object sensitivity in PPA also has an anterior-posterior gradient, with stronger responses to abstract objects in posterior PPA. These findings cast doubt on the traditional view of PPA as a single coherent region, and suggest that PPA is composed of one subregion specialized for the processing of low-level visual features and object shape, and a separate subregion more involved in memory and scene context.
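
As a rough illustration of the anterior-versus-posterior comparison described above, the sketch below contrasts ordinary seed-based correlation maps for two hypothetical PPA seeds. Seed-based correlation is a simplified stand-in for the paper's voxel-level connectivity measure, and all array names, sizes, and data are placeholders.

```python
# Simplified sketch: contrast whole-brain seed-correlation maps for two seeds
# (anterior vs. posterior PPA). Seed-based correlation is a stand-in for the
# paper's voxel-level connectivity measure; all data here are random placeholders.
import numpy as np

def seed_correlation(seed_ts, brain_ts):
    """Pearson correlation between a seed time course (T,) and every voxel in brain_ts (T, V)."""
    seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
    brain = (brain_ts - brain_ts.mean(axis=0)) / brain_ts.std(axis=0)
    return seed @ brain / len(seed)

T, V = 200, 5000                       # timepoints, whole-brain voxels (made up)
rng = np.random.default_rng(0)
brain = rng.standard_normal((T, V))    # stand-in for preprocessed fMRI time series
ant_ppa = brain[:, :50].mean(axis=1)   # hypothetical anterior-PPA mean time course
post_ppa = brain[:, 50:100].mean(axis=1)

# Positive values: voxels more strongly coupled to anterior than to posterior PPA.
contrast = seed_correlation(ant_ppa, brain) - seed_correlation(post_ppa, brain)
```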


NeuroImage | 2012

Voxel-level functional connectivity using spatial regularization

Christopher Baldassano; Marius Cătălin Iordan; Diane M. Beck; Li Fei-Fei

Discovering functional connectivity between and within brain regions is a key concern in neuroscience. Due to the noise inherent in fMRI data, it is challenging to characterize the properties of individual voxels, and current methods are unable to flexibly analyze voxel-level connectivity differences. We propose a new functional connectivity method which incorporates a spatial smoothness constraint using regularized optimization, enabling the discovery of voxel-level interactions between brain regions from the small datasets characteristic of fMRI experiments. We validate our method in two separate experiments, demonstrating that we can learn coherent connectivity maps that are consistent with known results. First, we examine the functional connectivity between early visual areas V1 and VP, confirming that this connectivity structure preserves retinotopic mapping. Then, we show that two category-selective regions in ventral cortex - the Parahippocampal Place Area (PPA) and the Fusiform Face Area (FFA) - exhibit an expected peripheral versus foveal bias in their connectivity with visual area hV4. These results show that our approach is powerful, widely applicable, and capable of uncovering complex connectivity patterns with only a small amount of input data.
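
A minimal sketch of the general idea of a spatial smoothness constraint on voxel-level connectivity weights, implemented here as a graph-Laplacian penalty in a ridge-style least-squares problem. This is an illustrative simplification, not the paper's exact estimator; the chain adjacency and all sizes are assumptions.

```python
# Sketch of a spatial-smoothness penalty on connectivity weights: solve
#   min_w ||y - X w||^2 + lam * w^T L w
# where L is the graph Laplacian of the voxel adjacency graph, so neighboring
# voxels are pushed toward similar weights. Illustrative only; not the paper's
# exact estimator, and the chain adjacency is a toy assumption.
import numpy as np

def laplacian(adjacency):
    """Unnormalized graph Laplacian L = D - A for a symmetric adjacency matrix."""
    return np.diag(adjacency.sum(axis=1)) - adjacency

def smooth_weights(X, y, adjacency, lam=10.0):
    """X: (T, V) source-voxel time series, y: (T,) target time course."""
    L = laplacian(adjacency)
    return np.linalg.solve(X.T @ X + lam * L, X.T @ y)

rng = np.random.default_rng(0)
T, V = 150, 30
X = rng.standard_normal((T, V))
y = X[:, 10:15].mean(axis=1) + 0.5 * rng.standard_normal(T)   # toy target signal

A = np.zeros((V, V))                   # 1-D chain of voxels as a toy spatial layout
idx = np.arange(V - 1)
A[idx, idx + 1] = A[idx + 1, idx] = 1.0

w = smooth_weights(X, y, A)            # weights vary smoothly across neighboring voxels
```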


Journal of Experimental Psychology: General | 2016

Visual Scenes Are Categorized by Function

Michelle R. Greene; Christopher Baldassano; Andre Esteva; Diane M. Beck; Li Fei-Fei

How do we know that a kitchen is a kitchen by looking? Traditional models posit that scene categorization is achieved through recognizing necessary and sufficient features and objects, yet there is little consensus about what these may be. However, scene categories should reflect how we use visual information. Therefore, we test the hypothesis that scene categories reflect functions, or the possibilities for actions within a scene. Our approach is to compare human categorization patterns with predictions made by both functions and alternative models. We collected a large-scale scene category distance matrix (5 million trials) by asking observers to simply decide whether 2 images were from the same or different categories. Using the actions from the American Time Use Survey, we mapped actions onto each scene (1.4 million trials). We found a strong relationship between ranked category distance and functional distance (r = .50, or 66% of the maximum possible correlation). The function model outperformed alternative models of object-based distance (r = .33), visual features from a convolutional neural network (r = .39), lexical distance (r = .27), and models of visual features. Using hierarchical linear regression, we found that functions captured 85.5% of overall explained variance, with nearly half of the explained variance captured only by functions, implying that the predictive power of alternative models was because of their shared variance with the function-based model. These results challenge the dominant school of thought that visual features and objects are sufficient for scene categorization, suggesting instead that a scene's category may be determined by the scene's function.
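
The headline numbers above come from rank-correlating model distance matrices with the behavioral category-distance matrix. A minimal sketch of that comparison, using random placeholder matrices:

```python
# Rank-correlate a behavioral category-distance matrix with a model distance
# matrix (e.g., function-based), using only off-diagonal entries. Both matrices
# here are random placeholders standing in for the real data.
import numpy as np
from scipy.stats import spearmanr

def rank_correlation(dist_a, dist_b):
    """Spearman correlation between the upper triangles of two symmetric distance matrices."""
    iu = np.triu_indices_from(dist_a, k=1)
    rho, _ = spearmanr(dist_a[iu], dist_b[iu])
    return rho

def random_symmetric(n, rng):
    m = rng.random((n, n))
    return (m + m.T) / 2

rng = np.random.default_rng(0)
behavioral_rdm = random_symmetric(20, rng)     # placeholder category-distance matrix
function_rdm = random_symmetric(20, rng)       # placeholder function-based distances
print(rank_correlation(behavioral_rdm, function_rdm))
```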


PeerJ | 2015

Parcellating connectivity in spatial maps.

Christopher Baldassano; Diane M. Beck; Li Fei-Fei

A common goal in biological sciences is to model a complex web of connections using a small number of interacting units. We present a general approach for dividing up elements in a spatial map based on their connectivity properties, allowing for the discovery of local regions underlying large-scale connectivity matrices. Our method is specifically designed to respect spatial layout and identify locally-connected clusters, corresponding to plausible coherent units such as strings of adjacent DNA base pairs, subregions of the brain, animal communities, or geographic ecosystems. Instead of using approximate greedy clustering, our nonparametric Bayesian model infers a precise parcellation using collapsed Gibbs sampling. We utilize an infinite clustering prior that intrinsically incorporates spatial constraints, allowing the model to search directly in the space of spatially-coherent parcellations. After showing results on synthetic datasets, we apply our method to both functional and structural connectivity data from the human brain. We find that our parcellation is substantially more effective than previous approaches at summarizing the brain’s connectivity structure using a small number of clusters, produces better generalization to individual subject data, and reveals functional parcels related to known retinotopic maps in visual cortex. Additionally, we demonstrate the generality of our method by applying the same model to human migration data within the United States. This analysis reveals that migration behavior is generally influenced by state borders, but also identifies regional communities which cut across state lines. Our parcellation approach has a wide range of potential applications in understanding the spatial structure of complex biological networks.
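
The paper's parcellation is a nonparametric Bayesian model inferred with collapsed Gibbs sampling. As a much simpler stand-in for the core idea of clustering elements by connectivity profile while enforcing spatial contiguity, the sketch below uses agglomerative clustering with an adjacency constraint; the grid size, profiles, and cluster count are all illustrative.

```python
# Simpler stand-in for spatially constrained parcellation: agglomerative
# clustering of connectivity profiles with a spatial adjacency constraint,
# which forces every cluster to be spatially contiguous. This is NOT the
# paper's Bayesian model; sizes and data are illustrative.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.image import grid_to_graph

nx, ny = 20, 20                                   # a 20 x 20 spatial map
rng = np.random.default_rng(0)
profiles = rng.standard_normal((nx * ny, 50))     # connectivity profile per element
adjacency = grid_to_graph(nx, ny)                 # sparse 4-neighbor adjacency graph

parcels = AgglomerativeClustering(
    n_clusters=12, connectivity=adjacency, linkage="ward"
).fit_predict(profiles).reshape(nx, ny)           # spatially coherent parcel labels
```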


bioRxiv | 2016

Two Distinct Scene-Processing Networks Connecting Vision and Memory.

Christopher Baldassano; Andre Esteva; Li Fei-Fei; Diane M. Beck

A number of regions in the human brain are known to be involved in processing natural scenes, but the field has lacked a unifying framework for understanding how these different regions are organized and interact. We provide evidence from functional connectivity and meta-analyses for a new organizational principle, in which scene processing relies upon two distinct networks that split the classically defined parahippocampal place area (PPA). The first network of strongly connected regions consists of the occipital place area/transverse occipital sulcus and posterior PPA, which contain retinotopic maps and are not strongly coupled to the hippocampus at rest. The second network consists of the caudal inferior parietal lobule, retrosplenial complex, and anterior PPA, which connect to the hippocampus (especially anterior hippocampus), and are implicated in both visual and nonvisual tasks, including episodic memory and navigation. We propose that these two distinct networks capture the primary functional division among scene-processing regions, between those that process visual features from the current view of a scene and those that connect information from a current scene view with a much broader temporal and spatial context. This new framework for understanding the neural substrates of scene processing bridges results from many lines of research, and makes specific functional predictions.


Journal of Vision | 2016

Pinpointing the peripheral bias in neural scene-processing networks during natural viewing

Christopher Baldassano; Li Fei-Fei; Diane M. Beck

Peripherally presented stimuli evoke stronger activity in scene-processing regions than foveally presented stimuli, suggesting that scene understanding is driven largely by peripheral information. We used functional MRI to investigate whether functional connectivity evoked during natural perception of audiovisual movies reflects this peripheral bias. For each scene-sensitive region--the parahippocampal place area (PPA), retrosplenial cortex, and occipital place area--we computed two measures: the extent to which its activity could be predicted by V1 activity (connectivity strength) and the eccentricities within V1 to which it was most closely related (connectivity profile). Scene regions were most related to peripheral voxels in V1, but the detailed nature of this connectivity varied within and between these regions. The retrosplenial cortex showed the most consistent peripheral bias but was less predictable from V1 activity, while the occipital place area was related to a wider range of eccentricities and was strongly coupled to V1. We divided the PPA along its posterior-anterior axis into retinotopic maps PHC1, PHC2, and anterior PPA, and found that a peripheral bias was detectable throughout all subregions, though the anterior PPA showed a less consistent relationship to eccentricity and a substantially weaker overall relationship to V1. We also observed an opposite foveal bias in object-perception regions including the lateral occipital complex and fusiform face area. These results show a fine-scale relationship between eccentricity biases and functional correlation during natural perception, giving new insight into the structure of the scene-perception network.
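
A minimal sketch of the two measures named above, connectivity strength and connectivity profile, under assumed inputs (V1 voxel time courses with per-voxel eccentricities, and a scene-region mean time course). This is an illustrative simplification of the analysis, not the paper's pipeline.

```python
# Two illustrative measures, given assumed inputs: v1 (T, V) voxel time courses
# with one eccentricity per voxel, and region (T,) a scene-region time course.
# "Connectivity strength" = how well V1 predicts the region (cross-validated R^2);
# "connectivity profile" = correlation-weighted mean eccentricity. Simplified,
# not the paper's exact pipeline; all data are synthetic.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
T, V = 300, 400
v1 = rng.standard_normal((T, V))
eccentricity = rng.uniform(0, 20, V)                     # degrees of visual angle (made up)
region = v1 @ (eccentricity > 10) / 50 + rng.standard_normal(T)  # toy peripherally driven signal

# Connectivity strength: cross-validated prediction of the region from all V1 voxels.
strength = cross_val_score(RidgeCV(), v1, region, cv=5, scoring="r2").mean()

# Connectivity profile: eccentricities weighted by each voxel's (positive) correlation.
corr = np.array([np.corrcoef(v1[:, i], region)[0, 1] for i in range(V)])
weights = np.clip(corr, 0, None)
profile_eccentricity = (weights * eccentricity).sum() / weights.sum()
print(strength, profile_eccentricity)                    # peripheral bias -> high mean eccentricity
```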


Cerebral Cortex | 2016

Human–Object Interactions Are More than the Sum of Their Parts

Christopher Baldassano; Diane M. Beck; Li Fei-Fei

Understanding human-object interactions is critical for extracting meaning from everyday visual scenes and requires integrating complex relationships between human pose and object identity into a new percept. To understand how the brain builds these representations, we conducted 2 fMRI experiments in which subjects viewed humans interacting with objects, noninteracting human-object pairs, and isolated humans and objects. A number of visual regions process features of human-object interactions, including object identity information in the lateral occipital complex (LOC) and parahippocampal place area (PPA), and human pose information in the extrastriate body area (EBA) and posterior superior temporal sulcus (pSTS). Representations of human-object interactions in some regions, such as the posterior PPA (retinotopic maps PHC1 and PHC2), are well predicted by a simple linear combination of the response to object and pose information. Other regions, however, especially pSTS, exhibit representations for human-object interaction categories that are not predicted by their individual components, indicating that they encode human-object interactions as more than the sum of their parts. These results reveal the distributed networks underlying the emergent representation of human-object interactions necessary for social perception.
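
A minimal sketch of the "sum of its parts" test described above: fit the response pattern for an interacting human-object pair as a linear combination of the patterns for the isolated human and object, then check how much pattern variance is left unexplained. The patterns below are random placeholders.

```python
# "Sum of its parts" check: fit the interaction pattern as a linear combination
# of the isolated-person and isolated-object patterns, then measure leftover
# variance. Patterns are random placeholders; names and sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 300
person = rng.standard_normal(n_voxels)        # pattern for the isolated human pose
obj = rng.standard_normal(n_voxels)           # pattern for the isolated object
interaction = 0.6 * person + 0.4 * obj + 0.2 * rng.standard_normal(n_voxels)

X = np.column_stack([person, obj])
coef, *_ = np.linalg.lstsq(X, interaction, rcond=None)
residual = interaction - X @ coef
r_squared = 1 - residual.var() / interaction.var()
print(coef, r_squared)    # a low r_squared would suggest a non-additive representation
```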


NeuroImage | 2017

Mapping between fMRI responses to movies and their natural language annotations

Kiran Vodrahalli; Po-Hsuan Chen; Yingyu Liang; Christopher Baldassano; Janice Chen; Esther Yong; Christopher J. Honey; Uri Hasson; Peter J. Ramadge; Kenneth A. Norman; Sanjeev Arora

Several research groups have shown how to map fMRI responses to the meanings of presented stimuli. This paper presents new methods for doing so when only a natural language annotation is available as the description of the stimulus. We study fMRI data gathered from subjects watching an episode of the BBC's Sherlock (Chen et al., 2017), and learn bidirectional mappings between fMRI responses and natural language representations. By leveraging data from multiple subjects watching the same movie, we were able to perform scene classification with 72% accuracy (random guessing would give 4%) and scene ranking with average rank in the top 4% (random guessing would give 50%). The key ingredients underlying this high level of performance are (a) the use of the Shared Response Model (SRM) and its variant SRM-ICA (Chen et al., 2015; Zhang et al., 2016) to aggregate fMRI data from multiple subjects, both of which are shown to be superior to standard PCA in producing low-dimensional representations for the tasks in this paper; (b) a sentence embedding technique adapted from the natural language processing (NLP) literature (Arora et al., 2017) that produces semantic vector representations of the annotations; (c) using previous timestep information in the featurization of the predictor data. These optimizations in how we featurize the fMRI data and text annotations provide a substantial improvement in classification performance, relative to standard approaches.

Highlights:
- We learn maps between fMRI data and fine-grained text annotations.
- The Shared Response Model highlights movie-related variance in the fMRI response.
- Semantic annotations can be featurized with weighted sums of word embeddings.
- Using previous timepoints helps with fMRI-to-text mapping, but hurts text-to-fMRI mapping.
- Our methods attain high performance on scene classification and ranking tasks.
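
A small sketch of the annotation featurization in ingredient (b): a sentence embedding formed as a frequency-weighted average of word vectors, in the spirit of Arora et al. (2017). The word vectors, frequencies, and smoothing constant here are tiny placeholders, and any further processing described in that work is omitted.

```python
# Frequency-weighted average of word vectors as a sentence embedding, in the
# spirit of Arora et al. (2017). Word vectors, frequencies, and the smoothing
# constant `a` are placeholders; further steps from that work are omitted.
import numpy as np

def sentence_embedding(words, word_vectors, word_freq, a=1e-3):
    """Average word vectors, down-weighting frequent words by a / (a + freq)."""
    vecs, weights = [], []
    for w in words:
        if w in word_vectors:
            vecs.append(word_vectors[w])
            weights.append(a / (a + word_freq.get(w, 0.0)))
    if not vecs:
        return np.zeros(len(next(iter(word_vectors.values()))))
    return np.average(np.stack(vecs), axis=0, weights=weights)

rng = np.random.default_rng(0)
vocab = ["sherlock", "watson", "enters", "the", "flat"]
word_vectors = {w: rng.standard_normal(50) for w in vocab}        # toy 50-d embeddings
word_freq = {"the": 5e-2, "enters": 1e-4, "sherlock": 1e-5, "watson": 1e-5, "flat": 1e-4}
annotation_vector = sentence_embedding("sherlock enters the flat".split(), word_vectors, word_freq)
```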


eLife | 2018

Distinct contributions of functional and deep neural network features to representational similarity of scenes in human brain and behavior

Iris I. A. Groen; Michelle R. Greene; Christopher Baldassano; Li Fei-Fei; Diane M. Beck; Chris I. Baker

Inherent correlations between visual and semantic features in real-world scenes make it difficult to determine how different scene properties contribute to neural representations. Here, we assessed the contributions of multiple properties to scene representation by partitioning the variance explained in human behavioral and brain measurements by three feature models whose inter-correlations were minimized a priori through stimulus preselection. Behavioral assessments of scene similarity reflected unique contributions from a functional feature model indicating potential actions in scenes as well as high-level visual features from a deep neural network (DNN). In contrast, similarity of cortical responses in scene-selective areas was uniquely explained by mid- and high-level DNN features only, while an object label model did not contribute uniquely to either domain. The striking dissociation between functional and DNN features in their contribution to behavioral and brain representations of scenes indicates that scene-selective cortex represents only a subset of behaviorally relevant scene information.
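
A minimal sketch of the variance-partitioning logic described above: the unique contribution of a feature model to a target representational dissimilarity matrix (RDM) is the drop in explained variance when that model is removed from the full regression. All RDMs below are random placeholders.

```python
# Variance partitioning over vectorized RDMs: the unique contribution of one
# feature model is the drop in R^2 when it is removed from the full regression.
# All RDMs are random placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

def explained_variance(predictor_rdms, target_rdm):
    """R^2 of a linear fit from predictor RDMs to the target RDM (upper triangles only)."""
    iu = np.triu_indices_from(target_rdm, k=1)
    X = np.column_stack([rdm[iu] for rdm in predictor_rdms])
    y = target_rdm[iu]
    return LinearRegression().fit(X, y).score(X, y)

def random_rdm(n, rng):
    m = rng.random((n, n))
    return (m + m.T) / 2

rng = np.random.default_rng(0)
target, functional, dnn, objects = (random_rdm(30, rng) for _ in range(4))

full_r2 = explained_variance([functional, dnn, objects], target)
unique_functional = full_r2 - explained_variance([dnn, objects], target)  # functions-only share
print(full_r2, unique_functional)
```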


Collaboration


Dive into Christopher Baldassano's collaborations.

Top Co-Authors

Chris I. Baker

National Institutes of Health

Iris I. A. Groen

National Institutes of Health

Janice Chen

Johns Hopkins University
