
Publication


Featured research published by Anthony Stigliani.


Cerebral Cortex | 2017

The Cytoarchitecture of Domain-specific Regions in Human High-level Visual Cortex

Kevin S. Weiner; Michael Barnett; Simon Lorenz; Julian Caspers; Anthony Stigliani; Katrin Amunts; Karl Zilles; Bruce Fischl; Kalanit Grill-Spector

A fundamental hypothesis in neuroscience proposes that underlying cellular architecture (cytoarchitecture) contributes to the functionality of a brain area. However, this hypothesis has not been tested in human ventral temporal cortex (VTC), which contains domain-specific regions causally involved in perception. To fill this gap in knowledge, we used cortex-based alignment to register functional regions from living participants to cytoarchitectonic areas in ex vivo brains. This novel approach reveals 3 findings. First, there is a consistent relationship between domain-specific regions and cytoarchitectonic areas: each functional region is largely restricted to 1 cytoarchitectonic area. Second, extracting cytoarchitectonic profiles from face- and place-selective regions after back-projecting each region to 20-μm-thick histological sections indicates that cytoarchitectonic properties distinguish these regions from each other. Third, some cytoarchitectonic areas contain more than 1 domain-specific region. For example, face-, body-, and character-selective regions are located within the same cytoarchitectonic area. We summarize these findings with a parsimonious hypothesis incorporating how cellular properties may contribute to functional specialization in human VTC. Specifically, we link computational principles to correlated axes of functional and cytoarchitectonic segregation in human VTC, in which parallel processing across domains occurs along a lateral-medial axis while transformations of information within a domain occur along an anterior-posterior axis.
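To make the profile-extraction step above concrete, here is a minimal sketch of computing a cytoarchitectonic (laminar) profile by averaging staining intensity within depth bins of a back-projected ROI. The function name, the toy arrays, and the simple depth-binning scheme are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

def laminar_profile(intensity, roi_mask, depth, n_bins=20):
    """Average staining intensity in equal bins of normalized cortical depth
    (0 = pial surface, 1 = gray/white boundary), restricted to an ROI.

    intensity : 2-D array of cell-body staining density (one histological section)
    roi_mask  : boolean array, True where the back-projected functional ROI falls
    depth     : normalized cortical depth per pixel (NaN outside cortex)
    """
    vals, d = intensity[roi_mask], depth[roi_mask]
    keep = ~np.isnan(d)
    vals, d = vals[keep], d[keep]
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    profile = np.array([
        vals[(d >= lo) & (d < hi)].mean() if np.any((d >= lo) & (d < hi)) else np.nan
        for lo, hi in zip(edges[:-1], edges[1:])
    ])
    return profile  # one value per depth bin, pia -> white matter

# Toy example with synthetic data
rng = np.random.default_rng(0)
intensity = rng.random((100, 100))
depth = np.tile(np.linspace(0, 1, 100), (100, 1))   # depth increases left -> right
roi = np.zeros((100, 100), dtype=bool)
roi[40:60, :] = True
print(laminar_profile(intensity, roi, depth, n_bins=10))
```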


The Journal of Neuroscience | 2016

The Face-Processing Network Is Resilient to Focal Resection of Human Visual Cortex.

Kevin S. Weiner; Jacques Jonas; Jesse Gomez; Louis Maillard; Hélène Brissart; Gabriela Hossu; Corentin Jacques; David Loftus; Sophie Colnat-Coulbois; Anthony Stigliani; Michael Barnett; Kalanit Grill-Spector; Bruno Rossion

Human face perception requires a network of brain regions distributed throughout the occipital and temporal lobes with a right hemisphere advantage. Present theories consider this network as either a processing hierarchy beginning with the inferior occipital gyrus (occipital face area; IOG-faces/OFA) or a multiple-route network with nonhierarchical components. The former predicts that removing IOG-faces/OFA will detrimentally affect downstream stages, whereas the latter does not. We tested this prediction in a human patient (Patient S.P.) requiring removal of the right inferior occipital cortex, including IOG-faces/OFA. We acquired multiple fMRI measurements in Patient S.P. before and after a preplanned surgery and multiple measurements in typical controls, enabling both within-subject/across-session comparisons (Patient S.P. before resection vs Patient S.P. after resection) and between-subject/across-session comparisons (Patient S.P. vs controls). We found that the spatial topology and selectivity of downstream ipsilateral face-selective regions were stable 1 and 8 months after surgery. Additionally, the reliability of distributed patterns of face selectivity in Patient S.P. before versus after resection was not different from across-session reliability in controls. Nevertheless, postoperatively, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1 of the resected hemisphere. Diffusion-weighted imaging in Patient S.P. and controls identifies white matter tracts connecting retinotopic areas to downstream face-selective regions, which may contribute to the stable and plastic features of the face network in Patient S.P. after surgery. Together, our results support a multiple-route network of face processing with nonhierarchical components and shed light on stable and plastic features of high-level visual cortex following focal brain damage.

SIGNIFICANCE STATEMENT Brain networks consist of interconnected functional regions commonly organized in processing hierarchies. Prevailing theories predict that damage to the input of the hierarchy will detrimentally affect later stages. We tested this prediction with multiple brain measurements in a rare human patient requiring surgical removal of the putative input to a network processing faces. Surprisingly, the spatial topology and selectivity of downstream face-selective regions are stable after surgery. Nevertheless, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1. White matter connections from outside the face network may support these stable and plastic features. As processing hierarchies are ubiquitous in biological and nonbiological systems, our results have pervasive implications for understanding the construction of resilient networks.
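The across-session reliability analysis mentioned above (comparing distributed face-selectivity patterns before versus after resection against controls' across-session reliability) boils down to correlating voxel-wise selectivity maps. A minimal sketch, assuming each map is simply a vector of selectivity values over matched voxels; the exact statistic used in the paper is not specified here:

```python
import numpy as np

def pattern_reliability(map_session1, map_session2):
    """Pearson correlation between two distributed selectivity maps
    (e.g., voxel-wise face-selectivity values from two sessions)."""
    a = np.asarray(map_session1, dtype=float)
    b = np.asarray(map_session2, dtype=float)
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# Toy example: a stable pattern measured twice with session noise
rng = np.random.default_rng(1)
pre = rng.normal(size=500)
post = pre + rng.normal(scale=0.5, size=500)
print(f"pattern reliability: {pattern_reliability(pre, post):.2f}")
```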


NeuroImage | 2017

Defining the most probable location of the parahippocampal place area using cortex-based alignment and cross-validation.

Kevin S. Weiner; Michael Barnett; Nathan Witthoft; Golijeh Golarai; Anthony Stigliani; Kendrick Kay; Jesse Gomez; Vaidehi Natu; Katrin Amunts; Karl Zilles; Kalanit Grill-Spector

The parahippocampal place area (PPA) is a widely studied high-level visual region in the human brain involved in place and scene processing. The goal of the present study was to identify the most probable location of place-selective voxels in medial ventral temporal cortex. To achieve this goal, we first used cortex-based alignment (CBA) to create a probabilistic place-selective region of interest (ROI) from one group of 12 participants. We then tested how well this ROI could predict place selectivity in each hemisphere within a new group of 12 participants. Our results reveal that a probabilistic ROI (pROI) generated from one group of 12 participants accurately predicts the location and functional selectivity in individual brains from a new group of 12 participants, despite between-subject variability in the exact location of place-selective voxels relative to the folding of parahippocampal cortex. Additionally, the prediction accuracy of our pROI is significantly higher than that achieved by volume-based Talairach alignment. Comparing the location of the pROI of the PPA relative to published data from over 500 participants, including data from the Human Connectome Project, shows a striking convergence of the predicted location of the PPA and the cortical location of voxels exhibiting the highest place selectivity across studies using various methods and stimuli. Specifically, the most predictive anatomical location of voxels exhibiting the highest place selectivity in medial ventral temporal cortex is the junction of the collateral and anterior lingual sulci. Methodologically, we make this pROI freely available (vpnl.stanford.edu/PlaceSelectivity), which provides a means to accurately identify a functional region from anatomical MRI data when fMRI data are not available (for example, in patient populations). Theoretically, we consider different anatomical and functional factors that may contribute to the consistent anatomical location of place selectivity relative to the folding of high-level visual cortex.

Highlights:
- A probabilistic place ROI was created from cortex-based alignment in 24 participants
- Cross-validation shows that this ROI predicts place selectivity in new participants
- This ROI predicts voxels with peak place selectivity in >500 participants
- The collateral/lingual sulcal junction is most predictive of place selectivity
- We share this predictive ROI with the field (vpnl.stanford.edu/PlaceSelectivity)
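A sketch of the pROI-and-cross-validation logic described above: average binary place-selective masks from one group on a common cortical surface, threshold the average, and score prediction in held-out subjects. The threshold value and the Dice overlap metric below are illustrative assumptions rather than the paper's exact procedure:

```python
import numpy as np

def build_proi(group_masks, threshold=0.33):
    """Probabilistic ROI: fraction of subjects with place-selective voxels at each
    surface vertex, thresholded to a binary predictive ROI.

    group_masks : (n_subjects, n_vertices) boolean array, each row a subject's
                  place-selective mask after alignment to a common surface.
    """
    prob_map = group_masks.mean(axis=0)          # probability per vertex
    return prob_map, prob_map >= threshold

def dice(pred_mask, true_mask):
    """Dice coefficient between the predicted ROI and a held-out subject's ROI."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    return 2.0 * inter / (pred_mask.sum() + true_mask.sum())

# Leave-group-out style evaluation on toy data
rng = np.random.default_rng(2)
n_subj, n_vert = 24, 10_000
center = np.zeros(n_vert, dtype=bool)
center[4_000:4_300] = True                       # "true" place-selective patch
masks = np.array([np.roll(center, rng.integers(-50, 50)) for _ in range(n_subj)])

train, test = masks[:12], masks[12:]             # 12 subjects define the pROI
_, proi = build_proi(train)
scores = [dice(proi, m) for m in test]
print(f"mean Dice in held-out subjects: {np.mean(scores):.2f}")
```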


Proceedings of the National Academy of Sciences of the United States of America | 2017

Encoding model of temporal processing in human visual cortex

Anthony Stigliani; Brianna Jeska; Kalanit Grill-Spector

Significance: How is temporal information processed in human visual cortex? To address this question, we used fMRI and a two-temporal-channel encoding model. This approach not only explains cortical responses for time-varying stimuli ranging from milliseconds to seconds but finds differential temporal processing across human visual cortex. While motion-sensitive regions are dominated by transient responses, ventral regions that process the content of the visual input surprisingly show both sustained and transient responses, with the latter exceeding the former. This transient processing may foster rapid extraction of the gist of the scene. Importantly, our encoding approach marks a transformative advancement in the temporal resolution of fMRI, as it enables linking fMRI responses to the timescale of neural computations in cortex.

How is temporal information processed in human visual cortex? Visual input is relayed to V1 through segregated transient and sustained channels in the retina and lateral geniculate nucleus (LGN). However, there is intense debate as to how sustained and transient temporal channels contribute to visual processing beyond V1. The prevailing view associates transient processing predominately with motion-sensitive regions and sustained processing with ventral stream regions, while the opposing view suggests that both temporal channels contribute to neural processing beyond V1. Using fMRI, we measured cortical responses to time-varying stimuli and then implemented a two-temporal-channel encoding model to evaluate the contributions of each channel. Different from the general linear model of fMRI that predicts responses directly from the stimulus, the encoding approach first models neural responses to the stimulus, from which fMRI responses are derived. This encoding approach not only predicts cortical responses to time-varying stimuli from milliseconds to seconds but also reveals differential contributions of temporal channels across visual cortex. Consistent with the prevailing view, motion-sensitive regions and adjacent lateral occipitotemporal regions are dominated by transient responses. However, ventral occipitotemporal regions are driven by both sustained and transient channels, with transient responses exceeding the sustained. These findings propose a rethinking of temporal processing in the ventral stream and suggest that transient processing may contribute to rapid extraction of the content of the visual input. Importantly, our encoding approach has vast implications, because it can be applied with fMRI to decipher neural computations in millisecond resolution in any part of the brain.
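The encoding approach described here first predicts millisecond-level neural responses in separate sustained and transient channels and only then convolves them with a hemodynamic response to fit fMRI data. The sketch below follows that logic with deliberately simplified filters; the specific impulse responses, sampling rate, HRF parameters, and weight values are illustrative assumptions, not the published model:

```python
import numpy as np
from scipy.stats import gamma

dt = 0.001      # 1 ms neural resolution
tr = 1.0        # fMRI sampling interval (s), assumed for the toy example

def channel_responses(stim):
    """Millisecond-level neural predictions for the two channels.
    Sustained: follows the stimulus time course.
    Transient: rectified temporal derivative, i.e., onset and offset responses."""
    sustained = stim.astype(float)
    transient = np.abs(np.diff(stim.astype(float), prepend=0.0))
    return sustained, transient

def hrf(t):
    """Simple double-gamma hemodynamic response (illustrative parameters)."""
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)

def predict_bold(neural, duration):
    """Convolve a neural prediction with the HRF and resample at the TR."""
    t = np.arange(0, 30, dt)
    bold = np.convolve(neural, hrf(t) * dt)[: len(neural)]
    frames = np.round(np.arange(0, duration, tr) / dt).astype(int)
    return bold[frames]

# Toy experiment: a 4 s stimulus presented at t = 5 s within a 30 s run
duration = 30.0
time = np.arange(0, duration, dt)
stim = ((time >= 5) & (time < 9)).astype(float)

sustained, transient = channel_responses(stim)
X = np.column_stack([predict_bold(sustained, duration),
                     predict_bold(transient, duration)])
X /= X.max(axis=0)      # scale each predictor to unit peak so weights are comparable

# Simulate a transient-dominated voxel and recover the channel weights by regression
rng = np.random.default_rng(3)
y = X @ np.array([0.3, 1.0]) + rng.normal(scale=0.02, size=X.shape[0])
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"sustained weight ~ {betas[0]:.2f}, transient weight ~ {betas[1]:.2f}")
```

Fitting the two channel weights by regression, as in the last lines, is how the relative contributions of sustained and transient processing can be compared across regions.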


Interface Focus | 2018

The functional neuroanatomy of face perception: from brain measurements to deep neural networks

Kalanit Grill-Spector; Kevin S. Weiner; Jesse Gomez; Anthony Stigliani; Vaidehi Natu

A central goal in neuroscience is to understand how processing within the ventral visual stream enables rapid and robust perception and recognition. Recent neuroscientific discoveries have significantly advanced understanding of the function, structure and computations along the ventral visual stream that serve as the infrastructure supporting this behaviour. In parallel, significant advances in computational models, such as hierarchical deep neural networks (DNNs), have brought machine performance to a level that is commensurate with human performance. Here, we propose a new framework using the ventral face network as a model system to illustrate how increasing the neural accuracy of present DNNs may allow researchers to test the computational benefits of the functional architecture of the human brain. Thus, the review (i) considers specific neural implementational features of the ventral face network, (ii) describes similarities and differences between the functional architecture of the brain and DNNs, and (iii) provides a hypothesis for the computational value of implementational features within the brain that may improve DNN performance. Importantly, this new framework promotes the incorporation of neuroscientific findings into DNNs in order to test the computational benefits of fundamental organizational features of the visual system.
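One common way to quantify how well a DNN layer's representation matches brain measurements is representational similarity analysis, correlating the model's and the brain's stimulus-by-stimulus dissimilarity structure. It is offered here only as an illustrative proxy for "neural accuracy", since the review does not prescribe a specific metric, and all names and toy data below are assumptions:

```python
import numpy as np
from scipy.stats import spearmanr

def rdm(features):
    """Representational dissimilarity matrix: 1 - Pearson correlation between
    response patterns to each pair of stimuli (rows = stimuli, cols = units/voxels)."""
    return 1.0 - np.corrcoef(features)

def neural_accuracy(dnn_features, brain_patterns):
    """Spearman correlation between the upper triangles of the DNN-layer RDM
    and the brain-region RDM."""
    iu = np.triu_indices(dnn_features.shape[0], k=1)
    rho, _ = spearmanr(rdm(dnn_features)[iu], rdm(brain_patterns)[iu])
    return rho

# Toy example: 50 face stimuli, a DNN layer with 512 units, an fMRI ROI with 200 voxels
rng = np.random.default_rng(4)
shared = rng.normal(size=(50, 10))                  # latent structure shared by both
dnn = shared @ rng.normal(size=(10, 512)) + rng.normal(scale=0.5, size=(50, 512))
roi = shared @ rng.normal(size=(10, 200)) + rng.normal(scale=0.5, size=(50, 200))
print(f"RDM correlation (neural accuracy proxy): {neural_accuracy(dnn, roi):.2f}")
```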


bioRxiv | 2018

Differential Sustained and Transient Temporal Processing Across Visual Streams

Anthony Stigliani; Brianna Jeska; Kalanit Grill-Spector

How do high-level visual regions process the temporal aspects of our visual experience? While the temporal sensitivity of early visual cortex has been studied with fMRI in humans, temporal processing in high-level visual cortex is largely unknown. By modeling neural responses with millisecond precision in separate sustained and transient channels, and introducing a flexible encoding framework that captures differences in neural temporal integration time windows and response nonlinearities, we predict fMRI responses across visual cortex for stimuli ranging from 33 ms to 20 s. Using this innovative approach, we discovered that lateral category-selective regions respond to visual transients associated with stimulus onsets and offsets but not sustained visual information. Thus, lateral category-selective regions compute moment-to-moment visual transitions, but not stable features of the visual input. In contrast, ventral category-selective regions respond to both sustained and transient components of the visual input. Responses to sustained stimuli exhibit adaptation, whereas responses to transient stimuli are surprisingly larger for stimulus offsets than onsets. This large offset transient response may reflect a memory trace of the stimulus when it is no longer visible, whereas the onset transient response may reflect rapid processing of new items. Together, these findings reveal previously unconsidered, fundamental temporal mechanisms that distinguish visual streams in the human brain. Importantly, our results underscore the promise of modeling brain responses with millisecond precision to understand the underlying neural computations.

AUTHOR SUMMARY How does the brain encode the timing of our visual experience? Using functional magnetic resonance imaging (fMRI) and a temporal encoding model with millisecond resolution, we discovered that visual regions in the lateral and ventral processing streams fundamentally differ in their temporal processing of the visual input. Regions in lateral temporal cortex process visual transients associated with stimulus onsets and offsets but not the unchanging aspects of the visual input. That is, they compute moment-to-moment changes in the visual input. In contrast, regions in ventral temporal cortex process both stable and transient components, with the former exhibiting adaptation. Surprisingly, in these ventral regions responses to stimulus offsets were larger than onsets. We suggest that the former may reflect a memory trace of the stimulus, when it is no longer visible, and the latter may reflect rapid processing of new items at stimulus onset. Together, these findings (i) reveal a fundamental temporal mechanism that distinguishes visual streams and (ii) highlight both the importance and utility of modeling brain responses with millisecond precision to understand the temporal dynamics of neural computations in the human brain.
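The two ingredients this preprint adds on top of a sustained/transient decomposition, adaptation of the sustained response and unequal onset versus offset transients, can be sketched at the millisecond level as follows. The exponential-decay adaptation, the gain values, and the time constant are made-up illustrative choices, not fitted parameters from the paper:

```python
import numpy as np

dt = 0.001  # 1 ms resolution

def sustained_with_adaptation(stim, tau=2.0):
    """Sustained channel whose response decays exponentially (time constant tau, s)
    while the stimulus stays on, mimicking adaptation to prolonged stimuli."""
    response = np.zeros_like(stim, dtype=float)
    time_on = 0.0
    for i, s in enumerate(stim):
        if s > 0:
            response[i] = np.exp(-time_on / tau)
            time_on += dt
        else:
            time_on = 0.0
    return response

def onset_offset_transients(stim, onset_gain=1.0, offset_gain=1.5):
    """Transient channel with separate gains for onsets and offsets; the larger
    offset gain mimics the larger offset responses reported in ventral regions."""
    d = np.diff(stim.astype(float), prepend=0.0)
    return onset_gain * np.clip(d, 0, None) + offset_gain * np.clip(-d, 0, None)

# Toy stimulus: on from 2 s to 10 s within a 20 s run
time = np.arange(0, 20, dt)
stim = ((time >= 2) & (time < 10)).astype(float)
s = sustained_with_adaptation(stim)
t = onset_offset_transients(stim)
print(f"sustained at onset: {s[round(2.0/dt)]:.2f}, after 8 s on: {s[round(9.999/dt)]:.2f}")
print(f"onset transient: {t[round(2.0/dt)]:.2f}, offset transient: {t[round(10.0/dt)]:.2f}")
```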


The Journal of Neuroscience | 2016

Development of Neural Sensitivity to Face Identity Correlates with Perceptual Discriminability

Vaidehi Natu; Michael Barnett; Jake Hartley; Jesse Gomez; Anthony Stigliani; Kalanit Grill-Spector


Journal of Vision | 2014

Differential rate of temporal processing across category-selective regions in human high-level visual cortex

Anthony Stigliani; Kevin S. Weiner; Kalanit Grill-Spector


Journal of Vision | 2018

Modeling the temporal dynamics of high-level visual cortex

Anthony Stigliani


Journal of Vision | 2017

Development of neural sensitivity to face identity correlates with perceptual discriminability

Vaidehi Natu; Michael Barnett; Jake Hartley; Jesse Gomez; Anthony Stigliani; Kalanit Grill-Spector

Collaboration


Dive into Anthony Stigliani's collaboration.

Top Co-Authors


Michael Barnett

University of Pennsylvania


Karl Zilles

University of Düsseldorf


Katrin Amunts

University of Düsseldorf
