Noah Benson
New York University
Publications
Featured research published by Noah Benson.
The Journal of Neuroscience | 2017
Jingyang Zhou; Noah Benson; Kendrick Kay; Jonathan Winawer
Combining sensory inputs over space and time is fundamental to vision. Population receptive field models have been successful in characterizing spatial encoding throughout the human visual pathways. A parallel question, how visual areas in the human brain process information distributed over time, has received less attention. One challenge is that the most widely used neuroimaging method, fMRI, has coarse temporal resolution compared with the timescale of neural dynamics. Here, via carefully controlled temporally modulated stimuli, we show that information about temporal processing can be readily derived from fMRI signal amplitudes in male and female subjects. We find that all visual areas exhibit subadditive summation, whereby responses to longer stimuli are less than the linear prediction from briefer stimuli. We also find fMRI evidence that the neural response to two stimuli is reduced for brief interstimulus intervals (indicating adaptation). These effects are more pronounced in visual areas anterior to V1-V3. Finally, we develop a general model that shows how these effects can be captured with two simple operations: temporal summation followed by a compressive nonlinearity. This model operates for arbitrary temporal stimulation patterns and provides a simple and interpretable set of computations that can be used to characterize neural response properties across the visual hierarchy. Importantly, compressive temporal summation directly parallels earlier findings of compressive spatial summation in visual cortex describing responses to stimuli distributed across space. This indicates that, for space and time, cortex uses a similar processing strategy to achieve higher-level and increasingly invariant representations of the visual world.

SIGNIFICANCE STATEMENT: Combining sensory inputs over time is fundamental to seeing. Two important temporal phenomena are summation, the accumulation of sensory inputs over time, and adaptation, a response reduction for repeated or sustained stimuli. We investigated these phenomena in the human visual system using fMRI. We built predictive models that operate on arbitrary temporal patterns of stimulation using two simple computations: temporal summation followed by a compressive nonlinearity. Our new temporal compressive summation model captures (1) subadditive temporal summation, and (2) adaptation. We show that the model accounts for systematic differences in these phenomena across visual areas. Finally, we show that for space and time, the visual system uses a similar strategy to achieve increasingly invariant representations of the visual world.
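The model's two operations are compact enough to sketch directly: convolve the stimulus time course with an impulse response (temporal summation), then apply a pointwise power law (compressive nonlinearity). The Python sketch below is a minimal illustration, not the paper's fitted model; the gamma-shaped impulse response and the parameter values (tau, epsilon) are our assumptions.

```python
import numpy as np

def cts_response(stimulus, tau=0.05, epsilon=0.25, dt=0.001):
    """Compressive temporal summation sketch: linear summation via
    convolution with a gamma-shaped impulse response, followed by a
    pointwise power-law compression (epsilon < 1 makes it subadditive)."""
    t = np.arange(0, 1.0, dt)
    irf = (t / tau) * np.exp(-t / tau)  # gamma-like impulse response
    irf /= irf.sum()                    # unit area: sustained input plateaus at 1
    linear = np.convolve(stimulus, irf)[:len(stimulus)]
    return linear ** epsilon            # static compressive nonlinearity

# Subadditive summation: the summed response to a 200 ms pulse is less
# than twice the summed response to a 100 ms pulse (1 ms time steps).
brief = np.zeros(1000); brief[:100] = 1.0
longer = np.zeros(1000); longer[:200] = 1.0
print(cts_response(longer).sum() < 2 * cts_response(brief).sum())  # True
```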
bioRxiv | 2017
Jingyang Zhou; Noah Benson; Kendrick Kay; Jonathan Winawer
Neuronal responses in visual cortex show a diversity of complex temporal properties. These properties include sub-additive temporal summation, response reduction with repeated or sustained stimuli (adaptation), and slower dynamics at low stimulus contrast. Here, we hypothesize that these seemingly disparate effects can be explained by a single, shared computational mechanism. We propose a model consisting of a linear stage, followed by history-dependent gain control. The model accounts for these various temporal phenomena, tested against an unusually diverse set of measurements: intracranial electrodes in patients, fMRI, and macaque single-unit spiking. The model further enables us to uncover a systematic and rich variety of temporal encoding strategies across visual cortex: First, temporal receptive field shape differs both across and within visual field maps. Second, later visual areas show more rapid and pronounced adaptation. Our study provides a new framework to understand the transformation between visual input and dynamical cortical responses.

The visual system analyzes image properties across multiple spatial and temporal scales. Population receptive field (“pRF”) models have successfully characterized spatial representations across the human visual pathways. Here, we studied temporal representations, measuring fMRI and electrocorticographic (“ECoG”) responses in posterior, lateral, ventral, and dorsal visual areas to briefly viewed contrast patterns. We built a temporal pRF model employing linear summation and time-varying divisive normalization. Our model accurately predicts the fMRI amplitude and ECoG broadband time course, accounting for two phenomena: accumulation of stimulus information over time (summation), and response reduction with prolonged or repeated exposure (adaptation). We find systematic differences in these properties: summation periods are increasingly long and adaptation more pronounced in higher compared to earlier visual areas. We propose that several features of temporal responses (adaptation, summation, and the timescale of temporal dynamics) can be understood as resulting from a small number of canonical neuronal computations.

Significance Statement: Combining sensory inputs over time is fundamental to seeing. Due to temporal integration, we do not perceive the flicker in fluorescent lights nor the discrete sampling of movie frames; instead we see steady illumination and continuous motion. As a result of adaptation, elements of a scene that suddenly change in appearance are more salient than elements that do not. Here we investigated how the human nervous system combines visual information over time, measuring both functional MRI and intracortical EEG. We built predictive models using canonical neural computations that account for temporal integration and adaptation. The models capture systematic differences in how information is combined in different visual areas, and generalize across instruments, subjects, and stimuli.
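The proposed mechanism, a linear stage followed by history-dependent gain control, can be read as divisive normalization in which the normalization pool is a delayed (low-pass filtered) copy of the linear response. The sketch below implements only that verbal description; the filter shapes, parameter values, and names are our illustrative assumptions, not the fitted model.

```python
import numpy as np

def dn_response(stimulus, tau1=0.05, tau2=0.10, sigma=0.10, n=2.0, dt=0.001):
    """Delayed-normalization sketch: the linear response is divided by
    sigma^n plus a low-pass (delayed) copy of itself raised to the same
    exponent, so gain falls as response history accumulates."""
    t = np.arange(0, 1.0, dt)
    irf = (t / tau1) * np.exp(-t / tau1)  # linear-stage impulse response
    irf /= irf.sum()
    lp = np.exp(-t / tau2)                # exponential low-pass for the pool
    lp /= lp.sum()
    linear = np.convolve(stimulus, irf)[:len(stimulus)]
    pool = np.convolve(linear, lp)[:len(stimulus)]  # response history
    return linear**n / (sigma**n + pool**n)

# Adaptation: after a brief inter-stimulus interval the pool has not yet
# decayed, so the second of two identical pulses evokes a smaller response.
stim = np.zeros(1000)
stim[0:100] = 1.0    # first 100 ms pulse
stim[150:250] = 1.0  # second pulse after a 50 ms gap
r = dn_response(stim)
print(r[0:150].max(), r[150:300].max())  # second peak is smaller
```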
Journal of Vision | 2015
Noah Benson; Geoffrey K. Aguirre; Jonathan Winawer
Recent advances in retinotopic modeling have led to the development of retinotopic template maps that depend only on the sulcal topology of the individual subject yet predict individual retinotopy with high accuracy (Benson et al., 2014, PLoS Comput. Biol. 10:e1003538). These templates are constructed via registration of the aggregate retinotopic map of many subjects to a retinotopic model and thus provide a strong prior for the retinotopic organization of an individual subject. We ask whether empirical retinotopic data measured in individual subjects combined with the template prior have greater prediction accuracy than template maps alone, empirical maps alone, and smoothed empirical maps. We examine the quality of retinotopic predictions in 28 subjects whose V1-V3 retinotopic maps were measured out to 10° (21 subjects), 20° (6 subjects), and 48° of eccentricity (1 subject) using fMRI. Refined templates were produced by registering the empirical map to a retinotopic model, using the registration of the aggregate-based template map as the starting position for the registration. When the refined map derived from a low-quality partial scan of a subject is compared to the empirical map derived from the remaining scan, we find that the refined templates (median absolute errors: 0.67° eccentricity, 20.34° polar angle) are more accurate than aggregate-based template maps (1.34° eccentricity, 20.55° polar angle). Additionally, the ability to predict the retinotopic organization outside of the extent of the stimulus is slightly improved in the refined templates (0.44° eccentricity, 18.03° polar angle) versus the aggregate templates (1.44° eccentricity, 21.10° polar angle). Finally, the refined maps have better representation of the vertical meridians than either empirical or smoothed empirical maps and eliminate the edge biases from smoothing. Combining a template model with measured data provides a better representation of an individual's retinotopic map than either data or template alone. Meeting abstract presented at VSS 2015.
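The accuracy figures quoted above reduce to a simple metric: the median absolute difference between predicted and measured values, computed separately for eccentricity and for polar angle, where angular differences must wrap around the circle. A minimal sketch of that metric (function and argument names are ours, not from the abstract):

```python
import numpy as np

def retinotopy_errors(pred_ecc, meas_ecc, pred_ang, meas_ang):
    """Median absolute prediction errors for a retinotopic map.
    Eccentricity error is a plain absolute difference (degrees of visual
    angle); polar-angle error wraps around the circle, so 359 deg and
    1 deg are 2 deg apart, not 358."""
    ecc_err = np.abs(np.asarray(pred_ecc) - np.asarray(meas_ecc))
    ang_diff = np.abs(np.asarray(pred_ang) - np.asarray(meas_ang)) % 360.0
    ang_err = np.minimum(ang_diff, 360.0 - ang_diff)
    return np.median(ecc_err), np.median(ang_err)
```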
bioRxiv | 2018
Noah Benson; Jonathan Winawer
A major question in human neuroscience is how cortical function is mapped onto surface anatomy. Retinotopic maps, which tile about a quarter of the cortical surface, have served as a testbed to address this question. Prior work has shown that the location and retinotopic organization of posterior visual field maps, V1-V3, tend to broadly align to the cortical folding pattern. Here, we develop a new Bayesian method to accurately model retinotopic organization in individual subjects, and demonstrate that there are substantial individual differences in the mapping between function (retinotopic organization) and structure, even after co-registration of the surface anatomies. The Bayesian method combines an observation (a subject's retinotopic measurements from a small amount of fMRI time) with a prior (a retinotopic atlas produced from group-average measurements). This process, which we apply to human V1, V2, and V3, automatically draws areal boundaries, corrects discontinuities in the measured maps, and predicts validation retinotopy data more accurately than an atlas alone or, in most cases, an independent retinotopic dataset alone. The model accurately predicts map organization in the periphery, well beyond the region of visual space used for training the model. We use the Bayesian fits to characterize map properties, such as cortical magnification and the size of population receptive fields, and to assess the degree to which structure-function relationships differ between individuals. Further, we show how the method can be extended to 9 additional visual field maps. We propose that the Bayesian maps are more accurate than previous maps because they account for both regularities across subjects and individual differences in the relationship between anatomy and function.

Significance Statement: Human visual cortex is organized into multiple retinotopic maps. Characterizing the arrangement of these maps on the cortical surface is essential to many visual neuroscience studies. Typically, maps are obtained by voxel-wise analysis of fMRI data. This method, while useful, maps only a portion of the visual field and is limited by measurement noise and subjective assessment of boundaries. We developed a novel Bayesian mapping approach which combines an observation (a subject's retinotopic measurements from a small amount of fMRI time) with a prior (a learned retinotopic atlas). This process automatically draws areal boundaries, corrects discontinuities in the measured maps, and predicts validation data more accurately than an atlas alone or independent datasets alone. This new method can be used to improve the accuracy of retinotopic mapping, to analyze large fMRI datasets automatically, and to quantify differences in map properties as a function of health, development, and natural variation between individuals.
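The core idea, pulling a noisy measured map toward an atlas prior in proportion to their relative reliabilities, can be illustrated with a deliberately simplified per-vertex stand-in for the paper's surface-registration machinery. Under a Gaussian likelihood and a Gaussian prior, the maximum-a-posteriori estimate is a precision-weighted average; everything here (the per-vertex independence, the closed form, the names) is our illustrative assumption, not the paper's algorithm.

```python
import numpy as np

def map_estimate(measured, atlas, noise_var, prior_var):
    """Per-vertex MAP estimate combining a measured retinotopic value
    (e.g., eccentricity) with an atlas prior. The precision-weighted
    average lets reliable data pull the estimate away from the atlas,
    while noisy data leave it near the prior."""
    w_data = 1.0 / np.asarray(noise_var, dtype=float)
    w_prior = 1.0 / np.asarray(prior_var, dtype=float)
    return (w_data * measured + w_prior * atlas) / (w_data + w_prior)

# A vertex with an unreliable measurement (high noise variance) stays
# close to the atlas value: here the estimate is 5.6, near the atlas's 5.0.
print(map_estimate(measured=8.0, atlas=5.0, noise_var=4.0, prior_var=1.0))
```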
bioRxiv | 2018
Noah Benson; Keith Jamison; Michael Arcaro; An Vu; Matt Glasser; Tim Coalson; David C. Van Essen; Essa Yacoub; Kamil Ugurbil; Jonathan Winawer
Journal of Vision | 2017
Noah Benson; William Broderick; Heiko Müller; Jonathan Winawer