Ari Rosenberg
University of Chicago
Publications
Featured research published by Ari Rosenberg.
Proceedings of the National Academy of Sciences of the United States of America | 2015
Ari Rosenberg; Jaclyn Sky Patterson; Dora E. Angelaki
Significance: Autism is a pervasive disorder that broadly impacts perceptual, cognitive, social, and motor functioning. Across individuals, the disorder manifests with a large degree of phenotypic diversity. Here, we propose that autism symptomatology reflects alterations in neural computation. Using neural network simulations, we show that a reduction in the amount of inhibition occurring through a computation called divisive normalization can account for perceptual consequences reported in autism, as well as proposed changes in the extent to which past experience influences the interpretation of current sensory information in individuals with the disorder. A computational perspective can help bridge our understanding of the genetic/molecular basis of autism and its behavioral characteristics, providing insights into the disorder and possible courses of treatment. Autism is a neurodevelopmental disorder that manifests as a heterogeneous set of social, cognitive, motor, and perceptual symptoms. This system-wide pervasiveness suggests that, rather than narrowly impacting individual systems such as affect or vision, autism may broadly alter neural computation. Here, we propose that alterations in nonlinear, canonical computations occurring throughout the brain may underlie the behavioral characteristics of autism. One such computation, called divisive normalization, balances a neuron’s net excitation with inhibition reflecting the overall activity of the neuronal population. Through neural network simulations, we investigate how alterations in divisive normalization may give rise to autism symptomatology. Our findings show that a reduction in the amount of inhibition that occurs through divisive normalization can account for perceptual consequences of autism, consistent with the hypothesis of an increased ratio of neural excitation to inhibition (E/I) in the disorder.
These results thus establish a previously missing bridge between an E/I imbalance and behavioral data on autism. Interestingly, our findings implicate the context-dependent neuronal milieu as a key factor in autism symptomatology, with autism reflecting a less “social” neuronal population. Through a broader discussion of perceptual data, we further examine how altered divisive normalization may contribute to a wide array of the disorder’s behavioral consequences. These analyses show how a computational framework can provide insights into the neural basis of autism and facilitate the generation of falsifiable hypotheses. A computational perspective on autism may help resolve debates within the field and aid in identifying physiological pathways to target in the treatment of the disorder.
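The divisive normalization computation described in this abstract can be sketched numerically. The sketch below is illustrative only: the semisaturation constant, drive values, and the single `inhibition` scale factor are assumptions for demonstration, not parameters from the paper's simulations.

```python
import numpy as np

def divisive_normalization(drives, sigma=1.0, inhibition=1.0):
    """Divide each neuron's excitatory drive by pooled population activity.

    `inhibition` scales the normalization pool; values below 1 model the
    reduced normalization-pool inhibition hypothesized in autism
    (an illustrative parameterization, not the paper's).
    """
    drives = np.asarray(drives, dtype=float)
    pool = inhibition * drives.sum()
    return drives / (sigma + pool)

drives = [1.0, 2.0, 4.0]
typical = divisive_normalization(drives, inhibition=1.0)
reduced = divisive_normalization(drives, inhibition=0.5)
# With a weaker normalization pool, every response is larger and less
# compressed by the activity of the surrounding population.
```

This captures the qualitative E/I claim: weakening the divisive (inhibitory) term raises responses population-wide, so each neuron is less influenced by its "social" context.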
Journal of Neurophysiology | 2008
Naoum P. Issa; Ari Rosenberg; T. Robert Husson
The organization of primary visual cortex has been heavily studied for nearly 50 years, and in the last 20 years functional imaging has provided high-resolution maps of its tangential organization. Recently, however, the usefulness of maps like those of orientation and spatial frequency (SF) preference has been called into question because they do not, by themselves, predict how moving images are represented in V1. In this review, we discuss a model for cortical responses (the spatiotemporal filtering model) that specifies the types of cortical maps needed to predict distributed activity within V1. We then review the structure and interrelationships of several of these maps, including those of orientation, SF, and temporal frequency preference. Finally, we discuss tests of the model and the sufficiency of the requisite maps in predicting distributed cortical responses. Although the spatiotemporal filtering model does not account for all responses within V1, it does, with reasonable accuracy, predict population responses to a variety of complex stimuli.
Current Opinion in Neurobiology | 2014
Robert L Seilheimer; Ari Rosenberg; Dora E. Angelaki
Fundamental to our perception of a unified and stable environment is the capacity to combine information across the senses. Although this process appears seamless as an adult, the brain's ability to successfully perform multisensory cue combination takes years to develop and relies on a number of complex processes including cue integration, cue calibration, causal inference, and reference frame transformations. Further complexities exist because multisensory cue combination is implemented across time by populations of noisy neurons. In this review, we discuss recent behavioral studies exploring how the brain combines information from different sensory systems, neurophysiological studies relating behavior to neuronal activity, and a theory of neural sensory encoding that can account for many of these experimental findings.
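The cue-integration process this review discusses is standardly modeled as reliability-weighted averaging of independent Gaussian estimates. A minimal sketch under that standard assumption (the cue values and variances below are made up for illustration):

```python
import numpy as np

def integrate_cues(estimates, variances):
    """Fuse independent Gaussian cue estimates, weighting each by its
    reliability (inverse variance). The fused variance is never larger
    than the smallest single-cue variance."""
    variances = np.asarray(variances, dtype=float)
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    fused = float(np.dot(weights, estimates))
    fused_var = 1.0 / np.sum(1.0 / variances)
    return fused, fused_var

# Hypothetical heading estimates: visual cue 10 deg (variance 4),
# vestibular cue 14 deg (variance 1). The fused estimate sits closer
# to the more reliable vestibular cue.
est, var = integrate_cues([10.0, 14.0], [4.0, 1.0])
```

Here the fused estimate is 13.2 deg with variance 0.8, i.e. more precise than either cue alone, which is the behavioral signature of optimal cue combination that developmental studies test for.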
The Journal of Neuroscience | 2013
Ari Rosenberg; Noah J. Cowan; Dora E. Angelaki
An accurate representation of three-dimensional (3D) object orientation is essential for interacting with the environment. Where and how the brain visually encodes 3D object orientation remains unknown, but prior studies suggest the caudal intraparietal area (CIP) may be involved. Here, we develop rigorous analytical methods for quantifying 3D orientation tuning curves, and use these tools to study the neural coding of surface orientation. Specifically, we show that single neurons in area CIP of the rhesus macaque jointly encode the slant and tilt of a planar surface, and that across the population, the distribution of preferred slant-tilts is not statistically different from uniform. This suggests that all slant-tilt combinations are equally represented in area CIP. Furthermore, some CIP neurons are found to also represent the third rotational degree of freedom that determines the orientation of the image pattern on the planar surface. Together, the present results suggest that CIP is a critical neural locus for the encoding of all three rotational degrees of freedom specifying an object's 3D spatial orientation.
The Journal of Neuroscience | 2010
Ari Rosenberg; T. Robert Husson; Naoum P. Issa
A fundamental goal of visual neuroscience is to identify the neural pathways representing different image features. It is widely argued that the early stages of these pathways represent linear features of the visual scene and that the nonlinearities necessary to represent complex visual patterns are introduced later in cortex. We tested this by comparing the responses of subcortical and cortical neurons to interference patterns constructed by summing sinusoidal gratings. Although a linear mechanism can detect the component gratings, a nonlinear mechanism is required to detect an interference pattern resulting from their sum. Consistent with in vitro retinal ganglion cell recordings, we found that interference patterns are represented subcortically by cat LGN Y-cells, but not X-cells. Linear and nonlinear tuning properties of LGN Y-cells were then characterized and compared quantitatively with those of cortical area 18 neurons responsive to interference patterns. This comparison revealed a high degree of similarity between the two neural populations, including the following: (1) the representation of similar spatial frequencies in both their linear and nonlinear responses, (2) comparable orientation selectivity for the high spatial frequency carrier of interference patterns, and (3) the same difference in their temporal frequency selectivity for drifting gratings versus the envelope of interference patterns. The present findings demonstrate that the nonlinear subcortical Y-cell pathway represents complex visual patterns and likely underlies cortical responses to interference patterns. We suggest that linear and nonlinear mechanisms important for encoding visual scenes emerge in parallel through distinct pathways originating at the retina.
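The interference patterns used in this study are simply sums of sinusoidal gratings; the key point is that the carrier and envelope frequencies present in the sum are not component frequencies of the stimulus, so a purely linear (Fourier) mechanism sees only the two gratings, never the envelope. A minimal numerical sketch (the spatial frequencies are illustrative, not the study's):

```python
import numpy as np

# Sum two sinusoidal gratings with nearby spatial frequencies.
x = np.linspace(0.0, 1.0, 1000)
f1, f2 = 10.0, 12.0
pattern = np.sin(2 * np.pi * f1 * x) + np.sin(2 * np.pi * f2 * x)

# By the identity sin A + sin B = 2 sin((A+B)/2) cos((A-B)/2), the sum
# is an interference pattern: a high-frequency carrier at (f1 + f2)/2
# modulated by a low-frequency envelope at |f1 - f2|/2.
carrier = (f1 + f2) / 2        # 11 cycles: orientation-selective carrier
envelope = abs(f1 - f2) / 2    # 1 cycle: requires a nonlinearity to detect
reconstructed = (2 * np.cos(2 * np.pi * envelope * x)
                   * np.sin(2 * np.pi * carrier * x))
assert np.allclose(pattern, reconstructed)
```

Detecting the 1-cycle envelope demands a nonlinear (e.g. rectifying) stage, which is why envelope responses diagnose the nonlinear Y-cell pathway rather than the linear X-cell pathway.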
Visual Neuroscience | 2008
Ari Rosenberg; Pascal Wallisch; David C. Bradley
Motion transparency occurs when multiple object velocities are present within a local region of retinotopic space. Transparent signals can carry information useful in the segmentation of moving objects and in the extraction of three-dimensional structure from relative motion cues. However, the physiological substrate underlying the detection of motion transparency is poorly understood. Direction tuned neurons in area MT are suppressed by transparent stimuli, suggesting that other motion sensitive areas may be needed to represent this signal robustly. Recent neuroimaging evidence implicated two such areas in the macaque superior temporal sulcus. We studied one of these, FST, with electrophysiological methods and found that a large fraction of the neurons responded well to two opposite directions of motion and to transparent stimuli containing those same directions. A linear combination of MT-like responses qualitatively reproduces this behavior and predicts that FST neurons can be tuned for transparent motion containing specific direction and depth components. We suggest that FST plays a role in motion segmentation based on transparent signals.
The Journal of Neuroscience | 2007
Jing X. Zhang; Ari Rosenberg; Atul K. Mallik; T. Robert Husson; Naoum P. Issa
The organization of cat primary visual cortex has been well mapped using simple stimuli such as sinusoidal gratings, revealing superimposed maps of orientation and spatial frequency preferences. However, it is not yet understood how complex images are represented across these maps. In this study, we ask whether a linear filter model can explain how cortical spatial frequency domains are activated by complex images. The model assumes that the response to a stimulus at any point on the cortical surface can be predicted by its individual orientation, spatial frequency, and temporal frequency tuning curves. To test this model, we imaged the pattern of activity within cat area 17 in response to stimuli composed of multiple spatial frequencies. Consistent with the predictions of the model, the stimuli activated low and high spatial frequency domains differently: at low stimulus drift speeds, both domains were strongly activated, but activity fell off in high spatial frequency domains as drift speed increased. To determine whether the filter model quantitatively predicted the activity patterns, we measured the spatiotemporal tuning properties of the functional domains in vivo and calculated expected response amplitudes from the model. The model accurately predicted cortical response patterns for two types of complex stimuli drifting at a variety of speeds. These results suggest that the distributed activity of primary visual cortex can be predicted from cortical maps like those of orientation and SF preference generated using simple, sinusoidal stimuli, and that dynamic visual acuity is degraded at or before the level of area 17.
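The linear filter model's prediction logic can be sketched as follows: a cortical domain's response to a compound stimulus is the sum of its separable tuning-curve responses to each component, and since drift speed equals temporal frequency divided by spatial frequency, a fast-drifting high-SF component lands at a high temporal frequency where TF tuning has fallen off. All shapes, preferences, and bandwidths below are illustrative assumptions, not measurements from the study.

```python
import numpy as np

def gauss(x, mu, sigma):
    """Gaussian tuning curve on a log-frequency axis (assumed shape)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def predicted_response(components, sf_pref, tf_pref, sf_bw=0.5, tf_bw=0.5):
    """Linear-filter-model prediction: sum over stimulus components of
    the product of separable SF and TF tuning responses."""
    total = 0.0
    for sf, tf in components:
        total += (gauss(np.log2(sf), np.log2(sf_pref), sf_bw)
                  * gauss(np.log2(tf), np.log2(tf_pref), tf_bw))
    return total

# A compound stimulus with low- and high-SF components drifting at one
# speed: tf = sf * speed, so the high-SF component has a high temporal
# frequency and falls off the domain's TF tuning at fast speeds.
speed = 8.0  # illustrative drift speed
components = [(0.2, 0.2 * speed), (0.8, 0.8 * speed)]
low_sf_domain = predicted_response(components, sf_pref=0.2, tf_pref=2.0)
high_sf_domain = predicted_response(components, sf_pref=0.8, tf_pref=2.0)
```

With these assumed tunings the low-SF domain responds strongly while the high-SF domain's predicted response collapses, mirroring the reported speed-dependent fall-off in high-SF domain activity.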
Vision Research | 2008
Atul K. Mallik; T. Robert Husson; Jing X. Zhang; Ari Rosenberg; Naoum P. Issa
To determine the organization of spatial frequency (SF) preference within cat Area 17, we imaged responses to stimuli with different SFs using intrinsic signal imaging (ISI) and flavoprotein autofluorescence imaging (AFI). Previous studies have suggested that neurons cluster based on SF preference, but a recent report argued that SF maps measured with ISI were artifacts of the vascular bed. Because AFI derives from a non-hemodynamic signal, it is less contaminated by vasculature. The two independent imaging methods produced similar SF preference maps in the same animals, suggesting that the patchy organization of SF preference is a genuine feature of Area 17.
Proceedings of the National Academy of Sciences of the United States of America | 2014
Ari Rosenberg; Dora E. Angelaki
Significance: The world is 3D, but our eyes sense 2D projections. Constructing 3D spatial representations is consequently a complex problem the brain must solve for us to interact with the environment. Robust 3D representations can theoretically be created by combining distinct visual signals according to their reliabilities, which depend on factors such as an object’s orientation and distance. Here, we show that reliability constrains the integration of texture and disparity cues for 3D orientation in macaque parietal cortex. Consistent with human perceptual studies, the contribution of texture cues is found to increase as the object’s slant (i.e., depth variation) increases. This finding suggests that the parietal cortex is capable of combining multiple visual signals to perform statistical inference about the 3D world. Creating accurate 3D representations of the world from 2D retinal images is a fundamental task for the visual system. However, the reliability of different 3D visual signals depends inherently on viewing geometry, such as how much an object is slanted in depth. Human perceptual studies have correspondingly shown that texture and binocular disparity cues for object orientation are combined according to their slant-dependent reliabilities. Where and how this cue combination occurs in the brain is currently unknown. Here, we search for neural correlates of this property in the macaque caudal intraparietal area (CIP) by measuring slant tuning curves using mixed-cue (texture + disparity) and cue-isolated (texture or disparity) planar stimuli. We find that texture cues contribute more to the mixed-cue responses of CIP neurons that prefer larger slants, consistent with theoretical and psychophysical results showing that the reliability of texture relative to disparity cues increases with slant angle.
By analyzing responses to binocularly viewed texture stimuli with conflicting texture and disparity information, some cells that are sensitive to both cues when presented in isolation are found to disregard one of the cues during cue conflict. Additionally, the similarity between texture and mixed-cue responses is found to be greater when this cue conflict is eliminated by presenting the texture stimuli monocularly. The present findings demonstrate reliability-dependent contributions of visual orientation cues at the level of the CIP, thus revealing a neural correlate of this property of human visual perception.
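The slant-dependent weighting described here follows the standard reliability-weighted cue-combination model: as slant increases, texture variance drops, so the texture weight rises. A minimal sketch under that standard model (all slant values and variances below are hypothetical illustrations, not data from the study):

```python
def combine_orientation_cues(texture_slant, disparity_slant,
                             texture_var, disparity_var):
    """Reliability-weighted combination of texture and disparity slant
    estimates. Returns the combined estimate and the texture weight
    (the fraction of the estimate contributed by the texture cue)."""
    w_texture = (1.0 / texture_var) / (1.0 / texture_var + 1.0 / disparity_var)
    combined = w_texture * texture_slant + (1.0 - w_texture) * disparity_slant
    return combined, w_texture

# At a small slant, texture is a poor cue (high assumed variance);
# at a large slant it matches disparity in reliability.
_, w_small_slant = combine_orientation_cues(15.0, 15.0,
                                            texture_var=9.0, disparity_var=1.0)
_, w_large_slant = combine_orientation_cues(60.0, 60.0,
                                            texture_var=1.0, disparity_var=1.0)
# The texture weight grows with slant, which is the property the study
# looks for in the mixed-cue responses of CIP neurons.
```

Under these assumed variances the texture weight rises from 0.1 at the small slant to 0.5 at the large slant, paralleling the finding that texture contributes more to the mixed-cue responses of neurons preferring larger slants.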
The Journal of Neuroscience | 2013
C. J. Dakin; L. C. Elmore; Ari Rosenberg
To create a stable representation of the visual scene, the brain relies on information provided by the vestibular and proprioceptive systems. Two well studied examples of this are the vestibular-ocular reflex, which counter-rotates the eyes during head rotation to reduce retinal image motion, and