Nicolas P. Cottaris
University of Pennsylvania
Publications
Featured research published by Nicolas P. Cottaris.
Nature | 1998
Nicolas P. Cottaris; Russell L. De Valois
The ability to distinguish colour from intensity variations is a difficult computational problem for the visual system because each of the three cone photoreceptor types absorbs all wavelengths of light, although their peak sensitivities are at relatively short (S cones), medium (M cones), or long (L cones) wavelengths. The first stage in colour processing is the comparison of the outputs of different cone types by spectrally opponent neurons in the retina and in the lateral geniculate nucleus. Some neurons receive opponent inputs from L and M cones, whereas others receive input from S cones opposed by combined signals from L and M cones. Here we report how the outputs of the L/M- and S-opponent geniculate cell types are combined in time at the next stage of colour processing, in the macaque primary visual cortex (V1). Some V1 neurons respond to a single chromatic region, with either a short (68–95 ms) or a longer (96–135 ms) latency, whereas others respond to two chromatic regions with a difference in latency of 20–30 ms. Across all types, short-latency responses are mostly evoked by L/M-opponent inputs, whereas longer-latency responses are mostly evoked by S-opponent inputs. Furthermore, neurons with late S-cone inputs exhibit dynamic changes in the sharpness of their chromatic tuning over time. We propose that the sparse S-opponent signal in the lateral geniculate nucleus is amplified in area V1, possibly through recurrent excitatory networks. This results in a delayed, sluggish cortical S-cone signal, which is then integrated with L/M-opponent signals to rotate the lateral geniculate nucleus chromatic axes.
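The proposed mechanism lends itself to a simple numerical illustration. In the Python sketch below (the weights, latencies, and time constants are assumptions for illustration, not the paper's fitted model), adding a delayed, slowly ramping S-opponent signal to an L/M-opponent signal rotates a cell's preferred direction in the chromatic plane over time:

import numpy as np

lm_axis = np.array([1.0, 0.0])                 # L/M-opponent direction
s_axis = np.array([0.0, 1.0])                  # S-opponent direction

for t_ms in (70, 90, 110, 130):
    # S signal arrives late (assumed ~95 ms) and ramps up sluggishly.
    s_weight = 0.0 if t_ms < 95 else 0.8 * (1 - np.exp(-(t_ms - 95) / 20))
    pref = lm_axis + s_weight * s_axis
    angle = np.degrees(np.arctan2(pref[1], pref[0]))
    print(f"t = {t_ms:3d} ms: preferred chromatic direction = {angle:5.1f} deg")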
Vision Research | 2000
Russell L. De Valois; Nicolas P. Cottaris; Luke E. Mahon; Sylvia D. Elfar; J. Anthony Wilson
The spatio-temporal receptive fields (RFs) of cells in the macaque monkey lateral geniculate nucleus (LGN) and striate cortex (V1) have been examined, and two distinct sub-populations of non-directional V1 cells have been found: those with a slow, largely monophasic temporal RF, and those with a fast, very biphasic temporal response. These two sub-populations are in temporal quadrature, the fast biphasic cells crossing over from one response phase to the reverse just as the slow monophasic cells reach their peak response. The two sub-populations also differ in the spatial phases of their RFs. A principal components analysis of the spatio-temporal RFs of directional V1 cells shows that their RFs could be constructed by a linear combination of two components, one of which has the temporal and spatial characteristics of a fast biphasic cell, and the other the temporal and spatial characteristics of a slow monophasic cell. Magnocellular LGN cells are fast and biphasic and lead the fast biphasic V1 sub-population by 7 ms; parvocellular LGN cells are slow and largely monophasic and lead the slow monophasic V1 sub-population by 12 ms. We suggest that directional V1 cells get inputs in the approximate temporal and spatial quadrature required for motion detection by combining signals from the two non-directional cortical sub-populations that have been identified, and that these sub-populations have their origins in magno and parvo LGN cells, respectively.
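The proposed construction can be illustrated with a toy calculation. In the Python sketch below (filter shapes, spatial frequency, and time constants are assumed, not taken from the paper), summing two space-time separable, non-directional components whose spatial and temporal profiles are in approximate quadrature yields a space-time oriented, direction-selective receptive field:

import numpy as np

x = np.linspace(-2, 2, 128)                    # space (deg)
t = np.linspace(0, 0.2, 128)                   # time (s)

# Assumed profiles: even/odd spatial Gabors (spatial quadrature) and
# slow-monophasic / fast-biphasic temporal kernels (temporal quadrature).
even_x = np.exp(-x**2) * np.cos(2 * np.pi * 1.5 * x)
odd_x = np.exp(-x**2) * np.sin(2 * np.pi * 1.5 * x)
slow_t = (t / 0.05)**3 * np.exp(-t / 0.02)     # slow, largely monophasic
fast_t = np.gradient(slow_t, t)                # roughly biphasic
slow_t /= slow_t.max()
fast_t /= np.abs(fast_t).max()

# Sum of the two non-directional components: a space-time oriented RF.
rf = np.outer(slow_t, even_x) + np.outer(fast_t, odd_x)

def response(rf, direction):
    """Magnitude of the linear response to a drifting grating."""
    T, X = np.meshgrid(t, x, indexing="ij")
    grating = np.cos(2 * np.pi * (1.5 * X - direction * 8.0 * T))
    return np.abs((rf * grating).sum())

r_right, r_left = response(rf, +1), response(rf, -1)
print(f"direction selectivity (preferred/null response): "
      f"{max(r_right, r_left) / min(r_right, r_left):.1f}")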
Journal of Neural Engineering | 2005
Nicolas P. Cottaris; Sylvia D. Elfar
We considered the problem of determining how the retinal network may interact with electrical epiretinal stimulation in shaping the spike trains of ON and OFF ganglion cells, and thus the synaptic input to first-stage cortical neurons. To do so, we developed a biophysical model of the retinal network with nine stacked neuronal mosaics. Here, we describe the model's behavior under (i) electrical stimulation of a retina with complete cone photoreceptor loss, but an otherwise intact circuitry, and (ii) electrical stimulation of a fully functional retina. Our results show that electrical stimulation alone results in indiscriminate excitation of ON and OFF ganglion cells and a patchy input to the cortex, with islands of excitation among regions of no net excitation. Activation of the retinal network biases the excitation of ON relative to OFF ganglion cells and, in addition, gradually interpolates and focuses the initial, patchy synaptic input to the cortex. As the stimulation level increases, the cortical input spreads beyond the area occupied by the electrode contact. Further, at very strong stimulation levels, ganglion cell responses begin to saturate, resulting in a significant distortion of the spatial profile of the cortical input. These findings hold in both the normal and the degenerated retina simulations, but the normal retina exhibits a tighter spatiotemporal response. The complex spatiotemporal dynamics of the prosthetic input to the cortex that are revealed by our model should be addressed by prosthetic image encoders and by studies that simulate prosthetic vision.
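One of these effects, the distortion of the spatial profile at strong stimulation, can be conveyed with a far simpler model than the paper's nine-mosaic network. In this hypothetical Python sketch (spread width, rate function, and amplitudes are all assumed), a Gaussian current spread is passed through a saturating rate function, and the center-to-flank response ratio collapses toward one as stimulation amplitude grows:

import numpy as np

x = np.linspace(-1, 1, 201)                    # retinal position (mm)
spread = np.exp(-x**2 / (2 * 0.2**2))          # electrode current spread

def ganglion_rate(drive, r_max=200.0, half=0.5):
    """Hypothetical saturating (Naka-Rushton style) rate function."""
    return r_max * drive / (drive + half)

for amp in (0.2, 1.0, 5.0):
    r = ganglion_rate(amp * spread)
    center = r[np.argmin(np.abs(x))]
    flank = r[np.argmin(np.abs(x - 0.2))]      # one spread-sigma away
    print(f"amp={amp}: center/flank response ratio = {center / flank:.2f}")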
Journal of Vision | 2014
Benjamin S. Heasly; Nicolas P. Cottaris; Daniel P. Lichtman; Bei Xiao; David H. Brainard
RenderToolbox3 provides MATLAB utilities and prescribes a workflow that should be useful to researchers who want to employ graphics in the study of vision and perhaps in other endeavors as well. In particular, RenderToolbox3 facilitates rendering scene families in which various scene attributes and renderer behaviors are manipulated parametrically, enables spectral specification of object reflectance and illuminant spectra, enables the use of physically based material specifications, helps validate renderer output, and converts renderer output to physical units of radiance. This paper describes the design and functionality of the toolbox and discusses several examples that demonstrate its use. We have designed RenderToolbox3 to be portable across computer hardware and operating systems and to be free and open source (except for MATLAB itself). RenderToolbox3 is available at https://github.com/DavidBrainard/RenderToolbox3.
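As a rough illustration of the spectral computation involved (the Python sketch below is not RenderToolbox3 code, and all spectra are invented), reflected radiance at each wavelength is the product of illuminant spectral power and surface reflectance, and a scene family is produced by varying one attribute parametrically:

import numpy as np

wls = np.arange(400, 701, 10)                  # wavelength samples (nm)
illuminant = 100.0 * (wls / 560.0)**-4         # made-up bluish illuminant SPD
reflectance = 0.2 + 0.6 / (1 + np.exp(-(wls - 600) / 20))  # reddish surface

# Lambertian surface: reflected radiance = illuminant * reflectance / pi,
# wavelength by wavelength, in physical units.
radiance = illuminant * reflectance / np.pi

# A "scene family": vary one attribute (here illuminant level) parametrically.
family = [scale * radiance for scale in (0.5, 1.0, 2.0)]
print("radiance at 550 nm across the family:",
      [float(r[wls == 550][0]) for r in family])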
bioRxiv | 2018
William S. Tuten; Robert F. Cooper; Pavan Tiruveedhula; Alfredo Dubra; Austin Roorda; Nicolas P. Cottaris; David H. Brainard; Jessica I. W. Morgan
Psychophysical inferences about the neural mechanisms supporting spatial vision can be undermined by uncertainties introduced by optical aberrations and fixational eye movements, particularly in the fovea, where the neuronal grain of the visual system is fine. We examined the effect of these pre-neural factors on photopic spatial summation in the human fovea using a custom adaptive optics scanning light ophthalmoscope that provided control over optical aberrations and retinal stimulus motion. Consistent with previous results, Ricco's area of complete summation encompassed multiple photoreceptors when measured with ordinary amounts of ocular aberrations and retinal stimulus motion. When both factors were minimized experimentally, summation areas were essentially unchanged, suggesting that foveal spatial summation is limited by post-receptoral neural pooling. We compared our behavioral data to predictions generated with a physiologically inspired front-end model of the visual system, and were able to capture the shape of the summation curves obtained with and without pre-retinal factors using a single post-receptoral summing filter of fixed spatial extent. Given our data and modeling, neurons in the magnocellular visual pathway, such as parasol ganglion cells, provide a candidate neural correlate of Ricco's area in the central fovea.
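The logic of a fixed post-receptoral summing filter can be sketched in a few lines. In the hypothetical Python example below (pooling extent and detection rule are assumptions, not the paper's fitted model), threshold times area stays constant for spots smaller than the pooling extent, which is Ricco's law, and rises beyond it:

import numpy as np

pool_sigma = 0.05                              # deg; assumed pooling extent
r = np.linspace(0, 0.5, 5001)
dr = r[1] - r[0]
weight = np.exp(-r**2 / (2 * pool_sigma**2))   # fixed post-receptoral filter

for spot_radius in (0.01, 0.02, 0.05, 0.1, 0.2):
    spot = (r <= spot_radius).astype(float)
    pooled = (spot * weight * 2 * np.pi * r * dr).sum()  # radial integration
    threshold = 1.0 / pooled                   # criterion pooled response
    area = np.pi * spot_radius**2
    print(f"radius {spot_radius:.2f} deg: threshold x area = {threshold * area:.2f}")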
Interface Focus | 2018
David H. Brainard; Nicolas P. Cottaris; Ana Radonjić
Perceived object colour and material help us to select and interact with objects. Because there is no simple mapping between the pattern of an objects image on the retina and its physical reflectance, our perceptions of colour and material are the result of sophisticated visual computations. A long-standing goal in vision science is to describe how these computations work, particularly as they act to stabilize perceived colour and material against variation in scene factors extrinsic to object surface properties, such as the illumination. If we take seriously the notion that perceived colour and material are useful because they help guide behaviour in natural tasks, then we need experiments that measure and models that describe how they are used in such tasks. To this end, we have developed selection-based methods and accompanying perceptual models for studying perceived object colour and material. This focused review highlights key aspects of our work. It includes a discussion of future directions and challenges, as well as an outline of a computational observer model that incorporates early, known, stages of visual processing and that clarifies how early vision shapes selection performance.
Electronic Imaging | 2017
Haomiao Jiang; Nicolas P. Cottaris; James Golden; David H. Brainard; Joyce E. Farrell; Brian A. Wandell
Humans resolve the spatial alignment between two visual stimuli at a resolution that is substantially finer than the spacing between the foveal cones. In this paper, we analyze the factors that limit the information at the cone photoreceptors that is available to make these acuity judgments (Vernier acuity). We use the open-source software ISETBio to quantify the stimulus and encoding stages in the front end of the human visual system, starting with a description of the stimulus spectral radiance and a computational model that includes the physiological optics, inert ocular pigments, eye movements, photoreceptor sampling, and absorptions. The simulations suggest that the visual system extracts the information available within the spatiotemporal pattern of photoreceptor absorptions within a small spatial (0.12 deg) and temporal (200 ms) regime. At typical display luminance levels, the variance arising from Poisson absorptions and small eye movements (tremors and microsaccades) both appear to be critical limiting factors for Vernier acuity.
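A toy calculation conveys why Poisson absorption noise is a natural limiting factor. In this Python sketch (cone spacing, optical blur, and absorption counts are assumed round numbers, not ISETBio outputs), a small offset of a line stimulus shifts the expected absorptions across a row of cones, and an ideal observer's d' follows from the mean shifts relative to the Poisson standard deviations:

import numpy as np

cone_pos = np.arange(-5, 6) * 0.5              # cone centers (arcmin), assumed
blur = 0.8                                     # optical blur sigma (arcmin), assumed

def mean_absorptions(offset, peak=1000.0):
    """Expected absorptions for a thin line centered at `offset` arcmin."""
    return peak * np.exp(-(cone_pos - offset)**2 / (2 * blur**2))

mu0 = mean_absorptions(0.0)
mu1 = mean_absorptions(0.1)                    # a 6-arcsec offset
# Poisson ideal observer: d'^2 sums (mean shift)^2 / variance over cones.
dprime = np.sqrt(((mu1 - mu0)**2 / ((mu0 + mu1) / 2)).sum())
print(f"ideal-observer d' for a 6-arcsec offset: {dprime:.1f}")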
bioRxiv | 2018
Nicolas P. Cottaris; Haomiao Jiang; Xiaomao Ding; Brian A. Wandell; David H. Brainard
We present a computational observer model of the human spatial contrast sensitivity function (CSF) based on the Image Systems Engineering Tools for Biology (ISETBio) simulation framework. We demonstrate that ISETBio-derived CSFs agree well with CSFs derived using traditional ideal observer approaches, when the mosaic, optics, and inference engine are matched. Further simulations extend earlier work by considering more realistic cone mosaics, more recent measurements of human physiological optics, and the effect of varying the inference engine used to link visual representations to psychophysical performance. Relative to earlier calculations, our simulations show that the spatial structure of realistic cone mosaics reduces upper bounds on performance at low spatial frequencies, whereas realistic optics derived from modern wavefront measurements lead to increased upper bounds at high spatial frequencies. Finally, we demonstrate that the type of inference engine used has a substantial effect on the absolute level of predicted performance. Indeed, the performance gap between an ideal observer with exact knowledge of the relevant signals and human observers is greatly reduced when the inference engine has to learn aspects of the visual task. ISETBio-derived estimates of stimulus representations at different stages along the visual pathway provide a powerful tool for computing the limits of human performance.
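The effect of the inference engine can be illustrated schematically. The Python sketch below (a generic Gaussian detection task, not the ISETBio pipeline; signal, contrast, and trial counts are assumptions) compares an observer with exact knowledge of the signal template against one that must estimate the template from limited training trials; the learned observer's accuracy is systematically lower:

import numpy as np

rng = np.random.default_rng(1)
n_pix, contrast, noise_sd = 256, 0.15, 1.0
template = np.sin(np.linspace(0, 8 * np.pi, n_pix))   # assumed signal profile

def trials(n):
    """Noisy responses; on average half the trials contain the signal."""
    labels = rng.integers(0, 2, n)
    data = rng.normal(0, noise_sd, (n, n_pix)) + labels[:, None] * contrast * template
    return data, labels

def accuracy(w, data, labels):
    """Linear classification against the matched-filter criterion."""
    criterion = (w @ (contrast * template)) / 2
    return np.mean((data @ w > criterion) == labels)

test_x, test_y = trials(5000)
train_x, train_y = trials(100)                        # limited learning data
learned = train_x[train_y == 1].mean(0) - train_x[train_y == 0].mean(0)

print("ideal template:  ", accuracy(contrast * template, test_x, test_y))
print("learned template:", accuracy(learned, test_x, test_y))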
bioRxiv | 2018
Xiaomao Ding; Ana Radonjić; Nicolas P. Cottaris; Haomiao Jiang; Brian A. Wandell; David H. Brainard
The spectral properties of the ambient illumination provide useful information about the time of day and the weather. We study the perceptual representation of illumination by analyzing measurements of how well people discriminate between illuminations across scene configurations. More specifically, we compare human performance to a computational-observer analysis that evaluates the information available in the isomerizations of the cones in a model human photoreceptor mosaic. Some patterns of human performance are predicted by the computational observer; other aspects are not. The analysis clarifies which aspects of performance require additional explanation in terms of the action of visual mechanisms beyond the isomerization of light by the cones.
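The core of such a computational-observer analysis can be sketched compactly. In the hypothetical Python example below (cone counts and isomerization rates are invented for illustration, not taken from the paper), discriminability between two illuminants is computed from Poisson-distributed isomerizations pooled over a patch of mosaic:

import numpy as np

# Invented mean isomerizations per cone per trial under two illuminants,
# for L, M and S cones, with assumed cone counts in the mosaic patch.
counts = np.array([600, 300, 60])              # L:M:S, assumed
mu_a = np.array([120.0, 100.0, 40.0])          # illuminant A
mu_b = np.array([118.0, 101.5, 44.0])          # slightly bluer illuminant B

# Poisson ideal-observer sensitivity, summed over independent cones.
dprime = np.sqrt((counts * (mu_b - mu_a)**2 / ((mu_a + mu_b) / 2)).sum())
print(f"d' for discriminating the two illuminants: {dprime:.1f}")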
bioRxiv | 2018
James Golden; Cordelia Erickson-Davis; Nicolas P. Cottaris; Nikhil Parthasarathy; Fred Rieke; David H. Brainard; Brian A. Wandell; E. J. Chichilnisky
The nature of artificial vision with a retinal prosthesis, and the degree to which the brain can adapt to the unnatural input from such a device, are poorly understood. Therefore, the development of current and future devices may be aided by theory and simulations that help to infer and understand what prosthesis patients see. A biologically informed, extensible computational framework is presented here to predict visual perception and the potential effect of learning with a subretinal prosthesis. The framework relies on optimal linear reconstruction of the stimulus from retinal responses to infer the visual information available to the patient. A simulation of the physiological optics of the eye and light responses of the major retinal neurons was used to calculate the optimal linear transformation for reconstructing natural images from retinal activity. The result was then used to reconstruct the visual stimulus during the artificial activation expected from a subretinal prosthesis in a degenerated retina, as a proxy for inferred visual perception. Several simple observations reveal the potential utility of such a simulation framework. The inferred perception obtained with prosthesis activation was substantially degraded compared to the inferred perception obtained with normal retinal responses, as expected given the limited resolution and lack of cell type specificity of the prosthesis. Consistent with clinical findings and the importance of cell type specificity, reconstruction using only ON cells, and not OFF cells, was substantially more accurate. Finally, when reconstruction was re-optimized for prosthesis stimulation, simulating the greatest potential for learning by the patient, the accuracy of inferred perception was much closer to that of healthy vision. The reconstruction approach thus provides a more complete method for exploring the potential for treating blindness with retinal prostheses than has been available previously. It may also be useful for interpreting patient data in clinical trials, and for improving prosthesis design.
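The reconstruction step itself reduces to regularized linear regression. The Python sketch below (random encoder, Gaussian stimuli, and all dimensions are assumptions, not the paper's retinal simulation) learns a matrix mapping retinal responses back to stimulus pixels and applies it to a held-out stimulus:

import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_cells, n_train = 100, 60, 5000

stimuli = rng.normal(0, 1, (n_train, n_pixels))           # training "images"
encoder = rng.normal(0, 1, (n_cells, n_pixels)) / np.sqrt(n_pixels)
responses = stimuli @ encoder.T + 0.5 * rng.normal(0, 1, (n_train, n_cells))

# W minimizes ||stimuli - responses @ W||^2 + lam * ||W||^2 (ridge regression).
lam = 1.0
W = np.linalg.solve(responses.T @ responses + lam * np.eye(n_cells),
                    responses.T @ stimuli)

# Reconstruct a held-out stimulus from its (noisy) retinal responses.
test = rng.normal(0, 1, n_pixels)
recon = (test @ encoder.T + 0.5 * rng.normal(0, 1, n_cells)) @ W
print(f"pixelwise correlation of reconstruction: {np.corrcoef(test, recon)[0, 1]:.2f}")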