Publication


Featured research published by Bruno A. Olshausen.


Vision Research | 1997

Sparse Coding with an Overcomplete Basis Set: A Strategy Employed by V1?

Bruno A. Olshausen; David J. Field

The spatial receptive fields of simple cells in mammalian striate cortex have been reasonably well described physiologically and can be characterized as being localized, oriented, and bandpass, comparable with the basis functions of wavelet transforms. Previously, we have shown that these receptive field properties may be accounted for in terms of a strategy for producing a sparse distribution of output activity in response to natural images. Here, in addition to describing this work in a more expansive fashion, we examine the neurobiological implications of sparse coding. Of particular interest is the case when the code is overcomplete--i.e., when the number of code elements is greater than the effective dimensionality of the input space. Because the basis functions are non-orthogonal and not linearly independent of each other, sparsifying the code will recruit only those basis functions necessary for representing a given input, and so the input-output function will deviate from being purely linear. These deviations from linearity provide a potential explanation for the weak forms of non-linearity observed in the response properties of cortical simple cells, and they further make predictions about the expected interactions among units in response to naturalistic stimuli.
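
The sparse coding idea above can be summarized in a few lines of code. The sketch below infers coefficients for a single patch with iterative soft-thresholding, a modern stand-in for the paper's original inference procedure; the random dictionary and all parameter values are illustrative assumptions, not the authors' setup.

import numpy as np

def sparse_code(patch, Phi, lam=0.1, lr=0.05, n_steps=200):
    """Infer sparse coefficients a with patch ~= Phi @ a.

    Iterative soft-thresholding: a gradient step on the reconstruction
    error followed by a shrinkage step that drives small coefficients to
    exactly zero. Illustrative only; not the paper's original method.
    """
    a = np.zeros(Phi.shape[1])
    for _ in range(n_steps):
        a = a + lr * (Phi.T @ (patch - Phi @ a))                 # reduce reconstruction error
        a = np.sign(a) * np.maximum(np.abs(a) - lr * lam, 0.0)   # sparsify
    return a

# Example: a 64-pixel patch coded with a 2x overcomplete random dictionary.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 128))
Phi /= np.linalg.norm(Phi, axis=0)        # unit-norm basis functions
a = sparse_code(rng.standard_normal(64), Phi)
print("active coefficients:", np.count_nonzero(a))

Because the dictionary is overcomplete, only a subset of basis functions is recruited for any given patch, which is the deviation from pure linearity the abstract refers to.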


Network: Computation in Neural Systems | 1996

Natural image statistics and efficient coding

Bruno A. Olshausen; David J. Field

Natural images contain characteristic statistical regularities that set them apart from purely random images. Understanding what these regularities are can enable natural images to be coded more efficiently. In this paper, we describe some of the forms of structure that are contained in natural images, and we show how these are related to the response properties of neurons at early stages of the visual system. Many of the important forms of structure require higher-order (i.e. more than linear, pairwise) statistics to characterize, which makes models based on linear Hebbian learning, or principal components analysis, inappropriate for finding efficient codes for natural images. We suggest that a good objective for an efficient coding of natural scenes is to maximize the sparseness of the representation, and we show that a network that learns sparse codes of natural scenes succeeds in developing localized, oriented, bandpass receptive fields similar to those in the mammalian striate cortex.
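
As a rough illustration of the sparseness objective described here, the toy loop below alternates a crude one-step sparse inference with a residual-driven update of the basis functions; the inference step, step sizes, and random toy data are assumptions, not the authors' published procedure.

import numpy as np

def learn_sparse_basis(patches, n_basis=128, lam=0.1, eta=0.1, n_iters=2000):
    """Toy sparse-coding dictionary learning on image patches.

    patches: array of shape (n_patches, patch_dim), one patch per row.
    Alternates a crude sparse inference (projection + soft threshold)
    with a Hebbian-like update of the basis Phi toward the residual.
    """
    rng = np.random.default_rng(0)
    dim = patches.shape[1]
    Phi = rng.standard_normal((dim, n_basis))
    Phi /= np.linalg.norm(Phi, axis=0)
    for _ in range(n_iters):
        patch = patches[rng.integers(len(patches))]
        a = Phi.T @ patch                                       # feedforward projection
        a = np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)       # sparsify coefficients
        residual = patch - Phi @ a
        Phi += eta * np.outer(residual, a)                      # learn from what is unexplained
        Phi /= np.linalg.norm(Phi, axis=0) + 1e-12              # keep basis functions unit norm
    return Phi

# With whitened natural-image patches instead of this noise, the learned columns of Phi
# come out localized, oriented, and bandpass, as described in the abstract.
patches = np.random.default_rng(1).standard_normal((1000, 64))
Phi = learn_sparse_basis(patches, n_iters=500)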


Proceedings of the National Academy of Sciences of the United States of America | 2002

Shape perception reduces activity in human primary visual cortex.

Scott O. Murray; Daniel Kersten; Bruno A. Olshausen; Paul R. Schrater; David L. Woods

Visual perception involves the grouping of individual elements into coherent patterns that reduce the descriptive complexity of a visual scene. The physiological basis of this perceptual simplification remains poorly understood. We used functional MRI to measure activity in a higher object processing area, the lateral occipital complex, and in primary visual cortex in response to visual elements that were either grouped into objects or randomly arranged. We observed significant activity increases in the lateral occipital complex and concurrent reductions of activity in primary visual cortex when elements formed coherent shapes, suggesting that activity in early visual areas is reduced as a result of grouping processes performed in higher areas. These findings are consistent with predictive coding models of vision that postulate that inferences of high-level areas are subtracted from incoming sensory information in lower areas through cortical feedback.
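
The predictive coding interpretation in the last sentence can be captured by a one-line relation, sketched below with made-up numbers: the lower area carries the residual between its input and the feedback prediction, so a successful high-level grouping reduces low-level activity.

import numpy as np

def lower_area_residual(sensory_input, feedback_prediction):
    """Activity remaining in a lower area after higher-level feedback is
    subtracted, per a generic predictive coding scheme (illustrative,
    not a model fit to the fMRI data above)."""
    return sensory_input - feedback_prediction

stimulus = np.ones(10)
print(np.abs(lower_area_residual(stimulus, np.zeros(10))).sum())  # random arrangement: nothing predicted
print(np.abs(lower_area_residual(stimulus, np.ones(10))).sum())   # coherent shape: well predicted, less activity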


The Journal of Neuroscience | 2005

Do We Know What the Early Visual System Does?

Matteo Carandini; Jonathan B. Demb; Valerio Mante; David J. Tolhurst; Yang Dan; Bruno A. Olshausen; Jack L. Gallant; Nicole C. Rust

We can claim that we know what the visual system does once we can predict neural responses to arbitrary stimuli, including those seen in nature. In the early visual system, models based on one or more linear receptive fields hold promise to achieve this goal as long as the models include nonlinear mechanisms that control responsiveness, based on stimulus context and history, and take into account the nonlinearity of spike generation. These linear and nonlinear mechanisms might be the only essential determinants of the response, or alternatively, there may be additional fundamental determinants yet to be identified. Research is progressing with the goals of defining a single “standard model” for each stage of the visual pathway and testing the predictive power of these models on the responses to movies of natural scenes. These predictive models represent, at a given stage of the visual pathway, a compact description of visual computation. They would be an invaluable guide for understanding the underlying biophysical and anatomical mechanisms and relating neural responses to visual perception.
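
A minimal sketch of the model class the abstract describes: a linear receptive field whose output is modulated by a context-dependent gain term and passed through a rectifying output nonlinearity. The divisive-normalization form and all parameters are assumptions for illustration, not the "standard model" itself.

import numpy as np

def ln_response(stimulus, receptive_field, gain_pool, sigma=1.0):
    """Linear receptive field -> divisive gain control -> rectification.

    The pooled term stands in for the nonlinear mechanisms that adjust
    responsiveness based on stimulus context; the rectification stands in
    for the nonlinearity of spike generation.
    """
    drive = receptive_field @ stimulus                     # linear filtering stage
    gain = sigma + np.linalg.norm(gain_pool @ stimulus)    # contextual gain signal (assumed form)
    return np.maximum(drive / gain, 0.0)                   # rectified output

rng = np.random.default_rng(1)
rf = rng.standard_normal(256)                              # one linear receptive field over a 16x16 stimulus
pool = rng.standard_normal((8, 256))                       # pooled filters driving the gain term
print(ln_response(rng.standard_normal(256), rf, pool))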


Neural Computation | 2005

How Close Are We to Understanding V1?

Bruno A. Olshausen; David J. Field

A wide variety of papers have reviewed what is known about the function of primary visual cortex. In this review, rather than stating what is known, we attempt to estimate how much is still unknown about V1 function. In particular, we identify five problems with the current view of V1 that stem largely from experimental and theoretical biases, in addition to the contributions of nonlinearities in the cortex that are not well understood. Our purpose is to open the door to new theories, a number of which we describe, along with some proposals for testing them.


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 1999

Probabilistic framework for the adaptation and comparison of image codes

Michael S. Lewicki; Bruno A. Olshausen

We apply a Bayesian method for inferring an optimal basis to the problem of finding efficient image codes for natural scenes. The basis functions learned by the algorithm are oriented and localized in both space and frequency, bearing a resemblance to two-dimensional Gabor functions, and increasing the number of basis functions results in a greater sampling density in position, orientation, and scale. These properties also resemble the spatial receptive fields of neurons in the primary visual cortex of mammals, suggesting that the receptive-field structure of these neurons can be accounted for by a general efficient coding principle. The probabilistic framework provides a method for comparing the coding efficiency of different bases objectively by calculating their probability given the observed data or by measuring the entropy of the basis function coefficients. The learned bases are shown to have better coding efficiency than traditional Fourier and wavelet bases. This framework also provides a Bayesian solution to the problems of image denoising and filling in of missing pixels. We demonstrate that the results obtained by applying the learned bases to these problems are improved over those obtained with traditional techniques.
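
One of the comparisons the framework supports, sketched crudely below: score different bases on the same patches by the entropy of their coefficients, with lower entropy indicating a more efficient code. The least-squares coefficients and histogram entropy estimator are simplifications of the paper's Bayesian treatment.

import numpy as np

def coefficient_entropy(patches, basis, n_bins=64):
    """Rough estimate of the entropy (bits) of a basis's coefficients on a
    set of patches: least-squares coefficients binned into a histogram.
    A crude stand-in for the paper's probabilistic comparison."""
    coeffs = np.linalg.lstsq(basis, patches.T, rcond=None)[0].ravel()
    counts, _ = np.histogram(coeffs, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Lower values suggest a more efficient code for these patches under this crude estimate.
rng = np.random.default_rng(0)
patches = rng.laplace(size=(500, 64))                      # heavy-tailed toy stand-in for image data
print(coefficient_entropy(patches, np.eye(64)))
print(coefficient_entropy(patches, rng.standard_normal((64, 64))))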


Neural Computation | 2008

Sparse coding via thresholding and local competition in neural circuits

Christopher J. Rozell; Don H. Johnson; Richard G. Baraniuk; Bruno A. Olshausen

While evidence indicates that neural systems may be employing sparse approximations to represent sensed stimuli, the mechanisms underlying this ability are not understood. We describe a locally competitive algorithm (LCA) that solves a family of sparse coding problems, each minimizing a weighted combination of mean-squared error and a coefficient cost function. LCAs are designed to be implemented in a dynamical system composed of many neuron-like elements operating in parallel. These algorithms use thresholding functions to induce local (usually one-way) inhibitory competitions between nodes to produce sparse representations. LCAs produce coefficients with sparsity levels comparable to the most popular centralized sparse coding algorithms while being readily suited for neural implementation. Additionally, LCA coefficients for video sequences demonstrate inertial properties that are both qualitatively and quantitatively more regular (i.e., smoother and more predictable) than the coefficients produced by greedy algorithms.
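
The LCA dynamics can be written compactly; the sketch below follows the description above (feedforward drive, lateral inhibition proportional to basis-function overlap, and a soft thresholding function), with step size, threshold, and random dictionary chosen only for illustration.

import numpy as np

def lca(x, Phi, lam=0.1, tau=10.0, dt=1.0, n_steps=300):
    """Locally competitive algorithm, discretized: each node integrates its
    driving input, is inhibited by active nodes with overlapping basis
    functions, and emits a soft-thresholded coefficient."""
    b = Phi.T @ x                                 # feedforward drive
    G = Phi.T @ Phi - np.eye(Phi.shape[1])        # lateral inhibition weights (basis overlaps)
    u = np.zeros_like(b)                          # internal (membrane-like) states
    a = np.zeros_like(b)                          # thresholded outputs
    for _ in range(n_steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)   # soft thresholding function
        u += (dt / tau) * (b - u - G @ a)                   # leaky integration with competition
    return a

rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 128))
Phi /= np.linalg.norm(Phi, axis=0)
a = lca(rng.standard_normal(64), Phi)
print("nonzero coefficients:", np.count_nonzero(a))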


Journal of Vision | 2003

Timecourse of neural signatures of object recognition.

Jeffrey S. Johnson; Bruno A. Olshausen

How long does it take for the human visual system to recognize objects? This issue is important for understanding visual cortical function as it places constraints on models of the information processing underlying recognition. We designed a series of event-related potential (ERP) experiments to measure the timecourse of electrophysiological correlates of object recognition. We find two distinct types of components in the ERP recorded during categorization of natural images. One is an early presentation-locked signal arising around 135 ms that is present when there are low-level feature differences between images. The other is a later, recognition-related component arising between 150-300 ms. Unlike the early component, the latency of the later component covaries with the subsequent reaction time. In contrast to previous studies suggesting that the early, presentation-locked component of neural activity is correlated to recognition, these results imply that the neural signatures of recognition have a substantially later and variable time of onset.


IEEE Journal of Selected Topics in Signal Processing | 2011

Learning Sparse Codes for Hyperspectral Imagery

Adam S. Charles; Bruno A. Olshausen; Christopher J. Rozell

The spectral features in hyperspectral imagery (HSI) contain significant structure that, if properly characterized, could enable more efficient data acquisition and improved data analysis. Because most pixels contain reflectances of just a few materials, we propose that a sparse coding model is well-matched to HSI data. Sparsity models consider each pixel as a combination of just a few elements from a larger dictionary, and this approach has proven effective in a wide range of applications. Furthermore, previous work has shown that optimal sparse coding dictionaries can be learned from a dataset with no other a priori information (in contrast to many HSI “endmember” discovery algorithms that assume the presence of pure spectra or side information). We modified an existing unsupervised learning approach and applied it to HSI data (with significant ground truth labeling) to learn an optimal sparse coding dictionary. Using this learned dictionary, we demonstrate three main findings: 1) the sparse coding model learns spectral signatures of materials in the scene and locally approximates nonlinear manifolds for individual materials; 2) this learned dictionary can be used to infer HSI-resolution data with very high accuracy from simulated imagery collected at multispectral-level resolution; and 3) this learned dictionary improves the performance of a supervised classification algorithm, both in terms of the classifier complexity and generalization from very small training sets.
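
A tiny synthetic example of the sparsity assumption behind this work: each pixel's spectrum is generated as a mixture of only a few dictionary spectra, and in this noiseless, overdetermined toy case ordinary least squares recovers exactly which materials are present. The random nonnegative dictionary is a stand-in for learned spectral signatures; real data would require a sparsity-penalized solver as in the paper.

import numpy as np

rng = np.random.default_rng(0)
n_bands, n_dict, n_active = 100, 32, 3
D = np.abs(rng.standard_normal((n_bands, n_dict)))     # hypothetical dictionary of material spectra
a_true = np.zeros(n_dict)
a_true[rng.choice(n_dict, n_active, replace=False)] = rng.random(n_active) + 0.5
pixel = D @ a_true                                     # a pixel mixes reflectances of just a few materials

a_hat = np.linalg.lstsq(D, pixel, rcond=None)[0]       # noiseless + overdetermined: exact recovery
print("true materials:     ", np.flatnonzero(a_true))
print("recovered materials:", np.flatnonzero(np.abs(a_hat) > 1e-8))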


Journal of Computational Neuroscience | 1995

A multiscale dynamic routing circuit for forming size- and position-invariant object representations.

Bruno A. Olshausen; Charles H. Anderson; David C. Van Essen

We describe a neural model for forming size- and position-invariant representations of visual objects. The model is based on a previously proposed dynamic routing circuit that remaps selected portions of an input array into an object-centered reference frame. Here, we show how a multiscale representation may be incorporated at the input stage of the model, and we describe the control architecture and dynamics for a hierarchical, multistage routing circuit. Specific neurobiological substrates and mechanisms for the model are proposed, and a number of testable predictions are described.
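
The routing idea can be caricatured in one dimension, as below: control parameters (center and scale) determine which input samples are mapped onto a fixed, object-centered output array, so the same pattern yields roughly the same representation wherever and at whatever size it appears. Nearest-neighbor sampling and a single routing stage are simplifications of the multiscale, hierarchical circuit described above.

import numpy as np

def route(input_array, center, scale, out_size=16):
    """Map a window of the input, selected by the control parameters
    (center, scale), onto a canonical object-centered output array."""
    offsets = np.arange(out_size) - out_size // 2
    idx = np.clip(np.round(center + scale * offsets).astype(int), 0, len(input_array) - 1)
    return input_array[idx]

signal_a = np.zeros(256)
signal_a[100:116] = np.hamming(16)                     # an "object" near position 108
signal_b = np.zeros(256)
signal_b[40:72] = np.hamming(32)                       # the same object, larger and elsewhere

# With appropriate control settings, both map to nearly the same canonical representation.
print(route(signal_a, center=108, scale=1.0))
print(route(signal_b, center=56, scale=2.0))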

Collaboration


Frequent co-authors of Bruno A. Olshausen and their affiliations.

Top Co-Authors

Christopher J. Rozell
Georgia Institute of Technology

Charles H. Anderson
California Institute of Technology

David C. Van Essen
Washington University in St. Louis

Phil Sallee
University of California