Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where David J. Field is active.

Publication


Featured research published by David J. Field.


Vision Research | 1997

Sparse Coding with an Overcomplete Basis Set: A Strategy Employed by V1?

Bruno A. Olshausen; David J. Field

The spatial receptive fields of simple cells in mammalian striate cortex have been reasonably well described physiologically and can be characterized as being localized, oriented, and bandpass, comparable with the basis functions of wavelet transforms. Previously, we have shown that these receptive field properties may be accounted for in terms of a strategy for producing a sparse distribution of output activity in response to natural images. Here, in addition to describing this work in a more expansive fashion, we examine the neurobiological implications of sparse coding. Of particular interest is the case when the code is overcomplete--i.e., when the number of code elements is greater than the effective dimensionality of the input space. Because the basis functions are non-orthogonal and not linearly independent of each other, sparsifying the code will recruit only those basis functions necessary for representing a given input, and so the input-output function will deviate from being purely linear. These deviations from linearity provide a potential explanation for the weak forms of non-linearity observed in the response properties of cortical simple cells, and they further make predictions about the expected interactions among units in response to naturalistic stimuli.
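The recruitment of only the necessary basis functions can be illustrated with a small sketch. The dictionary below is random rather than learned from natural images, and the ISTA-style inference loop, the penalty weight, and all dimensions are illustrative assumptions, not the paper's procedure:

```python
import numpy as np

# Minimal sketch of sparse inference with an overcomplete dictionary, in the
# spirit of minimising ||x - D a||^2 / 2 + lam * ||a||_1. The dictionary is
# random Gaussian purely for illustration.

def soft_threshold(v, t):
    """Shrink each coefficient toward zero by t (the L1 proximal step)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(x, D, lam=0.1, n_iter=200):
    """Infer sparse coefficients a such that D @ a approximates x (ISTA)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the reconstruction term
        a = soft_threshold(a - grad / L, lam / L)
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 64))          # 4x overcomplete: 64 atoms, 16 dims
D /= np.linalg.norm(D, axis=0)             # unit-norm basis functions
x = D[:, [3, 40]] @ np.array([1.0, -0.5])  # signal built from two atoms
a = sparse_code(x, D)
print(np.sum(np.abs(a) > 1e-3))            # only a small subset is recruited
```

Because the L1 penalty zeroes most coefficients exactly, the input-output mapping is nonlinear even though the reconstruction itself is a linear combination of atoms.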


Neural Computation | 1994

What is the goal of sensory coding?

David J. Field

A number of recent attempts have been made to describe early sensory coding in terms of a general information processing strategy. In this paper, two strategies are contrasted. Both strategies take advantage of the redundancy in the environment to produce more effective representations. The first is described as a compact coding scheme. A compact code performs a transform that allows the input to be represented with a reduced number of vectors (cells) with minimal RMS error. This approach has recently become popular in the neural network literature and is related to a process called Principal Components Analysis (PCA). A number of recent papers have suggested that the optimal compact code for representing natural scenes will have units with receptive field profiles much like those found in the retina and primary visual cortex. However, in this paper, it is proposed that compact coding schemes are insufficient to account for the receptive field properties of cells in the mammalian visual pathway. In contrast, it is proposed that the visual system is near to optimal in representing natural scenes only if optimality is defined in terms of sparse distributed coding. In a sparse distributed code, all cells in the code have an equal response probability across the class of images but have a low response probability for any single image. In such a code, the dimensionality is not reduced. Rather, the redundancy of the input is transformed into the redundancy of the firing pattern of cells. It is proposed that the signature for a sparse code is found in the fourth moment of the response distribution (i.e., the kurtosis). In measurements with 55 calibrated natural scenes, the kurtosis was found to peak when the bandwidths of the visual code matched those of cells in the mammalian visual cortex. Codes resembling wavelet transforms are proposed to be effective because the response histograms of such codes are sparse (i.e., show high kurtosis) when presented with natural scenes. It is proposed that the structure of the image that allows sparse coding is found in the phase spectrum of the image. It is suggested that natural scenes, to a first approximation, can be considered as a sum of self-similar local functions (the inverse of a wavelet). Possible reasons for why sensory systems would evolve toward sparse coding are presented.
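The kurtosis signature described here is straightforward to compute. The sketch below uses synthetic Gaussian and Laplacian samples as stand-ins for dense and sparse response histograms; the sample size and the choice of distributions are illustrative assumptions:

```python
import numpy as np

# Sketch of the sparseness signature: the kurtosis of a response
# distribution, normalised so that a Gaussian gives 0 (excess kurtosis).

def kurtosis(r):
    """Fourth moment over squared variance, minus 3 (excess kurtosis)."""
    r = r - r.mean()
    return np.mean(r**4) / np.mean(r**2) ** 2 - 3.0

rng = np.random.default_rng(1)
gauss = rng.standard_normal(100_000)   # dense code: moderate responses everywhere
laplace = rng.laplace(size=100_000)    # sparse code: mostly near zero,
                                       # occasional large responses
print(kurtosis(gauss))                 # near 0
print(kurtosis(laplace))               # near 3: heavy tails, high kurtosis
```

A code whose response histograms look like the Laplacian case on natural scenes is sparse in exactly the sense proposed in this paper.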


Network: Computation In Neural Systems | 1996

Natural image statistics and efficient coding

Bruno A. Olshausen; David J. Field

Natural images contain characteristic statistical regularities that set them apart from purely random images. Understanding what these regularities are can enable natural images to be coded more efficiently. In this paper, we describe some of the forms of structure that are contained in natural images, and we show how these are related to the response properties of neurons at early stages of the visual system. Many of the important forms of structure require higher-order (i.e. more than linear, pairwise) statistics to characterize, which makes models based on linear Hebbian learning, or principal components analysis, inappropriate for finding efficient codes for natural images. We suggest that a good objective for an efficient coding of natural scenes is to maximize the sparseness of the representation, and we show that a network that learns sparse codes of natural scenes succeeds in developing localized, oriented, bandpass receptive fields similar to those in the mammalian striate cortex.
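One way to see why pairwise statistics are insufficient is phase scrambling, which preserves the full power spectrum (all second-order structure) while destroying the higher-order structure that sparse coding exploits. The sketch below uses a synthetic bar image rather than a natural scene; the image size and pattern are illustrative assumptions:

```python
import numpy as np

# Phase-scramble an image: keep the amplitude spectrum, randomise the phases.
# Using the phases of the FFT of real noise keeps the Hermitian symmetry that
# a real-valued image requires.

rng = np.random.default_rng(2)
img = np.zeros((32, 32))
img[:, 12:16] = 1.0                    # a vertical bar: localised structure

F = np.fft.fft2(img)
noise_phase = np.angle(np.fft.fft2(rng.standard_normal(img.shape)))
scrambled = np.real(np.fft.ifft2(np.abs(F) * np.exp(1j * noise_phase)))

# Identical power spectrum (to numerical precision), yet the bar is gone:
print(np.allclose(np.abs(np.fft.fft2(scrambled)), np.abs(F)))
```

Any model driven only by the power spectrum (e.g. linear Hebbian learning or PCA) cannot distinguish the original from the scrambled image, even though only the original contains the structure a sparse code would capture.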


Neural Computation | 2005

How Close Are We to Understanding V1?

Bruno A. Olshausen; David J. Field

A wide variety of papers have reviewed what is known about the function of primary visual cortex. In this review, rather than stating what is known, we attempt to estimate how much is still unknown about V1 function. In particular, we identify five problems with the current view of V1 that stem largely from experimental and theoretical biases, in addition to the contributions of nonlinearities in the cortex that are not well understood. Our purpose is to open the door to new theories, a number of which we describe, along with some proposals for testing them.


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 1990

Human discrimination of fractal images

David C. Knill; David J. Field; Daniel Kersten

In order to transmit information in images efficiently, the visual system should be tuned to the statistical structure of the ensemble of images that it sees. Several authors have suggested that the ensemble of natural images exhibits fractal behavior and, therefore, has a power spectrum that drops off proportionally to 1/f^β (2 < β < 4). In this paper we investigate the question of which value of the exponent β describes the power spectrum of the ensemble of images to which the visual system is optimally tuned. An experiment in which subjects were asked to discriminate randomly generated noise textures based on their spectral drop-off was used. Whereas the discrimination-threshold function of an ideal observer was flat for different spectral drop-offs, human observers showed a broad peak in sensitivity for 2.8 < β < 3.6. The results are consistent with, but do not provide direct evidence for, the theory that the visual system is tuned to an ensemble of images with Markov statistics.
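Noise textures with a prescribed spectral drop-off, like the stimuli in this experiment, can be approximated by filtering white noise in the frequency domain. The texture size, the β value, and the exact filter construction below are illustrative assumptions, not the study's stimulus code:

```python
import numpy as np

# Generate an n x n noise texture whose power spectrum falls off as
# 1/f^beta: scale the FFT of white noise by f^(-beta/2) in amplitude,
# so that power (amplitude squared) goes as f^(-beta).

def fractal_texture(n, beta, rng):
    """Random texture with power spectrum proportional to 1/f^beta."""
    fx = np.fft.fftfreq(n)
    fy = np.fft.fftfreq(n)
    f = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    f[0, 0] = 1.0                          # avoid division by zero at DC
    amplitude = f ** (-beta / 2.0)         # power ~ amplitude^2 ~ 1/f^beta
    noise = np.fft.fft2(rng.standard_normal((n, n)))
    return np.real(np.fft.ifft2(noise * amplitude))

rng = np.random.default_rng(3)
tex = fractal_texture(128, beta=3.2, rng=rng)  # within the sensitive range
print(tex.shape)
```

Larger β concentrates power at low frequencies, giving smoother, cloud-like textures; smaller β gives rougher, finer-grained ones, which is the dimension observers were asked to discriminate.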


Journal of Physiology-Paris | 2003

Contour integration and cortical processing

Robert F. Hess; Anthony Hayes; David J. Field

Our understanding of visual processing in general, and contour integration in particular, has undergone great change over the last 10 years. There is now an accumulation of psychophysical and neurophysiological evidence that the outputs of cells with conjoint orientation preference and spatial position are integrated in the process of explication of rudimentary contours. Recent neuroanatomical and neurophysiological results suggest that this process takes place at the cortical level V1. The code for contour integration may be a temporal one in that it may only manifest itself in the latter part of the spike train as a result of feedback and lateral interactions. Here we review some of the properties of contour integration from a psychophysical perspective and we speculate on their underlying neurophysiological substrate.


Vision Research | 1994

Is the spatial deficit in strabismic amblyopia due to loss of cells or an uncalibrated disarray of cells?

Robert F. Hess; David J. Field

We examine two competing explanations for the spatial localization deficit in human strabismic amblyopia, namely neural undersampling and uncalibrated neural disarray. An undersampling hypothesis would predict an associated deficit for contrast discrimination for which we find no evidence in strabismic amblyopia. A neural disarray hypothesis would predict an associated deficit in the degree to which stimuli appear spatially distorted. We find evidence for such a deficit in strabismic amblyopia. We propose that the spatial deficit in strabismic amblyopia is due to a filter-based distortion which is unable to be re-calibrated by higher visual centres.


Perception | 2000

Local Contrast in Natural Images: Normalisation and Coding Efficiency

Nuala Brady; David J. Field

The visual system employs a gain control mechanism in the cortical coding of contrast whereby the response of each cell is normalised by the integrated activity of neighbouring cells. While restricted in space, the normalisation pool is broadly tuned for spatial frequency and orientation, so that a cell's response is adapted by stimuli which fall outside its ‘classical’ receptive field. Various functions have been attributed to divisive gain control: in this paper we consider whether this output nonlinearity serves to increase the information-carrying capacity of the neural code. Forty-six natural scenes were analysed with the use of oriented, frequency-tuned filters whose bandwidths were chosen to match those of mammalian striate cortical cells. The images were logarithmically transformed so that the filters responded to a luminance ratio or contrast. In the first study, the response of each filter was calibrated relative to its response to a grating stimulus, and local image contrast was expressed in terms of the familiar Michelson metric. We found that the distribution of contrasts in natural images is highly kurtotic, peaking at low values and having a long exponential tail. There is considerable variability in local contrast, both within and between images. In the second study we compared the distribution of response activity before and after implementing contrast normalisation, and noted two major changes. Response variability, both within and between scenes, is reduced by normalisation, and the entropy of the response distribution is increased after normalisation, indicating a more efficient transfer of information.
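Divisive gain control of this kind can be sketched in a few lines. The pooling window, the semi-saturation constant sigma, and the Laplacian stand-in for filter responses are all illustrative assumptions; the normalisation pool described in the paper is tuned over spatial frequency and orientation as well as space:

```python
import numpy as np

# Divisive normalisation: each response is divided by the pooled (local RMS)
# activity of its neighbours. sigma prevents division by zero and sets the
# semi-saturation point.

def normalise(responses, sigma=0.1, window=5):
    """Divide each response by the local RMS of responses (the 'pool')."""
    pad = window // 2
    padded = np.pad(responses ** 2, pad, mode="reflect")
    pool = np.zeros_like(responses)
    for i in range(responses.shape[0]):
        for j in range(responses.shape[1]):
            pool[i, j] = padded[i:i + window, j:j + window].mean()
    return responses / np.sqrt(sigma ** 2 + pool)

rng = np.random.default_rng(4)
r = rng.laplace(size=(32, 32))    # heavy-tailed stand-in for filter responses
rn = normalise(r)
# Dividing by pooled activity compresses the range of responses.
print(np.std(rn) < np.std(r))
```

Because high-activity regions are divided by a large pool and low-activity regions by a small one, response variability across the image drops, which is the equalising effect the second study measures.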


Vision Research | 1998

The role of “contrast enhancement” in the detection and appearance of visual contours

Robert F. Hess; Steven C. Dakin; David J. Field

We test the proposition that the appearance and detection of visual contours is based on an increase in the perceived contrast of contour elements. First we show that detection of contours is quite possible in the presence of very high levels of variability in contrast. Second we show that inclusion in a contour does not induce Gabor patches to appear to be of higher contrast than patches outside of a contour. These results suggest that, contrary to a number of current models, contrast or its assumed physiological correlate (the mean firing rate of early cortical neurons) is not the determining information for identifying the contour.


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2007

Estimates of the information content and dimensionality of natural scenes from proximity distributions

Damon M. Chandler; David J. Field

Natural scenes, like almost all natural data sets, show considerable redundancy. Although many forms of redundancy have been investigated (e.g., pixel distributions, power spectra, contour relationships, etc.), estimates of the true entropy of natural scenes have been largely considered intractable. We describe a technique for estimating the entropy and relative dimensionality of image patches based on a function we call the proximity distribution (a nearest-neighbor technique). The advantage of this function over simple statistics such as the power spectrum is that the proximity distribution is dependent on all forms of redundancy. We demonstrate that this function can be used to estimate the entropy (redundancy) of 3×3 patches of known entropy as well as 8×8 patches of Gaussian white noise, natural scenes, and noise with the same power spectrum as natural scenes. The techniques are based on assumptions regarding the intrinsic dimensionality of the data, and although the estimates depend on an extrapolation model for images larger than 3×3, we argue that this approach provides the best current estimates of the entropy and compressibility of natural-scene patches and that it provides insights into the efficiency of any coding strategy that aims to reduce redundancy. We show that the sample of 8×8 patches of natural scenes used in this study has less than half the entropy of 8×8 white noise and less than 60% of the entropy of noise with the same power spectrum. In addition, given a finite number of samples (< 2^20) drawn randomly from the space of 8×8 patches, the subspace of 8×8 natural-scene patches shows a dimensionality that depends on the sampling density and that for low densities is significantly lower dimensional than the space of 8×8 patches of white noise and noise with the same power spectrum.
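The intuition behind the proximity distribution can be shown with a toy nearest-neighbour computation: samples from a redundant ensemble lie closer to their nearest neighbours than variance-matched white noise does. The 2×2 patches, sample count, and redundancy model below are illustrative assumptions, not the paper's estimator or extrapolation procedure:

```python
import numpy as np

# Toy version of the nearest-neighbour idea: redundant samples cluster on a
# lower-dimensional subset, so their nearest-neighbour distances are smaller
# than those of white noise with the same overall variance.

def mean_nn_distance(patches):
    """Mean distance from each patch to its nearest neighbour in the sample."""
    d = np.linalg.norm(patches[:, None, :] - patches[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # ignore self-distances
    return d.min(axis=1).mean()

rng = np.random.default_rng(5)
n = 500
white = rng.standard_normal((n, 4))        # 2x2 white-noise patches
corr = rng.standard_normal((n, 1)) * np.ones((1, 4))   # rank-1: redundant
corr += 0.05 * rng.standard_normal((n, 4))             # small jitter
corr *= np.std(white) / np.std(corr)       # match overall variance
print(mean_nn_distance(corr) < mean_nn_distance(white))
```

Smaller typical nearest-neighbour distances imply lower entropy and lower effective dimensionality at that sampling density, which is the quantity the proximity distribution tracks.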

Collaboration


Dive into David J. Field's collaborations.

Top Co-Authors

Anthony Hayes

Nanyang Technological University


Nuala Brady

University College Dublin
