
Publication


Featured research published by Roland Baddeley.


Proceedings of the Royal Society of London B: Biological Sciences | 1997

Responses of neurons in primary and inferior temporal visual cortices to natural scenes

Roland Baddeley; L. F. Abbott; Michael C. A. Booth; Frank Sengpiel; Tobe Freeman; Edward A. Wakeman; Edmund T. Rolls

The primary visual cortex (V1) is the first cortical area to receive visual input, and inferior temporal (IT) areas are among the last along the ventral visual pathway. We recorded, in area V1 of anaesthetized cats and area IT of awake macaque monkeys, responses of neurons to videos of natural scenes. Responses were analysed to test various hypotheses concerning the nature of neural coding in these two regions. A variety of spike-train statistics were measured including spike-count distributions, interspike interval distributions, coefficients of variation, power spectra, Fano factors and different sparseness measures. All statistics showed non-Poisson characteristics and several revealed self-similarity of the spike trains. Spike-count distributions were approximately exponential in both visual areas for eight different videos and for counting windows ranging from 50 ms to 5 s. The results suggest that the neurons maximize their information-carrying capacity while maintaining a fixed long-term average firing rate, or, equivalently, minimize their average firing rate for a fixed information-carrying capacity.
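The information-theoretic claim in this abstract, that an exponential spike-count distribution maximizes entropy for a fixed mean count, can be illustrated with the discrete analogue of the exponential, the geometric distribution. The sketch below is not from the paper; the mean of 4 counts per window is an invented illustrative value. It compares the entropy of a geometric and a Poisson spike-count distribution with the same mean:

```python
import numpy as np
from math import lgamma

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

mean = 4.0            # illustrative mean spike count per counting window
k = np.arange(200)    # support large enough that the tails are negligible

# Geometric distribution, the discrete analogue of an exponential
# spike-count distribution: p(k) = (1 - q) * q**k, with mean q / (1 - q).
q = mean / (1.0 + mean)
p_geom = (1.0 - q) * q**k

# Poisson distribution with the same mean, for comparison.
p_pois = np.exp(k * np.log(mean) - mean - np.array([lgamma(i + 1.0) for i in k]))

h_geom, h_pois = entropy(p_geom), entropy(p_pois)
# The exponential-like distribution has the higher entropy, i.e. the larger
# information capacity for the same long-term average firing rate.
```

The geometric distribution is the maximum-entropy distribution over non-negative integer counts with a fixed mean, which is the discrete version of the optimality argument sketched in the abstract.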


Vision Research | 1998

Different mechanisms underlie three inhibitory phenomena in cat area 17

Frank Sengpiel; Roland Baddeley; Tobe Freeman; Richard Harrad; Colin Blakemore

Recently, it has been proposed that all suppressive phenomena observed in the primary visual cortex (V1) are mediated by a single mechanism, involving inhibition by pools of neurons, which, between them, represent a wide range of stimulus specificities. The strength of such inhibition would depend on the stimulus that produces it (particularly its contrast) rather than on the firing rate of the inhibited cell. We tested this hypothesis by measuring contrast-response functions (CRFs) of neurons in cat V1 for stimulation of the classical receptive field of the dominant eye with an optimal grating alone, and in the presence of inhibition caused by (1) a superimposed orthogonal grating (cross-orientation inhibition); (2) a surrounding iso-oriented grating (surround inhibition); and (3) an orthogonal grating in the other eye (interocular suppression). We fitted hyperbolic ratio functions and found that the effect of cross-orientation inhibition was best described as a rightward shift of the CRF (contrast-gain control), while surround inhibition and interocular suppression were primarily characterised as downward shifts of the CRF (response-gain control). However, the latter also showed a component of contrast-gain control. The two modes of suppression were differently distributed between the layers of cortex. Response-gain control prevailed in layer 4, whereas cells in layers 2/3, 5 and 6 mainly showed contrast-gain control. As in human observers, surround gratings caused suppression when the central grating was of high contrast, but in over a third of the cells tested, enhanced responses for low-contrast central stimuli, hence actually decreasing threshold contrast.
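The hyperbolic ratio (Naka-Rushton) function fitted in this study has the standard form R(c) = Rmax c^n / (c^n + c50^n). As a sketch (the parameter values below are invented, not the paper's fits), contrast-gain control can be modelled as an increase in the semi-saturation contrast c50 and response-gain control as a reduction in Rmax:

```python
import numpy as np

def hyperbolic_ratio(c, r_max, c50, n):
    """Hyperbolic ratio (Naka-Rushton) contrast-response function."""
    return r_max * c**n / (c**n + c50**n)

contrasts = np.array([0.05, 0.1, 0.2, 0.4, 0.8])
base = hyperbolic_ratio(contrasts, r_max=40.0, c50=0.2, n=2.0)

# Contrast-gain control (the fit found for cross-orientation inhibition):
# the curve shifts rightward along the contrast axis; here, a larger c50.
contrast_gain = hyperbolic_ratio(contrasts, r_max=40.0, c50=0.4, n=2.0)

# Response-gain control (the dominant fit for surround inhibition and
# interocular suppression): the whole curve scales down; here, a smaller r_max.
response_gain = hyperbolic_ratio(contrasts, r_max=20.0, c50=0.2, n=2.0)
```

Note the diagnostic difference: under contrast-gain control the suppressed response still approaches the same maximum at saturating contrast, whereas under response-gain control it does not.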


Biological Cybernetics | 1995

Non-linear data structure extraction using simple hebbian networks

Colin Fyfe; Roland Baddeley

We present a class of neural network algorithms based on simple Hebbian learning which allow the finding of higher-order structure in data. The neural networks use negative feedback of activation to self-organise; such networks have previously been shown to be capable of performing principal component analysis (PCA). In this paper, this is extended to exploratory projection pursuit (EPP), which is a statistical method for investigating structure in high-dimensional data sets. As opposed to previous proposals for networks which learn using Hebbian learning, no explicit weight normalisation, decay or weight clipping is required. The results are extended to multiple units and related to both the statistical literature on EPP and the neural network literature on non-linear PCA.
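With a single output unit, a negative-feedback Hebbian network of the kind described reduces to a very small update: the unit's reconstruction is fed back and subtracted from the input, and the weight change is Hebbian on the residual, which is algebraically equivalent to Oja's rule and converges to the first principal component. A minimal sketch on invented 2-D toy data (not the paper's multi-unit network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data with a dominant principal direction along the first axis.
X = rng.normal(size=(5000, 2)) * np.array([1.0, 0.1])

w = rng.normal(size=2) * 0.1   # small random initial weights
eta = 0.01
for x in X:
    y = w @ x          # feedforward activation
    e = x - y * w      # negative feedback: subtract the unit's reconstruction
    w += eta * y * e   # simple Hebbian learning on the residual

# w converges to (approximately) a unit vector along the first principal
# component, with no explicit weight normalisation, decay or clipping.
```

The expansion Δw = η(yx − y²w) makes the equivalence to Oja's rule explicit: the negative feedback term plays the role of the usual weight-decay term.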


Network: Computation in Neural Systems | 1996

Searching for filters with 'interesting' output distributions: an uninteresting direction to explore?

Roland Baddeley

It has been independently proposed, by Barlow, Field, Intrator and co-workers, that the receptive fields of neurons in V1 are optimized to generate sparse, kurtotic, or "interesting" output probability distributions. We investigate the empirical evidence for this further and argue that filters can produce interesting output distributions simply because natural images have variable local intensity variance. If the proposed filters have zero DC, then the probability distribution of filter outputs (and hence the output kurtosis) is well predicted simply from these effects of variable local variance. This suggests that finding filters with high output kurtosis does not necessarily signal interesting image structure. It is then argued that finding filters that maximize output kurtosis generates filters that are incompatible with observed physiology. In particular, the optimal difference-of-Gaussians (DOG) filter should have the smallest possible scale, an on-centre off-surround cell should have a negative DC, and the ratio of centre width to surround width should approach unity. This is incompatible with the physiology. Further, it is also predicted that oriented filters should always be oriented in the vertical direction, and of all the filters tested, the filter with the highest output kurtosis has the lowest signal-to-noise ratio (the filter is simply the difference of two neighbouring pixels). Whilst these observations are not incompatible with the brain using a sparse representation, they do argue that little significance should be placed on finding filters with highly kurtotic output distributions. It is therefore argued that other constraints are required in order to understand the development of visual receptive fields.
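The paper's central point, that variable local variance alone produces highly kurtotic filter outputs, is easy to reproduce. In this sketch (synthetic data, not the paper's image set), a zero-DC filter, the difference of two neighbouring pixels, is applied to completely structureless Gaussian noise whose standard deviation varies from patch to patch; the output kurtosis nevertheless comes out well above the Gaussian value of 3:

```python
import numpy as np

rng = np.random.default_rng(1)

# Structureless "image": independent Gaussian pixels whose local standard
# deviation varies from patch to patch (variable local intensity variance).
n_patches = 20000
stds = rng.uniform(0.1, 2.0, size=n_patches)
pixels = rng.normal(size=(n_patches, 2)) * stds[:, None]

# The simplest zero-DC filter: the difference of two neighbouring pixels.
outputs = pixels[:, 0] - pixels[:, 1]

# Kurtosis E[z**4] / E[z**2]**2; a Gaussian gives exactly 3.
z = outputs - outputs.mean()
kurtosis = np.mean(z**4) / np.mean(z**2) ** 2
# kurtosis is well above 3, despite the complete absence of image structure
```

The outputs form a scale mixture of Gaussians, which is always leptokurtic (by Jensen's inequality on E[s⁴]/E[s²]²), so "sparse-looking" statistics arise here with no interesting structure at all.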


Network: Computation in Neural Systems | 1995

Finding compact and sparse-distributed representations of visual images

Colin Fyfe; Roland Baddeley

Some recent work has investigated the dichotomy between compact coding using dimensionality reduction and sparse-distributed coding in the context of understanding biological information processing. We introduce an artificial neural network which self-organizes on the basis of simple Hebbian learning and negative feedback of activation, and show that it is capable both of forming compact codings of data distributions and of identifying filters most sensitive to sparse-distributed codes. The network is extremely simple and its biological relevance is investigated via its response to a set of images typical of everyday life. However, an analysis of the network's identification of the filter for sparse coding reveals that this coding may not be globally optimal and that there exists an innate limiting factor which cannot be transcended.


Cognitive Science | 1997

The Correlational Structure of Natural Images and the Calibration of Spatial Representations

Roland Baddeley

Physiologists have long proposed that correlated input activity is important in normal sensory development. Here it is postulated that the visual system is sensitive to the correlation in image intensity across the visual field, and that these correlations are used to help calibrate spatial representations. Since measurements made near to each other in the visual field are more correlated than measurements made at a distance, the degree of correlation can be used as an estimate of the distance between two measurements and can therefore be used to calibrate a roughly organized spatial representation. We therefore explored the hypothesis that low-level spatial representations are calibrated using a signal based on image intensity correlation. If the visual system uses input statistics to calibrate its spatial representation, then any distortions and anisotropies in these input statistics should be mirrored by distortions in the representation of space. To test the psychological implications of this hypothesis, a collection of 81 images of open and urban landscapes was used to estimate the degree of correlation between image intensity measurement pairs as a function of both distance and orientation. Doing this, we show that a system that used the measured statistics to calibrate its representation would show: (1) a horizontal-vertical illusion; (2) a magnitude of this illusion that depends on the amount of open and urban landscape in the environment; and (3) a nontrivial relationship between line orientation and judged length. Analogues of all these distortions and regularities can be found in the psychophysical literature on distance estimation. This gives strength to the proposal that spatial representations are calibrated using input statistics.
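The core idea, that correlation between intensity measurements falls off with separation and can therefore serve as a distance signal, can be sketched on a synthetic one-dimensional scan line (smoothed white noise standing in for the paper's 81-image set):

```python
import numpy as np

rng = np.random.default_rng(2)

# A 1-D "scan line" with spatial correlation, built by smoothing white noise.
noise = rng.normal(size=100000)
kernel = np.ones(15) / 15.0
line = np.convolve(noise, kernel, mode="valid")

def correlation_at(lag):
    """Correlation between intensity measurements `lag` pixels apart."""
    return np.corrcoef(line[:-lag], line[lag:])[0, 1]

# Correlation falls monotonically with separation, so its value can be read
# backwards as an estimate of the distance between the two measurements.
nearby, medium, far = correlation_at(1), correlation_at(5), correlation_at(12)
```

Anisotropy enters when the fall-off differs by orientation, which is the mechanism the paper uses to predict the horizontal-vertical illusion.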


Neural Computation | 1997

Optimal, unsupervised learning in invariant object recognition

Guy Wallis; Roland Baddeley

A means for establishing transformation-invariant representations of objects is proposed and analyzed, in which different views are associated on the basis of the temporal order of the presentation of these views, as well as their spatial similarity. Assuming knowledge of the distribution of presentation times, an optimal linear learning rule is derived. Simulations of a competitive network trained on a character recognition task are then used to highlight the success of this learning rule in relation to simple Hebbian learning and to show that the theory can give accurate quantitative predictions for the optimal parameters for such networks.
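As a hypothetical illustration of the general idea (this is a simple Földiák-style trace rule, not the optimal linear rule derived in the paper), a unit whose Hebbian update is gated by a temporally smeared activity trace comes to respond to views that follow one another in time:

```python
import numpy as np

# Four one-hot "views": views 0 and 1 belong to one object and always appear
# in temporal succession; views 2 and 3 are never presented.
views = np.eye(4)
sequence = [0, 1] * 200

w = np.array([0.01, 1.0, 0.0, 0.0])   # the unit starts weakly tuned to view 1
w /= np.linalg.norm(w)
trace, delta, lr = 0.0, 0.5, 0.1

for i in sequence:
    x = views[i]
    y = w @ x                                # response to the current view
    trace = (1 - delta) * trace + delta * y  # temporally smeared activity trace
    w += lr * trace * x                      # Hebbian update gated by the trace
    w /= np.linalg.norm(w)                   # keep the weight vector unit length

# Views 0 and 1 end up bound to the same unit (w[0] and w[1] both large),
# purely because they followed each other in time; w[2] and w[3] stay at zero.
```

The paper's contribution is choosing the temporal weighting optimally given the distribution of presentation times, rather than the fixed exponential trace used in this sketch.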


Biological Cybernetics | 1997

Nonlinear principal components analysis of neuronal spike train data

David Fotheringhame; Roland Baddeley

Many recent approaches to decoding neural spike trains depend critically on the assumption that, for low-pass filtered spike trains, the temporal structure is optimally represented by a small number of linear projections onto the data. We therefore tested this assumption of linearity by comparing a linear factor analysis technique (principal components analysis) with a nonlinear neural network based method. It is first shown that the nonlinear technique can reliably identify a neuronally plausible nonlinearity in synthetic spike trains. However, when applied to the outputs from primary visual cortical neurons, this method shows no evidence for significant temporal nonlinearities. The implications of this are discussed.
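The linear assumption being tested, that low-pass filtered spike trains are well represented by a few linear projections, can be illustrated on synthetic trains deliberately built from two slow waveforms plus noise (invented data, not the recorded V1 responses); PCA then recovers the low-dimensional structure:

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_bins = 300, 500

# Synthetic "spike trains" constructed from two slow waveforms plus white
# noise, then low-pass filtered with a boxcar kernel.
t = np.arange(n_bins)
basis = np.stack([np.sin(2 * np.pi * t / 250.0), np.cos(2 * np.pi * t / 250.0)])
coeffs = rng.normal(size=(n_trials, 2))
raw = coeffs @ basis + 0.3 * rng.normal(size=(n_trials, n_bins))
kernel = np.ones(21) / 21.0
smoothed = np.array([np.convolve(s, kernel, mode="same") for s in raw])

# Principal components analysis of the smoothed trains.
centred = smoothed - smoothed.mean(axis=0)
cov = centred.T @ centred / n_trials
evals = np.linalg.eigvalsh(cov)            # eigenvalues in ascending order
explained = evals[-2:].sum() / evals.sum() # variance captured by two components
# Two linear projections capture almost all of the variance, as the linear
# assumption predicts for data that really are low dimensional.
```

The paper's question is whether real cortical spike trains behave like this, or whether a nonlinear method finds structure that such linear projections miss.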


Archive | 1995

Topographic Map Formation as Statistical Inference

Roland Baddeley

Neurons representing similar aspects of the world are often found close together in the cortex. It is proposed that this phenomenon can be modelled using a statistical approach. We start by using a neural network to find the “features” that were most likely to have generated the observed probability distribution of inputs. These features can be found using a Boltzmann machine architecture, but the results of this simple network are unsatisfactory. By adding two additional constraints (priors), that all representational units have the same probability of being true, and that nearby representational units are correlated, the network is shown to be capable of extracting distributed, spatially localised topographic representations based on an input of natural images. This is believed to be the first network capable of achieving this.


Archive | 1995

Edge Enhancement and Exploratory Projection Pursuit

Colin Fyfe; Roland Baddeley

We present a neural network algorithm based on simple Hebbian learning which allows the finding of higher-order structure in data. The neural network uses negative feedback of activation to self-organise; such networks have previously been shown to be capable of performing Principal Component Analysis (PCA). In this paper, this is extended to Exploratory Projection Pursuit (EPP), which is a statistical method for investigating structure in high-dimensional data sets. Recently, it has been proposed [3, 5] that one way of choosing an appropriate filter for processing a particular domain is to find the filter with the highest output kurtosis. We pursue this avenue further by using the developed neural network to find the filter with the highest output kurtosis when applied to a collection of natural images. The method does not appear to work, but interesting lessons can be derived from our failure.

Collaboration


Dive into Roland Baddeley's collaborations.

Top Co-Authors


Colin Fyfe

University of Strathclyde
