Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ben D. B. Willmore is active.

Publication


Featured research published by Ben D. B. Willmore.


Neuron | 2011

Contrast Gain Control in Auditory Cortex

Neil C. Rabinowitz; Ben D. B. Willmore; Jan W. H. Schnupp; Andrew J. King

The auditory system must represent sounds with a wide range of statistical properties. One important property is the spectrotemporal contrast in the acoustic environment: the variation in sound pressure in each frequency band, relative to the mean pressure. We show that neurons in ferret auditory cortex rescale their gain to partially compensate for the spectrotemporal contrast of recent stimulation. When contrast is low, neurons increase their gain, becoming more sensitive to small changes in the stimulus, although the effectiveness of contrast gain control is reduced at low mean levels. Gain is primarily determined by contrast near each neuron's preferred frequency, but there is also a contribution from contrast in more distant frequency bands. Neural responses are modulated by contrast over timescales of ∼100 ms. By using contrast gain control to expand or compress the representation of its inputs, the auditory system may be seeking an efficient coding of natural sounds.
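As a rough illustration of the mechanism described in this summary, the sketch below (Python/NumPy, not from the paper) computes a toy spectrotemporal contrast for each frequency band over a short sliding window and applies a gain that only partially compensates for it. The window length, the `strength` parameter, and the specific gain function are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

def spectrotemporal_contrast(spec, win=10):
    """Contrast per frequency band: std of level relative to the mean level,
    computed in a sliding window of `win` time bins (roughly ~100 ms).
    `spec` is a (frequencies x time) spectrogram of sound levels."""
    n_f, n_t = spec.shape
    contrast = np.zeros_like(spec, dtype=float)
    for t in range(n_t):
        seg = spec[:, max(0, t - win + 1):t + 1]          # recent stimulation
        contrast[:, t] = seg.std(axis=1) / (seg.mean(axis=1) + 1e-9)
    return contrast

def partially_compensating_gain(contrast, strength=0.5):
    """Toy gain control: gain falls as contrast rises, but only partially
    compensates (strength < 1), mirroring the behaviour reported in cortex."""
    return 1.0 / (contrast + 1e-9) ** strength

# Example: low-contrast stimulation yields higher gain than high-contrast.
rng = np.random.default_rng(0)
spec_low  = 60 + 2.0 * rng.standard_normal((32, 200))     # dB-like levels
spec_high = 60 + 12.0 * rng.standard_normal((32, 200))
win = 10
c_low  = spectrotemporal_contrast(spec_low,  win)[:, win:]
c_high = spectrotemporal_contrast(spec_high, win)[:, win:]
print(partially_compensating_gain(c_low).mean() >
      partially_compensating_gain(c_high).mean())          # True: lower contrast -> higher gain
```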


The Journal of Neuroscience | 2010

Neural representation of natural images in visual area V2.

Ben D. B. Willmore; Ryan J. Prenger; Jack L. Gallant

Area V2 is a major visual processing stage in mammalian visual cortex, but little is currently known about how V2 encodes information during natural vision. To determine how V2 represents natural images, we used a novel nonlinear system identification approach to obtain quantitative estimates of spatial tuning across a large sample of V2 neurons. We compared these tuning estimates with those obtained in area V1, in which the neural code is relatively well understood. We find two subpopulations of neurons in V2. Approximately one-half of the V2 neurons have tuning that is similar to V1. The other half of the V2 neurons are selective for complex features such as those that occur in natural scenes. These neurons are distinguished from V1 neurons mainly by the presence of stronger suppressive tuning. Selectivity in these neurons therefore reflects a balance between excitatory and suppressive tuning for specific features. These results provide a new perspective on how complex shape selectivity arises, emphasizing the role of suppressive tuning in determining stimulus selectivity in higher visual cortex.


PLOS Biology | 2013

Constructing Noise-Invariant Representations of Sound in the Auditory Pathway

Neil C. Rabinowitz; Ben D. B. Willmore; Andrew J. King; Jan W. H. Schnupp

Along the auditory pathway from auditory nerve to midbrain to cortex, individual neurons adapt progressively to sound statistics, enabling the discernment of foreground sounds, such as speech, over background noise.


The Journal of Neuroscience | 2012

Spectrotemporal contrast kernels for neurons in primary auditory cortex.

Neil C. Rabinowitz; Ben D. B. Willmore; Jan W. H. Schnupp; Andrew J. King

Auditory neurons are often described in terms of their spectrotemporal receptive fields (STRFs). These map the relationship between features of the sound spectrogram and firing rates of neurons. Recently, we showed that neurons in the primary fields of the ferret auditory cortex are also subject to gain control: when sounds undergo smaller fluctuations in their level over time, the neurons become more sensitive to small changes in level (Rabinowitz et al., 2011). Just as STRFs measure the spectrotemporal features of a sound that lead to changes in the firing rates of neurons, in this study, we sought to estimate the spectrotemporal regions in which sound statistics lead to changes in the gain of neurons. We designed a set of stimuli with complex contrast profiles to characterize these regions. This allowed us to estimate the STRFs of cortical neurons alongside a set of spectrotemporal contrast kernels. We find that these two sets of integration windows match up: the extent to which a stimulus feature causes the firing rate of a neuron to change is strongly correlated with the extent to which the contrast of that feature modulates the gain of the neuron. Adding contrast kernels to STRF models also yields considerable improvements in the ability to capture and predict how auditory cortical neurons respond to statistically complex sounds.
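A minimal sketch of how an STRF and a contrast kernel might operate together in a model neuron, assuming a simple linear-nonlinear form with divisive, contrast-dependent gain; the function name, the particular gain equation, and the rectifying output nonlinearity are illustrative choices rather than the model fitted in the paper.

```python
import numpy as np

def ln_response_with_contrast_gain(spec, strf, contrast, contrast_kernel, base_gain=1.0):
    """Toy LN model whose gain is set by a spectrotemporal contrast kernel.
    spec, contrast:        (freq x time) spectrogram and its local contrast
    strf, contrast_kernel: (freq x lags) integration windows"""
    n_f, n_lag = strf.shape
    n_t = spec.shape[1]
    rate = np.zeros(n_t)
    for t in range(n_lag, n_t):
        window = spec[:, t - n_lag:t]
        c_window = contrast[:, t - n_lag:t]
        drive = np.sum(strf * window)                                   # STRF: which features drive firing
        gain = base_gain / (1.0 + np.sum(contrast_kernel * c_window))   # contrast kernel: which contrasts set gain
        rate[t] = np.maximum(gain * drive, 0.0)                         # output nonlinearity (rectification)
    return rate
```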


Network: Computation In Neural Systems | 2003

Methods for first-order kernel estimation: simple-cell receptive fields from responses to natural scenes.

Ben D. B. Willmore; Darragh Smyth

Recent studies have recovered receptive-field maps of simple cells in visual cortex from their responses to natural scene stimuli. Natural scenes have many theoretical and practical advantages over traditional, artificial stimuli; however, the receptive-field estimation methods are more complex than for white-noise stimuli. Here, we describe and justify several of these methods—spectral correction of the reverse correlation estimate, direct least-squares solution, iterative least-squares algorithms and regularized least-squares solutions. We investigate the pros and cons of the different methods, and evaluate them in a head-to-head comparison for simulated simple-cell data. This shows that, at least for quasilinear simulated simple cells, a regularized solution (‘reginv’) is most efficient, requiring fewer stimulus presentations for high-resolution reconstruction of the first-order kernel. We also investigate several practical issues that determine the success of this kind of experiment—the effects of neuronal nonlinearities, response variability and the choice of stimulus regime.
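To make the regularized least-squares idea concrete, here is a minimal ridge-style estimator of a first-order kernel from natural-scene responses. This is a generic regularized solution for illustration; it may differ in detail from the 'reginv' method evaluated in the paper, and the regularization strength `lam` is an assumed free parameter.

```python
import numpy as np

def estimate_kernel_regularized(stimuli, responses, lam=1.0):
    """Regularized least-squares estimate of a first-order (linear) kernel.
    stimuli:   (n_frames, n_pixels) natural-scene frames, flattened
    responses: (n_frames,) spike counts or firing rates
    lam:       ridge regularization strength
    Natural scenes have correlated pixels, so the plain reverse-correlation
    estimate (X.T @ y) is biased by the stimulus autocorrelation; solving the
    regularized normal equations corrects for this while suppressing noise."""
    X = np.asarray(stimuli, dtype=float)
    y = np.asarray(responses, dtype=float)
    XtX = X.T @ X
    return np.linalg.solve(XtX + lam * np.eye(XtX.shape[0]), X.T @ y)
```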


Neural Computation | 2008

The Berkeley wavelet transform: A biologically inspired orthogonal wavelet transform

Ben D. B. Willmore; Ryan J. Prenger; Michael C.-K. Wu; Jack L. Gallant

We describe the Berkeley wavelet transform (BWT), a two-dimensional triadic wavelet transform. The BWT comprises four pairs of mother wavelets at four orientations. Within each pair, one wavelet has odd symmetry, and the other has even symmetry. By translation and scaling of the whole set (plus a single constant term), the wavelets form a complete, orthonormal basis in two dimensions. The BWT shares many characteristics with the receptive fields of neurons in mammalian primary visual cortex (V1). Like these receptive fields, BWT wavelets are localized in space, tuned in spatial frequency and orientation, and form a set that is approximately scale invariant. The wavelets also have spatial frequency and orientation bandwidths that are comparable with biological values. Although the classical Gabor wavelet model is a more accurate description of the receptive fields of individual V1 neurons, the BWT has some interesting advantages. It is a complete, orthonormal basis and is therefore inexpensive to compute, manipulate, and invert. These properties make the BWT useful in situations where computational power or experimental data are limited, such as estimation of the spatiotemporal receptive fields of neurons.
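The computational convenience mentioned in this abstract follows from orthonormality alone: for any complete orthonormal basis, the BWT included, analysis is a matrix product and synthesis is simply the transpose. The sketch below demonstrates this generic property with a stand-in orthonormal matrix; it does not construct the BWT's triadic wavelets themselves.

```python
import numpy as np

def analyze(W, image):
    """Project an image onto an orthonormal basis (rows of W are basis functions)."""
    return W @ image.ravel()

def synthesize(W, coeffs, shape):
    """Orthonormality makes inversion trivial: the inverse of W is its transpose."""
    return (W.T @ coeffs).reshape(shape)

# Stand-in orthonormal basis (NOT the BWT wavelets), built by QR decomposition.
rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.standard_normal((64, 64)))
img = rng.standard_normal((8, 8))
rec = synthesize(W, analyze(W, img), img.shape)
print(np.allclose(rec, img))   # True: exact reconstruction at negligible cost
```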


The Journal of Physiology | 2014

Hearing in noisy environments: noise invariance and contrast gain control

Ben D. B. Willmore; James E. Cooke; Andrew J. King

Contrast gain control has recently been identified as a fundamental property of the auditory system. Electrophysiological recordings in ferrets have shown that neurons continuously adjust their gain (their sensitivity to change in sound level) in response to the contrast of sounds that are heard. At the level of the auditory cortex, these gain changes partly compensate for changes in sound contrast. This means that sounds which are structurally similar, but have different contrasts, have similar neuronal representations in the auditory cortex. As a result, the cortical representation is relatively invariant to stimulus contrast and robust to the presence of noise in the stimulus. In the inferior colliculus (an important subcortical auditory structure), gain changes are less reliably compensatory, suggesting that contrast‐ and noise‐invariant representations are constructed gradually as one ascends the auditory pathway. In addition to noise invariance, contrast gain control provides a variety of computational advantages over static neuronal representations; it makes efficient use of neuronal dynamic range, may contribute to redundancy‐reducing, sparse codes for sound and allows for simpler decoding of population responses. The circuits underlying auditory contrast gain control are still under investigation. As in the visual system, these circuits may be modulated by factors other than stimulus contrast, forming a potential neural substrate for mediating the effects of attention as well as interactions between the senses.


The Journal of Neuroscience | 2016

Incorporating Midbrain Adaptation to Mean Sound Level Improves Models of Auditory Cortical Processing.

Ben D. B. Willmore; Oliver Schoppe; Andrew J. King; Jan W. H. Schnupp; Nicol S. Harper

Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear–nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. This improves their predictive power across a wide range of stimuli but keeps model complexity low as it introduces no new free parameters. Incorporating the adaptive coding properties of neurons will likely improve receptive field models in other sensory modalities too.
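As a rough sketch of the adaptive preprocessing stage described above, the code below high-pass filters each spectrogram channel by subtracting an exponentially weighted running mean with a per-channel time constant, then half-wave rectifies the result before it would be passed to a standard LN model. The exponential-filter form, step size, and parameter names are assumptions made for illustration, not the authors' exact implementation.

```python
import numpy as np

def ic_adaptation_stage(spec, time_constants, dt=0.005):
    """Adaptation to mean sound level, per frequency channel (illustrative sketch).
    spec:           (n_freq, n_time) log-spectrogram
    time_constants: (n_freq,) adaptation time constants in seconds
    dt:             spectrogram time step in seconds"""
    spec = np.asarray(spec, dtype=float)
    alphas = dt / np.asarray(time_constants, dtype=float)   # per-channel smoothing factors
    running_mean = spec[:, 0].copy()
    out = np.zeros_like(spec)
    for t in range(spec.shape[1]):
        running_mean += alphas * (spec[:, t] - running_mean)    # track recent mean level
        out[:, t] = np.maximum(spec[:, t] - running_mean, 0.0)  # high-pass + half-wave rectify
    return out   # would then feed a standard linear-nonlinear (LN) model
```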


Frontiers in Computational Neuroscience | 2016

Measuring the Performance of Neural Models.

Oliver Schoppe; Nicol S. Harper; Ben D. B. Willmore; Andrew J. King; Jan W. H. Schnupp

Good metrics of the performance of a statistical or computational model are essential for model comparison and selection. Here, we address the design of performance metrics for models that aim to predict neural responses to sensory inputs. This is particularly difficult because the responses of sensory neurons are inherently variable, even in response to repeated presentations of identical stimuli. In this situation, standard metrics (such as the correlation coefficient) fail because they do not distinguish between explainable variance (the part of the neural response that is systematically dependent on the stimulus) and response variability (the part of the neural response that is not systematically dependent on the stimulus, and cannot be explained by modeling the stimulus-response relationship). As a result, models which perfectly describe the systematic stimulus-response relationship may appear to perform poorly. Two metrics have previously been proposed which account for this inherent variability: Signal Power Explained (SPE, Sahani and Linden, 2003), and the normalized correlation coefficient (CCnorm, Hsu et al., 2004). Here, we analyze these metrics, and show that they are intimately related. However, SPE has no lower bound, and we show that, even for good models, SPE can yield negative values that are difficult to interpret. CCnorm is better behaved in that it is effectively bounded between −1 and 1, and values below zero are very rare in practice and easy to interpret. However, it was hitherto not possible to calculate CCnorm directly; instead, it was estimated using imprecise and laborious resampling techniques. Here, we identify a new approach that can calculate CCnorm quickly and accurately. As a result, we argue that it is now a better choice of metric than SPE to accurately evaluate the performance of neural models.
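A minimal sketch of how CCnorm might be computed from repeated trials, using the signal-power idea cited above (Sahani and Linden, 2003; Hsu et al., 2004); the exact estimator described in the paper may differ, and the function below is only an illustration.

```python
import numpy as np

def cc_norm(trial_responses, prediction):
    """Normalized correlation coefficient (CCnorm): correlation between a model
    prediction and the trial-averaged response, normalized by the explainable
    (signal) power estimated from repeated trials.
    trial_responses: (n_trials, n_time) responses to repeats of the same stimulus
    prediction:      (n_time,) model output"""
    R = np.asarray(trial_responses, dtype=float)
    y = np.asarray(prediction, dtype=float)
    n = R.shape[0]
    mean_resp = R.mean(axis=0)
    # Signal power: the part of the response variance that is stimulus-locked.
    # It can come out <= 0 for very noisy recordings, leaving CCnorm undefined.
    sp = (np.var(R.sum(axis=0), ddof=1) - R.var(axis=1, ddof=1).sum()) / (n * (n - 1))
    cov = np.cov(mean_resp, y)[0, 1]
    return cov / np.sqrt(sp * np.var(y, ddof=1))
```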


Vision Research | 2012

Contrast normalization contributes to a biologically-plausible model of receptive-field development in primary visual cortex (V1)

Ben D. B. Willmore; Harry Bulstrode; David J. Tolhurst

Highlights:
- Contrast normalization can be introduced to a BCM neural network.
- The resulting network efficiently represents natural images.
- Contrast normalization prevents redundant representation of image structure.
- Neurally plausible model for neonatal development of receptive fields in V1.
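To illustrate how contrast normalization could be combined with a BCM learning rule, here is a toy single-unit sketch in which each image patch is divisively normalized before the standard BCM weight update; the learning rate, threshold time constant, and overall setup are illustrative assumptions, not the network from the paper.

```python
import numpy as np

def train_bcm_unit(patches, eta=0.01, tau_theta=100.0, epochs=5):
    """Toy single BCM unit with divisive (contrast) normalization of its inputs.
    patches: (n_samples, n_pixels) image patches."""
    X = np.asarray(patches, dtype=float)
    X = X - X.mean(axis=1, keepdims=True)                        # remove local mean luminance
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-9)    # contrast normalization
    rng = np.random.default_rng(0)
    w = rng.standard_normal(X.shape[1]) * 0.01
    theta = 1.0                                                  # sliding modification threshold
    for _ in range(epochs):
        for x in X:
            y = float(w @ x)                                     # unit response
            w += eta * y * (y - theta) * x                       # BCM learning rule
            theta += (y ** 2 - theta) / tau_theta                # theta tracks E[y^2]
    return w
```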

Collaboration


Dive into Ben D. B. Willmore's collaborations.

Top Co-Authors

Jan W. H. Schnupp
City University of Hong Kong