Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Victor Benichoux is active.

Publication


Featured research published by Victor Benichoux.


Frontiers in Neuroinformatics | 2014

Equation-oriented specification of neural models for simulations

Marcel Stimberg; Dan F. M. Goodman; Victor Benichoux; Romain Brette

Simulating biological neuronal networks is a core method of research in computational neuroscience. A full specification of such a network model includes a description of the dynamics and state changes of neurons and synapses, as well as the synaptic connectivity patterns and the initial values of all parameters. A standard approach in neuronal modeling software is to build network models based on a library of pre-defined components and mechanisms; if a model component does not yet exist, it has to be defined in a special-purpose or general low-level language and potentially be compiled and linked with the simulator. Here we propose an alternative approach that allows flexible definition of models by writing textual descriptions based on mathematical notation. We demonstrate that this approach allows the definition of a wide range of models with minimal syntax. Furthermore, such explicit model descriptions allow the generation of executable code for various target languages and devices, since the description is not tied to an implementation. Finally, this approach also has advantages for readability and reproducibility, because the model description is fully explicit, and because it can be automatically parsed and transformed into formatted descriptions. The presented approach has been implemented in the Brian 2 simulator.
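
The equation-oriented style this abstract describes can be sketched with Brian 2's public Python API; the model and parameter values below are illustrative, not taken from the paper.

from brian2 import NeuronGroup, run, ms, mV

# The model is a textual description in mathematical notation, with physical
# units, rather than a reference to a pre-defined library component.
eqs = '''
dv/dt = (v_rest - v) / tau : volt (unless refractory)
v_rest : volt
tau : second
'''

group = NeuronGroup(100, eqs,
                    threshold='v > -50*mV',
                    reset='v = -60*mV',
                    refractory=5*ms,
                    method='exact')       # exact integration: the equation is linear
group.v = -60*mV
group.v_rest = '-55*mV + 10*mV*rand()'    # heterogeneous per-neuron parameters
group.tau = 10*ms

run(100*ms)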


eLife | 2013

Decoding neural responses to temporal cues for sound localization

Dan F. M. Goodman; Victor Benichoux; Romain Brette

The activity of sensory neural populations carries information about the environment. This may be extracted from neural activity using different strategies. In the auditory brainstem, a recent theory proposes that sound location in the horizontal plane is decoded from the relative summed activity of two populations in each hemisphere, whereas earlier theories hypothesized that the location was decoded from the identity of the most active cells. We tested the performance of various decoders of neural responses in increasingly complex acoustical situations, including spectrum variations, noise, and sound diffraction. We demonstrate that there is insufficient information in the pooled activity of each hemisphere to estimate sound direction in a reliable way consistent with behavior, whereas robust estimates can be obtained from neural activity by taking into account the heterogeneous tuning of cells. These estimates can still be obtained when only contralateral neural responses are used, consistent with unilateral lesion studies. DOI: http://dx.doi.org/10.7554/eLife.01312.001
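
The contrast between the two decoding strategies can be illustrated with a toy simulation; the Gaussian tuning curves and Poisson spiking here are invented for the example, not the paper's data.

import numpy as np

rng = np.random.default_rng(0)
directions = np.linspace(-90, 90, 181)        # candidate azimuths (degrees)
best = rng.uniform(-90, 90, 200)              # heterogeneous preferred directions
gain = 50.0
tuning = gain * np.exp(-0.5 * ((directions[None, :] - best[:, None]) / 30.0) ** 2)

true_dir = 37.0
rates = gain * np.exp(-0.5 * ((true_dir - best) / 30.0) ** 2)
spikes = rng.poisson(rates)                   # noisy spike counts, one per cell

# Hemispheric ("summed activity") decoder: one number per hemisphere.
diff = spikes[best >= 0].sum() - spikes[best < 0].sum()

# Pattern decoder exploiting the heterogeneous tuning (Poisson max likelihood).
log_lik = (spikes[:, None] * np.log(tuning + 1e-12) - tuning).sum(axis=0)
print(diff, directions[np.argmax(log_lik)])   # ML estimate lands near 37 degrees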


Frontiers in Neuroinformatics | 2011

Brian Hears: Online Auditory Processing Using Vectorization Over Channels

Bertrand Fontaine; Dan F. M. Goodman; Victor Benichoux; Romain Brette

The human cochlea includes about 3000 inner hair cells which filter sounds at frequencies between 20 Hz and 20 kHz. This massively parallel frequency analysis is reflected in models of auditory processing, which are often based on banks of filters. However, existing implementations do not exploit this parallelism. Here we propose algorithms to simulate these models by vectorizing computation over frequency channels, which are implemented in “Brian Hears,” a library for the spiking neural network simulator package “Brian.” This approach allows us to use high-level programming languages such as Python, because with vectorized operations, the computational cost of interpretation represents a small fraction of the total cost. This makes it possible to define and simulate complex models in a simple way, while all previous implementations were model-specific. In addition, we show that these algorithms can be naturally parallelized using graphics processing units, yielding substantial speed improvements. We demonstrate these algorithms with several state-of-the-art cochlear models, and show that they compare favorably with existing, less flexible, implementations.
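
The vectorization strategy itself is simple to sketch in plain NumPy: iterate over time samples in the interpreter, but update the state of all frequency channels in one array operation per sample. The one-pole filter below is an illustrative stand-in for the actual cochlear filters.

import numpy as np

fs = 44100.0
n_channels = 3000
cf = np.logspace(np.log10(20.0), np.log10(20e3), n_channels)  # centre frequencies

# Per-channel coefficients of an illustrative one-pole lowpass filter.
a = np.exp(-2 * np.pi * cf / fs)
b = 1.0 - a

x = np.random.randn(441)                  # 10 ms of noise, shared input
y = np.zeros(n_channels)                  # one filter state per channel
out = np.empty((x.size, n_channels))
for n in range(x.size):                   # interpreted loop over time only...
    y = b * x[n] + a * y                  # ...all 3000 channels updated per step
    out[n] = y

The cost of the interpreted loop is amortized over the 3000-wide vector operations, which is the effect the abstract describes.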


Journal of Neurophysiology | 2013

Predicting spike timing in highly synchronous auditory neurons at different sound levels

Bertrand Fontaine; Victor Benichoux; Philip X. Joris; Romain Brette

A challenge for sensory systems is to encode natural signals that vary in amplitude by orders of magnitude. The spike trains of neurons in the auditory system must represent the fine temporal structure of sounds despite a tremendous variation in sound level in natural environments. It has been shown in vitro that the transformation from dynamic signals into precise spike trains can be accurately captured by simple integrate-and-fire models. In this work, we show that the in vivo responses of cochlear nucleus bushy cells to sounds across a wide range of levels can be precisely predicted by deterministic integrate-and-fire models with adaptive spike threshold. Our model can predict both the spike timings and the firing rate in response to novel sounds, across a large input level range. A noisy version of the model accounts for the statistical structure of spike trains, including the reliability and temporal precision of responses. Spike threshold adaptation was critical to ensure that predictions remain accurate at different levels. These results confirm that simple integrate-and-fire models provide an accurate phenomenological account of spike train statistics and emphasize the functional relevance of spike threshold adaptation.
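
A deterministic integrate-and-fire neuron with an adaptive spike threshold, as described, can be sketched in a few lines; all constants below are illustrative, not the values fitted in the paper.

import numpy as np

dt, tau_m, tau_th = 1e-4, 5e-3, 10e-3   # time step and time constants (s)
v_rest, th0, dth = 0.0, 1.0, 0.5        # rest, baseline threshold, jump (a.u.)

def simulate(current):
    v, theta, spikes = v_rest, th0, []
    for n, I in enumerate(current):
        v += dt / tau_m * (v_rest - v + I)      # membrane integration
        theta += dt / tau_th * (th0 - theta)    # threshold relaxes to baseline
        if v >= theta:                          # crossing of the adaptive threshold
            spikes.append(n * dt)
            v = v_rest                          # deterministic reset
            theta += dth                        # threshold jumps after each spike
    return spikes

# At higher input levels the threshold settles higher, which is what keeps
# predicted spike timing stable across levels.
t = np.arange(0.0, 0.1, dt)
print(len(simulate(1.5 + np.sin(2 * np.pi * 300 * t))),
      len(simulate(3.0 + 2 * np.sin(2 * np.pi * 300 * t))))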


eLife | 2015

Neural tuning matches frequency-dependent time differences between the ears

Victor Benichoux; Bertrand Fontaine; Tom P. Franken; Shotaro Karino; Philip X. Joris; Romain Brette

The time it takes a sound to travel from source to ear differs between the ears and creates an interaural delay. It varies systematically with spatial direction and is generally modeled as a pure time delay, independent of frequency. In acoustical recordings, we found that interaural delay varies with frequency at a fine scale. In physiological recordings of midbrain neurons sensitive to interaural delay, we found that preferred delay also varies with sound frequency. Similar observations reported earlier were not incorporated in a functional framework. We find that the frequency dependence of acoustical and physiological interaural delays are matched in key respects. This suggests that binaural neurons are tuned to acoustical features of ecological environments, rather than to fixed interaural delays. Using recordings from the nerve and brainstem we show that this tuning may emerge from neurons detecting coincidences between input fibers that are mistuned in frequency. DOI: http://dx.doi.org/10.7554/eLife.06072.001
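
One way to see how a frequency-dependent interaural delay is extracted from recordings: the interaural phase difference at each frequency, divided by angular frequency, gives a per-frequency delay. The signals below are synthetic stand-ins for real binaural recordings.

import numpy as np

fs, n = 44100, 4096
f = np.fft.rfftfreq(n, 1 / fs)
left = np.random.randn(n)
L = np.fft.rfft(left)

# Impose a synthetic delay that shrinks with frequency (roughly 800 -> 600 us).
w = (f / 2000.0) ** 2 / (1 + (f / 2000.0) ** 2)
delay = 800e-6 * (1 - w) + 600e-6 * w
right = np.fft.irfft(L * np.exp(-2j * np.pi * f * delay), n)

# Recover ITD(f) from the unwrapped phase of the cross-spectrum.
ipd = np.unwrap(np.angle(L * np.conj(np.fft.rfft(right))))
itd = ipd[1:] / (2 * np.pi * f[1:])     # seconds, per frequency bin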


Journal of the Acoustical Society of America | 2016

On the variation of interaural time differences with frequency

Victor Benichoux; Marc Rébillat; Romain Brette

Interaural time difference (ITD) is a major cue to sound localization in humans and animals. For a given subject and position in space, ITD depends on frequency. This variation is analyzed here using a head-related transfer function (HRTF) database collected from the literature, comprising human HRTFs from 130 subjects and animal HRTFs from six specimens of different species. For humans, the ITD is found to vary with frequency in a way that shows consistent differences with respect to a spherical head model. Maximal ITD values were found to be about 800 μs at low frequencies and 600 μs at high frequencies. The ITD variation with frequency (up to 200 μs for some positions) occurs within the frequency range where ITD is used to judge the lateral position of a sound source. In addition, ITD varies substantially within the bandwidth of a single auditory filter, leading to systematic differences between envelope and fine-structure ITDs. Because the frequency-dependent pattern of ITD does not display spherical symmetries, it potentially provides cues to elevation and resolves front/back confusion. The fact that the relation between position and ITDs strongly depends on the sound's spectrum in turn suggests that humans and animals make use of this relationship for the localization of sounds.
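
The two maximal values quoted here are close to what the classical spherical-head limits predict; a quick check with a common textbook head radius (the radius and the Kuhn/Woodworth formulas are standard approximations, not the paper's analysis):

import numpy as np

a, c = 0.0875, 343.0          # head radius (m) and speed of sound (m/s)
theta = np.pi / 2             # source at 90 degrees azimuth

itd_low = 3 * (a / c) * np.sin(theta)          # low-frequency limit (Kuhn)
itd_high = (a / c) * (theta + np.sin(theta))   # high-frequency limit (Woodworth)
print(f"{itd_low * 1e6:.0f} us, {itd_high * 1e6:.0f} us")   # ~765 us, ~656 us

The measured frequency dependence, and its systematic deviations from these spherical predictions, are the subject of the paper.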


Journal of the Acoustical Society of America | 2014

Estimation of the low-frequency components of the head-related transfer functions of animals from photographs

Marc Rébillat; Victor Benichoux; Makoto Otani; Renaud Keriven; Romain Brette

Reliable animal head-related transfer function (HRTF) estimation procedures are needed for several practical applications, for example, to investigate the neuronal mechanisms of sound localization using virtual acoustic spaces or to obtain a quantitative description of the different localization cues available to a given animal species. Here, two established techniques are combined to estimate an animal's HRTF from photographs by taking into account as much morphological detail as possible. The first step of the method consists in building a three-dimensional model of the animal from pictures taken with a standard camera. The HRTFs are then estimated by means of a rapid boundary-element-method implementation. This combined method is validated on a taxidermist model of a cat by comparing binaural and monaural localization cues extracted from estimated and measured HRTFs. It is shown that it provides a reliable way to estimate low-frequency HRTFs, which are difficult to obtain with standard acoustical measurement procedures because of reflections.


Otology & Neurotology | 2017

Semicircular Canal Pressure Changes During High-intensity Acoustic Stimulation

Anne K. Maxwell; Renee M. Banakis Hartl; Nathaniel T. Greene; Victor Benichoux; Jameson K. Mattingly; Stephen P. Cass; Daniel J. Tollin

HYPOTHESIS: Acoustic stimulation generates measurable sound pressure levels in the semicircular canals.

BACKGROUND: High-intensity acoustic stimuli can cause hearing loss and balance disruptions. To examine the propagation of acoustic stimuli to the vestibular end-organs, we simultaneously measured fluid pressure in the cochlea and semicircular canals during both air- and bone-conducted sound presentation.

METHODS: Five full-cephalic human cadaveric heads were prepared bilaterally with a mastoidectomy and extended facial recess. Vestibular pressures were measured within the superior, lateral, and posterior semicircular canals, and referenced to intracochlear pressure within the scala vestibuli with fiber-optic pressure probes. Pressures were measured concurrently with laser Doppler vibrometry measurements of stapes velocity during stimulation with both air and bone conduction. Stimuli were pure tones between 100 Hz and 14 kHz, presented with custom closed-field loudspeakers for air-conducted sounds and via a commercially available bone-anchored device for bone-conducted sounds.

RESULTS: Pressures recorded in the superior, lateral, and posterior semicircular canals in response to sound stimulation were equal to or greater in magnitude than those recorded in the scala vestibuli (up to 20 dB higher). The pressure magnitudes varied across canals in a frequency-dependent manner.

CONCLUSION: High sound pressure levels were recorded in the semicircular canals with sound stimulation, suggesting that similar acoustical energy is transmitted to the semicircular canals and the cochlea. Since these intralabyrinthine pressures exceed intracochlear pressure levels, our results suggest that the vestibular end-organs may also be at risk for injury during exposure to high-intensity acoustic stimuli known to cause trauma in the auditory system.
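
For scale, the reported difference of up to 20 dB corresponds to a tenfold ratio in sound pressure:

# Level difference in dB relates to pressure ratio by dB = 20 * log10(p1 / p2),
# so 20 dB corresponds to a pressure ratio of:
print(10 ** (20 / 20))   # 10.0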


BMC Neuroscience | 2013

A unifying theory of ITD-based sound azimuth localization at the behavioral and neural levels

Victor Benichoux; Marcel Stimberg; Bertrand Fontaine; Romain Brette

In many species, azimuthal sound source localization relies on the processing of fine temporal differences between the incoming signals at the two ears (interaural time differences, ITDs). There is no consensus theory of ITD-based localization that explains the behavioral and neural data alike: the classical view of a place code for localization [1] is questioned by electrophysiological data [2], while its alternative is functionally inefficient [3]. We propose as a functional principle that the system performs a maximum-likelihood estimation of the position of the source given the cues in the stimulus. This Bayesian approach implies that the behavioral and neural data are constrained by the natural distributions of binaural cues, as observed in acoustical recordings of head-related transfer functions (HRTFs). We first record and analyze HRTFs in humans and cats, then discuss the implications of our hypothesis for psychoacoustical data in humans and electrophysiological data in the cat.

In a maximum-likelihood approach, the currently observed cue is compared to the a priori distribution of cues (marginal prior normalization). It is thus fundamental to uncover what the cues are and how they are distributed across the spectrum. We recorded HRTFs in different species and performed simulations of natural environments to quantify the robustness of ITD cues. We find that ITD is a frequency-dependent quantity that decreases by about 30% across the spectrum, and that such variations occur within the bandwidth of a cochlear filter. We also show how the distributions of cues vary across frequencies, in relation to various features of the environment such as reflections.

Because a constant-delay ITD is an insufficient cue, azimuth should be extracted by the system from a frequency-dependent representation of ITD. We test this prediction in a psychoacoustical setup. Using a matching paradigm, subjects are asked to adjust the lateralization of two noises with different frequency contents by varying the ITD of one of the stimuli. The HRTF data allow us to predict that the higher-frequency sound should be matched with a lower ITD than the lower-frequency sound. We show that this prediction is met both qualitatively and quantitatively in our experiment.

We then give a model of the function of binaural cells in the cat brainstem. We predict the responses of those neurons to binaural beats at different frequencies from the cat HRTFs, and show how this simple model can explain previously observed features of the electrophysiological literature [4], namely the presence of cells sensitive to frequency-dependent interaural delays.

Finally, we propose a spiking neuron implementation of this maximum-likelihood principle. Cells are tuned to the frequency-dependent cues of their best position by means of both cochlear mismatches and axonal delays [5]. The Bayesian marginal prior normalization is implemented through inhibition. Probing the model with various input sources in a simulated virtual environment, we show that the network localizes sound sources accurately, comparably to an optimal Bayesian observer. Moreover, the model predicts qualitative differences in these observations for mammals of different sizes, such as the cat and the gerbil.
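
The maximum-likelihood principle at the core of this abstract can be written down compactly: pick the azimuth whose expected frequency-dependent ITD pattern best explains the observed cues. The ITD model and noise level below are invented for illustration.

import numpy as np

azimuths = np.linspace(-90, 90, 181)      # candidate positions (degrees)
freqs = np.linspace(300.0, 1500.0, 30)    # Hz, range where ITD dominates

def itd_model(az, f):
    # Toy frequency-dependent ITD, decreasing by ~30% across the spectrum.
    return 7.5e-4 * np.sin(np.radians(az)) * (1.3 - 0.3 * f / f.max())

rng = np.random.default_rng(1)
true_az, sigma = 25.0, 40e-6              # true azimuth, 40 us cue noise
observed = itd_model(true_az, freqs) + rng.normal(0.0, sigma, freqs.size)

# Gaussian log-likelihood of the observed cues under each candidate azimuth.
log_lik = np.array([-np.sum((observed - itd_model(az, freqs)) ** 2)
                    for az in azimuths]) / (2 * sigma ** 2)
print(azimuths[np.argmax(log_lik)])       # close to 25 degrees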


BMC Neuroscience | 2013

Brian 2 - the second coming: spiking neural network simulation in Python with code generation

Marcel Stimberg; Dan F. M. Goodman; Victor Benichoux; Romain Brette

Brian 2 is a fundamental rewrite of the Brian [1,2] simulator for spiking neural networks. Brian is written in the Python programming language and focuses on simplicity and extensibility: neuronal models can be described using mathematical formulae (differential equations) and with the use of physical units. Depending on the model equations, several integration methods are available, ranging from exact integration for linear differential equations to numerical integration for arbitrarily complex equations. The same formalism can also be used to specify synaptic models, allowing the user to easily define complex synapse models. Brian 2 keeps most of the syntax and functionality consistent with previous versions of Brian, but achieves more consistency and modularity, as well as adding new features such as a simpler and more general formulation of refractoriness. A consistent interface centered around human-readable descriptions using mathematical notation allows the specification of neuronal models (including complex reset, threshold and refractory conditions), synaptic models (including complex plasticity rules) and synaptic connections. Every aspect of Brian 2 has been designed with extensibility and adaptability in mind, which, for example, makes it straightforward to implement new numerical integration methods. Even though Brian 2 benefits from the ease of use and the flexibility of the Python programming language, its performance is not limited by the speed of Python: at the core of the simulation machinery, Brian 2 makes use of fully automated runtime code generation [3], allowing the same model to be run in the Python interpreter, in compiled C++ code or on a GPU using CUDA libraries [4]. The code generation system is designed to be extensible to new target languages, and its output can also be used on its own: for situations where high performance is necessary and/or where a Python interpreter is not available (for example, in robotics applications), Brian 2 offers tools to assist in assembling the generated code into a stand-alone version that runs independently of Brian or a Python interpreter. To ensure the correctness and maintainability of the software, Brian 2 includes an extensive, full-coverage test suite. Debugging of simulation scripts is supported by a configurable logging system, allowing simple monitoring of the internal details of the simulation process. Brian is made available under a free software license and all development takes place in public code repositories [5].
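
In current Brian 2 releases, the stand-alone C++ code generation described above is exposed as a one-line switch (the directory name here is arbitrary):

from brian2 import set_device, NeuronGroup, run, ms, mV

set_device('cpp_standalone', directory='brian_standalone')  # emit a C++ project

group = NeuronGroup(1000, 'dv/dt = -v / (10*ms) : volt',
                    threshold='v > 10*mV', reset='v = 0*mV')
group.v = '20*mV * rand()'
run(100*ms)   # compiles and executes the generated code

The same script runs unchanged in the Python interpreter when the set_device line is removed.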

Collaboration


Dive into Victor Benichoux's collaborations.

Top Co-Authors

Romain Brette, École Normale Supérieure
Daniel J. Tollin, University of Colorado Denver
Bertrand Fontaine, École Normale Supérieure
Philip X. Joris, Katholieke Universiteit Leuven
Marc Rébillat, Arts et Métiers ParisTech
Marcel Stimberg, French Institute of Health and Medical Research
Kelsey L. Anbuhl, University of Colorado Denver