Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Elie Bienenstock is active.

Publication


Featured research published by Elie Bienenstock.


Neural Computation | 1992

Neural networks and the bias/variance dilemma

Stuart Geman; Elie Bienenstock; René Doursat

Feedforward neural networks trained by error backpropagation are examples of nonparametric regression estimators. We present a tutorial on nonparametric inference and its relation to neural networks, and we use the statistical viewpoint to highlight strengths and weaknesses of neural models. We illustrate the main points with some recognition experiments involving artificial data as well as handwritten numerals. In way of conclusion, we suggest that current-generation feedforward neural networks are largely inadequate for difficult problems in machine perception and machine learning, regardless of parallel-versus-serial hardware or other implementation issues. Furthermore, we suggest that the fundamental challenges in neural modeling are about representation rather than learning per se. This last point is supported by additional experiments with handwritten numerals.
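
The bias/variance dilemma the abstract refers to can be seen numerically. The sketch below (a hypothetical setup for illustration, not an experiment from the paper) fits polynomials of increasing degree to noisy samples of a known function and estimates squared bias and variance by Monte Carlo: low-degree fits are biased, high-degree fits are variable.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x)   # true regression function
x = np.linspace(0, 1, 50)             # fixed design points

def bias_variance(degree, n_trials=200, noise=0.3):
    """Monte-Carlo estimate of squared bias and variance of a
    polynomial least-squares fit, averaged over the design points."""
    preds = np.empty((n_trials, x.size))
    for t in range(n_trials):
        y = f(x) + noise * rng.standard_normal(x.size)
        coef = np.polyfit(x, y, degree)
        preds[t] = np.polyval(coef, x)
    mean_pred = preds.mean(axis=0)
    sq_bias = np.mean((mean_pred - f(x)) ** 2)   # (E[fit] - truth)^2
    variance = np.mean(preds.var(axis=0))        # spread across samples
    return sq_bias, variance

for d in (1, 3, 15):
    b, v = bias_variance(d)
    print(f"degree {d:2d}: bias^2 = {b:.4f}, variance = {v:.4f}")
```

Raising the degree trades bias for variance; neither extreme minimizes total error, which is the dilemma the paper poses for nonparametric estimators such as neural networks.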


Neural Computation | 2006

Bayesian Population Decoding of Motor Cortical Activity Using a Kalman Filter

Wei Wu; Yun Gao; Elie Bienenstock; John P. Donoghue; Michael J. Black

Effective neural motor prostheses require a method for decoding neural activity representing desired movement. In particular, the accurate reconstruction of a continuous motion signal is necessary for the control of devices such as computer cursors, robots, or a patient's own paralyzed limbs. For such applications, we developed a real-time system that uses Bayesian inference techniques to estimate hand motion from the firing rates of multiple neurons. In this study, we used recordings that were previously made in the arm area of primary motor cortex in awake behaving monkeys using a chronically implanted multielectrode microarray. Bayesian inference involves computing the posterior probability of the hand motion conditioned on a sequence of observed firing rates; this is formulated in terms of the product of a likelihood and a prior. The likelihood term models the probability of firing rates given a particular hand motion. We found that a linear gaussian model could be used to approximate this likelihood and could be readily learned from a small amount of training data. The prior term defines a probabilistic model of hand kinematics and was also taken to be a linear gaussian model. Decoding was performed using a Kalman filter, which gives an efficient recursive method for Bayesian inference when the likelihood and prior are linear and gaussian. In off-line experiments, the Kalman filter reconstructions of hand trajectory were more accurate than previously reported results. The resulting decoding algorithm provides a principled probabilistic model of motor-cortical coding, decodes hand motion in real time, provides an estimate of uncertainty, and is straightforward to implement. Additionally, the formulation unifies and extends previous models of neural coding while providing insights into the motor-cortical code.


Network: Computation In Neural Systems | 1995

A model of neocortex

Elie Bienenstock

Prompted by considerations about (i) the compositionality of cognitive functions, (ii) the physiology of individual cortical neurons, (iii) the role of accurately timed spike patterns in cortex, and (iv) the regulation of global cortical activity, we suggest that the dynamics of cortex on the 1-ms time scale may be described as the activation of circuits of the synfire-chain type (Abeles 1982, 1991). We suggest that the fundamental computational unit in cortex may be a wave-like spatio-temporal pattern of synfire type, and that the binding mechanism underlying compositionality in cognition may be the accurate synchronization of synfire waves that propagate simultaneously on distinct, weakly coupled, synfire chains. We propose that Hebbian synaptic plasticity may result in a superposition of synfire chains in cortical connectivity, whereby a given neuron participates in many distinct chains. We investigate the behaviour of a much-simplified model of cortical dynamics devised along these principles. Calcula...
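
The synfire-chain dynamics referred to above can be caricatured in a toy simulation (an illustration only, far simpler than the paper's model): activity propagates as a wave from pool to pool when enough neurons in the previous pool fired and are connected forward.

```python
import numpy as np

def synfire_wave(n_pools=10, pool_size=100, threshold=50,
                 p_connect=0.9, seed=0):
    """Minimal binary synfire-chain sketch: a neuron in the next pool
    fires iff it receives at least `threshold` spikes from currently
    active neurons, through random feedforward connections.
    Returns the number of active neurons in each pool."""
    rng = np.random.default_rng(seed)
    active = np.full(pool_size, True)          # ignite the first pool
    counts = [int(active.sum())]
    for _ in range(n_pools - 1):
        conn = rng.random((pool_size, pool_size)) < p_connect
        inputs = conn[:, active].sum(axis=1)   # spikes each neuron receives
        active = inputs >= threshold
        counts.append(int(active.sum()))
    return counts
```

With dense connectivity the wave propagates stably down the chain; with sparse connectivity it dies out after one step, which is the basic stability property synfire models rely on.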


Archive | 1986

Disordered systems and biological organization

Elie Bienenstock; Françoise Fogelman-Soulié; Gérard Weisbuch

The NATO workshop on Disordered Systems and Biological Organization was attended, in March 1985, by 65 scientists representing a large variety of fields: Mathematics, Computer Science, Physics and Biology. It was the purpose of this interdisciplinary workshop to shed light on the conceptual connections existing between fields of research apparently as different as: automata theory, combinatorial optimization, spin glasses and modeling of biological systems, all of them concerned with the global organization of complex systems, locally interconnected. Common to many contributions to this volume is the underlying analogy between biological systems and spin glasses: they share the same properties of stability and diversity. This is the case for instance of primary sequences of biopolymers like proteins and nucleic acids considered as the result of mutation-selection processes [P. W. Anderson, 1983] or of evolving biological species [G. Weisbuch, 1984]. Some of the most striking aspects of our cognitive apparatus, involved in learning and recognition [J. Hopfield, 1982], can also be described in terms of stability and diversity in a suitable configuration space. These interpretations and preoccupations merge with those of theoretical biologists like S. Kauffman [1969] (genetic networks) and of mathematicians of automata theory: the dynamics of networks of automata can be interpreted in terms of organization of a system in multiple possible attractors. The present introduction outlines the relationships between the contributions presented at the workshop and briefly discusses each paper in its particular scientific context.


EPL | 1987

A Neural Network for Invariant Pattern Recognition

Elie Bienenstock; C. von der Malsburg

We consider the problem of recognizing a shifted and distorted 2-dimensional shape. This task is formalized as a problem of labelled graph matching. To solve this problem, we construct an energy function similar to the one used in a previous paper for solving the subgraph retrieval problem. We present a neuronal model for invariant pattern recognition, based on this solution to subgraph retrieval and graph matching.


EPL | 1987

A Neural Network for the Retrieval of Superimposed Connection Patterns

C. von der Malsburg; Elie Bienenstock

The principle of associative memory is extended to a system with dynamical links capable of retrieval of superimposed connection patterns. The system consists of formalized neurons. Its dynamics is described by two separate Hamiltonians, one for spins and one for links. The spin part is treated in analogy to the Ising system on a 2D grid. Several such network patterns, related by permutations of neurons, are superimposed. Energy minima correspond to the activation of one connection pattern and the deactivation of all others. One important application of this system is invariant pattern recognition.


NATO ASI series. Series F : computer and system sciences | 1986

Statistical coding and short-term synaptic plasticity: a scheme for knowledge representation in the brain

Christoph von der Malsburg; Elie Bienenstock

This work is a theoretical investigation of some consequences of the hypothesis that transmission efficacies of synapses in the Central Nervous System (CNS) undergo modification on a short time-scale. Short-term synaptic plasticity appears to be an almost necessary condition for the existence of activity states in the CNS which are stable for about 1 sec., the time-scale of psychological processes. It gives rise to joint “activity-and-connectivity” dynamics. This dynamics selects and stabilizes particular high-order statistical relationships in the timing of neuronal firing; at the same time, it selects and stabilizes particular connectivity patterns. In analogy to statistical mechanics, these stable states, the attractors of the dynamics, can be viewed as the minima of a hamiltonian, or cost function. It is found that these low-cost states, termed synaptic patterns, are topologically organized. Two important properties of synaptic patterns are demonstrated: (i) synaptic patterns can be “memorized” and later “retrieved”, and (ii) synaptic patterns have a tendency to assemble into compound patterns according to simple topological rules. A model of position-invariant and size-invariant pattern recognition based on these two properties is briefly described. It is suggested that the scheme of a synaptic pattern may be more adapted than the classical cell-assembly notion for explaining cognitive abilities such as generalization and categorization, which pertain to the notion of invariance.


Neurocomputing | 2003

At what time scale does the nervous system operate?

Nicholas G. Hatsopoulos; Stuart Geman; Asohan Amarasingham; Elie Bienenstock

A novel statistical strategy, the spike jitter method, was developed to assess temporal structure in spike trains from neuronal ensembles. Its key feature is the introduction of a null hypothesis that assumes a uniform relative likelihood of observing a spike at one temporal location versus another within a small temporal window. We applied the method to simultaneously recorded motor cortical neurons in behaving monkeys and examined the occurrence of finely timed synchrony between neuron pairs. Evidence was found for millisecond synchrony that could only be accounted for by assuming fine temporal structure in the constituent neurons' spike trains. The method was also applied to higher-order patterns.
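
The jitter null hypothesis can be sketched as a resampling test (an illustration of the idea, not the paper's exact procedure): count same-bin coincidences between two binary spike trains, then re-count after perturbing each spike within a small window. Synchrony finer than the window survives as an excess over the surrogates.

```python
import numpy as np

def jitter_synchrony_test(train_a, train_b, window=5,
                          n_surrogates=1000, seed=0):
    """Jitter-style surrogate test: observed coincidence count between
    two binary spike trains, and the fraction of surrogates (train_a
    spikes jittered uniformly within +/- window bins) that match or
    exceed it. A small fraction suggests fine temporal structure."""
    rng = np.random.default_rng(seed)
    times_a = np.flatnonzero(train_a)
    observed = int(np.sum(train_a * train_b))
    n = len(train_a)
    exceed = 0
    for _ in range(n_surrogates):
        shifted = times_a + rng.integers(-window, window + 1, times_a.size)
        shifted = np.clip(shifted, 0, n - 1)
        surrogate = np.zeros(n, dtype=int)
        surrogate[shifted] = 1                 # jittered copy of train_a
        if np.sum(surrogate * train_b) >= observed:
            exceed += 1
    return observed, exceed / n_surrogates
```

Because the surrogates preserve each train's firing rate at the window's time scale, any excess coincidences must come from structure finer than the window, which is the logic of the method described above.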


Entropy | 2008

Estimating the Entropy of Binary Time Series: Methodology, Some Theory and a Simulation Study

Yun Gao; Ioannis Kontoyiannis; Elie Bienenstock

Partly motivated by entropy-estimation problems in neuroscience, we present a detailed and extensive comparison between some of the most popular and effective entropy estimation methods used in practice: the plug-in method, four different estimators based on the Lempel-Ziv (LZ) family of data compression algorithms, an estimator based on the Context-Tree Weighting (CTW) method, and the renewal entropy estimator. Methodology: Three new entropy estimators are introduced: two new LZ-based estimators, and the "renewal entropy estimator," which is tailored to data generated by a binary renewal process. For two of the four LZ-based estimators, a bootstrap procedure is described for evaluating their standard error, and a practical rule of thumb is heuristically derived for selecting the values of their parameters in practice. Theory: We prove that, unlike their earlier versions, the two new LZ-based estimators are universally consistent, that is, they converge to the entropy rate for every finite-valued, stationary and ergodic process. An effective method is derived for the accurate approximation of the entropy rate of a finite-state hidden Markov model (HMM) with known distribution. Heuristic calculations are presented and approximate formulas are derived for evaluating the bias and the standard error of each estimator.
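
The plug-in method, the simplest baseline in the comparison above, can be sketched in a few lines (a sketch of the general technique, not the paper's implementation): estimate block probabilities empirically and divide the block entropy by the block length.

```python
import math
from collections import Counter

def plugin_entropy_rate(bits, block_len):
    """Plug-in entropy-rate estimate in bits per symbol: empirical
    Shannon entropy of overlapping blocks of length block_len,
    normalized by the block length."""
    blocks = [tuple(bits[i:i + block_len])
              for i in range(len(bits) - block_len + 1)]
    counts = Counter(blocks)
    total = len(blocks)
    h = -sum((c / total) * math.log2(c / total)
             for c in counts.values())
    return h / block_len
```

The estimator's weakness, which motivates the LZ- and CTW-based alternatives studied in the paper, is that it only sees dependence up to `block_len` symbols and its bias grows quickly as `block_len` increases relative to the data length.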


International IEEE/EMBS Conference on Neural Engineering | 2003

A quantitative comparison of linear and non-linear models of motor cortical activity for the encoding and decoding of arm motions

Yun Gao; Michael J. Black; Elie Bienenstock; Wei Wu; John P. Donoghue

Many models have been proposed for the motor cortical encoding of arm motion. In particular, recent work has shown that simple linear models can be used to approximate the firing rates of a population of cells in primary motor cortex as a function of the position, velocity, and acceleration of the hand. Here we perform a systematic study of these linear models and of various non-linear generalizations. Specifically we consider linear Gaussian models, Generalized Linear Models (GLM), and Generalized Additive Models (GAM) of neural encoding. We evaluate their ability to represent the relationship between hand motion and neural activity, by looking at the likelihood of observed patterns of neural firing in a test data set and by evaluating the decoding performance of the different models (i.e. in terms of the error in reconstructing hand position from firing rates). To provide a level playing field for evaluating the decoding performance, we test all the models using a general recursive Bayesian estimator known as the particle filter, thus isolating the effect of the encoding model on reconstruction accuracy.
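
The particle filter used above as the common decoder can be sketched in its generic bootstrap form (assumed here, with multinomial resampling; the encoding likelihood is a pluggable function, not one of the paper's fitted models): propagate particles with the kinematic prior, weight them by the encoding model, and resample.

```python
import numpy as np

def particle_filter(Z, loglik, A, W, n_particles=2000, seed=0):
    """Bootstrap particle filter: particles follow the linear kinematic
    prior x_t = A x_{t-1} + w (w ~ N(0, W)); `loglik(z, particles)`
    returns each particle's log-likelihood under the encoding model.
    Returns the posterior-mean state at each time step."""
    rng = np.random.default_rng(seed)
    dim = A.shape[0]
    particles = rng.standard_normal((n_particles, dim))
    means = []
    for z in Z:
        # propagate with the prior
        noise = rng.multivariate_normal(np.zeros(dim), W, n_particles)
        particles = particles @ A.T + noise
        # weight with the encoding likelihood (normalized for stability)
        logw = loglik(z, particles)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(w @ particles)
        # multinomial resampling
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]
    return np.array(means)
```

Because only `loglik` changes between encoding models, this decoder provides the level playing field the abstract describes: reconstruction differences reflect the encoding model, not the inference machinery.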

Collaboration


Dive into Elie Bienenstock's collaborations.

Top Co-Authors

Wei Wu (Florida State University)
Simon J. Thorpe (Centre national de la recherche scientifique)
Yves Frégnac (Centre national de la recherche scientifique)
Daniel E. Shulz (Centre national de la recherche scientifique)