Publication


Featured research published by Izzet Burak Yildiz.


Neural Networks | 2012

Re-visiting the echo state property

Izzet Burak Yildiz; Herbert Jaeger; Stefan J. Kiebel

An echo state network (ESN) consists of a large, randomly connected neural network, the reservoir, which is driven by an input signal and projects to output units. During training, only the connections from the reservoir to these output units are learned. A key requisite for output-only training is the echo state property (ESP), which means that the effect of initial conditions should vanish as time passes. In this paper, we use analytical examples to show that a widely used criterion for the ESP, the spectral radius of the weight matrix being smaller than unity, is not sufficient to satisfy the echo state property. We obtain these examples by investigating local bifurcation properties of the standard ESNs. Moreover, we provide new sufficient conditions for the echo state property of standard sigmoid and leaky integrator ESNs. We furthermore suggest an improved technical definition of the echo state property, and discuss what practitioners should (and should not) observe when they optimize their reservoirs for specific tasks.
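The spectral-radius heuristic discussed in this abstract can be sketched in a few lines of Python. This is a hypothetical toy (a 2-unit tanh reservoir with made-up input weights), and, as the paper shows, scaling the reservoir to a spectral radius below one does not by itself guarantee the ESP:

```python
import math
import random

def spectral_radius_2x2(w):
    # Analytic spectral radius of a 2x2 matrix from trace and determinant.
    (a, b), (c, d) = w
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc >= 0:  # two real eigenvalues
        r = math.sqrt(disc)
        return max(abs((tr + r) / 2), abs((tr - r) / 2))
    return math.sqrt(det)  # complex conjugate pair: |lambda| = sqrt(det)

def scale_to_radius(w, target):
    # Common ESN recipe: rescale the random reservoir to a target radius.
    rho = spectral_radius_2x2(w)
    return [[x * target / rho for x in row] for row in w]

def esn_step(w, w_in, x, u):
    # Standard sigmoid (tanh) ESN update: x' = tanh(W x + W_in u).
    return [math.tanh(sum(wij * xj for wij, xj in zip(row, x)) + wi * u)
            for row, wi in zip(w, w_in)]

random.seed(0)
w = scale_to_radius([[random.gauss(0, 1) for _ in range(2)]
                     for _ in range(2)], 0.9)
w_in = [0.5, -0.3]

# Drive two different initial states with the same input and watch
# whether their difference washes out (the intuition behind the ESP).
x1, x2 = [1.0, -1.0], [-1.0, 1.0]
for t in range(200):
    u = math.sin(0.1 * t)
    x1, x2 = esn_step(w, w_in, x1, u), esn_step(w, w_in, x2, u)
diff = max(abs(a - b) for a, b in zip(x1, x2))

print(round(spectral_radius_2x2(w), 6))  # 0.9
print(diff)  # small in this particular run, but rho < 1 is no guarantee
```

The spectral radius only constrains the linearization at the origin; the paper's counterexamples exploit local bifurcations away from it, which is why its new sufficient conditions go beyond this simple scaling rule.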


PLOS Computational Biology | 2011

A Hierarchical Neuronal Model for Generation and Online Recognition of Birdsongs

Izzet Burak Yildiz; Stefan J. Kiebel

The neuronal system underlying learning, generation and recognition of song in birds is one of the best-studied systems in the neurosciences. Here, we use these experimental findings to derive a neurobiologically plausible, dynamic, hierarchical model of birdsong generation and transform it into a functional model of birdsong recognition. The generation model consists of neuronal rate models and includes critical anatomical components like the premotor song-control nucleus HVC (proper name), the premotor nucleus RA (robust nucleus of the arcopallium), and a model of the syringeal and respiratory organs. We use Bayesian inference of this dynamical system to derive a possible mechanism for how birds can efficiently and robustly recognize the songs of their conspecifics in an online fashion. Our results indicate that the specific way birdsong is generated enables a listening bird to robustly and rapidly perceive embedded information at multiple time scales of a song. The resulting mechanism can be useful for investigating the functional roles of auditory recognition areas and providing predictions for future birdsong experiments.


PLOS Computational Biology | 2013

From birdsong to human speech recognition: Bayesian inference on a hierarchy of nonlinear dynamical systems

Izzet Burak Yildiz; Katharina von Kriegstein; Stefan J. Kiebel

Our knowledge about the computational mechanisms underlying human learning and recognition of sound sequences, especially speech, is still very limited. One difficulty in deciphering the exact means by which humans recognize speech is that there are scarce experimental findings at a neuronal, microscopic level. Here, we show that our neuronal-computational understanding of speech learning and recognition may be vastly improved by looking at an animal model, i.e., the songbird, which faces the same challenge as humans: to learn and decode complex auditory input, in an online fashion. Motivated by striking similarities between the human and songbird neural recognition systems at the macroscopic level, we assumed that the human brain uses the same computational principles at a microscopic level and translated a birdsong model into a novel human sound learning and recognition model with an emphasis on speech. We show that the resulting Bayesian model with a hierarchy of nonlinear dynamical systems can learn speech samples such as words rapidly and recognize them robustly, even in adverse conditions. In addition, we show that recognition can be performed even when words are spoken by different speakers and with different accents—an everyday situation in which current state-of-the-art speech recognition models often fail. The model can also be used to qualitatively explain behavioral data on human speech learning and derive predictions for future experiments.


The Journal of Neuroscience | 2016

Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields

Izzet Burak Yildiz; Nima Mesgarani; Sophie Denève

A primary goal of auditory neuroscience is to identify the sound features extracted and represented by auditory neurons. Linear encoding models, which describe neural responses as a function of the stimulus, have been primarily used for this purpose. Here, we provide theoretical arguments and experimental evidence in support of an alternative approach, based on decoding the stimulus from the neural response. We used a Bayesian normative approach to predict the responses of neurons detecting relevant auditory features, despite ambiguities and noise. We compared the model predictions to recordings from the primary auditory cortex of ferrets and found that: (1) the decoding filters of auditory neurons resemble the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynamics of responses better than a linear encoding model of similar complexity; and (3) the decoding model accounts for the accuracy with which the stimulus is represented in neural activity, whereas the linear encoding model performs very poorly. Most importantly, our model predicts that neuronal responses are fundamentally shaped by "explaining away," a divisive competition between alternative interpretations of the auditory scene.

SIGNIFICANCE STATEMENT Neural responses in the auditory cortex are dynamic, nonlinear, and hard to predict. Traditionally, encoding models have been used to describe neural responses as a function of the stimulus. However, in addition to external stimulation, neural activity is strongly modulated by the responses of other neurons in the network. We hypothesized that auditory neurons aim to collectively decode their stimulus. In particular, a stimulus feature that is decoded (or explained away) by one neuron is not explained by another. We demonstrated that this novel Bayesian decoding model is better at capturing the dynamic responses of cortical neurons in ferrets. Whereas the linear encoding model poorly reflects the selectivity of neurons, the decoding model can account for the strong nonlinearities observed in neural data.
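The "explaining away" idea can be illustrated with a toy greedy residual scheme (hypothetical filters and stimulus; the paper's model is a full Bayesian network fit to cortical recordings, not this caricature):

```python
# Two units with overlapping feature filters decode a stimulus by
# subtracting (explaining away) what the other unit already accounts for.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

features = [[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]]  # overlapping filters
stimulus = [1.0, 1.0, 0.0]                     # exactly the first feature

# Encoding view: each unit filters the stimulus independently,
# so the second unit responds even though it explains nothing new.
naive = [dot(f, stimulus) for f in features]

# Decoding view: units take turns explaining the residual stimulus,
# so a feature claimed by one unit is explained away for the other.
residual = list(stimulus)
responses = [0.0, 0.0]
for _ in range(50):
    for i, f in enumerate(features):
        step = dot(f, residual) / dot(f, f)
        responses[i] += step
        residual = [r - step * fi for r, fi in zip(residual, f)]

print(naive)      # [2.0, 1.0]  -> both units respond
print(responses)  # [1.0, 0.0]  -> the second unit is silenced
```

The divisive competition in the paper plays the same role as the residual subtraction here: once one interpretation accounts for a feature, competing interpretations stop responding to it.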


Nonlinearity | 2011

Monotonicity of the Lozi family and the zero entropy locus

Izzet Burak Yildiz

Ishii and Sands (1998 Commun. Math. Phys. 198 379–406) show the monotonicity of the Lozi family in a neighbourhood of the a-axis in the a–b parameter space. We show the monotonicity of the entropy in the vertical direction around a = 2 and in some other directions for 1 < a ≤ 2. Also, we give some rigorous and numerical results for the parameters at which the Lozi family has zero entropy.
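For readers unfamiliar with the family, the map itself is short to state and iterate. This sketch uses the classical chaotic parameters a = 1.7, b = 0.5 for illustration, not the zero-entropy parameters the paper studies:

```python
# The Lozi family: L_{a,b}(x, y) = (1 + y - a|x|, b x).

def lozi(x, y, a=1.7, b=0.5):
    return 1.0 + y - a * abs(x), b * x

a, b = 1.7, 0.5
# The fixed point with x > 0 solves x = 1 + b x - a x,
# i.e. x = 1 / (1 + a - b).
fx = 1.0 / (1.0 + a - b)
fy = b * fx
x1, y1 = lozi(fx, fy, a, b)
print(abs(x1 - fx) < 1e-9 and abs(y1 - fy) < 1e-9)  # True

# A typical orbit near the strange attractor.
x, y = 0.1, 0.1
orbit = []
for _ in range(1000):
    x, y = lozi(x, y)
    orbit.append((x, y))
print(len(orbit))  # 1000
```

The paper's results concern how the topological entropy of this map varies with (a, b); the code only shows the map being iterated.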


BMC Neuroscience | 2013

Learning speech recognition from songbirds

Izzet Burak Yildiz; Katharina von Kriegstein; Stefan J. Kiebel

Our knowledge about the computational mechanisms underlying human learning and recognition of speech is still very limited [1]. One difficulty in deciphering the exact means by which humans recognize speech is that there are scarce experimental findings at a neuronal, microscopic level. Here, we show that our neuronal-computational understanding of speech learning and recognition may be vastly improved by looking at a different species, i.e., the songbird, which faces the same challenge as humans: to learn and decode complex auditory input partitioned into sequences of syllables, in an online fashion [2]. Motivated by striking similarities between the human and songbird neural recognition systems at the macroscopic level [3,4], we assumed that the human brain uses the same computational principles at a microscopic level and translated a birdsong model [5] into a human speech learning and recognition model. The model performs a Bayesian version of dynamical, predictive coding [6] based on an internal generative model of how speech dynamics are produced. This generative model consists of a two-level hierarchy of recurrent neural networks similar to the song production hierarchy of songbirds [7]. In this predictive coding scheme, predictions about the future trajectory of the speech stimulus are dynamically formed based on a learned repertoire and the ongoing stimulus. The hierarchical inference uses top-down and bottom-up messages, which aim to minimize an error signal, the so-called prediction error. We show that the resulting neurobiologically plausible model can learn words rapidly and recognize them robustly, even in adverse conditions. Also, the model is capable of dealing with variations in speech rate and competition by multiple speakers. In addition, we show that recognition can be performed even when words are spoken by different speakers and with different accents, an everyday situation in which current state-of-the-art speech recognition models often fail.
We use the model to provide computational explanations for inter-individual differences in accent adaptation, as well as age of acquisition effects in second language learning. For the latter, we qualitatively modeled behavioral results from an experimental study [8].
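The predictive coding loop described above can be caricatured with a scalar toy model (hypothetical: a single hidden cause and a tanh generative map, whereas the actual model uses a two-level hierarchy of recurrent networks):

```python
import math

def recognize(u, steps=500, lr=0.5):
    # Infer the hidden cause mu of a stimulus u by descending the
    # squared prediction error 0.5 * (u - tanh(mu))**2.
    mu = 0.0  # top-level estimate of the hidden cause
    for _ in range(steps):
        pred = math.tanh(mu)              # top-down prediction
        err = u - pred                    # bottom-up prediction error
        mu += lr * err * (1.0 - pred ** 2)  # gradient step on the error
    return mu

# A stimulus generated by a "true" cause of 1.2 is recognized by
# driving the prediction error toward zero.
u = math.tanh(1.2)
mu_hat = recognize(u)
print(round(mu_hat, 4))  # 1.2
```

Top-down messages here are the predictions `tanh(mu)`; bottom-up messages are the errors `u - pred`; recognition is the settling of this loop, which is the same logic the hierarchical model applies across two levels and in continuous time.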


Current Biology | 2015

Visual and Motor Cortices Differentially Support the Translation of Foreign Language Words

Katja M. Mayer; Izzet Burak Yildiz; Manuela Macedonia; Katharina von Kriegstein


arXiv: Neurons and Cognition | 2012

Online discrimination of nonlinear dynamics with switching differential equations

Sebastian Bitzer; Izzet Burak Yildiz; Stefan J. Kiebel


Frontiers in Computational Neuroscience | 2012

Learning and recognizing human speech using dynamical, hierarchical Bayesian inference

Izzet Burak Yildiz; Stefan J. Kiebel


Piecewise and Low-Dimensional Dynamics Conference | 2011

Topological entropy of the Lozi maps

Izzet Burak Yildiz

Collaboration


Top co-authors of Izzet Burak Yildiz:

Stefan J. Kiebel (Dresden University of Technology)

Herbert Jaeger (Jacobs University Bremen)

Sophie Denève (École Normale Supérieure)