Publication


Featured research published by Takuya Isomura.


Scientific Reports | 2016

A Local Learning Rule for Independent Component Analysis

Takuya Isomura; Taro Toyoizumi

Humans can separately recognize independent sources when they sense their superposition. This decomposition is mathematically formulated as independent component analysis (ICA). While a few biologically plausible learning rules, so-called local learning rules, have been proposed to achieve ICA, their performance varies depending on the parameters characterizing the mixed signals. Here, we propose a new learning rule that is both easy to implement and reliable. Both mathematical and numerical analyses confirm that the proposed rule outperforms other local learning rules over a wide range of parameters. Notably, unlike other rules, the proposed rule can separate independent sources without any preprocessing, even if the number of sources is unknown. The successful performance of the proposed rule is then demonstrated using natural images and movies. We discuss the implications of this finding for our understanding of neuronal information processing and its promising applications to neuromorphic engineering.
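As a point of reference, the classic natural-gradient ICA update (Amari's rule) can be sketched in a few lines of NumPy. This is not the local rule proposed in the paper, only the standard baseline against which such local rules are compared; the mixing matrix, source statistics, and learning rate below are illustrative.

```python
import numpy as np

# Illustrative ICA demixing via the natural-gradient (Amari) rule --
# a standard baseline, not the local learning rule proposed in the paper.
rng = np.random.default_rng(0)

n_samples = 5000
s = rng.laplace(size=(2, n_samples))        # super-Gaussian sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])      # unknown mixing matrix
x = A @ s                                   # observed mixtures

W = np.eye(2)                               # demixing matrix to learn
eta = 0.1
for _ in range(500):
    u = W @ x                               # current source estimates
    # stationary point: E[tanh(u) u^T] = I (suits super-Gaussian sources)
    dW = (np.eye(2) - np.tanh(u) @ u.T / n_samples) @ W
    W += eta * dW

u = W @ x    # recovered sources, up to permutation and scaling
```

After training, each row of `u` correlates strongly with one of the original sources, the usual success criterion for blind source separation.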


Current Opinion in Neurobiology | 2017

Learning with three factors: modulating Hebbian plasticity with errors

Łukasz Kuśmierz; Takuya Isomura; Taro Toyoizumi

Synaptic plasticity is a central theme in neuroscience. A framework of three-factor learning rules provides a powerful abstraction, helping to navigate through the abundance of models of synaptic plasticity. It is well-known that the dopamine modulation of learning is related to reward, but theoretical models predict other functional roles of the modulatory third factor; it may encode errors for supervised learning, summary statistics of the population activity for unsupervised learning or attentional feedback. Specialized structures may be needed in order to generate and propagate third factors in the neural network.
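The three-factor structure described above can be sketched as a weight update in which a global modulatory scalar gates the local Hebbian product of pre- and postsynaptic activity. The multiplicative form and all names below are illustrative, not a specific model from the review.

```python
import numpy as np

def three_factor_update(w, pre, post, modulator, eta=0.01):
    """Schematic three-factor rule: a global modulatory signal
    (e.g. a dopamine-encoded reward or error) gates the local
    Hebbian product of pre- and postsynaptic activity.
    Illustrative form only."""
    return w + eta * modulator * np.outer(post, pre)

pre = np.array([1.0, 0.0, 0.5])    # presynaptic activity
post = np.array([0.8, 0.2])        # postsynaptic activity
w = np.zeros((2, 3))               # synaptic weights

w_reward = three_factor_update(w, pre, post, modulator=+1.0)  # potentiation
w_neutral = three_factor_update(w, pre, post, modulator=0.0)  # no learning
```

With a zero third factor nothing is learned, however correlated pre- and postsynaptic activity are; this gating is exactly what distinguishes three-factor from plain Hebbian rules.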


Scientific Reports | 2018

Error-Gated Hebbian Rule: A Local Learning Rule for Principal and Independent Component Analysis

Takuya Isomura; Taro Toyoizumi

We developed a biologically plausible unsupervised learning algorithm, the error-gated Hebbian rule (EGHR)-β, that performs principal component analysis (PCA) and independent component analysis (ICA) in a single-layer feedforward neural network. If β = 1, it extracts the subspace spanned by the major principal components, similarly to Oja's subspace rule for PCA. If β = 0, it separates independent sources similarly to the Bell-Sejnowski ICA rule, but without requiring the same number of input and output neurons. Unlike these engineering rules, the EGHR-β can be easily implemented in a biological or neuromorphic circuit because it only uses local information available at each synapse. We analytically and numerically demonstrate the reliability of the EGHR-β in extracting and separating major sources given high-dimensional input. By adjusting β, the EGHR-β can extract sources that are missed by the conventional engineering approach of first applying PCA and then ICA. Namely, the proposed rule can successfully extract hidden natural images even in the presence of dominant or non-Gaussian noise components. The results highlight the reliability and utility of the EGHR-β for large-scale parallel computation of PCA and ICA and its future implementation in neuromorphic hardware.
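The error-gated structure, a single global scalar multiplying a local Hebbian product, can be sketched as follows. The function E, its gradient g, and the constant E0 below are placeholders; the published EGHR-β specifies its own energy function and a β-dependent term that this sketch omits.

```python
import numpy as np

E0 = 1.0  # gating threshold (illustrative constant)

def E(u):
    # illustrative scalar "energy" of the outputs; the paper defines its own E
    return 0.5 * np.sum(u ** 2)

def g(u):
    # gradient of E with respect to u
    return u

def eghr_step(W, x, eta=0.01):
    """One error-gated Hebbian update: a single global scalar,
    E0 - E(u), multiplies the local Hebbian product g(u_i) * x_j.
    Sketches the gating structure only, not the full EGHR-beta."""
    u = W @ x
    return W + eta * (E0 - E(u)) * np.outer(g(u), x)
```

The update at each synapse uses only the presynaptic input, the postsynaptic output, and one globally broadcast scalar, which is what makes this family of rules plausible for biological or neuromorphic circuits.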


bioRxiv | 2017

Common features in plastic changes rather than constructed structures in recurrent neural network prefrontal cortex models

Satoshi Kuroki; Takuya Isomura

We have flexible control over our cognition depending on the context or surrounding environment. The prefrontal cortex (PFC) controls this cognitive flexibility; however, the detailed underlying mechanisms remain unclear. Recent developments in machine learning techniques have allowed simple recurrent neural network PFC models to perform human- or animal-like behavioral tasks. These systems give us access to task-relevant parameters that cannot be obtained in biological experiments. We compared four models performing a flexible cognition task, the context-dependent integration task, and then searched for common features. In all the models, we observed that highly plastic synapses were concentrated in a small neuronal population, and the neuronal units in which they were more concentrated contributed more to performance. However, there were no common properties in the constructed structures. These results suggest that plastic changes can be more general and important for accomplishing cognitive tasks than features of the constructed structures.


bioRxiv | 2018

Social intelligence model with multiple internal models

Takuya Isomura; Thomas Parr; K. J. Friston

To exhibit social intelligence, animals have to recognize who they are communicating with. One way to make this inference is to select among multiple internal generative models of each conspecific. This induces an interesting problem: when receiving sensory input generated by a particular conspecific, how does an animal know which internal model to update? We consider a theoretical and neurobiologically plausible solution that enables inference and learning under multiple generative models by integrating active inference and (online) Bayesian model selection. This scheme fits sensory inputs under each generative model. Model parameters are then updated in proportion to the probability that each model could have generated the current input (i.e., its model evidence). We show that a synthetic bird that employs the proposed scheme successfully learns and distinguishes (real zebra finch) birdsongs generated by several different birds. These results highlight the utility of having multiple internal models for making inferences in complicated social environments.
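The core update, weighting each model's learning by the posterior probability that it generated the current input, can be sketched with each internal model reduced to a one-dimensional Gaussian. This is a deliberately minimal stand-in for the birdsong generative models used in the paper; all names and parameters are illustrative.

```python
import numpy as np

def model_posteriors(x, means, sigma=1.0):
    """Posterior probability that each internal model generated
    observation x. Each "conspecific model" is reduced here to a
    1-D Gaussian with a learned mean."""
    log_ev = -0.5 * ((x - means) / sigma) ** 2   # log evidence per model
    log_ev -= log_ev.max()                       # stabilise the softmax
    p = np.exp(log_ev)
    return p / p.sum()

def update_models(x, means, eta=0.1):
    # each model's parameter moves toward the input in proportion
    # to the probability that this model generated it
    p = model_posteriors(x, means)
    return means + eta * p * (x - means), p

means = np.array([0.0, 10.0])     # two hypothetical conspecifics
new_means, p = update_models(0.5, means)
```

Only the model that plausibly generated the input learns from it; the other model's parameters are left essentially untouched, which is how the scheme keeps multiple conspecific models from interfering with one another.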


bioRxiv | 2018

In vitro neural networks minimise variational free energy

Takuya Isomura; K. J. Friston

In this work, we address the neuronal encoding problem from a Bayesian perspective. Specifically, we ask whether neuronal responses in an in vitro neuronal network are consistent with ideal Bayesian observer responses under the free energy principle. In brief, we stimulated an in vitro cortical cell culture with stimulus trains that had a known statistical structure. We then asked whether recorded neuronal responses were consistent with variational message passing (i.e., belief propagation) based upon free energy minimisation (i.e., evidence maximisation). Effectively, this required us to solve two problems: first, we had to formulate the Bayes-optimal encoding of the causes or sources of sensory stimulation, and then show that these idealised responses could account for observed electrophysiological responses. We describe a simulation of an optimal neural network (i.e., the ideal Bayesian neural code) and then consider the mapping from idealised in silico responses to recorded in vitro responses. Our objective was to find evidence for functional specialisation and segregation in the in vitro neural network that reproduced in silico learning via free energy minimisation. Finally, we combined the in vitro and in silico results to characterise learning in terms of trajectories in a variational information plane of accuracy and complexity.
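The variational free energy referred to above decomposes into complexity minus accuracy, and is minimised (down to the negative log evidence) when the approximate posterior equals the exact one. A toy categorical example, not the paper's full message-passing scheme:

```python
import numpy as np

def free_energy(q, prior, lik):
    """Variational free energy for a discrete hidden state:
       F = KL[q(s) || p(s)] - E_q[log p(o|s)]
         = complexity       - accuracy.
    `lik` holds p(o|s) for the observed outcome o, one entry per state."""
    complexity = np.sum(q * np.log(q / prior))
    accuracy = np.sum(q * np.log(lik))
    return complexity - accuracy

prior = np.array([0.5, 0.5])          # p(s): prior over two hidden states
lik = np.array([0.9, 0.2])            # p(o|s) for the observed outcome

posterior = prior * lik / np.sum(prior * lik)    # exact Bayesian posterior
neg_log_evidence = -np.log(np.sum(prior * lik))  # -log p(o)

F_post = free_energy(posterior, prior, lik)      # attains the bound
F_other = free_energy(np.array([0.5, 0.5]), prior, lik)  # any other q does worse
```

Minimising F over q is therefore equivalent to (approximate) Bayesian inference, and the complexity and accuracy terms are the two axes of the variational information plane mentioned in the abstract.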


bioRxiv | 2018

Multi-context blind source separation by error-gated Hebbian rule

Takuya Isomura; Taro Toyoizumi

Animals need to adjust their inferences according to the context they are in. This is required for the multi-context blind source separation (BSS) task, where an agent needs to infer hidden sources from their context-dependent mixtures. The agent is expected to invert this mixing process for all contexts. Here, we show that a neural network that implements the error-gated Hebbian rule (EGHR) with sufficiently redundant sensory inputs can successfully learn this task. After training, the network can perform the multi-context BSS without further updating synapses, by retaining memories of all experienced contexts. Finally, if there is a common feature shared across contexts, the EGHR can extract it and generalize the task to even inexperienced contexts. This demonstrates an attractive use of the EGHR for dimensionality reduction by extracting common sources across contexts. The results highlight the utility of the EGHR as a model for perceptual adaptation in animals.


Frontiers in Computational Neuroscience | 2018

Task-Related Synaptic Changes Localized to Small Neuronal Population in Recurrent Neural Network Cortical Models

Satoshi Kuroki; Takuya Isomura

Humans have flexible control over cognitive functions depending on the context. Several studies suggest that the prefrontal cortex (PFC) controls this cognitive flexibility, but the detailed underlying mechanisms remain unclear. Recent developments in machine learning techniques allow simple PFC models written as recurrent neural networks to perform various behavioral tasks like humans and animals. Computational modeling allows the estimation of neuronal parameters that are crucial for performing the tasks but cannot be observed in biological experiments. To identify salient neural-network features for flexible cognition tasks, we compared four PFC models using a context-dependent integration task. After training the neural networks on the task, we observed highly plastic synapses localized to a small neuronal population in all models. In three of the models, the neuronal units containing these highly plastic synapses contributed most to the performance. No common tendencies were observed in the distribution of synaptic strengths among the four models. These results suggest that task-dependent plastic synaptic changes are more important for accomplishing flexible cognitive tasks than the structures of the constructed synaptic networks.


Neural Information Processing Systems | 2018

Objective and efficient inference for couplings in neuronal networks

Yu Terada; Tomoyuki Obuchi; Takuya Isomura; Yoshiyuki Kabashima


arXiv: Machine Learning | 2018

On the achievability of blind source separation for high-dimensional nonlinear source mixtures

Takuya Isomura; Taro Toyoizumi

Collaboration


Dive into Takuya Isomura's collaborations.

Top Co-Authors

Taro Toyoizumi (RIKEN Brain Science Institute)
Satoshi Kuroki (RIKEN Brain Science Institute)
Tomoyuki Obuchi (Tokyo Institute of Technology)
Yoshiyuki Kabashima (Tokyo Institute of Technology)
K. J. Friston (University College London)
Łukasz Kuśmierz (RIKEN Brain Science Institute)
Thomas Parr (Wellcome Trust Centre for Neuroimaging)