
Publication


Featured research published by Shin-ichi Asakawa.


International Conference on Neural Information Processing | 2008

Mixtures of Experts: As an Attempt to Integrate the Dual Route Cascaded and the Triangle Models for Reading English Words

Shin-ichi Asakawa

An implementation of neural network models for reading English words aloud is proposed. Since 1989 there has been a debate in neuropsychology and cognitive science about models of reading: one is the Dual Route Cascaded model, the other the Triangle model. Because both models contain arbitrary variables, it has been difficult to decide which model better explains the data from psychological experiments and neuropsychological evidence. Therefore, in order to resolve this debate, an attempt was made to integrate the two models. By introducing the Mixtures of Experts network model, the arbitrariness of both models can be overcome, since the Mixtures of Experts network includes each of them as a special case. From the Mixtures of Experts point of view, the difference between the Dual Route Cascaded model and the Triangle model can be regarded as a quantitative difference in the dispersion parameters.
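
As a rough illustration of the architecture described above, the following is a minimal mixture-of-experts sketch in NumPy, in which two linear "experts" stand in for the lexical and sublexical reading routes and a gating network blends their outputs. All names, shapes, and the choice of linear experts are illustrative assumptions, not the paper's implementation.

```python
# Minimal mixture-of-experts sketch (NumPy). The two "experts" stand in for
# the two reading routes discussed above; everything here is an illustrative
# assumption, not the paper's actual model.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class TwoRouteMoE:
    def __init__(self, n_in, n_out):
        # One linear expert per route plus a linear gating network.
        self.W_lex = rng.normal(0, 0.1, (n_in, n_out))   # "lexical" route
        self.W_sub = rng.normal(0, 0.1, (n_in, n_out))   # "sublexical" route
        self.W_gate = rng.normal(0, 0.1, (n_in, 2))      # gate over the two routes

    def forward(self, x):
        g = softmax(x @ self.W_gate)          # mixing proportions, shape (batch, 2)
        y_lex = x @ self.W_lex
        y_sub = x @ self.W_sub
        # Output is the gate-weighted blend of the two routes.
        y = g[:, :1] * y_lex + g[:, 1:] * y_sub
        return y, g

# Usage: orthographic input vectors -> blended phonological output.
model = TwoRouteMoE(n_in=20, n_out=10)
x = rng.normal(size=(4, 20))
y, gate = model.forward(x)
print(gate)  # how much each route contributes per input
```

In this reading, the gate's mixing behavior is what the abstract frames quantitatively: how much each route contributes to a given word.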


BMC Neuroscience | 2014

Multilayer perceptrons, Hopfield’s associative memories, and restricted Boltzmann machines

Shin-ichi Asakawa

This study was intended to describe multilayer perceptrons (MLP), Hopfield's associative memories (HAM), and restricted Boltzmann machines (RBM) from a unified point of view. The three models are mutually related; for example, RBMs have been utilized to construct deeper architectures than shallower MLPs. The energy function in HAM is analogous to the Ising model in statistical mechanics, which connects microscopic physics to thermodynamics. The canonical partition function Z of the Boltzmann distribution is also utilized in RBMs, and asynchronous updating and contrastive divergence (CD) based upon Gibbs sampling are related as well. Therefore, it seems worth considering these three models within a common framework. This attempt might lead to a "one algorithm hypothesis," which holds that our brains might be governed by a single, universal rule: an algorithm discovered in one region of the brain may be applicable to other regions.

Multilayer perceptrons (henceforth, MLP) are feedforward models for pattern recognition and classification. Hopfield proposed another kind of neural network model for associative memory and optimization (HAM). Hinton adopted the restricted Boltzmann machine (RBM) in "Deep Learning" in order to construct deeper layered neural networks. The energy employed in RBMs is derived from the generalized EM algorithm and is closely related to the energy employed by HAM. In spite of various other differences (see Table 1), it is worth comparing the three models, or at least attempting to explain all of them in a unified terminology.

HAM and RBM have symmetrically weighted connections, wij = wji, although generalized Boltzmann machines need not satisfy this constraint. Similarly, there are in general no feedback connections in an MLP: when we denote the connection weight from the j-th unit to the i-th unit as wij, with wij ∈ R, we have wji = 0 in an MLP. When we consider a merged weight matrix W, all the models can be treated in the same form.

The construction methods adopted by Deep Learning are based upon RBMs. One key concept behind the success of multilayer deep architectures is nonlinearity, because the units in the hidden layer of an RBM are binary; this nonlinearity seems to play an important role in constructing deep architectures. If we were to abandon CD and the binary feature, a multilayer architecture could be replaced by a single weight matrix W = W1 W2 ... Wp. We can also consider a thought experiment with only one hidden unit in an RBM: if h = 0, the unit carries no meaning at all; if h = 1, the mapping must be an identity mapping, or at least it might extract the eigenvector corresponding to the maximum eigenvalue of the data matrix X.
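
To make the shared energy formulation concrete, here is a minimal NumPy sketch of the two energies mentioned above: the Hopfield energy E(s) = -1/2 s^T W s with symmetric weights wij = wji, and the RBM energy E(v, h) = -b^T v - c^T h - v^T W h. The variable names and toy sizes are assumptions for illustration.

```python
# Minimal sketch comparing the Hopfield and RBM energy functions (NumPy).
# Names and sizes are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

def hopfield_energy(s, W):
    # E(s) = -1/2 * s^T W s, with W symmetric (w_ij = w_ji), zero diagonal.
    return -0.5 * s @ W @ s

def rbm_energy(v, h, W, b, c):
    # E(v, h) = -b^T v - c^T h - v^T W h; the visible-hidden weights are
    # shared in both directions, the RBM analogue of Hopfield's symmetry.
    return -(b @ v) - (c @ h) - v @ W @ h

# Symmetric Hopfield weights over 6 binary (+/-1) units.
n = 6
A = rng.normal(size=(n, n))
W_h = (A + A.T) / 2
np.fill_diagonal(W_h, 0.0)
s = rng.choice([-1.0, 1.0], size=n)
print("Hopfield energy:", hopfield_energy(s, W_h))

# RBM with 6 visible and 3 binary hidden units.
W_r = rng.normal(0, 0.1, (n, 3))
b, c = np.zeros(n), np.zeros(3)
v = rng.choice([0.0, 1.0], size=n)
h = rng.choice([0.0, 1.0], size=3)
print("RBM energy:", rbm_energy(v, h, W_r, b, c))
```

Stacking the RBM's visible and hidden units into one state vector with a merged block weight matrix W gives the common form the abstract alludes to.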


International Conference on Neural Information Processing | 2002

SOMDS: multidimensional scaling through self organization map

K. Shiina; Shin-ichi Asakawa

We propose SOMDS, a combination of MDS (multidimensional scaling) and SOM (the self-organizing map). SOMDS is a special type of MDS that learns the structure of similarity data locally and adaptively; equivalently, it is a special type of SOM without neighborhood functions whose inputs are similarities between objects. Convergence properties of the algorithm and some applications are presented.
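
For intuition about the MDS half of SOMDS, the following is a minimal stress-descent sketch in NumPy that iteratively moves 2-D points so that their pairwise distances approach target dissimilarities. This is a generic gradient update under an assumed loss and learning rate, not the authors' SOMDS algorithm.

```python
# Minimal MDS-style stress-descent sketch (NumPy). Illustrative only; the
# loss, learning rate, and iteration count are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def mds_step(X, D, lr=0.05):
    """One descent step: nudge each 2-D point so embedded distances
    better match the target dissimilarities D."""
    n = X.shape[0]
    G = np.zeros_like(X)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            diff = X[i] - X[j]
            dij = np.linalg.norm(diff) + 1e-12
            # Gradient of (dij - D[i, j])^2 with respect to X[i].
            G[i] += 2.0 * (dij - D[i, j]) * diff / dij
    return X - lr * G

# Toy data: dissimilarities between random 5-D points, embedded into 2-D.
P = rng.normal(size=(10, 5))
D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
X = rng.normal(size=(10, 2))
for _ in range(200):
    X = mds_step(X, D)
stress = np.sum((np.linalg.norm(X[:, None] - X[None, :], axis=-1) - D) ** 2) / 2
print("final stress:", stress)
```

SOMDS, as described above, replaces this kind of batch gradient with local, adaptive SOM-style updates driven directly by the similarity inputs.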


Japanese Journal of Educational Psychology | 1993

Reexamination of rule assessment approach

Shin-ichi Asakawa; Kenpei Shiina


International Conference on Neural Computation Theory and Applications | 2016

Attractor Neural Networks for Simulating Dyslexic Patients’ Behavior

Shin-ichi Asakawa


Psychology | 2013

Re-Evaluation of Attractor Neural Network Model to Explain Double Dissociation in Semantic Memory Disorder

Shin-ichi Asakawa


BMC Neuroscience | 2013

Hopfield neural network model for explaining double dissociation in semantic memory impairment

Shin-ichi Asakawa; Ikuko Kyoya


IJCCI | 2012

Attractor Neural Networks for Simulating Dyslexic Patients' Behavior

Shin-ichi Asakawa


Cognitive Science | 2012

The model comparison through orthography, phonology, and semantics

Shin-ichi Asakawa


Cognitive Science | 2011

Attractor Neural Networks as Models of Categorization Task and Word Reading

Shin-ichi Asakawa

Collaboration


Dive into Shin-ichi Asakawa's collaboration.

Top Co-Authors


Ikuko Kyoya

Ritsumeikan University
