Publication


Featured research published by Koji Kurata.


Physical Review Letters | 1999

Statistical Mechanics of an Oscillator Associative Memory with Scattered Natural Frequencies

Toru Aonishi; Koji Kurata; Masato Okada

Analytic treatment of nonequilibrium random systems with many degrees of freedom is one of the most important problems in physics, yet, as far as we know, little research has been done on it. In this paper, we propose a new mean field theory that can treat a general class of nonequilibrium random systems. We apply the theory to the analysis of an associative memory with oscillatory elements, a well-known example of a random system with many degrees of freedom.
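
As an illustration of the class of systems studied here, the following sketch simulates a Kuramoto-type oscillator associative memory in Python: phase oscillators with scattered natural frequencies, coupled through a Hebbian rule built from random phase patterns. The coupling form, parameter values, and retrieval test are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    rng = np.random.default_rng(0)
    N, P = 200, 3                              # oscillators, stored phase patterns

    xi = rng.uniform(0, 2 * np.pi, (P, N))     # random phase patterns to embed
    z = np.exp(1j * xi)                        # pattern phasors
    J = (z.conj().T @ z).real / N              # Hebbian coupling: J_ij = (1/N) sum_mu cos(xi_i - xi_j)
    omega = rng.normal(0.0, 0.05, N)           # scattered natural frequencies

    phi = xi[0] + 0.3 * rng.normal(size=N)     # start near pattern 0 (noisy cue)
    dt = 0.05
    for _ in range(4000):
        # Kuramoto-type dynamics: dphi_i/dt = omega_i + sum_j J_ij sin(phi_j - phi_i)
        phi += dt * (omega + (J * np.sin(phi[None, :] - phi[:, None])).sum(axis=1))

    # retrieval order parameter: |m| near 1 means the pattern is recalled
    m = np.abs(np.mean(np.exp(1j * (phi - xi[0]))))
    print(f"overlap with stored pattern: {m:.3f}")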


Biological Cybernetics | 2000

Properties of basis functions generated by shift invariant sparse representations of natural images

Wakako Hashimoto; Koji Kurata

The idea that a sparse representation is the computational principle of visual systems has been supported by Olshausen and Field [Nature (1996) 381: 607–609] and many other studies. On the other hand, neurons in the inferotemporal cortex respond to moderately complex features called icon alphabets, and they respond invariantly to stimulus position. To incorporate this property into sparse representation, an algorithm is proposed that trains basis functions using sparse representations with shift invariance. Shift invariance means that each basis function is allowed to appear at any position in the image data, with a coefficient for each shift. The algorithm is applied to natural images, and moderately complex graphical features emerge that are neither as simple as Gabor filters nor as complex as real objects. Shift invariance and moderately complex features correspond to the properties of icon alphabets. The results reveal a further connection between visual information processing and sparse representations.
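
To make the idea concrete, here is a minimal sketch of shift-invariant sparse coding, reduced to one-dimensional signals for brevity: a greedy matching pursuit selects (basis function, shift) pairs, and each used basis function is nudged toward the residual it failed to explain. The 1-D setting, the random stand-in data, the pursuit variant, and all parameters are assumptions; the paper's algorithm operates on natural images.

    import numpy as np

    rng = np.random.default_rng(1)
    K, L, T = 8, 9, 128        # basis functions, basis length, signal length

    def conv_matching_pursuit(x, D, n_atoms=5):
        """Greedy shift-invariant code: repeatedly pick the best (basis, shift)
        pair, so every basis function may be used at any position."""
        residual = x.copy()
        code = []
        for _ in range(n_atoms):
            scores = np.array([[residual[s:s + L] @ D[k]
                                for s in range(T - L + 1)] for k in range(K)])
            k, s = np.unravel_index(np.abs(scores).argmax(), scores.shape)
            a = scores[k, s]
            residual[s:s + L] -= a * D[k]
            code.append((k, s, a))
        return code, residual

    D = rng.normal(size=(K, L))                       # initial dictionary
    D /= np.linalg.norm(D, axis=1, keepdims=True)
    for _ in range(500):
        x = rng.normal(size=T)                        # stand-in for an image row
        code, residual = conv_matching_pursuit(x, D)
        for k, s, a in code:                          # nudge each used basis toward
            D[k] += 0.01 * a * residual[s:s + L]      # the part it failed to explain
            D[k] /= np.linalg.norm(D[k])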


Biological Cybernetics | 2001

Formation of a direction map by projection learning using Kohonen's self-organization map

Hayaru Shouno; Koji Kurata

In this paper, we propose a modification of Kohonen's self-organizing map (SOM) algorithm. The input signal space must be convex for all the reference vectors to remain on it under arbitrary updates; when it is not convex, some reference vectors can protrude from it. We therefore introduce a projection learning method that fixes the reference vectors onto the input signal space, so that this version of SOM can be applied to non-convex input signal spaces. We applied SOM with projection learning to the direction map observed in the primary visual cortex, in area 17 of ferrets and area 18 of cats. Neurons in these areas respond selectively to the orientation of edges or line segments and to their direction of motion, and some iso-orientation domains are subdivided into regions selective for opposite directions of motion. The abstract input signal space of the direction map, described in the manner proposed by Obermayer and Blasdel [(1993) J Neurosci 13: 4114–4129], is not convex. We successfully used SOM with projection learning to reproduce a direction-orientation joint map.
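
The core idea admits a compact sketch: run the ordinary SOM update, then project the reference vectors back onto the input manifold. The unit circle below stands in for a non-convex input signal space; the actual direction-map space and learning schedule in the paper differ, so everything here is illustrative.

    import numpy as np

    rng = np.random.default_rng(2)
    M = 50                                           # units on a 1-D SOM array

    W = rng.normal(size=(M, 2))
    W /= np.linalg.norm(W, axis=1, keepdims=True)    # start on the manifold

    def project(W):
        # projection learning: pull every reference vector back onto the
        # input manifold (here the unit circle, a simple non-convex space)
        return W / np.linalg.norm(W, axis=1, keepdims=True)

    for t in range(20000):
        theta = rng.uniform(0, 2 * np.pi)
        x = np.array([np.cos(theta), np.sin(theta)])          # sample on the circle
        winner = np.argmin(((W - x) ** 2).sum(axis=1))
        sigma = 1.0 + 5.0 * np.exp(-t / 4000)                 # shrinking neighborhood
        h = np.exp(-0.5 * ((np.arange(M) - winner) / sigma) ** 2)
        W += 0.1 * h[:, None] * (x - W)                       # ordinary SOM update...
        W = project(W)                                        # ...then project back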


Cognitive Neurodynamics | 2009

Ordering process of self-organizing maps improved by asymmetric neighborhood function

Takaaki Aoki; Kaiichiro Ota; Koji Kurata; Toshio Aoyagi

The self-organizing map (SOM) is an unsupervised learning method based on neural computation that has found wide application. However, the learning process is sometimes multistable, and the map can be trapped in an undesirable disordered state containing topological defects, which critically degrade the SOM's performance. To overcome this problem, we propose introducing an asymmetric neighborhood function into the SOM algorithm. Compared with the conventional symmetric one, the asymmetric neighborhood function accelerates the ordering process even in the presence of a defect. However, the asymmetry tends to generate a distorted map; this can be suppressed by an improved version of the asymmetric neighborhood function. For the one-dimensional SOM, the number of steps required for perfect ordering is numerically shown to be reduced from O(N³) to O(N²). We also discuss the ordering process of a twisted state in the two-dimensional SOM, which cannot be rectified by the ordinary symmetric neighborhood function.
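
A minimal sketch of the approach for a one-dimensional SOM, assuming a simple skewed-Gaussian form for the asymmetric neighborhood function; the paper's exact functional form, its improved distortion-suppressing variant, and the parameter values here are not taken from the source.

    import numpy as np

    rng = np.random.default_rng(3)
    M, beta = 100, 2.0            # map size; beta > 1 sets the asymmetry

    def neighborhood(winner, sigma):
        # asymmetric Gaussian: wider on one side of the winner than the other,
        # which gives topological defects a preferred direction to drift in
        d = np.arange(M) - winner
        width = np.where(d >= 0, sigma * beta, sigma / beta)
        return np.exp(-0.5 * (d / width) ** 2)

    W = rng.uniform(0, 1, M)                  # 1-D map of a 1-D input space
    for _ in range(50000):
        x = rng.uniform(0, 1)
        winner = np.argmin(np.abs(W - x))
        W += 0.05 * neighborhood(winner, sigma=5.0) * (x - W)

    # a perfectly ordered map is monotonic in the unit index
    d = np.diff(W)
    print("ordered:", bool(np.all(d > 0) or np.all(d < 0)))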


Neural Networks | 2004

Self-organization of globally continuous and locally distributed information representation

Koji Wada; Koji Kurata; Masato Okada

A number of findings suggest that the preferences of neighboring neurons in the inferior temporal (IT) cortex of macaque monkeys tend to be similar. However, a recent study reports convincingly that the preferences of neighboring neurons actually differ. These findings seem contradictory. To resolve this conflict, we propose a new view of information representation in the IT cortex that takes into account sparse and local neuronal excitation. Because the excitation is sparse, information about visual objects appears to be encoded in a distributed manner, while the local excitation of neurons coincides with the classical notion of a column structure. Our model consists of an input layer and an output layer; the main difference from conventional models is that the output layer has local and random intra-layer connections. In this paper, we adopt two rings embedded in three-dimensional space as the input signal space and examine how the resulting information representation depends on the distance D between the two rings. We show that there exists a critical distance Dc: when D > Dc, the output layer forms a column structure and obtains a distributed representation within each column, whereas when D < Dc it acquires the conventional information representation observed in the V1 cortex. Moreover, we consider the origin of the difference between the information representations of the V1 and IT cortices. Our finding suggests that this difference could be caused by a difference in the structures of their input spaces.
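
For concreteness, the input signal space used in the simulations can be sampled as below; the unit radius and the parallel, axis-aligned placement of the two rings are assumptions, with only the separation D taken from the abstract.

    import numpy as np

    def sample_two_rings(D, n, rng):
        # two unit-radius rings in 3-D, lying in parallel planes a distance D apart
        theta = rng.uniform(0, 2 * np.pi, n)
        ring = rng.integers(0, 2, n)                 # which ring each sample is on
        z = np.where(ring == 0, -D / 2, D / 2)
        return np.stack([np.cos(theta), np.sin(theta), z], axis=1)

    rng = np.random.default_rng(4)
    X = sample_two_rings(D=0.5, n=1000, rng=rng)     # inputs for the two-layer model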


Biological Cybernetics | 1998

A phase-locking theory of matching between rotated images by dynamic link matching

Toru Aonishi; Koji Kurata; T. Mito

Pattern recognition invariant to deformation or translation can be performed with the dynamic link matching proposed by von der Malsburg. Dynamic link matching has been applied efficiently to some engineering examples, but has not yet been analyzed mathematically. We propose two models of dynamic link matching, both of which are mathematically tractable. The first model can perform matching between rotated images; the second can do the same and, in addition, detect common parts of a template image and a data image. To analyze these models mathematically, we reduce each model's equations to a phase equation, revealing the mathematical principle behind the rotation-invariant matching process. We also carry out computer simulations to verify the mathematical theories.


Biological Cybernetics | 1995

Self-organization of the velocity selectivity of a directionally selective neural network

Ken-ichiro Miura; Koji Kurata; Takashi Nagano

We first present a mathematical analysis of the relation between the parameters and the behavior of the basic module in the proposed neural network model for visual motion detection. Based on the analytical results, we put forth a learning rule that can develop the velocity selectivity of directionally selective cells in the basic module. The learning rule is then introduced into the full model, called a 'mass model', which is constructed from many basic modules. Numerical simulations showed that each basic module in the mass model learned, in a self-organizing manner, to acquire selectivity for the velocity of an input stimulus. The proposed learning rule is plausible for the actual nervous system in that it is simple and requires only local information.


Neural Networks | 2010

Self-consistent signal-to-noise analysis of Hopfield model with unit replacement

Toru Aonishi; Yasunao Komatsu; Koji Kurata

The Hopfield model has a storage capacity: the maximum number of memory patterns that can be stably stored. The memory state of this network model disappears if the number of embedded memory patterns exceeds 0.138N, where N is the system size. Recently, numerical simulations have shown that the Hopfield model with a unit replacement process, in which a small number of old units are replaced with new ones at each learning step when a new pattern is embedded, can stably retrieve recently embedded memory patterns even after an infinite number of patterns have been embedded. In this paper, we analyze the Hopfield model with the replacement process using self-consistent signal-to-noise analysis. We show that 3.21 is the minimum number of units replaced at each learning step that avoids the overload which destroys the memory state when an infinite number of patterns are embedded, and that 6.95 is the optimal number of replaced units, maximizing the number of retrievable patterns. These critical numbers of replaced units are independent of the system size N. Finally, we compare this model with the Hopfield model with a forgetting process.
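
A rough simulation of the replacement process, under the assumption that replacing a unit zeroes all couplings it has learned before each new pattern is stored Hebbian-style. R = 7 is chosen near the quoted optimum of 6.95 replaced units per step; at finite N the measured overlaps are only indicative.

    import numpy as np

    rng = np.random.default_rng(5)
    N, R, T = 500, 7, 200      # units, units replaced per step, patterns embedded

    J = np.zeros((N, N))
    patterns = []
    for _ in range(T):
        xi = rng.choice([-1, 1], N)                 # new random pattern
        dead = rng.choice(N, R, replace=False)      # replace R units: a fresh unit
        J[dead, :] = 0.0                            # forgets every coupling it had
        J[:, dead] = 0.0
        J += np.outer(xi, xi) / N                   # Hebbian learning of the pattern
        np.fill_diagonal(J, 0.0)
        patterns.append(xi)

    def overlap_after_recall(xi, flips=25, sweeps=20):
        s = xi.copy()
        s[rng.choice(N, flips, replace=False)] *= -1       # noisy cue
        for _ in range(sweeps):
            s = np.where(J @ s >= 0, 1, -1)                # deterministic update
        return s @ xi / N

    # recent patterns stay retrievable; old ones fade as their units are replaced
    for age in (1, 20, 100):
        print(f"pattern {age} steps old: overlap = {overlap_after_recall(patterns[-age]):.2f}")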


Artificial Life and Robotics | 2008

A property of neural networks of associative memory with replacing units

Akira Date; Koji Kurata

Artificial neural networks of associative memory have a memory capacity. Adding new memories beyond this capacity overloads the network and makes all learned memories irretrievable (catastrophic forgetting) unless there is a provision for forgetting old memories. This article describes a property of associative memory networks in which a number of units are replaced whenever the network learns: every time the network learns a new item or pattern, a number of units are erased and the same number of units are added. It is shown that the memory capacity of the network depends on the number of replaced units, and that there exists an optimal number of replaced units at which the memory capacity is maximized. The optimal number of replaced units is small and appears to be independent of the network size.


Artificial Life and Robotics | 2004

Separating visual information into position and direction by SOM

Koji Kurata; Naoki Oshiro

A model is proposed that self-organizes a map for the visual recognition of position and direction by a robot moving autonomously in a room. The robot is assumed to have visual sensors. The model is based on Kohonen's self-organizing map (SOM), which was proposed as a model of cortical self-organization. An ordinary SOM consists of a two-dimensional array of neuron-like feature-detector units; in our model, however, the units are arranged in a three-dimensional array, a periodic boundary condition is assumed in one dimension, and some new learning rules are added. A computer simulation shows that our model forms a map that extracts two factors of information, the position and the direction of the robot, separately from the visual input. This is an example of a so-called two-factor problem. In our algorithm, the difference in the topology of the two factors is used to separate them.
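
A sketch of the architectural point, assuming a toy four-dimensional feature vector in place of real camera input: the units form a three-dimensional array whose third axis is periodic, so lattice distance wraps in the direction dimension. Grid sizes, the input encoding, and the schedule are illustrative, and the paper's additional learning rules are omitted.

    import numpy as np

    rng = np.random.default_rng(6)
    GX, GY, GT = 8, 8, 12      # 3-D unit array; the third axis is periodic

    def visual_input(px, py, t):
        # toy stand-in for the robot's camera image: a feature vector that
        # depends on its position (px, py) and heading angle t
        return np.array([px, py, np.cos(t), np.sin(t)])

    W = rng.normal(0.0, 0.1, (GX, GY, GT, 4))
    ix, iy, it = np.meshgrid(np.arange(GX), np.arange(GY), np.arange(GT),
                             indexing="ij")

    for step in range(20000):
        px, py = rng.uniform(0, 1, 2)
        t = rng.uniform(0, 2 * np.pi)
        x = visual_input(px, py, t)
        d2 = ((W - x) ** 2).sum(axis=-1)
        wx, wy, wt = np.unravel_index(d2.argmin(), d2.shape)
        # lattice distance: ordinary along x and y, wrapped along the periodic axis
        dt = np.minimum(np.abs(it - wt), GT - np.abs(it - wt))
        dist2 = (ix - wx) ** 2 + (iy - wy) ** 2 + dt ** 2
        sigma = 1.0 + 3.0 * np.exp(-step / 5000)
        h = np.exp(-0.5 * dist2 / sigma ** 2)
        W += 0.1 * h[..., None] * (x - W)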

Collaboration


Dive into Koji Kurata's collaborations.

Top Co-Authors

Masato Okada

Hiroshima City University

Ryota Miyata

University of the Ryukyus

Kazushi Mimura

Hiroshima City University

Naoki Oshiro

University of the Ryukyus

Akira Date

National Institute of Information and Communications Technology

Koji Wada

RIKEN Brain Science Institute
