
Publication


Featured research published by Ryotaro Kamimura.


Connection Science | 2001

Flexible feature discovery and structural information control

Ryotaro Kamimura; Taeko Kamimura; Osamu Uchida

In this paper, we propose a new information-theoretic method called structural information control for flexible feature discovery. The new method has three distinctive characteristics that traditional competitive learning fails to offer. First, the new method can directly control competitive unit activation patterns, whereas traditional competitive learning has no means to control them. Thus, with the new method, it is possible to extract salient features not discovered by traditional methods. Second, competitive units compete with each other by maximizing their information content about input patterns. Consequently, this information maximization makes it possible to flexibly control competition processes. Third, in structural information control, many different kinds of information content can be defined, and a specific type of information can be chosen according to a given objective. When applied to competitive learning, structural information can be used to control the number of dead or spare units, and to extract macro as well as micro features of input patterns in explicit ways. We first applied this method to simple pattern classification to demonstrate that information can be controlled and that different neuron firing patterns can be generated. Second, a dipole problem was used to show that structural information can provide representations similar to those produced by conventional competitive learning methods. Finally, we applied the method to a language acquisition problem in which networks must flexibly discover linguistic rules by changing structural information. In particular, we examined the effect of the information parameter on the number of dead neurons, and thus how macro and micro features in input patterns can be explicitly discovered by structural information.


Neural Processing Letters | 2003

Information-Theoretic Competitive Learning with Inverse Euclidean Distance Output Units

Ryotaro Kamimura

In this paper, we propose a new information-theoretic competitive learning method. We first construct the learning method in single-layered networks, and then extend it to supervised multi-layered networks. Competitive unit outputs are computed by the inverse of the Euclidean distance between input patterns and connection weights: the smaller the distance, the stronger the output. In realizing competition, neither the winner-take-all algorithm nor lateral inhibition is used. Instead, the new method is based upon mutual information maximization between input patterns and competitive units. In maximizing mutual information, the entropy of the competitive units is increased as much as possible, which means that all competitive units must be used equally in our framework; thus, no under-utilized or dead neurons are generated. When using multi-layered networks, we can improve noise-tolerance performance by unifying information maximization and minimization. We applied our method with single-layered networks to a simple artificial data problem and an actual road classification problem. In both cases, experimental results confirmed that the new method produces final solutions almost independently of initial conditions, and that classification performance is significantly improved. We then used multi-layered networks, applying them to a character recognition problem and a political data analysis. In these problems, we showed that noise-tolerance performance was improved by decreasing information content about input patterns to a certain point.
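The two ingredients of the abstract, inverse-Euclidean-distance outputs and mutual information between input patterns and competitive units, can be sketched in a few lines of numpy. This is an illustrative reconstruction, not the paper's code: the normalization of inverse-distance outputs into conditional firing probabilities and the empirical entropy estimates are assumptions.

```python
import numpy as np

def competitive_outputs(X, W, eps=1e-8):
    """Competitive unit outputs as inverse Euclidean distance between
    input patterns X (n, d) and connection weights W (m, d).
    Normalizing rows into p(j | x) is an illustrative assumption."""
    dist = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)  # (n, m)
    v = 1.0 / (dist + eps)  # smaller distance -> stronger output
    return v / v.sum(axis=1, keepdims=True)

def mutual_information(P, eps=1e-12):
    """I = H(p(j)) - mean_x H(p(j|x)) for conditional probabilities P (n, m).
    Maximizing the marginal entropy term forces all units to be used equally."""
    pj = P.mean(axis=0)                                   # marginal firing rates
    h_marg = -np.sum(pj * np.log(pj + eps))               # entropy of unit usage
    h_cond = -np.mean(np.sum(P * np.log(P + eps), axis=1))
    return h_marg - h_cond
```

A pattern lying exactly on a weight vector fires that unit almost exclusively, while the entropy term rewards spreading usage across all units, which is what rules out dead neurons in this framework.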


Communications in Statistics-theory and Methods | 2007

A Stepwise AIC Method for Variable Selection in Linear Regression

Toshie Yamashita; Keizo Yamashita; Ryotaro Kamimura

In this article, we study the stepwise AIC method for variable selection, comparing it with other stepwise methods for variable selection, such as Partial F, Partial Correlation, and Semi-Partial Correlation, in linear regression modeling. We then show mathematically that the stepwise AIC method and the other stepwise methods lead to the same method as Partial F. Hence, there is more reason to use the stepwise AIC method than the other stepwise methods for variable selection, since the stepwise AIC method is a model selection method that can be easily managed and can be widely extended to more generalized models and applied to non-normally distributed data. We also treat two problems that always appear in applications: the validation of selected variables and the problem of collinearity.
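A minimal forward-stepwise sketch illustrates selecting regressors by AIC in linear regression. The Gaussian-error AIC formula (with constants dropped) and the greedy forward strategy are standard textbook choices, not necessarily the exact procedure studied in the article; the function names are hypothetical.

```python
import numpy as np

def aic_linear(X, y):
    """AIC of an OLS fit under Gaussian errors: n*log(RSS/n) + 2k,
    additive constants dropped; +1 parameter for the error variance."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + 2 * (k + 1)

def forward_stepwise_aic(X, y):
    """Greedy forward selection: at each step add the column of X that
    lowers AIC most; stop when no addition improves AIC.
    An intercept column of ones is added internally."""
    n, p = X.shape
    selected, remaining = [], list(range(p))
    ones = np.ones((n, 1))
    best_aic = aic_linear(ones, y)  # intercept-only model
    improved = True
    while improved and remaining:
        improved = False
        scores = [(aic_linear(np.hstack([ones, X[:, selected + [j]]]), y), j)
                  for j in remaining]
        aic, j = min(scores)
        if aic < best_aic:
            best_aic, improved = aic, True
            selected.append(j)
            remaining.remove(j)
    return selected, best_aic
```

The per-variable penalty of 2 in the AIC is what makes this comparable to a Partial F threshold: a variable enters only when the fit improvement outweighs the penalty.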


Network: Computation In Neural Systems | 1995

Hidden information maximization for feature detection and rule discovery

Ryotaro Kamimura; Shohachiro Nakanishi

In this paper, we propose a method to maximize the hidden information stored in hidden units. The hidden information is defined as the decrease in uncertainty of hidden units with respect to input patterns. By maximizing the hidden information, hidden units can detect features and extract rules behind input patterns. Our method was applied to two problems: an autoencoder producing six alphabet letters, and assimilation in the formation of plurals and nasalization in an artificial language. In the first problem, the results explicitly confirmed that the features of input patterns could be detected by maximizing the hidden information. In the second experiment, we could clearly see that the assimilation rules were extracted by maximizing the hidden information, even when the rules were obscured by other factors.


Connection Science | 2002

Greedy information acquisition algorithm: a new information theoretic approach to dynamic information acquisition in neural networks

Ryotaro Kamimura; Taeko Kamimura; Haruhiko Takeuchi

In this paper, we propose a new information-theoretic approach to competitive learning. The new approach is called greedy information acquisition, because networks try to absorb as much information as possible at every stage of learning. In the first phase, information is maximized with the minimum network architecture needed to realize competition. In the second phase, a new unit is added, and information is again increased as much as possible. This process continues until no further increase in information is possible. Through greedy information maximization, different sets of important features in input patterns can be cumulatively discovered in successive stages. We applied our approach to three problems: a dipole problem, a language classification problem and a phonological feature detection problem. Experimental results confirmed that information maximization can be repeatedly applied and that different features in input patterns are gradually discovered. We also compared our method with conventional competitive learning and multivariate analysis. The experimental results confirmed that our new method can detect salient features in input patterns more clearly than the other methods.


Connection Science | 2003

Information theoretic competitive learning in self-adaptive multi-layered networks

Ryotaro Kamimura

In this paper, we propose self-adaptive multi-layered networks in which information in each processing layer is always maximized. Using these multi-layered networks, we can solve complex problems and discover salient features that single-layered networks fail to extract. In addition, this successive information maximization enables networks gradually to extract important features. We applied the new method to the Iris data problem, the vertical-horizontal lines detection problem, a phonological data analysis problem and a medical data problem. Experimental results confirmed that information can repeatedly be maximized in multi-layered networks and that the networks can extract features that cannot be detected by single-layered networks. In addition, features extracted in successive layers are cumulatively combined to detect more macroscopic features.


Connection Science | 2003

Teacher-directed learning: information-theoretic competitive learning in supervised multi-layered networks

Ryotaro Kamimura; Fumihiko Yoshida

In this paper, we propose a new type of efficient learning method called teacher-directed learning. The method accepts training patterns and correlated teachers, and there is no need to back-propagate errors between targets and outputs through the networks. Information always flows from the input layer to the output layer. In addition, the only connections to be updated are those from the input layer to the first competitive layer; all other connections can take fixed values. Learning is realized as a competitive process by maximizing information on training patterns and correlated teachers. Because information is maximized, it is compressed into the networks in simple ways, which enables us to discover salient features in input patterns. We applied this method to the vertical and horizontal lines detection problem, the analysis of US–Japan trade relations and a fairly complex syntactic analysis system. Experimental results confirmed that teacher information in the input layer forces networks to produce correct answers. In addition, because of maximized information in competitive units, easily interpretable internal representations can be obtained.


IEEE Transactions on Neural Networks | 2006

Cooperative information maximization with Gaussian activation functions for self-organizing maps

Ryotaro Kamimura

In this paper, we propose a new information-theoretic method to produce explicit self-organizing maps (SOMs). Competition is realized by maximizing mutual information between input patterns and competitive units. Competitive unit outputs are computed by a Gaussian function of the distance between input patterns and competitive units: the smaller the distance, the more strongly a neuron tends to fire. Cooperation processes are realized by taking into account the firing rates of neighboring neurons. We applied our method to uniform distribution learning, chemical compound classification and road classification. Experimental results confirmed that cooperation processes could significantly increase information content about input patterns. When cooperative operations are not effective in increasing information, mutual information as well as entropy maximization is used to increase information. Experimental results showed that entropy maximization could be used to increase information and to obtain clearer SOMs, because competitive units are forced to be used equally on average.
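The Gaussian outputs and the neighborhood-based cooperation can be sketched as follows. The map layout, the Gaussian neighborhood kernel over the grid, and the parameter names (`sigma`, `tau`) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gaussian_outputs(X, W, sigma=1.0):
    """Competitive unit outputs via a Gaussian of the input-weight distance:
    closer weights fire more strongly. sigma is an assumed width parameter."""
    d2 = np.sum((X[:, None, :] - W[None, :, :]) ** 2, axis=2)  # (n, m)
    g = np.exp(-d2 / (2.0 * sigma ** 2))
    return g / g.sum(axis=1, keepdims=True)

def cooperative_outputs(P, grid, tau=1.0):
    """Cooperation: blend each unit's firing with that of its map neighbors,
    weighted by a Gaussian of grid distance (an illustrative choice).
    grid holds the map coordinates of the m competitive units."""
    gd2 = np.sum((grid[:, None, :] - grid[None, :, :]) ** 2, axis=2)  # (m, m)
    K = np.exp(-gd2 / (2.0 * tau ** 2))
    Q = P @ K.T
    return Q / Q.sum(axis=1, keepdims=True)
```

Smoothing the firing probabilities over the grid is what couples information maximization to topographic ordering: neighboring units come to respond to similar inputs.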


International Journal of General Systems | 2005

Improving information-theoretic competitive learning by accentuated information maximization

Ryotaro Kamimura

In this paper, we propose a new computational method for information-theoretic competitive learning. We have so far developed information-theoretic methods for competitive learning in which competitive processes are simulated by maximizing mutual information between input patterns and competitive units. Though these methods have shown good performance, networks have had difficulty in increasing information content, and learning is very slow to attain reasonably high information. To overcome this shortcoming, we introduce the rth power of competitive unit activations, used to accentuate the actual activations. Because of this accentuation, we call the new computational method "accentuated information maximization". In this method, intermediate values are pushed toward extreme activation values, making it much more likely that information content can be maximized. We applied our method to a vowel–consonant classification problem, in which the connection weights obtained by our method were similar to those obtained by standard competitive learning. The second experiment was to discover features in a dipole problem; in this problem, we showed that as the parameter r increased, less clear representations were obtained. In the third experiment, an economic data analysis, much clearer representations were obtained by our method than by the standard competitive learning method.
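The accentuation step itself is simple to illustrate: raising normalized activations to the rth power and renormalizing pushes intermediate values toward winner-like extremes. A minimal sketch, with a hypothetical function name:

```python
import numpy as np

def accentuate(P, r=2.0):
    """Raise competitive unit activations P (n, m) to the r-th power and
    renormalize each row. For r > 1 the largest activation grows relative
    to the others, sharpening the competition."""
    A = P ** r
    return A / A.sum(axis=1, keepdims=True)
```

For example, a row of activations (0.6, 0.4) becomes roughly (0.69, 0.31) at r = 2: the same ordering, but sharper, which is what raises the achievable information content.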


Biological Cybernetics | 2011

Self-enhancement learning: target-creating learning and its application to self-organizing maps

Ryotaro Kamimura

In this article, we propose a new learning method called "self-enhancement learning". In this method, targets for learning are not given from the outside but are spontaneously created within the neural network. To realize the method, we consider a neural network with two different states, namely an enhanced and a relaxed state. In the enhanced state, the network responds very selectively to input patterns, while in the relaxed state, the network responds almost equally to all input patterns. The gap between the two states can be reduced by minimizing the Kullback–Leibler divergence between them with free energy. To demonstrate the effectiveness of this method, we applied self-enhancement learning to the self-organizing map (SOM), in which lateral interactions were added to the enhanced state. We applied the method to the well-known Iris, wine, housing and cancer machine learning database problems. In addition, we applied the method to real-life data, a student survey. Experimental results showed that the U-matrices obtained were similar to those produced by the conventional SOM. Class boundaries were made clearer in the housing and cancer data. For all the data except the cancer data, better performance was obtained in terms of quantization and topological errors. In addition, trustworthiness and continuity, which measure the quality of neighborhood preservation, were improved by self-enhancement learning. Finally, we used modern dimensionality reduction methods and compared their results with those obtained by self-enhancement learning. The results obtained by self-enhancement were not superior to, but comparable with, those obtained by the modern dimensionality reduction methods.
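The gap between the two states can be illustrated with a plain Kullback–Leibler divergence between two firing-probability vectors. The example vectors below are invented for illustration; the article's formulation additionally involves free energy and lateral interactions, which are omitted from this sketch.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two firing-probability vectors of equal length."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Enhanced state: highly selective response to an input pattern.
enhanced = np.array([0.85, 0.05, 0.05, 0.05])
# Relaxed state: near-uniform response to the same pattern.
relaxed = np.array([0.25, 0.25, 0.25, 0.25])

gap = kl_divergence(enhanced, relaxed)  # the gap learning tries to reduce
```

Because the enhanced state plays the role of a self-created target, driving this divergence toward zero is what replaces externally supplied targets in the method.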

Collaboration


Dive into Ryotaro Kamimura's collaborations.

Top Co-Authors


Haruhiko Takeuchi

National Institute of Advanced Industrial Science and Technology
