Publication


Featured research published by Ryozo Kitajima.


soft computing | 2015

Accumulative information enhancement in the self-organizing maps and its application to the analysis of mission statements

Ryozo Kitajima; Ryotaro Kamimura

This paper proposes a new information-theoretic method, based on the information enhancement method, to extract important input variables. The information enhancement method was developed to detect important components in neural systems. Previous methods focused on detecting only the most important components and therefore failed to fully incorporate the information contained in the components into the learning process. In addition, it has been observed that the information enhancement method cannot always extract input information from input patterns. Thus, in this paper a computational method is developed to accumulate information content in the process of information enhancement. The method was applied to an artificial data set and to the analysis of mission statements. The results demonstrate that while the symmetric properties of the artificial data set could be extracted explicitly, only one main factor could be extracted from the mission statements, namely “contribution to society”. Companies with higher profits tend to have mission statements concerning society. These results can be considered a first step toward fully clarifying the importance of mission statements in actual business activities.


international symposium on neural networks | 2015

Selective potentiality maximization for input neuron selection in self-organizing maps

Ryotaro Kamimura; Ryozo Kitajima

The present paper proposes a new type of information-theoretic method that enhances the potentiality of input neurons to improve the class structure of the self-organizing map (SOM). The SOM has received much attention in neural network research because it can be used to visualize input patterns and, in particular, to clarify class structure. However, it has been observed that good visualization performance is limited to relatively simple data sets. To visualize more complex data sets, a method is needed that extracts the main characteristics of input patterns more explicitly. Several information-theoretic methods have been developed for this purpose, but they have problems. One of the main problems is that they require heavy computation to obtain the main features, because the procedures for computing information content must be repeated many times. To simplify these procedures, a new measure called the “potentiality” of input neurons is proposed. The potentiality is based on the variance of the connection weights of input neurons and can be computed without the complex computation of information content. The method was applied to an artificial symmetric data set and to biodegradation data from a machine-learning database. Experimental results showed that the method could be used to enhance a small number of input neurons. Those neurons were effective in intensifying class boundaries, yielding clearer class structures. These results show the effectiveness of the new potentiality measure for improved visualization and class structure.
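The abstract defines potentiality as the variance of the connection weights attached to each input neuron, computable without repeated information-content calculations. A minimal sketch of that idea (the function name, max-normalization, and array layout are assumptions for illustration, not the authors' implementation):

```python
import numpy as np

def potentiality(weights):
    """Score each input neuron by the variance of its SOM connection
    weights across all map units, scaled to [0, 1].

    weights : array of shape (n_map_units, n_inputs); row m holds the
              weight vector of map unit m.
    """
    v = np.var(weights, axis=0)   # variance over map units, per input neuron
    return v / v.max()            # normalization to [0, 1] is an assumption

# Toy example: input 0 varies strongly across map units, input 1 not at all,
# so input 0 should receive the highest potentiality.
w = np.array([[0.0, 0.5],
              [1.0, 0.5],
              [0.2, 0.5],
              [0.8, 0.5]])
scores = potentiality(w)
```

An input neuron whose weights are nearly constant over the map contributes little to distinguishing map regions, which is why low variance is read here as low importance.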


international symposium on neural networks | 2007

Forced Information Maximization to Accelerate Information-Theoretic Competitive Learning

Ryotaro Kamimura; Ryozo Kitajima

Information-theoretic competitive learning has been shown to be a more general and more flexible type of competitive learning. However, one of its major shortcomings is that learning is sometimes very slow. To overcome this problem, we introduce forced information, which forces networks to increase information by assuming maximum information. We applied the method to an artificial data set as well as a student survey. In both cases, we observed that information increased very rapidly to stable points. Compared with the results of principal component analysis, our method showed the main features of the input patterns more clearly. In addition, the main mechanism of feature detection can be explained more easily with forced information.


Software Engineering / 811: Parallel and Distributed Computing and Networks / 816: Artificial Intelligence and Applications | 2014

Gradual Information Maximization in Information Enhancement to Extract Important Input Neurons

Ryotaro Kamimura; Ryozo Kitajima

In this paper, we propose a new type of information-theoretic method called “gradual information maximization” to detect important input neurons (variables) in self-organizing maps. The information enhancement method was developed to detect important components in neural networks. However, we have found that the information enhancement method does not necessarily acquire the information needed to detect important neurons. Gradual information maximization aims to acquire as much as possible of the information generated in the course of learning, meaning that information accumulated at every stage of learning can be used to detect important neurons. We applied the method to the analysis of a public opinion poll on a city government in the Tokyo metropolitan area. The method clearly extracted one important variable, “meeting places.” By carefully examining the city's public documents, we found that the problem of “meeting places” was considered one of the city's most serious financial problems. Thus, the finding produced by gradual information maximization reflects an important problem in the city.


foundations of computational intelligence | 2007

Information-Theoretic Variable Selection in Neural Networks

Ryotaro Kamimura; Fumihiko Yoshida; Yamashita Toshie; Ryozo Kitajima

In this paper, we propose a new type of information-theoretic approach to variable selection. Many approaches have been proposed for estimating the importance of input variables, and the majority of them have focused on output errors. Here we introduce an approach based on internal representations. First, we delete an input unit together with its connection weights. Then, by examining the change in hidden-unit activations with and without an input variable, we can identify important variables. We apply this method to an artificial data set in which the number of hidden units is redundantly increased, to clearly show the improved performance and stability of our method. We then apply the method to cabinet approval ratings, for which a better interpretation of the input variables can be given.
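The procedure described in this abstract — delete an input unit and its weights, then compare hidden-unit activations with and without it — can be sketched as follows. This is a hypothetical reading for illustration only: the sigmoid hidden layer, the mean-absolute-difference score, and all names are assumptions, not the authors' code.

```python
import numpy as np

def hidden_activations(X, W):
    # Sigmoid hidden activations for inputs X (n_samples, n_inputs)
    # and input-to-hidden weights W (n_inputs, n_hidden).
    return 1.0 / (1.0 + np.exp(-X @ W))

def importance_by_deletion(X, W):
    """Score each input variable by how much the hidden activations
    change when that variable and its connection weights are removed.
    The distance measure (mean absolute difference) is an assumption."""
    base = hidden_activations(X, W)
    scores = []
    for i in range(X.shape[1]):
        Xd = np.delete(X, i, axis=1)   # drop input variable i
        Wd = np.delete(W, i, axis=0)   # drop its connection weights
        scores.append(np.mean(np.abs(base - hidden_activations(Xd, Wd))))
    return np.array(scores)

# Toy check: input 1 has zero weights everywhere, so deleting it
# leaves the hidden activations unchanged and it scores zero.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
W = np.array([[2.0, -1.0],
              [0.0,  0.0]])
scores = importance_by_deletion(X, W)
```

Unlike error-based importance measures, this score is computed entirely from the internal representation, so no target outputs are needed.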


international conference on neural information processing | 2006

Collective information-theoretic competitive learning: emergency of improved performance by collectively treated neurons

Ryotaro Kamimura; Fumihiko Yoshida; Ryozo Kitajima

In this paper, we show that a simple collection of competitive units can exhibit emergent properties such as improved generalization performance. We have so far defined information-theoretic competitive learning with respect to individual competitive units: as information increases, one competitive unit tends to win the competition, which means that competitive learning can be described as a process of information maximization. However, in living systems a large number of neurons behave collectively, so it is necessary to introduce collective properties into information-theoretic competitive learning. In this context, we treat several competitive units as one unit, that is, one collective unit, and maximize information content not in individual competitive units but in collective ones. We applied the method to an artificial data set and to cabinet approval rating estimation. In both cases, we demonstrated that improved generalization could be obtained.


COGNITIVE 2016, The Eighth International Conference on Advanced Cognitive Technologies and Applications | 2016

Self-Organized Potential Competitive Learning to Improve Interpretation and Generalization in Neural Networks

Ryotaro Kamimura; Ryozo Kitajima; Osamu Uchida


soft computing and pattern recognition | 2015

Neural potential learning for tweets classification and interpretation

Ryozo Kitajima; Ryotaro Kamimura; Osamu Uchida; Fujio Toriumi


international conference on artificial intelligence and applications | 2008

An information-theoretic approach to feature extraction in competitive learning

Ryotaro Kamimura; Tadanari Taniguchi; Ryozo Kitajima


IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences | 2016

Identifying Important Tweets by Considering the Potentiality of Neurons

Ryozo Kitajima; Ryotaro Kamimura; Osamu Uchida; Fujio Toriumi
