Takato Tatsumi
University of Electro-Communications
Publications
Featured research published by Takato Tatsumi.
genetic and evolutionary computation conference | 2016
Takato Tatsumi; Takahiro Komine; Masaya Nakata; Hiroyuki Sato; Tim Kovacs; Keiki Takadama
The Learning Classifier System (LCS) [2] is an evolutionary machine learning method that combines reinforcement learning and a genetic algorithm. An important feature of LCS is that it can acquire generalized rules that match multiple states using the # symbol. Among LCSs, the accuracy-based LCS (XCS) [4] can acquire "accurate" generalized rules by reducing the difference between the predicted reward and the acquired reward, but XCS has difficulty estimating this difference correctly in noisy environments. To address this issue, our previous research proposed XCS-SAC (XCS with Self-adaptive Accuracy Criterion) [3] for noisy environments. Since the estimated standard deviation of the rewards of inaccurate rules is larger than that of accurate ones, the fitness of rules in XCS-SAC is calculated according to the estimated standard deviation of the rewards. However, XCS-SAC must wait until the estimated standard deviation has converged for all state-action pairs. To overcome this problem, this paper notes that the average reward is distributed around its true value, and proposes XCS without Convergence of Reward Estimation (XCS-CRE), which determines the accuracy of rules according to the distribution range of the average reward of the matched state-action pair.
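The criterion described above can be illustrated with a minimal sketch, assuming a simple threshold on the estimated standard deviation of a rule's rewards; the function name and tolerance value are illustrative, not the paper's notation:

```python
import statistics

def classifier_accuracy(rewards, epsilon0=10.0):
    """Hypothetical sketch in the spirit of XCS-SAC: treat a classifier
    as 'accurate' while the spread of its observed rewards stays within
    a tolerance epsilon0. Inaccurate (over-general) rules mix rewards
    from different states, so their standard deviation grows larger."""
    if len(rewards) < 2:
        return True  # too few samples to reject the classifier
    return statistics.stdev(rewards) <= epsilon0
```

XCS-CRE replaces the wait for this estimate to converge by reasoning about the distribution range of the sample mean instead.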
Journal of Advanced Computational Intelligence and Intelligent Informatics | 2017
Kazuma Matsumoto; Takato Tatsumi; Hiroyuki Sato; Tim Kovacs; Keiki Takadama
The correctness rate of classification of neural networks has been improved by deep learning, and in some fields it exceeds human performance. This paper proposes a hybrid system of a neural network and the Learning Classifier System (LCS). LCS is evolutionary rule-based machine learning that uses reinforcement learning. To increase the correctness rate of classification, we combine the neural network and the LCS. This paper conducts benchmark experiments to verify the proposed system. The experiments revealed that: 1) the correctness rate of classification of the proposed system is higher than that of the conventional LCS (XCSR) and a plain neural network; and 2) the covering mechanism of XCSR raises the correctness rate of the proposed system.
genetic and evolutionary computation conference | 2018
Kazuma Matsumoto; Ryo Takano; Takato Tatsumi; Hiroyuki Sato; Tim Kovacs; Keiki Takadama
This paper proposes a novel Learning Classifier System (LCS) that can solve high-dimensional problems and obtain human-readable knowledge by integrating a deep neural network as a compressor. In the proposed system, named DCAXCSR, a deep neural network called the Deep Classification Autoencoder (DCA) compresses (encodes) the input to lower-dimensional information that the LCS can handle, and decompresses (decodes) the output of the LCS back to the original dimension. DCA is a hybrid of a classification network and an autoencoder, designed to increase the compression rate. If learning is insufficient because of information lost in compression, the LCS can solve high-dimensional problems directly by using the decoded information as an initial value for narrowing down the state space. As the LCS of the proposed system, we employ XCSR, an LCS for real values, since DCA compresses the input to real values. To investigate the effectiveness of the proposed system, this paper conducts experiments on the MNIST benchmark classification problem and on Multiplexer problems. The results show that the proposed system can solve high-dimensional problems that conventional XCSR cannot, and can obtain human-readable knowledge.
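The encode/LCS/decode pipeline can be sketched as follows; this is only a shape-level illustration with random stand-in weights, not a trained DCA, and the dimensions (784-dimensional input, 8-dimensional code) are assumptions chosen to match MNIST-sized inputs:

```python
import numpy as np

# Hypothetical sketch of the DCA pipeline: an encoder compresses a
# high-dimensional input to a few real values that XCSR can match on,
# and a decoder maps the code back to the original dimension.
rng = np.random.default_rng(0)
W_enc = rng.standard_normal((784, 8))   # 784-dim input -> 8-dim code
W_dec = rng.standard_normal((8, 784))   # 8-dim code -> 784-dim output

def encode(x):
    return np.tanh(x @ W_enc)           # low-dimensional code for XCSR

def decode(z):
    return z @ W_dec                    # reconstruction, usable as an
                                        # initial value for the search

x = rng.standard_normal(784)            # stand-in for one input image
z = encode(x)                           # what XCSR actually sees
x_hat = decode(z)
```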
genetic and evolutionary computation conference | 2018
Takato Tatsumi; Tim Kovacs; Keiki Takadama
The accuracy-based Learning Classifier System (XCS) prefers to generalize classifiers that always acquire the same reward, because they make accurate reward predictions. However, real-world problems contain noise, which means that classifiers may not receive the same reward even if they always take the correct action. In this case, since all classifiers acquire multiple reward values, XCS cannot identify accurate classifiers. In this paper, we study a single-step environment with action noise, where XCS's action is sometimes changed at random. To overcome this problem, this paper proposes XCS based on Collective weighted Reward (XCS-CR), which identifies accurate classifiers. In XCS, each rule predicts its next reward by averaging its past rewards. Instead, XCS-CR predicts its next reward by selecting a reward from the set of past rewards, comparing them to the collective weighted average reward of the rules matching the current input for each action. This comparison helps XCS-CR identify rewards that result from action noise. In experiments, XCS-CR acquired the optimal generalized classifier subset in 6-Multiplexer problems with action noise, similar to the environment without noise, and correctly judged those optimal generalized classifiers to be accurate.
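The selection step described above can be sketched as follows, assuming a fitness-weighted mean over the match set; the data layout and names are illustrative, not the paper's notation:

```python
def collective_weighted_reward(past_rewards, match_set):
    """Hypothetical sketch of XCS-CR's idea: instead of averaging a
    rule's own past rewards, select the stored past reward closest to
    the fitness-weighted mean reward of all rules matching the current
    input. `match_set` is a list of (prediction, fitness) pairs."""
    total_fitness = sum(f for _, f in match_set)
    collective = sum(p * f for p, f in match_set) / total_fitness
    # the reward nearest the collective estimate is treated as the
    # 'true' reward; outliers caused by action noise are filtered out
    return min(past_rewards, key=lambda r: abs(r - collective))
```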
genetic and evolutionary computation conference | 2017
Takato Tatsumi; Hiroyuki Sato; Keiki Takadama
XCS (the accuracy-based learning classifier system) can acquire accurate classifiers on the basis of a consistent reward, but in real-world problems it does not always receive a consistent reward even when it gives the same output for the same input. Such a situation prevents XCS from reducing the number of over-specific accurate classifiers through the subsumption mechanism, which makes it hard for XCS to acquire the optimal classifiers. To address this issue, our previous research proposed XCS-MR (XCS based on Mean of Reward), which can reduce the number of classifiers even in environments where the size of the reward is uncertain. However, XCS-MR requires a large amount of learning data to correctly determine the accuracy of classifiers, because it must record the average and variance of the rewards over the whole input-output space. To overcome this problem, this paper proposes a new XCS that can reduce the number of classifiers in uncertain-reward environments without recording the average and variance of the rewards over the whole input-output space. This paper shows the effectiveness of the proposed XCS through experiments.
congress on evolutionary computation | 2017
Takato Tatsumi; Hiroyuki Sato; Tim Kovacs; Keiki Takadama
This paper focuses on the generalization of classifiers in noisy problems and aims to explore learning classifier systems (LCSs) that can evolve accurately generalized classifiers as an optimal solution in environments with different types of noise. For this purpose, this paper employs XCS-CRE (XCS without Convergence of Reward Estimation), which can correctly identify classifiers as accurate or inaccurate even in a noisy problem, and investigates its effectiveness in several noisy problems. Through intensive experiments with three LCSs (i.e., XCS as the conventional LCS, XCS-SAC (XCS with Self-adaptive Accuracy Criterion) as our previous LCS, and XCS-CRE) on the noisy 11-multiplexer problem, where the reward value changes according to (a) a Gaussian distribution, (b) a Cauchy distribution, or (c) a lognormal distribution, the following implications were revealed: (1) the correct rate of the classifiers of XCS-CRE and XCS-SAC converges to 100% for all three reward distributions, while that of XCS does not reach 100%; (2) the population size of XCS-CRE is the smallest, followed by XCS-SAC and XCS; and (3) the percentage of acquired optimal classifiers of XCS-CRE is the highest, followed by XCS-SAC and XCS.
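The experimental setup described above can be sketched as follows; the reward magnitude (1000 for a correct action) and the noise scales are illustrative assumptions, not the paper's settings:

```python
import math
import random

def multiplexer11(bits):
    """11-multiplexer: the first 3 bits address one of the 8 data bits."""
    addr = int("".join(map(str, bits[:3])), 2)
    return bits[3 + addr]

def noisy_reward(correct, noise, rng):
    """Hypothetical reward scheme for the noisy 11-multiplexer: 1000 for
    a correct action and 0 otherwise, perturbed by one of the three
    noise models from the experiments."""
    base = 1000.0 if correct else 0.0
    if noise == "gauss":
        return base + rng.gauss(0.0, 100.0)
    if noise == "cauchy":
        # standard Cauchy sample via the inverse CDF, scaled by 100
        return base + 100.0 * math.tan(math.pi * (rng.random() - 0.5))
    if noise == "lognormal":
        return base + rng.lognormvariate(0.0, 1.0)
    return base
```

The Cauchy case is the hardest for a standard-deviation-based criterion, since a Cauchy distribution has no finite variance.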
Journal of Advanced Computational Intelligence and Intelligent Informatics | 2017
Caili Zhang; Takato Tatsumi; Masaya Nakata; Keiki Takadama
This paper presents an approach to clustering that extends the variance-based Learning Classifier System (XCS-VR). In real-world problems, the ability to combine similar rules is crucial in the knowledge discovery and data mining field. XCS-VR can acquire generalized rules, but it cannot acquire still more generalized rules from them. The proposed approach (called XCS-VRc) accomplishes this by integrating similar generalized rules. To validate the proposed approach, we designed a benchmark problem to examine whether XCS-VRc can cluster both the generalized and more generalized features in the input data. XCS-VRc proved to be more efficient than XCS and the conventional XCS-VR.
soft computing | 2016
Caili Zhang; Takato Tatsumi; Masaya Nakata; Keiki Takadama; Hiroyuki Sato; Tim Kovacs
This paper extends the variance-based Learning Classifier System XCS-SAC in order to extract rules (i.e., classifiers) at two different levels of abstraction. Since XCS-SAC attempts to evolve classifiers whose generality depends on their own parameters, this results in many specific classifiers (i.e., classifiers with few # symbols). Due to inappropriate generalization, some classifiers might not be human-understandable. To overcome this problem, our LCS focuses on extracting rules at only two levels of abstraction, specific and general, to reveal tendencies in a given problem. Specifically, the specific rules can be used only in limited situations but are very accurate, while the general rules can be used widely but are less accurate. The experimental results show that our LCS succeeds in extracting both specific and general rules appropriately, in comparison with XCS-SAC.
congress on evolutionary computation | 2015
Takato Tatsumi; Takahiro Komine; Hiroyuki Sato; Keiki Takadama
XCS is an accuracy-based learning classifier system (LCS) powered by a reinforcement learning algorithm. We expect XCS to have difficulty when the reward for a state-action pair is unstable, because it cannot correctly estimate the evaluation of classifiers. This paper focuses on learning under different levels of reward instability and proposes XCS-URE (XCS for Unstable Reward Environments), which improves XCS for such environments. For this purpose, XCS-URE estimates the reward distribution of a classifier (i.e., an if-then rule) using the standard deviation of the acquired rewards, and adjusts the accuracy of the classifier according to that distribution. To investigate the effectiveness of XCS-URE, this paper applies XCS and XCS-URE to multiple unstable-reward environments with different levels of reward instability, produced by adding Gaussian noise. The experiments on modified multiplexer problems have the following implications: (1) in an environment where the same Gaussian noise is added everywhere, XCS cannot perform properly because of the low accuracy of its classifiers, while XCS-URE performs properly by acquiring appropriate classifiers; (2) in the same environment, XCS-URE can reduce the population size without decreasing the correct rate compared to XCS; and (3) even when different Gaussian noise is added depending on the situation, XCS-URE can reduce the population size without decreasing the correct rate by adjusting the accuracy of classifiers according to the reward distribution.
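Tracking the mean and standard deviation of each classifier's rewards, as XCS-URE does, can be done incrementally without storing every reward. A minimal sketch using Welford's online algorithm (the class name and fields are illustrative, not the paper's notation):

```python
class RewardStats:
    """Hypothetical per-classifier reward tracker in the spirit of
    XCS-URE: Welford's online algorithm maintains a running mean and
    standard deviation in O(1) memory per classifier."""

    def __init__(self):
        self.n = 0        # number of rewards seen
        self.mean = 0.0   # running mean of rewards
        self.m2 = 0.0     # running sum of squared deviations

    def update(self, reward):
        self.n += 1
        delta = reward - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (reward - self.mean)

    def stdev(self):
        # sample standard deviation; 0 until at least two samples
        return (self.m2 / (self.n - 1)) ** 0.5 if self.n > 1 else 0.0
```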
genetic and evolutionary computation conference | 2018
Caili Zhang; Takato Tatsumi; Hiroyuki Sato; Tim Kovacs; Keiki Takadama