Publication


Featured research published by Fumito Uwano.


Congress on Evolutionary Computation | 2016

A modified cuckoo search algorithm for dynamic optimization problems

Yuta Umenai; Fumito Uwano; Yusuke Tajima; Masaya Nakata; Hiroyuki Sato; Keiki Takadama

This paper proposes a simple modification of Cuckoo Search (CS) for dynamic environments. In this paper, we consider a dynamic optimization problem in which the global optimum changes cyclically over time. Our modified CS algorithm retains good candidate solutions in order to explore the search space near those candidates effectively with an intensive local search. Our first experiment tests the proposed method on a set of static optimization problems, which aims at evaluating its potential performance. Then, we apply it to a dynamic optimization problem. Experimental results on the static problems show that the proposed method achieves better performance than the conventional method, which suggests that it has a good capability of finding good solutions. On the dynamic problem, the proposed method also performs well, while the conventional method fails to find a better solution.
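A minimal, self-contained sketch of the general Cuckoo Search scheme the paper builds on (Levy-flight moves relative to the best nest plus random abandonment of poor nests). The objective function, parameter values, and step rule here are illustrative assumptions, not the paper's implementation:

```python
import math
import random

def sphere(x):
    # Static test objective: minimum 0 at the origin.
    return sum(v * v for v in x)

def levy_step(beta=1.5):
    # Mantegna's algorithm for a heavy-tailed Levy-flight step length.
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return random.gauss(0, sigma) / abs(random.gauss(0, 1)) ** (1 / beta)

def cuckoo_search(f, dim=2, n_nests=15, pa=0.25, iters=300, bound=5.0, seed=1):
    random.seed(seed)
    nests = [[random.uniform(-bound, bound) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(n) for n in nests]
    best = min(range(n_nests), key=lambda k: fit[k])
    for _ in range(iters):
        # New cuckoo: Levy flight relative to the current best nest.
        i = random.randrange(n_nests)
        new = [min(bound, max(-bound, x + 0.01 * levy_step() * (x - nests[best][d])))
               for d, x in enumerate(nests[i])]
        if f(new) < fit[i]:
            nests[i], fit[i] = new, f(new)
        # A fraction pa of non-best nests is abandoned and rebuilt at random.
        for j in range(n_nests):
            if j != best and random.random() < pa:
                nests[j] = [random.uniform(-bound, bound) for _ in range(dim)]
                fit[j] = f(nests[j])
        best = min(range(n_nests), key=lambda k: fit[k])
    return nests[best], fit[best]
```

The dynamic variants in the paper additionally hold previous good candidates so the search can return to them when the optimum cycles.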


Archive | 2017

Communication-Less Cooperative Q-Learning Agents in Maze Problem

Fumito Uwano; Keiki Takadama

This paper introduces a reinforcement learning technique with an internal reward for a multi-agent cooperation task. The proposed method is an extension of Q-learning that replaces the ordinary (external) reward with an internal reward for agent cooperation under the condition of no communication. To increase the certainty of the proposed method, we theoretically investigate what values should be set to select the goal for cooperation among agents. In order to show the effectiveness of the proposed method, we conduct intensive simulations on the maze problem for the agent-cooperation task and confirm the following implications: (1) the proposed method successfully enables agents to acquire cooperative behaviors, while a conventional method does not always acquire such behaviors; (2) cooperation among agents according to their internal rewards is achieved without communication; and (3) a condition for cooperation among any number of agents is derived.
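The underlying tabular Q-learning that the internal-reward extension modifies can be sketched as follows. The corridor environment, reward placement, and parameters are hypothetical, with the "internal reward" reduced to a reward the agent assigns itself for reaching its selected goal:

```python
import random

def q_learning(n_states=6, goal=5, episodes=300, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    # Tabular Q-learning on a 1-D corridor with actions 0 = left, 1 = right.
    # The reward below stands in for the paper's internal reward: instead of
    # taking the environment's reward, the agent rewards itself for reaching
    # the goal it has selected (corridor and parameters are hypothetical).
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != goal:
            if random.random() < eps:
                a = random.randrange(2)            # explore
            else:
                a = 0 if q[s][0] > q[s][1] else 1  # exploit (ties go right)
            s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == goal else 0.0         # internal reward at the selected goal
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

In the paper's setting, which goal each agent should internally reward is exactly the quantity derived theoretically so that agents cooperate without communicating.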


Journal of Advanced Computational Intelligence and Intelligent Informatics | 2017

Comparison Between Reinforcement Learning Methods with Different Goal Selections in Multi-Agent Cooperation

Fumito Uwano; Keiki Takadama

This study discusses important factors for zero-communication multi-agent cooperation by comparing two modified reinforcement learning methods that use different goal selections for multi-agent cooperation tasks. The first method, Profit Minimizing Reinforcement Learning (PMRL), forces agents to learn how to reach the farthest goal, and then the agent closest to each goal is directed to it. The second method, Yielding Action Reinforcement Learning (YARL), forces agents to learn through a Q-learning process, and if agents have a conflict, the agent closest to the contested goal learns to reach the next closest goal. To compare the two methods, we designed experiments that adjust the following maze factors: (1) the location of the start point and goal; (2) the number of agents; and (3) the size of the maze. Intensive simulations on the maze problem for the agent-cooperation task revealed that both methods successfully enable the agents to exhibit cooperative behavior, even when the size of the maze and the number of agents change. The PMRL mechanism always enables the agents to learn cooperative behavior, whereas the YARL mechanism makes the agents learn cooperative behavior in a small number of learning iterations. In zero-communication multi-agent cooperation, it is important that only the agents that have a conflict cooperate with each other.
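The yielding rule of YARL (the agent closer to a contested goal gives it up and takes its next-closest goal) can be sketched as a goal-assignment routine. The distance-table encoding and tie-breaking here are assumptions for illustration, not the paper's learning procedure:

```python
def yarl_assign(dists):
    # dists[agent][goal] = path length from an agent to a goal (hypothetical
    # encoding).  Each agent first claims its nearest goal; whenever two agents
    # claim the same goal, the CLOSER agent yields and moves on to its next
    # nearest goal -- a sketch of YARL's yielding rule, not the learning itself.
    n = len(dists)
    prefs = [sorted(range(len(d)), key=lambda g: d[g]) for d in dists]
    rank = [0] * n
    while True:
        claim = [prefs[i][rank[i]] for i in range(n)]
        conflict = next(((i, j) for i in range(n) for j in range(i + 1, n)
                         if claim[i] == claim[j]), None)
        if conflict is None:
            return claim
        i, j = conflict
        # The agent closer to the contested goal yields.
        y = i if dists[i][claim[i]] <= dists[j][claim[j]] else j
        rank[y] += 1
```

For example, when both agents are nearest to goal 0, the closer one yields and the assignment resolves without any communication between the agents.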


International Conference on Human-Computer Interaction | 2018

Correcting Wrongly Determined Opinions of Agents in Opinion Sharing Model

Eiki Kitajima; Caili Zhang; Haruyuki Ishii; Fumito Uwano; Keiki Takadama

This paper aims at achieving stably high accuracy of opinion sharing in a distributed network of agents that have initial opinions. Specifically, the network is composed of multiple agents; most agents form their opinions according to their neighbors' opinions, which may be incorrect, while only a few agents can receive outside information, which is expected to be correct but may be corrupted by noise. In order for the agents to form correct opinions, we employ the Autonomous Adaptive Tuning algorithm (AAT), which can improve the rate of correct opinions shared among the agents by filtering out incorrect opinions during the opinion-sharing process. However, AAT struggles to lead agents to correct opinions when all agents start with initial opinions. To tackle this problem, we propose Autonomous Adaptive Tuning Dynamic (AATD) for networks in which the initial opinions of all agents are unknown. Intensive experiments have revealed the following implications: (1) the accuracy rate of the agents with AATD is stably 70%–80% regardless of the initial opinion state in a small network, while the accuracy rate with AAT varies from 0% to 100% depending on the state of the initial opinions; and (2) AATD is robust to different complex network topologies in comparison with AAT.
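A toy synchronous opinion-sharing round (not AAT or AATD themselves) illustrates the setting: a few sensor agents hold the outside signal while every other agent forms its opinion from its neighbors. All encodings and the majority rule are illustrative assumptions:

```python
def share_opinions(neighbors, opinions, sensors, rounds):
    # Toy synchronous opinion sharing on a directed network (NOT AAT/AATD):
    # each round, every non-sensor agent adopts the majority opinion of its
    # incoming neighbors, keeping its current opinion on a tie; agents in
    # `sensors` hold the outside signal unchanged.
    ops = list(opinions)
    for _ in range(rounds):
        new = list(ops)
        for i, nbrs in neighbors.items():
            if i in sensors or not nbrs:
                continue
            ones = sum(ops[j] for j in nbrs)
            if 2 * ones > len(nbrs):
                new[i] = 1
            elif 2 * ones < len(nbrs):
                new[i] = 0
        ops = new
    return ops
```

AAT and AATD go beyond this fixed rule by adaptively tuning how strongly each agent weighs its neighbors, which is what filters out incorrect opinions.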


Genetic and Evolutionary Computation Conference | 2018

Multiple swarm intelligence methods based on multiple population with sharing best solution for drastic environmental change

Yuta Umenai; Fumito Uwano; Hiroyuki Sato; Keiki Takadama

This paper proposes a multiple-swarm optimization method composed of several populations, each of which is optimized by a different swarm optimization algorithm, to adapt to a dynamically changing environment. To investigate the effectiveness of the proposed method, we apply it to a complex environment in which the objective function changes at a certain interval. Intensive experiments have revealed that the performance of the proposed method is better than that of the conventional algorithms (i.e., particle swarm optimization (PSO), cuckoo search (CS), and differential evolution (DE)) in terms of convergence and fitness.
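The core idea of sharing one best solution across sub-populations that use different update rules can be sketched as follows. The two simplified operators (a PSO-like pull toward the shared best and a DE-like difference move) and all parameters are illustrative stand-ins, not the paper's PSO/CS/DE variants:

```python
import random

def sphere(x):
    # Simple static objective used only to exercise the sketch.
    return sum(v * v for v in x)

def multi_pop(f, dim=2, size=10, iters=200, bound=5.0, seed=3):
    random.seed(seed)

    def rand_vec():
        return [random.uniform(-bound, bound) for _ in range(dim)]

    pops = [[rand_vec() for _ in range(size)] for _ in range(2)]
    best = min((x for p in pops for x in p), key=f)
    for _ in range(iters):
        # Sub-population 0: PSO-like pull toward the shared best, greedy accept.
        for k, x in enumerate(pops[0]):
            new = [xi + random.uniform(0, 1) * (bi - xi) + random.gauss(0, 0.1)
                   for xi, bi in zip(x, best)]
            if f(new) < f(x):
                pops[0][k] = new
        # Sub-population 1: DE-like difference move around the shared best.
        for k, x in enumerate(pops[1]):
            a, b = random.choice(pops[1]), random.choice(pops[1])
            new = [bi + 0.5 * (ai - ci) for bi, ai, ci in zip(best, a, b)]
            if f(new) < f(x):
                pops[1][k] = new
        # The best solution found so far is shared across both sub-populations.
        best = min([best] + [x for p in pops for x in p], key=f)
    return best
```

Keeping heterogeneous update rules while sharing one best solution is what lets some sub-population keep making progress when the objective suddenly changes.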


Genetic and Evolutionary Computation Conference | 2018

Generalizing rules by random forest-based learning classifier systems for high-dimensional data mining

Fumito Uwano; Koji Dobashi; Keiki Takadama; Tim Kovacs

This paper proposes a high-dimensional data mining technique that integrates two data mining methods: Accuracy-based Learning Classifier Systems (XCS) and Random Forests (RF). Concretely, the proposed system integrates RF and XCS as follows: RF generates a number of decision trees, and XCS generalizes the rules converted from those decision trees. The conversion works as follows: (1) a branch node of a decision tree becomes an attribute; (2) if a branch node for an attribute does not exist on a path, that attribute becomes # for XCS; and (3) one decision tree becomes at least one rule. Note that # can match any value of the attribute. From experiments on Multiplexer problems, we derive the following: (i) the proposed system performs well; and (ii) RF helps XCS acquire optimal solutions as knowledge by generating appropriately generalized rules.
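The tree-to-rule conversion described above can be sketched directly: every root-to-leaf path becomes one ternary rule, and attributes the path never tests stay '#' (don't care). The nested-dict tree encoding is a hypothetical format for illustration, not the paper's data structure:

```python
def tree_to_rules(tree, n_attrs):
    # Convert a binary decision tree over boolean attributes into XCS-style
    # ternary rules: each root-to-leaf path yields one (condition, action)
    # pair, with untested attributes left as '#' (match anything).
    rules = []

    def walk(node, cond):
        if 'action' in node:                     # leaf: emit one rule
            rules.append((''.join(cond), node['action']))
            return
        i = node['attr']
        walk(node['left'], cond[:i] + ['0'] + cond[i + 1:])   # attribute i = 0
        walk(node['right'], cond[:i] + ['1'] + cond[i + 1:])  # attribute i = 1

    walk(tree, ['#'] * n_attrs)
    return rules
```

A tree that tests attribute 0 and then attribute 2 thus produces rules such as '1#0', where attribute 1 is generalized away before XCS refines the rule set further.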


International Conference on Principles and Practice of Multi-Agent Systems | 2018

Strategy for Learning Cooperative Behavior with Local Information for Multi-agent Systems

Fumito Uwano; Keiki Takadama

Toward learning cooperative behavior for any number of agents, this paper proposes a multi-agent reinforcement learning method without communication, called PMRL-based Learning for Any number of Agents (PLAA). PLAA prevents agents from spending too many steps to reach their goals and promotes local multi-agent cooperation without communication through PMRL, a previous method. To validate the effectiveness of PLAA, this paper compares it with Q-learning and two previous methods on 10 kinds of mazes with 2 and 3 agents. From the experimental results, we reveal the following: (a) PLAA is the most effective method for cooperation among 2 and 3 agents; and (b) PLAA enables the agents to cooperate with each other within a small number of iterations.


International Conference on Swarm Intelligence | 2017

Strategies to Improve Cuckoo Search Toward Adapting Randomly Changing Environment

Yuta Umenai; Fumito Uwano; Hiroyuki Sato; Keiki Takadama

Cuckoo Search (CS) is a powerful optimization algorithm that has been actively researched in recent years. Cuckoo Search for Dynamic Environments (D-CS) was previously proposed and tested in a dynamic environment that is multi-modal and changes cyclically. It was clear that D-CS has the capability to hold good solutions and can find the optimal solutions in that environment. However, those experiments only provide results for that environment; D-CS has not been fully explored in dynamic environments with other kinds of dynamism. We investigate and discuss the find and hold capabilities of D-CS in a dynamic environment with randomness. We employed a multi-modal dynamic function with randomness and applied D-CS to this environment. We compared D-CS with CS in terms of obtaining better fitness. The experimental results show that D-CS has a good hold capability in a dynamic environment with randomness. Introducing the Local Solution Comparison strategy and the Concurrent Solution Generating strategy helps to obtain both hold and find capabilities in a dynamic environment with randomness.


SICE Journal of Control, Measurement, and System Integration | 2018

Multi-Agent Cooperation Based on Reinforcement Learning with Internal Reward in Maze Problem

Fumito Uwano; Naoki Tatebe; Yusuke Tajima; Masaya Nakata; Tim Kovacs; Keiki Takadama


SICE Journal of Control, Measurement, and System Integration | 2018

Weighted Opinion Sharing Model for Cutting Link and Changing Information among Agents as Dynamic Environment

Fumito Uwano; Rei Saito; Keiki Takadama

Collaboration


Dive into Fumito Uwano's collaborations.

Top Co-Authors

Keiki Takadama, University of Electro-Communications
Akinori Murata, University of Electro-Communications
Yuta Umenai, University of Electro-Communications
Masaya Nakata, University of Electro-Communications
Yusuke Tajima, University of Electro-Communications
Haruyuki Ishii, University of Electro-Communications
Hiroyuki Sato, University of Electro-Communications
Naoki Tatebe, University of Electro-Communications
Takato Tatsumi, University of Electro-Communications