Publication


Featured research published by Tomohiro Harada.


computational intelligence and games | 2015

Combining pathfinding algorithm with Knowledge-based Monte-Carlo tree search in general video game playing

Chun Yin Chu; Hisaaki Hashizume; Zikun Guo; Tomohiro Harada; Ruck Thawonmas

This paper proposes a general video game playing AI that combines a pathfinding algorithm with Knowledge-based Fast-Evolutionary Monte-Carlo tree search (KB Fast-Evo MCTS). This AI is able to acquire knowledge of the game through simulation, select suitable targets on the map using the acquired knowledge, and head to the target in an efficient manner. In addition, improvements have been proposed to handle various features of the GVG-AI platform, including avatar type changes, portals, and item usage. Experiments on the GVG-AI Competition framework have shown that our proposed AI can adapt to a wide range of video games and performs better than the original KB Fast-Evo MCTS controller in 75% of all games tested, with a 64.2% improvement in winning percentage.
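
To make the combination concrete, the following is a minimal sketch, not the authors' controller: knowledge_score stands in for the sprite scores that KB Fast-Evo MCTS acquires through simulation, the grid, avatar, and target representations are illustrative assumptions, and a breadth-first search supplies the pathfinding step.

from collections import deque

def bfs_first_step(grid, start, goal):
    """Return the first move of a shortest path on a 4-connected grid, or None."""
    if start == goal:
        return start
    rows, cols = len(grid), len(grid[0])
    queue, parent = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            while parent[cell] != start:      # walk back to the step right after start
                cell = parent[cell]
            return cell
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#' \
                    and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

def choose_move(grid, avatar, targets, knowledge_score):
    """Head toward the target with the highest score learned from simulations."""
    best = max(targets, key=knowledge_score, default=None)
    return bfs_first_step(grid, avatar, best) if best is not None else None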


european conference on genetic programming | 2013

Asynchronous evaluation based genetic programming: comparison of asynchronous and synchronous evaluation and its analysis

Tomohiro Harada; Keiki Takadama

This paper compares an asynchronous evaluation based GP with a synchronous evaluation based GP to investigate the evolution ability of asynchronous evaluation in the GP domain. As an asynchronous evaluation based GP, this paper focuses on Tierra-based Asynchronous GP (TAGP), which we have proposed and which is based on the biological evolution simulator Tierra. An intensive experiment compares TAGP with simple GP by applying them to a symbolic regression problem, and it reveals that the asynchronous evaluation based GP has better evolution ability than the synchronous one.
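
The scheduling difference can be sketched as follows; this is a minimal illustration of asynchronous evaluation in general, not TAGP itself, and evaluate and breed are hypothetical stand-ins for the real fitness call and GP variation operators.

import random
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def evaluate(individual):            # stand-in for a costly, variable-time fitness call
    return sum(individual)

def breed(population):               # stand-in for GP crossover and mutation
    return [random.random() for _ in range(len(population[0]))]

def asynchronous_evolution(pop_size=8, steps=100):
    population = [[random.random() for _ in range(5)] for _ in range(pop_size)]
    results = []
    with ThreadPoolExecutor(max_workers=4) as pool:
        pending = {pool.submit(evaluate, ind): ind for ind in population}
        for _ in range(steps):
            done, _ = wait(pending, return_when=FIRST_COMPLETED)
            finished = done.pop()                        # handle one completed evaluation
            results.append((pending.pop(finished), finished.result()))
            child = breed(population)                    # breed without a generation barrier
            pending[pool.submit(evaluate, child)] = child
    return results

A synchronous scheme would instead call wait with ALL_COMPLETED and only then produce the whole next generation, which is exactly the barrier the asynchronous loop above avoids.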


ieee global conference on consumer electronics | 2016

Application of Monte-Carlo tree search in a fighting game AI

Shubu Yoshida; Makoto Ishihara; Taichi Miyazaki; Yuto Nakagawa; Tomohiro Harada; Ruck Thawonmas

This paper describes an application of Monte-Carlo Tree Search (MCTS) in a fighting game AI. MCTS is a best-first search technique that uses stochastic simulations. In this paper, we evaluate its effectiveness on FightingICE, a game AI competition platform at the Computational Intelligence and Games conferences. Our results confirm that MCTS is an effective search technique for controlling a game AI on the aforementioned platform.
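
The best-first selection inside such an MCTS is typically driven by the UCB1 rule; the sketch below shows that rule in isolation, with node statistics and the exploration constant as assumptions rather than values taken from the FightingICE AI.

import math

def ucb1(child_value_sum, child_visits, parent_visits, c=1.41):
    """Average reward plus an exploration bonus that shrinks as a child is visited."""
    if child_visits == 0:
        return float('inf')                       # force unvisited children to be tried
    exploit = child_value_sum / child_visits
    explore = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploit + explore

def select_child(children):
    """children: list of dicts with 'value_sum' and 'visits'; pick the best by UCB1."""
    parent_visits = sum(ch['visits'] for ch in children) or 1
    return max(children, key=lambda ch: ucb1(ch['value_sum'], ch['visits'], parent_visits))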


advances in computer entertainment technology | 2016

Applying and Improving Monte-Carlo Tree Search in a Fighting Game AI

Makoto Ishihara; Taichi Miyazaki; Chun Yin Chu; Tomohiro Harada; Ruck Thawonmas

This paper evaluates the performance of Monte-Carlo Tree Search (MCTS) in a fighting game AI and proposes an improvement for the algorithm. Most existing fighting game AIs rely on rule bases and react to every situation with predefined actions, making them predictable for human players. We attempt to overcome this weakness by applying MCTS, which can adapt to different circumstances without relying on predefined action patterns or tactics. In this paper, an AI based on Upper Confidence bounds applied to Trees (UCT) and MCTS is first developed. Next, the paper proposes improving the AI with Roulette Selection and a rule base. Through testing and evaluation using FightingICE, an international fighting game AI competition platform, it is proven that the aforementioned MCTS-based AI is effective in a fighting game, and our proposed improvement can further enhance its performance.
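
Roulette Selection itself can be summarized in a few lines; the following is a minimal, generic sketch of probability-proportionate choice over action scores, not the authors' implementation, and the action and score names are assumptions.

import random

def roulette_select(actions, scores):
    """Pick an action with probability proportional to its (non-negative) score."""
    total = sum(scores)
    if total <= 0:
        return random.choice(actions)             # fall back to a uniform choice
    threshold = random.uniform(0, total)
    cumulative = 0.0
    for action, score in zip(actions, scores):
        cumulative += score
        if cumulative >= threshold:
            return action
    return actions[-1]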


Archive | 2015

Artificial Bee Colony Algorithm Based on Local Information Sharing in Dynamic Environment

Ryo Takano; Tomohiro Harada; Hiroyuki Sato; Keiki Takadama

This paper focuses on the Artificial Bee Colony (ABC) algorithm, which utilizes global information in a static environment, and extends it to an ABC algorithm based on local information sharing (ABC-lis) for dynamic environments. In detail, the ABC-lis algorithm shares only local information among solutions, unlike the conventional ABC algorithm. To investigate the search ability of the ABC-lis algorithm and its adaptability to environmental change, we compare it with two conventional ABC algorithms by applying them to a multimodal problem with dynamic environmental change. The experimental results reveal that the proposed ABC-lis algorithm maintains its search performance in the multimodal problem under dynamic environmental change, demonstrating both its search ability and its adaptability to environmental change.
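
One way to picture local information sharing is an onlooker step that chooses only among nearby food sources rather than over the whole population; this is a minimal sketch of that idea under assumed data structures, not the authors' ABC-lis code.

import random

def local_onlooker_choice(fitnesses, index, radius=2):
    """Pick a food source near `index` with probability proportional to local fitness."""
    lo, hi = max(0, index - radius), min(len(fitnesses), index + radius + 1)
    neighbours = list(range(lo, hi))              # only the local neighbourhood is visible
    weights = [fitnesses[i] for i in neighbours]
    total = sum(weights)
    if total <= 0:
        return random.choice(neighbours)
    pick, acc = random.uniform(0, total), 0.0
    for i, w in zip(neighbours, weights):
        acc += w
        if acc >= pick:
            return i
    return neighbours[-1]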


computational intelligence and games | 2016

Position-based reinforcement learning biased MCTS for General Video Game Playing

Chun Yin Chu; Suguru Ito; Tomohiro Harada; Ruck Thawonmas

This paper proposes an application of reinforcement learning and position-based features in rollout bias training of Monte-Carlo Tree Search (MCTS) for General Video Game Playing (GVGP). As an improvement on the Knowledge-based Fast-Evo MCTS proposed by Perez et al., the proposed method is designed both for the GVG-AI Competition and to improve the learning mechanism of the original method. The performance of the proposed method is evaluated empirically, using all games from the six training sets available in the GVG-AI Framework, and the proposed method achieves better overall scores than five other existing MCTS-based methods.
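
A rollout bias of this kind is often realized as a learned linear score over features turned into action probabilities; the sketch below follows that common pattern under assumed feature and weight representations and is not the paper's implementation.

import math, random

def softmax(scores, temperature=1.0):
    """Turn raw action scores into a probability distribution."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def biased_rollout_action(actions, features, weights):
    """features[a]: feature vector for action a; weights: learned bias parameters."""
    scores = [sum(w * f for w, f in zip(weights, features[a])) for a in actions]
    return random.choices(actions, weights=softmax(scores), k=1)[0]

def update_weights(weights, feature_vec, reward, value, lr=0.05):
    """Simple gradient-style update pushing the predicted value toward the rollout reward."""
    return [w + lr * (reward - value) * f for w, f in zip(weights, feature_vec)]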


ieee global conference on consumer electronics | 2015

Procedural generation of angry birds levels that adapt to the player's skills using genetic algorithm

Misaki Kaidan; Chun Yin Chu; Tomohiro Harada; Ruck Thawonmas

This paper proposes a procedural generation method that automatically creates game levels for Angry Birds, a famous mobile game, using a genetic algorithm. By adjusting the parameters of the genetic algorithm according to the player's gameplay results, our proposed method can generate game levels that adapt to the player's skills. Our experiment shows that the proposed method is able to procedurally generate game levels that befit the player's skill.
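
The adaptation loop can be pictured as adjusting GA parameters from the player's observed results; the following is a minimal sketch under assumed parameter names and a generic GA loop, not the authors' generator.

import random

def adapt_parameters(win_rate, base_mutation=0.1, base_difficulty=0.5):
    """Raise the target difficulty for strong players and lower it for struggling ones."""
    difficulty = min(1.0, max(0.0, base_difficulty + 0.5 * (win_rate - 0.5)))
    mutation_rate = base_mutation * (1.0 + abs(win_rate - 0.5))   # explore more when off target
    return difficulty, mutation_rate

def evolve_level(population, fitness, mutation_rate, generations=50):
    """Generic GA loop: binary tournament selection plus per-gene mutation."""
    for _ in range(generations):
        parents = [max(random.sample(population, 2), key=fitness) for _ in population]
        population = [[g if random.random() > mutation_rate else random.random()
                       for g in p] for p in parents]
    return max(population, key=fitness)

A caller would fold the adapted difficulty into the fitness function, for example by penalizing the gap between a level's estimated difficulty and the target, and pass the adapted mutation rate to evolve_level.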


genetic and evolutionary computation conference | 2014

Asynchronously evolving solutions with excessively different evaluation time by reference-based evaluation

Tomohiro Harada; Keiki Takadama

Asynchronous evolution has an advantage when evolving solutions with excessively different evaluation times, since it evolves each solution independently without waiting for other evaluations, unlike synchronous evolution, which requires evaluations of all solutions at the same time. As a novel asynchronous evolution approach, this paper proposes Asynchronous Reference-based Evaluation (ARE), which asynchronously selects good parents by tournament selection against a reference solution in order to evolve solutions through crossover of those good parents. To investigate the effectiveness of ARE when evolving solutions with excessively different evaluation times, this paper applies ARE to Genetic Programming (GP) and compares GP using ARE (ARE-GP) with GP using (μ+λ) selection ((μ+λ)-GP) as the synchronous approach, in a situation where the evaluation time differs among individuals. The intensive experiments have revealed the following implications: (1) ARE-GP greatly outperforms (μ+λ)-GP in terms of elapsed unit time in a parallel computation environment, and (2) ARE-GP can evolve individuals without decreasing its search ability in situations where the computing speed of individuals differs and some individuals fail in their execution.
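
The core of reference-based parent selection can be sketched as a comparison of each newly evaluated individual against a stored reference; this is one minimal interpretation of the idea, not the authors' ARE code, and the function and parameter names are assumptions.

def reference_tournament(new_individual, new_fitness, reference, parent_pool,
                         better=lambda a, b: a > b):
    """Admit the new individual as a parent only if it beats the reference fitness."""
    ref_individual, ref_fitness = reference
    if better(new_fitness, ref_fitness):
        parent_pool.append(new_individual)         # good enough to breed from immediately
        reference = (new_individual, new_fitness)  # tighten the reference for later arrivals
    return reference, parent_pool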


congress on evolutionary computation | 2017

Performance comparison of parallel asynchronous multi-objective evolutionary algorithm with different asynchrony

Tomohiro Harada; Keiki Takadama

This paper proposes a parallel asynchronous evolutionary algorithm (EA) with different degrees of asynchrony and verifies its effectiveness on multi-objective optimization problems. We call such an EA a semi-asynchronous EA. The semi-asynchronous EA continuously evolves solutions whenever a part of the solutions in the population completes their evaluations in a master-slave parallel computation environment, unlike a conventional synchronous EA, which waits for the evaluations of all solutions before generating the next population. To establish the semi-asynchronous EA, this paper proposes an asynchrony parameter that decides how many solution evaluations to wait for, and clarifies the effective asynchrony in relation to the number of slave nodes. In the experiment, we apply the semi-asynchronous scheme to NSGA-II, a well-known multi-objective evolutionary algorithm, and compare semi-asynchronous NSGA-IIs with different asynchrony against the synchronous one on multi-objective optimization benchmark problems with several variances of evaluation time. The experimental results reveal that the semi-asynchronous NSGA-II with low asynchrony can achieve better search ability than both the fully asynchronous and the synchronous NSGA-II in optimization problems with a large variance of evaluation time.
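
The asynchrony parameter can be illustrated with a master loop that resumes once a given number of evaluations have returned, rather than one (fully asynchronous) or all (synchronous); the following is a minimal sketch with stand-in evaluate and produce_offspring functions, not the authors' NSGA-II integration.

import random
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def evaluate(solution):                        # stand-in for an expensive objective evaluation
    return sum(solution)

def produce_offspring(evaluated, n):           # stand-in for NSGA-II style selection and variation
    return [[random.random() for _ in range(3)] for _ in range(n)]

def semi_asynchronous_ea(pop_size=8, asynchrony=3, steps=20):
    population = [[random.random() for _ in range(3)] for _ in range(pop_size)]
    with ThreadPoolExecutor(max_workers=4) as pool:
        pending = {pool.submit(evaluate, s): s for s in population}
        for _ in range(steps):
            finished = []
            while len(finished) < asynchrony:              # collect at least `asynchrony` results
                done, _ = wait(pending, return_when=FIRST_COMPLETED)
                for fut in done:
                    finished.append((pending.pop(fut), fut.result()))
            for child in produce_offspring(finished, len(finished)):
                pending[pool.submit(evaluate, child)] = child
    return population

Setting asynchrony to 1 recovers a fully asynchronous scheme and setting it to the population size recovers the synchronous, generation-based scheme.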


soft computing | 2012

Evolving conditional branch programs in Tierra-based Asynchronous Genetic Programming

Tomohiro Harada; Yoshihiro Ichikawa; Keiki Takadama

This paper explores methods that can evolve conditional branch programs in Tierra-based Asynchronous Genetic Programming (TAGP) to improve its evolutionary ability for complex programs. For this purpose, we propose three methods: label addressing, an elite preserving strategy with program size restriction, and gradient fitness calculation. An intensive experiment on evolving calculation programs reveals the following implications: (1) label addressing can simply construct conditional branches; (2) the elite preserving strategy contributes to maintaining correct programs, and the program size restriction prevents ineffective instructions; (3) the gradient fitness calculation can correctly evaluate programs with multiple outputs; and (4) with the above three methods, however, it remains difficult to generate the shortest programs, such as those that share instructions among different calculations.
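
Gradient fitness for multiple-output programs can be read as giving partial credit per output instead of an all-or-nothing score; the sketch below is one possible interpretation of that idea, not the authors' definition, and the function name is an assumption.

def graded_fitness(outputs, targets):
    """Higher is better; each output contributes partial credit in [0, 1]."""
    credit = 0.0
    for out, tgt in zip(outputs, targets):
        credit += 1.0 / (1.0 + abs(out - tgt))    # 1.0 when exact, decaying with the error
    return credit / len(targets)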

Collaboration


Dive into Tomohiro Harada's collaborations.

Top Co-Authors

Keiki Takadama
University of Electro-Communications

Hiroyuki Sato
University of Electro-Communications

Suguru Ito
Ritsumeikan University

Kazuki Mori
Ritsumeikan University