Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Fei Qian is active.

Publication


Featured research published by Fei Qian.


IEEE International Conference on Evolutionary Computation | 1996

A parallel learning cellular automata for combinatorial optimization problems

Fei Qian; Hironori Hirata

Reinforcement learning is a class of learning methodologies in which the controller (or agent) adapts based on external feedback from a random environment. We present a theoretical model of stochastic learning cellular automata (SLCA) as a model of reinforcement learning systems. The SLCA is an extended model of traditional cellular automata, defined as a stochastic cellular automaton together with its random environment. There are three rule spaces for the SLCA: parallel, sequential, and mixed. We especially study the parallel SLCA with a genetic operator and apply it to combinatorial optimization problems. Computer simulations of graph partitioning problems show that the convergence of the SLCA is better than that of the parallel mean field algorithm.
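As a rough illustration of the parallel rule space described above, the sketch below treats each graph vertex as a cell holding action probabilities over two partitions, samples all cells in parallel, and applies a linear reward-inaction update whenever the sampled labeling improves a cut-plus-balance cost. The cost function, learning rate, and reward rule are illustrative assumptions, not the paper's exact scheme, and the genetic operator is omitted.

```python
import random

def slca_partition(edges, n, steps=2000, a=0.05, seed=0):
    rng = random.Random(seed)
    p = [0.5] * n                        # P(cell i chooses partition 1)

    def cost(label):
        cut = sum(1 for u, v in edges if label[u] != label[v])
        balance = abs(2 * sum(label) - n)    # penalty for unbalanced sides
        return cut + 2 * balance

    best = None
    for _ in range(steps):
        # parallel rule space: every cell samples its action simultaneously
        label = [1 if rng.random() < p[i] else 0 for i in range(n)]
        c = cost(label)
        if best is None or c <= best:
            best = c
            # reward: linear reward-inaction step toward the chosen actions
            for i in range(n):
                p[i] += a * (1 - p[i]) if label[i] else -a * p[i]
        # on penalty (worse cost) the probabilities are left unchanged
    return best
```

On a small graph (a 4-cycle with one chord) the automaton settles on a balanced low-cut labeling within a few hundred parallel steps.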


IEEE/WIC/ACM International Conference on Intelligent Agent Technology | 2003

Network load balancing algorithm using ants computing

Hiroyuki Une; Fei Qian

It is important to reduce the mean transfer time of a network, so a routing algorithm designed to achieve load balancing is required. In such algorithms, every node must be able to gather network traffic information and update its routing table to reflect the traffic information provided by other nodes. In this paper, we describe a routing algorithm for load balancing that deploys ant computing and reinforcement learning. We show that our algorithm achieves load balancing across all nodes of the network and works better than other algorithms.
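The abstract does not give the update rules, so the following AntNet-style sketch shows one common way ant computing is combined with reinforcement of routing tables: forward ants walk from source to destination by sampling per-node next-hop probabilities, and a backward pass reinforces the hops of shorter trips. The graph representation, reward schedule, and parameters are assumptions for illustration, not the paper's algorithm.

```python
import random

def ant_routing(graph, src, dst, ants=500, a=0.3, seed=1):
    rng = random.Random(seed)
    # routing table: table[node][next_hop] = selection probability
    table = {u: {v: 1.0 / len(nbrs) for v in nbrs} for u, nbrs in graph.items()}

    for _ in range(ants):
        node, path = src, [src]
        # forward ant: walk by sampling next hops until dst (or give up)
        while node != dst and len(path) < 4 * len(graph):
            nbrs = list(table[node])
            node = rng.choices(nbrs, [table[node][v] for v in nbrs])[0]
            path.append(node)
        if node != dst:
            continue
        # backward ant: reinforce the next hop taken at each visited node,
        # giving larger rewards when fewer hops remained to the destination
        for i, u in enumerate(path[:-1]):
            r = a / (len(path) - 1 - i)
            nxt = path[i + 1]
            for v in table[u]:
                table[u][v] *= 1 - r
            table[u][nxt] += r        # probabilities still sum to one
    return table
```

On a small graph with a two-hop and a three-hop route between source and destination, the source's table shifts probability toward the first hop of the shorter route.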


IEEE/WIC/ACM International Conference on Intelligent Agent Technology | 2003

Q-learning automaton

Fei Qian; Hironori Hirata

Reinforcement learning is the problem faced by a controller that must learn behavior through trial-and-error interactions with a dynamic environment. The controller's goal is to maximize reward over time by producing an effective mapping of states to actions, called a policy. To model such systems, we present a generalized learning automaton approach with Q-learning behaviors. Computational experiments on pursuit problems show that, compared to Q-learning, the proposed reinforcement scheme obtains better results in terms of convergence speed and memory size.
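One plausible reading of a learning automaton with "Q-learning behaviors" is an automaton whose action probabilities are induced from Q-value estimates; the sketch below implements that reading for a single-state (bandit-like) random environment. The softmax temperature `beta`, step size `alpha`, and the environment interface are illustrative assumptions, not the paper's scheme.

```python
import math
import random

def q_automaton(reward_fn, n_actions, steps=1000, alpha=0.1, beta=2.0, seed=0):
    rng = random.Random(seed)
    q = [0.0] * n_actions
    for _ in range(steps):
        # action probabilities induced by the current Q estimates (softmax)
        exps = [math.exp(beta * x) for x in q]
        s = sum(exps)
        a = rng.choices(range(n_actions), [e / s for e in exps])[0]
        r = reward_fn(a)              # response of the random environment
        q[a] += alpha * (r - q[a])    # single-state Q-learning update
    return q
```

With a four-action environment whose rewards are fixed per action, the estimates converge to those rewards and the automaton concentrates its probability on the best action.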


Systems, Man and Cybernetics | 2000

A parallel reinforcement computing model for function optimization problems

Fei Qian; Shigeya Ikebou; Takashi Kusunoki; Jijun Wu; Hironori Hirata

The learning automaton is a learning model with outstanding learning ability, autonomy, and guaranteed convergence in the learning process. We propose a parallel computing model with learning automata for function optimization problems and implement it as a sparse distributed parallel computing system. The traditional reinforcement method using learning automata has two problems: as the number of outputs grows, adjusting the learning parameters becomes more difficult and the convergence time increases. To overcome these problems, we introduce a genetic algorithm (GA) that constructs a search space of reduced dimension in which to search for the optimal output, providing an efficient way to search this smaller space for the optimal solution. The results of computer simulations verify the usefulness of the proposed method for multivariable function optimization problems.
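The idea of letting a GA construct a smaller space for the automaton to search can be sketched roughly as follows: a small population serves as the reduced candidate set, a learning automaton learns a probability distribution over its slots, and GA-style mutation refines the selected candidate. The population size, mutation scale, and reward rule here are assumptions for illustration, not the paper's design.

```python
import random

def la_ga_optimize(f, dim, pop_size=8, gens=300, a=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    p = [1.0 / pop_size] * pop_size      # automaton over population slots
    best_x, best_f = None, float("inf")
    for x in pop:                        # evaluate the initial candidates
        if f(x) < best_f:
            best_x, best_f = x, f(x)

    for _ in range(gens):
        i = rng.choices(range(pop_size), p)[0]          # automaton picks a slot
        x = [xi + rng.gauss(0, 0.3) for xi in pop[i]]   # GA-style mutation
        fx = f(x)
        if fx < best_f:                  # environment rewards an improvement
            best_x, best_f = x, fx
            pop[i] = x                   # offspring replaces its parent
            p = [(1 - a) * pj for pj in p]
            p[i] += a                    # reward-inaction probability shift
    return best_x, best_f
```

Run on a two-variable sphere function, the automaton quickly concentrates on the most productive slot while mutation drives the value down.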


Transactions of the Institute of Systems, Control and Information Engineers | 1991

A Collective Model of Learning Automata with N-Cooperative Random Environments

Fei Qian; Hironori Hirata

In optimal control with multiple objectives, the control system consists of multiple controllers, each trying to realize one of the objectives. When changes in the environment cannot be predicted in advance under multiple constraints, controllers coexisting competitively or cooperatively often form a kind of collective behavior, a phenomenon commonly seen in social and economic systems. From the standpoint of game theory, such a system reduces to a cooperative or non-cooperative game. In this work, we take up the optimization problem of a system operating under such multiple objectives, study a collective model of learning automata operating under multiple stationary random environments (teachers), give the definition of the model and its evaluation criteria, and show how to construct a reinforcement scheme by which each automaton adapts to its random environment. Although various types of collective models are conceivable, this paper focuses on the model of a non-cooperative game under cooperative environments; after discussing the static and dynamic characteristics of this model, we propose reinforcement schemes for the two types of learning automata, the P-model and the S-model. We also give a proof of convergence of the reinforcement schemes and verify it with simple simulation examples.
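A minimal sketch of the P-model case, assuming two automata whose random environments return a reward with a probability determined by the joint action (a two-player game against cooperative environments); the payoff table, learning rate, and linear reward-inaction scheme are illustrative choices, not the paper's exact reinforcement method.

```python
import random

def collective_game(payoff, steps=3000, a=0.05, seed=0):
    rng = random.Random(seed)
    p = [[0.5, 0.5], [0.5, 0.5]]     # action probabilities per automaton
    for _ in range(steps):
        acts = [0 if rng.random() < pk[0] else 1 for pk in p]
        for k in range(2):
            # the environment rewards automaton k with a probability set
            # by the joint action (P-model response)
            if rng.random() < payoff[acts[0]][acts[1]][k]:
                chosen = acts[k]
                for j in range(2):
                    if j == chosen:
                        p[k][j] += a * (1 - p[k][j])
                    else:
                        p[k][j] -= a * p[k][j]
            # on penalty the probabilities stay unchanged (reward-inaction)
    return p
```

With identical payoffs that strongly favor the joint action (0, 0), both automata converge toward that equilibrium.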


IEEJ Transactions on Electronics, Information and Systems | 1998

Learning Automata Modeling of Distributed Reinforcement Learning Systems with Collective Behavior

Fei Qian; Hironori Hirata


IEEJ Transactions on Electronics, Information and Systems | 2001

Learning Cellular Automata for Function Optimization Problems

Fei Qian; Yue Zhao; Hironori Hirata


Proceedings of the ISCIE International Symposium on Stochastic Systems Theory and its Applications | 1995

Stochastic Learning Cellular Automata

Fei Qian; Hironori Hirata


IEEJ Transactions on Electronics, Information and Systems | 2001

A Parallel Distributed Learning Automaton Computing Model for Function Optimization Problems

Shigeya Ikebou; Fei Qian; Hironori Hirata


Proceedings of the ISCIE International Symposium on Stochastic Systems Theory and its Applications | 1996

A Genetic Operator for the Two Dimensional Stochastic Learning Cellular Automata

Fei Qian; Hironori Hirata

Collaboration


Dive into Fei Qian's collaborations.

Top Co-Authors

Jijun Wu, Hiroshima Kokusai Gakuin University

Yue Zhao, Hiroshima Kokusai Gakuin University

Hiroyuki Une, Hiroshima Kokusai Gakuin University

Shigeya Ikebou, Hiroshima Kokusai Gakuin University

Takashi Yokoyama, Hiroshima Kokusai Gakuin University