Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Caihong Li is active.

Publication


Featured research published by Caihong Li.


IEEE International Conference on Information Acquisition | 2006

Application of Artificial Neural Network Based on Q-learning for Mobile Robot Path Planning

Caihong Li; Jingyuan Zhang; Yibin Li

Path planning is a difficult part of the navigation task for a mobile robot in a dynamic and unknown environment. It requires a mapping relationship between the sensing space and the action space; this relationship can be obtained in different ways but is difficult to express as an exact equation. This paper uses a multi-layer feedforward artificial neural network (ANN), exploiting its powerful nonlinear function approximation, to construct a path-planning controller. The path-planning task is thereby simplified to a classification problem over five state-action mappings. A reinforcement learning method, Q-learning, is used to collect training samples for the ANN controller. Finally, the trained controller runs in the simulation environment and further retrains itself using the reinforcement signal obtained while interacting with the environment. The strategy based on the combination of ANN and Q-learning outperforms either method used alone, and the simulation results show that it finds a path closer to optimal than Q-learning only.
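As a rough illustration of the two-stage scheme described above, the sketch below runs tabular Q-learning on a toy grid world and then dumps the greedy (state, action) pairs as training samples for a five-way ANN classifier. All environment details (grid size, rewards, action set) are assumptions for the example, and the ANN training itself is only indicated in the comments.

```python
# Minimal sketch (not the authors' code): tabular Q-learning on a toy grid
# world, whose greedy policy is then dumped as (state, action) training
# samples for a feedforward ANN classifier, as the paper describes.
import random

ACTIONS = ["forward", "left", "right", "hard_left", "hard_right"]  # five actions (assumed)
GRID = 8                                             # toy 8 x 8 workspace
GOAL = (7, 7)
OBSTACLES = {(3, 3), (3, 4), (5, 2)}

def step(state, a):
    """Apply an action; the simple motion model here is an assumption."""
    x, y = state
    dx, dy = {"forward": (1, 0), "left": (0, 1), "right": (0, -1),
              "hard_left": (-1, 1), "hard_right": (-1, -1)}[a]
    nx, ny = max(0, min(GRID - 1, x + dx)), max(0, min(GRID - 1, y + dy))
    if (nx, ny) in OBSTACLES:
        return state, -5.0                           # collision penalty
    if (nx, ny) == GOAL:
        return (nx, ny), 10.0                        # goal reward
    return (nx, ny), -0.1                            # small step cost

Q = {((x, y), a): 0.0 for x in range(GRID) for y in range(GRID) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(2000):
    s = (0, 0)
    for _ in range(100):
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda b: Q[(s, b)])
        s2, r = step(s, a)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2
        if s == GOAL:
            break

# The greedy policy becomes the (state, action) sample set that would be used
# to train the five-way ANN classifier described in the abstract.
samples = [((x, y), max(ACTIONS, key=lambda a: Q[((x, y), a)]))
           for x in range(GRID) for y in range(GRID) if (x, y) not in OBSTACLES]
print(samples[:5])
```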


Bio-Inspired Computing: Theories and Applications | 2010

Multi-steps prediction of chaotic time series based on echo state network

Yong Song; Yibin Li; Qun Wang; Caihong Li

To address the ill-posed problem in the learning process of the echo state network (ESN), a new learning algorithm for the ESN is proposed based on a regularization method. The regularization term provides a stable solution to the function approximation, with a tradeoff between accuracy and smoothness of the solution. Redundant weights of the neural network are therefore damped and converge towards zero, so the network structure becomes more compact at a given accuracy and the network generalizes well. The simulation results on the Lorenz and Chen maps show that the proposed algorithm achieves higher accuracy in multi-step prediction than a prediction model based on an RBF network.
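The regularized readout referred to above is commonly implemented as ridge (Tikhonov) regression on the reservoir states; the sketch below assumes that interpretation and uses a chaotic logistic sequence as a stand-in target, so the reservoir size, spectral radius and regularization strength are illustrative choices, not the paper's settings.

```python
# Minimal sketch (assumptions, not the paper's code): an echo state network
# whose linear readout is trained with Tikhonov (ridge) regularization, the
# regularization term damping redundant output weights as described above.
import numpy as np

rng = np.random.default_rng(0)
N, beta = 200, 1e-4                        # reservoir size and regularization strength (assumed)

# One-dimensional chaotic stand-in target: a logistic-map sequence.
u = np.empty(3000)
u[0] = 0.4
for t in range(2999):
    u[t + 1] = 3.99 * u[t] * (1.0 - u[t])

W_in = rng.uniform(-0.5, 0.5, N)
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

# Drive the reservoir and collect its states.
X = np.zeros((N, len(u) - 1))
x = np.zeros(N)
for t in range(len(u) - 1):
    x = np.tanh(W @ x + W_in * u[t])
    X[:, t] = x
Y = u[1:]                                  # one-step-ahead targets

# Ridge-regularized readout: W_out = Y X^T (X X^T + beta I)^(-1)
W_out = Y @ X.T @ np.linalg.inv(X @ X.T + beta * np.eye(N))
pred = W_out @ X
print("train RMSE:", np.sqrt(np.mean((pred - Y) ** 2)))
```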


IEEE International Conference on Information Acquisition | 2006

Q-Learning Based Method of Adaptive Path Planning for Mobile Robot

Yibin Li; Caihong Li; Zijian Zhang

Reinforcement learning (RL) is a learning technique based on trial and error, and Q-learning is one of its methods. It has been applied widely to adaptive path planning for autonomous mobile robots. To decrease the learning space and increase the learning convergence speed, this paper adopts a layered Q-learning method that divides the task of searching for the optimal path into three basic behaviors (or subtasks): static obstacle avoidance, dynamic obstacle avoidance and goal approaching. In particular, when learning the static obstacle-avoidance behavior, a novel priority Q search method (PQA) is used to avoid the blind search of the random search algorithm (RA) usually used to select actions in Q-learning. PQA uses the sum of weighted vectors pointing away from obstacles to predict the magnitude of the reinforcement reward that a candidate state-action pair would receive after the action is executed, and the robot controller selects an action based on this result at the next execution time. Finally, PQA and RA are both simulated in two different environments. The learning results show that PQA needs fewer learning steps than RA to achieve the task in the same environment, and over the whole learning period PQA has a higher task-completion rate. PQA is an effective way to solve the path-planning problem in a dynamic and unknown environment.
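A minimal sketch of the priority-search idea as described: candidate actions are ranked by their alignment with the sum of weighted vectors pointing away from obstacles, rather than being chosen at random. The action set, the 1/d^2 weighting and all coordinates are assumptions for the example.

```python
# Minimal sketch of the PQA heuristic: rank actions by how well they follow
# the summed repulsion vector away from obstacles (all numbers illustrative).
import math

ACTIONS = {"forward": (1, 0), "left": (0, 1), "right": (0, -1),
           "back_left": (-1, 1), "back_right": (-1, -1)}

def repulsion(robot, obstacles):
    """Sum of unit vectors pointing away from each obstacle, weighted by 1/d^2."""
    vx = vy = 0.0
    for ox, oy in obstacles:
        dx, dy = robot[0] - ox, robot[1] - oy
        d = math.hypot(dx, dy) or 1e-6
        w = 1.0 / d ** 2
        vx += w * dx / d
        vy += w * dy / d
    return vx, vy

def ranked_actions(robot, obstacles):
    """Order candidate actions by alignment (dot product) with the repulsion
    vector, i.e. by the predicted reward of the resulting state-action."""
    rx, ry = repulsion(robot, obstacles)
    return sorted(ACTIONS,
                  key=lambda a: ACTIONS[a][0] * rx + ACTIONS[a][1] * ry,
                  reverse=True)

print(ranked_actions((2.0, 2.0), [(3.0, 2.0), (2.0, 4.0)]))
```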


International Journal of Advanced Robotic Systems | 2013

An Improved Chaotic Motion Path Planner for Autonomous Mobile Robots based on a Logistic Map

Caihong Li; Fengying Wang; Lei Zhao; Yibin Li; Yong Song

This paper presents a chaotic motion path planner based on a Logistic map (SCLCP) that allows an autonomous mobile robot to cover an unknown terrain randomly, that is, entirely, unpredictably and evenly. The planner is improved by arcsine and arccosine transformations. A motion path planner based only on the Logistic chaotic map (LCP) shows chaotic behaviour and possesses the chaotic characteristics of topological transitivity and unpredictability, but lacks evenness; the arcsine and arccosine transformations are therefore used to enhance the randomness of LCP. The randomness of the original planner LCP, the improved planner SCLCP and the commonly used Random Path Planner (RP) is discussed and compared under different sets of initial conditions and different numbers of iterations. Simulation results confirm that SCLCP achieves a better evenness index than previous works.
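The arcsine improvement can be sketched as below: a Logistic-map sequence is passed through the transform y = (2/pi) * arcsin(sqrt(x)), which flattens its distribution, before being scaled into workspace coordinates. The transform, the workspace size and the seeds are assumptions for the sketch; the paper's exact SCLCP formulation may differ.

```python
# Illustrative sketch, not the paper's planner: Logistic-map waypoints with an
# arcsine transform applied to improve the evenness of the coverage.
import math

def logistic_waypoints(x0, y0, n, size=10.0, r=4.0):
    """Yield n (x, y) waypoints inside a size-by-size square workspace."""
    cx, cy = x0, y0
    for _ in range(n):
        cx = r * cx * (1.0 - cx)                        # Logistic map iteration
        cy = r * cy * (1.0 - cy)
        ux = 2.0 / math.pi * math.asin(math.sqrt(cx))   # arcsine transform -> roughly uniform
        uy = 2.0 / math.pi * math.asin(math.sqrt(cy))
        yield ux * size, uy * size

for p in logistic_waypoints(0.1234, 0.8765, 5):
    print("%.3f %.3f" % p)
```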


International Journal of Advanced Robotic Systems | 2016

A Bounded Strategy of the Mobile Robot Coverage Path Planning Based on Lorenz Chaotic System

Caihong Li; Yong Song; Fengying Wang; Zhiqiang Wang; Yibin Li

To meet the requirements of complete coverage path planning by a mobile robot for certain special missions, this paper introduces a bounded strategy based on the integration of the Lorenz dynamic system and the robot kinematics equation. The chaotic variables of the Lorenz system are confined to a limited range, but when they are used to produce the robot's coordinate position, the trajectory range is determined by the starting point, the number of iterations and the iteration step. The proposed bounded strategy constrains all robot positions to the workspace by a mirror-mapping method that reflects overflowing waypoints back into it. Moreover, the statistical characteristics of the Lorenz chaotic variables and of the robot trajectory are discussed in order to choose the best mapping variable. The simulation results show that the bounded strategy using the chosen variable achieves a higher coverage rate than other similar works.
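A minimal sketch of the bounded strategy under stated assumptions: the Lorenz variables are advanced with a simple Euler step, two of them are scaled into positions, and any coordinate that leaves the square workspace is reflected back inside by a mirror mapping. The step size, the scaling and the choice of variables are illustrative, not the paper's.

```python
# Illustrative sketch: Lorenz-driven waypoints kept inside a bounded workspace
# by reflecting ("mirror mapping") overflowing coordinates back inside.
def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def mirror(p, size):
    """Reflect a coordinate back into [0, size] (repeat until inside)."""
    while p < 0.0 or p > size:
        p = -p if p < 0.0 else 2.0 * size - p
    return p

size = 20.0
x, y, z = 1.0, 1.0, 1.0
path = []
for _ in range(1000):
    x, y, z = lorenz_step(x, y, z)
    # Scale the chosen chaotic variables into workspace coordinates, then bound them.
    px, py = mirror(10.0 + 0.4 * x, size), mirror(10.0 + 0.4 * y, size)
    path.append((px, py))
print(path[:3])
```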


World Congress on Intelligent Control and Automation | 2008

Research of the obstacle avoidance based on RBFNN for the mobile robot under dynamic environment

Caihong Li; Yibin Li; Fengying Wang

A new obstacle avoidance algorithm for the mobile robot is introduced. When a dynamic obstacle follows a nonlinear random movement, a radial basis function neural network (RBFNN) is used to build a prediction model, and the next location of the obstacle is predicted from three adjacent values of the time sequence. The dynamic obstacle-avoidance issue is thus converted into an instantaneous static one, enabling real-time planning. The prediction performance of the RBFNN model is compared with the commonly used back-propagation neural network (BPNN) forecast model. The results show that the RBFNN model has higher forecast accuracy and a faster learning rate. Combined with the designed N/M data division, the model is well suited to nonlinear time-series prediction.
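A sketch of the one-step-ahead predictor under assumed details: a Gaussian RBF network maps three adjacent samples of the obstacle track to the next sample, with the output weights fitted by least squares. The synthetic track, the number of centers and the kernel width are placeholders, not the paper's settings.

```python
# Illustrative sketch: Gaussian RBF network predicting the obstacle's next
# position from three adjacent samples of its trajectory.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(300) * 0.05
series = np.sin(t) + 0.3 * np.sin(3.1 * t)           # stand-in nonlinear obstacle track

# Build (three adjacent values) -> (next value) training pairs.
X = np.stack([series[i:i + 3] for i in range(len(series) - 3)])
y = series[3:]

centers = X[rng.choice(len(X), 20, replace=False)]   # 20 RBF centers (assumed)
width = 0.5

def design(X):
    """Gaussian RBF activations of every sample against every center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

W, *_ = np.linalg.lstsq(design(X), y, rcond=None)    # linear output weights
pred = design(X) @ W
print("RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```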


World Congress on Intelligent Control and Automation | 2006

Study on Adaptive Path Planning for Mobile Robot Based on Q Learning

Caihong Li; Yibin Li; Zijian Zhang; Rui Song

Q-learning is a popular reinforcement learning algorithm. To decrease the learning space and increase the learning convergence speed, a layered Q-learning method was adopted to divide the task of searching for the optimal path into three basic behaviors: static obstacle avoidance, dynamic obstacle avoidance and goal finding. In particular, when learning the static obstacle-avoidance behavior, a new priority Q search method (PQA) was used to avoid the blind search of the random search algorithm (RA). PQA used the sum of weighted vectors pointing away from obstacles to predict the reinforcement reward that a candidate state-action pair would receive after acting, and the robot controller selected an action based on this result at the next execution time. Finally, PQA and RA were both simulated in two different environments. The learning results show that PQA needs fewer learning steps and achieves a higher task-completion rate than RA. PQA is an effective way to solve the path-planning problem in a dynamic environment.
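The layered decomposition can be pictured as a simple arbiter that decides, from the current sensor readings, which of the three learned behaviors acts at each step; the sketch below uses stub policies and assumed distance thresholds purely to show the arbitration loop, not the paper's learned controllers.

```python
# Illustrative sketch of layered behavior arbitration among the three
# sub-behaviors named above (thresholds and sensor fields are assumptions).
def choose_behavior(nearest_static, nearest_dynamic, danger=1.5):
    """Return which sub-behavior should control the robot this step."""
    if nearest_dynamic is not None and nearest_dynamic < danger:
        return "dynamic_obstacle_avoidance"
    if nearest_static is not None and nearest_static < danger:
        return "static_obstacle_avoidance"
    return "goal_finding"

# Each behavior would own its own Q-table learned as in the paper; here the
# policies are stubs so the arbitration loop can run standalone.
policies = {"dynamic_obstacle_avoidance": lambda s: "turn_away_from_moving_obstacle",
            "static_obstacle_avoidance": lambda s: "turn_away_from_wall",
            "goal_finding": lambda s: "head_to_goal"}

state = {"nearest_static": 2.4, "nearest_dynamic": 0.9}
b = choose_behavior(state["nearest_static"], state["nearest_dynamic"])
print(b, "->", policies[b](state))
```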


Mathematical Problems in Engineering | 2015

Chaotic Path Planner of Autonomous Mobile Robots Based on the Standard Map for Surveillance Missions

Caihong Li; Yong Song; Fengying Wang; Zhenying Liang; Baoyan Zhu

This paper proposes a fusion iteration strategy based on the Standard map to generate a chaotic path planner for a mobile robot on surveillance missions. The distances between adjacent iteration points of the chaotic trajectories produced by the Standard map are too large for the robot to track, so a fusion iteration strategy combining large-region iterations with small-grid-region iterations is designed to resolve the problem. The small-region iterations perform the Standard-map iterations within each of the divided small grids; dividing the whole surveillance workspace into small grids reduces the adjacent distances. The large-region iterations combine all the small-grid iterations into a whole, switch automatically among the small grids, and maintain the chaotic characteristics of the robot's motion to guarantee the surveillance mission. Compared with simply applying the Standard map over the whole workspace, the proposed strategy decreases the adjacent distances according to the chosen grid size and is convenient for the robot to track.
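A minimal sketch of the fusion-iteration idea under stated assumptions: the Standard map is iterated inside each small grid cell, with its 2*pi by 2*pi phase space scaled down to the cell, while an outer loop switches between cells. The grid size, the kick strength K and the fixed switching order are illustrative choices, not the paper's.

```python
# Illustrative sketch: Standard-map iterations confined to small grid cells,
# with an outer loop switching between cells to cover the whole workspace.
import math

K, TWO_PI = 1.2, 2.0 * math.pi

def standard_map(theta, p):
    p = (p + K * math.sin(theta)) % TWO_PI
    theta = (theta + p) % TWO_PI
    return theta, p

def cell_waypoints(cell_x, cell_y, cell_size, theta, p, n):
    """Iterate the Standard map n times inside one grid cell."""
    pts = []
    for _ in range(n):
        theta, p = standard_map(theta, p)
        pts.append((cell_x + cell_size * theta / TWO_PI,
                    cell_y + cell_size * p / TWO_PI))
    return pts, theta, p

workspace, grid = 20.0, 4                      # 20 m square split into 4 x 4 cells (assumed)
cell = workspace / grid
theta, p = 0.5, 0.5
path = []
for gy in range(grid):                         # large-region loop: visit every small grid
    for gx in range(grid):
        pts, theta, p = cell_waypoints(gx * cell, gy * cell, cell, theta, p, 50)
        path.extend(pts)
print(len(path), path[0])
```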


Journal of Robotics | 2015

Mathematical modeling and analysis of multirobot cooperative hunting behaviors

Yong Song; Yibin Li; Caihong Li; Xin Ma

This paper presents a mathematical model of multirobot cooperative hunting behavior. Multiple robots search for and surround prey. When a robot detects a prey, it forms a following team; when another searching robot detects the same prey, the robots form a new, larger following team. Once four robots have detected the same prey, the prey is removed from the simulation and the robots return to searching for other prey. If a following team fails to be joined by another robot within a certain time limit, the team is disbanded and the robots return to the searching state. The model is formulated as a set of rate equations, in which the evolution of the collective hunting behavior is represented by transitions between the different robot states; the complex collective behavior emerges through local interaction. The paper presents numerical solutions to normalized versions of the model equations and provides both a steady-state and a collaboration-ratio analysis. Mathematical modeling shows that the value of the delay time, as well as the relative numbers of searching robots and prey, is a strong factor in the performance of the system.
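To show how rate equations of this kind are solved numerically, the sketch below integrates a toy system with plain Euler steps. The equations are a stand-in consistent with the behaviors described (searching robots, partial teams of one to three followers, free prey), not the paper's actual model, and every rate constant is an assumption.

```python
# Toy rate-equation system (NOT the paper's model) integrated with Euler steps:
# s = searching robots, t1..t3 = teams of 1..3 followers, p = free prey.
def simulate(s=40.0, p=10.0, a=0.002, b=0.002, g=0.05, dt=0.01, steps=200000):
    t1 = t2 = t3 = 0.0
    for _ in range(steps):
        dS = -a*s*p - b*s*(t1 + t2) + 3*b*s*t3 + g*(t1 + 2*t2 + 3*t3)
        dP = -a*s*p + g*(t1 + t2 + t3)          # disbanded teams release their prey
        dT1 = a*s*p - b*s*t1 - g*t1             # a searcher starts following a free prey
        dT2 = b*s*t1 - b*s*t2 - g*t2            # a second robot joins the team
        dT3 = b*s*t2 - b*s*t3 - g*t3            # a third joins; a fourth captures the prey
        s += dt*dS; p += dt*dP; t1 += dt*dT1; t2 += dt*dT2; t3 += dt*dT3
    return round(s, 2), round(t1, 3), round(t2, 3), round(t3, 3), round(p, 2)

print(simulate())
```

The total number of robots (s + t1 + 2*t2 + 3*t3) is conserved in this toy system, and prey only leaves the system through the capture term, which mirrors the qualitative behavior in the abstract.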


International Journal of Computational Intelligence Systems | 2018

An Integrated Algorithm of CCPP Task for Autonomous Mobile Robot under Special Missions

Caihong Li; Zhiqiang Wang; Chun Fang; Zhenying Liang; Yong Song; Yibin Li

To address the difficulty of avoiding obstacles while achieving complete coverage path planning (CCPP) for special missions, this paper introduces a novel integrated CCPP algorithm for an autonomous mobile robot in an environment containing obstacles. The algorithm combines a cellular decomposition approach with the Standard map. The cellular decomposition approach divides the given workspace into smaller sub-regions to be covered by a chaotic path planner; the planner is constructed from the chaotic Standard map at full mapping and produces the required trajectories inside each decomposed sub-region. The simulation results verify the effectiveness of the designed method.
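A minimal sketch of the integrated idea under assumed details: the workspace is split into grid cells, cells overlapping obstacles are discarded, and the remaining free cells are visited in order while a Standard-map planner generates the chaotic coverage waypoints inside each one. The cell size, the point-obstacle representation and the boustrophedon visiting order are illustrative choices, not the paper's decomposition.

```python
# Illustrative sketch: obstacle-aware cell decomposition plus per-cell
# Standard-map chaotic coverage.
import math

K, TWO_PI = 1.2, 2.0 * math.pi

def standard_map(theta, p):
    p = (p + K * math.sin(theta)) % TWO_PI
    return (theta + p) % TWO_PI, p

def decompose(workspace, cell, obstacles):
    """Return the free cells (lower-left corners) in boustrophedon order."""
    n = int(workspace / cell)
    cells = []
    for gy in range(n):
        row = range(n) if gy % 2 == 0 else reversed(range(n))
        for gx in row:
            blocked = any(gx * cell <= ox < (gx + 1) * cell and
                          gy * cell <= oy < (gy + 1) * cell for ox, oy in obstacles)
            if not blocked:
                cells.append((gx * cell, gy * cell))
    return cells

workspace, cell = 20.0, 5.0
obstacles = [(7.0, 7.0), (12.5, 3.0)]          # point obstacles for illustration
theta, p, path = 0.5, 0.5, []
for cx, cy in decompose(workspace, cell, obstacles):
    for _ in range(40):                        # chaotic coverage inside each free cell
        theta, p = standard_map(theta, p)
        path.append((cx + cell * theta / TWO_PI, cy + cell * p / TWO_PI))
print(len(path), path[:2])
```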

Collaboration


Dive into Caihong Li's collaborations.

Top Co-Authors

Yibin Li, Shandong University of Technology
Fengying Wang, Shandong University of Technology
Zhiqiang Wang, Shandong University of Technology
Baoyan Zhu, Shandong University of Technology
Zijian Zhang, Shandong University of Technology
Guangyuan Zhao, Shandong University of Technology