J. Frankl. Inst. | 2021

Optimized control for human-multi-robot collaborative manipulation via multi-player Q-learning

Abstract


In this paper, optimized interaction control is investigated for human-multi-robot collaborative manipulation, a class of problems that cannot be described by the traditional impedance controller. To achieve globally optimized interaction performance, multi-player non-zero-sum game theory is employed to obtain the optimized interaction control of each robot agent, with the Nash equilibrium adopted as the game strategy. In human-multi-robot collaboration, the dynamics parameters of the human arm and the manipulated object are usually unknown; to obviate dependence on these parameters, a multi-player Q-learning method is employed. Moreover, the optimized solution is difficult to obtain because of the presence of the desired reference position, so a multi-player Nash Q-learning algorithm that accounts for the desired reference position is proposed. The validity of the proposed method is verified through simulation studies.
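The abstract describes a multi-player Nash Q-learning scheme in which each agent learns optimized interaction control without knowing the human-arm or object dynamics. The paper itself treats continuous robot dynamics; the following is only a minimal tabular sketch of the Nash Q-learning idea (each agent keeps a Q-function over joint actions and bootstraps on the Nash value of the next stage game). The toy state space, rewards, pure-strategy Nash search, and all names below are illustrative assumptions, not the paper's algorithm.

import numpy as np

# Toy illustration (NOT the paper's continuous-time formulation):
# two agents apply discrete push actions to drive a shared object
# toward a desired reference position (state 0).
N_STATES = 5      # discretized distance to the desired reference position
N_ACTIONS = 3     # per-agent actions: push less / hold / push more
GAMMA, ALPHA, EPISODES = 0.9, 0.1, 2000
rng = np.random.default_rng(0)

# Multi-player Q-learning: each agent's Q-table is defined over the JOINT action.
Q1 = np.zeros((N_STATES, N_ACTIONS, N_ACTIONS))
Q2 = np.zeros((N_STATES, N_ACTIONS, N_ACTIONS))

def pure_nash(q1, q2):
    """Find a pure-strategy Nash equilibrium of the stage game (q1, q2).
    Falls back to the joint action maximizing the summed payoff
    (a common simplification when no pure equilibrium exists)."""
    best = None
    for a1 in range(N_ACTIONS):
        for a2 in range(N_ACTIONS):
            if q1[a1, a2] >= q1[:, a2].max() and q2[a1, a2] >= q2[a1, :].max():
                best = (a1, a2)
    if best is None:
        best = np.unravel_index(np.argmax(q1 + q2), q1.shape)
    return best

def step(s, a1, a2):
    """Toy transition: cooperative pushing moves the object toward state 0;
    each agent pays a tracking cost plus a small effort cost."""
    push = (a1 - 1) + (a2 - 1)
    s_next = int(np.clip(s - push, 0, N_STATES - 1))
    r1 = -s_next - 0.1 * abs(a1 - 1)
    r2 = -s_next - 0.1 * abs(a2 - 1)
    return s_next, r1, r2

for _ in range(EPISODES):
    s = rng.integers(N_STATES)
    for _ in range(20):
        # epsilon-greedy exploration around the current Nash joint action
        if rng.random() < 0.2:
            a1, a2 = rng.integers(N_ACTIONS), rng.integers(N_ACTIONS)
        else:
            a1, a2 = pure_nash(Q1[s], Q2[s])
        s_next, r1, r2 = step(s, a1, a2)
        # bootstrap on the Nash value of the next stage game
        n1, n2 = pure_nash(Q1[s_next], Q2[s_next])
        Q1[s, a1, a2] += ALPHA * (r1 + GAMMA * Q1[s_next, n1, n2] - Q1[s, a1, a2])
        Q2[s, a1, a2] += ALPHA * (r2 + GAMMA * Q2[s_next, n1, n2] - Q2[s, a1, a2])
        s = s_next

print("Learned Nash joint action per state:",
      [pure_nash(Q1[s], Q2[s]) for s in range(N_STATES)])

In this sketch the desired reference position is encoded directly in the state (distance to state 0) and the reward, which loosely mirrors the abstract's point that the optimized solution must account for the reference; the paper handles this in the continuous optimal-control setting rather than a tabular one.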

Volume 358
Pages 5639-5658
DOI 10.1016/J.JFRANKLIN.2021.03.017
Language English
Journal J. Frankl. Inst.
