Archive | 2021

Knowledge base reduction in the expert rule-base extended FRIQ-learning method (Tudásbázis redukció a szakértői szabályrendszerrel bővített FRIQ-learning módszerben)

 
 

Abstract


The knowledge representation of reinforcement learning (RL) methods varies: in the conventional Q-learning method it is a Q-table, while in fuzzy-based RL systems it is a fuzzy rule-base. The size of the final knowledge base (the number of elements in the Q-table, or the number of rules in the fuzzy rule-base) depends on the complexity and dimensionality of the problem, so there may be cases when the number of elements in the final knowledge base is considered high. In fuzzy rule-based RL systems, rule-base reduction methods can be applied to reduce the size of the complete rule-base. In Fuzzy Rule Interpolation based Q-learning (FRIQ-learning), rule-base reduction can optionally be performed after the learning phase. In expert knowledge-included FRIQ-learning, due to the knowledge building method, there can be cases when rules get close to each other. Merging rules that are close to each other could significantly reduce the size of the final rule-base. The main goal of this paper is to introduce a rule-base reduction strategy for expert knowledge-included FRIQ-learning that is able to reduce the rule-base size during the construction (learning) phase.
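
The abstract only names the idea of merging close rules; the authors' actual reduction strategy is not detailed here. As a purely illustrative sketch, the following Python snippet (the rule representation, distance metric, averaging step, and threshold eps are all assumptions, not the paper's algorithm) merges rules whose antecedent points lie within a small distance of each other, averaging their antecedents and consequent Q-values.

# Illustrative sketch only: representation and merging criterion are assumptions,
# not the reduction strategy proposed in the paper.
import numpy as np

def merge_close_rules(antecedents, q_values, eps=0.1):
    """Greedily merge rules whose antecedent points lie within `eps`
    (Euclidean distance), averaging their antecedents and Q-values."""
    antecedents = np.asarray(antecedents, dtype=float)
    q_values = np.asarray(q_values, dtype=float)
    merged_a, merged_q = [], []
    used = np.zeros(len(antecedents), dtype=bool)
    for i in range(len(antecedents)):
        if used[i]:
            continue
        # Collect all not-yet-merged rules close to rule i (including i itself).
        dists = np.linalg.norm(antecedents - antecedents[i], axis=1)
        group = (~used) & (dists <= eps)
        used |= group
        merged_a.append(antecedents[group].mean(axis=0))
        merged_q.append(q_values[group].mean())
    return np.array(merged_a), np.array(merged_q)

if __name__ == "__main__":
    # Two nearly identical rules and one distinct rule -> reduced to two rules.
    rules = [[0.10, 0.20], [0.12, 0.21], [0.90, 0.80]]
    q = [1.0, 1.2, -0.5]
    a, qv = merge_close_rules(rules, q, eps=0.05)
    print(len(a), a, qv)

In an actual FRIQ-learning setting the consequents would be the Q-values attached to the fuzzy rules and interpolation would be handled by the FRI method; plain averaging is used here only to show how merging close rules shrinks the rule-base (three rules reduce to two in the example).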

Volume 11
Pages 70-80
DOI 10.35925/J.MULTI.2021.4.8
Language English

Full Text