Proceedings of the 2021 International Conference on Multimodal Interaction | 2021

Modelling and Predicting Trust for Developing Proactive Dialogue Strategies in Mixed-Initiative Interaction

 
 
 

Abstract


In mixed-initiative user interactions, a user and an autonomous agent collaborate to solve tasks by taking interleaving actions. However, this shift of control towards the agent requires the user to form trust in it; otherwise, the assistance may be rejected and become obsolete. One approach to fostering a trustworthy interaction is to equip the agent with proactive dialogue capabilities. However, developing adequate proactive dialogue strategies is complex and highly user- as well as context-dependent. Inappropriate use of proactive conversation may even do more harm than good and corrupt the human-computer trust relationship. To alleviate this problem, modelling and predicting a proactive system’s perceived trustworthiness during an ongoing interaction is essential. Therefore, this paper presents novel work on the development of a user model for live prediction of trust during proactive interaction, incorporating user-, system-, and context-dependent features. For predicting trust, three machine-learning algorithms – support vector machine, eXtreme Gradient Boosting (XGBoost), and gated recurrent unit network – are trained and tested on a proactive dialogue corpus. The experimental results show that, among the classifiers, the support vector machine delivered the most well-rounded performance, while the gated recurrent unit network achieved the best accuracy. The results prove the developed user model to be reliable for predicting trust in proactive dialogue. Based on these outcomes, the usability of the proposed method in real-life scenarios is discussed and implications for developing user-adaptive proactive dialogue strategies are described.
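To make the evaluated setup concrete, the following is a minimal sketch (not the authors' code) of training and testing the three named classifiers on turn-level feature vectors with discrete trust labels. The corpus dimensions, feature matrix, and labels below are placeholders standing in for the user-, system-, and context-dependent features extracted from the proactive dialogue corpus.

```python
# Hypothetical trust-prediction sketch: SVM, XGBoost, and GRU classifiers
# trained on placeholder turn-level features with discrete trust labels.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_turns, n_features, n_classes = 500, 12, 3      # placeholder corpus size
X = rng.normal(size=(n_turns, n_features)).astype(np.float32)
y = rng.integers(0, n_classes, size=n_turns)     # placeholder trust labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Support vector machine baseline
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
print("SVM acc:", accuracy_score(y_te, svm.predict(X_te)))

# eXtreme Gradient Boosting baseline
xgb = XGBClassifier(n_estimators=100, max_depth=3).fit(X_tr, y_tr)
print("XGB acc:", accuracy_score(y_te, xgb.predict(X_te)))

# Gated recurrent unit network; each turn is treated as a length-1 sequence
# here for simplicity, whereas a live system would feed the turn history
# of the ongoing dialogue.
class GRUClassifier(nn.Module):
    def __init__(self, n_in, n_hidden, n_out):
        super().__init__()
        self.gru = nn.GRU(n_in, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, n_out)

    def forward(self, x):           # x: (batch, seq_len, n_in)
        _, h = self.gru(x)          # h: (num_layers, batch, n_hidden)
        return self.head(h[-1])

model = GRUClassifier(n_features, 32, n_classes)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
xb = torch.from_numpy(X_tr).unsqueeze(1)         # (batch, 1, n_features)
yb = torch.from_numpy(y_tr).long()
for _ in range(50):                              # short illustrative training loop
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(xb), yb)
    loss.backward()
    opt.step()
with torch.no_grad():
    pred = model(torch.from_numpy(X_te).unsqueeze(1)).argmax(dim=1).numpy()
print("GRU acc:", accuracy_score(y_te, pred))
```

In a live setting, the same pipeline would be applied to the feature vector of the current turn so that the predicted trust level can steer the proactive dialogue strategy during the interaction.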

DOI 10.1145/3462244.3479906
Language English
Journal Proceedings of the 2021 International Conference on Multimodal Interaction
