Towards Personalized Explanation of Robot Path Planning via User Feedback

1 Nov 2020  ·  Kayla Boggess, Shenghui Chen, Lu Feng

Prior studies have found that explaining robot decisions and actions helps to increase system transparency, improve user understanding, and enable effective human-robot collaboration. In this paper, we present a system for generating personalized explanations of robot path planning via user feedback. We consider a robot navigating in an environment modeled as a Markov decision process (MDP), and develop an algorithm to automatically generate a personalized explanation of an optimal MDP policy, based on user preferences regarding four elements (i.e., objective, locality, specificity, and corpus). In addition, we design the system to interact with users by answering their follow-up questions about the generated explanations. Users have the option to update their preferences to view different explanations. The system can detect and resolve preference conflicts through user interaction. The results of an online user study show that the generated personalized explanations improve user satisfaction, and the majority of users liked the system's question-answering and conflict detection/resolution capabilities.
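To make the preference model concrete, below is a minimal Python sketch of how the four preference elements named in the abstract (objective, locality, specificity, corpus) and a conflict check might be represented. The value domains, field semantics, and the conflict rule are illustrative assumptions for this sketch, not the paper's implementation.

```python
# Minimal illustrative sketch (not the authors' implementation): a user-preference
# record over the four elements named in the abstract, plus a hypothetical
# conflict check between preference settings.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ExplanationPreference:
    """User preference over the four elements from the abstract.

    The value domains below are illustrative assumptions, not the paper's.
    """
    objective: str    # e.g., which planning objective to explain ("time", "safety")
    locality: str     # e.g., "global" policy summary vs. "local" single-decision explanation
    specificity: str  # e.g., "abstract" vs. "detailed" level of detail
    corpus: str       # e.g., vocabulary/phrasing used to render the explanation


def find_conflict(p: ExplanationPreference) -> Optional[str]:
    """Hypothetical conflict check: some combinations may be incompatible.

    The concrete rule below is an assumption for illustration only.
    """
    if p.locality == "global" and p.specificity == "detailed":
        return ("A fully detailed explanation of the entire policy may be too long; "
                "consider a local scope or an abstract level of detail.")
    return None


if __name__ == "__main__":
    pref = ExplanationPreference(objective="time", locality="global",
                                 specificity="detailed", corpus="plain")
    conflict = find_conflict(pref)
    if conflict:
        # In the paper's system, a detected conflict is resolved via user interaction;
        # here we simply print the message.
        print("Preference conflict detected:", conflict)
```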
