Beyond Winning and Losing: Modeling Human Motivations and Behaviors with Vector-valued Inverse Reinforcement Learning

27 Sep 2018 · Baoxiang Wang, Tongfang Sun, Xianjun Sam Zheng

In recent years, reinforcement learning methods have been applied to model gameplay with great success, achieving super-human performance in various environments such as Atari, Go, and Poker. However, these studies mostly focus on winning the game and largely ignore the rich and complex human motivations that are essential for understanding agents' diverse behavior. In this paper, we present a multi-motivation behavior model that investigates multifaceted human motivations and captures the underlying value structure of the agents. Our approach extends inverse RL to the vector-valued setting, which imposes a much weaker assumption than previous studies. The vectorized rewards incorporate Pareto optimality, a powerful tool for explaining a wide range of behavior through its optimality. For practical assessment, our algorithm is tested on the World of Warcraft Avatar History dataset, which spans three years of gameplay. Our experiments demonstrate improvements over scalarization-based methods in real-world problem settings.
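The abstract only names the key concept; as a minimal sketch of the Pareto-optimality notion that the vector-valued reward formulation relies on (an illustration, not the authors' algorithm), the dominance test over vector-valued trajectory returns might look like the following. The motivation dimensions and example numbers are hypothetical.

```python
import numpy as np

def dominates(u, v):
    """True if vector-valued return u Pareto-dominates v:
    at least as good in every objective and strictly better in at least one."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return bool(np.all(u >= v) and np.any(u > v))

def pareto_front(returns):
    """Indices of trajectories whose vector-valued returns are Pareto-optimal,
    i.e. not dominated by any other trajectory's return."""
    returns = np.asarray(returns, dtype=float)
    return [i for i, r in enumerate(returns)
            if not any(dominates(other, r)
                       for j, other in enumerate(returns) if j != i)]

# Hypothetical example: each row is one trajectory's return over three
# motivation dimensions (e.g. combat, social, exploration).
returns = [[3.0, 1.0, 2.0],
           [2.0, 1.0, 1.0],
           [1.0, 4.0, 0.5]]
print(pareto_front(returns))  # -> [0, 2]; row 1 is dominated by row 0
```

Under this view, a trajectory need not maximize any single scalarized reward to be explained as rational; it only has to be Pareto-optimal with respect to the vector of motivations, which is the weaker assumption the abstract refers to.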
