Adversarial Style Transfer for Robust Policy Optimization in Reinforcement Learning

29 Sep 2021 · Md Masudur Rahman, Yexiang Xue

This paper proposes an algorithm that improves generalization in reinforcement learning agents by removing overfitting to confounding features. Our approach consists of a max-min game-theoretic objective. A generator transfers the style of the agent's observations during reinforcement learning; its additional goal is to perturb the observations so as to maximize the probability of the agent taking a different action. In contrast, a policy network updates its parameters to minimize the effect of such perturbations, thus staying robust while maximizing the expected future reward. Based on this setup, we propose a practical deep reinforcement learning algorithm, Adversarial Robust Policy Optimization (ARPO), that finds an optimal policy which generalizes to unseen environments. We evaluate our approach on the visually enriched and diverse Procgen benchmark. Empirically, we observe that ARPO achieves better generalization and sample efficiency than several state-of-the-art algorithms.
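A minimal sketch of the max-min objective described in the abstract, assuming a PyTorch-style setup. The `policy` and `generator` modules, the KL-based action-divergence term, and the `ppo_loss` placeholder are illustrative assumptions for this sketch, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

# Assumed: policy(obs) -> action logits, generator(obs) -> style-perturbed obs.
# Both are torch.nn.Module instances defined elsewhere (hypothetical names).

def generator_loss(policy, generator, obs):
    """Generator step (max player): perturb the observation's style so that
    the policy's action distribution changes as much as possible."""
    styled_obs = generator(obs)
    with torch.no_grad():
        pi_clean = F.softmax(policy(obs), dim=-1)        # reference distribution
    log_pi_styled = F.log_softmax(policy(styled_obs), dim=-1)
    # Maximize the divergence between clean and styled action distributions,
    # i.e. minimize its negative.
    return -F.kl_div(log_pi_styled, pi_clean, reduction="batchmean")

def policy_loss(policy, generator, obs, ppo_loss):
    """Policy step (min player): keep the standard RL objective (e.g. PPO)
    while staying invariant to the generator's style perturbations."""
    styled_obs = generator(obs).detach()                 # generator is fixed here
    pi_clean = F.softmax(policy(obs), dim=-1)
    log_pi_styled = F.log_softmax(policy(styled_obs), dim=-1)
    robustness = F.kl_div(log_pi_styled, pi_clean, reduction="batchmean")
    return ppo_loss + robustness  # weighting coefficient omitted for brevity
```

In the full algorithm the two players would be optimized alternately, with the generator and the policy network each taking gradient steps on their respective objectives within the usual RL training loop.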
