A Note on Optimization Formulations of Markov Decision Processes

17 Dec 2020  ·  Lexing Ying, Yuhua Zhu ·

This note summarizes the optimization formulations used in the study of Markov decision processes. We consider both discounted and undiscounted processes, in the standard and the entropy-regularized settings. For each setting, we first summarize the primal, dual, and primal-dual problems of the linear programming formulation. We then detail the connections between these problems and other formulations for Markov decision processes, such as the Bellman equation and the policy gradient method.
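To make the linear programming formulation concrete, the sketch below solves a tiny discounted MDP (a hypothetical two-state, two-action example, not one taken from the paper) via the standard primal LP — minimize the sum of state values subject to the Bellman inequalities — and cross-checks the solution against value iteration, whose fixed point is the same optimal value function.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical discounted MDP: 2 states, 2 actions.
# P[a, s, s'] = transition probability, R[s, a] = reward.
gamma = 0.9
P = np.array([[[0.8, 0.2], [0.3, 0.7]],   # action 0
              [[0.5, 0.5], [0.9, 0.1]]])  # action 1
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
S, A = 2, 2

# Primal LP: minimize sum_s v(s)
#   subject to v(s) >= R[s, a] + gamma * P[a, s, :] @ v   for all (s, a).
# Rearranged into linprog's A_ub @ v <= b_ub form:
#   (gamma * P[a, s, :] - e_s) @ v <= -R[s, a]
A_ub, b_ub = [], []
for s in range(S):
    for a in range(A):
        row = gamma * P[a, s, :].copy()
        row[s] -= 1.0
        A_ub.append(row)
        b_ub.append(-R[s, a])

res = linprog(c=np.ones(S), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * S)
v_lp = res.x

# Cross-check: value iteration converges to the same optimal values.
v = np.zeros(S)
for _ in range(1000):
    q = R + gamma * np.einsum("ast,t->sa", P, v)  # q[s, a]
    v = q.max(axis=1)

print(np.allclose(v_lp, v, atol=1e-4))  # LP solution matches the Bellman fixed point
```

The dual of this LP optimizes over discounted state-action occupancy measures, which is the connection to policy-based methods that the note develops.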


Categories

Optimization and Control