Stochastic Optimal Control for Multivariable Dynamical Systems Using Expectation Maximization

1 Oct 2020  ·  Prakash Mallick, Zhiyong Chen ·

Trajectory optimization is a fundamental stochastic optimal control problem. This paper addresses trajectory optimization for dynamical systems subject to measurement noise that can be fitted with linear time-varying stochastic models. Exact solutions to these kinds of control problems are deemed analytically intractable in the literature because they fall under the category of Partially Observable Markov Decision Processes (POMDPs); effective solutions with reasonable approximations are therefore widely sought. We propose a reformulation of stochastic control in a reinforcement learning setting. This formulation combines the benefits of conventional optimal control procedures with the advantages of maximum likelihood approaches. Finally, an iterative trajectory optimization paradigm called Stochastic Optimal Control - Expectation Maximization (SOC-EM) is put forth. This trajectory optimization procedure exhibits better performance in terms of reducing cumulative cost-to-go, which is demonstrated both theoretically and empirically. Furthermore, we provide novel theoretical results on the uniqueness of control parameter estimates. An analysis of the control covariance matrix is presented, which handles stochasticity by efficiently balancing exploration and exploitation.
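To illustrate the control-as-inference idea the abstract describes (casting trajectory optimization as maximum-likelihood estimation solved by EM), the sketch below runs a generic EM-flavoured optimizer on a scalar linear-Gaussian toy problem. This is a minimal conceptual sketch of the E-step/M-step alternation, not the paper's SOC-EM algorithm; the system parameters, cost, and soft-min weighting are illustrative assumptions.

```python
import numpy as np

# Toy scalar linear system x_{t+1} = a*x_t + b*u_t with quadratic cost.
# E-step: weight sampled control sequences by exp(-cost), a soft notion
#         of trajectory "optimality" used in control-as-inference.
# M-step: refit the Gaussian policy (mean and covariance of the control
#         sequence) to the weighted samples by weighted maximum likelihood.
# The shrinking covariance plays the exploration/exploitation role the
# abstract attributes to the control covariance matrix.

def rollout_cost(us, a=1.0, b=1.0, x0=5.0):
    """Cumulative cost-to-go sum(x_t^2 + 0.1*u_t^2) plus terminal x^2."""
    x, cost = x0, 0.0
    for u in us:
        cost += x**2 + 0.1 * u**2
        x = a * x + b * u
    return cost + x**2

def em_optimize(horizon=10, n_samples=200, n_iters=30, seed=0):
    rng = np.random.default_rng(seed)
    mean = np.zeros(horizon)       # policy mean control sequence
    std = 2.0 * np.ones(horizon)   # control covariance (exploration)
    for _ in range(n_iters):
        U = mean + std * rng.standard_normal((n_samples, horizon))
        costs = np.array([rollout_cost(u) for u in U])
        # E-step: normalized exp(-cost) responsibilities
        w = np.exp(-(costs - costs.min()) / (costs.std() + 1e-8))
        w /= w.sum()
        # M-step: weighted ML refit of the Gaussian policy
        mean = w @ U
        std = np.sqrt(w @ (U - mean) ** 2) + 1e-3
    return mean, rollout_cost(mean)

mean, final_cost = em_optimize()
print(final_cost)  # far below the uncontrolled cost rollout_cost(zeros)
```

Each EM iteration monotonically tightens a lower bound on the likelihood of trajectory optimality, which is the mechanism behind the cumulative cost-to-go reduction the abstract claims for SOC-EM.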
