Matryoshka Policy Gradient for Entropy-Regularized RL: Convergence and Global Optimality



Authors: François Ged, Maria Han Veiga


Abstract: A novel Policy Gradient (PG) algorithm, called $\textit{Matryoshka Policy Gradient}$ (MPG), is introduced and studied in the context of fixed-horizon max-entropy reinforcement learning, where an agent aims to maximize entropy bonuses in addition to its cumulative rewards. In the linear function approximation setting with softmax policies, we prove uniqueness of, and characterize, the optimal policy of the entropy-regularized objective, together with global convergence of MPG. These results are proved for continuous state and action spaces. MPG is intuitive and theoretically sound, and we further show that the optimal policy of the infinite-horizon max-entropy objective can be approximated arbitrarily well by the optimal policy of the MPG framework. Finally, we provide a criterion for global optimality when the policy is parametrized by a neural network, stated in terms of the neural tangent kernel at convergence. As a proof of concept, we evaluate MPG numerically on standard test benchmarks.
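The sketch below is not the MPG algorithm itself (the paper's nested-horizon construction is not reproduced here); it is only a minimal numpy illustration of the setting the abstract describes: a fixed-horizon MDP, a softmax policy under linear (one-hot) function approximation, and a REINFORCE-style update on the entropy-regularized return. The toy MDP, the temperature tau, the learning rate, and all sizes are hypothetical choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fixed-horizon MDP (all sizes hypothetical): 3 states, 2 actions, horizon 4.
n_states, n_actions, horizon, tau = 3, 2, 4, 0.1   # tau: entropy temperature
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # transition kernel
R = rng.normal(size=(n_states, n_actions))                        # reward table

# Linear function approximation with one-hot (state, action) features:
# the logits are just a table theta[s, a], and the policy is its softmax.
theta = np.zeros((n_states, n_actions))

def policy(s):
    logits = theta[s]
    p = np.exp(logits - logits.max())
    return p / p.sum()

def rollout():
    """Sample one trajectory; accumulate the entropy-regularized return and score."""
    s, G, grad = 0, 0.0, np.zeros_like(theta)
    for _ in range(horizon):
        p = policy(s)
        a = rng.choice(n_actions, p=p)
        # Entropy bonus added on top of the reward, as in max-entropy RL.
        G += R[s, a] + tau * (-np.log(p[a]))
        # Score function of the softmax policy: e_a - pi(.|s).
        g = -p.copy()
        g[a] += 1.0
        grad[s] += g
        s = rng.choice(n_states, p=P[s, a])
    return G, grad

# REINFORCE-style update treating the entropy bonus as part of the reward
# (a crude estimator: it ignores the direct dependence of the bonus on theta).
lr = 0.05
for _ in range(2000):
    G, grad = rollout()
    theta += lr * G * grad
```

With the entropy bonus present, the learned softmax policy stays stochastic rather than collapsing to a deterministic one, which is the qualitative behavior the max-entropy objective in the abstract is designed to produce.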

Submission history

From: Maria Han Veiga
[v1]
Wed, 22 Mar 2023 17:56:18 UTC (32 KB)
[v2]
Sun, 25 Jun 2023 10:35:31 UTC (449 KB)
[v3]
Mon, 7 Oct 2024 20:41:29 UTC (282 KB)


