A Fisher-Rao gradient flow for entropy-regularised Markov decision processes in Polish spaces

Authors: Bekzhan Kerimkulov and 4 other authors

Abstract: We study the global convergence of a Fisher-Rao policy gradient flow for infinite-horizon entropy-regularised Markov decision processes with Polish state and action spaces. The flow is a continuous-time analogue of a policy mirror descent method. We establish the global well-posedness of the gradient flow and demonstrate its exponential convergence to the optimal policy. Moreover, we prove that the flow is stable with respect to gradient evaluation, offering insights into the performance of a natural policy gradient flow with log-linear policy parameterisation. To overcome challenges stemming from the lack of convexity of the objective function and the discontinuity arising from the entropy regulariser, we leverage the performance difference lemma and the duality relationship between the gradient and mirror descent flows. Our analysis provides a theoretical foundation for developing various discrete policy gradient algorithms.
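For context, a minimal sketch of the kind of flow the abstract describes. This is not taken from the paper: the notation ($Q_\tau$, $\tau$, $\gamma$, $\mathcal{A}$) and the reward-maximisation sign convention are assumptions made here for illustration.

```latex
% Hedged sketch; notation assumed, not from the paper.
% Entropy-regularised value function with discount \gamma and temperature \tau > 0:
V_\tau^{\pi}(s) \;=\; \mathbb{E}^{\pi}\Big[\sum_{t=0}^{\infty}\gamma^{t}
  \big(r(s_t,a_t) - \tau \log \pi(a_t \mid s_t)\big) \,\Big|\, s_0 = s\Big].

% A Fisher-Rao (natural) gradient flow of the policy, written pointwise in (s,a):
\partial_t \pi_t(a \mid s) \;=\; \pi_t(a \mid s)\Big(
  Q_\tau^{\pi_t}(s,a) - \tau \log \pi_t(a \mid s)
  \;-\; \int_{\mathcal{A}} \big(Q_\tau^{\pi_t}(s,a') - \tau \log \pi_t(a' \mid s)\big)
        \,\pi_t(\mathrm{d}a' \mid s)\Big).
```

Subtracting the $\pi_t$-average in the second equation keeps $\pi_t(\cdot \mid s)$ a probability measure along the flow; a flow of this multiplicative form is what is typically meant by the continuous-time analogue of KL-based policy mirror descent mentioned in the abstract.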

Submission history

From: Yufei Zhang [view email]
[v1]
Wed, 4 Oct 2023 16:41:36 UTC (50 KB)
[v2]
Thu, 5 Dec 2024 16:35:46 UTC (78 KB)


