Selective Uncertainty Propagation in Offline RL

by Sanath Kumar Krishnamurthy and 5 other authors

Abstract: We consider the finite-horizon offline reinforcement learning (RL) setting and are motivated by the challenge of learning the policy at any step h in dynamic programming (DP) algorithms. To learn this policy, it is sufficient to evaluate the treatment effect of deviating from the behavioral policy at step h after having optimized the policy for all future steps. Since the policy at any step can affect next-state distributions, the related distributional shift challenges can make this problem far more statistically hard than estimating such treatment effects in the stochastic contextual bandit setting. However, the hardness of many real-world RL instances lies between the two regimes. We develop a flexible and general method called selective uncertainty propagation for confidence interval construction that adapts to the hardness of the associated distributional shift challenges. We illustrate the benefits of our approach on toy environments and demonstrate the value of these techniques for offline policy learning.
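To make the setting concrete, below is a minimal, purely illustrative sketch of per-step pessimistic policy improvement in finite-horizon backward dynamic programming: at each step h the learner deviates from the estimated behavioral action only when a lower confidence bound on the advantage of deviating is positive. This is not the paper's selective uncertainty propagation algorithm; the dataset layout, the function and parameter names (learn_policy, dataset, n_states, n_actions, horizon, delta), and the generic confidence-interval width are all assumptions made for illustration.

```python
import numpy as np

# Illustrative sketch only (not the paper's algorithm): tabular,
# finite-horizon backward dynamic programming on an offline dataset.
# At each step h we deviate from the estimated behavioral action only
# when a lower confidence bound on the advantage of deviating is positive.

def learn_policy(dataset, n_states, n_actions, horizon, delta=0.05):
    """dataset[h] is a list of (state, action, reward, next_state) tuples."""
    policy = np.zeros((horizon, n_states), dtype=int)
    V_next = np.zeros(n_states)  # value of the learned policy from step h+1 on

    for h in reversed(range(horizon)):
        q_sum = np.zeros((n_states, n_actions))
        q_sumsq = np.zeros((n_states, n_actions))
        counts = np.zeros((n_states, n_actions))

        # Regression targets: immediate reward plus the value of the already
        # optimized future policy (the "after having optimized the policy for
        # all future steps" part of the abstract).
        for s, a, r, s_next in dataset[h]:
            target = r + V_next[s_next]
            q_sum[s, a] += target
            q_sumsq[s, a] += target ** 2
            counts[s, a] += 1

        n = np.maximum(counts, 1)
        q_hat = q_sum / n
        var_hat = np.maximum(q_sumsq / n - q_hat ** 2, 0.0)
        # Crude per-(s, a) confidence width; the paper's construction is more
        # refined and adapts to the hardness of the distributional shift.
        width = np.sqrt(2.0 * var_hat * np.log(2.0 / delta) / n) + 1.0 / n

        V_h = np.zeros(n_states)
        for s in range(n_states):
            if counts[s].sum() == 0:
                continue  # no data for this state: keep default action, value 0
            b = int(np.argmax(counts[s]))   # plug-in estimate of behavioral action
            g = int(np.argmax(q_hat[s]))    # greedy candidate action
            lcb_advantage = (q_hat[s, g] - width[s, g]) - (q_hat[s, b] + width[s, b])
            policy[h, s] = g if lcb_advantage > 0 else b
            V_h[s] = q_hat[s, policy[h, s]]
        V_next = V_h

    return policy
```

The one-step structure mirrors the abstract's observation that, once all future steps are fixed, learning the policy at step h reduces to estimating the treatment effect of deviating from the behavioral policy at that step; the confidence widths above are generic and do not adapt to distribution shift the way the paper's method does.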

Submission history

From: Sanath Kumar Krishnamurthy
[v1] Wed, 1 Feb 2023 07:31:25 UTC (1,541 KB)
[v2] Mon, 12 Feb 2024 19:35:55 UTC (1,203 KB)
[v3] Thu, 19 Dec 2024 06:52:07 UTC (1,045 KB)


