Suppressing Overestimation in Q-Learning through Adversarial Behaviors

Authors: HyeAnn Lee and one other author

Abstract: The goal of this paper is to propose a new Q-learning algorithm with a dummy adversarial player, called dummy adversarial Q-learning (DAQ), that can effectively regulate the overestimation bias in standard Q-learning. With the dummy player, the learning can be formulated as a two-player zero-sum game. The proposed DAQ unifies several Q-learning variations that control overestimation bias, such as maxmin Q-learning and minmax Q-learning (proposed in this paper), in a single framework. DAQ is a simple yet effective way to suppress the overestimation bias through dummy adversarial behaviors, and it can easily be applied to off-the-shelf reinforcement learning algorithms to improve their performance. A finite-time convergence of DAQ is analyzed from an integrated perspective by adapting an adversarial Q-learning analysis. The performance of DAQ is demonstrated empirically in various benchmark environments.
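The abstract does not spell out the update rule, but maxmin Q-learning, one of the special cases DAQ is said to unify, is easy to sketch. Below is a minimal tabular sketch (the array shapes, function names, and hyperparameters are illustrative assumptions, not the paper's formulation): the elementwise minimum over an ensemble of Q-tables plays the role of the pessimistic "adversarial" step that counteracts overestimation, and the learner then maximizes over actions to form the bootstrap target. The paper's dummy-player, zero-sum game formulation itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def maxmin_target(Q_ensemble, s_next, r, gamma=0.99):
    """Maxmin bootstrap target: pessimistic min over ensemble members,
    then greedy max over actions. Q_ensemble has shape (N, S, A)."""
    Q_min = np.min(Q_ensemble[:, s_next, :], axis=0)  # min over the N Q-tables
    return r + gamma * np.max(Q_min)                  # max over actions

def daq_step(Q_ensemble, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One update step: move a randomly chosen ensemble member's
    Q(s, a) estimate toward the shared maxmin target."""
    k = rng.integers(len(Q_ensemble))
    target = maxmin_target(Q_ensemble, s_next, r, gamma)
    Q_ensemble[k, s, a] += alpha * (target - Q_ensemble[k, s, a])
    return Q_ensemble

# Usage: 4 states, 2 actions, an ensemble of 3 Q-tables.
Q = np.zeros((3, 4, 2))
Q = daq_step(Q, s=0, a=1, r=1.0, s_next=2)
```

Because the target takes a minimum before the maximum, the greedy bootstrap value is biased downward, which is the mechanism by which this family of methods suppresses the overestimation that plain max-based Q-learning exhibits.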

Submission history

From: HyeAnn Lee
[v1] Tue, 10 Oct 2023 03:46:32 UTC (2,919 KB)
[v2] Mon, 26 Feb 2024 07:20:34 UTC (2,644 KB)
[v3] Sat, 28 Sep 2024 07:47:15 UTC (3,264 KB)


