Preference-Based Multi-Agent Reinforcement Learning: Data Coverage and Algorithmic Techniques

Abstract: We initiate the study of Preference-Based Multi-Agent Reinforcement Learning (PbMARL), exploring both theoretical foundations and empirical validation. We define the task as identifying the Nash equilibrium of a general-sum game from a preference-only offline dataset, a problem marked by the challenge of sparse feedback signals. Our theory establishes upper complexity bounds for identifying the Nash equilibrium in PbMARL, demonstrating that single-policy coverage is inadequate and highlighting the importance of unilateral dataset coverage. These theoretical insights are verified through comprehensive experiments. To enhance practical performance, we further introduce two algorithmic techniques. (1) We propose Mean Squared Error (MSE) regularization along the time axis to achieve a more uniform reward distribution and improve reward learning outcomes. (2) We propose an additional penalty based on the distribution of the dataset to incorporate pessimism, improving stability and effectiveness during training. Our findings underscore the multifaceted approach required for PbMARL, paving the way for effective preference-based multi-agent systems.
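The abstract only names the two techniques; below is a minimal sketch of how they might combine in a single reward-learning objective, assuming a per-step reward model trained with a Bradley-Terry preference loss. All names here (reward_model, visit_prob_a, the lambda weights) are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch of the two techniques named in the abstract;
# function and variable names are illustrative, not the authors' code.
import torch
import torch.nn.functional as F

def reward_learning_loss(reward_model, traj_a, traj_b, pref,
                         visit_prob_a=None, visit_prob_b=None,
                         lam_mse=0.1, lam_pess=0.1):
    # Per-step rewards along each trajectory: shape (batch, T).
    r_a = reward_model(traj_a)
    r_b = reward_model(traj_b)

    # Bradley-Terry preference loss on trajectory returns:
    # pref = 1.0 when trajectory A is preferred, else 0.0.
    logits = r_a.sum(dim=-1) - r_b.sum(dim=-1)
    bt_loss = F.binary_cross_entropy_with_logits(logits, pref)

    # (1) MSE regularization along the time axis: penalize per-step
    # rewards that deviate from the trajectory mean, encouraging a
    # more uniform reward distribution over time.
    mse_reg = ((r_a - r_a.mean(dim=-1, keepdim=True)) ** 2).mean() + \
              ((r_b - r_b.mean(dim=-1, keepdim=True)) ** 2).mean()

    # (2) Pessimism penalty from the dataset distribution: down-weight
    # learned rewards on state-action pairs the offline dataset covers
    # poorly (low empirical visitation probability).
    pess = 0.0
    if visit_prob_a is not None and visit_prob_b is not None:
        pess = ((1.0 - visit_prob_a) * r_a).mean() + \
               ((1.0 - visit_prob_b) * r_b).mean()

    return bt_loss + lam_mse * mse_reg + lam_pess * pess
```

One plausible reading of the design: the preference signal only constrains trajectory-level return differences, so the time-axis MSE term resolves the ambiguity in how credit is spread across steps, while the coverage-based penalty keeps the learned reward pessimistic where the offline data is thin.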

Submission history

From: Natalia Zhang
[v1]
Sun, 1 Sep 2024 13:14:41 UTC (3,552 KB)
[v2]
Wed, 4 Sep 2024 15:50:40 UTC (3,550 KB)
[v3]
Thu, 9 Jan 2025 11:24:44 UTC (3,556 KB)


